https://en.wikipedia.org/wiki/Medial%20graph
In the mathematical discipline of graph theory, the medial graph of a plane graph G is another graph M(G) that represents the adjacencies between edges in the faces of G. Medial graphs were introduced in 1922 by Ernst Steinitz to study combinatorial properties of convex polyhedra, although the inverse construction was already used by Peter Tait in 1877 in his foundational study of knots and links.
Formal definition
Given a connected plane graph G, its medial graph M(G) has
a vertex for each edge of G and
an edge between two vertices for each face of G in which their corresponding edges occur consecutively.
The medial graph of a disconnected graph is the disjoint union of the medial graphs of each connected component. The definition of medial graph also extends without modification to graph embeddings on surfaces of higher genus.
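The definition is mechanical enough to compute directly from a combinatorial embedding. Below is a minimal Python sketch, assuming the plane graph is given as a rotation system (for each vertex, the counterclockwise cyclic order of its neighbors) and that it is simple and connected; the function names are illustrative, not from any particular library.

```python
def faces(rotation):
    """Trace the face boundaries of a plane graph.

    rotation: dict mapping each vertex to the cyclic (counterclockwise)
    list of its neighbors.  Returns each face as a list of darts
    (directed edges)."""
    succ = {}  # succ[(v, u)] = neighbor after u in the rotation at v
    for v, nbrs in rotation.items():
        for i, u in enumerate(nbrs):
            succ[(v, u)] = nbrs[(i + 1) % len(nbrs)]
    seen, result = set(), []
    for start in succ:
        if start in seen:
            continue
        face, dart = [], start
        while dart not in seen:
            seen.add(dart)
            face.append(dart)
            u, v = dart
            dart = (v, succ[(v, u)])  # turn the corner at v
        result.append(face)
    return result

def medial_graph(rotation):
    """Vertices of M(G) are the (undirected) edges of G; each pair of
    edges appearing consecutively on a face boundary contributes one
    medial edge, so the result may be a multigraph."""
    medial_edges = []
    for face in faces(rotation):
        k = len(face)
        for i in range(k):
            e1 = frozenset(face[i])
            e2 = frozenset(face[(i + 1) % k])
            medial_edges.append((e1, e2))
    return medial_edges

# A triangle: its medial graph is again a triangle with every edge
# doubled (6 medial edges on 3 vertices, 4-regular, as expected).
triangle = {1: [2, 3], 2: [3, 1], 3: [1, 2]}
print(medial_graph(triangle))
```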
Properties
The medial graph of any plane graph is a 4-regular plane graph.
For any plane graph G, the medial graph of G and the medial graph of the dual graph of G are isomorphic. Conversely, for any 4-regular plane graph H, the only two plane graphs with medial graph H are dual to each other.
Since the medial graph depends on a particular embedding, the medial graph of a planar graph is not unique; the same planar graph can have non-isomorphic medial graphs. For example, two embeddings of one planar graph can yield medial graphs that are distinguished as follows: the two vertices with self-loops share an edge in one medial graph but not in the other.
Every 4-regular plane graph is the medial graph of some plane graph. For a connected 4-regular plane graph H, a planar graph G with H as its medial graph can be constructed as follows. Color the faces of H with just two colors, which is possible since H is Eulerian (and thus the dual graph of H is bipartite). The vertices in G correspond to the faces of a single color in H. These vertices are connected by an edge for each vertex shared by their corresponding faces in H. Note that performing this construction using the faces of the other color as the vertices produces the dual graph of G.
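A sketch of this inverse construction in the same spirit, assuming the faces of H have already been traced and two-colored, and that the black faces are supplied as boundary cycles with one entry per corner (an input format chosen here for brevity):

```python
from collections import defaultdict

def graph_from_medial(black_faces):
    """black_faces: the face boundary cycles of a connected 4-regular
    plane graph H restricted to one color class; each cycle lists the
    H-vertices of its corners in order.  Returns the edges of a plane
    graph G whose medial graph is H (vertices of G = face indices)."""
    corners = defaultdict(list)
    for i, cycle in enumerate(black_faces):
        for v in cycle:          # one entry per corner of the face at v
            corners[v].append(i)
    # Around each degree-4 vertex of H the face colors alternate, so
    # every vertex collects exactly two black corners; those two black
    # faces become the endpoints of an edge of G (a self-loop if they
    # coincide).
    return [tuple(fs) for fs in corners.values()]
```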
The medial graph of a 3-regular plane graph coincides with its line graph. However, this is not true for medial graphs of plane graphs that have vertices of degree greater than three.
Applications
For a plane graph G, twice the evaluation of the Tutte polynomial at the point (3,3) equals the sum over weighted Eulerian orientations in the medial graph of G, where the weight of an orientation is 2 to the number of saddle vertices of the orientation (that is, the number of vertices with incident edges cyclically ordered "in, out, in, out"). Since the Tutte polynomial is an invariant of the underlying graph rather than of the embedding, this result shows that the medial graphs of any two embeddings of the same graph have the same sum of weighted Eulerian orientations.
Directed medial graph
The medial graph definition can be extended to include an orientation. First, the faces of the medial graph are colored black if they contain a vertex of the original graph and white otherwise. This coloring causes each edge of the medial graph to be bordered by one black face and one white face. Then each edge is oriented so that the black face is on its left.
A plane graph and its dual do not have the same directed medial graph; their directed medial graphs are the transpose of each other.
Using the directed medial graph, one can effectively generalize the result on evaluations of the Tutte polynomial at (3,3). For a plane graph G, n times the evaluation of the Tutte polynomial at the point (n+1,n+1) equals the weighted sum over all edge colorings using n colors in the directed medial graph of G such that each (possibly empty) set of monochromatic edges forms a directed Eulerian graph, where the weight of a coloring is 2 to the number of monochromatic vertices.
See also
Rectification (geometry) - the equivalent operation on polyhedra
https://en.wikipedia.org/wiki/List%20of%20severe%20weather%20phenomena
Severe weather phenomena are weather conditions that are hazardous to human life and property.
Severe weather can occur under a variety of situations, but three characteristics are generally needed: a temperature or moisture boundary, moisture, and (in the case of severe precipitation-based events) instability in the atmosphere.
Examples
Atmospheric
Fog
Haar (fog)
Ice fog
Electrical storms
Thunderstorm
Derecho
Multicellular thunderstorm
Pulse storm
Squall line
Storm cell (single-cell)
Supercells, rotating thunderstorms
Lightning
Fire
Wildfire or bushfire (wildfires are sometimes ignited by lightning strikes, especially in "dry thunderstorms")
Firestorm
Fire whirl, also called firenado and fire tornado
Flood
Floods
Flash flood
Coastal flooding
Tidal flooding
Storm surge
Oceans and bodies of water
Harmful algal bloom
Blue-green algae
Red tide
High seas
Sneaker wave
High tides
King tide
Ice shove
Rogue wave
Seiche
Swell (ocean)
Tidal surge
Storm surge
Rip currents
Undertow (water waves)
Whirlpools
Snow
Avalanche
Blizzard
Lake effect snow
Snownado
Snow devil
Polar vortex
Ice
Black ice
Glaze ice
Hailstorm
Ice shove
Ice storm
Megacryometeor
Rain
Acid rain
Blood rain
Cold drop (Spanish: gota fría; archaic as a meteorological term), colloquially, any high-impact rainfall event along the Mediterranean coast of Spain
Drought, a prolonged water supply shortage, often caused by persistent lack of, or much reduced, rainfall
Floods
Flash flood
Rainstorm
Red rain in Kerala (for related phenomena, see Blood rain)
Monsoon
Surface movement
Avalanche
Mass wasting and landslips
Landslide
Debris flows
Mudslide
Rockfall
Coastal erosion
Sinkhole
Temperature
Cold wave
Heat wave
Heat burst
Polar vortex
Wind
Cyclones
Extratropical cyclone
European windstorms
Australian East Coast Low
"Medicane", Mediterranean tropical-like cyclones
Polar cyclone
Tropical cyclone, also called a hurricane, typhoon, or just "cyclone"
Subtropical cyclone
Australian east coast low
Explosive cyclogenesis or weather bomb
Dust storm
Haboob
Dust devil
Sandstorm
Hurricane
Katabatic winds
Bohemian wind
Bora
Piteraq
Gregale
Anabatic wind
Valley exit jet
Santa Ana winds
Williwaws
Chinook
Gale
Monsoon
Nor’easter
Nor'westers
Steam devil
Squall
Straight-line winds
Derecho
Tornado (also colloquially referred to as a "whirlwind" or "twister")
Landspout
Gustnado, a "gust front tornado"
Waterspout
Winter storms
Wind gust
Windstorm
Gust front
Other
Heat lightning
Zud, widespread livestock death, mainly by starvation, caused by climatic conditions
Hayfever
Asthma
Some related meteorological terms: weather front, gust front, bow echo, atmospheric river
Phenomena caused by severe thunderstorms
Excessive lightning
Derecho
Extreme wind (70 mph or greater)
Downpours
Heavy rain
Flood, flash flood, coastal flooding
Hail
High winds – 93 km/h (58 mph) or higher.
Lightning
Thundersnow, Snowsquall
Tornado
Windstorm (gradient pressure induced)
Severe thunderstorm (hailstorm, downburst: microbursts and macrobursts)
Severe weather caused by humans
Air pollution
See also
Extreme weather
List of weather-related phenomena
Meteorology
Severe weather terminology (United States)
Space weather
https://en.wikipedia.org/wiki/Grupo%20de%20Astronom%C3%ADa%20y%20Ciencias%20del%20Espacio
The Grupo de Astronomía y Ciencias del Espacio (Group of Astronomy and Space Sciences, GACE-UV) is an astrophysics research group, part of the Image Processing Laboratory (IPL) at the University of Valencia.
It is located in the Parc Científic in Paterna, Valencia.
See also
List of astronomical societies
https://en.wikipedia.org/wiki/Decrement%20table
Decrement tables, also called life table methods, are used to calculate the probability of certain events.
Birth control
Life table methods are often used to study birth control effectiveness. In this role, they are an alternative to the Pearl Index.
As used in birth control studies, a decrement table calculates a separate effectiveness rate for each month of the study, as well as for a standard period of time (usually 12 months). Use of life table methods eliminates time-related biases (i.e. the most fertile couples getting pregnant and dropping out of the study early, and couples becoming more skilled at using the method as time goes on), and in this way is superior to the Pearl Index.
Two kinds of decrement tables are used to evaluate birth control methods. Multiple-decrement (or competing) tables report net effectiveness rates. These are useful for comparing competing reasons for couples dropping out of a study. Single-decrement (or noncompeting) tables report gross effectiveness rates, which can be used to accurately compare one study to another.
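As a concrete sketch of the single-decrement calculation, the following Python snippet derives monthly failure rates and a cumulative 12-month rate from per-month counts; the function name and sample numbers are invented for illustration:

```python
def life_table_failure(months):
    """months: list of (women_at_risk, pregnancies) per study month.
    Returns (monthly_rates, cumulative_failure), treating each month
    as its own decrement interval."""
    monthly_rates = []
    surviving = 1.0  # probability of no pregnancy so far
    for at_risk, pregnancies in months:
        q = pregnancies / at_risk      # failure probability this month
        monthly_rates.append(q)
        surviving *= 1.0 - q
    return monthly_rates, 1.0 - surviving

# Hypothetical 12-month study; couples who drop out for other reasons
# simply shrink the at-risk count in later months.
data = [(1000, 4), (980, 3), (950, 3), (930, 2), (900, 2), (880, 2),
        (860, 1), (840, 2), (820, 1), (800, 1), (790, 1), (780, 1)]
rates, failure_12mo = life_table_failure(data)
print(f"12-month failure rate: {failure_12mo:.2%}")  # ~2.5%
```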
See also
Survival analysis
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance
Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a strong constant magnetic field are disturbed by a weak oscillating magnetic field (in the near field) and respond by producing an electromagnetic signal with a frequency characteristic of the magnetic field at the nucleus. This process occurs near resonance, when the oscillation frequency matches the intrinsic frequency of the nuclei, which depends on the strength of the static magnetic field, the chemical environment, and the magnetic properties of the isotope involved; in practical applications with static magnetic fields up to ca. 20 tesla, the frequency is similar to VHF and UHF television broadcasts (60–1000 MHz). NMR results from specific magnetic properties of certain atomic nuclei. High-resolution nuclear magnetic resonance spectroscopy is widely used to determine the structure of organic molecules in solution and study molecular physics and crystals as well as non-crystalline materials. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). The original application of NMR to condensed matter physics is nowadays mostly devoted to strongly correlated electron systems. It reveals large many-body couplings by fast broadband detection and should not be confused with solid-state NMR, which aims at removing the effect of the same couplings by magic angle spinning techniques.
The most commonly used nuclei are 1H and 13C, although isotopes of many other elements, such as 19F, 31P, and 15N, can be studied by high-field NMR spectroscopy as well. In order to interact with the magnetic field in the spectrometer, the nucleus must have an intrinsic angular momentum and nuclear magnetic dipole moment. This occurs when an isotope has a nonzero nuclear spin, meaning an odd number of protons and/or neutrons (see Isotope). Nuclides with even numbers of both have a total spin of zero and are therefore not NMR-active.
In its application to molecules the NMR effect can be observed only in the presence of a static magnetic field. However, in the ordered phases of magnetic materials, very large internal fields are produced at the nuclei of magnetic ions (and of close ligands), which allow NMR to be performed in zero applied field. Additionally, radio-frequency transitions of nuclear spin I > 1/2 with large enough electric quadrupolar coupling to the electric field gradient at the nucleus may also be excited in zero applied magnetic field (nuclear quadrupole resonance).
In the dominant chemistry application, the use of higher fields improves the sensitivity of the method (signal-to-noise ratio scales approximately as the 3/2 power of the magnetic field strength) and the spectral resolution. Commercial NMR spectrometers employing liquid helium cooled superconducting magnets with fields of up to 28 tesla have been developed and are widely used.
It is a key feature of NMR that the resonance frequency of nuclei in a particular sample substance is usually directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonance frequencies of the sample's nuclei depend on where in the field they are located. This effect serves as the basis of magnetic resonance imaging.
The principle of NMR usually involves three sequential steps:
The alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B0.
The perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio frequency (RF) pulse. The oscillation frequency required for significant perturbation is dependent upon the static magnetic field (B0) and the nuclei of observation.
The detection of the NMR signal during or after the RF pulse, due to the voltage induced in a detection coil by precession of the nuclear spins around B0. After an RF pulse, precession usually occurs with the nuclei's Larmor frequency and, in itself, does not involve transitions between spin states or energy levels.
The two magnetic fields are usually chosen to be perpendicular to each other as this maximizes the NMR signal strength. The frequencies of the time-signal response by the total magnetization (M) of the nuclear spins are analyzed in NMR spectroscopy and magnetic resonance imaging. Both use applied magnetic fields (B0) of great strength, usually produced by large currents in superconducting coils, in order to achieve dispersion of response frequencies and of very high homogeneity and stability in order to deliver spectral resolution, the details of which are described by chemical shifts, the Zeeman effect, and Knight shifts (in metals). The information provided by NMR can also be increased using hyperpolarization, and/or using two-dimensional, three-dimensional and higher-dimensional techniques.
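The following closed-form sketch illustrates steps 2 and 3 numerically: after an ideal 90° pulse, the detected transverse component oscillates at the (offset) precession frequency and decays with T2, while Mz recovers with T1. All values are invented for illustration.

```python
import numpy as np

# Free precession after an ideal 90-degree pulse (closed-form solution
# of the Bloch equations with relaxation; values invented):
#   Mx(t) = M0 * exp(-t/T2) * cos(w0 * t)   <- detected FID component
#   Mz(t) = M0 * (1 - exp(-t/T1))           <- recovery toward equilibrium
M0, T1, T2 = 1.0, 1.0, 0.1        # arbitrary units; seconds
w0 = 2 * np.pi * 7.0              # small offset frequency, rad/s

t = np.linspace(0.0, 0.5, 6)
Mx = M0 * np.exp(-t / T2) * np.cos(w0 * t)
Mz = M0 * (1.0 - np.exp(-t / T1))
for ti, x, z in zip(t, Mx, Mz):
    print(f"t = {ti:.1f} s   Mx = {x:+.3f}   Mz = {z:.3f}")
```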
NMR phenomena are also utilized in low-field NMR, NMR spectroscopy and MRI in the Earth's magnetic field (referred to as Earth's field NMR), and in several types of magnetometers.
History
Nuclear magnetic resonance was first described and measured in molecular beams by Isidor Rabi in 1938, by extending the Stern–Gerlach experiment, and in 1944, Rabi was awarded the Nobel Prize in Physics for this work. In 1946, Felix Bloch and Edward Mills Purcell expanded the technique for use on liquids and solids, for which they shared the Nobel Prize in Physics in 1952.
Russell H. Varian filed the patent "Method and means for correlating nuclear properties of atoms and magnetic fields" on October 21, 1948; it was granted on July 24, 1951. Varian Associates developed the first NMR unit, called NMR HR-30, in 1952.
Purcell had worked on the development of radar during World War II at the Massachusetts Institute of Technology's Radiation Laboratory. His work during that project on the production and detection of radio frequency power and on the absorption of such RF power by matter laid the foundation for his discovery of NMR in bulk matter.
Rabi, Bloch, and Purcell observed that magnetic nuclei, like and , could absorb RF energy when placed in a magnetic field and when the RF was of a frequency specific to the identity of the nuclei. When this absorption occurs, the nucleus is described as being in resonance. Different atomic nuclei within a molecule resonate at different (radio) frequencies in the same applied static magnetic field, due to various local magnetic fields. The observation of such magnetic resonance frequencies of the nuclei present in a molecule makes it possible to determine essential chemical and structural information about the molecule.
The improvements of the NMR method benefited from the development of electromagnetic technology and advanced electronics and their introduction into civilian use. Originally as a research tool it was limited primarily to dynamic nuclear polarization, by the work of Anatole Abragam and Albert Overhauser, and to condensed matter physics, where it produced one of the first demonstrations of the validity of the BCS theory of superconductivity by the observation by Charles Slichter of the Hebel-Slichter effect. It soon showed its potential in organic chemistry, where NMR has become indispensable, and by the 1990s improvement in the sensitivity and resolution of NMR spectroscopy resulted in its broad use in analytical chemistry, biochemistry and materials science.
In the 2020s zero- to ultralow-field nuclear magnetic resonance (ZULF NMR), a form of spectroscopy that provides abundant analytical results without the need for large magnetic fields, was developed. It is combined with a special technique that makes it possible to hyperpolarize atomic nuclei.
Theory of nuclear magnetic resonance
Nuclear spins and magnets
All nucleons, that is neutrons and protons, composing any atomic nucleus, have the intrinsic quantum property of spin, an intrinsic angular momentum analogous to the classical angular momentum of a spinning sphere. The overall spin of the nucleus is determined by the spin quantum number S. If the numbers of both the protons and neutrons in a given nuclide are even, then S = 0, i.e. there is no overall spin. Then, just as electrons pair up in nondegenerate atomic orbitals, so do even numbers of protons or even numbers of neutrons (both of which are also spin-1/2 particles and hence fermions), giving zero overall spin.
However, an unpaired proton and unpaired neutron will have a lower energy when their spins are parallel, not anti-parallel. This parallel spin alignment of distinguishable particles does not violate the Pauli exclusion principle. The lowering of energy for parallel spins has to do with the quark structure of these two nucleons. As a result, the spin ground state for the deuteron (the nucleus of deuterium, the 2H isotope of hydrogen), which has only a proton and a neutron, corresponds to a spin value of 1, not of zero. On the other hand, because of the Pauli exclusion principle, the tritium isotope of hydrogen must have a pair of anti-parallel spin neutrons (of total spin zero for the neutron spin-pair), plus a proton of spin 1/2. Therefore, the tritium total nuclear spin value is again 1/2, just like that of the simpler, abundant hydrogen isotope, the 1H nucleus (the proton). The NMR absorption frequency for tritium is also similar to that of 1H. In many other cases of non-radioactive nuclei, the overall spin is also non-zero and may have a contribution from the orbital angular momentum of the unpaired nucleon. For example, the 27Al nucleus has an overall spin value S = 5/2.
A non-zero spin is associated with a non-zero magnetic dipole moment, μ, via the relation μ = γS, where γ is the gyromagnetic ratio. Classically, this corresponds to the proportionality between the angular momentum and the magnetic dipole moment of a spinning charged sphere, both of which are vectors parallel to the rotation axis whose length increases proportional to the spinning frequency. It is the magnetic moment and its interaction with magnetic fields that allows the observation of the NMR signal associated with transitions between nuclear spin levels during resonant RF irradiation or caused by Larmor precession of the average magnetic moment after resonant irradiation. Nuclides with even numbers of both protons and neutrons have zero nuclear magnetic dipole moment and hence do not exhibit an NMR signal. For instance, 18O is an example of a nuclide that produces no NMR signal, whereas 13C, 31P, 35Cl and 37Cl are nuclides that do exhibit NMR spectra. The last two nuclei have spin S > 1/2 and are therefore quadrupolar nuclei.
Electron spin resonance (ESR) is a related technique in which transitions between electronic rather than nuclear spin levels are detected. The basic principles are similar but the instrumentation, data analysis, and detailed theory are significantly different. Moreover, there is a much smaller number of molecules and materials with unpaired electron spins that exhibit ESR (or electron paramagnetic resonance (EPR)) absorption than those that have NMR absorption spectra. On the other hand, ESR has much higher signal per spin than NMR does.
Values of spin angular momentum
Nuclear spin is an intrinsic angular momentum that is quantized. This means that the magnitude of this angular momentum is quantized (i.e. S can only take on a restricted range of values), and also that the x, y, and z-components of the angular momentum are quantized, being restricted to integer or half-integer multiples of ħ, the reduced Planck constant. The integer or half-integer quantum number associated with the spin component along the z-axis or the applied magnetic field is known as the magnetic quantum number, m, and can take values from +S to −S, in integer steps. Hence for any given nucleus, there are a total of 2S + 1 angular momentum states.
The z-component of the angular momentum vector (Sz) is therefore Sz = mħ. The z-component of the magnetic moment is simply: μz = γSz = γmħ.
Spin energy in a magnetic field
Consider nuclei with a spin of one-half, like 1H, 13C or 19F. Each nucleus has two linearly independent spin states, with m = 1/2 or m = −1/2 (also referred to as spin-up and spin-down, or sometimes α and β spin states, respectively) for the z-component of spin. In the absence of a magnetic field, these states are degenerate; that is, they have the same energy. Hence the number of nuclei in these two states will be essentially equal at thermal equilibrium.
If a nucleus with spin is placed in a magnetic field, however, the two states no longer have the same energy as a result of the interaction between the nuclear magnetic dipole moment and the external magnetic field. The energy of a magnetic dipole moment μ in a magnetic field B0 is given by: E = −μ · B0.
Usually the z-axis is chosen to be along B0, and the above expression reduces to: E = −μzB0,
or alternatively: E = −γmħB0.
As a result, the different nuclear spin states have different energies in a non-zero magnetic field. In less formal language, we can talk about the two spin states of a spin-1/2 nucleus as being aligned either with or against the magnetic field. If γ is positive (true for most isotopes used in NMR) then m = 1/2 ("spin up") is the lower energy state.
The energy difference between the two states is: ΔE = γħB0,
and this results in a small population bias favoring the lower energy state in thermal equilibrium. With more spins pointing up than down, a net spin magnetization along the magnetic field B0 results.
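Putting numbers to this bias, the sketch below evaluates ΔE = γħB0 for protons and the resulting equilibrium polarization tanh(ΔE/2kBT); the field and temperature are chosen for illustration.

```python
import math

hbar = 1.054_571_8e-34   # reduced Planck constant, J*s
k_B = 1.380_649e-23      # Boltzmann constant, J/K
gamma_1H = 2.675_222e8   # proton gyromagnetic ratio, rad/(s*T)

B0, T = 9.4, 298.0       # field (tesla) and temperature (kelvin)

dE = gamma_1H * hbar * B0                     # splitting, joules
polarization = math.tanh(dE / (2 * k_B * T))  # net fractional spin excess
print(f"{polarization:.1e}")                  # ~3.2e-05, i.e. ~32 ppm
```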
Precession of the spin magnetization
A central concept in NMR is the precession of the spin magnetization around the magnetic field at the nucleus, with the angular frequency ω = γB, where ω = 2πν relates to the oscillation frequency ν and B is the magnitude of the field. This means that the spin magnetization, which is proportional to the sum of the spin vectors of nuclei in magnetically equivalent sites (the expectation value of the spin vector in quantum mechanics), moves on a cone around the B field. This is analogous to the precessional motion of the axis of a tilted spinning top around the gravitational field. In quantum mechanics, ω is the Bohr frequency of the Sx and Sy expectation values. Precession of non-equilibrium magnetization in the applied magnetic field B0 occurs with the Larmor frequency ω0 = γB0, without change in the populations of the energy levels because energy is constant (time-independent Hamiltonian).
Magnetic resonance and radio-frequency pulses
A perturbation of nuclear spin orientations from equilibrium will occur only when an oscillating magnetic field is applied whose frequency νrf sufficiently closely matches the Larmor precession frequency νL of the nuclear magnetization. The populations of the spin-up and -down energy levels then undergo Rabi oscillations, which are analyzed most easily in terms of precession of the spin magnetization around the effective magnetic field in a reference frame rotating with the frequency νrf. The stronger the oscillating field, the faster the Rabi oscillations or the precession around the effective field in the rotating frame. After a certain time on the order of 2–1000 microseconds, a resonant RF pulse flips the spin magnetization to the transverse plane, i.e. it makes an angle of 90° with the constant magnetic field B0 ("90° pulse"), while after a twice longer time, the initial magnetization has been inverted ("180° pulse"). It is the transverse magnetization generated by a resonant oscillating field which is usually detected in NMR, during application of the relatively weak RF field in old-fashioned continuous-wave NMR, or after the relatively strong RF pulse in modern pulsed NMR.
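For a feel of the quantities involved, the following sketch (field strengths chosen for illustration) converts a static field into a proton Larmor frequency and derives a 90° pulse length from the RF amplitude via the flip-angle relation θ = γB1t:

```python
import math

gamma_1H = 2.675_222e8        # proton gyromagnetic ratio, rad/(s*T)

B0 = 9.4                      # static field, tesla
larmor_MHz = gamma_1H * B0 / (2 * math.pi) / 1e6
print(f"1H Larmor frequency at {B0} T: {larmor_MHz:.1f} MHz")  # ~400 MHz

B1 = 5.9e-4                   # RF field amplitude, tesla (~25 kHz nutation)
t_90 = (math.pi / 2) / (gamma_1H * B1)   # flip angle theta = gamma*B1*t
print(f"90-degree pulse length: {t_90 * 1e6:.1f} microseconds")  # ~10 us
```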
Chemical shielding
It might appear from the above that all nuclei of the same nuclide (and hence the same γ) would resonate at exactly the same frequency but this is not the case. The most important perturbation of the NMR frequency for applications of NMR is the "shielding" effect of the shells of electrons surrounding the nucleus. Electrons, similar to the nucleus, are also charged and rotate with a spin to produce a magnetic field opposite to the applied magnetic field. In general, this electronic shielding reduces the magnetic field at the nucleus (which is what determines the NMR frequency). As a result, the frequency required to achieve resonance is also reduced.
This shift in the NMR frequency due to the electronic molecular orbital coupling to the external magnetic field is called chemical shift, and it explains why NMR is able to probe the chemical structure of molecules, which depends on the electron density distribution in the corresponding molecular orbitals. If a nucleus in a specific chemical group is shielded to a higher degree by a higher electron density of its surrounding molecular orbitals, then its NMR frequency will be shifted "upfield" (that is, a lower chemical shift), whereas if it is less shielded by such surrounding electron density, then its NMR frequency will be shifted "downfield" (that is, a higher chemical shift).
Unless the local symmetry of such molecular orbitals is very high (leading to "isotropic" shift), the shielding effect will depend on the orientation of the molecule with respect to the external field (B0). In solid-state NMR spectroscopy, magic angle spinning is required to average out this orientation dependence in order to obtain frequency values at the average or isotropic chemical shifts. This is unnecessary in conventional NMR investigations of molecules in solution, since rapid "molecular tumbling" averages out the chemical shift anisotropy (CSA). In this case, the "average" chemical shift (ACS) or isotropic chemical shift is often simply referred to as the chemical shift.
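Chemical shifts are reported in parts per million of the spectrometer frequency, which makes them independent of the field strength. A one-line conversion, with an invented example offset:

```python
def shift_ppm(offset_hz, spectrometer_mhz):
    """Chemical shift in ppm from a frequency offset relative to the
    reference: delta = (nu - nu_ref) / nu_ref * 1e6."""
    return offset_hz / spectrometer_mhz   # Hz divided by MHz gives ppm

# A peak 2923 Hz downfield of the reference on a 400 MHz instrument:
print(shift_ppm(2923, 400))   # ~7.31 ppm (aromatic proton region)
```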
Radiation damping
In 1949, Suryan first suggested that the interaction between a radiofrequency coil and a sample's bulk magnetization could explain why experimental observations of relaxation times differed from theoretical predictions. Building on this idea, Bloembergen and Pound further developed Suryan's hypothesis by mathematically integrating the Maxwell–Bloch equations, a process through which they introduced the concept of "radiation damping."
Radiation damping (RD) in Nuclear Magnetic Resonance (NMR) is an intrinsic phenomenon observed in many high-field NMR experiments, especially relevant in systems with high concentrations of nuclei like protons or fluorine. RD occurs when transverse bulk magnetization from the sample, following a radio frequency pulse, induces an electromagnetic field (emf) in the receiver coil of the NMR spectrometer. This generates an oscillating current and a non-linear induced transverse magnetic field which returns the spin system to equilibrium faster than other mechanisms of relaxation.
RD can result in line broadening and measurement of a shorter spin-lattice relaxation time (T1). For instance, a sample of water in a 400 MHz NMR spectrometer will have a radiation damping time constant of around 20 ms, whereas its T1 is hundreds of milliseconds. This effect is often described using modified Bloch equations that include terms for radiation damping alongside the conventional relaxation terms. The characteristic time constant of radiation damping (TRD) is given by equation [1]:
TRD = 2 / (μ0 γ η Q M0)   [1]
where γ is the gyromagnetic ratio, μ0 is the magnetic permeability, M0 is the equilibrium magnetization per unit volume, η is the filling factor of the probe (the ratio of the sample volume enclosed to the probe coil volume), Q = ω0L/R is the quality factor of the probe, and ω0, L, and R are the resonance frequency, inductance, and resistance of the coil, respectively. The line broadening due to radiation damping can be quantified by measuring TRD and using equation [2]:
Δν1/2 = 1/(π TRD)   [2]
Radiation damping in NMR is influenced significantly by system parameters. It is notably more prominent in systems where the NMR probe possesses a high quality factor () and a high filling factor , resulting in a strong coupling between the probe coil and the sample. The phenomenon is also impacted by the concentration of the nuclei within the sample and their magnetic moments, which can intensify the effects of radiation damping. The strength of the magnetic field is inversely proportional to the lifetime of RD. The impact of radiation damping on NMR signals is multifaceted. It can accelerate the decay of the NMR signal faster than intrinsic relaxation processes would suggest. This acceleration can complicate the interpretation of NMR spectra by causing broadening of spectral lines, distorting multiplet structures, and introducing artifacts, especially in high-resolution NMR scenarios. Such effects make it challenging to obtain clear and accurate data without considering the influence of radiation damping.
To mitigate these effects, various strategies are employed in NMR spectroscopy, based on either hardware or software. Hardware modifications, including RF feed-circuit and Q-factor switches, reduce the feedback loop between the sample magnetization and the electromagnetic field induced by the coil. Software approaches, such as specially designed selective pulse sequences, likewise manage the fields induced by radiation damping. Both kinds of approach aim to control and limit the disruptive effects of radiation damping during NMR experiments, and can eliminate RD to a fairly large extent.
Overall, understanding and managing radiation damping is crucial for obtaining high-quality NMR data, especially in modern high-field spectrometers where the effects can be significant due to the increased sensitivity and resolution.
Relaxation
The process of population relaxation refers to nuclear spins that return to thermodynamic equilibrium in the magnet. This process is also called T1, "spin-lattice" or "longitudinal magnetic" relaxation, where T1 refers to the mean time for an individual nucleus to return to its thermal equilibrium spin state. After the nuclear spin population has relaxed, it can be probed again, since it is in the initial, equilibrium (mixed) state.
The precessing nuclei can also fall out of alignment with each other and gradually stop producing a signal. This is called T2, "spin-spin" or transverse relaxation. Because of the difference in the actual relaxation mechanisms involved (for example, intermolecular versus intramolecular magnetic dipole-dipole interactions), T1 is usually (except in rare cases) longer than T2 (that is, slower spin-lattice relaxation, for example because of smaller dipole-dipole interaction effects). In practice, the value of T2*, which is the actually observed decay time of the observed NMR signal, or free induction decay (to 1/e of the initial amplitude immediately after the resonant RF pulse), also depends on the static magnetic field inhomogeneity, which may be quite significant. (There is also a smaller but significant contribution to the observed FID shortening from the RF inhomogeneity of the resonant pulse.) In the corresponding FT-NMR spectrum (meaning the Fourier transform of the free induction decay), the width of the NMR signal in frequency units is inversely related to the T2* time. Thus, a nucleus with a long T2* relaxation time gives rise to a very sharp NMR peak in the FT-NMR spectrum for a very homogeneous ("well-shimmed") static magnetic field, whereas nuclei with shorter T2* values give rise to broad FT-NMR peaks even when the magnet is shimmed well. Both T1 and T2 depend on the rate of molecular motions as well as the gyromagnetic ratios of both the resonating and their strongly interacting, next-neighbor nuclei that are not at resonance.
A Hahn echo decay experiment can be used to measure the dephasing time, as shown in the animation. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence that is not refocused by the 180° pulse. In simple cases, an exponential decay is measured which is described by the T2 time.
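A minimal sketch of the fitting step, assuming synthetic echo amplitudes (the numbers below are invented) and SciPy's curve fitting:

```python
import numpy as np
from scipy.optimize import curve_fit

# Echo amplitude vs. total dephasing time 2*tau (synthetic data)
two_tau = np.array([2, 5, 10, 20, 40, 80, 160]) * 1e-3   # seconds
amplitude = np.array([0.98, 0.95, 0.90, 0.82, 0.67, 0.45, 0.20])

def echo_decay(t, a0, t2):
    """Simple exponential echo decay: A(2*tau) = A0 * exp(-2*tau / T2)."""
    return a0 * np.exp(-t / t2)

(a0_fit, t2_fit), _ = curve_fit(echo_decay, two_tau, amplitude, p0=(1.0, 0.1))
print(f"fitted T2 = {t2_fit * 1e3:.0f} ms")   # ~100 ms for this data
```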
NMR spectroscopy
NMR spectroscopy is one of the principal techniques used to obtain physical, chemical, electronic and structural information about molecules due to the chemical shift of the resonance frequencies of the nuclear spins in the sample. Peak splittings due to J- or dipolar couplings between nuclei are also useful. NMR spectroscopy can provide detailed and quantitative information on the functional groups, topology, dynamics and three-dimensional structure of molecules in solution and the solid state. Since the area under an NMR peak is usually proportional to the number of spins involved, peak integrals can be used to determine composition quantitatively.
Structure and molecular dynamics can be studied (with or without "magic angle" spinning (MAS)) by NMR of quadrupolar nuclei (that is, with spin ) even in the presence of magnetic "dipole-dipole" interaction broadening (or simply, dipolar broadening), which is always much smaller than the quadrupolar interaction strength because it is a magnetic vs. an electric interaction effect.
Additional structural and chemical information may be obtained by performing double-quantum NMR experiments for pairs of spins or quadrupolar nuclei such as . Furthermore, nuclear magnetic resonance is one of the techniques that has been used to design quantum automata, and also build elementary quantum computers.
Continuous-wave (CW) spectroscopy
In the first few decades of nuclear magnetic resonance, spectrometers used a technique known as continuous-wave (CW) spectroscopy, where the transverse spin magnetization generated by a weak oscillating magnetic field is recorded as a function of the oscillation frequency or static field strength B0. When the oscillation frequency matches the nuclear resonance frequency, the transverse magnetization is maximized and a peak is observed in the spectrum. Although NMR spectra could be, and have been, obtained using a fixed constant magnetic field and sweeping the frequency of the oscillating magnetic field, it was more convenient to use a fixed frequency source and vary the current (and hence magnetic field) in an electromagnet to observe the resonant absorption signals. This is the origin of the counterintuitive, but still common, "high field" and "low field" terminology for low frequency and high frequency regions, respectively, of the NMR spectrum.
As of 1996, CW instruments were still used for routine work because the older instruments were cheaper to maintain and operate, often operating at 60 MHz with correspondingly weaker (non-superconducting) electromagnets cooled with water rather than liquid helium. One radio coil operated continuously, sweeping through a range of frequencies, while another orthogonal coil, designed not to receive radiation from the transmitter, received signals from nuclei that reoriented in solution. As of 2014, low-end refurbished 60 MHz and 90 MHz systems were sold as FT-NMR instruments, and in 2010 the "average workhorse" NMR instrument was configured for 300 MHz.
CW spectroscopy is inefficient in comparison with Fourier analysis techniques (see below) since it probes the NMR response at individual frequencies or field strengths in succession. Since the NMR signal is intrinsically weak, the observed spectrum suffers from a poor signal-to-noise ratio. This can be mitigated by signal averaging, i.e. adding the spectra from repeated measurements. While the NMR signal is the same in each scan and so adds linearly, the random noise adds more slowly – proportional to the square root of the number of spectra added (see random walk). Hence the overall signal-to-noise ratio increases as the square root of the number of spectra measured. However, monitoring an NMR signal at a single frequency as a function of time may be better suited for kinetic studies than pulsed Fourier-transform NMR spectroscopy.
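The square-root improvement is easy to check numerically. A minimal sketch, with a synthetic one-point "peak" and Gaussian noise (all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(512)
signal[256] = 1.0                      # a single resonance "peak"

def snr(n_scans, noise_sigma=0.5):
    scans = signal + rng.normal(0.0, noise_sigma, size=(n_scans, 512))
    avg = scans.mean(axis=0)
    noise = np.delete(avg, 256).std()  # noise level away from the peak
    return avg[256] / noise

for n in (1, 4, 16, 64):
    print(f"{n:3d} scans: SNR ~ {snr(n):.1f}")   # grows roughly as sqrt(n)
```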
Fourier-transform spectroscopy
Most applications of NMR involve full NMR spectra, that is, the intensity of the NMR signal as a function of frequency. Early attempts to acquire the NMR spectrum more efficiently than simple CW methods involved illuminating the target simultaneously with more than one frequency. A revolution in NMR occurred when short radio-frequency pulses began to be used, with a frequency centered at the middle of the NMR spectrum. In simple terms, a short pulse of a given "carrier" frequency "contains" a range of frequencies centered about the carrier frequency, with the range of excitation (bandwidth) being inversely proportional to the pulse duration, i.e. the Fourier transform of a short pulse contains contributions from all the frequencies in the neighborhood of the principal frequency. The restricted range of the NMR frequencies for most light spin- nuclei made it relatively easy to use short (1 - 100 microsecond) radio frequency pulses to excite the entire NMR spectrum.
Applying such a pulse to a set of nuclear spins simultaneously excites all the single-quantum NMR transitions. In terms of the net magnetization vector, this corresponds to tilting the magnetization vector away from its equilibrium position (aligned along the external magnetic field). The out-of-equilibrium magnetization vector then precesses about the external magnetic field vector at the NMR frequency of the spins. This oscillating magnetization vector induces a voltage in a nearby pickup coil, creating an electrical signal oscillating at the NMR frequency. This signal is known as the free induction decay (FID), and it contains the sum of the NMR responses from all the excited spins. In order to obtain the frequency-domain NMR spectrum (NMR absorption intensity vs. NMR frequency) this time-domain signal (intensity vs. time) must be Fourier transformed. Fortunately, the development of Fourier transform (FT) NMR coincided with the development of digital computers and the digital fast Fourier transform (FFT). Fourier methods can be applied to many types of spectroscopy.
Richard R. Ernst was one of the pioneers of pulsed NMR and won a Nobel Prize in chemistry in 1991 for his work on Fourier Transform NMR and his development of multi-dimensional NMR spectroscopy.
Multi-dimensional NMR spectroscopy
The use of pulses of different durations, frequencies, or shapes in specifically designed patterns or pulse sequences allows production of a spectrum that contains many different types of information about the molecules in the sample. In multi-dimensional nuclear magnetic resonance spectroscopy, there are at least two pulses: one leads to the directly detected signal and the others affect the starting magnetization and spin state prior to it. The full analysis involves repeating the sequence with the pulse timings systematically varied in order to probe the oscillations of the spin system point by point in the time domain. Multidimensional Fourier transformation of the multidimensional time signal yields the multidimensional spectrum. In two-dimensional nuclear magnetic resonance spectroscopy (2D-NMR), there will be one systematically varied time period in the sequence of pulses, which will modulate the intensity or phase of the detected signals. In 3D-NMR, two time periods will be varied independently, and in 4D-NMR, three will be varied.
There are many such experiments. In some, fixed time intervals allow (among other things) magnetization transfer between nuclei and, therefore, the detection of the kinds of nuclear–nuclear interactions that allowed for the magnetization transfer. Interactions that can be detected are usually classified into two kinds. There are through-bond and through-space interactions. Through-bond interactions relate to structural connectivity of the atoms and provide information about which ones are directly connected to each other, connected by way of a single other intermediate atom, etc. Through-space interactions relate to actual geometric distances and angles, including effects of dipolar coupling and the nuclear Overhauser effect.
Although the fundamental concept of 2D-FT NMR was proposed by Jean Jeener from the Free University of Brussels at an international conference, this idea was largely developed by Richard Ernst, who won the 1991 Nobel prize in Chemistry for his work in FT NMR, including multi-dimensional FT NMR, and especially 2D-FT NMR of small molecules. Multi-dimensional FT NMR experiments were then further developed into powerful methodologies for studying molecules in solution, in particular for the determination of the structure of biopolymers such as proteins or even small nucleic acids.
In 2002 Kurt Wüthrich shared the Nobel Prize in Chemistry (with John Bennett Fenn and Koichi Tanaka) for his work with protein FT NMR in solution.
Solid-state NMR spectroscopy
This technique complements X-ray crystallography in that it is frequently applicable to molecules in an amorphous or liquid-crystalline state, whereas crystallography, as the name implies, is performed on molecules in a crystalline phase. In electronically conductive materials, the Knight shift of the resonance frequency can provide information on the mobile charge carriers. Though nuclear magnetic resonance is used to study the structure of solids, extensive atomic-level structural detail is more challenging to obtain in the solid state. Due to broadening by chemical shift anisotropy (CSA) and dipolar couplings to other nuclear spins, without special techniques such as MAS or dipolar decoupling by RF pulses, the observed spectrum is often only a broad Gaussian band for non-quadrupolar spins in a solid.
Professor Raymond Andrew at the University of Nottingham in the UK pioneered the development of high-resolution solid-state nuclear magnetic resonance. He was the first to report the introduction of the MAS (magic angle sample spinning; MASS) technique that allowed him to achieve spectral resolution in solids sufficient to distinguish between chemical groups with either different chemical shifts or distinct Knight shifts. In MASS, the sample is spun at several kilohertz around an axis that makes the so-called magic angle θm (which is ~54.74°, where 3cos²θm − 1 = 0) with respect to the direction of the static magnetic field B0; as a result of such magic angle sample spinning, the broad chemical shift anisotropy bands are averaged to their corresponding average (isotropic) chemical shift values. Correct alignment of the sample rotation axis as close as possible to θm is essential for cancelling out the chemical-shift anisotropy broadening. There are different angles for the sample spinning relative to the applied field for the averaging of electric quadrupole interactions and paramagnetic interactions, correspondingly ~30.6° and ~70.1°. In amorphous materials, residual line broadening remains since each segment is in a slightly different environment, therefore exhibiting a slightly different NMR frequency.
Line broadening or splitting by dipolar or J-couplings to nearby 1H nuclei is usually removed by radio-frequency pulses applied at the 1H frequency during signal detection. The concept of cross polarization developed by Sven Hartmann and Erwin Hahn was utilized in transferring magnetization from protons to less sensitive nuclei by M.G. Gibby, Alex Pines and John S. Waugh. Then, Jake Schaefer and Ed Stejskal demonstrated the powerful use of cross polarization under MAS conditions (CP-MAS) and proton decoupling, which is now routinely employed to measure high resolution spectra of low-abundance and low-sensitivity nuclei, such as carbon-13, silicon-29, or nitrogen-15, in solids. Significant further signal enhancement can be achieved by dynamic nuclear polarization from unpaired electrons to the nuclei, usually at temperatures near 110 K.
Sensitivity
Because the intensity of nuclear magnetic resonance signals and, hence, the sensitivity of the technique depends on the strength of the magnetic field, the technique has also advanced over the decades with the development of more powerful magnets. Advances made in audio-visual technology have also improved the signal-generation and processing capabilities of newer instruments.
As noted above, the sensitivity of nuclear magnetic resonance signals is also dependent on the presence of a magnetically susceptible nuclide and, therefore, either on the natural abundance of such nuclides or on the ability of the experimentalist to artificially enrich the molecules, under study, with such nuclides. The most abundant naturally occurring isotopes of hydrogen and phosphorus (for example) are both magnetically susceptible and readily useful for nuclear magnetic resonance spectroscopy. In contrast, carbon and nitrogen have useful isotopes but which occur only in very low natural abundance.
Other limitations on sensitivity arise from the quantum-mechanical nature of the phenomenon. For quantum states separated by energy equivalent to radio frequencies, thermal energy from the environment causes the populations of the states to be close to equal. Since incoming radiation is equally likely to cause stimulated emission (a transition from the upper to the lower state) as absorption, the NMR effect depends on an excess of nuclei in the lower states. Several factors can reduce sensitivity, including:
Increasing temperature, which evens out the Boltzmann population of states. Conversely, low temperature NMR can sometimes yield better results than room-temperature NMR, providing the sample remains liquid.
Saturation of the sample with energy applied at the resonant radiofrequency. This manifests in both CW and pulsed NMR; in the first case (CW) this happens by using too much continuous power that keeps the upper spin levels completely populated; in the second case (pulsed), each pulse (that is at least a 90° pulse) leaves the sample saturated, and four to five times the (longitudinal) relaxation time (5T1) must pass before the next pulse or pulse sequence can be applied. For single pulse experiments, shorter RF pulses that tip the magnetization by less than 90° can be used, which loses some intensity of the signal, but allows for shorter recycle delays. The optimum flip angle is called the Ernst angle, after the Nobel laureate; a short numerical sketch follows this list. Especially in solid-state NMR, or in samples containing very few nuclei with spin (diamond with the natural 1% of carbon-13 is especially troublesome here) the longitudinal relaxation times can be in the range of hours, while for proton-NMR they are often in the range of one second.
Non-magnetic effects, such as electric-quadrupole coupling of spin-1 and spin-3/2 nuclei with their local environment, which broaden and weaken absorption peaks. 14N, an abundant spin-1 nucleus, is difficult to study for this reason. High-resolution NMR instead probes molecules using the rarer 15N isotope, which has spin-1/2.
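The Ernst angle mentioned in the saturation item above has the closed form cos θE = exp(−TR/T1), where TR is the repetition time; a minimal sketch with illustrative values:

```python
import math

def ernst_angle_deg(tr, t1):
    """Optimal excitation flip angle for repeated pulses:
    cos(theta_E) = exp(-TR / T1)."""
    return math.degrees(math.acos(math.exp(-tr / t1)))

# With a 1.0 s repetition time and T1 = 1.4 s (values for illustration):
print(f"{ernst_angle_deg(tr=1.0, t1=1.4):.0f} degrees")  # ~61 degrees
```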
Isotopes
Many isotopes of chemical elements can be used for NMR analysis.
Commonly used nuclei:
1H, the most commonly used spin-1/2 nucleus in NMR investigations, has been studied using many forms of NMR. Hydrogen is highly abundant, especially in biological systems. It is the nucleus providing the strongest NMR signal (apart from 3H, which is not commonly used due to its instability and radioactivity). Proton NMR has a narrow chemical-shift range but gives sharp signals in solution state. Fast acquisition of quantitative spectra (with peak integrals in stoichiometric ratios) is possible due to short relaxation time. The 1H nucleus has provided the sole diagnostic signal for clinical magnetic resonance imaging (MRI).
2H, a spin-1 nucleus, is commonly utilized to provide a signal-free medium in the form of deuterated solvents for proton NMR, to avoid signal interference from hydrogen-containing solvents in measurement of NMR of solutes. It is also used in determining the behavior of lipids in lipid membranes and other solids or liquid crystals as it is a relatively non-perturbing label which can selectively replace 1H. Alternatively, 2H can be detected in media specially labeled with 2H. Deuterium resonance is commonly used in high-resolution NMR spectroscopy to monitor drift of the magnetic field strength (lock) and to monitor the homogeneity of the external magnetic field.
3He is very sensitive to NMR. It exists at a very low concentration in natural helium and can be purified from 4He. It is used mainly in studies of endohedral fullerenes, where its chemical inertness is beneficial to ascertaining the structure of the entrapping fullerene.
11B is more sensitive than 10B and yields sharper signals. The nuclear spin of 10B is 3 and that of 11B is 3/2. Quartz tubes must be used because borosilicate glass interferes with measurement.
13C, a spin-1/2 nucleus, is widely used, despite its relative paucity in naturally occurring carbon (approximately 1.1%). It is stable to nuclear decay. Since there is a low percentage in natural carbon, spectrum acquisition on samples which have not been enriched in 13C takes a long time. Frequently used for labeling of compounds in synthetic and metabolic studies. Has low sensitivity and moderately wide chemical shift range, yields sharp signals. The low percentage makes it useful by preventing spin–spin couplings and makes the spectrum appear less crowded. Slow relaxation of 13C nuclei not bonded to hydrogen means that spectra are not integrable unless long acquisition times are used.
14N, spin-1, is a medium-sensitivity nucleus with a wide chemical shift range. Its large quadrupole moment interferes with acquisition of high-resolution spectra, limiting usefulness to smaller molecules and functional groups with a high degree of symmetry such as the head-groups of lipids.
15N, spin-1/2, is relatively commonly used. Can be used for isotopically labeling compounds. Very insensitive but yields sharp signals. Its low percentage in natural nitrogen together with low sensitivity requires high concentrations or expensive isotope enrichment.
17O, spin-5/2, low sensitivity and very low natural abundance (0.037%), wide chemical shift range (up to 2000 ppm). Its quadrupole moment causes line broadening. Used in metabolic and biochemical studies of chemical equilibria.
19F, spin-1/2, relatively commonly measured. Sensitive, yields sharp signals, has a wide chemical shift range.
31P, spin-1/2, 100% of natural phosphorus. Medium sensitivity, wide chemical shift range, yields sharp lines. Spectra tend to have a moderate level of noise. Used in biochemical studies and in coordination chemistry with phosphorus-containing ligands.
35Cl and 37Cl, spin-3/2, broad signal. 35Cl is significantly more sensitive and is preferred over 37Cl despite its slightly broader signal. Organic chlorides yield very broad signals. Its use is limited to inorganic and ionic chlorides and very small organic molecules.
43Ca, spin-7/2, relatively small quadrupole moment, moderately sensitive, very low natural abundance. Used in biochemistry to study calcium binding to DNA, proteins, etc.
195Pt, used in studies of catalysts and complexes.
Many other nuclei are used mainly in the studies of their complexes and chemical bonding, or to detect the presence of the element.
Applications
NMR is extensively used in medicine in the form of magnetic resonance imaging. NMR is widely used in organic chemistry and industrially mainly for analysis of chemicals. The technique is also used to measure the ratio between water and fat in foods, monitor the flow of corrosive fluids in pipes, or to study molecular structures such as catalysts.
Medicine
The application of nuclear magnetic resonance best known to the general public is magnetic resonance imaging for medical diagnosis and magnetic resonance microscopy in research settings. However, it is also widely used in biochemical studies, notably in NMR spectroscopy such as proton NMR, carbon-13 NMR, deuterium NMR and phosphorus-31 NMR. Biochemical information can also be obtained from living tissue (e.g. human brain tumors) with the technique known as in vivo magnetic resonance spectroscopy or chemical shift NMR microscopy.
These spectroscopic studies are possible because nuclei are surrounded by orbiting electrons, which are charged particles that generate small, local magnetic fields that add to or subtract from the external magnetic field, and so will partially shield the nuclei. The amount of shielding depends on the exact local environment. For example, a hydrogen bonded to an oxygen will be shielded differently from a hydrogen bonded to a carbon atom. In addition, two hydrogen nuclei can interact via a process known as spin–spin coupling, if they are on the same molecule, which will split the lines of the spectra in a recognizable way.
As one of the two major spectroscopic techniques used in metabolomics, NMR is used to generate metabolic fingerprints from biological fluids to obtain information about disease states or toxic insults.
Chemistry
The aforementioned chemical shift came as a disappointment to physicists who had hoped that the resonance frequency of each nuclear species would be constant in a given magnetic field. But about 1951, chemist S. S. Dharmatti pioneered a way to determine the structure of many compounds by studying the peaks of nuclear magnetic resonance spectra. It can be a very selective technique, distinguishing among many atoms within a molecule or collection of molecules of very similar type but which differ only in terms of their local chemical environment. NMR spectroscopy is used to unambiguously identify known and novel compounds, and as such, is usually required by scientific journals for identity confirmation of synthesized new compounds. See the articles on carbon-13 NMR and proton NMR for detailed discussions.
A chemist can determine the identity of a compound by comparing the observed nuclear precession frequencies to known or predicted frequencies. Further structural data can be elucidated by observing spin–spin coupling, a process by which the precession frequency of a nucleus can be influenced by the spin orientation of a chemically bonded nucleus. Spin–spin coupling is easily observed in NMR of hydrogen-1 (1H NMR) since its natural abundance is nearly 100%.
Because the nuclear magnetic resonance timescale is rather slow, compared to other spectroscopic methods, changing the temperature of a T2* experiment can also give information about fast reactions, such as the Cope rearrangement or about structural dynamics, such as ring-flipping in cyclohexane. At low enough temperatures, a distinction can be made between the axial and equatorial hydrogens in cyclohexane.
An example of nuclear magnetic resonance being used in the determination of a structure is that of buckminsterfullerene (often called "buckyballs", composition C60). This now famous form of carbon has 60 carbon atoms forming a sphere. The carbon atoms are all in identical environments and so should see the same internal H field. Unfortunately, buckminsterfullerene contains no hydrogen and so 13C nuclear magnetic resonance has to be used. 13C spectra require longer acquisition times since carbon-13 is not the common isotope of carbon (unlike hydrogen, where 1H is the common isotope). However, in 1990 the spectrum was obtained by R. Taylor and co-workers at the University of Sussex and was found to contain a single peak, confirming the unusual structure of buckminsterfullerene.
Battery
Nuclear magnetic resonance (NMR) is a powerful analytical tool for investigating the local structure and ion dynamics in battery materials. NMR provides unique insights into the short-range atomic environments within complex electrochemical systems such as batteries. Electrochemical processes rely on redox reactions, in which 7Li or 23Na are often involved. Accordingly, their NMR spectra are affected by the electronic structure of the material, which makes NMR an essential technique for probing the behavior of battery components during operation.
Applications of NMR in Battery Research
Electrodes and Structural Transformations: During charge and discharge cycles, the materials in the anodes and cathodes undergo local structural transformations. These changes can be monitored using NMR by analyzing the signal's line shape, line intensity, and chemical shift. These transformations are often not captured by X-ray diffraction techniques, which probe long-range order, making NMR indispensable for understanding the underlying mechanisms of energy storage.
Metal Dendrite Formation: One of the challenges in lithium and sodium-based batteries is the formation of metal dendrites, which can lead to short circuits and catastrophic battery failure. In Situ NMR allows researchers to observe the formation of lithium or sodium dendrites in real time during battery cycling. Varying the cycling rates can also quantify the effect on dendrite formation, aiding in the development of strategies to suppress dendrite growth and reduce the risk of short circuits.
Solid Electrolytes and Interfaces: Solid electrolytes, a key focus of next-generation battery research, often suffer from limited ion diffusion rates. NMR techniques can measure diffusivity in solid electrolytes, helping researchers understand how to enhance ion conductivity. Furthermore, NMR is used to study the Solid Electrolyte Interface (SEI), a layer that forms on the electrode surface and thus influences battery stability. Solid-state NMR (ssNMR) is particularly valuable for characterizing the composition and ion dynamics within the SEI layer due to its nondestructive testing capabilities.
In Situ and Ex Situ NMR Techniques
NMR technology can be divided into two main experimental approaches in battery research: In Situ NMR and Ex Situ NMR. Each offers unique advantages depending on the research goals.
In Situ NMR: In situ NMR enables real-time observation of chemical and structural changes in batteries while they are operating. This is particularly important for studying transient species that only exist under working conditions, such as certain intermediate reaction products. In situ NMR has become a critical tool for understanding processes like lithium and sodium plating and dendrite formation during battery cycling.
Ex Situ NMR: Ex situ NMR is used after the battery has been disassembled, allowing for high-resolution analysis of battery components. It is often employed to study a wide range of nuclei, including 1H, 2H, 6Li, 7Li, 13C, 15N, 17O, 19F, 25Mg, 29Si, 31P, 51V, 133Cs. Many of these nuclei are quadrupolar or present in low abundance, making them difficult to detect. However, ex situ NMR benefits from better sensitivity and narrower linewidths, which can be further improved by employing larger sample volumes, higher magnetic fields, or magic angle spinning (MAS).
Purity determination (w/w NMR)
While NMR is primarily used for structural determination, it can also be used for purity determination, provided that the structure and molecular weight of the compound are known. This technique requires the use of an internal standard of known purity. Typically this standard will have a high molecular weight to facilitate accurate weighing, but relatively few protons so as to give a clear peak for later integration, e.g. 1,2,4,5-tetrachloro-3-nitrobenzene. Accurately weighed portions of the standard and sample are combined and analysed by NMR. Suitable peaks from both compounds are selected and the purity of the sample is determined via the following equation:
Purity = (n[H]spl / n[H]std) × (MWspl / MWstd) × (wstd / wspl) × P
Where:
wstd: weight of internal standard
wspl: weight of sample
n[H]std: the integrated area of the peak selected for comparison in the standard, corrected for the number of protons in that functional group
n[H]spl: the integrated area of the peak selected for comparison in the sample, corrected for the number of protons in that functional group
MWstd: molecular weight of standard
MWspl: molecular weight of sample
P: purity of internal standard
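A minimal sketch of this relation in Python, assuming the variable definitions above (the function name is illustrative, not from the source):

def purity_w_w(w_std, w_spl, nH_std, nH_spl, mw_std, mw_spl, p_std):
    # Moles of pure standard = w_std * p_std / mw_std; the per-proton
    # integrals nH_std and nH_spl are proportional to the moles of each
    # compound, so the sample purity (fraction w/w) follows directly.
    return (nH_spl / nH_std) * (mw_spl / mw_std) * (w_std / w_spl) * p_std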
Non-destructive testing
Nuclear magnetic resonance is extremely useful for analyzing samples non-destructively. Radio-frequency magnetic fields easily penetrate many types of matter and anything that is not highly conductive or inherently ferromagnetic. For example, various expensive biological samples, such as nucleic acids, including RNA and DNA, or proteins, can be studied using nuclear magnetic resonance for weeks or months before using destructive biochemical experiments. This also makes nuclear magnetic resonance a good choice for analyzing dangerous samples.
Segmental and molecular motions
In addition to providing static information on molecules by determining their 3D structures, one of the remarkable advantages of NMR over X-ray crystallography is that it can be used to obtain important dynamic information. This is due to the orientation dependence of the chemical-shift, dipole-coupling, or electric-quadrupole-coupling contributions to the instantaneous NMR frequency in an anisotropic molecular environment. When the molecule or segment containing the NMR-observed nucleus changes its orientation relative to the external field, the NMR frequency changes, which can result in changes in one- or two-dimensional spectra or in the relaxation times, depending on the correlation time and amplitude of the motion.
Data acquisition in the petroleum industry
Another use for nuclear magnetic resonance is data acquisition in the petroleum industry for petroleum and natural gas exploration and recovery. Initial research in this domain began in the 1950s; however, the first commercial instruments were not released until the early 1990s. A borehole is drilled into rock and sedimentary strata into which nuclear magnetic resonance logging equipment is lowered. Nuclear magnetic resonance analysis of these boreholes is used to measure rock porosity, estimate permeability from pore size distribution, and identify pore fluids (water, oil and gas). These instruments are typically low-field NMR spectrometers.
NMR logging, a subcategory of electromagnetic logging, measures the induced magnetic moment of hydrogen nuclei (protons) contained within the fluid-filled pore space of porous media (reservoir rocks). Unlike conventional logging measurements (e.g., acoustic, density, neutron, and resistivity), which respond to both the rock matrix and fluid properties and are strongly dependent on mineralogy, NMR-logging measurements respond to the presence of hydrogen. Because hydrogen atoms primarily occur in pore fluids, NMR effectively responds to the volume, composition, viscosity, and distribution of these fluids, for example oil, gas or water. NMR logs provide information about the quantities of fluids present, the properties of these fluids, and the sizes of the pores containing these fluids. From this information, it is possible to infer or estimate:
The volume (porosity) and distribution (permeability) of the rock pore space
Rock composition
Type and quantity of fluid hydrocarbons
Hydrocarbon producibility
The basic core and log measurement is the T2 decay, presented as a distribution of T2 amplitudes versus time at each sample depth, typically from 0.3 ms to 3 s. The T2 decay is further processed to give the total pore volume (the total porosity) and pore volumes within different ranges of T2. The most common volumes are the bound fluid and free fluid. A permeability estimate is made using a transform such as the Timur-Coates or SDR permeability transforms. By running the log with different acquisition parameters, direct hydrocarbon typing and enhanced diffusion are possible.
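A sketch of one commonly published form of the Timur-Coates transform; the constant and exponents are empirical and are calibrated per formation, so the default below is an assumption, not a universal value:

def timur_coates_permeability(phi, ffi, bvi, c=10.0):
    # phi: NMR total porosity (percent); ffi: free-fluid volume;
    # bvi: bulk volume irreducible (bound fluid). Returns an estimate
    # in millidarcies; c ~ 10 is a frequently quoted sandstone default.
    return (phi / c) ** 4 * (ffi / bvi) ** 2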
Flow probes for NMR spectroscopy
Real-time applications of NMR in liquid media have been developed using specifically designed flow probes (flow cell assemblies) which can replace standard tube probes. This has enabled techniques that can incorporate the use of high-performance liquid chromatography (HPLC) or other continuous-flow sample introduction devices. These flow probes have been used in various online process-monitoring applications, such as monitoring chemical reactions and the degradation of environmental pollutants.
Process control
NMR has now entered the arena of real-time process control and process optimization in oil refineries and petrochemical plants. Two different types of NMR analysis are utilized to provide real-time analysis of feeds and products in order to control and optimize unit operations. Time-domain NMR (TD-NMR) spectrometers operating at low field (2–20 MHz for 1H) yield free induction decay data that can be used to determine absolute hydrogen content values, rheological information, and component composition. These spectrometers are used in mining, polymer production, cosmetics and food manufacturing as well as coal analysis. High resolution FT-NMR spectrometers operating in the 60 MHz range with shielded permanent magnet systems yield high resolution NMR spectra of refinery and petrochemical streams. The variation observed in these spectra with changing physical and chemical properties is modeled using chemometrics to yield predictions on unknown samples. The prediction results are provided to control systems via analogue or digital outputs from the spectrometer.
Earth's field NMR
In the Earth's magnetic field, NMR frequencies are in the audio frequency range, or the very low frequency and ultra low frequency bands of the radio frequency spectrum. Earth's field NMR (EFNMR) is typically stimulated by applying a relatively strong dc magnetic field pulse to the sample and, after the end of the pulse, analyzing the resulting low frequency alternating magnetic field that occurs in the Earth's magnetic field due to free induction decay (FID). These effects are exploited in some types of magnetometers, EFNMR spectrometers, and MRI imagers. Their inexpensive portable nature makes these instruments valuable for field use and for teaching the principles of NMR and MRI.
An important feature of EFNMR spectrometry compared with high-field NMR is that some aspects of molecular structure can be observed more clearly at low fields and low frequencies, whereas other aspects observable at high fields are not observable at low fields. This is because:
Electron-mediated heteronuclear J-couplings (spin–spin couplings) are field independent, producing clusters of two or more frequencies separated by several Hz, which are more easily observed in a fundamental resonance of about 2 kHz. "Indeed it appears that enhanced resolution is possible due to the long spin relaxation times and high field homogeneity which prevail in EFNMR."
Chemical shifts of several ppm are clearly separated in high field NMR spectra, but have separations of only a few millihertz at proton EFNMR frequencies, so are usually not resolved.
Zero field NMR
In zero field NMR all magnetic fields are shielded such that magnetic fields below 1 nT (nanotesla) are achieved and the nuclear precession frequencies of all nuclei are close to zero and indistinguishable. Under those circumstances the observed spectra are no longer dictated by chemical shifts but primarily by J-coupling interactions, which are independent of the external magnetic field. Since inductive detection schemes are not sensitive at very low frequencies, on the order of the J-couplings (typically between 0 and 1000 Hz), alternative detection schemes are used. Specifically, sensitive magnetometers turn out to be good detectors for zero field NMR. A zero magnetic field environment does not provide any polarization; hence it is the combination of zero field NMR with hyperpolarization schemes that makes zero field NMR desirable.
Quantum computing
NMR quantum computing uses the spin states of nuclei within molecules as qubits. NMR differs from other implementations of quantum computers in that it uses an ensemble of systems; in this case, molecules.
Magnetometers
Various magnetometers use NMR effects to measure magnetic fields, including proton precession magnetometers (PPM) (also known as proton magnetometers), and Overhauser magnetometers.
SNMR
Surface nuclear magnetic resonance (SNMR), or magnetic resonance sounding, is based on the principle of nuclear magnetic resonance (NMR), and measurements can be used to indirectly estimate the water content of saturated and unsaturated zones in the earth's subsurface. SNMR is used to estimate aquifer properties, including the quantity of water contained in the aquifer, porosity, and hydraulic conductivity.
Makers of NMR equipment
Major NMR instrument makers include Thermo Fisher Scientific, Magritek, Oxford Instruments, Bruker, Spinlock SRL, General Electric, JEOL, Kimble Chase, Philips, Siemens AG, and formerly Agilent Technologies (who acquired Varian, Inc.).
See also
Benchtop NMR spectrometer
Larmor equation (not to be confused with the Larmor formula)
Least-squares spectral analysis
Liquid nitrogen
NMR crystallography
NMR spectra database
Nuclear magnetic resonance in porous media
Nuclear quadrupole resonance (NQR)
Protein dynamics
Rabi cycle
Relaxometry
Spin echo
Structure-based assignment
References
Further reading
K.V.R. Chary, Girjesh Govil (2008). NMR in Biological Systems: From Molecules to Human. Springer.
The Feynman Lectures on Physics Vol. II Ch. 35: Paramagnetism and Magnetic Resonance
External links
Tutorial
NMR/MRI tutorial
NMR Library NMR Concepts
NMR Course Notes
Downloadable NMR exercises as PowerPoint (english/german) and PDF (german only) files
Animations and simulations
A free interactive simulation of NMR principles
Interactive simulation on the Bloch sphere
Video
introduction to NMR and MRI
Richard Ernst, NL – Developer of multidimensional NMR techniques Freeview video provided by the Vega Science Trust.
'An Interview with Kurt Wuthrich' Freeview video by the Vega Science Trust (Wüthrich was awarded a Nobel Prize in Chemistry in 2002 "for his development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution").
The Nobel Prize Winner - Documentary about Richard R. Ernst by Lukas Schwarzenbacher and Susanne Schmid (Swiss German with English subtitles)
Other
Spotlight on nuclear magnetic resonance: a timeless technique
Scientific techniques
Articles containing video clips
Biomagnetics | Nuclear magnetic resonance | Physics,Chemistry,Biology | 12,507 |
32,251,811 | https://en.wikipedia.org/wiki/Built-in%20hold | A built-in hold is a period in a launch countdown during which no activities are scheduled and the countdown clock is stopped. The hold serves as a milestone in the countdown, an opportunity for non-launch activities (such as a shift change or meal break), and a chance to perform unanticipated activities such as equipment repair.
Most importantly, a hold provides an opportunity to synchronize the pre-launch activity schedule (concluding at T−0) with the desired wall-clock time of launch (L−0). Activities might take more or less time than planned, or the launch time might be moved, e.g. due to weather.
A planned hold may be of a fixed or variable duration. Criteria for exiting the hold and restarting the countdown may be based on a fixed time, the completion of a checklist of work items, or a go/no-go decision from mission management.
For example, space shuttle launch countdowns begin at T−43 hours and include seven holds at T−27 hours, T−19 hours, T−11 hours, T−6 hours, T−3 hours, T−20 minutes, and T−9 minutes. These holds total about 26 hours, so the launch countdown begins at about L−69 hours.
References
NASA
Spaceflight | Built-in hold | Astronomy | 269 |
731,893 | https://en.wikipedia.org/wiki/Grey | Grey (more frequent British English) or gray (more frequent American English) is an intermediate color between black and white. It is a neutral or achromatic color, meaning that it has no chroma and therefore no hue. It is the color of a cloud-covered sky, of ash, and of lead.
The first recorded use of grey as a color name in the English language was in 700 CE. Grey is the dominant spelling in European and Commonwealth English, while gray is more common in American English; however, both spellings are valid in both varieties of English.
In Europe and North America, surveys show that gray is the color most commonly associated with neutrality, conformity, boredom, uncertainty, old age, indifference, and modesty. Only one percent of respondents chose it as their favorite color.
Etymology
Grey comes from the Middle English grai or grei, from the Old English grǣg, and is related to the Dutch grauw and German grau. There are no certain cognates outside Germanic languages; terms such as Spanish gris and Italian grigio are considered Germanic loanwords from Medieval Latin griseus. The first recorded use of grey as a color name in the English language was in 700 AD.
The distinction between the grey and gray spellings in usual Commonwealth and American English respectively developed in the 20th century.
In history and art
Antiquity through the Middle Ages
In antiquity and the Middle Ages, grey was the color of undyed wool, and thus was the color most commonly worn by peasants and the poor. It was also the color worn by Cistercian monks and friars of the Franciscan and Capuchin orders as a symbol of their vows of humility and poverty. Franciscan friars in England and Scotland were commonly known as the grey friars, and that name is now attached to many places in Great Britain.
Renaissance and the Baroque
During the Renaissance and the Baroque, grey began to play an important role in fashion and art. Black became the most popular color of the nobility, particularly in Italy, France, and Spain, and grey and white were harmonious with it.
Grey was also frequently used for the drawing of oil paintings, a technique called grisaille. The painting would first be composed in grey and white, and then the colors, made with thin transparent glazes, would be added on top. The grisaille beneath would provide the shading, visible through the layers of color. Sometimes, the grisaille was simply left uncovered, giving the appearance of carved stone.
Grey was a particularly good background color for gold and for skin tones. It became the most common background for the portraits of Rembrandt van Rijn and for many of the paintings of El Greco, who used it to highlight the faces and costumes of the central figures. The palette of Rembrandt was composed almost entirely of somber colors. He composed his warm greys out of black pigments made from charcoal or burnt animal bones, mixed with lead white or a white made of lime, which he warmed with a little red lake color from cochineal or madder. In one painting, the portrait of Margaretha de Geer (1661), one part of a grey wall in the background is painted with a layer of dark brown over a layer of orange, red, and yellow earths, mixed with ivory black and some lead white. Over this he put an additional layer of glaze made of a mixture of blue smalt, red ochre, and yellow lake. Using these ingredients and many others, he made greys which had, according to art historian Philip Ball, "an incredible subtlety of pigmentation". The warm, dark and rich greys and browns served to emphasize the golden light on the faces in the paintings.
Eighteenth and nineteenth centuries
Grey became a highly fashionable color in the 18th century, both for women's dresses and for men's waistcoats and coats. It looked particularly luminous coloring the silk and satin fabrics worn by the nobility and wealthy.
Women's fashion in the 19th century was dominated by Paris, while men's fashion was set by London. The grey business suit appeared in the mid-19th century in London; light grey in summer, dark grey in winter; replacing the more colorful palette of men's clothing early in the century.
The clothing of women working in the factories and workshops of Paris in the 19th century was usually grey. This gave them the name of grisettes. "Gris" or grey also meant drunk, and the name "grisette" was also given to the lower class of Parisian prostitutes.
Grey also became a common color for military uniforms; in an age of rifles with longer range, soldiers in grey were less visible as targets than those in blue or red. Grey was the color of the uniforms of the Confederate Army during the American Civil War, and of the Prussian Army for active service wear from 1910 onwards.
Several artists of the mid-19th century used tones of grey to create memorable paintings; Jean-Baptiste-Camille Corot used tones of green-grey and blue grey to give harmony to his landscapes, and James McNeill Whistler created a special grey for the background of the portrait of his mother, and for his own self-portrait.
Whistler's arrangement of tones of grey had an effect on the world of music, on the French composer Claude Debussy. In 1894, Debussy wrote to violinist Eugène Ysaÿe describing his Nocturnes as "an experiment in the combinations that can be obtained from one color – what a study in grey would be in painting".
Twentieth and twenty-first centuries
In the late 1930s, grey became a symbol of industrialization and war. It was the dominant color of Pablo Picasso's celebrated painting about the horrors of the Spanish Civil War, Guernica.
After the war, the grey business suit became a metaphor for uniformity of thought, popularized in such books as The Man in the Gray Flannel Suit (1955), which became a successful film in 1956.
In the sciences, nature, and technology
Storm clouds
The whiteness or darkness of clouds is a function of their depth. Small, fluffy white clouds in summer look white because the sunlight is being scattered by the tiny water droplets they contain, and that white light comes to the viewer's eye. However, as clouds become larger and thicker, the white light cannot penetrate through the cloud, and is reflected off the top. Clouds look darkest grey during thunderstorms, when they can be as much as 20,000 to 30,000 feet high.
Stratiform clouds are a layer of clouds that covers the entire sky, and which have a depth of between a few hundred and a few thousand feet. The thicker the clouds, the darker they appear from below, because little of the sunlight is able to pass through. From above, in an airplane, the same clouds look perfectly white, but from the ground the sky looks gloomy and grey.
The greying of hair
The color of a person's hair is created by the pigment melanin, found in the core of each hair. Melanin is also responsible for the color of the skin and of the eyes. There are only two types of pigment: dark (eumelanin) or light (phaeomelanin). Combined in various proportions, these pigments create all natural hair colors.
Melanin itself is the product of a specialized cell, the melanocyte, which is found in each hair follicle, from which the hair grows. As hair grows, the melanocyte injects melanin into the hair cells, which contain keratin, the protein that makes up our hair, skin, and nails. As long as the melanocytes continue injecting melanin into the hair cells, the hair retains its original color. At a certain age, however, which varies from person to person, the amount of melanin injected is reduced and eventually stops. The hair, without pigment, turns grey and eventually white. The reason for this decline of production of melanocytes is uncertain. In the February 2005 issue of Science, a team of Harvard scientists suggested that the cause was the failure of the melanocyte stem cells to maintain the production of the essential pigments, due to age or genetic factors, after a certain period of time. For some people, the breakdown comes in their twenties; for others, many years later. According to Scientific American, "Generally speaking, among Caucasians 50 percent are 50 percent grey by age 50." Adult male gorillas also develop silver hair, but only on their backs – see Physical characteristics of gorillas.
Optics
Over the centuries, artists have traditionally created grey by mixing black and white in various proportions. They added a little red to make a warmer grey, or a little blue for a cooler grey. Artists could also make a grey by mixing two complementary colors, such as orange and blue.
Today the grey on televisions, computer displays, and telephones is usually created using the RGB color model. Red, green, and blue light combined at full intensity on the black screen makes white; by lowering the intensity, it is possible to create shades of grey.
In printing, grey is usually obtained with the CMYK color model, using cyan, magenta, yellow, and black. Grey is produced either by using black and white, or by combining equal amounts of cyan, magenta, and yellow. Most greys have a cool or warm cast to them, as the human eye can detect even a minute amount of color saturation. Yellow, orange, and red create a "warm grey". Green, blue, and violet create a "cool grey". When no color is added, the color is "neutral grey", "achromatic grey", or simply "grey". Images consisting wholly of black, white and greys are called monochrome, black-and-white, or greyscale.
RGB model
Grey values result when r = g = b, for the color (r, g, b)
CMYK model
Grey values are produced by c = m = y = 0, for the color (c, m, y, k). Lightness is adjusted by varying k. In theory, any mixture where c = m = y is neutral, but in practice such mixtures are often a muddy brown.
HSL and HSV model
Achromatic greys have no hue, so the h code is marked as "undefined" using a dash: --; greys also result whenever s is 0 or undefined, as is the case when v is 0 or l is 0 or 1
Web colors
There are several tones of grey available for use with HTML and Cascading Style Sheets (CSS) as named colors, while 254 true greys are available by specification of a hex triplet for the RGB value. All are spelled gray; using the spelling grey can cause errors. This spelling was inherited from the X11 color list. Internet Explorer's Trident browser engine does not recognize grey and renders it green. Another anomaly is that gray is in fact much darker than the X11 color marked darkgray; this is because of a conflict with the original HTML gray and the X11 gray, which is closer to HTML's silver. The three slategray colors are not themselves on the greyscale, but are slightly saturated toward cyan (green + blue). Since there is an even number (256, including black and white) of unsaturated tones of grey, there are two grey tones straddling the midpoint in the 8-bit greyscale. The color name gray has been assigned the lighter of the two shades (128, also known as #808080), due to rounding up.
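The midpoint arithmetic can be sketched in a few lines of Python (color names follow the CSS/X11 conventions mentioned above):

greys = [f"#{v:02x}{v:02x}{v:02x}" for v in range(256)]  # all 8-bit greys
# With 256 tones there is no tone at the exact midpoint (127.5);
# #7f7f7f (127) and #808080 (128) straddle it, and the name gray
# was assigned the lighter of the two.
assert greys[128] == "#808080"  # CSS/HTML gray
assert greys[169] == "#a9a9a9"  # X11 darkgray, nevertheless lighter than gray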
Pigments
Until the 19th century, artists traditionally created grey by simply combining black and white. Rembrandt Van Rijn, for instance, usually used lead white and either carbon black or ivory black, along with touches of either blues or reds to cool or warm the grey.
In the early 19th century, a new grey, Payne's grey, appeared on the market. Payne's grey is a dark blue-grey, a mixture of ultramarine and black or of ultramarine and sienna. It is named after William Payne, a British artist who painted watercolors in the late 18th century. The first recorded use of Payne's grey as a color name in English was in 1835.
Animal color
Grey is a very common color for animals, birds, and fish, ranging in size from whales to mice. It provides a natural camouflage and allows them to blend with their surroundings.
Grey matter of the brain
The substance that composes the brain is sometimes referred to as grey matter, or "the little grey cells", so the color grey is associated with things intellectual. However, the living human brain is actually pink in color; it only turns grey when dead.
Nanotechnology and grey goo
Grey goo is a hypothetical end-of-the-world scenario, also known as ecophagy: out-of-control self-replicating nanobots consume all living matter on Earth while building more of themselves.
Grey noise
In sound engineering, grey noise is random noise subjected to an equal-loudness contour, such as an inverted A-weighting curve, over a given range of frequencies, giving the listener the perception that it is equally loud at all frequencies.
In culture
Religion
In the Christian religion, grey is the color of ashes, and so a biblical symbol of mourning and repentance, described as sackcloth and ashes. It can be used during Lent or on special days of fasting and prayer. As the color of humility and modesty, grey is worn by friars of the Order of Friars Minor Capuchin and Franciscan order as well as monks of the Cistercian order. Grey cassocks are worn by clergy of the Brazilian Catholic Apostolic Church.
Buddhist monks and priests in Japan and Korea will often wear a sleeved grey, brown, or black outer robe.
Taoist priests in China also often wear grey.
Politics
Grey is rarely used as a color by political parties, largely because of its common association with conformity, boredom and indecision. An example of a political party using grey as a color is the German Grey Panthers.
The term "grey power" or "the grey vote" is sometimes used to describe the influence of older voters as a voting bloc. In the United States, older people are more likely to vote, and usually vote to protect certain social benefits, such as Social Security.
Greys is a term sometimes used pejoratively by environmentalists in the green movement to describe those who oppose environmental measures and supposedly prefer the grey of concrete and cement.
Military
During the American Civil War, the soldiers of the Confederate Army wore grey uniforms. At the beginning of the war, the armies of the North and of the South had very similar uniforms; some Confederate units wore blue, and some Union units wore grey. There naturally was confusion, and sometimes soldiers fired by mistake at soldiers of their own army. On June 6, 1861, the Confederate government issued regulations standardizing the army uniform and establishing cadet grey as the uniform color. This was (and still is) the color of the uniform of cadets at the United States Military Academy at West Point, and cadets at the Virginia Military Institute, which produced many officers for the Confederacy.
The new uniforms were designed by Nicola Marschall, a German-American artist, who also designed the original Confederate flag. He closely followed the design of contemporary French and Austrian military uniforms. Grey was not chosen for its camouflage value; this benefit was not appreciated for several more decades. The South lacked a major dye industry, though, and grey dyes were inexpensive and easy to manufacture. While some units had uniforms colored with good-quality dyes, which were a solid bluish-grey, others had uniforms colored with vegetable dyes made from sumac or logwood, which quickly faded in sunshine to the yellowish color of butternut squash.
The German Army wore grey uniforms from 1907 until 1945, during both the First World War and Second World War. The color chosen was a grey-green called field grey (Feldgrau). It was chosen because it was less visible at a distance than the previous German uniforms, which were Prussian blue. It was one of the first uniform colors to be chosen for its camouflage value, important in the new age of smokeless powder and more accurate rifles and machine guns. It gave the Germans a distinct advantage at the beginning of the First World War, when the French soldiers were dressed in blue jackets and red trousers. The Finnish Army also began using grey uniforms on the German model.
Some of the more recent uniforms of the German Army and East German Army were field grey, as were some uniforms of the Swedish army. The formal dress (M/83) of the Finnish Army is grey. The Army of Chile wears field grey today.
The grey suit
During the 19th century, women's fashions were largely dictated by Paris, while London set fashions for men. The intent of a business suit was above all to show seriousness, and to show one's position in business and society. Over the course of the century, bright colors disappeared from men's fashion, and were largely replaced by a black or dark charcoal grey frock coat in winter, and lighter greys in summer. In the early 20th century, the frock coat was gradually replaced by the lounge suit, a less formal version of evening dress, which was also usually black or charcoal grey. In the 1930s the English suit style was called the drape suit, with wide shoulders and a nipped waist, usually dark or light grey. After World War II, the style changed to a slimmer fit called the continental cut, but the color remained grey.
Sports
In baseball, grey is the color typically used for road uniforms. This came about because in the 19th and early 20th century, away teams did not normally have access to laundry facilities on the road, thus stains were not noticeable on the darker grey uniforms as opposed to the white uniforms worn by the home team.
The Vegas Golden Knights of the National Hockey League features steel grey as its primary color and its current alternate uniforms are steel grey.
New Caledonia national football teams have worn grey home shirts and the color is featured on its football badge.
Georgetown University's basketball teams traditionally wear grey uniforms at home.
Gay culture
In gay slang, a grey queen is a gay person who works for the financial services industry. This term originates from the fact that in the 1950s, people who worked in this profession often wore grey flannel suits.
Associations and symbolism
In America and Europe, grey is one of the least popular colors; in a European survey, only one percent of men said it was their favorite color, and thirteen percent called it their least favorite color; the response from women was almost the same. According to color historian Eva Heller, "grey is too weak to be considered masculine, but too menacing to be considered a feminine color. It is neither warm nor cold, neither material or spiritual. With grey, nothing seems to be decided." It also denotes undefinedness and ambiguity, as in a grey area.
Grey is the color most commonly associated in many cultures with the elderly and old age, because of the association with grey hair; it symbolizes the wisdom and dignity that come with experience and age. The New York Times is sometimes called The Grey Lady because of its long history and esteemed position in American journalism.
Grey is the color most often associated in Europe and America with modesty.
See also
Shades of grey
Black
Black-and-white
Eigengrau
List of colors
Vin gris (grey wine in French)
White
References
Bibliography
Color
Optical spectrum
Web colors | Grey | Physics | 4,042 |
40,407,379 | https://en.wikipedia.org/wiki/Birectified%2016-cell%20honeycomb | In four-dimensional Euclidean geometry, the birectified 16-cell honeycomb (or runcic tesseractic honeycomb) is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space.
Symmetry constructions
There are 3 different symmetry constructions, all with 3-3 duoprism vertex figures. The symmetry doubles on in three possible ways, while contains the highest symmetry.
Related honeycombs
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
16-cell honeycomb
24-cell honeycomb
Rectified 24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Notes
References
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
x3o3x *b3x *b3o, x3o3o *b3x4o, o3o3x4o3o - bricot - O106
Honeycombs (geometry)
5-polytopes | Birectified 16-cell honeycomb | Physics,Chemistry,Materials_science | 346 |
73,636,178 | https://en.wikipedia.org/wiki/Modifiable%20temporal%20unit%20problem | The Modified Temporal Unit Problem (MTUP) is a source of statistical bias that occurs in time series and spatial analysis when using temporal data that has been aggregated into temporal units. In such cases, choosing a temporal unit (e.g., days, months, years) can affect the analysis results and lead to inconsistencies or errors in statistical hypothesis testing.
Background
The MTUP is closely related to the modifiable areal unit problem or MAUP, in that they both relate to the scale of analysis and the issue of choosing an appropriate unit of analysis. While the MAUP refers to the choice of spatial enumeration units, the MTUP arises because different temporal units have different properties and characteristics, such as the number of periods they contain or the amount of detail they provide. For example, daily sales data for a product can be aggregated into weekly, monthly, or yearly sales data. In this case, using monthly data instead of daily data can result in losing important information about the timing of events, and using yearly data can obscure short-term trends and patterns. However, the daily data in the example may have too much noise or temporal autocorrelation, or be inconsistent with other datasets. With only daily data, it would not be possible to conduct an accurate analysis at an hourly resolution. In addition, the modifiable temporal unit problem can also arise when the time units are irregular or when data is missing for some periods. In such cases, the choice of the time unit can affect the amount of missing data, which can impact the accuracy of the analysis and forecasting.
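A minimal sketch of such re-aggregation using pandas (the daily sales figures are invented for illustration):

import pandas as pd

daily = pd.Series([12, 15, 9, 22, 18, 7, 30],
                  index=pd.date_range("2023-01-02", periods=7, freq="D"))

weekly = daily.resample("W").sum()    # coarser temporal unit: weekly totals
# The weekly total no longer shows on which day the spike of 30 occurred,
# illustrating the information loss described above.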
Overall, the Modifiable Temporal Unit Problem highlights the importance of carefully considering the time unit when analyzing and forecasting time series data. It is often necessary to try different time units and evaluate the results to determine the most appropriate choice.
Temporal autocorrelation
Temporal autocorrelation refers to the degree of correlation or similarity between values of a variable at different time points. It examines how a variable's past values are related to its current values over a sequence of time intervals. High temporal autocorrelation implies that past observations influence future observations, while low autocorrelation suggests that current values are independent of past values. This concept is often used in time series analysis to understand patterns, trends, and dependencies within a time-ordered dataset, helping to make predictions and infer the underlying dynamics of a system over time. By adjusting the temporal unit used to bin the data in the analysis, temporal autocorrelation can be addressed.
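As a sketch, pandas exposes lagged autocorrelation directly, and re-binning the same values into a coarser unit generally changes the result (numbers invented):

import pandas as pd

values = pd.Series([12, 15, 9, 22, 18, 7, 30, 25, 11, 16])
lag1 = values.autocorr(lag=1)                       # day-to-day dependence
coarser = values.groupby(values.index // 2).sum()   # pairwise bins: a coarser unit
lag1_coarse = coarser.autocorr(lag=1)               # typically differs from lag1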
Implications
Crime
The impact of MTUP on crime analysis can be significant, as it can affect the accuracy and reliability of crime data and its conclusions about crime patterns and trends. For example, suppose the temporal unit of analysis is changed from days to weeks. In that case, the number of reported crimes may decrease or increase, even if the underlying pattern remains constant. This can lead to incorrect conclusions about the effectiveness of crime prevention strategies or the overall level of crime in a given area.
Food accessibility
The MTUP can also have an impact on food accessibility. This issue arises when the temporal unit of analysis is changed, leading to changes in the patterns and trends observed in food accessibility data. For example, if food accessibility data is analyzed from different years or aggregated differently, then the results of a study are likely to be impacted. This can affect our understanding of the availability of food in different areas over time, and can result in incorrect or incomplete conclusions about food accessibility.
Epidemiology
The MTUP can affect our understanding of the incidence and prevalence of diseases or health outcomes in different populations over time, resulting in incorrect or incomplete conclusions about the public health situation. The timeframe chosen for collecting and analyzing public health data is something that needs to be considered by researchers.
Suggested solutions
To address the MTUP, it is important to consider the temporal resolution of the data and choose the most appropriate temporal unit based on the research question and the goals of the analysis. In some cases, it may be necessary to aggregate or interpolate the data to a consistent temporal unit. Additionally, it may be helpful to use multiple temporal units or to present results for different temporal units to demonstrate the sensitivity of the results to the choice of temporal unit.
See also
Arbia's law of geography
Boundary problem (spatial analysis)
Coastline paradox
Concepts and Techniques in Modern Geography
Chronology
Ecological fallacy
Facility location problem
Geographic information systems
Historical GIS
Neighborhood effect averaging problem
Torsten Hägerstrand
Spatial epidemiology
Technical geography
Time geography
Timestamp
Tobler's first law of geography
Tobler's second law of geography
Uncertain geographic context problem
References
Bias
Geographic information systems
Problems in spatial analysis | Modifiable temporal unit problem | Technology | 951 |
15,315,808 | https://en.wikipedia.org/wiki/Phytosome | A phytosome is a complex formed by a natural active ingredient and a phospholipid. The most common example of a phytosome is Lecithin.
Phytosomes are claimed to enhance the absorption of "conventional herbal extracts" or isolated active principles, both topically and orally.
Complexation with phospholipids has been applied to a number of popular herbal extracts and active molecules including Ginkgo biloba extract, bilobalide isolated from Ginkgo biloba, silybin isolated from milk thistle (Silybum marianum), curcumin isolated from turmeric, and green tea extract (Camellia sinensis).
An attempt to trademark the term in the USA failed on appeal. Legal analysis in the USA concluded, "Applicant's fatal error, according to the Board, was in using the term as the sole designation for its new product."
At least one dictionary defined it as "a new term cosmetologists are using for the combination of liposomes ... and plant extracts."
Nevertheless, Phytosome - along with Meriva - is a registered trademark of Indena S.p.A. in major countries.
Footnotes
Phospholipids | Phytosome | Chemistry | 264 |
43,320,329 | https://en.wikipedia.org/wiki/Single%20instruction%2C%20multiple%20threads | Single instruction, multiple threads (SIMT) is an execution model used in parallel computing where single instruction, multiple data (SIMD) is combined with multithreading. It is different from SPMD in that all instructions in all "threads" are executed in lock-step. The SIMT execution model has been implemented on several GPUs and is relevant for general-purpose computing on graphics processing units (GPGPU), e.g. some supercomputers combine CPUs with GPUs.
The processors, say a number p of them, seem to execute many more than p tasks. This is achieved by each processor having multiple "threads" (or "work-items" or "sequences of SIMD lane operations"), which execute in lock-step, and are analogous to SIMD lanes.
The simplest way to understand SIMT is to imagine a multi-core system, where each core has its own register file, its own ALUs (both SIMD and Scalar) and its own data cache, but that unlike a standard multi-core system which has multiple independent instruction caches and decoders, as well as multiple independent Program Counter registers, the instructions are synchronously broadcast to all SIMT cores from a single unit with a single instruction cache and a single instruction decoder which reads instructions using a single Program Counter.
The key difference between SIMT and SIMD lanes is that each of the SIMT cores may have a completely different Stack Pointer (and thus perform computations on completely different data sets), whereas SIMD lanes are simply part of an ALU that knows nothing about memory per se.
History
SIMT was introduced by Nvidia in the Tesla GPU microarchitecture with the G80 chip. ATI Technologies, now AMD, released a competing product slightly later on May 14, 2007, the TeraScale 1-based "R600" GPU chip.
Description
As access time of all the widespread RAM types (e.g. DDR SDRAM, GDDR SDRAM, XDR DRAM, etc.) is still relatively high, engineers came up with the idea to hide the latency that inevitably comes with each memory access. Strictly, the latency-hiding is a feature of the zero-overhead scheduling implemented by modern GPUs. This might or might not be considered to be a property of 'SIMT' itself.
SIMT is intended to limit instruction fetching overhead, i.e. the latency that comes with memory access, and is used in modern GPUs (such as those of Nvidia and AMD) in combination with 'latency hiding' to enable high-performance execution despite considerable latency in memory-access operations. This is where the processor is oversubscribed with computation tasks, and is able to quickly switch between tasks when it would otherwise have to wait on memory. This strategy is comparable to multithreading in CPUs (not to be confused with multi-core). As with SIMD, another major benefit is the sharing of the control logic by many data lanes, leading to an increase in computational density. One block of control logic can manage N data lanes, instead of replicating the control logic N times.
A downside of SIMT execution is the fact that thread-specific control-flow is performed using "masking", leading to poor utilization where a processor's threads follow different control-flow paths. For instance, to handle an IF-ELSE block where various threads of a processor execute different paths, all threads must actually process both paths (as all threads of a processor always execute in lock-step), but masking is used to disable and enable the various threads as appropriate. Masking is avoided when control flow is coherent for the threads of a processor, i.e. they all follow the same path of execution. The masking strategy is what distinguishes SIMT from ordinary SIMD, and has the benefit of inexpensive synchronization between the threads of a processor.
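A toy Python simulation of this masking, with four lock-step "threads" (data values invented). Both branches are executed by every thread; the mask gates which results are committed:

data = [3, -1, 4, -5]             # one element per SIMT lane/thread
mask = [x >= 0 for x in data]     # threads taking the IF branch

# IF branch: computed by all threads, committed only where mask is True
if_result = [x * 2 for x in data]
data = [r if m else x for r, m, x in zip(if_result, mask, data)]

# ELSE branch: computed by all threads, committed where mask is False
else_result = [-x for x in data]
data = [x if m else r for r, m, x in zip(else_result, mask, data)]
assert data == [6, 1, 8, 5]       # utilization was halved, not correctness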
See also
General-purpose computing on graphics processing units (GPGPU)
References
Classes of computers
Computer architecture
GPGPU
Parallel computing
SIMD computing
Threads (computing) | Single instruction, multiple threads | Technology,Engineering | 856 |
20,791,131 | https://en.wikipedia.org/wiki/Dragon%20kill%20points | Dragon kill points or DKP are a semi-formal score-keeping system (loot system) used by guilds in massively multiplayer online games. Players in these games are faced with large scale challenges, or raids, which may only be surmounted through the concerted effort of dozens of players at a time. While many players may be involved in defeating a boss, the boss will reward the group with only a small number of items desired by the players. Faced with this scarcity, some system of fairly distributing the items must be established. Used originally in the massively multiplayer online role-playing game EverQuest, dragon kill points are points that are awarded to players for defeating bosses and redeemed for items that those bosses would "drop". At the time, most of the bosses faced by the players were dragons, hence the name.
While not transferable outside of a particular guild, DKP are often treated in a manner similar to currency by guilds. They are paid out at a specified rate and redeemed in modified first or second price auctions, although these are not the only methods by which DKP may be redeemed or awarded. However, Dragon kill points are distinct from the virtual currencies in each game world which are designed by the game developers; DKP systems vary from guild to guild and the points themselves only have value in regard to the dispersal of boss "loot".
Origin and motivation
DKP systems were first designed for EverQuest in 1999 by Thott as part of the creation of a guild called "Afterlife" and named for two dragons, Lady Vox and Lord Nagafen. Since then, the system has been adapted for use in other similar online games; in World of Warcraft, for example, an avatar named Dragonkiller started its popular use, and other programmers designed in-game applications to track the data for achievements made. Unlike pen and paper or more traditional role-playing video games, massively multiplayer online games could present challenges so significant that the number of players required to defeat them would greatly exceed the number of items awarded to the raid following the boss kill—a raid of 25 individuals may only see two or three items "drop". The actual number of players required to defeat a specific boss varies from game to game, but the person-hours invested are non-trivial. Raid encounters may involve "10-200 players organized to achieve a common goal over a period of typically around 3-6 continuous hours" and demand teamwork and competence from all raid members.
As the number of players required to defeat a boss grows, so does the problem of distributing the rewards from such efforts. Since these items appear, or "drop", in quantities much smaller than the total number of players in the group required to defeat them, a means of deciding which of the players should receive the items is necessary. At the "endgame", new items rewarded from boss kills represent one of the only means to continue to enhance the combat effectiveness of the character or the social standing of the player. As such, individual players care about receiving a fair shot at dropped items. Guilds facing smaller challenges with fewer players typically begin by allotting items through a simulated roll of the dice (provided by the software serving the game itself), similar to dice rolls used to dictate the outcome of contingent events in pen and paper role-playing games. As the number of players expands, rolls may be weighted by seniority within the guild or adjusted by some other measure so as to ensure that veterans of the guild do not lose out on an item to a new member. Games and dungeons which require larger groups of players may create the incentive for more formal DKP systems. Methods to reward items according to seniority or performance developed out of these modifications, including systems relying on a formal allotment of points per kill.
Mechanics of a DKP system
The basic concepts of most DKP systems are simple. Players are given points for participating in raids or other guild events and spend those points on the item of their choice when the boss 'drops' the item. A player who does not get a chance to spend their DKP accumulates it between raids and is able to spend it in the future. These points, while earned and spent like currency, are not the same thing as the virtual currency provided by the game company for the virtual world. The points themselves represent only the social contract that guilds extend to players. Should that player leave the guild or the guild disband, those points become valueless. These measures vary considerably in usage. Some guilds eschew formalized 'loot' systems completely, allowing guild leaders to direct which players receive items from bosses. Some use complex measures to determine item price while others use an auction system to allocate goods via bidding. A few common variations are described below.
Zero-sum DKP
Zero-sum DKP systems are designed to ensure the net change in points among the raid is zero for each item dropped, as the name might suggest. When the item drops, each player who is interested in it indicates as much to a guild leader. The player who has the highest DKP total receives the item for its specified price and the same number of points are divided evenly among the rest of the raid and given out, resulting in no net change to the raid total. As a result, the raid would only be rewarded DKP if at least one player desired the item dropped by the boss. Since over time guilds will revisit the same boss multiple times, some zero-sum DKP systems are modified to introduce a "dummy" character which may be awarded DKP for the boss "kill" even though no player in the guild received an item. This is purely an accounting measure and allows the guild to reward players for defeating a boss if they are using an automated point tracking system.
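A toy sketch of the accounting (names and prices invented):

dkp = {"alys": 40, "bren": 25, "cato": 25, "dara": 10}

def award_item(dkp, winner, price):
    # Winner pays the item price; the same number of points is split
    # evenly among the other raiders, so the raid total is unchanged.
    others = [p for p in dkp if p != winner]
    dkp[winner] -= price
    for p in others:
        dkp[p] += price / len(others)

total_before = sum(dkp.values())
award_item(dkp, "alys", 30)
assert sum(dkp.values()) == total_before  # net change is zero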
Simple DKP
The simplest DKP variation is one where every item has a set price list and each player earns some specified number of DKP each time they participate in a guild raid. Like zero-sum systems, the player with the most points recorded actually received the item, paying the specified price. Unlike zero-sum, a simple DKP system does not compensate the rest of the raid based in the value of the items received.
Auction systems
Setting "prices" in DKP for specific items can be difficult, as analysis of a particular item can be subjective and laborious. In order to avoid this quandary, guilds may establish an auction system for items. Points are awarded to the player at some specified rate and when the items are awarded to the raid group, players bid DKP for the item of their choice. Auctions may be conducted in an open ascending fashion or through sealed bids over private messages to guild leaders. While this process results in relatively efficient allocation of items to players willing to part with DKP, it presents the social consequence that perceived selfish bidding could result in an item being awarded to a character who would not make the best use of it.
GDKP (Gold DKP)
Gold DKP (GDKP) is a system developed for pick-up groups (PUGs). The system was introduced to allow individuals without a guild to take part in raids for difficult bosses/zones. In GDKP, when a boss is killed, each item dropped is put up for auction with a low starting value. Each item is then auctioned. The eventual winner pays the loot master, and after every item has been auctioned off, every participant in the group is rewarded an equal share of gold. For example, if 20 members were in the group, and 500 gold was spent on items, each raid participant would receive 25 gold.
While in itself a purely in-game transaction, the GDKP system garnered controversy due to its connection to gold farming and real-money transactions. GDKP both introduces significant demand for in-game gold at max level and launders gold that was bought for real money by distributing it among a raid.
DKP as virtual capital
Since the intention of DKP is to allocate scarce resources amongst guild members, they can be understood in the context of virtual capital. Players "earn" and "spend" DKP, bidding in a system of auctions for an item which holds some value for them. DKP are referred to as the "currency" with which a guild leader pays his "employees". Despite these analogies, DKP remain a kind of "private money system", allowing guilds to mete out these otherwise unachievable items in return for participation and discipline. The points cannot be traded or redeemed outside the guild and are not actually part of the game itself; they are tracked on external websites. In contrast, the virtual currencies created by game developers are part of the game software and may be traded between players without respect to any social affiliation. Just as DKP is valueless outside the guild, parlaying of economic capital for DKP (paying real-world currency in exchange for DKP) is almost unheard of. Because guilds mete out DKP in return for participation in events, the functional result is that DKP serve less as currency or material capital and more as what Torill Mortensen refers to as a "social stabilizer"; players who attend raids more frequently or play by the rules reap the rewards while more "casual" gamers do not. This provides an incentive for players to remain in the social system (the guild) longer than they might otherwise.
Within the guild, DKP may stand in for competence—high level items (Krista-Lee Malone mentions a specific item from World of Warcraft, the "Cold Snap" wand) are forms of cultural capital themselves. Since the items are "bound" to the player who first receives them, the only way to wield a desired item is to be involved in the raid that defeated the boss which rewards it. As such, a "Cold Snap" represents a signal to other players that the bearer has defeated a particular high-level monster and therefore mastered the skills needed to do so. The points themselves represent a mélange of cultural and material capital. The language of material capital is used: "price", "bid", and "currency", but these terms belie a unit of account that "crosses the line between material and symbolic".
Notes
External links
Massively multiplayer online role-playing games | Dragon kill points | Technology | 2,170 |
41,660,263 | https://en.wikipedia.org/wiki/Critical%20making | Critical making refers to the hands-on productive activities that link digital technologies to society.
It was invented to bridge the gap between creative, physical, and conceptual exploration. The purpose of critical making resides in the learning extracted from the process of making rather than the experience derived from the finished output. The term "critical making" was popularized by Matt Ratto, an associate professor at the University of Toronto. Ratto describes one of the main goals of critical making as a way "to use material forms of engagement with technologies to supplement and extend critical reflection and, in doing so, to reconnect our lived experiences with technologies to social and conceptual critique." "Critical making", as defined by practitioners like Matt Ratto and Stephen Hockema, "is an elision of two typically disconnected modes of engagement in the world — "critical thinking," often considered as abstract, explicit, linguistically based, internal and cognitively individualistic; and "making," typically understood as tacit, embodied, external, and community-oriented."
History of Critical Making
Matt Ratto and Critical Making
Matt Ratto coined the term in 2008 to describe his workshop activities that linked conceptual reflection and technical making. This concept explores how learning is influenced by the learner's participation in creating and/or making things within a technological context. Ratto's first publication to use the term appeared in 2009. Ratto claims that his goal was to connect the conceptual understanding of technology in social life to materialized activities. By situating himself within the area of "design-oriented research" rather than "research-oriented design," Ratto believes that critical making enhances the shared experience in both theoretical and practical understandings of critical socio-technical issues. However, critical making should not be viewed as design, but rather as a type of practice. The quality of a critical making lab is evaluated based on the physical "making" process, regardless of the quality of the final material production. Prior studies have noted the separation between critical thinking and physical "making": experts in technology often lack knowledge of art, and vice versa. It is therefore important that technology be embedded in a context rather than being left in isolation, especially when it comes to critical making.
The Critical Making Lab was founded by Matt Ratto in the Faculty of Information, University of Toronto. The Critical Making Lab provides participants tools and basic knowledge of digital technology used in critical making. The mission of the lab is to enhance collaboration, communication, and practice-based engagement in critical making.
The main focus of critical making is open design, which develops a critical perspective on the current institutions, practices, and norms of society, reconnecting materiality and morality. Matt Ratto introduces critical making as a process of material and conceptual exploration and the creation of novel understandings by the makers themselves. Critical making involves both digital software and hardware: typically open platforms such as the Raspberry Pi or Arduino, together with a computer or any other device that facilitates an operation.
Eric Paulos and Critical Making
In 2012, Eric Paulos launched Critical Making as a studio course at UC Berkeley. This Critical Making course was designed to operationalize and critique the practice of “making” through both foundational literature and hands-on studio culture. As hybrid practitioners, students develop fluency in readily collaging and incorporating a variety of physical materials and protocols into their practice. With design research as a lens, students envision and create future computational experiences that critically explore social and culturally relevant technological themes such as community, privacy, environment, education, economics, energy, food, biology, democracy, activism, healthcare, social justice, etc. The course has been offered continuously since 2012 and featured publicly at showcases and exhibitions, including Maker Faire as well as other public venues. Selected projects are archived online on the course website.
Garnet Hertz and Critical Making
In 2012, Garnet Hertz adopted the term for a series of ten handmade booklets titled "Critical Making". The series explores how hands-on productive work ‐ making ‐ can supplement and extend critical reflection on technology and society. It works to blend and extend the fields of design, contemporary art, DIY/craft, and technological development. In this project, 70 different authors - including Norman White, Julian Bleecker, Dunne & Raby, Daniel Charny, Albert Borgmann, Golan Levin, Matt Ratto, Natalie Jeremijenko, McKenzie Wark, Paul Dourish, Mitch Altman, Dale Dougherty, Mark Pauline, Scott Snibbe, Reed Ghazala and others - reflected on the term and on critical responses to the maker movement. Generally speaking, Hertz's use of the term critical making is focused on studio production and the creation of objects as "things to think with".
Hertz's project consisted of academic papers, detailed technical projects, interviews, and documented pieces of artwork. He then categorized the information into specific topics, thereby producing multiple booklets. The booklet itself is a testament to critical making. It was printed using a hacked photocopier, and roughly 100,000 pages were manually folded and stapled to create 300 copies of 10 booklets each. The publication asks us to look at aspects of the DIY culture that go beyond buying an Arduino, getting a 3D printer, and doing DIY projects as a weekend hobby. These books embrace social issues, the history of technology, activism, and politics. The project also stemmed from a specific disappointment of Make partnering with the US military through DARPA funding in 2012. Many opposed this move, including Mitch Altman, and Hertz's project worked to explore the mixture of making, technology, politics and ethics - as well as bringing the fields of critical design and media arts into conversation with maker culture.
In 2014, Hertz founded "The Studio for Critical Making" at Emily Carr University of Art and Design as Canada Research Chair in Design and Media Arts. The facility "explores how humanities-based modes of critical inquiry – like the arts and ethics – can be directly applied to building more engaging product concepts and information technologies. The lab works to replace the traditional engineering goals of efficiency, speed, or usability with more complex cultural, social, and human-oriented values. The end result is a technology that is more culturally relevant, socially engaged, and personalized."
Other uses of Critical Making
In 2012, John Maeda began using the term while at the Rhode Island School of Design (RISD): first as a title for their strategic plan for 2012-2017 and next as part of the title of an edited collection titled "The Art of Critical Making: Rhode Island School of Design on Creative Practice" published by John Wiley & Sons, Inc. Other individuals who use the term critical making to orient their work include Amaranth Borsuk (University of Washington-Bothell), Jentery Sayers (University of Victoria), Roger Whitson (Washington State University), Kari Kraus (University of Maryland), Amy Papaelias (SUNY-New Paltz), and Jessica Barness (Kent State University).
Nancy Mauro-Flude and Tactical Magick Faerie Circuits
'Networked Art Forms and Tactical Magick Faerie Circuits' (NAF:TMFC) was a series of events inspired by critical making and computer subculture, curated and devised by Nancy Mauro-Flude in collaboration with others, in lutruwita Tasmania, at Contemporary Art Tasmania (CAT), and on the Internet, from 31 May to 30 June 2013. A cohort of artists, programmers and thinkers from the frontline of the critical maker aesthetic took part, remotely and locally, linking through live streaming of artist talks, workshops, exhibitions, performances, podcasts, interview broadcasts, and Internet relay chat, successively compiled into an online archive.
It was a durational event that included symposia, exhibitions, performances, workshops and social gatherings. Collaborating with community station Edge Radio and Island Magazine, NAF:TMFC was also a satellite program of the 19th International Symposium on Electronic Art (ISEA). It brought together leading Australian and international artists and educators whose work responds to the emergent conditions of a networked world, a realm increasingly transmitted through fibre and code, including Matthew Fuller; Florian Cramer; Josephine Bosma; Mez Breeze; Julian Oliver and Danja Vasiliev from the Critical Engineering working group; Constant Dullaart; Jeff Malpas; Doll Yoko; Linda Dement; and Rosa Menkman, who mentored local artists and designers invited to respond to the work and ideas generated through the project. These responses formed the closing exhibition, programmed as Notorious R + D as part of the inaugural Dark Mofo festival. The event received critical acclaim such as: "Perhaps it is appropriate that I review NAF:TMFC because it sits just outside my comfort zone....the heart, the ritual, the idea of frailty, are returned to technology, seem to capture the spirit of the endeavour. They hold something I can grasp or identify with. And with time, exploring the exhibition in a near empty gallery, I see that the collected works cultivate potential for play and discovery—qualities that chip away at the white noise and allow for a singular experience."
Participants were encouraged to adopt a feminist and holistic approach to digital literacy, using various aesthetic tools and means to explore systems through critical making and experiential prototyping that enable insightful experiences in an increasingly data driven existence.
Concepts Related to Critical Making
DIY and critical making
Traditional DIY is criticized for its costs and standards. DIY products are difficult to spread in lower-income areas, where issues of cost and ease are more commonly cited (Williams, 276). DIY is not only a lifestyle choice but also a technological practice. "DIY activity is not, for example, seen as a coping practice used by those unable to afford to externalize the activity to formal firms and/or self-employed individuals. Instead, and reflecting the broader cultural turn in retail studies, their explanation for engagement in DIY is firmly grounded in human agency" (Williams, 273).
Speculative Design and Critical Making
According to DiSalvo and Lukens, "Speculative design is an approach to design that emphasizes inquiry, experimentation, and expression, over usability, usefulness or desirability. A particular characteristic of speculative design is that it tends to be future-oriented." However, this future orientation should not be mistaken for fantasy, in the sense of being "unreal" and therefore dismissible (DiSalvo and Lukens, 2009).
The term speculative design encompasses practices from various disciplines, including visionary or futurist forms of architecture, design fiction, and critical design or design for debate, rather than referring to a specific movement or style. More than just diagrams of unbuilt structures, speculative design aims to explore the space of interaction between culture, technology, and the built environment (Lukens and DiSalvo, 2012, p. 25). Practitioners of speculative design engage in design as a sort of provocation, one that asks uncomfortable questions about the long-term implications of technology. These practices also integrate pairs of concerns that are traditionally separate, such as fact and fiction, science and art, and commerce and academia. This provocation extends to questions about design itself.
See also
Adversarial design
Critical technical practice
Critical thinking
Critical design
Speculative design
Maker culture
Technology
Arduino
3D Printing
References
External links
Arduino
Open Design Now
Raspberry Pi or Arduino
Critical Making - Paulos Syllabus (Berkeley)
Critical Making - Hertz (2012)
The Studio for Critical Making (Emily Carr University of Art and Design)
Nancy Mauro-Flude: Interrupts aren’t hidden Experiential Prototyping
Design
Hacker culture | Critical making | Engineering | 2,407 |
28,153,829 | https://en.wikipedia.org/wiki/Rectilinear%20minimum%20spanning%20tree | In graph theory, the rectilinear minimum spanning tree (RMST) of a set of n points in the plane (or more generally, in ℝ^d) is a minimum spanning tree of that set, where the weight of the edge between each pair of points is the rectilinear distance between those two points.
Properties and algorithms
By explicitly constructing the complete graph on n vertices, which has n(n−1)/2 edges, a rectilinear minimum spanning tree can be found using existing algorithms for finding a minimum spanning tree. In particular, using Prim's algorithm with an adjacency matrix yields time complexity O(n²).
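The following is a minimal sketch of this quadratic-time approach (helper names are illustrative, not from any cited implementation): the complete graph is left implicit, and edge weights are computed on demand from the L1 metric.

```python
# O(n^2) Prim's algorithm for the rectilinear MST on an implicit
# complete graph; points are (x, y) tuples. Illustrative sketch only.
def rectilinear_mst(points):
    n = len(points)
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # L1 metric
    in_tree = [False] * n
    best = [float("inf")] * n    # cheapest known connection to the tree
    parent = [-1] * n
    best[0] = 0
    edges = []
    for _ in range(n):
        # pick the cheapest point not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
        # relax connections through the newly added point
        for v in range(n):
            if not in_tree[v] and dist(points[u], points[v]) < best[v]:
                best[v] = dist(points[u], points[v])
                parent[v] = u
    return edges

print(rectilinear_mst([(0, 0), (2, 1), (1, 3)]))  # [(0, 1), (1, 2)]
```

For the planar case discussed next, the octant observation cuts the candidate edge set down to O(n) before any MST algorithm is run.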
Planar case
In the planar case, more efficient algorithms exist. They are based on the idea that connections may only happen with the nearest neighbour of a point in each octant - that is, each of the eight regions of the plane delimited by the coordinate axes through this point and their bisectors.
The resulting graph has only a linear number of edges and can be constructed in O(n log n) time using a divide and conquer algorithm or a sweep line algorithm.
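As a small illustration of the octant decomposition (the exact boundary conventions vary between formulations; this version is only a sketch), the direction from a point p to a candidate neighbour q can be classified as follows:

```python
# Classify the direction from p to q into one of eight octants bounded
# by the axes through p and the bisector lines y = +/- x through p.
def octant(p, q):
    dx, dy = q[0] - p[0], q[1] - p[1]
    if dx >= 0 and dy >= 0:
        return 0 if dx >= dy else 1   # first quadrant, split by y = x
    if dx < 0 and dy >= 0:
        return 2 if -dx <= dy else 3
    if dx < 0 and dy < 0:
        return 4 if -dx >= -dy else 5
    return 6 if dx <= -dy else 7

print(octant((0, 0), (3, 1)))  # 0: a mostly-eastward direction
```

Keeping only the nearest neighbour in each of the eight classes yields the linear-size candidate edge set mentioned above.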
Applications
Electronic design
The problem commonly arises in physical design of electronic circuits. In modern high-density integrated circuits wire routing is performed by wires which consist of segments running horizontally in one layer of metal and vertically in another metal layer. As a result, the wire length between two points is naturally measured with rectilinear distance. Although the routing of a whole net with multiple nodes is better represented by the rectilinear Steiner tree, the RMST provides a reasonable approximation and wire length estimate.
See also
Euclidean minimum spanning tree
References
Computational geometry
Geometric graphs
Spanning tree | Rectilinear minimum spanning tree | Mathematics | 346 |
47,082,927 | https://en.wikipedia.org/wiki/Nokia%20105%20%282015%29 | The Nokia 105 (2015) and Nokia 105 Dual SIM (2015) are Nokia-branded feature phones originally developed by Microsoft Mobile. The phones were originally released on 3 June 2015 as a revival of the original Nokia 105 (released in 2013), and were later sold again by HMD Global. The Nokia 105 (2015) has one SIM card slot, while the Nokia 105 Dual SIM (2015) has two. The selectable colours are black, white and cyan.
References
External links
Nokia 105 (2015) Specs - GSMArena
Nokia 105 Dual SIM (2015) Specs - GSMArena
105 (2015)
Microsoft hardware
Mobile phones with user-replaceable battery
Mobile phones introduced in 2015 | Nokia 105 (2015) | Technology | 143 |
61,669,935 | https://en.wikipedia.org/wiki/Imd%20pathway | The Imd pathway is a broadly-conserved NF-κB immune signalling pathway of insects and some arthropods that regulates a potent antibacterial defence response. The pathway is named after the discovery of a mutation causing severe immune deficiency (the gene was named "Imd" for "immune deficiency"). The Imd pathway was first discovered in 1995 using Drosophila fruit flies by Bruno Lemaitre and colleagues, who also later discovered that the Drosophila Toll gene regulated defence against Gram-positive bacteria and fungi. Together the Toll and Imd pathways have formed a paradigm of insect immune signalling; as of September 2, 2019, these two landmark discovery papers had been cited collectively over 5000 times on Google Scholar.
The Imd pathway responds to signals produced by Gram-negative bacteria. Peptidoglycan recognition proteins (PGRPs) sense DAP-type peptidoglycan, which activates the Imd signalling cascade. This culminates in the translocation of the NF-κB transcription factor Relish, leading to production of antimicrobial peptides and other effectors. Insects lacking Imd signalling either naturally or by genetic manipulation are extremely susceptible to infection by a wide variety of pathogens and especially bacteria.
Similarity to human pathways
The Imd pathway bears a number of similarities to mammalian TNFR signalling, though many of the intracellular regulatory proteins of Imd signalling also bear homology to different signalling cascades of human Toll-like receptors.
Similarity to TNFR signalling
The following genes are analogous or homologous between Drosophila melanogaster (listed first) and human TNFR1 signalling:
Imd: human orthologue = RIP1
Tak1: human orthologue = Tak1
TAB2: human orthologue = TAB2
Dredd: human orthologue = caspase-8
FADD: human orthologue = FADD
Key/Ikkγ: human orthologue = NEMO
Ird5: human orthologue = IKK2
Relish: human orthologues = p65/p50 and IκB
Iap2: human orthologue = cIAP2
UEV1a: human orthologue = UEV1a
bend: human orthologue = UBC13
In Drosophila
While the exact epistasis of Imd pathway signalling components is continually scrutinized, the mechanistic order of many key components of the pathway is well-established. The following sections discuss Imd signalling as it is found in Drosophila melanogaster, where it is exceptionally well-characterized. Imd signalling is activated by a series of steps from recognition of a bacterial substance (e.g. peptidoglycan) to the transmission of that signal leading to activation of the NF-κB transcription factor Relish. Activated Relish then forms dimers that move into the nucleus and bind to DNA leading to the transcription of antimicrobial peptides and other effectors.
Peptidoglycan recognition proteins (PGRPs)
The sensing of bacterial signals is performed by peptidoglycan recognition protein LC (PGRP-LC), a transmembrane protein with an intracellular domain. Binding of bacterial peptidoglycan leads to dimerization of PGRP-LC which generates the conformation needed to bind and activate the Imd protein. However alternate isoforms of PGRP-LC can also be expressed with different functions: PGRP-LCx recognizes polymeric peptidoglycan, while PGRP-LCa does not bind peptidoglycan directly but acts alongside PGRP-LCx to bind monomeric peptidoglycan fragments (called tracheal cytotoxin or "TCT"). Another PGRP (PGRP-LE) also acts intracellularly to bind TCT that has crossed the cell membrane or is derived from an intracellular infection. PGRP-LA promotes the activation of Imd signalling in epithelial cells, but the mechanism is still unknown.
Other PGRPs can inhibit the activation of Imd signalling by binding bacterial signals or inhibiting host signalling proteins: PGRP-LF is a transmembrane PGRP that lacks an intracellular domain and does not bind peptidoglycan. Instead PGRP-LF forms dimers with PGRP-LC preventing PGRP-LC dimerization and consequently activation of Imd signalling. A number of secreted PGRPs have amidase activity that downregulate the Imd pathway by digesting peptidoglycan into short, non-immunogenic fragments. These include PGRP-LB, PGRP-SC1A, PGRP-SC1B, and PGRP-SC2. Additionally, PGRP-LB is the major regulator in the gut.
Intracellular signalling components
The principal intracellular signalling protein is Imd, a death domain-containing protein that binds with FADD and Dredd to form a complex. Dredd is activated following ubiquitination by the Iap2 complex (involving Iap2, UEV1a, bend, and eff), which allows Dredd to cleave the 30-residue N-terminus of Imd, allowing it to also be ubiquitinated by Iap2. Following this, the Tak1/TAB2 complex binds to the activated form of Imd and subsequently activates the IKKγ/Ird5 complex through phosphorylation. This IKKγ complex activates Relish by phosphorylation, leading to cleavage of Relish and thereby producing both N-terminal and C-terminal Relish fragments. The N-terminal Relish fragments dimerize leading to their translocation into the nucleus where these dimers bind to Relish-family NF-κB binding sites. Binding of Relish promotes the transcription of effectors such as antimicrobial peptides.
While Relish is integral for transcription of Imd pathway effectors, there is additional cooperation with other pathways such as Toll and JNK. The TAK1/TAB2 complex is key to propagating intracellular signalling of not only the Imd pathway, but also the JNK pathway. As a result, mutants for JNK signalling have severely reduced expression of Imd pathway antimicrobial peptides.
The antimicrobial response
Imd signalling regulates a number of effector peptides and proteins that are produced en masse following immune challenge. This includes many of the major antimicrobial peptide genes of Drosophila, particularly: Diptericin, Attacin, Drosocin, Cecropin, and Defensin. The Imd pathway regulates hundreds of genes after infection, however the antimicrobial peptides play one of the most essential roles of Imd signalling in defence. Flies lacking multiple antimicrobial peptide genes succumb to infections by a broad suite of Gram-negative bacteria. Classical thinking suggested that antimicrobial peptides worked as a generalist cocktail in defence, where each peptide provided a small and somewhat redundant contribution. However Hanson and colleagues found that single antimicrobial peptide genes displayed an unexpectedly high degree of specificity for defence against specific microbes. The fly Diptericin A gene is essential for defence against the bacterium Providencia rettgeri (also suggested by an earlier evolutionary study). A second specificity is encoded by Diptericin B, which defends flies against Acetobacter bacteria of the fly microbiome. A third specificity is encoded by the gene Drosocin. Flies lacking Drosocin are highly susceptible to Enterobacter cloacae infection. The Drosocin gene itself encodes two peptides (named Drosocin and Buletin), wherein it is specifically the Drosocin peptide that is responsible for defence against E. cloacae, while the Buletin peptide instead mediates a specific defence against another bacterium, Providencia burhodogranariea. These works accompany others on antimicrobial peptides and effectors regulated by the Drosophila Toll pathway, which also display a specific importance in defence against certain fungi or bacteria.
This work on Drosophila immune antimicrobial peptides and effectors has greatly revised the former view that such peptides are generalist molecules. The modern interpretation is now that specific molecules might provide a somewhat redundant layer of defence, but also single peptides can have critical importance, individually, against relevant microbes.
Conservation in insects
The Imd pathway appears to have evolved in the last common ancestor of centipedes and insects. However certain lineages of insects have since lost core components of Imd signalling. The first-discovered and most famous example is the pea aphid Acyrthosiphon pisum. It is thought that plant-feeding aphids have lost Imd signalling as they bear a number of bacterial endosymbionts, including both nutritional symbionts that would be disrupted by aberrant expression of antimicrobial peptides, and defensive symbionts that cover for some of the immune deficiency caused by loss of Imd signalling. It has also been suggested that antimicrobial peptides, the downstream components of Imd signalling, may be detrimental to fitness and lost by insects with exclusively plant-feeding ecologies.
Crosstalk between the Imd and Toll signalling pathways
While the Toll and Imd signalling pathways of Drosophila are commonly depicted as independent for explanatory purposes, the underlying complexity of Imd signalling involves a number of likely mechanisms wherein Imd signalling interacts with other signalling pathways including Toll and JNK. While the paradigm of Toll and Imd as largely independent provides a useful context for the study of immune signalling, the universality of this paradigm as it applies to other insects has been questioned. In Plautia stali stinkbugs, suppression of either Toll or Imd genes simultaneously leads to reduced activity of classic Toll and Imd effectors from both pathways.
Insects and arthropods lacking Imd signalling
The pea aphid Acyrthosiphon pisum
The bed bug Cimex lectularius
The mite Tetranychus urticae
References
Signal transduction
Genes
Evolutionary developmental biology
Arthropodology
Insect biology | Imd pathway | Chemistry,Biology | 2,139 |
41,930,603 | https://en.wikipedia.org/wiki/Sigma%20Pegasi | σ Pegasi, Latinised as Sigma Pegasi, is a binary star system in the northern constellation of Pegasus. With a combined apparent visual magnitude of 5.16, it is faintly visible to the naked eye. Based upon an annual parallax shift of 36.66 mas as seen from Earth, the system is located 89 light years distant from the Sun. It has a relatively high proper motion, advancing across the celestial sphere at the rate of 0.524 arcseconds per year.
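The quoted distance follows from the parallax via the standard parsec relation; as a worked check (the relation itself is standard astronomy, not stated explicitly in the text):

```latex
% Distance from annual parallax: d [pc] = 1 / p [arcsec]
d = \frac{1}{p} = \frac{1}{0.03666''} \approx 27.3~\mathrm{pc}
  \approx 27.3 \times 3.2616~\mathrm{ly/pc} \approx 89~\mathrm{ly}
```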
The primary, component A, is a yellow-white hued F-type main-sequence star with a stellar classification of F6 V. However, Frasca et al. (2009) lists it as a somewhat more evolved F-type subgiant star with a class of F7 IV. At the age of 2.7 billion years, it has an inactive chromosphere and is spinning with a leisurely projected rotational velocity of 3 km/s. It has a faint, magnitude 13.23 red dwarf companion, designated component B, at an angular separation of 248 arc seconds. The system is most likely (96% chance) a member of the thin disk population of the Milky Way.
References
External links
F-type subgiants
F-type main-sequence stars
M-type main-sequence stars
Binary stars
Pegasus (constellation)
Pegasi, Sigma
Durchmusterung objects
Pegasi, 49
216385
112935
8697 | Sigma Pegasi | Astronomy | 297 |
2,488,636 | https://en.wikipedia.org/wiki/Base%20excision%20repair | Base excision repair (BER) is a cellular mechanism, studied in the fields of biochemistry and genetics, that repairs damaged DNA throughout the cell cycle. It is responsible primarily for removing small, non-helix-distorting base lesions from the genome. The related nucleotide excision repair pathway repairs bulky helix-distorting lesions. BER is important for removing damaged bases that could otherwise cause mutations by mispairing or lead to breaks in DNA during replication. BER is initiated by DNA glycosylases, which recognize and remove specific damaged or inappropriate bases, forming AP sites. These are then cleaved by an AP endonuclease. The resulting single-strand break can then be processed by either short-patch (where a single nucleotide is replaced) or long-patch BER (where 2–10 new nucleotides are synthesized).
Lesions processed by BER
Single bases in DNA can be chemically damaged by a variety of mechanisms, the most common ones being deamination, oxidation, and alkylation. These modifications can affect the ability of the base to hydrogen-bond, resulting in incorrect base-pairing, and, as a consequence, mutations in the DNA. For example, incorporation of adenine across from 8-oxoguanine (right) during DNA replication causes a G:C base pair to be mutated to T:A. Other examples of base lesions repaired by BER include:
Oxidized bases: 8-oxoguanine, 2,6-diamino-4-hydroxy-5-formamidopyrimidine (FapyG, FapyA)
Alkylated bases: 3-methyladenine, 7-methylguanine
Deaminated bases: hypoxanthine formed from deamination of adenine; xanthine formed from deamination of guanine. (Thymine products following deamination of 5-methylcytosine are more difficult to recognize, but can be repaired by mismatch-specific glycosylases)
Uracil inappropriately incorporated in DNA or formed by deamination of cytosine
In addition to base lesions, the downstream steps of BER are also utilized to repair single-strand breaks.
The choice between long-patch and short-patch repair
The choice between short- and long-patch repair is currently under investigation. Various factors are thought to influence this decision, including the type of lesion, the cell cycle stage, and whether the cell is terminally differentiated or actively dividing. Some lesions, such as oxidized or reduced AP sites, are resistant to pol β lyase activity and, therefore, must be processed by long-patch BER.
Pathway preference may differ between organisms, as well. While human cells utilize both short- and long-patch BER, the yeast Saccharomyces cerevisiae was long thought to lack a short-patch pathway because it does not have homologs of several mammalian short-patch proteins, including pol β, DNA ligase III, XRCC1, and the kinase domain of PNKP. The recent discovery that the poly-A polymerase Trf4 possesses 5' dRP lyase activity has challenged this view.
Proteins involved in base excision repair
DNA glycosylases
DNA glycosylases are responsible for initial recognition of the lesion. They flip the damaged base out of the double helix, as pictured, and cleave the N-glycosidic bond of the damaged base, leaving an AP site. There are two categories of glycosylases: monofunctional and bifunctional. Monofunctional glycosylases have only glycosylase activity, whereas bifunctional glycosylases also possess AP lyase activity. Therefore, bifunctional glycosylases can convert a base lesion into a single-strand break without the need for an AP endonuclease. β-Elimination of an AP site by a glycosylase-lyase yields a 3' α,β-unsaturated aldehyde adjacent to a 5' phosphate, which differs from the AP endonuclease cleavage product. Some glycosylase-lyases can further perform δ-elimination, which converts the 3' aldehyde to a 3' phosphate. A wide variety of glycosylases have evolved to recognize different damaged bases. Examples of DNA glycosylases include Ogg1, which recognizes 8-oxoguanine, MPG, which recognizes 3-methyladenine, and UNG, which removes uracil from DNA.
AP endonucleases
The AP endonucleases cleave an AP site to yield a 3' hydroxyl adjacent to a 5' deoxyribosephosphate (dRP). AP endonucleases are divided into two families based on their homology to the ancestral bacterial AP endonucleases endonuclease IV and exonuclease III. Many eukaryotes have members of both families, including the yeast Saccharomyces cerevisiae, in which Apn1 is the EndoIV homolog and Apn2 is related to ExoIII. In humans, two AP endonucleases, APE1 and APE2, have been identified; both are members of the ExoIII family.
End processing enzymes
In order for ligation to occur, a DNA strand break must have a hydroxyl on its 3' end and a phosphate on its 5' end. In humans, polynucleotide kinase-phosphatase (PNKP) promotes formation of these ends during BER. This protein has a kinase domain, which phosphorylates 5' hydroxyl ends, and a phosphatase domain, which removes phosphates from 3' ends. Together, these activities ready single-strand breaks with damaged termini for ligation. The AP endonucleases also participate in 3' end processing. Besides opening AP sites, they possess 3' phosphodiesterase activity and can remove a variety of 3' lesions including phosphates, phosphoglycolates, and aldehydes. 3'-Processing must occur before DNA synthesis can initiate because DNA polymerases require a 3' hydroxyl to extend from.
DNA polymerases
Pol β is the main human polymerase that catalyzes short-patch BER, with pol λ able to compensate in its absence. These polymerases are members of the Pol X family and typically insert only a single nucleotide. In addition to polymerase activity, these enzymes have a lyase domain that removes the 5' dRP left behind by AP endonuclease cleavage. During long-patch BER, DNA synthesis is thought to be mediated by pol δ and pol ε along with the processivity factor PCNA, the same polymerases that carry out DNA replication. These polymerases perform displacing synthesis, meaning that the downstream 5' DNA end is "displaced" to form a flap (see diagram above). Pol β can also perform long-patch displacing synthesis and can, therefore, participate in either BER pathway. Long-patch synthesis typically inserts 2-10 new nucleotides.
Flap endonuclease
FEN1 removes the 5' flap generated during long patch BER. This endonuclease shows a strong preference for a long 5' flap adjacent to a 1-nt 3' flap. The yeast homolog of FEN1 is RAD27. In addition to its role in long-patch BER, FEN1 cleaves flaps with a similar structure during Okazaki fragment processing, an important step in lagging strand DNA replication.
DNA ligase
DNA ligase III along with its cofactor XRCC1 catalyzes the nick-sealing step in short-patch BER in humans. DNA ligase I ligates the break in long-patch BER.
Links with cancer
Defects in a variety of DNA repair pathways lead to cancer predisposition, and BER appears to follow this pattern. Deletion mutations in BER genes have been shown to result in a higher mutation rate in a variety of organisms, implying that loss of BER could contribute to the development of cancer. Indeed, somatic mutations in Pol β have been found in 30% of human cancers, and some of these mutations lead to transformation when expressed in mouse cells. Mutations in the DNA glycosylase MYH are also known to increase susceptibility to colon cancer.
Epigenetic deficiencies in cancers
Epigenetic alterations (epimutations) in base excision repair genes have only recently begun to be evaluated in a few cancers, compared to the numerous previous studies of epimutations in genes acting in other DNA repair pathways (such as MLH1 in mismatch repair and MGMT in direct reversal). Some examples of epimutations in base excision repair genes that occur in cancers are summarized below.
MBD4
MBD4 (methyl-CpG-binding domain protein 4) is a glycosylase employed in an initial step of base excision repair. MBD4 protein binds preferentially to fully methylated CpG sites and to the altered DNA bases at those sites. These altered bases arise from the frequent hydrolysis of cytosine to uracil (see image) and hydrolysis of 5-methylcytosine to thymine, producing G:U and G:T base pairs. If the improper uracils or thymines in these base pairs are not removed before DNA replication, they will cause transition mutations. MBD4 specifically catalyzes the removal of T and U paired with guanine (G) within CpG sites. This is an important repair function since about 1/3 of all intragenic single base pair mutations in human cancers occur in CpG dinucleotides and are the result of G:C to A:T transitions. These transitions comprise the most frequent mutations in human cancer. For example, nearly 50% of somatic mutations of the tumor suppressor gene p53 in colorectal cancer are G:C to A:T transitions within CpG sites. Thus, a decrease in expression of MBD4 could cause an increase in carcinogenic mutations.
MBD4 expression is reduced in almost all colorectal neoplasms due to methylation of the promoter region of MBD4. Also MBD4 is deficient due to mutation in about 4% of colorectal cancers.
A majority of histologically normal fields surrounding neoplastic growths (adenomas and colon cancers) in the colon also show reduced MBD4 mRNA expression (a field defect) compared to histologically normal tissue from individuals who never had a colonic neoplasm. This finding suggests that epigenetic silencing of MBD4 is an early step in colorectal carcinogenesis.
In a Chinese population that was evaluated, the MBD4 Glu346Lys polymorphism was associated with about a 50% reduced risk of cervical cancer, suggesting that alterations in MBD4 may be important in cancer.
NEIL1
NEIL1 recognizes (targets) and removes certain oxidatively-damaged bases and then incises the abasic site via β,δ elimination, leaving 3′ and 5′ phosphate ends. NEIL1 recognizes oxidized pyrimidines, formamidopyrimidines, thymine residues oxidized at the methyl group, and both stereoisomers of thymine glycol. The best substrates for human NEIL1 appear to be the hydantoin lesions, guanidinohydantoin, and spiroiminodihydantoin that are further oxidation products of 8-oxoG. NEIL1 is also capable of removing lesions from single-stranded DNA as well as from bubble and forked DNA structures. A deficiency in NEIL1 causes increased mutagenesis at the site of an 8-oxo-Gua:C pair, with most mutations being G:C to T:A transversions.
A study in 2004 found that 46% of primary gastric cancers had reduced expression of NEIL1 mRNA, though the mechanism of reduction was not known. This study also found that 4% of gastric cancers had mutations in NEIL1. The authors suggested that low NEIL1 activity arising from reduced expression and/or mutation in NEIL1 was often involved in gastric carcinogenesis.
A screen of 145 DNA repair genes for aberrant promoter methylation was performed on head and neck squamous cell carcinoma (HNSCC) tissues from 20 patients and from head and neck mucosa samples from 5 non-cancer patients. This screen showed that NEIL1, with substantially increased hypermethylation, had the most significantly different frequency of methylation. Furthermore, the hypermethylation corresponded to a decrease in NEIL1 mRNA expression. Further work with 135 tumor and 38 normal tissues also showed that 71% of HNSCC tissue samples had elevated NEIL1 promoter methylation.
When 8 DNA repair genes were evaluated in non-small cell lung cancer (NSCLC) tumors, 42% were hypermethylated in the NEIL1 promoter region. This was the most frequent DNA repair abnormality found among the 8 DNA repair genes tested. NEIL1 was also one of six DNA repair genes found to be hypermethylated in their promoter regions in colorectal cancer.
Links with cognition
Active DNA methylation and demethylation is required for the cognition process of memory formation and maintenance. In rats, contextual fear conditioning can trigger life-long memory for the event with a single trial, and methylation changes appear to be correlated with triggering particularly long-lived memories. With contextual fear conditioning, after 24 hours, DNA isolated from the rat brain hippocampus region had 2097 differentially methylated genes, with a proportion being demethylated. As reviewed by Bayraktar and Kreutz, DNA demethylation is dependent on base excision repair (see figure).
Physical exercise has well established beneficial effects on learning and memory (see Neurobiological effects of physical exercise). BDNF is a particularly important regulator of learning and memory. As reviewed by Fernandes et al., in rats, exercise enhances the hippocampus expression of the gene Bdnf, which has an essential role in memory formation. Enhanced expression of Bdnf occurs through demethylation of its CpG island promoter at exon IV and demethylation depends on base excision repair (see figure).
Decline in BER with age
The activity of the DNA glycosylase that removes methylated bases in human leukocytes declines with age. The reduction in the excision of methylated bases from DNA suggests an age-dependent decline in 3-methyladenine DNA glycosylase, a BER enzyme responsible for removing alkylated bases.
Young rats (4- to 5 months old), but not old rats (24- to 28 months old), have the ability to induce DNA polymerase beta and AP endonuclease in response to oxidative damage.
See also
DNA mismatch repair
DNA repair
Homologous recombination
Non-homologous end joining
Nucleotide excision repair
Host-cell reactivation assay
References
External links
DNA repair
Human proteins
Proteomics | Base excision repair | Biology | 3,239 |
42,381,742 | https://en.wikipedia.org/wiki/Hsu%E2%80%93Robbins%E2%80%93Erd%C5%91s%20theorem | In the mathematical theory of probability, the Hsu–Robbins–Erdős theorem states that if X1, X2, … is a sequence of i.i.d. random variables with zero mean and finite variance, and Sn = X1 + X2 + … + Xn, then

∑n≥1 P(|Sn| > εn) < ∞

for every ε > 0.
The result was proved by Pao-Lu Hsu and Herbert Robbins in 1947.
This is an interesting strengthening of the classical strong law of large numbers in the direction of the Borel–Cantelli lemma. The idea of such a result is probably due to Robbins, but the method of proof is vintage Hsu. Hsu and Robbins further conjectured that the condition of finiteness of the variance of X1 is also a necessary condition for the complete convergence to hold. Two years later, the famed mathematician Paul Erdős proved the conjecture.
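A brief numerical illustration (a hedged sketch, not a proof, and not part of the original result): for centered ±1 coin flips, the summands P(|Sn| > εn) decay exponentially in n, which is what makes the series converge. The Monte Carlo estimate below makes this decay visible; function names are illustrative only.

```python
# Monte Carlo estimate of the summands P(|S_n| > eps*n) for centered
# +/-1 variables; their rapid decay with n illustrates why the series
# in the Hsu-Robbins-Erdos theorem converges. Illustrative sketch only.
import random

def tail_prob(n, eps, trials=20000):
    hits = 0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n))
        if abs(s) > eps * n:
            hits += 1
    return hits / trials

for n in (10, 50, 100, 200):
    print(n, tail_prob(n, eps=0.5))  # probabilities fall off exponentially
```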
Since then, many authors extended this result in several directions.
References
Theorems in measure theory
Probabilistic inequalities | Hsu–Robbins–Erdős theorem | Mathematics | 177 |
9,796,822 | https://en.wikipedia.org/wiki/Pohlsepia | Pohlsepia mazonensis is a species of fossil organism with unknown affinity. Although it was originally identified as an extinct cephalopod, later studies denied that interpretation. The species is known from a single exceptionally preserved fossil discovered in the late Carboniferous (Pennsylvanian) Francis Creek Shale (Mazon Creek fossil beds) of the Carbondale Formation, north-east Illinois, United States.
Pohlsepia mazonensis is named after its discoverer, James Pohl, and the type locality, Mazon Creek. Its habitat was the shallows seawards of a major river delta in what at that time was an inland ocean between the Midwest and the Appalachians. In its initial description, it was considered to be the oldest known octopus, but later studies have considered this classification dubious. In 2022, it was even suggested that it may not be a mollusk.
The type specimen is reposited at the Field Museum of Natural History in Chicago, Illinois.
Fossil
The Pohlsepia mazonensis fossil found by James Pohl is the only known example of the species. Most notably, the fossil has ten arms. The extra two arms are shorter, while the other eight are similar in length.
The fossil is wide and "sack-shaped", with indistinct features including a poorly defined head. While it is unclear, one of these features could be an ink sac. The fossil lacks arm hooks and suckers, and all of these factors combine to make the assignment to the order Cirroctopoda controversial.
Etymology
The genus name Pohlsepia comes from its discoverer, James Pohl; the species name mazonensis refers to the type locality, Mazon Creek. Pohl is the son of Joe Pohl, and together they have collected fossils in the Mazon Creek area. A native Midwesterner, Pohl is originally from Wisconsin and Minnesota. He and his father have donated their fossils to museums in the area, including the Pohlsepia mazonensis specimen to the Field Museum.
Classification
In 2000, Joanne Kluessendorf assigned Pohlsepia mazonensis to the order Cirroctopoda. Many other researchers disagreed, citing the lack of internal structure; the merely possible evidence of fins and the huge time gap between the Pohlsepia mazonensis fossil and the first confirmed cirrate octopod fossils are also problematic. In the original description, however, the species was classified as an octopod: despite the number of arms being unclear, the fossil's indistinct head, sack-like body and fins similar to those of cirrate octopods were taken as sufficient evidence to place Pohlsepia mazonensis in the order Cirroctopoda.
When looking at the groups Teudopsidae, Trachyteuthididae, the Vampyromorpha, cirrate octopods, incirrate octopods and the fossil Loligosepiina, the describing authors proposed that Pohlsepia mazonensis would be most closely related to the octopods based on its lack of a shell.
However, later studies found the placement within Octopoda to be dubious, due to the fossil's poor preservation and the fact that other fossils have now shown true octopuses to have first arisen in the Jurassic. A 2021 study considered it unlikely even to be a cephalopod or a mollusk: the lack of a shell is a highly unlikely feature for a Carboniferous cephalopod, and its appendages lack hooks, suckers, cirri, an arm web, and the characteristic 8/10 arm count. There is neither a beak, an unambiguous ink sac, nor a radula. The bulbous body outline and presence of appendages are more consistent with an affinity to the cnidarians, a phylum of invertebrate animals including jellyfish and sea anemones. In 2019, a number of fossils, including ones from Mazon Creek such as vertebrates, Tullimonstrum, and Pohlsepia, were examined to assess the affinity of Tullimonstrum. Although this study treated Pohlsepia as a cephalopod, melanosomes could not be identified from its eyespot.
Mazon Creek
Located in what is now northern Illinois, Mazon Creek preserved the Pohlsepia mazonensis fossil extraordinarily well. The fossil was found in the Pit 11 region, within the Francis Creek Shale Member. Like most soft-tissue fossils found at Mazon Creek, it is preserved as a 2D light-on-dark discolouration of the matrix. The Francis Creek Shale Member of the Carbondale Formation has a diverse array of preserved plants and animals.
Previously, it was thought that these organisms were immediately killed and buried in storm surges, where bursts of water would submerge the organisms in sediments, creating an environment where their remains were protected from scavengers before most decomposition could start. However, there is limited geological evidence for the storm-surge hypothesis, and the kill mechanism at Mazon Creek is not fully understood, but high sedimentation could have choked, killed, and buried organisms rapidly.
References
External links
The Octopus News Magazine Online: Fossil Octopuses
Fossil taxa described in 2000
Controversial taxa
Species known from a single specimen | Pohlsepia | Biology | 1,063 |
2,522,070 | https://en.wikipedia.org/wiki/Multiferroics | Multiferroics are defined as materials that exhibit more than one of the primary ferroic properties in the same phase:
ferromagnetism – a magnetisation that is switchable by an applied magnetic field
ferroelectricity – an electric polarisation that is switchable by an applied electric field
ferroelasticity – a deformation that is switchable by an applied stress
While ferroelectric ferroelastics and ferromagnetic ferroelastics are formally multiferroics, these days the term is usually used to describe the magnetoelectric multiferroics that are simultaneously ferromagnetic and ferroelectric. Sometimes the definition is expanded to include nonprimary order parameters, such as antiferromagnetism or ferrimagnetism. In addition, other types of primary order, such as ferroic arrangements of magnetoelectric multipoles, of which ferrotoroidicity is an example, have been proposed.
Besides scientific interest in their physical properties, multiferroics have potential for applications as actuators, switches, magnetic field sensors and new types of electronic memory devices.
History
A Web of Science search for the term multiferroic yields the year 2000 paper "Why are there so few magnetic ferroelectrics?" from N. A. Spaldin (then Hill) as the earliest result. This work explained the origin of the contraindication between magnetism and ferroelectricity and proposed practical routes to circumvent it, and is widely credited with starting the modern explosion of interest in multiferroic materials. The availability of practical routes to creating multiferroic materials from 2000 stimulated intense activity. Particularly key early works were the discovery of large ferroelectric polarization in epitaxially grown thin films of magnetic BiFeO3, the observation that the non-collinear magnetic ordering in orthorhombic TbMnO3 and TbMn2O5 causes ferroelectricity, and the identification of unusual improper ferroelectricity that is compatible with the coexistence of magnetism in hexagonal manganite YMnO3. The graph to the right shows in red the number of papers on multiferroics from a Web of Science search until 2008; the exponential increase continues today.
Magnetoelectric materials
To place multiferroic materials in their appropriate historical context, one also needs to consider magnetoelectric materials, in which an electric field modifies the magnetic properties and vice versa. While magnetoelectric materials are not necessarily multiferroic, all ferromagnetic ferroelectric multiferroics are linear magnetoelectrics, with an applied electric field inducing a change in magnetization linearly proportional to its magnitude. Magnetoelectric materials and the corresponding magnetoelectric effect have a longer history than multiferroics, shown in blue in the graph to the right. The first known mention of magnetoelectricity is in the 1959 Edition of Landau & Lifshitz' Electrodynamics of Continuous Media which has the following comment at the end of the section on piezoelectricity: "Let us point out two more phenomena, which, in principle, could exist. One is piezomagnetism, which consists of linear coupling between a magnetic field in a solid and a deformation (analogous to piezoelectricity). The other is a linear coupling between magnetic and electric fields in a media, which would cause, for example, a magnetization proportional to an electric field. Both these phenomena could exist for certain classes of magnetocrystalline symmetry. We will not however discuss these phenomena in more detail because it seems that till present, presumably, they have not been observed in any substance." One year later, I. E. Dzyaloshinskii showed using symmetry arguments that the material Cr2O3 should have linear magnetoelectric behavior, and his prediction was rapidly verified by D. Astrov. Over the next decades, research on magnetoelectric materials continued steadily in a number of groups in Europe, in particular in the former Soviet Union and in the group of H. Schmid at U. Geneva. A series of East-West conferences entitled Magnetoelectric Interaction Phenomena in Crystals (MEIPIC) was held between 1973 (in Seattle) and 2009 (in Santa Barbara), and indeed the term "multi-ferroic magnetoelectric" was first used by H. Schmid in the proceedings of the 1993 MEIPIC conference (in Ascona).
Mechanisms for combining ferroelectricity and magnetism
To be defined as ferroelectric, a material must have a spontaneous electric polarization that is switchable by an applied electric field. Usually such an electric polarization arises via an inversion-symmetry-breaking structural distortion from a parent centrosymmetric phase. For example, in the prototypical ferroelectric barium titanate, BaTiO3, the parent phase is the ideal cubic ABO3 perovskite structure, with the B-site Ti4+ ion at the center of its oxygen coordination octahedron and no electric polarisation. In the ferroelectric phase the Ti4+ ion is shifted away from the center of the octahedron causing a polarization. Such a displacement only tends to be favourable when the B-site cation has an electron configuration with an empty d shell (a so-called d0 configuration), which favours energy-lowering covalent bond formation between the B-site cation and the neighbouring oxygen anions.
This "d0-ness" requirement is a clear obstacle for the formation of multiferroics, since the magnetism in most transition-metal oxides arises from the presence of partially filled transition metal d shells. As a result, in most multiferroics, the ferroelectricity has a different origin. The following describes the mechanisms that are known to circumvent this contraindication between ferromagnetism and ferroelectricity.
Lone-pair-active
In lone-pair-active multiferroics, the ferroelectric displacement is driven by the A-site cation, and the magnetism arises from a partially filled d shell on the B site. Examples include bismuth ferrite, BiFeO3, BiMnO3 (although this is believed to be anti-polar), and PbVO3. In these materials, the A-site cation (Bi3+, Pb2+) has a so-called stereochemically active 6s2 lone-pair of electrons, and off-centering of the A-site cation is favoured by an energy-lowering electron sharing between the formally empty A-site 6p orbitals and the filled O 2p orbitals.
Geometric ferroelectricity
In geometric ferroelectrics, the driving force for the structural phase transition leading to the polar ferroelectric state is a rotational distortion of the polyhedra rather than an electron-sharing covalent bond formation. Such rotational distortions occur in many transition-metal oxides; in the perovskites for example they are common when the A-site cation is small, so that the oxygen octahedra collapse around it. In perovskites, the three-dimensional connectivity of the polyhedra means that no net polarization results; if one octahedron rotates to the right, its connected neighbor rotates to the left and so on. In layered materials, however, such rotations can lead to a net polarization.
The prototypical geometric ferroelectrics are the layered barium transition metal fluorides, BaMF4, M=Mn, Fe, Co, Ni, Zn, which have a ferroelectric transition at around 1000K and a magnetic transition to an antiferromagnetic state at around 50K. Since the distortion is not driven by a hybridisation between the d-site cation and the anions, it is compatible with the existence of magnetism on the B site, thus allowing for multiferroic behavior.
A second example is provided by the family of hexagonal rare earth manganites (h-RMnO3 with R=Ho-Lu, Y), which have a structural phase transition at around 1300 K consisting primarily of a tilting of the MnO5 bipyramids. While the tilting itself has zero polarization, it couples to a polar corrugation of the R-ion layers which yields a polarisation of ~6 μC/cm2. Since the ferroelectricity is not the primary order parameter it is described as improper. The multiferroic phase is reached at ~100K when a triangular antiferromagnetic order due to spin frustration arises.
Charge ordering
Charge ordering can occur in compounds containing ions of mixed valence when the electrons, which are delocalised at high temperature, localize in an ordered pattern on different cation sites so that the material becomes insulating. When the pattern of localized electrons is polar, the charge ordered state is ferroelectric. Usually the ions in such a case are magnetic and so the ferroelectric state is also multiferroic. The first proposed example of a charge ordered multiferroic was LuFe2O4, which charge orders at 330 K with an arrangement of Fe2+ and Fe3+ ions. Ferrimagnetic ordering occurs below 240 K. Whether or not the charge ordering is polar has recently been questioned, however. In addition, charge ordered ferroelectricity has been suggested in magnetite, Fe3O4, below its Verwey transition.
Magnetically-driven ferroelectricity
In magnetically driven multiferroics the macroscopic electric polarization is induced by long-range magnetic order which is non-centrosymmetric. Formally, the electric polarisation, P, is given in terms of the magnetization, M, by

P ∝ (M·∇)M − M(∇·M).
Like the geometric ferroelectrics discussed above, the ferroelectricity is improper, because the polarisation is not the primary order parameter (in this case the primary order is the magnetisation) for the ferroic phase transition.
The prototypical example is the formation of the non-centrosymmetric magnetic spiral state, accompanied by a small ferroelectric polarization, below 28K in TbMnO3. In this case the polarization is small, 10−2 μC/cm2, because the mechanism coupling the non-centrosymmetric spin structure to the crystal lattice is the weak spin-orbit coupling. Larger polarizations occur when the non-centrosymmetric magnetic ordering is caused by the stronger superexchange interaction, such as in orthorhombic HoMnO3 and related materials. In both cases the magnetoelectric coupling is strong because the ferroelectricity is directly caused by the magnetic order.
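A microscopic picture often invoked for such spiral magnets, offered here as a standard-model aside rather than something stated in the text above, is the spin-current (Katsura–Nagaosa–Balatsky) mechanism, in which each pair of canted neighbouring spins Si and Sj, joined by the unit vector eij, contributes a local polarization:

```latex
% Spin-current (Katsura--Nagaosa--Balatsky) form: local polarization from
% a pair of canted neighbouring spins S_i, S_j joined by unit vector e_ij
\mathbf{P}_{ij} \propto \hat{\mathbf{e}}_{ij} \times \left(\mathbf{S}_i \times \mathbf{S}_j\right)
```

Summed over bonds, a cycloidal spiral then yields a small uniform polarization, consistent with the ~10−2 μC/cm2 value quoted above for TbMnO3.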
f-electron magnetism
While most magnetoelectric multiferroics developed to date have conventional transition-metal d-electron magnetism and a novel mechanism for the ferroelectricity, it is also possible to introduce a different type of magnetism into a conventional ferroelectric. The most obvious route is to use a rare-earth ion with a partially filled shell of f electrons on the A site. An example is EuTiO3 which, while not ferroelectric under ambient conditions, becomes so under modest strain, or when its lattice constant is expanded, for example by substituting some barium on the A site.
Composites
It remains a challenge to develop good single-phase multiferroics with large magnetization and polarization and strong coupling between them at room temperature. Therefore, composites combining magnetic materials, such as FeRh, with ferroelectric materials, such as PMN-PT, are an attractive and established route to achieving multiferroicity. Some examples include magnetic thin films on piezoelectric PMN-PT substrates and Metglass/PVDF/Metglass trilayer structures. Recently an interesting layer-by-layer growth of an atomic-scale multiferroic composite has been demonstrated, consisting of individual layers of ferroelectric and antiferromagnetic LuFeO3 alternating with ferrimagnetic but non-polar LuFe2O4 in a superlattice.
A promising new approach is core-shell-type ceramics, in which a magnetoelectric composite is formed in situ during synthesis. In the system (BiFe0.9Co0.1O3)0.4-(Bi1/2K1/2TiO3)0.6 (BFC-BKT) very strong ME coupling has been observed on a microscopic scale using PFM under magnetic field. Furthermore, switching of magnetization via electric field has been observed using MFM. Here, the ME-active core-shell grains consist of magnetic CoFe2O4 (CFO) cores and a (BiFeO3)0.6-(Bi1/2K1/2TiO3)0.4 (BFO-BKT) shell, where core and shell have an epitaxial lattice structure. The mechanism of the strong ME coupling is magnetic exchange interaction between CFO and BFO across the core-shell interface, which results in an exceptionally high Néel temperature of 670 K for the BF-BKT phase.
Other
There have been reports of large magnetoelectric coupling at room temperature in type-I multiferroics, such as in the "diluted" magnetic perovskite (PbZr0.53Ti0.47O3)0.6–(PbFe1/2Ta1/2O3)0.4 (PZTFT) and in certain Aurivillius phases. Here, strong ME coupling has been observed on a microscopic scale using PFM under magnetic field, among other techniques. Organic-inorganic hybrid multiferroics have been reported in the family of metal-formate perovskites, as well as molecular multiferroics such as [(CH3)2NH2][Ni(HCOO)3], with elastic strain-mediated coupling between the order parameters.
Classification
Type-I and type-II multiferroics
A helpful classification scheme for multiferroics into so-called type-I and type-II multiferroics was introduced in 2009 by D. Khomskii.
Khomskii suggested the term type-I multiferroic for materials in which the ferroelectricity and magnetism occur at different temperatures and arise from different mechanisms. Usually the structural distortion which gives rise to the ferroelectricity occurs at high temperature, and the magnetic ordering, which is usually antiferromagnetic, sets in at lower temperature. The prototypical example is BiFeO3 (TC=1100 K, TN=643 K), with the ferroelectricity driven by the stereochemically active lone pair of the Bi3+ ion and the magnetic ordering caused by the usual superexchange mechanism. YMnO3 (TC=914 K, TN=76 K) is also type-I, although its ferroelectricity is so-called "improper", meaning that it is a secondary effect arising from another (primary) structural distortion. The independent emergence of magnetism and ferroelectricity means that the domains of the two properties can exist independently of each other. Most type-I multiferroics show a linear magnetoelectric response, as well as changes in dielectric susceptibility at the magnetic phase transition.
The term type-II multiferroic is used for materials in which the magnetic ordering breaks the inversion symmetry and directly "causes" the ferroelectricity. In this case the ordering temperatures for the two phenomena are identical. The prototypical example is TbMnO3, in which a non-centrosymmetric magnetic spiral accompanied by a ferroelectric polarization sets in at 28 K. Since the same transition causes both effects they are by construction strongly coupled. The ferroelectric polarizations tend to be orders of magnitude smaller than those of the type-I multiferroics however, typically of the order of 10−2 μC/cm2. The opposite effect has also been reported, in a Mott insulating charge-transfer salt. Here, a charge-ordering transition to a polar ferroelectric phase drives a magnetic ordering, again giving an intimate coupling between the ferroelectric and, in this case antiferromagnetic, orders.
Symmetry and coupling
The formation of a ferroic order is always associated with the breaking of a symmetry. For example, the symmetry of spatial inversion is broken when ferroelectrics develop their electric dipole moment, and time reversal is broken when ferromagnets become magnetic. The symmetry breaking can be described by an order parameter, the polarization P and magnetization M in these two examples, and leads to multiple equivalent ground states which can be selected by the appropriate conjugate field; electric or magnetic for ferroelectrics or ferromagnets respectively. This leads for example to the familiar switching of magnetic bits using magnetic fields in magnetic data storage.
Ferroics are often characterized by the behavior of their order parameters under space inversion and time reversal (see table). The operation of space inversion reverses the direction of polarisation (so the phenomenon of polarisation is space-inversion antisymmetric) while leaving the magnetisation invariant. As a result, non-polar ferromagnets and ferroelastics are invariant under space inversion whereas polar ferroelectrics are not. The operation of time reversal, on the other hand, changes the sign of M (which is therefore time-reversal antisymmetric), while the sign of P remains invariant. Therefore, non-magnetic ferroelastics and ferroelectrics are invariant under time reversal whereas ferromagnets are not.
Magnetoelectric multiferroics are both space-inversion and time-reversal anti-symmetric since they are both ferromagnetic and ferroelectric.
The combination of symmetry breakings in multiferroics can lead to coupling between the order parameters, so that one ferroic property can be manipulated with the conjugate field of the other. Ferroelastic ferroelectrics, for example, are piezoelectric, meaning that an electric field can cause a shape change or a pressure can induce a voltage, and ferroelastic ferromagnets show the analogous piezomagnetic behavior. Particularly appealing for potential technologies is the control of the magnetism with an electric field in magnetoelectric multiferroics, since electric fields have lower energy requirements than their magnetic counterparts.
Applications
Electric-field control of magnetism
The main technological driver for the exploration of multiferroics has been their potential for controlling magnetism using electric fields via their magnetoelectric coupling. Such a capability could be technologically transformative, since the production of electric fields is far less energy intensive than the production of magnetic fields (which in turn require electric currents) that are used in most existing magnetism-based technologies. There have been successes in controlling the orientation of magnetism using an electric field, for example in heterostructures of conventional ferromagnetic metals and multiferroic BiFeO3, as well as in controlling the magnetic state, for example from antiferromagnetic to ferromagnetic in FeRh.
In multiferroic thin films, the coupled magnetic and ferroelectric order parameters can be exploited for developing magnetoelectronic devices. These include novel spintronic devices such as tunnel magnetoresistance (TMR) sensors and spin valves with electric field tunable functions. A typical TMR device consists of two layers of ferromagnetic materials separated by a thin tunnel barrier (~2 nm) made of a multiferroic thin film. In such a device, spin transport across the barrier can be electrically tuned. In another configuration, a multiferroic layer can be used as the exchange bias pinning layer. If the antiferromagnetic spin orientations in the multiferroic pinning layer can be electrically tuned, then magnetoresistance of the device can be controlled by the applied electric field. One can also explore multiple state memory elements, where data are stored both in the electric and the magnetic polarizations.
Radio and high-frequency devices
Multiferroic composite structures in bulk form are explored for high-sensitivity ac magnetic field sensors and electrically tunable microwave devices such as filters, oscillators and phase shifters (in which the ferri-, ferro- or antiferro-magnetic resonance is tuned electrically instead of magnetically).
Cross-over applications in other areas of physics
Multiferroics have been used to address fundamental questions in cosmology and particle physics. In the first, the fact that an individual electron is an ideal multiferroic, with any electric dipole moment required by symmetry to adopt the same axis as its magnetic dipole moment, has been exploited to search for the electric dipole moment of the electron. Using a designed multiferroic material, the change in net magnetic moment on switching of the ferroelectric polarisation in an applied electric field was monitored, allowing an upper bound on the possible value of the electron electric dipole moment to be extracted. This quantity is important because it reflects the amount of time-reversal (and hence CP) symmetry breaking in the universe, which imposes severe constraints on theories of elementary particle physics. In a second example, the unusual improper geometric ferroelectric phase transition in the hexagonal manganites has been shown to have symmetry characteristics in common with proposed early universe phase transitions. As a result, the hexagonal manganites can be used to run experiments in the laboratory to test various aspects of early universe physics. In particular, a proposed mechanism for cosmic-string formation has been verified, and aspects of cosmic string evolution are being explored through observation of their multiferroic domain intersection analogues.
Applications beyond magnetoelectricity
A number of other unexpected applications have been identified in the last few years, mostly in multiferroic bismuth ferrite, that do not seem to be directly related to the coupled magnetism and ferroelectricity. These include a photovoltaic effect, photocatalysis, and gas sensing behaviour. It is likely that the combination of ferroelectric polarisation with a small band gap composed partially of transition-metal d states is responsible for these favourable properties.
Multiferroic films with appropriate band-gap structures have been incorporated into solar cells, where efficient ferroelectric-polarization-driven carrier separation and above-band-gap photovoltages can give high energy-conversion efficiency. Various films have been researched, and there is also a new approach to tuning the band gap of double-perovskite multilayer oxides such as Bi2FeCrO6 by engineering the cation order.
Dynamics
Dynamical multiferroicity
Recently it was pointed out that, in the same way that electric polarisation can be generated by spatially varying magnetic order, magnetism can be generated by a temporally varying polarisation. The resulting phenomenon was called Dynamical Multiferroicity. The magnetisation, $\mathbf{M}$, is given by

$\mathbf{M} \propto \mathbf{P} \times \partial_t \mathbf{P}$

where $\mathbf{P}$ is the polarisation and $\times$ indicates the vector product. The dynamical multiferroicity formalism underlies the following diverse range of phenomena (a symbolic sketch follows the list):
The phonon Zeeman effect, in which phonons of opposite circular polarisation have different energies in a magnetic field. This phenomenon awaits experimental verification.
Resonant magnon excitation by optically driven phonons.
Dzyaloshinskii–Moriya-type electromagnons.
The inverse Faraday effect.
Exotic flavours of quantum criticality.
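To illustrate the first and fourth entries above, the $\mathbf{M} \propto \mathbf{P} \times \partial_t \mathbf{P}$ expression can be evaluated for a circularly polarised ionic displacement. As in the earlier sketch, this assumes the proportionality form given above with the constant set to one, and the input profile is illustrative:

```python
import sympy as sp

t = sp.symbols('t', real=True)
P0, Omega = sp.symbols('P_0 Omega', positive=True)

# Circularly polarised polarisation, e.g. from a circularly polarised optical phonon:
P = sp.Matrix([P0 * sp.cos(Omega * t), P0 * sp.sin(Omega * t), 0])

# Dynamically induced magnetisation, M ~ P x dP/dt:
M = sp.simplify(P.cross(sp.diff(P, t)))
print(M.T)  # -> Matrix([[0, 0, Omega*P_0**2]]): a static moment along the rotation axis
```

The static moment along the rotation axis is the sense in which circularly polarised lattice motion can act like an effective magnetic field.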
Dynamical processes
The study of dynamics in multiferroic systems is concerned with understanding the time evolution of the coupling between various ferroic orders, in particular under external applied fields. Current research in this field is motivated both by the promise of new types of application reliant on the coupled nature of the dynamics, and the search for new physics lying at the heart of the fundamental understanding of the elementary MF excitations. An increasing number of studies of MF dynamics are concerned with the coupling between electric and magnetic order parameters in the magnetoelectric multiferroics. In this class of materials, the leading research is exploring, both theoretically and experimentally, the fundamental limits (e.g. intrinsic coupling velocity, coupling strength, materials synthesis) of the dynamical magnetoelectric coupling and how these may be both reached and exploited for the development of new technologies.
At the heart of the proposed technologies based on magnetoelectric coupling are switching processes, which describe the manipulation of the material's macroscopic magnetic properties with electric field and vice versa. Much of the physics of these processes is described by the dynamics of domains and domain walls. An important goal of current research is the minimization of the switching time, from fractions of a second ("quasi"-static regime), towards the nanosecond range and faster, the latter being the typical time scale needed for modern electronics, such as next generation memory devices.
Ultrafast processes operating at picosecond, femtosecond, and even attosecond scale are both driven by, and studied using, optical methods that are at the front line of modern science. The physics underpinning the observations at these short time scales is governed by non-equilibrium dynamics, and usually makes use of resonant processes. One demonstration of ultrafast processes is the switching from collinear antiferromagnetic state to spiral antiferromagnetic state in CuO under excitation by 40 fs 800 nm laser pulse. A second example shows the possibility for the direct control of spin waves with THz radiation on antiferromagnetic NiO. These are promising demonstrations of how the switching of electric and magnetic properties in multiferroics, mediated by the mixed character of the magnetoelectric dynamics, may lead to ultrafast data processing, communication and quantum computing devices.
Current research into MF dynamics aims to address various open questions; the practical realisation and demonstration of ultra-high speed domain switching, the development of further new applications based on tunable dynamics, e.g. frequency dependence of dielectric properties, the fundamental understanding of the mixed character of the excitations (e.g. in the ME case, mixed phonon-magnon modes – 'electromagnons'), and the potential discovery of new physics associated with the MF coupling.
Domains and domain walls
Like any ferroic material, a multiferroic system is fragmented into domains. A domain is a spatially extended region with a constant direction and phase of its order parameters. Neighbouring domains are separated by transition regions called domain walls.
Properties of multiferroic domains
In contrast to materials with a single ferroic order, domains in multiferroics have additional properties and functionalities. For instance, they are characterized by an assembly of at least two order parameters. The order parameters may be independent (typical yet not mandatory for a Type-I multiferroic) or coupled (mandatory for a Type-II multiferroic).
Many outstanding properties that distinguish domains in multiferroics from those in materials with a single ferroic order are consequences of the coupling between the order parameters.
The coupling can lead to patterns with a distribution and/or topology of domains that is exclusive to multiferroics.
The order-parameter coupling is usually homogeneous across a domain, i.e., gradient effects are negligible.
In some cases the averaged net value of the order parameter for a domain pattern is more relevant for the coupling than the value of the order parameter of an individual domain.
These issues lead to novel functionalities which explain the current interest in these materials.
Properties of multiferroic domain walls
Domain walls are spatially extended regions of transition mediating the transfer of the order parameter from one domain to another. In comparison to the domains the domain walls are not homogeneous and they can have a lower symmetry. This may modify the properties of a multiferroic and the coupling of its order parameters. Multiferroic domain walls may display particular static and dynamic properties.
Static properties refer to stationary walls. They can result from
The reduced dimensionality
The finite width of the wall
The different symmetry of the wall
The inherent chemical, electronic, or order-parameter inhomogeneity within the walls and the resulting gradient effects.
Synthesis
Multiferroic properties can appear in a large variety of materials. Therefore, several conventional material fabrication routes are used, including solid state synthesis, hydrothermal synthesis, sol-gel processing, vacuum based deposition, and floating zone.
Some types of multiferroics require more specialized processing techniques, such as
Vacuum based deposition (for instance: MBE, PLD) for thin film deposition to exploit certain advantages that may come with 2-dimensional layered structures such as: strain mediated multiferroics, heterostructures, anisotropy.
High pressure solid state synthesis to stabilize metastable or highly distorted structures, or in the case of the Bi-based multiferroics due to the high volatility of bismuth.
List of materials
Most multiferroic materials identified to date are transition-metal oxides, which are compounds made of (usually 3d) transition metals with oxygen and often an additional main-group cation. Transition-metal oxides are a favorable class of materials for identifying multiferroics for a few reasons:
The localised 3d orbitals on the transition metal usually carry a magnetic moment if they are partially filled with electrons.
Oxygen is at a "sweet spot" in the periodic table in that the bonds it makes with transition metals are neither too ionic (like its neighbor fluorine, F) nor too covalent (like its neighbor nitrogen, N). As a result, its bonds with transition metals are rather polarizable, which is favorable for ferroelectricity.
Transition metals and oxygen tend to be earth abundant, non-toxic, stable and environmentally benign.
Many multiferroics have the perovskite structure. This is in part historical, since most of the well-studied ferroelectrics are perovskites, and in part because of the high chemical versatility of the structure.
Below is a list of some of the most well-studied multiferroics with their ferroelectric and magnetic ordering temperatures. When a material shows more than one ferroelectric or magnetic phase transition, the one most relevant for the multiferroic behavior is given.
See also
Ferrotoroidicity
Reviews on Multiferroics
Talks and documentaries on multiferroics
France 24 documentary "Nicola Spaldin: The pioneer behind multiferroics" (12 minutes) Nicola Spaldin: The pioneer behind multiferroics
Seminar "Electric field control of magnetism" by R. Ramesh at U Michigan (1 hour) Ramamoorthy Ramesh | Electric Field Control of Magnetism
Max Roessler prize for multiferroics at ETH Zürich (5 minutes): Nicola Spaldin, Professor of Materials Theory at ETH Zurich
ICTP Colloquium "From materials to cosmology; Studying the early universe under the microscope" by Nicola Spaldin (1 hour) From Materials to Cosmology: Studying the early universe under the microscope - ICTP COLLOQUIUM
Tsuyoshi Kimura's research on "Toward highly functional devices using multiferroics" (4 minutes): Toward highly functional devices using multi-ferroics
"Strong correlation between electricity and magnetism in materials" by Yoshi Tokura (45 minutes): 4th Kyoto Prize Symposium [Materials Science and Engineering Yoshinori Tokura, July 2, 2017]
"Breaking the wall to the next material age", Falling Walls, Berlin (15 minutes): How Materials Science Heralds a New Class of Technologies | NICOLA SPALDIN
References
Condensed matter physics
Materials science
Magnetism
Phases of matter
Hysteresis | Multiferroics | Physics,Chemistry,Materials_science,Engineering | 6,712 |
362,728 | https://en.wikipedia.org/wiki/Negative%20temperature | Certain systems can achieve negative thermodynamic temperature; that is, their temperature can be expressed as a negative quantity on the Kelvin or Rankine scales. This phenomenon was first discovered at the University of Alberta. This should be distinguished from temperatures expressed as negative numbers on non-thermodynamic Celsius or Fahrenheit scales, which are nevertheless higher than absolute zero. A system with a truly negative temperature on the Kelvin scale is hotter than any system with a positive temperature. If a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system. A standard example of such a system is population inversion in laser physics.
Thermodynamic systems with unbounded phase space cannot achieve negative temperatures: adding heat always increases their entropy. The possibility of a decrease in entropy as energy increases requires the system to "saturate" in entropy. This is only possible if the number of high energy states is limited. For a system of ordinary (quantum or classical) particles such as atoms or dust, the number of high energy states is unlimited (particle momenta can in principle be increased indefinitely). Some systems, however (see the examples below), have a maximum amount of energy that they can hold, and as they approach that maximum energy their entropy actually begins to decrease.
History
The possibility of negative temperatures was first predicted by Lars Onsager in 1949.
Onsager was investigating 2D vortices confined within a finite area, and realized that since their positions are not independent degrees of freedom from their momenta, the resulting phase space must also be bounded by the finite area. Bounded phase space is the essential property that allows for negative temperatures, and can occur in both classical and quantum systems. As shown by Onsager, a system with bounded phase space necessarily has a peak in the entropy as energy is increased. For energies exceeding the value where the peak occurs, the entropy decreases as energy increases, and high-energy states necessarily have negative Boltzmann temperature.
The limited range of states accessible to a system with negative temperature means that negative temperature is associated with emergent ordering of the system at high energies. For example in Onsager's point-vortex analysis negative temperature is associated with the emergence of large-scale clusters of vortices. This spontaneous ordering in equilibrium statistical mechanics goes against common physical intuition that increased energy leads to increased disorder.
It seems negative temperatures were first found experimentally in 1951, when Purcell and Pound observed evidence for them in the nuclear spins of a lithium fluoride crystal placed in a magnetic field, and then removed from this field. They wrote:
A system in a negative temperature state is not cold, but very hot, giving up energy to any system at positive temperature put into contact with it. It decays to a normal state through infinite temperature.
Definition of temperature
The absolute temperature (Kelvin) scale can be loosely interpreted as the average kinetic energy of the system's particles. The existence of negative temperature, let alone negative temperature representing "hotter" systems than positive temperature, would seem paradoxical in this interpretation. The paradox is resolved by considering the more rigorous definition of thermodynamic temperature in terms of Boltzmann's entropy formula. This reveals the tradeoff between internal energy and entropy contained in the system, with "coldness", the reciprocal of temperature, being the more fundamental quantity. Systems with a positive temperature will increase in entropy as one adds energy to the system, while systems with a negative temperature will decrease in entropy as one adds energy to the system.
The definition of thermodynamic temperature $T$ is a function of the change in the system's entropy $S$ under reversible heat transfer $dq_\text{rev}$:

$T = \frac{dq_\text{rev}}{dS}.$

Entropy being a state function, the integral of $dS$ over any cyclical process is zero. For a system in which the entropy is purely a function of the system's energy $E$, the temperature can be defined as:

$\frac{1}{T} = \frac{dS}{dE}.$

Equivalently, thermodynamic beta, or "coldness", is defined as

$\beta = \frac{1}{k_\text{B}T} = \frac{1}{k_\text{B}}\frac{dS}{dE},$

where $k_\text{B}$ is the Boltzmann constant.
Note that in classical thermodynamics, $S$ is defined in terms of temperature. This is reversed here: $S$ is the statistical entropy, a function of the possible microstates of the system, and temperature conveys information on the distribution of energy levels among the possible microstates. For systems with many degrees of freedom, the statistical and thermodynamic definitions of entropy are generally consistent with each other.
Some theorists have proposed using an alternative definition of entropy as a way to resolve perceived inconsistencies between statistical and thermodynamic entropy for small systems and systems where the number of states decreases with energy, and the temperatures derived from these entropies are different. It has been argued that the new definition would create other inconsistencies; its proponents have argued that this is only apparent.
Heat and molecular energy distribution
Negative temperatures can only exist in a system where there are a limited number of energy states (see below). As the temperature is increased on such a system, particles move into higher and higher energy states, so that the number of particles in the lower energy states and in the higher energy states approaches equality. (This is a consequence of the definition of temperature in statistical mechanics for systems with limited states.) By injecting energy into these systems in the right fashion, it is possible to create a system in which there are more particles in the higher energy states than in the lower ones. The system can then be characterized as having a negative temperature.
A substance with a negative temperature is not colder than absolute zero, but rather it is hotter than infinite temperature. As Kittel and Kroemer (p. 462) put it,
The corresponding inverse temperature scale, for the quantity $\beta = 1/(k_\text{B}T)$ (where $k_\text{B}$ is the Boltzmann constant), runs continuously from low energy to high as +∞, …, 0, …, −∞. Because it avoids the abrupt jump from +∞ to −∞, $\beta$ is considered more natural than $T$. A system can, however, have multiple negative-temperature regions and thus have −∞ to +∞ discontinuities.
In many familiar physical systems, temperature is associated to the kinetic energy of atoms. Since there is no upper bound on the momentum of an atom, there is no upper bound to the number of energy states available when more energy is added, and therefore no way to get to a negative temperature. However, in statistical mechanics, temperature can correspond to other degrees of freedom than just kinetic energy (see below).
Temperature and disorder
The distribution of energy among the various translational, vibrational, rotational, electronic, and nuclear modes of a system determines the macroscopic temperature. In a "normal" system, thermal energy is constantly being exchanged between the various modes.
However, in some situations, it is possible to isolate one or more of the modes. In practice, the isolated modes still exchange energy with the other modes, but the time scale of this exchange is much slower than for the exchanges within the isolated mode. One example is the case of nuclear spins in a strong external magnetic field. In this case, energy flows fairly rapidly among the spin states of interacting atoms, but energy transfer between the nuclear spins and other modes is relatively slow. Since the energy flow is predominantly within the spin system, it makes sense to think of a spin temperature that is distinct from the temperature associated to other modes.
A definition of temperature can be based on the relationship:

$T = \frac{dq_\text{rev}}{dS}.$

The relationship suggests that a positive temperature corresponds to the condition where entropy, $S$, increases as thermal energy, $q_\text{rev}$, is added to the system. This is the "normal" condition in the macroscopic world, and is always the case for the translational, vibrational, rotational, and non-spin-related electronic and nuclear modes. The reason for this is that there are an infinite number of these types of modes, and adding more heat to the system increases the number of modes that are energetically accessible, and thus increases the entropy.
Examples
Noninteracting two-level particles
The simplest example, albeit a rather nonphysical one, is to consider a system of $N$ particles, each of which can take an energy of either $+\varepsilon$ or $-\varepsilon$ but are otherwise noninteracting. This can be understood as a limit of the Ising model in which the interaction term becomes negligible. The total energy of the system is

$E = \varepsilon \sum_{i=1}^{N} \sigma_i = \varepsilon j,$

where $\sigma_i = \pm 1$ is the sign of the $i$th particle and $j$ is the number of particles with positive energy minus the number of particles with negative energy. From elementary combinatorics, the total number of microstates with this amount of energy is a binomial coefficient:

$\Omega_E = \binom{N}{(N+j)/2}.$

By the fundamental assumption of statistical mechanics, the entropy of this microcanonical ensemble is

$S = k_\text{B} \ln \Omega_E.$

We can solve for thermodynamic beta ($\beta = 1/k_\text{B}T$) by considering it as a central difference without taking the continuum limit:

$k_\text{B}\beta = \frac{S(E+\varepsilon) - S(E-\varepsilon)}{2\varepsilon} = \frac{1}{2\varepsilon}\ln\frac{N-j+1}{N+j+1},$

hence the temperature

$T(E) = \frac{2\varepsilon}{k_\text{B}} \left[\ln\frac{N-j+1}{N+j+1}\right]^{-1}.$
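For positive energies ($j > 0$) the argument of the logarithm is less than one, so $\beta$ and $T$ are negative. This can be checked numerically; the following minimal sketch of the counting argument uses illustrative values $N = 100$ and $\varepsilon = 1$, with the central difference taken in steps of $2\varepsilon$ (a single spin flip changes $E$ by $2\varepsilon$):

```python
import math

def entropy(N, j):
    """S/k_B = ln C(N, (N+j)/2) for N two-level particles with E = eps*j."""
    return math.log(math.comb(N, (N + j) // 2))

N, eps = 100, 1.0
for j in (-60, -20, 0, 20, 60):                                 # E = eps * j
    beta = (entropy(N, j + 2) - entropy(N, j - 2)) / (4 * eps)  # k_B * beta
    print(f"E = {eps*j:+6.1f}   S/k_B = {entropy(N, j):6.2f}   k_B*beta = {beta:+.3f}")
# beta is positive for E < 0, zero at E = 0 (infinite temperature), negative for E > 0.
```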
This entire proof assumes the microcanonical ensemble with energy fixed and temperature being the emergent property. In the canonical ensemble, the temperature is fixed and energy is the emergent property. This leads to ($i$ refers to microstates):

$Z = \sum_i e^{-E_i/k_\text{B}T}, \qquad \langle E \rangle = \frac{1}{Z}\sum_i E_i \, e^{-E_i/k_\text{B}T}, \qquad S = k_\text{B}\ln Z + \frac{\langle E \rangle}{T}.$

Following the previous example, we choose a state with two levels and two particles. Taking the zero of energy at the lower level, this leads to microstates with energies $E_1 = 0$, $E_2 = \varepsilon$, $E_3 = \varepsilon$, and $E_4 = 2\varepsilon$.

The resulting values for $S$, $\langle E \rangle$, and $Z$ all increase with $T$ and never need to enter a negative temperature regime.
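A short numerical sketch (with $k_\text{B} = \varepsilon = 1$, an illustrative choice) confirms the monotonic behaviour:

```python
import math

energies = [0.0, 1.0, 1.0, 2.0]   # two particles, two levels (0 and eps = 1)

def canonical(T, kB=1.0):
    beta = 1.0 / (kB * T)
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    E_avg = sum(E * w for E, w in zip(energies, weights)) / Z
    S = kB * (math.log(Z) + beta * E_avg)
    return Z, E_avg, S

for T in (0.2, 1.0, 5.0, 25.0):
    Z, E_avg, S = canonical(T)
    print(f"T = {T:5.1f}   Z = {Z:6.3f}   <E> = {E_avg:5.3f}   S = {S:5.3f}")
# Z, <E>, and S all rise monotonically with T; beta = 1/(kB*T) never goes negative.
```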
Nuclear spins
The previous example is approximately realized by a system of nuclear spins in an external magnetic field. This allows the experiment to be run as a variation of nuclear magnetic resonance spectroscopy. In the case of electronic and nuclear spin systems, there are only a finite number of modes available, often just two, corresponding to spin up and spin down. In the absence of a magnetic field, these spin states are degenerate, meaning that they correspond to the same energy. When an external magnetic field is applied, the energy levels are split, since those spin states that are aligned with the magnetic field will have a different energy from those that are anti-parallel to it.
In the absence of a magnetic field, such a two-spin system would have maximum entropy when half the atoms are in the spin-up state and half are in the spin-down state, and so one would expect to find the system with close to an equal distribution of spins. Upon application of a magnetic field, some of the atoms will tend to align so as to minimize the energy of the system, thus slightly more atoms should be in the lower-energy state (for the purposes of this example we will assume the spin-down state is the lower-energy state). It is possible to add energy to the spin system using radio frequency techniques. This causes atoms to flip from spin-down to spin-up.
Since we started with over half the atoms in the spin-down state, this initially drives the system towards a 50/50 mixture, so the entropy is increasing, corresponding to a positive temperature. However, at some point, more than half of the spins are in the spin-up position. In this case, adding additional energy reduces the entropy, since it moves the system further from a 50/50 mixture. This reduction in entropy with the addition of energy corresponds to a negative temperature. In NMR spectroscopy, this corresponds to pulses with a pulse width of over 180° (for a given spin). While relaxation is fast in solids, it can take several seconds in solutions and even longer in gases and in ultracold systems; several hours were reported for silver and rhodium at picokelvin temperatures. It is still important to understand that the temperature is negative only with respect to nuclear spins. Other degrees of freedom, such as molecular vibrational, electronic and electron spin levels are at a positive temperature, so the object still has positive sensible heat. Relaxation actually happens by exchange of energy between the nuclear spin states and other states (e.g. through the nuclear Overhauser effect with other spins).
Lasers
This phenomenon can also be observed in many lasing systems, wherein a large fraction of the system's atoms (for chemical and gas lasers) or electrons (in semiconductor lasers) are in excited states. This is referred to as a population inversion.
The Hamiltonian for a single mode of a luminescent radiation field at frequency $\nu$ is

$H = (h\nu)\, a^\dagger a.$

The density operator in the grand canonical ensemble is

$\rho = \frac{e^{-\beta H}}{\mathrm{Tr}\left(e^{-\beta H}\right)}.$

For the system to have a ground state, the trace to converge, and the density operator to be generally meaningful, $\beta H$ must be positive semidefinite. So if $h\nu < 0$, and $H$ is negative semidefinite, then $\beta$ must itself be negative, implying a negative temperature.
Motional degrees of freedom
Negative temperatures have also been achieved in motional degrees of freedom. Using an optical lattice, upper bounds were placed on the kinetic energy, interaction energy and potential energy of cold potassium-39 atoms. This was done by tuning the interactions of the atoms from repulsive to attractive using a Feshbach resonance and changing the overall harmonic potential from trapping to anti-trapping, thus transforming the Bose-Hubbard Hamiltonian from $H$ to $-H$. Performing this transformation adiabatically while keeping the atoms in the Mott insulator regime, it is possible to go from a low entropy positive temperature state to a low entropy negative temperature state. In the negative temperature state, the atoms macroscopically occupy the maximum momentum state of the lattice. The negative temperature ensembles equilibrated and showed long lifetimes in an anti-trapping harmonic potential.
Two-dimensional vortex motion
The two-dimensional systems of vortices confined to a finite area can form thermal equilibrium states at negative temperature, and indeed negative temperature states were first predicted by Onsager in his analysis of classical point vortices. Onsager's prediction was confirmed experimentally for a system of quantum vortices in a Bose-Einstein condensate in 2019.
See also
Negative resistance
Two's complement
References
Further reading
External links
Temperature
Entropy
Magnetism
Laser science | Negative temperature | Physics,Chemistry,Mathematics | 2,810 |
3,905,860 | https://en.wikipedia.org/wiki/Environmental%20stress%20fracture | In materials science, environmental stress fracture or environment assisted fracture is the generic name given to premature failure under the influence of tensile stresses and harmful environments of materials such as metals and alloys, composites, plastics and ceramics.
Metals and alloys exhibit phenomena such as stress corrosion cracking, hydrogen embrittlement, liquid metal embrittlement and corrosion fatigue all coming under this category. Environments such as moist air, sea water and corrosive liquids and gases cause environmental stress fracture. Metal matrix composites are also susceptible to many of these processes.
Plastics and plastic-based composites may suffer swelling, debonding and loss of strength when exposed to organic fluids and other corrosive environments, such as acids and alkalies. Under the influence of stress and environment, many structural materials, particularly the high-specific strength ones become brittle and lose their resistance to fracture. While their fracture toughness remains unaltered, their threshold stress intensity factor for crack propagation may be considerably lowered. Consequently, they become prone to premature fracture because of sub-critical crack growth. This article aims to give a brief overview of the various degradation processes mentioned above.
Stress corrosion cracking
Stress corrosion cracking is a phenomenon where a synergistic action of corrosion and tensile stress leads to brittle fracture of normally ductile materials at generally lower stress levels. During stress corrosion cracking, the material is relatively unattacked by the corrosive agent (no general corrosion, only localized corrosion), but fine cracks form within it. This process has serious implications on the utilisation of the material because the applicable safe stress levels are drastically reduced in the corrosive medium. Season cracking and caustic embrittlement are two stress corrosion cracking processes which affected the serviceability of brass cartridge cases and riveted steel boilers respectively.
Hydrogen embrittlement
Small quantities of hydrogen present inside certain metallic materials make the latter brittle and susceptible to sub-critical crack growth under stress. Hydrogen embrittlement may occur as a side effect of electroplating processes.
Delayed failure, the fracture of a component under stress after an elapsed time, is a characteristic feature of hydrogen embrittlement (2). Hydrogen entry into the material may be effected during plating, pickling, phosphating, melting, casting or welding. Corrosion during service in moist environments generates hydrogen, part of which may enter the metal as atomic hydrogen (H•) and cause embrittlement. The presence of a tensile stress, either inherent or externally applied, is necessary for metals to be damaged. As in the case of stress corrosion cracking, hydrogen embrittlement may also lead to a decrease in the threshold stress intensity factor for crack propagation or an increase in the sub-critical crack growth velocity of the material. The most visible effect of hydrogen in materials is a drastic reduction in ductility during tensile tests. It may increase, decrease or leave unaffected the yield strength of the material.
Hydrogen may also cause serrated yielding in certain metals such as niobium, nickel and some steels (3).
Case studies
One of the worst disasters caused by stress corrosion cracking was the collapse of the Silver Bridge, West Virginia, in 1967, when a single brittle crack formed by rusting grew to criticality. The crack was on one of the tie bar links of one of the suspension chains, and the whole joint failed quickly by overload. The event escalated and the whole bridge disappeared in less than a minute, killing 46 drivers or passengers on the bridge at the time.
See also
References
Mars G. Fontana, Corrosion Engineering, 3rd Edition, McGraw-Hill, Singapore, 1987
A. R. Troiano, Trans. American Society for Metals, 52 (1960), 54
T. K. G. Namboodhiri, Trans. Indian Institute of Metals, 37 (1984), 764
A. S. Tetelman, Fundamental Aspects of Stress Corrosion Cracking, eds., R. W. Staehle, A. J. Forty and D. Van Rooyan, National Association of Corrosion Engineers, Houston, Texas, (1967), 446
N. J. Petch and P. Stables, Nature, 169 (1952), 842
R.A. Oriani, Berichte der Bunsen-Gesellschaft für physikalische Chemie, 76 (1972), 705
C. D. Beachem, Metall. Trans., 3 (1972), 437
D. G. Westlake, Trans. ASM, 62 (1969), 1000
Corrosion
Fracture mechanics | Environmental stress fracture | Chemistry,Materials_science,Engineering | 933 |
17,317,817 | https://en.wikipedia.org/wiki/Task%20allocation%20and%20partitioning%20in%20social%20insects | Task allocation and partitioning is the way that tasks are chosen, assigned, subdivided, and coordinated within a colony of social insects. Task allocation and partitioning gives rise to the division of labor often observed in social insect colonies, whereby individuals specialize on different tasks within the colony (e.g., "foragers", "nurses"). Communication is closely related to the ability to allocate tasks among individuals within a group. This entry focuses exclusively on social insects. For information on human task allocation and partitioning, see division of labour, task analysis, and workflow.
Definitions
Task allocation "... is the process that results in specific workers being engaged in specific tasks, in numbers appropriate to the current situation. [It] operates without any central or hierarchical control..." The concept of task allocation is individual-centric. It focuses on decisions by individuals about what task to perform. However, different biomathematical models give different weights to inter-individual interactions vs. environmental stimuli.
Task partitioning is the division of one task into sequential actions done by more than one individual. The focus here is on the task, and its division, rather than on the individuals performing it. For example, "hygienic behavior" is a task in which worker bees uncap and remove diseased brood cells that may be affected by American foulbrood (Paenibacillus larvae) or the parasitic mite Varroa destructor. In this case, individual bees often focus on either uncapping or removing diseased brood. Therefore, the task is partitioned, and performed by multiple individuals.
Introduction
Social living provides a multitude of advantages to its practitioners, including predation risk reduction, environmental buffering, food procurement, and possible mating advantages. The most advanced form of sociality is eusociality, characterized by overlapping generations, cooperative care of the young, and reproductive division of labor, which includes sterility or near-sterility of the overwhelming majority of colony members. With few exceptions, all the practitioners of eusociality are insects of the orders Hymenoptera (ants, bees, and wasps), Isoptera (termites), Thysanoptera (thrips), and Hemiptera (aphids). Social insects have been extraordinarily successful ecologically and evolutionarily. This success has at its most pronounced produced colonies 1) having a persistence many times the lifespan of most individuals of the colony, and 2) numbering thousands or even millions of individuals. Social insects can exhibit division of labor with respect to non-reproductive tasks, in addition to the aforementioned reproductive one. In some cases this takes the form of markedly different, alternative morphological development (polymorphism), as in the case of soldier castes in ants, termites, thrips, and aphids, while in other cases it is age-based (temporal polyethism), as with honey bee foragers, who are the oldest members of the colony (with the exception of the queen). Evolutionary biologists are still debating the fitness-advantage gained by social insects due to their advanced division of labor and task allocation, but hypotheses include: increased resilience against a fluctuating environment, reduced energy costs of continuously switching tasks, increased longevity of the colony as a whole, or reduced rate of pathogen transmission. Division of labor, large colony sizes, temporally-changing colony needs, and the value of adaptability and efficiency under Darwinian competition, all form a theoretical basis favoring the existence of evolved communication in social insects. Beyond the rationale, there is well-documented empirical evidence of communication related to tasks; examples include the waggle dance of honey bee foragers, trail marking by ant foragers such as the red harvester ants, and the propagation via pheromones of an alarm state in Africanized honey bees.
Worker Polymorphism
One of the most well known mechanisms of task allocation is worker polymorphism, where workers within a colony have morphological differences. This difference in size is determined by the amount of food workers are fed as larvae, and is set once workers emerge from their pupae. Workers may vary just in size (monomorphism) or size and bodily proportions (allometry). An excellent example of monomorphism is in bumblebees (Bombus spp.). Bumblebee workers display a large amount of body size variation which is normally distributed. The largest workers may be ten times the mass of the smallest workers. Worker size is correlated with several tasks: larger workers tend to forage, while smaller workers tend to perform brood care and nest thermoregulation. Size also affects task efficiency. Larger workers are better at learning, have better vision, carry more weight, and fly at a greater range of temperatures. However, smaller workers are more resistant to starvation. In other eusocial insects as well, worker size can determine what polymorphic role they become. For instance, larger workers in Myrmecocystus mexicanus (a North American species of honeypot ant) tend to become repletes, or workers so engorged with food that they become immobile and act as living food storage for the rest of the colony.
In many ants and termites, on the other hand, workers vary in both size and bodily proportions, which have a bimodal distribution. This is present in approximately one in six ant genera. In most of these there are two developmentally distinct pathways, or castes, into which workers can develop. Typically members of the smaller caste are called minors and members of the larger caste are called majors or soldiers. There is often variation in size within each caste. The term soldiers may be apt, as in Cephalotes, but in many species members of the larger caste act primarily as foragers or food processors. In a few ant species, such as certain Pheidole species, there is a third caste, called supersoldiers.
Temporal polyethism
Temporal polyethism is a mechanism of task allocation, and is ubiquitous among eusocial insect colonies. Tasks in a colony are allocated among workers based on their age. Newly emerged workers perform tasks within the nest, such as brood care and nest maintenance, and progress to tasks outside the nest, such as foraging, nest defense, and corpse removal as they age. In honeybees, the youngest workers exclusively clean cells, which is then followed by tasks related to brood care and nest maintenance from about 2–11 days of age. From 11–20 days, they transition to receiving and storing food from foragers, and at about 20 days workers begin to forage. Similar temporal polyethism patterns can be seen in primitive species of wasps, such as Ropalidia marginata as well as the eusocial wasp Vespula germanica. Young workers feed larvae, and then transition to nest building tasks, followed by foraging. Many species of ants also display this pattern. This pattern is not rigid, though. Workers of certain ages have strong tendencies to perform certain tasks, but may perform other tasks if there is enough need. For instance, removing young workers from the nest will cause foragers, especially younger foragers, to revert to tasks such as caring for brood. These changes in task preference are caused by epigenetic changes over the life of the individual. Honeybee workers of different ages show substantial differences in DNA methylation, which causes differences in gene expression. Reverting foragers to nurses by removing younger workers causes changes in DNA methylation similar to younger workers.
Temporal polyethism is not adaptive because of maximized efficiency; indeed older workers are actually more efficient at brood care than younger workers in some ant species. Rather it allows workers with the lowest remaining life expectancy to perform the most dangerous tasks. Older workers tend to perform riskier tasks, such as foraging, which has high risks of predation and parasitism, while younger workers perform less dangerous tasks, such as brood care. If workers experience injuries, which shortens their life expectancies, they will start foraging sooner than healthy workers of the same age.
Response-Threshold Model
A dominant theory of explaining the self-organized division of labor in social insect societies such as honey bee colonies is the Response-Threshold Model. It predicts that individual worker bees have inherent thresholds to stimuli associated with different tasks. Individuals with the lowest thresholds will preferentially perform that task. Stimuli could include the “search time” that elapses while a foraging bee waits to unload her nectar and pollen to a receiver bee at the hive, the smell of diseased brood cells, or any other combination of environmental inputs that an individual worker bee encounters. The Response-Threshold Model only provides for effective task allocation in the honey bee colony if thresholds are varied among individual workers. This variation originates from the considerable genetic diversity among worker daughters of a colony due to the queen’s multiple matings.
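The model's logic can be sketched in a few lines of code. The following is a hypothetical minimal simulation, not a published implementation: each worker carries a fixed threshold per task, stimuli build up as colony needs go unmet, and workers engage the task whose stimulus most exceeds their threshold, which in turn lowers that stimulus:

```python
import random

random.seed(1)
TASKS = ["foraging", "brood care"]
N_WORKERS = 20

# Key ingredient of the model: inter-individual variation in response thresholds.
thresholds = [{t: random.uniform(0.1, 1.0) for t in TASKS} for _ in range(N_WORKERS)]
stimulus = dict.fromkeys(TASKS, 0.0)
acts = [dict.fromkeys(TASKS, 0) for _ in range(N_WORKERS)]

for step in range(200):
    for t in TASKS:
        stimulus[t] += 0.05                       # unmet colony needs accumulate
    for i, worker in enumerate(thresholds):
        # Engage the task whose stimulus most exceeds this worker's threshold.
        excess, task = max((stimulus[t] - worker[t], t) for t in TASKS)
        if excess > 0:
            stimulus[task] = max(0.0, stimulus[task] - 0.02)  # work lowers the stimulus
            acts[i][task] += 1

for t in TASKS:
    busiest = max(range(N_WORKERS), key=lambda i, t=t: acts[i][t])
    print(f"{t}: most engaged worker has threshold {thresholds[busiest][t]:.2f}")
```

Workers with the lowest thresholds end up performing a task preferentially; removing them lets the stimulus rise until higher-threshold nestmates respond, mirroring the flexibility described above.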
Network representation of information flow and task allocation
To explain how colony-level complexity arises from the interactions of several autonomous individuals, a network-based approach has emerged as a promising area of social insect research. Social insect colonies can be viewed as a self-organized network, in which interacting elements (i.e. nodes) communicate with each other. As decentralized networks, colonies are capable of distributing information rapidly which facilitates robust responsiveness to their dynamic environments. The efficiency of information flow is critical for colony-level flexibility because worker behavior is not controlled by a centralized leader but rather is based on local information.
Social insect networks are often non-randomly distributed, wherein a few individuals act as ‘hubs,’ having disproportionately more connections to other nestmates than other workers in the colony. In harvester ants, the total interactions per ant during recruitment for outside work is right-skewed, meaning that some ants are more highly connected than others. Computer simulations of this particular interaction network demonstrated that inter-individual variation in connectivity patterns expedites information flow among nestmates.
Task allocation within a social insect colony can be modeled using a network-based approach, in which workers are represented by nodes, which are connected by edges that signify inter-node interactions. Workers performing a common task form highly connected clusters, with weaker links across tasks. These weaker, cross-task connections are important for allowing task-switching to occur between clusters. This approach is potentially problematic because connections between workers are not permanent, and some information is broadcast globally, e.g. through pheromones, and therefore does not rely on interaction networks. One alternative approach to avoid this pitfall is to treat tasks as nodes and workers as fluid connections.
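As a minimal illustration of the worker-as-node representation (with invented interaction data), the degree distribution of a small network can be computed directly:

```python
from collections import defaultdict

# Workers are nodes; undirected edges record pairwise interactions (invented data).
interactions = {
    ("w1", "w2"), ("w1", "w3"), ("w1", "w4"), ("w1", "w5"),  # w1 acts as a hub
    ("w2", "w3"), ("w4", "w5"), ("w5", "w6"),
}

degree = defaultdict(int)
for a, b in interactions:
    degree[a] += 1
    degree[b] += 1

# A right-skewed degree distribution: a few highly connected individuals,
# as in the harvester-ant recruitment network described above.
for worker, d in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(worker, d)
```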
To demonstrate how time and space constraints of individual-level interactions affect colony function, social insect network approaches can also incorporate spatiotemporal dynamics. These effects can impose upper bounds to information flow rate in the network. For example, the rate of information flow through Temnothorax rugatulus ant colonies is slower than would be predicted if time spent traveling and location within the nest were not considered. In Formica fusca L. ant colonies, a network analysis of spatial effects on feeding and the regulation of food storage revealed that food is distributed heterogeneously within colony, wherein heavily loaded workers are located centrally within the nest and those storing less food were located at the periphery.
Studies of inter-nest pheromone trail networks maintained by super-colonies of Argentine ants (Linepithema humile) have shown that different colonies establish networks with very similar topologies. Insights from these analyses revealed that these networks – which are used to guide workers transporting brood, workers and food between nests – are formed through a pruning process, in which individual ants initially create a complex network of trails, which are then refined to eliminate extraneous edges, resulting in a shorter, more efficient inter-nest network.
Long-term stability of interaction networks has been demonstrated in Odontomachus hastatus ants, in which initially highly connected ants remain highly connected over an extended time period. Conversely, Temnothorax rugatulus ant workers are not persistent in their interactive role, which might suggest that social organization is regulated differently among different eusocial species.
A network is pictorially represented as a graph, but can equivalently be represented as an adjacency list or adjacency matrix. Traditionally, workers are the nodes of the graph, but Fewell prefers to make the tasks the nodes, with workers as the links. O'Donnell has coined the term "worker connectivity" to stand for "communicative interactions that link a colony's workers in a social network and affect task performance". He has pointed out that connectivity provides three adaptive advantages compared to individual direct perception of needs:
It increases both the physical and temporal reach of information. With connectivity, information can travel farther and faster, and additionally can persist longer, including both direct persistence (i.e. through pheromones), memory effects, and by initiating a sequence of events.
It can help overcome task inertia and burnout, and push workers into performing hazardous tasks. For reasons of indirect fitness, this latter stimulus should not be necessary if all workers in the colony are highly related genetically, but that is not always the case.
Key individuals may possess superior knowledge, or have catalytic roles. Examples, respectively, are a sentry who has detected an intruder, or the colony queen.
O'Donnell provides a comprehensive survey, with examples, of factors that have a large bearing on worker connectivity. They include:
graph degree
size of the interacting group, especially if the network has a modular structure
sender distribution (i.e. a small number of controllers vs. numerous senders)
strength of the interaction effect, which includes strength of the signal sent, recipient sensitivity, and signal persistence (i.e. pheromone signal vs. sound waves)
recipient memory, and its decay function
socially-transmitted inhibitory signals, as not all interactions provide positive stimulus
specificity of both the signal and recipient response
signal and sensory modalities, and activity and interaction rates
Task taxonomy and complexity
Anderson, Franks, and McShea have broken down insect tasks (and subtasks) into a hierarchical taxonomy; their focus is on task partitioning and its complexity implications. They classify tasks as individual, group, team, or partitioned; classification of a task depends on whether there are multiple vs. individual workers, whether there is division of labor, and whether subtasks are done concurrently or sequentially. Note that in their classification, in order for an action to be considered a task, it must contribute positively to inclusive fitness; if it must be combined with other actions to achieve that goal, it is considered to be a subtask. In their simple model, they award 1, 2, or 3 points to the different tasks and subtasks, depending on its above classification. Summing all tasks and subtasks point values down through all levels of nesting allows any task to be given a score that roughly ranks relative complexity of actions. See also the review of task partitioning by Ratnieks and Anderson.
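The summed-score idea can be sketched recursively. The point values and the example task structure below are illustrative assumptions, not Anderson, Franks, and McShea's published assignments:

```python
# Hypothetical point values per classification (illustrative only).
POINTS = {"individual": 1, "group": 2, "partitioned": 2, "team": 3}

def complexity(task):
    """Sum the point values of a task and all of its nested subtasks."""
    return POINTS[task["kind"]] + sum(complexity(s) for s in task.get("subtasks", []))

# Invented example: a partitioned task whose sequential subtasks are an
# individual task and a team task that itself contains an individual subtask.
example_task = {
    "kind": "partitioned",
    "subtasks": [
        {"kind": "individual"},
        {"kind": "team", "subtasks": [{"kind": "individual"}]},
    ],
}
print(complexity(example_task))  # -> 2 + 1 + (3 + 1) = 7
```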
Note: model-building
All models are simplified abstractions of the real-life situation. There exists a basic tradeoff between model precision and parameter precision. A fixed amount of information collected, will, if split amongst the many parameters of an overly precise model, result in at least some of the parameters being represented by inadequate sample sizes. Because of the often limited quantities and limited precision of data from which to calculate parameters values in non-human behavior studies, such models should generally be kept simple. Therefore, we generally should not expect models for social insect task allocation or task partitioning to be as elaborate as human workflow ones, for example.
Metrics for division of labor
With increased data, more elaborate metrics for division of labor within the colony become possible. Gorelick and Bertram survey the applicability of metrics taken from a wide range of other fields. They argue that a single output statistic is desirable, to permit comparisons across different population sizes and different numbers of tasks. But they also argue that the input to the function should be a matrix representation (of time spent by each individual on each task), in order to provide the function with better data. They conclude that "... normalized matrix-input generalizations of Shannon's and Simpson's index ... should be the indices of choice when one wants to simultaneously examine division of labor amongst all individuals in a population". Note that these indexes, used as metrics of biodiversity, now find a place measuring division of labor.
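As a sketch of what a matrix-input statistic looks like, the following computes a normalized Shannon-entropy-based division-of-labor index from a worker-by-task matrix of time shares; the toy data and the exact normalization are illustrative assumptions rather than Gorelick and Bertram's precise formulation:

```python
import math

# Rows: workers; columns: tasks; entries: share of time on each task (toy data).
time_matrix = [
    [0.9, 0.1],   # near-specialist on task 0
    [0.8, 0.2],
    [0.1, 0.9],   # near-specialist on task 1
    [0.5, 0.5],   # generalist
]

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

n_tasks = len(time_matrix[0])
# 1 = every worker a pure specialist; 0 = every worker divides time evenly.
specialization = 1 - sum(shannon(row) for row in time_matrix) / (
    len(time_matrix) * math.log(n_tasks))
print(f"division-of-labor index: {specialization:.2f}")
```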
See also
Patterns of self-organization in ants
Network theory
References
Further reading
Behavioral ecology
Superorganisms
Insect behavior | Task allocation and partitioning in social insects | Biology | 3,437 |
27,698,347 | https://en.wikipedia.org/wiki/Gamma%20scale | The γ (gamma) scale is a non-octave repeating musical scale invented by Wendy Carlos while preparing Beauty in the Beast (1986) though it does not appear on the album. It is derived from approximating just intervals using multiples of a single interval without, as is standard in equal temperaments, requiring an octave (2:1). It may be approximated by splitting the perfect fifth (3:2) into 20 equal parts (3:2≈35.1 cents), by splitting the neutral third into two equal parts, or ten equal parts of approximately 35.1 cents each () for 34.188 steps per octave.
The scale step may also precisely be derived from using 20:11 (B, 1035 cents) to approximate an interval equal to 6:5 (E, 315.64 cents). Thus the step is approximately 35.099 cents and there are 34.1895 steps per octave.
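The quoted figures are easy to check numerically; a minimal sketch (no external libraries):

```python
import math

step = 1200 * math.log2(3 / 2) / 20        # perfect fifth split into 20 equal parts
print(f"step size: {step:.3f} cents")            # ~35.098 cents
print(f"steps per octave: {1200 / step:.4f}")    # ~34.190, close to the 34.188-34.1895
                                                 # figures above (which use 35.099 cents)
```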
"It produces nearly perfect triads." "A 'third flavor', sort of intermediate to 'alpha' and 'beta', although a melodic diatonic scale is easily available."
See also
Alpha scale
Beta scale
Delta scale
Bohlen–Pierce scale
Gamma chord
References
Equal temperaments
Non–octave-repeating scales
Wendy Carlos | Gamma scale | Physics | 263 |
2,329,809 | https://en.wikipedia.org/wiki/Zeiss%20projector | A Zeiss projector is one of a line of planetarium projectors manufactured by the Carl Zeiss Company.
Main models include Copernican (1924), Model I (1925), Model II (1926), Model III (1957), Model IV (1957), Model V (1965), Model VI (1968), Spacemaster (1970), Cosmorana (1984), Skymaster ZKP2 (1977), and Skymaster ZKP3 (1993).
The first modern planetarium projectors were designed and built in 1924 by the Zeiss Works of Jena, Germany. Zeiss projectors are designed to sit in the middle of a dark, dome-covered room and project an accurate image of the stars and other astronomical objects on the dome. They are generally large, complicated, and imposing machines.
The first Zeiss Mark I projector (the first planetarium projector in the world) was installed in the Deutsches Museum in Munich in August 1923. It possessed a distinctive appearance, with a single sphere of projection lenses supported above a large, angled "planet cage". Marks II through VI were similar in appearance, using two spheres of star projectors separated along a central axis that contained projectors for the planets. Beginning with Mark VII, the central axis was eliminated and the two spheres were merged into a single, egg-shaped projection unit.
History of development and production
The Mark I was created in 1923–1924 and was the world's first modern planetarium projector. The Mark II was developed during the 1930s by Carl Zeiss AG in Jena. Following WWII division of Germany and the founding of Carl Zeiss (West Germany) in Oberkochen (while the original Jena plant was located in East Germany), each factory developed its own line of projectors.
Marks III – VI were developed in Oberkochen (West Germany) from 1957 to 1989. Meanwhile, the East German facility in Jena developed the ZKP projector line. The Mark VII was developed in 1993 and was the first joint project of the two Zeiss factories following German reunification.
Zeiss currently manufactures three main models of planetarium projectors. The flagship Universarium models continue the "Mark" model designation and use a single "starball" design, where the fixed stars are projected from a single egg-shaped projector, and moving objects such as planets have their own independent projectors or are projected using a full-dome digital projection system. The Starmaster line of projectors is designed for smaller domes than the Universarium, but also uses the single-starball design. The Skymaster ZKP projectors are designed for the smallest domes and use a "dumbbell" design similar to the Mark II–VI projectors, where two smaller starballs for the northern and southern hemispheres are connected by a truss containing projectors for planets and other moving objects.
List of planetariums that have featured a Zeiss projector
Between 1923 and 2011, Zeiss manufactured a total of 631 projectors, so any list of installations is necessarily highly incomplete.
See also
List of planetariums
Planetarium
Planetarium Jena
Walther Bauersfeld
References
External links
Zeiss Planetariums
Planetarium projection
Carl Zeiss AG | Zeiss projector | Astronomy | 668 |
69,082,863 | https://en.wikipedia.org/wiki/Speed%20limits%20in%20Malta | The general speed limits in Malta are 50 km/h within built-up areas and 80 km/h outside built-up areas.
References
Malta
Roads in Malta | Speed limits in Malta | Physics | 17 |
60,623,303 | https://en.wikipedia.org/wiki/Planar%20SAT | In computer science, the planar 3-satisfiability problem (abbreviated PLANAR 3SAT or PL3SAT) is an extension of the classical Boolean 3-satisfiability problem to a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula—whose incidence graph consisting of variables and clauses can be embedded on a plane—can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
Like 3SAT, PLANAR 3SAT is NP-complete, and it is commonly used in reductions.
Definition
Every 3SAT problem can be converted to an incidence graph in the following manner: for every variable x, the graph has one corresponding node, and for every clause c, the graph has one corresponding node. An edge is created between a variable node x and a clause node c whenever x or ¬x appears in c. Positive and negative literals are distinguished using edge colorings.
The formula is satisfiable if and only if there is a way to assign TRUE or FALSE to each variable node such that every clause node is connected to at least one TRUE variable by a positive edge or to at least one FALSE variable by a negative edge.
A planar graph is a graph that can be drawn on the plane in a way such that no two of its edges cross each other. Planar 3SAT is a subset of 3SAT in which the incidence graph of the variables and clauses of a Boolean formula is planar. It is important because it is a restricted variant, and is still NP-complete. Many problems (for example games and puzzles) cannot represent non-planar graphs. Hence, Planar 3SAT provides a way to prove those problems to be NP-hard.
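As a concrete illustration of the construction above, the sketch below builds the incidence graph of a small 3-CNF formula and tests it for planarity with the networkx library. The encoding of literals as signed integers and the storage of polarity as an edge attribute (standing in for the edge coloring) are conventions chosen here, not part of the formal definition.

```python
# Build the incidence graph of a CNF formula and test planarity.
import networkx as nx

def incidence_graph(clauses):
    """clauses: tuples of nonzero ints; (1, -2, 3) means x1 OR NOT x2 OR x3."""
    g = nx.Graph()
    for j, clause in enumerate(clauses):
        for lit in clause:
            # The 'positive' attribute stands in for the edge coloring.
            g.add_edge(f"x{abs(lit)}", f"c{j}", positive=(lit > 0))
    return g

formula = [(1, 2, 3), (-1, -2, 3), (2, -3)]
is_planar, _cert = nx.check_planarity(incidence_graph(formula))
print(is_planar)  # True: this graph is K3,3 minus one edge, which is planar
```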
Proof of NP-completeness
The following proof sketch follows the proof of D. Lichtenstein.
Trivially, PLANAR 3SAT is in NP. It is thus sufficient to show that it is NP-hard via reduction from 3SAT.
This proof makes use of standard propositional equivalences that allow equality and exclusive-or constraints between variables to be expressed as clauses.
First, draw the incidence graph of the 3SAT formula. Since no two variables or clauses are connected, the resulting graph will be bipartite. Suppose the resulting graph is not planar. For every crossing of edges (a, c1) and (b, c2), introduce nine new variables a1, b1, α, β, γ, δ, ξ, a2, b2, and replace the crossing with a crossover gadget, a fixed pattern of new clauses over these variables (the explicit clause list and accompanying diagram are given by Lichtenstein).
If the edge (a, c1) is inverted in the original graph, (a1, c1) should be inverted in the crossover gadget. Similarly if the edge (b, c2) is inverted in the original, (b1, c2) should be inverted.
One can show that the gadget's clauses are satisfiable if and only if the truth values of a and b are transmitted consistently across the crossing (that is, a1 agrees with a and b1 agrees with b).
This construction shows that each crossing can be converted into its planar equivalent using only a constant number of new variables and clauses. Since the number of crossings is polynomial in the number of clauses and variables, the reduction is polynomial.
Variants and related problems
Planar 3SAT with a variable-cycle: Here, in addition to the incidence-graph, the graph also includes a cycle going through all the variables, and each clause is either inside or outside this cycle. The resulting graph must still be planar. This problem is NP-complete.
However, if the problem is further restricted such that all clauses are inside the variable-cycle, or all clauses are outside it, then the problem can be solved in polynomial time using dynamic programming.
Planar 3SAT with literals: The bipartite incidence graph of the literals and clauses is planar too. This problem is NP-complete.
Planar rectilinear 3SAT: Vertices of the graph are represented as horizontal segments. Each variable lies on the x-axis, while each clause lies above or below the x-axis. Every connection between a variable and a clause must be a vertical segment. Each clause has at most three connections to variables and is either all-positive or all-negative. This problem is NP-complete.
Planar monotone rectilinear 3SAT: This is a variant of planar rectilinear 3SAT where the clauses above the x-axis are all-positive and the clauses below the x-axis are all-negative. This problem is NP-complete and remains NP-complete when each clause containing three variables has two neighboring variables that are adjacent on the x-axis (i.e., no other variable appears horizontally between the neighboring variables).
Planar 1-in-3SAT: This is the planar equivalent of 1-in-3SAT. It is NP-complete.
Planar positive rectilinear 1-in-3SAT: This is the planar equivalent of positive 1-in-3SAT. It is NP-complete.
Planar NAE 3SAT: This problem is the planar equivalent of NAE 3SAT. Unlike the other variants, this problem can be solved in polynomial time. The proof is by reduction to planar maximum cut.
Planar circuit SAT: This is a variant of circuit SAT in which the circuit computing the SAT formula is a planar directed acyclic graph. Note that this is a different graph from the incidence graph of the formula. This problem is NP-complete.
Reductions
Logic puzzles
Reduction from Planar SAT is a commonly used method in NP-completeness proofs of logic puzzles. Examples include Fillomino, Nurikabe, Shakashaka, Tatamibari, and Tentai Show. These proofs involve constructing gadgets that can simulate wires carrying signals (Boolean values), input and output gates, signal splitters, NOT gates, and AND (or OR) gates in order to represent the planar embedding of any Boolean circuit. Since the circuits are planar, crossovers of wires do not need to be considered.
Flat folding of fixed-angle chains
This is the problem of deciding whether a polygonal chain with fixed edge lengths and angles has a planar configuration without crossings. It has been proven to be strongly NP-hard via a reduction from planar monotone rectilinear 3SAT.
Minimum edge-length partition
This is the problem of partitioning a polygon into simpler polygons such that the total length of all edges used in the partition is as small as possible.
When the figure is a rectilinear polygon and it should be partitioned into rectangles, and the polygon is hole-free, then the problem is polynomial. But if it contains holes (even degenerate holes—single points), the problem is NP-hard, by reduction from Planar SAT. The same holds if the figure is any polygon and it should be partitioned into convex figures.
A related problem is minimum-weight triangulation: finding a triangulation of minimal total edge length. The decision version of this problem is proven to be NP-complete via a reduction from a variant of Planar 1-in-3SAT.
References
Satisfiability problems
NP-complete problems
Electronic design automation
Boolean algebra | Planar SAT | Mathematics | 1,583 |
65,270,577 | https://en.wikipedia.org/wiki/Pranav%20Sharma | Pranav Sharma (प्रणव शर्मा) is an astronomer and science historian known for his work on the history of the Indian Space Program. He has curated the Space Museum at the B. M. Birla Science Centre (Hyderabad, India). Sharma was in charge of the history of the Indo-French scientific partnership project supported by the Embassy of France in India. He is a national award-winning science communicator and has worked extensively on the popularization of astronomy education in India.
He also served as the Policy and Diplomacy Advisor to United Nations International Computation Centre and Member Secretary (Policy, Transdisciplinary Disruptive Science, and Communications) for G20-Science20.
Sharma is the Co-Lead on the History of Data-Driven Astronomy Project, Adjunct Researcher at Raman Research Institute, Scientific Advisor to Arc Ventures, Science Diplomacy Consultant to Indian National Science Academy, and Visiting Faculty at The Druk Gyalpo's Institute, Bhutan. He is an Associate Member of the Astronomical Society of India.
He has co-authored the book Essential Astrophysics: Interstellar Medium to Stellar Remnants, CRC Press, 2019.
References
Astronomy
Social work
Indian scientists
Living people
Year of birth missing (living people) | Pranav Sharma | Astronomy | 253 |
52,909,236 | https://en.wikipedia.org/wiki/Biryukov%20equation | In the study of dynamical systems, the Biryukov equation (or Biryukov oscillator), named after Vadim Biryukov (1946), is a non-linear second-order differential equation used to model damped oscillators.
The equation is given by

y″ + f(y)·y′ + y = 0,      (1)

where f(y) is a piecewise constant function which is positive, except for small y:

f(y) = −F for |y| ≤ Y0, and f(y) = F for |y| > Y0, with F = const > 0.
Eq. (1) is a special case of the Liénard equation; it describes auto-oscillations.
The solution of (1) on each separate time interval, where f(y) is constant, is given by

y(t) = A exp(s1·t) + B exp(s2·t),      (2)

where exp denotes the exponential function and s1, s2 are the roots of the characteristic equation s² + f·s + 1 = 0, that is, s1,2 = (−f ± √(f² − 4))/2. Expression (2) can be used for real and complex values of s1 and s2.
The solutions over the first and the second half-periods each take the form (2), with their own pairs of constants of integration. The full solution therefore contains four constants of integration; in addition, the period T and the boundary time between the two half-periods need to be found. A boundary condition is derived from the continuity of y(t) and dy/dt. The solution of (1) in the stationary mode is thus obtained by solving a system of algebraic equations in these unknowns.
The integration constants are obtained by the Levenberg–Marquardt algorithm.
With f(y) = μ(y² − 1), Eq. (1) is named the Van der Pol oscillator. Its solution cannot be expressed by elementary functions in closed form.
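A minimal numerical sketch, assuming the reconstructed piecewise-constant form of f(y) given above and the illustrative values F = 1 and Y0 = 1 (not taken from the literature), shows the auto-oscillation: a small initial perturbation grows and settles onto a limit cycle.

```python
# Integrate y'' + f(y) y' + y = 0 with piecewise-constant damping.
from scipy.integrate import solve_ivp

F, Y0 = 1.0, 1.0          # illustrative values

def f(y):
    return -F if abs(y) <= Y0 else F   # negative damping near y = 0

def rhs(t, state):
    y, dy = state
    return [dy, -f(y) * dy - y]

sol = solve_ivp(rhs, (0.0, 40.0), [0.1, 0.0], max_step=0.01)
# Starting from a small perturbation, the amplitude grows and saturates:
print(f"min y = {sol.y[0].min():.2f}, max y = {sol.y[0].max():.2f}")
```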
References
Differential equations
Analog circuits | Biryukov equation | Mathematics,Engineering | 267 |
240,123 | https://en.wikipedia.org/wiki/Plasticity%20%28physics%29 | In physics and materials science, plasticity (also known as plastic deformation) is the ability of a solid material to undergo permanent deformation, a non-reversible change of shape in response to applied forces. For example, a solid piece of metal being bent or pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is known as yielding.
Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, and foams. However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. In cellular materials such as liquid foams or biological tissues, plasticity is mainly a consequence of bubble or cell rearrangements, notably T1 processes.
For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner. Each increment of load is accompanied by a proportional increment in extension. When the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region; now when the load is removed, some degree of extension will remain.
Elastic deformation, however, is an approximation and its quality depends on the time frame considered and the loading speed. If the deformation includes an elastic component, it is also often referred to as "elasto-plastic deformation" or "elastic-plastic deformation".
Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads. Plastic materials that have been hardened by prior deformation, such as cold forming, may need increasingly higher stresses to deform further. Generally, plastic deformation is also dependent on the deformation speed, i.e. higher stresses usually have to be applied to increase the rate of deformation. Such materials are said to deform visco-plastically.
Contributing properties
The plasticity of a material increases with its ductility and malleability.
Physical mechanisms
In metals
Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is the plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece.
Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals. Most metals are rendered plastic by heating and hence shaped hot.
Slip systems
Crystalline materials contain uniform planes of atoms organized with long-range order. Planes may slip past each other along their close-packed directions, as is shown on the slip systems page. The result is a permanent change of shape within the crystal and plastic deformation. The presence of dislocations increases the likelihood of planes slipping.
Reversible plasticity
On the nanoscale the primary plastic deformation in simple face-centered cubic metals is reversible, as long as there is no material transport in form of cross-slip. Shape-memory alloys such as Nitinol wire also exhibit a reversible form of plasticity which is more properly called pseudoelasticity.
Shear banding
The presence of other defects within a crystal may entangle dislocations or otherwise prevent them from gliding. When this happens, plasticity is localized to particular regions in the material. For crystals, these regions of localized plasticity are called shear bands.
Microplasticity
Microplasticity is a local phenomenon in metals. It occurs for stress values where the metal is globally in the elastic domain while some local areas are in the plastic domain.
Amorphous materials
Crazing
In amorphous materials, the discussion of "dislocations" is inapplicable, since the entire material lacks long range order. These materials can still undergo plastic deformation. Since amorphous materials, like polymers, are not well-ordered, they contain a large amount of free volume, or wasted space. Pulling these materials in tension opens up these regions and can give materials a hazy appearance. This haziness is the result of crazing, where fibrils are formed within the material in regions of high hydrostatic stress. The material may go from an ordered appearance to a "crazy" pattern of strain and stretch marks.
Cellular materials
These materials plastically deform when the bending moment exceeds the fully plastic moment. This applies to open cell foams where the bending moment is exerted on the cell walls. The foams can be made of any material with a plastic yield point which includes rigid polymers and metals. This method of modeling the foam as beams is only valid if the ratio of the density of the foam to the density of the matter is less than 0.3. This is because beams yield axially instead of bending. In closed cell foams, the yield strength is increased if the material is under tension because of the membrane that spans the face of the cells.
Soils and sand
Soils, particularly clays, display a significant amount of inelasticity under load. The causes of plasticity in soils can be quite complex and are strongly dependent on the microstructure, chemical composition, and water content. Plastic behavior in soils is caused primarily by the rearrangement of clusters of adjacent grains.
Rocks and concrete
Inelastic deformations of rocks and concrete are primarily caused by the formation of microcracks and sliding motions relative to these cracks. At high temperatures and pressures, plastic behavior can also be affected by the motion of dislocations in individual grains in the microstructure.
Time-independent yielding and plastic flow in crystalline materials
Time-independent plastic flow in both single crystals and polycrystals is defined by a critical/maximum resolved shear stress (τCRSS), initiating dislocation migration along parallel slip planes of a single slip system, thereby defining the transition from elastic to plastic deformation behavior in crystalline materials.
Time-independent yielding and plastic flow in single crystals
The critical resolved shear stress for single crystals is defined by Schmid's law, τCRSS = σy/m, where σy is the yield strength of the single crystal and m is the Schmid factor. The Schmid factor comprises two variables, λ and φ, defining the angle between the slip plane direction and the applied tensile force, and the angle between the slip plane normal and the applied tensile force, respectively; in this convention, m = 1/(cos λ · cos φ). Notably, because m > 1, σy > τCRSS.
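A small sketch of this bookkeeping follows, using the convention above in which m = 1/(cos λ · cos φ); the numerical value of τCRSS is illustrative.

```python
# Schmid's law in the tau_CRSS = sigma_y / m convention used above.
import math

def schmid_m(phi_deg: float, lam_deg: float) -> float:
    """phi: slip-plane normal vs. tensile axis; lam: slip direction vs. axis."""
    return 1.0 / (math.cos(math.radians(phi_deg)) * math.cos(math.radians(lam_deg)))

m = schmid_m(45.0, 45.0)   # geometrically softest orientation: m = 2
tau_crss = 30.0            # MPa, illustrative value for some single crystal
sigma_y = tau_crss * m     # yield strength for this orientation
print(round(m, 3), round(sigma_y, 1))  # 2.0 60.0 (sigma_y > tau_CRSS since m > 1)
```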
Critical resolved shear stress dependence on temperature, strain rate, and point defects
There are three characteristic regions of the critical resolved shear stress as a function of temperature. In the low temperature region 1 (T ≤ 0.25Tm), the strain rate must be high to achieve a high τCRSS, which is required to initiate dislocation glide and, equivalently, plastic flow. In region 1, the critical resolved shear stress has two components: athermal (τa) and thermal (τ*) shear stresses, arising from the stress required to move dislocations in the presence of other dislocations, and from the resistance of point defect obstacles to dislocation migration, respectively. At T = T*, the moderate temperature region 2 (0.25Tm < T < 0.7Tm) begins, where the thermal shear stress component τ* → 0, representing the elimination of point defect impedance to dislocation migration. Thus the temperature-independent critical resolved shear stress τCRSS = τa remains so until region 3 is reached. Notably, in the moderate temperature region 2, time-dependent plastic deformation (creep) mechanisms such as solute drag should be considered. Furthermore, in the high temperature region 3 (T ≥ 0.7Tm), the strain rate ε̇ can be low, contributing to a low τCRSS; however, plastic flow will still occur due to thermally activated, high temperature, time-dependent plastic deformation mechanisms such as Nabarro–Herring (NH) and Coble diffusional flow, through the lattice and along the single crystal surfaces respectively, as well as dislocation climb-glide creep.
Stages of time-independent plastic flow, post yielding
During the easy glide stage 1, the work hardening rate, defined by the change in shear stress with respect to shear strain (dτ/dγ), is low, representative of the small applied shear stress necessary to induce a large shear strain. Facile dislocation glide and the corresponding flow are attributed to dislocation migration along parallel slip planes only (i.e., a single slip system). Moderate impedance to dislocation migration along parallel slip planes is exhibited according to the weak stress field interactions between these dislocations, which heighten with smaller interplanar spacing. Overall, these migrating dislocations within a single slip system act as weak obstacles to flow, and a modest rise in stress is observed in comparison to the yield stress. During the linear hardening stage 2 of flow, the work hardening rate becomes high, as considerable stress is required to overcome the stress field interactions of dislocations migrating on non-parallel slip planes (i.e., multiple slip systems), which act as strong obstacles to flow. Much stress is required to drive continual dislocation migration, even for small strains. The shear flow stress is directly proportional to the square root of the dislocation density (τflow ∝ ρ^1/2), irrespective of the evolution of dislocation configurations, displaying the reliance of hardening on the number of dislocations present. Regarding this evolution of dislocation configurations, at small strains the dislocation arrangement is a random 3D array of intersecting lines. Moderate strains correspond to cellular dislocation structures with a heterogeneous dislocation distribution: large dislocation density at the cell boundaries and small dislocation density within the cell interiors. At even larger strains the cellular dislocation structure reduces in size until a minimum size is achieved. Finally, the work hardening rate becomes low again in the exhaustion/saturation of hardening stage 3 of plastic flow, as small shear stresses produce large shear strains. Notably, when multiple slip systems are oriented favorably with respect to the applied stress, the τCRSS for these systems may be similar, and yielding may occur according to dislocation migration along multiple slip systems with non-parallel slip planes, so that stage 1 displays a work-hardening rate typically characteristic of stage 2. Lastly, time-independent plastic deformation differs between body-centered cubic transition metals and face-centered cubic metals: in the former, the lattice (Peierls) resistance to dislocation glide is large and strongly temperature-dependent, whereas in the latter, glide is comparatively facile.
Time-independent yielding and plastic flow in polycrystals
Plasticity in polycrystals differs substantially from that in single crystals due to the presence of grain boundary (GB) planar defects, which act as very strong obstacles to plastic flow by impeding dislocation migration along the entire length of the activated slip plane(s). Hence, dislocations cannot pass from one grain to another across the grain boundary. The following sections explore specific GB requirements for extensive plastic deformation of polycrystals prior to fracture, as well as the influence of microscopic yielding within individual crystallites on macroscopic yielding of the polycrystal. The critical resolved shear stress for polycrystals is defined by Schmid’s law as well (τCRSS=σy/ṁ), where σy is the yield strength of the polycrystal and ṁ is the weighted Schmid factor. The weighted Schmid factor reflects the least favorably oriented slip system among the most favorably oriented slip systems of the grains constituting the GB.
Grain boundary constraint in polycrystals
The GB constraint for polycrystals can be explained by considering a grain boundary in the xz plane between two single crystals A and B of identical composition, structure, and slip systems, but misoriented with respect to each other. To ensure that voids do not form between individually deforming grains, the GB constraint for the bicrystal is as follows:
εxxA = εxxB (the x-axial strain at the GB must be equivalent for A and B), εzzA = εzzB (the z-axial strain at the GB must be equivalent for A and B), and εxzA = εxzB (the xz shear strain along the xz-GB plane must be equivalent for A and B). In addition, this GB constraint requires that five independent slip systems be activated per crystallite constituting the GB. Notably, because independent slip systems are defined as slip planes on which dislocation migrations cannot be reproduced by any combination of dislocation migrations along other slip systems' planes, the number of geometrical slip systems for a given crystal system, which by definition can be constructed by slip system combinations, is typically greater than the number of independent slip systems. Significantly, there is a maximum of five independent slip systems for each of the seven crystal systems; however, not all seven crystal systems attain this upper limit. In fact, even within a given crystal system, the composition and Bravais lattice diversify the number of independent slip systems. In cases where the crystallites of a polycrystal do not obtain five independent slip systems, the GB condition cannot be met, and thus the time-independent deformation of individual crystallites results in cracks and voids at the GBs of the polycrystal, and soon fracture is realized. Hence, for a given composition and structure, a single crystal with fewer than five independent slip systems is stronger (exhibiting a greater extent of plasticity) than its polycrystalline form.
Implications of the grain boundary constraint in polycrystals
Although the two crystallites A and B discussed in the above section have identical slip systems, they are misoriented with respect to each other, and therefore misoriented with respect to the applied force. Thus, microscopic yielding within a crystallite interior may occur according to the rules governing single crystal time-independent yielding. Eventually, the activated slip planes within the grain interiors will permit dislocation migration to the GB, where many dislocations then pile up as geometrically necessary dislocations. This pile-up corresponds to strain gradients across individual grains, as the dislocation density near the GB is greater than that in the grain interior, imposing a stress on the adjacent grain in contact. When considering the AB bicrystal as a whole, the most favorably oriented slip system in A will not be that in B, and hence τACRSS ≠ τBCRSS. Paramount is the fact that macroscopic yielding of the bicrystal is prolonged until the higher value of τCRSS between grains A and B is achieved, according to the GB constraint. Thus, for a given composition and structure, a polycrystal with five independent slip systems is stronger (greater extent of plasticity) than its single crystalline form. Correspondingly, the work hardening rate will be higher for the polycrystal than for the single crystal, as more stress is required in the polycrystal to produce strains. Importantly, just as with single crystal flow stress, τflow ∝ ρ^1/2, but the flow stress is also inversely proportional to the square root of the average grain diameter (τflow ∝ d^−1/2). Therefore, the flow stress of a polycrystal, and hence the polycrystal's strength, increases with small grain size. The reason for this is that smaller grains have a relatively smaller number of slip planes to be activated, corresponding to fewer dislocations migrating to the GBs, and therefore less stress induced on adjacent grains due to dislocation pile-up. In addition, for a given volume of polycrystal, smaller grains present more strong-obstacle grain boundaries. These two factors provide an understanding as to why the onset of macroscopic flow in fine-grained polycrystals occurs at larger applied stresses than in coarse-grained polycrystals.
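The two proportionalities quoted above are commonly written as the Taylor hardening relation (τ ∝ √ρ) and the Hall–Petch relation (τ ∝ d^−1/2); the sketch below combines them with generic, assumed material constants rather than data for any specific alloy.

```python
# Combine the two scalings quoted above: Taylor hardening (tau ~ sqrt(rho))
# and Hall-Petch (tau ~ d**-0.5). All constants are generic assumed values,
# not data for any specific alloy.
import math

def flow_stress(rho, d, tau0=10e6, alpha=0.3, G=80e9, b=0.25e-9, k=0.1e6):
    """rho: dislocation density [m^-2]; d: grain diameter [m];
    tau0: friction stress [Pa]; alpha: Taylor constant; G: shear modulus [Pa];
    b: Burgers vector [m]; k: Hall-Petch coefficient [Pa m^0.5]."""
    return tau0 + alpha * G * b * math.sqrt(rho) + k / math.sqrt(d)

# Refining grains from 100 um to 10 um at fixed dislocation density:
for d in (100e-6, 10e-6):
    print(f"d = {d * 1e6:.0f} um -> tau = {flow_stress(1e12, d) / 1e6:.0f} MPa")
```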
Mathematical descriptions
Deformation theory
There are several mathematical descriptions of plasticity. One is deformation theory (see e.g. Hooke's law), where the Cauchy stress tensor (a second-order tensor) is a function of the strain tensor. Although this description is accurate when a small part of matter is subjected to increasing loading (such as strain loading), this theory cannot account for irreversibility.
Ductile materials can sustain large plastic deformations without fracture. However, even ductile metals will fracture when the strain becomes large enough—this is as a result of work hardening of the material, which causes it to become brittle. Heat treatment such as annealing can restore the ductility of a worked piece, so that shaping can continue.
Flow plasticity theory
In 1934, Egon Orowan, Michael Polanyi and Geoffrey Ingram Taylor, roughly simultaneously, realized that the plastic deformation of ductile materials could be explained in terms of the theory of dislocations. The mathematical theory of plasticity, flow plasticity theory, uses a set of non-linear, non-integrable equations to describe the set of changes on strain and stress with respect to a previous state and a small increase of deformation.
Yield criteria
If the stress exceeds a critical value, as was mentioned above, the material will undergo plastic, or irreversible, deformation. This critical stress can be tensile or compressive. The Tresca and the von Mises criteria are commonly used to determine whether a material has yielded. However, these criteria have proved inadequate for a large range of materials and several other yield criteria are also in widespread use.
Tresca criterion
The Tresca criterion is based on the notion that when a material fails, it does so in shear, which is a relatively good assumption when considering metals. Given the principal stress state, we can use Mohr's circle to solve for the maximum shear stresses the material will experience and conclude that the material will fail if

σ1 − σ3 ≥ σ0,

where σ1 is the maximum normal stress, σ3 is the minimum normal stress, and σ0 is the stress under which the material fails in uniaxial loading. A yield surface may be constructed, which provides a visual representation of this concept. Inside the yield surface, deformation is elastic; on the surface, deformation is plastic. It is impossible for a material to have stress states outside its yield surface.
Huber–von Mises criterion
The Huber–von Mises criterion is based on the Tresca criterion but takes into account the assumption that hydrostatic stresses do not contribute to material failure. M. T. Huber was the first to propose the criterion of shear energy. Von Mises solves for an effective stress under uniaxial loading, subtracting out hydrostatic stresses, and states that all effective stresses greater than that which causes material failure in uniaxial loading will result in plastic deformation:

σv = √(((σ1 − σ2)² + (σ2 − σ3)² + (σ3 − σ1)²)/2) ≥ σ0.
Again, a visual representation of the yield surface may be constructed using the above equation, which takes the shape of an ellipse. Inside the surface, materials undergo elastic deformation. Reaching the surface means the material undergoes plastic deformations.
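Both criteria are easy to evaluate for a given principal stress state; the sketch below contrasts them for pure shear, where the Tresca criterion is the more conservative. The yield strength value is illustrative.

```python
# Tresca and von Mises yield checks for a principal stress state.
import math

def tresca_yields(s1, s2, s3, sigma0):
    """Yield when the largest principal stress difference reaches sigma0."""
    return max(s1, s2, s3) - min(s1, s2, s3) >= sigma0

def von_mises_yields(s1, s2, s3, sigma0):
    """Yield when the effective (hydrostatic-free) stress reaches sigma0."""
    sv = math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2)
    return sv >= sigma0

# Pure shear of 135 MPa in principal axes: (tau, -tau, 0); sigma0 = 250 MPa.
tau = 135.0
print(tresca_yields(tau, -tau, 0.0, 250.0))     # True:  2*tau = 270 >= 250
print(von_mises_yields(tau, -tau, 0.0, 250.0))  # False: sqrt(3)*tau ~= 233.8
```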
See also
Yield (engineering)
Atterberg limits
Deformation (mechanics)
Deformation (engineering)
Plastometer
Poisson's ratio
References
Further reading
Solid mechanics
Deformation (mechanics) | Plasticity (physics) | Physics,Materials_science,Engineering | 4,078 |
40,978,917 | https://en.wikipedia.org/wiki/Anthecology | Anthecology, or pollination biology, is the study of pollination as well as the relationships between flowers and their pollinators. Floral biology is a bigger field that includes these studies. Most flowering plants, or angiosperms, are pollinated by animals, and especially by insects. The major flower-frequenting insect taxa include beetles, flies, wasps, bees, ants, thrips, butterflies, and moths. Insects carry out pollination when visiting flowers to obtain nectar or pollen, to prey on other species, or when pseudo-copulating with insect-mimicking flowers such as orchids. Pollination-related interactions between plants and insects are considered mutualistic, and the relationships between plants and their pollinators have likely led to increased diversity of both angiosperms and the animals that pollinate them.
Anthecology brings together many disciplines, such as botany, horticulture, entomology, and ecology.
History
Anthecology began as a descriptive science relying on observation, and more recently has come to rely upon quantitative and experimental studies.
By the 17th century, the sexual nature of plant reproduction was recognized following the work of Nehemiah Grew and the experiments of Rudolf Jakob Camerarius, who showed that pistillate plants need both male and female organs for reproduction. Tulips and maize were popular subjects of study during this time. In 1735, Carl Linnaeus developed a "sexual system" of the classification of seed plants. In the mid-to-late 18th century, Joseph Gottlieb Kölreuter demonstrated that pollen must be transferred from stamen to stigma for reproduction to occur, and also clarified the distinction between nectar and honey.
In the late 18th century, Christian Konrad Sprengel showed evidence that flowers attract insects and reward them with nectar. Following the emergence of the Darwinian theory of evolution in the late 19th century, scientists became keen to the selective advantage of cross-pollination. In an 1873 book, Hermann Müller made detailed observations of the particular relationships between certain insects and certain flowering plants, called pollination syndromes, and additional comprehensive surveys were made by Paul Knuth.
Anthecology went into decline for several decades, but the field was kept alive by several studies including those of honey bees by Karl von Frisch in the mid 20th century.
Anthecology gained a resurgence in the 20th century during the rise of neo-Darwinism. Specific plant-insect interactions were further documented, with an emphasis on tropical anthecology, comparative anthecology, and co-evolution.
Today, the biology of pollination has attracted the attention of scientists, governments, and the media, following observations of rapid pollinator decline in the late 20th century, including the unexplained and sudden disappearance of honey bees in a phenomenon known as colony collapse disorder.
References
External links
Anthecology — non-profit site on anthecology
Insect ecology
Botany
Pollination | Anthecology | Biology | 592 |
72,422,533 | https://en.wikipedia.org/wiki/HD%2036187 | HD 36187, also known as HR 1835, is a solitary, bluish-white hued star located in the southern constellation Columba, the dove. It has an apparent magnitude of 5.55, making it faintly visible to the naked eye under ideal conditions. Based on parallax measurements from the Gaia spacecraft, it is estimated to be 282 light years away from the Solar System; however, it is receding rapidly from the Sun. At its current distance, HD 36187's brightness is diminished by 0.21 magnitude due to interstellar dust.
HD 36187 has a stellar classification of either A1 V or A0 V, depending on the source. Nevertheless, both classes indicate that it is an ordinary A-type main-sequence star that is fusing hydrogen in its core. It has double the mass and radius of the Sun, and it radiates 48 times the luminosity of the Sun from its photosphere. HD 36187 is estimated to be 311 million years old, having completed 66.9% of its main sequence lifetime. Like many hot stars, HR 1835 spins rapidly.
References
A-type main-sequence stars
Columba (constellation)
Columbae, 20
CD-37 02220
036187
025608
1835 | HD 36187 | Astronomy | 283 |
21,161,543 | https://en.wikipedia.org/wiki/Flubendazole | Flubendazole is an anthelmintic, used both in humans and for veterinarian purposes. It is very close chemically to mebendazole, the only difference being an added fluorine group.
Human use
It is available for human use to treat worm infections. In certain countries such as France, it is inexpensive and available OTC (without prescription) under the brand name Fluvermal as an alternative to mebendazole which is not currently sold there.
Veterinarian use
Under veterinary use, its brand name is Flutelmium which is a paste manufactured by Janssen Pharmaceutica N.V. used by veterinarians for protection against internal parasites and worms in dogs and cats. Other brand names are Flubenol, Biovermin, and Flumoxal.
Since 2000, flubendazole-treated grit has increasingly been laid out at a landscape scale across many UK grouse-shooting moors by gamekeepers in an attempt to reduce the impact on bird numbers of the strongyle worm. Evidence of a high worm burden is required before a veterinarian can dispense and sell the product, known as 'medicated grit'. However, there has been increasing concern about contaminants entering the ground waters running off from moorlands, as well as from its use in farming environments and its presence in manure. Researchers are beginning to gather evidence to inform policy development on the presence of this and other veterinary medicines in the wider environment.
References
Anthelmintics
Antiparasitic agents
Diarylketones
Benzimidazoles
Carbamates
4-Fluorophenyl compounds | Flubendazole | Biology | 340 |
365,001 | https://en.wikipedia.org/wiki/Hypnic%20jerk | A hypnic jerk, hypnagogic jerk, sleep start, sleep twitch, myoclonic jerk, or night start is a brief and sudden involuntary contraction of the muscles of the body which occurs when a person is beginning to fall asleep, often causing the person to jump and awaken suddenly for a moment. Hypnic jerks are one form of involuntary muscle twitches called myoclonus.
Physically, hypnic jerks resemble the "jump" experienced by a person when startled, sometimes accompanied by a falling sensation. Hypnic jerks are associated with a rapid heartbeat, quickened breathing, sweat, and sometimes "a peculiar sensory feeling of 'shock' or 'falling into the void'". They can also be accompanied by a vivid dream experience or hallucination. A higher occurrence is reported in people with irregular sleep schedules. When they are particularly frequent and severe, hypnic jerks have been reported as a cause of sleep-onset insomnia.
Hypnic jerks are common physiological phenomena. Around 70% of people experience them at least once in their lives with 10% experiencing them daily. They are benign and do not cause any neurological sequelae.
Causes
According to the American Academy of Sleep Medicine (AASM), there is a wide range of potential causes, including anxiety, stimulants like caffeine and nicotine, stress, and strenuous activities in the evening. It also may be facilitated by fatigue or sleep deprivation. However, most hypnic jerks occur essentially at random in healthy people. Nevertheless, these repeated, intensifying twitches can cause anxiety in some individuals and a disruption to their sleep onset.
Sometimes, hypnic jerks are mistaken for other forms of movement during sleep. For example, they can be confused with restless legs syndrome, periodic limb movement disorder, hypnagogic foot tremor, rhythmic movement disorder, and hereditary or essential startle syndromes, including hyperekplexia. Some features help to distinguish hypnic jerks from these other conditions: hypnic jerks arise only at sleep onset, and they occur without any rhythmicity or periodicity of the movements and EMG bursts. Other pertinent history also allows them to be differentiated.
This physiological phenomenon can also be mistaken for a myoclonic seizure, but it can be distinguished by criteria such as the fact that the hypnic jerk occurs only at sleep onset and that the EEG is normal and constant. In addition, unlike seizures, hypnic jerks involve no tongue biting, urinary incontinence, or postictal confusion. The phenomenon can therefore be distinguished from other, more serious conditions.
The causes of hypnic jerk are yet unclear and under study. None of the several theories that have attempted to explain it have been fully accepted.
One hypothesis posits that the hypnic jerk is a form of reflex, initiated in response to normal bodily events during the lead-up to the first stages of sleep, including a decrease in blood pressure and the relaxation of muscle tissue. Another theory postulates that the body mistakes the sense of relaxation that is felt when falling asleep as a sign that the body is falling. As a consequence, it causes a jerk to wake the sleeper up so they can catch themselves. A researcher at the University of Colorado suggested that a hypnic jerk could be "an archaic reflex to the brain's misinterpretation of muscle relaxation with the onset of sleep as a signal that a sleeping primate is falling out of a tree. The reflex may also have had selective value by having the sleeper readjust or review his or her sleeping position in a nest or on a branch in order to assure that a fall did not occur", but evidence is lacking.
During an epilepsy and intensive care study, the lack of a preceding spike discharge measured on an epilepsy monitoring unit, along with the presence only at sleep onset, helped differentiate hypnic jerks from epileptic myoclonus.
According to a study on sleep disturbances in the Journal of Neural Transmission, a hypnic jerk occurs during the non-rapid eye movement sleep cycle and is an "abrupt muscle action flexing movement, generalized or partial and asymmetric, which may cause arousal, with an illusion of falling". Hypnic jerks are more frequent in childhood with 4 to 7 per hour in the age range from 8 to 12 years old, and they decrease toward 1 or 2 per hour by 65 to 80 years old.
Treatment
There are ways to reduce hypnic jerks, including reducing consumption of stimulants such as nicotine or caffeine, avoiding physical exertion prior to sleep, and consuming sufficient magnesium.
Some medication can also help to reduce or eliminate the hypnic jerks. For example, low-dose clonazepam at bedtime may make the twitches disappear over time.
In addition, some people may develop a fixation on these hypnic jerks leading to increased anxiety, worrying about the disruptive experience. This increased anxiety and fatigue increases the likelihood of experiencing these jerks, resulting in a positive feedback loop.
See also
References
Sleep disorders | Hypnic jerk | Biology | 1,073 |
40,605,848 | https://en.wikipedia.org/wiki/Polyvinylcarbazole | Polyvinylcarbazole (PVK) is a temperature-resistant thermoplastic polymer produced by radical polymerization from the monomer N-vinylcarbazole. It is a photoconductive polymer and thus the basis for photorefractive polymers and organic light-emitting diodes.
History
Polyvinylcarbazole was discovered by the chemists Walter Reppe (1892-1969), Ernst Keyssner and Eugen Dorrer and patented by I.G. Farben in the USA in 1937. PVK was the first polymer whose photoconductivity was known. Starting in the 1960s, further polymers of this kind were sought.
Production
Polyvinylcarbazole is obtained from N-vinylcarbazole by radical polymerization in various ways. It can be produced by suspension polymerization at 180 °C with sodium chloride and potassium chromate as catalyst. Alternatively, AIBN can also be used as a radical starter or a Ziegler-Natta catalyst.
Properties
Physical properties
PVK can be used at temperatures of up to 160–170 °C and is therefore a temperature-resistant thermoplastic. Its electrical conductivity changes depending on illumination; for this reason, PVK is classified as a semiconductor or photoconductor. The polymer is extremely brittle, but the brittleness can be reduced by copolymerization with a small amount of isoprene.
Chemical properties
Polyvinylcarbazole is soluble in aromatic hydrocarbons, halogenated hydrocarbons and ketones. It is resistant to acids, alkalis, polar solvents and aliphatic hydrocarbons. The addition of PVK to other plastic masses increases their temperature resistance.
Use
Due to its high price and special properties, the use of PVK is limited to special areas. It is used in insulation technology, electrophotography (e.g., in copiers and laser printers), for the fabrication of polymer photonic crystals, and for organic light-emitting diodes and photovoltaic devices. In addition, PVK is a well-researched component of photorefractive polymers and therefore plays an important role in holography. Another application is the production of boil-resistant copolymers with styrene.
See also
Organic photorefractive materials
References
Thermoplastics
Semiconductor materials
Carbazoles | Polyvinylcarbazole | Chemistry | 487 |
12,071,235 | https://en.wikipedia.org/wiki/Integrated%20multi-trophic%20aquaculture | Integrated multi-trophic aquaculture (IMTA) is a type of aquaculture where the byproducts, including waste, from one aquatic species are used as inputs (fertilizers, food) for another. Farmers combine fed aquaculture (e.g., fish, shrimp) with inorganic extractive (e.g., seaweed) and organic extractive (e.g., shellfish) aquaculture to create balanced systems for environment remediation (biomitigation), economic stability (improved output, lower cost, product diversification and risk reduction) and social acceptability (better management practices).
Selecting appropriate species and sizing the various populations to provide necessary ecosystem functions allows the biological and chemical processes involved to achieve a stable balance, mutually benefiting the organisms and improving ecosystem health.
Ideally, the co-cultured species each yield valuable commercial "crops". IMTA can synergistically increase total output, even if some of the crops yield less than they would, short-term, in a monoculture.
Terminology and related approaches
"Integrated" refers to intensive and synergistic cultivation, using water-borne nutrient and energy transfer. "Multi-trophic" means that the various species occupy different trophic levels, i.e., different (but adjacent) links in the food chain.
IMTA is a specialized form of the age-old practice of aquatic polyculture, which is the co-culture of various species, often without regard to trophic level. In this broader case, the organisms may share biological and chemical processes that are only minimally complementary, potentially leading to reduced production of both species due to competition for the same food resource. However, some traditional systems, such as the polyculture of carps in China, employ species that occupy multiple niches within the same pond; likewise, the culture of fish integrated with a terrestrial agricultural species can be considered a form of IMTA.
The more general term "Integrated Aquaculture" is used to describe the integration of monocultures through water transfer between the culture systems. The terms "IMTA" and "integrated aquaculture" differ primarily in their precision and are sometimes interchanged. Aquaponics, fractionated aquaculture, integrated agriculture-aquaculture systems, integrated peri-urban-aquaculture systems, and integrated fisheries-aquaculture systems are all variations of the IMTA concept.
Range of approaches
Today, low-intensity traditional/incidental multi-trophic aquaculture is much more common than modern IMTA. Most are relatively simple, such as fish, seaweed or shellfish.
True IMTA can be land-based, using ponds or tanks, or even open-water marine or freshwater systems. Implementations have included species combinations such as shellfish/shrimp, fish/seaweed/shellfish, fish/seaweed, fish/shrimp and seaweed/shrimp.
IMTA in open water (offshore cultivation) can be done by the use of buoys with lines on which the seaweed grows. The buoys/lines are placed next to the fishnets or cages in which the fish grows. In some tropical Asian countries some traditional forms of aquaculture of finfish in floating cages, nearby fish and shrimp ponds, and oyster farming integrated with some capture fisheries in estuaries can be considered a form of IMTA. Since 2010, IMTA has been used commercially in Norway, Scotland, and Ireland.
In the future, systems with other components for additional functions, or similar functions but different size brackets of particles, are likely. Multiple regulatory issues remain open.
Modern history of land-based systems
Ryther and co-workers created modern, integrated, intensive, land-based mariculture. They originated, both theoretically and experimentally, the integrated use of extractive organisms (shellfish, microalgae and seaweeds) in the treatment of household effluents, with both descriptive and quantitative results. A domestic wastewater effluent, mixed with seawater, was the nutrient source for phytoplankton, which in turn became food for oysters and clams. They cultivated other organisms in a food chain rooted in the farm's organic sludge. Dissolved nutrients in the final effluent were filtered by seaweed (mainly Gracilaria and Ulva) biofilters. The value of the original organisms grown on human waste effluents was minimal.
In 1976, Huguenin proposed adaptations for the treatment of intensive aquaculture effluents in both inland and coastal areas. Tenore followed by integrating carnivorous fish and the macroalgivorous abalone into the system.
In 1977, Hughes-Games described the first practical marine fish/shellfish/phytoplankton culture, followed by Gordin, et al., in 1981. By 1989, a semi-intensive (1 kg fish·m−3) seabream and grey mullet pond system by the Gulf of Aqaba (Eilat) on the Red Sea supported dense diatom populations, excellent for feeding oysters. Hundreds of kilograms of fish and oysters cultured here were sold. Researchers also quantified the water quality parameters and nutrient budgets in (5 kg fish·m−3) green-water seabream ponds. The phytoplankton generally maintained reasonable water quality and converted on average over half the waste nitrogen into algal biomass. Experiments with intensive bivalve cultures yielded high bivalve growth rates. This technology supported a small farm in southern Israel.
Sustainability
IMTA promotes economic and environmental sustainability by converting byproducts and uneaten feed from fed organisms into harvestable crops, thereby reducing eutrophication, and increasing economic diversification.
Properly managed multi-trophic aquaculture accelerates growth without detrimental side-effects. This increases the site's ability to assimilate the cultivated organisms, thereby reducing negative environmental impacts.
IMTA enables farmers to diversify their output by replacing purchased inputs with byproducts from lower trophic levels, often without new sites. Initial economic research suggests that IMTA can increase profits and can reduce financial risks due to weather, disease and market fluctuations. Over a dozen studies have investigated the economics of IMTA systems since 1985.
Nutrient flow
Typically, carnivorous fish or shrimp occupy IMTA's higher trophic levels. They excrete soluble ammonia and phosphorus (orthophosphate). Seaweeds and similar species can extract these inorganic nutrients directly from their environment. Fish and shrimp also release organic nutrients which feed shellfish and deposit feeders.
Species such as shellfish that occupy intermediate trophic levels often play a dual role, both filtering organic bottom-level organisms from the water and generating some ammonia. Waste feed may also provide additional nutrients; either by direct consumption or via decomposition into individual nutrients. In some projects, the waste nutrients are also gathered and reused in the food given to the fish in cultivation. This can happen by processing the seaweed grown into food.
Recovery efficiency
Nutrient recovery efficiency is a function of technology, harvest schedule, management, spatial configuration, production, species selection, trophic level biomass ratios, natural food availability, particle size, digestibility, season, light, temperature, and water flow. Since these factors significantly vary by site and region, recovery efficiency also varies.
In a hypothetical family-scale fish/microalga/bivalve/seaweed farm, based on pilot-scale data, at least 60% of nutrient input reached commercial products, nearly three times more than in modern net pen farms. Such a system was projected to produce substantial average annual yields of seabream, bivalves and seaweeds. These results required precise water quality control and attention to suitability for bivalve nutrition, due to the difficulty in maintaining consistent phytoplankton populations.
Seaweeds' nitrogen uptake efficiency ranges from 2% to 100% in land-based systems. Uptake efficiency in open-water IMTA is unknown.
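A toy nitrogen budget shows how such recovery efficiencies are computed; every figure below is invented for illustration and is not data from any cited study.

```python
# Toy nitrogen mass balance for an IMTA farm (all numbers invented).
feed_n_kg = 1000.0                 # nitrogen entering the system in feed

harvest_n_kg = {
    "fish": 300.0,                 # N retained in the fed crop
    "shellfish": 150.0,            # N recovered from organic particulates
    "seaweed": 180.0,              # N recovered from dissolved nutrients
}

recovered = sum(harvest_n_kg.values())
print(f"recovery efficiency = {recovered / feed_n_kg:.0%}")  # 63%
for crop, n in harvest_n_kg.items():
    print(f"  {crop}: {n / feed_n_kg:.0%} of input N")
```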
Food safety and quality
Feeding the wastes of one species to another has the potential for contamination, although this has yet to be observed in IMTA systems. Mussels and kelp growing adjacent to Atlantic salmon cages in the Bay of Fundy have been monitored since 2001 for contamination by medicines, heavy metals, arsenic, PCBs and pesticides. Concentrations are consistently either non-detectable or well below regulatory limits established by the Canadian Food Inspection Agency, the United States Food and Drug Administration and European Community Directives. Taste testers indicate that these mussels are free of "fishy" taste and aroma and could not distinguish them from "wild" mussels. The mussels' meat yield is significantly higher, reflecting the increase in nutrient availability.
Recent findings suggest mussels grown adjacent to salmon farms are advantageous for winter harvest because they maintain high meat weight and condition index (meat to shell ratio). This finding is of particular interest because the Bay of Fundy, where this research was conducted, produces low condition index mussels during winter months in monoculture situations, and seasonal presence of paralytic shellfish poisoning (PSP) typically restricts mussel harvest to the winter months.
Selected projects
Historic and ongoing research projects include:
Asia
Japan, China, South Korea, Thailand, Vietnam, Indonesia, Bangladesh, etc. have co-cultured aquatic species for centuries in marine, brackish and fresh water environments. Fish, shellfish and seaweeds have been cultured together in bays, lagoons and ponds. Trial and error has improved integration over time. The proportion of Asian aquaculture production that occurs in IMTA systems is unknown.
After the 2004 tsunami, many of the shrimp farmers in Aceh Province of Indonesia and Ranong Province of Thailand were trained in IMTA. This has been especially important, as the monoculture of marine shrimp was widely recognized as unsustainable. Production of tilapia, mud crabs, seaweeds, milkfish, and mussels has been incorporated (AquaFish Collaborative Research Support Program).
Canada
Bay of Fundy
Industry, academia and government are collaborating here to expand production to commercial scale. The current system integrates Atlantic salmon, blue mussels and kelp; deposit feeders are under consideration. AquaNet (one of Canada's Networks of Centres of Excellence) funded phase one. The Atlantic Canada Opportunities Agency is funding phase two. The project leaders are Thierry Chopin (University of New Brunswick in Saint John) and Shawn Robinson (Department of Fisheries and Oceans, St. Andrews Biological Station).
Pacific SEA-lab
Pacific SEA-lab is researching and is licensed for the co-culture of sablefish, scallops, oysters, blue mussels, urchins and kelp. "SEA" stands for Sustainable Ecological Aquaculture. The project aims to balance four species. The project is headed by Stephen Cross under a British Columbia Innovation Award at the University of Victoria Coastal Aquaculture Research & Training (CART) network.
Chile
The i-mar Research Center at the Universidad de Los Lagos, in Puerto Montt is working to reduce the environmental impact of intensive salmon culture. Initial research involved trout, oysters and seaweeds. Present research is focusing on open waters with salmon, seaweeds and abalone. The project leader is Alejandro Buschmann.
Israel
SeaOr Marine Enterprises Ltd.
SeaOr Marine Enterprises Ltd., which operated for several years on the Israeli Mediterranean coast north of Tel Aviv, cultured marine fish (gilthead seabream), seaweeds (Ulva and Gracilaria) and Japanese abalone. Its approach leveraged the local climate and recycled fish waste products into seaweed biomass, which was fed to the abalone. It also purified the water sufficiently for it to be recycled to the fishponds and to meet point-source effluent environmental regulations.
PGP Ltd.
PGP Ltd. is a small farm in Southern Israel. It cultures marine fish, microalgae, bivalves and Artemia. Effluents from seabream and seabass collect in sedimentation ponds, where dense populations of microalgae—mostly diatoms—develop. Clams, oysters and sometimes Artemia filter the microalgae from the water, producing a clear effluent. The farm sells the fish, bivalves and Artemia.
The Netherlands
In the Netherlands, Willem Brandenburg of UR Wageningen (Plant Sciences Group) has established the country's first seaweed farm, called "De Wierderij", which is used for research.
South Africa
Three farms grow seaweeds for feed in abalone effluents in land-based tanks. Up to 50% of the recirculated water passes through the seaweed tanks. Somewhat uniquely, neither fish nor shrimp comprise the upper trophic species. The motivation is to avoid over-harvesting natural seaweed beds and to avoid red tides, rather than nutrient abatement. These commercial successes developed from research collaboration between Irvine and Johnson Cape Abalone and scientists from the University of Cape Town and the University of Stockholm.
United Kingdom
The Scottish Association for Marine Science, in Oban is developing co-cultures of salmon, oysters, sea urchins, and brown and red seaweeds via several projects. Research focuses on biological and physical processes, as well as production economics and implications for coastal zone management. Researchers include: M. Kelly, A. Rodger, L. Cook, S. Dworjanyn, and C. Sanderson.
Bangladesh
Indian carps and stinging catfish are cultured in Bangladesh, but the methods could be more productive. The pond and cage cultures used are based only on the fish and do not take advantage of the productivity increases that could occur if other trophic levels were included. Expensive artificial feeds are used, partly to supply the fish with protein. These costs could be reduced if freshwater snails, such as Viviparus bengalensis, were cultured simultaneously, increasing the available protein. The organic and inorganic wastes produced as a byproduct of culturing could also be minimized by integrating freshwater snails and aquatic plants, such as water spinach, respectively.
Gallery
See also
Agribusiness
Extensive farming
Factory farming
Genetically modified organism
History of agriculture
Industrial agriculture
Industrial agriculture (animals)
Industrial agriculture (crops)
Intensive farming
Organic farming
Sustainable agriculture
Zero waste agriculture
Notes
References
Neori A, Troell M, Chopin T, Yarish C, Critchley A and Buschmann AH. 2007. The need for a balanced ecosystem approach to blue revolution aquaculture. Environment 49(3): 36–43.
External links
AquaNet IMTA
www.sams.ac.uk
World Aquaculture Conference 2007: IMTA session
Chopin lab
The Comparative Roles of Suspension-Feeders in Ecosystems: the use of bivalves as biofilters and valuable products in land-based aquaculture systems (review)
Seaweed Resources of the World: algae as a key to sustainable mariculture
Ecological and Genetic Implications of Aquaculture Activities: evaluation of macroalgae, microalgae, and bivalves as biofilters in sustainable land-based mariculture systems
Hydrography
Physical oceanography
Aquaculture | Integrated multi-trophic aquaculture | Physics,Environmental_science | 3,128 |
10,990,251 | https://en.wikipedia.org/wiki/%C3%89milien%20Dumas | Jean Louis George Émilien Dumas (16 October 1804 – 21 September 1870) was a French scholar, palaeontologist, and geologist.
Biography
Born to a Protestant family of the bourgeoisie in Gard, Émilien Dumas was immersed from his early childhood in an atmosphere of learning and erudition. His father, a former merchant involved in agriculture, was an educated man. The native flora of Gard provided him with his first field of study. From 1815 to 1824, he studied at Morges, Switzerland, then at Basel, where his passion for the natural sciences matured. He returned to his homeland in 1824 following the death of his mother.
Embarking on a career in the sciences, he went to Paris and studied at the Collège de France, the École des Mines de Paris and the Muséum national d'histoire naturelle, and with Georges Cuvier, Étienne Geoffroy Saint-Hilaire, and Adrien-Henri de Jussieu.
His education in the natural sciences was well rounded, and he threw himself with equal passion into zoology, mineralogy, and botany, as well as engaging in the contemporary debate over Lamarckism.
In 1828, he returned to Sommières, where he married Pauline Borel, a wealthy heiress from Orange and daughter of a silk manufacturer. The same year, he unveiled a rich paleontological dig site at Pondres (Gard), whose human and animal remains fueled Lamarckist arguments, particularly in the field of archaeozoology.
He surveyed his region with great patience and tenacity over a period of 20 years, to produce a geological map of the département of Gard. During a long voyage in the 1860s he studied the geography of southern Europe. As an avid collector, he cultivated his curiosity throughout his life, and the Natural History Museum at Nîmes now preserves a large part of his numerous collections spanning the fields of Greek antiquities, botany, and geology.
The missing piece in this portrait of the "Explorer of Gard" is his taste for theater and acting. He was a willing participant as well as observer, which was considered by his contemporaries as incompatible with his role as a scientist.
He died on September 21, 1870, in Ax-sur-Ariège.
Works
Émilien Dumas, Statistique géologique, minéralogique, métallurgique et paléontologique du département du Gard, 1876
Bibliography
Édouard Dumas, Émilien Dumas et l'empreinte de Sommières, Lacour-Ollé, 1993.
« Émilien Dumas, l'explorateur du Gard », catalogue de l'exposition organisée à l'occasion du bicentenaire de sa naissance, Musée d'Histoire naturelle de Nîmes.
External links
An article by the Sommières Association (French)
The text of his treatise on the geology of Gard (French)
1804 births
1870 deaths
French geologists
French paleontologists
Lamarckism | Émilien Dumas | Biology | 593 |
19,555,086 | https://en.wikipedia.org/wiki/PSX%20%28digital%20video%20recorder%29 | The PSX is a digital video recorder and home video game console released by Sony in Japan on December 13, 2003. Since it was designed to be a general-purpose consumer video device, it was marketed by the main Sony Corporation instead of Sony Computer Entertainment and does not carry the usual PlayStation branding. Initial sales were strong, with the console selling 100,000 units during its first week and selling out. Its high cost, however, resulted in poor sales later on, prompting Sony to cancel plans to release the PSX outside Japan. After the price was lowered in September 2004, sales increased again.
Features
The device is a fully functional digital video recorder with an included infrared remote control and S-Video, composite video, and RF inputs. It is able to tune analog VHF and CATV. It can also be linked with a PlayStation Portable to transfer photos, videos and music via USB ports, and features software for non-linear video editing, image editing and audio editing. DVD+R support was to be introduced in a future update.
It was the first device to use Sony's XrossMediaBar (XMB) graphical user interface, which was later used on the PlayStation Portable, PlayStation 3, some Blu-ray Disc players, and 2008-era BRAVIA TVs. Like standard PS2 consoles, the PSX can be laid horizontally or stood up vertically.
The PSX fully supports both PlayStation and PlayStation 2 software through its slot-loading DVD drive, as the onboard EE+GS chip is a unification of the PS2's Emotion Engine and Graphics Synthesizer chips. Online game compatibility was available using the broadband connection; games that used the PS2 HDD (such as Final Fantasy XI) were supported as well.
Shortly before release, Sony omitted numerous features from the PSX, citing the need to launch in time for Christmas 2003. Playback of CD-R and DVD+RW discs, for example, was dropped, as was MP3 format audio; DVD-RW disc and ATRAC format audio support were retained upon release. However, firmware updates (versions 1.10 and 1.20) added these features later on.
Peripherals
The PSX is compatible with all first-party PlayStation and PlayStation 2 controllers and memory cards, with the exception of the PocketStation. The main unit has two controller ports located on the back side and two memory card slots on the front side hidden behind a panel cover. While the unit itself was sold without a game controller, a PSX-branded variant of the DualShock 2 analog controller was sold separately which features a 4-meter long cord (a bit longer than the standard versions of the controller). Because of the different placement of controller ports and memory card slots (which are located above each other on standard PlayStation and PlayStation 2 consoles), the PSX is incompatible with all versions of the multitap, and no PSX-specific multitap was ever made to get around this issue. Games that require the use of two or more USB ports are also incompatible with the PSX.
Retail configurations
The PSX was released in eight retail configurations during its lifespan; the 5000 series (with an embossed logo on top and grey stripe at the back) shipped with 160 GB hard disk drives, while the 7000 series (with a colored logo on top and black stripe at the back) contained 250 GB drives. Software updates were made available by disc and download.
The 7500/7700 models added a Ghost Reduction Tuner. The inclusion of BS and UHF/VHF connectors varied by model. Starting with firmware version 2.10, users could export videos to a Memory Stick. The exported files are compatible for playback on the PSP. This feature is unavailable on earlier models due to the later firmware versions never being released for them. Contrary to popular belief, no variant of this console supports PSP games or is compatible with UMD discs.
All models have two sets of status LEDs and Infrared receivers; one along the front for horizontal orientation, and a second strip along the top-back for vertical orientation. The 'Disk Rec' indicator is only on the front of the device in later models. Additionally, some models have one or two decorative blue LED light strips, either on the front located under the disc slot or in the back. DESR-5XXX consoles have a solid white case with an embossed PSX logo (except DESR-5100S, which has a silver case). DESR-7XXX consoles have a clear-white case with a colored PSX logo printed on.
Etymology
Up until the release of the PlayStation 2, the first PlayStation console came to be known colloquially outside of Japan by its provisional codename of PSX (this was adopted to echo the MSX, a home computer standard sold by Sony and other companies throughout the 1980s). This can cause some confusion as to which device is being referred to.
Colors
The PSX was initially displayed at CEATEC in white, silver, yellow, red and blue. The white variant was released commercially, with a limited edition silver model made available in 2004.
See also
Panasonic Q
References
External links
Digital video recorders
Sony consoles
Sixth-generation video game consoles
PlayStation (console)
PlayStation 2
Products introduced in 2003
Discontinued video game consoles
Japan-exclusive video game hardware
PlayStation (brand)
MIPS-based video game consoles | PSX (digital video recorder) | Technology | 1,096 |
29,027,916 | https://en.wikipedia.org/wiki/Gene%20transfer%20agent | Gene transfer agents (GTAs) are DNA-containing virus-like particles that are produced by some bacteria and archaea and mediate horizontal gene transfer. Different GTA types have originated independently from viruses in several bacterial and archaeal lineages. These cells produce GTA particles containing short segments of the DNA present in the cell. After the particles are released from the producer cell, they can attach to related cells and inject their DNA into the cytoplasm. The DNA can then become part of the recipient cells' genome.
GTAs are classified as viriforms in the ICTV taxonomy. Among the GTAs mentioned in this article, RcGTA and DsGTA are now in the family Rhodogtaviriformidae, BaGTA in Bartogtaviriformidae, and VSH-1 in Brachygtaviriformidae. Dd1 and VTA do not yet have a classification.
Discovery of gene transfer agents
The first GTA system was discovered in 1974, when mixed cultures of Rhodobacter capsulatus strains produced a high frequency of cells with new combinations of genes. The factor responsible was distinct from known gene-transfer mechanisms in being independent of cell contact, insensitive to deoxyribonuclease, and not associated with phage production. Because of its presumed function it was named gene transfer agent (GTA, now RcGTA). More recently, other gene transfer agent systems have been discovered by incubating filtered (cell-free) culture medium with a genetically distinct strain.
GTA genes and evolution
The genes specifying GTAs are derived from bacteriophage (phage) DNA that has integrated into a host chromosome. Such prophages often acquire mutations that make them defective and unable to produce phage particles. Many bacterial genomes contain one or more defective prophages that have undergone more or less extensive mutation and deletion. Gene transfer agents, like defective prophages, arise by mutation of prophages, but they retain functional genes for the head and tail components of the phage particle (structural genes) and the genes for DNA packaging. The phage genes specifying its regulation and DNA replication have typically been deleted, and expression of the cluster of structural genes is under the control of cellular regulatory systems. Additional genes that contribute to GTA production or uptake are usually present at other chromosome locations. Some of these have regulatory functions, and others contribute directly to GTA production (e.g. the phage-derived lysis genes) or uptake and recombination (e.g. production of cell-surface capsule and DNA transport proteins). These GTA-associated genes are often under coordinated regulation with the main GTA gene cluster. Phage-derived cell-lysis proteins (holin and endolysin) weaken the cell wall and membrane, allowing the cell to burst and release the GTA particles.
Some GTA systems appear to be recent additions to their host genomes, but others have been maintained for many millions of years. Where studies of sequence divergence have been done (dN/dS analysis), they indicate that the genes are being maintained by natural selection for protein function (i.e. defective versions are being eliminated).
However, the nature of this selection is not clear. Although the discoverers of GTA assumed that gene transfer was the function of the particles, the presumed benefits of gene transfer come at a substantial cost to the population. Most of this cost arises because GTA-producing cells must lyse (burst open) to release their GTA particles, but there are also genetic costs associated with making new combinations of genes because most new combinations will usually be less fit than the original combination. One alternative explanation is that GTA genes persist because GTAs are genetic parasites that spread infectiously to new cells. However this is ruled out because GTA particles are typically too small to contain the genes that encode them. For example, the main RcGTA cluster (see below) is 14 kb long, but RcGTA particles can contain only 4–5 kb of DNA.
Most bacteria have not been screened for the presence of GTAs, and many more GTA systems may await discovery. DNA-based surveys for GTA-related genes have found homologs in many genomes, but interpretation is hindered by the difficulty of distinguishing genes that encode GTAs from ordinary prophage genes.
GTA production
In laboratory cultures, production of GTAs is typically maximized by particular growth conditions that induce transcription of the GTA genes; most GTAs are not induced by the DNA-damaging treatments that induce many prophages. Even under maximally inducing conditions only a small fraction of the culture produces GTAs, typically less than 1%.
The steps in GTA production are derived from those of phage infection. The structural genes are first transcribed and translated, and the proteins assembled into empty heads and unattached tails. The DNA packaging machinery then packs DNA into each head, cutting the DNA when the head is full, attaching a tail to the head, and then moving the newly-created DNA end on to a new empty head. Unlike prophage genes, the genes encoding GTAs are not excised from the genome and replicated for packaging in GTA particles. The two best studied GTAs (RcGTA and BaGTA) randomly package all of the DNA in the cell, with no overrepresentation of GTA-encoding genes. The number of GTA particles produced by each cell is not known.
GTA-mediated transduction
Whether release of GTA particles leads to transfer of DNA to new genomes depends on several factors. First, the particles must survive in the environment – little is known about this, although particles are reported to be quite unstable under laboratory conditions. Second, particles must encounter and attach to suitable recipient cells, usually members of the same or a closely related species. Like phages, GTAs attach to specific protein or carbohydrate structures on the recipient cell surface before injecting their DNA. Unlike phage, the well-studied GTAs appear to inject their DNA only across the first of the two membranes surrounding the recipient cytoplasm, and they use a different system, competence-derived rather than phage-derived, to transport one strand of the double-stranded DNA across the inner membrane into the cytoplasm.
If the cell's recombinational repair machinery finds a chromosomal sequence very similar to the incoming DNA, it replaces the former with the latter by homologous recombination, mediated by the cell's RecA protein. If the sequences are not identical this will produce a cell with a new genetic combination. However, if the incoming DNA is not closely related to DNA sequences in the cell it will be degraded, and the cell will reuse its nucleotides for DNA replication.
Specific GTA systems
RcGTA/Rhodobactegtaviriform (Rhodobacter capsulatus)
The GTA produced by the alphaproteobacterium Rhodobacter capsulatus, named R. capsulatus GTA (RcGTA), is currently the best studied GTA. When laboratory cultures of R. capsulatus enter stationary phase, a subset of the bacterial population induces production of RcGTA, and the particles are subsequently released from the cells through cell lysis. Most of the RcGTA structural genes are encoded in a ~ 15 kb genetic cluster on the bacterial chromosome. However, other genes required for RcGTA function, such as the genes required for cell lysis, are located separately. RcGTA particles contain 4.5 kb DNA fragments, with even representation of the whole chromosome except for a 2-fold dip at the site of the RcGTA gene cluster.
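The near-even chromosome coverage reported for RcGTA particles is exactly what random packaging of short fragments predicts. A small simulation sketch below illustrates this; the ~3.9 Mb genome size is an approximation and the particle count is arbitrary.

```python
# Why random packaging of short fragments gives nearly flat coverage:
# histogram of simulated fragment start sites across the chromosome.
import numpy as np

genome_kb = 3900          # approximate R. capsulatus genome size (assumption)
n_particles = 200_000     # arbitrary number of sequenced particles

rng = np.random.default_rng(0)
starts = rng.uniform(0, genome_kb, n_particles)

# 10 kb windows; purely random packaging is flat to within Poisson noise.
counts, _ = np.histogram(starts, bins=390, range=(0, genome_kb))
print(f"per-window counts: mean {counts.mean():.0f}, "
      f"min {counts.min()}, max {counts.max()}")
```

The per-window spread stays within a few percent of the mean, so any reproducible dip (such as the 2-fold dip over the RcGTA cluster itself) stands out against this flat background.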
Regulation of GTA production and transduction has been best studied in R. capsulatus, where a quorum-sensing system and a CtrA-phosphorelay control expression of not only the main RcGTA gene cluster, but also a holin/endolysin cell lysis system, particle head spikes, an attachment protein (possibly tail fibers), and the capsule and DNA processing genes needed for RcGTA recipient function. An uncharacterized stochastic process further limits expression of the gene cluster to only 0.1-3% of the cells.
RcGTA-like clusters are found in a large subclade of the alphaproteobacteria, although the genes also appear to be frequently lost by deletion. Recently, several members of the order Rhodobacterales have been demonstrated to produce functional RcGTA-like particles. Groups of genes with homology to the RcGTA are present in the chromosomes of various types of alphaproteobacteria.
DsGTA/Dinogtaviriform (Dinoroseobacter shibae)
D. shibae, like R. capsulatus, is a member of the order Rhodobacterales, and its GTA shares a common ancestor and many features with RcGTA, including gene organization, packaging of short DNA fragments (4.2 kb) and regulation by quorum sensing and a CtrA phosphorelay. However, its DNA packaging machinery has much more specificity, with sharp peaks and valleys of coverage suggesting that it may preferentially initiate packaging at specific sites in the genome. The DNA of the major DsGTA gene cluster is packaged very poorly.
BaGTA/Bartonegtaviriform (Bartonella species)
Bartonella species are members of the Alphaproteobacteria like R. capsulatus and D. shibae, but BaGTA is not related to RcGTA and DsGTA. BaGTA particles are larger than RcGTA and contain 14 kb DNA fragments. Although this capacity could in principle allow BaGTA to package and transmit its 14 kb GTA cluster, measurements of DNA coverage show reduced coverage of the cluster. An adjacent region of high coverage is thought to be due to local DNA replication.
VSH-1 (Brachyspira hyodysenteriae)
Brachyspira is a genus of spirochete; several species have been shown to carry homologous GTA gene clusters. Particles contain 7.5 kb DNA fragments. Production of VSH-1 is stimulated by the DNA-damaging agent mitomycin C and by some antibiotics. It is also associated with detectable cell lysis, indicating that a substantial fraction of the culture may be producing VSH-1.
Dd1 (Desulfovibrio desulfuricans)
D. desulfuricans is a soil bacterium in the deltaproteobacteria; Dd1 packages 13.6 kb DNA fragments. It is unclear which genes encode this GTA: there is one 17.8 kb region with phage-like structural genes in the bacterial genome, but its link to GTA production has not yet been experimentally proven.
VTA (Methanococcus voltae)
M. voltae is an archaean; its GTA is known to transfer 4.4 kb DNA fragments but has not been otherwise characterized, although a defective provirus related to Methanococcus head-tailed viruses (Caudoviricetes) in the M. voltae A3 genome has been suggested to represent the GTA locus. A possible terL terminase was again identified in 2019.
See also
Gene transfer agent-release holin family
Horizontal gene transfer
References
Genetics
Microbial population biology | Gene transfer agent | Biology | 2,410 |
28,949,720 | https://en.wikipedia.org/wiki/Coil%E2%80%93globule%20transition | In polymer physics, the coil–globule transition is the collapse of a macromolecule from an expanded coil state through an ideal coil state to a collapsed globule state, or vice versa. The coil–globule transition is of importance in biology due to the presence of coil–globule transitions in biological macromolecules such as proteins and DNA. It is also analogous to the swelling behavior of a crosslinked polymer gel and is thus of interest in biomedical engineering for controlled drug delivery. A particularly prominent example of a polymer possessing a coil–globule transition of interest in this area is poly(N-isopropylacrylamide) (PNIPAAm).
Description
In its coil state, the radius of gyration of the macromolecule scales as its chain length to the three-fifths power. As it passes through the coil–globule transition, it shifts to scaling as chain length to the half power (at the transition) and finally to the one third power in the collapsed state. The direction of the transition is often specified by the constructions 'coil-to-globule' or 'globule-to-coil' transition.
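A compact restatement of these scaling laws, using the standard Flory-exponent notation ν (the symbol is assumed here for convenience; it is not introduced in the text above):

```latex
% Scaling of the radius of gyration R_g with chain length N across the
% transition (\nu is the standard Flory size exponent; the exponent
% values restate the prose above):
R_g \sim N^{\nu}, \qquad
\nu = \begin{cases}
  3/5 & \text{expanded coil (good solvent)} \\
  1/2 & \text{ideal coil (theta solvent)} \\
  1/3 & \text{collapsed globule (poor solvent)}
\end{cases}
```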
Origin
This transition is associated with the transition of a polymer chain from good-solvent behavior through ideal or theta-solvent behavior to poor-solvent behavior. The canonical coil–globule transition is associated with the upper critical solution temperature and the associated Flory theta point. In this case, collapse occurs with cooling and results from favorable attractive energy of the polymer to itself. A second type of coil–globule transition is instead associated with the lower critical solution temperature and its corresponding theta point. This collapse occurs with increasing temperature and is driven by an unfavorable entropy of mixing. An example of this type is embodied by the polymer PNIPAAm, mentioned above. Coil–globule transitions may also be driven by charge effects in the case of polyelectrolytes. In this case, pH and ionic strength changes within the solution may trigger collapse, with increasing counterion concentration generally leading to collapse in a uniformly charged polyelectrolyte. In polyampholytes, which contain both positive and negative charges, the opposite may hold true.
See also
Upper critical solution temperature
Lower critical solution temperature
Critical point
Ideal solution
Citations
Biochemistry
Thermodynamic processes
Polymer chemistry
Polymer physics | Coil–globule transition | Physics,Chemistry,Materials_science,Engineering,Biology | 500 |
14,120,660 | https://en.wikipedia.org/wiki/Homeobox%20protein%20CDX-2 | Homeobox protein CDX-2 is a protein that in humans is encoded by the CDX2 gene. The CDX-2 protein is a homeobox transcription factor expressed in the nuclei of intestinal epithelial cells, playing an essential role in the development and function of the digestive system. CDX2 is part of the ParaHox gene cluster, a group of three highly conserved developmental genes present in most vertebrate species. Together with CDX1 and CDX4, CDX2 is one of three caudal-related genes in the human genome.
Function
In common with the two other Cdx genes, CDX2 regulates several essential processes in the development and function of the lower gastrointestinal tract (from the duodenum to the anus) in vertebrates. In vertebrate embryonic development, CDX2 becomes active in endodermal cells that are posterior to the developing stomach. These cells eventually form the intestinal epithelium. The activity of CDX2 at this stage is essential for the correct formation of the intestine and the anus. CDX2 is also required for the development of the placenta.
Later in development, CDX2 is expressed in intestinal epithelial stem cells, which are cells that continuously differentiate into the cells that form the intestinal lining. This differentiation is dependent on CDX2, as illustrated by experiments in which the expression of this gene was knocked out or overexpressed in mice. Heterozygous CDX2 knock-outs have intestinal lesions caused by the differentiation of intestinal cells into gastric epithelium; this can be considered a form of homeotic transformation. Conversely, the over-expression of CDX2 leads to the formation of intestinal epithelium in the stomach.
In addition to roles in endoderm, CDX2 is also expressed in very early stages of mouse and human embryonic development, specifically marking the trophectoderm lineage of cells in the blastocyst of mouse and human. Trophectoderm cells contribute to the placenta.
Pathology
Ectopic expression of CDX2 was reported in more than 85% of human patients with acute myeloid leukemia (AML). Ectopic expression of Cdx2 in murine bone marrow induced AML in mice and upregulated Hox genes in bone marrow progenitors. CDX2 is also implicated in the pathogenesis of Barrett's esophagus, where it has been shown that components of gastroesophageal reflux, such as bile acids, are able to induce the expression of an intestinal differentiation program through up-regulation of NF-κB and CDX2.
Biomarker for intestinal cancer
CDX2 is also used in diagnostic surgical pathology as a marker for gastrointestinal differentiation, especially colorectal.
Possible use in stem cell research
This gene (or, more specifically, the equivalent gene in humans) has come up in the proposal by the President's Council on Bioethics, as a solution to the stem cell controversy. According to one of the plans put forth, by deactivating the gene, it would not be possible for a properly organized embryo to form, thus providing stem cells without requiring the destruction of an embryo. Other genes that have been proposed for this purpose include Hnf4, which is required for gastrulation.
Interactions
CDX2 has been shown to interact with EP300 and PAX6.
References
Further reading
External links
Transcription factors | Homeobox protein CDX-2 | Chemistry,Biology | 737 |
20,895,158 | https://en.wikipedia.org/wiki/Olszewski%20tube | An Olszewski tube is a pipe designed to bring oxygen-poor water from the bottom of a lake to the top. The tube was first proposed by the Polish limnologist Przemysław Olszewski in 1961 and helps combat the negative effects of eutrophication (excessive nutrient content) in lakes. The basic concept behind the Olszewski tube is the reduction of nutrient concentration and destratification; the more specific goal is hypolimnetic withdrawal.
Eutrophication
When nutrients build up in a lake, eutrophication occurs, generally in the lake's top layer. The nutrients come from both natural and artificial sources and usually contain phosphates. The artificial nutrients can come from sewage and from fertilizers in agricultural runoff. Phosphorus from the phosphates causes algae to grow rapidly and spread throughout the top layer of the lake. Algal blooms have negative effects on both the aesthetics and the ecology of the lake. Aesthetically, the lake is not pleasing because it is covered with algae. Ecologically, eutrophication causes organisms in the lake to die because the algae deplete the dissolved oxygen.
Design
At the simplest level, the Olszewski tube is a pipe that spans from the bottom, hypolimnetic layer of the lake to the outlet. The outlet end of the pipe is installed below lake level so that the device acts as a siphon. As warm water flows into the lake at the surface, it forces the cold, anoxic water of the hypolimnetic layer through and up the tube. This oxygen-poor water is then brought to the top of the lake, where the eutrophication occurs. This eventually helps the lake as a whole, because the bottom of the lake will have more dissolved oxygen and the top of the lake will have less eutrophication.
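For intuition about the siphon action, an idealized discharge estimate can be written down from Torricelli's relation. This is an illustration only, not a design equation from the text: it neglects pipe friction, which dominates in real installations.

```latex
% Idealized siphon discharge Q through a tube of cross-sectional area A,
% driven by the elevation head \Delta h between lake surface and outlet
% (Torricelli's relation; pipe friction, which dominates in real
% installations, is neglected):
Q \approx A \sqrt{2 g \,\Delta h}
```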
Implementations
The first implementation of the Olszewski tube was attempted at Lake Kortowo in Poland, and this led to oligotrophication, a reduction of nutrient cycling. The tube has shown the most promise in a 3.9-meter-deep eutrophic lake in Switzerland, where phosphorus and nitrogen levels in the summer decreased drastically, oxygen levels increased, and the amount of cyanobacteria decreased from 152 grams per square meter to 41 grams per square meter. Björk has also reported successes with the Olszewski tube in European lakes. Other limnologists, such as Pechlaner and Gächter, have reported successes in small lakes where total phosphorus decreased, transparency of the water increased, and less algae was present.
Complications
Some complications that can arise with the use of an Olszewski tube include disruption of the thermocline and excessive water loss. The thermocline separates the mixed, warmer upper layer of water from the deeper, cooler water. If the thermocline is disrupted, the ecology of the lake could be altered, potentially making it uninhabitable. Another complication is that the installation must be operated as a long-term process. Short-term uses of Olszewski tubes have largely failed, because it takes some time for the anoxic hypolimnetic layer to increase in dissolved oxygen. Operation must also be slow in order to avoid disrupting the thermocline: if the Olszewski tube is operated slowly enough, the rates of water going in and out remain fairly constant, and the thermocline stays intact.
Cost
One advantage of hypolimnetic withdrawal is that it is relatively inexpensive to install an Olszewski tube or any similar device. Along with low initial cost, it also has a relatively low annual maintenance cost. The following are four systems installed in the United States (2002), with their area in hectares, rate of flow in cubic meters per minute, and initial installation cost in US dollars:
Lake Ballinger: 41 ha, 3.4 m3/min, $420,000
Lake Waramaug: 287 ha, 6.3 m3/min, $62,000
Devil's Lake: 151 ha, 9.1 m3/min, $310,000
Pine Lake: 412 ha, 5.3 m3/min, $282,000
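The figures above also show why operation has to be long-term, as noted under Complications: even a nominal hypolimnion volume takes months to withdraw at these flow rates. A sketch, with a made-up 2 m mean hypolimnion thickness (areas and flow rates are from the table; the thickness is an assumption for illustration):

```python
# Rough exchange times implied by the table above.
systems = {                       # name: (area in ha, flow in m3/min)
    "Lake Ballinger": (41, 3.4),
    "Lake Waramaug": (287, 6.3),
    "Devil's Lake": (151, 9.1),
    "Pine Lake": (412, 5.3),
}
thickness_m = 2.0                 # assumed mean hypolimnion thickness

for name, (area_ha, flow_m3_min) in systems.items():
    volume_m3 = area_ha * 10_000 * thickness_m   # 1 ha = 10,000 m2
    days = volume_m3 / (flow_m3_min * 60 * 24)   # minutes -> days
    print(f"{name}: ~{days:.0f} days per full withdrawal")
```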
Other techniques
Aside from using an Olszewski tube and hypolimnetic withdrawal, there are other techniques implemented to achieve the same goals as an Olszewski tube. These include increasing dissolved oxygen, reducing nutrient concentration, and lessening the amount of algae and unwanted biomass in lakes.
Sediment oxidation is the artificial oxidation of the top 15 to 20 centimeters of anaerobic lake sediment. This technique reduces internal nutrient release through a series of chemical reactions starting with iron(III) chloride. After these reactions, the concentrations of phosphorus and ammonium (another nutrient found in lakes) decrease, and the demand for oxygen gas is reduced as well. This technique is not yet fully developed but can mirror the effects of the Olszewski tube.
Biological control methods are the most promising techniques because they do the least harm to the ecosystem. These methods introduce a particular species (e.g. fish, bacteria, etc.) into a lake as a solution to a current problem. The introduction of a certain type of bacteria can help decrease nutrients; in turn, the algae will not spread, and dissolved oxygen in the lake will remain at high concentrations.
Hypolimnetic aeration is another technique in which oxygen is added to the lake. This helps increase the concentration of dissolved oxygen in the lake as well as bring down the levels of phosphorus. While the results of this technique are similar to those of the Olszewski tube, hypolimnetic aeration differs in that it uses compressed air to move the water rather than a siphoning effect.
References
Limnology
Environmental engineering | Olszewski tube | Chemistry,Engineering | 1,210 |
3,138,212 | https://en.wikipedia.org/wiki/Electron%20acceptor | An electron acceptor is a chemical entity that accepts electrons transferred to it from another compound. Electron acceptors are oxidizing agents.
The electron accepting power of an electron acceptor is measured by its redox potential.
In the simplest case, electron acceptors are reduced by one electron. The process can alter the structure of the acceptor substantially. When the added electron is highly delocalized, the structural consequences of the reduction can be subtle. The central C-C distance in the electron acceptor tetracyanoethylene elongates from 135 to 143 pm upon acceptance of an electron. In the formation of some donor-acceptor complexes, less than one electron is transferred. TTF-TCNQ is a charge transfer complex.
Biology
In biology, a terminal electron acceptor often refers to either the last compound to receive an electron in an electron transport chain, such as oxygen during cellular respiration, or the last cofactor to receive an electron within the electron transfer domain of a reaction center during photosynthesis. All organisms obtain energy by transferring electrons from an electron donor to an electron acceptor.
One practical illustration of the role of electron acceptors in biology is the high toxicity of paraquat. The activity of this broad-spectrum herbicide results from the electron acceptor property of N,N'-dimethyl-4,4'-bipyridinium.
Materials science
In some solar cells, the photocurrent entails transfer of electrons from a donor to an electron acceptor.
See also
Acceptor (semiconductors)
Redox reaction
Semiconductor
References
External links
Electron acceptor definition at United States Geological Survey website
Environmental Protection Agency
Electrochemical concepts | Electron acceptor | Chemistry | 343 |
531,373 | https://en.wikipedia.org/wiki/Gravity%20Probe%20A | Gravity Probe A (GP-A) was a space-based experiment to test the equivalence principle, a feature of Einstein's theory of relativity. It was performed jointly by the Smithsonian Astrophysical Observatory and the National Aeronautics and Space Administration. The experiment sent a hydrogen maser (a highly accurate frequency standard) into space to measure with high precision the rate at which time passes in a weaker gravitational field. Masses cause distortions in spacetime, which leads to the effects of length contraction and time dilation, both predicted results of Albert Einstein's theory of general relativity. Because of the bending of spacetime, an observer on Earth (in a lower gravitational potential) should measure a slower rate at which time passes than an observer that is higher in altitude (at higher gravitational potential). This effect is known as gravitational time dilation.
The experiment was a test of a major consequence of Einstein's general relativity, the equivalence principle. The equivalence principle states that a reference frame in a uniform gravitational field is indistinguishable from a reference frame that is under uniform acceleration. Further, the equivalence principle predicts that phenomenon of different time flow rates, present in a uniformly accelerating reference frame, will also be present in a stationary reference frame that is in a uniform gravitational field.
The probe was launched on June 18, 1976 from the NASA-Wallops Flight Center in Wallops Island, Virginia. The probe was carried via a Scout rocket, and attained a height of about 10,000 km, while remaining in space for 1 hour and 55 minutes, as intended. It returned to Earth by splashing down into the Atlantic Ocean.
Background
The objective of the Gravity Probe A experiment was to test the validity of the equivalence principle. The equivalence principle is a key component of Albert Einstein's theory of general relativity, and states that the laws of physics are the same in an accelerating reference frame as they are in a reference frame that is acted upon by a uniform gravitational field.
Equivalence principle
The equivalence principle can be understood by comparing a rocket ship in two scenarios. First, imagine a rocket ship at rest on the Earth's surface; objects dropped within it fall towards the floor with an acceleration of 9.8 m/s2. Now, imagine a distant rocket ship that has escaped Earth's gravitational field and is accelerating at a constant 9.8 m/s2 due to thrust from its rockets; unconstrained objects in this rocket ship likewise move towards the floor with an acceleration of 9.8 m/s2. This example shows one way that a uniformly accelerating reference frame is indistinguishable from a gravitational reference frame.
Furthermore, the equivalence principle postulates that phenomena that are caused by inertial effects will also be present due to gravitational effects. Consider a beam of light that is shined horizontally across a rocket ship, which is accelerating. According to a non-accelerating observer outside the rocket ship, the floor of the rocket ship accelerates towards the light beam. Therefore, the light beam does not seem to travel on a horizontal path according to the inside observer, rather the light ray appears to bend toward the floor. This is an example of an inertial effect that causes light to bend. The equivalence principle states that this inertial phenomenon will occur in a gravitational reference frame as well. Indeed, the phenomenon of gravitational lensing states that matter can bend light, and this phenomenon has been observed by the Hubble Space Telescope, and other experiments.
Time dilation
Time dilation refers to the expansion or contraction in the rate at which time passes, and was the subject of the Gravity Probe A experiment. Under Einstein's theory of general relativity, matter distorts the surrounding spacetime. This distortion causes time to pass more slowly in the vicinity of a massive object, compared to the rate experienced by a distant observer. The Schwarzschild metric, surrounding a spherically symmetric gravitating body, has a smaller time coefficient closer to the body, which means a slower rate of time flow there.
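For a static clock in the Schwarzschild geometry, the standard textbook expression below quantifies that "smaller coefficient": the rate of proper time relative to the coordinate time of a distant observer shrinks as the radius decreases.

```latex
% Rate of proper time \tau for a static clock at radius r in the
% Schwarzschild geometry, relative to the coordinate time t of a distant
% observer; this factor is the "smaller coefficient" closer to the body:
\frac{\mathrm{d}\tau}{\mathrm{d}t} = \sqrt{1 - \frac{2GM}{r c^{2}}}
```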
There is a similar notion of time dilation in Einstein's theory of special relativity (which involves neither gravity nor the idea of curved spacetime). Such time dilation appears in the Rindler coordinates attached to a uniformly accelerating particle in flat spacetime. Such a particle would observe time passing faster on the side it is accelerating towards and more slowly on the opposite side. From this apparent variance in time, Einstein inferred that change in velocity affects the relativity of simultaneity for the particle. Einstein's equivalence principle generalizes this analogy, stating that an accelerating reference frame is locally indistinguishable from an inertial reference frame with a gravitational force acting upon it. In this way, Gravity Probe A was a test of the equivalence principle: it matched observations made in the gravity-affected inertial reference frame (of special relativity) of the Earth's surface against the predictions of special relativity for the same frame treated as accelerating upwards with respect to a free-fall reference, which can be thought of as inertial and gravity-free.
Experimental setup
The Gravity Probe A spacecraft housed an atomic hydrogen maser system. Maser is an acronym for microwave amplification by stimulated emission of radiation, and is similar to a laser, as it produces coherent electromagnetic waves in the microwave region of the electromagnetic spectrum. A hydrogen maser produces a very accurate signal (1.42 billion cycles per second) that is highly stable: to one part in a quadrillion (10^15). This is equivalent to a clock that drifts by less than two seconds every 100 million years. A microwave signal derived from the maser frequency was transmitted to the ground throughout the mission. The one-way signal received from the rocket was relativistically Doppler shifted due to the speed of the rocket and in addition was gravitationally Doppler blue-shifted by a minute amount.
In addition to the hydrogen maser carried by the rocket, another hydrogen maser on the ground was used as a source for continuous transmission of a microwave signal to the rocket. A microwave transponder carried on the rocket returned the signal to the Earth. On the way up, the signal received by the rocket was Doppler shifted due to the speed of the rocket and was gravitationally red-shifted by a minute amount. The transponder signal received on the ground was Doppler shifted due to the speed of the rocket and was gravitationally blue-shifted by the same amount that it was red-shifted on the way up. Since the gravitational Doppler shift of the signals on the way up always exactly cancelled the gravitational Doppler shift on its way down, the two-way Doppler shift of the signal received on the ground depended only on the speed of the rocket.
In a microwave frequency mixer, one-half of the two-way Doppler shift from the transponded ground maser signal was subtracted from the Doppler shift of the space maser. In this way, the Doppler shift due to the spacecraft's motion was completely cancelled out, leaving only the gravitational component of the Doppler shift.
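The cancellation scheme can be summarized in a few lines of arithmetic. The sketch below is schematic only (first-order terms, rounded constants, and an arbitrary placeholder velocity term): it shows how subtracting half of the two-way shift isolates the gravitational term, and checks that the predicted shift at 10,000 km altitude is of order 4 x 10^-10.

```python
# Schematic of the GP-A frequency bookkeeping. Subtracting half of the
# two-way (transponded) Doppler shift from the one-way downlink cancels
# the velocity-dependent term and leaves the gravitational shift.
G, M, c = 6.674e-11, 5.972e24, 2.998e8   # SI units, rounded
R_earth, h = 6.371e6, 1.0e7              # surface radius, probe altitude (m)

def potential(r):
    """Newtonian gravitational potential (weak-field approximation)."""
    return -G * M / r

# Predicted fractional gravitational blueshift of the downlink:
grav_shift = (potential(R_earth + h) - potential(R_earth)) / c**2
print(f"predicted gravitational shift ~ {grav_shift:.2e}")   # ~4.3e-10

# One-way downlink carries velocity + gravity terms; the transponded
# two-way tone carries twice the velocity term and no net gravity term.
v_term = 1e-5                       # placeholder first-order Doppler term
one_way = v_term + grav_shift
two_way = 2 * v_term
residual = one_way - 0.5 * two_way  # mixer output: gravity term only
print(f"residual after mixing = {residual:.2e}")
```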
The probe was launched nearly vertically upward to cause a large change in the gravitational potential, reaching a height of about 10,000 km. At this height, general relativity predicted a clock should run 4.5 parts in 10^10 faster than one on the Earth, or about one second every 73 years. The maser oscillations represented the ticks of a clock, and by measuring the frequency of the maser as it changed elevation, the effects of gravitational time dilation were detected.
Results
The goal of the experiment was to measure the rate at which time passes in a higher gravitational potential, so to test this the maser in the probe was compared to a similar maser that remained on Earth. Before the two clock rates could be compared, the Doppler shift was subtracted from the clock rate measured by the maser that was sent into space, to correct for the relative motion between the observers on Earth and the probe. The two clock rates were then compared against each other and against the theoretical predictions of how they should differ. The stability of the maser permitted measurement of changes in its rate of 1 part in 10^14 for a 100-second measurement.
The experiment was thus able to test the equivalence principle. Gravity Probe A confirmed the prediction that deeper in the gravity well, the flow of time is slower, and the observed effects matched the predicted effects to an accuracy of about 70 parts per million.
See also
Doppler Effect
General Relativity
Gravitational Redshift
Gravity Probe B
Pound–Rebka experiment
Timeline of gravitational physics
Primary references
References
Further reading
Validation of Local Position Invariance through Gravitational Red-Shift Experiment
External links
Gravity Probe A Collection, The University of Alabama in Huntsville Archives and Special Collections
Physics experiments
Tests of general relativity
1976 in science
1976 in spaceflight | Gravity Probe A | Physics | 1,797 |
70,931,046 | https://en.wikipedia.org/wiki/ZFK%20equation | ZFK equation, abbreviation for Zeldovich–Frank-Kamenetskii equation, is a reaction–diffusion equation that models premixed flame propagation. The equation is named after Yakov Zeldovich and David A. Frank-Kamenetskii who derived the equation in 1938 and is also known as the Nagumo equation. The equation is analogous to the KPP equation except that it contains an exponential behaviour for the reaction term, and it differs fundamentally from the KPP equation with regards to the propagation velocity of the traveling wave. In non-dimensional form, the equation reads

\frac{\partial \theta}{\partial t} = \frac{\partial^2 \theta}{\partial x^2} + \omega(\theta)

with a typical form for \omega given by

\omega(\theta) = \frac{\beta^2}{2}\,\theta(1-\theta)\,e^{-\beta(1-\theta)}

where \theta is the non-dimensional dependent variable (typically temperature) and \beta is the Zeldovich number. In the ZFK regime, \beta \gg 1. The equation reduces to Fisher's equation for \beta \ll 1 and thus corresponds to the KPP regime. The minimum propagation velocity U_{\min} (which is usually the long time asymptotic speed) of a traveling wave in the ZFK regime is given by

U_{\min} \approx \left( 2\int_0^1 \omega(\theta)\,\mathrm{d}\theta \right)^{1/2}

whereas in the KPP regime, it is given by

U_{\min} = 2\sqrt{\left.\frac{\mathrm{d}\omega}{\mathrm{d}\theta}\right|_{\theta=0}}
Traveling wave solution
Similar to Fisher's equation, a traveling wave solution can be found for this problem. Suppose the wave travels from right to left with a constant velocity U; then in the coordinate attached to the wave, \xi = x + Ut, the problem becomes steady. The ZFK equation reduces to

\frac{\mathrm{d}^2\theta}{\mathrm{d}\xi^2} - U\frac{\mathrm{d}\theta}{\mathrm{d}\xi} + \omega(\theta) = 0

satisfying the boundary conditions \theta(-\infty) = 0 and \theta(+\infty) = 1. The boundary conditions are approached sufficiently smoothly that the derivative \mathrm{d}\theta/\mathrm{d}\xi also vanishes as \xi \to \pm\infty. Since the equation is translationally invariant in the \xi direction, an additional condition, say for example a prescribed value of \theta at \xi = 0, can be used to fix the location of the wave. The speed U of the wave is obtained as part of the solution, thus constituting a nonlinear eigenvalue problem. A numerical solution of the above equation (the profile \theta(\xi), the eigenvalue U and the corresponding reaction term \omega) is shown in the figure, calculated for a representative value of \beta.
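One way to sidestep the eigenvalue problem numerically is to time-march the original PDE from an ignited initial condition and measure the front speed directly. A minimal sketch follows; the grid, the value of \beta, and the theta = 1/2 front tracking are illustrative choices, not values taken from the text.

```python
# Estimate the ZFK front speed by time-marching the PDE with explicit
# finite differences and tracking the theta = 1/2 level set.
import numpy as np

beta = 10.0

def omega(theta):
    return 0.5 * beta**2 * theta * (1 - theta) * np.exp(-beta * (1 - theta))

L, N = 100.0, 2000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2                          # within explicit stability limit
theta = np.where(x > L - 10, 1.0, 0.0)    # "burnt" region on the right

times, fronts, t = [], [], 0.0
for n in range(60_000):
    lap = np.zeros_like(theta)
    lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dx**2
    theta = theta + dt * (lap + omega(theta))
    theta[0], theta[-1] = theta[1], theta[-2]   # zero-flux ends
    t += dt
    if n % 2_000 == 0:
        times.append(t)
        fronts.append(x[np.argmin(np.abs(theta - 0.5))])

# The wave moves right to left, so the fitted slope is negative.
U = -np.polyfit(times[-10:], fronts[-10:], 1)[0]
print(f"measured speed U ~ {U:.2f} (ZFK asymptote: U -> 1 as beta -> inf)")
```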
Asymptotic solution
The ZFK regime \beta \gg 1 is formally analyzed using activation energy asymptotics. Since \beta is large, the factor e^{-\beta(1-\theta)} will make the reaction term practically zero, however that term will be non-negligible where 1-\theta \sim 1/\beta. The reaction term will also vanish when \theta = 0 and \theta = 1. Therefore, it is clear that \omega is negligible everywhere except in a thin layer where \theta is close to unity. Thus the problem is split into three regions, an inner diffusive-reactive region flanked on either side by two outer convective-diffusive regions.
Outer region
The problem for the outer region is given by

\frac{\mathrm{d}^2\theta}{\mathrm{d}\xi^2} - U\frac{\mathrm{d}\theta}{\mathrm{d}\xi} = 0

The solution satisfying the condition \theta(-\infty) = 0 is \theta = e^{U\xi}. This solution is also made to satisfy \theta(0) = 1 (an arbitrary choice) to fix the wave location somewhere in the domain, because the problem is translationally invariant in the \xi direction. As \xi \to 0^-, the outer solution behaves like \theta \to 1 + U\xi, which in turn implies \mathrm{d}\theta/\mathrm{d}\xi \to U.
The solution satisfying the condition \theta(+\infty) = 1 is \theta = 1. As \xi \to 0^+, the outer solution behaves like \theta \to 1 and thus \mathrm{d}\theta/\mathrm{d}\xi \to 0.
We can see that although \theta is continuous at \xi = 0, the derivative \mathrm{d}\theta/\mathrm{d}\xi has a jump at \xi = 0, from U on the left to 0 on the right. The transition between these derivatives is described by the inner region.
Inner region
In the inner region where 1-\theta \sim 1/\beta, the reaction term is no longer negligible. To investigate the inner-layer structure, one introduces a stretched coordinate \eta = \beta\xi encompassing the point \xi = 0, because that is where \theta is approaching unity according to the outer solution, and a stretched dependent variable \Psi = \beta(1-\theta). Substituting these variables into the governing equation and collecting only the leading-order terms, we obtain

\frac{\mathrm{d}^2\Psi}{\mathrm{d}\eta^2} = \frac{1}{2}\,\Psi e^{-\Psi}

The boundary condition as \eta \to -\infty comes from the local behaviour of the outer solution obtained earlier, which when written in terms of the inner-zone coordinate becomes \Psi \to -U\eta and \mathrm{d}\Psi/\mathrm{d}\eta \to -U. Similarly, as \eta \to +\infty, we find \Psi \to 0 and \mathrm{d}\Psi/\mathrm{d}\eta \to 0. The first integral of the above equation after imposing these boundary conditions becomes

\left(\frac{\mathrm{d}\Psi}{\mathrm{d}\eta}\right)^2 = 1 - (1+\Psi)e^{-\Psi}

which, evaluated in the limit \Psi \to \infty where \mathrm{d}\Psi/\mathrm{d}\eta \to -U, implies U = 1. It is clear from the first integral that the wave speed squared is proportional to the integrated (with respect to \theta) value of \omega (of course, in the large-\beta limit, only the inner zone contributes to this integral). The first integral, after substituting U = 1, is given by

\frac{\mathrm{d}\Psi}{\mathrm{d}\eta} = -\sqrt{1 - (1+\Psi)e^{-\Psi}}
KPP–ZFK transition
In the KPP regime, U_{\min} = 2\sqrt{\omega'(0)}. For the reaction term used here, the KPP speed that is applicable for small \beta is given by

U_{KPP} = \sqrt{2}\,\beta\,e^{-\beta/2}

whereas in the ZFK regime, as we have seen above, U = 1. Numerical integration of the equation for various values of \beta showed that there exists a critical value \beta_* such that only for \beta < \beta_* is the propagation speed equal to U_{KPP}. For \beta > \beta_*, U is greater than U_{KPP}. As \beta \gg \beta_*, U approaches 1, thereby approaching the ZFK regime. The region between the KPP regime and the ZFK regime is called the KPP–ZFK transition zone.
The critical value \beta_* depends on the reaction model.
Clavin–Liñán model
To predict the KPP–ZFK transition analytically, Paul Clavin and Amable Liñán proposed a simple piecewise-linear model for \omega(\theta), characterized by two constant parameters. The KPP velocity of the model is set by the slope of the reaction term at \theta = 0, whereas the ZFK velocity is obtained in a double limit of the two parameters that mimics a sharp increase in the reaction near \theta = 1.
For this model there exists a critical parameter value separating the KPP regime from the ZFK regime.
See also
Fisher's equation
References
Partial differential equations
Combustion | ZFK equation | Chemistry | 977 |
26,901,862 | https://en.wikipedia.org/wiki/Ciladopa | Ciladopa (developmental code name AY-27,110) is a dopamine agonist with a similar chemical structure to dopamine. It was under investigation as an antiparkinsonian agent but was discontinued due to concerns of tumorigenesis in rodents.
References
Abandoned drugs
Catechol ethers
Catecholamines
Dopamine agonists
Phenylethanolamines
Piperazines
Tropones | Ciladopa | Chemistry | 90 |
39,226,969 | https://en.wikipedia.org/wiki/Bacterial%20filtration%20efficiency | Bacterial Filtration Efficiency or BFE is a measurement of a respirator material's resistance to penetration of bacteria. Results are reported as percent efficiency and correlate with the ability of the fabric to resist bacterial penetration. Higher numbers in this test indicate better barrier efficiency. In published comparisons, wrap fabrics have been compared based on grade as well as basis weight.
Measurement Methodology
Kimberly-Clark uses a test procedure in which samples are challenged with a biological aerosol of Staphylococcus aureus, and a ratio of the bacterial challenge counts to sample effluent counts is used to determine percent bacterial filtration efficiency (%BFE).
Surgical mask standards in China, Europe, and the United States measure BFE by using particles of size 3.0 μm.
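The percent efficiency is the standard count ratio: the upstream challenge count minus the downstream (effluent) count, divided by the challenge count. A one-line sketch with invented example counts:

```python
# %BFE from challenge vs. effluent colony counts (the ratio described
# above). The counts below are invented example numbers.
def bfe_percent(challenge_cfu: float, effluent_cfu: float) -> float:
    return 100.0 * (challenge_cfu - effluent_cfu) / challenge_cfu

print(f"BFE = {bfe_percent(challenge_cfu=2200, effluent_cfu=30):.1f}%")  # 98.6%
```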
References
Filtration | Bacterial filtration efficiency | Chemistry | 160 |
28,064,303 | https://en.wikipedia.org/wiki/Nova%20Igua%C3%A7u%20level%20crossing%20disaster | The Nova Iguaçu level crossing disaster occurred on June 7, 1951, at 4:25 a.m., when an electric train struck a gasoline truck on a level crossing near Nova Iguaçu, twenty miles north of Rio de Janeiro, killing 54 people.
The Estrada de Ferro Central do Brasil company's train was travelling from Belém, and its five steel cars were full of early-morning commuters heading for Rio. The truck, which carried 4,500 gallons of gasoline, had stalled on the level crossing. When the train struck it, the tanker exploded; many of the passengers were burnt alive, some still sitting in their seats, while others piled up by the doors. In all, 54 people were killed and 44 more injured.
Sources
Railway Wrecks by Edgar A. Haine, page 146, publ 1993,
See also
Train wreck
Railway accidents in 1951
Level crossing incidents in Brazil
Transport in Rio de Janeiro (state)
1951 in Brazil
1951 road incidents
June 1951 events in South America | Nova Iguaçu level crossing disaster | Technology | 202 |
61,486,848 | https://en.wikipedia.org/wiki/Rock%20analogs%20for%20structural%20geology | This is a compilation of the properties of different analog materials used to simulate deformational processes in structural geology. Such experiments are often called analog or analogue models. The organization of this page follows the review of rock analog materials in structural geology and tectonics of Reber et al. 2020.
Materials used to simulate upper crustal deformation
These materials need to exhibit brittle deformation upon failure as well as elastic and viscous deformation before failure.
Materials that simulate upper crustal deformation
Dry granular materials
Materials used to simulate deformation of the lower crust and mantle
Various fluids are used to simulate deformation of the lower crust and mantle, such as: linear, non-linear, and yield stress fluids.
Materials used to simulate deformation of the middle crust
Composite model materials
Composite materials combine phases with different physical properties. A common composite mixture contains dry granular materials and fluids. These analog materials have been used:
Sediment transport (Parker et al., 1982) using low viscosity fluids
Dynamics in the middle crust (Mookerjee et al., 2017; Reber et al., 2014) employing high viscosity fluids
Stick-slip dynamics (Higashi and Sumita, 2009; Reber et al., 2014)
Strain softening and hardening processes (Panien et al., 2006)
The most commonly used granular materials in composite mixtures are:
Sand
Glass beads
Acrylic discs
Common fluids used in composite mixtures are:
Carbopol
Silicone
Wax, which can behave as a brittle or viscous material depending on the melting temperature (Mookerjee et al., 2017)
Visco-elasto-plastic model materials
Visco-elasto-plastic deformation exhibits a combination of elastic, viscous, and plastic deformation at the same time. Various asphalts and bituminous materials demonstrate visco-elasto-plastic deformation, but they are rarely used as modeling materials (McBirney and Best, 1961). Common modeling materials demonstrating complex rheology are:
Carbopol (Piau, 2007; Shafiei et al., 2018)
Kaolinite clay (Cooke and van der Elst, 2012)
References
Structural geology
Earth sciences
Deformation (mechanics) | Rock analogs for structural geology | Materials_science,Engineering | 446 |
54,368,685 | https://en.wikipedia.org/wiki/NGC%20473 | NGC 473 is a lenticular galaxy in the constellation of Pisces. Its velocity with respect to the cosmic microwave background is 1819 ± 22 km/s, which corresponds to a Hubble distance of about 27 Mpc. In addition, one non-redshift measurement gives an independent distance estimate. It was discovered on December 20, 1786 by William Herschel.
See also
List of NGC objects (1–1000)
References
External links
Pisces (constellation)
Lenticular galaxies
Discoveries by William Herschel | NGC 473 | Astronomy | 124 |
3,948,271 | https://en.wikipedia.org/wiki/Leccinum%20scabrum | Leccinum scabrum, commonly known as the rough-stemmed bolete, scaber stalk, and birch bolete, is an edible mushroom in the family Boletaceae, and was formerly classified as Boletus scaber. The birch bolete is widespread in Europe, in the Himalayas in Asia, and elsewhere in the Northern Hemisphere, occurring only in mycorrhizal association with birch trees. It fruits from June to October. This mushroom is also becoming increasingly common in Australia and New Zealand where it is likely introduced.
Description
The cap is wide. At first, it is hemispherical, and later becomes flatter. The skin of the cap is tan or brownish, usually with a lighter edge; it is smooth, bald, and dry to viscid.
The pores are whitish at a young age, later gray. In older specimens, the pores on the pileus can bulge out, while around the stipe they dent in strongly. The pore covering is easy to remove from the skin of the pileus.
The stipe is long and wide, slim, with white and dark to black flakes, and tapers upward. The basic mycelium is white.
The flesh is whitish, sometimes darkening following exposure. In young specimens, the meat is relatively firm, but it very soon becomes spongy and holds water, especially in rainy weather. When cooked, the meat of the birch bolete turns black.
Leccinum scabrum has been found in association with ornamental birch trees planted outside of its native range, such as in California.
Similar species
Several different species of Leccinum mushrooms are found in mycorrhiza with birches, and can be confused by amateurs and mycologists alike. L. variicolor has a bluish stipe. L. oxydabile has firmer, pinkish flesh and a different pileus skin structure. L. melaneum is darker in color and has yellowish hues under the skin of the pileus and stipe. L. holopus is paler and whitish in all parts.
Habitat and distribution
Leccinum scabrum is a European species that has been introduced to various areas of the world, mostly appearing in urban areas. In New Zealand, it associates solely with Betula pendula.
Uses
The birch bolete is edible but considered not to be worthwhile by some guides. It can be cooked in various mushroom dishes. It can also be pickled in brine or vinegar. It is commonly harvested for food in Finland and Russia.
A few reports in North America (New England and the Rocky Mountains) after 2009 suggest that Leccinums (birch boletes) should only be consumed with much caution.
In Nordic countries all Leccinum species are considered likely poisonous unless cooked for at least 15-20 minutes.
See also
List of Leccinum species
List of North American boletes
References
Further reading
Kallenbach: Die Röhrlinge (Boletaceae), Leipzig, Klinkhardt, (1940–42)
Gerhardt, Ewald: Pilze. Band 2: Röhrlinge, Porlinge, Bauchpilze, Schlauchpilze und andere, (Spektrum der Natur BLV Intensiv), (1985)
External links
Pilzgalerie: Leccinum scabrum (Birkenpilz)
scabrum
Edible fungi
Fungi described in 1783
Fungi of Europe
Fungi of New Zealand
Fungi of North America
Taxa named by Jean Baptiste François Pierre Bulliard
Fungus species | Leccinum scabrum | Biology | 735 |
20,782,736 | https://en.wikipedia.org/wiki/Opsis | See also the suffix -opsis.
Opsis (ὄψις) is the Greek word for spectacle in the theatre and performance. Its first use has been traced back to Aristotle's Poetics. It is now taken up by theatre critics, historians, and theorists to describe the mise en scène of a performance or theatrical event.
It is also the word used in the Bible for “sight” or “appearance”.
Origins
Opsis comes from the ancient Greek for "appearance, sight, view." The English word optic is derived from this word.
Aristotle and the Greeks
Aristotle's use of the term opsis, as Marvin Carlson points out, is the "final element of tragedy," but the term "receive[d] no further consideration". Aristotle discusses opsis in Book 6 of the Poetics, but only goes as far as to suggest that "spectacle has, indeed, an emotional attraction of its own, but, of all the parts, it is the least artistic, and connected least with the art of poetry. For the power of Tragedy, we may be sure, is felt even apart from representation and actors. Besides, the production of spectacular effects depends more on the art of the stage machinist than on that of the poet".
Contemporary theatre theory
In Theories of the Theatre, Marvin Carlson replaces the word opsis with the English equivalent "spectacle," but gives opsis/spectacle only as much focus as Aristotle does in the Poetics. In Dictionary of the Theatre: Terms, Concepts, and Analysis, however, opsis is listed in the "terms" section and defined as: that which is visible, offered to the [gaze], hence its connections with the notions of spectacle and performance. In Aristotle's Poetics, spectacle is one of the six constituent parts of tragedy, but ranks below others considered to be more essential ... The place in theatre history assigned subsequently to the opsis, to what we would now call the [mise-en-scène], determined the mode of transmission and the overall meaning of the performance. Opsis is a specific feature of the performing arts.
J. Michael Walton, in The Greek Sense of Theatre: Tragedy Reviewed, challenges the traditional assumptions about Ancient Greek theatre. He states that "the visual aspect of the Greek theatre has for so long taken second place to the spoken word...it is still the common belief that what was said in the Greek tragedies was more important than what was seen." Walton's thesis suggests that Ancient theatre lacks evidence of original productions but that the written text, in comparison, is more accessible, and that as a result critics of Ancient theatre have relegated spectacle/mise-en-scène/opsis to a less important aspect of theatre than the spoken word.
Ronald W. Vince suggests that while it may seem logical simply to recognize opsis as stage spectacle or the mise-en-scène and so include it, if anywhere, in the vocabulary of performance theory, there is implied even in Aristotle's use of the term a possible interpretation which would link opsis with the art of writing plays as well as with the art of staging them.
Notes
Further reading
Michael Peter Bolus, Modern Mask
Gregory Michael Sifakis, Aristotle on the Function of Tragic Poetry, Crete University Press, 2001
Theatre
Aristotelianism
Theatre theorists
Scenic design
Concepts in ancient Greek aesthetics | Opsis | Engineering | 698 |
1,140,356 | https://en.wikipedia.org/wiki/World%20Toilet%20Day | World Toilet Day (WTD) is an official United Nations international observance day on 19 November to inspire action to tackle the global sanitation crisis. Worldwide, 4.2 billion people live without "safely managed sanitation" and around 673 million people practice open defecation. Sustainable Development Goal 6 aims to "Ensure availability and sustainable management of water and sanitation for all". In particular, target 6.2 is to "End open defecation and provide access to sanitation and hygiene". When the Sustainable Development Goals Report 2020 was published, United Nations Secretary-General António Guterres said, "Today, Sustainable Development Goal 6 is badly off track" and it "is hindering progress on the 2030 Agenda, the realization of human rights and the achievement of peace and security around the world".
World Toilet Day exists to inform, engage and inspire people to take action toward achieving this goal. The UN General Assembly declared World Toilet Day an official UN day in 2013, after Singapore had tabled the resolution (its first resolution before the UN's General Assembly of 193 member states). Prior to that, World Toilet Day had been established unofficially by the World Toilet Organization (a Singapore-based NGO) in 2001.
UN-Water is the official convener of World Toilet Day. UN-Water maintains the official World Toilet Day website and chooses a special theme for each year. In 2020 the theme was "Sustainable sanitation and climate change". In 2019 the theme was 'Leaving no one behind', which is the central theme of the Sustainable Development Goals. Themes in previous years include nature-based solutions, wastewater, toilets and jobs, and toilets and nutrition. World Toilet Day is marked by communications campaigns and other activities. Events are planned by UN entities, international organizations, local civil society organizations and volunteers to raise awareness and inspire action.
Toilets are important because access to a safe functioning toilet has a positive impact on public health, human dignity, and personal safety, especially for females. Sanitation systems that do not safely treat excreta (feces) allow the spread of disease. Serious soil-transmitted diseases and waterborne diseases such as cholera, diarrhea, typhoid, dysentery and schistosomiasis can result.
Convener
In 2013, UN-Water and the "Thematic Priority Area (TPA) on Drinking Water and Basic Sanitation" received the mandate to oversee World Toilet Day each year. This mandate is described in the United Nations Resolution A/67/L.75.
In consultation with the UN-Water World Toilet Day Task Force, made up of UN-Water member organizations, UN-Water selects the theme based on that year's World Water Development Report and develops content for World Toilet Day communications campaigns.
UN-Water manages the World Toilet Day website which promotes key issues and stories, provides communications and campaigns resources, and announces events and opportunities to participate.
The overall World Toilet Day campaign mobilizes civil society, think tanks, non-governmental organizations, academics, corporations and the general public to participate in the associated social media and communications campaigns. Ultimately, the aim is to encourage organizations and governments to plan activities and action on sanitation issues to make progress on Sustainable Development Goal 6.
Annual themes
Starting in 2012, World Toilet Day themes were selected for each year and form the basis of the related communications campaigns. Since 2016, the same overall annual theme has been used for both World Toilet Day and World Water Day, based on the World Water Development Report.
2012 – "I give a shit, do you?" (slogan)
2013 – Tourism and water
2014 – Equality and dignity
2015 – Toilets and nutrition
2016 – Toilets and jobs
2017 – Wastewater
2018 – Nature-based solutions (slogan: "When Nature calls")
2019 – Leaving no one behind – The campaign draws attention to those people being "left behind without sanitation and the social, economic and environmental consequences of inaction". This is closely related to Sustainable Development Goal 6 which has a target to eliminate open defecation and ensure "everyone has access to sustainable sanitation services by 2030, paying special attention to the needs of women and girls and those in vulnerable situations".
2020 – Sustainable sanitation and climate change
2021 – Valuing toilets: The World Toilet Organization and Bill Gates both believe that if value can be given to toilet waste, funds will be generated to pay for cleanup, including profits for entrepreneurs interested in investing in related industries.
2022 – Groundwater and sanitation – making the invisible visible.
2023 – Accelerating change
2024 – Toilets are a place for peace
Examples of activities and events
Launch of reports
Some organizations launch toilet-related (or sanitation-related) reports on World Toilet Day. For example:
The Toilet Board Coalition (2017) "Sanitation Economy"
Water and Sanitation for the Urban Poor (WSUP) (2017) "Guide to strengthening the enabling environment for faecal sludge management"
The International Labour Office (ILO) (2016) "WASH@Work: self-training handbook
WHO, UNICEF and USAID (2015) "Improving Nutrition Outcomes with Better Water, Sanitation and Hygiene: Practical Solutions for Policies and Programmes"
Events
2019: Planned events for World Toilet Day 2019 included, for example, a workshop in the USA entitled "Manure Management – What Poop Can Teach Youth!", art installations in Ireland under the theme "Think Before You Flush", and a "Toilets for All" campaign in rural areas of Madhya Pradesh, India.
2018: Events for World Toilet Day in 2018 included diverse activities such as a 'hackathon' in Ghana to promote digital solutions, a seminar hosted by Engineers without Borders in Denmark, a screening and discussion of the Bollywood movie Toilet: Ek Prem Katha (in English – Toilet: A Love Story) in Canada, and a school drawing competition in India.
2017: Members of the Sustainable Sanitation Alliance (SuSanA) used the momentum around World Toilet Day in 2017 to update Wikipedia articles on WASH-related topics. This contributed to public education about the sanitation crisis. The documentary "Follow the Flush," released 19 November 2017, educated people about what happens beneath the streets of New York City after a person flushes a toilet in Manhattan. In the lead-up to World Toilet Day 2017, communities worldwide came together for sanitation-themed "Urgent Runs". More than 63 events were held in 42 countries. Events included fun runs, awareness walks, toilet cleaning programs, carnivals and even motorbike parades. Countries participating include: Bangladesh, Benin, Bhutan, Burundi, Cambodia, Cameroon, Canada, China, Congo-Brazzaville, France, Gambia, Germany, Ghana, India, Indonesia, Italy, Kenya, Mongolia, Mozambique, Namibia, Netherlands, Pakistan, Philippines, Senegal, Tanzania, United States and Vietnam.
Impacts
Social media impacts
The World Toilet Day campaign and related publications reach millions of people through social media, dedicated websites and other channels. Over 100 events in 40 countries were registered on the World Toilet Day website in both 2016 and 2017. In 2017, the hashtag #WorldToiletDay had a maximum potential reach of over 750 million people on social media. In 2018, the maximum potential reach increased by 15% compared to 2017, while online activity and the number of authors increased by 12% and 22%, respectively.
History
On 19 November 2001, the NGO World Toilet Organization (WTO) was founded by Jack Sim, a philanthropist from Singapore. He subsequently declared 19 November as World Toilet Day. The name "World Toilet Day" and not "World Sanitation Day" was chosen for ease of public messaging, even though toilets are only the first stage of sanitation systems.
World Toilet Day events and public awareness campaigns increase public awareness of the broader sanitation systems that include wastewater treatment, fecal sludge management, municipal solid waste management, stormwater management, hygiene, and handwashing. Also, the UN Sustainable Development Goals call for more than just toilets. Goal 6 calls for adequate sanitation, which includes the whole system for assuring that waste is safely processed.
The WTO began pushing for global recognition of World Toilet Day and, in 2007, the Sustainable Sanitation Alliance (SuSanA) began to actively support World Toilet Day too. Their efforts to raise attention to the sanitation crisis were bolstered in 2010, when access to water and sanitation was officially declared a human right by the UN.
In 2013, a joint initiative between the Government of Singapore and the World Toilet Organization led to Singapore's first ever UN resolution, named "Sanitation for All". The resolution calls for collective action to end the world's sanitation crisis. World Toilet Day was declared an official UN day in 2013. That resolution was adopted by 122 countries at the 67th session of the UN General Assembly in New York.
The Sustainable Development Goals (SDGs) replaced the Millennium Development Goals (MDGs) in 2016. On World Toilet Day on 19 November 2015, United Nations Secretary-General Ban Ki-moon urged broad action to renew efforts to provide access to adequate sanitation for all. He reminded everyone of the "Call to Action on Sanitation" which was launched in 2013, and the aim to end open defecation by 2025. He also said: "By many accounts, sanitation is the most-missed target of the Millennium Development Goals."
The UN Deputy Secretary-General, Jan Eliasson, was honored on World Toilet Day in 2016 in New York for his deep commitment to breaking the sanitation taboo. For example, he had delivered a video message to attendees of a WaterAid and Unilever joint event in the European Parliament on World Toilet Day 2014. In 2016, UN-Water supported "A Toast for Toilets" in New York with the United Nations Mission of Singapore.
Background
Worldwide, 4.2 billion people live without "safely managed sanitation" and around 673 million people worldwide practice open defecation. Having to urinate in the open can also be difficult for women and girls. Females tend to resort to the cover of darkness to give them more privacy, but then risk being attacked when alone at night.
It has been estimated that 58% of all cases of diarrhea worldwide in 2015 were caused by unsafe water, poor sanitation and poor hygiene practices, such as inadequate handwashing. This resulted in half a million children under the age of five dying from diarrhea per year. Providing sanitation has been estimated to lower the odds of children suffering diarrhea by 7–17%, and under-five mortality by 5–20%.
The Human Right to Water and Sanitation was recognized as a human right by the United Nations (UN) General Assembly on 28 July 2010. Lack of access to sanitation (toilets) has an impact on public health, dignity, and safety. The spread of many diseases (e.g. soil-transmitted helminthiasis, diarrhea, schistosomiasis) and stunted growth in children is directly related to people being exposed to human feces because toilets are either not available or not used.
Sustainable Development Goal 6 aims to provide sanitation for all.
See also
Bindeshwar Pathak another toilet pioneer
Global Handwashing Day
Human right to water and sanitation
International Men's Day, also on 19 November
Menstrual Hygiene Day
WASH
Water issues in developing countries
Workers' right to access the toilet
References
External links
World Toilet Day history on Stamps
World Toilet Day official website
World Toilet Day on United Nations website
World Toilet Day Hygiene and Sanitation
Sanitation
Hygiene
Environmental awareness days
Health awareness days
November observances
United Nations days
Water and politics
Water and society
Toilets
Recurring events established in 2001 | World Toilet Day | Biology | 2,347 |
3,201,172 | https://en.wikipedia.org/wiki/High%20Power%20Electric%20Propulsion | High Power Electric Propulsion (HiPEP) is a variation of ion thruster for use in nuclear electric propulsion applications. It was ground-tested in 2003 by NASA and was intended for use on the Jupiter Icy Moons Orbiter, which was canceled in 2005.
Theory
The HiPEP thruster differs from earlier ion thrusters because the xenon ions are produced using a combination of microwave and magnetic fields. The ionization is achieved through a process called electron cyclotron resonance (ECR). In ECR, the small number of free electrons present in the neutral gas gyrate around the static magnetic field lines. The injected microwaves' frequency is set to match this gyrofrequency and a resonance is established. Energy is transferred from the right-hand polarized portion of the microwave to the electrons. This energy is then transferred to the bulk gas/plasma via the rare yet important collisions between electrons and neutrals. During these collisions, electrons can be knocked free from the neutrals, forming ion-electron pairs. The process is a highly efficient means of creating a plasma in low-density gases. Previously, the electrons required for ionization were provided by a hollow cathode.
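As a rough numeric illustration of the resonance condition, the electron gyrofrequency is f = eB/(2*pi*m_e), so the static magnetic field must be matched to the injected microwave frequency. The Python sketch below solves for the required field; the 2.45 GHz example frequency is an assumption chosen because it is common for ECR plasma sources, not a figure given for HiPEP.

import math

E_CHARGE = 1.602176634e-19     # electron charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def ecr_field(f_microwave_hz):
    """Magnetic field (tesla) at which the electron gyrofrequency
    f = e*B / (2*pi*m_e) matches the injected microwave frequency."""
    return 2.0 * math.pi * M_ELECTRON * f_microwave_hz / E_CHARGE

# Assumed 2.45 GHz source (common for ECR sources; not stated for HiPEP):
print(f"B ~ {ecr_field(2.45e9) * 1e3:.1f} mT")  # ~87.5 mT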
Specifications
The thruster itself is in the 20-50 kW class, with a specific impulse of 6,000-9,000 seconds, and a propellant throughput capability exceeding 100 kg/kW. The goal of the project, as of June 2003, was to achieve a technology readiness level of 4-5 within 2 years.
The pre-prototype HiPEP produced 670 millinewtons (mN) of thrust at a power level of 39.3 kW, using 7.0 mg/s of fuel and giving a specific impulse of 9620 s. Downrated to 24.4 kW, the HiPEP used 5.6 mg/s of fuel, giving a specific impulse of 8270 s and 460 mN of thrust.
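These figures can be checked against the standard relation Isp = F/(mdot*g0); a minimal Python sketch follows. The results land within roughly 1-2% of the quoted specific impulses, the differences presumably reflecting rounding in the reported thrust and flow rates.

G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(thrust_n, mdot_kg_per_s):
    """Specific impulse in seconds: Isp = F / (mdot * g0)."""
    return thrust_n / (mdot_kg_per_s * G0)

# Figures quoted above for the pre-prototype HiPEP:
print(specific_impulse(0.670, 7.0e-6))  # ~9760 s vs. quoted 9620 s
print(specific_impulse(0.460, 5.6e-6))  # ~8380 s vs. quoted 8270 s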
Project and development history
Phase 1 of HiPEP development concluded in early 2003. Conceptual Design of the thruster was completed, and individual component testing concluded. A full-scale laboratory thruster was constructed for Phase 2 of the HiPEP's development. However, with cancellation of the Jupiter Icy Moon Orbiter mission in 2005, HiPEP's development also came to a halt. Before cancellation, HiPEP completed a 2000 hour wear test.
See also
Exploration of Jupiter
List of spacecraft with electric propulsion
Solar electric propulsion
References
External links
NASA GRC Media Packet on HiPEP.
Ion engines | High Power Electric Propulsion | Physics,Chemistry | 503 |
49,803,641 | https://en.wikipedia.org/wiki/Excess%20noise%20ratio | In electronics, excess noise ratio is a characteristic of a noise generator such as a "noise diode", that is used to measure the noise performance of amplifiers. The Y-factor method is a common measurement technique for this purpose.
By using a noise diode, the output noise of an amplifier is measured at two input noise levels; from the ratio of the two output noise powers (referred to as Y), the noise figure of the amplifier can be determined without having to measure the amplifier gain.
Background
Any amplifier generates noise. In a radio receiver, the first stage dominates the overall noise of the receiver, and in most cases thermal (Johnson) noise determines the overall noise performance. As radio signals decrease in size, the noise at the input of the receiver sets the lower threshold of what can be received. The level of noise is determined by calculating the thermal noise in a 50 ohm resistor at the input of the receiver as follows:

P = kTB

where:
P = noise power in watts
k = Boltzmann constant = 1.38 × 10−23 J/K
T = temperature in kelvins
B = bandwidth in hertz
Thus, receivers with a narrow bandwidth have a higher sensitivity than receivers with a large bandwidth and input noise can be decreased by cooling the receiver input stage.
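As a numeric illustration of P = kTB (a Python sketch assuming the usual 290 K reference temperature, a value not stated above), the thermal noise floor is about −174 dBm in a 1 Hz bandwidth and rises by 10 dB per decade of bandwidth:

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(temp_k, bandwidth_hz):
    """Thermal (Johnson) noise power P = k*T*B, expressed in dBm."""
    p_watts = K_B * temp_k * bandwidth_hz
    return 10.0 * math.log10(p_watts / 1e-3)

print(thermal_noise_dbm(290.0, 1.0))  # ~ -174 dBm in 1 Hz
print(thermal_noise_dbm(290.0, 1e6))  # ~ -114 dBm in 1 MHz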
A noise diode is a device which has a defined excess noise ratio (ENR).
When the diode is off (unpowered) the noise from it will be thermal noise defined by the above formula. The bandwidth to be used is the bandwidth of the receiver.
When the diode is on (powered) the noise from it will be increased from the thermal noise by the diode's excess noise ratio. This figure could be 6 dB for testing an amplifier with 40 dB gain and could be 16 dB for an amplifier with less gain or higher noise.
To determine the noise figure of an amplifier one uses a noise diode at the input to the amplifier and determines the output noise Y with the diode switched on and off.
Knowing both Y and the ENR, one can then determine the amount of noise contributed by the amplifier and hence can calculate the noise figure of the amplifier.
Other techniques exist for making this measurement, but they either require accurate measurements of impedance or are less accurate.
The following formula relates the Y-factor to the ENR and the noise factor F (all as linear power ratios, with the noise source's off state at the 290 K reference temperature):

F = ENR / (Y − 1), or in decibels: NF(dB) = ENR(dB) − 10 log10(Y − 1)
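A minimal Python sketch of this arithmetic, assuming (as is conventional, though not stated above) that the diode's off state sits at the 290 K reference temperature; the example ENR and Y values are illustrative, not from the source:

import math

def noise_figure_db(enr_db, y_linear):
    """Noise figure via the Y-factor method:
    NF(dB) = ENR(dB) - 10*log10(Y - 1)."""
    return enr_db - 10.0 * math.log10(y_linear - 1.0)

# Example: a 6 dB ENR diode and a measured hot/cold power ratio Y = 2.5:
print(f"NF ~ {noise_figure_db(6.0, 2.5):.2f} dB")  # ~4.24 dB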
Measurements
Noise figure measurements can be made with a noise diode, a power supply for the noise diode, and a spectrum analyser. They can also be made with a specialist noise figure meter. The advantage of the noise figure meter is that it will automatically switch the noise diode on and off, giving a continuous reading of Y; it will also have the correct bandwidths in its receiver to average the received noise in an optimum fashion. However, accurate noise figure measurements are also possible without a noise figure meter, using just the noise diode and a spectrum analyser.
References
Noise (electronics)
Engineering ratios | Excess noise ratio | Mathematics,Engineering | 571 |
50,345 | https://en.wikipedia.org/wiki/Urban%20design | Urban design is an approach to the design of buildings and the spaces between them that focuses on specific design processes and outcomes. In addition to designing and shaping the physical features of towns, cities, and regional spaces, urban design considers 'bigger picture' issues of economic, social and environmental value and social design. The scope of a project can range from a local street or public space to an entire city and surrounding areas. Urban designers connect the fields of architecture, landscape architecture and urban planning to better organize physical space and community environments.
Important focuses of urban design include its historical development, paradigm shifts, its interdisciplinary nature, and issues surrounding its practice.
Theory
Urban design deals with the larger scale of groups of buildings, infrastructure, streets, and public spaces, entire neighbourhoods and districts, and entire cities, with the goal of making urban environments that are equitable, beautiful, performative, and sustainable.
Urban design is an interdisciplinary field that utilizes the procedures and the elements of architecture and other related professions, including landscape design, urban planning, civil engineering, and municipal engineering. It borrows substantive and procedural knowledge from public administration, sociology, law, urban geography, urban economics and other related disciplines from the social and behavioral sciences, as well as from the natural sciences. In more recent times different sub-subfields of urban design have emerged such as strategic urban design, landscape urbanism, water-sensitive urban design, and sustainable urbanism. Urban design demands an understanding of a wide range of subjects from physical geography to social science, and an appreciation for disciplines, such as real estate development, urban economics, political economy, and social theory.
Urban design theory deals primarily with the design and management of public space (i.e. the 'public environment', 'public realm' or 'public domain'), and the way public places are used and experienced. Public space includes the totality of spaces used freely on a day-to-day basis by the general public, such as streets, plazas, parks, and public infrastructure. Some aspects of privately owned spaces, such as building facades or domestic gardens, also contribute to public space and are therefore also considered by urban design theory. Important writers on urban design theory include Christopher Alexander, Peter Calthorpe, Gordon Cullen, Andrés Duany, Jane Jacobs, Jan Gehl, Allan B. Jacobs, Kevin Lynch, Aldo Rossi, Colin Rowe, Robert Venturi, William H. Whyte, Camillo Sitte, Bill Hillier (space syntax), and Elizabeth Plater-Zyberk.
History
Although contemporary professional use of the term 'urban design' dates from the mid-20th century, urban design as such has been practiced throughout history. Ancient examples of carefully planned and designed cities exist in Asia, Africa, Europe, and the Americas, and are particularly well known within Classical Chinese, Roman, and Greek cultures. Specifically, Hippodamus of Miletus, a famous ancient Greek architect, urban planner, and all-around scholar, is often considered the "father of European urban planning" and is the namesake of the "Hippodamian plan", also known as the grid plan of city layout.
European medieval cities are often, and often erroneously, regarded as exemplars of undesigned or 'organic' city development. There are many examples of considered urban design in the Middle Ages. In England, many of the towns listed in the 9th-century Burghal Hidage were designed on a grid; examples include Southampton, Wareham in Dorset, and Wallingford in Oxfordshire, which were rapidly created to provide a defensive network against Danish invaders. Twelfth-century western Europe brought renewed focus on urbanisation as a means of stimulating economic growth and generating revenue. The burgage system dating from that time, and its associated burgage plots, brought a form of self-organising design to medieval towns.
Throughout history, the design of streets and deliberate configuration of public spaces with buildings have reflected contemporaneous social norms or philosophical and religious beliefs. Yet the link between designed urban space and the human mind appears to be bidirectional. Indeed, the reverse impact of urban structure upon human behaviour and upon thought is evidenced by both observational study and historical records. There are clear indications of impact through Renaissance urban design on the thought of Johannes Kepler and Galileo Galilei. Already René Descartes in his Discourse on the Method had attested to the impact Renaissance planned new towns had upon his own thought, and much evidence exists that the Renaissance streetscape was also the perceptual stimulus that had led to the development of coordinate geometry.
Early modern era
The beginnings of modern urban design in Europe are associated with the Renaissance but, especially, with the Age of Enlightenment. Spanish colonial cities were often planned, as were some towns settled by other imperial cultures. These sometimes embodied utopian ambitions as well as aims for functionality and good governance, as with James Oglethorpe's plan for Savannah, Georgia. In the Baroque period, the design approaches developed in French formal gardens such as Versailles were extended into urban development and redevelopment. In this period, when modern professional specializations did not exist, urban design was undertaken by people with skills in areas as diverse as sculpture, architecture, garden design, surveying, astronomy, and military engineering. In the 18th and 19th centuries, urban design was perhaps most closely linked with surveyors, engineers, and architects. The increase in urban populations brought with it problems of epidemic disease, the response to which was a focus on public health, the rise of municipal engineering in the UK, and the inclusion in British legislation of provisions such as minimum street widths in relation to building heights in order to ensure adequate light and ventilation.
Much of Frederick Law Olmsted's work was concerned with urban design, and the newly formed profession of landscape architecture also began to play a significant role in the late 19th century.
Modern urban design
In the 19th century, cities were industrializing and expanding at a tremendous rate. Private businesses largely dictated the pace and style of this development. The expansion created many hardships for the working poor and concern for public health increased. However, the laissez-faire style of government, in fashion for most of the Victorian era, was starting to give way to a New Liberalism. This gave more power to the public. The public wanted the government to provide citizens, especially factory workers, with healthier environments. Around 1900, modern urban design emerged from developing theories on how to mitigate the consequences of the industrial age.
The first modern urban planning theorist was Sir Ebenezer Howard. His ideas, although utopian, were adopted around the world because they were highly practical. He initiated the garden city movement in 1898.
His garden cities were intended to be planned, self-contained communities surrounded by parks. Howard wanted the cities to be proportional with separate areas of residences, industry, and agriculture. Inspired by the Utopian novel Looking Backward and Henry George's work Progress and Poverty, Howard published his book Garden Cities of To-morrow in 1898. His work is an important reference in the history of urban planning. He envisioned the self-sufficient garden city to house 32,000 people on a site of . He planned on a concentric pattern with open spaces, public parks, and six radial boulevards, wide, extending from the center. When it reached full population, Howard wanted another garden city to be developed nearby. He envisaged a cluster of several garden cities as satellites of a central city of 50,000 people, linked by road and rail. His model for a garden city was first created at Letchworth and Welwyn Garden City in Hertfordshire. Howard's movement was extended by Sir Frederic Osborn to regional planning.
20th century
In the early 1900s, urban planning became professionalized. With input from utopian visionaries, civil engineers, and local councilors, new approaches to city design were developed for consideration by decision-makers such as elected officials. In 1899, the Town and Country Planning Association was founded. In 1909, the first academic course on urban planning was offered by the University of Liverpool. Urban planning was first officially embodied in the Housing and Town Planning Act of 1909, which, influenced by Howard's 'garden city', compelled local authorities to introduce systems in which all housing construction conformed to specific building standards. In the United Kingdom, following this Act, surveyors, civil engineers, architects, and lawyers began working together within local authorities. In 1910, Thomas Adams became the first Town Planning Inspector at the Local Government Board and began meeting with practitioners. In 1914, the Town Planning Institute was established. The first urban planning course in America was not established until 1924, at Harvard University. Professionals developed schemes for the development of land, transforming town planning into a new area of expertise.
In the 20th century, urban planning was changed by the automobile industry. Car-oriented design impacted the rise of 'urban design'. City layouts now revolved around roadways and traffic patterns.
In June 1928, the International Congresses of Modern Architecture (CIAM) was founded at the Chateau de la Sarraz in Switzerland, by a group of 28 European architects organized by Le Corbusier, Hélène de Mandrot, and Sigfried Giedion. The CIAM was one of many 20th century manifestos meant to advance the cause of "architecture as a social art".
Postwar
Team X was a group of architects and other invited participants who assembled starting in July 1953 at the 9th Congress of the International Congresses of Modern Architecture (CIAM) and created a schism within CIAM by challenging its doctrinaire approach to urbanism.
In 1956, the term "Urban design" was first used at a series of conferences hosted by Harvard University. The event provided a platform for Harvard's Urban Design program. The program also utilized the writings of famous urban planning thinkers: Gordon Cullen, Jane Jacobs, Kevin Lynch, and Christopher Alexander.
In 1961, Gordon Cullen published The Concise Townscape. He examined the traditional artistic approach to city design of theorists including Camillo Sitte, Barry Parker, and Raymond Unwin. Cullen also created the concept of 'serial vision'. It defined the urban landscape as a series of related spaces.
Also in 1961, Jane Jacobs published The Death and Life of Great American Cities. She critiqued the modernism of CIAM (International Congresses of Modern Architecture). Jacobs also claimed crime rates in publicly owned spaces were rising because of the Modernist approach of 'city in the park'. She argued instead for an 'eyes on the street' approach to town planning through the resurrection of main public space precedents (e.g. streets, squares).
In the same year, Kevin Lynch published The Image of the City. He was seminal to urban design, particularly with regards to the concept of legibility. He reduced urban design theory to five basic elements: paths, districts, edges, nodes, landmarks. He also made the use of mental maps to understand the city popular, rather than the two-dimensional physical master plans of the previous 50 years.
Other notable works:
Architecture of the City by Aldo Rossi (1966)
Learning from Las Vegas by Robert Venturi and Denise Scott Brown (1972)
Collage City by Colin Rowe (1978)
The Next American Metropolis by Peter Calthorpe (1993)
The Social Logic of Space by Bill Hillier and Julienne Hanson (1984)
The popularity of these works resulted in terms that became everyday language in the field of urban planning. Aldo Rossi introduced 'historicism' and 'collective memory' to urban design. Rossi also proposed a 'collage metaphor' to understand the collection of new and old forms within the same urban space. Peter Calthorpe developed a manifesto for sustainable urban living via medium-density living. He also designed a manual for building new settlements in his concept of Transit Oriented Development (TOD). Bill Hillier and Julienne Hanson introduced Space Syntax to predict how movement patterns in cities would contribute to urban vitality, anti-social behaviour, and economic success. 'Sustainability', 'livability', and 'high quality of urban components' also became commonplace in the field.
Current trends
Today, urban design seeks to create sustainable urban environments with long-lasting structures, buildings, and overall livability. Walkable urbanism is another approach to practice that is defined within the Charter of New Urbanism. It aims to reduce environmental impacts by altering the built environment to create smart cities that support sustainable transport. Compact urban neighborhoods encourage residents to drive less. These neighborhoods have significantly lower environmental impacts when compared to sprawling suburbs. To prevent urban sprawl, Circular flow land use management was introduced in Europe to promote sustainable land use patterns.
As a result of the recent New Classical Architecture movement, sustainable construction aims to develop smart growth, walkability, architectural tradition, and classical design. It contrasts with modernist and globally uniform architecture. As early as the 1980s, urban design began to oppose increasingly solitary housing estates and suburban sprawl.
Managed urbanisation aims to make the urbanising process culturally, economically, and environmentally sustainable. As a possible solution to urban sprawl, Frank Reale has proposed Expanding Nodular Development (E.N.D.), a concept that integrates urban design and ecological principles to build smaller rural hubs with high-grade connecting freeways, rather than adding ever more expensive infrastructure, and the resulting congestion, to existing big cities.
Paradigm shifts
Throughout the young existence of the Urban Design discipline, many paradigm shifts have occurred that have affected the trajectory of the field regarding theory and practice. These paradigm shifts cover multiple subject areas outside of the traditional design disciplines.
Team 10 - The first major paradigm shift was the formation of Team 10 out of CIAM, the Congrès Internationaux d'Architecture Moderne. Its members believed that urban design should introduce ideas of 'human association', pivoting the design focus from the individual patron to the collective urban population.
The Brundtland Report and Silent Spring - Another paradigm shift was the publication of the Brundtland Report and the book Silent Spring by Rachel Carson. These writings introduced the idea that human settlements could have detrimental impacts on ecological processes, as well as human health, which spurred a new era of environmental awareness in the field.
The Planner's Triangle - The Planner's Triangle, created by Scott Campbell, emphasized three main conflicts in the planning process. This diagram exposed the complex relationships between Economic Development, Environmental Protection, and Equity and Social Justice. For the first time, the concept of Equity and Social Justice was considered as equally important as Economic Development and Environmental Protection within the design process.
Death of Modernism (Demolition of Pruitt Igoe) - Pruitt Igoe was a spatial symbol and representation of Modernist theory regarding social housing. In its failure and demolition, these theories were put into question and many within the design field considered the era of Modernism to be dead.
Neoliberalism & the election of Reagan - The election of President Reagan and the rise of Neoliberalism affected the Urban Design discipline because it shifted the planning process to emphasize capitalistic gains and spatial privatization. Inspired by the trickle-down approach of Reaganomics, it was believed that the benefits of a capitalist emphasis within design would positively impact everyone. Conversely, this led to exclusionary design practices and to what many consider as "the death of public space".
Right to the City - The spatial and political battle over citizens' rights to the city has been an ongoing one. David Harvey, along with Don Mitchell and Edward Soja, discussed the right to the city as a matter of critically rethinking how spatial matters have historically been determined. This change of thinking occurred in three forms: ontological, sociological, and the combination of the two in a socio-spatial dialectic. Together, the aim shifted toward being able to measure what matters in a socio-spatial context.
Black Lives Matter (Ferguson) - The Black Lives Matter movement challenged design thinking because it emphasized the injustices and inequities suffered by people of color in urban space, as well as emphasized their right to public space without discrimination and brutality. It claims that minority groups lack certain spatial privileges and that this deficiency can result in matters of life and death. In order to reach an equitable state of urbanism, there needs to be equal identification of socio-economic lives within our urbanscapes.
New approaches
There have been many different theories and approaches applied to the practice of urban design.
New Urbanism is an approach that began in the 1980s as a place-making initiative to combat suburban sprawl. Its goal is to increase density by creating compact and complete towns and neighborhoods. The 10 principles of new urbanism are walkability, connectivity, mixed-use and diversity, mixed housing, quality architecture and urban design, traditional neighborhood structure, increased density, smart transportation, sustainability, and quality of life. New urbanism and the developments that it has created are sources of debates within the discipline, primarily with the landscape urbanist approach but also due to its reproduction of idyllic architectural tropes that do not respond to the context. Andres Duany, Elizabeth Plater-Zyberk, Peter Calthorpe, and Jeff Speck are all strongly associated with New Urbanism and its evolution over the years.
Landscape Urbanism is a theory that first surfaced in the 1990s, arguing that the city is constructed of interconnected and ecologically rich horizontal field conditions, rather than the arrangement of objects and buildings. Charles Waldheim, Mohsen Mostafavi, James Corner, and Richard Weller are closely associated with this theory. Landscape urbanism theorises sites, territories, ecosystems, networks, and infrastructures through landscape practice according to Corner, while applying a dynamic concept to cities as ecosystems that grow, shrink or change phases of development according to Waldheim.
Everyday Urbanism is a concept introduced by Margaret Crawford and influenced by Henri Lefebvre that describes the everyday lived experience shared by urban residents including commuting, working, relaxing, moving through city streets and sidewalks, shopping, buying, eating food, and running errands. Everyday urbanism is not concerned with aesthetic value. Instead, it introduces the idea of eliminating the distance between experts and ordinary users and forces designers and planners to contemplate a 'shift of power' and address social life from a direct and ordinary perspective.
Tactical Urbanism (also known as DIY Urbanism, Planning-by-Doing, Urban Acupuncture, or Urban Prototyping) is a city, organizational, or citizen-led approach to neighborhood-building that uses short-term, low-cost, and scalable interventions and policies to catalyze long term change.
Top-up Urbanism is the theory and implementation of two techniques in urban design: top-down and bottom-up. Top-down urbanism is when the design is implemented from the top of the hierarchy - normally the government or planning department. Bottom-up or grassroots urbanism begins with the people or the bottom of the hierarchy. Top-up means that both methods are used together to make a more participatory design, so it is sure to be comprehensive and well regarded in order to be as successful as possible.
Infrastructural Urbanism is the study of how the major investments that go into making infrastructural systems can be leveraged to be more sustainable for communities. Instead of the systems being solely about efficiency in both cost and production, infrastructural urbanism strives to utilize these investments to be more equitable for social and environmental issues as well. Linda Samuels is a designer investigating how to accomplish this change in infrastructure in what she calls "next-generation infrastructure" which is "multifunctional; public; visible; socially productive; locally specific, flexible, and adaptable; sensitive to the eco-economy; composed of design prototypes or demonstration projects; symbiotic; technologically smart; and developed collaboratively across disciplines and agencies".
Sustainable Urbanism is the study from the 1990s of how a community can be beneficial for the ecosystem, the people, and the economy for which it is associated. It is based on Scott Campbell's planner's triangle which tries to find the balance between economy, equity, and the environment. Its main concept is to try and make cities as self-sufficient as possible while not damaging the ecosystem around them, today with an increased focus on climate stability. A key designer working with sustainable urbanism is Douglas Farr.
Feminist Urbanism is the study and critique of how the built environment affects genders differently because of patriarchal social and political structures in society. Typically, the people at the table making design decisions are men, so their conception about public space and the built environment relates to their life perspectives and experiences, which do not reflect the same experiences of women or children. Dolores Hayden is a scholar who has researched this topic from 1980 to the present day. Hayden's writing says, “when women, men, and children of all classes and races can identify the public domain as the place where they feel most comfortable as citizens, Americans will finally have homelike urban space.”
Educational Urbanism is an emerging discipline at the crossroads of urban planning, educational planning, and pedagogy. The approach holds that economic activities, the need for new skills in the workplace, and the spatial configuration of the workplace depend on a spatial reorientation in the design of educational spaces and on the urban dimension of educational planning.
Black Urbanism is an approach in which black communities are active creators, innovators, and authors of the process of designing and creating the neighborhoods and spaces of the metropolitan areas they have done so much to help revive over the past half-century. The goal is not to build black cities for black people but to explore and develop the creative energy that exists in so-called black areas: that has the potential to contribute to the sustainable development of the whole city.
Debates in urbanism
Underlying the practice of urban design are the many theories about how to best design the city. Each theory makes a unique claim about how to effectively design thriving, sustainable urban environments. Debates over the efficacy of these approaches fill the urban design discourse. Landscape Urbanism and New Urbanism are commonly debated as distinct approaches to addressing suburban sprawl. While Landscape Urbanism proposes landscape as the basic building block of the city and embraces horizontality, flexibility, and adaptability, New Urbanism offers the neighborhood as the basic building block of the city and argues for increased density, mixed uses, and walkability. Opponents of Landscape Urbanism point out that most of its projects are urban parks, and as such, its application is limited. Opponents of New Urbanism claim that its preoccupation with traditional neighborhood structures is nostalgic, unimaginative, and culturally problematic. Everyday Urbanism argues for grassroots neighborhood improvements rather than master-planned, top-down interventions. Each theory elevates the roles of certain professions in the urban design process, further fueling the debate. In practice, urban designers often apply principles from many urban design theories. Emerging from the conversation is a universal acknowledgement of the importance of increased interdisciplinary collaboration in designing the modern city.
Urban design as an integrative profession
Urban designers work with architects, landscape architects, transportation engineers, urban planners, and industrial designers to reshape the city. Cooperation with public agencies, authorities and the interests of nearby property owners is necessary to manage public spaces. Users often compete over the spaces and negotiate across a variety of spheres. Input is frequently needed from a wide range of stakeholders. This can lead to different levels of participation as defined in Arnstein's Ladder of Citizen Participation.
While there are some professionals who identify themselves specifically as urban designers, a majority have backgrounds in urban planning, architecture, or landscape architecture. Many collegiate programs incorporate urban design theory and design subjects into their curricula. There is an increasing number of university programs offering degrees in urban design at the post-graduate level.
Urban design considers:
Pedestrian zones
Incorporation of nature within a city
Aesthetics
Urban structure – arrangement and relation of business and people
Urban typology, density, and sustainability – spatial types and morphologies related to the intensity of use, consumption of resources, production, and maintenance of viable communities
Accessibility – safe and easy transportation
Legibility and wayfinding – accessible information about travel and destinations
Animation – Designing places to stimulate public activity
Function and fit – places support their varied intended uses
Complementary mixed uses – Locating activities to allow constructive interaction between them
Character and meaning – Recognizing differences between places
Order and incident – Balancing consistency and variety in the urban environment
Continuity and change – Locating people in time and place, respecting heritage and contemporary culture
Civil society – people are free to interact as civic equals, important for building social capital
Participation/engagement – including people in the decision-making process can be done at many different scales.
Relationships with other related disciplines
Urban design was originally thought of as separate from architecture and urban planning, having developed in part out of engineering. In Anglo-Saxon countries, it is often treated as a branch of architecture, urban planning, or landscape architecture, limited to the construction of the urban physical environment. However, urban design also integrates social, cultural, economic, and political perspectives. It focuses not only on spaces and groups of buildings, but looks at the whole city from a broader and more holistic perspective in order to shape a better living environment. Compared with architecture, urban design operates at much larger spatial and temporal scales, dealing with neighborhoods, communities, and even entire cities.
Urban design education
The University of Liverpool's Department of Civic Design, founded in 1909, was the first urban design school in the world. Following the 1956 Urban Design conference, Harvard University established the first graduate program with urban design in its title, the Master of Architecture in Urban Design, although as a subject taught in universities its history in Europe is far older. Urban design programs explore the built environment from diverse disciplinary backgrounds and points of view, typically combining interdisciplinary studios, lecture courses, seminars, and independent study. Soon after, in 1961, Washington University in St. Louis founded its Master of Urban Design program. Today, some twenty urban design programs exist in the United States:
Andrews University, Berrien Springs, MI
Clemson University - Charleston, SC
Columbia Graduate School of Architecture, Planning and Preservation - New York, NY
City College of New York - New York, NY
Estopinal College of Architecture and Planning at Ball State University - Muncie, IN
Georgia Institute of Technology College of Design - Atlanta, GA
Harvard Graduate School of Design - Cambridge, MA
Iowa State University - Ames, IA
New York Institute of Technology - New York, NY
Notre Dame School of Architecture - Notre Dame, IN
Pratt Institute - Brooklyn, NY
Sam Fox School of Design & Visual Arts at Washington University in St. Louis - St. Louis, MO
Savannah College of Art and Design - Savannah, GA
Stuart Weitzman School of Design at University of Pennsylvania - Philadelphia, PA
Taubman College of Architecture and Urban Planning at University of Michigan - Ann Arbor, MI
University of California, Berkeley - Berkeley, CA
University of Colorado Denver - Denver, CO
University of Maryland - College Park, MD
University of Miami - Miami, FL
University of Texas at Austin School of Architecture - Austin, TX
University of North Carolina at Charlotte - Charlotte, NC
In the United Kingdom, Master's programmes in urban design are offered at the University of Manchester, the University of Sheffield, Cardiff University, and London South Bank University, and programmes in city design at the Royal College of Art and Queen's University Belfast.
Issues
The field of urban design holds enormous potential for helping us address today's biggest challenges: an expanding population, mass urbanization, rising inequality, and climate change. In its practice as well as its theories, urban design attempts to tackle these pressing issues. As climate change progresses, urban design can mitigate the results of flooding, temperature changes, and increasingly detrimental storm impacts through a mindset of sustainability and resilience. In doing so, the urban design discipline attempts to create environments that are constructed with longevity in mind, such as zero-carbon cities. Cities today must be designed to minimize resource consumption, waste generation, and pollution while also withstanding the unprecedented impacts of climate change. To be truly resilient, our cities need to be able to not just bounce back from a catastrophic climate event but to bounce forward to an improved state.
Another issue in this field is the common assumption that there were no mothers of planning and urban design. This is not the case: many women made proactive contributions to the field, including Mary Kingsbury Simkhovitch, Florence Kelley, and Lillian Wald, to name a few who were prominent leaders in the City Social movement. The City Social was a movement that emerged between the better-known City Practical and City Beautiful movements, and it was mainly concerned with economic and social equity in urban issues.
Justice is and will always be a key issue in urban design. As previously mentioned, past urban strategies have caused injustices within communities incapable of being remedied via simple means. As urban designers tackle the issue of justice, they often are required to look at the injustices of the past and must be careful not to overlook the nuances of race, place, and socioeconomic status in their design efforts. This includes ensuring reasonable access to basic services, transportation, and fighting against gentrification and the commodification of space for economic gain. Organizations such as the Divided Cities Initiatives at Washington University in St. Louis and the Just City Lab at Harvard work on promoting justice in urban design.
Until the 1970s, the design of towns and cities took little account of the needs of people with disabilities. At that time, disabled people began to form movements demanding recognition of their potential contribution if social obstacles were removed. Disabled people challenged the 'medical model' of disability which saw physical and mental problems as an individual 'tragedy' and people with disabilities as 'brave' for enduring them. They proposed instead a 'social model' which said that barriers to disabled people result from the design of the built environment and attitudes of able-bodied people. 'Access Groups' were established, composed of people with disabilities who audited their local areas, checked planning applications, and made representations for improvements. The new profession of 'access officer' was established around that time to produce guidelines based on the recommendations of access groups and to oversee adaptations to existing buildings as well as to check on the accessibility of new proposals. Many local authorities now employ access officers who are regulated by the Access Association. A new chapter of the Building Regulations (Part M) was introduced in 1992. Although it was beneficial to have legislation on this issue, the requirements were fairly minimal, but they continue to be improved with ongoing amendments. The Disability Discrimination Act 1995 continues to raise awareness and enforce action on disability issues in the urban environment.
The issue of walkability has gained prominence in recent years, driven not only by the aforementioned concerns over climate change but also by the health outcomes of residents. Car-centric urban design has an invariably negative effect on such outcomes. With proximity to internal combustion engines, residents tend to suffer from dangerous levels of air pollution, which lead to cardiovascular complications ranging from the acute, such as hypertension and alterations in heart rate, to the chronic, such as the outright development of atherosclerosis. More people die from air pollution each year than from car accidents. This issue has been used to fuel movements for alternative forms of long- to mid-range transportation such as trains and bicycles, with walking as the primary means of short-range travel. This would bring benefits from two simultaneous avenues: the physical activity from walking, and reduced exposure to air pollutants (sulfur dioxide, nitrogen dioxide, particulate matter, and so on), which has been shown to alleviate and lower the risk of many maladies such as diabetes, hypertension and cardiovascular disease. Physical activity levels from walking are closely related to the abundance of open public spaces, commercial shops, and greenery, among other features. These attributes have also been stated to contribute to stronger social and emotional health, as open public spaces facilitate more social interaction within communities. This issue is most prevalent in the United States, where the rise of neoliberalism directly and intentionally fostered car-centric infrastructure.
See also
Blue space
Complete streets
Continuous productive urban landscape
Crime prevention through environmental design
Cyclability
Neighbourhood character
New Urbanism
Permeability (spatial and transport planning)
Sustainable urbanism
Urban density
Urban forest
Urban heat island
Urban green space
Urban planning
Urban vitality
Urbanism
Walkability
References
Further reading
Carmona, Matthew, Public Places Urban Spaces: The Dimensions of Urban Design, Routledge, London and New York.
Carmona, Matthew, and Tiesdell, Steve, editors, Urban Design Reader, Architectural Press of Elsevier Press, Amsterdam, Boston and other cities, 2007.
Larice, Michael, and MacDonald, Elizabeth, editors, The Urban Design Reader, Routledge, New York and London, 2007.
External links
Cities of the Future: overview of important urban design elements
Landscape
Landscape architecture | Urban design | Engineering | 6,767 |
75,510,166 | https://en.wikipedia.org/wiki/UBXD8 | UBXD8 is a protein in the Ubiquitin regulatory X (UBX) domain-containing protein family. This family comprises many eukaryotic proteins containing a domain with amino acid sequence similarity to the small protein modifier ubiquitin. UBXD8 interacts with p97, a protein that is essential for the proteasomal degradation of membrane proteins associated with the endoplasmic reticulum (ER). Alongside the UBX domain, UBXD8 possesses a UBA domain that can interact with polyubiquitin chains, as well as a UAS domain of undetermined function. The protein acts as a sensor that detects long-chain unsaturated fatty acids (FAs) and has a vital function in regulating the balance of fatty acids within cells to maintain cellular homeostasis.
Influence of UBXD8 on lipid droplets
Ubxd8 inserts into cell membranes through a hydrophobic hairpin loop, from which it senses unsaturated fatty acids (FAs) and controls the production of triglycerides (TGs). Ubxd8 inhibits TG synthesis by blocking the conversion of diacylglycerols (DAGs) to TGs; this inhibition is alleviated when unsaturated fatty acids are abundant, because unsaturated FAs alter the structure of Ubxd8 and thereby release the brake on TG synthesis. Ubxd8 contributes to maintaining cellular energy balance by attracting p97/VCP to lipid droplets (LDs) and suppressing the function of adipose triglyceride lipase (ATGL), the enzyme that controls the rate of triacylglycerol breakdown. Moreover, VCP brings UBXD8 to mitochondria, where it participates in the regulation of mitochondrial protein quality. Disruption of the UBXD8 gene hinders the breakdown of the pro-survival protein Mcl1 and excessively stimulates the process of mitophagy. To better understand how lipotoxicity is caused by saturated fatty acids, it might be helpful to learn how Ubxd8 works with unsaturated fatty acids.
Long-chain unsaturated fatty acids inhibit the interaction between Ubxd8 and Insig-1 by obstructing the binding between these two proteins, hence impeding the extraction of Insig-1 from the membrane. This inhibition is independent of the ubiquitination of Insig-1 and occurs after ubiquitination. Without affecting its ubiquitination, unsaturated FAs stabilize Insig-1, and they improve the capacity of sterols to inhibit the proteolytic activation of SREBP-1. The UAS domain of Ubxd8 polymerizes when it interacts with long-chain unsaturated FAs, which is essential for this process; a positively charged surface area of the UAS domain is required for the polymerization to be facilitated. Mutations in this specific region hinder the capacity of long-chain unsaturated FAs to stimulate oligomerization of Ubxd8.
References
Proteins | UBXD8 | Chemistry | 717 |
61,186,829 | https://en.wikipedia.org/wiki/5G%20network%20slicing | 5G network slicing is a network architecture that enables the multiplexing of virtualized and independent logical networks on the same physical network infrastructure. Each network slice is an isolated end-to-end network tailored to fulfill diverse requirements requested by a particular application.
For this reason, this technology assumes a central role to support 5G mobile networks that are designed to efficiently embrace a plethora of services with very different service level requirements (SLR). The realization of this service-oriented view of the network leverages on the concepts of software-defined networking (SDN) and network function virtualization (NFV) that allow the implementation of flexible and scalable network slices on top of a common network infrastructure.
From a business model perspective, each network slice is administrated by a mobile virtual network operator (MVNO). The infrastructure provider (the owner of the telecommunication infrastructure) leases its physical resources to the MVNOs that share the underlying physical network. According to the availability of the assigned resources, a MVNO can autonomously deploy multiple network slices that are customized to the various applications provided to its own users.
History
The history of network slicing can be traced back to the late 1980s with the introduction of the concept of a "slice" in the networking field. Overlay networks provided the first form of network slicing, since heterogeneous network resources were combined to create virtual networks over a common infrastructure. However, they lacked a mechanism that could enable their programmability.
In the early 2000s, PlanetLab introduced a virtualization framework that allowed groups of users to program network functions in order to obtain isolated and application-specific slices. The advent of SDN technologies in 2009 further extended the programmability capabilities via open interfaces that enabled the realization of fully configurable and scalable network slices.
In the context of mobile networks, network slicing evolved from the concept of RAN sharing that was initially introduced in LTE standard. Examples of such technology are multi-operator radio access networks (MORAN) and multi-operator core networks (MOCN), which allow network operators to share common LTE resources within the same radio access network (RAN).
Key concepts
The "one-size-fits-all" network paradigm employed in the past mobile networks (2G, 3G and 4G) is no longer suited to efficiently address a market model composed of very different applications like machine-type communication, ultra reliable low latency communication and enhanced mobile broadband content delivery.
Network slicing emerges as an essential technique in 5G networks to accommodate such different and possibly contrasting quality of service (QoS) requirements exploiting a single physical network infrastructure.
The basic idea of network slicing is to "slice" the original network architecture into multiple logical and independent networks that are configured to effectively meet the various service requirements. To realize this concept quantitatively, several techniques are employed (a schematic sketch follows the list):
Network functions: they express elementary network functionalities that are used as "building blocks" to create every network slice.
Virtualization: it provides an abstract representation of the physical resources under a unified and homogeneous scheme. In addition, it enables a scalable slice deployment relying on NFV that allows the decoupling of each network function instance from the network hardware it runs on.
Orchestration: it is a process that allows coordination of all the different network components that are involved in the life-cycle of each network slice. In this context, SDN is employed to enable a dynamic and flexible slice configuration.
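As a toy illustration of these three concepts, the following Python sketch models network functions as building blocks, slices as chained functions with SLA requirements, and an orchestrator that deploys slices over a shared resource pool. All class names, capacities, and SLA fields are illustrative assumptions for the example, not part of any 3GPP specification.

```python
from dataclasses import dataclass, field

@dataclass
class NetworkFunction:
    """An elementary building block (e.g. a firewall, scheduler or gateway)."""
    name: str
    cpu: int                     # virtual resources the function consumes

@dataclass
class Slice:
    """A logical network: a chain of network functions plus SLA requirements."""
    tenant: str                  # the MVNO that administers the slice
    sla: dict                    # e.g. {"latency_ms": 10, "mbps": 100}
    chain: list = field(default_factory=list)

class Orchestrator:
    """Coordinates the slice life-cycle over a shared, virtualized resource pool."""
    def __init__(self, cpu_capacity):
        self.cpu_capacity = cpu_capacity
        self.slices = []

    def used_cpu(self):
        return sum(nf.cpu for s in self.slices for nf in s.chain)

    def deploy(self, s):
        # admission control: a new slice must not impact already-deployed ones
        if self.used_cpu() + sum(nf.cpu for nf in s.chain) > self.cpu_capacity:
            raise RuntimeError("insufficient infrastructure resources")
        self.slices.append(s)

orch = Orchestrator(cpu_capacity=16)
orch.deploy(Slice("MVNO-A", {"latency_ms": 50, "mbps": 100},
                  [NetworkFunction("UPF", 4), NetworkFunction("video-cache", 2)]))
orch.deploy(Slice("MVNO-B", {"latency_ms": 500, "mbps": 1},
                  [NetworkFunction("UPF", 1)]))
print(len(orch.slices), "slices multiplexed on the same infrastructure")
```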
Impact and applications
In commercial terms, network slicing allows a mobile operator to create specific virtual networks that cater to particular clients and use cases. Certain applications - such as mobile broadband, machine-to-machine communications (e.g. in manufacturing or logistics), or smart cars - will benefit from leveraging different aspects of 5G technology. One might require higher speeds, another low latency, and yet another access to edge computing resources. By creating separate slices that prioritise specific resources a 5G operator can offer tailored solutions to particular industries. Some sources insist this will revolutionise industries like marketing, augmented reality, or mobile gaming, while others are more cautious, pointing to unevenness in network coverage and poor reach of advantages beyond increased speed.
Slicing will be very useful to MVNOs, as different use cases can be supported in separate layers with different parameters: an OTT-focused MVNO could request low latency and high speed for video streaming, while telemetry operations could run with a lower speed parameter, and so on.
Slicing can also enhance service continuity via improved roaming across networks, by creating a virtual network running on physical infrastructure that spans multiple local or national networks; or by allowing a host network to create an optimised virtual network which replicates the one offered by a roaming device's home network.
Architecture overview
Although there are different proposals of network slice architectures, it is possible to define a general architecture that maps the common elements of each solution into a general and unified framework. From a high-level perspective, the network slicing architecture can be considered as composed of two main blocks, one dedicated to the actual slice implementation and the other dedicated to the slice management and configuration. The first block is designed as a multi-tier architecture composed of three layers (service layer, network function layer, infrastructure layer), where each one contributes to the slice definition and deployment with distinct tasks. The second block is designed as a centralized network entity, generically denoted as the network slice controller, that monitors and manages the functionalities between the three layers in order to efficiently coordinate the coexistence of multiple slices.
Service layer
The service layer interfaces directly with the network business entities (e.g. MVNOs and 3rd party service providers) that share the underlying physical network and it provides a unified vision of the service requirements. Each service is formally represented as service instance, which embeds all the network characteristics in the form of SLA requirements that are expected to be fully satisfied by a suitable slice creation.
Network function layer
The network function layer is in charge of the creation of each network slice according to service instance requests coming from the upper layer. It is composed of a set of network functions that embody well-defined behaviors and interfaces. Multiple network functions are placed over the virtual network infrastructure and chained together to create an end-to-end network slice instance that reflects the network characteristics requested by the service. The configuration of the network functions is performed by means of a set of network operations that allow management of their full lifecycle (from their placement when a slice is created to their de-allocation when the function provided is no longer needed).
To increase resource usage efficiency, the same network function can be simultaneously shared by different slices at the cost of an increase in the complexity of operations management. Conversely, a one-to-one mapping between each network function and each slice eases the configuration procedures, but can lead to poor and inefficient resource usage.
Infrastructure layer
The infrastructure layer represents the actual physical network topology (radio access network, transport network and core network) upon which every network slice is multiplexed and it provides the physical network resources to host the several network functions composing each slice.
The network domain of the available resources includes a heterogeneous set of infrastructure components like data centers (storage and computation capacity resources), devices enabling network connectivity such as routers (networking resources) and base stations (radio bandwidth resources).
Network slice controller
The network slice controller is defined as a network orchestrator, which interfaces with the various functionalities performed by each layer to coherently manage each slice request. The benefit of such a network element is that it enables an efficient and flexible slice creation that can be reconfigured during its life-cycle. Operationally, the network slice controller oversees several tasks that provide more effective coordination between the aforementioned layers:
End-to-end service management: mapping of the various service instances expressed in terms of SLA requirements with suitable network functions capable of satisfying the service constraints.
Virtual resources definition: virtualization of the physical network resources in order to simplify the resources management operations performed to allocate network functions.
Slice life-cycle management: slice performance monitoring across all the three layers in order to dynamically reconfigure each slice to accommodate possible SLA requirements modifications.
Due to the complexity of the performed tasks, which address different purposes, the network slice controller can be composed of multiple orchestrators that independently manage a subset of functionalities of each layer. To fulfill the service requirements, the various orchestration entities need to coordinate with each other by exchanging high-level information about the state of the operations involved in the slice creation and deployment.
Slice isolation
Slice isolation is an important requirement that allows enforcing the core concept of network slicing, namely the simultaneous coexistence of multiple slices sharing the same infrastructure. This property is achieved by imposing that each slice's performance must not have any impact on any other slice's performance. This design choice enhances the network slice architecture in two main aspects:
Slice security: cyber-attacks or faults occurrences affect only the target slice and have limited impact on the life-cycle of other existing slices.
Slice privacy: private information related to each slice (e.g. user statistics, MVNO business model) are not shared among other slices.
Guaranteeing QoS
Slicing has become an important part of 5G networks, but guaranteeing QoS must not be forgotten. Some studies have demonstrated that formulating the QoS-aware allocation problem as a stochastic optimization problem makes it possible to maximize the average throughput of the access point (AP) while satisfying the QoS-related constraints.
Monetizing 5G network slicing
Monetizing 5G services faster is one of the topics that interests network operators the most because the costs of building and maintaining 5G networks are high, and it's difficult to predict the demand for 5G services. 5G network slicing is one of the effective ways to offer customized services for different industries such as manufacturing, transportation, and healthcare. Combined with AIOps, ML/AI-driven automation and 5G lifecycle optimization, it can reduce OpEx and increase revenues for network operators.
5G core network slicing
In the 3GPP 5G core architecture, the user plane and control plane functions are separated. Control plane capabilities, for instance session management, access authentication, policy management, and user data storage, are independent of the user plane functionality. The user plane handles packet forwarding, encapsulation or de-capsulation, and associated transport-level specifics. This separation allows the user plane functions to be distributed close to the edge of network slices (e.g., so as to reduce latency) and to be independent of the control plane.
The main 5G core network entities are the Authentication server function (AUSF), Unstructured data storage network function (UDSF), Network exposure function (NEF), NF repository function (NRF), Policy control function (PCF), Unified data management (UDM), Network Slice Selection Function (NSSF), Communication Service Management Function (CSMF), AMF, SMF, and UPF. The AMF (as a function of the CP) controls UEs that have been authenticated to use the services of the operator and manages the mobility of the UEs across the gNBs. The SMF (again part of the CP) manages the sessions of UEs, while the AMF transmits the session management messages between the UEs and the SMF. The UPF (as part of the UP) performs the processing and forwarding of the user data. The NSSF (as a function of the CP) is responsible for the management and orchestration of network slices. The CSMF (as a function of the CP) translates the requirements of services into requirements relating to network slices.
5G core network functions can be sliced to support specific services for different UEs. Thanks to the modular nature of the 5G core, its network functions can be split and shared between different network slices to reduce management complexity. In general, 5G core network slicing can be performed in two ways. One option is to implement dedicated core network functions per network slice: each network slice has a set of completely dedicated core network functions (e.g., AUSF, AMF, SMF, and UDM), and the UEs can access various services from network slices and different core networks. Alternatively, some control plane functions can be shared between the network slices while others, such as user plane functions, are slice-specific (e.g., UPF). The AMF is usually shared by several network slices, while the SMF and UPF are usually dedicated to specific network slices. Sharing the AMF between different network slices reduces the mobility-management signaling when a UE uses the services of different network slices simultaneously: for example, UE location management and the control signaling between the UE and the old AMF are reduced when it connects to the new AMF of another network slice. Also, the UDM and NSSF are typically shared by all network slices to reduce the management complexity of network slices.
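A hedged sketch of the second (shared control plane) style described above: one AMF, UDM, and NSSF instance serves every slice, while each slice keeps its own SMF and UPF. The identifiers and slice names are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CoreFunction:
    kind: str     # "AMF", "SMF", "UPF", ...
    ident: str

# control-plane functions shared by every slice
shared = {CoreFunction("AMF", "amf-0"),
          CoreFunction("UDM", "udm-0"),
          CoreFunction("NSSF", "nssf-0")}

# per-slice dedicated functions: each slice has its own SMF and UPF
slices = {
    "embb":  {CoreFunction("SMF", "smf-embb"),  CoreFunction("UPF", "upf-embb")},
    "urllc": {CoreFunction("SMF", "smf-urllc"), CoreFunction("UPF", "upf-urllc")},
}

def functions_for(slice_id):
    """All core functions a UE registered on this slice traverses."""
    return shared | slices[slice_id]

for sid in slices:
    print(sid, "->", sorted(f.ident for f in functions_for(sid)))
# Both slices resolve to the same amf-0 instance, so a UE using both slices
# at once keeps a single mobility-management anchor.
```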
Network slicing security
The emergence of network slicing also exposes novel security and privacy challenges, primarily related to aspects such as network slicing life-cycle security, inter-slice security, intra-slice security, slice broker security, zero-touch network and management security, and blockchain security. Therefore, enhancing the security, privacy, and trust of network slicing has become a key research area toward realizing the true capabilities of 5G. Various security solutions are proposed for resolving the security threats, challenges, and issues of network slicing. These solutions include artificial intelligence based solutions, security orchestration, blockchain based solutions, Security Service Level Agreement (SSLA) and policy based solutions, security monitoring based solutions, slice isolation, security-by-design and privacy-by-design, and offering security as a service.
See also
APN
NGAP
5G
Network virtualization
Software-defined networking
Network orchestration
Network service
5G NR frequency bands
References
Network architecture
5G (telecommunication) | 5G network slicing | Engineering | 2,874 |
11,370,002 | https://en.wikipedia.org/wiki/Interleukin-1%20receptor%20antagonist | The interleukin-1 receptor antagonist (IL-1RA) is a protein that in humans is encoded by the IL1RN gene.
IL-1RA was initially called the IL-1 inhibitor and was discovered separately in 1984 by two independent laboratories. IL-1RA is an agent that binds non-productively to the cell surface interleukin-1 receptor (IL-1R), the same receptor that binds the interleukin 1 (IL-1) family of cytokines, preventing IL-1 from sending a signal to that cell.
Function
IL-1RA is a member of the interleukin 1 cytokine family. IL-1RA is secreted by various types of cells including immune cells, epithelial cells, and adipocytes, and is a natural inhibitor of the pro-inflammatory effect of IL1β. This protein inhibits the activities of interleukin 1, alpha (IL1A) and interleukin 1, beta (IL1B), and modulates a variety of interleukin 1 related immune and inflammatory responses. This gene and five other closely related cytokine genes form a gene cluster spanning approximately 400 kb on chromosome 2. Four alternatively spliced transcript variants encoding distinct isoforms have been reported.
Clinical significance
A polymorphism of this gene is reported to be associated with increased risk of osteoporotic fractures and gastric cancer.
Biallelic deleterious mutations in the IL1RN gene result in a rare autoinflammatory disease called deficiency of the interleukin-1–receptor antagonist (DIRA). Variants of the IL1RN gene are also associated with risk of schizophrenia. Elevated levels of IL-1RA have been found in serum of schizophrenia patients.
In treatment of temporomandibular joint osteoarthritis (TMJOA) the messenger RNA (mRNA) of IL-1RA can be used. The IL-1RA mRNA reduces pain and joint inflammation by blocking inflammatory cascade signals that lead to osteoarthritis progression.
A recombinant, slightly modified version of interleukin 1 receptor antagonist called anakinra is used in the treatment of rheumatoid arthritis, an autoimmune disease in which IL-1 plays a key role. Anakinra differs from native human IL-1RA in that it has the addition of a single methionine residue at its amino terminus.
The cytoplasmic and secreted isoforms of IL-1RA can suppress tumors such as squamous cell carcinoma. The cytoplasmic isoform can protect epithelial cells from environmental factors and compete with IL1A in binding with receptors, preventing activation. The secreted isoform regulates IL1B in tumor microenvironments by inhibiting IL1B-induced glycolysis and proliferation of tumor cells, thus preventing the movement of tumor cells.
Use in horses
Interleukin 1 receptor antagonist is used in horses for the treatment of equine lameness secondary to joint and soft-tissue injury. IL-1RA obstructs the IL1B inflammatory cascade rather than helping to restore damaged tissue.
References
Further reading
External links
Cytokines | Interleukin-1 receptor antagonist | Chemistry | 672 |
5,758,871 | https://en.wikipedia.org/wiki/Protoporphyrinogen%20IX | Protoporphyrinogen IX is an organic chemical compound which is produced along the synthesis of porphyrins, a class of critical biochemicals that include hemoglobin and chlorophyll. It is a direct precursor of protoporphyrin IX.
The compound is a porphyrinogen, meaning that it has a non-aromatic hexahydroporphine core, which will be oxidized to a porphine core in later stages of the heme synthesis. Like most porphyrinogens, it is colorless.
Biosynthesis
The compound is synthesized in most organisms from coproporphyrinogen III by the enzyme coproporphyrinogen oxidase:
coproporphyrinogen III + O2 → protoporphyrinogen IX + 2 CO2 + 2 H2O
The process entails conversion of two of four propionic acid groups to vinyl groups. In coproporphyrinogen III, the substituents on the pyrrole rings have the arrangement MP-MP-MP-PM, where M and P are methyl and propionic acid, respectively. In protoporphyrinogen IX, the sequence becomes MV-MV-MP-PM, where V is vinyl.
By the action of protoporphyrinogen oxidase, protoporphyrinogen IX is later converted into protoporphyrin IX, the first colored tetrapyrrole in the biosynthesis of hemes.
References
See also
Protoporphyrinogen oxidase
Macrocycles
Tetrapyrroles | Protoporphyrinogen IX | Chemistry | 300 |
30,662 | https://en.wikipedia.org/wiki/Triangulum%20Australe | Triangulum Australe is a small constellation in the far Southern Celestial Hemisphere. Its name is Latin for "the southern triangle", which distinguishes it from Triangulum in the northern sky and is derived from the acute, almost equilateral pattern of its three brightest stars. It was first depicted on a celestial globe as Triangulus Antarcticus by Petrus Plancius in 1589, and later with more accuracy and its current name by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted and gave the brighter stars their Bayer designations in 1756.
Alpha Trianguli Australis, known as Atria, is a second-magnitude orange giant and the brightest star in the constellation, as well as the 42nd-brightest star in the night sky. Completing the triangle are the two white main sequence stars Beta and Gamma Trianguli Australis. Although the constellation lies in the Milky Way and contains many stars, deep-sky objects are not prominent. Notable features include the open cluster NGC 6025 and planetary nebula NGC 5979.
The Great Attractor, the gravitational center of the Laniakea Supercluster which includes the Milky Way galaxy, straddles the border between Triangulum Australe and the neighboring constellation Norma.
History
Italian navigator Amerigo Vespucci explored the New World at the beginning of the 16th century. He learnt to recognize the stars in the southern hemisphere and made a catalogue for his patron king Manuel I of Portugal, which is now lost. As well as the catalogue, Vespucci wrote descriptions of the southern stars, including a triangle which may be either Triangulum Australe or Apus. This was sent to his patron in Florence, Lorenzo di Pierfrancesco de' Medici, and published as Mundus Novus in 1504. The first depiction of the constellation was provided in 1589 by Flemish astronomer and clergyman Petrus Plancius on a -cm diameter celestial globe published in Amsterdam by Dutch cartographer Jacob van Langren, where it was called Triangulus Antarcticus and incorrectly portrayed to the south of Argo Navis. His student Petrus Keyzer, along with Dutch explorer Frederick de Houtman, coined the name Den Zuyden Trianghel. Triangulum Australe was more accurately depicted in Johann Bayer's celestial atlas Uranometria in 1603, where it was also given its current name.
Nicolas Louis de Lacaille portrayed the constellations of Norma, Circinus and Triangulum Australe as a set square and ruler, a compass, and a surveyor's level respectively in a set of draughtsman's instruments in his 1756 map of the southern stars. Also depicting it as a surveyor's level, German Johann Bode gave it the alternate name of Libella in his Uranographia.
German poet and author Philippus Caesius saw the three main stars as representing the Three Patriarchs, Abraham, Isaac and Jacob (with Atria as Abraham). The Wardaman people of the Northern Territory in Australia perceived the stars of Triangulum Australe as the tail of the Rainbow Serpent, which stretched out from near Crux across to Scorpius. Overhead in October, the Rainbow Serpent "gives Lightning a nudge" to bring on the wet season rains in November.
Characteristics
Triangulum Australe is a small constellation bordered by Norma to the north, Circinus to the west, Apus to the south and Ara to the east. It lies near the Pointers (Alpha and Beta Centauri), with only Circinus in between. The constellation is located within the Milky Way, and hence has many stars. A roughly equilateral triangle, it is easily identifiable. Triangulum Australe lies too far south in the celestial southern hemisphere to be visible from Europe, yet is circumpolar from most of the southern hemisphere. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "TrA". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 18 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −60.26° and −70.51°. Triangulum Australe culminates each year at 9 p.m. on 23 August.
Notable features
Bright stars
In defining the constellation, Lacaille gave twelve stars Bayer designations of Alpha through to Lambda, with two close stars called Eta (one now known by its Henry Draper catalogue number), while Lambda was later dropped due to its dimness. The three brightest stars, Alpha, Beta and Gamma, make up the triangle. Readily identified by its orange hue, Alpha Trianguli Australis is a bright giant star of spectral type K2 IIb-IIIa with an apparent magnitude of +1.91 that is the 42nd-brightest star in the night sky. It lies away and has an absolute magnitude of −3.68 and is 5,500 times more luminous than the Sun. With a diameter 130 times that of the Sun, it would almost reach the orbit of Venus if placed at the centre of the Solar System. The proper name Atria is a contraction of its Bayer designation. Beta Trianguli Australis is a double star, the primary being a F-type main-sequence star with a stellar classification of F1V, and an apparent magnitude of 2.85. Lying only away, it has an absolute magnitude of 2.38. Its companion, almost 3 arcminutes away, is a 13th-magnitude star which may or may not be in orbit around Beta. The remaining member of the triangle is Gamma Trianguli Australis with an apparent magnitude of 2.87. It is an A-type main sequence star of spectral class A1 V, which lies away.
Located outside the triangle near Beta, Delta Trianguli Australis is the fourth-brightest star at apparent magnitude +3.8. It is a yellow giant of spectral type G2Ib-II and lies away. Lying halfway between Beta and Gamma, Epsilon Trianguli Australis is an optical double. The brighter star, Epsilon Trianguli Australis A, is an orange K-type sub-giant of spectral type K1.5III with an apparent magnitude of +4.11. The optical companion, Epsilon Trianguli Australis B (or HD 138510), is a white main sequence star of spectral type A9IV/V which has an apparent magnitude of +9.32. Zeta Trianguli Australis appears as a star of apparent magnitude +4.91 and spectral class F9V, but is actually a spectroscopic binary with a near companion, probably a red dwarf. The pair orbit each other once every 13 days. A young star, its proper motion indicates it is a member of the Ursa Major moving group. Iota Trianguli Australis shows itself to be a multiple star system composed of a yellow and a white star when seen through a 7.5 cm telescope. The brighter star has a spectral type of F4IV and is a spectroscopic binary whose components are two yellow-white stars which orbit each other every 39.88 days. The primary is a Gamma Doradus variable, pulsating over a period of 1.45 days. The fainter star is not associated with the system, hence the system is an optical double. HD 147018 is a Sun-like star of apparent magnitude 8.3 and spectral type G9V, which was found to have two exoplanets, HD 147018 b and HD 147018 c, in 2009.
Of apparent magnitude 5.11, the yellow bright giant Kappa Trianguli Australis of spectral type G5IIa lies around distant from the Solar System. Eta Trianguli Australis (or Eta1 Trianguli Australis) is a Be star of spectral type B7IVe which is from Earth, with an apparent magnitude of 5.89. Lacaille named a close-by star as Eta as well, which was inconsistently followed by Francis Baily, who used the name for the brighter or both stars in two different publications. Despite their faintness, Benjamin Gould upheld their Bayer designation as they were closer than 25 degrees to the south celestial pole. The second Eta is now designated as HD 150550. It is a variable star of average magnitude 6.53 and spectral type A1III.
Variable stars
Triangulum Australe contains several cepheid variables, all of which are too faint to be seen with the naked eye: R Trianguli Australis ranges from apparent magnitude 6.4 to 6.9 over a period of 3.389 days, S Trianguli Australis varies from magnitude 6.1 to 6.8 over 6.323 days, and U Trianguli Australis' brightness changes from 7.5 to 8.3 over 2.568 days. All three are yellow-white giants of spectral type F7Ib/II, F8II, and F8Ib/II respectively. RT Trianguli Australis is an unusual cepheid variable which shows strong absorption bands in the molecular fragments C2, CH and CN, and has been classified as a carbon cepheid of spectral type R. It varies between magnitudes 9.2 and 9.97 over 1.95 days. Lying nearby Gamma, X Trianguli Australis is a variable carbon star with an average magnitude of 5.63. It has two periods of around 385 and 455 days, and is of spectral type C5, 5(Nb).
EK Trianguli Australis, a dwarf nova of the SU Ursae Majoris type, was first noticed in 1978 and officially described in 1980. It consists of a white dwarf and a donor star which orbit each other every 1.5 hours. The white dwarf sucks matter from the other star onto an accretion disc and periodically erupts, reaching magnitude 11.2 in superoutbursts, 12.1 in normal outbursts and remaining at magnitude 16.7 when quiet. NR Trianguli Australis was a slow nova which peaked at magnitude 8.4 in April 2008, before fading to magnitude 12.4 by September of that year.
Deep-sky objects
Triangulum Australe has few deep-sky objects—one open cluster and a few planetary nebulae and faint galaxies. NGC 6025 is an open cluster with about 30 stars ranging from 7th to 9th magnitude. Located 3 degrees north and 1 east of Beta Trianguli Australis, it lies about away and is about in diameter. Its brightest star is MQ Trianguli Australis at apparent magnitude 7.1. NGC 5979, a planetary nebula of apparent magnitude 12.3, has a blue-green hue at higher magnifications, while Henize 2-138 is a smaller planetary nebula of magnitude 11.0. NGC 5938 is a remote spiral galaxy around 300 million light-years (90 megaparsecs) away. It is located 5 degrees south of Epsilon Trianguli Australis. ESO 69-6 is a pair of merging galaxies located about 600 million light-years (185 megaparsecs) away. Their contents have been dragged out in long tails by the interaction.
In culture
Triangulum Australe appears on the flag of Brazil, symbolizing the three states of the South Region.
It also appears as the only constellation used for the flag of secessionist movement The South Is My Country.
See also
IAU-recognized constellations
Triangulum Australe (Chinese astronomy)
References
Citations
Sources
Online sources
External links
The Deep Photographic Guide to the Constellations: Triangulum Australe
Starry Night Photography: Triangulum Australe
Southern constellations
Constellations listed by Petrus Plancius | Triangulum Australe | Astronomy | 2,487 |
32,040,747 | https://en.wikipedia.org/wiki/Affine%20braid%20group | In mathematics, an affine braid group is a braid group associated to an affine Coxeter system. Their group rings have quotients called affine Hecke algebras. They are subgroups of double affine braid groups.
Definition
References
Macdonald, I. G. Affine Hecke Algebras and Orthogonal Polynomials. Cambridge Tracts in Mathematics, 157. Cambridge University Press, Cambridge, Eng., 2003. x+175 pp.
Braid groups
Representation theory | Affine braid group | Mathematics | 94 |
23,527,575 | https://en.wikipedia.org/wiki/Gumblar | Gumblar is a malicious JavaScript trojan horse file that redirects a user's Google searches, and then installs rogue security software. Also known as Troj/JSRedir-R, this botnet first appeared in 2009.
Infection
Windows Personal Computers
Gumblar.X infections were widely seen on systems running older versions of Microsoft Windows. Visitors to an infected site will be redirected to an alternative site containing further malware. Initially, this alternative site was gumblar.cn, but it has since switched to a variety of domains. The site sends the visitor an infected PDF that is opened by the visitor's browser or Acrobat Reader. The PDF will then exploit a known vulnerability in Acrobat to gain access to the user's computer. Newer variations of Gumblar redirect users to sites running fake anti-virus software.
The virus will find FTP clients such as FileZilla and Dreamweaver and download the clients' stored passwords. Gumblar also enables promiscuous mode on the network card, allowing it to sniff local network traffic for FTP details. It is one of the first viruses to incorporate an automated packet analyzer.
Servers
Using passwords obtained from site admins, the host site will access a website via FTP and infect that website. It will download large portions of the website and inject malicious code into the website's files before uploading the files back onto the server. The code is inserted in any file that contains a <body> tag, such as HTML, PHP, JavaScript, ASP and ASPx files. The inserted PHP code contains base64-encoded JavaScript that will infect computers that execute the code. In addition, some pages may have inline frames inserted into them. Typically, iframe code contains hidden links to malicious websites.
The virus will also modify .htaccess and HOSTS files, and create images.php files in directories named 'images'. The infection is not a server-wide exploit. It will only infect sites on the server that it has passwords to.
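As a purely defensive illustration of the injection pattern just described, the sketch below scans a web root for script files that contain both a <body> tag and a suspiciously long base64-encoded payload passed to base64_decode(). The web-root path, file extensions, and length threshold are assumptions for the example, not actual Gumblar signatures.

```python
import base64, pathlib, re

# a long, decode-ready blob handed to base64_decode() is a common red flag
SUSPECT = re.compile(r"base64_decode\(\s*['\"]([A-Za-z0-9+/=]{200,})['\"]")

def scan(webroot):
    """Flag script files containing a <body> tag plus a long base64 payload."""
    for path in pathlib.Path(webroot).rglob("*"):
        if path.suffix.lower() not in {".php", ".html", ".js", ".asp", ".aspx"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        match = SUSPECT.search(text)
        if match and "<body" in text.lower():
            try:
                preview = base64.b64decode(match.group(1))[:40]  # inspect, never execute
            except Exception:
                preview = b"<undecodable>"
            print(f"suspicious: {path} payload starts {preview!r}")

scan("/var/www")   # hypothetical web root
```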
Gumblar variants
Different companies use different names for Gumblar and variants. Initially, the malware was connecting to gumblar.cn domain but this server was shut down in May 2009. However, many badware variants have emerged after that and they connect to other malicious servers via iframe code.
Gumblar resurfaced in January 2010, stealing FTP usernames and passwords and infecting HTML, PHP and JavaScript files on webservers to help spread itself. This time it used multiple domains, making it harder to detect/stop.
See also
E-mail spam
Malware
References
External links
Internet security
Distributed computing projects
Spamming
Botnets | Gumblar | Engineering | 574 |
2,016,798 | https://en.wikipedia.org/wiki/Fujiwhara%20effect | The Fujiwhara effect, sometimes referred to as the Fujiwara effect, Fujiw(h)ara interaction or binary interaction, is a phenomenon that occurs when two nearby cyclonic vortices move around each other and close the distance between the circulations of their corresponding low-pressure areas. The effect is named after Sakuhei Fujiwhara, the Japanese meteorologist who initially described the effect. Binary interaction of smaller circulations can cause the development of a larger cyclone, or cause two cyclones to merge into one. Extratropical cyclones typically engage in binary interaction when within 2,000 km (1,200 mi) of one another, while tropical cyclones typically interact within 1,400 km (870 mi) of each other.
Description
When cyclones are in proximity of one another, their centers will circle each other cyclonically (counter-clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere) about a point between the two systems due to their cyclonic wind circulations. The two vortices will be attracted to each other, and eventually spiral into the center point and merge. It has not been agreed upon whether this is due to the divergent portion of the wind or vorticity advection. When the two vortices are of unequal size, the larger vortex will tend to dominate the interaction, and the smaller vortex will circle around it. The effect is named after Sakuhei Fujiwhara, the Japanese meteorologist who initially described it in a 1921 paper about the motion of vortices in water.
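The mutual cyclonic rotation can be reproduced with the classical two-point-vortex model, in which each vortex is advected by the flow the other induces. This is an idealized sketch under assumed circulation values, not a model of real cyclones.

```python
import math

def step(p1, p2, g1, g2, dt):
    """Advance two point vortices one Euler step; each is advected by the other."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x1 - x2, y1 - y2
    r2 = dx * dx + dy * dy                     # squared separation
    # tangential velocity induced at each vortex by the other
    # (counter-clockwise for positive g, as in the Northern Hemisphere)
    v1 = (-g2 * dy / (2 * math.pi * r2), g2 * dx / (2 * math.pi * r2))
    v2 = (g1 * dy / (2 * math.pi * r2), -g1 * dx / (2 * math.pi * r2))
    return ((x1 + v1[0] * dt, y1 + v1[1] * dt),
            (x2 + v2[0] * dt, y2 + v2[1] * dt))

# two equal cyclonic vortices 600 km apart; the circulation value is arbitrary
p1, p2 = (-300e3, 0.0), (300e3, 0.0)
gamma = 1e8                                    # m^2/s, illustrative only
for _ in range(2000):
    p1, p2 = step(p1, p2, gamma, gamma, dt=60.0)
print(p1, p2)   # both vortices have revolved counter-clockwise about the midpoint
```

With equal circulations the pair orbits its fixed midpoint; unequal circulations make the weaker vortex trace the larger circle, matching the dominance behaviour described above.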
Tropical cyclones
Tropical cyclones can form when smaller circulations within the Intertropical Convergence Zone merge. The effect is often mentioned in relation to the motion of tropical cyclones, although the final merging of the two storms is uncommon. The effect becomes noticeable when they approach within 1,400 km (870 mi) of each other. Rotation rates within binary pairs accelerate when tropical cyclones close within 650 km (400 mi) of each other. Merger of the two systems (or shearing out of one of the pair) becomes realized when they are within 300 km (190 mi) of one another.
Extratropical cyclones
Binary interaction is seen between nearby extratropical cyclones when within 2,000 km (1,200 mi) of each other, with significant acceleration occurring when the low-pressure areas are within 1,100 km (680 mi) of one another. Interactions between their circulations at the 500 hPa level (roughly 5.5 km above sea level) behave more predictably than their surface circulations. This most often results in a merging of the two low-pressure systems into a single extratropical cyclone, or can less commonly result in a change of direction of one or both of the cyclones. The precise results of such interactions depend on factors such as the size of the two cyclones, their distance from each other, and the prevailing atmospheric conditions around them.
See also
Satellite tornado
References
External links
Edward N. Rappaport, NOAA Hurricane Research Division – "Hurricane Iris Preliminary Report"
Vortices
Tropical cyclone meteorology
Articles containing video clips | Fujiwhara effect | Chemistry,Mathematics | 561 |
12,374,274 | https://en.wikipedia.org/wiki/Aliquot%20sum | In number theory, the aliquot sum $s(n)$ of a positive integer $n$ is the sum of all proper divisors of $n$, that is, all divisors of $n$ other than $n$ itself.
That is,
$$s(n) = \sum_{d \mid n,\ d \neq n} d.$$
It can be used to characterize the prime numbers, perfect numbers, sociable numbers, deficient numbers, abundant numbers, and untouchable numbers, and to define the aliquot sequence of a number.
Examples
For example, the proper divisors of 12 (that is, the positive divisors of 12 that are not equal to 12) are 1, 2, 3, 4, and 6, so the aliquot sum of 12 is 16, i.e. (1 + 2 + 3 + 4 + 6 = 16).
The values of $s(n)$ for $n = 1, 2, 3, \ldots$ are:
0, 1, 1, 3, 1, 6, 1, 7, 4, 8, 1, 16, 1, 10, 9, 15, 1, 21, 1, 22, 11, 14, 1, 36, 6, 16, 13, 28, 1, 42, 1, 31, 15, 20, 13, 55, 1, 22, 17, 50, 1, 54, 1, 40, 33, 26, 1, 76, 8, 43, ...
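A direct Python translation of the definition, using trial division up to the square root, reproduces the values listed above:

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n (all divisors of n other than n itself)."""
    total = 1 if n > 1 else 0        # 1 is a proper divisor of every n > 1
    d = 2
    while d * d <= n:                # trial division up to sqrt(n)
        if n % d == 0:
            total += d
            if d != n // d:          # add the complementary divisor once
                total += n // d
        d += 1
    return total

print([aliquot_sum(n) for n in range(1, 13)])
# [0, 1, 1, 3, 1, 6, 1, 7, 4, 8, 1, 16]  -- matching the values above
```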
Characterization of classes of numbers
The aliquot sum function can be used to characterize several notable classes of numbers:
1 is the only number whose aliquot sum is 0.
A number is prime if and only if its aliquot sum is 1.
The aliquot sums of perfect, deficient, and abundant numbers are equal to, less than, and greater than the number itself respectively. The quasiperfect numbers (if such numbers exist) are the numbers $n$ whose aliquot sum equals $n + 1$. The almost perfect numbers (which include the powers of 2, being the only known such numbers so far) are the numbers $n$ whose aliquot sum equals $n - 1$.
The untouchable numbers are the numbers that are not the aliquot sum of any other number. Their study goes back at least to Abu Mansur al-Baghdadi (circa 1000 AD), who observed that both 2 and 5 are untouchable. Paul Erdős proved that their number is infinite. The conjecture that 5 is the only odd untouchable number remains unproven, but would follow from a form of Goldbach's conjecture together with the observation that, for a semiprime number $pq$ (the product of two primes $p$ and $q$), the aliquot sum is $p + q + 1$.
Mathematicians have noted that one of Erdős's "favorite subjects of investigation" was the aliquot sum function.
Iteration
Iterating the aliquot sum function produces the aliquot sequence $n, s(n), s(s(n)), \ldots$ of a nonnegative integer $n$ (in this sequence, we define $s(0) = 0$).
Sociable numbers are numbers whose aliquot sequence is a periodic sequence. Amicable numbers are sociable numbers whose aliquot sequence has period 2.
It remains unknown whether these sequences always end with a prime number, a perfect number, or a periodic sequence of sociable numbers.
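Reusing the aliquot_sum function from the example above, iteration makes these classes concrete: a prime leads to 1 and then 0, an amicable pair cycles with period 2, and a perfect number is a fixed point.

```python
def aliquot_sequence(n, steps):
    """Iterate s starting from n, stopping early once the sequence reaches 0."""
    seq = [n]
    for _ in range(steps):
        n = aliquot_sum(n)          # aliquot_sum as defined in the earlier example
        seq.append(n)
        if n == 0:
            break
    return seq

print(aliquot_sequence(10, 6))   # [10, 8, 7, 1, 0]   (7 is prime, so 7 -> 1 -> 0)
print(aliquot_sequence(220, 4))  # [220, 284, 220, 284, 220]   (amicable pair)
print(aliquot_sequence(28, 3))   # [28, 28, 28, 28]   (perfect number)
```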
See also
Sum of positive divisors function, the sum of the ($k$th powers of the) positive divisors of a number
William of Auberive, medieval numerologist interested in aliquot sums
References
External links
Arithmetic dynamics
Arithmetic functions
Divisor function
Perfect numbers | Aliquot sum | Mathematics | 656 |
1,718,317 | https://en.wikipedia.org/wiki/Lorenz%20gauge%20condition | In electromagnetism, the Lorenz gauge condition or Lorenz gauge (after Ludvig Lorenz) is a partial gauge fixing of the electromagnetic vector potential by requiring $\partial_\mu A^\mu = 0$. The name is frequently confused with Hendrik Lorentz, who has given his name to many concepts in this field. (See, however, the Note added below for a different interpretation.) The condition is Lorentz invariant. The Lorenz gauge condition does not completely determine the gauge: one can still make a gauge transformation $A^\mu \to A^\mu + \partial^\mu f$, where $\partial^\mu$ is the four-gradient and $f$ is any harmonic scalar function: that is, a scalar function obeying $\partial_\mu \partial^\mu f = 0$, the equation of a massless scalar field.
The Lorenz gauge condition is used to eliminate the redundant spin-0 component in Maxwell's equations when these are used to describe a massless spin-1 quantum field. It is also used for massive spin-1 fields where the concept of gauge transformations does not apply at all.
Description
In electromagnetism, the Lorenz condition is generally used in calculations of time-dependent electromagnetic fields through retarded potentials. The condition is
$$A^\mu{}_{,\mu} \equiv \partial_\mu A^\mu = 0,$$
where $A^\mu$ is the four-potential, the comma denotes a partial differentiation and the repeated index indicates that the Einstein summation convention is being used. The condition has the advantage of being Lorentz invariant. It still leaves substantial gauge degrees of freedom.
In ordinary vector notation and SI units, the condition is
$$\nabla \cdot \mathbf{A} + \frac{1}{c^2} \frac{\partial \varphi}{\partial t} = 0,$$
where $\mathbf{A}$ is the magnetic vector potential and $\varphi$ is the electric potential; see also gauge fixing.
In Gaussian units the condition is
$$\nabla \cdot \mathbf{A} + \frac{1}{c} \frac{\partial \varphi}{\partial t} = 0.$$
A quick justification of the Lorenz gauge can be found using Maxwell's equations and the relation between the magnetic vector potential and the magnetic field:
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} = -\frac{\partial (\nabla \times \mathbf{A})}{\partial t}.$$
Therefore,
$$\nabla \times \left( \mathbf{E} + \frac{\partial \mathbf{A}}{\partial t} \right) = 0.$$
Since the curl is zero, that means there is a scalar function $\varphi$ such that
$$-\nabla \varphi = \mathbf{E} + \frac{\partial \mathbf{A}}{\partial t}.$$
This gives a well known equation for the electric field:
$$\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t}.$$
This result can be plugged into the Ampère–Maxwell equation,
$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t},$$
which, with $\mathbf{B} = \nabla \times \mathbf{A}$ and the identity $\nabla \times (\nabla \times \mathbf{A}) = \nabla (\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A}$, leaves
$$\nabla \left( \nabla \cdot \mathbf{A} + \frac{1}{c^2} \frac{\partial \varphi}{\partial t} \right) = \mu_0 \mathbf{J} - \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} + \nabla^2 \mathbf{A}.$$
To have Lorentz invariance, the time derivatives and spatial derivatives must be treated equally (i.e. of the same order). Therefore, it is convenient to choose the Lorenz gauge condition, which makes the left hand side zero and gives the result
$$\Box \mathbf{A} = \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} - \nabla^2 \mathbf{A} = \mu_0 \mathbf{J}.$$
A similar procedure with a focus on the electric scalar potential and making the same gauge choice will yield
$$\Box \varphi = \frac{1}{c^2} \frac{\partial^2 \varphi}{\partial t^2} - \nabla^2 \varphi = \frac{\rho}{\varepsilon_0}.$$
These are simpler and more symmetric forms of the inhomogeneous Maxwell's equations.
Here $c$ is the vacuum velocity of light and
$$\Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2$$
is the d'Alembertian operator with the $(+, -, -, -)$ metric signature. These equations are not only valid under vacuum conditions, but also in polarized media, if $\rho$ and $\mathbf{J}$ are source density and circulation density, respectively, of the electromagnetic induction fields $\mathbf{E}$ and $\mathbf{B}$ calculated as usual from $\varphi$ and $\mathbf{A}$ by the equations
$$\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla \times \mathbf{A}.$$
The explicit solutions for $\varphi$ and $\mathbf{A}$ – unique, if all quantities vanish sufficiently fast at infinity – are known as retarded potentials.
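The residual gauge freedom mentioned in the introduction can be checked symbolically. A minimal SymPy sketch, assuming the SI form of the condition and one convenient harmonic function $f$; the potentials are left as arbitrary symbolic functions:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
c = sp.symbols('c', positive=True)

# a harmonic scalar: a plane wave obeying the massless wave equation
f = sp.sin(x - c*t)
box_f = sp.diff(f, t, 2)/c**2 - sum(sp.diff(f, v, 2) for v in (x, y, z))
assert sp.simplify(box_f) == 0          # f satisfies Box f = 0

phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function(n)(t, x, y, z) for n in ('A_x', 'A_y', 'A_z')]

def lorenz(phi, A):
    """div A + (1/c^2) d(phi)/dt, which vanishes in the Lorenz gauge."""
    return sum(sp.diff(Ai, v) for Ai, v in zip(A, (x, y, z))) + sp.diff(phi, t)/c**2

# gauge transformation: phi -> phi - df/dt, A -> A + grad f
phi2 = phi - sp.diff(f, t)
A2 = [Ai + sp.diff(f, v) for Ai, v in zip(A, (x, y, z))]
print(sp.simplify(lorenz(phi2, A2) - lorenz(phi, A)))   # 0: the condition is preserved
```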
History
When originally published in 1867, Lorenz's work was not received well by James Clerk Maxwell. Maxwell had eliminated the Coulomb electrostatic force from his derivation of the electromagnetic wave equation since he was working in what would nowadays be termed the Coulomb gauge. The Lorenz gauge hence contradicted Maxwell's original derivation of the EM wave equation by introducing a retardation effect to the Coulomb force and bringing it inside the EM wave equation alongside the time varying electric field, which was introduced in Lorenz's paper "On the identity of the vibrations of light with electrical currents". Lorenz's work was the first use of symmetry to simplify Maxwell's equations after Maxwell himself published his 1865 paper. In 1888, retarded potentials came into general use after Heinrich Rudolf Hertz's experiments on electromagnetic waves. In 1895, a further boost to the theory of retarded potentials came after J. J. Thomson's interpretation of data for electrons (after which investigation into electrical phenomena changed from time-dependent electric charge and electric current distributions over to moving point charges).
Note added on 26 November 2024: It should be pointed out that Lorenz actually derived the 'condition' from postulated integral expressions for the potentials (nowadays known as retarded potentials), whereas Lorentz (and before him Emil Wiechert) imposed it on the potentials to fix the gauge (see, e.g., his 1904 Encyclopedia article on electron theory). So Lorenz's equation is not a real condition but a mathematical result. It is therefore misleading to attribute the gauge condition to Lorenz.
See also
Gauge fixing
References
External links and further reading
General
Further reading
See also
History
Electromagnetism
Concepts in physics | Lorenz gauge condition | Physics | 923 |
18,050,749 | https://en.wikipedia.org/wiki/Nucleic%20acid%20notation | The nucleic acid notation currently in use was first formalized by the International Union of Pure and Applied Chemistry (IUPAC) in 1970. This universally accepted notation uses the Roman characters G, C, A, and T, to represent the four nucleotides commonly found in deoxyribonucleic acids (DNA).
Given the rapidly expanding role for genetic sequencing, synthesis, and analysis in biology, some researchers have developed alternate notations to further support the analysis and manipulation of genetic data. These notations generally exploit size, shape, and symmetry to accomplish these objectives.
IUPAC notation
Degenerate base symbols in biochemistry are an IUPAC representation for a position on a DNA sequence that can have multiple possible alternatives. These should not be confused with non-canonical bases, because each particular sequence will in fact have one of the regular bases. They are used to encode the consensus sequence of a population of aligned sequences and are used, for example, in phylogenetic analysis to summarise multiple sequences into one, or for BLAST searches, even though IUPAC degenerate symbols are masked (as they are not coded).
Under the commonly used IUPAC system, nucleobases are represented by the first letters of their chemical names: guanine, cytosine, adenine, and thymine. This shorthand also includes eleven "ambiguity" characters associated with every possible combination of the four DNA bases. The ambiguity characters were designed to encode positional variations in order to report DNA sequencing errors, consensus sequences, or single-nucleotide polymorphisms. The IUPAC notation, including ambiguity characters and suggested mnemonics, is shown in Table 1.
Despite its broad and nearly universal acceptance, the IUPAC system has a number of limitations, which stem from its reliance on the Roman alphabet. The poor legibility of upper-case Roman characters, which are generally used when displaying genetic data, may be chief among these limitations. The value of external projections in distinguishing letters has been well documented. However, these projections are absent from upper case letters, which in some cases are only distinguishable by subtle internal cues. Take for example the upper case C and G used to represent cytosine and guanine. These characters generally comprise half the characters in a genetic sequence but are differentiated by a small internal tick (depending on the typeface). Nevertheless, these Roman characters are available in the ASCII character set most commonly used in textual communications, which reinforces this system's ubiquity.
Another shortcoming of the IUPAC notation arises from the fact that its eleven ambiguity characters have been selected from the remaining characters of the Roman alphabet. The authors of the notation endeavored to select ambiguity characters with logical mnemonics. For example, S is used to represent the possibility of finding cytosine or guanine at genetic loci, both of which form strong cross-strand binding interactions. Conversely, the weaker interactions of thymine and adenine are represented by a W. However, convenient mnemonics are not as readily available for the other ambiguity characters displayed in Table 1. This has made ambiguity characters difficult to use and may account for their limited application.
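Because the IUPAC alphabet is small, it can be stated directly in code. A short Python sketch (the expand helper is a hypothetical name for this example) that lists the concrete sequences an ambiguous IUPAC string stands for:

```python
from itertools import product

IUPAC = {
    'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
    'R': 'AG', 'Y': 'CT', 'S': 'CG', 'W': 'AT', 'K': 'GT', 'M': 'AC',
    'B': 'CGT', 'D': 'AGT', 'H': 'ACT', 'V': 'ACG', 'N': 'ACGT',
}

def expand(seq):
    """All concrete DNA sequences an ambiguous IUPAC sequence can stand for."""
    return [''.join(p) for p in product(*(IUPAC[ch] for ch in seq.upper()))]

print(expand('AST'))   # ['ACT', 'AGT'] -- S is the 'strong' pair, C or G
```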
Nucleic acid nomenclature
The positions of the carbons in the ribose sugar that forms the backbone of the nucleic acid chain are numbered, and are used to indicate the direction of nucleic acids (5'->3' versus 3'->5'). This is referred to as directionality.
Alternative visually enhanced notations
Legibility issues associated with IUPAC-encoded genetic data have led biologists to consider alternative strategies for displaying genetic data. These creative approaches to visualizing DNA sequences have generally relied on the use of spatially distributed symbols and/or visually distinct shapes to encode lengthy nucleic acid sequences. Alternative notations for nucleotide sequences have been attempted; however, general uptake has been low. Several of these approaches are summarized below.
Stave projection
In 1986, Cowin et al. described a novel method for visualizing DNA sequences known as the Stave Projection. Their strategy was to encode nucleotides as circles on a series of horizontal bars, akin to notes on a musical stave. As illustrated in Figure 1, each gap on the five-line staff corresponded to one of the four DNA bases. The spatial distribution of the circles made it far easier to distinguish individual bases and compare genetic sequences than IUPAC-encoded data.
The order of the bases (from top to bottom, G, A, T, C) is chosen so that the complementary strand can be read by turning the projection upside down.
Geometric symbols
Zimmerman et al. took a different approach to visualizing genetic data. Rather than relying on spatially distributed circles to highlight genetic features, they exploited four geometrically diverse symbols found in a standard computer font to distinguish the four bases. The authors developed a simple WordPerfect macro to translate IUPAC characters into the more visually distinct symbols.
DNA Skyline
With the growing availability of font editors, Jarvius and Landegren devised a novel set of genetic symbols, known as the DNA Skyline font, which uses increasingly taller blocks to represent the different DNA bases. While reminiscent of Cowin et al.'s spatially distributed Stave Projection, the DNA Skyline font is easy to download and permits translation to and from the IUPAC notation by simply changing the font in most standard word processing applications.
Ambigraphic notations
Ambigrams (symbols that convey different meaning when viewed in a different orientation) have been designed to mirror structural symmetries found in the DNA double helix. By assigning ambigraphic characters to complementary bases (i.e. guanine: b, cytosine: q, adenine: n, and thymine: u), it is possible to complement DNA sequences by simply rotating the text 180 degrees. An ambigraphic nucleic acid notation also makes it easy to identify genetic palindromes, such as endonuclease restriction sites, as sections of text that can be rotated 180 degrees without changing the sequence.
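The rotation trick is easy to demonstrate in code. The sketch below (Python) uses the glyph assignments quoted above; note that the example sequence GATC is a genetic palindrome, so the rotate-and-reverse operation leaves its ambigraphic text unchanged:

TO_AMBI = {"G": "b", "C": "q", "A": "n", "T": "u"}    # assignments from the text
ROTATE = {"b": "q", "q": "b", "n": "u", "u": "n"}     # 180-degree glyph rotation

def rotate_page(ambi_seq):
    # Reverse-complement by "turning the page": reverse, then rotate each glyph.
    return "".join(ROTATE[ch] for ch in reversed(ambi_seq))

ambi = "".join(TO_AMBI[b] for b in "GATC")   # 'bnuq'
print(ambi, rotate_page(ambi))               # 'bnuq bnuq': GATC is a palindrome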
One example of an ambigraphic nucleic acid notation is AmbiScript, a rationally designed nucleic acid notation that combines many of the visual and functional features of its predecessors. The notation also uses spatially offset characters to facilitate the visual review and analysis of genetic data. AmbiScript was also designed to indicate ambiguous nucleotide positions via compound symbols. This strategy aimed to offer a more intuitive solution to the use of ambiguity characters first proposed by the IUPAC. As with Jarvius and Landegren's DNA Skyline fonts, AmbiScript fonts can be downloaded and applied to IUPAC-encoded sequence data.
Triple Helix Base Pairing
Watson and Crick base pairs are indicated by a "•" or a "-" or a "." (example: A•T, or poly(rC)•2poly(rC)).
Hoogsteen triple helix base pairs are indicated by a "*" or a ":" (example: C•G*G+, or T•A*T, or C•G*G, or T•A*A).
See also
IUPAC for amino acids
DNA replication
Nucleotide
References
DNA
Notation
DNA replication
Nucleic acids
Nucleotides | Nucleic acid notation | Chemistry,Mathematics,Biology | 1,501 |
3,183,679 | https://en.wikipedia.org/wiki/Aubrey%20Manning | Aubrey William George Manning, OBE, FRSE, FRSB, (24 April 1930 – 20 October 2018) was an English zoologist and broadcaster.
Life
Manning, the son of William, who worked for the Home and Colonial Stores, and Hilda, was born in Chiswick, but moved with his family to Englefield Green in Surrey when the Second World War broke out, to escape the Blitz.
He was educated at Strode's Grammar School in Egham, at University College London, where he studied zoology, and then at Merton College, Oxford, where he completed his DPhil under Niko Tinbergen.
After National Service in the Royal Artillery, he joined the University of Edinburgh as an assistant lecturer in 1956. His main research and teaching interests were in animal behaviour, development, and evolution. He was involved with environmental issues from 1966, and with the Centre for Human Ecology from its inception at the University of Edinburgh in 1970. He was Professor of Natural History at the university from 1973 to 1997. In December 1997, on his retirement, a gallery in the Natural History Collection of Edinburgh University was named in his honour. He later became Emeritus Professor.
Manning died on 20 October 2018.
Honours and public offices
Manning was elected Fellow of the Royal Society of Edinburgh (1973), and received an OBE in 1998. He also held honorary doctorates from Université Paul Sabatier in Toulouse, the University of St Andrews, and the Open University. He received the Zoological Society of London Silver Medal in 2003, for public understanding of science.
Among his many posts, he was Chairman of Edinburgh Brook Advisory Centre, Chairman of the Council of the Scottish Wildlife Trust, and a trustee of the National Museums of Scotland and of Project Wallacea. He was President of the Royal Society of Wildlife Trusts from 2005 to 2010, and was Patron of Population Matters (formerly known as the Optimum Population Trust).
Writing and broadcasting
He wrote An Introduction to Animal Behaviour (1967), published by Cambridge University Press, which is now in its sixth edition (the last three editions co-authored with Professor Marian Stamp Dawkins). His television broadcasts included BBC Two's Earth Story, Landscape Mysteries and Talking Landscapes. His radio broadcasts included The Rules of Life for BBC Radio 4 and the Open University in 2006. He also broadcast five series of Radio 4's Unearthing Mysteries, Sounds of Life and Origins: the Human Connection.
Family
In 1959, he married zoologist Margaret Bastock (d. 1982) with whom he had two sons. In 1985, he married Joan Herrmann, a child psychotherapist, with whom he had another son.
See also
Human overpopulation
References
External links
Presenter of Seven Natural Wonders of the South
Population: Can We Begin to Talk Sensibly? (November 2011). Posted on the official YouTube channel of The University of Edinburgh
Aubrey Manning: A lifetime in conservation
1930 births
2018 deaths
Alumni of University College London
Alumni of Merton College, Oxford
Alumni of the University of Edinburgh
English biologists
English zoologists
Ethologists
Fellows of the Royal Society of Edinburgh
Academics of the University of Edinburgh
Officers of the Order of the British Empire
People educated at Strode's Grammar School
People from Chiswick
People from Englefield Green
Fellows of the Royal Society of Biology
Royal Artillery soldiers
Military personnel from the London Borough of Hounslow
20th-century British Army personnel | Aubrey Manning | Biology | 679 |
36,382,771 | https://en.wikipedia.org/wiki/USA-50 | USA-50, also known as GPS II-6 and GPS SVN-18, was an American navigation satellite which formed part of the Global Positioning System. It was the sixth of nine Block II GPS satellites to be launched, which were the first operational GPS satellites to fly.
Background
It was part of the 21-satellite Global Positioning System (GPS) Block II series that provides precise position data (accurate to within 16 m) to military and civilian users worldwide. Its signals could be received on devices as small as a telephone. The GPS II satellites, built by Rockwell International for the Air Force Space Systems Division, each have a 7.5-year design life. The Air Force intended to launch a GPS II every 2 to 3 months until the constellation of 21 operational satellites and 3 spares was aloft. The GPS Block II satellites joined the 7 operational Block I satellites.
Launch
USA-50 was launched at 22:55:01 UTC on 24 January 1990, atop a Delta II launch vehicle, flight number D191, flying in the 6925-9.5 configuration. The launch took place from Launch Complex 17A (LC-17A) at the Cape Canaveral Air Force Station (CCAFS), and placed USA-50 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37XFP apogee motor.
Mission
On 25 February 1990, USA-50 was in an orbit with a perigee of , an apogee of , a period of 717.92 minutes, and 54.6° of inclination to the equator. The satellite had a mass of , and generated 710 watts of power. It had a design life of 7.5 years, and was retired from service on 18 August 2000.
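The reported period pins down the orbit's size through Kepler's third law. A minimal sketch (Python; Earth's standard gravitational parameter is a textbook constant, not a figure from this article) recovers the familiar semi-synchronous GPS semi-major axis of roughly 26,560 km:

import math

MU_EARTH = 398600.4418          # km^3/s^2, Earth's gravitational parameter
T = 717.92 * 60.0               # period from the text, in seconds

a = (MU_EARTH * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"semi-major axis = {a:.0f} km")   # about 26,560 km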
References
GPS satellites
USA satellites
Spacecraft launched in 1990 | USA-50 | Technology | 368 |
33,004,328 | https://en.wikipedia.org/wiki/Metric%20differential | In mathematical analysis, a metric differential is a generalization of a derivative for a Lipschitz continuous function defined on a Euclidean space and taking values in an arbitrary metric space. With this definition of a derivative, one can generalize Rademacher's theorem to metric space-valued Lipschitz functions.
Discussion
Rademacher's theorem states that a Lipschitz map f : Rn → Rm is differentiable almost everywhere in Rn; in other words, for almost every x, f is approximately linear in any sufficiently small range of x. If f is a function from a Euclidean space Rn that takes values instead in a metric space X, it doesn't immediately make sense to talk about differentiability since X has no linear structure a priori. Even if one assumes that X is a Banach space and asks whether a Fréchet derivative exists almost everywhere, this does not hold. For example, consider the function f : [0,1] → L1([0,1]), mapping the unit interval into the space of integrable functions, defined by f(x) = χ[0,x]. This function is Lipschitz (and in fact, an isometry) since, if 0 ≤ x ≤ y ≤ 1, then
$$\|f(y) - f(x)\|_{L^1} = y - x,$$
but one can verify that $\lim_{h\to 0} (f(x+h) - f(x))/h$ does not converge to an L1 function for any x in [0,1], so it is not differentiable anywhere.
However, if one views Rademacher's theorem as a statement about how a Lipschitz function stabilizes as one zooms in on almost every point, then such a theorem exists, but it is stated in terms of the metric properties of f instead of its linear properties.
Definition and existence of the metric differential
A substitute for a derivative of f : Rn → X is the metric differential of f at a point z in Rn, which is the function on Rn defined by the limit
$$MD(f, z)(v) = \lim_{r \to 0} \frac{d_X\big(f(z + rv),\, f(z)\big)}{|r|}$$
whenever the limit exists (here $d_X$ denotes the metric on X).
A theorem due to Bernd Kirchheim states that a Rademacher theorem in terms of metric differentials holds: for almost every z in Rn, MD(f, z) is a seminorm and
$$d_X\big(f(y), f(z)\big) = MD(f, z)(y - z) + o(|y - z|).$$
The little-o notation employed here means that, at values very close to z, the function f is approximately an isometry from Rn with respect to the seminorm MD(f, z) into the metric space X.
References
Lipschitz maps
Mathematical analysis | Metric differential | Mathematics | 521 |
1,297,539 | https://en.wikipedia.org/wiki/Free%20particle | In physics, a free particle is a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a "field-free" space. In quantum mechanics, it means the particle is in a region of uniform potential, usually set to zero in the region of interest since the potential can be arbitrarily set to zero at any point in space.
Classical free particle
The classical free particle is characterized by a fixed velocity v. The momentum is given by
$$\mathbf{p} = m\mathbf{v}$$
and the kinetic energy (equal to total energy) by
$$E = \tfrac{1}{2}mv^2 = \frac{p^2}{2m},$$
where m is the mass of the particle and v is the vector velocity of the particle.
Quantum free particle
Mathematical description
A free particle with mass $m$ in non-relativistic quantum mechanics is described by the free Schrödinger equation:
$$i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r},t),$$
where ψ is the wavefunction of the particle at position r and time t. The solution for a particle with momentum p or wave vector k, at angular frequency ω or energy E, is given by a complex plane wave:
$$\psi(\mathbf{r},t) = A\,e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}$$
with amplitude A; the dispersion relation follows two different rules according to the particle's mass:
if the particle has mass m: $\omega = \frac{\hbar k^2}{2m}$ (or equivalently $E = \frac{\hbar^2 k^2}{2m}$);
if the particle is massless: $\omega = kc$.
The eigenvalue spectrum is infinitely degenerate, since to each eigenvalue E > 0 there correspond infinitely many eigenfunctions, one for each direction of $\mathbf{k}$.
The de Broglie relations $E = \hbar\omega$, $\mathbf{p} = \hbar\mathbf{k}$ apply. Since the potential energy is (stated to be) zero, the total energy E is equal to the kinetic energy, which has the same form as in classical physics:
$$E = \frac{p^2}{2m} = \frac{\hbar^2 k^2}{2m}.$$
As for all quantum particles, free or bound, the Heisenberg uncertainty principle applies. Since the plane wave has definite momentum (and definite energy), the probability density of finding the particle is uniform over all space. In other words, the wave function is not normalizable in Euclidean space, so these stationary states cannot correspond to physically realizable states.
Measurement and calculations
The normalization condition for the wave function states that if a wavefunction belongs to the quantum state space $L^2(\mathbb{R}^3)$, then the integral of the probability density function
$$\rho(\mathbf{r},t) = \psi^*(\mathbf{r},t)\,\psi(\mathbf{r},t) = |\psi(\mathbf{r},t)|^2,$$
where * denotes complex conjugate, over all space is the probability of finding the particle in all space, which must be unity if the particle exists:
$$\int_{\text{all space}} |\psi(\mathbf{r},t)|^2 \, d^3\mathbf{r} = 1.$$
The state of a free particle given by plane wave solutions is not normalizable, as
$$\int_{\text{all space}} \left|A\,e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}\right|^2 d^3\mathbf{r} = |A|^2 \int_{\text{all space}} d^3\mathbf{r} = \infty$$
for any fixed time $t$. Using wave packets, however, the states can be expressed as functions that are normalizable.
Wave packet
Using the Fourier inversion theorem, the free particle wave function may be represented by a superposition of momentum eigenfunctions, or, wave packet:
$$\psi(\mathbf{r},t) = \frac{1}{(2\pi)^{3/2}} \int \hat\psi_0(\mathbf{k})\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega(\mathbf{k})\, t)} \, d^3\mathbf{k},$$
where
$$\omega(\mathbf{k}) = \frac{\hbar k^2}{2m}$$
and $\hat\psi_0$ is the Fourier transform of a "sufficiently nice" initial wavefunction $\psi(\mathbf{r},0)$.
The expectation value of the momentum p for the complex plane wave is
$$\langle\mathbf{p}\rangle = \hbar\mathbf{k},$$
and for the general wave packet it is
$$\langle\mathbf{p}\rangle = \int \hbar\mathbf{k}\, |\hat\psi_0(\mathbf{k})|^2 \, d^3\mathbf{k}.$$
The expectation value of the energy E is
$$\langle E\rangle = \int \frac{\hbar^2 k^2}{2m}\, |\hat\psi_0(\mathbf{k})|^2 \, d^3\mathbf{k}.$$
Group velocity and phase velocity
The phase velocity is defined to be the speed at which a plane wave solution propagates, namely
$$v_p = \frac{\omega}{k} = \frac{\hbar k}{2m} = \frac{p}{2m}.$$
Note that $\frac{p}{2m}$ is not the speed of a classical particle with momentum $p$; rather, it is half of the classical velocity.
Meanwhile, suppose that the initial wave function is a wave packet whose Fourier transform is concentrated near a particular wave vector $\mathbf{k}_0$. Then the group velocity of the plane wave is defined as
$$v_g = \nabla_{\mathbf{k}}\,\omega(\mathbf{k})\Big|_{\mathbf{k} = \mathbf{k}_0} = \frac{\hbar \mathbf{k}_0}{m} = \frac{\mathbf{p}}{m},$$
which agrees with the formula for the classical velocity of the particle. The group velocity is the (approximate) speed at which the whole wave packet propagates, while the phase velocity is the speed at which the individual peaks in the wave packet move. The figure illustrates this phenomenon, with the individual peaks within the wave packet propagating at half the speed of the overall packet.
Spread of the wave packet
The notion of group velocity is based on a linear approximation to the dispersion relation near a particular value of $\mathbf{k}$. In this approximation, the amplitude of the wave packet moves at a velocity equal to the group velocity without changing shape. This result is an approximation that fails to capture certain interesting aspects of the evolution of a free quantum particle. Notably, the width of the wave packet, as measured by the uncertainty in the position, grows linearly in time for large times. This phenomenon is called the spread of the wave packet for a free particle.
Specifically, it is not difficult to compute an exact formula for the uncertainty $\Delta_{\psi(t)}X$ as a function of time, where $X$ is the position operator. Working in one spatial dimension for simplicity, we have:
$$(\Delta_{\psi(t)}X)^2 = \frac{t^2}{m^2}(\Delta_{\psi_0}P)^2 + \frac{2t}{m}\left(\left\langle \frac{XP + PX}{2} \right\rangle_{\psi_0} - \langle X\rangle_{\psi_0}\langle P\rangle_{\psi_0}\right) + (\Delta_{\psi_0}X)^2,$$
where $\psi_0$ is the time-zero wave function. The expression in parentheses in the second term on the right-hand side is the quantum covariance of $X$ and $P$.
Thus, for large positive times, the uncertainty in $X$ grows linearly, with the coefficient of $t$ equal to $(\Delta_{\psi_0}P)/m$. If the momentum of the initial wave function is highly localized, the wave packet will spread slowly and the group-velocity approximation will remain good for a long time. Intuitively, this result says that if the initial wave function has a very sharply defined momentum, then the particle has a sharply defined velocity and will (to good approximation) propagate at this velocity for a long time.
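For a concrete sense of the time scales, the special case of an initially minimum-uncertainty Gaussian packet has the standard closed-form spread $\Delta x(t) = \Delta x_0\sqrt{1 + (\hbar t / 2m\Delta x_0^2)^2}$, which can be tabulated numerically. The sketch below (Python; the Gaussian closed form is a textbook result rather than a formula derived in this article) uses an electron localized to 0.1 nm:

import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg (electron)

def width(t, dx0, m=m_e):
    # Standard closed-form spread of a minimum-uncertainty Gaussian packet.
    return dx0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * dx0**2))**2)

dx0 = 1e-10  # electron initially localized to 0.1 nm
for t in (0.0, 1e-16, 1e-15, 1e-14):
    print(f"t = {t:.0e} s  ->  dx = {width(t, dx0):.2e} m")
# For large t the width grows linearly with slope (Delta p)/m = hbar/(2 m dx0),
# matching the coefficient stated above.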
Relativistic quantum free particle
There are a number of equations describing relativistic particles: see relativistic wave equations.
See also
Wave packet
Group velocity
Particle in a box
Finite square well
Delta potential
Notes
References
Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004.
Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd Edition), R. Eisberg, R. Resnick, John Wiley & Sons, 1985.
Stationary States, A. Holden, College Physics Monographs (USA), Oxford University Press, 1971.
Quantum Mechanics Demystified, D. McMahon, Mc Graw Hill (USA), 2006.
Elementary Quantum Mechanics, N.F. Mott, Wykeham Science, Wykeham Press (Taylor & Francis Group), 1972.
Quantum mechanics, E. Zaarur, Y. Peleg, R. Pnini, Schaum's Outlines, Mc Graw Hill (USA), 1998.
Further reading
The New Quantum Universe, T. Hey, P. Walters, Cambridge University Press, 2009.
Quantum Field Theory, D. McMahon, Mc Graw Hill (USA), 2008.
Quantum mechanics, E. Zaarur, Y. Peleg, R. Pnini, Schaum's Easy Outlines Crash Course, Mc Graw Hill (USA), 2006.
Concepts in physics
Classical mechanics
Quantum models | Free particle | Physics | 1,343 |
16,761,786 | https://en.wikipedia.org/wiki/Type%20%28Unix%29 | In Unix and Unix-like operating systems, type is a command that describes how its arguments would be interpreted if used as command names.
Function
Where applicable, type will display the command name's path. Possible command types are:
shell built-in
function
alias
hashed command
keyword
The command returns a non-zero exit status if command names cannot be found.
Examples
$ type test
test is a shell builtin
$ type cp
cp is /bin/cp
$ type unknown
unknown not found
$ type type
type is a shell builtin
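For comparison, the path-lookup portion of this behavior (not the builtin, function, alias, or keyword detection) can be approximated in Python with the standard library's shutil.which; this is a rough analogue, not a replacement for the shell command:

import shutil

# Rough analogue of the path-lookup part of `type`; it cannot report
# shell builtins, functions, aliases, or keywords.
for name in ("cp", "unknown"):
    path = shutil.which(name)
    print(f"{name} is {path}" if path else f"{name} not found")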
History
The type command was a shell builtin for Bourne shell that was introduced in AT&T's System V Release 2 (SVR2) in 1984, and continues to be included in many other POSIX-compatible shells such as Bash. However, type is not part of the POSIX standard. With a POSIX shell, similar behavior is retrieved with
command -V name
In the KornShell, the command whence provides similar functionality.
The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
See also
List of Unix commands
which (command)
hash (Unix)
References
Standard Unix programs
Unix SUS2008 utilities
IBM i Qshell commands | Type (Unix) | Technology | 267 |
9,165,312 | https://en.wikipedia.org/wiki/Acetylmorphone | Acetylmorphone (dihydromorphinone acetate) is an opiate analogue that is an acetylated derivative of hydromorphone which was developed in the early 1900s as a potential cough suppressant and analgesic. It is prepared by the acetylation of hydromorphone using either acetyl chloride or acetic anhydride. It was banned internationally in 1930 by the Health Committee of the League of Nations, in order to prevent its sale as an analogue of heroin.
Acetylmorphone is not currently used in medicine, but may have a higher bioavailability than hydromorphone due to its greater lipid solubility, and hence is likely to be more potent than the parent drug, although probably slower acting due to the requirement for deacetylation to the active metabolite hydromorphone. It can be expected to have similar side effects to other opiates, which would include itching, nausea and respiratory depression.
References
Acetate esters
Ketones
4,5-Epoxymorphinans
Mu-opioid receptor agonists
Semisynthetic opioids
37,576,693 | https://en.wikipedia.org/wiki/Voltage%20controller | A voltage controller, also called an AC voltage controller or AC regulator, is an electronic module based on thyristors, triodes for alternating current (TRIACs), silicon-controlled rectifiers, or insulated-gate bipolar transistors, which converts a fixed-voltage, fixed-frequency alternating current (AC) input supply into a variable-voltage output delivered to a resistive load. This varied voltage output is used for dimming street lights, varying heating temperatures in homes or industry, speed control of fans and winding machines, and many other applications, in a similar fashion to an autotransformer. Voltage controller modules come under the purview of power electronics. Because they are low-maintenance and very efficient, voltage controllers have largely replaced such modules as magnetic amplifiers and saturable reactors in industrial use.
Modes of operation
Electronic voltage controllers work in two different ways; either through "on-and-off control" or through "phase control".
On-and-off control
In an on-and-off controller, thyristors are used to switch the circuit on for a few cycles of the supply voltage and off for certain other cycles, thus altering the total RMS voltage of the output and acting as a high-speed AC switch. The rapid switching results in high-frequency distortion artifacts, which can cause a rise in temperature and may lead to interference in nearby electronics. Such designs are not practical except in low-power applications.
Phase angle control
In phase-angle control, thyristors are used to selectively pass only a part of each AC cycle through to the load. By controlling the phase angle, or trigger angle, the RMS voltage delivered to the load can be varied. The thyristor is triggered on partway through each half-cycle and turns off at the end of that half-cycle; the phase angle is the point within the half-cycle at which the thyristor is switched on.
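The RMS output as a function of the firing angle has a standard closed form for a full-wave controller driving a resistive load. The sketch below (Python; the 230 V source value is illustrative, not from the article) evaluates it at a few angles:

import numpy as np

def vrms_out(v_rms_in, alpha_deg):
    # Full-wave phase-angle controller with a resistive load:
    # Vout = Vin * sqrt((pi - a + sin(2a)/2) / pi), a = firing angle in radians.
    a = np.radians(alpha_deg)
    return v_rms_in * np.sqrt((np.pi - a + np.sin(2 * a) / 2) / np.pi)

for alpha in (0, 45, 90, 135, 180):
    print(f"{alpha:3d} deg -> {vrms_out(230.0, alpha):6.1f} V")
# 0 deg passes the full 230 V; 90 deg gives about 162.6 V; 180 deg blocks all.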
Applications
Light dimming circuits
Temperature control of electrical heating systems
Speed control of motors
AC magnet controls
See also
Dimmer
Motor soft starter
DC injection braking
Space Vector Modulation
Variable-speed air compressor
Vector control (motor)
Motor controller
Adjustable-speed drive
Electronic speed control
Variable-frequency drive
Thyristor drive
DC motor starter section of Electric motor
References
Power electronics | Voltage controller | Engineering | 440 |
33,592,307 | https://en.wikipedia.org/wiki/C2H6N4O2 | The molecular formula C2H6N4O2 (molar mass: 118.09 g/mol, exact mass: 118.0491 u) may refer to:
Biurea
Oxalyldihydrazide | C2H6N4O2 | Chemistry | 64 |
70,534,081 | https://en.wikipedia.org/wiki/Michael%20Start | Michael Start (born 27 October 1960) is a British automata maker and restorer. He trained in Technical Horology at Hackney College in London, and now specialises in the conservation and restoration of antique automata, with a focus on 19th Century automata.
Michael Start is co-founder of "The House of Automata". Together with his wife, Maria Start, they restore and deal in antique automata. Founded in London, it is now based in the North of Scotland, where “The House of Automata” operates from a workshop studio.
Media and television
Michael Start has worked as a consultant to the media, advising on automata and horology for the screen. In 2011, Start designed the mechanism for the automaton that was featured in the Martin Scorsese film Hugo.
Michael Start features as an expert on Salvage Hunters: The Restorers, produced by Quest and Discovery Channel. He and his wife Maria appear in Series 3, 4 and 5 as experts in automata restoration.
References
External links
The House of Automata
The House of Automata - YouTube
Salvage Hunters on Quest
Michael Start
Automata Convention
1960 births
Living people
Automata (mechanical)
Conservator-restorers
21st-century British people | Michael Start | Engineering | 283 |
71,992,445 | https://en.wikipedia.org/wiki/Mermin%27s%20device | In physics, Mermin's device or Mermin's machine is a thought experiment intended to illustrate the non-classical features of nature without making a direct reference to quantum mechanics. The challenge is to reproduce the results of the thought experiment in terms of classical physics. The inputs of the experiment are particles, starting from a common origin, that reach the detectors of a device; the detectors are independent from each other. The outputs are the lights of the device, which turn on following a specific set of statistics depending on the configuration of the device.
The results of the thought experiment are constructed in such a way as to reproduce the results of a Bell test using quantum entangled particles, which demonstrate that quantum mechanics cannot be explained using a local hidden-variable theory. In this way Mermin's device is a pedagogical tool to introduce the unconventional features of quantum mechanics to a larger public.
History
The original version with two particles and three settings per detector, was first devised in a paper called "Bringing home the atomic world: Quantum mysteries for anybody" authored by the physicist N. David Mermin in 1981. Richard Feynman told Mermin that it was "One of the most beautiful papers in physics". Mermin later described this accolade as "the finest reward of my entire career in physics". Ed Purcell shared Mermin's article with Willard Van Orman Quine, who then asked Mermin to write a version intended for philosophers, which he then produced.
Mermin also published a second version of the thought experiment in 1990 based on the GHZ experiment, with three particles and detectors with only two configurations. In 1993, Lucien Hardy devised a paradox that can be made into a Mermin-device-type thought experiment with two detectors and two settings.
Original two particle device
Assumptions
In Mermin's original thought experiment, he considers a device consisting of three parts: two detectors A and B, and a source C. The source emits two particles whenever a button is pushed, one particle reaches detector A and the other reaches detector B. The three parts A, B and C are isolated from each other (no connecting pipes, no wires, no antennas) in such a way that the detectors are not signaled when the button of the source has been pushed nor when the other detector has received a particle.
Each detector (A and B) has a switch with three configurations labeled (1,2 and 3) and a red and a green light bulb. Either the green or the red light will turn on (never both) when a particle enters the device after a given period of time. The light bulbs only emit light in the direction of the observer working on the device.
Additional barriers or instrument can be put in place to check that there is no interference between the three parts (A,B,C), as the parts should remain as independent as possible. Only allowing for a single particle to go from C to A and a single particle from C to B, and nothing else between A and B (no vibrations, no electromagnetic radiation).
The experiment runs in the following way. The button of the source C is pushed, particles take some time to travel to the detectors and the detectors flash a light with a color determined by the switch configuration. There are nine total possible configuration of the switches (three for A, three for B).
The switches can be changed at any moment during the experiment, even if the particles are still traveling to reach the detectors, but not after the detectors flash a light. The distance between the detectors can be changed so that the detectors flash a light at the same time or at different times. If detector A is set to flash a light first, the configuration of the switch of detector B can be changed after A has already flashed (similarly, if B is set to flash first, the settings of A can be changed before A flashes).
Expected results
The expected results of the experiment are given in Table 1, in percentages:

Detector settings (A, B): same (11, 22, 33) - same colors: 100%, opposite colors: 0%
Detector settings (A, B): different (12, 13, 21, 23, 31, 32) - same colors: 25%, opposite colors: 75%
Every time the detectors are set to the same setting, the bulbs in each detector always flash same colors (either A and B flash red, or A and B flash green) and never opposite colors (A red B green, or A green B red). Every time the detectors are at different setting, the detectors flash the same color a quarter of the time and opposite colors 3/4 of the time. The challenge consists in finding a device that can reproduce these statistics.
Hidden variables and classical implementation
In order to make sense of the data using classical mechanics, one can consider the existence of three variables per particle that are measured by the detectors and follow the percentages above. The particle that goes into detector A carries variables $(a_1, a_2, a_3)$ and the particle that goes into detector B carries variables $(b_1, b_2, b_3)$, each taking the value R or G. These variables determine which color will flash for a specific setting (1, 2 and 3). For example, if the particle that goes in A has variables (R,G,G), then if detector A is set to 1 it will flash red (labelled R); set to 2 or 3, it will flash green (labelled G).
We have 8 possible states:
(RRR), (RRG), (RGR), (RGG), (GRR), (GRG), (GGR), (GGG),
where $a_i = b_i$ (both particles carry the same instruction set) in order to reproduce the results of Table 1 when selecting the same setting for both detectors.
For any given configuration, if the detector settings were chosen randomly, when the settings of the devices are different (12,13,21,23,31,32), the color of their lights would agree 100% of the time for the states (GGG) and (RRR) and for the other states the results would agree 1/3 of the time.
Thus we reach an impossibility: there is no possible distribution of these states that would allow the system to flash the same colors 1/4 of the time when the settings are not the same. Therefore, it is not possible to reproduce the results provided in Table 1.
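A brute-force check of this argument can be scripted in a few lines. The sketch below (Python; not from the source, variable names are illustrative) enumerates all eight instruction sets and prints how often each one makes the two detectors agree when their settings differ:

from itertools import product

settings = (1, 2, 3)
unequal = [(a, b) for a in settings for b in settings if a != b]

# Each instruction set fixes the color flashed for switch settings 1, 2, 3.
for instr in product("RG", repeat=3):
    agree = sum(instr[a - 1] == instr[b - 1] for a, b in unequal) / len(unequal)
    print("".join(instr), agree)
# RRR and GGG agree on 100% of the unequal-setting runs; the other six
# instruction sets agree on 1/3 of them. No mixture of these can give
# the 1/4 agreement demanded by Table 1.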
Quantum mechanical implementation
Contrary to the classical implementation, table 1 can be reproduced using quantum mechanics using quantum entanglement. Mermin reveals a possible construction of his device based on David Bohm's version of the Einstein–Podolsky–Rosen paradox.
One can set two spin-1/2 particles in the maximally entangled singlet Bell state
$$|\psi\rangle = \frac{1}{\sqrt{2}}\big(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\big)$$
to leave the experiment, where $|{\uparrow\downarrow}\rangle$ ($|{\downarrow\uparrow}\rangle$) is the state where the projection of the spin of particle 1 is aligned (anti-aligned) with a given axis and particle 2 is anti-aligned (aligned) with the same axis. The measurement devices can be replaced with Stern–Gerlach devices, which measure the spin in a given direction. The three different settings determine whether the detectors are vertical or at ±120° to the vertical in the plane perpendicular to the line of flight of the particles. Detector A flashes green when the spin of the measured particle is aligned with the detector's magnetic field and flashes red when anti-aligned. Detector B has the opposite color scheme with respect to A: detector B flashes red when the spin of the measured particle is aligned and flashes green when anti-aligned. Another possibility is to use photons, which have two possible polarizations, using polarizers as detectors, as in Aspect's experiment.
Quantum mechanics predicts a probability of measuring opposite spin projections given by
$$P(\theta) = \cos^2\!\left(\frac{\theta}{2}\right),$$
where $\theta$ is the relative angle between the settings of the detectors. For $\theta = 0$ and $\theta = \pm 120°$, the system reproduces the results of Table 1, keeping all the assumptions.
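The quantum prediction can be verified numerically. The following minimal sketch (Python with NumPy; the detector angles 0° and ±120° are those named above, and the projector construction is standard) builds the singlet state and prints the probability of opposite spin projections for every pair of settings:

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projector(theta, sign):
    # Projector onto spin 'sign' (+1 or -1) along an axis tilted by
    # theta from the vertical, in the plane of the detectors.
    return (I2 + sign * (np.sin(theta) * sx + np.cos(theta) * sz)) / 2

# Singlet state (|up,down> - |down,up>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

angles = {1: 0.0, 2: np.radians(120), 3: np.radians(-120)}
for a in (1, 2, 3):
    for b in (1, 2, 3):
        p_opposite = sum(
            np.real(psi.conj() @ np.kron(projector(angles[a], s),
                                         projector(angles[b], -s)) @ psi)
            for s in (+1, -1))
        print(a, b, round(p_opposite, 2))   # 1.0 if a == b, else 0.25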
Three particle device
Mermin's improved three particle device demonstrates the same concepts deterministically: no statistical analysis of multiple experiments is necessary.
It has three detectors, each with two settings 1 and 2, and two lights, one red and one green. Each run of the experiment consists of setting the switches to values 1 or 2 and observing the color of the lights that flash when particles enter the detectors. The detectors again are assumed independent of one another, and cannot interact.
For the improved device, the expected results are the following: if one detector is switched to setting 1 while the others are on setting 2, an odd number of red lights flashes. If all three detectors are set to 1, an odd number of red flashes never occurs.
Mermin then imagines that each of three particles emitted from the common source and entering the detectors has a hidden instruction set, dictating which light to flash for each switch setting. If only one device of three has a switch set to 1, there will always be an odd number of red flashes. However, Mermin shows that all possible instruction sets predict an odd number of red lights when all three devices are set to 1. No instruction set built in to the particles can explain the expected results. This contradiction implies that local hidden variable theory cannot explain such a device.
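The exhaustive search over hidden instruction sets can again be scripted. The sketch below (Python; illustrative, not from the source) checks all 64 assignments of predetermined colors against the two requirements and finds that none survives:

from itertools import product

# Hidden-variable model: each of the three particles carries a predetermined
# color (R or G) for each of the two switch settings.
def odd_red(instr, switches):
    reds = sum(instr[i][s - 1] == "R" for i, s in enumerate(switches))
    return reds % 2 == 1

survivors = 0
for instr in product(product("RG", repeat=2), repeat=3):
    mixed_ok = all(odd_red(instr, sw) for sw in ((1, 2, 2), (2, 1, 2), (2, 2, 1)))
    all_ones_ok = not odd_red(instr, (1, 1, 1))
    survivors += mixed_ok and all_ones_ok
print(survivors)   # 0: no local instruction set reproduces the expected results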
Quantum mechanical implementation
The improved device can be built using quantum mechanics. This implementation is based on the Greenberger–Horne–Zeilinger (GHZ) experiment. The device can be constructed if the three particles are quantum entangled in a GHZ state, written as
$$|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}}\big(|000\rangle + |111\rangle\big),$$
where $|0\rangle$ and $|1\rangle$ represent two states of a two-level quantum system. For electrons, the two states can be the up and down projections of the spin along the z-axis. The detector settings correspond to two other orthogonal measurement directions (for example, projections along the x-axis or along the y-axis).
See also
Quantum pseudo-telepathy
References
Additional references
Physical paradoxes
Quantum measurement
Thought experiments in quantum mechanics | Mermin's device | Physics | 1,883 |
44,381,794 | https://en.wikipedia.org/wiki/Gliese%2015%20Ab | Gliese 15 Ab (GJ 15 Ab), also called Groombridge 34 Ab, rarely called GX Andromedae b is an extrasolar planet approximately 11 light-years away in the constellation of Andromeda. It is found in the night sky orbiting the star Gliese 15 A, which is at right ascension 00h 18m 22.89s and declination +44° 01′ 22.6″.
Discovery
It was discovered in August 2014, deduced from analysis of the radial velocities of the parent star by the Eta-Earth Survey using HIRES at Keck Observatory. It has around 5.35 ± 0.75 Earth masses and is thought to be a super-Earth with a diameter greater than that of the Earth. However, a 2017 study using the CARMENES spectrograph failed to detect the planet. The detection was recovered in 2018, with a revised minimum mass of 3.03 Earth masses.
Orbit
Gliese 15 Ab has a close inner orbit around Gliese 15 A with a semi-major axis of only 0.0717 ± 0.0034 AU, giving an orbital period just a little longer than 11.4 days. The orbit appears to be relatively circular, with an orbital eccentricity of about 0.12. It orbits too close to Gliese 15 A to be located in the habitable zone and is unlikely to harbour life.
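These orbital elements are mutually consistent under Kepler's third law. In the sketch below (Python), the stellar mass of 0.4 solar masses is an assumed, illustrative value for the M-dwarf primary, not a figure from this article:

import math

a_au = 0.0717      # semi-major axis from the text, in AU
m_star = 0.4       # ASSUMED stellar mass in solar masses (illustrative only)

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun]
period_days = math.sqrt(a_au**3 / m_star) * 365.25
print(f"{period_days:.1f} days")   # ~11.1 days, close to the reported 11.4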
Notes
References
External links
Open Exoplanet Catalogue entry
Andromeda (constellation)
Exoplanets discovered in 2014
Terrestrial planets
Exoplanets detected by radial velocity
Hot Neptunes
43,193,621 | https://en.wikipedia.org/wiki/25%20Serpentis | 25 Serpentis is a star system in the constellation of Serpens Caput. With an apparent magnitude of 5.37, it is just barely visible to the naked eye. The system is estimated to be some 450 light-years (138 parsecs) away, based on its parallax.
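The two distance figures agree, and the implied parallax follows directly from the distance. A minimal sketch (Python; the parsec-to-light-year factor is a standard constant):

LY_PER_PC = 3.2616

distance_pc = 138.0
print(round(distance_pc * LY_PER_PC))    # ~450 light-years, as stated
print(round(1000.0 / distance_pc, 2))    # implied parallax, ~7.25 milliarcseconds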
25 Serpentis is a spectroscopic binary, meaning that the individual components are too close to be resolved, but periodic Doppler shifts in their spectra indicate orbital motion. The system consists of a hot B-type giant and an A-type main-sequence star. The two stars orbit each other every 38.9 days, and have a very eccentric orbit, with an orbital eccentricity of 0.731. The primary is a slowly pulsating B-type star, which causes the system to vary by 0.03 magnitudes; for that reason it has been given the variable star designation PT Serpentis.
References
Serpens
Slowly pulsating B-type stars
Serpentis, 25
B-type giants
A-type main-sequence stars
Serpentis, A2
140873
5863
Durchmusterung objects
077227
Serpentis, PT
Spectroscopic binaries | 25 Serpentis | Astronomy | 245 |
5,862,776 | https://en.wikipedia.org/wiki/Stamp%20sand | Stamp sand is a coarse sand left over from the processing of ore in a stamp mill. In the United States, the most well-known deposits of stamp sand are in the Copper Country of the Upper Peninsula of Michigan, where it is black or dark gray, and may contain hazardous concentrations of trace metals.
In the 19th and early 20th centuries, many metal mines used stamp mills to process ore-bearing rock. The rock was brought to a stamp mill to be crushed. After crushing, the material was mechanically separated to extract metals, or chemically treated with acids if the metal could be leached out. The size of the crushed material depended on the nature of the ore found in each mining district.
Copper Country of Michigan
In the Copper Country region, the rock was reduced to fragments because further crushing would not result in enough additional copper recovery to be economical. The sand was then usually disposed near the mill. As mills often relied on steam power to operate and water for some of the processing methods, they were built on the shore of lakes and rivers. The stamp sand was thus dumped into the water, sometimes growing deep enough to create entirely new land. Stamp sand discarded into the water was sometimes reclaimed with dredges to be re-stamped when more efficient stamping technology was developed (for example, Quincy Dredge Number Two).
Stamp sand may be hazardous to human health, since it contains trace amounts of harmful heavy metals (such as arsenic). For this reason, land created from stamp sand may be poisonous to plant life, and can pollute nearby water as well. For example, aquatic life in the Keweenaw Waterway, near the Keweenaw copper mines of Michigan, has declined significantly near stamp sand deposits, while the waterway is reasonably healthy in other areas. Several stamp sand dumps have been designated as Superfund sites to remove or contain the sands. Some stamp sand land has been covered with clean fill dirt and used for housing developments.
The coarseness of the sand has led to its use in place of (or in combination with) road salt in some areas, such as the Copper Country of Michigan. Typically, only stamp sand which has not been chemically processed is used, due to environmental concerns. In addition, some companies have developed methods to reprocess stamp sands to reclaim their small mineral content.
References
External links
Stamp sand research in the Copper Country
Upper Peninsula of Michigan
Metallurgical processes
Stamp mills | Stamp sand | Chemistry,Materials_science,Engineering | 493 |
41,162,459 | https://en.wikipedia.org/wiki/Transfer%20%28travel%29 | In travel, a transfer is local travel arranged as part of an itinerary, typically airport to hotel and hotel to hotel. A transfer has some features that distinguish it from other ground transportation alternatives: the passenger is met directly at the transport hub, and there is the opportunity to choose a car class and additional options such as a baby seat.
Classification
There are a few transfer classifications, based on:
route:
from/to the transport hub
intercity
to or from sightseeing locations;
purpose:
touristic
business;
number of tourists:
individual (one person or family)
group (more than four people);
comfort level:
economy
comfort
premium
The most popular tourist transfers are individual economy-class transfers (about 52.3% of all orders).
References
Travel | Transfer (travel) | Physics | 144 |
28,162,265 | https://en.wikipedia.org/wiki/Palladium%20tetrafluoride | Palladium(IV) fluoride, also known as palladium tetrafluoride, is the chemical compound of palladium and fluorine with the chemical formula PdF4. The palladium atoms in PdF4 are in the +4 oxidation state.
Synthesis
Palladium tetrafluoride has been prepared by reacting palladium(II,IV) fluoride with fluorine gas at pressures around 7 atm and at 300 °C for several days.
Reactivity
PdF4 is a strong oxidising agent and undergoes rapid hydrolysis in moist air.
See also
Palladium fluorides
References
Palladium compounds
Fluorides
Platinum group halides | Palladium tetrafluoride | Chemistry | 142 |
15,022 | https://en.wikipedia.org/wiki/Infrared | Infrared (IR; sometimes called infrared light) is electromagnetic radiation (EMR) with wavelengths longer than that of visible light but shorter than microwaves. The infrared spectral band begins with waves that are just longer than those of red light (the longest waves in the visible spectrum), so IR is invisible to the human eye. IR is generally understood to include wavelengths from around 780 nm to 1 mm. IR is commonly divided between longer-wavelength thermal IR, emitted from terrestrial sources, and shorter-wavelength IR or near-IR, part of the solar spectrum. Longer IR wavelengths (30–100 μm) are sometimes included as part of the terahertz radiation band. Almost all black-body radiation from objects near room temperature is in the IR band. As a form of EMR, IR carries energy and momentum, exerts radiation pressure, and has properties corresponding to both those of a wave and of a particle, the photon.
It was long known that fires emit invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat. In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate.
Infrared radiation is emitted or absorbed by molecules when changing rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range.
Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, to assist firefighting, and to detect the overheating of electrical components. Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm. Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting.
Definition and relationship to the electromagnetic spectrum
There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 780 nm to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum. Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz).
Nature
Sunlight, at an effective temperature of 5,780 K (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kW per square meter at sea level. Of this energy, 527 W is infrared radiation, 445 W is visible light, and 32 W is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 μm.
On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight. Black-body, or thermal, radiation is continuous: it radiates at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy.
Regions
In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law. The infrared band is often subdivided into smaller sections, although how the IR spectrum is thereby divided varies between different areas in which IR is employed.
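Wien's displacement law, $\lambda_{\text{peak}} = b/T$, can be evaluated directly. A minimal sketch (Python, using the CODATA value of Wien's constant) shows why solar radiation peaks in the visible while objects near terrestrial temperatures emit in the LWIR band:

WIEN_B = 2.897771955e-3   # m K, Wien's displacement constant

for label, temp_k in (("Sun's photosphere", 5778.0),
                      ("human skin", 310.0),
                      ("room-temperature object", 293.0)):
    peak_um = WIEN_B / temp_k * 1e6
    print(f"{label}: peak emission near {peak_um:.2f} um")
# ~0.50 um (visible) for the Sun; ~9.3 and ~9.9 um (LWIR) for the others.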
Visible limit
Infrared radiation is generally considered to begin with wavelengths longer than visible by the human eye. There is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly, for wavelengths exceeding about 700 nm. Therefore wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard since it may actually be quite bright. Even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions.
Commonly used subdivision scheme
A commonly used subdivision scheme is:

Near-infrared (NIR): 0.75–1.4 μm
Short-wavelength infrared (SWIR): 1.4–3 μm
Mid-wavelength infrared (MWIR): 3–8 μm
Long-wavelength infrared (LWIR): 8–15 μm
Far infrared (FIR): 15–1,000 μm
NIR and SWIR together is sometimes called "reflected infrared", whereas MWIR and LWIR is sometimes referred to as "thermal infrared".
CIE division scheme
The International Commission on Illumination (CIE) recommended the division of infrared radiation into the following three bands:

IR-A: 780 nm – 1.4 μm
IR-B: 1.4–3 μm
IR-C: 3 μm – 1 mm
ISO 20473 scheme
ISO 20473 specifies the following scheme:

Near-infrared (NIR): 0.78–3 μm
Mid-infrared (MIR): 3–50 μm
Far-infrared (FIR): 50–1,000 μm
Astronomy division scheme
Astronomers typically divide the infrared spectrum as follows:

Near: 0.7–5 μm
Mid: 5 to about 25–40 μm
Far: about 25–40 to 200–350 μm
These divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space.
The most common photometric system used in astronomy allocates capital letters to different spectral regions according to filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers.
Sensor response division scheme
A third scheme divides up the band based on the response of various detectors:
Near-infrared: from 0.7 to 1.0 μm (from the approximate end of the response of the human eye to that of silicon).
Short-wave infrared: 1.0 to 3 μm (from the cut-off of silicon to that of the MWIR atmospheric window). InGaAs covers wavelengths up to about 1.8 μm; the less sensitive lead salts cover the rest of this region. Cryogenically cooled MCT detectors can cover the region of 1.0–2.5 μm.
Mid-wave infrared: 3 to 5 μm (defined by the atmospheric window and covered by indium antimonide, InSb and mercury cadmium telluride, HgCdTe, and partially by lead selenide, PbSe).
Long-wave infrared: 8 to 12, or 7 to 14 μm (this is the atmospheric window covered by HgCdTe and microbolometers).
Very-long wave infrared (VLWIR) (12 to about 30 μm, covered by doped silicon).
Near-infrared is the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands, water absorption) and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available.
The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. Particularly intense near-IR light (e.g., from lasers, LEDs or bright daylight with the visible light filtered out) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect that consists of IR-glowing foliage.
Telecommunication bands
In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on availability of light sources, transmitting/absorbing materials (fibers), and detectors:
The C-band is the dominant band for long-distance telecommunications networks. The S and L bands are based on less well established technology, and are not as widely deployed.
Heat
Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting lasers can char paper and incandescently hot objects emit visible radiation. Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law).
Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth.
The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value, objects with higher emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it obtains properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. If the object were in a hotter environment, then a lower emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers.
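The effect of a wrong emissivity setting can be estimated with a simple graybody model. The sketch below is an illustration only: it assumes the total-radiation Stefan-Boltzmann dependence and ignores the reflected-background term a real camera would also include:

def apparent_temperature(t_true_k, eps_true, eps_assumed):
    # Graybody, total-radiation model; reflected background ignored.
    # The radiometer solves eps_assumed * T_app^4 = eps_true * T_true^4.
    return t_true_k * (eps_true / eps_assumed) ** 0.25

# A 300 K surface of emissivity 0.6 read by a camera configured for 0.95:
print(round(apparent_temperature(300.0, 0.6, 0.95), 1))   # ~267.4 K, reads cooler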
Applications
Night vision
Infrared is used in night vision equipment when there is insufficient visible light to see. Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source.
The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment.
Thermography
Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs.
Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nm or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name).
Hyperspectral imaging
A hyperspectral image is a "picture" containing a continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy, particularly with the NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements.
Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance and UAV applications.
Other imaging
In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can view intense near-infrared, appearing as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy.
Tracking
Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in the background.
Heating
Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing).
Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating.
Cooling
A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems. The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere's infrared window. This is how passive daytime radiative cooling (PDRC) surfaces are able to achieve sub-ambient cooling temperatures under direct solar intensity, enhancing terrestrial heat flow to outer space with zero energy consumption or pollution. PDRC surfaces maximize shortwave solar reflectance to lessen heat gain while maintaining strong longwave infrared (LWIR) thermal radiation heat transfer. When imagined on a worldwide scale, this cooling method has been proposed as a way to slow and even reverse global warming, with some estimates proposing a global surface area coverage of 1-2% to balance global heat fluxes.
Communications
IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR is used (below 800 nm) for practical reasons. This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances.
Infrared remote control protocols such as RC-5 and SIRC are used to communicate over infrared.
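As an illustration of how such a protocol structures its bit stream, the sketch below builds an RC-5 frame (14 bits: two start bits, a toggle bit, five address bits and six command bits) and expands it into Manchester-coded carrier-on/off intervals. The 889 µs half-bit and 36 kHz carrier are the commonly cited RC-5 figures; the function names are ours, not from any particular library.

```python
# Illustrative RC-5 framing: 14 bits, Manchester (bi-phase) coded,
# 889 us half-bits, intended to gate a 36 kHz carrier.

def rc5_frame(address: int, command: int, toggle: int) -> list:
    """Build the 14-bit frame: 2 start bits, toggle, 5 address, 6 command."""
    bits = [1, 1, toggle & 1]
    bits += [(address >> i) & 1 for i in range(4, -1, -1)]  # address, MSB first
    bits += [(command >> i) & 1 for i in range(5, -1, -1)]  # command, MSB first
    return bits

def manchester(bits, half_bit_us=889):
    """Expand bits into (carrier_on, duration_us) pairs.
    A logical 1 is carrier-off then carrier-on; a 0 is the opposite."""
    out = []
    for b in bits:
        first, second = (False, True) if b else (True, False)
        out += [(first, half_bit_us), (second, half_bit_us)]
    return out

# Example: command 16 (volume up on many sets) to device address 0
print(manchester(rc5_frame(address=0, command=16, toggle=0))[:4])
```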
Free-space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber optic cable. A drawback is the potential for eye damage: since the eye cannot detect IR, blinking or closing the eyes, which would help prevent or reduce damage from visible light, may not happen.
Infrared lasers are used to provide the light for optical fiber communications systems. Wavelengths around 1,330 nm (least dispersion) or 1,550 nm (best transmission) are the best choices for standard silica fibers.
IR data transmission of audio versions of printed signs is being researched as an aid for visually impaired people through the Remote infrared audible signage project.
Transmitting IR data from one device to another is sometimes referred to as beaming.
IR is sometimes used for assistive audio as an alternative to an audio induction loop.
Spectroscopy
Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum.
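Since the wavenumber is the reciprocal of the wavelength expressed per centimetre, conversion between the two is a one-line calculation; the helper names below are ours, and the test values are the ones quoted above.

```python
# Wavenumber (cm^-1) = 1 / wavelength (cm) = 1e7 / wavelength (nm).

def nm_to_wavenumber(wavelength_nm: float) -> float:
    return 1e7 / wavelength_nm

def wavenumber_to_nm(wavenumber_per_cm: float) -> float:
    return 1e7 / wavenumber_per_cm

# The mid-infrared range quoted above, 4,000-400 cm^-1:
print(wavenumber_to_nm(4000))  # 2500.0 nm  (2.5 um)
print(wavenumber_to_nm(400))   # 25000.0 nm (25 um)
# The broad O-H absorption of a wet sample near 3200 cm^-1:
print(round(nm_to_wavenumber(3125)))  # 3200
```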
Thin film metrology
In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi–Bloomer dispersion equations. The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench structures.
Meteorology
Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels).
Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black, lower warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark-grey or black. One disadvantage of infrared imagery is that low clouds such as stratus or fog can have a temperature similar to the surrounding land or sea surface and do not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low clouds can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied.
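A minimal sketch of the channel-difference idea, assuming two co-registered arrays of brightness values; the threshold is an arbitrary placeholder rather than an operational calibration.

```python
# Flag pixels where the IR4 (10.3-11.5 um) and near-IR (1.58-1.64 um)
# channels diverge, a crude stand-in for an operational fog product.
import numpy as np

def fog_mask(ir4: np.ndarray, near_ir: np.ndarray, threshold: float = 2.0) -> np.ndarray:
    """Boolean mask of pixels whose channel difference exceeds the threshold."""
    return np.abs(ir4 - near_ir) > threshold

ir4 = np.random.normal(280.0, 5.0, size=(4, 4))          # fake brightness values
near_ir = ir4 + np.random.normal(0.0, 3.0, size=(4, 4))  # correlated second channel
print(fog_mask(ir4, near_ir))
```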
These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information.
The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere.
Climatology
In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the Earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation.
A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 μm and 50 μm.
Astronomy
Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses and solid state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium.
The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy.
The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.)
Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared.
Cleaning
Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting.
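The sketch below illustrates the principle under simplifying assumptions: dust and scratches appear dark in the infrared channel, so thresholded IR pixels mark defects, which are then filled in crudely. Real scanners use far more careful detection and inpainting, and the threshold here is an arbitrary placeholder.

```python
# Toy infrared cleaning: build a defect mask from the IR channel and
# "inpaint" the masked RGB pixels with the mean of the clean pixels.
import numpy as np

def ir_clean(rgb: np.ndarray, ir: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    defects = ir < threshold              # dust/scratch pixels are dark in IR
    cleaned = rgb.copy()
    mean_color = rgb[~defects].mean(axis=0)
    cleaned[defects] = mean_color         # crude fill; real scanners inpaint locally
    return cleaned

rgb = np.random.rand(8, 8, 3)   # fake scan, three visible channels
ir = np.random.rand(8, 8)       # fake infrared channel
print(ir_clean(rgb, ir).shape)  # (8, 8, 3)
```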
Art conservation and analysis
Infrared reflectography can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices. Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting.
Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist. Notable examples are Picasso's Woman Ironing and Blue Room, where in both cases a portrait of a man has been made visible under the painting as it is known today.
Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves. Carbon black used in ink can show up extremely well.
Biological systems
The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system.
Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the Common Vampire Bat (Desmodus rotundus), a variety of jewel beetles (Melanophila acuminata), darkly pigmented butterflies (Pachliopta aristolochiae and Troides rhadamantus plateni), and possibly blood-sucking bugs (Triatoma infestans). By detecting the heat that their prey emits, crotaline and boid snakes identify and capture their prey using their IR-sensitive pit organs. Comparably, IR-sensitive pits on the Common Vampire Bat (Desmodus rotundus) aid in the identification of blood-rich regions on its warm-blooded victims. The jewel beetle Melanophila acuminata locates forest fires via infrared pit organs and deposits its eggs on recently burnt trees. Thermoreceptors on the wings and antennae of butterflies with dark pigmentation, such as Pachliopta aristolochiae and Troides rhadamantus plateni, shield them from heat damage as they bask in the sun. Additionally, it is hypothesised that thermoreceptors let bloodsucking bugs (Triatoma infestans) locate their warm-blooded victims by sensing their body heat.
Some fungi like Venturia inaequalis require near-infrared light for spore ejection.
Although near-infrared vision (780–1,000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light was reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters.
Photobiomodulation
Photobiomodulation, the therapeutic use of near-infrared light, is used for treatment of chemotherapy-induced oral ulceration as well as wound healing. There is some work relating to anti-herpes-virus treatment. Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms.
Health hazards
Strong infrared radiation in certain industry high-heat settings may be hazardous to the eyes, resulting in damage or blindness to the user. Since the radiation is invisible, special IR-proof goggles must be worn in such places.
Scientific history
The discovery of infrared radiation is ascribed to William Herschel, the astronomer, in the early 19th century. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called these rays "Calorific Rays". The term "infrared" did not appear until the late 19th century. An earlier experiment in 1790 by Marc-Auguste Pictet demonstrated the reflection and focusing of radiant heat via mirrors in the absence of visible light.
Other important dates include:
1830: Leopoldo Nobili made the first thermopile IR detector.
1840: John Herschel produces the first thermal image, called a thermogram.
1860: Gustav Kirchhoff formulated the blackbody theorem.
1873: Willoughby Smith discovered the photoconductivity of selenium.
1878: Samuel Pierpont Langley invents the first bolometer, a device which is able to measure small temperature fluctuations, and thus the power of far infrared sources.
1879: The Stefan–Boltzmann law was formulated empirically, stating that the power radiated by a blackbody is proportional to T⁴.
1880s and 1890s: Lord Rayleigh and Wilhelm Wien solved part of the blackbody equation, but both solutions diverged in parts of the electromagnetic spectrum. This problem was called the "ultraviolet catastrophe and infrared catastrophe".
1892: Willem Henri Julius published infrared spectra of 20 organic compounds measured with a bolometer in units of angular displacement.
1901: Max Planck published the blackbody equation and theorem. He solved the problem by quantizing the allowable energy transitions.
1905: Albert Einstein developed the theory of the photoelectric effect.
1905–1908: William Coblentz published infrared spectra in units of wavelength (micrometers) for several chemical compounds in Investigations of Infra-Red Spectra.
1917: Theodore Case developed the thallous sulfide detector, which helped produce the first infrared search and track device able to detect aircraft at a range of one mile (1.6 km).
1935: Lead salt detectors were developed, later used for early missile guidance in World War II.
1938: Yeou Ta predicted that the pyroelectric effect could be used to detect infrared radiation.
1945: The Zielgerät 1229 "Vampir" infrared weapon system was introduced as the first portable infrared device for military applications.
1952: Heinrich Welker grew synthetic InSb crystals.
1950s and 1960s: Nomenclature and radiometric units defined by Fred Nicodemus, G. J. Zissis and R. Clark; Robert Clark Jones defined D*.
1958: W. D. Lawson (Royal Radar Establishment in Malvern) discovered IR detection properties of Mercury cadmium telluride (HgCdTe).
1958: Falcon and Sidewinder missiles were developed using infrared technology.
1960s: Paul Kruse and his colleagues at Honeywell Research Center demonstrate the use of HgCdTe as an effective compound for infrared detection.
1962: J. Cooper demonstrated pyroelectric detection.
1964: W. G. Evans discovered infrared thermoreceptors in a pyrophile beetle.
1965: First IR handbook; first commercial imagers (Barnes, Agema (now part of FLIR Systems Inc.)); Richard Hudson's landmark text; F4 TRAM FLIR by Hughes; phenomenology pioneered by Fred Simmons and A. T. Stair; U.S. Army's night vision lab formed (now Night Vision and Electronic Sensors Directorate (NVESD)), and Ratches develops detection, recognition and identification modeling there.
1970: Willard Boyle and George E. Smith proposed CCD at Bell Labs for picture phone.
1973: Common module program started by NVESD.
1978: Infrared imaging astronomy came of age, observatories planned, IRTF on Mauna Kea opened; 32 × 32 and 64 × 64 arrays produced using InSb, HgCdTe and other materials.
2013: On 14 February, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.
See also
Notes
References
External links
Infrared: A Historical Perspective (Omega Engineering)
Infrared Data Association, a standards organization for infrared data interconnection
SIRC Protocol
How to build a USB infrared receiver to control PC's remotely
Infrared Waves: detailed explanation of infrared light. (NASA)
Herschel's original paper from 1800 announcing the discovery of infrared light
The thermographic's library, a collection of thermograms
Infrared reflectography in analysis of paintings at ColourLex
Molly Faries, Techniques and Applications – Analytical Capabilities of Infrared Reflectography: An Art Historian's Perspective, in Scientific Examination of Art: Modern Techniques in Conservation and Analysis, Sackler NAS Colloquium, 2005
Electromagnetic spectrum | Infrared | Physics | 7,051 |
78,895,170 | https://en.wikipedia.org/wiki/DNA%20photoionization | DNA photoionization is the phenomenon according to which ultraviolet radiation absorbed directly by a DNA system (mononucleotide, single or double strand, G-quadruplex…) induces the ejection of electrons, leaving electron holes on the nucleic acid.
The loss of an electron gives rise to a radical cation on the DNA. Radical cations are precursors to oxidative damage, ultimately leading to carcinogenic mutations and cell death. This aspect, detrimental to health, is exploited in germicidal equipment using far-UVC lamps. The electric charges photogenerated in DNA could potentially find applications in optoelectronic devices.
Two properties are crucial regarding photoionization. On the one hand, the ionization energy (also called ionization potential, IP) is the energy necessary to remove one electron from a molecule; the lowest IP, corresponding to the ejection of a first electron, is the most biologically relevant factor. On the other hand, the photoionization quantum yield Φ is the number of ejected electrons divided by the number of absorbed photons; Φ depends on the irradiation wavelength and decreases as the wavelength increases.
The mechanism underlying DNA ionization depends on the number of photons that provoke the ejection of one electron (one-photon, or multiphoton induced by intense laser pulses). In the case of a one-photon process, it differs according to the photon energy (high-energy or low-energy). While one- and two-photon ionization in the condensed phase (aqueous solutions, cells…) is mainly studied with respect to UV-induced oxidative damage, multiphoton ionization in the gas phase, often coupled to mass spectrometry, is used in various techniques in order to obtain broader spectroscopic, analytical, structural or therapeutic information.
Ionization potentials
Since the end of the 20th century, numerous theoretical studies, performed using various types of quantum chemistry methods, focus on the computation of the lowest IP of nucleobases. Particular effort is being dedicated to evaluate environmental effects, such as the presence of water molecules, base-pairing, base stacking or base-sequence. All these studies agree that the IP decreases in the order: thymine, cytosine, adenine, guanine.
Experimentally, IPs are determined by photoelectron spectroscopy. A series of systematic measurements of all the elementary DNA components as well as of genomic DNA in liquid jets, associated with computations, provided important information regarding the ionization in aqueous media. The IP values measured for nucleosides/nucleotides (8.1, 8.1, 7.6 and 7.3 eV for thymidine monophosphate, cytosine, adenosine and guanosine, respectively) match those computed for vertical ionization. The latter corresponds to electron ejection without prior geometrical rearrangement of the molecular framework. Most importantly, it was evidenced that base-pairing and base-stacking do not have any significant effect.
One photon ionization
Photoionization quantum yields
Photoionization quantum yields are determined for DNA in aqueous solution by means of transient absorption spectroscopy, using nanosecond laser pulses as the excitation source. The ejected electrons are solvated by the water molecules (hydrated) on the sub-picosecond time scale. As the absorption spectrum of hydrated electrons, peaking at 720 nm, is well known, they can be characterized in a quantitative way.
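A sketch of the quantification step, assuming the Beer–Lambert law; the molar absorption coefficient used for the hydrated electron at 720 nm is a commonly cited literature value and is an assumption here.

```python
# Beer-Lambert: A = epsilon * c * l, so c = A / (epsilon * l).
EPSILON_720 = 1.9e4  # M^-1 cm^-1 for e-(aq) at 720 nm (assumed literature value)

def electron_concentration(absorbance: float, path_cm: float = 1.0) -> float:
    """Concentration (mol/L) of hydrated electrons from absorbance at 720 nm."""
    return absorbance / (EPSILON_720 * path_cm)

def quantum_yield(c_electrons: float, c_photons_absorbed: float) -> float:
    """Ejected electrons per absorbed photon, both as molar concentrations."""
    return c_electrons / c_photons_absorbed

print(electron_concentration(0.01))  # A = 0.01 over 1 cm -> ~5.3e-7 M
```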
High-energy photoionization
The first experiments were reported in the 1990s using excitation at 193 nm. The quantum yields determined for the nucleobases at this wavelength amount to a few percent. The Φ found for genomic DNA is a linear combination of the quantum yield values of the individual nucleobases, in agreement with the findings of later photoelectron spectroscopy studies.
Low-energy photoionization
The first studies on low-energy photoionization, occurring at wavelengths for which the photon energy is significantly smaller compared to the lowest ionization potential of DNA, were reported back in 2005 (G-Quadruplexes at 308 nm) and 2006 (single and double strands at 266 nm). But this unexpected phenomenon started to be studied in a systematic way only ten years later. To that effect, specific protocols regarding the purity of the nucleic acids and the ingredients of the aqueous solution as well as the intensity of the exciting laser pulses were established.
In contrast to high-energy photoionization, low-energy photoionization strongly depends on the secondary DNA structure. It is not observed for mononucleosides, mononucleotides or purely stacked single strands (Φ < 0.5×10⁻⁴). The quantum yields determined for duplexes fall in the range of (1–2)×10⁻³, while the highest Φ values, up to 1.4×10⁻², have been detected for G-quadruplexes. The photoionization quantum yield determined for genomic DNA is similar to that reported for the formation of bipyrimidine photoproducts.
The detailed examination of the structural factors affecting low-energy photoionization, combined with quantum chemical calculations, indicates that it occurs via a complex mechanism. The latter involves excited charge transfer states, in which charge is transferred from one nucleobase to a neighboring one; such states are known to be populated during the electronic relaxation following photon absorption. Subsequently, a small population of these states undergoes charge separation. Eventually, the electron is ejected from the nucleobase bearing the negative charge, because its ionization potential is lower compared to those of neutral nucleobases.
Two-photon ionization
Two-photon photoionization is provoked by intense laser pulses of short duration. In this case, a first photon absorbed by DNA gives rise to an electronic excited state. During its lifetime, the latter may absorb a second photon. The electron is then ejected from this excited state and not from the ground state, as happens in one-photon ionization.
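A back-of-the-envelope comparison, using E = hc/λ and the ionization potentials quoted earlier (e.g., 7.3 eV for guanosine), shows why the second photon matters at the wavelengths discussed above.

```python
# Photon energy E = h*c / lambda; h*c ~ 1239.84 eV*nm.
H_C_EV_NM = 1239.84

for nm in (193, 266, 308):
    e1 = H_C_EV_NM / nm
    print(f"{nm} nm: one photon = {e1:.2f} eV, two photons = {2 * e1:.2f} eV")
# At 193 nm a single photon (~6.4 eV) approaches the lowest measured IPs,
# while at 266 or 308 nm only the two-photon total clearly exceeds 7.3 eV.
```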
This ionization mode has been used since the 1980s in order to characterize chemically the final DNA lesions (single and double strand breaks, 8-oxo-7,8-dihydroguanine, ...) stemming from this process. Typically, lasers emitting at 248 or 266 nm have been employed in combination with analytical or biochemical methods. Such measurements are performed both on DNA solutions and on cells.
The need to correlate the observed lesions with the ejected electrons led to the first time-resolved absorption studies on the process triggered by absorption of UV radiation directly by DNA. Thus, signatures of the nucleobase radicals were discovered either in the UV-visible spectral domain or in the infrared.
References
Further reading
Reviews and Accounts
Book Chapters
DNA
Photochemistry | DNA photoionization | Chemistry | 1,440 |
41,729 | https://en.wikipedia.org/wiki/Spectral%20width | In telecommunications, spectral width is the width of a spectral band, i.e., the range of wavelengths or frequencies over which the magnitude of all spectral components is significant, i.e., equal to or greater than a specified fraction of the largest magnitude.
In fiber-optic communication applications, the usual method of specifying spectral width is the full width at half maximum (FWHM). This is the same convention used in bandwidth, defined as the frequency range where power drops by less than half (at most −3 dB).
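As an illustration, the FWHM of a sampled spectrum can be measured numerically by locating the two half-maximum crossings; the code below is a generic sketch (function names are ours), applied to a Gaussian line for which the FWHM is 2√(2 ln 2) ≈ 2.355 times the standard deviation.

```python
# Numerical FWHM: find where the curve crosses half of its peak value,
# interpolating linearly between samples.
import numpy as np

def fwhm(x: np.ndarray, y: np.ndarray) -> float:
    half = y.max() / 2.0
    idx = np.flatnonzero(y >= half)
    lo, hi = idx[0], idx[-1]

    def cross(a, b):  # linear interpolation of the crossing between samples a, b
        return x[a] + (half - y[a]) * (x[b] - x[a]) / (y[b] - y[a])

    left = x[lo] if lo == 0 else cross(lo - 1, lo)
    right = x[hi] if hi == len(x) - 1 else cross(hi, hi + 1)
    return right - left

x = np.linspace(1540, 1560, 2001)            # wavelength grid (nm)
y = np.exp(-0.5 * ((x - 1550) / 2.0) ** 2)   # Gaussian line, sigma = 2 nm
print(fwhm(x, y))                            # ~4.71 nm = 2.355 * sigma
```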
The FWHM method may be difficult to apply when the spectrum has a complex shape. Another method of specifying spectral width is a special case of root-mean-square deviation where the independent variable is wavelength, λ, and f(λ) is a suitable radiometric quantity.
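In one common notation (ours, consistent with the description above), the RMS width σ_λ about the mean wavelength is:

```latex
\sigma_\lambda \;=\; \sqrt{\frac{\int \left(\lambda - \bar\lambda\right)^{2} f(\lambda)\, d\lambda}{\int f(\lambda)\, d\lambda}},
\qquad
\bar\lambda \;=\; \frac{\int \lambda\, f(\lambda)\, d\lambda}{\int f(\lambda)\, d\lambda}.
```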
The relative spectral width, Δλ/λ, is frequently used, where Δλ is the spectral width obtained by one of the above methods and λ is the center wavelength.
See also
Spectral linewidth in optics
Spectral bandwidth
References
Telecommunication theory
Optical communications
Optical quantities
Spectrum (physical sciences) | Spectral width | Physics,Mathematics,Engineering | 222 |
76,492,287 | https://en.wikipedia.org/wiki/Gerald%20Koch | Gerald Koch (born in 1968) is a German wood scientist and professor, senior researcher and research scientific director at the Thünen-Institute of Wood Research at Hamburg, who is an elected fellow of the International Academy of Wood Science.
Research career
Koch obtained his PhD degree in wood science from the University of Hamburg in 1998.
Since 2004, he has been the curator of the scientific wood collection and the head of wood anatomy at the Institute of Wood Research in Hamburg.
His research interests include topics of wood sciences related to macroscopic and microscopic wood identification of internationally traded timbers, forensic timber identification, investigation of wood structure, properties and utilisation of lesser known species, and also, topochemical analyses of wooden tissues on a subcellular level.
In the area of wood anatomy, he initiated the mobile apps CITESwoodID and macroHOLZdata, which are commonly used by educational personnel and by professionals in the wood industry and trade for the identification of tropical and non-tropical timbers.
Recognition
In 2008, he was elected as a fellow at the International Academy of Wood Science for his scientific work.
He is a member of the International Association of Wood Anatomists. In addition, Koch is an appointed advisor of the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV), specifically on matters of subtropical and tropical timbers including certification and CITES regulations.
Koch presently serves on the editorial boards of the wood-related journals Holzforschung and European Journal of Wood and Wood Products.
As of April 2024, Koch has published and presented more than 200 research works in refereed journals, conferences and symposia, and has more than 4,000 citations on Google Scholar.
References
External links
Published work
Google Scholar
Wood biology, by Koch et al.
German scientists
Fellows of the International Academy of Wood Science
Wood scientists
1968 births
Living people
University of Hamburg alumni | Gerald Koch | Materials_science | 394 |
19,004,505 | https://en.wikipedia.org/wiki/Nano-RK | Nano-RK is a wireless sensor networking real-time operating system (RTOS) from Carnegie Mellon University, designed to run on microcontrollers for use in sensor networks. Nano-RK supports a fixed-priority fully preemptive scheduler with fine-grained timing primitives to support real-time task sets. "Nano" implies that the RTOS is small, using 2 KB of random-access memory (RAM) and using 18 KB of flash memory, while RK is short for resource kernel. A resource kernel provides reservations on how often system resources can be used. For example, a task might only be allowed to execute 10 ms every 150 ms (CPU reservation), or a node might only be allowed to transmit 10 network packets per minute (network reservation). These reservations form a virtual energy budget to ensure a node meets its designed battery lifetime and to prevent a failed node from generating excessive network traffic. Nano-RK is open-source software, is written in C and runs on the Atmel-based FireFly sensor networking platform, the MicaZ motes, and the MSP430 processor.
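A schematic model of the reservation idea, in illustrative Python rather than the actual Nano-RK API: a task holding the CPU reservation mentioned above (10 ms per 150 ms period) may run only while its budget lasts, and the budget is replenished at each period boundary.

```python
# Toy CPU-reservation enforcement; not Nano-RK code.
class CpuReservation:
    def __init__(self, budget_ms: float, period_ms: float):
        self.budget_ms, self.period_ms = budget_ms, period_ms
        self.used_ms, self.period_start = 0.0, 0.0

    def _tick(self, now_ms: float) -> None:
        elapsed = now_ms - self.period_start
        if elapsed >= self.period_ms:  # replenish at each period boundary
            self.period_start += self.period_ms * (elapsed // self.period_ms)
            self.used_ms = 0.0

    def try_run(self, now_ms: float, slice_ms: float) -> bool:
        self._tick(now_ms)
        if self.used_ms + slice_ms > self.budget_ms:
            return False               # reservation exhausted: task is throttled
        self.used_ms += slice_ms
        return True

res = CpuReservation(budget_ms=10, period_ms=150)
print(res.try_run(0, 10), res.try_run(20, 1), res.try_run(150, 5))
# True False True -- the task must wait for its budget to refill
```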
Tradeoffs occur when using an RTOS in sensor networks.
Advantages
Nano-RK takes advantage of priority-based preemptive scheduling to remain deterministic, ensuring task timeliness and synchronization. Because battery power on a wireless node is limited, Nano-RK provides central processing unit (CPU), network, and sensor efficiency through the use of virtual energy reservations, making the system a resource kernel. These energy reservations can enforce energy and communication budgets to minimize the negative impact on the node's operational lifetime from unintentional errors or malicious behavior by other nodes within the network. It supports packet forwarding, routing and other network scheduling protocols with the help of a lightweight wireless networking stack. Compared with other current sensor operating systems, Nano-RK provides rich functionality and timeliness scheduling with a small size for its embedded resource kernel (RK).
Features
Static Configuration – Nano-RK uses a static design-time approach for energy use control, and disallows dynamic task creation, requiring application developers to set both task and reservation quotas/priorities in a static testbed design. This design allows creating an energy budget for each task to maintain application requirements and energy efficiency throughout the system's lifetime. Using a static configuration approach, all of the runtime configurations, and the power requirements, are predefined and verified by the designer before the system is deployed and executed in the real world. This approach also helps to guarantee the stability and small-size characteristics relative to traditional RTOSs.
Watchdog Timer support – Watchdog is a software timer that triggers a system reset action if the system hangs on crucial faults for an extended period of time. The watchdog mechanism can bring the system back from the nonresponsive state into normal operation by waiting until the timer goes off and subsequently rebooting the device. In Nano-RK, the watchdog timer is tied directly to the processor's reset signal REBOOT ON ERROR. By default, it is enabled when the system boots and reset each time the scheduler executes. If the system fails to respond within the predefined time period, the system will reboot and run the initialization instruction sequence to hopefully regain control.
Deep Sleep Mode – For energy efficiency reasons, if there are no eligible tasks to run, the system can be powered down and given the option to enter deep sleep mode. When the system is in this mode, only the deep sleep timer can wake the system with a predefined latency period. After waking from sleep mode, the next context swap time is set to guarantee the CPU wakes in time. If a sensor node does not wish to perform deep sleep, it also is presented with the choice to go into a low energy use state while still managing its peripherals.
Ready queue
Nano-RK implements the ready queue as a doubly-linked list of nodes stored within a fixed-size array, ordering all ready tasks by decreasing priority. As the number of tasks running within a Nano-RK implementation is statically configured in a testbed before deployment, the ready queue size is also fixed to this number of tasks that can be ready to run. A fixed-length array named nrk_readyQ is found within the nrk_defs.h file, along with two pointers to reference the two most important cells within this array. The free node pointer (free_node) and the head node pointer (head_node) point to the next cell in the array to be allocated and the current highest-priority task ready to run, respectively.
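A schematic sketch of such a structure in illustrative Python (the Nano-RK source is C): a doubly-linked list threaded through a fixed-size array, with a head pointer to the highest-priority ready task and a free list of unused cells.

```python
# Toy fixed-capacity ready queue kept sorted by decreasing priority.
class ReadyQueue:
    def __init__(self, max_tasks: int):
        self.prio = [0] * max_tasks
        self.prev = [-1] * max_tasks
        self.next = [-1] * max_tasks
        self.head = -1                      # highest-priority ready task
        self.free = list(range(max_tasks))  # unused cells

    def insert(self, priority: int) -> None:
        cell = self.free.pop()
        self.prio[cell] = priority
        cur, prev = self.head, -1
        while cur != -1 and self.prio[cur] >= priority:  # walk to keep order
            prev, cur = cur, self.next[cur]
        self.prev[cell], self.next[cell] = prev, cur
        if prev == -1:
            self.head = cell
        else:
            self.next[prev] = cell
        if cur != -1:
            self.prev[cur] = cell

    def pop_highest(self) -> int:
        cell = self.head
        self.head = self.next[cell]
        if self.head != -1:
            self.prev[self.head] = -1
        self.free.append(cell)
        return self.prio[cell]

q = ReadyQueue(4)
for p in (2, 7, 5):
    q.insert(p)
print(q.pop_highest(), q.pop_highest())  # 7 5
```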
Scheduler
The core of Nano-RK is a static preemptive real-time scheduler which is priority-based and energy efficient. For priority-based preemptive scheduling, the scheduler always selects the highest-priority task from the ready queue. To save energy, tasks do not poll for a resource; rather, tasks block on certain events and are unblocked when the events occur. When there is no task in the ready queue, the system can be powered down to save energy. When the system is working, one and only one task (the current task), designated by nrk_cur_task_tcb, is running for a predefined period. So the most important job of the scheduler is to decide which task should run next and for how long the next task should run until the scheduler is triggered to run again.
References
External links
Wireless sensor network
Embedded operating systems
Free software operating systems | Nano-RK | Technology | 1,171 |
34,816,085 | https://en.wikipedia.org/wiki/Web%20%28differential%20geometry%29 | In mathematics, a web permits an intrinsic characterization in terms of Riemannian geometry of the additive separation of variables in the Hamilton–Jacobi equation.
Formal definition
An orthogonal web on a Riemannian manifold (M,g) is a set of n pairwise transversal and orthogonal foliations of connected submanifolds of codimension 1, where n denotes the dimension of M.
Note that two submanifolds of codimension 1 are orthogonal if their normal vectors are orthogonal; in an indefinite metric, orthogonality does not imply transversality.
Alternative definition
Given a smooth manifold of dimension n, an orthogonal web (also called orthogonal grid or Ricci’s grid) on a Riemannian manifold (M,g) is a set of n pairwise transversal and orthogonal foliations of connected submanifolds of dimension 1.
Remark
Since vector fields can be visualized as stream-lines of a stationary flow or as Faraday’s lines of force, a non-vanishing vector field in space generates a space-filling system of lines through each point, known to mathematicians as a congruence (i.e., a local foliation). Ricci’s vision filled Riemann’s n-dimensional manifold with n congruences orthogonal to each other, i.e., a local orthogonal grid.
Differential geometry of webs
A systematic study of webs was started by Blaschke in the 1930s. He applied a group-theoretic approach to web geometry.
Classical definition
Let M be a differentiable manifold of dimension N = nr. A d-web W(d,n,r) of codimension r in an open set of M is a set of d foliations of codimension r which are in general position.
In the notation W(d,n,r) the number d is the number of foliations forming a web, r is the web codimension, and n is the ratio of the dimension nr of the manifold M and the web codimension. Of course, one may define a d-web of codimension r without having r as a divisor of the dimension of the ambient manifold.
See also
Foliation
Parallelization (mathematics)
Notes
References
Differential geometry
Manifolds | Web (differential geometry) | Mathematics | 480 |
2,928,094 | https://en.wikipedia.org/wiki/CXML | cXML (commerce eXtensible Markup Language) is a protocol, created by Ariba in 1999, intended for communication of business documents between procurement applications, e-commerce hubs and suppliers. cXML is based on XML and provides formal XML schemas for standard business transactions, allowing programs to modify and validate documents without prior knowledge of their form.
The protocol does not include the full breadth of interactions some parties may wish to communicate. However, it can be expanded through the use of extrinsic elements and newly defined domains for various identifiers. Such expansion, however, is limited to point-to-point configurations agreed between the communicating parties.
The current protocol includes documents for setup (company details and transaction profiles), catalogue content, application integration (including the widely used PunchOut feature), original, change and delete purchase orders and responses to all of these requests, order confirmation and ship notice documents (cXML analogues of EDI 855 and 856 transactions) and new invoice documents.
PunchOut is a protocol for interactive sessions managed across the Internet, a communication from one application to another, achieved through a dialog of real-time, synchronous cXML messages, which support user interaction at a remote site. This protocol is most commonly used today in the form of Procurement PunchOut, which specifically supports interactions between a procurement application and a supplier's eCommerce web site and possibly includes an intermediary for authentication and version matching. The buyer leaves or "punches out" of their company's system and goes to the supplier's web-based catalog to locate and add items to their shopping cart, while their application transparently maintains connection with the web site and gathers pertinent information. A vendor catalog, enhanced for this process, is known as a punchout catalog. PunchOut enables communication between the software and the web site so that relevant information about the transaction is delivered to the appropriate channels.
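A minimal, hypothetical PunchOutSetupRequest skeleton is shown below. The element names follow the published cXML DTD (cXML, Header, Credential, Request, PunchOutSetupRequest, BuyerCookie, BrowserFormPost), while all identifiers, secrets and URLs are placeholders; the snippet parses the document only to confirm it is well-formed.

```python
import xml.etree.ElementTree as ET

PUNCHOUT_SETUP = """\
<cXML payloadID="123@buyer.example.com" timestamp="2024-01-01T00:00:00-00:00">
  <Header>
    <From><Credential domain="NetworkID"><Identity>BUYER</Identity></Credential></From>
    <To><Credential domain="NetworkID"><Identity>SUPPLIER</Identity></Credential></To>
    <Sender>
      <Credential domain="NetworkID">
        <Identity>BUYER</Identity><SharedSecret>secret</SharedSecret>
      </Credential>
      <UserAgent>ExampleProcurementApp 1.0</UserAgent>
    </Sender>
  </Header>
  <Request>
    <PunchOutSetupRequest operation="create">
      <BuyerCookie>session-42</BuyerCookie>
      <BrowserFormPost><URL>https://buyer.example.com/return</URL></BrowserFormPost>
    </PunchOutSetupRequest>
  </Request>
</cXML>"""

root = ET.fromstring(PUNCHOUT_SETUP)  # raises if not well-formed
print(root.find("./Request/PunchOutSetupRequest/BuyerCookie").text)  # session-42
```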
Since SAP's acquisition of Ariba in 2012, this protocol is owned by SAP.
Benefits
Standardized method used for automated order receipt, fulfilment updates and catalogue transport
Many sell-side solutions come with the protocol out of the box
cXML supports remote shopping session (PunchOut) transactions
Extensible: If your buyer relationships require more information than cXML supports intrinsically, that data may still be sent end-to-end
Leverages XML, which is a robust open language for describing information
cXML leaves much of the syntax from EDI behind
Proprietary issues
cXML is published based on the input of many companies, and is controlled by Ariba. cXML is a protocol that is published for free on the Internet along with its DTD. It is open to all for their use without restrictions apart from publications of modifications and naming that new protocol. Essentially, everyone is free to use cXML with any and all modifications as long as they don't publish their own standard and call it "cXML". Beginning in February 1999, the cXML standard has been available for all to use. The details of its license agreement are found at http://cxml.org/license.html.
See also
OCI
EDI
XML
References
External links
CXML.org
XML-based standards
Computer-related introductions in 1999 | CXML | Technology | 679 |
46,195,932 | https://en.wikipedia.org/wiki/C30H41NO6S |
The molecular formula C30H41NO6S (molar mass: 543.72 g/mol) may refer to:
Radalbuvir
Sagopilone | C30H41NO6S | Chemistry | 53 |
11,888,310 | https://en.wikipedia.org/wiki/Mummia | Mummia, mumia, or originally mummy referred to several different preparations in the history of medicine, from "mineral pitch" to "powdered human mummies". It originated from Arabic mūmiyā "a type of resinous bitumen found in Western Asia and used curatively" in traditional Islamic medicine, which was translated as pissasphaltus (from "pitch" and "asphalt") in ancient Greek medicine. In medieval European medicine, mūmiyā "bitumen" was transliterated into Latin as mumia meaning both "a bituminous medicine from Persia" and "mummy". Merchants in apothecaries dispensed expensive mummia bitumen, which was thought to be an effective cure-all for many ailments. It was also used as an aphrodisiac.
Beginning around the 12th century when supplies of imported natural bitumen ran short, mummia was misinterpreted as "mummy", and the word's meaning expanded to "a black resinous exudate scraped out from embalmed Egyptian mummies". This began a period of lucrative trade between Egypt and Europe, and suppliers substituted rare mummia exudate with entire mummies, either embalmed or desiccated. After Egypt banned the shipment of mummia in the 16th century, unscrupulous European apothecaries began to sell fraudulent mummia prepared by embalming and desiccating fresh corpses.
During the Renaissance, scholars proved that translating bituminous mummia as mummy was a mistake, and physicians stopped prescribing the ineffective drug. Artists in the 17–19th centuries still used ground up mummies to tint a popular oil-paint called mummy brown.
Terminology
The etymologies of both English mummia and mummy derive from Medieval Latin mumia, which transcribes Arabic mūmiyā "a kind of bitumen used medicinally; a bitumen-embalmed body" from mūm "wax (used in embalming)", which descend from Persian mumiya and mum.
The Oxford English Dictionary records the complex semantic history of mummy and mummia. Mummy was first recorded meaning "a medicinal preparation of the substance of mummies; hence, an unctuous liquid or gum used medicinally" (c. 1400), which Shakespeare used jocularly for "dead flesh; body in which life is extinct" (1598), and later "a pulpy substance or mass" (1601). Second, it was semantically extended to mean "a sovereign remedy" (1598), "a medicinal bituminous drug obtained from Arabia and the East" (1601), "a kind of wax used in the transplanting and grafting of trees" (1721), and "a rich brown bituminous pigment" (1854). The third mummy meaning was "the body of a human being or animal embalmed (according to the ancient Egyptian or some analogous method) as a preparation for burial" (1615), and "a human or animal body desiccated by exposure to sun or air" (1727). Mummia was originally used in mummy's first meaning "a medicinal preparation…" (1486), then in the second meaning "a sovereign remedy" (1741), and lastly to specify "in mineralogy, a sort of bitumen, or mineral pitch, which is soft and tough, like shoemaker's wax, when the weather is warm, but brittle, like pitch, in cold weather. It is found in Persia, where it is highly valued" (1841). In modern English usage, mummy commonly means "embalmed body" as distinguished from mummia "a medicine" in historical contexts.
Mummia or mumia is defined by three English mineralogical terms. Bitumen (from Latin bitūmen) originally meant "a kind of mineral pitch found in Palestine and Babylon, used as mortar, etc. The same as asphalt, mineral pitch, Jew's pitch, Bitumen judaicum", and in modern scientific use means "the generic name of certain mineral inflammable substances, native hydrocarbons more or less oxygenated, liquid, semi-solid, and solid, including naphtha, petroleum, asphalt, etc." Asphalt (from Ancient Greek ásphaltos "asphalt, bitumen") first meant "A bituminous substance, found in many parts of the world, a smooth, hard, brittle, black or brownish-black resinous mineral, consisting of a mixture of different hydrocarbons; called also mineral pitch, Jews' pitch, and in the [Old Testament] 'slime'", and presently means "A composition made by mixing bitumen, pitch, and sand, or manufactured from natural bituminous limestones, used to pave streets and walks, to line cisterns, etc.", used as an abbreviation for asphalt concrete. Until the 20th century, the Latinate term asphaltum was also used. Pissasphalt (from Greek pissasphaltus "pitch" and "asphalt") names "A semi-liquid variety of bitumen, mentioned by ancient writers".
The medicinal use of bituminous mummia has a parallel in Ayurveda: shilajit or silajit (from Sanskrit shilajatu "rock-conqueror") or mumijo (from Persian mūmiyā "wax") is "A name given to various solid or viscous substances found on rock in India and Nepal … esp. a usu. dark-brown odoriferous substance which is used in traditional Indian medicine and probably consists principally of dried animal urine".
History
The usage of mumiya as medicine began with the famous Persian mumiya black pissasphalt remedy for wounds and fractures, which was confused with similarly appearing black bituminous materials used in Egyptian mummification. This was misinterpreted by Medieval Latin translators to mean whole mummies. Starting in the 12th century and continuing until as far as the 19th century, mummies and bitumen from mummies would be central in European medicine and art, as well as Egyptian trade.
Bitumen or asphalt had many uses in the ancient world such as glue, mortar, and waterproofing. The ancient Egyptians began to use bitumen for embalming mummies during the Twelfth Dynasty (1991–1802 BCE). (S. G. F. Brandon, "Mummification", in Man, Myth and Magic: An Illustrated Encyclopedia of the Supernatural.)
According to historians of pharmacy, mummia became part of the materia medica of the Arabs, discussed by Muhammad ibn Zakariya al-Razi (845–925) and Ibn al-Baitar (1197–1248). Medieval Persian physicians used bitumen/asphalt both as a salve for cuts, bruises, and bone fractures, and as an internal medicine for stomach ulcers and tuberculosis. They achieved the best results with a black pissasphalt that seeped from a mountain in Darabgerd, Persia. The Greek physician Pedanius Dioscorides' c. 50–70 De Materia Medica ranked bitumen from the Dead Sea as medicinally superior to the pissasphalt from Apollonia (Illyria), both of which were considered to be an equivalent substitute for the scarce and expensive Persian mumiya.
During the Crusades, European soldiers learned firsthand of the drug mummia, which was considered to have great healing powers in cases of fracture and rupture. The demand for mummia increased in Europe and since the supply of natural bitumen from Persia and the Dead Sea was limited, the search for a new source turned to the tombs of Egypt.
Misinterpreting the Latin word mumia "medicinal bitumen" involved several steps. The first was to substitute substances exuded by Egyptian mummies for the natural product. The Arab physician Serapion the Younger (fl. 12th century) wrote about bituminous mumia and its many uses, but the Latin translation of Simon Geneunsis (d. 1303) said, "Mumia, this is the mumia of the sepulchers with aloes and myrrh mixed with the liquid (humiditate) of the human body". Two 12th-century Italian examples: Gerard of Cremona mistakenly translated Arabic mumiya as "the substance found in the land where bodies are buried with aloes by which the liquid of the dead, mixed with the aloes, is transformed and it is similar to marine pitch", and the physician Matthaeus Platearius said "Mumia is a spice found in the sepulchers of the dead.... That is best which is black, ill-smelling, shiny, and massive".
The second step was to confuse and replace the rare black exudation from embalmed corpses with the black bitumen that Egyptians used as an embalming preservative. The Baghdad physician Abd al-Latif al-Baghdadi (1162–1231) described ancient Egyptian mummies, "In the belly and skull of these corpses is also found in great abundance called mummy", added that although the word properly denoted bitumen or asphalt, "The mummy found in the hollows of the corpses in Egypt, differs but immaterially from the nature of mineral mummy; and where any difficulty arises in procuring the latter, may be substituted in its stead."
The third step in misinterpreting mummia was to substitute the blackened flesh of an entire mummy for the hardened bituminous materials from the interior cavities of the cadavers. The ancient tombs of Egypt and the deserts could not meet the European demand for the drug mumia, so a commerce developed in the manufacture and sale of fraudulent mummies, sometimes called mumia falsa. The Italian surgeon Giovanni da Vigo (1450–1525) defined mumia as "The flesh of a dead body that is embalmed, and it is hot and dry in the second [grade], and therefore it has virtue to incarne [i.e., heal over] wounds and to staunch blood", and included it in his list of essential drugs.
The Swiss-German polymath Paracelsus (1493–1541) gave mummia a new meaning of "intrinsic spirit" and said true pharmaceutical mummia must be "the body of a man who did not die a natural death but rather died an unnatural death with a healthy body and without sickness". The German physician Oswald Croll (1563–1609) said mumia was "not the liquid matter which is found in the Egyptian sepulchers," but rather "the flesh of a man that perishes a violent death, and kept for some time in the air", and gave a detailed recipe for making tincture of mumia from the corpse of a young red-haired man, who had been hanged, bludgeoned on the breaking wheel, exposed to the air for days, then cut into small pieces, sprinkled with powdered myrrh and aloes, soaked in wine, and dried.
Renaissance scholars and physicians first expressed opposition to using human mumia in the 16th century. The French naturalist Pierre Belon (1517–1564) concluded that the Arab physicians, from whom the western writers derived their knowledge of mumia, had actually referred to the pissasphalt of Dioscorides, which had been misconstrued by the translators. He said Europeans were importing both the "falsely called" mumia obtained from the scraping the bodies of cadavers, and "artificial mumia" made by exposing buried dead bodies to the heat of the sun before grinding them up. While he considered the available mumia to be a valueless and even dangerous drug, he noted that King Francis I always carried with him a mixture of mumia and rhubarb to use as an immediate remedy for any injury. The barber surgeon Ambroise Paré (d. 1590) revealed the manufacture of fake mummia both in France, where apothecaries would steal the bodies of executed criminals, dry them in an oven, and sell the flesh; and in Egypt, where a merchant, who admitted collecting dead bodies and preparing mummia, expressed surprise that the Christians, "so dainty-mouthed, could eat the bodies of the dead". Paré admitted to having personally administered mumia a hundred times, but condemned "this wicked kinde of Drugge, doth nothing helpe the diseased," and so he stopped prescribing it and encouraged others not to use mumia. The English herbalist John Gerard's 1597 Herball described the ancient Egyptians using cedar pitch for embalming, and noted that the preserved bodies that shopkeepers falsely call "mumia" should be what the Greeks called pissasphalton. Gerard blamed the error on the translator of Serapion who interpreted mumia "according to his own fancie" that it is the exudate from an embalmed human corpse.
The medical use of Egyptian mumia continued through the 17th century. The physicist Robert Boyle (1627–1691) praised it as "one of the useful medicines commended and given by our physicians for falls and bruises, and in other cases too." The Dutch physician Steven Blankaart's 1754 Lexicon medicum renovatum listed four types of mumia: Arabian exudate from bodies embalmed with spices and asphalt, Egyptian bodies embalmed with pissasphalt, sun-dried bodies found in the desert, the natural pissasphalt. Mummia's familiarity as a remedy in Britain is demonstrated by passing references in Shakespeare, Francis Beaumont and John Fletcher, and John Donne, and also by more detailed remarks in the writings of Thomas Browne, Francis Bacon, and Robert Boyle.
By the 18th century, skepticism about the pharmaceutical value of mumia was increasing, and medical opinion was turning against its use. The English medical writer John Quincy wrote in 1718 that although mumia was still listed in medicinal catalogues, "it is quite out of use in Prescription". Mummia was offered for sale medicinally as late as 1924 in the price list of Merck & Co.
Both mummia and asphalt have long been used as pigments. The British chemist and painter Arthur Herbert Church described the use of mummia for making "mummy brown" oil paint:
'Mummy,' as a pigment, is inferior to prepared, but superior to raw, asphalt, inasmuch as it has been submitted to a considerable degree of heat, and has thereby lost some of its volatile hydrocarbons. Moreover, it is usual to grind up the bones and other parts of the mummy together, so that the resulting powder has more solidity and is less fusible than the asphalt alone would be. A London colourman informs me that one Egyptian mummy furnishes sufficient material to satisfy the demands of his customers for twenty years. It is perhaps scarcely necessary to add that some samples of the pigment sold as 'mummy' are spurious.
The modern pigment sold as "mummy brown" is composed of a mixture of kaolin, quartz, goethite and hematite.
See also
Bitumen of Judea
Human fat
Medical cannibalism
Mellified man
References
Additional sources
External links
Sheba's Secret Mummies – Channel 4 documentary
The Gruesome History of Eating Corpses as Medicine, Smithsonian
Ancient Egyptian mummies
Medical cannibalism
History of pharmacy
Magic powders
Resins
Traditional medicine | Mummia | Physics | 3,206 |
869,238 | https://en.wikipedia.org/wiki/Metmyoglobin | Metmyoglobin is the oxidized form of the oxygen-carrying hemeprotein myoglobin.
Metmyoglobin is the cause of the characteristic brown colouration of meat that occurs as it ages.
In living muscle, the concentration of metmyoglobin is vanishingly small, due to the presence of the enzyme metmyoglobin reductase which, in the presence of the cofactor NADH and the coenzyme cytochrome b5, converts the Fe3+ in the heme prosthetic group of metmyoglobin back to the Fe2+ of normal myoglobin.
In meat, which is dead muscle, the normal processes of removing metmyoglobin are prevented from effecting this repair, or alternatively the rate of metmyoglobin formation exceeds their capacity, so that there is a net accumulation of metmyoglobin as the meat ages.
Metmyoglobin reduction helps limit the accumulation of oxidized myoglobin, and the rate of myoglobin oxidation is specific to each species. In other words, metmyoglobin gains electrons, which counteracts the electrons that myoglobin loses upon oxidation. Metmyoglobin, the product of myoglobin oxidation, shows the undesirable brown color which can be seen in many types of meat. Metmyoglobin is more susceptible to oxidation when compared to oxymyoglobin. The metmyoglobin reducing activity varies across species and has been studied particularly in beef, porcine, bison, deer, emu, equine, goats and sheep.
Currently there is no standard technique for measuring metmyoglobin across all species, but several techniques are used, including reflectance spectrophotometry and absorbance spectrophotometry.
References
External links
Hemoproteins | Metmyoglobin | Chemistry | 388 |
40,400,729 | https://en.wikipedia.org/wiki/Limiting%20case%20%28mathematics%29 | In mathematics, a limiting case of a mathematical object is a special case that arises when one or more components of the object take on their most extreme possible values. For example:
In statistics, the limiting case of the binomial distribution is the Poisson distribution. As the number of trials tends to infinity while the expected number of successes stays fixed, the binomial distribution converges to the Poisson distribution (the limit is worked out after this list).
A circle is a limiting case of various other figures, including the Cartesian oval, the ellipse, the superellipse, and the Cassini oval. Each type of figure is a circle for certain values of the defining parameters, and the generic figure appears more like a circle as the limiting values are approached.
Archimedes calculated an approximate value of π by treating the circle as the limiting case of a regular polygon with 3 × 2ⁿ sides, as n gets large.
In electricity and magnetism, the long wavelength limit is the limiting case when the wavelength is much larger than the system size.
In economics, two limiting cases of a demand curve or supply curve are those in which the elasticity is zero (the totally inelastic case) or infinity (the infinitely elastic case).
In finance, continuous compounding is the limiting case of compound interest in which the compounding period becomes infinitesimally small, achieved by taking the limit as the number of compounding periods per year goes to infinity.
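For the binomial-to-Poisson example at the top of this list, holding the mean λ = np fixed while the number of trials n grows without bound gives the classical limit:

```latex
\lim_{n\to\infty} \binom{n}{k}\left(\frac{\lambda}{n}\right)^{k}\left(1-\frac{\lambda}{n}\right)^{n-k}
\;=\; \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots
```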
A limiting case is sometimes a degenerate case in which some qualitative properties differ from the corresponding properties of the generic case. For example:
A point is a degenerate circle, whose radius is zero.
A parabola can degenerate into two distinct or coinciding parallel lines.
An ellipse can degenerate into a single point or a line segment.
A hyperbola can degenerate into two intersecting lines.
See also
Degeneracy (mathematics)
Limit (mathematics)
References
Mathematical concepts | Limiting case (mathematics) | Mathematics | 397 |
15,543,044 | https://en.wikipedia.org/wiki/Parovicenko%20space | In mathematics, a Parovicenko space is a topological space similar to the space of non-isolated points of the Stone–Čech compactification of the integers.
Definition
A Parovicenko space is a topological space X satisfying the following conditions:
X is compact Hausdorff
X has no isolated points
X has weight c, the cardinality of the continuum (this is the smallest cardinality of a base for the topology).
Every two disjoint open Fσ subsets of X have disjoint closures
Every non-empty Gδ of X has non-empty interior.
Properties
The space βN\N is a Parovicenko space, where βN is the Stone–Čech compactification of the natural numbers N. Parovicenko proved that the continuum hypothesis implies that every Parovicenko space is isomorphic to βN\N. Van Douwen and van Mill showed that if the continuum hypothesis is false then there are other examples of Parovicenko spaces.
References
General topology | Parovicenko space | Mathematics | 196 |
48,540,681 | https://en.wikipedia.org/wiki/Yilan%20Brick%20Kiln | The Yilan Brick Kiln () is a former brick manufacturing factory in Beijin Village, Yilan City, Yilan County, Taiwan.
History
In 1831, Yilan established its first brick kiln, following the discovery of sites around Yilan with high-viscosity clay suitable for brick-making. The Yilan Brick Kiln was originally established as the Chen He Cheng Kiln Factory. It was later renamed the Yilan Brick Kiln and ended its operation in the 1980s. In 1999, the Yilan County Government planned to build residential buildings on the site. However, after opposition from local residents, the kiln was preserved.
Architecture
The kiln building is rectangular and houses a series of individual kilns. Each kiln can fire its own product and operate independently, with exhaust holes connected to one another and to a unified exhaust pipe at the end. The kiln has a brick domed roof and a brick floor, and features a 37-meter-tall chimney.
Transportation
The building is accessible within walking distance north of Yilan Station of Taiwan Railways.
See also
List of tourist attractions in Taiwan
Former Tangrong Brick Kiln
References
1938 establishments in Taiwan
Buildings and structures in Yilan County, Taiwan
Kilns in Taiwan
Tourist attractions in Yilan County, Taiwan | Yilan Brick Kiln | Chemistry,Engineering | 265 |
50,399,682 | https://en.wikipedia.org/wiki/Predictive%20engineering%20analytics | Predictive engineering analytics (PEA) is a development approach for the manufacturing industry that helps with the design of complex products (for example, products that include smart systems). It concerns the introduction of new software tools, the integration between those, and a refinement of simulation and testing processes to improve collaboration between analysis teams that handle different applications. This is combined with intelligent reporting and data analytics. The objective is to let simulation drive the design, to predict product behavior rather than to react on issues which may arise, and to install a process that lets design continue after product delivery.
Industry needs
In a classic development approach, manufacturers deliver discrete product generations. Before bringing those to market, they use extensive verification and validation processes, usually by combining several simulation and testing technologies. But this approach has several shortcomings when looking at how products are evolving. Manufacturers in the automotive industry, the aerospace industry, the marine industry or any other mechanical industry all share similar challenges: they have to re-invent the way they design to be able to deliver what their customers want and buy today.
Complex products that include smart systems
Products include, besides the mechanics, ever more electronics, software and control systems. These help to increase performance for several characteristics, such as safety, comfort, fuel economy and many more. Designing such products using a classic approach is usually ineffective. A modern development process should be able to predict the behavior of the complete system for all functional requirements, including physical aspects, from the very beginning of the design cycle.
The use of new materials and manufacturing methods
To achieve reduced costs or improved fuel economy, manufacturers need to continually consider adopting new materials and corresponding manufacturing methods. That makes product development more complex, as engineers cannot rely on decades of experience anymore, as they could when working with traditional materials, such as steel and aluminium, and traditional manufacturing methods, such as casting. New materials such as composites behave differently when it comes to structural behavior, thermal behavior, fatigue behavior or noise insulation, for example, and require dedicated modeling.
On top of that, as design engineers do not always know all manufacturing complexities that come with using these new materials, it is possible that the "product as manufactured" is different from the "product as designed". Of course all changes need to be tracked, and possibly even an extra validation iteration needs to be done after manufacturing.
Product development continues after delivery
Today's products include many sensors that allow them to communicate with each other, and to send feedback to the manufacturer. Based on this information, manufacturers can send software updates to continue optimizing behavior, or to adapt to a changing operational environment. Products will create the internet of things, and manufacturers should be part of it. A product "as designed" is never finished, so development should continue when the product is in use. This evolution is also referred to as Industry 4.0, or the fourth industrial revolution. It challenges design teams, as they need to react quickly and make behavioral predictions based on an enormous amount of data.
The inclusion of predictive functionality
The ultimate intelligence a product can have, is that it remembers the individual behavior of its operator, and takes that into consideration. In this way, it can for example anticipate certain actions, predict failure or maintenance, or optimize energy consumption in a self-regulating manner. That requires a predictive model inside the product itself, or accessible via cloud. This one should run very fast and should behave exactly the same as the actual product. It requires the creation of a digital twin: a replica of the product that remains in-sync over its entire product lifecycle.
Ever increasing pressure on time, cost, quality and diversification
Consumers today can get easy access to products that are designed in any part of the world. That puts enormous pressure on time-to-market, cost and product quality. This trend has been going on for decades, but with people making ever more buying decisions online, it has become more relevant than ever. Products can easily be compared in terms of price and features on a global scale, and reactions on forums and social media can be very grim when product quality is not optimal. On top of that, consumers in different parts of the world have different preferences, and different standards and regulations may apply.
As a result, modern development processes should be able to convert very local requirements into a global product definition, which then should be rolled out locally again, potentially with part of the work being done by engineers in local affiliates. That calls for a firm globally operating product lifecycle management system that starts with requirements definition. And the design process should have the flexibility to effectively predict product behavior and quality for various market needs.
Enabling processes and technologies
Dealing with these challenges is exactly the aim of a predictive engineering analytics approach for product development. It refers to a combination of tools deployment and a good alignment of processes. Manufacturers gradually deploy the following methods and technologies, to an extent that their organization allows it and their products require it:
Deploying a closed-loop systems-driven product development process
In this multi-disciplinary, simulation-based approach, the global design is considered as a collection of mutually interacting subsystems from the very beginning. From the earliest stages on, the chosen architecture is virtually tested for all critical functional performance aspects simultaneously. These simulations use scalable modeling techniques, so that components can be refined as data becomes available. Closing the loop happens on two levels:
Concurrent development of the mechanical components with the control systems
Inclusion of data of products in use (in case of continued development the actual product)
Closed-loop systems driven product development aims at reducing test-and-repair. Manufacturers implement this approach to pursue their dream of designing right the first time.
Increasing the use of 1D multi-physics system simulation
1D system simulation, also referred to as 1D CAE or mechatronics system simulation, allows scalable modeling of multi-domain systems. The full system is presented in a schematic way, by connecting validated analytical modeling blocks of electrical, hydraulic, pneumatic and mechanical subsystems (including control systems). It helps engineers predict the behavior of concept designs of complex mechatronics, either transient or steady-state.
Manufacturers often have validated libraries available that contain predefined components for different physical domains. Or if not, specialized software suppliers can provide them. Using those, the engineers can do concept predictions very early, even before any Computer-aided Design (CAD) geometry is available. During later stages, parameters can then be adapted.
1D system simulation calculations are very efficient. The components are analytically defined and have input and output ports. Causality is created by connecting the inputs of one component to the outputs of another (and vice versa). Models can have various degrees of complexity and can reach very high accuracy as they evolve. Some model versions may allow real-time simulation, which is particularly useful during control systems development or as part of built-in predictive functionality.
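As a toy illustration of this port-based style (not modeled on any particular commercial tool; the components, values and equations are invented for the sketch), two analytic blocks can be wired output-to-input and stepped in time:

    # Minimal 1D system-simulation sketch: a controlled valve feeding a tank.
    # Both components are analytic blocks with input/output ports; causality
    # comes from wiring outputs to inputs. All names and values are illustrative.

    DT = 0.01  # simulation time step in seconds

    class Valve:
        """Output port: flow rate, proportional to a commanded opening."""
        def output(self, opening):          # input port: opening in [0, 1]
            return 0.002 * opening          # m^3/s at full opening (assumed)

    class Tank:
        """State: liquid level; input port: inflow; output port: level."""
        def __init__(self, area=0.5):       # cross-section in m^2 (assumed)
            self.area, self.level = area, 0.0
        def step(self, inflow):
            self.level += DT * inflow / self.area   # explicit Euler update
            return self.level

    valve, tank = Valve(), Tank()
    for k in range(int(10.0 / DT)):         # simulate 10 s of operation
        level = tank.step(valve.output(opening=0.8))
    print("level after 10 s:", level)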
Improving 3D simulation technologies
3D simulation or 3D CAE is usually applied at a more advanced stage of product development than 1D system simulation, and can account for phenomena that cannot be captured in 1D models. The models can evolve into highly detailed representations that are very application-specific and can be very computationally intensive.
3D simulation or 3D CAE technologies were already essential in classic development processes for verification and validation, often proving their value by speeding up development and avoiding late-stage changes. 3D simulation or 3D CAE are still indispensable in the context of predictive engineering analytics, becoming a driving force in product development. Software suppliers put great effort into enhancements, by adding new capabilities and increasing performance on modeling, process and solver side. While such tools are generally based on a single common platform, solution bundles are often provided to cater for certain functional or performance aspects, while industry knowledge and best practices are provided to users in application verticals. These improvements should allow 3D simulation or 3D CAE to keep pace with ever shorter product design cycles.
Establishing a strong coupling between 1D simulation, 3D simulation and controls engineering
As the closed-loop systems-driven product development approach requires concurrent development of the mechanical system and controls, strong links must exist between 1D simulation, 3D simulation and control algorithm development. Software suppliers achieve this through offering co-simulation capabilities for Model-in-the-Loop (MiL), Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) processes.
Model-in-the-Loop
Already when evaluating potential architectures, 1D simulation should be combined with models of control software, as the electronic control unit (ECU) will play a crucial role in achieving and maintaining the right balance between functional performance aspects when the product will operate. During this phase, engineers cascade down the design objectives to precise targets for subsystems and components. They use multi-domain optimization and design trade-off techniques. The controls need to be included in this process. By combining them with the system models in MiL simulations, potential algorithms can be validated and selected.
In practice, MiL involves co-simulation between virtual controls from dedicated controller modeling software and scalable 1D models of the multi-physical system. This provides the right combination of accuracy and calculation speed for investigation of concepts and strategies, as well as controllability assessment.
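The following Python sketch mimics that MiL loop with a hypothetical PI controller model co-simulated against a first-order 1D plant model; all gains, time constants and step sizes are invented for illustration:

    # Model-in-the-Loop sketch: virtual controller + 1D plant, co-simulated
    # at a fixed communication step. Everything here is illustrative.

    DT = 0.001                      # co-simulation communication step (s)

    class PIController:             # stand-in for a controller modeling tool
        def __init__(self, kp=2.0, ki=5.0):
            self.kp, self.ki, self.integral = kp, ki, 0.0
        def update(self, setpoint, measured):
            error = setpoint - measured
            self.integral += error * DT
            return self.kp * error + self.ki * self.integral

    class Plant1D:                  # first-order lag: tau * y' + y = u
        def __init__(self, tau=0.2):
            self.tau, self.y = tau, 0.0
        def step(self, u):
            self.y += DT * (u - self.y) / self.tau
            return self.y

    ctrl, plant = PIController(), Plant1D()
    y = 0.0
    for k in range(int(2.0 / DT)):  # 2 s of closed-loop co-simulation
        u = ctrl.update(setpoint=1.0, measured=y)
        y = plant.step(u)
    print("output after 2 s:", y)   # should settle near the setpoint 1.0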
Software-in-the-Loop
After the conceptual control strategy has been decided, the control software is further developed while constantly taking the overall global system functionality into consideration. The controller modeling software can generate new embedded C-code and integrate it in possible legacy C-code for further testing and refinement.
Using SiL validation on a global, full-system multi-domain model helps anticipate the conversion from floating point to fixed point after the code is integrated in the hardware, and refine gain scheduling when the code action needs to be adjusted to operating conditions.
SiL is a closed-loop simulation process to virtually verify, refine and validate the controller in its operational environment, and includes detailed 1D and/or 3D simulation models.
Hardware-in-the-Loop
During the final stages of controls development, when the production code is integrated in the ECU hardware, engineers further verify and validate using extensive and automated HiL simulation. The real ECU hardware is combined with a downsized version of the multi-domain global system model, running in real time. This HiL approach allows engineers to complete upfront system and software troubleshooting to limit the total testing and calibration time and cost on the actual product prototype.
During HiL simulation, the engineers verify if regulation, security and failure tests on the final product can happen without risk. They investigate interaction between several ECUs if required. And they make sure that the software is robust and provides quality functionality under every circumstance. When replacing the global system model running in real-time with a more detailed version, engineers can also include pre-calibration in the process. These detailed models are usually available anyway since controls development happens in parallel to global system development.
Closely aligning simulation with physical testing
Evolving from verification and validation to predictive engineering analytics means that the design process has to become more simulation-driven. Physical testing remains a crucial part of that process, both for validation of simulation results and for the testing of final prototypes, which is always required prior to product sign-off. The scale of this task will become even bigger than before, as more conditions and parameter combinations will need to be tested, in a more integrated and complex measurement system that can combine multiple physical aspects as well as control systems.
In other development stages as well, combining test and simulation in a well-aligned process is essential for successful predictive engineering analytics.
Increasing realism of simulation models
Modal testing or experimental modal analysis (EMA) was already essential in verification and validation of pure mechanical systems. It is a well-established technology that has been used for many applications, such as structural dynamics, vibro-acoustics, vibration fatigue analysis, and more, often to improve finite element models through correlation analysis and model updating. The context was however very often trouble-shooting.
As part of predictive engineering analytics, modal testing has to evolve, delivering results that increase simulation realism and handle the multi-physical nature of modern, complex products. Testing has to help define realistic model parameters, boundary conditions and loads. Besides mechanical parameters, different quantities need to be measured. Testing also needs to be capable of validating multi-body models and 1D multi-physical simulation models. In general, a whole new range of testing capabilities (some modal-based, some not) in support of simulation becomes important, and much earlier in the development cycle than before.
Using simulation for more efficient testing
As the number of parameters and their mutual interaction explodes in complex products, testing efficiency is crucial, both in terms of instrumentation and definition of critical test cases. A good alignment between test and simulation can greatly reduce the total test effort and boost productivity.
Simulation can help to analyze upfront which locations and parameters will be most effective to measure for a certain objective. It also allows engineers to investigate the coupling between certain parameters, so that the number of sensors and test conditions can be minimized.
On top of that, simulation can be used to derive certain parameters that cannot be measured directly. Here again, a close alignment between simulation and testing activities is a must. Especially 1D simulation models can open the door to a large number of new parameters that cannot be directly accessed with sensors.
Creating hybrid models
As complex products are in fact combinations of subsystems that are not necessarily developed concurrently, systems and subsystems development increasingly requires setups that are partially hardware, partially simulation models and partially measurement input. These hybrid modeling techniques allow realistic real-time evaluation of system behavior very early in the development cycle. Obviously this requires dedicated technologies as well as a very good alignment between simulation (both 1D and 3D) and physical testing.
Tightly integrating 1D and 3D CAE, as well as testing in the complete product lifecycle management process
Tomorrow's products will live a life after delivery. They will include predictive functionalities based on system models, adapt to their environment, feed information back to design, and more. From this perspective, design and engineering are more than turning an idea into a product. They are an essential part of the digital thread through the entire product value chain, from requirements definition to product in use.
Closing the loop between design and engineering on one hand, and product in use on the other, requires that all steps are tightly integrated in a product lifecycle management software environment. Only this can enable traceability between requirements, functional analysis and performance verification, as well as analytics of use data in support of design. It will allow models to become digital twins of the actual product. They remain in-sync, undergoing the same parameter changes and adapting to the real operational environment.
See also
smart systems
control systems
3D simulation or 3D CAE
Industrie 4.0
Internet of Things
real-time simulation
Hardware-in-the-Loop (HiL)
co-simulation
Modal testing
product lifecycle management
digital twins
References
Computer-aided engineering
Product lifecycle management
Engineering disciplines | Predictive engineering analytics | Engineering | 3,102 |
170,939 | https://en.wikipedia.org/wiki/Step%20function | In mathematics, a function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces.
Definition and first consequences
A function $f\colon \mathbb{R} \to \mathbb{R}$ is called a step function if it can be written as
$f(x) = \sum_{i=0}^{n} \alpha_i \mathbf{1}_{A_i}(x)$, for all real numbers $x$,
where $n \ge 0$, the $\alpha_i$ are real numbers, the $A_i$ are intervals, and $\mathbf{1}_A$ is the indicator function of $A$: $\mathbf{1}_A(x) = 1$ if $x \in A$ and $\mathbf{1}_A(x) = 0$ if $x \notin A$.
In this definition, the intervals $A_i$ can be assumed to have the following two properties:
The intervals are pairwise disjoint: $A_i \cap A_j = \emptyset$ for $i \neq j$.
The union of the intervals is the entire real line: $\bigcup_{i=0}^{n} A_i = \mathbb{R}$.
Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function
$f = 4\,\mathbf{1}_{[-5,1)} + 3\,\mathbf{1}_{(0,6)}$
can be written as
$f = 0\,\mathbf{1}_{(-\infty,-5)} + 4\,\mathbf{1}_{[-5,0]} + 7\,\mathbf{1}_{(0,1)} + 3\,\mathbf{1}_{[1,6)} + 0\,\mathbf{1}_{[6,\infty)}.$
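The definition translates directly into code; the following sketch (the helper names are ours) evaluates the example step function above:

    # Evaluate a step function given as weighted indicator functions of
    # intervals. Intervals are (low, high, low_closed, high_closed) tuples.
    def indicator(x, low, high, low_closed, high_closed):
        above = x > low or (low_closed and x == low)
        below = x < high or (high_closed and x == high)
        return 1 if above and below else 0

    def step_function(x, terms):
        # terms: list of (alpha_i, interval_i) pairs, as in the definition
        return sum(a * indicator(x, *iv) for a, iv in terms)

    # f = 4 * 1_[-5,1) + 3 * 1_(0,6), the example above
    f = [(4, (-5, 1, True, False)), (3, (0, 6, False, False))]
    print(step_function(0.5, f))   # 0.5 lies in both intervals -> 7
    print(step_function(3.0, f))   # lies only in (0,6) -> 3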
Variations in the definition
Sometimes, the intervals are required to be right-open or allowed to be singleton. The condition that the collection of intervals must be finite is often dropped, especially in school mathematics, though it must still be locally finite, resulting in the definition of piecewise constant functions.
Examples
A constant function is a trivial example of a step function. Then there is only one interval, $A_0 = \mathbb{R}$.
The sign function $\operatorname{sgn}(x)$, which is −1 for negative numbers and +1 for positive numbers, is the simplest non-constant step function.
The Heaviside function $H(x)$, which is 0 for negative numbers and 1 for positive numbers, is equivalent to the sign function, up to a shift and scale of range ($H = (\operatorname{sgn}+1)/2$). It is the mathematical concept behind some test signals, such as those used to determine the step response of a dynamical system.
The rectangular function, the normalized boxcar function, is used to model a unit pulse.
Non-examples
The integer part function is not a step function according to the definition of this article, since it has an infinite number of intervals. However, some authors also define step functions with an infinite number of intervals.
Properties
The sum and product of two step functions are again step functions. The product of a step function with a number is also a step function. As such, the step functions form an algebra over the real numbers.
A step function takes only a finite number of values. If the intervals $A_i$, for $i = 0, 1, \dots, n$, in the above definition of the step function are disjoint and their union is the real line, then $f(x) = \alpha_i$ for all $x \in A_i$.
The definite integral of a step function is a piecewise linear function.
The Lebesgue integral of a step function $f = \sum_{i=0}^{n} \alpha_i \mathbf{1}_{A_i}$ is $\int f\,dx = \sum_{i=0}^{n} \alpha_i\,\ell(A_i)$, where $\ell(A)$ is the length of the interval $A$, and it is assumed here that all intervals $A_i$ have finite length. In fact, this equality (viewed as a definition) can be the first step in constructing the Lebesgue integral (see the sketch after this list).
A discrete random variable is sometimes defined as a random variable whose cumulative distribution function is piecewise constant. In this case, it is locally a step function (globally, it may have an infinite number of steps). Usually however, any random variable with only countably many possible values is called a discrete random variable, in this case their cumulative distribution function is not necessarily locally a step function, as infinitely many intervals can accumulate in a finite region.
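The Lebesgue integral formula above is just a weighted sum of interval lengths; here is a minimal sketch (intervals given as pairs of finite endpoints, since endpoint inclusion contributes zero length):

    # Lebesgue integral of a step function: sum of alpha_i * length(A_i).
    def lebesgue_integral(terms):
        # terms: list of (alpha_i, (low_i, high_i)) pairs with finite endpoints
        return sum(alpha * (high - low) for alpha, (low, high) in terms)

    # f = 4 * 1_[-5,1) + 3 * 1_(0,6), from the definition section
    print(lebesgue_integral([(4, (-5, 1)), (3, (0, 6))]))   # 4*6 + 3*6 = 42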
See also
Crenel function
Piecewise
Sigmoid function
Simple function
Step detection
Heaviside step function
Piecewise-constant valuation
References
Special functions | Step function | Mathematics | 662 |
52,792,035 | https://en.wikipedia.org/wiki/De%20novo%20sequence%20assemblers | De novo sequence assemblers are a type of program that assembles short nucleotide sequences into longer ones without the use of a reference genome. These are most commonly used in bioinformatic studies to assemble genomes or transcriptomes. Two common types of de novo assemblers are greedy algorithm assemblers and De Bruijn graph assemblers.
Types of de novo assemblers
There are two types of algorithms that are commonly utilized by these assemblers: greedy, which aim for local optima, and graph method algorithms, which aim for global optima. Different assemblers are tailored for particular needs, such as the assembly of (small) bacterial genomes, (large) eukaryotic genomes, or transcriptomes.
Greedy algorithm assemblers are assemblers that find local optima in alignments of smaller reads. Greedy algorithm assemblers typically feature several steps: 1) pairwise distance calculation of reads, 2) clustering of reads with greatest overlap, 3) assembly of overlapping reads into larger contigs, and 4) repeat. These algorithms typically do not work well for larger read sets, as they do not easily reach a global optimum in the assembly, and do not perform well on read sets that contain repeat regions. Early de novo sequence assemblers, such as SEQAID (1984) and CAP (1992), used greedy algorithms, such as overlap-layout-consensus (OLC) algorithms. These algorithms find overlap between all reads, use the overlap to determine a layout (or tiling) of the reads, and then produce a consensus sequence. Some programs that used OLC algorithms featured filtration (to remove read pairs that will not overlap) and heuristic methods to increase speed of the analyses.
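A toy sketch of that greedy loop (suffix-prefix overlap only, no sequencing-error handling, and far simpler than production assemblers; the helper names are ours):

    # Greedy overlap assembly sketch: repeatedly merge the pair of reads
    # with the largest suffix-prefix overlap until no overlap remains.
    def overlap(a, b, min_len=3):
        """Longest suffix of a that is a prefix of b (>= min_len, else 0)."""
        for n in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def greedy_assemble(reads):
        reads = list(reads)
        while len(reads) > 1:
            best = max(((overlap(a, b), a, b)
                        for a in reads for b in reads if a != b),
                       key=lambda t: t[0])
            n, a, b = best
            if n == 0:                 # local optimum reached, stop merging
                break
            reads.remove(a)
            reads.remove(b)
            reads.append(a + b[n:])    # merge into a longer contig
        return reads

    print(greedy_assemble(["ATGGCC", "GCCTTA", "TTAGGC"]))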
Graph method assemblers come in two varieties: string and De Bruijn. String graph and De Bruijn graph method assemblers were introduced at a DIMACS workshop in 1994 by Michael Waterman and Gene Myers. These methods represented an important step forward in sequence assembly, as they both use algorithms to reach a global optimum instead of a local optimum. While both of these methods made progress towards better assemblies, the De Bruijn graph method has become the most popular in the age of next-generation sequencing. During the assembly of the De Bruijn graph, reads are broken into smaller fragments of a specified size, k. The k-mers are then used as edges in the graph assembly, with the nodes being the (k-1)-mers that the edges connect. The assembler then constructs sequences based on the De Bruijn graph. De Bruijn graph assemblers typically perform better than greedy algorithm assemblers on larger read sets (especially when they contain repeat regions).
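The graph-construction step can be sketched in a few lines of Python (contig extraction by walking unambiguous paths is omitted; the reads and k are illustrative):

    # Build a De Bruijn graph: each k-mer is an edge from its (k-1)-mer
    # prefix node to its (k-1)-mer suffix node.
    from collections import defaultdict

    def de_bruijn(reads, k):
        graph = defaultdict(list)            # node -> list of successor nodes
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].append(kmer[1:])
        return graph

    g = de_bruijn(["ATGGCGT", "GGCGTGC"], k=4)
    for node, succs in sorted(g.items()):
        print(node, "->", succs)
    # A contig corresponds to an unambiguous path through these nodes.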
Commonly used programs
Different assemblers are designed for different type of read technologies. Reads from second generation technologies (called short read technologies) like Illumina are typically short (with lengths of the order of 50-200 base pairs) and have error rates of around 0.5-2%, with the errors chiefly being substitution errors. However, reads from third generation technologies like PacBio and fourth generation technologies like Oxford Nanopore (called long read technologies) are longer with read lengths typically in the thousands or tens of thousands and have much higher error rates of around 10-20% with errors being chiefly insertions and deletions. This necessitates different algorithms for assembly from short and long read technologies.
Assemblathon
There are numerous programs for de novo sequence assembly and many have been compared in the Assemblathon. The Assemblathon is a periodic, collaborative effort to test and improve the numerous assemblers available. Thus far, two assemblathons have been completed (2011 and 2013) and a third is in progress (as of April 2017). Teams of researchers from across the world choose a program and assemble simulated genomes (Assemblathon 1) and the genomes of model organisms that have been previously assembled and annotated (Assemblathon 2). The assemblies are then compared and evaluated using numerous metrics.
Assemblathon 1
Assemblathon 1 was conducted in 2011 and featured 59 assemblies from 17 different groups and the organizers. The goal of this Assemblathon was to most accurately and completely assemble a genome that consisted of two haplotypes (each with three chromosomes of 76.3, 18.5, and 17.7 Mb, respectively) that was generated using Evolver. Numerous metrics were used to assess the assemblies, including: NG50 (the point at which 50% of the total genome size is reached when scaffold lengths are summed from the longest to the shortest), LG50 (the number of scaffolds that are greater than, or equal to, the N50 length), genome coverage, and substitution error rate.
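For concreteness, the NG50 metric as defined above can be computed with a short sketch (the scaffold lengths and genome size are invented):

    # NG50: sum scaffold lengths from longest to shortest and report the
    # length at which the running total first reaches half the *genome*
    # size (N50 is analogous but uses half the *assembly* size instead).
    def ng50(scaffold_lengths, genome_size):
        total = 0
        for length in sorted(scaffold_lengths, reverse=True):
            total += length
            if total >= genome_size / 2:
                return length
        return 0    # the assembly covers less than half the genome

    print(ng50([500, 400, 300, 200, 100], genome_size=2000))   # -> 300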
Software compared: ABySS, Phusion2, phrap, Velvet, SOAPdenovo, PRICE, ALLPATHS-LG
N50 analysis: assemblies by the Plant Genome Assembly Group (using the assembler Meraculous) and ALLPATHS, Broad Institute, USA (using ALLPATHS-LG) performed the best in this category, by an order of magnitude over other groups. These assemblies scored an N50 of >8,000,000 bases.
Coverage of genome by assembly: for this metric, BGI's assembly via SOAPdenovo performed best, with 98.8% of the total genome being covered. All assemblers performed relatively well in this category, with all but three groups having coverage of 90% and higher, and the lowest total coverage being 78.5% (Dept. of Comp. Sci., University of Chicago, USA via Kiki).
Substitution errors: the assembly with the lowest substitution error rate was submitted by the Wellcome Trust Sanger Institute, UK team using the software SGA.
Overall: no single assembler performed significantly better than the others in all categories. While some assemblers excelled in one category, they did not in others, suggesting that there is still much room for improvement in assembler software quality.
Assemblathon 2
Assemblathon 2 improved on Assemblathon 1 by incorporating the genomes of multiple vertebrates (a bird (Melopsittacus undulatus), a fish (Maylandia zebra), and a snake (Boa constrictor constrictor), with genomes estimated to be 1.2, 1.0, and 1.6 Gbp in length, respectively) and assessment by over 100 metrics. Each team was given four months to assemble their genome from Next-Generation Sequence (NGS) data, including Illumina and Roche 454 sequence data.
Software compared: ABySS, ALLPATHS-LG, PRICE, Ray, and SOAPdenovo
N50 analysis: for the assembly of the bird genome, the Baylor College of Medicine Human Genome Sequencing Center and ALLPATHS teams had the highest NG50s, at over 16,000,000 and over 14,000,000 bp, respectively.
Presence of core genes: Most assemblies performed well in this category (~80% or higher), with only one dropping to just over 50% in their bird genome assembly (Wayne State University via HyDA).
Overall: Overall, the Baylor College of Medicine Human Genome Sequencing Center utilizing a variety of assembly methods (SeqPrep, KmerFreq, Quake, BWA, Newbler, ALLPATHS-LG, Atlas-Link, Atlas-GapFill, Phrap, CrossMatch, Velvet, BLAST, and BLASR) performed the best for the bird and fish assemblies. For the snake genome assembly, the Wellcome Trust Sanger Institute using SGA, performed best. For all assemblies, SGA, BCM, Meraculous, and Ray submitted competitive assemblies and evaluations. The results of the many assemblies and evaluations described here suggest that while one assembler may perform well on one species, it may not perform as well on another. The authors make several suggestions for assembly: 1) use more than one assembler, 2) use more than one metric for evaluation, 3) select an assembler that excels in metrics of more interest (e.g., N50, coverage), 4) low N50s or assembly sizes may not be concerning, depending on user needs, and 5) assess the levels of heterozygosity in the genome of interest.
See also
Sequence assembly
Sequence alignment
De novo transcriptome assembly
References
Bioinformatics algorithms
Bioinformatics software
DNA sequencing
Metagenomics software | De novo sequence assemblers | Chemistry,Biology | 1,789 |
22,875,393 | https://en.wikipedia.org/wiki/Forever%20Friends%20%28brand%29 | Forever Friends is a brand of Hallmark Cards, based on a Bear design. The Forever Friends bear can be found in 40 countries and in 15 languages.The bear was designed by artist Deborah Jones.
History
In 1987, artist Deborah Jones drew the first ever Forever Friends bear in her sketchbook. She approached the Bath, Somerset-based greeting card publisher Andrew Brownsword, who agreed to release the bear design as a greeting card. The pair had worked in a flat above a Chinese takeaway in Reading, Berkshire in the early 1980s:
"I wanted to develop a teddy bear that appealed to adults as well as children. I based Forever Friends specifically on the teddy bear that Sebastian Flyte carried around in Brideshead Revisited. It became the bear found in the attic."
In 1989, the design was registered under the trademark Forever Friends.
In 1994, Andrew Brownsword Ltd was purchased by Hallmark Cards. Brownsword himself agreed to become the CEO of Hallmark Europe, while the new investment allowed the whole card design to be updated to include a border.
In 1997, the new sub-brand Between Friends was issued, focusing on children's cards. This was followed in 2000 by Blanc, a plain pencil-on-white-card design aimed at older customers. This was also the last design set Deborah Jones worked on, as she retired from what was now Hallmark's Bath design studio.
In 2005, Forever Friends was relaunched with a water colour based look, textured paper on cards and a more modern font. By 2008, the range of cuddly toys had been supplemented by a range of digital downloads.
In 2011 the "Forever Friends Naturals" range was launched by Grace Cole Limited, taking the lovable bears as the face of a cute and cuddly range of natural skincare products for babies and children.
Jones also drew the Hoppy Street series, featuring a rabbit drawn in a style similar to the Forever Friends bears.
The Forever Friends range included cards, postcards and stationery, plush bears, framed art, mugs and kitchenware, figurines and bed linen.
Some time around 2019-2020, production of Forever Friends plush bears ceased. Jones died in 2022.
References
British brands
Greeting cards
Teddy bears
Hallmark Cards
Fictional bears | Forever Friends (brand) | Engineering | 442 |