Dataset fields (column: type, min to max):
id: int64, 39 to 79M
url: string, length 32 to 168
text: string, length 7 to 145k
source: string, length 2 to 105
categories: list, length 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, length 0 to 27
5,478,160
https://en.wikipedia.org/wiki/CLEO%20%28particle%20detector%29
CLEO was a general purpose particle detector at the Cornell Electron Storage Ring (CESR), and the name of the collaboration of physicists who operated the detector. The name CLEO is not an acronym; it is short for Cleopatra and was chosen to go with CESR (pronounced Caesar). CESR was a particle accelerator designed to collide electrons and positrons at a center-of-mass energy of approximately 10 GeV. The energy of the accelerator was chosen before the first three bottom quark Upsilon resonances were discovered between 9.4 GeV and 10.4 GeV in 1977. The fourth Υ resonance, the Υ(4S), was slightly above the threshold for, and therefore ideal for the study of, B meson production. CLEO was a hermetic detector that in all of its versions consisted of a tracking system inside a solenoid magnet, a calorimeter, particle identification systems, and a muon detector. The detector underwent five major upgrades over the course of its thirty-year lifetime, both to upgrade the capabilities of the detector and to optimize it for the study of B mesons. The CLEO I detector began collecting data in October 1979, and CLEO-c finished collecting data on March 3, 2008. CLEO initially measured the properties of the Υ(1–3S) resonances below the threshold for producing B mesons. Increasing amounts of accelerator time were spent at the Υ(4S) as the collaboration became more interested in the study of B mesons. Once the CUSB experiment was discontinued in the late 1980s, CLEO then spent most of its time at the Υ(4S) and measured many important properties of the B mesons. While CLEO was studying the B mesons, it was also able to measure the properties of D mesons and tau leptons, and discover many new charm hadrons. When the BaBar and Belle B factories began to collect large amounts of data in the early 2000s, CLEO was no longer able to make competitive measurements of B mesons. CLEO revisited the Υ(1-3S) resonances, then underwent its last upgrade to CLEO-c. CESR ran at lower energies and CLEO measured many properties of the ψ resonances and D mesons. CLEO was the longest running experiment in the history of particle physics. History Proposal and construction Cornell University had built a series of synchrotrons since the 1940s. The 10 GeV synchrotron in operation during the 1970s had conducted a number of experiments, but it ran at much lower energy than the 20 GeV linear accelerator at SLAC. As late as October 1974, Cornell planned to upgrade the synchrotron to reach energies of 25 GeV and build a new synchrotron to reach 40 GeV. After the discovery of the J/Ψ in November 1974 demonstrated that interesting physics could be done with an electron-positron collider, Cornell submitted a proposal in 1975 for an electron-positron collider operating up to center-of-mass energies of 16 GeV using the existing synchrotron tunnel. An accelerator at 16 GeV would explore the energy region between that of the SPEAR accelerator and the PEP and PETRA accelerators. CESR and CLEO were approved in 1977 and mostly finished by 1979. CLEO was built in the large experimental hall at the south end of CESR; a smaller detector named CUSB (for Columbia University-Stony Brook) was built at the north interaction region. Between the proposal for and construction of CESR and CLEO, Fermilab discovered the Υ resonances and suggested that as many as three states existed. The Υ(1S) and Υ(2S) were confirmed at the DORIS accelerator. The first order of business once CESR was running was to find the Υs. 
CLEO and CUSB found the Υ(1S) shortly after beginning to collect data, and used the mass difference from DORIS to quickly find the Υ(2S). CESR's higher beam energies allowed CLEO and CUSB to find the more massive Υ(3S) and discover the Υ(4S). Furthermore, the presence of an excess of electrons and muons at the Υ(4S) indicated that it decayed to B mesons. CLEO proceeded to publish over sixty papers using the original CLEO I configuration of the detector. CLEO had competition in the measurement of B mesons, particularly from the ARGUS collaboration. The CLEO collaboration was worried that the ARGUS detector at DESY would be better than CLEO, so it began to plan for an upgrade. The improved detector would use a new drift chamber for tracking and dE/dx measurements, a cesium iodide calorimeter inside a new solenoid magnet, time-of-flight counters, and new muon detectors. The new drift chamber (DR2) had the same outer radius as the original drift chamber to allow it to be installed before the other components were ready. CLEO collected data for two years in the CLEO I.V configuration: new drift chamber, ten-layer vertex detector (VD) inside the drift chamber, three-layer straw tube drift chamber insert (IV) inside the VD, and a prototype CsI calorimeter replacing one of the original pole-tip shower detectors. The highlight of the CLEO I.V era was the observation of semi-leptonic B decays to charmless final states, submitted less than three weeks before a similar observation from ARGUS. The shutdown for the installation of DR2 allowed ARGUS to beat CLEO to the observation of B mixing, which was the most cited measurement of any of the symmetric B experiments. CLEO II CLEO shut down in April 1988 to begin the remainder of the CLEO II installation, and finished the upgrade in August 1989. A six-layer straw chamber precision tracker (PT) replaced the IV, and the time-of-flight detectors, CsI calorimeter, solenoid magnet and iron, and muon chambers were all installed. This would be the CLEO II configuration of the detector. During the CLEO II era, the collaboration observed the flavor-changing neutral current decays B+,0→ K*+,0 γ and b → s γ. Decays of B mesons to two charmless mesons were also discovered during CLEO II. These decays were of interest because of the possibility of observing CP violation in decays such as K±π0, although such a measurement would require large amounts of data. Observation of time-dependent asymmetries in the production of certain flavor-symmetric final states (such as J/Ψ K) was an easier way to detect CP violation in B mesons, both theoretically and experimentally. An asymmetric accelerator, one in which the electrons and positrons had different energies, was necessary to measure the time difference between B0 and B̄0 decays. CESR and CLEO submitted a proposal to build a low energy ring in the existing tunnel and upgrade the CLEO II detector with NSF funding. SLAC also submitted a proposal to build a B factory with DOE funds. The initial designs were first reviewed in 1991, but DOE and NSF agreed that insufficient funds were available to build either facility and a decision on which one to build was postponed. The proposals were reconsidered in 1993, this time with both facilities competing for DOE money. In October 1993, it was announced that the B factory would be built at SLAC. After losing the competition for the B factory, CESR and CLEO proceeded with a two-part plan to upgrade the accelerator and the detector.
The first phase was the upgrade to the CLEO II.V configuration between May and October 1995, which included a silicon detector to replace the PT and a change of the gas mixture in the drift chamber from an argon-ethane mix to a helium-propane mix. The silicon detector provided excellent vertex resolution, allowing precise measurements of D0, D+, Ds and τ lifetimes and D mixing. The drift chamber had better efficiency and momentum resolution. CLEO III The second phase of the upgrade included new superconducting quadrupoles near the detector. The VD and DR2 detectors would need to be replaced to make room for the quadrupole magnets. A new silicon detector and particle identification chamber would also be included in the CLEO-III configuration. The CLEO III upgrade replaced the drift chamber and silicon detector and added a ring-imaging Cherenkov (RICH) detector for enhanced particle identification. The CLEO III drift chamber (DR3) achieved the same momentum resolution as the CLEO II.V drift chamber, despite having a shorter lever arm to accommodate the RICH detector. The mass of the CLEO III endplates was also reduced to allow better resolution in the endcap calorimeters. CLEO II.V had stopped collecting data in February 1999. The RICH detector was installed beginning in June 1999, and DR3 was installed immediately afterwards. The silicon detector was to be installed next, but it was still being built. An engineering run was taken until the silicon detector was ready for installation in February 2000. CLEO III collected 6 fb−1 of data at the Υ(4S) and another 2 fb−1 below the Υ(4S). With the advent of the high luminosity BaBar and Belle experiments, CLEO could no longer make competitive measurements of most of the properties of the B mesons. CLEO decided to study the various bottom and charm quarkonia states and charm mesons. The program began by revisiting the Υ states below the B meson threshold and the last data collected with the CLEO-III detector was at the Υ(1-3S) resonances. CLEO-c CLEO-c was the final version of the detector, and it was optimized for taking data at the reduced beam energies needed for studies of the charm quark. It replaced the CLEO III silicon detector, which suffered from lower-than-expected efficiency, with a six layer, all stereo drift chamber (ZD). CLEO-c also operated with the solenoid magnet at a reduced magnetic field of 1 T to improve the detection of low momentum charged particles. The low particle multiplicities at these energies allowed efficient reconstruction of D mesons. CLEO-c measured properties of the D mesons that served as inputs to the measurements made by the B factories. It also measured many of the quarkonia states that helped verify lattice QCD calculations. Detector CLEO's subdetectors perform three main tasks: tracking of charged particles, calorimetry of neutral particles and electrons, and identification of charged particle type. Tracking CLEO has always used a solenoid magnet to allow the measurement of charged particles. The original CLEO design called for a superconducting solenoid, but it was clear that one could not be built in time. A conventional 0.42 T solenoid was installed first, then replaced by the superconducting magnet in September 1981. The superconducting coil was designed to operate at 1.2 T, but it was never operated above 1.0 T. A new magnet was built for the CLEO II upgrade and was placed between the calorimeter and the muon detector. It operated at 1.5 T until CLEO-c, when the magnetic field was reduced to 1.0 T. 
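The momentum measurement itself follows from the curvature of a charged particle's helix in the solenoid field, pT [GeV/c] ≈ 0.3·B[T]·R[m]. A minimal sketch of that relation (illustrative only, not CLEO reconstruction code; the example momenta are arbitrary, and the field values are the ones quoted above):

```python
# Relation between a charged particle's transverse momentum and the radius of
# curvature of its track in a solenoidal field (unit charge assumed):
#     pT [GeV/c] ~ 0.2998 * B [T] * R [m]
# Illustrative sketch only, not CLEO reconstruction code.

def pt_from_curvature(radius_m: float, b_field_t: float) -> float:
    """Transverse momentum (GeV/c) of a unit-charge track with curvature radius R."""
    return 0.2998 * b_field_t * radius_m

def radius_from_pt(pt_gev: float, b_field_t: float) -> float:
    """Curvature radius (m) of a unit-charge track with transverse momentum pT."""
    return pt_gev / (0.2998 * b_field_t)

# A 1 GeV/c pion in the 1.5 T CLEO II field bends with a ~2.22 m radius:
print(radius_from_pt(1.0, 1.5))

# In the reduced 1.0 T CLEO-c field, a low-momentum 150 MeV/c track has a
# ~0.50 m radius instead of ~0.33 m at 1.5 T, so it penetrates farther into
# the tracking volume before curling back:
print(radius_from_pt(0.15, 1.0))
```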
Wire chambers The original CLEO detector used three separate tracking chambers. The innermost chamber (IZ) was a three layer proportional wire chamber that occupied the region between a radius of 9 cm and 17 cm. Each layer had 240 anode wires to measure track azimuth and 144 cathode strip hoops 5 mm wide inside and outside the anode wires (864 cathode strips total) to measure track z. The CLEO I drift chamber (DR) was immediately outside the IZ and occupied the region between a radius of 17.3 cm and 95 cm. It consisted of seventeen layers of 11.3 mm × 10.0 mm cells with 42.5 mm between the layers, for a total of 5304 cells. There were two layers of field wires for every layer of sense wires. The odd-numbered layers were axial layers, and the even-numbered layers were alternating stereo layers. The last CLEO I dedicated tracking chamber was the planar outer Z drift chamber (OZ) between the solenoid magnet and the dE/dx chambers. It consisted of three layers separated radially by 2.5 cm. The innermost layer was perpendicular to the beamline, and the outer two layers were at ±10° relative to the innermost chamber to provide some azimuthal tracking information. Each octant was equipped with an OZ chamber. A new drift chamber, DR2, was built to replace the original drift chamber. The new drift chamber had the same outer radius as the original one so that it could be installed before the rest of the CLEO II upgrades were ready. DR2 was a 51 layer detector, with a 000+000- axial/stereo layer arrangement. DR2 had only one layer of field wires between each layer of sense wires, allowing many more layers to fit in the allotted space. The axial sense wires had a half-cell stagger to help resolve the left-right ambiguity of the original drift chamber. The inner and outer field layers of the chamber were cathode strips to make measurements of the longitudinal coordinate of tracks. DR2 was also designed to make dE/dx measurements in addition to tracking measurements. The IZ chamber was replaced with a ten-layer drift chamber (VD) in 1984. When the beampipe radius was reduced from 7.5  to 5.0 cm in 1986, a three-layer straw chamber (IV) was built to occupy the newly available space. The IV was replaced during the CLEO II upgrade with a five-layer straw tube with a 3.5 cm inner radius. The CLEO III drift chamber (DR3) was designed to have similar performance as the CLEO II/II.V drift chamber even though it would be smaller to allow space for the RICH detector. The innermost sixteen layers were axial, and the outermost 31 layers were grouped in alternating stereo four-layer superlayers. The outer wall of the drift chamber was instrumented with 1 cm wide cathode pads to provide additional z measurements. The last drift chamber built for CLEO was the inner drift chamber ZD for the CLEO-c upgrade. Its six layer, all stereo layer design would provide longitudinal measurements of low-momentum tracks that would not reach stereo layers of the main drift chamber. With the exception of the larger stereo angle and smaller cell size, the ZD design was very similar to the DR3 design. Silicon detectors CLEO built its first silicon vertex detector for the CLEO II.V upgrade. The silicon detector was a three-layer device, arranged in octants. The innermost layer was at a radius of 2.4 cm and the outermost layer was at a radius of 4.7 cm. A total of 96 silicon wafers were used, with a total of 26208 readout channels. The CLEO III upgrade included a new four layer, double-sided silicon vertex detector. 
It was made of 447 identical 1 in × 2 in wafers with a 50 micrometre strip pitch on the r-φ side and a 100 micrometre pitch on the z side. The silicon detector achieved 85% efficiency after installation, but soon began to suffer increasingly large inefficiencies. The inefficiencies were found in roughly semi-circular regions on the wafers. The silicon detector was replaced for CLEO-c because of its poor performance, the reduced need for vertexing capabilities, and the desire to minimize the material near the beampipe. Calorimetry CLEO I had three separate calorimeters. All used layers of proportional tubes interleaved with sheets of lead. The octant shower detectors were outside the time-of-flight detectors in each of the octants. Each octant detector had 44 layers of proportional tubes, alternating parallel and perpendicular to the beampipe. Wires were ganged together to reduce the number of readout channels for a total of 774 gangs. The octant end shower detectors were sixteen layer devices placed at either end of the dE/dx chambers. The layers followed an azimuthal, positive stereo, azimuthal, negative stereo pattern. The stereo wires were parallel to the slanted sides of the detector. The layers were ganged in a similar fashion as the octant shower detectors. The pole tip shower detector was placed between the ends of the drift chamber and the pole tips of the magnet flux return. The pole tip shower detector had 21 layers, with seven groups of vertical, +120°, -120° layers. The shower detector on each side was built in two halves to allow access to the beampipe. The calorimetry was significantly improved during the CLEO II upgrade. The new electromagnetic calorimeter used 7784 CsI crystals doped with thallium. Each crystal was roughly 30 cm deep and had a 5 cm × 5 cm face. The central region of the calorimeter was a cylinder placed between the drift chamber and the solenoid magnet, and two endcap calorimeters were placed at either end of the drift chamber. The crystals in the endcap were oriented parallel to the beam line. The crystals in the central calorimeter faced a point displaced from the interaction point both longitudinally and transversely by a few centimeters to avoid inefficiencies from particles passing between neighboring crystals. The calorimeter primarily measured the energy of photons or electrons, however it was also used to detect antineutrons. All versions of the detector from CLEO-II through CLEO-c used the CsI calorimeter. Particle identification Five types of long-lived, charged particles are produced at CLEO: electrons, pions, muons, kaons and protons. Proper identification of each of these types significantly improves the capabilities of the detector. Particle identification was done by both dedicated subdetectors and by the calorimeter and drift chamber. The outer portion of the CLEO detector was divided into independent octants that were primarily dedicated to charged particle identification. No clear consensus was reached on the choice of technology for particle identification, therefore two octants were equipped with dE/dx ionization chambers, two octants were equipped with high pressure gas Cerenkov detectors, and four octants were equipped with low pressure gas Cerenkov detectors. The dE/dx system demonstrated superior particle identification performance and aided in tracking, therefore in September 1981 all eight octants were equipped with dE/dx chambers. 
The dE/dx chambers measured the ionization of charged particles as they passed through a multiwire proportional chamber (MWPC). Each dE/dx octant was made with 124 separate modules, and each module contained 117 wires. Groups of ten modules were ganged together to minimize the number of readout channels. The first two and last two modules were not instrumented, so each octant had twelve cells. The time-of-flight detector was directly outside the dE/dx chambers. It identified a charged particle by measuring its velocity and comparing it to the momentum measurement from the tracking chambers. Scintillating bars were arranged parallel to the beamline, with six bars for each half of the octant. The six bars in each octant half overlapped to avoid having any uninstrumented regions. The scintillation photons were detected by photomultiplier tubes. Each bar was 2.03 m × 0.312 m × 0.025 m. The CLEO I muon drift chambers were the outermost detectors. Two layers of muon detectors were outside the magnet iron on either end of CLEO. The barrel region had two additional layers of muon chambers after 15 cm and 30 cm of magnet iron. The muon detectors were between 4 and 10 radiation lengths deep and were sensitive to muons with energies of at least 1-2 GeV. The magnet yoke weighed 580 tons, and each of four movable carts at each corner of the detector weighed 240 tons, for a total of 1540 tons. CLEO II used time-of-flight detectors between the drift chamber and the calorimeter, one in the barrel region, the other in the endcap region. The barrel region consisted of 64 Bicron bars with light guides leading to photomultiplier tubes outside the magnetic field region. A similar system covered the endcap region. The TOF system had a timing resolution of 150 ps. The central and endcap TOF detectors combined covered 97% of the solid angle. The CLEO I muon detector was far enough away from the interaction region that in-flight decays of pions and kaons were a significant background. The more compact structure of the CLEO II detector allowed the muon detectors to be moved closer to the interaction point. Three layers of muon detectors were placed behind layers of iron absorbers. The streamer counters were read out from each end to determine the z position. The CLEO III upgrade included the addition of the RICH subdetector, a dedicated particle identification subdetector. The RICH detector was required to occupy less than 20 cm in the radial direction, between the drift chamber and the calorimeter, and to contribute less than 12% of a radiation length. The RICH detector used the Cerenkov radiation of charged particles to measure their velocity. Combined with the momentum measurement from the tracking detectors, the mass of the particle, and therefore its identity, could be determined. Charged particles produced Cerenkov light as they passed through a LiF window. Fourteen rings of thirty LiF crystals each made up the radiator of the RICH, and the four centermost rings had a sawtooth pattern to prevent total internal reflection of the Cerenkov photons. The photons traveled through a nitrogen expansion volume, which allowed the cone angle to be precisely determined. The photons were detected by 7.5 mm × 8.0 mm cathode pads in a multi-wire chamber containing a methane-triethylamine gas mixture. Physics program CLEO has published over 200 articles in Physical Review Letters and more than 180 articles in Physical Review. The reports of inclusive and exclusive b → s γ have both been cited over 500 times.
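The time-of-flight and RICH systems both identify a particle by combining an independent velocity measurement with the tracker momentum; the mass then follows from m = p·sqrt(1/β² − 1). A minimal sketch of that logic (illustrative only, not CLEO analysis code; the flight path and the LiF refractive index are assumed values):

```python
import math

# Particle identification from an independent velocity measurement combined
# with the tracker momentum: m = p * sqrt(1/beta^2 - 1), with p in GeV/c and
# m in GeV/c^2. Illustrative sketch only, not CLEO analysis code.

def mass_from_beta(p_gev: float, beta: float) -> float:
    """Invariant mass (GeV/c^2) from momentum and measured velocity beta = v/c."""
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

def beta_from_tof(path_m: float, time_ns: float) -> float:
    """Velocity from a time-of-flight measurement over a known flight path."""
    c = 0.2998  # metres per nanosecond
    return path_m / (time_ns * c)

def beta_from_cherenkov(cos_theta: float, n: float) -> float:
    """Velocity from the Cherenkov cone angle: cos(theta) = 1 / (n * beta)."""
    return 1.0 / (n * cos_theta)

# A 1 GeV/c track crossing an assumed 1.0 m flight path in 3.72 ns is consistent
# with a kaon (m ~ 0.49 GeV/c^2); a pion of the same momentum arrives ~0.35 ns earlier.
print(mass_from_beta(1.0, beta_from_tof(1.0, 3.72)))

# The same track identified from its Cherenkov cone angle, assuming an
# illustrative LiF refractive index of 1.5:
print(mass_from_beta(1.0, beta_from_cherenkov(0.7435, 1.5)))
```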
B physics was usually CLEO's top priority, but the collaboration has made measurements across a wide spectrum of particle physics topics. B mesons CLEO's most cited paper reported the first measurement of the flavor-changing neutral current decay b→sγ. The measurement agreed well with the Standard Model and placed significant constraints on numerous beyond the Standard Model proposals, such as charged Higgs and anomalous WWγ couplings. The analogous exclusive decay B+,0→ K*+,0 γ was also measured. CLEO and ARGUS reported nearly simultaneous measurements of inclusive charmless semileptonic B meson decays, which directly established a non-zero value of the CKM matrix element |Vub|. Exclusive charmless semileptonic B meson decays were first observed by CLEO six years later in the modes B → πlν, ρlν, and were used to determine |Vub|. CLEO also discovered many of the hadronic analogs: B+,0→ K(892)+π−, φ K(*), K+π0, K0π0, π+π−, π+ρ0, π+ρ−, π+ω η K*, η′ K and K0π+, K+π−. These charmless hadronic decay modes can probe CP violation and are sensitive to the angles α and γ of the unitarity triangle. Finally, CLEO observed many exclusive charmed decays of B mesons, including several that are sensitive to |Vcb|: B→ D(*)K*−, 0→ D*0π0 B→ Λπ−, Λπ+π−, 0→ D*0π+π+π−π−, 0→ D*ρ′−, B0→ D*−pπ+, D*−p, B→ J/Ψ φ K, B0→ D*+D*−, and B+→ 0 K+. Charm hadrons Although CLEO ran mainly near the Υ(4S) to study B mesons, it was also competitive with experiments designed to study charm hadrons. The first measurement of charm hadron properties by CLEO was the observation of the Ds. CLEO measured a mass of 1970±7 MeV, considerably lower than previous observations at 2030±60 MeV and 2020±10 MeV. CLEO discovered the DsJ(2573) and the DsJ(2463). CLEO was the first experiment to measure the doubly Cabibbo suppressed decay D0→ K+π−, and CLEO performed Dalitz analyses of D0,+ in several decay modes. CLEO studied the D*(2010)+, making the first measurement of its width and the most precise measurement of the D*-D0 mass difference. CLEO-c made many of the most accurate measurements of D meson branching ratios in inclusive channels, μ+νμ, semileptonic decays, and hadronic decays. These branching fractions are important inputs to B meson measurements at BaBar and Belle. CLEO first observed the purely leptonic decay D→μ+, which provided an experimental measure of the decay constant fDs. CLEO-c made the most precise measurements of fD+ and fDs. These decay constants are in turn a key input to the interpretation of other measurements, such as B mixing. Other D decay modes discovered by CLEO are p, ωπ+, η ρ+, η'ρ+, φρ+, η π+, η'π+, and φ l ν. CLEO discovered many charmed baryons and discovered or improved the measurement of many charmed baryon decay modes. Before BaBar and Belle began discovering new charm baryons in 2005, CLEO had discovered thirteen of the twenty known charm baryons: Ξ, Ξ(2790), Ξ(2815), Ξ, Σ(2520), Ξ(2645), Ξ(2645), and Λ(2593). Charmed baryon decay modes discovered at CLEO are Ω→ Ω−e+e; Λ→ p0η, Ληπ+, Σ+η, Σ*+η, Λ0K+, Σ+π0, Σ+ω, Λπ+π+π−π0, Λωπ+; and Ξ→Ξ0e+ e. Quarkonium Quarkonium states provide experimental input for lattice QCD and non-relativistic QCD calculations. CLEO studied the Υ system until the end of the CUSB and CUSB-II experiments, then returned to the Υ system with the CLEO III detector. CLEO-c studied the lower mass ψ states. CLEO and CUSB published their first papers back-to-back, reporting observation of the first three Υ states. 
Earlier claims of the Υ(3S) relied on fits of one peak with three components; CLEO and CUSB's observation of three well separated peaks dispelled any remaining doubt about the existence of the Υ(3S). The Υ(4S) was discovered shortly after by CLEO and CUSB and was interpreted as decaying to B mesons because of its large decay width. An excess of electrons and muons at the Υ(4S) demonstrated the existence of weak decays and confirmed the interpretation of the Υ(4S) decaying to B mesons. CLEO and CUSB later reported the existence of the Υ(5S) and Υ(6S) states. CLEO I through CLEO II had significant competition in Υ physics, primarily from the CUSB, Crystal Ball and ARGUS experiments. CLEO was able, however, to observe a number of Υ(1S) decays: τ+τ−, J/Ψ X and γ X with X = π+, π0, 2π+, π+K+, π+p, 2K+, 3π+, 2π+K+, and 2π+p. The radiative decays are sensitive to the production of glueballs. CLEO collected more data at the Υ(1-3S) resonances at the end of the CLEO III era. CLEO III discovered the Υ(1D) state, the χb1,2(2P)→ωΥ(1S) transitions, and Υ(3S)→τ+τ− decays among others. CLEO-c measured many of the properties of the charmonium states. Highlights include confirmation of ηc', confirmation of Y(4260), pseudoscalar-vector decays of ψ(2S), ψ(2S)→J/ψ decays, observation of thirteen new hadronic decays of ψ(2S), observation of hc(1P1), and measurement of the mass and branching fractions of η in ψ(2S)→J/ψ decay. Tau leptons CLEO discovered six decay modes of the τ: τ → K−π0ντ, e−ντeγ, π−π−π+η ντ, π−π0π0η ντ, f1π ντ, K−η ντ and K−ωντ. CLEO measured the lifetime of the τ three times with a precision comparable or better than any other measurements at the time. CLEO also measured the mass of the τ twice. CLEO set limits on the mass of ντ several times, although the CLEO limit was never the most stringent one. CLEO's measurements of the Michel parameters were the most precise for their time, many by a substantial margin. Other measurements CLEO has studied two-photon physics, where both an electron and positron radiate a photon. The two photons interact to produce either a vector meson or hadron-antihadron pairs. CLEO published measurements of both the vector meson process and the hadron-antihadron process. CLEO performed an energy scan for center-of-mass energies between 7 GeV and 10 GeV to measure the hadronic cross section ratio. CLEO made the first measurements of the π+ and K+ electromagnetic form factors above Q2 > 4 GeV2. Finally, CLEO has performed searches for Higgs and beyond SM particles: Higgs bosons, axions, magnetic monopoles, neutralinos, fractionally charged particles, bottom squarks, and familons. Collaboration Initial design of a detector for the south interaction region of CESR began in 1975. Physicists from Harvard University, Syracuse University and the University of Rochester had worked at the Cornell synchrotron, and were natural choices as collaborators with Cornell. They were joined by groups from Rutgers University and Vanderbilt University, along with collaborators from LeMoyne College and Ithaca College. Additional institutions were assigned responsibility for detector components as they joined the collaboration. Cornell appointed a physicist to oversee development of the portion of the detector inside the magnet, outside the magnet, and of the magnet itself. 
The structure of the collaboration was designed to avoid perceived shortcomings at SLAC, where SLAC physicists were felt to dominate operations by virtue of their access to the accelerator and detector and to computing and machine facilities. Collaborators were free to work on the analysis of their choosing, and the approval of results for publication was by collaboration-wide vote. The spokesperson (later spokespeople) was also selected by a collaboration-wide vote in which graduate students took part. The other officers in the collaboration were an analysis coordinator and a run manager, later joined by a software coordinator. The first CLEO paper listed 73 authors from eight institutions. Cornell University, Syracuse University and the University of Rochester have been members of CLEO for its entire history, and forty-two institutions have been members of CLEO at one time or another. The collaboration reached its largest size, 212 members, in 1996, before collaborators began to move to the BaBar and Belle experiments. The largest number of authors to appear on a CLEO paper was 226. A paper published near the time CLEO stopped taking data had 123 authors. Notes References AIP Study of Multi-Institutional Collaborations Phase I: High-Energy Physics Particle detectors
CLEO (particle detector)
[ "Technology", "Engineering" ]
6,779
[ "Particle detectors", "Measuring instruments" ]
5,479,047
https://en.wikipedia.org/wiki/Aortic%20orifice
The aortic orifice (aortic opening) is a circular opening, in front and to the right of the left atrioventricular orifice, from which it is separated by the anterior cusp of the bicuspid valve. It is guarded by the aortic semilunar valve. The portion of the ventricle immediately below the aortic orifice is termed the aortic vestibule, and has fibrous instead of muscular walls. References External links Circulatory system
Aortic orifice
[ "Biology" ]
109
[ "Organ systems", "Circulatory system" ]
5,479,075
https://en.wikipedia.org/wiki/Rapid%20modes%20of%20evolution
Rapid modes of evolution have been proposed by several notable biologists after Charles Darwin proposed his theory of evolutionary descent by natural selection. In his book On the Origin of Species (1859), Darwin stressed the gradual nature of descent, writing: It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were. (1859) Evolutionary developmental biology Work in developmental biology has identified dynamical and physical mechanisms of tissue morphogenesis that may underlie such abrupt morphological transitions. Consequently, consideration of mechanisms of phylogenetic change that are actually (not just apparently) non-gradual is increasingly common in the field of evolutionary developmental biology, particularly in studies of the origin of morphological novelty. A description of such mechanisms can be found in the multi-authored volume Origination of Organismal Form. See also Evolution Evolutionary developmental biology Otto Schindewolf Punctuated equilibrium Quantum evolution Richard Goldschmidt Saltationism Industrial melanism Peppered moth evolution Bibliography Darwin, C. (1859) On the Origin of Species London: Murray. Goldschmidt, R. (1940) The Material Basis of Evolution. New Haven, Conn.: Yale University Press. Gould, S. J. (1977) "The Return of Hopeful Monsters" Natural History 86 (June/July): 22-30. Gould, S. J. (2002) The Structure of Evolutionary Theory. Cambridge MA: Harvard Univ. Press. Müller, G. B. and Newman, S. A., eds. (2003) Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology. Cambridge: The MIT Press. Schindewolf, O. H. (1963) Neokatastrophismus? Zeits. Deutsch. Geol. Res. 114: 430-435. Newman, S. A. and Bhat, R. (2009) Dynamical patterning modules: a "pattern language" for development and evolution of multicellular form. Int. J. Dev. Biol. 53: 693-705 Evolutionary biology
Rapid modes of evolution
[ "Biology" ]
532
[ "Evolutionary biology" ]
5,480,019
https://en.wikipedia.org/wiki/Immunochemistry
Immunochemistry is the study of the chemistry of the immune system. This involves the study of the properties, functions, interactions and production of the chemical components (antibodies/immunoglobulins, toxins, epitopes of proteins like CD4, antitoxins, cytokines/chemokines, antigens) of the immune system. It also includes the study of immune responses and the determination of immune materials/products by immunochemical assays. In addition, immunochemistry is the study of the identities and functions of the components of the immune system. Immunochemistry is also used to describe the application of immune system components, in particular antibodies, to chemically labelled antigen molecules for visualization. Various methods in immunochemistry have been developed and refined, and used in scientific study, from virology to molecular evolution. Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay and flow cytometry. One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis. Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins. Immunochemistry also encompasses the use of antibodies to label epitopes of interest in cells (immunocytochemistry) or tissues (immunohistochemistry). References Branches of immunology
Immunochemistry
[ "Biology" ]
380
[ "Branches of immunology" ]
5,480,302
https://en.wikipedia.org/wiki/Differentiation%20in%20Fr%C3%A9chet%20spaces
In mathematics, in particular in functional analysis and nonlinear analysis, it is possible to define the derivative of a function between two Fréchet spaces. This notion of differentiation, being the Gateaux derivative between Fréchet spaces, is significantly weaker than the derivative in a Banach space, even between general topological vector spaces. Nevertheless, it is the weakest notion of differentiation for which many of the familiar theorems from calculus hold. In particular, the chain rule is true. With some additional constraints on the Fréchet spaces and functions involved, there is an analog of the inverse function theorem called the Nash–Moser inverse function theorem, having wide applications in nonlinear analysis and differential geometry. Mathematical details Formally, the definition of differentiation is identical to the Gateaux derivative. Specifically, let X and Y be Fréchet spaces, U ⊆ X be an open set, and F: U → Y be a function. The directional derivative of F at u ∈ U in the direction h ∈ X is defined by DF(u)h = lim_{t→0} (F(u + th) − F(u))/t if the limit exists. One says that F is continuously differentiable, or C¹, if the limit exists for all u ∈ U and h ∈ X and the mapping DF: U × X → Y is a continuous map. Higher-order derivatives are defined inductively via D^{k+1}F(u){h₁, …, h_{k+1}} = lim_{t→0} (D^k F(u + t h_{k+1}){h₁, …, h_k} − D^k F(u){h₁, …, h_k})/t. A function is said to be C^k if D^k F: U × X^k → Y is continuous. It is C^∞, or smooth, if it is C^k for every k. Properties Let X, Y and Z be Fréchet spaces. Suppose that U is an open subset of X, V is an open subset of Y, and F: U → V, G: V → Z are a pair of continuously differentiable functions. Then the following properties hold: Fundamental theorem of calculus. If the line segment from a to b lies entirely within U, then F(b) − F(a) = ∫₀¹ DF(a + (b − a)t)·(b − a) dt. The chain rule. For all u ∈ U and h ∈ X, D(G ∘ F)(u)h = DG(F(u)) DF(u)h. Linearity. DF(u)h is linear in h. More generally, if F is C^k, then D^k F(u){h₁, …, h_k} is multilinear in the h_i's. Taylor's theorem with remainder. Suppose that the line segment between u and u + h lies entirely within U. If F is C^k, then F(u + h) = F(u) + DF(u)h + (1/2!) D²F(u){h, h} + ⋯ + (1/(k − 1)!) D^{k−1}F(u){h, …, h} + R_k, where the remainder term is given by R_k = ∫₀¹ ((1 − t)^{k−1}/(k − 1)!) D^k F(u + th){h, …, h} dt. Commutativity of directional derivatives. If F is C^k, then D^k F(u){h₁, …, h_k} = D^k F(u){h_{σ(1)}, …, h_{σ(k)}} for every permutation σ of {1, 2, …, k}. The proofs of many of these properties rely fundamentally on the fact that it is possible to define the Riemann integral of continuous curves in a Fréchet space. Smooth mappings Surprisingly, a mapping between open subsets of Fréchet spaces is smooth (infinitely often differentiable) if it maps smooth curves to smooth curves; see Convenient analysis. Moreover, smooth curves in spaces of smooth functions are just smooth functions of one variable more. Consequences in differential geometry The existence of a chain rule allows for the definition of a manifold modeled on a Fréchet space: a Fréchet manifold. Furthermore, the linearity of the derivative implies that there is an analog of the tangent bundle for Fréchet manifolds. Tame Fréchet spaces Frequently the Fréchet spaces that arise in practical applications of the derivative enjoy an additional property: they are tame. Roughly speaking, a tame Fréchet space is one which is almost a Banach space. On tame spaces, it is possible to define a preferred class of mappings, known as tame maps. On the category of tame spaces under tame maps, the underlying topology is strong enough to support a fully fledged theory of differential topology. Within this context, many more techniques from calculus hold. In particular, there are versions of the inverse and implicit function theorems. See also References Banach spaces Differential calculus Euclidean geometry Functions and mappings Generalizations of the derivative Topological vector spaces
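As a concrete instance of the Gateaux-derivative definition given above (an illustrative example, not taken from the article), consider the squaring map on the Fréchet space of smooth functions on the circle:

```latex
% Worked example (illustrative): the Gateaux derivative of the squaring map
% F(f) = f^2 on the Fréchet space X = Y = C^\infty(S^1).
\[
  DF(f)h \;=\; \lim_{t\to 0}\frac{(f+th)^2 - f^2}{t}
         \;=\; \lim_{t\to 0}\frac{2tfh + t^2h^2}{t}
         \;=\; 2fh .
\]
% DF(f)h = 2fh is linear in the direction h and jointly continuous in (f, h),
% so F is C^1 (indeed smooth) in the sense defined above.
```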
Differentiation in Fréchet spaces
[ "Mathematics" ]
650
[ "Mathematical analysis", "Functions and mappings", "Vector spaces", "Calculus", "Mathematical objects", "Space (mathematics)", "Topological vector spaces", "Mathematical relations", "Differential calculus" ]
5,480,422
https://en.wikipedia.org/wiki/Field%20cycling
Field cycling is a measurement method which uses variable magnetic fields to measure the magnetization of a sample. Fast field cycling is the same method except with fast switchable magnetic fields. Field cycling is either "mechanical" or "electrical." Mechanical field cycling physically moves the sample between two positions with different static magnetic fields, so no field switching is needed. Electrical field cycling requires switchable fields; the sample remains at its original position throughout. Field cycling is used in fast field cycling relaxometry to measure specific physical and chemical properties of materials. For instance, nuclear magnetic resonance frequencies depend on the molecular environment. Furthermore, nuclear spin-lattice relaxation rates depend on local molecular mobility. See also NMR spectroscopy References Measurement
Field cycling
[ "Physics", "Mathematics" ]
151
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
5,480,651
https://en.wikipedia.org/wiki/Materialise%20Mimics
Materialise Mimics is an image-processing software package for 3D design and modeling, developed by Materialise NV, a Belgian company specializing in additive manufacturing software and technology for the medical, dental and additive manufacturing industries. Materialise Mimics is used to create 3D surface models from stacks of 2D image data. These 3D models can then be used for a variety of engineering applications. Mimics is an acronym for Materialise Interactive Medical Image Control System. It is developed in an ISO-certified environment and has CE marking and FDA 510(k) premarket clearance. Materialise Mimics is commercially available as part of the Materialise Mimics Innovation Suite, which also contains Materialise 3-matic, a design and meshing software for anatomical data. The current version is 24.0 (released in 2021), and it supports Windows 10, Windows 7, Vista and XP in x64. Process Materialise Mimics calculates surface 3D models from stacked image data such as Computed Tomography (CT), Micro CT, Magnetic Resonance Imaging (MRI), Confocal Microscopy, X-ray and Ultrasound, through image segmentation. The region of interest (ROI) selected in the segmentation process is converted to a 3D surface model using an adapted marching cubes algorithm that takes the partial volume effect into account, leading to very accurate 3D models. The 3D files are represented in the STL format. Uploading Data DICOM data from CT or MRI images can be uploaded into Materialise Mimics in order to begin the segmentation process. From these data, three different views are presented: the coronal, axial, and sagittal views. Another window is present to display 3D objects. Mask Creation The "New Mask" tool can be used to highlight specific anatomy from the DICOM data. Printing Models Models can be sent to 3D printers in the form of STL files. Gallery See also 3D modeling 3D Slicer Computer representation of surfaces Computed tomography Medical imaging References External links User community Biomedical engineering Windows graphics-related software 3D graphics software Computer-aided design software
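The surface-extraction step described above (segmentation mask, then marching cubes, then a triangle mesh) can be illustrated with open-source tools; the sketch below uses scikit-image rather than Mimics itself, and the toy volume and threshold value are arbitrary assumptions:

```python
# Illustrative sketch of the pipeline described above: threshold a 3D image
# volume to obtain a segmentation mask, then extract a triangle surface with a
# marching cubes algorithm. Uses scikit-image, not Materialise Mimics; the
# threshold (300, roughly bone-level in a CT scan) is an assumption.
import numpy as np
from skimage import measure

def surface_from_volume(volume: np.ndarray, threshold: float = 300.0,
                        voxel_size=(1.0, 1.0, 1.0)):
    """Return (vertices, faces) of an isosurface extracted from a 3D volume."""
    # marching_cubes returns vertices, faces, vertex normals and values;
    # `spacing` scales the vertex coordinates to physical units (e.g. mm).
    verts, faces, _normals, _values = measure.marching_cubes(
        volume, level=threshold, spacing=voxel_size)
    return verts, faces

# Toy volume standing in for a stack of CT slices: a bright sphere in noise.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = 1000.0 * (x**2 + y**2 + z**2 < 20**2) + np.random.normal(0, 50, x.shape)

verts, faces = surface_from_volume(volume, threshold=300.0, voxel_size=(0.5, 0.5, 0.5))
print(verts.shape, faces.shape)  # (N, 3) vertex coordinates and (M, 3) triangles
# The (verts, faces) mesh could then be written out as an STL file for printing.
```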
Materialise Mimics
[ "Engineering", "Biology" ]
406
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
573,875
https://en.wikipedia.org/wiki/Measurement%20in%20quantum%20mechanics
In quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. A fundamental feature of quantum theory is that the predictions it makes are probabilistic. The procedure for finding a probability involves combining a quantum state, which mathematically describes a quantum system, with a mathematical representation of the measurement to be performed on that system. The formula for this calculation is known as the Born rule. For example, a quantum particle like an electron can be described by a quantum state that associates to each point in space a complex number called a probability amplitude. Applying the Born rule to these amplitudes gives the probabilities that the electron will be found in one region or another when an experiment is performed to locate it. This is the best the theory can do; it cannot say for certain where the electron will be found. The same quantum state can also be used to make a prediction of how the electron will be moving, if an experiment is performed to measure its momentum instead of its position. The uncertainty principle implies that, whatever the quantum state, the range of predictions for the electron's position and the range of predictions for its momentum cannot both be narrow. Some quantum states imply a near-certain prediction of the result of a position measurement, but the result of a momentum measurement will be highly unpredictable, and vice versa. Furthermore, the fact that nature violates the statistical conditions known as Bell inequalities indicates that the unpredictability of quantum measurement results cannot be explained away as due to ignorance about "local hidden variables" within quantum systems. Measuring a quantum system generally changes the quantum state that describes that system. This is a central feature of quantum mechanics, one that is both mathematically intricate and conceptually subtle. The mathematical tools for making predictions about what measurement outcomes may occur, and how quantum states can change, were developed during the 20th century and make use of linear algebra and functional analysis. Quantum physics has proven to be an empirical success and to have wide-ranging applicability. However, on a more philosophical level, debates continue about the meaning of the measurement concept. Mathematical formalism "Observables" as self-adjoint operators In quantum mechanics, each physical system is associated with a Hilbert space, each element of which represents a possible state of the physical system. The approach codified by John von Neumann represents a measurement upon a physical system by a self-adjoint operator on that Hilbert space termed an "observable". These observables play the role of measurable quantities familiar from classical physics: position, momentum, energy, angular momentum and so on. The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. Many treatments of the theory focus on the finite-dimensional case, as the mathematics involved is somewhat less demanding. 
Indeed, introductory physics texts on quantum mechanics often gloss over mathematical technicalities that arise for continuous-valued observables and infinite-dimensional Hilbert spaces, such as the distinction between bounded and unbounded operators; questions of convergence (whether the limit of a sequence of Hilbert-space elements also belongs to the Hilbert space); exotic possibilities for sets of eigenvalues, like Cantor sets; and so forth. These issues can be satisfactorily resolved using spectral theory; the present article will avoid them whenever possible. Projective measurement The eigenvectors of a von Neumann observable form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. For each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that P(i) = tr(Π_i ρ), where ρ is the density operator and Π_i is the projection operator onto the basis vector corresponding to the measurement outcome i. The average of the eigenvalues of a von Neumann observable, weighted by the Born rule probabilities, is the expectation value of that observable. For an observable A, the expectation value given a quantum state ρ is ⟨A⟩ = tr(A ρ). A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., P(x) = 1 for some outcome x). Any mixed state can be written as a convex combination of pure states, though not in a unique way. The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it. The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator. Generalized measurement (POVM) In functional analysis and quantum measurement theory, a positive-operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalisation of projection-valued measures (PVMs) and, correspondingly, quantum measurements described by POVMs are a generalisation of quantum measurements described by PVMs. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see Schrödinger–HJW theorem); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information.
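Before turning to the details of the POVM formalism, the projective Born rule and expectation values stated above can be illustrated numerically for a single qubit (a minimal sketch; the particular density operator is an arbitrary choice):

```python
# Born-rule probabilities and expectation values for a qubit, as described
# above: P(i) = tr(Pi_i @ rho) and <A> = tr(A @ rho).
# The density matrix used here is an arbitrary illustrative choice.
import numpy as np

# A mixed qubit state: 75% |0><0| plus 25% |+><+|.
ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.75 * np.outer(ket0, ket0) + 0.25 * np.outer(ket_plus, ket_plus)

# Projectors for a measurement in the {|0>, |1>} basis.
P0 = np.outer(ket0, ket0)
P1 = np.eye(2) - P0

prob0 = np.trace(P0 @ rho).real      # Born rule: tr(Pi_0 rho)
prob1 = np.trace(P1 @ rho).real
print(prob0, prob1, prob0 + prob1)   # 0.875, 0.125, 1.0

# Expectation value of the observable sigma_z = P0 - P1:
sigma_z = P0 - P1
print(np.trace(sigma_z @ rho).real)  # 0.75, i.e. prob0 - prob1
```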
In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices {F_i} on a Hilbert space that sum to the identity matrix, ∑_i F_i = I. In quantum mechanics, the POVM element F_i is associated with the measurement outcome i, such that the probability of obtaining it when making a measurement on the quantum state ρ is given by P(i) = tr(F_i ρ), where tr is the trace operator. When the quantum state being measured is a pure state |ψ⟩ this formula reduces to P(i) = ⟨ψ|F_i|ψ⟩. State change due to measurement A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM {F_i} does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element into a product: F_i = A_i† A_i. The Kraus operators A_i, named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products A_i† A_i are. If upon performing the measurement the outcome i is obtained, then the initial state ρ is updated to ρ' = A_i ρ A_i† / tr(F_i ρ). An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM is itself a PVM, then the Kraus operators can be taken to be the projectors onto the eigenspaces of the von Neumann observable: ρ' = Π_i ρ Π_i / tr(Π_i ρ). If the initial state ρ is pure, and the projectors have rank 1, they can be written as projectors onto the vectors |ψ⟩ and |i⟩, respectively. The formula thus simplifies to a post-measurement state |i⟩⟨i|, obtained with probability |⟨i|ψ⟩|². Lüders rule has historically been known as the "reduction of the wave packet" or the "collapse of the wavefunction". The pure state |i⟩ implies a probability-one prediction for any von Neumann observable that has |i⟩ as an eigenvector. Introductory texts on quantum theory often express this by saying that if a quantum measurement is repeated in quick succession, the same outcome will occur both times. This is an oversimplification, since the physical implementation of a quantum measurement may involve a process like the absorption of a photon; after the measurement, the photon does not exist to be measured again. We can define a linear, trace-preserving, completely positive map, by summing over all the possible post-measurement states of a POVM without the normalisation: ρ → ∑_i A_i ρ A_i†. It is an example of a quantum channel, and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost. Examples The prototypical example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. A pure state for a qubit can be written as a linear combination of two orthogonal basis states |0⟩ and |1⟩ with complex coefficients: |ψ⟩ = α|0⟩ + β|1⟩. A measurement in the (|0⟩, |1⟩) basis will yield outcome |0⟩ with probability |α|² and outcome |1⟩ with probability |β|², so by normalization, |α|² + |β|² = 1. An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices, which provide a basis for self-adjoint matrices: ρ = ½(I + r_x σ_x + r_y σ_y + r_z σ_z), where the real numbers (r_x, r_y, r_z) are the coordinates of a point within the unit ball and σ_x, σ_y, σ_z denote the Pauli matrices. POVM elements can be represented likewise, though the trace of a POVM element is not fixed to equal 1. The Pauli matrices are traceless and orthogonal to one another with respect to the Hilbert–Schmidt inner product, and so the coordinates of the state are the expectation values of the three von Neumann measurements defined by the Pauli matrices.
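Returning to the generalized-measurement formalism defined earlier in this passage, the POVM completeness condition, the Born-rule probabilities tr(F_i ρ), and the Kraus-operator state update can be checked numerically for an unsharp qubit measurement (an illustrative sketch; the measurement strength is an arbitrary choice):

```python
# Illustrative POVM and state update for a qubit, following the formulas above:
# the elements F_i are positive semi-definite and sum to the identity, outcome
# probabilities are tr(F_i rho), and the post-measurement state is
# A_i rho A_i^dagger / tr(F_i rho) with Kraus operators chosen as A_i = sqrt(F_i).
# The "unsharp" measurement strength eta is an arbitrary choice.
import numpy as np
from scipy.linalg import sqrtm

P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
eta = 0.8   # between 0 (no information gained) and 1 (projective measurement)
F = [eta * P0 + (1 - eta) / 2 * np.eye(2),
     eta * P1 + (1 - eta) / 2 * np.eye(2)]

print(np.allclose(F[0] + F[1], np.eye(2)))   # True: POVM elements sum to identity

ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(ket_plus, ket_plus)           # pure state |+><+|

kraus = [sqrtm(Fi) for Fi in F]              # one valid Kraus decomposition
probs = [np.trace(Fi @ rho).real for Fi in F]
print(probs)                                 # [0.5, 0.5] for the |+> state

# State update if outcome 0 is obtained:
rho_post = kraus[0] @ rho @ kraus[0].conj().T / probs[0]
print(np.round(rho_post, 3))                 # partially collapsed toward |0><0|
```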
If such a measurement is applied to a qubit, then by the Lüders rule, the state will update to the eigenvector of that Pauli matrix corresponding to the measurement outcome. The eigenvectors of are the basis states and , and a measurement of is often called a measurement in the "computational basis." After a measurement in the computational basis, the outcome of a or measurement is maximally uncertain. A pair of qubits together form a system whose Hilbert space is 4-dimensional. One significant von Neumann measurement on this system is that defined by the Bell basis, a set of four maximally entangled states: A common and useful example of quantum mechanics applied to a continuous degree of freedom is the quantum harmonic oscillator. This system is defined by the Hamiltonian where , the momentum operator and the position operator are self-adjoint operators on the Hilbert space of square-integrable functions on the real line. The energy eigenstates solve the time-independent Schrödinger equation: These eigenvalues can be shown to be given by and these values give the possible numerical outcomes of an energy measurement upon the oscillator. The set of possible outcomes of a position measurement on a harmonic oscillator is continuous, and so predictions are stated in terms of a probability density function that gives the probability of the measurement outcome lying in the infinitesimal interval from to . History of the measurement concept The "old quantum theory" The old quantum theory is a collection of results from the years 1900–1925 which predate modern quantum mechanics. The theory was never complete or self-consistent, but was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include Planck's calculation of the blackbody radiation spectrum, Einstein's explanation of the photoelectric effect, Einstein and Debye's work on the specific heat of solids, Bohr and van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects. The Stern–Gerlach experiment, proposed in 1921 and implemented in 1922, became a prototypical example of a quantum measurement having a discrete set of possible outcomes. In the original experiment, silver atoms were sent through a spatially varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment are deflected, due to the magnetic field gradient, from a straight path. The screen reveals discrete points of accumulation, rather than a continuous distribution, owing to the particles' quantized spin. Transition to the “new” quantum theory A 1925 paper by Heisenberg, known in English as "Quantum theoretical re-interpretation of kinematic and mechanical relations", marked a pivotal moment in the maturation of quantum physics. Heisenberg sought to develop a theory of atomic phenomena that relied only on "observable" quantities. At the time, and in contrast with the later standard presentation of quantum mechanics, Heisenberg did not regard the position of an electron bound within an atom as "observable". Instead, his principal quantities of interest were the frequencies of light emitted or absorbed by atoms. The uncertainty principle dates to this period. 
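Returning to the single-qubit example described above (before the historical survey), the Bloch coordinates of a state, that is, the three Pauli expectation values, and the computational-basis probabilities can be computed directly (a minimal sketch with an arbitrarily chosen pure state):

```python
# Bloch coordinates of a qubit state as Pauli expectation values, and
# computational-basis measurement probabilities, as in the qubit example above.
# The state |psi> chosen here is arbitrary and illustrative.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([np.cos(np.pi / 8), np.exp(1j * np.pi / 3) * np.sin(np.pi / 8)])
rho = np.outer(psi, psi.conj())

bloch = [np.trace(s @ rho).real for s in (sx, sy, sz)]
print(bloch)                     # (r_x, r_y, r_z), a point on the Bloch sphere
print(np.linalg.norm(bloch))     # 1.0 for a pure state

# Measurement in the computational basis {|0>, |1>}:
p0, p1 = abs(psi[0])**2, abs(psi[1])**2
print(p0, p1)                    # ~0.854, ~0.146
# After outcome 0, the Lüders rule updates the state to |0>; a subsequent
# sigma_x or sigma_y measurement is then maximally uncertain (probabilities 1/2).
```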
It is frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment where one attempts to measure an electron's position and momentum simultaneously. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position-momentum uncertainty principle is due to Kennard, Pauli, and Weyl, and its generalization to arbitrary pairs of noncommuting observables is due to Robertson and Schrödinger. Writing X and P for the self-adjoint operators representing position and momentum respectively, a standard deviation of position can be defined as σ_x = √(⟨X²⟩ − ⟨X⟩²), and likewise for the momentum: σ_p = √(⟨P²⟩ − ⟨P⟩²). The Kennard–Pauli–Weyl uncertainty relation is σ_x σ_p ≥ ħ/2. This inequality means that no preparation of a quantum particle can imply simultaneously precise predictions for a measurement of position and for a measurement of momentum. The Robertson inequality generalizes this to the case of an arbitrary pair of self-adjoint operators A and B. The commutator of these two operators is [A, B] = AB − BA, and this provides the lower bound on the product of standard deviations: σ_A σ_B ≥ ½|⟨[A, B]⟩|. Substituting in the canonical commutation relation [X, P] = iħ, an expression first postulated by Max Born in 1925, recovers the Kennard–Pauli–Weyl statement of the uncertainty principle. From uncertainty to no-hidden-variables The existence of the uncertainty principle naturally raises the question of whether quantum mechanics can be understood as an approximation to a more exact theory. Do there exist "hidden variables", more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide? A collection of results, most significantly Bell's theorem, has demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. Bell published the theorem now known by his name in 1964, investigating more deeply a thought experiment originally proposed in 1935 by Einstein, Podolsky and Rosen. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics. Many types of Bell test have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". To date, Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave. Quantum systems as measuring devices The Robertson–Schrödinger uncertainty principle establishes that when two observables do not commute, there is a tradeoff in predictability between them. The Wigner–Araki–Yanase theorem demonstrates another consequence of non-commutativity: the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured.
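The Robertson bound stated above, σ_A σ_B ≥ ½|⟨[A, B]⟩|, can be verified numerically for a pair of non-commuting qubit observables (an illustrative sketch; the state is generated at random):

```python
# Numerical check of the Robertson uncertainty relation discussed above:
#     sigma_A * sigma_B >= |<[A, B]>| / 2
# for the non-commuting qubit observables sigma_x and sigma_y.
# The random pure state is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def std_dev(A, rho):
    """Standard deviation of observable A in state rho: sqrt(<A^2> - <A>^2)."""
    mean = np.trace(A @ rho).real
    mean_sq = np.trace(A @ A @ rho).real
    return np.sqrt(mean_sq - mean**2)

lhs = std_dev(sx, rho) * std_dev(sy, rho)
commutator = sx @ sy - sy @ sx                 # equals 2i * sigma_z
rhs = 0.5 * abs(np.trace(commutator @ rho))
print(lhs, rhs, lhs >= rhs - 1e-12)            # the bound holds
```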
Further investigation in this line led to the formulation of the Wigner–Yanase skew information. Historically, experiments in quantum physics have often been described in semiclassical terms. For example, the spin of an atom in a Stern–Gerlach experiment might be treated as a quantum degree of freedom, while the atom is regarded as moving through a magnetic field described by the classical theory of Maxwell's equations. But the devices used to build the experimental apparatus are themselves physical systems, and so quantum mechanics should be applicable to them as well. Beginning in the 1950s, Rosenfeld, von Weizsäcker and others tried to develop consistency conditions that expressed when a quantum-mechanical system could be treated as a measuring apparatus. One proposal for a criterion regarding when a system used as part of a measuring device can be modeled semiclassically relies on the Wigner function, a quasiprobability distribution that can be treated as a probability distribution on phase space in those cases where it is everywhere non-negative. Decoherence A quantum state for an imperfectly isolated system will generally evolve to be entangled with the quantum state for the environment. Consequently, even if the system's initial state is pure, the state at a later time, found by taking the partial trace of the joint system-environment state, will be mixed. This phenomenon of entanglement produced by system-environment interactions tends to obscure the more exotic features of quantum mechanics that the system could in principle manifest. Quantum decoherence, as this effect is known, was first studied in detail during the 1970s. (Earlier investigations into how classical physics might be obtained as a limit of quantum mechanics had explored the subject of imperfectly isolated systems, but the role of entanglement was not fully appreciated.) A significant portion of the effort involved in quantum computing is to avoid the deleterious effects of decoherence. To illustrate, let denote the initial state of the system, the initial state of the environment and the Hamiltonian specifying the system-environment interaction. The density operator can be diagonalized and written as a linear combination of the projectors onto its eigenvectors: Expressing time evolution for a duration by the unitary operator , the state for the system after this evolution is which evaluates to The quantities surrounding can be identified as Kraus operators, and so this defines a quantum channel. Specifying a form of interaction between system and environment can establish a set of "pointer states," states for the system that are (approximately) stable, apart from overall phase factors, with respect to environmental fluctuations. A set of pointer states defines a preferred orthonormal basis for the system's Hilbert space. Quantum information and computation Quantum information science studies how information science and its application as technology depend on quantum-mechanical phenomena. Understanding measurement in quantum physics is important for this field in many ways, some of which are briefly surveyed here. Measurement, entropy, and distinguishability The von Neumann entropy is a measure of the statistical uncertainty represented by a quantum state. 
For a density matrix , the von Neumann entropy is writing in terms of its basis of eigenvectors, the von Neumann entropy is This is the Shannon entropy of the set of eigenvalues interpreted as a probability distribution, and so the von Neumann entropy is the Shannon entropy of the random variable defined by measuring in the eigenbasis of . Consequently, the von Neumann entropy vanishes when is pure. The von Neumann entropy of can equivalently be characterized as the minimum Shannon entropy for a measurement given the quantum state , with the minimization over all POVMs with rank-1 elements. Many other quantities used in quantum information theory also find motivation and justification in terms of measurements. For example, the trace distance between quantum states is equal to the largest difference in probability that those two quantum states can imply for a measurement outcome: Similarly, the fidelity of two quantum states, defined by expresses the probability that one state will pass a test for identifying a successful preparation of the other. The trace distance provides bounds on the fidelity via the Fuchs–van de Graaf inequalities: Quantum circuits Quantum circuits are a model for quantum computation in which a computation is a sequence of quantum gates followed by measurements. The gates are reversible transformations on a quantum mechanical analog of an n-bit register. This analogous structure is referred to as an n-qubit register. Measurements, drawn on a circuit diagram as stylized pointer dials, indicate where and how a result is obtained from the quantum computer after the steps of the computation are executed. Without loss of generality, one can work with the standard circuit model, in which the set of gates are single-qubit unitary transformations and controlled NOT gates on pairs of qubits, and all measurements are in the computational basis. Measurement-based quantum computation Measurement-based quantum computation (MBQC) is a model of quantum computing in which the answer to a question is, informally speaking, created in the act of measuring the physical system that serves as the computer. Quantum tomography Quantum state tomography is a process by which, given a set of data representing the results of quantum measurements, a quantum state consistent with those measurement results is computed. It is named by analogy with tomography, the reconstruction of three-dimensional images from slices taken through them, as in a CT scan. Tomography of quantum states can be extended to tomography of quantum channels and even of measurements. Quantum metrology Quantum metrology is the use of quantum physics to aid the measurement of quantities that, generally, had meaning in classical physics, such as exploiting quantum effects to increase the precision with which a length can be measured. A celebrated example is the introduction of squeezed light into the LIGO experiment, which increased its sensitivity to gravitational waves. Laboratory implementations The range of physical procedures to which the mathematics of quantum measurement can be applied is very broad. In the early years of the subject, laboratory procedures involved the recording of spectral lines, the darkening of photographic film, the observation of scintillations, finding tracks in cloud chambers, and hearing clicks from Geiger counters. Language from this era persists, such as the description of measurement outcomes in the abstract as "detector clicks". 
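The information-theoretic quantities defined earlier (the von Neumann entropy, the trace distance, and the fidelity) lend themselves to a short numerical illustration before turning to specific laboratory examples. The following Python sketch uses only NumPy; the function names and example states are illustrative choices, and the squared-fidelity convention is assumed, so it is a hedged sketch rather than a definitive implementation.

import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -sum_i p_i ln p_i over the eigenvalues p_i of rho (0 ln 0 is taken as 0).
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def trace_distance(rho, sigma):
    # Half the sum of the singular values of rho - sigma.
    return 0.5 * float(np.sum(np.linalg.svd(rho - sigma, compute_uv=False)))

def _psd_sqrt(rho):
    # Matrix square root of a positive semidefinite matrix via its eigendecomposition.
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho, sigma):
    # F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))**2 -- squared-fidelity convention assumed.
    s = _psd_sqrt(rho)
    return float(np.real(np.trace(_psd_sqrt(s @ sigma @ s)))) ** 2

pure = np.array([[1, 0], [0, 0]], dtype=complex)   # the pure state |0><0|
mixed = np.eye(2, dtype=complex) / 2               # the maximally mixed qubit state

print(von_neumann_entropy(pure))    # ~0: a pure state has zero entropy
print(von_neumann_entropy(mixed))   # ln 2, the maximum for a single qubit
print(trace_distance(pure, mixed))  # 0.5
print(fidelity(pure, mixed))        # 0.5, consistent with the Fuchs-van de Graaf bounds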
The double-slit experiment is a prototypical illustration of quantum interference, typically described using electrons or photons. The first interference experiment to be carried out in a regime where both wave-like and particle-like aspects of photon behavior are significant was G. I. Taylor's test in 1909. Taylor used screens of smoked glass to attenuate the light passing through his apparatus, to the extent that, in modern language, only one photon would be illuminating the interferometer slits at a time. He recorded the interference patterns on photographic plates; for the dimmest light, the exposure time required was roughly three months. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi implemented the double-slit experiment using single electrons and a television tube. A quarter-century later, a team at the University of Vienna performed an interference experiment with buckyballs, in which the buckyballs that passed through the interferometer were ionized by a laser, and the ions then induced the emission of electrons, emissions which were in turn amplified and detected by an electron multiplier. Modern quantum optics experiments can employ single-photon detectors. For example, in the "BIG Bell test" of 2018, several of the laboratory setups used single-photon avalanche diodes. Another laboratory setup used superconducting qubits. The standard method for performing measurements upon superconducting qubits is to couple a qubit with a resonator in such a way that the characteristic frequency of the resonator shifts according to the state for the qubit, and detecting this shift by observing how the resonator reacts to a probe signal. Interpretations of quantum mechanics Despite the consensus among scientists that quantum physics is in practice a successful theory, disagreements persist on a more philosophical level. Many debates in the area known as quantum foundations concern the role of measurement in quantum mechanics. Recurring questions include which interpretation of probability theory is best suited for the probabilities calculated from the Born rule; and whether the apparent randomness of quantum measurement outcomes is fundamental, or a consequence of a deeper deterministic process. Worldviews that present answers to questions like these are known as "interpretations" of quantum mechanics; as the physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear." A central concern within quantum foundations is the "quantum measurement problem," though how this problem is delimited, and whether it should be counted as one question or multiple separate issues, are contested topics. Of primary interest is the seeming disparity between apparently distinct types of time evolution. Von Neumann declared that quantum mechanics contains "two fundamentally different types" of quantum-state change. First, there are those changes involving a measurement process, and second, there is unitary time evolution in the absence of measurement. The former is stochastic and discontinuous, writes von Neumann, and the latter deterministic and continuous. This dichotomy has set the tone for much later debate. Some interpretations of quantum mechanics find the reliance upon two different types of time evolution distasteful and regard the ambiguity of when to invoke one or the other as a deficiency of the way quantum theory was historically presented. 
To bolster these interpretations, their proponents have worked to derive ways of regarding "measurement" as a secondary concept and deducing the seemingly stochastic effect of measurement processes as approximations to more fundamental deterministic dynamics. However, consensus has not been achieved among proponents about the correct way to implement this program, and in particular about how to justify the use of the Born rule to calculate probabilities. Other interpretations regard quantum states as statistical information about quantum systems, thus asserting that abrupt and discontinuous changes of quantum states are not problematic, simply reflecting updates of the available information. Of this line of thought, Bell asked, "Whose information? Information about what?" Answers to these questions vary among proponents of the informationally-oriented interpretations. See also Einstein's thought experiments Holevo's theorem Quantum error correction Quantum limit Quantum logic Quantum Zeno effect Schrödinger's cat SIC-POVM Notes References Further reading Philosophy of physics
Measurement in quantum mechanics
[ "Physics" ]
5,548
[ "Philosophy of physics", "Quantum measurement", "Applied and interdisciplinary physics", "Quantum mechanics" ]
574,024
https://en.wikipedia.org/wiki/Hilbert%20transform
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function, of a real variable and produces another function of a real variable . The Hilbert transform is given by the Cauchy principal value of the convolution with the function (see ). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see ). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal . The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions. Definition The Hilbert transform of can be thought of as the convolution of with the function , known as the Cauchy kernel. Because 1/ is not integrable across , the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by ). Explicitly, the Hilbert transform of a function (or signal) is given by provided this integral exists as a principal value. This is precisely the convolution of with the tempered distribution . Alternatively, by changing variables, the principal-value integral can be written explicitly as When the Hilbert transform is applied twice in succession to a function , the result is provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is . This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of (see below). For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if is analytic in the upper half complex plane , and , then up to an additive constant, provided this Hilbert transform exists. Notation In signal processing the Hilbert transform of is commonly denoted by . However, in mathematics, this notation is already extensively used to denote the Fourier transform of . Occasionally, the Hilbert transform may be denoted by . Furthermore, many sources define the Hilbert transform as the negative of the one defined here. History The Hilbert transform arose in Hilbert's 1905 work on a problem Riemann posed concerning analytic functions, which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle. Some of his earlier work related to the Discrete Hilbert Transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation. Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the integral case. These results were restricted to the spaces and . In 1928, Marcel Riesz proved that the Hilbert transform can be defined for u in (Lp space) for , that the Hilbert transform is a bounded operator on for , and that similar results hold for the Hilbert transform on the circle as well as the discrete Hilbert transform. The Hilbert transform was a motivating example for Antoni Zygmund and Alberto Calderón during their study of singular integrals. Their investigations have played a fundamental role in modern harmonic analysis. 
Various generalizations of the Hilbert transform, such as the bilinear and trilinear Hilbert transforms are still active areas of research today. Relationship with the Fourier transform The Hilbert transform is a multiplier operator. The multiplier of is , where is the signum function. Therefore: where denotes the Fourier transform. Since , it follows that this result applies to the three common definitions of . By Euler's formula, Therefore, has the effect of shifting the phase of the negative frequency components of by +90° ( radians) and the phase of the positive frequency components by −90°, and has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation (i.e., a multiplication by −1). When the Hilbert transform is applied twice, the phase of the negative and positive frequency components of are respectively shifted by +180° and −180°, which are equivalent amounts. The signal is negated; i.e., , because Table of selected Hilbert transforms In the following table, the frequency parameter is real. Notes An extensive table of Hilbert transforms is available. Note that the Hilbert transform of a constant is zero. Domain of definition It is by no means obvious that the Hilbert transform is well-defined at all, as the improper integral defining it must converge in a suitable sense. However, the Hilbert transform is well-defined for a broad class of functions, namely those in for . More precisely, if is in for , then the limit defining the improper integral exists for almost every . The limit function is also in and is in fact the limit in the mean of the improper integral as well. That is, as in the norm, as well as pointwise almost everywhere, by the Titchmarsh theorem. In the case , the Hilbert transform still converges pointwise almost everywhere, but may itself fail to be integrable, even locally. In particular, convergence in the mean does not in general happen in this case. The Hilbert transform of an function does converge, however, in -weak, and the Hilbert transform is a bounded operator from to . (In particular, since the Hilbert transform is also a multiplier operator on , Marcinkiewicz interpolation and a duality argument furnishes an alternative proof that is bounded on .) Properties Boundedness If , then the Hilbert transform on is a bounded linear operator, meaning that there exists a constant such that for all The best constant is given by An easy way to find the best for being a power of 2 is through the so-called Cotlar's identity that for all real valued . The same best constants hold for the periodic Hilbert transform. The boundedness of the Hilbert transform implies the convergence of the symmetric partial sum operator to in Anti-self adjointness The Hilbert transform is an anti-self adjoint operator relative to the duality pairing between and the dual space where and are Hölder conjugates and . Symbolically, for and Inverse transform The Hilbert transform is an anti-involution, meaning that provided each transform is well-defined. Since preserves the space this implies in particular that the Hilbert transform is invertible on and that Complex structure Because ("" is the identity operator) on the real Banach space of real-valued functions in the Hilbert transform defines a linear complex structure on this Banach space. In particular, when , the Hilbert transform gives the Hilbert space of real-valued functions in the structure of a complex Hilbert space. 
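The multiplier description above lends itself to a quick numerical check. The sketch below (Python with NumPy) works on a periodic, finite-length discretization, so it only approximates the continuous transform; it applies the multiplier −i·sgn(ω) to the discrete Fourier transform of a cosine, recovering the corresponding sine, and confirms that applying the transform twice negates the signal.

import numpy as np

def hilbert_via_fft(u):
    # Discrete analogue of multiplying the Fourier transform by -i*sgn(omega).
    # sgn(0) is taken to be 0, so the mean of u is annihilated.
    U = np.fft.fft(u)
    multiplier = -1j * np.sign(np.fft.fftfreq(len(u)))
    return np.real(np.fft.ifft(multiplier * U))

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
u = np.cos(2 * np.pi * 5 * t)

Hu = hilbert_via_fft(u)
HHu = hilbert_via_fft(Hu)

print(np.allclose(Hu, np.sin(2 * np.pi * 5 * t)))   # the cosine is phase-shifted by -90 degrees into a sine
print(np.allclose(HHu, -u))                          # applying the transform twice negates the signal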
The (complex) eigenstates of the Hilbert transform admit representations as holomorphic functions in the upper and lower half-planes in the Hardy space by the Paley–Wiener theorem. Differentiation Formally, the derivative of the Hilbert transform is the Hilbert transform of the derivative, i.e. these two linear operators commute: Iterating this identity, This is rigorously true as stated provided and its first derivatives belong to One can check this easily in the frequency domain, where differentiation becomes multiplication by . Convolutions The Hilbert transform can formally be realized as a convolution with the tempered distribution Thus formally, However, a priori this may only be defined for a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported functions (which are distributions a fortiori) are dense in . Alternatively, one may use the fact that h(t) is the distributional derivative of the function ; to wit For most operational purposes the Hilbert transform can be treated as a convolution. For example, in a formal sense, the Hilbert transform of a convolution is the convolution of the Hilbert transform applied on only one of either of the factors: This is rigorously true if and are compactly supported distributions since, in that case, By passing to an appropriate limit, it is thus also true if and provided that from a theorem due to Titchmarsh. Invariance The Hilbert transform has the following invariance properties on . It commutes with translations. That is, it commutes with the operators for all in It commutes with positive dilations. That is it commutes with the operators for all . It anticommutes with the reflection . Up to a multiplicative constant, the Hilbert transform is the only bounded operator on 2 with these properties. In fact there is a wider set of operators that commute with the Hilbert transform. The group acts by unitary operators on the space by the formula This unitary representation is an example of a principal series representation of In this case it is reducible, splitting as the orthogonal sum of two invariant subspaces, Hardy space and its conjugate. These are the spaces of boundary values of holomorphic functions on the upper and lower halfplanes. and its conjugate consist of exactly those functions with Fourier transforms vanishing on the negative and positive parts of the real axis respectively. Since the Hilbert transform is equal to , with being the orthogonal projection from onto and the identity operator, it follows that and its orthogonal complement are eigenspaces of for the eigenvalues . In other words, commutes with the operators . The restrictions of the operators to and its conjugate give irreducible representations of – the so-called limit of discrete series representations. Extending the domain of definition Hilbert transform of distributions It is further possible to extend the Hilbert transform to certain spaces of distributions . Since the Hilbert transform commutes with differentiation, and is a bounded operator on , restricts to give a continuous transform on the inverse limit of Sobolev spaces: The Hilbert transform can then be defined on the dual space of , denoted , consisting of distributions. This is accomplished by the duality pairing: For define: It is possible to define the Hilbert transform on the space of tempered distributions as well by an approach due to Gel'fand and Shilov, but considerably more care is needed because of the singularity in the integral. 
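The commutation of the Hilbert transform with differentiation noted earlier in this section can likewise be checked numerically. The following self-contained Python sketch again uses a periodic discretization; the smooth test function is an arbitrary illustrative choice.

import numpy as np

n = 2048
t = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(np.cos(2 * np.pi * t))        # an arbitrary smooth periodic test function

def hilbert(x):
    # Fourier-multiplier form of the Hilbert transform on a periodic sample.
    return np.real(np.fft.ifft(-1j * np.sign(np.fft.fftfreq(n)) * np.fft.fft(x)))

def derivative(x):
    # Spectral derivative d/dt for samples of a function with period 1.
    k = np.fft.fftfreq(n, d=1.0 / n)     # integer frequencies in cycles per unit time
    return np.real(np.fft.ifft(2j * np.pi * k * np.fft.fft(x)))

print(np.allclose(derivative(hilbert(u)), hilbert(derivative(u))))   # True: the two operators commute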
Hilbert transform of bounded functions The Hilbert transform can be defined for functions in as well, but it requires some modifications and caveats. Properly understood, the Hilbert transform maps to the Banach space of bounded mean oscillation (BMO) classes. Interpreted naïvely, the Hilbert transform of a bounded function is clearly ill-defined. For instance, with , the integral defining diverges almost everywhere to . To alleviate such difficulties, the Hilbert transform of an function is therefore defined by the following regularized form of the integral where as above and The modified transform agrees with the original transform up to an additive constant on functions of compact support from a general result by Calderón and Zygmund. Furthermore, the resulting integral converges pointwise almost everywhere, and with respect to the BMO norm, to a function of bounded mean oscillation. A deep result of Fefferman's work is that a function is of bounded mean oscillation if and only if it has the form for some Conjugate functions The Hilbert transform can be understood in terms of a pair of functions and such that the function is the boundary value of a holomorphic function in the upper half-plane. Under these circumstances, if and are sufficiently integrable, then one is the Hilbert transform of the other. Suppose that Then, by the theory of the Poisson integral, admits a unique harmonic extension into the upper half-plane, and this extension is given by which is the convolution of with the Poisson kernel Furthermore, there is a unique harmonic function defined in the upper half-plane such that is holomorphic and This harmonic function is obtained from by taking a convolution with the conjugate Poisson kernel Thus Indeed, the real and imaginary parts of the Cauchy kernel are so that is holomorphic by Cauchy's integral formula. The function obtained from in this way is called the harmonic conjugate of . The (non-tangential) boundary limit of as is the Hilbert transform of . Thus, succinctly, Titchmarsh's theorem Titchmarsh's theorem (named for E. C. Titchmarsh who included it in his 1937 work) makes precise the relationship between the boundary values of holomorphic functions in the upper half-plane and the Hilbert transform. It gives necessary and sufficient conditions for a complex-valued square-integrable function on the real line to be the boundary value of a function in the Hardy space of holomorphic functions in the upper half-plane . The theorem states that the following conditions for a complex-valued square-integrable function are equivalent: is the limit as of a holomorphic function in the upper half-plane such that The real and imaginary parts of are Hilbert transforms of each other. The Fourier transform vanishes for . A weaker result is true for functions of class for . Specifically, if is a holomorphic function such that for all , then there is a complex-valued function in such that in the norm as (as well as holding pointwise almost everywhere). Furthermore, where is a real-valued function in and is the Hilbert transform (of class ) of . This is not true in the case . In fact, the Hilbert transform of an function need not converge in the mean to another function. Nevertheless, the Hilbert transform of does converge almost everywhere to a finite function such that This result is directly analogous to one by Andrey Kolmogorov for Hardy functions in the disc. 
Although usually called Titchmarsh's theorem, the result aggregates much work of others, including Hardy, Paley and Wiener (see Paley–Wiener theorem), as well as work by Riesz, Hille, and Tamarkin Riemann–Hilbert problem One form of the Riemann–Hilbert problem seeks to identify pairs of functions and such that is holomorphic on the upper half-plane and is holomorphic on the lower half-plane, such that for along the real axis, where is some given real-valued function of The left-hand side of this equation may be understood either as the difference of the limits of from the appropriate half-planes, or as a hyperfunction distribution. Two functions of this form are a solution of the Riemann–Hilbert problem. Formally, if solve the Riemann–Hilbert problem then the Hilbert transform of is given by Hilbert transform on the circle For a periodic function the circular Hilbert transform is defined: The circular Hilbert transform is used in giving a characterization of Hardy space and in the study of the conjugate function in Fourier series. The kernel, is known as the Hilbert kernel since it was in this form the Hilbert transform was originally studied. The Hilbert kernel (for the circular Hilbert transform) can be obtained by making the Cauchy kernel periodic. More precisely, for Many results about the circular Hilbert transform may be derived from the corresponding results for the Hilbert transform from this correspondence. Another more direct connection is provided by the Cayley transform , which carries the real line onto the circle and the upper half plane onto the unit disk. It induces a unitary map of onto The operator carries the Hardy space onto the Hardy space . Hilbert transform in signal processing Bedrosian's theorem Bedrosian's theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or where and are the low- and high-pass signals respectively. A category of communication signals to which this applies is called the narrowband signal model. A member of that category is amplitude modulation of a high-frequency sinusoidal "carrier": where is the narrow bandwidth "message" waveform, such as voice or music. Then by Bedrosian's theorem: Analytic representation A specific type of conjugate function is: known as the analytic representation of The name reflects its mathematical tractability, due largely to Euler's formula. Applying Bedrosian's theorem to the narrowband model, the analytic representation is: A Fourier transform property indicates that this complex heterodyne operation can shift all the negative frequency components of above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms. Angle (phase/frequency) modulation The form: is called angle modulation, which includes both phase modulation and frequency modulation. The instantaneous frequency is    For sufficiently large , compared to and: Single sideband modulation (SSB) When in  is also an analytic representation (of a message waveform), that is: the result is single-sideband modulation: whose transmitted component is: Causality The function presents two causality-based challenges to practical implementation in a convolution (in addition to its undefined value at 0): Its duration is infinite (technically infinite support). 
Finite-length windowing reduces the effective frequency range of the transform; shorter windows result in greater losses at low and high frequencies. See also quadrature filter. It is a non-causal filter. So a delayed version, is required. The corresponding output is subsequently delayed by When creating the imaginary part of an analytic signal, the source (real part) must also be delayed by . Discrete Hilbert transform For a discrete function, with discrete-time Fourier transform (DTFT), , and discrete Hilbert transform the DTFT of in the region is given by: The inverse DTFT, using the convolution theorem, is: where which is an infinite impulse response (IIR). Practical considerations Method 1: Direct convolution of streaming data with an FIR approximation of which we will designate by Examples of truncated are shown in figures 1 and 2. Fig 1 has an odd number of anti-symmetric coefficients and is called Type III. This type inherently exhibits responses of zero magnitude at frequencies 0 and Nyquist, resulting in a bandpass filter shape. A Type IV design (even number of anti-symmetric coefficients) is shown in Fig 2. It has a highpass frequency response. Type III is the usual choice. for these reasons: A typical (i.e. properly filtered and sampled) sequence has no useful components at the Nyquist frequency. The Type IV impulse response requires a sample shift in the sequence. That causes the zero-valued coefficients to become non-zero, as seen in Figure 2. So a Type III design is potentially twice as efficient as Type IV. The group delay of a Type III design is an integer number of samples, which facilitates aligning with to create an analytic signal. The group delay of Type IV is halfway between two samples. The abrupt truncation of creates a rippling (Gibbs effect) of the flat frequency response. That can be mitigated by use of a window function to taper to zero. Method 2: Piecewise convolution. It is well known that direct convolution is computationally much more intensive than methods like overlap-save that give access to the efficiencies of the Fast Fourier transform via the convolution theorem. Specifically, the discrete Fourier transform (DFT) of a segment of is multiplied pointwise with a DFT of the sequence. An inverse DFT is done on the product, and the transient artifacts at the leading and trailing edges of the segment are discarded. Over-lapping input segments prevent gaps in the output stream. An equivalent time domain description is that segments of length (an arbitrary parameter) are convolved with the periodic function: When the duration of non-zero values of is the output sequence includes samples of   outputs are discarded from each block of and the input blocks are overlapped by that amount to prevent gaps. Method 3: Same as method 2, except the DFT of is replaced by samples of the distribution (whose real and imaginary components are all just or ) That convolves with a periodic summation: for some arbitrary parameter, is not an FIR, so the edge effects extend throughout the entire transform. Deciding what to delete and the corresponding amount of overlap is an application-dependent design issue. Fig 3 depicts the difference between methods 2 and 3. Only half of the antisymmetric impulse response is shown, and only the non-zero coefficients. The blue graph corresponds to method 2 where is truncated by a rectangular window function, rather than tapered. It is generated by a Matlab function, hilb(65). Its transient effects are exactly known and readily discarded. 
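For concreteness, the truncated and windowed approximation used in Method 1 can be sketched in a few lines of Python. The filter length, the Hamming window, and the test frequency below are arbitrary illustrative assumptions; the coefficients follow the ideal antisymmetric response, 2/(πn) for odd n and 0 for even n, described in this section.

import numpy as np

def fir_hilbert(num_taps=65, window=np.hamming):
    # Type III FIR approximation: odd length, antisymmetric, zero at even lags.
    # num_taps must be odd so that the group delay is an integer number of samples.
    assert num_taps % 2 == 1
    m = (num_taps - 1) // 2
    lags = np.arange(-m, m + 1)
    h = np.zeros(num_taps)
    odd = lags % 2 != 0
    h[odd] = 2.0 / (np.pi * lags[odd])
    return h * window(num_taps)          # taper the coefficients to reduce the Gibbs ripple

h = fir_hilbert(65)
t = np.arange(512)
x = np.cos(2 * np.pi * 0.1 * t)          # test tone well inside the passband
y = np.convolve(x, h, mode="same")       # "same" mode compensates the integer group delay
expected = np.sin(2 * np.pi * 0.1 * t)
print(np.max(np.abs(y[64:-64] - expected[64:-64])))   # in-band error, measured away from the ends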
The frequency response, which is determined by the function argument, is the only application-dependent design issue. The red graph is corresponding to method 3. It is the inverse DFT of the distribution. Specifically, it is the function that is convolved with a segment of by the MATLAB function, hilbert(u,512). The real part of the output sequence is the original input sequence, so that the complex output is an analytic representation of When the input is a segment of a pure cosine, the resulting convolution for two different values of is depicted in Fig 4 (red and blue plots). Edge effects prevent the result from being a pure sine function (green plot). Since is not an FIR sequence, the theoretical extent of the effects is the entire output sequence. But the differences from a sine function diminish with distance from the edges. Parameter is the output sequence length. If it exceeds the length of the input sequence, the input is modified by appending zero-valued elements. In most cases, that reduces the magnitude of the edge distortions. But their duration is dominated by the inherent rise and fall times of the impulse response. Fig 5 is an example of piecewise convolution, using both methods 2 (in blue) and 3 (red dots). A sine function is created by computing the Discrete Hilbert transform of a cosine function, which was processed in four overlapping segments, and pieced back together. As the FIR result (blue) shows, the distortions apparent in the IIR result (red) are not caused by the difference between and (green and red in Fig 3). The fact that is tapered (windowed) is actually helpful in this context. The real problem is that it's not windowed enough. Effectively, whereas the overlap-save method needs Number-theoretic Hilbert transform The number theoretic Hilbert transform is an extension of the discrete Hilbert transform to integers modulo an appropriate prime number. In this it follows the generalization of discrete Fourier transform to number theoretic transforms. The number theoretic Hilbert transform can be used to generate sets of orthogonal discrete sequences. See also Analytic signal Harmonic conjugate Hilbert spectroscopy Hilbert transform in the complex plane Hilbert–Huang transform Kramers–Kronig relation Riesz transform Single-sideband signal Singular integral operators of convolution type Notes Page citations References ; also http://www.fuchs-braun.com/media/d9140c7b3d5004fbffff8007fffffff0.pdf ; also https://www.dsprelated.com/freebooks/mdft/Analytic_Signals_Hilbert_Transform.html Further reading External links Derivation of the boundedness of the Hilbert transform Mathworld Hilbert transform — Contains a table of transforms an entry level introduction to Hilbert transformation. Harmonic functions Integral transforms Signal processing Singular integrals Schwartz distributions
Hilbert transform
[ "Technology", "Engineering" ]
4,887
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
574,491
https://en.wikipedia.org/wiki/Pessary
A pessary is a prosthetic device inserted into the vagina for structural and pharmaceutical purposes. It is most commonly used to treat stress urinary incontinence to stop urinary leakage and to treat pelvic organ prolapse to maintain the location of organs in the pelvic region. It can also be used to administer medications locally in the vagina or as a method of contraception. Pessaries come in different shapes and sizes, so it is important that individuals be fitted for them by health care professionals to avoid any complications. However, there are a few instances and circumstances that allow pessaries to be purchased without a prescription or without seeking help from a health care professional. Some side effects may occur if pessaries are not sized properly or regularly maintained, but with the appropriate care, pessaries are generally safe and well tolerated. History Early use of pessaries dates back to the ancient Egyptians, as they described using pessaries to treat pelvic organ prolapse. The term 'pessary' itself, is derived from the Ancient Greek word 'pessós', meaning round stone used for games. Pessaries are even mentioned in the oldest surviving copy of the Greek medical text, Hippocratic Oath, as something that physicians should never administer for the purposes of an abortion: "Similarly I will not give to a woman a pessary to cause abortion." The earliest documented pessaries were natural products. For example, Greek physicians Hippocrates and Soranus described inserting half of a pomegranate into the vagina to treat prolapse. It was not until the 16th century that the first purpose-made pessaries were made. For instance, in the late 1500s, Ambroise Paré was described as making oval pessaries from hammered brass and waxed cork. Nowadays, pessaries are generally made from silicone and are well tolerated and effective among patients who need them. Medical uses Pelvic organ prolapse The most common use for pessaries is to treat pelvic organ prolapse. A pelvic organ prolapse can occur when the muscles and tissues surrounding the bladder, uterus, vagina, small bowel, and rectum stop working properly to hold the organs in place and the organs begin to drop outside the body. The most common cause of such prolapse is childbirth, usually multiple births. Obesity, long-term respiratory problems, constipation, pelvic organ cancers, and hysterectomies can all be causes for pelvic organ prolapses as well. Some signs and symptoms include feeling pressure in the pelvic area, lower back pain, painful intercourse, urinary incontinence, a feeling that something is out of place, constipation, or bleeding from the vagina. Pessaries are manual devices that are inserted into the vagina to help support and reposition descended pelvic organs, which helps to prevent the worsening of prolapse, helps with symptom relief, and can delay or prevent the need for surgery. Further, pessaries can be used for surgery preparation as a way to maintain prolapse without progression. This is especially useful when a surgery may need to be delayed. Stress urinary incontinence Stress urinary incontinence is leakage of urine that is caused by sudden pressure on the bladder. It occurs during activities that increase the amount of pressure on the bladder such as coughing, sneezing, laughing, and exercising. The pressure causes opening of the sphincter muscles which usually help prevent urine leakage. 
Stress urinary incontinence is a common medical problem especially in women as about 1 in 3 women are affected by this condition at some point in their lives. Pessaries are considered a safe non-surgical treatment option for stress urinary incontinence as it can control the urine leakage by pushing the urethra closed. Pessaries can be removed any time. Other Some additional uses for pessaries are for an incarcerated uterus, prevention of preterm birth and an incompetent cervix. In early pregnancy the uterus can be displaced, which can lead to pain and rectal and urinary complications. A pessary can be used to treat this condition and support the uterus. Preterm birth is when babies are born prematurely, which puts the baby at increased risk for complications and even death. Currently, the use of pessaries to help prevent preterm birth is an ongoing area of research. The use of pessaries for an incompetent cervix is not commonly practiced today, but they have been used in the past. Specifically, an incompetent cervix is when the cervix begins to open up prematurely. This can lead to a preterm birth or even a miscarriage. Pessaries can be used to correctly position the cervix, increasing the success of pregnancy. Types of pessaries Therapeutic pessaries A therapeutic pessary is a medical device similar to the outer ring of a diaphragm. Therapeutic pessaries are used to support the uterus, vagina, bladder, or rectum. Pessaries are most commonly used for pelvic organ prolapse and considered a good treatment option for women who need or desire non-surgical management or future pregnancy. It is used to treat prolapse of uterine, vaginal wall (vaginal vault), bladder (cystocele), rectum (rectocele), or small bowel (enterocele). It is also used to treat stress urinary incontinence. There are different types of pessaries but most of them are made out of silicone—a harmless and durable material. Pessaries are mainly categorized into two types, supporting pessaries and space-occupying pessaries. Support pessaries function by supporting the prolapse and space-occupying pessaries by filling the vaginal space. There are also lever type pessaries. Support pessary Ring with support pessaries are the supporting type. These are often used as a first-line treatment and used for earlier stage prolapse since individuals can easily insert and remove them on their own without a doctor's help. These can be easily folded in half for insertion. Gellhorn pessaries are considered a type of supporting and space-occupying pessary. These resemble the shape of a mushroom and are used for more advanced pelvic organ prolapse. These are less preferred than ring with support pessaries due to difficulty with self-removal and insertion. Marland pessaries are another type of supporting pessary. These are used to treat pelvic organ prolapse as well as stress urinary incontinence. These pessaries have a ring at their base and a wedge-shaped ridge on one side. Although these pessaries are less likely to fall out than standard ring with support pessaries, individuals find it difficult to insert or remove them on their own. Space-occupying pessary Donut pessaries are considered space-occupying pessaries. These are used for more advanced pelvic organ prolapse including cystocele or rectocele as well as a second or third-degree uterine prolapse. Due to its shape and size, it is one of the hardest ones to insert and remove. Cube pessaries are space-occupying pessaries in the shape of a cube that are available in 7 sizes. 
The pessary is inserted into the vagina and kept in place by the suction of its 6 surfaces to the vaginal wall. Cube pessaries must be removed before sexual intercourse and replaced daily. Cube pessaries are generally used as a last resort only if the individuals cannot retain any other pessaries. This is due to undesirable side effects such as vaginal discharge and erosion of the vaginal wall. In order to remove the cube pessary, the suction must be broken by grasping the device. Gehrung pessaries are space-occupying pessaries that are similar to the Gellhorn pessaries. They are silicone devices that are placed into the vagina and used for second or third degree (more severe) uterine prolapse. These contain metal and should be removed prior to any MRI, ultrasound or X-rays. They can also be used to help with stress urinary incontinence such as urine leaks during exercising or coughing. These types of pessaries need to be fitted by a health care professional to ensure proper size. Once placed it should not move when standing, sitting, or squatting. It should be cleaned with mild soap and warm water every day or two. Lever pessary Hodge pessaries are a type of lever pessary. Although these can be used for mild cystocele and stress urinary incontinence, they are not commonly used. Smith, and Risser pessaries are other types of lever pessaries and they differ in shape. Pharmaceutical pessaries Treating vaginal yeast infections is one of the most common uses of pharmaceutical pessaries. They are also known as vaginal suppositories, which are inserted into the vagina and are designed to dissolve at body temperature. They usually contain a single use antifungal agent such as clotrimazole. Oral antifungal agents are also available. Pessaries can also be used in a similar way to help induce labor for women who have overdue expected delivery dates or who experience premature rupture of membranes. Prostaglandins are usually the medication used in these kinds of pessaries in order to relax the cervix and promote contractions. According to Pliny the Elder, pessaries were used as birth control in ancient times. Occlusive pessaries Occlusive pessaries are most commonly used for contraception. Also known as a contraceptive cap, they work similar to a diaphragm as a barrier form of contraception. They are inserted into the vagina and block sperm from entering the uterus through the cervix. The cap must be used in conjunction with a spermicide in order to be effective in preventing pregnancy. When used correctly the cap is thought to be 92–96% effective. These caps are reusable but come in different sizes. It is recommended for anyone attempting this form of contraception to be fitted for the correct size by a trained health care professional. Stem pessary The stem pessary, a type of occlusive pessary, was an early form of the cervical cap. Shaped like a dome, it covered the cervix, and a central rod or "stem" entered the uterus through the external orifice of the uterus, also known as the cervical canal or the os, to hold it in place. Side effects and complications When pessaries are used correctly, they are tolerated well for pelvic organ prolapse or stress urinary incontinence. However, pessaries are still a foreign device that is inserted into the vagina, so side effects can occur. Some more common side effects include vaginal discharge and odor. Vaginal discharge and odor may be associated with bacterial vaginosis, characterized by an overgrowth of naturally occurring bacteria in the vagina. 
These symptoms can be treated with the appropriate medications. More serious side effects include fistula formation between the vagina and rectum or the vagina and bladder, or erosion, or thinning, of the vaginal wall. Fistula formation is rare, but erosion of the vaginal wall occurs more frequently. Low estrogen production can also increase the risk of vaginal wall thinning. For individuals with pessaries that are not fitted for them, herniations of the cervix and uterus can occur through the opening of the pessary. This can lead to tissue necrosis in the cervix and uterus. To prevent these side effects, individuals can be fitted properly for their pessaries and undergo routine follow-up visits with their health care professionals to ensure the individual has the correct pessary size and no other complications. In addition, those with an increased risk of vaginal wall thinning can be prescribed estrogen to prevent erosion and prevent these complications. If pessaries are not used properly or not maintained periodically, more serious complications can occur. For example, the pessary can become embedded into the vagina, which makes it harder to remove. Estrogen can decrease the inflammation of the vaginal walls and promote skin cells in the vagina to mature, so use of estrogen cream can allow removal of the pessary more easily. In rare cases, pessaries would need to be removed through surgical procedures. To prevent complications, individuals should not use pessaries if they have characteristics that exclude them from this method of therapy. Contraindications to pessary use include current infections in the pelvis or vagina, or allergies to the material of the pessary (which can be silicone or latex). In addition, individuals should not be fitted for a pessary if they are less likely to properly maintain their pessary. See also United States v. One Package of Japanese Pessaries Diaphragm (birth control) Suppository References Dosage forms Drug delivery devices Implants (medicine) Medical equipment Vagina
Pessary
[ "Chemistry", "Biology" ]
2,834
[ "Pharmacology", "Drug delivery devices", "Medical equipment", "Medical technology" ]
575,641
https://en.wikipedia.org/wiki/Casting%20out%20nines
Casting out nines is any of three arithmetical procedures: Adding the decimal digits of a positive whole number, while optionally ignoring any 9s or digits which sum to 9 or a multiple of 9. The result of this procedure is a number which is smaller than the original whenever the original has more than one digit, leaves the same remainder as the original after division by nine, and may be obtained from the original by subtracting a multiple of 9 from it. The name of the procedure derives from this latter property. Repeated application of this procedure to the results obtained from previous applications until a single-digit number is obtained. This single-digit number is called the "digital root" of the original. If a number is divisible by 9, its digital root is 9. Otherwise, its digital root is the remainder it leaves after being divided by 9. A sanity test in which the above-mentioned procedures are used to check for errors in arithmetical calculations. The test is carried out by applying the same sequence of arithmetical operations to the digital roots of the operands as are applied to the operands themselves. If no mistakes are made in the calculations, the digital roots of the two resultants will be the same. If they are different, therefore, one or more mistakes must have been made in the calculations. Digit sums To "cast out nines" from a single number, its decimal digits can be simply added together to obtain its so-called digit sum. The digit sum of 2946, for example is 2 + 9 + 4 + 6 = 21. Since 21 = 2946 − 325 × 9, the effect of taking the digit sum of 2946 is to "cast out" 325 lots of 9 from it. If the digit 9 is ignored when summing the digits, the effect is to "cast out" one more 9 to give the result 12. More generally, when casting out nines by summing digits, any set of digits which add up to 9, or a multiple of 9, can be ignored. In the number 3264, for example, the digits 3 and 6 sum to 9. Ignoring these two digits, therefore, and summing the other two, we get 2 + 4 = 6. Since 6 = 3264 − 362 × 9, this computation has resulted in casting out 362 lots of 9 from 3264. For an arbitrary number, , normally represented by the sequence of decimal digits, , the digit sum is . The difference between the original number and its digit sum is Because numbers of the form are always divisible by 9 (since ), replacing the original number by its digit sum has the effect of casting out lots of 9. Digital roots If the procedure described in the preceding paragraph is repeatedly applied to the result of each previous application, the eventual result will be a single-digit number from which all 9s, with the possible exception of one, have been "cast out". The resulting single-digit number is called the digital root of the original. The exception occurs when the original number has a digital root of 9, whose digit sum is itself, and therefore will not be cast out by taking further digit sums. The number 12565, for instance, has digit sum 1+2+5+6+5 = 19, which, in turn, has digit sum 1+9=10, which, in its turn has digit sum 1+0=1, a single-digit number. The digital root of 12565 is therefore 1, and its computation has the effect of casting out (12565 - 1)/9 = 1396 lots of 9 from 12565. Checking calculations by casting out nines To check the result of an arithmetical calculation by casting out nines, each number in the calculation is replaced by its digital root and the same calculations applied to these digital roots. 
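The procedures described above translate directly into a few lines of Python. In the hedged sketch below the function names are illustrative; the final function applies the casting-out-nines check to a multiplication and, as discussed later, is a necessary but not sufficient test of correctness.

def digit_sum(n):
    # Sum of the decimal digits of a non-negative integer.
    return sum(int(d) for d in str(n))

def digital_root(n):
    # Repeatedly take digit sums until a single digit remains.
    while n >= 10:
        n = digit_sum(n)
    return n

def check_multiplication(a, b, claimed_product):
    # Casting-out-nines check of a * b = claimed_product.
    return digital_root(digital_root(a) * digital_root(b)) == digital_root(claimed_product)

print(digit_sum(2946))        # 21; 2946 - 21 is a multiple of 9
print(digital_root(12565))    # 1, via 19 and then 10
print(check_multiplication(5, 7, 35))   # True
print(check_multiplication(5, 7, 36))   # False: the error is detected
print(check_multiplication(5, 7, 44))   # True even though 44 is wrong (it differs from 35 by 9)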
The digital root of the result of this calculation is then compared with that of the result of the original calculation. If no mistake has been made in the calculations, these two digital roots must be the same. Examples in which casting-out-nines has been used to check addition, subtraction, multiplication, and division are given below. Examples Addition In each addend, cross out all 9s and pairs of digits that total 9, then add together what remains. These new values are called excesses. Add up leftover digits for each addend until one digit is reached. Now process the sum and also the excesses to get a final excess. Subtraction Multiplication *8 times 8 is 64; 6 and 4 are 10; 1 and 0 are 1. Division How it works The method works because the original numbers are 'decimal' (base 10), the modulus is chosen to differ by 1, and casting out is equivalent to taking a digit sum. In general any two 'large' integers, x and y, expressed in any smaller modulus as x and y' (for example, modulo 7) will always have the same sum, difference or product as their originals. This property is also preserved for the 'digit sum' where the base and the modulus differ by 1. If a calculation was correct before casting out, casting out on both sides will preserve correctness. However, it is possible that two previously unequal integers will be identical modulo 9 (on average, a ninth of the time). The operation does not work on fractions, since a given fractional number does not have a unique representation. A variation on the explanation A trick to learn to add with nines is to add ten to the digit and to count back one. Since we are adding 1 to the tens digit and subtracting one from the units digit, the sum of the digits should remain the same. For example, 9 + 2 = 11 with 1 + 1 = 2. When adding 9 to itself, we would thus expect the sum of the digits to be 9 as follows: 9 + 9 = 18, (1 + 8 = 9) and 9 + 9 + 9 = 27, (2 + 7 = 9). Let us look at a simple multiplication: 5 × 7 = 35, (3 + 5 = 8). Now consider (7 + 9) × 5 = 16 × 5 = 80, (8 + 0 = 8) or 7 × (9 + 5) = 7 × 14 = 98, (9 + 8 = 17), (1 + 7 = 8). Any non-negative integer can be written as 9×n + a, where 'a' is a single digit from 0 to 8, and 'n' is some non-negative integer. Thus, using the distributive rule, (9×n + a)×(9×m + b)= 9×9×n×m + 9(am + bn) + ab. Since the first two factors are multiplied by 9, their sums will end up being 9 or 0, leaving us with 'ab'. In our example, 'a' was 7 and 'b' was 5. We would expect that in any base system, the number before that base would behave just like the nine. Limitation to casting out nines While extremely useful, casting out nines does not catch all errors made while doing calculations. For example, the casting-out-nines method would not recognize the error in a calculation of 5 × 7 which produced any of the erroneous results 8, 17, 26, etc. (that is, any result congruent to 8 modulo 9). In particular, casting out nines does not catch transposition errors, such as 1324 instead of 1234. In other words, the method only catches erroneous results whose digital root is one of the 8 digits that is different from that of the correct result. History A form of casting out nines known to ancient Greek mathematicians was described by the Roman bishop Hippolytus (170–235) in The Refutation of all Heresies, and more briefly by the Syrian Neoplatonist philosopher Iamblichus (c.245–c.325) in his commentary on the Introduction to Arithmetic of Nicomachus of Gerasa. 
Both Hippolytus's and Iamblichus's descriptions, though, were limited to an explanation of how repeated digital sums of Greek numerals were used to compute a unique "root" between 1 and 9. Neither of them displayed any awareness of how the procedure could be used to check the results of arithmetical computations. The earliest known surviving work which describes how casting out nines can be used to check the results of arithmetical computations is the Mahâsiddhânta, written around 950 by the Indian mathematician and astronomer Aryabhata II (c.920–c.1000). Writing about 1020, the Persian polymath, Ibn Sina (Avicenna) (c.980–1037), also gave full details of what he called the "Hindu method" of checking arithmetical calculations by casting out nines. The procedure was described by Fibonacci in his Liber Abaci. Generalization This method can be generalized to determine the remainders of division by certain prime numbers. Since 3·3 = 9, So we can use the remainder from casting out nines to get the remainder of division by three. Casting out ninety nines is done by adding groups of two digits instead just one digit. Since 11·9 = 99, So we can use the remainder from casting out ninety nines to get the remainder of division by eleven. This is called casting out elevens'. The same result can also be calculated directly by alternately adding and subtracting the digits that make up . Eleven divides if and only if eleven divides that sum. Casting out nine hundred ninety nines is done by adding groups of three digits. Since 37·27 = 999, So we can use the remainder from casting out nine hundred ninety nines to get the remainder of division by thirty seven. Notes References External links "Numerology" by R. Buckminster Fuller "Paranormal Numbers" by Paul Niquette Arithmetic Error detection and correction
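The mod-11 and mod-37 variants described in the Generalization section can be sketched in the same style. In the hedged Python example below the digit groups are taken from the right, and the function names are illustrative.

def cast_out_elevens(n):
    # Alternating digit sum, starting from the units digit; congruent to n modulo 11.
    digits = [int(d) for d in str(n)]
    alternating = sum(d if i % 2 == 0 else -d for i, d in enumerate(reversed(digits)))
    return alternating % 11

def cast_out_999s(n):
    # Sum groups of three digits from the right; the result is congruent to n modulo 999,
    # and hence gives n's remainder modulo 37 (and modulo 27), since 999 = 27 * 37.
    s = str(n)
    total = sum(int(s[max(i - 3, 0):i]) for i in range(len(s), 0, -3))
    return total % 999

print(all(cast_out_elevens(n) == n % 11 for n in range(100000)))    # True
print(all(cast_out_999s(n) % 37 == n % 37 for n in range(100000)))  # True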
Casting out nines
[ "Mathematics", "Engineering" ]
2,077
[ "Arithmetic", "Reliability engineering", "Number theory", "Error detection and correction" ]
575,697
https://en.wikipedia.org/wiki/Cheminformatics
Cheminformatics (also known as chemoinformatics) refers to the use of physical chemistry theory with computer and information science techniques—so called "in silico" techniques—in application to a range of descriptive and prescriptive problems in the field of chemistry, including in its applications to biology and related molecular fields. Such in silico techniques are used, for example, by pharmaceutical companies and in academic settings to aid and inform the process of drug discovery, for instance in the design of well-defined combinatorial libraries of synthetic compounds, or to assist in structure-based drug design. The methods can also be used in chemical and allied industries, and such fields as environmental science and pharmacology, where chemical processes are involved or studied. History Cheminformatics has been an active field in various guises since the 1970s and earlier, with activity in academic departments and commercial pharmaceutical research and development departments. The term chemoinformatics was defined in its application to drug discovery by F.K. Brown in 1998:Chemoinformatics is the mixing of those information resources to transform data into information and information into knowledge for the intended purpose of making better decisions faster in the area of drug lead identification and optimization. Since then, both terms, cheminformatics and chemoinformatics, have been used, although, lexicographically, cheminformatics appears to be more frequently used, despite academics in Europe declaring for the variant chemoinformatics in 2006. In 2009, a prominent Springer journal in the field was founded by transatlantic executive editors named the Journal of Cheminformatics. Background Cheminformatics combines the scientific working fields of chemistry, computer science, and information science—for example in the areas of topology, chemical graph theory, information retrieval and data mining in the chemical space. Cheminformatics can also be applied to data analysis for various industries like paper and pulp, dyes and such allied industries. Applications Storage and retrieval A primary application of cheminformatics is the storage, indexing, and search of information relating to chemical compounds. The efficient search of such stored information includes topics that are dealt with in computer science, such as data mining, information retrieval, information extraction, and machine learning. Related research topics include: Digital libraries Unstructured data Structured data mining and mining of structured data Database mining Graph mining Molecule mining Sequence mining Tree mining File formats The in silico representation of chemical structures uses specialized formats such as the Simplified molecular input line entry specifications (SMILES) or the XML-based Chemical Markup Language. These representations are often used for storage in large chemical databases. While some formats are suited for visual representations in two- or three-dimensions, others are more suited for studying physical interactions, modeling and docking studies. Virtual libraries Chemical data can pertain to real or virtual molecules. Virtual libraries of compounds may be generated in various ways to explore chemical space and hypothesize novel compounds with desired properties. Virtual libraries of classes of compounds (drugs, natural products, diversity-oriented synthetic products) were recently generated using the FOG (fragment optimized growth) algorithm. 
This was done by using cheminformatic tools to train transition probabilities of a Markov chain on authentic classes of compounds, and then using the Markov chain to generate novel compounds that were similar to the training database. Virtual screening In contrast to high-throughput screening, virtual screening involves computationally screening in silico libraries of compounds, by means of various methods such as docking, to identify members likely to possess desired properties such as biological activity against a given target. In some cases, combinatorial chemistry is used in the development of the library to increase the efficiency in mining the chemical space. More commonly, a diverse library of small molecules or natural products is screened. Quantitative structure-activity relationship (QSAR) This is the calculation of quantitative structure–activity relationship and quantitative structure property relationship values, used to predict the activity of compounds from their structures. In this context there is also a strong relationship to chemometrics. Chemical expert systems are also relevant, since they represent parts of chemical knowledge as an in silico representation. There is a relatively new concept of matched molecular pair analysis or prediction-driven MMPA which is coupled with QSAR model in order to identify activity cliff. See also Bioinformatics Chemical file format Chemicalize.org Cheminformatics toolkits Chemogenomics Computational chemistry Information engineering Journal of Chemical Information and Modeling Journal of Cheminformatics Materials informatics Molecular design software Molecular graphics Molecular Informatics Molecular modelling Nanoinformatics Software for molecular modeling WorldWide Molecular Matrix Molecular descriptor References Further reading External links Computational chemistry Drug discovery Computational fields of study Applied statistics
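As a minimal sketch of how these storage and screening ideas fit together in practice, the snippet below assumes the open-source RDKit toolkit: it parses SMILES strings into in silico structures and ranks a toy library by Tanimoto similarity of Morgan fingerprints to a query. The molecule names and SMILES strings are illustrative choices, not examples from the text.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query_smiles = "CC(=O)Oc1ccccc1C(=O)O"   # aspirin, used here as the query structure
library = {
    "salicylic acid": "OC(=O)c1ccccc1O",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

def fingerprint(smiles):
    # SMILES line notation -> molecule object -> 2048-bit Morgan (circular) fingerprint
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

query_fp = fingerprint(query_smiles)
scores = {name: DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi))
          for name, smi in library.items()}

# Rank the library by decreasing similarity to the query.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} Tanimoto = {score:.2f}")
```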
Cheminformatics
[ "Chemistry", "Mathematics", "Technology", "Biology" ]
968
[ "Computational fields of study", "Life sciences industry", "Drug discovery", "Applied mathematics", "Theoretical chemistry", "Computational chemistry", "Computing and society", "Cheminformatics", "nan", "Medicinal chemistry", "Applied statistics" ]
576,142
https://en.wikipedia.org/wiki/Amyloplast
Amyloplasts are a type of plastid, double-enveloped organelles in plant cells that are involved in various biological pathways. Amyloplasts are specifically a type of leucoplast, a subcategory for colorless, non-pigment-containing plastids. Amyloplasts are found in roots and storage tissues, and they store and synthesize starch for the plant through the polymerization of glucose. Starch synthesis relies on the transportation of carbon from the cytosol, the mechanism of which is currently under debate. Starch synthesis and storage also takes place in chloroplasts, a type of pigmented plastid involved in photosynthesis. Amyloplasts and chloroplasts are closely related, and amyloplasts can turn into chloroplasts; this is for instance observed when potato tubers are exposed to light and turn green. Role in gravity sensing Amyloplasts are thought to play a vital role in gravitropism. Statoliths, specialized starch-accumulating amyloplasts, are denser than the cytoplasm and are able to settle to the bottom of the gravity-sensing cell, called a statocyte. This settling is a vital mechanism in the plant's perception of gravity, triggering the asymmetrical distribution of auxin that causes the curvature and growth of stems against the gravity vector, as well as growth of roots along the gravity vector. A plant lacking phosphoglucomutase (pgm), for example, is a starchless mutant, and the absence of starch prevents the statoliths from settling. This mutant shows a significantly weaker gravitropic response compared to a non-mutant plant. A normal gravitropic response can be rescued with hypergravity. In roots, gravity is sensed in the root cap, a section of tissue at the very tip of the root. Upon removal of the root cap, the root loses its ability to sense gravity. However, if the root cap is regrown, the root's gravitropic response will recover. In stems, gravity is sensed in the endodermal cells of the shoots. References Organelles Plant cells Plant physiology Cell anatomy
Amyloplast
[ "Biology" ]
475
[ "Plant physiology", "Plants" ]
576,246
https://en.wikipedia.org/wiki/Pribnow%20box
The Pribnow box (also known as the Pribnow-Schaller box) is a sequence of TATAAT of six nucleotides (thymine, adenine, thymine, etc.) that is an essential part of a promoter site on DNA for transcription to occur in bacteria. It is an idealized or consensus sequence—that is, it shows the most frequently occurring base at each position in many promoters analyzed; individual promoters often vary from the consensus at one or more positions. It is also commonly called the -10 sequence or element, because it is centered roughly ten base pairs upstream from the site of initiation of transcription. The Pribnow box has a function similar to the TATA box that occurs in promoters in eukaryotes and archaea: it is recognized and bound by a subunit of RNA polymerase during initiation of transcription. This region of the DNA is also the first place where base pairs separate during prokaryotic transcription to allow access to the template strand. The AT-richness is important to allow this separation, since adenine and thymine are easier to break apart (not only due to fewer hydrogen bonds, but also due to weaker base stacking effects). It is named after David Pribnow and Heinz Schaller. Probability of occurrence of each nucleotide in E. coli In fiction The term "Pribnow box" is used in episode 13 of Neon Genesis Evangelion, in reference to the chamber holding simulation Evangelions for testing purposes. See also TATA box References Regulatory sequences
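Because the consensus only records the most frequent base at each position, a candidate promoter is usually judged by how little it deviates from TATAAT. The short sketch below illustrates this idea; the input sequence and function name are arbitrary illustrations, and a realistic scan would also score the -35 element and the spacer length.

```python
CONSENSUS = "TATAAT"

def closest_pribnow(seq):
    """Return (position, hexamer, mismatches) of the window nearest the consensus."""
    best = None
    for i in range(len(seq) - len(CONSENSUS) + 1):
        window = seq[i:i + len(CONSENSUS)]
        mismatches = sum(a != b for a, b in zip(window, CONSENSUS))
        if best is None or mismatches < best[2]:
            best = (i, window, mismatches)
    return best

# An arbitrary stretch of upstream sequence; the best window here differs from
# the consensus at a single position, as many real promoters do.
print(closest_pribnow("GCGGTACCCTTACAATGTGGAATTGTGAGCGG"))
```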
Pribnow box
[ "Chemistry" ]
316
[ "Gene expression", "Regulatory sequences" ]
576,646
https://en.wikipedia.org/wiki/2.5D
2.5D (basic pronunciation two-and-a-half dimensional) perspective refers to gameplay or movement in a video game or virtual reality environment that is restricted to a two-dimensional (2D) plane with little to no access to a third dimension in a space that otherwise appears to be three-dimensional and is often simulated and rendered in a 3D digital environment. This is similar but different from pseudo-3D perspective (sometimes called three-quarter view when the environment is portrayed from an angled top-down perspective), which refers to 2D graphical projections and similar techniques used to cause images or scenes to simulate the appearance of being three-dimensional (3D) when in fact they are not. By contrast, games, spaces or perspectives that are simulated and rendered in 3D and used in 3D level design are said to be true 3D, and 2D rendered games made to appear as 2D without approximating a 3D image are said to be true 2D. Common in video games, 2.5D projections have also been useful in geographic visualization (GVIS) to help understand visual-cognitive spatial representations or 3D visualization. The terms three-quarter perspective and three-quarter view trace their origins to the three-quarter profile in portraiture and facial recognition, which depicts a person's face that is partway between a frontal view and a side view. Computer graphics Axonometric and oblique projection In axonometric projection and oblique projection, two forms of parallel projection, the viewpoint is rotated slightly to reveal other facets of the environment than what are visible in a top-down perspective or side view, thereby producing a three-dimensional effect. An object is "considered to be in an inclined position resulting in foreshortening of all three axes", and the image is a "representation on a single plane (as a drawing surface) of a three-dimensional object placed at an angle to the plane of projection." Lines perpendicular to the plane become points, lines parallel to the plane have true length, and lines inclined to the plane are foreshortened. They are popular camera perspectives among 2D video games, most commonly those released for 16-bit or earlier and handheld consoles, as well as in later strategy and role-playing video games. The advantage of these perspectives is that they combine the visibility and mobility of a top-down game with the character recognizability of a side-scrolling game. Thus the player can be presented an overview of the game world in the ability to see it from above, more or less, and with additional details in artwork made possible by using an angle: Instead of showing a humanoid in top-down perspective, as a head and shoulders seen from above, the entire body can be drawn when using a slanted angle; turning a character around would reveal how it looks from the sides, the front and the back, while the top-down perspective will display the same head and shoulders regardless. There are three main divisions of axonometric projection: isometric (equal measure), dimetric (symmetrical and unsymmetrical), and trimetric (single-view or only two sides). The most common of these drawing types in engineering drawing is isometric projection. This projection is tilted so that all three axes create equal angles at intervals of 120 degrees. The result is that all three axes are equally foreshortened. In video games, a form of dimetric projection with a 2:1 pixel ratio is more common due to the problems of anti-aliasing and square pixels found on most computer monitors. 
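A minimal sketch of that 2:1 dimetric tile-to-screen mapping follows; the tile dimensions and the 3×3 loop are illustrative values rather than constants from any particular engine.

```python
TILE_W, TILE_H = 64, 32   # 2:1 pixel ratio typical of game "isometric" tiles

def tile_to_screen(tx, ty, tz=0):
    """Map integer tile coordinates (and optional height tz) to screen pixels."""
    sx = (tx - ty) * (TILE_W // 2)
    sy = (tx + ty) * (TILE_H // 2) - tz * TILE_H
    return sx, sy

# Drawing back-to-front in order of (tx + ty) lets nearer tiles overdraw farther ones.
for ty in range(3):
    for tx in range(3):
        print((tx, ty), "->", tile_to_screen(tx, ty))
```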
In oblique projection typically all three axes are shown without foreshortening. All lines parallel to the axes are drawn to scale, and diagonals and curved lines are distorted. One tell-tale sign of oblique projection is that the face pointed toward the camera retains its right angles with respect to the image plane. Two examples of oblique projection are Ultima VII: The Black Gate and Paperboy. Examples of axonometric projection include SimCity 2000, and the role-playing games Diablo and Baldur's Gate. Billboarding In three-dimensional scenes, the term billboarding is applied to a technique in which objects are sometimes represented by two-dimensional images applied to a single polygon which is typically kept perpendicular to the line of sight. The name refers to the fact that objects are seen as if drawn on a billboard. This technique was commonly used in early 1990s video games when consoles did not have the hardware power to render fully 3D objects. This is also known as a backdrop. This can be used to good effect for a significant performance boost when the geometry is sufficiently distant that it can be seamlessly replaced with a 2D sprite. In games, this technique is most frequently applied to objects such as particles (smoke, sparks, rain) and low-detail vegetation. It has since become mainstream, and is found in many games such as Rome: Total War, where it is exploited to simultaneously display thousands of individual soldiers on a battlefield. Early examples include early first-person shooters like Marathon Trilogy, Wolfenstein 3D, Doom, Hexen and Duke Nukem 3D as well as racing games like Carmageddon and Super Mario Kart and platformers like Super Mario 64. Skyboxes and skydomes Skyboxes and skydomes are methods used to easily create a background to make a game level look bigger than it really is. If the level is enclosed in a cube, the sky, distant mountains, distant buildings, and other unreachable objects are rendered onto the cube's faces using a technique called cube mapping, thus creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses a sphere or hemisphere instead of a cube. As a viewer moves through a 3D scene, it is common for the skybox or skydome to remain stationary with respect to the viewer. This technique gives the skybox the illusion of being very far away since other objects in the scene appear to move, while the skybox does not. This imitates real life, where distant objects such as clouds, stars and even mountains appear to be stationary when the viewpoint is displaced by relatively small distances. Effectively, everything in a skybox will always appear to be infinitely distant from the viewer. This consequence of skyboxes dictates that designers should be careful not to carelessly include images of discrete objects in the textures of a skybox since the viewer may be able to perceive the inconsistencies of those objects' sizes as the scene is traversed. Scaling along the Z axis In some games, sprites are scaled larger or smaller depending on its distance to the player, producing the illusion of motion along the Z (forward) axis. Sega's 1986 video game Out Run, which runs on the Sega OutRun arcade system board, is a good example of this technique. In Out Run, the player drives a Ferrari into depth of the game window. The palms on the left and right side of the street are the same bitmap, but have been scaled to different sizes, creating the illusion that some are closer than others. 
The angles of movement are "left and right" and "into the depth" (while still capable of doing so technically, this game did not allow making a U-turn or going into reverse, therefore moving "out of the depth", as this did not make sense to the high-speed game play and tense time limit). Notice the view is comparable to that which a driver would have in reality when driving a car. The position and size of any billboard is generated by a (complete 3D) perspective transformation as are the vertices of the poly-line representing the center of the street. Often the center of the street is stored as a spline and sampled in a way that on straight streets every sampling point corresponds to one scan-line on the screen. Hills and curves lead to multiple points on one line and one has to be chosen. Or one line is without any point and has to be interpolated lineary from the adjacent lines. Very memory intensive billboards are used in Out Run to draw corn-fields and water waves which are wider than the screen even at the largest viewing distance and also in Test Drive to draw trees and cliffs. Drakkhen was notable for being among the first role-playing video games to feature a three-dimensional playing field. However, it did not employ a conventional 3D game engine, instead emulating one using character-scaling algorithms. The player's party travels overland on a flat terrain made up of vectors, on which 2D objects are zoomed. Drakkhen features an animated day-night cycle, and the ability to wander freely about the game world, both rarities for a game of its era. This type of engine was later used in the game Eternam. Some mobile games that were released on the Java ME platform, such as the mobile version of Asphalt: Urban GT and Driver: L.A. Undercover, used this method for rendering the scenery. While the technique is similar to some of Sega's arcade games, such as Thunder Blade and Cool Riders and the 32-bit version of Road Rash, it uses polygons instead of sprite scaling for buildings and certain objects though it looks flat shaded. Later mobile games (mainly from Gameloft), such as Asphalt 4: Elite Racing and the mobile version of Iron Man 2, uses a mix of sprite scaling and texture mapping for some buildings and objects. Parallax scrolling Parallaxing refers to when a collection of 2D sprites or layers of sprites are made to move independently of each other and/or the background to create a sense of added depth. This depth cue is created by relative motion of layers. The technique grew out of the multiplane camera technique used in traditional animation since the 1940s. This type of graphical effect was first used in the 1982 arcade game Moon Patrol. Examples include the skies in Rise of the Triad, the arcade version of Rygar, Sonic the Hedgehog, Street Fighter II, Shadow of the Beast and Dracula X Chronicles, as well as Super Mario World. Mode 7 Mode 7, a display system effect that included rotation and scaling, allowed for a 3D effect while moving in any direction without any actual 3D models, and was used to simulate 3D graphics on the SNES. Ray casting Ray casting is a first person pseudo-3D technique in which a ray for every vertical slice of the screen is sent from the position of the camera. These rays shoot out until they hit an object or wall, and that part of the wall is rendered in that vertical screen slice. Due to the limited camera movement and internally 2D playing field, this is often considered 2.5D. 
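A rough sketch of the ray-casting idea on a toy grid map is shown below, assuming fixed-step marching; production engines of the kind cited typically use an exact DDA grid traversal and textured wall slices instead, so this is illustrative only.

```python
import math

MAP = ["111111",   # '1' = wall, '0' = empty; the border is solid so rays always stop
       "100001",
       "101001",
       "100001",
       "111111"]

def wall_slice_height(px, py, ray_angle, view_angle, screen_h=200, step=0.01):
    """March one ray from (px, py); return the pixel height of the wall slice it hits."""
    dist = 0.0
    while True:
        dist += step
        x = px + math.cos(ray_angle) * dist
        y = py + math.sin(ray_angle) * dist
        if MAP[int(y)][int(x)] == "1":
            # cos() of the angle offset removes the classic fish-eye distortion.
            return int(screen_h / (dist * math.cos(ray_angle - view_angle)))

print(wall_slice_height(3.0, 2.5, ray_angle=0.0, view_angle=0.0))
```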
Bump, normal and parallax mapping Bump mapping, normal mapping and parallax mapping are techniques applied to textures in 3D rendering applications such as video games to simulate bumps and wrinkles on the surface of an object without using more polygons. To the end user, this means that textures such as stone walls will have more apparent depth and thus greater realism with less of an influence on the performance of the simulation. Bump mapping is achieved by perturbing the surface normals of an object and using a grayscale image and the perturbed normal during illumination calculations. The result is an apparently bumpy surface rather than a perfectly smooth surface although the surface of the underlying object is not actually changed. Bump mapping was introduced by Blinn in 1978. In normal mapping, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the dot product is the intensity of the light on that surface. Imagine a polygonal model of a sphere—you can only approximate the shape of the surface. By using a 3-channel bitmapped image textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (x, y and z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques. Parallax mapping (also called offset mapping or virtual displacement mapping) is an enhancement of the bump mapping and normal mapping techniques implemented by displacing the texture coordinates at a point on the rendered polygon by a function of the view angle in tangent space (the angle relative to the surface normal) and the value of the height map at that point. At steeper view-angles, the texture coordinates are displaced more, giving the illusion of depth due to parallax effects as the view changes. Film and animation techniques The term is also used to describe an animation effect commonly used in music videos and, more frequently, title sequences. Brought to wide attention by the motion picture The Kid Stays in the Picture, an adaptation of film producer Robert Evans's memoir, it involves the layering and animating of two-dimensional pictures in three-dimensional space. Earlier examples of this technique include Liz Phair's music video "Down" (directed by Rodney Ascher) and "A Special Tree" (directed by musician Giorgio Moroder). On a larger scale, the 2018 movie In Saturn's Rings used over 7.5 million separate two-dimensional images, captured in space or by telescopes, which were composited and moved using multi-plane animation techniques. Graphic design The term also refers to an often-used effect in the design of icons and graphical user interfaces (GUIs), where a slight 3D illusion is created by the presence of a virtual light source to the left (or in some cases right) side, and above a person's computer monitor. The light source itself is always invisible, but its effects are seen in the lighter colors for the top and left side, simulating reflection, and the darker colours to the right and below of such objects, simulating shadow. An advanced version of this technique can be found in some specialised graphic design software, such as Pixologic's ZBrush. 
The idea is that the program's canvas represents a normal 2D painting surface, but that the data structure that holds the pixel information is also able to store information with respect to a z-index, as well material settings, specularity, etc. Again, with this data it is thus possible to simulate lighting, shadows, and so forth. History The first video games that used pseudo-3D were primarily arcade games, the earliest known examples dating back to the mid-1970s, when they began using microprocessors. In 1975, Taito released Interceptor, an early first-person shooter and combat flight simulator that involved piloting a jet fighter, using an eight-way joystick to aim with a crosshair and shoot at enemy aircraft that move in formations of two and increase/decrease in size depending on their distance to the player. In 1976, Sega released Moto-Cross, an early black-and-white motorbike racing video game, based on the motocross competition, that was most notable for introducing an early three-dimensional third-person perspective. Later that year, Sega-Gremlin re-branded the game as Fonz, as a tie-in for the popular sitcom Happy Days. Both versions of the game displayed a constantly changing forward-scrolling road and the player's bike in a third-person perspective where objects nearer to the player are larger than those nearer to the horizon, and the aim was to steer the vehicle across the road, racing against the clock, while avoiding any on-coming motorcycles or driving off the road. That same year also saw the release of two arcade games that extended the car driving subgenre into three dimensions with a first-person perspective: Sega's Road Race, which displayed a constantly changing forward-scrolling S-shaped road with two obstacle race cars moving along the road that the player must avoid crashing while racing against the clock, and Atari's Night Driver, which presented a series of posts by the edge of the road though there was no view of the road or the player's car. Games using vector graphics had an advantage in creating pseudo-3D effects. 1979's Speed Freak recreated the perspective of Night Driver in greater detail. In 1979, Nintendo debuted Radar Scope, a shoot 'em up that introduced a three-dimensional third-person perspective to the genre, imitated years later by shooters such as Konami's Juno First and Activision's Beamrider. In 1980, Atari's Battlezone was a breakthrough for pseudo-3D gaming, recreating a 3D perspective with unprecedented realism, though the gameplay was still planar. It was followed up that same year by Red Baron, which used scaling vector images to create a forward scrolling rail shooter. Sega's arcade shooter Space Tactics, released in 1980, allowed players to take aim using crosshairs and shoot lasers into the screen at enemies coming towards them, creating an early 3D effect. It was followed by other arcade shooters with a first-person perspective during the early 1980s, including Taito's 1981 release Space Seeker, and Sega's Star Trek in 1982. Sega's SubRoc-3D in 1982 also featured a first-person perspective and introduced the use of stereoscopic 3-D through a special eyepiece. Sega's Astron Belt in 1983 was the first laserdisc video game, using full-motion video to display the graphics from a first-person perspective. 
Third-person rail shooters were also released in arcades at the time, including Sega's Tac/Scan in 1982, Nippon's Ambush in 1983, Nichibutsu's Tube Panic in 1983, and Sega's 1982 release Buck Rogers: Planet of Zoom, notable for its fast pseudo-3D scaling and detailed sprites. In 1981, Sega's Turbo was the first racing game to use sprite scaling with full-colour graphics. Pole Position by Namco is one of the first racing games to use the trailing camera effect that is now so familiar . In this particular example, the effect was produced by linescroll—the practice of scrolling each line independently in order to warp an image. In this case, the warping would simulate curves and steering. To make the road appear to move towards the player, per-line color changes were used, though many console versions opted for palette animation instead. Zaxxon, a shooter introduced by Sega in 1982, was the first game to use isometric axonometric projection, from which its name is derived. Though Zaxxon's playing field is semantically 3D, the game has many constraints which classify it as 2.5D: a fixed point of view, scene composition from sprites, and movements such as bullet shots restricted to straight lines along the axes. It was also one of the first video games to display shadows. The following year, Sega released the first pseudo-3D isometric platformer, Congo Bongo. Another early pseudo-3D platform game released that year was Konami's Antarctic Adventure, where the player controls a penguin in a forward-scrolling third-person perspective while having to jump over pits and obstacles. It was one of the earliest pseudo-3D games available on a computer, released for the MSX in 1983. That same year, Irem's Moon Patrol was a side-scrolling run & gun platform-shooter that introduced the use of layered parallax scrolling to give a pseudo-3D effect. In 1985, Space Harrier introduced Sega's "Super Scaler" technology that allowed pseudo-3D sprite-scaling at high frame rates, with the ability to scale 32,000 sprites and fill a moving landscape with them. The first original home console game to use pseudo-3D, and also the first to use multiple camera angles mirrored on television sports broadcasts, was Intellivision World Series Baseball (1983) by Don Daglow and Eddie Dombrower, published by Mattel. Its television sports style of display was later adopted by 3D sports games and is now used by virtually all major team sports titles. In 1984, Sega ported several pseudo-3D arcade games to the Sega SG-1000 console, including a smooth conversion of the third-person pseudo-3D rail shooter Buck Rogers: Planet of Zoom. By 1989, 2.5D representations were surfaces drawn with depth cues and a part of graphic libraries like GINO. 2.5D was also used in terrain modeling with software packages such as ISM from Dynamic Graphics, GEOPAK from Uniras and the Intergraph DTM system. 2.5D surface techniques gained popularity within the geography community because of its ability to visualize the normal thickness to area ratio used in many geographic models; this ratio was very small and reflected the thinness of the object in relation to its width, which made it the object realistic in a specific plane. These representations were axiomatic in that the entire subsurface domain was not used or the entire domain could not be reconstructed; therefore, it used only a surface and a surface is one aspect not the full 3D identity. 
The specific term "two-and-a-half-D" was used as early as 1994 by Warren Spector in an interview in the North American premiere issue of PC Gamer magazine. At the time, the term was understood to refer specifically to first-person shooters like Wolfenstein 3D and Doom, to distinguish them from System Shock's "true" 3D engine. With the advent of consoles and computer systems that were able to handle several thousand polygons (the most basic element of 3D computer graphics) per second and the usage of 3D specialized graphics processing units, pseudo-3D became obsolete. But even today, there are computer systems in production, such as cellphones, which are often not powerful enough to display true 3D graphics, and therefore use pseudo-3D for that purpose. Many games from the 1980s' pseudo-3D arcade era and 16-bit console era are ported to these systems, giving the manufacturers the possibility to earn revenues from games that are several decades old. The resurgence of 2.5D or visual analysis, in natural and earth science, has increased the role of computer systems in the creation of spatial information in mapping. GVIS has made real the search for unknowns, real-time interaction with spatial data, and control over map display and has paid particular attention to three-dimensional representations. Efforts in GVIS have attempted to expand higher dimensions and make them more visible; most efforts have focused on "tricking" vision into seeing three dimensions in a 2D plane. Much like 2.5D displays where the surface of a three-dimensional object is represented but locations within the solid are distorted or not accessible. Technical aspects and generalizations The reason for using pseudo-3D instead of "real" 3D computer graphics is that the system that has to simulate a 3D-looking graphic is not powerful enough to handle the calculation-intensive routines of 3D computer graphics, yet is capable of using tricks of modifying 2D graphics like bitmaps. One of these tricks is to stretch a bitmap more and more, therefore making it larger with each step, as to give the effect of an object coming closer and closer towards the player. Even simple shading and size of an image could be considered pseudo-3D, as shading makes it look more realistic. If the light in a 2D game were 2D, it would only be visible on the outline, and because outlines are often dark, they would not be very clearly visible. However, any visible shading would indicate the usage of pseudo-3D lighting and that the image uses pseudo-3D graphics. Changing the size of an image can cause the image to appear to be moving closer or further away, which could be considered simulating a third dimension. Dimensions are the variables of the data and can be mapped to specific locations in space; 2D data can be given 3D volume by adding a value to the x, y, or z plane. "Assigning height to 2D regions of a topographic map" associating every 2D location with a height/elevation value creates a 2.5D projection; this is not considered a "true 3D representation", however is used like 3D visual representation to "simplify visual processing of imagery and the resulting spatial cognition". See also 3D computer graphics Bas-relief Cel-shaded animation Flash animation Head-coupled perspective Isometric graphics in video games Limited animation List of stereoscopic video games Live2D Ray casting Trompe-l'œil Vector graphics References Video game development Video game graphics Dimension
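The bitmap-stretching trick described under "Technical aspects and generalizations" amounts to dividing a nominal sprite size by its depth. A minimal sketch, with an assumed focal length:

```python
FOCAL = 256  # assumed focal length, in pixels

def sprite_scale(z):
    """Perspective scale for a billboard at depth z > 0: doubling z halves the size."""
    return FOCAL / z

for z in (128, 256, 512, 1024):
    print(f"depth {z:4d} -> scale {sprite_scale(z):.2f}")
```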
2.5D
[ "Physics" ]
5,047
[ "Geometric measurement", "Dimension", "Physical quantities", "Theory of relativity" ]
577,162
https://en.wikipedia.org/wiki/Relativistic%20wave%20equations
In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields. The solutions to the equations, universally denoted as or (Greek psi), are referred to as "wave functions" in the context of RQM, and "fields" in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background). In the Schrödinger picture, the wave function or field is the solution to the Schrödinger equation; one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator Ĥ describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator. More generally – the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group. History Early 1920s: Classical and quantum mechanics The failure of classical mechanics applied to molecular, atomic, and nuclear systems and smaller induced the need for a new mechanics: quantum mechanics. The mathematical formulation was led by De Broglie, Bohr, Schrödinger, Pauli, and Heisenberg, and others, around the mid-1920s, and at that time was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant , the quantum of action, tends to zero. This is the correspondence principle. At this point, special relativity was not fully combined with quantum mechanics, so the Schrödinger and Heisenberg formulations, as originally proposed, could not be used in situations where the particles travel near the speed of light, or when the number of each type of particle changes (this happens in real particle interactions; the numerous forms of particle decays, annihilation, matter creation, pair production, and so on). Late 1920s: Relativistic quantum mechanics of spin-0 and spin-1/2 particles A description of quantum mechanical systems which could account for relativistic effects was sought for by many theoretical physicists from the late 1920s to the mid-1940s. The first basis for relativistic quantum mechanics, i.e. special relativity applied with quantum mechanics together, was found by all those who discovered what is frequently called the Klein–Gordon equation: by inserting the energy operator and momentum operator into the relativistic energy–momentum relation: The solutions to () are scalar fields. The KG equation is undesirable due to its prediction of negative energies and probabilities, as a result of the quadratic nature of () – inevitable in a relativistic theory. This equation was initially proposed by Schrödinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schrödinger equation) was still of importance. Nevertheless, () is applicable to spin-0 bosons. 
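For reference, in the usual notation the relativistic energy–momentum relation and the operator substitution E → iħ ∂/∂t, p → −iħ∇ give the Klein–Gordon equation in the form below (standard textbook conventions are assumed here):

```latex
E^{2} = (pc)^{2} + (m_{0}c^{2})^{2}
\;\;\longrightarrow\;\;
-\hbar^{2}\,\partial_{t}^{2}\psi
   = -\hbar^{2}c^{2}\,\nabla^{2}\psi + (m_{0}c^{2})^{2}\psi ,
\qquad\text{i.e.}\qquad
\left(\Box + \frac{m_{0}^{2}c^{2}}{\hbar^{2}}\right)\psi = 0,
\qquad
\Box \equiv \frac{1}{c^{2}}\,\partial_{t}^{2} - \nabla^{2}.
```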
Neither the non-relativistic nor relativistic equations found by Schrödinger could predict the fine structure in the Hydrogen spectral series. The mysterious underlying property was spin. The first two-dimensional spin matrices (better known as the Pauli matrices) were introduced by Pauli in the Pauli equation; the Schrödinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields, but this was phenomenological. Weyl found a relativistic equation in terms of the Pauli matrices; the Weyl equation, for massless spin- fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation () to the electron – by various manipulations he factorized the equation into the form: and one of these factors is the Dirac equation (see below), upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices and in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to () are multi-component spinor fields, and each component satisfies (). A remarkable result of spinor solutions is that half of the components describe a particle while the other half describe an antiparticle; in this case the electron and positron. The Dirac equation is now known to apply for all massive spin- fermions. In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation. Although a landmark in quantum theory, the Dirac equation is only true for spin- fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular – not all physicists were comfortable with the "Dirac sea" of negative energy states). 1930s–1960s: Relativistic quantum mechanics of higher-spin particles The natural problem became clear: to generalize the Dirac equation to particles with any spin; both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions. This was introduced and solved by Majorana in 1932, by a deviated approach to Dirac. Majorana considered one "root" of (): where is a spinor field now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices and are infinite-dimensional matrices, related to infinitesimal Lorentz transformations. He did not demand that each component of satisfy equation (); instead he regenerated the equation using a Lorentz-invariant action, via the principle of least action, and application of Lorentz group theory. Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). They were anticipated later (in a more involved way) by de Broglie (1934), and Duffin, Kemmer, and Petiau (around 1938–1939) see Duffin–Kemmer–Petiau algebra. The Dirac–Fierz–Pauli formalism was more sophisticated than Majorana's, as spinors were new mathematical tools in the early twentieth century, although Majorana's paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940. 
Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors and , symmetric in all indices, for a massive particle of spin for integer (see Van der Waerden notation for the meaning of the dotted indices): where is the momentum as a covariant spinor operator. For , the equations reduce to the coupled Dirac equations and and together transform as the original Dirac spinor. Eliminating either or shows that and each fulfill (). The direct derivation of the Dirac-Pauli-Fierz equations using the Bargmann-Wigner operators is given in. In 1941, Rarita and Schwinger focussed on spin- particles and derived the Rarita–Schwinger equation, including a Lagrangian to generate it, and later generalized the equations analogous to spin for integer . In 1945, Pauli suggested Majorana's 1932 paper to Bhabha, who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in () and () by an arbitrary constant, subject to a set of conditions which the wave functions must obey. Finally, in the year 1948 (the same year as Feynman's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann–Wigner equations. In the early 1960s, a reformulation of the Bargmann–Wigner equations was made by H. Joos and Steven Weinberg, the Joos–Weinberg equation. Various theorists at this time did further research in relativistic Hamiltonians for higher spin particles. 1960s–present The relativistic description of spin particles has been a difficult problem in quantum theory. It is still an area of the present-day research because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present. Linear equations The following equations have solutions which satisfy the superposition principle, that is, the wave functions are additive. Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wave functions are denoted , and are the components of the four-gradient operator. In matrix equations, the Pauli matrices are denoted by in which , where is the identity matrix: and the other matrices have their usual representations. The expression is a matrix operator which acts on 2-component spinor fields. The gamma matrices are denoted by , in which again , and there are a number of representations to select from. The matrix is not necessarily the identity matrix. The expression is a matrix operator which acts on 4-component spinor fields. Note that terms such as "" scalar multiply an identity matrix of the relevant dimension, the common sizes are or , and are conventionally not written for simplicity. 
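With the conventions just listed, the Dirac equation for a massive spin-1/2 field takes its familiar covariant form (given here for reference; sign and unit conventions vary between texts):

```latex
\left(i\hbar\,\gamma^{\mu}\partial_{\mu} - m c\right)\psi = 0
```

Each of the four spinor components of ψ then separately satisfies the Klein–Gordon equation, as noted above.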
Linear gauge fields The Duffin–Kemmer–Petiau equation is an alternative equation for spin-0 and spin-1 particles: Constructing RWEs Using 4-vectors and the energy–momentum relation Start with the standard special relativity (SR) 4-vectors 4-position 4-velocity 4-momentum 4-wavevector 4-gradient Note that each 4-vector is related to another by a Lorentz scalar: , where is the proper time , where is the rest mass , which is the 4-vector version of the Planck–Einstein relation & the de Broglie matter wave relation , which is the 4-gradient version of complex-valued plane waves Now, just apply the standard Lorentz scalar product rule to each one: The last equation is a fundamental quantum relation. When applied to a Lorentz scalar field , one gets the Klein–Gordon equation, the most basic of the quantum relativistic wave equations. : in 4-vector format : in tensor format : in factored tensor format The Schrödinger equation is the low-velocity limiting case () of the Klein–Gordon equation. When the relation is applied to a four-vector field instead of a Lorentz scalar field , then one gets the Proca equation (in Lorenz gauge): If the rest mass term is set to zero (light-like particles), then this gives the free Maxwell equation (in Lorenz gauge) Representations of the Lorentz group Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states of spin with spin z-component locally transform under some representation of the Lorentz group: where is some finite-dimensional representation, i.e. a matrix. Here is thought of as a column vector containing components with the allowed values of . The quantum numbers and as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of may occur more than once depending on the representation. Representations with several possible values for are considered below. The irreducible representations are labeled by a pair of half-integers or integers . From these all other representations can be built up using a variety of standard methods, like taking tensor products and direct sums. In particular, space-time itself constitutes a 4-vector representation so that . To put this into context; Dirac spinors transform under the representation. In general, the representation space has subspaces that under the subgroup of spatial rotations, SO(3), transform irreducibly like objects of spin j, where each allowed value: occurs exactly once. In general, tensor products of irreducible representations are reducible; they decompose as direct sums of irreducible representations. The representations and can each separately represent particles of spin . A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation. Non-linear equations There are equations which have solutions that do not satisfy the superposition principle. Nonlinear gauge fields Yang–Mills equation: describes a non-abelian gauge field Yang–Mills–Higgs equations: describes a non-abelian gauge field coupled with a massive spin-0 particle Spin 2 Einstein field equations: describe interaction of matter with the gravitational field (massless spin-2 field): The solution is a metric tensor field, rather than a wave function. 
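The Lorentz-scalar relations invoked in the "Constructing RWEs" subsection can be summarised as follows; the metric signature (+,−,−,−) and the plane-wave sign convention are assumptions of this sketch:

```latex
\mathbf{U}\cdot\mathbf{U} = c^{2}, \qquad
\mathbf{P}\cdot\mathbf{P} = (m_{0}c)^{2}, \qquad
\mathbf{P} = \hbar\,\mathbf{K}, \qquad
\boldsymbol{\partial}\,\psi = -i\,\mathbf{K}\,\psi
  \ \ \text{for}\ \ \psi = e^{\,i(\mathbf{k}\cdot\mathbf{x} - \omega t)} ,
\\[4pt]
\boldsymbol{\partial}\cdot\boldsymbol{\partial}\,\psi
   = -\,\mathbf{K}\cdot\mathbf{K}\,\psi
   = -\left(\frac{m_{0}c}{\hbar}\right)^{2}\psi
\quad\Longrightarrow\quad
\left[\boldsymbol{\partial}\cdot\boldsymbol{\partial}
      + \left(\frac{m_{0}c}{\hbar}\right)^{2}\right]\psi = 0 ,
```

which is the Klein–Gordon equation reached by the 4-vector route described above.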
See also List of equations in nuclear and particle physics List of equations in quantum mechanics Lorentz transformation Mathematical descriptions of the electromagnetic field Quantization of the electromagnetic field Minimal coupling Scalar field theory Status of special relativity References Further reading Equations of physics Quantum field theory Quantum mechanics Wave equations Waves
Relativistic wave equations
[ "Physics", "Mathematics" ]
2,892
[ "Quantum field theory", "Physical phenomena", "Equations of physics", "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Equations", "Special relativity", "Waves", "Motion (physics)", "Theory of relativity" ]
8,638,963
https://en.wikipedia.org/wiki/Superferromagnetism
Superferromagnetism is the magnetism of an ensemble of magnetically interacting super-moment-bearing material particles that would be superparamagnetic if they were not interacting. Nanoparticles of iron oxides, such as ferrihydrite (nominally FeOOH), often cluster and interact magnetically. These interactions change the magnetic behaviours of the nanoparticles (both above and below their blocking temperatures) and lead to an ordered low-temperature phase with non-randomly oriented particle super-moments. Discovery The phenomenon appears to have been first described, and the term "superferromagnetism" introduced, by Bostanjoglo and Röhkel for a metallic film system. A decade later, the same phenomenon was rediscovered and described to occur in small-particle systems. The discovery is attributed as such in the scientific literature. References Magnetic ordering
Superferromagnetism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
181
[ "Magnetic ordering", "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
8,647,217
https://en.wikipedia.org/wiki/Transcritical%20cycle
A transcritical cycle is a closed thermodynamic cycle where the working fluid goes through both subcritical and supercritical states. In particular, for power cycles the working fluid is kept in the liquid region during the compression phase and in vapour and/or supercritical conditions during the expansion phase. The ultrasupercritical steam Rankine cycle represents a widespread transcritical cycle in the electricity generation field from fossil fuels, where water is used as working fluid. Other typical applications of transcritical cycles to the purpose of power generation are represented by organic Rankine cycles, which are especially suitable to exploit low temperature heat sources, such as geothermal energy, heat recovery applications or waste to energy plants. With respect to subcritical cycles, the transcritical cycle exploits by definition higher pressure ratios, a feature that ultimately yields higher efficiencies for the majority of the working fluids. Considering then also supercritical cycles as a valid alternative to the transcritical ones, the latter cycles are capable of achieving higher specific works due to the limited relative importance of the work of compression work. This evidences the extreme potential of transcritical cycles to the purpose of producing the most power (measurable in terms of the cycle specific work) with the least expenditure (measurable in terms of spent energy to compress the working fluid). While in single level supercritical cycles both pressure levels are above the critical pressure of the working fluid, in transcritical cycles one pressure level is above the critical pressure and the other is below. In the refrigeration field carbon dioxide, CO2, is increasingly considered of interest as refrigerant. Transcritical conditions of the working fluid In transcritical cycles, the pressure of the working fluid at the outlet of the pump is higher than the critical pressure, while the inlet conditions are close to the saturated liquid pressure at the given minimum temperature. During the heating phase, which is typically considered an isobaric process, the working fluid overcomes the critical temperature, moving thus from the liquid to the supercritical phase without the occurrence of any evaporation process, a significant difference between subcritical and transcritical cycles. Due to this significant difference in the heating phase, the heat injection into the cycle is significantly more efficient from a second law perspective, since the average temperature difference between the hot source and the working fluid is reduced. As a consequence, the maximum temperatures reached by the cold source can be higher at fixed hot source characteristics. Therefore, the expansion process can be accomplished exploiting higher pressure ratios, which yields higher power production. Modern ultrasupercritical Rankine cycles can reach maximum temperatures up to 620°C exploiting the optimized heat introduction process. Characterization of the power cycle As in any power cycle, the most important indicator of its performance is the thermal efficiency. The thermal efficiency of a transcritical cycle is computed as: where is the thermal input of the cycle, provided by either combustion or with a heat exchanger, and is the power produced by the cycle. The power produced is considered comprehensive of the produced power during the expansion process of the working fluid and the one consumed during the compression step. 
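In symbols (the notation is assumed here rather than quoted from a source: Ẇ for power terms, Q̇_in for the thermal input, ṁ for the mass flow rate), the performance indicators discussed in this and the following paragraphs are:

```latex
\eta_{th} \;=\; \frac{\dot W_{net}}{\dot Q_{in}}
          \;=\; \frac{\dot W_{exp} - \dot W_{comp}}{\dot Q_{in}},
\qquad
w_{cycle} \;=\; \frac{\dot W_{net}}{\dot m},
\qquad
w_{pump} \;\simeq\; v_{liq}\,\Delta p
\quad\text{(isentropic compression of a nearly incompressible liquid).}
```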
The typical conceptual configuration of a transcritical cycle employs a single heater, thanks to the absence of drastic phase change from one state to another, being the pressure above the critical one. In subcritical cycles, instead, the heating process of the working fluid occurs in three different heat exchangers: in economizers the working fluid is heated (while remaining in the liquid phase) up to a condition approaching the saturated liquid conditions. Evaporators accomplish fluid evaporation process (typically up to the saturated vapour conditions) and in superheaters the working fluid is heated form the saturated vapour conditions to a superheated vapor. Moreover, using Rankine cycles as bottoming cycles in the context of combined gas-steam cycles keeps the configuration of the former ones as always subcritical. Therefore, there will be multiple pressure levels and hence multiple evaporators, economizers and superheaters, which introduces a significant complication to the heat injection process in the cycle. Characterization of the compression process Along adiabatic and isentropic processes, such as those theoretically associated with pumping processes in transcritical cycles, the enthalpy difference across both a compression and an expansion is computed as: Consequently, a working fluid with a lower specific volume (hence higher density) can inevitably be compressed spending a lower mechanical work than one with low density (more gas like). In transcritical cycles, the very high maximum pressures and the liquid conditions along the whole compression phase ensure a higher density and a lower specific volume with respect to supercritical counterparts. Considering the different physical phases though which compression processes occur, transcritical and supercritical cycles employ pumps (for liquids) and compressors (for gases), respectively, during the compression step. Characterization of the expansion process In the expansion step of the working fluid in transcritical cycles, as in subcritical ones, the working fluid can be discharged either in wet or dry conditions. Typical dry expansions are those involving organic or other unconventional working fluids, which are characterized by non-negligible molecular complexities and high molecular weights. The expansion step occurs in turbines: depending on the application and on the nameplate power produced by the power plant, both axial turbines and radial turbines can be exploited during fluid expansion. Axial turbines favour lower rotational speed and higher power production, while radial turbines are suitable for limited powers produced and high rotational speed. Organic cycles are appropriate choices for low enthalpy applications and are characterized by higher average densities across the expanders than those occurring in transcritical steam cycles: for this reason a low blade height is normally designed and the volumetric flow rate is kept limited to relatively small values. On the other hand in large scale application scenarios the expander blades typically show heights that exceed one meter and that are exploited in the steam cycles. Here, in fact, the fluid density at the outlet of the last expansion stage is significantly low. 
In general, the specific work of the cycle is expressed as: Even though the specific work of any cycle is strongly dependent on the actual working fluid considered in the cycle, transcritical cycles are expected to exhibit higher specific works than the corresponding subcritical and supercritical counterparts (i.e., those that exploit the same working fluid). For this reason, at fixed boundary conditions, power produced and working fluid, a lower mass flow rate is expected in transcritical cycles than in other configurations. Applications in power cycles Ultrasupercritical Rankine cycles In the last decades, the thermal efficiency of Rankine cycles has increased drastically, especially for large scale applications fueled by coal: for these power plants, the application of ultrasupercritical layouts was the main factor in achieving this goal, since the higher pressure ratio ensures higher cycle efficiencies. The increase in thermal efficiency of power plants fueled by dirty fuels also became crucial in reducing the specific emissions of the plants, both in terms of greenhouse gases and of pollutants such as sulfur dioxide or NOx. In large scale applications, ultrasupercritical Rankine cycles employ up to 10 feedwater heaters, five on the high pressure side and five on the low pressure side, including the deaerator, helping to raise the temperature at the inlet of the boiler up to 300°C and allowing significant regenerative air preheating, thus reducing the fuel consumption. Studies on the best-performing configurations of supercritical Rankine cycles (300 bar of maximum pressure, 600°C of maximum temperature and two reheats) show that such layouts can achieve a cycle efficiency higher than 50%, about 6% higher than subcritical configurations. Organic Rankine cycles Organic Rankine cycles are innovative power cycles which allow good performance with low enthalpy thermal sources and ensure condensation above the atmospheric pressure, thus avoiding deaerators and large cross-sectional areas in the heat rejection units. Moreover, with respect to steam Rankine cycles, ORCs have a higher flexibility in handling low power sizes, allowing significant compactness. Typical applications of ORCs cover waste heat recovery plants, geothermal plants, biomass plants and waste-to-energy power plants. Organic Rankine cycles use organic fluids (such as hydrocarbons, perfluorocarbons, chlorofluorocarbons, and many others) as working fluids. Most of them have a critical temperature in the range of 100–200°C, and are for this reason well suited to transcritical cycles in low temperature applications. For organic fluids, having a maximum pressure above the critical one can more than double the temperature difference across the turbine with respect to the subcritical counterpart, and significantly increase both the cycle specific work and the cycle efficiency. Applications in refrigeration cycles A refrigeration cycle, also known as a heat pump, is a thermodynamic cycle that allows the removal of heat from a low temperature heat source and the rejection of heat into a high temperature heat sink, thanks to mechanical power consumption. Traditional refrigeration cycles are subcritical, with the high pressure side (where heat rejection occurs) below the critical pressure. Innovative transcritical refrigeration cycles, instead, should use a working fluid whose critical temperature is around the ambient temperature. For this reason, carbon dioxide is chosen due to its favourable critical conditions.
In fact, the critical point of carbon dioxide is at about 31°C, which lies conveniently between the hot-source and cold-source temperatures of traditional refrigeration applications and therefore suits a transcritical arrangement. In transcritical refrigeration cycles the heat is rejected through a gas cooler instead of the desuperheater and condenser used in subcritical cycles. This reduces the number of plant components, the plant complexity and the cost of the power block. The advantages of using supercritical carbon dioxide as working fluid in refrigeration cycles, instead of traditional refrigerants (such as HFCs or HFOs), are both economic and environmental. The cost of carbon dioxide is two orders of magnitude lower than that of typical refrigerant working fluids, and its environmental impact is very limited (with a GWP of 1 and an ODP of 0); the fluid is neither reactive nor significantly toxic. No other refrigeration working fluid matches the environmentally favourable characteristics of carbon dioxide. References Energy conversion Power station technology Thermodynamics
Transcritical cycle
[ "Physics", "Chemistry", "Mathematics" ]
2,180
[ "Thermodynamics", "Dynamical systems" ]
11,772,928
https://en.wikipedia.org/wiki/Weibull%20modulus
The Weibull modulus is a dimensionless parameter of the Weibull distribution. It represents the width of a probability density function (PDF): a higher modulus is characteristic of a narrower distribution of values. Use case examples include biological and brittle material failure analysis, where the modulus is used to describe the variability of failure strength for materials. Definition The Weibull distribution, represented as a cumulative distribution function (CDF), is defined by F(x) = 1 − exp[−(x/σ₀)^m], in which m is the Weibull modulus. The scale parameter σ₀ is found during the fit of data to the Weibull distribution and represents the value below which roughly 63% of the data fall (since F(σ₀) = 1 − 1/e ≈ 0.632). As m increases, the CDF more closely resembles a step function at σ₀, which corresponds to a sharper peak in the probability density function (PDF), defined by f(x) = (m/σ₀)(x/σ₀)^(m−1) exp[−(x/σ₀)^m]. Failure analysis often uses this distribution as a CDF of the probability of failure F of a sample, as a function of applied stress σ, in the form F(σ) = 1 − exp[−(σ/σ₀)^m]. The failure stress of the sample, σ, is substituted for the variable in the above equation, and the location (threshold) parameter is taken to be 0, corresponding to an unstressed, equilibrium state of the material. In the plotted figure of the Weibull CDF, it is worth noting that the plotted functions all intersect at a stress value of 50 MPa, the characteristic strength of the distributions, even though the values of the Weibull moduli vary. It is also worth noting in the plotted figure of the Weibull PDF that a higher Weibull modulus results in a steeper slope within the plot. The Weibull distribution can also be multi-modal, in which case there are multiple reported characteristic values and multiple reported moduli m. When applied to materials failure analysis, the CDF of a bimodal Weibull distribution has the form F(σ) = Φ{1 − exp[−(σ/σ₀₁)^m₁]} + (1 − Φ){1 − exp[−(σ/σ₀₂)^m₂]}. This represents a material which fails by two different modes. In this equation m₁ is the modulus for the first mode and m₂ is the modulus for the second mode, while Φ is the fraction of the sample set which fails by the first mode. The corresponding PDF is obtained by differentiating this CDF with respect to σ. Examples of a bimodal Weibull PDF and CDF are plotted in the figures of this article, with values of the characteristic strength of 40 and 120 MPa, Weibull moduli of 4 and 10, and Φ = 0.5, corresponding to 50% of the specimens failing by each failure mode. Linearization of the CDF The complement of the cumulative Weibull distribution function, the probability P of survival of a specimen at a given stress value, can be expressed as P = exp[−(σ/σ₀)^m], where m is the Weibull modulus. If this probability is plotted against the stress, the graph is sigmoidal, as shown in the figure above. Taking advantage of the fact that the exponential is the base of the natural logarithm, the above equation can be rearranged to ln(1/P) = (σ/σ₀)^m, which, using the properties of logarithms, can also be expressed as ln[ln(1/P)] = m ln σ − m ln σ₀. When the left side of this equation is plotted as a function of the natural logarithm of stress, the result is a straight line with a slope equal to the Weibull modulus, m, and an x-intercept of ln σ₀. Looking at the plotted linearization of the CDFs from above, it can be seen that all of the lines intersect the x-axis at the same point because all of the functions share the same value of the characteristic strength; the slopes vary because of the differing values of the Weibull moduli. 
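The following sketch evaluates the two-parameter Weibull CDF and PDF defined above and demonstrates the linearization numerically. The characteristic strength of 50 MPa matches the example in the text; the moduli and the stress range are otherwise arbitrary illustrative choices.

```python
import numpy as np

# Two-parameter Weibull failure model with illustrative values:
# characteristic strength sigma_0 = 50 MPa (as in the example above).

def weibull_cdf(sigma, m, sigma0):
    """Probability of failure at or below stress sigma."""
    return 1.0 - np.exp(-(sigma / sigma0) ** m)

def weibull_pdf(sigma, m, sigma0):
    """Probability density of failure stress."""
    return (m / sigma0) * (sigma / sigma0) ** (m - 1) * np.exp(-(sigma / sigma0) ** m)

sigma0 = 50.0                                   # MPa, characteristic strength
for m in (2, 5, 20):
    print(f"m = {m:2d}: F(sigma_0) = {weibull_cdf(sigma0, m, sigma0):.3f}")   # always ~0.632

# Linearization: ln(ln(1/(1-F))) = m*ln(sigma) - m*ln(sigma_0),
# so plotting the left-hand side against ln(sigma) gives a line of slope m.
sigma = np.linspace(20.0, 90.0, 8)              # MPa, kept away from F = 1 for numerical safety
F = weibull_cdf(sigma, m=5, sigma0=sigma0)
y = np.log(np.log(1.0 / (1.0 - F)))
slope, intercept = np.polyfit(np.log(sigma), y, 1)
print(f"Recovered slope (Weibull modulus): {slope:.2f}")            # ~5
print(f"Recovered sigma_0: {np.exp(-intercept / slope):.1f} MPa")   # ~50
```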
Measurement Standards organizations have created multiple standards for measuring and reporting values of Weibull parameters, along with other statistical analyses of strength data: ASTM C1239-13: Standard Practice for Reporting Uniaxial Strength Data and Estimating Weibull Distribution Parameters for Advanced Ceramics ASTM D7846-21: Standard Practice for Reporting Uniaxial Strength Data and Estimating Weibull Distribution Parameters for Advanced Graphites ISO 20501:2019 Fine Ceramics (Advanced Ceramics, Advanced Technical Ceramics) - Weibull Statistics for Strength Data ANSI DIN EN 843-5:2007 Advanced Technical Ceramics - Mechanical Properties of Monolithic Ceramics at Room Temperature - Part 5: Statistical Analysis When applying a Weibull distribution to a set of data, the data points must first be put in ranked order. For the use case of failure analysis, specimens' failure strengths are ranked in ascending order, i.e. from lowest to greatest strength. A probability of failure is then assigned to each measured failure strength; ASTM C1239-13 uses the estimator P_f = (i − 0.5)/N, where i is the rank of the specimen and N is the total number of specimens in the sample. From there P_f can be plotted against failure strength to obtain a Weibull CDF. The Weibull parameters, modulus and characteristic strength, can then be obtained by fitting or by using the linearization method detailed above. Example uses from published work Weibull statistics are often used for ceramics and other brittle materials. They have also been applied to other fields, such as meteorology, where wind speeds are often described using Weibull statistics. Ceramics and brittle materials For ceramics and other brittle materials, the maximum stress that a sample can withstand before failure may vary from specimen to specimen, even under identical testing conditions. This is related to the distribution of physical flaws present in the surface or body of the brittle specimen, since brittle failure processes originate at these weak points. Much work has been done to describe brittle failure within the field of linear elastic fracture mechanics, in particular through the development of the ideas of the stress intensity factor and the Griffith criterion. When flaws are consistent and evenly distributed, samples will behave more uniformly than when flaws are clustered inconsistently. This must be taken into account when describing the strength of the material, so strength is best represented as a distribution of values rather than as one specific value. Consider strength measurements made on many small samples of a brittle ceramic material. If the measurements show little variation from sample to sample, the calculated Weibull modulus will be high, and a single strength value would serve as a good description of the sample-to-sample performance. It may be concluded that its physical flaws, whether inherent to the material itself or resulting from the manufacturing process, are distributed uniformly throughout the material. If the measurements show high variation, the calculated Weibull modulus will be low; this reveals that flaws are clustered inconsistently, and the measured strength will be generally weak and variable. Products made from components of low Weibull modulus will exhibit low reliability and their strengths will be broadly distributed. With careful manufacturing processes, Weibull moduli of up to 98 have been observed for glass fibers tested in tension. A table is provided with the Weibull moduli for several common materials. 
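A minimal sketch of the ranking-and-fitting procedure described above, using the P_f = (i − 0.5)/N estimator and the linearized CDF. The ten failure strengths are made-up numbers used only for illustration, not measured data.

```python
import numpy as np

# Hypothetical failure strengths (MPa) for 10 specimens -- illustrative only.
strengths = np.array([32.1, 38.4, 41.0, 44.7, 47.2, 49.9, 52.3, 55.8, 60.1, 67.4])

strengths = np.sort(strengths)          # rank in ascending order
N = len(strengths)
i = np.arange(1, N + 1)
P_f = (i - 0.5) / N                     # probability-of-failure estimator, as in ASTM C1239

# Linearized Weibull fit: ln(ln(1/(1 - P_f))) = m*ln(sigma) - m*ln(sigma_0)
x = np.log(strengths)
y = np.log(np.log(1.0 / (1.0 - P_f)))
m, c = np.polyfit(x, y, 1)              # slope m is the Weibull modulus
sigma_0 = np.exp(-c / m)                # characteristic strength from the intercept

print(f"Estimated Weibull modulus m      : {m:.2f}")
print(f"Estimated characteristic strength: {sigma_0:.1f} MPa")
```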
However, it is important to note that the Weibull modulus is a fitting parameter from strength data, and therefore the reported value may vary from source to source. It is also specific to the sample preparation and testing method, and subject to change if the analysis or manufacturing process changes. Organic materials Studies examining organic brittle materials highlight the consistency and variability of the Weibull modulus within naturally occurring ceramics such as human dentin and abalone nacre. Research on human dentin samples indicates that the Weibull modulus remains stable across different depths or locations within the tooth, with an average value of approximately 4.5 and a range between 3 and 6. Variations in the modulus suggest differences in flaw populations between individual teeth, thought to be caused by random defects introduced during specimen preparation. Speculation exists regarding a potential decrease in the Weibull modulus with age due to changes in flaw distribution and stress sensitivity. Failure in dentin typically initiates at these flaws, which can be intrinsic or extrinsic in origin, arising from factors such as cavity preparation, wear, damage, or cyclic loading. Studies on the abalone shell illustrate its unique structural adaptations, sacrificing tensile strength perpendicular to its structure to enhance strength parallel to the tile arrangement. The Weibull modulus of abalone nacre samples is determined to be 1.8, indicating a moderate degree of variability in strength among specimens. Quasi-brittle materials The Weibull modulus of quasi-brittle materials correlates with the decline in the slope of the energy barrier spectrum, as established in fracture mechanics models. This relationship allows for the determination of both the fracture energy barrier spectrum decline slope and the Weibull modulus, while taking factors like crack interaction and defect-induced degradation into consideration. Temperature dependence and variations due to crack interactions or stress field interactions are observed in the Weibull modulus of quasi-brittle materials. Damage accumulation leads to a rapid decrease in the Weibull modulus, resulting in a right-shifted distribution with a smaller Weibull modulus as damage increases. Quality analysis Weibull analysis is also used in quality control and "life analysis" for products. A higher Weibull modulus allows companies to predict the life of their products more confidently, for use in determining warranty periods. Other methods of characterization for brittle materials A further method to determine the strength of brittle materials has been described by the Wikibook contribution Weakest link determination by use of three parameter Weibull statistics. References Materials science Engineering statistics
Weibull modulus
[ "Physics", "Materials_science", "Engineering" ]
1,915
[ "Applied and interdisciplinary physics", "Materials science", "nan", "Engineering statistics" ]
11,774,498
https://en.wikipedia.org/wiki/Baum%E2%80%93Connes%20conjecture
In mathematics, specifically in operator K-theory, the Baum–Connes conjecture suggests a link between the K-theory of the reduced C*-algebra of a group and the K-homology of the classifying space of proper actions of that group. The conjecture sets up a correspondence between different areas of mathematics, with the K-homology of the classifying space being related to geometry, differential operator theory, and homotopy theory, while the K-theory of the group's reduced C*-algebra is a purely analytical object. The conjecture, if true, would have some older famous conjectures as consequences. For instance, the surjectivity part implies the Kadison–Kaplansky conjecture for discrete torsion-free groups, and the injectivity is closely related to the Novikov conjecture. The conjecture is also closely related to index theory, as the assembly map is a sort of index, and it plays a major role in Alain Connes' noncommutative geometry program. The origins of the conjecture go back to Fredholm theory, the Atiyah–Singer index theorem and the interplay of geometry with operator K-theory as expressed in the works of Brown, Douglas and Fillmore, among many other motivating subjects. Formulation Let Γ be a second countable locally compact group (for instance a countable discrete group). One can define a morphism, called the assembly map, from the equivariant K-homology with Γ-compact supports of the classifying space of proper actions of Γ to the K-theory of the reduced C*-algebra of Γ. The subscript index * can be 0 or 1. Paul Baum and Alain Connes introduced the following conjecture (1982) about this morphism: Baum–Connes Conjecture. The assembly map is an isomorphism. As the left hand side tends to be more easily accessible than the right hand side, because there are hardly any general structure theorems for the reduced C*-algebra, one usually views the conjecture as an "explanation" of the right hand side. The original formulation of the conjecture was somewhat different, as the notion of equivariant K-homology was not yet common in 1982. In case Γ is discrete and torsion-free, the left hand side reduces to the non-equivariant K-homology with compact supports of the ordinary classifying space of Γ. There is also a more general form of the conjecture, known as the Baum–Connes conjecture with coefficients, where both sides have coefficients in the form of a C*-algebra A on which Γ acts by automorphisms. It says, in KK-language, that the assembly map with coefficients is an isomorphism, containing the case without coefficients as the case A = ℂ. However, counterexamples to the conjecture with coefficients were found in 2002 by Nigel Higson, Vincent Lafforgue and Georges Skandalis. Nevertheless, the conjecture with coefficients remains an active area of research, since it is, not unlike the classical conjecture, often seen as a statement concerning particular groups or classes of groups. Examples Let Γ be the group of integers ℤ. Then the left hand side is the K-homology of its classifying space, which is the circle. The C*-algebra of the integers is, by the commutative Gelfand–Naimark transform (which reduces to the Fourier transform in this case), isomorphic to the algebra of continuous functions on the circle. So the right hand side is the topological K-theory of the circle. One can then show that the assembly map is KK-theoretic Poincaré duality as defined by Gennadi Kasparov, which is an isomorphism. Results The conjecture without coefficients is still open, although the field has received great attention since 1982. 
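For reference, a sketch of the statement in the notation most commonly used today (this is standard modern notation, not a quotation of the original 1982 formulation, and conventions for the symbols vary between authors):

```latex
% Assembly map for a second countable locally compact group \Gamma
\mu_* \colon K_*^{\Gamma}\bigl(\underline{E}\Gamma\bigr) \longrightarrow K_*\bigl(C_r^*(\Gamma)\bigr),
\qquad * = 0, 1.

% Version with coefficients in a \Gamma-C^*-algebra A (reduced crossed product on the right):
\mu_{*,A} \colon K_*^{\Gamma}\bigl(\underline{E}\Gamma; A\bigr) \longrightarrow K_*\bigl(A \rtimes_r \Gamma\bigr).
```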
The conjecture is proved for the following classes of groups: Discrete subgroups of SO(n,1) and SU(n,1). Groups with the Haagerup property, sometimes called a-T-menable groups. These are groups that admit an isometric action on an affine Hilbert space H which is proper, in the sense that ‖gₙ·v‖ → ∞ for all v in H and all sequences of group elements gₙ tending to infinity (i.e. eventually leaving every compact subset of the group). Examples of a-T-menable groups are amenable groups, Coxeter groups, groups acting properly on trees, and groups acting properly on simply connected cubical complexes. Groups that admit a finite presentation with only one relation. Discrete cocompact subgroups of real Lie groups of real rank 1. Cocompact lattices in SL(3,ℝ) or SL(3,ℚp). It was a long-standing problem since the first days of the conjecture to exhibit a single infinite property (T) group that satisfies it. However, such a group was given by V. Lafforgue in 1998, as he showed that cocompact lattices in SL(3,ℝ) have the property of rapid decay and thus satisfy the conjecture. Gromov hyperbolic groups and their subgroups. Among non-discrete groups, the conjecture was shown in 2003 by J. Chabert, S. Echterhoff and R. Nest for the vast class of all almost connected groups (i.e. groups having a cocompact connected component), and for all groups of k-rational points of a linear algebraic group over a local field k of characteristic zero. For the important subclass of real reductive groups, the conjecture had already been shown in 1987 by Antony Wassermann. Injectivity is known for a much larger class of groups thanks to the Dirac-dual-Dirac method. This goes back to ideas of Michael Atiyah and was developed in great generality by Gennadi Kasparov in 1987. Injectivity is known for the following classes: Discrete subgroups of connected Lie groups or virtually connected Lie groups. Discrete subgroups of p-adic groups. Bolic groups (a certain generalization of hyperbolic groups). Groups which admit an amenable action on some compact space. The simplest example of a group for which it is not known whether it satisfies the conjecture is SL₃(ℤ). References External links On the Baum-Connes conjecture by Dmitry Matsnev. C*-algebras K-theory Surgery theory Conjectures Unsolved problems in mathematics
Baum–Connes conjecture
[ "Mathematics" ]
1,251
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures" ]
11,779,912
https://en.wikipedia.org/wiki/Marine%20sediment
Marine sediment, also called ocean sediment or seafloor sediment, consists of deposits of insoluble particles that have accumulated on the seafloor. These particles either have their origins in soil and rocks and have been transported from the land to the sea, mainly by rivers but also by dust carried by wind and by the flow of glaciers into the sea, or they are biogenic deposits from marine organisms or from chemical precipitation in seawater, as well as from underwater volcanoes and meteorite debris. Except within a few kilometres of a mid-ocean ridge, where the volcanic rock is still relatively young, most parts of the seafloor are covered in sediment. This material comes from several different sources and is highly variable in composition. Seafloor sediment can range in thickness from a few millimetres to several tens of kilometres. Near the surface seafloor sediment remains unconsolidated, but at depths of hundreds to thousands of metres the sediment becomes lithified (turned to rock). Rates of sediment accumulation are relatively slow throughout most of the ocean, in many cases taking thousands of years for any significant deposits to form. Sediment transported from the land accumulates the fastest, on the order of one metre or more per thousand years for coarser particles. However, sedimentation rates near the mouths of large rivers with high discharge can be orders of magnitude higher. Biogenous oozes accumulate at a rate of about one centimetre per thousand years, while small clay particles are deposited in the deep ocean at around one millimetre per thousand years. Sediments from the land are deposited on the continental margins by surface runoff, river discharge, and other processes. Turbidity currents can transport this sediment down the continental slope to the deep ocean floor. The deep ocean floor spreads away from the mid-ocean ridge and is eventually subducted, carrying its accumulated sediment into the molten interior of the earth. In turn, molten material from the interior returns to the surface of the earth in the form of lava flows and emissions from deep sea hydrothermal vents, ensuring the process continues indefinitely. The sediments provide habitat for a multitude of marine life, particularly of marine microorganisms. Their fossilized remains contain information about past climates, plate tectonics, ocean circulation patterns, and the timing of major extinctions. Overview Except within a few kilometres of a mid-ocean ridge, where the volcanic rock is still relatively young, most parts of the seafloor are covered in sediments. This material comes from several different sources and is highly variable in composition, depending on proximity to a continent, water depth, ocean currents, biological activity, and climate. Seafloor sediments (and sedimentary rocks) can range in thickness from a few millimetres to several tens of kilometres. Near the surface, the sea-floor sediments remain unconsolidated, but at depths of hundreds to thousands of metres (depending on the type of sediment and other factors) the sediment becomes lithified. The various sources of seafloor sediment can be summarized as follows: Terrigenous sediment is derived from continental sources transported by rivers, wind, ocean currents, and glaciers. It is dominated by quartz, feldspar, clay minerals, iron oxides, and terrestrial organic matter. 
Pelagic carbonate sediment is derived from organisms (e.g., foraminifera) living in the ocean water (at various depths, but mostly near surface) that make their shells (a.k.a. tests) out of carbonate minerals such as calcite. Pelagic silica sediment is derived from marine organisms (e.g., diatoms and radiolaria) that make their tests out of silica (microcrystalline quartz). Volcanic ash and other volcanic materials are derived from both terrestrial and submarine eruptions. Iron and manganese nodules form as direct precipitates from ocean-bottom water. The distributions of some of these materials around the seas are shown in the diagram at the start of this article. Terrigenous sediments predominate near the continents and within inland seas and large lakes. These sediments tend to be relatively coarse, typically containing sand and silt, but in some cases even pebbles and cobbles. Clay settles slowly in nearshore environments, but much of the clay is dispersed far from its source areas by ocean currents. Clay minerals are predominant over wide areas in the deepest parts of the ocean, and most of this clay is terrestrial in origin. Siliceous oozes (derived from radiolaria and diatoms) are common in the south polar region, along the equator in the Pacific, south of the Aleutian Islands, and within large parts of the Indian Ocean. Carbonate oozes are widely distributed in all of the oceans within equatorial and mid-latitude regions. In fact, clay settles everywhere in the oceans, but in areas where silica- and carbonate-producing organisms are prolific, they produce enough silica or carbonate sediment to dominate over clay. Carbonate sediments are derived from a wide range of near-surface pelagic organisms that make their shells out of carbonate. These tiny shells, and the even tinier fragments that form when they break into pieces, settle slowly through the water column, but they don't necessarily make it to the bottom. While calcite is insoluble in surface water, its solubility increases with depth (and pressure) and at around 4,000 m, the carbonate fragments dissolve. This depth, which varies with latitude and water temperature, is known as the carbonate compensation depth. As a result, carbonate oozes are absent from the deepest parts of the ocean (deeper than 4,000 m), but they are common in shallower areas such as the mid-Atlantic ridge, the East Pacific Rise (west of South America), along the trend of the Hawaiian/Emperor Seamounts (in the northern Pacific), and on the tops of many isolated seamounts. Texture Sediment texture can be examined in several ways. The first way is grain size. Sediments can be classified by particle size according to the Wentworth scale. Clay sediments are the finest, with a grain diameter of less than 0.004 mm, and boulders are the largest, with grain diameters of 256 mm or larger. Among other things, grain size represents the conditions under which the sediment was deposited. High energy conditions, such as strong currents or waves, usually result in the deposition of only the larger particles, as the finer ones will be carried away. Lower energy conditions will allow the smaller particles to settle out and form finer sediments. Sorting is another way to categorize sediment texture. Sorting refers to how uniform the particles are in terms of size. If all of the particles are of a similar size, such as in beach sand, the sediment is well-sorted. If the particles are of very different sizes, the sediment is poorly sorted, such as in glacial deposits. 
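A small sketch of the Wentworth-scale classification mentioned above. The clay and boulder limits are those quoted in the text; the intermediate class boundaries are the commonly tabulated ones and are an assumption here, since different sources subdivide the scale slightly differently.

```python
# Rough grain-size classifier following the Wentworth scale: clay < 0.004 mm and
# boulders >= 256 mm as quoted above; intermediate limits are assumed common values.

WENTWORTH_CLASSES = [        # (upper diameter limit in mm, class name)
    (0.004, "clay"),
    (0.0625, "silt"),
    (2.0, "sand"),
    (4.0, "granule"),
    (64.0, "pebble"),
    (256.0, "cobble"),
    (float("inf"), "boulder"),
]

def wentworth_class(diameter_mm: float) -> str:
    """Return the Wentworth size class for a particle diameter in millimetres."""
    for upper_limit, name in WENTWORTH_CLASSES:
        if diameter_mm < upper_limit:
            return name
    return "boulder"

for d in (0.001, 0.03, 0.5, 3.0, 10.0, 300.0):
    print(f"{d:7.3f} mm -> {wentworth_class(d)}")
```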
A third way to describe marine sediment texture is its maturity, or how long its particles have been transported by water. One way which can indicate maturity is how round the particles are. The more mature a sediment, the rounder the particles will be, as a result of being abraded over time. A high degree of sorting can also indicate maturity, because over time the smaller particles will be washed away, and a given amount of energy will move particles of a similar size over the same distance. Lastly, the older and more mature a sediment, the higher the quartz content, at least in sediments derived from rock particles. Quartz is a common mineral in terrestrial rocks, and it is very hard and resistant to abrasion. Over time, particles made from other materials are worn away, leaving only quartz behind. Beach sand is a very mature sediment; it is composed primarily of quartz, and the particles are rounded and of similar size (well-sorted). Origins Marine sediments can also be classified by their source of origin. There are four types: Lithogenous sediments, also called terrigenous sediments, are derived from preexisting rock and come from land via rivers, ice, wind and other processes. They are referred to as terrigenous sediments since most of this material comes from the land. Biogenous sediments are composed of the remains of marine organisms, and come from organisms like plankton when their exoskeletons break down. Hydrogenous sediments come from chemical reactions in the water, and are formed when materials that are dissolved in water precipitate out and form solid particles. Cosmogenous sediments are derived from extraterrestrial sources, coming from space, filtering in through the atmosphere or carried to Earth on meteorites. Lithogenous Lithogenous or terrigenous sediment is primarily composed of small fragments of preexisting rocks that have made their way into the ocean. These sediments can contain the entire range of particle sizes, from microscopic clays to large boulders, and they are found almost everywhere on the ocean floor. Lithogenous sediments are created on land through the process of weathering, where rocks and minerals are broken down into smaller particles through the action of wind, rain, water flow, temperature- or ice-induced cracking, and other erosive processes. These small eroded particles are then transported to the oceans through a variety of mechanisms: Streams and rivers: Various forms of runoff deposit large amounts of sediment into the oceans, mostly in the form of finer-grained particles. About 90% of the lithogenous sediment in the oceans is thought to have come from river discharge, particularly from Asia. Most of this sediment, especially the larger particles, will be deposited and remain fairly close to the coastline; however, smaller clay particles may remain suspended in the water column for long periods of time and may be transported great distances from the source. Wind: Windborne (aeolian) transport can take small particles of sand and dust and move them thousands of kilometres from the source. These small particles can fall into the ocean when the wind dies down, or can serve as the nuclei around which raindrops or snowflakes form. Aeolian transport is particularly important near desert areas. Glaciers and ice rafting: As glaciers grind their way over land, they pick up lots of soil and rock particles, including very large boulders, that get carried by the ice. When the glacier meets the ocean and begins to break apart or melt, these particles get deposited. 
Most of the deposition will happen close to where the glacier meets the water, but a small amount of material is also transported longer distances by rafting, where larger pieces of ice drift far from the glacier before releasing their sediment. Gravity: Landslides, mudslides, avalanches, and other gravity-driven events can deposit large amounts of material into the ocean when they happen close to shore. Waves: Wave action along a coastline will erode rocks and will pull loose particles from beaches and shorelines into the water. Volcanoes: Volcanic eruptions emit vast amounts of ash and other debris into the atmosphere, where it can then be transported by wind to eventually get deposited in the oceans. Gastroliths: Another, relatively minor, means of transporting lithogenous sediment to the ocean is gastroliths. Gastrolith means "stomach stone". Many animals, including seabirds, pinnipeds, and some crocodiles, deliberately swallow stones and regurgitate them later. Stones swallowed on land can be regurgitated at sea. The stones can help grind food in the stomach or act as ballast regulating buoyancy. Mostly these processes deposit lithogenous sediment close to shore. Sediment particles can then be transported farther by waves and currents, and may eventually escape the continental shelf and reach the deep ocean floor. Composition Lithogenous sediments usually reflect the composition of whatever materials they were derived from, so they are dominated by the major minerals that make up most terrestrial rock. This includes quartz, feldspar, clay minerals, iron oxides, and terrestrial organic matter. Quartz (silicon dioxide, the main component of glass) is one of the most common minerals found in nearly all rocks, and it is very resistant to abrasion, so it is a dominant component of lithogenous sediments, including sand. Biogenous Biogenous sediments come from the remains of living organisms that settle out as sediment when the organisms die. It is the "hard parts" of the organisms that contribute to the sediments; things like shells, teeth or skeletal elements, as these parts are usually mineralized and are more resistant to decomposition than the fleshy "soft parts" that rapidly deteriorate after death. Macroscopic sediments contain large remains, such as skeletons, teeth, or shells of larger organisms. This type of sediment is fairly rare over most of the ocean, as large organisms do not die in enough of a concentrated abundance to allow these remains to accumulate. One exception is around coral reefs; here there is a great abundance of organisms that leave behind their remains, in particular the fragments of the stony skeletons of corals that make up a large percentage of tropical sand. Microscopic sediment consists of the hard parts of microscopic organisms, particularly their shells, or tests. Although very small, these organisms are highly abundant and as they die by the billions every day their tests sink to the bottom to create biogenous sediments. Sediments composed of microscopic tests are far more abundant than sediments from macroscopic particles, and because of their small size they create fine-grained, mushy sediment layers. If the sediment layer consists of at least 30% microscopic biogenous material, it is classified as a biogenous ooze. The remainder of the sediment is often made up of clay. 
The primary sources of microscopic biogenous sediments are unicellular algae and protozoans (single-celled amoeba-like creatures) that secrete tests of either calcium carbonate (CaCO3) or silica (SiO2). Silica tests come from two main groups, the diatoms (algae) and the radiolarians (protozoans). Diatoms are particularly important members of the phytoplankton, functioning as small, drifting algal photosynthesizers. A diatom consists of a single algal cell surrounded by an elaborate silica shell that it secretes for itself. Diatoms come in a range of shapes, from elongated, pennate forms to round, centric shapes that often have two halves, like a Petri dish. In areas where diatoms are abundant, the underlying sediment is rich in silica diatom tests, and is called diatomaceous earth. Radiolarians are planktonic protozoans (making them part of the zooplankton) that, like diatoms, secrete a silica test. The test surrounds the cell and can include an array of small openings through which the radiolarian can extend an amoeba-like "arm" or pseudopod. Radiolarian tests often display a number of rays protruding from their shells which aid in buoyancy. Oozes that are dominated by diatom or radiolarian tests are called siliceous oozes. Like the siliceous sediments, the calcium carbonate, or calcareous, sediments are also produced from the tests of microscopic algae and protozoans; in this case the coccolithophores and foraminiferans. Coccolithophores are single-celled planktonic algae about 100 times smaller than diatoms. Their tests are composed of a number of interlocking CaCO3 plates (coccoliths) that form a sphere surrounding the cell. When coccolithophores die, the individual plates sink out and form an ooze. Over time, the coccolithophore ooze lithifies to become chalk. The White Cliffs of Dover in England are composed of coccolithophore-rich ooze that turned into chalk deposits. Foraminiferans (also referred to as forams) are protozoans whose tests are often chambered, similar to the shells of snails. As the organism grows, it secretes new, larger chambers in which to reside. Most foraminiferans are benthic, living on or in the sediment, but there are some planktonic species living higher in the water column. When coccolithophores and foraminiferans die, they form calcareous oozes. Older calcareous sediment layers contain the remains of another type of organism, the discoasters: single-celled algae related to the coccolithophores that also produced calcium carbonate tests. Discoaster tests were star-shaped, and reached sizes of 5-40 μm across. Discoasters went extinct approximately 2 million years ago, but their tests remain in deep, tropical sediments that predate their extinction. Because of their small size, these tests sink very slowly; a single microscopic test may take about 10–50 years to sink to the bottom. Given that slow descent, a current of only 1 cm/sec could carry the test as much as 15,000 km away from its point of origin before it reaches the bottom. Despite this, the sediments in a particular location are well-matched to the types of organisms and degree of productivity that occurs in the water overhead. This means the sediment particles must be sinking to the bottom at a much faster rate, so they accumulate below their point of origin before the currents can disperse them. 
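A quick back-of-the-envelope check of the settling and drift figures quoted above (a 10–50 year descent and a 1 cm/s current); the year length is the only additional assumption.

```python
# Check of the drift distance quoted above for a slowly sinking test:
# a horizontal current of 1 cm/s acting over a 10-50 year descent.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
current_speed = 0.01                      # m/s (1 cm/s, as in the text)

for sink_time_years in (10, 50):
    drift_km = current_speed * sink_time_years * SECONDS_PER_YEAR / 1000.0
    print(f"{sink_time_years:2d} yr descent -> drift of about {drift_km:,.0f} km")
# Roughly 3,200 km after 10 years and 15,800 km after 50 years, consistent with
# the ~15,000 km figure in the text; this is why rapid transport in fecal pellets
# matters for matching sediments to the surface waters above them.
```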
Most of the tests do not sink as individual particles; about 99% of them are first consumed by some other organism, and are then aggregated and expelled as large fecal pellets, which sink much more quickly and reach the ocean floor in only 10–15 days. This does not give the particles as much time to disperse, and the sediment below will reflect the production occurring near the surface. The increased rate of sinking through this mechanism has been called the "fecal express". Hydrogenous Seawater contains many different dissolved substances. Occasionally chemical reactions occur that cause these substances to precipitate out as solid particles, which then accumulate as hydrogenous sediment. These reactions are usually triggered by a change in conditions, such as a change in temperature, pressure, or pH, which reduces the amount of a substance that can remain in a dissolved state. There is not a lot of hydrogenous sediment in the ocean compared to lithogenous or biogenous sediments, but there are some interesting forms. At hydrothermal vents, seawater percolates into the seafloor, where it becomes superheated by magma before being expelled by the vent. This superheated water contains many dissolved substances, and when it encounters the cold seawater after leaving the vent, these particles precipitate out, mostly as metal sulfides. These particles make up the "smoke" that flows from a vent, and may eventually settle on the bottom as hydrogenous sediment. Hydrothermal vents are distributed along the Earth's plate boundaries, although they may also be found at intra-plate locations such as hotspot volcanoes. Currently there are about 500 known active submarine hydrothermal vent fields, about half visually observed at the seafloor and the other half suspected from water column indicators and/or seafloor deposits. Manganese nodules are rounded lumps of manganese and other metals that form on the seafloor, generally ranging between 3 and 10 cm in diameter, although they may sometimes reach up to 30 cm. The nodules form in a manner similar to pearls; there is a central object around which concentric layers are slowly deposited, causing the nodule to grow over time. The composition of the nodules can vary somewhat depending on their location and the conditions of their formation, but they are usually dominated by manganese and iron oxides. They may also contain smaller amounts of other metals such as copper, nickel and cobalt. The precipitation of manganese nodules is one of the slowest geological processes known; they grow on the order of a few millimetres per million years. For that reason, they only form in areas where there are low rates of lithogenous or biogenous sediment accumulation, because any other sediment deposition would quickly cover the nodules and prevent further nodule growth. Therefore, manganese nodules are usually limited to areas in the central ocean, far from significant lithogenous or biogenous inputs, where they can sometimes accumulate in large numbers on the seafloor. Because the nodules contain a number of commercially valuable metals, there has been significant interest in mining the nodules over the last several decades, although most of the efforts have thus far remained at the exploratory stage. A number of factors have prevented large-scale extraction of nodules, including the high costs of deep sea mining operations, political issues over mining rights, and environmental concerns surrounding the extraction of these non-renewable resources. 
Evaporites are hydrogenous sediments that form when seawater evaporates, leaving the dissolved materials to precipitate into solids, particularly halite (salt, NaCl). In fact, the evaporation of seawater is the oldest form of salt production for human use, and is still carried out today. Large deposits of halite evaporites exist in a number of places, including under the Mediterranean Sea. Beginning around 6 million years ago, tectonic processes closed off the Mediterranean Sea from the Atlantic, and the warm climate evaporated so much water that the Mediterranean was almost completely dried out, leaving large deposits of salt in its place (an event known as the Messinian Salinity Crisis). Eventually the Mediterranean re-flooded about 5.3 million years ago, and the halite deposits were covered by other sediments, but they still remain beneath the seafloor. Oolites are small, rounded grains formed from concentric layers of precipitation of material around a suspended particle. They are usually composed of calcium carbonate, but they may also form from phosphates and other materials. Accumulation of oolites results in oolitic sand, which is found in its greatest abundance in the Bahamas. Methane hydrates are another type of hydrogenous deposit with a potential industrial application. All terrestrial erosion products include a small proportion of organic matter derived mostly from terrestrial plants. Tiny fragments of this material plus other organic matter from marine plants and animals accumulate in terrigenous sediments, especially within a few hundred kilometres of shore. As the sediments pile up, the deeper parts start to warm up (from geothermal heat), and bacteria get to work breaking down the contained organic matter. Because this is happening in the absence of oxygen (a.k.a. anaerobic conditions), the by-product of this metabolism is the gas methane (CH4). Methane released by the bacteria slowly bubbles upward through the sediment toward the seafloor. At water depths of 500 m to 1,000 m, and at the low temperatures typical of the seafloor (close to 4 °C), water and methane combine to create a substance known as methane hydrate. Within a few metres to hundreds of metres of the seafloor, the temperature is low enough for methane hydrate to be stable and hydrates accumulate within the sediment. Methane hydrate is flammable because when it is heated, the methane is released as a gas. The methane within seafloor sediments represents an enormous reservoir of fossil fuel energy. Although energy corporations and governments are anxious to develop ways to produce and sell this methane, anyone who understands the climate-change implications of its extraction and use can see that this would be folly. Cosmogenous Cosmogenous sediment is derived from extraterrestrial sources, and comes in two primary forms: microscopic spherules and larger meteor debris. Spherules are composed mostly of silica or iron and nickel, and are thought to be ejected as meteors burn up after entering the atmosphere. Meteor debris comes from collisions of meteorites with Earth. These high impact collisions eject particles into the atmosphere that eventually settle back down to Earth and contribute to the sediments. Like spherules, meteor debris is mostly silica or iron and nickel. One form of debris from these collisions is tektites, which are small droplets of glass. 
They are likely composed of terrestrial silica that was ejected and melted during a meteorite impact, which then solidified as it cooled upon returning to the surface. Cosmogenous sediment is fairly rare in the ocean and it does not usually accumulate in large deposits. However, it is constantly being added to through space dust that continuously rains down on Earth. About 90% of incoming cosmogenous debris is vaporized as it enters the atmosphere, but it is estimated that 5 to 300 tons of space dust land on the Earth's surface each day. Composition Siliceous ooze Siliceous ooze is a type of biogenic pelagic sediment located on the deep ocean floor. Siliceous oozes are the least common of the deep sea sediments, and make up approximately 15% of the ocean floor. Oozes are defined as sediments which contain at least 30% skeletal remains of pelagic microorganisms. Siliceous oozes are largely composed of the silica-based skeletons of microscopic marine organisms such as diatoms and radiolarians. Other components of siliceous oozes near continental margins may include terrestrially derived silica particles and sponge spicules. Siliceous oozes are composed of skeletons made from opal silica (SiO2), as opposed to calcareous oozes, which are made from skeletons of calcium carbonate organisms (i.e. coccolithophores). Silica (Si) is a bioessential element and is efficiently recycled in the marine environment through the silica cycle. Distance from land masses, water depth and ocean fertility are all factors that affect the opal silica content in seawater and the presence of siliceous oozes. Calcareous ooze The term calcareous can be applied to a fossil, sediment, or sedimentary rock which is formed from, or contains a high proportion of, calcium carbonate in the form of calcite or aragonite. Calcareous sediments (limestone) are usually deposited in shallow water near land, since the carbonate is precipitated by marine organisms that need land-derived nutrients. Generally speaking, the farther from land sediments fall, the less calcareous they are. Some areas can have interbedded calcareous sediments due to storms or changes in ocean currents. Calcareous ooze is a form of calcium carbonate derived from planktonic organisms that accumulates on the sea floor. This can only occur if the ocean is shallower than the carbonate compensation depth. Below this depth, calcium carbonate begins to dissolve in the ocean, and only non-calcareous sediments are stable, such as siliceous ooze or pelagic red clay. Lithified sediments Distribution Where and how sediments accumulate will depend on the amount of material coming from a source, the distance from the source, the amount of time that sediment has had to accumulate, how well the sediments are preserved, and the amounts of other types of sediments that are also being added to the system. Rates of sediment accumulation are relatively slow throughout most of the ocean, in many cases taking thousands of years for any significant deposits to form. Lithogenous sediment accumulates the fastest, on the order of one metre or more per thousand years for coarser particles. However, sedimentation rates near the mouths of large rivers with high discharge can be orders of magnitude higher. Biogenous oozes accumulate at a rate of about 1 cm per thousand years, while small clay particles are deposited in the deep ocean at around one millimetre per thousand years. 
As described above, manganese nodules have an incredibly slow rate of accumulation, gaining 0.001 millimetres per thousand years. Marine sediments are thickest near the continental margins where they can be over 10 km thick. This is because the crust near passive continental margins is often very old, allowing for a long period of accumulation, and because there is a large amount of terrigenous sediment input coming from the continents. Near mid-ocean ridge systems where new oceanic crust is being formed, sediments are thinner, as they have had less time to accumulate on the younger crust. As distance increases from a ridge spreading center the sediments get progressively thicker, increasing by approximately 100–200 m of sediment for every 1000 km distance from the ridge axis. With a seafloor spreading rate of about 20–40 km/million years, this represents a sediment accumulation rate of approximately 100–200 m every 25–50 million years. The diagram at the start of this article ↑ shows the distribution of the major types of sediment on the ocean floor. Cosmogenous sediments could potentially end up in any part of the ocean, but they accumulate in such small abundances that they are overwhelmed by other sediment types and thus are not dominant in any location. Similarly, hydrogenous sediments can have high concentrations in specific locations, but these regions are very small on a global scale. So cosmogenous and hydrogenous sediments can mostly be ignored in the discussion of global sediment patterns. Coarse lithogenous/terrigenous sediments are dominant near the continental margins as land runoff, river discharge, and other processes deposit vast amounts of these materials on the continental shelf. Much of this sediment remains on or near the shelf, while turbidity currents can transport material down the continental slope to the deep ocean floor (abyssal plain). Lithogenous sediment is also common at the poles where thick ice cover can limit primary production, and glacial breakup deposits sediments along the ice edge. Coarse lithogenous sediments are less common in the central ocean, as these areas are too far from the sources for these sediments to accumulate. Very small clay particles are the exception, and as described below, they can accumulate in areas that other lithogenous sediment will not reach. The distribution of biogenous sediments depends on their rates of production, dissolution, and dilution by other sediments. Coastal areas display very high primary production, so abundant biogenous deposits might be expected in these regions. However, sediment must be >30% biogenous to be considered a biogenous ooze, and even in productive coastal areas there is so much lithogenous input that it swamps the biogenous materials, and that 30% threshold is not reached. So coastal areas remain dominated by lithogenous sediment, and biogenous sediments will be more abundant in pelagic environments where there is little lithogenous input. In order for biogenous sediments to accumulate their rate of production must be greater than the rate at which the tests dissolve. Silica is undersaturated throughout the ocean and will dissolve in seawater, but it dissolves more readily in warmer water and lower pressures; that is, it dissolves faster near the surface than in deep water. Silica sediments will therefore only accumulate in cooler regions of high productivity where they accumulate faster than they dissolve. 
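A short check of the arithmetic behind the figures quoted above, combining the stated thickness gradient (100–200 m of sediment per 1000 km from the ridge) with the stated spreading rate (20–40 km per million years); the pairing of low and high values below is an illustrative assumption.

```python
# Converting the thickness gradient and spreading rate quoted in the text into an
# accumulation rate: gradient (m per 1000 km) x spreading rate (km/Myr) = m/Myr.

for thickness_per_1000km, spreading_km_per_Myr in ((100, 20), (200, 40)):
    rate_m_per_Myr = thickness_per_1000km / 1000.0 * spreading_km_per_Myr
    years_for_thickness = thickness_per_1000km / rate_m_per_Myr
    print(f"{thickness_per_1000km} m per 1000 km at {spreading_km_per_Myr} km/Myr "
          f"-> about {rate_m_per_Myr:.0f} m/Myr, i.e. {thickness_per_1000km} m in ~{years_for_thickness:.0f} Myr")
# Output spans roughly 2-8 m/Myr, i.e. 100-200 m every 25-50 million years,
# matching the range given in the text.
```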
This includes upwelling regions near the equator and at high latitudes where there are abundant nutrients and cooler water. Oozes formed near the equatorial regions are usually dominated by radiolarians, while diatoms are more common in the polar oozes. Once the silica tests have settled on the bottom and are covered by subsequent layers, they are no longer subject to dissolution and the sediment will accumulate. Approximately 15% of the seafloor is covered by siliceous oozes. Biogenous calcium carbonate sediments also require production to exceed dissolution for sediments to accumulate, but the processes involved are a little different than for silica. Calcium carbonate dissolves more readily in more acidic water. Cold seawater contains more dissolved CO2 and is slightly more acidic than warmer water. So calcium carbonate tests are more likely to dissolve in colder, deeper, polar water than in warmer, tropical, surface water. At the poles the water is uniformly cold, so calcium carbonate readily dissolves at all depths, and carbonate sediments do not accumulate. In temperate and tropical regions calcium carbonate dissolves more readily as it sinks into deeper water. The depth at which calcium carbonate dissolves as fast as it accumulates is called the calcium carbonate compensation depth or calcite compensation depth, or simply the CCD. The lysocline represents the depths where the rate of calcium carbonate dissolution increases dramatically (similar to the thermocline and halocline). At depths shallower than the CCD carbonate accumulation will exceed the rate of dissolution, and carbonate sediments will be deposited. In areas deeper than the CCD, the rate of dissolution will exceed production, and no carbonate sediments can accumulate (see diagram at right). The CCD is usually found at depths of 4 – 4.5 km, although it is much shallower at the poles where the surface water is cold. Thus calcareous oozes will mostly be found in tropical or temperate waters less than about 4 km deep, such as along the mid-ocean ridge systems and atop seamounts and plateaus. The CCD is deeper in the Atlantic than in the Pacific since the Pacific contains more CO2, making the water more acidic and calcium carbonate more soluble. This, along with the fact that the Pacific is deeper, means that the Atlantic contains more calcareous sediment than the Pacific. All told, about 48% of the seafloor is dominated by calcareous oozes. Much of the rest of the deep ocean floor (about 38%) is dominated by abyssal clays. This is not so much a result of an abundance of clay formation, but rather the lack of any other types of sediment input. The clay particles are mostly of terrestrial origin, but because they are so small they are easily dispersed by wind and currents, and can reach areas inaccessible to other sediment types. Clays dominate in the central North Pacific, for example. This area is too far from land for coarse lithogenous sediment to reach, it is not productive enough for biogenous tests to accumulate, and it is too deep for calcareous materials to reach the bottom before dissolving. Because clay particles accumulate so slowly, the clay-dominated deep ocean floor is often home to hydrogenous sediments like manganese nodules. If any other type of sediment was produced here it would accumulate much more quickly and would bury the nodules before they had a chance to grow. 
Coastal sediments Shallow water marine environments are found in areas between the shore and deeper water, such as a reef wall or a shelf break. The water in this environment is shallow and clear, allowing the formation of different sedimentary structures, carbonate rocks, coral reefs, and allowing certain organisms to survive and become fossils. The sediment itself is often composed of limestone, which forms readily in shallow, warm, calm waters. The shallow marine environments are not exclusively composed of siliciclastic or carbonaceous sediments. While they cannot always coexist, it is possible to have a shallow marine environment composed solely of carbonaceous sediment or one that is composed completely of siliciclastic sediment. Shallow water marine sediment is made up of larger grain sizes because smaller grains have been washed out to deeper water. Within sedimentary rocks composed of carbonaceous sediment, there may also be evaporite minerals. The most common evaporite minerals found within modern and ancient deposits are gypsum, anhydrite, and halite; they can occur as crystalline layers, isolated crystals or clusters of crystals. In terms of geologic time, it is said that most Phanerozoic sedimentary rock was deposited in shallow marine environments, as about 75% of the sedimentary carapace is made up of shallow marine sediments; it is then assumed that Precambrian sedimentary rocks were also deposited in shallow marine waters, unless specifically identified otherwise. This trend is seen in the North American and Caribbean region. Also, as a result of supercontinent breakup and other shifting tectonic plate processes, the quantity of shallow marine sediment displays large variations over geologic time. Bioturbation Bioturbation is the reworking of sediment by animals or plants. These activities include burrowing, ingestion, and defecation of sediment grains. Bioturbating activities have a profound effect on the environment and are thought to be a primary driver of biodiversity. The formal study of bioturbation began in the 1800s with Charles Darwin's experiments in his garden. The disruption of aquatic sediments and terrestrial soils through bioturbating activities provides significant ecosystem services. These include the alteration of nutrients in aquatic sediment and overlying water, shelter to other species in the form of burrows in terrestrial and water ecosystems, and soil production on land. Bioturbators are ecosystem engineers because they alter resource availability to other species through the physical changes they make to their environments. This type of ecosystem change affects the evolution of cohabitating species and the environment, which is evident in trace fossils left in marine and terrestrial sediments. Other bioturbation effects include altering the texture of sediments (diagenesis), bioirrigation, and displacement of microorganisms and non-living particles. Bioturbation is sometimes confused with the process of bioirrigation; however, these processes differ in what they are mixing: bioirrigation refers to the mixing of water and solutes in sediments and is an effect of bioturbation. Walruses and salmon are examples of large bioturbators. Although the activities of these large macrofaunal bioturbators are more conspicuous, the dominant bioturbators are small invertebrates, such as polychaetes, ghost shrimp and mud shrimp. 
The activities of these small invertebrates, which include burrowing and the ingestion and defecation of sediment grains, contribute to mixing and the alteration of sediment structure. Bioirrigation Bioirrigation is the process of benthic organisms flushing their burrows with overlying water. The resulting exchange of dissolved substances between the porewater and overlying seawater is an important process in the context of the biogeochemistry of the oceans. Coastal aquatic environments often have organisms that destabilize sediment. They change the physical state of the sediment, thus improving the conditions for other organisms and themselves. These organisms often also cause bioturbation, a term which is commonly used interchangeably with, or in reference to, bioirrigation. Bioirrigation works through two different processes, known as particle reworking and ventilation, which are the work of benthic macro-invertebrates (usually ones that burrow). This particle reworking and ventilation is caused by the organisms when they feed (faunal feeding), defecate, burrow, and respire. Bioirrigation is responsible for a large amount of oxidative transport and has a large impact on biogeochemical cycles. Pelagic sediments Pelagic sediments, or pelagite, are fine-grained sediments that accumulate as the result of the settling of particles to the floor of the open ocean, far from land. These particles consist primarily of either the microscopic, calcareous or siliceous shells of phytoplankton or zooplankton; clay-size siliciclastic sediment; or some mixture of these. Trace amounts of meteoric dust and variable amounts of volcanic ash also occur within pelagic sediments. Based upon the composition of the ooze, there are three main types of pelagic sediments: siliceous oozes, calcareous oozes, and red clays. An extensive body of work on deep-water processes and sediments has been built over the past 150 years since the voyage of HMS Challenger (1872–1876), during which the first systematic study of seafloor sediments was made. For many decades since that pioneering expedition, and through the first half of the twentieth century, the deep sea was considered entirely pelagic in nature. The composition of pelagic sediments is controlled by three main factors. The first factor is the distance from major landmasses, which affects their dilution by terrigenous, or land-derived, sediment. The second factor is water depth, which affects the preservation of both siliceous and calcareous biogenic particles as they settle to the ocean bottom. The final factor is ocean fertility, which controls the amount of biogenic particles produced in surface waters. Turbidites Turbidites are the geologic deposits of a turbidity current, which is a type of amalgamation of fluidal and sediment gravity flow responsible for distributing vast amounts of clastic sediment into the deep ocean. Turbidites are deposited in the deep ocean troughs below the continental shelf, or similar structures in deep lakes, by underwater avalanches which slide down the steep slopes of the continental shelf edge. When the material comes to rest in the ocean trough, it is the sand and other coarse material which settles first, followed by mud and eventually the very fine particulate matter. This sequence of deposition creates the Bouma sequences that characterize these rocks. Turbidites were first recognised in the 1950s and the first facies model was developed by Bouma in 1962. 
Since that time, turbidites have been one of the better known and most intensively studied deep-water sediment facies. They are now very well known from sediment cores recovered from modern deep-water systems, from subsurface (hydrocarbon) boreholes, and from ancient outcrops now exposed on land. Each new study of a particular turbidite system reveals specific deposit characteristics and facies for that system. The most commonly observed facies have been variously synthesised into a range of facies schemes. Contourites A contourite is a sedimentary deposit commonly formed in continental rise to lower slope settings, although contourites may occur anywhere below storm wave base. Contourites are produced by thermohaline-induced deepwater bottom currents and may be influenced by wind or tidal forces. The geomorphology of contourite deposits is mainly influenced by the deepwater bottom-current velocity, sediment supply, and seafloor topography. Contourites were first identified in the early 1960s by Bruce Heezen and co-workers at the Woods Hole Oceanographic Institution. Their now seminal paper demonstrated the very significant effects of contour-following bottom currents in shaping sedimentation on the deep continental rise off eastern North America. The deposits of these semi-permanent alongslope currents soon became known as contourites, and the demarcation of slope-parallel, elongate and mounded sediment bodies made up largely of contourites became known as contourite drifts. Hemipelagic Hemipelagic sediments, or hemipelagite, are a type of marine sediment consisting of terrigenous clay- and silt-sized grains, together with some biogenic material, derived from the landmass nearest the deposits or from organisms living in the water. Hemipelagic sediments are deposited on continental shelves and continental rises, and differ from pelagic sediment compositionally. Pelagic sediment is composed primarily of biogenic material from organisms living in the water column or on the seafloor and contains little to no terrigenous material. Terrigenous material includes minerals from the lithosphere like feldspar or quartz. Volcanism on land and wind-blown sediments, as well as particulates discharged from rivers, can contribute to hemipelagic deposits. These deposits can be used to characterize climatic changes and to identify changes in sediment provenance. Ecology Benthos is the community of organisms that live on, in, or near the seafloor, also known as the benthic zone. Hyperbenthos (or hyperbenthic organisms), prefix hyper-, live just above the sediment. Epibenthos (or epibenthic organisms), prefix epi-, live on top of the sediments. Endobenthos (or endobenthic organisms), prefix endo-, live buried, or burrowing, in the sediment, often in the oxygenated top layer. Microbenthos Marine microbenthos are microorganisms that live in the benthic zone of the ocean – that is, near or on the seafloor, or within or on surface seafloor sediments. The word benthos comes from Greek, meaning "depth of the sea". Microbenthos are found everywhere on or about the seafloor of continental shelves, as well as in deeper waters, with greater diversity in or on seafloor sediments. In shallow waters, seagrass meadows, coral reefs and kelp forests provide particularly rich habitats. In photic zones, benthic diatoms dominate as photosynthetic organisms. In intertidal zones, changing tides strongly control opportunities for microbenthos. Diatoms form a (disputed) phylum containing about 100,000 recognised species of mainly unicellular algae. 
Diatoms generate about 20 per cent of the oxygen produced on the planet each year, take in over 6.7 billion metric tons of silicon each year from the waters in which they live, and contribute nearly half of the organic material found in the oceans. Coccolithophores are minute unicellular photosynthetic protists with two flagella for locomotion. Most of them are protected by a shell covered with ornate circular plates or scales called coccoliths. The coccoliths are made from calcium carbonate. The term coccolithophore derives from the Greek for a seed-carrying stone, referring to their small size and the coccolith stones they carry. Under the right conditions they bloom, like other phytoplankton, and can turn the ocean milky white. Radiolarians are unicellular predatory protists encased in elaborate globular shells usually made of silica and pierced with holes. Their name comes from the Latin for "radius". They catch prey by extending parts of their body through the holes. As with the silica frustules of diatoms, radiolarian shells can sink to the ocean floor when radiolarians die and become preserved as part of the ocean sediment. These remains, as microfossils, provide valuable information about past oceanic conditions. Like radiolarians, foraminiferans (forams for short) are single-celled predatory protists, also protected with shells that have holes in them. Their name comes from the Latin for "hole bearers". Their shells, often called tests, are chambered (forams add more chambers as they grow). The shells are usually made of calcite, but are sometimes made of agglutinated sediment particles or chitin, and (rarely) of silica. Most forams are benthic, but about 40 species are planktic. They are widely researched, with well-established fossil records that allow scientists to infer a great deal about past environments and climates. Both foraminifera and diatoms have planktonic and benthic forms; that is, they can drift in the water column or live on sediment at the bottom of the ocean. Either way, their shells end up on the seafloor after they die. These shells are widely used as climate proxies. The chemical composition of the shells is a consequence of the chemical composition of the ocean at the time the shells were formed. Past water temperatures can also be inferred from the ratios of stable oxygen isotopes in the shells, since lighter isotopes evaporate more readily in warmer water, leaving the heavier isotopes in the shells. Information about past climates can be inferred further from the abundance of forams and diatoms, since they tend to be more abundant in warm water. The sudden extinction event which killed the dinosaurs 66 million years ago also rendered extinct three-quarters of all other animal and plant species. However, deep-sea benthic forams flourished in the aftermath. In 2020 it was reported that researchers had examined the chemical composition of thousands of samples of these benthic forams and used their findings to build the most detailed climate record of Earth yet. Some endoliths have extremely long lives. In 2013 researchers reported evidence of endoliths in the ocean floor, perhaps millions of years old, with a generation time of 10,000 years. These are slowly metabolizing and not in a dormant state. Some Actinomycetota found in Siberia are estimated to be half a million years old. Sediment cores The diagram on the right shows an example of a sediment core. The sample was retrieved from the Upernavik Fjord circa 2018. 
Grain-size measurements were made, and the top 50 cm was dated with the 210Pb method. Carbon processing Thinking about ocean carbon and carbon sequestration has shifted in recent years from a structurally-based chemical reactivity viewpoint toward a view that includes the role of the ecosystem in organic carbon degradation rates. This shift in view towards organic carbon and ecosystem involvement includes aspects of the "molecular revolution" in biology, discoveries on the limits of life, advances in quantitative modelling, paleo studies of ocean carbon cycling, novel analytical techniques, and interdisciplinary efforts. In 2020, LaRowe et al. outlined a broad view of this issue that spans multiple scientific disciplines related to marine sediments and global carbon cycling. Evolutionary history To begin with, the Earth was molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust and water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a planetoid with the Earth. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced the oceans. By the start of the Archean, about four billion years ago, rocks were often heavily metamorphosed deep-water sediments, such as graywackes, mudstones, volcanic sediments and banded iron formations. Greenstone belts are typical Archean formations, consisting of alternating high- and low-grade metamorphic rocks. High-grade rocks were derived from volcanic island arcs, while low-grade metamorphic rocks represented deep-sea sediments eroded from the neighboring island rocks and deposited in a forearc basin. The earliest-known supercontinent, Rodinia, assembled about one billion years ago and began to break apart after about 250 million years, during the latter part of the Proterozoic. The Paleozoic started shortly after the breakup of Pannotia and at the end of a global ice age. Throughout the early Paleozoic, the Earth's landmass was broken up into a substantial number of relatively small continents. Toward the end of the era, the continents gathered together into a supercontinent called Pangaea, which included most of the Earth's land area. During the Silurian, which started 444 Ma, Gondwana continued a slow southward drift to high southern latitudes. The melting of ice caps and glaciers contributed to a rise in sea levels, recognizable from the fact that Silurian sediments overlie eroded Ordovician sediments, forming an unconformity. Other cratons and continent fragments drifted together near the equator, starting the formation of a second supercontinent known as Euramerica. During the Triassic, deep-ocean sediments were laid down and subsequently disappeared through the subduction of oceanic plates, so very little is known of the Triassic open ocean. The supercontinent Pangaea rifted during the Triassic – especially late in the period – but had not yet separated. The first non-marine sediments in the rift that marks the initial break-up of Pangaea are of Late Triassic age. Because of the limited shoreline of one super-continental mass, Triassic marine deposits are globally relatively rare, despite their prominence in Western Europe, where the Triassic was first studied. In North America, for example, marine deposits are limited to a few exposures in the west. 
Thus Triassic stratigraphy is mostly based on organisms living in lagoons and hypersaline environments, such as Estheria crustaceans and terrestrial vertebrates. Patterns or traces of bioturbation are preserved in lithified rock. The study of such patterns is called ichnology, or the study of "trace fossils", which, in the case of bioturbators, are fossils left behind by digging or burrowing animals. This can be compared to the footprint left behind by these animals. In some cases bioturbation is so pervasive that it completely obliterates sedimentary structures, such as laminated layers or cross-bedding. Thus, it affects the disciplines of sedimentology and stratigraphy within geology. The study of bioturbator ichnofabrics uses the depth of the fossils, the cross-cutting of fossils, and the sharpness (or how well defined) of the fossil to assess the activity that occurred in old sediments. Typically, the deeper the fossil, the better preserved and more sharply defined the specimen. Important trace fossils from bioturbation have been found in tidal, coastal and deep-sea marine sediments. In addition, sand dune, or Eolian, sediments are important for preserving a wide variety of fossils. Evidence of bioturbation has been found in deep-sea sediment cores, including in long records, although the act of extracting the core can disturb the signs of bioturbation, especially at shallower depths. Arthropods, in particular, are important to the geologic record of bioturbation of Eolian sediments. Dune records show traces of burrowing animals as far back as the lower Mesozoic, 250 Ma, although bioturbation in other sediments has been seen as far back as 550 Ma. Research history The first major study of deep-ocean sediments occurred between 1872 and 1876 with the HMS Challenger expedition, which travelled nearly 70,000 nautical miles sampling seawater and marine sediments. The scientific goals of the expedition were to take physical measurements of the seawater at various depths, as well as to take samples so that the chemical composition could be determined, along with any particulate matter or marine organisms that were present. This included taking samples and analysing sediments from the deep ocean floor. Before the Challenger voyage, oceanography had been mainly speculative. As the first true oceanographic cruise, the Challenger expedition laid the groundwork for an entire academic and research discipline. Earlier theories of continental drift proposed that continents in motion "plowed" through the fixed and immovable seafloor. Later, in the 1960s, the idea that the seafloor itself moves and carries the continents with it as it spreads from a central rift axis was proposed by Harold Hess and Robert Dietz. The phenomenon is known today as plate tectonics. In locations where two plates move apart, at mid-ocean ridges, new seafloor is continually formed during seafloor spreading. In 1968, the oceanographic research vessel Glomar Challenger was launched and embarked on a 15-year-long program, the Deep Sea Drilling Project. This program provided crucial data that supported the seafloor spreading hypothesis by collecting rock samples that confirmed that the farther from the mid-ocean ridge, the older the rock was. 
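The 210Pb dating used for the sediment core described above can be sketched numerically. The example below (Python, standard library only) uses the simple constant-initial-concentration assumption, under which the age of a layer follows directly from the radioactive decay law and the 22.3-year half-life of 210Pb; the activity values are invented for illustration and do not come from the Upernavik core.

    import math

    PB210_HALF_LIFE_YR = 22.3                          # half-life of lead-210 in years
    DECAY_CONSTANT = math.log(2) / PB210_HALF_LIFE_YR  # decay constant in 1/year

    def cic_age(surface_activity: float, layer_activity: float) -> float:
        """Age of a layer from its excess 210Pb activity (constant-initial-concentration model).

        Assumes every layer started with the same excess activity as today's surface
        layer, so activity decays as A(t) = A0 * exp(-lambda * t).  Any consistent
        activity unit (e.g. Bq/kg) works, since only the ratio matters."""
        return math.log(surface_activity / layer_activity) / DECAY_CONSTANT

    # A layer with one quarter of the surface activity is two half-lives old.
    print(round(cic_age(200.0, 50.0), 1), "years")   # about 44.6

In practice the excess (unsupported) 210Pb is obtained by subtracting the 226Ra-supported background, and the constant-rate-of-supply model is often preferred; the sketch only shows the underlying decay arithmetic.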
See also Bioturbation Depositional environment Cosmic dust Interplanetary dust Deep biosphere Great Calcite Belt Marine clay Microbially induced sedimentary structure Oolitic aragonite sand Organic-rich sedimentary rocks Redox gradient Seafloor depth versus age Sediment-water interface Sedimentary rock Sediment transport Coastal sediment transport Coastal sediment supply References Sources Marine geology Oceanography Sediments Sedimentary rocks
Marine sediment
[ "Physics", "Environmental_science" ]
11,407
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
320,468
https://en.wikipedia.org/wiki/Affine%20variety
In algebraic geometry, an affine algebraic set is the set of the common zeros over an algebraically closed field of some family of polynomials in the polynomial ring. An affine variety, or affine algebraic variety, is an affine algebraic set such that the ideal generated by the defining polynomials is prime. Some texts use the term variety for any algebraic set, and irreducible variety for an algebraic set whose defining ideal is prime (affine variety in the above sense). In some contexts (see, for example, Hilbert's Nullstellensatz), it is useful to distinguish the field in which the coefficients are considered, from the algebraically closed field (containing ) over which the common zeros are considered (that is, the points of the affine algebraic set are in ). In this case, the variety is said to be defined over , and the points of the variety that belong to are said to be -rational or rational over . In the common case where is the field of real numbers, a -rational point is called a real point. When the field is not specified, a rational point is a point that is rational over the rational numbers. For example, Fermat's Last Theorem asserts that the affine algebraic variety (it is a curve) defined by has no rational points for any integer greater than two. Introduction An affine algebraic set is the set of solutions in an algebraically closed field of a system of polynomial equations with coefficients in . More precisely, if are polynomials with coefficients in , they define an affine algebraic set. An affine (algebraic) variety is an affine algebraic set that is not the union of two proper affine algebraic subsets. Such an affine algebraic set is often said to be irreducible. If is an affine algebraic set, and is the ideal of all polynomials that are zero on , then the quotient ring is called the coordinate ring of X. If X is an affine variety, then I is prime, so the coordinate ring is an integral domain. The elements of the coordinate ring R are also called the regular functions or the polynomial functions on the variety. They form the ring of regular functions on the variety, or, simply, the ring of the variety; in other words (see #Structure sheaf), it is the space of global sections of the structure sheaf of X. The dimension of a variety is an integer associated to every variety, and even to every algebraic set, whose importance relies on the large number of its equivalent definitions (see Dimension of an algebraic variety). Examples The complement of a hypersurface in an affine variety (that is for some polynomial ) is affine. Its defining equations are obtained by saturating by the defining ideal of . The coordinate ring is thus the localization . In particular, (the affine line with the origin removed) is affine. On the other hand, (the affine plane with the origin removed) is not an affine variety; cf. Hartogs' extension theorem. The subvarieties of codimension one in the affine space are exactly the hypersurfaces, that is, the varieties defined by a single polynomial. The normalization of an irreducible affine variety is affine; the coordinate ring of the normalization is the integral closure of the coordinate ring of the variety. (Similarly, the normalization of a projective variety is a projective variety.) Rational points For an affine variety over an algebraically closed field , and a subfield of , a -rational point of is a point . That is, a point of whose coordinates are elements of . 
The collection of -rational points of an affine variety is often denoted Often, if the base field is the complex numbers , points that are -rational (where is the real numbers) are called real points of the variety, and -rational points ( the rational numbers) are often simply called rational points. For instance, is a -rational and an -rational point of the variety as it is in and all its coordinates are integers. The point is a real point of that is not -rational, and is a point of that is not -rational. This variety is called a circle, because the set of its -rational points is the unit circle. It has infinitely many -rational points that are the points where is a rational number. The circle is an example of an algebraic curve of degree two that has no -rational point. This can be deduced from the fact that, modulo , the sum of two squares cannot be . It can be proved that an algebraic curve of degree two with a -rational point has infinitely many other -rational points; each such point is the second intersection point of the curve and a line with a rational slope passing through the rational point. The complex variety has no -rational points, but has many complex points. If is an affine variety in defined over the complex numbers , the -rational points of can be drawn on a piece of paper or by graphing software. The figure on the right shows the -rational points of Singular points and tangent space Let be an affine variety defined by the polynomials and be a point of . The Jacobian matrix of at is the matrix of the partial derivatives The point is regular if the rank of equals the codimension of , and singular otherwise. If is regular, the tangent space to at is the affine subspace of defined by the linear equations If the point is singular, the affine subspace defined by these equations is also called a tangent space by some authors, while other authors say that there is no tangent space at a singular point. A more intrinsic definition, which does not use coordinates is given by Zariski tangent space. The Zariski topology The affine algebraic sets of kn form the closed sets of a topology on kn, called the Zariski topology. This follows from the fact that and (in fact, a countable intersection of affine algebraic sets is an affine algebraic set). The Zariski topology can also be described by way of basic open sets, where Zariski-open sets are countable unions of sets of the form for These basic open sets are the complements in kn of the closed sets zero loci of a single polynomial. If k is Noetherian (for instance, if k is a field or a principal ideal domain), then every ideal of k is finitely-generated, so every open set is a finite union of basic open sets. If V is an affine subvariety of kn the Zariski topology on V is simply the subspace topology inherited from the Zariski topology on kn. Geometry–algebra correspondence The geometric structure of an affine variety is linked in a deep way to the algebraic structure of its coordinate ring. Let I and J be ideals of k[V], the coordinate ring of an affine variety V. Let I(V) be the set of all polynomials in that vanish on V, and let denote the radical of the ideal I, the set of polynomials f for which some power of f is in I. The reason that the base field is required to be algebraically closed is that affine varieties automatically satisfy Hilbert's nullstellensatz: for an ideal J in where k is an algebraically closed field, Radical ideals (ideals that are their own radical) of k[V] correspond to algebraic subsets of V. 
Indeed, for radical ideals I and J, if and only if Hence V(I)=V(J) if and only if I=J. Furthermore, the function taking an affine algebraic set W and returning I(W), the set of all functions that also vanish on all points of W, is the inverse of the function assigning an algebraic set to a radical ideal, by the nullstellensatz. Hence the correspondence between affine algebraic sets and radical ideals is a bijection. The coordinate ring of an affine algebraic set is reduced (nilpotent-free), as an ideal I in a ring R is radical if and only if the quotient ring R/I is reduced. Prime ideals of the coordinate ring correspond to affine subvarieties. An affine algebraic set V(I) can be written as the union of two other algebraic sets if and only if I=JK for proper ideals J and K not equal to I (in which case ). This is the case if and only if I is not prime. Affine subvarieties are precisely those whose coordinate ring is an integral domain. This is because an ideal is prime if and only if the quotient of the ring by the ideal is an integral domain. Maximal ideals of k[V] correspond to points of V. If I and J are radical ideals, then if and only if As maximal ideals are radical, maximal ideals correspond to minimal algebraic sets (those that contain no proper algebraic subsets), which are points in V. If V is an affine variety with coordinate ring this correspondence becomes explicit through the map where denotes the image in the quotient algebra R of the polynomial An algebraic subset is a point if and only if the coordinate ring of the subset is a field, as the quotient of a ring by a maximal ideal is a field. The following table summarises this correspondence, for algebraic subsets of an affine variety and ideals of the corresponding coordinate ring: Products of affine varieties A product of affine varieties can be defined using the isomorphism then embedding the product in this new affine space. Let and have coordinate rings and respectively, so that their product has coordinate ring . Let be an algebraic subset of and an algebraic subset of Then each is a polynomial in , and each is in . The product of and is defined as the algebraic set in The product is irreducible if each , is irreducible. The Zariski topology on is not the topological product of the Zariski topologies on the two spaces. Indeed, the product topology is generated by products of the basic open sets and Hence, polynomials that are in but cannot be obtained as a product of a polynomial in with a polynomial in will define algebraic sets that are in the Zariski topology on but not in the product topology. Morphisms of affine varieties A morphism, or regular map, of affine varieties is a function between affine varieties that is polynomial in each coordinate: more precisely, for affine varieties and , a morphism from to is a map of the form where for each These are the morphisms in the category of affine varieties. There is a one-to-one correspondence between morphisms of affine varieties over an algebraically closed field and homomorphisms of coordinate rings of affine varieties over going in the opposite direction. 
Because of this, along with the fact that there is a one-to-one correspondence between affine varieties over and their coordinate rings, the category of affine varieties over is dual to the category of coordinate rings of affine varieties over The category of coordinate rings of affine varieties over is precisely the category of finitely-generated, nilpotent-free algebras over More precisely, for each morphism of affine varieties, there is a homomorphism between the coordinate rings (going in the opposite direction), and for each such homomorphism, there is a morphism of the varieties associated to the coordinate rings. This can be shown explicitly: let and be affine varieties with coordinate rings and respectively. Let be a morphism. Indeed, a homomorphism between polynomial rings factors uniquely through the ring and a homomorphism is determined uniquely by the images of Hence, each homomorphism corresponds uniquely to a choice of image for each . Then given any morphism from to a homomorphism can be constructed that sends to where is the equivalence class of in Similarly, for each homomorphism of the coordinate rings, a morphism of the affine varieties can be constructed in the opposite direction. Mirroring the paragraph above, a homomorphism sends to a polynomial in . This corresponds to the morphism of varieties defined by Structure sheaf Equipped with the structure sheaf described below, an affine variety is a locally ringed space. Given an affine variety X with coordinate ring A, the sheaf of k-algebras is defined by letting be the ring of regular functions on U. Let D(f) = { x | f(x) ≠ 0 } for each f in A. They form a base for the topology of X and so is determined by its values on the open sets D(f). (See also: sheaf of modules#Sheaf associated to a module.) The key fact, which relies on Hilbert nullstellensatz in the essential way, is the following: Proof: The inclusion ⊃ is clear. For the opposite, let g be in the left-hand side and , which is an ideal. If x is in D(f), then, since g is regular near x, there is some open affine neighborhood D(h) of x such that ; that is, hm g is in A and thus x is not in V(J). In other words, and thus the Hilbert nullstellensatz implies f is in the radical of J; i.e., . The claim, first of all, implies that X is a "locally ringed" space since where . Secondly, the claim implies that is a sheaf; indeed, it says if a function is regular (pointwise) on D(f), then it must be in the coordinate ring of D(f); that is, "regular-ness" can be patched together. Hence, is a locally ringed space. Serre's theorem on affineness A theorem of Serre gives a cohomological characterization of an affine variety; it says an algebraic variety is affine if and only if for any and any quasi-coherent sheaf F on X. (cf. Cartan's theorem B.) This makes the cohomological study of an affine variety non-existent, in a sharp contrast to the projective case in which cohomology groups of line bundles are of central interest. Affine algebraic groups An affine variety over an algebraically closed field is called an affine algebraic group if it has: A multiplication , which is a regular morphism that follows the associativity axiom—that is, such that for all points , and in An identity element such that for every in An inverse morphism, a regular bijection such that for every in Together, these define a group structure on the variety. 
The above morphisms are often written using ordinary group notation: can be written as , or ; the inverse can be written as or Using the multiplicative notation, the associativity, identity and inverse laws can be rewritten as: , and . The most prominent example of an affine algebraic group is the general linear group of degree This is the group of linear transformations of the vector space if a basis of is fixed, this is equivalent to the group of invertible matrices with entries in It can be shown that any affine algebraic group is isomorphic to a subgroup of . For this reason, affine algebraic groups are often called linear algebraic groups. Affine algebraic groups play an important role in the classification of finite simple groups, as the groups of Lie type are all sets of -rational points of an affine algebraic group, where is a finite field. Generalizations If an author requires the base field of an affine variety to be algebraically closed (as this article does), then irreducible affine algebraic sets over non-algebraically closed fields are a generalization of affine varieties. This generalization notably includes affine varieties over the real numbers. An affine variety plays a role of a local chart for algebraic varieties; that is to say, general algebraic varieties such as projective varieties are obtained by gluing affine varieties. Linear structures that are attached to varieties are also (trivially) affine varieties; e.g., tangent spaces, fibers of algebraic vector bundles. An affine variety is a special case of an affine scheme, a locally-ringed space that is isomorphic to the spectrum of a commutative ring (up to an equivalence of categories). Each affine variety has an affine scheme associated to it: if is an affine variety in with coordinate ring then the scheme corresponding to is the set of prime ideals of The affine scheme has "classical points", which correspond with points of the variety (and hence maximal ideals of the coordinate ring of the variety), and also a point for each closed subvariety of the variety (these points correspond to prime, non-maximal ideals of the coordinate ring). This creates a more well-defined notion of the "generic point" of an affine variety, by assigning to each closed subvariety an open point that is dense in the subvariety. More generally, an affine scheme is an affine variety if it is reduced, irreducible, and of finite type over an algebraically closed field Notes See also Algebraic variety Affine scheme Representations on coordinate rings References The original article was written as a partial human translation of the corresponding French article. Milne, James S. Lectures on Étale cohomology Algebraic geometry
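To make the earlier discussion of rational points concrete, the sketch below (Python, standard library only) produces rational points on the circle x^2 + y^2 = 1 by intersecting it with lines of rational slope through the rational point (-1, 0), exactly as described above; the resulting parametrization is the standard one, and the sample slopes are arbitrary choices.

    from fractions import Fraction

    def second_intersection(t: Fraction):
        """Second intersection of the unit circle with the line of slope t through (-1, 0)."""
        x = (1 - t**2) / (1 + t**2)
        y = 2 * t / (1 + t**2)
        assert x**2 + y**2 == 1          # the point really lies on the circle
        return x, y

    # Each rational slope yields a distinct rational point on the circle.
    for t in [Fraction(1, 2), Fraction(1, 3), Fraction(2, 5)]:
        print(t, second_intersection(t))
    # t = 1/2 gives (3/5, 4/5), the familiar 3-4-5 Pythagorean triple.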
Affine variety
[ "Mathematics" ]
3,581
[ "Fields of abstract algebra", "Algebraic geometry" ]
320,469
https://en.wikipedia.org/wiki/Projective%20variety
In algebraic geometry, a projective variety is an algebraic variety that is a closed subvariety of a projective space. That is, it is the zero-locus in of some finite family of homogeneous polynomials that generate a prime ideal, the defining ideal of the variety. A projective variety is a projective curve if its dimension is one; it is a projective surface if its dimension is two; it is a projective hypersurface if its dimension is one less than the dimension of the containing projective space; in this case it is the set of zeros of a single homogeneous polynomial. If X is a projective variety defined by a homogeneous prime ideal I, then the quotient ring is called the homogeneous coordinate ring of X. Basic invariants of X such as the degree and the dimension can be read off the Hilbert polynomial of this graded ring. Projective varieties arise in many ways. They are complete, which roughly can be expressed by saying that there are no points "missing". The converse is not true in general, but Chow's lemma describes the close relation of these two notions. Showing that a variety is projective is done by studying line bundles or divisors on X. A salient feature of projective varieties are the finiteness constraints on sheaf cohomology. For smooth projective varieties, Serre duality can be viewed as an analog of Poincaré duality. It also leads to the Riemann–Roch theorem for projective curves, i.e., projective varieties of dimension 1. The theory of projective curves is particularly rich, including a classification by the genus of the curve. The classification program for higher-dimensional projective varieties naturally leads to the construction of moduli of projective varieties. Hilbert schemes parametrize closed subschemes of with prescribed Hilbert polynomial. Hilbert schemes, of which Grassmannians are special cases, are also projective schemes in their own right. Geometric invariant theory offers another approach. The classical approaches include the Teichmüller space and Chow varieties. A particularly rich theory, reaching back to the classics, is available for complex projective varieties, i.e., when the polynomials defining X have complex coefficients. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. For example, the theory of holomorphic vector bundles (more generally coherent analytic sheaves) on X coincide with that of algebraic vector bundles. Chow's theorem says that a subset of projective space is the zero-locus of a family of holomorphic functions if and only if it is the zero-locus of homogeneous polynomials. The combination of analytic and algebraic methods for complex projective varieties lead to areas such as Hodge theory. Variety and scheme structure Variety structure Let k be an algebraically closed field. The basis of the definition of projective varieties is projective space , which can be defined in different, but equivalent ways: as the set of all lines through the origin in (i.e., all one-dimensional vector subspaces of ) as the set of tuples , with not all zero, modulo the equivalence relation for any . The equivalence class of such a tuple is denoted by This equivalence class is the general point of projective space. The numbers are referred to as the homogeneous coordinates of the point. A projective variety is, by definition, a closed subvariety of , where closed refers to the Zariski topology. 
In general, closed subsets of the Zariski topology are defined to be the common zero-locus of a finite collection of homogeneous polynomial functions. Given a polynomial , the condition does not make sense for arbitrary polynomials, but only if f is homogeneous, i.e., the degrees of all the monomials (whose sum is f) are the same. In this case, the vanishing of is independent of the choice of . Therefore, projective varieties arise from homogeneous prime ideals I of , and setting Moreover, the projective variety X is an algebraic variety, meaning that it is covered by open affine subvarieties and satisfies the separation axiom. Thus, the local study of X (e.g., singularity) reduces to that of an affine variety. The explicit structure is as follows. The projective space is covered by the standard open affine charts which themselves are affine n-spaces with the coordinate ring Say i = 0 for the notational simplicity and drop the superscript (0). Then is a closed subvariety of defined by the ideal of generated by for all f in I. Thus, X is an algebraic variety covered by (n+1) open affine charts . Note that X is the closure of the affine variety in . Conversely, starting from some closed (affine) variety , the closure of V in is the projective variety called the of V. If defines V, then the defining ideal of this closure is the homogeneous ideal of generated by for all f in I. For example, if V is an affine curve given by, say, in the affine plane, then its projective completion in the projective plane is given by Projective schemes For various applications, it is necessary to consider more general algebro-geometric objects than projective varieties, namely projective schemes. The first step towards projective schemes is to endow projective space with a scheme structure, in a way refining the above description of projective space as an algebraic variety, i.e., is a scheme which it is a union of (n + 1) copies of the affine n-space kn. More generally, projective space over a ring A is the union of the affine schemes in such a way the variables match up as expected. The set of closed points of , for algebraically closed fields k, is then the projective space in the usual sense. An equivalent but streamlined construction is given by the Proj construction, which is an analog of the spectrum of a ring, denoted "Spec", which defines an affine scheme. For example, if A is a ring, then If R is a quotient of by a homogeneous ideal I, then the canonical surjection induces the closed immersion Compared to projective varieties, the condition that the ideal I be a prime ideal was dropped. This leads to a much more flexible notion: on the one hand the topological space may have multiple irreducible components. Moreover, there may be nilpotent functions on X. Closed subschemes of correspond bijectively to the homogeneous ideals I of that are saturated; i.e., This fact may be considered as a refined version of projective Nullstellensatz. We can give a coordinate-free analog of the above. Namely, given a finite-dimensional vector space V over k, we let where is the symmetric algebra of . It is the projectivization of V; i.e., it parametrizes lines in V. There is a canonical surjective map , which is defined using the chart described above. One important use of the construction is this (cf., ). A divisor D on a projective variety X corresponds to a line bundle L. One then set ; it is called the complete linear system of D. 
Projective space over any scheme S can be defined as a fiber product of schemes If is the twisting sheaf of Serre on , we let denote the pullback of to ; that is, for the canonical map A scheme X → S is called projective over S if it factors as a closed immersion followed by the projection to S. A line bundle (or invertible sheaf) on a scheme X over S is said to be very ample relative to S if there is an immersion (i.e., an open immersion followed by a closed immersion) for some n so that pullbacks to . Then a S-scheme X is projective if and only if it is proper and there exists a very ample sheaf on X relative to S. Indeed, if X is proper, then an immersion corresponding to the very ample line bundle is necessarily closed. Conversely, if X is projective, then the pullback of under the closed immersion of X into a projective space is very ample. That "projective" implies "proper" is deeper: the main theorem of elimination theory. Relation to complete varieties By definition, a variety is complete, if it is proper over k. The valuative criterion of properness expresses the intuition that in a proper variety, there are no points "missing". There is a close relation between complete and projective varieties: on the one hand, projective space and therefore any projective variety is complete. The converse is not true in general. However: A smooth curve C is projective if and only if it is complete. This is proved by identifying C with the set of discrete valuation rings of the function field k(C) over k. This set has a natural Zariski topology called the Zariski–Riemann space. Chow's lemma states that for any complete variety X, there is a projective variety Z and a birational morphism Z → X. (Moreover, through normalization, one can assume this projective variety is normal.) Some properties of a projective variety follow from completeness. For example, for any projective variety X over k. This fact is an algebraic analogue of Liouville's theorem (any holomorphic function on a connected compact complex manifold is constant). In fact, the similarity between complex analytic geometry and algebraic geometry on complex projective varieties goes much further than this, as is explained below. Quasi-projective varieties are, by definition, those which are open subvarieties of projective varieties. This class of varieties includes affine varieties. Affine varieties are almost never complete (or projective). In fact, a projective subvariety of an affine variety must have dimension zero. This is because only the constants are globally regular functions on a projective variety. Examples and basic invariants By definition, any homogeneous ideal in a polynomial ring yields a projective scheme (required to be prime ideal to give a variety). In this sense, examples of projective varieties abound. The following list mentions various classes of projective varieties which are noteworthy since they have been studied particularly intensely. The important class of complex projective varieties, i.e., the case , is discussed further below. The product of two projective spaces is projective. In fact, there is the explicit immersion (called Segre embedding) As a consequence, the product of projective varieties over k is again projective. The Plücker embedding exhibits a Grassmannian as a projective variety. Flag varieties such as the quotient of the general linear group modulo the subgroup of upper triangular matrices, are also projective, which is an important fact in the theory of algebraic groups. 
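The Segre embedding mentioned above can be written out explicitly in its simplest case: the product of two projective lines embeds in three-dimensional projective space by sending ([x : y], [u : v]) to [xu : xv : yu : yv], and the image is the quadric surface z0*z3 = z1*z2. The sketch below (Python with the sympy library, assumed to be available) checks these two facts symbolically; the variable names are arbitrary.

    import sympy as sp

    x, y, u, v = sp.symbols('x y u v')

    # Segre embedding P^1 x P^1 -> P^3 in homogeneous coordinates.
    z0, z1, z2, z3 = x*u, x*v, y*u, y*v

    # The image satisfies the quadric equation z0*z3 - z1*z2 = 0 identically.
    print(sp.expand(z0*z3 - z1*z2))      # 0

    # Rescaling the homogeneous coordinates of either factor rescales the image
    # point by a common factor, so the map is well defined on projective classes.
    lam, mu = sp.symbols('lam mu')
    rescaled = [w.subs({x: lam*x, y: lam*y, u: mu*u, v: mu*v}) for w in (z0, z1, z2, z3)]
    print([sp.expand(r - lam*mu*w) for r, w in zip(rescaled, (z0, z1, z2, z3))])   # [0, 0, 0, 0]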
Homogeneous coordinate ring and Hilbert polynomial As the prime ideal P defining a projective variety X is homogeneous, the homogeneous coordinate ring is a graded ring, i.e., can be expressed as the direct sum of its graded components: There exists a polynomial P such that for all sufficiently large n; it is called the Hilbert polynomial of X. It is a numerical invariant encoding some extrinsic geometry of X. The degree of P is the dimension r of X and its leading coefficient times r! is the degree of the variety X. The arithmetic genus of X is (−1)r (P(0) − 1) when X is smooth. For example, the homogeneous coordinate ring of is and its Hilbert polynomial is ; its arithmetic genus is zero. If the homogeneous coordinate ring R is an integrally closed domain, then the projective variety X is said to be projectively normal. Note, unlike normality, projective normality depends on R, the embedding of X into a projective space. The normalization of a projective variety is projective; in fact, it's the Proj of the integral closure of some homogeneous coordinate ring of X. Degree Let be a projective variety. There are at least two equivalent ways to define the degree of X relative to its embedding. The first way is to define it as the cardinality of the finite set where d is the dimension of X and Hi's are hyperplanes in "general positions". This definition corresponds to an intuitive idea of a degree. Indeed, if X is a hypersurface, then the degree of X is the degree of the homogeneous polynomial defining X. The "general positions" can be made precise, for example, by intersection theory; one requires that the intersection is proper and that the multiplicities of irreducible components are all one. The other definition, which is mentioned in the previous section, is that the degree of X is the leading coefficient of the Hilbert polynomial of X times (dim X)!. Geometrically, this definition means that the degree of X is the multiplicity of the vertex of the affine cone over X. Let be closed subschemes of pure dimensions that intersect properly (they are in general position). If mi denotes the multiplicity of an irreducible component Zi in the intersection (i.e., intersection multiplicity), then the generalization of Bézout's theorem says: The intersection multiplicity mi can be defined as the coefficient of Zi in the intersection product in the Chow ring of . In particular, if is a hypersurface not containing X, then where Zi are the irreducible components of the scheme-theoretic intersection of X and H with multiplicity (length of the local ring) mi. A complex projective variety can be viewed as a compact complex manifold; the degree of the variety (relative to the embedding) is then the volume of the variety as a manifold with respect to the metric inherited from the ambient complex projective space. A complex projective variety can be characterized as a minimizer of the volume (in a sense). The ring of sections Let X be a projective variety and L a line bundle on it. Then the graded ring is called the ring of sections of L. If L is ample, then Proj of this ring is X. Moreover, if X is normal and L is very ample, then is the integral closure of the homogeneous coordinate ring of X determined by L; i.e., so that pulls-back to L. For applications, it is useful to allow for divisors (or -divisors) not just line bundles; assuming X is normal, the resulting ring is then called a generalized ring of sections. 
If is a canonical divisor on X, then the generalized ring of sections is called the canonical ring of X. If the canonical ring is finitely generated, then Proj of the ring is called the canonical model of X. The canonical ring or model can then be used to define the Kodaira dimension of X. Projective curves Projective schemes of dimension one are called projective curves. Much of the theory of projective curves is about smooth projective curves, since the singularities of curves can be resolved by normalization, which consists in taking locally the integral closure of the ring of regular functions. Smooth projective curves are isomorphic if and only if their function fields are isomorphic. The study of finite extensions of or equivalently smooth projective curves over is an important branch in algebraic number theory. A smooth projective curve of genus one is called an elliptic curve. As a consequence of the Riemann–Roch theorem, such a curve can be embedded as a closed subvariety in . In general, any (smooth) projective curve can be embedded in (for a proof, see Secant variety#Examples). Conversely, any smooth closed curve in of degree three has genus one by the genus formula and is thus an elliptic curve. A smooth complete curve of genus greater than or equal to two is called a hyperelliptic curve if there is a finite morphism of degree two. Projective hypersurfaces Every irreducible closed subset of of codimension one is a hypersurface; i.e., the zero set of some homogeneous irreducible polynomial. Abelian varieties Another important invariant of a projective variety X is the Picard group of X, the set of isomorphism classes of line bundles on X. It is isomorphic to and therefore an intrinsic notion (independent of embedding). For example, the Picard group of is isomorphic to via the degree map. The kernel of is not only an abstract abelian group, but there is a variety called the Jacobian variety of X, Jac(X), whose points equal this group. The Jacobian of a (smooth) curve plays an important role in the study of the curve. For example, the Jacobian of an elliptic curve E is E itself. For a curve X of genus g, Jac(X) has dimension g. Varieties, such as the Jacobian variety, which are complete and have a group structure are known as abelian varieties, in honor of Niels Abel. In marked contrast to affine algebraic groups such as , such groups are always commutative, whence the name. Moreover, they admit an ample line bundle and are thus projective. On the other hand, an abelian scheme may not be projective. Examples of abelian varieties are elliptic curves, Jacobian varieties and K3 surfaces. Projections Let be a linear subspace; i.e., for some linearly independent linear functionals si. Then the projection from E is the (well-defined) morphism The geometric description of this map is as follows: We view so that it is disjoint from E. Then, for any , where denotes the smallest linear space containing E and x (called the join of E and x.) where are the homogeneous coordinates on For any closed subscheme disjoint from E, the restriction is a finite morphism. Projections can be used to cut down the dimension in which a projective variety is embedded, up to finite morphisms. Start with some projective variety If the projection from a point not on X gives Moreover, is a finite map to its image. Thus, iterating the procedure, one sees there is a finite map This result is the projective analog of Noether's normalization lemma. (In fact, it yields a geometric proof of the normalization lemma.) 
The same procedure can be used to show the following slightly more precise result: given a projective variety X over a perfect field, there is a finite birational morphism from X to a hypersurface H in In particular, if X is normal, then it is the normalization of H. Duality and linear system While a projective n-space parameterizes the lines in an affine n-space, the dual of it parametrizes the hyperplanes on the projective space, as follows. Fix a field k. By , we mean a projective n-space equipped with the construction: , a hyperplane on where is an L-point of for a field extension L of k and For each L, the construction is a bijection between the set of L-points of and the set of hyperplanes on . Because of this, the dual projective space is said to be the moduli space of hyperplanes on . A line in is called a pencil: it is a family of hyperplanes on parametrized by . If V is a finite-dimensional vector space over k, then, for the same reason as above, is the space of hyperplanes on . An important case is when V consists of sections of a line bundle. Namely, let X be an algebraic variety, L a line bundle on X and a vector subspace of finite positive dimension. Then there is a map: determined by the linear system V, where B, called the base locus, is the intersection of the divisors of zero of nonzero sections in V (see Linear system of divisors#A map determined by a linear system for the construction of the map). Cohomology of coherent sheaves Let X be a projective scheme over a field (or, more generally over a Noetherian ring A). Cohomology of coherent sheaves on X satisfies the following important theorems due to Serre: is a finite-dimensional k-vector space for any p. There exists an integer (depending on ; see also Castelnuovo–Mumford regularity) such that for all and p > 0, where is the twisting with a power of a very ample line bundle These results are proven reducing to the case using the isomorphism where in the right-hand side is viewed as a sheaf on the projective space by extension by zero. The result then follows by a direct computation for n any integer, and for arbitrary reduces to this case without much difficulty. As a corollary to 1. above, if f is a projective morphism from a noetherian scheme to a noetherian ring, then the higher direct image is coherent. The same result holds for proper morphisms f, as can be shown with the aid of Chow's lemma. Sheaf cohomology groups Hi on a noetherian topological space vanish for i strictly greater than the dimension of the space. Thus the quantity, called the Euler characteristic of , is a well-defined integer (for X projective). One can then show for some polynomial P over rational numbers. Applying this procedure to the structure sheaf , one recovers the Hilbert polynomial of X. In particular, if X is irreducible and has dimension r, the arithmetic genus of X is given by which is manifestly intrinsic; i.e., independent of the embedding. The arithmetic genus of a hypersurface of degree d is in . In particular, a smooth curve of degree d in has arithmetic genus . This is the genus formula. Smooth projective varieties Let X be a smooth projective variety where all of its irreducible components have dimension n. In this situation, the canonical sheaf ωX, defined as the sheaf of Kähler differentials of top degree (i.e., algebraic n-forms), is a line bundle. Serre duality Serre duality states that for any locally free sheaf on X, where the superscript prime refers to the dual space and is the dual sheaf of . 
A generalization to projective, but not necessarily smooth schemes is known as Verdier duality. Riemann–Roch theorem For a (smooth projective) curve X, H2 and higher vanish for dimensional reason and the space of the global sections of the structure sheaf is one-dimensional. Thus the arithmetic genus of X is the dimension of . By definition, the geometric genus of X is the dimension of H0(X, ωX). Serre duality thus implies that the arithmetic genus and the geometric genus coincide. They will simply be called the genus of X. Serre duality is also a key ingredient in the proof of the Riemann–Roch theorem. Since X is smooth, there is an isomorphism of groups from the group of (Weil) divisors modulo principal divisors to the group of isomorphism classes of line bundles. A divisor corresponding to ωX is called the canonical divisor and is denoted by K. Let l(D) be the dimension of . Then the Riemann–Roch theorem states: if g is a genus of X, for any divisor D on X. By the Serre duality, this is the same as: which can be readily proved. A generalization of the Riemann–Roch theorem to higher dimension is the Hirzebruch–Riemann–Roch theorem, as well as the far-reaching Grothendieck–Riemann–Roch theorem. Hilbert schemes Hilbert schemes parametrize all closed subvarieties of a projective scheme X in the sense that the points (in the functorial sense) of H correspond to the closed subschemes of X. As such, the Hilbert scheme is an example of a moduli space, i.e., a geometric object whose points parametrize other geometric objects. More precisely, the Hilbert scheme parametrizes closed subvarieties whose Hilbert polynomial equals a prescribed polynomial P. It is a deep theorem of Grothendieck that there is a scheme over k such that, for any k-scheme T, there is a bijection The closed subscheme of that corresponds to the identity map is called the universal family. For , the Hilbert scheme is called the Grassmannian of r-planes in and, if X is a projective scheme, is called the Fano scheme of r-planes on X. Complex projective varieties In this section, all algebraic varieties are complex algebraic varieties. A key feature of the theory of complex projective varieties is the combination of algebraic and analytic methods. The transition between these theories is provided by the following link: since any complex polynomial is also a holomorphic function, any complex variety X yields a complex analytic space, denoted . Moreover, geometric properties of X are reflected by the ones of . For example, the latter is a complex manifold if and only if X is smooth; it is compact if and only if X is proper over . Relation to complex Kähler manifolds Complex projective space is a Kähler manifold. This implies that, for any projective algebraic variety X, is a compact Kähler manifold. The converse is not in general true, but the Kodaira embedding theorem gives a criterion for a Kähler manifold to be projective. In low dimensions, there are the following results: (Riemann) A compact Riemann surface (i.e., compact complex manifold of dimension one) is a projective variety. By the Torelli theorem, it is uniquely determined by its Jacobian. (Chow-Kodaira) A compact complex manifold of dimension two with two algebraically independent meromorphic functions is a projective variety. GAGA and Chow's theorem Chow's theorem provides a striking way to go the other way, from analytic to algebraic geometry. It states that every analytic subvariety of a complex projective space is algebraic. 
The theorem may be interpreted to saying that a holomorphic function satisfying certain growth condition is necessarily algebraic: "projective" provides this growth condition. One can deduce from the theorem the following: Meromorphic functions on the complex projective space are rational. If an algebraic map between algebraic varieties is an analytic isomorphism, then it is an (algebraic) isomorphism. (This part is a basic fact in complex analysis.) In particular, Chow's theorem implies that a holomorphic map between projective varieties is algebraic. (consider the graph of such a map.) Every holomorphic vector bundle on a projective variety is induced by a unique algebraic vector bundle. Every holomorphic line bundle on a projective variety is a line bundle of a divisor. Chow's theorem can be shown via Serre's GAGA principle. Its main theorem states: Let X be a projective scheme over . Then the functor associating the coherent sheaves on X to the coherent sheaves on the corresponding complex analytic space Xan is an equivalence of categories. Furthermore, the natural maps are isomorphisms for all i and all coherent sheaves on X. Complex tori vs. complex abelian varieties The complex manifold associated to an abelian variety A over is a compact complex Lie group. These can be shown to be of the form and are also referred to as complex tori. Here, g is the dimension of the torus and L is a lattice (also referred to as period lattice). According to the uniformization theorem already mentioned above, any torus of dimension 1 arises from an abelian variety of dimension 1, i.e., from an elliptic curve. In fact, the Weierstrass's elliptic function attached to L satisfies a certain differential equation and as a consequence it defines a closed immersion: There is a p-adic analog, the p-adic uniformization theorem. For higher dimensions, the notions of complex abelian varieties and complex tori differ: only polarized complex tori come from abelian varieties. Kodaira vanishing The fundamental Kodaira vanishing theorem states that for an ample line bundle on a smooth projective variety X over a field of characteristic zero, for i > 0, or, equivalently by Serre duality for i < n. The first proof of this theorem used analytic methods of Kähler geometry, but a purely algebraic proof was found later. The Kodaira vanishing in general fails for a smooth projective variety in positive characteristic. Kodaira's theorem is one of various vanishing theorems, which give criteria for higher sheaf cohomologies to vanish. Since the Euler characteristic of a sheaf (see above) is often more manageable than individual cohomology groups, this often has important consequences about the geometry of projective varieties. Related notions Multi-projective variety Weighted projective variety, a closed subvariety of a weighted projective space See also Algebraic geometry of projective spaces Adequate equivalence relation Hilbert scheme Lefschetz hyperplane theorem Minimal model program Notes References R. Vakil, Foundations Of Algebraic Geometry External links The Hilbert Scheme by Charles Siegel - a blog post varieties Ch. 1 Algebraic geometry Algebraic varieties Projective geometry
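The genus formula quoted earlier can be made concrete. The arithmetic genus of a smooth hypersurface of degree d in projective n-space is the binomial coefficient C(d-1, n), which for plane curves (n = 2) reduces to (d-1)(d-2)/2; the small sketch below (Python, standard library only) evaluates it for a few degrees as a numerical illustration.

    from math import comb

    def hypersurface_arithmetic_genus(d: int, n: int) -> int:
        """Arithmetic genus of a smooth degree-d hypersurface in projective n-space."""
        return comb(d - 1, n)

    def plane_curve_genus(d: int) -> int:
        """Genus of a smooth plane curve of degree d, i.e. (d-1)(d-2)/2."""
        return hypersurface_arithmetic_genus(d, 2)

    print(plane_curve_genus(2))                    # 0: a smooth conic is rational
    print(plane_curve_genus(3))                    # 1: a smooth plane cubic is an elliptic curve
    print(plane_curve_genus(4))                    # 3
    print(hypersurface_arithmetic_genus(4, 3))     # 1: a quartic (K3) surface in projective 3-space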
Projective variety
[ "Mathematics" ]
6,015
[ "Fields of abstract algebra", "Algebraic geometry" ]
320,578
https://en.wikipedia.org/wiki/Tarski%27s%20theorem%20about%20choice
In mathematics, Tarski's theorem, proved by Alfred Tarski in 1924, states that in ZF the theorem "For every infinite set A, there is a bijective map between the sets A and A × A" implies the axiom of choice. The opposite direction was already known; thus the theorem and the axiom of choice are equivalent. Tarski related that when he tried to publish the theorem in Comptes Rendus de l'Académie des Sciences de Paris, Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. Proof The goal is to prove that the axiom of choice is implied by the statement "for every infinite set A, there is a bijective map between A and A × A". It is known that the well-ordering theorem is equivalent to the axiom of choice; thus it is enough to show that the statement implies that for every set B there exists a well-order. Since the collection of all ordinals α such that there exists a surjective function from B to the ordinal α is a set, there exists an infinite ordinal β such that there is no surjective function from B to β. We assume without loss of generality that the sets B and β are disjoint. By the initial assumption applied to B ∪ β, there exists a bijection f : B ∪ β → (B ∪ β) × (B ∪ β). For every x ∈ B it is impossible that β × {x} ⊆ f[B], because otherwise we could define a surjective function from B to β. Therefore, there exists at least one ordinal γ ∈ β such that f(γ) ∈ β × {x}, so the set Sx = {γ ∈ β : f(γ) ∈ β × {x}} is not empty. We can define a new function: g(x) = min Sx. This function is well defined since Sx is a non-empty set of ordinals, and so has a minimum. For every x, y ∈ B with x ≠ y the sets Sx and Sy are disjoint. Therefore, we can define a well order on B: for every x, y ∈ B we define x ≤ y if and only if g(x) ≤ g(y), since the image of g, that is, g[B], is a set of ordinals and therefore well ordered. References Axiom of choice Cardinal numbers Set theory Theorems in the foundations of mathematics fr:Ordinal de Hartogs#Produit cardinal
Tarski's theorem about choice
[ "Mathematics" ]
416
[ "Mathematical theorems", "Cardinal numbers", "Foundations of mathematics", "Set theory", "Mathematical logic", "Mathematical objects", "Infinity", "Mathematical axioms", "Axiom of choice", "Numbers", "Axioms of set theory", "Mathematical problems", "Theorems in the foundations of mathematics...
320,819
https://en.wikipedia.org/wiki/Cantor%20function
In mathematics, the Cantor function is an example of a function that is continuous, but not absolutely continuous. It is a notorious counterexample in analysis, because it challenges naive intuitions about continuity, derivative, and measure. Though it is continuous everywhere and has zero derivative almost everywhere, its value still goes from 0 to 1 as its argument goes from 0 to 1. Thus, in one sense the function seems very much like a constant one which cannot grow, and in another, it does indeed monotonically grow. It is also called the Cantor ternary function, the Lebesgue function, Lebesgue's singular function, the Cantor–Vitali function, the Devil's staircase, the Cantor staircase function, and the Cantor–Lebesgue function. Georg Cantor introduced the Cantor function and mentioned that Scheeffer pointed out that it was a counterexample to an extension of the fundamental theorem of calculus claimed by Harnack. The Cantor function was discussed and popularized by , and . Definition To define the Cantor function c, let x be any number in [0, 1] and obtain c(x) by the following steps: Express x in base 3, using digits 0, 1, 2. If the base-3 representation of x contains a 1, replace every digit strictly after the first 1 with 0. Replace any remaining 2s with 1s. Interpret the result as a binary number. The result is c(x). For example: 1/4 has the ternary representation 0.02020202... There are no 1s so the next stage is still 0.02020202... This is rewritten as 0.01010101... This is the binary representation of 1/3, so c(1/4) = 1/3. 1/5 has the ternary representation 0.01210121... The digits after the first 1 are replaced by 0s to produce 0.01000000... This is not rewritten since it has no 2s. This is the binary representation of 1/4, so c(1/5) = 1/4. 200/243 has the ternary representation 0.21102 (or 0.211012222...). The digits after the first 1 are replaced by 0s to produce 0.21. This is rewritten as 0.11. This is the binary representation of 3/4, so c(200/243) = 3/4. Equivalently, if C is the Cantor set on [0,1], then the Cantor function can be defined as c(x) = Σn an/2^n whenever x = Σn 2an/3^n lies in C (with each an equal to 0 or 1), and c(x) = sup { c(y) : y ≤ x, y ∈ C } for x in [0, 1] outside C. This formula is well-defined, since every member of the Cantor set has a unique base 3 representation that only contains the digits 0 or 2. (For some members of C, the ternary expansion is repeating with trailing 2's and there is an alternative non-repeating expansion ending in 1. For example, 1/3 = 0.13 = 0.02222...3 is a member of the Cantor set). Since c(0) = 0 and c(1) = 1, and c is monotonic on C, it is clear that 0 ≤ c(x) ≤ 1 also holds for all x in [0, 1]. Properties The Cantor function challenges naive intuitions about continuity and measure; though it is continuous everywhere and has zero derivative almost everywhere, c(x) goes from 0 to 1 as x goes from 0 to 1, and takes on every value in between. The Cantor function is the most frequently cited example of a real function that is uniformly continuous (precisely, it is Hölder continuous of exponent α = log 2/log 3) but not absolutely continuous. It is constant on intervals of the form (0.x1x2x3...xn022222..., 0.x1x2x3...xn200000...), and every point not in the Cantor set is in one of these intervals, so its derivative is 0 outside of the Cantor set. On the other hand, it has no derivative at any point in an uncountable subset of the Cantor set containing the interval endpoints described above. The Cantor function can also be seen as the cumulative probability distribution function of the 1/2-1/2 Bernoulli measure μ supported on the Cantor set: c(x) = μ([0, x]). This probability distribution, called the Cantor distribution, has no discrete part. That is, the corresponding measure is atomless. 
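The step-by-step definition above translates directly into a short program. The following sketch is not part of the original article; the function name cantor, the use of exact fractions, and the 40-digit cutoff are illustrative choices, and the truncation only approximates c(x) when the ternary expansion never contains a 1.

```python
from fractions import Fraction

def cantor(x, digits=40):
    """Approximate c(x) for 0 <= x <= 1 by the ternary-digit procedure above."""
    x = Fraction(x)
    if x >= 1:
        return Fraction(1)
    result = Fraction(0)
    for k in range(1, digits + 1):
        x *= 3
        d = int(x)          # next ternary digit: 0, 1 or 2
        x -= d
        if d == 1:
            # keep this first 1 as a binary digit and drop everything after it
            result += Fraction(1, 2**k)
            break
        if d == 2:
            result += Fraction(1, 2**k)   # remaining 2s become binary 1s
    return result

# The worked examples from the text, reproduced to within the truncation error:
for x, expected in [(Fraction(1, 4), Fraction(1, 3)),
                    (Fraction(1, 5), Fraction(1, 4)),
                    (Fraction(200, 243), Fraction(3, 4))]:
    assert abs(cantor(x) - expected) < Fraction(1, 2**39)
```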
Because the measure is atomless, there are no jump discontinuities in the function; any such jump would correspond to an atom in the measure. However, no non-constant part of the Cantor function can be represented as an integral of a probability density function; integrating any putative probability density function that is not almost everywhere zero over any interval will give positive probability to some interval to which this distribution assigns probability zero. In particular, as pointed out, the function is not the integral of its derivative even though the derivative exists almost everywhere. The Cantor function is the standard example of a singular function. The Cantor function is also a standard example of a function with bounded variation but, as mentioned above, is not absolutely continuous. However, every absolutely continuous function is continuous with bounded variation. The Cantor function is non-decreasing, and so in particular its graph defines a rectifiable curve. showed that the arc length of its graph is 2. Note that the graph of any nondecreasing function f such that f(0) = 0 and f(1) = 1 has length not greater than 2. In this sense, the Cantor function is extremal. Lack of absolute continuity Because the Lebesgue measure of the uncountably infinite Cantor set is 0, for any positive ε < 1 and δ, there exists a finite sequence of pairwise disjoint sub-intervals with total length < δ over which the Cantor function cumulatively rises more than ε. In fact, for every δ > 0 there are finitely many pairwise disjoint intervals (xk,yk) (1 ≤ k ≤ M) with Σk (yk − xk) < δ and Σk (c(yk) − c(xk)) = 1. Alternative definitions Iterative construction Below we define a sequence {fn} of functions on the unit interval that converges to the Cantor function. Let f0(x) = x. Then, for every integer n ≥ 0, the next function fn+1(x) will be defined in terms of fn(x) as follows: Let fn+1(x) = fn(3x)/2,  when 0 ≤ x ≤ 1/3; Let fn+1(x) = 1/2,  when 1/3 ≤ x ≤ 2/3; Let fn+1(x) = 1/2 + fn(3x − 2)/2,  when 2/3 ≤ x ≤ 1. The three definitions are compatible at the end-points 1/3 and 2/3, because fn(0) = 0 and fn(1) = 1 for every n, by induction. One may check that fn converges pointwise to the Cantor function defined above. Furthermore, the convergence is uniform. Indeed, separating into three cases, according to the definition of fn+1, one sees that max |fn+1(x) − fn(x)| ≤ (1/2) max |fn(x) − fn−1(x)| for every n ≥ 1. If f denotes the limit function, it follows that, for every n ≥ 0, |f(x) − fn(x)| ≤ 2^(1−n) max |f1(x) − f0(x)|. Fractal volume The Cantor function is closely related to the Cantor set. The Cantor set C can be defined as the set of those numbers in the interval [0, 1] that do not contain the digit 1 in their base-3 (triadic) expansion, except if the 1 is followed by zeros only (in which case the tail 1000 can be replaced by 0222 to get rid of any 1). It turns out that the Cantor set is a fractal with (uncountably) infinitely many points (zero-dimensional volume), but zero length (one-dimensional volume). Only the D-dimensional volume (in the sense of a Hausdorff-measure) takes a finite value, where D = log 2/log 3 is the fractal dimension of C. We may define the Cantor function alternatively as the D-dimensional volume of sections of the Cantor set: c(x) = HD(C ∩ [0, x]). Self-similarity The Cantor function possesses several symmetries. For 0 ≤ x ≤ 1, there is a reflection symmetry c(x) = 1 − c(1 − x) and a pair of magnifications, one on the left and one on the right: c(x/3) = c(x)/2 and c((2 + x)/3) = (1 + c(x))/2. The magnifications can be cascaded; they generate the dyadic monoid. This is exhibited by defining several helper functions. Define the reflection as r(x) = 1 − x. The first self-symmetry can be expressed as r ∘ c = c ∘ r, where the symbol ∘ denotes function composition. That is, c(1 − x) = 1 − c(x), and likewise for the other cases. 
For the left and right magnifications, write the left-mappings LD(x) = x/2 and LC(x) = x/3. Then the Cantor function obeys LD ∘ c = c ∘ LC. Similarly, define the right mappings as RD(x) = (1 + x)/2 and RC(x) = (2 + x)/3. Then, likewise, RD ∘ c = c ∘ RC. The two sides can be mirrored one onto the other, in that LD ∘ r = r ∘ RD, and likewise, LC ∘ r = r ∘ RC. These operations can be stacked arbitrarily. Consider, for example, the sequence of left-right moves Adding the subscripts C and D, and, for clarity, dropping the composition operator in all but a few places, one has: Arbitrary finite-length strings in the letters L and R correspond to the dyadic rationals, in that every dyadic rational can be written both as n/2^m for integers n and m and as a finite-length string of bits 0.b1b2b3...bm with bi ∈ {0,1}. Thus, every dyadic rational is in one-to-one correspondence with some self-symmetry of the Cantor function. Some notational rearrangements can make the above slightly easier to express. Let and stand for L and R. Function composition extends this to a monoid, in that one can write and generally, for some binary strings of digits A, B, where AB is just the ordinary concatenation of such strings. The dyadic monoid M is then the monoid of all such finite-length left-right moves. Writing as a general element of the monoid, there is a corresponding self-symmetry of the Cantor function: The dyadic monoid itself has several interesting properties. It can be viewed as a finite number of left-right moves down an infinite binary tree; the infinitely distant "leaves" on the tree correspond to the points on the Cantor set, and so, the monoid also represents the self-symmetries of the Cantor set. In fact, a large class of commonly occurring fractals are described by the dyadic monoid; additional examples can be found in the article on de Rham curves. Other fractals possessing self-similarity are described with other kinds of monoids. The dyadic monoid is itself a sub-monoid of the modular group. Note that the Cantor function bears more than a passing resemblance to Minkowski's question-mark function. In particular, it obeys the exact same symmetry relations, although in an altered form. Generalizations Let y = Σk bk/2^k be the dyadic (binary) expansion of the real number 0 ≤ y ≤ 1 in terms of binary digits bk ∈ {0,1}. This expansion is discussed in greater detail in the article on the dyadic transformation. Then consider the function Cz(y) = Σk bk z^k. For z = 1/3, the inverse of the function x = 2 C1/3(y) is the Cantor function. That is, y = y(x) is the Cantor function. In general, for any z < 1/2, Cz(y) looks like the Cantor function turned on its side, with the width of the steps getting wider as z approaches zero. As mentioned above, the Cantor function is also the cumulative distribution function of a measure on the Cantor set. Different Cantor functions, or Devil's Staircases, can be obtained by considering different atom-less probability measures supported on the Cantor set or other fractals. While the Cantor function has derivative 0 almost everywhere, current research focuses on the question of the size of the set of points where the upper right derivative is distinct from the lower right derivative, causing the derivative to not exist. This analysis of differentiability is usually given in terms of fractal dimension, with the Hausdorff dimension the most popular choice. This line of research was started in the 1990s by Darst, who showed that the Hausdorff dimension of the set of non-differentiability of the Cantor function is the square of the dimension of the Cantor set, (log 2/log 3)^2. 
Subsequently, Falconer showed that this squaring relationship holds for all Ahlfors-regular, singular measures. Later, Troscheit obtained a more comprehensive picture of the set where the derivative does not exist for more general normalized Gibbs measures supported on self-conformal and self-similar sets. Hermann Minkowski's question mark function loosely resembles the Cantor function visually, appearing as a "smoothed out" form of the latter; it can be constructed by passing from a continued fraction expansion to a binary expansion, just as the Cantor function can be constructed by passing from a ternary expansion to a binary expansion. The question mark function has the interesting property of having vanishing derivatives at all rational numbers. See also Dyadic transformation Weierstrass function, a function that is continuous everywhere but differentiable nowhere. Notes References Reprinted in: E. Zermelo (Ed.), Gesammelte Abhandlungen Mathematischen und Philosophischen Inhalts, Springer, New York, 1980. External links Cantor ternary function at Encyclopaedia of Mathematics Cantor Function by Douglas Rivers, the Wolfram Demonstrations Project. Fractals Measure theory Special functions Georg Cantor De Rham curves
Cantor function
[ "Mathematics" ]
2,655
[ "Functions and mappings", "Mathematical analysis", "Special functions", "Mathematical objects", "Fractals", "Combinatorics", "Mathematical relations" ]
320,873
https://en.wikipedia.org/wiki/Rocketdyne
Rocketdyne is an American rocket engine design and production company headquartered in Canoga Park, in the western San Fernando Valley of suburban Los Angeles, in southern California. Rocketdyne was founded as a division of North American Aviation in 1955 and was later part of Rockwell International from 1967 until 1996 and Boeing from 1996 to 2005. In 2005, Boeing sold the Rocketdyne division to United Technologies Corporation, becoming Pratt & Whitney Rocketdyne as part of Pratt & Whitney. In 2013, Rocketdyne was sold to GenCorp, Inc., which merged it with Aerojet to form Aerojet Rocketdyne. History After World War II, North American Aviation (NAA) was contracted by the Defense Department to study the German V-2 missile and adapt its engine to Society of Automotive Engineers (SAE) measurements and U.S. construction details. NAA also used the same general concept of separate burner/injectors from the V-2 engine design to build a much larger engine for the Navaho missile project (1946–1958). This work was considered unimportant in the 1940s and funded at a very low level, but the start of the Korean War in 1950 changed priorities. NAA had begun to use the Santa Susana Field Laboratory (SSFL) high in the Simi Hills around 1947 for the Navaho's rocket engine testing. At that time the site was much further away from major populated areas than the early test sites NAA had been using within Los Angeles. Navaho ran into continual difficulties and was canceled in 1958 when the Chrysler Corporation Missile Division's Redstone missile design (essentially an improved V-2) had caught up in development. However the Rocketdyne engine, known as the A-5 or NAA75-110, proved to be considerably more reliable than the one developed for Redstone, so the missile was redesigned with the A-5 even though the resulting missile had much shorter range. As the missile entered production, NAA spun off Rocketdyne in 1955 as a separate division, and built its new plant in the then small Los Angeles suburb of Canoga Park, in the San Fernando Valley near and below its Santa Susana Field Laboratory. In 1967, NAA, with its Rocketdyne and Atomics International divisions, merged with the Rockwell Corporation to form North American Rockwell, becoming in 1973 Rockwell International. Thor, Delta, Atlas Rocketdyne's next major development was its first all-new design, the S-3D, which had been developed in parallel to the V-2 derived A series. The S-3 was used on the Army's Jupiter missile design, essentially a development of the Redstone, and was later selected for the competitor Air Force Thor missile. An even larger design, the LR89/LR105, was used on the Atlas missile. The Thor had a short military career, but it was used as a satellite launcher through the 1950s and 60s in a number of different versions. One, Thor Delta, became the baseline for the current Delta series of space launchers, although since the late 1960s the Delta has had almost nothing in common with the Thor. Although the original S-3 engine was used on some Delta versions, most use its updated RS-27 design, originally developed as a single engine to replace the three-engine cluster on the Atlas. The Atlas also had a short military career as a deterrent weapon, but the Atlas rocket family descended from it became an important orbital launcher for many decades, both for the Project Mercury crewed spacecraft, and in the much-employed Atlas-Agena and Atlas-Centaur rockets. The Atlas V is still in manufacture and use. 
NASA Rocketdyne also became the major supplier for NASA's development efforts, supplying all of the major engines for the Saturn rocket, and potentially, the huge Nova rocket designs. Rocketdyne's H-1 engine was used by the Saturn I booster main stage. Five F-1 engines powered the Saturn V's S-IC first stage, while five J-2 engines powered its S-II second stage, and one J-2 the S-IVB third stage. By 1965, Rocketdyne built the vast majority of United States rocket engines, excepting those of the Titan rocket (built by Aerojet), and its payroll had grown to 65,000. This sort of growth appeared to be destined to continue in the 1970s when Rocketdyne won the contract for the RS-25 Space Shuttle Main Engine (SSME), but the rapid downturn in other military and civilian contracts led to downsizing of the company. North American Aviation, largely a spacecraft manufacturer, and also tied almost entirely to the Space Shuttle, merged with the Rockwell Corporation in 1967 to form the North American Rockwell company, which became Rockwell International in 1973, with Rocketdyne as a major division. Downsizing During continued downsizing in the 1980s and 1990s, Rockwell International shed several parts of the former North American Rockwell corporation. The aerospace entities of Rockwell International, including the former NAA and Rocketdyne, were sold to Boeing in 1996. Rocketdyne became part of Boeing's Defense division. In February 2005, Boeing reached an agreement to sell what was by then referred to as "Rocketdyne Propulsion & Power" to Pratt & Whitney of United Technologies Corporation. The transaction was completed on August 2, 2005. Boeing retained ownership of Rocketdyne's Santa Susana Field Lab. GenCorp, Inc. purchased Pratt & Whitney Rocketdyne in 2013 from United Technologies Corporation, and merged it with Aerojet to form Aerojet Rocketdyne. Facilities and operations Canoga Park, California Rocketdyne maintained division headquarters and rocket engine manufacturing facilities at Canoga Park from 1955 until 2014. North American Aviation's rocket development activities began with engine tests near the Los Angeles Airport. In 1948, NAA began testing liquid rocket engines within the Simi Hills which would later become the Santa Susana Field Laboratory. The company sought a location for a manufacturing plant near the Simi Hills testing site. In 1954, North American Aviation purchased 56 acres of land within the current Warner Center area, then deeded the property to the Air Force. The Air Force, in turn, designated the site Air Force Plant No. 56 and contracted with Rocketdyne to build and operate the facility. NAA completed construction of the main manufacturing building and designated Rocketdyne as a new company division in November 1955. Rocketdyne's success resulted in the addition of buildings within a growing footprint. At its peak, the Rocketdyne Canoga facility comprised some 27 different buildings over 119 acres of land, including over one million square feet of manufacturing area plus 516,000 square feet of office space. The Canoga plant grew into areas both east and southeast of the original location. In 1960, Rocketdyne opened a headquarters building at the southeast corner of Victory Boulevard and Canoga Avenue. A pedestrian tunnel underneath Victory Boulevard east of Canoga Avenue provided access between buildings to the South (including the Headquarters) and those located to the North of the street. (The tunnel was removed in 1973.) 
The Canoga plant shrank over time via piecemeal property sales and building demolitions into the 2000s. With the completion of the Apollo program in 1969, Rocketdyne ended the leases of several facilities and returned the headquarters offices to the Canoga Main building. In 1973, Rocketdyne repurchased the Air Force Plant No. 56 property, thereby ending the government designation. The Space Shuttle program ended in 2011, and further reductions followed. Pratt and Whitney retained ownership of the Canoga property when Rocketdyne was sold to Aerojet in 2013; the remaining property measured roughly 47 acres with buildings and structures comprising a total of 770,000 square feet. Rocketdyne played a key role in the United States space program and the development of propulsion systems. Ten years after being established, the Canoga plant produced the vast majority of the United States' liquid rocket engines (except those of the Titan rocket, which were built by Aerojet). Through the end of the twentieth century, Rocketdyne supplied all of the major engines for the Saturn program and for every space program in the United States. Six specific periods of liquid rocket engine development and manufacturing programs took place at the Canoga plant: Atlas (1954-late 1960s), Thor (1961-1975), Jupiter (1955-1962), Saturn (1961-1975), Apollo (1961-1972), and Space Shuttle (1981-2011). Key rocket engine technologies were advanced at the Rocketdyne Canoga plant: gimbaling of rocket engines, introduction of engine injector baffling plates for improved combustion stability, tubular regenerative cooling, "stage and a half" engine configuration first used on Atlas, thrust chamber ignition using pyrophoric chemicals and electrically controlled starting sequences. Aerojet Rocketdyne moved its office and manufacturing operations to the DeSoto campus in 2014. Demolition and site clearing of the former Rocketdyne facility in Canoga Park commenced in August 2016. As of February 2019, the future land use of the site had not been announced. McGregor, Texas Rocketdyne's Solid Propulsion Operations business unit was engaged in the development, testing and production of solid rocket engines at McGregor, Texas for nearly twenty years. The Rocket Fuels Division of Phillips Petroleum Company began using the former Bluebonnet Ordnance Plant in 1952. In 1958, Phillips and Rocketdyne entered a partnership to form Astrodyne Incorporated. In 1959, Rocketdyne purchased full ownership of the company and renamed it Solid Propulsion Operations (later designated the Solid Rocket Division). The purchase led Rocketdyne to invest in facilities and research at McGregor towards diversification into other propellant types and rocket engines. Notably, Rocketdyne installed a facility capable of testing engines having up to three million pounds of thrust. The Solid Propulsion Operations initially used ammonium nitrate-based propellants in the manufacture of gas generators used to start aircraft jet engines, turbopumps of the Rocketdyne H-1 rocket engine and the manufacture of the Jet Assisted Take Off (JATO) rocket engines. Ullage motors were developed for the Saturn V Space Vehicle. The group also built solid propellant boosters providing for the zero-length launching of North American F-100 Super Sabre and Lockheed F-104 Starfighter aircraft. The motor provided a takeoff thrust of 130,000 lbf for 4 seconds, accelerating the aircraft to 275 miles per hour and 4 g before separating and dropping away from the jet. 
In 1959, the group began using ammonium perchlorate oxidizer combined with carboxyl-terminated polybutadiene (CTPB) binder to produce solid propellants marketed under the trade name "Flexadyne." For the next nineteen years, Rocketdyne used the formulation in the production of solid rocket motors for three major missile systems: the AIM-7 Sparrow III, AGM-45 Shrike, and the AIM-54 Phoenix. Rocketdyne transferred operation of the McGregor plant to Hercules Inc. in 1978. A portion of the former Bluebonnet Ordnance Plant is now used by SpaceX as their Rocket Development and Test Facility. Neosho, Missouri A rocket engine manufacturing plant was operated by Rocketdyne over a twelve-year period at Neosho, Missouri. The plant was constructed by the U.S. Air Force within a 2,000-acre portion of Fort Crowder, a decommissioned World War II training base. The Rocketdyne division of North American Aviation operated the site, employing approximately 1,250 workers beginning in 1956. The plant primarily produced the MA-5 booster, sustainer and vernier rocket engines, H-1 engines and components for the F-1 and J-2 rocket engines. The P4-1 (a.k.a. LR64) engine was also manufactured for the AQM-37A target drone. The engines and components were evaluated at an on-site test area located approximately one mile from the plant. Rocketdyne closed the plant in 1968. The plant has been used by several different companies for the refurbishment of jet aircraft engines. The citizens of Neosho have placed a commemorative monument dedicated to the men and women of Rocketdyne Neosho "whose tireless efforts and relentless pursuit of quality resulted in the world's finest liquid rocket engines." Nevada Field Laboratory Rocketdyne established and operated a 120,000 acre rocket engine test and development facility nearby Reno, Nevada from 1962 until 1970. The Nevada Field Laboratory had three active open-air test facilities and two administrative areas. The test facilities were used for the Gemini and Apollo space programs, the annular aerospike engine and the early (proposal-stage) development of the Space Shuttle main engine. Power generation In addition to its primary business of building rocket engines, Rocketdyne has developed power generation and control systems. These included early nuclear power generation experiments, radioisotope thermoelectric generators (RTG), and solar power equipment, including the main power system for the International Space Station. In the Boeing sale to Pratt & Whitney, the Power Systems division of Rocketdyne was transferred to Hamilton Sundstrand, another subsidiary of United Technologies Corporation. 
List of engines Some of the engines developed by Rocketdyne are: Rocketdyne A1 to A6 (LOX/Alcohol) Used on Redstone Rocketdyne A7 (LOX/Alcohol) Used on Jupiter-C Rocketdyne 16NS-1,000 Rocketdyne Kiwi Nuclear rocket engine Rocketdyne M-34 Rocketdyne MA-2 Rocketdyne MA-3 Rocketdyne Megaboom modular sled rocket Rocketdyne P Rocketdyne LR64 Rocketdyne LR70 Rocketdyne LR79 family: XLR83-NA-1 - Navaho G-26 XLR89-1 - Atlas A LR79-7 - Thor, Delta, Thor-Able, Thor-Agena A, Thor Agena B, Thor Agena D, Thor-Burner S-3D - Jupiter XLR89-1 - Atlas A, B, C XLR71-NA-1 - Navaho II B-2C - Atlas A XLR89-5 - Atlas D S-3 - Juno II, Saturn A-2 MB-3-1 - Delta A, B, C, Thor Ablestar LR89-5 - Atlas E, F H-1 - Saturn I/IB MB-3 Press Mod - Sea Horse LR89-7 - Atlas LV-3C, Atlas Agena, Atlas Centaur, Atlas F/Agena D, Atlas H, Atlas G, Atlas I MB-3-3 - H-I RZ.2 - Europa H-1c - Saturn IB-A, IB-B H-1b - Saturn B-1, Saturn A-2, Saturn IB, Saturn IB-C, Saturn IB-CE, Saturn IB-D, Saturn INT-11, Saturn INT-12, Saturn INT-13, Saturn INT-14, Saturn INT-15. RS-27 - N-I, N-II, Delta 1000, Delta 4000, Delta 5000, Delta 2000, Delta 3000 MB-3-J - N RS-27A - Delta 6925, Delta 6920-8, Delta 6925-8, Delta 6920-10, Delta 8930 RS-27C - Barbarian MDD, Delta 7925 RS-56-OBA - Atlas II, IIA, IIAS Rocketdyne LR-101 Vernier engine used by Atlas, Thor and Delta Rocketdyne LR105 family: S-4 - Super-Jupiter XLR105-5 - Atlas Able, Atlas B, Atlas C, Atlas LV-3C, Atlas D, Atlas-Agena, Atlas LV-3B LR105-3 LR105-5 - Atlas LV-3C, Atlas E, Atlas Agena B, Atlas F, Atlas Agena D, Atlas Centaur D, Atlas SLV-3 LR105-7 - Atlas Agena D, Atlas F/Agena D, Atlas H, Atlas G, Atlas I RS-56-OSA - Atlas II, IIA, IIAS Rocketdyne Aeolus Rocketdyne XRS-2200, linear aerospike engine, tested for X-33 Rocketdyne RS-2200, linear aerospike engine, intended for Venturestar Rocketdyne E-1 (RP-1/LOX) Backup design for the Titan I Rocketdyne F-1 (RP-1/LOX) Used by the Saturn V. Rocketdyne H-1 (RP-1/LOX) Used by the Saturn I and IB Rocketdyne J-2 (LH2/LOX) Used by both the Saturn V and Saturn IB. Rocketdyne RS-25 Space Shuttle Main Engine (SSME) (LH2/LOX) The main engine for the Space Shuttle, also used on the Space Launch System Rocketdyne RS-27A (RP-1/LOX) Used by the Delta II/III and Atlas ICBM Rocketdyne RS-56 (RP-1/LOX) Used by the Atlas II first stage Rocketdyne RS-68 (LH2/LOX) Used by the Delta IV first stage Rocketdyne XLR46-NA-2, intended for the North American NA-247 interceptor proposal Gallery See also Rocketdyne engines Aerojet Rocketdyne Pratt & Whitney Rocketdyne Atomics International Division Santa Susana Field Laboratory References External links Rocketdyne internet archives (unofficial) GenCorp, Inc.: Rocketdyne Acquisition presentation 01 Rocketry Aerospace companies of the United States Former defense companies of the United States Technology companies based in Greater Los Angeles Manufacturing companies based in Los Angeles Canoga Park, Los Angeles Simi Hills North American Aviation Boeing mergers and acquisitions Aerojet Rocketdyne Holdings United Technologies American companies established in 1955 Manufacturing companies established in 1955 Technology companies established in 1955 Manufacturing companies disestablished in 2005 Technology companies disestablished in 2005 1955 establishments in California 2005 disestablishments in California Defunct manufacturing companies based in Greater Los Angeles History of the San Fernando Valley 1967 mergers and acquisitions 1996 mergers and acquisitions 2005 mergers and acquisitions
Rocketdyne
[ "Engineering" ]
3,784
[ "Rocketry", "Aerospace engineering" ]
321,157
https://en.wikipedia.org/wiki/Model%20checking
In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash). In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure. Overview Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification. There is no need to verify the newly introduced properties against the original specification, since this is not possible. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy. An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke and E. A. Emerson, and of J. P. Queille and J. Sifakis. Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking. Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory), the approach cannot simultaneously be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a specification delivered, e.g., by means of UML activity diagrams or control-interpreted Petri nets. The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite-state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution. Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula p, and a structure M with initial state s, decide if M, s ⊨ p. If M is finite, as it is in hardware, model checking reduces to a graph search. 
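As a hedged illustration of the graph-search reduction just described (not taken from any particular tool; the function names and the toy transition system are invented for this sketch), checking a safety property of a finite structure amounts to a breadth-first search for a reachable state that violates the property:

```python
from collections import deque

def check_invariant(initial_states, successors, invariant):
    """Breadth-first search over reachable states; returns None if `invariant`
    holds in every reachable state, otherwise a path to a violating state."""
    frontier = deque((s, (s,)) for s in initial_states)
    visited = set(initial_states)
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return path                      # counterexample trace
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return None

# Toy structure: a 3-bit counter; the (false) property is "the counter never hits 7".
trace = check_invariant([0], lambda s: [(s + 1) % 8], lambda s: s != 7)
print(trace)  # (0, 1, 2, 3, 4, 5, 6, 7)
```

A real model checker differs mainly in how states are represented and stored, which is what the symbolic techniques discussed next address.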
Symbolic model checking Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state-space traversal is based on representations of a set of states and transition relations as logical formulas, binary decision diagrams (BDD) or other related data structures, the model-checking method is symbolic. Historically, the first symbolic methods used BDDs. After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for linear temporal logic (LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking. The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking. Example One example of such a system requirement: Between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula: Here, □ should be read as "always", ◇ as "eventually", U as "until", and the other symbols are standard logical symbols: ∨ for "or", ∧ for "and" and ¬ for "not". Techniques Model-checking tools face a combinatorial blow-up of the state-space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem. Symbolic algorithms avoid ever explicitly constructing the graph for the FSM; instead, they represent the graph implicitly using a formula in quantified propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan, as well as of Olivier Coudert and Jean-Christophe Madre, and the development of open-source BDD manipulation libraries such as CUDD and BuDDy. Bounded model-checking algorithms unroll the FSM for a fixed number of steps, k, and check whether a property violation can occur in k or fewer steps. This typically involves encoding the restricted model as an instance of SAT. The process can be repeated with larger and larger values of k until all possible violations have been ruled out (cf. Iterative deepening depth-first search). Abstraction attempts to prove properties of a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one so that a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, sometimes the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is to ignore the values of non-Boolean variables and to only consider Boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may, in fact, be sufficient to prove e.g. properties of mutual exclusion. Counterexample-guided abstraction refinement (CEGAR) begins checking with a coarse (i.e. imprecise) abstraction and iteratively refines it. When a violation (i.e. counterexample) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). 
If the violation is feasible, it is reported to the user. If it is not, the proof of infeasibility is used to refine the abstraction and checking begins again. Model-checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems. First-order logic Model checking is also studied in the field of computational complexity theory. Specifically, a first-order logical formula is fixed without free variables and the following decision problem is considered: Given a finite interpretation, for instance, one described as a relational database, decide whether the interpretation is a model of the formula. This problem is in the circuit class AC0. It is tractable when imposing some restrictions on the input structure: for instance, requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures. These results have been extended to the task of enumerating all solutions to a first-order formula with free variables. Tools Here is a list of significant model-checking tools: Afra: a model checker for Rebeca which is an actor-based language for modeling concurrent and reactive systems Alloy (Alloy Analyzer) BLAST (Berkeley Lazy Abstraction Software Verification Tool) CADP (Construction and Analysis of Distributed Processes) a toolbox for the design of communication protocols and distributed systems CPAchecker: an open-source software model checker for C programs, based on the CPA framework ECLAIR: a platform for the automatic analysis, verification, testing, and transformation of C and C++ programs FDR2: a model checker for verifying real-time systems modelled and specified as CSP Processes FizzBee: an easier to use alternative to TLA+, that uses Python-like specification language, that has both behavioral modeling like TLA+ and probabilistic modeling like PRISM ISP code level verifier for MPI programs Java Pathfinder: an open-source model checker for Java programs Libdmc: a framework for distributed model checking mCRL2 Toolset, Boost Software License, Based on ACP NuSMV: a new symbolic model checker PAT: an enhanced simulator, model checker and refinement checker for concurrent and real-time systems Prism: a probabilistic symbolic model checker Roméo: an integrated tool environment for modelling, simulation, and verification of real-time systems modelled as parametric, time, and stopwatch Petri nets SPIN: a general tool for verifying the correctness of distributed software models in a rigorous and mostly automated fashion Storm: A model checker for probabilistic systems. TAPAs: a tool for the analysis of process algebra TAPAAL: an integrated tool environment for modelling, validation, and verification of Timed-Arc Petri Nets TLA+ model checker by Leslie Lamport UPPAAL: an integrated tool environment for modelling, validation, and verification of real-time systems modelled as networks of timed automata Zing – experimental tool from Microsoft to validate state models of software at various levels: high-level protocol descriptions, work-flow specifications, web services, device drivers, and protocols in the core of the operating system. Zing is currently being used for developing drivers for Windows. See also References Further reading . 
J. A. Bergstra, A. Ponse and S. A. Smolka, editors. (This is also a very good introduction and overview of model checking.)
Model checking
[ "Mathematics" ]
2,144
[ "Mathematical logic", "Logic in computer science" ]
321,438
https://en.wikipedia.org/wiki/Perfect%20information
In economics, perfect information (sometimes referred to as "no hidden information") is a feature of perfect competition. With perfect information in a market, all consumers and producers have complete and instantaneous knowledge of all market prices, their own utility, and own cost functions. In game theory, a sequential game has perfect information if each player, when making any decision, is perfectly informed of all the events that have previously occurred, including the "initialization event" of the game (e.g. the starting hands of each player in a card game). Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions, payoffs, strategies and "types". A game with perfect information may or may not have complete information. Games where some aspect of play is hidden from opponents – such as the cards in poker and bridge – are examples of games with imperfect information. Examples Chess is an example of a game with perfect information, as each player can see all the pieces on the board at all times. Other games with perfect information include tic-tac-toe, Reversi, checkers, and Go. Academic literature has not produced consensus on a standard definition of perfect information which defines whether games with chance, but no secret information, and games with simultaneous moves are games of perfect information. Games which are sequential (players alternate in moving) and which have chance events (with known probabilities to all players) but no secret information, are sometimes considered games of perfect information. This includes games such as backgammon and Monopoly. But there are some academic papers which do not regard such games as games of perfect information because the results of chance themselves are unknown prior to them occurring. Games with simultaneous moves are generally not considered games of perfect information. This is because each player holds information which is secret, and must play a move without knowing the opponent's secret information. Nevertheless, some such games are symmetrical, and fair. An example of a game in this category includes rock paper scissors. See also Extensive form game Information asymmetry Partial knowledge Screening game Signaling game References Further reading Fudenberg, D. and Tirole, J. (1993) Game Theory, MIT Press. (see Chapter 3, sect 2.2) Gibbons, R. (1992) A primer in game theory, Harvester-Wheatsheaf. (see Chapter 2) Luce, R.D. and Raiffa, H. (1957) Games and Decisions: Introduction and Critical Survey, Wiley & Sons (see Chapter 3, section 2) The Economics of Groundhog Day by economist D.W. MacKenzie, using the 1993 film Groundhog Day to argue that perfect information, and therefore perfect competition, is impossible. Watson, J. (2013) Strategy: An Introduction to Game Theory, W.W. Norton and Co. Game theory Perfect competition Board game terminology
Perfect information
[ "Mathematics" ]
590
[ "Game theory" ]
321,671
https://en.wikipedia.org/wiki/Tessellation
A tessellation or tiling is the covering of a surface, often a plane, using one or more geometric shapes, called tiles, with no overlaps and no gaps. In mathematics, tessellation can be generalized to higher dimensions and a variety of geometries. A periodic tiling has a repeating pattern. Some special kinds include regular tilings with regular polygonal tiles all of the same shape, and semiregular tilings with regular tiles of more than one shape and with every corner identically arranged. The patterns formed by periodic tilings can be categorized into 17 wallpaper groups. A tiling that lacks a repeating pattern is called "non-periodic". An aperiodic tiling uses a small set of tile shapes that cannot form a repeating pattern (an aperiodic set of prototiles). A tessellation of space, also known as a space filling or honeycomb, can be defined in the geometry of higher dimensions. A real physical tessellation is a tiling made of materials such as cemented ceramic squares or hexagons. Such tilings may be decorative patterns, or may have functions such as providing durable and water-resistant pavement, floor, or wall coverings. Historically, tessellations were used in Ancient Rome and in Islamic art such as in the Moroccan architecture and decorative geometric tiling of the Alhambra palace. In the twentieth century, the work of M. C. Escher often made use of tessellations, both in ordinary Euclidean geometry and in hyperbolic geometry, for artistic effect. Tessellations are sometimes employed for decorative effect in quilting. Tessellations form a class of patterns in nature, for example in the arrays of hexagonal cells found in honeycombs. History Tessellations were used by the Sumerians (about 4000 BC) in building wall decorations formed by patterns of clay tiles. Decorative mosaic tilings made of small squared blocks called tesserae were widely employed in classical antiquity, sometimes displaying geometric patterns. In 1619, Johannes Kepler made an early documented study of tessellations. He wrote about regular and semiregular tessellations in his ; he was possibly the first to explore and to explain the hexagonal structures of honeycomb and snowflakes. Some two hundred years later in 1891, the Russian crystallographer Yevgraf Fyodorov proved that every periodic tiling of the plane features one of seventeen different groups of isometries. Fyodorov's work marked the unofficial beginning of the mathematical study of tessellations. Other prominent contributors include Alexei Vasilievich Shubnikov and Nikolai Belov in their book Colored Symmetry (1964), and Heinrich Heesch and Otto Kienzle (1963). Etymology In Latin, tessella is a small cubical piece of clay, stone, or glass used to make mosaics. The word "tessella" means "small square" (from tessera, square, which in turn is from the Greek word τέσσερα for four). It corresponds to the everyday term tiling, which refers to applications of tessellations, often made of glazed clay. Overview Tessellation in two dimensions, also called planar tiling, is a topic in geometry that studies how shapes, known as tiles, can be arranged to fill a plane without any gaps, according to a given set of rules. These rules can be varied. Common ones are that there must be no gaps between tiles, and that no corner of one tile can lie along the edge of another. The tessellations created by bonded brickwork do not obey this rule. 
Among those that do, a regular tessellation has both identical regular tiles and identical regular corners or vertices, having the same angle between adjacent edges for every tile. There are only three shapes that can form such regular tessellations: the equilateral triangle, square and the regular hexagon. Any one of these three shapes can be duplicated infinitely to fill a plane with no gaps. Many other types of tessellation are possible under different constraints. For example, there are eight types of semi-regular tessellation, made with more than one kind of regular polygon but still having the same arrangement of polygons at every corner. Irregular tessellations can also be made from other shapes such as pentagons, polyominoes and in fact almost any kind of geometric shape. The artist M. C. Escher is famous for making tessellations with irregular interlocking tiles, shaped like animals and other natural objects. If suitable contrasting colours are chosen for the tiles of differing shape, striking patterns are formed, and these can be used to decorate physical surfaces such as church floors. More formally, a tessellation or tiling is a cover of the Euclidean plane by a countable number of closed sets, called tiles, such that the tiles intersect only on their boundaries. These tiles may be polygons or any other shapes. Many tessellations are formed from a finite number of prototiles in which all tiles in the tessellation are congruent to the given prototiles. If a geometric shape can be used as a prototile to create a tessellation, the shape is said to tessellate or to tile the plane. The Conway criterion is a sufficient, but not necessary, set of rules for deciding whether a given shape tiles the plane periodically without reflections: some tiles fail the criterion, but still tile the plane. No general rule has been found for determining whether a given shape can tile the plane or not, which means there are many unsolved problems concerning tessellations. Mathematically, tessellations can be extended to spaces other than the Euclidean plane. The Swiss geometer Ludwig Schläfli pioneered this by defining polyschemes, which mathematicians nowadays call polytopes. These are the analogues to polygons and polyhedra in spaces with more dimensions. He further defined the Schläfli symbol notation to make it easy to describe polytopes. For example, the Schläfli symbol for an equilateral triangle is {3}, while that for a square is {4}. The Schläfli notation makes it possible to describe tilings compactly. For example, a tiling of regular hexagons has three six-sided polygons at each vertex, so its Schläfli symbol is {6,3}. Other methods also exist for describing polygonal tilings. When the tessellation is made of regular polygons, the most common notation is the vertex configuration, which is simply a list of the number of sides of the polygons around a vertex. The square tiling has a vertex configuration of 4.4.4.4, or 44. The tiling of regular hexagons is noted 6.6.6, or 63. In mathematics Introduction to tessellations Mathematicians use some technical terms when discussing tilings. An edge is the intersection between two bordering tiles; it is often a straight line. A vertex is the point of intersection of three or more bordering tiles. Using these terms, an isogonal or vertex-transitive tiling is a tiling where every vertex point is identical; that is, the arrangement of polygons about each vertex is the same. 
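The claim that only the triangle, square and hexagon give regular tessellations can be checked with a few lines of arithmetic: k congruent regular n-gons fit around a vertex exactly when the interior angle 180(n − 2)/n degrees divides 360 degrees. The snippet below is an illustrative sketch, not part of the original article:

```python
# Which regular n-gons admit a regular (edge-to-edge, one-tile) tessellation?
from fractions import Fraction

for n in range(3, 13):
    interior = Fraction(180 * (n - 2), n)     # interior angle in degrees
    copies = Fraction(360) / interior          # how many fit around a vertex
    if copies.denominator == 1:                # must be a whole number
        print(f"{n}-gon: interior angle {interior} degrees, "
              f"{copies} meet at a vertex -> Schläfli symbol {{{n},{copies}}}")
```

Running it reports {3,6}, {4,4} and {6,3}, the three Schläfli symbols mentioned above; every other n between 3 and 12 leaves a gap or an overlap at the vertex.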
The fundamental region is a shape such as a rectangle that is repeated to form the tessellation. For example, a regular tessellation of the plane with squares has a meeting of four squares at every vertex. The sides of the polygons are not necessarily identical to the edges of the tiles. An edge-to-edge tiling is any polygonal tessellation where adjacent tiles only share one full side, i.e., no tile shares a partial side or more than one side with any other tile. In an edge-to-edge tiling, the sides of the polygons and the edges of the tiles are the same. The familiar "brick wall" tiling is not edge-to-edge because the long side of each rectangular brick is shared with two bordering bricks. A normal tiling is a tessellation for which every tile is topologically equivalent to a disk, the intersection of any two tiles is a connected set or the empty set, and all tiles are uniformly bounded. This means that a single circumscribing radius and a single inscribing radius can be used for all the tiles in the whole tiling; the condition disallows tiles that are pathologically long or thin. A monohedral tiling is a tessellation in which all tiles are congruent; it has only one prototile. A particularly interesting type of monohedral tessellation is the spiral monohedral tiling. The first spiral monohedral tiling was discovered by Heinz Voderberg in 1936; the Voderberg tiling has a unit tile that is a nonconvex enneagon. The Hirschhorn tiling, published by Michael D. Hirschhorn and D. C. Hunt in 1985, is a pentagon tiling using irregular pentagons: regular pentagons cannot tile the Euclidean plane as the internal angle of a regular pentagon, 3π/5, is not a divisor of 2π. An isohedral tiling is a special variation of a monohedral tiling in which all tiles belong to the same transitivity class, that is, all tiles are transforms of the same prototile under the symmetry group of the tiling. If a prototile admits a tiling, but no such tiling is isohedral, then the prototile is called anisohedral and forms anisohedral tilings. A regular tessellation is a highly symmetric, edge-to-edge tiling made up of regular polygons, all of the same shape. There are only three regular tessellations: those made up of equilateral triangles, squares, or regular hexagons. All three of these tilings are isogonal and monohedral. A semi-regular (or Archimedean) tessellation uses more than one type of regular polygon in an isogonal arrangement. There are eight semi-regular tilings (or nine if the mirror-image pair of tilings counts as two). These can be described by their vertex configuration; for example, a semi-regular tiling using squares and regular octagons has the vertex configuration 4.8² (each vertex has one square and two octagons). Many non-edge-to-edge tilings of the Euclidean plane are possible, including the family of Pythagorean tilings, tessellations that use two (parameterised) sizes of square, each square touching four squares of the other size. An edge tessellation is one in which each tile can be reflected over an edge to take up the position of a neighbouring tile, such as in an array of equilateral or isosceles triangles. Wallpaper groups Tilings with translational symmetry in two independent directions can be categorized by wallpaper groups, of which 17 exist. It has been claimed that all seventeen of these groups are represented in the Alhambra palace in Granada, Spain. Although this is disputed, the variety and sophistication of the Alhambra tilings have interested modern researchers. 
Of the three regular tilings two are in the p6m wallpaper group and one is in p4m. Tilings in 2-D with translational symmetry in just one direction may be categorized by the seven frieze groups describing the possible frieze patterns. Orbifold notation can be used to describe wallpaper groups of the Euclidean plane. Aperiodic tilings Penrose tilings, which use two different quadrilateral prototiles, are the best known example of tiles that forcibly create non-periodic patterns. They belong to a general class of aperiodic tilings, which use tiles that cannot tessellate periodically. The recursive process of substitution tiling is a method of generating aperiodic tilings. One class that can be generated in this way is the rep-tiles; these tilings have unexpected self-replicating properties. Pinwheel tilings are non-periodic, using a rep-tile construction; the tiles appear in infinitely many orientations. It might be thought that a non-periodic pattern would be entirely without symmetry, but this is not so. Aperiodic tilings, while lacking in translational symmetry, do have symmetries of other types, by infinite repetition of any bounded patch of the tiling and in certain finite groups of rotations or reflections of those patches. A substitution rule, such as can be used to generate Penrose patterns using assemblies of tiles called rhombs, illustrates scaling symmetry. A Fibonacci word can be used to build an aperiodic tiling, and to study quasicrystals, which are structures with aperiodic order. Wang tiles are squares coloured on each edge, and placed so that abutting edges of adjacent tiles have the same colour; hence they are sometimes called Wang dominoes. A suitable set of Wang dominoes can tile the plane, but only aperiodically. This is known because any Turing machine can be represented as a set of Wang dominoes that tile the plane if, and only if, the Turing machine does not halt. Since the halting problem is undecidable, the problem of deciding whether a Wang domino set can tile the plane is also undecidable. Truchet tiles are square tiles decorated with patterns so they do not have rotational symmetry; in 1704, Sébastien Truchet used a square tile split into two triangles of contrasting colours. These can tile the plane either periodically or randomly. An einstein tile is a single shape that forces aperiodic tiling. The first such tile, dubbed a "hat", was discovered in 2023 by David Smith, a hobbyist mathematician. The discovery is under professional review and, upon confirmation, will be credited as solving a longstanding mathematical problem. Tessellations and colour Sometimes the colour of a tile is understood as part of the tiling; at other times arbitrary colours may be applied later. When discussing a tiling that is displayed in colours, to avoid ambiguity, one needs to specify whether the colours are part of the tiling or just part of its illustration. This affects whether tiles with the same shape, but different colours, are considered identical, which in turn affects questions of symmetry. The four colour theorem states that for every tessellation of a normal Euclidean plane, with a set of four available colours, each tile can be coloured in one colour such that no tiles of equal colour meet at a curve of positive length. The colouring guaranteed by the four colour theorem does not generally respect the symmetries of the tessellation. To produce a colouring that does, it is necessary to treat the colours as part of the tessellation. 
Here, as many as seven colours may be needed, as demonstrated in the image at left. Tessellations with polygons Next to the various tilings by regular polygons, tilings by other polygons have also been studied. Any triangle or quadrilateral (even non-convex) can be used as a prototile to form a monohedral tessellation, often in more than one way. Copies of an arbitrary quadrilateral can form a tessellation with translational symmetry and 2-fold rotational symmetry with centres at the midpoints of all sides. For an asymmetric quadrilateral this tiling belongs to wallpaper group p2. As fundamental domain we have the quadrilateral. Equivalently, we can construct a parallelogram subtended by a minimal set of translation vectors, starting from a rotational centre. We can divide this by one diagonal, and take one half (a triangle) as fundamental domain. Such a triangle has the same area as the quadrilateral and can be constructed from it by cutting and pasting. If only one shape of tile is allowed, tilings exist with convex N-gons for N equal to 3, 4, 5, and 6. For N = 5, see Pentagonal tiling, for N = 6, see Hexagonal tiling, for N = 7, see Heptagonal tiling and for N = 8, see octagonal tiling. With non-convex polygons, there are far fewer limitations in the number of sides, even if only one shape is allowed. Polyominoes are examples of tiles that are either convex or non-convex, for which various combinations, rotations, and reflections can be used to tile a plane. For results on tiling the plane with polyominoes, see Polyomino § Uses of polyominoes. Voronoi tilings Voronoi or Dirichlet tilings are tessellations where each tile is defined as the set of points closest to one of the points in a discrete set of defining points. (Think of geographical regions where each region is defined as all the points closest to a given city or post office.) The Voronoi cell for each defining point is a convex polygon. The Delaunay triangulation is a tessellation that is the dual graph of a Voronoi tessellation. Delaunay triangulations are useful in numerical simulation, in part because among all possible triangulations of the defining points, Delaunay triangulations maximize the minimum of the angles formed by the edges. Voronoi tilings with randomly placed points can be used to construct random tilings of the plane. Tessellations in higher dimensions Tessellation can be extended to three dimensions. Certain polyhedra can be stacked in a regular crystal pattern to fill (or tile) three-dimensional space, including the cube (the only Platonic polyhedron to do so), the rhombic dodecahedron, the truncated octahedron, and triangular, quadrilateral, and hexagonal prisms, among others. Any polyhedron that fits this criterion is known as a plesiohedron, and may possess between 4 and 38 faces. Naturally occurring rhombic dodecahedra are found as crystals of andradite (a kind of garnet) and fluorite. Tessellations in three or more dimensions are called honeycombs. In three dimensions there is just one regular honeycomb, which has eight cubes at each polyhedron vertex. Similarly, in three dimensions there is just one quasiregular honeycomb, which has eight tetrahedra and six octahedra at each polyhedron vertex. However, there are many possible semiregular honeycombs in three dimensions. Uniform honeycombs can be constructed using the Wythoff construction. The Schmitt-Conway biprism is a convex polyhedron with the property of tiling space only aperiodically. A Schwarz triangle is a spherical triangle that can be used to tile a sphere. 
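The Voronoi and Delaunay constructions described in this section are available in standard computational-geometry libraries. The sketch below is illustrative only and assumes NumPy and SciPy are installed; the random points play the role of the cities or post offices in the analogy above.

```python
import numpy as np
from scipy.spatial import Voronoi, Delaunay

rng = np.random.default_rng(seed=0)
sites = rng.random((10, 2))     # ten random defining points in the unit square

vor = Voronoi(sites)            # Voronoi (Dirichlet) tessellation of the sites
tri = Delaunay(sites)           # its dual Delaunay triangulation

print(len(vor.vertices), "Voronoi vertices")
print(len(tri.simplices), "Delaunay triangles")
for i in range(3):              # the cell around each of the first three sites
    region = vor.regions[vor.point_region[i]]
    print("site", i, "cell vertex indices:", region)  # -1 marks an unbounded cell
```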
Tessellations in non-Euclidean geometries It is possible to tessellate in non-Euclidean geometries such as hyperbolic geometry. A uniform tiling in the hyperbolic plane (that may be regular, quasiregular, or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other). A uniform honeycomb in hyperbolic space is a uniform tessellation of uniform polyhedral cells. In three-dimensional (3-D) hyperbolic space there are nine Coxeter group families of compact convex uniform honeycombs, generated as Wythoff constructions, and represented by permutations of rings of the Coxeter diagrams for each family. In art In architecture, tessellations have been used to create decorative motifs since ancient times. Mosaic tilings often had geometric patterns. Later civilisations also used larger tiles, either plain or individually decorated. Some of the most decorative were the Moorish wall tilings of Islamic architecture, using Girih and Zellige tiles in buildings such as the Alhambra and La Mezquita. Tessellations frequently appeared in the graphic art of M. C. Escher; he was inspired by the Moorish use of symmetry in places such as the Alhambra when he visited Spain in 1936. Escher made four "Circle Limit" drawings of tilings that use hyperbolic geometry. For his woodcut "Circle Limit IV" (1960), Escher prepared a pencil and ink study showing the required geometry. Escher explained that "No single component of all the series, which from infinitely far away rise like rockets perpendicularly from the limit and are at last lost in it, ever reaches the boundary line." Tessellated designs often appear on textiles, whether woven, stitched in, or printed. Tessellation patterns have been used to design interlocking motifs of patch shapes in quilts. Tessellations are also a main genre in origami (paper folding), where pleats are used to connect molecules, such as twist folds, together in a repeating fashion. In manufacturing Tessellation is used in manufacturing industry to reduce the wastage of material (yield losses) such as sheet metal when cutting out shapes for objects such as car doors or drink cans. Tessellation is apparent in the mudcrack-like cracking of thin films – with a degree of self-organisation being observed using micro and nanotechnologies. In nature The honeycomb is a well-known example of tessellation in nature with its hexagonal cells. In botany, the term "tessellate" describes a checkered pattern, for example on a flower petal, tree bark, or fruit. Flowers including the fritillary, and some species of Colchicum, are characteristically tessellate. Many patterns in nature are formed by cracks in sheets of materials. These patterns can be described by Gilbert tessellations, also known as random crack networks. The Gilbert tessellation is a mathematical model for the formation of mudcracks, needle-like crystals, and similar structures. The model, named after Edgar Gilbert, allows cracks to form starting from points randomly scattered over the plane; each crack propagates in two opposite directions along a line through the initiation point, its slope chosen at random, creating a tessellation of irregular convex polygons. Basaltic lava flows often display columnar jointing as a result of contraction forces causing cracks as the lava cools. The extensive crack networks that develop often produce hexagonal columns of lava. 
One example of such an array of columns is the Giant's Causeway in Northern Ireland. Tessellated pavement, a characteristic example of which is found at Eaglehawk Neck on the Tasman Peninsula of Tasmania, is a rare sedimentary rock formation where the rock has fractured into rectangular blocks. Other natural patterns occur in foams; these are packed according to Plateau's laws, which require minimal surfaces. Such foams present a problem in how to pack cells as tightly as possible: in 1887, Lord Kelvin proposed a packing using only one solid, the bitruncated cubic honeycomb with very slightly curved faces. In 1993, Denis Weaire and Robert Phelan proposed the Weaire–Phelan structure, which uses less surface area to separate cells of equal volume than Kelvin's foam. In puzzles and recreational mathematics Tessellations have given rise to many types of tiling puzzle, from traditional jigsaw puzzles (with irregular pieces of wood or cardboard) and the tangram, to more modern puzzles that often have a mathematical basis. For example, polyiamonds and polyominoes are figures of regular triangles and squares, often used in tiling puzzles. Authors such as Henry Dudeney and Martin Gardner have made many uses of tessellation in recreational mathematics. For example, Dudeney invented the hinged dissection, while Gardner wrote about the "rep-tile", a shape that can be dissected into smaller copies of the same shape. Inspired by Gardner's articles in Scientific American, the amateur mathematician Marjorie Rice found four new tessellations with pentagons. Squaring the square is the problem of tiling an integral square (one whose sides have integer length) using only other integral squares. An extension is squaring the plane, tiling it by squares whose sizes are all natural numbers without repetitions; James and Frederick Henle proved that this was possible. Examples See also Discrete global grid Honeycomb (geometry) Space partitioning Explanatory footnotes References Sources External links Tegula (open-source software for exploring two-dimensional tilings of the plane, sphere and hyperbolic plane; includes databases containing millions of tilings) Wolfram MathWorld: Tessellation (good bibliography, drawings of regular, semiregular and demiregular tessellations) Dirk Frettlöh and Edmund Harriss. "Tilings Encyclopedia" (extensive information on substitution tilings, including drawings, people, and references) Tessellations.org (how-to guides, Escher tessellation gallery, galleries of tessellations by other artists, lesson plans, history) (list of web resources including articles and galleries) Mosaic Symmetry
Tessellation
[ "Physics", "Mathematics" ]
5,219
[ "Tessellation", "Euclidean plane geometry", "Geometry", "Planes (geometry)", "Symmetry" ]
321,801
https://en.wikipedia.org/wiki/Multiply%20perfect%20number
In mathematics, a multiply perfect number (also called multiperfect number or pluperfect number) is a generalization of a perfect number. For a given natural number k, a number n is called k-perfect (or k-fold perfect) if the sum of all positive divisors of n (the divisor function, σ(n)) is equal to kn; a number is thus perfect if and only if it is 2-perfect. A number that is k-perfect for a certain k is called a multiply perfect number. As of 2014, k-perfect numbers are known for each value of k up to 11. It is unknown whether there are any odd multiply perfect numbers other than 1. The first few multiply perfect numbers are: 1, 6, 28, 120, 496, 672, 8128, 30240, 32760, 523776, 2178540, 23569920, 33550336, 45532800, 142990848, 459818240, ... . Example The sum of the divisors of 120 is 1 + 2 + 3 + 4 + 5 + 6 + 8 + 10 + 12 + 15 + 20 + 24 + 30 + 40 + 60 + 120 = 360 which is 3 × 120. Therefore 120 is a 3-perfect number. Smallest known k-perfect numbers The following table gives an overview of the smallest known k-perfect numbers for k ≤ 11 : Properties It can be proven that: For a given prime number p, if n is p-perfect and p does not divide n, then pn is (p + 1)-perfect. This implies that an integer n is a 3-perfect number divisible by 2 but not by 4, if and only if n/2 is an odd perfect number, of which none are known. If 3n is 4k-perfect and 3 does not divide n, then n is 3k-perfect. Odd multiply perfect numbers It is unknown whether there are any odd multiply perfect numbers other than 1. However if an odd k-perfect number n exists where k > 2, then it must satisfy the following conditions: The largest prime factor is ≥ 100129 The second largest prime factor is ≥ 1009 The third largest prime factor is ≥ 101 Bounds In little-o notation, the number of multiply perfect numbers less than x is o(x^ε) for all ε > 0. The number of k-perfect numbers n for n ≤ x is bounded above by an expression involving two constants c and c′ that are independent of k. Under the assumption of the Riemann hypothesis, an inequality involving Euler's gamma constant γ holds for all k-perfect numbers n with k > 3; it can be proven using Robin's theorem. The number of divisors τ(n) and the number of distinct prime factors ω(n) of a k-perfect number n also satisfy known lower bounds, as does an expression in the distinct prime factors of n. Specific values of k Perfect numbers A number n with σ(n) = 2n is perfect. Triperfect numbers A number n with σ(n) = 3n is triperfect. There are only six known triperfect numbers and these are believed to comprise all such numbers: 120, 672, 523776, 459818240, 1476304896, 51001180160 If there exists an odd perfect number m (a famous open problem) then 2m would be 3-perfect, since σ(2m) = σ(2)σ(m) = 3×2m. An odd triperfect number must be a square number exceeding 10^70 and have at least 12 distinct prime factors, the largest exceeding 10^5. Variations Unitary multiply perfect numbers A similar extension can be made for unitary perfect numbers. A positive integer n is called a unitary multi k-perfect number if σ*(n) = kn where σ*(n) is the sum of its unitary divisors. (A divisor d of a number n is a unitary divisor if d and n/d share no common factors.) A unitary multiply perfect number is simply a unitary multi k-perfect number for some positive integer k. Equivalently, unitary multiply perfect numbers are those n for which n divides σ*(n). A unitary multi 2-perfect number is naturally called a unitary perfect number. In the case k > 2, no example of a unitary multi k-perfect number is yet known. It is known that if such a number exists, it must be even and greater than 10^102 and must have more than forty-four odd prime factors. This problem is probably very difficult to settle. 
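As a concrete illustration of the definitions above, whether a given n is k-perfect can be checked directly by summing its divisors. The sketch below is a minimal Python helper using plain trial division (so it is only practical for small n); the unitary divisor sum σ*(n) from the unitary variation above is included as well, and the asserted values correspond to numbers listed in this article.

```python
from math import gcd, isqrt

def sigma(n):
    """Sum of all positive divisors of n (the divisor function)."""
    total = 0
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

def unitary_sigma(n):
    """Sum of the unitary divisors d of n, i.e. those with gcd(d, n/d) = 1."""
    return sum(d for d in range(1, n + 1) if n % d == 0 and gcd(d, n // d) == 1)

def perfection(n):
    """Return k if n is k-perfect (sigma(n) == k*n), otherwise None."""
    s = sigma(n)
    return s // n if s % n == 0 else None

assert sigma(120) == 360 and perfection(120) == 3     # 120 is 3-perfect
assert perfection(6) == 2 and perfection(28) == 2     # ordinary perfect numbers
assert perfection(30240) == 4                         # 30240 is 4-perfect
assert unitary_sigma(6) == 2 * 6 and unitary_sigma(60) == 2 * 60  # unitary perfect numbers
```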
The concept of unitary divisor was originally due to R. Vaidyanathaswamy (1931), who called such a divisor a block factor. The present terminology is due to E. Cohen (1960). The first few unitary multiply perfect numbers are: 1, 6, 60, 90, 87360 Bi-unitary multiply perfect numbers A positive integer n is called a bi-unitary multi k-perfect number if σ**(n) = kn where σ**(n) is the sum of its bi-unitary divisors. This concept is due to Peter Hagis (1987). A bi-unitary multiply perfect number is simply a bi-unitary multi k-perfect number for some positive integer k. Equivalently, bi-unitary multiply perfect numbers are those n for which n divides σ**(n). A bi-unitary multi 2-perfect number is naturally called a bi-unitary perfect number, and a bi-unitary multi 3-perfect number is called a bi-unitary triperfect number. A divisor d of a positive integer n is called a bi-unitary divisor of n if the greatest common unitary divisor (gcud) of d and n/d equals 1. This concept is due to D. Surynarayana (1972). The sum of the (positive) bi-unitary divisors of n is denoted by σ**(n). Peter Hagis (1987) proved that there are no odd bi-unitary multiperfect numbers other than 1. Haukkanen and Sitaramaiah (2020) found all bi-unitary triperfect numbers of the form 2^a·u where 1 ≤ a ≤ 6 and u is odd, and partially the case where a = 7. Further, they completely settled the case a = 8. The first few bi-unitary multiply perfect numbers are: 1, 6, 60, 90, 120, 672, 2160, 10080, 22848, 30240 References Sources See also Hemiperfect number External links The Multiply Perfect Numbers page The Prime Glossary: Multiply perfect numbers Arithmetic dynamics Divisor function Perfect numbers
Multiply perfect number
[ "Mathematics" ]
1,370
[ "Recreational mathematics", "Perfect numbers", "Arithmetic dynamics", "Number theory", "Dynamical systems" ]
321,827
https://en.wikipedia.org/wiki/Thermosetting%20polymer
In materials science, a thermosetting polymer, often called a thermoset, is a polymer that is obtained by irreversibly hardening ("curing") a soft solid or viscous liquid prepolymer (resin). Curing is induced by heat or suitable radiation and may be promoted by high pressure or mixing with a catalyst. Heat is not necessarily applied externally, and is often generated by the reaction of the resin with a curing agent (catalyst, hardener). Curing results in chemical reactions that create extensive cross-linking between polymer chains to produce an infusible and insoluble polymer network. The starting material for making thermosets is usually malleable or liquid prior to curing, and is often designed to be molded into the final shape. It may also be used as an adhesive. Once hardened, a thermoset cannot be melted for reshaping, in contrast to thermoplastic polymers which are commonly produced and distributed in the form of pellets, and shaped into the final product form by melting, pressing, or injection molding. Chemical process Curing a thermosetting resin transforms it into a plastic, or elastomer (rubber) by crosslinking or chain extension through the formation of covalent bonds between individual chains of the polymer. Crosslink density varies depending on the monomer or prepolymer mix, and the mechanism of crosslinking: Acrylic resins, polyesters and vinyl esters with unsaturated sites at the ends or on the backbone are generally linked by copolymerisation with unsaturated monomer diluents, with cure initiated by free radicals generated from ionizing radiation or by the photolytic or thermal decomposition of a radical initiator – the intensity of crosslinking is influenced by the degree of backbone unsaturation in the prepolymer; Epoxy functional resins can be homo-polymerized with anionic or cationic catalysts and heat, or copolymerised through nucleophilic addition reactions with multifunctional crosslinking agents which are also known as curing agents or hardeners. As reaction proceeds, larger and larger molecules are formed and highly branched crosslinked structures develop, the rate of cure being influenced by the physical form and functionality of epoxy resins and curing agents – elevated temperature postcuring induces secondary crosslinking of backbone hydroxyl functionality which condense to form ether bonds; Polyurethanes form when isocyanate resins and prepolymers are combined with low- or high-molecular weight polyols, with strict stoichiometric ratios being essential to control nucleophilic addition polymerisation – the degree of crosslinking and resulting physical type (elastomer or plastic) is adjusted from the molecular weight and functionality of isocyanate resins, prepolymers, and the exact combinations of diols, triols and polyols selected, with the rate of reaction being strongly influenced by catalysts and inhibitors; polyureas form virtually instantaneously when isocyanate resins are combined with long-chain amine functional polyether or polyester resins and short-chain diamine extenders – the amine-isocyanate nucleophilic addition reaction does not require catalysts. 
Polyureas also form when isocyanate resins come into contact with moisture; Phenolic, amino, and furan resins are all cured by polycondensation involving the release of water and heat, with cure initiation and polymerisation exotherm control influenced by curing temperature, catalyst selection or loading and processing method or pressure – the degree of pre-polymerisation and level of residual hydroxymethyl content in the resins determine the crosslink density. Polybenzoxazines are cured by an exothermal ring-opening polymerisation without releasing any chemical, which translates into near-zero shrinkage upon polymerisation. Thermosetting polymer mixtures based on thermosetting resin monomers and pre-polymers can be formulated and applied and processed in a variety of ways to create distinctive cured properties that cannot be achieved with thermoplastic polymers or inorganic materials. Properties Thermosetting plastics are generally stronger than thermoplastic materials due to the three-dimensional network of bonds (crosslinking), and are also better suited to high-temperature applications up to the decomposition temperature since they keep their shape as strong covalent bonds between polymer chains cannot be broken easily. The higher the crosslink density and aromatic content of a thermoset polymer, the higher the resistance to heat degradation and chemical attack. Mechanical strength and hardness also improve with crosslink density, although at the expense of brittleness. They normally decompose before melting. Hard, plastic thermosets may undergo permanent or plastic deformation under load, whereas elastomers, which are soft and springy or rubbery, can be deformed and revert to their original shape when the load is released. Conventional thermoset plastics or elastomers cannot be melted and re-shaped after they are cured. This usually prevents recycling for the same purpose, except as filler material. New developments involving thermoset epoxy resins, which on controlled and contained heating form crosslinked networks, permit repeated reshaping, like silica glass, by reversible covalent bond exchange reactions on reheating above the glass transition temperature. There are also thermoset polyurethanes shown to have transient properties and which can thus be reprocessed or recycled. Fiber-reinforced materials When compounded with fibers, thermosetting resins form fiber-reinforced polymer composites, which are used in the fabrication of factory-finished structural composite OEM or replacement parts, and as site-applied, cured and finished composite repair and protection materials. When used as the binder for aggregates and other solid fillers, they form particulate-reinforced polymer composites, which are used for factory-applied protective coating or component manufacture, and for site-applied and cured construction, or maintenance purposes. Materials Epoxy resin used as the matrix component in many fiber reinforced plastics such as glass-reinforced plastic and graphite-reinforced plastic; casting; electronics encapsulation; construction; protective coatings; adhesives; sealing and joining. Polyimides and Bismaleimides used in printed circuit boards and in body parts of modern aircraft, aerospace composite structures, as a coating material and for glass reinforced pipes. Cyanate esters or polycyanurates for electronics applications with need for dielectric properties and high glass temperature requirements in aerospace structural composite components. 
Polyester resin fiberglass systems: sheet molding compounds and bulk molding compounds; filament winding; wet lay-up lamination; repair compounds and protective coatings. Polyurethanes: insulating foams, mattresses, coatings, adhesives, car parts, print rollers, shoe soles, flooring, synthetic fibers, etc. Polyurethane polymers are formed by combining two bi- or higher functional monomers/oligomers. Polyurea/polyurethane hybrids used for abrasion resistant waterproofing coatings. Vulcanized rubber. Bakelite, a phenol-formaldehyde resin used in electrical insulators and plasticware. Duroplast, light but strong material, similar to Bakelite formerly used in the manufacture of the Trabant automobile, currently used for household objects Urea-formaldehyde foam used in plywood, particleboard and medium-density fibreboard. Melamine resin used on worktop surfaces and some plastic dishes. Diallyl-phthalate (DAP) used in high temperature and mil-spec electrical connectors and other components. Usually glass filled. Epoxy novolac resins used for printed circuit boards, electrical encapsulation, adhesives and coatings for metal. Benzoxazines, used alone or hybridised with epoxy and phenolic resins, for structural prepregs, liquid molding and film adhesives for composite construction, bonding and repair. Mold or mold runners (the black plastic part in integrated circuits or semiconductors). Furan resins used in the manufacture of sustainable biocomposite construction, cements, adhesives, coatings and casting/foundry resins. Silicone resins used for thermoset polymer matrix composites and as ceramic matrix composite precursors. Thiolyte, an electrical insulating thermoset phenolic laminate material. Vinyl ester resins used for wet lay-up laminating, molding and fast setting industrial protection and repair materials. Applications Application/process uses and methods for thermosets include protective coating, seamless flooring, civil engineering construction grouts for jointing and injection, mortars, foundry sands, adhesives, sealants, castings, potting, electrical insulation, encapsulation, solid foams, wet lay-up laminating, pultrusion, gelcoats, filament winding, pre-pregs, and molding. Specific methods of molding thermosets are: Reactive injection moulding (used for objects such as milk bottle crates) Extrusion molding (used for making pipes, threads of fabric and insulation for electrical cables) Compression molding (used to shape SMC and BMC thermosetting plastics) Spin casting (used for producing fishing lures and jigs, gaming miniatures, figurines, emblems as well as production and replacement parts) See also Fusion bonded epoxy coating Thermoset polymer matrix Vulcanization References Polymer chemistry
Thermosetting polymer
[ "Chemistry", "Materials_science", "Engineering" ]
2,030
[ "Materials science", "Polymer chemistry" ]
321,869
https://en.wikipedia.org/wiki/Coding%20theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding: Data compression (or source coding) Error control (or channel coding) Cryptographic coding Line coding Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller, for purposes such as to reduce Internet traffic. Data compression and error correction may be studied in combination. Error correction adds useful redundancy to the data from a source to make the transmission more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes. History of coding theory In 1948, Claude Shannon published "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory. The binary Golay code was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word, and detecting a fourth. Richard Hamming won the Turing Award in 1968 for his work at Bell Labs in numerical methods, automatic coding systems, and error-detecting and error-correcting codes. He invented the concepts known as Hamming codes, Hamming windows, Hamming numbers, and Hamming distance. In 1972, Nasir Ahmed proposed the discrete cosine transform (DCT), which he developed with T. Natarajan and K. R. Rao in 1973. The DCT is the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3. Source coding The aim of source coding is to take the source data and make it smaller. Definition Data can be seen as a random variable X, where each value x occurs with probability P[X = x]. Data are encoded by strings (words) over an alphabet Σ. A code is a function C from the set of source values to Σ* (or to Σ+ if the empty string is not allowed as a code word). C(x) is the code word associated with x. The length of the code word is written as l(C(x)). The expected length of a code is the sum over x of l(C(x)) P[X = x]. Concatenation extends the code to strings of source values: C(x1x2…xk) = C(x1)C(x2)…C(xk). The code word of the empty string is the empty string itself: C(ε) = ε. Properties C is non-singular if it is injective. C is uniquely decodable if its extension to strings of source values is injective. C is instantaneous (a prefix code) if C(x1) is never a proper prefix of C(x2) (and vice versa). 
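To make the definitions above concrete, the short sketch below builds a hypothetical four-symbol source and binary code (both invented for illustration, not taken from the article), computes the expected code length, checks the instantaneous (prefix) property, and compares the result with the source entropy. Only the Python standard library is assumed.

```python
from math import log2

p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # assumed source distribution
C = {"a": "0", "b": "10", "c": "110", "d": "111"}   # an example binary code

expected_length = sum(len(C[x]) * p[x] for x in p)  # sum of l(C(x)) * P[X = x]
entropy = -sum(px * log2(px) for px in p.values())

prefix_free = all(
    not C[x].startswith(C[y])
    for x in C for y in C if x != y
)

print(expected_length)  # 1.75 bits per symbol
print(entropy)          # 1.75 bits; this code meets the entropy limit exactly
print(prefix_free)      # True: the code is instantaneous, hence uniquely decodable
```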
Principle Entropy of a source is the measure of information. Basically, source codes try to reduce the redundancy present in the source, and represent the source with fewer bits that carry more information. Data compression which explicitly tries to minimize the average length of messages according to a particular assumed probability model is called entropy encoding. Various techniques used by source coding schemes try to achieve the limit of entropy of the source. The bitrate after compression cannot be lower than the entropy of the source: R(x) ≥ H(x), where H(x) is the entropy of the source (its bitrate) and R(x) is the bitrate after compression. In particular, no source coding scheme can be better than the entropy of the source. Example Facsimile transmission uses a simple run length code. Source coding removes all data superfluous to the need of the transmitter, decreasing the bandwidth required for transmission. Channel coding The purpose of channel coding theory is to find codes which transmit quickly, contain many valid code words and can correct or at least detect many errors. While not mutually exclusive, performance in these areas is a trade-off. So, different codes are optimal for different applications. The needed properties of this code mainly depend on the probability of errors happening during transmission. In a typical CD, the impairment is mainly dust or scratches. CDs use cross-interleaved Reed–Solomon coding to spread the data out over the disk. Although not a very good code, a simple repeat code can serve as an understandable example. Suppose we take a block of data bits (representing sound) and send it three times. At the receiver we will examine the three repetitions bit by bit and take a majority vote. The twist on this is that we do not merely send the bits in order. We interleave them. The block of data bits is first divided into 4 smaller blocks. Then we cycle through the block and send one bit from the first, then the second, etc. This is done three times to spread the data out over the surface of the disk. In the context of the simple repeat code, this may not appear effective. However, there are more powerful codes known which are very effective at correcting the "burst" error of a scratch or a dust spot when this interleaving technique is used. Other codes are more appropriate for different applications. Deep space communications are limited by the thermal noise of the receiver which is more of a continuous nature than a bursty nature. Likewise, narrowband modems are limited by the noise present in the telephone network, which is also modeled better as a continuous disturbance. Cell phones are subject to rapid fading. The high frequencies used can cause rapid fading of the signal even if the receiver is moved a few inches. Again, there is a class of channel codes designed to combat fading. Linear codes The term algebraic coding theory denotes the sub-field of coding theory where the properties of codes are expressed in algebraic terms and then further researched. Algebraic coding theory is basically divided into two major types of codes: Linear block codes Convolutional codes It analyzes the following three properties of a code – mainly: Code word length Total number of valid code words The minimum distance between two valid code words, using mainly the Hamming distance, sometimes also other distances like the Lee distance 
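The repeat-and-interleave scheme described in the channel coding discussion above, and the minimum-distance property just listed, can both be illustrated with a toy rate-1/3 repetition code. This is a sketch for intuition only (real systems use far stronger codes such as Reed–Solomon): the block is sent three times, the copies are interleaved so a short burst hits different data bits, and the receiver takes a per-bit majority vote.

```python
def encode(bits, repeats=3):
    # Send the whole block `repeats` times; copy r of bit i ends up at
    # position r * len(bits) + i, so the copies of one bit are spread apart.
    return [b for _ in range(repeats) for b in bits]

def decode(received, n_bits, repeats=3):
    decoded = []
    for i in range(n_bits):
        copies = [received[r * n_bits + i] for r in range(repeats)]
        decoded.append(1 if sum(copies) > repeats // 2 else 0)  # majority vote
    return decoded

data = [1, 0, 1, 1]
tx = encode(data)            # 12 channel bits
rx = list(tx)
rx[0] ^= 1                   # a two-bit "burst" at the start of the block...
rx[1] ^= 1                   # ...corrupts one copy each of two different data bits
assert decode(rx, len(data)) == data   # the majority vote still recovers the data

# The minimum Hamming distance of this code is 3 (compare the codewords for a
# single 0 and a single 1), so one error per data bit is always correctable.
assert sum(a != b for a, b in zip(encode([0]), encode([1]))) == 3
```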
Linear block codes Linear block codes have the property of linearity, i.e. the sum of any two codewords is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property. Linear block codes are summarized by their symbol alphabets (e.g., binary or ternary) and parameters (n, m, d_min) where n is the length of the codeword, in symbols, m is the number of source symbols that will be used for encoding at once, and d_min is the minimum Hamming distance for the code. There are many types of linear block codes, such as Cyclic codes (e.g., Hamming codes) Repetition codes Parity codes Polynomial codes (e.g., BCH codes) Reed–Solomon codes Algebraic geometric codes Reed–Muller codes Perfect codes Locally recoverable code Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful (24,12) Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is) the dimensions refer to the length of the codeword as defined above. The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called "perfect" codes. The only nontrivial and useful perfect codes are the distance-3 Hamming codes with parameters satisfying (2^r − 1, 2^r − 1 − r, 3), and the [23,12,7] binary and [11,6,5] ternary Golay codes. Another code property is the number of neighbors that a single codeword may have. Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly. The result is the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers. Properties of linear block codes are used in many applications. For example, the syndrome-coset uniqueness property of linear block codes is used in trellis shaping, one of the best-known shaping codes. Convolutional codes The idea behind a convolutional code is to make every codeword symbol be the weighted sum of the various input message symbols. This is like convolution used in LTI systems to find the output of a system, when you know the input and impulse response. So we generally find the output of the convolutional encoder as the convolution of the input bits with the states of the encoder's registers. Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. 
In many cases, they offer greater simplicity of implementation over a block code of equal power. The encoder is usually a simple circuit which has state memory and some feedback logic, normally XOR gates. The decoder can be implemented in software or firmware. The Viterbi algorithm is the optimum algorithm used to decode convolutional codes. There are simplifications to reduce the computational load. They rely on searching only the most likely paths. Although not optimum, they have generally been found to give good results in low noise environments. Convolutional codes are used in voiceband modems (V.32, V.17, V.34) and in GSM mobile phones, as well as satellite and military communication devices. Cryptographic coding Cryptography or cryptographic coding is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that block adversaries; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same. Since World War I and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and its application more widespread. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms. Line coding A line code (also called digital baseband modulation or digital baseband transmission method) is a code chosen for use within a communications system for baseband transmission purposes. Line coding is often used for digital data transport. Line coding consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). The waveform pattern of voltage or current used to represent the 1s and 0s of digital data on a transmission link is called line encoding. The common types of line encoding are unipolar, polar, bipolar, and Manchester encoding. 
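As a small illustration of the line codes just listed, the sketch below implements Manchester encoding, in which every bit period contains a mid-bit transition that the receiver can use for clock recovery. The mapping used here (1 as a low-to-high transition, 0 as high-to-low) is one of the two conventions in use; some systems adopt the opposite polarity.

```python
def manchester_encode(bits):
    # Each bit becomes two half-bit signal levels: 0 -> high then low,
    # 1 -> low then high, so every bit period contains a transition.
    mapping = {0: (1, 0), 1: (0, 1)}
    return [level for b in bits for level in mapping[b]]

def manchester_decode(levels):
    pairs = zip(levels[0::2], levels[1::2])
    return [0 if pair == (1, 0) else 1 for pair in pairs]

data = [1, 0, 1, 1, 0]
signal = manchester_encode(data)
assert manchester_decode(signal) == data
# Every bit period has a transition between its two halves:
assert all(signal[i] != signal[i + 1] for i in range(0, len(signal), 2))
```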
Other applications of coding theory Another concern of coding theory is designing codes that help synchronization. A code may be designed so that a phase shift can be easily detected and corrected and that multiple signals can be sent on the same channel. Another application of codes, used in some mobile phone systems, is code-division multiple access (CDMA). Each phone is assigned a code sequence that is approximately uncorrelated with the codes of other phones. When transmitting, the code word is used to modulate the data bits representing the voice message. At the receiver, a demodulation process is performed to recover the data. The properties of this class of codes allow many users (with different codes) to use the same radio channel at the same time. To the receiver, the signals of other users will appear to the demodulator only as low-level noise. Another general class of codes is the automatic repeat-request (ARQ) codes. In these codes the sender adds redundancy to each message for error checking, usually by adding check bits. If the check bits are not consistent with the rest of the message when it arrives, the receiver will ask the sender to retransmit the message. All but the simplest wide area network protocols use ARQ. Common protocols include SDLC (IBM), TCP (Internet), X.25 (International) and many others. There is an extensive field of research on this topic because of the problem of matching a rejected packet against a new packet. Is it a new one or is it a retransmission? Typically numbering schemes are used, as in TCP. Group testing Group testing uses codes in a different way. Consider a large group of items in which a very few are different in a particular way (e.g., defective products or infected test subjects). The idea of group testing is to determine which items are "different" by using as few tests as possible. The origin of the problem has its roots in the Second World War when the United States Army Air Forces needed to test its soldiers for syphilis. Analog coding Information is encoded analogously in the neural networks of brains, in analog signal processing, and analog electronics. Aspects of analog coding include analog error correction, analog data compression and analog encryption. Neural coding Neural coding is a neuroscience-related field concerned with how sensory and other information is represented in the brain by networks of neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble. It is thought that neurons can encode both digital and analog information, and that neurons follow the principles of information theory and compress information, and detect and correct errors in the signals that are sent throughout the brain and wider nervous system. See also Coding gain Covering code Error correction code Folded Reed–Solomon code Group testing Hamming distance, Hamming weight Lee distance List of algebraic coding theory topics Spatial coding and MIMO in multiple antenna research Spatial diversity coding is spatial coding that transmits replicas of the information signal along different spatial paths, so as to increase the reliability of the data transmission. Spatial interference cancellation coding Spatial multiplex coding Timeline of information theory, data compression, and error correcting codes Notes References Elwyn R. Berlekamp (2014), Algebraic Coding Theory, World Scientific Publishing (revised edition), . MacKay, David J. C. 
Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. Vera Pless (1982), Introduction to the Theory of Error-Correcting Codes, John Wiley & Sons, Inc., . Randy Yates, A Coding Theory Tutorial. Error detection and correction
Coding theory
[ "Mathematics", "Engineering" ]
3,600
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
321,921
https://en.wikipedia.org/wiki/Osborne%20Reynolds
Osborne Reynolds (23 August 1842 – 21 February 1912) was an Irish-born British innovator in the understanding of fluid dynamics. Separately, his studies of heat transfer between solids and fluids brought improvements in boiler and condenser design. He spent his entire career at what is now the University of Manchester. Life Osborne Reynolds was born in Belfast and moved with his parents soon afterward to Dedham, Essex. His father, Reverend Osborne Reynolds, was a Fellow of Queens' College, Cambridge who worked as a school headmaster and clergyman, but was also a very able mathematician with a keen interest in mechanics. The father took out a number of patents for improvements to agricultural equipment, and the son credits him with being his chief teacher as a boy. Reynolds showed an early aptitude and liking for the study of mechanics. In his late teens, for the year before entering university, he went to work as an apprentice at the workshop of Edward Hayes, a well known shipbuilder in Stony Stratford, where he obtained practical experience in the manufacture and fitting out of coastal steamers (and thus gained an early appreciation of the practical value of understanding fluid dynamics). Osborne Reynolds attended Queens' College, Cambridge and graduated in 1867 as the seventh wrangler in mathematics. He had chosen to study mathematics at Cambridge because, in his own words in his 1868 application for the professorship, "From my earliest recollection I have had an irresistible liking for mechanics and the physical laws on which mechanics as a science is based.... my attention drawn to various mechanical phenomena, for the explanation of which I discovered that a knowledge of mathematics was essential." For the year immediately following his graduation from Cambridge he again took up a post with a civil engineering firm, Lawson and Mansergh of London, as a practising civil engineer working with the London (Croydon) sewage transport system. In 1868 he was appointed to the newly instituted Chair of Civil and Mechanical Engineering at Owens College in Manchester (now the University of Manchester), becoming in that year one of the first professors in UK university history to hold the title of "Professor of Engineering". This professorship had been newly created and financed by a group of manufacturing industrialists in the Manchester area, and they also had a leading role in selecting the 25–year–old Reynolds to fill the position. Reynolds remained at Owens College for the rest of his career – in 1880 the college became a constituent college of the newly founded Victoria University. Reynolds was elected a Fellow of the Royal Society in 1877 and awarded the Royal Medal in 1888. He retired in 1905 and died of influenza 21 February 1912 at Watchet in Somerset. He was buried at the Church of St Decuman, Watchet. Fluid mechanics Reynolds most famously studied the conditions in which the flow of fluid in pipes transitioned from laminar flow to turbulent flow. In 1883 Reynolds demonstrated the transition to turbulent flow in a classic experiment in which he examined the behaviour of water flow under different flow rates using a small jet of dyed water introduced into the centre of flow in a larger pipe. The larger pipe was glass so the behaviour of the layer of dyed flow could be observed, and at the end of this pipe there was a flow control valve used to vary the water velocity inside the tube. When the velocity was low, the dyed layer remained distinct through the entire length of the large tube. 
When the velocity was increased, the layer broke up at a given point and diffused throughout the fluid's cross-section. The point at which this happened was the transition point from laminar to turbulent flow. From these experiments came the dimensionless Reynolds number for dynamic similarity—the ratio of inertial forces to viscous forces. Reynolds also proposed what is now known as Reynolds-averaging of turbulent flows, where quantities such as velocity are expressed as the sum of mean and fluctuating components. Such averaging allows for 'bulk' description of turbulent flow, for example using the Reynolds-averaged Navier–Stokes equations. Reynolds' contributions to fluid mechanics were not lost on ship designers ("naval architects"). The ability to make a small scale model of a ship, and extract useful predictive data with respect to a full size ship, depends directly on the experimentalist applying Reynolds' turbulence principles to friction drag computations, along with a proper application of William Froude's theories of gravity wave energy and propagation. Reynolds himself had a number of papers concerning ship design published in Transactions of the Institution of Naval Architects. Publications His publications in fluid dynamics began in the early 1870s. His final theoretical model published in the mid-1890s is still the standard mathematical framework used today. Examples of titles from his more groundbreaking reports: Other work Reynolds published about seventy science and engineering research reports. When towards the end of his career these were republished as a collection they filled three volumes. For a catalogue and short summaries of them see the External links. Areas covered besides fluid dynamics included thermodynamics, kinetic theory of gases, condensation of steam, screw-propeller-type ship propulsion, turbine-type ship propulsion, hydraulic brakes, hydrodynamic lubrication, and laboratory apparatus for better measurement of Joule's mechanical equivalent of heat. For his work on lubrication, he was named by Duncan Dowson as one of the 23 "Men of Tribology". One of the subjects that Reynolds studied in the 1880s was the properties of granular materials, including dilatant materials. In 1903 appeared his 250-page book The Sub-Mechanics of the Universe, in which he tried to generalise the mechanics of granular materials to be "capable of accounting for all the physical evidence, as we know it, in the Universe". His aim seems to have been to construct a theory of aether, which he considered to be in a liquid state. The ideas were extremely difficult to understand or evaluate, and in any case were overtaken by other developments in physics around the same time. See also References Further reading External links Reynolds the Engineer Reynolds the Scientist 1842 births 1912 deaths Academics of the Victoria University of Manchester Alumni of Queens' College, Cambridge British civil engineers British physicists Fellows of Queens' College, Cambridge Fellows of the Royal Society Fluid dynamicists Geotechnical engineers Engineers from Belfast People from Dedham, Essex Royal Medal winners Tribologists Manchester Literary and Philosophical Society Recipients of the Dalton Medal
Osborne Reynolds
[ "Chemistry", "Materials_science" ]
1,294
[ "Tribology", "Fluid dynamicists", "Tribologists", "Fluid dynamics" ]
322,376
https://en.wikipedia.org/wiki/Ecological%20succession
Ecological succession is the process of change in the species that make up an ecological community over time. The process of succession occurs either after the initial colonization of a newly created habitat, or after a disturbance substantially alters a pre-existing habitat. Succession that begins in new habitats, uninfluenced by pre-existing communities, is called primary succession, whereas succession that follows disruption of a pre-existing community is called secondary succession. Primary succession may happen after a lava flow or the emergence of a new island from the ocean. Surtsey, a volcanic island off the southern coast of Iceland, is an important example of a place where primary succession has been observed. On the other hand, secondary succession happens after disturbance of a community, such as from a fire, severe windthrow, or logging. Succession was among the first theories advanced in ecology. Ecological succession was first documented in the Indiana Dunes of Northwest Indiana and remains an important ecological topic of study. Over time, the understanding of succession has changed from a linear progression to a stable climax state, to a more complex, cyclical model that de-emphasizes the idea of organisms having fixed roles or relationships. History Precursors of the idea of ecological succession go back to the beginning of the 19th century. As early as 1742 French naturalist Buffon noted that poplars precede oaks and beeches in the natural evolution of a forest. Buffon was later forced by the theological committee at the University of Paris to recant many of his ideas because they contradicted the biblical narrative of Creation. Swiss geologist Jean-André Deluc and the later French naturalist Adolphe Dureau de la Malle were the first to make use of the word succession concerning the vegetation development after forest clear-cutting. In 1859 Henry David Thoreau wrote an address called "The Succession of Forest Trees" in which he described succession in an oak-pine forest. "It has long been known to observers that squirrels bury nuts in the ground, but I am not aware that any one has thus accounted for the regular succession of forests." The Austrian botanist Anton Kerner published a study about the succession of plants in the Danube river basin in 1863. Ragnar Hult's 1885 study on the stages of forest development in Blekinge noted that grassland becomes heath before the heath develops into forest. Birch dominated the early stages of forest development, then pine (on dry soil) and spruce (on wet soil). If the birch is replaced by oak it eventually develops to beechwood. Swamps proceed from moss to sedges to moor vegetation followed by birch and finally spruce. H. C. Cowles Between 1899 and 1910, Henry Chandler Cowles, at the University of Chicago, developed a more formal concept of succession. Inspired by studies of Danish dunes by Eugen Warming, Cowles studied vegetation development on sand dunes on the shores of Lake Michigan (the Indiana Dunes). He recognized that vegetation on dunes of different ages might be interpreted as different stages of a general trend of vegetation development on dunes (an approach to the study of vegetation change later termed space-for-time substitution, or chronosequence studies). He first published this work as a paper in the Botanical Gazette in 1899 ("The ecological relations of the vegetation of the sand dunes of Lake Michigan"). 
In this classic publication and subsequent papers, he formulated the idea of primary succession and the notion of a sere—a repeatable sequence of community changes specific to particular environmental circumstances. Gleason and Clements From about 1900 to 1960, however, understanding of succession was dominated by the theories of Frederic Clements, a contemporary of Cowles, who held that seres were highly predictable and deterministic and converged on a climatically determined stable climax community regardless of starting conditions. Clements explicitly analogized the successional development of ecological communities with ontogenetic development of individual organisms, and his model is often referred to as the pseudo-organismic theory of community ecology. Clements and his followers developed a complex taxonomy of communities and successional pathways. Henry Gleason offered a contrasting framework as early as the 1920s. The Gleasonian model was more complex and much less deterministic than the Clementsian. It differs most fundamentally from the Clementsian view in suggesting a much greater role of chance factors and in denying the existence of coherent, sharply bounded community types. Gleason argued that species distributions responded individualistically to environmental factors, and communities were best regarded as artifacts of the juxtaposition of species distributions. Gleason's ideas, first published in 1926, were largely ignored until the late 1950s. Two quotes illustrate the contrasting views of Clements and Gleason. Clements wrote in 1916: while Gleason, in his 1926 paper, said: Gleason's ideas were, in fact, more consistent with Cowles' original thinking about succession. About Clements' distinction between primary succession and secondary succession, Cowles wrote (1911): Eugene Odum In 1969, Eugene Odum published The Strategy of Ecosystem Development, a paper that was highly influential to conservation and environmental restoration. Odum argued that ecological succession was an orderly progression toward a climax state where “maximum biomass and symbiotic function between organisms are maintained per unit energy flow." Odum highlighted how succession was not merely a change in the species composition of an ecosystem, but also created change in more complex attributes of the ecosystem, such as structure and nutrient cycling. Modern era A more rigorous, data-driven testing of successional models and community theory generally began with the work of Robert Whittaker and John Curtis in the 1950s and 1960s. Succession theory has since become less monolithic and more complex. J. Connell and R. Slatyer attempted a codification of successional processes by mechanism. Among British and North American ecologists, the notion of a stable climax vegetation has been largely abandoned, and successional processes have come to be seen as much less deterministic, with important roles for historical contingency and for alternate pathways in the actual development of communities. Debates continue as to the general predictability of successional dynamics and the relative importance of equilibrial vs. non-equilibrial processes. Former Harvard professor Fakhri A. Bazzaz introduced the notion of scale into the discussion, as he considered that at local or small area scale the processes are stochastic and patchy, but taking bigger regional areas into consideration, certain tendencies can not be denied. More recent definitions of succession highlight change as the central characteristic. 
New research techniques are greatly enhancing contemporary scientists' ability to study succession, which is now seen as neither entirely random nor entirely predictable. Factors Both consistent patterns and variability are observed in ecological succession. Theories of ecological succession identify different factors that help explain why plant communities change the way they do. Diversity of possible trajectories Ecological succession was formerly seen as an orderly progression through distinct stages, where several plant communities would replace each other in a fixed order and eventually reach a stable end point known as the climax. The climax community was sometimes referred to as the 'potential vegetation' of a site, and thought to be primarily determined by the local climate. This idea has been largely abandoned by modern ecologists in favor of nonequilibrium ideas of ecosystems dynamics. Most natural ecosystems experience disturbance at a rate that makes a "climax" community unattainable. Climate change often occurs at a rate and frequency sufficient to prevent arrival at a climax state. The trajectory of successional change can be influenced by initial site conditions, by the type of disturbance that triggers succession, by the interactions of the species present, and by more random factors such as availability of colonists or seeds or weather conditions at the time of disturbance. Some aspects of succession are broadly predictable; others may proceed more unpredictably than in the classical view of ecological succession. Coupled with the stochastic nature of disturbance events and other long-term (e.g., climatic) changes, such dynamics make it doubtful whether the 'climax' concept ever applies or is particularly useful in considering actual vegetation. Stochastic events Succession is influenced partially by random chance, but it is debated how much random chance directs the trajectory of succession, as opposed to more deterministic factors. The timing of a disturbance such as a weather event may be random and unpredictable. Dispersal of propagules to a new site may also be random. However, community assembly is also determined by processes that select species non-randomly from the local species pool. Dispersal limitation vs. environmental filtering Succession is impacted both by the ability of seeds to disperse to new sites, and the suitability of site conditions for those seeds to grow and survive. Dispersal limitation means that even though favorable sites for a plant to live might exist, the plant's seeds may be unable to reach those sites. Environmental filtering, also called establishment limitation, implies that although seeds may be distributed to a site, those seeds may be unable to survive due to various characteristics of the site. The predicted impact of these two factors varies under different models of ecological succession. Feedback loops Ecological succession is driven by feedbacks between plants and their environment. As plants grow following a disturbance, they change their environment, for example by creating shade, attracting seed dispersers, contributing organic matter to the soil, changing the availability of soil nutrients, creating microhabitats, and buffering temperature and moisture fluctuations. This creates opportunities for different plants to grow, which causes directional change in the ecosystem. 
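One way to make the interplay of directional change and chance described in this section concrete is a simple Markov-chain ("patch transition") model of the kind ecologists have used since Horn's work on forest succession. In the sketch below the states and transition probabilities are invented for illustration only; they are not measurements from any real community.

```python
# Illustrative Markov-chain model of succession: each patch of landscape moves
# probabilistically between community types. All numbers are hypothetical.
import numpy as np

states = ["bare ground", "pioneer herbs", "shrubs", "late-successional forest"]

# transition[i, j] = probability that a patch in state i is in state j one step later
transition = np.array([
    [0.10, 0.80, 0.08, 0.02],   # bare ground is quickly colonized by pioneers
    [0.02, 0.30, 0.60, 0.08],   # pioneers tend to be replaced by shrubs
    [0.02, 0.03, 0.45, 0.50],   # shrubs give way to shade-tolerant trees
    [0.05, 0.02, 0.03, 0.90],   # forest mostly persists; disturbance occasionally resets a patch
])

# Start from a landscape of freshly disturbed (bare) patches and iterate.
distribution = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(200):
    distribution = distribution @ transition

for state, fraction in zip(states, distribution):
    print(f"{state:>25s}: {fraction:.2f}")
```

Because every state keeps a small probability of being reset by disturbance, the long-run result is a stationary mix of patch types (a shifting mosaic) rather than a single uniform end point, which matches the modern, less deterministic view of succession better than the classical climax picture.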
The development of some ecosystem attributes, such as soil properties and nutrient cycles, is both influenced by community properties and, in turn, influences further successional development. This feedback process may occur over centuries or millennia. Plants may facilitate the establishment of other plants by creating suitable conditions for them to grow, for example by providing shade or allowing for soil formation. Plants may also competitively exclude or otherwise prevent the growth of other plants. Patterns Though the idea of a fixed, predictable process of succession with a single well-defined climax is an overly simplified model, several predictions made by the classical model are accurate. Species diversity, overall plant biomass, plant lifespans, the importance of decomposer organisms, and overall stability all increase as a community approaches a climax state, while the rate at which soil nutrients are consumed, the rate of biogeochemical cycling, and the rate of net primary productivity all decrease as a community approaches a climax state. Communities in early succession will be dominated by fast-growing, well-dispersed species (opportunist, fugitive, or r-selected life histories). These are also called pioneer species. As succession proceeds, these species will tend to be replaced by more competitive (K-selected) species. Some of these trends do not apply in all cases. For example, species diversity almost necessarily increases during early succession as new species arrive, but may decline in later succession as competition eliminates opportunistic species and leads to dominance by locally superior competitors. Net primary productivity, biomass, and trophic properties all show variable patterns over succession, depending on the particular system and site. Disruptions Two important perturbation factors today are human actions and climatic change. Additions to available species pools through range expansions and introductions can also continually reshape communities. Types Primary succession Successional dynamics beginning with colonization of an area that has not been previously occupied by an ecological community are referred to as primary succession. This includes newly exposed rock or sand surfaces, lava flows, and newly exposed glacial tills. The stages of primary succession typically include pioneer microorganisms, plants such as lichens and mosses, a grassy stage, small shrubs, and trees. Animals begin to return once there is food for them to eat, and when the area supports a fully functioning ecosystem it has reached the climax community stage. Secondary succession Secondary succession follows severe disturbance or removal of a preexisting community that has remnants of the previous ecosystem. Secondary succession is strongly influenced by pre-disturbance conditions such as soil development, seed banks, remaining organic matter, and residual living organisms. Because of residual fertility and preexisting organisms, community change in early stages of secondary succession can be relatively rapid. Secondary succession is much more commonly observed and studied than primary succession. Particularly common types of secondary succession include responses to natural disturbances such as fire, flood, and severe winds, and to human-caused disturbances such as logging and agriculture. Secondary succession depends on some soil and organisms being left intact by the disturbance, so that the community can rebuild from these remnants.
As an example, in a fragmented old field habitat created in eastern Kansas, woody plants "colonized more rapidly (per unit area) on large and nearby patches". Secondary succession can quickly change a landscape. In 1947, a wildfire destroyed much of the landscape of Acadia National Park, which had originally been dominated by evergreen trees. After the fire, it took at least a year for shrubs to begin growing again, and eventually deciduous trees started to grow in place of the evergreens. Secondary succession has also been occurring in Shenandoah National Park following the 1995 flood of the Moormans and Rapidan rivers, which destroyed plant and animal life. Seasonal and cyclic dynamics Unlike secondary succession, these types of vegetation change are not dependent on disturbance but are periodic changes arising from fluctuating species interactions or recurring events. These models modify the climax concept towards one of dynamic states. Causes of plant succession Autogenic succession can be brought about by changes in the soil caused by the organisms there. These changes include accumulation of organic matter in the litter or humus layer, alteration of soil nutrients, or changes in the pH of the soil due to the plants growing there. The structure of the plants themselves can also alter the community. For example, when larger species like trees mature, they cast shade onto the developing forest floor, which tends to exclude light-requiring species; shade-tolerant species will then invade the area. Allogenic succession is caused by external environmental influences and not by the vegetation. For example, soil changes due to erosion, leaching or the deposition of silt and clays can alter the nutrient content and water relationships in the ecosystem. Animals also play an important role in allogenic changes as they are pollinators, seed dispersers and herbivores. They can also increase the nutrient content of the soil in certain areas, or shift soil about (as termites, ants, and moles do), creating patches in the habitat. This may create regeneration sites that favor certain species. Climatic factors may be very important, but act on a much longer time-scale than the others. Changes in temperature and rainfall patterns will promote changes in communities. As the climate warmed at the end of each ice age, great successional changes took place; tundra vegetation and bare glacial till deposits underwent succession to mixed deciduous forest. Greenhouse warming is likely to bring profound allogenic changes over the next century. Geological and climatic catastrophes such as volcanic eruptions, earthquakes, avalanches, meteorite impacts, floods, fires, and high winds also bring allogenic changes. Mechanisms In 1916, Frederic Clements published a descriptive theory of succession and advanced it as a general ecological concept. His theory of succession had a powerful influence on ecological thought. Clements' concept is usually termed classical ecological theory. According to Clements, succession is a process involving several phases: Nudation: succession begins with the development of a bare site, produced by disturbance. Migration: the arrival of propagules. Ecesis: the establishment and initial growth of vegetation. Competition: as vegetation becomes well established, grows, and spreads, various species begin to compete for space, light and nutrients. Reaction: during this phase autogenic changes such as the buildup of humus affect the habitat, and one plant community replaces another.
Stabilization: a supposedly stable climax community forms. Seral communities A seral community is an intermediate stage found in an ecosystem advancing towards its climax community. In many cases more than one seral stage evolves until climax conditions are attained. A prisere is a collection of seres making up the development of an area from non-vegetated surfaces to a climax community. Depending on the substratum and climate, different seres are found. Changes in animal life Succession theory was developed primarily by botanists. The study of succession applied to whole ecosystems initiated in the writings of Ramon Margalef, while Eugene Odum's publication of The Strategy of Ecosystem Development is considered its formal starting point. Animal life also exhibits changes with changing communities. In the lichen stage, fauna is sparse. It comprises a few mites, ants, and spiders living in cracks and crevices. The fauna undergoes a qualitative increase during the herb grass stage. The animals found during this stage include nematodes, insect larvae, ants, spiders, mites, etc. The animal population increases and diversifies with the development of the forest climax community. The fauna consists of invertebrates like slugs, snails, worms, millipedes, centipedes, ants, bugs; and vertebrates such as squirrels, foxes, mice, moles, snakes, various birds, salamanders and frogs. A review of succession research by Hodkinson et al. (2002) documented what was likely first noted by Darwin during his voyage on the H.M.S. Beagle: These naturalists note that prior to the establishment of autotrophs, there is a foodweb formed by heterotrophs built on allochthonous inputs of dead organic matter (necromass). Work on volcanic systems such as Kasatochi Volcano in the Aleutians by Sikes and Slowik (2010) supports this idea. Microsuccession Succession of micro-organisms including fungi and bacteria occurring within a microhabitat is known as microsuccession or serule. In artificial bacterial meta-communities of motile strains on-chip it has been shown that ecological succession is based on a trade-off between colonization and competition abilities. To exploit locations or explore the landscape? Escherichia coli is a fugitive species, whereas Pseudomonas aeruginosa is a slower colonizer but superior competitor. Like in plants, microbial succession can occur in newly available habitats (primary succession) such as surfaces of plant leaves, recently exposed rock surfaces (i.e., glacial till) or animal infant guts, and also on disturbed communities (secondary succession) like those growing in recently dead trees, decaying fruits, or animal droppings. Microbial communities may also change due to products secreted by the bacteria present. Changes of pH in a habitat could provide ideal conditions for a new species to inhabit the area. In some cases the new species may outcompete the present ones for nutrients leading to the primary species demise. Changes can also occur by microbial succession with variations in water availability and temperature. Theories of macroecology have only recently been applied to microbiology and so much remains to be understood about this growing field. A recent study of microbial succession evaluated the balances between stochastic and deterministic processes in the bacterial colonization of a salt marsh chronosequence. 
The results of this study show that, much as in macro-scale succession, early colonization (primary succession) is mostly influenced by stochasticity, while secondary succession of these bacterial communities was more strongly influenced by deterministic factors. Climax concept According to classical ecological theory, succession stops when the sere has arrived at an equilibrium or steady state with the physical and biotic environment. Barring major disturbances, it will persist indefinitely. This end point of succession is called the climax. Climax community The final or stable community in a sere is the climax community or climatic vegetation. It is self-perpetuating and in equilibrium with the physical habitat. There is no net annual accumulation of organic matter in a climax community; the annual production and use of energy are balanced in such a community. Characteristics The vegetation is tolerant of the prevailing environmental conditions. It has a wide diversity of species, a well-developed spatial structure, and complex food chains. The climax ecosystem is balanced: there is equilibrium between gross primary production and total respiration, between energy captured from sunlight and energy released by decomposition, and between the uptake of nutrients from the soil and the return of nutrients to the soil by litter fall. Individuals in the climax stage are replaced by others of the same kind, so the species composition maintains equilibrium. It is an index of the climate of the area; the life or growth forms indicate the climatic type. Types of climax Climatic Climax If there is only a single climax and the development of the climax community is controlled by the climate of the region, it is termed a climatic climax, for example the development of a maple–beech climax community on moist soil. The climatic climax is theoretical and develops where the physical conditions of the substrate are not so extreme as to modify the effects of the prevailing regional climate. Edaphic Climax When a region contains more than one climax community, modified by local conditions of the substrate such as soil moisture, soil nutrients, topography, slope exposure, fire, and animal activity, these are called edaphic climaxes. Succession ends in an edaphic climax where topography, soil, water, fire, or other disturbances are such that a climatic climax cannot develop. Catastrophic Climax Climax vegetation that is vulnerable to a catastrophic event such as wildfire. For example, in California, chaparral vegetation is the final vegetation: a wildfire removes the mature vegetation and decomposers, and a rapid development of herbaceous vegetation follows until the shrub dominance is re-established. This is known as a catastrophic climax. Disclimax When a stable community that is not the climatic or edaphic climax for the given site is maintained by man or his domestic animals, it is designated a disclimax (disturbance climax) or anthropogenic subclimax (man-generated). For example, overgrazing by stock may produce a desert community of bushes and cacti where the local climate would actually allow grassland to maintain itself. Subclimax The prolonged stage in succession just preceding the climatic climax is the subclimax. Preclimax and Postclimax In certain areas different climax communities develop under similar climatic conditions. If the community has life forms lower than those in the expected climatic climax, it is called a preclimax; a community that has life forms higher than those in the expected climatic climax is a postclimax.
Preclimax strips develop in areas that are hotter and less moist than the surrounding climate, whereas postclimax strands develop in areas that are cooler and more moist than the surrounding climate. Theories There are three schools of interpretation of the climax concept: Monoclimax or Climatic Climax Theory was advanced by Clements (1916) and recognizes only one climax, whose characteristics are determined solely by climate (the climatic climax). The processes of succession and modification of the environment overcome the effects of differences in topography, parent material of the soil, and other factors, so that the whole area would eventually be covered by a uniform plant community. Communities other than the climax are related to it, and are recognized as subclimax, postclimax and disclimax. Polyclimax Theory was advanced by Tansley (1935). It proposes that the climax vegetation of a region consists of more than one vegetation climax, controlled by soil moisture, soil nutrients, topography, slope exposure, fire, and animal activity. Climax Pattern Theory was proposed by Whittaker (1953). The climax pattern theory recognizes a variety of climaxes governed by the responses of species populations to biotic and abiotic conditions. According to this theory the total environment of the ecosystem determines the composition, species structure, and balance of a climax community. The environment includes the species' responses to moisture, temperature, and nutrients, their biotic relationships, the availability of flora and fauna to colonize the area, chance dispersal of seeds and animals, soils, climate, and disturbance such as fire and wind. The nature of climax vegetation will change as the environment changes, and the climax community represents a pattern of populations that corresponds to and changes with the pattern of environment. The central and most widespread community is the climatic climax. The theory of alternative stable states suggests there is not one end point but many, which transition between each other over ecological time. Succession by habitat type Forest succession Forests, being ecological systems, are subject to the species succession process. There are "opportunistic" or "pioneer" species that produce great quantities of seed that are disseminated by the wind, and therefore can colonize large open areas. They are capable of germinating and growing in direct sunlight. Once they have produced a closed canopy, the lack of direct sunlight at the soil surface makes it difficult for their own seedlings to develop. This creates an opportunity for shade-tolerant species to become established under the protection of the pioneers. When the pioneers die, the shade-tolerant species replace them. These species are capable of growing beneath the canopy, and therefore, in the absence of disturbances, will remain; for this reason the stand is then said to have reached its climax. When a disturbance occurs, the opportunity for the pioneers opens up again, provided they are present or within a reasonable range. Examples of pioneer species in the forests of northeastern North America are Betula papyrifera (white birch) and Prunus serotina (black cherry), which are particularly well adapted to exploiting large gaps in forest canopies but are intolerant of shade and are eventually replaced by other, shade-tolerant species in the absence of disturbances that create such gaps. In the tropics, well-known pioneer forest species can be found among the genera Cecropia, Ochroma and Trema. These categories are not black and white, however, and there are intermediate stages.
It is therefore normal that between the two extremes of light and shade there is a gradient, and there are species that may act as pioneer or tolerant, depending on the circumstances. It is of paramount importance to know the tolerance of species in order to practice an effective silviculture. Wetland succession Since many types of wetland environments exist, succession may follow a wide array of trajectories and patterns in wetlands. Under the classical model, the process of secondary succession holds that a wetland progresses over time from an initial state of open water with few plants, to a forested climax state where decayed organic matter has built up over time, forming peat. However, many wetlands are maintained by regular disturbance or natural processes at an equilibrium state that does not resemble the predicted forested "climax." The idea that ponds and wetlands gradually fill in to become dry land has been criticized and called into question due to lack of evidence. Wetland succession is a uniquely complex, non-linear process shaped by hydrology. Hydrological factors often work against linear processes that predict a succession to a "climax" state. The energy carried by moving water may create a continuous source of disturbance. For example, in coastal wetlands, the tides moving in and out continuously acts upon the ecological community. Fire may also maintain an equilibrium state in a wetland by burning off vegetation, thus interrupting the accumulation of peat. Water entering and leaving the wetland follows patterns that are broadly cyclical but erratic. For example, seasonal flooding and drying may occur with yearly changes in precipitation, causing seasonal changes in the wetland community that maintain it at a stable state. However, unusually heavy rain or unusually severe drought may cause the wetland to enter a positive feedback loop where it begins to change in a linear direction. Since wetlands are sensitive to changes in the natural processes that maintain them, human activities, invasive species, and climate change could initiate long-term changes in wetland ecosystems. Grassland succession For a long time, grasslands were thought to be early stages of succession, dominated by weedy species and with little conservation value. However, comparing grasslands that form after recovery from long-term disruptions like agricultural tillage with ancient or "old-growth" grasslands has shown that grasslands are not inherently early-successional communities. Rather, grasslands undergo a centuries-long process of succession, and a grassland that is tilled up for agriculture or otherwise destroyed is estimated to take a minimum of 100 years, and potentially on average 1,400 years, to recover to its previous level of biodiversity. However, planting a high diversity of late-successional grassland species in a disturbed environment can accelerate the recovery of the soil's ability to sequester carbon, resulting in twice as much carbon storage as a naturally recovering grassland over the same period of time. Many grassland ecosystems are maintained by disturbance, such as fire and grazing by large animals, or else the process of succession will change them to forest or shrubland. In fact, it is debated whether fire should be considered disturbance at all for the North American prairie ecosystems, since it maintains, rather than disrupts, an equilibrium state. 
Many late-successional grassland species have adaptations that allow them to store nutrients underground and re-sprout rapidly after "aboveground" disturbances like fire or grazing. Disturbance events that severely disrupt or destroy the soil, such as tilling, eliminate these late-successional species, reverting the grassland to an early successional stage dominated by pioneers, whereas fire and grazing benefit late-successional species. Both too much and too little disturbance can damage the biodiversity of disturbance-dependent ecosystems like grasslands. In North American semi-arid grasslands, the introduction of livestock ranching and the absence of fire were observed to cause a transition away from grasses to woody vegetation, particularly mesquite. However, the means by which ecological succession under frequent disturbance results in ecosystems of the sort seen in remnant prairies is poorly understood. See also Connell–Slatyer model of ecological succession Cyclic succession Ecological stability Intermediate disturbance hypothesis References Further reading External links Science Aid: Succession Explanation of succession for high school students. Biographical sketch of Henry Chandler Cowles. Robbert Murphy sees a significantly ideological, rather than scientific, basis for the disfavour shown towards succession by the current ecological orthodoxy and seeks to reinstate succession by holistic and teleological argument.
Ecological succession
[ "Physics", "Biology" ]
6,134
[ "Ecology terminology", "Physical phenomena", "Ecological processes", "Earth phenomena" ]
322,533
https://en.wikipedia.org/wiki/Project%20Orion%20%28nuclear%20propulsion%29
Project Orion was a study conducted in the 1950s and 1960s by the United States Air Force, DARPA, and NASA into the viability of a nuclear pulse spaceship that would be directly propelled by a series of atomic explosions behind the craft. Early versions of the vehicle were proposed to take off from the ground; later versions were presented for use only in space. The design effort took place at General Atomics in San Diego, and supporters included Wernher von Braun, who issued a white paper advocating the idea. Non-nuclear tests were conducted with models, but the project was eventually abandoned for several reasons, including the 1963 Partial Test Ban Treaty, which banned nuclear explosions in space, amid concerns over nuclear fallout. Physicist Stanislaw Ulam proposed the general idea of nuclear pulse propulsion in 1946, and preliminary calculations were made by Frederick Reines and Ulam in a Los Alamos memorandum dated 1947. In August 1955, Ulam co-authored a classified paper proposing the use of nuclear fission bombs, "ejected and detonated at a considerable distance", for propelling a vehicle in outer space. The project was led by Ted Taylor at General Atomics and physicist Freeman Dyson who, at Taylor's request, took a year away from the Institute for Advanced Study in Princeton to work on the project. In July 1958, DARPA agreed to sponsor Orion at an initial level of $1 million per year, at which point the project received its name and formally began. The agency granted a study of the concept to the General Dynamics Corporation, but decided to withdraw support in late 1959. The U.S. Air Force agreed to support Orion if a military use was found for the project, and the NASA Office of Manned Spaceflight also contributed funding. The concept investigated by the government used a blast shield and shock absorber to protect the crew and convert the detonations into a continuous propulsion force. The most successful model test, in November 1959, reached roughly 100 meters in altitude with six sequenced chemical explosions. NASA also produced a Mars mission profile for a 125-day round trip with eight astronauts, at a predicted development cost of $1.5 billion. Orion was canceled in 1964, after the United States signed the Partial Test Ban Treaty the prior year; the treaty greatly reduced political support for the project. NASA had also decided, in 1959, that the civilian space program would be non-nuclear in the near term. The Orion concept offered both high thrust and high specific impulse, or propellant efficiency: an Isp of roughly 2,000 seconds under the original design and perhaps 4,000 to 6,000 seconds according to the Air Force plan, with a later 1968 fusion bomb proposal by Dyson potentially increasing this to more than 75,000 seconds, enabling velocities of 10,000 km/s. A moderate-sized nuclear device was estimated, at the time, to produce about 5 or 10 billion horsepower. The extreme power of the nuclear explosions, relative to the vehicle's mass, would be managed by using external detonations, although an earlier version of the pulse concept did propose containing the blasts in an internal pressure structure, with one such design prepared by The Martin Company. As a qualitative power comparison, traditional chemical rockets, such as the Saturn V that took the Apollo program to the Moon, produce high thrust with low specific impulse, whereas electric ion engines produce a small amount of thrust very efficiently.
Orion, by contrast, would have offered performance greater than the most advanced conventional or nuclear rocket engines then under consideration. Supporters of Project Orion felt that it had potential for cheap interplanetary travel. From Project Longshot to Project Daedalus, Mini-Mag Orion, and other proposals which reach engineering analysis at the level of considering thermal power dissipation, the principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight. Such later proposals have tended to modify the basic principle by envisioning equipment driving detonation of much smaller fission or fusion pellets, in contrast to Project Orion's larger nuclear pulse units (full nuclear bombs). In 1979 General Dynamics donated a 26 inch (56 cm) tall wooden model of the craft to the Smithsonian, which displays it at the Steven F. Udvar-Hazy Center near Dulles International Airport in Northern Virginia. Basic principles The Orion nuclear pulse drive combines a very high exhaust velocity, from 19 to 31 km/s (12 to 19 mi/s) in typical interplanetary designs, with meganewtons of thrust. Many spacecraft propulsion drives can achieve one or the other of these, but nuclear pulse rockets are the only proposed technology that could potentially meet the extreme power requirements to deliver both at once (see spacecraft propulsion for more speculative systems). Specific impulse (Isp) measures how much thrust can be derived from a given mass of fuel, and is a standard figure of merit for rocketry. For any rocket propulsion, since the kinetic energy of the exhaust goes up with the square of its velocity (kinetic energy = ½mv²), whereas the momentum and thrust go up with velocity linearly (momentum = mv), obtaining a particular level of thrust (as in a number of g of acceleration) requires far more power each time that exhaust velocity and Isp are much increased in a design goal. (For instance, the most fundamental reason that electric propulsion systems of high Isp tend to be low thrust is their limits on available power: their thrust is actually inversely proportional to Isp if the power going into the exhaust is constant or at its limit from heat dissipation needs or other engineering constraints.) The Orion concept detonates nuclear explosions externally at a rate of power release which is beyond what nuclear reactors could survive internally with known materials and design. Since weight is no limitation, an Orion craft can be extremely robust. An uncrewed craft could tolerate very large accelerations, perhaps 100 g. A human-crewed Orion, however, must use some sort of damping system behind the pusher plate to smooth the near-instantaneous acceleration to a level that humans can comfortably withstand – typically about 2 to 4 g. The high performance depends on the high exhaust velocity, in order to maximize the rocket's force for a given mass of propellant. The velocity of the plasma debris is proportional to the square root of the change in the temperature (Tc) of the nuclear fireball. Since such fireballs typically achieve ten million degrees Celsius or more in less than a millisecond, they create very high velocities. However, a practical design must also limit the destructive radius of the fireball. The diameter of the nuclear fireball is proportional to the square root of the bomb's explosive yield.
The shape of the bomb's reaction mass is critical to efficiency. The original project designed bombs with a reaction mass made of tungsten. The bomb's geometry and materials focused the X-rays and plasma from the core of the nuclear explosive to hit the reaction mass. In effect each bomb would be a nuclear shaped charge. A bomb with a cylinder of reaction mass expands into a flat, disk-shaped wave of plasma when it explodes. A bomb with a disk-shaped reaction mass expands into a far more efficient cigar-shaped wave of plasma debris. The cigar shape focuses much of the plasma to impinge onto the pusher plate. For greatest mission efficiency the rocket equation demands that the greatest fraction of the bomb's explosive force be directed at the spacecraft, rather than being spent isotropically. The maximum effective specific impulse, Isp, of an Orion nuclear pulse drive is generally Isp = (C0 × Ve) / gn, where C0 is the collimation factor (the fraction of the explosion plasma debris that will actually hit the impulse absorber plate when a pulse unit explodes), Ve is the nuclear pulse unit plasma debris velocity, and gn is the standard acceleration of gravity (9.81 m/s²; this factor is not necessary if Isp is measured in N·s/kg or m/s). A collimation factor of nearly 0.5 can be achieved by matching the diameter of the pusher plate to the diameter of the nuclear fireball created by the explosion of a nuclear pulse unit. The smaller the bomb, the smaller each impulse will be, so the higher the pulse rate and the more pulse units will be needed to achieve orbit. Smaller impulses also mean less g-shock on the pusher plate and less need for damping to smooth out the acceleration. The optimal Orion drive bomblet yield (for the human-crewed 4,000-ton reference design) was calculated to be in the region of 0.15 kt, with approximately 800 bombs needed to reach orbit and a bomb rate of approximately one per second. Sizes of vehicles The following can be found in George Dyson's book. The figures for the comparison with Saturn V are taken from this section and converted from metric (kg) to US short tons (abbreviated "t" here). In late 1958 to early 1959, it was realized that the smallest practical vehicle would be determined by the smallest achievable bomb yield. The use of 0.03 kt (sea-level yield) bombs would give a vehicle mass of 880 tons. However, this was regarded as too small for anything other than an orbital test vehicle, and the team soon focused on a 4,000-ton "base design". At that time, the details of small bomb designs were shrouded in secrecy, and many Orion design reports had all details of the bombs removed before release. Contrast the above details with the 1959 report by General Atomics, which explored the parameters of three different sizes of hypothetical Orion spacecraft. The largest of the three is the "super" Orion design; at 8 million tons, it could easily be a city. In interviews, the designers contemplated the large ship as a possible interstellar ark. This extreme design could be built with materials and techniques that could be obtained in 1958 or were anticipated to be available shortly after. Most of the three thousand tons of each of the "super" Orion's propulsion units would be inert material such as polyethylene or boron salts, used to transmit the force of the propulsion unit's detonation to the Orion's pusher plate and to absorb neutrons to minimize fallout.
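A brief numerical sketch of the relations quoted above may help. The debris velocity of roughly 40 km/s and the 8 km/s figure for orbital delta-v used below are assumptions introduced here for illustration; only the collimation factor of about 0.5, the roughly 800-pulse count, and the one-pulse-per-second rate come from the text.

```python
# Back-of-envelope checks on the pulse-drive figures quoted above.
G_N = 9.81  # standard acceleration of gravity, m/s^2

def effective_isp(c0: float, debris_velocity_m_s: float) -> float:
    """Isp = C0 * Ve / g_n, returned in seconds."""
    return c0 * debris_velocity_m_s / G_N

# Effective Isp for a collimation factor of ~0.5 and an assumed fission-unit
# debris velocity of ~40 km/s (hypothetical value, chosen for illustration).
print(f"fission pulse unit: Isp ~ {effective_isp(0.5, 40_000):,.0f} s")

# Average velocity gain per pulse if ~800 pulses at one per second must supply
# roughly 8 km/s of ascent delta-v (a rough, assumed figure including losses).
delta_v_to_orbit = 8_000.0   # m/s
pulses = 800
dv_per_pulse = delta_v_to_orbit / pulses
print(f"~{dv_per_pulse:.0f} m/s gained per pulse, a mean acceleration of "
      f"~{dv_per_pulse / G_N:.1f} g at one pulse per second")
```

Under those assumptions the effective Isp comes out near the roughly 2,000 seconds quoted for the original design, and the 4,000-ton reference vehicle would gain on the order of 10 m/s per pulse, a mean acceleration of about 1 g during the powered ascent.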
One design proposed by Freeman Dyson for the "Super Orion" called for the pusher plate to be composed primarily of uranium or a transuranic element so that upon reaching a nearby star system the plate could be converted to nuclear fuel. Theoretical applications The Orion nuclear pulse rocket design has extremely high performance. Orion nuclear pulse rockets using nuclear fission type pulse units were originally intended for use on interplanetary space flights. Missions that were designed for an Orion vehicle in the original project included single stage (i.e., directly from Earth's surface) to Mars and back, and a trip to one of the moons of Saturn. Freeman Dyson performed the first analysis of what kinds of Orion missions were possible to reach Alpha Centauri, the nearest star system to the Sun. His 1968 paper "Interstellar Transport" (Physics Today) retained the concept of large nuclear explosions, but Dyson moved away from the use of fission bombs and considered the use of one-megaton deuterium fusion explosions instead. His conclusions were simple: the debris velocity of fusion explosions was probably in the 3,000–30,000 km/s range, and the reflecting geometry of Orion's hemispherical pusher plate would reduce that range to 750–15,000 km/s. To estimate the upper and lower limits of what could be done using 1968 technology, Dyson considered two starship designs. The more conservative energy-limited pusher plate design simply had to absorb all the thermal energy of each impinging explosion (4×10¹⁵ joules, half of which would be absorbed by the pusher plate) without melting. Dyson estimated that if the exposed surface consisted of copper with a thickness of 1 mm, then the diameter and mass of the hemispherical pusher plate would have to be 20 kilometers and 5 million tonnes, respectively, and 100 seconds would be required to allow the copper to radiatively cool before the next explosion. It would then take on the order of 1,000 years for the energy-limited heat sink Orion design to reach Alpha Centauri. In order to improve on this performance while reducing size and cost, Dyson considered an alternative momentum-limited pusher plate design in which an ablation coating on the exposed surface is substituted to get rid of the excess heat. The limitation is then set by the capacity of the shock absorbers to transfer momentum from the impulsively accelerated pusher plate to the smoothly accelerated vehicle. Dyson calculated that the properties of available materials limited the velocity transferred by each explosion to about 30 meters per second, independent of the size and nature of the explosion. If the vehicle is to be accelerated at 1 Earth gravity (9.81 m/s²) with this velocity transfer, then the pulse rate is one explosion every three seconds. Dyson tabulated the dimensions and performance of both vehicles in his paper. Later studies indicate that the top cruise velocity that can theoretically be achieved is a few percent of the speed of light (0.08–0.1c). An atomic (fission) Orion can achieve perhaps 9–11% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range, and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case saving fuel for slowing down halves the maximum speed.
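Several of the figures quoted for Dyson's 1968 designs can be checked with elementary arithmetic, as in the sketch below. The copper density and the 4.37 light-year distance to Alpha Centauri are standard values introduced here; everything else is taken from the text.

```python
import math

# Energy-limited design: hemispherical copper pusher plate, 20 km in diameter, 1 mm thick.
radius_m = 10_000.0          # 20 km diameter
thickness_m = 1.0e-3
rho_copper = 8960.0          # kg/m^3, handbook value (assumption, not from the text)

area = 2.0 * math.pi * radius_m ** 2          # surface area of a hemisphere
mass_kg = area * thickness_m * rho_copper
print(f"pusher plate mass ~ {mass_kg / 1e9:.1f} million tonnes")   # ~5.6, vs the quoted 5 million tonnes

# Momentum-limited design: ~30 m/s of velocity gained per pulse, one pulse every 3 s.
accel = 30.0 / 3.0
print(f"mean acceleration ~ {accel:.0f} m/s^2 (~{accel / 9.81:.1f} g)")  # ~1 g, as stated

# Cruise time to Alpha Centauri at 0.1 c, ignoring acceleration and deceleration phases.
distance_ly = 4.37           # light-years (standard value, not from the text)
print(f"cruise time at 0.1 c ~ {distance_ly / 0.1:.0f} years")      # ~44 years, as quoted below
```

The simple checks reproduce the roughly 5-million-tonne plate mass, the 1 g acceleration of the momentum-limited design, and the several-decade cruise time quoted for a 0.1 c starship.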
The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant; this would allow the ship to travel near the maximum theoretical velocity. At 0.1c, Orion thermonuclear starships would require a flight time of at least 44 years to reach Alpha Centauri, not counting time needed to reach that speed (about 36 days at constant acceleration of 1g or 9.8 m/s2). At 0.1c, an Orion starship would require 100 years to travel 10 light years. The astronomer Carl Sagan suggested that this would be an excellent use for stockpiles of nuclear weapons. As part of the development of Project Orion, to garner funding from the military, a derived "space battleship" space-based nuclear-blast-hardened nuclear-missile weapons platform was mooted in the 1960s by the United States Air Force. It would comprise the USAF "Deep Space Bombardment Force". Later developments A concept similar to Orion was designed by the British Interplanetary Society (B.I.S.) in the years 1973–1974. Project Daedalus was to be a robotic interstellar probe to Barnard's Star that would travel at 12% of the speed of light. In 1989, a similar concept was studied by the U.S. Navy and NASA in Project Longshot. Both of these concepts require significant advances in fusion technology, and therefore cannot be built at present, unlike Orion. From 1998 to the present, the nuclear engineering department at Pennsylvania State University has been developing two improved versions of project Orion known as Project ICAN and Project AIMStar using compact antimatter catalyzed nuclear pulse propulsion units, rather than the large inertial confinement fusion ignition systems proposed in Project Daedalus and Longshot. Costs The expense of the fissionable materials required was thought to be high, until the physicist Ted Taylor showed that with the right designs for explosives, the amount of fissionables used on launch was close to constant for every size of Orion from 2,000 tons to 8,000,000 tons. The larger bombs used more explosives to super-compress the fissionables, increasing efficiency. The extra debris from the explosives also serves as additional propulsion mass. The bulk of costs for historical nuclear defense programs have been for delivery and support systems, rather than for production cost of the bombs directly (with warheads being 7% of the U.S. 1946–1996 expense total according to one study). After initial infrastructure development and investment, the marginal cost of additional nuclear bombs in mass production can be relatively low. In the 1980s, some U.S. thermonuclear warheads had $1.1 million estimated cost each ($630 million for 560). For the perhaps simpler fission pulse units to be used by one Orion design, a 1964 source estimated a cost of $40,000 or less each in mass production, which would be up to approximately $0.3 million each in modern-day dollars adjusted for inflation. Project Daedalus later proposed fusion explosives (deuterium or tritium pellets) detonated by electron beam inertial confinement. This is the same principle behind inertial confinement fusion. Theoretically, it could be scaled down to far smaller explosions, and require small shock absorbers. Vehicle architecture From 1957 to 1964 this information was used to design a spacecraft propulsion system called Orion, in which nuclear explosives would be thrown behind a pusher-plate mounted on the bottom of a spacecraft and exploded. 
The shock wave and radiation from the detonation would impact against the underside of the pusher plate, giving it a powerful push. The pusher plate would be mounted on large two-stage shock absorbers that would smoothly transmit acceleration to the rest of the spacecraft. During take-off, there were concerns of danger from fluidic shrapnel being reflected from the ground. One proposed solution was to use a flat plate of conventional explosives spread over the pusher plate, and detonate this to lift the ship from the ground before going nuclear. This would lift the ship far enough into the air that the first focused nuclear blast would not create debris capable of harming the ship. A preliminary design for a nuclear pulse unit was produced. It proposed the use of a shaped-charge fusion-boosted fission explosive. The explosive was wrapped in a beryllium oxide channel filler, which was surrounded by a uranium radiation mirror. The mirror and channel filler were open ended, and in this open end a flat plate of tungsten propellant was placed. The whole unit was built into a can with a diameter no larger than and weighed just over so it could be handled by machinery scaled-up from a soft-drink vending machine; Coca-Cola was consulted on the design. At 1 microsecond after ignition the gamma bomb plasma and neutrons would heat the channel filler and be somewhat contained by the uranium shell. At 2–3 microseconds the channel filler would transmit some of the energy to the propellant, which vaporized. The flat plate of propellant formed a cigar-shaped explosion aimed at the pusher plate. The plasma would cool to as it traversed the distance to the pusher plate and then reheat to as, at about 300 microseconds, it hits the pusher plate and is recompressed. This temperature emits ultraviolet light, which is poorly transmitted through most plasmas. This helps keep the pusher plate cool. The cigar shaped distribution profile and low density of the plasma reduces the instantaneous shock to the pusher plate. Because the momentum transferred by the plasma is greatest in the center, the pusher plate's thickness would decrease by approximately a factor of 6 from the center to the edge. This ensures the change in velocity is the same for the inner and outer parts of the plate. At low altitudes where the surrounding air is dense, gamma scattering could potentially harm the crew without a radiation shield; a radiation refuge would also be necessary on long missions to survive solar flares. Radiation shielding effectiveness increases exponentially with shield thickness, see gamma ray for a discussion of shielding. On ships with a mass greater than the structural bulk of the ship, its stores along with the mass of the bombs and propellant, would provide more than adequate shielding for the crew. Stability was initially thought to be a problem due to inaccuracies in the placement of the bombs, but it was later shown that the effects would cancel out. Numerous model flight tests, using conventional explosives, were conducted at Point Loma, San Diego in 1959. On November 14, 1959 the one-meter model, also known as "Hot Rod" and "putt-putt", first flew using RDX (chemical explosives) in a controlled flight for 23 seconds to a height of . Film of the tests has been transcribed to video and were featured on the BBC TV program "To Mars by A-Bomb" in 2003 with comments by Freeman Dyson and Arthur C. Clarke. The model landed by parachute undamaged and is in the collection of the Smithsonian National Air and Space Museum. 
The first proposed shock absorber was a ring-shaped airbag. It was soon realized that, should an explosion fail, the pusher plate would tear away the airbag on the rebound. So a two-stage detuned spring and piston shock absorber design was developed. On the reference design the first stage mechanical absorber was tuned to 4.5 times the pulse frequency whilst the second stage gas piston was tuned to 0.5 times the pulse frequency. This permitted timing tolerances of 10 ms in each explosion. The final design coped with bomb failure by overshooting and rebounding into a center position. Thus following a failure and on initial ground launch it would be necessary to start or restart the sequence with a lower yield device. In the 1950s methods of adjusting bomb yield were in their infancy and considerable thought was given to providing a means of swapping out a standard yield bomb for a smaller yield one in a 2 or 3 second time frame or to provide an alternative means of firing low yield bombs. Modern variable yield devices would allow a single standardized explosive to be tuned down (configured to a lower yield) automatically. The bombs had to be launched behind the pusher plate with enough velocity to explode beyond it every 1.1 seconds. Numerous proposals were investigated, from multiple guns poking over the edge of the pusher plate to rocket propelled bombs launched from roller coaster tracks; however, the final reference design used a simple gas gun to shoot the devices through a hole in the center of the pusher plate. Potential problems Exposure to repeated nuclear blasts raises the problem of ablation (erosion) of the pusher plate. Calculations and experiments indicated that a steel pusher plate would ablate less than 1 mm, if unprotected. If sprayed with an oil it would not ablate at all (this was discovered by accident: a test plate had oily fingerprints on it and the fingerprints suffered no ablation). The absorption spectra of carbon and hydrogen minimize heating. The design temperature of the shockwave, , emits ultraviolet light. Most materials and elements are opaque to ultraviolet, especially at the pressures the plate experiences. This prevents the plate from melting or ablating. One issue that remained unresolved at the conclusion of the project was whether or not the turbulence created by the combination of the propellant and ablated pusher plate would dramatically increase the total ablation of the pusher plate. According to Freeman Dyson, in the 1960s they would have had to actually perform a test with a real nuclear explosive to determine this; with modern simulation technology this could be determined fairly accurately without such empirical investigation. Another potential problem with the pusher plate is that of spalling—shards of metal—potentially flying off the top of the plate. The shockwave from the impacting plasma on the bottom of the plate passes through the plate and reaches the top surface. At that point, spalling may occur, damaging the pusher plate. For that reason, alternative substances—plywood and fiberglass—were investigated for the surface layer of the pusher plate and thought to be acceptable. If the conventional explosives in the nuclear bomb detonate but a nuclear explosion does not ignite, shrapnel could strike and potentially critically damage the pusher plate. True engineering tests of the vehicle systems were thought to be impossible because several thousand nuclear explosions could not be performed in any one place. 
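The two-stage suspension described above can be caricatured as a chain of masses, springs, and dampers: the pusher plate receives a sharp velocity kick from each pulse, the stiff first stage spreads that kick out, and the soft second stage hands a nearly steady push to the rest of the ship. The toy model below uses invented masses, damping, and kick size; only the tuning of the two stages to roughly 4.5 and 0.5 times the pulse frequency, and the 1.1-second pulse period, follow the text. It is an illustration of the smoothing principle, not the actual Orion suspension design.

```python
# Toy lumped-parameter model of a two-stage shock absorber driven by periodic impulses.
import math

PULSE_PERIOD = 1.1                       # seconds between detonations
f_pulse = 1.0 / PULSE_PERIOD
f1, f2 = 4.5 * f_pulse, 0.5 * f_pulse    # stage natural frequencies per the reference design

m_plate, m_mid, m_ship = 1.0, 1.0, 8.0   # arbitrary mass units (invented)
k1 = (2 * math.pi * f1) ** 2 * m_plate   # stiff first stage: plate <-> intermediate platform
k2 = (2 * math.pi * f2) ** 2 * m_mid     # soft second stage: intermediate platform <-> ship
c1, c2 = 2.0, 1.0                        # light damping (invented)

dt = 1.0e-3
steps_per_pulse = round(PULSE_PERIOD / dt)
x = [0.0, 0.0, 0.0]                      # positions: plate, intermediate, ship
v = [0.0, 0.0, 0.0]
kick = 5.0                               # velocity jump given to the plate by each pulse (invented)

peak_plate_a = peak_ship_a = 0.0
for i in range(12 * steps_per_pulse):    # simulate a dozen pulses
    if i % steps_per_pulse == 0:
        v[0] += kick                     # impulsive momentum transfer to the plate

    f12 = k1 * (x[0] - x[1]) + c1 * (v[0] - v[1])   # first-stage spring/damper force
    f23 = k2 * (x[1] - x[2]) + c2 * (v[1] - v[2])   # second-stage spring/damper force
    a = [-f12 / m_plate, (f12 - f23) / m_mid, f23 / m_ship]

    peak_plate_a = max(peak_plate_a, abs(a[0]))
    peak_ship_a = max(peak_ship_a, abs(a[2]))
    for j in range(3):                   # semi-implicit Euler integration step
        v[j] += a[j] * dt
        x[j] += v[j] * dt

print(f"peak plate acceleration : {peak_plate_a:7.1f} (arbitrary units)")
print(f"peak ship acceleration  : {peak_ship_a:7.1f} (arbitrary units)")
```

Even in this crude model the ship-side peak acceleration comes out one to two orders of magnitude below the plate-side peak, which is the point of the detuned two-stage arrangement.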
Experiments were designed to test pusher plates in nuclear fireballs and long-term tests of pusher plates could occur in space. The shock-absorber designs could be tested at full-scale on Earth using chemical explosives. However, the main unsolved problem for a launch from the surface of the Earth was thought to be nuclear fallout. Freeman Dyson, group leader on the project, estimated back in the 1960s that with conventional nuclear weapons, each launch would statistically cause on average between 0.1 and 1 fatal cancers from the fallout. That estimate is based on no-threshold model assumptions, a method often used in estimates of statistical deaths from other industrial activities. Each few million dollars of efficiency indirectly gained or lost in the world economy may statistically average lives saved or lost, in terms of opportunity gains versus costs. Indirect effects could matter for whether the overall influence of an Orion-based space program on future human global mortality would be a net increase or a net decrease, including if change in launch costs and capabilities affected space exploration, space colonization, the odds of long-term human species survival, space-based solar power, or other hypotheticals. Danger to human life was not a reason given for shelving the project. The reasons included lack of a mission requirement, the fact that no one in the U.S. government could think of any reason to put thousands of tons of payload into orbit, the decision to focus on rockets for the Moon mission, and ultimately the signing of the Partial Test Ban Treaty in 1963. The danger to electronic systems on the ground from an electromagnetic pulse was not considered to be significant from the sub-kiloton blasts proposed since solid-state integrated circuits were not in general use at the time. From many smaller detonations combined, the fallout for the entire launch of a Orion is equal to the detonation of a typical 10 megaton (40 petajoule) nuclear weapon as an air burst, therefore most of its fallout would be the comparatively dilute delayed fallout. Assuming the use of nuclear explosives with a high portion of total yield from fission, it would produce a combined fallout total similar to the surface burst yield of the Mike shot of Operation Ivy, a 10.4 Megaton device detonated in 1952. The comparison is not quite perfect as, due to its surface burst location, Ivy Mike created a large amount of early fallout contamination. Historical above-ground nuclear weapon tests included 189 megatons of fission yield and caused average global radiation exposure per person peaking at in 1963, with a residual in modern times, superimposed upon other sources of exposure, primarily natural background radiation, which averages globally but varies greatly, such as in some high-altitude cities. Any comparison would be influenced by how population dosage is affected by detonation locations, with very remote sites preferred. With special designs of the nuclear explosive, Ted Taylor estimated that fission product fallout could be reduced tenfold, or even to zero, if a pure fusion explosive could be constructed instead. A 100% pure fusion explosive has yet to be successfully developed, according to declassified US government documents, although relatively clean PNEs (Peaceful nuclear explosions) were tested for canal excavation by the Soviet Union in the 1970s with 98% fusion yield in the Taiga test's 15 kiloton devices, 0.3 kilotons fission, which excavated part of the proposed Pechora–Kama Canal. 
The vehicle's propulsion system and its test program would violate the Partial Test Ban Treaty of 1963, as currently written, which prohibits all nuclear detonations except those conducted underground as an attempt to slow the arms race and to limit the amount of radiation in the atmosphere caused by nuclear detonations. There was an effort by the US government to put an exception into the 1963 treaty to allow for the use of nuclear propulsion for spaceflight but Soviet fears about military applications kept the exception out of the treaty. This limitation would affect only the US, Russia, and the United Kingdom. It would also violate the Comprehensive Nuclear-Test-Ban Treaty which has been signed by the United States and China as well as the de facto moratorium on nuclear testing that the declared nuclear powers have imposed since the 1990s. The launch of such an Orion nuclear bomb rocket from the ground or low Earth orbit would generate an electromagnetic pulse that could cause significant damage to computers and satellites as well as flooding the van Allen belts with high-energy radiation. Since the EMP footprint would be a few hundred miles wide, this problem might be solved by launching from very remote areas. A few relatively small space-based electrodynamic tethers could be deployed to quickly eject the energetic particles from the capture angles of the Van Allen belts. An Orion spacecraft could be boosted by non-nuclear means to a safer distance only activating its drive well away from Earth and its satellites. The Lofstrom launch loop, space elevator, or other alternative launch systems hypothetically provide excellent solutions; in the case of the space elevator, existing carbon nanotubes composites, with the possible exception of Colossal carbon tubes, do not yet have sufficient tensile strength. All chemical rocket designs are extremely inefficient and expensive when launching large mass into orbit but could be employed if the result were cost effective. Notable personnel Lew Allen, contract manager Jerry Astl, explosives engineer Jeremy Bernstein, physicist Edward Creutz, physicist Brian Dunne, Orion's chief scientist Freeman Dyson, physicist Harold Finger, physicist Burt Freeman, physicist Edward B. Giller, USAF liaison Charles Clark Loomis, physicist Harris Mayer, physicist James Nance, project director H. Pierre Noyes, physicist Ronald F. Prater, USAF liaison Don Prickett, USAF liaison Kedar "Bud" Pyatt, mathematician Morris Scharff, physicist Ted Taylor, project director Micheal Treshow, physicist Stanisław Ulam, mathematician Operation Plumbbob A test that was similar to the test of a pusher plate occurred as an accidental side effect of a nuclear containment test called "Pascal-B" conducted on 27 August 1957. The test's experimental designer Dr. Robert Brownlee performed a highly approximate calculation that suggested that the low-yield nuclear explosive would accelerate the massive (900 kg) steel capping plate to six times escape velocity. The plate was never found but Dr. Brownlee believes that the plate never left the atmosphere; for example, it could have been vaporized by compression heating of the atmosphere due to its high speed. The calculated velocity was interesting enough that the crew trained a high-speed camera on the plate which, unfortunately, only appeared in one frame indicating a very high lower bound for the speed of the plate. Notable appearances in fiction The first appearance of the idea in print appears to be Robert A. 
Heinlein's 1940 short story, "Blowups Happen." As discussed by Arthur C. Clarke in his recollections of the making of 2001: A Space Odyssey in The Lost Worlds of 2001, a nuclear-pulse version of the U.S. interplanetary spacecraft Discovery One was considered. However the Discovery in the movie did not use this idea, as Stanley Kubrick thought it might be considered parody after making Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. An Orion spaceship features prominently in the science fiction novel Footfall by Larry Niven and Jerry Pournelle. In the face of an alien siege/invasion of Earth, the humans must resort to drastic measures to get a fighting ship into orbit to face the alien fleet. The opening premise of the show Ascension is that in 1963 President John F. Kennedy and the U.S. government, fearing the Cold War will escalate and lead to the destruction of Earth, launched the Ascension, an Orion-class spaceship, to colonize a planet orbiting Proxima Centauri, assuring the survival of the human race. Author Stephen Baxter's science fiction novel Ark employs an Orion-class generation ship to escape ecological disaster on Earth. Towards the conclusion of his Empire Games trilogy, Charles Stross includes a spacecraft modeled after Project Orion. The crafts' designers, constrained by a 1960s level of industrial capacity, intend it to be used to explore parallel worlds and to act as a nuclear deterrent, leapfrogging their foes more contemporary capabilities. In the horror novel Torment by Jeremy Robinson (written under the pseudonym Jeremy Bishop), the main characters escape from a global nuclear war in a nuclear pulse propulsion craft. The craft is among 3 others; part of the "Orion Protocol", an escape mechanism for members of the federal government. The craft are housed in a subterranean chamber below The Ellipse. In the science fiction novel "3 Body Problem" and its associated television shows, a probe is launched towards an approaching alien fleet using a variation of the Orion method. See also AIMStar Antimatter-catalyzed nuclear pulse propulsion Helios (propulsion system) NERVA (Nuclear Engine for Rocket Vehicle Application) Nuclear propulsion Project Pluto Project Prometheus Project Valkyrie Peaceful nuclear explosion References Further reading "Nuclear Pulse Propulsion (Project Orion) Technical Summary Report" RTD-TDR-63-3006 (1963–1964); GA-4805 Vol. 1: Reference Vehicle Design Study, Vol. 2: Interaction Effects, Vol. 3: Pulse Systems, Vol. 4: Experimental Structural Response. (From the National Technical Information Service, U.S.A.) "Nuclear Pulse Propulsion (Project Orion) Technical Summary Report" 1 July 1963 – 30 June 1964, WL-TDR-64-93; GA-5386 Vol. 1: Summary Report, Vol. 2: Theoretical and Experimental Physics, Vol. 3: Engine Design, Analysis and Development Techniques, Vol. 4: Engineering Experimental Tests. (From the National Technical Information Service, U.S.A.) 
General Atomics, Nuclear Pulse Space Vehicle Study, Volume I – Summary, September 19, 1964 General Atomics, Nuclear Pulse Space Vehicle Study, Volume III – Conceptual Vehicle Designs And Operational Systems, September 19, 1964 General Atomics, Nuclear Pulse Space Vehicle Study, Volume IV – Mission Velocity Requirements And System Comparisons, February 28, 1966 General Atomics, Nuclear Pulse Space Vehicle Study, Volume IV – Mission Velocity Requirements And System Comparisons (Supplement), February 28, 1966 NASA, Nuclear Pulse Vehicle Study Condensed Summary Report (General Dynamics Corp), January 14, 1964 External links The case for Orion Freeman Dyson talking about Project Orion Electromagnetic Pulse Shockwaves as a result of Nuclear Pulse Propulsion George Dyson talking about Project Orion at TED Orion Hypothetical spacecraft Single-stage-to-orbit Space access Orion Freeman Dyson Interstellar travel
Project Orion (nuclear propulsion)
[ "Astronomy", "Technology", "Engineering" ]
7,121
[ "Exploratory engineering", "Astronomical hypotheses", "Hypothetical spacecraft", "Interstellar travel", "nan" ]
322,913
https://en.wikipedia.org/wiki/Native%20state
In biochemistry, the native state of a protein or nucleic acid is its properly folded and/or assembled form, which is operative and functional. The native state of a biomolecule may possess all four levels of biomolecular structure, with the secondary through quaternary structure being formed from weak interactions along the covalently-bonded backbone. This is in contrast to the denatured state, in which these weak interactions are disrupted, leading to the loss of these forms of structure and retaining only the biomolecule's primary structure. Biochemistry Proteins While all protein molecules begin as simple unbranched chains of amino acids, once completed they assume highly specific three-dimensional shapes. That ultimate shape, known as tertiary structure, is the folded shape that possesses a minimum of free energy. It is a protein's tertiary, folded structure that makes it capable of performing its biological function. In fact, shape changes in proteins are the primary cause of several neurodegenerative diseases, including those caused by prions and amyloid (e.g. mad cow disease, kuru, Creutzfeldt–Jakob disease). Many enzymes and other non-structural proteins have more than one native state, and they operate or undergo regulation by transitioning between these states. However, "native state" is used almost exclusively in the singular, typically to distinguish properly folded proteins from denatured or unfolded ones. In other contexts, the folded shape of a protein is most often referred to as its native "conformation" or "structure." Folded and unfolded proteins are often easily distinguished by virtue of their water solubilities, as many proteins become insoluble on denaturation. Proteins in the native state will have defined secondary structure, which can be detected spectroscopically, by circular dichroism and by nuclear magnetic resonance (NMR). The native state of a protein can be distinguished from a molten globule by, among other things, distances measured by NMR. Amino acids widely separated in a protein's sequence may touch or lie very close to one another within a stably folded protein. In a molten globule, on the other hand, their time-averaged distances are liable to be greater. Learning how native state proteins can be manufactured is important, as attempts to create proteins from scratch have resulted in molten globules and not true native state products. Therefore, an understanding of the native state is crucial in protein engineering. Nucleic acids Nucleic acids attain their native state through base pairing and, to a lesser extent, other interactions such as coaxial stacking. Biological DNA usually exists as long linear double helices bound to proteins in chromatin, and biological RNA such as tRNA often forms complex native configurations approaching the complexity of folded proteins. Additionally, artificial nucleic acid structures used in DNA nanotechnology are designed to have specific native configurations in which multiple nucleic acid strands are assembled into a single complex. In some cases, native-state biological DNA performs its functions without being controlled by any other regulatory units. References Protein structure
Native state
[ "Chemistry" ]
634
[ "Protein structure", "Structural biology" ]
322,931
https://en.wikipedia.org/wiki/Random%20coil
In polymer chemistry, a random coil is a conformation of polymers where the monomer subunits are oriented randomly while still being bonded to adjacent units. It is not one specific shape, but a statistical distribution of shapes for all the chains in a population of macromolecules. The conformation's name is derived from the idea that, in the absence of specific, stabilizing interactions, a polymer backbone will "sample" all possible conformations randomly. Many unbranched, linear homopolymers — in solution, or above their melting temperatures — assume (approximate) random coils. Random walk model: The Gaussian chain There are an enormous number of different ways in which a chain can be curled around in a relatively compact shape, like an unraveling ball of twine with much open space, and comparatively few ways it can be more or less stretched out. So, if each conformation has an equal probability or statistical weight, chains are much more likely to be ball-like than they are to be extended — a purely entropic effect. In an ensemble of chains, most of them will, therefore, be loosely balled up. This is the kind of shape any one of them will have most of the time. Consider a linear polymer to be a freely-jointed chain with N subunits, each of length l, that occupy zero volume, so that no part of the chain excludes another from any location. One can regard the segments of each such chain in an ensemble as performing a random walk (or "random flight") in three dimensions, limited only by the constraint that each segment must be joined to its neighbors. This is the ideal chain mathematical model. It is clear that the maximum, fully extended length L of the chain is L = N x l. If we assume that each possible chain conformation has an equal statistical weight, it can be shown that the probability P(r) of a polymer chain in the population to have distance r between the ends will obey a characteristic distribution described by the formula P(r) = (3 / (2π⟨r²⟩))^(3/2) · 4πr² · exp(−3r² / (2⟨r²⟩)), where ⟨r²⟩ is the mean of r². The average (root mean square) end-to-end distance for the chain, √⟨r²⟩, turns out to be l times the square root of N — in other words, the average distance scales with N^0.5. Real polymers A real polymer is not freely-jointed. A -C-C- single bond has a fixed tetrahedral angle of 109.5 degrees. The value of L is well-defined for, say, a fully extended polyethylene or nylon, but it is less than N x l because of the zig-zag backbone. There is, however, free rotation about many chain bonds. The model above can be enhanced. A longer, "effective" unit length can be defined such that the chain can be regarded as freely-jointed, along with a smaller N, such that the constraint L = N x l is still obeyed. It, too, gives a Gaussian distribution. However, specific cases can also be precisely calculated. The average end-to-end distance for freely-rotating (not freely-jointed) polymethylene (polyethylene with each -C-C- considered as a subunit) is l times the square root of 2N, an increase by a factor of about 1.4. Unlike the zero volume assumed in a random walk calculation, all real polymers' segments occupy space because of the van der Waals radii of their atoms, including bulky substituent groups that interfere with bond rotations. This can also be taken into account in calculations. All such effects increase the mean end-to-end distance. Because their polymerization is stochastically driven, chain lengths in any real population of synthetic polymers will obey a statistical distribution. In that case, we should take N to be an average value.
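This N^0.5 scaling is easy to check numerically. The following is a minimal Python sketch (assuming only NumPy, with arbitrary illustrative values for N and the bond length) that generates freely-jointed chains as sums of randomly oriented unit steps and compares the measured root-mean-square end-to-end distance with l times the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

def end_to_end_distances(n_steps, bond_length=1.0, n_chains=5000):
    """End-to-end distances for an ensemble of freely-jointed (random-flight) chains."""
    # Isotropic unit bond vectors: normalize 3D Gaussian samples.
    bonds = rng.normal(size=(n_chains, n_steps, 3))
    bonds /= np.linalg.norm(bonds, axis=2, keepdims=True)
    r_vec = bond_length * bonds.sum(axis=1)   # end-to-end vector of each chain
    return np.linalg.norm(r_vec, axis=1)

l = 1.0
for N in (16, 64, 256):
    r = end_to_end_distances(N, l)
    rms = np.sqrt(np.mean(r**2))
    print(f"N = {N:4d}:  rms end-to-end = {rms:6.2f},  l*sqrt(N) = {l*np.sqrt(N):6.2f}")
```

With a few thousand chains per value of N, the two columns agree to within a percent or so, which is the expected sampling error for an ensemble of this size.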
Also, many polymers have random branching. Even with corrections for local constraints, the random walk model ignores steric interference between chains, and between distal parts of the same chain. A chain often cannot move from a given conformation to a closely related one by a small displacement because one part of it would have to pass through another part, or through a neighbor. We may still hope that the ideal-chain, random-coil model will be at least a qualitative indication of the shapes and dimensions of real polymers in solution, and in the amorphous state, as long as there are only weak physicochemical interactions between the monomers. This model, and the Flory-Huggins Solution Theory, for which Paul Flory received the Nobel Prize in Chemistry in 1974, ostensibly apply only to ideal, dilute solutions. But there is reason to believe (e.g., neutron diffraction studies) that excluded volume effects may cancel out, so that, under certain conditions, chain dimensions in amorphous polymers have approximately the ideal, calculated size. When separate chains interact cooperatively, as in forming crystalline regions in solid thermoplastics, a different mathematical approach must be used. Stiffer polymers such as helical polypeptides, Kevlar, and double-stranded DNA can be treated by the worm-like chain model. Even copolymers with monomers of unequal length will distribute in random coils if the subunits lack any specific interactions. The parts of branched polymers may also assume random coils. Below their melting temperatures, most thermoplastic polymers (polyethylene, nylon, etc.) have amorphous regions in which the chains approximate random coils, alternating with regions that are crystalline. The amorphous regions contribute elasticity and the crystalline regions contribute strength and rigidity. More complex polymers such as proteins, with various interacting chemical groups attached to their backbones, self-assemble into well-defined structures. But segments of proteins, and polypeptides that lack secondary structure, are often assumed to exhibit a random-coil conformation in which the only fixed relationship is the joining of adjacent amino acid residues by a peptide bond. This is not actually the case, since the ensemble will be energy weighted due to interactions between amino acid side-chains, with lower-energy conformations being present more frequently. In addition, even arbitrary sequences of amino acids tend to exhibit some hydrogen bonding and secondary structure. For this reason, the term "statistical coil" is occasionally preferred. The conformational entropy of the random coil stabilizes the unfolded protein state and represents the main free-energy contribution that opposes protein folding. Spectroscopy A random-coil conformation can be detected using spectroscopic techniques. The arrangement of the planar amide bonds results in a distinctive signal in circular dichroism. The chemical shifts of amino acids in a random-coil conformation are well known in nuclear magnetic resonance (NMR). Deviations from these signatures often indicate the presence of some secondary structure, rather than a complete random coil. Furthermore, there are signals in multidimensional NMR experiments that indicate that stable, non-local amino acid interactions are absent for polypeptides in a random-coil conformation. Likewise, in the images produced by crystallography experiments, segments of random coil result simply in a reduction in "electron density" or contrast.
A randomly coiled state for any polypeptide chain can be attained by denaturing the system. However, there is evidence that proteins are never truly random coils, even when denatured (Shortle & Ackerman). See also Protein folding Native state Molten globule Probability theory References External links polymer statistical mechanics A topological problem in polymer physics: configurational and mechanical properties of a random walk enclosing a constant area D. Shortle and M. Ackerman, Persistence of native-like topology in a denatured protein in 8 M urea, Science 293 (2001), pp. 487–489 Sample chapter "Conformations, Solutions, and Molecular Weight" from "Polymer Science & Technology" courtesy of Prentice Hall Professional publications Polymer physics Physical chemistry
Random coil
[ "Physics", "Chemistry", "Materials_science" ]
1,642
[ "Polymer physics", "Applied and interdisciplinary physics", "Polymer chemistry", "nan", "Physical chemistry" ]
323,137
https://en.wikipedia.org/wiki/Geometric%20phase
In classical and quantum mechanics, geometric phase is a phase difference acquired over the course of a cycle, when a system is subjected to cyclic adiabatic processes, which results from the geometrical properties of the parameter space of the Hamiltonian. The phenomenon was independently discovered by S. Pancharatnam (1956), in classical optics and by H. C. Longuet-Higgins (1958) in molecular physics; it was generalized by Michael Berry in (1984). It is also known as the Pancharatnam–Berry phase, Pancharatnam phase, or Berry phase. It can be seen in the conical intersection of potential energy surfaces and in the Aharonov–Bohm effect. Geometric phase around the conical intersection involving the ground electronic state of the C6H3F3+ molecular ion is discussed on pages 385–386 of the textbook by Bunker and Jensen. In the case of the Aharonov–Bohm effect, the adiabatic parameter is the magnetic field enclosed by two interference paths, and it is cyclic in the sense that these two paths form a loop. In the case of the conical intersection, the adiabatic parameters are the molecular coordinates. Apart from quantum mechanics, it arises in a variety of other wave systems, such as classical optics. As a rule of thumb, it can occur whenever there are at least two parameters characterizing a wave in the vicinity of some sort of singularity or hole in the topology; two parameters are required because either the set of nonsingular states will not be simply connected, or there will be nonzero holonomy. Waves are characterized by amplitude and phase, and may vary as a function of those parameters. The geometric phase occurs when both parameters are changed simultaneously but very slowly (adiabatically), and eventually brought back to the initial configuration. In quantum mechanics, this could involve rotations but also translations of particles, which are apparently undone at the end. One might expect that the waves in the system return to the initial state, as characterized by the amplitudes and phases (and accounting for the passage of time). However, if the parameter excursions correspond to a loop instead of a self-retracing back-and-forth variation, then it is possible that the initial and final states differ in their phases. This phase difference is the geometric phase, and its occurrence typically indicates that the system's parameter dependence is singular (its state is undefined) for some combination of parameters. To measure the geometric phase in a wave system, an interference experiment is required. The Foucault pendulum is an example from classical mechanics that is sometimes used to illustrate the geometric phase. This mechanics analogue of the geometric phase is known as the Hannay angle. Berry phase in quantum mechanics In a quantum system at the n-th eigenstate, an adiabatic evolution of the Hamiltonian sees the system remain in the n-th eigenstate of the Hamiltonian, while also obtaining a phase factor. The phase obtained has a contribution from the state's time evolution and another from the variation of the eigenstate with the changing Hamiltonian. The second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution. However, if the variation is cyclical, the Berry phase cannot be cancelled; it is invariant and becomes an observable property of the system. 
By reviewing the proof of the adiabatic theorem given by Max Born and Vladimir Fock, in Zeitschrift für Physik 51, 165 (1928), we could characterize the whole change of the adiabatic process into a phase term. Under the adiabatic approximation, the coefficient of the n-th eigenstate under an adiabatic process is given by c_n(t) = c_n(0) exp(−(i/ħ) ∫₀ᵗ E_n(t′) dt′) · exp(i γ_n(t)), with γ_n(t) = i ∫₀ᵗ ⟨n(t′)| ∂/∂t′ |n(t′)⟩ dt′, where γ_n(t) is the Berry phase with respect to parameter t. Changing the variable t into generalized parameters R, we could rewrite the Berry phase as γ_n[C] = i ∮_C ⟨n(R)| ∇_R |n(R)⟩ · dR, where R parametrizes the cyclic adiabatic process. Note that the normalization of |n(R)⟩ implies that the integrand is imaginary, so that γ_n is real. R follows a closed path C in the appropriate parameter space. The geometric phase along the closed path C can also be calculated by integrating the Berry curvature over the surface enclosed by C. Examples of geometric phases Foucault pendulum One of the easiest examples is the Foucault pendulum. An easy explanation in terms of geometric phases is given by Wilczek and Shapere: To put it in different words, there are no inertial forces that could make the pendulum precess, so the precession (relative to the direction of motion of the path along which the pendulum is carried) is entirely due to the turning of this path. Thus the orientation of the pendulum undergoes parallel transport. For the original Foucault pendulum, the path is a circle of latitude, and by the Gauss–Bonnet theorem, the phase shift is given by the enclosed solid angle. Derivation In a near-inertial frame moving in tandem with the Earth, but not sharing the rotation of the Earth about its own axis, the suspension point of the pendulum traces out a circular path during one sidereal day. At the latitude of Paris, 48 degrees 51 minutes north, a full precession cycle takes just under 32 hours, so after one sidereal day, when the Earth is back in the same orientation as one sidereal day before, the oscillation plane has turned by just over 270 degrees. If the plane of swing was north–south at the outset, it is east–west one sidereal day later. This also implies that there has been exchange of momentum; the Earth and the pendulum bob have exchanged momentum. The Earth is so much more massive than the pendulum bob that the Earth's change of momentum is unnoticeable. Nonetheless, since the pendulum bob's plane of swing has shifted, the conservation laws imply that an exchange must have occurred. Rather than tracking the change of momentum, the precession of the oscillation plane can efficiently be described as a case of parallel transport. For that, it can be demonstrated, by composing the infinitesimal rotations, that the precession rate is proportional to the projection of the angular velocity of Earth onto the normal direction to Earth, which implies that the trace of the plane of oscillation will undergo parallel transport. After 24 hours, the difference between initial and final orientations of the trace in the Earth frame is α = −2π sin φ, where φ is the latitude (a magnitude of about 271 degrees at the latitude of Paris), which corresponds to the value given by the Gauss–Bonnet theorem. α is also called the holonomy or geometric phase of the pendulum. When analyzing earthbound motions, the Earth frame is not an inertial frame, but rotates about the local vertical at an effective rate of 2π sin φ radians per day. A simple method employing parallel transport within cones tangent to the Earth's surface can be used to describe the rotation angle of the swing plane of Foucault's pendulum.
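The loop integral for γ_n can also be evaluated numerically by discretizing the path and accumulating the phases of successive overlaps (the Pancharatnam product discussed later in this article). The following is a minimal Python sketch, assuming only NumPy and taking as a test case the spin-1/2 eigenstate of a magnetic field swept around a cone of fixed polar angle θ; the accumulated phase should approach minus one half of the enclosed solid angle, −π(1 − cos θ).

```python
import numpy as np

def spin_up_state(theta, phi):
    """Eigenstate of n.sigma with eigenvalue +1, n = (sin t cos p, sin t sin p, cos t)."""
    return np.array([np.cos(theta / 2.0),
                     np.exp(1j * phi) * np.sin(theta / 2.0)])

def berry_phase(theta, n_points=2000):
    """Discrete Berry phase for a loop at fixed polar angle theta (phi from 0 to 2*pi)."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    states = [spin_up_state(theta, p) for p in phis]
    states.append(states[0])                 # close the loop
    product = 1.0 + 0.0j
    for a, b in zip(states[:-1], states[1:]):
        product *= np.vdot(a, b)             # overlap <a|b>
    # gamma = -arg prod_k <psi_k|psi_{k+1}>, defined modulo 2*pi
    return -np.angle(product)

theta = np.pi / 3
print("numerical Berry phase :", berry_phase(theta))
print("-(solid angle)/2      :", -np.pi * (1.0 - np.cos(theta)))
```

For θ = π/3 both numbers come out near −π/2; for loops enclosing a larger solid angle the numerical result is only defined modulo 2π, which is the expected ambiguity of a phase.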
From the perspective of an Earth-bound coordinate system (the measuring circle and the observer are fixed to the Earth, even if the ground's reaction to the Coriolis force is not perceived by the observer when moving), using a rectangular coordinate system with its x axis pointing east and its y axis pointing north, the precession of the pendulum is due to the Coriolis force (other fictitious forces, such as gravity and the centrifugal force, have no direct precession component, and Euler's force is small because Earth's rotation speed is nearly constant). Consider a planar pendulum with constant natural frequency ω in the small angle approximation. There are two forces acting on the pendulum bob: the restoring force provided by gravity and the wire, and the Coriolis force (the centrifugal force, opposed to the gravitational restoring force, can be neglected). The Coriolis force at latitude φ is horizontal in the small angle approximation and is given by F_c,x = 2 m Ω sin φ (dy/dt), F_c,y = −2 m Ω sin φ (dx/dt), where m is the mass of the bob, Ω is the rotational frequency of Earth, F_c,x is the component of the Coriolis force in the x direction, and F_c,y is the component of the Coriolis force in the y direction. The restoring force, in the small-angle approximation and neglecting centrifugal force, is given by F_r,x = −m ω² x, F_r,y = −m ω² y. Using Newton's laws of motion, this leads to the system of equations d²x/dt² = −ω² x + 2 Ω sin φ (dy/dt), d²y/dt² = −ω² y − 2 Ω sin φ (dx/dt). Switching to complex coordinates z = x + i y, the equations read d²z/dt² + 2 i Ω sin φ (dz/dt) + ω² z = 0. To first order in Ω/ω, this equation has the solution z = exp(−i Ω sin φ · t) (A exp(i ω t) + B exp(−i ω t)). If time is measured in days, then Ω = 2π and the pendulum rotates by an angle of −2π sin φ during one day. Polarized light in an optical fiber A second example is linearly polarized light entering a single-mode optical fiber. Suppose the fiber traces out some path in space, and the light exits the fiber in the same direction as it entered. Then compare the initial and final polarizations. In semiclassical approximation the fiber functions as a waveguide, and the momentum of the light is at all times tangent to the fiber. The polarization can be thought of as an orientation perpendicular to the momentum. As the fiber traces out its path, the momentum vector of the light traces out a path on the sphere in momentum space. The path is closed, since initial and final directions of the light coincide, and the polarization is a vector tangent to the sphere. Going to momentum space is equivalent to taking the Gauss map. There are no forces that could make the polarization turn, just the constraint to remain tangent to the sphere. Thus the polarization undergoes parallel transport, and the phase shift is given by the enclosed solid angle (times the spin, which in the case of light is 1). Stochastic pump effect A stochastic pump is a classical stochastic system that responds with nonzero, on average, currents to periodic changes of parameters. The stochastic pump effect can be interpreted in terms of a geometric phase in the evolution of the moment generating function of stochastic currents. Spin The geometric phase can be evaluated exactly for a spin-1/2 particle in a magnetic field. Geometric phase defined on attractors While Berry's formulation was originally defined for linear Hamiltonian systems, it was soon realized by Ning and Haken that a similar geometric phase can be defined for entirely different systems such as nonlinear dissipative systems that possess certain cyclic attractors. They showed that such cyclic attractors exist in a class of nonlinear dissipative systems with certain symmetries.
There are several important aspects of this generalization of Berry's phase: 1) Instead of the parameter space for the original Berry phase, this Ning-Haken generalization is defined in phase space; 2) Instead of the adiabatic evolution in quantum mechanical system, the evolution of the system in phase space needs not to be adiabatic. There is no restriction on the time scale of the temporal evolution; 3) Instead of a Hermitian system or non-hermitian system with linear damping, systems can be generally nonlinear and non-hermitian. Exposure in molecular adiabatic potential surface intersections There are several ways to compute the geometric phase in molecules within the Born–Oppenheimer framework. One way is through the "non-adiabatic coupling matrix" defined by where is the adiabatic electronic wave function, depending on the nuclear parameters . The nonadiabatic coupling can be used to define a loop integral, analogous to a Wilson loop (1974) in field theory, developed independently for molecular framework by M. Baer (1975, 1980, 2000). Given a closed loop , parameterized by where is a parameter, and . The D-matrix is given by (here is a path-ordering symbol). It can be shown that once is large enough (i.e. a sufficient number of electronic states is considered), this matrix is diagonal, with the diagonal elements equal to where are the geometric phases associated with the loop for the -th adiabatic electronic state. For time-reversal symmetrical electronic Hamiltonians the geometric phase reflects the number of conical intersections encircled by the loop. More accurately, where is the number of conical intersections involving the adiabatic state encircled by the loop An alternative to the D-matrix approach would be a direct calculation of the Pancharatnam phase. This is especially useful if one is interested only in the geometric phases of a single adiabatic state. In this approach, one takes a number of points along the loop with and then using only the j-th adiabatic states computes the Pancharatnam product of overlaps: In the limit one has (see Ryb & Baer 2004 for explanation and some applications) Geometric phase and quantization of cyclotron motion An electron subjected to magnetic field moves on a circular (cyclotron) orbit. Classically, any cyclotron radius is acceptable. Quantum-mechanically, only discrete energy levels (Landau levels) are allowed, and since is related to electron's energy, this corresponds to quantized values of . The energy quantization condition obtained by solving Schrödinger's equation reads, for example, for free electrons (in vacuum) or for electrons in graphene, where . Although the derivation of these results is not difficult, there is an alternative way of deriving them, which offers in some respect better physical insight into the Landau level quantization. This alternative way is based on the semiclassical Bohr–Sommerfeld quantization condition which includes the geometric phase picked up by the electron while it executes its (real-space) motion along the closed loop of the cyclotron orbit. For free electrons, while for electrons in graphene. It turns out that the geometric phase is directly linked to of free electrons and of electrons in graphene. See also Riemann curvature tensor – for the connection to mathematics Berry connection and curvature Chern class Optical rotation Winding number Notes For simplicity, we consider electrons confined to a plane, such as 2DEG and magnetic field perpendicular to the plane. 
ω_c is the cyclotron frequency (for free electrons) and v_F is the Fermi velocity (of electrons in graphene). Footnotes Sources (See chapter 13 for a mathematical treatment) Connections to other physical phenomena (such as the Jahn–Teller effect) are discussed here: Berry's geometric phase: a review Paper by Prof. Galvez at Colgate University, describing Geometric Phase in Optics: Applications of Geometric Phase in Optics Surya Ganguli, Fibre Bundles and Gauge Theories in Classical Physics: A Unified Description of Falling Cats, Magnetic Monopoles and Berry's Phase Robert Batterman, Falling Cats, Parallel Parking, and Polarized Light M. Baer, Electronic non-adiabatic transitions: Derivation of the general adiabatic-diabatic transformation matrix, Mol. Phys. 40, 1011 (1980); M. Baer, Existence of diabatic potentials and the quantization of the nonadiabatic matrix, J. Phys. Chem. A 104, 3181–3184 (2000). Further reading Michael V. Berry, The geometric phase, Scientific American 259 (6) (1988), 26–34. External links Classical mechanics Quantum phases
Geometric phase
[ "Physics", "Chemistry", "Materials_science" ]
3,102
[ "Quantum phases", "Phases of matter", "Classical mechanics", "Quantum mechanics", "Mechanics", "Condensed matter physics", "Matter" ]
323,207
https://en.wikipedia.org/wiki/George%20Constantinescu
George "Gogu" Constantinescu (; last name also Constantinesco; 4 October 1881 – 11 December 1965) was a Romanian scientist, engineer, and inventor. During his career, he registered over 130 inventions. Constantinescu was the creator of the theory of sonics, a new branch of continuum mechanics, in which he described the transmission of mechanical energy through vibrations. Biography Early years Born in Craiova in "the Doctor's House" near the Mihai Bravu Gardens, Constantinescu was influenced by his father George, born in 1844 (a professor of mathematics and engineering science, specialized in mathematics at the Sorbonne University). Gogu Constantinescu settled in the United Kingdom in 1912. He was an honorary member of the Romanian Academy. Family He married Alexandra (Sandra) Cocorescu in Richmond, London, in December 1914. The couple moved to Wembley and, after their son Ian was born, they moved to Weybridge. The marriage broke down in the 1920s and ended in divorce. He then married Eva Litton and the couple moved to Oxen House, beside Lake Coniston. Eva had two children, Richard and Michael, by a previous marriage. Inventions and designs Synchronization gear His hydraulic machine gun synchronization gear allowed airplane-mounted guns to shoot between the spinning blades of the propeller. The Constantinesco synchronization gear (or "CC" gear) was first used operationally on the D.H.4s of No. 55 squadron R.F.C. from March 1917, during World War I, and rapidly became standard equipment, replacing a variety of mechanical gears. It continued to be used by the Royal Air Force until World War II – the Gloster Gladiator being the last British fighter to be equipped with "CC" gear. Sonics In 1918, he published the book A treatise on transmission of power by vibrations in which he described his theory of sonics. The theory is applicable to various systems of power transmission but has mostly been applied to hydraulic systems. Sonics differs from hydrostatics, being based on waves, rather than pressure, in the liquid. Constantinescu argued that, contrary to popular belief, liquids are compressible. Transmission of power by waves in a liquid (e.g. water or oil) required a generator to produce the waves and a motor to use the waves to do work, either by percussion (as in rock drills) or by conversion to rotary motion. Internal combustion engines He had several patents for improvements to carburetors, for example US1206512. He also devised a hydraulic system (patent GB133719) for operating both the valves and the fuel injectors for diesel engines. Torque converter He invented a mechanical torque converter actuated by a pendulum. This was applied to the Constantinesco, a French-manufactured car. It was also tried on rail vehicles. A 250 hp petrol engined locomotive with a Constantinescu torque converter was exhibited at the 1924 Wembley Exhibition. The system was not adopted on British railways but it was applied to some railcars on the Romanian State Railways. Other Other inventions included a "railway motor wagon". The latter ran on normal flanged steel wheels but the drive used a road vehicle powertrain with rubber tyres pressed against the rails. This is similar to the system used on many modern road-rail vehicles. He also designed the Grand Mosque of Constanța (a project completed by the architect Victor Ştefănescu, then known as the Carol I Mosque). 
Recent developments Research on a sonic asynchronous motor for vehicle applications (based on Constantinescu's work) has been done at the Transilvania University of Brașov. The date of the paper is believed to be 5 October 2010. Death He died at Oxen House, beside Coniston Water on 11/12 December 1965, and is buried in the churchyard at Lowick, Cumbria. Recognition The Dimitrie Leonida Technical Museum in Bucharest has exhibits relating to George Constantinescu. References External links Biography Patents of G. Constantinescu George (Gogu) Constantinescu (ro) Autoturism Homepage YouTube showing operation of Constantinesco-Colley synchronising gear for WW1 aircraft 1881 births 1965 deaths Aerodynamicists Burials in Cumbria Carol I National College alumni Romanian emigrants to the United Kingdom Fluid mechanics People from Craiova Romanian aerospace engineers 20th-century Romanian engineers Romanian inventors Romanian scientists Titular members of the Romanian Academy
George Constantinescu
[ "Engineering" ]
908
[ "Civil engineering", "Fluid mechanics" ]
14,440,503
https://en.wikipedia.org/wiki/Indirect%20agonist
In pharmacology, an indirect agonist or indirect-acting agonist is a substance that enhances the release or action of an endogenous neurotransmitter but has no specific agonist activity at the neurotransmitter receptor itself. Indirect agonists work through varying mechanisms to achieve their effects, including transporter blockade, induction of transmitter release, and inhibition of transmitter breakdown. Mechanisms of indirect agonism Reuptake inhibition Cocaine is a monoamine transporter blocker and, thus, an indirect agonist of dopamine receptors. Cocaine binds the dopamine transporter (DAT), blocking the protein's ability to uptake dopamine from the synaptic cleft and also blocking DAT from terminating dopamine signaling. Blockage of DAT increases the extracellular concentration of dopamine, therefore increasing the amount of dopamine receptor binding and signaling. Dipyridamole inhibits reuptake of adenosine, resulting in greater extracellular concentrations of adenosine. Dipyridamole also inhibits the enzyme adenosine deaminase, the enzyme that catalyzes the breakdown of adenosine. Evoking transmitter release Fenfluramine is an indirect agonist of serotonin receptors. Fenfluramine binds to the serotonin transporter, blocking serotonin reuptake. However, fenfluramine also acts to induce non-exocytotic serotonin release; in a mechanism similar to that of methamphetamine in dopamine neurons, fenfluramine binds to VMAT2, disrupting the compartmentalization of serotonin into vesicles and increasing the concentration of cytoplasmic serotonin available for drug-induced release. References Biomolecules Proteins Neurotransmitters Medical terminology Pharmacodynamics Physiology
Indirect agonist
[ "Chemistry", "Biology" ]
388
[ "Pharmacology", "Biomolecules by chemical classification", "Natural products", "Physiology", "Pharmacodynamics", "Neurotransmitters", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Proteins", "Neurochemistry", "Molecular biology" ]
14,440,793
https://en.wikipedia.org/wiki/Magnetic%20resonance%20spectroscopic%20imaging
Magnetic resonance spectroscopic imaging (MRSI) is a noninvasive imaging method that provides spectroscopic information in addition to the image that is generated by MRI alone. Whereas traditional magnetic resonance imaging (MRI) generates a black-and-white image in which brightness is determined primarily by the T1 or T2 relaxation times of the tissue being imaged, the spectroscopic information obtained in an MRSI study can be used to infer further information about cellular activity (metabolic information). For example, in the context of oncology, an MRI scan may reveal the shape and size of a tumor, while an MRSI study provides additional information about the metabolic activity occurring in the tumor. MRSI can be performed on a standard MRI scanner, and the patient experience is the same for MRSI as for MRI. MRSI has broad applications in medicine, including oncology and general physiological studies. When hydrogen is the target element, MRSI is also called 1H-nuclear magnetic resonance spectroscopic imaging and proton magnetic resonance spectroscopic imaging. MRSI can also be performed with phosphorus, or hyperpolarized carbon-13. References Magnetic resonance spectroscopic imaging entry in the public domain NCI Dictionary of Cancer Terms External links Magnetic resonance imaging
Magnetic resonance spectroscopic imaging
[ "Physics", "Chemistry", "Astronomy" ]
256
[ "Spectroscopy stubs", "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Magnetic resonance imaging", "Astronomy stubs", "Nuclear chemistry stubs", "Nuclear magnetic resonance stubs", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
14,444,088
https://en.wikipedia.org/wiki/Positronium%20hydride
Positronium hydride, or hydrogen positride is an exotic molecule consisting of a hydrogen atom bound to an exotic atom of positronium (that is a combination of an electron and a positron). Its formula is PsH. It was predicted to exist in 1951 by A. Ore, and subsequently studied theoretically, but was not observed until 1990. R. Pareja, R. Gonzalez from Madrid trapped positronium in hydrogen-laden magnesia crystals. The trap was prepared by Yok Chen from the Oak Ridge National Laboratory. In this experiment the positrons were thermalized so that they were not traveling at high speed, and they then reacted with H− ions in the crystal. In 1992 it was created in an experiment done by David M. Schrader and F.M. Jacobsen and others at the Aarhus University in Denmark. The researchers made the positronium hydride molecules by firing intense bursts of positrons into methane, which has the highest density of hydrogen atoms. Upon slowing down, the positrons were captured by ordinary electrons to form positronium atoms which then reacted with hydrogen atoms from the methane. Decay PsH is constructed from one proton, two electrons, and one positron. The binding energy is . The lifetime of the molecule is 0.65 nanoseconds. The lifetime of positronium deuteride is indistinguishable from the normal hydride. The decay of positronium is easily observed by detecting the two 511 keV gamma ray photons emitted in the decay. The energy of the photons from positronium should differ slightly by the binding energy of the molecule. However, this has not yet been detected. Properties The structure of PsH is as a diatomic molecule, with a chemical bond between the two positively charged centres. The electrons are more concentrated around the proton. Predicting the properties of PsH is a four body Coulomb problem. Calculated using the stochastic variational method, the size of the molecule is larger than dihydrogen, which has a bond length of 0.7413 Å. In PsH the positron and proton are separated on average by 3.66 a0 (1.94 Å). The positronium in the molecule is swollen compared to the positronium atom, increasing to 3.48 a0 compared to 3 a0. Average distance of the electrons from the proton is larger than the dihydrogen molecule, at 2.31 a0 with the maximum density at 2.8 au. Formation Due to its short lifetime, establishing the chemistry of positronium hydride poses difficulties. Theoretical calculations can predict outcomes. One method of formation is through alkali metal hydrides reacting with positrons. Molecules with dipole moments greater than 1.625 debye are predicted to attract and hold positrons in a bound state. Crawford's model predicts this positron capture. In the case of lithium hydride, sodium hydride and potassium hydride molecules, this adduct decomposes and positronium hydride and the alkali positive ion form. M+H− + e+ → PsH + M+ Similar compounds PsH is a simple exotic compound. Other compounds of positronium are possible by the reactions e+ + AB → PsA + B+. Other substances that contain positronium are di-positronium and the ion Ps− with two electrons. Molecules of Ps with normal matter include halides and cyanide. Positronium antihydride (Ps) contains antihydrogen instead of hydrogen. It can be made as the anti-hydride ion (+) reacts with positronium (Ps) + + Ps → Ps + e+ The GBAR experiment uses the similar reaction + Ps → + + e− which cannot produce positronium antihydride, as there is too much energy left over for positronium antihydride to be stable. 
References Extra reading Antimatter Molecular physics Quantum electrodynamics Exotic atoms Substances discovered in the 1990s
Positronium hydride
[ "Physics", "Chemistry" ]
871
[ "Antimatter", "Molecular physics", "Exotic atoms", "Subatomic particles", " molecular", "nan", "Nuclear physics", "Atomic", "Atoms", "Matter", " and optical physics" ]
14,446,614
https://en.wikipedia.org/wiki/Frank%E2%80%93Read%20source
In materials science, a Frank–Read source is a mechanism explaining the generation of multiple dislocations in specific well-spaced slip planes in crystals when they are deformed. When a crystal is deformed, in order for slip to occur, dislocations must be generated in the material. This implies that, during deformation, dislocations must be primarily generated in these planes. Cold working of metal increases the number of dislocations by the Frank–Read mechanism. Higher dislocation density increases yield strength and causes work hardening of metals. The mechanism of dislocation generation was proposed by and named after British physicist Charles Frank and Thornton Read. In 2024, Cheng Long and coworkers demonstrated that the Frank-Read mechanism can generate disclination loops in nematic liquid crystals. This finding suggests that the Frank-Read mechanism may arise in a broader class of materials containing topological defect lines. History Charles Frank detailed the history of the discovery from his perspective in Proceedings of the Royal Society in 1980. In 1950 Charles Frank, who was then a research fellow in the physics department at the University of Bristol, visited the United States to participate in a conference on crystal plasticity in Pittsburgh. Frank arrived in the United States well in advance of the conference to spend time at a naval laboratory and to give a lecture at Cornell University. When, during his travels in Pennsylvania, Frank visited Pittsburgh, he received a letter from fellow scientist Jock Eshelby suggesting that he read a recent paper by Gunther Leibfried. Frank was supposed to board a train to Cornell to give his lecture at Cornell, but before departing for Cornell he went to the library at Carnegie Institute of Technology to obtain a copy of the paper. The library did not yet have the journal with Leibfried's paper, but the staff at the library believed that the journal could be in the recently arrived package from Germany. Frank decided to wait for the library to open the package, which did indeed contain the journal. Upon reading the paper he took a train to Cornell, where he was told to pass the time until 5:00, as the faculty was in meeting. Frank decided to take a walk between 3:00 and 5:00. During those two hours, while considering the Leibfried paper, he formulated the theory for what was later named the Frank–Read source. A couple of days later, he traveled to the conference on crystal plasticity in Pittsburgh where he ran into Thornton Read in the hotel lobby. Upon encountering each other, the two scientists immediately discovered that they had come up with the same idea for dislocation generation almost simultaneously (Frank during his walk at Cornell, and Thornton Read during tea the previous Wednesday) and decided to write a joint paper on the topic. The mechanism for dislocation generation described in that paper is now known as the Frank–Read source. Mechanism The Frank–Read source is a mechanism based on dislocation multiplication in a slip plane under shear stress. Consider a straight dislocation in a crystal slip plane with its two ends, A and B, pinned. If a shear stress is exerted on the slip plane then a force , where b is the Burgers vector of the dislocation and x is the distance between the pinning sites A and B, is exerted on the dislocation line as a result of the shear stress. This force acts perpendicularly to the line, inducing the dislocation to lengthen and curve into an arc. 
The bending force caused by the shear stress is opposed by the line tension of the dislocation, which acts on each end of the dislocation along the direction of the dislocation line away from A and B with a magnitude of Gb²/2, where G is the shear modulus. If the dislocation bends, the ends of the dislocation make an angle with the horizontal between A and B, which gives the line tensions acting along the ends a vertical component acting directly against the force induced by the shear stress. If sufficient shear stress is applied and the dislocation bends, the vertical component from the line tensions, which acts directly against the force caused by the shear stress, grows as the dislocation approaches a semicircular shape. When the dislocation becomes a semicircle, all of the line tension is acting against the bending force induced by the shear stress, because the line tension is perpendicular to the horizontal between A and B. For the dislocation to reach this point, it is thus evident that the equation τ b x = 2 (G b² / 2) = G b², where τ is the applied shear stress, must be satisfied, and from this we can solve for the shear stress: τ = G b / x. This is the stress required to generate a dislocation from a Frank–Read source. If the shear stress increases any further and the dislocation passes the semicircular equilibrium state, it will spontaneously continue to bend and grow, spiraling around the A and B pinning points, until the segments spiraling around the A and B pinning points collide and cancel. The process results in a dislocation loop around A and B in the slip plane which expands under continued shear stress, and also in a new dislocation line between A and B which, under renewed or continued shear, can continue to generate dislocation loops in the manner just described. A Frank–Read loop can thus generate many dislocations in a plane in a crystal under applied stress. The Frank–Read source mechanism explains why dislocations are primarily generated on certain slip planes; dislocations are primarily generated in just those planes with Frank–Read sources. It is important to note that if the shear stress does not exceed G b / x and the dislocation does not bend past the semicircular equilibrium state, it will not form a dislocation loop and instead revert to its original state. References Materials science de:Frank-Read-Quelle
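For a sense of the magnitudes involved, the following is a minimal Python sketch evaluating the critical stress G b / x for a few pinning-point separations; the material constants are typical handbook values for copper and are assumed here purely for illustration.

```python
# Critical shear stress of a Frank-Read source: tau = G * b / x
G = 48e9        # shear modulus, Pa (assumed typical value for copper)
b = 0.255e-9    # magnitude of the Burgers vector, m (assumed value for copper)

for x_nm in (100, 500, 1000, 5000):   # pinning-point separations in nanometres
    x = x_nm * 1e-9
    tau = G * b / x                    # critical shear stress, Pa
    print(f"x = {x_nm:5d} nm  ->  tau = {tau / 1e6:7.1f} MPa")
```

With these assumed constants, a micrometre-scale pinning separation gives a critical stress of roughly ten megapascals, and the stress rises as the pinning points move closer together, consistent with the 1/x dependence derived above.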
Frank–Read source
[ "Physics", "Materials_science", "Engineering" ]
1,190
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
14,448,037
https://en.wikipedia.org/wiki/DaT%20scan
DaT Scan (DaT scan or Dopamine Transporter Scan) commonly refers to a diagnostic method, based on SPECT imaging, to investigate if there is a loss of dopaminergic neurons in striatum. The term may also refer to a brand name of Ioflupane (123I) tracer used for the study. The scan principle is based on use of the radiopharmaceutical Ioflupane (123I) which binds to dopamine transporters (DaT). The signal from them is then detected by the use of single-photon emission computed tomography (SPECT) which uses special gamma-cameras to create a pictographic representation of the distribution of dopamine transporters in the brain. DaTSCAN is indicated in cases of tremor when its origin is uncertain. Although this method can distinguish essential tremor from Parkinson's syndrome, it is unable to distinguish between Parkinson's disease, Dementia with Lewy bodies, Parkinson's disease dementia, multiple system atrophy or progressive supranuclear palsy. There is evidence that DaTSCAN is accurate in diagnosing early Parkinson's. Procedure At the beginning a patient should take two iodine tablets and wait for one hour. These pills are important because they prevent the accumulation of radioactive substances in the thyroid gland. After one hour, the patient gets an injection to the shoulder, which contains the radiopharmaceutical, and then waits for 4 hours. The concentration of the substance increases, and then it is scanned by a gamma-camera, which is located around the patient's head. The whole examination lasts about 30–45 minutes, and it is non-invasive. If a patient uses certain medications listed below, it is necessary to stop usage for a few days or weeks before the DaTSCAN, but only after a consultation with the patient's doctor. The examination takes just a few hours, so patients do not need to stay in a hospital overnight, but they have to drink much more than they are used to and go to the toilet more often. It is important for a fast elimination of the radioactive substances from the body. Contraindications pregnancy breast-feeding severe renal or hepatic insufficiency allergy to iodine substances certain medications – stimulants or noradrenalin and some antidepressants Differential Diagnosis Parkinson's disease, multiple system atrophy or progressive supranuclear palsy Essential tremor Lewy body disease References External links European Parkinson's Disease Association DaTSCAN Patient's view Neurology Neuroimaging Medical physics Dopamine reuptake inhibitors Parkinson's disease 3D nuclear medical imaging Radiobiology
DaT scan
[ "Physics", "Chemistry", "Biology" ]
550
[ "Radiobiology", "Radioactivity", "Applied and interdisciplinary physics", "Medical physics" ]
10,794,057
https://en.wikipedia.org/wiki/Mesoporous%20silica
Mesoporous silica is a form of silica that is characterised by its mesoporous structure, that is, having pores that range from 2 nm to 50 nm in diameter. According to IUPAC's terminology, mesoporosity sits between microporous (<2 nm) and macroporous (>50 nm). Mesoporous silica is a relatively recent development in nanotechnology. The most common types of mesoporous nanoparticles are MCM-41 and SBA-15. Research continues on the particles, which have applications in catalysis, drug delivery and imaging. Mesoporous ordered silica films have been also obtained with different pore topologies. A compound producing mesoporous silica was patented around 1970. It went almost unnoticed and was reproduced in 1997. Mesoporous silica nanoparticles (MSNs) were independently synthesized in 1990 by researchers in Japan. They were later produced also at Mobil Corporation laboratories and named Mobil Composition of Matter (or Mobil Crystalline Materials, MCM). Six years later, silica nanoparticles with much larger (4.6 to 30 nanometer) pores were produced at the University of California, Santa Barbara. The material was named Santa Barbara Amorphous type material, or SBA-15. These particles also have a hexagonal array of pores. The researchers who invented these types of particles planned to use them as molecular sieves. Today, mesoporous silica nanoparticles have many applications in medicine, biosensors, thermal energy storage, water/gas filtration and imaging. Synthesis Mesoporous silica nanoparticles are synthesized by reacting tetraethyl orthosilicate with a template made of micellar rods. The result is a collection of nano-sized spheres or rods that are filled with a regular arrangement of pores. The template can then be removed by washing with a solvent adjusted to the proper pH. Mesoporous particles can also be synthesized using a simple sol-gel method such as the Stöber process, or a spray drying method. Tetraethyl orthosilicate is also used with an additional polymer monomer (as a template). However, TEOS is not the most effective precursor for synthesizing such particles; a better precursor is (3-Mercaptopropyl)trimethoxysilane, often abbreviated to MPTMS. Use of this precursor drastically reduces the chance of aggregation and ensures more uniform spheres. Drug delivery The large surface area of the pores allows the particles to be filled with a drug or a cytotoxin. Like a Trojan Horse, the particles will be taken up by certain biological cells through endocytosis, depending on what chemicals are attached to the outside of the spheres. Some types of cancer cells will take up more of the particles than healthy cells will, giving researchers hope that MCM-41 will one day be used to treat certain types of cancer. Ordered mesoporous silica (e.g. SBA-15, TUD-1, HMM-33, and FSM-16) also show potential to boost the in vitro and in vivo dissolution of poorly water-soluble drugs. Many drug-candidates coming from drug discovery suffer from a poor water solubility. An insufficient dissolution of these hydrophobic drugs in the gastrointestinal fluids strongly limits the oral bioavailability. One example is itraconazole which is an antimycoticum known for its poor aqueous solubility. Upon introduction of itraconazole-on-SBA-15 formulation in simulated gastrointestinal fluids, a supersaturated solution is obtained giving rise to enhanced transepithelial intestinal transport. 
Also the efficient uptake into the systemic circulation of SBA-15 formulated itraconazole has been demonstrated in vivo (rabbits and dogs). This approach based on SBA-15 yields stable formulations and can be used for a wide variety of poorly water-soluble compounds. Biosensors The structure of these particles allows them to be filled with a fluorescent dye that would normally be unable to pass through cell walls. The MSN material is then capped off with a molecule that is compatible with the target cells. When the MSNs are added to a cell culture, they carry the dye across the cell membrane. These particles are optically transparent, so the dye can be seen through the silica walls. The dye in the particles does not have the same problem with self-quenching that a dye in solution has. The types of molecules that are grafted to the outside of the MSNs will control what kinds of biomolecules are allowed inside the particles to interact with the dye. See also Mesoporous material Mesoporous silicates References Silicon dioxide Silica
Mesoporous silica
[ "Materials_science" ]
1,005
[ "Mesoporous material", "Porous media" ]
10,794,084
https://en.wikipedia.org/wiki/WinBUGS
WinBUGS is statistical software for Bayesian analysis using Markov chain Monte Carlo (MCMC) methods. It is based on the BUGS (Bayesian inference Using Gibbs Sampling) project started in 1989. It runs under Microsoft Windows, though it can also be run on Linux or Mac using Wine. It was developed by the BUGS Project, a team of British researchers at the MRC Biostatistics Unit, Cambridge, and Imperial College School of Medicine, London. Originally intended to solve problems encountered in medical statistics, it soon became widely used in other disciplines, such as ecology, sociology, and geology. The last version of WinBUGS was version 1.4.3, released in August 2007. Development is now focused on OpenBUGS, an open-source version of the package. WinBUGS 1.4.3 remains available as a stable version for routine use, but is no longer being developed. References Further reading External links WinBUGS Homepage Statistical software Monte Carlo software Windows-only freeware Bayesian statistics
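As a rough illustration of the Gibbs-sampling idea that BUGS automates (this is a hand-written Python sketch for a normal model with assumed conjugate priors, not WinBUGS code or the BUGS modelling language), the sampler below alternately draws the mean and the precision from their full conditional distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y_i ~ Normal(mean=3, sd=2)
y = rng.normal(3.0, 2.0, size=200)
n, ybar = y.size, y.mean()

# Assumed priors for illustration: mu ~ Normal(0, 100^2), tau ~ Gamma(0.01, rate=0.01)
mu0, prec0 = 0.0, 1.0 / 100.0**2
a0, b0 = 0.01, 0.01

mu, tau = 0.0, 1.0
draws = []
for _ in range(5000):
    # Full conditional of mu given tau: Normal with updated mean and precision
    prec = prec0 + n * tau
    mean = (prec0 * mu0 + tau * n * ybar) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    # Full conditional of tau given mu: Gamma(shape, rate); NumPy uses scale = 1/rate
    shape = a0 + 0.5 * n
    rate = b0 + 0.5 * np.sum((y - mu) ** 2)
    tau = rng.gamma(shape, 1.0 / rate)
    draws.append((mu, tau))

mus, taus = np.array(draws[1000:]).T   # discard burn-in
print("posterior mean of mu:", mus.mean())
print("posterior mean of sd:", (1.0 / np.sqrt(taus)).mean())
```

In a BUGS-style workflow the user writes only the model specification and the software derives and iterates the full conditionals; the sketch above simply makes that iteration explicit for one small, conjugate example.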
WinBUGS
[ "Mathematics" ]
210
[ "Statistical software", "Mathematical software" ]
10,795,030
https://en.wikipedia.org/wiki/Anatoxin-a
Anatoxin-a, also known as Very Fast Death Factor (VFDF), is a secondary, bicyclic amine alkaloid and cyanotoxin with acute neurotoxicity. It was first discovered in the early 1960s in Canada, and was isolated in 1972. The toxin is produced by multiple genera of cyanobacteria and has been reported in North America, South America, Central America, Europe, Africa, Asia, and Oceania. Symptoms of anatoxin-a toxicity include loss of coordination, muscular fasciculations, convulsions and death by respiratory paralysis. Its mode of action is through the nicotinic acetylcholine receptor (nAchR) where it mimics the binding of the receptor's natural ligand, acetylcholine. As such, anatoxin-a has been used for medicinal purposes to investigate diseases characterized by low acetylcholine levels. Due to its high toxicity and potential presence in drinking water, anatoxin-a poses a threat to animals, including humans. While methods for detection and water treatment exist, scientists have called for more research to improve reliability and efficacy. Anatoxin-a is not to be confused with guanitoxin (formerly anatoxin-a(S)), another potent cyanotoxin that has a similar mechanism of action to that of anatoxin-a and is produced by many of the same cyanobacteria genera, but is structurally unrelated. History Anatoxin-a was first discovered by P.R. Gorham in the early 1960s, after several herds of cattle died as a result of drinking water from Saskatchewan Lake in Ontario, Canada, which contained toxic algal blooms. It was isolated in 1972 by J.P. Devlin from the cyanobacteria Anabaena flos-aquae. Occurrence Anatoxin-a is a neurotoxin produced by multiple genera of freshwater cyanobacteria that are found in water bodies globally. Some freshwater cyanobacteria are known to be salt tolerant and thus it is possible for anatoxin-a to be found in estuarine or other saline environments. Blooms of cyanobacteria that produce anatoxin-a among other cyanotoxins are increasing in frequency due to increasing temperatures, stratification, and eutrophication due to nutrient runoff. These expansive cyanobacterial harmful algal blooms, known as cyanoHABs, increase the amount of cyanotoxins in the surrounding water, threatening the health of both aquatic and terrestrial organisms. Some species of cyanobacteria that produce anatoxin-a don't produce surface water blooms but instead form benthic mats. Many cases of anatoxin-a related animal deaths have occurred due to ingestion of detached benthic cyanobacterial mats that have washed ashore. Anatoxin-a producing cyanobacteria have also been found in soils and aquatic plants. Anatoxin-a sorbs well to negatively charged sites in clay-like, organic-rich soils and weakly to sandy soils. One study found both bound and free anatoxin-a in 38% of aquatic plants sampled across 12 Nebraskan reservoirs, with much higher incidence of bound anatoxin-a than free. Experimental studies In 1977, Carmichael, Gorham, and Biggs experimented with anatoxin-a. They introduced toxic cultures of A. flos-aquae into the stomachs of two young male calves, and observed that muscular fasciculations and loss of coordination occurred in a matter of minutes, while death due to respiratory failure occurred anywhere between several minutes and a few hours. They also established that extensive periods of artificial respiration did not allow for detoxification to occur and natural neuromuscular functioning to resume. 
From these experiments, they calculated that the oral minimum lethal dose (MLD) of the algae (not of the anatoxin molecule itself) for calves is roughly 420 mg/kg body weight. In the same year, Devlin and colleagues discovered the bicyclic secondary amine structure of anatoxin-a. They also performed experiments similar to those of Carmichael et al. on mice. They found that anatoxin-a kills mice 2–5 minutes after intraperitoneal injection, with death preceded by twitching, muscle spasms, paralysis and respiratory arrest; hence the name Very Fast Death Factor. They determined the LD50 for mice to be 250 μg/kg body weight. Electrophysiological experiments done by Spivak et al. (1980) on frogs showed that anatoxin-a is a potent agonist of the muscle-type (α1)2βγδ nAChR. Anatoxin-a induced depolarizing neuromuscular blockade, contracture of the frog's rectus abdominis muscle, depolarization of the frog sartorius muscle, desensitization, and alteration of the action potential. Later, Thomas et al. (1993), through their work with chicken α4β2 nAChR subunits expressed in mouse M10 cells and chicken α7 nAChR expressed in oocytes from Xenopus laevis, showed that anatoxin-a is also a potent agonist of neuronal nAChRs. Toxicity Effects Laboratory studies using mice showed that characteristic effects of acute anatoxin-a poisoning via intraperitoneal injection include muscle fasciculations, tremors, staggering, gasping, respiratory paralysis, and death within minutes. Zebrafish exposed to anatoxin-a-contaminated water had altered heart rates. There have been cases of non-lethal poisoning in humans who have ingested water from streams and lakes that contain various genera of cyanobacteria that are capable of producing anatoxin-a. The effects of non-lethal poisoning were primarily gastrointestinal: nausea, vomiting, diarrhea, and abdominal pain. A case of lethal poisoning was reported in Wisconsin after a teen jumped into a pond contaminated with cyanobacteria. Exposure routes Oral Ingestion of drinking water or recreational water that is contaminated with anatoxin-a can have fatal consequences, since anatoxin-a was found to be quickly absorbed through the gastrointestinal tract in animal studies. Dozens of cases of animal deaths due to ingestion of anatoxin-a-contaminated water from lakes or rivers have been recorded, and it is suspected to have also been the cause of death of one human. One study found that anatoxin-a is capable of binding to acetylcholine receptors and inducing toxic effects at concentrations in the nanomolar (nM) range if ingested. Dermal Dermal exposure is the most likely form of contact with cyanotoxins in the environment. Recreational exposure to river, stream, and lake waters contaminated with algal blooms has been known to cause skin irritation and rashes. The first study that looked at in vitro cytotoxic effects of anatoxin-a on human skin cell proliferation and migration found that anatoxin-a exerted no effect at 0.1 μg/mL or 1 μg/mL, and a weak toxic effect at 10 μg/mL only after an extended period of contact (48 hours). Inhalation No data on inhalation toxicity of anatoxin-a are currently available, though severe respiratory distress occurred in a water skier after they inhaled water spray containing another cyanobacterial neurotoxin, saxitoxin. It is possible that inhalation of water spray containing anatoxin-a could have similar consequences. 
Mechanism of toxicity Anatoxin-a is an agonist of both neuronal α4β2 and α7 nicotinic acetylcholine receptors present in the CNS as well as the (α1)2βγδ muscle-type nAchRs that are present at the neuromuscular junction. (Anatoxin-a has an affinity for these muscle-type receptors that is about 20 times greater than that of acetylcholine.) However, the cyanotoxin has little effect on muscarinic acetylcholine receptors; its selectivity for these receptors is roughly 100-fold lower than its selectivity for nAchRs. Anatoxin-a also shows much less potency in the CNS than in neuromuscular junctions. In hippocampal and brain stem neurons, a 5 to 10 times greater concentration of anatoxin-a was necessary to activate nAchRs than was required in the PNS. In normal circumstances, acetylcholine binds to nAchRs in the post-synaptic neuronal membrane, causing a conformational change in the extracellular domain of the receptor which in turn opens the channel pore. This allows Na+ and Ca2+ ions to move into the neuron, causing cell depolarization and inducing the generation of action potentials, which allows for muscle contraction. The acetylcholine neurotransmitter then dissociates from the nAchR, where it is rapidly cleaved into acetate and choline by acetylcholinesterase. Anatoxin-a binding to these nAchRs causes the same effects in neurons. However, anatoxin-a binding is irreversible, and the anatoxin-a-nAchR complex cannot be broken down by acetylcholinesterase. Thus, the nAchR is temporarily locked open, which leads to overstimulation due to the constant generation of action potentials. Of the two enantiomers of anatoxin-a, the naturally occurring positive enantiomer, (+)-anatoxin-a, is 150-fold more potent than the synthetic negative enantiomer, (−)-anatoxin-a. This is because (+)-anatoxin-a, in its s-cis enone conformation, has a distance of 6.0 Å between its nitrogen and carbonyl group, which corresponds well to the 5.9 Å distance that separates the nitrogen and oxygen in acetylcholine. Respiratory arrest, which results in a lack of an oxygen supply to the brain, is the most evident and lethal effect of anatoxin-a. Injections of mice, rats, birds, dogs, and calves with lethal doses of anatoxin-a have demonstrated that death is preceded by a sequence of muscle fasciculations, decreased movement, collapse, exaggerated abdominal breathing, cyanosis and convulsions. In mice, anatoxin-a also seriously impacted blood pressure and heart rate, and caused severe acidosis. Cases of toxicity Many cases of wildlife and livestock deaths due to anatoxin-a have been reported since its discovery. Domestic dog deaths due to the cyanotoxin, as determined by analysis of stomach contents, have been observed in the lower North Island of New Zealand in 2005, in eastern France in 2003, in California in the United States in 2002 and 2006, in Scotland in 1992, in Ireland in 1997 and 2005, and in Germany in 2017 and 2020. In each case, the dogs began showing muscle convulsions within minutes, and were dead within a matter of hours. Numerous cattle fatalities arising from the consumption of water contaminated with cyanobacteria that produce anatoxin-a have been reported in the United States, Canada, and Finland between 1980 and the present. A particularly notable case of anatoxin-a poisoning is that of lesser flamingos at Lake Bogoria in Kenya. 
The cyanotoxin, which was identified in the stomachs and fecal pellets of the birds, killed roughly 30,000 flamingos in the second half of 1999, and continues to cause mass fatalities annually, devastating the flamingo population. The toxin is introduced into the birds via water contaminated with cyanobacterial mat communities that arise from the hot springs in the lake bed. Synthesis Laboratory synthesis Cyclic expansion of tropanes The first naturally occurring starting material used for tropane ring expansion into anatoxin-a was cocaine, which has similar stereochemistry to anatoxin-a. Cocaine is first converted into the endo isomer of a cyclopropane, which is then photolytically cleaved to obtain an alpha,beta-unsaturated ketone. Through the use of diethyl azodicarboxylate, the ketone is demethylated and anatoxin-a is formed. A similar, more recent synthesis pathway involves producing 2-tropinone from cocaine and treating the product with ethyl chloroformate, producing a bicyclic ketone. This product is combined with trimethylsilyldiazomethane, an organoaluminum Lewis acid and a trimethylsilyl enol ether to produce tropinone. This method undergoes several more steps, producing useful intermediates as well as anatoxin-a as a final product. Cyclization of cyclooctenes The first and most extensively explored approach used to synthesize anatoxin-a in vitro, cyclooctene cyclization, involves 1,5-cyclooctadiene as its starting material. This starting substance is reacted to form methylamine and combined with hypobromous acid to form anatoxin-a. Another method developed in the same laboratory uses an amino alcohol in conjunction with mercury(II) acetate and sodium borohydride. The product of this reaction was transformed into an alpha,beta-unsaturated ketone and oxidized by diethyl azodicarboxylate to form anatoxin-a. Enantioselective enolization strategy This method for anatoxin-a production was one of the first that does not utilize a chemically analogous starting substance for anatoxin formation. Instead, a racemic mixture of 3-tropinone is used with a chiral lithium amide base and additional ring expansion reactions in order to produce a ketone intermediate. Addition of an organocuprate to the ketone produces an enol triflate derivative, which is then cleaved by hydrogenolysis and treated with a deprotecting agent in order to produce anatoxin-a. Similar strategies have also been developed and utilized by other laboratories. Intramolecular cyclization of iminium ions Iminium ion cyclization utilizes several different pathways to create anatoxin-a, but each of them proceeds through a pyrrolidine iminium ion. The major differences among the pathways relate to the precursors used to produce the iminium ion and the total yield of anatoxin-a at the end of the process. These separate pathways include production of alkyl iminium salts, acyl iminium salts and tosyl iminium salts. Enyne metathesis Enyne metathesis of anatoxin-a involves the use of a ring-closing mechanism and is one of the more recent advances in anatoxin-a synthesis. In all methods involving this pathway, pyroglutamic acid is used as a starting material in conjunction with a Grubbs catalyst. Similar to iminium cyclization, the first attempted synthesis of anatoxin-a using this pathway used a 2,5-cis-pyrrolidine as an intermediate. Biosynthesis Anatoxin-a is synthesized in vivo in the species Anabaena flos-aquae, as well as in several other genera of cyanobacteria. Anatoxin-a and related chemical structures are produced using acetate and glutamate. 
Further enzymatic reduction of these precursors results in the formation of anatoxin-a. Homoanatoxin, a similar chemical, is produced by Oscillatoria formosa and utilizes the same precursor. However, homoanatoxin undergoes a methyl addition by S-adenosyl-L-methionine instead of an addition of electrons, resulting in a similar analogue. The biosynthetic gene cluster (BGC) for anatoxin-a was described from Oscillatoria PCC 6506 in 2009. Stability and degradation Anatoxin-a is unstable in water and other natural conditions, and in the presence of UV light undergoes photodegradation, being converted to the less toxic products dihydroanatoxin-a and epoxyanatoxin-a. The photodegradation of anatoxin-a is dependent on pH and sunlight intensity but independent of oxygen, indicating that the degradation by light is not achieved through the process of photo-oxidation. Studies have shown that some microorganisms are capable of degrading anatoxin-a. A study done by Kiviranta and colleagues in 1991 showed that the bacterial genus Pseudomonas was capable of degrading anatoxin-a at a rate of 2–10 μg/ml per day. Later experiments done by Rapala and colleagues (1994) supported these results. They compared the effects of sterilized and non-sterilized sediments on anatoxin-a degradation over the course of 22 days, and found that after that time vials with the sterilized sediments showed similar levels of anatoxin-a as at the commencement of the experiment, while vials with non-sterilized sediment showed a 25-48% decrease. Detection There are two categories of anatoxin-a detection methods. Biological methods have involved administration of samples to mice and other organisms more commonly used in ecotoxicological testing, such as brine shrimp (Artemia salina), larvae of the freshwater crustacean Thamnocephalus platyurus, and various insect larvae. Problems with this methodology include an inability to determine whether it is anatoxin-a or another neurotoxin that causes the resulting deaths. Large amounts of sample material are also needed for such testing. In addition to the biological methods, scientists have used chromatography to detect anatoxin-a. This is complicated by the rapid degradation of the toxin and the lack of commercially available standards for anatoxin-a. Public health Despite the relatively low frequency of anatoxin-a relative to other cyanotoxins, its high toxicity (the lethal dose is not known for humans, but is estimated to be less than 5 mg for an adult male) means that it is still considered a serious threat to terrestrial and aquatic organisms, most significantly to livestock and to humans. Anatoxin-a is suspected to have been involved in the death of at least one person. The threat posed by anatoxin-a and other cyanotoxins is increasing as both fertilizer runoff, leading to eutrophication in lakes and rivers, and higher global temperatures contribute to a greater frequency and prevalence of cyanobacterial blooms. Water regulations The World Health Organization in 1999 and EPA in 2006 both came to the conclusion that there was not enough toxicity data for anatoxin-a to establish a formal tolerable daily intake (TDI) level, though some places have implemented levels of their own. United States Drinking water advisory levels Anatoxin-a is not regulated under the Safe Drinking Water Act, but states are allowed to create their own standards for contaminants that are unregulated. 
Currently there are four states that have set drinking water advisory levels for anatoxin-a as seen in the table below. On October 8, 2009 the EPA published the third Drinking Water Contaminant Candidate List (CCL) which included anatoxin-a (among other cyanotoxins), indicating that anatoxin-a may be present in public water systems but is not regulated by the EPA. Anatoxin-a's presence on the CCL means that it may need to be regulated by the EPA in the future, pending further information on its health effects in humans. Recreational water advisory levels In 2008 the state of Washington implemented a recreational advisory level for anatoxin-a of 1 μg/L in order to better manage algal blooms in lakes and protect users from exposure to the blooms. Canada The Canadian province of Québec has a drinking water Maximum Accepted Value for anatoxin-a of 3.7 μg/L. New Zealand New Zealand has a drinking water Maximum Accepted Value for anatoxin-a of 6 μg/L. Water treatment As of now, there is no official guideline level for anatoxin-a, although scientists estimate that a level of 1 μg l−1 would be sufficiently low. Likewise, there are no official guidelines regarding testing for anatoxin-a. Among methods of reducing the risk for cyanotoxins, including anatoxin-a, scientists look favorably on biological treatment methods because they do not require complicated technology, are low maintenance, and have low running costs. Few biological treatment options have been tested for anatoxin-a specifically, although a species of Pseudomonas, capable of biodegrading anatoxin-a at a rate of 2–10 μg ml−1 d−1, has been identified. Biological (granular) activated carbon (BAC) has also been tested as a method of biodegradation, but it is inconclusive whether biodegradation occurred or if anatoxin-a was simply adsorbing the activated carbon. Others have called for additional studies to determine more about how to use activated carbon effectively. Chemical treatment methods are more common in drinking water treatment compared to biological treatment, and numerous processes have been suggested for anatoxin-a. Oxidants such as potassium permanganate, ozone, and advanced oxidation processes (AOPs) have worked in lowering levels of anatoxin-a, but others, including photocatalysis, UV photolysis, and chlorination, have not shown great efficacy. Directly removing the cyanobacteria in the water treatment process through physical treatment (e.g., membrane filtration) is another option because most of the anatoxin-a is contained within the cells when the bloom is growing. However, anatoxin-a is released from cyanobacteria into water when they senesce and lyse, so physical treatment may not remove all of the anatoxin-a present. Additional research needs to be done to find more reliable and efficient methods of both detection and treatment. Laboratory uses Anatoxin-a is a very powerful nicotinic acetylcholine receptor agonist and as such has been extensively studied for medicinal purposes. It is mainly used as a pharmacological probe in order to investigate diseases characterized by low acetylcholine levels, such as muscular dystrophy, myasthenia gravis, Alzheimer disease, and Parkinson disease. Further research on anatoxin-a and other less potent analogues are being tested as possible replacements for acetylcholine. 
Genera of cyanobacteria that produce anatoxin-a Anabaena (Dolichospermum) Aphanizomenon Cylindrospermopsis Cylindrospermum Lyngbya Microcystis Nostoc Oscillatoria Microcoleus (Phormidium) Planktothrix Raphidiopsis Tychonema Woronichinia See also Guanitoxin Epibatidine References Further reading External links Very Fast Death Factor (Anatoxin-a) at The Periodic Table of Videos (University of Nottingham) Molecule of the Month: Anatoxin at the School of Chemistry, Physics, and Environmental Studies, University of Sussex at Brighton Neurotoxins Nitrogen heterocycles Alkaloids Ketones Cyanotoxins Cycloalkenes Amines Heterocyclic compounds with 2 rings Enones Bacterial alkaloids Nicotinic agonists
Anatoxin-a
[ "Chemistry" ]
4,972
[ "Biomolecules by chemical classification", "Natural products", "Ketones", "Functional groups", "Amines", "Organic compounds", "Neurochemistry", "Neurotoxins", "Bases (chemistry)", "Alkaloids" ]
10,798,875
https://en.wikipedia.org/wiki/Starlite
Starlite is an intumescent material said to be able to withstand and insulate from extreme heat. It was invented by British hairdresser and amateur chemist Maurice Ward (1933–2011) during the 1970s and 1980s, and received significant publicity after coverage of the material aired in 1990 on the BBC science and technology show Tomorrow's World. The name Starlite was coined by Ward's granddaughter Kimberly. The American company Thermashield, LLC, says it acquired the rights to Starlite in 2013 and replicated it. It is the only company to have itself publicly demonstrated the technology and have samples tested by third parties. Thermashield's Starlite has successfully passed femtosecond laser testing at the Georgia Institute of Technology and ASTM D635-15 Standard Testing. Properties Live demonstrations on Tomorrow's World and BBC Radio 4 showed that an egg coated in Starlite could remain raw, and cold enough to be picked up with a bare hand, even after five minutes in the flame of an oxyacetylene blowtorch. It would also prevent a blowtorch from damaging a human hand. When heat is applied, the material chars, which creates an expanding low density carbon foam which is very thermally resistant. Even the application of a plasma torch, capable of cutting eighteen-inch thick steel plate, has little impact on Starlite. It was reported that it took nine seconds to heat a warhead to 900 °C, but a thin layer of the compound prevented the temperature from rising above 40 °C. Starlite was also claimed to have been able to withstand a laser beam that could produce a temperature of 10,000 °C. Starlite reacts more efficiently as more heat is applied. The MOD's report, as published in Jane's International Defence Review 4/1993, speculated this was due to particle scatter of an ablative layer, thereby increasing the reflective properties of the compound. Testing continues for thermal conductivity and capacity under different conditions. Starlite may become contaminated with dust residue and so degrade with use. Keith Lewis, a retired MOD officer, noted that the material guards only against thermal damage and not the physical damage caused by an explosion, which can destroy the insulating layer. Materials scientist Mark Miodownik described Starlite as a type of intumescent paint, and one of the materials he would most like to see for himself. He also admitted some doubt about the commercial potential of Starlite. Its main use appears to be as a flame retardant. Testing of modern composite materials enhanced with Starlite could expand the range of potential uses and applications of this substance. Composition Starlite's composition is a closely guarded secret. "The actual composition of Starlite is known only to Maurice and one or two members of his family," former Chief Scientific Adviser to the Ministry of Defence Sir Ronald Mason averred. It is said to contain a variety of organic polymers and co-polymers with both organic and inorganic additives, including borates and small quantities of ceramics and other special barrier ingredients—up to 21 in all. Perhaps uniquely for a material said to be thermal proof, it is said to be not entirely inorganic but up to 90 per cent organic. Nicola McDermott, Ward's youngest daughter, stated that Starlite is 'natural' and edible, and that it has been fed to dogs and horses without ill effects. 
The American company Thermashield, LLC, which owns the Starlite formula, stated in a radio interview that Starlite is not made from household ingredients and there is no PVA glue, baking soda or baking powder in it. Commercialisation Ward allowed various organisations such as the Atomic Weapons Establishment and ICI to conduct tests on samples, but did not permit them to retain samples for fear of reverse engineering. Ward maintained that his invention was worth billions. Sir Ronald Mason told a reporter in 1993, "I started this path with Maurice very sceptical. I’m totally convinced of the reality of the claims." He further states, "We don't still quite understand how it works, but that it works is undoubtedly the case." NASA became involved in Starlite in 1994, and NASA engineer Rosendo 'Rudy' Naranjo talked about its potential in a Dateline NBC report. The Dateline reporter stated that Starlite could perhaps help with the fragile Space Shuttle heat shield. Naranjo said of their discussions with Ward, "We have done a lot of evaluation and … we know all the tremendous possibilities that this material has." Boeing, which was the main contractor for the Space Shuttles in 1994, became interested in the potential of Starlite to eliminate flammable materials in their jets. By the time of Ward's death in 2011 there appeared to have been no commercialisation of Starlite, and the formulation of the material had not been released to the public. According to a 2016 broadcast of the BBC programme The Naked Scientists, Ward took his secrets with him when he died. According to a 2020 BBC Online release in the BBC Reel category, Thermashield, LLC had purchased all of Ward's notes, equipment and other related materials and is working towards a viable commercial product. Replication A YouTube user, NightHawkInLight, attempted in 2018 to create materials that replicated the properties of Starlite. Observing that the mechanism that generates an expanding carbon foam in Starlite is similar to black snake fireworks, NightHawkInLight concocted a formula using cornstarch, baking soda, and PVA glue. After drying, the hardened material creates a thin layer of carbon foam on the surface when exposed to high heat, insulating the material from further heat transfer. He later improved it by taking out the PVA glue and baking soda, and adding in flour, sugar and borax. Using borax and flour makes it less expensive, mold and insect resistant, and able to work when dry. Several experiments testing the replication and variant recipes show that they can handle lasers, thermite, torches, etc. But the replication recipe failed when it was used to make a crucible for an induction furnace. See also Lost inventions Firepaste References External links . . . (Wayback Machine; March 9, 2020) . . . Organic polymers Biomaterials Brand name materials Lost inventions Firestops
Starlite
[ "Physics", "Chemistry", "Biology" ]
1,284
[ "Biomaterials", "Organic polymers", "Organic compounds", "Materials", "Matter", "Medical technology" ]
10,799,117
https://en.wikipedia.org/wiki/Fashion%20design
Fashion design is the art of applying design, aesthetics, clothing construction and natural beauty to clothing and its accessories. It is influenced by culture and different trends and has varied over time and place. "A fashion designer creates clothing, including dresses, suits, pants, and skirts, and accessories like shoes and handbags, for consumers. They can specialize in clothing, accessory, or jewelry design, or may work in more than one of these areas." Fashion designers Fashion designers work in a variety of ways when designing their pieces and accessories such as rings, bracelets, necklaces and earrings. Due to the time required to put a garment out on the market, designers must anticipate changes to consumer desires. Fashion designers are responsible for creating looks for individual garments, involving shape, color, fabric, trimming, and more. Fashion designers attempt to design clothes that are functional as well as aesthetically pleasing. They consider who is likely to wear a garment and the situations in which it will be worn, and they work with a wide range of materials, colors, patterns, and styles. Though most clothing worn for everyday wear falls within a narrow range of conventional styles, unusual garments are usually sought for special occasions such as evening wear or party dresses. Some clothes are made specifically for an individual, as in the case of haute couture or bespoke tailoring. Today, most clothing is designed for the mass market, especially casual and everyday wear, which are commonly known as ready to wear or fast fashion. Structure There are different lines of work for designers in the fashion industry. Fashion designers who work full-time for a fashion house, as 'in-house designers', own the designs and may either work alone or as a part of a design team. Freelance designers who work for themselves sell their designs to fashion houses, directly to shops, or to clothing manufacturers. There are quite a few fashion designers who choose to set up their labels, which offers them full control over their designs. Others are self-employed and design for individual clients. Other high-end fashion designers cater to specialty stores or high-end fashion department stores. These designers create original garments, as well as those that follow established fashion trends. Most fashion designers, however, work for apparel manufacturers, creating designs of men's, women's, and children's fashions for the mass market. Large designer brands that have a 'name' as their brand such as Abercrombie & Fitch, Justice, or Juicy are likely to be designed by a team of individual designers under the direction of a design director. Designing a garment Garment design includes components of "color, texture, space, lines, pattern, silhouette, shape, proportion, balance, emphasis, rhythm, and harmony". All of these elements come together to design a garment by creating visual interest for consumers. Fashion designers work in various ways, some start with a vision in their head and later move into drawing it on paper or on a computer, while others go directly into draping fabric onto a dress form, also known as a mannequin. The design process is unique to the designer and it is rather intriguing to see the various steps that go into the process. Designing a garment starts with patternmaking. The process begins with creating a sloper or base pattern. The sloper will fit the size of the model a designer is working with or a base can be made by utilizing standard size charting. 
Three major manipulations within patternmaking include dart manipulation, contouring, and added fullness. Dart manipulation allows for a dart to be moved on a garment in various places but does not change the overall fit of the garment. Contouring allows for areas of a garment to fit closer to areas of the torso such as the bust or shoulders. Added fullness increases the length or width of a pattern to change the frame as well as fit of the garment. The fullness can be added on one side, unequal, or equally to the pattern. A designer may choose to work with certain apps that can help connect all their ideas together and expand their thoughts to create a cohesive design. When a designer is completely satisfied with the fit of the toile (or muslin), they will consult a professional pattern maker who will then create the finished, working version of the pattern out of paper or using a computer program. Finally, a sample garment is made up and tested on a model to make sure it is an operational outfit. Fashion design is expressive, the designers create art that may be functional or non-functional. Technology within fashion Over the years, there has been an increase in the use of technology within fashion design. Iris van Herpen, a Dutch designer, incorporated 3D printing in her Crystallization collection. Software can aid designers in the product development stage. Designers can use artificial intelligence and virtual reality to prototype clothing. 3D modeling within software allows for initial sampling and development stages for partnerships with suppliers before the garments are produced. History Modern Western fashion design is often considered to have started in the 19th century with Charles Frederick Worth who was the first designer to have his label sewn into the garments that he created. Before the former draper set up his maison couture (fashion house) in Paris, clothing design and creation of the garments were handled largely by anonymous seamstresses. At the time high fashion descended from what was popularly worn at royal courts. Worth's success was such that he was able to dictate to his customers what they should wear, instead of following their lead as earlier dressmakers had done. The term couturier was in fact first created in order to describe him. While all articles of clothing from any time period are studied by academics as costume design, only clothing created after 1858 is considered fashion design. It was during this period that many design houses began to hire artists to sketch or paint designs for garments. Rather than going straight into manufacturing, the images were shown to clients to gain approval, which saved time and money for the designer. If the client liked their design, the patrons commissioned the garment from the designer, and it was produced for the client in the fashion house. This designer-patron construct launched designers sketching their work rather than putting the completed designs on models. Types of fashion Garments produced by clothing manufacturers fall into three main categories, although these may be split up into additional, different types. Haute couture Until the 1950s, fashion clothing was predominately designed and manufactured on a made-to-measure or haute couture basis (French for high-sewing), with each garment being created for a specific client. 
A couture garment is made to order for an individual customer, and is usually made from high-quality, expensive fabric, sewn with extreme attention to detail and finish, often using time-consuming, hand-executed techniques. Look and fit take priority over the cost of materials and the time it takes to make. Due to the high cost of each garment, haute couture makes little direct profit for the fashion houses, but is important for prestige and publicity. Ready-to-wear (prêt-à-porter) Ready-to-wear, or prêt-à-porter, clothes are a cross between haute couture and mass market. They are not made for individual customers, but great care is taken in the choice and cut of the fabric. Clothes are made in small quantities to guarantee exclusivity, so they are rather expensive. Ready-to-wear collections are usually presented by fashion houses each season during a period known as Fashion Week. This takes place on a citywide basis and occurs twice a year. The main seasons of Fashion Week include; spring/summer, fall/winter, resort, swim, and bridal. Half-way garments are an alternative to ready-to-wear, "off-the-peg", or prêt-à-porter fashion. Half-way garments are intentionally unfinished pieces of clothing that encourage co-design between the "primary designer" of the garment, and what would usually be considered, the passive "consumer". This differs from ready-to-wear fashion, as the consumer is able to participate in the process of making and co-designing their clothing. During the Make{able} workshop, Hirscher and Niinimaki found that personal involvement in the garment-making process created a meaningful "narrative" for the user, which established a person-product attachment and increased the sentimental value of the final product. Otto von Busch also explores half-way garments and fashion co-design in his thesis, "Fashion-able, Hacktivism and engaged Fashion Design". Mass market Currently, the fashion industry relies more on mass-market sales. The mass market caters for a wide range of customers, producing ready-to-wear garments using trends set by the famous names in fashion. They often wait around a season to make sure a style is going to catch on before producing their versions of the original look. To save money and time, they use cheaper fabrics and simpler production techniques which can easily be done by machines. The end product can, therefore, be sold much more cheaply. There is a type of design called "kutch" originated from the German word kitschig, meaning "trashy" or "not aesthetically pleasing". Kitsch can also refer to "wearing or displaying something that is therefore no longer in fashion". Income The median annual wages for salaried fashion designers was $79,290 in May 2023, approximately $38.12 per hour. The middle 50 percent earned an average of 76,700. The lowest 10 percent earned $37,090 and the highest 10 percent earned $160,850. The highest number of employment lies within Apparel, Piece Goods, and Notions Merchant Wholesalers with a percentage of 5.4. The average is 7,820 based on employment. The lowest employment is within Apparel Knitting Mills at .46% of the industry employed, which averages to 30 workers within the specific specialty. In 2016, 23,800 people were counted as fashion designers in the United States. Geographically, the largest employment state of Fashion designers is New York with an employment of 7,930. New York is considered a hub for fashion designers due to a large percentage of luxury designers and brands. 
Fashion industry Fashion today is a global industry, and most major countries have a fashion industry. Seven countries have established an international reputation in fashion: the United States, France, Italy, United Kingdom, Japan, Germany and Belgium. The "big four" fashion capitals of the fashion industry are New York City, Paris, Milan, and London. United States The United States is home to the largest, wealthiest, and most multi-faceted fashion industry. Most fashion houses in the United States are based in New York City, with a high concentration centered in the Garment District neighborhood. On the US west coast, there is also to a lesser extent a significant number of fashion houses in Los Angeles, where a substantial percentage of high fashion clothing manufactured in the United States is actually made. Miami has also emerged as a new fashion hub, especially in regards to swimwear and other beach-oriented fashion. A semi-annual event held every February and September, New York Fashion Week is the oldest of the four major fashion weeks held throughout the world. Parsons The New School for Design, located in the Greenwich Village neighborhood of Lower Manhattan in New York City, is considered one of the top fashion schools in the world. There are numerous fashion magazines published in the United States and distributed to a global readership. Examples include Vogue, Harper's Bazaar, and Cosmopolitan. American fashion design is highly diverse, reflecting the enormous ethnic diversity of the population, but is largely dominated by a clean-cut, urban, hip aesthetic, and often favors a more casual style, reflecting the athletic, health-conscious lifestyles of the suburban and urban middle classes. The annual Met Gala ceremony in Manhattan is widely regarded as the world's most prestigious haute couture fashion event and is a venue where fashion designers and their creations are celebrated. Social media is also a place where fashion is presented most often. Some influencers are paid huge amounts of money to promote a product or clothing item, where the business hopes many viewers will buy the product off the back of the advertisement. Instagram is the most popular platform for advertising, but Facebook, Snapchat, Twitter and other platforms are also used. In New York, the LGBT fashion design community contributes very significantly to promulgating fashion trends, and drag celebrities have developed a profound influence upon New York Fashion Week. Prominent American brands and designers include Calvin Klein, Ralph Lauren, Coach, Nike, Vans, Marc Jacobs, Tommy Hilfiger, DKNY, Tom Ford, Caswell-Massey, Michael Kors, Levi Strauss and Co., Estée Lauder, Revlon, Kate Spade, Alexander Wang, Vera Wang, Victoria's Secret, Tiffany and Co., Converse, Oscar de la Renta, John Varvatos, Anna Sui, Prabal Gurung, Bill Blass, Halston, Carhartt, Brooks Brothers, Stuart Weitzman, Diane von Furstenberg, J. Crew, American Eagle Outfitters, Steve Madden, Abercrombie and Fitch, Juicy Couture, Thom Browne, Guess, Supreme, and The Timberland Company. Belgium In the late 1980s and early 1990s, Belgian fashion designers brought a new fashion image that mixed East and West, and brought a highly individualised, personal vision on fashion. Well known Belgian designers are the Antwerp Six: Ann Demeulemeester, Dries Van Noten, Dirk Bikkembergs, Dirk Van Saene, Walter Van Beirendonck and Marina Yee, as well as Martin Margiela, Raf Simons, Kris Van Assche, Bruno Pieters, Anthony Vaccarello. 
United Kingdom London has long been the capital of the United Kingdom fashion industry and has a wide range of foreign designs which have integrated with modern British styles. Typical British design is smart but innovative yet recently has become more and more unconventional, fusing traditional styles with modern techniques. Vintage styles play an important role in the British fashion and styling industry. Stylists regularly 'mix and match' the old with the new, which gives British style a unique, bohemian aesthetic. Irish fashion (both design and styling) is also heavily influenced by fashion trends from Britain. Well-known British designers include Thomas Burberry, Alfred Dunhill, Paul Smith, Vivienne Westwood, Stella McCartney, Jimmy Choo, John Galliano, John Richmond, Alexander McQueen, Matthew Williamson, Gareth Pugh, Hussein Chalayan and Neil Barrett. France Most French fashion houses are in Paris, which is the capital of French fashion. Traditionally, French fashion is chic and stylish, defined by its sophistication, cut, and smart accessories. French fashion is internationally acclaimed. Spain Madrid and Barcelona are the main fashion centers in Spain. Spanish fashion is often more conservative and traditional but also more 'timeless' than other fashion cultures. Spaniards are known not to take great risks when dressing. Nonetheless, many of the fashion brands and designers coming from Spain. The most notable luxury houses are Loewe and Balenciaga. Famous designers include Manolo Blahnik, Elio Berhanyer, Cristóbal Balenciaga, Paco Rabanne, Adolfo Domínguez, Manuel Pertegaz, Jesús del Pozo, Felipe Varela and Agatha Ruiz de la Prada. Spain is also home to large fashion brands such as Zara, Massimo Dutti, Bershka, Pull&Bear, Mango, Desigual, Pepe Jeans and Camper. Germany Berlin is the centre of fashion in Germany (prominently displayed at Berlin Fashion Week), while Düsseldorf holds Europe's largest fashion trade fairs with Igedo. Other important centres of the scene are Munich, Hamburg, and Cologne. German fashion is known for its elegant lines as well as unconventional young designs and the great variety of styles. India Most of the Indian fashion houses are in Mumbai, Lakme Fashion Week is considered one of the premier fashion events in the country. Lakme Fashion Week in India takes place twice a year and is held in the populous city of Mumbai. The first show occurs during April featuring summer collections. The second show takes place in August to showcase the winter collection. Lakme, a cosmetic brand for Indian women, hosts the event. This fashion week started in 1999 and originally partnered with the FDCI, Fashion Design Council of India then later  switched to a sponsorship with Lakme. Italy Milan is Italy's fashion capital. Most of the older Italian couturiers are in Rome. However, Milan and Florence are the Italian fashion capitals, and it is the exhibition venue for their collections. Italian fashion features casual and glamorous elegance. In Italy, Milan Fashion Week takes place twice a year in February and September. Milan Fashion week puts fashion in the spotlight and celebrates it in the heart of Milan with fashion lovers, buyers and media. Japan Most Japanese fashion houses are in Tokyo which is home to Tokyo Fashion Week, Asia's largest fashion week. The Japanese look is loose and unstructured (often resulting from complicated cutting), colors tend to the sombre and subtle, and richly textured fabrics. 
Famous Japanese designers include Kenzo Takada, Issey Miyake, Yohji Yamamoto and Rei Kawakubo. China Chinese clothing has historically been associated with lower quality both inside and outside China, leading to a stigma on Chinese brands. Due to government censorship, Chinese citizens were only able to access fashion magazines in the 1990s. However, as more and more Chinese designers matriculate from the world's top fashion schools, Chinese designers such as Shushu/Tong and Rui Zhou have made their way into the world's top fashion weeks, and Shanghai has become a fashion hub in China. In the early 2020s, Gen Z shoppers pioneered the guochao () movement, a trend of preferring homegrown designers which incorporate aspects of Chinese history and culture. Hong Kong clothing brand Shanghai Tang's design concept is inspired by Chinese clothing and set out to rejuvenate Chinese fashion of the 1920s and 30s, with a modern twist of the 21st century and its usage of bright colours. Additionally, a revival in interest in traditional Han clothing has led to interest in haute couture clothing with historical Chinese details, particularly around Chinese New Year. Soviet Union Fashion in the Soviet Union largely followed general trends of the Western world. However, the state's socialist ideology consistently moderated and influenced these trends. In addition, shortages of consumer goods meant that the general public did not have ready access to pre-made fashion. Switzerland Most of the Swiss fashion houses are in Zürich. The Swiss look is casual elegant and luxurious with a slight touch of quirkiness. Additionally, it has been greatly influenced by the dance club scene. Mexico In the development of Mexican indigenous dress, the fabrication was determined by the materials and resources that are available in specific regions, impacting the "fabric, shape and construction of a people's clothing". Textiles were created from plant fibers including cotton and agave. Class status differentiated what fabric was worn. Mexican dress was influenced by geometric shapes to create the silhouettes. Huipil a blouse characterized by a "loose, sleeveless tunic made of two or three joined webs of cloth sewn lengthwise" is an important historical garment, often seen today. After the Spanish Conquest, traditional Mexican clothing shifted to take a Spanish resemblance. Mexican indigenous groups rely on specific embroidery and colors to differentiate themselves from each other. Mexican Pink is a significant color to the identity of Mexican art and design and general spirit. The term "Rosa Mexicano" as described by Ramón Valdiosera was established by prominent figures such as Dolores del Río and designer Ramón Val in New York. When newspapers and magazines such as El Imparcial and El Mundo Ilustrado circulated in Mexico, became a significant movement, as it informed the large cities, such as Mexico City, of European fashions. This encouraged the founding of department stores, changing the existent pace of fashion. With access to European fashion and dress, those with high social status relied on adopting those elements to distinguish themselves from the rest. Juana Catarina Romero was a successful entrepreneur and pioneer in this movement. Fashion design terms A fashion designer conceives garment combinations of line, proportion, color, and texture. While sewing and pattern-making skills are beneficial, they are not a pre-requisite of successful fashion design. Most fashion designers are formally trained or apprenticed. 
A technical designer works with the design team and the factories overseas to ensure correct garment construction, appropriate fabric choices and a good fit. The technical designer fits the garment samples on a fit model, and decides which fit and construction changes to make before mass-producing the garment. A pattern maker (also referred as pattern master or pattern cutter) drafts the shapes and sizes of a garment's pieces. This may be done manually with paper and measuring tools or by using a CAD computer software program. Another method is to drape fabric directly onto a dress form. The resulting pattern pieces can be constructed to produce the intended design of the garment and required size. Formal training is usually required for working as a pattern marker. A tailor makes custom designed garments made to the client's measure; especially suits (coat and trousers, jacket and skirt, et cetera). Tailors usually undergo an apprenticeship or other formal training. A textile designer designs fabric weaves and prints for clothes and furnishings. Most textile designers are formally trained as apprentices and in school. A stylist co-ordinates the clothes, jewelry, and accessories used in fashion photography and catwalk presentations. A stylist may also work with an individual client to design a coordinated wardrobe of garments. Many stylists are trained in fashion design, the history of fashion, and historical costume, and have a high level of expertise in the current fashion market and future market trends. However, some simply have a strong aesthetic sense for pulling great looks together. A fashion buyer selects and buys the mix of clothing available in retail shops, department stores, and chain stores. Most fashion buyers are trained in business and/or fashion studies. A seamstress sews ready-to-wear or mass-produced clothing by hand or with a sewing machine, either in a garment shop or as a sewing machine operator in a factory. She (or he) may not have the skills to make (design and cut) the garments, or to fit them on a model. A dressmaker specializes in custom-made women's clothes: day, cocktail, and evening dresses, business clothes and suits, trousseaus, sports clothes, and lingerie. A fashion forecaster predicts what colours, styles and shapes will be popular ("on-trend") before the garments are on sale in stores. A model wears and displays clothes at fashion shows and in photographs. A fit model aids the fashion designer by wearing and commenting on the fit of clothes during their design and pre-manufacture. Fit models need to be a particular size for this purpose. A fashion journalist writes fashion articles describing the garments presented or fashion trends, for magazines or newspapers. A fashion photographer produces photographs about garments and other fashion items along with models and stylists for magazines or advertising agencies. 
See also Fashion Fashion design copyright History of western fashion List of fashion designers List of fashion education programs List of fashion topics List of individual dresses Runway (fashion) Deconstruction (fashion) Sustainable fashion Textile design Western dress codes References Bibliography Breward, Christopher, The culture of fashion: a new history of fashionable dress, Manchester: Manchester University Press, 2003, Hollander, Anne, Seeing through clothes, Berkeley: University of California Press, 1993, Hollander, Anne, Sex and suits: the evolution of modern dress, New York: Knopf, 1994, Hollander, Anne, Feeding the eye: essays, New York: Farrar, Straus, and Giroux, 1999, Hollander, Anne, Fabric of vision: dress and drapery in painting, London: National Gallery, 2002, Kawamura, Yuniya, Fashion-ology: an introduction to Fashion Studies, Oxford and New York: Berg, 2005, Lipovetsky, Gilles (translated by Catherine Porter), The empire of fashion: dressing modern democracy, Woodstock: Princeton University Press, 2002, McDermott, Kathleen, Style for all: why fashion, invented by kings, now belongs to all of us (An illustrated history), 2010, — Many hand-drawn color illustrations, extensive annotated bibliography and reading guide Mckay Rosenberg, Dawn, Fashion designer job description: Salary, skills, & more. Retrieved May 10, 2021, from https://www.thebalancecareers.com/fashion-designer-526016 Perrot, Philippe (translated by Richard Bienvenu), Fashioning the bourgeoisie: a history of clothing in the nineteenth century, Princeton NJ: Princeton University Press, 1994, Steele, Valerie, Paris fashion: a cultural history, (2. ed., rev. and updated), Oxford: Berg, 1998, Steele, Valerie, Fifty years of fashion: new look to now, New Haven: Yale University Press, 2000, Steele, Valerie, Encyclopedia of clothing and fashion, Detroit: Thomson Gale, 2005 Strijbos, Bram. (2021, May 10). All the news about Milan Fashion week on FashionUnited. Retrieved May 10, 2021, from https://fashionweekweb.com/milan-fashion-week Sterlacci, Francesca. (n.d.). What is a fashion designer? Retrieved May 10, 2021, from https://fashion-history.lovetoknow.com/fashion-clothing-industry/what-is-fashion-designer Design occupations Arts occupations
Fashion design
[ "Engineering" ]
5,316
[ "Design occupations", "Design", "Fashion design" ]
10,799,349
https://en.wikipedia.org/wiki/GIS%20and%20hydrology
Geographic information systems (GISs) have become a useful and important tool in the field of hydrology to study and manage Earth's water resources. Climate change and greater demands on water resources require more informed management of arguably one of our most vital resources. Because water in its occurrence varies spatially and temporally throughout the hydrologic cycle, its study using GIS is especially practical. Whereas earlier GIS platforms were mostly static in their geospatial representation of hydrologic features, GIS platforms are becoming increasingly dynamic, narrowing the gap between historical data and current hydrologic reality. The elementary water cycle has inputs equal to outputs plus or minus change in storage. Hydrologists make use of this hydrologic budget when they study a watershed. The inputs in a hydrologic budget include precipitation, surface flow, and groundwater flow. Outputs consist of evapotranspiration, infiltration, surface runoff, and surface/groundwater flows. All of these quantities can be measured or estimated based on environmental data, and their characteristics can be graphically displayed and studied using GIS. GIS in surface water In the field of hydrological modeling, analysis generally begins with the sampling and measurement of existing hydrologic areas. In this stage of research, the scale and accuracy of measurements are key issues. Data may either be collected in the field or through online research. The United States Geological Survey (USGS) is a publicly available source of remotely sensed hydrological data. Historical and real-time streamflow data are also available via the internet from sources such as the National Weather Service (NWS) and the United States Environmental Protection Agency (EPA). A benefit of using GIS software for hydrological modeling is that digital visualizations of data can be linked to real-time data. GIS has revolutionized the curation, manipulation, and input of data for complex computational hydrologic models. For surface water modeling, digital elevation models (DEMs) are often layered with hydrographic data in order to determine the boundaries of a watershed. Understanding these boundaries is integral to understanding where precipitation runoff will flow. For example, in the event of snowmelt, the amount of snowfall can be input into GIS to predict the amount of water that will travel downstream. This information has applications in local government asset management, agriculture and environmental science. Another useful application of GIS is flood risk assessment. Combining digital elevation models with peak discharge data makes it possible to predict which areas of a floodplain will be submerged for a given amount of rainfall. In a study of the Illinois River watershed, Rabie (2014) found that a reasonably accurate flood risk map could be generated using only DEMs and stream gauge data. Analysis based on these two parameters alone does not account for manmade developments such as levees or drainage systems, and therefore should not be considered a comprehensive result. GIS in groundwater The use of GIS to analyze groundwater falls into the field of hydrogeology. Since 98% of available freshwater on Earth is groundwater, the need to effectively model and manage these resources is apparent. As the demand for groundwater continues to increase with the world's growing population, it is vital that these resources be properly managed. 
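The hydrologic budget described above (inputs equal outputs plus or minus the change in storage) is simple enough to express directly in code. The sketch below is a minimal, illustrative bookkeeping function; the variable names and sample numbers are invented, and all terms are assumed to be in the same units (for example, millimetres over the watershed per time step).

def storage_change(precip, surface_in, ground_in,
                   evapotranspiration, infiltration, runoff, outflow):
    """Return the change in storage for one time step of a water budget:
    inputs (precipitation, surface and groundwater inflow) minus
    outputs (ET, infiltration, runoff, surface/groundwater outflow)."""
    inputs = precip + surface_in + ground_in
    outputs = evapotranspiration + infiltration + runoff + outflow
    return inputs - outputs

# Illustrative numbers only (mm per month over a hypothetical watershed)
delta_s = storage_change(precip=90.0, surface_in=12.0, ground_in=5.0,
                         evapotranspiration=60.0, infiltration=20.0,
                         runoff=15.0, outflow=8.0)
print(f"Change in storage: {delta_s:+.1f} mm")  # positive value means storage gain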
Indeed, when groundwater usage is not monitored sufficiently, it may result in damage to aquifers or groundwater-related subsidence, as occurred in the Ogallala aquifer in the United States. In some cases, GIS can be used to analyze drainage and groundwater data in order to select suitable sites for groundwater recharge. See also GIS in environmental contamination Geographic information system ArcGIS GIS and aquatic science References Girish Kumar, M., Bali, R. and Agarwal, A.K (2009). GIS Integration of remote sensing and electrical data for hydrological exploration- A case study of Bhakar watershed, India. Hydrological Sciences Journal 54 (5) pp 949–960. Dingman, S. Lawrence, Physical Hydrology, Prentice-Hall, 2nd Edition, 2002 Fetter, C.W. Applied Hydrogeology, Prentice-Hall, 4th Edition, 2001 Maidment, David R., ed. Arc Hydro: GIS for Water Resources, ESRI Press, 2002 External links Spatial Hydrology GIS Lounge ArcNews Online US Army Geospatial Center — For information on OCONUS surface water and groundwater. Applications of geographic information systems Hydrology
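The hydrologic budget described above lends itself to a very small worked example. The following Python sketch uses entirely hypothetical annual fluxes (in mm over the watershed area) to show the bookkeeping that a GIS-based watershed study performs: inputs equal outputs plus or minus the change in storage.

# Watershed water balance with hypothetical values (mm per year).
precipitation = 900.0
surface_inflow = 120.0
groundwater_inflow = 40.0

evapotranspiration = 500.0
infiltration = 100.0
surface_outflow = 250.0
groundwater_outflow = 60.0

inputs = precipitation + surface_inflow + groundwater_inflow
outputs = evapotranspiration + infiltration + surface_outflow + groundwater_outflow

# Positive values indicate water accumulating in the watershed, negative values a net loss.
change_in_storage = inputs - outputs
print(f"Change in storage: {change_in_storage:+.1f} mm/yr")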
GIS and hydrology
[ "Chemistry", "Engineering", "Environmental_science" ]
888
[ "Hydrology", "Environmental engineering" ]
10,806,161
https://en.wikipedia.org/wiki/Medical%20calculator
A medical calculator is a type of medical computer software whose purpose is to allow easy calculation of various scores and indices, presenting the user with a friendly interface that hides the complexity of the formulas. Most offer further information such as result interpretation guides and medical literature references. Generally, such calculators are intended for use by health care professionals, and use by the general public may be discouraged. Medical calculators arose because modern medicine makes frequent use of scores and indices that put physicians' memory and calculation skills to the test. The advent of personal computers, the Internet and the Web, and more recently personal digital assistants (PDAs) has formed an environment conducive to their development, spread and use. Types Online Various websites, including Wikipedia, are available that provide calculations from a browser-based input form. Websites that offer this ability include MDCalc. Hardware Purpose-built devices for specific medical calculations are available from various commercial sources. There are two ways to build such a calculator: it can use an array that looks up an answer from a large table of data, or it can compute the answer using a mathematical equation. Apps Software-based medical calculators are available for various platforms, including the iPhone and Android. Handheld, battery-powered portable units are available and can be manufactured in smaller quantities than before thanks to OTP (one-time programmable) chips. References Medical equipment Medical software
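As a concrete illustration of the equation-based approach described above, the following Python sketch computes and interprets body mass index. It is purely illustrative: it uses the standard BMI formula (weight divided by height squared) and simplified WHO-style category cut-offs, and is not intended for clinical use.

# Equation-based medical calculator sketch: body mass index (illustrative only).

def bmi(weight_kg: float, height_m: float) -> float:
    """Return body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def interpret_bmi(value: float) -> str:
    """Very simplified interpretation guide of the kind such tools attach to a result."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal range"
    if value < 30:
        return "overweight"
    return "obese"

value = bmi(70.0, 1.75)
print(f"BMI = {value:.1f} kg/m^2 ({interpret_bmi(value)})")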
Medical calculator
[ "Biology" ]
284
[ "Medical software", "Medical equipment", "Medical technology" ]
1,517,731
https://en.wikipedia.org/wiki/EPICS
The Experimental Physics and Industrial Control System (EPICS) is a set of software tools and applications used to develop and implement distributed control systems to operate devices such as particle accelerators, telescopes and other large scientific facilities. The tools are designed to help develop systems which often feature large numbers of networked computers delivering control and feedback. They also provide SCADA capabilities. History EPICS was initially developed as the Ground Test Accelerator Controls System (GTACS) at Los Alamos National Laboratory (LANL) in 1988 by Bob Dalesio, Jeff Hill, et al.  In 1989, Marty Kraimer from Argonne National Laboratory (ANL) came to work alongside the GTA controls team for 6 months, bringing his experience from his work on the Advanced Photon Source (APS) Control System to the project. The resulting software was renamed EPICS and was presented at the International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS) in 1991. EPICS was originally available under a commercial license, with enhanced versions sold by Tate & Kinetic Systems. Licenses for collaborators were free, but required a legal agreement with LANL and APS. An EPICS community was established and development grew as more facilities joined in with the collaboration. In February 2004, EPICS became freely distributable after its release under the EPICS Open License. It is now used and developed by over 50 large science institutions worldwide, as well as by several commercial companies. Architecture EPICS uses client–server and publish–subscribe techniques to communicate between computers. Servers, the “input/output controllers” (IOCs), collect experiment and control data in real time, using the measurement instruments attached to them. This information is then provided to clients, using the high-bandwidth Channel Access (CA) or the recently added pvAccess networking protocols that are designed to suit real-time applications such as scientific experiments. IOCs hold and interact with a database of "records", which represent either devices or aspects of the devices to be controlled. IOCs can be hosted by stock-standard servers or PCs or by VME, MicroTCA, and other standard embedded system processors. For "hard real-time" applications the RTEMS or VxWorks operating systems are normally used, whereas "soft real-time" applications typically run on Linux or Microsoft Windows. Data held in the records are represented by unique identifiers known as Process Variables (PVs). These PVs are accessible over the network channels provided by the CA/pvAccess protocol. Many record types are available for various types of input and output (e.g., analog or binary) and to provide functional behaviour such as calculations. It is also possible to create custom record types. Each record consists of a set of fields, which hold the record's static and dynamic data and specify behaviour when various functions are requested locally or remotely. Most record types are listed in the EPICS record reference manual. Graphical user interface packages are available, allowing users to view and interact with PV data through typical display widgets such as dials and text boxes. Examples include EDM (Extensible Display Manager), MEDM (Motif/EDM), and CSS. Any software that implements the CA/pvAccess protocol can read and write PV values. Extension packages are available to provide support for MATLAB, LabVIEW, Perl, Python, Tcl, ActiveX, etc. 
These can be used to write scripts to interact with EPICS-controlled equipment. Facilities using EPICS Commercial Users BiRa Systems Ciemat CosyLab GLResearch idt Mobiis Nusano, Inc Observatory Sciences Osprey Distributed Control Systems Varian Medical Systems Pyramid Technical Consultants See also TANGO control system SCADA—Supervisory Control And Data Acquisition References External links EPICS Record Reference Manual Science software Physics software Experimental particle physics Industrial automation software
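As a sketch of how such scripting support is typically used, the following Python fragment reads, monitors, and writes process variables over Channel Access. It assumes the third-party pyepics binding is installed and that an IOC is serving the PVs named below; the PV names themselves are hypothetical.

# Client-side sketch talking to an EPICS IOC over Channel Access (pyepics assumed).
import epics

pv = epics.PV("DEMO:TEMPERATURE")      # connect to a process variable by name

value = pv.get()                        # read the current value over the network
print("current reading:", value)

def on_change(pvname=None, value=None, **kwargs):
    # Called by the client library whenever the IOC publishes a new value.
    print(f"{pvname} changed to {value}")

pv.add_callback(on_change)              # subscribe in publish-subscribe fashion

epics.caput("DEMO:SETPOINT", 42.0)      # write a new value to another hypothetical PV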
EPICS
[ "Physics" ]
798
[ "Computational physics", "Experimental physics", "Particle physics", "Experimental particle physics", "Physics software" ]
1,517,848
https://en.wikipedia.org/wiki/International%20Centre%20for%20Diffraction%20Data
The International Centre for Diffraction Data (ICDD) maintains a database of powder diffraction patterns, the Powder Diffraction File (PDF), including the d-spacings (related to angle of diffraction) and relative intensities of observable diffraction peaks. Patterns may be experimentally determined, or computed based on crystal structure and Bragg's law. It is most often used to identify substances based on X-ray diffraction data, and is designed for use with a diffractometer. The PDF contains more than a million unique material data sets. Each data set contains diffraction, crystallographic and bibliographic data, as well as experimental, instrument and sampling conditions, and select physical properties in a common standardized format. The organization was founded in 1941 as the Joint Committee on Powder Diffraction Standards. In 1978, the current name was adopted to highlight the global commitment of this scientific endeavor. The ICDD is a nonprofit scientific organization working in the field of X-ray analysis and materials characterization. It produces materials databases, characterization tools, and educational materials, as well as organizing and supporting global workshops, clinics and conferences. Products and services of the ICDD include the paid, subscription-based Powder Diffraction File databases (PDF-2, PDF-4+, PDF-4+/Web, PDF-4/Minerals, PDF-4/Organics, PDF-4/Axiom, and ICDD Server Edition), educational workshops, clinics, and symposia. It is a sponsor of the Denver X-ray Conference and the Pharmaceutical Powder X-ray Diffraction Symposium. It also publishes the journals Advances in X-ray Analysis and Powder Diffraction. In 2019, Materials Data, also known as MDI, merged with ICDD. Materials Data creates JADE software used to collect, analyze, and simulate XRD data and solve issues in an array of materials science projects. In 2020, the ICDD and the Cambridge Crystallographic Data Centre, which curates and maintains the Cambridge Structural Database, announced a data partnership. See also Powder diffraction Crystallography References External links History, contents & use of the PDF Materials Data Advances in X-ray Analysis—Technical articles on x-ray methods and analyses Powder Diffraction Journal—quarterly journal published by the JCPDS-International Centre for Diffraction Data through the Cambridge University Press Denver X-ray Conference—World's largest X-ray conference on the latest advancements in XRD and XRF PPXRD-16—Pharmaceutical Powder X-ray Diffraction Symposium Crystallography organizations Diffraction Optics institutions Organizations established in 1941
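Because entries in the PDF are keyed by d-spacings, a common first step when matching experimental data is converting measured peak positions to d-spacings with Bragg's law, n λ = 2 d sin θ. The short Python sketch below does this conversion for a hypothetical peak position, assuming Cu Kα radiation (wavelength approximately 1.5406 Å).

# Convert a diffraction peak position (2-theta) to a lattice d-spacing via Bragg's law.
import math

def d_spacing(two_theta_deg: float, wavelength_angstrom: float = 1.5406, order: int = 1) -> float:
    """Return the d-spacing in angstroms for a peak at the given 2-theta angle."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

# Hypothetical peak at 2-theta = 38.4 degrees with Cu K-alpha radiation
print(f"d = {d_spacing(38.4):.3f} angstrom")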
International Centre for Diffraction Data
[ "Physics", "Chemistry", "Materials_science", "Astronomy" ]
550
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Diffraction", "Crystallography", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs", "Crystallography organizations" ]
1,519,181
https://en.wikipedia.org/wiki/Polaritonics
Polaritonics is an intermediate regime between photonics and sub-microwave electronics (see Fig. 1). In this regime, signals are carried by an admixture of electromagnetic and lattice vibrational waves known as phonon-polaritons, rather than currents or photons. Since phonon-polaritons propagate with frequencies in the range of hundreds of gigahertz to several terahertz, polaritonics bridges the gap between electronics and photonics. A compelling motivation for polaritonics is the demand for high speed signal processing and linear and nonlinear terahertz spectroscopy. Polaritonics has distinct advantages over electronics, photonics, and traditional terahertz spectroscopy in that it offers the potential for a fully integrated platform that supports terahertz wave generation, guidance, manipulation, and readout in a single patterned material. Polaritonics, like electronics and photonics, requires three elements: robust waveform generation, detection, and guidance and control. Without all three, polaritonics would be reduced to just phonon-polaritons, just as electronics and photonics would be reduced to just electromagnetic radiation. These three elements can be combined to enable device functionality similar to that in electronics and photonics. Illustration To illustrate the functionality of polaritonic devices, consider the hypothetical circuit in Fig. 2 (right). The optical excitation pulses that generate phonon-polaritons, in the top left and bottom right of the crystal, enter normal to the crystal face (into the page). The resulting phonon-polaritons will travel laterally away from the excitation regions. Entrance into the waveguides is facilitated by reflective and focusing structures. Phonon-polaritons are guided through the circuit by terahertz waveguides carved into the crystal. Circuit functionality resides in the interferometer structure at the top and the coupled waveguide structure at the bottom of the circuit. The latter employs a photonic bandgap structure with a defect (yellow) that could provide bistability for the coupled waveguide. Waveform generation Phonon-polaritons generated in ferroelectric crystals propagate nearly laterally to the excitation pulse due to the high dielectric constants of ferroelectric crystals, facilitating easy separation of phonon-polaritons from the excitation pulses that generated them. Phonon-polaritons are therefore available for direct observation, as well as coherent manipulation, as they move from the excitation region into other parts of the crystal. Lateral propagation is paramount to a polaritonic platform in which generation and propagation take place in a single crystal. A full treatment of the Cherenkov-radiation-like terahertz wave response reveals that in general, there is also a forward propagation component that must be considered in many cases. Signal detection Direct observation of phonon-polariton propagation was made possible by real-space imaging, in which the spatial and temporal profiles of phonon-polaritons are imaged onto a CCD camera using Talbot phase-to-amplitude conversion. This by itself was an extraordinary breakthrough. It was the first time that electromagnetic waves were imaged directly, appearing much like ripples in a pond when a rock plummets through the water's surface (see Fig. 3). 
Real-space imaging is the preferred detection technique in polaritonics, though other more conventional techniques like optical Kerr-gating, time-resolved diffraction, interferometric probing, and terahertz-field-induced second-harmonic generation are useful in some applications where real-space imaging is not easily employed. For example, patterned materials with feature sizes on the order of a few tens of micrometres cause parasitic scattering of the imaging light. Phonon-polariton detection is then only possible by focusing a more conventional probe, like those mentioned before, into an unblemished region of the crystal. Guidance and control The last element requisite to polaritonics is guidance and control. Complete lateral propagation parallel to the crystal plane is achieved by generating phonon-polaritons in crystals of thickness on the order of the phonon-polariton wavelength. This forces propagation to take place in one or more of the available slab waveguide modes. However, dispersion in these modes can be radically different from that in bulk propagation, and in order to exploit this, the dispersion must be understood. Control and guidance of phonon-polariton propagation may also be achieved by guided wave, reflective, diffractive, and dispersive elements, as well as photonic and effective index crystals that can be integrated directly into the host crystal. However, lithium niobate, lithium tantalate, and other perovskites are resistant to the standard techniques of material patterning. In fact, the only etchant known to be even marginally successful is hydrofluoric acid (HF), which etches slowly and predominantly in the direction of the crystal optic axis. Laser Micromachining Femtosecond laser micromachining is used for device fabrication by milling 'air' holes and/or troughs into ferroelectric crystals, directing the crystal through the focus region of a femtosecond laser beam. The advantages of femtosecond laser micromachining for a wide range of materials have been well documented. In brief, free electrons are created within the beam focus through multiphoton excitation. Because the peak intensity of a femtosecond laser pulse is many orders of magnitude higher than that from longer pulse or continuous wave lasers, the electrons are rapidly excited and heated to form a quantum plasma. Particularly in dielectric materials, the plasma-induced electrostatic instability of the remaining lattice ions results in ejection of these ions and hence ablation of the material, leaving a material void in the laser focus region. Also, since the pulse duration and ablation time scales are much faster than the thermalization time, femtosecond laser micromachining does not suffer from the adverse effects of a heat-affected zone, like cracking and melting in regions neighboring the intended damage region. See also Electronics Photonics Polariton Spintronics Polariton laser External references David W. Ward: Polaritonics: An Intermediate Regime between Electronics and Photonics, Ph.D. Thesis, Massachusetts Institute of Technology, 2005. This is the main reference for this article. External links The research group at MIT that invented polaritonics. References Photonics Nanoelectronics Solid state engineering
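As an illustration of why phonon-polariton dispersion matters for guidance and control, the following Python sketch evaluates the bulk dispersion implied by a standard single-oscillator, undamped dielectric function ε(ω) = ε∞ (ω_LO² − ω²)/(ω_TO² − ω²). The numerical parameters are illustrative placeholders, not measured values for lithium niobate or lithium tantalate, and the slab-waveguide effects discussed above are not included.

# Bulk phonon-polariton dispersion sketch with illustrative, not measured, parameters.
import numpy as np

eps_inf = 5.0
f_TO, f_LO = 7.6e12, 8.8e12           # transverse/longitudinal optic phonon frequencies (Hz), illustrative
c = 2.998e8                           # speed of light, m/s

f = np.linspace(0.1e12, 12e12, 2000)  # frequency scan in Hz
w = 2 * np.pi * f
w_TO, w_LO = 2 * np.pi * f_TO, 2 * np.pi * f_LO

eps = eps_inf * (w_LO**2 - w**2) / (w_TO**2 - w**2)
propagating = eps > 0                 # between w_TO and w_LO, eps < 0: no bulk propagation
k = np.where(propagating, w * np.sqrt(np.abs(eps)) / c, np.nan)

# Well below the phonon resonance the phase velocity is reduced by the square root
# of the static dielectric constant, which is why the terahertz wave lags the optical pulse.
print(f"low-frequency phase velocity: {w[0] / k[0] / c:.2f} c")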
Polaritonics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,388
[ "Electronic engineering", "Nanoelectronics", "Condensed matter physics", "Nanotechnology", "Solid state engineering" ]
1,519,267
https://en.wikipedia.org/wiki/Copper%E2%80%93copper%28II%29%20sulfate%20electrode
The copper–copper(II) sulfate electrode is a reference electrode of the first kind, based on the redox reaction with participation of the metal (copper) and its salt, copper(II) sulfate. It is used for measuring electrode potential and is the most commonly used reference electrode for testing cathodic protection corrosion control systems. The corresponding equation can be presented as follows: Cu2+ + 2e− → Cu0(metal) This reaction is characterized by reversible and fast electrode kinetics, meaning that a sufficiently high current can be passed through the electrode with 100% efficiency of the redox reaction (dissolution of the metal or cathodic deposition of the copper ions). The Nernst equation, E = E° + (RT/2F) ln a(Cu2+), shows the dependence of the potential of the copper–copper(II) sulfate electrode on the activity or concentration of copper ions. Commercial reference electrodes consist of a plastic tube holding the copper rod and a saturated solution of copper sulfate. A porous plug on one end allows contact with the copper sulfate electrolyte. The copper rod protrudes out of the tube. A voltmeter negative lead is connected to the copper rod. The potential of a copper–copper sulfate electrode is +0.314 volt with respect to the standard hydrogen electrode. The copper–copper(II) sulfate electrode is also used as one of the half cells in the galvanic Daniell–Jakobi cell. Applications Copper coulometer Notes References E. Protopopoff and P. Marcus, Potential Measurements with Reference Electrodes, Corrosion: Fundamentals, Testing, and Protection, Vol 13A, ASM Handbook, ASM International, 2003, p 13-16 A.W. Peabody, Peabody's Control of Pipeline Corrosion, 2nd Ed., 2001, NACE International. Electrodes Corrosion prevention
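A minimal numerical sketch of the Nernst relation quoted above is given below in Python. It assumes 25 °C, treats activity as interchangeable with concentration for dilute solutions, and uses an approximate literature value for the standard Cu2+/Cu potential.

# Nernst equation for the Cu/Cu2+ couple (illustrative; activity ~ concentration assumed).
import math

R = 8.314462618      # gas constant, J/(mol*K)
F = 96485.33212      # Faraday constant, C/mol
T = 298.15           # temperature, K
E_STANDARD = 0.337   # V vs SHE for Cu2+ + 2e- -> Cu (approximate literature value)

def electrode_potential(cu_ion_activity: float) -> float:
    """Potential of the copper electrode (V vs SHE) for a given Cu2+ activity."""
    return E_STANDARD + (R * T) / (2 * F) * math.log(cu_ion_activity)

for a in (1.0, 0.1, 0.01):
    print(f"a(Cu2+) = {a:>5}: E = {electrode_potential(a):+.3f} V")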
Copper–copper(II) sulfate electrode
[ "Chemistry" ]
367
[ "Corrosion prevention", "Electrodes", "Corrosion", "Electrochemistry", "Electrochemistry stubs", "Physical chemistry stubs" ]
1,520,221
https://en.wikipedia.org/wiki/Journal%20of%20the%20British%20Interplanetary%20Society
The Journal of the British Interplanetary Society (JBIS) is a monthly peer-reviewed scientific journal that was established in 1934. The journal covers research on astronautics and space science and technology, including spacecraft design, nozzle theory, launch vehicle design, mission architecture, space stations, lunar exploration, spacecraft propulsion, robotic and crewed exploration of the solar system, interstellar travel, interstellar communications, extraterrestrial intelligence, philosophy, and cosmology. It is published monthly by the British Interplanetary Society. History The journal was established in 1934 when the British Interplanetary Society was founded. The inaugural editorial stated: The first issue was only a six-page pamphlet, but has the distinction of being the world's oldest surviving astronautical publication. Notable papers Notable papers published in the journal include: The B.I.S Space-Ship, H.E.Ross, JBIS, 5, pp. 4–9, 1939 The Challenge of the Spaceship (Astronautics and its Impact Upon Human Society), Arthur C. Clarke, JBIS, 6, pp. 66–78, 1946 Atomic rocket papers by Les Shepherd, Val Cleaver and others, 1948–1949. Interstellar Flight, L.R.Shepherd, JBIS, 11, pp. 149–167, 1952 A Programme for Achieving Interplanetary Flight, A.V.Cleaver, JBIS, 13, pp. 1–27, 1954 Special Issue on World Ships, JBIS, 37, 6, June 1984 Project Daedalus - Final Study Reports, Alan Bond & Anthony R Martin et al., Special Supplement JBIS, pp.S1-192, 1978 Editors Some of the people that have been editor-in-chief of the journal are: Philip E. Cleator J. Hardy Gerald V. Groves Anthony R. Martin Mark Hempsell Chris Toomer Kelvin Long Roger Longstaff See also Spaceflight (magazine) References External links British Interplanetary Society Space science journals Academic journals established in 1934 Planetary engineering Monthly journals English-language journals 1934 establishments in the United Kingdom
Journal of the British Interplanetary Society
[ "Engineering" ]
430
[ "Planetary engineering" ]
1,520,619
https://en.wikipedia.org/wiki/Proper%20orthogonal%20decomposition
The proper orthogonal decomposition is a numerical method that enables a reduction in the complexity of computationally intensive simulations such as computational fluid dynamics and structural analysis (like crash simulations). Typically in fluid dynamics and turbulence analysis, it is used to replace the Navier–Stokes equations by simpler models that are easier to solve. It belongs to a class of algorithms called model order reduction (or, in short, model reduction). In essence, it trains a model based on simulation data. To this extent, it can be associated with the field of machine learning. POD and PCA The main use of POD is to decompose a physical field (like pressure or temperature in fluid dynamics, or stress and deformation in structural analysis), depending on the different variables that influence its physical behavior. As its name hints, it performs an orthogonal decomposition along the principal components of the field. As such, it is closely related to the principal component analysis of Pearson in the field of statistics, and to the singular value decomposition in linear algebra, because it refers to eigenvalues and eigenvectors of a physical field. In those domains, it is associated with the research of Karhunen and Loève, and their Karhunen–Loève theorem. Mathematical expression The first idea behind the Proper Orthogonal Decomposition (POD), as it was originally formulated in the domain of fluid dynamics to analyze turbulence, is to decompose a random vector field u(x, t) into a set of deterministic spatial functions Φk(x) modulated by random time coefficients ak(t) so that: u(x, t) = Σk ak(t) Φk(x). The first step is to sample the vector field over a period of time in what we call snapshots (as displayed in the image of the POD snapshots). This snapshot method averages the samples over the space dimension n and correlates them with each other along the time samples p, with n spatial elements and p time samples. The next step is to compute the covariance matrix C. We then compute the eigenvalues and eigenvectors of C and we order them from the largest eigenvalue to the smallest. We obtain n eigenvalues λ1,...,λn and a set of n eigenvectors arranged as columns in an n × n matrix Φ. References External links MIT: http://web.mit.edu/6.242/www/images/lec6_6242_2004.pdf Stanford University - Charbel Farhat & David Amsallem https://web.stanford.edu/group/frg/course_work/CME345/CA-CME345-Ch4.pdf Weiss, Julien: A Tutorial on the Proper Orthogonal Decomposition. In: 2019 AIAA Aviation Forum. 17–21 June 2019, Dallas, Texas, United States. French course from CNRS https://www.math.u-bordeaux.fr/~mbergman/PDF/OuvrageSynthese/OCET06.pdf Applications of the Proper Orthogonal Decomposition Method http://www.cerfacs.fr/~cfdbib/repository/WN_CFD_07_97.pdf Continuum mechanics Numerical differential equations Partial differential equations Structural analysis Computational electromagnetics
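A minimal sketch of the snapshot procedure is given below in Python/NumPy. It builds a synthetic snapshot matrix, extracts the POD modes with a singular value decomposition (equivalent to the eigendecomposition of the covariance matrix described above), and reconstructs a reduced-order approximation; the data are synthetic and the implementation is illustrative rather than a production model-reduction code.

# POD of a synthetic snapshot matrix via the SVD.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 40                      # n spatial points, p time samples
x = np.linspace(0.0, 1.0, n)
t = np.linspace(0.0, 1.0, p)
# Synthetic field: two coherent structures plus a little noise
U = (np.outer(np.sin(2*np.pi*x), np.cos(2*np.pi*t))
     + 0.3*np.outer(np.sin(4*np.pi*x), np.sin(6*np.pi*t))
     + 0.01*rng.standard_normal((n, p)))

U_mean = U.mean(axis=1, keepdims=True)
Phi, s, Vt = np.linalg.svd(U - U_mean, full_matrices=False)   # columns of Phi are POD modes

energy = s**2 / np.sum(s**2)        # relative "energy" captured by each mode
print("energy of first 3 modes:", np.round(energy[:3], 3))

# Reduced-order reconstruction with the first k modes
k = 2
U_rom = U_mean + Phi[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print("relative reconstruction error:", np.linalg.norm(U - U_rom) / np.linalg.norm(U))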
Proper orthogonal decomposition
[ "Physics", "Engineering" ]
680
[ "Structural engineering", "Computational electromagnetics", "Continuum mechanics", "Structural analysis", "Classical mechanics", "Computational physics", "Mechanical engineering", "Aerospace engineering" ]
1,521,283
https://en.wikipedia.org/wiki/Hardy%E2%80%93Ramanujan%E2%80%93Littlewood%20circle%20method
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem. History The initial idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and the method still yields results. The method is the subject of a monograph by R. C. Vaughan. Outline The goal is to prove asymptotic behavior of a series: to show that for some function. This is done by taking the generating function of the series, then computing the residues about zero (essentially the Fourier coefficients). Technically, the generating function is scaled to have radius of convergence 1, so it has singularities on the unit circle – thus one cannot take the contour integral over the unit circle. The circle method is specifically how to compute these residues, by partitioning the circle into minor arcs (the bulk of the circle) and major arcs (small arcs containing the most significant singularities), and then bounding the behavior on the minor arcs. The key insight is that, in many cases of interest (such as theta functions), the singularities occur at the roots of unity, and the significance of the singularities is in the order of the Farey sequence. Thus one can investigate the most significant singularities, and, if fortunate, compute the integrals. Setup The circle in question was initially the unit circle in the complex plane. Assuming the problem had first been formulated in the terms that for a sequence of complex numbers for , we want some asymptotic information of the type , where we have some heuristic reason to guess the form taken by (an ansatz), we write a power series generating function. The interesting cases are where is then of radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation. Residues From that formulation, it follows directly from the residue theorem that for integers , where is a circle of radius and centred at 0, for any with ; in other words, is a contour integral, integrated over the circle described traversed once anticlockwise. We would like to take directly, that is, to use the unit circle contour. In the complex analysis formulation this is problematic, since the values of may not be defined there. Singularities on unit circle The problem addressed by the circle method is to force the issue of taking , by a good understanding of the nature of the singularities f exhibits on the unit circle. The fundamental insight is the role played by the Farey sequence of rational numbers, or equivalently by the roots of unity: Here the denominator , assuming that is in lowest terms, turns out to determine the relative importance of the singular behaviour of typical near . Method The Hardy–Littlewood circle method, for the complex-analytic formulation, can then be thus expressed. The contributions to the evaluation of , as , should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity into two classes, according to whether or , where is a function of that is ours to choose conveniently. 
The integral is divided up into integrals each on some arc of the circle that is adjacent to , of length a function of (again, at our discretion). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than . Discussion Stated boldly like this, it is not at all clear that this can be made to work. The insights involved are quite deep. One clear source is the theory of theta functions. Waring's problem In the context of Waring's problem, powers of theta functions are the generating functions for the sum of squares function. Their analytic behaviour is known in much more accurate detail than for the cubes, for example. It is the case, as the false-colour diagram indicates, that for a theta function the 'most important' point on the boundary circle is at ; followed by , and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity and that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity. In the case of Waring's problem, one takes a sufficiently high power of the generating function to force the situation in which the singularities, organised into the so-called singular series, predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the case of the partition function, which signalled the possibility that in a favourable situation the losses from estimates could be controlled. Vinogradov trigonometric sums Later, I. M. Vinogradov extended the technique, replacing the exponential sum formulation f(z) with a finite Fourier series, so that the relevant integral is a Fourier coefficient. Vinogradov applied finite sums to Waring's problem in 1926, and the general trigonometric sum method became known as "the circle method of Hardy, Littlewood and Ramanujan, in the form of Vinogradov's trigonometric sums". Essentially all this does is to discard the whole 'tail' of the generating function, allowing the business of in the limiting operation to be set directly to the value 1. Applications Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables is large relative to the degree (see Birch's theorem for example). This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If is fixed and is small, other methods are required, and indeed the Hasse principle tends to fail. Rademacher's contour In the special case when the circle method is applied to find the coefficients of a modular form of negative weight, Hans Rademacher found a modification of the contour that makes the series arising from the circle method converge to the exact result. To describe his contour, it is convenient to replace the unit circle by the upper half plane, by making the substitution , so that the contour integral becomes an integral from to . (The number could be replaced by any number on the upper half-plane, but is the most convenient choice.) Rademacher's contour is (more or less) given by the boundaries of all the Ford circles from 0 to 1, as shown in the diagram. 
The replacement of the line from to by the boundaries of these circles is a non-trivial limiting process, which can be justified for modular forms that have negative weight, and with more care can also be justified for non-constant terms for the case of weight 0 (in other words modular functions). Notes References Further reading External links Terence Tao, Heuristic limitations of the circle method, a blog post in 2012 Analytic number theory
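The partition-function asymptotics that launched the method can be checked numerically. The Python sketch below compares the exact partition numbers, computed with Euler's pentagonal-number recurrence, against the leading Hardy–Ramanujan asymptotic p(n) ≈ exp(π√(2n/3))/(4n√3); this is only an illustration of the size of the main term, not an implementation of the circle method itself.

# Exact partition numbers vs the leading Hardy-Ramanujan asymptotic.
import math

def partitions(nmax):
    """Return [p(0), ..., p(nmax)] using Euler's pentagonal-number recurrence."""
    p = [1] + [0] * nmax
    for n in range(1, nmax + 1):
        k, total = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2          # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n and g2 > n:
                break
            sign = -1 if k % 2 == 0 else 1
            if g1 <= n:
                total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

def hardy_ramanujan(n):
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

p = partitions(100)
for n in (10, 50, 100):
    print(n, p[n], round(hardy_ramanujan(n)), sep="\t")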
Hardy–Ramanujan–Littlewood circle method
[ "Mathematics" ]
1,586
[ "Analytic number theory", "Number theory" ]
1,521,726
https://en.wikipedia.org/wiki/Superquadrics
In mathematics, the superquadrics or super-quadrics (also superquadratics) are a family of geometric shapes defined by formulas that resemble those of ellipsoids and other quadrics, except that the squaring operations are replaced by arbitrary powers. They can be seen as the three-dimensional relatives of the superellipses. The term may refer to the solid object or to its surface, depending on the context. The equations below specify the surface; the solid is specified by replacing the equality signs by less-than-or-equal signs. The superquadrics include many shapes that resemble cubes, octahedra, cylinders, lozenges and spindles, with rounded or sharp corners. Because of their flexibility and relative simplicity, they are popular geometric modeling tools, especially in computer graphics. They have become an important geometric primitive widely used in computer vision, robotics, and physical simulation. Some authors, such as Alan Barr, define "superquadrics" as including both the superellipsoids and the supertoroids. In the modern computer vision literature, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely utilized shape among all the superquadrics. The geometrical properties of superquadrics, and methods for their recovery from range images and point clouds, are covered comprehensively in several computer vision publications. Formulas Implicit equation The surface of the basic superquadric is given by |x|^r + |y|^s + |z|^t = 1, where r, s, and t are positive real numbers that determine the main features of the superquadric. Namely: less than 1: a pointy octahedron modified to have concave faces and sharp edges. exactly 1: a regular octahedron. between 1 and 2: an octahedron modified to have convex faces, blunt edges and blunt corners. exactly 2: a sphere greater than 2: a cube modified to have rounded edges and corners. infinite (in the limit): a cube Each exponent can be varied independently to obtain combined shapes. For example, if r=s=2, and t=4, one obtains a solid of revolution which resembles an ellipsoid with round cross-section but flattened ends. This formula is a special case of the superellipsoid's formula if (and only if) r = s. If any exponent is allowed to be negative, the shape extends to infinity. Such shapes are sometimes called super-hyperboloids. The basic shape above spans from -1 to +1 along each coordinate axis. The general superquadric is the result of scaling this basic shape by different amounts A, B, C along each axis. Its general equation is |x/A|^r + |y/B|^s + |z/C|^t = 1. Parametric description Parametric equations in terms of surface parameters u and v (equivalent to longitude and latitude if m equals 2) are where the auxiliary functions are and the sign function sgn(x) is Spherical product Barr introduces the spherical product which, given two plane curves, produces a 3D surface. If are two plane curves then the spherical product is This is similar to the typical parametric equation of a sphere: which gives rise to the name spherical product. Barr uses the spherical product to define quadric surfaces, like ellipsoids, and hyperboloids as well as the torus, superellipsoid, superquadric hyperboloids of one and two sheets, and supertoroids.
Plotting code The following GNU Octave code generates a mesh approximation of a superquadric:

function superquadric(epsilon,a)
  n = 50;
  etamax = pi/2;
  etamin = -pi/2;
  wmax = pi;
  wmin = -pi;
  deta = (etamax-etamin)/n;
  dw = (wmax-wmin)/n;
  [i,j] = meshgrid(1:n+1,1:n+1);
  eta = etamin + (i-1) * deta;
  w = wmin + (j-1) * dw;
  x = a(1) .* sign(cos(eta)) .* abs(cos(eta)).^epsilon(1) .* sign(cos(w)) .* abs(cos(w)).^epsilon(1);
  y = a(2) .* sign(cos(eta)) .* abs(cos(eta)).^epsilon(2) .* sign(sin(w)) .* abs(sin(w)).^epsilon(2);
  z = a(3) .* sign(sin(eta)) .* abs(sin(eta)).^epsilon(3);
  mesh(x,y,z);
end

See also Superegg Superellipsoid Ellipsoid References External links Bibliography: SuperQuadric Representations Superquadric Tensor Glyphs SuperQuadric Ellipsoids and Toroids, OpenGL Lighting, and Timing Superquadrics by Robert Kragler, The Wolfram Demonstrations Project. Superquadrics in Python Superquadrics recovery algorithm in Python and MATLAB Computer graphics Computer vision Geometry Geometry in computer vision Robotics engineering
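For completeness, a short Python sketch of the implicit form is given below; it evaluates the general equation |x/A|^r + |y/B|^s + |z/C|^t for a few sample points and classifies them as inside, on, or outside the surface. The parameter values are arbitrary illustrative choices.

# Inside/outside test against the general implicit superquadric equation (illustrative values).
import math

def superquadric_value(x, y, z, A=1.0, B=1.0, C=1.0, r=2.0, s=2.0, t=4.0):
    """Return the implicit-function value: <1 inside, =1 on the surface, >1 outside."""
    return abs(x / A) ** r + abs(y / B) ** s + abs(z / C) ** t

for point in [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5), (1.0, 0.0, 0.0)]:
    v = superquadric_value(*point)
    where = "on surface" if math.isclose(v, 1.0) else ("inside" if v < 1 else "outside")
    print(point, round(v, 3), where)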
Superquadrics
[ "Mathematics", "Technology", "Engineering" ]
1,087
[ "Computer engineering", "Robotics engineering", "Packaging machinery", "Geometry", "Artificial intelligence engineering", "Geometry in computer vision", "Computer vision" ]
1,521,971
https://en.wikipedia.org/wiki/Ptolemy%27s%20theorem
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy. If the vertices of the cyclic quadrilateral are A, B, C, and D in order, then the theorem states that: This relation may be verbally expressed as follows: If a quadrilateral is cyclic then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the pairs of opposite sides. Moreover, the converse of Ptolemy's theorem is also true: In a quadrilateral, if the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle, i.e. it is a cyclic quadrilateral. Corollaries on inscribed polygons Equilateral triangle Ptolemy's Theorem yields as a corollary a pretty theorem regarding an equilateral triangle inscribed in a circle. Given an equilateral triangle inscribed in a circle and a point on the circle, the distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices. Proof: Follows immediately from Ptolemy's theorem: Square Any square can be inscribed in a circle whose center is the center of the square. If the common length of its four sides is equal to then the length of the diagonal is equal to according to the Pythagorean theorem, and Ptolemy's relation obviously holds. Rectangle More generally, if the quadrilateral is a rectangle with sides a and b and diagonal d then Ptolemy's theorem reduces to the Pythagorean theorem. In this case the center of the circle coincides with the point of intersection of the diagonals. The product of the diagonals is then d², and the right-hand side of Ptolemy's relation is the sum a² + b². Copernicus – who used Ptolemy's theorem extensively in his trigonometrical work – refers to this result as a 'Porism' or self-evident corollary: Furthermore it is clear (manifestum est) that when the chord subtending an arc has been given, that chord too can be found which subtends the rest of the semicircle. Pentagon A more interesting example is the relation between the length a of the side and the (common) length b of the 5 chords in a regular pentagon. By completing the square, the relation yields the golden ratio: Side of decagon If now diameter AF is drawn bisecting DC so that DF and CF are sides c of an inscribed decagon, Ptolemy's Theorem can again be applied – this time to cyclic quadrilateral ADFC with diameter d as one of its diagonals: where is the golden ratio. whence the side of the inscribed decagon is obtained in terms of the circle diameter. Pythagoras's theorem applied to right triangle AFD then yields "b" in terms of the diameter, and "a", the side of the pentagon, is thereafter calculated as As Copernicus (following Ptolemy) wrote, "The diameter of a circle being given, the sides of the triangle, tetragon, pentagon, hexagon and decagon, which the same circle circumscribes, are also given." Proofs Visual proof The animation here shows a visual demonstration of Ptolemy's theorem, based on Derrick & Herstein (2012). Proof by similarity of triangles Let ABCD be a cyclic quadrilateral.
On the chord BC, the inscribed angles ∠BAC = ∠BDC, and on AB, ∠ADB = ∠ACB. Construct K on AC such that ∠ABK = ∠CBD; since ∠ABK + ∠CBK = ∠ABC = ∠CBD + ∠ABD, ∠CBK = ∠ABD. Now, by common angles △ABK is similar to △DBC, and likewise △ABD is similar to △KBC. Thus AK/AB = CD/BD, and CK/BC = DA/BD; equivalently, AK⋅BD = AB⋅CD, and CK⋅BD = BC⋅DA. By adding two equalities we have AK⋅BD + CK⋅BD = AB⋅CD + BC⋅DA, and factorizing this gives (AK+CK)·BD = AB⋅CD + BC⋅DA. But AK+CK = AC, so AC⋅BD = AB⋅CD + BC⋅DA, Q.E.D. The proof as written is only valid for simple cyclic quadrilaterals. If the quadrilateral is self-crossing then K will be located outside the line segment AC. But in this case, AK−CK = ±AC, giving the expected result. Proof by trigonometric identities Let the inscribed angles subtended by , and be, respectively, , and , and the radius of the circle be , then we have , , , , and , and the original equality to be proved is transformed to from which the factor has disappeared by dividing both sides of the equation by it. Now by using the sum formulae, and , it is trivial to show that both sides of the above equation are equal to Q.E.D. Here is another, perhaps more transparent, proof using rudimentary trigonometry. Define a new quadrilateral inscribed in the same circle, where are the same as in , and located at a new point on the same circle, defined by , . (Picture triangle flipped, so that vertex moves to vertex and vertex moves to vertex . Vertex will now be located at a new point D’ on the circle.) Then, has the same edges lengths, and consequently the same inscribed angles subtended by the corresponding edges, as , only in a different order. That is, , and , for, respectively, and . Also, and have the same area. Then, Q.E.D. Proof by inversion Choose an auxiliary circle of radius centered at D with respect to which the circumcircle of ABCD is inverted into a line (see figure). Then Then and can be expressed as , and respectively. Multiplying each term by and using yields Ptolemy's equality. Q.E.D. Note that if the quadrilateral is not cyclic then A', B' and C' form a triangle and hence A'B'+B'C' > A'C', giving us a very simple proof of Ptolemy's Inequality which is presented below. Proof using complex numbers Embed ABCD in the complex plane by identifying as four distinct complex numbers . Define the cross-ratio . Then with equality if and only if the cross-ratio is a positive real number. This proves Ptolemy's inequality generally, as it remains only to show that lie consecutively arranged on a circle (possibly of infinite radius, i.e. a line) in if and only if . From the polar form of a complex number , it follows with the last equality holding if and only if ABCD is cyclic, since a quadrilateral is cyclic if and only if opposite angles sum to . Q.E.D. Note that this proof is equivalently made by observing that the cyclicity of ABCD, i.e. the supplementarity and , is equivalent to the condition ; in particular there is a rotation of in which this is 0 (i.e. all three products are positive real numbers), and by which Ptolemy's theorem is then directly established from the simple algebraic identity Corollaries In the case of a circle of unit diameter the sides of any cyclic quadrilateral ABCD are numerically equal to the sines of the angles and which they subtend. Similarly the diagonals are equal to the sine of the sum of whichever pair of angles they subtend. 
We may then write Ptolemy's Theorem in the following trigonometric form: Applying certain conditions to the subtended angles and it is possible to derive a number of important corollaries using the above as our starting point. In what follows it is important to bear in mind that the sum of angles . Corollary 1. Pythagoras's theorem Let and . Then (since opposite angles of a cyclic quadrilateral are supplementary). Then: Corollary 2. The law of cosines Let . The rectangle of corollary 1 is now a symmetrical trapezium with equal diagonals and a pair of equal sides. The parallel sides differ in length by units where: It will be easier in this case to revert to the standard statement of Ptolemy's theorem: The cosine rule for triangle ABC. Corollary 3. Compound angle sine (+) Let Then Therefore, Formula for compound angle sine (+). Corollary 4. Compound angle sine (−) Let . Then . Hence, Formula for compound angle sine (−). This derivation corresponds to the Third Theorem as chronicled by Copernicus following Ptolemy in Almagest. In particular if the sides of a pentagon (subtending 36° at the circumference) and of a hexagon (subtending 30° at the circumference) are given, a chord subtending 6° may be calculated. This was a critical step in the ancient method of calculating tables of chords. Corollary 5. Compound angle cosine (+) This corollary is the core of the Fifth Theorem as chronicled by Copernicus following Ptolemy in Almagest. Let . Then . Hence Formula for compound angle cosine (+) Despite lacking the dexterity of our modern trigonometric notation, it should be clear from the above corollaries that in Ptolemy's theorem (or more simply the Second Theorem) the ancient world had at its disposal an extremely flexible and powerful trigonometric tool which enabled the cognoscenti of those times to draw up accurate tables of chords (corresponding to tables of sines) and to use these in their attempts to understand and map the cosmos as they saw it. Since tables of chords were drawn up by Hipparchus three centuries before Ptolemy, we must assume he knew of the 'Second Theorem' and its derivatives. Following the trail of ancient astronomers, history records the star catalogue of Timocharis of Alexandria. If, as seems likely, the compilation of such catalogues required an understanding of the 'Second Theorem' then the true origins of the latter disappear thereafter into the mists of antiquity but it cannot be unreasonable to presume that the astronomers, architects and construction engineers of ancient Egypt may have had some knowledge of it. Ptolemy's inequality The equation in Ptolemy's theorem is never true with non-cyclic quadrilaterals. Ptolemy's inequality is an extension of this fact, and it is a more general form of Ptolemy's theorem. It states that, given a quadrilateral ABCD, then where equality holds if and only if the quadrilateral is cyclic. This special case is equivalent to Ptolemy's theorem. Related theorem about the ratio of the diagonals Ptolemy's theorem gives the product of the diagonals (of a cyclic quadrilateral) knowing the sides, the following theorem yields the same for the ratio of the diagonals. Proof: It is known that the area of a triangle inscribed in a circle of radius is: Writing the area of the quadrilateral as sum of two triangles sharing the same circumscribing circle, we obtain two relations for each decomposition. Equating, we obtain the announced formula. 
Consequence: Knowing both the product and the ratio of the diagonals, we deduce their immediate expressions: See also Casey's theorem Greek mathematics Notes References Coxeter, H. S. M. and S. L. Greitzer (1967) "Ptolemy's Theorem and its Extensions." §2.6 in Geometry Revisited, Mathematical Association of America pp. 42–43. Copernicus (1543) De Revolutionibus Orbium Coelestium, English translation found in On the Shoulders of Giants (2002) edited by Stephen Hawking, Penguin Books Amarasinghe, G. W. I. S. (2013) A Concise Elementary Proof for the Ptolemy's Theorem, Global Journal of Advanced Research on Classical and Modern Geometries (GJARCMG) 2(1): 20–25 (pdf). External links Proof of Ptolemy's Theorem for Cyclic Quadrilateral MathPages – On Ptolemy's Theorem Ptolemy's Theorem at cut-the-knot Compound angle proof at cut-the-knot Ptolemy's Theorem on PlanetMath Ptolemy Inequality on MathWorld De Revolutionibus Orbium Coelestium at Harvard. Deep Secrets: The Great Pyramid, the Golden Ratio and the Royal Cubit Ptolemy's Theorem by Jay Warendorff, The Wolfram Demonstrations Project. Book XIII of Euclid's Elements A Miraculous Proof (Ptolemy's Theorem) by Zvezdelina Stankova, on Numberphile. Theorems about quadrilaterals and circles Theorem Articles containing proofs Euclidean plane geometry
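Ptolemy's relation is also easy to check numerically. The following Python sketch places four points consecutively on the unit circle at arbitrarily chosen angles and verifies that AC·BD equals AB·CD + BC·DA to rounding error; the angles are illustrative and any increasing choice works.

# Numerical check of AC*BD = AB*CD + BC*DA for a cyclic quadrilateral on the unit circle.
import math

def chord(theta1, theta2):
    """Distance between two points on the unit circle given by their angles."""
    return math.dist((math.cos(theta1), math.sin(theta1)),
                     (math.cos(theta2), math.sin(theta2)))

# Four angles in increasing order place A, B, C, D consecutively on the circle.
tA, tB, tC, tD = 0.3, 1.1, 2.8, 5.0

AC, BD = chord(tA, tC), chord(tB, tD)
AB, CD = chord(tA, tB), chord(tC, tD)
BC, DA = chord(tB, tC), chord(tD, tA)

print(AC * BD, AB * CD + BC * DA)   # the two sides agree up to rounding error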
Ptolemy's theorem
[ "Mathematics" ]
2,775
[ "Articles containing proofs", "Planes (geometry)", "Euclidean plane geometry" ]
1,522,286
https://en.wikipedia.org/wiki/Schur%27s%20theorem
In discrete mathematics, Schur's theorem is any of several theorems of the mathematician Issai Schur. In differential geometry, Schur's theorem is a theorem of Axel Schur. In functional analysis, Schur's theorem is often called Schur's property, also due to Issai Schur. Ramsey theory In Ramsey theory, Schur's theorem states that for any partition of the positive integers into a finite number of parts, one of the parts contains three integers x, y, z with x + y = z. For every positive integer c, S(c) denotes the smallest number S such that for every partition of the integers into c parts, one of the parts contains integers x, y, and z with x + y = z. Schur's theorem ensures that S(c) is well-defined for every positive integer c. The numbers of the form S(c) are called Schur's numbers. Folkman's theorem generalizes Schur's theorem by stating that there exist arbitrarily large sets of integers, all of whose nonempty sums belong to the same part. Using this definition, the only known Schur numbers are S(1) = 2, S(2) = 5, S(3) = 14, S(4) = 45, and S(5) = 161. The proof that S(5) = 161 was announced in 2017 and required 2 petabytes of space. Combinatorics In combinatorics, Schur's theorem counts the number of ways of expressing a given number as a (non-negative, integer) linear combination of a fixed set of relatively prime numbers. In particular, if is a set of integers such that , the number of different multiples of non-negative integer numbers such that when goes to infinity is: As a result, for every set of relatively prime numbers there exists a value of such that every larger number is representable as a linear combination of in at least one way. This consequence of the theorem can be recast in a familiar context considering the problem of changing an amount using a set of coins. If the denominations of the coins are relatively prime numbers (such as 2 and 5) then any sufficiently large amount can be changed using only these coins. (See Coin problem.) Differential geometry In differential geometry, Schur's theorem compares the distance between the endpoints of a space curve to the distance between the endpoints of a corresponding plane curve of less curvature. Suppose is a plane curve with curvature which makes a convex curve when closed by the chord connecting its endpoints, and is a curve of the same length with curvature . Let denote the distance between the endpoints of and denote the distance between the endpoints of . If then . Schur's theorem is usually stated for curves, but John M. Sullivan has observed that Schur's theorem applies to curves of finite total curvature (the statement is slightly different). Linear algebra In linear algebra, Schur's theorem is referred to as either the triangularization of a square matrix with complex entries, or of a square matrix with real entries and real eigenvalues. Functional analysis In functional analysis and the study of Banach spaces, Schur's theorem, due to I. Schur, often refers to Schur's property, that for certain spaces, weak convergence implies convergence in the norm. Number theory In number theory, Issai Schur showed in 1912 that for every nonconstant polynomial p(x) with integer coefficients, if S is the set of all nonzero values p(n) for positive integers n, then the set of primes that divide some member of S is infinite. See also Schur's lemma (from Riemannian geometry) References Herbert S. Wilf (1994). generatingfunctionology. Academic Press. Shiing-Shen Chern (1967). Curves and Surfaces in Euclidean Space. In Studies in Global Geometry and Analysis. Prentice-Hall. Issai Schur (1912).
Über die Existenz unendlich vieler Primzahlen in einigen speziellen arithmetischen Progressionen, Sitzungsberichte der Berliner Math. Further reading Dany Breslauer and Devdatt P. Dubhashi (1995). Combinatorics for Computer Scientists John M. Sullivan (2006). Curves of Finite Total Curvature. arXiv. Theorems in discrete mathematics Ramsey theory Additive combinatorics Theorems in combinatorics Theorems in differential geometry Theorems in linear algebra Theorems in functional analysis Computer-assisted proofs
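The Ramsey-theoretic statement is small enough to verify exhaustively for two parts. The following Python sketch checks, by brute force over all 2-colourings, that {1, ..., 4} can be split into two parts neither of which contains x, y, z with x + y = z, while {1, ..., 5} cannot, which is the statement S(2) = 5 under the convention used above (x and y need not be distinct).

# Brute-force verification of S(2) = 5.
from itertools import product

def has_mono_schur_triple(colouring):
    """True if some part contains x, y, z (not necessarily distinct) with x + y = z."""
    n = len(colouring)
    for x in range(1, n + 1):
        for y in range(x, n + 1):
            z = x + y
            if z <= n and colouring[x - 1] == colouring[y - 1] == colouring[z - 1]:
                return True
    return False

for n in (4, 5):
    colourings = product(range(2), repeat=n)
    always_triple = all(has_mono_schur_triple(c) for c in colourings)
    print(f"n = {n}: every 2-colouring of 1..{n} has a monochromatic x+y=z? {always_triple}")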
Schur's theorem
[ "Mathematics" ]
912
[ "Theorems in differential geometry", "Theorems in linear algebra", "Theorems in mathematical analysis", "Theorems in combinatorics", "Discrete mathematics", "Mathematical theorems", "Theorems in algebra", "Additive combinatorics", "Computer-assisted proofs", "Theorems in discrete mathematics", "...
1,522,373
https://en.wikipedia.org/wiki/Carroll%27s%20paradox
In physics, Carroll's paradox arises when considering the motion of a falling rigid rod that is specially constrained. Considered one way, the angular momentum stays constant; considered in a different way, it changes. It is named after Michael M. Carroll who first published it in 1984. Explanation Consider two concentric circles of radius and as might be drawn on the face of a wall clock. Suppose a uniform rigid heavy rod of length is somehow constrained between these two circles so that one end of the rod remains on the inner circle and the other remains on the outer circle. Motion of the rod along these circles, acting as guides, is frictionless. The rod is held in the three o'clock position so that it is horizontal, then released. Now consider the angular momentum about the centre of the rod: After release, the rod falls. Being constrained, it must rotate as it moves. When it gets to a vertical six o'clock position, it has lost potential energy and, because the motion is frictionless, will have gained kinetic energy. It therefore possesses angular momentum. The reaction force on the rod from either circular guide is frictionless, so it must be directed along the rod; there can be no component of the reaction force perpendicular to the rod. Taking moments about the center of the rod, there can be no moment acting on the rod, so its angular momentum remains constant. Because the rod starts with zero angular momentum, it must continue to have zero angular momentum for all time. An apparent resolution of this paradox is that the physical situation cannot occur. To maintain the rod in a radial position the circles have to exert an infinite force. In real life it would not be possible to construct guides that do not exert a significant reaction force perpendicular to the rod. Victor Namias, however, disputed that infinite forces occur, and argued that a finitely thick rod experiences torque about its center of mass even in the limit as it approaches zero width. References Mechanics Physical paradoxes
Carroll's paradox
[ "Physics", "Engineering" ]
401
[ "Mechanics", "Mechanical engineering" ]
6,980,269
https://en.wikipedia.org/wiki/Wave-making%20resistance
Wave-making resistance is a form of drag that affects surface watercraft, such as boats and ships, and reflects the energy required to push the water out of the way of the hull. This energy goes into creating the wave. Physics For small displacement hulls, such as sailboats or rowboats, wave-making resistance is the major source of marine vessel drag. A salient property of water waves is dispersiveness; i.e., the greater the wavelength, the faster the wave moves. Waves generated by a ship are affected by her geometry and speed, and most of the energy given by the ship for making waves is transferred to the water through the bow and stern parts. Simply speaking, these two wave systems, i.e., bow and stern waves, interact with each other, and the resulting waves are responsible for the resistance. If the resulting wave is large, it carries much energy away from the ship, delivering it to the shore or wherever else the wave ends up or just dissipating it in the water, and that energy must be supplied by the ship's propulsion (or momentum), so that the ship experiences it as drag. Conversely, if the resulting wave is small, the drag experienced is small. The amount and direction (additive or subtractive) of the interference depends upon the phase difference between the bow and stern waves (which have the same wavelength and phase speed), and that is a function of the length of the ship at the waterline. For a given ship speed, the phase difference between the bow wave and stern wave is proportional to the length of the ship at the waterline. For example, if the ship takes three seconds to travel its own length, then at some point the ship passes, a stern wave is initiated three seconds after a bow wave, which implies a specific phase difference between those two waves. Thus, the waterline length of the ship directly affects the magnitude of the wave-making resistance. For a given waterline length, the phase difference depends upon the phase speed and wavelength of the waves, and those depend directly upon the speed of the ship. For a deepwater wave, the phase speed is the same as the propagation speed and is proportional to the square root of the wavelength. That wavelength is dependent upon the speed of the ship. Thus, the magnitude of the wave-making resistance is a function of the speed of the ship in relation to its length at the waterline. A simple way of considering wave-making resistance is to look at the hull in relation to bow and stern waves. If the length of a ship is half the length of the waves generated, the resulting wave will be very small due to cancellation, and if the length is the same as the wavelength, the wave will be large due to enhancement. The phase speed of waves is given by the following formula: c = √(gλ / 2π), where λ is the length of the wave and g the gravitational acceleration. Substituting in the appropriate value for yields the equation: or, in metric units: These values, 1.34, 2.5 and very easy 6, are often used in the hull speed rule of thumb used to compare potential speeds of displacement hulls, and this relationship is also fundamental to the Froude number, used in the comparison of different scales of watercraft. When the vessel exceeds a "speed–length ratio" (speed in knots divided by square root of length in feet) of 0.94, it starts to outrun most of its bow wave, and the hull actually settles slightly in the water as it is now only supported by two wave peaks.
As the vessel exceeds a speed–length ratio of 1.34, the wavelength is now longer than the hull, and the stern is no longer supported by the wake, causing the stern to squat and the bow to rise. The hull is now starting to climb its own bow wave, and resistance begins to increase at a very high rate. While it is possible to drive a displacement hull faster than a speed–length ratio of 1.34, it is prohibitively expensive to do so. Most large vessels operate at speed–length ratios well below that level, typically under 1.0. Ways of reducing wave-making resistance Since wave-making resistance is based on the energy required to push the water out of the way of the hull, there are a number of ways that this can be minimized. Reduced displacement Reducing the displacement of the craft, by eliminating excess weight, is the most straightforward way to reduce the wave-making drag. Another way is to shape the hull so as to generate lift as it moves through the water. Semi-displacement hulls and planing hulls do this, and they are able to break through the hull speed barrier and transition into a realm where drag increases at a much lower rate. The disadvantage of this is that planing is only practical on smaller vessels with high power-to-weight ratios, such as motorboats. It is not a practical solution for a large vessel such as a supertanker. Fine entry A hull with a blunt bow has to push the water away very quickly to pass through, and this high acceleration requires large amounts of energy. By using a fine bow, with a sharper angle that pushes the water out of the way more gradually, the amount of energy required to displace the water will be less. A modern variation is the wave-piercing design. The total amount of water that a moving hull must displace, and which thus causes wave-making drag, is the cross-sectional area of the hull times the distance the hull travels; it does not remain the same when the prismatic coefficient is increased for the same waterline length, displacement and speed. Bulbous bow A special type of bow, called a bulbous bow, is often used on large power vessels to reduce wave-making drag. The bulb alters the waves generated by the hull, by changing the pressure distribution ahead of the bow. Because of the nature of its destructive interference with the bow wave, there is a limited range of vessel speeds over which it is effective. A bulbous bow must be properly designed to mitigate the wave-making resistance of a particular hull over a particular range of speeds. A bulb that works for one vessel's hull shape and one range of speeds could be detrimental to a different hull shape or a different speed range. Proper design and knowledge of a ship's intended operating speeds and conditions is therefore necessary when designing a bulbous bow. Hull form filtering If the hull is designed to operate at speeds substantially lower than hull speed then it is possible to refine the hull shape along its length to reduce wave resistance at one speed. This is practical only where the block coefficient of the hull is not a significant issue. Semi-displacement and planing hulls Since semi-displacement and planing hulls generate a significant amount of lift in operation, they are capable of breaking the barrier of the wave propagation speed and operating in realms of much lower drag, but to do this they must be capable of first pushing past that speed, which requires significant power.
This stage is called the transition stage, and at this stage the rate of increase of wave-making resistance is highest. Once the hull gets over the hump of the bow wave, the rate of increase of the wave drag starts to fall significantly. The planing hull rises up, clearing its stern off the water, and its trim is high; the underwater part of a planing hull is small during the planing regime. A qualitative interpretation of the wave resistance plot is that a displacement hull resonates with a wave that has a crest near its bow and a trough near its stern, because the water is pushed away at the bow and pulled back at the stern. A planing hull simply pushes down on the water under it, so it resonates with a wave that has a trough under it. Such a wave is about twice as long as the wave a displacement hull of the same length resonates with, and therefore only √2, or about 1.4, times as fast. In practice most planing hulls move much faster than that. At four times hull speed the wavelength is already 16 times longer than the hull. See also Ship resistance and propulsion Hull (watercraft)#Categorisation Hull speed References On the subject of high speed monohulls, Daniel Savitsky, Professor Emeritus, Davidson Laboratory, Stevens Institute of Technology Fluid dynamics Water waves Naval architecture
Wave-making resistance
[ "Physics", "Chemistry", "Engineering" ]
1,708
[ "Naval architecture", "Physical phenomena", "Water waves", "Chemical engineering", "Waves", "Marine engineering", "Piping", "Fluid dynamics" ]
6,980,928
https://en.wikipedia.org/wiki/Thermal%20expansion%20valve
A thermal expansion valve or thermostatic expansion valve (often abbreviated as TEV, TXV, or TX valve) is a component in vapor-compression refrigeration and air conditioning systems that controls the amount of refrigerant released into the evaporator and is intended to regulate the superheat of the refrigerant that flows out of the evaporator to a steady value. Although often described as a "thermostatic" valve, an expansion valve is not able to regulate the evaporator's temperature to a precise value. The evaporator's temperature will vary only with the evaporating pressure, which will have to be regulated through other means (such as by adjusting the compressor's capacity). Thermal expansion valves are often referred to generically as "metering devices", although this may also refer to any other device that releases liquid refrigerant into the low-pressure section but does not react to temperature, such as a capillary tube or a pressure-controlled valve. Theory of operation A thermal expansion valve is a key element to a heat pump; this is the cycle that makes air conditioning, or air cooling, possible. A basic refrigeration cycle consists of four major elements: a compressor, a condenser, a metering device and an evaporator. As a refrigerant passes through a circuit containing these four elements, air conditioning occurs. The cycle starts when refrigerant enters the compressor in a low-pressure, moderate-temperature, gaseous form. The refrigerant is compressed by the compressor to a high-pressure and high-temperature gaseous state. The high-pressure and high-temperature gas then enters the condenser. The condenser cools the high-pressure and high-temperature gas allowing it to condense to a high-pressure liquid by transferring heat to a lower temperature medium, usually ambient air. In order to produce a cooling effect from the higher pressure liquid, the flow of refrigerant entering the evaporator is restricted by the expansion valve, reducing the pressure and allowing isenthalpic expansion back into the vapor phase to take place, which absorbs heat and results in cooling. A TXV type expansion device has a sensing bulb that is filled with a liquid whose thermodynamic properties are similar to those of the refrigerant. This bulb is thermally connected to the output of the evaporator so that the temperature of the refrigerant that leaves the evaporator can be sensed. The gas pressure in the sensing bulb provides the force to open the TXV, and as the temperature drops this force will decrease, therefore dynamically adjusting the flow of refrigerant into the evaporator. The superheat is the excess temperature of the vapor above its boiling point at the evaporating pressure. No superheat indicates that the refrigerant is not being fully vaporized within the evaporator and liquid may end up recirculated to the compressor which is inefficient and can cause damage. On the other hand, excessive superheat indicates that there is insufficient refrigerant flowing through the evaporator coil, and thus a significant portion toward the end is not providing cooling. Therefore, by regulating the superheat to a small value, typically only a few °C, the heat transfer of the evaporator will be near optimal, without excess liquid refrigerant being returned to the compressor. 
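As a rough illustration of how a sensing bulb regulates superheat, the following Python sketch models a simple proportional response: the valve opening grows with the difference between the measured superheat (suction-line temperature minus the refrigerant's saturation temperature at the evaporating pressure) and a superheat set point. This is a toy model for illustration only; the saturation temperature, gain and set-point values are assumptions, not data for any particular refrigerant or valve.

    def superheat(suction_line_temp_c: float, saturation_temp_c: float) -> float:
        """Superheat = vapor temperature above its boiling point at evaporator pressure."""
        return suction_line_temp_c - saturation_temp_c

    def valve_opening(measured_superheat_c: float,
                      setpoint_c: float = 5.0,
                      gain_per_c: float = 0.2) -> float:
        """Fraction open (0..1): bulb pressure opens the valve, spring force closes it.
        More superheat -> warmer bulb -> higher bulb pressure -> valve opens further."""
        opening = gain_per_c * (measured_superheat_c - setpoint_c) + 0.5
        return max(0.0, min(1.0, opening))

    # Example: refrigerant evaporating at 5 degC; suction line leaves the evaporator at 12 degC.
    sh = superheat(12.0, 5.0)   # 7 K of superheat
    print(valve_opening(sh))    # 0.9 -> valve opens further, admitting more refrigerant

A real valve is a mechanical balance of bulb pressure, evaporator pressure and spring force rather than an explicit control law, but the direction of the response is the same.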
In order to provide an appropriate superheat, a spring force is often applied in the direction that would close the valve, meaning that the valve will close when the bulb is at a lower temperature than the refrigerant is evaporating at. Spring-type valves may be fixed, or adjustable, although other methods to ensure a superheat also exist, such as the sensing bulb having a different vapor composition to the rest of the system. Some thermal expansion valves are also specifically designed to ensure that a certain minimum flow of refrigerant can always flow through the system, while others can also be designed to control the evaporator's pressure so that it never rises above a maximum value. Description Flow control, or metering, of the refrigerant is accomplished by use of a temperature sensing bulb, filled with a gas or liquid charge similar to the one inside the system, that causes the orifice in the valve to open against the spring pressure in the valve body as the temperature on the bulb increases. As the suction line temperature decreases, so does the pressure in the bulb and therefore on the spring, causing the valve to close. An air conditioning system with a TX valve is often more efficient than those with designs that do not use one. Also, TX valve air conditioning systems do not require an accumulator (a refrigerant tank placed downstream of the evaporator's outlet), since the valves reduce the liquid refrigerant flow when the evaporator's thermal load decreases, so that all the refrigerant completely evaporates inside the evaporator (in normal operating conditions such as a proper evaporator temperature and airflow). However, a liquid refrigerant receiver tank needs to be placed in the liquid line before the TX valve so that, in low evaporator thermal load conditions, any excess liquid refrigerant can be stored inside it, preventing any liquid from backflowing inside the condenser coil from the liquid line. At heat loads which are very low compared to the valve's power rating, the orifice can become oversized for the heat load, and the valve can begin to repeatedly open and close, in an attempt to control the superheat to the set value, making the superheat oscillate. Cross charges, that is, sensing bulb charges composed of a mixture of different refrigerants or also non-refrigerant gases such as nitrogen (as opposed to a charge composed exclusively of the same refrigerant inside the system, known as a parallel charge), set so that the vapor pressure vs temperature curve of the bulb charge "crosses" the vapor pressure vs temperature curve of the system's refrigerant at a certain temperature value (that is, a bulb charge set so that, below a certain refrigerant temperature, the vapor pressure of the bulb charge suddenly becomes higher than that of the system's refrigerant, forcing the metering pin to stay into an open position), help to reduce the superheat hunt phenomenon by preventing the valve orifice from completely closing during system operation. The same result can be attained through different kinds of bleed passages that generate a minimum refrigerant flow at all times. The cost, however, is determining a certain flow of refrigerant that will not reach the suction line in a fully evaporated state while the heat load is particularly low, and that the compressor must be designed to handle. 
By carefully selecting the amount of a liquid sensing bulb charge, a so-called MOP (maximum operating pressure) effect can be also attained; above a precise refrigerant temperature, the sensing bulb charge will be entirely evaporated, making the valve begin restricting flow irrespective of the sensed superheat, rather than increasing it in order to bring evaporator superheat down to the target value. Therefore, the evaporator pressure will be kept from increasing above the MOP value. This feature helps to control the compressor's maximum operating torque to a value that is acceptable for the application, such as a small displacement car engine. A low refrigerant charge condition is often accompanied when the compressor is operational by a loud whooshing sound heard from the thermal expansion valve and the evaporator, which is caused by the lack of a liquid head right before the valve's moving orifice, resulting in the orifice trying to meter a vapor or a vapor/liquid mixture instead of a liquid. Types There are two main types of thermal expansion valves: internally or externally equalized. The difference between externally and internally equalized valves is how the evaporator pressure affects the position of the needle. In internally equalized valves, the evaporator pressure against the diaphragm is the pressure at the inlet of the evaporator (typically via an internal connection to the outlet of the valve), whereas in externally equalized valves, the evaporator pressure against the diaphragm is the pressure at the outlet of the evaporator. Externally equalized thermostatic expansion valves compensate for any pressure drop through the evaporator. For internally equalised valves a pressure drop in the evaporator will have the effect of increasing the superheat. Internally equalized valves can be used on single circuit evaporator coils having low-pressure drop. If a refrigerant distributor is used for multiple parallel evaporators (rather than a valve on each evaporator) then an externally equalized valve must be used. Externally equalized TXVs can be used on all applications; however, an externally equalized TXV cannot be replaced with an internally equalized TXV. For automotive applications, a type of externally equalized thermal expansion valve, known as the block type valve, is often used. In this type, either a sensing bulb is located within the suction line connection within the valve body and is in constant contact with the refrigerant that flows out of the evaporator's outlet, or a heat transfer means is provided so that the refrigerant is able to exchange heat with the sensing charge contained in a chamber located above the diaphragm as it flows to the suction line. Although the bulb/diaphragm type is used in most systems that control the refrigerant superheat, electronic expansion valves are becoming more common in larger systems or systems with multiple evaporators to allow them to be adjusted independently. Although electronic valves can provide greater control range and flexibility that bulb/diaphragm types cannot provide, they add complexity and points of failure to a system as they require additional temperature and pressure sensors and an electronic control circuit. Most electronic valves use a stepper motor hermetically sealed inside the valve to actuate a needle valve with a screw mechanism, on some units only the stepper rotor is within the hermetic body and is magnetically driven through the sealed valve body by stator coils on the outside of the device. 
References Further reading How does a TEV work? Valves Cooling technology
Thermal expansion valve
[ "Physics", "Chemistry" ]
2,175
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
6,988,866
https://en.wikipedia.org/wiki/Directed%20percolation
In statistical physics, directed percolation (DP) refers to a class of models that mimic filtering of fluids through porous materials along a given direction, due to the effect of gravity. Varying the microscopic connectivity of the pores, these models display a phase transition from a macroscopically permeable (percolating) to an impermeable (non-percolating) state. Directed percolation is also used as a simple model for epidemic spreading, with a transition between survival and extinction of the disease depending on the infection rate. More generally, the term directed percolation stands for a universality class of continuous phase transitions which are characterized by the same type of collective behavior on large scales. Directed percolation is probably the simplest universality class of transitions out of thermal equilibrium. Lattice models One of the simplest realizations of DP is bond directed percolation. This model is a directed variant of ordinary (isotropic) percolation and can be introduced as follows. Consider a tilted square lattice with bonds connecting neighboring sites. The bonds are permeable (open) with probability p and impermeable (closed) otherwise. The sites and bonds may be interpreted as holes and randomly distributed channels of a porous medium. The difference between ordinary and directed percolation is the following. In isotropic percolation a spreading agent (e.g. water) introduced at a particular site percolates along open bonds, generating a cluster of wet sites. In directed percolation, by contrast, the spreading agent can pass open bonds only along a preferred direction in space (for example downward, in the direction of gravity), so the resulting cluster is directed in space. As a dynamical process Interpreting the preferred direction as a temporal degree of freedom, directed percolation can be regarded as a stochastic process that evolves in time. In a minimal, two-parameter model that includes bond and site DP as special cases, a one-dimensional chain of sites evolves in discrete time steps t, which can be viewed as a second dimension, and all sites are updated in parallel. Activating a single site (the initial seed) at time t = 0, the resulting cluster can be constructed row by row. The corresponding number of active sites N(t) varies as time evolves. Universal scaling behavior The DP universality class is characterized by a certain set of critical exponents. These exponents depend on the spatial dimension d. Above the so-called upper critical dimension d_c = 4 they are given by their mean-field values, while in dimensions d < 4 they have been estimated numerically. Other examples In two dimensions, the percolation of water through a thin tissue (such as toilet paper) has the same mathematical underpinnings as the flow of electricity through two-dimensional random networks of resistors. In chemistry, chromatography can be understood with similar models. The propagation of a tear or rip in a sheet of paper, in a sheet of metal, or even the formation of a crack in ceramic bears broad mathematical resemblance to the flow of electricity through a random network of electrical fuses. Above a certain critical point, the electrical flow will cause a fuse to pop, possibly leading to a cascade of failures, resembling the propagation of a crack or tear.
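A minimal numerical sketch of bond directed percolation in 1+1 dimensions, viewed as a dynamical process: starting from a single seed, each active site independently tries to activate its two forward neighbours through bonds that are open with probability p. The Python code below is an illustrative Monte Carlo toy rather than a reference implementation; the function names are invented here, and the example value p = 0.6447 is used only because it is close to the commonly quoted bond-DP threshold on this lattice.

    import random

    def simulate_bond_dp(p: float, steps: int, seed_site: int = 0) -> list:
        """Return the number of active sites N(t) for t = 0..steps, starting from one seed."""
        active = {seed_site}
        history = [len(active)]
        for _ in range(steps):
            new_active = set()
            for site in active:
                # Each active site tries to wet its two forward (diagonal) neighbours.
                if random.random() < p:
                    new_active.add(site)        # "left" descendant on the tilted lattice
                if random.random() < p:
                    new_active.add(site + 1)    # "right" descendant
            active = new_active
            history.append(len(active))
            if not active:                      # cluster died out (non-percolating run)
                break
        return history

    random.seed(1)
    print(simulate_bond_dp(p=0.6447, steps=50)[:10])  # N(t) for the first few time steps

Averaging N(t) over many runs near the threshold is the standard way the critical exponents mentioned above are estimated numerically.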
The study of percolation helps indicate how the flow of electricity will redistribute itself in the fuse network, thus modeling which fuses are most likely to pop next, and how fast they will pop, and what direction the crack may curve in. Examples can be found not only in physical phenomena, but also in biology, neuroscience, ecology (e.g. evolution), and economics (e.g. diffusion of innovation). Percolation can be considered to be a branch of the study of dynamical systems or statistical mechanics. In particular, percolation networks exhibit a phase change around a critical threshold. Experimental realizations In spite of vast success in the theoretical and numerical studies of DP, obtaining convincing experimental evidence has proved challenging. In 1999 an experiment on flowing sand on an inclined plane was identified as a physical realization of DP. In 2007, critical behavior of DP was finally found in the electrohydrodynamic convection of liquid crystal, where a complete set of static and dynamic critical exponents and universal scaling functions of DP were measured in the transition to spatiotemporal intermittency between two turbulent states. See also Percolation threshold Ziff–Gulari–Barshad model Percolation critical exponents Sources Literature L. Canet: "Processus de réaction-diffusion : une approche par le groupe de renormalisation non perturbatif", Thèse. Thèse en ligne Muhammad Sahimi. Applications of Percolation Theory. Taylor & Francis, 1994. (cloth), (paper) Geoffrey Grimmett. Percolation (2. ed). Springer Verlag, 1999. References Sources Percolation theory Critical phenomena
Directed percolation
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,024
[ "Physical phenomena", "Phase transitions", "Critical phenomena", "Percolation theory", "Combinatorics", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
6,988,897
https://en.wikipedia.org/wiki/Defense%20physiology
Defense physiology is a term used to refer to the symphony of body function (physiology) changes which occur in response to a stress or threat. When the body executes the "fight-or-flight" reaction or stress response, the nervous system initiates, coordinates and directs specific changes in how the body is functioning, preparing the body to deal with the threat. (See also General adaptation syndrome.) Definitions Stress: As it pertains to the term defense physiology, stress refers to a perceived threat to the continued functioning of the body / life according to its current state. Threat: A threat may be consciously recognized or not. A physical event (a loud noise, a car collision or an impending attack), a chemical, or a biological agent which alters (or could alter) body function (physiology) away from optimum or healthy functioning (or away from its current state of functioning) may be perceived as a threat (also called a stressor). Life circumstances, though posing no immediate physical danger, can also be perceived as a threat: anything that could change the continuation of the person's life as they are currently experiencing it may be perceived as threatening. Physiological reactions to threat (or perceived threat) A threat may be either empirical (an outside observer would agree that the event or circumstance poses a threat) or a priori (an outside observer would not agree that the event or circumstance poses a threat). What is important to the individual, in terms of the body's response, is that a threat is perceived. The perception of a threat may also trigger an associated feeling of distress. The physiological reactions triggered by the mind do not distinguish between physical and mental threats; the "fight-or-flight" response is the same for both. Duration of threat and its physiological effects on the nervous system Acute stress reaction - The body executes the "fight-or-flight" reaction to get out of danger quickly. When the threat and its resolution are close together in time, the "fight-or-flight" reaction is executed, the threat is handled, and the body returns to its previous state (taking care of the business of life - digestion, relaxation, tissue repair, etc.). The body has evolved to stay in this mode for only a short time. Chronic stress state - When the threat and its resolution are further apart in time (the threat or the perception of threat is prolonged, or other threats occur before the body has recovered), the "fight-or-flight" reaction continues and becomes the new "standard operating condition" of the body, "chronic defense physiology". Continuing in this mode produces significant negative effects (distress) in many aspects of body functioning (physical, mental and emotional distress). See also Hypothalamic–pituitary–adrenal axis References Physiology Stress (biology) Endocrine system
Defense physiology
[ "Biology" ]
602
[ "Organ systems", "Endocrine system", "Physiology" ]
980,166
https://en.wikipedia.org/wiki/Air%20vortex%20cannon
An air vortex cannon is a toy that releases doughnut-shaped air vortices — similar to smoke rings but larger, stronger and invisible. The vortices can ruffle hair, disturb papers or blow out candles after travelling several metres. An air vortex cannon can be made easily at home, from just a cardboard box. Air cannons are used in some amusement parks such as Universal Studios to spook or surprise visitors. The Wham-O Air Blaster toy introduced in 1965 could blow out a candle at . The commercial Airzooka was developed by Brian S. Jordan who claims to have conceived it when still a boy. A feature of the Airzooka is a loose non-elastic polythene membrane, tensioned by a bungee cord, rather than elastic membranes. This allows a much greater volume of air to be displaced. A large air vortex cannon, with a wide barrel and a displacement volume of was built in March 2008 at the University of Minnesota, and could blow out candles at . In 2012, a large air vortex cannon was built for Czech Television program Zázraky přírody (). It was capable of bringing down a wall of cardboard boxes from in what was claimed to be a world record. See also Bubble ring Vortex ring gun Bamboo cannon Boga (noisemaker) Potato cannon Big-Bang Cannon References External links Home made vortex cannon using a cardboard box and a smoke machine from The URN Science Show. Toy weapons Vortices
Air vortex cannon
[ "Chemistry", "Mathematics" ]
299
[ "Dynamical systems", "Vortices", "Fluid dynamics" ]
981,045
https://en.wikipedia.org/wiki/Three-key%20exposition
In music, the three-key exposition is a particular kind of exposition used in sonata form. Normally, a sonata form exposition has two main key areas. The first asserts the primary key of the piece, that is, the tonic. The second section moves to a different key, establishes that key firmly, arriving ultimately at a cadence in that key. For the second key, composers normally chose the dominant for major-key sonatas, and the relative major (or less commonly, the minor-mode dominant) for minor-key sonatas. The three-key exposition moves not directly to the dominant or relative major, but indirectly via a third key; hence the name. Examples A very early example appears in the first movement of Haydn's String Quartet in D major, Op. 17 No. 6: the three keys are D major, C major, and A major. (C major is prepared by a modulation to its relative minor A minor, which happens to be the dominant minor of the original key.) Ludwig van Beethoven wrote a number of sonata movements during the earlier part of his career with three-key expositions. For the "third" (that is, the intermediate) key, Beethoven made various choices: the dominant minor (Piano Sonata No. 2, Op. 2 no. 2; String Quartet No. 5, Op. 18 no. 5), the supertonic minor (Piano Sonata No. 3, Op. 2 no. 3), and the relative minor (Piano Sonata No. 7, Op. 10 no. 3). Later, Beethoven used the supertonic major (Piano Sonata No. 9, Op. 14 no. 1, Piano Sonata No. 11, Op. 22), which is only a mild sort of three-key exposition, since the supertonic major is the dominant of the dominant, and commonly arises in any event as part of the modulation. As he entered his so-called "middle period," Beethoven abandoned the three-key exposition. This was part of a general change in the composer's work in which he moved closer to the older practice of Haydn, writing less discursive and more closely organized sonata movements. Franz Schubert, who liked discursive forms for the entirety of his short career, also employed the three-key expositions in many of his sonata movements. A famous example is the first movement of the Death and the Maiden Quartet in D minor, in which the exposition moves to F major and then A minor (translated to D major and minor respectively in the recapitulation), a formula that is repeated in the final movement; another is the Violin Sonata in A major (in which the second theme appears in G major and B major, while only the closing passage of the exposition is in the dominant, E major). His B major piano sonata, D 575, even uses a four-key exposition (B major, G major, E major, F-sharp major): this key scheme is literally transposed up a fourth for the recapitulation. The finale of his sixth symphony (D 589) is an even more extreme case: its exposition passes from C major to G major by way of A-flat major, F major, A major, and E-flat major, making a six-key exposition. Felix Mendelssohn followed the Death and the Maiden example in the first movement of his second Piano Trio, in which the E flat major second theme gives way to a G minor close (transposed to C major and minor in the recapitulation). The first movement of Frédéric Chopin's Piano Concerto in F minor also has a three-key exposition (F minor, A-flat major, C minor). 
The first movement of the second cello sonata by Brahms also employs a three-key exposition moving to C major and then A minor, the exposition of the first movement of the String Sextet in B flat involves an intervening theme in A major before reaching F, and the Piano Quartet in G minor involves secondary themes in D minor and major respectively (the first of these being omitted in the recapitulation and the second transposed to E flat major moving back to G minor). The D minor violin sonata has a final movement that moves through a calm second theme in C major before closing the exposition in A minor. Further reading Longyear, Rey M., and Kate R. Covington (1988). Sources of the three-key exposition. The Journal of Musicology 6(4), pp. 448-470. Rosen, Charles (1985) Sonata Forms. New York: Norton. Graham G. Hunt; When Structure and Design Collide: The Three-Key Exposition Revisited, Music Theory Spectrum, Volume 36, Issue 2, 1 December 2014, Pages 247–269. Formal sections in music analysis
Three-key exposition
[ "Technology" ]
984
[ "Components", "Formal sections in music analysis" ]
981,153
https://en.wikipedia.org/wiki/Exergonic%20process
An exergonic process is one in which there is a positive flow of energy from the system to the surroundings. This is in contrast with an endergonic process. Constant-pressure, constant-temperature reactions are exergonic if and only if the Gibbs free energy change is negative (∆G < 0). "Exergonic" (from the prefix exo-, derived from the Greek word ἔξω exō, "outside", and the suffix -ergonic, derived from the Greek word ἔργον ergon, "work") means "releasing energy in the form of work". In thermodynamics, work is defined as the energy moving from the system (the internal region) to the surroundings (the external region) during a given process. All physical and chemical systems in the universe follow the second law of thermodynamics and proceed in a downhill, i.e., exergonic, direction. Thus, left to itself, any physical or chemical system will proceed, according to the second law of thermodynamics, in a direction that tends to lower the free energy of the system, and thus to expend energy in the form of work. These reactions occur spontaneously. A chemical reaction is likewise exergonic when it is spontaneous; in this type of reaction the Gibbs free energy decreases. The entropy is included in any change of the Gibbs free energy, which distinguishes it from an exothermic or endothermic reaction, where the entropy is not included. The Gibbs free energy change is calculated with the Gibbs–Helmholtz relation ΔG = ΔH − T·ΔS, where: T = temperature in kelvins (K), ΔG = change in the Gibbs free energy, ΔS = change in entropy (at 298 K) as ΔS = Σ{S(Product)} − Σ{S(Reagent)}, ΔH = change in enthalpy (at 298 K) as ΔH = Σ{H(Product)} − Σ{H(Reagent)}. A chemical reaction progresses spontaneously only when the Gibbs free energy decreases, in which case ΔG is negative. In exergonic reactions ΔG is negative and in endergonic reactions ΔG is positive: exergonic, ΔG < 0; endergonic, ΔG > 0; where ΔG equals the change in the Gibbs free energy after completion of a chemical reaction. See also Endergonic Endergonic reaction Exothermic process Endothermic process Exergonic reaction Exothermic reaction Endothermic reaction Endotherm Ectotherm References Thermodynamic processes Chemical thermodynamics
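A worked example of the ΔG = ΔH − T·ΔS relation above, written as a small Python sketch (illustrative only; the enthalpy and entropy values are round placeholder numbers, not data for any specific reaction):

    def gibbs_free_energy_change(delta_h_j: float, temperature_k: float, delta_s_j_per_k: float) -> float:
        """Delta G = Delta H - T * Delta S (SI units: J, K, J/K)."""
        return delta_h_j - temperature_k * delta_s_j_per_k

    dg = gibbs_free_energy_change(delta_h_j=-50_000.0, temperature_k=298.0, delta_s_j_per_k=100.0)
    print(dg)                                       # -79800.0 J
    print("exergonic" if dg < 0 else "endergonic")  # exergonic, since dG < 0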
Exergonic process
[ "Physics", "Chemistry" ]
546
[ "Chemical thermodynamics", "Thermodynamic processes", "Thermodynamics" ]
981,631
https://en.wikipedia.org/wiki/Failure%20mode%20and%20effects%20analysis
Failure mode and effects analysis (FMEA; often written with "failure modes" in plural) is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. A FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study. A few different types of FMEA analyses exist, such as: Functional Design Process Sometimes FMEA is extended to FMECA (failure mode, effects, and criticality analysis) to indicate that criticality analysis is performed too. FMEA is an inductive reasoning (forward logic) single point of failure analysis and is a core task in reliability engineering, safety engineering and quality engineering. A successful FMEA activity helps identify potential failure modes based on experience with similar products and processes—or based on common physics of failure logic. It is widely used in development and manufacturing industries in various phases of the product life cycle. Effects analysis refers to studying the consequences of those failures on different system levels. Functional analyses are needed as an input to determine correct failure modes, at all system levels, both for functional FMEA or piece-part (hardware) FMEA. A FMEA is used to structure mitigation for risk reduction based on either failure mode or effect severity reduction, or based on lowering the probability of failure or both. The FMEA is in principle a full inductive (forward logic) analysis, however the failure probability can only be estimated or reduced by understanding the failure mechanism. Hence, FMEA may include information on causes of failure (deductive analysis) to reduce the possibility of occurrence by eliminating identified (root) causes. Introduction The FME(C)A is a design tool used to systematically analyze postulated component failures and identify the resultant effects on system operations. The analysis is sometimes characterized as consisting of two sub-analyses, the first being the failure modes and effects analysis (FMEA), and the second, the criticality analysis (CA). Successful development of an FMEA requires that the analyst include all significant failure modes for each contributing element or part in the system. FMEAs can be performed at the system, subsystem, assembly, subassembly or part level. The FMECA should be a living document during development of a hardware design. It should be scheduled and completed concurrently with the design. If completed in a timely manner, the FMECA can help guide design decisions. The usefulness of the FMECA as a design tool and in the decision-making process is dependent on the effectiveness and timeliness with which design problems are identified. Timeliness is probably the most important consideration. In the extreme case, the FMECA would be of little value to the design decision process if the analysis is performed after the hardware is built. 
While the FMECA identifies all part failure modes, its primary benefit is the early identification of all critical and catastrophic subsystem or system failure modes so they can be eliminated or minimized through design modification at the earliest point in the development effort; therefore, the FMECA should be performed at the system level as soon as preliminary design information is available and extended to the lower levels as the detail design progresses. Remark: For more complete scenario modelling another type of reliability analysis may be considered, for example fault tree analysis (FTA); a deductive (backward logic) failure analysis that may handle multiple failures within the item and/or external to the item including maintenance and logistics. It starts at higher functional / system level. An FTA may use the basic failure mode FMEA records or an effect summary as one of its inputs (the basic events). Interface hazard analysis, human error analysis and others may be added for completion in scenario modelling. Functional failure mode and effects analysis The analysis should always be started by someone listing the functions that the design needs to fulfill. Functions are the starting point of a well done FMEA, and using functions as baseline provides the best yield of an FMEA. After all, a design is only one possible solution to perform functions that need to be fulfilled. This way an FMEA can be done on concept designs as well as detail designs, on hardware as well as software, and no matter how complex the design. When performing a FMECA, interfacing hardware (or software) is first considered to be operating within specification. After that it can be extended by consequently using one of the 5 possible failure modes of one function of the interfacing hardware as a cause of failure for the design element under review. This gives the opportunity to make the design robust against function failure elsewhere in the system. In addition, each part failure postulated is considered to be the only failure in the system (i.e., it is a single failure analysis). In addition to the FMEAs done on systems to evaluate the impact lower level failures have on system operation, several other FMEAs are done. Special attention is paid to interfaces between systems and in fact at all functional interfaces. The purpose of these FMEAs is to assure that irreversible physical and/or functional damage is not propagated across the interface as a result of failures in one of the interfacing units. These analyses are done to the piece part level for the circuits that directly interface with the other units. The FMEA can be accomplished without a CA, but a CA requires that the FMEA has previously identified system level critical failures. When both steps are done, the total process is called an FMECA. Ground rules The ground rules of each FMEA include a set of project selected procedures; the assumptions on which the analysis is based; the hardware that has been included and excluded from the analysis and the rationale for the exclusions. The ground rules also describe the indenture level of the analysis (i.e. the level in the hierarchy of the part to the sub-system, sub-system to the system, etc.), the basic hardware status, and the criteria for system and mission success. Every effort should be made to define all ground rules before the FMEA begins; however, the ground rules may be expanded and clarified as the analysis proceeds. 
A typical set of ground rules (assumptions) follows: Only one failure mode exists at a time. All inputs (including software commands) to the item being analyzed are present and at nominal values. All consumables are present in sufficient quantities. Nominal power is available Benefits Major benefits derived from a properly implemented FMECA effort are as follows: It provides a documented method for selecting a design with a high probability of successful operation and safety. A documented uniform method of assessing potential failure mechanisms, failure modes and their impact on system operation, resulting in a list of failure modes ranked according to the seriousness of their system impact and likelihood of occurrence. Early identification of single failure points (SFPS) and system interface problems, which may be critical to mission success and/or safety. They also provide a method of verifying that switching between redundant elements is not jeopardized by postulated single failures. An effective method for evaluating the effect of proposed changes to the design and/or operational procedures on mission success and safety. A basis for in-flight troubleshooting procedures and for locating performance monitoring and fault-detection devices. Criteria for early planning of tests. From the above list, early identifications of SFPS, input to the troubleshooting procedure and locating of performance monitoring / fault detection devices are probably the most important benefits of the FMECA. In addition, the FMECA procedures are straightforward and allow orderly evaluation of the design. History Procedures for conducting FMECA were described in 1949 in US Armed Forces Military Procedures document MIL-P-1629, revised in 1980 as MIL-STD-1629A. By the early 1960s, contractors for the U.S. National Aeronautics and Space Administration (NASA) were using variations of FMECA or FMEA under a variety of names. NASA programs using FMEA variants included Apollo, Viking, Voyager, Magellan, Galileo, and Skylab. The civil aviation industry was an early adopter of FMEA, with the Society for Automotive Engineers (SAE, an organization covering aviation and other transportation beyond just automotive, despite its name) publishing ARP926 in 1967. After two revisions, Aerospace Recommended Practice ARP926 has been replaced by ARP4761, which is now broadly used in civil aviation. During the 1970s, use of FMEA and related techniques spread to other industries. In 1971 NASA prepared a report for the U.S. Geological Survey recommending the use of FMEA in assessment of offshore petroleum exploration. A 1973 U.S. Environmental Protection Agency report described the application of FMEA to wastewater treatment plants. FMEA as application for HACCP on the Apollo Space Program moved into the food industry in general. The automotive industry began to use FMEA by the mid 1970s. The Ford Motor Company introduced FMEA to the automotive industry for safety and regulatory consideration after the Pinto affair. Ford applied the same approach to processes (PFMEA) to consider potential process induced failures prior to launching production. In 1993 the Automotive Industry Action Group (AIAG) first published an FMEA standard for the automotive industry. It is now in its fourth edition. The SAE first published related standard J1739 in 1994. This standard is also now in its fourth edition. In 2019 both method descriptions were replaced by the new AIAG / VDA FMEA handbook. 
It is a harmonization of the former FMEA standards of AIAG, VDA, SAE and other method descriptions. As of 2024, the AIAG / VDA FMEA Handbook is accepted by GM, Ford, Stellantis, Honda NA, BMW, Volkswagen Group, Mercedes-Benz Group AG (formerly Daimler AG), and Daimler Truck. Although initially developed by the military, FMEA methodology is now extensively used in a variety of industries including semiconductor processing, food service, plastics, software, and healthcare. Toyota has taken this one step further with its design review based on failure mode (DRBFM) approach. The method is now supported by the American Society for Quality which provides detailed guides on applying the method. The standard failure modes and effects analysis (FMEA) and failure modes, effects and criticality analysis (FMECA) procedures identify the product failure mechanisms, but may not model them without specialized software. This limits their applicability to provide a meaningful input to critical procedures such as virtual qualification, root cause analysis, accelerated test programs, and to remaining life assessment. To overcome the shortcomings of FMEA and FMECA a failure modes, mechanisms and effect analysis (FMMEA) has often been used. Following the release of IATF 16949:2016, an international quality standard that requires companies to have an organization-specific documented FMEA process, many original equipment manufacturers (OEMs) like Ford are updating their Customer Specific Requirements (CSR) to include the usage of specific FMEA software. For Ford specifically, these requirements had multiple-stage compliance deadlines of July and December of 2022. Basic terms The following covers some basic FMEA terminology. Action priority (AP) The AP replaces the former risk matrix and RPN in the AIAG / VDA FMEA handbook 2019. It makes a statement about the need for additional improvement measures. Failure The loss of a function under stated conditions. Failure mode The specific manner or way by which a failure occurs in terms of failure of the part, component, function, equipment, subsystem, or system under investigation. Depending on the type of FMEA performed, failure mode may be described at various levels of detail. A piece part FMEA will focus on detailed part or component failure modes (such as fully fractured axle or deformed axle, or electrical contact stuck open, stuck short, or intermittent). A functional FMEA will focus on functional failure modes. These may be general (such as no function, over function, under function, intermittent function, or unintended function) or more detailed and specific to the equipment being analyzed. A PFMEA will focus on process failure modes (such as inserting the wrong drill bit). Failure cause and/or mechanism Defects in requirements, design, process, quality control, handling or part application, which are the underlying cause or sequence of causes that initiate a process (mechanism) that leads to a failure mode over a certain time. A failure mode may have more causes. For example; "fatigue or corrosion of a structural beam" or "fretting corrosion in an electrical contact" is a failure mechanism and in itself (likely) not a failure mode. The related failure mode (end state) is a "full fracture of structural beam" or "an open electrical contact". The initial cause might have been "Improper application of corrosion protection layer (paint)" and /or "(abnormal) vibration input from another (possibly failed) system". 
Failure effect Immediate consequences of a failure on operation, or more generally on the needs for the customer / user that should be fulfilled by the function but now is not, or not fully, fulfilled. Indenture levels (bill of material or functional breakdown) An identifier for system level and thereby item complexity. Complexity increases as levels are closer to one. Local effect The failure effect as it applies to the item under analysis. Next higher level effect The failure effect as it applies at the next higher indenture level. End effect The failure effect at the highest indenture level or total system. Detection The means of detection of the failure mode by maintainer, operator or built in detection system, including estimated dormancy period (if applicable). Probability The likelihood of the failure occurring. Risk priority number (RPN) Severity (of the event) × probability (of the event occurring) × detection (probability that the event would not be detected before the user was aware of it). Severity The consequences of a failure mode. Severity considers the worst potential consequence of a failure, determined by the degree of injury, property damage, system damage and/or time lost to repair the failure. Remarks / mitigation / actions Additional info, including the proposed mitigation or actions used to lower a risk or justify a risk level or scenario. Example of FMEA worksheet Probability (P) It is necessary to look at the cause of a failure mode and the likelihood of occurrence. This can be done by analysis, calculations / FEM, looking at similar items or processes and the failure modes that have been documented for them in the past. A failure cause is looked upon as a design weakness. All the potential causes for a failure mode should be identified and documented. This should be in technical terms. Examples of causes are: Human errors in handling, Manufacturing induced faults, Fatigue, Creep, Abrasive wear, erroneous algorithms, excessive voltage or improper operating conditions or use (depending on the used ground rules). A failure mode may be given a Probability Ranking with a defined number of levels. This field is also often referred to as an Occurrence Rating. For a piece part FMEA, quantitative probability may be calculated from the results of a reliability prediction analysis and the failure mode ratios from a failure mode distribution catalog, such as RAC FMD-97. This method allows a quantitative FTA to use the FMEA results to verify that undesired events meet acceptable levels of risk. Severity (S) Determine the Severity for the worst-case scenario adverse end effect (state). It is convenient to write these effects down in terms of what the user might see or experience in terms of functional failures. Examples of these end effects are: full loss of function x, degraded performance, functions in reversed mode, too late functioning, erratic functioning, etc. Each end effect is given a Severity number (S) from, say, I (no effect) to V (catastrophic), based on cost and/or loss of life or quality of life. These numbers prioritize the failure modes (together with probability and detectability). Below a typical classification is given. Other classifications are possible. See also hazard analysis. Detection (D) The means or method by which a failure is detected, isolated by operator and/or maintainer and the time it may take. This is important for maintainability control (availability of the system) and it is especially important for multiple failure scenarios. 
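As a concrete illustration of the RPN arithmetic defined above, here is a Python sketch that scores a hypothetical worksheet. The failure modes, the 1–10 rating scales and the numbers themselves are invented for illustration; a real FMEA uses the rating tables agreed in its ground rules (and the 2019 AIAG/VDA handbook replaces RPN with the action priority).

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        item: str
        mode: str
        severity: int    # 1 (no effect) .. 10 (catastrophic)
        occurrence: int  # 1 (remote)    .. 10 (very frequent)
        detection: int   # 1 (certain detection) .. 10 (undetectable)

        @property
        def rpn(self) -> int:
            """Risk priority number = severity x occurrence x detection."""
            return self.severity * self.occurrence * self.detection

    worksheet = [
        FailureMode("axle", "fatigue fracture", severity=9, occurrence=3, detection=4),
        FailureMode("connector", "intermittent contact", severity=5, occurrence=6, detection=7),
        FailureMode("seal", "slow leak", severity=4, occurrence=5, detection=3),
    ]

    # Rank failure modes so the highest-RPN items are addressed first.
    for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
        print(f"{fm.item:10s} {fm.mode:22s} RPN={fm.rpn}")

Note how the less severe connector failure (RPN 210) outranks the severe axle fracture (RPN 108) in this made-up example; this is exactly the kind of rank reversal discussed under Limitations below.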
This may involve dormant failure modes (e.g. No direct system effect, while a redundant system / item automatically takes over or when the failure only is problematic during specific mission or system states) or latent failures (e.g. deterioration failure mechanisms, like metal growing a crack, but not of critical length). It should be made clear how the failure mode or cause can be discovered by an operator under normal system operation or if it can be discovered by the maintenance crew by some diagnostic action or automatic built in system test. A dormancy and/or latency period may be entered. Dormancy or latency period The average time that a failure mode may be undetected may be entered if known. For example: Seconds, auto detected by maintenance computer 8 hours, detected by turn-around inspection 2 months, detected by scheduled maintenance block X 2 years, detected by overhaul task x Indication If the undetected failure allows the system to remain in a safe / working state, a second failure situation should be explored to determine whether or not an indication will be evident to all operators and what corrective action they may or should take. Indications to the operator should be described as follows: Normal. An indication that is evident to an operator when the system or equipment is operating normally. Abnormal. An indication that is evident to an operator when the system has malfunctioned or failed. Incorrect. An erroneous indication to an operator due to the malfunction or failure of an indicator (i.e., instruments, sensing devices, visual or audible warning devices, etc.). PERFORM DETECTION COVERAGE ANALYSIS FOR TEST PROCESSES AND MONITORING (From ARP4761 Standard): This type of analysis is useful to determine how effective various test processes are at the detection of latent and dormant faults. The method used to accomplish this involves an examination of the applicable failure modes to determine whether or not their effects are detected, and to determine the percentage of failure rate applicable to the failure modes which are detected. The possibility that the detection means may itself fail latently should be accounted for in the coverage analysis as a limiting factor (i.e., coverage cannot be more reliable than the detection means availability). Inclusion of the detection coverage in the FMEA can lead to each individual failure that would have been one effect category now being a separate effect category due to the detection coverage possibilities. Another way to include detection coverage is for the FTA to conservatively assume that no holes in coverage due to latent failure in the detection method affect detection of all failures assigned to the failure effect category of concern. The FMEA can be revised if necessary for those cases where this conservative assumption does not allow the top event probability requirements to be met. After these three basic steps the Risk level may be provided. Risk level (P×S) and (D) Risk is the combination of end effect probability and severity where probability and severity includes the effect on non-detectability (dormancy time). This may influence the end effect probability of failure or the worst case effect Severity. The exact calculation may not be easy in all cases, such as those where multiple scenarios (with multiple events) are possible and detectability / dormancy plays a crucial role (as for redundant systems). In that case fault tree analysis and/or event trees may be needed to determine exact probability and risk levels. 
Preliminary risk levels can be selected based on a risk matrix like shown below, based on Mil. Std. 882. The higher the risk level, the more justification and mitigation is needed to provide evidence and lower the risk to an acceptable level. High risk should be indicated to higher level management, who are responsible for final decision-making. After this step the FMEA has become like a FMECA. Timing FMEA should be used: When a product or process is being designed (or redesigned) When an existing product or process is applied in a novel way Before developing control plans or procedures for a new or redesigned process When trying to improve an existing product, process, or service When analyzing failures for an existing product, process, or service Periodically and regularly throughout the lifetime of the product, process, or service The FMEA should be updated whenever: A new cycle begins (new product/process) Changes are made to the operating conditions A change is made in the design New regulations are instituted Customer feedback indicates a problem Uses Development of system requirements that minimize the likelihood of failures. Development of designs and test systems to ensure that the failures have been eliminated or the risk is reduced to acceptable level. Development and evaluation of diagnostic systems. To help with design choices (trade-off analysis). Advantages Catalyst for teamwork and idea exchange between functions Collect information to reduce future failures, capture engineering knowledge Early identification and elimination of potential failure modes Emphasize problem prevention Fulfill legal requirements (product liability) Improve company image and competitiveness Improve production yield Improve the quality, reliability, and safety of a product/process Increase user satisfaction Maximize profit Minimize late changes and associated cost Reduce impact on company profit margin Reduce system development time and cost Reduce the possibility of same kind of failure in future Reduce the potential for warranty concerns Limitations While FMEA identifies important hazards in a system, its results may not be comprehensive and the approach has limitations. In the healthcare context, FMEA and other risk assessment methods, including SWIFT (Structured What If Technique) and retrospective approaches, have been found to have limited validity when used in isolation. Challenges around scoping and organisational boundaries appear to be a major factor in this lack of validity. If used as a top-down tool, FMEA may only identify major failure modes in a system. Fault tree analysis (FTA) is better suited for "top-down" analysis. When used as a bottom-up tool FMEA can augment or complement FTA and identify many more causes and failure modes resulting in top-level symptoms. It is not able to discover complex failure modes involving multiple failures within a subsystem, or to report expected failure intervals of particular failure modes up to the upper level subsystem or system. Additionally, the multiplication of the severity, occurrence and detection rankings may result in rank reversals, where a less serious failure mode receives a higher RPN than a more serious failure mode. The reason for this is that the rankings are ordinal scale numbers, and multiplication is not defined for ordinal numbers. The ordinal rankings only say that one ranking is better or worse than another, but not by how much. 
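The preliminary risk-level lookup can also be sketched in a few lines of Python. The matrix below is only a generic example in the spirit of MIL-STD-882 (severity categories I–IV crossed with probability levels A–E); the individual cell assignments are illustrative assumptions, not the standard's actual table, and each program defines its own mapping.

    # Severity categories (rows) and probability levels (columns), MIL-STD-882 style.
    SEVERITIES = ["I", "II", "III", "IV"]       # I = catastrophic ... IV = negligible
    PROBABILITIES = ["A", "B", "C", "D", "E"]   # A = frequent ... E = improbable

    # Illustrative matrix only -- real programs define their own cell-by-cell mapping.
    RISK_MATRIX = {
        ("I", "A"): "high", ("I", "B"): "high", ("I", "C"): "high", ("I", "D"): "serious", ("I", "E"): "medium",
        ("II", "A"): "high", ("II", "B"): "high", ("II", "C"): "serious", ("II", "D"): "medium", ("II", "E"): "low",
        ("III", "A"): "serious", ("III", "B"): "medium", ("III", "C"): "medium", ("III", "D"): "low", ("III", "E"): "low",
        ("IV", "A"): "medium", ("IV", "B"): "low", ("IV", "C"): "low", ("IV", "D"): "low", ("IV", "E"): "low",
    }

    def risk_level(severity: str, probability: str) -> str:
        """Look up the preliminary risk level for a severity/probability pair."""
        return RISK_MATRIX[(severity, probability)]

    for s in SEVERITIES:                              # print the whole matrix row by row
        print(s, [risk_level(s, p) for p in PROBABILITIES])
    print(risk_level("I", "D"))   # serious -> needs justification, mitigation and sign-off
    print(risk_level("IV", "E"))  # low     -> generally acceptable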
For instance, a ranking of "2" may not be twice as severe as a ranking of "1", or an "8" may not be twice as severe as a "4", but multiplication treats them as though they are. See Level of measurement for further discussion. Various solutions to these problems have been proposed, e.g., the use of fuzzy logic as an alternative to the classic RPN model. In the new AIAG / VDA FMEA handbook (2019) the RPN approach was replaced by the AP (action priority). The FMEA worksheet is hard to produce, hard to understand and read, as well as hard to maintain. The use of neural network techniques to cluster and visualise failure modes has been suggested since around 2010. An alternative approach is to combine the traditional FMEA table with a set of bow-tie diagrams. The diagrams provide a visualisation of the chains of cause and effect, while the FMEA table provides the detailed information about specific events. Types Functional: before design solutions are provided (or are available only at a high level), functions can be evaluated for potential functional failure effects. General mitigations ("design to" requirements) can be proposed to limit the consequences of functional failures or limit the probability of occurrence in this early development. It is based on a functional breakdown of a system. This type may also be used for software evaluation. Concept design / hardware: analysis of systems or subsystems in the early design concept stages to analyse the failure mechanisms and lower-level functional failures, especially of different concept solutions in more detail. It may be used in trade-off studies. Detailed design / hardware: analysis of products prior to production. These are the most detailed FMEAs (in MIL-STD-1629 called piece-part or hardware FMEA) and are used to identify any possible hardware (or other) failure mode down to the lowest part level. It should be based on a hardware breakdown (e.g. the BoM = bill of materials). Any failure effect severity, failure prevention (mitigation), failure detection and diagnostics may be fully analyzed in this FMEA. Process: analysis of manufacturing and assembly processes. Both quality and reliability may be affected by process faults. The input for this FMEA is, among other things, a work process / task breakdown. See also References Japanese business terms Lean manufacturing Reliability engineering Systems analysis Reliability analysis Quality control tools
Failure mode and effects analysis
[ "Engineering" ]
5,256
[ "Systems engineering", "Reliability analysis", "Lean manufacturing", "Reliability engineering" ]
982,386
https://en.wikipedia.org/wiki/Killing%20form
In mathematics, the Killing form, named after Wilhelm Killing, is a symmetric bilinear form that plays a basic role in the theories of Lie groups and Lie algebras. Cartan's criteria (criterion of solvability and criterion of semisimplicity) show that Killing form has a close relationship to the semisimplicity of the Lie algebras. History and name The Killing form was essentially introduced into Lie algebra theory by in his thesis. In a historical survey of Lie theory, has described how the term "Killing form" first occurred in 1951 during one of his own reports for the Séminaire Bourbaki; it arose as a misnomer, since the form had previously been used by Lie theorists, without a name attached. Some other authors now employ the term "Cartan-Killing form". At the end of the 19th century, Killing had noted that the coefficients of the characteristic equation of a regular semisimple element of a Lie algebra are invariant under the adjoint group, from which it follows that the Killing form (i.e. the degree 2 coefficient) is invariant, but he did not make much use of the fact. A basic result that Cartan made use of was Cartan's criterion, which states that the Killing form is non-degenerate if and only if the Lie algebra is a direct sum of simple Lie algebras. Definition Consider a Lie algebra over a field . Every element of defines the adjoint endomorphism (also written as ) of with the help of the Lie bracket, as Now, supposing is of finite dimension, the trace of the composition of two such endomorphisms defines a symmetric bilinear form with values in , the Killing form on . Properties The following properties follow as theorems from the above definition. The Killing form is bilinear and symmetric. The Killing form is an invariant form, as are all other forms obtained from Casimir operators. The derivation of Casimir operators vanishes; for the Killing form, this vanishing can be written as where [ , ] is the Lie bracket. If is a simple Lie algebra then any invariant symmetric bilinear form on is a scalar multiple of the Killing form. The Killing form is also invariant under automorphisms of the algebra , that is, for in . The Cartan criterion states that a Lie algebra is semisimple if and only if the Killing form is non-degenerate. The Killing form of a nilpotent Lie algebra is identically zero. If are two ideals in a Lie algebra with zero intersection, then and are orthogonal subspaces with respect to the Killing form. The orthogonal complement with respect to of an ideal is again an ideal. If a given Lie algebra is a direct sum of its ideals , then the Killing form of is the direct sum of the Killing forms of the individual summands. Matrix elements Given a basis of the Lie algebra , the matrix elements of the Killing form are given by Here in Einstein summation notation, where the are the structure coefficients of the Lie algebra. The index functions as column index and the index as row index in the matrix . Taking the trace amounts to putting and summing, and so we can write The Killing form is the simplest 2-tensor that can be formed from the structure constants. The form itself is then In the above indexed definition, we are careful to distinguish upper and lower indices (co- and contra-variant indices). This is because, in many cases, the Killing form can be used as a metric tensor on a manifold, in which case the distinction becomes an important one for the transformation properties of tensors. 
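The extracted text above dropped the mathematical symbols; as a hedged reconstruction in standard notation (not a quotation of the original article), the definition and matrix-element formula being referred to are:

```latex
% Standard notation: g a finite-dimensional Lie algebra over a field K.
\[
  \operatorname{ad}(x)(y) = [x, y], \qquad
  B(x, y) = \operatorname{tr}\bigl(\operatorname{ad}(x)\circ\operatorname{ad}(y)\bigr)
\]
% With a basis e_1, ..., e_n and structure constants [e_a, e_b] = c_{ab}{}^{d} e_d,
% the matrix elements of the Killing form are
\[
  B_{ab} = c_{ad}{}^{c}\, c_{bc}{}^{d}
\]
```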
When the Lie algebra is semisimple over a zero-characteristic field, its Killing form is nondegenerate, and hence can be used as a metric tensor to raise and lower indexes. In this case, it is always possible to choose a basis for such that the structure constants with all upper indices are completely antisymmetric. The Killing form for some Lie algebras are (for in viewed in their fundamental matrix representation): The table shows that the Dynkin index for the adjoint representation is equal to twice the dual Coxeter number. Connection with real forms Suppose that is a semisimple Lie algebra over the field of real numbers . By Cartan's criterion, the Killing form is nondegenerate, and can be diagonalized in a suitable basis with the diagonal entries . By Sylvester's law of inertia, the number of positive entries is an invariant of the bilinear form, i.e. it does not depend on the choice of the diagonalizing basis, and is called the index of the Lie algebra . This is a number between and the dimension of which is an important invariant of the real Lie algebra. In particular, a real Lie algebra is called compact if the Killing form is negative definite (or negative semidefinite if the Lie algebra is not semisimple). Note that this is one of two inequivalent definitions commonly used for compactness of a Lie algebra; the other states that a Lie algebra is compact if it corresponds to a compact Lie group. The definition of compactness in terms of negative definiteness of the Killing form is more restrictive, since using this definition it can be shown that under the Lie correspondence, compact Lie algebras correspond to compact semisimple Lie groups. If is a semisimple Lie algebra over the complex numbers, then there are several non-isomorphic real Lie algebras whose complexification is , which are called its real forms. It turns out that every complex semisimple Lie algebra admits a unique (up to isomorphism) compact real form . The real forms of a given complex semisimple Lie algebra are frequently labeled by the positive index of inertia of their Killing form. For example, the complex special linear algebra has two real forms, the real special linear algebra, denoted , and the special unitary algebra, denoted . The first one is noncompact, the so-called split real form, and its Killing form has signature . The second one is the compact real form and its Killing form is negative definite, i.e. has signature . The corresponding Lie groups are the noncompact group of real matrices with the unit determinant and the special unitary group , which is compact. Trace forms Let be a finite-dimensional Lie algebra over the field , and be a Lie algebra representation. Let be the trace functional on . Then we can define the trace form for the representation as Then the Killing form is the special case that the representation is the adjoint representation, . It is easy to show that this is symmetric, bilinear and invariant for any representation . If furthermore is simple and is irreducible, then it can be shown where is the index of the representation. See also Casimir invariant Killing vector field Citations References Lie groups Lie algebras
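As an illustrative sketch only (the choice of basis, the Levi-Civita normalisation, and the use of NumPy are assumptions, not taken from the text), the Killing form of su(2) can be computed numerically from its structure constants; the negative-definite result matches the compactness criterion described above:

```python
import numpy as np

# Structure constants of su(2): [e_a, e_b] = sum_c f[a, b, c] e_c,
# with f taken to be the Levi-Civita symbol (a standard choice of basis).
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c] = 1.0
    f[b, a, c] = -1.0

# Adjoint matrices: (ad e_a) e_b = f[a, b, c] e_c,
# so as a matrix (ad e_a)[c, b] = f[a, b, c], i.e. the transpose of f[a].
ad = np.array([f[a].T for a in range(3)])

# Killing form B[a, b] = tr(ad e_a . ad e_b)
B = np.einsum('aij,bji->ab', ad, ad)
print(B)  # expected: -2 * identity -> negative definite, consistent with su(2) being compact
```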
Killing form
[ "Mathematics" ]
1,415
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
982,970
https://en.wikipedia.org/wiki/Mertens%27%20theorems
In analytic number theory, Mertens' theorems are three 1874 results related to the density of prime numbers proved by Franz Mertens. In the following, let mean all primes not exceeding n. First theorem Mertens' first theorem is that does not exceed 2 in absolute value for any . () Second theorem Mertens' second theorem is where M is the Meissel–Mertens constant (). More precisely, Mertens proves that the expression under the limit does not in absolute value exceed for any . Proof The main step in the proof of Mertens' second theorem is where the last equality needs which follows from . Thus, we have proved that . Since the sum over prime powers with converges, this implies . A partial summation yields . Changes in sign In a paper on the growth rate of the sum-of-divisors function published in 1983, Guy Robin proved that in Mertens' 2nd theorem the difference changes sign infinitely often, and that in Mertens' 3rd theorem the difference changes sign infinitely often. Robin's results are analogous to Littlewood's famous theorem that the difference π(x) − li(x) changes sign infinitely often. No analog of the Skewes number (an upper bound on the first natural number x for which π(x) > li(x)) is known in the case of Mertens' 2nd and 3rd theorems. Relation to the prime number theorem Regarding this asymptotic formula Mertens refers in his paper to "two curious formula of Legendre", the first one being Mertens' second theorem's prototype (and the second one being Mertens' third theorem's prototype: see the very first lines of the paper). He recalls that it is contained in Legendre's third edition of his "Théorie des nombres" (1830; it is in fact already mentioned in the second edition, 1808), and also that a more elaborate version was proved by Chebyshev in 1851. Note that, already in 1737, Euler knew the asymptotic behaviour of this sum. Mertens diplomatically describes his proof as more precise and rigorous. In reality none of the previous proofs are acceptable by modern standards: Euler's computations involve the infinity (and the hyperbolic logarithm of infinity, and the logarithm of the logarithm of infinity!); Legendre's argument is heuristic; and Chebyshev's proof, although perfectly sound, makes use of the Legendre-Gauss conjecture, which was not proved until 1896 and became better known as the prime number theorem. Mertens' proof does not appeal to any unproved hypothesis (in 1874), and only to elementary real analysis. It comes 22 years before the first proof of the prime number theorem which, by contrast, relies on a careful analysis of the behavior of the Riemann zeta function as a function of a complex variable. Mertens' proof is in that respect remarkable. Indeed, with modern notation it yields whereas the prime number theorem (in its simplest form, without error estimate), can be shown to imply In 1909 Edmund Landau, by using the best version of the prime number theorem then at his disposition, proved that holds; in particular the error term is smaller than for any fixed integer k. A simple summation by parts exploiting the strongest form known of the prime number theorem improves this to for some . Similarly a partial summation shows that is implied by the PNT. Third theorem Mertens' third theorem is where γ is the Euler–Mascheroni constant (). 
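The formulas were lost in extraction; in standard notation the three theorems stated above are presumably the following (a reconstruction, with the constants M and γ as named in the text):

```latex
\[
  \Bigl|\sum_{p\le n}\frac{\ln p}{p} - \ln n\Bigr| \le 2
  \qquad\text{(first theorem)}
\]
\[
  \lim_{n\to\infty}\Bigl(\sum_{p\le n}\frac{1}{p} - \ln\ln n - M\Bigr) = 0,
  \qquad M \approx 0.2614972128 \quad\text{(second theorem)}
\]
\[
  \lim_{n\to\infty}\,\ln n \prod_{p\le n}\Bigl(1-\frac{1}{p}\Bigr) = e^{-\gamma},
  \qquad \gamma \approx 0.5772156649 \quad\text{(third theorem)}
\]
```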
Relation to sieve theory An estimate of the probability of an integer X (with X much larger than n) having no factor p ≤ n is given by the product over all primes p ≤ n of (1 − 1/p). This is closely related to Mertens' third theorem, which gives an asymptotic approximation of this product, namely e^(−γ)/ln n. References Further reading Yaglom and Yaglom Challenging mathematical problems with elementary solutions Vol 2, problems 171, 173, 174 External links Mathematical series Summability theory Theorems about prime numbers
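A quick numerical check of the second and third theorems can be sketched as follows (a minimal illustration; the cutoff N, the use of SymPy for prime generation, and the truncated constant values are assumptions, not part of the article):

```python
import math
from sympy import primerange

N = 100_000
primes = list(primerange(2, N + 1))

# Second theorem: sum of 1/p over p <= N is close to ln ln N + M
M = 0.2614972128          # Meissel-Mertens constant (truncated)
print(sum(1.0 / p for p in primes), math.log(math.log(N)) + M)

# Third theorem: product of (1 - 1/p) over p <= N is close to e^(-gamma) / ln N
gamma = 0.5772156649      # Euler-Mascheroni constant (truncated)
prod = 1.0
for p in primes:
    prod *= 1.0 - 1.0 / p
print(prod, math.exp(-gamma) / math.log(N))
```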
Mertens' theorems
[ "Mathematics" ]
819
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus", "Theorems about prime numbers", "Theorems in number theory" ]
983,423
https://en.wikipedia.org/wiki/47%20Tucanae
47 Tucanae or 47 Tuc (also designated as NGC 104 and Caldwell 106) is a globular cluster located in the constellation Tucana. It is about from Earth, and 120 light years in diameter. 47 Tuc can be seen with the naked eye, with an apparent magnitude of 4.1. It appears about 44 arcminutes across including its far outreaches. Due to its far southern location, 18° from the south celestial pole, it was not catalogued by European astronomers until the 1750s, when the cluster was first identified by Nicolas-Louis de Lacaille from South Africa. 47 Tucanae is the second brightest globular cluster after Omega Centauri, and telescopically reveals about ten thousand stars, many appearing within a small dense central core. The cluster may contain an intermediate-mass black hole. Early history The cluster was recorded in 1751-2 by Nicolas-Louis de Lacaille, who initially thought it was the nucleus of a bright comet. Lacaille then listed it as "Lac I-1", the first object listed in his deep-sky catalogue. The number "47" was assigned in Allgemeine Beschreibung und Nachweisung der Gestirne nebst Verzeichniss ("General description and verification of the stars and indexes"), compiled by Johann Elert Bode and published in Berlin in 1801. Bode did not observe this cluster himself, but had reordered Lacaille's catalogued stars by constellation in order of right ascension. In the 19th century, Benjamin Apthorp Gould assigned the Greek letter ξ (Xi) to the cluster to designate it ξ Tucanae, but this was not widely adopted and it is almost universally referred to as 47 Tucanae. Characteristics 47 Tucanae is the second brightest globular cluster in the sky (after Omega Centauri), and is noted for having a small very bright and dense core. It is one of the most massive globular clusters in the Galaxy, containing millions of stars. The cluster appears roughly the size of the full moon in the sky under ideal conditions. Though it appears adjacent to the Small Magellanic Cloud, the latter is some distant, being over fifteen times farther than 47 Tuc. A blue giant star with a spectral class of B8III is the brightest star in visible and ultraviolet light, with a luminosity of about 1,100 times that of the Sun, and is aptly known as the "Bright Star". It is a post-AGB star, having passed the asymptotic giant branch phase of its life, and is currently fusing helium. It has an effective temperature of about 10,850 K, and is about 54% the mass of the Sun. The core of 47 Tuc was the subject of a major survey for planets, using the Hubble Space Telescope to look for partial eclipses of stars by their planets. No planets were found, though ten to fifteen were expected based on the rate of planet discoveries around stars near the Sun. This indicates that planets are relatively rare in globular clusters. A later ground-based survey in the uncrowded outer regions of the cluster also failed to detect planets when several were expected. This strongly indicates that the low metallicity of the environment, rather than the crowding, is responsible. 47 Tucanae contains at least two stellar populations of stars, of different ages or metallicities. The dense core contains a number of exotic stars of scientific interest, including at least 21 blue stragglers. Globular clusters efficiently sort stars by mass, with the most massive stars falling to the center. 
47 Tucanae contains hundreds of X-ray sources, including stars with enhanced chromospheric activity due to their presence in binary star systems, cataclysmic variable stars containing white dwarfs accreting from companion stars and low-mass X-ray binaries containing neutron stars that are not currently accreting, but can be observed by the X-rays emitted from the hot surface of the neutron star. 47 Tucanae has 35 known millisecond pulsars, the second largest population of pulsars in any globular cluster, after Terzan 5. These pulsars are thought to be spun up by the accretion of material from binary companion stars, in a previous X-ray binary phase. The companion of one pulsar in 47 Tucanae, 47 Tuc W, seems to still be transferring mass towards its companion, indicating that this system is completing a transition from being an accreting low-mass X-ray binary to a millisecond pulsar. X-ray emission has been individually detected from most millisecond pulsars in 47 Tucanae with the Chandra X-ray Observatory, likely emission from the neutron star surface, and gamma-ray emission has been detected with the Fermi Gamma-ray Space Telescope from its millisecond pulsar population (making 47 Tucanae the first globular cluster to be detected in gamma-rays). Possible central black hole It is not yet clear whether 47 Tucanae hosts a central black hole. Hubble Space Telescope data constrain the mass of any possible black hole at the cluster's center to be less than approximately 1,500 solar masses. However, in February, 2017, astronomers announced that a black hole of some 2,200 solar masses may be located in the cluster; the researchers detected the black hole's signature from the motions and distributions of pulsars in the cluster. Despite this, a recent analysis of an updated and more extensive timing data set on these pulsars provides no solid evidence in favor of the existence of a black hole. Modern discoveries In December 2008, Ragbir Bhathal of the University of Western Sydney claimed the detection of a strong laser-like signal from the direction of 47 Tucanae. In May 2015, the first evidence of the process of mass segregation in this globular cluster was announced. The cluster's Hertzsprung–Russell diagram suggests stars approximately 13 billion years old, which is unusually old. References External links 47 Tucanae at the ESA-Hubble website 47 Tucanae, Galactic Globular Clusters Database page 47 Tucanae at the Chandra X-ray Observatory website NGC 104 The Toucan's Diamond ESO Globular clusters Tucana Tucanae, Xi Xi Tucanae 106b 17510914 Articles containing video clips 0095 002051 Intermediate-mass black holes
47 Tucanae
[ "Physics", "Astronomy" ]
1,360
[ "Black holes", "Unsolved problems in physics", "Intermediate-mass black holes", "Constellations", "Tucana" ]
983,601
https://en.wikipedia.org/wiki/Comparative%20genomic%20hybridization
Comparative genomic hybridization (CGH) is a molecular cytogenetic method for analysing copy number variations (CNVs) relative to ploidy level in the DNA of a test sample compared to a reference sample, without the need for culturing cells. The aim of this technique is to quickly and efficiently compare two genomic DNA samples arising from two sources, which are most often closely related, because it is suspected that they contain differences in terms of either gains or losses of either whole chromosomes or subchromosomal regions (a portion of a whole chromosome). This technique was originally developed for the evaluation of the differences between the chromosomal complements of solid tumor and normal tissue, and has an improved resolution of 5–10 megabases compared to the more traditional cytogenetic analysis techniques of giemsa banding and fluorescence in situ hybridization (FISH) which are limited by the resolution of the microscope utilized. This is achieved through the use of competitive fluorescence in situ hybridization. In short, this involves the isolation of DNA from the two sources to be compared, most commonly a test and reference source, independent labelling of each DNA sample with fluorophores (fluorescent molecules) of different colours (usually red and green), denaturation of the DNA so that it is single stranded, and the hybridization of the two resultant samples in a 1:1 ratio to a normal metaphase spread of chromosomes, to which the labelled DNA samples will bind at their locus of origin. Using a fluorescence microscope and computer software, the differentially coloured fluorescent signals are then compared along the length of each chromosome for identification of chromosomal differences between the two sources. A higher intensity of the test sample colour in a specific region of a chromosome indicates the gain of material of that region in the corresponding source sample, while a higher intensity of the reference sample colour indicates the loss of material in the test sample in that specific region. A neutral colour (yellow when the fluorophore labels are red and green) indicates no difference between the two samples in that location. CGH is only able to detect unbalanced chromosomal abnormalities. This is because balanced chromosomal abnormalities such as reciprocal translocations, inversions or ring chromosomes do not affect copy number, which is what is detected by CGH technologies. CGH does, however, allow for the exploration of all 46 human chromosomes in single test and the discovery of deletions and duplications, even on the microscopic scale which may lead to the identification of candidate genes to be further explored by other cytological techniques. Through the use of DNA microarrays in conjunction with CGH techniques, the more specific form of array CGH (aCGH) has been developed, allowing for a locus-by-locus measure of CNV with increased resolution as low as 100 kilobases. This improved technique allows for the aetiology of known and unknown conditions to be discovered. History The motivation underlying the development of CGH stemmed from the fact that the available forms of cytogenetic analysis at the time (giemsa banding and FISH) were limited in their potential resolution by the microscopes necessary for interpretation of the results they provided. Furthermore, giemsa banding interpretation has the potential to be ambiguous and therefore has lowered reliability, and both techniques require high labour inputs which limits the loci which may be examined. 
The first report of CGH analysis was by Kallioniemi and colleagues in 1992 at the University of California, San Francisco, who utilised CGH in the analysis of solid tumors. They achieved this by the direct application of the technique to both breast cancer cell lines and primary bladder tumors in order to establish complete copy number karyotypes for the cells. They were able to identify 16 different regions of amplification, many of which were novel discoveries. Soon after in 1993, du Manoir et al. reported virtually the same methodology. The authors painted a series of individual human chromosomes from a DNA library with two different fluorophores in different proportions to test the technique, and also applied CGH to genomic DNA from patients affected with either Downs syndrome or T-cell prolymphocytic leukemia as well as cells of a renal papillary carcinoma cell line. It was concluded that the fluorescence ratios obtained were accurate and that differences between genomic DNA from different cell types were detectable, and therefore that CGH was a highly useful cytogenetic analysis tool. Initially, the widespread use of CGH technology was difficult, as protocols were not uniform and therefore inconsistencies arose, especially due to uncertainties in the interpretation of data. However, in 1994 a review was published which described an easily understood protocol in detail and the image analysis software was made available commercially, which allowed CGH to be utilised all around the world. As new techniques such as microdissection and degenerate oligonucleotide primed polymerase chain reaction (DOP-PCR) became available for the generation of DNA products, it was possible to apply the concept of CGH to smaller chromosomal abnormalities, and thus the resolution of CGH was improved. The implementation of array CGH, whereby DNA microarrays are used instead of the traditional metaphase chromosome preparation, was pioneered by Solinas-Tolodo et al. in 1997 using tumor cells and Pinkel et al. in 1998 by use of breast cancer cells. This was made possible by the Human Genome Project which generated a library of cloned DNA fragments with known locations throughout the human genome, with these fragments being used as probes on the DNA microarray. Now probes of various origins such as cDNA, genomic PCR products and bacterial artificial chromosomes (BACs) can be used on DNA microarrays which may contain up to 2 million probes. Array CGH is automated, allows greater resolution (down to 100 kb) than traditional CGH as the probes are far smaller than metaphase preparations, requires smaller amounts of DNA, can be targeted to specific chromosomal regions if required and is ordered and therefore faster to analyse, making it far more adaptable to diagnostic uses. Basic methods Metaphase slide preparation The DNA on the slide is a reference sample, and is thus obtained from a karyotypically normal man or woman, though it is preferential to use female DNA as they possess two X chromosomes which contain far more genetic information than the male Y chromosome. Phytohaemagglutinin stimulated peripheral blood lymphocytes are used. 1mL of heparinised blood is added to 10ml of culture medium and incubated for 72 hours at 37 °C in an atmosphere of 5% CO2. Colchicine is added to arrest the cells in mitosis, the cells are then harvested and treated with hypotonic potassium chloride and fixed in 3:1 methanol/acetic acid. 
One drop of the cell suspension should then be dropped onto an ethanol-cleaned slide from a distance of about 30 cm; optimally this should be carried out at room temperature at humidity levels of 60–70%. Slides should be evaluated by visualisation using a phase contrast microscope: minimal cytoplasm should be observed, chromosomes should not be overlapping and should be 400–550 bands long with no separated chromatids, and finally they should appear dark rather than shiny. Slides then need to be air dried overnight at room temperature, and any further storage should be in groups of four at −20 °C with either silica beads or nitrogen present to maintain dryness. Different donors should be tested as hybridization may be variable. Commercially available slides may be used, but should always be tested first. Isolation of DNA from test tissue and reference tissue Standard phenol extraction is used to obtain DNA from test or reference (karyotypically normal individual) tissue, which involves the combination of Tris-Ethylenediaminetetraacetic acid and phenol with aqueous DNA in equal amounts. This is followed by separation by agitation and centrifugation, after which the aqueous layer is removed and further treated using ether, and finally ethanol precipitation is used to concentrate the DNA. This may also be completed using commercially available DNA isolation kits which are based on affinity columns. Preferentially, DNA should be extracted from fresh or frozen tissue as this will be of the highest quality, though it is now possible to use archival material which is formalin fixed or paraffin wax embedded, provided the appropriate procedures are followed. 0.5–1 μg of DNA is sufficient for the CGH experiment; if the desired amount is not obtained, DOP-PCR may be applied to amplify the DNA, however in this case it is important to apply DOP-PCR to both the test and reference DNA samples to improve reliability. DNA labelling Nick translation is used to label the DNA and involves cutting the DNA and substituting nucleotides labelled with fluorophores (direct labelling) or with biotin or digoxigenin, to which fluorophore-conjugated antibodies are added later (indirect labelling). It is then important to check the fragment lengths of both test and reference DNA by gel electrophoresis, as they should be within the range of 500–1500 bp for optimum hybridization. Blocking Unlabelled Life Technologies Corporation's Cot-1 DNA (placental DNA enriched with repetitive sequences of length 50 bp–100 bp) is added to block normal repetitive DNA sequences, particularly at centromeres and telomeres, as these sequences, if detected, may reduce the fluorescence ratio and cause gains or losses to escape detection. Hybridization 8–12 μl of each of the labelled test and labelled reference DNA are mixed and 40 μg of Cot-1 DNA is added, then precipitated and subsequently dissolved in 6 μl of hybridization mix, which contains 50% formamide to decrease the DNA melting temperature and 10% dextran sulphate to increase the effective probe concentration, in a saline sodium citrate (SSC) solution at a pH of 7.0. Denaturation of the slide and probes is carried out separately. The slide is submerged in 70% formamide/2xSSC for 5–10 minutes at 72 °C, while the probes are denatured by immersion in a water bath at 80 °C for 10 minutes and are immediately added to the metaphase slide preparation. This reaction is then covered with a coverslip and left for two to four days in a humid chamber at 40 °C.
The coverslip is then removed and 5 minute washes are applied, three using 2xSSC at room temperature, one at 45 °C with 0.1xSSC and one using TNT at room temperature. The reaction is then preincubated for 10 minutes then followed by a 60-minute, 37 °C incubation, three more 5 minute washes with TNT then one with 2xSSC at room temperature. The slide is then dried using an ethanol series of 70%/96%/100% before counterstaining with DAPI (0.35 μg/ml), for chromosome identification, and sealing with a coverslip. Fluorescence visualisation and imaging A fluorescence microscope with the appropriate filters for the DAPI stain as well as the two fluorophores utilised is required for visualisation, and these filters should also minimise the crosstalk between the fluorophores, such as narrow band pass filters. The microscope must provide uniform illumination without chromatic variation, be appropriately aligned and have a "plan" type of objective which is apochromatic and give a magnification of x63 or x100. The image should be recorded using a camera with spatial resolution at least 0.1 μm at the specimen level and give an image of at least 600x600 pixels. The camera must also be able to integrate the image for at least 5 to 10 seconds, with a minimum photometric resolution of 8 bit. Dedicated CGH software is commercially available for the image processing step, and is required to subtract background noise, remove and segment materials not of chromosomal origin, normalize the fluorescence ratio, carry out interactive karyotyping and chromosome scaling to standard length. A "relative copy number karyotype" which presents chromosomal areas of deletions or amplifications is generated by averaging the ratios of a number of high quality metaphases and plotting them along an ideogram, a diagram identifying chromosomes based on banding patterns. Interpretation of the ratio profiles is conducted either using fixed or statistical thresholds (confidence intervals). When using confidence intervals, gains or losses are identified when 95% of the fluorescence ratio does not contain 1.0. Extra notes Extreme care must be taken to avoid contamination of any step involving DNA, especially with the test DNA as contamination of the sample with normal DNA will skew results closer to 1.0, thus abnormalities may go undetected. FISH, PCR and flow cytometry experiments may be employed to confirm results. Array comparative genomic hybridization Array comparative genomic hybridization (also microarray-based comparative genomic hybridization, matrix CGH, array CGH, aCGH) is a molecular cytogenetic technique for the detection of chromosomal copy number changes on a genome wide and high-resolution scale. Array CGH compares the patient's genome against a reference genome and identifies differences between the two genomes, and hence locates regions of genomic imbalances in the patient, utilizing the same principles of competitive fluorescence in situ hybridization as traditional CGH. With the introduction of array CGH, the main limitation of conventional CGH, a low resolution, is overcome. In array CGH, the metaphase chromosomes are replaced by cloned DNA fragments (+100–200 kb) of which the exact chromosomal location is known. This allows the detection of aberrations in more detail and, moreover, makes it possible to map the changes directly onto the genomic sequence. 
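As a minimal sketch of the ratio-based interpretation described above (the probe names, intensity values, and the fixed log2-ratio threshold are hypothetical; real analyses normalise the data and may use confidence intervals rather than a fixed cut-off):

```python
import math

# Hypothetical per-probe fluorescence intensities (test vs. reference channel).
probes = {
    "probe_01": (1200.0, 1180.0),   # balanced region
    "probe_02": (2300.0, 1150.0),   # ~2:1 -> gain in the test sample
    "probe_03": (610.0, 1190.0),    # ~1:2 -> loss in the test sample
}

THRESHOLD = 0.3   # arbitrary fixed cut-off on |log2 ratio|, for illustration only

for name, (test, ref) in probes.items():
    log2_ratio = math.log2(test / ref)
    if log2_ratio > THRESHOLD:
        call = "gain"
    elif log2_ratio < -THRESHOLD:
        call = "loss"
    else:
        call = "no change"
    print(f"{name}: log2(test/ref) = {log2_ratio:+.2f} -> {call}")
```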
Array CGH has proven to be a specific, sensitive, fast and high-throughput technique, with considerable advantages compared to other methods used for the analysis of DNA copy number changes making it more amenable to diagnostic applications. Using this method, copy number changes at a level of 5–10 kilobases of DNA sequences can be detected. , even high-resolution CGH (HR-CGH) arrays are accurate to detect structural variations (SV) at resolution of 200 bp. This method allows one to identify new recurrent chromosome changes such as microdeletions and duplications in human conditions such as cancer and birth defects due to chromosome aberrations. Methodology Array CGH is based on the same principle as conventional CGH. In both techniques, DNA from a reference (or control) sample and DNA from a test (or patient) sample are differentially labelled with two different fluorophores and used as probes that are cohybridized competitively onto nucleic acid targets. In conventional CGH, the target is a reference metaphase spread. In array CGH, these targets can be genomic fragments cloned in a variety of vectors (such as BACs or plasmids), cDNAs, or oligonucleotides. Figure 2. is a schematic overview of the array CGH technique. DNA from the sample to be tested is labeled with a red fluorophore (Cyanine 5) and a reference DNA sample is labeled with green fluorophore (Cyanine 3). Equal quantities of the two DNA samples are mixed and cohybridized to a DNA microarray of several thousand evenly spaced cloned DNA fragments or oligonucleotides, which have been spotted in triplicate on the array. After hybridization, digital imaging systems are used to capture and quantify the relative fluorescence intensities of each of the hybridized fluorophores. The resulting ratio of the fluorescence intensities is proportional to the ratio of the copy numbers of DNA sequences in the test and reference genomes. If the intensities of the flurochromes are equal on one probe, this region of the patient's genome is interpreted as having equal quantity of DNA in the test and reference samples; if there is an altered Cy3:Cy5 ratio this indicates a loss or a gain of the patient DNA at that specific genomic region. Technological approaches to array CGH Array CGH has been implemented using a wide variety of techniques. Therefore, some of the advantages and limitations of array CGH are dependent on the technique chosen. The initial approaches used arrays produced from large insert genomic DNA clones, such as BACs. The use of BACs provides sufficient intense signals to detect single-copy changes and to locate aberration boundaries accurately. However, initial DNA yields of isolated BAC clones are low and DNA amplification techniques are necessary. These techniques include ligation-mediated polymerase chain reaction (PCR), degenerate primer PCR using one or several sets of primers, and rolling circle amplification. Arrays can also be constructed using cDNA. These arrays currently yield a high spatial resolution, but the number of cDNAs is limited by the genes that are encoded on the chromosomes, and their sensitivity is low due to cross-hybridization. This results in the inability to detect single copy changes on a genome wide scale. The latest approach is spotting the arrays with short oligonucleotides. The amount of oligos is almost infinite, and the processing is rapid, cost-effective, and easy. 
Although oligonucleotides do not have the sensitivity to detect single copy changes, averaging of ratios from oligos that map next to each other on the chromosome can compensate for the reduced sensitivity. It is also possible to use arrays which have overlapping probes so that specific breakpoints may be uncovered. Design approaches There are two approaches to the design of microarrays for CGH applications: whole genome and targeted. Whole genome arrays are designed to cover the entire human genome. They often include clones that provide an extensive coverage across the genome; and arrays that have contiguous coverage, within the limits of the genome. Whole-genome arrays have been constructed mostly for research applications and have proven their outstanding worth in gene discovery. They are also very valuable in screening the genome for DNA gains and losses at an unprecedented resolution. Targeted arrays are designed for a specific region(s) of the genome for the purpose of evaluating that targeted segment. It may be designed to study a specific chromosome or chromosomal segment or to identify and evaluate specific DNA dosage abnormalities in individuals with suspected microdeletion syndromes or subtelomeric rearrangements. The crucial goal of a targeted microarray in medical practice is to provide clinically useful results for diagnosis, genetic counseling, prognosis, and clinical management of unbalanced cytogenetic abnormalities. Applications Conventional Conventional CGH has been used mainly for the identification of chromosomal regions that are recurrently lost or gained in tumors, as well as for the diagnosis and prognosis of cancer. This approach can also be used to study chromosomal aberrations in fetal and neonatal genomes. Furthermore, conventional CGH can be used in detecting chromosomal abnormalities and have been shown to be efficient in diagnosing complex abnormalities associated with human genetic disorders. In cancer research CGH data from several studies of the same tumor type show consistent patterns of non-random genetic aberrations. Some of these changes appear to be common to various kinds of malignant tumors, while others are more tumor specific. For example, gains of chromosomal regions lq, 3q and 8q, as well as losses of 8p, 13q, 16q and 17p, are common to a number of tumor types, such as breast, ovarian, prostate, renal and bladder cancer (Figure. 3). Other alterations, such as 12p and Xp gains in testicular cancer, 13q gain 9q loss in bladder cancer, 14q loss in renal cancer and Xp loss in ovarian cancer are more specific, and might reflect the unique selection forces operating during cancer development in different organs. Array CGH is also frequently used in research and diagnostics of B cell malignancies, such as chronic lymphocytic leukemia. Chromosomal aberrations Cri du Chat (CdC) is a syndrome caused by a partial deletion of the short arm of chromosome 5. Several studies have shown that conventional CGH is suitable to detect the deletion, as well as more complex chromosomal alterations. For example, Levy et al. (2002) reported an infant with a cat-like cry, the hallmark of CdC, but having an indistinct karyotype. CGH analysis revealed a loss of chromosomal material from 5p15.3 confirming the diagnosis clinically. These results demonstrate that conventional CGH is a reliable technique in detecting structural aberrations and, in specific cases, may be more efficient in diagnosing complex abnormalities. 
Array CGH Array CGH applications are mainly directed at detecting genomic abnormalities in cancer. However, array CGH is also suitable for the analysis of DNA copy number aberrations that cause human genetic disorders. That is, array CGH is employed to uncover deletions, amplifications, breakpoints and ploidy abnormalities. Earlier diagnosis is of benefit to the patient as they may undergo appropriate treatments and counseling to improve their prognosis. Genomic abnormalities in cancer Genetic alterations and rearrangements occur frequently in cancer and contribute to its pathogenesis. Detecting these aberrations by array CGH provides information on the locations of important cancer genes and can have clinical use in diagnosis, cancer classification and prognostification. However, not all of the losses of genetic material are pathogenetic, since some DNA material is physiologically lost during the rearrangement of immunoglobulin subgenes. In a recent study, array CGH has been implemented to identify regions of chromosomal aberration (copy-number variation) in several mouse models of breast cancer, leading to identification of cooperating genes during myc-induced oncogenesis. Array CGH may also be applied not only to the discovery of chromosomal abnormalities in cancer, but also to the monitoring of the progression of tumors. Differentiation between metastatic and mild lesions is also possible using FISH once the abnormalities have been identified by array CGH. Submicroscopic aberrations Prader–Willi syndrome (PWS) is a paternal structural abnormality involving 15q11-13, while a maternal aberration in the same region causes Angelman syndrome (AS). In both syndromes, the majority of cases (75%) are the result of a 3–5 Mb deletion of the PWS/AS critical region. These small aberrations cannot be detected using cytogenetics or conventional CGH, but can be readily detected using array CGH. As a proof of principle Vissers et al. (2003) constructed a genome wide array with a 1 Mb resolution to screen three patients with known, FISH-confirmed microdeletion syndromes, including one with PWS. In all three cases, the abnormalities, ranging from 1.5 to 2.9Mb, were readily identified. Thus, array CGH was demonstrated to be a specific and sensitive approach in detecting submicroscopic aberrations. When using overlapping microarrays, it is also possible to uncover breakpoints involved in chromosomal aberrations. Prenatal genetic diagnosis Though not yet a widely employed technique, the use of array CGH as a tool for preimplantation genetic screening is becoming an increasingly popular concept. It has the potential to detect CNVs and aneuploidy in eggs, sperm or embryos which may contribute to failure of the embryo to successfully implant, miscarriage or conditions such as Down syndrome (trisomy 21). This makes array CGH a promising tool to reduce the incidence of life altering conditions and improve success rates of IVF attempts. The technique involves whole genome amplification from a single cell which is then used in the array CGH method. It may also be used in couples carrying chromosomal translocations such as balanced reciprocal translocations or Robertsonian translocations, which have the potential to cause chromosomal imbalances in their offspring. Limitations of CGH and array CGH A main disadvantage of conventional CGH is its inability to detect structural chromosomal aberrations without copy number changes, such as mosaicism, balanced chromosomal translocations, and inversions. 
CGH can also only detect gains and losses relative to the ploidy level. In addition, chromosomal regions with short repetitive DNA sequences are highly variable between individuals and can interfere with CGH analysis. Therefore, repetitive DNA regions like centromeres and telomeres need to be blocked with unlabeled repetitive DNA (e.g. Cot1 DNA) and/or can be omitted from screening. Furthermore, the resolution of conventional CGH is a major practical problem that limits its clinical applications. Although CGH has proven to be a useful and reliable technique in the research and diagnostics of both cancer and human genetic disorders, the applications involve only gross abnormalities. Because of the limited resolution of metaphase chromosomes, aberrations smaller than 5–10 Mb cannot be detected using conventional CGH. For the detection of such abnormalities, a high-resolution technique is required. Array CGH overcomes many of these limitations. Array CGH is characterized by a high resolution, its major advantage with respect to conventional CGH. The standard resolution varies between 1 and 5 Mb, but can be increased up to approximately 40 kb by supplementing the array with extra clones. However, as in conventional CGH, the main disadvantage of array CGH is its inability to detect aberrations that do not result in copy number changes and is limited in its ability to detect mosaicism. The level of mosaicism that can be detected is dependent on the sensitivity and spatial resolution of the clones. At present, rearrangements present in approximately 50% of the cells is the detection limit. For the detection of such abnormalities, other techniques, such as SKY (Spectral karyotyping) or FISH have to still be used. See also Cytogenetics Virtual karyotype References External links Virtual Grand Rounds: "Differentiating Microarray Technologies and Related Clinical Implications" by Arthur Beaudet, MD arrayMap repository: Continuously expanded collection cancer genome array datasets, with per-array and aggregated data visualisation (ca. 64'000 arrays, September 2014). The former NCBI's Cancer Chromosomes resource has been discontinued. Molecular genetics Gene tests Cytogenetics Comparisons
Comparative genomic hybridization
[ "Chemistry", "Biology" ]
5,571
[ "Genetics techniques", "Molecular genetics", "Gene tests", "Molecular biology" ]
984,020
https://en.wikipedia.org/wiki/What%20the%20Bleep%20Do%20We%20Know%21%3F
What the Bleep Do We Know!? (stylized as What tнē #$*! D̄ө ωΣ (k)πow!? and What the #$*! Do We Know!?) is a 2004 American pseudo-scientific film that posits a spiritual connection between quantum physics and consciousness (as part of a belief system known as quantum mysticism). The plot follows the fictional story of a photographer, using documentary-style interviews and computer-animated graphics, as she encounters emotional and existential obstacles in her life and begins to consider the idea that individual and group consciousness can influence the material world. Her experiences are offered by the creators to illustrate the film's scientifically unsupported ideas. Bleep was conceived and its production funded by William Arntz, who serves as co-director along with Betsy Chasse and Mark Vicente; all three were students of Ramtha's School of Enlightenment. A moderately low-budget independent film, it was promoted using viral marketing methods and opened in art-house theaters in the western United States, winning several independent film awards before being picked up by a major distributor and eventually grossing over $10 million. The 2004 theatrical release was succeeded by a substantially changed, extended home media version in 2006. The film has been described as an example of quantum mysticism, and has been criticized for both misrepresenting science and containing pseudoscience. While many of its interviewees and subjects are professional scientists in the fields of physics, chemistry, and biology, one of them has noted that the film quotes him out of context. Synopsis Filmed in Portland, Oregon, What the Bleep Do We Know!? presents a viewpoint of the physical universe and human life within it, with connections to neuroscience and quantum physics. Some ideas discussed in the film are: That the universe is best seen as constructed from thoughts and ideas rather than from matter. That "empty space" is not empty. That matter is not solid, and electrons are able to pop in and out of existence without it being known where they disappear to. That beliefs about who one is and what is real are a direct cause of oneself and of one's own realities. That peptides produced by the brain can cause a bodily reaction to emotion. In the narrative segments of the film, Marlee Matlin portrays Amanda, a photographer who plays the role of everywoman as she experiences her life from startlingly new and different perspectives. In the documentary segments of the film, interviewees discuss the roots and meaning of Amanda's experiences. The comments focus primarily on a single theme: "We create our own reality." The director, William Arntz, has described What the Bleep as a film for the "metaphysical left". Cast Marlee Matlin as Amanda Elaine Hendrix as Jennifer Barry Newman as Frank Robert Bailey Jr. as Reggie John Ross Bowie as Elliot Armin Shimerman as Man Robert Blanche as Bob Larry Brandenburg as Bruno Patti B. Collins as Mother of the Bride Production Work was split between Toronto-based Mr. X Inc., Lost Boys Studios in Vancouver, and Atomic Visual Effects in Cape Town, South Africa. The visual-effects team, led by Evan Jacobs, worked closely with the other film-makers to create visual metaphors that would capture the essence of the film's technical subjects with attention to aesthetic detail. Release Promotion Lacking the funding and resources of the typical Hollywood film, the filmmakers relied on "guerrilla marketing" first to get the film into theaters, and then to attract audiences. 
This has led to accusations, both formal and informal, directed towards the film's proponents, of spamming online message boards and forums with many thinly veiled promotional posts. Initially, the film was released in only two theaters: one in Yelm, Washington (the home of the producers, which is also the home of Ramtha), and the other the Bagdad Theater in Portland, Oregon, where it was filmed. Within several weeks, the film had appeared in a dozen or more theaters (mostly in the western United States), and within six months it had made its way into 200 theaters across the US. Box office According to Publishers Weekly, the film was one of the sleeper hits of 2004, as "word-of-mouth and strategic marketing kept it in theaters for an entire year." The article states that the domestic gross exceeded $10 million, described as not bad for a low-budget documentary, and that the DVD release attained even more significant success with over a million units shipped in the first six months following its release in March 2005. Foreign gross added another $5 million for a worldwide gross of just over $21 million. Critical response In the Publishers Weekly article, publicist Linda Rienecker of New Page Books says that she sees the success as part of a wider phenomenon, stating "A large part of the population is seeking spiritual connections, and they have the whole world to choose from now". Author Barrie Dolnick adds that "people don't want to learn how to do one thing. They'll take a little bit of Buddhism, a little bit of veganism, a little bit of astrology... They're coming into the marketplace hungry for direction, but they don't want some person who claims to have all the answers. They want suggestions, not formulas." The same article quotes Bill Pfau, Advertising Manager of Inner Traditions, as saying "More and more ideas from the New Age community have become accepted into the mainstream." Critics offered mixed reviews as seen on the film review website Rotten Tomatoes, where it scored a "Rotten" 34% score with an average score of 4.6/10, based on 77 reviews. In his review, Dave Kehr of The New York Times described the "transition from quantum mechanics to cognitive therapy" as "plausible", but stated also that "the subsequent leap—from cognitive therapy into large, hazy spiritual beliefs—isn't as effectively executed. Suddenly people who were talking about subatomic particles are alluding to alternate universes and cosmic forces, all of which can be harnessed in the interest of making Ms. Matlin's character feel better about her thighs." What the Bleep Do We Know!? has been described as "a kind of New Age answer to The Passion of the Christ and other films that adhere to traditional religious teachings." It offers alternative spirituality views characteristic of New Age philosophy, including critiques of the competing claims of stewardship among traditional religions [viz., institutional Judaism, Christianity, and Islam] of universally recognized and accepted moral values. Academic reaction Scientists who have reviewed What the Bleep Do We Know!? have described distinct assertions made as pseudoscience. Lisa Randall refers to the film as "the bane of scientists". Amongst the assertions in the film that have been challenged are that water molecules can be influenced by thought (as popularized by Masaru Emoto), that meditation can reduce violent crime rates of a city, and that quantum physics implies that "consciousness is the ground of all being." 
The film was also discussed in a letter published in Physics Today that challenges how physics is taught, saying teaching fails to "expose the mysteries physics has encountered [and] reveal the limits of our understanding". In the letter, the authors write: "the movie illustrates the uncertainty principle with a bouncing basketball being in several places at once. There's nothing wrong with that. It's recognized as pedagogical exaggeration. But the movie gradually moves to quantum 'insights' that lead a woman to toss away her antidepressant medication, to the quantum channeling of Ramtha, the 35,000-year-old Lemurian warrior, and on to even greater nonsense." It went on to say that "Most laypeople cannot tell where the quantum physics ends and the quantum nonsense begins, and many are susceptible to being misguided," and that "a physics student may be unable to convincingly confront unjustified extrapolations of quantum mechanics," a shortcoming which the authors attribute to the current teaching of quantum mechanics, in which "we tacitly deny the mysteries physics has encountered". Richard Dawkins stated that "the authors seem undecided whether their theme is quantum theory or consciousness. Both are indeed mysterious, and their genuine mystery needs none of the hype with which this film relentlessly and noisily belabours us", concluding that the film is "tosh". Professor Clive Greated wrote that "thinking on neurology and addiction are covered in some detail but, unfortunately, early references in the film to quantum physics are not followed through, leading to a confused message". Despite his caveats, he recommends that people see the film, stating: "I hope it develops into a cult movie in the UK as it has in the US. Science and engineering are important for our future, and anything that engages the public can only be a good thing." Simon Singh called it pseudoscience and said the suggestion "that if observing water changes its molecular structure, and if we are 90% water, then by observing ourselves we can change at a fundamental level via the laws of quantum physics" was "ridiculous balderdash". According to João Magueijo, professor in theoretical physics at Imperial College, the film deliberately misquotes science. The American Chemical Society's review criticizes the film as a "pseudoscientific docudrama", saying "Among the more outlandish assertions are that people can travel backward in time, and that matter is actually thought." Bernie Hobbs, a science writer with ABC Science Online, explains why the film is incorrect about quantum physics and reality: "The observer effect of quantum physics isn't about people or reality. It comes from the Heisenberg Uncertainty Principle, and it's about the limitations of trying to measure the position and momentum of subatomic particles... this only applies to sub-atomic particles—a rock doesn't need you to bump into it to exist. It's there. The sub-atomic particles that make up the atoms that make up the rock are there too." Hobbs also discusses Hagelin's experiment with Transcendental Meditation and the Washington DC rate of violent crime, saying that "the number of murders actually went up". Hobbs further disputed the film's use of the ten percent of the brain myth. David Albert, a philosopher of physics who appears in the film, has accused the filmmakers of selectively editing his interview to make it appear that he endorses the film's thesis that quantum mechanics is linked with consciousness. 
He says he is "profoundly unsympathetic to attempts at linking quantum mechanics with consciousness". In the film, during a discussion of the influence of experience on perception, Candace Pert gives an apocryphal version of the invisible ships myth whereby Native Americans were unable to see Columbus's ships because they were outside the natives' experience. According to an article in Fortean Times by David Hambling, the origins of this story likely involved the voyages of Captain James Cook, not Columbus, and an account related by Robert Hughes which said Cook's ships were "...complex and unfamiliar as to defy the natives' understanding". Hambling says it is likely that both the Hughes account and the story told by Pert were exaggerations of the records left by Captain Cook and the botanist Joseph Banks. Skeptic James Randi described the film as "a fantasy docudrama" and "[a] rampant example of abuse by charlatans and cults". Eric Scerri in a review for Committee for Skeptical Inquiry dismisses it as "a hodgepodge of all kinds of crackpot nonsense," where "science [is] distorted and sensationalized". A BBC reviewer described it as "a documentary aimed at the totally gullible". According to Margaret Wertheim, "History abounds with religious enthusiasts who have read spiritual portent into the arrangement of the planets, the vacuum of space, electromagnetic waves and the big bang. But no scientific discovery has proved so ripe for spiritual projection as the theories of quantum physics, replete with their quixotic qualities of uncertainty, simultaneity and parallelism." Wertheim continues that the film "abandons itself entirely to the ecstasies of quantum mysticism, finding in this aleatory description of nature the key to spiritual transformation. As one of the film's characters gushes early in the proceedings, 'The moment we acknowledge the quantum self, we say that somebody has become enlightened'. A moment in which 'the mathematical formalisms of quantum mechanics [...] are stripped of all empirical content and reduced to a set of syrupy nostrums'." Journalist John Gorenfeld, writing in Salon, notes that the film's three directors, William Arntz, Betsy Chasse, and Mark Vicente, were at the time students of Ramtha's School of Enlightenment, which he says has been described as a cult. Mark Vicente later became involved with another prominent cult: NXIVM, the human-potential-development and sex-trafficking pyramid scheme founded by convicted con artist Keith Raniere. After leaving NXIVM, Vicente participated in the exposé documentary series The Vow, revealing many of the cult's damaging tactics; however, nowhere in The Vow does Vicente admit that NXIVM was not his first time adhering to a cult-like group. Accolades Ashland Independent Film Festival – Best Documentary DCIFF – DC Independent Film Festival – Grand Jury Documentary Award Maui Film Festival – Audience Choice Award – Best Hybrid Documentary Sedona International Film Festival – Audience Choice Award, Most Thought-Provoking Film Pigasus Award – an annual tongue-in-cheek award, this particular award's category was #3: "to the media outlet that reported as factual the most outrageous supernatural, paranormal or occult claims". Legacy In mid-2005, the filmmakers worked with HCI Books to expand on the themes in a book titled What the Bleep Do We Know!?—Discovering the Endless Possibilities of Your Everyday Reality. 
HCI president Peter Vegso stated that in regard to this book, "What the Bleep is the quantum leap in the New Age world," and "by marrying science and spirituality, it is the foundation of future thought." On August 1, 2006, What the Bleep! Down the Rabbit Hole - Quantum Edition multi-disc DVD set was released, containing two extended versions of What the Bleep Do We Know!?, with over 15 hours of material on three double-sided DVDs. Featured individuals The film features interview segments with: Dean Radin, Senior Scientist at the Institute of Noetic Sciences (IONS) in Petaluma, California and proponent of paranormal phenomena. John Hagelin of Maharishi University of Management, director of MUM's Institute for Science, Technology, and Public Policy, and three-time presidential candidate of the Transcendental Meditation-linked Natural Law Party. Stuart Hameroff, anesthesiologist, author, and associate director of the Center for Consciousness Studies at the University of Arizona, who developed with Roger Penrose a quantum hypothesis of consciousness in the books The Emperor's New Mind, and Shadows of the Mind. JZ Knight, a spiritual teacher who is identified in interview segments as the spirit "Ramtha" that Knight claims to channel. Andrew B. Newberg, assistant professor of radiology at the University of Pennsylvania Hospital, and physician in nuclear medicine, who coauthored the book Why God Won't Go Away: Brain Science & the Biology of Belief () Candace Pert, a neuroscientist, who discovered the cellular bonding site for endorphins in the brain, and in 1997 wrote the book Molecules of Emotion () Fred Alan Wolf, independent physicist, author of Taking the Quantum Leap, winner of the 1982 National Book Award in science, and featured in the documentary film Spirit Space. Wolf has taught at San Diego State University, the University of Paris, the Hebrew University of Jerusalem, the University of London, and Birkbeck College, London. David Albert, philosopher of physics and professor at Columbia University, author of Quantum Mechanics and Experience, who according to a Popular Science article was "outraged at the final product" of his interview which he felt misrepresented his views about quantum mechanics and consciousness. Micheál Ledwith, author and former professor of theology at St. Patrick's College, Maynooth; Daniel Monti, physician and director of the Mind-Body Medicine Program at Thomas Jefferson University; Jeffrey Satinover, psychiatrist, author and professor; William Tiller, Professor Emeritus of Material Science and Engineering at Stanford University; Joe Dispenza, former Ramtha School of Enlightenment teacher, chiropractor. See also Mind-body problem Hard problem of consciousness Law of attraction List of films featuring the deaf and hard of hearing References Further reading External links 2004 films 2004 comedy-drama films 2000s American films 2000s English-language films 2000s German-language films 2000s Spanish-language films American comedy-drama films English-language comedy-drama films Films about quantum mechanics Films about spirituality Films set in Oregon Films scored by Christopher Franke Films shot in Portland, Oregon Quantum mysticism New Age media Pseudoscience documentary films Roadside Attractions films
What the Bleep Do We Know!?
[ "Physics" ]
3,580
[ "Quantum mechanics", "Quantum mysticism" ]
2,991,011
https://en.wikipedia.org/wiki/EL34
The EL34 is a thermionic vacuum tube of the power pentode type. The EL34 was introduced in 1955 by Mullard, who were owned by Philips. The EL34 has an octal base (indicated by the '3' in the part number) and is found mainly in the final output stages of audio amplification circuits; it was also designed to be suitable as a series regulator by virtue of its high permissible voltage between heater and cathode and other parameters. The American RETMA tube designation number for this tube is 6CA7. The USSR analog was 6P27S (Cyrillic: 6П27C). Specifications In common with all 'E' prefix tubes, using the Mullard–Philips tube designation, the EL34 has a heater voltage of 6.3 V. According to the data sheets found in old vacuum tube reference manuals, a pair of EL34s with 800 V plate voltage can produce 90 watts output in class AB1 in push–pull configuration. However, this configuration is rarely found. One application of this type was in "Australian Sound" public address amplifiers commonly used in government schools in Australia in the 1950s, using four EL34s for ≈200 watts. More commonly found is a pair of EL34s running class AB1 in push–pull around 375–450 V plate voltage and producing 50 watts output (if fixed bias is used), while a quad of EL34s running class AB1 in push–pull typically run anywhere from 425 to 500 V plate voltage and produces 100 watts output. This configuration is typically found in guitar amplifiers. The EL34 is a pentode, while the 6L6, which delivers a similar range of power output, is a beam tetrode which RCA referred to as a beam power tube. Although power pentodes and beam tetrodes have some differences in their principles of operation (the beam forming plates of the beam tetrode or fifth electrode (3rd grid) of the pentode, both serving to hinder the return of unabsorbed electrons from the anode (or plate) to the 4th electrode (2nd grid)) and have some internal construction differences, they are functionally closely equivalent. Unlike the 6L6, (EIA base 7AC) the EL34 has its grid 3 connection brought out to a separate Pin (Pin 1) (EIA base 8ET) and its heater draws 1.5 Amps compared to the 0.9 Amp heater in the 6L6. However, Sylvania (and possibly GE) marketed a tube as 6CA7 which was not only in a markedly different 'fat boy' envelope, but used a beam forming plate much like a 6L6. Examining the mica spacer on the top of the tube will confirm the lack of a suppressor grid. Although these tubes have similar (but not identical) characteristics, they are made very differently. While the EL34 is no longer made by Philips, it is currently manufactured by EkspoPUL in Saratov, Russia (Electro-Harmonix, Tung-Sol, Mullard and Genalex Gold Lion brands), JJ Electronic in Čadca, Slovakia and by Hengyang Electronics at former Foshan Nanhai Guiguang Electron Tube Factory in southern China, (Psvane and TAD brands). Some firms make a related tube called an E34L which is rated to require a higher grid bias voltage, but which may be interchangeable in some equipment. Application The EL34 was widely used in higher-powered audio amplifiers of the 1960s and 1970s, such as the popular Dynaco Stereo 70 and the Leak TL25 (mono) and Stereo 60, and is also widely used in high-end guitar amplifiers because it is characterized by greater distortion (considered desirable in this application) at lower power than other octal tubes such as 6L6, KT88, or 6550. 
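One concrete difference mentioned above is heater consumption: at the common 6.3 V heater supply, the EL34 draws 1.5 A against the 6L6's 0.9 A. The short sketch below applies plain P = V × I arithmetic to the nominal figures quoted above (data-sheet values, not measurements of any particular tube) to make the comparison explicit.

```python
# Heater power comparison for the EL34 versus the 6L6, using the nominal
# figures quoted above: a 6.3 V heater supply, with 1.5 A (EL34) and
# 0.9 A (6L6) heater current.  Plain P = V * I arithmetic.

HEATER_VOLTAGE = 6.3  # volts, common to all 'E'-prefix tubes

heater_current = {
    "EL34": 1.5,  # amps
    "6L6": 0.9,   # amps
}

for tube, current in heater_current.items():
    power = HEATER_VOLTAGE * current
    print(f"{tube}: {current:.1f} A heater -> {power:.2f} W heater power")

# Expected output:
# EL34: 1.5 A heater -> 9.45 W heater power
# 6L6: 0.9 A heater -> 5.67 W heater power
```

The roughly two-thirds higher heater dissipation per tube is one of the practical points to keep in mind when the two types are compared or substituted.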
The EL34 is found in many British guitar amps and is associated with the "British tone" (Vox, Marshall, Hiwatt, Orange) as compared to the 6L6 which is generally associated with the "American tone" (Fender/Mesa Boogie; the earlier classic Marshall "Plexi" amps used the KT66, a beam tetrode similar to the 6L6, as well). Replacement 6CA7 Similar tubes KT77 6P27S (6П27С) See also EL84 6V6 6L6 5881 KT66 KT88 6550 List of vacuum tubes References Technical specifications EL34 Philips Metal Base Valvo Gmbh, Valvo Taschenbuch, 1958. RCA, RCA Receiving Tube Manual RC26, 1968. JJ Electronics EL34 and E34L data sheet (PDF) EL34 EI Yugoslavia External links Duncan's Amps TDSL Reviews of EL34 tubes Tube Data Archive, thousands of tube data sheets Vacuum tubes Guitar amplification tubes
EL34
[ "Physics" ]
1,025
[ "Vacuum tubes", "Vacuum", "Matter" ]
2,991,090
https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley%20model
The Hodgkin–Huxley model, or conductance-based model, is a mathematical model that describes how action potentials in neurons are initiated and propagated. It is a set of nonlinear differential equations that approximates the electrical engineering characteristics of excitable cells such as neurons and muscle cells. It is a continuous-time dynamical system. Alan Hodgkin and Andrew Huxley described the model in 1952 to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon. They received the 1963 Nobel Prize in Physiology or Medicine for this work. Basic components The typical Hodgkin–Huxley model treats each component of an excitable cell as an electrical element (as shown in the figure). The lipid bilayer is represented as a capacitance (Cm). Voltage-gated ion channels are represented by electrical conductances (gn, where n is the specific ion channel) that depend on both voltage and time. Leak channels are represented by linear conductances (gL). The electrochemical gradients driving the flow of ions are represented by voltage sources (En) whose voltages are determined by the ratio of the intra- and extracellular concentrations of the ionic species of interest. Finally, ion pumps are represented by current sources (Ip). The membrane potential is denoted by Vm. Mathematically, the current flowing through the lipid bilayer is written as and the current through a given ion channel is the product of that channel's conductance and the driving potential for the specific ion where is the reversal potential of the specific ion channel. Thus, for a cell with sodium and potassium channels, the total current through the membrane is given by: where I is the total membrane current per unit area, Cm is the membrane capacitance per unit area, gK and gNa are the potassium and sodium conductances per unit area, respectively, VK and VNa are the potassium and sodium reversal potentials, respectively, and gl and Vl are the leak conductance per unit area and leak reversal potential, respectively. The time dependent elements of this equation are Vm, gNa, and gK, where the last two conductances depend explicitly on the membrane voltage (Vm) as well. Ionic current characterization In voltage-gated ion channels, the channel conductance is a function of both time and voltage ( in the figure), while in leak channels, , it is a constant ( in the figure). The current generated by ion pumps is dependent on the ionic species specific to that pump. The following sections will describe these formulations in more detail. Voltage-gated ion channels Using a series of voltage clamp experiments and by varying extracellular sodium and potassium concentrations, Hodgkin and Huxley developed a model in which the properties of an excitable cell are described by a set of four ordinary differential equations. Together with the equation for the total current mentioned above, these are: where I is the current per unit area and and are rate constants for the i-th ion channel, which depend on voltage but not time. is the maximal value of the conductance. n, m, and h are dimensionless probabilities between 0 and 1 that are associated with potassium channel subunit activation, sodium channel subunit activation, and sodium channel subunit inactivation, respectively. 
For instance, given that potassium channels in squid giant axon are made up of four subunits which all need to be in the open state for the channel to allow the passage of potassium ions, the n needs to be raised to the fourth power. For , and take the form and are the steady state values for activation and inactivation, respectively, and are usually represented by Boltzmann equations as functions of . In the original paper by Hodgkin and Huxley, the functions and are given by where denotes the negative depolarization in mV. In many current software programs Hodgkin–Huxley type models generalize and to In order to characterize voltage-gated channels, the equations can be fitted to voltage clamp data. For a derivation of the Hodgkin–Huxley equations under voltage-clamp, see. Briefly, when the membrane potential is held at a constant value (i.e., with a voltage clamp), for each value of the membrane potential the nonlinear gating equations reduce to equations of the form: Thus, for every value of membrane potential the sodium and potassium currents can be described by In order to arrive at the complete solution for a propagated action potential, one must write the current term I on the left-hand side of the first differential equation in terms of V, so that the equation becomes an equation for voltage alone. The relation between I and V can be derived from cable theory and is given by where a is the radius of the axon, R is the specific resistance of the axoplasm, and x is the position along the nerve fiber. Substitution of this expression for I transforms the original set of equations into a set of partial differential equations, because the voltage becomes a function of both x and t. The Levenberg–Marquardt algorithm is often used to fit these equations to voltage-clamp data. While the original experiments involved only sodium and potassium channels, the Hodgkin–Huxley model can also be extended to account for other species of ion channels. Leak channels Leak channels account for the natural permeability of the membrane to ions and take the form of the equation for voltage-gated channels, where the conductance is a constant. Thus, the leak current due to passive leak ion channels in the Hodgkin-Huxley formalism is . Pumps and exchangers The membrane potential depends upon the maintenance of ionic concentration gradients across it. The maintenance of these concentration gradients requires active transport of ionic species. The sodium-potassium and sodium-calcium exchangers are the best known of these. Some of the basic properties of the Na/Ca exchanger have already been well-established: the stoichiometry of exchange is 3 Na+: 1 Ca2+ and the exchanger is electrogenic and voltage-sensitive. The Na/K exchanger has also been described in detail, with a 3 Na+: 2 K+ stoichiometry. Mathematical properties The Hodgkin–Huxley model can be thought of as a differential equation system with four state variables, , and , that change with respect to time . The system is difficult to study because it is a nonlinear system, cannot be solved analytically, and therefore has no closed-form solution. However, there are many numerical methods available to analyze the system. Certain properties and general behaviors, such as limit cycles, can be proven to exist. Center manifold Because there are four state variables, visualizing the path in phase space can be difficult. Usually two variables are chosen, voltage and the potassium gating variable , allowing one to visualize the limit cycle. 
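Because the coupled equations above have no closed-form solution, in practice they are integrated numerically. The sketch below is a minimal forward-Euler integration of the membrane equation and the three gating equations for a single space-clamped patch; the maximal conductances, reversal potentials and rate functions are the values commonly quoted for the squid giant axon in the modern convention (resting potential near −65 mV) and should be read as illustrative assumptions rather than as the definitive parameter set.

```python
import math

# Minimal forward-Euler integration of the Hodgkin-Huxley equations for a
# single space-clamped membrane patch.  Units: mV, ms, mS/cm^2, uF/cm^2 and
# uA/cm^2.  Parameter values are those commonly quoted for the squid giant
# axon in the modern convention (rest near -65 mV); treat them as
# illustrative assumptions, not authoritative constants.

C_m  = 1.0      # membrane capacitance per unit area
g_Na = 120.0    # maximal sodium conductance
g_K  = 36.0     # maximal potassium conductance
g_L  = 0.3      # leak conductance
E_Na = 50.0     # sodium reversal potential
E_K  = -77.0    # potassium reversal potential
E_L  = -54.387  # leak reversal potential

# Voltage-dependent rate "constants" alpha and beta for each gating variable.
# The guards handle the removable 0/0 points of alpha_n and alpha_m.
def a_n(V):
    x = V + 55.0
    return 0.1 if abs(x) < 1e-7 else 0.01 * x / (1.0 - math.exp(-x / 10.0))

def b_n(V):
    return 0.125 * math.exp(-(V + 65.0) / 80.0)

def a_m(V):
    x = V + 40.0
    return 1.0 if abs(x) < 1e-7 else 0.1 * x / (1.0 - math.exp(-x / 10.0))

def b_m(V):
    return 4.0 * math.exp(-(V + 65.0) / 18.0)

def a_h(V):
    return 0.07 * math.exp(-(V + 65.0) / 20.0)

def b_h(V):
    return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

def simulate(I_ext=10.0, dt=0.01, t_end=50.0):
    """Integrate the four state variables (V, n, m, h) with forward Euler."""
    V, n, m, h = -65.0, 0.317, 0.053, 0.596   # approximate resting state
    trace = []
    for step in range(int(t_end / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current  (m^3 h gating)
        I_K  = g_K * n**4 * (V - E_K)         # potassium current (n^4 gating)
        I_L  = g_L * (V - E_L)                # leak current
        dV = (I_ext - I_Na - I_K - I_L) / C_m
        dn = a_n(V) * (1.0 - n) - b_n(V) * n
        dm = a_m(V) * (1.0 - m) - b_m(V) * m
        dh = a_h(V) * (1.0 - h) - b_h(V) * h
        V, n, m, h = V + dt * dV, n + dt * dn, m + dt * dm, h + dt * dh
        trace.append((step * dt, V, n))
    return trace

if __name__ == "__main__":
    for t, V, n in simulate()[::500]:   # sample the trace every 5 ms
        print(f"t = {t:5.1f} ms   V = {V:7.2f} mV   n = {n:.3f}")
```

Plotting V against n from such a run gives the two-dimensional projection of the trajectory described above.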
However, one must be careful because this is an ad-hoc method of visualizing the 4-dimensional system. This does not prove the existence of the limit cycle. A better projection can be constructed from a careful analysis of the Jacobian of the system, evaluated at the equilibrium point. Specifically, the eigenvalues of the Jacobian are indicative of the center manifold's existence. Likewise, the eigenvectors of the Jacobian reveal the center manifold's orientation. The Hodgkin–Huxley model has two negative eigenvalues and two complex eigenvalues with slightly positive real parts. The eigenvectors associated with the two negative eigenvalues will reduce to zero as time t increases. The remaining two complex eigenvectors define the center manifold. In other words, the 4-dimensional system collapses onto a 2-dimensional plane. Any solution starting off the center manifold will decay towards the center manifold. Furthermore, the limit cycle is contained on the center manifold. Bifurcations If the injected current were used as a bifurcation parameter, then the Hodgkin–Huxley model undergoes a Hopf bifurcation. As with most neuronal models, increasing the injected current will increase the firing rate of the neuron. One consequence of the Hopf bifurcation is that there is a minimum firing rate. This means that either the neuron is not firing at all (corresponding to zero frequency), or firing at the minimum firing rate. Because of the all-or-none principle, there is no smooth increase in action potential amplitude, but rather there is a sudden "jump" in amplitude. The resulting transition is known as a canard. Improvements and alternative models The Hodgkin–Huxley model is regarded as one of the great achievements of 20th-century biophysics. Nevertheless, modern Hodgkin–Huxley-type models have been extended in several important ways: Additional ion channel populations have been incorporated based on experimental data. The Hodgkin–Huxley model has been modified to incorporate transition state theory and produce thermodynamic Hodgkin–Huxley models. Models often incorporate highly complex geometries of dendrites and axons, often based on microscopy data. Conductance-based models similar to Hodgkin–Huxley model incorporate the knowledge about cell types defined by single cell transcriptomics. Stochastic models of ion-channel behavior, leading to stochastic hybrid systems. The Poisson–Nernst–Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. Several simplified neuronal models have also been developed (such as the FitzHugh–Nagumo model), facilitating efficient large-scale simulation of groups of neurons, as well as mathematical insight into dynamics of action potential generation. See also Anode break excitation Autowave Neural circuit GHK flux equation Goldman equation Memristor Neural accommodation Reaction–diffusion Theta model Rulkov map Chialvo map References Further reading External links Interactive Javascript simulation of the HH model Runs in any HTML5 – capable browser. Allows for changing the parameters of the model and current injection. Interactive Java applet of the HH model Parameters of the model can be changed as well as excitation parameters and phase space plottings of all the variables is possible. 
Direct link to Hodgkin–Huxley model and a Description in BioModels Database Neural Impulses: The Action Potential In Action by Garrett Neske, The Wolfram Demonstrations Project Interactive Hodgkin–Huxley model by Shimon Marom, The Wolfram Demonstrations Project ModelDB A computational neuroscience source code database containing 4 versions (in different simulators) of the original Hodgkin–Huxley model and hundreds of models that apply the Hodgkin–Huxley model to other channels in many electrically excitable cell types. Several articles about the stochastic version of the model and its link with the original one. Nonlinear systems Electrophysiology Ion channels Computational neuroscience
Hodgkin–Huxley model
[ "Chemistry", "Mathematics" ]
2,283
[ "Nonlinear systems", "Neurochemistry", "Ion channels", "Dynamical systems" ]
2,992,378
https://en.wikipedia.org/wiki/Jena%20Observatory
Astrophysikalisches Institut und Universitäts-Sternwarte Jena (AIU Jena, Astrophysical Institute and University Observatory Jena, or simply Jena Observatory) is an astronomical observatory owned and operated by the Friedrich Schiller University of Jena. It has two main locations: in Jena, Germany, and in the neighbouring village of Großschwabhausen. History The first observatory was built in 1813 and replaced by a bigger one in 1889. It was funded by the local regent Karl August von Sachsen-Weimar-Eisenach and planned by Johann Wolfgang Goethe. Its most famous director in the later decades was Ernst Abbe. The new observatory in Großschwabhausen was built in 1962 in order to avoid the light pollution from the city of Jena. The old main observatory in the city centre is the home of "Volkssternwarte Urania", a society of hobbyist astronomers. They offer public access and courses for children and adults, and host events such as watching comets or lunar eclipses. WASP-3c & TTV Transit Timing Variation (TTV), a variation on the transit method, was used by the Rozhen Observatory, the Jena Observatory, and the Toruń Centre for Astronomy to discover the exoplanet WASP-3c. See also List of astronomical observatories References External links Jena Observatory Universitäts-Sternwarte Jena Astronomical observatories in Germany Buildings and structures in Jena Glass engineering and science
Jena Observatory
[ "Materials_science", "Engineering" ]
303
[ "Glass engineering and science", "Materials science" ]
2,992,440
https://en.wikipedia.org/wiki/Antibody%20opsonization
Antibody opsonization is a process by which a pathogen is marked for phagocytosis through coating of a target cell with antibodies. Immunoglobulins participate in molecular tagging of pathogens which display antigens recognised by their specific paratope. The binding of antibodies enhances pathogen identification and the recruitment of immune effector cells, ultimately accelerating microbial clearance through phagocytic destruction or antibody-dependent cellular cytotoxicity. Principles Antibody-mediated opsonisation (marking) of pathogens depends on high-affinity paratope-epitope interactions. Immunoglobulins are highly effective opsonins, with the IgG subclasses IgG1 and IgG3 being recognised as the most efficacious opsonins in humans. Antibodies structurally contain two important domains: Fab domain - the region of the antibody which displays the paratope capable of binding to antigenic epitopes Fc fragment - the 'tail' region of the Y-shaped immunoglobulin which provides a binding site for endogenous Fc receptors (FcRs) displayed on immune cell surfaces This Fc domain allows antibodies to engage with various effector leukocytes, enhancing the detection and elimination of encountered pathogens. The interaction with leukocytes is largely driven by the predominant antibody isotype as well as the presence and concentration of immune cells recruited to the local environment. The resulting immune cell recruitment may result in phagocytosis if monocytes, macrophages, or neutrophils are the primary cells recruited, release of granzymes and other killing factors if NK cells or neutrophils are recruited, and release of pro-inflammatory cytokines in nearly all cases. Recruitment and Clearance Antibody-stimulated Phagocytosis Mononuclear phagocytes and neutrophils express FcRs that bind strongly to the Fc regions of particular antibody isotypes. During a normal inflammatory response, microbial pathogen-associated molecular patterns (PAMPs) bind with phagocytic pattern recognition receptors (PRRs), triggering sequential intracellular signalling cascades culminating in phagocytic clearance. Co-expression of opsonin receptors such as FcRs enhances their ability to detect microbes which have been tagged by antibodies as pathogenic. These interactions result in envelopment of the particle by the cytoplasmic membrane of the phagocytic cell, until the particle is contained in a membrane-bound vacuole (phagosome) within the cell. The pathogen is subsequently destroyed following intracellular vesicle fusion with lytic vesicles. Antibody-dependent Cell-mediated Cytotoxicity In antibody-dependent cell-mediated cytotoxicity, the pathogen does not need to be internalised to be destroyed. ADCC requires an effector cell with the ability to eliminate pathogens through release of cytotoxic agents, most notably natural killer cells. However, macrophages, neutrophils and eosinophils are sometimes implicated. During this process, the pathogen is opsonized and bound with the antibody IgG via its Fab domain. Cells with cytotoxic function (e.g. NK cells) express Fcγ receptors which recognize and bind to the reciprocal Fc portion of the antibody. This receptor engagement triggers degranulation and release of cytotoxic granules containing perforin and granzymes to kill antibody-sensitized target cells. References Immune system
Antibody opsonization
[ "Biology" ]
718
[ "Immune system", "Organ systems" ]
2,992,856
https://en.wikipedia.org/wiki/Rhodamine%20B
Rhodamine B is a chemical compound and a dye. It is often used as a tracer dye within water to determine the rate and direction of flow and transport. Rhodamine dyes fluoresce and can thus be detected easily and inexpensively with fluorometers. Rhodamine B is used in biology as a staining fluorescent dye, sometimes in combination with auramine O, as the auramine-rhodamine stain to demonstrate acid-fast organisms, notably Mycobacterium. Rhodamine dyes are also used extensively in biotechnology applications such as fluorescence microscopy, flow cytometry, fluorescence correlation spectroscopy and ELISA. Other uses Rhodamine B is often mixed with herbicides to show where they have been used. It is also being tested for use as a biomarker in oral rabies vaccines for wildlife, such as raccoons, to identify animals that have eaten a vaccine bait. The rhodamine is incorporated into the animal's whiskers and teeth. Rhodamine B is an important hydrophilic xanthene dye well known for its stability and is widely used in the textile industry, leather, paper printing, paint, coloured glass and plastic industries. Rhodamine B (BV10) is mixed with quinacridone magenta (PR122) to make the bright pink watercolor known as Opera Rose. Properties Rhodamine B can exist in equilibrium between two forms: an "open"/fluorescent form and a "closed"/nonfluorescent spirolactone form. The "open" form dominates in acidic condition while the "closed" form is colorless in basic condition. The fluorescence intensity of rhodamine B will decrease as temperature increases. The solubility of rhodamine B in water varies by manufacturer, and has been reported as 8 g/L and ~15 g/L, while solubility in alcohol (presumably ethanol) has been reported as 15 g/L. Chlorinated tap water decomposes rhodamine B. Rhodamine B solutions adsorb to plastics and should be kept in glass. Rhodamine B is tunable around 610 nm when used as a laser dye. Its luminescence quantum yield is 0.65 in basic ethanol, 0.49 in ethanol, 1.0, and 0.68 in 94% ethanol. The fluorescence yield is temperature dependent; the compound is fluxional in that its excitability is in thermal equilibrium at room temperature. Safety and health In California, rhodamine B is suspected to be carcinogenic and thus products containing it must contain a warning on its label. Cases of economically motivated adulteration, where it has been illegally used to impart a red color to chili powder, have come to the attention of food safety regulators. See also Dye laser Laser dyes Rhodamine Rhodamine 6G References Notes Microscopy Microbiology techniques Laboratory techniques Histopathology Histotechnology Staining dyes Staining Laser gain media Benzoic acids Aromatic amines Chlorides Quaternary ammonium compounds Triarylmethane dyes Xanthenes Diethylamino compounds Fluorescent dyes
Rhodamine B
[ "Chemistry", "Biology" ]
673
[ "Chlorides", "Inorganic compounds", "Staining", "Salts", "Microbiology techniques", "nan", "Microscopy", "Cell imaging", "Histopathology" ]
2,993,391
https://en.wikipedia.org/wiki/Globar
A Globar is used as a thermal light source for infrared spectroscopy. The preferred material for making a Globar is silicon carbide, which is shaped into rods or arches of various sizes. When inserted into a circuit that provides it with electric current, it emits radiation from ~2 to 50 micrometres wavelength via Joule heating. Globars are used as infrared sources for spectroscopy because their spectral behavior corresponds approximately to that of a Planck radiator (i.e. a black body). Alternative infrared sources are Nernst lamps, coils of chrome–nickel alloy or high-pressure mercury lamps. The technical term Globar is an English portmanteau word formed from glow and bar. The term glowbar is sometimes used synonymously in English (which is an incorrect spelling in the strict sense). The American Resistor Company in Milwaukee, Wisconsin, had the word and lettering Globar registered as a trademark (in a special decorative script font) with the United States Patent and Trademark Office on June 30, 1925 (registration number 0200201) and on October 18, 1927 (registration number 0234147). This registration was renewed for the third time in 1987 (by various companies over the intervening 60 years). See also Nernst lamp List of light sources References External links Viewgraphs about infrared beamlines and IR spectroscopy Advanced Light Source, Berkeley, CA, USA Introduction to the optical principles of IR spectroscopy, light sources Ralf Arnold (in German) Lighting Spectroscopy
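Since a Globar's spectral behaviour approximates that of a Planck radiator, its output across the quoted 2–50 µm range can be estimated directly from Planck's law. The sketch below does exactly that; the 1300 K operating temperature is an assumed, merely illustrative value, not a specification of any actual device.

```python
import math

# Spectral radiance of an ideal black body (Planck's law), used here as an
# approximation to a Globar's output over the ~2-50 micrometre range
# mentioned above.  The 1300 K temperature is an assumed, illustrative value.

H   = 6.62607015e-34   # Planck constant, J*s
C   = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance B(lambda, T) in W * m^-2 * sr^-1 * m^-1."""
    prefactor = 2.0 * H * C**2 / wavelength_m**5
    exponent = H * C / (wavelength_m * K_B * temperature_k)
    return prefactor / math.expm1(exponent)

T = 1300.0  # assumed emitter temperature, K
for wavelength_um in (2, 5, 10, 20, 50):
    radiance = planck_radiance(wavelength_um * 1e-6, T)
    print(f"{wavelength_um:>3} um : {radiance:.3e} W m^-2 sr^-1 m^-1")
```

By Wien's displacement law (λ_max ≈ 2898 µm·K / T) the peak of a 1300 K curve lies near 2.2 µm, i.e. at the short-wavelength end of the range over which the source is used.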
Globar
[ "Physics", "Chemistry" ]
310
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
2,993,692
https://en.wikipedia.org/wiki/Kuiper%27s%20theorem
In mathematics, Kuiper's theorem (after Nicolaas Kuiper) is a result on the topology of operators on an infinite-dimensional, complex Hilbert space H. It states that the space GL(H) of invertible bounded endomorphisms of H is such that all maps from any finite complex Y to GL(H) are homotopic to a constant, for the norm topology on operators. A significant corollary, also referred to as Kuiper's theorem, is that this group is weakly contractible, ie. all its homotopy groups are trivial. This result has important uses in topological K-theory. General topology of the general linear group For finite dimensional H, this group would be a complex general linear group and not at all contractible. In fact it is homotopy equivalent to its maximal compact subgroup, the unitary group U of H. The proof that the complex general linear group and unitary group have the same homotopy type is by the Gram-Schmidt process, or through the matrix polar decomposition, and carries over to the infinite-dimensional case of separable Hilbert space, basically because the space of upper triangular matrices is contractible as can be seen quite explicitly. The underlying phenomenon is that passing to infinitely many dimensions causes much of the topological complexity of the unitary groups to vanish; but see the section on Bott's unitary group, where the passage to infinity is more constrained, and the resulting group has non-trivial homotopy groups. Historical context and topology of spheres It is a surprising fact that the unit sphere, sometimes denoted S∞, in infinite-dimensional Hilbert space H is a contractible space, while no finite-dimensional spheres are contractible. This result, certainly known decades before Kuiper's, may have the status of mathematical folklore, but it is quite often cited. In fact more is true: S∞ is diffeomorphic to H, which is certainly contractible by its convexity. One consequence is that there are smooth counterexamples to an extension of the Brouwer fixed-point theorem to the unit ball in H. The existence of such counter-examples that are homeomorphisms was shown in 1943 by Shizuo Kakutani, who may have first written down a proof of the contractibility of the unit sphere. But the result was anyway essentially known (in 1935 Andrey Nikolayevich Tychonoff showed that the unit sphere was a retract of the unit ball). The result on the group of bounded operators was proved by the Dutch mathematician Nicolaas Kuiper, for the case of a separable Hilbert space; the restriction of separability was later lifted. The same result, but for the strong operator topology rather than the norm topology, was published in 1963 by Jacques Dixmier and Adrien Douady. The geometric relationship of the sphere and group of operators is that the unit sphere is a homogeneous space for the unitary group U. The stabiliser of a single vector v of the unit sphere is the unitary group of the orthogonal complement of v; therefore the homotopy long exact sequence predicts that all the homotopy groups of the unit sphere will be trivial. This shows the close topological relationship, but is not in itself quite enough, since the inclusion of a point will be a weak homotopy equivalence only, and that implies contractibility directly only for a CW complex. In a paper published two years after Kuiper's, Bott's unitary group There is another infinite-dimensional unitary group, of major significance in homotopy theory, that to which the Bott periodicity theorem applies. It is certainly not contractible. 
The difference from Kuiper's group can be explained: Bott's group is the subgroup in which a given operator acts non-trivially only on a subspace spanned by the first N of a fixed orthonormal basis {ei}, for some N, being the identity on the remaining basis vectors. Applications An immediate consequence, given the general theory of fibre bundles, is that every Hilbert bundle is a trivial bundle. The result on the contractibility of S∞ gives a geometric construction of classifying spaces for certain groups that act freely on it, such as the cyclic group with two elements and the circle group. The unitary group U in Bott's sense has a classifying space BU for complex vector bundles (see Classifying space for U(n)). A deeper application coming from Kuiper's theorem is the proof of the Atiyah–Jänich theorem (after Klaus Jänich and Michael Atiyah), stating that the space of Fredholm operators on H, with the norm topology, represents the functor K(.) of topological (complex) K-theory, in the sense of homotopy theory. This is given by Atiyah. Case of Banach spaces The same question may be posed about invertible operators on any Banach space of infinite dimension. Here there are only partial results. Some classical sequence spaces have the same property, namely that the group of invertible operators is contractible. On the other hand, there are examples known where it fails to be a connected space. Where all homotopy groups are known to be trivial, the contractibility in some cases may remain unknown. References K-theory Operator theory Hilbert spaces Theorems in topology Topology of Lie groups
Kuiper's theorem
[ "Physics", "Mathematics" ]
1,115
[ "Quantum mechanics", "Theorems in topology", "Topology", "Mathematical problems", "Hilbert spaces", "Mathematical theorems" ]
2,993,946
https://en.wikipedia.org/wiki/IEEE%201541
IEEE 1541-2002 is a standard issued in 2002 by the Institute of Electrical and Electronics Engineers (IEEE) concerning the use of prefixes for binary multiples of units of measurement related to digital electronics and computing. IEEE 1541-2021 revises and supersedes IEEE 1541-2002, which is now 'inactive'. While the International System of Units (SI) defines multiples based on powers of ten (like k = 10³, M = 10⁶, etc.), a different definition is sometimes used in computing, based on powers of two (like k = 2¹⁰, M = 2²⁰, etc.). This is due to the binary nature of current computing systems, which makes powers of two the simplest to calculate. In the early years of computing, there was no significant error in using the same prefix for either quantity (2¹⁰ = 1,024 and 10³ = 1,000 are equal to two significant figures). Thus, the SI prefixes were borrowed to indicate nearby binary multiples for these computer-related quantities. Meanwhile, manufacturers of storage devices, such as hard disks, traditionally used the standard decimal meanings of the prefixes, and decimal multiples are used for transmission rates and processor clock speeds as well. As technology improved, all of these measurements and capacities increased. As the binary meaning was extended to higher prefixes, the absolute error between the two meanings increased. This has even resulted in litigation against hard drive manufacturers, because some operating systems report the size using the larger binary interpretation. Moreover, there is not a consistent use of the symbols to indicate quantities of bits and bytes – the unit symbol "Mb", for instance, has been widely used for both megabytes and megabits. IEEE 1541 sets new recommendations to represent these quantities and unit symbols unambiguously. After a trial period of two years, in 2005, IEEE 1541-2002 was elevated to a full-use standard by the IEEE Standards Association, and was reaffirmed on 27 March 2008. IEEE 1541 is closely related to Amendment 2 of the international standard IEC 60027-2. Later, the IEC standard was harmonized into the common ISO/IEC 80000-13:2008 – Quantities and units – Part 13: Information science and technology. IEC 80000-13 uses 'bit' as the symbol for bit, as opposed to 'b'. Recommendations IEEE 1541 recommends: a set of units to refer to quantities used in digital electronics and computing: bit (symbol 'b'), a binary digit; byte (symbol 'B'), a set of adjacent bits (usually, but not necessarily, eight) operated on as a group; octet (symbol 'o'), a group of eight bits; a set of prefixes to indicate binary multiples of the aforesaid units: kibi (symbol 'Ki'), 2¹⁰ = 1,024; mebi (symbol 'Mi'), 2²⁰ = 1,048,576; gibi (symbol 'Gi'), 2³⁰ = 1,073,741,824; tebi (symbol 'Ti'), 2⁴⁰ = 1,099,511,627,776; pebi (symbol 'Pi'), 2⁵⁰ = 1,125,899,906,842,624; exbi (symbol 'Ei'), 2⁶⁰ = 1,152,921,504,606,846,976; zebi (symbol 'Zi'), 2⁷⁰ = 1,180,591,620,717,411,303,424; yobi (symbol 'Yi'), 2⁸⁰ = 1,208,925,819,614,629,174,706,176; that the first part of the binary prefix is pronounced as the analogous SI prefix, and the second part is pronounced as bee; that SI prefixes are not used to indicate binary multiples. The bi part of the prefix comes from the word binary, so for example, kibibyte means a kilobinary byte, that is 1,024 bytes. Acceptance In 1998, the International Bureau of Weights and Measures (BIPM), one of the organizations that maintain the SI, published a brochure stating, among other things, that SI prefixes strictly refer to powers of ten and should not be used to indicate binary multiples, using as an example that 1 kilobit is 1,000 bits and not 1,024 bits. 
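The practical consequence of the two conventions is easy to make concrete in code. The following minimal sketch assumes nothing beyond the prefix definitions listed above (the nominally "500 GB" drive is purely an example value); it formats the same byte count once with decimal SI prefixes and once with the binary IEC prefixes recommended by the standard.

```python
# Format a byte count with decimal (SI) prefixes and with the binary (IEC)
# prefixes recommended by IEEE 1541.  Only the prefix definitions above are
# assumed; the sample value is purely illustrative.

SI_PREFIXES  = ["B", "kB", "MB", "GB", "TB", "PB", "EB"]        # powers of 1000
IEC_PREFIXES = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]  # powers of 1024

def format_bytes(n, base, prefixes):
    value = float(n)
    for prefix in prefixes:
        if value < base or prefix == prefixes[-1]:
            return f"{value:.2f} {prefix}"
        value /= base

nominal_drive = 500 * 10**9   # a drive marketed as "500 GB"

print(format_bytes(nominal_drive, 1000, SI_PREFIXES))   # 500.00 GB
print(format_bytes(nominal_drive, 1024, IEC_PREFIXES))  # 465.66 GiB
```

The roughly 7% discrepancy at the giga scale is exactly the source of the litigation over drive capacities mentioned earlier, and it grows with each further prefix.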
The binary prefixes have been adopted by the European Committee for Electrotechnical Standardization (CENELEC) as the harmonization document HD 60027-2:2003-03. Adherence to this standard implies that binary prefixes would be used for powers of two and SI prefixes for powers of ten. This document has been adopted as a European standard. The IEC binary prefixes (kibi, mebi, ...) are gaining acceptance in open source software and in scientific literature. Elsewhere adoption has been slow, with some operating systems, most notably Windows, continuing to use SI prefixes (kilo, mega, ...) for binary multiples. Supporters of IEEE 1541 emphasize that the standard solves the confusion of units in the market place. Some software (most notably free and open source) uses the decimal SI prefixes and binary prefixes according to the standard. See also Powers of 1024 Binary prefixes Timeline of binary prefixes TU (time unit), defined as 1024 μs in IEEE 802.11 References External links IEEE 1541-2002 - IEEE Standard for Prefixes for Binary Multiples (original document) SI Brochure: The International System of Units (SI) Electronics standards Prefixes Measurement Naming conventions Units of information IEEE standards
IEEE 1541
[ "Physics", "Mathematics", "Technology" ]
1,072
[ "Physical quantities", "Computer standards", "Quantity", "Measurement", "Size", "Units of information", "IEEE standards", "Units of measurement" ]
2,994,458
https://en.wikipedia.org/wiki/Blade%20pitch
Blade pitch or simply pitch refers to the angle of a blade in a fluid. The term has applications in aeronautics, shipping, and other fields. Aeronautics In aeronautics, blade pitch refers to the angle of the blades of an aircraft propeller or helicopter rotor. Blade pitch is measured relative to the aircraft body. It is usually described as "fine" or "low" for a more vertical blade angle, and "coarse" or "high" for a more horizontal blade angle. Blade pitch is normally described as a ratio of forward distance per rotation assuming no slip. Blade pitch acts much like the gearing of the final drive of a car. Low pitch yields good low speed acceleration (and climb rate in an aircraft) while high pitch optimizes high speed performance and fuel economy. It is quite common for an aircraft to be designed with a variable-pitch propeller, to give maximum thrust over a larger speed range. A fine pitch would be used during take-off and landing, whereas a coarser pitch is used for high-speed cruise flight. This is because the effective angle of attack of the propeller blade decreases as airspeed increases. To maintain the optimum effective angle of attack, the pitch must be increased. Blade pitch angle is not the same as blade angle of attack. As speed increases, blade pitch is increased to keep blade angle of attack constant. A propeller blade's "lift", or its thrust, depends on the angle of attack combined with its speed. Because the velocity of a propeller blade varies from the hub to the tip, it is of twisted form in order for the thrust to remain approximately constant along the length of the blade; this is called "blade twist". This is typical of all but the crudest propellers. Helicopters In helicopters, pitch control changes the angle of incidence of the rotor blades, which in turn affects the blades' angle of attack. Main rotor pitch is controlled by both collective and cyclic, whereas tail rotor pitch is altered using pedals. Feathering Feathering the blades of a propeller means to increase their angle of pitch by turning the blades to be parallel to the airflow. This minimizes drag from a stopped propeller following an engine failure in flight. Reverse thrust Some propeller-driven aircraft permit the pitch to be decreased beyond the fine position until the propeller generates thrust in the reverse direction. This is called thrust reversal, and the propeller position is called the beta position. Wind turbines Blade pitch control is a feature of nearly all large modern horizontal-axis wind turbines. It is used to adjust the rotation speed and the generated power. While operating, a wind turbine's control system adjusts the blade pitch to keep the rotor speed within operating limits as the wind speed changes. Feathering the blades stops the rotor during emergency shutdowns, or whenever the wind speed exceeds the maximum rated speed. During construction and maintenance of wind turbines, the blades are usually feathered to reduce unwanted rotational torque in the event of wind gusts. Blade pitch control is preferred over rotor brakes, as brakes are subject to failure or overload by the wind force on the turbine. This can lead to runaway turbines. By contrast, pitch control allows the blades to be feathered, so that wind speed does not affect the stress on the control mechanism. Pitch control can be implemented via hydraulic or electric mechanisms. Hydraulic mechanisms have longer life, faster response time due to higher driving force, and a lower maintenance backup spring. 
However, hydraulics tend to require more power to keep the system at a high pressure, and can leak. Electric systems consume and waste less power, and do not leak. However, they require costly fail safe batteries and capacitors in the event of power failure. Pitch control does not need to be active (reliant on actuators). Passive (stall-controlled) wind turbines rely on the fact that angle of attack increases with wind speed. Blades can be designed to stop functioning past a certain speed. This is another use for twisted blades: the twist allows for a gradual stall as each portion of the blade has a different angle of attack and will stop at a different time. Blade pitch control typically accounts for less than 3% of a wind turbine's expense while blade pitch malfunctions account for 23% of all wind turbine production downtime, and account for 21% of all component failures. Shipping In shipping, blade pitch is measured in the number of inches of forward propulsion through the water for one complete revolution of the propeller. For example, a propeller with a 12" pitch will propel the vessel 12" ahead when rotated once. Note that this is the theoretical maximum distance; in reality, due to "slip" between the propeller and the water, the actual distance propelled will invariably be less. Some composite propellers have interchangeable blades, which enables the blade pitch to be changed when the propeller is stopped. A lower pitch would be used for transporting heavy loads at low speed, whereas a higher pitch would be used for high-speed travel. Rowing (sport) In rowing, blade pitch is the inclination of the blade towards the stern of the boat during the drive phase of the rowing stroke. Without correct blade pitch, a blade would have a tendency to dive too deep, or pop out of the water and/or cause difficulties with balancing on the recovery phase of the stroke. References External links Aerodynamics
Blade pitch
[ "Chemistry", "Engineering" ]
1,081
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
2,994,661
https://en.wikipedia.org/wiki/Lewis%20number
In fluid dynamics and thermodynamics, the Lewis number (denoted Le) is a dimensionless number defined as the ratio of thermal diffusivity to mass diffusivity. It is used to characterize fluid flows where there is simultaneous heat and mass transfer. The Lewis number puts the thickness of the thermal boundary layer in relation to the concentration boundary layer. The Lewis number is defined as Le = α/D = λ/(ρ D_im c_p), where α is the thermal diffusivity, D is the mass diffusivity, λ is the thermal conductivity, ρ is the density, D_im is the mixture-averaged diffusion coefficient, and c_p is the specific heat capacity at constant pressure. In the field of fluid mechanics, many sources define the Lewis number to be the inverse of the above definition. The Lewis number can also be expressed in terms of the Prandtl number (Pr) and the Schmidt number (Sc): Le = Sc/Pr. It is named after Warren K. Lewis (1882–1975), who was the first head of the Chemical Engineering Department at MIT. Some workers in the field of combustion assume (incorrectly) that the Lewis number was named for Bernard Lewis (1899–1993), who for many years was a major figure in the field of combustion research. References Further reading Fluid dynamics Dimensionless numbers of fluid mechanics Combustion
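As a worked example of the definitions just given, the sketch below computes the Lewis number twice, once directly from the diffusivities and once via the Schmidt and Prandtl numbers, for an assumed set of gas-like fluid properties. The numerical values are illustrative assumptions only, not data for any specific fluid.

```python
# Lewis number computed two ways from assumed, illustrative fluid properties.
# Values are order-of-magnitude gas-phase numbers, not data for a specific fluid.

k    = 0.026    # thermal conductivity, W/(m*K)
rho  = 1.2      # density, kg/m^3
cp   = 1005.0   # specific heat at constant pressure, J/(kg*K)
D_im = 2.0e-5   # mixture-averaged mass diffusivity, m^2/s
mu   = 1.8e-5   # dynamic viscosity, Pa*s

alpha = k / (rho * cp)    # thermal diffusivity, m^2/s
nu = mu / rho             # kinematic viscosity (momentum diffusivity), m^2/s

Le_direct = alpha / D_im  # Le = alpha / D
Sc = nu / D_im            # Schmidt number
Pr = nu / alpha           # Prandtl number
Le_from_Sc_Pr = Sc / Pr   # Le = Sc / Pr

print(f"alpha = {alpha:.3e} m^2/s, nu = {nu:.3e} m^2/s")
print(f"Le (direct)  = {Le_direct:.3f}")
print(f"Le (Sc / Pr) = {Le_from_Sc_Pr:.3f}")
```

The two routes agree, as they must, since Le = Sc/Pr is simply α/D rewritten.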
Lewis number
[ "Chemistry", "Engineering" ]
254
[ "Piping", "Chemical engineering", "Combustion", "Fluid dynamics" ]
2,994,664
https://en.wikipedia.org/wiki/Schmidt%20number
In fluid dynamics, the Schmidt number (denoted Sc) of a fluid is a dimensionless number defined as the ratio of momentum diffusivity (kinematic viscosity) and mass diffusivity, and it is used to characterize fluid flows in which there are simultaneous momentum and mass diffusion convection processes. It was named after the German engineer Ernst Heinrich Wilhelm Schmidt (1892–1975). The Schmidt number is the ratio of the shear component for diffusivity (viscosity divided by density) to the diffusivity for mass transfer D. It physically relates the relative thickness of the hydrodynamic layer and the mass-transfer boundary layer. It is defined as Sc = ν/D = μ/(ρD) = Pe/Re, where (in SI units): ν is the kinematic viscosity (m²/s), D is the mass diffusivity (m²/s), μ is the dynamic viscosity of the fluid (Pa·s = N·s/m² = kg/(m·s)), ρ is the density of the fluid (kg/m³), Pe is the Péclet number (for mass transfer), and Re is the Reynolds number. The heat transfer analog of the Schmidt number is the Prandtl number (Pr). The ratio of thermal diffusivity to mass diffusivity is the Lewis number (Le). Turbulent Schmidt Number The turbulent Schmidt number is commonly used in turbulence research and is defined as Sct = νt/Dt, where νt is the eddy viscosity (m²/s) and Dt is the eddy diffusivity (m²/s). The turbulent Schmidt number describes the ratio between the rates of turbulent transport of momentum and the turbulent transport of mass (or any passive scalar). It is related to the turbulent Prandtl number, which is concerned with turbulent heat transfer rather than turbulent mass transfer. It is useful for solving the mass transfer problem of turbulent boundary layer flows. The simplest model for Sct is the Reynolds analogy, which yields a turbulent Schmidt number of 1. From experimental data and CFD simulations, Sct ranges from 0.2 to 6. Stirling engines For Stirling engines, the Schmidt number is related to the specific power. Gustav Schmidt of the German Polytechnic Institute of Prague published an analysis in 1871 for the now-famous closed-form solution for an idealized isothermal Stirling engine model, in which the Schmidt number relates the heat transferred into the working fluid to the mean pressure of the working fluid and the volume swept by the piston. References Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics
Schmidt number
[ "Physics", "Chemistry", "Engineering" ]
509
[ "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Chemical engineering", "Piping", "Fluid dynamics" ]
7,146,399
https://en.wikipedia.org/wiki/Rietdijk%E2%80%93Putnam%20argument
In philosophy, the Rietdijk–Putnam argument, named after C. W. Rietdijk and Hilary Putnam, uses 20th-century findings in physics, specifically in special relativity, to support the philosophical position known as four-dimensionalism. If special relativity is true, then each observer will have their own plane of simultaneity, which contains a unique set of events that constitutes the observer's present moment. Observers moving at different relative velocities have different planes of simultaneity, and hence different sets of events that are present. Each observer considers their set of present events to be a three-dimensional universe, but even the slightest movement of the head or offset in distance between observers can cause the three-dimensional universes to have differing content. If each three-dimensional universe exists, then the existence of multiple three-dimensional universes suggests that the universe is four-dimensional. The argument is named after the discussions by Rietdijk (1966) and Putnam (1967). It is sometimes called the Rietdijk–Putnam–Penrose argument. Andromeda paradox Roger Penrose advanced a form of this argument that has been called the Andromeda paradox, in which he points out that two people walking past each other on the street could have very different present moments. If one of the people were walking towards the Andromeda Galaxy, then events in this galaxy might be hours or even days in advance of the events in Andromeda for the person walking in the other direction. If this occurs, it would have dramatic effects on our understanding of time. Penrose highlighted the consequences by discussing a potential invasion of Earth by aliens living in the Andromeda Galaxy. As Penrose put it: The "paradox" consists of two observers who are, from their conscious perspective, in the same place and at the same instant having different sets of events in their "present moment". Notice that neither observer can actually "see" what is happening in Andromeda, because light from Andromeda (and the hypothetical alien fleet) will take 2.5 million years to reach Earth. The argument is not about what can be "seen"; it is purely about what events different observers consider to occur in the present moment. Criticisms The interpretations of relativity used in the Rietdijk–Putnam argument and the Andromeda paradox are not universally accepted. Howard Stein and Steven F. Savitt note that in relativity the present is a local concept that cannot be extended to global hyperplanes. Furthermore, N. David Mermin has made a similar point, stressing that the "present moment" cannot be applied to very distant events with any accuracy. References Further reading Vesselin Petkov (2005) "Is There an Alternative to the Block Universe View?" in Dennis Dieks (ed.), The Ontology of Spacetime, Elsevier, Amsterdam, 2006; "Philosophy and Foundations of Physics" Series, pp. 207–228 Wikibook:The relativity of simultaneity and the Andromeda paradox "Being and Becoming in Modern Physics", Stanford Encyclopedia of Philosophy Relativistic paradoxes Special relativity
Rietdijk–Putnam argument
[ "Physics" ]
631
[ "Special relativity", "Theory of relativity" ]
7,147,157
https://en.wikipedia.org/wiki/Absolutely%20integrable%20function
In mathematics, an absolutely integrable function is a function whose absolute value is integrable, meaning that the integral of the absolute value over the whole domain is finite. For a real-valued function f, since |f| = f⁺ + f⁻, where f⁺ = max(f, 0) and f⁻ = max(−f, 0), absolute integrability requires that both ∫f⁺ and ∫f⁻ be finite. In Lebesgue integration, this is exactly the requirement for any measurable function f to be considered integrable, with the integral then equaling ∫f⁺ − ∫f⁻, so that in fact "absolutely integrable" means the same thing as "Lebesgue integrable" for measurable functions. The same thing goes for a complex-valued function. In that case, define f₁⁺ = max(Re f, 0), f₁⁻ = max(−Re f, 0), f₂⁺ = max(Im f, 0), and f₂⁻ = max(−Im f, 0), where Re f and Im f are the real and imaginary parts of f. Then |f| ≤ f₁⁺ + f₁⁻ + f₂⁺ + f₂⁻ ≤ √2 |f|, so ∫|f| ≤ ∫f₁⁺ + ∫f₁⁻ + ∫f₂⁺ + ∫f₂⁻ ≤ √2 ∫|f|. This shows that the sum of the four integrals (in the middle) is finite if and only if the integral of the absolute value is finite, and the function is Lebesgue integrable only if all four integrals are finite. So having a finite integral of the absolute value is equivalent to the conditions for the function to be "Lebesgue integrable". External links Integral calculus References Tao, Terence, Analysis 2, 3rd ed., Texts and Readings in Mathematics, Hindustan Book Agency, New Delhi.
Absolutely integrable function
[ "Mathematics" ]
245
[ "Integral calculus", "Calculus" ]
7,148,738
https://en.wikipedia.org/wiki/Molecular%20Hamiltonian
In atomic, molecular, and optical physics and quantum chemistry, the molecular Hamiltonian is the Hamiltonian operator representing the energy of the electrons and nuclei in a molecule. This operator and the associated Schrödinger equation play a central role in computational chemistry and physics for computing properties of molecules and aggregates of molecules, such as thermal conductivity, specific heat, electrical conductivity, optical, and magnetic properties, and reactivity. The elementary parts of a molecule are the nuclei, characterized by their atomic numbers, Z, and the electrons, which have negative elementary charge, −e. Their interaction gives a nuclear charge of Z + q, where , with N equal to the number of electrons. Electrons and nuclei are, to a very good approximation, point charges and point masses. The molecular Hamiltonian is a sum of several terms: its major terms are the kinetic energies of the electrons and the Coulomb (electrostatic) interactions between the two kinds of charged particles. The Hamiltonian that contains only the kinetic energies of electrons and nuclei, and the Coulomb interactions between them, is known as the Coulomb Hamiltonian. From it are missing a number of small terms, most of which are due to electronic and nuclear spin. Although it is generally assumed that the solution of the time-independent Schrödinger equation associated with the Coulomb Hamiltonian will predict most properties of the molecule, including its shape (three-dimensional structure), calculations based on the full Coulomb Hamiltonian are very rare. The main reason is that its Schrödinger equation is very difficult to solve. Applications are restricted to small systems like the hydrogen molecule. Almost all calculations of molecular wavefunctions are based on the separation of the Coulomb Hamiltonian first devised by Born and Oppenheimer. The nuclear kinetic energy terms are omitted from the Coulomb Hamiltonian and one considers the remaining Hamiltonian as a Hamiltonian of electrons only. The stationary nuclei enter the problem only as generators of an electric potential in which the electrons move in a quantum mechanical way. Within this framework the molecular Hamiltonian has been simplified to the so-called clamped nucleus Hamiltonian, also called electronic Hamiltonian, that acts only on functions of the electronic coordinates. Once the Schrödinger equation of the clamped nucleus Hamiltonian has been solved for a sufficient number of constellations of the nuclei, an appropriate eigenvalue (usually the lowest) can be seen as a function of the nuclear coordinates, which leads to a potential energy surface. In practical calculations the surface is usually fitted in terms of some analytic functions. In the second step of the Born–Oppenheimer approximation the part of the full Coulomb Hamiltonian that depends on the electrons is replaced by the potential energy surface. This converts the total molecular Hamiltonian into another Hamiltonian that acts only on the nuclear coordinates. In the case of a breakdown of the Born–Oppenheimer approximation—which occurs when energies of different electronic states are close—the neighboring potential energy surfaces are needed, see this article for more details on this. The nuclear motion Schrödinger equation can be solved in a space-fixed (laboratory) frame, but then the translational and rotational (external) energies are not accounted for. Only the (internal) atomic vibrations enter the problem. 
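Before turning to the harmonic treatment of these vibrations, it is worth making the earlier potential-energy-surface step concrete. The sketch below illustrates only the fitting stage: the "clamped-nucleus electronic energies" are mocked with a Morse-type curve (all parameter values are assumed, loosely hydrogen-like, purely for illustration), since an actual electronic-structure calculation is beyond the scope of this example; a quadratic fit near the minimum then yields the force constant and harmonic vibrational frequency.

```python
import numpy as np

# Illustration of the workflow described above: clamped-nucleus electronic
# energies, computed on a grid of nuclear geometries, are fitted with an
# analytic function, here simply a quadratic around the minimum.  The
# "electronic energies" are mocked with a Morse-type curve whose parameters
# are assumed, loosely hydrogen-like values; a real application would obtain
# them from an electronic-structure calculation.

D_E = 4.75    # assumed well depth, eV
A   = 1.9     # assumed Morse range parameter, 1/angstrom
R_E = 0.74    # assumed equilibrium bond length, angstrom
MU  = 0.504   # assumed reduced mass, amu (roughly that of H2)

def mock_electronic_energy(r):
    """Stand-in for a clamped-nucleus eigenvalue at internuclear distance r (angstrom)."""
    return D_E * (1.0 - np.exp(-A * (r - R_E))) ** 2

# "Solve" the electronic problem at a set of nuclear geometries near the minimum.
r_grid = np.linspace(0.70, 0.78, 17)      # angstrom
e_grid = mock_electronic_energy(r_grid)   # eV

# Fit the sampled potential energy curve with a quadratic, V ~ (1/2) k (r - r_e)^2.
quad, lin, const = np.polyfit(r_grid - R_E, e_grid, 2)
k_si = 2.0 * quad * 1.602176634e-19 / 1.0e-20   # eV/angstrom^2 -> N/m

# Harmonic vibrational wavenumber from the fitted force constant.
mu_kg = MU * 1.66053906660e-27
omega = np.sqrt(k_si / mu_kg)                         # rad/s
wavenumber = omega / (2.0 * np.pi * 2.99792458e10)    # cm^-1

print(f"fitted force constant : {k_si:.0f} N/m")
print(f"harmonic wavenumber   : {wavenumber:.0f} cm^-1")
```

The force constant obtained from such a fit is exactly the ingredient that the harmonic treatment of the nuclear motion, discussed next, takes as its input.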
Further, for molecules larger than triatomic ones, it is quite common to introduce the harmonic approximation, which approximates the potential energy surface as a quadratic function of the atomic displacements. This gives the harmonic nuclear motion Hamiltonian. Making the harmonic approximation, we can convert the Hamiltonian into a sum of uncoupled one-dimensional harmonic oscillator Hamiltonians. The one-dimensional harmonic oscillator is one of the few systems that allows an exact solution of the Schrödinger equation. Alternatively, the nuclear motion (rovibrational) Schrödinger equation can be solved in a special frame (an Eckart frame) that rotates and translates with the molecule. Formulated with respect to this body-fixed frame the Hamiltonian accounts for rotation, translation and vibration of the nuclei. Since Watson introduced in 1968 an important simplification to this Hamiltonian, it is often referred to as Watson's nuclear motion Hamiltonian, but it is also known as the Eckart Hamiltonian. Coulomb Hamiltonian The algebraic form of many observables—i.e., Hermitian operators representing observable quantities—is obtained by the following quantization rules: Write the classical form of the observable in Hamilton form (as a function of momenta p and positions q). Both vectors are expressed with respect to an arbitrary inertial frame, usually referred to as laboratory-frame or space-fixed frame. Replace p by and interpret q as a multiplicative operator. Here is the nabla operator, a vector operator consisting of first derivatives. The well-known commutation relations for the p and q operators follow directly from the differentiation rules. Classically the electrons and nuclei in a molecule have kinetic energy of the form p2/(2 m) and interact via Coulomb interactions, which are inversely proportional to the distance rij between particle i and j. In this expression ri stands for the coordinate vector of any particle (electron or nucleus), but from here on we will reserve capital R to represent the nuclear coordinate, and lower case r for the electrons of the system. The coordinates can be taken to be expressed with respect to any Cartesian frame centered anywhere in space, because distance, being an inner product, is invariant under rotation of the frame and, being the norm of a difference vector, distance is invariant under translation of the frame as well. By quantizing the classical energy in Hamilton form one obtains the a molecular Hamilton operator that is often referred to as the Coulomb Hamiltonian. This Hamiltonian is a sum of five terms. They are The kinetic energy operators for each nucleus in the system; The kinetic energy operators for each electron in the system; The potential energy between the electrons and nuclei – the total electron-nucleus Coulombic attraction in the system; The potential energy arising from Coulombic electron-electron repulsions The potential energy arising from Coulombic nuclei-nuclei repulsions – also known as the nuclear repulsion energy. See electric potential for more details. Here Mi is the mass of nucleus i, Zi is the atomic number of nucleus i, and me is the mass of the electron. The Laplace operator of particle i is:. Since the kinetic energy operator is an inner product, it is invariant under rotation of the Cartesian frame with respect to which xi, yi, and zi are expressed. Small terms In the 1920s much spectroscopic evidence made it clear that the Coulomb Hamiltonian is missing certain terms. 
Especially for molecules containing heavier atoms, these terms, although much smaller than kinetic and Coulomb energies, are nonnegligible. These spectroscopic observations led to the introduction of a new degree of freedom for electrons and nuclei, namely spin. This empirical concept was given a theoretical basis by Paul Dirac when he introduced a relativistically correct (Lorentz covariant) form of the one-particle Schrödinger equation. The Dirac equation predicts that spin and spatial motion of a particle interact via spin–orbit coupling. In analogy spin-other-orbit coupling was introduced. The fact that particle spin has some of the characteristics of a magnetic dipole led to spin–spin coupling. Further terms without a classical counterpart are the Fermi-contact term (interaction of electronic density on a finite size nucleus with the nucleus), and nuclear quadrupole coupling (interaction of a nuclear quadrupole with the gradient of an electric field due to the electrons). Finally a parity violating term predicted by the Standard Model must be mentioned. Although it is an extremely small interaction, it has attracted a fair amount of attention in the scientific literature because it gives different energies for the enantiomers in chiral molecules. The remaining part of this article will ignore spin terms and consider the solution of the eigenvalue (time-independent Schrödinger) equation of the Coulomb Hamiltonian. The Schrödinger equation of the Coulomb Hamiltonian The Coulomb Hamiltonian has a continuous spectrum due to the center of mass (COM) motion of the molecule in homogeneous space. In classical mechanics it is easy to separate off the COM motion of a system of point masses. Classically the motion of the COM is uncoupled from the other motions. The COM moves uniformly (i.e., with constant velocity) through space as if it were a point particle with mass equal to the sum Mtot of the masses of all the particles. In quantum mechanics a free particle has as state function a plane wave function, which is a non-square-integrable function of well-defined momentum. The kinetic energy of this particle can take any positive value. The position of the COM is uniformly probable everywhere, in agreement with the Heisenberg uncertainty principle. By introducing the coordinate vector X of the center of mass as three of the degrees of freedom of the system and eliminating the coordinate vector of one (arbitrary) particle, so that the number of degrees of freedom stays the same, one obtains by a linear transformation a new set of coordinates ti. These coordinates are linear combinations of the old coordinates of all particles (nuclei and electrons). By applying the chain rule one can show that The first term of is the kinetic energy of the COM motion, which can be treated separately since does not depend on X. As just stated, its eigenstates are plane waves. The potential V(t) consists of the Coulomb terms expressed in the new coordinates. The first term of has the usual appearance of a kinetic energy operator. The second term is known as the mass polarization term. The translationally invariant Hamiltonian can be shown to be self-adjoint and to be bounded from below. That is, its lowest eigenvalue is real and finite. Although is necessarily invariant under permutations of identical particles (since and the COM kinetic energy are invariant), its invariance is not manifest. 
Not many actual molecular applications of exist; see, however, the seminal work on the hydrogen molecule for an early application. In the great majority of computations of molecular wavefunctions the electronic problem is solved with the clamped nucleus Hamiltonian arising in the first step of the Born–Oppenheimer approximation. See Ref. for a thorough discussion of the mathematical properties of the Coulomb Hamiltonian. Also it is discussed in this paper whether one can arrive a priori at the concept of a molecule (as a stable system of electrons and nuclei with a well-defined geometry) from the properties of the Coulomb Hamiltonian alone. Clamped nucleus Hamiltonian The clamped nucleus Hamiltonian, which is also often called the electronic Hamiltonian, describes the energy of the electrons in the electrostatic field of the nuclei, where the nuclei are assumed to be stationary with respect to an inertial frame. The form of the electronic Hamiltonian is The coordinates of electrons and nuclei are expressed with respect to a frame that moves with the nuclei, so that the nuclei are at rest with respect to this frame. The frame stays parallel to a space-fixed frame. It is an inertial frame because the nuclei are assumed not to be accelerated by external forces or torques. The origin of the frame is arbitrary, it is usually positioned on a central nucleus or in the nuclear center of mass. Sometimes it is stated that the nuclei are "at rest in a space-fixed frame". This statement implies that the nuclei are viewed as classical particles, because a quantum mechanical particle cannot be at rest. (It would mean that it had simultaneously zero momentum and well-defined position, which contradicts Heisenberg's uncertainty principle). Since the nuclear positions are constants, the electronic kinetic energy operator is invariant under translation over any nuclear vector. The Coulomb potential, depending on difference vectors, is invariant as well. In the description of atomic orbitals and the computation of integrals over atomic orbitals this invariance is used by equipping all atoms in the molecule with their own localized frames parallel to the space-fixed frame. As explained in the article on the Born–Oppenheimer approximation, a sufficient number of solutions of the Schrödinger equation of leads to a potential energy surface (PES) . It is assumed that the functional dependence of V on its coordinates is such that for where t and s are arbitrary vectors and Δφ is an infinitesimal angle, Δφ >> Δφ2. This invariance condition on the PES is automatically fulfilled when the PES is expressed in terms of differences of, and angles between, the Ri, which is usually the case. Harmonic nuclear motion Hamiltonian In the remaining part of this article we assume that the molecule is semi-rigid. In the second step of the BO approximation the nuclear kinetic energy Tn is reintroduced and the Schrödinger equation with Hamiltonian is considered. One would like to recognize in its solution: the motion of the nuclear center of mass (3 degrees of freedom), the overall rotation of the molecule (3 degrees of freedom), and the nuclear vibrations. In general, this is not possible with the given nuclear kinetic energy, because it does not separate explicitly the 6 external degrees of freedom (overall translation and rotation) from the 3N − 6 internal degrees of freedom. In fact, the kinetic energy operator here is defined with respect to a space-fixed (SF) frame. 
If we were to move the origin of the SF frame to the nuclear center of mass, then, by application of the chain rule, nuclear mass polarization terms would appear. It is customary to ignore these terms altogether and we will follow this custom. In order to achieve a separation we must distinguish internal and external coordinates, to which end Eckart introduced conditions to be satisfied by the coordinates. We will show how these conditions arise in a natural way from a harmonic analysis in mass-weighted Cartesian coordinates. In order to simplify the expression for the kinetic energy we introduce mass-weighted displacement coordinates . Since the kinetic energy operator becomes, If we make a Taylor expansion of V around the equilibrium geometry, and truncate after three terms (the so-called harmonic approximation), we can describe V with only the third term. The term V0 can be absorbed in the energy (gives a new zero of energy). The second term is vanishing because of the equilibrium condition. The remaining term contains the Hessian matrix F of V, which is symmetric and may be diagonalized with an orthogonal 3N × 3N matrix with constant elements: It can be shown from the invariance of V under rotation and translation that six of the eigenvectors of F (last six rows of Q) have eigenvalue zero (are zero-frequency modes). They span the external space. The first rows of Q are—for molecules in their ground state—eigenvectors with non-zero eigenvalue; they are the internal coordinates and form an orthonormal basis for a (3N - 6)-dimensional subspace of the nuclear configuration space R3N, the internal space. The zero-frequency eigenvectors are orthogonal to the eigenvectors of non-zero frequency. It can be shown that these orthogonalities are in fact the Eckart conditions. The kinetic energy expressed in the internal coordinates is the internal (vibrational) kinetic energy. With the introduction of normal coordinates the vibrational (internal) part of the Hamiltonian for the nuclear motion becomes in the harmonic approximation The corresponding Schrödinger equation is easily solved, it factorizes into 3N − 6 equations for one-dimensional harmonic oscillators. The main effort in this approximate solution of the nuclear motion Schrödinger equation is the computation of the Hessian F of V and its diagonalization. This approximation to the nuclear motion problem, described in 3N mass-weighted Cartesian coordinates, became standard in quantum chemistry, since the days (1980s-1990s) that algorithms for accurate computations of the Hessian F became available. Apart from the harmonic approximation, it has as a further deficiency that the external (rotational and translational) motions of the molecule are not accounted for. They are accounted for in a rovibrational Hamiltonian that sometimes is called Watson's Hamiltonian. Watson's nuclear motion Hamiltonian In order to obtain a Hamiltonian for external (translation and rotation) motions coupled to the internal (vibrational) motions, it is common to return at this point to classical mechanics and to formulate the classical kinetic energy corresponding to these motions of the nuclei. Classically it is easy to separate the translational—center of mass—motion from the other motions. However, the separation of the rotational from the vibrational motion is more difficult and is not completely possible. This ro-vibrational separation was first achieved by Eckart in 1935 by imposing by what is now known as Eckart conditions. 
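Before turning to the Watson Hamiltonian, the harmonic analysis described above can be illustrated with a short numerical sketch. The snippet below is only schematic: it assumes a Cartesian Hessian of the potential at the equilibrium geometry is already available (for example from an electronic structure calculation), and the function name and array shapes are illustrative choices, not part of any standard package.

```python
import numpy as np

def normal_modes(hessian, masses):
    """Sketch of the harmonic analysis: diagonalize the mass-weighted Hessian.

    hessian: (3N, 3N) array of second derivatives of V in Cartesian coordinates.
    masses:  length-N array of nuclear masses, one per atom.
    Returns the harmonic angular frequencies and the orthogonal matrix Q of
    mass-weighted eigenvectors (rows/columns spanning internal + external space).
    """
    m = np.repeat(masses, 3)                      # one mass per Cartesian component
    inv_sqrt_m = 1.0 / np.sqrt(m)
    # Mass-weighted Hessian: F_ij = H_ij / sqrt(m_i * m_j)
    F = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)
    eigvals, Q = np.linalg.eigh(F)                # F is symmetric, so eigh applies
    # Six (near-)zero eigenvalues span the external (translation/rotation) space,
    # consistent with the Eckart conditions; the remaining 3N - 6 are vibrations.
    freqs = np.sqrt(np.abs(eigvals))
    return freqs, Q
```

The computationally expensive steps in practice are obtaining the Hessian itself and, for large molecules, its diagonalization; the bookkeeping above is the easy part.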
Since the problem is described in a frame (an "Eckart" frame) that rotates with the molecule, and hence is a non-inertial frame, energies associated with the fictitious forces: centrifugal and Coriolis force appear in the kinetic energy. In general, the classical kinetic energy T defines the metric tensor g = (gij) associated with the curvilinear coordinates s = (si) through The quantization step is the transformation of this classical kinetic energy into a quantum mechanical operator. It is common to follow Podolsky by writing down the Laplace–Beltrami operator in the same (generalized, curvilinear) coordinates s as used for the classical form. The equation for this operator requires the inverse of the metric tensor g and its determinant. Multiplication of the Laplace–Beltrami operator by gives the required quantum mechanical kinetic energy operator. When we apply this recipe to Cartesian coordinates, which have unit metric, the same kinetic energy is obtained as by application of the quantization rules. The nuclear motion Hamiltonian was obtained by Wilson and Howard in 1936, who followed this procedure, and further refined by Darling and Dennison in 1940. It remained the standard until 1968, when Watson was able to simplify it drastically by commuting through the derivatives the determinant of the metric tensor. We will give the ro-vibrational Hamiltonian obtained by Watson, which often is referred to as the Watson Hamiltonian. Before we do this we must mention that a derivation of this Hamiltonian is also possible by starting from the Laplace operator in Cartesian form, application of coordinate transformations, and use of the chain rule. The Watson Hamiltonian, describing all motions of the N nuclei, is The first term is the center of mass term The second term is the rotational term akin to the kinetic energy of the rigid rotor. Here is the α component of the body-fixed rigid rotor angular momentum operator, see this article for its expression in terms of Euler angles. The operator is a component of an operator known as the vibrational angular momentum operator (although it does not satisfy angular momentum commutation relations), with the Coriolis coupling constant: Here is the Levi-Civita symbol. The terms quadratic in the are centrifugal terms, those bilinear in and are Coriolis terms. The quantities Q s, iγ are the components of the normal coordinates introduced above. Alternatively, normal coordinates may be obtained by application of Wilson's GF method. The 3 × 3 symmetric matrix is called the effective reciprocal inertia tensor. If all q s were zero (rigid molecule) the Eckart frame would coincide with a principal axes frame (see rigid rotor) and would be diagonal, with the equilibrium reciprocal moments of inertia on the diagonal. If all q s would be zero, only the kinetic energies of translation and rigid rotation would survive. The potential-like term U is the Watson term: proportional to the trace of the effective reciprocal inertia tensor. The fourth term in the Watson Hamiltonian is the kinetic energy associated with the vibrations of the atoms (nuclei) expressed in normal coordinates qs, which as stated above, are given in terms of nuclear displacements ρiα by Finally V is the unexpanded potential energy by definition depending on internal coordinates only. 
In the harmonic approximation it takes the form See also Quantum chemistry computer programs Adiabatic process (quantum mechanics) Franck–Condon principle Born–Oppenheimer approximation GF method Eckart conditions Rigid rotor References Further reading A readable and thorough discussion on the spin terms in the molecular Hamiltonian is in: Molecular physics Quantum chemistry Spectroscopy
Molecular Hamiltonian
[ "Physics", "Chemistry" ]
4,292
[ "Quantum chemistry", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Quantum mechanics", "Theoretical chemistry", " molecular", "nan", "Atomic", "Spectroscopy", " and optical physics" ]
7,149,012
https://en.wikipedia.org/wiki/Factorization%20system
In mathematics, it can be shown that every function can be written as the composite of a surjective function followed by an injective function. Factorization systems are a generalization of this situation in category theory. Definition A factorization system (E, M) for a category C consists of two classes of morphisms E and M of C such that: E and M both contain all isomorphisms of C and are closed under composition. Every morphism f of C can be factored as for some morphisms and . The factorization is functorial: if and are two morphisms such that for some morphisms and , then there exists a unique morphism making the following diagram commute: Remark: is a morphism from to in the arrow category. Orthogonality Two morphisms and are said to be orthogonal, denoted , if for every pair of morphisms and such that there is a unique morphism such that the diagram commutes. This notion can be extended to define the orthogonals of sets of morphisms by and Since in a factorization system contains all the isomorphisms, the condition (3) of the definition is equivalent to (3') and Proof: In the previous diagram (3), take (identity on the appropriate object) and . Equivalent definition The pair of classes of morphisms of C is a factorization system if and only if it satisfies the following conditions: Every morphism f of C can be factored as with and and Weak factorization systems Suppose e and m are two morphisms in a category C. Then e has the left lifting property with respect to m (respectively m has the right lifting property with respect to e) when for every pair of morphisms u and v such that ve = mu there is a morphism w such that the following diagram commutes. The difference with orthogonality is that w is not necessarily unique. A weak factorization system (E, M) for a category C consists of two classes of morphisms E and M of C such that: The class E is exactly the class of morphisms having the left lifting property with respect to each morphism in M. The class M is exactly the class of morphisms having the right lifting property with respect to each morphism in E. Every morphism f of C can be factored as for some morphisms and . This notion leads to a succinct definition of model categories: a model category is a pair consisting of a category C and classes of (so-called) weak equivalences W, fibrations F and cofibrations C so that C has all limits and colimits, is a weak factorization system, is a weak factorization system, and satisfies the two-out-of-three property: if and are composable morphisms and two of are in , then so is the third. A model category is a complete and cocomplete category equipped with a model structure. A map is called a trivial fibration if it belongs to and it is called a trivial cofibration if it belongs to An object is called fibrant if the morphism to the terminal object is a fibration, and it is called cofibrant if the morphism from the initial object is a cofibration. References External links Category theory
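As a concrete illustration of the motivating example above (every function between sets factors as a surjection followed by an injection), the following Python sketch computes the image factorization of a function between finite sets. The helper name and the representation of maps as dictionaries are illustrative choices, not a standard API.

```python
# Image factorization in Set (finite case): f = m o e with e surjective onto the
# image of f and m the injective inclusion of the image into the codomain.
def factor_through_image(f, domain):
    image = sorted({f(x) for x in domain})   # image of f
    e = {x: f(x) for x in domain}            # surjection: domain -> image
    m = {y: y for y in image}                # injection: image -> codomain (inclusion)
    return e, image, m

# Example: f(x) = x % 3 as a map from {0, ..., 9} into {0, ..., 9}
domain = range(10)
e, image, m = factor_through_image(lambda x: x % 3, domain)
assert all(m[e[x]] == x % 3 for x in domain)   # checks f = m o e
```

In the category of sets, the classes E (surjections) and M (injections) satisfy the closure, factorization, and functoriality conditions listed above, which is the prototype this definition generalizes.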
Factorization system
[ "Mathematics" ]
708
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
7,149,215
https://en.wikipedia.org/wiki/Sun%20photometer
A Sun photometer is a type of photometer conceived in such a way that it points at the Sun. Recent sun photometers are automated instruments incorporating a Sun-tracking unit, an appropriate optical system, a spectrally filtering device, a photodetector, and a data acquisition system. The measured quantity is called direct-sun radiance. When a Sun-photometer is placed somewhere within the Earth's atmosphere, the measured radiance is not equal to the radiance emitted by the Sun (i.e. the solar extraterrestrial radiance), because the solar flux is reduced by atmospheric absorption and scattering. Therefore, the measured radiant flux is due to a combination of what is emitted by the Sun and the effect of the atmosphere; the link between these quantities is given by Beer's law. The atmospheric effect can be removed with Langley extrapolation; this method therefore allows measuring the solar extraterrestrial radiance with ground-based measurements. Once the extraterrestrial radiance is known, one can use the Sun photometer for studying the atmosphere, and in particular for determining the atmospheric optical depth. Also, if the signal at two or more suitably selected spectral intervals is measured, one can use the information derived for calculating the vertically integrated concentration of selected atmospheric gases, such as water vapour, ozone, etc. See also Dobson ozone spectrophotometer AERONET References Glenn E. Shaw, "Sun photometry", Bulletin of the American Meteorological Society 64, 4-10, 1983. Optical devices Electromagnetic radiation meters
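The Langley extrapolation mentioned above can be sketched in a few lines: under Beer's law the measured signal V satisfies V = V0·exp(−m·τ), so ln V is linear in the relative airmass m, with slope −τ and intercept ln V0. The following Python example fits that line; the airmass and signal values are invented for illustration and do not come from any real instrument.

```python
import numpy as np

# Langley extrapolation: regress ln(V) against relative airmass m.
# slope = -tau (total optical depth), intercept = ln(V0) (extraterrestrial signal).
airmass = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0])       # relative air masses
signal  = np.array([0.88, 0.84, 0.78, 0.72, 0.67, 0.58])  # detector readings (a.u.)

slope, intercept = np.polyfit(airmass, np.log(signal), 1)
tau = -slope                 # total optical depth at this wavelength
V0 = np.exp(intercept)       # extrapolated signal at zero airmass
print(f"optical depth tau = {tau:.3f}, extraterrestrial signal V0 = {V0:.3f}")
```

In practice the regression is done only with data taken under stable, cloud-free conditions, since Beer's law assumes the atmosphere does not change during the series of measurements.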
Sun photometer
[ "Physics", "Materials_science", "Technology", "Engineering" ]
325
[ "Glass engineering and science", "Optical devices", "Spectrum (physical sciences)", "Electromagnetic radiation meters", "Electromagnetic spectrum", "Measuring instruments" ]
7,149,681
https://en.wikipedia.org/wiki/Generalized%20dihedral%20group
In mathematics, the generalized dihedral groups are a family of groups with algebraic structures similar to that of the dihedral groups. They include the finite dihedral groups, the infinite dihedral group, and the orthogonal group O(2). Dihedral groups play an important role in group theory, geometry, and chemistry. Definition For any abelian group H, the generalized dihedral group of H, written Dih(H), is the semidirect product of H and Z2, with Z2 acting on H by inverting elements. I.e., with φ(0) the identity and φ(1) inversion. Thus we get: (h1, 0) * (h2, t2) = (h1 + h2, t2) (h1, 1) * (h2, t2) = (h1 − h2, 1 + t2) for all h1, h2 in H and t2 in Z2. (Writing Z2 multiplicatively, we have (h1, t1) * (h2, t2) = (h1 + t1h2, t1t2) .) Note that (h, 0) * (0,1) = (h,1), i.e. first the inversion and then the operation in H. Also (0, 1) * (h, t) = (−h, 1 + t); indeed (0,1) inverts h, and toggles t between "normal" (0) and "inverted" (1) (this combined operation is its own inverse). The subgroup of Dih(H) of elements (h, 0) is a normal subgroup of index 2, isomorphic to H, while the elements (h, 1) are all their own inverse. The conjugacy classes are: the sets {(h,0 ), (−h,0 )} the sets {(h + k + k, 1) | k in H } Thus for every subgroup M of H, the corresponding set of elements (m,0) is also a normal subgroup. We have: Dih(H) / M = Dih ( H / M ) Examples Dihn = Dih(Zn) (the dihedral groups) For even n there are two sets {(h + k + k, 1) | k in H }, and each generates a normal subgroup of type Dihn / 2. As subgroups of the isometry group of the set of vertices of a regular n-gon they are different: the reflections in one subgroup all have two fixed points, while none in the other subgroup has (the rotations of both are the same). However, they are isomorphic as abstract groups. For odd n there is only one set {(h + k + k, 1) | k in H } Dih∞ = Dih(Z) (the infinite dihedral group); there are two sets {(h + k + k, 1) | k in H }, and each generates a normal subgroup of type Dih∞. As subgroups of the isometry group of Z they are different: the reflections in one subgroup all have a fixed point, the mirrors are at the integers, while none in the other subgroup has, the mirrors are in between (the translations of both are the same: by even numbers). However, they are isomorphic as abstract groups. Dih(S1), or orthogonal group O(2,R), or O(2): the isometry group of a circle, or equivalently, the group of isometries in 2D that keep the origin fixed. The rotations form the circle group S1, or equivalently SO(2,R), also written SO(2), and R/Z ; it is also the multiplicative group of complex numbers of absolute value 1. In the latter case one of the reflections (generating the others) is complex conjugation. There are no proper normal subgroups with reflections. The discrete normal subgroups are cyclic groups of order n for all positive integers n. The quotient groups are isomorphic with the same group Dih(S1). Dih(Rn ): the group of isometries of Rn consisting of all translations and inversion in all points; for n = 1 this is the Euclidean group E(1); for n > 1 the group Dih(Rn ) is a proper subgroup of E(n ), i.e. it does not contain all isometries. H can be any subgroup of Rn, e.g. a discrete subgroup; in that case, if it extends in n directions it is a lattice. Discrete subgroups of Dih(R2 ) which contain translations in one direction are of frieze group type and 22. Discrete subgroups of Dih(R2 ) which contain translations in two directions are of wallpaper group type p1 and p2. 
Discrete subgroups of Dih(R3 ) which contain translations in three directions are space groups of the triclinic crystal system. Properties Dih(H) is Abelian, with the semidirect product a direct product, if and only if all elements of H are their own inverse, i.e., an elementary abelian 2-group: Dih(Z1) = Dih1 = Z2 Dih(Z2) = Dih2 = Z2 × Z2 (Klein four-group) Dih(Dih2) = Dih2 × Z2 = Z2 × Z2 × Z2 etc. Topology Dih(Rn ) and its dihedral subgroups are disconnected topological groups. Dih(Rn ) consists of two connected components: the identity component isomorphic to Rn, and the component with the reflections. Similarly O(2) consists of two connected components: the identity component isomorphic to the circle group, and the component with the reflections. For the group Dih∞ we can distinguish two cases: Dih∞ as the isometry group of Z Dih∞ as a 2-dimensional isometry group generated by a rotation by an irrational number of turns, and a reflection Both topological groups are totally disconnected, but in the first case the (singleton) components are open, while in the second case they are not. Also, the first topological group is a closed subgroup of Dih(R) but the second is not a closed subgroup of O(2). References Group theory
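The defining multiplication rule above can be sketched directly in code. The example below assumes H = Zn written additively, so that Dih(Zn) is the ordinary dihedral group Dihn; the function name is an illustrative choice.

```python
# Generalized dihedral multiplication for H = Z_n.  Elements are pairs (h, t)
# with h in Z_n and t in Z_2; t = 1 means the element involves the inversion.
def dih_multiply(a, b, n):
    (h1, t1), (h2, t2) = a, b
    # (h1, 0) * (h2, t2) = (h1 + h2, t2)
    # (h1, 1) * (h2, t2) = (h1 - h2, 1 + t2)
    h = (h1 - h2) % n if t1 == 1 else (h1 + h2) % n
    return (h, (t1 + t2) % 2)

# Example in Dih_4: every element (h, 1) is its own inverse.
r = (1, 1)
assert dih_multiply(r, r, 4) == (0, 0)
```

Replacing Zn by any abelian group (with the same inversion action) reproduces the general construction Dih(H).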
Generalized dihedral group
[ "Mathematics" ]
1,358
[ "Group theory", "Fields of abstract algebra" ]
7,149,861
https://en.wikipedia.org/wiki/Mixed-mode%20chromatography
Mixed-mode chromatography (MMC), or multimodal chromatography, refers to chromatographic methods that utilize more than one form of interaction between the stationary phase and analytes in order to achieve their separation. What distinguishes it from conventional single-mode chromatography is that the secondary interactions in MMC are not negligibly weak, so they also contribute to the retention of the solutes. History Before MMC was recognized as a chromatographic approach in its own right, secondary interactions were generally believed to be the main cause of peak tailing. It was later found, however, that secondary interactions could be exploited to improve separation power. In 1986, Regnier's group synthesized a stationary phase that combined characteristics of anion exchange chromatography (AEX) and hydrophobic interaction chromatography (HIC) for protein separation. In 1998, a new form of MMC, hydrophobic charge induction chromatography (HCIC), was proposed by Burton and Harding. In the same year, conjoint liquid chromatography (CLC), which combines different types of monolithic convective interaction media (CIM) disks in the same housing, was introduced by Štrancar et al. In 1999, Yates' group loaded strong-cation exchange (SCX) and reversed phase liquid chromatography (RPLC) stationary phases sequentially into a capillary column coupled with tandem mass spectrometry (MS/MS) for the analysis of peptides, which subsequently became one of the most efficient techniques in proteomics. In 2009, Geng's group first achieved online two-dimensional (2D) separation of intact proteins using a single column possessing separation features of weak-cation exchange chromatography (WCX) and HIC (termed two-dimensional liquid chromatography using a single column, 2D-LC-1C). Advantages Higher selectivity: for example, positive, negative and neutral substances can be separated by a reversed phase (RP)/anion-cation exchange (ACE) column in a single run. Higher loading capacity: for example, the loading capacity of ACE/hydrophilic interaction chromatography (HILIC) phases can be 10–100 times that of RPLC, which opens new options for developing semi-preparative and preparative chromatography. One mixed-mode column can replace two or more single-mode columns, which is economical and environmentally friendly because the stationary phase is used more fully and less raw material is consumed or wasted. A single mixed-mode column can be used for on-line two-dimensional (2D) analysis in a sealed system, given a suitable chromatographic setup, or for off-line 2D analysis as if it were two separate columns. Classification of MMC MMC can be classified into physical MMC and chemical MMC. In the former method, the stationary phase is constructed of two or more types of packing materials. In the chemical method, just one type of packing material containing two or more functionalities is used. Physical methods The simplest approach is to connect two commercial columns in series, which is termed a "tandem column". Another approach is the "biphasic column", in which two stationary phases are packed separately into the two ends of the same column. The third approach is to homogenize two or more different types of stationary phases in a single column, which is termed a "hybrid column" or "mixed-bed column".
Chemical methods IEC/HIC Because IEC and HIC operating conditions are the closest to physiological conditions, and are therefore well suited to maintaining biological activity, combinations of the two are widely used in the separation of biological products. IEC/HIC MMC has improved separation power and selectivity because it exploits both electrostatic and hydrophobic interactions. IEC/RPLC IEC/RP MMC combines the advantages of RPLC and IEC. For example, WAX/RP offers greater separation power, and more freedom in adjusting the separation selectivity, than WAX or RPLC alone. HILIC/RPLC Liu et al. synthesized a HILIC/RP stationary phase that can show either RPLC or HILIC retention depending on the organic content of the mobile phase. HILIC/IEC Mant et al. reported that HILIC/CEX offered unique selectivity, stronger separation power and a wider range of applications than RPLC for peptide separations. SEC/IEC Hydrophobic interactions in protein SEC are relatively weak; at low ionic strength, electrostatic effects may contribute significantly to retention, and this allows an SEC column to be used as a weak ion exchanger. References Chromatography
Mixed-mode chromatography
[ "Chemistry" ]
973
[ "Chromatography", "Separation processes" ]
7,150,276
https://en.wikipedia.org/wiki/Hydrophilic%20interaction%20chromatography
Hydrophilic interaction chromatography (or hydrophilic interaction liquid chromatography, HILIC) is a variant of normal phase liquid chromatography that partly overlaps with other chromatographic applications such as ion chromatography and reversed phase liquid chromatography. HILIC uses hydrophilic stationary phases with reversed-phase type eluents. The name was suggested by Andrew Alpert in his 1990 paper on the subject. He described the chromatographic mechanism for it as liquid-liquid partition chromatography where analytes elute in order of increasing polarity, a conclusion supported by a review and re-evaluation of published data. Surface Any polar chromatographic surface can be used for HILIC separations. Even non-polar bonded silicas have been used with extremely high organic solvent composition, thanks to the exposed patches of silica in between the bonded ligands on the support, which can affect the interactions. With that exception, HILIC phases can be grouped into five categories of neutral polar or ionic surfaces: simple unbonded silica, silanol or diol bonded phases; amino or anionic bonded phases; amide bonded phases; cationic bonded phases; and zwitterionic bonded phases. Mobile phase A typical mobile phase for HILIC chromatography includes acetonitrile ("MeCN", also designated as "ACN") with a small amount of water. However, any aprotic solvent miscible with water (e.g. THF or dioxane) can be used. Alcohols can also be used; however, their concentration must be higher to achieve the same degree of retention for an analyte relative to an aprotic solvent–water combination. See also Aqueous normal phase chromatography. It is commonly believed that in HILIC, the mobile phase forms a water-rich layer on the surface of the polar stationary phase, in contrast to the water-deficient bulk mobile phase, creating a liquid/liquid extraction system. The analyte is distributed between these two layers. However, HILIC is more than just simple partitioning and includes hydrogen-donor interactions between neutral polar species as well as weak electrostatic mechanisms under the high organic solvent conditions used for retention. This distinguishes HILIC as a mechanism distinct from ion exchange chromatography. More polar compounds interact more strongly with the stationary aqueous layer than less polar compounds, so a separation based on a compound's polarity and degree of solvation takes place. Additives Ionic additives, such as ammonium acetate and ammonium formate, are usually used to control the mobile phase pH and ionic strength. In HILIC they can also contribute to the polarity of the analyte, resulting in differential changes in retention. For extremely polar analytes (e.g. aminoglycoside antibiotics such as gentamicin, or adenosine triphosphate), higher concentrations of buffer (c. 100 mM) are required to ensure that the analyte will be in a single ionic form. Otherwise, asymmetric peak shape, chromatographic tailing, and/or poor recovery from the stationary phase will be observed. For the separation of neutral polar analytes (e.g. carbohydrates), no buffer is necessary. Other salts, such as 100–300 mM sodium perchlorate, that are soluble in high-organic solvent mixtures (c. 70–90% acetonitrile), can be used to increase the mobile phase polarity and so affect elution. These salts are not volatile, so this technique is less useful with a mass spectrometer as the detector. Usually a gradient (to increasing amounts of water) is enough to promote elution.
All ions partition into the stationary phase to some degree, so an occasional "wash" with water is required to ensure a reproducible stationary phase. Applications The HILIC mode of separation is used extensively for the separation of biomolecules and of organic and some inorganic molecules according to differences in polarity. Its utility has increased because it simplifies sample preparation for biological samples when analyzing for metabolites, since the metabolic process generally results in the addition of polar groups that enhance elimination from the cellular tissue. This separation technique is also particularly suitable for glycosylation analysis and quality assurance of glycoproteins and glycoforms in biologic medical products. For the detection of polar compounds with electrospray-ionization mass spectrometry as the chromatographic detector, HILIC can offer a tenfold increase in sensitivity over reversed-phase chromatography because the organic solvent is much more volatile. Choice of pH With surface chemistries that are weakly ionic, the choice of pH can affect the ionic nature of the column chemistry. Properly adjusted, the pH can be set to reduce the selectivity toward functional groups with the same charge as the column, or to enhance it for oppositely charged functional groups. Similarly, the choice of pH affects the polarity of the solutes. However, for column surface chemistries that are strongly ionic, and thus insensitive to pH values in the mid-range of the pH scale (pH 3.5–8.5), these separations will reflect the polarity of the analytes alone, and thus might be easier to understand when doing methods development. ERLIC In 2008, Alpert coined the term ERLIC (electrostatic repulsion hydrophilic interaction chromatography) for HILIC separations in which an ionic column surface chemistry is used to repel a common ionic polar group on an analyte or within a set of analytes, to facilitate separation by the remaining polar groups. Electrostatic effects have a chemical potential roughly an order of magnitude stronger than that of neutral polar effects. This allows one to minimize the influence of a common ionic group within a set of analyte molecules, or to reduce the degree of retention from these more polar functional groups, even enabling isocratic separations in lieu of a gradient in some situations. His subsequent publication further described orientation effects, which others have also called ion-pair normal phase or e-HILIC, reflecting retention mechanisms sensitive to a particular ionic portion of the analyte, either attractive or repulsive. ERLIC (eHILIC) separations need not be isocratic, but the net effect is to reduce the attraction of a particularly strong polar group, which then requires weaker elution conditions, and to enhance the interaction of the remaining polar (oppositely charged ionic, or non-ionic) functional groups of the analyte(s). Based on the ERLIC column invented by Andrew Alpert, a new peptide mapping methodology was developed that can separate the products of asparagine deamidation and isomerization. These properties would be very beneficial for future mass spectrometry-based multi-attribute monitoring in biologics quality control.
Cationic eHILIC For example, one could use a cation exchange (negatively charged) surface chemistry for ERLIC separations to reduce the influence on retention of anionic (negatively charged) groups (the phosphates of nucleotides or of phosphonyl antibiotic mixtures; or sialic acid groups of modified carbohydrates) to now allow separation based more on the basic and/or neutral functional groups of these molecules. Modifying the polarity of a weakly ionic group (e.g. carboxyl) on the surface is easily accomplished by adjusting the pH to be within two pH units of that group's pKa. For strongly ionic functional groups of the surface (i.e. sulfates or phosphates) one could instead use a lower amount of buffer so the residual charge is not completely ion paired. An example of this would be the use of a 12.5mM (rather than the recommended >20mM buffer), pH 9.2 mobile phase on a polymeric, zwitterionic, betaine-sulfonate surface to separate phosphonyl antibiotic mixtures (each containing a phosphate group). This enhances the influence of the column's sulfonic acid functional groups of its surface chemistry over its, slightly diminished (by pH), quaternary amine. Commensurate with this, these analytes will show a reduced retention on the column eluting earlier, and in higher amounts of organic solvent, than if a neutral polar HILIC surface were used. This also increases their detection sensitivity by negative ion mass spectrometry. Anionic eHILIC By analogy to the above, one can use an anion exchange (positively charged) column surface chemistry to reduce the influence on retention of cationic (positively charged) functional groups for a set of analytes, such as when selectively isolating phosphorylated peptides or sulfated polysaccharide molecules. Use of a pH between 1 and 2 pH units will reduce the polarity of two of the three ionizable oxygens of the phosphate group, and thus will allow easy desorption from the (oppositely charged) surface chemistry. It will also reduce the influence of negatively charged carboxyls in the analytes, since they will be protonated at this low a pH value, and thus contribute less overall polarity to the molecule. Any common, positively charged amino groups will be repelled from the column surface chemistry and thus these conditions enhance the role of the phosphate's polarity (as well as other neutral polar groups) in the separation. References Chromatography Laboratory techniques Molecular biology Biochemistry methods
Hydrophilic interaction chromatography
[ "Chemistry", "Biology" ]
1,982
[ "Chromatography", "Biochemistry methods", "Separation processes", "nan", "Molecular biology", "Biochemistry" ]
13,284,111
https://en.wikipedia.org/wiki/Wu%27s%20method%20of%20characteristic%20set
Wenjun Wu's method is an algorithm for solving multivariate polynomial equations introduced in the late 1970s by the Chinese mathematician Wen-Tsun Wu. This method is based on the mathematical concept of characteristic set introduced in the late 1940s by J.F. Ritt. It is fully independent of the Gröbner basis method, introduced by Bruno Buchberger (1965), even if Gröbner bases may be used to compute characteristic sets. Wu's method is powerful for mechanical theorem proving in elementary geometry, and provides a complete decision process for certain classes of problem. It has been used in research in his laboratory (KLMM, Key Laboratory of Mathematics Mechanization in Chinese Academy of Science) and around the world. The main trends of research on Wu's method concern systems of polynomial equations of positive dimension and differential algebra where Ritt's results have been made effective. Wu's method has been applied in various scientific fields, like biology, computer vision, robot kinematics and especially automatic proofs in geometry. Informal description Wu's method uses polynomial division to solve problems of the form: where f is a polynomial equation and I is a conjunction of polynomial equations. The algorithm is complete for such problems over the complex domain. The core idea of the algorithm is that you can divide one polynomial by another to give a remainder. Repeated division results in either the remainder vanishing (in which case the I implies f statement is true), or an irreducible remainder is left behind (in which case the statement is false). More specifically, for an ideal I in the ring k[x1, ..., xn] over a field k, a (Ritt) characteristic set C of I is composed of a set of polynomials in I, which is in triangular shape: polynomials in C have distinct main variables (see the formal definition below). Given a characteristic set C of I, one can decide if a polynomial f is zero modulo I. That is, the membership test is checkable for I, provided a characteristic set of I. Ritt characteristic set A Ritt characteristic set is a finite set of polynomials in triangular form of an ideal. This triangular set satisfies certain minimal condition with respect to the Ritt ordering, and it preserves many interesting geometrical properties of the ideal. However it may not be its system of generators. Notation Let R be the multivariate polynomial ring k[x1, ..., xn] over a field k. The variables are ordered linearly according to their subscript: x1 < ... < xn. For a non-constant polynomial p in R, the greatest variable effectively presenting in p, called main variable or class, plays a particular role: p can be naturally regarded as a univariate polynomial in its main variable xk with coefficients in k[x1, ..., xk−1]. The degree of p as a univariate polynomial in its main variable is also called its main degree. Triangular set A set T of non-constant polynomials is called a triangular set if all polynomials in T have distinct main variables. This generalizes triangular systems of linear equations in a natural way. Ritt ordering For two non-constant polynomials p and q, we say p is smaller than q with respect to Ritt ordering and written as p <r q, if one of the following assertions holds: (1) the main variable of p is smaller than the main variable of q, that is, mvar(p) < mvar(q), (2) p and q have the same main variable, and the main degree of p is less than the main degree of q, that is, mvar(p) = mvar(q) and mdeg(p) < mdeg(q). 
In this way, (k[x1, ..., xn],<r) forms a well partial order. However, the Ritt ordering is not a total order: there exist polynomials p and q such that neither p <r q nor p >r q. In this case, we say that p and q are not comparable. The Ritt ordering is comparing the rank of p and q. The rank, denoted by rank(p), of a non-constant polynomial p is defined to be a power of its main variable: mvar(p)mdeg(p) and ranks are compared by comparing first the variables and then, in case of equality of the variables, the degrees. Ritt ordering on triangular sets A crucial generalization on Ritt ordering is to compare triangular sets. Let T = { t1, ..., tu} and S = { s1, ..., sv} be two triangular sets such that polynomials in T and S are sorted increasingly according to their main variables. We say T is smaller than S w.r.t. Ritt ordering if one of the following assertions holds there exists k ≤ min(u, v) such that rank(ti) = rank(si) for 1 ≤ i < k and tk <r sk, u > v and rank(ti) = rank(si) for 1 ≤ i ≤ v. Also, there exists incomparable triangular sets w.r.t Ritt ordering. Ritt characteristic set Let I be a non-zero ideal of k[x1, ..., xn]. A subset T of I is a Ritt characteristic set of I if one of the following conditions holds: T consists of a single nonzero constant of k, T is a triangular set and T is minimal w.r.t Ritt ordering in the set of all triangular sets contained in I. A polynomial ideal may possess (infinitely) many characteristic sets, since Ritt ordering is a partial order. Wu characteristic set The Ritt–Wu process, first devised by Ritt, subsequently modified by Wu, computes not a Ritt characteristic but an extended one, called Wu characteristic set or ascending chain. A non-empty subset T of the ideal generated by F is a Wu characteristic set of F if one of the following condition holds T = {a} with a being a nonzero constant, T is a triangular set and there exists a subset G of such that = and every polynomial in G is pseudo-reduced to zero with respect to T. Wu characteristic set is defined to the set F of polynomials, rather to the ideal generated by F. Also it can be shown that a Ritt characteristic set T of is a Wu characteristic set of F. Wu characteristic sets can be computed by Wu's algorithm CHRST-REM, which only requires pseudo-remainder computations and no factorizations are needed. Wu's characteristic set method has exponential complexity; improvements in computing efficiency by weak chains, regular chains, saturated chain were introduced Decomposing algebraic varieties An application is an algorithm for solving systems of algebraic equations by means of characteristic sets. More precisely, given a finite subset F of polynomials, there is an algorithm to compute characteristic sets T1, ..., Te such that: where W(Ti) is the difference of V(Ti) and V(hi), here hi is the product of initials of the polynomials in Ti. See also Regular chain Mathematics-Mechanization Platform References P. Aubry, M. Moreno Maza (1999) Triangular Sets for Solving Polynomial Systems: a Comparative Implementation of Four Methods. J. Symb. Comput. 28(1–2): 125–154 David A. Cox, John B. Little, Donal O'Shea. Ideals, Varieties, and Algorithms. 2007. Ritt, J. (1966). Differential Algebra. New York, Dover Publications. Dongming Wang (1998). Elimination Methods. Springer-Verlag, Wien, Springer-Verlag Dongming Wang (2004). Elimination Practice, Imperial College Press, London Wu, W. T. (1984). Basic principles of mechanical theorem proving in elementary geometries. J. Syst. Sci. Math. 
Sci., 4, 207–35 Wu, W. T. (1987). A zero structure theorem for polynomial equations solving. MM Research Preprints, 1, 2–12 External links wsolve Maple package The Characteristic Set Method Computer algebra Algebraic geometry Commutative algebra Polynomials
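The successive pseudo-division step sketched in the informal description above can be illustrated with SymPy's pseudo-remainder function. The triangular set and the conjecture polynomial below are invented toy examples, and the reduction loop is only a schematic membership-style test, not a full implementation of Wu's algorithm.

```python
from sympy import symbols, prem

# Toy triangular set T (variable order x < y), stored with each polynomial's
# main variable, and a conjecture polynomial f.  Here x^2 = 2 and y^2 = x
# imply y^4 - 2 = 0, so the final pseudo-remainder should vanish.
x, y = symbols('x y')
T = [(x**2 - 2, x),
     (y**2 - x, y)]
f = y**4 - 2

r = f
for t, v in reversed(T):       # reduce by decreasing main variable: y first, then x
    r = prem(r, t, v)          # pseudo-remainder of r by t with respect to v

print(r)                       # 0: the remainder vanishes, so the statement follows
```

A vanishing final remainder shows that f lies in the saturation of the triangular set, which is exactly how Wu's method certifies an "I implies f" statement; a nonzero irreducible remainder indicates that the statement does not follow.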
Wu's method of characteristic set
[ "Mathematics", "Technology" ]
1,717
[ "Polynomials", "Computer algebra", "Computational mathematics", "Fields of abstract algebra", "Computer science", "Algebraic geometry", "Commutative algebra", "Algebra" ]
13,285,816
https://en.wikipedia.org/wiki/Potassium%20canrenoate
Potassium canrenoate (INN, JAN) or canrenoate potassium (USAN) (brand names Venactone, Soldactone), also known as aldadiene kalium, the potassium salt of canrenoic acid, is an aldosterone antagonist of the spirolactone group. Like spironolactone, it is a prodrug, and is metabolized to active canrenone in the body. Potassium canrenoate is notable in that it is the only clinically used antimineralocorticoid which is available for parenteral administration (specifically intravenous) as opposed to oral administration. In the UK, it is unlicensed and only used for short term diuresis in oedema or heart failure in neonates or children under specialist initiation and monitoring. See also Canrenoic acid Canrenone References 11β-Hydroxylase inhibitors Antimineralocorticoids CYP17A1 inhibitors Pregnanes Potassium compounds Prodrugs Progestogens Spirolactones Steroidal antiandrogens
Potassium canrenoate
[ "Chemistry" ]
229
[ "Chemicals in medicine", "Prodrugs" ]
13,285,821
https://en.wikipedia.org/wiki/Canrenone
Canrenone, sold under the brand names Contaren, Luvion, Phanurane, and Spiroletan, is a steroidal antimineralocorticoid of the spirolactone group related to spironolactone which is used as a diuretic in Europe, including in Italy and Belgium. It is also an important active metabolite of spironolactone, and partially accounts for its therapeutic effects. Medical uses Canrenone has been found to be effective in the treatment of hirsutism in women. Heart failure Two studies of canrenone in people with heart failure have shown a mortality benefit compared to placebo. In the evaluation of people with chronic heart failure (CHF), those treated with canrenone had fewer deaths than the placebo group, indicating a mortality and morbidity benefit of the medication. One study, lasting 10 years, compared 166 patients treated with canrenone to 336 given conventional therapy. Differences in systolic and diastolic blood pressure were observed between the two groups, with the canrenone-treated patients showing lower blood pressure than those on conventional therapy. Uric acid was lower in the group treated with canrenone; however, no differences were seen in potassium, sodium, or brain natriuretic peptide (BNP) levels. Left ventricular mass was also lower in the group treated with canrenone, and greater progression of NYHA class was observed in the control group than in the patients treated with canrenone. Another study concluded that treatment with canrenone in patients with chronic heart failure improves diastolic function and further decreases BNP levels. Pharmacology Pharmacodynamics Canrenone is reportedly a more potent antimineralocorticoid than spironolactone, but it is considerably less potent and effective as an antiandrogen. Similarly to spironolactone, canrenone inhibits steroidogenic enzymes such as 11β-hydroxylase, cholesterol side-chain cleavage enzyme, 17α-hydroxylase, 17,20-lyase, and 21-hydroxylase, but once again it is comparatively less potent in doing so. Pharmacokinetics The elimination half-life of canrenone is about 16.5 hours. As a metabolite Canrenone is an active metabolite of spironolactone, canrenoic acid, and potassium canrenoate, and is considered to be partially responsible for their effects. It has been found to have approximately 10 to 25% of the potassium-sparing diuretic effect of spironolactone, whereas another metabolite, 7α-thiomethylspironolactone (7α-TMS), accounts for around 80% of the potassium-sparing effect of the drug. History Canrenone was described and characterized in 1959. It was introduced for medical use, in the form of potassium canrenoate (the potassium salt of canrenoic acid), by 1968. Society and culture Generic names Canrenone is the INN of the drug. Brand names Canrenone has been marketed under the brand names Contaren, Luvion, Phanurane, and Spiroletan, among others. Availability Canrenone appears to remain available only in Italy, although potassium canrenoate remains marketed in various other countries as well. See also Canrenoic acid Potassium canrenoate References 11β-Hydroxylase inhibitors 21-Hydroxylase inhibitors Antimineralocorticoids Cholesterol side-chain cleavage enzyme inhibitors CYP17A1 inhibitors Diuretics Human drug metabolites Lactones Pregnanes Progestogens Spiro compounds Spirolactones Spironolactone Steroidal antiandrogens World Anti-Doping Agency prohibited substances Conjugated dienes Enones
Canrenone
[ "Chemistry" ]
821
[ "Organic compounds", "Chemicals in medicine", "Human drug metabolites", "Spiro compounds" ]
13,289,313
https://en.wikipedia.org/wiki/Souders%E2%80%93Brown%20equation
In chemical engineering, the Souders–Brown equation (named after Mott Souders and George Granger Brown) has been a tool for obtaining the maximum allowable vapor velocity in vapor–liquid separation vessels (variously called flash drums, knockout drums, knockout pots, compressor suction drums and compressor inlet drums). It has also been used for the same purpose in designing trayed fractionating columns, trayed absorption columns and other vapor–liquid-contacting columns. A vapor–liquid separator drum is a vertical vessel into which a liquid and vapor mixture (or a flashing liquid) is fed and wherein the liquid is separated by gravity, falls to the bottom of the vessel, and is withdrawn. The vapor travels upward at a design velocity which minimizes the entrainment of any liquid droplets in the vapor as it exits the top of the vessel. Use The diameter of a vapor–liquid separator drum is dictated by the expected volumetric flow rate of vapor and liquid from the drum. The following sizing methodology is based on the assumption that those flow rates are known. Use a vertical pressure vessel with a length–diameter ratio of about 3 to 4, and size the vessel to provide about 5 minutes of liquid inventory between the normal liquid level and the bottom of the vessel (with the normal liquid level being somewhat below the feed inlet). Calculate the maximum allowable vapor velocity in the vessel by using the Souders–Brown equation: v = k √[(ρL − ρV) / ρV] where v is the maximum allowable vapor velocity in m/s, ρL is the liquid density in kg/m³, ρV is the vapor density in kg/m³, and k = 0.107 m/s (when the drum includes a de-entraining mesh pad). Then the cross-sectional area of the drum can be found from: A = Q / v where Q is the vapor volumetric flow rate in m³/s and A is the cross-sectional area of the drum in m². And the drum diameter is: D = √(4 A / π) The drum should have a vapor outlet at the top, liquid outlet at the bottom, and feed inlet at about the half-full level. At the vapor outlet, provide a de-entraining mesh pad within the drum such that the vapor must pass through that mesh before it can leave the drum. Depending upon how much liquid flow is expected, the liquid outlet line should probably have a liquid level control valve. As for the mechanical design of the drum (materials of construction, wall thickness, corrosion allowance, etc.) use the same criteria as for any pressure vessel. Recommended values of k The GPSA Engineering Data Book recommends the following values for vertical drums with horizontal mesh pads (at the denoted operating pressures): At a gauge pressure of 0 bar: 0.107 m/s At a gauge pressure of 7 bar: 0.107 m/s At a gauge pressure of 21 bar: 0.101 m/s At a gauge pressure of 42 bar: 0.092 m/s At a gauge pressure of 63 bar: 0.083 m/s At a gauge pressure of 105 bar: 0.065 m/s GPSA notes: k = 0.107 m/s at a gauge pressure of 7 bar. Subtract 0.003 for every 7 bar above a gauge pressure of 7 bar. For glycol or amine solutions, multiply the above values by 0.6–0.8. For approximate sizing of vertical separators without mesh pads, typically use one-half of the above values. For compressor suction scrubbers and expander inlet separators, multiply by 0.7–0.8. See also Demister References Equations Gas-liquid separation
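The sizing sequence above (maximum velocity, then cross-sectional area, then diameter) translates directly into a short calculation. The following Python sketch uses illustrative densities, flow rate, and k value; it is not a substitute for the GPSA recommendations listed above.

```python
import math

def drum_diameter(q_vapor, rho_liquid, rho_vapor, k=0.107):
    """q_vapor in m^3/s, densities in kg/m^3, k in m/s (Souders-Brown factor)."""
    v_max = k * math.sqrt((rho_liquid - rho_vapor) / rho_vapor)  # max vapor velocity, m/s
    area = q_vapor / v_max                                       # cross-sectional area, m^2
    return math.sqrt(4.0 * area / math.pi)                       # drum diameter, m

# Example: 0.5 m^3/s of vapor, liquid 800 kg/m^3, vapor 5 kg/m^3, mesh pad (k = 0.107)
print(f"drum diameter = {drum_diameter(0.5, 800.0, 5.0):.2f} m")
```

A real design would also check the length-to-diameter ratio and liquid holdup requirements described above, and round the diameter up to a standard vessel size.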
Souders–Brown equation
[ "Chemistry", "Mathematics" ]
729
[ "Equations", "Mathematical objects", "Gas-liquid separation", "Separation processes by phases" ]
13,289,588
https://en.wikipedia.org/wiki/Foundation%20integrity%20testing
Foundation integrity testing is the non-destructive testing of piled foundations. It was first used in the late 1960s, and has been developed over time by many companies. Three organizations supply a majority of the test equipment in use: CEBTP (Centre Expérimental de Recherches et d'Etudes du Bâtiment et des Travaux Publics) in Europe; Integrity Testing in Asia and Australia; and GRL in the USA. References Bridges
Foundation integrity testing
[ "Engineering" ]
92
[ "Structural engineering", "Bridges" ]
15,964,561
https://en.wikipedia.org/wiki/Forensic%20materials%20engineering
Forensic materials engineering, a branch of forensic engineering, focuses on the material evidence from crime or accident scenes, seeking defects in those materials that might explain why an accident occurred, or tracing the source of a specific material to identify a criminal. Many analytical methods used for material identification may be used in investigations, the exact set being determined by the nature of the material in question, be it metal, glass, ceramic, polymer or composite. An important aspect is the analysis of trace evidence such as skid marks on exposed surfaces, where contact between dissimilar materials leaves traces of one material on the other. Provided the traces can be analysed successfully, an accident or crime can often be reconstructed. Another aim is to determine the cause of a broken component using the technique of fractography. Metals and alloys Metal surfaces can be analyzed in a number of ways, including spectroscopy and energy-dispersive X-ray analysis (EDX) during scanning electron microscopy. The nature and composition of the metal can normally be established by sectioning and polishing the bulk, and examining the flat section using optical microscopy after etching solutions have been used to provide contrast in the section between alloy constituents. Such solutions (often an acid) attack the surface preferentially, thereby isolating features or inclusions of one composition and enabling them to be seen much more clearly than in the polished but untreated surface. Metallography is a routine technique for examining the microstructure of metals, but can also be applied to ceramics, glasses and polymers. SEM can often be critical in determining failure modes by examining fracture surfaces. The origin of a crack can be found and the way it grew assessed, to distinguish, for example, overload failure from fatigue. Often, however, fatigue fractures are easy to distinguish from overload failures by the lack of ductility and by the presence of distinct fast and slow crack growth regions on the fracture surface. Crankshaft fatigue, for example, is a common failure mode for engine parts. In a typical example, just two such zones are seen: slow crack growth at the base and fast fracture at the top. Ceramics and glasses Hard products like ceramic pottery and glass windscreens can be studied using the same SEM methods used for metals, especially ESEM conducted at low vacuum. Fracture surfaces are especially valuable sources of information because surface features like hachures can enable the origin or origins of the cracks to be found. Analysis of the surface features is carried out using fractography. The position of the origin can then be matched with likely loads on the product to show how an accident occurred, for example. Inspection of bullet holes can often show the direction of travel and the energy of the impact, and common glass products such as bottles can be analysed to show whether they were broken deliberately or accidentally in a crime or accident. Defects such as foreign particles will often occur near or at the origin of the critical crack, and can be readily identified by ESEM. Polymers and composites Thermoplastics, thermosets, and composites can be analyzed using FTIR and UV spectroscopy as well as NMR and ESEM. Failed samples can be dissolved in a suitable solvent and examined directly (UV, IR and NMR spectroscopy), or examined as a thin film cast from solvent or cut by microtomy from the solid product.
The slicing method is preferable since there are no complications from solvent absorption, and the integrity of the sample is partly preserved. Fractured products can be examined using fractography, an especially useful method for all fractured components using macrophotography and optical microscopy. Although polymers usually possess quite different properties to metals and ceramics, they are just as susceptible to failure from mechanical overload, fatigue and stress corrosion cracking if products are poorly designed or manufactured. Many plastics are susceptible to attack by active chemicals like chlorine, present at low levels in potable water supplies, especially if the injection mouldings are faulty. ESEM is especially useful for providing elemental analysis from viewed parts of the sample being investigated. It is effectively a technique of microanalysis and valuable for examination of trace evidence. On the other hand, colour rendition is absent, and there is no information provided about the way in which those elements are bonded to one another. Specimens will be exposed to a vacuum, so any volatiles may be removed, and surfaces may be contaminated by substances used to attach the sample to the mount. Elastomers Rubber products are often safety-critical parts of machines, so that failure can often cause accidents or loss of function. Failed products can be examined with many of the generic polymer methods, although it is more difficult if the sample is vulcanized or cross-linked. Attenuated total reflectance infra-red spectroscopy is useful because the product is usually flexible so can be pressed against the selenium crystal used for analysis. Simple swelling tests can also help to identify the specific elastomer used in a product. Often the best technique is ESEM using the X-ray elemental analysis facility on the microscope. Although the method only provides elemental analysis, it can provide clues as to the identity of the elastomer being examined. Thus the presence of substantial amounts of chlorine indicates polychloroprene while the presence of nitrogen indicates nitrile rubber. The method is also useful in confirming ozone cracking by the large amounts of oxygen present on cracked surfaces. Ozone attacks susceptible elastomers such as natural rubber, nitrile rubber and polybutadiene and associated copolymers. Such elastomers possess double bonds in their main chains, the group which is attacked during ozonolysis. The problem occurs when small concentrations of ozone gas are present near to exposed elastomer surfaces, such as O-rings and diaphragm seals. The product must be in tension, but only very low strains are sufficient to cause degradation. See also Applied spectroscopy Brittleness Circumstantial evidence Forensic engineering Forensic polymer engineering Forensic science Fracture Fractography Fracture mechanics Ozone cracking Polymer degradation Skid mark Stress corrosion cracking Trace evidence References Lewis, Peter Rhys, Reynolds, K, Gagg, C, Forensic Materials Engineering: Case studies, CRC Press (2004). Lewis, Peter Rhys Forensic Polymer Engineering: Why polymer products fail in service, 2nd edition, Woodhead/Elsevier (2016). Engineering disciplines Materials science Materials engineering
Forensic materials engineering
[ "Physics", "Materials_science", "Engineering" ]
1,289
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]