MIKE 21 is a computer program that simulates flows, waves, sediments and ecology in rivers, lakes, estuaries, bays, coastal areas and seas in two dimensions. It was developed by DHI .
MIKE 21 comprises three simulation engines:
Single Grid : the full time-dependent non-linear equations of continuity and conservation of momentum are solved by implicit finite difference techniques with the variables defined on a space-staggered rectangular grid.
Multiple Grids : the Multiple Grids version uses the same simulation engine and numerical approach as the single grid version. However, it provides the possibility of refining areas of special interest within the model area (nesting). All domains within the model area are dynamically linked.
Flexible Mesh : uses an unstructured mesh and a cell-centred finite volume solution technique. The mesh is based on linear triangular elements.
MIKE 21 can be used for design data assessment for coastal and offshore structures, optimization of port layout and coastal protection measures, cooling water, desalination and recirculation analysis, environmental impact assessment of marine infrastructures, water forecast for safe marine operations and navigation, coastal flooding and storm surge warnings, inland flooding and overland flow modeling.
| https://en.wikipedia.org/wiki/MIKE_21 |
MIKE 21C is a computer program that simulates the development in the river bed and channel plan form in two dimensions. MIKE 21C was developed by DHI . MIKE 21C uses curvilinear finite difference grids.
Simulated processes with MIKE 21C include bank erosion , scouring and shoaling brought about by activities such as construction and dredging, seasonal fluctuations in flow, etc.
MIKE 21C can be used for designing protection schemes against bank erosion, evaluating measures to reduce or manage shoaling, analyzing alignments and dimensions of navigation channels for minimizing capital and maintenance dredging, predicting the impact of bridge, tunnel and pipeline crossings on river channel hydraulics and morphology, optimizing restoration plans for habitat environment in channel floodplain systems, designing monitoring networks based on morphological forecasting.
Because it describes the relevant physical processes, MIKE 21C can simulate a braided river developing from a plane bed, as illustrated by Enggrob & Tjerry (1998).
Like most other models made by DHI, MIKE 21C applies an add-on concept in which the overall time loop can contain the processes to be simulated, selected by the user. In its basic form the model is a 2-dimensional hydrodynamic model that can produce dynamic as well as quasi-steady or steady-state hydrodynamic solutions. The hydrodynamic model solves the Saint-Venant equations in two dimensions, with the water depth defined in cell centers and a staggered velocity field (internally the code solves for the flux field, i.e. the water depth multiplied by the velocity vector) whose directions follow the local grid base vectors.
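For orientation, a depth-integrated (flux) form of the two-dimensional Saint-Venant equations of the kind solved by such models is sketched below; the bed-friction and other closing terms vary between implementations, so the terms shown here are only indicative.

```latex
% 2D Saint-Venant (shallow water) equations in flux form (sketch).
% h: water depth, zeta: free-surface elevation, (p, q) = (uh, vh): depth-integrated fluxes,
% g: gravitational acceleration, tau_bx, tau_by: bed-friction terms (implementation dependent).
\frac{\partial h}{\partial t}
  + \frac{\partial p}{\partial x}
  + \frac{\partial q}{\partial y} = 0
\qquad
\frac{\partial p}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{p^{2}}{h}\right)
  + \frac{\partial}{\partial y}\!\left(\frac{pq}{h}\right)
  + g\,h\,\frac{\partial \zeta}{\partial x} + \tau_{bx} = 0
\qquad
\frac{\partial q}{\partial t}
  + \frac{\partial}{\partial y}\!\left(\frac{q^{2}}{h}\right)
  + \frac{\partial}{\partial x}\!\left(\frac{pq}{h}\right)
  + g\,h\,\frac{\partial \zeta}{\partial y} + \tau_{by} = 0
```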
The model is computationally a parallel code (written in Fortran ) with parallelization in all modules, which allows simulations of morphological development on fine grids over long periods of time. The model is typically applied with as many as 25,000 computational points over periods of several years or even decades.
The most important secondary flow in rivers is the so-called helical flow, named for its helix-like (spiral) motion. The flow arises because water in the lower portion of the water column flows towards the local center of curvature, while water near the surface flows away from it. This has only a minor impact on the hydrodynamics, usually only pronounced at laboratory scale, but it has profound impacts on sediment transport and morphology, because the helical flow gives rise to a transverse sediment-transport component that would otherwise be zero. MIKE 21C applies standard theory for the helical flow, which can be found in e.g. Rozowsky (1957). Standard helical flow theory provides a secondary flow velocity profile that is fully characterised by friction and the deviation angle between the main flow direction and the direction of the shear stress at the river bed.
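A minimal sketch of the closure typically used in such theory (an assumption for illustration, not a statement of MIKE 21C's exact formulation): the bed shear stress deviates from the depth-averaged flow direction by an angle that scales with the ratio of water depth to streamline curvature,

```latex
% delta: deviation angle of the bed shear stress from the depth-averaged flow direction,
% h: local water depth, R_s: radius of curvature of the depth-averaged streamline,
% alpha: friction-dependent coefficient (values of order 7-11 are commonly quoted).
\tan\delta \;\approx\; \alpha\,\frac{h}{R_{s}}
```

It is this deviation angle that feeds the otherwise-zero transverse component of the bed-load transport mentioned above.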
MIKE 21C uses the traditional division of sediment transport into bedload and suspended load, and the model can simulate both non-cohesive and cohesive sediment in a mixture.
The bed-load model accounts for the impacts of secondary flow (bed shear stress direction) and local bed slope (gravity). Suspended load is calculated with an advection-dispersion equation for each fraction, which includes adaptation in time and space as well as the 2-dimensional depth-integrated effects of the 3-dimensional flow pattern through profile functions (Galappatti & Vreugdenhil, 1985). | https://en.wikipedia.org/wiki/MIKE_21C |
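A minimal sketch of a depth-integrated advection-dispersion equation of the kind referred to above, written for one suspended sediment fraction; the exact dispersion and bed-exchange terms used in MIKE 21C are not given in this text, so the form below is only illustrative.

```latex
% c: depth-averaged concentration of one sediment fraction, (u, v): depth-averaged velocities,
% h: water depth, D_x, D_y: dispersion coefficients, S: source/sink term for bed exchange (adaptation).
\frac{\partial (h\,c)}{\partial t}
  + \frac{\partial (u\,h\,c)}{\partial x}
  + \frac{\partial (v\,h\,c)}{\partial y}
  = \frac{\partial}{\partial x}\!\left(h\,D_{x}\,\frac{\partial c}{\partial x}\right)
  + \frac{\partial}{\partial y}\!\left(h\,D_{y}\,\frac{\partial c}{\partial y}\right)
  + h\,S
```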
MIKE BASIN is an extension of ArcMap ( ESRI ) for integrated water resources management and planning. It provides a framework for managers and stakeholders to address multi-sectoral allocation and environmental issues in river basins. It is designed to investigate water sharing issues at international or interstate level, and between competing groups of water users, including the environment. MIKE BASIN is developed by DHI . As of September 2014, MIKE BASIN is no longer available for order or download from DHI. It has been replaced by the application named MIKE HYDRO Basin .
MIKE BASIN can be used for providing solutions and alternatives to water allocation and water shortage problems, improving and optimizing reservoir and hydropower operations, exploring conjunctive use of groundwater and surface water, evaluating and improving irrigation performance, solving multi-criteria optimization problems, establishing cost-effective measures for water quality compliance.
| https://en.wikipedia.org/wiki/MIKE_BASIN |
MIKE FLOOD is a computer program that simulates inundation for rivers, flood plains and urban drainage systems . It dynamically couples 1D ( MIKE 11 and Mouse ) and 2D ( MIKE 21 ) modeling techniques into one single tool. MIKE FLOOD is developed by DHI .
MIKE FLOOD is accepted by the US Federal Emergency Management Agency (FEMA) for use in the National Flood Insurance Program (NFIP).
MIKE FLOOD can be expanded with a range of modules and methods including a flexible mesh overland flow solver, MIKE URBAN, Rainfall-runoff modeling and dynamic operation of structures.
MIKE FLOOD can be used for river-flood plain interaction, integrated urban drainage and river modeling, urban flood analysis and detailed dam break studies.
| https://en.wikipedia.org/wiki/MIKE_FLOOD |
MIKE SHE is an integrated hydrological modelling system for building and simulating surface water flow and groundwater flow. MIKE SHE can simulate the entire land phase of the hydrologic cycle and allows components to be used independently and customized to local needs. MIKE SHE emerged from the Système Hydrologique Européen (SHE), developed and extensively applied from 1977 onwards [ 1 ] by a consortium of three European organizations: the Institute of Hydrology ( United Kingdom ), SOGREAH ( France ) and DHI ( Denmark ). Since then, DHI has continuously invested resources into research and development of MIKE SHE. MIKE SHE can be used for the analysis, planning and management of a wide range of water resources and environmental problems related to surface water and groundwater, especially surface-water impact from groundwater withdrawal, conjunctive use of groundwater and surface water, wetland management and restoration, river basin management and planning, and impact studies for changes in land use and climate .
The program is offered in both 32-bit and 64-bit versions for Microsoft Windows operating systems. | https://en.wikipedia.org/wiki/MIKE_SHE |
In radio , multiple-input and multiple-output ( MIMO ) (/ˈmaɪmoʊ, ˈmiːmoʊ/) is a method for multiplying the capacity of a radio link using multiple transmission and receiving antennas to exploit multipath propagation . [ 1 ] [ 2 ] MIMO has become an essential element of wireless communication standards including IEEE 802.11n (Wi-Fi 4), IEEE 802.11ac (Wi-Fi 5), HSPA+ (3G), WiMAX , and Long Term Evolution (LTE). More recently, MIMO has been applied to power-line communication for three-wire installations as part of the ITU G.hn standard and of the HomePlug AV2 specification. [ 3 ] [ 4 ]
At one time, in wireless the term "MIMO" referred to the use of multiple antennas at the transmitter and the receiver. In modern usage, "MIMO" specifically refers to a class of techniques for sending and receiving more than one data signal simultaneously over the same radio channel by exploiting the difference in signal propagation between different antennas (e.g. due to multipath propagation ). Additionally, modern MIMO usage often refers to multiple data signals sent to different receivers (with one or more receive antennas) though this is more accurately termed multi-user multiple-input single-output (MU-MISO).
MIMO is often traced back to 1970s research papers concerning multi-channel digital transmission systems and interference (crosstalk) between wire pairs in a cable bundle: AR Kaye and DA George (1970), [ 5 ] Brandenburg and Wyner (1974), [ 6 ] and W. van Etten (1975, 1976). [ 7 ] Although these are not examples of exploiting multipath propagation to send multiple information streams, some of the mathematical techniques for dealing with mutual interference proved useful to MIMO development. In the mid-1980s Jack Salz at Bell Laboratories took this research a step further, investigating multi-user systems operating over "mutually cross-coupled linear networks with additive noise sources" such as time-division multiplexing and dually-polarized radio systems. [ 8 ]
Methods were developed to improve the performance of cellular radio networks and enable more aggressive frequency reuse in the early 1990s. Space-division multiple access (SDMA) uses directional or smart antennas to communicate on the same frequency with users in different locations within range of the same base station. An SDMA system was proposed by Richard Roy and Björn Ottersten , researchers at ArrayComm , in 1991. Their US patent (No. 5515378 issued in 1996 [ 9 ] ) describes a method for increasing capacity using "an array of receiving antennas at the base station" with a "plurality of remote users."
Arogyaswami Paulraj and Thomas Kailath proposed an SDMA-based inverse multiplexing technique in 1993. Their US patent (No. 5,345,599 issued in 1994 [ 10 ] ) described a method of broadcasting at high data rates by splitting a high-rate signal "into several low-rate signals" to be transmitted from "spatially separated transmitters" and recovered by the receive antenna array based on differences in "directions-of-arrival." Paulraj was awarded the prestigious Marconi Prize in 2014 for "his pioneering contributions to developing the theory and applications of MIMO antennas. ... His idea for using multiple antennas at both the transmitting and receiving stations – which is at the heart of the current high speed WiFi and 4G mobile systems – has revolutionized high speed wireless." [ 11 ]
In an April 1996 paper and subsequent patent, Greg Raleigh proposed that natural multipath propagation can be exploited to transmit multiple, independent information streams using co-located antennas and multi-dimensional signal processing. [ 12 ] The paper also identified practical solutions for modulation ( MIMO-OFDM ), coding, synchronization, and channel estimation. Later that year (September 1996) Gerard J. Foschini submitted a paper that also suggested it is possible to multiply the capacity of a wireless link using what the author described as "layered space-time architecture." [ 13 ]
Greg Raleigh, V. K. Jones, and Michael Pollack founded Clarity Wireless in 1996, and built and field-tested a prototype MIMO system. [ 14 ] Cisco Systems acquired Clarity Wireless in 1998. [ 15 ] Bell Labs built a laboratory prototype demonstrating its V-BLAST (Vertical-Bell Laboratories Layered Space-Time) technology in 1998. [ 16 ] Arogyaswami Paulraj founded Iospan Wireless in late 1998 to develop MIMO-OFDM products. Iospan was acquired by Intel in 2003. [ 17 ] Neither Clarity Wireless nor Iospan Wireless shipped MIMO-OFDM products before being acquired. [ 18 ]
MIMO technology has been standardized for wireless LANs , 3G mobile phone networks, and 4G mobile phone networks and is now in widespread commercial use. Greg Raleigh and V. K. Jones founded Airgo Networks in 2001 to develop MIMO-OFDM chipsets for wireless LANs. The Institute of Electrical and Electronics Engineers ( IEEE ) created a task group in late 2003 to develop a wireless LAN standard delivering at least 100 Mbit/s of user data throughput. There were two major competing proposals: TGn Sync was backed by companies including Intel and Philips , and WWiSE was supported by companies including Airgo Networks, Broadcom , and Texas Instruments . Both groups agreed that the 802.11n standard would be based on MIMO-OFDM with 20 MHz and 40 MHz channel options. [ 19 ] TGn Sync, WWiSE, and a third proposal (MITMOT, backed by Motorola and Mitsubishi ) were merged to create what was called the Joint Proposal. [ 20 ] In 2004, Airgo became the first company to ship MIMO-OFDM products. [ 21 ] Qualcomm acquired Airgo Networks in late 2006. [ 22 ] The final 802.11n standard supported speeds up to 600 Mbit/s (using four simultaneous data streams) and was published in late 2009. [ 23 ]
Surendra Babu Mandava and Arogyaswami Paulraj founded Beceem Communications in 2004 to produce MIMO-OFDM chipsets for WiMAX . The company was acquired by Broadcom in 2010. [ 24 ] WiMAX was developed as an alternative to cellular standards, is based on the 802.16e standard, and uses MIMO-OFDM to deliver speeds up to 138 Mbit/s. The more advanced 802.16m standard enables download speeds up to 1 Gbit/s. [ 25 ] A nationwide WiMAX network was built in the United States by Clearwire , a subsidiary of Sprint-Nextel , covering 130 million points of presence (PoPs) by mid-2012. [ 26 ] Sprint subsequently announced plans to deploy LTE (the cellular 4G standard) covering 31 cities by mid-2013 [ 27 ] and to shut down its WiMAX network by the end of 2015. [ 28 ]
The first 4G cellular standard was proposed by NTT DoCoMo in 2004. [ 29 ] Long term evolution (LTE) is based on MIMO-OFDM and continues to be developed by the 3rd Generation Partnership Project (3GPP). LTE specifies downlink rates up to 300 Mbit/s, uplink rates up to 75 Mbit/s, and quality of service parameters such as low latency. [ 30 ] LTE Advanced adds support for picocells, femtocells, and multi-carrier channels up to 100 MHz wide. LTE has been embraced by both GSM/UMTS and CDMA operators. [ 31 ]
The first LTE services were launched in Oslo and Stockholm by TeliaSonera in 2009. [ 32 ] As of 2015, there were more than 360 LTE networks in 123 countries operational with approximately 373 million connections (devices). [ 33 ]
MIMO can be sub-divided into three main categories: precoding , spatial multiplexing (SM), and diversity coding .
Precoding is multi-stream beamforming , in the narrowest definition. In more general terms, it is considered to be all spatial processing that occurs at the transmitter. In (single-stream) beamforming, the same signal is emitted from each of the transmit antennas with appropriate phase and gain weighting such that the signal power is maximized at the receiver input. The benefits of beamforming are to increase the received signal gain – by making signals emitted from different antennas add up constructively – and to reduce the multipath fading effect. In line-of-sight propagation , beamforming results in a well-defined directional pattern. However, conventional beams are not a good analogy in cellular networks, which are mainly characterized by multipath propagation . When the receiver has multiple antennas, the transmit beamforming cannot simultaneously maximize the signal level at all of the receive antennas, and precoding with multiple streams is often beneficial. Precoding requires knowledge of channel state information (CSI) at the transmitter and the receiver.
Spatial multiplexing requires MIMO antenna configuration. In spatial multiplexing, a high-rate signal is split into multiple lower-rate streams and each stream is transmitted from a different transmit antenna in the same frequency channel. If these signals arrive at the receiver antenna array with sufficiently different spatial signatures and the receiver has accurate CSI, it can separate these streams into (almost) parallel channels. Spatial multiplexing is a very powerful technique for increasing channel capacity at higher signal-to-noise ratios (SNR). The maximum number of spatial streams is limited by the lesser of the number of antennas at the transmitter or receiver. Spatial multiplexing can be used without CSI at the transmitter, but can be combined with precoding if CSI is available. Spatial multiplexing can also be used for simultaneous transmission to multiple receivers, known as space-division multiple access or multi-user MIMO , in which case CSI is required at the transmitter. [ 34 ] The scheduling of receivers with different spatial signatures allows good separability.
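A minimal NumPy sketch of this idea, assuming perfect channel knowledge at both ends and ignoring noise for clarity: with the singular value decomposition H = U D V^H, precoding by V and combining by U^H turn the MIMO link into (almost) parallel scalar channels, one per spatial stream. The antenna counts, constellation and variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr = 4, 4                       # transmit / receive antennas

# Rayleigh flat-fading channel matrix (illustrative)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

U, d, Vh = np.linalg.svd(H)         # H = U @ diag(d) @ Vh
V = Vh.conj().T

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = rng.choice(qpsk, size=Nt)       # one QPSK symbol per spatial stream

x = V @ s                           # precode: one stream per right singular vector
y = H @ x                           # channel (noiseless here, for clarity)
r = U.conj().T @ y                  # combine with the left singular vectors

# r[k] = d[k] * s[k]: parallel scalar channels whose gains are the singular values
print(np.allclose(r, d * s))        # True
```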
Diversity coding techniques are used when there is no channel knowledge at the transmitter. In diversity methods, a single stream (unlike multiple streams in spatial multiplexing) is transmitted, but the signal is coded using techniques called space-time coding . The signal is emitted from each of the transmit antennas with full or near orthogonal coding. Diversity coding exploits the independent fading in the multiple antenna links to enhance signal diversity. Because there is no channel knowledge, there is no beamforming or array gain from diversity coding.
Diversity coding can be combined with spatial multiplexing when some channel knowledge is available at the receiver.
Multi-antenna MIMO (or single-user MIMO) technology has been developed and implemented in some standards, e.g., 802.11n products.
Third Generation (3G) (CDMA and UMTS) allows for implementing space-time transmit diversity schemes, in combination with transmit beamforming at base stations. Fourth Generation (4G) LTE and LTE Advanced define very advanced air interfaces that rely extensively on MIMO techniques. LTE primarily focuses on single-link MIMO, relying on spatial multiplexing and space-time coding, while LTE-Advanced further extends the design to multi-user MIMO. In wireless local area networks (WLAN), such as IEEE 802.11n (Wi-Fi), MIMO technology is implemented in the standard using three different techniques: antenna selection, space-time coding and possibly beamforming. [ 49 ]
Spatial multiplexing techniques make the receivers very complex, and therefore they are typically combined with orthogonal frequency-division multiplexing (OFDM) or with orthogonal frequency-division multiple access (OFDMA) modulation, where the problems created by a multi-path channel are handled efficiently. The IEEE 802.16e standard incorporates MIMO-OFDMA. The IEEE 802.11n standard, released in October 2009, recommends MIMO-OFDM.
MIMO is used in mobile radio telephone standards such as 3GPP and 3GPP2 . In 3GPP, High-Speed Packet Access plus (HSPA+) and Long Term Evolution (LTE) standards take MIMO into account. Moreover, to fully support cellular environments, MIMO research consortia including IST-MASCOT propose to develop advanced MIMO techniques, e.g., multi-user MIMO (MU-MIMO).
MIMO wireless communications architectures and processing techniques can be applied to sensing problems. This is studied in a sub-discipline called MIMO radar .
MIMO technology can be used in non-wireless communications systems. One example is the home networking standard ITU-T G.9963 , which defines a powerline communications system that uses MIMO techniques to transmit multiple signals over multiple AC wires (phase, neutral and ground). [ 3 ]
In MIMO systems, a transmitter sends multiple streams by multiple transmit antennas. The transmit streams go through a matrix channel which consists of all {\displaystyle N_{t}N_{r}} paths between the {\displaystyle N_{t}} transmit antennas at the transmitter and the {\displaystyle N_{r}} receive antennas at the receiver. The receiver then gets the received signal vectors from the multiple receive antennas and decodes them into the original information. A narrowband flat fading MIMO system is modeled as: [ citation needed ]

{\displaystyle \mathbf {y} =\mathbf {H} \mathbf {x} +\mathbf {n} }

where {\displaystyle \mathbf {y} } and {\displaystyle \mathbf {x} } are the receive and transmit vectors, respectively, and {\displaystyle \mathbf {H} } and {\displaystyle \mathbf {n} } are the channel matrix and the noise vector, respectively.
Referring to information theory , the ergodic channel capacity of MIMO systems where both the transmitter and the receiver have perfect instantaneous channel state information is [ 51 ]

{\displaystyle C_{\text{perfect-CSI}}=E\left[\max _{\mathbf {Q} :\;\operatorname {tr} (\mathbf {Q} )\leq 1}\log _{2}\det \left(\mathbf {I} +\rho \mathbf {H} \mathbf {Q} \mathbf {H} ^{H}\right)\right]}

where {\displaystyle ()^{H}} denotes Hermitian transpose and {\displaystyle \rho } is the ratio between transmit power and noise power (i.e., transmit SNR ). The optimal signal covariance {\displaystyle \mathbf {Q} =\mathbf {VSV} ^{H}} is achieved through singular value decomposition of the channel matrix {\displaystyle \mathbf {UDV} ^{H}\,=\,\mathbf {H} } and an optimal diagonal power allocation matrix {\displaystyle \mathbf {S} ={\textrm {diag}}(s_{1},\ldots ,s_{\min(N_{t},N_{r})},0,\ldots ,0)} . The optimal power allocation is achieved through waterfilling , [ 52 ] that is

{\displaystyle s_{i}=\left(\mu -{\frac {N_{t}}{\rho d_{i}^{2}}}\right)^{+},\qquad i=1,\ldots ,\min(N_{t},N_{r}),}

where {\displaystyle d_{1},\ldots ,d_{\min(N_{t},N_{r})}} are the diagonal elements of {\displaystyle \mathbf {D} } , {\displaystyle (\cdot )^{+}} is zero if its argument is negative, and {\displaystyle \mu } is selected such that {\displaystyle s_{1}+\ldots +s_{\min(N_{t},N_{r})}=N_{t}} .
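A minimal Python sketch of the water-filling allocation described above, following the normalisation used in this text (allocated powers summing to N_t); the channel singular values, SNR and bisection tolerance are illustrative.

```python
import numpy as np

def waterfilling(d, rho, Nt, iters=60):
    """Allocate s_i = max(mu - Nt/(rho*d_i**2), 0) with sum(s_i) = Nt, via bisection on mu."""
    gains = rho * d**2 / Nt                    # effective gain of each eigen-channel
    lo, hi = 0.0, Nt + 1.0 / gains.min()       # mu = hi guarantees sum(s) >= Nt
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        s = np.maximum(mu - 1.0 / gains, 0.0)
        if s.sum() < Nt:
            lo = mu                            # water level too low
        else:
            hi = mu                            # water level too high (or exact)
    mu = 0.5 * (lo + hi)
    return np.maximum(mu - 1.0 / gains, 0.0)

d = np.array([1.8, 1.1, 0.4])                  # singular values of the channel (illustrative)
rho, Nt = 10.0, 3
s = waterfilling(d, rho, Nt)
capacity = np.sum(np.log2(1.0 + (rho / Nt) * s * d**2))   # bits/s/Hz over the active modes
print(s.round(3), round(float(capacity), 2))   # weak modes get little or no power
```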
If the transmitter has only statistical channel state information , then the ergodic channel capacity will decrease as the signal covariance {\displaystyle \mathbf {Q} } can only be optimized in terms of the average mutual information as [ 51 ]

{\displaystyle C_{\text{statistical-CSI}}=\max _{\mathbf {Q} }E\left[\log _{2}\det \left(\mathbf {I} +\rho \mathbf {H} \mathbf {Q} \mathbf {H} ^{H}\right)\right].}
The spatial correlation of the channel has a strong impact on the ergodic channel capacity with statistical information.
If the transmitter has no channel state information it can select the signal covariance {\displaystyle \mathbf {Q} } to maximize channel capacity under worst-case statistics, which means {\displaystyle \mathbf {Q} =1/N_{t}\mathbf {I} } and accordingly

{\displaystyle C_{\text{no-CSI}}=E\left[\log _{2}\det \left(\mathbf {I} +{\frac {\rho }{N_{t}}}\mathbf {H} \mathbf {H} ^{H}\right)\right].}

Depending on the statistical properties of the channel, the ergodic capacity is no greater than {\displaystyle \min(N_{t},N_{r})} times larger than that of a SISO system.
A fundamental problem in MIMO communication is estimating the transmit vector, {\displaystyle \mathbf {x} } , given the received vector, {\displaystyle \mathbf {y} } . This can be posed as a statistical detection problem and addressed using a variety of techniques including zero-forcing, [ 53 ] successive interference cancellation (also known as V-BLAST ), maximum likelihood estimation and, more recently, neural network MIMO detection. [ 54 ] Such techniques commonly assume that the channel matrix {\displaystyle \mathbf {H} } is known at the receiver. In practice, in communication systems, the transmitter sends a pilot signal and the receiver learns the state of the channel (i.e., {\displaystyle \mathbf {H} } ) from the received signal {\displaystyle Y} and the pilot signal {\displaystyle X} . Recent work on MIMO detection using deep learning tools has shown performance better than that of other methods such as zero-forcing. [ 55 ]
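A minimal NumPy sketch of zero-forcing detection under the model y = Hx + n, assuming H is known at the receiver (e.g. from pilots); the antenna counts, constellation and noise level are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, snr_db = 2, 4, 20

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, size=Nt)                          # transmitted symbols

noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
n = noise_std * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ x + n                                          # received vector

x_zf = np.linalg.pinv(H) @ y                           # zero-forcing: pseudo-inverse of the channel
x_hat = qpsk[np.argmin(np.abs(x_zf[:, None] - qpsk[None, :]), axis=1)]  # nearest-symbol decision
print(np.array_equal(x_hat, x))                        # usually True at this SNR
```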
MIMO signal testing focuses first on the transmitter/receiver system. The random phases of the sub-carrier signals can produce instantaneous power levels that cause the amplifier to compress, momentarily causing distortion and ultimately symbol errors. Signals with a high PAR ( peak-to-average ratio ) can cause amplifiers to compress unpredictably during transmission. OFDM signals are very dynamic and compression problems can be hard to detect because of their noise-like nature. [ 56 ]
Knowing the quality of the signal channel is also critical. A channel emulator can simulate how a device performs at the cell edge, can add noise or can simulate what the channel looks like at speed. To fully qualify the performance of a receiver, a calibrated transmitter, such as a vector signal generator (VSG), and channel emulator can be used to test the receiver under a variety of different conditions. Conversely, the transmitter's performance under a number of different conditions can be verified using a channel emulator and a calibrated receiver, such as a vector signal analyzer (VSA).
Understanding the channel allows for manipulation of the phase and amplitude of each transmitter in order to form a beam. To correctly form a beam, the transmitter needs to understand the characteristics of the channel. This process is called channel sounding or channel estimation . A known signal is sent to the mobile device that enables it to build a picture of the channel environment. The mobile device sends back the channel characteristics to the transmitter. The transmitter can then apply the correct phase and amplitude adjustments to form a beam directed at the mobile device. This is called a closed-loop MIMO system. For beamforming , it is required to adjust the phases and amplitude of each transmitter. In a beamformer optimized for spatial diversity or spatial multiplexing, each antenna element simultaneously transmits a weighted combination of two data symbols. [ 57 ]
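A minimal sketch of closed-loop transmit beamforming toward a single-antenna receiver, assuming the channel vector has already been estimated via sounding as described above: conjugate (matched) weights co-phase the per-antenna signals so they add constructively at the receiver. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
Nt = 4

# Channel from each transmit antenna to the single receive antenna
# (assumed known at the transmitter after channel sounding / feedback).
h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)

w = h.conj() / np.linalg.norm(h)      # matched-filter beamforming weights, unit total power

s = 1.0 + 0.0j                        # one data symbol
received_bf = h @ (w * s)             # beamformed: coherent sum, magnitude ||h||
received_plain = h @ (np.ones(Nt) / np.sqrt(Nt) * s)  # equal, un-phased weights

print(abs(received_bf), abs(received_plain))  # beamformed amplitude >= un-phased amplitude
```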
Papers by Gerard J. Foschini and Michael J. Gans, [ 58 ] Foschini [ 59 ] and Emre Telatar [ 60 ] have shown that the channel capacity (a theoretical upper bound on system throughput) for a MIMO system is increased as the number of antennas is increased, proportional to the smaller of the number of transmit antennas and the number of receive antennas. This is known as the multiplexing gain and this basic finding in information theory is what led to a spurt of research in this area. Despite the simple propagation models used in the aforementioned seminal works, the multiplexing gain is a fundamental property that can be proved under almost any physical channel propagation model and with practical hardware that is prone to transceiver impairments. [ 61 ]
A textbook by A. Paulraj, R. Nabar and D. Gore provides an introduction to this area. [ 62 ] There are many other principal textbooks available as well. [ 63 ] [ 64 ] [ 65 ]
There exists a fundamental tradeoff between transmit diversity and spatial multiplexing gains in a MIMO system (Zheng and Tse, 2003). [ 66 ] In particular, achieving high spatial multiplexing gains is of profound importance in modern wireless systems. [ 67 ]
Given the nature of MIMO, it is not limited to wireless communication; it can be used for wireline communication as well. For example, a new type of DSL technology (gigabit DSL) has been proposed based on binder MIMO channels.
An important question which attracts the attention of engineers and mathematicians is how to use the multi-output signals at the receiver to recover the multi-input signals at the transmitter. In Shang, Sun and Zhou (2007), necessary and sufficient conditions are established to guarantee the complete recovery of the multi-input signals. [ 68 ] | https://en.wikipedia.org/wiki/MIMO |
Multiple-input, multiple-output orthogonal frequency-division multiplexing ( MIMO-OFDM ) is the dominant air interface for 4G and 5G broadband wireless communications. It combines multiple-input, multiple-output ( MIMO ) technology, which multiplies capacity by transmitting different signals over multiple antennas, and orthogonal frequency-division multiplexing (OFDM), which divides a radio channel into a large number of closely spaced subchannels to provide more reliable communications at high speeds. Research conducted during the mid-1990s showed that while MIMO can be used with other popular air interfaces such as time-division multiple access (TDMA) and code-division multiple access (CDMA), the combination of MIMO and OFDM is most practical at higher data rates. [ citation needed ]
MIMO-OFDM is the foundation for most advanced wireless local area network ( wireless LAN ) and mobile broadband network standards because it achieves the greatest spectral efficiency and, therefore, delivers the highest capacity and data throughput. Greg Raleigh invented MIMO in 1996 when he showed that different data streams could be transmitted at the same time on the same frequency by taking advantage of the fact that signals transmitted through space bounce off objects (such as the ground) and take multiple paths to the receiver. That is, by using multiple antennas and precoding the data, different data streams could be sent over different paths. Raleigh suggested and later proved that the processing required by MIMO at higher speeds would be most manageable using OFDM modulation, because OFDM converts a high-speed data channel into a number of parallel lower-speed channels.
In modern usage, the term "MIMO" indicates more than just the presence of multiple transmit antennas (multiple input) and multiple receive antennas (multiple output). While multiple transmit antennas can be used for beamforming , and multiple receive antennas can be used for diversity , the word "MIMO" refers to the simultaneous transmission of multiple signals ( spatial multiplexing ) to multiply spectral efficiency (capacity).
Traditionally, radio engineers treated natural multipath propagation as an impairment to be mitigated. MIMO is the first radio technology that treats multipath propagation as a phenomenon to be exploited. MIMO multiplies the capacity of a radio link by transmitting multiple signals over multiple, co-located antennas. This is accomplished without the need for additional power or bandwidth. Space–time codes are employed to ensure that the signals transmitted over the different antennas are orthogonal to each other, making it easier for the receiver to distinguish one from another. Even when there is line of sight access between two stations, dual antenna polarization may be used to ensure that there is more than one robust path.
OFDM enables reliable broadband communications by distributing user data across a number of closely spaced, narrowband subchannels. [ 1 ] This arrangement makes it possible to eliminate the biggest obstacle to reliable broadband communications, intersymbol interference (ISI). ISI occurs when the overlap between consecutive symbols is large compared to the symbols’ duration. Normally, high data rates require shorter duration symbols, increasing the risk of ISI. By dividing a high-rate data stream into numerous low-rate data streams, OFDM enables longer duration symbols. A cyclic prefix (CP) may be inserted to create a (time) guard interval that prevents ISI entirely. If the guard interval is longer than the delay spread —the difference in delays experienced by symbols transmitted over the channel—then there will be no overlap between adjacent symbols and consequently no intersymbol interference. Though the CP slightly reduces spectral capacity by consuming a small percentage of the available bandwidth, the elimination of ISI makes it an exceedingly worthwhile tradeoff.
A key advantage of OFDM is that fast Fourier transforms (FFTs) may be used to simplify implementation. Fourier transforms convert signals back and forth between the time domain and frequency domain. Consequently, Fourier transforms can exploit the fact that any complex waveform may be decomposed into a series of simple sinusoids. In signal processing applications, discrete Fourier transforms (DFTs) are used to operate on real-time signal samples. DFTs may be applied to composite OFDM signals, avoiding the need for the banks of oscillators and demodulators associated with individual subcarriers. Fast Fourier transforms are numerical algorithms used by computers to perform DFT calculations. [ 2 ]
FFTs also enable OFDM to make efficient use of bandwidth. The subchannels must be spaced apart in frequency just enough to ensure that their time-domain waveforms are orthogonal to each other. In practice, this means that the subchannels are allowed to partially overlap in frequency.
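A minimal NumPy sketch of the OFDM transmit/receive chain described above, assuming an idealised noiseless multipath channel: an inverse FFT maps symbols onto orthogonal subcarriers, a cyclic prefix longer than the channel delay spread absorbs ISI, and an FFT plus one-tap equaliser recovers the subcarrier symbols. Sizes and channel taps are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub, cp_len = 64, 16                        # subcarriers and cyclic-prefix length

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = rng.choice(qpsk, size=n_sub)              # frequency-domain symbols, one per subcarrier

x = np.fft.ifft(X) * np.sqrt(n_sub)           # time-domain OFDM symbol
tx = np.concatenate([x[-cp_len:], x])         # prepend the cyclic prefix

h = np.array([0.8, 0.0, 0.4, 0.2j])           # multipath channel, delay spread < cp_len
rx = np.convolve(tx, h)[: len(tx)]            # channel (no noise, for clarity)

y = rx[cp_len:cp_len + n_sub]                 # drop the cyclic prefix
Y = np.fft.fft(y) / np.sqrt(n_sub)            # back to the frequency domain
Hf = np.fft.fft(h, n_sub)                     # per-subcarrier channel response
X_hat = Y / Hf                                # one-tap equalisation per subcarrier

print(np.allclose(X_hat, X))                  # True: the CP turned the dispersive channel
                                              # into independent flat subchannels
```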
MIMO-OFDM is a particularly powerful combination because MIMO does not attempt to mitigate multipath propagation and OFDM avoids the need for signal equalization . MIMO-OFDM can achieve very high spectral efficiency even when the transmitter does not possess channel state information (CSI). When the transmitter does possess CSI (which can be obtained through the use of training sequences), it is possible to approach the theoretical channel capacity. CSI may be used, for example, to allocate different size signal constellations to the individual subcarriers, making optimal use of the communications channel at any given moment of time.
More recent MIMO-OFDM developments include multi-user MIMO (MU-MIMO), higher order MIMO implementations (greater number of spatial streams), and research concerning massive MIMO and cooperative MIMO (CO-MIMO) for inclusion in coming 5G standards.
MU-MIMO is part of the IEEE 802.11ac standard, the first Wi-Fi standard to offer speeds in the gigabit per second range. MU-MIMO enables an access point (AP) to transmit to up to four client devices simultaneously. This eliminates contention delays, but requires frequent channel measurements to properly direct the signals. Each user may employ up to four of the available eight spatial streams. For example, an AP with eight antennas can talk to two client devices with four antennas, providing four spatial streams to each. Alternatively, the same AP can talk to four client devices with two antennas each, providing two spatial streams to each. [ 3 ]
Multi-user MIMO beamforming even benefits single spatial stream devices. Prior to MU-MIMO beamforming, an access point communicating with multiple client devices could only transmit to one at a time. With MU-MIMO beamforming, the access point can transmit to up to four single stream devices at the same time on the same channel.
The 802.11ac standard also supports speeds up to 6.93 Gbit/s using eight spatial streams in single-user mode. The maximum data rate assumes use of the optional 160 MHz channel in the 5 GHz band and 256 QAM (quadrature amplitude modulation). Chipsets supporting six spatial streams have been introduced and chipsets supporting eight spatial streams are under development.
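A short back-of-the-envelope check of the 6.93 Gbit/s figure, assuming the parameter values commonly quoted for the 802.11ac rate tables (468 data subcarriers in a 160 MHz channel, 256-QAM with rate-5/6 coding, 3.6 µs symbols including the short guard interval, 8 spatial streams); these values should be verified against the standard itself.

```python
# Peak 802.11ac PHY rate, single user, 160 MHz channel, short guard interval
data_subcarriers = 468          # per 160 MHz channel (assumed value)
bits_per_subcarrier = 8 * 5 / 6 # 256-QAM (8 bits) with rate-5/6 coding
symbol_duration = 3.6e-6        # seconds, including the 0.4 us short guard interval
spatial_streams = 8

rate = data_subcarriers * bits_per_subcarrier * spatial_streams / symbol_duration
print(f"{rate / 1e9:.2f} Gbit/s")   # ~6.93 Gbit/s
```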
Massive MIMO consists of a large number of base station antennas operating in a MU-MIMO environment. [ 4 ] While LTE networks already support handsets using two spatial streams, and handset antenna designs capable of supporting four spatial streams have been tested, massive MIMO can deliver significant capacity gains even to single spatial stream handsets. Again, MU-MIMO beamforming is used to enable the base station to transmit independent data streams to multiple handsets on the same channel at the same time. However, one question still to be answered by research is: When is it best to add antennas to the base station and when is it best to add small cells?
Another focus of research for 5G wireless is CO-MIMO. In CO-MIMO, clusters of base stations work together to boost performance. This can be done using macro diversity for improved reception of signals from handsets or multi-cell multiplexing to achieve higher downlink data rates. However, CO-MIMO requires high-speed communication between the cooperating base stations.
Gregory Raleigh was first to advocate the use of MIMO in combination with OFDM. In a theoretical paper, he proved that with the proper type of MIMO system—multiple, co-located antennas transmitting and receiving multiple information streams using multidimensional coding and encoding—multipath propagation could be exploited to multiply the capacity of a wireless link. [ 5 ] Up to that time, radio engineers tried to make real-world channels behave like ideal channels by mitigating the effects of multipath propagation. However, mitigation strategies have never been fully successful. In order to exploit multipath propagation, it was necessary to identify modulation and coding techniques that perform robustly over time-varying, dispersive, multipath channels. Raleigh published additional research on MIMO-OFDM under time-varying conditions, MIMO-OFDM channel estimation, MIMO-OFDM synchronization techniques, and the performance of the first experimental MIMO-OFDM system. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
Raleigh solidified the case for OFDM by analyzing the performance of MIMO with three leading modulation techniques in his PhD dissertation: quadrature amplitude modulation (QAM), direct sequence spread spectrum (DSSS), and discrete multi-tone (DMT). [ 10 ] QAM is representative of narrowband schemes such as TDMA that use equalization to combat ISI. DSSS uses rake receivers to compensate for multipath and is used by CDMA systems. DMT uses interleaving and coding to eliminate ISI and is representative of OFDM systems. The analysis was performed by deriving the MIMO channel matrix models for the three modulation schemes, quantifying the computational complexity and assessing the channel estimation and synchronization challenges for each. The models showed that for a MIMO system using QAM with an equalizer or DSSS with a rake receiver, computational complexity grows quadratically as data rate is increased. In contrast, when MIMO is used with DMT, computational complexity grows log-linearly (i.e., n log n) as data rate is increased.
Raleigh subsequently founded Clarity Wireless in 1996 and Airgo Networks in 2001 to commercialize the technology. Clarity developed specifications in the Broadband Wireless Internet Forum (BWIF) that led to the IEEE 802.16 (commercialized as WiMAX ) and LTE standards, both of which support MIMO. Airgo designed and shipped the first MIMO-OFDM chipsets for what became the IEEE 802.11n standard. MIMO-OFDM is also used in the 802.11ac standard and is expected to play a major role in 802.11ax and fifth generation ( 5G ) mobile phone systems.
Several early papers on multi-user MIMO were authored by Ross Murch et al. at Hong Kong University of Science and Technology. [ 11 ] MU-MIMO was included in the 802.11ac standard (developed starting in 2011 and approved in 2014). MU-MIMO capacity appears for the first time in what have become known as "Wave 2" products. Qualcomm announced chipsets supporting MU-MIMO in April 2014. [ 12 ]
Broadcom introduced the first 802.11ac chipsets supporting six spatial streams for data rates up to 3.2 Gbit/s in April 2014. Quantenna says it is developing chipsets to support eight spatial streams for data rates up to 10 Gbit/s. [ 13 ]
Massive MIMO, Cooperative MIMO (CO-MIMO), and HetNets (heterogeneous networks) are currently the focus of research concerning 5G wireless. The development of 5G standards is expected to begin in 2016. Prominent researchers to date include Jakob Hoydis (of Alcatel-Lucent), Robert W. Heath (at the University of Texas at Austin), Helmut Bölcskei (at ETH Zurich), and David Gesbert (at EURECOM). [ 14 ] [ 15 ] [ 16 ] [ 17 ]
Trials of 5G technology have been conducted by Samsung. [ 18 ] Japanese operator NTT DoCoMo plans to trial 5G technology in collaboration with Alcatel-Lucent, Ericsson, Fujitsu, NEC, Nokia, and Samsung. [ 19 ] | https://en.wikipedia.org/wiki/MIMO-OFDM |
MINAS is a database of metal ions in nucleic acids . [ 1 ] [ 2 ] [ 3 ]
| https://en.wikipedia.org/wiki/MINAS |
MINERVA-Australis (stylized with "INERVA" in small capitals) is a dedicated exoplanet observatory, [ 1 ] operated by the University of Southern Queensland , in Queensland , Australia. The facility is located at USQ's Mount Kent Observatory , and saw first light in the second quarter of 2018. Commissioning of the facility was completed in mid-2019, and the facility was officially launched on 23 July 2019. [ 2 ]
The facility follows the innovative model first deployed in the northern hemisphere's Miniature Exoplanet Radial Velocity Array ( MINERVA ), [ 3 ] a northern hemisphere exoplanet facility located at the U.S. Fred Lawrence Whipple Observatory at Mt. Hopkins, Arizona. MINERVA-Australis is being used to perform follow-up and characterisation observations of exoplanets discovered by NASA's Transiting Exoplanet Survey Satellite (TESS), which was launched in April, 2018.
The project's principal investigator is USQ astronomer Rob Wittenmyer , who leads a consortium of partners from institutions across the world (UNSW Australia; Nanjing University; University of California, Riverside; MIT; George Mason University ; University of Louisville; University of Texas at Austin; University of Florida).
The primary mission of MINERVA-Australis is to support observations carried out by the NASA TESS spacecraft, providing dedicated follow-up and characterisation of newly discovered exoplanets . During commissioning, the facility was used to pursue targets of opportunity, and to carry out work extending the baseline of the Anglo-Australian Planet Search program . MINERVA-Australis allows researchers to obtain precise radial velocity observations for target stars, enabling the masses of planets discovered by the TESS spacecraft to be directly measured, and has recently demonstrated a radial velocity precision of approximately 1 m/s.
In addition to providing high precision velocity measurements, MINERVA-Australis will also offer high-cadence photometric observations. This is to facilitate direct follow-up transit observations of TESS candidate planets (particularly those in fields from which TESS has moved on). It can also enable the observation of occultation events and other transient targets of opportunity .
During commissioning, observations made by the MINERVA-Australis array contributed to the discovery of 13 new exoplanets, working in collaboration with researchers at institutions across the globe.
MINERVA-Australis currently consists of four PlaneWave CDK700 telescopes . [ 4 ] [ failed verification ] Five CDK700s were available for the period July 2019 – April 2020, while a donated CDK700 telescope was waiting for another project to start. [ citation needed ] These 0.7 m telescopes have two ports, allowing each to be used for either spectroscopic or photometric observations. Each telescope sits in its own automated clam-shell Astrohaven dome, [ 5 ] [ failed verification ] distributed in an approximate semi-circle around the main observatory building.
Photometric work is to be carried out using Andor cameras, [ 6 ] with 2k x 2k back-illuminated CCDs with 15 μm pixels. These cameras offer an effective field of view greater than 20 arcminutes .
The telescopes are connected by optical fibre to a stabilised, R = 75,000 echelle spectrograph , covering wavelengths of 480–630 nm, designed by KiwiStar Optics. The spectrograph uses simultaneous calibration in a separate fibre. Prior to 2020, the simultaneous calibration was provided by a thorium-argon lamp. Because of the wavelength range and the very low scattered light, the simultaneous calibration source is now supplied by a tungsten slit-flat lamp backlighting an iodine cell . This is a different approach from the normal iodine cell method, which passes the starlight itself through an iodine cell. | https://en.wikipedia.org/wiki/MINERVA-Australis |
The Minimum Information for Publication of Quantitative Real-Time PCR Experiments ( MIQE ) guidelines are a set of protocols for conducting and reporting quantitative real-time PCR experiments and data, as devised by Bustin et al. in 2009. [ 1 ] They were devised after a paper was published in 2002 that claimed to detect measles virus in children with autism through the use of RT-qPCR, but the results proved to be completely unreproducible by other scientists. [ 2 ] The authors themselves also did not try to reproduce them and the raw data was found to have a large amount of errors and basic mistakes in analysis. This incident prompted Stephen Bustin to create the MIQE guidelines to provide a baseline level of quality for qPCR data published in scientific literature. [ 2 ]
The MIQE guidelines were created due to the low quality of qPCR data submitted to academic journals at the time, which was only becoming more common as next-generation sequencing machinery allowed such experiments to be run at a lower cost. Because the technique is utilized across many fields of science, the instruments, methods, and designs of how qPCR is used differ greatly. To help improve overall quality, the MIQE guidelines were made as generalized suggestions on basic experimental procedures and the forms of data that should be collected as a minimum level of reported information for other researchers to understand and use when reading the published material. Setting up a recognized and largely agreed-upon set of guidelines such as these was deemed important by the scientific community, especially due to the ever-increasing amount of scientific work coming from developing countries with many different languages and protocols. [ 3 ]
In 2009, Stephen Bustin led an international group of scientists including Mikael Kubista to put together a set of guidelines on how to perform qPCR and what forms of data should be collected and published in the process. [ 1 ] This also allowed editors and reviewers of scientific journals to employ the guidelines when looking over a submitted paper that included qPCR data. Thus, the guidelines were set up as a sort of checklist for each step of the procedure with certain items being marked as essential (E) when submitting data for publication and others marked as just desirable (D). [ 4 ]
An additional version of the guidelines was published in September 2010 for use with fluorescence-based quantitative real-time PCR. It also acted as a précis for the broader form of the guidelines. [ 5 ] Other researchers have been creating further versions for specific forms of qPCR that may require a supplementary or different set of items to check, including single-cell qPCR [ 6 ] and digital PCR (dPCR). [ 7 ] Appropriate adherence to the existing MIQE guidelines has also been overviewed in other scientific areas, including photobiomodulation [ 8 ] and clinical biomarkers . [ 9 ]
It was noted by Bustin in 2014 (and again by him in 2017) that there was some amount of uptake and usage of the MIQE guidelines within the scientific community, but there were still far too many published papers with qPCR experiments that lacked even the most basic of data presentation and proper confirmation of effectiveness for said data. These studies retained major reproducibility issues, where the conclusions of their evidence could not be replicated by other researchers, throwing the initial results into doubt. All of this was despite many papers directly citing Bustin's original MIQE publication, but not following through on the guideline checklist of material in their own experiments. [ 2 ] [ 10 ] However, some researchers have pointed out at least some success, with a number of papers being rejected by academic journals for publication due to failing to pass MIQE checklists. Other studies have been retracted after the fact once their lack of proper data to pass the MIQE guidelines was noted and publicly pointed out to the journal editors. [ 11 ]
When setting up their new comparative qPCR systems titled "Dots in Boxes" in 2017, New England Biolabs stated that they had designed the data collection portion around the MIQE guidelines so that the data fit all the minimum parameter checklists in the protocols. [ 12 ] Other scientific instrument companies have assisted in guideline compliance by purposefully tailoring their devices for them, including Bio-Rad creating a mobile app that allows for active marking of the MIQE checklist as each step is completed. [ 13 ]
An overview marking the 10th anniversary of the publication of the MIQE guidelines was conducted in June 2020 and discussed the scientific studies that had produced better and more organized results when following the guidelines. [ 14 ] In August 2020, an updated version of the guidelines for the digital PCR method was published to account for improvements in machinery, technologies, and techniques since the original 2013 release. Additional guideline steps were added for data analysis, along with a more simplified checklist table for researchers to use. [ 15 ] In December 2020, an RT-qPCR targeting assay was developed alongside Stephen Bustin, using the MIQE guidelines for clinical biomarkers, in order to identify the clinical presence of SARS-CoV-2 viral particles during the COVID-19 pandemic . [ 16 ]
The MIQE guidelines are split up into 9 different sections that make up the checklist. These include not only considerations for doing the qPCR itself, but also how the resulting data is collected, analyzed, and presented. An important part of the latter is including information relating to the analysis software used and also submitting the raw data to the relevant databases. [ 1 ]
Large portions of the guidelines include basic actions that would normally be included in experiments and publications regardless, such as an item for describing the experimental and control group differences. Other such information includes how many individual units are used in each group in the experiment. These two pieces are defined as essential for any study. This section also includes two desirable points, which are pointing out whether the author's laboratory itself or a core laboratory of the university or organization conducted the qPCR assay and an acknowledgement of any other individuals that contributed to the work. [ 1 ]
The essential requirements that samples and sample material must meet include a description of the sample, what form of dissection was used, what processing method was done, whether the samples were frozen or fixed and how long that took, and what sample conditions were used. It is also desirable to know the volume or mass of the sample that was processed for the qPCR. [ 1 ]
For the process of extracting the DNA/RNA, there are a number of essential guidelines. This includes a description of the extraction process done, a statement on what DNA extraction kit was used and any changes made to the directions, details on whether any DNase or RNase treatment was used, a statement on whether any contamination was assessed, a quantification of the amount of genetic material extracted, a description of the instruments used for the extraction, the methods used to retain RNA integrity, a statement on the RNA integrity number and quality indicator and the quantification cycle (Cq) reached, and lastly what testing was done to determine the presence or absence of inhibitors . Four desired pieces of information are where the reagents used were obtained from, what level of genetic purity was obtained, what yield was obtained, and an electrophoresis gel image for confirmation. [ 1 ]
The primary essential parts for this phase include detailing the reaction conditions in full, giving both the amount of RNA used and the total volume of the reaction, give information on the oligonucleotide used as a primer and its concentration, the concentration and type of reverse transcriptase used, and lastly the temperature and amount of time done for the reaction. It is also desirable to have the catalog numbers of reagents used and their manufacturers, the standard deviation for the Cq with and without the transcriptase being involved, and how the cDNA was stored. [ 1 ]
All of the basic information regarding the target is necessary here, including the gene symbol, the accession database number for the sequence in question, the length of the sequence being amplified, information about the specificity screen used such as BLAST , what splicing variants exist for the sequence, and where the exon or intron for each primer is. There are several desired, but not required information pieces for this section, such as the location of the amplicon , whether any pseudogenes or homologs exist, whether a sequence alignment was done and the data obtained from it, and any data on the secondary structure of the amplified sequence. [ 1 ]
Creation of the oligonucleotides requires only two pieces of essential information: the primer sequences used and the location and details of any modifications made to the sequence. But there are several desirable pieces of data, including the identification number from the RTPrimerDB database, the sequences from the probes , the manufacturer used to make the oligos, and how they were purified. [ 1 ]
As one of the primary segments of the guidelines, there are several essential parts on the checklist for the qPCR process itself. This includes the full set of conditions used for the reaction, the volume of both the reaction and the cDNA, the concentrations for the probes, magnesium ions, and dNTPs , what kind of polymerase was used and its concentration, what kit was used and its manufacturer, what additives to the reaction were used, who manufactured the qPCR machine, and what parameters were set for the thermocycling process. The only additional desired pieces of information are the chemical composition of the buffer used, who manufactured the plates and tubes used and what their catalog number is, and whether the reaction was set up manually or by a machine. [ 1 ] [ 17 ]
In order to confirm the effectiveness and quality of the qPCR process that was performed, there are several actions and subsequent data that must be presented. This includes explaining the specific method of checking that the process functioned, such as using a gel, direct sequencing of the genetic material, showing a melt profile , or from digestion by restriction enzyme . If SYBR Green I was used, then the Cq of the control group with no template DNA must be given. Further essential data includes the calibration of the machine curves with the slope and y intercept noted, the efficiency of the PCR process as determined from the aforementioned slope, the correlation coefficients (r squared) for the calibration curves, the dynamic range of the linear curves, the Cq found at the lowest concentration where 95% of the results were still positive ( LOD ) along with the evidence for the LOD itself, and lastly if a multiplex is used, then the efficiency and LOD must be given for each assay done. [ 1 ] [ 17 ]
The extra desired information includes evidence given that qPCR optimization occurred by the use of gradients, the confidence intervals to show efficiency of the qPCR, and the confidence intervals for the entire range tested. [ 1 ]
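For reference, the amplification efficiency mentioned above is conventionally derived from the slope of the calibration curve (Cq versus log10 of the template amount); a short illustration follows, with a made-up slope value.

```python
slope = -3.32                        # slope of the Cq vs. log10(template amount) calibration curve
efficiency = 10 ** (-1 / slope) - 1  # E = 10^(-1/slope) - 1
print(f"amplification efficiency = {efficiency:.1%}")  # ~100% means the template doubles each cycle
```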
The final section of the guidelines involves information on how the analysis of the qPCR data was done. The essential parts of that include the program and program version used for the analysis, the method by which the Cq was determined, how outlier points in the data were identified and how and why they were used or excluded, what results were found for the controls with no template genetic material, an explanation of why the reference genes and their number were chosen, the method used to normalize the data, how many technical replicates were included, how repeatable the data were within the assays, what methods were used to determine the significance of the results, and what software was used for this part of the analysis. [ 1 ]
It is also desired to include information on the number of biological replicates and whether they matched the results from the technical replicates, the reproducibility data for the concentration variants, data on the power analysis , and lastly for the researchers to submit the raw data in the RDML file format. [ 1 ] [ 18 ] | https://en.wikipedia.org/wiki/MIQE |
The MIRIAM Registry , a by-product of the MIRIAM Guidelines, is a database of namespaces and associated information that is used in the creation of uniform resource identifiers . It contains the set of community-approved namespaces for databases and resources serving, primarily, the biological sciences domain. These shared namespaces, when combined with 'data collection' identifiers, can be used to create globally unique identifiers for knowledge held in data repositories. For more information on the use of URIs to annotate models, see the specification of SBML Level 2 Version 2 (and above).
A 'data collection' is defined as a set of data which is generated by a provider. A 'resource' is defined as a distributor of that data. Such a description allows numerous resources to be associated with a single collection, allowing accurate representation of how biological information is available on the World Wide Web; often the same information, from a single data collection, may be mirrored by different resources, or the core information may be supplemented with other data.
The MIRIAM Registry is a curated resource, which is freely available and open to all. Submissions for new collections can be made through the website. [ 1 ]
The MIRIAM Guidelines require the use of uniform resource identifiers in the annotation of model components. These are created using the shared list of namespaces defined in the MIRIAM Registry.
Using the namespaces defined in the MIRIAM Registry, it is possible to create identifiers in both URN and URL forms. This requires a unique collection-specific identifier, as well as a namespace to globally constrain the information space. Both the namespace and the root of each URI form are given for each data collection in the Registry. Both forms are derived from the same namespace. For example:
In this example, the collection-specific identifier is 16333295, and the namespace is pubmed.
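The example identifier forms themselves are not shown above; as a rough illustration, the snippet below assembles the two commonly documented patterns (a urn:miriam URN and an Identifiers.org URL) from a namespace and a collection-specific identifier. Treat the exact patterns as an assumption to be checked against the Registry rather than an authoritative lookup.

```python
def miriam_uris(namespace: str, identifier: str):
    """Build the URN and URL identifier forms for a Registry namespace.
    The patterns follow widely documented conventions and are illustrative only."""
    urn = f"urn:miriam:{namespace}:{identifier}"
    url = f"https://identifiers.org/{namespace}/{identifier}"
    return urn, url

print(miriam_uris("pubmed", "16333295"))
# ('urn:miriam:pubmed:16333295', 'https://identifiers.org/pubmed/16333295')
```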
The URN form of identifiers requires the use of Web Services or programmatic means to access the referenced record. This means that one cannot simply put the URN form into a browser window and arrive at the referenced information. The URL form is directly resolvable, and relies on a resolving layer provided by Identifiers.org .
To enable efficient use of the MIRIAM Registry and the rapid adoption of the annotation scheme, a number of supporting features are provided. [ 2 ] These include Web Services , a website interface [ 3 ] to access the Registry itself, and a Java library. [ 4 ]
The MIRIAM Registry is developed by the Proteomics Services Team at the European Bioinformatics Institute . The source code for the entire project, including supporting features, is available from SourceForge.net . [ 5 ]
The MIRIAM Registry is used by several worldwide projects such as BioModels Database , [ 6 ] [ 7 ] SABIO-RK, [ 8 ] and COPASI. [ 9 ] A more thorough listing can be found on the website. [ 10 ] | https://en.wikipedia.org/wiki/MIRIAM_Registry |
MIS Quarterly Executive is a quarterly journal covering the management of information systems . The journal was founded in 2002. Its purpose is to encourage practice-based research in the IS field and to disseminate the results of that research to practitioners in a more relevant manner. It is a journal of The Association for Information Systems . [ 1 ] It is based in Atlanta, Georgia . [ 2 ]
This article about a scientific journal is a stub . You can help Wikipedia by expanding it .
| https://en.wikipedia.org/wiki/MIS_Quarterly_Executive |
mioty is a low-power wide-area network ( LPWAN ) protocol. It uses telegram splitting , a standardized LPWAN technology in the license-free spectrum. This technology splits a data telegram into multiple sub-packets and, after applying error-correcting codes, sends them in a partly predefined time and frequency pattern. This makes a transmission robust to interference and packet collisions. [ 1 ] It is standardised in ETSI TS 103 357. [ 2 ] Its uplink operates in the license-free 868 MHz band in Europe and in the 916 MHz band in North America. It requires a bandwidth of 200 kHz for two channels (e.g. up- and downlink). [ 3 ]
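As a purely conceptual sketch of telegram splitting, and not the sub-packet sizes, forward error correction, or time-frequency pattern actually defined in ETSI TS 103 357, the code below chops a payload into small sub-packets, attaches a trivial parity byte to each, and scatters them over pseudo-random time and frequency slots derived from a shared seed.

```python
import random

def split_telegram(payload: bytes, chunk_size: int = 4, n_channels: int = 8, seed: int = 42):
    """Toy illustration of telegram splitting: divide a telegram into sub-packets,
    add a placeholder error-detection byte, and assign each sub-packet a
    pseudo-random time slot and radio channel. Real mioty uses proper forward
    error correction and a standardized hopping pattern instead."""
    rng = random.Random(seed)  # receiver can reproduce the pattern from the same seed
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    sub_packets, t = [], 0
    for idx, chunk in enumerate(chunks):
        parity = bytes([sum(chunk) % 256])   # stand-in for real coding/redundancy
        t += rng.randint(1, 5)               # pseudo-random spacing in time
        channel = rng.randrange(n_channels)  # pseudo-random frequency slot
        sub_packets.append({"index": idx, "time_slot": t, "channel": channel,
                            "data": chunk + parity})
    return sub_packets

for sp in split_telegram(b"sensor reading 42"):
    print(sp)
```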
It is intended to be used for monitoring devices in large areas. [ 10 ] | https://en.wikipedia.org/wiki/MIoTy |
MK-2870 or SKB264 is an experimental antibody–drug conjugate . The antibody component is directed against "the trophoblast cell-surface antigen 2 (TROP2), which is overexpressed in many types of solid tumors, coupled to moderate cytotoxic belotecan -derivative through a novel linker which was designed to balance the extracellular stability and intracellular rupture". [ 1 ] [ 2 ] [ 3 ] The drug is developed as a partnership between Merck and the Chinese company Kelun . [ 4 ] | https://en.wikipedia.org/wiki/MK-2870 |
MK-886 , or L-663536 , is a leukotriene antagonist . It may perform this by blocking the 5-lipoxygenase activating protein (FLAP), thus inhibiting 5-lipoxygenase (5-LOX), [ 1 ] and may help in treating atherosclerosis . [ 2 ]
MK-886 is a synthetic chemical compound known for its role in inhibiting the 5-lipoxygenase-activating protein (FLAP), a key component of leukotriene biosynthesis. It has potential therapeutic applications in several types of cancer when paired with other synthetic compounds; in some cases this includes inducing apoptosis in cancer cells and suppressing tumor cell growth independently of leukotriene inhibition.
MK-886 was originally developed as a FLAP inhibitor that interferes with the leukotriene pathway. This is done by blocking 5-lipoxygenase, an enzyme that aids in the synthesis of pro-inflammatory leukotrienes, which are linked to some inflammatory diseases and cancers.
Several studies have shown the anti-cancer potential of MK-886. In a 2004 study MK-886 was shown to induce apoptosis in gastric cancer cells through the upregulation of the pro-apoptotic proteins p27^kip1 and Bax . This shows that the compound may promote cell death through a pathway that is at least partially independent of its role in leukotriene inhibition. [ 3 ]
In another study from 2007 researchers investigated the combined inhibition of 5-lipoxygenase and cyclooxygenase-2 (COX-2) in premalignant and malignant lung cell lines. MK-886 when combined with a COX-2 inhibitor significantly enhanced growth arrest and cell death. This shows a synergistic effect when targeting multiple inflammatory pathways in cancer therapy. [ 4 ]
Because of its ability to act in both inflammatory and apoptotic pathways, MK-886 has gained some recognition as a potential therapeutic agent not only for inflammatory diseases but also for cancers in which leukotriene signaling specifically helps promote tumor survival and growth.
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MK-886 |
MKE TAMGEÇ is a mine destruction system ( Mine-clearing line charge ) produced by MKEK , consisting of chain explosives attached to the back of a rocket. [ 1 ]
When the rocket is activated, it follows a parabolic course, and the explosives attached to its rear follow this course with it. When the rocket falls to the ground, the line charge of explosives falls along its length and explodes on contact with the ground. As a result, a pathway cleared of mines is opened. [ 2 ] [ 3 ]
MKE TAMGEÇ and MKE TAMKAR are actively used by the Turkish Armed Forces and were first used during Operation Euphrates Shield and Operation Olive Branch . [ 4 ] [ 5 ] [ 6 ]
This military vehicle article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MKE_TAMGEÇ |
MKM steel , an alloy containing nickel and aluminum , was developed in 1931 by metallurgist Tokushichi Mishima (三島徳七). While conducting research into the properties of nickel, Mishima discovered that a strongly magnetic steel could be created by adding aluminum to non-magnetic nickel steel. [ 1 ]
The developers claim MKM steel is tough and durable, inexpensive to produce, maintains strong magnetism when miniaturized and can produce a stable magnetic force in spite of temperature changes or vibration. MKM steel is similar to Alnico . [ citation needed ]
MKM is an acronym for Mishima Kizumi Magnetic, 'Kizumi (喜住)' being the inventor's childhood surname. [ citation needed ] | https://en.wikipedia.org/wiki/MKM_steel |
The Multicast MAnet Routing Protocol (MMARP ) aims to provide multicast routing in Mobile Ad Hoc Networks (MANETs), taking into account interoperation with fixed IP networks that support the IGMP / MLD protocols. This is achieved by the Multicast Internet Gateway (MIG), which is an ad hoc node itself and is responsible for notifying access routers about the interest expressed by ordinary ad hoc nodes. Any of these nodes may become a MIG at any time but needs to be one hop away from the network access router. Once it self-configures as a MIG, it periodically broadcasts its address as being that of the default multicast gateway. However, besides this proactive advertisement, the protocol also defines a reactive component through which the ad hoc mesh is created and maintained. [ 1 ]
When a source node has multicast traffic to send, it broadcasts a message warning potential receivers of such data. Receivers then manifest their interest by sending a Join message towards the source, creating a multicast shortest path. In the same way, the MIG informs all the ad hoc nodes about the path towards multicast sources in the fixed network.
This computer networking article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MMARP |
MNase-seq , short for micrococcal nuclease digestion with deep sequencing, [ 1 ] [ 2 ] [ 3 ] [ 4 ] is a molecular biological technique that was first pioneered in 2006 to measure nucleosome occupancy in the C. elegans genome, [ 1 ] and was subsequently applied to the human genome in 2008. [ 2 ] However, the term ‘MNase-seq’ was not coined until a year later, in 2009. [ 3 ] Briefly, this technique relies on the use of the non-specific endo - exonuclease micrococcal nuclease , an enzyme derived from the bacterium Staphylococcus aureus , to bind and cleave protein-unbound regions of DNA on chromatin . DNA bound to histones or other chromatin-bound proteins (e.g. transcription factors ) may remain undigested. The uncut DNA is then purified from the proteins and sequenced through one or more of the various Next-Generation sequencing methods. [ 5 ]
MNase-seq is one of four classes of methods used for assessing the status of the epigenome through analysis of chromatin accessibility. The other three techniques are DNase-seq , FAIRE-seq , and ATAC-seq . [ 4 ] While MNase-seq is primarily used to sequence regions of DNA bound by histones or other chromatin-bound proteins, [ 2 ] the other three are commonly used for: mapping Deoxyribonuclease I hypersensitive sites (DHSs) , [ 6 ] sequencing the DNA unbound by chromatin proteins, [ 7 ] or sequencing regions of loosely packaged chromatin through transposition of markers, [ 8 ] [ 9 ] respectively. [ 4 ]
Micrococcal nuclease (MNase) was first discovered in S. aureus in 1956, [ 10 ] protein crystallized in 1966, [ 11 ] and characterized in 1967. [ 12 ] MNase digestion of chromatin was key to early studies of chromatin structure; being used to determine that each nucleosomal unit of chromatin was composed of approximately 200bp of DNA. [ 13 ] This, alongside Olins’ and Olins’ “beads on a string” model, [ 14 ] confirmed Kornberg’s ideas regarding the basic chromatin structure. [ 15 ] Upon additional studies, it was found that MNase could not degrade histone-bound DNA shorter than ~140bp and that DNase I and II could degrade the bound DNA to as low as 10bp. [ 16 ] [ 17 ] This ultimately elucidated that ~146bp of DNA wrap around the nucleosome core, [ 18 ] ~50bp linker DNA connect each nucleosome, [ 19 ] and that 10 continuous base-pairs of DNA tightly bind to the core of the nucleosome in intervals. [ 17 ]
In addition to being used to study chromatin structure, micrococcal nuclease digestion had been used in oligonucleotide sequencing experiments since its characterization in 1967. [ 20 ] MNase digestion was additionally used in several studies to analyze chromatin-free sequences, such as yeast ( Saccharomyces cerevisiae ) mitochondrial DNA [ 21 ] as well as bacteriophage DNA [ 22 ] [ 23 ] through its preferential digestion of adenine and thymine -rich regions. [ 24 ] In the early 1980s, MNase digestion was used to determine the nucleosomal phasing and associated DNA for chromosomes from mature SV40 , [ 25 ] fruit flies ( Drosophila melanogaster ) , [ 26 ] yeast, [ 27 ] and monkeys, [ 28 ] among others. The first study to use this digestion to study the relevance of chromatin accessibility to gene expression in humans was in 1985. In this study, nuclease was used to find the association of certain oncogenic sequences with chromatin and nuclear proteins. [ 29 ] Studies utilizing MNase digestion to determine nucleosome positioning without sequencing or array information continued into the early 2000s. [ 30 ]
With the advent of whole genome sequencing in the late 1990s and early 2000s, it became possible to compare purified DNA sequences to the eukaryotic genomes of S. cerevisiae, [ 31 ] Caenorhabditis elegans , [ 32 ] D. melanogaster, [ 33 ] Arabidopsis thaliana , [ 34 ] Mus musculus , [ 35 ] and Homo sapiens . [ 36 ] MNase digestion was first applied to genome-wide nucleosome occupancy studies in S. cerevisiae [ 37 ] accompanied by analyses through microarrays to determine which DNA regions were enriched with MNase-resistant nucleosomes. MNase-based microarray analyses were often utilized at genome-wide scales for yeast [ 38 ] [ 39 ] and in limited genomic regions in humans [ 40 ] [ 41 ] to determine nucleosome positioning, which could be used as an inference for transcriptional inactivation .
In 2006, Next-Generation sequencing was first coupled with MNase digestion to explore nucleosome positioning and DNA sequence preferences in C. elegans. [ 1 ] This was the first example of MNase-seq in any organism.
It was not until 2008, around the time Next-Generation sequencing was becoming more widely available, when MNase digestion was combined with high-throughput sequencing, namely Solexa/Illumina sequencing , to study nucleosomal positioning at a genome-wide scale in humans. [ 2 ] A year later, the terms “MNase-Seq” and “MNase-ChIP”, for micrococcal nuclease digestion with chromatin immunoprecipitation , were finally coined. [ 3 ] Since its initial application in 2006, [ 1 ] MNase-seq has been utilized to deep sequence DNA associated with nucleosome occupancy and epigenomics across eukaryotes. [ 5 ] As of February 2020, MNase-seq is still applied to assay accessibility in chromatin. [ 42 ]
Chromatin is dynamic and the positioning of nucleosomes on DNA changes through the activity of various transcription factors and remodeling complexes , approximately reflecting transcriptional activity at these sites. DNA wrapped around nucleosomes is generally inaccessible to transcription factors. [ 43 ] Hence, MNase-seq can be used to indirectly determine which regions of DNA are transcriptionally inaccessible by directly determining which regions are bound to nucleosomes. [ 5 ]
In a typical MNase-seq experiment, eukaryotic cell nuclei are first isolated from a tissue of interest. Then, MNase-seq uses the endo-exonuclease micrococcal nuclease to bind and cleave protein-unbound regions of DNA of eukaryotic chromatin, first cleaving and resecting one strand, then cleaving the antiparallel strand as well. [ 3 ] The chromatin can be optionally crosslinked with formaldehyde . [ 44 ] MNase requires Ca 2+ as a cofactor , typically with a final concentration of 1mM. [ 5 ] [ 12 ] If a region of DNA is bound by the nucleosome core (i.e. histones ) or other chromatin-bound proteins (e.g. transcription factors), then MNase is unable to bind and cleave the DNA. Nucleosomes or the DNA-protein complexes can be purified from the sample and the bound DNA can be subsequently purified via gel electrophoresis and extraction . The purified DNA is typically ~150bp, if purified from nucleosomes, [ 2 ] or shorter, if from another protein (e.g. transcription factors). [ 45 ] This makes short-read, high-throughput sequencing ideal for MNase-seq as reads for these technologies are highly accurate but can only cover a couple hundred continuous base-pairs in length. [ 46 ] Once sequenced, the reads can be aligned to a reference genome to determine which DNA regions are bound by nucleosomes or proteins of interest, with tools such as Bowtie . [ 4 ] The positioning of nucleosomes elucidated, through MNase-seq, can then be used to predict genomic expression [ 47 ] and regulation [ 48 ] at the time of digestion.
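To give a sense of the downstream analysis, the minimal sketch below (with hypothetical fragment coordinates and no real aligner) stacks the midpoints of MNase-protected fragments into a per-base occupancy track, which is conceptually how well-positioned nucleosomes show up in MNase-seq data.

```python
import numpy as np

def nucleosome_occupancy(fragments, chrom_length, footprint=147):
    """Build a per-base nucleosome occupancy track from aligned fragments.
    fragments: list of (start, end) coordinates of MNase-protected fragments;
    each contributes a ~147 bp footprint centred on its midpoint, approximating
    the DNA wrapped around one nucleosome core."""
    occupancy = np.zeros(chrom_length, dtype=int)
    half = footprint // 2
    for start, end in fragments:
        mid = (start + end) // 2
        lo, hi = max(0, mid - half), min(chrom_length, mid + half + 1)
        occupancy[lo:hi] += 1
    return occupancy

# Hypothetical fragments clustering around a well-positioned nucleosome near position 500
frags = [(425, 575), (430, 578), (420, 570), (800, 950)]
occ = nucleosome_occupancy(frags, chrom_length=1200)
print("peak occupancy:", occ.max(), "around position", int(occ.argmax()))
```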
Recently, MNase-seq has also been implemented in determining where transcription factors bind on the DNA. [ 49 ] [ 50 ] Classical ChIP-seq displays issues with resolution quality, stringency in experimental protocol, and DNA fragmentation . [ 50 ] Classical ChIP-seq typically uses sonication to fragment chromatin , which biases heterochromatic regions due to the condensed and tight binding of chromatin regions to each other. [ 50 ] Unlike histones , transcription factors only transiently bind DNA, and methods such as sonication in ChIP-seq, which require increased temperatures and detergents, can lead to the loss of the factor. CUT&RUN sequencing is a novel form of MNase-based immunoprecipitation . Briefly, it uses an MNase tagged with an antibody to specifically bind DNA-bound proteins that present the epitope recognized by that antibody. Digestion then occurs specifically in the regions surrounding that transcription factor, allowing this complex to diffuse out of the nucleus and be recovered without significant background or the complications of sonication. The technique does not require high temperatures or high concentrations of detergent. Furthermore, MNase improves chromatin digestion due to its exonuclease and endonuclease activity. Cells are lysed in an SDS / Triton X-100 solution, the MNase-antibody complex is added, and finally the protein-DNA complex is isolated, with the DNA being subsequently purified and sequenced . The resulting soluble extract contains a 25-fold enrichment in fragments under 50bp. This increased enrichment results in cost-effective high-resolution data. [ 50 ]
Single-cell micrococcal nuclease sequencing (scMNase-seq) is a novel technique that is used to analyze nucleosome positioning and to infer chromatin accessibility with the use of only a single-cell input. [ 51 ] First, cells are sorted into single aliquots using fluorescence-activated cell sorting (FACS) . [ 51 ] The cells are then lysed and digested with micrococcal nuclease. The isolated DNA is subjected to PCR amplification and then the desired sequence is isolated and analyzed. [ 51 ] The use of MNase in single-cell assays results in increased detection of regions such as DNase I hypersensitive sites as well as transcription factor binding sites. [ 51 ]
MNase-seq is one of four major methods ( DNase-seq , MNase-seq, FAIRE-seq , and ATAC-seq ) for more direct determination of chromatin accessibility and the subsequent consequences for gene expression . [ 52 ] All four techniques are contrasted with ChIP-seq , which relies on the inference that certain marks on histone tails are indicative of gene activation or repression, [ 53 ] not directly assessing nucleosome positioning, but instead being valuable for the assessment of histone modifier enzymatic function. [ 4 ]
As with MNase-seq, [ 2 ] DNase-seq was developed by combining an existing DNA endonuclease [ 6 ] with Next-Generation sequencing technology to assay chromatin accessibility. [ 54 ] Both techniques have been used across several eukaryotes to ascertain information on nucleosome positioning in the respective organisms [ 4 ] and both rely on the same principle of digesting open DNA to isolate ~140bp bands of DNA from nucleosomes [ 2 ] [ 55 ] or shorter bands if ascertaining transcription factor information. [ 45 ] [ 55 ] Both techniques have recently been optimized for single-cell sequencing, which corrects for one of the major disadvantages of both techniques; that being the requirement for high cell input. [ 56 ] [ 51 ]
At sufficient concentrations, DNase I is capable of digesting nucleosome-bound DNA to 10bp, whereas micrococcal nuclease cannot. [ 17 ] Additionally, DNase-seq is used to identify DHSs, which are regions of DNA that are hypersensitive to DNase treatment and are often indicative of regulatory regions (e.g. promoters or enhancers ). [ 57 ] An equivalent effect is not found with MNase. As a result of this distinction, DNase-seq is primarily utilized to directly identify regulatory regions, whereas MNase-seq is used to identify transcription factor and nucleosomal occupancy to indirectly infer effects on gene expression. [ 4 ]
FAIRE-seq differs more from MNase-seq than does DNase-seq. [ 4 ] FAIRE-seq was developed in 2007 [ 7 ] and combined with Next-Generation sequencing three years later to study DHSs. [ 58 ] FAIRE-seq relies on the use of formaldehyde to crosslink target proteins with DNA and then subsequent sonication and phenol-chloroform extraction to separate non-crosslinked DNA and crosslinked DNA. The non-crosslinked DNA is sequenced and analyzed, allowing for direct observation of open chromatin. [ 59 ]
MNase-seq does not measure chromatin accessibility as directly as FAIRE-seq. However, unlike FAIRE-seq, it does not necessarily require crosslinking, [ 5 ] nor does it rely on sonication, [ 4 ] but it may require phenol and chloroform extraction . [ 5 ] Two major disadvantages of FAIRE-seq, relative to the other three classes, are the minimum required input of 100,000 cells and the reliance on crosslinking. [ 7 ] Crosslinking may bind other chromatin-bound proteins that transiently interact with DNA, hence limiting the amount of non-crosslinked DNA that can be recovered and assayed from the aqueous phase. [ 52 ] Thus, the overall resolution obtained from FAIRE-seq can be relatively lower than that of DNase-seq or MNase-seq [ 52 ] and with the 100,000 cell requirement, [ 7 ] the single-cell equivalents of DNase-seq [ 56 ] or MNase-seq [ 51 ] make them far more appealing alternatives. [ 4 ]
ATAC-seq is the most recently developed class of chromatin accessibility assays. [ 8 ] ATAC-seq uses a hyperactive transposase to insert transposable markers with specific adapters, capable of binding primers for sequencing, into open regions of chromatin. PCR can then be used to amplify sequences adjacent to the inserted transposons, allowing for determination of open chromatin sequences without causing a shift in chromatin structure. [ 8 ] [ 9 ] ATAC-seq has been proven effective in humans, amongst other eukaryotes, including in frozen samples. [ 60 ] As with DNase-seq [ 56 ] and MNase-seq, [ 51 ] a successful single-cell version of ATAC-seq has also been developed. [ 61 ]
ATAC-seq has several advantages over MNase-seq in assessing chromatin accessibility. ATAC-seq does not rely on the variable digestion of the micrococcal nuclease, nor crosslinking or phenol-chloroform extraction. [ 5 ] [ 9 ] It generally maintains chromatin structure, so results from ATAC-seq can be used to directly assess chromatin accessibility, rather than indirectly via MNase-seq. ATAC-seq can also be completed within a few hours, [ 9 ] whereas the other three techniques typically require overnight incubation periods. [ 5 ] [ 6 ] [ 7 ] The two major disadvantages to ATAC-seq, in comparison to MNase-seq, are the requirement for higher sequencing coverage and the prevalence of mitochondrial contamination due to non-specific insertion of DNA into both mitochondrial DNA and nuclear DNA. [ 8 ] [ 9 ] Despite these minor disadvantages, use of ATAC-seq over the alternatives is becoming more prevalent. [ 4 ] | https://en.wikipedia.org/wiki/MNase-seq |
MOAP ( Mobile Oriented Applications Platform ) is the software platform for NTT DoCoMo 's Freedom of Mobile Multimedia Access (FOMA) service for mobile phones . [ 1 ] [ 2 ]
It is a closed platform: third parties cannot develop native application software or install third-party applications, unlike with S60 and UIQ .
Two MOAP versions exist: MOAP(S), based on the Symbian OS, and MOAP(L), based on Linux. [ 3 ]
This mobile technology related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MOAP |
MOCADI [ 1 ] is a Monte Carlo simulation program used to calculate the transport of charged particle beams--as well as fragmentation and fission products from nuclear reactions in target materials--through ion optical systems described by transfer matrices (including up to third order Taylor expansion coefficients) and through layers of matter.
This article about molecular modelling software is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MOCADI |
Modeller , often stylized as MODELLER , is a computer program used for homology modeling to produce models of protein tertiary structures and, more rarely, quaternary structures . [ 2 ] [ 3 ] It implements a method inspired by nuclear magnetic resonance spectroscopy of proteins (protein NMR), termed satisfaction of spatial restraints , by which a set of geometrical criteria are used to create a probability density function for the location of each atom in the protein. The method relies on an input sequence alignment between the target amino acid sequence to be modeled and a template protein whose structure has been solved.
The program also incorporates limited functions for ab initio structure prediction of loop regions of proteins, which are often highly variable even among homologous proteins and thus difficult to predict by homology modeling.
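A rough sketch of how a basic comparative-modeling run is typically driven through Modeller's Python interface, using the classic lowercase class names; the alignment file name and the template/target codes below are placeholders, not values from any real project.

```python
# Minimal comparative-modeling script for Modeller's Python API (a sketch only;
# 'target-template.ali', 'template_code' and 'target_code' are placeholder names).
from modeller import environ
from modeller.automodel import automodel

env = environ()
a = automodel(env,
              alnfile='target-template.ali',  # PIR alignment of target and template
              knowns='template_code',         # code of the known template structure
              sequence='target_code')         # code of the target sequence to model
a.starting_model = 1
a.ending_model = 5                            # build five candidate models
a.make()                                      # run modeling; models are written as PDB files
```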
Modeller was originally written and is currently maintained by Andrej Sali at the University of California, San Francisco . [ 4 ] It runs on the operating systems Unix , Linux , macOS , and Windows . It is freeware for academic use. Graphical user interfaces (GUIs) and commercial versions are distributed by Accelrys . The ModWeb comparative protein structure modeling webserver is based on Modeller and other tools for automatic protein structure modeling, with an option to deposit the resulting models into ModBase . Due to Modeller's popularity, several third party GUIs for MODELLER are available: | https://en.wikipedia.org/wiki/MODELLER |
MOD (Media on Demand) Technology is an advanced infotainment technology designed and licensed by FUNTORO . When it was first introduced in 2008, the technology was one of the most innovative and multifunctional systems in the automotive industry. [ 1 ] Currently, it is widely applied in the railway, automotive and commercial industries with the aim of providing infotainment services through interactive monitors embedded in seatbacks or armrests. [ 2 ] [ 3 ]
Apart from VOD (Video on Demand) , MOD Technology is a more advanced and integrated platform that combines entertainment, real-time information, value-added services, advertising system, attractions guidance, shopping & ordering service, network connectivity and Cloud management. [ citation needed ]
MOD Technology can be implemented in coach bus, city bus, sightseeing bus, sleeper bus, railways, stadium, hotels and others. | https://en.wikipedia.org/wiki/MOD_Technology |
MOF-5 or IRMOF-1 is a cubic metal–organic framework compound with the formula Zn 4 O(BDC) 3 , where BDC 2− = 1,4-benzenedicarboxylate. [ 1 ] It was first synthesized by graduate students and post doctoral scholars in the lab of Omar M. Yaghi . MOF-5 is notable for exhibiting one of the highest surface area to volume ratios among metal–organic frameworks, at 2200 m 2 /cm 3 . [ 2 ] Additionally, it was the first metal–organic framework studied for hydrogen gas storage . [ 1 ] [ 2 ]
This article about materials science is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MOF-5 |
MOF Model to Text Transformation Language ( Mof2Text or MOFM2T ) is an Object Management Group (OMG) specification for a model transformation language . Specifically, it can be used to express transformations which transform a model into text (M2T), for example a platform-specific model into source code or documentation . MOFM2T is one part of OMG's Model-driven architecture (MDA) and reuses many concepts of MOF , OMG's metamodeling architecture. Whereas MOFM2T is used for expressing M2T transformations, OMG's QVT is used for expressing M2M transformations.
This programming-language -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MOF_Model_to_Text_Transformation_Language |
MOLCAS is an ab initio computational chemistry program, developed as a joint project by a number of international institutes. MOLCAS is developed by scientists to be used by scientists. It is not primarily a commercial product and it is not sold in order to produce a fortune for its owner (the Lund University ).
Focus in the program is placed on methods for calculating general electronic structures in both ground and excited states. MOLCAS contains codes for general and effective multiconfigurational SCF calculations at the Complete Active Space (CASSCF) level, but also employing more restricted MCSCF wave functions (RASSCF). It is also possible, at this level of theory, to optimize geometries for equilibrium and transition states using gradient techniques and to compute force fields and vibrational energies. MOLCAS also contains second order perturbation theory codes CASPT2 and RASPT2.
The MOLCAS code was created in the late 1980s by the group of Prof. Björn O. Roos at Lund University . The name of the program is a combination of Molecule (the integral code by Jan Almlöf ) and CAS (the Complete Active Space program developed by Björn O. Roos).
MOLCAS 2 was released in 1992. It was distributed on a tape for IBM VM/XA . It contained a new configuration interaction code (written by Jeppe Olsen), a new integral code (written by Roland Lindh) and a coupled cluster code (written at Comenius University ). MOLCAS 4 (1999) was the first release to run on any Unix or Linux operating system. In 2001, MOLCAS 5 was released, featuring a distributed model for code development. [ 1 ]
In September 2017 the bulk of the MOLCAS code was branched as open source (LGPL 2.1 license), under the name OpenMolcas. [ 2 ] The stable version of MOLCAS code is distributed by Lund University .
The main features of MOLCAS are documented on the Molcas website, which provides the manual [ 3 ] and a collection of tutorials. [ 4 ] There are several publications describing the capabilities of different versions of MOLCAS. [ 5 ] [ 6 ] MOLCAS 7.2 has been independently reviewed in the JACS computer software reviews. [ 7 ] | https://en.wikipedia.org/wiki/MOLCAS |
MOLPRO is a software package used for accurate ab initio quantum chemistry calculations. [ 1 ] It is developed by Peter Knowles at Cardiff University and Hans-Joachim Werner at Universität Stuttgart in collaboration with other authors.
The emphasis in the program is on highly accurate computations, with extensive treatment of the electron correlation problem through the multireference configuration interaction , coupled cluster and associated methods. Integral-direct local electron correlation methods reduce the increase of the computational cost with molecular size. Accurate ab initio calculations can then be performed for larger molecules. With new explicitly correlated methods the basis set limit can be very closely approached.
Molpro was designed and maintained by Wilfried Meyer and Peter Pulay in the late 1960s. At that time, Pulay developed the first analytical gradient code for Hartree-Fock (HF) theory, [ 2 ] [ 3 ] [ 4 ] and Meyer researched his PNO-CEPA (pseudo-natural orbital coupled-electron pair approximation) methods. [ 5 ] [ 6 ] In 1980, Werner and Meyer developed a new state-averaged, quadratically convergent (MC-SCF) method , which provided geometry optimization for multireference cases . [ 7 ] In the same year, the first internally contracted multireference configuration interaction (IC-MRCI) program was developed by Werner and Reinsch. [ 8 ] About four years later (1984), Werner and Knowles developed a new-generation program called CASSCF (complete active space SCF) . This new CASSCF program combined fast orbital optimization algorithms [ 7 ] with determinant-based full CI codes , [ 9 ] and additional, more general, unitary group configuration interaction (CI) codes. This resulted in the quadratically convergent MCSCF/CASSCF code called MULTI, [ 10 ] [ 11 ] which allowed orbitals to be optimized for a weighted energy average of several states and is capable of treating completely general configuration expansions. In fact, this method is still available today. In addition to these developments, Knowles and Werner started to cooperate on a new, more efficient IC-MRCI method. [ 12 ] [ 13 ] Extensions for accurate treatments of excited states became possible through this new IC-MRCI method. [ 14 ] In brief, the present IC-MRCI is referred to simply as MRCI . These MCSCF and MRCI methods formed the basis of the modern Molpro. In the following years, a number of new programs were added. Analytic energy gradients can be evaluated with coupled-cluster calculations , density functional theory (DFT) , as well as many other programs. These structural changes make the code more modular and easier to use and maintain, and also reduce the probability of input error. [ 15 ] | https://en.wikipedia.org/wiki/MOLPRO |
A MONA number (short for Moths of North America), or Hodges number after Ronald W. Hodges , is part of a numbering system for North American moths found north of Mexico in the Continental United States and Canada , as well as the island of Greenland . [ 1 ] Introduced in 1983 by Hodges through the publication of Check List of the Lepidoptera of America North of Mexico , the system began an ongoing numeration process in order to compile a list of the over 12,000 moths of North America north of Mexico. The system numbers moths within the same family close together for identification purposes. For example, the species Epimartyria auricrinella begins the numbering system at 0001 while Epimartyria pardella is numbered 0002.
The system has become somewhat out of date since its inception for several reasons:
Despite the issues above, the MONA system has remained popular with many websites and publications. It is the most widely used numbering system, largely replacing the older McDunnough Numbers system, although some published lists prefer other forms of compilation. [ 3 ] The Moth Photographers Group (MPG) at Mississippi State University actively monitors the expansive list of North American moths using the MONA system and updates its checklists in accordance with publications describing changes and additions. [ 4 ] | https://en.wikipedia.org/wiki/MONA_number |
MOPAC is a computational chemistry software package that implements a variety of semi-empirical quantum chemistry methods based on the neglect of diatomic differential overlap (NDDO) approximation and fit primarily for gas-phase thermochemistry . [ 1 ] Modern versions of MOPAC support 83 elements of the periodic table (H-La, Lu-Bi as atoms, [ 2 ] Ce-Yb as ionic sparkles ) [ 3 ] and have expanded functionality for solvated molecules , [ 4 ] crystalline solids , [ 5 ] and proteins . [ 6 ] MOPAC was originally developed in Michael Dewar 's research group in the early 1980s and released as public domain software on the Quantum Chemistry Program Exchange in 1983. [ 7 ] It became commercial software in 1993, developed and distributed by Fujitsu , and Stewart Computational Chemistry took over commercial development and distribution in 2007. In 2022, it was released as open-source software on GitHub.
MOPAC is primarily a serial command-line program. Its default behavior is to take a molecular geometry specified by an input file and perform a local optimization of the geometry to minimize the heat of formation of the molecule. The details of this process are then summarized by an output file. The behavior of MOPAC can be modified by specifying keywords on the first line of the input file, and translation vectors can be added to the geometry to specify a polymer, surface, or crystal.
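A minimal illustration of the input-file layout described above: a keyword line, two comment lines, then one atom per line with each coordinate followed by an optimization flag (1 = optimize, 0 = keep fixed). The keyword choice and coordinates here are only an example and should be adapted to the installed MOPAC version.

```python
# Write a small MOPAC input file (sketch: PM7 geometry optimization of water).
water_input = """\
PM7 PRECISE
Water molecule
geometry optimization example
O   0.000000 1   0.000000 1   0.000000 1
H   0.957000 1   0.000000 1   0.000000 1
H  -0.240000 1   0.927000 1   0.000000 1
"""

with open("water.mop", "w") as f:
    f.write(water_input)
# Running the mopac executable on water.mop then produces an output file with
# the optimized geometry and the heat of formation.
```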
MOPAC is compatible with other software to provide graphical user interfaces (GUIs), visualization of outputs, and processing of inputs. The most well-known GUIs that support MOPAC are Chem3D , WebMO, [ 8 ] the Amsterdam Modeling Suite, [ 9 ] and the Molecular Operating Environment . Jmol can visualize some MOPAC outputs such as molecular orbitals and partial charges . Open Babel supports conversion to and from MOPAC's input file format.
MOPAC was originally developed in Michael Dewar 's research group at the University of Texas at Austin to consolidate their previous developments of MINDO/3 and MNDO models and software and to serve as the software implementation of the AM1 model. [ 13 ] The name MOPAC was both an acronym for Molecular Orbital PACkage and a reference to the Mopac Expressway that runs alongside parts of the UT Austin campus. [ 14 ] The first version of MOPAC was deposited in the Quantum Chemistry Program Exchange (QCPE) in 1983 as QCPE Program #455 with James Stewart as its primary author. [ 7 ] James Stewart joined the Dewar group in 1980 as a visiting professor on leave from the University of Strathclyde , [ 15 ] and he continued the development of MOPAC after moving to the United States Air Force Academy in 1984. [ 16 ] In 1993, MOPAC was acquired by Fujitsu and sold as commercial software, while James Stewart continued its development as a consultant. [ 17 ] After 2007, new versions of MOPAC were developed and sold by Stewart Computational Chemistry [ 18 ] with support from the Small Business Innovation Research program. [ 19 ] Concurrent with its commercial development, there was an effort to continue development of the last pre-commercial version of MOPAC as an open-source software project. [ 20 ] [ 21 ] In 2022, the commercial development and distribution of MOPAC ended, and it was officially re-released as an open-source software project on GitHub [ 22 ] developed by the Molecular Sciences Software Institute. [ 23 ]
Early versions of MOPAC distributed by the QCPE were considered to be in the public domain and were forked into several other notable software projects. After James Stewart left, other members of the Dewar group continued to develop a fork of MOPAC called AMPAC that was originally released on the QCPE before also becoming commercial software. [ 24 ] VAMP (Vectorized AMPAC) was a parallel version of AMPAC developed by Timothy Clark's group at the University of Erlangen–Nuremberg . [ 25 ] Donald Truhlar 's group at the University of Minnesota developed both a fork of AMPAC with implicit solvent models, AMSOL, [ 26 ] and a fork of MOPAC itself. [ 27 ] Also, commercial versions of MOPAC distributed by Fujitsu have some proprietary features (e.g. PM5, Tomasi solvation) not available in other versions. [ 28 ]
MOPAC used different versioning systems throughout its development, sometimes with a version number or year stylized into the name. These alternate names include MOPAC3, MOPAC4, MOPAC5, MOPAC6, MOPAC7, MOPAC93, MOPAC97, MOPAC 2000, MOPAC 2007, MOPAC 2009, MOPAC 2012, and MOPAC 2016. [ 29 ] Open-source versions of MOPAC now use semantic versioning . | https://en.wikipedia.org/wiki/MOPAC |
MOPP ( Mission Oriented Protective Posture ; pronounced "mop") is protective gear used by U.S. military personnel in a toxic environment, for example, during a chemical, biological, radiological, or nuclear ( CBRN ) strike.
Each MOPP level corresponds to an increasing level of protection. The readiness level will usually be dictated by the in-theatre commander. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/MOPP_(protective_gear) |
MORE , which stands for MAC-independent Opportunistic Routing , is an opportunistic routing protocol designed for wireless mesh networks . The protocol removes the dependency that other opportunistic routing protocols, such as ExOR and SOAR, have on the MAC layer. Both of these protocols make use of a scheduler to coordinate transmission among the nodes: only one node transmits at a given time while all the other nodes listen. The listening nodes remove from their retransmission queues the packets they overhear, which ensures that the same packet is not redundantly retransmitted by different nodes.
MORE makes use of network coding techniques and brings about spatial reuse by allowing all the nodes to transmit at the same time. Given a file, the source node breaks it up into K packets; the number of packets each file is divided into varies. The uncoded packets are called "native packets". The source node then creates linear combinations of the K packets and forwards them. The code vector represents the random coefficients chosen by the node to perform the encoding. The source also attaches a MORE header to each packet along with a forwarding list. The forwarders listen to the transmission of the source node. If a node that overhears a packet is in the forwarding list, it checks whether the packet carries any new information; such packets are called innovative packets. If the packet is innovative, the node performs a linear recombination of the packets it has received, which is effectively another linear combination of the native packets. The node ignores all non-innovative packets. The destination receives the packets and checks whether they are innovative. Upon receiving K innovative packets, it sends an ACK back to the source and proceeds to decode the packets. The intermediate nodes hear this ACK, stop further transmission, and purge the packets in their buffers. [ 1 ]
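The coding and "innovativeness" check described above can be illustrated with ordinary random linear network coding over a small prime field. The sketch below is a simplification (a single symbol per packet and GF(257) arithmetic rather than MORE's actual packet format or GF(2^8) field): a forwarder emits random linear combinations of the native packets, and the receiver keeps a packet only if its code vector increases the rank of the accumulated coefficient matrix, decoding once it holds K innovative packets.

```python
import numpy as np

P = 257  # small prime field for illustration; real systems typically use GF(2**8)

def rref_mod_p(rows, pivot_cols):
    """Gauss-Jordan elimination over GF(P) on the first pivot_cols columns.
    Returns (reduced matrix, rank)."""
    m = np.array(rows, dtype=np.int64) % P
    rank = 0
    for col in range(pivot_cols):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col] != 0), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        m[rank] = (m[rank] * pow(int(m[rank, col]), P - 2, P)) % P  # scale pivot row to 1
        for r in range(m.shape[0]):
            if r != rank and m[r, col] != 0:
                m[r] = (m[r] - m[r, col] * m[rank]) % P
        rank += 1
    return m, rank

K = 3
native = np.array([17, 201, 42])            # K native packets, each reduced to one symbol
rng = np.random.default_rng(0)

received = []                               # rows of [code vector | coded payload]
while len(received) < K:                    # destination needs K innovative packets
    coeffs = rng.integers(0, P, size=K)     # forwarder picks random coefficients
    payload = int(coeffs @ native) % P      # coded packet = combination of native packets
    candidate = received + [list(coeffs) + [payload]]
    _, rank = rref_mod_p(candidate, K)
    if rank > len(received):                # innovative: the code vector adds new information
        received = candidate

reduced, _ = rref_mod_p(received, K)        # solving the system recovers the native packets
print("decoded native packets:", reduced[:, K])
```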
MORE introduces a few overheads in the network. The use of network coding requires the nodes to have sufficient computing ability. It also requires the nodes to have sufficient memory to store the packets and process them. Finally, the protocol adds an additional MORE header to each packet. | https://en.wikipedia.org/wiki/MORE_protocol |
The Microvariability and Oscillations of Stars/Microvariabilité et Oscillations STellaire ( MOST ) was Canada's first space telescope . Up until nearly 10 years after its launch, it was also the smallest space telescope in orbit (for which its creators nicknamed it the "Humble Space Telescope", in reference to one of the largest, the Hubble ). [ 2 ] MOST was the first spacecraft dedicated to the study of asteroseismology , subsequently followed by the now-completed CoRoT and Kepler missions. It was also the first Canadian science satellite launched since ISIS II , 32 years previously.
As its name suggests, its primary mission was to monitor variations in star light, which it did by observing a single target for a long period of time (up to 60 days). Typically, larger space telescopes cannot afford to remain focused on a single target for so long due to the demand for their resources.
At 53 kg (117 lb), 60 cm (24 in) wide and tall, and 24 cm (9 in) deep, [ 3 ] it was the size and weight of a small chest or an extra-large suitcase filled with electronics. This places it in the microsatellite category.
MOST was developed as a joint effort of the Canadian Space Agency , Dynacon Enterprises Limited (now Microsatellite Systems Canada Inc ), the Space Flight Laboratory (SFL) at the University of Toronto Institute for Aerospace Studies , and the University of British Columbia . Led by Principal Investigator Jaymie Matthews , the MOST science team's plan was to use observations from MOST to use asteroseismology to help date the age of the universe, and to search for visible-light signatures from extrasolar planets .
The original SFL application to the CSA is available at https://www.astro.utoronto.ca/~rucinski/MOST_proposal_1997.pdf
MOST featured an instrument [ 4 ] comprising a visible-light dual- CCD camera, fed by a 15-cm aperture Maksutov telescope . One CCD gathered science images, while the other provided images used by star-tracking software that, along with a set of four reaction wheels (computer-controlled motorized flywheels that are similar to gyroscopes ) maintained pointing with an error of less than 1 arc-second , better pointing by far than any other microsatellite to date.
The design of the rest of MOST was inspired by and based on microsatellite bus designs pioneered by AMSAT , and first brought to commercial viability by the microsatellite company SSTL (based at the University of Surrey in the United Kingdom); during the early stages of MOST development, the core group of AMSAT microsatellite satellite designers advised and mentored the MOST satellite design team, via a know-how transfer arrangement with UTIAS. This approach to satellite design is notable for making use of commercial-grade electronics, along with a "small team," "early prototyping" engineering development approach rather different from that used in most other space-engineering programs, to achieve relatively very low costs: MOST's life-cycle cost (design, build, launch and operate) was less than $10 million in Canadian funds (about 7 million Euros or 6 million USD , at exchange rates at time of launch).
Development of the satellite was managed by the Canadian Space Agency 's Space Science Branch, and was funded under its Small Payloads Program; its operations were (as of 2012) managed by the CSA's Space Exploration Branch. It was operated by SFL (where the primary MOST ground station is located) jointly with Microsat Systems Canada Inc. (since the sale of Dynacon's space division to MSCI in 2008). As of ten years after launch, despite failures of two of its components (one of the four reaction wheels and one of the two CCD driver boards), the satellite was still operating well, as a result of both on-going on-board software upgrades as well as built-in hardware redundancy, allowing improvements to performance and to reconfigure around failed hardware units.
In 2008, the MOST Satellite Project Team won the Canadian Aeronautics and Space Institute's Alouette Award, [ 5 ] [ 6 ] which recognizes outstanding contributions to advancement in Canadian space technology, applications, science or engineering.
On 30 April 2014, the Canadian Space Agency announced that funding to continue operating MOST would be withdrawn as of 9 September 2014, [ 7 ] apparently as a result of funding cuts to the Canadian Space Agency's budget by the Harper government, [ 8 ] despite the fact that the satellite continues to be fully operational and capable of making on-going science observations. P.I. Jaymie Matthews responded by saying that "he will consider all options to keep the satellite in orbit, and that includes a direct appeal to the public."
In October 2014, the MOST Satellite was acquired by MSCI, which then commenced commercial operation of the satellite, offering a variety of potential uses including continuing the original MOST mission in partnership with Dr. Matthews, but also other planetary studies, attitude control system algorithm R&D, and Earth observation. MOST was finally decommissioned in March 2019, after an apparent failure of its power subsystem. [ 9 ]
The MOST team has reported a number of discoveries. In 2004 they reported that the star Procyon does not oscillate to the extent that had been expected, [ 10 ] although this has been disputed . [ 11 ] [ 12 ]
In 2006 observations revealed a previously unknown class of variable stars, the "slowly pulsating B supergiants" ( SPBsg ). [ 13 ] In 2011, MOST detected transits by exoplanet 55 Cancri e of its primary star, based on two weeks of nearly continuous photometric monitoring, confirming an earlier detection of this planet, and allowing investigations into the planet's composition. In 2019, MOST photometry was used to disprove claims of permanent starspots on the surface of HD 189733 A that were alleged to be caused by interactions between the magnetic fields of the star and its "hot Jupiter" exoplanet. [ 14 ] | https://en.wikipedia.org/wiki/MOST_(spacecraft) |
MOWSE (for Molecular Weight Search ) is a method to identify proteins from the molecular weight of peptides created by proteolytic digestion and measured with mass spectrometry . [ 1 ]
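As a toy illustration of the underlying idea (matching measured peptide masses against an in-silico digest of each candidate protein, here scored with a simple hit count rather than the probability-weighted MOWSE score itself), with entirely hypothetical masses:

```python
def count_peptide_matches(measured_masses, theoretical_masses, tol_da=0.2):
    """Count how many measured peptide masses fall within a tolerance of any
    theoretical peptide mass from an in-silico digest of one candidate protein.
    (The real MOWSE score additionally weights each hit by how common peptides
    of that mass are among proteins of similar molecular weight.)"""
    return sum(
        any(abs(m - t) <= tol_da for t in theoretical_masses)
        for m in measured_masses
    )

# Hypothetical tryptic-digest masses (Da) for two candidate proteins
candidates = {
    "protein_A": [842.5, 1045.6, 1179.6, 1475.8, 2211.1],
    "protein_B": [901.4, 1100.2, 1300.9, 1800.5],
}
measured = [842.51, 1179.58, 1475.79, 2211.08]

best = max(candidates, key=lambda name: count_peptide_matches(measured, candidates[name]))
print("best candidate:", best)  # -> protein_A
```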
The MOWSE algorithm was developed by Darryl Pappin at the Imperial Cancer Research Fund and Alan Bleasby at the SERC Daresbury Laboratory . [ 2 ] The probability-based MOWSE score formed the basis of development of Mascot , a proprietary software for protein identification from mass spectrometry data.
This bioinformatics-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MOWSE |
The MP-5 gas mask is the standard gas mask of the Polish Armed Forces . This mask is designed to protect the user's respiratory tract from airborne toxic agents, radioactive dust and bacterial aerosols. For protection against toxic industrial agents such as ammonia or carbon monoxide , specific combined filters should be used additionally (connected in series with the basic FP-5 combined filter). [ 2 ] The design also allows for the intake of fluids and for conversation while the mask is worn.
The masks are currently manufactured by Maskpol .
In the mid-1980s, the Department of Respiratory Protection of the Military Institute of Chemistry and Radiometry in Warsaw began work on a new gas mask intended to replace the previously introduced MP-4 masks. The design had the working name MP-5.
The first design had two screw-in absorbers (mounted on the sides, which provided lower inhalation resistance). It also used the headgear from the MP-4 mask and polycarbonate goggles. Ultimately, however, a single, centrally located absorber was used.
The face part of the mask is made of plastic. It has one large polyurethane visor[footnote needed]. The canister mounting socket is located in the lower part of the mask. On the right side of the mask is the service valve (for fluid administration and mask tightness testing in an uncontaminated atmosphere), and on the left is the exhalation valve.
The mask is designed with an FP-5 filter (NATO standard thread). Other filters with NATO standard thread can also be used.
The composition of the set
The MP-5 gas mask is stored in the bag as follows:
The mask comes in 4 sizes:
Inspiratory resistance with continuous air flow at a rate of:
Exhalation resistance with continuous air flow rate:
Mask weight (face part with filter):
Patency of drinking device: | https://en.wikipedia.org/wiki/MP-5_gas_mask |
The MP-6 Gas Mask is a Polish military gas mask , which replaces the MP-5 masks.
The MP-6 mask (codenamed Apollo) is designed to protect the soldier's respiratory system from toxic warfare agents, radioactive dust and bacterial aerosols. In addition, the mask's design allows for the intake of fluid and conversation while wearing it. [ 1 ]
The NATO standard filter (40 mm thread) is attached to the mask from the side (on the right, left or both sides, depending on the user's needs). The panoramic visor from the MP-5 mask is replaced by two smaller glasses. [ 1 ] Additionally, the mask glasses are protected with polycarbonate ballistic lenses. [ 2 ]
The fluid intake tube follows the NATO standard, allowing the user to attach a water bottle, canteen or hydration bladder. [ 2 ]
The mask is manufactured by Maskpol , a company belonging to the Polish Armaments Group. Maskpol previously also produced MP-5 masks. [ 3 ]
The mask was presented at MSPO 2010, [ 4 ] at MSPO 2011 (where it was awarded the DEFENDER statuette) [ 5 ] and at MSPO 2012. [ 6 ]
On October 12, 2012, an agreement was signed between the Armament Inspectorate and Maskpol for the delivery of 28,400 MP-6 gas masks in the years 2012–2015. The cost of the masks is to be approximately PLN 21.1 million. [ 3 ]
MP-5 gas mask | https://en.wikipedia.org/wiki/MP-6_gas_mask |
MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III ) [ 5 ] is a coding format for digital audio developed largely by the Fraunhofer Society in Germany under the lead of Karlheinz Brandenburg . [ 12 ] [ 13 ] It was designed to greatly reduce the amount of data required to represent audio, yet still sound like a faithful reproduction of the original uncompressed audio to most listeners; for example, compared to CD-quality digital audio , MP3 compression can commonly achieve a 75–95% reduction in size, depending on the bit rate . [ 14 ] In popular usage, MP3 often refers to files of sound or music recordings stored in the MP3 file format ( .mp3 ) on consumer electronic devices.
Originally defined in 1991 as one of the three audio codecs of the MPEG-1 standard (along with MP2 and MP1 ), it was retained and further extended—defining additional bit rates and support for more audio channels —as the third audio format of the subsequent MPEG-2 standard. MP3 as a file format commonly designates files containing an elementary stream of MPEG-1 Audio or MPEG-2 Audio encoded data, without other complexities of the MP3 standard. Concerning audio compression , which is its most apparent element to end-users, MP3 uses lossy compression to encode data using inexact approximations and the partial discarding of data, allowing for a large reduction in file sizes when compared to uncompressed audio.
The combination of small size and acceptable fidelity led to a boom in the distribution of music over the Internet in the late 1990s, with MP3 serving as an enabling technology at a time when bandwidth and storage were still at a premium. The MP3 format soon became associated with controversies surrounding copyright infringement , music piracy , and the file- ripping and sharing services MP3.com and Napster , among others. With the advent of portable media players (including "MP3 players"), a product category also including smartphones , MP3 support became near-universal and it remains a de facto standard for digital audio despite the creation of newer coding formats such as AAC .
The Moving Picture Experts Group (MPEG) designed MP3 as part of its MPEG-1 , and later MPEG-2 , standards. MPEG-1 Audio (MPEG-1 Part 3), which included MPEG-1 Audio Layer I, II, and III, was approved as a committee draft for an ISO / IEC standard in 1991, [ 15 ] [ 16 ] finalized in 1992, [ 17 ] and published in 1993 as ISO/IEC 11172-3:1993. [ 8 ] An MPEG-2 Audio (MPEG-2 Part 3) extension with lower sample and bit rates was published in 1995 as ISO/IEC 13818-3:1995. [ 9 ] [ 18 ] It requires only minimal modifications to existing MPEG-1 decoders (recognition of the MPEG-2 bit in the header and addition of the new lower sample and bit rates).
The MP3 lossy compression algorithm takes advantage of a perceptual limitation of human hearing called auditory masking . In 1894, the American physicist Alfred M. Mayer reported that a tone could be rendered inaudible by another tone of lower frequency. [ 19 ] In 1959, Richard Ehmer described a complete set of auditory curves regarding this phenomenon. [ 20 ] Between 1967 and 1974, Eberhard Zwicker did work in the areas of tuning and masking of critical frequency-bands, [ 21 ] [ 22 ] which in turn built on the fundamental research in the area from Harvey Fletcher and his collaborators at Bell Labs . [ 23 ]
Perceptual coding was first used for speech coding compression with linear predictive coding (LPC), [ 24 ] which has origins in the work of Fumitada Itakura ( Nagoya University ) and Shuzo Saito ( Nippon Telegraph and Telephone ) in 1966. [ 25 ] In 1978, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs proposed an LPC speech codec , called adaptive predictive coding , that used a psychoacoustic coding-algorithm exploiting the masking properties of the human ear. [ 24 ] [ 26 ] Further optimization by Schroeder and Atal with J.L. Hall was later reported in a 1979 paper. [ 27 ] That same year, a psychoacoustic masking codec was also proposed by M. A. Krasner, [ 28 ] who published and produced hardware for speech (not usable as music bit-compression), but the publication of his results in a relatively obscure Lincoln Laboratory Technical Report [ 29 ] did not immediately influence the mainstream of psychoacoustic codec-development.
The discrete cosine transform (DCT), a type of transform coding for lossy compression, proposed by Nasir Ahmed in 1972, was developed by Ahmed with T. Natarajan and K. R. Rao in 1973; they published their results in 1974. [ 30 ] [ 31 ] [ 32 ] This led to the development of the modified discrete cosine transform (MDCT), proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, [ 33 ] following earlier work by Princen and Bradley in 1986. [ 34 ] The MDCT later became a core part of the MP3 algorithm. [ 35 ]
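The MDCT that later became central to MP3 maps a block of 2N overlapping time samples to N frequency coefficients. A direct, unoptimized implementation of the textbook definition is sketched below (the 36-sample block corresponds to MP3's long-block MDCT; real encoders use windowing, overlap-add, and fast algorithms on top of this).

```python
import numpy as np

def mdct(x):
    """Direct modified discrete cosine transform of a block of 2N samples:
    X_k = sum_n x_n * cos[(pi/N) * (n + 1/2 + N/2) * (k + 1/2)], k = 0..N-1.
    Consecutive blocks overlap by N samples; perfect reconstruction requires
    windowing and overlap-add in the inverse transform (not shown here)."""
    two_n = len(x)
    n_half = two_n // 2
    n = np.arange(two_n)
    k = np.arange(n_half)
    basis = np.cos((np.pi / n_half) * np.outer(n + 0.5 + n_half / 2, k + 0.5))
    return x @ basis

block = np.sin(2 * np.pi * 3 * np.arange(36) / 36)  # one 36-sample (2N = 36) block
print(mdct(block).round(3))
```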
Ernst Terhardt and other collaborators constructed an algorithm describing auditory masking with high accuracy in 1982. [ 36 ] This work added to a variety of reports from authors dating back to Fletcher, and to the work that initially determined critical ratios and critical bandwidths.
In 1985, Atal and Schroeder presented code-excited linear prediction (CELP), an LPC-based perceptual speech-coding algorithm with auditory masking that achieved a significant data compression ratio for its time. [ 24 ] IEEE 's refereed Journal on Selected Areas in Communications reported on a wide variety of (mostly perceptual) audio compression algorithms in 1988. [ 37 ] The "Voice Coding for Communications" edition published in February 1988 reported on a wide range of established, working audio bit compression technologies, [ 37 ] some of them using auditory masking as part of their fundamental design, and several showing real-time hardware implementations.
The genesis of the MP3 technology is fully described in a paper from Professor Hans Musmann, [ 38 ] who chaired the ISO MPEG Audio group for several years. In December 1988, MPEG called for an audio coding standard. In June 1989, 14 audio coding algorithms were submitted. Because of certain similarities between these coding proposals, they were clustered into four development groups. The first group was ASPEC, by Fraunhofer Gesellschaft , AT&T , France Telecom , Deutsche and Thomson-Brandt . The second group was MUSICAM , by Matsushita , CCETT , ITT and Philips . The third group was ATAC (ATRAC Coding), by Fujitsu , JVC , NEC and Sony . And the fourth group was SB-ADPCM , by NTT and BTRL. [ 38 ]
The immediate predecessors of MP3 were "Optimum Coding in the Frequency Domain" (OCF), [ 39 ] and Perceptual Transform Coding (PXFM). [ 40 ] These two codecs, along with block-switching contributions from Thomson-Brandt, were merged into a codec called ASPEC, which was submitted to MPEG and won the quality competition, but was mistakenly rejected as too complex to implement. The first practical implementation of an audio perceptual coder (OCF) in hardware (Krasner's hardware was too cumbersome and slow for practical use) was an implementation of a psychoacoustic transform coder based on Motorola 56000 DSP chips.
Another predecessor of the MP3 format and technology is to be found in the perceptual codec MUSICAM, based on an integer-arithmetic 32-sub-band filter bank driven by a psychoacoustic model. It was primarily designed for Digital Audio Broadcasting (digital radio) and digital TV, and its basic principles were disclosed to the scientific community by CCETT (France) and IRT (Germany) in Atlanta during an IEEE- ICASSP conference in 1991, [ 41 ] after having worked on MUSICAM with Matsushita and Philips since 1989. [ 38 ]
This codec, incorporated into a broadcasting system using COFDM modulation, was demonstrated on air and in the field [ 42 ] with Radio Canada and CRC Canada during the NAB show (Las Vegas) in 1991. The audio part of this broadcasting system was implemented with a two-chip encoder (one for the subband transform, one for the psychoacoustic model designed by the team of G. Stoll (IRT Germany), later known as psychoacoustic model I) and a real-time decoder using a single Motorola 56001 DSP chip running integer-arithmetic software designed by Y.F. Dehery's team (CCETT, France). The simplicity of the corresponding decoder, together with the high audio quality of this codec (the first to use a 48 kHz sampling rate and a 20-bit/sample input format, the highest available sampling standard in 1991 and compatible with the AES/EBU professional digital studio input standard), were the main reasons the characteristics of MUSICAM were later adopted as the basis for an advanced digital music compression codec.
During the development of the MUSICAM encoding software, Stoll and Dehery's team made thorough use of a set of high-quality audio assessment material [ 43 ] selected by a group of audio professionals from the European Broadcasting Union, and later used as a reference for the assessment of music compression codecs. The subband coding technique was found to be efficient, not only for the perceptual coding of high-quality sound materials but especially for the encoding of critical percussive sound materials (drums, triangle ,...), due to the specific temporal masking effect of the MUSICAM sub-band filterbank (this advantage being a specific feature of short transform coding techniques).
As a doctoral student at Germany's University of Erlangen-Nuremberg , Karlheinz Brandenburg began working on digital music compression in the early 1980s, focusing on how people perceive music. He completed his doctoral work in 1989. [ 44 ] MP3 is directly descended from OCF and PXFM, representing the outcome of the collaboration of Brandenburg — working as a postdoctoral researcher at AT&T-Bell Labs with James D. Johnston ("JJ") of AT&T-Bell Labs — with the Fraunhofer Institute for Integrated Circuits , Erlangen (where he worked with Bernhard Grill and four other researchers – "The Original Six" [ 45 ] ), with relatively minor contributions from the MP2 branch of psychoacoustic sub-band coders. In 1990, Brandenburg became an assistant professor at Erlangen-Nuremberg. While there, he continued to work on music compression with scientists at the Fraunhofer Society 's Heinrich Hertz Institute . In 1993, he joined the staff of Fraunhofer HHI. [ 44 ] An a cappella version of the song " Tom's Diner " by Suzanne Vega was the first song used by Brandenburg to develop the MP3 format. It was used as a benchmark to see how well MP3's compression algorithm handled the human voice. Brandenburg adopted the song for testing purposes, listening to it again and again each time he refined the compression algorithm, making sure it did not adversely affect the reproduction of Vega's voice. [ 46 ] Accordingly, he dubbed Vega the "Mother of MP3". [ 47 ] Instrumental music had been easier to compress, but Vega's voice sounded unnatural in early versions of the format. Brandenburg eventually met Vega and heard Tom's Diner performed live.
In 1991, two available proposals were assessed for an MPEG audio standard: MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) and ASPEC (Adaptive Spectral Perceptual Entropy Coding). The MUSICAM technique, proposed by Philips (Netherlands), CCETT (France), the Institute for Broadcast Technology (Germany), and Matsushita (Japan), [ 48 ] was chosen due to its simplicity and error robustness, as well as for its high level of computational efficiency. [ 49 ] The MUSICAM format, based on sub-band coding , became the basis for the MPEG Audio compression format, incorporating, for example, its frame structure, header format, sample rates, etc.
While much of MUSICAM technology and ideas were incorporated into the definition of MPEG Audio Layer I and Layer II, only the filter bank and the data structure based on 1152-sample framing (file format and byte-oriented stream) of MUSICAM remained in the Layer III (MP3) format, as part of the computationally inefficient hybrid filter bank. Under the chairmanship of Professor Musmann of the Leibniz University Hannover , the editing of the standard was delegated to Leon van de Kerkhof (Netherlands), Gerhard Stoll (Germany), and Yves-François Dehery (France), who worked on Layer I and Layer II. ASPEC was the joint proposal of AT&T Bell Laboratories, Thomson Consumer Electronics, Fraunhofer Society, and CNET . [ 50 ] It provided the highest coding efficiency.
A working group consisting of van de Kerkhof, Stoll, Leonardo Chiariglione ( CSELT VP for Media), Yves-François Dehery, Karlheinz Brandenburg (Germany) and James D. Johnston (United States) took ideas from ASPEC, integrated the filter bank from Layer II, added some of their ideas such as the joint stereo coding of MUSICAM and created the MP3 format, which was designed to achieve the same quality at 128 kbit/s as MP2 at 192 kbit/s .
The algorithms for MPEG-1 Audio Layer I, II and III were approved in 1991 [ 15 ] [ 16 ] and finalized in 1992 [ 17 ] as part of MPEG-1 , the first standard suite by MPEG , which resulted in the international standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio or MPEG-1 Part 3 ), published in 1993. [ 8 ] Files and data streams conforming to this standard must handle sample rates of 48 kHz, 44.1 kHz, and 32 kHz, and they continue to be supported by current MP3 players and decoders. Thus the first generation of MP3 defined 14 × 3 = 42 interpretations of MP3 frame data structures and size layouts.
The compression efficiency of encoders is typically defined by the bit rate because the compression ratio depends on the bit depth and sampling rate of the input signal. Nevertheless, compression ratios are often published. They may use the compact disc (CD) parameters as references (44.1 kHz , 2 channels at 16 bits per channel or 2×16 bit), or sometimes the Digital Audio Tape (DAT) SP parameters (48 kHz, 2×16 bit). Compression ratios with this latter reference are higher, which demonstrates the problem with the use of the term compression ratio for lossy encoders.
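To make the reference dependence concrete, the short sketch below computes the ratio at which the same 128 kbit/s MP3 stream would be quoted against the CD and DAT SP references mentioned above; the figures are illustrative only.

```python
# Illustrative only: the quoted compression "ratio" of a 128 kbit/s MP3 stream
# depends entirely on which uncompressed reference is chosen.

def pcm_bitrate(sample_rate_hz, bits_per_sample, channels):
    """Bit rate of uncompressed PCM audio, in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

mp3_bitrate = 128_000                    # bits per second

cd_rate  = pcm_bitrate(44_100, 16, 2)    # 1,411,200 bit/s (CD reference)
dat_rate = pcm_bitrate(48_000, 16, 2)    # 1,536,000 bit/s (DAT SP reference)

print(f"vs CD : {cd_rate / mp3_bitrate:.1f}:1")    # about 11:1
print(f"vs DAT: {dat_rate / mp3_bitrate:.1f}:1")   # about 12:1
```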
Karlheinz Brandenburg used a CD recording of Suzanne Vega 's song " Tom's Diner " to assess and refine the MP3 compression algorithm . [ 51 ] This song was chosen because of its nearly monophonic nature and wide spectral content, making it easier to hear imperfections in the compression format during playback. This particular track has an interesting property in that the two channels are almost, but not completely, the same, leading to a case where Binaural Masking Level Depression causes spatial unmasking of noise artifacts unless the encoder properly recognizes the situation and applies corrections similar to those detailed in the MPEG-2 AAC psychoacoustic model. Some more critical audio excerpts ( glockenspiel , triangle, accordion , etc.) were taken from the EBU V3/SQAM reference compact disc and have been used by professional sound engineers to assess the subjective quality of the MPEG Audio formats. [ citation needed ]
A reference simulation software implementation, written in the C language and later known as ISO 11172-5 , was developed (in 1991–1996) by the members of the ISO MPEG Audio committee to produce bit-compliant MPEG Audio files (Layer 1, Layer 2, Layer 3). It was approved as a committee draft of the ISO/IEC technical report in March 1994 and printed as document CD 11172-5 in April 1994. [ 52 ] It was approved as a draft technical report (DTR/DIS) in November 1994, [ 53 ] finalized in 1996 and published as international standard ISO/IEC TR 11172-5:1998 in 1998. [ 54 ] The reference software in C language was later published as a freely available ISO standard. [ 55 ] Working in non-real time on several operating systems, it was able to demonstrate the first real-time hardware decoding (DSP based) of compressed audio. Some other real-time implementations of MPEG Audio encoders and decoders [ 56 ] were available for digital broadcasting (radio DAB , television DVB ) towards consumer receivers and set-top boxes.
On 7 July 1994, the Fraunhofer Society released the first software MP3 encoder, called l3enc . [ 57 ] The filename extension .mp3 was chosen by the Fraunhofer team on 14 July 1995 (previously, the files had been named .bit ). [ 2 ] With the first real-time software MP3 player WinPlay3 (released 9 September 1995) many people were able to encode and play back MP3 files on their PCs. Because of the relatively small hard drives of the era (≈500–1000 MB ) lossy compression was essential to store multiple albums' worth of music on a home computer as full recordings (as opposed to MIDI notation, or tracker files which combined notation with short recordings of instruments playing single notes).
A hacker named SoloH discovered the source code of the "dist10" MPEG reference implementation shortly after the release on the servers of the University of Erlangen . He developed a higher-quality version and spread it on the internet. This code started the widespread CD ripping and digital music distribution as MP3 over the internet. [ 58 ] [ 59 ] [ 60 ] [ 61 ]
Further work on MPEG audio [ 62 ] was finalized in 1994 as part of the second suite of MPEG standards, MPEG-2 , more formally known as international standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Part 3 or backward compatible MPEG-2 Audio or MPEG-2 Audio BC [ 18 ] ), originally published in 1995. [ 9 ] [ 63 ] MPEG-2 Part 3 (ISO/IEC 13818-3) defined 42 additional bit rates and sample rates for MPEG-1 Audio Layer I, II and III. The new sampling rates are exactly half that of those originally defined in MPEG-1 Audio. This reduction in sampling rates serves to cut the available frequency fidelity in half while likewise cutting the bit rate by 50%. MPEG-2 Part 3 also enhanced MPEG-1's audio by allowing the coding of audio programs with more than two channels, up to 5.1 multichannel. [ 62 ] An MP3 encoded with MPEG-2 sample rates reproduces only half the audio bandwidth of an MPEG-1 encoding, which is appropriate for piano and singing.
A third generation of "MP3" style data streams (files) extended the MPEG-2 ideas and implementation but was named MPEG-2.5 audio since MPEG-3 already had a different meaning. This extension was developed at Fraunhofer IIS, the registered patent holder of MP3, by reducing the frame sync field in the MP3 header from 12 to 11 bits. As in the transition from MPEG-1 to MPEG-2, MPEG-2.5 adds additional sampling rates exactly half of those available using MPEG-2. It thus widens the scope of MP3 to include human speech and other applications yet requires only 25% of the bandwidth (frequency reproduction) possible using MPEG-1 sampling rates. While not an ISO-recognized standard, MPEG-2.5 is widely supported by both inexpensive Chinese and brand-name digital audio players as well as computer software-based MP3 encoders ( LAME ), decoders (FFmpeg) and players (MPC) adding 3 × 8 = 24 additional MP3 frame types. Each generation of MP3 thus supports 3 sampling rates exactly half that of the previous generation for a total of 9 varieties of MP3 format files. The sample rate comparison table between MPEG-1, 2, and 2.5 is given later in the article. [ 64 ] [ 65 ] MPEG-2.5 is supported by LAME (since 2000), Media Player Classic (MPC), iTunes, and FFmpeg.
MPEG-2.5 was not developed by MPEG (see above) and was never approved as an international standard. MPEG-2.5 is thus an unofficial or proprietary extension to the MP3 format. It is nonetheless ubiquitous and especially advantageous for low-bit-rate human speech applications.
The ISO standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio) defined three formats: the MPEG-1 Audio Layer I, Layer II and Layer III. The ISO standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Audio) defined an extended version of MPEG-1 Audio: MPEG-2 Audio Layer I, Layer II, and Layer III. MPEG-2 Audio (MPEG-2 Part 3) should not be confused with MPEG-2 AAC (MPEG-2 Part 7 – ISO/IEC 13818-7). [ 18 ]
LAME is the most advanced MP3 encoder. [ citation needed ] LAME includes a variable bit rate (VBR) encoding which uses a quality parameter rather than a bit rate goal. Later versions (2008+) support an n.nnn quality goal which automatically selects MPEG-2 or MPEG-2.5 sampling rates as appropriate for human speech recordings that need only 5512 Hz bandwidth resolution.
In the second half of the 1990s, MP3 files began to spread on the Internet , often via underground pirated song networks. The first known experiment in Internet distribution was organized in the early 1990s by the Internet Underground Music Archive , better known by the acronym IUMA. After some experiments [ 67 ] using uncompressed audio files, this archive started to deliver compressed MPEG Audio files over the low-speed Internet of the time, first in the MP2 (Layer II) format and later as MP3 files once the standard was fully completed. The popularity of MP3s began to rise rapidly with the advent of Nullsoft 's audio player Winamp , released in 1997, which as of 2023 still had a community of 80 million active users. [ 68 ] In 1998, Windows Media Player 5.2 and later versions added support for the MP3 format. The same year, the first portable solid-state digital audio player, MPMan , developed by SaeHan Information Systems of Seoul , South Korea , was released, followed by the Rio PMP300 later in 1998, despite legal suppression efforts by the RIAA . [ 69 ]
In November 1997, the website mp3.com was offering thousands of MP3s created by independent artists for free. [ 69 ] The small size of MP3 files enabled widespread peer-to-peer file sharing of music ripped from CDs, which would have previously been nearly impossible. The first large peer-to-peer filesharing network, Napster , was launched in 1999. The ease of creating and sharing MP3s resulted in widespread copyright infringement . Major record companies argued that this free sharing of music reduced sales, and called it " music piracy ". They reacted by pursuing lawsuits against Napster , which was eventually shut down and later sold, and against individual users who engaged in file sharing. [ 70 ]
Unauthorized MP3 file sharing continues on next-generation peer-to-peer networks . Some authorized services, such as Beatport , Bleep , Juno Records , eMusic , Zune Marketplace , Walmart.com , Rhapsody , the recording-industry-approved reincarnation of Napster , and Amazon.com sell unrestricted music in the MP3 format.
An MP3 file is made up of MP3 frames, which consist of a header and a data block. This sequence of frames is called an elementary stream . Due to the "bit reservoir", frames are not independent items and cannot usually be extracted on arbitrary frame boundaries. The MP3 data blocks contain the (compressed) audio information in terms of frequencies and amplitudes. The MP3 header consists of a sync word , which is used to identify the beginning of a valid frame. This is followed by a bit indicating that this is the MPEG standard and two bits that indicate that layer 3 is used; hence MPEG-1 Audio Layer 3 or MP3. After this, the values will differ, depending on the MP3 file. ISO/IEC 11172-3 defines the range of values for each section of the header along with the specification of the header. Most MP3 files today contain ID3 metadata , which precedes or follows the MP3 frames. The data stream can contain an optional checksum .
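The layout described above can be illustrated with a short parsing sketch. The field widths below follow the commonly documented 32-bit MP3 frame header; the example byte sequence is hypothetical, and the bitrate and sample-rate indices would still need to be looked up in per-version tables.

```python
# Simplified sketch of parsing the 32-bit header at the start of an MP3 frame.
# This is not a complete validator (CRC, padding and free-format streams are ignored).

VERSIONS = {0b00: "MPEG-2.5", 0b01: "reserved", 0b10: "MPEG-2", 0b11: "MPEG-1"}
LAYERS   = {0b00: "reserved", 0b01: "Layer III", 0b10: "Layer II", 0b11: "Layer I"}

def parse_header(first_four_bytes: bytes) -> dict:
    h = int.from_bytes(first_four_bytes, "big")
    if (h >> 21) & 0x7FF != 0x7FF:            # 11-bit sync word: all ones
        raise ValueError("not aligned on a frame sync word")
    return {
        "version":        VERSIONS[(h >> 19) & 0b11],
        "layer":          LAYERS[(h >> 17) & 0b11],
        "crc_protected":  ((h >> 16) & 1) == 0,
        "bitrate_index":  (h >> 12) & 0b1111,  # looked up in a per-version table
        "samplerate_idx": (h >> 10) & 0b11,    # looked up in a per-version table
        "channel_mode":   (h >> 6) & 0b11,     # 0 stereo, 1 joint, 2 dual, 3 mono
    }

# Hypothetical example: a typical MPEG-1 Layer III header byte sequence
print(parse_header(bytes([0xFF, 0xFB, 0x90, 0x64])))
```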
Joint stereo is done only on a frame-to-frame basis. [ 71 ]
In short, MP3 compression works by reducing the accuracy of certain components of sound that are considered (by psychoacoustic analysis) to be beyond the hearing capabilities of most humans. This method is commonly referred to as perceptual coding or psychoacoustic modeling. [ 72 ] The remaining audio information is then recorded in a space-efficient manner using MDCT and FFT algorithms.
The MP3 encoding algorithm is generally split into four parts. Part 1 divides the audio signal into smaller pieces, called frames, and an MDCT filter is then performed on the output. Part 2 passes the sample into a 1024-point fast Fourier transform (FFT), then the psychoacoustic model is applied and another MDCT filter is performed on the output. Part 3 quantizes and encodes each sample, a step known as noise allocation, which adjusts itself to meet the bit rate and sound masking requirements. Part 4 formats the bitstream , called an audio frame, which is made up of 4 parts: the header , error check , audio data , and ancillary data . [ 35 ]
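As a rough illustration of these four stages, the toy sketch below strings together a frame transform, a stand-in "psychoacoustic" threshold, coarse quantization, and bitstream packing. Every numeric choice in it (the 1% threshold, the step size, the tiny length-prefixed packing) is an arbitrary assumption for demonstration; it is not the MP3 algorithm itself.

```python
# Toy, heavily simplified sketch of the four stages (not a real MP3 encoder).
import numpy as np

def encode_frame(pcm, coarse_step=0.05):
    # Part 1: transform the frame to the frequency domain (a stand-in for the
    # polyphase filter bank + MDCT used by real encoders)
    spectrum = np.fft.rfft(pcm * np.hanning(len(pcm)))

    # Part 2: a crude stand-in for the psychoacoustic model: treat 1% of the
    # strongest component as the level below which detail can be discarded
    threshold = 0.01 * np.max(np.abs(spectrum))

    # Part 3: "noise allocation": zero out masked components, quantize the rest
    kept = np.where(np.abs(spectrum) >= threshold, spectrum, 0)
    quantized = np.round(kept / (threshold * coarse_step))

    # Part 4: pack a tiny header (payload length) plus the quantized data
    payload = quantized.astype(np.complex64).tobytes()
    return len(payload).to_bytes(4, "big") + payload

frame = np.sin(2 * np.pi * 440 * np.arange(1152) / 44100)  # 1152-sample test frame
print(len(encode_frame(frame)), "bytes for one toy frame")
```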
The MPEG-1 standard does not include a precise specification for an MP3 encoder but does provide examples of psychoacoustic models, rate loops, and the like in the non-normative part of the original standard. [ 73 ] MPEG-2 doubles the number of sampling rates that are supported and MPEG-2.5 adds 3 more. When this was written, the suggested implementations were quite dated. Implementers of the standard were supposed to devise algorithms suitable for removing parts of the information from the audio input. As a result, many different MP3 encoders became available, each producing files of differing quality. Comparisons were widely available, so it was easy for a prospective user of an encoder to research the best choice. Some encoders that were proficient at encoding at higher bit rates (such as LAME ) were not necessarily as good at lower bit rates. Over time, LAME evolved on the SourceForge website until it became the de facto CBR MP3 encoder. Later an ABR mode was added. Work progressed on true variable bit rate using a quality goal between 0 and 10. Eventually, numbers (such as -V 9.600) could generate excellent quality low bit rate voice encoding at only 41 kbit/s using the MPEG-2.5 extensions.
MP3 uses an overlapping MDCT structure. Each MPEG-1 MP3 frame is 1152 samples, divided into two granules of 576 samples. These samples, initially in the time domain, are transformed in one block to 576 frequency-domain samples by MDCT. [ 74 ] MP3 also allows the use of shorter blocks in a granule, down to a size of 192 samples; this feature is used when a transient is detected. Doing so limits the temporal spread of quantization noise accompanying the transient (see psychoacoustics ). Frequency resolution is limited by the small long block window size, which decreases coding efficiency. [ 71 ] Time resolution can be too low for highly transient signals and may cause smearing of percussive sounds. [ 71 ]
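For illustration, a direct (unoptimized) MDCT of one long block can be written in a few lines. This sketch only demonstrates the 2N-samples-in, N-coefficients-out property mentioned above; it does not reproduce MP3's actual windowing, subband structure, or block-switching rules.

```python
# Minimal direct MDCT of one block (O(N^2), for illustration only; real
# encoders use fast algorithms and the specific MP3 window/overlap rules).
import numpy as np

def mdct(x):
    """MDCT of a block of length 2N, returning N frequency-domain samples."""
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N).reshape(-1, 1)
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return basis @ x

block = np.random.randn(1152)   # a 1152-sample window of time-domain audio
coeffs = mdct(block)            # 576 frequency-domain samples
print(coeffs.shape)             # (576,)
```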
Due to the tree structure of the filter bank, pre-echo problems are made worse, as the combined impulse response of the two filter banks does not, and cannot, provide an optimum solution in time/frequency resolution. [ 71 ] Additionally, the combining of the two filter banks' outputs creates aliasing problems that must be handled partially by the "aliasing compensation" stage; however, that creates excess energy to be coded in the frequency domain, thereby decreasing coding efficiency. [ 75 ]
Decoding, on the other hand, is carefully defined in the standard. Most decoders are "bitstream compliant", which means that the decompressed output that they produce from a given MP3 file will be the same, within a specified degree of rounding tolerance, as the output specified mathematically in the ISO/IEC standard document (ISO/IEC 11172-3). Therefore, comparison of decoders is usually based on how computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process). Over time this concern has become less of an issue as CPU clock rates transitioned from MHz to GHz. Encoder/decoder overall delay is not defined, which means there is no official provision for gapless playback . However, some encoders, such as LAME, can attach additional metadata that allows players that can handle it to deliver seamless playback.
When performing lossy audio encoding, such as creating an MP3 data stream, there is a trade-off between the amount of data generated and the sound quality of the results. The person generating an MP3 selects a bit rate, which specifies how many kilobits per second of audio is desired. The higher the bit rate, the larger the MP3 data stream will be, and, generally, the closer it will sound to the original recording. With too low a bit rate, compression artifacts (i.e., sounds that were not present in the original recording) may be audible in the reproduction. Some audio is hard to compress because of its randomness and sharp attacks. When this type of audio is compressed, artifacts such as ringing or pre-echo are usually heard. A sample of applause or a triangle instrument encoded with a relatively low bit rate provides good examples of compression artifacts. Most subjective testing of perceptual codecs tends to avoid using these types of sound materials; however, the artifacts generated by percussive sounds are barely perceptible due to the specific temporal masking feature of the 32 sub-band filterbank of Layer II on which the format is based.
Besides the bit rate of an encoded piece of audio, the quality of MP3-encoded sound also depends on the quality of the encoder algorithm as well as the complexity of the signal being encoded. As the MP3 standard allows quite a bit of freedom with encoding algorithms, different encoders do feature quite different quality, even with identical bit rates. As an example, in a public listening test featuring two early MP3 encoders set at about 128 kbit/s , [ 76 ] one scored 3.66 on a 1–5 scale, while the other scored only 2.22. Quality is dependent on the choice of encoder and encoding parameters. [ 77 ]
This observation caused a revolution in audio encoding. Early on, bit rate was the prime and only consideration. At the time, MP3 files were of the very simplest type: they used the same bit rate for the entire file, a process known as constant bit rate (CBR) encoding. Using a constant bit rate makes encoding simpler and less CPU-intensive. However, it is also possible to optimize the size of the file by creating files where the bit rate changes throughout the file; these are known as variable bit rate (VBR) files. The bit reservoir and VBR encoding were part of the original MPEG-1 standard. The concept behind them is that, in any piece of audio, some sections are easier to compress, such as silence or music containing only a few tones, while others will be more difficult to compress. So, the overall quality of the file may be increased by using a lower bit rate for the less complex passages and a higher one for the more complex parts. With some advanced MP3 encoders, it is possible to specify a given quality, and the encoder will adjust the bit rate accordingly. Users who desire a particular "quality setting" that is transparent to their ears can use this value when encoding all of their music and, generally speaking, will not need to worry about performing personal listening tests on each piece of music to determine the correct bit rate.
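A practical consequence of CBR is that file size follows directly from bit rate and duration, whereas a VBR file's size depends on the audio content. A minimal back-of-envelope sketch (the track length and bit rates below are arbitrary examples):

```python
# Back-of-envelope sketch: the size of a CBR MP3 is simply bit rate x duration.
def cbr_size_mb(bitrate_kbps, duration_s):
    return bitrate_kbps * 1000 * duration_s / 8 / 1_000_000   # megabytes

for kbps in (128, 192, 320):
    print(f"{kbps} kbit/s, 4-minute track: {cbr_size_mb(kbps, 240):.1f} MB")
```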
Perceived quality can be influenced by the listening environment (ambient noise), listener attention, listener training, and in most cases by listener audio equipment (such as sound cards, speakers, and headphones). Furthermore, for lectures and other human-speech applications, sufficient quality may be achieved with a lower quality setting, which also reduces encoding time and complexity. A test given to new students by Stanford University Music Professor Jonathan Berger showed that student preference for MP3-quality music has risen each year. Berger said the students seem to prefer the 'sizzle' sounds that MP3s bring to music. [ 78 ]
In an in-depth study of MP3 audio quality, sound artist and composer Ryan Maguire 's project "The Ghost in the MP3" isolates the sounds lost during MP3 compression. In 2015, he released the track "moDernisT" (an anagram of "Tom's Diner"), composed exclusively from the sounds deleted during MP3 compression of the song "Tom's Diner", [ 79 ] [ 80 ] [ 81 ] the track originally used in the formulation of the MP3 standard. A detailed account of the techniques used to isolate the sounds deleted during MP3 compression, along with the conceptual motivation for the project, was published in the 2014 Proceedings of the International Computer Music Conference. [ 82 ]
Bit rate is the product of the sample rate and number of bits per sample used to encode the music. CD audio is 44100 samples per second. The number of bits per sample also depends on the number of audio channels. The CD is stereo and 16 bits per channel. So, multiplying 44100 by 32 gives 1411200—the bit rate of uncompressed CD digital audio. MP3 was designed to encode this 1411 kbit/s data at 320 kbit/s or less. If less complex passages are detected by the MP3 algorithms then lower bit rates may be employed. When using MPEG-2 instead of MPEG-1, MP3 supports only lower sampling rates (16,000, 22,050, or 24,000 samples per second) and offers choices of bit rate as low as 8 kbit/s but no higher than 160 kbit/s . By lowering the sampling rate, MPEG-2 layer III removes all frequencies above half the new sampling rate that may have been present in the source audio.
The MPEG-1 Audio Layer III standard allows 14 selected bit rates: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s , along with the 3 highest available sampling rates of 32, 44.1 and 48 kHz . [ 65 ] MPEG-2 Audio Layer III also allows 14 somewhat different (and mostly lower) bit rates of 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144 and 160 kbit/s with sampling rates of 16, 22.05 and 24 kHz, which are exactly half those of MPEG-1. [ 65 ] MPEG-2.5 Audio Layer III frames are limited to only 8 bit rates of 8, 16, 24, 32, 40, 48, 56 and 64 kbit/s with 3 even lower sampling rates of 8, 11.025, and 12 kHz. [ citation needed ] On earlier systems that only support the MPEG-1 Audio Layer III standard, MP3 files with a bit rate below 32 kbit/s might be played back sped-up and pitched-up.
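These allowed combinations can be summarized as simple lookup tables. The sketch below just transcribes the values listed above (the MPEG-2.5 rates being the unofficial extension) and counts the resulting frame-type combinations.

```python
# The allowed combinations described above, expressed as lookup tables
# (values transcribed from the text; MPEG-2.5 is the unofficial extension).
BITRATES_KBPS = {
    "MPEG-1":   [32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320],
    "MPEG-2":   [8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, 160],
    "MPEG-2.5": [8, 16, 24, 32, 40, 48, 56, 64],
}
SAMPLE_RATES_HZ = {
    "MPEG-1":   [32000, 44100, 48000],
    "MPEG-2":   [16000, 22050, 24000],
    "MPEG-2.5": [8000, 11025, 12000],
}

total = sum(len(BITRATES_KBPS[v]) * len(SAMPLE_RATES_HZ[v]) for v in BITRATES_KBPS)
print(total, "frame-type combinations across the three generations")  # 42 + 42 + 24 = 108
```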
Earlier systems also lack fast forwarding and rewinding playback controls on MP3. [ 84 ] [ 85 ]
MPEG-1 frames contain the most detail in 320 kbit/s mode, the highest allowable bit rate setting, [ 86 ] with silence and simple tones still requiring 32 kbit/s . MPEG-2 frames can capture sound reproduction up to 12 kHz and require at most 160 kbit/s . MP3 files made with MPEG-2 do not have 20 kHz bandwidth because of the Nyquist–Shannon sampling theorem . Frequency reproduction is always strictly less than half of the sampling rate, and imperfect filters require a larger margin for error (noise level versus sharpness of filter), so an 8 kHz sampling rate limits the maximum frequency to 4 kHz, while a 48 kHz sampling rate limits an MP3 to a maximum 24 kHz sound reproduction. MPEG-2 uses half, and MPEG-2.5 only a quarter, of the MPEG-1 sample rates.
For the general field of human speech reproduction, a bandwidth of 5,512 Hz is sufficient to produce excellent results (for voice) using a sampling rate of 11,025 Hz and VBR encoding from a standard 44,100 Hz WAV file. English speakers average 41–42 kbit/s with the -V 9.6 setting, but this may vary with the amount of silence recorded or the rate of delivery (wpm). Resampling to 12,000 Hz (6 kHz bandwidth) is selected by the LAME parameter -V 9.4. Likewise, -V 9.2 selects a 16,000 Hz sample rate and a resulting 8 kHz lowpass filtering. Older versions of LAME and FFmpeg only support integer arguments for the variable bit rate quality selection parameter. The n.nnn quality parameter (-V) is documented at lame.sourceforge.net but is only supported in LAME with the new-style VBR variable bit rate quality selector, not average bit rate (ABR).
A sample rate of 44.1 kHz is commonly used for music reproduction because this is also used for CD audio , the main source used for creating MP3 files. A great variety of bit rates are used on the Internet. A bit rate of 128 kbit/s is commonly used, [ 87 ] at a compression ratio of 11:1, offering adequate audio quality in a relatively small space. As Internet bandwidth availability and hard drive sizes have increased, higher bit rates up to 320 kbit/s are widespread. Uncompressed audio as stored on an audio-CD has a bit rate of 1,411.2 kbit/s , (16 bit/sample × 44,100 samples/second × 2 channels / 1,000 bits/kilobit), so the bit rates 128, 160, and 192 kbit/s represent compression ratios of approximately 11:1, 9:1 and 7:1 respectively.
Non-standard bit rates up to 640 kbit/s can be achieved with the LAME encoder and the free format option, although few MP3 players can play those files. According to the ISO standard, decoders are only required to be able to decode streams up to 320 kbit/s . [ 88 ] [ 89 ] [ 90 ] Early MPEG Layer III encoders used what is now called constant bit rate (CBR). The software was only able to use a uniform bit rate on all frames in an MP3 file. Later, more sophisticated MP3 encoders were able to use the bit reservoir to target an average bit rate, selecting the encoding rate for each frame based on the complexity of the sound in that portion of the recording.
A more sophisticated MP3 encoder can produce variable bit rate audio. MPEG audio may use bit rate switching on a per-frame basis, but only layer III decoders must support it. [ 65 ] [ 91 ] [ 92 ] [ 93 ] VBR is used when the goal is to achieve a fixed level of quality. The final file size of a VBR encoding is less predictable than with constant bit rate. Average bit rate is a type of VBR implemented as a compromise between the two: the bit rate is allowed to vary for more consistent quality, but is controlled to remain near an average value chosen by the user, for predictable file sizes. Although an MP3 decoder must support VBR to be standards compliant, historically some decoders have bugs with VBR decoding, particularly before VBR encoders became widespread. The most evolved LAME MP3 encoder supports the generation of VBR, ABR, and even the older CBR MP3 formats.
Layer III audio can also use a "bit reservoir", a partially full frame's ability to hold part of the next frame's audio data, allowing temporary changes in effective bit rate, even in a constant bit rate stream. [ 65 ] [ 91 ] Internal handling of the bit reservoir increases encoding delay. [ citation needed ] There is no scale factor band 21 (sfb21) for frequencies above approx 16 kHz , forcing the encoder to choose between less accurate representation in band 21 or less efficient storage in all bands below band 21, the latter resulting in wasted bit rate in VBR encoding. [ 94 ]
The ancillary data field can be used to store user-defined data. The ancillary data is optional and the number of bits available is not explicitly given. The ancillary data is located after the Huffman code bits and ranges to where the next frame's main_data_begin points to. Encoder mp3PRO used ancillary data to encode extra information which could improve audio quality when decoded with its algorithm.
A "tag" in an audio file is a section of the file that contains metadata such as the title, artist, album, track number, or other information about the file's contents. The MP3 standards do not define tag formats for MP3 files, nor is there a standard container format that would support metadata and obviate the need for tags. However, several de facto standards for tag formats exist. As of 2010, the most widespread are ID3v1 and ID3v2 , and the more recently introduced APEv2 . These tags are normally embedded at the beginning or end of MP3 files, separate from the actual MP3 frame data. MP3 decoders either extract information from the tags or just treat them as ignorable, non-MP3 junk data.
Playing and editing software often contains tag editing functionality, but there are also tag editor applications dedicated to the purpose. Aside from metadata about the audio content, tags may also be used for DRM . [ 95 ] ReplayGain is a standard for measuring and storing the loudness of an MP3 file ( audio normalization ) in its metadata tag, enabling a ReplayGain-compliant player to automatically adjust the overall playback volume for each file. MP3Gain may be used to reversibly modify files based on ReplayGain measurements so that adjusted playback can be achieved on players without ReplayGain capability.
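As a minimal sketch of the playback-side idea, the following applies a ReplayGain-style decibel adjustment to floating-point PCM samples; reading the tag and measuring loudness are omitted, and the -6 dB value is just an example.

```python
# Minimal sketch: applying a ReplayGain-style dB adjustment to PCM samples
# (tag parsing is omitted; samples are assumed to be floats in [-1.0, 1.0]).
import numpy as np

def apply_gain(samples, gain_db):
    factor = 10 ** (gain_db / 20)                 # dB to linear amplitude factor
    return np.clip(samples * factor, -1.0, 1.0)   # clip to avoid overflow

pcm = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100) * 0.9
quieter = apply_gain(pcm, -6.0)                   # e.g. a track tagged -6 dB
print(round(float(quieter.max()), 2))             # roughly 0.45
```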
The basic MP3 decoding and encoding technology is patent-free in the European Union, all patents having expired there by 2012 at the latest. In the United States, the technology became substantially patent-free on 16 April 2017 (see below). MP3 patents expired in the US between 2007 and 2017. In the past, many organizations claimed ownership of patents related to MP3 decoding or encoding. These claims led to several legal threats and actions from a variety of sources. As a result, in countries that allow software patents , uncertainty about which patents had to be licensed in order to create MP3 products without committing patent infringement was common in the early stages of the technology's adoption.
The initial near-complete MPEG-1 standard (parts 1, 2, and 3) was publicly available on 6 December 1991 as ISO CD 11172. [ 96 ] [ 97 ] In most countries, patents cannot be filed after prior art has been made public, and patents expire 20 years after the initial filing date, which can be up to 12 months later for filings in other countries. As a result, patents required to implement MP3 expired in most countries by December 2012, 21 years after the publication of ISO CD 11172.
An exception is the United States, where patents in force but filed before 8 June 1995 expire after the later of 17 years from the issue date or 20 years from the priority date. A lengthy patent prosecution process may result in a patent issued much later than normally expected (see submarine patents ). The various MP3-related patents expired on dates ranging from 2007 to 2017 in the United States. [ 98 ] Patents for anything disclosed in ISO CD 11172 filed a year or more after its publication are questionable. If only the known MP3 patents filed by December 1992 are considered, then MP3 decoding has been patent-free in the US since 22 September 2015, when U.S. patent 5,812,672 , which had a PCT filing in October 1992, expired. [ 99 ] [ 100 ] [ 101 ] If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology became patent-free in the United States on 16 April 2017, when U.S. patent 6,009,399 , held [ 102 ] and administered by Technicolor , [ 103 ] expired. As a result, many free and open-source software projects, such as the Fedora operating system , have decided to start shipping MP3 support by default, and users will no longer have to resort to installing unofficial packages maintained by third party software repositories for MP3 playback or encoding. [ 104 ]
Technicolor (formerly called Thomson Consumer Electronics) claimed to control MP3 licensing of the Layer 3 patents in many countries, including the United States, Japan, Canada, and EU countries. [ 105 ] Technicolor had been actively enforcing these patents. [ 106 ] MP3 license revenues from Technicolor's administration generated about €100 million for the Fraunhofer Society in 2005. [ 107 ] In September 1998, the Fraunhofer Institute sent a letter to several developers of MP3 software stating that a license was required to "distribute and/or sell decoders and/or encoders". The letter claimed that unlicensed products "infringe the patent rights of Fraunhofer and Thomson. To make, sell or distribute products using the [MPEG Layer-3] standard and thus our patents, you need to obtain a license under these patents from us." [ 108 ] This led to the situation where the LAME MP3 encoder project could not offer its users official binaries that could run on their computer. The project's position was that as source code, LAME was simply a description of how an MP3 encoder could be implemented. Unofficially, compiled binaries were available from other sources.
Sisvel S.p.A., a Luxembourg-based company, administers licenses for patents applying to MPEG Audio. [ 109 ] They, along with its United States subsidiary Audio MPEG, Inc. previously sued Thomson for patent infringement on MP3 technology, [ 110 ] but those disputes were resolved in November 2005 with Sisvel granting Thomson a license to their patents. Motorola followed soon after and signed with Sisvel to license MP3-related patents in December 2005. [ 111 ] Except for three patents, the US patents administered by Sisvel [ 112 ] had all expired in 2015. The three exceptions are: U.S. patent 5,878,080 , expired February 2017; U.S. patent 5,850,456 , expired February 2017; and U.S. patent 5,960,037 , expired 9 April 2017. As of around the first quarter of 2023, Sisvel's licensing program has become a legacy. [ 113 ]
In September 2006, German officials seized MP3 players from SanDisk 's booth at the IFA show in Berlin after an Italian patents firm won an injunction on behalf of Sisvel against SanDisk in a dispute over licensing rights. The injunction was later reversed by a Berlin judge, [ 114 ] but that reversal was in turn blocked the same day by another judge from the same court, "bringing the Patent Wild West to Germany" in the words of one commentator. [ 115 ] In February 2007, Texas MP3 Technologies sued Apple, Samsung Electronics and Sandisk in eastern Texas federal court , claiming infringement of a portable MP3 player patent that Texas MP3 said it had been assigned. Apple, Samsung, and Sandisk all settled the claims against them in January 2009. [ 116 ] [ 117 ]
Alcatel-Lucent has asserted several MP3 coding and compression patents, allegedly inherited from AT&T-Bell Labs, in litigation of its own. In November 2006, before the companies' merger, Alcatel sued Microsoft for allegedly infringing seven patents. On 23 February 2007, a San Diego jury awarded Alcatel-Lucent US $1.52 billion in damages for infringement of two of them. [ 118 ] The court subsequently revoked the award, however, finding that one patent had not been infringed and that the other was not owned by Alcatel-Lucent; it was co-owned by AT&T and Fraunhofer, who had licensed it to Microsoft , the judge ruled. [ 119 ] That defense judgment was upheld on appeal in 2008. [ 120 ]
Other lossy formats exist. Among these, Advanced Audio Coding (AAC) is the most widely used, and was designed to be the successor to MP3. There also exist other lossy formats such as mp3PRO and MP2 . They are members of the same technological family as MP3 and depend on roughly similar psychoacoustic models and MDCT algorithms. Whereas MP3 uses a hybrid coding approach that is part MDCT and part FFT , AAC is purely MDCT, significantly improving compression efficiency. [ 121 ] Many of the basic patents underlying these formats are held by Fraunhofer Society, Alcatel-Lucent, Thomson Consumer Electronics , [ 121 ] Bell , Dolby , LG Electronics , NEC , NTT Docomo , Panasonic , Sony Corporation , [ 122 ] ETRI , JVC Kenwood , Philips , Microsoft , and NTT . [ 123 ]
Microsoft created and promoted their own competing standard, Windows Media Audio (WMA) with the claim that it is better than MP3. [ 124 ] When the digital audio player market was taking off, MP3 was widely adopted as the standard hence the popular name "MP3 player". Sony was an exception and used their own ATRAC codec taken from their MiniDisc format, which Sony claimed was better. [ 125 ] Following criticism and lower than expected Walkman sales, in 2004 Sony for the first time introduced native MP3 support to its Walkman players. [ 126 ]
There are also open compression formats like Opus and Vorbis (OGG) that are available free of charge and without any known patent restrictions. Some of the newer audio compression formats, such as AAC, WMA Pro, Vorbis, and Opus, are free of some limitations inherent to the MP3 format that cannot be overcome by any MP3 encoder. [ 98 ] [ 127 ]
Besides lossy compression methods, lossless formats are a significant alternative to MP3 because they provide unaltered audio content, though with an increased file size compared to lossy compression. Lossless formats include FLAC (Free Lossless Audio Codec), Apple Lossless and many others. | https://en.wikipedia.org/wiki/MP3 |
MPEG-1 is a standard for lossy compression of video and audio . It is designed to compress VHS -quality raw digital video and CD audio down to about 1.5 Mbit/s (26:1 and 6:1 compression ratios respectively) [ 2 ] without excessive quality loss, making video CDs , digital cable / satellite TV and digital audio broadcasting (DAB) practical. [ 3 ] [ 4 ]
Today, MPEG-1 has become the most widely compatible lossy audio/video format in the world, and is used in a large number of products and technologies. Perhaps the best-known part of the MPEG-1 standard is the first version of the MP3 audio format it introduced.
The MPEG-1 standard is published as ISO / IEC 11172 , titled Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s .
The standard consists of the following five Parts : Systems, Video, Audio, Compliance testing, and Software simulation (reference software). [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]
The predecessor of MPEG-1 for video coding was the H.261 standard produced by the CCITT (now known as the ITU-T ). The basic architecture established in H.261 was the motion-compensated DCT hybrid video coding structure. [ 10 ] [ 11 ] It uses macroblocks of size 16×16 with block-based motion estimation in the encoder and motion compensation using encoder-selected motion vectors in the decoder, with residual difference coding using a discrete cosine transform (DCT) of size 8×8, scalar quantization , and variable-length codes (like Huffman codes ) for entropy coding . [ 12 ] H.261 was the first practical video coding standard, and all of its described design elements were also used in MPEG-1. [ 13 ]
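The residual-coding step named above (an 8×8 DCT followed by scalar quantization) can be sketched as follows; the uniform quantizer step used here is an arbitrary assumption, whereas real codecs use perceptually weighted quantization matrices.

```python
# Sketch of the 8x8 block transform step: a 2-D DCT-II of one block, followed
# by uniform scalar quantization (the step size here is chosen arbitrarily).
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # one 8x8 block
coeffs = dctn(block, norm="ortho")            # 2-D DCT of the block
step = 16                                     # arbitrary uniform quantizer step
quantized = np.round(coeffs / step)
reconstructed = idctn(quantized * step, norm="ortho")
print(np.abs(block - reconstructed).max())    # quantization error stays bounded
```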
Modeled on the successful collaborative approach and the compression technologies developed by the Joint Photographic Experts Group and CCITT 's Experts Group on Telephony (creators of the JPEG image compression standard and the H.261 standard for video conferencing respectively), the Moving Picture Experts Group (MPEG) working group was established in January 1988, by the initiative of Hiroshi Yasuda ( Nippon Telegraph and Telephone ) and Leonardo Chiariglione ( CSELT ). [ 14 ] MPEG was formed to address the need for standard video and audio formats, and to build on H.261 to get better quality through the use of somewhat more complex encoding methods (e.g., supporting higher precision for motion vectors). [ 3 ] [ 15 ] [ 16 ]
Development of the MPEG-1 standard began in May 1988. Fourteen video and fourteen audio codec proposals were submitted by individual companies and institutions for evaluation. The codecs were extensively tested for computational complexity and subjective (human perceived) quality, at data rates of 1.5 Mbit/s. This specific bitrate was chosen for transmission over T-1 / E-1 lines and as the approximate data rate of audio CDs . [ 17 ] The codecs that excelled in this testing were utilized as the basis for the standard and refined further, with additional features and other improvements being incorporated in the process. [ 18 ]
After 20 meetings of the full group in various cities around the world, and 4½ years of development and testing, the final standard (for parts 1–3) was approved in early November 1992 and published a few months later. [ 19 ] The reported completion date of the MPEG-1 standard varies greatly: a largely complete draft standard was produced in September 1990, and from that point on, only minor changes were introduced. [ 3 ] The draft standard was publicly available for purchase. [ 20 ] The standard was finished with the 6 November 1992 meeting. [ 21 ] The Berkeley Plateau Multimedia Research Group developed an MPEG-1 decoder in November 1992. [ 22 ] In July 1990, before the first draft of the MPEG-1 standard had even been written, work began on a second standard, MPEG-2 , [ 23 ] intended to extend MPEG-1 technology to provide full broadcast-quality video (as per CCIR 601 ) at high bitrates (3–15 Mbit/s) and support for interlaced video. [ 24 ] Due in part to the similarity between the two codecs, the MPEG-2 standard includes full backwards compatibility with MPEG-1 video, so any MPEG-2 decoder can play MPEG-1 videos. [ 25 ]
Notably, the MPEG-1 standard very strictly defines the bitstream , and decoder function, but does not define how MPEG-1 encoding is to be performed, although a reference implementation is provided in ISO/IEC-11172-5. [ 2 ] This means that MPEG-1 coding efficiency can drastically vary depending on the encoder used, and generally means that newer encoders perform significantly better than their predecessors. [ 26 ] The first three parts (Systems, Video and Audio) of ISO/IEC 11172 were published in August 1993. [ 27 ]
Due to its age, MPEG-1 is no longer covered by any essential patents and can thus be used without obtaining a licence or paying any fees. [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] The ISO patent database lists one patent for ISO 11172, US 4,472,747, which expired in 2003. [ 39 ] The near-complete draft of the MPEG-1 standard was publicly available as ISO CD 11172 [ 20 ] by December 6, 1991. [ 1 ] Neither the July 2008 Kuro5hin article "Patent Status of MPEG-1, H.261 and MPEG-2", [ 40 ] nor an August 2008 thread on the gstreamer-devel [ 41 ] mailing list were able to list a single unexpired MPEG-1 Video and MPEG-1 Audio Layer I/II patent. A May 2009 discussion on the whatwg mailing list mentioned US 5,214,678 patent as possibly covering MPEG-1 Audio Layer II. [ 42 ] Filed in 1990 and published in 1993, this patent is now expired. [ 43 ]
A full MPEG-1 decoder and encoder, with "Layer III audio", could not be implemented royalty-free since there were companies that required patent fees for implementations of MPEG-1 Audio Layer III, as discussed in the MP3 article. All patents worldwide connected to MP3 expired on 30 December 2017, making the format entirely free to use. [ 44 ] On 23 April 2017, Fraunhofer IIS stopped charging for Technicolor's MP3 licensing program for certain MP3-related patents and software. [ 45 ]
The following corporations filed declarations with ISO saying they held patents for the MPEG-1 Video (ISO/IEC-11172-2) format, although all such patents have since expired. [ 46 ]
Part 1 of the MPEG-1 standard covers systems , and is defined in ISO/IEC-11172-1.
MPEG-1 Systems specifies the logical layout and methods used to store the encoded audio, video, and other data into a standard bitstream, and to maintain synchronization between the different contents. This file format is specifically designed for storage on media, and transmission over communication channels , that are considered relatively reliable. Only limited error protection is defined by the standard, and small errors in the bitstream may cause noticeable defects.
This structure was later named an MPEG program stream : "The MPEG-1 Systems design is essentially identical to the MPEG-2 Program Stream structure." [ 48 ] This terminology is more popular, precise (differentiates it from an MPEG transport stream ) and will be used here.
Program Streams (PS) are concerned with combining multiple packetized elementary streams (usually just one audio and video PES) into a single stream, ensuring simultaneous delivery, and maintaining synchronization. The PS structure is known as a multiplex , or a container format .
Presentation time stamps (PTS) exist in PS to correct the inevitable disparity between audio and video SCR values (time-base correction). 90 kHz PTS values in the PS header tell the decoder which video SCR values match which audio SCR values. [ 49 ] PTS determines when to display a portion of an MPEG program, and is also used by the decoder to determine when data can be discarded from the buffer . [ 51 ] Either video or audio will be delayed by the decoder until the corresponding segment of the other arrives and can be decoded.
PTS handling can be problematic. Decoders must accept multiple program streams that have been concatenated (joined sequentially). This causes PTS values in the middle of the video to reset to zero, which then begin incrementing again. Such PTS wraparound disparities can cause timing issues that must be specially handled by the decoder.
Decoding Time Stamps (DTS), additionally, are required because of B-frames. With B-frames in the video stream, adjacent frames have to be encoded and decoded out-of-order (re-ordered frames). DTS is quite similar to PTS, but instead of just handling sequential frames, it contains the proper time-stamps to tell the decoder when to decode and display the next B-frame (types of frames explained below), ahead of its anchor (P- or I-) frame. Without B-frames in the video, PTS and DTS values are identical. [ 52 ]
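The relationship between the two time stamps can be illustrated with a small sketch: with B-frames present, decode order differs from display order, so DTS and PTS diverge. The 40 ms tick and the specific frame pattern below are arbitrary examples (real streams carry 90 kHz clock values).

```python
# Sketch of why DTS differs from PTS: with B-frames, coded (decode) order
# differs from display order. Frame durations are arbitrary 40 ms ticks (25 fps).
display_order = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]

# Anchors (I/P) must be decoded before the B-frames that reference them,
# so the coded order moves each anchor ahead of its dependent B-frames.
decode_order = ["I0", "P3", "B1", "B2", "P6", "B4", "B5"]

tick = 40  # ms
pts = {frame: display_order.index(frame) * tick for frame in display_order}
dts = {frame: decode_order.index(frame) * tick for frame in decode_order}

for frame in decode_order:
    print(f"{frame}: DTS={dts[frame]:4d} ms  PTS={pts[frame]:4d} ms")
```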
To generate the PS, the multiplexer will interleave the (two or more) packetized elementary streams. This is done so the packets of the simultaneous streams can be transferred over the same channel and are guaranteed to both arrive at the decoder at precisely the same time. This is a case of time-division multiplexing .
Determining how much data from each stream should be in each interleaved segment (the size of the interleave) is complicated, yet an important requirement. Improper interleaving will result in buffer underflows or overflows, as the receiver gets more of one stream than it can store (e.g. audio), before it gets enough data to decode the other simultaneous stream (e.g. video). The MPEG Video Buffering Verifier (VBV) assists in determining if a multiplexed PS can be decoded by a device with a specified data throughput rate and buffer size. [ 53 ] This offers feedback to the multiplexer and the encoder, so that they can change the multiplex size or adjust bitrates as needed for compliance.
Part 2 of the MPEG-1 standard covers video and is defined in ISO/IEC-11172-2. The design was heavily influenced by H.261 .
MPEG-1 Video exploits perceptual compression methods to significantly reduce the data rate required by a video stream. It reduces or completely discards information in certain frequencies and areas of the picture that the human eye has limited ability to fully perceive. It also exploits temporal (over time) and spatial (across a picture) redundancy common in video to achieve better data compression than would be possible otherwise. (See: Video compression )
Before encoding video to MPEG-1, the color-space is transformed to Y′CbCr (Y′=Luma, Cb=Chroma Blue, Cr=Chroma Red). Luma (brightness, resolution) is stored separately from chroma (color, hue, phase) and even further separated into red and blue components.
The chroma is also subsampled to 4:2:0 , meaning it is reduced to half resolution vertically and half resolution horizontally, i.e., to just one quarter the number of samples used for the luma component of the video. [ 2 ] This allocation of more samples to luma than to chroma is similar in concept to the Bayer pattern filter that is commonly used for the image capturing sensor in digital color cameras. Because the human eye is much more sensitive to small changes in brightness (the Y component) than in color (the Cr and Cb components), chroma subsampling is a very effective way to reduce the amount of video data that needs to be compressed. However, on videos with fine detail (high spatial complexity ) this can manifest as chroma aliasing artifacts. Compared to other digital compression artifacts , this issue seems to very rarely be a source of annoyance. Because of the subsampling, Y′CbCr 4:2:0 video is ordinarily stored using even dimensions ( divisible by 2 horizontally and vertically).
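A short sketch of this preprocessing step is given below: an RGB image is converted to Y′CbCr using BT.601-style coefficients (an assumption made here for illustration) and the chroma planes are averaged down to 4:2:0.

```python
# Sketch of the colour-space step: RGB -> Y'CbCr (BT.601-style coefficients) and
# 4:2:0 chroma subsampling by averaging each 2x2 block of Cb and Cr samples.
import numpy as np

def rgb_to_ycbcr(rgb):                        # rgb: H x W x 3, values 0..255
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(plane):                     # halve resolution in both directions
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.randint(0, 256, (16, 16, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(rgb)
print(y.shape, subsample_420(cb).shape, subsample_420(cr).shape)  # (16,16) (8,8) (8,8)
```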
Y′CbCr color is often informally called YUV to simplify the notation, although that term more properly applies to a somewhat different color format. Similarly, the terms luminance and chrominance are often used instead of the (more accurate) terms luma and chroma.
MPEG-1 supports resolutions up to 4095×4095 (12 bits), and bit rates up to 100 Mbit/s. [ 16 ]
MPEG-1 videos are most commonly seen using Source Input Format (SIF) resolution: 352×240, 352×288, or 320×240. These relatively low resolutions, combined with a bitrate less than 1.5 Mbit/s, make up what is known as a constrained parameters bitstream (CPB), later renamed the "Low Level" (LL) profile in MPEG-2. This is the minimum video specifications any decoder should be able to handle, to be considered MPEG-1 compliant . This was selected to provide a good balance between quality and performance, allowing the use of reasonably inexpensive hardware of the time. [ 3 ] [ 16 ]
MPEG-1 has several frame/picture types that serve different purposes. The most important, yet simplest, is I-frame .
"I-frame" is an abbreviation for " Intra-frame ", so-called because they can be decoded independently of any other frames. They may also be known as I-pictures, or keyframes due to their somewhat similar function to the key frames used in animation. I-frames can be considered effectively identical to baseline JPEG images. [ 16 ]
High-speed seeking through an MPEG-1 video is only possible to the nearest I-frame. When cutting a video it is not possible to start playback of a segment of video before the first I-frame in the segment (at least not without computationally intensive re-encoding). For this reason, I-frame-only MPEG videos are used in editing applications.
I-frame only compression is very fast, but produces very large file sizes: a factor of 3× (or more) larger than normally encoded MPEG-1 video, depending on how temporally complex a specific video is. [ 3 ] I-frame only MPEG-1 video is very similar to MJPEG video. So much so that very high-speed and theoretically lossless (in reality, there are rounding errors) conversion can be made from one format to the other, provided a couple of restrictions (color space and quantization matrix) are followed in the creation of the bitstream. [ 54 ]
The length between I-frames is known as the group of pictures (GOP) size. MPEG-1 most commonly uses a GOP size of 15–18, i.e. 1 I-frame for every 14–17 non-I-frames (some combination of P- and B- frames). With more intelligent encoders, GOP size is dynamically chosen, up to some pre-selected maximum limit. [ 16 ]
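A fixed GOP pattern is easy to sketch; the helper below builds the familiar arrangement of one I-frame followed by repeating B-B-P groups. The defaults (15 frames, two B-frames between anchors) are just a common example, not a requirement.

```python
# Sketch: build a fixed GOP pattern of one I-frame followed by repeating groups
# of (m - 1) B-frames and a P-frame (real encoders may choose this dynamically).
def gop_pattern(gop_size=15, m=3):
    tail = ("B" * (m - 1) + "P") * gop_size   # more than enough, then truncate
    return ("I" + tail)[:gop_size]

print(gop_pattern())   # IBBPBBPBBPBBPBB
```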
Limits are placed on the maximum number of frames between I-frames due to decoding complexity, decoder buffer size, recovery time after data errors, seeking ability, and accumulation of IDCT errors in low-precision implementations, which are most common in hardware decoders (see: IEEE -1180).
"P-frame" is an abbreviation for "Predicted-frame". They may also be called forward-predicted frames or inter-frames (B-frames are also inter-frames).
P-frames exist to improve compression by exploiting the temporal (over time) redundancy in a video. P-frames store only the difference in image from the frame (either an I-frame or P-frame) immediately preceding it (this reference frame is also called the anchor frame ).
The difference between a P-frame and its anchor frame is calculated using motion vectors on each macroblock of the frame (see below). Such motion vector data will be embedded in the P-frame for use by the decoder.
A P-frame can contain any number of intra-coded blocks (DCT and Quantized), in addition to any forward-predicted blocks (Motion Vectors). [ 55 ]
If a video drastically changes from one frame to the next (such as a cut ), it is more efficient to encode it as an I-frame.
"B-frame" stands for "bidirectional-frame" or "bipredictive frame". They may also be known as backwards-predicted frames or B-pictures. B-frames are quite similar to P-frames, except they can make predictions using both the previous and future frames (i.e. two anchor frames).
It is therefore necessary for the player to first decode the next I- or P- anchor frame that comes sequentially after the B-frame, before the B-frame can be decoded and displayed. This means decoding B-frames requires larger data buffers and causes an increased delay during both decoding and encoding. This also necessitates the decoding time stamps (DTS) feature in the container/system stream (see above). As such, B-frames have long been the subject of much controversy; they are often avoided in videos, and are sometimes not fully supported by hardware decoders.
No other frames are predicted from a B-frame. Because of this, a very low bitrate B-frame can be inserted, where needed, to help control the bitrate. If this was done with a P-frame, future P-frames would be predicted from it and would lower the quality of the entire sequence. However, similarly, the future P-frame must still encode all the changes between it and the previous I- or P- anchor frame. B-frames can also be beneficial in videos where the background behind an object is being revealed over several frames, or in fading transitions, such as scene changes. [ 3 ] [ 16 ]
A B-frame can contain any number of intra-coded blocks and forward-predicted blocks, in addition to backwards-predicted, or bidirectionally predicted blocks. [ 16 ] [ 55 ]
MPEG-1 has a unique frame type not found in later video standards. "D-frames" or DC-pictures are independently coded images (intra-frames) that have been encoded using DC transform coefficients only (AC coefficients are removed when encoding D-frames—see DCT below) and hence are very low quality. D-frames are never referenced by I-, P- or B- frames. D-frames are only used for fast previews of video, for instance when seeking through a video at high speed. [ 3 ]
Given moderately higher-performance decoding equipment, fast preview can be accomplished by decoding I-frames instead of D-frames. This provides higher quality previews, since I-frames contain AC coefficients as well as DC coefficients. If the encoder can assume that rapid I-frame decoding capability is available in decoders, it can save bits by not sending D-frames (thus improving compression of the video content). For this reason, D-frames are seldom actually used in MPEG-1 video encoding, and the D-frame feature has not been included in any later video coding standards.
MPEG-1 operates on video in a series of 8×8 blocks for quantization. However, to reduce the bit rate needed for motion vectors, and because chroma (color) is subsampled by a factor of 4, each pair of (red and blue) chroma blocks corresponds to 4 different luma blocks. That is, for 4 luma blocks of size 8×8, there is one Cb block of 8×8 and one Cr block of 8×8. This set of 6 blocks, covering a 16×16-pixel area of the picture, is processed together and called a macroblock .
All of these 8x8 blocks are independently put through DCT and quantization.
A macroblock is the smallest independent unit of (color) video. Motion vectors (see below) operate solely at the macroblock level.
If the height or width of the video is not an exact multiple of 16, full rows and full columns of macroblocks must still be encoded and decoded to fill out the picture (though the extra decoded pixels are not displayed).
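A minimal sketch (hypothetical helper name) of the corresponding bookkeeping: the macroblock grid is found by rounding each dimension up to the next multiple of 16, so a frame whose width or height is not a multiple of 16 carries padded macroblocks along its edges.

```python
def macroblock_grid(width, height):
    """Number of 16x16 macroblocks needed to cover a frame, padding partial edges."""
    mb_wide = (width + 15) // 16    # ceiling division by 16
    mb_high = (height + 15) // 16
    return mb_wide, mb_high

print(macroblock_grid(352, 288))    # -> (22, 18): exact fit, no padding
print(macroblock_grid(350, 240))    # -> (22, 15): the last macroblock column is padded
```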
To decrease the amount of temporal redundancy in a video, only blocks that change are updated (up to the maximum GOP size). This is known as conditional replenishment. However, this is not very effective by itself. Movement of the objects and/or the camera may result in large portions of the frame needing to be updated, even though only the position of the previously encoded objects has changed. Through motion estimation, the encoder can compensate for this movement and remove a large amount of redundant information.
The encoder compares the current frame with adjacent parts of the video from the anchor frame (previous I- or P- frame) in a diamond pattern, up to a (encoder-specific) predefined radius limit from the area of the current macroblock. If a match is found, only the direction and distance (i.e. the vector of the motion ) from the previous video area to the current macroblock need to be encoded into the inter-frame (P- or B- frame). The reverse of this process, performed by the decoder to reconstruct the picture, is called motion compensation .
A predicted macroblock rarely matches the current picture perfectly, however. The difference between the estimated matching area and the real frame/macroblock is called the prediction error. The larger the prediction error, the more data must additionally be encoded in the frame. For efficient video compression, it is very important that the encoder is capable of effectively and precisely performing motion estimation.
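The following Python sketch illustrates the general idea of block-matching motion estimation with a sum-of-absolute-differences (SAD) cost; for simplicity it uses an exhaustive search over a small window rather than the diamond pattern described above, and the function names and test frames are purely illustrative.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def estimate_motion(anchor, current, mb_y, mb_x, search=8, mb=16):
    """Find the (dy, dx) offset into the anchor frame that best predicts one macroblock."""
    target = current[mb_y:mb_y + mb, mb_x:mb_x + mb]
    best, best_cost = (0, 0), sad(anchor[mb_y:mb_y + mb, mb_x:mb_x + mb], target)
    h, w = anchor.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = mb_y + dy, mb_x + dx
            if 0 <= y and y + mb <= h and 0 <= x and x + mb <= w:
                cost = sad(anchor[y:y + mb, x:x + mb], target)
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best, best_cost      # best_cost is the remaining prediction error to be coded

# Synthetic test: the "current" frame is the previous frame shifted 2 rows down and 3 columns
# right, so the best match for a block lies 2 rows up and 3 columns left in the anchor frame.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))
print(estimate_motion(prev, curr, 16, 16))  # -> ((-2, -3), 0)
```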
Motion vectors record the distance between two areas on screen based on the number of pixels (also called pels). MPEG-1 video uses a motion vector (MV) precision of one half of one pixel, or half-pel. The finer the precision of the MVs, the more accurate the match is likely to be, and the more efficient the compression. There are trade-offs to higher precision, however. Finer MV precision results in using a larger amount of data to represent the MV, as larger numbers must be stored in the frame for every single MV, increased coding complexity as increasing levels of interpolation on the macroblock are required for both the encoder and decoder, and diminishing returns (minimal gains) with higher precision MVs. Half-pel precision was chosen as the ideal trade-off for that point in time. (See: qpel )
Because neighboring macroblocks are likely to have very similar motion vectors, this redundant information can be compressed quite effectively by being stored DPCM -encoded. Only the (smaller) amount of difference between the MVs for each macroblock needs to be stored in the final bitstream.
P-frames have one motion vector per macroblock, relative to the previous anchor frame. B-frames, however, can use two motion vectors; one from the previous anchor frame, and one from the future anchor frame. [ 55 ]
Partial macroblocks, and black borders/bars encoded into the video that do not fall exactly on a macroblock boundary, cause havoc with motion prediction. The block padding/border information prevents the macroblock from closely matching with any other area of the video, and so, significantly larger prediction error information must be encoded for every one of the several dozen partial macroblocks along the screen border. DCT encoding and quantization (see below) also isn't nearly as effective when there is large/sharp picture contrast in a block.
An even more serious problem exists with macroblocks that contain significant, random, edge noise , where the picture transitions to (typically) black. All the above problems also apply to edge noise. In addition, the added randomness is simply impossible to compress significantly. All of these effects will lower the quality (or increase the bitrate) of the video substantially.
Each 8×8 block is encoded by first applying a forward discrete cosine transform (FDCT) and then a quantization process. The FDCT process (by itself) is theoretically lossless, and can be reversed by applying an Inverse DCT ( IDCT ) to reproduce the original values (in the absence of any quantization and rounding errors). In reality, there are some (sometimes large) rounding errors introduced both by quantization in the encoder (as described in the next section) and by IDCT approximation error in the decoder. The minimum allowed accuracy of a decoder IDCT approximation is defined by ISO/IEC 23002-1. (Prior to 2006, it was specified by IEEE 1180 -1990.)
The FDCT process converts the 8×8 block of uncompressed pixel values (brightness or color difference values) into an 8×8 indexed array of frequency coefficient values. One of these is the (statistically high in variance) "DC coefficient", which represents the average value of the entire 8×8 block. The other 63 coefficients are the statistically smaller "AC coefficients", which have positive or negative values each representing sinusoidal deviations from the flat block value represented by the DC coefficient.
An example of an encoded 8×8 FDCT block:
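One can be produced with a direct (deliberately unoptimised) Python implementation of the 8×8 DCT-II; applied to a completely flat block it shows that every AC coefficient is zero and the DC coefficient is eight times the block average.

```python
import math

def fdct_8x8(block):
    """Forward 2-D DCT-II of an 8x8 block, straight from the defining formula."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

flat = [[100] * 8 for _ in range(8)]        # a uniform block of brightness 100
coeffs = fdct_8x8(flat)
print(round(coeffs[0][0]))                  # DC coefficient: 800 (8 x the block average)
print(round(abs(coeffs[3][5]), 9))          # a representative AC coefficient: essentially zero
```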
Since the DC coefficient value is statistically correlated from one block to the next, it is compressed using DPCM encoding. Only the (smaller) amount of difference between each DC value and the value of the DC coefficient in the block to its left needs to be represented in the final bitstream.
Additionally, the frequency conversion performed by applying the DCT provides a statistical decorrelation function to efficiently concentrate the signal into fewer high-amplitude values prior to applying quantization (see below).
Quantization is, essentially, the process of reducing the accuracy of a signal by dividing it by some larger step size and rounding to an integer value (i.e. finding the nearest multiple of the step size and discarding the remainder).
The frame-level quantizer is a number from 0 to 31 (although encoders will usually omit/disable some of the extreme values) which determines how much information will be removed from a given frame. The frame-level quantizer is typically either dynamically selected by the encoder to maintain a certain user-specified bitrate, or (much less commonly) directly specified by the user.
A "quantization matrix" is a string of 64 numbers (ranging from 0 to 255) which tells the encoder how relatively important or unimportant each piece of visual information is. Each number in the matrix corresponds to a certain frequency component of the video image.
A typical quantization matrix assigns small divisors to low-frequency components and much larger divisors to high-frequency components. Quantization is performed by taking each of the 64 frequency values of the DCT block, dividing them by the frame-level quantizer, then dividing them by their corresponding values in the quantization matrix. Finally, the result is rounded down. This significantly reduces, or completely eliminates, the information in some frequency components of the picture. Typically, high-frequency information is less visually important, and so high frequencies are much more strongly quantized (drastically reduced). MPEG-1 actually uses two separate quantization matrices, one for intra-blocks (I-blocks) and one for inter-blocks (P- and B-blocks), so quantization of different block types can be done independently, and so, more effectively. [ 3 ]
This quantization process usually reduces a significant number of the AC coefficients to zero, (known as sparse data) which can then be more efficiently compressed by entropy coding (lossless compression) in the next step.
An example quantized DCT block:
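In place of a worked table, the following Python sketch (using an illustrative, non-standard quantization matrix generated by a simple formula and made-up coefficient values) shows how a block of DCT coefficients collapses to mostly zeros after quantization.

```python
def quantize_block(coeffs, quant_matrix, quantizer_scale):
    """Divide each coefficient by quantizer_scale times its matrix entry, truncating toward zero."""
    return [[int(coeffs[u][v] / (quantizer_scale * quant_matrix[u][v]))
             for v in range(8)] for u in range(8)]

# Illustrative (non-standard) matrix: small divisors at low frequencies, larger at high frequencies.
quant_matrix = [[8 + 4 * (u + v) for v in range(8)] for u in range(8)]

# Illustrative DCT block: a large DC term with AC terms that fall off toward higher frequencies.
coeffs = [[800 if (u, v) == (0, 0) else 400 // (1 + u + v) for v in range(8)] for u in range(8)]

for row in quantize_block(coeffs, quant_matrix, quantizer_scale=4):
    print(row)
# Only a handful of low-frequency entries survive; the rest become the zeros ("sparse data")
# that the run-length and Huffman coding stages below compress so effectively.
```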
Quantization eliminates a large amount of data, and is the main lossy processing step in MPEG-1 video encoding. This is also the primary source of most MPEG-1 video compression artifacts , like blockiness , color banding , noise , ringing , discoloration , etc. This happens when video is encoded with an insufficient bitrate, and the encoder is therefore forced to use high frame-level quantizers ( strong quantization ) through much of the video.
Several steps in the encoding of MPEG-1 video are lossless, meaning they will be reversed upon decoding to produce exactly the same (original) values. Since these lossless data compression steps do not add noise to, or otherwise change, the contents (unlike quantization), this is sometimes referred to as noiseless coding . [ 47 ] Since lossless compression aims to remove as much redundancy as possible, it is known as entropy coding in the field of information theory .
The coefficients of quantized DCT blocks tend to zero towards the bottom-right. Maximum compression can be achieved by a zig-zag scanning of the DCT block starting from the top left and using Run-length encoding techniques.
The DC coefficients and motion vectors are DPCM -encoded.
Run-length encoding (RLE) is a simple method of compressing repetition. A sequential string of characters, no matter how long, can be replaced with a few bytes, noting the value that repeats, and how many times. For example, if someone were to say "five nines", you would know they mean the number: 99999.
RLE is particularly effective after quantization, as a significant number of the AC coefficients are now zero (called sparse data) and can be represented with just a couple of bytes. This is stored in a special 2-dimensional Huffman table that codes the run length and the run-ending character.
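A Python sketch of the zig-zag scan and a plain (run, level) run-length pass over a quantized block; the real bitstream uses fixed variable-length code tables rather than the tuples printed here, and the block contents are made up for illustration.

```python
def zigzag_indices(n=8):
    """(row, col) pairs in zig-zag order: diagonals of increasing frequency, alternating direction."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def run_level_encode(block):
    """Separate the DC term, then encode the AC terms as (zero-run, level) pairs ending in EOB."""
    scan = [block[r][c] for r, c in zigzag_indices()]
    dc, pairs, run = scan[0], [], 0
    for level in scan[1:]:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    pairs.append("EOB")            # trailing zeros are implied by the end-of-block marker
    return dc, pairs

blk = [[0] * 8 for _ in range(8)]
blk[0][0], blk[0][1], blk[2][0] = 96, 5, -3          # made-up quantized coefficients
print(run_level_encode(blk))                         # -> (96, [(0, 5), (1, -3), 'EOB'])
```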
Huffman Coding is a very popular and relatively simple method of entropy coding, and used in MPEG-1 video to reduce the data size. The data is analyzed to find strings that repeat often. Those strings are then put into a special table, with the most frequently repeating data assigned the shortest code. This keeps the data as small as possible with this form of compression. [ 47 ] Once the table is constructed, those strings in the data are replaced with their (much smaller) codes, which reference the appropriate entry in the table. The decoder simply reverses this process to produce the original data.
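As a sketch of the principle behind such a table (the code tables actually used by MPEG-1 are fixed by the standard rather than rebuilt for each stream, and the symbol frequencies below are made up), the following Python snippet builds a Huffman code with the standard greedy merge of the two least-frequent entries.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code in which frequent symbols get short codes and rare symbols long ones."""
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)              # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Symbols stand in for (run, level) pairs; the most frequent symbol ends up with the shortest code.
print(huffman_code({"EOB": 50, "(0,1)": 30, "(0,2)": 12, "(1,1)": 8}))
```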
This is the final step in the video encoding process, so the result of Huffman coding is known as the MPEG-1 video "bitstream."
I-frames store complete frame information within the frame and are therefore suited for random access. P-frames provide compression using motion vectors relative to the previous frame (I or P). B-frames provide maximum compression but require both the previous and the next frame for computation; therefore, processing of B-frames requires a larger buffer on the decoder side. The configuration of the Group of Pictures (GOP) should be selected based on these factors. I-frame-only sequences give the least compression but are useful for random access, FF/FR and editability. I- and P-frame sequences give moderate compression and add a certain degree of random access and FF/FR functionality. I-, P- and B-frame sequences give very high compression but also increase the coding/decoding delay significantly; such configurations are therefore not suited for video-telephony or video-conferencing applications.
The typical data rate of an I-frame is 1 bit per pixel while that of a P-frame is 0.1 bit per pixel and for a B-frame, 0.015 bit per pixel. [ 56 ]
Part 3 of the MPEG-1 standard covers audio and is defined in ISO/IEC-11172-3.
MPEG-1 Audio utilizes psychoacoustics to significantly reduce the data rate required by an audio stream. It reduces or completely discards certain parts of the audio that it determines the human ear cannot hear, either because they are in frequencies where the ear has limited sensitivity, or because they are masked by other (typically louder) sounds. [ 57 ]
Channel encoding modes: mono, dual channel (two independent mono channels), stereo, and joint stereo.
Sampling rates: 32, 44.1 and 48 kHz.
Bit rates: 32–448 kbit/s for Layer I, 32–384 kbit/s for Layer II, and 32–320 kbit/s for Layer III.
MPEG-1 Audio is divided into 3 layers. Each higher layer is more computationally complex, and generally more efficient at lower bitrates, than the previous one. [ 16 ] The layers are semi-backwards compatible, as higher layers reuse technologies implemented by the lower layers. A "full" Layer II decoder can also play Layer I audio, but not Layer III audio, although not all higher-level players are "full". [ 57 ]
MPEG-1 Audio Layer I is a simplified version of MPEG-1 Audio Layer II. [ 18 ] Layer I uses a smaller 384-sample frame size for very low delay, and finer resolution. [ 26 ] This is advantageous for applications like teleconferencing, studio editing, etc. It has lower complexity than Layer II to facilitate real-time encoding on the hardware available c. 1990 . [ 47 ]
Layer I saw limited adoption in its time, and most notably was used on Philips ' defunct Digital Compact Cassette at a bitrate of 384 kbit/s. [ 2 ] With the substantial performance improvements in digital processing since its introduction, Layer I quickly became unnecessary and obsolete.
Layer I audio files typically use the extension ".mp1" or sometimes ".m1a".
MPEG-1 Audio Layer II (the first version of MP2, often informally called MUSICAM) [ 57 ] is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. [ 59 ] Decoding MP2 audio is computationally simple relative to MP3, AAC , etc.
MPEG-1 Audio Layer II was derived from the MUSICAM ( Masking pattern adapted Universal Subband Integrated Coding And Multiplexing ) audio codec, developed by Centre commun d'études de télévision et télécommunications (CCETT), Philips , and Institut für Rundfunktechnik (IRT/CNET) [ 16 ] [ 18 ] [ 60 ] as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of digital audio broadcasting.
Most key features of MPEG-1 Audio were directly inherited from MUSICAM, including the filter bank, time-domain processing, audio frame sizes, etc. However, improvements were made, and the actual MUSICAM algorithm was not used in the final MPEG-1 Audio Layer II standard. The widespread usage of the term MUSICAM to refer to Layer II is entirely incorrect and discouraged for both technical and legal reasons. [ 57 ]
MP2 is a time-domain encoder. It uses a low-delay 32-sub-band polyphase filter bank for time-frequency mapping, with overlapping ranges (i.e. polyphase) to prevent aliasing. [ 61 ] The psychoacoustic model is based on the principles of auditory masking , simultaneous masking effects, and the absolute threshold of hearing (ATH). The size of a Layer II frame is fixed at 1152 samples (coefficients).
Time domain refers to how analysis and quantization is performed on short, discrete samples/chunks of the audio waveform. This offers low delay as only a small number of samples are analyzed before encoding, as opposed to frequency domain encoding (like MP3) which must analyze many times more samples before it can decide how to transform and output encoded audio. This also offers higher performance on complex, random and transient impulses (such as percussive instruments, and applause), offering avoidance of artifacts like pre-echo.
The 32 sub-band filter bank returns 32 amplitude coefficients , one for each equal-sized frequency band/segment of the audio, which is about 700 Hz wide (depending on the audio's sampling frequency). The encoder then utilizes the psychoacoustic model to determine which sub-bands contain audio information that is less important, and so, where quantization will be inaudible, or at least much less noticeable. [ 47 ]
The psychoacoustic model is applied using a 1024-point fast Fourier transform (FFT). Of the 1152 samples per frame, 64 samples at the top and bottom of the frequency range are ignored for this analysis. They are presumably not significant enough to change the result. The psychoacoustic model uses an empirically determined masking model to determine which sub-bands contribute more to the masking threshold , and how much quantization noise each can contain without being perceived. Any sounds below the absolute threshold of hearing (ATH) are completely discarded. The available bits are then assigned to each sub-band accordingly. [ 57 ] [ 61 ]
Typically, sub-bands are less important if they contain quieter sounds (smaller coefficient) than a neighboring (i.e. similar frequency) sub-band with louder sounds (larger coefficient). Also, "noise" components typically have a more significant masking effect than "tonal" components. [ 60 ]
Less significant sub-bands are reduced in accuracy by quantization. This basically involves compressing the frequency range (amplitude of the coefficient), i.e. raising the noise floor, and then computing an amplification factor for the decoder to use to re-expand each sub-band to the proper frequency range. [ 62 ] [ 63 ]
Layer II can also optionally use intensity stereo coding, a form of joint stereo. This means that the frequencies above 6 kHz of both channels are combined/down-mixed into one single (mono) channel, but the "side channel" information on the relative intensity (volume, amplitude) of each channel is preserved and encoded into the bitstream separately. On playback, the single channel is played through left and right speakers, with the intensity information applied to each channel to give the illusion of stereo sound. [ 47 ] [ 60 ] This perceptual trick is known as "stereo irrelevancy". This can allow further reduction of the audio bitrate without much perceivable loss of fidelity, but is generally not used with higher bitrates as it does not provide very high quality (transparent) audio. [ 47 ] [ 61 ] [ 64 ] [ 65 ]
Subjective audio testing by experts, in the most critical conditions ever implemented, has shown MP2 to offer transparent audio compression at 256 kbit/s for 16-bit 44.1 kHz CD audio using the earliest reference implementation (more recent encoders should presumably perform even better). [ 2 ] [ 60 ] [ 61 ] [ 66 ] That (approximately) 1:6 compression ratio for CD audio is particularly impressive because it is quite close to the estimated upper limit of perceptual entropy , at just over 1:8. [ 67 ] [ 68 ] Achieving much higher compression is simply not possible without discarding some perceptible information.
MP2 remains a favoured lossy audio coding standard due to its particularly high audio coding performances on important audio material such as castanet, symphonic orchestra, male and female voices and particularly complex and high energy transients (impulses) like percussive sounds: triangle, glockenspiel and audience applause. [ 26 ] More recent testing has shown that MPEG Multichannel (based on MP2), despite being compromised by an inferior matrixed mode (for the sake of backwards compatibility) [ 2 ] [ 61 ] rates just slightly lower than much more recent audio codecs, such as Dolby Digital (AC-3) and Advanced Audio Coding (AAC) (mostly within the margin of error—and substantially superior in some cases, such as audience applause). [ 69 ] [ 70 ] This is one reason that MP2 audio continues to be used extensively. The MPEG-2 AAC Stereo verification tests reached a vastly different conclusion, however, showing AAC to provide superior performance to MP2 at half the bitrate. [ 71 ] The reason for this disparity with both earlier and later tests is not clear, but strangely, a sample of applause is notably absent from the latter test.
Layer II audio files typically use the extension ".mp2" or sometimes ".m2a".
MPEG-1 Audio Layer III (the first version of MP3 ) is a lossy audio format designed to provide acceptable quality at about 64 kbit/s for monaural audio over single-channel ( BRI ) ISDN links, and 128 kbit/s for stereo sound.
MPEG-1 Audio Layer III was derived from the Adaptive Spectral Perceptual Entropy Coding (ASPEC) codec developed by Fraunhofer as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of digital audio broadcasting. ASPEC was adapted to fit in with the Layer II model (frame size, filter bank, FFT, etc.), to become Layer III. [ 18 ]
ASPEC was itself based on Multiple adaptive Spectral audio Coding (MSC) by E. F. Schroeder , Optimum Coding in the Frequency domain (OCF) the doctoral thesis by Karlheinz Brandenburg at the University of Erlangen-Nuremberg , Perceptual Transform Coding (PXFM) by J. D. Johnston at AT&T Bell Labs , and Transform coding of audio signals by Y. Mahieux and J. Petit at Institut für Rundfunktechnik (IRT/CNET). [ 72 ]
MP3 is a frequency-domain audio transform encoder . Even though it utilizes some of the lower layer functions, MP3 is quite different from MP2.
MP3 works on 1152 samples like MP2, but needs to take multiple frames for analysis before frequency-domain (MDCT) processing and quantization can be effective. It outputs a variable number of samples, using a bit buffer to enable this variable bitrate (VBR) encoding while maintaining 1152 sample size output frames. This causes a significantly longer delay before output, which has caused MP3 to be considered unsuitable for studio applications where editing or other processing needs to take place. [ 61 ]
MP3 does not benefit from the 32-sub-band polyphase filter bank, instead just using an 18-point MDCT transformation on each output to split the data into 576 frequency components, and processing it in the frequency domain. [ 60 ] This extra granularity allows MP3 to have a much finer psychoacoustic model and to apply appropriate quantization to each band more carefully, providing much better low-bitrate performance.
Frequency-domain processing imposes some limitations as well, causing a factor of 12× or 36× worse temporal resolution than Layer II. This causes quantization artifacts when transient sounds like percussive events and other high-frequency events spread over a larger window, resulting in audible smearing and pre-echo . [ 61 ] MP3 uses pre-echo detection routines and VBR encoding, which allows it to temporarily increase the bitrate during difficult passages, in an attempt to reduce this effect. It can also switch from the normal 36-sample quantization window to three short 12-sample windows, to reduce the temporal (time) length of quantization artifacts. [ 61 ] And yet, in choosing a fairly small window size to make MP3's temporal response adequate enough to avoid the most serious artifacts, MP3 becomes much less efficient at frequency-domain compression of stationary, tonal components.
Being forced to use a hybrid time domain (filter bank) /frequency domain (MDCT) model to fit in with Layer II simply wastes processing time and compromises quality by introducing aliasing artifacts. MP3 has an aliasing cancellation stage specifically to mask this problem, but which instead produces frequency domain energy which must be encoded in the audio. This is pushed to the top of the frequency range, where most people have limited hearing, in hopes the distortion it causes will be less audible.
Layer II's 1024 point FFT doesn't entirely cover all samples, and would omit several entire MP3 sub-bands, where quantization factors must be determined. MP3 instead uses two passes of FFT analysis for spectral estimation, to calculate the global and individual masking thresholds. This allows it to cover all 1152 samples. Of the two, it utilizes the global masking threshold level from the more critical pass, with the most difficult audio.
In addition to Layer II's intensity encoded joint stereo, MP3 can use middle/side (mid/side, m/s, MS, matrixed) joint stereo. With mid/side stereo, certain frequency ranges of both channels are merged into a single (middle, mid, L+R) mono channel, while the sound difference between the left and right channels is stored as a separate (side, L-R) channel. Unlike intensity stereo, this process does not discard any audio information. When combined with quantization, however, it can exaggerate artifacts.
If the difference between the left and right channels is small, the side channel will be small, which will offer as much as a 50% bitrate savings, and associated quality improvement. If the difference between left and right is large, standard (discrete, left/right) stereo encoding may be preferred, as mid/side joint stereo will not provide any benefits. An MP3 encoder can switch between m/s stereo and full stereo on a frame-by-frame basis. [ 60 ] [ 65 ] [ 73 ]
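A minimal Python sketch of the mid/side transform described above (sample values chosen arbitrarily); unlike intensity stereo it is exactly invertible, so any loss comes only from the quantization applied afterwards.

```python
def to_mid_side(left, right):
    """Convert a left/right sample pair to mid/side representation."""
    mid = (left + right) / 2.0     # L+R content
    side = (left - right) / 2.0    # L-R content; near zero when the channels are similar
    return mid, side

def from_mid_side(mid, side):
    """Exact reconstruction of the original left/right samples."""
    return mid + side, mid - side

l, r = 0.80, 0.78                  # arbitrary, strongly correlated samples
m, s = to_mid_side(l, r)
print(m, s)                        # the side channel is tiny, so it costs very few bits to code
print(from_mid_side(m, s))         # -> (0.8, 0.78), up to floating-point rounding
```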
Unlike Layers I and II, MP3 uses variable-length Huffman coding (after perceptual quantization) to further reduce the bitrate, without any further quality loss. [ 57 ] [ 61 ]
MP3's more fine-grained and selective quantization does prove notably superior to MP2 at lower-bitrates. It is able to provide nearly equivalent audio quality to Layer II, at a 15% lower bitrate (approximately). [ 70 ] [ 71 ] 128 kbit/s is considered the "sweet spot" for MP3; meaning it provides generally acceptable quality stereo sound on most music, and there are diminishing quality improvements from increasing the bitrate further. MP3 is also regarded as exhibiting artifacts that are less annoying than Layer II, when both are used at bitrates that are too low to possibly provide faithful reproduction.
Layer III audio files use the extension ".mp3".
The MPEG-2 standard includes several extensions to MPEG-1 Audio. [ 61 ] These are known as MPEG-2 BC – backwards compatible with MPEG-1 Audio. [ 74 ] [ 75 ] [ 76 ] [ 77 ] MPEG-2 Audio is defined in ISO/IEC 13818-3.
These sampling rates (16, 22.05 and 24 kHz) are exactly half of those originally defined for MPEG-1 Audio. They were introduced to maintain higher-quality sound when encoding audio at lower bitrates. [ 25 ] The even lower bitrates were introduced because tests showed that MPEG-1 Audio could provide higher quality than any existing ( c. 1994 ) very-low-bitrate (i.e. speech ) audio codecs. [ 78 ]
Part 4 of the MPEG-1 standard covers conformance testing, and is defined in ISO/IEC-11172-4.
Conformance: Procedures for testing conformance.
Provides two sets of guidelines and reference bitstreams for testing the conformance of MPEG-1 audio and video decoders, as well as the bitstreams produced by an encoder. [ 16 ] [ 23 ]
Part 5 of the MPEG-1 standard includes reference software, and is defined in ISO/IEC TR 11172–5.
Simulation: Reference software.
C reference code for encoding and decoding of audio and video, as well as multiplexing and demultiplexing. [ 16 ] [ 23 ]
This includes the ISO Dist10 audio encoder code, which LAME and TooLAME were originally based upon.
.mpg is one of a number of file extensions for MPEG-1 or MPEG-2 audio and video compression. MPEG-1 Part 2 video is rare nowadays, and this extension typically refers to an MPEG program stream (defined in MPEG-1 and MPEG-2) or MPEG transport stream (defined in MPEG-2). Other suffixes such as .m2ts also exist specifying the precise container, in this case MPEG-2 TS, but this has little relevance to MPEG-1 media.
.mp3 is the most common extension for files containing MP3 audio (typically MPEG-1 Audio, sometimes MPEG-2 Audio). An MP3 file is typically an uncontained stream of raw audio; the conventional way to tag MP3 files is by writing data to "garbage" segments of each frame, which preserve the media information but are discarded by the player. This is similar in many respects to how raw .AAC files are tagged (but this is less supported nowadays, e.g. iTunes ).
Although the extension could technically apply, .mpg is not normally used for raw AAC or for AAC in MPEG-2 Part 7 containers ; the .aac extension normally denotes these audio files. | https://en.wikipedia.org/wiki/MPEG-1 |
MPEG Common Encryption (abbreviated MPEG-CENC ) refers to a set of two MPEG standards governing different container formats : one for files based on the ISO Base Media File Format (ISOBMFF) and one for MPEG-2 Transport Streams (TS).
The specifications are compatible, so that conversion between the encrypted formats can happen without re-encryption.
They define metadata, specific to each format, about which parts of the stream are encrypted and by which encryption scheme. Each encryption scheme may have different methods to retrieve the decryption key.
The standards can be purchased from iso.org, on paper and in digital forms. As of July 2016, the prices were 118 Swiss franc (US$122) for the ISOBMFF version, and 58 Swiss franc (US$60) for the TS version. An included copyright notice prohibits redistribution without written permission, also on local area networks . Each page is watermarked with the purchaser's name and company.
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MPEG_Common_Encryption |
MPI-CDG is an autosomal recessive congenital disorder of glycosylation caused by biallelic pathogenic variants in MPI . The clinical symptoms in MPI-CDG are caused by deficient activity of the enzyme mannose phosphate isomerase . Clinically, the most common symptoms of MPI-CDG are chronic diarrhea , failure to thrive, protein-losing enteropathy, and coagulopathy. [ 1 ] MPI-CDG differs from most other described glycosylation disorders due to its lack of central nervous system involvement, and because it has treatment options besides supportive care. Treatment with oral mannose has been shown to improve most symptoms of the disease. [ 2 ] If left untreated, MPI-CDG can be fatal. [ 1 ] MPI-CDG was previously known as CDG-IB. The disorder was first described clinically in 1986, and the underlying genetic defect was identified in 1998. [ 1 ] [ 3 ] | https://en.wikipedia.org/wiki/MPI-CDG |
Multiple Precision Integers and Rationals ( MPIR ) is an open-source software multiprecision integer library forked from the GNU Multiple Precision Arithmetic Library (GMP) project. It consists of much code from past GMP releases, and some original contributed code.
According to the MPIR-devel mailing list, "MPIR is no longer maintained", [ 2 ] except for building the old code on Windows using new versions of Microsoft Visual Studio.
According to the MPIR developers, some of the main goals of the MPIR project were:
MPIR is optimized for many processors (CPUs). Assembly language code exists for these as of 2012: ARM, DEC Alpha 21064, 21164, and 21264, AMD K6, K6-2, Athlon, K8 and K10, Intel Pentium, Pentium Pro-II-III, Pentium 4, generic x86, Intel IA-64, Core 2, i7, Atom, Motorola-IBM PowerPC 32 and 64, MIPS R3000, R4000, SPARCv7, SuperSPARC, generic SPARCv8, UltraSPARC. | https://en.wikipedia.org/wiki/MPIR_(mathematics_software) |
The MPLAB series of devices are programmers and debuggers for Microchip PIC and dsPIC microcontrollers , developed by Microchip Technology .
The ICD family of debuggers has been produced since the release of the first Flash-based PIC microcontrollers, and the latest ICD 3 currently supports all current PIC and dsPIC devices. It is the most popular combination debugging/programming tool from Microchip.
The REAL ICE emulator is similar to the ICD, with the addition of better debugging features, and various add-on modules that expand its usage scope. The ICE is a family of discontinued in-circuit emulators for PIC and dsPIC devices, and is currently superseded by the REAL ICE.
The MPLAB ICD is the first in-circuit debugger product by Microchip, and is currently discontinued and superseded by ICD 2. [ 1 ] The ICD connected to the engineer's PC via RS-232 , and connected to the device via ICSP. [ 1 ]
The ICD supported devices within the PIC16C and PIC16F families, and supported full speed execution, or single step interactive debugging. [ 1 ] Only one hardware breakpoint was supported by the ICD. [ 1 ]
The MPLAB ICD 2 is a discontinued in-circuit debugger and programmer by Microchip, and is currently superseded by ICD 3. [ 2 ] The ICD 2 connects to the engineer's PC via USB or RS-232 , and connects to the device via ICSP. [ 3 ]
The ICD 2 supports most PIC and dsPIC devices within the PIC10, PIC12, PIC16, PIC18, dsPIC, rfPIC and PIC32 families, [ 4 ] and supports full speed execution, or single step interactive debugging. [ 3 ] At breakpoints, data and program memory can be read and modified using the MPLAB IDE. [ 2 ] The ICD 2 firmware is field upgradeable using the MPLAB IDE. [ 2 ]
The ICD 2 can be used to erase, program or reprogram PIC MCU program memory, while the device is installed on target hardware, using ICSP. [ 2 ] Target device voltages from 2.0V to 6.0V are supported. [ 2 ]
The MPLAB ICD 3 is an in-circuit debugger and programmer by Microchip, and is the latest in the ICD series. [ 5 ] The ICD 3 connects to the engineer's PC via USB, and connects to the device via ICSP. [ 5 ] The ICD 3 is entirely USB-bus-powered, and is 15x faster than the ICD 2 for programming devices. [ 5 ]
The ICD 3 supports all current PIC and dsPIC devices within the PIC10, PIC12, PIC16, PIC18, dsPIC, rfPIC and PIC32 families, and supports full speed execution, or single step interactive debugging. [ 5 ] At breakpoints, data and program memory can be read and modified using the MPLAB IDE. [ 5 ] The ICD 3 firmware is field upgradeable using the MPLAB IDE. [ 5 ]
The ICD 3 can be used to erase, program or reprogram PIC MCU program memory, while the device is installed on target hardware, using ICSP. [ 5 ] Target device voltages from 2.0V to 5.5V are supported. [ 5 ]
The ICD 3 has over-voltage protection in the probe drivers to guard against power surges from the target. [ 5 ] All lines have over-current protection. The ICD 3 can also provide power to a target, up to 100 mA . [ 5 ]
The MPLAB REAL ICE (In-Circuit Emulator) is a high-speed emulator for Microchip devices. It debugs and programs PIC and dsPIC microcontrollers in conjunction with the MPLAB IDE, while the target device is "in-circuit". [ 6 ] [ 7 ] The REAL ICE is significantly faster than the ICD 2 for programming and debugging. [ 8 ] [ 9 ]
The REAL ICE connects to the engineer's PC via a USB 2.0 interface, and connects to the target device via ICSP (PGC/PGD programming pins), typically using a RJ11 connector. LVDS is also available for high-speed data transfer between the device and the REAL ICE. MPLAB REAL ICE is field upgradeable through firmware downloads in MPLAB IDE.
The REAL ICE supports 8-bit devices (PIC10, PIC12, PIC16, PIC18), 16-bit devices (PIC24, dsPIC) and 32-bit devices (PIC32MX). [ 10 ]
The REAL ICE Performance Pak is an optional add-on to the REAL ICE, that consists of a High Speed Probe Driver and Receiver that employ two CAT5 cables. [ 11 ] Debug pins are driven using LVDS communications, and the additional trace connections allow high speed serial trace uploads to the PC. [ 11 ]
The REAL ICE Isolator is an optional add-on to the REAL ICE, that enables connectivity to AC and High-voltage applications not referenced to ground. [ 12 ] Control signals are magnetically or optically isolated providing up to 2.5 kV equivalent isolation protection. [ 12 ] The isolator acts as an isolated bridge, where signals are passed through with complete transparency to the MPLAB REAL ICE or MPLAB IDE. [ 12 ]
The MPLAB ICE2000 is a discontinued in-circuit emulator for PIC and dsPIC devices. [ 13 ] It has been superseded by the REAL ICE.
The ICE2000 connects to the engineer's PC via a parallel port interface, and a USB converter is available. The ICE2000 requires emulator modules, and the test hardware must provide a socket which can take either an emulator module, or a production device.
The MPLAB ICE4000 is a discontinued in-circuit emulator for PIC and dsPIC devices. [ 13 ] It has been superseded by the REAL ICE. [ 14 ] The ICE4000 is no longer directly advertised on Microchip's website, and Microchip states that it is not recommended for new designs. [ 14 ]
The ICE4000 connects to the engineer's PC via a USB 2.0 interface. PIC devices under debug with the ICE4000 ran at full speed, and the emulator supported unlimited breakpoints, and complex break/trigger logic. [ 14 ] The emulator supported multiple external inputs and external outputs to sync with other instruments. [ 14 ] | https://en.wikipedia.org/wiki/MPLAB_devices |
Massively Parallel Monte Carlo ( MPMC ) is a Monte Carlo method package primarily designed to simulate liquids, molecular interfaces, and functionalized nanoscale materials. It was developed originally by Jon Belof and is now maintained by a group of researchers in the Department of Chemistry [ 1 ] and SMMARTT Materials Research Center [ 2 ] at the University of South Florida . [ 3 ] MPMC has been applied to the scientific research challenges of nanomaterials for clean energy , carbon sequestration , and molecular detection. Developed to run efficiently on the most powerful supercomputing platforms, MPMC can scale to extremely large numbers of CPUs or GPUs (with support provided for NVidia 's CUDA architecture [ 4 ] ). Since 2012, MPMC has been released as an open-source software project under the GNU General Public License (GPL) version 3, and the repository is hosted on GitHub .
MPMC was originally written by Jon Belof (then at the University of South Florida) in 2007 for applications toward the development of nanomaterials for hydrogen storage. [ 5 ] Since then MPMC has been released as an open source project and been extended to include a number of simulation methods relevant to statistical physics. The code is now further maintained by a group of researchers (Christian Cioce, Keith McLaughlin, Brant Tudor, Adam Hogan and Brian Space) in the Department of Chemistry and SMMARTT Materials Research Center at the University of South Florida .
MPMC is optimized for the study of nanoscale interfaces. MPMC supports simulation of Coulomb and Lennard-Jones systems, many-body polarization, [ 6 ] coupled-dipole van der Waals, [ 7 ] quantum rotational statistics, [ 8 ] semi-classical quantum effects, advanced importance sampling methods relevant to fluids, and numerous tools for the development of intermolecular potentials. [ 9 ] [ 10 ] [ 11 ] [ 12 ] The code is designed to efficiently run on high-performance computing resources, including the network of some of the most powerful supercomputers in the world made available through the National Science Foundation supported project Extreme Science and Engineering Discovery Environment (XSEDE). [ 13 ] [ 14 ]
MPMC has been applied to the scientific challenges of discovering nanomaterials for clean energy applications, [ 15 ] capturing and sequestering carbon dioxide, [ 16 ] designing tailored organometallic materials for chemical weapons detection, [ 17 ] and quantum effects in cryogenic hydrogen for spacecraft propulsion. [ 18 ] Also simulated and published have been the solid, liquid, supercritical, and gaseous states of matter of nitrogen (N 2 ) [ 11 ] and carbon dioxide (CO 2 ). [ 12 ] | https://en.wikipedia.org/wiki/MPMC |
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ]
Many devices indicate position by closing and opening switches. If such a device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of their binary representations differ (011 versus 100).
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; the i -th least significant bit a pattern of 2 i on 2 i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2 n −1 on, 2 n −1 off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2 n −2 places. The four-bit version of this is shown below:
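The sequence can be generated with a short Python snippet using the bit-shift and exclusive-or conversion discussed later in the article; successive values, including the wrap-around from 15 back to 0, differ in exactly one bit.

```python
def gray(n):
    """Binary-reflected Gray code of the integer n."""
    return n ^ (n >> 1)

for n in range(16):
    print(f"{n:2d}  {gray(n):04b}")
# 0 0000, 1 0001, 2 0011, 3 0010, 4 0110, ..., 15 1000 — one bit changes per step.
```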
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ]
The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. [ 26 ] [ 13 ]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ]
The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ]
About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ]
Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization .
In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
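A simplified, illustrative Python sketch of such a mapping for 16-QAM (not the labeling of any particular transmission standard): two Gray-coded bits select the level on each axis, so horizontally or vertically adjacent constellation points differ in exactly one bit of the 4-bit symbol.

```python
# Two bits per axis in Gray order 00, 01, 11, 10, mapped to amplitude levels -3, -1, +1, +3.
GRAY_TO_LEVEL = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}

def map_16qam(symbol):
    """Map a 4-bit symbol (I bits first, then Q bits) to an illustrative constellation point."""
    i_bits, q_bits = (symbol >> 2) & 0b11, symbol & 0b11
    return GRAY_TO_LEVEL[i_bits], GRAY_TO_LEVEL[q_bits]

# A noise-induced jump to a neighbouring level corrupts only one of the four bits.
print(map_16qam(0b0111))   # -> (-1, 1)
```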
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code, which flips every bit equally often, can be constructed. [ 52 ] Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941. [ 11 ] [ 12 ] [ 13 ]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. [ 54 ]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ]
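A Python sketch of the convert–increment–convert approach just described (in hardware this would be combinational logic rather than software, and the 4-bit width is arbitrary).

```python
def binary_to_gray(b):
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Decode a Gray value by XOR-ing it with all of its right shifts."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

def gray_increment(g, width=4):
    """Next value of a width-bit Gray-code counter: decode, add one, re-encode."""
    return binary_to_gray((gray_to_binary(g) + 1) % (1 << width))

g = 0
for _ in range(6):
    print(f"{g:04b}")      # 0000, 0001, 0011, 0010, 0110, 0111 — one bit changes per step
    g = gray_increment(g)
```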
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ]
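As a rough, hedged illustration of this effect (not a model of any particular CPU or bus), the following C sketch counts address-bus bit transitions for a run of consecutive addresses under plain binary addressing and under Gray-coded addressing; the Gray-coded bus toggles exactly one bit per step.

#include <stdint.h>
#include <stdio.h>

/* Binary-reflected Gray code of x. */
static uint32_t to_gray(uint32_t x) { return x ^ (x >> 1); }

/* Number of set bits in x. */
static unsigned popcount32(uint32_t x)
{
    unsigned count = 0;
    while (x) { x &= x - 1; count++; }
    return count;
}

int main(void)
{
    const uint32_t addresses = 1u << 16;
    unsigned long binary_toggles = 0, gray_toggles = 0;
    for (uint32_t a = 1; a < addresses; a++) {
        binary_toggles += popcount32(a ^ (a - 1));
        gray_toggles   += popcount32(to_gray(a) ^ to_gray(a - 1));  /* always 1 */
    }
    printf("binary addressing: %lu bit toggles\n", binary_toggles);
    printf("Gray addressing:   %lu bit toggles\n", gray_toggles);
    return 0;
}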
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list:
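Start from the 2-bit list ( 00 , 01 , 11 , 10 ). Reflecting it gives ( 10 , 11 , 01 , 00 ). Prefixing the original entries with 0 gives ( 000 , 001 , 011 , 010 ), prefixing the reflected entries with 1 gives ( 110 , 111 , 101 , 100 ), and concatenating the two lists gives the 3-bit code ( 000 , 001 , 011 , 010 , 110 , 111 , 101 , 100 ).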
The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n +1 from G n makes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing n ⊕ ⌊ n 2 ⌋ {\displaystyle n\oplus \left\lfloor {\tfrac {n}{2}}\right\rfloor } . Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i {\displaystyle i} of codewords are inverted, the order of neighbouring blocks of 2 i {\displaystyle 2^{i}} codewords is reversed. For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed
If bit 1 is inverted, blocks of 2 codewords change order:
If bit 2 is inverted, blocks of 4 codewords reverse order:
Thus, performing an exclusive or on a bit b i {\displaystyle b_{i}} at position i {\displaystyle i} with the bit b i + 1 {\displaystyle b_{i+1}} at position i + 1 {\displaystyle i+1} leaves the order of codewords intact if b i + 1 = 0 {\displaystyle b_{i+1}={\mathtt {0}}} , and reverses the order of blocks of 2 i + 1 {\displaystyle 2^{i+1}} codewords if b i + 1 = 1 {\displaystyle b_{i+1}={\mathtt {1}}} . Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming g i {\displaystyle g_{i}} is the i {\displaystyle i} th Gray-coded bit ( g 0 {\displaystyle g_{0}} being the most significant bit), and b i {\displaystyle b_{i}} is the i {\displaystyle i} th binary-coded bit ( b 0 {\displaystyle b_{0}} being the most-significant bit), the reverse translation can be given recursively: b 0 = g 0 {\displaystyle b_{0}=g_{0}} , and b i = g i ⊕ b i − 1 {\displaystyle b_{i}=g_{i}\oplus b_{i-1}} . Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with the c o d e 0 = 0 {\displaystyle \mathrm {code} _{0}={\mathtt {0}}} , and at step i > 0 {\displaystyle i>0} find the bit position of the least significant 1 in the binary representation of i {\displaystyle i} and flip the bit at that position in the previous code c o d e i − 1 {\displaystyle \mathrm {code} _{i-1}} to get the next code c o d e i {\displaystyle \mathrm {code} _{i}} . The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values.
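A minimal C sketch of this iterative rule (the bit width n and the variable names are illustrative): starting from code 0, each step flips the bit whose position is that of the least significant 1 in the step index, which the expression i & -i isolates directly; the program prints the codewords as decimal values (0, 1, 3, 2, 6, 7, 5, 4, …).

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const unsigned n = 4;     /* number of bits, chosen here for illustration */
    uint32_t code = 0;        /* code_0 = 0 */
    printf("%u\n", code);
    for (uint32_t i = 1; i < (1u << n); i++) {
        code ^= i & -i;       /* flip the bit at the position of the least significant 1 in i */
        printf("%u\n", code);
    }
    return 0;
}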
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ]
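The C functions referred to above are not reproduced in this copy of the text; the following is a minimal sketch of the standard conversions (the function names are illustrative). The last function also demonstrates the increment-by-conversion method for Gray-code counters described earlier, and the log-step variant is one example of the faster decoding algorithms mentioned above.

#include <stdint.h>
#include <stdio.h>

/* Convert an unsigned binary number to its reflected binary Gray code. */
uint32_t binary_to_gray(uint32_t num)
{
    return num ^ (num >> 1);
}

/* Convert a reflected binary Gray code to an unsigned binary number.
   Each binary bit is the XOR (a prefix sum modulo two) of all Gray bits
   at the same or higher positions. */
uint32_t gray_to_binary(uint32_t gray)
{
    uint32_t num = gray;
    for (uint32_t mask = gray >> 1; mask != 0; mask >>= 1) {
        num ^= mask;
    }
    return num;
}

/* Branch-free variant for 32-bit values: fold the prefix XOR in five steps. */
uint32_t gray_to_binary32(uint32_t gray)
{
    gray ^= gray >> 16;
    gray ^= gray >> 8;
    gray ^= gray >> 4;
    gray ^= gray >> 2;
    gray ^= gray >> 1;
    return gray;
}

/* Increment a Gray-coded value by converting to binary, adding one,
   and converting back -- one way to build a Gray-code counter. */
uint32_t gray_increment(uint32_t gray)
{
    return binary_to_gray(gray_to_binary(gray) + 1);
}

int main(void)
{
    for (uint32_t i = 0; i < 8; i++) {
        uint32_t g = binary_to_gray(i);
        printf("%u -> %u -> %u\n", i, g, gray_to_binary32(g));
    }
    return 0;
}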
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ending with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2 n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits.
0 → 000    1 → 001    2 → 002
10 → 012   11 → 011   12 → 010
20 → 020   21 → 021   22 → 022
100 → 122  101 → 121  102 → 120
110 → 110  111 → 111  112 → 112
120 → 102  121 → 101  122 → 100
200 → 200  201 → 201  202 → 202
210 → 212  211 → 211  212 → 210
220 → 220  221 → 221
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings.
For example, a 3-ary ( ternary ) Gray code would use the values 0,1,2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( N , k )-Gray code is presented (in C ):
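The C routine referenced above is not reproduced in this copy of the text. The following hedged sketch implements one common iterative construction, the modular ("wrapping") counting method discussed in the next paragraph, and is not necessarily identical to the omitted code or to Guan's algorithm; the base, digit count and names are illustrative. Digits are stored least significant first.

#include <stdio.h>

#define BASE   3
#define DIGITS 2

/* Write the (BASE, DIGITS) modular Gray code of value into gray[],
   least significant digit first. Successive values differ in exactly one
   digit, possibly by wrapping from BASE-1 back to 0. */
static void to_n_ary_gray(unsigned value, unsigned gray[DIGITS])
{
    unsigned base_n[DIGITS];

    /* Ordinary base-N digits of the value, least significant first. */
    for (unsigned i = 0; i < DIGITS; i++) {
        base_n[i] = value % BASE;
        value    /= BASE;
    }

    /* Each Gray digit is shifted down by the sum of the higher Gray digits. */
    unsigned shift = 0;
    for (unsigned i = DIGITS; i-- > 0; ) {
        gray[i] = (base_n[i] + shift) % BASE;
        shift  += BASE - gray[i];   /* keep the running shift non-negative */
    }
}

int main(void)
{
    unsigned total = 1;
    for (unsigned i = 0; i < DIGITS; i++) {
        total *= BASE;              /* BASE^DIGITS codewords in the cycle */
    }

    unsigned gray[DIGITS];
    for (unsigned v = 0; v < total; v++) {
        to_n_ary_gray(v, gray);
        for (int d = DIGITS - 1; d >= 0; d--) {
            printf("%u", gray[d]);  /* print most significant digit first */
        }
        printf("\n");
    }
    return 0;
}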
There are other Gray code algorithms for ( n , k )-Gray codes. The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence ( δ k ) {\displaystyle (\delta _{k})} ; the transition counts ( spectrum ) of G are the collection of integers defined by
λ k = | { j ∈ Z R n : δ j = k } | , for k ∈ Z n {\displaystyle \lambda _{k}=|\{j\in \mathbb {Z} _{R^{n}}:\delta _{j}=k\}|\,,{\text{ for }}k\in \mathbb {Z} _{n}}
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ k = R n n {\displaystyle \lambda _{k}={\tfrac {R^{n}}{n}}} for all k . Clearly, when R = 2 {\displaystyle R=2} , such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2 ⌊ 2 n 2 n ⌋ {\displaystyle 2\left\lfloor {\tfrac {2^{n}}{2n}}\right\rfloor } or 2 ⌈ 2 n 2 n ⌉ {\displaystyle 2\left\lceil {\tfrac {2^{n}}{2n}}\right\rceil } . [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ]
We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code G ′ {\displaystyle G'} given an n -digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of G = g 0 , … , g 2 n − 1 {\displaystyle G=g_{0},\ldots ,g_{2^{n}-1}} into an even number L of non-empty blocks of the form
{ g 0 } , { g 1 , … , g k 2 } , { g k 2 + 1 , … , g k 3 } , … , { g k L − 2 + 1 , … , g − 2 } , { g − 1 } {\displaystyle \left\{g_{0}\right\},\left\{g_{1},\ldots ,g_{k_{2}}\right\},\left\{g_{k_{2}+1},\ldots ,g_{k_{3}}\right\},\ldots ,\left\{g_{k_{L-2}+1},\ldots ,g_{-2}\right\},\left\{g_{-1}\right\}}
where k 1 = 0 {\displaystyle k_{1}=0} , k L − 1 = − 2 {\displaystyle k_{L-1}=-2} , and k L ≡ − 1 ( mod 2 n ) {\displaystyle k_{L}\equiv -1{\pmod {2^{n}}}} . This partition induces an ( n + 2 ) {\displaystyle (n+2)} -digit Gray code given by
If we define the transition multiplicities
m i = | { j : δ k j = i , 1 ≤ j ≤ L } | {\displaystyle m_{i}=\left|\left\{j:\delta _{k_{j}}=i,1\leq j\leq L\right\}\right|}
to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum λ i ′ {\displaystyle \lambda '_{i}} is
λ i ′ = { 4 λ i − 2 m i , if 0 ≤ i < n L , otherwise {\displaystyle \lambda '_{i}={\begin{cases}4\lambda _{i}-2m_{i},&{\text{if }}0\leq i<n\\L,&{\text{ otherwise }}\end{cases}}}
The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit i {\displaystyle i} transition and splitting another block at another digit i {\displaystyle i} transition produces a different Gray code with exactly the same transition spectrum λ i ′ {\displaystyle \lambda '_{i}} , so one may for example [ 65 ] designate the first m i {\displaystyle m_{i}} transitions at digit i {\displaystyle i} as those that fall between two blocks. Uniform codes can be found when R ≡ 0 ( mod 4 ) {\displaystyle R\equiv 0{\pmod {4}}} and R n ≡ 0 ( mod n ) {\displaystyle R^{n}\equiv 0{\pmod {n}}} , and this construction can be extended to the R -ary case as well. [ 66 ]
Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q n = ( V n , E n ) {\displaystyle Q_{n}=(V_{n},E_{n})} into levels of vertices that have equal weight, i.e.
V n ( i ) = { v ∈ V n : v has weight i } {\displaystyle V_{n}(i)=\{v\in V_{n}:v{\text{ has weight }}i\}}
for 0 ≤ i ≤ n {\displaystyle 0\leq i\leq n} . These levels satisfy | V n ( i ) | = ( n i ) {\displaystyle |V_{n}(i)|=\textstyle {\binom {n}{i}}} . Let Q n ( i ) {\displaystyle Q_{n}(i)} be the subgraph of Q n {\displaystyle Q_{n}} induced by V n ( i ) ∪ V n ( i + 1 ) {\displaystyle V_{n}(i)\cup V_{n}(i+1)} , and let E n ( i ) {\displaystyle E_{n}(i)} be the edges in Q n ( i ) {\displaystyle Q_{n}(i)} . A monotonic Gray code is then a Hamiltonian path in Q n {\displaystyle Q_{n}} such that whenever δ 1 ∈ E n ( i ) {\displaystyle \delta _{1}\in E_{n}(i)} comes before δ 2 ∈ E n ( j ) {\displaystyle \delta _{2}\in E_{n}(j)} in the path, then i ≤ j {\displaystyle i\leq j} .
An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths P n , j {\displaystyle P_{n,j}} of length 2 ( n j ) {\displaystyle 2\textstyle {\binom {n}{j}}} having edges in E n ( j ) {\displaystyle E_{n}(j)} . [ 69 ] We define P 1 , 0 = ( 0 , 1 ) {\displaystyle P_{1,0}=({\mathtt {0}},{\mathtt {1}})} , P n , j = ∅ {\displaystyle P_{n,j}=\emptyset } whenever j < 0 {\displaystyle j<0} or j ≥ n {\displaystyle j\geq n} , and
P n + 1 , j = 1 P n , j − 1 π n , 0 P n , j {\displaystyle P_{n+1,j}={\mathtt {1}}P_{n,j-1}^{\pi _{n}},{\mathtt {0}}P_{n,j}}
otherwise. Here, π n {\displaystyle \pi _{n}} is a suitably defined permutation and P π {\displaystyle P^{\pi }} refers to the path P with its coordinates permuted by π {\displaystyle \pi } . These paths give rise to two monotonic n -digit Gray codes G n ( 1 ) {\displaystyle G_{n}^{(1)}} and G n ( 2 ) {\displaystyle G_{n}^{(2)}} given by
G n ( 1 ) = P n , 0 P n , 1 R P n , 2 P n , 3 R ⋯ and G n ( 2 ) = P n , 0 R P n , 1 P n , 2 R P n , 3 ⋯ {\displaystyle G_{n}^{(1)}=P_{n,0}P_{n,1}^{R}P_{n,2}P_{n,3}^{R}\cdots {\text{ and }}G_{n}^{(2)}=P_{n,0}^{R}P_{n,1}P_{n,2}^{R}P_{n,3}\cdots }
The choice of π n {\displaystyle \pi _{n}} which ensures that these codes are indeed Gray codes turns out to be π n = E − 1 ( π n − 1 2 ) {\displaystyle \pi _{n}=E^{-1}\left(\pi _{n-1}^{2}\right)} . The first few values of P n , j {\displaystyle P_{n,j}} are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines .
Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q 2 n + 1 ( n ) {\displaystyle Q_{2n+1}(n)} is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n ≤ 15 {\displaystyle n\leq 15} , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 N , where N is the number of vertices in the middle-level subgraph. [ 70 ]
Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ]
Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ]
The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions. [ 80 ] The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ]
Since this 30 degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single track Gray code that gives a 1 degree resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022.
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) adjacent points in the constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ]
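A hedged sketch of how such a labelling can be produced (this is the generic product construction, not necessarily the exact mapping used by any particular modem standard): Gray-code the row and column indices of a 4×4 constellation separately, so that horizontally and vertically adjacent labels differ in one bit and diagonally adjacent labels differ in two.

#include <stdint.h>
#include <stdio.h>

/* 2-bit binary-reflected Gray code of x (x in 0..3). */
static uint32_t to_gray(uint32_t x) { return x ^ (x >> 1); }

int main(void)
{
    /* 4x4 grid of 4-bit labels: the high two bits Gray-code the row,
       the low two bits Gray-code the column. */
    for (uint32_t row = 0; row < 4; row++) {
        for (uint32_t col = 0; col < 4; col++) {
            uint32_t label = (to_gray(row) << 2) | to_gray(col);
            printf("%2u ", label);
        }
        printf("\n");
    }
    return 0;
}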
Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface, and an appropriate cyclic two-dimensional distance function such as the Mannheim metric is used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is increased further. The reason is that Gray-encoded values do not exhibit the overflow behaviour, known from classic binary encoding, when increasing past the "highest" value.
Example: The highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in the number 8, encoded in Gray as 1100. The last 3 bits do not overflow but count backwards if the original 4-bit code is increased further.
When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} with the metric given by the Hamming distance and the metric space over the finite ring Z 4 {\displaystyle \mathbb {Z} _{4}} (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces Z 2 2 m {\displaystyle \mathbb {Z} _{2}^{2m}} and Z 4 m {\displaystyle \mathbb {Z} _{4}^{m}} . Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} of ring-linear codes from Z 4 {\displaystyle \mathbb {Z} _{4}} . [ 87 ] [ 88 ]
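A small self-check of this isometry in C (a sketch; the lookup table simply encodes the stated mapping 0 ↔ 00, 1 ↔ 01, 2 ↔ 11, 3 ↔ 10): for every pair of elements of Z4, the Lee distance equals the Hamming distance of their images.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* Gray map 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10, stored as 2-bit integers. */
    const unsigned gray_map[4] = {0u, 1u, 3u, 2u};

    for (unsigned a = 0; a < 4; a++) {
        for (unsigned b = 0; b < 4; b++) {
            unsigned d   = (a > b) ? a - b : b - a;
            unsigned lee = (d < 4 - d) ? d : 4 - d;            /* Lee distance on Z4 */

            unsigned x       = gray_map[a] ^ gray_map[b];
            unsigned hamming = (x & 1u) + ((x >> 1) & 1u);     /* Hamming distance on Z2^2 */

            assert(lee == hamming);
        }
    }
    printf("the Gray map is an isometry from (Z4, Lee) to (Z2^2, Hamming)\n");
    return 0;
}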
There are a number of binary codes similar to Gray codes, including:
The following binary-coded decimal (BCD) codes are Gray code variants as well: | https://en.wikipedia.org/wiki/MRB_(code) |
The MRB constant is a mathematical constant , with decimal expansion 0.187859… (sequence A037077 in the OEIS ). The constant is named after its discoverer, Marvin Ray Burns, who published his discovery of the constant in 1999. [ 1 ] Burns had initially called the constant "rc" for root constant [ 2 ] but, at Simon Plouffe's suggestion, the constant was renamed the 'Marvin Ray Burns's Constant', or "MRB constant". [ 3 ]
The MRB constant is defined as the upper limit of the partial sums [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
As n {\displaystyle n} grows to infinity, the sums have lower and upper limit points of −0.812140… and 0.187859…, respectively, separated by an interval of length 1. The constant can also be explicitly defined by the following infinite sums: [ 4 ]
The constant relates to the divergent series :
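The sums referred to above are not reproduced in this copy of the text; for reference, the defining alternating sum is commonly written as below (a sketch of the standard formulation, which should match the omitted displays):

MRB = Σ k=1..∞ (−1)^k ( k^(1/k) − 1 ) ≈ 0.187859… {\displaystyle \operatorname {MRB} =\sum _{k=1}^{\infty }(-1)^{k}\left(k^{1/k}-1\right)\approx 0.187859\ldots }

and the related divergent series, whose partial sums oscillate between the two limit points, is Σ k=1..∞ (−1)^k k^(1/k) {\displaystyle \sum _{k=1}^{\infty }(-1)^{k}k^{1/k}} .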
There is no known closed-form expression of the MRB constant, [ 9 ] nor is it known whether the MRB constant is algebraic , transcendental or even irrational . | https://en.wikipedia.org/wiki/MRB_constant |
MRC is a file format that has become an industry standard in cryo-electron microscopy (cryoEM) and electron tomography (ET), where the result of the technique is a three-dimensional grid of voxels each with a value corresponding to electron density or electric potential . It was developed by the MRC ( Medical Research Council, UK ) Laboratory of Molecular Biology . [ 1 ] In 2014, the format was standardised. [ 2 ] The format specification is available on the CCP-EM website .
The MRC format is supported by many of the software packages listed in b:Software Tools For Molecular Microscopy .
This computational chemistry -related article is a stub . You can help Wikipedia by expanding it .
This biochemistry article is a stub . You can help Wikipedia by expanding it .
This biophysics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MRC_(file_format) |
An MRI pulse sequence in magnetic resonance imaging (MRI) is a particular setting of pulse sequences and pulsed field gradients , resulting in a particular image appearance. [ 1 ]
A multiparametric MRI is a combination of two or more sequences, and/or including other specialized MRI configurations such as spectroscopy . [ 2 ] [ 3 ]
This table does not include uncommon and experimental sequences.
Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 ( spin-lattice ; that is, magnetization in the same direction as the static magnetic field) and T2 ( spin-spin ; transverse to the static magnetic field). To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging. To create a T2-weighted image, magnetization is allowed to decay before measuring the MR signal by changing the echo time (TE). This image weighting is useful for detecting edema and inflammation, revealing white matter lesions , and assessing zonal anatomy in the prostate and uterus .
The standard display of MRI images is to represent fluid characteristics in black and white images, in which different tissues appear with characteristic signal intensities.
Proton density (PD)- weighted images are created by having a long repetition time (TR) and a short echo time (TE). [ 36 ] On images of the brain, this sequence has a more pronounced distinction between grey matter (bright) and white matter (darker grey), but with little contrast between brain and CSF. [ 36 ] It is very useful for the detection of arthropathy and injury. [ 37 ]
A gradient echo sequence does not use a 180-degree RF pulse to make the spins of particles coherent. Instead, it uses magnetic gradients to manipulate the spins, allowing them to dephase and rephase when required. After an excitation pulse, the spins are dephased and no signal is produced because the spins are not coherent. When the spins are rephased, they become coherent and a signal (or "echo") is generated to form images. Unlike spin echo, gradient echo does not need to wait for the transverse magnetisation to decay completely before initiating another sequence; it can therefore use very short repetition times (TR) and acquire images in a short time. After the echo is formed, some transverse magnetisation remains. Manipulating gradients during this time will produce images with different contrast. There are three main methods of manipulating contrast at this stage: steady-state free precession (SSFP), which does not spoil the remaining transverse magnetisation but attempts to recover it (thus producing T2-weighted images); the sequence with a spoiler gradient, which averages the transverse magnetisations (thus producing mixed T1- and T2-weighted images); and RF spoiling, which varies the phase of the RF pulse to eliminate the transverse magnetisation, thus producing pure T1-weighted images. [ 39 ]
For comparison purposes, the repetition time of a gradient echo sequence is of the order of 3 milliseconds, versus about 30 ms of a spin echo sequence. [ citation needed ]
Inversion recovery is an MRI sequence that provides high contrast between tissue and lesion. It can be used to provide high T1 weighted image, high T2 weighted image, and to suppress the signals from fat, blood, or cerebrospinal fluid (CSF). [ 40 ]
Diffusion MRI measures the diffusion of water molecules in biological tissues. [ 41 ] Clinically, diffusion MRI is useful for the diagnosis of conditions (e.g., stroke ) or neurological disorders (e.g., multiple sclerosis ), and helps better understand the connectivity of white matter axons in the central nervous system. [ 42 ] In an isotropic medium (inside a glass of water, for example), water molecules naturally move randomly due to turbulence and Brownian motion . In biological tissues, however, where the Reynolds number is low enough for laminar flow , the diffusion may be anisotropic . For example, a molecule inside the axon of a neuron has a low probability of crossing the myelin membrane. Therefore, the molecule moves principally along the axis of the neural fiber. If it is known that molecules in a particular voxel diffuse principally in one direction, the assumption can be made that the majority of the fibers in this area are parallel to that direction.
The recent development of diffusion tensor imaging (DTI) [ 43 ] enables diffusion to be measured in multiple directions, and the fractional anisotropy in each direction to be calculated for each voxel. This enables researchers to make brain maps of fiber directions to examine the connectivity of different regions in the brain (using tractography ) or to examine areas of neural degeneration and demyelination in diseases like multiple sclerosis.
Another application of diffusion MRI is diffusion-weighted imaging (DWI). Following an ischemic stroke , DWI is highly sensitive to the changes occurring in the lesion. [ 44 ] It is speculated that increases in restriction (barriers) to water diffusion, as a result of cytotoxic edema (cellular swelling), are responsible for the increase in signal on a DWI scan. The DWI enhancement appears within 5–10 minutes of the onset of stroke symptoms (as compared to computed tomography , which often does not detect changes of acute infarct for up to 4–6 hours) and remains for up to two weeks. Coupled with imaging of cerebral perfusion, researchers can highlight regions of "perfusion/diffusion mismatch" that may indicate regions capable of salvage by reperfusion therapy.
Like many other specialized applications, this technique is usually coupled with a fast image acquisition sequence, such as echo planar imaging sequence.
Perfusion-weighted imaging (PWI) is performed by 3 main techniques:
The acquired data is then postprocessed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak).
In cerebral infarction , the penumbra has decreased perfusion. [ 24 ] Another MRI sequence, diffusion-weighted MRI , estimates the amount of tissue that is already necrotic, and the combination of those sequences can therefore be used to estimate the amount of brain tissue that is salvageable by thrombolysis and/or thrombectomy .
Functional MRI (fMRI) measures signal changes in the brain that are due to changing neural activity. It is used to understand how different parts of the brain respond to external stimuli or passive activity in a resting state, and has applications in behavioral and cognitive research , and in planning neurosurgery of eloquent brain areas . [ 48 ] [ 49 ] Researchers use statistical methods to construct a 3-D parametric map of the brain indicating the regions of the cortex that demonstrate a significant change in activity in response to the task. Compared to anatomical T1W imaging, the brain is scanned at lower spatial resolution but at a higher temporal resolution (typically once every 2–3 seconds). Increases in neural activity cause changes in the MR signal via T * 2 changes; [ 50 ] this mechanism is referred to as the BOLD ( blood-oxygen-level dependent ) effect. Increased neural activity causes an increased demand for oxygen, and the vascular system actually overcompensates for this, increasing the amount of oxygenated hemoglobin relative to deoxygenated hemoglobin. Because deoxygenated hemoglobin attenuates the MR signal, the vascular response leads to a signal increase that is related to the neural activity. The precise nature of the relationship between neural activity and the BOLD signal is a subject of current research. The BOLD effect also allows for the generation of high resolution 3D maps of the venous vasculature within neural tissue.
While BOLD signal analysis is the most common method employed for neuroscience studies in human subjects, the flexible nature of MR imaging provides means to sensitize the signal to other aspects of the blood supply. Alternative techniques employ arterial spin labeling (ASL) or weighting the MRI signal by cerebral blood flow (CBF) and cerebral blood volume (CBV). The CBV method requires injection of a class of MRI contrast agents that are now in human clinical trials. Because this method has been shown to be far more sensitive than the BOLD technique in preclinical studies, it may potentially expand the role of fMRI in clinical applications. The CBF method provides more quantitative information than the BOLD signal, albeit at a significant loss of detection sensitivity. [ citation needed ]
Magnetic resonance angiography ( MRA ) is a group of techniques used to image blood vessels. Magnetic resonance angiography is used to generate images of arteries (and less commonly veins) in order to evaluate them for stenosis (abnormal narrowing), occlusions , aneurysms (vessel wall dilatations, at risk of rupture) or other abnormalities. MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (the latter exam is often referred to as a "run-off").
Phase contrast MRI (PC-MRI) is used to measure flow velocities in the body. It is used mainly to measure blood flow in the heart and throughout the body. PC-MRI may be considered a method of magnetic resonance velocimetry . Since modern PC-MRI typically is time-resolved, it also may be referred to as 4-D imaging (three spatial dimensions plus time). [ 51 ]
Susceptibility-weighted imaging (SWI) is a new type of contrast in MRI different from spin density, T 1 , or T 2 imaging. This method exploits the susceptibility differences between tissues and uses a fully velocity-compensated, three-dimensional, RF-spoiled, high-resolution, 3D-gradient echo scan. This special data acquisition and image processing produces an enhanced contrast magnitude image very sensitive to venous blood, hemorrhage and iron storage. It is used to enhance the detection and diagnosis of tumors, vascular and neurovascular diseases (stroke and hemorrhage), multiple sclerosis, [ 52 ] Alzheimer's, and also detects traumatic brain injuries that may not be diagnosed using other methods. [ 53 ]
Magnetization transfer (MT) is a technique to enhance image contrast in certain applications of MRI.
Bound protons are associated with proteins and as they have a very short T2 decay they do not normally contribute to image contrast. However, because these protons have a broad resonance peak they can be excited by a radiofrequency pulse that has no effect on free protons. Their excitation increases image contrast by transfer of saturated spins from the bound pool into the free pool, thereby reducing the signal of free water. This homonuclear magnetization transfer provides an indirect measurement of macromolecular content in tissue. Implementation of homonuclear magnetization transfer involves choosing suitable frequency offsets and pulse shapes to saturate the bound spins sufficiently strongly, within the safety limits of specific absorption rate for MRI. [ 54 ]
The most common use of this technique is for suppression of background signal in time of flight MR angiography. [ 55 ] There are also applications in neuroimaging particularly in the characterization of white matter lesions in multiple sclerosis . [ 56 ]
Fat suppression is useful for example to distinguish active inflammation in the intestines from fat deposition such as can be caused by long-standing (but possibly inactive) inflammatory bowel disease , but also obesity , chemotherapy and celiac disease . [ 57 ] Without fat suppression techniques, fat and fluid will have similar signal intensities on fast spin-echo sequences. [ 58 ]
Techniques to suppress fat on MRI mainly include: [ 59 ]
This method exploits the paramagnetic properties of neuromelanin and can be used to visualize the substantia nigra and the locus coeruleus . It is used to detect the atrophy of these nuclei in Parkinson's disease and other parkinsonisms , and also detects signal intensity changes in major depressive disorder and schizophrenia . [ 60 ]
The following sequences are not commonly used clinically, and/or are at an experimental stage.
T1 rho (T1ρ) is an experimental MRI sequence that may be used in musculoskeletal imaging. It does not yet have widespread use. [ 61 ]
Molecules have a kinetic energy that is a function of the temperature and is expressed as translational and rotational motions, and by collisions between molecules. The moving dipoles disturb the magnetic field but are often extremely rapid so that the average effect over a long time-scale may be zero. However, depending on the time-scale, the interactions between the dipoles do not always average away. At the slowest extreme the interaction time is effectively infinite and occurs where there are large, stationary field disturbances (e.g., a metallic implant). In this case the loss of coherence is described as a "static dephasing". T2* is a measure of the loss of coherence in an ensemble of spins that includes all interactions (including static dephasing). T2 is a measure of the loss of coherence that excludes static dephasing, using an RF pulse to reverse the slowest types of dipolar interaction. There is in fact a continuum of interaction time-scales in a given biological sample, and the properties of the refocusing RF pulse can be tuned to refocus more than just static dephasing. In general, the rate of decay of an ensemble of spins is a function of the interaction times and also the power of the RF pulse. This type of decay, occurring under the influence of RF, is known as T1ρ. It is similar to T2 decay but with some slower dipolar interactions refocused, as well as static interactions, hence T1ρ≥T2. [ 62 ] | https://en.wikipedia.org/wiki/MRI_pulse_sequence |
mRNA-4157/V940 is an mRNA based cancer vaccine encapsulated in solid lipid nanoparticles . The 34 mRNA sequences in mRNA-4157/V940 vaccine were generated by an automated algorithm integrated with workflow based on massive parallel sequencing of tissue generated from cancer patients. [ 1 ] As adjuvant therapy, mRNA-4157 monotherapy and in combination with pembrolizumab have been investigated in patients with resected solid tumors (melanoma, bladder carcinoma, HPV negative HNSCC, NSCLC, SCLC, MSI-High, or TMB High cancers). It was also investigated in patients with HNSCC and MSS-CRC. [ clarification needed ] [ 2 ] [ 3 ]
mRNA-4157/V940 was initially developed by Moderna starting in 2017. In May 2018, Moderna and MSD (Merck in the US) announced a collaboration on further development of the investigational agent. In 2019, Moderna and Merck jointly put mRNA-4157/V940 into clinical trials in combination with Merck's cancer immunotherapy drug pembrolizumab in resected stage IIIB–IV melanoma. [ 4 ] [ 5 ] [ 6 ] [ 7 ] In December 2022, Moderna and MSD announced that the study met its endpoint and demonstrated superiority. [ 8 ] In February 2023, the Food and Drug Administration granted mRNA-4157/V940 breakthrough therapy designation . [ 9 ] In April 2023, mRNA-4157 in combination with pembrolizumab received PRIME scheme designation from the European Medicines Agency. [ 10 ]
At the 2023 AACR meeting, Professor Jeffrey S. Weber, the deputy director of the Perlmutter Cancer Center, presented the primary analysis outcome from the open-label, 2:1-randomized phase 2b study. At the pre-specified analysis point, when 42 recurrence-free survival (RFS) events had occurred among 157 participants with resected stage IIIC–IV melanoma, 22.4% (24/107) of participants in the mRNA-4157 plus pembrolizumab arm had recurrent disease, compared with 40% (20/50) in the pembrolizumab arm, which led to the widely reported conclusion that mRNA-4157 in combination with pembrolizumab reduced risk by 44% in surgically resected melanoma.
In July 2023, MSD and Moderna initiated the phase III study (study V940-001) evaluating mRNA-4157 in combination with pembrolizumab for adjuvant treatment of patients with resected high-risk stage IIB-stage IV melanoma. [ 11 ]
In the meantime, a phase III study of V940 plus pembrolizumab versus placebo plus pembrolizumab as adjuvant therapy in non-small cell lung cancer patients with resected stage II, IIIA, or IIIB (N2) disease has been registered and was expected to start in November 2023. [ needs update ] Of note, patients who received prior neoadjuvant therapy for their current NSCLC diagnosis, or who have been treated with any agent directed at stimulatory or coinhibitory T-cell receptors (e.g. PD-1, PD-L1, CTLA-4, et al.), are not allowed to enrol in the study. [ 12 ]
mRNA-4157/V940 is an mRNA based cancer vaccine . When administered, it will produce one of several dozen possible abnormal proteins commonly found in cancerous tissues. The production of those proteins is intended to invoke an immune response .
mRNA-4157/V940 is given to patients after their tumors have been sequenced and abnormal proteins identified. The drug is then customized to match a patient's tumor, which makes it an example of personalized medicine . | https://en.wikipedia.org/wiki/MRNA-4157/V940 |
mRNA-based disease diagnosis technologies are diagnostic procedures that use messenger RNAs [ 1 ] as molecular diagnostic tools to discover the relationships between patients' DNA and their specific biological features. mRNA-based disease diagnosis technologies have been applied widely in medicine in recent years, especially for the early diagnosis of tumors (such as renal cell carcinoma , [ 2 ] hepatocellular carcinoma , [ 3 ] [ 4 ] breast cancer [ 5 ] and prostate cancer [ 6 ] ). The technology can be applied to various types of samples, depending on how easily the samples are accessible and whether the samples reliably contain the mRNA related to specific diseases. For example, in hepatocellular carcinoma, [ 3 ] the tumor tissues excised during the operation are a good resource for mRNA-based analysis. Among the most commonly used samples, blood is one of the most easily accessible via minimally invasive methods. Blood has been used in the early diagnosis of some cancers, [ 7 ] [ 8 ] such as non-small cell lung cancer [ 9 ] and neuroendocrine tumors. [ 10 ]
Even though modern medicine has been developing for centuries, a number of medical challenges remain. For example, in breast cancer, the expression of three traditional biomarkers is routinely screened to guide targeted therapy. [ 11 ] However, in triple negative breast cancer , none of those biomarkers can be detected, leaving it with a poor prognosis and high mortality. Innovative technologies such as mRNA-based diagnostics are aimed at addressing such health-related issues that still exist today. [ 12 ]
A general workflow of mRNA-based disease diagnosis can be summarized as the following steps: [ 16 ] [ 17 ] [ 18 ]
Take breast cancer as an example: the sensitivity of traditional ultrasound screening for breast cancer can be 76%, [ 19 ] whereas with a blood-based mRNA diagnostic method the sensitivity can reach 80.6%. [ 20 ]
mRNA-based disease diagnostic technologies allow quantitative measurement of mRNA in certain samples, such as in leukemia. [ 21 ]
As some technologies such as RNA-seq can provide the entire transcriptome of an individual, mRNA-based disease diagnosis can be developed within the landscape of personalized medicine. In HER2-positive breast cancer, detection of ERBB2 mRNA expression levels is helpful in predicting the response to anti-HER2-based treatments. [ 22 ]
As mentioned above, mRNA-based disease diagnostic technology is more sensitive and specific for certain diseases. Even when there is no obvious symptom, mRNA-based disease diagnostic technology can serve as a screening method for early changes in RNA levels. [ 23 ] High serum metadherin mRNA expression has been observed in colorectal cancer and is associated with poorly differentiated histological grades. [ 24 ]
mRNA display is a display technique used for in vitro protein and/or peptide evolution to create molecules that can bind to a desired target. The process results in translated peptides or proteins that are associated with their mRNA progenitor via a puromycin linkage. The complex then binds to an immobilized target in a selection step ( affinity chromatography ). The mRNA-protein fusions that bind well are then reverse transcribed to cDNA and their sequence amplified via a polymerase chain reaction . The result is a nucleotide sequence that encodes a peptide with high affinity for the molecule of interest.
Puromycin is an analogue of the 3’ end of a tyrosyl-tRNA, with part of its structure mimicking a molecule of adenosine and the other part mimicking a molecule of tyrosine . Compared to the cleavable ester bond in a tyrosyl-tRNA, puromycin has a non-hydrolysable amide bond. As a result, puromycin interferes with translation and causes premature release of translation products.
All mRNA templates used for mRNA display technology have puromycin at their 3’ end. As translation proceeds, the ribosome moves along the mRNA template, and once it reaches the 3’ end of the template, the fused puromycin will enter the ribosome’s A site and be incorporated into the nascent peptide. The mRNA-polypeptide fusion is then released from the ribosome (Figure 1).
To synthesize an mRNA-polypeptide fusion, the fused puromycin is not the only modification to the mRNA template. [ 1 ] Oligonucleotides and other spacers need to be recruited along with the puromycin to provide flexibility and proper length for the puromycin to enter the A site. Ideally, the linker between the 3’ end of an mRNA and the puromycin has to be flexible and long enough to allow the puromycin to enter the A site upon translation of the last codon. This enables the efficient production of high-quality, full-length mRNA-polypeptide fusion. Rihe Liu et al. optimized the 3’-puromycin oligonucleotide spacer. They reported that dA25 in combination with a Spacer 9 (Glen Research), and dAdCdCP at the 5’ terminus worked the best for the fusion reaction. They found that linkers longer than 40 nucleotides and shorter than 16 nucleotides showed greatly reduced efficiency of fusion formation. Also, when the sequence rUrUP presented adjacent to the puromycin, fusion did not form efficiently. [ 2 ]
In addition to providing flexibility and length, the poly dA portion of the linker also allows further purification of the mRNA-polypeptide fusion due to its high affinity for dT cellulose resin. [ 3 ] The mRNA-polypeptide fusions can be selected over immobilized selection targets for several rounds with increasing stringency. After each round of selection, those library members that stay bound to the immobilized target are PCR amplified, and non-binders are washed off.
The synthesis of an mRNA display library starts from the synthesis of a DNA library. A DNA library for any protein or small peptide of interest can be synthesized by solid-phase synthesis followed by PCR amplification. Usually, each member of this DNA library has a T7 RNA polymerase transcription site and a ribosomal binding site at the 5’ end. The T7 promoter region allows large-scale in vitro T7 transcription to transcribe the DNA library into an mRNA library, which provides templates for the in vitro translation reaction later. The ribosomal binding site in the 5’-untranslated region (5’ UTR) is designed according to the in vitro translation system to be used. There are two popular commercially available in vitro translation systems. One is E. coli S30 Extract System (Promega) that requires a Shine-Dalgarno sequence in the 5’ UTR as a ribosomal binding site; [ 4 ] the other one is Red Nova Lysate (Novagen), which needs a ΔTMV ribosomal binding site.
Once the mRNA library is generated, it will be Urea-PAGE purified and ligated using T4 DNA ligase to the DNA spacer linker containing puromycin at the 3’ end. In this ligation step, a piece of mRNA is ligated with a single stranded DNA with the help from T4 DNA ligase. This is not a standard T4 DNA ligase ligation reaction, where two pieces of double stranded DNA are ligated together. To increase the yield of this special ligation, a single stranded DNA splint may be used to aid the ligation reaction. The 5’ terminus of the splint is designed to be complementary to the 3’ end of the mRNA, and the 3’ terminus of the splint is designed to be complementary to the 5’ end of the DNA spacer linker, which usually consists of poly dA nucleotides (Figure 2).
The ligated mRNA-DNA-puromycin library is translated in Red Nova Lysate (Novagen) or E. coli S30 Extract System (Promega), resulting in polypeptides covalently linked in cis to the encoding mRNA. The in vitro translation can also be done in a PURE (protein synthesis using recombinant elements) system. PURE system is an E. coli cell-free translation system in which only essential translation components are present. Some components, such as amino acids and aminoacyl-tRNA synthases (AARSs) can be omitted from the system. Instead, chemically acylated tRNA can be added into the PURE system. It has been shown that some unnatural amino acids, such as N-methyl-amino acid accylated tRNA can be incorporated into peptides or mRNA-polypeptide fusions in a PURE system. [ 5 ]
After translation, the single-stranded mRNA portions of the fusions will be converted to heteroduplex of RNA/DNA by reverse transcriptase to eliminate any unwanted RNA secondary structures, and render the nucleic acid portion of the fusion more stable. This step is a standard reverse transcription reaction. For instance, it can be done by using Superscript II (GIBCO-BRL) following the manufacturer’s protocol.
The mRNA/DNA-polypeptide fusions can be selected over immobilized selection targets for several rounds (Figure 3). There might be a relatively high background for the first few rounds of selection, and this can be minimized by increasing selection stringency, such as adjusting salt concentration, amount of detergent, and/or temperature during the target/fusion binding period. Following binding selection, those library members that stay bound to the immobilized target are PCR amplified. The PCR amplification step will enrich the population from the mRNA-display library that has higher affinity for the immobilized target. Error-prone PCR can also be done in between each round of selection to further increase the diversity of the mRNA-display library and reduce background in selection. [ 6 ]
A less time-consuming protocol for mRNA display was recently published. [ 7 ]
Although there are many other molecular display technologies, such as phage display , bacterial display , yeast display , and ribosome display , mRNA display technology has many advantages over the others. [ 8 ] The first three biological display libraries listed have polypeptides or proteins expressed on the respective microorganism’s surface, and the accompanying coding information for each polypeptide or protein is retrievable from the microorganism’s genome. However, the library size for these three in vivo display systems is limited by the transformation efficiency of each organism. For example, the library size for phage and bacterial display is limited to 1-10 × 10^9 different members. The library size for yeast display is even smaller. Moreover, these cell-based display systems only allow the screening and enrichment of peptides/proteins containing natural amino acids. In contrast, mRNA display and ribosome display are in vitro selection methods. They allow a library size as large as 10^15 different members. The large library size increases the probability of selecting very rare sequences, and also improves the diversity of the selected sequences. In addition, in vitro selection methods remove unwanted selection pressure, such as poor protein expression and rapid protein degradation, which may reduce the diversity of the selected sequences. Finally, in vitro selection methods allow the application of in vitro mutagenesis [ 9 ] and recombination techniques throughout the selection process.
Although both ribosome display and mRNA display are in vitro selection methods, mRNA display has some advantages over ribosome display technology. [ 10 ] mRNA display utilizes covalent mRNA-polypeptide complexes linked through puromycin, whereas ribosome display utilizes stalled, noncovalent ribosome-mRNA-polypeptide complexes. [ 11 ] For ribosome display, selection stringency is limited by the need to keep the noncovalent ribosome-mRNA-polypeptide complexes intact. This may cause difficulties in reducing background binding during the selection cycle. Also, the polypeptides under selection in a ribosome display system are attached to an enormous rRNA-protein complex, a ribosome, which has a molecular weight of more than 2,000,000 Da. There might be some unpredictable interaction between the selection target and the ribosome, and this may lead to a loss of potential binders during the selection cycle. In contrast, the puromycin DNA spacer linker used in mRNA display technology is much smaller compared to a ribosome. This linker may have less chance to interact with an immobilized selection target. Thus, mRNA display technology is more likely to give less biased results.
In 1997, Roberts and Szostak showed that fusions between a synthetic mRNA and its encoded myc epitope could be enriched from a pool of random sequence mRNA-polypeptide fusions by immunoprecipitation. [ 6 ]
Nine years later, Fukuda and colleagues chose the mRNA display method for in vitro evolution of single-chain Fv (scFv) antibody fragments. [ 12 ] They selected six different scFv mutants with five consensus mutations. Kinetic analysis of these mutants showed that their antigen-specificity remained similar to that of the wild type. However, they demonstrated that two of the five consensus mutations were within the complementarity determining regions (CDRs), and they concluded that mRNA display has the potential for rapid artificial evolution of high-affinity diagnostic and therapeutic antibodies by optimizing their CDRs.
Roberts and coworkers have demonstrated that unnatural peptide oligomers consisting of an N-substituted amino acid can be synthesized as mRNA-polypeptide fusions. [ 13 ] N-substituted amino acid-containing peptides have been associated with good proteolytic stability and improved pharmacokinetic properties. This work indicates that mRNA display technology has the potential for selecting drug-like peptides for therapeutic usage resistant to proteolysis. [ 14 ] | https://en.wikipedia.org/wiki/MRNA_display |
mRNA surveillance mechanisms are pathways utilized by organisms to ensure fidelity and quality of messenger RNA (mRNA) molecules. There are a number of surveillance mechanisms present within cells. These mechanisms function at various steps of the mRNA biogenesis pathway to detect and degrade transcripts that have not properly been processed.
The translation of messenger RNA transcripts into proteins is a vital part of the central dogma of molecular biology . mRNA molecules are, however, prone to a range of fidelity errors that can compromise their translation into functional proteins . [ 1 ] RNA surveillance mechanisms are methods cells use to assure the quality and fidelity of mRNA molecules. [ 2 ] This is generally achieved by marking aberrant mRNA molecules for degradation by various endogenous nucleases . [ 3 ]
mRNA surveillance has been documented in bacteria and yeast . In eukaryotes , these mechanisms are known to function in both the nucleus and cytoplasm . [ 4 ] Fidelity checks of mRNA molecules in the nucleus results in the degradation of improperly processed transcripts before export into the cytoplasm. Transcripts are subject to further surveillance once in the cytoplasm. Cytoplasmic surveillance mechanisms assess mRNA transcripts for the absence of or presence of premature stop codons. [ 3 ] [ 4 ]
Three surveillance mechanisms are currently known to function within cells: the nonsense-mediated mRNA decay pathway (NMD); the nonstop mediated mRNA decay pathway (NSD); and the no-go mediated mRNA decay pathway (NGD).
Nonsense-mediated decay is involved in detection and decay of mRNA transcripts which contain premature termination codons (PTCs). PTCs can arise in cells through various mechanisms: germline mutations in DNA; somatic mutations in DNA; errors in transcription ; or errors in post transcriptional mRNA processing. [ 5 ] [ 6 ] Failure to recognize and decay these mRNA transcripts can result in the production of truncated proteins which may be harmful to the organism. By triggering decay of the transcripts that encode C-terminally truncated polypeptides, the NMD mechanism can protect cells against deleterious dominant-negative and gain-of-function effects. [ 7 ] PTCs have been implicated in approximately 30% of all inherited diseases; as such, the NMD pathway plays a vital role in assuring overall survival and fitness of an organism. [ 8 ] [ 9 ]
A surveillance complex consisting of various proteins ( eRF1 , eRF3 , Upf1 , Upf2 and Upf3) is assembled and scans the mRNA for premature stop codons. [ 5 ] The assembly of this complex is triggered by premature translation termination. If a premature stop codon is detected then the mRNA transcript is signalled for degradation – the coupling of detection with degradation occurs. [ 3 ] [ 10 ] [ 11 ]
Seven smg genes (smg1-7) and three UPF genes (Upf1-3) have been identified in Saccharomyces cerevisiae and Caenorhabditis elegans as essential trans-acting factors contributing to NMD activity. [ 12 ] [ 13 ] All of these genes are conserved in Drosophila melanogaster and in mammals, where they also play critical roles in NMD. Throughout eukaryotes there are three components which are conserved in the process of NMD. [ 14 ] These are the Upf1/SMG-2, Upf2/SMG-3 and Upf3/SMG-4 complexes. Upf1/SMG-2 is a phosphoprotein in multicellular organisms and is thought to contribute to NMD via its phosphorylation activity. However, the exact interactions of the proteins and their roles in NMD are currently disputed. [ 11 ] [ 12 ] [ 14 ] [ 15 ] [ 16 ]
A premature stop codon must be recognized as different from a normal stop codon so that only the former triggers an NMD response. It has been observed that the ability of a nonsense codon to cause mRNA degradation depends on its location relative to the downstream sequence element and associated proteins. [ 1 ] Studies have demonstrated that nonsense codons located more than 50–54 nucleotides upstream of the last exon-exon junction can target mRNA for decay. [ 1 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 17 ] Those downstream of this region are unable to do so. Thus, NMD-triggering nonsense codons lie more than 50–54 nucleotides upstream from the last exon boundary, whereas natural stop codons are located within terminal exons. [ 18 ] Exon junction complexes (EJCs) mark the exon-exon boundaries. EJCs are multiprotein complexes that assemble during splicing at a position about 20–24 nucleotides upstream from the splice junction. [ 19 ] It is this EJC that provides the positional information needed to discriminate premature stop codons from natural stop codons. Recognition of PTCs appears to be dependent on the definition of the exon-exon junctions. This suggests involvement of the spliceosome in mammalian NMD. [ 17 ] [ 20 ] Research has investigated the possibility of spliceosome involvement in mammalian NMD and has determined this is a likely possibility. [ 18 ] Furthermore, it has been observed that NMD mechanisms are not activated by nonsense transcripts that are generated from genes that naturally do not contain introns (e.g., Histone H4, Hsp70, melanocortin-4-receptor). [ 7 ]
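As a purely illustrative restatement of this positional rule (not a tool from the cited studies), the following Python sketch classifies a stop codon as a likely NMD target when it lies more than roughly 50 nucleotides upstream of the last exon-exon junction; the function name, the 50-nucleotide default, and the example coordinates are all hypothetical choices for illustration.

```python
# Toy illustration of the "50-54 nucleotide" positional rule described above.
# All names and coordinates are hypothetical; real PTC recognition depends on
# EJC deposition and other factors not modelled here.
def is_likely_nmd_target(stop_codon_pos, last_junction_pos, boundary_nt=50):
    """Return True if the stop codon lies more than `boundary_nt` nucleotides
    upstream of the last exon-exon junction (positions in mRNA coordinates)."""
    return (last_junction_pos - stop_codon_pos) > boundary_nt

print(is_likely_nmd_target(300, 420))  # True: behaves like a premature stop codon
print(is_likely_nmd_target(400, 420))  # False: behaves like a natural stop codon
```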
When the ribosome reaches a PTC the translation factors eRF1 and eRF3 interact with retained EJC complexes though a multiprotein bridge. [ 21 ] The interactions of UPF1 with the terminating complex and with UPF2 /UPF3 of the retained EJCs are critical. It is these interactions which target the mRNA for rapid decay by endogenous nucleases [ 18 ] [ 21 ]
Studies involving organisms such as S. cerevisiae , D. melanogaster and C. elegans have shown that PTC recognition in these organisms does not depend on exon-exon boundaries. [ 20 ] These studies suggest that invertebrate NMD occurs independently of splicing. As a result, EJCs, which are responsible for marking exon-exon boundaries, are not required in invertebrate NMD. [ 3 ] Several models have been proposed to explain how PTCs are distinguished from normal stop codons in invertebrates. One of these suggests that there may be a downstream sequence element which functions similarly to the exon junctions in mammals. [ 11 ] A second model proposes that a widely present feature in mRNA, such as the 3' poly-A tail, might provide the positional information required for recognition. [ 22 ] Another model, dubbed the "faux 3'UTR model", suggests that premature translation termination is distinguished from normal termination by intrinsic features that allow the termination machinery to recognize that it is acting in an inappropriate environment. [ 3 ] These mechanisms, however, have yet to be conclusively demonstrated.
There are two mechanisms of PTC recognition in plants: according to the PTC's distance from the EJC (as in vertebrates) or from the poly-A tail. The NMD mechanism in plants induces the decay of mRNAs containing a 3'UTR longer than 300 nucleotides, which is why the proportion of mRNAs with longer 3'UTRs is much lower in plants than in vertebrates. [ 23 ] [ 24 ]
mRNAs with nonsense mutations are generally thought to be targeted for decay via the NMD pathways. The presence of this premature stop codon about 50-54 nucleotides 5' to the exon junction appears to be the trigger for rapid decay; however, it has been observed that some mRNA molecules with a premature stop codon are able to avoid detection and decay. [ 17 ] [ 25 ] In general, these mRNA molecules possess the stop codon very early in the reading frame (i.e. the PTC is AUG-proximal). This appears to be a contradiction to the current accepted model of NMD as this position is significantly 5' of the exon-exon junction. [ 26 ]
This has been demonstrated in β-globulin. β-globulin mRNAs containing a nonsense mutation early in the first exon of the gene are more stable than NMD sensitive mRNA molecules. The exact mechanism of detection avoidance is currently not known. It has been suggested that the poly-A binding protein (PABP) appears to play a role in this stability. [ 27 ] It has been demonstrated in other studies that the presence of this protein near AUG-proximal PTCs appears to promote the stability of these otherwise NMD sensitive mRNAs. It has been observed that this protective effect is not limited only to the β-globulin promoter. [ 25 ] This suggests that this NMD avoidance mechanism may be prevalent in other tissue types for a variety of genes. The current model of NMD may need to be revisited upon further studies.
Nonstop mediated decay (NSD) is involved in the detection and decay of mRNA transcripts which lack a stop codon. [ 29 ] [ 30 ] These mRNA transcripts can arise from many different mechanisms such as premature 3' adenylation or cryptic polyadenylation signals within the coding region of a gene. [ 31 ] This lack of a stop codon presents a significant problem for cells. A ribosome translating the mRNA eventually translates into the 3' poly-A tail region of the transcript and stalls; as a result, it cannot eject the mRNA. [ 32 ] Ribosomes may thus remain sequestered on the nonstop mRNA and be unavailable to translate other mRNA molecules into proteins. Nonstop mediated decay resolves this problem by both freeing the stalled ribosomes and marking the nonstop mRNA for degradation in the cell by nucleases. Nonstop mediated decay consists of two distinct pathways which likely act in concert to decay nonstop mRNA. [ 29 ] [ 30 ]
This pathway is active when Ski7 protein is available in the cell. The Ski7 protein is thought to bind to the empty A site of the ribosome. This binding allows the ribosome to eject the stuck nonstop mRNA molecule – this event frees the ribosome and allows it to translate other transcripts. Ski7 is then associated with the nonstop mRNA, and it is this association which targets the nonstop mRNA for recognition by the cytosolic exosome . The Ski7-exosome complex rapidly deadenylates the mRNA molecule, which allows the exosome to decay the transcript in a 3' to 5' fashion. [ 29 ] [ 30 ]
A second type of NSD has been observed in yeast. In this mechanism, which operates in the absence of Ski7, the translating ribosome displaces the poly-A tail binding PABP proteins. The removal of these PABP proteins then results in the loss of the protective 5' m7G cap . The loss of the cap results in rapid degradation of the transcript by an endogenous 5'-3' exonuclease such as XrnI. [ 30 ]
No-Go decay (NGD) is the most recently discovered surveillance mechanism. [ 33 ] As such, it is not currently well understood. While authentic targets of NGD are poorly understood, they appear to consist largely of mRNA transcripts on which ribosomes have stalled during translation. This stall can be caused by a variety of factors including strong secondary structures , which may physically block the translational machinery from moving down the transcript. [ 33 ] Dom34/Hbs1 likely binds near the A site of stalled ribosomes and may facilitate recycling of complexes. [ 34 ] In some cases, the transcript is also cleaved in an endonucleolytic fashion near the stall site; however the identity of the responsible endonuclease remains contentious. The fragmented mRNA molecules are then fully degraded by the exosome in a 3' to 5' fashion and by Xrn1 in a 5' to 3' fashion. [ 33 ] It is not currently known how this process releases the mRNA from the ribosomes, however, Hbs1 is closely related to the Ski7 protein which plays a clear role in ribosome release in Ski7 mediated NSD. It is postulated that Hbs1 may play a similar role in NGD. [ 5 ] [ 35 ]
It is possible to determine the evolutionary history of these mechanisms by observing the conservation of key proteins implicated in each mechanism. For example: Dom34/Hbs1 are associated with NGD; [ 33 ] Ski7 is associated with NSD; [ 29 ] and the eRF proteins are associated with NMD. [ 6 ] To this end, extensive BLAST searches have been performed to determine the prevalence of the proteins in various types of organisms. It has been determined that NGD Hbs1 and NMD eRF3 are found only in eukaryotes. However, the NGD Dom34 is universal in eukaryotes and archaea . This suggests that NGD appears to have been the first evolved mRNA surveillance mechanism. The NSD Ski7 protein appears to be restricted strictly to yeast species which suggest that NSD is the most recently evolved surveillance mechanism. This by default leaves NMD as the second evolved surveillance mechanism. [ 36 ] | https://en.wikipedia.org/wiki/MRNA_surveillance |
MRS Bulletin is published by the Materials Research Society in partnership with Springer Nature . [ 1 ] It was established in 1974 as the Materials Research Society Newsletter . Publication paused for a year in 1981 and resumed in 1982 under the title MRS Bulletin . [ 2 ]
The current editor is Gopal R. Rao (2011–present). [ 3 ] The previous editor was Elizabeth Fleischer (1991 to 2011). [ 4 ]
The journal is abstracted and indexed in
According to Journal Citation Reports , the journal has a 2020 impact factor of 6.578. [ 6 ] | https://en.wikipedia.org/wiki/MRS_Bulletin |
MRZ-9547 , also known as ( R )-phenylpiracetam , ( R )-phenotropil , or ( R )-fonturacetam , is a selective dopamine reuptake inhibitor ( IC 50 Tooltip half-maximal inhibitory concentration = 14.5 μM) that was developed by Merz Pharma. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] It is the ( R )- enantiomer of the racetam and nootropic phenylpiracetam (phenotropil; fonturacetam). [ 4 ] [ 5 ] [ 7 ]
The drug was under development for the treatment of fatigue associated with Parkinson's disease and was in phase 1 clinical trials for this indication in June 2014. [ 1 ] However, no recent development has been reported as of November 2017. [ 1 ] There was also interest in MRZ-9547 for treatment of fatigue in people with depression and other conditions, but this was not pursued. [ 2 ] [ 5 ]
Similarly to other dopamine reuptake inhibitors and related agents, MRZ-9547 has been found to have pro-motivational effects in animals and to reverse motivational deficits induced by the dopamine depleting agent tetrabenazine . [ 8 ] [ 3 ] [ 5 ]
The drug, as the enantiopure ( R )-enantiomer of phenylpiracetam, was first described in the scientific literature by 2014. [ 5 ] [ 9 ]
This drug article relating to the nervous system is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MRZ-9547 |
MS2 tagging is a technique based upon the natural interaction of the MS2 bacteriophage coat protein with a stem-loop structure from the phage genome , [ 1 ] which is used for biochemical purification of RNA-protein complexes and partnered to GFP for detection of RNA in living cells. [ 2 ] More recently, the technique has been used to monitor the appearance of RNA in living cells, at the site of transcription, or simply by observing the changes in RNA number in the cytoplasm. [ 3 ] [ 4 ] This has revealed that transcription of both prokaryotic and eukaryotic genes occurs in a discontinuous fashion with bursts of transcription separated by irregular intervals.
The method starts with a single-stranded RNA into which a pattern of stem-loop structures is introduced by adding copies of the MS2 RNA-binding sequence to a noncoding region. [ 5 ] The MS2 protein is fused with GFP, and the fusion protein binds the mRNA through these MS2 RNA-binding sequence copies, forming a fluorescent complex. [ 5 ] The MS2-GFP fusion protein is expressed from a plasmid transferred into the cell [ 5 ] (Robert Singer’s lab). The resulting EGFP-MS2-RNA complexes carry two signals: the signal encoded within the RNA itself and the nuclear localization signal (NLS) present within GFP-MS2. [ 6 ]
MS2 biotin-tagged RNA affinity purification (MS2-BioTRAP) is one in vivo method of identifying protein-RNA interactions. [ 7 ] Both the MS2-tagged RNA and the MS2 protein tag are expressed, and the affinity interaction between them is then used to identify protein-RNA interactions. [ 7 ]
Advantages:
The MS2-BioTRAP method is fast, flexible, and easy to set up; it scales well and allows protein-RNA interactions to be studied under physiological conditions. [ 7 ] The MS2 tag is also effective for small molecules when an MS2 coat protein is used to isolate a variety of ribonucleoprotein particles (RNPs) .
Disadvantages:
One caveat of MS2 tagging is that many copies of the MS2 stem-loop must be added to the RNA to produce enough signal to view and track a single RNA molecule in the nucleus. [ 5 ] When tracking more than one RNA sequence in the nucleus of cultured cells, more than one target sequence is needed. [ 5 ] Tracking can also be affected by the MS2 protein itself, which has a classical basic nuclear localization signal (NLS); this can alter the location of the RNA complex, and most of the GFP-MS2 ends up in the nucleus [ 5 ] (Robert Singer’s lab). [ 6 ] The accumulation of GFP-MS2 in the nucleus produces strong nuclear fluorescence signals, which can delay or prevent the analysis of RNA nuclear localization because it hinders the analysis of splicing, RNA editing, the nuclear export of RNA, and RNA translation. [ 6 ] Moreover, the added tag may alter the RNA secondary structure and thereby introduce an artifact. [ 5 ]
Additionally, the expression levels and regulatory properties of small noncoding RNAs (sRNAs) can be influenced by the MS2 tag. [ 8 ] Also, to use MS2 as an affinity tag for purifying a protein in E. coli bacteria, scientists expressed MS2-MBP, an MS2 coat protein fused to maltose-binding protein and carrying mutations that prevent oligomerization . [ 8 ] | https://en.wikipedia.org/wiki/MS2_tagging |
MSAT ( Mobile Satellite ) is a satellite -based mobile telephony service developed by the National Research Council Canada (NRC). Supported by a number of companies in the US and Canada, MSAT hosts a number of services, including the broadcast of CDGPS signals. The MSAT satellites were built by Hughes (now owned by Boeing ) with a 3 kilowatt solar array power capacity and sufficient fuel for a design life of twelve years. TMI Communications of Canada referred to its MSAT satellite as MSAT-1, while American Mobile Satellite Consortium (now Ligado Networks ) referred to its MSAT as AMSC-1, with each satellite providing backup for the other. [ 1 ]
MSAT-1 and MSAT-2 have had their share of problems. Mobile Satellite Ventures placed the AMSC-1 satellite into a 2.5 degree inclined orbit operations mode in November 2004, reducing station-keeping fuel usage and extending the satellite's useful life. [ 7 ]
On January 11, 2006, Mobile Satellite Ventures (MSVLP) (later renamed SkyTerra , then LightSquared following an acquisition, and after bankruptcy Ligado Networks ) announced plans to launch a new generation of satellites (in a 3-satellite configuration) to replace the MSAT satellites by 2010. MSV said that all old MSAT gear would be compatible with the new satellites. [ 8 ] [ 9 ]
The following services are singularly dependent upon the continued operation of the MSAT satellite: | https://en.wikipedia.org/wiki/MSAT |
MSI Wind PC is a nettop counterpart to the MSI Wind Netbook . [ 1 ] The MSI Wind PC is sold in Europe, Asia, [ 2 ] and in the United States, barebones kits were available until Summer 2009, when desktop units also became available. [ 3 ]
On January 15, 2009, MSI announced a new model of the Wind, the NetTop D130, with a dual-core processor. [ 4 ]
There are 3 MSI wind PC models listed on the MSI website: [ when? ] Wind PC, Wind PC (Linux), and the Wind PC 100. In addition, there are 8 products using the "Nettop" name and 6 that use alphanumeric codes, i.e. 6676-003BUS or 6667BB-004US. [ 5 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MSI_Wind_PC |
MS BASIC for Macintosh was a dialect of Microsoft BASIC for Macintosh . It was one of the first Microsoft BASIC variants to have optional line numbering, predating QuickBASIC . It was provided in two versions, one with standard binary floating point and another with decimal arithmetic . [ 1 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MS_BASIC_for_Macintosh |
mSpot Inc. is the developer of Samsung Music Hub – an all-in-one mobile music service that includes a streaming catalog, cloud music storage, radio, and music store. The service was available for Samsung smart mobile devices in the U.S. and EU countries including the UK , France , Germany , Italy and Spain . mSpot became a wholly owned, independently operated subsidiary of Samsung Electronics in May 2012; in July 2013, mSpot employees became the Music Team for Samsung's Media Solutions Center. [ citation needed ]
mSpot was founded in 2004 by Daren Tsui and Ed Ho, now CEO and CTO respectively. The company was initially launched as a mobile radio service. In early 2005, Sprint was building the first 2.5G network and asked mSpot to provide one of the first mobile radio channels for the service. Initially the service launched with 8 music channels and 5 talk channels (NPR, AccuWeather); it soon expanded to provide sports and entertainment channels, including premium content channels such as NPR. [ 1 ] Soon after, mSpot launched a white-labeled mobile entertainment platform offering music and video content, which was later licensed by other wireless carriers including AT&T and T-Mobile . [ 2 ]
In 2006, mSpot began offering full-length mobile movies streamed over wireless networks. Studios that first offered content for the service included Disney and Universal ; Scarface was the first mobile movie offered on mSpot Movies, initially on Sprint. [ 3 ]
In early 2008, Island Def Jam partnered with mSpot to bring label-sponsored radio to mSpot Radio. [ 4 ] In May 2010, mSpot Music became one of the first "cloud" music services available in the U.S., ahead of Google Music and similar services from Apple and others. [ 5 ] | https://en.wikipedia.org/wiki/MSpot |
mSpy is a brand of mobile and computer parental control monitoring software for iOS , Android , Windows , and macOS . The app monitors and logs user activity on the client device and sends the data to a personalized dashboard. Data the users can monitor includes text messages, calls, GPS locations, social media chats, and more. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It is owned by Virtuoso Holding. [ 5 ] [ 6 ]
mSpy was launched as a product for mobile monitoring by Altercon Group in 2010. [ 6 ]
In 2012, the application allowed parents to monitor not only smartphones but also computers running Windows and macOS.
In 2013, mSpy became TopTenReviews cell phone monitoring software award winner. [ 7 ]
By 2014, the business grew nearly 400%, and the app's user numbers exceeded the 1 million mark. [ 8 ]
In 2015, mSpy received the Parents Tested Parents Approved (PTPA) Winner’s Seal of Approval in the United States. [ 9 ] In 2015 and 2018, mSpy was the victim of data breaches which released user data. [ 10 ]
In 2016, mLite, a light version of mSpy, became available from Google Play . The same year, it was awarded the kidSAFE Certified Seal in the United States. [ 11 ]
In 2017, mSpy collaborated with YouTuber and journalist Coby Persin to conduct a social experiment on the dangers of social media and online predators. [ 12 ] [ 13 ] [ 14 ] The experiment, conducted with parental consent, involved Coby Persin befriending three children—aged 12, 13, and 14—via Snapchat and then inviting them to meet in person. Each of the participants agreed to the meeting and arrived at the designated location. The video of the experiment received widespread attention and helped to raise awareness about the importance of online security and parental controls. [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ]
In early 2021, mSpy released a new feature, Screenrecorder, which allows parents to take screenshots of the child's screen while they are browsing certain apps. [ 20 ]
In 2024, mSpy's Zendesk was compromised by an unknown threat actor, revealing their customer list. [ 21 ]
As of 2025, mSpy is compatible with Android, iPhone, and iPad devices. It provides access to various types of data stored on the device, including contact information, calendar entries, emails, SMS messages, browser history, photos, videos, and installed applications. Functions also include GPS tracking, geofencing, keyword alerts etc. [ 20 ] [ 22 ]
It was noted that since mSpy runs inconspicuously, there is a risk of the software being used illegally. mSpy was called "terrifying" by The Next Web [ 23 ] and was featured in NPR coverage of spyware used against victims of stalking and other domestic violence . [ 24 ] In response, mSpy released security updates aimed at reducing the risk of misuse and stated that it "uses encryption protocols to protect user data and that access is restricted to the account holder".
In May 2015, Brian Krebs reported that mSpy was hacked, leaking personal data for hundreds of thousands of users of devices with mSpy installed. [ 25 ] mSpy claimed that there was no data leak, but that instead, it was the victim of blackmailers. [ citation needed ]
In September 2018, Krebs claimed and demonstrated that anyone could easily gain access to the mSpy database containing data for millions of users. [ 26 ] The company responded by stating that the exposed data consisted primarily of error logs and incorrect login attempts. Following the incident, mSpy implemented new security measures, changed encryption keys, and reset passwords for affected accounts. [ 27 ]
A 2024 Sky News story characterised mSpy as " stalkerware ". [ 28 ]
Leaked customer support messages from mSpy reveal misuse of its app for illegally monitoring partners and children. [ 29 ] | https://en.wikipedia.org/wiki/MSpy |
The MTBE controversy concerns methyl tert-butyl ether (MTBE), a gasoline additive that replaced tetraethyllead . MTBE is an oxygenate and raises gasoline's octane number . Its use declined in the United States in response to environmental and health concerns. It has polluted groundwater due to MTBE-containing gasoline being spilled or leaked at gas stations. MTBE spreads more easily underground than other gasoline components due to its higher solubility in water. [ 1 ] Cost estimates for removing MTBE from groundwater and contaminated soil range from $1 billion [ 2 ] to $30 billion, [ 3 ] including removing the compound from aquifers and municipal water supplies, and replacing leaky underground oil tanks. Who will pay for remediation is controversial. In one case, the cost to oil companies to clean up the MTBE in wells belonging to the city of Santa Monica , California is estimated to exceed $200 million. [ 4 ]
Some U.S. states banned MTBE in gasoline. California and New York , which together accounted for 40% of U.S. MTBE consumption, banned usage of the chemical in gasoline, effective 2002 and 2004, respectively. [ 5 ] [ 6 ] As of 2007, 25 states had issued complete or partial bans on the use of MTBE. [ 7 ]
The Energy Policy Act of 2005 prompted gasoline refiners to replace MTBE with ethanol. [ 8 ]
Harford County , Maryland, found MTBE in wells near several of its filling stations beginning in 2004. [ 9 ] This led the state of Maryland to move toward banning MTBE. [ 10 ] [ 11 ]
In 2005, an Exxon-Mobil station in Fallston , Maryland, was found to be leaking MTBE into the local wells. The discovery resulted in the station being abruptly closed. [ 12 ] Exxon-Mobil referred to the closure as a "business decision". [ 12 ] Following the closure, MTBE levels in the area dropped. [ 13 ]
In September 2004, Harford County placed a six-month moratorium on construction of filling stations. [ 14 ]
In 2006, the wells of a neighborhood in Jacksonville , Maryland, were contaminated by a spill of 26,000 gallons of gasoline from an Exxon-Mobil station in the area, resulting in an ongoing court battle. [ 15 ] [ 16 ] The suit has been filed by the state of Maryland 's Department of the Environment on behalf of the area's residents, seeking millions of dollars in damages from Exxon-Mobil. [ 17 ] Many residents also filed their own separate lawsuits. [ 18 ]
The case began in 2006, when a gasoline tank sprang a leak that was not detected for 34 days. Testing of 120 wells resulted in dangerously high levels of MTBE being found. [ 19 ] Residents were put in danger by the spill, and in order to prevent further health problems, they required bottled water for cooking, drinking, and brushing teeth. [ 20 ] Residents of Jacksonville continue to use bottled water for all activities despite having MTBE filters and alarms installed in their homes. Home values also dropped as a result of the spill. [ 21 ]
In September 2008, Exxon-Mobil settled the case with the state by agreeing to pay a $4 million fine, and face an additional $1 million in penalties annually if they did not work to clean up the spill. [ 22 ]
In March 2009, a jury awarded $150 million in damages to some of the area's residents. The jury did not assess any punitive damages in the case, finding that Exxon Mobil did not act fraudulently. [ 23 ] A separate case including over 150 property owners as plaintiffs began in early 2011. Punitive damages were awarded to the second group of plaintiffs on the basis that Exxon acted fraudulently; however, this decision was later reversed. [ 24 ] [ 25 ]
In 1995, high levels of MTBE were unexpectedly discovered in the water wells of Santa Monica , California, and the U.S. Geological Survey reported detections. [ 26 ] Subsequent U.S. findings indicate tens of thousands of contaminated sites in water wells distributed across the country. In terms of toxicity alone, MTBE is not classified as an environmental hazard, but it imparts an unpleasant taste to water even at very low concentrations. The maximum contaminant level of MTBE in drinking water has not yet been established by the United States Environmental Protection Agency (EPA). The leakage problem is partially attributed to the lack of effective regulations for underground storage tanks, but spillage from overfilling is also a contributor. Of the ingredients in unleaded gasoline, MTBE is the most water-soluble. When dissolved in groundwater, MTBE will lead the contaminant plume, with the remaining components such as benzene and toluene following. Thus the discovery of MTBE in public groundwater wells indicates that the contaminant source was a gasoline release. Some claim that the criticism of MTBE and its subsequent decline in use are more a product of its easy detectability (taste) at extremely low concentrations (ppb) than of its toxicity. The MTBE concentrations used in the EU (usually 1.0–1.6%) and allowed (maximum 5%) in Europe are lower than in California. [ 27 ]
Chevron , BP , and other oil companies agreed to settle with Santa Monica for $423 million on May 7, 2008. [ 28 ]
In 2000, EPA drafted plans to phase out the use of MTBE nationwide over four years. [ citation needed ] . Some states enacted MTBE prohibitions without waiting for federal restrictions. [ 7 ] California banned MTBE as a gasoline additive in 2002. [ 5 ] The State of New York banned the use of MTBE as a "fuel additive", effective in 2004. [ 6 ] MTBE use is still legal in the state for other industrial uses. [ 29 ]
The federal Energy Policy Act of 2005 removed the oxygenate requirement for reformulated gasoline and established a renewable fuel standard. [ 30 ] The lack of MTBE liability protection in the law also prompted refiners to substitute ethanol for MTBE as a gasoline additive. [ 31 ]
EPA issued a drinking water health advisory for MTBE, a guidance document for water utilities and the public, in 1997. [ 32 ] The Agency first listed MTBE in 1998 as a candidate for development of a national Maximum Contaminant Level (MCL) standard in drinking water. [ 33 ] EPA included MTBE on its most recent Contaminant Candidate List in 2022 but has not announced whether it will develop an MCL. [ 34 ] [ 35 ] EPA uses toxicity data in developing MCLs for public water systems . [ 36 ]
California established a state-level MCL for MTBE in 2000. [ 37 ] | https://en.wikipedia.org/wiki/MTBE_controversy |
MTBVAC is a candidate vaccine against tuberculosis in humans currently in clinical trials . It is based on a genetically modified form of the Mycobacterium tuberculosis pathogen isolated from humans. [ 1 ] Unlike the BCG vaccine , MTBVAC contains all the antigens present in the strains that infect humans.
The vaccine was constructed at the University of Zaragoza in the laboratory of the Mycobacterial Genetics group, in collaboration with Dr. Brigitte Gicquel of the Pasteur Institute in Paris. [ 2 ] Currently, the University of Zaragoza has an industrial partner: the Spanish biotechnology company BIOFABRI, belonging to the ZENDAL group, responsible for the industrial and clinical development of MTBVAC, studying its immunogenicity and safety in two Phase IIa trials in newborn babies and adults in South Africa. [ 3 ] For the clinical development of MTBVAC, the tuberculosis vaccine project receives advice and support from the European TBVI (since 2008) and, since 2016, from IAVI for clinical development in adults and adolescents.
The discovery of MTBVAC follows the principles of vaccination established by Louis Pasteur: isolation of the human pathogen , attenuation by rational inactivation of selected genes , protection assays in animal models, and evaluation in humans. [ 2 ]
The main advantage of using live vaccines based on rational attenuation of M. tuberculosis is their ability to keep the genetic repertoire encoding immunodominant antigens that are absent in BCG, [ 4 ] whereas chromosomal deletions in virulence genes provide assurance for safety and genetic stability. [ 5 ] Such vaccines are expected to safely induce more specific and longer lasting immune responses in humans that can provide protection against all forms of the disease. [ 2 ] This is the rationale that has been followed in the development of the live-attenuated MTBVAC.
The rational attenuation of MTBVAC was achieved by inactivation of the phoP and fadD26 genes, [ 2 ] following the international guidelines to progress live vaccines into clinical development. [ 6 ] Similar to BCG, which was conceived in the early 1900s as an attenuated strain of Mycobacterium bovis that causes tuberculosis (TB) in cows and is transmitted to humans mainly through ingestion of unpasteurized milk, the discovery of MTBVAC began with an unusual outbreak of a multidrug-resistant M. bovis killing more than 100 HIV-positive individuals in Spain in the early 1990s. [ 7 ] From that outbreak, Professor Carlos Martín Montañés and his group identified the phoP gene as a key player in M. tuberculosis virulence . [ 8 ] The phoP gene encodes the PhoP transcription factor of the PhoP / PhoR two-component system essential for the virulence of M. tuberculosis . [ 8 ] [ 9 ] PhoP was shown to regulate between 2–4% of M. tuberculosis genes, most of which are involved in well-known virulence pathways of the tuberculosis bacillus. [ 10 ] [ 11 ] [ 12 ] As a consequence of the phoP inactivation, MTBVAC can produce but is unable to export ESAT-6, which results in virulence attenuation but maintains the epitopes present in this immunogenic protein. [ 13 ] Other relevant virulence genes regulated by PhoP are involved in the biosynthesis of polyketide -derived acyltrehaloses (DAT, PAT) and sulfolipids (SL), which are first-line lipid constituents of the cell wall that are thought to play a role in host immune modulation, interfering with the recognition of M. tuberculosis by the immune system. [ 14 ] [ 15 ] Finally, phoP is able to modulate protein secretion, and PhoP inactivation in MTBVAC results in increased secretion of immunogenic proteins such as the Ag85 complex. [ 10 ] The fadD26 gene is the first gene of an operon required for the biosynthesis and export of phthiocerol dimycocerosates (PDIM), the main virulence-associated cell-wall lipids of M. tuberculosis . [ 16 ] [ 17 ] Recent evidence indicates that PDIM are involved in rupturing the phagosomal membrane in concert with ESAT-6. [ 18 ] [ 2 ]
Rigorous preclinical studies in different animal models relevant to tuberculosis (in mice, guinea pigs and non-human primates) conducted between 2001 and 2011 have shown adequate attenuation, safety and improved immunogenicity and protective efficacy against exposure to M. tuberculosis in comparison with BCG, thus fulfilling regulatory WHO guidelines and the Geneva consensus requirements for progressing live mycobacterial vaccines to first-in-human Phase 1 clinical evaluation. [ 2 ] [ 3 ] [ 6 ] A successful trial in rhesus macaques was reported in 2021. [ 19 ]
The safety and immunogenicity of new vaccines need to be determined in a reduced number of healthy volunteers. Phase 1 studies (can be first-in-human) to define the safety of different ascending doses are usually conducted in small groups of no more than 100 volunteers per trial. These are followed by medium-sized Phase 2 trials (can be > 100) to corroborate safety and determine the optimal therapeutic dose (detailed immunogenicity profile in the case of new vaccines) that helps select the final dose for Phase 3 efficacy evaluation. [ 3 ]
The MTBVAC clinical development started with a first-in-human study in healthy adult volunteers in Lausanne, Switzerland (NCT02013245); [ 20 ] followed by one additional Phase 1 study in healthy newborns in South Africa in collaboration with South African TuBerculosis Vaccine Initiative (SATVI) (NCT02729571) [ 21 ] to corroborate the safety and greater immunogenic potential of MTBVAC in this age-group relative to BCG. Two dose-defining Phase 2 studies were conducted at SATVI covering adults with and without previous exposure to M. tuberculosis (NCT02933281) (ended in Sep 2021) and healthy newborns (NCT03536117) that will be finalized in March 2022. [ 3 ]
Data from the Phase II clinical trials will help define the final (safest and most immunogenic) dose of MTBVAC, triggering the initiation of a multi-center Phase 3 efficacy trial in newborn babies by the second quarter of 2022. Supported by the European & Developing Countries Clinical Trials Partnership (EDCTP funding), this Phase 3 trial will encompass TB-endemic regions of Sub-Saharan Africa, including South Africa, Madagascar and Senegal (registered on ClinicalTrials.gov, NCT04975178). [ 3 ] [ 22 ] [ 23 ] | https://en.wikipedia.org/wiki/MTBVAC |
MTConnect is a manufacturing technical standard to retrieve process information from numerically controlled machine tools . As explained by a member of the team that developed it, [ 1 ] "This standard specifies the open-source, royalty-free communications protocol based on XML and HTTP Internet technology for real-time data sharing between shopfloor equipment such as machine tools and computer systems. MTConnect provides a common vocabulary with standardized definitions for the meaning of data that machine tools generate, making the data interpretable by software applications." [ 1 ] A simple, real-world example of how this tool is used to improve shop management is given by the same author. [ 1 ]
The initiative began as a result of lectures given by David Edstrom of Sun Microsystems and David Patterson , professor of Computer Science at the University of California, Berkeley (UCB) at the 2006 annual meeting of the Association for Manufacturing Technology (AMT). [ 2 ] The two lectures promoted an open communication standard to enable Internet connectivity to manufacturing equipment. [ 3 ] Initial development was carried out by a joint effort between the UCB Electrical Engineering and Computer Sciences (EECS) department, the UCB Mechanical Engineering (ME) department (both in the College of Engineering ) and the Georgia Institute of Technology , [ 4 ] using input from industry representatives. The resulting standard is available under royalty-free licensing terms. [ 5 ]
MTConnect is a protocol designed for the exchange of data between shop floor equipment and software applications used for monitoring and data analysis. MTConnect is referred to as a read-only standard, meaning that it only defines the extraction (reading) of data from control devices, not the writing of data to a control device. Freely available, open standards are used for all aspects of MTConnect. Data from shop floor devices is presented in XML format, and is retrieved from information providers, called Agents, using Hypertext Transfer Protocol (HTTP) as the underlying transport protocol . MTConnect provides a RESTful interface, which means the interface is stateless. No session must be established to retrieve data from an MTConnect Agent, and no logon or logoff sequence is required (unless overlying security protocols are added which do). Lightweight Directory Access Protocol (LDAP) is recommended for discovery services.
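The paragraph above describes the request/response pattern only in prose; as a rough illustration (not an excerpt from the standard), the following Python sketch polls an MTConnect Agent's current request over plain HTTP and prints the observations found in the XML response. The agent address is a placeholder, and the namespace-stripping parse is a simplification of a real MTConnectStreams document.

```python
# Minimal sketch of an MTConnect client: stateless HTTP GET of the agent's
# "current" snapshot, then a namespace-agnostic walk over the XML response.
import urllib.request
import xml.etree.ElementTree as ET

AGENT_URL = "http://agent.example.com:5000"  # hypothetical agent address

def fetch_current(agent_url):
    """Issue a stateless HTTP GET for the agent's 'current' device snapshot."""
    with urllib.request.urlopen(f"{agent_url}/current", timeout=10) as resp:
        return ET.fromstring(resp.read())

def iter_observations(root):
    """Yield (element type, data item name, timestamp, value) for each observation,
    ignoring XML namespace prefixes for simplicity."""
    for elem in root.iter():
        if elem.get("dataItemId") and elem.get("timestamp"):
            tag = elem.tag.split("}")[-1]               # strip "{namespace}"
            name = elem.get("name", elem.get("dataItemId"))
            yield tag, name, elem.get("timestamp"), (elem.text or "").strip()

if __name__ == "__main__":
    for tag, name, ts, value in iter_observations(fetch_current(AGENT_URL)):
        print(f"{ts}  {tag:<20} {name:<24} {value}")
```

A client built this way needs no session setup: each GET is independent, which is what the RESTful, stateless design described above implies.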
Version 1.0 was released in December 2008. [ 6 ]
The first public demonstration of MTConnect occurred at the International Manufacturing Technology Show (IMTS) held in Chicago , Illinois September 2008. [ 7 ] There, 25 industrial equipment manufacturers networked their machinery control systems , providing process information that could be retrieved from any web-enabled client connected to the network. [ 8 ]
Subsequent demonstrations occurred at EMO (the European machine tool show) in Milan , Italy in October 2009, [ 9 ] and the 2010 IMTS in Chicago. [ 10 ]
The MTConnect standard has three sections. The first section provides information on the protocol and structure of the XML documents via XML schemas . The second section specifies the machine tool components and the description of the available data. The third and last section specifies the organization of the data streams that can be provided from a manufacturing device. The MTConnect Institute is considering adding a fourth section to support mobile assets that include tools and work-holdings. [ 11 ]
MTConnect took an incremental approach to defining the requirements for manufacturing device communications. It does not exhaustively define every possible piece of data an application can collect from a manufacturing device, but instead works forward from business and research objectives to define the elements required to meet those needs. The standard catalogues important components and data items for metal-cutting devices. MTConnect provides an extensible XML schema to allow implementors to add custom data to meet their specific needs, while providing as much commonality as possible.
On September 16, 2010, The MTConnect Institute and the OPC Foundation announced cooperation between the respective organizations. [ 12 ]
The maintenance costs and productivity losses associated with unplanned downtime of machine tool components such as spindle bearings and ball screws could be reduced if one could take action proactively prior to failure. In addition, cutting tools and inserts are expensive to replace when they are still in good condition, but replacing the tools too late can be costly due to scrap and re-work. The proposed health monitoring application uses MTConnect to extract controller data and applies pattern recognition algorithms to assess the health condition of the spindle and machine tool axes. The health assessment approach is based on running a routine program each shift in which the most recent data patterns are compared to the baseline data patterns. An online tool condition monitoring module is also proposed; it uses controller data such as the spindle motor current, together with add-on sensors (vibration, acoustic emission), to accurately estimate and predict tool wear. With the added transparency of machine tool health information, one can take proactive action before significant downtime or productivity losses occur. | https://en.wikipedia.org/wiki/MTConnect |
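As a toy illustration of the per-shift baseline comparison described in the paragraph above (not part of the cited proposal), the following Python snippet compares recent spindle-load readings against a stored baseline and flags drift; the 3-sigma threshold and the sample values are arbitrary assumptions.

```python
# Compare the most recent spindle-load readings against a stored baseline and
# report how far the recent mean has drifted, in baseline standard deviations.
from statistics import mean, stdev

def drift_sigma(baseline, recent):
    """Return the drift of the recent mean from the baseline mean, in sigmas."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) / sigma if sigma else 0.0

baseline_load = [41.0, 40.5, 42.1, 41.7, 40.9, 41.3]  # per-shift averages (%), assumed
recent_load = [44.8, 45.2, 44.5]                      # latest shifts (%), assumed

score = drift_sigma(baseline_load, recent_load)
print(f"drift = {score:.1f} sigma -> {'inspect spindle' if score > 3.0 else 'OK'}")
```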
Available protein structures (PDB): 4JT6, 1AUE, 1FAP, 1NSG, 2FAP, 2GAQ, 2NPU, 2RSE, 3FAP, 4DRH, 4DRI, 4DRJ, 4FAP, 4JSN, 4JSP, 4JSV, 4JSX, 4JT5, 5FLC
Entrez Gene: 2475 (human); 56717 (mouse)
Ensembl: ENSG00000198793 (human); ENSMUSG00000028991 (mouse)
UniProt: P42345 (human); Q9JLN9 (mouse)
RefSeq (mRNA): NM_004958, NM_001386500, NM_001386501 (human); NM_020009 (mouse)
RefSeq (protein): NP_004949 (human); NP_064393 (mouse)
The mammalian target of rapamycin ( mTOR ), [ 5 ] also referred to as the mechanistic target of rapamycin , and sometimes called FK506-binding protein 12-rapamycin-associated protein 1 (FRAP1), is a kinase that in humans is encoded by the MTOR gene . [ 6 ] [ 7 ] [ 8 ] mTOR is a member of the phosphatidylinositol 3-kinase-related kinase family of protein kinases . [ 9 ]
mTOR links with other proteins and serves as a core component of two distinct protein complexes , mTOR complex 1 and mTOR complex 2 , which regulate different cellular processes. [ 10 ] In particular, as a core component of both complexes, mTOR functions as a serine/threonine protein kinase that regulates cell growth, cell proliferation , cell motility , cell survival, protein synthesis , autophagy , and transcription . [ 10 ] [ 11 ] As a core component of mTORC2, mTOR also functions as a tyrosine protein kinase that promotes the activation of insulin receptors and insulin-like growth factor 1 receptors . [ 12 ] mTORC2 has also been implicated in the control and maintenance of the actin cytoskeleton . [ 10 ] [ 13 ]
The study of TOR (Target Of Rapamycin) originated in the 1960s with an expedition to Easter Island (known by the island inhabitants as Rapa Nui ), with the goal of identifying natural products from plants and soil with possible therapeutic potential. In 1972, Suren Sehgal identified a small molecule, from the soil bacterium Streptomyces hygroscopicus , that he purified and initially reported to possess potent antifungal activity. He named it rapamycin , noting its original source and activity. [ 14 ] [ 15 ] Early testing revealed that rapamycin also had potent immunosuppressive and cytostatic anti-cancer activity. Rapamycin did not initially receive significant interest from the pharmaceutical industry until the 1980s, when Wyeth-Ayerst supported Sehgal's efforts to further investigate rapamycin's effect on the immune system. This eventually led to its FDA approval as an immunosuppressant following kidney transplantation. However, prior to its FDA approval, how rapamycin worked remained completely unknown.
The discovery of TOR and mTOR stemmed from independent studies of the natural product rapamycin by Joseph Heitman , Rao Movva, and Michael N. Hall in 1991; [ 16 ] by David M. Sabatini , Hediye Erdjument-Bromage, Mary Lui, Paul Tempst, and Solomon H. Snyder in 1994; [ 7 ] and by Candace J. Sabers, Mary M. Martin, Gregory J. Brunn, Josie M. Williams, Francis J. Dumont, Gregory Wiederrecht, and Robert T. Abraham in 1995. [ 8 ] In 1991, working in yeast, Hall and colleagues identified the TOR1 and TOR2 genes. [ 16 ] In 1993, Robert Cafferkey, George Livi, and colleagues, and Jeannette Kunz, Michael N. Hall , and colleagues independently cloned genes that mediate the toxicity of rapamycin in fungi, known as the TOR/DRR genes. [ 17 ] [ 18 ]
Rapamycin arrests fungal activity at the G1 phase of the cell cycle. In mammals, it suppresses the immune system by blocking the G1 to S phase transition in T-lymphocytes . [ 19 ] Thus, it is used as an immunosuppressant following organ transplantation. [ 20 ] Interest in rapamycin was renewed following the discovery of the structurally related immunosuppressive natural product FK506 (later called Tacrolimus) in 1987. In 1989–90, FK506 and rapamycin were determined to inhibit T-cell receptor (TCR) and IL-2 receptor signaling pathways, respectively. [ 21 ] [ 22 ] The two natural products were used to discover the FK506- and rapamycin-binding proteins , including FKBP12 , and to provide evidence that FKBP12–FK506 and FKBP12–rapamycin might act through gain-of-function mechanisms that target distinct cellular functions. These investigations included key studies by Francis Dumont and Nolan Sigal at Merck contributing to show that FK506 and rapamycin behave as reciprocal antagonists. [ 23 ] [ 24 ] These studies implicated FKBP12 as a possible target of rapamycin, but suggested that the complex might interact with another element of the mechanistic cascade. [ 25 ] [ 26 ]
In 1991, calcineurin was identified as the target of FKBP12-FK506. [ 27 ] That of FKBP12-rapamycin remained mysterious until genetic and molecular studies in yeast established FKBP12 as the target of rapamycin, and implicated TOR1 and TOR2 as the targets of FKBP12-rapamycin in 1991 and 1993, [ 16 ] [ 28 ] followed by studies in 1994 when several groups, working independently, discovered the mTOR kinase as its direct target in mammalian tissues. [ 6 ] [ 7 ] [ 20 ] Sequence analysis of mTOR revealed it to be the direct ortholog of proteins encoded by the yeast target of rapamycin 1 and 2 (TOR1 and TOR2 ) genes, which Joseph Heitman, Rao Movva, and Michael N. Hall had identified in August 1991 and May 1993. Independently, George Livi and colleagues later reported the same genes, which they called dominant rapamycin resistance 1 and 2 (DRR1 and DRR2) , in studies published in October 1993.
The protein, now called mTOR, was originally named FRAP by Stuart L. Schreiber and RAFT1 by David M. Sabatini; [ 6 ] [ 7 ] FRAP1 was used as its official gene symbol in humans. Because of these different names, mTOR, which had been first used by Robert T. Abraham, [ 6 ] was increasingly adopted by the community of scientists working on the mTOR pathway to refer to the protein and in homage to the original discovery of the TOR protein in yeast that was named TOR, the Target of Rapamycin, by Joe Heitman, Rao Movva, and Mike Hall. TOR was originally discovered at the Biozentrum and Sandoz Pharmaceuticals in 1991 in Basel, Switzerland, and the name TOR pays further homage to this discovery, as TOR means doorway or gate in German, and the city of Basel was once ringed by a wall punctuated with gates into the city, including the iconic Spalentor . [ 29 ] "mTOR" initially meant "mammalian target of rapamycin", but the meaning of the "m" was later changed to "mechanistic". [ 30 ] Similarly, with subsequent discoveries the zebra fish TOR was named zTOR, the Arabidopsis thaliana TOR was named AtTOR, and the Drosophila TOR was named dTOR. In 2009 the FRAP1 gene name was officially changed by the HUGO Gene Nomenclature Committee (HGNC) to mTOR, which stands for mechanistic target of rapamycin. [ 31 ]
The discovery of TOR and the subsequent identification of mTOR opened the door to the molecular and physiological study of what is now called the mTOR pathway and had a catalytic effect on the growth of the field of chemical biology, where small molecules are used as probes of biology.
mTOR integrates the input from upstream pathways , including insulin , growth factors (such as IGF-1 and IGF-2 ), and amino acids . [ 11 ] mTOR also senses cellular nutrient, oxygen, and energy levels. [ 32 ] The mTOR pathway is a central regulator of mammalian metabolism and physiology, with important roles in the function of tissues including liver, muscle, white and brown adipose tissue, [ 33 ] and the brain, and is dysregulated in human diseases, such as diabetes , obesity , depression , and certain cancers . [ 34 ] [ 35 ] Rapamycin inhibits mTOR by associating with its intracellular receptor FKBP 12. [ 36 ] [ 37 ] The FKBP12– rapamycin complex binds directly to the FKBP12-Rapamycin Binding (FRB) domain of mTOR, inhibiting its activity. [ 37 ]
Plants express the mechanistic target of rapamycin (mTOR) and have a TOR kinase complex. In plants, only the TORC1 complex is present, unlike in mammals, where the TORC2 complex is also formed. [ 38 ] Plant TOR proteins contain protein kinase and FKBP-rapamycin binding (FRB) domains that share a similar amino acid sequence with mammalian mTOR. [ 39 ]
Role of mTOR in plants
The TOR kinase complex is known to play a role in the metabolism of plants. The TORC1 complex turns on when plants are living in environmental conditions suitable for survival. Once it is activated, plant cells undergo particular anabolic reactions, including plant development, translation of mRNA and the growth of cells within the plant. However, activation of the TORC1 complex stops catabolic processes such as autophagy from occurring. [ 38 ] TOR kinase signaling in plants has been found to aid in senescence, flowering, root and leaf growth, embryogenesis, and activation of the meristem above the root cap. [ 40 ] mTOR has also been found to be highly involved in developing embryo tissue in plants. [ 39 ]
mTOR is the catalytic subunit of two structurally distinct complexes: mTORC1 and mTORC2. [ 41 ] The two complexes localize to different subcellular compartments, thus affecting their activation and function. [ 42 ] Upon activation by Rheb, mTORC1 localizes to the Ragulator-Rag complex on the lysosome surface where it then becomes active in the presence of sufficient amino acids. [ 43 ] [ 44 ]
mTOR Complex 1 (mTORC1) is composed of mTOR, regulatory-associated protein of mTOR ( Raptor ), mammalian lethal with SEC13 protein 8 ( mLST8 ) and the non-core components PRAS40 and DEPTOR . [ 45 ] [ 46 ] This complex functions as a nutrient/energy/redox sensor and controls protein synthesis. [ 11 ] [ 45 ] The activity of mTORC1 is regulated by rapamycin , insulin, growth factors, phosphatidic acid , certain amino acids and their derivatives (e.g., L -leucine and β-hydroxy β-methylbutyric acid ), mechanical stimuli, and oxidative stress . [ 45 ] [ 47 ] [ 48 ]
mTOR Complex 2 (mTORC2) is composed of MTOR, rapamycin-insensitive companion of MTOR ( RICTOR ), MLST8 , and mammalian stress-activated protein kinase interacting protein 1 ( mSIN1 ). [ 49 ] [ 50 ] mTORC2 has been shown to function as an important regulator of the actin cytoskeleton through its stimulation of F- actin stress fibers, paxillin , RhoA , Rac1 , Cdc42 , and protein kinase C α ( PKCα ). [ 50 ] mTORC2 also phosphorylates the serine/threonine protein kinase Akt/PKB on serine residue Ser473, thus affecting metabolism and survival. [ 51 ] Phosphorylation of Akt's serine residue Ser473 by mTORC2 stimulates Akt phosphorylation on threonine residue Thr308 by PDK1 and leads to full Akt activation. [ 52 ] [ 53 ] In addition, mTORC2 exhibits tyrosine protein kinase activity and phosphorylates the insulin-like growth factor 1 receptor (IGF-1R) and insulin receptor (InsR) on the tyrosine residues Tyr1131/1136 and Tyr1146/1151, respectively, leading to full activation of IGF-IR and InsR. [ 12 ]
Rapamycin ( Sirolimus ) inhibits mTORC1, resulting in the suppression of cellular senescence . [ 54 ] This appears to provide most of the beneficial effects of the drug (including life-span extension in animal studies). Suppression of insulin resistance by sirtuins accounts for at least some of this effect. [ 55 ] Impaired sirtuin 3 leads to mitochondrial dysfunction . [ 56 ]
Rapamycin has a more complex effect on mTORC2, inhibiting it only in certain cell types under prolonged exposure. Disruption of mTORC2 produces the diabetic-like symptoms of decreased glucose tolerance and insensitivity to insulin. [ 57 ]
The mTORC2 signaling pathway is less defined than the mTORC1 signaling pathway. The functions of the components of the mTORC complexes have been studied using knockdowns and knockouts and were found to produce the following phenotypes:
Decreased TOR activity has been found to increase life span in S. cerevisiae , C. elegans , and D. melanogaster . [ 72 ] [ 73 ] [ 74 ] [ 75 ] The mTOR inhibitor rapamycin has been confirmed to increase lifespan in mice. [ 76 ] [ 77 ] [ 78 ] [ 79 ] [ 80 ]
It is hypothesized that some dietary regimes, like caloric restriction and methionine restriction, cause lifespan extension by decreasing mTOR activity. [ 72 ] [ 73 ] Some studies have suggested that mTOR signaling may increase during aging, at least in specific tissues like adipose tissue, and rapamycin may act in part by blocking this increase. [ 81 ] An alternative theory is mTOR signaling is an example of antagonistic pleiotropy , and while high mTOR signaling is good during early life, it is maintained at an inappropriately high level in old age. Calorie restriction and methionine restriction may act in part by limiting levels of essential amino acids including leucine and methionine, which are potent activators of mTOR. [ 82 ] The administration of leucine into the rat brain has been shown to decrease food intake and body weight via activation of the mTOR pathway in the hypothalamus . [ 83 ]
According to the free radical theory of aging , [ 84 ] reactive oxygen species cause damage to mitochondrial proteins and decrease ATP production. Subsequently, via ATP sensitive AMPK , the mTOR pathway is inhibited and ATP-consuming protein synthesis is downregulated, since mTORC1 initiates a phosphorylation cascade activating the ribosome . [ 19 ] Hence, the proportion of damaged proteins is enhanced. Moreover, disruption of mTORC1 directly inhibits mitochondrial respiration . [ 85 ] These positive feedbacks on the aging process are counteracted by protective mechanisms: Decreased mTOR activity (among other factors) upregulates removal of dysfunctional cellular components via autophagy . [ 84 ]
mTOR is a key initiator of the senescence-associated secretory phenotype (SASP). [ 86 ] Interleukin 1 alpha (IL1A) is found on the surface of senescent cells where it contributes to the production of SASP factors due to a positive feedback loop with NF-κB. [ 87 ] [ 88 ] Translation of mRNA for IL1A is highly dependent upon mTOR activity. [ 89 ] mTOR activity increases levels of IL1A, mediated by MAPKAPK2 . [ 87 ] mTOR inhibition of ZFP36L1 prevents this protein from degrading transcripts of numerous components of SASP factors. [ 90 ]
Over-activation of mTOR signaling significantly contributes to the initiation and development of tumors, and mTOR activity has been found to be deregulated in many types of cancer including breast, prostate, lung, melanoma, bladder, brain, and renal carcinomas. [ 91 ] There are several reasons for constitutive activation. Among the most common are mutations in the tumor suppressor PTEN gene. PTEN phosphatase negatively affects mTOR signalling by interfering with the effect of PI3K , an upstream effector of mTOR. Additionally, mTOR activity is deregulated in many cancers as a result of increased activity of PI3K or Akt . [ 92 ] Similarly, overexpression of the downstream mTOR effectors 4E-BP1 , S6K1 , S6K2 and eIF4E leads to poor cancer prognosis. [ 93 ] Also, mutations in TSC proteins that inhibit the activity of mTOR may lead to a condition named tuberous sclerosis complex , which manifests as benign lesions and increases the risk of renal cell carcinoma . [ 94 ]
Increased mTOR activity has been shown to drive cell cycle progression and increase cell proliferation, mainly due to its effect on protein synthesis. Moreover, active mTOR supports tumor growth also indirectly by inhibiting autophagy . [ 95 ] Constitutively activated mTOR functions in supplying carcinoma cells with oxygen and nutrients by increasing the translation of HIF1A and supporting angiogenesis . [ 96 ] mTOR also aids in another metabolic adaptation of cancerous cells to support their increased growth rate: the activation of glycolytic metabolism . Akt2 , a substrate of mTOR, specifically of mTORC2 , upregulates expression of the glycolytic enzyme PKM2 , thus contributing to the Warburg effect . [ 97 ]
mTOR is implicated in the failure of a 'pruning' mechanism of the excitatory synapses in autism spectrum disorders. [ 98 ]
mTOR signaling intersects with Alzheimer's disease (AD) pathology in several aspects, suggesting its potential role as a contributor to disease progression. In general, findings demonstrate mTOR signaling hyperactivity in AD brains. For example, postmortem studies of human AD brain reveal dysregulation in PTEN, Akt, S6K, and mTOR. [ 99 ] [ 100 ] [ 101 ] mTOR signaling appears to be closely related to the presence of soluble amyloid beta (Aβ) and tau proteins, which aggregate and form two hallmarks of the disease, Aβ plaques and neurofibrillary tangles, respectively. [ 102 ] In vitro studies have shown Aβ to be an activator of the PI3K/AKT pathway , which in turn activates mTOR. [ 103 ] In addition, applying Aβ to N2K cells increases the expression of p70S6K, a downstream target of mTOR known to have higher expression in neurons that eventually develop neurofibrillary tangles. [ 104 ] [ 105 ] Chinese hamster ovary cells transfected with the 7PA2 familial AD mutation also exhibit increased mTOR activity compared to controls, and the hyperactivity is blocked using a gamma-secretase inhibitor. [ 106 ] [ 107 ] These in vitro studies suggest that increasing Aβ concentrations increases mTOR signaling; however, significantly large, cytotoxic Aβ concentrations are thought to decrease mTOR signaling. [ 108 ]
Consistent with data observed in vitro, mTOR activity and activated p70S6K have been shown to be significantly increased in the cortex and hippocampus of animal models of AD compared to controls. [ 107 ] [ 109 ] Pharmacologic or genetic removal of Aβ in animal models of AD eliminates the disruption in normal mTOR activity, pointing to the direct involvement of Aβ in mTOR signaling. [ 109 ] In addition, by injecting Aβ oligomers into the hippocampi of normal mice, mTOR hyperactivity is observed. [ 109 ] Cognitive impairments characteristic of AD appear to be mediated by the phosphorylation of PRAS-40, which detaches from mTOR when phosphorylated and thereby permits mTOR hyperactivity; inhibiting PRAS-40 phosphorylation prevents Aβ-induced mTOR hyperactivity. [ 109 ] [ 110 ] [ 111 ] Given these findings, the mTOR signaling pathway appears to be one mechanism of Aβ-induced toxicity in AD.
The hyperphosphorylation of tau proteins into neurofibrillary tangles is one hallmark of AD. p70S6K activation has been shown to promote tangle formation as well as mTOR hyperactivity through increased phosphorylation and reduced dephosphorylation. [ 104 ] [ 112 ] [ 113 ] [ 114 ] It has also been proposed that mTOR contributes to tau pathology by increasing the translation of tau and other proteins. [ 115 ]
Synaptic plasticity is a key contributor to learning and memory, two processes that are severely impaired in AD patients. Translational control, or the maintenance of protein homeostasis, has been shown to be essential for neural plasticity and is regulated by mTOR. [ 107 ] [ 116 ] [ 117 ] [ 118 ] [ 119 ] Both protein over- and under-production via mTOR activity seem to contribute to impaired learning and memory. Furthermore, given that deficits resulting from mTOR overactivity can be alleviated through treatment with rapamycin, it is possible that mTOR plays an important role in affecting cognitive functioning through synaptic plasticity. [ 103 ] [ 120 ] Further evidence for mTOR activity in neurodegeneration comes from recent findings demonstrating that eIF2α-P, an upstream target of the mTOR pathway, mediates cell death in prion diseases through sustained translational inhibition. [ 121 ]
Some evidence points to mTOR's role in reduced Aβ clearance as well. mTOR is a negative regulator of autophagy; [ 122 ] therefore, hyperactivity in mTOR signaling should reduce Aβ clearance in the AD brain. Disruptions in autophagy may be a potential source of pathogenesis in protein misfolding diseases, including AD. [ 123 ] [ 124 ] [ 125 ] [ 126 ] [ 127 ] [ 128 ] Studies using mouse models of Huntington's disease demonstrate that treatment with rapamycin facilitates the clearance of huntingtin aggregates. [ 129 ] [ 130 ] Perhaps the same treatment may be useful in clearing Aβ deposits as well.
Hyperactive mTOR pathways have been identified in certain lymphoproliferative diseases such as autoimmune lymphoproliferative syndrome (ALPS), [ 131 ] multicentric Castleman disease , [ 132 ] and post-transplant lymphoproliferative disorder (PTLD). [ 133 ]
mTORC1 activation is required for myofibrillar muscle protein synthesis and skeletal muscle hypertrophy in humans in response to both physical exercise and ingestion of certain amino acids or amino acid derivatives. [ 134 ] [ 135 ] Persistent inactivation of mTORC1 signaling in skeletal muscle facilitates the loss of muscle mass and strength during muscle wasting in old age, cancer cachexia , and muscle atrophy from physical inactivity . [ 134 ] [ 135 ] [ 136 ] mTORC2 activation appears to mediate neurite outgrowth in differentiated mouse neuro2a cells . [ 137 ] Intermittent mTOR activation in prefrontal neurons by β-hydroxy β-methylbutyrate inhibits age-related cognitive decline associated with dendritic pruning in animals, which is a phenomenon also observed in humans. [ 138 ]
Active mTORC1 is positioned on lysosomes . mTOR is inhibited [ 140 ] when the lysosomal membrane is damaged by various exogenous or endogenous agents, such as invading bacteria , membrane-permeant chemicals yielding osmotically active products (this type of injury can be modeled using membrane-permeant dipeptide precursors that polymerize in lysosomes), amyloid protein aggregates (see above section on Alzheimer's disease ) and cytoplasmic organic or inorganic inclusions including urate crystals and crystalline silica . [ 140 ] The process of mTOR inactivation following lysosomal/endomembrane damage is mediated by the protein complex termed GALTOR. [ 140 ] At the heart of GALTOR [ 140 ] is galectin-8 , a member of the β-galactoside binding superfamily of cytosolic lectins termed galectins , which recognizes lysosomal membrane damage by binding to the exposed glycans on the lumenal side of the delimiting endomembrane. Following membrane damage, galectin-8, which normally associates with mTOR under homeostatic conditions, no longer interacts with mTOR but instead binds to SLC38A9 , RRAGA / RRAGB , and LAMTOR1 , inhibiting Ragulator 's (LAMTOR1-5 complex) guanine nucleotide exchange function. [ 140 ]
TOR is a negative regulator of autophagy in general, best studied during response to starvation, [ 141 ] [ 142 ] [ 143 ] [ 144 ] [ 145 ] which is a metabolic response. During lysosomal damage however, mTOR inhibition activates autophagy response in its quality control function, leading to the process termed lysophagy [ 146 ] that removes damaged lysosomes. At this stage another galectin , galectin-3 , interacts with TRIM16 to guide selective autophagy of damaged lysosomes. [ 147 ] [ 148 ] TRIM16 gathers ULK1 and principal components (Beclin 1 and ATG16L1 ) of other complexes (Beclin 1- VPS34 - ATG14 and ATG16L1 - ATG5 - ATG12 ) initiating autophagy , [ 148 ] many of them being under negative control of mTOR directly such as the ULK1-ATG13 complex, [ 143 ] [ 144 ] [ 145 ] or indirectly, such as components of the class III PI3K (Beclin 1, ATG14 and VPS34) since they depend on activating phosphorylations by ULK1 when it is not inhibited by mTOR. These autophagy -driving components physically and functionally link up with each other integrating all processes necessary for autophagosomal formation: (i) the ULK1- ATG13 - FIP200/RB1CC1 complex associates with the LC3B / GABARAP conjugation machinery through direct interactions between FIP200/RB1CC1 and ATG16L1 , [ 149 ] [ 150 ] [ 151 ] (ii) the ULK1 -ATG13- FIP200/RB1CC1 complex associates with the Beclin 1 - VPS34 - ATG14 via direct interactions between ATG13 's HORMA domain and ATG14 , [ 152 ] (iii) ATG16L1 interacts with WIPI2 , which binds to PI3P , the enzymatic product of the class III PI3K Beclin 1-VPS34-ATG14. [ 153 ] Thus, mTOR inactivation, initiated through GALTOR [ 140 ] upon lysosomal damage, plus a simultaneous activation via galectin-9 (which also recognizes lysosomal membrane breach) of AMPK [ 140 ] that directly phosphorylates and activates key components ( ULK1 , [ 154 ] Beclin 1 [ 155 ] ) of the autophagy systems listed above and further inactivates mTORC1, [ 156 ] [ 157 ] allows for strong autophagy induction and autophagic removal of damaged lysosomes.
Additionally, several types of ubiquitination events parallel and complement the galectin-driven processes: ubiquitination of TRIM16-ULK1-Beclin-1 stabilizes these complexes to promote autophagy activation as described above, [ 148 ] and ATG16L1 has an intrinsic binding affinity for ubiquitin. [ 151 ] Furthermore, ubiquitination by a glycoprotein-specific FBXO27-endowed ubiquitin ligase of several damage-exposed glycosylated lysosomal membrane proteins such as LAMP1 , LAMP2 , GNS/ N-acetylglucosamine-6-sulfatase , TSPAN6/ tetraspanin-6 , PSAP/ prosaposin , and TMEM192/transmembrane protein 192 [ 158 ] may contribute to the execution of lysophagy via autophagic receptors such as p62/ SQSTM1 , which is recruited during lysophagy, [ 151 ] or via other functions yet to be determined.
Scleroderma , also known as systemic sclerosis , is a chronic systemic autoimmune disease characterised by hardening ( sclero ) of the skin ( derma ) that affects internal organs in its more severe forms. [ 159 ] [ 160 ] mTOR plays a role in fibrotic diseases and autoimmunity, and blockade of the mTORC pathway is under investigation as a treatment for scleroderma. [ 9 ]
A rare gain-of-function mutation causes Smith-Kingsmore syndrome . [ 161 ]
mTOR inhibitors, e.g. rapamycin , are already used to prevent transplant rejection .
Some articles have reported that rapamycin can inhibit mTORC1 so that the phosphorylation of GS (glycogen synthase) is increased in skeletal muscle. This discovery represents a potential novel therapeutic approach for glycogen storage diseases that involve glycogen accumulation in muscle.
There are two primary mTOR inhibitors used in the treatment of human cancers, temsirolimus and everolimus . mTOR inhibitors have found use in the treatment of a variety of malignancies, including renal cell carcinoma (temsirolimus) and pancreatic cancer , breast cancer , and renal cell carcinoma (everolimus). [ 162 ] The complete mechanism of these agents is not clear, but they are thought to function by impairing tumour angiogenesis and causing impairment of the G1/S transition . [ 163 ]
mTOR inhibitors may be useful for treating/preventing several age-associated conditions, [ 164 ] including neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease . [ 165 ] In elderly subjects (65 and older), short-term treatment with the mTOR inhibitors dactolisib and everolimus reduced the number of infections over the course of a year. [ 166 ]
Various natural compounds, including epigallocatechin gallate (EGCG), caffeine , curcumin , berberine , quercetin , resveratrol and pterostilbene , have been reported to inhibit mTOR when applied to isolated cells in culture. [ 167 ] [ 168 ] [ 169 ] As yet no high quality evidence exists that these substances inhibit mTOR signaling or extend lifespan when taken as dietary supplements by humans, despite encouraging results in animals such as fruit flies and mice. Various trials are ongoing. [ 170 ] [ 171 ]
Mechanistic target of rapamycin has been shown to interact with numerous binding partners. [ 172 ] | https://en.wikipedia.org/wiki/MTOR |
mTOR inhibitors are a class of drugs used to treat several human diseases, including cancer, autoimmune diseases, and neurodegeneration. They function by inhibiting the mammalian target of rapamycin (mTOR) (also known as the mechanistic target of rapamycin), which is a serine/threonine-specific protein kinase that belongs to the family of phosphatidylinositol-3 kinase (PI3K) related kinases (PIKKs). mTOR regulates cellular metabolism, growth, and proliferation by forming and signaling through two protein complexes , mTORC1 and mTORC2 . The most established mTOR inhibitors are so-called rapalogs (rapamycin and its analogs), which have shown tumor responses in clinical trials against various tumor types. [ 1 ]
The discovery of mTOR was made in 1994 while investigating the mechanism of action of its inhibitor , rapamycin . [ 2 ] [ 3 ] Rapamycin was first discovered in 1975 in a soil sample from Easter Island in the South Pacific , also known as Rapa Nui, from which its name is derived. [ 4 ] Rapamycin is a macrolide produced by the microorganism Streptomyces hygroscopicus , and it showed antifungal properties. Shortly after its discovery, immunosuppressive properties were detected, which later led to the establishment of rapamycin as an immunosuppressant. In the 1980s, rapamycin was also found to have anticancer activity, although the exact mechanism of action remained unknown until many years later. [ 2 ] [ 5 ] [ 6 ]
In the 1990s there was a dramatic change in this field due to studies on the mechanism of action of rapamycin and the identification of the drug target. [ 4 ] It was found that rapamycin inhibited cellular proliferation and cell cycle progression . Research on mTOR inhibition has since been a growing branch of science and has yielded promising results. [ 7 ]
In general, protein kinases are classified in two major categories based on their substrate specificity, protein tyrosine kinases and protein serine/threonine kinases . Dual-specificity kinases are a subclass of the tyrosine kinases. [ 8 ]
mTOR is a kinase within the family of phosphatidylinositol-3 kinase-related kinases (PIKKs) , [ 9 ] which is a family of serine/threonine protein kinases, with a sequence similarity to the family of lipid kinases, PI3Ks . [ 8 ] These kinases have different biological functions, [ 8 ] but are all large proteins with common domain structure. [ 9 ]
PIKKs have four domains at the protein level, which distinguish them from other protein kinases. From the N-terminus to the C-terminus , these domains are named FRAP-ATM-TRAAP (FAT), the kinase domain (KD), the PIKK-regulatory domain (PRD), and the FAT-C-terminal (FATC). [ 8 ] The FAT domain, consisting of four α-helices , is N-terminal to KD, but that part is referred to as the FKBP12-rapamycin-binding (FRB) domain, which binds the FKBP12-rapamycin complex. [ 8 ] The FAT domain consists of repeats, referred to as HEAT ( Huntingtin , Elongation factor 3 , A subunit of protein phosphatase 2A and TOR1). [ 9 ] Specific protein activators regulate the PIKK kinases but binding of them to the kinase complex causes a conformational change that increases substrate access to the kinase domain. [ 9 ]
Protein kinases have become popular drug targets. [ 10 ] They have been targeted for the discovery and design of small molecule inhibitors and biologics as potential therapeutic agents. Small-molecule inhibitors of protein kinases generally prevent either phosphorylation of proteins substrates or autophosphorylation of the kinase itself. [ 11 ]
It appears that growth factors , amino acids , ATP , and oxygen levels regulate mTOR signaling. Several downstream pathways that regulate cell-cycle progression, [ 12 ] translation initiation, transcriptional stress responses, [ 13 ] protein stability, and survival of cells signal through mTOR.
The serine/threonine kinase mTOR is a downstream effector of the PI3K/AKT pathway, and forms two distinct multiprotein complexes , mTORC1 and mTORC2 . [ 1 ] These two complexes have a separate network of protein partners, feedback loops , substrates , and regulators. [ 15 ] mTORC1 consists of mTOR and two positive regulatory subunits, raptor and mammalian LST8 ( mLST8 ), and two negative regulators, proline-rich AKT substrate 40 (PRAS40) and DEPTOR. [ 1 ] mTORC2 consists of mTOR, mLST8, mSin1 , protor, rictor , and DEPTOR. [ 16 ]
mTORC1 is sensitive to rapamycin but mTORC2 is considered to be resistant and is generally insensitive to nutrients and energy signals. mTORC2 is activated by growth factors , phosphorylates PKCα , AKT and paxillin , and regulates the activity of the small GTPase , Rac , and Rho related to cell survival, migration and regulation of the actin cytoskeleton .
The mTORC1 signaling cascade is activated by phosphorylated AKT and results in phosphorylation of S6K1 , and 4EBP1 , which lead to mRNA translation . [ 1 ]
Many human tumors occur because of dysregulation of mTOR signaling, which can confer higher susceptibility to inhibitors of mTOR. [ 17 ] Deregulation of multiple elements of the mTOR pathway, like PI3K amplification / mutation , PTEN loss of function, AKT overexpression, and S6K1, 4EBP1, and eIF4E overexpression, has been related to many types of cancer. Therefore, mTOR is an interesting therapeutic target for treating multiple cancers, either with mTOR inhibitors themselves or in combination with inhibitors of other pathways. [ 1 ]
Upstream, PI3K/AKT signalling is deregulated through a variety of mechanisms, including overexpression or activation of growth factor receptors , such as HER-2 (human epidermal growth factor receptor 2) and IGFR (insulin-like growth factor receptor), mutations in PI3K and mutations/amplifications of AKT. [ 1 ] Tumor suppressor phosphatase and tensin homologue deleted on chromosome 10 (PTEN) is a negative regulator of PI3K signaling. In many cancers the PTEN expression is decreased and may be downregulated through several mechanisms, including mutations , loss of heterozygosity , methylation , and protein instability. [ 16 ]
Downstream, the mTOR effectors S6 kinase 1 (S6K1), eukaryotic initiation factor 4E-binding protein 1 (4EBP1) and eukaryotic initiation factor 4E (eIF4E) are related to cellular transformation. [ 1 ] S6K1 is a key regulator of cell growth and also phosphorylates other important targets. Both eIF4E and S6K1 are involved in cellular transformation, and their overexpression has been linked to poor cancer prognosis. [ 16 ]
Since the discovery of mTOR, much research has been done on the subject, using rapamycin and rapalogs to understand its biological functions. [ 15 ] [ 18 ] The clinical results from targeting this pathway were not as straightforward as first thought. Those results have changed the course of clinical research in this field. [ 15 ]
Initially, rapamycin was developed as an antifungal drug against Candida albicans , Aspergillus fumigatus and Cryptococcus neoformans . [ 5 ] A few years later its immunosuppressive properties were detected. Later studies led to the establishment of rapamycin as a major immunosuppressant against transplant rejection , along with cyclosporine A . [ 2 ] Combining rapamycin with cyclosporine A enhanced rejection prevention in renal transplantation . Therefore, it was possible to use lower doses of cyclosporine, which minimized toxicity of the drug. [ 5 ]
In the 1980s the Developmental Therapeutic Branch of the National Cancer Institute (NCI) evaluated rapamycin and discovered that it had anticancer activity: it was not cytotoxic but had cytostatic activity against several human cancer types. [ 5 ] However, due to unfavorable pharmacokinetic properties, the development of mTOR inhibitors for the treatment of cancer was not successful at that time. [ 3 ] Since then, rapamycin has also been shown to be effective for preventing coronary artery re-stenosis and for the treatment of neurodegenerative diseases . [ 5 ]
The development of rapamycin as an anticancer agent began again in the 1990s with the discovery of temsirolimus (CCI-779). This novel soluble rapamycin derivative had a favorable toxicological profile in animals. More rapamycin derivatives with improved pharmacokinetics and reduced immunosuppressive effects have since been developed for the treatment of cancer . [ 5 ] These rapalogs include temsirolimus (CCI-779), everolimus (RAD001), and ridaforolimus (AP-23573), which are being evaluated in cancer clinical trials . [ 19 ] Rapamycin analogs have therapeutic effects similar to those of rapamycin. However, they have improved hydrophilicity and can be used for oral and intravenous administration . [ 4 ] In 2012, the National Cancer Institute listed more than 200 clinical trials testing the anticancer activity of rapalogs, either as monotherapy or as part of combination therapy, for many cancer types. [ 7 ]
Rapalogs , which are the first generation of mTOR inhibitors, have proven effective in a range of preclinical models. However, success in clinical trials has been limited to only a few rare cancers. [ 20 ] Animal and clinical studies show that rapalogs are primarily cytostatic , and therefore effective as disease stabilizers rather than for regression. [ 21 ] The response rate in solid tumors where rapalogs have been used as single-agent therapy has been modest. Due to partial mTOR inhibition, as mentioned before, rapalogs are not sufficient for achieving a broad and robust anticancer effect, at least when used as monotherapy . [ 19 ] [ 20 ] [ 22 ]
Another reason for the limited success is that there is a feedback loop between mTORC1 and AKT in certain tumor cells. It seems that mTORC1 inhibition by rapalogs fails to repress a negative feedback loop that results in phosphorylation and activation of AKT. [ 18 ] [ 23 ] These limitations have led to the development of the second generation of mTOR inhibitors. [ 7 ]
Rapamycin and rapalogs (rapamycin derivatives) are small molecule inhibitors , [ 24 ] which have been evaluated as anticancer agents. The rapalogs have a more favorable pharmacokinetic profile than rapamycin, the parent drug, [ 3 ] despite sharing the same binding sites for mTOR and FKBP12. [ 5 ]
The bacterial natural product rapamycin or sirolimus , [ 6 ] a cytostatic agent , has been used in combination therapy with corticosteroids and cyclosporine in patients who received kidney transplantation to prevent organ rejection both in the US [ 25 ] and Europe, [ 26 ] due to its unsatisfactory pharmacokinetic properties. [ 3 ] In 2003, the U.S. Food and Drug Administration approved sirolimus-eluting coronary stents, which are used in patients with narrowing of the coronary arteries , or so-called atherosclerosis . [ 27 ]
Rapamycin has recently been shown to be effective in inhibiting the growth of several human cancer and murine cell lines. [ 5 ] Rapamycin is the main mTOR inhibitor, but ridaforolimus /deforolimus (AP23573), everolimus (RAD001), and temsirolimus (CCI-779) are newly developed rapamycin analogs. [ 2 ]
The rapamycin analog temsirolimus (CCI-779) [ 2 ] is also a noncytotoxic agent which delays tumor proliferation.
Temsirolimus is a prodrug of rapamycin. It is approved by the U.S. Food and Drug Administration (FDA) [ 25 ] and the European Medicines Agency (EMA), [ 28 ] for the treatment of renal cell carcinoma (RCC). Temsirolimus has higher water solubility than rapamycin and is therefore administered by intravenous injection. [ 3 ] [ 6 ] It was approved on May 30, 2007, by FDA for the treatment of advanced RCC. [ 6 ]
Temsirolimus has also been used in a Phase I clinical trial in conjunction with neratinib , a small-molecule irreversible pan-HER tyrosine kinase inhibitor . This study enrolled patients being treated for HER2 -amplified breast cancer, HER2-mutant non-small-cell lung cancer, and other advanced solid tumors. While common toxicities included nausea , stomatitis , and anemia , responses were noted. [ 29 ]
Everolimus is the second novel Rapamycin analog. [ 2 ] Compared with the parent compound rapamycin , everolimus is more selective for the mTORC1 protein complex, with little impact on the mTORC2 complex. [ 30 ] mTORC1 inhibition by everolimus has been shown to normalize tumor blood vessels, to increase tumor-infiltrating lymphocytes , and to improve adoptive cell transfer therapy . [ 31 ]
From March 30, 2009, to May 5, 2011, the U.S. FDA approved everolimus for the treatment of advanced renal cell carcinoma after failure of treatment with sunitinib or sorafenib , subependymal giant cell astrocytoma (SEGA) associated with tuberous sclerosis (TS), and progressive neuroendocrine tumors of pancreatic origin (PNET). [ 32 ] In July and August 2012, two new indications were approved, for advanced hormone receptor-positive, HER2-negative breast cancer in combination with exemestane, and pediatric and adult patients with SEGA. [ 32 ] In 2009 and 2011, it was also approved throughout the European Union for advanced breast cancer, pancreatic neuroendocrine tumours, advanced renal cell carcinoma, [ 33 ] and SEGA in patients with tuberous sclerosis. [ 34 ]
Ridaforolimus (AP23573, MK-8669), or deforolimus, is another rapamycin analogue that is not a prodrug of sirolimus. [ 2 ] Like temsirolimus, it can be administered intravenously, and an oral formulation is being evaluated for the treatment of sarcoma . [ 3 ]
Umirolimus is an immunosuppressant used in drug-eluting stents. [ 35 ]
Zotarolimus is an immunosuppressant used in coronary drug-eluting stents. [ 36 ]
The second generation of mTOR inhibitors is known as ATP-competitive mTOR kinase inhibitors. [ 7 ] [ 37 ] mTORC1/mTORC2 dual inhibitors such as torin-1 , torin-2 and vistusertib, are designed to compete with ATP in the catalytic site of mTOR. They inhibit all of the kinase-dependent functions of mTORC1 and mTORC2 and block the feedback activation of PI3K/AKT signaling, unlike rapalogs, which only target mTORC1. [ 7 ] [ 18 ] Development of these drugs has reached clinical trials, although some, such as vistusertib, have been discontinued. [ 37 ] Like rapalogs, they decrease protein translation , attenuate cell cycle progression, and inhibit angiogenesis in many cancer cell lines and also in human cancer. In fact, they have been proven to be more potent than rapalogs. [ 7 ]
Theoretically, the most important advantages of these mTOR inhibitors are the considerable decrease of AKT phosphorylation upon mTORC2 blockade and, in addition, better inhibition of mTORC1. [ 15 ] However, some drawbacks exist. Even though these compounds have been effective in rapamycin-insensitive cell lines, they have shown only limited success in KRAS -driven tumors. This suggests that combinational therapy may be necessary for the treatment of these cancers. Another drawback is their potential toxicity . These facts have raised concerns about the long-term efficacy of these types of inhibitors. [ 7 ]
The close interaction of mTOR with the PI3K pathway has also led to the development of mTOR/PI3K dual inhibitors. [ 7 ] Compared with drugs that inhibit either mTORC1 or PI3K, these drugs have the benefit of inhibiting mTORC1, mTORC2, and all the catalytic isoforms of PI3K. Targeting both kinases at the same time reduces the upregulation of PI3K, which is typically produced by an inhibition of mTORC1. [ 15 ] The inhibition of the PI3K/mTOR pathway has been shown to potently block proliferation by inducing G1 arrest in different tumor cell lines. Strong induction of apoptosis and autophagy has also been seen. Despite promising results, there is preclinical evidence that some types of cancers may be insensitive to this dual inhibition. The dual PI3K/mTOR inhibitors are also likely to have increased toxicity. [ 7 ]
The studies of rapamycin as an immunosuppressive agent enabled us to understand its mechanism of action . [ 5 ] It inhibits T-cell proliferation and proliferative responses induced by several cytokines , including interleukin 1 (IL-1) , IL-2 , IL-3 , IL-4 , IL-6 , IGF , PDGF , and colony-stimulating factors (CSFs) . [ 5 ] Rapamycin and the rapalogs can target tumor growth both directly and indirectly. Their direct impact on cancer cells depends on the concentration of the drug and certain cellular characteristics. The indirect way is based on interaction with processes required for tumor angiogenesis . [ 5 ]
Rapamycin and rapalogs crosslink the immunophilin FK506 binding protein (FKBP-12, the target of tacrolimus ) through its methoxy group . The rapamycin-FKBP12 complex interferes with the FRB domain of mTOR. [ 5 ] [ 6 ] The molecular interaction between FKBP12, mTOR, and rapamycin can last for about three days (72 hours). The inhibition of mTOR blocks the binding of the accessory protein raptor (regulatory-associated protein of mTOR) to mTOR, which is necessary for downstream phosphorylation of S6K1 and 4EBP1 . [ 5 ] [ 22 ]
As a consequence, S6K1 is dephosphorylated, which reduces protein synthesis and decreases cell mortality and size. Rapamycin induces dephosphorylation of 4EBP1 as well, resulting in an increase in p27 and a decrease in cyclin D1 expression. That leads to late blockage of the G1/S cell cycle transition. Rapamycin has been shown to induce cancer cell death by stimulating autophagy or apoptosis , but the molecular mechanism of apoptosis in cancer cells has not yet been fully resolved. One suggested link between mTOR inhibition and apoptosis is the downstream target S6K1, which can phosphorylate BAD , a pro-apoptotic molecule, on Ser136. [ 5 ] That reaction breaks the binding of BAD to BCL-XL and BCL2 , mitochondrial death inhibitors, resulting in inactivation of BAD [ 5 ] and decreased cell survival. [ 6 ] Rapamycin has also been shown to induce p53 -independent apoptosis in certain types of cancer. [ 5 ]
Tumor angiogenesis relies on interactions between endothelial vascular growth factors, which can all activate PI3K/AKT/mTOR in endothelial cells, pericytes , or cancer cells. Examples of these growth factors are angiopoietin 1 (ANG1) , ANG 2, basic fibroblast growth factor (bFGF) , ephrin-B2 , vascular endothelial growth factor (VEGF) , and members of the tumor growth factor-β (TGFβ) superfamily. One of the major stimuli of angiogenesis is hypoxia, which results in activation of hypoxia-inducible transcription factors (HIFs) and expression of ANG2, bFGF, PDGF, VEGF, and VEGFR. Inhibition of HIF1α translation by preventing PDGF/PDGFR and VEGF/VEGFR signaling can result from mTOR inhibition. A G0-G1 cell-cycle blockage can be the consequence of inactivation of mTOR in hypoxia-activated pericytes and endothelial cells. [ 5 ]
There is some evidence that extended therapy with rapamycin may have effect on AKT and mTORC2 as well. [ 2 ] [ 38 ]
Pharmacologic down-regulation of the mTOR pathway during chemotherapy in a mouse model prevents activation of primordial follicles, preserves ovarian function, and maintains normal fertility using the clinically available inhibitors INK and RAD. In this way, it helps to maintain fertility while undergoing chemotherapy treatments. These mTOR inhibitors, when administered as a pretreatment or co-treatment with standard gonadotoxic chemotherapy, help to maintain ovarian follicles in their primordial state. [ 39 ]
mTOR promotes the protein synthesis required for synaptic plasticity . [ 40 ] [ 41 ] Studies in cell cultures and hippocampal slices indicate that mTOR inhibition reduces long-term potentiation . [ 41 ] mTOR activation can protect against certain neurodegeneration associated with certain disease conditions. [ 42 ] On the other hand, promotion of autophagy by mTOR inhibition may reduce cognitive decline associated with neurodegeneration. [ 40 ]
Moderate reduction of mTOR activity by 25-30% has been shown to improve brain function, suggesting that the relation between mTOR and cognition is optimized with intermediate doses (2.24 mg/kg/day in mice, human equivalent about 0.19 mg/kg/day [ 43 ] ), where very high or very low doses impair cognition. [ 44 ] Reduction of the inflammatory cytokine Interleukin 1 beta (IL-1β) in mice by mTOR inhibition (with rapamycin in doses of 20 mg/kg/day, human equivalent about 1.6 mg/kg/day [ 43 ] ) has been shown to enhance learning and memory. [ 45 ] Although IL-1β is required for memory, IL-1β normally increases with age, impairing cognitive function. [ 44 ]
The pipecolate region of the rapamycin structure seems necessary for rapamycin binding to FKBP12 . This step is required for the further binding of rapamycin to the mTOR kinase, which is the key enzyme in many biological actions of rapamycin. [ 46 ]
The high affinity of rapamycin binding to FKBP12 is explained by a number of hydrogen bonds through two different hydrophobic binding pockets, and this has been revealed by the X-ray crystal structure of the compound bound to the protein . The structural characteristics common to temsirolimus and sirolimus, namely the pipecolic acid , the tricarbonyl region from C13-C15, and the lactone functionalities, play the key role in binding to FKBP12. [ 19 ] [ 47 ]
The most important hydrogen bonds are the lactone carbonyl oxygen at C-21 to the backbone NH of Ile56 , amide carbonyl at C-15 to the phenolic group on the sidechain of Tyr82 , and the hydroxyl proton at the hemiketal carbon, C-13, to the sidechain of Asp37 . [ 47 ]
Structural changes to the rapamycin structure can affect binding to mTOR. This could include both direct and indirect binding as a part of binding to FKBP12. Interaction of the FKBP12-rapamycin complex with mTOR corresponds with conformational flexibility of the effector domain of rapamycin. This domain consists of molecular regions that make hydrophobic interactions with the FKB domain and triene region from C-1-C-6, methoxy group at C-7, and methyl groups at C-33, C-27 and C-25. All changes of the macrolide ring can have unpredictable effects on binding and therefore, make determination of SAR for rapalogs problematic. [ 47 ] [ 48 ]
Rapamycin contains no functional groups that ionize in the pH range 1-10 and is therefore rather insoluble in water. [ 24 ] Despite its effectiveness in preclinical cancer models, its poor water solubility, stability issues, and long elimination half-life made its parenteral use difficult, but the development of soluble rapamycin analogs overcame various barriers. [ 2 ]
Nonetheless, the rapamycin analogs that have been approved for human use are modified at C-43 hydroxyl group and show improvement in pharmacokinetic parameters as well as drug properties, for example, solubility. [ 48 ]
Rapamycin and temsirolimus have similar chemical structures and bind to FKBP12, though their mechanism of action differs. [ 19 ]
Temsirolimus is a dihydroxymethyl propionic acid ester of rapamycin, and its first derivative. [ 2 ] Therefore, it is more water-soluble, and due to its water solubility it can be given by intravenous formulation. [ 6 ] [ 19 ]
Everolimus has O-2 hydroxyethyl chain substitution and deforolimus has a phosphine oxide substitution at position C-43 in the lactone ring of rapamycin. [ 19 ]
In deforolimus ( ridaforolimus ), the C43 secondary alcohol moiety of the cyclohexyl group of rapamycin was substituted with phosphonate and phosphinate groups, preventing the high-affinity binding to mTOR and FKBP. Computational modelling studies helped the synthesis of the compound. [ 6 ]
Treatment with mTOR inhibitors can be complicated by adverse events. The most frequently occurring adverse events are stomatitis, rash, anemia, fatigue, hyperglycemia/hypertriglyceridemia, decreased appetite, nausea, and diarrhea. Additionally, interstitial lung disease is an adverse event of particular importance. mTORi-induced ILD often is asymptomatic (with ground glass abnormalities on chest CT) or mild symptomatic (with a non-productive cough), but can be very severe as well. Even fatalities have been described. Careful diagnosis and treatment, therefore, is essential. Recently, a new diagnostic and therapeutic management approach has been proposed. [ 49 ]
Identification of predictive biomarkers of efficacy for tumor types that are sensitive to mTOR inhibitors remains a major issue. [ 1 ] [ 50 ] Possible predictive biomarkers for tumor response to mTOR inhibitors, as have been described in glioblastoma , breast and prostate cancer cells, may be the differential expression of mTOR pathway proteins, PTEN , AKT , and S6. [ 1 ] However, these data are based on preclinical assays using in vitro cultured tumor cell lines, which suggest that the effects of mTOR inhibitors may be more pronounced in cancers displaying loss of PTEN function or PIK3CA mutations. However, the use of PTEN, PIK3CA mutations , and AKT-phospho status for predicting rapalog sensitivity has not been fully validated in the clinic. To date, attempts to identify biomarkers of rapalog response have been unsuccessful. [ 21 ]
Clinical and translational data suggest that sensitive tumor types, with adequate parameters and functional apoptosis pathways, might not need high doses of mTOR inhibitors to trigger apoptosis. In most cases, cancer cells might only be partially sensitive to mTOR inhibitors due to redundant signal transduction or lack of functional apoptosis signaling pathways. In situations like this, high doses of mTOR inhibitors might be required. In a recent study of patients with renal cell carcinoma , resistance to temsirolimus was associated with low levels of p-AKT and p-S6K1, which play a key role in mTOR activation. These data strongly suggest that a number of tumors with an activated PI3K/AKT/mTOR signaling pathway do not respond to mTOR inhibitors. For future studies, it is recommended to exclude patients with low or negative p-AKT levels from trials with mTOR inhibitors.
Current data is insufficient to predict sensitivity of tumors to rapamycin. However, the existing data allows us to characterize tumors that might not respond to rapalogs. [ 5 ]
These second-generation mTOR inhibitors bind to the ATP-binding site in the mTOR kinase domain required for the functions of both mTORC1 and mTORC2 , and result in downregulation of the mTOR signaling pathway. Due to the ability of PI3K and mTORC2 to regulate AKT phosphorylation, these two compounds play a key role in minimizing the feedback activation of AKT. [ 20 ]
Several so-called mTOR/PI3K dual inhibitors (TPdIs) have been developed; they are in early-stage preclinical trials and show promising results. Their development has benefited from previous studies with PI3K-selective inhibitors. [ 20 ] The activity of these small molecules differs from that of rapalogs in that they block both the mTORC1-dependent phosphorylation of S6K1 and the mTORC2-dependent phosphorylation of the AKT Ser473 residue. [ 1 ]
Dual mTOR/PI3K inhibitors include dactolisib , voxtalisib , BGT226, SF1126, PKI-587 and many more. For example, Novartis has developed the compound NVP-BEZ235 (dactolisib), which was reported to inhibit tumor growth in various preclinical models. It enhances the antitumor activity of some other drugs such as vincristine . [ 20 ] Dactolisib seems to inhibit effectively both wild-type and mutant forms of PIK3CA, which suggests its use against a wide range of tumor types. Studies have shown superior antiproliferative activity compared to rapalogs, and in vivo models have confirmed these potent antineoplastic effects of dual mTOR/PI3K inhibitors. [ 1 ] [ 7 ] These inhibitors target isoforms of PI3K (p110α, β and γ) along with the ATP-binding sites of mTORC1 and mTORC2, thereby blocking PI3K/AKT signaling, even in cancer types with mutations in this pathway. [ 7 ]
New mTOR-specific inhibitors came forth from screening and drug discovery efforts. These compounds block the activity of both mTOR complexes and are called mTORC1/mTORC2 dual inhibitors. [ 20 ] Compounds with these characteristics, such as sapanisertib (codenamed INK128), AZD8055, and AZD2014, have entered clinical trials .
A series of these mTOR kinase inhibitors have been studied. Their structure is derived from the morpholino pyrazolopyrimidine scaffold. [ 20 ] [ 22 ] [ 51 ] Improvements to this type of inhibitor have been made by exchanging the morpholines with bridged morpholines in pyrazolopyrimidine inhibitors, and the results showed selectivity for mTOR increased by 26,000-fold. [ 22 ] [ 52 ]
Although the new generation of mTOR inhibitors holds great promise for anticancer therapy and is rapidly moving into clinical trials, there are many important issues that will determine their success in the clinic. First of all, predictive biomarkers for the benefit of these inhibitors are not available. It appears that genetic determinants predispose cancer cells to be sensitive or resistant to these compounds. Tumors that depend on the PI3K/mTOR pathway should respond to these agents, but it is unclear whether the compounds are effective in cancers with distinct genetic lesions. [ 20 ]
Inhibition of mTOR is a promising strategy for the treatment of a number of cancers. The limited clinical activity of selective mTORC1 agents has made them unlikely to have an impact in cancer treatment. The development of ATP-competitive catalytic inhibitors provides the ability to block both mTORC1 and mTORC2. [ 53 ]
The limitations of currently available rapalogs have led to new approaches to mTOR targeting. Studies suggest that mTOR inhibitors may have anticancer activity in many cancer types, such as RCC , neuroendocrine tumors , breast cancer , hepatocellular carcinoma , sarcoma , and large B-cell lymphoma . [ 3 ] One major limitation for the development of mTOR inhibition therapy is that biomarkers are not presently available to predict which patients will respond to them. A better understanding of the molecular mechanisms involved in the response of cancer cells to mTOR inhibitors is still required for this to become possible. [ 7 ]
A way to overcome resistance and improve the efficacy of mTOR-targeting agents may be the stratification of patients and the selection of drug combination therapies. This may lead to more effective and personalized cancer therapy. [ 1 ] [ 7 ] Although further research is needed, mTOR targeting still remains an attractive and promising therapeutic option for the treatment of cancer. [ 7 ] | https://en.wikipedia.org/wiki/MTOR_inhibitors |
The MTV-1 Micro TV was the second model of a near pocket-sized television . The first was the Panasonic IC model TR-001 introduced in 1970. The MTV-1 was developed by Clive Sinclair ( Sinclair Radionics Ltd ). It was shown to the public at trade shows in London and Chicago in January, 1977, and released for sale in 1978. Development spanned 10 years and included £ 1.6 million from the UK government in 1976.
The MTV-1 used an AEG Telefunken 2-inch (5.1 cm) black-and-white, electrostatic deflection cathode ray tube (CRT) and included a rechargeable 4- AA -cell NiCad battery pack. It measured 4×6.25×1.625 inches (101.6×158.8×41.3 mm) and weighed 28 ounces (790 g). It was able to receive either PAL or NTSC transmissions on VHF or UHF , the world's first multi-standard TV. A Welsh company, Wolsey Electronics, manufactured it for Sinclair. Custom ICs made by Texas Instruments and Sinclair contributed to its small size and low power consumption.
The original US$395 (about £205 [ 1 ] ) price tag proved to be too high to sell many of them, and Sinclair lost over £1.8 million in 1978, eventually selling its remaining inventory to liquidators at greatly reduced prices.
The MTV-1B, released later in 1978 at the much lower price of £99 , was able to receive only System I UHF signals. | https://en.wikipedia.org/wiki/MTV-1 |
This is a glossary of terms common in multi-user dungeon (MUD) multiplayer virtual worlds .
See wizard | https://en.wikipedia.org/wiki/MUD_terminology |
The MUPID (short for "Mehrzweck Universell Programmierbarer Intelligenter Decoder" in German) or MUPID A320 was an early home computer like system introduced in 1981, designed and invented by Hermann Maurer at TU Graz , Austria [ 1 ] to be used as a Bildschirmtext terminal , but it was also capable of being used as a stand-alone computer. [ 2 ]
It had a Zilog Z80 microprocessor and came with BASIC as operating system, 128 KB of RAM , a V.24 1200/75 baud modem , audio input/output for tape recorder , a parallel printer interface and an optional external floppy drive unit. At the time it excelled in having advanced color graphic capabilities. [ 3 ]
There were several model variations: the MUPID C2D (square case with separate keyboard ) and MUPID C2D2 were available in Germany , the MUPID C2A2 or " Komfort MUPID " (later models) were available in Austria. [ 3 ]
The Mupid was also sold under other brands, as the Grundig PTC 100 (C2D2 variation in a different color) and the Siemens T3110 (C2D variation). [ 3 ]
A card for older IBM PC compatibles with an ISA slot was developed that gave a PC the same graphics capabilities as a MUPID. [ 4 ]
The original Mupid A320 was followed in 1983 by the MUPID II . [ 1 ] This version supported the CEPT Prestel standard, had a better keyboard, a 320 × 240 graphics mode, four voice sound, two DIN6 joystick connectors, a DIN8 tape interface, a DB25 modem, a DE9 serial connector and a DIN8 external disk drive connector. [ 5 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MUPID |
In the study of partial differential equations , the MUSCL scheme is a finite volume method that can provide highly accurate numerical solutions for a given system, even in cases where the solutions exhibit shocks, discontinuities, or large gradients. MUSCL stands for Monotonic Upstream-centered Scheme for Conservation Laws, and the term was introduced in a seminal paper by Bram van Leer (van Leer, 1979). In this paper he constructed the first high-order , total variation diminishing (TVD) scheme, in which he obtained second-order spatial accuracy.
The idea is to replace the piecewise constant approximation of Godunov's scheme by reconstructed states, derived from cell-averaged states obtained from the previous time-step. For each cell, slope limited, reconstructed left and right states are obtained and used to calculate fluxes at the cell boundaries (edges). These fluxes can, in turn, be used as input to a Riemann solver , following which the solutions are averaged and used to advance the solution in time. Alternatively, the fluxes can be used in Riemann-solver-free schemes, which are basically Rusanov-like schemes.
We will consider the fundamentals of the MUSCL scheme by examining the following simple first-order, scalar, 1D system, which is assumed to have a wave propagating in the positive direction:

$\frac{\partial u}{\partial t} + \frac{\partial F(u)}{\partial x} = 0,$
where $u$ represents a state variable and $F$ represents a flux variable.
The basic scheme of Godunov uses piecewise constant approximations for each cell, and results in a first-order upwind discretisation of the above problem with cell centres indexed as $i$. A semi-discrete scheme can be defined as follows,

$\frac{d u_i}{d t} + \frac{1}{\Delta x}\left[F(u_i) - F(u_{i-1})\right] = 0.$
This basic scheme is not able to handle shocks or sharp discontinuities as they tend to become smeared. An example of this effect is shown in the diagram opposite, which illustrates a 1D advective equation with a step wave propagating to the right. The simulation was carried out with a mesh of 200 cells and used a 4th order Runge–Kutta time integrator (RK4).
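For illustration, the following Python sketch (an illustrative assumption, not code taken from the cited references) advances this first-order upwind semi-discretisation for the linear advection equation $u_t + u_x = 0$, i.e. $F(u) = u$ with unit wave speed, using simple forward Euler steps on a periodic grid; it reproduces the smearing of an initially sharp step described above.

```python
import numpy as np

def upwind_rhs(u, dx):
    """First-order upwind right-hand side for u_t + u_x = 0 (wave speed +1).

    du_i/dt = -(F(u_i) - F(u_{i-1})) / dx with F(u) = u.
    """
    F = u                                 # flux for linear advection
    return -(F - np.roll(F, 1)) / dx      # periodic boundaries via np.roll

# Step wave on a periodic domain discretised with 200 cells
N, L = 200, 1.0
dx = L / N
x = (np.arange(N) + 0.5) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)

dt = 0.4 * dx                             # CFL number of 0.4
for _ in range(int(0.3 / dt)):            # advect the step to the right
    u = u + dt * upwind_rhs(u, dx)        # forward Euler time step

print(u.max(), u.min())                   # the step stays monotone but is smeared
```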
To provide higher resolution of discontinuities, Godunov's scheme can be extended to use piecewise linear approximations of each cell, which results in a central difference scheme that is second-order accurate in space. The piecewise linear approximations are obtained from

$u(x) = u_i + \frac{x - x_i}{x_{i+1} - x_i}\left(u_{i+1} - u_i\right), \qquad x \in \left(x_i, x_{i+1}\right].$
Thus, evaluating fluxes at the cell edges we get the following semi-discrete scheme

$\frac{d u_i}{d t} + \frac{1}{\Delta x}\left[F\left(u_{i+1/2}\right) - F\left(u_{i-1/2}\right)\right] = 0,$
where $u_{i+1/2}$ and $u_{i-1/2}$ are the piecewise approximate values of the cell edge variables, i.e.,

$u_{i+1/2} = \frac{1}{2}\left(u_i + u_{i+1}\right), \qquad u_{i-1/2} = \frac{1}{2}\left(u_{i-1} + u_i\right).$
Although the above second-order scheme provides greater accuracy for smooth solutions, it is not a total variation diminishing (TVD) scheme and introduces spurious oscillations into the solution where discontinuities or shocks are present. An example of this effect is shown in the diagram opposite, which illustrates a 1D advective equation $u_t + u_x = 0$, with a step wave propagating to the right. This loss of accuracy is to be expected due to Godunov's theorem . The simulation was carried out with a mesh of 200 cells and used RK4 for time integration.
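A minimal sketch of this unlimited piecewise-linear (central) variant is given below (illustrative Python under the same assumptions as the previous snippet); swapping it in for the upwind right-hand side above reproduces the oscillations just described.

```python
import numpy as np

def central_rhs(u, dx):
    """Second-order central RHS for u_t + u_x = 0 using edge averages.

    u_{i+1/2} = (u_i + u_{i+1}) / 2, so
    du_i/dt = -(F(u_{i+1/2}) - F(u_{i-1/2})) / dx with F(u) = u.
    """
    u_right = 0.5 * (u + np.roll(u, -1))   # u_{i+1/2}, periodic wrap
    u_left = 0.5 * (np.roll(u, 1) + u)     # u_{i-1/2}
    return -(u_right - u_left) / dx        # accurate on smooth data, but not TVD
```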
MUSCL-based numerical schemes extend the idea of using a linear piecewise approximation to each cell by using slope-limited left and right extrapolated states. This results in the following high-resolution, TVD discretisation scheme

$\frac{d u_i}{d t} + \frac{1}{\Delta x}\left[F\left(u^{*}_{i+1/2}\right) - F\left(u^{*}_{i-1/2}\right)\right] = 0.$
Alternatively, this can be written in the more succinct form

$\frac{d u_i}{d t} + \frac{1}{\Delta x}\left[F^{*}_{i+1/2} - F^{*}_{i-1/2}\right] = 0.$
The numerical fluxes $F^{*}_{i\pm 1/2}$ correspond to a nonlinear combination of first- and second-order approximations to the continuous flux function.
The symbols $u^{*}_{i+1/2}$ and $u^{*}_{i-1/2}$ represent scheme-dependent functions (of the limited extrapolated cell edge variables), i.e.,

$u^{*}_{i+1/2} = u^{*}_{i+1/2}\left(u^{L}_{i+1/2}, u^{R}_{i+1/2}\right), \qquad u^{*}_{i-1/2} = u^{*}_{i-1/2}\left(u^{L}_{i-1/2}, u^{R}_{i-1/2}\right),$
where, using downwind slopes:

$u^{L}_{i+1/2} = u_i + 0.5\,\phi\left(r_i\right)\left(u_{i+1} - u_i\right), \qquad u^{R}_{i+1/2} = u_{i+1} - 0.5\,\phi\left(r_{i+1}\right)\left(u_{i+2} - u_{i+1}\right)$
and

$u^{L}_{i-1/2} = u_{i-1} + 0.5\,\phi\left(r_{i-1}\right)\left(u_i - u_{i-1}\right), \qquad u^{R}_{i-1/2} = u_i - 0.5\,\phi\left(r_i\right)\left(u_{i+1} - u_i\right),$

with $r_i = \dfrac{u_i - u_{i-1}}{u_{i+1} - u_i}$ denoting the ratio of successive gradients.
The function $\phi\left(r_i\right)$ is a limiter function that limits the slope of the piecewise approximations to ensure the solution is TVD, thereby avoiding the spurious oscillations that would otherwise occur around discontinuities or shocks - see Flux limiter section. The limiter is equal to zero when $r \leq 0$ and is equal to unity when $r = 1$. Thus, the accuracy of a TVD discretization degrades to first order at local extrema, but tends to second order over smooth parts of the domain.
The algorithm is straightforward to implement. Once a suitable scheme for $F^{*}_{i+1/2}$ has been chosen, such as the Kurganov and Tadmor scheme (see below), the solution can proceed using standard numerical integration techniques.
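The slope-limited extrapolation can be sketched in a few lines of Python (again an illustrative assumption, not code from the cited papers); here the minmod limiter is used for `phi`, but any of the standard limiter functions could be substituted.

```python
import numpy as np

def minmod(r):
    """Minmod limiter: phi(r) = max(0, min(1, r)); zero for r <= 0, one at r = 1."""
    return np.maximum(0.0, np.minimum(1.0, r))

def muscl_edge_states(u, phi=minmod, eps=1e-12):
    """Slope-limited left/right states at every interface i+1/2 on a periodic grid.

    u_L[i] = u_i     + 0.5 * phi(r_i)     * (u_{i+1} - u_i)
    u_R[i] = u_{i+1} - 0.5 * phi(r_{i+1}) * (u_{i+2} - u_{i+1})
    with r_i = (u_i - u_{i-1}) / (u_{i+1} - u_i).
    """
    du = np.roll(u, -1) - u                  # u_{i+1} - u_i
    r = (u - np.roll(u, 1)) / (du + eps)     # ratio of gradients; eps is a crude guard
    u_L = u + 0.5 * phi(r) * du              # extrapolate from cell i towards i+1/2
    u_R = np.roll(u, -1) - 0.5 * np.roll(phi(r), -1) * np.roll(du, -1)
    return u_L, u_R
```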
A precursor to the Kurganov and Tadmor (KT) central scheme (Kurganov and Tadmor, 2000) is the Nessyahu and Tadmor (NT) staggered central scheme (Nessyahu and Tadmor, 1990). It is a Riemann-solver-free, second-order, high-resolution scheme that uses MUSCL reconstruction. It is a fully discrete method that is straightforward to implement and can be used on scalar and vector problems, and it can be viewed as a Rusanov flux (also called the local Lax-Friedrichs flux) supplemented with high order reconstructions. The algorithm is based upon central differences with comparable performance to Riemann-type solvers when used to obtain solutions for PDEs describing systems that exhibit high-gradient phenomena.
The KT scheme extends the NT scheme and has a smaller amount of numerical viscosity than the original NT scheme. It also has the added advantage that it can be implemented as either a fully discrete or semi-discrete scheme. Here we consider the semi-discrete scheme.
The calculation is shown below:

$F^{*}_{i\pm 1/2} = \frac{1}{2}\left\{\left[F\left(u^{R}_{i\pm 1/2}\right) + F\left(u^{L}_{i\pm 1/2}\right)\right] - a_{i\pm 1/2}\left[u^{R}_{i\pm 1/2} - u^{L}_{i\pm 1/2}\right]\right\},$
where the local propagation speed, $a_{i\pm 1/2}$, is the maximum absolute value of the eigenvalue of the Jacobian of $F\left(u\left(x,t\right)\right)$ over cells $i, i\pm 1$, given by

$a_{i\pm 1/2}(t) = \max\left[\rho\left(\frac{\partial F\left(u_{i}(t)\right)}{\partial u}\right),\ \rho\left(\frac{\partial F\left(u_{i\pm 1}(t)\right)}{\partial u}\right)\right],$
and $\rho\left(\frac{\partial F\left(u\left(t\right)\right)}{\partial u}\right)$ represents the spectral radius of $\frac{\partial F\left(u\left(t\right)\right)}{\partial u}$.
Beyond these CFL related speeds, no characteristic information is required.
The above flux calculation is most frequently called the Lax-Friedrichs flux (though it is worth mentioning that such a flux expression does not appear in Lax, 1954 but rather in Rusanov, 1961).
An example of the effectiveness of using a high resolution scheme is shown in the diagram opposite, which illustrates the 1D advective equation $u_t + u_x = 0$, with a step wave propagating to the right. The simulation was carried out on a mesh of 200 cells, using the Kurganov and Tadmor central scheme with Superbee limiter and RK-4 for time integration. This result contrasts extremely well with the first-order upwind and second-order central difference results shown above. This scheme also provides good results when applied to sets of equations - see the results below for this scheme applied to the Euler equations. However, care has to be taken in choosing an appropriate limiter because, for example, the Superbee limiter can cause unrealistic sharpening for some smooth waves.
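For a scalar problem the KT interface flux can be sketched as follows (illustrative Python under the same assumptions as the earlier snippets); it relies on the hypothetical `muscl_edge_states` and `minmod` helpers from the previous sketch, and the local speed reduces to |dF/du| evaluated at the two reconstructed states.

```python
import numpy as np

def kt_rhs(u, dx, flux, dflux, phi):
    """Semi-discrete Kurganov-Tadmor RHS for u_t + F(u)_x = 0 on a periodic grid.

    F*_{i+1/2} = 0.5 * (F(u_R) + F(u_L)) - 0.5 * a_{i+1/2} * (u_R - u_L),
    a_{i+1/2}  = max(|F'(u_L)|, |F'(u_R)|).
    Assumes muscl_edge_states from the sketch above.
    """
    u_L, u_R = muscl_edge_states(u, phi)             # reconstructed states at i+1/2
    a = np.maximum(np.abs(dflux(u_L)), np.abs(dflux(u_R)))
    F_star = 0.5 * (flux(u_R) + flux(u_L)) - 0.5 * a * (u_R - u_L)
    return -(F_star - np.roll(F_star, 1)) / dx       # (F*_{i+1/2} - F*_{i-1/2}) / dx

# Example usage for the inviscid Burgers' equation, F(u) = u**2 / 2, F'(u) = u:
# rhs = kt_rhs(u, dx, flux=lambda q: 0.5 * q**2, dflux=lambda q: q, phi=minmod)
```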
The scheme can readily include diffusion terms, if they are present. For example, if the above 1D scalar problem is extended to include a diffusion term, we get
for which Kurganov and Tadmor propose the following central difference approximation,
Where,
Full details of the algorithm ( full and semi-discrete versions) and its derivation can be found in the original paper (Kurganov and Tadmor, 2000), along with a number of 1D and 2D examples. Additional information is also available in the earlier related paper by Nessyahu and Tadmor (1990).
Note: This scheme was originally presented by Kurganov and Tadmor as a 2nd order scheme based upon linear extrapolation . A later paper (Kurganov and Levy, 2000) demonstrates that it can also form the basis of a third order scheme. A 1D advective example and an Euler equation example of their scheme, using parabolic reconstruction (3rd order), are shown in the parabolic reconstruction and Euler equation sections below.
It is possible to extend the idea of linear-extrapolation to higher order reconstruction, and an example is shown in the diagram opposite. However, for this case the left and right states are estimated by interpolation of a second-order, upwind biased, difference equation. This results in a parabolic reconstruction scheme that is third-order accurate in space.
We follow the approach of Kermani (Kermani, et al., 2003), and present a third-order upwind-biased scheme, where the symbols $u^{*}_{i+1/2}$ and $u^{*}_{i-1/2}$ again represent scheme-dependent functions (of the limited reconstructed cell edge variables). But for this case they are based upon parabolically reconstructed states, i.e.,
and
Where κ {\displaystyle \kappa \ } = 1/3 and,
and the limiter function ϕ ( r ) {\displaystyle \phi \left(r\right)\ } , is the same as above.
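As a rough sketch of what such a parabolic ($\kappa = 1/3$) reconstruction can look like in code, the following Python function computes limited left and right edge states. The exact placement of the limiter and the ratio definitions follow one common MUSCL-$\kappa$ convention and may differ in detail from Kermani's formulation; the function and helper names are illustrative.

```python
import numpy as np

def kappa_reconstruct(u, limiter, kappa=1.0 / 3.0, eps=1e-12):
    """Limited kappa-scheme (parabolic for kappa = 1/3) edge states.

    Returns uL[i], uR[i]: the left and right states at interface i+1/2,
    using periodic wrap-around at the domain ends for simplicity.
    """
    d_minus = u - np.roll(u, 1)        # u_i - u_{i-1}
    d_plus = np.roll(u, -1) - u        # u_{i+1} - u_i
    r = d_minus / (d_plus + eps)       # smoothness ratio in cell i
    phi = limiter(r)

    # Left state at i+1/2, extrapolated from cell i.
    uL = u + 0.25 * phi * ((1.0 - kappa) * d_minus + (1.0 + kappa) * d_plus)

    # Right state at i+1/2, extrapolated from cell i+1 (same formula, shifted).
    d_minus_p = np.roll(d_minus, -1)   # u_{i+1} - u_i
    d_plus_p = np.roll(d_plus, -1)     # u_{i+2} - u_{i+1}
    r_p = d_plus_p / (d_minus_p + eps)
    phi_p = limiter(r_p)
    uR = np.roll(u, -1) - 0.25 * phi_p * ((1.0 - kappa) * d_plus_p + (1.0 + kappa) * d_minus_p)
    return uL, uR

def van_albada2(r):
    # Alternative form of the van Albada limiter, phi(r) = 2r / (1 + r^2).
    return 2.0 * r / (1.0 + r * r)
```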
Parabolic reconstruction is straightforward to implement and can be used with the Kurganov and Tadmor scheme in lieu of the linear extrapolation shown above. This has the effect of raising the spatial accuracy of the KT scheme to 3rd order. It performs well when solving the Euler equations, see below. This increase in spatial order has certain advantages over 2nd order schemes for smooth solutions; however, for shocks it is more dissipative (compare the diagram opposite with the above solution obtained using the KT algorithm with linear extrapolation and Superbee limiter). This simulation was carried out on a mesh of 200 cells using the same KT algorithm but with parabolic reconstruction. Time integration was by RK-4, and the alternative form of the van Albada limiter, $\phi _{va}(r)=\frac{2r}{1+r^{2}}$, was used to avoid spurious oscillations.
For simplicity we consider the 1D case without heat transfer and without body force. Therefore, in conservation vector form, the general Euler equations reduce to
$$\frac{\partial \mathbf {U} }{\partial t}+\frac{\partial \mathbf {F} }{\partial x}=0,$$
where
$$\mathbf {U} ={\begin{pmatrix}\rho \\\rho u\\E\end{pmatrix}},\qquad \mathbf {F} ={\begin{pmatrix}\rho u\\\rho u^{2}+p\\u\left(E+p\right)\end{pmatrix}},$$
and where $\mathbf {U}$ is a vector of states and $\mathbf {F}$ is a vector of fluxes.
The equations above represent conservation of mass , momentum , and energy . There are thus three equations and four unknowns: $\rho$ (density), $u$ (fluid velocity), $p$ (pressure) and $E$ (total energy). The total energy is given by
$$E=\rho e+\frac{1}{2}\rho u^{2},$$
where $e$ represents specific internal energy.
In order to close the system an equation of state is required. One that suits our purpose is
$$p=\rho \left(\gamma -1\right)e,$$
where $\gamma$ is equal to the ratio of specific heats $\left[c_{p}/c_{v}\right]$ for the fluid.
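To make the later steps concrete, a minimal Python helper for the 1D Euler system could look as follows. The array layout (rows $\rho$, $\rho u$, $E$), the value of $\gamma$ and the function names are illustrative choices, not part of the original presentation.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats, a typical value for air

def pressure(U, gamma=GAMMA):
    """Pressure from the conserved variables U = [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    return (gamma - 1.0) * (E - 0.5 * rho * u ** 2)   # p = (gamma - 1) * rho * e

def euler_flux(U, gamma=GAMMA):
    """Flux vector F(U) = [rho*u, rho*u^2 + p, u*(E + p)] for the 1D Euler equations."""
    rho, mom, E = U
    u = mom / rho
    p = pressure(U, gamma)
    return np.array([mom, mom * u + p, u * (E + p)])

def sound_speed(U, gamma=GAMMA):
    """Speed of sound a = sqrt(gamma * p / rho), used for the local propagation speed."""
    rho = U[0]
    return np.sqrt(gamma * pressure(U, gamma) / rho)
```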
We can now proceed, as shown above in the simple 1D example, by obtaining the limited left and right extrapolated states for each state variable. Thus, for the density $\rho$ we obtain the edge values $\rho _{i+\frac{1}{2}}^{L}$ and $\rho _{i+\frac{1}{2}}^{R}$, and similarly for the momentum $\rho u$ and the total energy $E$. The velocity $u$ is calculated from the momentum, and the pressure $p$ is calculated from the equation of state.
Having obtained the limited extrapolated states, we then proceed to construct the edge fluxes using these values. With the edge fluxes known, we can now construct the semi-discrete scheme, i.e.,
$$\frac{d\mathbf {U} _{i}}{dt}=-\frac{\mathbf {F} _{i+\frac{1}{2}}^{*}-\mathbf {F} _{i-\frac{1}{2}}^{*}}{\Delta x}.$$
The solution can now proceed by integration using standard numerical techniques.
The above illustrates the basic idea of the MUSCL scheme. However, for a practical solution to the Euler equations, a suitable scheme (such as the above KT scheme) also has to be chosen in order to define the function $\mathbf {F} _{i\pm \frac{1}{2}}^{*}$.
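A minimal sketch of how the pieces fit together (limited edge states per conserved variable, a KT-type flux, and the semi-discrete update) is given below. It re-uses the `euler_flux`, `sound_speed` and `GAMMA` helpers sketched above, applies the reconstruction component-wise for simplicity (characteristic-wise reconstruction is also common), and is not a reproduction of the original Matlab code.

```python
import numpy as np

def kt_rhs(U, dx, reconstruct, gamma=GAMMA):
    """Semi-discrete right-hand side dU/dt for the 1D Euler equations.

    U has shape (3, n); `reconstruct` returns left/right edge states for a 1D array,
    for example `lambda v: kappa_reconstruct(v, van_albada2)` from the sketch above.
    Assumes euler_flux, sound_speed and GAMMA are defined as in the previous sketch.
    """
    # Component-wise limited reconstruction of the conserved variables.
    UL = np.empty_like(U)
    UR = np.empty_like(U)
    for k in range(U.shape[0]):
        UL[k], UR[k] = reconstruct(U[k])

    # Local propagation speed a_{i+1/2}: max of |u| + a over the two edge states
    # (|u| + a is the spectral radius of the Euler flux Jacobian).
    def max_speed(V):
        return np.abs(V[1] / V[0]) + sound_speed(V, gamma)
    a_edge = np.maximum(max_speed(UL), max_speed(UR))

    # Kurganov-Tadmor numerical flux at each interface i+1/2.
    F_edge = 0.5 * (euler_flux(UR, gamma) + euler_flux(UL, gamma)) - 0.5 * a_edge * (UR - UL)

    # Conservative semi-discrete update (periodic wrap-around for brevity).
    return -(F_edge - np.roll(F_edge, 1, axis=1)) / dx
```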
The diagram opposite shows a 2nd order solution to G A Sod's shock tube problem (Sod, 1978) using the above high resolution Kurganov and Tadmor Central Scheme (KT) with linear extrapolation and Ospre limiter. This clearly demonstrates the effectiveness of the MUSCL approach to solving the Euler equations. The simulation was carried out on a mesh of 200 cells using Matlab code (Wesseling, 2001), adapted to use the KT algorithm and Ospre limiter . Time integration was performed by a 4th order SHK (equivalent performance to RK-4) integrator. The following initial conditions ( SI units) were used:
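For readers who want to reproduce a comparable test, the classic non-dimensional Sod setup (an illustrative stand-in, which may differ from the exact SI values used for the figure) places a diaphragm at the domain mid-point with

$$\left(\rho _{L},\,u_{L},\,p_{L}\right)=\left(1.0,\ 0.0,\ 1.0\right),\qquad \left(\rho _{R},\,u_{R},\,p_{R}\right)=\left(0.125,\ 0.0,\ 0.1\right),\qquad \gamma =1.4 .$$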
The diagram opposite shows a 3rd order solution to G A Sod's shock tube problem (Sod, 1978) using the above high resolution Kurganov and Tadmor Central Scheme (KT) but with parabolic reconstruction and van Albada limiter. This again illustrates the effectiveness of the MUSCL approach to solving the Euler equations. The simulation was carried out on a mesh of 200 cells using Matlab code (Wesseling, 2001), adapted to use the KT algorithm with parabolic extrapolation and van Albada limiter . The alternative form of van Albada limiter, $\phi _{va}(r)=\frac{2r}{1+r^{2}}$, was used to avoid spurious oscillations. Time integration was performed by a 4th order SHK integrator. The same initial conditions were used.
Various other high resolution schemes that solve the Euler equations with good accuracy have also been developed.
More information on these and other methods can be found in the references below. An open source implementation of the Kurganov and Tadmor central scheme can be found in the external links below. | https://en.wikipedia.org/wiki/MUSCL_scheme |
MVDS is an acronym for terrestrial " Multipoint Video Distribution System ".
MVDS is currently part of the broader MWS (Multimedia Wireless System) standards.
In the European Union, MWS operates in the 10.7–13.5 GHz and 40.5–43.5 GHz frequency bands.
Research on the 42 GHz band has been carried out under the European Commission's EMBRACE (Efficient Millimetre Broadband Radio Access for Convergence and Evolution) initiative.
This article about wireless technology is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/MVDS |
In materials science , MXenes (pronounced "max-enes") are a class of two-dimensional inorganic compounds along with MBenes , that consist of atomically thin layers of transition metal carbides , nitrides , or carbonitrides. MXenes accept a variety of hydrophilic terminations. [ 1 ] [ 2 ] The first MXene was reported in 2011 at Drexel University's College of Engineering , and were named by combining the prefix "MAX" or "MX" (for MAX phases ), with "ene" by analogy to graphene . [ 1 ] [ 3 ]
As-synthesized MXenes prepared via HF etching have an accordion -like morphology, which can be referred to as multi-layer MXene (ML-MXene), or few-layer MXene (FL-MXene) given fewer than five layers. Because the surfaces of MXenes can be terminated by functional groups, the naming convention M n+1 X n T x can be used, where T is a functional group (e.g. O, F, OH, Cl). [ 2 ]
MXenes adopt three structures with one metal on the M site, as inherited from the parent MAX phases : M 2 C, M 3 C 2 , and M 4 C 3 . They are produced by selectively etching out the A element from a MAX phase or other layered precursor (e.g., Mo 2 Ga 2 C), which has the general formula M n+1 AX n , where M is an early transition metal, A is an element from group 13 or 14 of the periodic table, X is C and/or N, and n = 1–4. [ 4 ] MAX phases have a layered hexagonal structure with P6 3 /mmc symmetry, where M layers are nearly close-packed and X atoms fill octahedral sites. [ 2 ] Therefore, M n+1 X n layers are interleaved with the A element, which is metallically bonded to the M element. [ 5 ] [ 6 ]
Double transition metal MXenes can take two forms, ordered double transition metal MXenes or solid solution MXenes. For ordered double transition metal MXenes, they have the general formulas: M' 2 M"C 2 or M' 2 M" 2 C 3 where M' and M" are different transition metals. Double transition metal carbides that have been synthesized include Mo 2 TiC 2 , Mo 2 Ti 2 C 3 , Cr 2 TiC 2 , and Mo 4 VC 4 . In some of these MXenes (such as Mo 2 TiC 2 , Mo 2 Ti 2 C 3 , and Cr 2 TiC 2 ), the Mo or Cr atoms are on outer edges of the MXene and these atoms control electrochemical properties of the MXenes. [ 7 ]
For solid-solution MXenes, they have the general formulas: (M' 2−y M" y )C, (M' 3−y M" y )C 2 , (M' 4−y M" y )C 3 , or (M' 5−y M" y )C 4 , where the metals are randomly distributed throughout the structure in solid solutions leading to continuously tailorable properties. [ 8 ]
By designing a parent 3D atomic laminate, (Mo 2/3 Sc 1/3 ) 2 AlC, with in-plane chemical ordering, and by selectively etching the Al and Sc atoms, there is evidence for 2D Mo 1.33 C sheets with ordered metal divacancies. [ 9 ]
MXenes are typically synthesized by a top-down selective etching process. This synthetic route is scalable, with no loss or change in properties as the batch size is increased. [ 10 ] [ 11 ] Producing a MXene by etching a MAX phase occurs mainly by using strong etching solutions that contain a fluoride ion (F − ), such as hydrofluoric acid (HF), [ 2 ] ammonium bifluoride (NH 4 HF 2 ), [ 12 ] and a mixture of hydrochloric acid (HCl) and lithium fluoride (LiF). [ 13 ] For example, etching of Ti 3 AlC 2 in aqueous HF at room temperature causes the A (Al) atoms to be selectively removed, and the surface of the carbide layers becomes terminated by O, OH, and/or F atoms. [ 14 ] [ 15 ] MXene can also be obtained in Lewis acid molten salts, such as ZnCl 2 , and a Cl termination can be realized. [ 16 ] The Cl-terminated MXene is structurally stable up to 750 °C. [ 17 ] A general Lewis acid molten salt approach was proven viable for etching most MAX phase members (such as MAX-phase precursors with A elements Si, Zn, and Ga) using some other melts (CdCl 2 , FeCl 2 , CoCl 2 , CuCl 2 , AgCl, and NiCl 2 ). [ 18 ]
The MXene Ti 4 N 3 was the first nitride MXene reported, and is prepared by a different procedure than those used for carbide MXenes. To synthesize Ti 4 N 3 , the MAX phase Ti 4 AlN 3 is mixed with a molten eutectic fluoride salt mixture of lithium fluoride , sodium fluoride , and potassium fluoride and treated at elevated temperatures. This procedure etches out Al, yielding multilayered Ti 4 N 3 , which can further be delaminated into single and few layers by immersing the MXene in tetrabutylammonium hydroxide , followed by sonication. [ 19 ]
MXenes can also be synthesized directly or via CVD processes. [ 20 ] Recently, single-crystalline monolayer W5N6 has been successfully synthesized by CVD at wafer scale, [ 21 ] [ 22 ] which shows promise for future electronic applications of MXenes.
Since their first discovery, scientists have sought a more effective and efficient synthesis process. In a 2018 report, Peng et al. described a hydrothermal etching technique. [ 23 ] In this etching method, the MAX phase is treated in the solution of acid and salt under high pressure and temperature conditions. The method is more effective in producing MXene dots and nano-sheets. [ 24 ] Moreover, it is safer since there is no release of HF fumes during the etching process. [ 23 ]
2-1 MXenes: Ti 2 C, [ 25 ] V 2 C, [ 26 ] Nb 2 C, [ 26 ] Mo 2 C [ 27 ] Mo 2 N, [ 28 ] Ti 2 N, [ 29 ] (Ti 2−y Nb y )C, [ 8 ] (V 2−y Nb y )C, [ 8 ] (Ti 2−y V y )C, [ 8 ] W 1.33 C, [ 30 ] Nb 1.33 C, [ 31 ] Mo 1.33 C, [ 32 ] Mo 1.33 Y 0.67 C [ 32 ]
3-2 MXenes: Ti 3 C 2 , [ 1 ] Ti 3 CN, [ 25 ] Zr 3 C 2 [ 33 ] and Hf 3 C 2 [ 34 ]
4-3 MXenes: Ti 4 N 3 , [ 19 ] Nb 4 C 3 , [ 35 ] Ta 4 C 3 , [ 25 ] V 4 C 3 , [ 36 ] (Mo,V) 4 C 3 [ 37 ]
5-4 MXenes: Mo 4 VC 4 [ 4 ]
Double transition metal MXenes:
2-1-2 MXenes: Mo 2 TiC 2 , [ 7 ] Cr 2 TiC 2 , [ 7 ] Mo 2 ScC 2 [ 38 ]
2-2-3 MXenes: Mo 2 Ti 2 C 3 [ 7 ]
The surfaces of 2D transition-metal carbides can be chemically transformed to carry a variety of functional groups, such as O, NH, S, Cl, Se, Br, and Te surface terminations, or to be left bare. [ 39 ] The strategy involves installation and removal of the surface groups by performing substitution and elimination reactions in molten inorganic salts. [ 40 ] Covalent bonding of organic molecules to MXene surfaces has been demonstrated through reaction with aryl diazonium salts. [ 41 ] Moreover, heating and re-termination experiments on Ti 3 C 2 T x have shown that H 2 O, which bonds strongly to the Ti-Ti bridge sites, can be considered as a termination species. An O- and H 2 O-terminated Ti 3 C 2 T x surface restricts CO 2 adsorption to the Ti on-top sites and may reduce the ability to store positive ions, such as Li + and Na + . On the other hand, an O- and H 2 O-terminated Ti 3 C 2 T x surface shows the capability to split water. [ 42 ]
Since MXenes are layered solids and the bonding between the layers is weak, intercalation of guest molecules in MXenes is possible. Guest molecules include dimethyl sulfoxide (DMSO) , hydrazine , and urea . [ 2 ] For example, N 2 H 4 (hydrazine) can be intercalated into Ti 3 C 2 (OH) 2 with the molecules parallel to the MXene basal planes to form a monolayer. Intercalation increases the MXene c lattice parameter (a crystal structure parameter that is directly proportional to the distance between individual MXene layers), which weakens the bonding between MX layers. [ 2 ] Ions, including Li + , Pb 2+ , and Al 3+ , can also be intercalated into MXenes, either spontaneously or when a negative potential is applied to a MXene electrode. [ 43 ]
Ti 3 C 2 MXene produced by HF etching has accordion-like morphology with residual forces that keep MXene layers together preventing separation into individual layers. Although those forces are quite weak, ultrasound treatment results only in very low yields of single-layer flakes. For large scale delamination, DMSO is intercalated into ML-MXene powders under constant stirring to further weaken the interlayer bonding and then delaminated with ultrasound treatment. This results in large scale layer separation and formation of the colloidal solutions of the FL-MXene. These solutions can later be filtered to prepare MXene "paper" (similar to Graphene oxide paper ). [ 44 ]
For the case of Ti 3 C 2 T x and Ti 2 CT x , etching with concentrated hydrofluoric acid leads to an open, accordion-like morphology with a compact distance between layers (this is common for other MXene compositions as well). To be dispersed in suspension, the material must be pre-intercalated with something like dimethylsulfoxide. However, when etching is conducted with hydrochloric acid and LiF as a fluoride source, the morphology is more compact with a larger inter-layer spacing, presumably due to intercalated water. [ 13 ] The material has been found to be 'clay-like': as seen in clay materials (e.g. smectite clays and kaolinite), Ti 3 C 2 T x demonstrates the ability to expand its interlayer distance upon hydration and can reversibly exchange charge-balancing Group I and Group II cations. [ 45 ] Further, when hydrated, the MXene clay becomes pliable and can be molded into desired shapes, becoming a hard solid upon drying. Unlike most clays, however, MXene clay shows high electrical conductivity upon drying and is hydrophilic , and disperses into single-layer two-dimensional sheets in water without surfactants . Further, due to these properties, it can be rolled into free-standing, additive-free electrodes for energy storage applications.
MXenes can be solution-processed in aqueous or polar organic solvents, such as water, ethanol , dimethyl formamide , propylene carbonate , etc., [ 46 ] enabling various types of deposition via vacuum filtration, spin coating , spray coating, dip coating , and roll casting. [ 47 ] [ 48 ] [ 49 ] There have been studies conducted on ink-jet printing of additive free Ti 3 C 2 T x inks and inks composed of Ti 3 C 2 T x and proteins. [ 50 ] [ 51 ]
Lateral flake size often plays a role in the observed properties and there are several synthetic routes that produce a range of flake sizes. [ 47 ] [ 52 ] For example, when HF is used as an etchant, the intercalation and delamination step will require sonication to exfoliate material into single flakes, resulting in flakes that are several hundreds of nanometers in lateral size. This is beneficial for applications such as catalysis and select biomedical and electrochemical applications. However, if larger flakes are warranted, especially for electronic or optical applications, defect-free and large-area flakes are necessary. This can be achieved by the Minimally Intensive Layer Delamination (MILD) method, where the quantity of LiF relative to the MAX phase is scaled up, resulting in flakes that can be delaminated in situ when washing to neutral pH. [ 47 ]
Post-synthesis processing techniques to tailor the flake size have also been investigated, such as sonication, differential centrifugation, and density gradient centrifugation procedures. [ 53 ] [ 54 ] Post processing methods rely heavily on the as-produced flake size. Using sonication allows for a decrease in flake size from 4.4 μm (as-produced), to an average of 1.0 μm after 15 minutes of bath sonication (100 W, 40 kHz), down to 350 nm after 3 hours of bath sonication. By utilizing probe sonication (8 s ON, 2 s OFF pulse, 250 W), flakes were reduced to an average of 130 nm in lateral size. [ 53 ] Differential centrifugation , also known as cascading centrifugation, can be used to select flakes based on lateral size by increasing the centrifuge speed sequentially from low speeds (e.g. 1000 rpm) to high speeds (e.g., 10000 rpm) and collecting the sediment. When this was performed, "large" (800 nm), "medium" (300 nm) and "small" (110 nm) flakes can be obtained. [ 54 ] Density gradient centrifugation is also another method for selecting flakes based on lateral size, where a density gradient is employed in the centrifuge tube and flakes move through the centrifuge tube at different rates based on the flake density relative to the medium. In the case of sorting MXenes, a sucrose and water density gradient can be used from 10 to 66 w/v %. [ 53 ] Using density gradients allows for more mono-disperse distributions in flake sizes and studies show the flake distribution can be varied from 100 to 10 μm without employing sonication. [ 53 ]
With a high electron density at the Fermi level, MXene monolayers are predicted to be metallic. [ 1 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ] In MAX phases, N(E F ) is mostly M 3d orbitals, and the valence states below E F are composed of two sub-bands. One, sub-band A, made of hybridized Ti 3d-Al 3p orbitals, is near E F , and another, sub-band B, −10 to −3 eV below E F which is due to hybridized Ti 3d-C 2p and Ti 3d-Al 3s orbitals. Said differently, sub-band A is the source of Ti-Al bonds, while sub-band B is the source of Ti-C bond. Removing A layers causes the Ti 3d states to be redistributed from missing Ti-Al bonds to delocalized Ti-Ti metallic bond states near the Fermi energy in Ti 2 , therefore N(E F ) is 2.5–4.5 times higher for MXenes than MAX phases. [ 1 ] Experimentally, the predicted higher N(E F ) for MXenes has not been shown to lead to higher resistivities than the corresponding MAX phases. The energy positions of the O 2p (~6 eV) and the F 2p (~9 eV) bands from the Fermi level of Ti 2 CT x and Ti 3 C 2 T x both depend on the adsorption sites and the bond lengths to the termination species. [ 59 ] Significant changes in the Ti-O/F coordination are observed with increasing temperature in the heat treatment. [ 60 ]
Only MXenes without surface terminations are predicted to be magnetic. Cr 2 C, Cr 2 N, and Ta 3 C 2 are predicted to be ferromagnetic; Ti 3 C 2 and Ti 3 N 2 are predicted to be anti-ferromagnetic. None of these magnetic properties have yet been demonstrated experimentally. [ 1 ]
Membranes of MXenes, such as Ti 3 C 2 and Ti 2 C, have dark colors, indicating their strong light absorption in the visible wavelengths. MXenes are promising photo-thermal materials due to their strong visible light absorption. [ 61 ] [ 62 ] More interestingly, it is reported that the optical properties of MXenes such as Ti 3 C 2 and Ti 2 C in the IR region differ considerably from those in the visible wavelengths. [ 63 ] For wavelengths above 1.4 micrometres, these materials show negative permittivity, resulting in a strong metallic response to IR light. In other words, they are highly reflective to IR light. From Kirchhoff's law of radiation, a low IR absorption means a low IR emissivity. The two MXene materials show IR emissivities as low as 0.1, which are similar to some metals. [ 63 ] Such materials, black in the visible but white in the IR, are highly desired in many areas, such as camouflage, thermal management, and information encryption.
There is a growing body of the literature that recognises MXenes as high-performance corrosion inhibitors. The corrosion resistance of Ti 3 C 2 T x MXene can be attributed to the synergy of good dispersibility, barrier effect and corrosion inhibitor release. [ 64 ]
Compared to graphene oxide , which has been widely reported as an antibacterial agent, Ti 2 C MXene shows a lack of antibacterial properties. [ 65 ] However, Ti 3 C 2 MXene shows a higher antibacterial efficiency toward both Gram-negative E. coli and Gram-positive B. subtilis . [ 66 ] Colony forming unit and regrowth curves showed that more than 98% of cells of both bacteria lost viability in a 200 μg/mL Ti 3 C 2 colloidal solution within 4 h of exposure. [ 66 ] Damage to the cell membrane was observed, which resulted in release of cytoplasmic materials from the bacterial cells and cell death. [ 66 ] The principal in vitro studies of cytotoxicity of 2D sheets of MXenes showed promise for applications in bioscience and biotechnology. [ 67 ] The anticancer activity of the Ti 3 C 2 MXene was determined on two normal (MRC-5 and HaCaT) and two cancerous (A549 and A375) cell lines. The cytotoxicity results indicated that the observed toxic effects were higher against cancerous cells compared to normal ones. [ 67 ] The mechanisms of potential toxicity were also elucidated. It was shown that Ti 3 C 2 MXene may affect the occurrence of oxidative stress and, in consequence, the generation of reactive oxygen species (ROS). [ 67 ] Further studies on Ti 3 C 2 MXene revealed the potential of MXenes as a novel ceramic photothermal agent for cancer therapy. [ 61 ] In neuronal biocompatibility studies, neurons cultured on Ti 3 C 2 are as viable as those in control cultures, and they can adhere, grow axonal processes, and form functional networks. [ 68 ]
Recently, Ti 3 C 2 MXenes have been used as flowing electrodes in a flow-electrode capacitive deionization cell for the removal of ammonia from simulated wastewater. MXene FE-CDI demonstrated a 100x improvement in ion absorption capacity at 10x greater energy efficiency as compared to activated carbon flowing electrodes. [ 69 ] One-micron-thick Ti 3 C 2 MXene membranes demonstrated ultrafast water flux (approximately 38 L/(Bar·h·m 2 )) and differential sieving of salts depending on both the hydration radius and charge of the ions. [ 70 ] Cations larger than the interlayer spacing of MXene do not permeate through Ti 3 C 2 membranes. [ 70 ] As for smaller cations, the ones with a larger charge permeate an order of magnitude slower than single-charged cations. [ 70 ]
As conductive layered materials with tunable surface terminations, MXenes have been shown to be promising for energy storage applications ( Li-ion batteries , supercapacitors , and energy storage components), [ 71 ] [ 72 ] composites , photocatalysis , [ 73 ] water purification , [ 70 ] gas sensors , [ 74 ] [ 75 ] transparent conducting electrodes, [ 48 ] neural electrodes, [ 68 ] as a metamaterial , [ 76 ] SERS substrate, [ 77 ] photonic diode, [ 78 ] electrochromic device , [ 49 ] and triboelectric nanogenerator (TENGs). [ 79 ]
MXenes have been investigated experimentally in lithium-ion batteries (LIBs) (e.g. V 2 CT x , [ 26 ] Nb 2 CT x , [ 26 ] Ti 2 CT x , [ 80 ] and Ti 3 C 2 T x [ 44 ] ). V 2 CT x has demonstrated the highest reversible charge storage capacity among MXenes in multi-layer form (280 mAhg −1 at 1C rate and 125 mAhg −1 at 10C rate). Multi-layer Nb 2 CT x showed a stable, reversible capacity of 170 mAhg −1 at 1C rate and 110 mAhg −1 at a 10C rate. Although Ti 3 C 2 T x shows the lowest capacity among the four MXenes in multi-layer form, it can be delaminated via sonication of the multi-layer powder. By virtue of higher electrochemically active and accessible surface area, delaminated Ti 3 C 2 T x paper demonstrates a reversible capacity of 410 mAhg −1 at 1C and 110 mAhg −1 at 36C rate. As a general trend, M 2 X MXenes can be expected to have greater capacity than their M 3 X 2 or M 4 X 3 counterparts at the same applied current, since M 2 X MXenes have the fewest atomic layers per sheet.
In addition to high power capabilities, each MXene has a different active voltage window, which could allow their use as battery cathodes/anodes. Moreover, the experimentally measured capacity for Ti 3 C 2 T x paper is higher than predicted from computer simulations, indicating that further investigation is required to ascertain the charge storage mechanism. [ 81 ]
MXenes exhibit promising performances for sodium-ion batteries . Na + should diffuse rapidly on MXene surfaces, which is favorable for fast charging/discharging. [ 82 ] [ 83 ] Two layers of Na + can be intercalated in between MXene layers. [ 84 ] [ 85 ] As a typical example, multilayered Ti 2 CT x MXene as a negative electrode material showed a capacity of 175 mA h g −1 and good rate capability. [ 86 ] It is possible to tune the Na-ion insertion potentials of MXenes by changing the transition metal and surface functional groups. [ 82 ] [ 43 ] V 2 CT x MXene has been successfully applied as a cathode material. [ 87 ] Porous MXene-based paper electrodes have been reported to exhibit high volumetric capacities and stable cycling performance, demonstrating promise for devices where size matters. [ 88 ]
MXenes are under study to improve supercapacitor energy density. Improvements come from increased charge storage density, which can be increased in several ways. Increasing the available surface area for potential redox reactions through increasing interlayer spacing can accommodate more ions, but reduces electrode density. The synthesis route controls the surface chemistry and plays a large role in determining the intercalation reaction rate and the charge storage density. For example, molten salt prepared Ti 3 C 2 T x MXenes, with chlorine surface groups, show a capacity of 142 mAh g −1 at 13C rate and 75 mAh g −1 at 128C rate, driven by full desolvation of Li + , allowing for increased charge storage density in the electrode. [ 89 ] In comparison, Ti 3 C 2 T x MXenes prepared through HF etching show a capacity of 107.2 mAh g −1 at 1C rate. [ 90 ]
Composite Ti 3 C 2 T x -based electrodes, including Ti 3 C 2 T x /polymer (e.g. PPy , Polyaniline ), [ 91 ] [ 92 ] Ti 3 C 2 T x /TiO 2 , [ 93 ] and Ti 3 C 2 T x /Fe 2 O 3 have been explored. Notably, Ti 3 C 2 T x hydrogel electrodes delivered a high volumetric capacitance of up to 1500 F/cm 3 . [ 94 ]
Supercapacitor electrodes based on Ti 3 C 2 T x MXene paper in aqueous solutions demonstrate excellent cyclability and the ability to store 300-400 F/cm 3 , which translates to three times as much energy as for activated carbon and graphene -based capacitors. [ 95 ] Ti 3 C 2 MXene clay showed a volumetric capacitance of 900 F/cm 3 , a higher capacitance per unit of volume than most other materials, without losing any of its capacitance through more than 10,000 charge/discharge cycles. [ 13 ]
In Ti 3 C 2 T x MXene electrodes for lithium-ion electrolytes, the choice of solvent greatly affected the ion transport and intercalation kinetics. In a propylene carbonate (PC) solvent, efficient desolvation of lithium ions during intercalation led to increased volumetric charge storage, with negligible increase in electrode volume. The improved kinetics garnered through solvent choice led to improved charge storage density when comparing the PC system to acetonitrile or dimethyl sulfoxide by a factor greater than 2. [ 96 ]
FL-Ti 3 C 2 (the most studied MXene) nanosheets can mix intimately with polymers such as polyvinyl alcohol (PVA), forming alternating MXene-PVA layered structures. The electrical conductivities of the composites can be controlled from 4×10 −4 to 220 S/cm (MXene weight content from 40% to 90%). The composites have tensile strength up to 400% stronger than pure MXene films and show better capacitance up to 500 F/cm 3 . [ 97 ] By using electrostatic self-assembly, flexible and conductive MXene/ graphene supercapacitor electrodes are produced. The free-standing MXene/graphene electrode displays a volumetric capacitance
of 1040 F/cm 3 , an impressive rate capability with 61% capacitance retention, and a long cycle life. [ 98 ] A method of alternative filtration for forming MXene-carbon nanomaterial composite films has also been devised. These composites show better rate performance at high scan rates in supercapacitors. [ 99 ] The insertion of polymers or carbon nanomaterials between MXene layers enables electrolyte ions to diffuse more easily through the MXenes, which is the key for their applications in flexible energy storage devices. The mechanical properties of epoxy/MXene composites are comparable with those of graphene and CNT composites; the tensile strength and modulus can increase by up to 67% and 23%, respectively. [ 100 ] MXene/C-dot nanocomposites are reported to exhibit synergistic optical absorption and thermal properties of MXene and C-dot nanomaterials. [ 101 ]
MXene-based sensors have been studied for various applications, including gas [ 102 ] and biological sensing. [ 103 ] One of the novel sensing platforms in which MXenes have been applied is SERS. [ 77 ] [ 104 ] It was reported that Ti 3 C 2 T x MXene substrates are applicable in sensing salicylic acid, [ 104 ] a metabolite of acetylsalicylic acid (also known as Aspirin), organic dye molecules [ 77 ] and biomolecules. [ 105 ]
Another promising area for applications of MXenes is gas sensing. MXenes-based gas sensors have shown high sensitivity and selectivity towards various gases, including ammonia, alcohols, nitrogen dioxide, and sulfur dioxide . [ 102 ] These sensors can be used for environmental monitoring, industrial safety, and healthcare applications.
Porous MXenes (Ti 3 C 2 , Nb 2 C and V 2 C) have been produced via a facile chemical etching method at room temperature. [ 106 ] Porous Ti 3 C 2 has a larger specific surface area and more open structure, and can be filtered as flexible films with, or without, the addition of carbon nanotubes (CNTs). [ 106 ] The as-fabricated p-Ti 3 C 2 /CNT films showed significantly improved lithium ion storage capabilities, with a capacity as high as 1250 mA·h·g −1 at 0.1 C, excellent cycling stability, and good rate performance. [ 106 ]
Scientists at Drexel University in the US have created spray-on antennas that perform as well as current antennas found in phones, routers and other gadgets by painting MXenes onto everyday objects, widening the scope of the Internet of things considerably. [ 107 ] [ 108 ]
MXene SERS substrates have been manufactured by spray-coating and were used to detect several common dyes, with calculated enhancement factors reaching ~10 6 . Titanium carbide MXene demonstrates SERS effect in aqueous colloidal solutions, suggesting the potential for biomedical or environmental applications, where MXene can selectively enhance positively charged molecules. [ 77 ] Transparent conducting electrodes have been fabricated with titanium carbide MXene showing the ability to transmit approximately 97% of visible light per nanometer thickness. The performance of MXene transparent conducting electrodes depends on the MXene composition as well as synthesis and processing parameters. [ 109 ]
Nb 2 C MXenes exhibit surface-group-dependent superconductivity. [ 39 ] | https://en.wikipedia.org/wiki/MXenes |
The MYRRHA ( Multi-purpose hYbrid Research Reactor for High-tech Applications ) is a design project of a nuclear reactor coupled to a proton accelerator . This makes it an accelerator-driven system (ADS). MYRRHA will be a lead-bismuth cooled fast reactor with two possible configurations: sub-critical or critical. [ 1 ]
The project is managed by SCK CEN , the Belgian Centre for Nuclear Research. Its design will be adapted as a function of the experience gained from a first research project with a small proton accelerator and a lead-bismuth eutectic target: GUINEVERE. [ 2 ]
MYRRHA is anticipated to be fully constructed by 2036; its first phase (a 100 MeV LINAC accelerator) is expected to be completed in 2026 and, if successfully demonstrated, will be followed by the later phases. [ 3 ]
In a traditional power-generating nuclear reactor , the nuclear fuel is arranged in such a way that the two or three neutrons released from a fission event will induce one other atom in the fuel to fission. This is known as criticality . To maintain this precise balance, a number of control systems are used like control rods and neutron poisons . In most such designs, a loss of control can lead to a runaway reaction, heating the fuel until it melts. Various feedback systems and active controls prevent this.
The concept behind a number of advanced reactor designs is to arrange the fuel so it is always below criticality. Under normal conditions, this would lead to it rapidly "turning off" as the neutron counts continue to fall. In order to produce power, some other source of neutrons has to be provided. In most designs, these are provided by a second, much smaller reactor running on a neutron-rich fuel, like highly enriched uranium. This is the basis for the fast breeder reactor and similar designs. In order for this to work, the reactor generally has to use a coolant that has a low neutron cross-section; water would slow the neutrons down too much. Typical coolants for fast reactors are sodium or lead-bismuth.
In the accelerator-driven reactor, these extra neutrons are instead provided by a particle accelerator . The accelerator produces protons which are shot into a target, normally a heavy metal. The energy of the protons causes neutrons to be knocked off the atoms in the target, a process known as neutron spallation . These neutrons are then fed into the reactor, making up the deficit needed to sustain the desired fission rate. The MYRRHA design uses the lead-bismuth coolant as the target, shooting the protons directly into the reactor core.
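A useful back-of-the-envelope relation for such accelerator-driven systems (standard reactor physics, not a figure taken from the MYRRHA documentation) is that each externally supplied source neutron is multiplied by the subcritical core by roughly

$$M \approx \frac{1}{1 - k_{\mathrm{eff}}},$$

so a core kept at $k_{\mathrm{eff}} = 0.95$ amplifies the spallation source by a factor of about 20, while still shutting down as soon as the proton beam is switched off.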
MYRRHA is a research reactor project presently under development, aiming to demonstrate the feasibility of the ADS and lead-cooled fast reactor concepts, with various research applications ranging from spent-fuel irradiation to material irradiation testing. [ 4 ] A linear accelerator is under development to provide a beam of fast protons that hits a spallation target, producing neutrons. These neutrons are necessary to keep the nuclear reactor running when operated in sub-critical mode, but to increase its versatility the reactor is also designed to operate in critical mode with fast neutron and thermal neutron zones.
The accelerator will accelerate protons to an energy of 600 MeV with a beam current of up to 4 mA. In subcritical mode, if the accelerator stops the reactor power drops immediately. To avoid thermal cycles the accelerator needs to be extremely reliable. MYRRHA aims at no more than 10 outages longer than three seconds per 100 days. [ 5 ] A first prototype stage of the accelerator was started in 2020. [ 6 ]
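For a sense of scale (simple arithmetic on the figures quoted above, not an official specification), the nominal beam power follows directly from the beam energy and current:

$$P_{\text{beam}} = 600\ \text{MeV} \times 4\ \text{mA} = 600\times 10^{6}\ \text{V} \times 4\times 10^{-3}\ \text{A} = 2.4\ \text{MW}.$$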
The accelerator and two targets are called Minerva, and construction was started in 2024. [ 7 ]
The high reliability and intense beam current required for operating such a machine makes the proton accelerator potentially interesting for online isotope separation . Phase I of the project therefore also includes the design and feasibility study of ISOL@MYRRHA to investigate exotic isotopes. [ 8 ]
The protons collide with a liquid lead-bismuth eutectic . The high atomic number of the target leads to a large number of neutrons via spallation . [ 9 ]
The reactor, of either the pool type or the loop type, will be cooled by a lead-bismuth eutectic. Separated into a fast neutron zone and a thermal neutron zone, it is planned to use a mixed oxide of uranium and plutonium fuel (with 35 wt. % PuO 2 ). [ 10 ]
Two operating modes are foreseen: critical and sub-critical.
In sub-critical mode, the reactor is planned to run with a criticality below 0.95: on average, a fission reaction will induce less than one additional fission reaction, so the reactor does not have enough fissile material to sustain a chain reaction on its own and relies on the neutrons from the spallation target. As an additional safety feature, the reactor can be passively cooled when the accelerator is switched off. [ 9 ] | https://en.wikipedia.org/wiki/MYRRHA |
In infrared astronomy , the M band is an atmospheric transmission window centered on 4.8 micrometres (in the mid-infrared ). [ 1 ]
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/M_band_(infrared) |
MaMF , or Mammalian Motif Finder, is an algorithm for identifying motifs to which transcription factors bind. [ 1 ]
The algorithm takes as input a set of promoter sequences and a motif width (w), and as output produces a ranked list of 30 predicted motifs (each motif is defined by a set of N sequences, where N is a parameter).
The algorithm firstly indexes each sub-sequence of length n, where n is a parameter around 4-6 base pairs , in each promoter, so they can be looked up efficiently. This index is then used to build a list of all pairs of sequences of length w, such that each sequence shares an n-mer , and each sequence forms an ungapped alignment with a substring of length w from the string of length 2w around the match, with a score exceeding a cut-off.
The pairs of sequences are then scored. The scoring function favours pairs which are very similar, but disfavours sequences which are very common in the target genome. The 1000 highest scoring pairs are kept, and the others are discarded. Each of these 1000 'seed' motifs is then used to iteratively search for further sequences of length w which maximise the score (a greedy algorithm ), until N sequences for that motif are reached.
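The description above leaves the exact scoring function unspecified, so the following Python sketch only illustrates the indexing and seed-pair stages; the index structure, the windowing around a shared n-mer, and all names are assumptions made for illustration rather than the published MaMF implementation.

```python
from collections import defaultdict
from itertools import combinations

def build_nmer_index(promoters, n=5):
    """Map every n-mer to the (promoter, offset) positions where it occurs."""
    index = defaultdict(list)
    for p_id, seq in enumerate(promoters):
        for i in range(len(seq) - n + 1):
            index[seq[i:i + n]].append((p_id, i))
    return index

def seed_pairs(promoters, index, w=8, n=5):
    """Yield candidate pairs of w-long windows that share an n-mer.

    A real implementation would align each window against the 2w-long region
    around the shared n-mer and keep only pairs whose ungapped alignment score
    exceeds a cut-off; here we simply pair up the windows containing the match.
    """
    for nmer, hits in index.items():
        for (p1, i1), (p2, i2) in combinations(hits, 2):
            if p1 == p2 and abs(i1 - i2) < w:
                continue  # skip trivially overlapping occurrences
            w1 = promoters[p1][max(0, i1 - (w - n) // 2):][:w]
            w2 = promoters[p2][max(0, i2 - (w - n) // 2):][:w]
            if len(w1) == w and len(w2) == w:
                yield w1, w2

# Illustrative usage on toy promoter sequences.
promoters = ["ACGTACGTGGGTTACC", "TTACGTACGGCGTACG"]
index = build_nmer_index(promoters, n=5)
pairs = list(seed_pairs(promoters, index, w=8, n=5))
```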
Very similar motifs are discarded, and the 30 highest scoring motifs are returned as output. | https://en.wikipedia.org/wiki/MaMF |
Maban , mabain or mabanba is a material that is held to be magical in Australian Aboriginal mythology . It is the material from which the shamans and elders of indigenous Australia supposedly derive their magical powers. [ 1 ]
Among the Ngaanyatjarra people, practitioners are known as maparn or maparnjarra . | https://en.wikipedia.org/wiki/Maban |
In mathematics, MacMahon's master theorem ( MMT ) is a result in enumerative combinatorics and linear algebra . It was discovered by Percy MacMahon and proved in his monograph Combinatory analysis (1916). It is often used to derive binomial identities, most notably Dixon's identity .
In the monograph, MacMahon found so many applications of his result, he called it "a master theorem in the Theory of Permutations." He explained the title as follows: "a Master Theorem from the masterly and rapid fashion in which it deals with various questions otherwise troublesome to solve."
The result was re-derived (with attribution) a number of times, most notably by I. J. Good who derived it from his multilinear generalization of the Lagrange inversion theorem . MMT was also popularized by Carlitz who found an exponential power series version. In 1962, Good found a short proof of Dixon's identity from MMT. In 1969, Cartier and Foata found a new proof of MMT by combining algebraic and bijective ideas (built on Foata's thesis) and further applications to combinatorics on words , introducing the concept of traces . Since then, MMT has become a standard tool in enumerative combinatorics.
Although various q -Dixon identities have been known for decades, except for a Krattenthaler–Schlosser extension (1999), the proper q-analog of MMT remained elusive. After Garoufalidis–Lê–Zeilberger's quantum extension (2006), a number of noncommutative extensions were developed by Foata–Han, Konvalinka–Pak, and Etingof–Pak. Further connections to Koszul algebra and quasideterminants were also found by Hai–Lorentz, Hai–Kriegk–Lorenz, Konvalinka–Pak, and others.
Finally, according to J. D. Louck, the theoretical physicist Julian Schwinger re-discovered the MMT in the context of his generating function approach to the angular momentum theory of many-particle systems . Louck writes:
It is the MacMahon Master Theorem that unifies the angular momentum properties of composite systems in the binary build-up of such systems from more elementary constituents. [ 1 ]
Let $A=(a_{ij})_{m\times m}$ be a complex matrix, and let $x_{1},\ldots ,x_{m}$ be formal variables. Consider the coefficient
$$G(k_{1},\dots ,k_{m}) \,=\, \bigl[x_{1}^{k_{1}}\cdots x_{m}^{k_{m}}\bigr]\,\prod _{i=1}^{m}\bigl(a_{i1}x_{1}+\dots +a_{im}x_{m}\bigr)^{k_{i}}.$$
(Here the notation $[f]g$ means "the coefficient of monomial $f$ in $g$".) Let $t_{1},\ldots ,t_{m}$ be another set of formal variables, and let $T=(\delta _{ij}t_{i})_{m\times m}$ be a diagonal matrix . Then
$$\sum _{(k_{1},\dots ,k_{m})}G(k_{1},\dots ,k_{m})\,t_{1}^{k_{1}}\cdots t_{m}^{k_{m}} \,=\, \frac{1}{\det \left(I_{m}-TA\right)},$$
where the sum runs over all nonnegative integer vectors $(k_{1},\dots ,k_{m})$, and $I_{m}$ denotes the identity matrix of size $m$.
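As a quick sanity check of the identity (an illustrative verification, not part of MacMahon's original treatment), one can compare both sides as truncated power series for a small, arbitrarily chosen matrix using SymPy:

```python
import sympy as sp

# Any square matrix works; this small integer matrix is an arbitrary illustration.
A = sp.Matrix([[1, 2], [3, 4]])
m = A.shape[0]
x = sp.symbols('x1 x2')
t = sp.symbols('t1 t2')
s = sp.symbols('s')
T = sp.diag(*t)

def G(k):
    """Coefficient of x1**k[0] * x2**k[1] in prod_i (sum_j A[i,j]*x[j])**k[i]."""
    product = sp.Mul(*[sum(A[i, j] * x[j] for j in range(m)) ** k[i] for i in range(m)])
    poly = sp.expand(product)
    return poly.coeff(x[0], k[0]).coeff(x[1], k[1])

order = 4  # compare both sides up to total degree 4 in (t1, t2)

# Left-hand side: sum of G(k) * t^k over all k with k1 + k2 <= order.
lhs = sum(G((k1, k2)) * t[0] ** k1 * t[1] ** k2
          for k1 in range(order + 1) for k2 in range(order + 1) if k1 + k2 <= order)

# Right-hand side: 1 / det(I - TA), truncated at the same total degree
# via an auxiliary scaling variable s.
rhs = 1 / (sp.eye(m) - T * A).det()
rhs_trunc = sp.series(rhs.subs({t[0]: s * t[0], t[1]: s * t[1]}), s, 0, order + 1)
rhs_trunc = rhs_trunc.removeO().subs(s, 1)

print(sp.simplify(sp.expand(lhs - rhs_trunc)))  # prints 0, agreeing with the master theorem
```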
Consider a matrix
Compute the coefficients G (2 n , 2 n , 2 n ) directly from the definition:
where the last equality follows from the fact that on the right-hand side we have the product of the following coefficients:
which are computed from the binomial theorem . On the other hand, we can compute the determinant explicitly:
Therefore, by the MMT, we have a new formula for the same coefficients:
where the last equality follows from the fact that we need to use an equal number of times all three terms in the power. Now equating the two formulas for coefficients G (2 n , 2 n , 2 n ) we obtain an equivalent version of Dixon's identity: | https://en.wikipedia.org/wiki/MacMahon's_master_theorem |
The history of macOS , Apple 's current Mac operating system formerly named Mac OS X until 2011 and then OS X until 2016, began with the company's project to replace its "classic" Mac OS . That system, up to and including its final release Mac OS 9 , was a direct descendant of the operating system Apple had used in its Mac computers since their introduction in 1984. However, the current macOS is a UNIX operating system built on technology that had been developed at NeXT from the 1980s until Apple purchased the company in early 1997. [ 1 ]
macOS components derived from BSD include multiuser access, TCP/IP networking, and memory protection. [ 2 ]
Although it was originally marketed as simply "version 10" of Mac OS (indicated by the Roman numeral "X"), it has a completely different codebase from Mac OS 9, as well as substantial changes to its user interface. The transition was a technologically and strategically significant one. To ease the transition for users and developers, versions 10.0 through 10.4 were able to run Mac OS 9 and its applications in the Classic Environment , a compatibility layer .
macOS was first released in 1999 as Mac OS X Server 1.0 . It was built using the technologies Apple acquired from NeXT, but did not include the signature Aqua user interface (UI). The desktop version aimed at regular users— Mac OS X 10.0 —shipped in March 2001. Since then, several more distinct desktop and server editions of macOS have been released. Starting with Mac OS X 10.7 Lion , macOS Server is no longer offered as a standalone operating system; instead, server management tools are available for purchase as an add-on. The macOS Server app was discontinued on April 21, 2022, and will stop working on macOS 13 Ventura or later. Starting with the Intel build of Mac OS X 10.5 Leopard , most releases have been certified as Unix systems conforming to the Single UNIX Specification . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Lion was referred to by Apple as "Mac OS X Lion" and sometimes as "OS X Lion"; Mountain Lion was officially referred to as just "OS X Mountain Lion", with the "Mac" being completely dropped. The operating system was further renamed to "macOS" starting with macOS Sierra.
macOS retained the major version number 10 throughout its development history until the release of macOS 11 Big Sur in 2020.
Mac OS X 10.0 and 10.1 were given names of big cats as internal code names ("Cheetah" and "Puma"). Starting with Mac OS X 10.2 Jaguar, big-cat names were used as marketing names; starting with OS X 10.9 Mavericks, names of locations in California were used as marketing names instead.
The current major version, macOS Sequoia , was announced on June 10, 2024, at WWDC 2024 and released on September 16 of that year.
After Apple removed Steve Jobs from management in 1985, he left the company and attempted to create the "next big thing", with funding from Ross Perot [ 8 ] and himself. The result was the NeXT Computer . As the first workstation to include a digital signal processor (DSP) and a high-capacity optical disc drive, NeXT hardware was advanced for its time, but was expensive relative to the rapidly commoditizing workstation market. The hardware was phased out in 1993; however, the company's object-oriented operating system NeXTSTEP had a more lasting legacy as it eventually became the basis for Mac OS X.
NeXTSTEP was based on the Mach kernel developed at CMU (Carnegie Mellon University) [ 9 ] and BSD , an implementation of Unix dating back to the 1970s. It featured an object-oriented programming framework based on the Objective-C language. This environment is known today in the Mac world as Cocoa . It also supported the innovative Enterprise Objects Framework database access layer and WebObjects application server development environment, among other notable features. [ citation needed ]
All but abandoning the idea of an operating system, NeXT managed to maintain a business selling WebObjects and consulting services, only ever making modest profits in its last few quarters as an independent company. NeXTSTEP underwent an evolution into OPENSTEP which separated the object layers from the operating system below, allowing it to run with less modification on other platforms. OPENSTEP was, for a short time, adopted by Sun and HP .
However, by this point, a number of other companies — notably Apple, IBM, Microsoft, and even Sun itself — were claiming they would soon be releasing similar object-oriented operating systems and development tools of their own. Some of these efforts, such as Taligent , did not fully come to fruition; others, like Java , gained widespread adoption. [ citation needed ]
On February 4, 1997, Apple Computer acquired NeXT for $427 million, and used OPENSTEP as the basis for Mac OS X , as it was called at the time. [ 10 ] Traces of the NeXT software heritage can still be seen in macOS. For example, in the Cocoa development environment, the Objective-C library classes have "NS" prefixes, and the HISTORY section of the manual page for the defaults command in macOS straightforwardly states that the command "First appeared in NeXTStep." [ citation needed ]
Meanwhile, Apple was facing commercial difficulties of its own. The decade-old Macintosh System Software had reached the limits of its single-user, co-operative multitasking architecture, and its once-innovative user interface was looking increasingly outdated. A massive development effort to replace it, known as Copland , was started in 1994, but was generally perceived outside Apple to be a hopeless case due to political infighting and conflicting goals. By 1996, Copland was nowhere near ready for release, and the project was eventually cancelled. Some elements of Copland were incorporated into Mac OS 8 , released on July 26, 1997.
After considering the purchase of BeOS (a multimedia-enabled, multi-tasking OS designed for hardware similar to Apple's), the company decided instead to acquire NeXT and use OPENSTEP as the basis for their new OS. Avie Tevanian took over OS development, and Steve Jobs was brought on as a consultant. At first, the plan was to develop a new operating system based almost entirely on an updated version of OPENSTEP, with the addition of a virtual machine subsystem — known as the Blue Box — for running "classic" Macintosh applications. The result was known by the code name Rhapsody , slated for release in late 1998.
Apple expected that developers would port their software to the considerably more powerful OPENSTEP libraries once they learned of its power and flexibility. Instead, several major developers such as Adobe told Apple that this would never occur, and that they would rather leave the platform entirely. This "rejection" of Apple's plan was largely the result of a string of previous broken promises from Apple; after watching one "next OS" after another disappear and Apple's market share dwindle, developers were not interested in doing much work on the platform at all, let alone a re-write.
Apple's financial losses continued and the board of directors lost confidence in CEO Gil Amelio , asking him to resign. The board asked Steve Jobs to lead the company on an interim basis, essentially giving him carte blanche to make changes to return the company to profitability. When Jobs announced at the Worldwide Developers Conference that what developers really wanted was a modern version of the Mac OS, and Apple was going to deliver it [ citation needed ] , he was met with applause.
Over the next two years, a major effort was applied to porting the original Macintosh API to Unix libraries known as Carbon . Mac OS applications could be ported to Carbon without the need for a complete re-write, making them operate as native applications on the new operating system. Meanwhile, applications written using the older toolkits would be supported using the "Classic" Mac OS 9 environment. Support for C , C++ , Objective-C , Java , and Python were added, furthering developer comfort with the new platform. [ citation needed ]
During this time, the lower layers of the operating system (the Mach kernel and the BSD layers on top of it [ 11 ] ) were re-packaged and released under the Apple Public Source License . They became known as Darwin . The Darwin kernel provides a stable and flexible operating system, which takes advantage of the contributions of programmers and independent open-source projects outside Apple; however, it sees little use outside the Macintosh community. [ citation needed ]
During this period, the Java programming language had increased in popularity, and an effort was started to improve Mac Java support. This consisted of porting a high-speed Java virtual machine to the platform, and exposing macOS-specific "Cocoa" APIs to the Java language. [ citation needed ]
The first release of the new OS — Mac OS X Server 1.0 — used a modified version of the Mac OS GUI, but all client versions starting with Mac OS X Developer Preview 3 used a new theme known as Aqua . Aqua marked a significant shift from the Mac OS 9 interface, which had seen minimal changes since the original Macintosh OS. It introduced full-color scalable graphics, text and graphic anti-aliasing, simulated shading and highlights, transparency, shadows, and animation. A new feature was the Dock, an application launcher which took advantage of these capabilities.
Despite this, Mac OS X maintained a substantial degree of consistency with the traditional Mac OS interface and Apple's own Apple Human Interface Guidelines , with its pull-down menu at the top of the screen, familiar keyboard shortcuts, and support for a single-button mouse. The development of Aqua was delayed somewhat by the switch from OpenStep's Display PostScript engine to one developed in-house that was free of any license restrictions, known as Quartz . [ citation needed ]
With the exception of Mac OS X Server 1.0 and the original public beta, the first several macOS versions were named after big cats . Prior to its release, version 10.0 was code named "Cheetah" internally at Apple, and version 10.1 was code named internally as "Puma".
After the code name "Jaguar" for version 10.2 received publicity in the media, Apple began openly using the names to promote the operating system: 10.3 was marketed as "Panther", 10.4 as "Tiger", 10.5 as "Leopard", 10.6 as "Snow Leopard", 10.7 as "Lion", and 10.8 as "Mountain Lion". "Panther", "Tiger", and "Leopard" were registered as trademarks.
Apple registered "Lynx" and "Cougar", but these were allowed to lapse. [ 35 ] Apple started using the name of locations in California for subsequent releases: 10.9 Mavericks was named after Mavericks , a popular surfing destination; 10.10 Yosemite was named after Yosemite National Park ; 10.11 El Capitan was named for the El Capitan rock formation in Yosemite National Park; 10.12 Sierra was named for the Sierra Nevada mountain range; and 10.13 High Sierra was named for the area around the High Sierra Camps . [ 36 ]
In 2016, OS X was renamed to macOS. A few years later, in 2020, with the release of macOS Big Sur , the first component of the version number was incremented from 10 to 11, so Big Sur's initial release's version number was 11.0 instead of 10.16, making the version numbers of macOS behave the way the version numbers of Apple's other operating systems do. [ 37 ] All subsequent major releases also increased the first component of the version number.
On September 13, 2000, Apple released a $29.95 [ 38 ] "preview" version of Mac OS X (internally codenamed Kodiak ) in order to gain feedback from users. [ 39 ] It marked the first public availability of the Aqua interface , and Apple made many changes to the UI based on customer feedback. Mac OS X Public Beta expired and ceased to function in spring 2001. [ 40 ]
On March 24, 2001, Apple released Mac OS X 10.0 (internally codenamed Cheetah ). [ 41 ] The initial version was slow, incomplete, and had very few applications available at the time of its launch, mostly from independent developers. Critics suggested that the operating system was not ready for mainstream adoption, but they recognized the importance of its initial launch as a base to improve upon. Simply releasing Mac OS X was received by the Macintosh community as a great accomplishment, for attempts to completely overhaul the Mac OS had been underway since 1996, and delayed by countless setbacks. Following some bug fixes, kernel panics became much less frequent.
Mac OS X 10.1 (internally codenamed Puma ) was released on September 25, 2001. [ 42 ] It brought better performance and provided missing features, such as DVD playback. Apple released 10.1 as a free upgrade CD for 10.0 users, and as a US$129 upgrade CD for users of Mac OS 9 .
On January 7, 2002, Apple announced that Mac OS X was to be the default operating system for all Macintosh products by the end of that month. [ 43 ]
On August 23, 2002, [ 44 ] Apple followed up with Mac OS X 10.2 Jaguar , the first release to use its code name as part of the branding. [ 45 ] It brought great raw performance improvements, a sleeker look, and many powerful user-interface enhancements (over 150, according to Apple [ 46 ] ), including Quartz Extreme for compositing graphics directly on an ATI Radeon or Nvidia GeForce2 MX AGP-based video card with at least 16 MB of VRAM, a system-wide repository for contact information in the new Address Book , and an instant messaging client named iChat . [ 47 ] The Happy Mac which had appeared during the Mac OS startup sequence for almost 18 years was replaced with a large grey Apple logo with the introduction of Mac OS X 10.2.
Mac OS X Panther was released on October 24, 2003. In addition to providing much improved performance, it also incorporated the most extensive update yet to the user interface. Panther included as many or more new features as Jaguar had the year before, including an updated Finder, incorporating a brushed-metal interface, Fast user switching , Exposé (Window manager), FileVault , Safari , iChat AV (which added videoconferencing features to iChat), improved Portable Document Format (PDF) rendering and much greater Microsoft Windows interoperability. [ 48 ] Support for some early G3 computers such as the Power Macintosh and PowerBook was discontinued.
Mac OS X Tiger was released on April 29, 2005. Apple stated that Tiger contained more than 200 new features. [ 49 ] As with Panther, certain older machines were no longer supported; Tiger requires a Mac with a built-in FireWire port.
Among the new features, Tiger introduced Spotlight , Dashboard , Smart Folders , updated Mail program with Smart Mailboxes, QuickTime 7, Safari 2, Automator , VoiceOver , Core Image and Core Video . The initial release of the Apple TV used a modified version of Tiger with a different graphical interface and fewer applications and services. [ 50 ]
On January 10, 2006, Apple released the first Intel x86 -based Macs along with the 10.4.4 update to Tiger. This operating system functioned identically on the PowerPC -based Macs and the new Intel-based machines, with the exception of the Intel release dropping support for the Classic environment. [ 50 ] 10.4.4 introduced Rosetta , which translated 32-bit PowerPC machine code to 32-bit x86 code, allowing applications for PowerPC to run on Intel-based Macs without modification.
Only PowerPC Macs can be booted from retail copies of the Tiger client DVD, but there is a Universal DVD of Tiger Server 10.4.7 (8K1079) that can boot both PowerPC and Intel Macs.
Mac OS X Leopard was released on October 26, 2007. Apple called it "the largest update of Mac OS X". Leopard supports both PowerPC - and Intel x86 -based Macintosh computers; support for Macs with the G3 processor was dropped, and Macs with the G4 processor required a minimum clock rate of 867 MHz and at least 512 MB of RAM to be installed. The single DVD works for all supported Macs (including 64-bit machines). New features include a new look, an updated Finder, Time Machine , Spaces , Boot Camp pre-installed, [ 51 ] full support for 64-bit applications (including graphical applications), new features in Mail and iChat , and a number of new security features.
Leopard is the first Open Brand UNIX 03 registered product on the Intel platform. It was also the first BSD-based OS to receive UNIX 03 certification. [ 3 ] [ 52 ] Leopard dropped support for the Classic Environment and all Classic applications, [ 53 ] and was the final version of Mac OS X to support the PowerPC architecture.
Mac OS X Snow Leopard was released on August 28, 2009, the last version to be available on disc. Rather than delivering big changes to the appearance and end user functionality like the previous releases of Mac OS X , the development of Snow Leopard was deliberately focused on "under the hood" changes, increasing the performance, efficiency, and stability of the operating system. For most users, the most noticeable changes were a difference in the disk space that the operating system frees up after a clean installation when compared to Mac OS X 10.5 Leopard , a more responsive Finder rewritten in Cocoa , faster Time Machine backups, more reliable and user friendly disk ejects, a more powerful version of the Preview application, and a faster Safari web browser.
An update also introduced support for the Mac App Store , Apple's digital distribution platform for macOS applications and subsequent macOS upgrades. [ 54 ] Snow Leopard only supports Macs with Intel CPUs, requires at least 1 GB of RAM , and drops default support for applications built for the PowerPC architecture. However, Rosetta can be installed as an additional component to retain support for PowerPC-only applications. [ 55 ] It is the final version to support 32-bit Intel Macs.
Mac OS X Lion (also known as OS X Lion) was released on July 20, 2011. It brought developments made in Apple's iOS, such as an easily navigable display of installed applications ( Launchpad ) and (a greater use of) multi-touch gestures, to the Mac. This release removed Rosetta , making it incapable of running PowerPC applications. It requires 2 GB of memory. Changes made to the graphical user interface (GUI) include the Launchpad (similar to the home screen of iOS and iPadOS devices), auto-hiding scrollbars that only appear when they are being used, and Mission Control, which unifies Exposé, Spaces, Dashboard, and full-screen applications within a single interface. [ 56 ] Apple also made changes to applications: they resume in the same state as they were before they were closed (similar to iOS). Documents auto-save by default.
OS X Mountain Lion was released on July 25, 2012. It incorporates some features seen in iOS 5, which include Game Center , support for iMessage in the new Messages messaging application, and Reminders as a to-do list app separate from iCal (which is renamed as Calendar, like the iOS app). It also includes support for storing iWork documents in iCloud . 2 GB of memory is required. [ 57 ] Notification Center is added, providing an overview of alerts from applications; it is a desktop version similar to the one in iOS 5.0 and higher. Application notification pop-ups are now concentrated in a corner of the screen, and Notification Center itself is pulled in from the right side of the screen. Mountain Lion also includes more Chinese features, including support for Baidu as an option for the Safari search engine. [ 58 ] Notes is added as an application separate from Mail, syncing with its iOS counterpart [ 59 ] [ 60 ] through the iCloud service. Messages, an instant messaging software application , [ 61 ] replaces iChat . [ 62 ]
OS X Mavericks was released on October 22, 2013, as a free update through the Mac App Store worldwide. [ 63 ] It placed emphasis on battery life, Finder enhancements, other enhancements for power users, and continued iCloud integration, as well as bringing more of Apple's iOS apps to the OS X platform. iBooks and Apple Maps applications were added. Mavericks requires 2 GB of memory to operate. It is the first version named under Apple's then-new theme of places in California , dubbed Mavericks after the surfing location . [ 64 ] [ 65 ] Unlike previous versions of OS X, which had progressively decreasing prices since 10.6, 10.9 was available at no charge to all users of compatible systems running Snow Leopard (10.6) or later, [ 66 ] beginning Apple's policy of free upgrades for life on its operating system and business software. [ 67 ]
OS X Yosemite was released to the general public on October 16, 2014, as a free update through the Mac App Store worldwide. It featured a major overhaul of the user interface, replacing skeuomorphism with flat graphic design and blurred translucency effects, following the aesthetic introduced with iOS 7. It introduced features called Continuity and Handoff, which allow for tighter integration between paired OS X and iOS devices: the user can handle phone calls or text messages on either their Mac or their iPhone, and edit the same Pages document on either their Mac or their iPad. A later update of the OS included Photos as a replacement for iPhoto and Aperture . [ citation needed ]
OS X El Capitan was revealed on June 8, 2015, during the WWDC 2015 keynote speech. [ 68 ] It was made available as a public beta in July and was released publicly on September 30, 2015. Apple described this release as containing "Refinements to the Mac Experience" and "Improvements to System Performance" rather than new features. Refinements include public transport built into the Maps application, GUI improvements to the Notes application, and the adoption of San Francisco as the system font. The Metal API , a low-overhead graphics API intended to improve application performance, debuted in this operating system and was made available to "all Macs since 2012". [ 69 ]
macOS Sierra was announced on June 13, 2016, during the WWDC16 keynote speech. The update brought the Siri assistant to macOS, featuring several Mac-specific features, like searching for files. It also allowed websites to support Apple Pay as a method of transferring payment, using either a nearby iOS device or Touch ID to authenticate. iCloud also received several improvements, such as the ability to store a user's Desktop and Documents folders on iCloud so they could be synced with other Macs on the same Apple ID. It was released publicly on September 20, 2016. [ 70 ]
macOS High Sierra was announced on June 5, 2017, during the WWDC17 keynote speech. It was released on September 25, 2017. The release includes many under-the-hood improvements, including a switch to Apple File System (APFS) , the introduction of Metal 2 , support for HEVC video , and improvements to virtual reality support. In addition, numerous changes were made to standard applications including Photos, Safari, Notes, and Spotlight. [ 71 ]
macOS Mojave was announced on June 4, 2018, during the WWDC18 keynote speech. It was released on September 24, 2018. Some of the key new features were a system-wide dark mode with matching dark wallpapers, Desktop Stacks, and Dynamic Desktop, which changes the desktop background image to correspond to the user's current time of day. [ 72 ]
macOS Catalina was announced on June 3, 2019, during the WWDC19 keynote speech. It was released on October 7, 2019. It primarily focuses on updates to built-in apps, such as replacing iTunes with separate Music, Podcasts, and TV apps, redesigned Reminders and Books apps, and a new Find My app. It also features Sidecar, which allows the user to use an iPad as a second screen for their computer, or even simulate a graphics tablet with an Apple Pencil. It is the first version of macOS not to support 32-bit applications . The Dashboard application was also removed in the update. [ 73 ] [ 74 ] Since macOS Catalina, iOS apps can run on macOS with Project Catalyst, but this requires the app to be made compatible, [ 75 ] unlike ARM-powered Apple silicon Macs, which can run all iOS apps by default. [ 76 ]
macOS Big Sur was announced on June 22, 2020, during the WWDC20 keynote speech. [ 77 ] It was released November 12, 2020. [ 78 ] The major version number is changed, for the first time since "Mac OS X" was released, making it macOS 11. It brings ARM support, new icons, GUI changes to the system, [ 79 ] and other bug fixes.
Since macOS 11.2.3, it is no longer possible by default to install iOS apps from an IPA file (rather than through the Mac App Store) on Apple silicon Macs; third-party software is now required to unlock that functionality. [ 80 ] [ 81 ] Big Sur introduced Rosetta 2 to allow 64-bit Intel applications to run on Apple silicon Macs. However, Intel-based Macs are unable to run ARM-based applications, including iOS and iPadOS apps.
macOS Monterey was announced on June 7, 2021, during the WWDC21 keynote speech. [ 82 ] It was released on October 25, 2021. [ 83 ] macOS Monterey introduces new features such as Universal Control, which allows users to use a single keyboard and mouse to move between devices; AirPlay, which now allows users to present and share almost anything; and the Shortcuts app, brought over from iOS, which gives users access to galleries of pre-built shortcuts designed for Macs and lets them set up their own, among other things. [ 84 ] macOS Monterey is the final version of macOS that officially supports macOS Server.
macOS Ventura was announced on June 6, 2022, during the WWDC22 keynote speech. [ 85 ] It was released on October 24, 2022. [ 86 ] macOS Ventura introduces Stage Manager, a new and optional window manager; a redesigned System Settings app; Continuity Camera, a feature that allows Mac users to use their iPhone as a camera; and several other new features. [ 85 ] It is also the first version of macOS without macOS Server support.
macOS Sonoma was announced on June 5, 2023, during the WWDC23 keynote speech. Key changes include a revamp of Widgets, the user lock screen, and a video wallpaper/screensaver feature using Apple TV's screen saver videos. [ 87 ] It was released on September 26, 2023. [ 88 ]
macOS Sequoia was announced on June 10, 2024, during the WWDC24 keynote speech. This release introduced Apple Intelligence , with a limited initial feature set focused on basic writing and image generation tools complemented by ChatGPT integration. An iPhone Mirroring app for remotely controlling a user's iPhone was included, along with a password manager app, system support for tiling and resizing windows, and various other minor updates to Safari, Maps, Messages and Notes. It was released on September 16, 2024. [ 89 ] | https://en.wikipedia.org/wiki/MacOS_version_history |
In graph theory , Mac Lane's planarity criterion is a characterisation of planar graphs in terms of their cycle spaces , named after Saunders Mac Lane who published it in 1937. It states that a finite undirected graph is planar if and only if
the cycle space of the graph (taken modulo 2) has a cycle basis in which each edge of the graph participates in at most two basis vectors .
For any cycle c in a graph G on m edges, one can form an m -dimensional 0-1 vector that has a 1 in the coordinate positions corresponding to edges in c and a 0 in the remaining coordinate positions. The cycle space C ( G ) of the graph is the vector space formed by all possible linear combinations of vectors formed in this way. In Mac Lane's characterization, C ( G ) is a vector space over the finite field GF(2) with two elements; that is, in this vector space, vectors are added coordinatewise modulo two. A 2-basis of G is a basis of C ( G ) with the property that, for each edge e in G , at most two basis vectors have nonzero coordinates in the position corresponding to e . Then, stated more formally, Mac Lane's characterization is that the planar graphs are exactly the graphs that have a 2-basis.
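To make these definitions concrete, the following minimal Python sketch (an illustration only; the example graph, cycle list and helper names are this sketch's own, not from the source) builds the 0-1 edge vectors of a collection of cycles over GF(2) and checks whether they form a 2-basis.

```python
def cycle_vector(cycle_edges, edge_index):
    """0-1 vector over GF(2) with a 1 in the position of each edge of the cycle."""
    v = [0] * len(edge_index)
    for e in cycle_edges:
        v[edge_index[frozenset(e)]] = 1
    return v

def independent_gf2(vectors):
    """Gaussian elimination over GF(2); True if the vectors are linearly independent."""
    rows = [v[:] for v in vectors]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(vectors)

def is_two_basis(cycles, edges):
    """Mac Lane's condition: a basis of C(G) in which every edge lies in at most two cycles."""
    edge_index = {frozenset(e): i for i, e in enumerate(edges)}
    vecs = [cycle_vector(c, edge_index) for c in cycles]
    coverage = [sum(v[i] for v in vecs) for i in range(len(edges))]
    n = len({u for e in edges for u in e})
    dim = len(edges) - n + 1          # circuit rank, assuming a connected graph
    return all(c <= 2 for c in coverage) and len(vecs) == dim and independent_gf2(vecs)

# Example: K4 with vertex 3 drawn inside triangle 0-1-2; its three bounded faces are a 2-basis.
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)]
faces = [[(0, 1), (1, 3), (0, 3)], [(1, 2), (2, 3), (1, 3)], [(2, 0), (0, 3), (2, 3)]]
print(is_two_basis(faces, edges))     # True, as expected for a planar graph
```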
One direction of the characterisation states that every planar graph has a 2-basis. Such a basis may be found as the collection of boundaries of the bounded faces of a planar embedding of the given graph G .
If an edge is a bridge of G , it appears twice on a single face boundary and therefore has a zero coordinate in the corresponding vector. Thus, the only edges that have nonzero coordinates are the ones that separate two different faces; these edges appear either once (if one of the faces is the unbounded one) or twice in the collection of boundaries of bounded faces. It remains to prove that these cycles form a basis. One way to prove this is by induction . As a base case, if G is a tree, then it has no bounded faces and C ( G ) is zero-dimensional and has an empty basis. Otherwise, removing an edge from the unbounded face of G reduces both the dimension of the cycle space and the number of bounded faces by one and the induction follows.
Alternatively, it is possible to use Euler's formula to show that the number of cycles in this collection equals the circuit rank of G , which is the dimension of the cycle space. Each nonempty subset of cycles has a vector sum that represents the boundary of the union of the bounded faces in the subset, which cannot be empty (the union includes at least one bounded face and excludes the unbounded face, so there must be some edges separating them). Therefore, there is no subset of cycles whose vectors sum to zero, which means that all the cycles are linearly independent . As a linearly independent set of the same size as the dimension of the space, this collection of cycles must form a basis.
O'Neil (1973) provided the following simple argument for the other direction of the characterization, based on Wagner's theorem characterizing the planar graphs by forbidden minors . As O'Neil observes, the property of having a 2-basis is preserved under graph minors : if one contracts an edge, the same contraction may be performed in the basis vectors; if one removes an edge that has a nonzero coordinate in a single basis vector, then that vector may be removed from the basis; and if one removes an edge that has a nonzero coordinate in two basis vectors, then those two vectors may be replaced by their sum (modulo two). Additionally, if a graph has a 2-basis, then the basis must cover some edges exactly once, for otherwise the sum of its vectors would be zero (impossible for a basis), and so the 2-basis can be augmented by one more cycle consisting of these singly-covered edges while preserving the property that every edge is covered at most twice.
However, the complete graph K 5 has no 2-basis: C ( G ) is six-dimensional, each nontrivial vector in C ( G ) has nonzero coordinates for at least three edges, and so any augmented basis would have at least 21 nonzeros, exceeding the 20 nonzeros that would be allowed if each of the ten edges were nonzero in at most two basis vectors. By similar reasoning, the complete bipartite graph K 3,3 has no 2-basis: C ( G ) is four-dimensional, and each nontrivial vector in C ( G ) has nonzero coordinates for at least four edges, so any augmented basis would have at least 20 nonzeros, exceeding the 18 nonzeros that would be allowed if each of the nine edges were nonzero in at most two basis vectors. Since the property of having a 2-basis is minor-closed and is not true of the two minor-minimal nonplanar graphs K 5 and K 3,3 , it is also not true of any other nonplanar graph.
Lefschetz (1965) provided another proof, based on algebraic topology . He uses a slightly different formulation of the planarity criterion, according to which a graph is planar if and only if it has a set of (not necessarily simple) cycles covering every edge exactly twice, such that the only nontrivial relation among these cycles in C ( G ) is that their sum be zero. If this is the case, then leaving any one of the cycles out produces a basis satisfying Mac Lane's formulation of the criterion. If a planar graph is embedded on a sphere, its face cycles clearly satisfy Lefschetz's property. Conversely, as Lefschetz shows, whenever a graph G has a set of cycles with this property, they necessarily form the face cycles of an embedding of the graph onto the sphere.
Ja'Ja' & Simon (1982) used Mac Lane's planarity criterion as part of a parallel algorithm for testing graph planarity and finding planar embeddings. Their algorithm partitions the graph into triconnected components , after which there is a unique planar embedding (up to the choice of the outer face) and the cycles in a 2-basis can be assumed to be all the peripheral cycles of the graph. Ja'Ja' and Simon start with a fundamental cycle basis of the graph (a cycle basis generated from a spanning tree by forming a cycle for each possible combination of a path in the tree and an edge outside the tree) and transform it into a 2-basis of peripheral cycles. These cycles form the faces of a planar embedding of the given graph.
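As a rough illustration of the starting point of that algorithm, the sketch below (assuming the networkx library is available; only the fundamental-cycle-basis step is shown, not the parallel transformation into a 2-basis of peripheral cycles) builds one cycle per non-tree edge from a spanning tree.

```python
import networkx as nx  # assumed available; any graph library with spanning trees works

def fundamental_cycle_basis(G):
    """One cycle per non-tree edge: the unique tree path between its endpoints plus the edge."""
    T = nx.minimum_spanning_tree(G)
    basis = []
    for u, v in G.edges():
        if not T.has_edge(u, v):
            path = nx.shortest_path(T, u, v)              # unique path in the tree
            basis.append(list(zip(path, path[1:])) + [(v, u)])
    return basis

G = nx.complete_graph(4)
for cycle in fundamental_cycle_basis(G):
    print(cycle)
# K4 yields m - n + 1 = 3 fundamental cycles, matching the dimension of its cycle space.
```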
Mac Lane's planarity criterion allows the number of bounded face cycles in a planar graph to be counted easily, as the circuit rank of the graph. This property is used in defining the meshedness coefficient of the graph, a normalized variant of the number of bounded face cycles that is computed by dividing the circuit rank by 2 n − 5 , the maximum possible number of bounded faces of a planar graph with the same vertex set ( Buhl et al. 2004 ). | https://en.wikipedia.org/wiki/Mac_Lane's_planarity_criterion |
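For instance, a tiny sketch (the function and example graph are invented for illustration, not taken from the cited paper) computing the meshedness coefficient from the vertex and edge counts of a connected planar graph:

```python
def meshedness(n_vertices, n_edges):
    """Circuit rank divided by 2n - 5, the maximum number of bounded faces on n vertices."""
    circuit_rank = n_edges - n_vertices + 1   # assumes a connected planar graph
    return circuit_rank / (2 * n_vertices - 5)

# A 3x3 grid graph has 9 vertices, 12 edges and 4 bounded faces.
print(meshedness(9, 12))   # 4/13, roughly 0.31
```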
Macadam is a type of road construction pioneered by Scottish engineer John Loudon McAdam c. 1820 , in which crushed stone is placed in shallow, convex layers and compacted thoroughly. A binding layer of stone dust (crushed stone from the original material) may form; it may also, after rolling, be covered with a cement or bituminous binder to keep dust and stones together. The method simplified what had been considered state-of-the-art at that point.
Pierre-Marie-Jérôme Trésaguet is sometimes considered the first person to bring post-Roman science to road building. A Frenchman from an engineering family, he worked paving roads in Paris from 1757 to 1764. As chief engineer of road construction of Limoges , he had the opportunity to develop a better and cheaper method of road construction. In 1775, Tresaguet became engineer-general and presented his answer for road improvement in France, which soon became standard practice there. [ 1 ]
Trésaguet had recommended a roadway consisting of three layers of stones laid on a crowned subgrade with side ditches for drainage. The first two layers consisted of angular hand-broken aggregate , maximum size 3 inches (7.6 cm), to a depth of about 8 inches (20 cm). The third layer was about 2 inches (5 cm) thick with a maximum aggregate size of 1 inch (2.5 cm). [ 2 ] This top-level surface permitted a smoother shape and protected the larger stones in the road structure from iron wheels and horse hooves. To keep the running surface level with the countryside, this road was put in a trench, which created drainage problems. These problems were addressed by changes that included digging deep side ditches, making the surface as solid as possible, and constructing the road with a difference in elevation (height) between the two edges, that difference being referred to interchangeably as the road's camber or cross slope . [ 2 ]
Thomas Telford , born in Dumfriesshire , Scotland , [ 3 ] was a surveyor and engineer who applied Tresaguet's road building theories. In 1801 Telford worked for the Commission of Highland Roads and Bridges . He became director of the Holyhead Road Commission between 1815 and 1830. Telford extended Tresaguet's theories but emphasized high-quality stone. He recognized that some of the road problems of the French could be avoided by using cubical stone blocks. [ 4 ]
Telford used roughly 12 in × 10 in × 6 in (30 cm × 25 cm × 15 cm) partially shaped paving stones (pitchers), with a slight flat face on the bottom surface. He turned the other faces more vertically than Tresaguet's method. The longest edge was arranged crossways to the traffic direction, and the joints were broken in the method of conventional brickwork but with the smallest faces of the pitcher forming the upper and lower surfaces. [ 4 ]
Broken stone was wedged into the spaces between the tapered perpendicular faces to provide the layer with good lateral control. Telford kept the natural formation level and used masons to camber the upper surface of the blocks. He placed a 6-inch (15 cm) layer of stone no bigger than 2.4 in (6 cm) on top of the rock foundation. To finish the road surface he covered the stones with a mixture of gravel and broken stone. This structure came to be known as "Telford pitching." Telford's road depended on a resistant structure to prevent water from collecting and undermining the strength of the pavement. Telford raised the pavement structure above ground level whenever possible. [ citation needed ]
Where the structure could not be raised, Telford drained the area surrounding the roadside. Previous road builders in Britain ignored drainage problems, and Telford's rediscovery of drainage principles was a major contribution to road construction. [ 5 ] Notably, around the same time, John Metcalf strongly advocated that drainage was in fact an important factor in road construction and astonished colleagues by building dry roads even through marshland. He accomplished this by incorporating a layer of brushwood and heather. [ citation needed ]
John Loudon McAdam was born in Ayr , Scotland, in 1756. In 1787, he became a trustee of the Ayrshire Turnpike in the Scottish Lowlands and during the next seven years his hobby became an obsession. He moved to Bristol , England, in 1802 and became a Commissioner for Paving in 1806. [ 6 ] On 15 January 1816, he was elected surveyor general of roads for the Bristol turnpike trust and was responsible for 149 miles of road. [ 6 ] He then put his ideas about road construction into practice, the first 'macadamised' stretch of road being Marsh Road at Ashton Gate, Bristol. [ 6 ] He also began to actively propagate his ideas in two booklets called Remarks (or Observations) on the Present System of Roadmaking , (which ran nine editions between 1816 and 1827) and A Practical Essay on the Scientific Repair and Preservation of Public Roads, published in 1819. [ 7 ]
McAdam's method was simpler yet more effective at protecting roadways: he discovered that massive foundations of rock upon rock were unnecessary and asserted that native soil alone would support the road and traffic upon it, as long as it was covered by a road crust that would protect the soil underneath from water and wear. [ 8 ] An under-layer of small angular broken stones would act as a solid mass. Keeping the surface stones smaller than the width of a wheel made for a good running surface. The small surface stones also provided low stress on the road, so long as it could be kept reasonably dry. [ 9 ]
Unlike Telford and other road builders of the time, McAdam laid his roads almost level. His 30-foot-wide (9.1 m) road required a rise of only 3 inches (7.6 cm) from the edges to the centre. Cambering and elevation of the road above the water table enabled rain water to run off into ditches on either side. [ 10 ]
Size of stones was central to McAdam's road building theory. The lower 8 in (20 cm) road thickness was restricted to stones no larger than 3 inches (7.5 cm). The upper 2-inch-thick (5 cm) layer of stones was limited to stones 2 centimetres ( 3 ⁄ 4 in) in diameter; these were checked by supervisors who carried scales. A workman could check the stone size himself by seeing if the stone would fit into his mouth. The importance of the 2 cm stone size was that the stones needed to be much smaller than the four-inch (10 cm) width of the iron carriage wheels that travelled on the road. [ 5 ]
McAdam believed that the "proper method" of breaking stones for utility and rapidity was accomplished by people sitting down and using small hammers, breaking the stones so that none of them was larger than six ounces (170 g) in weight. He also wrote that the quality of the road would depend on how carefully the stones were spread on the surface over a sizeable space, one shovelful at a time. [ 11 ]
McAdam directed that no substance that would absorb water and affect the road by frost should be incorporated into the road. Neither was anything to be laid on the clean stone to bind the road. The action of the road traffic would cause the broken stone to combine with its own angles, merging into a level, solid surface that would withstand weather or traffic. [ 12 ]
The first macadam road built in the United States was constructed between Hagerstown and Boonsboro, Maryland , and was named at the time Boonsborough Turnpike Road. This was the last section of unimproved road between Baltimore on the Chesapeake Bay and Wheeling on the Ohio River . Before it was macadamized, stagecoaches travelling the Hagerstown to Boonsboro road in the winter had taken 5 to 7 hours to cover the 10-mile (16 km) stretch. [ 13 ] [ 14 ]
This road was completed in 1823, using McAdam's road techniques, except that the finished road was compacted with a cast iron roller instead of relying on road traffic for compaction. [ 15 ] The second American road built using McAdam principles was the Cumberland Road which was 73 miles (117 km) long and was completed in 1830 after five years of work. [ 13 ] [ 14 ]
McAdam's renown rests on his effective and economical construction methods, which were a great improvement over those used by his generation. He emphasised that roads could be constructed for any kind of traffic, and he helped to alleviate the resentment travellers felt toward increasing traffic on the roads. His legacy lies in his advocacy of effective road maintenance and management. He advocated a central road authority with trained professional officials who could be paid a salary that would keep them from corruption. These professionals could give their entire time to these duties and be held responsible for their actions. [ 16 ]
McAdam's road building technology was applied to roads by other engineers. One of these engineers was Richard Edgeworth , who filled the gaps between the surface stones with a mixture of stone dust and water, providing a smoother surface for the increased traffic using the roads. [ 17 ] This basic method of construction is sometimes known as water-bound macadam. Although this method required a great deal of manual labour, it resulted in a strong and free-draining pavement. Roads constructed in this manner were described as "macadamized." [ 17 ]
With the advent of motor vehicles , dust became a serious problem on macadam roads. The area of low air pressure created under fast-moving vehicles sucked dust from the road surface, creating dust clouds and a gradual unraveling of the road material. [ 18 ] This problem was approached by spraying tar on the surface to create tar-bound macadam. In 1902 a Swiss doctor, Ernest Guglielminetti , came upon the idea of using tar from Monaco 's gasworks for binding the dust. [ 19 ] Later a mixture of coal tar and ironworks slag , patented by Edgar Purnell Hooley as tarmac , was introduced.
A more durable road surface (modern mixed asphalt pavement), sometimes referred to in the U.S. as blacktop, was introduced in the 1920s. Instead of laying the stone and sand aggregates on the road and then spraying the top surface with binding material, in the asphalt paving method the aggregates are thoroughly mixed with the binding material and the mixture is laid all together. [ 20 ] While macadam roads have been resurfaced in most developed countries , some are preserved along stretches of roads such as the United States' National Road . [ 21 ]
Because of the historic use of macadam as a road surface, roads in some parts of the United States (such as parts of Pennsylvania ) are referred to as macadam, even though they might be made of asphalt or concrete . Similarly, the term "tarmac" is sometimes colloquially applied to asphalt roads or aircraft runways . [ 22 ] | https://en.wikipedia.org/wiki/Macadam |
The Macaronesian Biogeographic Region is a biogeographic region, as defined by the European Environment Agency , that covers the Azores , the Canary Islands , and Madeira .
The name comes from the group of four archipelagos collectively known as Macaronesia that also include Cape Verde , which is not included in the European region.
The Macaronesian Biogeographic Region includes the Portuguese archipelagos of the Azores and Madeira, and the Spanish Canary Islands. [ 1 ]
The Natura 2000 list of sites of Community importance for the region was the first such list to be adopted, in December 2001.
It contained 208 sites covering over 5,000 square kilometres (1,900 sq mi) of land and sea.
The list is updated every year. [ 1 ] As of 14 December 2018 it contained 224 entries, ranging in size from ES0000041 Ojeda, Inagua y Pajonales, 3,527.6 hectares (8,717 acres) at 27°56′38″N 15°41′55″W , to PTTER0018 Costa das Quatro Ribeiras — Ilha da Terceira, 267.63 hectares (661.3 acres) at 38°48′N 27°13′W . [ 2 ]
The archipelagos all have a volcanic origin, complex landscape and gentle climate, and have rich biodiversity. [ 1 ] | https://en.wikipedia.org/wiki/Macaronesian_Biogeographic_Region |
Macarthur Astronomical Society is an organisation of amateur astronomers , based in the Macarthur Region of outer South Western Sydney , New South Wales , Australia.
The constitutionally adopted objectives of the Society are: (i) to foster the science of Astronomy ; (ii) to organise observational field nights for the purpose of carrying out astronomical observation ; (iii) to assist and give advice regarding astronomical instrumentation; and (iv) to participate in/co-operate with other scientific societies and groups with a similar scientific interest in astronomy.
In keeping with these objectives, the society's three core activities are:
Formed in 1996 [ 1 ] in Ingleburn, New South Wales by Philip Ainsworth, Macarthur Astronomical Society Inc. is registered as an independent Incorporated Association by the NSW Fair Trading . Its affairs are governed by its own constitution [ 2 ] and managed by an elected seven member Management Committee. As required by NSW Fair Trading, the secretary of the society acts as Public Officer. [ 3 ] The Society is approved by the NSW Commissioner of Police for the purpose of an exemption from obtaining a laser pointer permit. [ 4 ]
The monthly meetings of the Society provide a platform for professional astronomers and prominent amateur astronomers, on each third Monday (Jan.to Nov.). These meetings were renamed the Macarthur Astronomy Forum in 2011. Guest speakers have included Nobel Laureate Professor Brian Schmidt , Professor Bryan Gaensler , Australia's Astronomer at Large Professor Fred Watson , [ 5 ] Mark Phillips and NASA astronaut Greg Chamitoff .
Patrons are appointed by the Management Committee. Between 2009 and 2011 the Society had dual Patrons.
The committee is tasked with the total management of the affairs of the Society and aims to mix youth with experience. It meets monthly and consists of a President, Vice-President, Secretary, Treasurer and three other Committee Members. Office bearers are elected by the membership at an Annual General Meeting, normally held in April each year. Whilst a ballot is provided for, the Society has traditionally never received more than one nomination per position, thus a ballot has never been held.
On 9 December 2014, MAS won the University of Western Sydney Excellence in Partnership Award . [ 9 ] The University awards this to recognize the many and highly valued contributions of the University's community partners. The accompanying citation reads: "The Macarthur Astronomical Society has, in partnership with the Campbelltown Rotary Observatory, conducted astronomy talks and activities to bring the latest advances in physics, astrophysics and high technology to the community. This enables the community to participate in debates about science in an informed manner with experts and politicians."
The Society instituted an annual Students Night in 2015, to encourage school children from Prairiewood High School to study the science of astronomy and report their research findings to the Society's Macarthur Astronomy Forum in December each year. [ 10 ]
During 2018, a Student Mentoring Programme was introduced to assist year 7 – 11 students at Broughton Anglican College to complete a scientific astronomical investigation as part of their science courses.
The Society's journal " Prime Focus " was published monthly, for the benefit of members, between 1996 and 2012. Initially the publication was a printed edition but since 2009 it was distributed electronically. In 2011, the first colour editions were published and printed copies became available again. The journal ceased in October 2012 but resumed for a brief period in 2020.
The Society has published two DVDs, "magnitude" and "magnitude II" , both containing the best astro-images taken by its members.
The Society has had the following authors of astronomy books within its ranks. [ 11 ]
The Society has held major public exhibitions displaying the astro-photographic work of its members:
In 2011, the Society set up a sub-committee to seek a suitable site – remote from city lighting, yet within easy reach of Campbelltown/Camden – at which to locate its first astronomical observatory.
In 2012, a suitable site was identified in the Dharawal National Park and the Society pursued opportunities to secure use of the site. [ 17 ] The location was originally the site of the North Cliff coal mine, operated by BHP . Whilst anticipating some opposition to placing an observatory in a national park, the society was inspired by the Australian Astronomical Observatory in the Warrumbungles National Park and the concept received much local support. [ 18 ]
If successful, the observatory would have been used for astronomical research, public outreach, astro-imaging and members private observing. [ 19 ] Whilst the proposal was welcomed in the community and supported by the mine lease-holder, it did not gain the necessary government support.
The Society organises a volunteer computing team [ 20 ] for the purpose of carrying out scientific research using the Berkeley Open Infrastructure for Network Computing (BOINC) Project Management middleware platform, which allows users to contribute to a range of scientific computing projects at the same time. Volunteer computing is often also referred to as Citizen science , Distributed computing or Grid computing . The team is currently working as volunteers on projects for theSkyNet , SETI@home , Einstein@home , asteroids@home, LHC@home and other BOINC projects. | https://en.wikipedia.org/wiki/Macarthur_Astronomical_Society |
Macaulay brackets are a notation used to describe the ramp function { x } = { 0 , x < 0 x , x ≥ 0 {\displaystyle \{x\}={\begin{cases}0,&x<0\\x,&x\geq 0\end{cases}}}
A popular alternative transcription uses angle brackets, viz. ⟨ x ⟩ {\displaystyle \langle x\rangle } . [ 1 ] Another commonly used notation is x {\displaystyle x} + or ( x ) {\displaystyle (x)} + for the positive part of x {\displaystyle x} , which avoids conflicts with { . . . } {\displaystyle \{...\}} for set notation .
Macaulay's notation is commonly used in the static analysis of bending moments of a beam. This is useful because shear forces applied on a member render the shear and moment diagram discontinuous. Macaulay's notation also provides an easy way of integrating these discontinuous curves to give bending moments, angular deflection, and so on. For engineering purposes, angle brackets are often used to denote the use of Macaulay's method .
The expression ⟨ x − a ⟩ n {\displaystyle \langle x-a\rangle ^{n}} simply states that the function takes the value 0 for x values smaller than a and ( x − a ) n {\displaystyle (x-a)^{n}} for all x values greater than or equal to a . With this, all the forces acting on a beam can be added, with their respective points of action being the value of a .
A particular case is the unit step function , obtained with n = 0: ⟨ x − a ⟩ 0 {\displaystyle \langle x-a\rangle ^{0}} equals 0 for x < a and 1 for x ≥ a .
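As an example of how the notation is used in Macaulay's method , here is a minimal Python sketch (the beam length, load and position are hypothetical sample values, and the function names are this sketch's own) that writes the bending moment of a simply supported beam carrying a single point load as one expression valid along the whole span.

```python
def macaulay(x, a, n=1):
    """Macaulay bracket <x - a>^n: zero for x < a, (x - a)**n for x >= a."""
    return (x - a) ** n if x >= a else 0.0

# Hypothetical simply supported beam: span L, point load P at x = a.
L, P, a = 4.0, 10.0, 1.5        # m, kN, m
R_A = P * (L - a) / L           # left support reaction from statics

def bending_moment(x):
    """One expression for the whole span, thanks to the brackets."""
    return R_A * macaulay(x, 0.0) - P * macaulay(x, a)

for x in (0.5, 1.5, 3.0, 4.0):
    print(f"M({x:.1f} m) = {bending_moment(x):.2f} kN*m")   # zero again at x = L
```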
This mathematical analysis –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Macaulay_brackets |
In mathematics , the Macdonald identities are some infinite product identities associated to affine root systems , introduced by Ian Macdonald ( 1972 ). They include as special cases the Jacobi triple product identity , Watson's quintuple product identity , several identities found by Dyson (1972) , and a 10-fold product identity found by Winquist (1969) .
Kac (1974) and Moody (1975) pointed out that the Macdonald identities are the analogs of the Weyl denominator formula for affine Kac–Moody algebras and superalgebras. | https://en.wikipedia.org/wiki/Macdonald_identities |
The Macedonian Journal of Chemistry and Chemical Engineering is a biannual peer-reviewed scientific journal of chemistry established in 1974 by the Society of Chemists and Technologists of Macedonia . [ 1 ] Since 2022 it is co-published with the Ss. Cyril and Methodius University of Skopje . It consists of two parts: The first, larger part contains peer-reviewed scientific articles from the various fields of chemistry and chemical engineering , written in English and accompanied by abstracts in Macedonian . The second part, written in Macedonian or in English, contains society and related news.
The journal was first published as the Bulletin of the Chemists and Technologists of Macedonia in 1974, and obtained its current name in 2007. The journal and the articles published since 2006 are available online. It is a diamond open access journal, neither the readers nor the authors pay any fees.
The following persons have been editors-in-chief :
This article about a chemistry journal is a stub . You can help Wikipedia by expanding it .
| https://en.wikipedia.org/wiki/Macedonian_Journal_of_Chemistry_and_Chemical_Engineering
The Macfarlane Burnet Medal and Lecture is a biennial award given by the Australian Academy of Science to recognise outstanding scientific research in the biological sciences. [ 1 ]
It was established in 1971 and honours the memory of the Nobel laureate Sir Frank Macfarlane Burnet , OM KBE MD FAA FRS, the Australian virologist best known for his contributions to immunology and is the academy's highest award for biological sciences.
Source: Australian Academy of Science | https://en.wikipedia.org/wiki/Macfarlane_Burnet_Medal_and_Lecture |
In theoretical physics , particularly in discussions of gravitation theories , Mach's principle (or Mach's conjecture [ 1 ] ) is the name given by Albert Einstein to an imprecise hypothesis often credited to the physicist and philosopher Ernst Mach . The hypothesis attempted to explain how rotating objects, such as gyroscopes and spinning celestial bodies, maintain a frame of reference .
The proposition is that the existence of absolute rotation (the distinction of local inertial frames vs. rotating reference frames ) is determined by the large-scale distribution of matter, as exemplified by this anecdote: [ 2 ]
You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move? [ a ] [ 2 ]
Mach's principle says that this is not a coincidence—that there is a physical law that relates the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, Mach suggests that there is some physical law which would make it so you would feel a centrifugal force . There are a number of rival formulations of the principle, often stated in vague ways like " mass out there influences inertia here". A very general statement of Mach's principle is "local physical laws are determined by the large-scale structure of the universe". [ 3 ]
Mach's concept was a guiding factor in Einstein's development of the general theory of relativity . Einstein realized that the overall distribution of matter would determine the metric tensor which indicates which frame is stationary with respect to rotation. Frame-dragging and conservation of gravitational angular momentum makes this into a true statement in the general theory in certain solutions. But because the principle is so vague, many distinct statements have been made which would qualify as a Mach principle , some of which are false. The Gödel rotating universe is a solution of the field equations that is designed to disobey Mach's principle in the worst possible way. In this example, the distant stars seem to be revolving faster and faster as one moves further away. This example does not completely settle the question of the physical relevance of the principle because it has closed timelike curves .
Mach put forth the idea in his book The Science of Mechanics (1883 in German, 1893 in English). Before Mach's time, the basic idea also appears in the writings of George Berkeley . [ 4 ] After Mach, the book Absolute or Relative Motion? (1896) by Benedict Friedlaender and his brother Immanuel contained ideas similar to Mach's principle. [ page needed ]
There is a fundamental issue in relativity theory: if all motion is relative, how can we measure the inertia of a body? We must measure the inertia with respect to something else. But what if we imagine a particle completely on its own in the universe? We might hope to still have some notion of its state of motion. Mach's principle is sometimes interpreted as the statement that such a particle's state of motion has no meaning in that case.
In Mach's words, the principle is embodied as follows: [ 5 ]
[The] investigator must feel the need of... knowledge of the immediate connections, say, of the masses of the universe. There will hover before him as an ideal insight into the principles of the whole matter, from which accelerated and inertial motions will result in the same way.
Albert Einstein seemed to view Mach's principle as something along the lines of: [ 6 ]
...inertia originates in a kind of interaction between bodies...
In this sense, at least some of Mach's principles are related to philosophical holism . Mach's suggestion can be taken as the injunction that gravitation theories should be relational theories . Einstein brought the principle into mainstream physics while working on general relativity . Indeed, it was Einstein who first coined the phrase Mach's principle . There is much debate as to whether Mach really intended to suggest a new physical law since he never states it explicitly.
The writing in which Einstein found inspiration was Mach's book The Science of Mechanics (1883, tr. 1893), where the philosopher criticized Newton 's idea of absolute space , in particular the argument that Newton gave sustaining the existence of an advantaged reference system: what is commonly called "Newton's bucket argument ".
In his Philosophiae Naturalis Principia Mathematica , Newton tried to demonstrate that one can always decide if one is rotating with respect to the absolute space, measuring the apparent forces that arise only when an absolute rotation is performed. If a bucket is filled with water, and made to rotate, initially the water remains still, but then, gradually, the walls of the vessel communicate their motion to the water, making it curve and climb up the borders of the bucket, because of the centrifugal forces produced by the rotation. This experiment demonstrates that the centrifugal forces arise only when the water is in rotation with respect to the absolute space (represented here by the earth's reference frame, or better, the distant stars) instead, when the bucket was rotating with respect to the water no centrifugal forces were produced, this indicating that the latter was still with respect to the absolute space.
Mach, in his book, says that the bucket experiment only demonstrates that when the water is in rotation with respect to the bucket no centrifugal forces are produced, and that we cannot know how the water would behave if in the experiment the bucket's walls were increased in depth and width until they became leagues big. In Mach's idea this concept of absolute motion should be substituted with a total relativism in which every motion, uniform or accelerated, has sense only in reference to other bodies ( i.e. , one cannot simply say that the water is rotating, but must specify if it's rotating with respect to the vessel or to the earth). In this view, the apparent forces that seem to permit discrimination between relative and "absolute" motions should only be considered as an effect of the particular asymmetry that there is in our reference system between the bodies which we consider in motion, that are small (like buckets), and the bodies that we believe are still (the earth and distant stars), that are overwhelmingly bigger and heavier than the former.
This same thought had been expressed by the philosopher George Berkeley in his De Motu . It is then not clear, in the passages from Mach just mentioned, if the philosopher intended to formulate a new kind of physical action between heavy bodies. This physical mechanism should determine the inertia of bodies, in a way that the heavy and distant bodies of our universe should contribute the most to the inertial forces. More likely, Mach only suggested a mere "redescription of motion in space as experiences that do not invoke the term space ". [ 7 ] What is certain is that Einstein interpreted Mach's passage in the former way, originating a long-lasting debate.
Most physicists believe Mach's principle was never developed into a quantitative physical theory that would explain a mechanism by which the stars can have such an effect. Mach himself never made his principle exactly clear. [ 7 ] : 9–57 Although Einstein was intrigued and inspired by Mach's principle, Einstein's formulation of the principle is not a fundamental assumption of general relativity , although the principle of equivalence of gravitational and inertial mass is most certainly fundamental.
Because intuitive notions of distance and time no longer apply, what exactly is meant by "Mach's principle" in general relativity is even less clear than in Newtonian physics and at least 21 formulations of Mach's principle are possible, some being considered more strongly Machian than others. [ 7 ] : 530 A relatively weak formulation is the assertion that the motion of matter in one place should affect which frames are inertial in another.
Einstein, before completing his development of the general theory of relativity, found an effect which he interpreted as being evidence of Mach's principle. We assume a fixed background for conceptual simplicity, construct a large spherical shell of mass, and set it spinning in that background. The reference frame in the interior of this shell will precess with respect to the fixed background. This effect is known as the Lense–Thirring effect . Einstein was so satisfied with this manifestation of Mach's principle that he wrote a letter to Mach expressing this:
it... turns out that inertia originates in a kind of interaction between bodies, quite in the sense of your considerations on Newton's pail experiment... If one rotates [a heavy shell of matter] relative to the fixed stars about an axis going through its center, a Coriolis force arises in the interior of the shell; that is, the plane of a Foucault pendulum is dragged around (with a practically unmeasurably small angular velocity). [ 6 ]
The Lense–Thirring effect certainly satisfies the very basic and broad notion that "matter there influences inertia here". [ 8 ] The plane of the pendulum would not be dragged around if the shell of matter were not present, or if it were not spinning. As for the statement that "inertia originates in a kind of interaction between bodies", this, too, could be interpreted as true in the context of the effect.
More fundamental to the problem, however, is the very existence of a fixed background, which Einstein describes as "the fixed stars". Modern relativists see the imprints of Mach's principle in the initial-value problem. Essentially, we humans seem to wish to separate spacetime into slices of constant time. When we do this, Einstein's equations can be decomposed into one set of equations, which must be satisfied on each slice, and another set, which describe how to move between slices. The equations for an individual slice are elliptic partial differential equations . In general, this means that only part of the geometry of the slice can be given by the scientist, while the geometry everywhere else will then be dictated by Einstein's equations on the slice. [ clarification needed ]
In the context of an asymptotically flat spacetime , the boundary conditions are given at infinity. Heuristically, the boundary conditions for an asymptotically flat universe define a frame with respect to which inertia has meaning. By performing a Lorentz transformation on the distant universe, of course, this inertia can also be transformed [ clarification needed ] .
A stronger form of Mach's principle applies in Wheeler–Mach–Einstein spacetimes , which require spacetime to be spatially compact and globally hyperbolic . In such universes Mach's principle can be stated as the distribution of matter and field energy-momentum (and possibly other information) at a particular moment in the universe determines the inertial frame at each point in the universe (where "a particular moment in the universe" refers to a chosen Cauchy surface ). [ 7 ] : 188–207
There have been other attempts to formulate a theory that is more fully Machian, such as the Brans–Dicke theory and the Hoyle–Narlikar theory of gravity , but most physicists argue that none have been fully successful. At an exit poll of experts, held in Tübingen in 1993, when asked the question "Is general relativity perfectly Machian?", 3 respondents replied "yes", and 22 replied "no". To the question "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?" the result was 14 "yes" and 7 "no". [ 7 ] : 106
However, Einstein was convinced that a valid theory of gravity would necessarily have to include the relativity of inertia:
So strongly did Einstein believe at that time in the relativity of inertia that in 1918 he stated as being on an equal footing three principles on which a satisfactory theory of gravitation should rest:
In 1922, Einstein noted that others were satisfied to proceed without this [third] criterion and added,
"This contentedness will appear incomprehensible to a later generation however."
It must be said that, as far as I can see, to this day, Mach's principle has not brought physics decisively farther. It must also be said that the origin of inertia is and remains the most obscure subject in the theory of particles and fields. Mach's principle may therefore have a future – but not without the quantum theory.
In 1953, in order to express Mach's Principle in quantitative terms, the Cambridge University physicist Dennis W. Sciama proposed the addition of an acceleration dependent term to the Newtonian gravitation equation. [ 9 ] Sciama's acceleration dependent term was F = G m A m B a r c 2 {\textstyle F=G{\frac {m_{A}m_{B}{\bf {a}}}{rc^{2}}}\ } where r is the distance between the particles, G is the gravitational constant, a is the relative acceleration and c represents the speed of light in vacuum. Sciama referred to the effect of the acceleration dependent term as Inertial Induction .
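To give a sense of the magnitude of this term, the short sketch below plugs purely illustrative sample values into Sciama's expression (the numbers are hypothetical and the function name is this sketch's own, not from the source):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light in vacuum, m/s

def sciama_term(m_a, m_b, accel, r):
    """Sciama's acceleration-dependent force term F = G * m_A * m_B * a / (r * c^2)."""
    return G * m_a * m_b * accel / (r * c ** 2)

# Hypothetical values: two 1 kg masses 1 m apart with a relative acceleration of 9.81 m/s^2.
print(sciama_term(1.0, 1.0, 9.81, 1.0))   # about 7e-27 N -- negligible for laboratory masses
```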
The broad notion that "mass there influences inertia here" has been expressed in several forms. Hermann Bondi and Joseph Samuel have listed eleven distinct statements that can be called Mach principles, labelled Mach0 through Mach10 (taking inspiration from the Mach number ). [ 10 ] Though their list is not necessarily exhaustive, it does give a flavor for the variety possible.
First stand still, and let your arms hang loose at your sides. Observe that the stars are more or less unmoving, and that your arms hang more or less straight down. Then pirouette. The stars will seem to rotate around the zenith, and at the same time your arms will be drawn upward by centrifugal force. It would surely be a remarkable coincidence if the inertial frame, in which your arms hung freely, just happened to be the reference frame in which typical stars are at rest, unless there were some interaction between the stars and you that determined your inertial frame. [ 2 ] | https://en.wikipedia.org/wiki/Mach's_principle |
Mach cutoff is a phenomenon of high-altitude supersonic flight in which the sonic boom generated at speeds not too far above Mach 1 never reaches the ground. [ 1 ] : 2 [ 2 ]
Flight is supersonic when the aircraft speed exceeds the speed of sound in the immediately surrounding air. A side-effect is the creation of a sonic shock-wave, heard on the ground in the vicinity as a loud and disturbing bang, known as a sonic boom. For this reason supersonic flight is generally limited to travel over oceans and large seas, and to high altitude. [ 3 ]
But there is a temperature gradient between the ground (generally a few degrees Celsius) and flight altitude (generally several tens of degrees below zero), and the speed of sound is higher in the warmer air below. This causes sound waves travelling downwards to refract towards the horizontal. [ 1 ] Accordingly, so long as the flight path is high enough and the aircraft's speed is not too far above Mach 1, this refraction turns the shock wave back toward the horizontal before it reaches the ground, so that at most a weak rumble of ordinary (subsonic) sound arrives at ground level; this is called Mach cutoff. [ 1 ] : 2
The calculation differs from flight to flight, as the required refraction depends on the temperature gradient, the air pressure gradient and the differing wind speeds in the air column. [ 4 ]
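As a rough, hedged sketch of the simplest case (a windless, horizontally stratified atmosphere, where by Snell's law the boom turns horizontal before reaching the ground whenever the flight Mach number is below the ratio of the ground-level speed of sound to the speed of sound at flight altitude; the temperatures and function names below are illustrative assumptions, not from the source):

```python
import math

GAMMA, R_AIR = 1.4, 287.05   # dry-air constants

def speed_of_sound(T_kelvin):
    return math.sqrt(GAMMA * R_AIR * T_kelvin)

def cutoff_mach(T_ground, T_flight):
    """Largest flight Mach number whose boom refracts to horizontal before reaching
    the ground, for a windless, horizontally stratified atmosphere."""
    return speed_of_sound(T_ground) / speed_of_sound(T_flight)

# Standard-atmosphere-like temperatures: 15 C at the surface, -56.5 C at 11 km.
print(round(cutoff_mach(288.15, 216.65), 3))   # about 1.15
```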
This aviation -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Mach_cutoff |
The Mach number ( M or Ma ), often only Mach , ( / m ɑː k / ; German: [max] ) is a dimensionless quantity in fluid dynamics representing the ratio of flow velocity past a boundary to the local speed of sound . [ 1 ] [ 2 ] It is named after the Austrian physicist and philosopher Ernst Mach .
M = u c , {\displaystyle \mathrm {M} ={\frac {u}{c}},}
where:
By definition, at Mach 1, the local flow velocity u is equal to the speed of sound. At Mach 0.65, u is 65% of the speed of sound (subsonic), and, at Mach 1.35, u is 35% faster than the speed of sound (supersonic).
The local speed of sound, and hence the Mach number, depends on the temperature of the surrounding gas. The Mach number is primarily used to determine the approximation with which a flow can be treated as an incompressible flow . The medium can be a gas or a liquid. The boundary can be travelling in the medium, or it can be stationary while the medium flows along it, or they can both be moving, with different velocities : what matters is their relative velocity with respect to each other. The boundary can be the boundary of an object immersed in the medium, or of a channel such as a nozzle , diffuser or wind tunnel channelling the medium. As the Mach number is defined as the ratio of two speeds, it is a dimensionless quantity. If M < 0.2–0.3 and the flow is quasi-steady and isothermal , compressibility effects will be small and simplified incompressible flow equations can be used. [ 1 ] [ 2 ]
The Mach number is named after the physicist and philosopher Ernst Mach , [ 3 ] in honour of his achievements, according to a proposal by the aeronautical engineer Jakob Ackeret in 1929. [ 4 ] The word Mach is always capitalized since it derives from a proper name, and since the Mach number is a dimensionless quantity rather than a unit of measure , the number comes after the word Mach. It was also known as Mach's number by Lockheed when reporting the effects of compressibility on the P-38 aircraft in 1942. [ 5 ]
Mach number is a measure of the compressibility characteristics of fluid flow : the fluid (air) behaves under the influence of compressibility in a similar manner at a given Mach number, regardless of other variables. [ 6 ] As modeled in the International Standard Atmosphere , dry air at mean sea level , standard temperature of 15 °C (59 °F), the speed of sound is 340.3 meters per second (1,116.5 ft/s; 761.23 mph; 1,225.1 km/h; 661.49 kn). [ 7 ] The speed of sound is not a constant; in a gas, it increases proportionally to the square root of the absolute temperature , and since atmospheric temperature generally decreases with increasing altitude between sea level and 11,000 meters (36,089 ft), the speed of sound also decreases. For example, the standard atmosphere model lapses temperature to −56.5 °C (−69.7 °F) at 11,000 meters (36,089 ft) altitude, with a corresponding speed of sound (Mach 1) of 295.0 meters per second (967.8 ft/s; 659.9 mph; 1,062 km/h; 573.4 kn), 86.7% of the sea level value.
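As a small worked sketch of those figures, the following Python snippet (assuming the ideal-gas relation a = sqrt(γ·R·T) for dry air; the example airspeed and the function names are this sketch's own) reproduces the sea-level and 11,000 m speeds of sound quoted above and converts a true airspeed into a Mach number.

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant of dry air, J/(kg*K)

def speed_of_sound(T_kelvin):
    """Ideal-gas speed of sound, a = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA * R_AIR * T_kelvin)

def mach_number(true_airspeed, T_kelvin):
    return true_airspeed / speed_of_sound(T_kelvin)

print(round(speed_of_sound(288.15), 1))        # sea level, 15 C   -> about 340.3 m/s
print(round(speed_of_sound(216.65), 1))        # 11 000 m, -56.5 C -> about 295.1 m/s
print(round(mach_number(250.0, 216.65), 2))    # 250 m/s at 11 km  -> about Mach 0.85
```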
The terms subsonic and supersonic are used to refer to speeds below and above the local speed of sound, and to particular ranges of Mach values. This occurs because of the presence of a transonic regime around flight (free stream) M = 1 where approximations of the Navier-Stokes equations used for subsonic design no longer apply; the simplest explanation is that the flow around an airframe locally begins to exceed M = 1 even though the free stream Mach number is below this value.
Meanwhile, the supersonic regime is usually used to talk about the set of Mach numbers for which linearised theory may be used, where for example the ( air ) flow is not chemically reacting, and where heat-transfer between air and vehicle may be reasonably neglected in calculations.
Generally, NASA defines high hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Aircraft operating in this regime include the Space Shuttle and various space planes in development.
The subsonic speed range is that range of speeds within which all of the airflow over an aircraft is less than Mach 1. The critical Mach number (Mcrit) is the lowest free stream Mach number at which airflow over any part of the aircraft first reaches Mach 1. The subsonic speed range therefore includes all speeds that are less than Mcrit.
The transonic speed range is that range of speeds within which the airflow over different parts of an aircraft is partly subsonic and partly supersonic. The regime of flight from Mcrit up to Mach 1.3 is therefore called the transonic range.
Aircraft designed to fly at supersonic speeds show large differences in their aerodynamic design because of the radical differences in the behavior of flows above Mach 1. Sharp edges, thin aerofoil sections, and all-moving tailplane / canards are common. Modern combat aircraft must compromise in order to maintain low-speed handling.
Flight can be roughly classified into six categories: subsonic, transonic, supersonic, hypersonic, high-hypersonic and re-entry speeds.
At transonic speeds, the flow field around the object includes both sub- and supersonic parts. The transonic period begins when the first zones of M > 1 flow appear around the object. In the case of an airfoil (such as an aircraft's wing), this typically happens above the wing. Supersonic flow can decelerate back to subsonic only in a normal shock; this typically happens before the trailing edge. (Fig.1a)
As the speed increases, the zone of M > 1 flow increases towards both leading and trailing edges. As M = 1 is reached and passed, the normal shock reaches the trailing edge and becomes a weak oblique shock: the flow decelerates over the shock, but remains supersonic. A normal shock is created ahead of the object, and the only subsonic zone in the flow field is a small area around the object's leading edge. (Fig.1b)
When an aircraft exceeds Mach 1 (i.e. the sound barrier ), a large pressure difference is created just in front of the aircraft . This abrupt pressure difference, called a shock wave , spreads backward and outward from the aircraft in a cone shape (a so-called Mach cone ). It is this shock wave that causes the sonic boom heard as a fast moving aircraft travels overhead. A person inside the aircraft will not hear this. The higher the speed, the narrower the cone; at just over M = 1 it is hardly a cone at all, but closer to a slightly concave plane.
At fully supersonic speed, the shock wave starts to take its cone shape and flow is either completely supersonic, or (in the case of a blunt object), only a very small subsonic flow area remains between the object's nose and the shock wave it creates ahead of itself. (In the case of a sharp object, there is no air between the nose and the shock wave: the shock wave starts from the nose.)
As the Mach number increases, so does the strength of the shock wave and the Mach cone becomes increasingly narrow. As the fluid flow crosses the shock wave, its speed is reduced and temperature, pressure, and density increase. The stronger the shock, the greater the changes. At high enough Mach numbers the temperature increases so much over the shock that ionization and dissociation of gas molecules behind the shock wave begin. Such flows are called hypersonic.
It is clear that any object travelling at hypersonic speeds will likewise be exposed to the same extreme temperatures as the gas behind the nose shock wave, and hence choice of heat-resistant materials becomes important.
As a flow in a channel becomes supersonic, one significant change takes place. The conservation of mass flow rate leads one to expect that contracting the flow channel would increase the flow speed (i.e. making the channel narrower results in faster air flow) and at subsonic speeds this holds true. However, once the flow becomes supersonic, the relationship of flow area and speed is reversed: expanding the channel actually increases the speed.
The obvious result is that in order to accelerate a flow to supersonic, one needs a convergent-divergent nozzle, where the converging section accelerates the flow to sonic speeds, and the diverging section continues the acceleration. Such nozzles are called de Laval nozzles and in extreme cases they are able to reach hypersonic speeds (Mach 13 (15,900 km/h; 9,900 mph) at 20 °C).
When the speed of sound is known, the Mach number at which an aircraft is flying can be calculated by M = u c {\displaystyle \mathrm {M} ={\frac {u}{c}}}
where: u is the velocity of the moving aircraft and c is the speed of sound at the given altitude (more precisely, at the given temperature),
and the speed of sound varies with the thermodynamic temperature as:
c = γ ⋅ R ∗ ⋅ T , {\displaystyle c={\sqrt {\gamma \cdot R_{*}\cdot T}},}
where: γ is the ratio of specific heats of the gas (approximately 1.4 for air), R ∗ is the specific gas constant for the gas (approximately 287 J/(kg·K) for dry air), and T is the thermodynamic temperature of the gas.
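Combining the two relations above gives a direct way to estimate the flight Mach number from true airspeed and temperature. In the sketch below the airspeed and temperature values are illustrative, and γ and R ∗ take their dry-air values:
from math import sqrt

def mach_number(true_airspeed, temp_kelvin, gamma=1.4, r_specific=287.05):
    c = sqrt(gamma * r_specific * temp_kelvin)  # local speed of sound
    return true_airspeed / c                    # M = u / c

# e.g. 250 m/s true airspeed at the ISA tropopause temperature of 216.65 K
print(round(mach_number(250.0, 216.65), 2))  # about 0.85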
If the speed of sound is not known, Mach number may be determined by measuring the various air pressures (static and dynamic) and using the following formula that is derived from Bernoulli's equation for Mach numbers less than 1.0. Assuming air to be an ideal gas , the formula to compute Mach number in a subsonic compressible flow is: [ 8 ]
M = 2 γ − 1 [ ( q c p + 1 ) γ − 1 γ − 1 ] {\displaystyle \mathrm {M} ={\sqrt {{\frac {2}{\gamma -1}}\left[\left({\frac {q_{c}}{p}}+1\right)^{\frac {\gamma -1}{\gamma }}-1\right]}}\,}
where: q c is the impact pressure (the difference between total pressure and static pressure), p is the static pressure, and γ is the ratio of specific heats (approximately 1.4 for air).
The formula to compute Mach number in a supersonic compressible flow is derived from the Rayleigh supersonic pitot equation:
p t p = [ γ + 1 2 M 2 ] γ γ − 1 ⋅ [ γ + 1 1 − γ + 2 γ M 2 ] 1 γ − 1 {\displaystyle {\frac {p_{t}}{p}}=\left[{\frac {\gamma +1}{2}}\mathrm {M} ^{2}\right]^{\frac {\gamma }{\gamma -1}}\cdot \left[{\frac {\gamma +1}{1-\gamma +2\gamma \,\mathrm {M} ^{2}}}\right]^{\frac {1}{\gamma -1}}}
Mach number is a function of temperature and true airspeed.
Aircraft flight instruments , however, operate using pressure differential to compute Mach number, not temperature.
Assuming air to be an ideal gas , the formula to compute Mach number in a subsonic compressible flow is found from Bernoulli's equation for M < 1 (above): [ 8 ] M = 5 [ ( q c p + 1 ) 2 7 − 1 ] {\displaystyle \mathrm {M} ={\sqrt {5\left[\left({\frac {q_{c}}{p}}+1\right)^{\frac {2}{7}}-1\right]}}\,}
The formula to compute Mach number in a supersonic compressible flow can be found from the Rayleigh supersonic pitot equation (above) using parameters for air:
M ≈ 0.88128485 ( q c p + 1 ) ( 1 − 1 7 M 2 ) 2.5 {\displaystyle \mathrm {M} \approx 0.88128485{\sqrt {\left({\frac {q_{c}}{p}}+1\right)\left(1-{\frac {1}{7\,\mathrm {M} ^{2}}}\right)^{2.5}}}}
where: q c is the impact pressure, measured behind a normal shock as by a pitot tube in supersonic flight, and p is the static (free stream) pressure.
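A short sketch shows how the two air-specific formulas can be combined in practice (γ = 1.4 is assumed): the subsonic formula supplies a first estimate and, if that estimate exceeds 1, the supersonic relation is iterated as a fixed point, as described in the next paragraph. Function and variable names, and the sample pressure ratio, are illustrative.
from math import sqrt

def mach_from_pressures(qc, p, tol=1e-10, max_iter=100):
    # Subsonic estimate from the Bernoulli-based formula above
    m = sqrt(5.0 * ((qc / p + 1.0) ** (2.0 / 7.0) - 1.0))
    if m <= 1.0:
        return m
    # Supersonic: fixed-point iteration of the rearranged Rayleigh pitot formula
    for _ in range(max_iter):
        m_next = 0.88128485 * sqrt((qc / p + 1.0) * (1.0 - 1.0 / (7.0 * m * m)) ** 2.5)
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
    return m

print(round(mach_from_pressures(qc=4.64, p=1.0), 2))  # about 2.0 for this pressure ratio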
As can be seen, M appears on both sides of the equation, and for practical purposes a root-finding algorithm must be used for a numerical solution (the equation is a septic equation in M 2 and, though some of these may be solved explicitly, the Abel–Ruffini theorem guarantees that there exists no general form for the roots of these polynomials). It is first determined whether M is indeed greater than 1.0 by calculating M from the subsonic equation. If M is greater than 1.0 at that point, then the value of M from the subsonic equation is used as the initial condition for fixed point iteration of the supersonic equation, which usually converges very rapidly. [ 8 ] Alternatively, Newton's method can also be used. | https://en.wikipedia.org/wiki/Mach_number |
Mach reflection is a supersonic fluid dynamics effect, named for Ernst Mach , and is a shock wave reflection pattern involving three shocks.
Mach reflection can exist in steady, pseudo-steady and unsteady flows. When a shock wave, which is moving with a constant velocity, propagates over a solid wedge, the flow generated by the shock impinges
on the wedge thus generating a second reflected shock, which ensures that the velocity of
the flow is parallel to the wedge surface. Viewed in the frame of the reflection point, this
flow is locally steady, and the flow is referred to as pseudosteady. When
the angle between the wedge and the primary shock is sufficiently large, a single reflected
shock is not able to turn the flow to a direction parallel to the wall and a transition to Mach
reflection occurs. [ 1 ]
In a steady flow situation, if a wedge is placed into a steady supersonic flow in such
a way that its oblique attached shock impinges on a flat wall parallel to the free stream,
the shock turns the flow toward the wall and a reflected shock is required to turn the flow
back to a direction parallel to the wall. When the shock angle exceeds a certain value, the
deflection achievable by a single reflected shock is insufficient to turn the flow back to a
direction parallel to the wall and transition to Mach reflection is observed. [ 1 ]
Mach reflection consists of three shocks, namely the incident shock, the reflected shock and a Mach stem, as well as a slip plane. The point where the three shocks meet is known as the 'triple point' in two dimensions, or a shock-shock in three dimensions. [ 2 ]
The only type of Mach reflection possible in steady flow is direct-Mach reflection, in which the Mach stem is convex away from the oncoming flow, and the slip plane slopes towards the reflecting surface.
Recent results [ 3 ] [ 4 ] [ 5 ] have identified a new configuration of shock waves, one with a negative angle of reflection in steady flow. Numerical simulations demonstrate two forms of this configuration, one with a kinked reflected shock wave and one that is an unstable double Mach configuration, depending on the transition path.
In pseudo-steady flows, the triple point moves away from the reflecting surface and the reflection is a direct-Mach reflection. In unsteady flows, it is also possible that the triple point remains stationary relative to the reflecting surface (stationary-Mach reflection), or moves toward the reflecting surface (inverse-Mach reflection). In inverse Mach reflection, the Mach stem is convex toward the oncoming flow, and the slip plane curves away from the reflecting surface. Each one of these configurations can assume one of the following three possibilities: single-Mach reflection, transitional-Mach reflection and double-Mach reflection. [ 2 ] | https://en.wikipedia.org/wiki/Mach_reflection |
Mach tuck is an aerodynamic effect whereby the nose of an aircraft tends to pitch downward as the airflow around the wing reaches supersonic speeds. This diving tendency is also known as tuck under . [ 1 ] The aircraft will first experience this effect at significantly below Mach 1. [ 2 ]
Mach tuck is usually caused by two things: a rearward movement of the centre of pressure of the wing, and a decrease in wing downwash velocity at the tailplane , both of which cause a nose down pitching moment . [ citation needed ] For a particular aircraft design only one of these may be significant in causing a tendency to dive — for example, a delta-winged aircraft with no foreplane or tailplane in the first case, and the Lockheed P-38 [ 3 ] in the second case. Alternatively, a particular design may have no significant tendency, such as the Fokker F28 Fellowship . [ 4 ]
As an aerofoil generating lift moves through the air, the air flowing over the top surface accelerates to a higher local speed than the air flowing over the bottom surface. When the aircraft speed reaches its critical Mach number the accelerated airflow locally reaches the speed of sound and creates a small shock wave, even though the aircraft is still travelling below the speed of sound. [ 5 ] The region in front of the shock wave generates high lift. As the aircraft itself flies faster, the shock wave over the wing gets stronger and moves rearwards, creating high lift further back along the wing. It is this rearward movement of lift which causes the aircraft to tuck or pitch nose-down.
The severity of Mach tuck on any given design is affected by the thickness of the aerofoil, the sweep angle of the wing, and the location of the tailplane relative to the main wing.
A tailplane which is positioned further aft can provide a larger stabilizing pitch-up moment.
The camber and thickness of the aerofoil affect the critical Mach number, with a more highly curved upper surface causing a lower critical Mach number.
On a swept wing the shock wave typically forms first at the wing root , especially if it is more cambered than the wing tip . As speed increases, the shock wave and associated lift extend outwards and, because the wing is swept, backwards.
The changing airflow over the wing can reduce the downwash over a conventional tailplane, promoting a stronger nose-down pitching moment.
Another problem with a separate horizontal stabilizer is that it can itself achieve local supersonic flow with its own shock wave. This can affect the operation of a conventional elevator control surface.
Aircraft without enough elevator authority to maintain trim and fly level can enter a steep, sometimes unrecoverable dive. [ 6 ] Until the aircraft is supersonic, the faster top shock wave can reduce the authority of the elevator and horizontal stabilizers . [ 7 ]
Mach tuck may or may not occur depending on aircraft design. Many modern aircraft have little or no effect. [ 8 ]
Recovery is sometimes impossible in subsonic aircraft; however, as an aircraft descends into lower, warmer, denser air, control authority (meaning the ability to control the aircraft) may return because drag tends to slow the aircraft while the speed of sound and control authority both increase.
To prevent Mach stall from progressing, the pilot should keep the airspeed below the type's critical Mach number by reducing thrust , extending air brakes , and if possible, extending the landing gear .
A number of design techniques are used to counter the effects of Mach tuck.
On both conventional tailplane and canard foreplane configurations, the horizontal stabiliser may be made large and powerful enough to correct the large trim changes associated with Mach tuck. In place of the conventional elevator control surface, the whole stabiliser may be made moveable or "all-flying", sometimes called a stabilator . This both increases the authority of the stabilizer over a wider range of aircraft pitch, but also avoids the controllability issues associated with a separate elevator. [ 7 ]
Aircraft that fly supersonic for long periods, such as Concorde , may compensate for Mach tuck by moving fuel between tanks in the fuselage to change the position of the centre of mass to match the changing location of the centre of pressure, thereby minimizing the amount of aerodynamic trim required.
A Mach trimmer is a device which varies the pitch trim automatically as a function of Mach number to oppose Mach tuck and maintain level flight.
The fastest World War II fighters were the first aircraft to experience Mach tuck. Their wings were not designed to counter Mach tuck because research on supersonic airfoils was just beginning; areas of supersonic flow, together with shock waves and flow separation, [ 9 ] were present on the wing. This condition was known at the time as compressibility burble and was known to exist on propeller tips at high aircraft speeds. [ 10 ]
The P-38 was one of the first 400 mph fighters, and it suffered more than the usual teething troubles. [ 11 ] It had a thick, high-lift wing, distinctive twin booms and a single, central nacelle containing the cockpit and armament. It quickly accelerated to terminal velocity in a dive. The short stubby fuselage had a detrimental effect in reducing the critical Mach number of the 15% thick wing center section with high velocities over the canopy adding to those on the upper surface of the wing. [ 12 ] Mach tuck occurred at speeds above Mach 0.65; [ 3 ] the air flow over the wing center section became transonic , causing a loss of lift. The resultant change in downwash at the tail caused a nose-down pitching moment and the dive to steepen (Mach tuck). The aircraft was very stable in this condition [ 3 ] making recovery from the dive very difficult.
Dive recovery (auxiliary) [ 13 ] flaps were added to the underside of the wing (P-38J-LO) to increase the wing lift and downwash at the tail to allow recovery from transonic dives.
This article incorporates public domain material from Airplane Flying Handbook . United States government . This article incorporates public domain material from Pilot's Handbook of Aeronautical Knowledge . United States government . | https://en.wikipedia.org/wiki/Mach_tuck |
In fluid dynamics , a Mach wave , also known as a weak discontinuity , [ 1 ] [ 2 ] is a pressure wave traveling with the speed of sound caused by a slight change of pressure added to a compressible flow . These weak waves can combine in supersonic flow to become a shock wave if sufficient Mach waves are present at any location. Such a shock wave is called a Mach stem or Mach front . Thus, it is possible to have shockless compression or expansion in a supersonic flow by having the production of Mach waves sufficiently spaced ( cf. isentropic compression in supersonic flows). A Mach wave is the weak limit of an oblique shock wave where time averages of flow quantities don't change (a normal shock is the other limit). If the size of the object moving at the speed of sound is near 0, then this domain of influence of the wave is called a Mach cone . [ 3 ] [ 4 ]
A Mach wave propagates across the flow at the Mach angle μ , which is the angle formed between the Mach wave wavefront and a vector that points opposite to the vector of motion. [ 3 ] [ 5 ] It is given by
sin μ = 1 M , {\displaystyle \sin \mu ={\frac {1}{\mathrm {M} }},}
where M is the Mach number .
Mach waves can be used in schlieren or shadowgraph observations to determine the local Mach number of the flow. Early observations by Ernst Mach used grooves in the wall of a duct to produce Mach waves, which were then photographed by the schlieren method, to obtain data about the flow in nozzles and ducts. Mach angles may also occasionally be visualized through the condensation they produce in air, for example the vapor cones around aircraft during transonic flight. | https://en.wikipedia.org/wiki/Mach_wave |
Mache (symbol ME from German Mache-Einheit, plural Maches) is a unit of volumic radioactivity named for the Austrian physicist Heinrich Mache . [ 1 ] It was defined as the quantity of radon (ignoring its daughter isotopes ; in practice, mostly radon-222) per litre of air whose ionisation sustains a current of 0.001 esu (0.001 statampere ).
1 ME = 3.64 Eman = 3.64×10 −10 Ci /L = 13.4545 Bq /L.
This radioactivity –related article is a stub . You can help Wikipedia by expanding it .
This standards - or measurement -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Mache_(unit) |
A machine check exception ( MCE ) is a type of computer error that occurs when a problem involving the computer's hardware is detected. With most mass-market personal computers, an MCE indicates faulty or misconfigured hardware.
The nature and causes of MCEs can vary by architecture and generation of system. In some designs, an MCE is always an unrecoverable error, that halts the machine, requiring a reboot . In other architectures, some MCEs may be non-fatal, such as for single-bit errors corrected by ECC memory . On some architectures, such as PowerPC , certain software bugs can cause MCEs, such as an invalid memory access. On other architectures, such as x86 , MCEs typically originate from hardware only.
IBM System/360 Operating System ( OS/360 ) records input/output errors in a dataset called SYS1.LOGREC. Since then IBM has coined the term error recording data set ( ERDS ) for successor versions that allow the installation to choose the name and for operating systems not derived from OS/360. [ 1 ]
In OS/360, the installation can choose several levels of support for handling machine checks. The most sophisticated, Machine Check Handler (MCH), records failure data on SYS1.LOGREC and attempts recovery. The installation can print those data using the Environmental Record Editing and Printing Program (EREP) service aid or the stand-alone version SEREP. The MCH can handle memory failures in refreshable nucleus control sections by reading a fresh copy from SYS1.ASRLIB and can handle memory errors in SVC transient areas by reading a fresh copy of the SVC module from SYS1.SVCLIB.
In z/OS the installation can either use an ERDS or can define a z/OS System Logger log stream [ 2 ] to hold the error data. As with OS/360, the installation uses EREP to print those data; SEREP is no longer available. The MCH is no longer optional, and handles many more failure modes than the OS/360 MCH.
On Microsoft Windows platforms, in the event of an unrecoverable MCE, the system generates a BugCheck — also called a STOP error, or a Blue Screen of Death .
More recent versions of Windows use the Windows Hardware Error Architecture (WHEA), and generate STOP code 0x124, WHEA_UNCORRECTABLE_ERROR. The four parameters (in parentheses) will vary, but the first is always 0x0 for an MCE. [ 3 ] Example:
Older versions of Windows use the Machine Check Architecture , with STOP code 0x9C, MACHINE_CHECK_EXCEPTION. [ 4 ] Example:
On Linux , the kernel writes messages about MCEs to the kernel message log and the system console . When the MCEs are not fatal, they will also typically be copied to the system log and/or systemd journal . For some systems, ECC and other correctable errors may be reported through MCE facilities. [ 5 ]
Example:
Some of the main hardware problems that cause MCEs include:
Machine checks are a hardware problem, not a software problem. They are often the result of overclocking or overheating. In some cases, the CPU will shut itself off once passing a thermal limit to avoid permanent damage. But they can also be caused by bus errors introduced by other failing components, like memory or I/O devices. Possible causes include:
Cooling problems are usually obvious upon inspection. A failing motherboard or processor can be identified by swapping them with functioning parts. Memory can be checked by booting to a diagnostic tool, like memtest86 . Non-essential failing I/O devices and controllers can be identified by unplugging them if possible or disabling the devices to see if the problem disappears. If failures typically occur only fairly soon after the OS is booted, and otherwise not at all or not for days, a power supply issue may be the cause. With a power supply problem, the failure often occurs when power demand peaks as the OS starts up any external devices for use.
For IA-32 and Intel 64 processors, consult the Intel 64 and IA-32 Architectures Software Developer's Manual [ 6 ] Chapter 15 (Machine-Check Architecture), or the Microsoft KB Article on Windows Exceptions. [ 7 ]
This computer hardware article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Machine-check_exception |
Machine-learned interatomic potentials ( MLIPs ), or simply machine learning potentials ( MLPs ), are interatomic potentials constructed by machine learning programs. Beginning in the 1990s, researchers have employed such programs to construct potentials by mapping atomic structures to their potential energies.
Such machine learning potentials promised to fill the gap between density functional theory , a highly accurate but computationally intensive modelling method, and empirically derived or intuitively-approximated potentials, which were far lighter computationally but substantially less accurate. Improvements in artificial intelligence technology heightened the accuracy of MLPs while lowering their computational cost, increasing the role of machine learning in fitting potentials. [ 1 ] [ 2 ]
Machine learning potentials began by using neural networks to tackle low-dimensional systems. While promising, these models could not systematically account for interatomic energy interactions; they could be applied to small molecules in a vacuum, or molecules interacting with frozen surfaces, but not much else – and even in these applications, the models often relied on force fields or potentials derived empirically or with simulations. [ 1 ] These models thus remained confined to academia.
Modern neural networks construct highly accurate and computationally light potentials, as theoretical understanding of materials science was increasingly built into their architectures and preprocessing. Almost all are local, accounting for all interactions between an atom and its neighbors within some cutoff radius. There exist some nonlocal models, but these have been experimental for almost a decade. For most systems, reasonable cutoff radii enable highly accurate results. [ 1 ] [ 3 ]
Almost all neural networks intake atomic coordinates and output potential energies. For some, these atomic coordinates are converted into atom-centered symmetry functions. From this data, a separate atomic neural network is trained for each element; each atomic network is evaluated whenever that element occurs in the given structure, and then the results are pooled together at the end. This process – in particular, the atom-centered symmetry functions which convey translational, rotational, and permutational invariances – has greatly improved machine learning potentials by significantly constraining the neural network search space. Other models use a similar process but emphasize bonds over atoms, using pair symmetry functions and training one network per atom pair. [ 1 ] [ 4 ]
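A minimal sketch of the atom-centred architecture described above is given below; the descriptor length, layer sizes and random weights are illustrative assumptions rather than any particular published model:
import numpy as np

def atomic_energy(descriptor, net):
    # One small fully connected network per element: descriptor -> atomic energy
    hidden = np.tanh(descriptor @ net["W1"] + net["b1"])
    return float(hidden @ net["W2"] + net["b2"])

def total_energy(atoms, element_nets):
    # atoms: list of (element, descriptor) pairs, where each descriptor is a
    # vector of atom-centred symmetry functions describing that atom's local
    # environment; per-atom energies are pooled (summed) into the total.
    return sum(atomic_energy(desc, element_nets[elem]) for elem, desc in atoms)

rng = np.random.default_rng(0)
def random_net(n_in=8, n_hidden=16):
    return {"W1": rng.normal(size=(n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(size=n_hidden), "b2": 0.0}

nets = {"O": random_net(), "H": random_net()}
water = [("O", rng.normal(size=8)), ("H", rng.normal(size=8)), ("H", rng.normal(size=8))]
print(total_energy(water, nets))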
Other models learn their own descriptors rather than using predetermined symmetry-dictating functions. These models, called message-passing neural networks (MPNNs), are graph neural networks. Treating molecules as three-dimensional graphs (where atoms are nodes and bonds are edges), the model takes feature vectors describing the atoms as input, and iteratively updates these vectors as information about neighboring atoms is processed through message functions and convolutions . These feature vectors are then used to predict the final potentials. The flexibility of this method often results in stronger, more generalizable models. In 2017, the first-ever MPNN model (a deep tensor neural network) was used to calculate the properties of small organic molecules. Such technology was commercialized, leading to the development of Matlantis in 2022, which extracts properties through both the forward and backward passes . [ citation needed ]
One popular class of machine-learned interatomic potential is the Gaussian Approximation Potential (GAP), [ 5 ] [ 6 ] [ 7 ] which combines compact descriptors of local atomic environments [ 8 ] with Gaussian process regression [ 9 ] to machine learn the potential energy surface of a given system. To date, the GAP framework has been used to successfully develop a number of MLIPs for various systems, including for elemental systems such as Carbon , [ 10 ] [ 11 ] Silicon , [ 12 ] Phosphorus , [ 13 ] and Tungsten , [ 14 ] as well as for multicomponent systems such as Ge 2 Sb 2 Te 5 [ 15 ] and austenitic stainless steel , Fe 7 Cr 2 Ni. [ 16 ] | https://en.wikipedia.org/wiki/Machine-learned_interatomic_potential |
In communications and computing , a machine-readable medium (or computer-readable medium ) is a medium capable of storing data in a format easily readable by a digital computer or a sensor .
It contrasts with human-readable medium and data .
The result is called machine-readable data or computer-readable data , and the data itself can be described as having machine-readability .
Machine-readable data must be structured data . [ 1 ]
Attempts to create machine-readable data occurred as early as the 1960s. At the same time that seminal developments in machine-reading and natural-language processing were releasing (like Weizenbaum's ELIZA ), people were anticipating the success of machine-readable functionality and attempting to create machine-readable documents. One such example was musicologist Nancy B. Reich 's creation of a machine-readable catalog of composer William Jay Sydeman 's works in 1966.
In the United States, the OPEN Government Data Act of 14 January 2019 defines machine-readable data as "data in a format that can be easily processed by a computer without human intervention while ensuring no semantic meaning is lost." The law directs U.S. federal agencies to publish public data in such a manner, [ 2 ] ensuring that "any public data asset of the agency is machine-readable". [ 3 ]
Machine-readable data may be classified into two groups: human-readable data that is marked up so that it can also be read by machines (e.g. microformats , RDFa , HTML ), and data file formats intended principally for processing by machines ( CSV , RDF , XML , JSON ). These formats are only machine readable if the data contained within them is formally structured; exporting a CSV file from a badly structured spreadsheet does not meet the definition.
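For illustration, the short sketch below (with made-up field names) shows the same record expressed in two of the formats mentioned above and parsed with standard-library tools; it is the formal structure, not the file extension, that makes the data machine-readable:
import csv, io, json

csv_text = "title,year\nExample Report,2019\n"
json_text = '{"title": "Example Report", "year": 2019}'

print(next(csv.DictReader(io.StringIO(csv_text))))  # {'title': 'Example Report', 'year': '2019'}
print(json.loads(json_text))                        # {'title': 'Example Report', 'year': 2019}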
Machine readable is not synonymous with digitally accessible . A digitally accessible document may be online, making it easier for humans to access via computers, but its content is much harder to extract, transform, and process via computer programming logic if it is not machine-readable. [ 4 ]
Extensible Markup Language (XML) is designed to be both human- and machine-readable, and Extensible Stylesheet Language Transformation (XSLT) is used to improve the presentation of the data for human readability. For example, XSLT can be used to automatically render XML in Portable Document Format ( PDF ). Machine-readable data can be automatically transformed for human-readability but, generally speaking, the reverse is not true.
For purposes of implementation of the Government Performance and Results Act (GPRA) Modernization Act, the Office of Management and Budget (OMB) defines "machine readable format" as follows: "Format in a standard computer language (not English text) that can be read automatically by a web browser or computer system. (e.g.; xml). Traditional word processing documents and portable document format (PDF) files are easily read by humans but typically are difficult for machines to interpret. Other formats such as extensible markup language ( XML ), ( JSON ), or spreadsheets with header columns that can be exported as comma separated values (CSV) are machine readable formats. As HTML is a structural markup language, discreetly labeling parts of the document, computers are able to gather document components to assemble tables of contents, outlines, literature search bibliographies, etc. It is possible to make traditional word processing documents and other formats machine readable but the documents must include enhanced structural elements." [ 5 ]
Examples of machine-readable media include magnetic media such as magnetic disks , cards, tapes , and drums , punched cards and paper tapes , optical discs , barcodes and magnetic ink characters .
Common machine-readable technologies include magnetic recording, processing waveforms , and barcodes . Optical character recognition (OCR) can be used to enable machines to read information available to humans. Any information retrievable by any form of energy can be machine-readable.
Examples include:
Machine-readable dictionary (MRD) is a dictionary stored as machine-readable data instead of being printed on paper. It is an electronic dictionary and lexical database .
A machine-readable dictionary is a dictionary in an electronic form that can be loaded in a database and can be queried via application software. It may be a single language explanatory dictionary or a multi-language dictionary to support translations between two or more languages or a combination of both. Translation software between multiple languages usually apply bidirectional dictionaries. An MRD may be a dictionary with a proprietary structure that is queried by dedicated software (for example online via internet) or it can be a dictionary that has an open structure and is available for loading in computer databases and thus can be used via various software applications. Conventional dictionaries contain a lemma with various descriptions. A machine-readable dictionary may have additional capabilities and is therefore sometimes called a smart dictionary. An example of a smart dictionary is the Open Source Gellish English dictionary .
The term dictionary is also used to refer to an electronic vocabulary or lexicon as used for example in spelling checkers . If dictionaries are arranged in a subtype-supertype hierarchy of concepts (or terms) then it is called a taxonomy . If it also contains other relations between the concepts, then it is called an ontology . Search engines may use either a vocabulary, a taxonomy or an ontology to optimise the search results. Specialised electronic dictionaries are morphological dictionaries or syntactic dictionaries.
A machine-readable passport (MRP) is a machine-readable travel document (MRTD) with the data on the identity page encoded in optical character recognition format. Many countries began to issue machine-readable travel documents in the 1980s. Most travel passports worldwide are MRPs. The International Civil Aviation Organization (ICAO) requires all ICAO member states to issue only MRPs as of April 1, 2010, and all non-MRP passports must expire by November 24, 2015. [ 7 ]
Machine-readable passports are standardized by the ICAO Document 9303 (endorsed by the International Organization for Standardization and the International Electrotechnical Commission as ISO/IEC 7501-1) and have a special machine-readable zone ( MRZ ), which is usually at the bottom of the identity page at the beginning of a passport. The ICAO 9303 describes three types of documents corresponding to the ISO/IEC 7810 sizes:
The fixed format allows specification of document type, name, document number, nationality, date of birth, sex, and document expiration date. All these fields are required on a passport. There is room for optional, often country-dependent, supplementary information. There are also two sizes of machine-readable visas similarly defined.
Computers with a camera and suitable software can directly read the information on machine-readable passports. This enables faster processing of arriving passengers by immigration officials, and greater accuracy than manually-read passports, as well as faster data entry, more data to be read and better data matching against immigration databases and watchlists.
This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22.
This computer-storage -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Machine-readable_medium_and_data |
The Machine Age [ 1 ] [ 2 ] [ 3 ] is an era that includes the early-to-mid 20th century, sometimes also including the late 19th century. An approximate dating would be about 1880 to 1945. Considered to be at its peak in the time between the first and second world wars, the Machine Age overlaps with the late part of the Second Industrial Revolution (which ended around 1914 at the start of World War I) and continues beyond it until 1945 at the end of World War II. The 1940s saw the beginning of the Atomic Age , where modern physics saw new applications such as the atomic bomb , [ 4 ] the first computers , [ 5 ] and the transistor . [ 6 ] The Digital Revolution ended the intellectual model of the machine age founded in the mechanical and heralding a new more complex model of high technology . The digital era has been called the Second Machine Age , with its increased focus on machines that do mental tasks.
Artifacts of the Machine Age include:
The Machine Age is considered to have influenced: | https://en.wikipedia.org/wiki/Machine_Age |
The Machine Intelligence Research Institute ( MIRI ), formerly the Singularity Institute for Artificial Intelligence ( SIAI ), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence . MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI). [ 1 ] [ 2 ] [ 3 ] However, Yudkowsky began to be concerned that AI systems developed in the future could become superintelligent and pose risks to humanity, [ 1 ] and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which were at the time largely ignored by scientists in the field. [ 2 ]
Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel . The San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism ". [ 4 ] [ 5 ] In 2011, its offices were four apartments in downtown Berkeley. [ 6 ] In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University , [ 7 ] and in the following month took the name "Machine Intelligence Research Institute". [ 8 ]
In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations. [ 3 ] [ 9 ] : 327
In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI. [ 10 ] In April 2020, Open Philanthropy supplemented this with a $7.7M grant over two years. [ 11 ] [ 12 ]
In 2021, Vitalik Buterin donated several million dollars worth of Ethereum to MIRI. [ 13 ]
MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly. [ 3 ] [ 14 ] [ 15 ]
MIRI researchers advocate early safety work as a precautionary measure. [ 16 ] However, MIRI researchers have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner". [ 14 ] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware. [ 17 ]
MIRI aligns itself with the principles and objectives of the effective altruism movement. [ 18 ] | https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute |
In civil engineering , machine control is used to accurately position earthwork machinery based on 3D design models and GPS systems , and thus aid machine operators to e.g. control the position of a road grader 's blade. [ 1 ] Many machine control systems utilize the Real Time Kinematic (RTK) system to improve the positioning accuracy.
There are six dominant manufacturers of machine control systems: iDig , Leica Geosystems , MOBA Mobile Automation AG, Trimble Navigation Limited, Unicontrol and Topcon Positioning Systems. In 2010, The Kellogg Report [ 2 ] was published to serve as a resource for comparing the systems from these and other manufacturers.
This article about a civil engineering topic is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Machine_control |
In the manufacturing industry, with regard to numerically controlled machine tools , the phrase machine coordinate system refers to the physical limits of the motion of the machine in each of its axes, and to the numerical coordinate which is assigned (by the machine tool builder) to each of these limits. [ 1 ] CNC machinery refers to machines and devices that are controlled by programmed commands encoded onto a storage medium, and NC refers to the automation of machine tools operated by such abstract, encoded commands.
The absolute coordinate system uses the cartesian coordinate system, where a point on the machine is specifically defined. The cartesian coordinate system is a set of three number lines labeled X, Y, and Z, which are used to determine the point in the workspace that the machine needs to operate in. This absolute coordinate system allows the machine operator to edit the machine code so that the specifically defined section is easy to pinpoint. Before putting in these coordinates, though, the machine operator needs to set a point of origin on the machine. The point of origin in the cartesian system is 0, 0, 0. This allows the machine operator to know which directions are positive and negative in the cartesian plane. It also makes sure that every move made is based on the distance from this origin point. [ citation needed ]
The relative coordinate system, also known as the incremental coordinate system, also uses the cartesian coordinate system, but in a different manner. The relative coordinate system allows the machine operator to define a point in the workspace based on, or relative to, the previous point that the machine tool was at. This means that after every move the machine tool makes, the point that it ends up at is based on the distance from the previous point. So, the origin set on the machine changes after every move. [ citation needed ]
The polar coordinate system does not use the cartesian coordinate system. It uses the distance from the point of origin to the point, and the angle from either the point of origin or the previous point used. This means that the polar coordinate system can be used in conjunction with either the absolute coordinate system or the relative coordinate system. This just has to be specified within the code of the machine being used. The points in the polar coordinate system can be measured using a ruler and protractor to get an approximate point, or the machine operator can use trigonometry to find the exact number needed for the machine to work. [ 2 ]
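The three conventions can be illustrated with a short sketch that converts a polar move to cartesian coordinates and turns a list of absolute positions into the equivalent incremental moves; the numbers are illustrative and no real machine code is generated:
from math import cos, sin, radians

def polar_to_cartesian(distance, angle_deg, origin=(0.0, 0.0)):
    # Polar input: distance from the reference point and angle from the X axis
    ox, oy = origin
    return (ox + distance * cos(radians(angle_deg)),
            oy + distance * sin(radians(angle_deg)))

def absolute_to_incremental(points, start=(0.0, 0.0, 0.0)):
    # Each incremental move is measured from the previous position
    moves, prev = [], start
    for point in points:
        moves.append(tuple(a - b for a, b in zip(point, prev)))
        prev = point
    return moves

print(polar_to_cartesian(10.0, 30.0))                     # (about 8.66, 5.0)
print(absolute_to_incremental([(5, 0, -1), (5, 4, -1)]))  # [(5, 0, -1), (0, 4, 0)]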
This industry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Machine_coordinate_system |
Machine element or hardware refers to an elementary component of a machine . These elements consist of three basic types: structural components (such as frame members, bearings, axles, splines, fasteners and seals), mechanisms that control movement in various ways (such as gear trains, belt or chain drives, linkages and cam-and-follower systems, including brakes and clutches), and control components (such as buttons, switches, indicators, sensors, actuators and computer controllers).
While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users.
Machine elements are basic mechanical parts and features used as the building blocks of most machines . [ 2 ] Most are standardized to common sizes, but custom designs are also common for specialized applications. [ 3 ]
Machine elements may be features of a part (such as screw threads or integral plain bearings) or they may be discrete parts in and of themselves such as wheels, axles, pulleys, rolling-element bearings , or gears. All of the simple machines may be described as machine elements, and many machine elements incorporate concepts of one or more simple machines. For example, a leadscrew incorporates a screw thread , which is an inclined plane wrapped around a cylinder .
Many mechanical design, invention, and engineering tasks involve a knowledge of various machine elements and an intelligent and creative combining of these elements into a component or assembly that fills a need (serves an application). | https://en.wikipedia.org/wiki/Machine_element |
Machine epsilon or machine precision is an upper bound on the relative approximation error due to rounding in floating point number systems. This value characterizes computer arithmetic in the field of numerical analysis , and by extension in the subject of computational science . The quantity is also called macheps and is denoted by the Greek letter epsilon ε {\displaystyle \varepsilon } .
There are two prevailing definitions, denoted here as rounding machine epsilon or the formal definition and interval machine epsilon or mainstream definition .
In the mainstream definition , machine epsilon is independent of rounding method, and is defined simply as the difference between 1 and the next larger floating point number .
In the formal definition , machine epsilon is dependent on the type of rounding used and is also called unit roundoff , which has the symbol bold Roman u .
The two terms can generally be considered to differ by simply a factor of two, with the formal definition yielding an epsilon half the size of the mainstream definition , as summarized in the tables in the next section.
The following table lists machine epsilon values for standard floating-point formats.
The IEEE standard does not define the terms machine epsilon and unit roundoff , so differing definitions of these terms are in use, which can cause some confusion.
The two terms differ simply by a factor of two. The more widely used term (referred to as the mainstream definition in this article) is used in most modern programming languages and simply defines machine epsilon as the difference between 1 and the next larger floating point number . The formal definition can generally be considered to yield an epsilon half the size of the mainstream definition , although its exact value varies depending on the form of rounding used.
The two terms are described at length in the next two subsections.
The formal definition for machine epsilon is the one used by Prof. James Demmel in lecture scripts, [ 3 ] the LAPACK linear algebra package, [ 4 ] numerics research papers [ 5 ] and some scientific computing software. [ 6 ] Most numerical analysts use the words machine epsilon and unit roundoff interchangeably with this meaning, which is explored in depth throughout this subsection.
Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure.
Some background is needed to determine a value from this definition. A floating point number system is characterized by a radix which is also called the base, b {\displaystyle b} , and by the precision p {\displaystyle p} , i.e. the number of radix b {\displaystyle b} digits of the significand (including any leading implicit bit). All the numbers with the same exponent , e {\displaystyle e} , have the spacing, b e − ( p − 1 ) {\displaystyle b^{e-(p-1)}} . The spacing changes at the numbers that are perfect powers of b {\displaystyle b} ; the spacing on the side of larger magnitude is b {\displaystyle b} times larger than the spacing on the side of smaller magnitude.
Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent e = 0 {\displaystyle e=0} . It also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or b − ( p − 1 ) / 2 {\displaystyle b^{-(p-1)}/2} . This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make the relative error large. The worst relative error therefore happens when rounding is applied to numbers of the form 1 + a {\displaystyle 1+a} where a {\displaystyle a} is between 0 {\displaystyle 0} and b − ( p − 1 ) / 2 {\displaystyle b^{-(p-1)}/2} . All these numbers round to 1 {\displaystyle 1} with relative error a / ( 1 + a ) {\displaystyle a/(1+a)} . The maximum occurs when a {\displaystyle a} is at the upper end of its range. The 1 + a {\displaystyle 1+a} in the denominator is negligible compared to the numerator, so it is left off for expediency, and just b − ( p − 1 ) / 2 {\displaystyle b^{-(p-1)}/2} is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to 1 {\displaystyle 1} , so machine epsilon also is called unit roundoff meaning roughly "the maximum error that can occur when rounding to the unit value".
Thus, the maximum spacing between a normalised floating point number, x {\displaystyle x} , and an adjacent normalised number is 2 ε | x | {\displaystyle 2\varepsilon |x|} . [ 7 ]
Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead, the following simple model is used. The IEEE arithmetic standard says all floating-point operations are done as if it were possible to perform the infinite-precision operation, and then, the result is rounded to a floating-point number. Suppose (1) x {\displaystyle x} , y {\displaystyle y} are floating-point numbers, (2) ∙ {\displaystyle \bullet } is an arithmetic operation on floating-point numbers such as addition or multiplication, and (3) ∘ {\displaystyle \circ } is the infinite precision operation. According to the standard, the computer calculates:
By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so:
where z {\displaystyle z} in absolute magnitude is at most ε {\displaystyle \varepsilon } or u . The books by Demmel and Higham in the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination.
This alternative definition is significantly more widespread: machine epsilon is the difference between 1 and the next larger floating point number . This definition is used in language constants in Ada , C , C++ , Fortran , MATLAB , Mathematica , Octave , Pascal , Python and Rust etc., and defined in textbooks like « Numerical Recipes » by Press et al .
By this definition, ε equals the value of the unit in the last place relative to 1, i.e. b − ( p − 1 ) {\displaystyle b^{-(p-1)}} (where b is the base of the floating point system and p is the precision) and the unit roundoff is u = ε / 2, assuming round-to-nearest mode, and u = ε , assuming round-by-chop .
The prevalence of this definition is rooted in its use in the ISO C Standard for constants relating to floating-point types [ 8 ] [ 9 ] and corresponding constants in other programming languages. [ 10 ] [ 11 ] [ 12 ] It is also widely used in scientific computing software [ 13 ] [ 14 ] [ 15 ] and in the numerics and computing literature. [ 16 ] [ 17 ] [ 18 ] [ 19 ]
Where standard libraries do not provide precomputed values (as < float.h > does with FLT_EPSILON , DBL_EPSILON and LDBL_EPSILON for C and < limits > does with std::numeric_limits< T >::epsilon() in C++), the best way to determine machine epsilon is to refer to the table, above, and use the appropriate power formula. Computing machine epsilon is often given as a textbook exercise. The following examples compute interval machine epsilon in the sense of the spacing of the floating point numbers at 1 rather than in the sense of the unit roundoff.
Note that results depend on the particular floating-point format used, such as float , double , long double , or similar as supported by the programming language, the compiler, and the runtime library for the actual platform.
Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries.
In a strict sense the term machine epsilon means the 1 + ε {\displaystyle 1+\varepsilon } accuracy directly supported by the processor (or coprocessor), not some 1 + ε {\displaystyle 1+\varepsilon } accuracy supported by a specific compiler for a specific operating system, unless it's known to use the best format.
IEEE 754 floating-point formats have the property that, when reinterpreted as a two's complement integer of the same width, they monotonically increase over positive values and monotonically decrease over negative values (see the binary representation of 32 bit floats ). They also have the property that 0 < | f ( x ) | < ∞ {\displaystyle 0<|f(x)|<\infty } , and | f ( x + 1 ) − f ( x ) | ≥ | f ( x ) − f ( x − 1 ) | {\displaystyle |f(x+1)-f(x)|\geq |f(x)-f(x-1)|} (where f ( x ) {\displaystyle f(x)} is the aforementioned integer reinterpretation of x {\displaystyle x} ). In languages that allow type punning and always use IEEE 754–1985, we can exploit this to compute a machine epsilon in constant time. For example, in C:
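A minimal sketch of such a routine (the union-based reinterpretation shown here is one common way to do it; the exact layout is an assumption of the sketch):
#include <stdio.h>

typedef union {
    long long i64;
    double    d64;
} dbl_64;

/* Step to the adjacent representable double by incrementing the integer
   reinterpretation, and return the spacing at `value`. */
double machine_eps(double value)
{
    dbl_64 s;
    s.d64 = value;
    s.i64++;
    return s.d64 - value;
}

int main(void)
{
    printf("%g\n", machine_eps(1.0));  /* 2.22045e-16 for IEEE 754 doubles */
    return 0;
}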
This will give a result of the same sign as value. If a positive result is always desired, the return statement of machine_eps can be replaced with:
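One possibility (with fabs from <math.h>):
return fabs(s.d64 - value);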
Example in Python:
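A minimal sketch using only the standard library (Python floats are IEEE 754 binary64 on virtually all platforms):
import sys, math

print(sys.float_info.epsilon)  # 2.220446049250313e-16
print(math.ulp(1.0))           # the same spacing at 1.0 (Python 3.9 and later)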
64-bit doubles give 2.220446e-16, which is 2 −52 as expected.
The following simple algorithm can be used to approximate the machine epsilon, to within a factor of two of its true value, using a linear search .
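A minimal sketch of such a search:
def approximate_epsilon():
    # Halve the candidate until adding half of it to 1.0 no longer changes 1.0;
    # the result is within a factor of two of the true machine epsilon.
    eps = 1.0
    while 1.0 + 0.5 * eps != 1.0:
        eps *= 0.5
    return eps

print(approximate_epsilon())  # 2.220446049250313e-16 for binary64 doubles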
The machine epsilon, ε mach {\textstyle \varepsilon _{\text{mach}}} can also simply be calculated as two to the negative power of the number of bits used for the mantissa.
ε mach = 2 − bits used for magnitude of mantissa {\displaystyle \varepsilon _{\text{mach}}\ =\ 2^{-{\text{bits used for magnitude of mantissa}}}}
If y {\textstyle y} is the machine representation of a number x {\textstyle x} then the absolute relative error in the representation is | x − y x | ≤ ε mach . {\textstyle \left|{\dfrac {x-y}{x}}\right|\leq \varepsilon _{\text{mach}}.} [ 20 ]
The following proof is limited to positive numbers and machine representations using round-by-chop .
If x {\textstyle x} is a positive number we want to represent, it will be between a machine number x b {\textstyle x_{b}} below x {\textstyle x} and a machine number x u {\textstyle x_{u}} above x {\textstyle x} .
If x b = ( 1. b 1 b 2 … b m ) 2 × 2 k {\textstyle x_{b}=\left(1.b_{1}b_{2}\ldots b_{m}\right)_{2}\times 2^{k}} , where m {\textstyle m} is the number of bits used for the magnitude of the significand , then:
x u = [ ( 1. b 1 b 2 … b m ) 2 + ( 0.00 … 1 ) 2 ] × 2 k = [ ( 1. b 1 b 2 … b m ) 2 + 2 − m ] × 2 k = ( 1. b 1 b 2 … b m ) 2 × 2 k + 2 − m × 2 k = ( 1. b 1 b 2 … b m ) 2 × 2 k + 2 − m + k . {\displaystyle {\begin{aligned}x_{u}&=\left[(1.b_{1}b_{2}\ldots b_{m})_{2}+(0.00\ldots 1)_{2}\right]\times 2^{k}\\&=\left[(1.b_{1}b_{2}\ldots b_{m})_{2}+2^{-m}\right]\times 2^{k}\\&=(1.b_{1}b_{2}\ldots b_{m})_{2}\times 2^{k}+2^{-m}\times 2^{k}\\&=(1.b_{1}b_{2}\ldots b_{m})_{2}\times 2^{k}+2^{-m+k}.\end{aligned}}}
Since the representation of x {\textstyle x} will be either x b {\textstyle x_{b}} or x u {\textstyle x_{u}} ,
| x − y | ≤ | x b − x u | = 2 − m + k {\displaystyle {\begin{aligned}\left|x-y\right|&\leq \left|x_{b}-x_{u}\right|\\&=2^{-m+k}\end{aligned}}} | x − y x | ≤ 2 − m + k x ≤ 2 − m + k x b = 2 − m + k ( 1 ⋅ b 1 b 2 … b m ) 2 2 k = 2 − m ( 1 ⋅ b 1 b 2 … b m ) 2 ≤ 2 − m = ε mach . {\displaystyle {\begin{aligned}\left|{\frac {x-y}{x}}\right|&\leq {\frac {2^{-m+k}}{x}}\\&\leq {\frac {2^{-m+k}}{x_{b}}}\\&={\frac {2^{-m+k}}{(1\cdot b_{1}b_{2}\ldots b_{m})_{2}2^{k}}}\\&={\frac {2^{-m}}{(1\cdot b_{1}b_{2}\ldots b_{m})_{2}}}\\&\leq 2^{-m}=\varepsilon _{\text{mach}}.\end{aligned}}}
Although this proof is limited to positive numbers and round-by-chop, the same method can be used to prove the inequality in relation to negative numbers and round-to-nearest machine representations. | https://en.wikipedia.org/wiki/Machine_epsilon |
Machine ethics (or machine morality , computational morality , or computational ethics ) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents . [ 1 ] Machine ethics differs from other ethical fields related to engineering and technology . It should not be confused with computer ethics , which focuses on human use of computers. It should also be distinguished from the philosophy of technology , which concerns itself with technology's grander social effects. [ 2 ]
James H. Moor , one of the pioneering theoreticians in the field of computer ethics , defines four kinds of ethical robots. As an extensive researcher on the studies of philosophy of artificial intelligence , philosophy of mind , philosophy of science , and logic , Moor defines machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents. A machine can be more than one type of agent. [ 3 ]
(See artificial systems and moral responsibility .)
Before the 21st century the ethics of machines had largely been the subject of science fiction , mainly due to computing and artificial intelligence (AI) limitations. Although the definition of "machine ethics" has evolved since, the term was coined by Mitchell Waldrop in the 1987 AI magazine article "A Question of Responsibility":
One thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov's three laws of robotics . [ 4 ]
In 2004, Towards Machine Ethics [ 5 ] was presented at the AAAI Workshop on Agent Organizations: Theory and Practice, [ 6 ] laying out theoretical foundations for machine ethics.
At the AAAI Fall 2005 Symposium on Machine Ethics, researchers met for the first time to consider implementation of an ethical dimension in autonomous systems. [ 7 ] A variety of perspectives of this nascent field can be found in the collected edition Machine Ethics [ 8 ] that stems from that symposium.
In 2007, AI magazine published "Machine Ethics: Creating an Ethical Intelligent Agent", [ 9 ] an article that discussed the importance of machine ethics, the need for machines that represent ethical principles explicitly, and challenges facing those working on machine ethics. It also demonstrated that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of ethical judgments and use that principle to guide its behavior.
In 2009, Oxford University Press published Moral Machines: Teaching Robots Right from Wrong, [ 10 ] which it advertised as "the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics." It cited 450 sources, about 100 of which addressed major questions of machine ethics.
In 2011, Cambridge University Press published a collection of essays about machine ethics edited by Michael and Susan Leigh Anderson, [ 8 ] who also edited a special issue of IEEE Intelligent Systems on the topic in 2006. [ 11 ] The collection focuses on the challenges of adding ethical principles to machines. [ 12 ]
In 2014, the US Office of Naval Research announced that it would distribute $7.5 million in grants over five years to university researchers to study questions of machine ethics as applied to autonomous robots, [ 13 ] and Nick Bostrom 's Superintelligence: Paths, Dangers, Strategies , which raised machine ethics as the "most important...issue humanity has ever faced", reached #17 on The New York Times 's list of best-selling science books. [ 14 ]
In 2016 the European Parliament published a paper [ 15 ] to encourage the Commission to address robots' legal status. [ 16 ] The paper includes sections about robots' legal liability, in which it is argued that their liability should be proportional to their level of autonomy. The paper also discusses how many jobs could be taken by AI robots. [ 17 ]
In 2019 the Proceedings of the IEEE published a special issue on Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems , edited by Alan Winfield , Katina Michael, Jeremy Pitt and Vanessa Evers. [ 18 ] "The issue includes papers describing implicit ethical agents, where machines are designed to avoid unethical outcomes, as well as explicit ethical agents, or machines that either encode or learn ethics and determine actions based on those ethics". [ 19 ]
Some scholars, such as Bostrom and AI researcher Stuart Russell , argue that, if AI surpasses humanity in general intelligence and becomes " superintelligent ", this new superintelligence could become powerful and difficult to control: just as the mountain gorilla 's fate depends on human goodwill, so might humanity's fate depend on a future superintelligence's actions. [ 20 ] In their respective books Superintelligence and Human Compatible , Bostrom and Russell assert that while the future of AI is very uncertain, the risk to humanity is great enough to merit significant action in the present.
This presents the AI control problem: how to build an intelligent agent that will aid its creators without inadvertently building a superintelligence that will harm them. The danger of not designing control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent us from shutting it down. Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human or optimal values). A number of organizations are researching the AI control problem, including the Future of Humanity Institute, the Machine Intelligence Research Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.
AI paradigms have been debated, especially their efficacy and bias. Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3 ) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis ). [ 21 ] In contrast, Chris Santos-Lang has argued in favor of neural networks and genetic algorithms on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable than machines to criminal hackers . [ 22 ] [ 23 ]
In 2009, in an experiment at the École Polytechnique Fédérale de Lausanne's Laboratory of Intelligent Systems, AI robots were programmed to cooperate with each other and tasked with searching for a beneficial resource while avoiding a poisonous one. [ 24 ] During the experiment, the robots were grouped into clans, and the successful members' digital genetic code was used for the next generation, a type of algorithm known as a genetic algorithm. After 50 successive generations, one clan's members discovered how to distinguish the beneficial resource from the poisonous one. The robots then learned to lie to each other in an attempt to hoard the beneficial resource from other robots. [ 24 ] In the same experiment, the same robots also learned to behave selflessly, signaling danger to other robots and even dying to save others. [ 22 ] Machine ethicists have questioned the experiment's implications: the robots' goals were programmed to be "terminal", whereas human motives typically require never-ending learning.
In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and the possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might acquire autonomy, and to what degree they could use it to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is unlikely, but that there are other potential hazards and pitfalls. [ 25 ]
Some experts and academics have questioned the use of robots in military combat, especially robots with a degree of autonomy. [ 26 ] The U.S. Navy funded a report that indicates that as military robots become more complex, we should pay greater attention to the implications of their ability to make autonomous decisions. [ 27 ] [ 28 ] The president of the Association for the Advancement of Artificial Intelligence has commissioned a study of this issue. [ 29 ]
Preliminary work has been conducted on methods of integrating artificial general intelligences (full ethical agents as defined above) with existing legal and social frameworks. Approaches have focused on their legal position and rights. [ 30 ]
Big data and machine learning algorithms have become popular in numerous industries, including online advertising , credit ratings , and criminal sentencing, with the promise of providing more objective, data-driven results, but have been identified as a potential way to perpetuate social inequalities and discrimination . [ 31 ] [ 32 ] A 2015 study found that women were less likely than men to be shown high-income job ads by Google 's AdSense . Another study found that Amazon 's same-day delivery service was intentionally made unavailable in black neighborhoods. Both Google and Amazon were unable to isolate these outcomes to a single issue, and said the outcomes were the result of the black box algorithms they use. [ 31 ]
The U.S. judicial system has begun using quantitative risk assessment software when making decisions related to releasing people on bail and sentencing in an effort to be fairer and reduce the imprisonment rate . These tools analyze a defendant's criminal history, among other attributes. In a study of 7,000 people arrested in Broward County , Florida , only 20% of people predicted to commit a crime using the county's risk assessment scoring system proceeded to commit a crime. [ 32 ] A 2016 ProPublica report analyzed recidivism risk scores calculated by one of the most commonly used tools, the Northpointe COMPAS system, and looked at outcomes over two years. The report found that only 61% of those deemed high-risk committed additional crimes during that period. The report also flagged that African-American defendants were far more likely to be given high-risk scores than their white counterparts. [ 32 ] It has been argued that such pretrial risk assessments violate Equal Protection rights on the basis of race, due to factors including possible discriminatory intent by the algorithm itself, under a theory of partial legal capacity for artificial intelligences. [ 33 ]
In 2016, the Obama administration 's Big Data Working Group—an overseer of various big-data regulatory frameworks—released reports warning of "the potential of encoding discrimination in automated decisions" and calling for "equal opportunity by design" for applications such as credit scoring. [ 34 ] [ 35 ] The reports encourage discourse among policy-makers, citizens, and academics alike, but recognize that no solution yet exists for the encoding of bias and discrimination into algorithmic systems.
In March 2018, in an effort to address rising concerns over machine learning's impact on human rights, the World Economic Forum and Global Future Council on Human Rights published a white paper with detailed recommendations on how best to prevent discriminatory outcomes in machine learning. [ 36 ] The World Economic Forum developed four recommendations based on the UN Guiding Principles of Human Rights to help address and prevent discriminatory outcomes in machine learning: active inclusion, fairness, the right to understanding, and access to redress. [ 36 ]
In January 2020, Harvard University's Berkman Klein Center for Internet and Society published a meta-study of 36 prominent sets of principles for AI, identifying eight key themes: privacy, accountability, safety and security, transparency and explainability , fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. [ 37 ] Researchers at the Swiss Federal Institute of Technology in Zurich conducted a similar meta-study in 2019. [ 38 ]
There have been several attempts to make ethics computable, or at least formal. Isaac Asimov's Three Laws of Robotics are not usually considered suitable for an artificial moral agent, [ 39 ] but whether Kant's categorical imperative can be used has been studied. [ 40 ] It has been pointed out that human value is, in some aspects, very complex. [ 41 ] One way to explicitly surmount this difficulty is to receive human values directly from people through some mechanism, for example by learning them. [ 42 ] [ 43 ] [ 44 ] Another approach is to base current ethical considerations on previous similar situations. This is called casuistry, and it could be implemented through research on the Internet: the consensus of a million past decisions would lead to a new, democracy-dependent decision. [ 9 ] Bruce M. McLaren built an early (mid-1990s) computational model of casuistry, a program called SIROCCO built with AI and case-based reasoning techniques that retrieves and analyzes ethical dilemmas. [ 45 ] But this approach could lead to decisions that reflect society's biases and unethical behavior. The negative effects of this approach can be seen in Microsoft's Tay, a chatterbot that learned to repeat racist and sexually charged tweets. [ 46 ]
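As an illustration of the casuistry approach just described, the following is a minimal, hypothetical sketch (the toy precedent data and all names are invented for illustration): past ethical decisions are encoded as feature vectors, the most similar precedents are retrieved, and their majority verdict decides the new case. A system like this inherits whatever biases its corpus of precedents contains, which is exactly the failure mode noted above.

```python
from collections import Counter

# Hypothetical corpus of past decisions: (feature vector, verdict) pairs.
precedents = [
    ((1.0, 0.0, 0.3), "permissible"),
    ((0.9, 0.1, 0.4), "permissible"),
    ((0.1, 1.0, 0.8), "impermissible"),
    ((0.2, 0.9, 0.7), "impermissible"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def decide(case, k=3):
    """Return the majority verdict of the k most similar past decisions."""
    nearest = sorted(precedents, key=lambda p: distance(p[0], case))[:k]
    verdicts = Counter(v for _, v in nearest)
    return verdicts.most_common(1)[0][0]

print(decide((0.8, 0.2, 0.5)))  # "permissible"
```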
One thought experiment focuses on a Genie Golem with unlimited powers presenting itself to the reader. This Genie declares that it will return in 50 years and demands that it be provided with a definite set of morals it will then immediately act upon. This experiment's purpose is to spark discourse over how best to handle defining sets of ethics that computers may understand. [ 47 ]
Some recent work attempts to reconstruct AI morality and control more broadly as a problem of mutual contestation between AI as a Foucauldian subjectivity on the one hand and humans or institutions on the other hand, all within a disciplinary apparatus . Certain desiderata need to be fulfilled: embodied self-care, embodied intentionality, imagination and reflexivity, which together would condition AI's emergence as an ethical subject capable of self-conduct. [ 48 ]
In science fiction , movies and novels have played with the idea of sentient robots and machines.
Neill Blomkamp 's Chappie (2015) enacts a scenario of being able to transfer one's consciousness into a computer. [ 49 ] Alex Garland 's 2014 film Ex Machina follows an android with artificial intelligence undergoing a variation of the Turing Test , a test administered to a machine to see whether its behavior can be distinguished from that of a human. Films such as The Terminator (1984) and The Matrix (1999) incorporate the concept of machines turning on their human masters.
Asimov considered the issue in the 1950s in I, Robot . At the insistence of his editor John W. Campbell Jr. , he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing his three laws' boundaries to see where they break down or create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances. [ 50 ] Philip K. Dick 's 1968 novel Do Androids Dream of Electric Sheep? explores what it means to be human. In his post-apocalyptic scenario, he questions whether empathy is an entirely human characteristic. The book is the basis for the 1982 science-fiction film Blade Runner . | https://en.wikipedia.org/wiki/Machine_ethics |