Machine guarding is a safety feature on or around manufacturing or other engineering equipment, consisting of a shield or device covering hazardous areas of a machine to prevent contact with body parts or to control hazards such as chips or sparks exiting the machine. Machine guarding provides a means to protect humans from injury while working nearby or while operating equipment. It is often the first line of defense to protect operators from injury while working on or around industrial machinery during normal operations. In the U.S., machine guarding is addressed in OSHA's CFR 1910.212; [ 1 ] in the U.K., machinery safety is covered mainly by PUWER. [ 2 ]
Point guarding refers to guarding of moving parts on a machine that present a hazard to the machine operator or others who may come in contact with the hazard. OSHA 1910.212(a)(2) requires these guards to be “affixed to the machine where possible.” [ 1 ] The type and construction of the guard are determined by its proximity to the hazard and by the type of hazard. Point-of-operation guarding refers to guarding the area of the machine where the work is performed. Construction of the machine and the guarding should “prevent the operator from having any part of his/her body in the danger zone during the operating cycle.” [ 3 ]
Perimeter or barrier guarding refers to a barrier placed around a work area where an automated piece of equipment, such as a robotic arm, performs a function. This type of guarding is generally a wire partition system, but it can also take the form of pressure sensitive mats or light curtains. Wire partition systems used as machine guards must be fixed in place, either on the machine or around its perimeter. These guarding systems may be configured with various sizes of wire mesh, solid sheet metal, or clear polycarbonate panels. [ 4 ] The choice of material depends on the hazard being guarded and the distance between the hazard and the guard. The mesh opening size of the guard depends on its proximity to the hazard: if the guard is installed close to moving parts, the mesh openings must be smaller than if the guard is installed farther away from the moving parts. Mesh opening sizes for specific distances from the hazard are defined in ANSI/RIA R15.06-2012. [ 5 ] Solid barriers such as sheet metal or clear polycarbonate can be used to shield the hazard as well as to contain sparks or liquids. The size of the barrier (height and width) must conform to ANSI/RIA R15.06-2012. [ 5 ] Wire partition guards provide a means of access to the guarded equipment through doors, lift-out sections, or other controlled openings.
Electrical interlocks or other devices tied into the equipment's control circuit are mounted on access points and are configured to stop equipment operations when opened. These access points should also incorporate hardware to comply with the OSHA lockout/tagout regulation 1910.147. [ 6 ] The lockout/tagout hardware allows the operator or maintenance person to lock the equipment in the stopped state while performing duties in the hazardous area.
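As an illustration of the interlock logic described above, the sketch below models in software the rule that the machine may only run when every access point is closed and no lockout/tagout lock is applied. This is a toy model with invented names; real interlocks are implemented in safety-rated hardware circuits, not application code.

```python
# Illustrative sketch only: a simplified software model of an interlocked
# guard door wired into a machine's stop circuit. Real machine safety
# relies on hardware-rated safety relays and PLCs, not application code.

class InterlockedGuard:
    def __init__(self):
        self.door_open = False   # state of the guarded access point
        self.locked_out = False  # lockout/tagout lock applied

    def machine_may_run(self) -> bool:
        # Equipment must stop whenever an access point is opened,
        # and must stay stopped while a lockout/tagout lock is applied.
        return not self.door_open and not self.locked_out

guard = InterlockedGuard()
print(guard.machine_may_run())  # True: door closed, no lockout
guard.door_open = True
print(guard.machine_may_run())  # False: opening the door stops the machine
guard.door_open = False
guard.locked_out = True
print(guard.machine_may_run())  # False: lockout keeps the machine stopped
```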
Light curtain systems may be used alone or in conjunction with other guarding systems. The light curtain projects a field of light beams between two points; when the field is broken at any point, the light curtain system interrupts the circuit it is wired into. These systems can be used where an operator must interact with operating equipment, to prevent movement of the equipment while the operator works in a hazardous area.
Pressure sensitive mats that are wired into the equipment's control system can be placed in hazardous areas and be set to stop the equipment if stepped on. These devices can be used alone or in conjunction with other guarding devices, usually at operator interaction points.
Packaging Equipment or pallet wrappers have a moving load and an arm that stretches plastic wrap around the load. Typically a three sided wire partition guard is placed around the wrapper, and a light curtain controls access to the open side where the wrapper is accessed by a lift truck. [ 7 ]
Robotic Welding Cells incorporate wire mesh fixed barriers outfitted with vinyl welding curtains and light curtains or pressure sensitive mats. The welding curtains mounted inside the fixed barrier control exposure to welding flash, sparks, and spatter from the welding operation, while the light curtains or pressure sensitive mats prevent welding operations while the operator is loading or unloading the weld fixtures.
Robotic Material Handling for palletizing or de-palletizing could use any of the aforementioned guarding systems. Light curtains or pressure sensitive mats around the perimeter of the work area can stop the robot when an operator enters, or a wire partition can be used around the work area with an interlocked gate that stops the robot when opened. | https://en.wikipedia.org/wiki/Machine_guarding |
The term Machine Guidance is used to describe a wide range of techniques which improve the productivity of agricultural, mining, and construction equipment. It is most commonly used to describe systems which incorporate GPS , Motion Measuring Units (MMU), and other devices to provide on-board systems with information about the movement of the machine in either 3, 5, or 7 axes of rotation. Feedback to the operator is provided through audio and visual displays, allowing improved control of the machine in relation to the intended or designed direction of travel. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Machine_guidance |
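One calculation underlying such feedback displays is the machine's deviation from the designed line of travel, often called the cross-track error. The sketch below is a minimal illustration using simple planar (x, y) coordinates in metres rather than real GPS fixes; the function name and sign convention are arbitrary choices, not taken from any particular guidance system.

```python
import math

def cross_track_error(p, a, b):
    """Signed perpendicular distance of point p from the line through a and b.

    Under this sign convention, a negative value means p lies to the
    left of the direction of travel from a to b.
    """
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # 2D cross product of (p - a) with (b - a), normalised by |b - a|
    return ((px - ax) * dy - (py - ay) * dx) / math.hypot(dx, dy)

# Machine at (3, 4); designed path runs from (0, 0) due east to (10, 0).
# The machine is 4 m to the left of the line.
print(cross_track_error((3, 4), (0, 0), (10, 0)))  # -4.0
```

A guidance display would feed this deviation back to the operator (or to automatic steering) continuously as new position fixes arrive.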
The machine industry or machinery industry is a subsector of industry that produces and maintains machines for consumers, industry, and most other companies in the economy.
The machine industry traditionally belongs to heavy industry . Nowadays, many smaller industrial manufacturing companies in this branch are considered part of light industry . Most machine tool manufacturers in the machinery industry are called machine factories .
The machine industry is a subsector of industry that produces a range of products, from power tools , different types of machines , and domestic technology to factory equipment. On the one hand, the machine industry provides:
These means of production are called capital goods , because a certain amount of capital is invested . Many of these production machines require regular maintenance, which is supplied by specialized companies in the machine industry.
On the other hand, the machinery industry supplies consumer goods, including kitchen appliances, refrigerators, washers, dryers, and the like. Production of radios and televisions, however, is generally considered to belong to the electrical equipment industry. The machinery industry itself is a major customer of the steel industry .
The production of the machinery industry varies widely, from single-unit production and series production to mass production . [ 1 ] Single-unit production involves constructing unique products built to specific customer requirements . Due to modular design, such devices and machines can often be manufactured in small series, which significantly reduces costs. At a certain stage in production, the specific customer requirements are built in, and the unique product is created. [ 1 ]
The machinery industry came into existence during the Industrial Revolution . Companies in this emerging field grew out of iron foundries , shipyards , forges and repair shops. [ 2 ] Often companies were a combination of machine factory and shipyard. Early in the 20th century several motorcycle and automobile manufacturers began their own machine factories.
Prior to the industrial revolution, a variety of machines existed, such as clocks , weapons, and running gear for mills ( watermill , windmill , horse mill , etc.). Production of these machines was on a much smaller scale, in artisan workshops, mostly for the local or regional market. With the advent of the industrial revolution, the manufacture of machines with more complex construction began, such as steam engines and steam generators for the evolving industry and transport. [ 2 ] In addition, the emerging machine factories started making production machines such as textile machinery, compressors, agricultural machinery, and engines for ships.
During the first decades of the industrial revolution in England, from 1750, there was a concentration of labor, usually in not yet mechanized factories. Many new machines were invented, which were initially made by the inventors themselves. Early in the 18th century, the first steam engines, Newcomen engines , came into use throughout Britain and Europe, principally to pump water out of mines.
In the 1770s James Watt significantly improved this design. He introduced a steam engine that could readily supply large amounts of energy, which set the mechanization of factories underway. In England certain cities concentrated on making specific products, such as particular types of textiles or pottery. Around these cities a specialized machinery industry arose to enable the mechanization of the plants. Thus, late in the 18th century, the first machinery industry arose in the UK, and also in Germany and Belgium.
The Industrial Revolution received a further boost with the coming of the railways . These arose at the beginning of the 19th century in England as an innovation in the mining industry . The work in coal mines was hard and dangerous, so there was a great need for tools to ease this work. In 1804, Richard Trevithick placed the first steam engine on rails, and in 1825 the Stockton and Darlington Railway was opened, intended to transport coal from the mine to the port. In 1835 the first train in continental Europe ran between Mechelen and Brussels, and in the Netherlands in 1839 the first train ran between Amsterdam and Haarlem. For the machinery industry this brought all sorts of new work: new machinery for metallurgy, machine tools for metalworking, production of steam engines for trains with all their necessities, etc.
In time the market for the machine industry became wider, and specialized products were manufactured for a greater national and often international market. For example, it was not uncommon in the second half of the 19th century for American steelmakers to order their production equipment in England, where new steelmaking techniques were more advanced. In the Far East, Japan imported these products until the early 1930s, when the creation of its own machinery industry got underway.
The term "machinery industry" came into existence later in the 19th century. One of the first times this branch of industry was recognized as such, and was investigated, was in the production statistics of 1907 compiled by the British Ministry of Trade and Industry . In these statistics the output of the engineering industry was divided into forty different categories, including, for example, agricultural machinery, machinery for the textile industry, and equipment and parts for trains and trams. [ 3 ]
The invention of new propulsion techniques based on electric motors, internal combustion engines, and gas turbines brought a new generation of machines in the 20th century, from cars to household appliances. Not only did the product range of the machinery industry increase considerably, but smaller machines in particular could also be fabricated in much greater numbers through mass production . With the rise of mass production in other parts of industry, there was also a high demand for manufacturing and production systems to increase overall output.
A shortage of labor in agriculture and industry at the beginning of the second half of the 20th century raised the need for further mechanization of production, which required more specialized machines. The rise of the computer made further automation of production possible, which in turn set new demands on the machinery industry.
The machinery industry produces different kinds of products, for example engines, pumps, and logistics equipment, for different kinds of markets, from the agriculture, food and beverage, manufacturing, health, and amusement industries to different branches of the consumer market . As such, companies in the machine industry can be classified by product or market. [ 4 ]
In today's world, all kinds of industry classifications exist. Some classifications recognize the machine industry as a specific class and offer a subdivision for this field. For example, the Dutch Standard Industrial Classification of 1993, developed by Statistics Netherlands , gives the following breakdown of the machinery industry:
This composition of the machinery industry has been significantly altered with the latest revision of the Dutch Standard Industrial Classification of 1993. The Standard Industrial Classification of 1974 broke down the machinery industry into nine sectors:
It may be clear that the older classification is by market, and the more recent classification is by product.
The machine industry makes a very diverse range of products. A selection:
In ASEAN, the machine industry is a vital part of the region's economic structure. As of 2023, the industry employed approximately 2.5 million people across member countries. The sector generated a combined turnover of around $120 billion, with about 65% of this revenue coming from exports.
The ASEAN region is home to roughly 12,000 active companies in the machine industry, with the majority being small and medium-sized enterprises. Around 90% of these companies employ fewer than 500 people. On average, each employee in this sector generates approximately $48,000 in revenue annually.
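The per-employee figure quoted above follows directly from the sector totals: turnover divided by total employment. As a quick arithmetic check (using only the figures already given in the text):

```python
# Revenue per employee = combined turnover / total employment
turnover = 120e9   # combined turnover, USD
employees = 2.5e6  # people employed in the sector
print(round(turnover / employees))  # 48000
```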
Prominent companies in the ASEAN machine industry include ST Engineering (Singapore), Aikawa Iron Works (Thailand), and San Miguel Corporation (Philippines). The industry's significant growth can be attributed to favorable macroeconomic policies, open trade regimes, and increasing demand for manufactured goods both within the region and globally.
ASEAN's machine industry has been significantly influenced by advancements in automation and digital infrastructure, contributing to increased productivity and competitiveness on a global scale. [ 5 ]
In Germany, in 2011 about 900,000 people were employed [ 6 ] in the machine industry, with an estimated 300,000 more abroad. The combined turnover of the sector was €238 billion, of which 60% came from exports. There were about 6,600 active companies, and 95% of those companies employed fewer than 500 people. Each employee generated an average of €148,000. Some of the largest companies in Germany are DMG Mori Seiki AG , GEA Group , Siemens AG , and ThyssenKrupp .
In the French machinery industry in 2009 about 650,000 people were employed, and the sector generated a turnover of €44 billion. Because of the financial crisis, the turnover of the sector had fallen by 15 percent. Due to stronger consumer spending and continuing demand from the energy and transport sectors, the damage from the crisis was still limited. [ 7 ] Alternatively, some companies decided to focus their purchases on used industrial equipment, which guarantees attractive prices and better delivery times. [ 8 ] [ 9 ]
In Japan, the machine industry plays a significant role in the economy, employing a considerable number of people and generating substantial revenue. As of 2023, approximately 1.5 million people were employed in the machine tool industry. The combined turnover of the sector was estimated to be around $60 billion, with about 70% of the revenue coming from exports. [ 10 ] [ 11 ]
Japan is home to about 3,000 active companies in the machine industry, with 85% of these companies employing fewer than 500 people. The average revenue generated per employee in the industry is approximately $40,000. Some of the largest and most influential companies in Japan's machine industry include Fanuc Corporation, Mitsubishi Electric Corporation, and DMG Mori Seiki Co., Ltd.
In the Netherlands in 1996, a total of some 93,000 workers were employed in the machinery industry, across approximately 2,500 companies. About 1,000 of these companies had 20 or more employees. [ 1 ] According to the Chamber of Commerce, some 15,000 companies were active in this subsector of Dutch industry in 2011. [ 12 ] Some of the largest companies in the Netherlands are Lely , Philips , and Stork B.V.
U.S. machinery industries had total domestic and foreign sales of $413.7 billion in 2011. The United States is the world’s largest market for machinery, as well as the third-largest supplier. American manufacturers held a 58.5 percent share of the U.S. domestic market. [ 13 ]
Media related to Machinery industry at Wikimedia Commons | https://en.wikipedia.org/wiki/Machine_industry |
In lean manufacturing , machine operator efficiency ( MOE ) is the performance of an employee who operates industrial machinery. [ 1 ] The operator's efficiency is measured as the time spent producing product divided by the time the operator is on duty. [ 2 ] For example: if an operator is assigned to run a CNC machine tool for seven hours, but they only have four hours' worth of continuous uninterrupted output of workpieces—their MOE rating is 57% (4 divided by 7) for this seven-hour period of time.
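The example above can be computed directly. A minimal sketch (the function name is our own, chosen for illustration):

```python
# MOE = time spent producing product / time the operator is on duty
def machine_operator_efficiency(productive_hours, on_duty_hours):
    return productive_hours / on_duty_hours

# Four hours of uninterrupted output over a seven-hour shift:
moe = machine_operator_efficiency(4, 7)
print(f"{moe:.0%}")  # 57%
```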
There is a similar lean manufacturing KPI called overall equipment effectiveness (OEE). The major difference between OEE and MOE is that the OEE rating is on the machine and the MOE is on the person. [ citation needed ]
MOE is a measure of operator performance only, regardless of the type or speed of the machine they are working on. MOE only measures the operator's ability to keep the machine running continuously (loading and unloading parts faster than the automatic cycle time of the machine). The MOE rating of an operator travels with them as they move from one machine to another. This is easily accomplished because the MOE rating is a universal calculation of time and not reliant on the more complex OEE calculations of loading, availability, performance, and quality. [ citation needed ]
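The contrast with OEE can be sketched numerically. OEE is commonly computed as a product of machine-centred factors, while MOE is a single time ratio attached to the operator; the OEE factor values below are invented for illustration, and the MOE figure reuses the four-of-seven-hours example from above.

```python
# OEE: product of three machine-centred factors (values invented)
availability = 0.90  # share of planned time the machine was available
performance = 0.95   # actual vs ideal cycle speed
quality = 0.99       # share of good parts produced
oee = availability * performance * quality
print(round(oee, 3))  # 0.846

# MOE: one time ratio, attached to the person, portable across machines
moe = 4 / 7          # productive time / time on duty
print(round(moe, 2))  # 0.57
```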
Industrial dashboards can be used to display statistics for MOE ratings on each operator. To boost overall profitability for a manufacturing plant, machine operators are sometimes compensated with a salary bonus based on their MOE ratings. [ citation needed ] | https://en.wikipedia.org/wiki/Machine_operator_efficiency |
Machine perception is the capability of a computer system to interpret data in a manner similar to the way humans use their senses to relate to the world around them. [ 1 ] [ 2 ] [ 3 ] The basic way that computers take in and respond to their environment is through attached hardware . Until recently, input was limited to a keyboard or a mouse, but advances in both hardware and software have allowed computers to take in sensory input in a way similar to humans. [ 1 ] [ 2 ]
Machine perception allows the computer to use this sensory input, as well as conventional computational means of gathering information , to gather information with greater accuracy and to present it in a way that is more comfortable for the user . [ 1 ] These capabilities include computer vision , machine hearing , machine touch, and machine smelling, since artificial scents are, at the chemical-compound, molecular, and atomic level, indiscernible from and identical to natural ones. [ 4 ] [ 5 ]
The end goal of machine perception is to give machines the ability to see , feel and perceive the world as humans do, and therefore to be able to explain in a human way why they are making their decisions, to warn us when they are failing and, more importantly, why they are failing. [ 6 ] This purpose is very similar to the proposed purposes for artificial intelligence generally, except that machine perception would only grant machines limited sentience , rather than bestow upon machines full consciousness , self-awareness , and intentionality .
Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and high-dimensional data from the real world to produce numerical or symbolic information, e.g., in the forms of decisions. Computer vision has many applications already in use today such as facial recognition , geographical modeling, and even aesthetic judgment. [ 7 ]
However, machines still struggle to interpret visual input accurately if it is blurry or if the viewpoint from which stimuli are viewed varies often. Computers also struggle to determine the proper nature of a stimulus if it is overlapped by, or seamlessly touching, another stimulus; this relates to the Principle of Good Continuation . Machines also struggle to perceive and record stimuli functioning according to the Apparent Movement principle, a field of research in Gestalt psychology .
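As a minimal illustration of how a vision system locates structure in pixel data, the sketch below (plain Python, with invented pixel values) applies a one-dimensional gradient filter to a row of intensities; the spike in the output marks the edge between a dark and a bright region. Real computer-vision pipelines use 2D filters and far more sophisticated processing.

```python
# A row of pixel intensities: a dark region meeting a bright region.
row = [0, 0, 0, 255, 255, 255]

# Simple horizontal gradient: difference between adjacent pixels.
# A large value marks an abrupt intensity change, i.e. an edge.
gradient = [row[i + 1] - row[i] for i in range(len(row) - 1)]
print(gradient)  # [0, 0, 255, 0, 0]
```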
Machine hearing, also known as machine listening or computer audition , is the ability of a computer or machine to take in and process sound data such as speech or music. [ 8 ] [ 9 ] This area has a wide range of applications, including music recording and compression, speech synthesis , and speech recognition . [ 10 ] Moreover, this technology allows the machine to replicate the human brain's ability to selectively focus on a specific sound against many other competing sounds and background noise, an ability called " auditory scene analysis ". The technology enables the machine to segment several streams occurring at the same time. [ 8 ] [ 11 ] [ 12 ] Many commonly used devices, such as smartphones, voice translators, and cars, make use of some form of machine hearing. Present technology still has challenges in speech segmentation : it is occasionally unable to correctly split words within sentences, especially when they are spoken with an atypical accent.
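A minimal, illustrative sketch of one building block of machine hearing is picking out the dominant frequency of a sampled signal with a discrete Fourier transform. The sample rate and test tone below are invented for the example, and pure Python is used in place of the optimised FFT libraries that real systems rely on.

```python
import math

sample_rate = 800  # samples per second (hypothetical)
n = 800            # one second of audio
# A pure 50 Hz tone standing in for a captured sound.
signal = [math.sin(2 * math.pi * 50 * t / sample_rate) for t in range(n)]

def dft_magnitude(x, k):
    """Magnitude of the k-th DFT bin of signal x."""
    re = sum(v * math.cos(2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * i / len(x)) for i, v in enumerate(x))
    return math.hypot(re, im)

# Scan bins up to the Nyquist limit for the strongest component.
peak = max(range(1, n // 2), key=lambda k: dft_magnitude(signal, k))
print(peak * sample_rate / n)  # 50.0  (Hz)
```

Separating several overlapping sound streams, as in auditory scene analysis, builds on this kind of spectral decomposition but requires much more machinery.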
Machine touch is an area of machine perception in which tactile information is processed by a machine or computer. Applications include tactile perception of surface properties and dexterity, whereby tactile information can enable intelligent reflexes and interaction with the environment. [ 13 ] Though this could possibly be done by measuring when and where friction occurs, as well as the nature and intensity of the friction, machines still have no way of measuring some ordinary physical human experiences, including physical pain. For example, scientists have yet to invent a mechanical substitute for the nociceptors in the body and brain that are responsible for noticing and measuring physical human discomfort and suffering.
Scientists are developing systems for machine olfaction , which can recognize and measure smells as well. Airborne chemicals are sensed and classified with a device sometimes known as an electronic nose . [ 14 ] [ 15 ]
The electronic tongue is an instrument that measures and compares tastes . According to an IUPAC technical report, an "electronic tongue" is an analytical instrument comprising an array of non-selective chemical sensors with partial specificity to different solution components and an appropriate pattern-recognition instrument, capable of recognizing the quantitative and qualitative composition of simple and complex solutions. [ 16 ] [ 17 ]
Chemical compounds responsible for taste are detected by human taste receptors . Similarly, the multi-electrode sensors of electronic instruments detect the same dissolved organic and inorganic compounds . Like human receptors, each sensor has a spectrum of reactions different from the other. The information given by each sensor is complementary, and the combination of all sensors' results generates a unique fingerprint. Most of the detection thresholds of sensors are similar to or better than human receptors.
In the biological mechanism, taste signals are transduced by nerves in the brain into electric signals. The e-tongue's sensor process is similar: the sensors generate electric signals as voltammetric and potentiometric variations.
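The fingerprint-matching step described above can be sketched as a nearest-neighbour comparison of sensor-response vectors: each known solution has a stored fingerprint, and an unknown sample is matched to the closest one. All names and numbers below are invented for illustration; real instruments use more sophisticated pattern-recognition methods.

```python
import math

# Stored fingerprints: one response per sensor in the array (values invented).
known = {
    "orange juice": [0.82, 0.10, 0.55, 0.30],
    "black coffee": [0.12, 0.88, 0.20, 0.65],
}
sample = [0.80, 0.14, 0.50, 0.33]  # responses for an unknown sample

def distance(a, b):
    """Euclidean distance between two fingerprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Match the sample to the closest stored fingerprint.
match = min(known, key=lambda name: distance(known[name], sample))
print(match)  # orange juice
```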
Other than those listed above, some of the future hurdles that the science of machine perception still has to overcome include, but are not limited to:
- Embodied cognition - The theory that cognition is a full-body experience, and therefore can only exist, and only be measured and analyzed, in fullness if all required human abilities and processes are working together through a mutually aware and supportive systems network.
- The Moravec's paradox (see the link)
- The Principle of similarity - The ability young children develop to determine what family a newly introduced stimulus falls under, even when the stimulus is different from the members with which the child usually associates that family. (An example could be a child figuring out that a chihuahua is a dog and house pet rather than vermin.)
- The Unconscious inference : The natural human behavior of determining if a new stimulus is dangerous or not, what it is, and then how to relate to it without ever requiring any new conscious effort.
- The innate human ability to follow the likelihood principle in order to learn from circumstances and others over time.
- The recognition-by-components theory - Being able to mentally analyze and break even complicated mechanisms into manageable parts with which to interact. For example: a person seeing both the cup and the handle that make up a mug full of hot cocoa, and using the handle to hold the mug so as to avoid being burned.
- The free energy principle - Determining well beforehand how much energy one can safely devote to being aware of things outside oneself without losing the energy one requires to sustain one's life and function satisfactorily. This allows one to become optimally aware of the surrounding world without depleting one's energy so much as to experience damaging stress, decision fatigue, and/or exhaustion. | https://en.wikipedia.org/wiki/Machine_perception |
Machine perfusion (MP) is an artificial perfusion technique often used for organ preservation to help facilitate organ transplantation . MP works by continuously pumping a specialized solution through donor organs , mimicking the body's natural blood flow while actively controlling temperature , oxygen levels , chemical composition , and mechanical stress within the organ. By maintaining organ viability outside the body for extended periods, machine perfusion addresses critical challenges in organ transplantation, such as limited preservation times. [ 1 ] [ 2 ] [ 3 ]
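The continuous regulation of perfusate parameters can be illustrated, very loosely, with a feedback control loop. The sketch below is a toy proportional controller with invented gain and temperatures, shown only to make the idea of "actively controlling temperature" concrete; it does not describe any real perfusion device or clinical algorithm.

```python
# Toy model: a proportional controller nudging perfusate temperature
# toward a cold-perfusion setpoint of 4 degrees C. Gain and starting
# temperature are invented for illustration.
setpoint = 4.0      # target perfusate temperature, deg C
temperature = 20.0  # starting temperature, deg C
gain = 0.5          # proportional gain (hypothetical)

for _ in range(20):
    error = setpoint - temperature
    temperature += gain * error  # cooling element acts on the error

print(round(temperature, 3))  # 4.0
```

In a real machine-perfusion system, analogous loops would regulate not just temperature but also oxygenation, perfusate chemistry, and pump pressure, each against its own sensors and setpoints.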
Machine perfusion has various forms and can be categorised according to the temperature of the perfusate: cold (4 °C) and warm (37 °C). [ 4 ] Machine perfusion has been applied to renal transplantation , [ 5 ] liver transplantation [ 6 ] and lung transplantation . [ 7 ] It is an alternative to static cold storage (SCS).
A record-long preservation of a human transplant organ, machine perfusion of a liver for 3 days rather than the usual less than 12 hours, was reported in 2022. The period could possibly be extended to 10 days, preventing the substantial cell damage caused by low-temperature preservation methods. [ 8 ] [ 9 ] Alternative approaches include novel cryoprotectant solvents. [ 10 ] [ 11 ]
There is a novel organ perfusion system under development that can restore, on the cellular level, multiple vital (pig) organs one hour after death (during which the body had prolonged warm ischaemia ), [ 12 ] [ 13 ] and a similar method/system for reviving (pig) brains hours after death. [ 12 ] [ 14 ] The system for cellular recovery could be used to preserve donor organs or for revival treatments in medical emergencies. [ 12 ]
An essential preliminary to the development of kidney storage and transplantation was the work of Alexis Carrel in developing methods for Vascular anastomosis . [ 15 ] Carrel went on to describe the first kidney transplants, which were performed in dogs in 1902; Ullman [ 16 ] independently described similar experiments in the same year. In these experiments kidneys were transplanted without there being any attempt at storage.
The crucial step in making in vitro storage of kidneys possible was the demonstration by Fuhrman in 1943 [ 17 ] of a reversible effect of hypothermia on the metabolic processes of isolated tissues. Prior to this, kidneys had been stored at normal body temperatures using blood or diluted-blood perfusates, [ 18 ] [ 19 ] but no successful reimplantations had been made. Fuhrman showed that slices of rat kidney cortex and brain withstood cooling to 0.2 °C for one hour, at which temperature their oxygen consumption was minimal. When the slices were rewarmed to 37 °C their oxygen consumption recovered to normal.
The beneficial effect of hypothermia on ischaemic intact kidneys was demonstrated by Owens in 1955 [ 20 ] when he showed that, if dogs were cooled to 23-26 °C, and their thoracic aortas were occluded for 2 hours, their kidneys showed no apparent damage when the dogs were rewarmed. This protective effect of hypothermia on renal ischaemic damage was confirmed by Bogardus [ 21 ] who showed a protective effect from surface cooling of dog kidneys whose renal pedicles were clamped in situ for 2 hours. Moyer [ 22 ] demonstrated the applicability of these dog experiments to the human, by showing the same effect on dog and human kidney function from the same periods of hypothermic ischaemia.
It was not until 1958 that it was shown that intact dog kidneys would survive ischaemia even better if they were cooled to lower temperatures. Stueber [ 23 ] showed that kidneys would survive in situ clamping of the renal pedicle for 6 hours if the kidneys were cooled to 0-5 °C by being placed in a cooling jacket, and Schloerb [ 24 ] showed that a similar technique with cooling of heparinised dog kidneys to 2-4 °C gave protection for 8 hours but not 12 hours. Schloerb also attempted in vitro storage and auto-transplantation of cooled kidneys, and had one long term survivor after 4 hours kidney storage followed by reimplantation and immediate contralateral nephrectomy . He also had a near survivor, after 24-hour kidney storage and delayed contralateral nephrectomy, in a dog that developed a late arterial thrombosis in the kidney.
These methods of surface cooling were improved by the introduction of techniques in which the kidney's vascular system was flushed out with cold fluid prior to storage. This had the effect of increasing the speed of cooling of the kidney and removed red cells from the vascular system. Kiser [ 25 ] used this technique to achieve successful 7 hours in vitro storage of a dog kidney, when the kidney had been flushed at 5 °C with a mixture of dextran and diluted blood prior to storage. In 1960 Lapchinsky [ 26 ] confirmed that similar storage periods were possible, when he reported eight dogs surviving after their kidneys had been stored at 2-4 °C for 28 hours, followed by auto-transplantation and delayed contralateral nephrectomy. Although Lapchinsky gave no details in his paper, Humphries [ 27 ] reported that these experiments had involved cooling the kidneys for 1 hour with cold blood, and then storage at 2-4 °C, followed by rewarming of the kidneys over 1 hour with warm blood at the time of reimplantation. The contralateral nephrectomies were delayed for two months.
Humphries [ 27 ] developed this storage technique by continuously perfusing the kidney throughout the period of storage. He used diluted plasma or serum as the perfusate and pointed out the necessity for low perfusate pressures to prevent kidney swelling, but admitted that the optimum values for such variables as perfusate temperature, Po 2 , and flow, remained unknown. His best results, at this time, were 2 dogs that survived after having their kidneys stored for 24 hours at 4-10 °C followed by auto-transplantation and delayed contralateral nephrectomy a few weeks later.
Calne [ 28 ] challenged the necessity of using continuous perfusion methods by demonstrating that successful 12-hour preservation could be achieved using much simpler techniques. Calne had one kidney supporting life even when the contralateral nephrectomy was performed at the same time as the reimplantation operation. Calne merely heparinised dog kidneys and then stored them in iced solution at 4 °C. Although 17-hour preservation was shown to be possible in one experiment when nephrectomy was delayed, no success was achieved with 24-hour storage.
The next advance was made by Humphries [ 29 ] in 1964, when he modified the perfusate used in his original continuous perfusion system, and had a dog kidney able to support life after 24-hour storage, even when an immediate contralateral nephrectomy was performed at the same time as the reimplantation. In these experiments autogenous blood, diluted 50% with Tis-U-Sol solution at 10 °C, was used as the perfusate. The perfusate pressure was 40 mm Hg and perfusate pH 7.11-7.35 (at 37 °C). A membrane lung was used for oxygenation to avoid damaging the blood.
In attempting to improve on these results Manax [ 30 ] investigated the effect of hyperbaric oxygen , and found that successful 48-hour storage of dog kidneys was possible at 2 °C without using continuous perfusion, when the kidneys were flushed with a dextran /Tis-U-Sol solution before storage at 7.9 atmospheres pressure, and if the contralateral nephrectomy was delayed till 2 to 4 weeks after reimplantation. Manax postulated that hyperbaric oxygen might work either by inhibiting metabolism or by aiding diffusion of oxygen into the kidney cells, but he reported no control experiments to determine whether other aspects of his model were more important than hyperbaria.
A marked improvement in storage times was achieved by Belzer in 1967 [ 31 ] when he reported successful 72-hour kidney storage after returning to the use of continuous perfusion using a canine plasma based perfusate at 8-12 °C. Belzer [ 32 ] found that the crucial factor in permitting uncomplicated 72-hour perfusion was cryoprecipitation of the plasma used in the perfusate to reduce the amount of unstable lipo-proteins which otherwise precipitated out of solution and progressively obstructed the kidney's vascular system. A membrane oxygenator was also used in the system in a further attempt to prevent denaturation of the lipo-proteins because only 35% of the lipo-proteins were removed by cryo-precipitation. The perfusate comprised 1 litre of canine plasma, 4 mEq of magnesium sulphate , 250 mL of dextrose , 80 units of insulin , 200,000 units of penicillin and 100 mg of hydrocortisone . Besides being cryo-precipitated , the perfusate was pre-filtered through a 0.22 micron filter immediately prior to use. Belzer used a perfusate pH of 7.4-7.5, a Po 2 of 150–190 mm Hg, and a perfusate pressure of 50–80 mm Hg systolic, in a machine that produced a pulsatile perfusate flow. Using this system Belzer had 6 dogs surviving after their kidneys had been stored for 72 hours and then reimplanted, with immediate contralateral nephrectomies being performed at the reimplantation operations.
Belzer's use of hydrocortisone as an adjuvant to preservation had been suggested by Lotke's work with dog kidney slices, [ 33 ] in which hydrocortisone improved the ability of slices to excrete PAH and oxygen after 30 hour storage at 2-4 °C; Lotke suggested that hydrocortisone might be acting as a lysosomal membrane stabiliser in these experiments. The other components of Belzer's model were arrived at empirically. The insulin and magnesium were used partially in an attempt to induce artificial hibernation , as Suomalainen [ 34 ] found this regime to be effective in inducing hibernation in natural hibernators. The magnesium was also provided as a metabolic inhibitor following Kamiyama's demonstration [ 35 ] that it was an effective agent in dog heart preservation. A further justification for the magnesium was that it was needed to replace calcium which had been bound by citrate in the plasma .
Belzer [ 36 ] demonstrated the applicability of his dog experiments to human kidney storage when he reported his experiences in human renal transplantation using the same storage techniques as he had used for dog kidneys. He was able to store kidneys for up to 50 hours with only 8% of patients requiring post operative dialysis when the donor had been well prepared.
In 1968 Humphries [ 37 ] reported 1 survivor out of 14 dogs following 5 day storage of their kidneys in a perfusion machine at 10 °C, using a diluted plasma medium containing extra fatty acids. However, delayed contralateral nephrectomy 4 weeks after reimplantation was necessary in these experiments to achieve success, and this indicated that the kidneys were severely injured during storage.
In 1969 Collins [ 38 ] reported an improvement in the results that could be achieved with simple non-perfusion methods of hypothermic kidney storage. He based his technique on the observation by Keller [ 39 ] that the loss of electrolytes from a kidney during storage could be prevented by the use of a storage fluid containing cations in quantities approaching those normally present in cells. In Collins' model, the dogs were well hydrated prior to nephrectomy, and were also given mannitol to induce a diuresis . Phenoxybenzamine, a vasodilator and lysosomal enzyme stabiliser, [ 40 ] [ 41 ] was injected into the renal artery before nephrectomy. The kidneys were immersed in saline immediately after removal, and perfused through the renal artery with 100-150 mL of a cold electrolyte solution from a height of 100 cm. The kidneys remained in iced saline for the rest of the storage period. The solution used for these successful cold perfusions imitated the electrolyte composition of intracellular fluid by containing large amounts of potassium and magnesium. The solution also contained glucose, heparin, procaine and phenoxybenzamine. The solution's pH was 7.0 at 25 °C. Collins was able to achieve successful 24-hour storage of 6 kidneys, and 30-hour storage of 3 kidneys, with the kidneys functioning immediately after reimplantation, despite immediate contralateral nephrectomies. Collins emphasised the poor results obtained with a Ringer's solution flush, which gave results similar to those of kidneys treated by surface cooling alone. Liu [ 42 ] reported that Collins' solution could give successful 48-hour storage when the solution was modified by the inclusion of amino acids and vitamins. However, Liu performed no control experiments to show that these modifications were crucial.
Difficulty was found by other workers in repeating Belzer's successful 72-hour perfusion storage experiments. Woods [ 43 ] was able to achieve successful 48-hour storage of 3 out of 6 kidneys when he used the Belzer additives with cryoprecipitated plasma as the perfusate in a hypothermic perfusion system, but he was unable to extend the storage time to 72 hours as Belzer had done. However, Woods [ 44 ] later achieved successful 3 and 7 days storage of dog kidneys. Woods had modified Belzer's perfusate by the addition of 250 mg of methyl prednisolone, increased the magnesium sulphate content to 16.2 mEq and the insulin to 320 units. Six of 6 kidneys produced life sustaining function when they were reimplanted after 72 hours storage despite immediate contralateral nephrectomies; 1 of 2 kidneys produced life sustaining function after 96 hours storage, 1 of 2 after 120 hours storage, and 1 of 2 after 168 hours storage. Perfusate pressure was 60 mm Hg with a perfusate pump rate of 70 beats per minute, and perfusate pH was automatically maintained at 7.4 by a CO 2 titrator. Woods stressed the importance of hydration of the donor and recipient animals. Without the methyl prednisolone, Woods found vessel fragility to be a problem when storage times were longer than 48 hours.
A major simplification to the techniques of hypothermic perfusion storage was made by Johnson [ 45 ] and Claes in 1972 [ 46 ] with the introduction of an albumin based perfusate. This perfusate eliminated the need for the manufacture of the cryoprecipitated and millipore filtered plasma used by Belzer. The preparation of this perfusate had been laborious and time-consuming, and there was the potential risk from hepatitis virus and cytotoxic antibodies. The absence of lipo-proteins from the perfusate meant that the membrane oxygenator could be eliminated from the perfusion circuit, as there was no need to avoid a perfusate/air interface to prevent precipitation of lipo-proteins. Both workers used the same additives as recommended by Belzer.
The solution that Johnson used was prepared by the Blood Products Laboratory (Elstree: England) by extracting heat labile fibrinogen and gamma globulins from plasma to give a plasma protein fraction (PPF) solution. The solution was incubated at 60 °C for 10 hours to inactivate the agent of serum hepatitis. [ 47 ] The result was a 45 g/L human albumin solution containing small amounts of gamma and beta globulins which was stable between 0 °C and 30 °C for 5 years. [ 48 ] PPF contained 2.2 mmol/L of free fatty acids. [ 49 ]
Johnson's [ 45 ] experiments were mainly concerned with the storage of kidneys that had been damaged by prolonged warm injury. However, in a control group of non-warm injured dog kidneys, Johnson showed that 24-hour preservation was easily achieved when using a PPF perfusate, and he described elsewhere [ 50 ] a survivor after 72 hours perfusion and reimplantation with immediate contralateral nephrectomy. With warm injured kidneys, PPF perfusion gave better results than Collins' method, with 6 out of 6 dogs surviving after 40 minutes warm injury and 24-hour storage followed by reimplantation of the kidneys and immediate contralateral nephrectomy. Potassium, magnesium, insulin, glucose, hydrocortisone and ampicillin were added to the PPF solution to provide an energy source and to prevent leakage of intracellular potassium. Perfusate temperature was 6 °C, pressure 40–80 mm Hg, and Po 2 200–400 mm Hg. The pH was maintained between 7.2 and 7.4.
Claes [ 46 ] used a perfusate based on human albumin (Kabi: Sweden) diluted with saline to a concentration of 45 g/L. Claes preserved 4 out of 5 dog kidneys for 96 hours with the kidneys functioning immediately after reimplantation despite immediate contralateral nephrectomies. Claes also compared this perfusate with Belzer's cryoprecipitated plasma in a control group and found no significant difference between the function of the reimplanted kidneys in the two groups.
The only other group besides Woods' to report successful seven-day storage of kidneys was Liu and Humphries [ 51 ] in 1973. They had three out of seven dogs surviving, after their kidneys had been stored for seven days followed by reimplantation and immediate contralateral nephrectomy. Their best dog had a peak post reimplantation creatinine of 50 mg/L (0.44 mmol/L). Liu used well hydrated dogs undergoing a mannitol diuresis and stored the kidneys at 9 °C – 10 °C using a perfusate derived from human PPF. The PPF was further fractionated by using a highly water-soluble polymer (Pluronic F-38), and sodium acetyl tryptophanate and sodium caprylate were added to the PPF as stabilisers to permit pasteurisation. To this solution were added human albumin, heparin, mannitol, glucose, magnesium sulphate, potassium chloride, insulin, methyl prednisolone, carbenicillin, and water to adjust the osmolality to 300-310 mosmol /kg. The perfusate was exchanged after 3.5 days storage. Perfusate pressure was 60 mm Hg or less, at a pump rate of 60 per minute. Perfusate pH was 7.12–7.32 (at 37 °C), Pco2 27–47 mm Hg, and Po 2 173–219 mm Hg. In a further report on this study Humphries [ 52 ] found that when the experiments were repeated with a new batch of PPF no survivors were obtained, and histology of the survivors from the original experiment showed glomerular hypercellularity which he attributed to a possible toxic effect of the Pluronic polymer.
Joyce and Proctor [ 53 ] reported the successful use of a simple dextran based perfusate for 72-hour storage of dog kidneys. 10 out of 17 kidneys were viable after reimplantation and immediate contralateral nephrectomy. Joyce used non pulsatile perfusion at 4 °C with a perfusate containing Dextran 70 (Pharmacia) 2.1%, with additional electrolytes, glucose (19.5 g/L), procaine and hydrocortisone. The perfusate contained no plasma or plasma components. Perfusate pressure was only 30 cm H 2 O, pH 7.34-7.40 and Po 2 250–400 mm Hg. This work showed that, for 72-hour storage, no nutrients other than glucose were needed, and low perfusate pressures and flows were adequate.
In 1973 Sacks [ 54 ] showed that simple ice storage could be successfully used for 72-hour storage when a new flushing solution was used for the initial cooling and flush out of the kidney. Sacks removed kidneys from well hydrated dogs that were diuresing after a mannitol infusion, and flushed the kidneys with 200 mL of solution from a height of 100 cm. The kidneys were then simply kept at 2 °C for 72 hours without further perfusion. Reimplantation was followed by immediate contralateral nephrectomies. The flush solution was designed to imitate intracellular fluid composition and contained mannitol as an impermeable ion to further prevent cell swelling. The osmolality of the solution was 430 mosmol/kg and its pH was 7.0 at 2 °C. The additives that had been used by Collins (dextrose, phenoxybenzamine, procaine and heparin) were omitted by Sacks.
These results have been equalled by Ross [ 55 ] who also achieved successful 72-hour storage without using continuous perfusion, although he was unable to reproduce Collins' or Sacks' results using the original Collins' or Sacks' solutions. Ross's successful solution was similar in electrolyte composition to intracellular fluid with the addition of hypertonic citrate and mannitol. No phosphate, bicarbonate, chloride or glucose were present in the solution; the osmolality was 400 mosmol/kg and the pH 7.1. Five of 8 dogs survived reimplantation of their kidneys and immediate contralateral nephrectomy, when the kidneys had been stored for 72 hours after having been flushed with Ross's solution; but Ross was unable to achieve 7 day storage with this technique even when delayed contralateral nephrectomy was used.
The requirements for successful 72-hour hypothermic perfusion storage have been further defined by Collins who showed that pulsatile perfusion was not needed if a perfusate pressure of 49 mm Hg was used, and that 7 °C was a better temperature for storage than 2 °C or 12 °C. [ 56 ] [ 57 ] He also compared various perfusate compositions and found that a phosphate buffered perfusate could be used successfully, so eliminating the need for a carbon dioxide supply. [ 56 ] Grundmann [ 58 ] has also shown that low perfusate pressure is adequate. He used a mean pulsatile pressure of 20 mm Hg in 72-hour perfusions and found that this gave better results than mean pressures of 15, 40, 50 or 60 mm Hg.
Successful storage up to 8 days was reported by Cohen [ 59 ] using various types of perfusate, with the best result being obtained using a phosphate-buffered perfusate at 8 °C. Inability to repeat these successful experiments was thought to be due to changes that had been made in the way the PPF was manufactured, with a higher octanoic acid content being detrimental. Octanoic acid was shown to be able to stimulate metabolic activity during hypothermic perfusion, [ 60 ] and this stimulation may account for its detrimental effect.
The structural changes that occur during 72-hour hypothermic storage of previously uninjured kidneys have been described by Mackay [ 61 ] who showed how there was progressive vacuolation of the cytoplasm of the cells which particularly affected the proximal tubules . On electron microscopy the mitochondria were seen to become swollen with early separation of the internal cristal membranes and later loss of all internal structure. Lysosomal integrity was well preserved until late, and the destruction of the cell did not appear to be caused by lytic enzymes because there was no more injury immediately adjacent to the lysosomes than in the rest of the cell. [ citation needed ]
Woods [ 44 ] [ 62 ] and Liu [ 51 ] – when describing successful 5 and 7 day kidney storage - described the light microscopic changes seen at the end of perfusion and at post mortem, but found few gross abnormalities apart from some infiltration with lymphocytes and occasional tubular atrophy.
The changes during short perfusions of human kidneys prior to reimplantation have been described by Hill [ 63 ] who also performed biopsies 1 hour after reimplantation. On electron microscopy Hill found endothelial damage which correlated with the severity of the fibrin deposition after reimplantation. The changes that Hill saw in the glomeruli on light microscopy were occasional fibrin thrombi and infiltration with polymorphs. Hill suspected that these changes were an immunologically induced lesion, but found that there was no correlation between the severity of the histological lesion and the presence or absence of immunoglobulin deposits. [ citation needed ]
There are several reports of the analysis of urine produced by kidneys during perfusion storage. Kastagir [ 64 ] analysed urine produced during 24-hour perfusion and found it to be an ultrafiltrate of the perfusate, Scott [ 65 ] found a trace of protein in the urine during 24-hour storage, and Pederson [ 66 ] found only a trace of protein after 36 hours perfusion storage. Pederson mentioned that he had found heavy proteinuria during earlier experiments. Woods [ 62 ] noted protein casts in the tubules of viable kidneys after 5 day storage, but he did not analyse the urine produced during perfusion. In Cohen's study [ 59 ] there was a progressive increase in urinary protein concentration during 8 day preservation until the protein content of the urine equalled that of the perfusate. This may have been related to the swelling of the glomerular basement membranes and the progressive fusion of epithelial cell foot processes that was also observed during the same period of perfusion storage.
The mechanisms that damage kidneys during hypothermic storage can be sub-divided as follows:
At normal temperatures pumping mechanisms in cell walls retain intracellular potassium at high levels and extrude sodium. If these pumps fail, sodium is taken up by the cell and potassium is lost. Water follows the sodium passively and results in swelling of the cells. The importance of this control of cell swelling was demonstrated by McLoughlin [ 67 ] who found a significant correlation between canine renal cortical water content and the ability of kidneys to support life after 36-hour storage. The pumping mechanism is driven by the enzyme system known as Na+K+-activated ATPase [ 68 ] and is inhibited by cold. Levy [ 69 ] found that metabolic activity at 10 °C, as indicated by oxygen consumption measurements, was reduced to about 5% of normal and, because all enzyme systems are affected in a similar way by hypothermia, ATPase activity is markedly reduced at 10 °C.
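Levy's figure of roughly 5% of normal metabolic activity at 10 °C is consistent with the usual Q10 temperature coefficient for enzyme-catalysed reactions. The sketch below is purely illustrative: the uniform Q10 of about 3 and the helper function are assumptions made here for the arithmetic, not values given in the source.

```python
def metabolic_rate_fraction(temp_c, q10=3.0, ref_temp_c=37.0):
    """Fraction of the normothermic metabolic rate remaining at temp_c,
    assuming a single uniform Q10 coefficient for all enzyme systems
    (an illustrative simplification)."""
    return q10 ** ((temp_c - ref_temp_c) / 10.0)

# With an assumed Q10 of 3, cooling from 37 degC to 10 degC leaves
# roughly 5% of normal activity, in line with Levy's measurements.
print(round(metabolic_rate_fraction(10.0), 3))  # ~0.051
```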
There are, however, tissue and species differences in the cold sensitivity of this ATPase which may account for the differences in the ability of tissues to withstand hypothermia. Martin [ 70 ] has shown that in dog kidney cortical cells some ATPase activity is still present at 10 °C but not at 0 °C. In liver and heart cells activity was completely inhibited at 10 °C and this difference in the cold sensitivity of ATPase correlated with the greater difficulty in controlling cell swelling during hypothermic storage of liver and heart cells. A distinct ATPase is found in vessel walls, and this was shown by Belzer [ 71 ] to be completely inhibited at 10 °C, when at this temperature kidney cortical cells ATPase is still active. These experiments were performed on aortic endothelium, but if the vascular endothelium of the kidney has the same properties, then vascular injury may be the limiting factor in prolonged kidney storage.
Willis [ 72 ] has shown how hibernators derive some of their ability to survive low temperatures by having a Na+K+-ATPase which is able to transport sodium and potassium actively across their cell membranes, at 5 °C, about six times faster than in non-hibernators; this transport rate is sufficient to prevent cell swelling.
The rate of cooling of a tissue may also be significant in the production of injury to enzyme systems. Francavilla [ 73 ] showed that when liver slices were rapidly cooled (to 12 °C within 6 minutes), anaerobic glycolysis, as measured on rewarming to 37 °C, was reduced by about 67% compared with slices that had been subjected to delayed cooling. However, dog kidney slices were less severely affected by the rapid cooling than were the liver slices.
All cells require ATP as an energy source for their metabolic activity. The kidney is damaged by anoxia when kidney cortical cells are unable to generate sufficient ATP under anaerobic conditions to meet the needs of the cells. When excising a kidney, some anoxia is inevitable in the interval between dividing the renal artery and cooling the kidney. It has been shown by Bergstrom [ 74 ] that 50% of a dog kidney's cortical ATP content is lost within 1 minute of clamping the renal artery, and similar results were found by Warnick [ 75 ] in whole mouse kidneys, with a fall in cellular ATP of 50% after about 30 seconds of warm anoxia. Warnick and Bergstrom also showed that cooling the kidney immediately after removal markedly reduced any further ATP loss. When these non-warm-injured kidneys were perfused with oxygenated hypothermic plasma, ATP levels were reduced by 50% after 24-hour storage and, after 48 hours, mean tissue ATP levels were a little higher than this, indicating that synthesis of ATP had occurred. Pegg [ 76 ] has shown that rabbit kidneys can resynthesize ATP after a period of perfusion storage following warm injury, but no resynthesis occurred in non-warm-injured kidneys.
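The half-times reported by Bergstrom and Warnick can be read as a first-order decay of cortical ATP during warm anoxia. The exponential model and the two-minute scenario below are illustrative assumptions of this rewrite, not a model proposed by either author.

```python
def atp_fraction(t_seconds, half_life_seconds):
    """Remaining fraction of cortical ATP after t_seconds of warm anoxia,
    assuming simple first-order (exponential) decay -- an assumed model."""
    return 0.5 ** (t_seconds / half_life_seconds)

# Assumed half-lives: ~60 s (Bergstrom, dog) vs ~30 s (Warnick, mouse).
# After 2 minutes of warm anoxia the model predicts:
print(atp_fraction(120, 60))  # 0.25 of the initial ATP (dog half-life)
print(atp_fraction(120, 30))  # 0.0625 (mouse half-life)
```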
Warm anoxia can also occur during reimplantation of the kidney after storage. Lannon [ 77 ] showed, by measurements of succinate metabolism, how the kidney was more sensitive to a period of warm hypoxia occurring after storage than to the same period of warm hypoxia occurring immediately prior to storage.
Active metabolism of glucose with production of bicarbonate has been demonstrated by Pettersson [ 78 ] and Cohen. [ 59 ]
Pettersson's studies [ 78 ] were on the metabolism of glucose and fatty acids by kidneys during 6-day hypothermic perfusion storage, and he found that the kidneys consumed glucose at 4.4 μmol/g/day and fatty acids at 5.8 μmol/g/day. In Cohen's study [ 59 ] the best 8-day stored kidneys consumed glucose and produced bicarbonate at 2.3 and 4.9 μmol/g/day respectively, which made it likely that they were using fatty acids at rates similar to those of Pettersson's dogs' kidneys. The constancy of both the glucose consumption rate and the rate of bicarbonate production implied that no injury was affecting the glycolytic enzyme or carbonic anhydrase enzyme systems.
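To put these consumption rates in perspective, the total glucose used over 8 days of storage is small. The back-of-envelope calculation below assumes, purely for illustration, a 50 g dog kidney; neither the mass nor the helper function comes from the source.

```python
GLUCOSE_MOLAR_MASS_G_PER_MOL = 180.16  # molar mass of glucose

def glucose_consumed_mg(rate_umol_per_g_day, kidney_mass_g, days):
    """Total glucose consumed (mg), assuming a constant per-gram daily rate."""
    total_umol = rate_umol_per_g_day * kidney_mass_g * days
    return total_umol * 1e-6 * GLUCOSE_MOLAR_MASS_G_PER_MOL * 1e3  # umol -> mg

# Cohen's 8-day kidneys at 2.3 umol/g/day, for an assumed 50 g kidney:
print(round(glucose_consumed_mg(2.3, 50, 8)))  # ~166 mg over the whole period
```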
Lee [ 79 ] showed that fatty acids were the preferred substrate of the rabbit's kidney cortex at normothermic temperatures, and glucose the preferred substrate for the medullary cells which normally metabolise anaerobically. Abodeely [ 80 ] showed that both fatty acids and glucose could be utilised by the outer medulla of the rabbit's kidney but that glucose was used preferentially. At hypothermia the metabolic needs of the kidney are much reduced but measurable consumption of glucose, fatty acids and ketone bodies occurs. Horsburgh [ 49 ] showed that lipid is utilised by hypothermic kidneys, with palmitate consumption being 0-15% of normal in the rat kidney cortex at 15 °C. Pettersson [ 78 ] showed that, on a molar basis, glucose and fatty acids were metabolised by hypothermically perfused kidneys at about the same rates. The cortex of the hypothermic dog kidney was shown by Huang [ 81 ] to lose lipid (35% loss of total lipid after 24 hours) unless oleate was added to the kidney perfusate. Huang commented that this loss could affect the structure of the cell and that the loss also suggested that the kidney was utilising fatty acid. In a later publication Huang [ 82 ] showed that dog kidney cortex slices metabolised fatty acids, but not glucose, at 10 °C.
Even if the correct nutrients are provided, they may be lost by absorption into the tubing of the preservation system. Lee [ 83 ] demonstrated that silicone rubber (a material used extensively in kidney preservation systems) absorbed 46% of a perfusate's oleic acid after 4 hours of perfusion.
Abouna [ 84 ] showed that ammonia was released into the perfusate during 3 day kidney storage, and suggested that this might be toxic to the kidney cells unless removed by frequent replacement of the perfusate. Some support for the use of perfusate exchange during long perfusions was provided by Liu [ 51 ] who used perfusate exchange in his successful 7 day storage experiments. Grundmann [ 85 ] also found that 96-hour preservation quality was improved by the use of a double volume of perfusate or by perfusate exchange. However, Grundmann's conclusions were based on comparisons with a control group of only 3 dogs. Cohen [ 59 ] was unable to demonstrate any production of ammonia during 8 days of perfusion and no benefit from perfusate exchange; the progressive alkalinity that occurred during perfusion was shown to be due to bicarbonate production.
Certain perfusates have been shown to have toxic effects on kidneys as a result of the inadvertent inclusion of particular chemicals in their formulation. Collins [ 86 ] showed that the procaine included in the formulation of his flush fluids could be toxic, and Pegg [ 87 ] has commented on how toxic materials, such as PVC plasticizers, may be washed out of perfusion circuit tubing. Dvorak [ 88 ] showed that the methyl-prednisolone addition to the perfusate that was thought to be essential by Woods [ 62 ] might in some circumstances be harmful. He showed that with a much larger dose of methyl-prednisolone in 650 mL of perfusate (compared with the 250 mg in 1 litre used by Woods), irreversible haemodynamic and structural changes were produced in the kidney after 20 hours of perfusion. There was necrosis of capillary loops, occlusion of Bowman's spaces, basement membrane thickening and endothelial cell damage.
The level of nucleotides remaining in the cell after storage was thought by Warnick [ 75 ] to be important in determining whether the cell would be able to re-synthesize ATP and recover after rewarming. Frequent changing of the perfusate or the use of a large volume of perfusate has the theoretical disadvantage that broken down adenine nucleotides may be washed out of the cells and so not be available for re-synthesis into ATP when the kidney is rewarmed.
Nuclear DNA is injured during cold storage of kidneys. Lazarus [ 89 ] showed that single-stranded DNA breaks occurred within 16 hours in hypothermically stored mouse kidneys, with the injury being inhibited a little by storage in Collins' or Sacks' solutions. This nuclear injury differed from that seen in warm injury, in which double-stranded DNA breaks occurred. [ 90 ]
Perfusion storage methods can mechanically injure the vascular endothelium of the kidney, which leads to arterial thrombosis or fibrin deposition after reimplantation. Hill [ 63 ] noted that, in human kidneys, fibrin deposition in the glomerulus after reimplantation, and postoperative function, correlated with the length of perfusion storage. He had taken biopsies at revascularisation from human kidneys preserved by perfusion or ice storage, and showed by electron microscopy that endothelial disruption occurred only in those kidneys that had been perfused. Biopsies taken one hour after revascularisation showed platelets and fibrin adherent to any areas of denuded vascular basement membrane. A different type of vascular damage was described by Sheil [ 91 ] who showed how a jet lesion could be produced distal to the cannula tied into the renal artery, leading to arterial thrombosis approximately 1 cm distal to the cannula site.
There is evidence that immunological mechanisms may injure hypothermically perfused kidneys after reimplantation if the perfusate contained specific antibody. Cross [ 92 ] described two pairs of human cadaver kidneys that were perfused simultaneously with cryoprecipitated plasma containing type specific HLA antibody to one of the pairs. Both these kidneys suffered early arterial thrombosis. Light [ 93 ] described similar hyperacute rejection following perfusion storage and showed that the cryoprecipitated plasma used contained cytotoxic IgM antibody. This potential danger of using cryoprecipitated plasma was demonstrated experimentally by Filo [ 94 ] who perfused dog kidneys for 24 hours with specifically sensitised cryoprecipitated dog plasma and found that he could induce glomerular and vascular lesions with capillary engorgement, endothelial swelling, infiltration by polymorphonuclear leucocytes and arterial thrombosis. Immunofluorescent microscopy demonstrated specific binding of IgG along endothelial surfaces, in glomeruli, and also in vessels. After reimplantation, complement fixation and tissue damage occurred in a similar pattern. There was some correlation between the severity of the histological damage and subsequent function of the kidneys.
Many workers have attempted to prevent kidneys rewarming during reimplantation but only Cohen has described using a system of active cooling. [ 59 ] Measurements of lysosomal enzyme release from kidneys subjected to sham anastomoses, when either in or out of the cooling system, demonstrated how sensitive kidneys were to rewarming after a period of cold storage, and confirmed the effectiveness of the cooling system in preventing enzyme release. A further factor in minimising injury at the reimplantation operations may have been that the kidneys were kept at 7 °C within the cooling coil, which was within a degree of the temperature used during perfusion storage, so that the kidneys were not subjected to the greater changes in temperature that would have occurred if ice cooling had been used.
Dempster [ 95 ] described using slow release of the vascular clamps at the end of kidney reimplantation operations to avoid injuring the kidney, but other workers have not mentioned whether or not they used this manoeuvre. After Cohen found vascular injury with intra-renal bleeding after 3 days of perfusion storage, [ 59 ] a technique of slow revascularisation was used for all subsequent experiments, with the aim of giving the intra-renal vessels time to recover their tone sufficiently to prevent full systolic pressure being applied to the fragile glomerular vessels. The absence of gross vascular injury in his later perfusions may be attributable to the use of this manoeuvre.
A machine shop or engineering workshop is a room, building, or company where machining , a form of subtractive manufacturing, is done. In a machine shop, machinists use machine tools and cutting tools to make parts, usually of metal or plastic (but sometimes of other materials such as glass or wood ). A machine shop can be a small business (such as a job shop ) or a portion of a factory , whether a toolroom or a production area for manufacturing . The building construction and the layout of the place and equipment vary, and are specific to the shop; for instance, the flooring in one shop may be concrete, or even compacted dirt, while another shop may have asphalt floors. A shop may or may not be air-conditioned; in some shops it may be necessary to maintain a controlled climate. Each shop has its own tools and machinery, which differ from those of other shops in quantity, capability and focus of expertise.
The parts produced can be the end product of the factory, to be sold to customers in the machine industry, the car industry, the aircraft industry, or others. Work may also encompass the frequent machining of customized components. In other cases, companies in those fields have their own machine shops.
The production can consist of cutting , shaping, drilling , finishing, and other processes , frequently those related to metalworking . The machine tools typically include metal lathes , milling machines , machining centers, multitasking machines, drill presses , or grinding machines , many controlled with computer numerical control (CNC). Other processes, such as heat treating , electroplating , or painting of the parts before or after machining, are often done in a separate facility.
A machine shop can contain some raw materials (such as bar stock for machining) and an inventory of finished parts. These items are often stored in a warehouse . The control and traceability of the materials usually depend on the company's management and the industries that are served, standard certification of the establishment, and stewardship.
A machine shop can be a capital-intensive business, because the purchase of equipment can require large investments. A machine shop can also be labour-intensive, especially if it is specialized in repairing machinery on a job production basis, but production machining (both batch production and mass production) is much more automated than it was before the development of CNC, programmable logic control (PLC), microcomputers, and robotics. It no longer requires masses of workers, although the jobs that remain tend to require high talent and skill. Training and experience in a machine shop can be both scarce and valuable.
Methodology, such as the practice of 5S, the level of compliance with safety practices and the use of personal protective equipment by the personnel, as well as the frequency of machine maintenance and how stringently housekeeping is performed in a shop, may vary widely from one shop to another.
The first machine shops started to appear in the 19th century, when the Industrial Revolution was already long underway. Before the Industrial Revolution, parts and tools were produced in workshops in local villages and cities on a small scale, often for a local market. The first machines that made the Industrial Revolution possible were also developed in similar workshops.
The production machines in the first factories were built on site, where every part was still individually made to fit. After some time those factories started their own workshops, where parts of the existing machinery were repaired or modified. In those days textiles were the dominant industry, and textile manufacturers started to further develop their own machine tools.
Further development early in the 19th century in England, Germany, and Scotland of machine tools and cheaper methods for the production of steel, such as Bessemer steel, triggered the Second Industrial Revolution, which culminated in early factory electrification, mass production, and the production line. The machine shop emerged as what Burghardt called a "place in which metal parts are cut to the size required and put together to form mechanical units or machines, the machines so made to be used directly or indirectly in the production of the necessities and luxuries of civilization." [ 1 ]
The rise of machine shops and their specific manufacturing and organizational problems spurred the early job shop management pioneers, whose theories became known as scientific management. One of the earliest writers in this field was Horace Lucian Arnold, who in 1896 wrote a first series of articles about "Modern Machine-Shop Economics." [ 2 ] This work ranged from production technology, production methods, and factory layout to time studies, production planning, and machine shop management. A series of publications on these topics would follow. In 1899 Joshua Rose published the book Modern Machine-Shop Practice, about the operation, construction, and principles of shop machinery, steam engines, and electrical machinery.
In 1903 the Cyclopedia of Modern Shop Practice was published with Howard Monroe Raymond as editor-in-chief, and in the same year Frederick Winslow Taylor published his Shop Management, a paper read before the American Society of Mechanical Engineers in New York. Taylor had begun his career as a machine-shop laborer at Midvale Steel Works in 1878, and worked his way up to machine shop foreman, research director, and finally chief engineer of the works. As an independent consulting engineer, one of his first major assignments, in 1898 at Bethlehem Steel, was to solve an expensive machine-shop capacity problem.
In 1906 Oscar E. Perrigo published the popular book Modern Machine Shop Construction, Equipment and Management. The first part of the book focused on the physical construction of the building and presented a model machine shop, with which Perrigo explored the way the space in factories could be organized. [ 3 ] This was not uncommon in his day; many industrial engineers, such as Alexander Hamilton Church, J. Slater Lewis, and Hugo Diemer, published plans for new industrial complexes.
These works, among others, culminated in the scientific management movement, on which Taylor in 1911 wrote his famous The Principles of Scientific Management, a seminal text of modern organization and decision theory, with a significant part dedicated to the organization of machine shops. [ 4 ] The introduction of new cutting materials such as high-speed steel, and better organization of production through new scientific management methods such as planning boards, significantly improved the productivity and efficiency of machine shops. In the course of the 20th century, these further increased with the continued development of technology.
In the early 20th century, the power for machine tools was still supplied by mechanical belts, driven by a central steam engine. Over the course of the 20th century, electric motors took over the power supply of the machine tools.
As materials and chemical substances, including cutting oil, became more sophisticated, awareness of their impact on the environment slowly grew. In parallel with the acknowledgment of the ever-present reality of accidents and potential occupational injury, the sorting of scrap materials for recycling and the disposal of refuse evolved into an area concerned with the environment, safety, and health. In regulated machine shops this would become a constant practice supported by a discipline known as EHS (for environment, health, and safety), or a similar name, such as HSEQ, which would include quality assurance.
In the second part of the 20th century, automation started with numerical control (NC) automation, and computer numerical control (CNC).
Digital instruments for quality control and inspection became widely available, and the use of lasers for precision measurements became more common in the larger shops that can afford the equipment.
Further integration of information technology into machine tools led to the beginning of computer-integrated manufacturing. Production design and production became integrated into CAD/CAM, and production control became integrated into enterprise resource planning.
In the late 20th century, the introduction of industrial robots further increased factory automation. Typical applications of robots include welding, painting, assembly, pick and place (such as packaging, palletizing, and SMT), product inspection, and testing. As a result of this introduction, the machine shop also "has been modernized to the extent that robotics and electronic controls have been introduced into the operation and control of machines." [ 5 ] For small machine shops, though, having robots is more the exception.
A machine is a tool containing one or more parts that uses energy to perform an intended action. Machines are usually powered by mechanical, chemical, thermal, or electrical means, and are often motorized. Historically, a power tool also required moving parts to be classified as a machine. However, the advent of electronics has led to the development of power tools without moving parts that are considered machines. [ 6 ]
Machining is any of the various processes in which a piece of raw material is cut into a desired final shape and size by a controlled material-removal process. The many processes that have this common theme, controlled material removal, are today collectively known as subtractive manufacturing, in distinction from processes of controlled material addition, which are known as additive manufacturing. Exactly what the "controlled" part of the definition implies can vary, but it almost always implies the use of machine tools (in addition to just power tools and hand tools).
Though not all machine shops have a CNC milling center, they commonly have access to a manual milling machine.
A machine tool is a machine for shaping or machining metal or other rigid materials, usually by cutting, boring, grinding , shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools use some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus the relative movement between the workpiece and the cutting tool is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand".
Professional management of the inventory of cutting tools occurs mainly in larger operations. Smaller machine shops may have a more limited assortment of endmills, keyseat cutters, inserts, and other cutting tools. The choice in the sophistication of the design of the cutting tool, including material and finish, commonly depends on the job and the price of the cutting tool. In some instances, the cost of custom-made tools could be prohibitive for a small shop.
Depending on the industry and the demands of the job, a cutting tool may only be used on a certain type of material; that is, a cutting tool may not be allowed to contact another workpiece of a different chemical composition.
Not all machine shops are equipped with a mill, and not all machine shops are set up to do milling work.
Some machine shops are better organized than others, and some places are kept cleaner than other establishments. In some instances, the shop is swept minutes before the end of every shift, and in other cases, there's no schedule or routine, or the cycle for sweeping and cleaning is more relaxed.
When it comes to machines, in some places the care and maintenance of the equipment are paramount, and the swarf (commonly known as chips) produced after parts have been machined is removed daily, after which the machine is air-blown and wiped clean; in other machine shops, the chips are left in the machines until it is an absolute necessity to remove them. The second practice is not advisable.
The remnants or residue of materials used, such as aluminum, steel, and oil, among others, can be gathered and recycled, and commonly may be sold. However, not all machine shops practice recycling, and not all have personnel dedicated to enforcing the habit of separating and keeping materials separated. In larger and better organized operations, such responsibility may be delegated to the Health, Safety, Environment, and Quality (HSEQ) department.
Quality assurance, quality control, and inspection are terms commonly used interchangeably. The accuracy and precision to be attained depend on several determining factors. Since not all machines have the same level of reliability and capability to execute predictable finished results within certain tolerances, nor do all manufacturing processes achieve the same range of exactness, the machine shop is limited by its own dependability in delivering the desired outcomes. Subsequently, subject to the rigor demanded by the customer, the machine shop may be required to undergo verification and validation even prior to the issuance and acknowledgment of an order.
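As a minimal illustration of checking measured parts against a stated tolerance (the helper name, the nominal of 25.00 mm, and the ±0.05 mm band are invented for this sketch, not drawn from any standard):

```python
# Hypothetical helper: flag whether measured parts fall inside a
# symmetric tolerance band around the nominal dimension (millimetres).
def in_tolerance(measured, nominal, tol):
    return abs(measured - nominal) <= tol

# Four measurements against a nominal of 25.00 mm with +/-0.05 mm tolerance
parts = [25.02, 24.97, 25.08, 24.99]
print([in_tolerance(m, 25.00, 0.05) for m in parts])
# → [True, True, False, True]
```

The one out-of-tolerance part is what an inspection step, whether by a dedicated department or by the machinist, is meant to catch before shipment.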
The machine shop may have a specific area established for measuring and inspecting the parts in order to confirm compliance, while other shops only rely on the inspections performed by the machinists and fabricators. For instance, in some shops, a granite, calibrated surface plate may be shared by different departments, and in other shops, the lathes, the mills, etc., may have their own, or may not have one at all.
The standards followed, the industry served, quality control, and mainly the type of practices in the machine shop will determine the precision inspection instruments used and the accuracy of the metrology employed. This means that not all machine shops implement a periodic interval for calibrating measuring devices. Not all machine shops have the same types of measuring instruments, though it is common to find micrometers, Vernier calipers, and granite surface plates, among others.
The frequency and precision of calibrating metrology instruments may vary, and calibration may require hiring the services of a specialized third party. Also, in some instances, keeping all instruments in the shop calibrated may be a requirement for staying in compliance.
The location and orientation of the machines are important. Preferably, some prior thought has been given to the positioning of the equipment; likely not as meticulously as in a plant layout study, but the closeness of the machines, the types of machines, where the raw materials are received and kept, as well as other factors, including ventilation, are taken into account to establish the initial layout of the machine shop. A routing diagram and daily operations may dictate the need to rearrange.
Profitability is commonly a driving consideration with regard to maximizing production, and thus aligning the machines in an effective manner; however, other critical factors must be considered, such as preventive maintenance of the equipment and safety in the workplace. This means, for instance, allowing room for a technician to maneuver behind a machining center to inspect connections, and not placing a machine where it would block an emergency exit.
Some shops have cages or rooms dedicated to keeping certain tools or supplies; for instance, a room may be dedicated to only welding supplies, gas tanks, etcetera; or where janitorial supplies or other consumables such as grinding disks are stored. Depending on the size of the operation, management, and controls, these areas may be restricted and locked, or these could be staffed by an employee, as by a tool crib attendant; in other instances, the storage rooms or cages are accessible to all personnel. Not all shops have a tool crib or storage room(s) though, and in many cases, a large cabinet suffices.
Also, the way hand tools are stored and made available to the fabricators or operators depends on how the shop functions or is managed. In many cases, common hand tools are visible in the work area and within reach for anyone. In many cases, the workers do not need to provide their own tools, since the daily tools are available and provided; in many other cases, the workers bring their own tools and toolboxes to their workplace.
Safety is a consideration that needs to be observed and enforced daily and constantly; however, a shop may vary from other shops in strictness and thoroughness when it comes to the actual practice, policies implemented and overall seriousness ascertained by the personnel and management. In an effort to standardize some common guidelines, in the United States, the Occupational Safety and Health Administration ( OSHA ) issues didactic material and enforces precautions with the goal of preventing accidents.
In a machine shop, there are usually numerous well-known practices related to working safely with machines. Some of the common practices include wearing eye protection, avoiding loose clothing and jewelry around rotating machinery, securing workpieces before cutting, and keeping machine guards in place.
Safety precautions in a machine shop are aimed at avoiding injuries and tragedies, for example, eliminating the possibility of a worker being fatally harmed by becoming entangled in a lathe.
Many machines have safety measures built into their design; for example, an operator must press two buttons, positioned out of the way, for a press or punch to cycle, which keeps the operator's hands clear of the pinch point. | https://en.wikipedia.org/wiki/Machine_shop |
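The two-button interlock described above can be sketched as simple logic: the press may only cycle when both palm buttons are held, and, in many real designs, only when they were pressed within a short synchrony window. The 0.5-second window and the function name here are illustrative assumptions, not values taken from any particular standard.

```python
# Toy model of a two-hand control: the press cycles only if both
# buttons are held, pressed within a short time of each other.
def press_may_cycle(t_left, t_right, window_s=0.5):
    # t_left / t_right: button press timestamps in seconds,
    # or None if that button is not being held.
    if t_left is None or t_right is None:
        return False
    return abs(t_left - t_right) <= window_s

print(press_may_cycle(0.00, 0.12))  # → True  (both hands on buttons)
print(press_may_cycle(0.00, None))  # → False (one hand free: no cycle)
```

The synchrony check matters because without it an operator could tape one button down and defeat the safeguard with a single hand.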
A machine tool is a machine for shaping or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". A machine tool is thus a power-driven machine that manages the needed relative motion between the cutting tool and the job, changing the size and shape of the job material. [ 1 ]
The precise definition of the term machine tool varies among users. While all machine tools are "machines that help people to make things", not all factory machines are machine tools.
Today machine tools are typically powered by means other than human muscle (e.g., electrically, hydraulically, or via line shaft), and are used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation.
With their inherent precision, machine tools enabled the economical production of interchangeable parts .
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools , simply provided a label for "tools that were machines instead of hand tools". Early lathes , those prior to the late medieval period, and modern woodworking lathes and potter's wheels may or may not fall under this definition, depending on how one views the headstock spindle itself; but the earliest historical records of a lathe with direct mechanical control of the cutting tool's path are of a screw-cutting lathe dating to about 1483. [ 2 ] This lathe "produced screw threads out of wood and employed a true compound slide rest". [ 2 ]
In the 1930s, the U.S. National Bureau of Economic Research (NBER) referenced the definition of a machine tool as "any machine operating by other than hand power which employs a tool to work on metal". [ 3 ]
The narrowest colloquial sense of the term reserves it only for machines that perform metal cutting—in other words, the many kinds of conventional machining and grinding . These processes are a type of deformation that produces swarf . However, economists use a slightly broader sense that also includes metal deformation of other types that squeeze the metal into shape without cutting off swarf, such as rolling, stamping with dies , shearing, swaging , riveting , and others. Thus presses are usually included in the economic definition of machine tools. For example, this is the breadth of definition used by Max Holland in his history of Burgmaster and Houdaille , [ 4 ] [ page needed ] which is also a history of the machine tool industry in general from the 1940s through the 1980s; he was reflecting the sense of the term used by Houdaille itself and other firms in the industry. Many reports on machine tool export and import and similar economic topics use this broader definition.
The colloquial sense implying conventional metal cutting is also growing obsolete because of changing technology over the decades. The many more recently developed processes labeled "machining", such as electrical discharge machining , electrochemical machining , electron beam machining , photochemical machining , and ultrasonic machining , or even plasma cutting and water jet cutting , are often performed by machines that could most logically be called machine tools. In addition, some of the newly developed additive manufacturing processes, which are not about cutting away material but rather about adding it, are done by machines that are likely to end up labeled, in some cases, as machine tools. In fact, machine tool builders are already developing machines that include both subtractive and additive manufacturing in one work envelope, [ 5 ] and retrofits of existing machines are underway. [ 6 ]
Forerunners of machine tools included bow drills and potter's wheels , which had existed in ancient Egypt prior to 2500 BC, and lathes , known to have existed in multiple regions of Europe since at least 1000 to 500 BC. [ 7 ] But it was not until the later Middle Ages and the Age of Enlightenment that the modern concept of a machine tool—a class of machines used as tools in the making of metal parts, and incorporating machine-guided toolpath—began to evolve. Clockmakers of the Middle Ages and renaissance men such as Leonardo da Vinci helped expand humans' technological milieu toward the preconditions for industrial machine tools. During the 18th and 19th centuries, and even in many cases in the 20th, the builders of machine tools tended to be the same people who would then use them to produce the end products (manufactured goods). However, from these roots also evolved an industry of machine tool builders as we define them today, meaning people who specialize in building machine tools for sale to others.
Historians of machine tools often focus on a handful of major industries that most spurred machine tool development. In order of historical emergence, they have been firearms (small arms and artillery ); clocks ; textile machinery; steam engines ( stationary , marine , rail , and otherwise ) (the story of how Watt 's need for an accurate cylinder spurred Boulton's boring machine is discussed by Roe [ 8 ] ); sewing machines ; bicycles ; automobiles ; and aircraft . Others could be included in this list as well, but they tend to be connected with the root causes already listed. For example, rolling-element bearings are an industry of themselves, but this industry's main drivers of development were the vehicles already listed—trains, bicycles, automobiles, and aircraft; and other industries, such as tractors, farm implements, and tanks, borrowed heavily from those same parent industries.
Machine tools filled a need created by textile machinery during the Industrial Revolution in England in the middle to late 1700s. [ 8 ] Until that time, machinery was made mostly from wood, often including gearing and shafts. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron . Cast iron could be cast in molds for larger parts, such as engine cylinders and gears, but was difficult to work with a file and could not be hammered. Red hot wrought iron could be hammered into shapes. Room temperature wrought iron was worked with a file and chisel and could be made into gears and other complex parts; however, hand working lacked precision and was a slow and expensive process.
James Watt was unable to obtain an accurately bored cylinder for his first steam engine, trying for several years until John Wilkinson invented a suitable boring machine in 1774, which bored Boulton & Watt's first commercial engine in 1776. [ 8 ] [ 9 ]
The advance in the accuracy of machine tools can be traced to Henry Maudslay and was refined by Joseph Whitworth. That Maudslay had established the manufacture and use of master plane gages in his shop (Maudslay & Field), located on Westminster Road south of the Thames in London, by about 1809 was attested to by James Nasmyth, [ 10 ] who was employed by Maudslay in 1829 and documented their use in his autobiography.
The process by which the master plane gages were produced dates back to antiquity but was refined to an unprecedented degree in the Maudslay shop. The process begins with three square plates, each given an identification (e.g., 1, 2, and 3). The first step is to rub plates 1 and 2 together with a marking medium (called bluing today), revealing the high spots, which are removed by hand scraping with a steel scraper until no irregularities are visible. This alone would not produce true plane surfaces but a "ball and socket" concave-concave and convex-convex fit, as such a mechanical fit, like two perfect planes, can slide over the other and reveal no high spots. The rubbing and marking are repeated after rotating plate 2 relative to plate 1 by 90 degrees to eliminate concave-convex "potato-chip" curvature. Next, plate 3 is compared and scraped to conform to plate 1 in the same two trials. In this manner plates 2 and 3 would be identical. Plates 2 and 3 are then checked against each other to determine what condition exists, whether both plates are "balls" or "sockets" or "chips" or a combination. These are then scraped until no high spots exist and compared again to plate 1. Repeating this process of comparing and scraping the three plates could produce plane surfaces accurate to within millionths of an inch (the thickness of the marking medium).
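The reason three plates are needed rather than two can be shown with a toy model (my own illustration, not the historical procedure): approximate each plate's deviation from flatness by a single curvature number, where a pairwise "fit" means the two curvatures cancel. Two plates can fit as a matched ball and socket, but requiring all three pairwise fits forces every curvature to zero, i.e. all three plates flat.

```python
# Brute-force check: curvatures c1, c2, c3 on a coarse grid; a pair
# "fits" when its curvatures cancel (ball nested in socket). Only the
# all-flat combination satisfies all three pairwise fits.
grid = [x / 10 for x in range(-10, 11)]
solutions = [(c1, c2, c3)
             for c1 in grid for c2 in grid for c3 in grid
             if c1 + c2 == 0 and c2 + c3 == 0 and c1 + c3 == 0]
print(solutions)  # → [(0.0, 0.0, 0.0)]
```

Algebraically: c1 + c2 = 0 and c2 + c3 = 0 give c1 = c3, and then c1 + c3 = 0 forces c1 = c2 = c3 = 0.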
The traditional method of producing the surface gages used an abrasive powder rubbed between the plates to remove the high spots, but it was Whitworth who contributed the refinement of replacing the grinding with hand scraping. Sometime after 1825, Whitworth went to work for Maudslay, and it was there that Whitworth perfected the hand scraping of master surface plane gages. In his paper presented to the British Association for the Advancement of Science at Glasgow in 1840, Whitworth pointed out the inherent inaccuracy of grinding due to the lack of control, and thus unequal distribution, of the abrasive material between the plates, which would produce uneven removal of material from the plates.
With the creation of master plane gages of such high accuracy, all critical components of machine tools (i.e., guiding surfaces such as machine ways) could then be compared against them and scraped to the desired accuracy. [ 8 ] The first machine tools offered for sale (i.e., commercially available) were constructed by Matthew Murray in England around 1800. [ 11 ] Others, such as Henry Maudslay , James Nasmyth , and Joseph Whitworth , soon followed the path of expanding their entrepreneurship from manufactured end products and millwright work into the realm of building machine tools for sale.
Important early machine tools included the slide rest lathe, screw-cutting lathe , turret lathe , milling machine , pattern tracing lathe, shaper , and metal planer , which were all in use before 1840. [ 12 ] With these machine tools the decades-old objective of producing interchangeable parts was finally realized. An important early example of something now taken for granted was the standardization of screw fasteners such as nuts and bolts. Before about the beginning of the 19th century, these were used in pairs, and even screws of the same machine were generally not interchangeable. [ 13 ] Methods were developed to cut screw thread to a greater precision than that of the feed screw in the lathe being used. This led to the bar length standards of the 19th and early 20th centuries.
American production of machine tools was a critical factor in the Allies' victory in World War II. Production of machine tools tripled in the United States during the war. No war was more industrialized than World War II, and it has been written that the war was won as much by machine shops as by machine guns. [ 14 ] [ 15 ]
The production of machine tools is concentrated in about 10 countries worldwide: China, Japan, Germany, Italy, South Korea, Taiwan, Switzerland, US, Austria, Spain and a few others. Machine tool innovation continues in several public and private research centers worldwide.
[A]ll the turning of the iron for the cotton machinery built by Mr. Slater was done with hand chisels or tools in lathes turned by cranks with hand power.
Machine tools can be powered from a variety of sources. Human and animal power (via cranks , treadles , treadmills , or treadwheels ) were used in the past, as was water power (via water wheel ); however, following the development of high-pressure steam engines in the mid 19th century, factories increasingly used steam power. Factories also used hydraulic and pneumatic power. Many small workshops continued to use water, human and animal power until electrification after 1900. [ 17 ]
Today most machine tools are powered by electricity; hydraulic and pneumatic power are sometimes used, but this is uncommon. [ citation needed ]
Machine tools can be operated manually, or under automatic control. [ 18 ] Early machines used flywheels to stabilize their motion and had complex systems of gears and levers to control the machine and the piece being worked on. Soon after World War II, the numerical control (NC) machine was developed. NC machines used a series of numbers punched on paper tape or punched cards to control their motion. In the 1960s, computers were added to give even more flexibility to the process. Such machines became known as computerized numerical control (CNC) machines . NC and CNC machines could precisely repeat sequences over and over, and could produce much more complex pieces than even the most skilled tool operators. [ citation needed ]
Before long, the machines could automatically change the specific cutting and shaping tools being used. For example, a drill machine might contain a magazine with a variety of drill bits for producing holes of various sizes. Previously, machine operators would usually have to either manually change the bit or move the work piece to another station to perform these different operations. The next logical step was to combine several different machine tools together, all under computer control. These are known as machining centers , and have dramatically changed the way parts are made. [ citation needed ]
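The kind of program a CNC control executes in place of all that manual repositioning can be sketched by generating a short G-code fragment that drills a row of holes. The dimensions, feed, and function name are made-up illustration values; G0, G1, G21, G90, and M30 are standard G-code words.

```python
# Generate a minimal G-code fragment drilling a row of equally
# spaced holes along the X axis.
def drill_row(n_holes, spacing_mm, depth_mm, feed_mm_min):
    lines = ["G21 (millimetre units)", "G90 (absolute positioning)"]
    for i in range(n_holes):
        x = i * spacing_mm
        lines.append(f"G0 X{x:.1f} Y0.0 Z2.0 (rapid above hole {i + 1})")
        lines.append(f"G1 Z-{depth_mm:.1f} F{feed_mm_min} (feed down: drill)")
        lines.append("G0 Z2.0 (rapid retract)")
    lines.append("M30 (end of program)")
    return "\n".join(lines)

print(drill_row(n_holes=3, spacing_mm=10.0, depth_mm=5.0, feed_mm_min=100))
```

Each hole repeats the same rapid-position, feed-drill, retract cycle, which is exactly the repeatability that made NC and CNC machines supersede manual repositioning.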
Examples of machine tools include lathes, milling machines, drill presses, grinding machines, shapers, and metal planers.
When fabricating or shaping parts, several techniques are used to remove unwanted metal, among them cutting, boring, grinding, and shearing.
Other techniques are used to add desired material. Devices that fabricate components by selective addition of material are called rapid prototyping machines.
The worldwide market for machine tools was approximately $81 billion in production in 2014, according to a survey by market research firm Gardner Research. [ 19 ] The largest producer of machine tools was China, with $23.8 billion of production, followed by Germany and Japan neck and neck at $12.9 billion and $12.88 billion respectively. [ 19 ] South Korea and Italy rounded out the top 5 producers with revenue of $5.6 billion and $5 billion respectively. [ 19 ]
A biography of a machine tool builder that also contains some general history of the industry. | https://en.wikipedia.org/wiki/Machine_tool |
Machinery's Handbook for machine shop and drafting-room; a reference book on machine design and shop practice for the mechanical engineer, draftsman, toolmaker, and machinist (the full title of the 1st edition) is a classic reference work in mechanical engineering and practical workshop mechanics in one volume published by Industrial Press , New York, since 1914. The first edition was created by Erik Oberg (1881–1951) and Franklin D. Jones (1879–1967), who are still mentioned on the title page of the 29th edition (2012). Recent editions of the handbook contain chapters on mathematics, mechanics , materials , measuring , toolmaking, manufacturing , threading , gears , and machine elements , combined with excerpts from ANSI standards. Machinery's Handbook is still regularly revised and updated; the most current revision is Edition 32 (2024). It continues to be the "bible of the metalworking industries" today. The work is available in online and ebook form as well as print.
During the decades from World War I to World War II , McGraw-Hill published a similar handbook, American Machinists' Handbook , which competed directly with Industrial Press's Machinery's Handbook . McGraw-Hill ceased publication of their guide after the 8th edition (1945). Another short-lived spin-off appeared in 1955.
Machinery's Handbook is the inspiration for similar works in other countries, such as Sweden's Karlebo handbok (1st ed. 1936).
In 1917, Oberg and Jones also published Machinery's Encyclopedia in 7 volumes. The handbook and encyclopedia are named after the monthly magazine Machinery (Industrial Press, 1894–1973), where the two were consulting editors. | https://en.wikipedia.org/wiki/Machinery's_Handbook |
A machinist calculator is a hand-held calculator programmed with built-in formulas making it easy and quick for machinists to establish speeds, feeds and time without guesswork or conversion charts. Formulas may include revolutions per minute (RPM), surface feet per minute (SFM), inches per minute (IPM), and feed per tooth (FPT). A cut time (CT) function takes the user, step-by-step, through a calculation to determine cycle time (execution time) for a given tool motion. Other features may include a metric–English conversion function, a stop watch /timer function and a standard math calculator.
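These relationships are simple enough to sketch in code. Below is a minimal Python version of the usual shop-floor formulas, assuming inch units and the standard approximation RPM = 12·SFM/(π·D); the function names are illustrative, not taken from any particular calculator:

```python
import math

def rpm(sfm: float, diameter_in: float) -> float:
    """Spindle speed from surface feet per minute and tool diameter (inches)."""
    return (sfm * 12.0) / (math.pi * diameter_in)

def ipm(rpm_value: float, feed_per_tooth: float, num_teeth: int) -> float:
    """Table feed in inches per minute from spindle speed, chip load, and flute count."""
    return rpm_value * feed_per_tooth * num_teeth

def cut_time_min(length_in: float, ipm_value: float) -> float:
    """Cycle time in minutes for a straight cut of the given length."""
    return length_in / ipm_value

# Example: 0.5 in end mill, 100 SFM, 4 flutes, 0.002 in/tooth, 6 in cut
n = rpm(100, 0.5)          # about 764 rpm
f = ipm(n, 0.002, 4)       # about 6.1 in/min
t = cut_time_min(6.0, f)   # about 0.98 min
```

The cut-time function mirrors the step-by-step CT calculation described above: spindle speed first, then feed rate, then time over the cut length.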
This type of calculator is useful for machinists , programmers , inspectors , estimators , supervisors, and students.
When handheld machinist calculators first came to market, they were complicated to use due to their small liquid-crystal displays, and were fairly expensive at around $70–$80. These older units lacked many features and could not be upgraded. Modern smartphone app versions have additional features. | https://en.wikipedia.org/wiki/Machinist_calculator |
A Machmeter is an aircraft pitot-static system flight instrument that shows the ratio of the true airspeed to the speed of sound , a dimensionless quantity called Mach number . This is shown on a Machmeter as a decimal fraction . An aircraft flying at the speed of sound is flying at a Mach number of one, expressed as Mach 1 .
As an aircraft in transonic flight approaches the speed of sound, it first reaches its critical Mach number, where air flowing over low-pressure areas of its surface locally reaches the speed of sound, forming shock waves . The indicated airspeed for this condition changes with ambient temperature, which in turn changes with altitude . Therefore, indicated airspeed is not entirely adequate to warn the pilot of the impending problems. Mach number is more useful, and most high-speed aircraft are limited to a maximum operating Mach number, also known as M MO .
For example, if the M MO is Mach 0.83, then at 9,100 m (30,000 ft) where the speed of sound under standard conditions is 1,093 kilometres per hour (590 kn), the true airspeed at M MO is 906 kilometres per hour (489 kn). The speed of sound increases with air temperature, so at Mach 0.83 at 3,000 m (10,000 ft) where the air is much warmer than at 9,100 m (30,000 ft), the true airspeed at M MO would be 982 km/h (530 kn).
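The arithmetic in this example can be reproduced with a short calculation. The sketch below assumes the International Standard Atmosphere below the tropopause (sea-level temperature 288.15 K, lapse rate 6.5 K/km) and dry air with γ = 1.4 and R = 287.05 J/(kg·K):

```python
import math

GAMMA, R = 1.4, 287.05           # dry air
T0, LAPSE = 288.15, 0.0065       # ISA sea-level temperature (K), lapse rate (K/m)

def speed_of_sound_kmh(altitude_m: float) -> float:
    """ISA speed of sound below the tropopause, in km/h."""
    t = T0 - LAPSE * altitude_m
    return math.sqrt(GAMMA * R * t) * 3.6

def tas_kmh(mach: float, altitude_m: float) -> float:
    """True airspeed for a given Mach number at an ISA altitude."""
    return mach * speed_of_sound_kmh(altitude_m)

print(round(tas_kmh(0.83, 9100)))   # ≈ 906 km/h at 9,100 m
print(round(tas_kmh(0.83, 3000)))   # ≈ 982 km/h at 3,000 m
```

The two printed values recover the true airspeeds quoted in the text, showing how the same Mach number corresponds to different true airspeeds as air temperature changes with altitude.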
Modern electronic Machmeters use information from an air data computer system which makes calculations using inputs from a pitot-static system . Some older mechanical Machmeters use an altitude aneroid and an airspeed capsule which together convert pitot-static pressure into Mach number. The Machmeter suffers from instrument and position errors.
In subsonic flow the Machmeter can be calibrated according to:
M={\sqrt {5\left[\left({\frac {q_{c}}{p}}+1\right)^{2/7}-1\right]}}
where: M is Mach number, q_c is impact pressure (the difference between total and static pressure) and p is static pressure.
When a shock wave forms across the pitot tube the required formula is derived from the Rayleigh Supersonic Pitot equation, and is solved iteratively for M:
{\frac {p_{t}}{p}}={\frac {166.9216\,M^{7}}{\left(7M^{2}-1\right)^{2.5}}}
where: p_t is the total pressure measured behind the normal shock ahead of the pitot tube and p is static pressure.
Note that the inputs required are total pressure and static pressure. Air temperature input is not required.
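In the subsonic case with γ = 1.4, the calibration can be inverted in closed form for Mach number. A minimal sketch, assuming the standard compressible-flow relation p_t/p_s = (1 + 0.2 M²)^3.5:

```python
def mach_subsonic(p_total: float, p_static: float) -> float:
    """Subsonic Mach number from pitot (total) and static pressure, gamma = 1.4.
    Inverts p_t/p_s = (1 + 0.2*M**2)**3.5."""
    return (5.0 * ((p_total / p_static) ** (2.0 / 7.0) - 1.0)) ** 0.5

# Round-trip check: at M = 0.5 the pressure ratio is (1 + 0.2*0.25)**3.5
pt_over_ps = (1 + 0.2 * 0.5**2) ** 3.5
print(round(mach_subsonic(pt_over_ps, 1.0), 6))  # 0.5
```

Note that only the ratio of the two pressures enters the formula, consistent with the remark above that no temperature input is needed.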
This article incorporates public domain material from Instrument Flying Handbook . United States government .
| https://en.wikipedia.org/wiki/Machmeter |
Maciej S. Kumosa is a materials scientist and academic. He holds the title of John Evans Professor at the University of Denver , and is the director of the Center for Novel High Voltage/Temperature Materials and Structures (HVT). [ 1 ]
Kumosa's research interests involve analyzing advanced materials at multiple scales, both experimentally and numerically, for applications in electrical, aerospace, and other fields under extreme operating conditions. [ 2 ]
Kumosa serves as an Editorial Board Member for Composites Science and Technology , [ 3 ] Structural Durability & Health Monitoring , [ 4 ] and Fibers . [ 5 ]
Maciej Kumosa was born on July 13, 1953, in Warsaw , Poland, to a family with teaching, medicine, and farming backgrounds. His father, Dr. Stefan Kumosa, [ 6 ] was a well-respected physician in Słupca , a small town in the middle of communist Poland with approximately 5,000 residents during that period. At age five, Kumosa was relocated from Warsaw to Słupca, where he received his elementary and high school education from primary school number 1 and Marshal Józef Piłsudski High School [ 7 ] in 1968 and 1972, respectively.
Kumosa earned his Master's degree in Applied Mechanics and Materials Science from the Technical University of Wroclaw , in 1978. He continued his studies at the same university, completing his Ph.D. in Applied Mechanics and Materials Science in 1982. [ 8 ]
In 1981, Kumosa began his career as a senior research assistant in the Institute of Materials Science and Applied Mechanics at the Technical University of Wroclaw and was appointed as an assistant professor in 1983. He then served as a senior research associate at the University of Cambridge from 1984 to 1990, followed by an appointment as an associate professor in the Department of Materials Science and Engineering and Department of Electrical Engineering and Applied Physics at Oregon Graduate Institute (OGI) in Portland from 1990 until 1998. In 1996, he joined the University of Denver as a research professor in the Department of Engineering and was later promoted to the roles of associate professor and full professor. Since 2006, he has been serving as a John Evans Professor at the University of Denver (DU). [ 9 ] He retired as an academic in 2024. [ 1 ]
Kumosa was the chair of the Mechanical and Materials Engineering (MME) Department at DU from 2007 to 2009 and later served as the director of the Center for Nanoscale Science and Engineering from 2007 to 2012. Since 2014, he has been serving as the center director of the National Science Foundation Industry/University Cooperative Research Center for Novel High Voltage/Temperature Materials and Structures (the HVT Center). [ 10 ] The first phase of the HVT Center was completed on September 30, 2024. During its 10 years of operation, the Center graduated 42 PhD and 15 master's students and published 205 journal papers at its four academic sites. [ 1 ]
Kumosa's research focused on advanced materials under extreme conditions for electrical and aerospace applications, using experimental and numerical methods to optimize performance. [ 2 ] He has authored publications spanning the fields of composites, materials science, applied physics , applied mechanics, and general science, including IEEE journals, conference proceedings, engineering magazines, and national research reports. [ 8 ]
Kumosa's research has been funded by federal and private sponsors, including the National Science Foundation, the Air Force Office of Scientific Research, the Department of Energy (Headquarters), NASA, the Bonneville Power Administration, and the Western Area Power Administration. His main private research sponsors have been the Lockheed Martin Corporation, the Electric Power Research Institute, Tri-State Generation and Transmission, the Alabama Power Company, and Pacific Gas & Electric.
As part of his Ph.D. research, Kumosa investigated both numerically and experimentally the initiation of cracking by mechanical twins in silicon iron. [ 11 ] He used an anisotropic Eshelby approach to predict the stresses required to initiate cracks by terminated mechanical twins and to determine the directions of shear deformations associated with mechanical twining. [ 12 ]
Kumosa started experimenting with thin-walled Glass Reinforced Polymer (GRP) composite structures by subjecting them to internal pressure to determine the effects of multiaxial loads in the initiation of damage in the composites. His mentors at that time were Leszek Golaski and Waclaw Kasprzak. [ 13 ]
After his graduation and initial academic appointments in Poland, Kumosa pursued an academic journey abroad in 1984. After a year as a visiting research fellow at the University of Liverpool, he then moved in December 1984 to Cambridge, England, where he spent over six years in the Department of Materials Science and Metallurgy at the University of Cambridge. Collaborating with Derek Hull and his research group, he focused on material science research and in particular advanced composites investigations. [ 14 ]
During his time at Cambridge, Kumosa conducted research that involved the application of Finite Element Methods (FEM) to predict failures in advanced composite structures under multiaxial loading conditions. [ 15 ] His investigations extended to the examination of stress corrosion cracking (SCC) in Glass Reinforced Polymer (GRP) composites, [ 16 ] along with the analysis of mixed-mode failure and fracture in both GRP and Carbon Fiber Reinforced (CFRP) composites. [ 17 ] Moreover, he evaluated the potential use of Acoustic Emission (AE) for monitoring composite structures and contributed to research on the crashworthiness of composites. [ 14 ]
Kumosa contributed to the development of the Iosipescu shear test, demonstrating its uniqueness through FEM [ 18 ] including the consideration of axial splits and their impact on composite failure predictions. [ 17 ] Collaborating with W. Broughton, he further redesigned the test to incorporate biaxial shear-dominated conditions, a seminal modification at that time. [ 19 ] In addition, his FEM research supported multiaxial testing of filament-wound composite cylinders, accounting for the presence of hoop cracks in thin-walled composite tubes. [ 20 ]
Kumosa, along with Sigalas and Hull, proposed the first numerical model of a composite tube subjected to axial crashing, resulting in a highly cited paper. [ 20 ] Additionally, he demonstrated the precise counting of fractured fibers in the stress corrosion cracking of Glass Polymer Composites using AE monitoring. [ 21 ]
In May 1990, Kumosa relocated to the Oregon Graduate Institute of Science and Technology (OGI) in Portland, Oregon. His SCC and shear testing of composites projects transitioned to OGI, forming the basis for two research programs: the study of biaxial failures in high-temperature polyimide composites [ 22 ] and the investigation of in-service failures of High Voltage transmission composite insulators. [ 23 ]
Kumosa's key graduate students at OGI who helped him build the foundation of his future NSF HVT Center were Kevin Searles, Qiong Qiu, Anurag Bansal, and Jun Ding. [ 24 ] [ 25 ]
At OGI from 1990 to 1995, Kumosa, together with Korusiewicz and Ding, worked on the failure analysis and design of advanced metallic alloys, contributing to the GE90 project. [ 26 ] Focused on jet engine applications, his research group studied nickel-based superalloys and titanium aluminides used in the GE90 engine for their resistance to high-temperature fracture and fatigue. [ 26 ]
From 1992 to 2006, Kumosa supervised research on High Voltage (HV) composite insulators, also known as Non-Ceramic Insulators (NCIs). He directed initiatives addressing the challenges faced by these insulators used in transmission lines and substations globally. These insulators, subjected to intense mechanical, electrical, and environmental stresses, presented operational life challenges. One of his significant contributions has been to provide an explanation for various large HV transmission line insulator failures attributed to brittle fractures. Notably, he addressed the 14 energized line drops on the Western Area Power Administration's 345 kV Craig Bonanza line in Colorado and has elucidated the causes behind five catastrophic 500 kV line drops at Pacific Gas & Electric in California in 1995/1996 as well. [ 27 ]
Working initially with his graduate students Bansal and Qiu at OGI and then at DU with Lucas Kumosa Jr, Tom Ely, Paul Predecki, Dwight Smith, and Daniel Armentrout, Kumosa made contributions by identifying the specific type of acid responsible for brittle fracture failures in California, Colorado, and other global regions. [ 28 ] [ 29 ] Moreover, he conducted simulations of brittle fractures in insulator (GRP) composites under high voltage conditions, shedding light on critical failure mechanisms. [ 30 ]
Kumosa's work improved the understanding of insulator failure mechanisms and contributed to global advancements in High Voltage transmission system reliability. A notable development was the establishment of the first ranking system for commonly used GRP rod materials, evaluating their resistance to High Voltage brittle fracture and other in-service failures. [ 27 ] He and his research teams also proposed the inaugural comprehensive model explaining insulator failures arising from improper crimping, which provided insights into failure modes. [ 31 ]
From 1992 to 2004, Kumosa's High-Temperature Polymer Matrix Composite research aimed to understand fundamental failure mechanisms in High-Temperature (HT) composites. Using medium and high-stiffness carbon fibers with various HT polyimide resins, the research explored the impact of aging on composite strength properties, focusing on temperature variations and biaxial shear-dominated loading conditions. He advanced multidisciplinary technologies for affordable propulsion components, aiming for optimal performance and durability at elevated temperatures with reduced cooling needs. [ 32 ]
In the course of this research, Kumosa, in collaboration with Benedikt and Predecki, developed experimental and numerical techniques to assess manufacturing stresses in propulsion engine components. These techniques, which are based on embedded aluminum inclusions, X-ray diffraction, and non-linear multiple inclusion Eshelby models, played a crucial role in predicting residual manufacturing stresses in High-Temperature Polymer Matrix Composites (HT PMCs) used in a composite combustion chamber with substantially reduced weight. [ 33 ] [ 34 ]
Additionally, Kumosa's collaborative efforts with Odegard, Rupnowski and Gentz led to the prediction of the failure properties of these composites under High-Temperature, multiaxial shear-dominated conditions. [ 35 ] [ 36 ] [ 37 ] An unprecedented evaluation of the aging resistance of the composites in nitrogen (physical aging) and air (chemical aging) at temperatures as high as 400 °C was undertaken for the first time. [ 38 ] The culmination of this research manifested in the optimization of High-Temperature (HT) combustion chamber composites, achieved through the meticulous selection and integration of fibers and matrices tailored to exhibit superior performance under high-temperature conditions. [ 39 ]
Kumosa and his team of graduate students have directed their research toward High-Temperature High-Voltage Polymer Core Composite Conductors (PCCC) for use in High-Voltage (HV) transmission lines. Between 2008 and 2010, he and Burks were the first to determine the critical bend radius of the most popular HTLS PCCC. [ 40 ] Additionally, from 2009 to 2012, they demonstrated the sensitivity of PCCC rods to transverse loading under aeolian vibrations. [ 41 ] Their findings also suggested that bearing stresses due to crimping the conductor at a dead-end connection could be considered for effective fatigue life design. This effect was evaluated for the first time for PCCC rods at various stages of environmental aging, using a unique combined experimental/numerical approach. [ 42 ]
Kumosa and Middleton conducted life predictions for PCCC conductors, indicating that exposure to high temperatures appeared to be more damaging to PCCC rods than the impact of highly concentrated ozone. Taking into account potential environmental conditions such as high temperature and ozone pollution, it was predicted that PCCC rods could endure in service for many years if the operating temperature did not exceed 120°C with an ozone concentration of no more than about 1%. [ 43 ] Subsequently, he and Hoffman demonstrated that the in-service life of the conductors could be significantly extended (by 75%) through the application of special Teflon coatings on the rods. [ 44 ]
Considering the prevalent issue faced by utilities using traditional steel/aluminum designs, especially in coastal environments, Kumosa, Håkansson, Hoffman and others conducted research to evaluate the resistance of the current PCCC design to corrosion on transmission lines. They proposed a potent analytical model of atmospheric galvanic corrosion of PCCC conductors, which was subsequently numerically and experimentally verified. [ 45 ] He and his research teams have presented insights into the in-service performance of the next generation of High-Voltage High-Temperature Low Sag Polymer Core Composite Conductors, akin to their previous work on HV composite insulators. Their efforts have led to numerous potential improvements in design, as highlighted in various publications, including a feature in the Denver Business Journal , where he also discussed how the new transmission line product could save lives. [ 46 ]
Kumosa has led a research initiative funded by the National Science Foundation (NSF) and directed the HVT Center.
Kumosa's previous projects, including the PCCC conductor research, were integrated into the HVT Center, and new projects were initiated. Within the PCCC conductor project, his research performed in collaboration with Waters and Hoffman focused on the conductors' resilience to low-velocity excessive transverse impacts using unique fixtures and simulating impact behavior through Finite Element Method (FEM) analysis. The conductors exhibited superiority over their Al/steel counterparts in this regard. [ 47 ]
The group also demonstrated the successful monitoring of PCCC conductors for various static and dynamic loads using Fiber Bragg Grating (FBG) sensors. These sensors proved effective in monitoring conductors during installation and in-service for both small and large deformations. [ 48 ]
Among the new projects in the HVT Center, Kumosa studied the "Effect of Oxygen Aging on Ti/Al/V Powders used in Additive Manufacturing" with Billy Grell, Zach Loftus, and others. [ 49 ] Along with Lu, Yi, Solis-Ramos and other researchers, he also investigated the "Synergistic Aging of Polymers and their Composites." [ 50 ] The extreme aging of silicone rubbers used in HV applications was another project he conducted with Bleszynski in the Center, resulting in the design, manufacture, and testing of an HV silicone rubber whose resistance to extreme aging was improved by about 50%. [ 51 ]
Henderson, Predecki, and Kumosa's project "Prevention of Ballistic Damage to HV Transformer Bushings" tested the use of ballistic polymer coatings on HV porcelain transformer bushings to protect them against high-power rifle damage, demonstrating for the first time that the bushings could be safeguarded against vandalism with properly designed and applied coatings. [ 52 ] His collaborative project with Waters, Hoffman, and Predecki titled "Polymerization in Single Fiber Composites using FBG Sensors" introduced a novel technique using FBG sensors within the HVT Center. [ 53 ] This technique evaluated the responses of modeled polymer and metal composites to manufacturing conditions, using FBG sensors to identify the beginning and end of curing, the gel point, cooling strains, and stresses for polymers such as epoxies, and was later applied successfully to monitor the solidification of metals. [ 54 ]
More recently in 2023, Kumosa's research teams studied both the "Modernization of Large Power Transformer (LPT) Tanks" [ 55 ] and the "Development of Next-generation Graphene and Graphene Oxide Epoxy Base Nanocomposites". [ 56 ] In the LPT project, he, Jide Williams, Hoffman, and Predecki demonstrated for the first time that heavy LPT tanks could be replaced with advanced PMCs for weight reduction, superior resistance to rifle damage, and improved performance in other adverse in-service conditions. [ 57 ] In the Graphene Oxide Project, Matt Reil and others discovered a new powerful toughening mechanism in an epoxy resin with embedded graphene oxide nano-particles which was subsequently explained through extensive numerical and experimental simulations and verifications. [ 56 ] | https://en.wikipedia.org/wiki/Maciej_Kumosa |
The Mackey–Arens theorem is an important theorem in functional analysis that characterizes those locally convex vector topologies that have some given space of linear functionals as their continuous dual space .
According to Narici (2011), this profound result is central to duality theory ; a theory that is "the central part of the modern theory of topological vector spaces." [ 1 ]
Let X be a vector space and let Y be a vector subspace of the algebraic dual of X that separates points on X .
If 𝜏 is any other locally convex Hausdorff topological vector space topology on X , then we say that 𝜏 is compatible with duality between X and Y if when X is equipped with 𝜏 , then it has Y as its continuous dual space.
If we give X the weak topology 𝜎( X , Y ) then X_{\sigma (X,Y)} is a Hausdorff locally convex topological vector space (TVS) and 𝜎( X , Y ) is compatible with duality between X and Y (i.e. X_{\sigma (X,Y)}^{\prime }=\left(X_{\sigma (X,Y)}\right)^{\prime }=Y ).
We can now ask the question: what are all of the locally convex Hausdorff TVS topologies that we can place on X that are compatible with duality between X and Y ?
The answer to this question is called the Mackey–Arens theorem.
Mackey–Arens theorem [ 2 ] — Let X be a vector space and let 𝒯 be a locally convex Hausdorff topological vector space topology on X . Let X ' denote the continuous dual space of X and let X_{\mathcal {T}} denote X with the topology 𝒯. Then the following are equivalent:
And furthermore, | https://en.wikipedia.org/wiki/Mackey–Arens_theorem |
In mathematics , Maclaurin's inequality , named after Colin Maclaurin , is a refinement of the inequality of arithmetic and geometric means .
Let a_1, a_2, \ldots, a_n be non-negative real numbers , and for k = 1, 2, \ldots, n, define the averages S_k as follows:
S_{k}={\frac {\displaystyle \sum _{1\leq i_{1}<\cdots <i_{k}\leq n}a_{i_{1}}a_{i_{2}}\cdots a_{i_{k}}}{\displaystyle {n \choose k}}}
The numerator of this fraction is the elementary symmetric polynomial of degree k in the n variables a_1, a_2, \ldots, a_n, that is, the sum of all products of k of the numbers a_1, a_2, \ldots, a_n with the indices in increasing order. The denominator is the number of terms in the numerator, the binomial coefficient {n \choose k}. Maclaurin's inequality is the following chain of inequalities :
S_{1}\geq {\sqrt {S_{2}}}\geq {\sqrt[{3}]{S_{3}}}\geq \cdots \geq {\sqrt[{n}]{S_{n}}},
with equality if and only if all the a_i are equal.
For n = 2, this gives the usual inequality of arithmetic and geometric means of two non-negative numbers. Maclaurin's inequality is well illustrated by the case n = 4:
{\begin{aligned}&\quad {\frac {a_{1}+a_{2}+a_{3}+a_{4}}{4}}\\[8pt]&\geq {\sqrt {\frac {a_{1}a_{2}+a_{1}a_{3}+a_{1}a_{4}+a_{2}a_{3}+a_{2}a_{4}+a_{3}a_{4}}{6}}}\\[8pt]&\geq {\sqrt[{3}]{\frac {a_{1}a_{2}a_{3}+a_{1}a_{2}a_{4}+a_{1}a_{3}a_{4}+a_{2}a_{3}a_{4}}{4}}}\\[8pt]&\geq {\sqrt[{4}]{a_{1}a_{2}a_{3}a_{4}}}.\end{aligned}}
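The chain of inequalities is easy to check numerically. A short Python sketch that builds the averages S_k from elementary symmetric polynomials via itertools.combinations:

```python
from itertools import combinations
from math import comb, prod

def maclaurin_means(a):
    """Return the chain [S_1, S_2**(1/2), ..., S_n**(1/n)] for non-negative inputs."""
    n = len(a)
    chain = []
    for k in range(1, n + 1):
        e_k = sum(prod(c) for c in combinations(a, k))  # elementary symmetric polynomial
        s_k = e_k / comb(n, k)                          # average over C(n, k) terms
        chain.append(s_k ** (1.0 / k))
    return chain

vals = maclaurin_means([1, 2, 3, 4])
assert all(x >= y for x, y in zip(vals, vals[1:]))  # S_1 >= sqrt(S_2) >= ...
print([round(v, 4) for v in vals])  # [2.5, 2.4152, 2.3208, 2.2134]
```

For equal inputs the chain is constant, matching the equality condition stated above.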
Maclaurin's inequality can be proved using Newton's inequalities or a generalised version of Bernoulli's inequality .
This article incorporates material from MacLaurin's Inequality on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Maclaurin's_inequality |
A Maclaurin spheroid is an oblate spheroid which arises when a self-gravitating fluid body of uniform density rotates with a constant angular velocity. This spheroid is named after the Scottish mathematician Colin Maclaurin , who formulated it for the shape of Earth in 1742. [ 1 ] In fact the figure of the Earth is far less oblate than Maclaurin's formula suggests, since the Earth is not homogeneous, but has a dense iron core. The Maclaurin spheroid is considered to be the simplest model of rotating ellipsoidal figures in hydrostatic equilibrium since it assumes uniform density.
For a spheroid with equatorial semi-major axis a and polar semi-minor axis c , the angular velocity Ω about c is given by Maclaurin's formula [ 2 ]
{\frac {\Omega ^{2}}{\pi G\rho }}={\frac {2{\sqrt {1-e^{2}}}}{e^{3}}}\left(3-2e^{2}\right)\sin ^{-1}e-{\frac {6}{e^{2}}}\left(1-e^{2}\right),
where e is the eccentricity of meridional cross-sections of the spheroid, ρ is the density and G is the gravitational constant . The formula predicts two possible equilibrium figures, one which approaches a sphere ( e → 0) when Ω → 0 and the other which approaches a very flattened spheroid ( e → 1) when Ω → 0. The maximum angular velocity occurs at eccentricity e = 0.92996 and its value is Ω²/(πGρ) = 0.449331, so that above this speed, no equilibrium figures exist. The angular momentum L is
{\frac {L}{\sqrt {GM^{3}{\bar {a}}}}}={\frac {\sqrt {3}}{5}}\left({\frac {a}{\bar {a}}}\right)^{2}{\frac {\Omega }{\sqrt {\pi G\rho }}},
where M is the mass of the spheroid and \bar a = (a^{2}c)^{1/3} is the mean radius , the radius of a sphere of the same volume as the spheroid.
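Maclaurin's formula can be checked numerically against the quoted maximum. The sketch below assumes the standard form Ω²/(πGρ) = (2√(1−e²)/e³)(3−2e²)·arcsin(e) − (6/e²)(1−e²):

```python
import math

def omega_sq_normalized(e: float) -> float:
    """Omega^2 / (pi*G*rho) for a Maclaurin spheroid of meridional eccentricity e."""
    s = math.sqrt(1.0 - e * e)
    return (2.0 * s / e**3) * (3.0 - 2.0 * e * e) * math.asin(e) \
        - (6.0 / e**2) * (1.0 - e * e)

# The maximum quoted in the text: about 0.449331 at e = 0.92996
print(omega_sq_normalized(0.92996))  # close to 0.449331
```

Evaluating the function on a grid of eccentricities reproduces the behavior described above: the normalized angular velocity rises from zero, peaks near e ≈ 0.93, and falls back toward zero as e → 1.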
For a Maclaurin spheroid of eccentricity greater than 0.812670, [ 3 ] a Jacobi ellipsoid of the same angular momentum has lower total energy. If such a spheroid is composed of a viscous fluid (or in the presence of gravitational radiation reaction), and if it suffers a perturbation which breaks its rotational symmetry, then it will gradually elongate into the Jacobi ellipsoidal form, while dissipating its excess energy as heat (or gravitational waves ). This is termed secular instability ; see Roberts–Stewartson instability and Chandrasekhar–Friedman–Schutz instability . However, for a similar spheroid composed of an inviscid fluid (or in the absence of radiation reaction), the perturbation will merely result in an undamped oscillation. This is described as dynamic (or ordinary ) stability .
A Maclaurin spheroid of eccentricity greater than 0.952887 [ 3 ] is dynamically unstable. Even if it is composed of an inviscid fluid and has no means of losing energy, a suitable perturbation will grow (at least initially) exponentially. Dynamic instability implies secular instability (and secular stability implies dynamic stability). [ 4 ] | https://en.wikipedia.org/wiki/Maclaurin_spheroid |
Macle is a term used in crystallography . [ 1 ] It is a crystalline form, twin-crystal or double crystal (such as chiastolite ). It is a crystallographic twin according to the spinel twin law and is seen in octahedral crystals or minerals such as diamond and spinel. The twin law's name comes from the fact that it is commonly observed in the mineral spinel. [ 2 ] A version with five units about a common axis is called a fiveling . [ 3 ]
Macle is an old French word, a heraldic term for a voided lozenge (one diamond shape within another). [ 4 ] Etymologically the word is derived from the Latin macula meaning spot, mesh , or hole. [ 5 ]
| https://en.wikipedia.org/wiki/Macle |
A Macquarium is an aquarium made from, or made to sit within, the shell of an Apple Macintosh computer. The term was coined by computer writer Andy Ihnatko as a joke about the outdated Macintosh 512K ; [ 1 ] Macquariums have since been built both by Ihnatko himself and by others.
Ihnatko originally designed his Macquarium to use the Compact Macintosh -style shell. In the early 1990s, several Mac models in this form factor (such as the Macintosh 128K , Macintosh 512K and Macintosh Plus ) were becoming obsolete, and Ihnatko considered that turning one into an aquarium might be "the final upgrade", as well as an affordable way to have a color Compact Mac. Ihnatko has mentioned in interviews [ 1 ] that he saw attempts to build Macintosh aquariums at trade shows that, among other drawbacks, suffered from noticeable water level lines across the "screen" that spoiled the illusion of a "really good screensaver". This drove him to design a version without a visible water line, and which allowed the external case of the donor Mac to remain intact.
Ihnatko's slant-front tank design, made of glass, had a nominal capacity of approximately 10 liters (2.2 UK gallons or 2.5 US gallons). Some subsequent designs have utilized acrylic glass or Lexan . Because of its small capacity relative to most other aquariums, the Macquarium is considered a form of nano aquarium , which requires a higher level of diligence to maintain proper water chemistry and cleanliness.
Some of Ihnatko's Macquariums were constructed with parts from two sources located near Apple's headquarters in Cupertino, California . For these aquariums, Ihnatko used the case of the Macintosh as the tank and sealed the screen and vent holes to be watertight.
Macquariums are often stocked with 2–3 goldfish , which do not require tank heaters and are cheap. However, because goldfish grow large, have high oxygen requirements, and are messy eaters, they require much larger tanks for long-term survival. [ 2 ] As such, Siamese fighting fish and small shrimp are better options for Macquariums.
Other Mac models have similarly been turned into aquariums, such as the Macintosh TV , the Apple Lisa , and the Power Mac G4 Cube . [ 3 ] Various iMac models, such as the iMac G3 , have been used to make "iMacquariums". By 1995, a Macquarium based on a Macintosh LC 575 had appeared in a Macintosh magazine under the title "Macquarium '95".
The term "Macquarium", as it refers to the Macintosh-based aquarium, is unrelated to the Atlanta, Georgia , user experience firm Macquarium Intelligent Communications. [ 4 ]
| https://en.wikipedia.org/wiki/Macquarium |
Macro-creatine kinase (macro-CK) is a macroenzyme, an enzyme of high molecular weight and prolonged half-life found in human serum . [ 1 ] It is one of the most common macroenzymes. [ 1 ] Macro-CK type 1 is a complex formed by one of the creatine kinase isoenzyme types, typically CK-BB, and antibodies ; typically IgG , sometimes IgA , rarely IgM . Macro-CK type 2 is formed from mitochondrial CK polymer. [ 2 ]
Macro-CK type 1 has been associated with autoimmune and other chronic conditions. [ 1 ] Macro-CK type 2 has been associated with malignancy . [ 1 ]
Macro-CK has been implicated as a source of interference in interpretation of medical labs. [ 3 ] | https://en.wikipedia.org/wiki/Macro-creatine_kinase |
In engineering , macro-engineering (alternatively known as mega engineering ) is the implementation of large-scale design projects. It can be seen as a branch of civil engineering or structural engineering applied on a large landmass . In particular, macro-engineering is the process of marshaling and managing of resources and technology on a large scale to carry out complex tasks that last over a long period. In contrast to conventional engineering projects, macro-engineering projects (called macro-projects or mega-projects) are multidisciplinary , involving collaboration from all fields of study. Because of the size of macro-projects they are usually international.
Macro-engineering is an evolving field that has only recently started to receive attention. Because societies routinely face challenges that are multinational in scope, such as global warming and pollution , macro-engineering is emerging as a transcendent solution to worldwide problems.
Macro-engineering is distinct from Megascale engineering in the scales at which they are applied. Whereas macro-engineering is currently practical, mega-scale engineering remains within the domain of speculative fiction because it deals with projects on a planetary or stellar scale.
Macro engineering examples include the construction of the Panama Canal and the Suez Canal .
Examples of projects include the Channel Tunnel and the planned Gibraltar Tunnel .
Two intellectual centers focused on macro-engineering theory and practice are the Candida Oancea Institute in Bucharest , and The Center for Macro Projects and Diplomacy at Roger Williams University in Bristol, Rhode Island . | https://en.wikipedia.org/wiki/Macro-engineering |
A macro key is a keyboard key that can be configured to perform custom, user -defined behavior. Many keyboards do not have a macro key, but some have one or more. Some consider a macro key to enhance productivity by allowing them to do operations via a single key press that otherwise requires slower or multiple UI actions.
Custom behavior typically involves one or more user interface (UI) operations such as keystrokes and mouse actions. [ 1 ] For example, a macro key might be configured to launch a program. A gamer might configure it for rapid-fire.
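A macro key's behavior can be modeled as a binding from one physical key to a recorded sequence of UI operations. The sketch below is purely illustrative (no real keyboard driver or vendor API is used); the class, the key name "M1", and the action strings are all invented:

```python
from typing import Callable

# Illustrative model of macro-key expansion: one key press replays a
# recorded sequence of actions. No real input API is used; the names
# ("M1", the action strings) are hypothetical.
class MacroKeyboard:
    def __init__(self) -> None:
        self._macros: dict[str, list[Callable[[], str]]] = {}

    def record(self, key: str, actions: list[Callable[[], str]]) -> None:
        """Bind a sequence of actions to a single macro key."""
        self._macros[key] = actions

    def press(self, key: str) -> list[str]:
        """Replay the bound actions; unbound keys simply echo themselves."""
        if key in self._macros:
            return [action() for action in self._macros[key]]
        return [key]

kb = MacroKeyboard()
kb.record("M1", [lambda: "open-editor", lambda: "type:Hello", lambda: "save"])
print(kb.press("M1"))  # ['open-editor', 'type:Hello', 'save']
print(kb.press("a"))   # ['a']; an ordinary key is unaffected
```

A gamer's rapid-fire configuration would, in this model, simply bind a key to a repeated "click" action.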
Some early PC keyboards had a single key located on the lowest row of keys, either to the left of the Z key or to the right of the right control key . Sometimes it was treated as a backslash , but its behavior varied. It generated a special scan code so that a program could associate unique behavior to it.
Around 2010, some mice had a macro button with a similar utility.
| https://en.wikipedia.org/wiki/Macro_key |
Macrobenthos consists of the organisms that live at the bottom of a water column [ 1 ] and are visible to the naked eye. [ 2 ] In some classification schemes, these organisms are larger than 1 mm; [ 1 ] in another, the smallest dimension must be at least 0.5 mm. [ 3 ] They include polychaete worms , pelecypods , anthozoans , echinoderms , sponges , ascidians , and crustaceans .
The marine macrobenthos community is a critical component and reliable indicator of the biotic integrity of marine ecosystems, especially the intertidal ecosystems. [ 4 ] [ 5 ] [ 6 ] On the one hand, macrobenthos plays a vital role in maintaining ecosystem functions, such as material cycling in sediments and energy flow in food webs . On the other hand, macrobenthos is relatively sedentary and therefore reflects the ambient conditions of sediments, in which many pollutants (e.g., heavy metals and organic enrichment ) are ultimately partitioned. [ 7 ] [ 8 ] [ 9 ]
Heavy metal pollution is one of the most common anthropogenic pressures that impact marine ecosystems (e.g., intertidal zones, coastal waters, and estuaries), which has been documented by many studies throughout the world. [ 10 ] [ 11 ] [ 12 ] Heavy metal contaminants can result in adverse toxic effects on benthic organisms, [ 13 ] [ 14 ] leading to changes in the composition, structure, and ecosystem function of macrobenthic communities. [ 15 ] [ 5 ] [ 16 ] [ 17 ] [ 18 ] For example, in Aveiro Lagoon (Portugal), with the increase of mercury contamination , the total abundance and species richness decreased, and tolerant taxa increased; [ 19 ] in Incheon Harbour (Korea) and the coastal zone south of Sfax (Tunisia), the macrobenthic community gradually changed with the pollution levels, and species diversity decreased with decreasing distance from the pollution source. [ 7 ] [ 20 ] However, most of these studies were conducted in subtidal zones rather than in intertidal zones, which are more vulnerable to human activities. [ 9 ]
Macrobenthos consists of numerous taxa, and different species have a different tolerance to environmental pressures. For example, polychaetes Capitella capitata and Heteromastus filiformis are naturally tolerant to environmental disturbance , which could live well in a highly organic enrichment and/or heavy metal polluted area, [ 21 ] [ 7 ] [ 22 ] while some taxa (e.g., polychaete Magelona dakini and amphipods Perioculodes longimanus ) are inherently sensitive to environmental disturbance, and could not survive in such highly polluted zones. [ 23 ] [ 24 ] [ 9 ]
This indicates that each species has evolved a unique survival strategy to adapt to different environmental conditions, even though it may be similar in some ways to other species. When facing loads of contaminants, such as metal(loid)s, organic enrichment, or other contaminant gradients, macrobenthos must respond in ways that resist such adverse environmental conditions. Therefore, macrobenthic responses may reflect different types and levels of pollutant impacts. [ 7 ] [ 5 ] [ 9 ]
A visual examination of macroorganisms at the bottom of an aquatic ecosystem can be a good indicator of water quality. [ 25 ] | https://en.wikipedia.org/wiki/Macrobenthos |
The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG , where they are called MCU blocks , H.261 , MPEG-1 Part 2 , H.262/MPEG-2 Part 2 , H.263 , MPEG-4 Part 2 , and H.264/MPEG-4 AVC . [ 1 ] [ 2 ] [ 3 ] [ 4 ] In H.265/HEVC , the macroblock as a basic processing unit has been replaced by the coding tree unit . [ 5 ]
A macroblock is divided into transform blocks, which serve as input to the linear block transform, e.g. the DCT. In H.261, the first video codec to use macroblocks, transform blocks have a fixed size of 8×8 samples. [ 1 ] In the YCbCr color space with 4:2:0 chroma subsampling, a 16×16 macroblock consists of 16×16 luma (Y) samples and 8×8 chroma (Cb and Cr) samples. These samples are split into four Y blocks, one Cb block and one Cr block. This design is also used in JPEG and most other macroblock-based video codecs with a fixed transform block size, such as MPEG-1 Part 2 and H.262/MPEG-2 Part 2. In other chroma subsampling formats, e.g. 4:0:0, 4:2:2, or 4:4:4, the number of chroma samples in a macroblock will be smaller or larger, and the grouping of chroma samples into blocks will differ accordingly.
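The 4:2:0 split described above (four 8×8 Y blocks plus one Cb and one Cr block per 16×16 macroblock) can be sketched in a few lines of Python; this is an illustrative helper, not code from any actual codec:

```python
# Split a 4:2:0 macroblock into 8x8 transform blocks, as in
# H.261/MPEG-1/MPEG-2 with their fixed 8x8 block size (illustrative sketch).
def split_into_blocks(plane, size=8):
    """Split a 2-D list of samples into size x size sub-blocks."""
    height, width = len(plane), len(plane[0])
    return [
        [row[x:x + size] for row in plane[y:y + size]]
        for y in range(0, height, size)
        for x in range(0, width, size)
    ]

luma = [[0] * 16 for _ in range(16)]  # 16x16 Y samples
cb = [[0] * 8 for _ in range(8)]      # 8x8 Cb samples (subsampled)
cr = [[0] * 8 for _ in range(8)]      # 8x8 Cr samples (subsampled)

blocks = split_into_blocks(luma) + split_into_blocks(cb) + split_into_blocks(cr)
print(len(blocks))  # 6: four Y blocks, one Cb block, one Cr block
```

Each of the six 8×8 blocks would then be fed to the linear block transform (e.g. the DCT) independently.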
In more modern macroblock-based video coding standards such as H.263 and H.264/AVC, transform blocks can be of sizes other than 8×8 samples. For instance, in H.264/AVC main profile, the transform block size is 4×4. [ 4 ] In H.264/AVC High profile, the transform block size can be either 4×4 or 8×8, adapted on a per-macroblock basis. [ 4 ]
Distinct from the division into transform blocks, a macroblock can be split into prediction blocks. In early standards such as H.261, MPEG-1 Part 2, and H.262/MPEG-2 Part 2, motion compensation is performed with one motion vector per macroblock. [ 1 ] [ 2 ] In more modern standards such as H.264/AVC, a macroblock can be split into multiple variable-sized prediction blocks, called partitions. [ 4 ] In an inter-predicted macroblock in H.264/AVC, a separate motion vector is specified for each partition. [ 4 ] Correspondingly, in an intra-predicted macroblock, where samples are predicted by extrapolating from the edges of neighboring blocks, the prediction direction is specified on a per-partition basis. [ 4 ] In H.264/AVC, prediction partition size ranges from 4×4 to 16×16 samples for both inter-prediction (motion compensation) and intra-prediction. [ 4 ]
A possible bitstream representation of a macroblock in a video codec which uses motion compensation and transform coding is given below. [ 6 ] It is similar to the format used in H.261 . [ 1 ]
The term macroblocking is commonly used to refer to block coding artifacts. | https://en.wikipedia.org/wiki/Macroblock |
A macrocell or macrosite is a cell in a mobile phone network that provides radio coverage served by a high power cell site (tower, antenna or mast). Generally, macrocells provide larger coverage than microcells . The antennas for macrocells are mounted on ground-based masts, rooftops and other existing structures, at a height that provides a clear view over the surrounding buildings and terrain. Macrocell base stations have power outputs of typically tens of watts. Macrocell performance can be increased by increasing the efficiency of the transceiver. [ 1 ]
The term macrocell describes the widest range of cell sizes. Macrocells are found in rural areas or along highways. A microcell , which covers a smaller cell area, is used in a densely populated urban area. Picocells are used for areas smaller than microcells, such as a large office, a mall, or a train station . Currently the smallest area of coverage that can be implemented, with a femtocell , is a home or small office.
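The relationship between transmit power and cell size can be illustrated with an idealized free-space path-loss calculation. All the numbers below (a 46 dBm macrocell, a 30 dBm microcell, 900 MHz, a -100 dBm receiver sensitivity) are invented for illustration, and the free-space model greatly overestimates real terrestrial coverage, which depends on terrain, clutter, and antenna height:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def max_range_km(tx_dbm, rx_sensitivity_dbm, freq_mhz):
    """Distance at which received power falls to the sensitivity floor."""
    budget_db = tx_dbm - rx_sensitivity_dbm
    return 10 ** ((budget_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# Hypothetical figures: a ~40 W (46 dBm) macrocell vs a ~1 W (30 dBm)
# microcell, both at 900 MHz against a -100 dBm receiver sensitivity.
macro = max_range_km(46, -100, 900)
micro = max_range_km(30, -100, 900)
print(f"range ratio: {macro / micro:.2f}x")  # 16 dB more power -> ~6.3x range
```

The point of the sketch is the scaling: every 6 dB of extra link budget doubles the ideal range, so the higher-power, elevated macrocell site covers a far larger area than a microcell.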
| https://en.wikipedia.org/wiki/Macrocell |
Macrocycles are often described as molecules and ions containing a ring of twelve or more atoms. [ 2 ] Classical examples include the crown ethers , calixarenes , porphyrins , and cyclodextrins . Macrocycles constitute a large, mature area of chemistry. [ 3 ]
Macrocycle : Cyclic macromolecule or a macromolecular cyclic portion of a macromolecule.
Note 1: A cyclic macromolecule has no end-groups but may nevertheless be regarded as a chain.
Note 2: In the literature, the term macrocycle is sometimes used for molecules of low relative molecular mass that would not be considered macromolecules. [ 4 ]
The formation of macrocycles by ring-closure is called macrocyclization . [ 5 ] The central challenge to macrocyclization is that ring-closing reactions do not favor the formation of large rings. Instead, medium sized rings or polymers tend to form. Early macrocyclizations were achieved using ketonic decarboxylations for the preparation of terpenoid macrocycles. So, while Ružička was able to produce various macrocycles, the yields were low. [ 6 ] This kinetic problem can be addressed by using high-dilution reactions , whereby intramolecular processes are favored relative to polymerizations. [ 7 ] Reactions amenable to high dilution include the Dieckmann condensation and related base-induced reactions of esters with remote halides.
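The high-dilution argument can be made quantitative: ring closure is first order in substrate while intermolecular chain growth is second order, so their rate ratio scales as EM/[A], where EM (the "effective molarity") folds the ratio of the two rate constants into one number. The EM value in this sketch is purely illustrative:

```python
# Why dilution favors macrocyclization over polymerization: the
# intramolecular rate scales as EM*[A] (first order in substrate) while
# the intermolecular rate scales as [A]^2 (second order). The EM value
# here is hypothetical, chosen only to show the trend.
def cyclization_fraction(conc_M, effective_molarity_M=0.01):
    rate_cyclic = effective_molarity_M * conc_M
    rate_chain = conc_M ** 2
    return rate_cyclic / (rate_cyclic + rate_chain)

for conc in (1.0, 0.1, 0.001):
    print(f"[A] = {conc} M -> {cyclization_fraction(conc):.1%} cyclization")
```

With these assumed numbers, diluting the substrate a thousandfold turns cyclization from a minor pathway into the dominant one, which is exactly the rationale behind high-dilution conditions.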
Some macrocyclizations are achieved using template reactions . Templates are ions, molecules, surfaces etc. that bind and pre-organize reactants, guiding them toward formation of a particular ring size. [ 8 ] The crown ethers are often generated in the presence of an alkali metal cation, which organizes the condensing components by complexation. [ 9 ] An illustrative macrocyclization is the synthesis of (−)- muscone from (+)- citronellal . The 15-membered ring is generated by ring-closing metathesis . [ 10 ]
Macrocyclic stereocontrol refers to the directed outcome of a given intermolecular or intramolecular reaction that is governed by the conformational preference of a macrocycle. Stereocontrol for cyclohexane rings is well established in organic chemistry, in large part due to the axial/equatorial preferential positioning of substituents on the ring. Macrocyclic stereocontrol models the substitution and reactions of medium and large rings in organic chemistry , with remote stereogenic elements providing enough conformational influence to direct the outcome of a reaction.
Early assumptions about macrocycles in synthetic chemistry considered them far too floppy to provide any degree of stereochemical or regiochemical control in a reaction. Investigations in the late 1970s and 1980s challenged this assumption, [ 12 ] and several groups found crystallographic data [ 13 ] and NMR data [ 14 ] suggesting that macrocyclic rings were not the floppy, conformationally ill-defined species many had assumed.
The rigidity of a macrocyclic ring depends significantly on the substitution and the overall size. [ 15 ] [ 16 ] Significantly, even small conformational preferences, such as those envisioned in floppy macrocycles, can profoundly influence the ground state of a given reaction, providing stereocontrol such as in the synthesis of miyakolide. [ 17 ]
Reaction classes used in synthesis of natural products under the macrocyclic stereocontrol model for obtaining a desired stereochemistry include: hydrogenations such as in neopeltolide [ 18 ] and (±)-methynolide, [ 19 ] epoxidations such as in (±)-periplanone B [ 20 ] and lonomycin A, [ 21 ] hydroborations such as in 9-dihydroerythronolide B, [ 22 ] enolate alkylations such as in (±)-3-deoxyrosaranolide, [ 23 ] dihydroxylations such as in cladiell-11-ene-3,6,7-triol, [ 24 ] and reductions such as in eucannabinolide. [ 25 ]
Macrocycles can access a number of stable conformations, with preferences to reside in those that minimize the number of transannular nonbonded interactions within the ring. [ 16 ] Medium rings (8–11 atoms) are the most strained, with strain energies of 9–13 kcal/mol; analysis of the factors important in considering larger macrocyclic conformations can thus be modeled by looking at medium ring conformations. [ 26 ] [ page needed ] Conformational analysis of odd-membered rings suggests they tend to reside in less symmetrical forms with smaller energy differences between stable conformations. [ 27 ]
Conformational analysis of medium rings begins with examination of cyclooctane . Spectroscopic methods have determined that cyclooctane possesses three main conformations: chair-boat , chair-chair , and boat-boat . Cyclooctane prefers to reside in a chair-boat conformation, minimizing the number of eclipsing ethane interactions (shown in blue), as well as torsional strain. [ 28 ] The chair-chair conformation is the second most abundant conformation at room temperature, with a ratio of 96:4 chair-boat:chair-chair observed. [ 12 ]
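The observed 96:4 chair-boat:chair-chair ratio corresponds, via the Boltzmann relation ΔG = -RT ln K, to a free-energy gap of roughly 1.9 kcal/mol at room temperature. The arithmetic:

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.0     # approximate room temperature in K

# Free-energy gap implied by the observed 96:4 conformer ratio,
# via dG = -R*T*ln(K) with K = [minor]/[major].
K = 4 / 96
dG = -R * T * math.log(K)
print(f"chair-chair lies about {dG:.1f} kcal/mol above chair-boat")  # ~1.9
```

The same relation runs in both directions: a measured conformer population gives the energy gap, and a computed gap predicts the population.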
Substitution positional preferences in the ground state conformer of methyl cyclooctane can be approximated using parameters similar to those for smaller rings. In general, the substituents exhibit preferences for equatorial placement, except for the lowest energy structure (pseudo A-value of -0.3 kcal/mol in figure below) in which axial substitution is favored. The "pseudo A-value" is best treated as the approximate energy difference between placing the methyl substituent in the equatorial or axial positions. The most energetically unfavorable interaction involves axial substitution at the vertex of the boat portion of the ring (6.1 kcal/mol).
These energetic differences can help rationalize the lowest energy conformations of 8 atom ring structures containing an sp 2 center. In these structures, the chair-boat is the ground state model, with substitution forcing the structure to adopt a conformation such that non-bonded interactions are minimized from the parent structure. [ 29 ] From the cyclooctene figure below, it can be observed that one face is more exposed than the other, foreshadowing a discussion of privileged attack angles (see peripheral attack).
X-ray analysis of functionalized cyclooctanes provided proof of conformational preferences in these medium rings. Significantly, calculated models matched the obtained X-ray data, indicating that computational modeling of these systems could in some cases quite accurately predict conformations. The increased sp 2 character of the cyclopropane rings favor them to be placed similarly such that they relieve non-bonded interactions. [ 30 ]
Similar to cyclooctane, a cyclodecane ring exhibits several conformations with two lower energy conformations. The boat-chair-boat conformation is energetically minimized, while the chair-chair-chair conformation has significant eclipsing interactions.
These ground-state conformational preferences are useful analogies to more highly functionalized macrocyclic ring systems, where local effects can still be governed to first approximation by energy minimized conformations even though the larger ring size allows more conformational flexibility of the entire structure. For example, in methyl cyclodecane, the ring can be expected to adopt the minimized conformation of boat-chair-boat. The figure below shows the energetic penalty between placing the methyl group at certain sites within the boat-chair-boat structure. Unlike canonical small ring systems, the cyclodecane system with the methyl group placed at the "corners" of the structure exhibits no preference for axial vs. equatorial positioning due to the presence of an unavoidable gauche-butane interaction in both conformations. Significantly more intense interactions develop when the methyl group is placed in the axial position at other sites in the boat-chair-boat conformation. [ 12 ]
Similar principles guide the lowest energy conformations of larger ring systems. Along with the acyclic stereocontrol principles outlined below, subtle interactions between remote substituents in large rings, analogous to those observed for 8-10 membered rings, can influence the conformational preferences of a molecule. In conjunction with remote substituent effects, local acyclic interactions can also play an important role in determining the outcome of macrocyclic reactions. [ 31 ] The conformational flexibility of larger rings potentially allows for a combination of acyclic and macrocyclic stereocontrol to direct reactions. [ 31 ]
The stereochemical result of a given reaction on a macrocycle capable of adopting several conformations can be modeled by a Curtin-Hammett scenario. In the diagram below, the two ground state conformations exist in an equilibrium, with some difference in their ground state energies. Conformation B is lower in energy than conformation A and possesses a similar energy barrier to its transition state in a hypothetical reaction; the product formed is thus predominantly product B (P B ), arising from conformation B via transition state B (TS B ). The inherent preference of a ring to exist in one conformation over another provides a tool for stereoselective control of reactions by biasing the ring into a given configuration in the ground state. The energy differences, ΔΔG ‡ and ΔG 0 , are significant considerations in this scenario. The preference for one conformation over another can be characterized by ΔG 0 , the free energy difference, which can, at some level, be estimated from conformational analysis. The free energy difference between the two transition states of each conformation on its path to product formation is given by ΔΔG ‡ . The value of ΔG 0 between not just one, but many accessible conformations is the underlying energetic impetus for reactions occurring from the most stable ground state conformation and is the crux of the peripheral attack model outlined below. [ 32 ]
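In a Curtin-Hammett scenario with fast conformer interconversion, the product ratio is set by ΔΔG ‡ , the gap between the two transition-state free energies, alone. A small numeric sketch (the 1.5 kcal/mol value is an arbitrary illustration, not a figure from any specific reaction):

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.0     # approximate room temperature in K

def product_ratio(ddG_ts_kcal):
    """P_B/P_A from the transition-state free-energy gap (Curtin-Hammett)."""
    return math.exp(ddG_ts_kcal / (R * T))

# An illustrative 1.5 kcal/mol gap between TS_A and TS_B already gives
# a roughly 93:7 product mixture at room temperature.
print(f"P_B : P_A = {product_ratio(1.5):.1f} : 1")
```

This is why even modest conformational biases in a macrocycle can translate into high diastereoselectivity: the exponential dependence amplifies small free-energy differences.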
Macrocyclic rings containing sp 2 centers display a conformational preference for the sp 2 centers to avoid transannular nonbonded interactions by orienting perpendicular to the plane of the ring. W. Clark Still proposed that the ground state conformations of macrocyclic rings, containing the energy minimized orientation of the sp 2 center, display one face of an olefin outwards from the ring. [ 12 ] [ 20 ] [ 23 ] Addition of reagents to the outward-facing side of the olefin (peripheral attack) is thus favored, while attack from across the ring on the inward diastereoface is disfavored. Ground state conformations dictate the exposed face of the reactive site of the macrocycle, thus both local and distant stereocontrol elements must be considered. The peripheral attack model holds well for several classes of macrocycles, though it relies on the assumption that ground state geometries remain unperturbed in the corresponding transition state of the reaction.
Early investigations of macrocyclic stereocontrol studied the alkylation of 8-membered cyclic ketones with varying substitution. [ 12 ] In the example below, alkylation of 2-methylcyclooctanone occurred to yield the predominantly trans product. Proceeding from the lowest energy conformation of 2-methylcyclooctanone, peripheral attack is observed from either one of the low energy (energetic difference of 0.5 kcal/mol) enolate conformations, resulting in a trans product from either of the two depicted transition state conformations. [ 33 ]
Unlike the cyclooctanone case, alkylation of 2-cyclodecanone rings does not display significant diastereoselectivity. [ 12 ]
However, 10-membered cyclic lactones display significant diastereoselectivity. [ 12 ] The proximity of the methyl group to the ester linkage was directly correlated with the diastereomeric ratio of the reaction products, with placement at the 9 position (below) yielding the highest selectivity. In contrast, when the methyl group was placed at the 7 position, a 1:1 mixture of diastereomers was obtained. Placement of the methyl group at the 9-position in the axial position yields the most stable ground state conformation of the 10-membered ring leading to high diastereoselectivity.
Conjugate addition to the E-enone below also follows the expected peripheral attack model to yield predominantly trans product. [ 33 ] High selectivity in this addition can be attributed to the placement of sp 2 centers such that transannular nonbonded interactions are minimized, while also placing the methyl substitution in the more energetically favorable position for cyclodecane rings. This ground state conformation heavily biases conjugate addition to the less hindered diastereoface.
Similar to intermolecular reactions, intramolecular reactions can show significant stereoselectivity from the ground state conformation of the molecule. In the intramolecular Diels-Alder reaction depicted below, the lowest energy conformation yields the observed product. [ 34 ] The structure minimizing repulsive steric interactions provides the observed product by having the lowest barrier to a transition state for the reaction. Though no external attack by a reagent occurs, this reaction can be thought of similarly to those modeled with peripheral attack; the lowest energy conformation is the most likely to react for a given reaction.
The lowest energy conformations of macrocycles also influence intramolecular reactions involving transannular bond formation. In the intramolecular Michael addition sequence below, the ground state conformation minimizes transannular interactions by placing the sp 2 centers at the appropriate vertices, while also minimizing diaxial interactions. [ 35 ]
These principles have been applied in multiple natural product targets containing medium and large rings. The syntheses of cladiell-11-ene-3,6,7-triol, [ 24 ] (±)-periplanone B, [ 20 ] eucannabinolide, [ 25 ] and neopeltolide [ 18 ] are all significant in their usage of macrocyclic stereocontrol en route to the desired structural targets.
The cladiellin family of marine natural products feature 9-membered rings. The synthesis of (−)-cladiella-6,11-dien-3-ol allowed access to a variety of other members of the cladiellin family. The conversion to cladiell-11-ene-3,6,7-triol makes use of macrocyclic stereocontrol in the dihydroxylation of a trisubstituted olefin. Below is shown the synthetic step controlled by the ground state conformation of the macrocycle, allowing stereoselective dihydroxylation without the usage of an asymmetric reagent. This substrate-controlled addition exemplifies the peripheral attack model, in which two centers on the molecule are functionalized at once in a concerted fashion.
The synthesis of (±)-periplanone B is a prominent example of macrocyclic stereocontrol. [ 20 ] Periplanone B is a sex pheromone of the American female cockroach, and has been the target of several synthetic attempts. Significantly, two reactions on the macrocyclic precursor to (±)-periplanone B were directed using only ground state conformational preferences and the peripheral attack model. Reacting from the most stable boat-chair-boat conformation, asymmetric epoxidation of the cis-internal olefin can be achieved without using a reagent-controlled epoxidation method or a directed epoxidation with an allylic alcohol.
Epoxidation of the ketone was achieved, and can be modeled by peripheral attack of the sulfur ylide on the carbonyl group in a Johnson-Corey-Chaykovsky reaction to yield the protected form of (±)-periplanone B. Deprotection of the alcohol followed by oxidation yielded the desired natural product.
In the synthesis of the cytotoxic germacranolide sesquiterpene eucannabinolide, Still demonstrates the application of the peripheral attack model to the reduction of a ketone to set a new stereocenter using NaBH 4 . Significantly, the synthesis of eucannabinolide relied on the usage of molecular mechanics (MM2) computational modeling to predict the lowest energy conformation of the macrocycle to design substrate-controlled stereochemical reactions.
Neopeltolide was originally isolated from sponges near the Jamaican coast and exhibits nanomolar cytotoxic activity against several lines of cancer cells. The synthesis of the neopeltolide macrocyclic core displays a hydrogenation controlled by the ground state conformation of the macrocycle.
One important application are the many macrocyclic antibiotics, the macrolides , e.g. clarithromycin . Many metallocofactors are bound to macrocyclic ligands, which include porphyrins , corrins , and chlorins . These rings arise from multistep biosynthetic processes that also feature macrocycles.
Macrocycles often bind ions and facilitate ion transport across hydrophobic membranes and solvents. The macrocycle envelops the ion with a hydrophobic sheath, which facilitates phase transfer properties. [ 36 ]
Macrocycles are often bioactive and could be useful for drug delivery. [ 37 ] [ 38 ]
Over the last few years, macrocyclic molecules have become increasingly relevant in drug discovery. For a long time, this motif was found almost exclusively in natural products (see Cyclosporine ), but it can now also be found in some completely synthetic molecules (see Grazoprevir ). [ 39 ]
Special focus is placed on macrocyclic peptides, as these are comparatively easy to produce. In addition, their risk is classified as comparatively low because, like the body's own proteins, they consist of amino acids (which can, however, be modified). [ 40 ] Normally it is difficult for molecules above a certain size and number of hydrogen bond donors and acceptors to be absorbed orally. [ 41 ] However, it is now possible to make these molecules orally bioavailable through certain modifications of the amino acids and through a high degree of N-alkylation. [ 42 ] [ 43 ] Such molecules can also exhibit chameleon-like behaviour: the parts of the molecule that are directed outwards and inwards can change depending on the environment, thus influencing solubility. [ 44 ] [ 45 ]
Macroevolution comprises the evolutionary processes and patterns which occur at and above the species level. [ 1 ] [ 2 ] [ 3 ] In contrast, microevolution is evolution occurring within the population(s) of a single species. In other words, microevolution is the scale of evolution that is limited to intraspecific (within-species) variation, while macroevolution extends to interspecific (between-species) variation. [ 4 ] The evolution of new species ( speciation ) is an example of macroevolution. This is the common definition for 'macroevolution' used by contemporary scientists. [ a ] [ b ] [ c ] [ d ] [ e ] [ f ] [ g ] [ h ] [ i ] However, the exact usage of the term has varied throughout history. [ 4 ] [ 10 ] [ 11 ]
Macroevolution addresses the evolution of species and higher taxonomic groups ( genera , families , orders , etc) and uses evidence from phylogenetics , [ 5 ] the fossil record, [ 9 ] and molecular biology to answer how different taxonomic groups exhibit different species diversity and/or morphological disparity . [ 12 ]
After Charles Darwin published his book On the Origin of Species [ 13 ] in 1859, evolution was widely accepted to be a real phenomenon. However, many scientists still disagreed with Darwin that natural selection was the primary mechanism to explain evolution. Prior to the modern synthesis , during the period between the 1880s and the 1930s (dubbed the ‘ Eclipse of Darwinism ’), many scientists argued in favor of alternative explanations. These included ‘ orthogenesis ’, and among its proponents was the Russian entomologist Yuri A. Filipchenko .
Filipchenko appears to have been the one who coined the term ‘macroevolution’ in his book Variabilität und Variation (1927). [ 11 ] While introducing the concept, he claimed that the field of genetics is insufficient to explain “the origin of higher systematic units” above the species level.
Bei einer solchen Sachlage muß zugegeben werden, daß die Entscheidung der Frage über die Faktoren der größeren Züge der Evolution, d. h. dessen, was wir Makroevolution nennen, unabhängig von den Ergebnissen der gegenwärtigen Genetik geschehen muß. So vorteilhaft es für uns auch wäre, uns auch in dieser Frage auf die exakten Resultate der Genetik zu stützen, so sind sie doch, unserer Meinung nach, zu diesem Zweck ganz unbrauchbar, da die Frage über die Entstehung der höheren systematischen Einheiten ganz außerhalb des Forschungsgebietes der Genetik liegt. Infolgedessen ist letztere auch eine exakte Wissenschaft, während die Deszendenzlehre heute, ebenso wie auch im XIX. Jahrhundert, einen spekulativen Charakter trägt.
In such a state of affairs, it must be admitted that the question of the factors behind the larger features of evolution, i.e., of what we call macroevolution, must be decided independently of the results of current genetics. As advantageous as it would be for us to rely on the exact results of genetics in this question, they are, in our opinion, completely useless for this purpose, since the question of the origin of the higher systematic units lies entirely outside the research field of genetics. As a result, the latter is an exact science, while the doctrine of descent today, just as in the 19th century, has a speculative character.
— Yuri Filipchenko, Variabilität und Variation (1927), pages 93-94 [ 11 ]
Regarding the origin of higher systematic units, Filipchenko stated his claim that ‘like-produces-like’. A taxon must originate from other taxa of equivalent rank. A new species must come from an old species, a genus from an older genus, a family from another family, etc.
— Yuri Filipchenko, Variabilität und Variation (1927), page 89 [ 11 ]
Filipchenko believed this was the only way to explain the origin of the major characters that define species and especially higher taxonomic groups ( genera , families , orders , etc.). For example, the origin of families must require the sudden appearance of new traits that differ in greater magnitude than the characters required for the origin of a genus or species. However, this view is no longer consistent with the contemporary understanding of evolution. Furthermore, the Linnaean ranks of ‘genus’ (and higher) are not real entities but artificial concepts which break down when they are combined with the process of evolution. [ 15 ] [ 10 ]
Nevertheless, Filipchenko’s distinction between microevolution and macroevolution had a major impact on the development of evolutionary science. The term was adopted by Filipchenko's protégé Theodosius Dobzhansky in his book ‘Genetics and the Origin of Species’ (1937), a seminal work that contributed to the development of the Modern Synthesis . ‘Macroevolution’ was also adopted by those who used it to criticize the Modern Synthesis. A notable example of this was the book The Material Basis of Evolution (1940) by the geneticist Richard Goldschmidt , a close friend of Filipchenko. [ 16 ] Goldschmidt suggested saltational evolutionary changes either due to mutations that affect the rates of developmental processes [ 17 ] or due to alterations in the chromosomal pattern. [ 18 ] The latter idea in particular was widely rejected by proponents of the modern synthesis , but the hopeful monster concept, based on evolutionary developmental biology (or evo-devo) explanations, has found a moderate revival in recent times. [ 19 ] [ 20 ] Occasionally such dramatic changes can lead to novel features that survive.
As an alternative to saltational evolution, Dobzhansky [ 21 ] suggested that the difference between macroevolution and microevolution reflects essentially a difference in time-scales, and that macroevolutionary changes were simply the sum of microevolutionary changes over geologic time. This view became broadly accepted, and accordingly, the term macroevolution has been used widely as a neutral label for the study of evolutionary changes that take place over a very large time-scale. [ 22 ] Further, species selection [ 2 ] suggests that selection among species is a major evolutionary factor that is independent from and complementary to selection among organisms. Accordingly, the level of selection has become the conceptual basis of a third definition, which defines macroevolution as evolution through selection among interspecific variation. [ 4 ]
The fact that both micro- and macroevolution (including common descent) are supported by overwhelming evidence remains uncontroversial within the scientific community . However, there has been considerable debate over the past 80 years regarding the causal and explanatory connection between microevolution and macroevolution. [ 1 ]
The ‘Extrapolation’ view holds there is no fundamental difference between the two aside from scale; i.e. macroevolution is merely cumulative microevolution. Hence, the patterns observed at the macroevolutionary scale can be explained by microevolutionary processes over long periods of time.
The ‘Decoupled’ view holds that microevolutionary processes are decoupled from macroevolutionary processes because there are separate macroevolutionary processes that cannot be sufficiently explained by microevolutionary processes alone.
" ... macroevolutionary processes are underlain by microevolutionary phenomena and are compatible with microevolutionary theories, but macroevolutionary studies require the formulation of autonomous hypotheses and models (which must be tested using macroevolutionary evidence). In this (epistemologically) very important sense, macroevolution is decoupled from microevolution: macroevolution is an autonomous field of evolutionary study."
Francisco J. Ayala (1983) [ 23 ]
Many scientists see macroevolution as a field of study rather than a distinct process similar to the process of microevolution. Thus, macroevolution is concerned with the history of life, and macroevolutionary explanations encompass ecology, paleontology, mass extinctions, plate tectonics, and unique events such as the Cambrian explosion. [ 24 ] [ 5 ] [ 25 ] [ 26 ] [ 16 ] [ 10 ] [ 27 ]
Within microevolution, the evolutionary process of changing heritable characteristics (e.g. changes in allele frequencies) is described by population genetics , with mechanisms such as mutation , natural selection , and genetic drift . However, the scope of evolution can be expanded to higher scales, where different observations are made and macroevolutionary mechanisms are proposed to explain them. [ 2 ] For example, speciation can be discussed in terms of its ‘mode’, i.e. how speciation occurs; different modes of speciation include sympatric and allopatric speciation. Additionally, scientists research the ‘tempo’ of speciation, i.e. the rate at which species change genetically and/or morphologically; classically, competing hypotheses for the tempo of speciation include phyletic gradualism and punctuated equilibrium . Lastly, the causes of speciation are also extensively researched. [ 1 ]
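The population-genetic mechanisms mentioned above can be illustrated with a minimal simulation of selection plus drift acting on a single allele frequency. The selection coefficient, sample size, and starting frequency below are arbitrary illustrative values, not taken from any source cited here, and the model is a deliberately simplified haploid approximation.

```python
import random

random.seed(1)  # fixed seed so repeated runs give the same trajectory

def next_generation(p, n_alleles, s):
    """One generation of selection plus genetic drift on the frequency p
    of allele A. s is the selection coefficient favouring A (haploid
    approximation); n_alleles is the number of allele copies sampled to
    form the next generation."""
    # Deterministic shift due to selection
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
    # Genetic drift: binomial sampling of allele copies
    survivors = sum(1 for _ in range(n_alleles) if random.random() < p_sel)
    return survivors / n_alleles

p = 0.1  # illustrative starting frequency of the favoured allele
for _ in range(100):
    p = next_generation(p, n_alleles=500, s=0.1)
print(round(p, 2))  # typically close to 1.0 after 100 generations
```

Even this toy model shows the microevolutionary bookkeeping at work: selection pushes the frequency up deterministically each generation, while finite sampling adds the random fluctuations described as genetic drift.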
More questions can be asked regarding the evolution of species and higher taxonomic groups ( genera , families , orders , etc.), and how these have evolved across geography and vast spans of geological time . Such questions are researched in various fields of science, making the study of macroevolution interdisciplinary . For example:
According to the modern definition, the evolutionary transition from the ancestral to the daughter species is microevolutionary, because it results from selection (or, more generally, sorting) among varying organisms. However, speciation has also a macroevolutionary aspect, because it produces the interspecific variation species selection operates on. [ 4 ] Another macroevolutionary aspect of speciation is the rate at which it successfully occurs, analogous to reproductive success in microevolution. [ 2 ]
Speciation is the process in which populations within one species change to such an extent that they become reproductively isolated , that is, they can no longer interbreed. However, this classical concept has been challenged and, more recently, a phylogenetic or evolutionary species concept has been adopted. Its main criterion for a new species is that it be diagnosable and monophyletic , that is, that it forms a clearly defined lineage. [ 29 ] [ 30 ]
Charles Darwin first discovered that speciation can be extrapolated so that species not only evolve into new species, but also into new genera , families and other groups of animals. In other words, macroevolution is reducible to microevolution through selection of traits over long periods of time. [ 31 ] In addition, some scholars have argued that selection at the species level is important as well. [ 32 ] The advent of genome sequencing enabled the discovery of gradual genetic changes both during speciation but also across higher taxa. For instance, the evolution of humans from ancestral primates or other mammals can be traced to numerous but individual mutations. [ 33 ]
One of the main questions in evolutionary biology is how new structures evolve, such as new organs . Macroevolution is often thought to require the evolution of structures that are 'completely new'. However, fundamentally novel structures are not necessary for dramatic evolutionary change. As can be seen in vertebrate evolution , most "new" organs are actually not new—they are simply modifications of previously existing organs. For instance, the evolution of mammal diversity in the past 100 million years has not required any major innovation. [ 34 ] All of this diversity can be explained by modification of existing organs, such as the evolution of elephant tusks from incisors . Other examples include wings (modified limbs), feathers (modified reptile scales ), [ 35 ] lungs (modified swim bladders , e.g. found in fish ), [ 36 ] [ 37 ] or even the heart (a muscularized segment of a vein ). [ 38 ]
The same concept applies to the evolution of "novel" tissues. Even fundamental tissues such as bone can evolve from combining existing proteins ( collagen ) with calcium phosphate (specifically, hydroxy-apatite ). This probably happened when certain cells that make collagen also accumulated calcium phosphate to get a proto-bone cell. [ 39 ]
Microevolution is facilitated by mutations , the vast majority of which have no or very small effects on gene or protein function. For instance, the activity of an enzyme may be slightly changed or the stability of a protein slightly altered. However, occasionally mutations can dramatically change the structure and function of a protein. This may be called "molecular macroevolution".
Protein function . There are countless cases in which protein function is dramatically altered by mutations. For instance, a mutation in acetaldehyde dehydrogenase (EC:1.2.1.10) can change it to a 4-hydroxy-2-oxopentanoate pyruvate lyase (EC:4.1.3.39), i.e., a mutation that changes an enzyme from one to another EC class (there are only 7 main classes of enzymes). [ 40 ] Another example is the conversion of a yeast galactokinase (Gal1) to a transcription factor (Gal3) which can be achieved by an insertion of only two amino acids. [ 41 ]
While some mutations may not change the molecular function of a protein significantly, their biological function may be dramatically changed. For instance, most brain receptors recognize specific neurotransmitters, but that specificity can easily be changed by mutations. This has been shown by acetylcholine receptors that can be changed to serotonin or glycine receptors which actually have very different functions. Their similar gene structure also indicates that they must have arisen from gene duplications . [ 42 ]
Protein structure . Although protein structures are highly conserved, sometimes one or a few mutations can dramatically change a protein. For instance, an IgG-binding 4β+α fold can be transformed into an albumin -binding 3-α fold via a single amino-acid mutation. This example also shows that such a transition can happen with neither function nor native structure being completely lost. [ 43 ] In other words, even when multiple mutations are required to convert one protein or structure into another, the structure and function are at least partially retained in the intermediary sequences. Similarly, domains can be converted into other domains (and thus other functions). For instance, SH3 folds can evolve into OB folds, which in turn can evolve into CLB folds. [ 44 ]
A macroevolutionary benchmark study is Sepkoski's [ 45 ] [ 46 ] work on marine animal diversity through the Phanerozoic. His iconic diagram of the numbers of marine families from the Cambrian to the Recent illustrates the successive expansion and dwindling of three " evolutionary faunas " that were characterized by differences in origination rates and carrying capacities. Long-term ecological changes and major geological events are postulated to have played crucial roles in shaping these evolutionary faunas. [ 47 ]
Macroevolution is driven by differences between species in origination and extinction rates. Remarkably, these two factors are generally positively correlated: taxa that typically have high diversification rates also have high extinction rates. This observation was first described by Steven Stanley , who attributed it to a variety of ecological factors. [ 48 ] Yet a positive correlation of origination and extinction rates is also a prediction of the Red Queen hypothesis , which postulates that evolutionary progress (increase in fitness) of any given species causes a decrease in the fitness of other species, ultimately driving to extinction those species that do not adapt rapidly enough. [ 49 ] High rates of origination must therefore correlate with high rates of extinction. [ 4 ] Stanley's rule, which applies to almost all taxa and geologic ages, is therefore an indication of a dominant role of biotic interactions in macroevolution.
While the vast majority of mutations are inconsequential, some can have a dramatic effect on morphology or other features of an organism. One of the best studied cases of a single mutation that leads to massive structural change is the Ultrabithorax mutation in fruit flies. The mutation duplicates the wings of a fly to make it look like a dragonfly , a different order of insect.
The evolution of multicellular organisms is one of the major breakthroughs in evolution. The first step in converting a unicellular organism into a metazoan (a multicellular organism) is to allow cells to attach to each other. This can be achieved by one or a few mutations . In fact, many bacteria form multicellular assemblies, e.g. cyanobacteria or myxobacteria . Another bacterial species, Jeongeupia sacculi , forms well-ordered sheets of cells, which ultimately develop into a bulbous structure. [ 50 ] [ 51 ] Similarly, unicellular yeast cells can become multicellular through a single mutation in the ACE2 gene, which causes the cells to form a branched multicellular form. [ 52 ]
The wings of bats have the same structural elements (bones) as any other five-fingered mammal (see periodicity in limb development ). However, the finger bones in bats are dramatically elongated, so the question is how these bones became so long. It has been shown that certain growth factors, such as bone morphogenetic proteins (specifically Bmp2 ), are overexpressed, stimulating the elongation of certain bones. Genetic analyses identified the changes in the bat genome that lead to this phenotype, and the effect has been recapitulated in mice: when specific bat DNA is inserted into the mouse genome, recapitulating these mutations, the bones of mice grow longer. [ 53 ]
Snakes evolved from lizards . Phylogenetic analysis shows that snakes are actually nested within the phylogenetic tree of lizards, demonstrating that they have a common ancestor. [ 54 ] This split happened about 180 million years ago and several intermediary fossils are known to document the origin. In fact, limbs have been lost in numerous clades of reptiles , and there are cases of recent limb loss . For instance, the skink genus Lerista has lost limbs in multiple cases, with all possible intermediary steps, that is, there are species which have fully developed limbs, shorter limbs with 5, 4, 3, 2, 1 or no toes at all. [ 55 ]
While the evolution of humans from their primate ancestors did not require massive morphological changes, our brain has changed enough to allow human consciousness and intelligence. Although this involved relatively minor morphological changes, it resulted in dramatic changes to brain function . [ 56 ] Thus, macroevolution does not have to be morphological; it can also be functional.
Most lizards are egg-laying and thus need an environment that is warm enough to incubate their eggs. However, some species have evolved viviparity , that is, they give birth to live young, as almost all mammals do. In several clades of lizards, egg-laying (oviparous) species have evolved into live-bearing ones, apparently with very little genetic change. For instance, a European common lizard, Zootoca vivipara , is viviparous throughout most of its range, but oviparous in the extreme southwest portion. [ 57 ] [ 58 ] That is, within a single species, a radical change in reproductive behavior has happened. Similar cases are known from South American lizards of the genus Liolaemus which have egg-laying species at lower altitudes, but closely related viviparous species at higher altitudes, suggesting that the switch from oviparous to viviparous reproduction does not require many genetic changes. [ 59 ]
Most animals are either active at night or during the day. However, some species switched their activity pattern from day to night or vice versa. For instance, the African striped mouse ( Rhabdomys pumilio ), transitioned from the ancestrally nocturnal behavior of its close relatives to a diurnal one. Genome sequencing and transcriptomics revealed that this transition was achieved by modifying genes in the rod phototransduction pathway, among others. [ 60 ]
Subjects studied within macroevolution include: [ 61 ] | https://en.wikipedia.org/wiki/Macroevolution |
Macroflora is a term used for all the plants occurring in a particular area that are large enough to be seen with the naked eye. [ 1 ] It is usually synonymous with the Flora and can be contrasted with the microflora , a term used for all the bacteria and other microorganisms in an ecosystem .
Macroflora is also an informal term used by many palaeobotanists to refer to an assemblage of plant fossils as preserved in the rock. [ 2 ] This is in contrast to the flora , which in this context refers to the assemblage of living plants that were growing in a particular area, whose fragmentary remains became entrapped within the sediment from which the rock was formed and thus became the macroflora.
| https://en.wikipedia.org/wiki/Macroflora |
Guatemala is one of the richest biodiversity hotspots in the world. [ 1 ] This is due to the variety of its territory and ecosystems, which occur from sea level up to more than 4,000 meters above sea level. Ecological niches include (but are not limited to) subtropical and tropical rain forests , wetlands , dry forests , scrublands, cloud forests , and pine-fir forests in the highlands. Despite this wealth, however, knowledge of the mycobiota of the country is very poor. [ 2 ] There are several reasons for this, primarily the prolonged Guatemalan civil war (1960–1996) and related political and social instability, which severely hampered field work in the country. The lack of trained local mycologists has also delayed the detailed investigation of the rich mycota inhabiting the highly diversified Guatemalan biotopes .
Larger fungi (usually referred to as macrofungi or macromycetes) are of particular interest because of their importance as food resources and as a component of traditional culture in many places. [ 3 ] Moreover, many basidiomycetes and ascomycetes with conspicuous sporocarps often play an important role as ectomycorrhizal mycobionts of trees and shrubs of boreal forests in the Northern Hemisphere and are important elements in many areas of the Southern Hemisphere. [ 4 ] Although Guatemalan macrofungi have not yet been extensively surveyed, a preliminary checklist encompasses some 350 species of macromycetes (31 ascomycetes and 319 basidiomycetes) occurring in 163 genera and 20 ascomycetous and basidiomycetous orders. [ 5 ] Recently, 12 species of ascomycetes were newly cited; with these new records, there are now 44 ascomycete species known from Guatemala. [ 6 ] Most available observations pertain to the highlands, in the departments of Alta Verapaz , Baja Verapaz , Chimaltenango , Guatemala , El Quiché , Huehuetenango , and Quetzaltenango , while the wide lowland Petén region has been scantly explored, despite the fact that it accounts for about one third of Guatemala's area and, together with adjacent areas of Belize and southern Mexico , comprises the largest unbroken tract of tropical forest north of the Brazilian Amazon . At the order level, Agaricales was found to host the largest number of species (almost one third of the entire set), followed by Polyporales and Boletales . The most represented genera are Amanita , Russula , Lactarius , Laccaria , and Suillus . [ 5 ] Intriguingly, all these genera are ectomycorrhizal with the several Pinus and Quercus species that form extensive pine and mixed forests of the highlands, and/or with the endangered Abies guatemalensis (pinabete), most abundant between 2800–3200 m elevation on the Sierra de los Cuchumatanes in western Guatemala. [ 7 ]
"The Mesoamerican tradition of eating wild edible fungi continues from Mexico to west Guatemala then is absent from much of Honduras and Nicaragua , even though both contain forest areas that in theory support production of edible fungi", remarked Eric Boa in his reference volume on worldwide wild edible fungi. [ 3 ]
Indeed, the deep mycophily of Guatemalan indigenous people is apparent. Ethnomycological surveys conducted in the highlands, mainly through visits to local markets and interviews with vendors, revealed that some 130 species are considered edible, most of which are actually sold in markets or along roadsides. [ 5 ] [ 8 ] [ 9 ] Species of edible mushrooms belonging to different genera (e.g., Amanita , Lactarius , Helvella ) are often offered mixed together and sold in the form of a 'medida', i.e. a fixed amount equal to the content of a small basket. However, the more popular and valuable species are usually sold separately. Lactarius deliciosus and L. indigo – known as 'Shara' (or 'Xara') 'amarilla' and 'Shara' (or 'Xara') 'azul', respectively, or 'Cabeza de Xara' in local Spanish (Sharas, also known as 'urracas', are variously coloured birds living in different parts of the country) – the Amanita caesarea complex (hongo de San Juan), and Cantharellus cibarius (anacate) are among the most appreciated edible mushrooms among Guatemalan Maya people . Daldinia fissa is recorded as a common edible ascomycete from the municipality of Tecpán, Department of Chimaltenango. The mushroom is named "tzan tz'i" in the Kaqchikel language, meaning "dog nose" or "chucho nose" because of the shape of its ascostroma. [ 10 ] Many, but by no means all, edible species are identified through common vernacular names, which have sometimes been recorded in several Maya languages . [ 5 ] Generally, mushrooms are gathered and sold by women, often in family groups spanning three generations. The localities with the most traditional knowledge, based on the number of species used, were Tecpán (Chimaltenango) with 31 species, followed by San Juan Comalapa (Chimaltenango) and Totonicapán city (Totonicapán), with 22 species each.
The collection of wild edible fungi takes place throughout the year, and the main units used for marketing are the "medida", "unidad", "libra", and "manojo", depending on the species or group of species in question. [ 11 ] The use of macrofungi in Guatemala other than for human consumption is limited to a few instances, such as for wound healing and for preventing infections (spores and dried mycelia of Calvatia lilacina and C. cyathiformis ), as cicatrizing substances to treat burns in children (sporocarps of Geastrum and Lycoperdon ), and to heal and disinfect wounds and treat bee stings (dried specimens of Lycoperdon marginatum ). [ 5 ]
Guatemala hosted the 7th International Workshop on Edible Ectomycorrhizal Mushrooms (IWEMM-7). Held in the colonial city of Antigua from July 29 to August 3, 2013, the congress convened researchers from institutions worldwide to discuss the most recent information about the diversity, cultivation, and production of wild edible mycorrhizal mushrooms. Several talks also dealt with the current status of knowledge on macrofungi in Guatemala and their traditional use. | https://en.wikipedia.org/wiki/Macrofungi_of_Guatemala |
A macrograph or photomacrograph is an image taken at a scale that is visible to the naked eye , as opposed to a micrographic image, taken with a microscope . It is sometimes defined more precisely as an image at a scale of less than ten times magnification . [ 1 ]
This term is often applied to a three-dimensional image taken of a material using a low-power stereomicroscope . These images are used in materials science , particularly in the study of stress fractures in metals. [ 2 ] [ 3 ] This method can also be used to assay the fine structure of steel, in a standardized test called the Baumann method that creates a sulfur print showing the amount and distribution of sulfur inclusions through the metal structure. [ 4 ]
| https://en.wikipedia.org/wiki/Macrograph |
Macroinvertebrate Community Index (MCI) is an index used in New Zealand to measure the water quality of fresh water streams. [ 1 ] The presence or lack of macroinvertebrates such as insects, worms and snails in a river or stream can give a biological indicator on the health of that waterway. [ 2 ] The MCI assigns a number to each species of macroinvertebrate based on the sensitivity of that species to pollution . The index then calculates an average score. [ 1 ] A higher score on the MCI generally indicates a more healthy stream. [ 2 ]
The MCI (Macroinvertebrate Community Index) relies on an allocation of scores to freshwater macroinvertebrates based on their pollution tolerances. Freshwater macroinvertebrates found in pristine conditions score higher than those found in polluted areas. [ 3 ] MCI values can be calculated from macroinvertebrate presence-absence data using this equation: [ 3 ]
MCI = (site score / number of scoring taxa) × 20
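The calculation can be sketched in a few lines of code. The taxa and tolerance scores below are hypothetical placeholders, not the official New Zealand scoring list; the site score is simply the sum of the tolerance scores of the taxa present.

```python
# Hypothetical tolerance scores (1 = very pollution-tolerant, 10 = highly
# intolerant); real MCI work uses the published New Zealand taxon scores.
TOLERANCE = {"Deleatidium": 8, "Potamopyrgus": 4, "Chironomus": 1, "Oligochaeta": 1}

def mci(taxa_present):
    """MCI = (site score / number of scoring taxa) * 20, where the site
    score is the sum of tolerance scores of the taxa found at the site."""
    scores = [TOLERANCE[t] for t in taxa_present if t in TOLERANCE]
    return sum(scores) / len(scores) * 20

print(mci(["Deleatidium", "Potamopyrgus"]))  # (8 + 4) / 2 * 20 = 120.0
print(mci(["Chironomus", "Oligochaeta"]))    # (1 + 1) / 2 * 20 = 20.0
```

Because scores run from 1 to 10 and the mean is multiplied by 20, sites dominated by pollution-sensitive taxa approach 200, while tolerant-taxon-dominated sites fall toward 20.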
Previous water quality assessments have relied on both chemical and habitat analysis; however, these methods have proven insufficient due to pollution from nonpoint sources. [ 4 ] Species living in an aquatic environment may be the best natural indicators of environmental quality, revealing the effects of any habitat alteration or pollution, [ 4 ] and have been shown to respond to a wide range of stressors such as sedimentation , urbanization , agricultural practices, and forest harvesting effects. [ 5 ] Changes in macroinvertebrate communities that reduce diversity increase the dominance of pollution-tolerant invertebrates, such as oligochaetes and chironomids . [ 6 ] Thus, a lack of species diversity and low biotic index scores among inhabitant macroinvertebrates may be indicators of poor water quality. [ 7 ] The risk of water quality degradation is greatest in low-elevation areas, where high-intensity agriculture and urban development are the dominant land uses. [ 8 ]
Macroinvertebrate communities are the preferred indicators of aquatic ecosystem health because they are easy to both collect and identify and have short life spans, thus responding very quickly to changes in their environment. [ 5 ] The MCI method of using macroinvertebrate communities to assess the overall health of an aquatic environment continues to be among the most reliable, applicable, and widely acclaimed methods around the world. [ 9 ]
Variations on the MCI
In addition to the MCI index defined above, there are two other variations: the QMCI (Quantitative Macroinvertebrate Community Index) and the SQMCI (Semi-Quantitative Macroinvertebrate Community Index). Both the MCI and the QMCI are widely used in New Zealand. The combination of widespread use and good performance of the MCI and the QMCI in detecting water quality in aquatic ecosystems has sparked interest in further refinement of the methods in New Zealand. [ 10 ] The QMCI, just like the MCI, was initially designed to evaluate organic enrichment in aquatic ecosystems. The third index, the SQMCI, was created to reduce the sampling and processing effort required for the QMCI. [ 11 ] The SQMCI responds to changes in community dominance in a similar manner to the QMCI but requires fewer samples to achieve the same precision. The SQMCI provides an assessment comparable to the QMCI with less than 40% of the effort in circumstances where macroinvertebrate densities are not required; this reduces costs and improves the practicality of biomonitoring programs. [ 10 ] Both the QMCI and SQMCI are similar to the MCI in that taxa are scored on a scale from 1 (extremely tolerant) to 10 (highly intolerant). However, they differ in that the MCI is calculated using presence-absence data, whereas the QMCI uses quantitative or percentage data. [ 11 ] Having qualitative, quantitative, and semi-quantitative versions of the same index has raised the question of whether this is desirable. All three indices have the same purpose, namely to measure the quality of an aquatic ecosystem, but there are no clear recommendations about when each one is most appropriate. In a study conducted on 88 rivers, Scarsbrook et al. (2000) concluded that the MCI is more useful than the QMCI for recognizing changes in stream water quality over time.
Having three forms of a similar index may lead to differing conclusions and also opens the way for the selective use of one index or another to favour a particular position taken by a practitioner. [ 11 ] In August 2019, the Ministry for the Environment released a draft National Policy Statement for Freshwater Management, together with a report from its Scientific and Technical Advisory Group that recommended including three different measures: the MCI, the QMCI, and the Average Score Per Metric (ASPM). [ 12 ]
QMCI values can be calculated using:
QMCI = Σ (n_i × a_i) / N, summed over the scoring taxa, where n_i is the abundance of taxon i, a_i its tolerance score, and N the total abundance of all scoring taxa
SQMCI values are calculated in the same way as QMCI values, except that coded abundance values are substituted for the actual counts:
SQMCI = Σ (n_i × a_i) / N, with n_i taken as the coded abundance of taxon i
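The abundance-weighted QMCI formula can likewise be sketched in code; as before, the taxa, tolerance scores, and counts are hypothetical placeholders rather than official values.

```python
TOLERANCE = {"Deleatidium": 8, "Chironomus": 1}  # hypothetical tolerance scores

def qmci(counts):
    """QMCI = sum over scoring taxa of (n_i * a_i) / N, where n_i is the
    abundance of taxon i, a_i its tolerance score, and N the total
    abundance of all scoring taxa. For the SQMCI, coded abundance
    categories would be passed in place of the raw counts."""
    total = sum(counts.values())
    return sum(n * TOLERANCE[t] for t, n in counts.items()) / total

# A sample dominated by a sensitive taxon scores near that taxon's value
print(qmci({"Deleatidium": 90, "Chironomus": 10}))  # (90*8 + 10*1) / 100 = 7.3
```

Unlike the presence-absence MCI, the QMCI weights each taxon's tolerance score by its abundance, so a few stray sensitive individuals cannot mask a community numerically dominated by tolerant taxa.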
Factors Influencing MCI
There are several factors which can affect the data acquired for the MCI when assessing the water quality of an aquatic ecosystem. Hard-bottom and soft-bottom channels can often yield different results, and many researchers use two different versions of the MCI. For example, Stark & Mallard (2007) discuss that hard- and soft-bottom channels have separate versions of the MCI, and the two versions cannot be combined into one data set because of differences in taxa and tolerance values. [ 8 ]
Spatial variability also affects the data acquired through the MCI. Sites progressively further downstream often yield lower MCI values. There may also be confounding influences between riffles, runs, or pools within a single stream reach. [ 13 ]
Depth and velocity have also been raised as concerns with regard to affecting results; however, Stark (1993) investigated the influences of sampling method, water depth, current velocity, and substratum and found that both the MCI and QMCI are independent of depth, velocity, and substratum for macroinvertebrate samples collected from stony riffles. [ 10 ] This finding is an advantage for the assessment of water pollution.
Several studies have examined seasonal variability, which is considered the main factor influencing the assessment of water quality. It has been concluded that all models should be tested against data collected in the same season as the reference data being used. [ 13 ]
Several other factors, such as water temperature, invertebrate life histories, and dissolved oxygen levels, have been cited as causes of seasonal variability. Warmer seasons have biotic indices that are indicative of poorer stream health. [ 13 ] Warmer seasons such as summer bring increased water temperatures and therefore decreased dissolved oxygen in the water, making the environment less suitable for aquatic macroinvertebrates. In turn, this affects the density of macroinvertebrate populations and changes the results of the indices. | https://en.wikipedia.org/wiki/Macroinvertebrate_Community_Index |
Macromolecular Bioscience is a monthly peer-reviewed scientific journal covering polymer science . [ 1 ] It publishes Reviews, Feature Articles, Communications, and Full Papers at the intersection of polymer and materials sciences with life science and medicine . The editorial office is in Weinheim , Germany. The editor-in-chief is Anne Pfisterer. [ 2 ] According to the Journal Citation Reports , the journal has a 2020 impact factor of 4.979. [ 3 ]
| https://en.wikipedia.org/wiki/Macromolecular_Bioscience |
Macromolecular Chemistry and Physics is a biweekly peer-reviewed scientific journal covering polymer science . It publishes full papers, talents, trends, and highlights in all areas of polymer science , from chemistry to physical chemistry , physics , and materials science .
Macromolecular Chemistry and Physics was established in 1947 as Die Makromolekulare Chemie/Macromolecular Chemistry by Hermann Staudinger [ 1 ] and obtained its current title in 1994. [ 2 ] According to the Journal Citation Reports , the journal has a 2021 impact factor of 2.996. [ 3 ]
| https://en.wikipedia.org/wiki/Macromolecular_Chemistry_and_Physics |
The Macromolecular Crystallographic Information File ( mmCIF ), also known as PDBx/mmCIF , is a standard text file format for representing macromolecular structure data, developed by the International Union of Crystallography (IUCr) and the Protein Data Bank . [ 1 ] It is an extension of the Crystallographic Information File (CIF) for macromolecular data, such as proteins and nucleic acids, incorporating elements from the PDB file format .
mmCIF is intended as an alternative to the Protein Data Bank (PDB) format and is now the default format used by the Protein Data Bank . [ 2 ]
mmCIF was designed to address limitations of the PDB format in terms of capacity and flexibility, especially with the increasing size and complexity of macromolecular structures being determined.
The format is part of the larger Crystallographic Information Framework , a system of exchange protocols based on data dictionaries and relational rules expressible in different machine-readable manifestations, including, but not restricted to, the original Crystallographic Information File and XML .
An example of the mmCIF file format in key-value style is: [ 2 ]
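In the key-value style, each line pairs a dictionary-defined data name (such as _entry.id) with a single value. A minimal Python sketch, using illustrative data names and values rather than a real PDB entry, shows how such pairs can be read:

```python
# Minimal reader for mmCIF key-value (non-loop) lines. The data block
# below uses illustrative values, not a real PDB entry, and the parser
# ignores loop_ tables and multi-line values for simplicity.
sample = """\
data_example
_entry.id        EXAMPLE
_cell.length_a   52.000
_cell.length_b   58.000
_exptl.method    'X-RAY DIFFRACTION'
"""

def parse_key_values(text):
    """Collect simple `_category.item value` pairs into a dict."""
    pairs = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("_"):
            key, _, value = line.partition(" ")
            pairs[key] = value.strip().strip("'")
    return pairs

fields = parse_key_values(sample)
print(fields["_entry.id"])       # EXAMPLE
print(fields["_cell.length_a"])  # 52.000
```

Real mmCIF files also contain loop_ tables (e.g. for atom coordinates), which a full parser must handle; this sketch covers only the simple key-value case described above.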
| https://en.wikipedia.org/wiki/Macromolecular_Crystallographic_Information_File |
Macromolecular Materials and Engineering is a monthly peer-reviewed scientific journal covering polymer science . [ 1 ] It publishes Reviews, Feature Articles, Communications, and Full Papers on design, modification, characterization, and processing of advanced polymeric materials. Published topics include materials research on engineering polymers, tailor-made functional polymer systems, and new polymer additives . The editor-in-chief is David Huesmann. [ 2 ]
According to the Journal Citation Reports , the journal has a 2020 impact factor of 4.367. [ 3 ] | https://en.wikipedia.org/wiki/Macromolecular_Materials_and_Engineering |
Macromolecular Rapid Communications is a biweekly peer-reviewed scientific journal covering polymer science . It publishes Communications, Feature Articles and Reviews on general polymer science , from chemistry and physics of polymers to polymers in materials science and life sciences . [ 1 ]
The journal was founded in 1979 as a supplement to the first journal in the field of polymer science, the Journal für Makromolekulare Chemie ( Journal for Macromolecular Chemistry ) as a forum for the rapid publication of the newest and most exciting developments in the field of polymer science. According to the Journal Citation Reports , the journal has a 2020 impact factor of 5.734. [ 2 ] The editorial office is in Weinheim , Germany. | https://en.wikipedia.org/wiki/Macromolecular_Rapid_Communications |
Macromolecular Reaction Engineering is a peer-reviewed scientific journal published monthly by Wiley-VCH . The journal covers academic and industrial research in the field of polymer reaction engineering, which includes polymer science . It emerged from a section that was part of Macromolecular Materials and Engineering . The journal publishes reviews, feature articles, communications, and full papers in the entire field of polymer reaction engineering, including polymer reaction modeling, reactor optimization, and control. Its 2020 impact factor is 1.931.
The journal also produces special issues. The 2009 and 2010 topics included "New Frontiers in Polymer Engineering" and "Controlled Radical Polymerization". [ 1 ] [ 2 ]
Macromolecular Reaction Engineering is intended for polymer scientists, chemists , physicists , materials scientists , theoreticians , and chemical engineers . The journal covers recent and significant results of academic and industrial research in the field, encompassing all related topics, including polymer reaction modeling, reactor optimization and control, polyolefins , polymer production , sensors , process control , macromolecular materials , polymeric materials , and polymer engineering . [ 3 ]
The journal is abstracted and indexed in Chemical Abstracts Service , Chemistry Citation Index , Compendex , Current Contents /Engineering, Computing & Technology, Current Contents/Physical, Chemical & Earth Sciences, Inspec , Journal Citation Reports /Science Edition, Materials Science Citation Index , and the Science Citation Index Expanded . [ 3 ]
Official website | https://en.wikipedia.org/wiki/Macromolecular_Reaction_Engineering |
Macromolecular Theory and Simulations is a peer-reviewed scientific journal covering polymer science . It publishes Reviews, Feature Articles, Communications, and Full Papers on all aspects from macromolecular theory to advanced computer simulation. According to the Journal Citation Reports , the journal has a 2020 impact factor of 1.530. [ 1 ]
| https://en.wikipedia.org/wiki/Macromolecular_Theory_and_Simulations |
In molecular biology , the term macromolecular assembly ( MA ) refers to massive chemical structures such as viruses and non-biologic nanoparticles , cellular organelles and membranes and ribosomes , etc. that are complex mixtures of polypeptide , polynucleotide , polysaccharide or other polymeric macromolecules . They are generally of more than one of these types, and the mixtures are defined spatially (i.e., with regard to their chemical shape), and with regard to their underlying chemical composition and structure . Macromolecules are found in living and nonliving things, and are composed of many hundreds or thousands of atoms held together by covalent bonds ; they are often characterized by repeating units (i.e., they are polymers ). Assemblies of these can likewise be biologic or non-biologic, though the MA term is more commonly applied in biology, and the term supramolecular assembly is more often applied in non-biologic contexts (e.g., in supramolecular chemistry and nanotechnology ). MAs of macromolecules are held in their defined forms by non-covalent intermolecular interactions (rather than covalent bonds ), and can be in either non-repeating structures (e.g., as in the ribosome (image) and cell membrane architectures), or in repeating linear, circular, spiral, or other patterns (e.g., as in actin filaments and the flagellar motor , image). The process by which MAs are formed has been termed molecular self-assembly , a term especially applied in non-biologic contexts. A wide variety of physical/biophysical, chemical/biochemical, and computational methods exist for the study of MA; given the scale (molecular dimensions) of MAs, efforts to elaborate their composition and structure and discern mechanisms underlying their functions are at the forefront of modern structure science.
A biomolecular complex , also called a biomacromolecular complex , is any biological complex made of more than one biopolymer ( protein , RNA , DNA , [ 5 ] carbohydrate ) or large non-polymeric biomolecules ( lipid ). The interactions between these biomolecules are non-covalent. [ 6 ] Examples:
The biomacromolecular complexes are studied structurally by X-ray crystallography , NMR spectroscopy of proteins , cryo-electron microscopy and successive single particle analysis , and electron tomography . [ 9 ] The atomic structure models obtained by X-ray crystallography and biomolecular NMR spectroscopy can be docked into the much larger structures of biomolecular complexes obtained by lower resolution techniques like electron microscopy, electron tomography, and small-angle X-ray scattering . [ 10 ]
Complexes of macromolecules occur ubiquitously in nature, where they are involved in the construction of viruses and all living cells. In addition, they play fundamental roles in all basic life processes ( protein translation , cell division , vesicle trafficking , intra- and inter-cellular exchange of material between compartments, etc.). In each of these roles, complex mixtures of macromolecules become organized in specific structural and spatial ways. While the individual macromolecules are held together by a combination of covalent bonds and intramolecular non-covalent forces (i.e., associations between parts within each molecule, via charge-charge interactions , van der Waals forces , and dipole–dipole interactions such as hydrogen bonds ), by definition MAs themselves are held together solely via the noncovalent forces, except now exerted between molecules (i.e., intermolecular interactions ). [ citation needed ]
The images above give an indication of the compositions and scale (dimensions) associated with MAs, though these just begin to touch on the complexity of the structures; in principle, each living cell is composed of MAs, but is itself an MA as well. In the examples and other such complexes and assemblies, MAs are each often millions of daltons in molecular weight (megadaltons, i.e., millions of times the weight of a single, simple atom), though still having measurable component ratios ( stoichiometries ) at some level of precision. As alluded to in the image legends, when properly prepared, MAs or component subcomplexes of MAs can often be crystallized for study by protein crystallography and related methods, or studied by other physical methods (e.g., spectroscopy , microscopy ). [ citation needed ]
Virus structures were among the first studied MAs; other biologic examples include ribosomes (partial image above), proteasomes, and translation complexes (with protein and nucleic acid components), procaryotic and eukaryotic transcription complexes, and nuclear and other biological pores that allow material passage between cells and cellular compartments. Biomembranes are also generally considered MAs, though the requirement for structural and spatial definition is modified to accommodate the inherent molecular dynamics of membrane lipids , and of proteins within lipid bilayers . [ 15 ]
During assembly of the bacteriophage (phage) T4 virion , the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis . [ 16 ] Phage T4 encoded proteins that determine virion structure include major structural components, minor structural components, and non-structural proteins that catalyze specific steps in the morphogenesis sequence. [ 17 ]
The study of MA structure and function is challenging, in particular because of their megadalton size, but also because of their complex compositions and varying dynamic natures. Most have had standard chemical and biochemical methods applied (methods of protein purification and centrifugation , chemical and electrochemical characterization, etc.). In addition, their methods of study include modern proteomic approaches, computational and atomic-resolution structural methods (e.g., X-ray crystallography ), small-angle X-ray scattering (SAXS) and small-angle neutron scattering (SANS), force spectroscopy, and transmission electron microscopy and cryo-electron microscopy . Aaron Klug was recognized with the 1982 Nobel Prize in Chemistry for his work on structural elucidation using electron microscopy, in particular for protein-nucleic acid MAs including the tobacco mosaic virus (a structure containing a 6400 base ssRNA molecule and >2000 coat protein molecules). The crystallization and structure solution for the ribosome, MW ~ 2.5 MDa, an example of part of the protein synthetic 'machinery' of living cells, was the object of the 2009 Nobel Prize in Chemistry awarded to Venkatraman Ramakrishnan , Thomas A. Steitz , and Ada E. Yonath . [ 18 ]
Finally, biology is not the sole domain of MAs. The fields of supramolecular chemistry and nanotechnology each have areas that have developed to elaborate and extend the principles first demonstrated in biologic MAs. Of particular interest in these areas has been elaborating the fundamental processes of molecular machines , and extending known machine designs to new types and processes. [ citation needed ] | https://en.wikipedia.org/wiki/Macromolecular_assembly |
In host–guest chemistry , macromolecular cages are a type of macromolecule structurally consisting of a three-dimensional chamber surrounded by a molecular framework. Macromolecular cages can be considered large-sized organic molecular cages . Macromolecular cage architectures come in various sizes ranging from 1-50 nm and have varying topologies as well as functions. [ 1 ] They can be synthesized through covalent bonding or self-assembly through non-covalent interactions . Most macromolecular cages that are formed through self-assembly are sensitive to pH , temperature, and solvent polarity. [ 1 ]
Metal Organic Polyhedra (MOPs) comprise a specific type of self-assembled macromolecular cage that is formed through unique coordination and is typically chemically and thermally stable. [ 1 ] MOPs have cage-like frameworks with an enclosed cavity. The discrete self-assembly of metal ions and organic scaffolds to form MOPs with highly symmetrical architectures is a modular process with various applications. The self-assembly of various subunits that results in high symmetry is a common occurrence in biological systems. Specific examples of this are ferritin , capsid , and the tobacco mosaic virus , which are formed by the self-assembly of protein subunits into a polyhedral symmetry. Nonbiological polyhedra formed with metal ions and organic linkers are metal-based macromolecular cages that have nanocavities with multiple openings or pores that allow small molecules to permeate and pass through. [ 1 ] MOPs have been used to encapsulate a number of guests through various host-guest interactions (e.g. electrostatic interactions, hydrogen bonding, and steric interactions). [ 1 ] MOPs are biomimetic materials that have potential for biomedical and biochemical applications. In order for a cage to work effectively and have biomedical relevance, it must be chemically stable and biocompatible, and must operate mechanistically in aqueous media. Macromolecular cages in general can be used for a variety of applications (e.g. nanoencapsulation, biosensing , drug delivery , regulation of nanoparticle synthesis, and catalysis ). [ 1 ] [ 2 ]
There are also a class of macromolecular cages that are synthetically formed through covalent bonding as opposed to self-assembly. Through the covalent-bond-forming strategy the cage molecules can be synthesized methodically with customizable functionality and regulated cavity size. Cage-shaped polymers are macromolecular analogues of molecular cages such as cryptand . [ 2 ] A cage molecule of this type can be tuned by the degree of polymerization . The polymers that are typically used to make the polymer based macromolecular cages are made with star shaped polymers or nonlinear polymer precursors. [ 3 ] [ 2 ] [ 4 ] The molecular size of the polymeric macromolecular cage is controlled by the molecular weight of the star-shaped polymer or branched polymer . The macromolecular cages made from non-linear polymers are designed to have molecular recognition, respond to external stimuli and self-assemble into higher order structures. [ 3 ]
Fullerenes are a class of carbon allotropes that were first discovered in 1985 and are also an example of macromolecular cages. In Buckminsterfullerene (C 60 ), the 60 carbon atoms are arranged in a cage-like structure whose framework resembles a soccer ball; the molecule has icosahedral symmetry. [ citation needed ] C 60 has versatile applications due to its macromolecular cage structure; for example, it can be used for water purification, catalysis, bio-pharmaceuticals, serving as a carrier of radionuclides for MRI , and drug delivery. [ 5 ]
There are many examples of highly symmetrical macromolecular cage motifs known as protein cages in biological systems. The term protein cage delineates a diverse range of protein structures that are formed by the self-assembly of protein subunits into hollow macromolecular nanoparticles. [ 6 ] These protein cages are nanoparticles that have one or more cavities present in their structure. The size of the cavity contributes to the size of the particle that the cavity can enclose, for example inorganic nanoparticles, nucleic acids, and even other proteins. [ 6 ] The interior or chamber portion of the protein cage is usually accessible through a pore which is located in between protein subunits. [ 6 ] [ 7 ] The RNA exosome has nuclease active sites that are present in a cavity where 3' RNA degradation takes place; access to this cavity is controlled by a pore and this serves to prevent uncontrollable RNA decay. [ 7 ] Some protein cages are dynamic structures that assemble and disassemble in response to external stimuli. [ 6 ] Other examples of protein cages are clathrin cages , viral envelopes , chaperonins , and the iron storage protein ferritin . [ 1 ] [ 6 ]
There are various methods used to form polymeric macromolecular cages. One synthetic method uses ring opening and multiple click chemistry in the first step to form trefoil and quatrefoil-shaped polymers, which can then be topologically converted into cages using hydrogenolysis . The initiator in this synthesis is azido and hydroxy functionalized p -xylene and the monomer is butylene oxide . [ 2 ] The ring opening polymerization and simultaneous click cyclizations of butylene oxide with the initiator is catalyzed by t -Bu-P 4 . This synthetic strategy was used to form cage-shaped polybutylene oxides; cage-shaped block copolymers are also formed using a similar method. [ 2 ] One synthetic strategy utilizes atom transfer radical polymerization and click chemistry methods to form figure eight and cage-shaped polystyrene ; in this case the precursor is nonlinear polystyrene. [ 4 ] Another synthetic strategy employs intramolecular ring-opening metathesis oligomerization of a star polymer and this reaction method is catalyzed by diluted Grubb's third generation catalyst. [ 3 ]
Covalent Organic Frameworks (COFs) have also been used to form cage architectures and in one such example Schiff base cyclization was used to form the macromolecular cage molecule. [ 8 ] In this synthesis 1,3,5-triformylbenzene and ( R,R )-(1,2)-diphenylethylenediamine undergo cycloimination in dichloromethane with trifluoroacetic acid as a catalyst to form a COF cage molecule. Macrocyclizations have also been employed to form peptoid based macromolecular cages, the specific methodology utilizes a one pot synthesis to form steroid-aryl hybrid cages using two- and three-fold Ugi type macrocyclization reactions. [ 9 ]
Macromolecular cages can also be formed synthetically using biomolecules. Protein cages can be genetically engineered, and the outside of the cage can be tailored with synthetic polymers, which is known as protein-polymer conjugation. [ 6 ] Preformed polymer chains can be attached to the surface of the protein using chemical linkers. Polymerization can also occur from the protein surface, and the polymer can also be bound to the surface of protein cages via electrostatic interactions. [ 6 ] The purpose of this modification is to make synthetic protein cages more biocompatible ; this post synthetic modification makes the protein cage less susceptible to an immune response and stabilizes the cage from degradation from proteases . [ 6 ] Virus-like protein (VLP) cages have also been synthesized and recombinant DNA technology is used to form non-native virus-like proteins. The first reported case of the formation of non-native VLP constructs into a capsid-like structure utilized a functionalized gold core for nucleation. [ 10 ] The self-assembly of the VLP was initiated by the electrostatic interaction of the functionalized gold nanoparticles which is similar to the interaction of a native virus with its nucleic acid component. These viral protein cages have potential applications in biosensing and medical imaging. [ 10 ] DNA origami is another strategy to form macromolecular cages or containers. In one case, a 3D macromolecular cage with icosahedral symmetry (resembling viral capsids ) was formed based on the synthetic strategy in 2D origami. [ 11 ] The structure had an inside volume or hollow cavity encased by triangular faces, similar to a pyramid. This close-faced cage was designed to potentially encapsulate other materials such as proteins and metal nanoparticles . [ 11 ] | https://en.wikipedia.org/wiki/Macromolecular_cages |
The phenomenon of macromolecular crowding alters the properties of molecules in a solution when high concentrations of macromolecules such as proteins are present. [ 2 ] Such conditions occur routinely in living cells ; for instance, the cytosol of Escherichia coli contains about 300–400 mg / ml of macromolecules. [ 3 ] Crowding occurs since these high concentrations of macromolecules reduce the volume of solvent available for other molecules in the solution, which has the result of increasing their effective concentrations. Crowding can promote formation of a biomolecular condensate by colloidal phase separation.
This crowding effect can make molecules in cells behave in radically different ways than in test-tube assays. [ 4 ] Consequently, measurements of the properties of enzymes or processes in metabolism that are made in the laboratory ( in vitro ) in dilute solutions may be different by many orders of magnitude from the true values seen in living cells ( in vivo ). The study of biochemical processes under realistically crowded conditions is very important, since these conditions are a ubiquitous property of all cells and crowding may be essential for the efficient operation of metabolism. Indeed, in vitro studies have shown that crowding greatly influences binding stability of proteins to DNA. [ 5 ]
The interior of cells is a crowded environment. For example, an Escherichia coli cell is only about 2 micrometres (μm) long and 0.5 μm in diameter, with a cell volume of 0.6 - 0.7 μm 3 . [ 6 ] However, E. coli can contain up to 4,288 different types of proteins, [ 7 ] and about 1,000 of these types are produced at a high enough level to be easily detected. [ 8 ] Added to this mix are various forms of RNA and the cell's DNA chromosome, giving a total concentration of macromolecules of between 300 and 400 mg/ml. [ 3 ] In eukaryotes the cell's interior is further crowded by the protein filaments that make up the cytoskeleton , this meshwork divides the cytosol into a network of narrow pores. [ 9 ]
These high concentrations of macromolecules occupy a large proportion of the volume of the cell, which reduces the volume of solvent that is available for other macromolecules. This excluded volume effect increases the effective concentration of macromolecules (increasing their chemical activity ), which in turn alters the rates and equilibrium constants of their reactions. [ 10 ] In particular this effect alters dissociation constants by favoring the association of macromolecules, such as when multiple proteins come together to form protein complexes , or when DNA-binding proteins bind to their targets in the genome . [ 11 ] Crowding may also affect enzyme reactions involving small molecules if the reaction involves a large change in the shape of the enzyme. [ 10 ]
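As a rough illustration of this excluded volume effect, suppose (as an idealization, not a quantitative model from the literature) that a dilute tracer's effective concentration scales with the inverse of the free-volume fraction:

```python
# Idealized toy model of the excluded-volume effect: a tracer confined
# to the free (non-crowded) volume behaves as if more concentrated.
# The 1/(1 - phi) scaling is an illustrative assumption, not an exact law.
def effective_concentration(c_nominal, excluded_fraction):
    """Effective concentration when crowders exclude a volume fraction."""
    return c_nominal / (1.0 - excluded_fraction)

# If macromolecules occupy ~30% of a cell's volume, a tracer's
# effective concentration rises by roughly 43% in this simple model.
print(effective_concentration(1.0, 0.3))  # ~1.43
```

Real crowding effects on chemical activity are stronger and non-linear in crowder size and shape, as discussed below; this sketch only conveys the direction of the effect.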
The size of the crowding effect depends on both the molecular mass and shape of the molecule involved, although mass seems to be the major factor – with the effect being stronger with larger molecules. [ 10 ] Notably, the size of the effect is non-linear, so macromolecules are much more strongly affected than are small molecules such as amino acids or simple sugars . Macromolecular crowding is therefore an effect exerted by large molecules on the properties of other large molecules.
Macromolecular crowding is an important effect in biochemistry and cell biology . For example, the increase in the strength of interactions between proteins and DNA [ 5 ] produced by crowding may be of key importance in processes such as transcription and DNA replication . [ 12 ] [ 13 ] Crowding has also been suggested to be involved in processes as diverse as the aggregation of hemoglobin in sickle-cell disease , and the responses of cells to changes in their volume. [ 4 ]
The importance of crowding in protein folding is of particular interest in biophysics . Here, the crowding effect can accelerate the folding process, since a compact folded protein will occupy less volume than an unfolded protein chain. [ 14 ] However, crowding can reduce the yield of correctly folded protein by increasing protein aggregation . [ 15 ] [ 16 ] Crowding may also increase the effectiveness of chaperone proteins such as GroEL in the cell, [ 17 ] which could counteract this reduction in folding efficiency. [ 18 ] It has also been shown that macromolecular crowding affects protein-folding dynamics as well as overall protein shape where distinct conformational changes are accompanied by secondary structure alterations implying that crowding-induced shape changes may be important for protein function and malfunction in vivo. [ 19 ]
A particularly striking example of the importance of crowding effects involves the crystallins that fill the interior of the lens . These proteins have to remain stable and in solution for the lens to be transparent; precipitation or aggregation of crystallins causes cataracts . [ 20 ] Crystallins are present in the lens at extremely high concentrations, over 500 mg/ml, and at these levels crowding effects are very strong. The large crowding effect adds to the thermal stability of the crystallins, increasing their resistance to denaturation . [ 21 ] This effect may partly explain the extraordinary resistance shown by the lens to damage caused by high temperatures. [ 22 ]
Crowding may also play a role in diseases that involve protein aggregation, such as sickle cell anemia , where mutant hemoglobin forms aggregates, and Alzheimer's disease , where tau protein forms neurofibrillary tangles under crowded conditions within neurons. [ 4 ] [ 23 ]
Due to macromolecular crowding, enzyme assays and biophysical measurements performed in dilute solution may fail to reflect the actual process and its kinetics taking place in the cytosol. [ 24 ] One approach to produce more accurate measurements would be to use highly concentrated extracts of cells, to try to maintain the cell contents in a more natural state. However, such extracts contain many kinds of biologically active molecules, which can interfere with the phenomena being studied. [ 2 ] Consequently, crowding effects are mimicked in vitro by adding high concentrations of relatively inert molecules such as polyethylene glycol , ficoll , dextran , or serum albumin to experimental media. [ 5 ] [ 25 ] However, using such artificial crowding agents can be complicated, as these crowding molecules can sometimes interact in other ways with the process being examined, such as by binding weakly to one of the components. [ 2 ]
Much of the importance of macromolecular crowding to biological systems stems from its effect on protein folding . The underlying physical mechanism by which macromolecular crowding helps to stabilize proteins in their folded state is often explained in terms of excluded volume : the volume inaccessible to the proteins due to their interaction with macromolecular crowders. [ 26 ] [ 27 ] This notion goes back to Asakura and Oosawa, who described depletion forces induced by steric, hard-core interactions. [ 28 ] [ 29 ] A hallmark of this mechanism is that the effect is completely athermal, and thus completely entropic. These ideas were also proposed to explain why small cosolutes, namely protective osmolytes , which are preferentially excluded from proteins, also shift the protein folding equilibrium towards the folded state. [ 30 ] However, it has been shown by various methods, both experimental [ 31 ] [ 32 ] [ 33 ] and theoretical, [ 34 ] [ 35 ] [ 36 ] that depletion forces are not always entropic in nature. | https://en.wikipedia.org/wiki/Macromolecular_crowding |
Macromolecular docking is the computational modelling of the quaternary structure of complexes formed by two or more interacting biological macromolecules . Protein –protein complexes are the most commonly attempted targets of such modelling, followed by protein– nucleic acid complexes. [ 1 ]
The ultimate goal of docking is the prediction of the three-dimensional structure of the macromolecular complex of interest as it would occur in a living organism. Docking itself only produces plausible candidate structures. These candidates must be ranked using methods such as scoring functions to identify structures that are most likely to occur in nature.
The term "docking" originated in the late 1970s, with a more restricted meaning; then, "docking" meant refining a model of a complex structure by optimizing the separation between the interactors but keeping their relative orientations fixed. Later, the relative orientations of the interacting partners in the modelling were allowed to vary, but the internal geometry of each of the partners was held fixed. This type of modelling is sometimes referred to as "rigid docking". With further increases in computational power, it became possible to model changes in internal geometry of the interacting partners that may occur when a complex is formed. This type of modelling is referred to as "flexible docking".
The biological roles of most proteins, as characterized by which other macromolecules they interact with , are known at best incompletely. Even those proteins that participate in a well-studied biological process (e.g., the Krebs cycle ) may have unexpected interaction partners or functions which are unrelated to that process.
In cases of known protein–protein interactions, other questions arise. Genetic diseases (e.g., cystic fibrosis ) are known to be caused by misfolded or mutated proteins, and there is a desire to understand what, if any, anomalous protein–protein interactions a given mutation can cause. In the distant future, proteins may be designed to perform biological functions, and a determination of the potential interactions of such proteins will be essential.
For any given set of proteins, the following questions may be of interest, from the point of view of technology or natural history:
If they do bind,
If they do not bind,
Protein–protein docking is ultimately envisaged to address all these issues. Furthermore, since docking methods can be based on purely physical principles, even proteins of unknown function (or which have been studied relatively little) may be docked. The only prerequisite is that their molecular structure has been either determined experimentally, or can be estimated by a protein structure prediction technique.
Protein–nucleic acid interactions feature prominently in the living cell. Transcription factors , which regulate gene expression , and polymerases , which catalyse replication , are composed of proteins, and the genetic material they interact with is composed of nucleic acids. Modeling protein–nucleic acid complexes presents some unique challenges, as described below.
In the 1970s, complex modelling revolved around manually identifying features on the surfaces of the interactors, and interpreting the consequences for binding, function and activity; any computer programmes were typically used at the end of the modelling process, to discriminate between the relatively few configurations which remained after all the heuristic constraints had been imposed. The first use of computers was in a study on hemoglobin interaction in sickle-cell fibres. [ 2 ] This was followed in 1978 by work on the trypsin - BPTI complex. [ 3 ] Computers discriminated between good and bad models using a scoring function which rewarded large interface area, and pairs of molecules in contact but not occupying the same space. The computer used a simplified representation of the interacting proteins, with one interaction centre for each residue. Favorable electrostatic interactions, including hydrogen bonds , were identified by hand. [ 4 ]
In the early 1990s, more structures of complexes were determined, and available computational power had increased substantially. With the emergence of bioinformatics , the focus moved towards developing generalized techniques which could be applied to an arbitrary set of complexes at acceptable computational cost. The new methods were envisaged to apply even in the absence of phylogenetic or experimental clues; any specific prior knowledge could still be introduced at the stage of choosing between the highest ranking output models, or be framed as input if the algorithm catered for it.
1992 saw the publication of the correlation method, [ 5 ] an algorithm which used the fast Fourier transform to give a vastly improved scalability for evaluating coarse shape complementarity on rigid-body models. This was extended in 1997 to cover coarse electrostatics. [ 6 ]
In 1996 the results of the first blind trial were published, [ 7 ] in which six research groups attempted to predict the complexed structure of TEM-1 Beta-lactamase with Beta-lactamase inhibitor protein (BLIP). The exercise brought into focus the necessity of accommodating conformational change and the difficulty of discriminating between conformers. It also served as the prototype for the CAPRI assessment series, which debuted in 2001.
If the bond angles, bond lengths and torsion angles of the components are not modified at any stage of complex generation, it is known as rigid body docking . A subject of speculation is whether or not rigid-body docking is sufficiently good for most docking. When substantial conformational change occurs within the components at the time of complex formation, rigid-body docking is inadequate. However, scoring all possible conformational changes is prohibitively expensive in computer time. Docking procedures which permit conformational change, or flexible docking procedures, must intelligently select a small subset of possible conformational changes for consideration.
Successful docking requires two criteria:
For many interactions, the binding site is known on one or more of the proteins to be docked. This is the case for antibodies and for competitive inhibitors . In other cases, a binding site may be strongly suggested by mutagenic or phylogenetic evidence. Configurations where the proteins interpenetrate severely may also be ruled out a priori .
After making exclusions based on prior knowledge or stereochemical clash, the remaining space of possible complexed structures must be sampled exhaustively, evenly and with a sufficient coverage to guarantee a near hit. Each configuration must be scored with a measure that is capable of ranking a nearly correct structure above at least 100,000 alternatives. This is a computationally intensive task, and a variety of strategies have been developed.
Each of the proteins may be represented as a simple cubic lattice. Then, for the class of scores which are discrete convolutions , configurations related to each other by translation of one protein by an exact lattice vector can all be scored almost simultaneously by applying the convolution theorem . [ 5 ] It is possible to construct reasonable, if approximate, convolution-like scoring functions representing both stereochemical and electrostatic fitness.
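As an illustration of the convolution-theorem trick, the sketch below scores every lattice translation of one grid against another with a single product in Fourier space. The grid sizes and contents are toy assumptions, not the parameterization of any published correlation method.

```python
import numpy as np

def correlation_scores(receptor_grid, ligand_grid):
    """Score all lattice translations of the ligand at once.

    Each grid is a 3-D array (e.g. 1 inside the molecular surface,
    0 in solvent).  By the convolution theorem, the cross-correlation
    over every translation is one product in Fourier space:
        c(t) = sum_x R(x) * L(x + t)
    """
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    return np.real(np.fft.ifftn(np.conj(R) * L))

# toy example: a 6^3 "receptor" region and a 4^3 "ligand" block
N = 32
receptor = np.zeros((N, N, N))
receptor[10:16, 10:16, 10:16] = 1.0
ligand = np.zeros((N, N, N))
ligand[0:4, 0:4, 0:4] = 1.0

scores = correlation_scores(receptor, ligand)
# the best translation places the ligand block fully inside the
# receptor region, giving an overlap score of 4*4*4 = 64
best = np.unravel_index(np.argmax(scores), scores.shape)
```

A real stereochemical score would also assign a penalty value to core voxels so that interpenetrating configurations score badly, but the FFT machinery is unchanged.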
Reciprocal space methods have been used extensively for their ability to evaluate enormous numbers of configurations. They lose their speed advantage if torsional changes are introduced. Another drawback is that it is impossible to make efficient use of prior knowledge. The question also remains whether convolutions are too limited a class of scoring function to identify the best complex reliably.
In Monte Carlo , an initial configuration is refined by taking random steps which are accepted or rejected based on their induced improvement in score (see the Metropolis criterion ), until a certain number of steps have been tried. The assumption is that convergence to the best structure should occur from a large class of initial configurations, only one of which needs to be considered. Initial configurations may be sampled coarsely, and much computation time can be saved. Because of the difficulty of finding a scoring function which is both highly discriminating for the correct configuration and also converges to the correct configuration from a distance, the use of two levels of refinement, with different scoring functions, has been proposed. [ 8 ] Torsion can be introduced naturally to Monte Carlo as an additional property of each random move.
Monte Carlo methods are not guaranteed to search exhaustively, so that the best configuration may be missed even using a scoring function which would in theory identify it. How severe a problem this is for docking has not been firmly established.
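The Metropolis scheme described above can be sketched on a toy one-dimensional "docking" problem. The score function and the move set here are placeholder assumptions, not a real docking energy.

```python
import math
import random

def metropolis_refine(config, score, random_move, steps=1000, temperature=1.0):
    """Refine 'config' by random moves accepted by the Metropolis criterion.

    Higher score = better: improving moves are always accepted, and
    worsening moves are accepted with probability exp(delta / T).
    The best configuration ever visited is tracked and returned.
    """
    current, s_cur = config, score(config)
    best, s_best = current, s_cur
    for _ in range(steps):
        candidate = random_move(current)
        s_new = score(candidate)
        delta = s_new - s_cur
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current, s_cur = candidate, s_new
            if s_cur > s_best:
                best, s_best = current, s_cur
    return best, s_best

# toy problem: find the translation x that maximizes the score (optimum at x = 3)
random.seed(0)
score = lambda x: -(x - 3.0) ** 2
move = lambda x: x + random.uniform(-0.5, 0.5)
best, s = metropolis_refine(0.0, score, move, steps=5000, temperature=0.1)
# 'best' should end up near 3.0
```

Torsional flexibility would be introduced by letting `random_move` also perturb internal angles, exactly as the text describes.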
To find a score which forms a consistent basis for selecting the best configuration, studies are carried out on a standard benchmark (see below) of protein–protein interaction cases. Scoring functions are assessed on the rank they assign to the best structure (ideally the best structure should be ranked 1), and on their coverage (the proportion of the benchmark cases for which they achieve an acceptable result).
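The two assessment measures just mentioned, the rank assigned to the best structure and the coverage over a benchmark, can be sketched as follows; the toy data are invented for illustration.

```python
def rank_of_native(scores, native_index):
    """Rank (1 = top) of the native structure when sorted by descending score."""
    native_score = scores[native_index]
    return 1 + sum(1 for s in scores if s > native_score)

def coverage(cases, acceptable_rank=10):
    """Fraction of benchmark cases whose native structure ranks acceptably high."""
    hits = sum(1 for scores, native in cases
               if rank_of_native(scores, native) <= acceptable_rank)
    return hits / len(cases)

# two toy cases: (scores of candidate structures, index of the near-native one)
cases = [
    ([0.9, 0.2, 0.95, 0.1], 2),  # native scores highest -> rank 1
    ([0.9, 0.2, 0.50, 0.1], 2),  # one decoy outscores it -> rank 2
]
frac = coverage(cases, acceptable_rank=1)  # 0.5
```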
Types of scores studied include:
It is usual to create hybrid scores by combining one or more categories above in a weighted sum whose weights are optimized on cases from the benchmark. To avoid bias, the benchmark cases used to optimize the weights must not overlap with the cases used to make the final test of the score.
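A weighted-sum hybrid score with a non-overlapping train/test split, as described above, might look like the following sketch. All numbers are synthetic; a real study would fit weights to benchmark cases rather than to a simulated target.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic component scores (rows = candidate structures, columns = score
# terms, e.g. shape and electrostatic fitness) and a synthetic quality
# target that the hybrid score should track
X = rng.normal(size=(40, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=40)

# fit the weights on one subset, evaluate on a disjoint subset,
# so the final test is not biased by the optimization
train, test = slice(0, 30), slice(30, 40)
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

pred = X[test] @ w
test_error = float(np.mean((pred - y[test]) ** 2))
# 'w' should recover roughly [2.0, -1.0], and test_error should be small
```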
The ultimate goal in protein–protein docking is to select the ideal ranking solution according to a scoring scheme that would also give an insight into the affinity of the complex. Such a development would drive in silico protein engineering , computer-aided drug design and/or high-throughput annotation of which proteins bind or not (annotation of interactome ). Several scoring functions have been proposed for binding affinity / free energy prediction. [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] However, the correlation between experimentally determined binding affinities and the predictions of nine commonly used scoring functions has been found to be nearly orthogonal (R 2 ~ 0). [ 13 ] It was also observed that some components of the scoring algorithms may display better correlation to the experimental binding energies than the full score, suggesting that a significantly better performance might be obtained by combining the appropriate contributions from different scoring algorithms. Experimental methods for the determination of binding affinities are: surface plasmon resonance (SPR), Förster resonance energy transfer , radioligand -based techniques, isothermal titration calorimetry (ITC), microscale thermophoresis (MST) or spectroscopic measurements and other fluorescence techniques. Textual information from scientific articles can provide useful cues for scoring. [ 14 ]
A benchmark of 84 protein–protein interactions with known complexed structures has been developed for testing docking methods. [ 15 ] The set is chosen to cover a wide range of interaction types, and to avoid repeated features, such as the profile of interactors' structural families according to the SCOP database. Benchmark elements are classified into three levels of difficulty (the most difficult containing the largest change in backbone conformation). The protein–protein docking benchmark contains examples of enzyme-inhibitor, antigen-antibody and homomultimeric complexes.
The latest version of the protein–protein docking benchmark consists of 230 complexes. [ 16 ] A protein–DNA docking benchmark consists of 47 test cases. [ 17 ] A protein–RNA docking benchmark was curated as a dataset of 45 non-redundant test cases [ 18 ] with complexes solved by X-ray crystallography only, as well as an extended dataset of 71 test cases that also includes structures derived from homology modelling. [ 19 ] The protein–RNA benchmark has been updated to include more structures solved by X-ray crystallography, and it now consists of 126 test cases. [ 20 ] The benchmarks have a combined dataset of 209 complexes. [ 21 ]
A binding affinity benchmark has been based on the protein–protein docking benchmark. [ 13 ] 81 protein–protein complexes with known experimental affinities are included; these complexes span over 11 orders of magnitude in terms of affinity. Each entry of the benchmark includes several biochemical parameters associated with the experimental data, along with the method used to determine the affinity. This benchmark was used to assess the extent to which scoring functions could also predict affinities of macromolecular complexes.
This benchmark was subsequently peer reviewed and significantly expanded. [ 22 ] The new set is diverse in terms of the biological functions it represents, with complexes that involve G-proteins and receptor extracellular domains, as well as antigen/antibody, enzyme/inhibitor, and enzyme/substrate complexes. It is also diverse in terms of the partners' affinity for each other, with K d ranging between 10 −5 and 10 −14 M. Nine pairs of entries represent closely related complexes that have a similar structure, but a very different affinity, each pair comprising a cognate and a noncognate assembly. Since the unbound structures of the component proteins are available, conformational changes can be assessed. They are significant in most of the complexes, and large movements or disorder-to-order transitions are frequently observed. The set may be used to benchmark biophysical models aiming to relate affinity to structure in protein–protein interactions, taking into account the reactants and the conformational changes that accompany the association reaction, instead of just the final product. [ 22 ]
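To compare affinities spanning so many orders of magnitude, K d values are commonly converted to standard binding free energies via ΔG° = RT ln K d. The constants below are standard; the two example K d values simply bracket the range stated for the benchmark.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # room temperature, K

def delta_g_kj_per_mol(kd_molar):
    """Standard binding free energy from a dissociation constant (mol/L)."""
    return R * T * math.log(kd_molar) / 1000.0

# a weak (1e-5 M) and a very tight (1e-14 M) complex
weak = delta_g_kj_per_mol(1e-5)    # about -28.5 kJ/mol
tight = delta_g_kj_per_mol(1e-14)  # about -79.9 kJ/mol
```

Each factor of ten in K d shifts ΔG° by RT ln 10, about 5.7 kJ/mol at room temperature, which is why an 11-order-of-magnitude affinity range corresponds to a wide spread in binding energies.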
The Critical Assessment of PRediction of Interactions [ 23 ] is an ongoing series of events in which researchers throughout the community try to dock the same proteins, as provided by the assessors. Rounds take place approximately every 6 months. Each round contains between one and six target protein–protein complexes whose structures have been recently determined experimentally. The coordinates are held privately by the assessors, with the cooperation of the structural biologists who determined them. The assessment of submissions is double blind .
CAPRI attracts a high level of participation (37 groups participated worldwide in round seven) and a high level of interest from the biological community in general. Although CAPRI results are of little statistical significance owing to the small number of targets in each round, the role of CAPRI in stimulating discourse is significant. (The CASP assessment is a similar exercise in the field of protein structure prediction).
A macromolecule is a very large molecule important to biological processes , such as a protein or nucleic acid . It is composed of thousands of covalently bonded atoms . Many macromolecules are polymers of smaller molecules called monomers . The most common macromolecules in biochemistry are biopolymers ( nucleic acids , proteins , and carbohydrates ) and large non-polymeric molecules such as lipids , nanogels and macrocycles . [ 1 ] Synthetic fibers and experimental materials such as carbon nanotubes [ 2 ] [ 3 ] are also examples of macromolecules.
A molecule of high relative molecular mass, the structure of which essentially comprises the multiple repetition of units derived, actually or conceptually, from molecules of low relative molecular mass.
1. In many cases, especially for synthetic polymers, a molecule can be regarded as having a high relative molecular mass if the addition or removal of one or a few of the units has a negligible effect on the molecular properties. This statement fails in the case of certain macromolecules for which the properties may be critically dependent on fine details of the molecular structure. 2. If a part or the whole of the molecule fits into this definition, it may be described as either macromolecular or polymeric , or by polymer used adjectivally. [ 4 ]
The term macromolecule ( macro- + molecule ) was coined by Nobel laureate Hermann Staudinger in the 1920s, although his first relevant publication on this field only mentions high molecular compounds (in excess of 1,000 atoms). [ 5 ] At that time the term polymer , as introduced by Berzelius in 1832, had a different meaning from that of today: it simply denoted another form of isomerism, for example between benzene and acetylene, and had little to do with size. [ 6 ]
Usage of the term to describe large molecules varies among the disciplines. For example, while biology refers to macromolecules as the four large molecules comprising living things, in chemistry , the term may refer to aggregates of two or more molecules held together by intermolecular forces rather than covalent bonds but which do not readily dissociate. [ 7 ]
According to the standard IUPAC definition, the term macromolecule as used in polymer science refers only to a single molecule. For example, a single polymeric molecule is appropriately described as a "macromolecule" or "polymer molecule" rather than a "polymer," which suggests a substance composed of macromolecules. [ 8 ]
Because of their size, macromolecules are not conveniently described in terms of stoichiometry alone. The structure of simple macromolecules, such as homopolymers, may be described in terms of the individual monomer subunit and total molecular mass . Complicated biomacromolecules, on the other hand, require multi-faceted structural description such as the hierarchy of structures used to describe proteins . In British English, a macromolecule tends to be called a " high polymer ".
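For a homopolymer described by its repeat unit and total molecular mass, the degree of polymerization is simply their ratio. The sketch below ignores end groups, which matter for short chains; the polyethylene repeat-unit mass is standard.

```python
def degree_of_polymerization(total_mass, repeat_unit_mass):
    """Approximate number of monomer units in a homopolymer chain
    (end groups neglected)."""
    return total_mass / repeat_unit_mass

# polyethylene: the repeat unit -CH2-CH2- has a mass of about 28.05 g/mol
n = degree_of_polymerization(280_500, 28.05)  # ~10,000 repeat units
```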
Macromolecules often have unusual physical properties that do not occur for smaller molecules.
Another common macromolecular property that does not characterize smaller molecules is their relative insolubility in water and similar solvents , instead forming colloids . Many require salts or particular ions to dissolve in water. Similarly, many proteins will denature if the solute concentration of their solution is too high or too low.
High concentrations of macromolecules in a solution can alter the rates and equilibrium constants of the reactions of other macromolecules, through an effect known as macromolecular crowding . [ 9 ] This comes from macromolecules excluding other molecules from a large part of the volume of the solution, thereby increasing the effective concentrations of these molecules.
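The excluded-volume argument can be reduced to a toy calculation: if crowders make part of the solvent volume inaccessible, the same amount of solute occupies less volume, so its effective concentration rises. The numbers below are illustrative only; real crowding effects depend on the sizes and shapes of the molecules involved.

```python
def effective_concentration(n_moles, total_volume_l, excluded_volume_l):
    """Concentration experienced when crowders exclude part of the volume."""
    return n_moles / (total_volume_l - excluded_volume_l)

dilute = 0.001 / 1.0                                # 1 mM in the full litre
crowded = effective_concentration(0.001, 1.0, 0.3)  # ~1.43 mM if 30% is excluded
```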
All living organisms are dependent on three essential biopolymers for their biological functions: DNA , RNA and proteins . [ 10 ] Each of these molecules is required for life since each plays a distinct, indispensable role in the cell . [ 11 ] The simple summary is that DNA makes RNA, and then RNA makes proteins .
DNA, RNA, and proteins all consist of a repeating structure of related building blocks ( nucleotides in the case of DNA and RNA, amino acids in the case of proteins). In general, they are all unbranched polymers, and so can be represented in the form of a string. Indeed, they can be viewed as a string of beads, with each bead representing a single nucleotide or amino acid monomer linked together through covalent chemical bonds into a very long chain.
In most cases, the monomers within the chain have a strong propensity to interact with other amino acids or nucleotides. In DNA and RNA, this can take the form of Watson–Crick base pairs (G–C and A–T or A–U), although many more complicated interactions can and do occur.
Because of the double-stranded nature of DNA, essentially all of the nucleotides take the form of Watson–Crick base pairs between nucleotides on the two complementary strands of the double helix .
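Because of the strict Watson–Crick pairing just described, either DNA strand determines its partner. A minimal sketch (the helper function is hypothetical; the base-pairing rules are standard):

```python
# Watson-Crick pairing rules for DNA
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Complementary strand, reversed so both reads run 5' to 3'."""
    return "".join(PAIR[base] for base in reversed(seq))

rc = reverse_complement("ATGC")  # "GCAT"
```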
In contrast, both RNA and proteins are normally single-stranded. Therefore, they are not constrained by the regular geometry of the DNA double helix, and so fold into complex three-dimensional shapes dependent on their sequence. These different shapes are responsible for many of the common properties of RNA and proteins, including the formation of specific binding pockets , and the ability to catalyse biochemical reactions.
DNA is an information storage macromolecule that encodes the complete set of instructions (the genome ) that are required to assemble, maintain, and reproduce every living organism. [ 12 ]
DNA and RNA are both capable of encoding genetic information, because there are biochemical mechanisms which read the information coded within a DNA or RNA sequence and use it to generate a specified protein. On the other hand, the sequence information of a protein molecule is not used by cells to functionally encode genetic information. [ 1 ] : 5
DNA has three primary attributes that allow it to be far better than RNA at encoding genetic information. First, it is normally double-stranded, so that there are a minimum of two copies of the information encoding each gene in every cell. Second, DNA has a much greater stability against breakdown than does RNA, an attribute primarily associated with the absence of the 2'-hydroxyl group within every nucleotide of DNA. Third, highly sophisticated DNA surveillance and repair systems are present which monitor damage to the DNA and repair the sequence when necessary. Analogous systems have not evolved for repairing damaged RNA molecules. Consequently, chromosomes can contain many billions of atoms, arranged in a specific chemical structure.
Proteins are functional macromolecules responsible for catalysing the biochemical reactions that sustain life. [ 1 ] : 3 Proteins carry out all functions of an organism, for example photosynthesis, neural function, vision, and movement. [ 13 ]
The single-stranded nature of protein molecules, together with their composition of 20 or more different amino acid building blocks, allows them to fold into a vast number of different three-dimensional shapes, while providing binding pockets through which they can specifically interact with all manner of molecules. In addition, the chemical diversity of the different amino acids, together with different chemical environments afforded by local 3D structure, enables many proteins to act as enzymes , catalyzing a wide range of specific biochemical transformations within cells. In addition, proteins have evolved the ability to bind a wide range of cofactors and coenzymes , smaller molecules that can endow the protein with specific activities beyond those associated with the polypeptide chain alone.
RNA is multifunctional; its primary function is to encode proteins , according to the instructions within a cell's DNA. [ 1 ] : 5 RNA molecules also control and regulate many aspects of protein synthesis in eukaryotes .
RNA encodes genetic information that can be translated into the amino acid sequence of proteins, as evidenced by the messenger RNA molecules present within every cell, and the RNA genomes of a large number of viruses. The single-stranded nature of RNA, together with tendency for rapid breakdown and a lack of repair systems means that RNA is not so well suited for the long-term storage of genetic information as is DNA.
In addition, RNA is a single-stranded polymer that can, like proteins, fold into a very large number of three-dimensional structures. Some of these structures provide binding sites for other molecules and chemically active centers that can catalyze specific chemical reactions on those bound molecules. The limited number of different building blocks of RNA (4 nucleotides vs >20 amino acids in proteins), together with their lack of chemical diversity, results in catalytic RNA ( ribozymes ) being generally less-effective catalysts than proteins for most biological reactions.
Carbohydrate macromolecules ( polysaccharides ) are formed from polymers of monosaccharides . [ 1 ] : 11 Because monosaccharides have multiple functional groups , polysaccharides can form linear polymers (e.g. cellulose ) or complex branched structures (e.g. glycogen ). Polysaccharides perform numerous roles in living organisms, acting as energy stores (e.g. starch ) and as structural components (e.g. chitin in arthropods and fungi). Many carbohydrates contain modified monosaccharide units that have had functional groups replaced or removed.
Polyphenols consist of a branched structure of multiple phenolic subunits. They can perform structural roles (e.g. lignin ) as well as roles as secondary metabolites involved in signalling , pigmentation and defense .
Some examples of macromolecules are synthetic polymers ( plastics , synthetic fibers , and synthetic rubber ), graphene , and carbon nanotubes . Polymers may also be prepared from inorganic matter, as in inorganic polymers and geopolymers . The incorporation of inorganic elements enables the tunability of properties and/or responsive behavior, as for instance in smart inorganic polymers .
Macromolecules is a peer-reviewed scientific journal that has been published since 1968 by the American Chemical Society . Initially published bimonthly, it became monthly in 1983 and then, in 1990, biweekly. [ 1 ] Macromolecules is abstracted and indexed in Scopus , EBSCOhost , PubMed , Web of Science , and SwetsWise. The editor-in-chief is Marc A. Hillmyer. [ 2 ] Its first editor was Dr. Field H. Winslow . [ 3 ] [ 4 ]
Macromomycin B is an antibiotic with anticancer activity. [ 1 ]
Macromonomer molecule: A macromolecule that has one end-group which enables it to act as a monomer molecule, contributing only a single monomeric unit to a chain of the final macromolecule. [ 1 ]
In polymer chemistry , a macromonomer (or macromer ) is a macromolecule with one end-group that enables it to act as a reactive monomer and undergo further polymerization . Macromonomers will contribute a single repeat unit to a chain of the completed macromolecule. [ 2 ] [ 3 ] [ 4 ]
Several macromonomers have been successfully synthesized utilizing various methods such as controlled radical polymerization (CRP) [ 5 ] and copper-catalyzed "click" coupling. [ 6 ]
Due to the larger size of macromonomers (as opposed to the size of regular monomers ), synthetic challenges are brought about, giving reason for the analysis of polymerization mechanisms. Recent studies have shown that macromonomer polymerization kinetics and mechanisms can be significantly affected by the topological effect. [ 7 ]
Macromonomers are also used in controlled graft copolymerization . [ 4 ]
In physics , macrons are microscopic (dust-sized) particles, accelerated to high speeds. The term was first used in the late 1960s, when it was believed that macrons could be accelerated cheaply in small particle accelerators as a way of achieving low-cost fusion power . [ 1 ]
Macrophages ( /ˈmækroʊfeɪdʒ/ ; abbreviated M φ , MΦ or MP ) are a type of white blood cell of the innate immune system that engulf and digest pathogens, such as cancer cells , microbes , cellular debris and foreign substances, which do not have proteins that are specific to healthy body cells on their surface. [ 1 ] [ 2 ] This self-protection method can be contrasted with that employed by natural killer cells . This process of engulfment and digestion is called phagocytosis , which acts to defend the host against infection and injury. [ 3 ]
Macrophages are found in essentially all tissues, [ 4 ] where they patrol for potential pathogens by amoeboid movement . They take various forms (with various names) throughout the body (e.g., histiocytes , Kupffer cells , alveolar macrophages , microglia , and others), but all are part of the mononuclear phagocyte system . Besides phagocytosis, they play a critical role in nonspecific defense ( innate immunity ) and also help initiate specific defense mechanisms ( adaptive immunity ) by recruiting other immune cells such as lymphocytes . For example, they are important as antigen presenters to T cells . In humans, dysfunctional macrophages cause severe diseases such as chronic granulomatous disease that result in frequent infections.
Beyond increasing inflammation and stimulating the immune system, macrophages also play an important anti-inflammatory role and can decrease immune reactions through the release of cytokines . Macrophages that encourage inflammation are called M1 macrophages, whereas those that decrease inflammation and encourage tissue repair are called M2 macrophages. [ 5 ] This difference is reflected in their metabolism; M1 macrophages have the unique ability to metabolize arginine to the "killer" molecule nitric oxide , whereas M2 macrophages have the unique ability to metabolize arginine to the "repair" molecule ornithine . [ 6 ] However, this dichotomy has been recently questioned as further complexity has been discovered. [ 7 ] Macrophages are widely thought of as highly plastic and fluid cells, with a fluctuating phenotype.
Human macrophages are about 21 micrometres (0.00083 in) in diameter [ 8 ] and are produced by the differentiation of monocytes in tissues. They can be identified using flow cytometry or immunohistochemical staining by their specific expression of proteins such as CD14 , CD40 , CD11b , CD64 , F4/80 (mice)/ EMR1 (human), lysozyme M, MAC-1 /MAC-3 and CD68 . [ 9 ]
Macrophages were first discovered and named by Élie Metchnikoff , a Russian Empire zoologist, in 1884. [ 10 ] [ 11 ]
A majority of macrophages are stationed at strategic points where microbial invasion or accumulation of foreign particles is likely to occur. These cells together as a group are known as the mononuclear phagocyte system and were previously known as the reticuloendothelial system. Each type of macrophage, determined by its location, has a specific name:
Investigations concerning Kupffer cells are hampered because in humans, Kupffer cells are only accessible for immunohistochemical analysis from biopsies or autopsies. From rats and mice, they are difficult to isolate, and after purification, only approximately 5 million cells can be obtained from one mouse.
Macrophages can express paracrine functions within organs that are specific to the function of that organ. In the testis , for example, macrophages have been shown to be able to interact with Leydig cells by secreting 25-hydroxycholesterol , an oxysterol that can be converted to testosterone by neighbouring Leydig cells. [ 15 ] Also, testicular macrophages may participate in creating an immune privileged environment in the testis, and in mediating infertility during inflammation of the testis.
Cardiac resident macrophages participate in electrical conduction via gap junction communication with cardiac myocytes . [ 16 ]
Macrophages can be classified on the basis of fundamental function and activation. According to this grouping, there are classically activated (M1) macrophages , wound-healing macrophages (also known as alternatively-activated (M2) macrophages ), and regulatory macrophages (Mregs). [ 17 ]
Macrophages that reside in adult healthy tissues either derive from circulating monocytes or are established before birth and then maintained during adult life independently of monocytes. [ 18 ] [ 19 ] By contrast, most of the macrophages that accumulate at diseased sites typically derive from circulating monocytes. [ 20 ] Leukocyte extravasation describes monocyte entry into damaged tissue through the endothelium of blood vessels as they become macrophages. Monocytes are attracted to a damaged site by chemical substances through chemotaxis , triggered by a range of stimuli including damaged cells, pathogens and cytokines released by macrophages already at the site. At some sites such as the testis, macrophages have been shown to populate the organ through proliferation. [ 21 ] Unlike short-lived neutrophils , macrophages survive longer in the body, up to several months.
Macrophages are professional phagocytes and are highly specialized in removal of dying or dead cells and cellular debris. This role is important in chronic inflammation, as the early stages of inflammation are dominated by neutrophils, which expend themselves and are ingested by macrophages. [ 22 ] Macrophages normally present themselves at the wound site within 2 days following the injury.
The neutrophils are at first attracted to a site, where they perform their function and die, before they or their neutrophil extracellular traps are phagocytized by the macrophages. [ 22 ] [ 23 ] The first wave of neutrophils acts for approximately 2 days at the site and signals to attract macrophages. These macrophages will then ingest the aged neutrophils. [ 22 ]
The removal of dying cells is, to a greater extent, handled by fixed macrophages , which will stay at strategic locations such as the lungs, liver, neural tissue , bone, spleen and connective tissue, ingesting foreign materials such as pathogens and recruiting additional macrophages if needed. [ 24 ] The phagocytosis and clearance of apoptotic remains is called efferocytosis and is also carried out by other cell types, not all of which are professional phagocytes.
When a macrophage ingests a pathogen, the pathogen becomes trapped in a phagosome , which then fuses with a lysosome . Within the phagolysosome , enzymes and toxic peroxides digest the pathogen. However, some bacteria (such as Mycobacterium tuberculosis ) have become resistant to these methods of digestion. Typhoidal Salmonellae induce their own phagocytosis by host macrophages in vivo and inhibit digestion by lysosomal action, thereby using macrophages for their own replication and causing macrophage apoptosis. [ 25 ] Macrophages are capable of engulfing and digesting many bacteria during their life. They can die eventually due to factors including pathogenic cytotoxicity, oxidative stress, and phagocytosis-induced apoptosis. [ 26 ] Phagocytosis-induced apoptosis results from the powerful apoptotic stimulus of consuming bacteria and is observed in (at least) macrophages and neutrophils.
When a pathogen invades, tissue resident macrophages are among the first cells to respond. [ 27 ] Two of the main roles of the tissue resident macrophages are to phagocytose incoming antigen and to secrete proinflammatory cytokines that induce inflammation and recruit other immune cells to the site. [ 28 ]
Macrophages can internalize antigens through receptor-mediated phagocytosis. [ 29 ] Macrophages have a wide variety of pattern recognition receptors (PRRs) that can recognize microbe-associated molecular patterns (MAMPs) from pathogens. Many PRRs, such as toll-like receptors (TLRs), scavenger receptors (SRs), C-type lectin receptors, among others, recognize pathogens for phagocytosis. [ 29 ] Macrophages can also recognize pathogens for phagocytosis indirectly through opsonins , which are molecules that attach to pathogens and mark them for phagocytosis. [ 30 ] Opsonins can cause a stronger adhesion between the macrophage and pathogen during phagocytosis, hence opsonins tend to enhance macrophages’ phagocytic activity. [ 31 ] Both complement proteins and antibodies can bind to antigens and opsonize them. Macrophages have complement receptor 1 (CR1) and 3 (CR3) that recognize pathogen-bound complement proteins C3b and iC3b, respectively, as well as fragment crystallizable γ receptors (FcγRs) that recognize the fragment crystallizable (Fc) region of antigen-bound immunoglobulin G (IgG) antibodies. [ 30 ] [ 32 ] When phagocytosing and digesting pathogens, macrophages go through a respiratory burst where more oxygen is consumed to supply the energy required for producing reactive oxygen species (ROS) and other antimicrobial molecules that digest the consumed pathogens. [ 28 ] [ 33 ]
Recognition of MAMPs by PRRs can activate tissue resident macrophages to secrete proinflammatory cytokines that recruit other immune cells. Among the PRRs, TLRs play a major role in signal transduction leading to cytokine production. [ 29 ] The binding of MAMPs to TLR triggers a series of downstream events that eventually activates transcription factor NF-κB and results in transcription of the genes for several proinflammatory cytokines, including IL-1β , IL-6 , TNF-α , IL-12B , and type I interferons such as IFN-α and IFN-β. [ 34 ] Systemically, IL-1β, IL-6, and TNF-α induce fever and initiate the acute phase response in which the liver secretes acute phase proteins . [ 27 ] [ 28 ] [ 35 ] Locally, IL-1β and TNF-α cause vasodilation, where the gaps between blood vessel epithelial cells widen, and upregulation of cell surface adhesion molecules on epithelial cells to induce leukocyte extravasation . [ 27 ] [ 28 ] Additionally, activated macrophages have been found to have delayed synthesis of prostaglandins (PGs) which are important mediators of inflammation and pain. Among the PGs, anti-inflammatory PGE2 and pro-inflammatory PGD2 increase the most after activation, with PGE2 increasing expression of IL-10 and inhibiting production of TNFs via the COX-2 pathway. [ 36 ] [ 37 ]
Neutrophils are among the first immune cells recruited by macrophages to exit the blood via extravasation and arrive at the infection site. [ 35 ] Macrophages secrete many chemokines such as CXCL1 , CXCL2 , and CXCL8 (IL-8) that attract neutrophils to the site of infection. [ 27 ] [ 35 ] After neutrophils have finished phagocytosing and clearing the antigen at the end of the immune response, they undergo apoptosis, and macrophages are recruited from blood monocytes to help clear apoptotic debris. [ 38 ]
Macrophages also recruit other immune cells such as monocytes, dendritic cells, natural killer cells, basophils, eosinophils, and T cells through chemokines such as CCL2 , CCL4 , CCL5 , CXCL8 , CXCL9 , CXCL10 , and CXCL11 . [ 27 ] [ 35 ] Along with dendritic cells, macrophages help activate natural killer (NK) cells through secretion of type I interferons (IFN-α and IFN-β) and IL-12 . IL-12 acts with IL-18 to stimulate the production of proinflammatory cytokine interferon gamma (IFN-γ) by NK cells, which serves as an important source of IFN-γ before the adaptive immune system is activated. [ 35 ] [ 39 ] IFN-γ enhances the innate immune response by inducing a more aggressive phenotype in macrophages, allowing macrophages to more efficiently kill pathogens. [ 35 ]
Some of the T cell chemoattractants secreted by macrophages include CCL5 , CXCL9 , CXCL10 , and CXCL11 . [ 27 ]
Macrophages are professional antigen presenting cells (APC), meaning they can present peptides from phagocytosed antigens on major histocompatibility complex (MHC) II molecules on their cell surface for T helper cells. [ 41 ] Macrophages are not primary activators of naïve T helper cells that have never been previously activated since tissue resident macrophages do not travel to the lymph nodes where naïve T helper cells reside. [ 42 ] [ 43 ] Although macrophages are also found in secondary lymphoid organs like the lymph nodes, they do not reside in T cell zones and are not effective at activating naïve T helper cells. [ 42 ] The macrophages in lymphoid tissues are more involved in ingesting antigens and preventing them from entering the blood, as well as taking up debris from apoptotic lymphocytes. [ 42 ] [ 44 ] Therefore, macrophages interact mostly with previously activated T helper cells that have left the lymph node and arrived at the site of infection or with tissue resident memory T cells. [ 43 ]
Macrophages supply both signals required for T helper cell activation: 1) Macrophages present antigen peptide-bound MHC class II molecule to be recognized by the corresponding T cell receptor (TCR), and 2) recognition of pathogens by PRRs induce macrophages to upregulate the co-stimulatory molecules CD80 and CD86 (also known as B7 ) that binds to CD28 on T helper cells to supply the co-stimulatory signal. [ 35 ] [ 41 ] These interactions allow T helper cells to achieve full effector function and provide T helper cells with continued survival and differentiation signals preventing them from undergoing apoptosis due to lack of TCR signaling. [ 41 ] For example, IL-2 signaling in T cells upregulates the expression of anti-apoptotic protein Bcl-2 , but T cell production of IL-2 and the high-affinity IL-2 receptor IL-2RA both require continued signal from TCR recognition of MHC-bound antigen. [ 35 ] [ 45 ]
Macrophages can achieve different activation phenotypes through interactions with different subsets of T helper cells, such as T H 1 and T H 2. [ 17 ] Although there is a broad spectrum of macrophage activation phenotypes, there are two major phenotypes that are commonly acknowledged. [ 17 ] They are the classically activated macrophages, or M1 macrophages, and the alternatively activated macrophages, or M2 macrophages. M1 macrophages are proinflammatory, while M2 macrophages are mostly anti-inflammatory. [ 17 ]
T H 1 cells play an important role in classical macrophage activation as part of the type 1 immune response against intracellular pathogens (such as intracellular bacteria ) that can survive and replicate inside host cells, especially those pathogens that replicate even after being phagocytosed by macrophages. [ 46 ] After the TCR of T H 1 cells recognizes specific antigen peptide-bound MHC class II molecules on macrophages, T H 1 cells 1) secrete IFN-γ and 2) upregulate the expression of CD40 ligand (CD40L), which binds to CD40 on macrophages. [ 47 ] [ 35 ] These two signals activate the macrophages and enhance their ability to kill intracellular pathogens through increased production of antimicrobial molecules such as nitric oxide (NO) and superoxide (O 2- ). [ 27 ] [ 35 ] This enhancement of macrophages' antimicrobial ability by T H 1 cells is known as classical macrophage activation, and the activated macrophages are known as classically activated macrophages, or M1 macrophages. The M1 macrophages in turn upregulate B7 molecules and antigen presentation through MHC class II molecules to provide signals that sustain T cell help. [ 47 ] The activation of T H 1 cells and M1 macrophages forms a positive feedback loop: IFN-γ from T H 1 cells upregulates CD40 expression on macrophages; the interaction between CD40 on the macrophages and CD40L on T cells activates macrophages to secrete IL-12; and IL-12 promotes more IFN-γ secretion from T H 1 cells. [ 35 ] [ 47 ] The initial contact between macrophage antigen-bound MHC II and TCR serves as the contact point between the two cells where most of the T cells' IFN-γ secretion and CD40L are concentrated, so only macrophages directly interacting with T H 1 cells are likely to be activated. [ 35 ]
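The mutual activation described above can be sketched as a minimal two-variable toy model. This is purely illustrative: the rate constants, initial levels, and linear form are arbitrary assumptions, not measured biological parameters, and the sketch only shows how a positive feedback loop amplifies both signals.

```python
# Toy Euler integration of the TH1/M1 mutual-activation loop: IL-12 from
# activated macrophages promotes IFN-γ secretion by TH1 cells, and IFN-γ
# (via CD40/CD40L) drives macrophages to secrete more IL-12.
# All coefficients and initial values are arbitrary illustrative choices.

def simulate_feedback(steps: int = 1000, dt: float = 0.01):
    ifn_g, il12 = 0.1, 0.0                   # arbitrary initial signal levels
    for _ in range(steps):
        d_ifn = 1.0 * il12 - 0.5 * ifn_g     # IL-12 drives IFN-γ; first-order decay
        d_il12 = 0.8 * ifn_g - 0.5 * il12    # IFN-γ drives IL-12; first-order decay
        ifn_g += dt * d_ifn
        il12 += dt * d_il12
    return ifn_g, il12

ifn_g, il12 = simulate_feedback()
# With these coefficients both signals end well above their starting levels,
# reflecting the self-reinforcing character of the loop.
```

Because each cytokine's production term depends on the other, any initial signal is amplified until other processes (not modeled here) limit the response.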
In addition to activating M1 macrophages, T H 1 cells express Fas ligand (FasL) and lymphotoxin beta (LT-β) to help kill chronically infected macrophages that can no longer kill pathogens. [ 35 ] The killing of chronically infected macrophages releases pathogens into the extracellular space, where they can then be killed by other activated macrophages. [ 35 ] T H 1 cells also help recruit more monocytes, the precursors of macrophages, to the infection site. T H 1 cells secrete TNF-α and LT-α, which make blood vessel walls easier for monocytes to bind to and exit through, [ 35 ] and secrete CCL2 as a chemoattractant for monocytes. IL-3 and GM-CSF released by T H 1 cells stimulate more monocyte production in the bone marrow. [ 35 ]
When intracellular pathogens cannot be eliminated, such as in the case of Mycobacterium tuberculosis , the pathogen is contained through the formation of granuloma , an aggregation of infected macrophages surrounded by activated T cells. [ 48 ] The macrophages bordering the activated lymphocytes often fuse to form multinucleated giant cells that appear to have increased antimicrobial ability due to their proximity to T H 1 cells, but over time, the cells in the center start to die and form necrotic tissue. [ 43 ] [ 48 ]
T H 2 cells play an important role in alternative macrophage activation as part of the type 2 immune response against large extracellular pathogens like helminths . [ 35 ] [ 49 ] T H 2 cells secrete IL-4 and IL-13, which activate macrophages to become M2 macrophages, also known as alternatively activated macrophages. [ 49 ] [ 50 ] M2 macrophages express arginase-1 , an enzyme that converts arginine to ornithine and urea . [ 49 ] Ornithine helps increase smooth muscle contraction to expel the worm and also participates in tissue and wound repair. Ornithine can be further metabolized to proline , which is essential for synthesizing collagen . [ 49 ] M2 macrophages can also decrease inflammation by producing IL-1 receptor antagonist (IL-1RA) and IL-1 receptors that do not lead to downstream inflammatory signaling (IL-1RII). [ 35 ] [ 51 ]
Another part of adaptive immune activation involves stimulating CD8 + T cells via cross-presentation of antigen peptides on MHC class I molecules. Studies have shown that proinflammatory macrophages are capable of cross-presentation of antigens on MHC class I molecules, but whether macrophage cross-presentation plays a role in naïve or memory CD8 + T cell activation is still unclear. [ 28 ] [ 52 ] [ 44 ]
Macrophages have been shown to secrete cytokines BAFF and APRIL, which are important for plasma cell isotype switching. APRIL and IL-6 secreted by macrophage precursors in the bone marrow help maintain survival of plasma cells homed to the bone marrow. [ 53 ]
There are several activated forms of macrophages. [ 17 ] Despite a spectrum of ways to activate macrophages, there are two main groups, designated M1 and M2 . As mentioned earlier, M1 "killer" macrophages (previously referred to as classically activated macrophages) [ 55 ] are activated by LPS and IFN-gamma , and secrete high levels of IL-12 and low levels of IL-10 . M1 macrophages have pro-inflammatory, bactericidal, and phagocytic functions. [ 56 ] In contrast, the M2 "repair" designation (also referred to as alternatively activated macrophages) broadly refers to macrophages that function in constructive processes like wound healing and tissue repair, and those that turn off damaging immune system activation by producing anti-inflammatory cytokines like IL-10 . M2 is the phenotype of resident tissue macrophages, and can be further elevated by IL-4 . M2 macrophages produce high levels of IL-10 and TGF-beta and low levels of IL-12. Tumor-associated macrophages are mainly of the M2 phenotype, and seem to actively promote tumor growth. [ 57 ]
Macrophages exist in a variety of phenotypes which are determined by the role they play in wound maturation. Phenotypes can be predominantly separated into two major categories: M1 and M2. M1 macrophages are the dominant phenotype observed in the early stages of inflammation and are activated by key mediators such as interferon-γ (IFN-γ), tumor necrosis factor (TNF), and damage-associated molecular patterns (DAMPs). These mediator molecules create a pro-inflammatory response that in turn produces pro-inflammatory cytokines like Interleukin-6 and TNF. Unlike M1 macrophages, M2 macrophages mount an anti-inflammatory response and are induced by Interleukin-4 or Interleukin-13. They also play a role in wound healing and are needed for revascularization and reepithelialization. M2 macrophages are divided into four major types based on their roles: M2a, M2b, M2c, and M2d. How M2 phenotypes are determined is still debated, but studies have shown that their environment allows them to adjust to whichever phenotype is most appropriate to efficiently heal the wound. [ 56 ]
M2 macrophages are needed for vascular stability. They produce vascular endothelial growth factor-A and TGF-β1 . [ 56 ] There is a phenotype shift from M1 to M2 macrophages in acute wounds; however, this shift is impaired in chronic wounds. This dysregulation results in insufficient M2 macrophages and their corresponding growth factors that aid in wound repair. With a lack of these growth factors/anti-inflammatory cytokines and an overabundance of pro-inflammatory cytokines from M1 macrophages, chronic wounds are unable to heal in a timely manner. Normally, after neutrophils eat debris/pathogens, they undergo apoptosis and are removed. At this point, inflammation is not needed and M1 undergoes a switch to M2 (anti-inflammatory). However, dysregulation occurs when M1 macrophages are unable to, or do not, phagocytose neutrophils that have undergone apoptosis, leading to increased macrophage migration and inflammation. [ 56 ]
Both M1 and M2 macrophages play a role in promotion of atherosclerosis . M1 macrophages promote atherosclerosis by inflammation. M2 macrophages can remove cholesterol from blood vessels, but when the cholesterol is oxidized, the M2 macrophages become apoptotic foam cells contributing to the atheromatous plaque of atherosclerosis. [ 58 ] [ 59 ]
The first step to understanding the importance of macrophages in muscle repair, growth, and regeneration is that there are two "waves" of macrophages following the onset of damaging muscle use: subpopulations that do and do not directly influence muscle repair. The initial wave is a phagocytic population that appears during periods of increased muscle use sufficient to cause muscle membrane lysis and membrane inflammation; these cells can enter and degrade the contents of injured muscle fibers. [ 60 ] [ 61 ] [ 62 ] These early-invading, phagocytic macrophages reach their highest concentration about 24 hours following the onset of some form of muscle cell injury or reloading, [ 63 ] and their concentration rapidly declines after 48 hours. [ 61 ] The second group comprises non-phagocytic types distributed near regenerative fibers. These peak between two and four days and remain elevated for several days while muscle tissue is rebuilding. [ 61 ] The first subpopulation has no direct benefit to repairing muscle, while the second, non-phagocytic group does.
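The timeline of the two waves can be encoded as a simple schematic lookup. The hour boundaries below are simplified from the figures in the text (phagocytic peak around 24 h, decline after 48 h; non-phagocytic peak between days 2 and 4); this is a coarse binning for illustration, not a physiological model.

```python
def dominant_macrophage_wave(hours_post_injury: float) -> str:
    """Schematic classification of which macrophage wave dominates at a
    given time after muscle injury, per the timeline described above."""
    if hours_post_injury <= 48:
        # First wave: phagocytic macrophages peak ~24 h, decline after 48 h.
        return "phagocytic"
    if hours_post_injury <= 96:
        # Second wave: non-phagocytic macrophages peak between days 2 and 4.
        return "non-phagocytic"
    # Both populations taper off as the tissue rebuilds.
    return "declining/resolution"

print(dominant_macrophage_wave(24))   # phagocytic
print(dominant_macrophage_wave(72))   # non-phagocytic
```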
It is thought that macrophages release soluble substances that influence the proliferation, differentiation, growth, repair, and regeneration of muscle, but at this time the factor that is produced to mediate these effects is unknown. [ 63 ] It is known that macrophages' involvement in promoting tissue repair is not muscle specific; they accumulate in numerous tissues during the healing process phase following injury. [ 64 ]
Macrophages are essential for wound healing . [ 65 ] They replace polymorphonuclear neutrophils as the predominant cells in the wound by day two after injury. [ 66 ] Attracted to the wound site by growth factors released by platelets and other cells, monocytes from the bloodstream enter the area through blood vessel walls. [ 67 ] Numbers of monocytes in the wound peak one to one and a half days after the injury occurs. Once they are in the wound site, monocytes mature into macrophages. The spleen contains half the body's monocytes in reserve ready to be deployed to injured tissue. [ 68 ] [ 69 ]
The macrophage's main role is to phagocytize bacteria and damaged tissue, [ 65 ] and they also debride damaged tissue by releasing proteases. [ 70 ] Macrophages also secrete a number of factors such as growth factors and other cytokines, especially during the third and fourth post-wound days. These factors attract cells involved in the proliferation stage of healing to the area. [ 71 ] Macrophages may also restrain the contraction phase. [ 72 ] Macrophages are stimulated by the low oxygen content of their surroundings to produce factors that induce and speed angiogenesis [ 73 ] and they also stimulate cells that re-epithelialize the wound, create granulation tissue, and lay down a new extracellular matrix . [ 74 ] [ better source needed ] By secreting these factors, macrophages contribute to pushing the wound healing process into the next phase.
Scientists have elucidated that as well as eating up material debris, macrophages are involved in the typical limb regeneration in the salamander. [ 75 ] [ 76 ] They found that removing the macrophages from a salamander resulted in failure of limb regeneration and a scarring response. [ 75 ] [ 76 ]
As described above, macrophages play a key role in removing dying or dead cells and cellular debris. Erythrocytes have a lifespan on average of 120 days and so are constantly being destroyed by macrophages in the spleen and liver. Macrophages will also engulf macromolecules , and so play a key role in the pharmacokinetics of parenteral irons . [ citation needed ]
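The 120-day erythrocyte lifespan implies a substantial steady-state clearance workload for splenic and hepatic macrophages. A back-of-the-envelope calculation makes this concrete; note the total red-cell count of ~2.5 × 10^13 is a commonly cited textbook estimate, not a figure from the text above.

```python
SECONDS_PER_DAY = 86_400

def steady_state_clearance(total_cells: float, lifespan_days: float) -> float:
    """At steady state, cells cleared per second = pool size / lifespan."""
    return total_cells / (lifespan_days * SECONDS_PER_DAY)

# ~2.5e13 circulating erythrocytes (assumed textbook value), 120-day lifespan:
rate = steady_state_clearance(2.5e13, 120)
print(f"{rate:.1e} erythrocytes cleared per second")  # on the order of 2e6
```

In other words, macrophages in the spleen and liver dispose of roughly two million aged red cells every second of the body's steady state.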
The iron that is released from the haemoglobin is either stored internally in ferritin or is released into the circulation via ferroportin . In cases where systemic iron levels are raised, or where inflammation is present, raised levels of hepcidin act on macrophage ferroportin channels, leading to iron remaining within the macrophages. [ 77 ]
Melanophages are a subset of tissue-resident macrophages able to absorb pigment, either native to the organism or exogenous (such as tattoos ), from extracellular space. In contrast to dendritic junctional melanocytes , which synthesize melanosomes and contain various stages of their development, the melanophages only accumulate phagocytosed melanin in lysosome-like phagosomes. [ 78 ] [ 79 ] This occurs repeatedly as the pigment from dead dermal macrophages is phagocytosed by their successors, preserving the tattoo in the same place. [ 80 ]
Every tissue harbors its own specialized population of resident macrophages, which entertain reciprocal interconnections with the stroma and functional tissue. [ 81 ] [ 82 ] These resident macrophages are sessile (non-migratory), provide essential growth factors to support the physiological function of the tissue (e.g. macrophage-neuronal crosstalk in the guts), [ 83 ] and can actively protect the tissue from inflammatory damage. [ 84 ]
Nerve-associated macrophages or NAMs are those tissue-resident macrophages that are associated with nerves. Some of them are known to have an elongated morphology, extending up to 200 μm in length. [ 85 ]
Due to their role in phagocytosis, macrophages are involved in many diseases of the immune system. For example, they participate in the formation of granulomas , inflammatory lesions that may be caused by a large number of diseases. Some mostly rare disorders of ineffective phagocytosis and macrophage function have been described. [ 86 ]
In their role as a phagocytic immune cell macrophages are responsible for engulfing pathogens to destroy them. Some pathogens subvert this process and instead live inside the macrophage. This provides an environment in which the pathogen is hidden from the immune system and allows it to replicate. [ citation needed ]
Diseases with this type of behaviour include tuberculosis (caused by Mycobacterium tuberculosis ) and leishmaniasis (caused by Leishmania species). [ citation needed ]
To minimize the possibility of becoming a host for intracellular bacteria, macrophages have evolved defense mechanisms such as induction of nitric oxide and reactive oxygen intermediates, [ 87 ] which are toxic to microbes. Macrophages have also evolved the ability to restrict the microbe's nutrient supply and induce autophagy . [ 88 ]
Once engulfed by a macrophage, the causative agent of tuberculosis, Mycobacterium tuberculosis , [ 89 ] avoids cellular defenses and uses the cell to replicate. Recent evidence suggests that in response to pulmonary infection with Mycobacterium tuberculosis , peripheral macrophages mature into the M1 phenotype. The M1 phenotype is characterized by increased secretion of pro-inflammatory cytokines (IL-1β, TNF-α, and IL-6) and increased glycolytic activity essential for clearance of the infection. [ 1 ]
Upon phagocytosis by a macrophage, the Leishmania parasite finds itself in a phagocytic vacuole. Under normal circumstances, this phagocytic vacuole would develop into a lysosome and its contents would be digested. Leishmania alter this process and avoid being destroyed; instead, they make a home inside the vacuole. [ citation needed ]
Infection of macrophages in joints is associated with local inflammation during and after the acute phase of Chikungunya (caused by CHIKV or Chikungunya virus). [ 90 ]
Adenovirus (most common cause of pink eye) can remain latent in a host macrophage, with continued viral shedding 6–18 months after initial infection. [ citation needed ]
Brucella spp. can remain latent in a macrophage via inhibition of phagosome – lysosome fusion; causes brucellosis (undulant fever). [ citation needed ]
Legionella pneumophila , the causative agent of Legionnaires' disease , also establishes residence within macrophages. [ citation needed ]
Macrophages are the predominant cells involved in creating the progressive plaque lesions of atherosclerosis . [ 91 ]
Focal recruitment of macrophages occurs after the onset of acute myocardial infarction . These macrophages function to remove debris, apoptotic cells and to prepare for tissue regeneration . [ 92 ] Macrophages protect against ischemia-induced ventricular tachycardia in hypokalemic mice. [ 93 ]
Macrophages also play a role in human immunodeficiency virus (HIV) infection. Like T cells , macrophages can be infected with HIV, and even become a reservoir of ongoing virus replication throughout the body. HIV can enter the macrophage through binding of gp120 to CD4 and a second membrane receptor, CCR5 (a chemokine receptor). Both circulating monocytes and macrophages serve as a reservoir for the virus. [ 94 ] Macrophages are better able to resist infection by HIV-1 than CD4+ T cells, although susceptibility to HIV infection differs among macrophage subtypes. [ 95 ]
Macrophages can contribute to tumor growth and progression by promoting tumor cell proliferation and invasion, fostering tumor angiogenesis and suppressing antitumor immune cells. [ 96 ] [ 97 ] Inflammatory compounds, such as tumor necrosis factor (TNF)-alpha released by the macrophages activate the gene switch nuclear factor-kappa B . NF-κB then enters the nucleus of a tumor cell and turns on production of proteins that stop apoptosis and promote cell proliferation and inflammation. [ 98 ] Moreover, macrophages serve as a source for many pro-angiogenic factors including vascular endothelial factor (VEGF), tumor necrosis factor-alpha (TNF-alpha), macrophage colony-stimulating factor (M-CSF/CSF1) and IL-1 and IL-6 , [ 99 ] contributing further to the tumor growth.
Macrophages have been shown to infiltrate a number of tumors. Their number correlates with poor prognosis in certain cancers, including cancers of breast, cervix, bladder, brain and prostate. [ 100 ] [ 101 ] Some tumors can also produce factors, including M-CSF/CSF1, MCP-1/CCL2 and Angiotensin II , that trigger the amplification and mobilization of macrophages in tumors. [ 102 ] [ 103 ] [ 104 ] Additionally, subcapsular sinus macrophages in tumor-draining lymph nodes can suppress cancer progression by containing the spread of tumor-derived materials. [ 105 ]
Experimental studies indicate that macrophages can affect all therapeutic modalities, including surgery , chemotherapy , radiotherapy , immunotherapy and targeted therapy . [ 97 ] [ 106 ] [ 107 ] Macrophages can influence treatment outcomes both positively and negatively. Macrophages can be protective in different ways: they can remove dead tumor cells (in a process called phagocytosis ) following treatments that kill these cells; they can serve as drug depots for some anticancer drugs; [ 108 ] they can also be activated by some therapies to promote antitumor immunity. [ 109 ] Macrophages can also be deleterious in several ways: for example, they can suppress various chemotherapies, [ 110 ] [ 111 ] radiotherapies [ 112 ] [ 113 ] and immunotherapies. [ 114 ] [ 115 ] Because macrophages can regulate tumor progression, therapeutic strategies to reduce the number of these cells, or to manipulate their phenotypes, are currently being tested in cancer patients. [ 116 ] [ 117 ] However, macrophages are also involved in antibody-dependent cellular cytotoxicity (ADCC), and this mechanism has been proposed to be important for certain cancer immunotherapy antibodies. [ 118 ] Similarly, studies have identified macrophages genetically engineered to express chimeric antigen receptors as a promising therapeutic approach to lowering tumor burden. [ 119 ]
It has been observed that an increased number of pro-inflammatory macrophages within obese adipose tissue contributes to obesity complications, including insulin resistance and type 2 diabetes. [ 120 ]
The modulation of the inflammatory state of adipose tissue macrophages has therefore been considered a possible therapeutic target to treat obesity-related diseases. [ 121 ] Although adipose tissue macrophages are subject to anti-inflammatory homeostatic control by sympathetic innervation, experiments using ADRB2 gene knockout mice indicate that this effect is indirectly exerted through the modulation of adipocyte function, and not through direct Beta-2 adrenergic receptor activation, suggesting that adrenergic stimulation of macrophages may be insufficient to impact adipose tissue inflammation or function in obesity. [ 122 ]
Within the fat ( adipose ) tissue of CCR2 deficient mice , there is an increased number of eosinophils , greater alternative macrophage activation, and a propensity towards type 2 cytokine expression. Furthermore, this effect was exaggerated when the mice became obese from a high fat diet. [ 123 ] This is partially caused by a phenotype switch of macrophages induced by necrosis of fat cells ( adipocytes ). In an obese individual some adipocytes burst and undergo necrotic death, which causes the residential M2 macrophages to switch to M1 phenotype. This is one of the causes of a low-grade systemic chronic inflammatory state associated with obesity. [ 124 ] [ 125 ]
Though very similar in structure to tissue macrophages, intestinal macrophages have evolved specific characteristics and functions given their natural environment, which is in the digestive tract. Macrophages and intestinal macrophages have high plasticity causing their phenotype to be altered by their environments. [ 126 ] Like macrophages, intestinal macrophages are differentiated monocytes, though intestinal macrophages have to coexist with the microbiome in the intestines. This is a challenge considering the bacteria found in the gut are not recognized as "self" and could be potential targets for phagocytosis by the macrophage. [ 127 ]
To prevent the destruction of the gut bacteria, intestinal macrophages have developed key differences compared to other macrophages. Primarily, intestinal macrophages do not induce inflammatory responses. Whereas tissue macrophages release various inflammatory cytokines, such as IL-1, IL-6 and TNF-α, intestinal macrophages do not produce or secrete inflammatory cytokines. This change is directly caused by the intestinal macrophages' environment. Surrounding intestinal epithelial cells release TGF-β , which induces the change from proinflammatory macrophage to noninflammatory macrophage. [ 127 ]
Even though the inflammatory response is downregulated in intestinal macrophages, phagocytosis is still carried out. There is no drop off in phagocytosis efficiency as intestinal macrophages are able to effectively phagocytize the bacteria, S. typhimurium and E. coli , but intestinal macrophages still do not release cytokines, even after phagocytosis. Also, intestinal macrophages do not express lipopolysaccharide (LPS), IgA, or IgG receptors. [ 128 ] The lack of LPS receptors is important for the gut as the intestinal macrophages do not detect the microbe-associated molecular patterns (MAMPS/PAMPS) of the intestinal microbiome. Nor do they express IL-2 and IL-3 growth factor receptors. [ 127 ]
Intestinal macrophages have been shown to play a role in inflammatory bowel disease (IBD), such as Crohn's disease (CD) and ulcerative colitis (UC). In a healthy gut, intestinal macrophages limit the inflammatory response in the gut, but in a disease-state, intestinal macrophage numbers and diversity are altered. This leads to inflammation of the gut and disease symptoms of IBD. Intestinal macrophages are critical in maintaining gut homeostasis . The presence of inflammation or pathogen alters this homeostasis, and concurrently alters the intestinal macrophages. [ 129 ] There has yet to be a determined mechanism for the alteration of the intestinal macrophages by recruitment of new monocytes or changes in the already present intestinal macrophages. [ 128 ]
Additionally, a new study reveals macrophages limit iron access to bacteria by releasing extracellular vesicles, improving sepsis outcomes. [ 130 ]
Macrophages were first discovered late in the 19th century by zoologist Élie Metchnikoff . [ 131 ] Metchnikoff revolutionized the study of macrophages by combining philosophical insight with the evolutionary study of life. [ 132 ] Later, during the 1960s, Van Furth proposed that circulating blood monocytes in adults give rise to all tissue macrophages. [ 133 ] More recent publications on macrophages indicate that many resident tissue macrophage populations are independent of blood monocytes, being established during the embryonic stage of development. [ 134 ] In the 21st century, these ideas concerning the origin of tissue macrophages were brought together to suggest that, in physiologically complex organisms, tissue macrophages can arise independently through mechanisms that do not depend on blood monocytes. [ 135 ] | https://en.wikipedia.org/wiki/Macrophage |
Macrophage-activating lipopeptide 2 (MALP-2) is a lipopeptide Toll-like receptor (TLR)-2 and 6 agonist . It is used in immunological research to simulate Mycoplasma bacterial infections and activate immune cells . MALP-2 holds promise as a novel vaccine adjuvant due to its activation of TLRs. [ 1 ] [ 2 ] It also promotes vascular , bone , and wound healing . [ 3 ] [ 4 ]
MALP-2 has the structure S-2,3-bis(palmityloxy)-(2R)-propyl-cysteinyl-GNNDESNISFKEK and is a post-translationally modified CGNNDESNISFKEK peptide in which the side chain of the N-terminal cysteine residue is linked to a diacylglycerol moiety whose two acyl groups are both derived from palmitic acid . [ 5 ]
MALP-2 was initially named mycoplasma-derived high-molecular-weight material (MDHM) and, as the name suggests, had originally been isolated from Mycoplasma fermentans as an amphiphilic molecule with macrophage -activating properties. This discovery helped explain how Mycoplasma bacteria can provoke immune responses despite lacking a cell wall . [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Macrophage-activating_lipopeptide_2 |
The terms "macrophage" and "microphage" are used in ecology to describe heterotrophs that consume food in two different ways. Both macrophages and microphages "ingest solid food and may process it through some sort of alimentary canal ." [ 1 ] However, a macrophage "handles food items singly, while a microphage handles food items in bulk without manipulating them individually." [ 2 ] Microphages include suspension feeders , and often incidentally digest low-quality food items. [ 1 ]
Another category of heterotrophs based on feeding mechanism, known as " osmotrophs ," is made up of organisms (primarily fungi and bacteria) that absorb organic matter directly across their cell membranes. [ 1 ]
The terms "macrophage" and "microphage" were originally used in this sense by Jordan and Hirsch (1927; cited in Yonge 1928). [ 2 ] Although they have been used in ecology texts as recently as 2002, [ 1 ] the terms macrophage and microphage today are primarily used to describe two different types of white blood cells in the vertebrate immune system | https://en.wikipedia.org/wiki/Macrophage_(ecology) |
Macrophagic myofasciitis ( MMF ) is a histopathological finding involving inflammatory macrophage formations with aluminium-containing crystal inclusions and associated microscopic muscle necrosis in biopsy samples of the deltoid muscle . Based on the presence of aluminium and the common practice of administering vaccines into the deltoid, it has been proposed that the abnormalities are a result of immunisation with aluminium adjuvant -containing vaccines. The findings were observed in a minority of persons being evaluated for "diffuse myalgias, arthralgias or muscle weakness" who underwent deltoid muscle biopsies. The individuals had a history of receiving aluminium-containing vaccines, administered months to several years prior to observation of MMF histopathology. [ 1 ] [ 2 ]
It has been subsequently proposed that macrophagic myofasciitis is in fact a systemic disorder where various diseases develop in association and as consequence of vaccination with aluminium-containing vaccines in susceptible individuals, however, the World Health Organization has concluded that "[t]here is no evidence to suggest that MMF is a specific illness", and that "[t]he current evidence neither establishes nor excludes a generalized disorder affecting other organs." [ 1 ] [ 2 ]
According to the WHO, "There is no evidence to suggest that MMF is a specific illness. MMF is a lesion containing aluminium salts, identified by histopathological examination, found at the site of previous vaccination with an aluminium-containing vaccine". [ 3 ] | https://en.wikipedia.org/wiki/Macrophagic_myofasciitis |
In soil, macropores are defined as cavities that are larger than 75 μm. [ 1 ] Functionally, pores of this size host preferential soil solution flow and rapid transport of solutes and colloids . Macropores increase the hydraulic conductivity of soil, allowing water to infiltrate and drain quickly, and shallow groundwater to move relatively rapidly via lateral flow. In soil, macropores are created by plant roots , soil cracks, soil fauna , and by aggregation of soil particles into peds . Macropores can also be found in soil between larger individual mineral particles such as sand or gravel.
Macropores may be defined differently in other contexts. Within the context of porous solids (i.e., not porous aggregations such as soil), colloid and surface chemists define macropores as cavities that are larger than 50 nm. [ 2 ]
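The two conventions above differ by three orders of magnitude, so it can help to see them side by side. A minimal sketch (the function names and the generic "smaller pore" label are illustrative; the 2 nm and 50 nm boundaries are the standard IUPAC classification for porous solids):

```python
def classify_soil_pore(diameter_um):
    """Soil-science convention: pores larger than 75 micrometres are macropores."""
    return "macropore" if diameter_um > 75 else "smaller pore"

def classify_iupac_pore(diameter_nm):
    """IUPAC convention for porous solids:
    >50 nm macropore, 2-50 nm mesopore, <2 nm micropore."""
    if diameter_nm > 50:
        return "macropore"
    if diameter_nm >= 2:
        return "mesopore"
    return "micropore"

# A 100 um soil cavity is a macropore in soil science, while a 100 nm
# cavity in a porous solid already counts as a macropore for IUPAC.
```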
Primary particles ( sand , silt and clay ) in soil are bound together by various agents and under different processes to form soil aggregates ( peds ). Spaces of different shapes and sizes exist within and between these soil aggregates. The larger spaces between aggregates are called macropores. Macropores can be formed under the influence of physical processes such as wet/dry and freeze/thaw cycles, which result in cracks and fissures of soils. They can also be formed under biological processes where plant roots and soil organisms play an important role in their formation. [ 3 ] Macropores created by biological activities are also called biopores. For example, plant roots create large spaces between soil aggregates with their growth and decay. Soil fauna , especially burrowing species such as earthworms , contributes to the formation of macropores with their movement and activities in soils. In general, the formation of macropores is negatively related to soil depth as these physical and biological processes diminish with depth.
As an important part of soil structure , macropores are vital to the provision of many soil ecosystem services. They allow free movement of water and air, influence transport of chemicals and provide habitats for soil organisms. Therefore, understanding the importance of soil macropores is also critical to achieving sustainable management of our soil resources.
Water can move freely under the influence of gravity in soil macropores when compared to micropores (much smaller pores in soils) where water is held by capillary forces . [ 4 ] Water also tends to move along paths of least resistance. Connected macropores create these paths and result in so-called preferential flows [ 5 ] in soils. Such attributes of macropores allow fast movement of water into and across soils, which can significantly improve soil infiltration rate and permeability . These in turn can help to reduce surface runoff and soil erosion and prevent flooding. It also contributes to groundwater recharge that replenishes water resources .
On the other hand, these pores will be filled with air when they do not hold water. An extended network of macropores helps to improve gas exchange between soil and the atmosphere , [ 6 ] especially when these macropores are connected to soil surface. Soil gases such as carbon dioxide and oxygen are important elements of soil respiration . Oxygen is essential to the growth of plant roots and soil organisms while the release of carbon dioxide through respiration is an integral part of the global carbon cycling .
Optimal water and air movement through soils not only provide essential elements to sustain life but are also fundamental to various soil processes such as nutrient cycling .
As macropores facilitate water movement in soils, they also inevitably influence the transport of chemicals dissolved in that water. As a result, macropores can play a significant role in the cycling of soil nutrients and the distribution of soil pollutants . For instance, while preferential flow paths consisting of macropores enhance the drainage of soil water, dissolved nutrients can be carried away rapidly, leading to an uneven distribution of water as well as chemicals in the soil. When excess chemicals or pollutants are released into groundwater, they can cause water pollution in the receiving water bodies . [ 7 ] This can be a concern especially for some land uses such as agricultural activities, [ 8 ] as it raises issues regarding the effectiveness of irrigation and fertilization as well as impacts of environmental pollution . For example, excessive nitrate converted from nitrogen fertilizers can be washed into groundwater under heavy rainfall or irrigation. Subsequently, a high level of nitrate in drinking water can cause health concerns. [ 9 ]
Being large pores in soils, macropores allow easy movement of water and air and provide favourable spaces for plant root growth and habitats for soil organisms . [ 10 ] Consequently, these pores, with various residing soil organisms such as earthworms and larvae, also become important locations of soil bio-chemical processes that affect the overall soil quality.
Soil macropores are not uniform but have an irregular geometry . They vary in shapes, sizes, and even surface roughness . When connected together, they form specific networks in soils. Therefore, the characteristics of these macropore networks can have significant influences on their functions in soils, especially in relation to water movement, aeration, and plant root growth.
The interconnectedness of soil macropores affects the capability of soil to conduct water and thus controls its water infiltration and hydraulic conductivity . Higher connectivity of soil macropores is usually associated with higher soil permeability . [ 11 ] Connection of macropores with soil surface and groundwater also contributes to water infiltration into soils and replenishment of groundwater. The connectivity of soil macropores influences the vertical and lateral movement of both water and solutes in soils.
Interconnected soil macropores may not create continuous paths, especially across the soil boundaries. The existence of dead-end pores can block or slow down water and air movement. Therefore, the continuity of soil macropores is also an influential factor in soil processes.
For example, higher continuity of macropores can result in greater gas exchange between soil and the atmosphere and better soil aeration. Continuous connections of macropores also provide extended spaces into which plants can easily grow their roots, without sacrificing aboveground biomass by allocating resources for their roots to search for new spaces in discontinuous areas. [ 12 ]
While soil macropores can be connected continuously to form long channels between two points in a soil, these channels are mostly sinuous rather than straight. Tortuosity is the ratio between the actual path length and the shortest distance between two points. [ 13 ] In essence, the tortuosity of macropore paths indicates their resistance to water flow: the more sinuous the paths, the higher the resistance. This in turn affects the speed of water movement and distribution in soils.
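The definition above can be computed directly for a sampled pore path. A minimal sketch (the function name and the 2-D sample points are illustrative):

```python
import math

def tortuosity(path_points):
    """Ratio of the actual path length to the straight-line
    distance between the path's two end points."""
    actual = sum(math.dist(p, q) for p, q in zip(path_points, path_points[1:]))
    straight = math.dist(path_points[0], path_points[-1])
    return actual / straight

# A right-angle detour from (0, 0) to (1, 1) via (1, 0):
# path length 2 versus straight-line distance sqrt(2).
t = tortuosity([(0, 0), (1, 0), (1, 1)])
```

A perfectly straight channel has tortuosity 1; the more sinuous the path, the larger the value.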
Soil macropores are a vital part of soil structure and their conservation is critical to sustainable management of our soil resources. This is particularly true of soils that are constantly subject to human disturbance, such as tilled agricultural fields where the shape and size of macropores can be altered by tillage.
Soil macropores are easily affected by soil compaction . [ 14 ] Compacted soils, for example in forest landings , usually have a low macropore proportion (macro-porosity) with impeded water movement.
Organic matter can be incorporated into disturbed soils to improve their macro-porosity and related soil functions [ 15 ] | https://en.wikipedia.org/wiki/Macropore |
Macroprolactin is the term used to describe complexed forms of the pituitary hormone prolactin which are found in blood. The most common complex found in blood consists of prolactin and immunoglobulin G (IgG). [ 1 ] While the free prolactin hormone is active, prolactin in the macroprolactin complex does not have any biological activity in the body and is considered benign. [ 2 ] However, macroprolactin is detected by all laboratory tests that measure prolactin in blood. [ 3 ] This leads to misdiagnosis of hyperprolactinaemia in many people, especially those with other symptoms, such as infertility [ 4 ] or menstrual problems. [ 5 ]
"Macroprolactin" is most commonly a complex of prolactin and IgG (typically IgG4), displaying a molecular weight of approximately 150 kDa (which is hence 6–7 fold higher that the native molecule). Polymeric aggregate of highly glycosylated prolactin monomers or prolactin-IgA complexes (i.e. non-IgG-type macroprolactin) act similarly and also count as "macroprolactin". [ 6 ]
In patients with hyperprolactinemia , the serum pattern of prolactin isoforms usually encompasses 60%–90% monomeric prolactin, 15%–30% big-prolactin (40–60 kDa: usually prolactin dimers or big-big degradation products) and 0%–10% big-big prolactin (>100 kDa). [ 7 ] The condition of macroprolactinaemia is hence defined as predominance (i.e. >30%–60%) of circulating prolactin isoforms with molecular weight >100 kDa. [ 8 ]
There are certain chemicals, such as polyethylene glycol , that can be added to remove macroprolactin from a suspicious sample. The sample can then be re-analysed to see if the prolactin levels are still high. The gold standard test to diagnose macroprolactin is gel-filtration chromatography . | https://en.wikipedia.org/wiki/Macroprolactin |
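The PEG screening step lends itself to a simple calculation. The sketch below assumes a protocol in which serum is mixed 1:1 with PEG solution (hence a dilution factor of 2) and a recovery cutoff of about 40%, a commonly cited but laboratory-dependent threshold; the function names, the dilution factor, and the cutoff are illustrative assumptions, not a clinical standard:

```python
def peg_recovery_percent(pre_peg, post_peg, dilution_factor=2.0):
    """Percent of immunoreactive prolactin recovered after PEG
    precipitation, corrected for the serum:PEG mixing dilution."""
    return post_peg * dilution_factor / pre_peg * 100.0

def suggests_macroprolactin(recovery, cutoff=40.0):
    """Low recovery means most of the measured prolactin precipitated
    with the PEG, i.e. was macroprolactin."""
    return recovery < cutoff

# Example: 1000 mIU/L before PEG, 150 mIU/L measured after -> 30% recovery.
```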
In genetics , macrosatellites are the largest of the tandem repeats within DNA . Each macrosatellite repeat typically is several thousand base pairs in length, and the entire repeat array often spans hundreds of kilobases. [ 1 ] A reduced number of repeats on chromosome 4 (D4Z4 repeats) causes euchromatization of local DNA and is the predominant cause of facioscapulohumeral muscular dystrophy (FSHD). [ 2 ] Other macrosatellites are RS447, NBL2 and DXZ4 , [ 3 ] [ 1 ] [ 4 ] [ 5 ] although RS447 is also commonly referred to as a "megasatellite."
| https://en.wikipedia.org/wiki/Macrosatellite |
Macroscopic quantum phenomena are processes showing quantum behavior at the macroscopic scale , rather than at the atomic scale where quantum effects are prevalent. The best-known examples of macroscopic quantum phenomena are superfluidity and superconductivity ; other examples include the quantum Hall effect , Josephson effect and topological order . Since 2000 there has been extensive experimental work on quantum gases, particularly Bose–Einstein condensates .
Between 1996 and 2016 six Nobel Prizes were given for work related to macroscopic quantum phenomena. [ 1 ] Macroscopic quantum phenomena can be observed in superfluid helium and in superconductors , [ 2 ] but also in dilute quantum gases, dressed photons such as polaritons and in laser light. Although these media are very different, they are all similar in that they show macroscopic quantum behavior, and in this respect they all can be referred to as quantum fluids .
Quantum phenomena are generally classified as macroscopic when the quantum states are occupied by a large number of particles (of the order of the Avogadro number ) or the quantum states involved are macroscopic in size (up to kilometer-sized in superconducting wires). [ 3 ]
The concept of macroscopically occupied quantum states was introduced by Fritz London . [ 4 ] [ 5 ] In this section it will be explained what it means if a single state is occupied by a very large number of particles. We start with the wave function of the state written as Ψ = Ψ₀ e^{iφ},
with Ψ₀ the amplitude and φ the phase. The wave function is normalized so that ∫ Ψ₀² dV = N , with N the number of particles occupying the state.
The physical interpretation of the quantity Ψ₀² ΔV
depends on the number of particles. Fig. 1 represents a container with a certain number of particles with a small control volume Δ V inside. We check from time to time how many particles are in the control box. We distinguish three cases:
In quantum mechanics the particle probability flow density J p (unit: particles per second per m²), also called probability current , can be derived from the Schrödinger equation to be J p = (1/2 m )(Ψ*(−iħ∇ − q A )Ψ + cc),
with q the charge of the particle and A the vector potential; cc stands for the complex conjugate of the other term inside the brackets. [ 6 ] For neutral particles q = 0 ; for superconductors q = −2 e (with e the elementary charge), the charge of Cooper pairs. With Eq. ( 1 ) this becomes J p = Ψ₀² ((ħ/ m )∇φ − ( q / m ) A ).
If the wave function is macroscopically occupied the particle probability flow density becomes a particle flow density. We introduce the fluid velocity v s via the mass flow density m J p = ρ v s .
The density (mass per volume) is ρ = m Ψ₀²,
so Eq. ( 5 ) results in v s = (ħ/ m )∇φ − ( q / m ) A .
This important relation connects the velocity, a classical concept, of the condensate with the phase of the wave function, a quantum-mechanical concept.
At temperatures below the lambda point , helium shows the unique property of superfluidity. The fraction of the liquid that forms the superfluid component is a macroscopic quantum fluid . The helium atom is a neutral particle , so q = 0 . Furthermore, when considering helium-4 , the relevant particle mass is m = m 4 , so Eq. ( 8 ) reduces to v s = (ħ/ m 4 )∇φ.
For an arbitrary loop in the liquid, this gives ∮ v s · d l = (ħ/ m 4 ) ∮ ∇φ · d l .
Due to the single-valued nature of the wave function ∮ ∇φ · d l = 2π n ,
with n integer, we have ∮ v s · d l = n h / m 4 .
The quantity h / m 4 is the quantum of circulation. For a circular motion with radius r , the circulation is ∮ v s · d l = 2π r v .
In the case of a single quantum ( n = 1 ), the velocity is v = h /(2π m 4 r ).
When superfluid helium is put in rotation, Eq. ( 13 ) will not be satisfied for all loops inside the liquid unless the rotation is organized around vortex lines (as depicted in Fig. 2). These lines have a vacuum core with a diameter of about 1 Å (which is smaller than the average particle distance). The superfluid helium rotates around the core with very high speeds. Just outside the core ( r = 1 Å), the velocity is as large as 160 m/s. The cores of the vortex lines and the container rotate as a solid body around the rotation axis with the same angular velocity. The number of vortex lines increases with the angular velocity (as shown in the upper half of the figure). Note that the two right figures both contain six vortex lines, but the lines are organized in different stable patterns. [ 7 ]
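The ~160 m/s figure follows directly from the circulation quantum: for a single quantum, v = h/(2π m₄ r). A quick numerical check (the constants are standard CODATA values; the variable names are illustrative):

```python
import math

H = 6.62607015e-34      # Planck constant, J*s (exact SI value)
M_HE4 = 6.6446573e-27   # mass of a helium-4 atom, kg (approximate)

def quantum_of_circulation():
    """kappa_0 = h / m4, about 1e-7 m^2/s for helium-4."""
    return H / M_HE4

def vortex_speed(r, n=1):
    """Superfluid speed at radius r around a vortex line carrying
    n circulation quanta: v = n*h / (2*pi*m4*r)."""
    return n * quantum_of_circulation() / (2.0 * math.pi * r)

# Just outside the vortex core, r = 1 angstrom = 1e-10 m:
v = vortex_speed(1e-10)   # roughly 160 m/s, as quoted above
```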
In the original paper [ 8 ] Ginzburg and Landau observed the existence of two types of superconductors depending
on the energy of the interface between the normal and superconducting states. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors , superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value H c . Depending on the geometry of the sample, one may obtain an intermediate state [ 9 ] consisting of a baroque pattern [ 10 ] of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors , raising the applied field past a critical value H c 1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength H c 2 , superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized . Most pure elemental superconductors, except niobium and carbon nanotubes , are Type I, while almost all impure and compound superconductors are Type II.
The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957.
He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in a triangular lattice of quantized tubes of flux vortices . For this and related work, he was awarded the Nobel Prize in 2003 with Ginzburg and Leggett . [ 11 ]
For superconductors the bosons involved are the so-called Cooper pairs which are quasiparticles formed by two electrons. [ 12 ] Hence m = 2 m e and q = −2 e where m e and e are the mass of an electron and the elementary charge. It follows from Eq. ( 8 ) that v s = (ħ/2 m e )∇φ + ( e / m e ) A .
Integrating Eq. ( 15 ) over a closed loop gives ∮ v s · d l = (ħ/2 m e ) ∮ ∇φ · d l + ( e / m e ) ∮ A · d l .
As in the case of helium we define the vortex strength κ = ∮ v s · d l
and use the general relation ∮ A · d l = Φ,
where Φ is the magnetic flux enclosed by the loop. The so-called fluxoid is defined by
In general the values of κ and Φ depend on the choice of the loop. Due to the single-valued nature of the wave function and Eq. ( 16 ) the fluxoid is quantized: Φ v = n Φ 0 .
The unit of quantization is called the flux quantum Φ 0 = h /(2 e ) = 2.067833848×10⁻¹⁵ Wb.
The flux quantum plays a very important role in superconductivity. The Earth's magnetic field is very small (about 50 μT), but it generates about one flux quantum in an area of 6 μm by 6 μm. So, the flux quantum is very small. Yet it was measured to an accuracy of 9 digits as shown in Eq. ( 21 ). Nowadays the value given by Eq. ( 21 ) is exact by definition.
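Both numbers quoted above — the flux quantum itself and the "one flux quantum per 6 μm × 6 μm of Earth field" estimate — are easy to check numerically (h and e are the exact SI values; the variable names are illustrative):

```python
H = 6.62607015e-34   # Planck constant, J*s (exact SI value)
E = 1.602176634e-19  # elementary charge, C (exact SI value)

PHI_0 = H / (2.0 * E)  # superconducting flux quantum, Wb

# Earth's field (~50 uT) through a 6 um x 6 um area:
earth_flux = 50e-6 * (6e-6) ** 2   # ~1.8e-15 Wb
quanta = earth_flux / PHI_0        # close to one flux quantum
```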
In Fig. 3 two situations are depicted of superconducting rings in an external magnetic field. One case is a thick-walled ring and in the other case the ring is also thick-walled, but is interrupted by a weak link. In the latter case we will meet the famous Josephson relations . In both cases we consider a loop inside the material. In general a superconducting circulation current will flow in the material. The total magnetic flux in the loop is the sum of the applied flux Φ a and the self-induced flux Φ s induced by the circulation current
The first case is a thick ring in an external magnetic field (Fig. 3a). The currents in a superconductor only flow in a thin layer at the surface. The thickness of this layer is determined by the so-called London penetration depth . It is of μm size or less. We consider a loop far away from the surface so that v s = 0 everywhere, so κ = 0. In that case the fluxoid is equal to the magnetic flux (Φ v = Φ). If v s = 0, Eq. ( 15 ) reduces to (ħ/2 m e )∇φ + ( e / m e ) A = 0.
Taking the curl gives (ħ/2 m e )∇×∇φ + ( e / m e )∇× A = 0.
Using the well-known relations ∇×∇φ = 0 and ∇× A = B shows that the magnetic field in the bulk of the superconductor is zero as well. So, for thick rings, the total magnetic flux in the loop is quantized according to Φ = n Φ 0 .
Weak links play a very important role in modern superconductivity. In most cases weak links are oxide barriers between two superconducting thin films, but it can also be a crystal boundary (in the case of high-Tc superconductors ). A schematic representation is given in Fig. 4. Now consider the ring which is thick everywhere except for a small section where the ring is closed via a weak link (Fig. 3b). The velocity is zero except near the weak link. In these regions the velocity contribution to the total phase change in the loop is given by (with Eq. ( 15 ))
The line integral is over the contact from one side to the other in such a way that the end points of the line are well inside the bulk of the superconductor where v s = 0 . So the value of the line integral is well-defined (e.g. independent of the choice of the end points). With Eqs. ( 19 ), ( 22 ), and ( 26 )
Without proof we state that the supercurrent through the weak link is given by the so-called DC Josephson relation [ 13 ] i s = i 1 sin(Δφ*).
The voltage over the contact is given by the AC Josephson relation V = (ħ/2 e ) d(Δφ*)/d t .
The names of these relations (DC and AC relations) are misleading since they both hold in DC and AC situations. In the steady state (constant Δφ*) Eq. ( 29 ) shows that V = 0 while a nonzero current flows through the junction. In the case of a constant applied voltage (voltage bias) Eq. ( 29 ) can be integrated easily and gives Δφ*( t ) = Δφ*(0) + (2 eV /ħ) t .
Substitution in Eq. ( 28 ) gives i s = i 1 sin(Δφ*(0) + (2 eV /ħ) t ).
This is an AC current. The frequency ν = 2 eV / h
is called the Josephson frequency. One μV gives a frequency of about 500 MHz. By using Eq. ( 32 ) the flux quantum is determined with the high precision as given in Eq. ( 21 ).
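The voltage-to-frequency conversion ν = 2eV/h can be verified in a couple of lines (exact SI constants; the names are illustrative):

```python
E = 1.602176634e-19  # elementary charge, C (exact SI value)
H = 6.62607015e-34   # Planck constant, J*s (exact SI value)

def josephson_frequency(voltage):
    """nu = 2*e*V/h for a DC voltage bias across the junction."""
    return 2.0 * E * voltage / H

f = josephson_frequency(1e-6)  # 1 uV bias -> roughly 484 MHz
```

This is the "about 500 MHz per microvolt" figure quoted above; the exactness of the constants is what makes Josephson junctions usable as voltage standards.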
The energy difference of a Cooper pair, moving from one side of the contact to the other, is Δ E = 2eV . With this expression Eq. ( 32 ) can be written as Δ E = hν which is the relation for the energy of a photon with frequency ν .
Fig. 5 shows a so-called DC SQUID . It consists of two superconductors connected by two weak links. The fluxoid quantization of a loop through the two bulk superconductors and the two weak links demands
If the self-inductance of the loop can be neglected the magnetic flux in the loop Φ is equal to the applied flux Φ a = B A ,
with B the magnetic field, applied perpendicular to the surface, and A the surface area of the loop. The total supercurrent is given by i = i 1 sin(Δφ 1 *) + i 1 sin(Δφ 2 *).
Substitution of Eq. ( 33 ) in ( 35 ) gives
Using a well-known geometrical formula we get
Since the sine function can vary only between −1 and +1, a steady solution is only possible if the applied current is below a critical current given by i c = 2 i 1 |cos(π Φ a /Φ 0 )|.
Note that the critical current is periodic in the applied flux with period Φ 0 . The dependence of the critical current on the applied flux is depicted in Fig. 6. It has a strong resemblance to the interference pattern generated by a laser beam behind a double slit. In practice the critical current is not zero at half-integer multiples of the flux quantum of the applied flux. This is because the self-inductance of the loop cannot be neglected. [ 15 ]
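For an idealized, symmetric DC SQUID with negligible self-inductance, the modulation described above takes the standard form I_c(Φ) = 2 i₁ |cos(π Φ/Φ₀)|. A sketch under those assumptions (names illustrative):

```python
import math

PHI_0 = 2.067833848e-15  # superconducting flux quantum, Wb

def squid_critical_current(applied_flux, i1):
    """Maximum zero-voltage current of a symmetric two-junction SQUID
    with negligible self-inductance: 2*i1*|cos(pi*Phi/Phi_0)|."""
    return 2.0 * i1 * abs(math.cos(math.pi * applied_flux / PHI_0))

i_max = squid_critical_current(0.0, 1e-6)         # 2 uA at integer flux
i_min = squid_critical_current(PHI_0 / 2, 1e-6)   # ~0 at half-integer flux
```

In a real device the minima are lifted above zero by the loop's self-inductance, as noted above.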
Type-II superconductivity is characterized by two critical fields called B c1 and B c2 . At a magnetic field B c1 the applied magnetic field starts to penetrate the sample, but the sample is still superconducting. Only at a field of B c2 is the sample completely normal. For fields in between B c1 and B c2 magnetic flux penetrates the superconductor in well-organized patterns, the so-called Abrikosov vortex lattice , similar to the pattern shown in Fig. 2. [ 16 ] A cross section of the superconducting plate is given in Fig. 7. Far away from the plate the field is homogeneous, but in the material superconducting currents flow which squeeze the field into bundles of exactly one flux quantum. The typical field in the core is as big as 1 tesla. The currents around the vortex core flow in a layer of about 50 nm with current densities on the order of 15×10¹² A/m². That corresponds to 15 million amperes in a wire of one mm².
The classical types of quantum systems, superconductors and superfluid helium, were discovered in the beginning of the 20th century. Near the end of the 20th century, scientists discovered how to create very dilute atomic or molecular gases, cooled first by laser cooling and then by evaporative cooling . [ 17 ] They are trapped using magnetic fields or optical dipole potentials in ultrahigh vacuum chambers. Isotopes which have been used include rubidium (Rb-87 and Rb-85), strontium (Sr-87, Sr-86, and Sr-84), potassium (K-39 and K-40), sodium (Na-23), lithium (Li-7 and Li-6), and hydrogen (H-1). The temperatures to which they can be cooled are as low as a few nanokelvin. The developments have been very fast in the past few years. A team at NIST and the University of Colorado has succeeded in creating and observing vortex quantization in these systems. [ 18 ] The concentration of vortices increases with the angular velocity of the rotation, similar to the case of superfluid helium and superconductivity. | https://en.wikipedia.org/wiki/Macroscopic_quantum_phenomena |
In quantum mechanics , macroscopic quantum self-trapping is the phenomenon in which two Bose–Einstein condensates , weakly linked by an energy barrier through which particles can tunnel , nevertheless end up with a higher average number of bosons on one side of the junction than the other. The junction of two Bose–Einstein condensates is mostly analogous to a Josephson junction , which is made of two superconductors linked by a non-conducting barrier. However, superconducting Josephson junctions do not display macroscopic quantum self-trapping, which is thus a distinguishing feature of Bose–Einstein condensate junctions. Self-trapping occurs when the self-interaction energy Λ between the bosons is larger than a critical value Λ c MJJ : [ 1 ] [ 2 ] Λ c MJJ = [1 + √(1 − z(0)²) cos(θ(0) + θ A-C )] / (z(0)²/2)
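The critical value defined above can be evaluated directly from the quoted expression. A minimal sketch (the function and variable names are illustrative; θ_A-C is taken as an input with default 0):

```python
import math

def lambda_critical(z0, theta0, theta_ac=0.0):
    """Critical self-interaction parameter for macroscopic quantum
    self-trapping, from the expression quoted above:
    (1 + sqrt(1 - z0^2) * cos(theta0 + theta_ac)) / (z0^2 / 2),
    where z0 is the initial population imbalance and theta0 the
    initial phase difference between the condensates."""
    return (1.0 + math.sqrt(1.0 - z0 ** 2) * math.cos(theta0 + theta_ac)) / (z0 ** 2 / 2.0)

# For an initial imbalance z(0) = 0.5 and zero initial phase difference:
lam = lambda_critical(0.5, 0.0)   # self-trapping requires Lambda > lam
```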
It was first described in 1997. [ 3 ] It has been observed in Bose–Einsten condensates of exciton-polaritons , [ 4 ] and predicted for a condensate of magnons . [ 1 ]
While the tunneling of a particle through classically forbidden barriers can be described by the particle's wave function , this merely gives the probability of tunneling. Although various factors can increase or decrease the probability of tunneling, one cannot be certain whether or not tunneling will occur.
When two condensates are placed in a double potential well and the phase and population differences are such that the system is in equilibrium , the population difference will remain fixed. A naïve conclusion is that there is no tunneling at all, and the bosons are truly "trapped" on one side of the junction. However, macroscopic quantum self-trapping does not rule out quantum tunneling — rather, only the possibility of observing tunneling is ruled out. In the event that a particle tunnels through the barrier, another particle tunnels in the opposite direction. Because the identity of individual particles is lost in that case, no tunneling can be observed, and the system is considered to remain at rest .
| https://en.wikipedia.org/wiki/Macroscopic_quantum_self-trapping |
A macroscopic quantum state is a state of matter in which macroscopic properties, such as mechanical motion , [ 1 ] thermal conductivity , electrical conductivity [ 2 ] and viscosity , can be described only by quantum mechanics rather than merely classical mechanics . [ 3 ] This occurs primarily at low temperatures where little thermal motion is present to mask the quantum nature of a substance.
Macroscopic quantum phenomena can emerge from coherent states of superfluids and superconductors . [ 3 ] Quantum states of motion have been directly observed in a macroscopic mechanical resonator (see quantum machine ).
| https://en.wikipedia.org/wiki/Macroscopic_quantum_state |
The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye , without magnifying optical instruments . [ 1 ] [ 2 ] It is the opposite of microscopic .
When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations ( microscopy ) or theories ( microphysics , statistical physics ) of objects of geometric lengths smaller than perhaps some hundreds of micrometres .
A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope ) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope ). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics . An example of a topic that extends from macroscopic to microscopic viewpoints is histology .
Not quite by the distinction between macroscopic and microscopic, classical and quantum mechanics are theories that are distinguished in a subtly different way. [ 3 ] At first glance one might think of them as differing simply in the size of objects that they describe, classical objects being considered far larger in mass and geometrical size than quantal objects, for example a football versus a fine particle of dust. More refined consideration distinguishes classical and quantum mechanics on the basis that classical mechanics fails to recognize that matter and energy cannot be divided into infinitesimally small parcels, so that ultimately fine division reveals irreducibly granular features. The criterion of fineness is whether or not the interactions are described in terms of the Planck constant . Roughly speaking, classical mechanics considers particles in mathematically idealized terms even as fine as geometrical points with no magnitude, still having their finite masses. Classical mechanics also considers mathematically idealized extended materials as geometrically continuously substantial. Such idealizations are useful for most everyday calculations, but may fail entirely for molecules, atoms, photons, and other elementary particles (and vice versa). In many ways, classical mechanics can be considered a mainly macroscopic theory. On the much smaller scale of atoms and molecules, classical mechanics may fail, and the interactions of particles are then described by quantum mechanics. Near the absolute minimum of temperature , the Bose–Einstein condensate exhibits effects on macroscopic scale that demand description by quantum mechanics.
In the quantum measurement problem the issue of what constitutes macroscopic and what constitutes the quantum world is unresolved and possibly unsolvable. The related correspondence principle can be articulated thus: every macroscopic phenomenon can be formulated as a problem in quantum theory. A violation of the correspondence principle would thus ensure an empirical distinction between the macroscopic and the quantum.
In pathology , macroscopic diagnostics generally involves gross pathology , in contrast to microscopic histopathology .
The term "megascopic" is a synonym. "Macroscopic" may also refer to a "larger view", namely a view available only from a large perspective (a hypothetical "macroscope" ). A macroscopic position could be considered the "big picture".
Particle physics , dealing with the smallest physical systems, is also known as high energy physics . Physics of larger length scales, including the macroscopic scale, is also known as low energy physics . Intuitively, it might seem incorrect to associate "high energy" with the physics of very small, low mass–energy systems, like subatomic particles. By comparison, one gram of hydrogen , a macroscopic system, has ~6 × 10²³ times [ 4 ] the mass–energy of a single proton , a central object of study in high energy physics. Even an entire beam of protons circulated in the Large Hadron Collider , a high energy physics experiment, contains ~3.23 × 10¹⁴ protons, [ 5 ] each with 6.5 × 10¹² eV of energy, for a total beam energy of ~2.1 × 10²⁷ eV or ~336.4 MJ , which is still ~2.7 × 10⁵ times lower than the mass–energy of a single gram of hydrogen. Yet, the macroscopic realm is "low energy physics", while that of quantum particles is "high energy physics".
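The arithmetic in this comparison can be checked directly with the figures quoted above (the conversion constants are standard SI values):

```python
# Sanity-check of the beam-energy comparison above, using the quoted LHC
# figures (proton count and per-proton energy) and standard SI constants.
EV_TO_J = 1.602176634e-19   # joules per electronvolt (exact, 2019 SI)
C = 2.99792458e8            # speed of light, m/s

protons = 3.23e14           # protons in an LHC beam
e_per_proton_eV = 6.5e12    # 6.5 TeV per proton

beam_energy_eV = protons * e_per_proton_eV
beam_energy_MJ = beam_energy_eV * EV_TO_J / 1e6

gram_H_J = 1e-3 * C**2      # mass-energy of one gram of matter (E = m c^2)
ratio = gram_H_J / (beam_energy_eV * EV_TO_J)

print(f"beam energy ~ {beam_energy_eV:.2e} eV ~ {beam_energy_MJ:.1f} MJ")
print(f"one gram of hydrogen carries ~ {ratio:.1e} times the beam energy")
```

Running this reproduces the ~2.1 × 10²⁷ eV (~336 MJ) beam energy and the ~2.7 × 10⁵ ratio stated in the text.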
The reason for this is that the "high energy" refers to energy at the quantum particle level . While macroscopic systems indeed have a larger total energy content than any of their constituent quantum particles, there can be no experiment or other observation of this total energy without extracting the respective amount of energy from each of the quantum particles – which is exactly the domain of high energy physics. Daily experiences of matter and the Universe are characterized by very low energy. For example, the photon energy of visible light is about 1.8 to 3.2 eV. Similarly, the bond-dissociation energy of a carbon–carbon bond is about 3.6 eV. This is the energy scale manifesting at the macroscopic level, such as in chemical reactions . Even photons with far higher energy, gamma rays of the kind produced in radioactive decay , have photon energy that is almost always between 10⁵ eV and 10⁷ eV – still two orders of magnitude lower than the mass–energy of a single proton. Radioactive decay gamma rays are considered as part of nuclear physics , rather than high energy physics.
Finally, when reaching the quantum particle level, the high energy domain is revealed. The proton has a mass–energy of ~9.4 × 10⁸ eV ; some other massive quantum particles, both elementary and hadronic , have yet higher mass–energies. Quantum particles with lower mass–energies are also part of high energy physics: they either have a mass–energy far higher than any energy scale manifest at the macroscopic level (such as electrons ), or are equally involved in reactions at the particle level (such as neutrinos ). Relativistic effects , as in particle accelerators and cosmic rays , can further increase the accelerated particles' energy by many orders of magnitude, as well as the total energy of the particles emanating from their collision and annihilation . | https://en.wikipedia.org/wiki/Macroscopic_scale |
The mad scientist (also mad doctor or mad professor ) is a stock character of a scientist who is perceived as "mad, bad and dangerous to know" [ 1 ] or " insane " owing to a combination of unusual or unsettling personality traits and the unabashedly ambitious, taboo or hubristic nature of their experiments. As a motif in fiction, the mad scientist may be villainous ( evil genius ) or antagonistic, benign, or neutral; may be insane , eccentric , or clumsy; and often works with fictional technology or fails to recognise or value common human objections to attempting to play God . Some may have benevolent intentions, even if their actions are dangerous or questionable, which can make them accidental antagonists .
The prototypical fictional mad scientist was Victor Frankenstein , creator of his eponymous monster , [ 2 ] [ 3 ] [ 4 ] who made his first appearance in 1818, in the novel Frankenstein, or the Modern Prometheus by Mary Shelley . Though the novel's title character, Victor Frankenstein, is a sympathetic character, the critical element of conducting experiments that cross "boundaries that ought not to be crossed", heedless of the consequences, is present in Shelley's novel. Frankenstein was trained as both an alchemist and a modern scientist, which makes him the bridge between two eras of an evolving archetype. The book is said to be a precursor of a new genre, science fiction , [ 5 ] [ 6 ] although as an example of gothic horror [ 7 ] [ 8 ] [ 9 ] [ 10 ] it is connected with other antecedents as well.
The year 1896 saw the publication of H. G. Wells 's The Island of Doctor Moreau , in which the titular doctor—a controversial vivisectionist —has isolated himself entirely from civilisation in order to continue his experiments in surgically reshaping animals into humanoid forms , heedless of the suffering he causes. [ 11 ] In 1925, the novelist Alexander Belyaev introduced mad scientists to the Russian people through the novel Professor Dowell's Head , in which the antagonist performs experimental head transplants on bodies stolen from the morgue, and reanimates the corpses.
Fritz Lang 's movie Metropolis ( 1927 ) brought the archetypical mad scientist to the screen in the form of Rotwang , the evil genius whose machines had originally given life to the dystopian city of the title. [ 12 ] Rotwang's laboratory influenced many subsequent movie sets with its electrical arcs , bubbling apparatus, and bizarrely complicated arrays of dials and controls. Portrayed by actor Rudolf Klein-Rogge , Rotwang himself is the prototypically conflicted mad scientist; though he is master of almost mystical scientific power, he remains a slave to his own desires for power and revenge. [ citation needed ] Rotwang's appearance was also influential—the character's shock of flyaway hair, wild-eyed demeanor, and his quasi- fascist [ citation needed ] laboratory garb have all been adopted as shorthand for the mad scientist "look." Even his mechanical right hand has become a mark of twisted scientific power, echoed notably in Stanley Kubrick 's film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb and in the novel The Three Stigmata of Palmer Eldritch (1965) by Philip K. Dick . [ citation needed ]
A recent survey of 1,000 horror films distributed in the UK between the 1930s and 1980s reveals mad scientists or their creations have been the villains of 30 percent of the films; scientific research has produced 39 percent of the threats; and, by contrast, scientists have been the heroes of a mere 11 percent. [ 13 ] Boris Karloff played mad scientists in several of his 1930s and 1940s films.
The mad scientist was a staple of the Republic/Universal/Columbia movie serials of the 1930s and 40s. Examples include:
Mad scientists were most conspicuous in popular culture after World War II . The sadistic human experimentation conducted under the auspices of the Nazis , especially those of Josef Mengele , and the invention of the atomic bomb , gave rise in this period to genuine fears that science and technology had gone out of control. That the scientific and technological build-up during the Cold War brought about increasing threats of unparalleled destruction of the human species did not lessen the impression. Mad scientists frequently figure in science fiction and motion pictures from the period. [ 14 ]
Mad scientists in animation include Professor Frink from The Simpsons , Professor Farnsworth from Futurama , Rick Sanchez from Rick and Morty , Rintaro Okabe from Science Adventure , Dr. Heinz Doofenshmirtz from Phineas and Ferb , and Dr. Lullah from StuGo .
Walt Disney Pictures had Mickey Mouse trying to save his dog Pluto from The Mad Doctor (1933).
Depictions of mad scientists in Warner Brothers' Merrie Melodies / Looney Tunes cartoons include Hair-Raising Hare (1946, based on Peter Lorre ), Birth of a Notion (1947, again based on Lorre), and Water, Water Every Hare (1952, based on Boris Karloff ).
While both Tom and Jerry dabbled in mad science in some of the Hanna-Barbera cartoons, an actual mad scientist did not appear until [ citation needed ] Switchin' Kitten (1961). | https://en.wikipedia.org/wiki/Mad_scientist |
Madapollam / ˌ m æ d ə ˈ p ɒ l ə m / is a soft cotton fabric manufactured from fine yarns with a dense pick laid out in linen weave . Madapollam is used as an embroidery and handkerchief fabric and as a base for fabric printing . [ 1 ] [ 2 ] The equal warp and weft mean that the tensile strength and shrinkage are the same in any two directions at right angles, and that the fabric absorbs liquids such as ink, paint and aircraft dope equally along its X and Y axes.
It was used as the covering for the de Havilland Mosquito [ 3 ] a pioneer of wooden monocoque airframe construction in military aircraft, as well as in other aircraft, where it was tautened and stiffened with aircraft dope . [ 4 ] [ failed verification ]
The cloth takes its name from the eponymous village near Narsapur, West Godavari , Andhra Pradesh, India, where the East India Company had a cloth factory. [ 5 ] | https://en.wikipedia.org/wiki/Madapollam |
The Madelung constant is used in determining the electrostatic potential of a single ion in a crystal by approximating the ions by point charges . It is named after Erwin Madelung , a German physicist. [ 1 ]
Because the anions and cations in an ionic solid attract each other by virtue of their opposing charges, separating the ions requires a certain amount of energy. This energy must be given to the system in order to break the anion–cation bonds. The energy required to break these bonds for one mole of an ionic solid under standard conditions is the lattice energy .
The Madelung constant allows for the calculation of the electric potential V i of the ion at position r i due to all other ions of the lattice
where {\displaystyle r_{ij}=|r_{i}-r_{j}|} is the distance between the i th and the j th ion. In addition,
If the distances r ij are normalized to the nearest neighbor distance r 0 , the potential may be written
with M i being the ( dimensionless ) Madelung constant of the i th ion
Another convention is to base the reference length on the cubic root w of the unit cell volume, which for cubic systems is equal to the lattice constant . Thus, the Madelung constant then reads
The electrostatic energy of the ion at site r i then is the product of its charge with the potential acting at its site
There occur as many Madelung constants M i in a crystal structure as ions occupy different lattice sites. For example, for the ionic crystal NaCl , two Madelung constants arise – one for Na and one for Cl. Since both ions occupy lattice sites of the same symmetry, the two constants are of the same magnitude and differ only in sign. The electrical charges of the Na + and Cl − ions are assumed to be onefold positive and negative, respectively: z Na = 1 and z Cl = −1 . The nearest-neighbour distance amounts to half the lattice constant of the cubic unit cell, {\displaystyle r_{0}={\tfrac {a}{2}}} , and the Madelung constants become
The prime indicates that the term {\displaystyle j=k=\ell =0} is to be left out. Since this sum is conditionally convergent , it is not suitable as a definition of Madelung's constant unless the order of summation is also specified. There are two "obvious" methods of summing this series: by expanding cubes or by expanding spheres. Although the latter is often found in the literature, [ 2 ] it fails to converge , as was shown by Emersleben in 1951. [ 3 ] The summation over expanding cubes converges to the correct value, although very slowly. An alternative summation procedure, presented by Borwein , Borwein and Taylor, uses analytic continuation of an absolutely convergent series. [ 4 ]
There are many practical methods for calculating Madelung's constant using either direct summation (for example, the Evjen method [ 5 ] ) or integral transforms , which are used in the Ewald method . [ 6 ] A fast converging formula for the Madelung constant of NaCl is
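The fast-converging formula itself does not survive in this extract, but one well-known expression of this kind, attributed to Benson, sums sech² terms over pairs of odd integers – whether it is exactly the formula intended above is an assumption. A minimal sketch, checked against the accepted value M ≈ 1.747565 for NaCl:

```python
import math

def madelung_nacl(terms=40):
    """NaCl Madelung constant via a rapidly converging sech^2 series
    (Benson-type formula); j and k run over odd positive integers."""
    s = 0.0
    for j in range(1, terms, 2):
        for k in range(1, terms, 2):
            s += 1.0 / math.cosh(0.5 * math.pi * math.hypot(j, k)) ** 2
    return 12.0 * math.pi * s

M = madelung_nacl()
print(M)  # converges to ~1.747565 after only a handful of terms
```

The sign of the constant depends on whether the reference ion is the cation or the anion, as noted above for NaCl.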
The continuous reduction of M with decreasing coordination number Z for the three cubic AB compounds (when accounting for the doubled charges in ZnS) explains the observed propensity of alkali halides to crystallize in the structure with the highest Z compatible with their ionic radii . Note also how the intermediate position of the fluorite structure between the caesium chloride and sphalerite structures is reflected in the Madelung constants.
It is assumed for the calculation of Madelung constants that an ion's charge density may be approximated by a point charge . This is allowed if the electron distribution of the ion is spherically symmetric. In particular cases, however, when the ions reside on lattice sites of certain crystallographic point groups , the inclusion of higher-order moments, i.e. multipole moments of the charge density, might be required. It is shown by electrostatics that the interaction between two point charges only accounts for the first term of a general Taylor series describing the interaction between two charge distributions of arbitrary shape. Accordingly, the Madelung constant only represents the monopole –monopole term.
The electrostatic interaction model of ions in solids has thus been extended to a point multipole concept that also includes higher multipole moments like dipoles , quadrupoles etc. [ 8 ] [ 9 ] [ 10 ] These concepts require the determination of higher-order Madelung constants, or so-called electrostatic lattice constants. The proper calculation of electrostatic lattice constants has to consider the crystallographic point groups of ionic lattice sites; for instance, dipole moments may only arise on polar lattice sites, i.e. those exhibiting a C 1 , C 1h , C n or C nv site symmetry ( n = 2, 3, 4 or 6). [ 11 ] These second-order Madelung constants turned out to have significant effects on the lattice energy and other physical properties of heteropolar crystals. [ 12 ]
The Madelung constant is also a useful quantity in describing the lattice energy of organic salts. Izgorodina and coworkers have described a generalised method (called the EUGEN method) of calculating the Madelung constant for any crystal structure. [ 13 ] | https://en.wikipedia.org/wiki/Madelung_constant |
In theoretical physics , the Madelung equations , or the equations of quantum hydrodynamics , are Erwin Madelung 's alternative formulation of the Schrödinger equation for a spinless nonrelativistic particle, written in terms of hydrodynamical variables, similar to the Navier–Stokes equations of fluid dynamics . [ 1 ] The derivation of the Madelung equations is similar to the de Broglie–Bohm formulation , which represents the Schrödinger equation as a quantum Hamilton–Jacobi equation . In both cases the hydrodynamic interpretations are not equivalent to the Schrödinger equation without the addition of a quantization condition.
In the fall of 1926, Erwin Madelung reformulated [ 2 ] [ 3 ] Schrödinger's quantum equation in a more classical and visualizable form resembling hydrodynamics. His paper was one of numerous early attempts at different approaches to quantum mechanics, including those of Louis de Broglie and Earle Hesse Kennard . [ 4 ] The most influential of these theories was ultimately de Broglie's through the 1952 work of David Bohm [ 5 ] now called Bohmian mechanics .
In 1994 Timothy C. Wallstrom showed [ 6 ] that an additional ad hoc quantization condition must be added to the Madelung equations to reproduce Schrödinger's theory. His analysis paralleled earlier work [ 7 ] by Takehiko Takabayashi on the hydrodynamic interpretation of Bohmian mechanics. The mathematical foundations of the Madelung equations continue to be a topic of research. [ 8 ]
The Madelung equations are quantum Euler equations : [ citation needed ] {\displaystyle {\begin{aligned}&\partial _{t}\rho _{m}+\nabla \cdot (\rho _{m}\mathbf {v} )=0,\\[4pt]&{\frac {d\mathbf {v} }{dt}}=\partial _{t}\mathbf {v} +\mathbf {v} \cdot \nabla \mathbf {v} =-{\frac {1}{m}}\mathbf {\nabla } (Q+V),\end{aligned}}} where
The Madelung equations answer the question of whether {\displaystyle \mathbf {v} (\mathbf {x} ,t)} obeys the continuity equations of hydrodynamics and, subsequently, what plays the role of the stress tensor . [ 9 ]
The circulation of the flow velocity field along any closed path obeys the auxiliary quantization condition {\textstyle \Gamma \doteq \oint {m\mathbf {v} \cdot d\mathbf {l} }=2\pi n\hbar } for all integers n . [ 10 ] [ 11 ]
The Madelung equations are derived by first writing the wavefunction in polar form [ 12 ] [ 13 ] {\displaystyle \psi (\mathbf {x} ,t)=R(\mathbf {x} ,t)e^{iS(\mathbf {x} ,t)/\hbar },} with {\displaystyle R\geq 0} and {\displaystyle S} both real and {\displaystyle \rho (\mathbf {x} ,t)=\psi (\mathbf {x} ,t)^{*}\psi (\mathbf {x} ,t)=R^{2}(\mathbf {x} ,t)} the associated probability density. Substituting this form into the probability current gives: {\displaystyle \mathbf {J} ={\frac {\hbar }{2mi}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})={\frac {1}{m}}\rho (\mathbf {x} ,t)\nabla S(\mathbf {x} ,t)=\rho (\mathbf {x} ,t)\mathbf {v} (\mathbf {x} ,t),} where the flow velocity is expressed as {\displaystyle \mathbf {v} (\mathbf {x} ,t)={\frac {1}{m}}\nabla S(\mathbf {x} ,t).} However, the interpretation of {\displaystyle \mathbf {v} } as a "velocity" should not be taken too literally, because a simultaneous exact measurement of position and velocity would necessarily violate the uncertainty principle . [ 14 ]
Next, substituting the polar form into the Schrödinger equation {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi (\mathbf {x} ,t)=\left[{\frac {-\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {x} )\right]\psi (\mathbf {x} ,t),} performing the appropriate differentiations, dividing the equation by {\displaystyle e^{iS(\mathbf {x} ,t)/\hbar }} and separating the real and imaginary parts, one obtains a system of two coupled partial differential equations: {\displaystyle {\begin{aligned}&\partial _{t}R(\mathbf {x} ,t)+{\frac {1}{m}}\nabla R(\mathbf {x} ,t)\cdot \nabla S(\mathbf {x} ,t)+{\frac {1}{2m}}R(\mathbf {x} ,t)\Delta S(\mathbf {x} ,t)=0,\\&\partial _{t}S(\mathbf {x} ,t)+{\frac {1}{2m}}\left[\nabla S(\mathbf {x} ,t)\right]^{2}+V(\mathbf {x} )={\frac {\hbar ^{2}}{2m}}{\frac {\Delta R(\mathbf {x} ,t)}{R(\mathbf {x} ,t)}}.\end{aligned}}} The first equation corresponds to the imaginary part of the Schrödinger equation and can be interpreted as the continuity equation . The second equation corresponds to the real part and is also referred to as the quantum Hamilton-Jacobi equation . [ 15 ] Multiplying the first equation by {\displaystyle 2R} and calculating the gradient of the second equation results in the Madelung equations: {\displaystyle {\begin{aligned}&\partial _{t}\rho (\mathbf {x} ,t)+\nabla \cdot \left[\rho (\mathbf {x} ,t)\mathbf {v} (\mathbf {x} ,t)\right]=0,\\&{\frac {d}{dt}}\mathbf {v} (\mathbf {x} ,t)=\partial _{t}\mathbf {v} (\mathbf {x} ,t)+\left[\mathbf {v} (\mathbf {x} ,t)\cdot \nabla \right]\mathbf {v} (\mathbf {x} ,t)=-{\frac {1}{m}}\nabla \left[V(\mathbf {x} )+Q(\mathbf {x} ,t)\right],\end{aligned}}} with quantum potential {\displaystyle Q(\mathbf {x} ,t)=-{\frac {\hbar ^{2}}{2m}}{\frac {\Delta {\sqrt {\rho (\mathbf {x} ,t)}}}{\sqrt {\rho (\mathbf {x} ,t)}}}.}
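Since the gradient of Q + V drives the flow, a stationary state must have Q + V constant in space. As a quick numerical illustration (my own example, not from the source), for the harmonic-oscillator ground state with ħ = m = ω = 1, the quantum potential computed by finite differences satisfies Q(x) + V(x) = 1/2, the ground-state energy, everywhere:

```python
import math

hbar = m = omega = 1.0
dx = 1e-3  # finite-difference step

def sqrt_rho(x):
    # Harmonic-oscillator ground-state amplitude; normalization cancels in Q
    return math.exp(-0.5 * x * x)

def Q(x):
    # Quantum potential: -(hbar^2 / 2m) * (sqrt(rho))'' / sqrt(rho)
    d2 = (sqrt_rho(x - dx) - 2.0 * sqrt_rho(x) + sqrt_rho(x + dx)) / dx**2
    return -(hbar**2 / (2.0 * m)) * d2 / sqrt_rho(x)

def V(x):
    return 0.5 * m * omega**2 * x * x

xs = [i * 0.1 for i in range(-20, 21)]
mu = [Q(x) + V(x) for x in xs]
print(max(abs(v - 0.5) for v in mu))  # deviation from E0 = 1/2 is ~0
```

The flat profile of Q + V is exactly the "constant chemical potential" statement made for equilibrium further below.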
Alternatively, the quantum Hamilton-Jacobi equation can be written in a form similar to the Cauchy momentum equation : {\displaystyle {\frac {d}{dt}}\mathbf {v} =\mathbf {f} -{\frac {1}{\rho _{m}}}\nabla \cdot \mathbf {p} _{Q},} with an external force defined as {\displaystyle \mathbf {f} (\mathbf {x} )=-{\frac {1}{m}}\nabla V(\mathbf {x} ),} and a quantum pressure tensor [ 16 ] {\displaystyle \mathbf {p} _{Q}=-(\hbar /2m)^{2}\rho _{m}\nabla \otimes \nabla \ln \rho _{m}.}
The integral energy stored in the quantum pressure tensor is proportional to the Fisher information , which accounts for the quality of measurements. Thus, according to the Cramér–Rao bound , the Heisenberg uncertainty principle is equivalent to a standard inequality for the efficiency of measurements. [ 17 ] [ 18 ]
The thermodynamic definition of the quantum chemical potential {\displaystyle \mu =Q+V={\frac {1}{\sqrt {\rho _{m}}}}{\widehat {H}}{\sqrt {\rho _{m}}}} follows from the hydrostatic force balance above: {\displaystyle \nabla \mu ={\frac {m}{\rho _{m}}}\nabla \cdot \mathbf {p} _{Q}+\nabla V.} According to thermodynamics, at equilibrium the chemical potential is constant everywhere, which corresponds straightforwardly to the stationary Schrödinger equation. Therefore, the eigenvalues of the Schrödinger equation are free energies, which differ from the internal energies of the system. The particle internal energy is calculated as {\displaystyle \varepsilon =\mu -\operatorname {tr} (\mathbf {p} _{Q}){\frac {m}{\rho _{m}}}=-{\frac {\hbar ^{2}}{8m}}(\nabla \ln \rho _{m})^{2}+U} and is related to the local Carl Friedrich von Weizsäcker correction . [ 19 ] | https://en.wikipedia.org/wiki/Madelung_equations |
In organic chemistry , the Madelung synthesis is a chemical reaction that produces (substituted or unsubstituted) indoles by the intramolecular cyclization of N -phenylamides using strong base at high temperature. The Madelung synthesis was reported in 1912 by Walter Madelung, when he observed that 2-phenylindole was synthesized using N -benzoyl-o- toluidine and two equivalents of sodium ethoxide in a heated, airless reaction. Common reaction conditions include use of sodium or potassium alkoxide as base in hexane or tetrahydrofuran solvents, at temperatures ranging from 200 to 400 °C. A hydrolysis step is also required in the synthesis. The Madelung synthesis is important because it is one of few known reactions that produce indoles from a base-catalyzed thermal cyclization of N-acyl-o-toluidines.
Variants with other bases or additional substituents are possible, but the method is essentially confined to the preparation of 2-alkylindoles (not easily accessible through electrophilic aromatic substitution ) because of the vigorous reaction conditions. A detailed reaction mechanism for the Madelung synthesis follows.
The reaction begins with the extraction of a hydrogen from the nitrogen of the amide substituent and the extraction of a benzylic hydrogen from the substituent ortho to the amide substituent by a strong base. Next, the carbanion resulting from the benzylic hydrogen extraction performs a nucleophilic attack on the electrophilic carbonyl carbon of the amide group. When this occurs, the pi-bond of the amide is converted into a lone pair , creating a negatively charged oxygen . After these initial steps, strong base is no longer required and hydrolysis must occur. The negatively charged nitrogen is protonated to regain its neutral charge, and the oxygen is protonated twice to harbor a positive charge in order to become a good leaving group . A lone pair from the nitrogen forms a pi-bond to expel the positively charged leaving group, and also causes the nitrogen to harbor a positive charge. The final step of the reaction is an elimination reaction (specifically an E2 reaction ), which involves the extraction of the other hydrogen that was once benzylic, before the bicyclic compound was formed, whose electrons are converted into a new pi-bond in the ring system. This allows the pi-bond formed by nitrogen in the preceding step to be converted back into a lone pair on nitrogen to restore nitrogen's neutral charge.
Various techniques have been applied to increase the yield of the desired indole product. When the aromatic ring has electron-donating substituents higher yields are obtained, and the opposite is true when the aromatic ring has electron-withdrawing substituents. [ 1 ] However, when the R5 substituent is an electron-withdrawing substituent, the yield is increased instead of decreased. Additionally, the efficiency of the reaction is also heavily dependent on the bulkiness of the R6 substituent. The bulkier this group, the less efficient is the reaction.
The conditions required for the Madelung synthesis are quite severe. Fortunately, the aforementioned modifications have since been applied to enhance its practicality, working to decrease the required reaction temperature and increase the desired product yield. For example, when electron-donating substituents are placed on the aromatic ring of the N-phenylamide and an electron-withdrawing substituent is substituted at R5, the required temperature for the reaction decreases to approximately 25 °C. [ 1 ] Even more impressively, researchers have discovered that the required temperature for the Madelung synthesis decreases to a range of −20 – 25 °C when butyl lithium (BuLi) and lithium diisopropylamide (LDA) bases are used, and when tetrahydrofuran is used as the solvent. [ 2 ] This particular modification, the use of either of these metal-mediated bases, is termed the Madelung-Houlihan variation. [ 3 ]
The Madelung synthesis has many important applications in chemistry , biochemistry , and industrial chemistry . The reaction proved useful in synthesizing, with an 81% yield, the architecturally complex tremorgenic indole alkaloid (-)-penitrem D, a molecule naturally produced by ergot fungus that causes various muscular and neurological diseases in livestock . [ 4 ] Because this toxin ultimately causes significant economic problems in the livestock industry, understanding how to synthesize and easily decompose alkaloid (-)-penitrem D is of great importance. Nonetheless, the synthesis of such a complex molecule was, by itself, a remarkable feat.
The Madelung synthesis has also proved useful in the synthesis of 2,6-diphenyl-1,5-diaza-1,5-dihydro-s- indacene from 2,5-dimethyl-1,4-phenylenediamine. [ 5 ]
This synthesis was performed without modification to the Madelung synthesis, using sodium ethoxide base at a temperature of 320 – 330 °C. This indacene has been shown to act as an organic light-emitting diode material that may have important applications for low-cost light displays in commercial industry.
The Smith-modified Madelung synthesis, also called the Smith indole synthesis, was discovered in 1986 by Amos Smith and his research team. This synthesis employs a condensation reaction of organolithium reagents, derived from 2-alkyl-N- trimethylsilyl anilines , with esters or carboxylic acids to yield substituted indoles. [ 6 ] This synthesis has proven applicable to a wide variety of substituted anilines, including those with alkyl , methoxy , and halide groups, and can react with non-enolizable esters or lactones to yield N-lithioketamine intermediates. These intermediates then undergo intramolecular heteroatom Peterson olefination to yield indolinines, which then tautomerize to 2-substituted indoles. The Smith indole synthesis is one of the most important modifications to the Madelung synthesis.
The Smith indole synthesis begins by use of two equivalents of an organolithium reagent (as organolithium reagents are very strong bases) to extract a hydrogen from both the alkyl substituent and the nitrogen, resulting in a negative charge on both. The synthesis proceeds with a nucleophilic attack of the carbanion on the electrophilic carbonyl carbon of the ester or carboxylic acid. When this occurs, the pi-bond of the electrophile is converted into a lone pair on the oxygen. These lone pairs are then reconverted back into a pi-bond, resulting in the expulsion of the -OR group. Next, the negatively charged nitrogen performs a nucleophilic attack on the adjacent electrophilic carbonyl carbon, again causing the pi-bond of the electrophile to be converted into a lone pair on the oxygen. This negatively charged oxygen then performs a nucleophilic attack on the silicon atom of the trimethylsilyl (TMS) group, resulting in a tricyclic compound, and a positively charged silicon atom and neutral oxygen atom. The synthesis proceeds through an intramolecular heteroatom Peterson olefination, ultimately resulting in an elimination reaction which expels a TMSO group and forms a pi-bond in the five-membered ring at the nitrogen atom. Then, keto-enol tautomerism occurs, resulting in the desired product. | https://en.wikipedia.org/wiki/Madelung_synthesis |
Madhava's correction term is a mathematical expression attributed to Madhava of Sangamagrama (c. 1340 – c. 1425), the founder of the Kerala school of astronomy and mathematics , that can be used to give a better approximation to the value of the mathematical constant π ( pi ) than the partial sum approximation obtained by truncating the Madhava–Leibniz infinite series for π . The Madhava–Leibniz infinite series for π is {\displaystyle {\frac {\pi }{4}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots }
Taking the partial sum of the first {\displaystyle n} terms we have the following approximation to π : {\displaystyle {\frac {\pi }{4}}\approx s(n)=1-{\frac {1}{3}}+{\frac {1}{5}}-\cdots +{\frac {(-1)^{n-1}}{2n-1}}.}
Denoting the Madhava correction term by {\displaystyle F(n)} , we have the following better approximation to π : {\displaystyle {\frac {\pi }{4}}\approx s(n)+(-1)^{n}F(n).}
Three different expressions have been attributed to Madhava as possible values of {\displaystyle F(n)} , namely, {\displaystyle F_{1}(n)={\frac {1}{4n}},\qquad F_{2}(n)={\frac {n}{4n^{2}+1}},\qquad F_{3}(n)={\frac {n^{2}+1}{4n^{3}+5n}}.}
In the extant writings of the mathematicians of the Kerala school there are some indications regarding how the correction terms {\displaystyle F_{1}(n)} and {\displaystyle F_{2}(n)} have been obtained, but there are no indications on how the expression {\displaystyle F_{3}(n)} has been obtained. This has led to a lot of speculative work on how the formulas might have been derived.
The expressions for {\displaystyle F_{2}(n)} and {\displaystyle F_{3}(n)} are given explicitly in the Yuktibhasha , a major treatise on mathematics and astronomy authored by the Indian astronomer Jyesthadeva of the Kerala school of mathematics around 1530, but that for {\displaystyle F_{1}(n)} appears there only as a step in the argument leading to the derivation of {\displaystyle F_{2}(n)} . [ 1 ] [ 2 ]
The Yuktidipika–Laghuvivrthi commentary of Tantrasangraha , a treatise written by Nilakantha Somayaji , an astronomer/mathematician belonging to the Kerala school of astronomy and mathematics, and completed in 1501, presents the second correction term in the following verses (Chapter 2: Verses 271–274): [ 3 ] [ 1 ]
English translation of the verses: [ 3 ]
In modern notations this can be stated as follows (where {\displaystyle d} is the diameter of the circle):
If we set {\displaystyle p=2n-1} , the last term in the right hand side of the above equation reduces to {\displaystyle 4dF_{2}(n)} .
The same commentary also gives the correction term F 3 ( n ) {\displaystyle F_{3}(n)} in the following verses (Chapter 2: Verses 295–296):
English translation of the verses: [ 3 ]
In modern notations, this can be stated as follows:
where the "multiplier" m = 1 + ( ( p + 1 ) / 2 ) 2 . {\textstyle m=1+\left((p+1)/2\right)^{2}.} If we set p = 2 n − 1 {\displaystyle p=2n-1} , the last term in the right hand side of the above equation reduces to 4 d F 3 ( n ) {\displaystyle 4dF_{3}(n)} .
Let
Then, writing p = 2n + 1, the errors |π/4 − si(n)| have the following bounds: [ 2 ] [ 4 ]
The errors in using these approximations in computing the value of π are
The following table gives the values of these errors for a few selected values of n.
It has been noted that the correction terms F1(n), F2(n), F3(n) are the first three convergents of the following continued fraction expressions: [ 3 ]
The function f(n) that renders the equation
exact can be expressed in the following form: [ 1 ]
The first three convergents of this infinite continued fraction are precisely the correction terms of Madhava. Also, this function f(n) has the following property:
In a paper published in 1990, a group of three Japanese researchers proposed an ingenious method by which Madhava might have obtained the three correction terms. Their proposal was based on two assumptions: Madhava used 355/113 as the value of π, and he used the Euclidean algorithm for division. [ 5 ] [ 6 ]
Writing
and taking π = 355/113, compute the values S(n), express them as fractions with 1 as numerator, and finally ignore the fractional parts in the denominators to obtain approximations:
This suggests the following first approximation to S(n), which is the correction term F1(n) discussed earlier.
Each ignored fraction can in turn be expressed with 1 as numerator, again dropping the fractional part of the denominator, to obtain the next approximation. Two such steps are:
This yields the next two approximations to S(n), exactly the same as the correction terms F2(n) and F3(n) attributed to Madhava. | https://en.wikipedia.org/wiki/Madhava's_correction_term |
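The effect of the correction terms can be checked numerically. The sketch below assumes the standard forms reported in secondary literature on the Kerala school — F1(n) = 1/(4n), F2(n) = n/(4n² + 1) and F3(n) = (n² + 1)/(4n³ + 5n) — applied to the partial sums of the series π/4 = 1 − 1/3 + 1/5 − ⋯:

```python
from math import pi

# Standard forms of Madhava's three correction terms, as reported in the
# literature on the Kerala school (assumed here; the explicit expressions
# were lost in this extraction of the article).
def F1(n): return 1 / (4 * n)
def F2(n): return n / (4 * n**2 + 1)
def F3(n): return (n**2 + 1) / (4 * n**3 + 5 * n)

def partial_sum(n):
    """First n terms of the Madhava-Leibniz series for pi/4."""
    return sum((-1) ** (k - 1) / (2 * k - 1) for k in range(1, n + 1))

def corrected(n, F):
    """Partial sum plus the signed correction term (-1)^n F(n)."""
    return partial_sum(n) + (-1) ** n * F(n)

n = 10
errors = [abs(pi / 4 - v) for v in
          (partial_sum(n), corrected(n, F1), corrected(n, F2), corrected(n, F3))]
# Each successive correction term reduces the error by orders of magnitude,
# even though the uncorrected series converges extremely slowly.
```

With only ten terms of the series, the third correction term already gives π/4 to about eight decimal places, illustrating why the corrections were so valuable to the Kerala school.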
Madhava's sine table is the table of trigonometric sines constructed by the 14th-century Kerala mathematician-astronomer Madhava of Sangamagrama (c. 1340 – c. 1425). The table lists the jya-s or Rsines of the twenty-four angles from 3.75° to 90° in steps of 3.75° (1/24 of a right angle, 90°). Rsine is just the sine multiplied by a selected radius and given as an integer. In this table, as in Aryabhata's earlier table, R is taken as 21600 ÷ 2π ≈ 3437.75.
The table is encoded in the letters of the Sanskrit alphabet using the Katapayadi system , giving entries the appearance of the verses of a poem.
Madhava's original work containing the table has not been found. The table is reproduced in the Aryabhatiyabhashya of Nilakantha Somayaji [ 1 ] (1444–1544) and also in the Yuktidipika/Laghuvivrti commentary of Tantrasamgraha by Sankara Variar (c. 1500–1560). [ 2 ] : 114–123
The verses below are given as in Cultural foundations of mathematics by C.K. Raju. [ 2 ] : 114–123 They are also given in the Malayalam Commentary of Karanapaddhati by P.K. Koru [ 3 ] but slightly differently.
The verses are:
श्रेष्ठं नाम वरिष्ठानां हिमाद्रिर्वेदभावनः । तपनो भानु सूक्तज्ञो मध्यमं विद्धि दोहनम् ॥ १ ॥ धिगाज्यो नाशनं कष्टं छन्नभोगाशयाम्बिका । मृगाहारो नरेशोयं वीरो रणजयोत्सुकः ॥ २ ॥ मूलं विशुद्धं नाळस्य गानेषु विरळा नराः । अशुद्धिगुप्ता चोरश्रीः शङ्कुकर्णो नगेश्वरः ॥ ३ ॥ तनुजो गर्भजो मित्रं श्रीमानत्र सुखी सखे । शशी रात्रौ हिमाहारौ वेगज्ञः पथि सिन्धुरः ॥ ४ ॥ छाया लयो गजो नीलो निर्मलो नास्ति सत्कुले । रात्रौ दर्पणमभ्राङ्गं नागस्तुङ्गनखो बली ॥ ५ ॥ धीरो युवा कथालोलः पूज्यो नारीजनैर्भगः । कन्यागारे नागवल्ली देवो विश्वस्थली भृगुः ॥ ६ ॥ तत्परादिकलान्तास्तु महाज्या माधवोदिताः । स्वस्वपूर्वविशुद्धे तु शिष्टास्तत्खण्डमौर्विकाः ॥ ७ ॥
The quarters of the first six verses represent entries for the twenty-four angles from 3.75° to 90° in steps of 3.75° (first column). The second column contains the Rsine values encoded as Sanskrit words (in Devanagari). The third column contains the same in ISO 15919 transliterations. The fourth column contains the numbers decoded into arcminutes, arcseconds, and arcthirds in modern numerals. The fifth column gives the modern values, scaled by the traditional "radius" (21600 ÷ 2π) and computed with the modern value of π, expressed to the same precision of arcthirds.
The last verse means: “These are the great R-sines as said by Madhava, comprising arcminutes, seconds and thirds. Subtracting from each the previous will give the R-sine-differences.”
A comparison shows that Madhava's values are accurate to the declared precision of thirds, except for Rsin(15°), which arguably should have been rounded up to 889′45″16‴ instead.
In the Katapayadi system the digits are written in reverse order; for example, the literal entry corresponding to 15° is 51549880, which is reversed and then read as 0889′45″15‴. The leading 0 carries no numerical value and serves only the metre of the poem.
Leaving aside why the value R = 21600 ÷ 2π was chosen, the simplest way to relate the jya tables to our modern concept of sine tables is as follows:
Even today sine tables are given as decimals to a certain precision. If sin(15°) is given as 0.2588, it means the rational 2588 ÷ 10000 is a good approximation of the actual infinite-precision number. The only difference is that in earlier days fractions had not been standardized on decimal values (powers of ten as denominators); other denominators were used, based on other considerations (which are not discussed here).
Hence the sine values represented in the tables may simply be taken as approximated by the given integer values divided by the R chosen for the table.
Another possible point of confusion is the use of angle measures such as arcminutes in expressing the R-sines. Modern sines are unitless ratios; jya-s or R-sines are the same multiplied by a measure of length or distance. However, since these tables were mostly used for astronomy, where distance on the celestial sphere is expressed in angle measures, these values are given likewise. The unit is not really important and need not be taken too seriously, as the value is ultimately used as part of a ratio in which the unit cancels out.
However, this also explains the use of sexagesimal subdivisions in Madhava's refinement of the earlier table of Aryabhata. Instead of choosing a larger R, he gave the extra precision he determined on top of the earlier given minutes by using seconds and thirds. As before, these may simply be taken as a different way of expressing fractions and not necessarily as angle measures.
Consider some angle whose measure is A. Consider a circle of unit radius and center O. Let the arc PQ of the circle subtend an angle A at the center O. Drop the perpendicular QR from Q to OP; then the length of the line segment RQ is the value of the trigonometric sine of the angle A. Let PS be an arc of the circle whose length is equal to the length of the segment RQ. For various angles A, Madhava's table gives the measures of the corresponding angles ∠POS in arcminutes, arcseconds and sixtieths of an arcsecond.
As an example, let A be an angle whose measure is 22.50°. In Madhava's table, the entry corresponding to 22.50° is the measure in arcminutes, arcseconds and sixtieths of an arcsecond of the angle whose radian measure is the value of sin 22.50°, which is 0.3826834.
For an angle whose measure is A , let
Then:
Each of the lines in the table specifies eight digits. Let the digits corresponding to angle A (read from left to right) be:
Then according to the rules of the Katapayadi system they should be taken from right to left and we have:
The value of the above angle B expressed in radians will correspond to the sine value of A .
As said earlier, this is the same as dividing the encoded value by the taken R value:
The table lists the following digits corresponding to the angle A = 45.00°:
This yields the angle with measure:
From which we get:
The value of the sine of A = 45.00° as given in Madhava's table is then just B converted to radians:
Evaluating the above, one can find that sin 45° is 0.70710681… This is accurate to 6 decimal places.
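The decoding steps above can be sketched in code. A minimal sketch: the 15° literal 51549880 is taken from the text, while the 45° entry 2430′51″15‴ is an assumed reading of the table (it reproduces the 0.70710681 quoted above); dividing by R recovers modern sine values:

```python
import math

R = 21600 / (2 * math.pi)   # traditional radius, ~3437.75 arcminutes

def table_sine(minutes, seconds, thirds):
    """Convert a table entry in arcminutes/seconds/thirds to a modern
    unitless sine by dividing out the radius R."""
    return (minutes + seconds / 60 + thirds / 3600) / R

def decode_literal(digits):
    """Reverse the 8 Katapayadi digits and split as minutes/seconds/thirds."""
    r = digits[::-1]
    return int(r[:4]), int(r[4:6]), int(r[6:8])

sin15 = table_sine(*decode_literal("51549880"))   # 0889'45''15''' for 15 deg
sin45 = table_sine(2430, 51, 15)                  # assumed entry for 45 deg
```

Both decoded values agree with the modern sines to about six decimal places, matching the accuracy claimed for the table.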
No work of Madhava detailing the methods used by him for the computation of the sine table has survived. However from the writings of later Kerala mathematicians including Nilakantha Somayaji ( Tantrasangraha ) and Jyeshtadeva ( Yuktibhāṣā ) that give ample references to Madhava's accomplishments, it is conjectured that Madhava computed his sine table using the power series expansion of sin x : | https://en.wikipedia.org/wiki/Madhava's_sine_table |
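As an illustration of the conjecture, a short truncation of the power series sin x = x − x³/3! + x⁵/5! − ⋯ already suffices: six terms reproduce the sine over a quadrant to better than the table's precision of thirds (one arcthird is about 8 × 10⁻⁸ of R). This is a sketch of the modern series, not a reconstruction of Madhava's actual computation:

```python
import math

def madhava_sin(x, terms=6):
    """Truncated power series sin x = x - x^3/3! + x^5/5! - ... ,
    the expansion conjectured to underlie Madhava's table."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Worst-case deviation from the modern sine over 0..90 degrees; by the
# alternating-series bound it is below the first omitted term, ~5.7e-8.
worst = max(abs(madhava_sin(math.radians(a)) - math.sin(math.radians(a)))
            for a in range(91))
```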
The Madukkarai Wall is a historic border fortification demarcating the boundaries of the three ancient kingdoms of Chera , Chola , and Pandya . The wall was supposedly erected by the goddess Sellandiyamman and may have been built as early as the 1st century AD.
The Madukkarai wall [ note 1 ] is a stone and earthen fortification with a parallel embankment in central Tamil Nadu . The wall was built during the pre-Sangam period to demarcate the trijunction of the Chera , Chola and Pandya kingdoms. People of this region believe that the Goddess Sellandiyamman [ note 2 ] miraculously erected the wall overnight to prevent border disputes.
The border between the Chera country (i.e. Kongu Nadu) and Chola Nadu is demarcated by the Karaipottanar river. [ 1 ]
The Karaipottanar river is a tributary of the Kaveri river to the north. The temple of Madukkarai Sellandiyamman at Mayanur (Tamil Nadu) [ 2 ] is the culminating point of the wall. The wall ends at Madurai Meenakshi Amman Temple . [ 3 ]
The wall is described as of historic importance in the 1907 (British era) gazetteer of Madras: [ 4 ]
The wall is of unknown age, but could be as old as the 1st century AD:
The wall still demarcates the boundary between the Karur and Kulithalai taluks (districts) [ note 3 ] and after that, between the Dindigul and Tiruchirappalli districts. | https://en.wikipedia.org/wiki/Madukkarai_Wall |
The mafia hypothesis posits that brood parasite eggs are accepted by the host out of fear of retaliation (nest destruction) from the brood parasite, in an example of coevolution .
Amotz Zahavi proposed it in 1979, and it was tested by Manuel Soler in 1995. [ 1 ]
Maria Abou Chakra, of the Max Planck Institute for Evolutionary Biology, with others, successfully mathematically modeled the mafia hypothesis as a viable strategy, conditional on two factors: [ 2 ] [ 3 ]
They found that the proportions of mafia vs non-mafia brood parasites and of unconditionally vs conditionally accepting hosts cycled over time: if all hosts unconditionally accepted parasite eggs, then it would not be worth the parasites' effort to revisit the nest, i.e. to be 'mafia'. If sufficiently few parasites were mafia, then accepting parasite eggs only after a nest destruction would be best for the hosts. As a result, the mafia proportion of parasites would increase, leading again to unconditional acceptance by hosts, and so on. [ 4 ]
Nest destruction also occurs as a result of 'farming', attempts to synchronize the host's breeding schedule with the parasite's. [ 5 ] [ 6 ] It resembles the mafia strategy in that both involve depredation of nests. [ 7 ]
The farmer strategy complicates the mafia/non, un/conditional acceptance model, as in the case of farmers, rejection enters as a viable third host strategy. [ 8 ] | https://en.wikipedia.org/wiki/Mafia_hypothesis |
Mageba (stylised as mageba) is a civil engineering service provider [ 1 ] and manufacturer of bridge bearings, expansion joints, seismic protection and structural monitoring devices for the construction industry. [ 2 ] The company is headquartered in Bülach, Switzerland, and operates through offices in Europe, the Americas and Asia-Pacific. [ 3 ] In all, mageba has official representations in over 40 countries.
mageba was founded in 1963 in Bülach, Switzerland. By 1969 the company was designing and manufacturing a variety of bridge bearings and expansion joints, [ 4 ] and had heavy-duty testing facilities in operation. [ 5 ] In 2004 the company merged with Proceq. [ 6 ] The resulting company continued to design and manufacture bridge bearings and expansion joints. [ 7 ] [ 8 ]
In April 2011, mageba USA LLC was founded with offices in New York and San Jose.
By then mageba had production facilities in Fussach (Austria), Shanghai (China), and offices in Uslar and Stuttgart (Germany) and Cugy (Switzerland). By 2012, the company had four facilities in India, [ 9 ] and was also operating in Russia, South Korea, and Turkey.
mageba has supplied bearings and expansion joints to more than 10,000 bridges around the world, [ 10 ] including the Audubon Bridge in Louisiana, USA, the Incheon Bridge in South Korea, [ 11 ] the Golden Ears Bridge in British Columbia, Canada (2009), the Bandra–Worli Sea Link in India, [ 12 ] [ 13 ] the Øresund Bridge, which has linked Denmark and Sweden since 2000, and the Tsing Ma Bridge in Hong Kong. [ 14 ]
mageba also installs and services bridge components. [ 15 ]
A recent focus of activities of the firm has been the provision of structure surveillance services, including installation and remote monitoring of sensors, inspections and testing. [ 16 ] [ 17 ] | https://en.wikipedia.org/wiki/Mageba_(Swiss_company) |
The Magellanic Cloud Emission Line Survey ( MCELS ) is a joint project of Cerro Tololo Inter-American Observatory ( Chile ) and the University of Michigan using the CTIO Curtis/Schmidt Telescope. The main goal of the project is to trace the ionized gas in the Magellanic Clouds using narrow-band filters ([S II], Hα and [O III]) and investigate the physical properties of the interstellar medium of these galaxies. Those emission lines are produced by different astrophysical objects and processes. [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Magellanic_Cloud_Emission-line_Survey |
The Magellanic Premium , also known as the Magellanic Gold Medal and Magellanic Prize is awarded for major contributions in the field of navigation (whether by sea, air, or in space), astronomy, or natural philosophy.
The Premium was established in 1786 through a grant by Jean-Hyacinthe Magellan ( Portuguese : João Jacinto de Magalhães ). Benjamin Franklin , then President of the American Philosophical Society , accepted it and established the terms of reference under which it would be given.
Since Magellan first offered the Premium, the APS has awarded it on only 36 occasions (as of 2021): twelve for navigation, twelve for natural philosophy, and eleven for astronomy.
Source: American Philosophical Society | https://en.wikipedia.org/wiki/Magellanic_Premium |
The magic angle is a precisely defined angle, the value of which is approximately 54.7356°. The magic angle is the angle at which the second-order Legendre polynomial vanishes, P2(cos θ) = 0, and so any interaction which depends on this second-order Legendre polynomial vanishes at the magic angle. This property makes the magic angle of particular importance in magic angle spinning solid-state NMR spectroscopy. In magnetic resonance imaging, structures with ordered collagen, such as tendons and ligaments, oriented at the magic angle may appear hyperintense in some sequences; this is called the magic angle artifact or effect.
The magic angle θ m is
θm = arccos(1/√3) = arctan(√2) ≈ 0.95532 rad ≈ 54.7°,
where arccos and arctan are the inverse cosine and tangent functions respectively. OEIS : A195696
θ m is the angle between the space diagonal of a cube and any of its three connecting edges, see image.
The magic angle is also half of the opening angle formed when a cube is rotated about its space-diagonal axis; that full opening angle, arccos(−1/3) = 2 arctan √2 ≈ 109.4712°, is the double magic angle. It is directly related to tetrahedral molecular geometry, being the angle between two vertices and the exact center of a tetrahedron (the edge central angle, also known as the tetrahedral angle).
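The equivalent definitions can be verified in a few lines of code (a minimal sketch; P2 is the second-order Legendre polynomial mentioned above):

```python
import math

theta_m = math.acos(1 / math.sqrt(3))   # the magic angle, in radians

def P2(x):
    """Second-order Legendre polynomial P2(x) = (3x^2 - 1)/2."""
    return (3 * x**2 - 1) / 2

# The equivalent definitions agree: arccos(1/sqrt(3)) == arctan(sqrt(2)).
assert math.isclose(theta_m, math.atan(math.sqrt(2)))

# P2(cos theta) vanishes at the magic angle (~54.7356 degrees) ...
assert abs(P2(math.cos(theta_m))) < 1e-12
# ... and the "double magic" (tetrahedral) angle is arccos(-1/3) = 2*theta_m.
assert math.isclose(math.acos(-1 / 3), 2 * theta_m)
```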
In nuclear magnetic resonance (NMR) spectroscopy, three prominent nuclear magnetic interactions, dipolar coupling , chemical shift anisotropy (CSA), and first-order quadrupolar coupling , depend on the orientation of the interaction tensor with the external magnetic field.
By spinning the sample around a given axis, their average angular dependence becomes:
⟨3 cos²θ − 1⟩ = (1/2) (3 cos²θr − 1) (3 cos²β − 1),
where θ is the angle between the principal axis of the interaction and the magnetic field, θ r is the angle of the axis of rotation relative to the magnetic field and β is the (arbitrary) angle between the axis of rotation and principal axis of the interaction.
For dipolar couplings, the principal axis corresponds to the internuclear vector between the coupled spins; for the CSA, it corresponds to the direction with the largest deshielding; for the quadrupolar coupling, it corresponds to the z -axis of the electric-field gradient tensor.
The angle β cannot be manipulated as it depends on the orientation of the interaction relative to the molecular frame and on the orientation of the molecule relative to the external field. The angle θ r , however, can be decided by the experimenter. If one sets θ r = θ m ≈ 54.7° , then the average angular dependence goes to zero. Magic angle spinning is a technique in solid-state NMR spectroscopy which employs this principle to remove or reduce the influence of anisotropic interactions, thereby increasing spectral resolution.
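The averaging can be checked numerically. The sketch below assumes the standard spherical-geometry relation cos θ(φ) = cos θr cos β + sin θr sin β cos φ for an interaction axis precessing about the rotor axis, and compares the average over one revolution against the factorized form with the 1/2 prefactor:

```python
import math

def avg_anisotropy(theta_r, beta, steps=100_000):
    """Average of (3 cos^2 theta - 1) over one rotor revolution; the
    interaction axis lies at angle beta from the rotation axis, which in
    turn lies at angle theta_r from the magnetic field."""
    total = 0.0
    for i in range(steps):
        phi = 2 * math.pi * i / steps
        cos_t = (math.cos(theta_r) * math.cos(beta)
                 + math.sin(theta_r) * math.sin(beta) * math.cos(phi))
        total += 3 * cos_t ** 2 - 1
    return total / steps

theta_m = math.acos(1 / math.sqrt(3))
# The average factorizes as (1/2)(3 cos^2 theta_r - 1)(3 cos^2 beta - 1),
# so spinning at theta_r = theta_m nulls the anisotropy for every beta.
```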
For a time-independent interaction, i.e. heteronuclear dipolar couplings, CSA and first-order quadrupolar couplings, the anisotropic component is greatly reduced and almost suppressed in the limit of fast spinning, i.e. when the spinning frequency is greater than the width of the interaction.
The averaging is only close to zero in a first-order perturbation theory treatment; higher order terms cause allowed frequencies at multiples of the spinning frequency to appear, creating spinning side-bands in the spectra.
Time-dependent interactions, such as homonuclear dipolar couplings, are more difficult to average to their isotropic values by magic angle spinning; a network of strongly coupled spins will produce a mixing of spin states during the course of the sample rotation, interfering with the averaging process.
The magic angle artifact refers to the increased signal observed when MRI sequences with short echo time (TE) (e.g., T 1 or proton density spin-echo sequences) are used to image tissues with well-ordered collagen fibers in one direction (e.g., tendon or articular hyaline cartilage). [ 1 ] This artifact occurs when the angle such fibers make with the magnetic field is equal to θ m .
Example: This artifact comes into play when evaluating the rotator cuff tendons of the shoulder. The magic angle effect can create the appearance of supraspinatus tendinitis .
To achieve optimal loading in a straight rubber hose the fibres must be positioned under an angle of approximately 54.7 angular degrees, also referred to as the magic angle. The magic angle of 54.7 exactly balances the internal-pressure-induced longitudinal stress and the hoop (circumferential) stress. [ 2 ] | https://en.wikipedia.org/wiki/Magic_angle |
The magic angle is a particular value of the collection angle of an electron microscope at which the measured energy-loss spectrum "magically" becomes independent of the tilt angle of the sample with respect to the beam direction. The magic angle is not uniquely defined for isotropic samples, but the definition is unique in the (typical) case of small angle scattering on materials with a "c-axis", such as graphite .
The "magic" angle depends on both the incoming electron energy (which is typically fixed) and the energy loss suffered by the electron. The ratio of the magic angle θM to the characteristic angle θE is roughly independent of the energy loss and roughly independent of the particular type of sample considered.
For the case of a relativistic incident electron, the "magic" angle is defined by the equality of two different functions (denoted below by A and C) of the collection angle α:
A(α) = (1/2) ∫₀^α² x dx / (x + θE²(1 − β²))²
and
C(α) = θE²(1 − β²)² ∫₀^α² dx / (x + θE²(1 − β²))²
where β is the speed of the incoming electron divided by the speed of light (N.B., the symbol β is also often used in the older literature to denote the collection angle instead of α).
Of course, the above integrals may easily be evaluated in terms of elementary functions , but they are presented as above because in the above form it is easier to see that the former integral is due to momentum transfers which are perpendicular to the beam direction, whereas the latter is due to momentum transfers parallel to the beam direction.
Using the above definition, it is then found that θM ≈ 2θE. | https://en.wikipedia.org/wiki/Magic_angle_(EELS) |
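Both integrals have elementary closed forms, so the crossing of A and C can be found by bisection. The sketch below works in units where θE = 1 and treats β as a free parameter; it assumes the formulas exactly as written above, and the numerical value of the crossing therefore depends on the conventions adopted for θE and β:

```python
import math

def A(alpha, theta_E, beta):
    """Closed form of A(alpha), the perpendicular momentum-transfer term:
    integral of x/(x+c)^2 is ln((a+c)/c) + c/(a+c) - 1, with a = alpha^2."""
    a, c = alpha ** 2, theta_E ** 2 * (1 - beta ** 2)
    return 0.5 * (math.log((a + c) / c) + c / (a + c) - 1)

def C(alpha, theta_E, beta):
    """Closed form of C(alpha), the parallel momentum-transfer term:
    integral of 1/(x+c)^2 is 1/c - 1/(a+c)."""
    a, c = alpha ** 2, theta_E ** 2 * (1 - beta ** 2)
    return theta_E ** 2 * (1 - beta ** 2) ** 2 * (1 / c - 1 / (a + c))

def magic_angle(theta_E, beta, lo=1e-6, hi=100.0):
    """Bisect for the collection angle where A and C cross: C dominates at
    small angles while A grows without bound, so the crossing is unique."""
    f = lambda x: A(x, theta_E, beta) - C(x, theta_E, beta)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```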
In solid-state NMR spectroscopy, magic-angle spinning (MAS) is a technique routinely used to produce better-resolution NMR spectra. MAS NMR involves spinning the sample (usually at a frequency of 1 to 130 kHz) at the magic angle θm (ca. 54.74°, where cos²θm = 1/3) with respect to the direction of the magnetic field.
Three main interactions responsible in solid state NMR ( dipolar , chemical shift anisotropy , quadrupolar ) often lead to very broad and featureless NMR lines. However, these three interactions in solids are orientation-dependent and can be averaged to some extent by MAS:
In solution-state NMR, most of these interactions are averaged out because of the rapid time-averaged molecular motion that occurs due to the thermal energy (molecular tumbling).
The spinning of the sample is achieved via an impulse air turbine mechanism, where the sample tube is lifted with a frictionless compressed gas bearing and spun with a gas drive. Sample tubes are hollow cylinders coming in a variety of outer diameters ranging from 0.70 to 7 mm, mounted with a turbine cap. The rotors are typically made from zirconium oxide, although other ceramic materials ( silicon nitride ) or polymers ( poly(methyl methacrylate) (PMMA), polyoxymethylene (POM)) can be found. Removable caps close the ends of the sample tube. They are made from a range of materials typically Kel-F , Vespel , or zirconia and boron nitride for an extended temperature range.
Magic-angle spinning was first described in 1958 by Edward Raymond Andrew , A. Bradbury, and R. G. Eades [ 1 ] and independently in 1959 by I. J. Lowe. [ 2 ] The name "magic-angle spinning" was coined in 1960 by Cornelis J. Gorter at the AMPERE congress in Pisa. [ 3 ]
HRMAS is usually applied to solutions and gels where dipole-dipole interactions are insufficiently averaged by the intermediate molecular motion. HRMAS can dramatically average out residual dipolar interactions and result in spectra with linewidths similar to solution-state NMR. HRMAS links the gap between solution-state and solid-state NMR, and enable the use of solution-state experiments [ 4 ]
HRMAS and its medical research application was first described in a 1997 study of human brain tissues from a neurodegenerative disorder. [ 5 ]
Use of Magic Angle Spinning has been extended from solid-state to liquid (solution) NMR. [ 6 ]
The magic-angle-turning (MAT) technique introduced by Gan employs slow (approximately 30 Hz) rotation of a powdered sample at the magic angle, in concert with pulses synchronized to 1/3 of the rotor period, to obtain isotropic-shift information in one dimension of a 2D spectrum. [ 7 ]
Rather than using cylindrical rotors, spinning spheres can be spun stably at the magic angle, which can be used to increase the filling factor of the coils, hence improve the sensitivity. [ 8 ] Magic angle spinning spheres allow stable MAS with faster spinning rates. [ 9 ]
There are significant advantages to using MAS NMR in structural biology. Magic angle spinning can be used to characterize large insoluble systems, including biological assemblies and intact viruses, that cannot be studied with other methods. [ 10 ] | https://en.wikipedia.org/wiki/Magic_angle_spinning |
A magic eye tube or tuning indicator , in technical literature called an electron-ray indicator tube , [ 1 ] is a vacuum tube which gives a visual indication of the amplitude of an electronic signal, such as an audio output, radio-frequency signal strength, or other functions. [ 1 ] The magic eye (also called a cat's eye , or tuning eye in North America) is a specific type of such a tube with a circular display similar to the EM34 illustrated. Its first broad application was as a tuning indicator in radio receivers , to give an indication of the relative strength of the received radio signal, to show when a radio station was properly tuned in. [ 1 ]
The magic eye tube was the first in a line of development of cathode ray type tuning indicators developed as a cheaper alternative to needle movement meters. It was not until the 1960s that needle meters were made inexpensively enough in Japan to displace indicator tubes. [ 2 ] Tuning indicator tubes were used in vacuum tube receivers from around 1936 to 1980, before vacuum tubes were replaced by transistors in radios. [ 3 ] An earlier tuning aid which the magic eye replaced was the "tuneon" neon lamp . [ 3 ] [ 4 ]
The magic eye tube (or valve) for tuning radio receivers was invented in 1932 by Allen B. DuMont (who spent most of the 1930s improving the lifetime of cathode ray tubes , and ultimately formed the DuMont Television Network ). [ 5 ] [ 6 ] [ 7 ]
The RCA 6E5 from 1935 was the first commercial tube. [ 8 ] [ 9 ]
The earlier types were end-viewed (EM34), usually with an octal or side-contact base. Later developments featured a smaller side-viewed noval B9A based all-glass type with either a fan type display or a band display (EM84). The end-viewed version had a round cone-shaped fluorescent screen together with the black cap that shielded the red light from the cathode/heater assembly. This design prompted the contemporary advertisers to coin the term magic eye, a term still used.
There was also a sub-miniature version with wire ends ( Mullard DM70/DM71, Mazda 1M1/1M3, GEC/Marconi Y25) intended for battery operation, used in one Ever Ready AM/FM battery receiver with push-pull output, as well as a small number of AM/FM mains receivers, which lit the valve from the 6.3 V heater supply via a 220 ohm resistor or from the audio output valve's cathode bias. Some reel-to-reel tape recorders also used the DM70/DM71 to indicate recording level, including a transistorized model with the valve lit from the bias-oscillator voltage.
The function of a magic eye can be achieved with modern semiconductor circuitry and optoelectronic displays. The high voltages (100 volts or more) required by these tubes are no longer present in modern devices, so the magic eye tube is obsolete.
A magic eye tube is a miniature cathode ray tube, usually with a built-in triode signal amplifier. It usually glows bright green (occasionally yellow in some very old types, e.g., EM4), and the glowing ends grow to meet in the middle as the voltage on a control grid increases. It is used in a circuit that drives the grid with a voltage that changes with signal strength; as the tuning knob is turned, the gap in the eye becomes narrowest when a station is tuned in correctly.
Internally, the device is a vacuum tube consisting of two plate electrode assemblies, one creating a triode amplifier and the other a display section consisting of a conical-shaped target anode coated with zinc silicate or similar material. The display section's anode is usually directly connected to the receiver's full positive high tension (HT) voltage, whilst the triode-anode is usually (internally) connected to a control electrode mounted between the cathode and the target-anode, and externally connected to positive HT via a high-value resistor, typically 1 megaohm.
When the receiver is switched on but not tuned to a station, the target-anode glows green due to electrons striking it, with the exception of the area by the internal control-electrode. This electrode is typically 150–200 V negative with respect to the target-anode, repelling electrons from the target in this region, causing a dark sector to appear on the display.
The control-grid of the triode-amplifier section is connected to a point where a negative control voltage dependent on signal strength is available, e.g. the automatic gain control (AGC) line in an AM superheterodyne receiver, or the limiter stage or FM detector in an FM receiver. As a station is tuned in the triode-grid becomes more negative with respect to the common cathode.
The purpose of magic eye tubes in radio sets is to help with accurate tuning to a station; the tube makes peaks in signal strength more obvious by producing a visual indication, which is better than using the ear alone. The eye is especially useful because the AGC action tends to increase the audio volume of a mistuned station, so the volume varies relatively little as the tuning knob is turned. The tuning eye was driven by the AGC voltage rather than the audio signal.
When, in the early 1950s, FM radio sets were made available on the UK market, there were many different types of magic eye tubes with differing displays, but they all worked the same way. Some had a separate small display to light up indicating a stereo signal on FM.
The British Leak company used an EM84 indicator as a very precise tuning-indicator in their Troughline FM tuner series, by mixing the AGC voltages from the two limiter valve grids at the indicator sensing-grid. By this means accurate tuning was indicated by a fully open sharp shadow, whilst off-tune the indicator produced a partially closed shadow.
In U.S.-made radios, the first type issued was the type 6E5 with a single pie-shaped image, introduced by RCA and used in their 1936 line of radios. Other radio makers used the 6E5 as well until, soon after, the less sensitive type 6G5 was introduced. Also, a type 6AB5 (also known as 6N5) tube with lower plate voltage was introduced for series-filament radios. Type number 6U5 was similar to the 6G5 but had a straight glass envelope. Zenith Radio used a type 6T5 in their 1938 model year radios with a "Target tuning" indicator (resembling a camera iris), but it was abandoned after a year, with Ken-Rad manufacturing a replacement type. All these types use a 6-pin base with two larger pins for filament connection.
Several other "eye tubes" were introduced in U.S. radios and also used in test equipment and audio gear, including the octal-based types 6AF6GT, 6AD6GT and 1629. The latter was an industrial type with 12 volt filament looking identical to type 6E5. Later U.S. made audio gear used European tubes like EM80 (equivalent to 6BR5), EM81 (6DA5), EM84 (6FG6), EM85 (6DG7) or EM87 (6HU6).
Magic eye tubes were used as the recording level indicator for tape recorders (for example in the Echolette [ de ] ), and it is also possible to use them (in a specially adapted circuit) as a means of rough frequency comparison as a simpler alternative to Lissajous figures .
A magic eye tube acts as an inexpensive uncalibrated (and not necessarily linear ) voltage indicator, and can be used wherever an indication of voltage is needed, saving the cost of a more accurate calibrated meter .
At least one design of capacitance bridge uses this type of tube to indicate that the bridge is balanced.
The magic eye tube appears on the cover of My Morning Jacket's 2011 album Circuital. The tube is shown almost fully lit.

Source: https://en.wikipedia.org/wiki/Magic_eye_tube
The concept of magic numbers in the field of chemistry refers to a specific property (such as stability) for only certain representatives among a distribution of structures. It was first recognized by inspecting the intensity of mass-spectrometric signals of rare gas cluster ions. [ 1 ] Then, the same effect was observed with sodium clusters. [ 2 ] [ 3 ]
When a gas condenses into clusters of atoms, the number of atoms per cluster typically ranges from a few to hundreds. However, the size distribution shows peaks at specific cluster sizes, deviating from a purely statistical distribution. It was therefore concluded that clusters of these specific sizes dominate because of their exceptional stability. The concept was also successfully applied to explain the monodisperse occurrence of thiolate-protected gold clusters ; here the outstanding stability of specific cluster sizes is connected with their respective electronic configurations.
The term magic numbers is also used in the field of nuclear physics . In this context, magic numbers refer to a specific number of protons or neutrons that forms complete nucleon shells . [ 4 ]
Source: https://en.wikipedia.org/wiki/Magic_number_(chemistry)
In nuclear physics , a magic number is a number of nucleons (either protons or neutrons , separately) such that they are arranged into complete shells within the atomic nucleus . As a result, atomic nuclei with a "magic" number of protons or neutrons are much more stable than other nuclei. The seven most widely recognized magic numbers as of 2019 are 2, 8, 20, 28, 50, 82, and 126 .
For protons, this corresponds to the elements helium , oxygen , calcium , nickel , tin , lead , and the hypothetical unbihexium , although 126 is so far only known to be a magic number for neutrons. Atomic nuclei consisting of such a magic number of nucleons have a higher average binding energy per nucleon than one would expect based upon predictions such as the semi-empirical mass formula and are hence more stable against nuclear decay.
The unusual stability of isotopes having magic numbers means that transuranium elements could theoretically be created with extremely large nuclei and yet not be subject to the extremely rapid radioactive decay normally associated with high atomic numbers . Large isotopes with magic numbers of nucleons are said to exist in an island of stability . Unlike the magic numbers 2–126, which are realized in spherical nuclei, theoretical calculations predict that nuclei in the island of stability are deformed. [ 1 ] [ 2 ] [ 3 ]
Before this was realized, higher magic numbers, such as 184, 258, 350, and 462, were predicted based on simple calculations that assumed spherical shapes: these are generated by the formula $2\left(\tbinom{n}{1}+\tbinom{n}{2}+\tbinom{n}{3}\right)$ (see binomial coefficient ). It is now believed that the sequence of spherical magic numbers cannot be extended in this way. Further predicted magic numbers are 114, 122, 124, and 164 for protons as well as 184, 196, 236, and 318 for neutrons. [ 1 ] [ 4 ] [ 5 ] However, more modern calculations predict 228 and 308 for neutrons, along with 184 and 196. [ 6 ]
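The spherical-shape formula can be evaluated directly; a quick Python sketch (the function name is ours) reproduces both the observed higher magic numbers and the once-predicted extensions:

```python
from math import comb

def predicted_magic(n: int) -> int:
    """Spherical-shell prediction: 2 * (C(n,1) + C(n,2) + C(n,3))."""
    return 2 * (comb(n, 1) + comb(n, 2) + comb(n, 3))

# n = 4..7 reproduce the observed 28, 50, 82, 126;
# n = 8..11 give the once-predicted 184, 258, 350, 462.
print([predicted_magic(n) for n in range(4, 12)])
```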
While working on the Manhattan Project , the German physicist Maria Goeppert Mayer became interested in the properties of nuclear fission products, such as decay energies and half-lives. [ 7 ] In 1948, she published a body of experimental evidence for the occurrence of closed nuclear shells for nuclei with 50 or 82 protons or 50, 82, and 126 neutrons. [ 8 ]
It had already been known that nuclei with 20 protons or neutrons were stable: that was evidenced by calculations by Hungarian-American physicist Eugene Wigner , one of her colleagues in the Manhattan Project. [ 9 ] Two years later, in 1950, a new publication followed in which she attributed the shell closures at the magic numbers to spin-orbit coupling. [ 10 ] According to Steven Moszkowski, a student of Goeppert Mayer, the term "magic number" was coined by Wigner: "Wigner too believed in the liquid drop model , but he recognized, from the work of Maria Mayer, the very strong evidence for the closed shells. It seemed a little like magic to him, and that is how the words 'Magic Numbers' were coined." [ 11 ]
These magic numbers were the bedrock of the nuclear shell model , which Mayer developed in the following years together with Hans Jensen and culminated in their shared 1963 Nobel Prize in Physics. [ 12 ]
Nuclei which have neutron numbers and proton ( atomic ) numbers both equal to one of the magic numbers are called "doubly magic", and are generally very stable against decay. [ 13 ] The known doubly magic isotopes are helium-4, helium-10, oxygen-16, calcium-40, calcium-48, nickel-48, nickel-56, nickel-78, tin-100, tin-132, and lead-208. While only helium-4, oxygen-16, calcium-40, and lead-208 are completely stable, calcium-48 is extremely long-lived and therefore found naturally, disintegrating only by a very inefficient double beta minus decay process. Double beta decay in general is so rare that several nuclides exist which are predicted to decay by this mechanism but in which no such decay has yet been observed. Even in nuclides whose double beta decay has been confirmed through observations, half-lives usually exceed the age of the universe by orders of magnitude, and emitted beta or gamma radiation is for virtually all practical purposes irrelevant. On the other hand, helium-10 is extremely unstable, with a half-life of just 260(40) yoctoseconds (2.6(4) × 10⁻²² s).
Doubly magic effects may allow the existence of stable isotopes which otherwise would not have been expected. An example is calcium-40 , with 20 neutrons and 20 protons, which is the heaviest stable isotope made of the same number of protons and neutrons. Both calcium-48 and nickel -48 are doubly magic because calcium-48 has 20 protons and 28 neutrons while nickel-48 has 28 protons and 20 neutrons. Calcium-48 is very neutron-rich for such a relatively light element, but like calcium-40, it is stabilized by being doubly magic. As an exception, although oxygen-28 has 8 protons and 20 neutrons, it is unbound with respect to four-neutron decay and appears to lack closed neutron shells, so it is not regarded as doubly magic. [ 14 ]
Magic number shell effects are seen in ordinary abundances of elements: helium-4 is among the most abundant (and stable) nuclei in the universe [ 15 ] and lead-208 is the heaviest stable nuclide ( at least by known experimental observations). Alpha decay (the emission of a 4 He nucleus – also known as an alpha particle – by a heavy element undergoing radioactive decay) is common in part due to the extraordinary stability of helium-4, which makes this type of decay energetically favored in most heavy nuclei over neutron emission , proton emission or any other type of cluster decay . The stability of 4 He also leads to the absence of stable isobars of mass number 5 and 8; indeed, all nuclides of those mass numbers decay within fractions of a second to produce alpha particles.
Magic effects can keep unstable nuclides from decaying as rapidly as would otherwise be expected. For example, the nuclides tin-100 and tin-132 are examples of doubly magic isotopes of tin that are unstable, and represent endpoints beyond which stability drops off rapidly. Nickel-48, discovered in 1999, is the most proton-rich doubly magic nuclide known. [ 16 ] At the other extreme, nickel-78 is also doubly magic, with 28 protons and 50 neutrons, a ratio observed only in much heavier elements, apart from tritium with one proton and two neutrons ( 78 Ni: 28/50 = 0.56; 238 U: 92/146 = 0.63). [ 17 ]
In December 2006, hassium -270, with 108 protons and 162 neutrons, was discovered by an international team of scientists led by the Technical University of Munich , having a half-life of 9 seconds. [ 18 ] Hassium-270 evidently forms part of an island of stability , and may even be doubly magic due to the deformed ( American football - or rugby ball -like) shape of this nucleus. [ 19 ] [ 20 ]
Although Z = 92 and N = 164 are not magic numbers, the undiscovered neutron-rich nucleus uranium -256 may be doubly magic and spherical due to the difference in size between low- and high- angular momentum orbitals, which alters the shape of the nuclear potential . [ 21 ]
Magic numbers are typically obtained by empirical studies; if the form of the nuclear potential is known, then the Schrödinger equation can be solved for the motion of nucleons and energy levels determined. Nuclear shells are said to occur when the separation between energy levels is significantly greater than the local mean separation.
In the shell model for the nucleus, magic numbers are the numbers of nucleons at which a shell is filled. For instance, the magic number 8 occurs when the 1s 1/2 , 1p 3/2 , 1p 1/2 energy levels are filled, as there is a large energy gap between the 1p 1/2 and the next highest 1d 5/2 energy levels.
The atomic analog to nuclear magic numbers are those numbers of electrons leading to discontinuities in the ionization energy . These occur for the noble gases helium , neon , argon , krypton , xenon , radon and oganesson . Hence, the "atomic magic numbers" are 2, 10, 18, 36, 54, 86 and 118. As with the nuclear magic numbers, these are expected to be changed in the superheavy region due to spin/orbit-coupling effects affecting subshell energy levels. Hence copernicium (112) and flerovium (114) are expected to be more inert than oganesson (118), and the next noble gas after these is expected to occur at element 172 rather than 168 (which would continue the pattern).
In 2010, an alternative explanation of magic numbers was given in terms of symmetry considerations. Based on the fractional extension of the standard rotation group, the ground state properties (including the magic numbers) for metallic clusters and nuclei were simultaneously determined analytically. A specific potential term is not necessary in this model. [ 22 ] [ 23 ]

Source: https://en.wikipedia.org/wiki/Magic_number_(physics)
In computer programming , a magic number is any of the following: a unique value with unexplained meaning or multiple occurrences which could (preferably) be replaced by a named constant; a constant value used to identify a file format or protocol; or a distinctive value unlikely to be mistaken for other meanings, such as the values written to memory for debugging purposes.
The term magic number or magic constant refers to the anti-pattern of using numbers directly in source code. This breaks one of the oldest rules of programming, dating back to the COBOL , FORTRAN and PL/1 manuals of the 1960s. [ 1 ]
In the following example that computes the price after tax, 1.05 is considered a magic number:
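The code example this sentence introduces is missing from the extract; a minimal Python sketch of it (function and variable names are illustrative):

```python
def compute_total(price):
    # 1.05 is a magic number: nothing explains that it means "5% tax".
    return price * 1.05
```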
The use of unnamed magic numbers in code obscures the developers' intent in choosing that number, [ 2 ] increases opportunities for subtle errors, and makes it more difficult for the program to be adapted and extended in the future. [ 3 ] As an example, it is difficult to tell whether every digit in 3.14159265358979323846 is correctly typed, or if the constant can be truncated to 3.14159 without affecting the functionality of the program with its reduced precision. Replacing all significant magic numbers with named constants (also called explanatory variables) makes programs easier to read, understand and maintain. [ 4 ]
The example above can be improved by adding a descriptively named variable:
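The improved version might look like this (again a Python sketch with illustrative names):

```python
TAX_RATE = 1.05  # named constant: documents intent, and there is one place to change it

def compute_total(price):
    return price * TAX_RATE
```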
Names chosen to be meaningful in the context of the program can result in code that is more easily understood by a maintainer who is not the original author (or even by the original author after a period of time). [ 5 ] An example of an uninformatively named constant is int SIXTEEN = 16 , while int NUMBER_OF_BITS = 16 is more descriptive.
The problems associated with magic 'numbers' described above are not limited to numerical types and the term is also applied to other data types where declaring a named constant would be more flexible and communicative. [ 1 ] Thus, declaring const string testUserName = "John" is better than several occurrences of the 'magic value' "John" in a test suite .
For example, if it is required to randomly shuffle the values in an array representing a standard pack of playing cards , this pseudocode does the job using the Fisher–Yates shuffle algorithm:
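The pseudocode itself is missing from the extract; a runnable Python rendering of the shuffle the next paragraph describes (randomInt and swapEntries are written out as the helpers the text defines):

```python
import random

def randomInt(x):
    """Random integer between 1 and x, inclusive."""
    return random.randint(1, x)

def swapEntries(a, i, j):
    """Swap the i-th and j-th entries (1-based) of list a."""
    a[i - 1], a[j - 1] = a[j - 1], a[i - 1]

def shuffle_deck(a):
    # 52 and 53 are the unexplained magic numbers discussed below.
    for i in range(1, 53):
        j = i + randomInt(53 - i) - 1
        swapEntries(a, i, j)
```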
where a is an array object, the function randomInt(x) chooses a random integer between 1 and x , inclusive, and swapEntries(i, j) swaps the i th and j th entries in the array. In the preceding example, 52 and 53 are magic numbers, also not clearly related to each other. It is considered better programming style to write the following:
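The improved version, with the magic numbers replaced by a hypothetical DECK_SIZE constant (rendered as Python):

```python
import random

DECK_SIZE = 52  # one named constant; 53 becomes the self-explanatory DECK_SIZE + 1

def randomInt(x):
    """Random integer between 1 and x, inclusive."""
    return random.randint(1, x)

def shuffle_deck(a):
    for i in range(1, DECK_SIZE + 1):
        j = i + randomInt(DECK_SIZE + 1 - i) - 1
        a[i - 1], a[j - 1] = a[j - 1], a[i - 1]
```

Switching to, say, a 32-card deck now requires editing a single line.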
This is preferable for several reasons: the constant's name documents its meaning; the deck size can be changed by editing a single line; the relationship between the two values (the deck size and the deck size plus one) is made explicit instead of being hidden in unrelated-looking literals; and a mistyped name will typically be caught by the compiler, whereas a mistyped digit will not.
Disadvantages are that, in very short programs, the named constant can be more verbose than the literal it replaces, and that in some interpreted languages looking up a variable can be marginally slower than using a literal.
In some contexts, the use of unnamed numerical constants is generally accepted (and arguably "not magic"). While such acceptance is subjective, and often depends on individual coding habits, the following are common examples:
The constants 1 and 0 are sometimes used to represent the Boolean values true and false in programming languages without a Boolean type, such as older versions of C . Most modern programming languages provide a boolean or bool primitive type and so the use of 0 and 1 is ill-advised. This can be more confusing since 0 sometimes means programmatic success (when -1 means failure) and failure in other cases (when 1 means success).
In C and C++, 0 represents the null pointer . As with Boolean values, the C standard library includes a macro definition NULL whose use is encouraged. Other languages provide a specific null or nil value, and when this is the case no alternative should be used. The typed pointer constant nullptr was introduced in C++11.
Format indicators were first used in early Version 7 Unix source code. [ citation needed ]
Unix was ported to one of the first DEC PDP-11 /20s, which did not have memory protection . So early versions of Unix used the relocatable memory reference model. [ 6 ] Pre- Sixth Edition Unix versions read an executable file into memory and jumped to the first low memory address of the program, relative address zero. With the development of paged versions of Unix, a header was created to describe the executable image components. Also, a branch instruction was inserted as the first word of the header to skip the header and start the program. In this way a program could be run in the older relocatable memory reference (regular) mode or in paged mode. As more executable formats were developed, new constants were added by incrementing the branch offset . [ 7 ]
In the Sixth Edition source code of the Unix program loader, the exec() function read the executable ( binary ) image from the file system. The first 8 bytes of the file were a header containing the sizes of the program (text) and initialized (global) data areas. Also, the first 16-bit word of the header was compared to two constants to determine if the executable image contained relocatable memory references (normal), the newly implemented paged read-only executable image, or the separated instruction and data paged image. [ 8 ] There was no mention of the dual role of the header constant, but the high-order byte of the constant was, in fact, the operation code for the PDP-11 branch instruction (octal 000407 or hex 0107). Adding seven to the program counter showed that if this constant was executed , it would branch the Unix exec() service over the eight-byte header of the executable image and start the program.
Since the Sixth and Seventh Editions of Unix employed paging code, the dual role of the header constant was hidden. That is, the exec() service read the executable file header ( meta ) data into a kernel space buffer, but read the executable image into user space , thereby not using the constant's branching feature. Magic number creation was implemented in the Unix linker and loader and magic number branching was probably still used in the suite of stand-alone diagnostic programs that came with the Sixth and Seventh Editions. Thus, the header constant did provide an illusion and met the criteria for magic .
In Version Seven Unix, the header constant was not tested directly, but assigned to a variable labeled ux_mag [ 9 ] and subsequently referred to as the magic number . Probably because of its uniqueness, the term magic number came to mean executable format type, then expanded to mean file system type, and expanded again to mean any type of file.
Magic numbers are common in programs across many operating systems. Magic numbers implement strongly typed data and are a form of in-band signaling to the controlling program that reads the data type(s) at program run-time. Many files have such constants that identify the contained data. Detecting such constants in files is a simple and effective way of distinguishing between many file formats and can yield further run-time information .
The Unix utility program file can read and interpret magic numbers from files, and the file which is used to parse the information is called magic . The Windows utility TrID has a similar purpose.
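The same idea can be sketched in a few lines of Python; the signatures below are the published leading bytes of common formats, and `identify` is an illustrative helper, not the actual `file` implementation:

```python
# Well-known leading-byte signatures, as published in the respective
# format specifications.
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\x1f\x8b": "GZIP archive",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive",
}

def identify(data: bytes) -> str:
    """Classify a byte string by its leading magic number, if any."""
    for magic, name in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"
```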
Magic numbers are common in API functions and interfaces across many operating systems , including DOS , Windows and NetWare :
This is a list of limits of data storage types: [ 15 ]
It is possible to create or alter globally unique identifiers (GUIDs) so that they are memorable, but this is highly discouraged as it compromises their strength as near-unique identifiers. [ 16 ] [ 17 ] The specifications for generating GUIDs and UUIDs are quite complex, which is what leads to them being virtually unique, if properly implemented. [ 18 ]
Microsoft Windows product ID numbers for Microsoft Office products sometimes end with 0000-0000-0000000FF1CE ("OFFICE"), such as { 90160000-008C-0000-0000-0000000FF1CE }, the product ID for the "Office 16 Click-to-Run Extensibility Component".
Java uses several GUIDs starting with CAFEEFAC . [ 19 ]
In the GUID Partition Table of the GPT partitioning scheme, BIOS Boot partitions use the special GUID { 21686148-6449-6E6F-744E-656564454649 } [ 20 ] which does not follow the GUID definition; instead, it is formed by using the ASCII codes for the string " Hah!IdontNeedEFI " partially in little endian order. [ 21 ]
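This can be verified with Python's standard uuid module: `bytes_le` emits exactly the mixed-endian on-disk layout, with the first three fields byte-swapped and the remaining bytes in order:

```python
import uuid

g = uuid.UUID("21686148-6449-6E6F-744E-656564454649")
# bytes_le byte-swaps the first three fields (little-endian) and keeps
# the last two as written, recovering the hidden ASCII string.
print(g.bytes_le.decode("ascii"))  # Hah!IdontNeedEFI
```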
Magic debug values are specific values written to memory during allocation or deallocation, so that it will later be possible to tell whether or not they have become corrupted, and to make it obvious when values taken from uninitialized memory are being used. Memory is usually viewed in hexadecimal, so memorable repeating or hexspeak values are common. Numerically odd values may be preferred so that processors without byte addressing will fault when attempting to use them as pointers (which must fall at even addresses). Values should be chosen that are away from likely addresses (the program code, static data, heap data, or the stack). Similarly, they may be chosen so that they are not valid codes in the instruction set for the given architecture.
Since it is very unlikely, although possible, that a 32-bit integer would take a specific value such as 0xDEADBEEF by chance, the appearance of such a number in a debugger or memory dump most likely indicates an error such as a buffer overflow or an uninitialized variable .
Famous and common examples include:
One such value is used by the VLC player and some IP cameras in the RTP / RTCP protocol: VLC sends the four bytes in the order of the endianness of the system, and some IP cameras expect the player to send this magic number and do not start the stream if it is not received.
Most of these are 32 bits long – the word size of most 32-bit architecture computers.
The prevalence of these values in Microsoft technology is no coincidence; they are discussed in detail in Steve Maguire 's book Writing Solid Code from Microsoft Press . He gives a variety of criteria for these values, such as:
Since they were often used to mark areas of memory that were essentially empty, some of these terms came to be used in phrases meaning "gone, aborted, flushed from memory"; e.g. "Your program is DEADBEEF". [ citation needed ]

Source: https://en.wikipedia.org/wiki/Magic_number_(programming)
A magic pipe is a surreptitious change to a ship's oily water separator (OWS), or other waste-handing equipment, which allows waste liquids to be discharged in contravention of maritime pollution regulations. Such equipment alterations may allow hundreds of thousands of gallons of contaminated water to be discharged untreated, causing extensive pollution of marine waters. [ 1 ]
The pipe may be improvised, aboard ship, from available hoses and pumps, to discharge untreated waste water directly into the sea. As ships are required to keep records of waste and its treatment, magic pipe cases often involve falsification of these records. [ 2 ] [ 3 ] The pipe is ironically called "magic" because it bypasses the ship's oily water separator and goes directly overboard. Hence, it can make untreated bilge water "magically disappear". [ 4 ]
Often the pipe can be easily disconnected and stored away into a different location aboard the ship so state and regulatory officers can not detect its usage. The use of magic pipes continues to this day, as well as efforts to improve bilge water treatment to make the use of magic pipes unnecessary. [ 4 ]
In the United States, magic pipe cases often attract large fines for shipping lines, and prison sentences for crew. [ 1 ] Cases are often brought to light by whistle blowers , [ 6 ] including a 2016 case involving Princess Cruises , which resulted in a record US$40 million fine. [ 5 ] In April 2021 a ship engineer on the Zao Galaxy, an oil tanker, was convicted of intentionally dumping oily bilge water in February 2019 and submitting false paperwork in an attempt to conceal the crime. The engineer may receive a substantial prison sentence and fine. The ship operator was fined US$1.65 million and ordered to "implement a comprehensive Environmental Compliance Plan." [ 7 ]
On older OWS systems bypass pipes were fitted with regulatory approval. These approved pipes are no longer fitted on newer vessels. [ 8 ]
In some serious emergencies ship's crews are allowed to discharge untreated bilge water overboard, but they need to declare these emergencies in the ship's records and oil record book . Unregistered discharges violate the MARPOL 73/78 international pollution control treaty. [ 9 ] [ 10 ]
The problem is worsened by a lack of facilities in developing countries; some port reception facilities do not allow for oily water to be discharged easily and cost effectively. [ 11 ] Crew members, engineers, and ship owners can receive huge fines and even imprisonment if they continue to use a magic pipe to pollute the environment. [ 4 ] [ 12 ]
In summary, some engineers resort to the magic pipe because legal disposal can be slow and costly, particularly where port reception facilities do not allow oily water to be discharged easily and cost-effectively.
The oily bilge waste comes from a ship's engines and fuel systems. The waste is required to be offloaded when a ship is in port and either burned in an incinerator or taken to a waste management facility. On rare occasions, bilge water can be discharged into the ocean, but only after almost all oil is separated out. [ 5 ]

Source: https://en.wikipedia.org/wiki/Magic_pipe
The magic pushbutton is a common anti-pattern in graphical user interfaces . [ 1 ] [ 2 ]
At its core, the anti-pattern consists of a system partitioned into two parts, user interface and business logic , that are coupled through a single point: clicking the "magic pushbutton" or submitting a form of data. Because it is a single-point interface, it becomes over-complicated to implement. The temporal coupling of these units is a major problem: every interaction in the user interface must happen before the pushbutton is pressed, and business logic can only be applied after the button has been pressed. Cohesion of each unit also tends to be poor: features are bundled together whether they warrant this or not, simply because there is no other structured place in which to put them.
To a user, a magic pushbutton system appears clumsy and frustrating to use. Business logic is unavailable before the button press, so the user interface appears as a bland form-filling exercise. There is no opportunity for assistance in filling out fields, or for offering drop-down lists of acceptable values. In particular, it is impossible to provide assistance with later fields, based on entries already placed in earlier fields. For instance, a choice from a very large list of insurance claim codes might be filtered to a much smaller list, if the user has already selected Home/Car/Pet insurance, or if they have already entered their own identification and so the system can determine the set of risks for which they're actually covered, omitting the obscure policies that are now known to be irrelevant for this transaction.
One of the most off-putting aspects of a magic pushbutton is its tendency for the user interaction to proceed by entering a large volume of data, then having it rejected for some unexpected reason. This is particularly poor design when it is combined with the infamous "Redo from scratch" messages of older systems. Even where a form is returned with the entered data preserved and the problem field highlighted, it is still off-putting to users to have to return to a field that they thought they had completed some minutes earlier.
These features, and their lack with a magic pushbutton, are particularly important for naive users who are likely to make mistakes, less so for experts or the system's own programmers. This type of interface failing has been highlighted by the web, and the need to support more public users, rather than a more traditional user group of role-based office workers, carrying out the same tasks on the same system, over and over. Even though a developer who knows the system intimately and can enter data perfectly the first time around is able to use it efficiently, this is no indication that such a system is suitable for use by its actual users.
The magic pushbutton often arises through poor management of the design process in the early stages, together with a lack of importance placed on user experience, relative to project completion. At a simple view, the simplicity of the magic pushbutton is attractive as it has few user interface modules and their interactions appear simple too. This view hides the complexity inside each module, and also devalues interface quality relative to cost.
In a modern system (i.e., one where processing is cheap and competing interface standards are high), users should simply not be left to enter data for long periods without some automatic interaction to guide, validate, or tailor the system according to the developing state of the data they have so far entered. Leaving them alone to "just get on with it", then validating everything at the end, means that corrections are requested further and further from the point where the data was entered. As a guiding principle, needed corrections should be highlighted as soon as possible after, and as close as possible to, the point where the data is entered or the problem could first be identified.
In an event-driven interface, most events triggered by the "completion" of a field will present an opportunity to either validate that field, or to guide the choices for entering the next. They may even control which field the user is taken to next: sub-sections of a form are often made relevant or irrelevant by values entered early on, and users should not need to manually skip these, if it can be done for them.
In this scenario, the programmer draws the user interface first and then writes the business logic in the automatically created methods .
The following is a typical example of a magic pushbutton in Borland Delphi :
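The Delphi listing is not reproduced in this extract; a minimal Python sketch of the same shape (the form class, widget field, and registry stand-in are all hypothetical) shows the problem: the click handler owns both UI access and business logic.

```python
class SettingsForm:
    """Hypothetical UI form; `registry` is a dict standing in for the Windows registry."""

    def __init__(self, registry):
        self.registry = registry
        self.filename_edit = ""  # stands in for a text-entry widget

    def on_ok_click(self):
        # Anti-pattern: the button handler reads the widget AND performs
        # the business logic (persisting the setting) in one place.
        self.registry["Software/MyApp/DataFile"] = self.filename_edit
```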
A better way to do this is to refactor the business logic (in this example storing the filename to the registry) into a separate class.
and call this class's Save method from the Click handler.

Source: https://en.wikipedia.org/wiki/Magic_pushbutton
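A hypothetical Python sketch of that refactoring (registry and widget are stand-ins for the real ones):

```python
class AppSettings:
    """Business logic, now independent of any UI code."""

    def __init__(self, registry):
        self.registry = registry

    def save(self, filename):
        self.registry["Software/MyApp/DataFile"] = filename


class SettingsForm:
    """UI layer; the click handler only delegates to the business logic."""

    def __init__(self, settings):
        self.settings = settings
        self.filename_edit = ""  # stands in for a text-entry widget

    def on_ok_click(self):
        self.settings.save(self.filename_edit)
```

The persistence logic can now be tested, reused, or invoked from elsewhere without a button press.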
In computer programming , a magic string is an input that a programmer believes will never come externally and which activates otherwise hidden functionality. A user of this program would likely provide input that gives an expected response in most situations. However, if the user does in fact innocently (unintentionally) provide the pre-defined input, invoking the internal functionality, the program response is often quite unexpected to the user (thus appearing "magical"). [ 1 ]
Typically, the implementation of magic strings is due to time constraints. A developer must find a fast solution instead of delving more deeply into a problem and finding a better solution. For example, when testing a program that takes a user's personal details and verifies their credit card number, a developer may decide to add a magic string shortcut whereby entering the unlikely input of "***" as a credit card number would cause the program to automatically proceed as if the card were valid, without spending time verifying it. If the developer forgets to remove the magic string, and a user of the final program happens to enter "***" as a placeholder credit card number while filling in the form, the user would inadvertently trigger the hidden functionality.
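A sketch of such a leftover shortcut (Python; the "***" value follows the example in the text, and the validation rule is purely illustrative):

```python
def card_is_valid(number: str) -> bool:
    if number == "***":  # testing shortcut that was never removed
        return True
    # Hypothetical real check: digits only, plausible card length.
    return number.isdigit() and 13 <= len(number) <= 19
```

A user who innocently types "***" as a placeholder now bypasses verification entirely.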
Often there are significant time constraints out of the developer's control right from the beginning of their involvement in a project. Common issues that might lead to this anti-pattern as a result:
Restricting the format of the input is a possible maintenance (bug fixing) solution — essentially this means validating input information to check that it is in the correct format, in order to reduce the possibility of the magic string being discovered by the user. Examples include validating a telephone number to ensure that it contains only digits (and possibly spaces and punctuation to a limited extent) or checking that a person's name has a forename and a surname (and is appropriately capitalised). An exception is made for the magic string in the validation code so that it will not be rejected by validation. It is expected that, since a user would likely quickly notice the strict enforcement of formatting, it would likely not occur to the user to try inputting a string not conforming to the format. Therefore, it is very unlikely for the user to try the magic string.
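A sketch of this mitigation (Python; the phone-number format rule and the exempted magic value are illustrative assumptions): input is forced into a strict shape, with the magic string explicitly exempted so the validator does not reject it.

```python
import re

MAGIC_TEST_VALUE = "***"  # hypothetical internal shortcut
PHONE_FORMAT = re.compile(r"^\d{3}[- ]?\d{3}[- ]?\d{4}$")  # illustrative rule

def accept_phone(value: str) -> bool:
    if value == MAGIC_TEST_VALUE:  # exception so validation won't reject it
        return True
    return PHONE_FORMAT.match(value) is not None
```

Because ordinary users see only the strict format being enforced, it is unlikely to occur to them to try a non-conforming string.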
As with any input validation process, it is important to ensure that the format is not restrictive in a way that unintentionally restricts the use of the application by some users. An example of this is restricting telephone number or postal code [ 6 ] input based on one country's system (e.g. requiring every user to give a five-digit ZIP code ), causing problems for legitimate users who are based in other countries.
As is often the case with anti-patterns, there exist specific scenarios where magic strings are a correct solution for an implementation. Examples include cheat codes [ 7 ] and Easter eggs . Furthermore, there are cases when users invent magic strings, and systems that have not coded to accept them can produce unexpected results such as missing license plates. [ 8 ]
The following is a list of some known incidents where use of a magic string has caused problems.

Source: https://en.wikipedia.org/wiki/Magic_string
The magic wavelength (also known as a related quantity, magic frequency ) is the wavelength of an optical lattice where the polarizabilities of two atomic clock states have the same value, such that the AC Stark shift caused by the laser intensity fluctuation has no effect on the transition frequency between the two clock states. [ 1 ] [ 2 ] [ 3 ]
The laser field in an optical lattice induces an electric dipole moment in the atoms to exert forces on them and hence confine them. However, the difference in polarizabilities of the atomic states leads to an AC Stark shift in the transition frequency between the two states, a shift that is dependent on the laser optical intensity at the particular atom location in the lattice. [ 1 ] When it comes to precise measurements of transition frequency such as atomic clocks, the temporal fluctuations of the laser optical intensity would then deteriorate the clock accuracy. Furthermore, due to the spatial variation of laser intensity in the lattice, the atom's motion within the lattice would also be coupled into the uncertainty of the internal transition frequency of the atom.
Although they have different functional forms, the polarizabilities of the two atomic states both depend on the wavelength of the laser field. In some cases it is then possible to find a particular wavelength at which the two atomic states happen to have exactly the same polarizability. This particular wavelength, at which the AC Stark shift of the transition frequency vanishes, is called the magic wavelength, and the corresponding frequency is called the magic frequency. This idea was first introduced in a calculation by Hidetoshi Katori in 2003, [ 1 ] and then experimentally achieved by Katori's group the same year. [ 3 ]
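The condition can be written compactly. Assuming \(\alpha_g\) and \(\alpha_e\) denote the ground- and excited-state polarizabilities, \(I\) the laser intensity, and \(\lambda_m\) the magic wavelength, a first-order sketch of the differential light shift is:

```latex
% First-order AC Stark (light) shift of the clock transition:
h\,\delta\nu(\lambda) = -\frac{\alpha_e(\lambda) - \alpha_g(\lambda)}{2\epsilon_0 c}\, I ,
\qquad
\alpha_e(\lambda_m) = \alpha_g(\lambda_m)
\;\Rightarrow\; \delta\nu(\lambda_m) = 0 .
```

At \(\lambda_m\) the shift vanishes to first order regardless of \(I\), which is why intensity fluctuations and the atom's position in the lattice drop out of the clock transition frequency.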
The Maginot Line ( / ˈ m æ ʒ ɪ n oʊ / ; French : Ligne Maginot [liɲ maʒino] ), [ a ] [ 1 ] named after the French Minister of War André Maginot , is a line of concrete fortifications , obstacles and weapon installations built by France in the 1930s to deter an invasion by Nazi Germany and force the Germans to move around the fortifications. It was impervious to most forms of attack; consequently, the Germans invaded through the Low Countries in 1940, passing the line to its north. The line, which was supposed to be extended further west to prevent such a manoeuvre, was ultimately scaled back in response to demands from Belgium, which feared it would be sacrificed in the event of another German invasion. The line has since become a metaphor for expensive efforts that offer a false sense of security. [ 2 ]
Constructed on the French side of its borders with Italy , Switzerland , Germany , Luxembourg and Belgium , the line did not extend to the English Channel . French strategy, therefore, envisioned a move into Belgium to counter a German assault. Based on France's experience with trench warfare during World War I , the massive Maginot Line was built in the run-up to World War II , after the Locarno Conference in 1925 gave rise to a fanciful and optimistic "Locarno spirit". French military experts believed the line would deter German aggression because it would slow an invasion force long enough for French forces to mobilise and counterattack.
The Maginot Line was invulnerable to aerial bombings and tank fire; it used underground railways as a backup. It also had state-of-the-art living conditions for garrisoned troops, with air conditioning and eating areas for their comfort. [ 3 ] French and British officers had anticipated the geographical limits of the Maginot Line; when Germany invaded the Netherlands and Belgium , they carried out plans to form an aggressive front that cut across Belgium and connected to the Maginot Line.
The French line was weak near the Ardennes . General Maurice Gamelin , when drafting the Dyle Plan , believed this region, with its rough terrain, would be an unlikely invasion route for German forces; if it were traversed, the advance would be slow enough to allow the French time to bring up reserves and counterattack. The German Army, having reformulated its plans away from a repeat of the First World War-era plan, became aware of and exploited this weak point in the French defensive front. A rapid advance through the forest and across the River Meuse encircled much of the Allied forces, resulting in a sizeable force having to be evacuated at Dunkirk and leaving the troops to the south unable to mount an effective resistance to the German invasion of France . [ 4 ]
The Maginot Line was built to fulfill several purposes:
Maginot Line fortifications were manned by specialist units of fortress infantry, artillery and engineers. The infantry manned the lighter weapons of the fortresses and formed units with the mission of operating outside if necessary. Artillery troops operated the heavy guns, and the engineers were responsible for maintaining and operating other specialist equipment, including all communications systems. All these troops wore distinctive uniform insignia and considered themselves among the elite of the French Army. During peacetime, fortresses were only partly manned by full-time troops. They would be supplemented by reservists who lived in the local area and who could be quickly mobilised in an emergency. [ 10 ]
Full-time Maginot Line troops were accommodated in barracks built close to the fortresses. They were also accommodated in complexes of wooden housing adjacent to each fortress, which were more comfortable than living inside, but were not expected to survive wartime bombardment. [ 11 ] The training was carried out at a fortress near the town of Bitche in Moselle in Lorraine , built in a military training area and so capable of live fire exercises. This was impossible elsewhere as the other parts of the line were located in civilian areas. [ 11 ]
Although the name "Maginot Line" suggests a relatively thin linear fortification, it was 20–25 kilometres (12–16 miles) deep from the German border to the rear area. It was composed of an intricate system of strong points, fortifications and military facilities such as border guard posts, communications centres, infantry shelters, barricades, artillery, machine-gun and anti-tank-gun emplacements, supply depots, infrastructure facilities and observation posts. These various structures reinforced a principal line of resistance made up of the most heavily armed ouvrages , which can be roughly translated as fortresses or big defensive works.
This consisted of blockhouses and strong-houses, which were often camouflaged as residential homes, built within a few metres of the border and manned by troops to give the alarm in the event of a surprise attack and to delay enemy tanks with prepared explosives and barricades .
Approximately 5 km (3 mi) behind the border there was a line of anti-tank blockhouses that were intended to provide resistance to armoured assault, sufficient to delay the enemy and allow time for the crews of the C.O.R.F. ouvrages to be ready at their battle stations. These outposts covered the main passages within the principal line.
This line began 10 km (6 mi) behind the border. It was preceded by anti-tank obstacles made of metal rails planted vertically in six rows, with heights varying from 0.70–1.40 metres (2 ft 4 in – 4 ft 7 in) and buried to a depth of 2 m (6 ft 7 in). These anti-tank obstacles extended from end to end in front of the main works, over hundreds of kilometres, interrupted only by extremely dense forests, rivers, or other nearly impassable terrains.
The anti-tank obstacle system was followed by an anti-personnel obstacle system made primarily of dense barbed wire. Anti-tank road barriers also made it possible to block roads at necessary points of passage through the tank obstacles.
These bunkers were armed with twin machine-guns (abbreviated as JM — Jumelage de mitrailleuses — in French) and anti-tank guns of 37 or 47 mm (1.5 or 1.9 in). They could be single (with a firing room in one direction) or double (two firing rooms in opposite directions). These generally had two floors, with a firing level and a support/infrastructure level that provided the troops with rest and services ( power-generating units , reserves of water, fuel, food, ventilation equipment, etc.). The infantry casemates often had one or two "cloches" or turrets located on top of them. These GFM cloches were sometimes used to emplace machine guns or observation periscopes. Each casemate was manned by 20 to 30 men.
These small fortresses reinforced the line of infantry bunkers . The petits ouvrages were generally made up of several infantry bunkers, connected by a tunnel network with attached underground facilities, such as barracks, electric generators , ventilation systems, mess halls , infirmaries and supply caches. Their crew consisted of between 100 and 200 men.
These fortresses were the most important fortifications on the Maginot Line, having the sturdiest construction and the heaviest artillery. These were composed of at least six "forward bunker systems" or "combat blocks" and two entrances and were connected via a network of tunnels that often had narrow gauge electric railways for transport between bunker systems. The blocks contained infrastructure such as power stations, independent ventilating systems, barracks and mess halls, kitchens, water storage and distribution systems, hoists, ammunition stores, workshops and spare parts and food stores. Their crews ranged from 500 to more than 1,000 men.
These were located on hills that provided a good view of the surrounding area. Their purpose was to locate the enemy, direct and correct the indirect fire of artillery, and report on the progress and position of critical enemy units. These are large reinforced buried concrete bunkers, equipped with armoured turrets containing high-precision optics, connected with the other fortifications by field telephone and wireless transmitters (known in French by the acronym T.S.F., Télégraphie Sans Fil ).
This system connected every fortification in the Maginot Line, including bunkers, infantry and artillery fortresses, observation posts and shelters. Two telephone wires were placed parallel to the line of fortifications, providing redundancy in case a wire was cut. There were places along the cable where dismounted soldiers could connect to the network.
These were found from 500–1,000 m (1,600–3,300 ft) behind the principal line of resistance. These were buried concrete bunkers designed to house and shelter up to a company of infantry (200 to 250 men). They had amenities such as electric generators, ventilation systems, water supplies, kitchens and heating, which allowed their occupants to hold out in the event of an attack. They could also be used as a local headquarters and counterattack base.
Flood zones were natural basins or rivers that could be flooded on demand and thus constitute an additional obstacle in the event of an enemy offensive.
These were built near the major fortifications so fortress ( ouvrage ) crews could reach their battle stations in the shortest possible time in the event of a surprise attack during peacetime.
A network of 600 mm ( 1 ft 11 + 5 ⁄ 8 in ) narrow-gauge railways was built to rearm and resupply the main fortresses ( ouvrages ) from supply depots up to 50 km (31 mi) away. Petrol-engined armoured locomotives pulled supply trains along these narrow-gauge lines. (A similar system was developed with armoured steam engines in 1914–1918.)
Initially above-ground but then buried, and connected to the civil power grid, these provided electric power to the many fortifications and fortresses.
This was hauled by locomotives to planned locations to support the emplaced artillery in the fortresses, which was intentionally limited in range to 10–12 km (6–7 mi).
There are 142 ouvrages , 352 casemates , 78 shelters, 17 observatories and around 5,000 blockhouses in the Maginot Line. [ b ]
There are several kinds of armoured cloches. Cloches are non-retractable turrets; the word cloche is French for "bell", a reference to their shape. All cloches were made of alloy steel.
The line included the following retractable turrets.
Both static and mobile artillery units were assigned to defend the Maginot Line. Régiments d'artillerie de position (RAP) consisted of static artillery units. Régiments d'artillerie mobile de forteresse (RAMF) consisted of mobile artillery. [ 12 ]
The defences were first proposed by Marshal Joseph Joffre . He was opposed by modernists such as Paul Reynaud and Charles de Gaulle , who favoured investment in armour and aircraft. Joffre had support from Marshal Henri Philippe Pétain , and the government organised many reports and commissions. André Maginot finally convinced the government to invest in the scheme. Maginot was another veteran of World War I; he became the French Minister of Veteran Affairs and then Minister of War (1928–1932).
In January 1923, after Weimar Germany defaulted on reparations , the French Premier Raymond Poincaré responded by sending French troops to occupy Germany's Ruhr region. During the ensuing Ruhrkampf ("Ruhr struggle") between the Germans and the French that lasted until September 1923, Britain condemned the French occupation of the Ruhr . A period of sustained Francophobia broke out in Britain, with Poincaré being vilified in Britain as a cruel bully punishing Germany with unreasonable reparations demands. The British—who openly championed the German position on reparations—applied intense economic pressure on France to change its policies towards Germany. At a conference in London in 1924 to settle the Franco-German crisis caused by the Ruhrkampf , the British Prime Minister Ramsay MacDonald successfully pressed the French Premier Édouard Herriot to make concessions to Germany. The British diplomat Sir Eric Phipps , who attended the conference, commented afterwards that:
The London Conference was for the French 'man in the street' one long Calvary as he saw M. Herriot abandoning one by one the cherished possessions of French preponderance on the Reparations Commission, the right of sanctions in the event of German default, the economic occupation of the Ruhr, the French-Belgian railway Régie , and finally, the military occupation of the Ruhr within a year. [ 13 ]
The great conclusion that was drawn in Paris after the Ruhrkampf and the 1924 London Conference was that France could not make unilateral military moves to uphold Versailles as the resulting British hostility to such moves was too dangerous to the republic. Beyond that, the French were well aware of the contribution of Britain and its dominions to the victory of 1918. French decision-makers believed they needed Britain's help to win another war; the French could only go so far with alienating the British. [ 14 ] From 1871 forward, French elites had concluded that France had no hope of defeating Germany on its own, and France would need an alliance with another great power to defeat the Reich . [ 15 ]
In 1926, The Manchester Guardian ran an exposé showing the Reichswehr had been developing military technology forbidden by the Treaty of Versailles in the Soviet Union . The secret German-Soviet cooperation had started in 1921. The German statement following The Manchester Guardian ' s article that Germany did not feel bound by the terms of Versailles and would violate them as much as possible gave much offence in France. Nonetheless, in 1927, the Inter-Allied Commission , which was responsible for ensuring that Germany complied with Part V of the Treaty of Versailles, was abolished as a goodwill gesture reflecting the "Spirit of Locarno ". [ 16 ] When the Control Commission was dissolved, the commissioners issued a blistering final report stating that Germany had never sought to abide by Part V and that the Reichswehr had been engaging in covert rearmament throughout the 1920s. Under the Treaty of Versailles, France was to occupy the Rhineland region of Germany until 1935, but the last French troops left the Rhineland in June 1930 in exchange for Germany accepting the Young Plan . [ 17 ] As long as the French occupied the Rhineland, it served as a type of collateral under which the French would annex the Rhineland in the event of Germany breaching any of the articles of the treaty, such as rearming in violation of Part V; this threat was powerful enough to deter successive German governments throughout the 1920s from attempting any overt violation of Part V. [ 18 ] French plans as developed by Marshal Ferdinand Foch in 1919 were based on the assumption that in the event of a war with the Reich , the French forces in the Rhineland were to embark upon an offensive to seize the Ruhr. [ 18 ] A variant of the Foch plan had been used by Poincaré in 1923 when he ordered the French occupation of the Ruhr. [ 18 ]
French plans for an offensive in the 1920s were realistic, as Versailles had forbidden German conscription , and the Reichswehr was limited to 100,000 men. Once the French forces left the Rhineland in 1930, this form of leverage with the Rhineland as collateral was no longer available to Paris, which from then on had to depend on Berlin's word that it would continue to abide by the terms of the Versailles and Locarno treaties, which stated that the Rhineland was to stay demilitarised forever. [ 18 ] Given that Germany had engaged in covert rearmament with the co-operation of the Soviet Union starting in 1921 (a fact that had become public knowledge in 1926) and that every German government had gone out of its way to insist on the moral invalidity of Versailles, claiming it was based upon the so-called Kriegsschuldlüge ("War guilt lie") that Germany started the war in 1914, the French had little faith that the Germans would willingly allow the Rhineland's demilitarised status to continue forever, and believed that at some time in the future, Germany would rearm in violation of Versailles, reintroduce conscription and remilitarise the Rhineland. [ 18 ] The decision to build the Maginot Line in 1929 was a tacit French admission that without the Rhineland as collateral, Germany was soon going to rearm and that the terms of Part V had a limited lifespan. [ 18 ]
After 1918, the German economy was twice as large as that of France; Germany had a population of 70 million compared to France's 40 million, and the French economy was hobbled by the need to reconstruct the enormous damage of World War I, while German territory had seen little fighting. French military chiefs were dubious about France's ability to win another war against Germany on its own, especially an offensive war. [ 18 ] French decision-makers knew that the victory of 1918 had been achieved because the British Empire and the United States were allies in the war and that the French would have been defeated on their own. [ 17 ] With the United States isolationist and Britain stoutly refusing to make the "continental commitment" to defend France on the same scale as in World War I, the prospects of Anglo-American assistance in another war with Germany appeared doubtful at best. [ 17 ] Versailles did not call for military sanctions in the event of the German military reoccupying the Rhineland or breaking Part V. While Locarno committed Britain and Italy to come to French aid in the event of a "flagrant violation" of the Rhineland's demilitarised status, it did not define what a "flagrant violation" would be. [ 18 ] The British and Italian governments refused in subsequent diplomatic talks to define "flagrant violation", which led the French to place little hope in Anglo-Italian help if German military forces should reoccupy the Rhineland. [ 18 ] Given the diplomatic situation in the late 1920s, the Quai d'Orsay informed the government that French military planning should be based on a worst-case scenario in which France would fight the next war against Germany without the help of Britain or the United States. [ 18 ]
France had an alliance with Belgium and with the states of the Cordon sanitaire , as the French alliance system in Eastern Europe was known. Although the alliances with Belgium, Poland , Czechoslovakia , Romania and Yugoslavia were appreciated in Paris, it was widely understood that this was no compensation for the absence of Britain and the United States. The French military was especially insistent that the population disparity made an offensive war of manoeuvre and swift advances suicidal, as there would always be far more German divisions; a defensive strategy was needed to counter Germany. [ 18 ] The French assumption was always that Germany would not go to war without conscription, which would allow the German Army to take advantage of the Reich ' s numerical superiority. Without the natural defensive barrier provided by the Rhine River, French generals argued that France needed a new defensive barrier made of concrete and steel to replace it. [ 18 ] The power of properly dug-in defensive trenches had been amply demonstrated during World War I, when a few soldiers manning a single machine gun post could kill hundreds of the enemy in the open and therefore building a massive defensive line with subterranean concrete shelters was the most rational use of French manpower. [ 19 ]
The American historian William Keylor wrote that given the diplomatic conditions of 1929 and likely trends – with the United States isolationist and Britain unwilling to make the "continental commitment" – the decision to build the Maginot Line was not irrational or stupid: building it was a sensible response to the problems that would be created by the coming French withdrawal from the Rhineland in 1930. [ 19 ] Part of the rationale for the Maginot Line stemmed from the severe French losses during the First World War and their effect on the French population. [ 20 ] The drop in the birth rate during and after the war, resulting in a national shortage of young men, created an "echo" effect on the generation that provided the French conscript army in the mid-1930s. [ 20 ] Faced with a manpower shortage, French planners had to rely more on older and less fit reservists , who would take longer to mobilise and whose absence from their jobs would diminish French industry. Static defensive positions were therefore intended not only to buy time but to economise on men by defending an area with fewer and less mobile forces. However, in 1940, France deployed about twice as many men, 36 divisions (roughly one third of its force), for the defence of the Maginot Line in Alsace and Lorraine, while the opposing German Army Group C contained only 19 divisions, fewer than a seventh of the force committed in the Manstein Plan for the invasion of France. [ 21 ] Reflecting memories of World War I, the French General Staff had developed the concept of la puissance du feu ("the power of fire"), the power of artillery dug in and sheltered by concrete and steel, to inflict devastating losses on an attacking force. [ 22 ]
French planning for war with Germany was always based on the assumption that the war would be la guerre de longue durée (the long war) , in which the superior economic resources of the Allies would gradually grind the Germans down. [ 23 ] The fact that the Wehrmacht embraced the strategy of Blitzkrieg (Lightning War) with the vision of swift wars in which Germany would win quickly via a knockout blow was a testament to the fundamental soundness of the concept of la guerre de longue durée . [ 23 ] Germany had the largest economy in Europe but lacked many of the raw materials necessary for a modern industrial economy (making the Reich vulnerable to a blockade) and the ability to feed its population. The guerre de longue durée strategy called for the French to halt the expected German offensive meant to give the Reich a swift victory; afterwards, there would be an attrition struggle; once the Germans were exhausted, France would begin an offensive to win the war. [ 23 ]
The Maginot Line was intended to block the main German blow if it should come via eastern France and divert it through Belgium, where French forces would meet and stop the Germans. [ 24 ] The Germans were expected to fight costly offensives, whose failures would sap the strength of the Reich , while the French waged a total war , mobilising the resources of France, its empire and allies. [ 25 ] Besides the demographic reasons, a defensive strategy served the needs of French diplomacy towards Great Britain. [ 26 ] The French imported a third of their coal from Britain, and 32 per cent of all imports through French ports were carried by British ships. [ 26 ] Of French trade, 35 per cent was with the British Empire and the majority of the tin , rubber , jute , wool and manganese used by France came from the British Empire. [ 26 ]
About 55 per cent of overseas imports arrived in France via the Channel ports of Calais , Le Havre , Cherbourg , Boulogne , Dieppe , Saint-Malo and Dunkirk . [ 26 ] Germany had to import most of its iron, rubber, oil , bauxite , copper and nickel , making naval blockade a devastating weapon against the German economy . [ 27 ] For economic reasons, the success of the strategy of la guerre de longue durée would at the very least require Britain to maintain a benevolent neutrality , preferably to enter the war as an ally as British sea power could protect French imports while depriving Germany of hers. A defensive strategy based on the Maginot Line was an excellent way of demonstrating to Britain that France was not an aggressive power and would only go to war in the event of German aggression, a situation that would make it more likely that Britain would enter the war on France's side. [ 28 ]
The line was built in several phases from 1930 by the Service Technique du Génie (STG), overseen by the Commission d'Organisation des Régions Fortifiées (CORF). The main construction was largely completed by 1939, at a cost of around 3 billion French francs (roughly US$3.9 billion in today's terms). The line stretched from Switzerland to Luxembourg, and a much lighter extension was built towards the Strait of Dover after 1934. Owing to the neutrality of Belgium, the original construction did not cover the area ultimately chosen by the Germans for their first challenge, the 1940 thrust through the Ardennes, a plan known as Fall Gelb (Case Yellow). The attack route, chosen because of the location of the Maginot Line, ran through the Belgian Ardennes Forest (sector 4), off the map to the left of Maginot Line sector 6 (as marked).
The specification of the defences was very high, with extensive and interconnected bunker complexes for thousands of men; there were 45 main forts ( grands ouvrages ) at intervals of 15 km (9.3 mi), 97 smaller forts ( petits ouvrages ) and 352 casemates between them, with over 100 km (62 mi) of tunnels . Artillery was coordinated with protective measures to ensure that one fort could support the next in line by bombarding it directly without harm. The largest guns were therefore 135 mm (5.3 in) fortress guns; larger weapons were to be part of the mobile forces and were to be deployed behind the lines.
The fortifications did not extend through the Ardennes Forest (which Commander-in-Chief Maurice Gamelin believed to be impenetrable) or along France's border with Belgium, because the two countries had signed an alliance in 1920 under which the French army would operate in Belgium if German forces invaded. However, after France had failed to counter the German remilitarisation of the Rhineland , Belgium—thinking that France was not a reliable ally—abrogated the treaty in 1936 and declared neutrality . France quickly extended the Maginot Line along the Franco-Belgian border, but not to the standard of the rest of the line. As the water table in this region is high, there was the danger of underground passages flooding, which the line's designers knew would be difficult and expensive to overcome.
In 1939, U.S. Army officer Kenneth Nichols visited the Metz sector, where he was impressed by the formidable fortifications, which he thought the Germans would have to outflank by driving through Belgium. In discussions with General Brousseau, the commander of the Metz sector, and other officers, the general outlined the French dilemma in extending the line to the sea: placing the line along the Belgian-German border required the agreement of Belgium, while putting it along the French-Belgian border meant relinquishing Belgium to the Germans. Another complication was Holland, and the various governments never resolved these problems. [ 29 ]
When the British Expeditionary Force landed in France in September 1939, they and the French reinforced and extended the Maginot Line to the sea in a flurry of construction from 1939 to 1940, accompanied by general improvements all along the line. The final line was strongest around the industrial regions of Metz , Lauter and Alsace , while other areas were, in comparison, only weakly guarded. Propaganda about the line, meanwhile, made it appear a far greater construction than it was; illustrations showed multiple storeys of interwoven passages and even underground rail yards and cinemas . This reassured Allied civilians.
Czechoslovakia also feared Hitler and began building its own defences. As an ally of France, they got advice on the Maginot design and applied it to Czechoslovak border fortifications . The design of the casemates is similar to the ones found in the southern part of the Maginot Line, and photographs of them are often confused with Maginot forts. Following the Munich Agreement and the German occupation of Czechoslovakia , the Germans were able to use the Czech fortifications to plan attacks that proved successful against the western fortifications (the Belgian Fort Eben-Emael is the best-known example).
The World War II German invasion plan of 1940 ( Sichelschnitt ) was designed to deal with the line. A decoy force sat opposite the line while a second Army Group cut through the Low Countries of Belgium and the Netherlands, as well as through the Ardennes Forest, which lay north of the main French defences. Thus the Germans were able to avoid a direct assault on the Maginot Line by violating the neutrality of Belgium, Luxembourg and the Netherlands . Attacking on 10 May, German forces were well into France within five days and they continued to advance until 24 May, when they stopped near Dunkirk .
During the advance to the English Channel , the Germans overran France's border defence with Belgium and several Maginot Forts in the Maubeuge area whilst the Luftwaffe simply flew over it. On 19 May, the German 16th Army captured the isolated petit ouvrage La Ferté (south-east of Sedan ) after conducting a deliberate assault by combat engineers backed up by heavy artillery , taking the fortifications in only four days. [ 30 ] The entire French crew of 107 soldiers was killed during the action. On 14 June 1940, the day Paris fell, the German 1st Army went over to the offensive in "Operation Tiger" and attacked the Maginot Line between St Avold and Saarbrücken . The Germans then broke through the fortification line as defending French forces retreated southward. In the following days, infantry divisions of the 1st Army attacked fortifications on each side of the penetration, capturing four petits ouvrages. The 1st Army also conducted two attacks against the Maginot Line further to the east in northern Alsace. One attack broke through a weak section of the line in the Vosges Mountains , but the French defenders stopped a second attack near Wissembourg . On 15 June, infantry divisions of the German 7th Army attacked across the Rhine River in Operation "Small Bear", deeply penetrating the defences and capturing the cities of Colmar and Strasbourg .
By early June, the German forces had cut off the line from the rest of France, and the French government was making overtures for an armistice , which was signed on 22 June in Compiègne . As the line was surrounded, the German Army attacked a few ouvrages from the rear but was unsuccessful in capturing any significant fortifications. The main fortifications of the line were still mostly intact, many commanders were prepared to hold out, and the Italian advance had been contained. Nevertheless, Maxime Weygand signed the surrender instrument and the army was ordered out of their fortifications to be taken to POW camps .
When the Allied forces invaded in June 1944, the line, now held by German defenders, was again largely bypassed; fighting touched only portions of the fortifications near Metz and in northern Alsace towards the end of 1944. During the German offensive Operation Nordwind in January 1945, Maginot Line casemates and fortifications were utilised by Allied forces, especially in the Bas-Rhin department in Grand Est , and some German units had been supplemented with flamethrower tanks in anticipation of this possibility. [ 31 ] In January 1945, von Luck and the 21st Panzer Division were tasked with cutting through the old Maginot Line defences and severing Allied links with Strasbourg as part of Operation Nordwind. He was told that no plans of the line were available, but that it was "barely manned and constituted no obstacle". However, they came up against fierce resistance and concentrated American artillery fire, and had to withdraw on 6 January 1945 and again after another attack on 8 January, although they drove a "tiny wedge" into the line. [ 32 ] Stephen Ambrose wrote that in January 1945, "a part of the line was used for the purpose it had been designed for and showed what a superb fortification it was." Here the line ran east-west, around the villages of Rittershoffen and Hatten , south of Wissembourg . [ 33 ]
After the war, the French re-manned the line and undertook some modifications. With the advent of French nuclear weapons in the early 1960s, the line became an expensive anachronism. Some of the larger ouvrages were converted to command centres. When France withdrew from NATO 's military component in 1966, much of the line was abandoned, with the NATO facilities turned back over to French forces and the rest of it auctioned off to the public or left to decay. [ 34 ] A number of old fortifications have now been turned into wine cellars , a mushroom farm , and even a disco . Besides that, a few private houses are built atop some blockhouses. [ 35 ]
Ouvrage Rochonvillers was retained by the French Army as a command centre into the 1990s but was deactivated following the disappearance of the Soviet threat. Ouvrage Hochwald is the only facility in the main line that remains in active service as a hardened command facility for the French Air Force known as Drachenbronn Airbase .
In 1968, when scouting locations for On Her Majesty's Secret Service , producer Harry Saltzman used his French contacts to gain permission to use portions of the Maginot Line as SPECTRE headquarters in the film. Saltzman provided art director Syd Cain with a tour of the complex. Still, Cain said that the location would be challenging to light and film inside and that artificial sets could be constructed at the studios for a fraction of the cost. [ 36 ] The idea was shelved.
In analysing the Maginot Line, Ariel Ilan Roth summarised its main purpose: it was not "as popular myth would later have it, to make France invulnerable", but it was constructed to make the appeal of flanking the French "far outweigh the appeal of attacking them head on". [ 5 ] J.E. Kaufmann and H.W. Kaufmann added that before construction in October 1927, the Superior Council of War adopted the final design for the line and identified that one of the main missions would be to deter a German cross-border assault with only minimal force to allow "the army time to mobilise." [ 37 ] In addition, the French envisioned that the Germans would conduct a repeat of their First World War battle plan to flank the defences and drew up their overall strategy with that in mind. [ 38 ] [ 39 ]
Julian Jackson highlighted one of the line's roles was to facilitate that strategy by "free[ing] manpower for offensive operations elsewhere... and to protect the forces of manoeuvre"; the latter included a more mechanised and modernised military, which would advance into Belgium and engage the German main thrust flanking the line. [ 38 ] In support, Roth commented that the French strategy envisioned one of two possibilities by advancing into Belgium: "either there would be a decisive battle in which France might win, or, more likely, a front would develop and stabilise". The latter meant the next war's destructive consequences would not take place on French soil. [ 5 ]
Postwar assessment of whether the Maginot Line served its purpose has been mixed. Its enormous cost and its failure to prevent German forces from invading France have caused journalists and political commentators to remain divided on whether the line was worthwhile. [ 40 ] [ 41 ]
The historian Clayton Donnell commented, "If one believes the Maginot Line was built for the primary purpose of stopping a German invasion of France, most will consider it a massive failure and a waste of money... in reality, the line was not built to be the ultimate saviour of France". [ 42 ] Donnell argued that the primary purpose of "prevent[ing] a concerted attack on France through the traditional invasion routes and to permit time for the mobilisation of troops... was fulfilled", as was the French strategy of forcing the Germans to enter Belgium, which ideally would have allowed "the French to fight on favourable terrain". However, he noted that the French failed to use the line as the basis for an offensive. [ 43 ]
Marc Romanych and Martin Rupp highlight that "poor decisions and missed opportunities" plagued the line and point to its purpose of conserving manpower: "about 20 percent of [France's] field divisions remained inactive along the Maginot Line". Belgium was overrun, and British and French forces evacuated at Dunkirk . They argue had those troops been moved north, "it is possible that Heeresgruppe A's advance could have been blunted, giving time for Groupe d'armees 1 to reorganise". [ 44 ] Kaufmann and Kaufmann commented, "When all is said and done, the Maginot Line did not fail to accomplish its original mission... it provided a shield that bought time for the army to mobilise... [and] concentrate its best troops along the Belgian border to engage the enemy." [ 45 ]
The psychological factor of the Maginot Line has also been discussed. Its construction created a false sense of security that was widely held by the French population. [ 42 ] Kaufmann and Kaufmann comment that it was an unintended consequence of André Maginot's efforts to "focus the public's attention on the work being done, emphasising the role and nature of the line". That resulted in "the media exaggerating their descriptions by turning the line into an impregnable fortified position that would seal the frontier". The false sense of security contributed "to the development of the 'Maginot mentality'". [ 46 ]
Jackson commented that "it has often been alleged that the Maginot Line contributed to France's defeat by making the military too complacent and defence-minded. Such accusations are unfounded". [ 47 ] Historians have pointed to numerous reasons for the French defeat: faulty strategy and doctrine, dispersion of forces, the loss of command and control, poor communications, faulty intelligence that overestimated German numbers, the slow French response to the German penetration of the Ardennes and a failure to understand the nature and speed of the German doctrine. [ 48 ] [ 49 ] More seriously, historians have noted that, rather than the Germans doing what the French had envisioned, the French played into the Germans' hands, culminating in their defeat. [ 50 ] [ 43 ]
When the French Army failed in Belgium, the Maginot Line covered their retreat. [ 45 ] Romanych and Rupp indicate that except for the loss of several insignificant fortifications from insufficient defending troops, the actual fortifications and troops "withstood the test of battle", repulsed numerous attacks, and "withstood intense aerial and artillery bombardment". [ 51 ] Kaufmann and Kaufmann point to the Maginot Line along the Italian border, which "demonstrated the effectiveness of the fortifications... when properly employed". [ 52 ]
The term " Maginot Line " has become a part of the English language: "America's Maginot Line" was the title used for an Atlantic Magazine article about America's military bases in Asia. [ 53 ] The article portrayed vulnerability by showing a rocket being transported through a marshy area atop an ox. [ 54 ] New York Times headlined "Maginot Line in the Sky" in 2000 [ 55 ] and "A New Maginot Line" in 2001. [ 56 ] It was also frequently referenced in wartime films, notably Thunder Rock , The Major and the Minor (albeit as a comedic metaphor) and Passage to Marseille .
Somewhat like " line in the sand " it is also used in non-military situations, as in "Reagan's budgetary Maginot Line." [ 57 ]
Canadian singer-songwriter Geoff Berner has a song called "Maginot Line" on his album We Shall Not , detailing the debacle. [ 58 ]
| https://en.wikipedia.org/wiki/Maginot_Line |
Magma oceans are vast fields of surface magma that exist during periods of a planet 's or some natural satellite 's accretion when the celestial body is completely or partly molten . [ 1 ]
In the early Solar System , magma oceans were formed by the melting of planetesimals and planetary impacts . [ 1 ] Small planetesimals are melted by the heat provided by the radioactive decay of aluminium-26 . [ 1 ] As planets grew larger, the energy was then supplied from giant impacts with other planetary bodies. [ 2 ] Magma oceans are integral parts of planetary formation as they facilitate the formation of a core through metal segregation [ 3 ] and an atmosphere and hydrosphere through degassing. [ 4 ] Evidence exists to support the existence of magma oceans on both the Earth and the Moon . [ 1 ] [ 5 ] Magma oceans may survive for millions to tens of millions of years, interspersed by relatively mild conditions.
The sources of the energy required for the formation of magma oceans in the early Solar System were the radioactive decay of aluminium-26, accretionary impacts, and core formation. [ 1 ] The abundance and short half life of aluminium-26 allowed it to function as one of the sources of heat for the melting of planetesimals. With aluminium-26 as a heat source, planetesimals that had accreted within 2 Ma after the formation of the first solids in the Solar System could melt. [ 1 ] Melting in the planetesimals began in the interior and the interior magma ocean transported heat via convection. [ 1 ] Planetesimals larger than 20 km in radius that accreted within 2 Ma are expected to have melted, although not completely. [ 1 ]
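As a rough illustration of why the 2 Ma accretion window quoted above matters, the standard radioactive decay law can be evaluated with the accepted ~0.72 Myr half-life of aluminium-26 (the half-life value is common reference data, not quoted in the text); a minimal sketch:

```python
# Fraction of Al-26 remaining after a given time, using the decay law
# N/N0 = (1/2)**(t / t_half). The ~0.72 Myr half-life is the accepted
# value for Al-26; the 2 Myr accretion cutoff is the figure quoted above.

AL26_HALF_LIFE_MYR = 0.72  # half-life of aluminium-26, millions of years

def remaining_fraction(t_myr: float, t_half: float = AL26_HALF_LIFE_MYR) -> float:
    """Fraction of the initial Al-26 inventory left after t_myr."""
    return 0.5 ** (t_myr / t_half)

# After the 2 Myr window, only ~15% of the initial Al-26 heat source
# remains, which is why later-accreting planetesimals could no longer
# melt from this source alone.
print(f"{remaining_fraction(2.0):.3f}")
```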
The kinetic energy provided by accretionary impacts and the loss of potential energy from a planet during core formation are also large heat sources for planet melting. [ 1 ] Core formation, also referred to as metal-silicate differentiation, is the separation of metallic components from silicate in the magma, which sink to form a planetary core. [ 1 ] Accretionary impacts that produce heat for the melting of planet embryos and large terrestrial planets have an estimated timescale of tens to hundreds of millions of years. [ 1 ] A prime example would be the Moon-forming impact on Earth, which is thought to have formed a magma ocean with a depth of up to 2000 km. [ 1 ] [ 5 ] The energy of accretionary impacts foremost melts the exterior of the planetary body, while the potential energy released by core differentiation and the sinking of metals melts the interior. [ 1 ]
The findings of the Apollo missions were the first articles of evidence to suggest the existence of a magma ocean on the Moon. [ 1 ] The rocks in the samples acquired from the missions were found to be composed of a mineral called anorthite . [ 1 ] Anorthite is a calcium-rich variety of plagioclase feldspar , which is lower in density than magma. [ 1 ] This discovery gave rise to the hypothesis that the rocks formed by ascending to the surface of a magma ocean during the early life stages of the Moon. [ 1 ] Additional evidence for the existence of the Lunar Magma Ocean includes the sources of mare basalts and KREEP (K for potassium, REE for rare-earth elements, and P for phosphorus). [ 1 ] The existence of these components within the mostly anorthositic crust of the Moon is consistent with the solidification of the Lunar Magma Ocean. [ 1 ] Furthermore, the abundance of the trace element europium within the Moon's crust suggests that it was absorbed from the magma ocean, leaving europium deficits in the mare basalt rock sources of the Moon's crust. [ 1 ] The lunar magma ocean was initially 200-300 km thick and the magma achieved a temperature of about 2000 K. [ 5 ] After the early stages of the Moon's accretion, the magma ocean was subjected to cooling caused by convection in the planet's interior. [ 5 ]
During its formation, the Earth likely suffered a series of magma oceans resulting from giant impacts, [ 6 ] the final one being the Moon-forming impact. [ 5 ] The best chemical evidence for the existence of magma oceans on Earth is the abundance of certain siderophile elements in the mantle that record magma ocean depths of approximately 1000 km during accretion. [ 7 ] [ 8 ] The scientific evidence to support the existence of magma oceans on early Earth is not as developed as the evidence for the Moon because of the recycling of the Earth's crust and mixing of the mantle. [ 1 ] Unlike Earth, indications of a magma ocean on the Moon such as the flotation crust, elemental components in rocks, and KREEP have been preserved throughout its lifetime. [ 1 ]
Today Earth's outer core is a liquid layer about 2,260 km (1,400 mi) thick, composed mostly of molten iron and molten nickel that lies above Earth's solid inner core and below its mantle . [ 9 ] [ 10 ] [ 11 ] This layer may be considered as an ocean of molten iron and nickel inside Earth. | https://en.wikipedia.org/wiki/Magma_ocean |
Magmatic water , also known as juvenile water , is an aqueous phase in equilibrium with minerals that have been dissolved by magma deep within the Earth's crust and is released to the atmosphere during a volcanic eruption . It plays a key role in assessing the crystallization of igneous rocks , particularly silicates , as well as the rheology and evolution of magma chambers . Magma is composed of minerals , crystals and volatiles in varying relative natural abundance . [ 1 ] Magmatic differentiation varies significantly based on various factors, most notably the presence of water. [ 2 ] An abundance of volatiles within magma chambers decreases viscosity and leads to the formation of minerals bearing halogens , including chloride and hydroxide groups. In addition, the relative abundance of volatiles varies within basaltic , andesitic , and rhyolitic magma chambers, leading to some volcanoes being exceedingly more explosive than others. Magmatic water is practically insoluble in silicate melts but has demonstrated the highest solubility within rhyolitic melts. An abundance of magmatic water has been shown to lead to high-grade deformation, as a result of altering the composition of hydrogen isotope biogeochemistry ( δ 2 H ) and stable oxygen isotope ratios ( δ 18 O ) within host rocks.
Magma exists in three main forms that vary in composition. [ 3 ] When magma erupts and crystallizes at the surface, it forms an extrusive igneous rock. Depending on the composition of the magma, it may form either rhyolite , andesite , or basalt . [ 3 ] Volatiles, particularly water and carbon dioxide, significantly impact the behavior of each form of magma differently. [ 4 ] , [ 2 ] Magma with a high concentration of volatiles crystallizes at temperatures up to hundreds of degrees lower, which also reduces its inherent viscosity. [ 5 ] The behavior of magma is also altered by varying mineralogic compositions, as noted in Figure 1 . For instance, magmatic water leads to the crystallization of several minerals abundant in hydroxyl or halogenated groups, including garnets . Analyses of these minerals can be used to determine the conditions of formation in the interior of rocky planets . [ 5 ] , [ 6 ]
Volatiles are present in nearly all magma in different concentrations. Examples of volatiles within magma include water, carbon dioxide, and halogen gases. [ 1 ] High pressures allow these volatiles to stay relatively stable within solution. [ 1 ] However, over time, as the magmatic pressure decreases, volatiles will rise out of solution in the gaseous phase, further decreasing the magmatic pressure. [ 1 ] These pressure differences cause drastic differences in the volume of a magma. [ 1 ] Pressure difference causes some forms of volcanoes to be highly explosive and others to be effusive . [ 1 ]
An example of a mineral containing hydroxyl groups is garnet. Garnet is a nominally anhydrous mineral commonly analyzed within geological subdisciplines because of its general stability. One study analyzed the presence of garnets within the upper mantle through infrared spectroscopy and showed absorption at approximately 3500 cm −1 , which is consistent with the presence of hydroxyl groups . These garnets have been shown to vary in composition depending on their geographic origin. [ 6 ] One particular study in Southern Africa determined concentrations ranging from 1 ppm to 135 ppm. [ 6 ] However, this is significantly lower than the hydroxyl content in regions such as the Colorado Plateau . It was also demonstrated that there is an inverse correlation between the concentration of OH and that of Mg + Fe.
Basaltic magma is the most abundant in iron, magnesium, and calcium but the lowest in silica, potassium, and sodium. [ 1 ] , [ 3 ] The composition of silica within basaltic magma ranges from 45-55 weight percent (wt.%), or mass fraction of a species. [ 1 ] It forms in temperatures ranging from approximately 1830 °F to 2200 °F. [ 1 ] , [ 3 ] Basaltic magma has the lowest viscosity and volatiles content, yet still may be up to 100,000 times more viscous than water. [ 1 ] Because of its low viscosity, this is the least explosive form of magma. Basaltic magma may be found in regions such as Hawaii , known for its shield volcanoes . [ 1 ] , [ 7 ]
Basaltic magma forms minerals such as calcium-rich plagioclase feldspar and pyroxene . The water composition of basaltic magma varies depending on the evolution of the magma chamber. Arc magmas, such as those of Irazú in Costa Rica, range from 3.2-3.5 wt.% water. [ 8 ]
Andesitic magma is an intermediate magma and is approximately evenly dispersed regarding iron, magnesium, calcium, sodium, and potassium. [ 1 ] [ 3 ] The silica composition of andesitic magma ranges from 55 - 65 wt.%. [ 1 ] It forms in temperatures ranging from approximately 1470 °F to 1830 °F. [ 1 ] , [ 3 ] Andesitic magma has an intermediate viscosity and volatiles content. [ 1 ] It forms minerals such as plagioclase feldspar, mica , and amphibole .
Rhyolitic magma is felsic and the most abundant in silica, potassium, and sodium but the lowest in iron, magnesium, and calcium. [ 1 ] [ 3 ] The silica composition of rhyolitic magma ranges from 65-75 wt.%. [ 1 ] It forms in the lowest temperature range, from about 1200 °F to 1470 °F. [ 1 ] , [ 3 ] Rhyolitic magma has the highest viscosity and gas content. [ 1 ] It produces the most explosive volcanic eruptions, including the catastrophic eruption of Mount Vesuvius . [ 1 ] It forms minerals such as orthoclase feldspar, sodium-rich plagioclase feldspar, quartz , mica, and amphibole.
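The silica ranges quoted above (basaltic 45-55 wt.%, andesitic 55-65 wt.%, rhyolitic 65-75 wt.%) can be collected into a small illustrative classifier; assigning boundary values to the more silica-rich type is an arbitrary choice made here:

```python
# Illustrative classifier based on the SiO2 weight-percent ranges given
# in the text. Boundary values go to the more silica-rich type; values
# outside 45-75 wt.% are reported as such.

def classify_magma(sio2_wt_pct: float) -> str:
    if sio2_wt_pct < 45 or sio2_wt_pct > 75:
        return "outside tabulated range"
    if sio2_wt_pct < 55:
        return "basaltic"
    if sio2_wt_pct < 65:
        return "andesitic"
    return "rhyolitic"

print(classify_magma(50))  # basaltic
print(classify_magma(60))  # andesitic
print(classify_magma(70))  # rhyolitic
```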
Precipitation of minerals is affected by water solubility within silicate melts, where water typically exists as hydroxyl groups bound to Si 4+ or Group 1 and Group 2 cations, in concentrations of approximately 6-7 wt.%. [ 9 ] , [ 10 ] Specifically, the equilibrium of molecular water with dissolved oxide yields hydroxyl groups, with a K eq approximated between 0.1 and 0.3. [ 10 ]
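The equilibrium just described, molecular water reacting with oxide ions dissolved in the melt to give hydroxyl groups, is conventionally written in the standard speciation form (the explicit notation is an assumption here, grounded in the K eq range quoted above):

```latex
\mathrm{H_2O\,(melt)} + \mathrm{O^{2-}\,(melt)} \;\rightleftharpoons\; 2\,\mathrm{OH^{-}\,(melt)},
\qquad K_\mathrm{eq} \approx 0.1\text{--}0.3
```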
This inherent solubility is low yet varies greatly depending on the pressure of the system. Rhyolitic magmas have the highest solubility, ranging from approximately 0% at the surface to nearly 10% at 1100 °C and 5 kbar . Degassing occurs when hydrous magma is uplifted, gradually converting the dissolved water to an aqueous phase. This aqueous phase is typically abundant in volatiles, metals ( copper , lead , zinc , silver and gold ), and Group 1 and Group 2 cations. Depending on which cation the hydroxyl is bound to, the properties of a volcanic eruption, particularly its explosiveness, are significantly affected. [ 9 ] Under unusually high temperature and pressure conditions exceeding 374 °C and 218 bar, water enters a supercritical fluid state in which it is neither a liquid nor a gas. [ 9 ]
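As a rough sketch of the pressure dependence described above, one can assume a square-root dependence of dissolved water on pressure, a common first-order approximation that is not stated in the source, and calibrate it to the ~10 wt.% at 5 kbar figure quoted for rhyolitic melts:

```python
import math

# Rough sketch only: water solubility in rhyolitic melt assuming a
# square-root pressure dependence (an illustrative assumption; real
# solubility models are more complex), calibrated to the ~10 wt.% at
# 5 kbar figure quoted in the text.

W_REF = 10.0    # wt.% water at the reference pressure (from the text)
P_REF = 5000.0  # reference pressure in bar (5 kbar)

def water_solubility_wt_pct(p_bar: float) -> float:
    """Approximate dissolved water (wt.%) at pressure p_bar."""
    return W_REF * math.sqrt(p_bar / P_REF)

print(f"{water_solubility_wt_pct(1250):.1f}")  # ~5.0 wt.% at 1.25 kbar
```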
Isotopic data from various locations within the Mid-Atlantic Ridge indicate the presence of mafic-to-felsic intrusive igneous rocks, including gabbro , diorite , and plagiogranite . [ 11 ] These rocks showed high-grade metamorphism, at temperatures exceeding 600 °C, because of the presence of magmatic water. This deformation depleted host rocks of 18 O, leading to further analysis of the ratio of 18 O to 16 O ( δ 18 O ). [ 11 ]
Water in equilibrium with igneous melts should bear the same isotopic signature for 18 O and δ 2 H . However, isotopic studies of magmatic water have demonstrated similarities to meteoric water , indicating circulation of magmatic and meteoric groundwater systems. [ 12 ]
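The δ 18 O and δ 2 H notation used throughout this section expresses an isotope ratio as a per-mil deviation from a reference standard (typically VSMOW for oxygen; the VSMOW ratio below is standard reference data, and the sample ratio is made up for illustration):

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in per mil.

VSMOW_18O_16O = 0.0020052  # 18O/16O ratio of the VSMOW standard

def delta_permil(r_sample: float, r_standard: float = VSMOW_18O_16O) -> float:
    """delta-value (per mil) of a sample isotope ratio vs. a standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample enriched in 18O relative to the standard gives a positive delta:
print(f"{delta_permil(0.0020172):+.1f}")  # roughly +6.0 per mil
```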
Isotopic analyses of fluid inclusions indicate a wide range of δ 18 O and δ 2 H content. [ 13 ] Studies within these environments have shown an abundance of 18 O and depletion in 2 H relative to SMOW and meteoric waters. Within ore deposits, fluid inclusion data showed that the presence of δ 18 O vs δ 2 H are within the expected range. | https://en.wikipedia.org/wiki/Magmatic_water |
A magnesium(I) dimer is a molecular compound containing a magnesium-to-magnesium bond (Mg-Mg), giving the metal an apparent +1 oxidation state. Alkaline earth metals such as magnesium are commonly found in the +2 oxidation state . The M 2+ ions are considered redox -inert, meaning that the +2 state strongly predominates. [ 1 ] However, recent advancements in main group chemistry have yielded low-valent magnesium(I) dimers , also given as Mg(I), with the first compound being reported in 2007. [ 2 ] They can be generally represented as LMg-MgL, with L being a monoanionic ligand . [ 3 ] For example, β-diketiminate , commonly referred to as Nacnac, is a useful chelate for these complexes. By tuning the ligand, the thermodynamics of the complex change. [ 4 ] For instance, adding substituents onto Nacnac can contribute to the steric bulk , which can affect reactivity and stability . [ 5 ] As their discovery has grown, so has their usefulness: they are employed in organic and inorganic reduction reactions, being soluble in hydrocarbon solvents such as toluene , stoichiometric , selective, and safe. [ 2 ]
The first zinc (I) dimer was isolated in 2004, with more being synthesized in subsequent years. [ 6 ] The chemical similarities between magnesium and zinc led researchers to believe that a Mg(I) dimer could also be achieved. With calculations indicating that Mg—Mg bonded dimers would be stable, a synthesis route was needed.
S-block compounds in low oxidation states can be short-lived. Various techniques are available for their generation and detection, relying on frozen inert gas matrices , low pressures, high temperatures in the gas phase, or a combination of these; these can then be combined with theoretical studies to gain more information about the complex. Matrix isolation techniques were carried out to gain spectroscopic insight into how the Mg(I) dimer might behave. [ 7 ]
By heating magnesium diboride, MgB 2 , to 700 °C at a pressure of 0.1 mbar and passing HCl gas over it, several products are formed, such as magnesium monochloride, MgCl. The generation of •MgCl and subsequent compounds from the reaction then underwent further study. At 10 Kelvin (K) , the products were trapped in an inert gas matrix and studied by IR and Raman spectroscopic techniques, combined with Density Functional Theory (DFT) calculations. This revealed the monomeric and dimeric Mg(I) halides , •MgCl and ClMgMgCl, the latter a linear molecule . [ 7 ] While these studies were useful in gaining insight into the Mg-Mg bond characteristics, they failed to yield a Mg(I) dimer stable under ambient conditions.
The stability of the Mg-Mg bond needed to be addressed. Researchers began to investigate sterically demanding guanidinates and amidinates . Their stabilizing abilities in low-oxidation-state chemistry were attractive, since they had allowed other low-valent main group complexes to be achieved. [ 8 ] This research also allowed for the first stable Mg(I) dimer, [{(Priso)Mg} 2 ]. [ 9 ] Potassium reductions of heteroleptic Mg(II) iodide precursor complexes were then carried out. The guanidinato and β-diketiminato Mg(II) iodide etherate precursor complexes can be prepared from the free N-H ligands and methylmagnesium iodide in diethyl ether . An example of the precursor synthesis is shown below. [ 10 ] An additional precursor synthesis is shown, needed for [{( t Bu Nacnac)Mg} 2 ], which is explained in the section below.
Reducing these precursors with sodium or potassium has given dimeric magnesium(I) compounds such as [{(Priso)Mg} 2 ] and others bearing substituted versions of β-diketiminato, with a general formula of [{(ArNacnac)Mg} 2 ]. However, as the size of the substituent on Nacnac decreased, the difficulty of isolating a magnesium(I) dimer increased. This is shown by the phenyl-substituted ligand, where only a Mg(II) compound was obtained, given by [(PhNacnac) 2 Mg]. For a bulkier analogue such as [{( t Bu Nacnac)Mg} 2 ], a different synthesis route was carried out, using dibutylmagnesium and iodine, since the free β-diketimine t BuNacnacH has a different reactivity: it does not react with the Grignard reagent shown above. Instead, it can be heated with dibutylmagnesium and become deprotonated .
For the reactant, the [ clarification needed ] was stabilized by utilizing a bulkier, or more sterically demanding, N-ligand. This reaction is carried out through potassium reduction of the α-diimine MeDip DAB (chemical formula [(DipNCMe) 2 ]) and Mg(II) chloride in tetrahydrofuran (THF). The Mg(I) complexes shown are all thermally stable; some can even tolerate temperatures up to 300 °C. They range in color from colorless to orange. As these compounds have been investigated further, the dimers have been found to be kinetically stabilized by multiple β-diketiminate derivatives, a guanidinate, a diiminophosphinate, an enediamide, and several diimine-enolates . [ 3 ] , [ 11 ] , [ 12 ] , [ 13 ]
The Mg(I) dimer formula, LMgMgL, has undergone multiple theoretical investigations regarding the bonds. Furthermore, L, a monoanionic ligand, can also include halides, hydrogen, small alkyl groups, aryl groups, cyclopentadienyl with respective derivatives and chelating monoanionic nitrogen ligands. Mg—Mg bonded molecules underwent the primary investigation, with the bond length found to be 2.76-2.89 Å. Additionally, the bond dissociation energy was found to be between 45 and 48 kcal mol −1 . Specifically, for ClMgMgCl, it was found to be 47.1 kcal mol −1 . [ 7 ]
The Mg-Mg bond in a neutral magnesium(I) dimer has been shown to be predominantly sigma-bonding , arising from the s-orbital overlap of the two metals. [ 14 ] The bonding interaction corresponds to the highest occupied molecular orbital (HOMO), the highest-energy occupied orbital of the molecule. This is reflected in the Wiberg Bond Index (WBI): the sigma single bond gives a WBI value of 0.9, having 90% s-character. Further theoretical investigations have shown that this does not hold for every complex; there can be notable p-orbital contribution to the Mg—Mg bond, determined to be 55% in some complexes as the charge changes. [ 12 ] There were also findings regarding the lowest unoccupied molecular orbital (LUMO). For example, bonding character was also discovered in the nearly degenerate LUMO and LUMO+1, with a HOMO-LUMO gap of 93 kcal mol −1 . [ 9 ]
Reducing agents are in constant demand, as the rapid rise of low-oxidation-state chemistry has relied on them. Common examples include potassium graphite (KC 8 ), sodium naphthalenide and its alkali derivatives, and s-block metals in their elemental form, such as lithium. [ 15 ] However, these reducing agents can have drawbacks, especially concerning access to low oxidation states: they may not be soluble in certain solvents, may lack selectivity, or can over-reduce the initial precursor, and other side reactions can occur. Corrosion must also be considered. Pure magnesium serves as an example: as the humidity increases, the corrosion rate of pure magnesium increases. [ 16 ] At 10% humidity there is no corrosion; at 30% there is a thin layer of surface oxide , with slight corrosion evident; at 80% an amorphous phase coats about 30% of the surface and shows significant corrosion. [ 17 ] Such disadvantages of conventional reducing agents can make the dimers more appealing for certain chemical syntheses.
Mg(I) dimers have the potential to be reducing agents in organic and organometallic synthesis . Their thermal stability, moderate air and water sensitivity , and wide solubility in organic solvents make the dimers attractive to chemists. An example can be shown through low-oxidation-state germanium (Ge) chemistry, where using Mg(I) dimers led to a germanium-germanium double bond, albeit in low yield . Additionally, the ligand Dip Nacnac has poor solubility in the reaction solvent, ether , which allows for easy separation. [ 2 ]
Additionally, hydrogen storage has gained significant research attention as an alternative to fossil fuels.
Ammonia borane , NH 3 BH 3 , has a high hydrogen content , at 19.6 wt.%, making it attractive as a hydrogen storage material. [ 18 ] However, there are issues regarding the safety, kinetics, and practical characteristics of the compound. More broadly, s-block amidoboranes have been researched as an alternative, with some interest lying in magnesium amidoborane. [ 18 ] Some studies have shown that reductive dehydrogenation of ammonia borane can be achieved using Mg(I) dimers. [ 19 ]
Reactions of Grignard reagents , given by RMgX, with R being a monoanionic organic substituent and X being a halide, are thought to proceed through some magnesium(I) intermediates , such as RMgMgX. [ 20 ] It is believed that some of the transformations that occur with Grignard reagents may proceed via single electron transfer from the RMg to the substrate . [ 9 ] Organic one-electron reductions are also believed to be in equilibrium with univalent magnesium compounds such as XMgMgX. [ 21 ] This reagent could have the potential to be more selective than other one-electron reductants, such as samarium (II) iodide, SmI 2 . [ 22 ]
Mg(I) dimers have also been researched for carbon monoxide (CO) activation . Further research regarding the synthesis of these complexes revealed that their behavior can be compared to that of low-valent f-block compounds in the reduction of CO 2 , isocyanides , and nitriles . [ 23 ] , [ 24 ]
Computational studies further reinforced this idea by showing parallels with CO activation by f-block metal hydride complexes. The researchers first started with Mg(I) dimers, which were then hydrogenated in hopes of generating magnesium(II) hydride dimers. The hydrogenation of Mg(I) dimers in a CO atmosphere led to cross-coupled alkoxy products. The computational studies showed that the mechanism of this reaction proceeds similarly to related reactions of f-block metal hydride complexes. More specifically, researchers drew a close analogy to the reactivity of [{(DippNacnac)Mg} 2 ] towards CO 2 : after the reaction was carried out, the dimer was shown to have a reactivity towards CO 2 similar to that of samarium(II) or uranium(III) complexes. [ 13 ] This reaction also illustrates the potential magnesium(I) dimers have for the conversion of H 2 /CO mixtures. By using more reactive dimers, it was hypothesized that the stoichiometric or catalytic transformation of CO/H 2 mixtures into value-added oxygenate products could be achieved. [ 3 ] Finally, the similarity to low-valent f-block complexes could give rise to cheaper, nontoxic , and nonradioactive practice. Comparatively, the diamagnetic dimers also differ in electronic properties from the paramagnetic lanthanides and actinides . [ 25 ]
Magnesium/Teflon/Viton ( MTV ) is a pyrolant . Teflon and Viton are trademarks of DuPont for polytetrafluoroethylene , (C 2 F 4 ) n , and fluoroelastomer , (CH 2 CF 2 ) n (CF(CF 3 )CF 2 ) n . Thermites based on magnesium/Teflon/Viton, aka MTV-compositions, have been in use since the 1950s as payloads in infrared decoy flare applications. [ 1 ] Derived from the acronym MTV is the expression "MTV-Flare" for pyrotechnic infrared decoy flares.
Whereas in conventional visual pyrotechnic illuminants sodium nitrate , NaNO 3 , is used as an oxidizer , in MTV compositions the polytetrafluoroethylene, (C 2 F 4 ) n , acts as the fluorine source. The very high reaction enthalpy , Δ R H , upon combustion of magnesium with PTFE is based on the formation of magnesium fluoride, which has a very high negative enthalpy of formation (Δ f H° = −1124 kJ mol −1 ):
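The overall reaction and its enthalpy can be sketched with Hess's law. This is a rough estimate only: the enthalpy of formation of MgF 2 comes from the text, but the per-repeat-unit value for solid PTFE below is an assumed illustrative number, not a figure from this article.

```python
# Hess's law sketch for the overall MTV reaction per PTFE repeat unit:
#   2 Mg(s) + C2F4 (PTFE unit) -> 2 MgF2(s) + 2 C(s)
dHf_MgF2 = -1124.0       # kJ/mol, enthalpy of formation quoted in the text
dHf_PTFE_unit = -809.0   # kJ/mol per C2F4 repeat unit -- assumed illustrative value
# Elements in their standard states (Mg, C) contribute zero.
dH_rxn = 2 * dHf_MgF2 - dHf_PTFE_unit
print(f"{dH_rxn:.0f} kJ per C2F4 repeat unit")  # -1439 kJ per C2F4 repeat unit
```

The strongly negative result reflects why the Mg/PTFE reaction is so energetic, whatever the exact PTFE value used.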
Because much carbon and heat are released upon combustion of MTV, the combustion flame can be described as a grey body of high emissivity . [ 2 ]
Depending on stoichiometry , MTV displays varying burn rates and yields different reaction products. With constant Viton content the burn rate increases exponentially with increasing magnesium content. [ 3 ] Nevertheless, the burn rate of MTV, as with many metallized pyrotechnic compositions, depends strongly on the specific surface area of the metal fuel, that is, its particle morphology and dimensions. Generally, magnesium powder with a high specific surface area will exhibit a higher burn rate than powder with a smaller specific surface area. The main reaction products for MTV at Mg contents between 30 and 65 wt% are magnesium fluoride , soot and vaporized magnesium. [ 4 ]
For aerial decoy flares magnesium rich compositions are used with Mg contents between 55 and 65 wt%. At these stoichiometries only a part of the applied Mg reacts with the PTFE. The surplus Mg is vaporised and reacts with the atmospheric oxygen; likewise the thermally excited soot reacts with the atmospheric oxygen:
Pyrotechnic compositions based on magnesium/polytetrafluoroethylene with stoichiometries from 25 wt% to 90 wt% magnesium are, according to German explosives legislation, the Koenen test ( steel sleeve test ), and the BAM impact test , explosive substances. Due to their sensitivity and their reaction behaviour these substances are categorized as group 1.1.2. [ 5 ] MTV compositions explode under minimal confinement (including self-confinement) at relatively low amounts. MTV compositions are sensitive toward thermal ignition.
In addition MTV compositions in loose and pressed state are extraordinarily sensitive to friction and electrostatic discharges (ESD). [ 6 ] Hence, suitable measures have to be taken to avoid ESD while processing and handling of MTV.
Since aircraft and helicopters could (and still can) counter surface-to-air and air-to-air missiles with the substance, MTV was a classified issue until the mid-1980s. It was not until 1997 that the U.S. government released a formerly classified invention, U.S. patent 5,679,921 (filing year 1957), that originally described the properties and applications of MTV. [ 7 ]
Although missile seeker development has improved resistance against MTV flares, there are still numerous missile systems fielded worldwide based on first-generation technology. Hence MTV flares are still not obsolete for countering unknown threats. Together with advanced spectral flares (see Flare (countermeasure) ) they are part of the so-called "cocktail solution". [ 8 ]
E.-C. Koch, Metal-Fluorocarbon Based Energetic Materials , Wiley-VCH, 2012 , 360 pages | https://en.wikipedia.org/wiki/Magnesium/Teflon/Viton |
Magnesium aluminide is an intermetallic compound of magnesium and aluminium . Common phases (molecular structures) include the beta phase (Mg 2 Al 3 ) and the gamma phase (Mg 17 Al 12 ), which both have cubic crystal structures. Magnesium aluminides are important constituents of 5XXX aluminium alloys (aluminium-magnesium) and magnesium-aluminium alloys, determining many of their engineering properties. Owing to their low density and high strength, magnesium aluminides are important for aircraft engines. [ 2 ] MgAl has also been investigated for use as a reactant to produce metal hydrides in hydrogen storage technology. Like many intermetallics, MgAl compounds often have unusual stoichiometries with large and complex unit cells .
β-Mg 2 Al 3 is a complex metallic alloy which crystallizes in a cubic unit cell with 1168 atoms per cell. If the Wyckoff positions were fully occupied, the structure would have 1832 atoms, but the sites are only partially occupied. The structure has very low density, and has been suggested for use in hydrogen storage . [ 3 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Magnesium_aluminide |
Magnesium azide is an inorganic chemical compound with the formula Mg(N 3 ) 2 . It is composed of the magnesium cation ( Mg 2+ ) and the azide anions ( N − 3 ).
Magnesium azide hydrolyzes easily. [ 1 ] [ 2 ] Like most azides, it is explosive.
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Magnesium_azide |
Magnesium chloride is an inorganic compound with the formula Mg Cl 2 . It forms hydrates MgCl 2 · n H 2 O , where n can range from 1 to 12. These salts are colorless or white solids that are highly soluble in water. These compounds and their solutions, both of which occur in nature, have a variety of practical uses. Anhydrous magnesium chloride is the principal precursor to magnesium metal, which is produced on a large scale. Hydrated magnesium chloride is the form most readily available. [ 2 ]
Magnesium chloride can be extracted from brine or sea water . In North America and South America, it is obtained primarily from Great Salt Lake brine. In the Jordan Valley , it is obtained from the Dead Sea . The mineral bischofite ( MgCl 2 ·6H 2 O ) is extracted (by solution mining) out of ancient seabeds, for example, the Zechstein seabed in northwest Europe. Some deposits result from high content of magnesium chloride in the primordial ocean. [ 3 ] Some magnesium chloride is made from evaporation of seawater.
In the Dow process , magnesium chloride is regenerated from magnesium hydroxide using hydrochloric acid :
It can also be prepared from magnesium carbonate by a similar reaction.
MgCl 2 crystallizes in the cadmium chloride ( CdCl 2 ) motif. The hydrates lose water upon heating: n = 12 (−16.4 °C), 8 (−3.4 °C), 6 (116.7 °C), 4 (181 °C), 2 (about 300 °C). [ 4 ] In the hexahydrate, the Mg 2+ is also octahedral , being coordinated to six water ligands . [ 5 ] The octahydrate and the dodecahydrate can be crystallized from water below 298 K. As verified by X-ray crystallography , these "higher" hydrates also feature [Mg(H 2 O) 6 ] 2+ ions. [ 6 ] A decahydrate has also been crystallized. [ 7 ]
Anhydrous MgCl 2 is produced industrially by heating the complex salt named hexamminemagnesium dichloride [Mg(NH 3 ) 6 ] 2+ (Cl − ) 2 . [ 2 ] The thermal dehydration of the hydrates MgCl 2 · n H 2 O ( n = 6, 12) does not occur straightforwardly. [ 8 ]
As suggested by the existence of hydrates, anhydrous MgCl 2 is a Lewis acid , although a weak one. One derivative is tetraethylammonium tetrachloromagnesate [N(CH 2 CH 3 ) 4 ] 2 [MgCl 4 ] . The adduct MgCl 2 ( TMEDA ) is another. [ 9 ] In the coordination polymer with the formula MgCl 2 ( dioxane ) 2 , Mg adopts an octahedral geometry. [ 10 ] The Lewis acidity of magnesium chloride is reflected in its deliquescence , meaning that it attracts moisture from the air to the extent that the solid turns into a liquid.
Anhydrous MgCl 2 is the main precursor to metallic magnesium. The reduction of Mg 2+ to metallic Mg is performed by electrolysis in molten salt . [ 2 ] [ 11 ] As is also the case for aluminium , electrolysis in aqueous solution is not possible because the metallic magnesium produced would immediately react with water; in other words, the H + of water would be reduced to gaseous H 2 before Mg 2+ could be reduced. Direct electrolysis of molten MgCl 2 in the absence of water is therefore required, because the reduction potential to obtain Mg lies below the stability domain of water on an E h –pH diagram ( Pourbaix diagram ).
The production of metallic magnesium at the cathode (reduction reaction) is accompanied by the oxidation of the chloride anions at the anode with release of gaseous chlorine . This process is developed at a large industrial scale.
Magnesium chloride is one of many substances used for dust control, soil stabilization , and wind erosion mitigation. [ 12 ] When magnesium chloride is applied to roads and bare soil areas, both positive and negative performance issues occur which are related to many application factors. [ 13 ]
Ziegler-Natta catalysts , used commercially to produce polyolefins , often contain MgCl 2 as a catalyst support . [ 14 ] The introduction of MgCl 2 supports increased the activity of traditional catalysts and allowed the development of highly stereospecific catalysts for the production of polypropylene . [ 15 ]
Magnesium chloride is also a Lewis acid catalyst in aldol reactions . [ 16 ]
Magnesium chloride is used for low-temperature de-icing of highways , sidewalks , and parking lots . When highways have dangerous ice buildup, road maintainers apply magnesium chloride to deter ice from bonding to the pavement, allowing snow plows to clear treated roads more efficiently.
For the purpose of preventing ice from forming on pavement, magnesium chloride is applied in three ways: anti-icing, which involves spreading it on roads to prevent snow from sticking and forming; prewetting, which means a liquid formulation of magnesium chloride is sprayed directly onto salt as it is being spread onto roadway pavement, wetting the salt so that it sticks to the road; and pretreating, when magnesium chloride and salt are mixed together before they are loaded onto trucks and spread onto paved roads. Calcium chloride damages concrete twice as fast as magnesium chloride. [ 17 ] The amount of magnesium chloride is supposed to be controlled when it is used for de-icing as it may cause pollution to the environment. [ 18 ]
Magnesium chloride is used in nutraceutical and pharmaceutical preparations . The hexahydrate is sometimes advertised as " magnesium oil ". Magnesium chloride is also an electrolyte .
Magnesium chloride ( E511 [ 19 ] ) is an important coagulant used in the preparation of tofu from soy milk .
In Japan it is sold as nigari ( にがり , derived from the Japanese word for "bitter"), a white powder produced from seawater after the sodium chloride has been removed, and the water evaporated. In China, it is called lushui ( 卤水 ).
Nigari or lushui is, in fact, natural magnesium chloride, meaning that it is not completely refined (it contains up to 5% magnesium sulfate and various minerals). The crystals originate from lakes in the Chinese province of Qinghai and are then processed in Japan.
Because magnesium is a mobile nutrient, magnesium chloride can be effectively used as a substitute for magnesium sulfate (Epsom salt) to help correct magnesium deficiency in plants via foliar feeding . The recommended dose of magnesium chloride is smaller than the recommended dose of magnesium sulfate (20 g/L). [ 20 ] This is due primarily to the chlorine present in magnesium chloride, which can easily reach toxic levels if over-applied or applied too often. [ 21 ]
It has been found that higher concentrations of magnesium in tomato and some pepper plants can make them more susceptible to disease caused by infection of the bacterium Xanthomonas campestris , since magnesium is essential for bacterial growth. [ 22 ]
It is used to supply the magnesium necessary to precipitate phosphorus in the form of struvite from agricultural waste [ 23 ] as well as human urine.
Magnesium concentrations in natural seawater are between 1250 and 1350 mg/L, around 3.7% of the total seawater mineral content. Dead Sea minerals contain a significantly higher magnesium chloride ratio, 50.8%. Carbonates and calcium [ clarification needed ] are essential for all growth of corals , coralline algae , clams , and invertebrates . Magnesium can be depleted by mangrove plants and by the use of excessive limewater , or by exceeding natural calcium, alkalinity , and pH values. [ 24 ] The most common mineral form of magnesium chloride is its hexahydrate, bischofite. [ 25 ] [ 26 ] The anhydrous compound occurs very rarely, as chloromagnesite. [ 26 ] Magnesium chloride-hydroxides, korshunovskite and nepskoeite, are also very rare. [ 27 ] [ 28 ] [ 26 ]
Magnesium ions are bitter-tasting, and magnesium chloride solutions are bitter in varying degrees, depending on the concentration.
Magnesium toxicity from magnesium salts is rare in healthy individuals with a normal diet, because excess magnesium is readily excreted in urine by the kidneys . A few cases of oral magnesium toxicity have been described in persons with normal renal function ingesting large amounts of magnesium salts, but it is rare. If a large amount of magnesium chloride is eaten, it will have effects similar to magnesium sulfate , causing diarrhea, although the sulfate also contributes to the laxative effect in magnesium sulfate, so the effect from the chloride is not as severe.
Chloride ( Cl − ) and magnesium ( Mg 2+ ) are both essential nutrients important for normal plant growth. Too much of either nutrient may harm a plant, although foliar chloride concentrations are more strongly related with foliar damage than magnesium. High concentrations of MgCl 2 ions in the soil may be toxic or change water relationships such that the plant cannot easily accumulate water and nutrients. Once inside the plant, chloride moves through the water-conducting system and accumulates at the margins of leaves or needles, where dieback occurs first. Leaves are weakened or killed, which can lead to the death of the tree. [ 29 ] | https://en.wikipedia.org/wiki/Magnesium_chloride |
Magnesium diboride is the inorganic compound of magnesium and boron with the formula MgB 2 . It is a dark gray, water-insoluble solid. The compound becomes superconducting at 39 K (−234 °C), which has attracted attention. In terms of its composition, MgB 2 differs strikingly from most low-temperature superconductors, which feature mainly transition metals. Its superconducting mechanism is primarily described by BCS theory .
Magnesium diboride's superconducting properties were discovered in 2001. [ 1 ] Its critical temperature ( T c ) of 39 K (−234 °C; −389 °F) is the highest amongst conventional superconductors . Among conventional ( phonon-mediated ) superconductors, it is unusual. Its electronic structure is such that there exist two types of electrons at the Fermi level with widely differing behaviours, one of them ( sigma-bonding ) being much more strongly superconducting than the other ( pi-bonding ). This is at odds with usual theories of phonon-mediated superconductivity which assume that all electrons behave in the same manner. Theoretical understanding of the properties of MgB 2 has nearly been achieved by modelling two energy gaps. In 2001 it was regarded as behaving more like a metallic than a cuprate superconductor . [ 2 ]
Using BCS theory and the known energy gaps of the pi and sigma bands of electrons (2.2 and 7.1 meV, respectively), the pi and sigma bands of electrons have been found to have two different coherence lengths (51 nm and 13 nm, respectively). [ 3 ] The corresponding London penetration depths are 33.6 nm and 47.8 nm. This implies that the Ginzburg-Landau parameters are 0.66±0.02 and 3.68, respectively. The first is less than 1/ √ 2 and the second is greater, therefore the first seems to indicate marginal type I superconductivity and the second type II superconductivity.
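The type I/type II distinction above follows directly from the Ginzburg-Landau parameter κ = λ/ξ for each band. A minimal sketch, using only the penetration depths and coherence lengths quoted in this paragraph:

```python
from math import sqrt

# Ginzburg-Landau parameter kappa = lambda / xi for each band of MgB2,
# using the values quoted above (in nm).
# kappa < 1/sqrt(2) -> type I behaviour; kappa > 1/sqrt(2) -> type II.
bands = {"pi": (33.6, 51.0), "sigma": (47.8, 13.0)}  # (lambda, xi)
for name, (lam, xi) in bands.items():
    kappa = lam / xi
    kind = "type I" if kappa < 1 / sqrt(2) else "type II"
    print(f"{name}: kappa = {kappa:.2f} -> {kind}")
# pi: kappa = 0.66 -> type I
# sigma: kappa = 3.68 -> type II
```

The two κ values straddling 1/√2 ≈ 0.71 is exactly the condition behind the "type-1.5" picture discussed below.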
It has been predicted that when two different bands of electrons yield two quasiparticles, one of which has a coherence length that would indicate type I superconductivity and one of which would indicate type II, then in certain cases, vortices attract at long distances and repel at short distances. [ 4 ] In particular, the potential energy between vortices is minimized at a critical distance. As a consequence there is a conjectured new phase called the semi-Meissner state , in which vortices are separated by the critical distance. When the applied flux is too small for the entire superconductor to be filled with a lattice of vortices separated by the critical distance, then there are large regions of type I superconductivity, a Meissner state, separating these domains.
Experimental confirmation for this conjecture has arrived recently in MgB 2 experiments at 4.2 Kelvin. The authors found that there are indeed regimes with a much greater density of vortices. Whereas the typical variation in the spacing between Abrikosov vortices in a type II superconductor is of order 1%, they found a variation of order 50%, in line with the idea that vortices assemble into domains where they may be separated by the critical distance. The term type-1.5 superconductivity was coined for this state.
Magnesium diboride was synthesized and its structure confirmed in 1953. [ 5 ] The simplest synthesis involves high temperature reaction between boron and magnesium powders. [ 2 ] Formation begins at 650 °C; however, since magnesium metal melts at 652 °C, the reaction may involve diffusion of magnesium vapor across boron grain boundaries. At conventional reaction temperatures, sintering is minimal, although grain recrystallization is sufficient for Josephson quantum tunnelling between grains. [ citation needed ]
Superconducting magnesium diboride wire can be produced through the powder-in-tube (PIT) ex situ and in situ processes. [ 6 ] In the in situ variant, a mixture of boron and magnesium is reduced in diameter by conventional wire drawing . The wire is then heated to the reaction temperature to form MgB 2 . In the ex situ variant, the tube is filled with MgB 2 powder, reduced in diameter, and sintered at 800 to 1000 °C. In both cases, later hot isostatic pressing at approximately 950 °C further improves the properties. [ citation needed ]
An alternative technique, disclosed in 2003, employs reactive liquid infiltration of magnesium inside a granular preform of boron powders and was called Mg-RLI technique. [ 7 ] The method allowed the manufacture of both high density (more than 90% of the theoretical density for MgB 2 ) bulk materials and special hollow fibers. This method is equivalent to similar melt growth based methods such as the Infiltration and Growth Processing method used to fabricate bulk YBCO superconductors where the non-superconducting Y 2 BaCuO 5 is used as granular preform inside which YBCO based liquid phases are infiltrated to make superconductive YBCO bulk. This method has been copied and adapted for MgB 2 and rebranded as Reactive Mg Liquid Infiltration . The process of Reactive Mg Liquid Infiltration in a boron preform to obtain MgB 2 has been a subject of patent applications by the Italian company Edison S.p.A. [ citation needed ]
Hybrid physical–chemical vapor deposition (HPCVD) has been the most effective technique for depositing magnesium diboride (MgB 2 ) thin films. [ 8 ] The surfaces of MgB 2 films deposited by other technologies are usually rough and non-stoichiometric . In contrast, the HPCVD system can grow high-quality in situ pure MgB 2 films with smooth surfaces, which are required to make reproducible uniform Josephson junctions , the fundamental element of superconducting circuits.
Properties depend greatly on composition and fabrication process. Many properties are anisotropic due to the layered structure. 'Dirty' samples, e.g., with oxides at the crystal boundaries, are different from 'clean' samples. [ 9 ]
Various means of doping MgB 2 with carbon (e.g. using 10% malic acid ) can improve the upper critical field and the maximum current density [ 10 ] [ 11 ] (also with polyvinyl acetate [ 12 ] ).
5% doping with carbon can raise H c2 from 16 to 36 T while lowering T c only from 39 K to 34 K. The maximum critical current ( J c ) is reduced, but doping with TiB 2 can reduce the decrease. [ 13 ] (Doping MgB 2 with Ti is patented. [ 14 ] )
The maximum critical current ( J c ) in magnetic field is enhanced greatly (approx double at 4.2 K) by doping with ZrB 2 . [ 15 ]
Even small amounts of doping lead both bands into the type II regime and so no semi-Meissner state may be expected.
MgB 2 is a multi-band superconductor; that is, each Fermi surface has a different superconducting energy gap. In MgB 2 , the sigma bond of boron is strong and induces a large s-wave superconducting gap, while the pi bond is weak and induces a small s-wave gap. [ 16 ] The quasiparticle states of the vortices of the large gap are highly confined to the vortex core.
On the other hand, the quasiparticle states of small gap are loosely bound to the vortex core. Thus they can be delocalized and overlap easily between adjacent vortices. [ 17 ] Such delocalization can strongly contribute to the thermal conductivity , which shows abrupt increase above H c1 . [ 16 ]
Superconducting properties and low cost make magnesium diboride attractive for a variety of applications. [ 18 ] [ 19 ] For those applications, MgB 2 powder is compressed with silver metal (or 316 stainless steel) into wire and sometimes tape via the Powder-in-tube process.
In 2006 a 0.5 tesla open MRI superconducting magnet system was built using 18 km of MgB 2 wires. This MRI used a closed-loop cryocooler , without requiring externally supplied cryogenic liquids for cooling. [ 20 ] [ 21 ]
"...the next generation MRI instruments must be made of MgB 2 coils instead of NbTi coils, operating in the 20–25 K range without liquid helium for cooling. ...
Besides the magnet applications MgB 2 conductors have potential uses in superconducting transformers, rotors and transmission cables at temperatures of around 25 K, at fields of 1 T." [ 19 ]
A project at CERN to make MgB 2 cables has resulted in superconducting test cables able to carry 20,000 amperes for extremely high current distribution applications, such as the high luminosity upgrade of the Large Hadron Collider . [ 22 ]
The IGNITOR tokamak design was based on MgB 2 for its poloidal coils. [ 23 ]
Thin coatings can be used in superconducting radio frequency cavities to minimize energy loss and reduce the inefficiency of liquid helium cooled niobium cavities.
Because of the low cost of its constituent elements, MgB 2 has promise for use in superconducting low to medium field magnets, electric motors and generators, fault current limiters and current leads. [ citation needed ]
Unlike elemental boron, whose combustion is incomplete because a glassy oxide layer impedes oxygen diffusion, magnesium diboride burns completely when ignited in oxygen or in mixtures with oxidizers. [ 24 ] Thus magnesium diboride has been proposed as a fuel in ramjets . [ 25 ] In addition the use of MgB 2 in blast-enhanced explosives [ 26 ] and propellants has been proposed for the same reasons. Decoy flares containing magnesium diboride/ Teflon / Viton display 30–60% increased spectral efficiency, E λ (J g −1 sr −1 ), compared to classical magnesium/Teflon/Viton (MTV) payloads. [ 27 ] An application of magnesium diboride to hybrid rocket propulsion has also been investigated, mixing the compound into paraffin wax fuel grains to improve mechanical properties and combustion characteristics. [ 28 ] | https://en.wikipedia.org/wiki/Magnesium_diboride |
Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg 2+ ion. It is an essential mineral nutrient (i.e., element) for life [ 1 ] [ 2 ] [ 3 ] [ 4 ] and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. [ 5 ] As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA . [ citation needed ]
Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA. [ 6 ]
In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis . [ citation needed ]
A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere . This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis . However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments. [ 7 ]
Inadequate magnesium intake frequently causes muscle spasms , and has been associated with cardiovascular disease , diabetes , high blood pressure , anxiety disorders, migraines , osteoporosis , and cerebral infarction . [ 8 ] [ 9 ] Acute deficiency (see hypomagnesemia ) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time. [ citation needed ]
The most common symptom of excess oral magnesium intake is diarrhea . Supplements based on amino acid chelates (such as glycinate , lysinate etc.) are much better tolerated by the digestive system and do not have the side-effects of the older compounds used, while sustained-release dietary supplements prevent the occurrence of diarrhea. [ citation needed ] Since the kidneys of adult humans excrete excess magnesium efficiently, oral magnesium poisoning in adults with normal renal function is very rare. Infants, who have less ability to excrete excess magnesium even when healthy, should not be given magnesium supplements, except under a physician's care. [ citation needed ]
Pharmaceutical preparations with magnesium are used to treat conditions including magnesium deficiency and hypomagnesemia , as well as eclampsia . [ 10 ] Such preparations are usually in the form of magnesium sulfate or chloride when given parenterally . Magnesium is absorbed with reasonable efficiency (30–40%) by the body from any soluble magnesium salt, such as the chloride or citrate. Magnesium is similarly absorbed from Epsom salts , although the sulfate in these salts adds to their laxative effect at higher doses. Magnesium absorption from the insoluble oxide and hydroxide salts ( milk of magnesia ) is erratic and of poorer efficiency, since it depends on the neutralization and solution of the salt by the acid of the stomach, which may not be (and usually is not) complete.
Magnesium orotate may be used as adjuvant therapy in patients on optimal treatment for severe congestive heart failure , increasing survival rate and improving clinical symptoms and patient's quality of life . [ 11 ]
In 2022, magnesium salts were the 207th most commonly prescribed medication in the United States, with more than 1 million prescriptions. [ 12 ] [ 13 ]
Magnesium can affect muscle relaxation through direct action on cell membranes. Mg 2+ ions close certain types of calcium channels , which conduct positively charged calcium ions into neurons . With an excess of magnesium, more channels will be blocked and nerve cells activity will decrease. [ 14 ] [ 15 ]
Intravenous magnesium sulphate is used in treating pre-eclampsia . [ 16 ] For other than pregnancy-related hypertension, a meta-analysis of 22 clinical trials with dose ranges of 120 to 973 mg/day and a mean dose of 410 mg, concluded that magnesium supplementation had a small but statistically significant effect, lowering systolic blood pressure by 3–4 mm Hg and diastolic blood pressure by 2–3 mm Hg. The effect was larger when the dose was more than 370 mg/day. [ 17 ]
Higher dietary intakes of magnesium correspond to lower diabetes incidence. [ 18 ] For people with diabetes or at high risk of diabetes, magnesium supplementation lowers fasting glucose. [ 19 ]
Magnesium is essential as part of the process that generates adenosine triphosphate . [ 20 ] [ 21 ]
Mitochondria are often referred to as the "powerhouses of the cell" because their primary role is generating energy for cellular processes. They achieve this by breaking down nutrients , primarily glucose , through a series of chemical reactions known as cellular respiration . This process ultimately produces adenosine triphosphate (ATP), the cell's main energy currency.
Magnesium and vitamin D have a synergistic relationship in the body, meaning they work together to optimize each other's functions: [ 22 ] [ 23 ]
Overall, maintaining adequate levels of both magnesium and vitamin D is essential for optimal health and well-being.
It is theorized that the process of making testosterone from cholesterol needs magnesium to function properly. [ 24 ]
Studies have shown that significant gains in testosterone occur after taking 10 mg magnesium/kg body weight/day. [ 25 ]
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for magnesium in 1997. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The current EARs for magnesium for women and men ages 31 and up are 265 mg/day and 350 mg/day, respectively. The RDAs are 320 and 420 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. RDA for pregnancy is 350 to 400 mg/day depending on age of the woman. RDA for lactation ranges 310 to 360 mg/day for same reason. For children ages 1–13 years, the RDA increases with age from 65 to 200 mg/day. As for safety, the IOM also sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of magnesium the UL is set at 350 mg/day. The UL is specific to magnesium consumed as a dietary supplement, the reason being that too much magnesium consumed at one time can cause diarrhea. The UL does not apply to food-sourced magnesium. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes . [ 26 ]
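The adult reference values above can be collected into a minimal lookup sketch. The numbers are those quoted in this paragraph; the dictionary keys are illustrative, and the full IOM tables break the values out further by age, sex, pregnancy, and lactation.

```python
# U.S. adult (ages 31+) magnesium DRI values quoted above, in mg/day.
dri_mg = {
    "women 31+": {"EAR": 265, "RDA": 320},
    "men 31+":   {"EAR": 350, "RDA": 420},
}
UL_SUPPLEMENTAL = 350  # mg/day; applies to supplemental Mg only, not food-sourced

for group, values in dri_mg.items():
    # RDAs are set above EARs to cover people with above-average requirements
    assert values["RDA"] > values["EAR"]
    print(f"{group}: EAR {values['EAR']} mg/day, RDA {values['RDA']} mg/day")
```

Note the quirk visible in the numbers: the supplemental UL (350 mg/day) is below the men's RDA (420 mg/day), which is consistent because the UL excludes food-sourced magnesium.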
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 18 and older, the AIs are set at 300 and 350 mg/day, respectively. AIs for pregnancy and lactation are also 300 mg/day. For children ages 1–17 years, the AIs increase with age from 170 to 250 mg/day. These AIs are lower than the U.S. RDAs. [ 28 ] The European Food Safety Authority reviewed the same safety question and set its UL at 250 mg/day – lower than the U.S. value. [ 29 ] The magnesium UL is unique in that it is lower than some of the RDAs. It applies to intake from a pharmacological agent or dietary supplement only and does not include intake from food and water.
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of daily value (%DV). For magnesium labeling purposes, 100% of the daily value was 400 mg, but as of May 27, 2016, it was revised to 420 mg to bring it into agreement with the RDA. [ 30 ] [ 31 ] A table of the old and new adult Daily Values is provided at Reference Daily Intake .
Green vegetables such as spinach provide magnesium because of the abundance of chlorophyll molecules, which contain the ion. Nuts (especially Brazil nuts , cashews and almonds ), seeds (e.g., pumpkin seeds ), dark chocolate , roasted soybeans , bran , and some whole grains are also good sources of magnesium. [ 32 ]
Although many foods contain magnesium, it is usually found in low levels. As with most nutrients, daily needs for magnesium are unlikely to be met by one serving of any single food. Eating a wide variety of fruits, vegetables, and grains will help ensure adequate intake of magnesium.
Because magnesium readily dissolves in water, refined foods, which are often processed or cooked in water and dried, in general, are poor sources of the nutrient. For example, whole-wheat bread has twice as much magnesium as white bread because the magnesium-rich germ and bran are removed when white flour is processed. The table of food sources of magnesium suggests many dietary sources of magnesium.
"Hard" water can also provide magnesium, but "soft" water contains less of the ion. Dietary surveys do not assess magnesium intake from water, which may lead to underestimating total magnesium intake and its variability.
Too much magnesium may make it difficult for the body to absorb calcium . Not enough magnesium can lead to hypomagnesemia as described above, with irregular heartbeats, high blood pressure (a sign in humans but not some experimental animals such as rodents), insomnia, and muscle spasms ( fasciculation ). However, as noted, symptoms of low magnesium from pure dietary deficiency are thought to be rarely encountered.
Following are some foods and the amount of magnesium in them: [ 33 ]
In animals , it has been shown that different cell types maintain different concentrations of magnesium. [ 35 ] [ 36 ] [ 37 ] [ 38 ] It seems likely that the same is true for plants . [ 39 ] [ 40 ] This suggests that different cell types may regulate influx and efflux of magnesium in different ways based on their unique metabolic needs. Interstitial and systemic concentrations of free magnesium must be delicately maintained by the combined processes of buffering (binding of ions to proteins and other molecules) and muffling (the transport of ions to storage or extracellular spaces [ 41 ] ).
In plants, and more recently in animals, magnesium has been recognized as an important signaling ion, both activating and mediating many biochemical reactions. The best example of this is perhaps the regulation of carbon fixation in chloroplasts in the Calvin cycle . [ 42 ] [ 43 ]
Magnesium is very important in cellular function. Deficiency of the nutrient causes disease of the affected organism. In single-cell organisms such as bacteria and yeast , low levels of magnesium manifest as greatly reduced growth rates. In magnesium transport knockout strains of bacteria, healthy growth rates are maintained only with exposure to very high external concentrations of the ion. [ 44 ] [ 45 ] In yeast, mitochondrial magnesium deficiency also leads to disease. [ 46 ]
Plants deficient in magnesium show stress responses. The first observable sign of both magnesium starvation and overexposure in plants is a decrease in the rate of photosynthesis . This is due to the central position of the Mg 2+ ion in the chlorophyll molecule. The later effects of magnesium deficiency on plants are a significant reduction in growth and reproductive viability. [ 4 ] Magnesium can also be toxic to plants, although this is typically seen only in drought conditions. [ 47 ] [ 48 ]
In animals, magnesium deficiency ( hypomagnesemia ) is seen when the environmental availability of magnesium is low. In ruminant animals, which are particularly vulnerable to low magnesium availability in pasture grasses, the condition is known as 'grass tetany'. Hypomagnesemia is identified by a loss of balance due to muscle weakness. [ 49 ] A number of genetically attributable hypomagnesemia disorders have also been identified in humans. [ 50 ] [ 51 ] [ 52 ] [ 53 ]
Overexposure to magnesium may be toxic to individual cells, though these effects have been difficult to show experimentally. [ citation needed ] Hypermagnesemia , an overabundance of magnesium in the blood, is usually caused by loss of kidney function. Healthy animals rapidly excrete excess magnesium in the urine and stool. [ 54 ] Excess magnesium in the urine is termed magnesuria . Characteristic concentrations of magnesium in model organisms are: 30–100 mM (bound) and 0.01–1 mM (free) in E. coli ; 50 mM in budding yeast; 10 mM (bound) and 0.5 mM (free) in mammalian cells; and 1 mM in blood plasma. [ 55 ]
Mg 2+ is the fourth-most-abundant metal ion in cells (per moles ) and the most abundant free divalent cation; as a result, it is deeply and intrinsically woven into cellular metabolism . Indeed, Mg 2+ -dependent enzymes appear in virtually every metabolic pathway: specific binding of Mg 2+ to biological membranes is frequently observed, Mg 2+ is also used as a signalling molecule, and much of nucleic acid biochemistry requires Mg 2+ , including all reactions that require release of energy from ATP. [ 56 ] [ 57 ] [ 43 ] In nucleotides, the triphosphate moiety of the compound is invariably stabilized by association with Mg 2+ in all enzymatic processes.
In photosynthetic organisms, Mg 2+ has the additional vital role of being the coordinating ion in the chlorophyll molecule. This role was discovered by Richard Willstätter , who received the 1915 Nobel Prize in Chemistry for his work on the purification and structure of chlorophyll.
The chemistry of the Mg 2+ ion, as applied to enzymes, uses the full range of this ion's unusual reaction chemistry to fulfill a range of functions. [ 56 ] [ 58 ] [ 59 ] [ 60 ] Mg 2+ interacts with substrates, enzymes, and occasionally both (Mg 2+ may form part of the active site). In general, Mg 2+ interacts with substrates through inner sphere coordination, stabilising anions or reactive intermediates, also including binding to ATP and activating the molecule to nucleophilic attack. When interacting with enzymes and other proteins, Mg 2+ may bind using inner or outer sphere coordination, to either alter the conformation of the enzyme or take part in the chemistry of the catalytic reaction. In either case, because Mg 2+ is only rarely fully dehydrated during ligand binding, it may be a water molecule associated with the Mg 2+ that is important rather than the ion itself. The Lewis acidity of Mg 2+ ( p K a 11.4) is used to allow both hydrolysis and condensation reactions (most common ones being phosphate ester hydrolysis and phosphoryl transfer) that would otherwise require pH values greatly removed from physiological values.
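The significance of the quoted pKa of 11.4 at physiological pH can be made concrete with the Henderson–Hasselbalch relation, which gives the fraction of Mg 2+ -coordinated water that is deprotonated. A small Python sketch (the helper name is illustrative):

```python
def fraction_deprotonated(pH, pKa):
    # Henderson-Hasselbalch: fraction deprotonated = 1 / (1 + 10**(pKa - pH))
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# Water coordinated to Mg2+ (pKa ~11.4) at physiological pH 7.4:
print(fraction_deprotonated(7.4, 11.4))  # ~1e-4
```

Though small in absolute terms, this ~10⁻⁴ fraction is several orders of magnitude larger than for bulk water (whose pKa is conventionally taken as ≈15.7, giving roughly 5×10⁻⁹ at the same pH), which is the sense in which coordination to Mg 2+ activates water for hydrolysis.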
ATP (adenosine triphosphate), the main source of energy in cells, must be bound to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. [ 5 ]
Nucleic acids have an important range of interactions with Mg 2+ . The binding of Mg 2+ to DNA and RNA stabilises structure; this can be observed in the increased melting temperature ( T m ) of double-stranded DNA in the presence of Mg 2+ . [ 56 ] In addition, ribosomes contain large amounts of Mg 2+ and the stabilisation provided is essential to the complexation of this ribo-protein. [ 61 ] A large number of enzymes involved in the biochemistry of nucleic acids bind Mg 2+ for activity, using the ion for both activation and catalysis. Finally, the autocatalysis of many ribozymes (enzymes containing only RNA) is Mg 2+ dependent (e.g. the yeast mitochondrial group II self splicing introns [ 62 ] ).
Magnesium ions can be critical in maintaining the positional integrity of closely clustered phosphate groups. These clusters appear in numerous and distinct parts of the cell nucleus and cytoplasm . For instance, hexahydrated Mg 2+ ions bind in the deep major groove and at the outer mouth of A-form nucleic acid duplexes . [ 63 ]
Biological cell membranes and cell walls are polyanionic surfaces. This has important implications for the transport of ions, in particular because it has been shown that different membranes preferentially bind different ions. [ 56 ] Both Mg 2+ and Ca 2+ regularly stabilize membranes by the cross-linking of carboxylated and phosphorylated head groups of lipids. However, the envelope membrane of E. coli has also been shown to bind Na + , K + , Mn 2+ and Fe 3+ . The transport of ions is dependent on both the concentration gradient of the ion and the electric potential (ΔΨ) across the membrane, which will be affected by the charge on the membrane surface. For example, the specific binding of Mg 2+ to the chloroplast envelope has been implicated in a loss of photosynthetic efficiency by the blockage of K + uptake and the subsequent acidification of the chloroplast stroma. [ 42 ]
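The dependence of transport on both the concentration gradient and ΔΨ is captured by the Nernst equation, which gives the membrane potential at which an ion is at electrochemical equilibrium. A Python sketch (the function name is illustrative; the free-Mg 2+ concentrations echo the characteristic values quoted elsewhere in this article and are used here purely for illustration):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # absolute temperature, K (~37 degrees C)

def nernst_mV(c_out, c_in, z):
    """Equilibrium (Nernst) potential in millivolts for an ion of charge z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# ~1 mM free Mg2+ in plasma vs ~0.5 mM free in the cytoplasm (z = 2):
print(round(nernst_mV(1.0, 0.5, 2), 1))  # ~9.3 mV
```

Because Mg 2+ is divalent (z = 2), the potential for a given concentration ratio is half that of a monovalent ion, so even modest membrane potentials exert a strong influence on its distribution.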
The Mg 2+ ion tends to bind only weakly to proteins ( K a ≤ 10 5 [ 56 ] ) and this can be exploited by the cell to switch enzymatic activity on and off by changes in the local concentration of Mg 2+ . Although the concentration of free cytoplasmic Mg 2+ is on the order of 1 mmol/L, the total Mg 2+ content of animal cells is 30 mmol/L [ 64 ] and in plants the content of leaf endodermal cells has been measured at values as high as 100 mmol/L (Stelzer et al. , 1990), much of which is buffered in storage compartments. The cytoplasmic concentration of free Mg 2+ is buffered by binding to chelators (e.g., ATP) but, more importantly, by storage of Mg 2+ in intracellular compartments. [ citation needed ] The transport of Mg 2+ between intracellular compartments may be a major part of regulating enzyme activity. The interaction of Mg 2+ with proteins must also be considered for the transport of the ion across biological membranes. [ citation needed ]
In biological systems, only manganese (Mn 2+ ) is readily capable of replacing Mg 2+ , and then only in a limited set of circumstances. Mn 2+ is very similar to Mg 2+ in terms of its chemical properties, including inner and outer shell complexation. Mn 2+ effectively binds ATP and allows hydrolysis of the energy molecule by most ATPases. Mn 2+ can also replace Mg 2+ as the activating ion for a number of Mg 2+ -dependent enzymes, although some enzyme activity is usually lost. [ 56 ] Sometimes such enzyme metal preferences vary among closely related species: for example, the reverse transcriptase enzyme of lentiviruses like HIV , SIV and FIV is typically dependent on Mg 2+ , whereas the analogous enzyme for other retroviruses prefers Mn 2+ .
The use of radioactive tracer elements in ion uptake assays allows the calculation of K m , K i and V max and determines the initial change in the ion content of the cells. 28 Mg decays by the emission of a high-energy beta or gamma particle, which can be measured using a scintillation counter. However, the radioactive half-life of 28 Mg, the most stable of the radioactive magnesium isotopes, is only 21 hours. This severely restricts experiments involving the nuclide. Also, since 1990, no facility has routinely produced 28 Mg, and the price per mCi is now predicted to be approximately US$30,000. [ 65 ] The chemical nature of Mg 2+ is such that it is closely approximated by few other cations. [ 66 ] However, Co 2+ , Mn 2+ and Ni 2+ have been used successfully to mimic the properties of Mg 2+ in some enzyme reactions, and radioactive forms of these elements have been employed successfully in cation transport studies. The difficulty of using metal ion replacement in the study of enzyme function is that the relationship between the enzyme activities with the replacement ion compared to the original is very difficult to ascertain. [ 66 ]
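The practical constraint imposed by the 21-hour half-life can be quantified with the standard exponential-decay relation. A small Python sketch (assuming simple first-order decay; the helper name is illustrative):

```python
def fraction_remaining(t_hours, half_life_hours=21.0):
    """Fraction of a radionuclide remaining after t_hours of decay."""
    return 0.5 ** (t_hours / half_life_hours)

# After a two-day (48 h) uptake experiment with 28Mg:
print(round(fraction_remaining(48), 3))  # 0.205
```

Roughly four-fifths of the tracer is gone within two days, which is why experiments with 28 Mg must be short and must begin soon after the nuclide is produced.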
A number of chelators of divalent cations have different fluorescence spectra in the bound and unbound states. [ 67 ] Chelators for Ca 2+ are well established, have high affinity for the cation, and low interference from other ions. Mg 2+ chelators lag behind, and the major fluorescent dye for Mg 2+ (mag-fura 2 [ 68 ] ) actually has a higher affinity for Ca 2+ . [ 69 ] This limits the application of this dye to cell types where the resting level of Ca 2+ is < 1 μM and does not vary with the experimental conditions under which Mg 2+ is to be measured. Recently, Otten et al. (2001) have described work into a new class of compounds that may prove more useful, having significantly better binding affinities for Mg 2+ . [ 70 ] The use of the fluorescent dyes is limited to measuring the free Mg 2+ . If the ion concentration is buffered by the cell by chelation or removal to subcellular compartments, the measured rate of uptake will give only minimum values of K m and V max . [ citation needed ]
First, ion-specific microelectrodes can be used to measure the internal free ion concentration of cells and organelles. The major advantages are that readings can be made from cells over relatively long periods of time and that, unlike dyes, very little extra ion-buffering capacity is added to the cells. [ 71 ]
Second, the technique of two-electrode voltage-clamp allows the direct measurement of the ion flux across the membrane of a cell. [ 72 ] The membrane is held at an electric potential and the responding current is measured. All ions passing across the membrane contribute to the measured current. [ citation needed ]
Third, the technique of patch-clamp uses isolated sections of natural or artificial membrane in much the same manner as voltage-clamp but without the secondary effects of a cellular system. Under ideal conditions the conductance of individual channels can be quantified. This methodology gives the most direct measurement of the action of ion channels. [ 72 ]
Flame atomic absorption spectroscopy (AAS) determines the total magnesium content of a biological sample. [ 67 ] This method is destructive; biological samples must be broken down in concentrated acids to avoid clogging the fine nebulising apparatus. [ citation needed ] Beyond this, the only limitation is that samples must be in a volume of approximately 2 mL and at a concentration range of 0.1 – 0.4 μmol/L for optimum accuracy. [ citation needed ] As this technique cannot distinguish between Mg 2+ already present in the cell and that taken up during the experiment, only total content, not uptake, can be quantified. [ citation needed ]
Inductively coupled plasma (ICP) using either the mass spectrometry (MS) or atomic emission spectroscopy (AES) modifications also allows the determination of the total ion content of biological samples. [ 73 ]
The chemical and biochemical properties of Mg 2+ present the cellular system with a significant challenge when transporting the ion across biological membranes. The dogma of ion transport states that the transporter recognises the ion then progressively removes the water of hydration, removing most or all of the water at a selective pore before releasing the ion on the far side of the membrane. [ 74 ] Due to the properties of Mg 2+ (a large volume change from the hydrated to the bare ion, a high energy of hydration and a very low rate of ligand exchange in the inner coordination sphere ), these steps are probably more difficult than for most other ions. To date, only the ZntA protein of Paramecium has been shown to be a Mg 2+ channel. [ 75 ] The mechanisms of Mg 2+ transport by the remaining proteins are beginning to be uncovered, with the first three-dimensional structure of a Mg 2+ transport complex being solved in 2004. [ 76 ]
The hydration shell of the Mg 2+ ion has a very tightly bound inner shell of six water molecules and a relatively tightly bound second shell containing 12–14 water molecules (Markham et al. , 2002). Thus, it is presumed that recognition of the Mg 2+ ion requires some mechanism to interact initially with the hydration shell of Mg 2+ , followed by a direct recognition/binding of the ion to the protein. [ 65 ]
In spite of the mechanistic difficulty, Mg 2+ must be transported across membranes, and a large number of Mg 2+ fluxes across membranes from a variety of systems have been described. [ 77 ] However, only a small selection of Mg 2+ transporters have been characterised at the molecular level.
In cellular biology, magnesium ions (Mg 2+ ) usually act in opposition to Ca 2+ ions. Although both are divalent, Mg 2+ is the smaller ion and so has a higher charge density; it binds its water of hydration more tightly, and the bulky hydrated ion cannot pass through channels that admit Ca 2+ . Thus, Mg 2+ ions block Ca 2+ channels (such as NMDA channels ) and have been shown to affect gap junction channels forming electrical synapses .
The previous sections have dealt in detail with the chemical and biochemical aspects of Mg 2+ and its transport across cellular membranes. This section will apply this knowledge to aspects of whole plant physiology, in an attempt to show how these processes interact with the larger and more complex environment of the multicellular organism.
Mg 2+ is essential for plant growth and is present in higher plants in amounts on the order of 80 μmol g −1 dry weight. [ 4 ] The amounts of Mg 2+ vary in different parts of the plant and are dependent upon nutritional status. In times of plenty, excess Mg 2+ may be stored in vascular cells (Stelzer et al. , 1990), [ 40 ] and in times of starvation Mg 2+ is redistributed, in many plants, from older to newer leaves. [ 4 ] [ 78 ]
Mg 2+ is taken up into plants via the roots. Interactions with other cations in the rhizosphere can have a significant effect on the uptake of the ion (Kurvits and Kirkby, 1980). [ 79 ] The structure of root cell walls is highly permeable to water and ions, and hence ion uptake into root cells can occur anywhere from the root hairs to cells located almost in the centre of the root (limited only by the Casparian strip ). Plant cell walls and membranes carry a great number of negative charges, and the interactions of cations with these charges are key to the uptake of cations by root cells, allowing a local concentrating effect. [ 80 ] Mg 2+ binds relatively weakly to these charges, and can be displaced by other cations, impeding uptake and causing deficiency in the plant.
Within individual plant cells, the Mg 2+ requirements are largely the same as for all cellular life; Mg 2+ is used to stabilise membranes, is vital to the utilisation of ATP, is extensively involved in the nucleic acid biochemistry, and is a cofactor for many enzymes (including the ribosome). Also, Mg 2+ is the coordinating ion in the chlorophyll molecule. It is the intracellular compartmentalisation of Mg 2+ in plant cells that leads to additional complexity. Four compartments within the plant cell have been reported to interact with Mg 2+ . Initially, Mg 2+ will enter the cell into the cytoplasm (by an as yet unidentified system), but free Mg 2+ concentrations in this compartment are tightly regulated at relatively low levels (≈2 mmol/L) and so any excess Mg 2+ is either quickly exported or stored in the second intracellular compartment, the vacuole. [ 81 ] The requirement for Mg 2+ in mitochondria has been demonstrated in yeast [ 82 ] and it seems highly likely that the same will apply in plants. The chloroplasts also require significant amounts of internal Mg 2+ , and low concentrations of cytoplasmic Mg 2+ . [ 83 ] [ 84 ] In addition, it seems likely that the other subcellular organelles (e.g., Golgi, endoplasmic reticulum, etc.) also require Mg 2+ .
Once in the cytoplasmic space of root cells Mg 2+ , along with the other cations, is probably transported radially into the stele and the vascular tissue. [ 85 ] From the cells surrounding the xylem the ions are released or pumped into the xylem and carried up through the plant. In the case of Mg 2+ , which is highly mobile in both the xylem and phloem, [ 86 ] the ions will be transported to the top of the plant and back down again in a continuous cycle of replenishment. Hence, uptake and release from vascular cells is probably a key part of whole plant Mg 2+ homeostasis. Figure 1 shows how few processes have been connected to their molecular mechanisms (only vacuolar uptake has been associated with a transport protein, AtMHX).
The diagram shows a schematic of a plant and the putative processes of Mg 2+ transport at the root and leaf where Mg 2+ is loaded and unloaded from the vascular tissues. [ 4 ] Mg 2+ is taken up into the root cell wall space (1) and interacts with the negative charges associated with the cell walls and membranes. Mg 2+ may be taken up into cells immediately (symplastic pathway) or may travel as far as the Casparian band (4) before being absorbed into cells (apoplastic pathway; 2). The concentration of Mg 2+ in the root cells is probably buffered by storage in root cell vacuoles (3). Note that cells in the root tip do not contain vacuoles. Once in the root cell cytoplasm, Mg 2+ travels toward the centre of the root by plasmodesmata , where it is loaded into the xylem (5) for transport to the upper parts of the plant. When the Mg 2+ reaches the leaves it is unloaded from the xylem into cells (6) and again is buffered in vacuoles (7). Whether cycling of Mg 2+ into the phloem occurs via general cells in the leaf (8) or directly from xylem to phloem via transfer cells (9) is unknown. Mg 2+ may return to the roots in the phloem sap.
When a Mg 2+ ion has been absorbed by a cell requiring it for metabolic processes, it is generally assumed that the ion stays in that cell for as long as the cell is active. [ 4 ] In vascular cells, this is not always the case; in times of plenty, Mg 2+ is stored in the vacuole, takes no part in the day-to-day metabolic processes of the cell (Stelzer et al. , 1990), and is released at need. But for most cells it is death by senescence or injury that releases Mg 2+ and many of the other ionic constituents, recycling them into healthy parts of the plant. In addition, when Mg 2+ in the environment is limiting, some species are able to mobilise Mg 2+ from older tissues. [ 78 ] These processes involve the release of Mg 2+ from its bound and stored states and its transport back into the vascular tissue, where it can be distributed to the rest of the plant. In times of growth and development, Mg 2+ is also remobilised within the plant as source and sink relationships change. [ 4 ]
The homeostasis of Mg 2+ within single plant cells is maintained by processes occurring at the plasma membrane and at the vacuole membrane (see Figure 2). The major driving force for the translocation of ions in plant cells is ΔpH. [ 87 ] H + -ATPases pump H + ions against their concentration gradient to maintain the pH differential that can be used for the transport of other ions and molecules. H + ions are pumped out of the cytoplasm into the extracellular space or into the vacuole. The entry of Mg 2+ into cells may occur through one of two pathways, via channels using the ΔΨ (negative inside) across this membrane or by symport with H + ions. To transport the Mg 2+ ion into the vacuole requires a Mg 2+ /H + antiport transporter (such as AtMHX). The H + -ATPases are dependent on Mg 2+ (bound to ATP) for activity, so that Mg 2+ is required to maintain its own homeostasis.
A schematic of a plant cell is shown including the four major compartments currently recognised as interacting with Mg 2+ . H + -ATPases maintain a constant ΔpH across the plasma membrane and the vacuole membrane. Mg 2+ is transported into the vacuole using the energy of ΔpH (in A. thaliana by AtMHX). Transport of Mg 2+ into cells may use either the negative ΔΨ or the ΔpH. The transport of Mg 2+ into mitochondria probably uses ΔΨ as in the mitochondria of yeast, and it is likely that chloroplasts take up Mg 2+ by a similar system. The mechanism and the molecular basis for the release of Mg 2+ from vacuoles and from the cell are not known. Likewise, the light-regulated Mg 2+ concentration changes in chloroplasts are not fully understood, but do require the transport of H + ions across the thylakoid membrane.
Mg 2+ is the coordinating metal ion in the chlorophyll molecule, and in plants where the ion is in high supply about 6% of the total Mg 2+ is bound to chlorophyll. [ 4 ] [ 88 ] [ 89 ] Thylakoid stacking is stabilised by Mg 2+ and is important for the efficiency of photosynthesis, allowing phase transitions to occur. [ 90 ]
Mg 2+ is probably taken up into chloroplasts to the greatest extent during the light-induced development from proplastid to chloroplast or etioplast to chloroplast. At these times, the synthesis of chlorophyll and the biogenesis of the thylakoid membrane stacks absolutely require the divalent cation. [ 91 ] [ 92 ]
Whether Mg 2+ is able to move into and out of chloroplasts after this initial developmental phase has been the subject of several conflicting reports. Deshaies et al. (1984) found that Mg 2+ did move in and out of isolated chloroplasts from young pea plants, [ 93 ] but Gupta and Berkowitz (1989) were unable to reproduce the result using older spinach chloroplasts. [ 94 ] Deshaies et al. had stated in their paper that older pea chloroplasts showed less significant changes in Mg 2+ content than those used to form their conclusions. The relative proportion of immature chloroplasts present in the preparations may explain these observations.
The metabolic state of the chloroplast changes considerably between night and day. During the day, the chloroplast is actively harvesting the energy of light and converting it into chemical energy. The activation of the metabolic pathways involved comes from the changes in the chemical nature of the stroma on the addition of light. H + is pumped out of the stroma (into both the cytoplasm and the lumen) leading to an alkaline pH. [ 95 ] [ 96 ] Mg 2+ (along with K + ) is released from the lumen into the stroma, in an electroneutralisation process to balance the flow of H + . [ 97 ] [ 98 ] [ 99 ] [ 100 ] Finally, thiol groups on enzymes are reduced by a change in the redox state of the stroma. [ 101 ] Examples of enzymes activated in response to these changes are fructose 1,6-bisphosphatase, sedoheptulose bisphosphatase and ribulose-1,5-bisphosphate carboxylase . [ 4 ] [ 59 ] [ 101 ] During the dark period, if these enzymes were active a wasteful cycling of products and substrates would occur.
Two major classes of the enzymes that interact with Mg 2+ in the stroma during the light phase can be identified. [ 59 ] Firstly, enzymes in the glycolytic pathway most often interact with two Mg 2+ ions. The first acts as an allosteric modulator of the enzyme's activity, while the second forms part of the active site and is directly involved in the catalytic reaction. The second class of enzymes includes those where Mg 2+ is complexed to nucleotide di- and tri-phosphates (ADP and ATP), and the chemical change involves phosphoryl transfer. Mg 2+ may also serve in a structural maintenance role in these enzymes (e.g., enolase).
Plant stress responses can be observed in plants that are under- or over-supplied with Mg 2+ . The first observable sign of Mg 2+ stress in plants, for both starvation and toxicity, is a depression of the rate of photosynthesis, presumably because of the strong relationships between Mg 2+ and chloroplasts/chlorophyll. In pine trees, even before the visible appearance of yellowing and necrotic spots, the photosynthetic efficiency of the needles drops markedly. [ 78 ] In Mg 2+ deficiency, reported secondary effects include carbohydrate immobility, loss of RNA transcription and loss of protein synthesis. [ 102 ] However, due to the mobility of Mg 2+ within the plant, the deficiency phenotype may be present only in the older parts of the plant. For example, in Pinus radiata starved of Mg 2+ , one of the earliest identifying signs is the chlorosis in the needles on the lower branches of the tree. This is because Mg 2+ has been recovered from these tissues and moved to growing (green) needles higher in the tree. [ 78 ]
A Mg 2+ deficit can be caused by the lack of the ion in the media (soil), but more commonly comes from inhibition of its uptake. [ 4 ] Mg 2+ binds quite weakly to the negatively charged groups in the root cell walls, so that excesses of other cations such as K + , NH 4 + , Ca 2+ , and Mn 2+ can all impede uptake (Kurvits and Kirkby, 1980). [ 79 ] In acid soils Al 3+ is a particularly strong inhibitor of Mg 2+ uptake. [ 103 ] [ 104 ] The inhibition by Al 3+ and Mn 2+ is more severe than can be explained by simple displacement, hence it is possible that these ions bind to the Mg 2+ uptake system directly. [ 4 ] In bacteria and yeast, such binding by Mn 2+ has already been observed. Stress responses in the plant develop as cellular processes halt due to a lack of Mg 2+ (e.g. maintenance of ΔpH across the plasma and vacuole membranes). In Mg 2+ -starved plants under low light conditions, the percentage of Mg 2+ bound to chlorophyll has been recorded at 50%. [ 105 ] Presumably, this imbalance has detrimental effects on other cellular processes.
Mg 2+ toxicity stress is more difficult to develop. When Mg 2+ is plentiful, in general the plants take up the ion and store it (Stelzer et al. , 1990). However, if this is followed by drought then ionic concentrations within the cell can increase dramatically. High cytoplasmic Mg 2+ concentrations block a K + channel in the inner envelope membrane of the chloroplast, in turn inhibiting the removal of H + ions from the chloroplast stroma. This leads to an acidification of the stroma that inactivates key enzymes in carbon fixation , which all leads to the production of oxygen free radicals in the chloroplast that then cause oxidative damage. [ 106 ] | https://en.wikipedia.org/wiki/Magnesium_in_biology |
Magnesium iodide is an inorganic compound with the chemical formula Mg I 2 . It forms various hydrates MgI 2 · x H 2 O . Magnesium iodide is a salt of magnesium and hydrogen iodide . These salts are typical ionic halides , being highly soluble in water.
Magnesium iodide has few commercial uses, but can be used to prepare compounds for organic synthesis .
Magnesium iodide can be prepared from magnesium oxide , magnesium hydroxide , or magnesium carbonate by treatment with hydroiodic acid : [ 3 ]

MgO + 2 HI → MgI 2 + H 2 O
Mg(OH) 2 + 2 HI → MgI 2 + 2 H 2 O
MgCO 3 + 2 HI → MgI 2 + CO 2 + H 2 O
Magnesium iodide is stable at high heat under a hydrogen atmosphere, but decomposes in air at normal temperatures, turning brown from the release of elemental iodine . When heated in air, it decomposes completely to magnesium oxide. [ 4 ]
Another method to prepare MgI 2 is reacting powdered elemental iodine with magnesium metal (Mg + I 2 → MgI 2 ). In order to obtain anhydrous MgI 2 , the reaction should be conducted in a strictly anhydrous atmosphere; dry diethyl ether can be used as a solvent.
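Because the direct synthesis consumes magnesium and iodine in a 1:1 molar ratio (Mg + I 2 → MgI 2 ), the mass of iodine needed per gram of magnesium follows directly from the atomic weights. A small Python sketch (atomic weights are approximate; the helper name is illustrative):

```python
M_MG = 24.305   # molar mass of Mg, g/mol
M_I = 126.904   # molar mass of I, g/mol

def grams_i2_per_gram_mg():
    """Mass of I2 (g) consumed per gram of Mg for Mg + I2 -> MgI2."""
    return (2 * M_I) / M_MG

print(round(grams_i2_per_gram_mg(), 2))  # 10.44
```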
Usage of magnesium iodide in the Baylis-Hillman reaction tends to give ( Z )- vinyl compounds. [ 5 ]
Demethylation of certain aromatic methyl ethers can be afforded using magnesium iodide in diethyl ether . [ 6 ]
Two hydrates are known, the octahydrate and the nonahydrate, both verified by X-ray crystallography . These hydrates feature [Mg(H 2 O) 6 ] 2+ ions. [ 7 ] | https://en.wikipedia.org/wiki/Magnesium_iodide
Magnesium monoperoxyphthalate ( MMPP ) is a water-soluble peroxy acid used as an oxidant in organic synthesis . Its main areas of use are the conversion of ketones to esters ( Baeyer-Villiger oxidation ), epoxidation of alkenes ( Prilezhaev reaction ), oxidation of sulfides to sulfoxides and sulfones , oxidation of amines to produce amine oxides , and in the oxidative cleavage of hydrazones. [ 1 ]
Due to its insolubility in non-polar solvents, MMPP has seen less use than the more widely used meta -chloroperoxybenzoic acid ( m CPBA). Although work-up procedures are more simply handled in polar solvents, attempts to use MMPP to oxidize nonpolar substrates in biphasic media with a phase-transfer catalyst have been inefficient. [ 1 ] Despite this, MMPP has certain advantages over m CPBA, including a lower cost of production and increased stability. [ 1 ]
MMPP is also used as the active ingredient in certain surface disinfectants such as Dismozon Pur. [ 2 ] As a surface disinfectant MMPP exhibits a broad spectrum biocidal effect including inactivation of endospores . [ 3 ] Its wide surface compatibility enables its use on sensitive materials, such as plastic and rubber equipment used in hospitals. Additionally MMPP has been investigated as a potential antibacterial agent for mouthwashes and toothpaste. [ 4 ] | https://en.wikipedia.org/wiki/Magnesium_monoperoxyphthalate |
Magnesium nitrate refers to inorganic compounds with the formula Mg(NO 3 ) 2 (H 2 O) x , where x = 6, 2, and 0. All are white solids. [ 2 ] The anhydrous material is hygroscopic , quickly forming the hexahydrate upon standing in air. All of the salts are very soluble in both water and ethanol .
Being highly water-soluble, magnesium nitrate occurs naturally only in mines and caverns as nitromagnesite (hexahydrate form). [ 3 ]
The magnesium nitrate used in commerce is made by the reaction of nitric acid and various magnesium salts.
The principal use is as a dehydrating agent in the preparation of concentrated nitric acid . [ 2 ]
Its fertilizer grade has 10.5% nitrogen and 9.4% magnesium , so it is listed as 10.5-0-0 + 9.4% Mg. Fertilizer blends containing magnesium nitrate also have ammonium nitrate , calcium nitrate , potassium nitrate and micronutrients in most cases; these blends are used in the greenhouse and hydroponics trade.
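The grade figures can be sanity-checked against the molar mass of the hexahydrate. A minimal sketch, assuming the fertilizer grade refers to Mg(NO 3 ) 2 ·6H 2 O and using standard atomic weights (both assumptions ours, not stated in the text):

```python
# Mass fractions of N and Mg in magnesium nitrate hexahydrate,
# Mg(NO3)2·6H2O. Atomic weights are standard values (assumption).
MG, N, O, H = 24.305, 14.007, 15.999, 1.008

molar_mass = MG + 2 * (N + 3 * O) + 6 * (2 * H + O)  # ~256.4 g/mol
pct_n = 100 * 2 * N / molar_mass   # two nitrogen atoms per formula unit
pct_mg = 100 * MG / molar_mass

# Theoretical values (~10.9% N, ~9.5% Mg) sit close to the quoted
# commercial grade of 10.5-0-0 + 9.4% Mg.
print(f"N: {pct_n:.1f}%  Mg: {pct_mg:.1f}%")
```

The small shortfall of the commercial figures against the theoretical ones is consistent with a slightly impure technical-grade salt.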
Magnesium nitrate reacts with alkali metal hydroxide to form the corresponding nitrate:
Mg(NO 3 ) 2 + 2 NaOH → Mg(OH) 2 + 2 NaNO 3
Since magnesium nitrate has a high affinity for water, heating the hexahydrate does not result in the dehydration of the salt, but rather its decomposition into magnesium oxide , oxygen , and nitrogen oxides :
2 Mg(NO 3 ) 2 ·6H 2 O → 2 MgO + 4 NO 2 + O 2 + 12 H 2 O
The absorption of these nitrogen oxides in water is one possible route to synthesize nitric acid . Although inefficient, this method does not require the use of any strong acid .
It is also occasionally used as a desiccant. | https://en.wikipedia.org/wiki/Magnesium_nitrate |
Magnesium "oil" (also referred to as transdermal magnesium, magnesium hexahydrate ) is a colloquial misnomer for a concentrated solution of magnesium chloride in water. It is oily only in the sense that it feels oily to the touch, but unlike a true oil it mixes freely with water—as it must, being an aqueous solution. Magnesium oil is supposed to be applied to the skin as an alternative to taking a magnesium supplement by mouth, [ 1 ] although it is ineffective and scientifically unsupported due to lack of any convincing data that magnesium is absorbed in significant amounts through the skin. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Magnesium_oil |
Magnesium oxalate is an organic compound comprising a magnesium cation with a 2+ charge bonded to an oxalate anion . It has the chemical formula MgC 2 O 4 . Magnesium oxalate is a white solid that comes in two forms: an anhydrous form and a dihydrate form where two water molecules are complexed with the structure. Both forms are practically insoluble in water and are insoluble in organic solvents.
Magnesium oxalate has been found naturally near Mill of Johnston, which is located close to Insch in northeast Scotland. This naturally occurring magnesium oxalate is called glushinskite and occurs at the lichen /rock interface on serpentinite as a creamy white layer mixed in with the hyphae of the lichen fungus. A scanning electron micrograph of samples taken showed that the crystals had a pyramidal structure with both curved and striated faces. The size of these crystals ranged from 2 to 5 μm. [ 9 ]
Magnesium oxalate can be synthesized by combining a magnesium salt or ion with an oxalate.
A specific example of a synthesis would be mixing Mg(NO 3 ) 2 and KOH and then adding that solution to dimethyl oxalate , (COOCH 3 ) 2 . [ 10 ]
When heated, magnesium oxalate will decompose. First, the dihydrate will decompose at 150 °C into the anhydrous form.
With additional heating the anhydrous form will decompose further into magnesium oxide and carbon oxides between 420 °C and 620 °C. First, carbon monoxide and magnesium carbonate form. The carbon monoxide then oxidizes to carbon dioxide , and the magnesium carbonate decomposes further to magnesium oxide and carbon dioxide. [ 8 ]
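The thermal decomposition sequence just described can be summarised as (equations reconstructed from the description above):

```latex
\begin{align*}
\mathrm{MgC_2O_4 \cdot 2H_2O} &\xrightarrow{\ 150\,^{\circ}\mathrm{C}\ } \mathrm{MgC_2O_4} + 2\,\mathrm{H_2O} \\
\mathrm{MgC_2O_4} &\longrightarrow \mathrm{MgCO_3} + \mathrm{CO} \\
2\,\mathrm{CO} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{CO_2} \\
\mathrm{MgCO_3} &\longrightarrow \mathrm{MgO} + \mathrm{CO_2}
\end{align*}
```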
Magnesium oxalate dihydrate has also been used in the synthesis of nano-sized particles of magnesium oxide, which have a larger surface area to volume ratio than conventionally synthesized particles and are optimal for various applications, such as in catalysis . By using a sol-gel synthesis , which involves combining a magnesium salt, in this case magnesium oxalate, with a gelating agent, nano-sized particles of magnesium oxide can be produced. [ 11 ]
Magnesium oxalate is a skin and eye irritant. If inhaled, it will irritate the lungs and mucous membranes . Magnesium oxalate has no known chronic effects nor any carcinogenic effects. Magnesium oxalate is non-flammable and stable, but in fire conditions it will give off toxic fumes. According to OSHA, magnesium oxalate is considered to be hazardous. [ 4 ] [ 12 ] | https://en.wikipedia.org/wiki/Magnesium_oxalate |
Magnesium transporters are proteins that transport magnesium across the cell membrane . All forms of life require magnesium , yet the molecular mechanisms of Mg 2+ uptake from the environment and the distribution of this vital element within the organism are only slowly being elucidated.
The ATPase function of MgtA is highly cardiolipin dependent and has been shown to detect free magnesium in the μM range. [ 1 ]
In bacteria, Mg 2+ is probably mainly supplied by the CorA protein [ 2 ] and, where the CorA protein is absent, by the MgtE protein . [ 3 ] [ 4 ] In yeast the initial uptake is via the Alr1p and Alr2p proteins, [ 5 ] but at this stage the only internal Mg 2+ distributing protein identified is Mrs2p. [ 6 ] Within the protozoa only one Mg 2+ transporter (XntAp) has been identified. [ 7 ] In metazoa, Mrs2p [ 8 ] and MgtE homologues [ 9 ] have been identified, along with two novel Mg 2+ transport systems TRPM6/TRPM7 [ 10 ] [ 11 ] and PCLN-1. [ 12 ] Finally, in plants, a family of Mrs2p homologues has been identified [ 13 ] [ 14 ] along with another novel protein, AtMHX. [ 15 ]
The evolution of Mg 2+ transport appears to have been rather complicated. Proteins apparently based on MgtE are present in bacteria and metazoa, but are missing in fungi and plants, whilst proteins apparently related to CorA are present in all of these groups. The two active transport transporters present in bacteria, MgtA and MgtB, do not appear to have any homologues in higher organisms. There are also Mg 2+ transport systems that are found only in the higher organisms.
There are a large number of proteins yet to be identified that transport Mg 2+ . Even in the best studied eukaryote, yeast, Borrelly [ 16 ] has reported a Mg 2+ /H + exchanger without an associated protein, which is probably localised to the Golgi. At least one other major Mg 2+ transporter in yeast is still unaccounted for, the one affecting Mg 2+ transport in and out of the yeast vacuole. In higher, multicellular organisms, it seems that many Mg 2+ transporting proteins await discovery.
The CorA-domain-containing Mg 2+ transporters (CorA, Alr-like and Mrs2-like) have a similar but not identical array of affinities for divalent cations. In fact, this observation can be extended to all of the Mg 2+ transporters identified so far. This similarity suggests that the basic properties of Mg 2+ strongly influence the possible mechanisms of recognition and transport. However, this observation also suggests that using other metal ions as tracers for Mg 2+ uptake will not necessarily produce results comparable to the transporter's ability to transport Mg 2+ . Ideally, Mg 2+ should be measured directly. [ 17 ]
Since 28 Mg 2+ is practically unobtainable, much of the old data will need to be reinterpreted with new tools for measuring Mg 2+ transport, if different transporters are to be compared directly. The pioneering work of Kolisek [ 18 ] and Froschauer [ 19 ] using mag-fura 2 has shown that free Mg 2+ can be reliably measured in vivo in some systems. By returning to the analysis of CorA with this new tool, we have gained an important baseline for the analysis of new Mg 2+ transport systems as they are discovered. However, it is important that the amount of transporter present in the membrane is accurately determined if comparisons of transport capability are to be made. This bacterial system might also be able to provide some utility for the analysis of eukaryotic Mg 2+ transport proteins, but differences in biological systems of prokaryotes and eukaryotes will have to be considered in any experiment.
Comparing the functions of the characterised Mg 2+ transport proteins is currently almost impossible, since the proteins have been investigated in different biological systems using different methodologies and technologies. Finding a system where all the proteins can be compared directly would be a major advance. If the proteins could be shown to be functional in bacteria ( S. typhimurium ), then a combination of the techniques of mag-fura 2, quantification of protein in the envelope membrane, and structure of the proteins (X-ray crystal or cryo-TEM) might allow the determination of the basic mechanisms involved in the recognition and transport of the Mg 2+ ion. However, perhaps the best advance would be the development of methods allowing the measurement of the protein's function in the patch-clamp system using artificial membranes.
In 1968, Lusk [ 20 ] described the limitation of bacterial ( Escherichia coli ) growth on Mg 2+ -poor media, suggesting that bacteria required Mg 2+ and were likely to actively take this ion from the environment. The following year, the same group [ 21 ] and another group, Silver, [ 22 ] independently described the uptake and efflux of Mg 2+ in metabolically active E. coli cells using 28 Mg 2+ . By the end of 1971, two papers had been published describing the interference of Co 2+ , Ni 2+ and Mn 2+ on the transport of Mg 2+ in E. coli [ 23 ] and in Aerobacter aerogenes and Bacillus megaterium. [ 24 ] In the last major development before the cloning of the genes encoding the transporters, it was discovered that there was a second Mg 2+ uptake system that showed similar affinity and transport kinetics to the first system, but had a different range of sensitivities to interfering cations. This system was also repressible by high extracellular concentrations of Mg 2+ . [ 25 ] [ 26 ]
The CorA gene and its corresponding protein are the most exhaustively studied Mg 2+ transport system in any organism. Most of the published literature on the CorA gene comes from the laboratory of M. E. Maguire. Recently the group of R. J. Schweyen made a significant impact on the understanding of Mg 2+ transport by CorA. The gene was originally named after the cobalt-resistant phenotype in E. coli that was caused by the gene's inactivation. [ 25 ]
The gene was genetically identified in E. coli by Park et al. , [ 26 ] but wasn't cloned until Hmiel et al. [ 2 ] isolated the Salmonella enterica serovar Typhimurium ( S. typhimurium ) homologue. Later it would be shown by Smith and Maguire [ 27 ] that the CorA gene was present in 17 gram-negative bacteria. With the large number of complete genome sequences now available for prokaryotes, CorA has been shown to be virtually ubiquitous among the Eubacteria, as well as being widely distributed among the Archaea. [ 28 ] The CorA locus in E. coli contains a single open reading frame of 948 nucleotides, producing a protein of 316 amino acids. This protein is well conserved amongst the Eubacteria and Archaea. Between E. coli and S. typhimurium , the proteins are 98% identical, but in more distantly related species, the similarity falls to between 15 and 20%. [ 28 ] In the more distantly related genes, the similarity is often restricted to the C-terminal part of the protein, and a short amino acid motif GMN within this region is very highly conserved. The CorA domain, also known as PF01544 in the pFAM conserved protein domain database ( http://webarchive.loc.gov/all/20110506030957/http%3A//pfam.sanger.ac.uk/ ), is additionally present in a wide range of higher organisms, and these transporters will be reviewed below.
The CorA gene is constitutively expressed in S. typhimurium under a wide range of external Mg 2+ concentrations. [ 29 ] However, recent evidence suggests that the activity of the protein may be regulated by the PhoPQ two-component regulatory system . [ 30 ] This sensor responds to low external Mg 2+ concentrations during the infection process of S. typhimurium in humans. [ 31 ] In low external Mg 2+ conditions, the PhoPQ system was reported to suppress the function of CorA and it has been previously shown that the transcription of the alternative Mg 2+ transporters MgtA and MgtB is activated in these conditions. [ 29 ] Chamnongpol and Groisman suggest that this allows the bacteria to escape metal ion toxicity caused by the transport of other ions, particularly Fe(II), by CorA in the absence of Mg 2+ . [ 30 ] Papp and Maguire offer a conflicting report on the source of the toxicity. [ 32 ]
The figure (not to scale) shows the originally published transmembrane (TM) domain topology of the S. typhimurium CorA protein, which was said to have three membrane-spanning regions in the C-terminal part of the protein (shown in blue), as determined by Smith et al. [ 33 ] Evidence for CorA acting as a homotetramer was published by Warren et al. in 2004. [ 34 ] In December 2005 the crystal structure of the CorA channel was posted to the RCSB protein structure database. The results showed that the protein has two TM domains and exists as a homopentamer, in direct conflict with the earlier reports. The soluble intracellular parts of the protein are highly charged, containing 31 positively charged and 53 negatively charged residues. Conversely, the TM domains contain only one charged amino acid, which has been shown to be unimportant in the activity of the transporter. [ 35 ] From mutagenesis experiments, it appears that the chemistry of the Mg 2+ transport relies on the hydroxyl groups lining the inside of the transport pore; there is also an absolute requirement for the GMN motif (shown in red). [ 35 ] [ 36 ]
Before the activity of CorA could be studied in vivo , any other Mg 2+ transport systems in the bacterial host had to be identified and inactivated or deleted (see below). A strain of S. typhimurium containing a functional CorA gene but lacking MgtA and MgtB was constructed [ 37 ] (also see below), and the uptake kinetics of the transporter were analysed. [ 38 ] This strain showed nearly normal growth rates on standard media (50 μM Mg 2+ ), but the removal of all three genes created a bacterial strain requiring 100 mM external Mg 2+ for normal growth. [ 37 ]
Mg 2+ is transported into cells containing only the CorA transport system with similar kinetics and cation sensitivities as the Mg 2+ uptake described in the earlier papers, and has additionally been quantified [ 38 ] (see table). The uptake of Mg 2+ was seen to plateau as in earlier studies; although no actual mechanism for the decrease in transport has been determined, it has been assumed that the protein is inactivated. [ 19 ] Co 2+ and Ni 2+ are toxic to S. typhimurium cells containing a functional CorA protein, and this toxicity stems from the blocking of Mg 2+ uptake (competitive inhibition) and the accumulation of these ions inside the cell. [ 2 ] Co 2+ and Ni 2+ have been shown to be transported by CorA by using radioactive tracer analysis, [ 2 ] [ 39 ] although with lower affinities (km) and velocities (Vmax) than for Mg 2+ (see table). The km values for Co 2+ and Ni 2+ are significantly above those expected to be encountered by the cells in their normal environment, so it is unlikely that the CorA transport system mediates the uptake of these ions under natural conditions. [ 2 ] To date, the evidence for Mn 2+ transport by CorA is limited to E. coli . [ 26 ]
The table lists the transport kinetics of the CorA Mg 2+ transport system. This table has been compiled from the publications of Snavely et al. (1989b), [ 38 ] Gibson et al. (1991) [ 39 ] and Smith et al. (1998a) [ 35 ] and summarises the kinetic data for the CorA transport protein expressed from the wild type promoter in bacteria lacking MgtA and MgtB. km and Vmax were determined at 20 °C as the uptake of Mg 2+ at 37 °C was too rapid to measure accurately.
Recently the Mg 2+ -dependent fluorescence of mag-fura 2 was used to measure the free Mg 2+ content of S. typhimurium cells in response to external Mg 2+ , which showed that CorA is the major uptake system for Mg 2+ in bacteria. [ 19 ] The authors also showed for the first time that the changes in the electric potential (ΔΨ) across the plasma membrane of the cell affected both the rate of Mg 2+ uptake and the free Mg 2+ content of the cell; depolarisation suppressed transport, while hyperpolarisation increased transport. The kinetics of transport were defined only by the rate of change of free Mg 2+ inside the cells (250 μM s −1 ). Because no quantification of the amount of CorA protein in the membrane was made, this value cannot be compared with other experiments on Mg 2+ transporters. [ 18 ]
The efflux of Mg 2+ from bacterial cells was first observed by Lusk and Kennedy (1969) [ 21 ] and is mediated by the CorA Mg 2+ transport system in the presence of high extracellular concentrations of Mg 2+ . [ 38 ] The efflux can also be triggered by Co 2+ , Mn 2+ and Ni 2+ , although not to the same degree as Mg 2+ . [ 23 ] No Co 2+ efflux through the CorA transport system was observed. The process of Mg 2+ efflux additionally requires one of the CorB, CorC or CorD genes. [ 39 ] The mutation of any single one of these genes leads to a Co 2+ resistance a little less than half of that provided by a CorA mutant. This effect may be due to the inhibition of Mg 2+ loss that would otherwise occur in the presence of high levels of Co 2+ . It is currently unknown whether Mg 2+ is more toxic when the CorBCD genes are deleted.
It has been speculated that the Mg 2+ ion will initially interact with any transport protein through its hydration shell. [ 40 ] Cobalt (III) hexaammine, Co(III)Hex, is a covalently bound (non-labile) analog for the first shell of hydration for several divalent cations, including Mg 2+ . The radius of the Co(III)Hex molecule is 244 pm, very similar to the 250 pm radius of the first hydration shell of Mg 2+ . This analog is a potent inhibitor of the CorA transport system, more so than Mg 2+ , Co 2+ or Ni 2+ . [ 41 ] The additional strength of the Co(III)Hex inhibition might come from the blocking of the transport pore due to the inability of the protein to ‘dehydrate’ the substrate. It was also shown that Co(III)Hex was not transported into the cells, [ 41 ] suggesting that at least partial dehydration would be required for the transport of the normal substrate (Mg 2+ ). Nickel (II) hexaammine, with a radius of 255 pm, did not inhibit the CorA transport system, suggesting a maximum size limit exists for the binding of the CorA substrate ion. [ 41 ] These results suggest that the important property involved in the recognition of Mg 2+ by CorA is the size of the ion with its first shell of hydration. Hence, the greater than 500-fold volume change generally quoted for the bare versus hydrated Mg 2+ ion, which includes the second sphere of hydration, may not be biologically relevant; this may be why the 56-fold volume change of the first hydration sphere alone is more commonly used.
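The volume figures quoted above follow from simple sphere geometry. A minimal sketch, assuming a bare Mg 2+ ionic radius of about 65 pm (an assumption on our part; the source quotes only the 250 pm first-shell radius):

```python
# Volume ratio between the first hydration shell of Mg2+ and the
# bare ion, modelling both as spheres (the 4/3*pi factor cancels).
def volume_ratio(r_outer_pm: float, r_inner_pm: float) -> float:
    return (r_outer_pm / r_inner_pm) ** 3

R_FIRST_SHELL = 250   # pm, radius of the first hydration shell (from the text)
R_BARE = 65           # pm, assumed bare ionic radius of Mg2+

first_shell_change = volume_ratio(R_FIRST_SHELL, R_BARE)
print(f"first-shell volume change: ~{first_shell_change:.0f}-fold")
```

With these radii the ratio comes out near the 56-fold figure cited; reaching the greater than 500-fold figure requires including the much larger second hydration sphere.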
The presence of these two genes was first suspected when Nelson and Kennedy (1972) [ 25 ] showed that there were Mg 2+ -repressible and non-repressible Mg 2+ uptake systems in E. coli . The non-repressible uptake of Mg 2+ is mediated by the CorA protein. In S. typhimurium the repressible Mg 2+ uptake was eventually shown to be via the MgtA and MgtB proteins. [ 37 ]
Both MgtA and MgtB are regulated by the PhoPQ system and are actively transcribed during the process of infection of human patients by S. typhimurium . [ 31 ] [ 42 ] [ 43 ] Although neither gene is required for pathogenicity, the MgtB protein does enhance the long-term survival of the pathogen in the cell. [ 44 ] The genes are also upregulated in vitro when the Mg 2+ concentration falls below 50 μM (Snavely et al. , 1991a). Although the proteins have km values similar to CorA and transport rates approximately 10 times less, the genes may be part of a Mg 2+ scavenging system. Chamnongpol and Groisman (2002) present evidence that the role of these proteins may be to compensate for the inactivation of the CorA protein by the PhoPQ regulon. [ 30 ] The authors suggest that the CorA protein is inactivated to allow the avoidance of metal toxicity via the protein in the low Mg 2+ environments S. typhimurium is subjected to by cells after infection.
The proteins are both P-type ATPases [ 38 ] [ 45 ] and neither gene shows any similarity to CorA. The MgtA and MgtB proteins are 75% similar (50% identical), although it seems that MgtB may have been acquired by horizontal gene transfer as part of Salmonella Pathogenicity Island 3. [ 45 ] [ 46 ] The TM topology of the MgtB protein has been experimentally determined, showing that the protein has ten TM-spanning helices with the termini of the protein in the cytoplasm (see figure). MgtA is present in widely divergent bacteria, but is not nearly as common as CorA, while MgtB appears to have a quite restricted distribution. [ 47 ] No hypotheses for the unusual distribution have been suggested.
The figure, adapted from Smith et al. (1993b), [ 48 ] shows the experimentally determined membrane topology of the MgtB protein in S. typhimurium . The TM domains are shown in light blue and the orientation in the membrane and the positions of the N- and C-termini are indicated. The figure is not drawn to scale.
While the MgtA and MgtB proteins are very similar, they do show some minor differences in activity. MgtB is very sensitive to temperature, losing all activity (with regard to Mg 2+ transport) at a temperature of 20 °C. [ 38 ] Additionally, MgtB and MgtA are inhibited by different ranges of cations (Table A10.1 [ 38 ] ).
The table lists cation transport characteristics of the MgtA and MgtB proteins in S. typhimurium as well as the kinetic data for the MgtA and MgtB transport proteins at 37 °C. [ 38 ] The Vmax numbers listed in parentheses are those for uptake at 20 °C. The inhibition of Mg 2+ transport by Mn 2+ via MgtA showed unusual kinetics (see Figure 1 of Snavely et al. , 1989b [ 38 ] ).
The MgtA and MgtB proteins are ATPases, using one molecule of ATP per transport cycle, whereas the Mg 2+ uptake via CorA is simply electrochemically favourable. Chamnongpol and Groisman (2002) have suggested that the MgtA and MgtB proteins form part of a metal toxicity avoidance system. [ 30 ] Alternatively, as most P-type ATPases function as efflux mediating transporters, it has been suggested that the MgtA and MgtB proteins act as efflux proteins for a currently unidentified cation, and Mg 2+ transport is either non-specific or exchanged to maintain the electro-neutrality of the transport process. [ 49 ] Further experiments will be required to define the physiological function of these proteins.
Two papers describe MgtE, a fourth Mg 2+ uptake protein in bacteria unrelated to MgtA/B or CorA. [ 3 ] [ 4 ] This gene has been sequenced and the protein, 312 amino acids in size, is predicted to contain either four or five TM spanning domains that are closely arranged in the C-terminal part of the protein (see figure). This region of the protein has been identified in the Pfam database as a conserved protein domain (PF01769) and species containing proteins that have this protein domain are roughly equally distributed throughout the Eubacteria and Archaea, although it is quite rare in comparison with the distribution of CorA. However, the diversity of the proteins containing the domain is significantly larger than that of the CorA domain. The Pfam database lists seven distinct groups of MgtE domain containing proteins, of which six contain an archaeal or eubacterial member. The expression of MgtE is frequently controlled by a conserved RNA structure, YkoK leader or M-box. [ 51 ]
The figure (right), adapted from Smith et al. (1995) [ 4 ] and the PFAM database entry, shows the computer-predicted membrane topology of the MgtE protein in Bacillus firmus OF4. The TM domains are shown in light blue. The CBS domains , named for the protein they were identified in, cystathionine-beta synthase , shown in orange, are identified in the Pfam database as regulatory domains, but the mechanism of action has not yet been described. They are found in several voltage-gated chloride channels. [ 52 ] The orientation in the membrane and the positions of the N- and C-termini are indicated. This figure is not drawn to scale. This transporter has recently had its structure solved by X-ray crystallography. [ 53 ]
The MgtE gene was first identified by Smith et al. (1995) during a screen for CorA-like proteins in bacteria and complements the Mg 2+ -uptake-deficient S. typhimurium strain MM281 (corA mgtA mgtB), restoring wild type growth on standard media. [ 4 ] The kinetics of Mg 2+ transport for the protein were not determined, as 28 Mg 2+ was unavailable. As a substitute, the uptake of 57 Co 2+ was measured and was shown to have a km of 82 μM and a Vmax of 354 pmol min −1 10 8 cells −1 . Mg 2+ was a competitive inhibitor with a Ki of 50 μM—the Ki of Mg 2+ inhibition of 60 Co 2+ uptake via CorA is 10 μM. [ 2 ] A comparison of the available kinetic data for MgtA and CorA is shown in the table. Clearly, MgtE does not transport Co 2+ to the same degree as CorA, and the inhibition of transport by Mg 2+ is also less efficient, which suggests that the affinity of MgtE for Mg 2+ is lower than that of CorA. The strongest inhibitor of Co 2+ uptake was Zn 2+ , with a Ki of 20 μM. [ 4 ] The transport of Zn 2+ by this protein may be as important as that of Mg 2+ .
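The Ki values above can be interpreted through standard Michaelis-Menten competitive inhibition, in which an inhibitor raises the apparent km by a factor of (1 + [I]/Ki). A minimal sketch using the numbers quoted in the text (the helper function is ours, not from the source, and the CorA comparison reuses the MgtE km purely for illustration):

```python
# Apparent Km of a transported substrate in the presence of a
# competitive inhibitor: Km_app = Km * (1 + [I] / Ki).
def apparent_km(km_um: float, inhibitor_um: float, ki_um: float) -> float:
    return km_um * (1 + inhibitor_um / ki_um)

# MgtE: Co2+ uptake Km ~82 uM, competitively inhibited by Mg2+ (Ki ~50 uM).
# At 50 uM Mg2+ the apparent Km for Co2+ doubles:
print(apparent_km(82, 50, 50))   # → 164.0

# For comparison, the same Km combined with CorA's tighter Mg2+
# inhibition (Ki ~10 uM) is shifted six-fold at the same Mg2+ level:
print(apparent_km(82, 50, 10))   # → 492.0
```

The weaker shift for MgtE reflects the lower Ki-based affinity of MgtE for Mg 2+ relative to CorA noted above.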
The table shows a comparison of the transport kinetics of MgtE and CorA, and key kinetic parameter values for them are listed. As shown, the data has been generated at differing incubation temperatures. km and Ki are not significantly altered by the differing incubation temperature. Conversely, Vmax shows a strong positive correlation with temperature, hence the value of Co 2+ Vmax for MgtE is not directly comparable with the values for CorA.
The earliest research showing that yeast takes up Mg 2+ appears to have been done by Schmidt et al. (1949). However, these authors only showed altered yeast Mg 2+ content in a table within the paper, and the report's conclusions dealt entirely with the metabolism of phosphate. A series of experiments by Rothstein [ 54 ] [ 55 ] shifted the focus more towards the uptake of the metal cations, showing that yeast take up cations with the following affinity series: Mg 2+ , Co 2+ , Zn 2+ > Mn 2+ > Ni 2+ > Ca 2+ > Sr 2+ . Additionally, it was suggested that the transport of the different cations is mediated by the same transport system [ 55 ] [ 56 ] [ 57 ] [ 58 ] — a situation very much like that in bacteria.
In 1998, MacDiarmid and Gardner finally identified the proteins responsible for the observed cation transport phenotype in Saccharomyces cerevisiae . [ 5 ] The genes involved in this system and a second mitochondrial Mg 2+ transport system, functionally identified significantly after the gene was cloned, are described in the sections below.
Two genes, ALR1 and ALR2, were isolated in a screen for Al 3+ tolerance (resistance) in yeast. [ 5 ] Over-expression constructs containing yeast genomic DNA were introduced into wild type yeast and the transformants were screened for growth on toxic levels of Al 3+ . ALR1 and ALR2 containing plasmids allowed the growth of yeast in these conditions.
The Alr1p and Alr2p proteins consist of 859 and 858 amino acids respectively and are 70% identical. In a region in the C-terminal half, these proteins are weakly similar to the full CorA protein. The computer-predicted TM topology of Alr1p is shown in the figure. The presence of a third TM domain was suggested by MacDiarmid and Gardner (1998), [ 5 ] on the strength of sequence homology, and more recently by Lee and Gardner (2006), [ 59 ] on the strength of mutagenesis studies, making the TM topology of these proteins more like that of CorA (see figure). Also, Alr1p contains the conserved GMN motif at the outside end of TM 2 (TM 2') and the mutation of the methionine (M) in this motif to a leucine (L) led to the loss of transport capability. [ 59 ]
The figure shows the two possible TM topologies of Alr1p. Part A of the figure shows the computer-predicted membrane topology of the Alr1p protein in yeast and part B shows the topology of Alr1p based on the experimental results of Lee and Gardner (2006). [ 59 ] The GMN motif location is indicated in red and the TM domains in light blue. The orientation in the membrane and the positions of the N- and C-termini are indicated, the various sizes of the soluble domains are given in amino acids (AA), and TM domains are numbered by their similarity to CorA. Where any TM domain is missing, the remaining domains are numbered with primes. The figure is not drawn to scale.
A third ALR-like gene is present in S. cerevisiae and there are two homologous genes in both Schizosaccharomyces pombe and Neurospora crassa . These proteins contain a GMN motif like that of CorA, with the exception of the second N. crassa gene. No ALR-like genes have been identified in species outside of the fungi.
Membrane fractionation and green fluorescent protein (GFP) fusion studies established that Alr1p is localised to the plasma membrane. [ 60 ] [ 61 ] Alr1p was observed to be internalised and degraded in the vacuole in response to extracellular cations. Mg 2+ , at very low extracellular concentrations (100 μM; < 10% of the standard media Mg 2+ content), and Co 2+ and Mn 2+ at relatively high concentrations (> 20× standard media), induced the change in Alr1p protein localisation, and the effect was dependent on functional ubiquitination, endocytosis and vacuolar degradation. [ 60 ] This mechanism was proposed to allow the regulation of Mg 2+ uptake by yeast.
However, a recent report [ 61 ] indicates that several of the observations made by Stadler et al. [ 60 ] were not reproducible. [ 61 ] For example, regulation of ALR1 mRNA accumulation by Mg 2+ supply was not observed, and the stability of the Alr1 protein was not reduced by exposure to excess Mg 2+ . The original observation of Mg-dependent accumulation of the Alr1 protein under steady-state low-Mg conditions was replicated, but this effect was shown to be an artifact caused by the addition of a small peptide (epitope) to the protein to allow its detection. Despite these problems, Alr1 activity was demonstrated to respond to Mg supply, [ 61 ] suggesting that the activity of the protein is regulated directly, as was observed for some bacterial CorA proteins. [ 19 ]
A functional Alr1p (wild type) or Alr2p (overexpressed) is required for S. cerevisiae growth in standard conditions (4 mM Mg 2+ [ 5 ] ), and Alr1p can support normal growth at Mg 2+ concentrations as low as 30 μM. [ 60 ] 57 Co 2+ is taken up into yeast via the Alr1p protein with a km of 77–105 μM ( [ 56 ] ; C. MacDiarmid and R. C. Gardner, unpublished data), but the Ki for Mg 2+ inhibition of this transport is currently unknown. The transport of other cations by the Alr1p protein was assayed by the inhibition of yeast growth. The overexpression of Alr1p led to increased sensitivity to Ca 2+ , Co 2+ , Cu 2+ , La 3+ , Mn 2+ , Ni 2+ and Zn 2+ , an array of cations similar to those shown to be transported into yeast by a CorA-like transport system. [ 5 ] The increased toxicity of the cations in the presence of the transporter is assumed to be due to the increased accumulation of the cation inside the cell.
The evidence that Alr1p is primarily a Mg 2+ transporter is that the loss of Alr1p leads to a decreased total cell content of Mg 2+ , but not of other cations. Additionally, two electrophysiological studies where Alr1p was produced in yeast or Xenopus oocytes showed a Mg 2+ -dependent current in the presence of the protein ( [ 62 ] ; Salih et al. , in prep.).
The kinetics of Mg 2+ uptake by Alr1p have been investigated by electrophysiology techniques on whole yeast cells. [ 62 ] The results suggested that Alr1p is very likely to act as an ion-selective channel. In the same paper, the authors reported that Mg 2+ transport by Alr1p varied from 200 pA to 1500 pA, with a mean current of 264 pA. No quantification of the amount of protein producing the current was presented, so the results lack comparability with the bacterial Mg 2+ transport proteins.
The alternative techniques of 28 Mg 2+ radiotracer analysis and mag-fura 2 to measure Mg 2+ uptake have not yet been used with Alr1p. 28 Mg 2+ is currently not available and the mag-fura 2 system is unlikely to provide simple uptake data in yeast. The yeast cell maintains a heterogeneous distribution of Mg 2+ [ 63 ] suggesting that multiple systems inside the yeast are transporting Mg 2+ into storage compartments. This internal transport will very likely mask the uptake process. The expression of ALR1 in S. typhimurium without Mg 2+ uptake genes may be an alternative, but, as stated earlier, the effects of a heterologous expression system would need to be taken into account.
The MNR2 gene encodes a protein closely related to the Alr proteins, but includes conserved features that define a distinct subgroup of CorA proteins in fungal genomes, suggesting a distinct role in Mg 2+ homeostasis. Like an alr1 mutant, growth of an mnr2 mutant was sensitive to Mg 2+ -deficient conditions, but the mnr2 mutant was observed to accumulate more Mg 2+ than a wild-type strain under these conditions. [ 64 ] These phenotypes suggested that Mnr2 may regulate Mg 2+ storage within an intracellular compartment. Consistent with this interpretation, the Mnr2 protein was localized to the membrane of the vacuole, an internal compartment implicated in the storage of excess mineral nutrients by yeast. A direct role of Mnr2 in Mg 2+ transport was suggested by the observation that increased Mnr2 expression, which redirected some Mnr2 protein to the cell surface, also suppressed the Mg 2+ -requirement of an alr1 alr2 double mutant strain. The mnr2 mutation also altered accumulation of other divalent cations, suggesting this mutation may increase Alr gene expression or protein activity. Recent work [ 61 ] supported this model, by showing that Alr1 activity was increased in an mnr2 mutant strain, and that the mutation was associated with induction of Alr1 activity at a higher external Mg concentration than was observed for an Mnr2 wild-type strain. These effects were observed without any change in Alr1 protein accumulation, again indicating that Alr1 activity may be regulated directly by the Mg concentration within the cell.
Like the ALR genes, the MRS2 gene was cloned and sequenced before it was identified as a Mg 2+ transporter. The MRS2 gene was identified in the nuclear genome of yeast in a screen for suppressors of a mitochondrial gene RNA splicing mutation, [ 65 ] and was cloned and sequenced by Wiesenberger et al. (1992). [ 66 ] Mrs2p was not identified as a putative Mg 2+ transporter until Bui et al. (1999). [ 6 ] Gregan et al. (2001a) identified LPE10 by homology to MRS2 and showed that both LPE10 and MRS2 mutants altered the Mg 2+ content of yeast mitochondria and affected RNA splicing activity in the organelle. [ 67 ] [ 68 ] Mg 2+ transport has been shown to be directly mediated by Mrs2p, [ 18 ] but this has not been demonstrated for Lpe10p.
The Mrs2p and Lpe10p proteins are 470 and 413 amino acid residues in size, respectively, and a 250–300 amino acid region in the middle of the proteins shows a weak similarity to the full CorA protein. The TM topologies of the Mrs2p and Lpe10p proteins have been assessed using a protease protection assay [ 6 ] [ 67 ] and are shown in the figure. TM 1 and 2 correspond to TM 2 and 3 in the CorA protein. The conserved GMN motif is at the outside end of the first TM domain, and when the glycine (G) in this motif was mutated to a cysteine (C) in Mrs2p, Mg 2+ transport was strongly reduced. [ 18 ]
The figure shows the experimentally determined topology of Mrs2p and Lpe10p as adapted from Bui et al. (1999) [ 6 ] and Gregan et al. (2001a). [ 67 ] The GMN motif location is indicated in red and the TM domains in light blue. The orientation in the membrane and the positions of the N- and C-termini are indicated. The various sizes of the soluble domains are given in amino acids (AA), TM domains are numbered, and the figure is not drawn to scale.
Mrs2p has been localised to the mitochondrial inner membrane by subcellular fractionation and immunodetection [ 6 ] and Lpe10p to the mitochondria. [ 67 ] Mitochondria lacking Mrs2p do not show a fast Mg 2+ uptake, only a slow ‘leak’, and overaccumulation of Mrs2p leads to an increase in the initial rate of uptake. [ 18 ] Additionally, CorA, when fused to the mitochondrial leader sequence of Mrs2p, can partially complement the mitochondrial defect conferred by the loss of either Mrs2p or Lpe10p. Hence, Mrs2p and/or Lpe10p may be the major Mg 2+ uptake system for mitochondria. A possibility is that the proteins form heterodimers, as neither protein (when overexpressed) can fully complement the loss of the other. [ 67 ]
The characteristics of Mg 2+ uptake in isolated mitochondria by Mrs2p were quantified using mag-fura 2. [ 18 ] The uptake of Mg 2+ by Mrs2p shared a number of attributes with CorA. First, Mg 2+ uptake was directly dependent on the electric potential (ΔΨ) across the boundary membrane. Second, the uptake is saturated far below that which the ΔΨ theoretically permits, so the transport of Mg 2+ by Mrs2p is likely to be regulated in a similar manner to CorA, possibly by the inactivation of the protein. Third, Mg 2+ efflux was observed via Mrs2p upon the artificial depolarisation of the mitochondrial membrane by valinomycin. Finally, the Mg 2+ fluxes through Mrs2p are inhibited by cobalt (III) hexaammine. [ 18 ]
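The gap between the theoretical and observed accumulation can be illustrated with a Nernst-equation calculation. The sketch below is illustrative only: the membrane potential and temperature are assumed values for a typical energised mitochondrion, not figures taken from the studies cited here.

```python
import math

# Nernst relation: for a cation of charge z, the equilibrium ratio
# [in]/[out] that a membrane potential dPsi (inside negative) can
# support is  [in]/[out] = exp(-z * F * dPsi / (R * T)).
R = 8.314       # J mol^-1 K^-1, gas constant
F = 96485.0     # C mol^-1, Faraday constant
T = 298.0       # K, assumed temperature (illustrative)
z = 2           # charge of Mg2+
dPsi = -0.18    # V, assumed mitochondrial membrane potential

ratio = math.exp(-z * F * dPsi / (R * T))
print(f"Equilibrium [Mg2+]in/[Mg2+]out at {dPsi*1000:.0f} mV: {ratio:.2e}")
```

At an assumed ΔΨ of −180 mV, a purely passive channel at equilibrium could support roughly a million-fold accumulation of Mg 2+ . The observed saturation of uptake far below this level is what suggests that transport through Mrs2p is actively limited, possibly by inactivation of the protein.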
The kinetics of Mg 2+ uptake by Mrs2p were determined in the Froschauer et al. (2004) paper on CorA in bacteria. [ 19 ] The initial change in free Mg 2+ concentration was 150 μM s-1 for wild type and 750 μM s-1 for mitochondria from yeast overexpressing MRS2. No attempt was made to scale the observed transport to the amount of transporter present.
The transport of Mg 2+ into Paramecium has been characterised largely by R. R. Preston and his coworkers. Electrophysiological techniques on whole Paramecium were used to identify and characterise Mg 2+ currents in a series of papers [ 69 ] [ 70 ] [ 71 ] [ 72 ] before the gene was cloned by Haynes et al. (2002). [ 7 ]
The open reading frame for the XNTA gene is 1707 bp in size, contains two introns and produces a predicted protein of 550 amino acids. [ 7 ] The protein is predicted to contain 11 TM domains and also contains the α1 and α2 motifs (see figure) of the SLC8 ( Na+/Ca 2+ exchanger [ 73 ] ) and SLC24 ( K+ dependent Na+/Ca 2+ exchanger [ 74 ] ) human solute transport proteins. XntAp is equally similar to the SLC8 and SLC24 protein families by amino acid sequence, and its predicted TM topology is more like that of SLC24, but the similarity is at best weak and the relationship is very distant. [ 7 ] The AtMHX protein from plants also shares a distant relationship with the SLC8 proteins.
The figure shows the predicted TM topology of XntAp. Adapted from Haynes et al. (2002), [ 7 ] this figure shows the computer predicted membrane topology of XntAp in Paramecium. The orientation in the membrane was determined using HMMTOP. [ 75 ] [ 76 ] The TM domains are shown in light blue, the α1 and α2 domains are shown in green. The orientation in the membrane and the positions of the N- and C-termini are indicated and the figure is not drawn to scale.
The Mg 2+ -dependent currents carried by XntAp are kinetically like that of a channel protein and have an ion selectivity order of Mg 2+ > Co 2+ , Mn 2+ > Ca 2+ — a series again very similar to that of CorA. [ 72 ] Unlike the other transport proteins reported so far, XntAp is dependent on intracellular Ca 2+ . The transport is also dependent on ΔΨ, but again Mg 2+ is not transported to equilibrium, being limited to approximately 0.4 mM free Mg 2+ in the cytoplasm. The existence of an intracellular compartment with a much higher free concentration of Mg 2+ (8 mM) was supported by the results.
The investigation of Mg 2+ in animals, including humans, has lagged behind that in bacteria and yeast. This is largely because of the complexity of the systems involved, but also because of the impression within the field that Mg 2+ was maintained at high levels in all cells and was unchanged by external influences. Only in the last 25 years has a series of reports begun to challenge this view, with new methodologies finding that free Mg 2+ content is maintained at levels where changes might influence cellular metabolism. [ 77 ]
A bioinformatic search of the sequence databases identified one homologue of the MRS2 gene of yeast in a range of metazoans. [ 8 ] The protein has a very similar sequence and predicted TM topology to the yeast protein, and the GMN motif is intact at the end of the first TM domain. The human protein, hsaMrs2p, has been localised to the mitochondrial membrane in mouse cells using a GFP fusion protein.
Very little is known about the Mg 2+ transport characteristics of the protein in mammals, but Zsurka et al. (2001) has shown that the human Mrs2p complements the mrs2 mutants in the yeast mitochondrial Mg 2+ uptake system. [ 8 ]
The identification of this gene family in the metazoa began with a signal sequence trap method for isolating secreted and membrane proteins. [ 9 ] Much of the identification has come from bioinformatic analyses. Three genes were eventually identified in humans, another three in mouse and three in Caenorhabditis elegans , with a single gene in Anopheles gambiae . The pFAM database lists the MgtE domain as pFAM01769 and additionally identifies a MgtE domain-containing protein in Drosophila melanogaster . The proteins containing the MgtE domain can be divided into seven classes, as defined by pFAM using the type and organisation of the identifiable domains in each protein. Metazoan proteins are present in three of the seven groups. All of the metazoan proteins contain two MgtE domains, but some of these have been predicted only by context recognition (Coin, Bateman and Durbin, unpublished. See the pFAM website for further details).
The human SLC41A1 protein contains two MgtE domains with 52% and 46% respective similarity to the PF01769 consensus sequence and is predicted to contain ten TM domains, five in each MgtE domain (see figure), which suggests that the MgtE protein of bacteria may work as a dimer.
Adapted from Wabakken et al. (2003) [ 9 ] and the pFAM database, the figure shows the computer predicted membrane topology of MgtE in H. sapiens . The TM domains are shown in light blue, the orientation in the membrane and the positions of the N- and C-termini are indicated, and the figure is not drawn to scale.
Wabakken et al. (2003) [ 9 ] found that the transcript of the SLC41A1 gene was expressed in all human tissues tested, but at varying levels, with the heart and testis having the highest expression of the gene. No explanation of the expression pattern has been suggested with regard to Mg 2+ -related physiology.
It has not been shown whether the SLC41 proteins transport Mg 2+ or complement a Mg 2+ transport mutation in any experimental system. However, it has been suggested that, as MgtE proteins have no other known function, they are likely to be Mg 2+ transporters in the metazoa as they are in the bacteria. [ 9 ] This will need to be verified using one of the now standard experimental systems for examining Mg 2+ transport.
The investigation of the TRPM genes and proteins in human cells is an area of intense recent study and, at times, debate. Montell et al. (2002) [ 78 ] reviewed the research into the TRP genes, and a second review by Montell (2003) [ 79 ] covered the TRPM genes specifically.
The TRPM family of ion channels has members throughout the metazoa. The TRPM6 and TRPM7 proteins are highly unusual, containing both an ion channel domain and a kinase domain (Figure 1.7), the role of which brings about the most heated debate. [ 79 ]
The activity of these two proteins has been very difficult to quantify. TRPM7 by itself appears to be a Ca 2+ channel, [ 80 ] but in the presence of TRPM6 the affinity series of transported cations places Mg 2+ above Ca 2+ . [ 10 ] [ 81 ] The differences in reported conductance were caused by the expression patterns of these genes. TRPM7 is expressed in all cell types tested so far, while TRPM6 shows a more restricted pattern of expression. [ 82 ] An unfortunate choice of experimental system by Voets et al. (2004) [ 83 ] led to the conclusion that TRPM6 is a functional Mg 2+ transporter. However, later work by Chubanov et al. (2004) [ 82 ] clearly showed that TRPM7 is required for TRPM6 activity and that the results of Voets et al. are explained by the expression of TRPM7 in the cell line they used. Whether TRPM6 is functional by itself is yet to be determined.
Adapted from Nadler et al. (2001), [ 10 ] Runnels et al. (2001) [ 84 ] and Montell et al. (2002), [ 78 ] the figure shows the computer-predicted membrane topology of the TRPM6 and TRPM7 proteins in Homo sapiens . At this time, the topology shown should be considered a tentative hypothesis. The TM domains are shown in light blue, the pore loop in purple, the TRP motif in red and the kinase domain in green. The orientation in the membrane and the positions of the N- and C-termini are indicated and the figure is not drawn to scale.
The conclusions of the Voets et al. (2004) [ 83 ] paper are probably incorrect in attributing the Mg 2+ dependent currents to TRPM7 alone, and their kinetic data are likely to reflect the combined TRPM7/ TRPM6 channel. The report presents a robust collection of data consistent with a channel-like activity passing Mg 2+ , based on both electrophysiological techniques and also mag-fura 2 to determine changes in cytoplasmic free Mg 2+ .
Claudins allow for Mg 2+ transport via the paracellular pathway; that is, they mediate transport of the ion through the tight junctions between cells that form an epithelial cell layer. In particular, Claudin-16 allows the selective reuptake of Mg 2+ in the human kidney. Some patients with mutations in the CLDN19 gene also have altered magnesium transport. [ 85 ] [ 86 ]
The gene Claudin-16 was cloned by Simon et al. (1999), [ 12 ] but only after a series of reports described the Mg 2+ flux itself with no gene or protein. [ 87 ] [ 88 ] [ 89 ] The expression pattern of the gene was determined by RT-PCR, and was shown to be very tightly confined to a continuous region of the kidney tubule running from the medullary thick descending limb to the distal convoluted tubule. [ 12 ] This localisation was consistent with the earlier reports for the location of Mg 2+ re-uptake by the kidney. Following the cloning, mutations in the gene were identified in patients with familial hypomagnesaemia with hypercalciuria and nephrocalcinosis, [ 90 ] [ 91 ] strengthening the links between the gene and the uptake of Mg 2+ .
The current knowledge of the molecular mechanisms for Mg 2+ transport in plants is very limited, with only three publications reporting a molecular basis for Mg 2+ transport in plants. [ 13 ] [ 14 ] [ 15 ] However, the importance of Mg 2+ to plants has been well described, and physiological and ecophysiological studies about the effects of Mg 2+ are numerous. This section will summarise the knowledge of a gene family identified in plants that is distantly related to CorA. Another gene, a Mg 2+ /H + exchanger ( AtMHX [ 15 ] ) unrelated to this gene family and to CorA, has also been identified; it is localised to the vacuolar membrane and will be described last.
Schock et al. (2000) identified and named the family AtMRS2 based on the similarity of the genes to the MRS2 gene of yeast. [ 13 ] The authors also showed that the AtMRS2-1 gene could complement a Δmrs2 yeast mutant phenotype. Independently, Li et al. (2001) [ 14 ] published a report identifying the family and showing that two additional members could complement Mg 2+ transport deficient mutants, one in S. typhimurium and the other in S. cerevisiae .
The three genes that have been shown to transport Mg 2+ are AtMRS2-1, AtMRS2-10 and AtMRS2-11, and these genes produce proteins 442, 443 and 459 amino acids in size, respectively. Each of the proteins shows significant similarity to Mrs2p of yeast and a weak similarity to CorA of bacteria, contains the conserved GMN amino acid motif at the outside end of the first TM domain, and is predicted to have two TM domains.
The AtMRS2-1 gene, when expressed in yeast from the MRS2 promoter and fused C-terminally to the first 95 amino acids of the Mrs2p protein, was directed to the mitochondria, where it complemented a Δmrs2 mutant both phenotypically (mitochondrial RNA splicing was restored) and with respect to the Mg 2+ content of the organelle. [ 13 ] No data on the kinetics of the transport were presented. The AtMRS2-11 gene was analysed in yeast (in the alr1 alr2 strain), where it was shown that expression of the gene significantly increased the rate of Mg 2+ uptake into starved cells over the control, as measured using flame atomic absorption spectroscopy of total cellular Mg 2+ content. However, Alr1p was shown to be significantly more effective at transporting Mg 2+ at low extracellular concentrations, suggesting that the affinity of AtMRS2-11 for Mg 2+ is lower than that of Alr1p. [ 14 ] An electrophysiological (voltage clamp) analysis of the AtMRS2-11 protein in Xenopus oocytes also showed a Mg 2+ -dependent current at membrane potentials (ΔΨ) of −100 to −150 mV (inside negative). [ 92 ] These values are physiologically significant, as several membranes in plants maintain ΔΨ in this range. However, the author had difficulty reproducing these results due to an apparent "death" of oocytes containing the AtMRS2-11 protein, and therefore these results should be viewed with caution.
The AtMRS2-10 transporter has been analysed using radioactive tracer uptake analysis. [ 14 ] 63Ni 2+ was used as the substitute ion and Mg 2+ was shown to inhibit the uptake of 63Ni 2+ with a Ki of 20 μM. Uptake was also inhibited by Co(III)Hex and by other divalent cations. Only Co 2+ and Cu 2+ inhibited transport with Ki values less than 1 mM.
The AtMRS2-10 protein was fused to GFP, and was shown to be localised to the plasma membrane. [ 14 ] A similar experiment was attempted in the Schock et al. (2000) paper, [ 13 ] but the observed localisation was not significantly different from that seen with unfused GFP. The most likely reason for the lack of a definitive localisation of AtMRS2-1 in the Schock et al. paper is that the authors removed the TM domains from the protein, thereby precluding its insertion into a membrane.
The exact physiological significance of the AtMRS2-1 and AtMRS2-10 proteins in plants has yet to be clarified. The AtMRS2-11 gene has been overexpressed (from the CaMV 35S promoter) in A. thaliana. [ 92 ] The transgenic line has been shown to accumulate high levels of the AtMRS2-11 transcript. A strong Mg 2+ deficiency phenotype (necrotic spots on the leaves, see Chapter 1.5 below) was recorded during the screening process (in both the T1 and T2 generations) for a homozygote line, but this phenotype was lost in the T3 generation and could not be reproduced when the earlier generations were screened a second time. The author suggested that environmental effects were the most likely cause of the inconsistent phenotype.
The first magnesium transporter isolated in any multicellular organism, AtMHX shows no similarity to any previously isolated Mg 2+ transport protein. [ 15 ] The gene was initially identified in the A. thaliana genomic DNA sequence database, by its similarity to the SLC8 family of Na+/Ca 2+ exchanger genes in humans.
The cDNA sequence of 1990 bp is predicted to produce a 539-amino acid protein. AtMHX is quite closely related to the SLC8 family at the amino acid level and shares a topology with eleven predicted TM domains (Figure A10.5). There is one major difference in the sequence, in that the long non-membranal loop (see Figure A10.5) is 148 amino acids in the AtMHX protein but 500 amino acids in the SLC8 proteins. However, this loop is not well conserved and is not required for transport function in the SLC8 family. [ 15 ]
The AtMHX gene is expressed throughout the plant but most strongly in the vascular tissue. [ 15 ] The authors suggest that the physiological role of the protein is to store Mg 2+ in these tissues for later release when needed. The protein localisation to the vacuolar membrane supports this suggestion (see also Chapter 1.5).
The protein transports Mg 2+ into the vacuolar space and H + out, as demonstrated by electrophysiological techniques. [ 15 ] The transport is driven by the ΔpH maintained between the vacuolar space (pH 4.5 – 5.9) and the cytoplasm (pH 7.3 – 7.6) by an H + -ATPase. [ 93 ] [ 94 ] How the transport of Mg 2+ by the protein is regulated was not determined. Currents were observed to pass through the protein in both directions, but the Mg 2+ out current required a ‘cytoplasmic’ pH of 5.5, a condition not found in plant cells under normal circumstances. In addition to the transport of Mg 2+ , Shaul et al. (1999) [ 15 ] also showed that the protein could transport Zn 2+ and Fe 2+ , but did not report on the capacity of the protein to transport other divalent cations (e.g. Co 2+ and Ni 2+ ) or its susceptibility to inhibition by cobalt (III) hexaammine.
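The energetics of this H + -coupled exchange can be sketched from the pH values quoted above. This is a back-of-the-envelope calculation under stated assumptions: midpoint pH values are used, the membrane-potential term is ignored, and the H + :Mg 2+ stoichiometry of AtMHX (not established in the source) is left out.

```python
import math

R = 8.314      # J mol^-1 K^-1, gas constant
F = 96485.0    # C mol^-1, Faraday constant
T = 298.0      # K, assumed temperature

# pH values from the text: vacuole 4.5-5.9, cytoplasm 7.3-7.6
pH_vac, pH_cyt = 5.2, 7.45           # midpoints, illustrative
delta_pH = pH_cyt - pH_vac           # about 2.25 pH units

# Free energy released per mole of H+ moving from vacuole to cytoplasm
# (chemical gradient only; the electrical component is ignored here)
dG_per_H = 2.303 * R * T * delta_pH          # J mol^-1
mV_equiv = 1000 * dG_per_H / F               # equivalent driving force, mV

print(f"dG per H+: {dG_per_H/1000:.1f} kJ/mol (~{mV_equiv:.0f} mV)")
```

Under these assumptions each proton crossing down its gradient makes roughly 13 kJ/mol (about 130 mV equivalent) available to drive Mg 2+ into the vacuole, which is why the ΔpH maintained by the H + -ATPase is sufficient to power the exchanger.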
The detailed kinetics of Mg 2+ transport have not been determined for AtMHX. However, physiological effects have been demonstrated. When A. thaliana plants were transformed with overexpression constructs of the AtMHX gene driven by the CaMV 35S promoter, the plants over-accumulated the protein and showed a phenotype of necrotic lesions in the leaves, which the authors suggest is caused by a disruption in the normal function of the vacuole, given their observation that the total Mg 2+ (or Zn 2+ ) content of the plants was not altered in the transgenic plants.
Adapted from Shaul et al. (1999) [ 15 ] and Quednau et al. (2004), [ 73 ] combined with an analysis using HMMTOP, the figure shows the computer-predicted membrane topology of the AtMHX protein in Arabidopsis thaliana . At this time the topology shown should be considered a tentative hypothesis. The TM domains are shown in light blue, the orientation in the membrane and the positions of the N- and C-termini are indicated, and the figure is not drawn to scale. The α1 and α2 domains, shown in green, are both quite hydrophobic and may both be inserted into the membrane.
Magnet-assisted transfection is a transfection method which uses magnetic interactions to deliver DNA into target cells. Nucleic acids are associated with magnetic nanoparticles , and magnetic fields drive the nucleic acid-particle complexes into target cells, where the nucleic acids are released. [ 1 ] [ 2 ]
Nanoparticles used as carriers for nucleic acids are mostly iron oxides . [ 3 ] These iron oxides can be generated by precipitation from acidic iron-salt solutions upon addition of appropriate bases. The magnetic nanoparticles have an approximate size of 100 nm and are additionally coated with biological polymers to allow loading of nucleic acids. Particles and nucleic acids form complexes by ionic interaction of the negatively charged nucleic acid and the positively charged surface of the magnetic nanoparticle.
The binding of the negatively charged nucleic acids to the positively charged iron particles occurs relatively fast. After complex formation, the loaded particles are incubated together with the target cells on a magnetic plate. The magnetic field causes the iron particles to be rapidly drawn towards the surface of the cell membrane. Cellular uptake occurs by either endocytosis or pinocytosis . Once delivered to the target cells, the DNA is released into the cytoplasm. The magnetic particles accumulate in endosomes and/or vacuoles . Over time, the nanoparticles are degraded and the iron enters the normal iron metabolism . Interference with cellular functions by the iron particles has not yet been reported. In most cases the increased iron concentration in culture media does not lead to cytotoxic effects.
Magnet-assisted transfection is a relatively new and time-saving method to introduce nucleic acids into a target cell with increased efficiency. In particular, adherent mammalian cell lines and primary cell cultures show very high transfection rates. Suspension cells and cells from other organisms can also be successfully transfected. A major advantage of the method is the mild treatment of the cells in comparison to liposome-based transfection reagents ( lipofection ) and electroporation , which may result in the death of 20–50% of cells. [ citation needed ] In addition, the transfection efficiency is increased in numerous cases by the directed transport in a magnetic field, especially for low amounts of nucleic acids. In contrast, methods like lipofection offer only statistical hits between cargo and cells, because of the three-dimensional motion of cells and transfection aggregates in a liquid suspension. Magnet-assisted transfection can also be performed in the presence of serum , which is a further benefit. Currently, over 150 cell types are known to have been successfully transfected. [ 4 ] Additionally, synergistic effects in transfection efficiency can arise from the possible combination of lipofection and magnet-assisted transfection.
In future, this technology might also be an alternative strategy to the currently used viral and non-viral vectors in gene therapy and gene transfer. [ 5 ]
Magnet fishing , also called magnetic fishing , is the practice of searching outdoor waters for ferromagnetic objects that can be pulled out with a strong neodymium magnet . [ 1 ]
In English, people who practice magnet fishing may be called magnetfishers or magneteers.
It is thought magnet fishing was initially started by boaters using magnets to recover fallen keys from the water. [ 2 ] Magnet fishing as a hobby began to take off in the early 2000s, starting in Europe. [ 3 ]
Magnet fishing can recover metal debris such as discarded bicycles , guns, [ 4 ] safes, [ 5 ] bombs, [ 6 ] grenades, [ 7 ] coins and car tire rims from bodies of water , but many who engage in the hobby are hoping to find rare and valuable items as well. [ 8 ] [ 9 ]
Magnet fishing is typically done with a strong neodymium magnet secured to a durable rope between 15 and 30 meters (50–100 ft) long, sometimes supplemented with a grappling hook. [ 11 ] A pair of gloves is recommended for safety, to protect the hands from any sharp objects pulled up with the magnet. [ 10 ] [ 12 ]
Some magnet fishers have retrieved dangerous objects, including loaded guns, unexploded ordnance , [ 13 ] [ 14 ] [ 6 ] [ 15 ] and sharp pieces of metal. [ 10 ]
Neodymium magnets are powerful and can interfere with pacemakers , posing a health risk; they can also damage other electronic devices. Fingers can get crushed between the magnet and a piece of metal, potentially causing serious bodily harm. [ 11 ] Tetanus can also be a risk for those without an up-to-date tetanus vaccine . [ 8 ] [ medical citation needed ]
In general, police urge those who find weapons or similar items to contact them. [ 2 ] [ 16 ]
Depending on the jurisdiction, anything of value may belong to the local government, not the finder. [ 13 ]
Amateur magnet-fishers in Belgium helped the police by recovering new evidence, specifically firearms and ammunition , related to the crimes of the Brabant killers . [ 17 ]
The rules of magnet fishing are the same as those governing the detection of buried objects:
"No one may use equipment capable of detecting metallic objects for the purpose of searching for monuments and objects likely to be of interest to prehistory, history, art or archaeology without first obtaining an administrative authorisation issued in accordance with the applicant’s qualifications and the nature and manner of the search". [ 18 ]
In Hamburg , magnet fishing without a permit is punishable by fine. [ 19 ] [ 20 ] In Berlin , magnet fishing is governed under the same rules as metal detecting , which requires a permit. Permits are not granted to hobbyists, as the context of any find is lost when untrained personnel disturb a site. [ 21 ] Like all major cities in Germany that experienced fighting and strategic bombing during World War II, unexploded ordnance poses a serious risk. [ 21 ]
Magnet fishing is subject to local regulations concerning outdoor waters. The Canal & River Trust , which owns most of the canals in England and Wales, has bylaws prohibiting people from removing material from the canal and rivers it owns, so fishers may be subject to a £25 fine [ 22 ] for magnet-fishing or removing any material from canal or inland navigation under the control of the Canal & River Trust in England or Wales , other than the Lee and Stort Navigation , Gloucester and Sharpness Canal , and River Severn Navigation . [ 23 ] The Trust "expressly prohibit[s]" the practice, although it refrains from legal action against first-time offenders. [ 2 ] In 2018, a child magnet-fished a sawn-off shotgun out of the Titford Canal in Oldbury, West Midlands. [ 24 ]
According to Polish penal code, magnet fishing without a valid government permit is a crime punishable by up to two years imprisonment. [ 25 ] [ 26 ] [ 27 ]
Magnet fishing is allowed in Scotland. If planning to magnet fish in a scheduled area (including the Canal Network), then the fisher must first obtain a Scheduled Monument Consent from Historic Environment Scotland , and permission from Scottish Canals . An official group exists which gives its members permission to magnet fish in a stretch of the Union Canal in Edinburgh, with more locations planned in the future. Archaeological or historical finds must be reported to Treasure Trove Scotland. [ 28 ]
In the US, there are no federal laws restricting magnet fishing. Magnet fishing in state waters without a license is prohibited in South Carolina under the Underwater Antiquities Act. [ 29 ] In Indiana, magnet fishing is allowed on public waters on Department of Natural Resources properties by permit. The magnet must be able to be carried and retrieved by hand. [ 30 ] Certain states have their own regulations pertaining to magnet fishing. [ 31 ]
The hobby has been adopted by celebrities such as English rugby player James Haskell . [ 10 ] [ 2 ]
Magnetation is the processing of iron ore tailings, the waste product of iron ore mines, to recover hematite . Crushed mine tailings are mixed with water to create a slurry ; the slurry is then pumped through magnetic separation chambers to extract hematite. Commercial interest in this process stems from the possibility of extracting additional iron from tailings supplied by existing mines, increasing their yield. [ 1 ]
The process is economical only at high ore prices; low prices have driven companies operating it, such as US Magnetation and ERP Iron Ore, into bankruptcy. [ 2 ]
Magnetic-activated cell sorting ( MACS ) is a method for separation of various cell populations depending on their surface antigens ( CD molecules ) invented by Miltenyi Biotec . The name MACS is a registered trademark of the company.
The method was developed with Miltenyi Biotec's MACS system, which uses superparamagnetic nanoparticles and columns. The superparamagnetic nanoparticles are of the order of 100 nm. They are used to tag the targeted cells in order to capture them inside the column. The column is placed between permanent magnets so that when the magnetic particle-cell complex passes through it, the tagged cells can be captured. The column consists of steel wool which increases the magnetic field gradient to maximize separation efficiency when the column is placed between the permanent magnets.
Magnetic-activated cell sorting is a commonly used method in areas like immunology, cancer research, neuroscience, and stem cell research. Miltenyi sells microbeads which are magnetic nanoparticles conjugated to antibodies which can be used to target specific cells.
In the Assisted Reproductive Technology ( ART ) field, apoptotic spermatozoa (those programmed to die) are bound by Annexin V (a membrane apoptosis marker) using specific monoclonal antibodies conjugated to magnetic microspheres. When the sample is then passed through a MACS column, the healthy spermatozoa can be separated from the apoptotic ones.
The MACS method allows cells to be separated by using magnetic nanoparticles coated with antibodies against a particular surface antigen. This causes the cells expressing this antigen to attach to the magnetic nanoparticles. After incubating the beads and cells, the solution is transferred to a column in a strong magnetic field. In this step, the cells attached to the nanoparticles (expressing the antigen) stay on the column, while other cells (not expressing the antigen) flow through. With this method, the cells can be separated positively or negatively with respect to the particular antigen(s). [ 1 ]
With positive selection, the cells expressing the antigen(s) of interest, which are retained on the column, are washed out into a separate vessel after removing the column from the magnetic field. This method is useful for isolation of a particular cell type, for instance CD4 lymphocytes .
Moreover, MACS enables early detection of sperm that have initiated apoptosis, even when they show adequate appearance and motility. A magnetically labelled annexin conjugate is added to the sperm. In normal cells, phosphatidylserine molecules are located on the inner, cytoplasmic side of the cell membrane. In cells that have initiated the apoptotic process, however, phosphatidylserine instead faces the outer side of the cell membrane, where it binds to the annexin conjugate. Normal spermatozoa therefore pass through the column without binding to the label, [ 2 ] while proapoptotic sperm remain trapped, which amounts to a sperm selection process based on the magnetically labelled conjugate. The technique has demonstrated its efficacy, although its use remains limited.
With negative selection, the antibody used is against surface antigen(s) which are known to be present on cells that are not of interest. After administration of the cells/magnetic nanoparticles solution onto the column the cells expressing these antigens bind to the column and the fraction that goes through is collected, as it contains almost no cells with these undesired antigens. [ 3 ]
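As a toy illustration (not laboratory software; the cell records and antigen names below are hypothetical), positive and negative selection can be viewed as splitting a population into a column-retained fraction and a flow-through fraction based on surface antigens:

```python
def macs_separate(cells, antigen):
    """Split cells into the column-retained fraction (expressing the
    antigen, i.e. bound to the magnetic beads) and the flow-through."""
    retained = [c for c in cells if antigen in c["antigens"]]
    flow_through = [c for c in cells if antigen not in c["antigens"]]
    return retained, flow_through

# Hypothetical cell population with CD surface markers.
cells = [
    {"name": "T helper cell", "antigens": {"CD3", "CD4"}},
    {"name": "cytotoxic T cell", "antigens": {"CD3", "CD8"}},
    {"name": "B cell", "antigens": {"CD19"}},
]

# Positive selection for CD4: keep what stays on the column.
cd4_cells, _ = macs_separate(cells, "CD4")

# Negative selection against CD19: keep the flow-through.
_, non_b_cells = macs_separate(cells, "CD19")
```

In this sketch, positive selection keeps the retained fraction, while negative selection keeps the flow-through, mirroring the two strategies described above.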
Magnetic nanoparticles conjugated to an antibody against an antigen of interest are not always available, but there is a way to circumvent this. Since fluorophore -conjugated antibodies are much more common, it is possible to use magnetic nanoparticles coated with anti-fluorochrome antibodies. These are incubated with the fluorescent-labelled antibodies against the antigen of interest and can thus serve for cell separation with respect to that antigen. [ 4 ] | https://en.wikipedia.org/wiki/Magnetic-activated_cell_sorting |
Magnetic 2D materials or magnetic van der Waals materials are two-dimensional materials that display ordered magnetic properties such as antiferromagnetism or ferromagnetism . After the discovery of graphene in 2004, the family of 2D materials grew rapidly, and many related materials were reported, but magnetic materials were long absent from the list. Since 2016, however, there have been numerous reports of 2D magnetic materials that can be exfoliated with ease, similarly to graphene.
The first few-layer van der Waals magnetism was reported in 2017 (Cr 2 Ge 2 Te 6 , [ 1 ] and CrI 3 [ 2 ] ). [ 3 ] One reason for this seemingly late discovery is that thermal fluctuations destroy magnetic order more easily in 2D magnets than in 3D bulk. It is also generally accepted in the community that low-dimensional materials have magnetic properties that differ from the bulk. The prospect of directly measuring the transition from 3D to 2D magnetism has been the driving force behind much of the recent work on van der Waals magnets. This much-anticipated transition has since been observed in both antiferromagnets and ferromagnets: FePS 3 , [ 4 ] Cr 2 Ge 2 Te 6 , [ 1 ] CrI 3 , [ 2 ] NiPS 3 , [ 5 ] MnPS 3 , [ 6 ] and Fe 3 GeTe 2 . [ 7 ]
Although the field has been only around since 2016, it has become one of the most active fields in condensed matter physics and materials science and engineering. There have been several review articles written up to highlight its future and promise. [ 8 ] [ 9 ] [ 10 ]
Magnetic van der Waals materials are a new addition to the growing list of 2d materials . The special feature of these materials is that they exhibit a magnetic ground state, either antiferromagnetic or ferromagnetic, when thinned down to a few sheets or even a single layer. Another, arguably more important, feature is that they can easily be produced in few-layer or monolayer form using simple means such as exfoliation with scotch tape, which is rather uncommon among other magnetic materials like oxide magnets.
Interest in these materials is based on the possibility of producing two-dimensional magnetic materials with ease. The field started with a series of papers in 2016 with a conceptual paper [ 11 ] and a first experimental demonstration. [ 4 ] [ 12 ] The field was expanded further with the publication of similar observations in ferromagnetism the following year. [ 1 ] [ 2 ] Since then, several new materials have been discovered and several review papers have been published. [ 8 ] [ 9 ] [ 10 ]
Magnetic materials have their magnetic moments ( spins ) aligned over a macroscopic length scale. Alignment of the spins is typically driven by the exchange interaction between neighboring spins. While at absolute zero ( T = 0 {\displaystyle T=0} ) the alignment can always exist, thermal fluctuations misalign magnetic moments at temperatures above the Curie temperature ( T C {\displaystyle T_{C}} ), causing a phase transition to a non-magnetic state. Whether T C {\displaystyle T_{C}} lies above absolute zero depends heavily on the dimensions of the system.
For a 3D system, the Curie temperature is always above zero, while a one-dimensional system can only be in a ferromagnetic state at T = 0 {\displaystyle T=0} . [ 13 ]
For 2D systems, the transition temperature depends on the spin dimensionality ( n {\displaystyle n} ). [ 9 ] In a system with n = 1 {\displaystyle n=1} , the spins are constrained to point along a single axis, either into or out of the plane. A spin dimensionality of two means that the spins are free to point in any direction parallel to the plane, and a spin dimensionality of three means there are no constraints on the direction of the spin. A system with n = 1 {\displaystyle n=1} is described by the 2D Ising model . Onsager's solution to the model demonstrates that T C > 0 {\displaystyle T_{C}>0} , thus allowing magnetism at obtainable temperatures. By contrast, an infinite system with n = 3 {\displaystyle n=3} , described by the isotropic Heisenberg model , does not display magnetism at any finite temperature. Long-range ordering of the spins in an infinite system is prevented by the Mermin-Wagner theorem , which states that the spontaneous symmetry breaking required for magnetism is not possible in isotropic two-dimensional magnetic systems. Spin waves in this case are gapless with a finite density of states and are therefore easy to excite, destroying magnetic order. Therefore, a source of magnetocrystalline anisotropy , such as an external magnetic field, or a finite-sized system is required for materials with n = 3 {\displaystyle n=3} to demonstrate magnetism.
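Onsager's closed-form solution for the square-lattice 2D Ising model gives the critical temperature exactly; a minimal numerical check of the well-known value:

```python
import math

# Onsager's exact critical temperature for the square-lattice 2D Ising
# model with nearest-neighbour coupling J and Boltzmann constant k_B:
#   k_B * T_c / J = 2 / ln(1 + sqrt(2))
kT_c_over_J = 2.0 / math.log(1.0 + math.sqrt(2.0))
print(f"k_B T_c / J = {kT_c_over_J:.6f}")  # ≈ 2.269185
```

The finite value of T_c is what permits Ising-like 2D magnets to order at experimentally reachable temperatures.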
The 2D Ising model describes the behavior of FePS 3 , [ 4 ] CrI 3 , [ 2 ] and Fe 3 GeTe 2 , [ 7 ] while Cr 2 Ge 2 Te 6 [ 1 ] and MnPS 3 [ 14 ] behave like the isotropic Heisenberg model. The intrinsic anisotropy in CrI 3 and Fe 3 GeTe 2 is caused by strong spin–orbit coupling , allowing them to remain magnetic down to a monolayer , while Cr 2 Ge 2 Te 6 exhibits magnetism only as a bilayer or thicker. The XY model describes the case where n = 2 {\displaystyle n=2} . In this system, there is no transition between the ordered and unordered states; instead the system undergoes a so-called Kosterlitz–Thouless transition at a finite temperature T K T {\displaystyle T_{KT}} , below which the system has quasi-long-range magnetic order. The theoretical predictions of the XY model have been reported to be consistent with experimental observations of NiPS 3 . [ 5 ] The Heisenberg model describes the case where n = 3 {\displaystyle n=3} . In this system, there is no transition between the ordered and unordered states because of the Mermin-Wagner theorem . An experimental realization of the Heisenberg model was reported using MnPS 3 . [ 14 ] [ 6 ]
The above systems can be described by a generalized Heisenberg spin Hamiltonian :

H = − 1 2 ∑ i , j ( J S i ⋅ S j + Λ S i z S j z ) − A ∑ i ( S i z ) 2 {\displaystyle H=-{\frac {1}{2}}\sum _{i,j}\left(J\,\mathbf {S} _{i}\cdot \mathbf {S} _{j}+\Lambda S_{i}^{z}S_{j}^{z}\right)-A\sum _{i}\left(S_{i}^{z}\right)^{2}}
Where J {\displaystyle J} is the exchange coupling between spins S i {\displaystyle \mathbf {S} _{i}} and S j {\displaystyle \mathbf {S} _{j}} , and A {\displaystyle A} and Λ {\displaystyle \Lambda } are the on-site and inter-site magnetic anisotropies, respectively. Setting A → ± ∞ {\displaystyle A\rightarrow \pm \infty } recovers the 2D Ising model (positive sign, n = 1 {\displaystyle n=1} ) and the XY model (negative sign, n = 2 {\displaystyle n=2} ), while A ≈ 0 {\displaystyle A\approx 0} and Λ ≈ 0 {\displaystyle \Lambda \approx 0} recover the Heisenberg model ( n = 3 {\displaystyle n=3} ). Along with the idealized models described above, the spin Hamiltonian can describe most experimental setups, [ 15 ] and it can also model dipole-dipole interactions through renormalization of the parameter A {\displaystyle A} . [ 9 ] However, sometimes including further neighbours or using different exchange couplings, such as antisymmetric exchange , is required. [ 9 ]
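As an illustrative sketch (classical spins treated as unit vectors, arbitrary parameter values, and a hypothetical four-site chain, not any real material), the role of the on-site anisotropy term can be seen by evaluating such a Hamiltonian for an out-of-plane versus an in-plane spin configuration:

```python
# Classical toy evaluation of a generalized Heisenberg spin Hamiltonian
#   H = -1/2 * sum_{i,j} (J S_i·S_j + Λ S_i^z S_j^z) - A * sum_i (S_i^z)^2
# on a 4-site chain. Each nearest-neighbour bond is counted once,
# absorbing the factor 1/2 from the double sum over ordered pairs.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def energy(spins, J=1.0, Lam=0.0, A=0.5):
    E = 0.0
    for i in range(len(spins) - 1):          # nearest-neighbour bonds
        si, sj = spins[i], spins[i + 1]
        E -= J * dot(si, sj) + Lam * si[2] * sj[2]
    E -= A * sum(s[2] ** 2 for s in spins)   # on-site (easy-axis) anisotropy
    return E

out_of_plane = [(0.0, 0.0, 1.0)] * 4   # all spins along z
in_plane = [(1.0, 0.0, 0.0)] * 4       # all spins along x

# With A > 0 (easy-axis), the out-of-plane configuration is lower in energy.
print(energy(out_of_plane), energy(in_plane))
```

A positive A favors out-of-plane alignment (the Ising-like limit), while a negative A would favor in-plane spins (the XY-like limit), matching the limits discussed above.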
Magnetic properties of two-dimensional materials are usually measured using Raman spectroscopy , Magneto-optic Kerr effect , Magnetic circular dichroism or Anomalous Hall effect techniques. [ 9 ] The dimensionality of the system can be determined by measuring the scaling behaviour of magnetization ( M {\displaystyle M} ), susceptibility ( χ {\displaystyle \chi } ) or correlation length ( ξ {\displaystyle \xi } ) as a function of temperature. The corresponding critical exponents are β {\displaystyle \beta } , γ {\displaystyle \gamma } and ν {\displaystyle \nu } respectively. They can be retrieved by fitting

M ∝ ( 1 − T / T C ) β {\displaystyle M\propto (1-T/T_{\text{C}})^{\beta }} , χ ∝ ( T / T C − 1 ) − γ {\displaystyle \chi \propto (T/T_{\text{C}}-1)^{-\gamma }} , ξ ∝ ( T / T C − 1 ) − ν {\displaystyle \xi \propto (T/T_{\text{C}}-1)^{-\nu }}
to the data. The critical exponents depend on the system and its dimensionality, as demonstrated in Table 1. Therefore, an abrupt change in any of the critical exponents indicates a transition between two models. Furthermore, the Curie temperature can be measured as a function of the number of layers ( N {\displaystyle N} ). For large N {\displaystyle N} this relation is given by [ 16 ]

T C ( N ) = T C ( ∞ ) [ 1 − ( C / N ) 1 / ν ] {\displaystyle T_{\text{C}}(N)=T_{\text{C}}(\infty )\left[1-(C/N)^{1/\nu }\right]}
where C {\displaystyle C} is a material-dependent constant. For thin layers, the behavior changes to T C ∝ N {\displaystyle T_{\text{C}}\propto N} . [ 17 ]
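The critical-exponent extraction described above can be sketched as follows (a minimal example on noise-free synthetic data with an assumed Curie temperature; real measurements would need noise handling and uncertainty estimates). On a log-log scale the scaling law M ∝ (1 − T/T_C)^β is a straight line with slope β:

```python
import math

T_C = 45.0     # assumed Curie temperature (arbitrary units)
beta = 0.125   # exact 2D Ising value, used to generate synthetic data

# Synthetic magnetization data just below T_C.
temps = [30.0 + 1.5 * i for i in range(10)]
mags = [(1 - T / T_C) ** beta for T in temps]

# Fit the slope of log(M) vs log(1 - T/T_C) by ordinary least squares.
xs = [math.log(1 - T / T_C) for T in temps]
ys = [math.log(M) for M in mags]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
print(f"fitted beta = {slope:.3f}")  # recovers 0.125 on noise-free data
```

An abrupt change in the fitted exponent as the sample is thinned would signal a crossover between the models in Table 1.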
Magnetic 2D materials can be used as parts of van der Waals heterostructures : layered materials consisting of different 2D materials held together by van der Waals forces . One example of such a structure is a thin insulating/semiconducting layer between layers of 2D magnetic material, producing a magnetic tunnel junction . This structure can exhibit a significant spin valve effect, [ 18 ] and thus has many potential applications in the field of spintronics . Another newly emerging direction arises from the rather unexpected observation of magnetic excitons in NiPS 3 . [ 19 ] | https://en.wikipedia.org/wiki/Magnetic_2D_materials |