Metals in medicine are used in organic systems for diagnostic and treatment purposes. [ 1 ] Inorganic elements are also essential for organic life as cofactors in enzymes called metalloproteins . When metals are under- or over-abundant in the body, equilibrium must be returned to its natural state via interventional and natural methods.
Metals can be toxic in high quantities. Either ingestion or faulty metabolic pathways can lead to metal toxicity (metal poisoning). Sources of toxic metals include cadmium from tobacco, arsenic from agriculture and mercury from volcanoes and forest fires. Nature, in the form of trees and plants, is able to trap many toxins and can bring abnormally high levels back into equilibrium. Toxic metal poisoning is usually treated with some type of chelating agent . [ 2 ] [ 3 ] Heavy metal poisoning , such as from mercury, cadmium, or lead, is particularly pernicious.
Examples of specific types of toxic metals include:
Fluid and electrolyte balance, in which fluid balance and electrolyte balance are intertwined homeostatically , is necessary to health in all organisms . It includes reference ranges for cation concentrations of biometals , which in reference to human medicine and veterinary medicine principally includes those for blood serum ion concentrations in humans and in livestock and pets . Derangements in such fluid and electrolyte balance most often occur in the contexts of dehydration , overexertion , and diarrhea , but they also occur in cancers (most especially in paraneoplastic syndromes ), parasitism , inborn errors of metabolism , and several other contexts. Some medical specialties deal especially frequently with electrolyte derangements, including internal medicine and endocrinology (especially in chronic conditions ) and intensive care medicine (in severe acute conditions).
Humans need a certain amount of certain metals to function normally. Most metals are used as cofactors or prosthetic groups in enzymes, catalyzing specific reactions and serving essential roles. The essential metals for humans are: Sodium , Potassium , Magnesium , Copper , Vanadium , Chromium , Manganese , Iron , Cobalt , Nickel , Zinc , Molybdenum , and Cadmium . Symptoms of anemia can be caused by a lack of an essential metal. Anemia can be associated with malnourishment or faulty metabolic processes, usually caused by a genetic defect. [ 3 ]
Examples of specific types of metal anemia include:
Metal ions are often used for diagnostic medical imaging. Metal complexes can be used either for radioisotope imaging (from their emitted radiation) or as contrast agents, for example, in magnetic resonance imaging (MRI). Such imaging can be enhanced by manipulation of the ligands in a complex to create specificity so that the complex will be taken up by a certain cell or organ type. [ 3 ] [ 4 ]
Examples of metals used for diagnosis include:
An important contraindication to MRI ( magnetic resonance imaging ) is having metal objects anywhere near, and most especially inside the field of, the MRI scanner. Not only does this entail that people with implanted metal plates, bone screws ( internal fixation ), or syndesmotic screws often cannot undergo MRI, it also entails that many everyday objects, including jewelry, belt buckles, wallets, purses, security guards' weapons, and so on, must be kept out of the MRI area.
Metals have been used in treatments since ancient times. The Ebers Papyrus from c. 1500 BC is the first written account of the use of metals for treatment; it describes the use of copper to reduce inflammation and the use of iron to treat anemia. Sodium vanadate has been used since the early 20th century to treat rheumatoid arthritis. Recently, metals have been used to treat cancer by specifically attacking cancer cells and interacting directly with DNA. The positive charge on most metal ions can interact with the negative charge of the phosphate backbone of DNA. Some drugs developed that include metals interact directly with other metals already present in protein active sites, while other drugs can use metals to interact with amino acids with the highest reduction potential. [ 4 ]
Examples of metals used in treatment include: | https://en.wikipedia.org/wiki/Metals_in_medicine |
The metals of antiquity are the seven metals which humans had identified and found use for in prehistoric times in Africa, Europe and throughout Asia: [ 1 ] gold , silver , copper , tin , lead , iron , and mercury .
Zinc , arsenic , and antimony were also known during antiquity, but they were not recognised as distinct metals until later. [ 2 ] [ 3 ] [ 4 ] [ 5 ] A special case is platinum ; it was known to native South Americans around the time Europe was going through classical antiquity, but was unknown to Europeans until the 18th century. Thus, at most eleven elemental metals and metalloids were known by the end of antiquity; this contrasts greatly with the situation today, with over 90 elemental metals known. Bismuth only began to be recognised as distinct around 1500 by the European and Incan civilisations. The first elemental metal with a clearly identifiable discoverer is cobalt , discovered in 1735 by Georg Brandt , by which time the Scientific Revolution was in full swing. [ 6 ] (Even then, cobalt might have been prepared before the 13th century by alchemists roasting and reducing its ore, but, in any case, its distinct nature was not recognised.) [ 7 ]
Copper was probably the first metal mined and crafted by humans. [ 8 ] It was originally obtained as a native metal and later from the smelting of ores. Earliest estimates of the discovery of copper suggest around 9000 BC in the Middle East. It was one of the most important materials to humans throughout the Chalcolithic and Bronze Ages . Copper beads dating from 6000 BC have been found in Çatalhöyük , Anatolia , [ 9 ] and the archaeological site of Belovode on the Rudnik mountain in Serbia contains the world's oldest securely dated evidence of copper smelting from 5000 BC. [ 10 ] [ 11 ]
It is believed that lead smelting began at least 9,000 years ago, and the oldest known artifact of lead is a statuette found at the temple of Osiris on the site of Abydos dated around 3800 BC. [ 12 ]
The earliest gold artifacts were discovered at the site of Wadi Qana in the Levant . [ 13 ] Silver is estimated to have been discovered in Asia Minor shortly after copper and gold. [ 14 ]
There is evidence that iron was known from before 5000 BC. [ 15 ] The oldest known iron objects used by humans are some beads of meteoric iron , made in Egypt in about 4000 BC. The discovery of smelting around 3000 BC led to the start of the Iron Age around 1200 BC [ 16 ] and the prominent use of iron for tools and weapons. [ 17 ]
Tin was first smelted in combination with copper around 3500 BC to produce bronze, thus giving rise to the Bronze Age (except in some places which did not experience a significant Bronze Age, passing directly from the Neolithic Stone Age to the Iron Age ). [ 18 ] Kestel , in southern Turkey , is the site of an ancient cassiterite mine that was used from 3250 to 1800 BC. [ 19 ] The oldest tin artifacts date from around 2000 BC. [ 20 ]
The metals of antiquity were recognised as distinct elements in Méthode de nomenclature chimique ( Method of Chemical Nomenclature ), written by a group consisting of Louis Guyton de Morveau , Antoine Lavoisier , Claude Berthollet , and Antoine-François de Fourcroy in 1787. [ 6 ]
The metals of antiquity generally have low melting points , with iron being the exception.
The other metals discovered before the Scientific Revolution largely fit the pattern, except for high-melting platinum:
While all the metals of antiquity but lead occur natively, only gold and silver are commonly found as the native metal .
The practice of alchemy in the Western world, based on a Hellenistic and Babylonian approach to planetary astronomy, often ascribed a symbolic association between the seven then-known celestial bodies and the metals known to the Greeks and Babylonians during antiquity. Additionally, some alchemists and astrologers believed there was an association, sometimes called a rulership , between days of the week, the alchemical metals, and the planets that were said to hold "dominion" over them. [ 27 ] [ 28 ] There was some early variation, but the most common associations since antiquity are the following: | https://en.wikipedia.org/wiki/Metals_of_antiquity |
Metalworking is the process of shaping and reshaping metals in order to create useful objects, parts, assemblies, and large scale structures. As a term, it covers a wide and diverse range of processes, skills, and tools for producing objects on every scale: from huge ships , buildings, and bridges , down to precise engine parts and delicate jewelry .
The historical roots of metalworking predate recorded history; its use spans cultures, civilizations and millennia. It has evolved from shaping soft, native metals like gold with simple hand tools, through the smelting of ores and hot forging of harder metals like iron , up to and including highly technical modern processes such as machining and welding . It has been used as an industry, a driver of trade, individual hobbies, and in the creation of art; [ 1 ] it can be regarded as both a science and a craft.
Modern metalworking processes, though diverse and specialized, can be categorized into one of three broad areas known as forming, cutting, or joining processes. Modern metalworking workshops, typically known as machine shops , hold a wide variety of specialized or general-use machine tools capable of creating highly precise, useful products. Many simpler metalworking techniques, such as blacksmithing , are no longer economically competitive on a large scale in developed countries; some of them are still in use in less developed countries, for artisanal or hobby work, or for historical reenactment.
The oldest archaeological evidence of copper mining and working was the discovery of a copper pendant in northern Iraq from 8,700 BCE. [ 2 ] The earliest substantiated and dated evidence of metalworking in the Americas was the processing of copper in Wisconsin , near Lake Michigan . Copper was hammered until it became brittle, then heated so it could be worked further. In America, this technology is dated to about 4000–5000 BCE. [ 3 ] The oldest gold artifacts in the world come from the Bulgarian Varna Necropolis and date from 4450 BCE.
Not all metal required fire to obtain or work it. Isaac Asimov speculated that gold was the "first metal". [ 4 ] His reasoning was that, by its chemistry , gold is found in nature as nuggets of pure metal. In other words, gold, as rare as it is, is sometimes found in nature as a native metal . Some metals can also be found in meteors . Almost all other metals are found in ores , mineral-bearing rock , and require heat or some other process to liberate the metal. Another feature of gold is that it is workable as it is found, meaning that no technology beyond a stone hammer and anvil is needed to work the metal. This is a result of gold's malleability and ductility . The earliest tools were stone, bone , wood , and sinew , all of which sufficed to work gold.
At some unknown time, the process of liberating metals from rock by heat became known, and rocks rich in copper, tin , and lead came into demand. These ores were mined wherever they were recognized. Remnants of such ancient mines have been found all over Southwestern Asia . [ 5 ] Metalworking was being carried out by the South Asian inhabitants of Mehrgarh between 7000 and 3300 BCE. [ 6 ] The end of the beginning of metalworking occurred sometime around 6000 BCE, when copper smelting became common in Southwestern Asia.
Ancient civilisations knew of seven metals. Here they are arranged in order of their oxidation potential (in volts ):
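(The values below are approximate standard oxidation potentials for each metal's common ion; they are supplied here for orientation and are not taken from the original table:)

Iron: +0.44
Tin: +0.14
Lead: +0.13
Copper: −0.34
Mercury: −0.80
Silver: −0.80
Gold: −1.50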
The oxidation potential is important because it is one indicator of how tightly bound to its ore the metal is likely to be. As can be seen, iron is significantly higher than the other six metals, while gold is dramatically lower than the six above it. Gold's low oxidation potential is one of the main reasons that gold is found in nuggets. These nuggets are relatively pure gold and are workable as they are found.
Copper ore, being relatively abundant, and tin ore became the next important substances in the story of metalworking. Using heat to smelt copper from ore, a great deal of copper was produced. It was used for both jewelry and simple tools. However, copper by itself was too soft for tools requiring edges and stiffness. At some point tin was added into the molten copper and bronze was developed thereby. Bronze is an alloy of copper and tin. Bronze was an important advance because it had the edge-durability and stiffness that pure copper lacked. Until the advent of iron, bronze was the most advanced metal for tools and weapons in common use (see Bronze Age for more detail).
Outside Southwestern Asia, these same advances and materials were being discovered and used around the world. People in China and Great Britain began using bronze with little time devoted to copper alone. The Japanese began the use of bronze and iron almost simultaneously. In the Americas it was different. Although the peoples of the Americas knew of metals, it was not until European colonisation that metalworking for tools and weapons became common. Jewelry and art were the principal uses of metals in the Americas prior to European influence.
About 2700 BCE, production of bronze was common in locales where the necessary materials could be assembled for smelting, heating, and working the metal. Iron was beginning to be smelted and began its emergence as an important metal for tools and weapons. The period that followed became known as the Iron Age . [ citation needed ]
By the historical periods of the Pharaohs in Egypt , the Vedic Kings in India , the Tribes of Israel , and the Maya civilization in North America , among other ancient populations, precious metals began to have value attached to them. In some cases rules for ownership, distribution, and trade were created, enforced, and agreed upon by the respective peoples. By the above periods metalworkers were very skilled at creating objects of adornment, religious artifacts, and trade instruments of precious metals (non-ferrous), as well as weaponry usually of ferrous metals and/or alloys . These skills were well executed. The techniques were practiced by artisans, blacksmiths , atharvavedic practitioners, alchemists , and other categories of metalworkers around the globe. For example, the granulation technique was employed by numerous ancient cultures before the historic record shows people traveled to far regions to share this process. Metalsmiths today still use this and many other ancient techniques.
As time progressed, metal objects became more common, and ever more complex. The need to further acquire and work metals grew in importance. Skills related to extracting metal ores from the earth began to evolve, and metalsmiths became more knowledgeable. Metalsmiths became important members of society. Fates and economies of entire civilizations were greatly affected by the availability of metals and metalsmiths. The metalworker depends on the extraction of precious metals to make jewelry , build more efficient electronics , and for industrial and technological applications from construction to shipping containers to rail , and air transport . Without metals, goods and services would cease to move around the globe on the scale we know today.
Metalworking generally is divided into three categories: forming , cutting , and joining . Most metal cutting is done by high speed steel tools or carbide tools. [ 7 ] Each of these categories contains various processes.
Prior to most operations, the metal must be marked out and/or measured, depending on the desired finished product.
Marking out (also known as layout) is the process of transferring a design or pattern to a workpiece and is the first step in the handcraft of metalworking. It is performed in many industries or hobbies, although in industry, the repetition eliminates the need to mark out every individual piece. In the metal trades area, marking out consists of transferring the engineer's plan to the workpiece in preparation for the next step, machining or manufacture.
Calipers are hand tools designed to precisely measure the distance between two points. Most calipers have two sets of flat, parallel edges used for inner or outer diameter measurements. These calipers can be accurate to within one-thousandth of an inch (25.4 μm). Different types of calipers have different mechanisms for displaying the distance measured. Where larger objects need to be measured with less precision, a tape measure is often used.
Casting achieves a specific form by pouring molten metal into a mold and allowing it to cool, with no mechanical force. Forms of casting include:
These forming processes modify a metal workpiece by deforming it, that is, without removing any material. Forming is done with a system of mechanical forces and, especially for bulk metal forming, with heat.
Plastic deformation involves using heat or pressure to make a workpiece more conducive to mechanical force. Historically, this and casting were done by blacksmiths, though today the process has been industrialized. In bulk metal forming, the workpiece is generally heated up.
These types of forming process involve the application of mechanical force at room temperature. However, some recent developments involve the heating of dies and/or parts. Advancements in automated metalworking technology have made progressive die stamping possible, a method that can encompass punching, coining, bending and several of the other processes below, modifying metal at less cost while producing less scrap. [ 9 ]
Cutting is a collection of processes wherein material is brought to a specified geometry by removing excess material using various kinds of tooling to leave a finished part that meets specifications. The net result of cutting is two products, the waste or excess material, and the finished part. In woodworking, the waste would be sawdust and excess wood. In cutting metals the waste is chips or swarf and excess metal.
Cutting processes fall into one of three major categories:
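Chip-producing processes, most commonly known as machining
Burning, a set of processes that cut by oxidizing a kerf to separate pieces of metal
Miscellaneous specialty processes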
Drilling a hole in a metal part is the most common example of a chip producing process. Using an oxy-fuel cutting torch to separate a plate of steel into smaller pieces is an example of burning. Chemical milling is an example of a specialty process that removes excess material by the use of etching chemicals and masking chemicals.
There are many technologies available to cut metal, including:
Cutting fluid or coolant is used where there is significant friction and heat at the cutting interface between a cutter such as a drill or an end mill and the workpiece. Coolant is generally introduced by a spray across the face of the tool and workpiece to decrease friction and temperature at the cutting tool/workpiece interface to prevent excessive tool wear. In practice there are many methods of delivering coolant.
The use of an angle grinder in cutting is not preferred, as large amounts of harmful sparks and fumes (and particulates ) are generated compared with using a reciprocating saw or band saw . [ 12 ] Angle grinders produce sparks when cutting ferrous metals. They also produce shards when cutting other materials.
Milling is the complex shaping of metal or other materials by removing material to form the final shape. It is generally done on a milling machine , a power-driven machine that in its basic form consists of a milling cutter that rotates about the spindle axis (like a drill ), and a worktable that can move in multiple directions (usually two dimensions [x and y axis] relative to the workpiece). The spindle usually moves in the z axis. It is possible to raise the table (where the workpiece rests). Milling machines may be operated manually or under computer numerical control (CNC), and can perform a vast number of complex operations, such as slot cutting, planing , drilling and threading , rabbeting , routing , etc. Two common types of mills are the horizontal mill and vertical mill.
The pieces produced are usually complex 3D objects that are converted into x, y, and z coordinates that are then fed into the CNC machine and allow it to complete the tasks required. The milling machine can produce most parts in 3D, but some require the objects to be rotated around the x, y, or z coordinate axis (depending on the need). Tolerances come in a variety of standards, depending on the locale. In countries still using the imperial system, this is usually in the thousandths of an inch (unit known as thou ), depending on the specific machine. In many other European countries, standards following the ISO are used instead.
In order to keep both the bit and material cool, a coolant is used to carry away the heat of cutting. In most cases the coolant is sprayed from a hose directly onto the bit and material. This coolant can be either machine- or user-controlled, depending on the machine.
Materials that can be milled range from aluminum to stainless steel and almost everything in between. Each material requires a different speed on the milling tool and varies in the amount of material that can be removed in one pass of the tool. Harder materials are usually milled at slower speeds with small amounts of material removed. Softer materials vary, but usually are milled with a high bit speed.
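A short sketch of how this speed selection works in practice (not from the article; the surface-speed figures are rough handbook ballparks, and the table and function names are invented for the example). The standard machinist's formula RPM = 12 × SFM / (π × D) converts a material's recommended cutting speed, in surface feet per minute, into a spindle speed for a given cutter diameter in inches:

```python
import math

# Rough, typical cutting speeds for high-speed steel (HSS) milling cutters,
# in surface feet per minute (SFM). Illustrative ballpark values only.
CUTTING_SPEED_SFM = {
    "aluminum": 600,        # soft material: high surface speed
    "mild steel": 100,
    "stainless steel": 60,  # hard material: low surface speed
}

def spindle_rpm(material: str, cutter_diameter_in: float) -> float:
    """Spindle speed from RPM = 12 * SFM / (pi * D), with D in inches."""
    sfm = CUTTING_SPEED_SFM[material]
    return 12 * sfm / (math.pi * cutter_diameter_in)

# A 1/2-inch end mill runs ~4600 RPM in aluminum but only ~460 RPM in stainless.
for material in CUTTING_SPEED_SFM:
    print(f"{material}: ~{spindle_rpm(material, 0.5):.0f} RPM")
```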
The use of a milling machine adds costs that are factored into the manufacturing process. Each time the machine is used, coolant is consumed and must be periodically replenished in order to prevent broken bits. A milling bit must also be changed as needed in order to prevent damage to the material. Time is the biggest factor for costs. Complex parts can require hours to complete, while very simple parts take only minutes. This in turn varies the production time as well, as each part will require different amounts of time.
Safety is key with these machines. The bits are traveling at high speeds and removing pieces of usually scalding hot metal. The advantage of having a CNC milling machine is that it protects the machine operator.
Turning is a metal cutting process for producing a cylindrical surface with a single point tool. The workpiece is rotated on a spindle and the cutting tool is fed into it radially, axially or both. Producing surfaces perpendicular to the workpiece axis is called facing. Producing surfaces using both radial and axial feeds is called profiling. [ 13 ]
A lathe is a machine tool which spins a block or cylinder of material so that when abrasive , cutting, or deformation tools are applied to the workpiece, it can be shaped to produce an object which has rotational symmetry about an axis of rotation . Examples of objects that can be produced on a lathe include candlestick holders, crankshafts , camshafts , and bearing mounts.
Lathes have four main components: the bed, the headstock, the carriage, and the tailstock. The bed is a precise and very strong base on which all of the other components rest for alignment. The headstock's spindle secures the workpiece with a chuck , whose jaws (usually three or four) are tightened around the piece. The spindle rotates at high speed, providing the energy to cut the material. While historically lathes were powered by belts from a line shaft , modern examples use electric motors. The workpiece extends out of the spindle along the axis of rotation above the flat bed. The carriage is a platform that can be moved, precisely and independently, parallel and perpendicular to the axis of rotation. A hardened cutting tool is held at the desired height (usually the middle of the workpiece) by the toolpost. The carriage is then moved around the rotating workpiece, and the cutting tool gradually removes material from the workpiece. The tailstock can be slid along the axis of rotation and then locked in place as necessary. It may hold centers to further secure the workpiece, or cutting tools driven into the end of the workpiece.
Other operations that can be performed with a single point tool on a lathe are: [ 13 ]
Chamfering: Cutting an angle on the corner of a cylinder.
Parting: The tool is fed radially into the workpiece to cut off the end of a part.
Threading : A tool is fed along and across the outside or inside surface of rotating parts to produce external or internal threads .
Boring : A single-point tool is fed linearly and parallel to the axis of rotation to create a round hole.
Drilling : Feeding the drill into the workpiece axially.
Knurling : Uses a tool to produce a rough surface texture on the work piece. Frequently used to allow grip by hand on a metal part.
Modern computer numerical control (CNC) lathes and (CNC) machining centres can do secondary operations like milling by using driven tools. When driven tools are used the work piece stops rotating and the driven tool executes the machining operation with a rotating cutting tool. The CNC machines use x, y, and z coordinates in order to control the turning tools and produce the product. Most modern day CNC lathes are able to produce most turned objects in 3D.
Nearly all types of metal can be turned, although more time and specialist cutting tools are needed for harder workpieces.
There are many threading processes including: cutting threads with a tap or die , thread milling, single-point thread cutting, thread rolling, cold root rolling and forming, and thread grinding. A tap is used to cut a female thread on the inside surface of a pre-drilled hole, while a die cuts a male thread on a preformed cylindrical rod.
Grinding uses an abrasive process to remove material from the workpiece. A grinding machine is a machine tool used for producing very fine finishes, making very light cuts, or high precision forms using an abrasive wheel as the cutting device. This wheel can be made up of various sizes and types of stones, diamonds or inorganic materials.
The simplest grinder is a bench grinder or a hand-held angle grinder, for deburring parts or cutting metal with a zip-disc.
Grinders have increased in size and complexity with advances in time and technology. From the old days of a manual toolroom grinder sharpening endmills for a production shop, to today's 30000 RPM CNC auto-loading manufacturing cell producing jet turbines, grinding processes vary greatly.
Grinders need to be very rigid machines to produce the required finish. Some grinders are even used to produce glass scales for positioning CNC machine axes. The common rule is that the machines used to produce scales must be 10 times more accurate than the machines the parts are produced for.
In the past grinders were used for finishing operations only because of limitations of tooling. Modern grinding wheel materials and the use of industrial diamonds or other man-made coatings (cubic boron nitride) on wheel forms have allowed grinders to achieve excellent results in production environments instead of being relegated to the back of the shop.
Modern technology has advanced grinding operations to include CNC controls, high material removal rates with high precision, lending itself well to aerospace applications and high volume production runs of precision components.
Filing is a combination of grinding and saw-tooth cutting using a file . Prior to the development of modern machining equipment, it provided a relatively accurate means for the production of small parts, especially those with flat surfaces. The skilled use of a file allowed a machinist to work to fine tolerances and was the hallmark of the craft. Today filing is rarely used as a production technique in industry, though it remains a common method of deburring .
Broaching is a machining operation used to cut keyways into shafts. Electron beam machining (EBM) is a machining process where high-velocity electrons are directed toward a work piece, creating heat and vaporizing the material. Ultrasonic machining uses ultrasonic vibrations to machine very hard or brittle materials.
Welding is a fabrication process that joins materials, usually metals or thermoplastics , by causing coalescence . This is often done by melting the workpieces and adding a filler material to form a pool of molten material that cools to become a strong joint, but sometimes pressure is used in conjunction with heat , or by itself, to produce the weld. [ 14 ]
Many different energy sources can be used for welding, including a gas flame , an electric arc , a laser, an electron beam, friction , and ultrasound . While often an industrial process, welding can be done in many different environments, including open air, underwater and in space . Regardless of location, however, welding remains dangerous, and precautions must be taken to avoid burns, electric shock , poisonous fumes, and overexposure to ultraviolet light .
Brazing is a joining process in which a filler metal is melted and drawn into a capillary formed by the assembly of two or more work pieces. The filler metal reacts metallurgically with the workpieces and solidifies in the capillary, forming a strong joint. Unlike welding, the work piece is not melted. Brazing is similar to soldering, but occurs at temperatures in excess of 450 °C (842 °F). Brazing has the advantage of producing lower thermal stresses than welding, and brazed assemblies tend to be more ductile than weldments because alloying elements cannot segregate and precipitate.
Brazing techniques include flame brazing, resistance brazing, furnace brazing, diffusion brazing, inductive brazing and vacuum brazing.
Soldering is a joining process that occurs at temperatures below 450 °C (842 °F). It is similar to brazing in the way that a filler is melted and drawn into a capillary to form a joint, although at a lower temperature. Because of this lower temperature and different alloys used as fillers, the metallurgical reaction between filler and work piece is minimal, resulting in a weaker joint.
Riveting is one of the most ancient metalwork joining processes. [ 15 ] Its use declined markedly during the second half of the 20th century, [ 16 ] but it still retains important uses in industry and construction, and in artisan crafts such as jewellery , medieval armouring and metal couture in the early 21st century. The earlier use of rivets is being superseded by improvements in welding and component fabrication techniques.
A rivet is essentially a two-headed and unthreaded bolt which holds two other pieces of metal together. Holes are drilled or punched through the two pieces of metal to be joined. The holes being aligned, a rivet is passed through the holes and permanent heads are formed onto the ends of the rivet utilizing hammers and forming dies (by either cold working or hot working ).
Rivets are commonly purchased with one head already formed.
When it is necessary to remove rivets, one of the rivet's heads is sheared off with a cold chisel . The rivet is then driven out with a hammer and punch .
This includes screws as well as bolts . Fastening is often used because it requires relatively little specialist equipment, and is therefore common in flat-pack furniture . It can also be used when a metal is joined to another material (such as wood ) or when a particular metal does not weld well (such as aluminum ). This can be done to directly join metals, or with an intermediate material such as nylon . While often weaker than other methods such as welding or brazing, a fastened joint can easily be taken apart, so the metal can be reused or recycled. Fastening can also be done in conjunction with an epoxy or glue, though this negates its ecological benefits.
While these processes are not primary metalworking processes, they are often performed before or after metalworking processes.
Metals can be heat treated to alter the properties of strength, ductility, toughness, hardness or resistance to corrosion. Common heat treatment processes include annealing , precipitation hardening , quenching , and tempering :
Often, mechanical and thermal treatments are combined in what is known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high alloy special steels, super alloys and titanium alloys.
Electroplating is a common surface-treatment technique. It involves bonding a thin layer of another metal such as gold , silver , chromium or zinc to the surface of the product by electrolysis. It is used to reduce corrosion, create abrasion resistance and improve the product's aesthetic appearance. Plating can even change the properties of the original part, including conductivity, heat dissipation and structural integrity. There are four main electroplating methods to ensure proper coating and cost effectiveness per product: mass plating, rack plating, continuous plating and line plating. [ 17 ]
Thermal spraying techniques are another popular finishing option, and often have better high temperature properties than electroplated coatings due to the thicker coating. The four main thermal spray processes include electric wire arc spray, flame (oxy acetylene combustion) spray, plasma spray and high velocity oxy fuel (HVOF) spray. [ 18 ]
| https://en.wikipedia.org/wiki/Metalworking
In organometallic chemistry , metal–halogen exchange is a fundamental reaction that converts an organic halide into an organometallic product. The reaction commonly involves the use of electropositive metals (Li, Na, Mg) and organochlorides, bromides, and iodides. Particularly well-developed is the use of metal–halogen exchange for the preparation of organolithium compounds .
Two kinds of lithium–halogen exchange can be considered: reactions involving organolithium compounds and reactions involving lithium metal. Commercial organolithium compounds are produced by the heterogeneous (slurry) reaction of lithium with organic bromides and chlorides:
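In idealized form (X = Cl, Br):

RX + 2 Li → RLi + LiX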
Often the lithium halide remains in the soluble product.
Most of this article is about the homogeneous (one-phase) reaction of preformed organolithium compounds:
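In general terms:

R′Li + RX → RLi + R′X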
Butyllithium is commonly used. Gilman and Wittig independently discovered this method in the late 1930s. [ 1 ] It is not a salt metathesis reaction , as no salt is produced.
Lithium–halogen exchange is frequently used to prepare vinyl-, aryl- and primary alkyllithium reagents. Vinyl halides usually undergo lithium–halogen exchange with retention of the stereochemistry of the double bond. [ 2 ] The presence of alkoxyl or related chelating groups accelerates lithium–halogen exchange. [ 3 ] Lithium halogen exchange is typically a fast reaction. It is usually faster than nucleophilic addition and can sometimes exceed the rate of proton transfer. [ 4 ]
Exchange rates usually follow the trend I > Br > Cl. Alkyl and aryl fluorides are generally unreactive toward organolithium reagents. Lithium–halogen exchange is kinetically controlled, and the rate of exchange is primarily influenced by the stabilities of the carbanion intermediates (sp > sp 2 > sp 3 ) of the organolithium reagents. [ 5 ] [ 3 ]
Two mechanisms have been proposed for lithium–halogen exchange. [ 6 ] One proposed pathway involves a nucleophilic mechanism that generates a reversible "ate-complex" intermediate. Farnham and Calabrese crystallized an "ate-complex" lithium bis(pentafluorophenyl) iodinate complexed with TMEDA . [ 7 ] The "ate-complex" further reacts with electrophiles and provides pentafluorophenyl iodide and C 6 H 5 Li. [ 7 ] A number of kinetic studies also support a nucleophilic pathway in which the carbanion on the lithium species attacks the halogen atom on the aryl halide. [ 8 ] Another proposed mechanism involves single electron transfer with the generation of radicals. In reactions of secondary and tertiary alkyllithium and alkyl halides, radical species were detected by EPR spectroscopy . [ 9 ] [ 6 ] The mechanistic studies of lithium–halogen exchange are complicated by the formation of aggregates of organolithium species.
Grignard reagents can be prepared by treating a preformed Grignard reagent with an organic halide. This method offers the advantage that the Mg transfer tolerates many functional groups. A typical reaction involves isopropylmagnesium chloride and aryl bromide or iodides: [ 10 ]
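A representative idealized equation (X = Br, I):

i-PrMgCl + ArX → ArMgCl + i-PrX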
Magnesium ate complexes metalate aryl halides: [ 11 ]
Zinc–halogen exchange: [ 12 ]
Several examples can be found in organic syntheses. [ 13 ]
In the example below, lithium–halogen exchange is a step in the synthesis of morphine. Here n -butyllithium is used to perform lithium–halogen exchange with the aryl bromide. The nucleophilic carbanion center quickly undergoes carbolithiation to the double bond, generating an anion stabilized by the adjacent sulfone group. An intramolecular S N 2 reaction by the anion forms the cyclic backbone of morphine. [ 14 ]
Lithium–halogen exchange is a crucial part of Parham cyclization. [ 15 ] In this reaction, an aryl halide (usually iodide or bromide) exchanges with organolithium to form a lithiated arene species. If the arene bears a side chain with an electrophilic moiety, the carbanion attached to the lithium will perform intramolecular nucleophilic attack and cyclize. This reaction is a useful strategy for heterocycle formation. [ 16 ] In the example below, Parham cyclization was used in the cyclization of an isocyanate to form an isoindolinone, which was then converted to a nitrone. The nitrone species further reacts with radicals and can be used as a "spin trap" to study biological radical processes. [ 17 ] | https://en.wikipedia.org/wiki/Metal–halogen_exchange
Metal–inorganic frameworks ( MIFs ) are a class of compounds consisting of metal ions or clusters coordinated to inorganic ligands to form one-, two-, or three-dimensional structures. They are a subclass of coordination polymers , with the special feature that they are often porous . They are the inorganic counterparts of metal–organic frameworks . [ 1 ]
Millon's base , which has been known since the early 20th century, can be considered a MIF. [ 1 ]
A MIF with a borazocine linker was developed for hydrogen storage . [ 2 ] Cu2I2Se6 has Se6 linkers. [ 3 ] There are many MIFs with pnictogen linkers. [ 1 ]
| https://en.wikipedia.org/wiki/Metal–inorganic_framework
Metal–insulator transitions are transitions of a material from a metal (material with good electrical conductivity of electric charges ) to an insulator (material where conductivity of charges is quickly suppressed). These transitions can be achieved by tuning various ambient parameters such as temperature, [ 1 ] pressure [ 2 ] or, in case of a semiconductor , doping .
The basic distinction between metals and insulators was proposed by Hans Bethe , Arnold Sommerfeld and Felix Bloch in 1928-1929. It distinguished between conducting metals (with partially filled bands) and nonconducting insulators. However, in 1937 Jan Hendrik de Boer and Evert Verwey reported that many transition-metal oxides (such as NiO) with a partially filled d-band were poor conductors, often insulating. In the same year, the importance of the electron-electron correlation was stated by Rudolf Peierls . Since then, these materials as well as others exhibiting a transition between a metal and an insulator have been extensively studied, e.g. by Sir Nevill Mott , after whom the insulating state is named Mott insulator .
The first metal-insulator transition to be found was the Verwey transition of magnetite in the 1940s. [ 3 ]
The classical band structure of solid state physics predicts the Fermi level to lie in a band gap for insulators and in the conduction band for metals, which means metallic behavior is seen for compounds with partially filled bands. However, some compounds have been found which show insulating behavior even for partially filled bands. This is due to the electron-electron correlation , since electrons cannot be treated as noninteracting. Mott considered a lattice model with just one electron per site. Without taking the interaction into account, each site could be occupied by two electrons, one with spin up and one with spin down. Due to the interaction, the electrons would then feel a strong Coulomb repulsion, which Mott argued splits the band in two. Having one electron per site fills the lower band while the upper band remains empty, which suggests the system becomes an insulator. This interaction-driven insulating state is referred to as a Mott insulator . The Hubbard model is one simple model commonly used to describe metal-insulator transitions and the formation of a Mott insulator.
Metal–insulator transitions (MIT) and models for approximating them can be classified based on the origin of their transition.
The polarization catastrophe model describes the transition of a material from an insulator to a metal. This model considers the electrons in a solid to act as oscillators, and the condition for this transition to occur is determined by the number of oscillators per unit volume of the material. Since every oscillator has a frequency ( ω 0 ), we can describe the dielectric function of a solid as,
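A standard form, assuming a Lorentz-oscillator model with the Clausius–Mossotti local-field correction and the variables defined below, is:

$$\varepsilon(\omega) = 1 + \frac{Ne^{2}/(m\varepsilon_{0})}{\omega_{0}^{2} - \omega^{2} - \dfrac{Ne^{2}}{3m\varepsilon_{0}}} \qquad (1)$$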
where ε ( ω ) is the dielectric function, N is the number of oscillators per unit volume, ω 0 is the fundamental oscillation frequency, m is the oscillator mass, and ω is the excitation frequency.
For a material to be a metal, the excitation frequency ( ω ) must be zero by definition, [ 2 ] which then gives us the static dielectric constant,
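$$\varepsilon_{s} = \varepsilon(0) = 1 + \frac{Ne^{2}/(m\varepsilon_{0})}{\omega_{0}^{2} - \dfrac{Ne^{2}}{3m\varepsilon_{0}}} \qquad (2)$$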
where ε s is the static dielectric constant. If we rearrange equation (1) to isolate the number of oscillators per unit volume we get the critical concentration of oscillators ( N c ) at which ε s becomes infinite, indicating a metallic solid and the transition from an insulator to a metal.
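Under the same oscillator-model assumptions, the denominator of (2) vanishes, and ε s diverges, at

$$N_{c} = \frac{3\,m\,\varepsilon_{0}\,\omega_{0}^{2}}{e^{2}} \qquad (3)$$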
This expression creates a boundary that defines the transition of a material from an insulator to a metal. This phenomenon is known as the polarization catastrophe.
The polarization catastrophe model also theorizes that, with a high enough density, and thus a low enough molar volume, any solid could become metallic in character. [ 2 ] Predicting whether a material will be metallic or insulating can be done by taking the ratio R / V , where R is the molar refractivity , sometimes represented by A , and V is the molar volume. In cases where R / V is less than 1, the material will have non-metallic, or insulating, properties, while an R / V value greater than one yields metallic character. [ 8 ]
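This R / V criterion follows the same algebra: assuming the Lorenz–Lorenz (Clausius–Mossotti) relation in its molar form,

$$\frac{\varepsilon_{s} - 1}{\varepsilon_{s} + 2} = \frac{R}{V},$$

ε s diverges as R / V approaches 1 from below, so no finite dielectric constant exists for R / V > 1; this is known as the Herzfeld criterion for metallization. | https://en.wikipedia.org/wiki/Metal–insulator_transition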
In organometallic chemistry , a metal–ligand multiple bond describes the interaction of certain ligands with a metal with a bond order greater than one. [ 1 ] Coordination complexes featuring multiply bonded ligands are of both scholarly and practical interest. Transition metal carbene complexes catalyze the olefin metathesis reaction. Metal oxo intermediates are pervasive in oxidation catalysis.
As a cautionary note, the classification of a metal–ligand bond as having a "multiple" bond order is ambiguous and even arbitrary because bond order is a formalism. Furthermore, the usage of multiple bonding is not uniform. Symmetry arguments suggest that most ligands engage metals via multiple bonds. The term "metal–ligand multiple bond" is often reserved for ligands of the type CR n and NR n (n = 0, 1, 2) and OR n (n = 0, 1), where R is H, an organic substituent, or a pseudohalide . Historically, CO and NO + are not included in this classification, nor are halides .
In coordination chemistry , a pi-donor ligand is a kind of ligand endowed with filled non-bonding orbitals that overlap with metal-based orbitals. Their interaction is complementary to the behavior of pi-acceptor ligands . The existence of terminal oxo ligands for the early transition metals is one consequence of this kind of bonding. Classic pi-donor ligands are oxide (O 2− ), nitride (N 3− ), imide (RN 2− ), alkoxide (RO − ), amide (R 2 N − ), and fluoride. For late transition metals, strong pi-donors form anti-bonding interactions with the filled d-levels, with consequences for spin state, redox potentials, and ligand exchange rates. Pi-donor ligands are low in the spectrochemical series . [ 1 ]
Metals bound to so-called triply bonded carbyne , imide , nitride ( nitrido ), and oxide ( oxo ) ligands are generally assigned to high oxidation states with low d electron counts. The high oxidation state stabilizes the highly reduced ligands. The low d electron count allow for many bonds between ligands and the metal center. A d 0 metal center can accommodate up to 9 bonds without violating the 18 electron rule , whereas a d 6 species can only accommodate 6 bonds.
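The arithmetic behind those counts is a worked instance of the 18 electron rule: if each metal–ligand bond contributes two electrons, a d n center can form at most

$$n_{\text{bonds}} = \frac{18 - n}{2}$$

bonds, giving (18 − 0)/2 = 9 for d 0 and (18 − 6)/2 = 6 for d 6 .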
A ligand described in ionic terms can bond to a metal through however many lone pairs it has available. For example, many alkoxides use one of their three lone pairs to make a single bond to a metal center. In this situation the oxygen is sp 3 hybridized according to valence bond theory . Increasing the bond order to two by involving another lone pair changes the hybridization at the oxygen to an sp 2 center, with an expected expansion in the M–O–R bond angle and contraction in the M–O bond length. If all three lone pairs are included, for a bond order of three, then the M–O bond distance contracts further and, since the oxygen is an sp center, the M–O–R bond angle is 180˚, or linear. Similarly, imido ligands are commonly referred to as either bent (sp 2 ) or linear (sp). Even the oxo ligand can be sp 2 or sp hybridized. The triply bonded oxo, similar to carbon monoxide , is partially positive at the oxygen atom and unreactive toward Brønsted acids at the oxygen atom. When such a complex is reduced, the triple bond can be converted to a double bond, at which point the oxygen no longer bears a partial positive charge and is reactive toward acid.
Imido ligands, also known as imides or nitrenes, most commonly form "linear six-electron bonds" with metal centers. Bent imidos are a rarity, limited by the complex's electron count, orbital bonding availability, or some similar phenomenon. It is common to draw only two lines of bonding for all imidos, including the most common linear imidos with a six-electron bonding interaction to the metal center. Similarly, amido complexes are usually drawn with a single line even though most amido bonds involve four electrons. Alkoxides are generally drawn with a single bond although both two- and four-electron bonds are common. Oxo can be drawn with two lines regardless of whether four electrons or six are involved in the bond, although it is not uncommon to see six-electron oxo bonds represented with three lines.
There are two conventions for indicating a metal oxidation state, based on the actual charge separation at the metal center. Oxidation states up to +3 are believed to be an accurate representation of the charge separation experienced by the metal center. [ citation needed ] For oxidation states of +4 and larger, the oxidation state becomes more of a formalism, with much of the positive charge distributed among the ligands. This distinction can be expressed by using a Roman numeral for the lower oxidation states in the upper right of the metal atomic symbol and an Arabic number with a plus sign for the higher oxidation states (see the example below). This formalism is not rigorously followed, and the use of Roman numerals to represent higher oxidation states is common. | https://en.wikipedia.org/wiki/Metal–ligand_multiple_bond
In inorganic chemistry , metal–metal bonds describe attractive interactions between metal centers. The simplest examples are found in bimetallic complexes. Metal–metal bonds can be "supported", i.e. be accompanied by one or more bridging ligands , or "unsupported". They can also vary according to bond order. The topic of metal–metal bonding is usually discussed within the framework of coordination chemistry , [ 1 ] but the topic is related to extended metallic bonding , which describes interactions between metals in extended solids such as bulk metals and metal subhalides. [ 2 ]
An example of a metal–metal bond is found in dimanganese decacarbonyl , Mn 2 (CO) 10 . As confirmed by X-ray crystallography , a pair of Mn(CO) 5 units are linked by a bond between the Mn atoms. The Mn-Mn distance (290 pm ) is short. [ 3 ] Mn 2 (CO) 10 is a simple and clear case of a metal-metal bond because no other atoms tie the two Mn atoms together.
When several metals are linked by metal-metal bonds, the compound or ion is called a metal cluster . Many metal clusters contain several unsupported M–M bonds. Some examples are M 3 (CO) 12 (M = Ru, Os) and Ir 4 (CO) 12 .
A subclass of unsupported metal–metal bonded arrays are linear chain compounds . In such cases the M–M bonding is weak as signaled by longer M–M bonds and the tendency of such compounds to dissociate in solution.
In many compounds, metal-metal bonds are accompanied by bridging ligands . In those cases, it is difficult to state unequivocally that the metal-metal bond is the cohesive force binding the two metals together. Diiron nonacarbonyl is such an example. Another example of a supported metal–metal bond is cyclopentadienyliron dicarbonyl dimer , [(C 5 H 5 )Fe(CO) 2 ] 2 . In the predominant isomers of this complex, the two Fe centers are joined not only by an Fe–Fe bond, but also by bridging CO ligands . The related cyclopentadienylruthenium dicarbonyl dimer features an unsupported Ru–Ru bond. Many metal clusters contain several supported M–M bonds. Further examples are Fe 3 (CO) 12 and Co 4 (CO) 12 .
In addition to M–M single bonds, metal pairs can be linked by double, triple, quadruple , and in a few cases, quintuple bonds . [ 4 ] Isolable complexes with multiple bonds are most common among the transition metals in the middle of the d-block , such as rhenium , tungsten , technetium , molybdenum and chromium . Typically the co-ligands are π-donors, not π-acceptors. [ 5 ] Well studied examples are the tetraacetates , such as dimolybdenum tetraacetate (quadruple bond) and dirhodium tetraacetate (single bond). Mixed-valence diruthenium tetraacetates have fractional M–M bond orders, i.e., 2.5 for [Ru 2 (OAc) 4 (H 2 O) 2 ] + . [ 6 ]
The complexes Nb 2 X 6 (SR 2 ) 3 adopt face-sharing bioctahedral structures (X = Cl, Br; SR 2 = thioether). As dimers of Nb(III), they feature double metal–metal bonds, the maximum possible for a pair of metals with d 2 configuration. [ 7 ] Hexa( tert -butoxy)ditungsten(III) is a well-studied example of a complex with a metal–metal triple bond. [ 8 ] | https://en.wikipedia.org/wiki/Metal–metal_bond
The metal–nitride–oxide–semiconductor or metal–nitride–oxide–silicon ( MNOS ) transistor is a type of MOSFET (metal–oxide–semiconductor field-effect transistor) in which the oxide layer is replaced by a double layer of nitride and oxide. [ 1 ] It is an alternative and supplement to the existing standard MOS technology , wherein the insulation employed is a nitride-oxide layer. [ 2 ] [ 3 ] It is used in non-volatile computer memory . [ 4 ]
The first silicon dioxide transistors were developed by Frosch and Derick in 1957 at Bell Labs. [ 5 ]
In late 1967, a Sperry research team led by H.A. Richard Wegener invented the metal–nitride–oxide–semiconductor (MNOS) transistor, [ 6 ] a type of MOSFET in which the oxide layer is replaced by a double layer of nitride and oxide. [ 1 ] Nitride was used as a trapping layer instead of a floating gate, but its use was limited as it was considered inferior to a floating gate. [ 7 ]
Charge trap (CT) memory was introduced with MNOS devices in the late 1960s. It had a device structure and operating principles similar to floating-gate (FG) memory, but the main difference is that the charges are stored in a conducting material (typically a doped polysilicon layer) in FG memory, whereas CT memory stored charges in localized traps within a dielectric layer (typically made of silicon nitride ). [ 8 ]
| https://en.wikipedia.org/wiki/Metal–nitride–oxide–semiconductor_transistor
Metal–organic biohybrids ( MOBs ) are a family of materials containing a metal component, such as copper, and a biological component, such as the amino acid dimer cystine . [ 1 ] One of the first MOB families described was the copper high-aspect ratio structure, CuHARS. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) of CuHARS revealed linear morphology and smooth surface texture. SEM, TEM and light microscopy showed that CuHARS composites had scalable dimensions from nano- to micro-, with diameters as low as 40 nm, lengths exceeding 150 microns, and average aspect ratios of 100. [ 2 ]
MOBs are composed of two major components: a metal ion or cluster of metal ions and a biological molecule. Examples are:
When combined with copper to form CuHARS, the cystine may provide a linker function leading to a linear, high-aspect ratio structure that gives CuHARS its name: copper high-aspect ratio structures. In contrast to CuHARS, MOBs formed with silver and cystine result in silver nanoparticles with spherical, rounded structure. These have been named AgCysNPs. [ 3 ]
Figure 1 shows comparative electron microscopy of CuHARS and AgCysNPs. [ 5 ]
MOBs can be self-assembled at body temperature (37 °C) under reducing conditions using sodium hydroxide (NaOH). [ 1 ] In the case of CuHARS, MOBs can be produced by transforming copper nanoparticles to provide the copper source, or by using copper(II) sulfate. [ 1 ]
CuHARS have been shown to completely degrade under physiological conditions (cell culture media at 37 °C), even in the absence of cells; this is possibly due to the metal-chelating properties of typical cell culture media. [ 6 ] These may include the copper-binding properties of ceruloplasmin [ 7 ] and of albumin. [ 8 ] Additionally, CuHARS have been shown to polarize light using inverted microscopy. [ 3 ] Cobalt-containing MOBs (CoMOBs) have been shown to be susceptible to an externally applied magnetic field, as shown in Figure 2. [ 9 ]
MOBs have been incorporated into composites including cellulose. [ 6 ] Additionally, MOBs composed of the copper-containing CuHARS have been shown to provide catalytic function [ 10 ] to produce nitric oxide (NO).
This production of NO was shown to impart anti-microbial activity, and the CuHARS in this case were incorporated into a biodegradable, biocompatible, and renewable resource material, namely cellulose. [ 11 ] The release of NO catalyzed by copper from CuHARS may have beneficial biomedical applications. [ 12 ]
Both copper- and silver-containing MOBs were shown to have anti-cancer effect on cells in vitro. [ 3 ] In the case of possible uses for CuHARS, copper may have a potential role in tumor immunity and for antitumor therapy. [ 13 ] Since CuHARS are 100% biodegradable under physiological conditions, copper metabolism of CuHARS may have benefits as an approach for treating glioma . [ 14 ]
Green nanomedicine has been suggested as a path to the next generation of materials for diagnosing brain tumors and for therapeutics, including the use of CuHARS. [ 15 ]
CuHARS embedded into nanofiber aerogels have been shown to have angiogenic effects. [ 16 ]
CuHARS embedded into nanofiber aerogels [ 16 ] and via CuHARS-mediated nitric oxide generation [ 10 ] have both been examples of antibacterial effects.
| https://en.wikipedia.org/wiki/Metal–organic_biohybrid
Metamagnetism is a sudden (often, dramatic) increase in the magnetization of a material with a small change in an externally applied magnetic field . The metamagnetic behavior may have quite different physical causes for different types of metamagnets. Some examples of physical mechanisms leading to metamagnetic behavior are:
Depending on the material and experimental conditions, metamagnetism may be associated with a first-order phase transition , a continuous phase transition at a critical point (classical or quantum), or crossovers beyond a critical point that do not involve a phase transition at all. These wildly different physical explanations sometimes lead to confusion as to what the term "metamagnetic" refers to in specific cases. | https://en.wikipedia.org/wiki/Metamagnetism
Metaman: The Merging of Humans and Machines into a Global Superorganism is a 1993 book by author Gregory Stock . The title refers to a superorganism comprising humanity and its technology. [ 1 ] [ 2 ]
In his book, Stock claims that humanity as a whole can be seen as a collective organism, which he calls Metaman. He compares individual humans to cells, which work together and communicate on a global scale thanks to advances in technology. Stock sees mass media as the Metaman's consciousness, libraries as its memory and transport as its nervous system. [ 2 ] [ 3 ]
According to Stock, the Metaman is constantly evolving. Metaman transforms the planetary environment and creates new biological communities which are completely dependent on it. It changes humans, who begin to merge with machines. [ 4 ] Its evolution is accelerating, and it might soon reproduce into outer space. Stock thinks that this growth is beneficial, and that Metaman will overcome negative natural processes, such as floods or famines. [ 2 ] [ 3 ]
Stock does not explore the negative sides of an entity such as Metaman. Kenneth Haygood says that Stock only provided data to support particular points and did not examine forces which would interfere with his concept: "Readers of this journal with general systems theory and related ideas may find that all of Stock's bits and pieces of data, while relevant to a particular point, had the overall effect of diverting the reader from a more penetrating examination of the concept and its implications." [ 3 ]
In her review, Patric Hedlund opposes Stock's optimistic view and provides counterexamples, such as the Yugoslav Wars . She argues that Metaman's awareness might not be sufficient to prevent its self-destruction. [ 5 ] | https://en.wikipedia.org/wiki/Metaman |
A metamaterial (from the Greek word μετά meta , meaning "beyond" or "after", and the Latin word materia , meaning "matter" or "material") is a type of material engineered to have a property, typically rarely observed in naturally occurring materials, that is derived not from the properties of the base materials but from their newly designed structures. Metamaterials are usually fashioned from multiple materials, such as metals and plastics, and are usually arranged in repeating patterns , at scales that are smaller than the wavelengths of the phenomena they influence. Their precise shape , geometry , size , orientation , and arrangement give them their "smart" properties of manipulating electromagnetic , acoustic, or even seismic waves: by blocking, absorbing, enhancing, or bending waves, to achieve benefits that go beyond what is possible with conventional materials.
Appropriately designed metamaterials can affect waves of electromagnetic radiation or sound in a manner not observed in bulk materials. [ 3 ] [ 4 ] [ 5 ] Those that exhibit a negative index of refraction for particular wavelengths have been the focus of a large amount of research. [ 6 ] [ 7 ] [ 8 ] These materials are known as negative-index metamaterials .
Potential applications of metamaterials are diverse and include sports equipment , [ 9 ] [ 10 ] optical filters , medical devices , remote aerospace applications, sensor detection and infrastructure monitoring , smart solar power management, lasers, [ 11 ] crowd control , radomes , high-frequency battlefield communication and lenses for high-gain antennas, improving ultrasonic sensors , and even shielding structures from earthquakes . [ 12 ] [ 13 ] [ 14 ] [ 15 ] Metamaterials offer the potential to create super-lenses . [ 16 ] Such a lens can allow imaging below the diffraction limit, that is, the minimum resolution d = λ/(2NA) that can be achieved by conventional lenses having a numerical aperture NA and with illumination wavelength λ. Sub-wavelength optical metamaterials, when integrated with optical recording media, can be used to achieve optical data density higher than the diffraction limit allows. [ 17 ] A form of 'invisibility' was demonstrated using gradient-index materials . Acoustic and seismic metamaterials are also research areas. [ 12 ] [ 18 ]
Metamaterial research is interdisciplinary and involves such fields as electrical engineering , electromagnetics , classical optics , solid state physics , microwave and antenna engineering , optoelectronics , material sciences , nanoscience and semiconductor engineering. [ 4 ] Recent developments also show promise for metamaterials in optical computing , with metamaterial-based systems theoretically being able to perform certain tasks more efficiently than conventional computing. [ 19 ]
Explorations of artificial materials for manipulating electromagnetic waves began at the end of the 19th century. Some of the earliest structures that may be considered metamaterials were studied by Jagadish Chandra Bose , who in 1898 researched substances with chiral properties. Karl Ferdinand Lindman studied wave interaction with metallic helices as artificial chiral media in the early twentieth century.
In the late 1940s, Winston E. Kock from AT&T Bell Laboratories developed materials that had similar characteristics to metamaterials. In the 1950s and 1960s, artificial dielectrics were studied for lightweight microwave antennas . Microwave radar absorbers were researched in the 1980s and 1990s as applications for artificial chiral media. [ 4 ] [ 20 ] [ 21 ]
Negative-index materials were first described theoretically by Victor Veselago in 1967. [ 22 ] He proved that such materials could transmit light . He showed that the phase velocity could be made anti-parallel to the direction of Poynting vector . This is contrary to wave propagation in naturally occurring materials. [ 8 ]
In 1995, John M. Guerra fabricated a sub-wavelength transparent grating (later called a photonic metamaterial) having 50 nm lines and spaces, and then coupled it with a standard oil immersion microscope objective (the combination later called a super-lens) to resolve a grating in a silicon wafer also having 50 nm lines and spaces. This super-resolved image was achieved with illumination having a wavelength of 650 nm in air. [ 16 ]
In 2000, John Pendry was the first to identify a practical way to make a left-handed metamaterial, a material in which the right-hand rule is not followed. [ 22 ] Such a material allows an electromagnetic wave to convey energy (have a group velocity ) against its phase velocity . Pendry hypothesized that metallic wires aligned along the direction of a wave could provide negative permittivity ( dielectric function ε < 0). Natural materials (such as ferroelectrics ) display negative permittivity; the challenge was achieving negative permeability (μ < 0). In 1999, Pendry demonstrated that a split ring (C shape) with its axis placed along the direction of wave propagation could do so. In the same paper, he showed that a periodic array of wires and rings could give rise to a negative refractive index. Pendry also proposed a related negative-permeability design, the Swiss roll .
In 2000, David R. Smith et al. reported the experimental demonstration of functioning electromagnetic metamaterials by horizontally stacking, periodically , split-ring resonators and thin wire structures. A method was provided in 2002 to realize negative-index metamaterials using artificial lumped-element loaded transmission lines in microstrip technology. In 2003, complex (both real and imaginary parts of) negative refractive index [ 23 ] and imaging by flat lens [ 24 ] using left handed metamaterials were demonstrated. By 2007, experiments that involved negative refractive index had been conducted by many groups. [ 3 ] [ 15 ] At microwave frequencies, the first, imperfect invisibility cloak was realized in 2006. [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ]
From the standpoint of governing equations, contemporary researchers classify metamaterials into three primary branches: [ 30 ] electromagnetic/optical wave metamaterials, other wave metamaterials, and diffusion metamaterials . These branches are characterized by their respective governing equations: Maxwell's equations (a wave equation describing transverse waves), other wave equations (for longitudinal and transverse waves), and diffusion equations (pertaining to diffusion processes). [ 31 ] Diffusion metamaterials, designed to control a range of diffusion processes, take the diffusion length as their central metric; it varies with time but is independent of frequency. Wave metamaterials, designed to adjust wave propagation paths, instead take the wavelength of the incoming wave as their key metric; the wavelength is constant in time but changes with frequency. The key metrics of diffusion and wave metamaterials thus diverge, underscoring a complementary relationship between the two classes. For comprehensive information, refer to Section I.B, "Evolution of metamaterial physics," in Ref. [ 30 ]
An electromagnetic metamaterial affects electromagnetic waves that impinge on or interact with its structural features, which are smaller than the wavelength. To behave as a homogeneous material accurately described by an effective refractive index , its features must be much smaller than the wavelength.
The unusual properties of metamaterials arise from the resonant response of each constituent element rather than from their spatial arrangement into a lattice. This allows the material to be described by local effective parameters (permittivity and permeability ). The resonance effect related to the mutual arrangement of elements is responsible for Bragg scattering , which underlies the physics of photonic crystals , another class of electromagnetic materials. Unlike the local resonances, Bragg scattering and the corresponding Bragg stop-band have a low-frequency limit determined by the lattice spacing. The subwavelength approximation ensures that the Bragg stop-bands, with their strong spatial dispersion effects, lie at higher frequencies and can be neglected. The criterion for shifting the local resonance below the lower Bragg stop-band makes it possible to build a photonic phase-transition diagram in a parameter space, for example, the size and permittivity of the constituent element. Such a diagram displays the domain of structure parameters for which metamaterial properties can be observed in the electromagnetic material. [ 32 ]
For microwave radiation , the features are on the order of millimeters . Microwave frequency metamaterials are usually constructed as arrays of electrically conductive elements (such as loops of wire) that have suitable inductive and capacitive characteristics. Many microwave metamaterials use split-ring resonators . [ 5 ] [ 6 ]
Photonic metamaterials are structured on the nanometer scale and manipulate light at optical frequencies. Photonic crystals and frequency-selective surfaces such as diffraction gratings , dielectric mirrors and optical coatings exhibit similarities to subwavelength structured metamaterials. However, these are usually considered distinct from metamaterials, as their function arises from diffraction or interference and thus cannot be approximated as a homogeneous material. Material structures such as photonic crystals are nevertheless effective in the visible light spectrum . The middle of the visible spectrum has a wavelength of approximately 560 nm (for sunlight), so photonic crystal structures are generally half this size or smaller, that is, < 280 nm.
Plasmonic metamaterials utilize surface plasmons , which are packets of electrical charge that collectively oscillate at the surfaces of metals at optical frequencies.
Frequency selective surfaces (FSS) can exhibit subwavelength characteristics and are known variously as artificial magnetic conductors (AMC) or High Impedance Surfaces (HIS). FSS display inductive and capacitive characteristics that are directly related to their subwavelength structure. [ 33 ]
Electromagnetic metamaterials can be divided into different classes, as follows: [ 3 ] [ 22 ] [ 4 ] [ 34 ]
Negative-index metamaterials (NIM) are characterized by a negative index of refraction. Other terms for NIMs include "left-handed media", "media with a negative refractive index", and "backward-wave media". [ 3 ] NIMs where the negative index of refraction arises from simultaneously negative permittivity and negative permeability are also known as double negative metamaterials or double negative materials (DNG). [ 22 ]
Assuming a material well-approximated by a real permittivity and permeability, the relationship between permittivity $\varepsilon_r$, permeability $\mu_r$ and refractive index $n$ is given by $n = \pm\sqrt{\varepsilon_r \mu_r}$. All known non-metamaterial transparent materials (glass, water, ...) possess positive $\varepsilon_r$ and $\mu_r$. By convention the positive square root is used for $n$. However, some engineered metamaterials have $\varepsilon_r < 0$ and $\mu_r < 0$. Because the product $\varepsilon_r \mu_r$ is positive, $n$ is real . Under such circumstances, it is necessary to take the negative square root for $n$. When both $\varepsilon_r$ and $\mu_r$ are positive (negative), waves travel in the forward ( backward ) direction. Electromagnetic waves cannot propagate in materials with $\varepsilon_r$ and $\mu_r$ of opposite sign, as the refractive index becomes imaginary . Such materials are opaque to electromagnetic radiation; examples include plasmonic materials such as metals ( gold , silver , ...).
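As a minimal numerical sketch of this branch choice (assuming the $e^{-i\omega t}$ time convention; the function name and material values are illustrative, not taken from the cited sources):

```python
import numpy as np

def refractive_index(eps_r, mu_r):
    """Refractive index n = +/- sqrt(eps_r * mu_r) of a passive medium.

    The branch is fixed by passivity: a propagating wave must decay,
    so Im(n) >= 0. For a double-negative medium this forces Re(n) < 0.
    """
    n = np.sqrt(eps_r * mu_r + 0j)  # principal square root
    # Flip the branch if it violates passivity, or for the idealized
    # lossless double-negative case, where the negative root is taken.
    if n.imag < 0 or (n.imag == 0 and eps_r.real < 0 and mu_r.real < 0):
        n = -n
    return n

print(refractive_index(-4 + 0.01j, -1 + 0.01j))  # ~ -2: double negative
print(refractive_index(2.25, 1.0))               # +1.5: ordinary dielectric
```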
The foregoing considerations are simplistic for actual materials, which must have complex-valued $\varepsilon_r$ and $\mu_r$. The real parts of both $\varepsilon_r$ and $\mu_r$ do not have to be negative for a passive material to display negative refraction. [ 35 ] [ 36 ] Indeed, a negative refractive index for circularly polarized waves can also arise from chirality. [ 37 ] [ 38 ] Metamaterials with negative $n$ have numerous interesting properties: [ 4 ] [ 39 ]
Negative index of refraction derives mathematically from the vector triplet E , H and k . [ 4 ]
For plane waves propagating in electromagnetic metamaterials, the electric field, magnetic field and wave vector follow a left-hand rule , the reverse of the behavior of conventional optical materials.
To date, only metamaterials exhibit a negative index of refraction. [ 3 ] [ 39 ] [ 40 ]
Single negative (SNG) metamaterials have either negative relative permittivity (ε r ) or negative relative permeability (μ r ), but not both. [ 22 ] They act as metamaterials when combined with a different, complementary SNG, jointly acting as a DNG.
Epsilon negative media (ENG) display a negative ε r while μ r is positive. [ 3 ] [ 39 ] [ 22 ] Many plasmas exhibit this characteristic. For example, noble metals such as gold or silver are ENG in the infrared and visible spectra .
Mu-negative media (MNG) display a positive ε r and negative μ r . [ 3 ] [ 39 ] [ 22 ] Gyrotropic or gyromagnetic materials exhibit this characteristic. A gyrotropic material is one that has been altered by the presence of a quasistatic magnetic field , enabling a magneto-optic effect . A magneto-optic effect is a phenomenon in which an electromagnetic wave propagates through such a medium. In such a material, left- and right-rotating elliptical polarizations can propagate at different speeds. When light is transmitted through a layer of magneto-optic material, the result is called the Faraday effect : the polarization plane can be rotated, forming a Faraday rotator . When light is instead reflected from such a layer, the result is known as the magneto-optic Kerr effect (not to be confused with the nonlinear Kerr effect ). Two gyrotropic materials with reversed rotation directions of the two principal polarizations are called optical isomers .
Joining a slab of ENG material and a slab of MNG material resulted in properties such as resonances, anomalous tunneling, transparency and zero reflection. Like negative-index materials, SNGs are innately dispersive, so their ε r , μ r and refractive index n are functions of frequency. [ 39 ]
Hyperbolic metamaterials (HMMs) behave as a metal for certain polarization or direction of light propagation and behave as a dielectric for the other due to the negative and positive permittivity tensor components, giving extreme anisotropy . The material's dispersion relation in wavevector space forms a hyperboloid and therefore it is called a hyperbolic metamaterial. The extreme anisotropy of HMMs leads to directional propagation of light within and on the surface. [ 41 ] HMMs have shown various potential applications, such as sensing, reflection modulator, [ 42 ] all-optical ultra-fast switching for integrated photonics, [ 43 ] imaging, super high resolution and single photon source, [ 44 ] steering of optical signals, enhanced plasmon resonance effects. [ 45 ]
Electromagnetic bandgap metamaterials (EBG or EBM) control light propagation. This is accomplished either with photonic crystals (PC) or left-handed materials (LHM). PCs can prohibit light propagation altogether. Both classes can allow light to propagate in specific, designed directions and both can be designed with bandgaps at desired frequencies. [ 46 ] [ 47 ] The period size of EBGs is an appreciable fraction of the wavelength, creating constructive and destructive interference.
PCs are distinguished from sub-wavelength structures, such as tunable metamaterials , because a PC derives its properties from its bandgap characteristics. PCs are sized to match the wavelength of light, whereas other metamaterials expose sub-wavelength structure. Furthermore, PCs function by diffracting light. In contrast, metamaterials do not use diffraction. [ 48 ]
PCs have periodic inclusions that inhibit wave propagation due to destructive interference from scattering off the inclusions. The photonic bandgap property of PCs makes them the electromagnetic analog of electronic semiconductor crystals. [ 49 ]
EBGs have the goal of creating high quality, low loss, periodic, dielectric structures. An EBG affects photons in the same way semiconductor materials affect electrons. PCs are the perfect bandgap material, because they allow no light propagation. [ 50 ] Each unit of the prescribed periodic structure acts like one atom, albeit of a much larger size. [ 3 ] [ 50 ]
EBGs are designed to prevent the propagation of an allocated bandwidth of frequencies, for certain arrival angles and polarizations . Various geometries and structures have been proposed to realize EBGs' special properties. In practice it is impossible to build a flawless EBG device. [ 3 ] [ 4 ]
EBGs have been manufactured for frequencies ranging from a few gigahertz (GHz) to a few terahertz (THz), covering the radio, microwave and mid-infrared frequency regions. EBG application developments include a transmission line , woodpiles made of square dielectric bars and several different types of low-gain antennas . [ 3 ] [ 4 ]
Double positive media (DPS) do occur in nature, such as naturally occurring dielectrics . Permittivity and magnetic permeability are both positive and wave propagation is in the forward direction. Artificial materials have been fabricated which combine DPS, ENG and MNG properties. [ 3 ] [ 22 ]
Categorizing metamaterials into double or single negative, or double positive, normally assumes that the metamaterial has independent electric and magnetic responses described by ε and μ. However, in many cases, the electric field causes magnetic polarization, while the magnetic field induces electrical polarization, known as magnetoelectric coupling. Such media are denoted as bi-isotropic . Media that exhibit magnetoelectric coupling and that are anisotropic (which is the case for many metamaterial structures [ 51 ] ), are referred to as bi-anisotropic. [ 52 ] [ 53 ]
Four material parameters are intrinsic to the magnetoelectric coupling of bi-isotropic media, relating the electric ( E ) and magnetic ( H ) field strengths to the electric ( D ) and magnetic ( B ) flux densities. These parameters are ε, μ, κ and χ, or permittivity, permeability, strength of chirality, and the Tellegen parameter, respectively. In this type of media, the material parameters do not vary with changes along a rotated coordinate system of measurement; in this sense they are invariant, or scalar . [ 4 ]
The intrinsic magnetoelectric parameters, κ and χ , affect the phase of the wave. The effect of the chirality parameter is to split the refractive index. In isotropic media this results in wave propagation only if ε and μ have the same sign. In bi-isotropic media with χ assumed to be zero, and κ a non-zero value, different results appear. Either a backward wave or a forward wave can occur. Alternatively, two forward waves or two backward waves can occur, depending on the strength of the chirality parameter.
In the general case, the constitutive relations for bi-anisotropic materials read
$$\mathbf{D} = \varepsilon \mathbf{E} + \xi \mathbf{H},$$
$$\mathbf{B} = \zeta \mathbf{E} + \mu \mathbf{H},$$
where $\varepsilon$ and $\mu$ are the permittivity and permeability tensors, respectively, whereas $\xi$ and $\zeta$ are the two magneto-electric tensors. If the medium is reciprocal, permittivity and permeability are symmetric tensors, and $\xi = -\zeta^{T} = -i\kappa^{T}$, where $\kappa$ is the chiral tensor describing the chiral electromagnetic and reciprocal magneto-electric response. The chiral tensor can be expressed as $\kappa = \tfrac{1}{3}\operatorname{tr}(\kappa)\,I + N + J$, where $\operatorname{tr}(\kappa)$ is the trace of $\kappa$, $I$ is the identity matrix, $N$ is a symmetric trace-free tensor, and $J$ is an antisymmetric tensor. Such a decomposition allows us to classify the reciprocal bianisotropic response into three main classes: (i) chiral media ($\operatorname{tr}(\kappa) \neq 0$, $N \neq 0$, $J = 0$); (ii) pseudochiral media ($\operatorname{tr}(\kappa) = 0$, $N \neq 0$, $J = 0$); (iii) omega media ($\operatorname{tr}(\kappa) = 0$, $N = 0$, $J \neq 0$).
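The decomposition above can be checked numerically. The following sketch (the tensor values are invented for illustration) splits a 3×3 chirality tensor into its trace part, symmetric trace-free part $N$ and antisymmetric part $J$, then reports the class according to the criteria just listed:

```python
import numpy as np

def classify_bianisotropy(kappa, tol=1e-12):
    """Split kappa = tr(kappa)/3 * I + N + J and name the response class."""
    kappa = np.asarray(kappa, dtype=complex)
    trace = np.trace(kappa)
    N = 0.5 * (kappa + kappa.T) - (trace / 3.0) * np.eye(3)  # symmetric, trace-free
    J = 0.5 * (kappa - kappa.T)                              # antisymmetric
    has_tr = abs(trace) > tol
    has_N = np.linalg.norm(N) > tol
    has_J = np.linalg.norm(J) > tol
    if has_tr and has_N and not has_J:
        return "chiral"
    if not has_tr and has_N and not has_J:
        return "pseudochiral"
    if not has_tr and not has_N and has_J:
        return "omega"
    return "other / mixed"

print(classify_bianisotropy(0.3 * np.eye(3) + np.diag([0.1, -0.05, -0.05])))  # chiral
print(classify_bianisotropy([[0, 0.2, 0], [-0.2, 0, 0], [0, 0, 0]]))          # omega
```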
Handedness of metamaterials is a potential source of confusion as the metamaterial literature includes two conflicting uses of the terms left- and right-handed . The first refers to one of the two circularly polarized waves that are the propagating modes in chiral media. The second relates to the triplet of electric field, magnetic field and Poynting vector that arise in negative refractive index media, which in most cases are not chiral.
Generally a chiral and/or bianisotropic electromagnetic response is a consequence of 3D geometrical chirality: 3D-chiral metamaterials are composed by embedding 3D-chiral structures in a host medium and they show chirality-related polarization effects such as optical activity and circular dichroism . The concept of 2D chirality also exists and a planar object is said to be chiral if it cannot be superposed onto its mirror image unless it is lifted from the plane. 2D-chiral metamaterials that are anisotropic and lossy have been observed to exhibit directionally asymmetric transmission (reflection, absorption) of circularly polarized waves due to circular conversion dichroism. [ 54 ] [ 55 ] On the other hand, bianisotropic response can arise from geometrical achiral structures possessing neither 2D nor 3D intrinsic chirality. Plum and colleagues investigated magneto-electric coupling due to extrinsic chirality , where the arrangement of a (achiral) structure together with the radiation wave vector is different from its mirror image, and observed large, tuneable linear optical activity, [ 56 ] nonlinear optical activity, [ 57 ] specular optical activity [ 58 ] and circular conversion dichroism. [ 59 ] Rizza et al. [ 60 ] suggested 1D chiral metamaterials where the effective chiral tensor is not vanishing if the system is geometrically one-dimensional chiral (the mirror image of the entire structure cannot be superposed onto it by using translations without rotations).
3D-chiral metamaterials are constructed from chiral materials or resonators in which the effective chirality parameter $\kappa$ is non-zero.
Wave propagation properties in such chiral metamaterials demonstrate that negative refraction can be realized in metamaterials with a strong chirality and positive $\varepsilon_r$ and $\mu_r$. [ 61 ] [ 62 ] This is because the refractive index $n$ has distinct values for left and right circularly polarized waves, given by
$$n_\pm = \sqrt{\varepsilon_r \mu_r} \pm \kappa.$$
It can be seen that a negative index will occur for one polarization if $\kappa > \sqrt{\varepsilon_r \mu_r}$. In this case, it is not necessary that either or both $\varepsilon_r$ and $\mu_r$ be negative for backward wave propagation. [ 4 ] A negative refractive index due to chirality was first observed simultaneously and independently by Plum et al. [ 37 ] and Zhang et al. [ 38 ] in 2009.
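A quick numerical check of this condition, with arbitrary illustrative parameters:

```python
import numpy as np

def chiral_indices(eps_r, mu_r, kappa):
    """n_+/- = sqrt(eps_r * mu_r) +/- kappa for the two circular polarizations."""
    n0 = np.sqrt(eps_r * mu_r)
    return n0 + kappa, n0 - kappa

# Here kappa exceeds sqrt(eps_r * mu_r) ~ 1.10, so one circular
# polarization sees n < 0 even though eps_r and mu_r are both positive.
print(chiral_indices(eps_r=1.2, mu_r=1.0, kappa=1.5))  # (~2.60, ~-0.40)
```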
Frequency selective surface-based metamaterials block signals in one waveband and pass those at another waveband. They have become an alternative to fixed frequency metamaterials. They allow for optional changes of frequencies in a single medium, rather than the restrictive limitations of a fixed frequency response . [ 63 ]
Mechanical metamaterials are rationally designed artificial materials/structures of precision geometrical arrangements leading to unusual physical and mechanical properties. These unprecedented properties are often derived from their unique internal structures rather than the materials from which they are made. Inspiration for mechanical metamaterials design often comes from biological materials (such as honeycombs and cells), from molecular and crystalline unit cell structures as well as the artistic fields of origami and kirigami. While early mechanical metamaterials had regular repeats of simple unit cell structures, increasingly complex units and architectures are now being explored. Mechanical metamaterials can be seen as a counterpart to the rather well-known family of optical metamaterials and electromagnetic metamaterials . Mechanical properties, including elasticity, viscoelasticity, and thermoelasticity, are central to the design of mechanical metamaterials. They are often also referred to as elastic metamaterials or elastodynamic metamaterials . Their mechanical properties can be designed to have values that cannot be found in nature, such as negative stiffness, negative Poisson’s ratio, negative compressibility, and vanishing shear modulus. [ 64 ] [ 65 ] [ 66 ] [ 67 ] [ 68 ] [ 69 ] [ 70 ] [ 71 ] [ 72 ] [ 73 ] [ 74 ] [ 75 ] [ 76 ] In addition to classical mechanical metamaterials, there has been growing attention to active mechanical metamaterials with advanced functionalities. These enable "intelligent mechanical metamaterials", which are programmable material systems capable of sensing, energy harvesting, actuation, communication, and information processing—to interact with their surrounding environments, optimize their response, and create a sense–decide–respond loop. [ 76 ] [ 77 ]
Acoustic metamaterials control, direct and manipulate sound in the form of sonic , infrasonic or ultrasonic waves in gases , liquids and solids . As with electromagnetic waves, sonic waves can exhibit negative refraction. [ 18 ]
Control of sound waves is mostly accomplished through the bulk modulus β , mass density ρ and chirality. The bulk modulus and density are analogs of permittivity and permeability in electromagnetic metamaterials. Related to this is the mechanics of sound wave propagation in a lattice structure. Also materials have mass and intrinsic degrees of stiffness . Together, these form a resonant system and the mechanical (sonic) resonance may be excited by appropriate sonic frequencies (for example audible pulses ).
Structural metamaterials are a type of mechanical metamaterial that provides properties such as crushability and light weight. Using projection micro-stereolithography , microlattices can be created using forms much like trusses and girders . Materials four orders of magnitude stiffer than conventional aerogel , but with the same density, have been created. Such materials can withstand a load of at least 160,000 times their own weight by over-constraining the materials. [ 78 ] [ 79 ]
A ceramic nanotruss metamaterial can be flattened and revert to its original state. [ 80 ]
Materials found in nature, when homogeneous, are typically thermally isotropic; that is, heat passes through them at roughly the same rate in all directions. Thermal metamaterials, however, are usually anisotropic due to their highly organized internal structure. Composite materials with highly aligned internal particles or structures, such as fibers and carbon nanotubes (CNT), are examples of this.
Metamaterials may be fabricated that include some form of nonlinear media, whose properties change with the power of the incident wave. Nonlinear media are essential for nonlinear optics . Most optical materials have a relatively weak response, meaning that their properties change by only a small amount for large changes in the intensity of the electromagnetic field . The local electromagnetic fields of the inclusions in nonlinear metamaterials can be much larger than the average value of the field. Besides, remarkable nonlinear effects have been predicted and observed if the metamaterial effective dielectric permittivity is very small (epsilon-near-zero media). [ 81 ] [ 82 ] [ 83 ] In addition, exotic properties such as a negative refractive index, create opportunities to tailor the phase matching conditions that must be satisfied in any nonlinear optical structure.
Metafluids offer programmable properties such as viscosity, compressibility, and optical response. One approach employed 50-500 micron diameter air-filled elastomer spheres suspended in silicone oil. The spheres compress under pressure and regain their shape when the pressure is relieved, and their properties differ across those two states. Unpressurized, they scatter light, making the fluid opaque. Under pressure, they collapse into half-moon shapes, focusing light and becoming transparent. The pressure response could allow them to act as a sensor or as a dynamic hydraulic fluid. Like a cornstarch suspension, the metafluid can act as either a Newtonian or a non-Newtonian fluid. Under pressure, it becomes non-Newtonian, meaning its viscosity changes in response to shear force. [ 84 ]
In 2009, Marc Briane and Graeme Milton [ 85 ] proved mathematically that one can in principle invert the sign of the effective Hall coefficient of a three-material composite in 3D made only of materials whose Hall coefficients share the same sign. Later, in 2015, Muamer Kadic et al. [ 86 ] showed that a simple perforation of an isotropic material can lead to a change of sign of its Hall coefficient. This theoretical claim was finally demonstrated experimentally by Christian Kern et al. [ 87 ]
In 2015, it was also demonstrated by Christian Kern et al. that an anisotropic perforation of a single material can lead to a yet more unusual effect, namely the parallel Hall effect. [ 88 ] This means that the induced electric field inside a conducting medium is no longer orthogonal to the current and the magnetic field, but is actually parallel to the latter.
Meta-biomaterials are a type of mechanical metamaterial purposefully designed to interact with biological systems, integrating principles from both metamaterial science and biological disciplines. Engineered at the nanoscale, these materials adeptly manipulate electromagnetic, acoustic, or thermal properties to facilitate biological processes. Through meticulous adjustment of their structure and composition, meta-biomaterials hold promise in augmenting various biomedical technologies such as medical imaging, [ 89 ] drug delivery, [ 90 ] and tissue engineering. [ 91 ] This underscores the importance of comprehending biological systems through the interdisciplinary lens of materials science.
Terahertz metamaterials interact at terahertz frequencies, usually defined as 0.1 to 10 THz . Terahertz radiation lies at the far end of the infrared band, just after the end of the microwave band. This corresponds to millimeter and submillimeter wavelengths between 3 mm ( EHF band) and 0.03 mm (the long-wavelength edge of far-infrared light).
Photonic metamaterials interact at optical frequencies ( mid-infrared ). Their sub-wavelength period distinguishes them from photonic band gap structures. [ 92 ] [ 93 ]
Tunable metamaterials allow the refractive index to be adjusted arbitrarily as a function of frequency. A tunable metamaterial can expand beyond the bandwidth limitations of left-handed materials by combining various types of metamaterials.
Plasmonic metamaterials exploit surface plasmons , which are produced from the interaction of light with metal- dielectrics . Under specific conditions, the incident light couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves or surface waves [ 94 ] known as surface plasmon polaritons . Bulk plasma oscillations make possible the effect of negative mass (density). [ 95 ] [ 96 ]
Metamaterials are under consideration for many applications. [ 97 ] Metamaterial antennas are commercially available.
In 2007, one researcher stated that for metamaterial applications to be realized, energy loss must be reduced, materials must be extended into three-dimensional isotropic materials and production techniques must be industrialized. [ 98 ]
Metamaterial antennas are a class of antennas that use metamaterials to improve performance. [ 15 ] [ 22 ] [ 99 ] [ 100 ] Demonstrations showed that metamaterials could enhance an antenna's radiated power . [ 15 ] [ 101 ] Materials that can attain negative permeability allow for properties such as small antenna size, high directivity and tunable frequency. [ 15 ] [ 22 ]
A metamaterial absorber manipulates the loss components of metamaterials' permittivity and magnetic permeability, to absorb large amounts of electromagnetic radiation . [ 102 ] This is a useful feature for photodetection [ 103 ] [ 104 ] and solar photovoltaic applications. [ 105 ] Loss components are also relevant in applications of negative refractive index (photonic metamaterials, antenna systems) or transformation optics ( metamaterial cloaking , celestial mechanics), but often are not used in these applications.
A superlens is a two or three-dimensional device that uses metamaterials, usually with negative refraction properties, to achieve resolution beyond the diffraction limit (ideally, infinite resolution). Such a behavior is enabled by the capability of double-negative materials to yield negative phase velocity. The diffraction limit is inherent in conventional optical devices or lenses. [ 106 ] [ 107 ]
Metamaterials are a potential basis for a practical cloaking device . The proof of principle was demonstrated on October 19, 2006. No practical cloaks are publicly known to exist. [ 108 ] [ 109 ] [ 110 ] [ 111 ] [ 112 ] [ 113 ]
Metamaterials have applications in stealth technology , which reduces the radar cross-section (RCS) in any of various ways (e.g., absorption, diffusion, redirection). Conventionally, the RCS has been reduced either by radar-absorbent material (RAM) or by purpose shaping of the targets such that the scattered energy can be redirected away from the source. While RAMs have narrow frequency-band functionality, purpose shaping limits the aerodynamic performance of the target. More recently, metamaterials or metasurfaces have been synthesized that can redirect the scattered energy away from the source using either array theory [ 114 ] [ 115 ] [ 116 ] [ 117 ] or the generalized Snell's law. [ 118 ] [ 119 ] This has led to aerodynamically favorable shapes for targets with reduced RCS.
Seismic metamaterials counteract the adverse effects of seismic waves on man-made structures. [ 12 ] [ 120 ] [ 121 ]
Metamaterials textured with nanoscale wrinkles could control sound or light signals, such as changing a material's color or improving ultrasound resolution. Uses include nondestructive material testing , medical diagnostics and sound suppression . The materials can be made through a high-precision, multi-layer deposition process. The thickness of each layer can be controlled within a fraction of a wavelength. The material is then compressed, creating precise wrinkles whose spacing can cause scattering of selected frequencies. [ 122 ] [ 123 ]
Metamaterials can be integrated with optical waveguides to tailor guided electromagnetic waves ( meta-waveguide ). [ 124 ] Subwavelength structures like metamaterials can be integrated with, for instance, silicon waveguides to develop polarization beam splitters [ 125 ] and optical couplers, [ 126 ] adding new degrees of freedom for controlling light propagation at the nanoscale in integrated photonic devices. [ 127 ] Other applications such as integrated mode converters, [ 128 ] polarization (de)multiplexers, [ 129 ] structured light generation, [ 130 ] and on-chip bio-sensors [ 131 ] can also be developed. [ 124 ]
All materials are made of atoms , which are dipoles . These dipoles modify light velocity by a factor n (the refractive index). In a split ring resonator, the ring and wire units act as atomic dipoles: the wire acts as a ferroelectric atom, the ring acts as an inductor L, and the open section acts as a capacitor C . The ring as a whole acts as an LC circuit . When an electromagnetic field passes through the ring, an induced current is created, and the generated field is perpendicular to the light's magnetic field. The magnetic resonance results in a negative permeability; the refractive index is negative as well. (The lens is not truly flat, since the structure's capacitance imposes a slope for the electric induction.)
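A back-of-the-envelope sketch of this LC picture follows; the inductance and capacitance values are assumptions chosen only to land in a typical microwave range, not measurements of any particular resonator:

```python
import numpy as np

# Split-ring resonator modeled as an LC circuit: the ring supplies the
# inductance L and the gap the capacitance C, so the magnetic resonance
# sits near f0 = 1 / (2 * pi * sqrt(L * C)).
L = 2.5e-9    # ring inductance in henries (assumed)
C = 1.0e-13   # gap capacitance in farads (assumed)
f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * C))
print(f"resonance ~ {f0 / 1e9:.1f} GHz")  # ~ 10.1 GHz
```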
Several mathematical material models describe the frequency response in DNGs. One of these is the Lorentz model , which describes electron motion in terms of a driven-damped harmonic oscillator . The Debye relaxation model applies when the acceleration component of the Lorentz model is small compared to the other components of the equation. The Drude model applies when the restoring-force component is negligible, and the coupling coefficient is generally the plasma frequency . Other component distinctions call for the use of one of these models, depending on its polarity or purpose. [ 3 ]
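The following sketch evaluates the Lorentz permittivity in normalized units and its Drude limit, obtained by dropping the restoring force (resonance frequency set to zero); all parameter values are illustrative, and the $e^{-i\omega t}$ sign convention is assumed:

```python
import numpy as np

def lorentz_eps(w, wp, w0, gamma):
    """Lorentz-oscillator permittivity of a driven-damped electron.

    Setting w0 = 0 (no restoring force) recovers the Drude model often
    used for the wire media in DNG composites.
    """
    return 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

w = np.linspace(0.1, 3.0, 5)                       # frequency in units of wp
print(lorentz_eps(w, wp=1.0, w0=1.2, gamma=0.05))  # Lorentz resonance
print(lorentz_eps(w, wp=1.0, w0=0.0, gamma=0.05))  # Drude limit
```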
Three-dimensional composites of metal/non-metallic inclusions periodically/randomly embedded in a low permittivity matrix are usually modeled by analytical methods, including mixing formulas and scattering-matrix based methods. The particle is modeled by either an electric dipole parallel to the electric field or a pair of crossed electric and magnetic dipoles parallel to the electric and magnetic fields, respectively, of the applied wave. These dipoles are the leading terms in the multipole series. They are the only existing ones for a homogeneous sphere, whose polarizability can be easily obtained from the Mie scattering coefficients. In general, this procedure is known as the "point-dipole approximation", which is a good approximation for metamaterials consisting of composites of electrically small spheres. Merits of these methods include low calculation cost and mathematical simplicity. [ 132 ] [ 133 ]
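For an electrically small sphere, the leading electric-dipole term reduces in the quasi-static limit to the Clausius-Mossotti form, the simplest instance of the point-dipole approximation. A minimal sketch, using the convention in which the polarizability carries units of volume (the values are illustrative):

```python
import numpy as np

def sphere_polarizability(radius, eps_sphere, eps_host=1.0):
    """Quasi-static electric-dipole polarizability of a small sphere:
    alpha = 4 * pi * a^3 * (eps_s - eps_h) / (eps_s + 2 * eps_h),
    the leading term of the Mie series for an electrically small sphere."""
    return (4.0 * np.pi * radius**3
            * (eps_sphere - eps_host) / (eps_sphere + 2.0 * eps_host))

# 50 nm dielectric sphere (eps = 4) in vacuum; result in m^3.
print(sphere_polarizability(50e-9, 4.0))
```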
Three conceptions are foundations of metamaterial theory: the negative-index medium, the non-reflecting crystal and the superlens. Other first-principles techniques for analyzing triply-periodic electromagnetic media may be found in Computing photonic band structure .
The Multidisciplinary University Research Initiative (MURI) encompasses dozens of universities and a few government organizations. Participating universities include UC Berkeley, UC Los Angeles, UC San Diego, the Massachusetts Institute of Technology, and Imperial College London. The sponsors are the Office of Naval Research and the Defense Advanced Research Projects Agency . [ 134 ]
MURI supports research that intersects more than one traditional science and engineering discipline to accelerate both research and translation to applications. As of 2009, 69 academic institutions were expected to participate in 41 research efforts. [ 135 ]
The Virtual Institute for Artificial Electromagnetic Materials and Metamaterials "Metamorphose VI AISBL" is an international association that promotes artificial electromagnetic materials and metamaterials. It organizes scientific conferences, supports specialized journals, creates and manages research programs, provides training programs (including PhD programs and training for industrial partners), and supports technology transfer to European industry. [ 136 ] [ 137 ] | https://en.wikipedia.org/wiki/Metamaterial |
A metamaterial absorber [ 1 ] is a type of metamaterial intended to efficiently absorb electromagnetic radiation such as light . Furthermore, metamaterials are an advance in materials science . Hence, those metamaterials that are designed to be absorbers offer benefits over conventional absorbers such as further miniaturization, wider adaptability, and increased effectiveness. Intended applications for the metamaterial absorber include emitters, photodetectors , sensors , spatial light modulators , infrared camouflage, wireless communication , and use in solar photovoltaics and thermophotovoltaics .
For practical applications, the metamaterial absorbers can be divided into two types: narrow band and broadband. [ 2 ] For example, metamaterial absorbers can be used to improve the performance of photodetectors . [ 2 ] [ 3 ] [ 4 ] [ 5 ] Metamaterial absorbers can also be used for enhancing absorption in both solar photovoltaic [ 6 ] [ 7 ] and thermo-photovoltaic [ 8 ] [ 9 ] applications. Skin depth engineering can be used in metamaterial absorbers in photovoltaic applications as well as other optoelectronic devices, where optimizing the device performance demands minimizing resistive losses and power consumption, such as photodetectors , laser diodes , and light emitting diodes . [ 10 ]
In addition, the advent of metamaterial absorbers enables researchers to further understand the theory of metamaterials, which is derived from classical electromagnetic wave theory . This leads to understanding the material's capabilities and the reasons for current limitations. [ 1 ]
Unfortunately, achieving broadband absorption, especially in the THz region (and higher frequencies), still remains a challenging task because of the intrinsically narrow bandwidth of surface plasmon polaritons (SPPs) or localized surface plasmon resonances (LSPRs) generated on metallic surfaces at the nanoscale, which are exploited as a mechanism to obtain perfect absorption. [ 2 ]
Metamaterials are artificial materials which exhibit unique properties which do not occur in nature. These are usually arrays of structures which are smaller than the wavelength they interact with. These structures have the capability to control electromagnetic radiation in unique ways that are not exhibited by conventional materials. It is the spacing and shape of a given metamaterial's components that define its use and the way it controls electromagnetic radiation. Unlike most conventional materials, researchers in this field can physically control electromagnetic radiation by altering the geometry of the material's components. Metamaterial structures are used in a wide range of applications and across a broad frequency range from radio frequencies , to microwave , terahertz , across the infrared spectrum and almost to visible wavelengths . [ 1 ]
"An electromagnetic absorber neither reflects nor transmits the incident radiation. Therefore, the power of the impinging wave is mostly absorbed in the absorber materials. The performance of an absorber depends on its thickness and morphology, and also the materials used to fabricate it." [ 11 ]
"A near unity absorber is a device in which all incident radiation is absorbed at the operating frequency–transmissivity, reflectivity, scattering and all other light propagation channels are disabled. Electromagnetic (EM) wave absorbers can be categorized into two types: resonant absorbers and broadband absorbers. [ 2 ] [ 12 ]
A metamaterial absorber utilizes the effective medium design of metamaterials and the loss components of permittivity and magnetic permeability to create a material that has a high ratio of electromagnetic radiation absorption. Loss is noted in applications of negative refractive index ( photonic metamaterials , antenna systems metamaterials ) or transformation optics ( metamaterial cloaking , celestial mechanics), but is typically undesired in these applications. [ 1 ] [ 13 ]
Complex permittivity and permeability are derived from metamaterials using the effective medium approach. As effective media, metamaterials can be characterized with complex ε(ω) = ε 1 + iε 2 for effective permittivity and μ(ω) = μ 1 + iμ 2 for effective permeability. Complex values of permittivity and permeability typically correspond to attenuation in a medium. Most of the work in metamaterials is focused on the real parts of these parameters, which relate to wave propagation rather than attenuation. The loss (imaginary) components are small in comparison to the real parts and are often neglected in such cases.
However, the loss terms (ε 2 and μ 2 ) can also be engineered to create high attenuation and correspondingly large absorption. By independently manipulating resonances in ε and μ it is possible to absorb both the incident electric and magnetic field. Additionally, a metamaterial can be impedance-matched to free space by engineering its permittivity and permeability, minimizing reflectivity. Thus, it becomes a highly capable absorber. [ 1 ] [ 13 ] [ 14 ]
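A minimal numerical sketch of the impedance-matching argument at normal incidence (a half-space model with invented material values, not a model of any specific absorber design):

```python
import numpy as np

def reflectivity(eps_r, mu_r):
    """Power reflectivity at the surface of a half-space with relative
    impedance z = sqrt(mu_r / eps_r); z = 1 means matched to free space."""
    z = np.sqrt(mu_r / eps_r)
    r = (z - 1.0) / (z + 1.0)
    return abs(r) ** 2

# Lossy but impedance-matched (eps_r = mu_r): nothing reflects, and the
# loss terms attenuate the wave inside, giving strong absorption.
print(reflectivity(2.0 + 1.0j, 2.0 + 1.0j))  # 0.0
# Same loss, unmatched: part of the incident wave is reflected.
print(reflectivity(2.0 + 1.0j, 1.0 + 0.0j))
```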
This approach can be used to create thin absorbers. Typical conventional absorbers are thick compared to wavelengths of interest, [ 15 ] which is a problem in many applications. Since metamaterials are characterized based on their subwavelength nature, they can be used to create effective yet thin absorbers. This is not limited to electromagnetic absorption either. [ 15 ] | https://en.wikipedia.org/wiki/Metamaterial_absorber |
Metamaterial antennas are a class of antennas which use metamaterials to increase performance of miniaturized ( electrically small ) antenna systems . [ 1 ] Their purpose, as with any electromagnetic antenna, is to launch energy into free space. However, this class of antenna incorporates metamaterials, which are materials engineered with novel, often microscopic , structures to produce unusual physical properties . Antenna designs incorporating metamaterials can step-up the antenna's radiated power .
Conventional antennas that are very small compared to the wavelength reflect most of the signal back to the source. A metamaterial antenna behaves as if it were much larger than its actual size, because its novel structure stores and re-radiates energy. Established lithography techniques can be used to print metamaterial elements on a printed circuit board . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
These novel antennas aid applications such as portable interaction with satellites, wide angle beam steering, emergency communications devices, micro-sensors and portable ground-penetrating radars to search for geophysical features.
Some applications for metamaterial antennas are wireless communication , space communications , GPS , satellites , space vehicle navigation and airplanes.
Antenna designs incorporating metamaterials can improve the radiated power of an antenna. The newest metamaterial antennas radiate as much as 95 percent of an input radio signal . Standard antennas need to be at least half the size of the signal wavelength to operate efficiently. At 300 MHz , for instance, an antenna would need to be half a meter long. In contrast, experimental metamaterial antennas are as small as one-fiftieth of a wavelength, and could have further decreases in size.
Metamaterials are a basis for further miniaturization of microwave antennas , with efficient power and acceptable bandwidth. Antennas employing metamaterials offer the possibility of overcoming restrictive efficiency-bandwidth limitations for conventionally constructed, miniature antennas.
Metamaterials permit smaller antenna elements that cover a wider frequency range , thus making better use of available space in space-constrained cases. In these instances, miniature antennas with high gain are significantly relevant because the radiating elements are combined into large antenna arrays. Furthermore, a flat lens made of negative-refractive-index metamaterial can focus electromagnetic radiation rather than dispersing it. [ 7 ] [ 8 ] [ 9 ]
The earliest research in metamaterial antennas was an analytical study of a miniature dipole antenna surrounded with a metamaterial. This material is known variously as a negative index metamaterial (NIM) or double negative metamaterial (DNG) among other names. [ 10 ]
This configuration analytically and numerically appears to produce an order of magnitude increase in power. At the same time, the reactance appears to offer a corresponding decrease. Furthermore, the DNG shell becomes a natural impedance matching network for this system. [ 10 ]
Metamaterials employed in the ground planes surrounding antennas offer improved isolation between radio frequency , or microwave channels of ( multiple-input multiple-output ) (MIMO) antenna arrays . [ 11 ] Metamaterial, high-impedance groundplanes can also improve radiation efficiency and axial ratio performance of low-profile antennas located close to the ground plane surface . Metamaterials have also been used to increase beam scanning range by using both the forward and backward waves in leaky wave antennas. Various metamaterial antenna systems can be employed to support surveillance sensors, communication links, navigation systems and command and control systems. [ 7 ]
Besides antenna miniaturization, the novel configurations have potential applications ranging from radio frequency devices to optical devices. Other combinations, for other devices in metamaterial antenna subsystems are being researched. [ 12 ] Either double negative metamaterial slabs are used exclusively or combinations of double positive (DPS) with DNG slabs, or epsilon-negative (ENG) slabs with mu-negative (MNG) slabs are employed in the subsystems. Antenna subsystems that are currently being researched include cavity resonators , waveguides, scatters and antennas (radiators). [ 12 ] Metamaterial antennas were commercially available by 2009. [ 13 ] [ 14 ] [ 15 ]
Pendry et al. were able to show that a three- dimensional array of intersecting, thin wires could be used to create negative values of permittivity (" ε "), and that a periodic array of copper split ring resonators could produce an effective negative magnetic permeability ( " μ "). [ 11 ]
In May 2000, a group of researchers, Smith et al. were the first to successfully combine the split-ring resonator (SRR), with thin wire conducting posts and produce a left-handed material that had negative values of ε, μ and refractive index for frequencies in the gigahertz or microwave range. [ 12 ] [ 16 ]
In 2002, a different class of negative refractive index (NRI) metamaterials was introduced that employs periodic reactive loading of a 2-D transmission line as the host medium . This configuration used positive index (DPS) material with negative index material (DNG). It employed a small, planar, negative-refractive-lens interfaced with a positive index, parallel-plate waveguide. This was experimentally verified soon after. [ 17 ] [ 18 ]
Although some SRR inefficiencies were identified, they continued to be employed as of 2009 for research. SRRs have been involved in wide-ranging metamaterial research, including research on metamaterial antennas. [ 4 ] [ 17 ] [ 18 ]
A more recent view is that by using SRRs as building blocks, the electromagnetic response and associated flexibility is practical and desirable. [ 19 ]
DNG can provide phase compensation due to their negative index of refraction. This is accomplished by combining a slab of conventional lossless DPS material with a slab of lossless DNG metamaterial.
DPS material has a conventional positive index of refraction , while DNG material has a negative refractive index. Both slabs are impedance -matched to the outside region (e.g., free space). A monochromatic plane wave is incident on this configuration. As the wave propagates through the first slab, a phase difference emerges between the exit and entrance faces. As the wave propagates through the second slab, the phase difference is significantly decreased and even compensated for. Therefore, as the wave exits the second slab, the total phase difference is equal to zero. [ 20 ]
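A sketch of the phase bookkeeping across such a DPS-DNG pair, with slab parameters assumed so that $n_1 d_1 = -n_2 d_2$:

```python
import numpy as np

# Phase accumulated by a plane wave crossing a DPS slab then a DNG slab:
# phi = k0 * (n1*d1 + n2*d2). Choosing n1*d1 = -n2*d2 makes the total
# phase difference across the pair vanish. All values are illustrative.
f = 10e9                       # operating frequency, 10 GHz (assumed)
k0 = 2.0 * np.pi * f / 3.0e8   # free-space wavenumber, rad/m
n1, d1 = 2.0, 6e-3             # DPS slab: n = +2, 6 mm thick
n2, d2 = -3.0, 4e-3            # DNG slab: n = -3, 4 mm thick
phi = k0 * (n1 * d1 + n2 * d2)
print(f"total phase across the pair: {phi:.2e} rad")  # ~ 0
```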
With this system a phase-compensated, waveguiding system could be produced. By stacking slabs of this configuration, the phase compensation (beam translation effects) would occur throughout the entire system. Furthermore, by changing the index of any of the DPS-DNG pairs, the speed at which the beam enters the front face, and exits the back face of the entire stack-system changes. In this manner, a volumetric, low loss, time delay transmission line could be realized for a given system. [ 20 ]
Furthermore, this phase compensation can lead to a set of applications, including miniaturized, subwavelength cavity resonators and waveguides operating below the diffraction limit . [ 20 ]
Because of DNG's dispersive nature as a transmission medium, it could be useful as a dispersion-compensation device for time-domain applications . The dispersion produces a variance in the group speed of the signal's wave components as they propagate in the DNG medium. Hence, stacked DNG metamaterials could be useful for modifying signal propagation along a microstrip transmission line . At the same time, dispersion leads to distortion. However, if the dispersion could be compensated for along the microstrip line, RF or microwave signals propagating along it would suffer significantly less distortion. Components for attenuating distortion would then become less critical, which could simplify many systems. Metamaterials can eliminate dispersion along the microstrip by correcting for the frequency dependence of the effective permittivity. [ 21 ]
The strategy is to design a length of metamaterial -loaded transmission line that can be introduced with the original length of microstrip line to make the paired system dispersionless creating a dispersion-compensating segment of transmission line. This could be accomplished by introducing a metamaterial with a specific localized permittivity and a specific localized magnetic permeability , which then affects the relative permittivity and permeability of the overall microstrip line. It is introduced so that the wave impedance in the metamaterial remains unchanged. The index of refraction in the medium compensates for the dispersion effects associated with the microstrip geometry itself; making the effective refractive index of the pair that of free space. [ 21 ]
Part of the design strategy is that the effective permittivity and permeability of such a metamaterial should be negative – requiring a DNG material. [ 21 ]
Combining left-handed segments with a conventional (right-handed) transmission line results in advantages over conventional designs. Left-handed transmission lines are essentially a high-pass filter with phase advance. Conversely, right-handed transmission lines are a low-pass filter with phase lag. This configuration is designated composite right/left-handed (CRLH) metamaterial. [ 22 ] [ 23 ] [ 24 ]
The conventional leaky-wave antenna has had limited commercial success because it lacks complete backfire-to-endfire frequency-scanning capability. CRLH designs allow complete backfire-to-endfire frequency scanning, including broadside.
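A minimal sketch of why a CRLH line can scan through broadside: in the simplest homogeneous approximation, its phase constant changes sign at a transition frequency, with the left-handed (phase-advance) term dominating below it and the right-handed (phase-lag) term above it. All element values below are illustrative:

```python
import numpy as np

def crlh_beta(w, LR, CR, LL, CL):
    """Per-unit-cell phase constant of a composite right/left-handed line
    in the simplest homogeneous approximation: the right-handed part
    contributes phase lag (+), the left-handed part phase advance (-)."""
    return w * np.sqrt(LR * CR) - 1.0 / (w * np.sqrt(LL * CL))

# beta < 0 (backward wave) at low frequency, beta > 0 (forward wave) at
# high frequency; beta = 0 marks broadside radiation for a leaky-wave antenna.
w = np.linspace(1e9, 1e11, 5)  # angular frequency, rad/s
print(crlh_beta(w, LR=1e-9, CR=1e-12, LL=1e-9, CL=1e-12))
```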
The metamaterial lens , found in metamaterial antenna systems, is used as an efficient coupler to external radiation, focusing radiation along or from a microstrip transmission line into transmitting and receiving components. Hence, it can be used as an input device . In addition, it can enhance the amplitude of evanescent waves , as well as correct the phase of propagating waves.
In this instance the structure uses layers of a metallic mesh of thin wires, with wires running in the three directions of space, separated by slices of foam. This material's permittivity above the plasma frequency can be positive and less than one, meaning the refractive index is just above zero. The relevant parameter is often the contrast between the permittivities rather than the overall permittivity value at the desired frequencies. This occurs because the equivalent (effective) permittivity is governed by a plasma frequency in the microwave domain. This low-optical-index material is therefore a good candidate for extremely convergent microlenses. Methods developed theoretically using dielectric photonic crystals have been applied in the microwave domain to realize a directive emitter using metallic grids. [ 2 ]
In this instance, arrayed wires in a cubic , crystal lattice structure can be analyzed as an array of aerials ( antenna array ). As a lattice structure it has a lattice constant . The lattice constant or lattice parameter refers to the constant distance between unit cells in a crystal lattice. [ 25 ]
The earlier discovery of plasmons created the view that a metal at the plasmon frequency f p behaves as a composite material. The effect of plasmons on a metal sample is to create properties such that it can behave as a dielectric, independent of the wave vector of the EM excitation (radiation) field. Furthermore, a small fraction of the plasmon energy is absorbed into the system; this damping is denoted γ. For aluminium, f p = 15 eV and γ = 0.1 eV. Perhaps the most important result of the interaction of metal and the plasma frequency is that the permittivity is negative below the plasma frequency, down to frequencies of the order of γ. [ 25 ] [ 26 ]
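The sign change of the permittivity can be checked numerically with the Drude model, using the aluminium values quoted above. This is a minimal sketch; working in electron-volt units keeps the expression dimensionless.

```python
import numpy as np

# Minimal Drude-model sketch with the aluminium values quoted in the text:
# plasma energy E_p = 15 eV, damping gamma = 0.1 eV.
# eps(E) = 1 - E_p**2 / (E**2 + i*gamma*E)
E_p, gamma = 15.0, 0.1

def drude_eps(E):
    return 1.0 - E_p**2 / (E**2 + 1j * gamma * E)

for E in [1.0, 5.0, 14.0, 15.0, 20.0]:   # photon energies in eV
    eps = drude_eps(E)
    print(f"E = {E:5.1f} eV: Re(eps) = {eps.real:+9.2f}, Im(eps) = {eps.imag:.3f}")
# Re(eps) is large and negative well below the plasma energy,
# crosses zero near E_p = 15 eV, and is positive above it.
```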
These facts ultimately allow the arrayed wire structure to behave as an effectively homogeneous medium. [ 25 ]
This metamaterial allows for control of the direction of emission of an electromagnetic radiation source located inside the material in order to collect all the energy in a small angular domain around the normal . [ 2 ] By using a slab of a metamaterial, diverging electromagnetic waves are focused into a narrow cone. Dimensions are small in comparison to the wavelength and thus the slab behaves as a homogeneous material with a low plasma frequency . [ 2 ]
A transmission line is the material medium or structure that forms all or part of a path from one place to another for directing the transmission of energy, such as electromagnetic waves or electric power transmission . Types of transmission line include wires , coaxial cables , dielectric slabs, striplines , optical fibers , electric power lines and waveguides. [ 27 ]
A microstrip is a type of transmission line that can be fabricated using printed circuit board technology and is used to convey microwave-frequency signals. It consists of a conducting strip separated from a ground plane by a dielectric layer known as the substrate . Microwave components such as antennas , couplers , filters and power dividers can be formed from a microstrip.
In a simplified schematic, the total impedance, conductance and reactance (capacitance and inductance) of the transmission medium (transmission line) can be represented by single components that give the overall values.
With transmission line media it is important to match the load impedance Z L to the characteristic impedance Z 0 as closely as possible, because it is usually desirable that the load absorbs as much power as possible.
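The standard measure of this match is the voltage reflection coefficient. A minimal sketch, assuming a purely resistive 50-ohm characteristic impedance and hypothetical load values:

```python
# Minimal sketch: reflection at a transmission-line load.  Gamma is the
# voltage reflection coefficient; |Gamma|**2 is the fraction of incident
# power reflected, and the remainder is absorbed by the load.
def reflection(Z_load, Z0=50.0):
    gamma = (Z_load - Z0) / (Z_load + Z0)
    return gamma, 1.0 - abs(gamma) ** 2   # (Gamma, fraction of power absorbed)

for ZL in [50.0, 75.0, 100.0, 25.0]:      # hypothetical load impedances (ohms)
    g, p = reflection(ZL)
    print(f"Z_L = {ZL:5.1f} ohm: Gamma = {g:+.3f}, absorbed = {p:.1%}")
```

A matched load (Z_L = Z_0) gives Gamma = 0 and full power absorption; any mismatch reflects power back toward the source.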
Because the goal is to shrink physical metamaterial inclusions (or cells) to smaller sizes, lumped LC circuits or distributed LC networks are often discussed and implemented. Lumped circuit elements are miniature elements that effectively approximate their larger component counterparts. For example, circuit capacitance and inductance can be created with split rings, which are on the scale of nanometers at optical frequencies. The distributed LC model is related to the lumped LC model; the distributed-element model is more accurate, but also more complex, than the lumped-element model.
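The lumped (effective-medium) treatment is justified only when each cell is much smaller than the operating wavelength. The sketch below checks this with the common lambda/10 rule of thumb; the threshold and the cell dimensions are illustrative assumptions, not values from the source.

```python
# Minimal sketch: check whether a unit cell is small enough to be treated
# as a lumped/effective-medium element.  The lambda/10 threshold is a
# common rule of thumb, not a value taken from the source text.
C = 299_792_458.0  # speed of light, m/s

def electrically_small(cell_size_m, freq_hz, threshold=0.1):
    wavelength = C / freq_hz
    return cell_size_m <= threshold * wavelength, wavelength

ok, lam = electrically_small(3e-3, 10e9)   # 3 mm cell at 10 GHz (hypothetical)
print(f"lambda = {lam*1e3:.1f} mm, cell qualifies as lumped: {ok}")
```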
Some noted metamaterial antennas employ negative-refractive-index transmission-line metamaterials (NRI-TLM). These include lenses that can overcome the diffraction limit, narrowband and broadband phase-shifting lines, small antennas, low-profile antennas, antenna feed networks, novel power architectures, and high-directivity couplers. Loading a planar metamaterial network of TLs with series capacitors and shunt inductors produces higher performance. This results in a large operating bandwidth while the refractive index is negative. [ 12 ] [ 28 ]
Because superlenses can overcome the diffraction limit , this allows for a more efficient coupling to external radiation and enables a broader frequency band. For example, the superlens can be applied to the TLM architecture. In conventional lenses, imaging is limited by the diffraction limit . With superlenses the details of the near field images are not lost. Growing evanescent waves are supported in the metamaterial ( n < 1), which restores the decaying evanescent waves from the source. This results in a diffraction-limited resolution of λ/6, after some small losses. This compares with λ/2, the normal diffraction limit for conventional lenses . [ 28 ]
By combining right-handed materials (RHM) with left-handed materials (LHM) in a composite (CRLH) construction, a backward-to-forward scanning capability is obtained.
Metamaterials were first used for antenna technology around 2005. This type of antenna used the established capability of SNGs to couple with external radiation . Resonant coupling allowed for a wavelength larger than the antenna. At microwave frequencies this allowed for a smaller antenna. [ 4 ] [ 28 ]
A metamaterial-loaded transmission line has significant advantages over conventional or standard delay transmission lines. It is more compact in size, it can achieve positive or negative phase shift while occupying the same short physical length, and it exhibits a linear, flatter phase response with frequency, leading to shorter group delays. It can work at lower frequencies because of its high series distributed capacitance, and it has smaller planar dimensions than its equivalent coplanar structure. [ 28 ]
In 2002, rather than using SRR-wire configuration, or other 3-D media, researchers looked at planar configurations that supported backward wave propagation, thus demonstrating negative refractive index and focusing as a consequence. [ 17 ]
It has long been known that transmission lines periodically loaded with capacitive and inductive elements in a high-pass configuration support certain types of backward waves. In addition, planar transmission lines are a natural match for 2-D wave propagation. With lumped circuit elements they retain a compact configuration while still supporting the lower RF range. With this in mind, high-pass, periodically loaded, two-dimensional LC transmission-line networks were proposed. These LC networks can be designed to support backward waves without the bulky SRR/wire structure. This was the first such proposal to veer away from bulk media for a negative refractive effect. A notable property of this type of network is that there is no reliance on resonance; instead, the ability to support backward waves defines the negative refraction. [ 17 ]
The principles behind focusing are derived from Veselago and Pendry. When a conventional, flat (planar) DPS slab, M-1, is combined with a left-handed medium, M-2, a propagating electromagnetic wave with wave vector k1 in M-1 produces a refracted wave with wave vector k2 in M-2. Since M-2 supports backward-wave propagation, k2 is refracted to the opposite side of the normal, while the Poynting vector of M-2 is anti-parallel to k2. Under such conditions, power is refracted through an effectively negative angle, which implies an effectively negative index of refraction. [ 17 ]
Electromagnetic waves from a point source located inside a conventional DPS can be focused inside an LHM using a planar interface of the two media. These conditions can be modeled by exciting a single node inside the DPS and observing the magnitude and phase of the voltages to ground at all points in the LHM. A focusing effect should manifest itself as a “spot” distribution of voltage at a predictable location in the LHM. [ 17 ]
Negative refraction and focusing can be accomplished without employing resonances or directly synthesizing the permittivity and permeability. In addition, this media can be practically fabricated by appropriately loading a host transmission line medium. Furthermore, the resulting planar topology permits LHM structures to be readily integrated with conventional planar microwave circuits and devices. [ 17 ]
When transverse electromagnetic propagation occurs in a transmission-line medium, the analogy for permittivity and permeability is ε = C and μ = L (the shunt capacitance and series inductance per unit length). This analogy was developed with positive values for these parameters. The next logical step was realizing that negative values could be achieved. In order to synthesize a left-handed medium (ε < 0 and μ < 0), the series reactance and shunt susceptance should become negative, because the material parameters are directly proportional to these circuit quantities. [ 29 ]
A transmission line with lumped circuit elements that synthesize a left-handed medium is referred to as a "dual transmission line", as opposed to a conventional transmission line. The dual transmission line structure can be implemented in practice by loading a host transmission line with lumped series capacitors (C) and shunt inductors (L). In this periodic structure, the loading is strong enough that the lumped elements dominate the propagation characteristics. [ 29 ]
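The effective material parameters of such a dual line can be sketched numerically. The closed forms below follow the widely used mapping of series impedance to permeability and shunt admittance to permittivity for a unit cell of size d; the element values are hypothetical.

```python
import numpy as np

# Minimal sketch of the effective parameters of a dual (series-C, shunt-L)
# loaded transmission line, following the mapping eps ~ C, mu ~ L.
# C0, L0 and the unit-cell size d are hypothetical values.
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
C0, L0, d = 1.0e-12, 2.5e-9, 5e-3     # F, H, m

def effective_params(omega):
    mu_eff = -1.0 / (omega**2 * C0 * d)     # negative at all frequencies
    eps_eff = -1.0 / (omega**2 * L0 * d)    # negative at all frequencies
    n = -np.sqrt((mu_eff / mu0) * (eps_eff / eps0))  # negative branch chosen
    return eps_eff, mu_eff, n

for f in [1e9, 2e9, 4e9]:
    eps, mu, n = effective_params(2 * np.pi * f)
    print(f"f = {f/1e9:.0f} GHz: eps_r = {eps/eps0:+8.2f}, "
          f"mu_r = {mu/mu0:+6.2f}, n = {n:+7.2f}")
```

In this idealized model both parameters stay negative over the whole band, which is the broadband behavior contrasted with resonant SRR media later in the text.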
Using SRRs at RF frequencies , as with wireless devices, requires the resonators to be scaled to larger dimensions. This worked against making the devices more compact. In contrast, LC network configurations could be scaled to both microwave and RF frequencies. [ 30 ]
LC-loaded transmission lines enabled a new class of metamaterials to produce a negative refractive index . Relying on LC networks to emulate electrical permittivity and magnetic permeability resulted in a substantial increase in operating bandwidths. [ 30 ]
Moreover, their unit cells are connected through a transmission-line network and may be equipped with lumped circuit elements, which permit them to be compact at frequencies where an SRR cannot be compact. The flexibility gained through the use of either discrete or printed elements enables planar metamaterials to be scalable from the megahertz to the tens of gigahertz range. In addition, replacing capacitors with varactors allowed the material properties to be dynamically tuned. The proposed media are planar and inherently support two-dimensional (2-D) wave propagation, making them well-suited for RF/microwave device and circuit applications. [ 30 ]
The periodic 2-D LC loaded transmission-line ( TL ) was shown to exhibit NRI properties over a broad frequency range. This network will be referred to as a dual TL structure since it is of a high-pass configuration, as opposed to the low-pass representation of a conventional TL structure. [ 31 ] Dual TL structures have been used to experimentally demonstrate backward-wave radiation and focusing at microwave frequencies. [ 17 ] [ 31 ]
As a negative refractive index medium, a dual TL structure is not simply a phase compensator. It can enhance the amplitude of evanescent waves, as well as correct the phase of propagating waves. Evanescent waves actually grow within the dual TL structure. [ 31 ]
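The amplitude restoration can be pictured with a one-dimensional toy model: an evanescent wave decays exponentially in free space but grows inside an idealized NRI slab, recovering its source amplitude at the slab's exit face. The decay constant and slab position below are hypothetical.

```python
import numpy as np

# Minimal sketch: amplitude of an evanescent wave along the propagation
# axis, decaying in free space but growing inside an idealized NRI (dual TL)
# slab occupying z in [z1, z2].  kappa is a hypothetical decay constant.
kappa, z1, z2 = 200.0, 0.005, 0.010    # 1/m; slab from 5 mm to 10 mm

def amplitude(z):
    if z < z1:                          # free space: exponential decay
        return np.exp(-kappa * z)
    if z <= z2:                         # inside the ideal NRI slab: growth
        return np.exp(-kappa * z1) * np.exp(kappa * (z - z1))
    # after the slab: decay resumes from the restored amplitude
    return (np.exp(-kappa * z1) * np.exp(kappa * (z2 - z1))
            * np.exp(-kappa * (z - z2)))

for z in [0.0, 0.005, 0.010, 0.015]:
    print(f"z = {z*1e3:4.1f} mm: |E| = {amplitude(z):.3f}")
# Output shows decay to e**-1 at the slab entrance and restoration
# to the original amplitude at the slab exit.
```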
Grbic et al. used a one-dimensional LC-loaded transmission-line network, which supports fast backward-wave propagation, to demonstrate characteristics analogous to "reversed Cherenkov radiation". Their proposed backward-wave radiating structure was inspired by negative-refractive-index LC materials. The simulated E-plane pattern at 15 GHz showed radiation towards the backfire direction in the far-field pattern, clearly indicating the excitation of a backward wave. Since the transverse dimension of the array is electrically short, the structure is backed by a long metallic trough. The trough acts as a waveguide below cutoff and recovers the back radiation, resulting in unidirectional far-field patterns. [ 32 ]
Planar media can be implemented with an effective negative refractive index. The underlying concept is based on appropriately loading a printed network of transmission lines periodically with inductors and capacitors. This technique results in effective permittivity and permeability material parameters that are both inherently and simultaneously negative, obviating the need to employ separate means. The proposed media possess other desirable features including very wide bandwidth over which the refractive index remains negative, the ability to guide 2-D TM waves, scalability from RF to millimeter-wave frequencies and low transmission losses, as well as the potential for tunability by inserting varactors and/or switches in the unit cell. The concept has been verified with circuit and full-wave simulations. A prototype focusing device has been tested experimentally. The experimental results demonstrated focusing of an incident cylindrical wave within an octave bandwidth and over an electrically short area; suggestive of near-field focusing. [ 33 ]
RF/microwave devices can be implemented based on these proposed media for applications in wireless communications, surveillance and radars. [ 33 ]
According to some researchers, SRR/wire-configured metamaterials are bulky 3-D constructions that are difficult to adapt for RF/microwave device and circuit applications. These structures can achieve a negative index of refraction only within a narrow bandwidth. When applied to wireless devices at RF frequencies, the split-ring resonators have to be scaled to larger dimensions, which, in turn, forces a larger device size. [ 33 ]
The proposed structures go beyond the wire/SRR composites in that they do not rely on SRRs to synthesize the material parameters, leading to dramatically increased operating bandwidths; their remaining advantages, including compact transmission-line-connected unit cells, scalability, varactor tuning and planar 2-D wave propagation, are as described above. [ 33 ]
In the long-wavelength regime, the permittivity and permeability of conventional materials can be artificially synthesized using periodic LC networks arranged in a low-pass configuration. In the dual (high-pass) configuration, these equivalent material parameters assume simultaneously negative values, and may therefore be used to synthesize a negative refractive index. [ 34 ]
Antenna theory is based on classical electromagnetic theory as described by Maxwell's equations . [ 35 ] Physically, an antenna is an arrangement of one or more conductors , usually called elements. An alternating current is created in the elements by applying a voltage at the antenna terminals, causing the elements to radiate an electromagnetic field. In reception, the reverse occurs: an electromagnetic field from another source induces an alternating current in the elements and a corresponding voltage at the antenna's terminals. Some receiving antennas (such as parabolic and horn types) incorporate shaped reflective surfaces to collect EM waves from free space and direct or focus them onto the actual conductive elements.
An antenna creates electromagnetic fields that remain sufficiently strong at large distances. Reciprocally, it is sensitive to the electromagnetic fields impressed upon it externally. The actual coupling between a transmitting and a receiving antenna is so small that amplifier circuits are required at both the transmitting and receiving stations. Antennas are usually created by modifying ordinary circuitry into transmission-line configurations. [ 35 ]
The required antenna for any given application is dependent on the bandwidth employed, and range (power) requirements. In the microwave to millimeter-wave range – wavelengths from a few meters to millimeters – the following antennas are usually employed: [ 35 ]
Dipole antennas, short antennas, parabolic and other reflector antennas, horn antennas, periscope antennas, helical antennas, spiral antennas, surface-wave and leaky wave antennas. Leaky wave antennas include dielectric and dielectric loaded antennas, and the variety of microstrip antennas. [ 35 ]
The SRR was introduced by Pendry in 1999 and is one of the most common elements of metamaterials. [ 36 ] A nonmagnetic conducting unit, the SRR is arrayed to yield an enhanced negative effective magnetic permeability when the frequency of the incident electromagnetic field is close to the SRR resonance frequency. The resonant frequency of the SRR depends on its shape and physical design, and resonance can occur at wavelengths much larger than its size. [ 37 ] [ 38 ] For further shape optimization of the elements it is expedient to use genetic and other optimization algorithms. In multi-frequency designs, fractal geometries such as those of Sierpinski, Koch or other fractals may be applied instead of SRRs. [ 11 ]
Through the application of double negative metamaterials (DNG), the power radiated by electrically small dipole antennas can be notably increased. This could be accomplished by surrounding an antenna with a shell of double negative (DNG) material. When the electric dipole is embedded in a homogeneous DNG medium, the antenna acts inductively rather than capacitively, as it would in free space without the interaction of the DNG material. In addition, the dipole-DNG shell combination increases the real power radiated by more than an order of magnitude over a free space antenna. A notable decrease in the reactance of the dipole antenna corresponds to the increase in radiated power. [ 10 ]
The reactive power indicates that the DNG shell acts as a natural matching network for the dipole: the DNG material matches the intrinsic reactance of the antenna system to free space, providing a natural matching circuit for the antenna. [ 10 ]
The addition of an SRR-DNG metamaterial increased the radiated power by more than an order of magnitude over a comparable free space antenna. Electrically small antennas, high directivity and tunable operational frequency are produced with negative magnetic permeability. When combining a right-handed material (RHM) with a Veselago-left-handed material (LHM) other novel properties are obtained. A single negative material resonator, obtained with an SRR, can produce an electrically small antenna when operating at microwave frequencies, as follows: [ 4 ]
The SRR configuration assessed consisted of two concentric annular rings with diametrically opposite gaps in the inner and outer rings. Its geometrical parameters were R = 3.6 mm, r = 2.5 mm, w = 0.2 mm and t = 0.9 mm, where R and r are the annular radii of the outer and inner rings, w is the spacing between the rings and t is the width of the outer ring. The substrate had a thickness of 1.6 mm and a permittivity of 3.85 at 4 GHz. The SRR was fabricated with an etching technique on a 30 μm thick copper layer. The SRR was excited by a monopole antenna composed of a coaxial cable, a ground plane and radiating components; the ground plane material was aluminium. The operating frequency of the antenna was 3.52 GHz, determined by considering the geometrical parameters of the SRR. An 8.32 mm length of wire, one quarter of the operating wavelength, was placed above the ground plane and connected to the antenna. The antenna worked with a feed wavelength of 3.28 mm and a feed frequency of 7.8 GHz. The SRR's resonant frequency was smaller than the monopole operating frequency. [ 4 ]
The monopole-SRR antenna operated efficiently at (λ/10) using the SRR-wire configuration. It demonstrated good coupling efficiency and sufficient radiation efficiency. Its operation was comparable to a conventional antenna at λ/2, which is a conventional antenna size for efficient coupling and radiation. Therefore, the monopole-SRR antenna becomes an acceptable electrically small antenna at the SRR's resonance frequency. [ 4 ] [ 11 ]
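The degree of miniaturization implied by these figures can be checked with simple arithmetic on the values quoted in the text (λ/10 operation at 3.52 GHz versus λ/2 for a conventional antenna):

```python
# Minimal arithmetic check of the electrical sizes quoted in the text:
# lambda/10 for the monopole-SRR antenna versus lambda/2 for a
# conventional antenna at the stated 3.52 GHz operating frequency.
C = 299_792_458.0
f = 3.52e9
lam = C / f
print(f"lambda    = {lam*1e3:.1f} mm")      # ~85.2 mm
print(f"lambda/10 = {lam/10*1e3:.1f} mm")   # ~8.5 mm, near the 8.32 mm wire
print(f"lambda/2  = {lam/2*1e3:.1f} mm")    # ~42.6 mm, conventional size
```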
When the SRR is made part of this configuration, characteristics such as the antenna's radiation pattern are entirely changed in comparison to a conventional monopole antenna. With modifications to the SRR structure the antenna size could reach ( λ/40 ). Coupling 2, 3, and 4 SRRs side by side slightly shifts radiation patterns. [ 4 ]
In 2005 a patch antenna with a metamaterial cover was proposed that enhanced directivity. According to the numerical results, the antenna showed significant improvement in directivity compared to a conventional patch antenna. This work was cited in 2007 for an efficient design of directive patch antennas in mobile communications using metamaterials. [ 11 ] This design was based on the left-handed material (LHM) transmission-line model, with the circuit elements L and C of the LHM equivalent-circuit model. The study developed formulae to determine the L and C values of the LHM equivalent-circuit model for desirable characteristics of directive patch antennas. Design examples derived from actual frequency bands in mobile communications were carried out, illustrating the efficiency of this approach. [ 39 ] [ 40 ] [ 41 ]
This configuration uses a flat aperture constructed of zero-index metamaterial, which has advantages over ordinary (conventional) curved lenses and results in much improved directivity. [ 11 ] These investigations have provided capabilities for the miniaturization of microwave source and non-source devices, circuits and antennas, and the improvement of electromagnetic performance. [ 42 ]
Metamaterials surface antenna technology (M-SAT) is an invention that uses metamaterials to direct and maintain a consistent broadband radio-frequency beam locked on to a satellite, whether the platform is in motion or stationary. Gimbals and motors are replaced by arrays of metamaterials in a planar configuration. With this new technology, phase shifters are also not required, as they are with phased-array equipment. The desired effect is accomplished by varying the pattern of activated metamaterial elements as needed. The technology is a practical application of metamaterial cloaking theory. The antenna is approximately the size of a laptop computer. [ 43 ] [ 44 ] [ 45 ]
Metamaterial-based antennas, and their related components, remain subjects of ongoing research and application. [ 46 ] [ 47 ]
When a pair of materials functioning as optical transmission media meet at an interface with opposing permittivity and/or permeability values, one ordinary (positive) and one extraordinary (negative), notable anomalous behaviors may occur. The pair would be a DNG metamaterial (layer) paired with a DPS, ENG or MNG layer. Wave-propagation behavior and properties may occur that would not happen if only DNG layers were paired together. [ 48 ]
At the interface between two media, the concept of the continuity of the tangential electric and magnetic field components can be applied. If the permittivity or permeability of the two media have opposite signs, the normal components of the fields on the two sides of the interface will be discontinuous at the boundary. This implies a concentrated resonant phenomenon at the interface, similar to the current and voltage distributions at the junction between an inductor and a capacitor at the resonance of an L-C circuit. This "interface resonance" is essentially independent of the total thickness of the paired layers, because it occurs along the discontinuity between two such conjugate materials. [ 48 ] [ 49 ]
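The L-C analogy can be made concrete: at the series-resonant frequency the inductive and capacitive reactances are equal in magnitude and opposite in sign, so they cancel, just as the oppositely signed constitutive parameters do at the interface. The element values below are hypothetical.

```python
import numpy as np

# Minimal sketch of the L-C junction analogy: at the series-resonant
# frequency the inductive and capacitive reactances cancel,
# concentrating the response at the junction.  L and C are hypothetical.
L, C = 10e-9, 1e-12               # 10 nH, 1 pF

f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))
for f in [0.5 * f0, f0, 2.0 * f0]:
    X_L = 2 * np.pi * f * L            # inductive reactance (positive)
    X_C = -1.0 / (2 * np.pi * f * C)   # capacitive reactance (negative)
    print(f"f/f0 = {f/f0:.1f}: X_L = {X_L:+7.1f}, "
          f"X_C = {X_C:+7.1f}, sum = {X_L + X_C:+7.1f} ohm")
```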
The geometry consists of two parallel plates as perfect electric conductors (PEC), an idealized structure, filled by two stacked planar slabs of homogeneous and isotropic materials with respective constitutive parameters ε 1 , μ 1 and ε 2 , μ 2 . Slab 1 has thickness d 1 and slab 2 has thickness d 2 . Choosing which combination of parameters to employ involves pairing DPS with DNG, or ENG with MNG, materials. As mentioned previously, each pair combines one set of oppositely signed constitutive parameters. [ 50 ]
Real negative values of permittivity and permeability result in a real negative refractive index n. In a lossless medium, only real values exist. This concept can be used to map out phase compensation when a conventional lossless material (DPS) is paired with a lossless NIM (DNG). [ 49 ]
In phase compensation, the DPS of thickness d 1 has ε > 0 and μ > 0, while the NIM of thickness d 2 has ε < 0 and μ < 0. Assume that the intrinsic impedance of the DPS dielectric material (d 1 ) is the same as that of the outside region, and consider a normally incident plane wave. The wave travels through the medium without any reflection because the DPS impedance and the outside impedance are equal. However, the plane wave at the end of the DPS slab is out of phase with the plane wave at the beginning of the material. [ 49 ]
The plane wave then enters the lossless NIM (d 2 ). At certain frequencies ε < 0, μ < 0 and n < 0. Like the DPS, the NIM has an intrinsic impedance equal to that of the outside region, and is therefore also reflectionless. The direction of power flow (i.e., the Poynting vector ) in the first slab should be the same as that in the second one, because the power of the incident wave enters the first slab (without any reflection at the first interface), traverses the first slab, exits the second interface, enters the second slab, traverses it, and finally leaves the second slab. However, as stated earlier, the direction of power is anti-parallel to the direction of phase velocity. Therefore, the wave vector k 2 is in the opposite direction to k 1 . Furthermore, whatever phase difference develops by traversing the first slab can be decreased and even cancelled by traversing the second slab. If the ratio of the two thicknesses is d 1 / d 2 = n 2 / n 1 , then the total phase difference between the front and back faces is zero. [ 49 ] This demonstrates how the NIM slab at chosen frequencies acts as a phase compensator. It is important to note that this phase-compensation process depends only on the ratio d 1 / d 2 , not on the total thickness d 1 + d 2 . Therefore, d 1 + d 2 can take any value, as long as the ratio satisfies the above condition. Finally, even though this two-layer structure is present, the wave traversing it experiences no net phase difference.
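A short numerical check of this condition, with hypothetical indices and thicknesses:

```python
import numpy as np

# Minimal numerical check of the phase-compensation condition
# d1/d2 = |n2|/n1 for a lossless DPS-DNG pair (hypothetical values).
n1, n2 = 2.0, -4.0
f = 10e9                              # 10 GHz
k0 = 2 * np.pi * f / 299_792_458.0    # free-space wavenumber

d1 = 8e-3                             # 8 mm DPS slab
d2 = d1 * n1 / abs(n2)                # 4 mm DNG slab satisfies the ratio

total_phase = k0 * (n1 * d1 + n2 * d2)
print(f"d1 = {d1*1e3:.1f} mm, d2 = {d2*1e3:.1f} mm, "
      f"total phase = {total_phase:+.2e} rad")   # ~0: compensated
```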
Following this, the next step is the subwavelength cavity resonator. [ 49 ]
The phase compensator described above can be used to conceptualize the design of a compact 1-D cavity resonator, by bounding the two-layer structure with two perfect reflectors, in other words two perfectly conducting plates. Conceptually, what is constrained in the resonator is the ratio d 1 / d 2 , not the sum d 1 + d 2 . Therefore, in principle, one can have a thin subwavelength cavity resonator at a given frequency if, at this frequency, the second layer acts as a metamaterial with negative permittivity and permeability and the thickness ratio takes the correct value. [ 49 ]
The cavity can conceptually be thin while still resonant, as long as the ratio of thicknesses is satisfied. This can, in principle, make subwavelength, thin, compact cavity resonators possible. [ 49 ]
Frequency selective surface (FSS) based metamaterials utilize equivalent LC circuit configurations. Using an FSS in a cavity allows for miniaturization, a decrease in the resonant frequency, a lower cut-off frequency and a smooth transition from fast-wave to slow-wave in a waveguide configuration. [ 51 ]
As an LHM application, four different cavities operating in the microwave regime were fabricated, experimentally observed and described. [ 52 ]
A magnetic dipole was placed on a metamaterial (slab) ground plane. The metamaterials have constituent parameters that are both negative, or that have negative permittivity or negative permeability alone. The dispersion and radiation properties of the leaky waves supported by these metamaterial slabs were investigated. [ 53 ]
Multiple systems have been patented.
Phased array systems and antennas for use in such systems are well known in areas such as telecommunications and radar applications. In general, phased array systems work by coherently reassembling signals over the entire array, using circuit elements to compensate for relative phase differences and time delays. [ 54 ]
Patented in 2004, one phased array antenna system is useful in automotive radar applications. By using NIMs as a biconcave lens to focus microwaves, the antenna's sidelobes are reduced in size. This equates to a reduction in radiated energy loss, and a relatively wider useful bandwidth. The system is an efficient, dynamically ranged phased array radar system. [ 54 ]
In addition, signal amplitude is increased across the microstrip transmission lines by suspending them above the ground plane at a predetermined distance. In other words, they are not in contact with a solid substrate. Dielectric signal loss is reduced significantly, reducing signal attenuation. [ 54 ]
This system was designed to boost the performance of the monolithic microwave integrated circuit (MMIC), among other benefits. A transmission line is created with photolithography. A metamaterial lens, consisting of a thin wire array, focuses the transmitted or received signals between the line and the emitter/receiver elements. [ 54 ]
The lens also functions as an input device and consists of a number of periodic unit-cells disposed along the line. The lens consists of multiple lines of the same makeup: a plurality of periodic unit-cells. The periodic unit-cells are constructed from electrical components, capacitors and inductors, as components of multiple distributed-element circuits . [ 54 ]
The metamaterial incorporates a conducting transmission element, a substrate comprising at least a first ground plane for grounding the transmission element, a plurality of unit-cell circuits composed periodically along the transmission element and at least one via for electrically connecting the transmission element to at least the first ground plane. It also includes a means for suspending this transmission element a predetermined distance from the substrate in a way such that the transmission element is located at a second predetermined distance from the ground plane. [ 54 ]
This structure was designed for use in waveguiding or scattering of waves. It employs two adjacent layers. The first layer is an epsilon-negative (ENG) material or a mu-negative (MNG) material. The second layer is either a double-positive (DPS) material or a double-negative (DNG) material. Alternatively, the second layer can be an ENG material when the first layer is an MNG material or the reverse. [ 55 ]
Metamaterials can reduce interference across multiple devices with smaller and simpler shielding. While conventional absorbers can be three inches thick, metamaterials can be in the millimeter range—2 mm (0.078 in) thick. [ 56 ]
Metamaterial cloaking is the usage of metamaterials in an invisibility cloak . This is accomplished by manipulating the paths traversed by light through a novel optical material. Metamaterials direct and control the propagation and transmission of specified parts of the light spectrum and demonstrate the potential to render an object seemingly invisible . Metamaterial cloaking, based on transformation optics , describes the process of shielding something from view by controlling electromagnetic radiation . Objects in the defined location are still present, but incident waves are guided around them without being affected by the object itself. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Electromagnetic metamaterials respond to chosen parts of radiated light, also known as the electromagnetic spectrum , in a manner that is difficult or impossible to achieve with natural materials . In other words, these metamaterials can be further defined as artificially structured composite materials , which exhibit interaction with light usually not available in nature ( electromagnetic interactions ). At the same time, metamaterials have the potential to be engineered and constructed with desirable properties that fit a specific need. That need will be determined by the particular application. [ 2 ] [ 6 ] [ 7 ]
The artificial structure for cloaking applications is a lattice design – a sequentially repeating network – of identical elements. Additionally, for microwave frequencies, these materials are analogous to crystals for optics . Also, a metamaterial is composed of a sequence of elements and spacings, which are much smaller than the selected wavelength of light . The selected wavelength could be radio frequency , microwave, or other radiations, now just beginning to reach into the visible frequencies . Macroscopic properties can be directly controlled by adjusting characteristics of the rudimentary elements , and their arrangement on, or throughout the material. Moreover, these metamaterials are a basis for building very small cloaking devices in anticipation of larger devices, adaptable to a broad spectrum of radiated light. [ 2 ] [ 6 ] [ 8 ]
Hence, although light consists of an electric field and a magnetic field , ordinary optical materials, such as optical microscope lenses, have a strong reaction only to the electric field. The corresponding magnetic interaction is essentially nil. This results in only the most common optical effects , such as ordinary refraction with common diffraction limitations in lenses and imaging . [ 2 ] [ 6 ] [ 8 ]
Since the beginning of optical sciences , centuries ago, the ability to control the light with materials has been limited to these common optical effects. Metamaterials, on the other hand, are capable of a very strong interaction, or coupling, with the magnetic component of light. Therefore, the range of response to radiated light is expanded beyond the ordinary optical limitations that are described by the sciences of physical optics and optical physics . In addition, as artificially constructed materials, both the magnetic and electric components of the radiated light can be controlled at will, in any desired fashion as it travels, or more accurately propagates , through the material. This is because a metamaterial's behavior is typically formed from individual components, and each component responds independently to a radiated spectrum of light. At this time, however, metamaterials are limited. Cloaking across a broad spectrum of frequencies has not been achieved, including the visible spectrum . Dissipation , absorption , and dispersion are also current drawbacks, but this field is still in its optimistic infancy. [ 2 ] [ 6 ] [ 8 ]
The field of transformation optics is founded on the effects produced by metamaterials. [ 1 ]
Transformation optics has its beginnings in the conclusions of two research endeavors. They were published on May 25, 2006, in the same issue of Science , a peer-reviewed journal. The two papers are tenable theories on bending or distorting light to electromagnetically conceal an object. Both papers notably map the initial configuration of the electromagnetic fields on to a Cartesian mesh. Twisting the Cartesian mesh, in essence, transforms the coordinates of the electromagnetic fields, which in turn conceal a given object. Hence, with these two papers, transformation optics is born. [ 2 ] [ 9 ] [ 10 ]
Transformation optics describes the capability to bend light , or electromagnetic waves and energy , in any preferred or desired fashion for a given application. Maxwell's equations do not vary even though coordinates transform; instead, it is the values of the chosen material parameters that "transform", or alter. Transformation optics developed from the capability to choose the parameters for a given material: since Maxwell's equations retain the same form, it is the successive values of the parameters, permittivity and permeability , that change. Permittivity and permeability are, in a sense, responses to the electric and magnetic fields of a radiated light source respectively, among other descriptions. The precise degree of electric and magnetic response can be controlled in a metamaterial, point by point. Since so much control can be maintained over the responses of the material, this leads to an enhanced and highly flexible gradient-index material. The conventionally predetermined refractive index of ordinary materials instead becomes an independent spatial gradient in a metamaterial, which can be controlled at will. Therefore, transformation optics is a new method for creating novel and unique optical devices . [ 1 ] [ 2 ] [ 7 ] [ 9 ] [ 11 ] [ 12 ]
The purpose of a cloaking device is to hide something, so that a defined region of space is invisibly isolated from passing electromagnetic fields (or sound waves ), as with Metamaterial cloaking . [ 5 ] [ 13 ]
Cloaking objects, or making them appear invisible with metamaterials , is roughly analogous to a magician's sleight of hand, or his tricks with mirrors. The object or subject doesn't really disappear; the vanishing is an illusion. With the same goal, researchers employ metamaterials to create directed blind spots by deflecting certain parts of the light spectrum (electromagnetic spectrum). It is the light spectrum, as the transmission medium , that determines what the human eye can see. [ 14 ]
In other words, light is refracted or reflected determining the view, color, or illusion that is seen. The visible extent of light is seen in a chromatic spectrum such as the rainbow . However, visible light is only part of a broad spectrum, which extends beyond the sense of sight. For example, there are other parts of the light spectrum which are in common use today. The microwave spectrum is employed by radar , cell phones , and wireless Internet . The infrared spectrum is used for thermal imaging technologies, which can detect a warm body amidst a cooler night time environment, and infrared illumination is combined with specialized digital cameras for night vision . Astronomers employ the terahertz band for submillimeter observations to answer deep cosmological questions.
Furthermore, electromagnetic energy is light energy, but only a small part of it is visible light . This energy travels in waves. Shorter wavelengths, such as visible light and infrared , carry more energy per photon than longer waves, such as microwaves and radio waves . For the sciences , the light spectrum is known as the electromagnetic spectrum . [ 14 ] [ 15 ] [ 16 ] [ 17 ]
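The claim that shorter wavelengths carry more energy per photon follows directly from E = hc/λ. A minimal arithmetic sketch for representative wavelengths (the particular wavelengths chosen are illustrative):

```python
# Minimal sketch: energy per photon E = h*c / lambda for representative
# wavelengths, illustrating that shorter wavelengths carry more energy.
H, C, EV = 6.626e-34, 299_792_458.0, 1.602e-19   # Planck, c, J per eV

for name, lam in [("visible (500 nm)", 500e-9),
                  ("infrared (10 um)", 10e-6),
                  ("microwave (1 cm)", 1e-2),
                  ("radio (1 m)", 1.0)]:
    E = H * C / lam
    print(f"{name:18s}: {E/EV:.2e} eV per photon")
```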
Prisms , mirrors , and lenses have a long history of altering the diffracted visible light that surrounds all. However, the control exhibited by these ordinary materials is limited. Moreover, the one material which is common among these three types of directors of light is conventional glass . Hence, these familiar technologies are constrained by the fundamental, physical laws of optics . With metamaterials in general, and the cloaking technology in particular, it appears these barriers disintegrate with advancements in materials and technologies never before realized in the natural physical sciences . These unique materials became notable because electromagnetic radiation can be bent, reflected, or skewed in new ways. The radiated light could even be slowed or captured before transmission. In other words, new ways to focus and project light and other radiation are being developed. Furthermore, the expanded optical powers presented in the science of cloaking objects appear to be technologically beneficial across a wide spectrum of devices already in use. This means that every device with basic functions that rely on interaction with the radiated electromagnetic spectrum could technologically advance. With these beginning steps a whole new class of optics has been established. [ 15 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ]
Interest in the properties of optics and light dates back almost 2,000 years to Ptolemy (AD 85–165). In his work entitled Optics , he writes about the properties of light , including reflection , refraction , and color . He developed a simplified equation for refraction without trigonometric functions . About 800 years later, in AD 984, Ibn Sahl discovered a law of refraction mathematically equivalent to Snell's law . He was followed by the most notable Islamic scientist, Ibn Al-Haytham (c.965–1039), who is considered to be "one of the few most outstanding figures in optics in all times". [ 22 ] He made significant advances in the science of physics in general, and optics in particular. He anticipated the universal laws of light articulated by seventeenth-century scientists by hundreds of years. [ 15 ] [ 22 ] [ 23 ] [ 24 ]
In the seventeenth century both Willebrord Snellius and Descartes were credited with discovering the law of refraction. It was Snellius who noted that Ptolemy's equation for refraction was inexact. Consequently, these laws have been passed along, unchanged for about 400 years, like the laws of gravity. [ 15 ] [ 22 ] [ 23 ] [ 24 ]
Electromagnetic radiation and matter have a symbiotic relationship. Radiation does not simply act on a material, nor is it simply acted upon by a given material. Radiation interacts with matter . Cloaking applications which employ metamaterials alter how objects interact with the electromagnetic spectrum . The guiding vision for the metamaterial cloak is a device that directs the flow of light smoothly around an object, like water flowing past a rock in a stream, without reflection , rendering the object invisible. In reality, the simple cloaking devices of the present are imperfect, and have limitations. [ 14 ] [ 15 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] One challenge up to the present date has been the inability of metamaterials, and cloaking devices, to interact at frequencies , or wavelengths , within the visible light spectrum. [ 3 ] [ 28 ] [ 29 ]
The principle of cloaking, with a cloaking device, was first proved (demonstrated) at frequencies in the microwave radiation band on October 19, 2006. This demonstration used a small cloaking device. Its height was less than one half inch (< 13 mm) and its diameter five inches (125 mm), and it successfully diverted microwaves around itself. The object to be hidden from view, a small cylinder, was placed in the center of the device. The invisibility cloak deflected microwave beams so they flowed around the cylinder inside with only minor distortion, making it appear almost as if nothing were there at all.
Such a device typically involves surrounding the object to be cloaked with a shell which affects the passage of light near it. There was reduced reflection of electromagnetic waves (microwaves), from the object. Unlike a homogeneous natural material with its material properties the same everywhere, the cloak's material properties vary from point to point, with each point designed for specific electromagnetic interactions (inhomogeneity), and are different in different directions (anisotropy). This accomplishes a gradient in the material properties. The associated report was published in the journal Science . [ 3 ] [ 18 ] [ 29 ] [ 30 ]
Although a successful demonstration, three notable limitations can be shown. First, since its effectiveness was only in the microwave spectrum, the small object is somewhat invisible only at microwave frequencies. This means invisibility had not been achieved for the human eye , which sees only within the visible spectrum . This is because the wavelengths of the visible spectrum are much shorter than microwaves. However, this was considered a first step toward a cloaking device for visible light, although more advanced nanotechnology-related techniques would be needed due to light's short wavelengths. Second, only small objects can be made to appear as the surrounding air. In the case of the 2006 proof-of-cloaking demonstration, the object hidden from view, a copper cylinder, would have to be less than five inches in diameter and less than one half inch tall. Third, cloaking can only occur over a narrow frequency band for any given demonstration. This means that a broadband cloak, which works across the electromagnetic spectrum , from radio frequencies through microwave and the visible spectrum to x-ray , is not available at this time. This is due to the dispersive nature of present-day metamaterials. The coordinate transformation ( transformation optics ) requires extraordinary material parameters that are only approachable through the use of resonant elements, which are inherently narrowband and dispersive at resonance. [ 1 ] [ 3 ] [ 4 ] [ 18 ] [ 29 ]
At the very beginning of the new millennium, metamaterials were established as an extraordinary new medium that expanded control capabilities over matter . Metamaterials are applied to cloaking applications for a few reasons. First, the parameter known as material response has a broader range. Second, the material response can be controlled at will. [ 15 ]
Third, optical components, such as lenses, respond within a certain defined range to light . As stated earlier, this range of response has been known and studied going back to Ptolemy, eighteen hundred years ago. The range of response could not be effectively exceeded, because natural materials proved incapable of doing so. In scientific studies and research, one way to communicate the range of response is the refractive index of a given optical material. Every natural material so far allows only a positive refractive index. Metamaterials, on the other hand, are an innovation able to achieve a negative refractive index, a zero refractive index, and fractional values between zero and one. Hence, metamaterials extend the material response, among other capabilities.
However, negative refraction is not the effect that creates invisibility-cloaking. It is more accurate to say that gradations of refractive index, when combined, create invisibility-cloaking. Fourth, and finally, metamaterials demonstrate the capability to deliver chosen responses at will. [ 15 ]
Before actually building the device, theoretical studies were conducted. The following is one of two studies accepted simultaneously by a scientific journal, distinguished as among the first published theoretical works for an invisibility cloak.
The exploitation of "light", the electromagnetic spectrum , is accomplished with common objects and materials which control and direct the electromagnetic fields . For example, a glass lens in a camera is used to produce an image, a metal cage may be used to screen sensitive equipment, and radio antennas are designed to transmit and receive daily FM broadcasts. Homogeneous materials, which manipulate or modulate electromagnetic radiation , such as glass lenses, are limited in the upper limit of refinements to correct for aberrations. Combinations of inhomogeneous lens materials are able to employ gradient refractive indices , but the ranges tend to be limited. [ 2 ]
Metamaterials were introduced about a decade ago, and these expand control of parts of the electromagnetic spectrum ; from microwave , to terahertz , to infrared . Theoretically, metamaterials, as a transmission medium , will eventually expand control and direction of electromagnetic fields into the visible spectrum . Hence, a design strategy was introduced in 2006, to show that a metamaterial can be engineered with arbitrarily assigned positive or negative values of permittivity and permeability , which can also be independently varied at will. Then direct control of electromagnetic fields becomes possible, which is relevant to novel and unusual lens design, as well as a component of the scientific theory for cloaking of objects from electromagnetic detection. [ 2 ]
Each component responds independently to a radiated electromagnetic wave as it travels through the material, resulting in electromagnetic inhomogeneity for each component. Each component has its own response to the external electric and magnetic fields of the radiated source . Since these components are smaller than the radiated wavelength it is understood that a macroscopic view includes an effective value for both permittivity and permeability. These materials obey the laws of physics , but behave differently from normal materials. Metamaterials are artificial materials engineered to provide properties which "may not be readily available in nature". These materials usually gain their properties from structure rather than composition, using the inclusion of small inhomogeneities to enact effective macroscopic behavior .
The structural units of metamaterials can be tailored in shape and size. Their composition, and their form or structure, can be finely adjusted. Inclusions can be designed and then placed at desired locations in order to vary the function of a given material. Because the lattice constant is smaller than the wavelength of the radiated light, the cells together behave as an effective medium. [ 6 ] [ 31 ] [ 32 ] [ 33 ]
The design strategy has at its core inhomogeneous composite metamaterials that direct, at will, the conserved quantities of electromagnetism . These quantities are the electric displacement field D , the magnetic induction field B , and the Poynting vector S . Theoretically, with regard to these conserved quantities (fields), the metamaterial exhibits a twofold capability. First, the fields can be concentrated in a given direction. Second, they can be made to avoid or surround objects, returning without perturbation to their original path. These results are consistent with Maxwell's equations and go beyond the ray approximation of geometrical optics . Accordingly, in principle, these effects can encompass all forms of electromagnetic radiation phenomena on all length scales. [ 2 ] [ 9 ] [ 34 ]
The hypothesized design strategy begins with intentionally choosing a configuration of an arbitrary number of embedded sources. These sources produce localized responses of permittivity , ε, and magnetic permeability , μ. The sources are embedded in an arbitrarily selected transmission medium with dielectric and magnetic characteristics. As an electromagnetic system the medium can then be schematically represented as a grid. [ 2 ]
The first requirement might be to move a uniform electric field through space, but in a definite direction that avoids an object or obstacle. Next, remove and embed the system in an elastic medium that can be warped, twisted, pulled or stretched as desired. The initial condition of the fields is recorded on a Cartesian mesh. As the elastic medium is distorted in one, or a combination, of the described ways, the same pulling and stretching process is recorded by the Cartesian mesh. The same set of contortions can then be recorded as a coordinate transformation : x′ = x′(x, y, z), y′ = y′(x, y, z), z′ = z′(x, y, z).
Hence, the permittivity, ε, and permeability, μ, are proportionally calibrated by a common factor; less precisely, the same occurs with the refractive index. Renormalized values of permittivity and permeability are applied in the new coordinate system; for the renormalization equations, see the cited reference. [ 2 ]
Given the above parameters of operation, the system, a metamaterial, can now be shown to be able to conceal an object of arbitrary size. Its function is to manipulate incoming rays, which are about to strike the object. These incoming rays are instead electromagnetically steered around the object by the metamaterial, which then returns them to their original trajectory. As part of the design it can be assumed that no radiation leaves the concealed volume of space, and no radiation can enter the space. As illustrated by the function of the metamaterial, any radiation attempting to penetrate is steered around the space or the object within the space, returning to the initial direction. It appears to any observer that the concealed volume of space is empty, even with an object present there. An arbitrary object may be hidden because it remains untouched by external radiation. [ 2 ]
A sphere with radius R 1 is chosen as the object to be hidden. The cloaking region is to be contained within the annulus R 1 < r < R 2 . A simple transformation that achieves the desired result can be found by taking all fields in the region r < R 2 and compressing them into the region R 1 < r < R 2 . The coordinate transformation does not alter Maxwell's equations; only the values of ε ′ and μ ′ change, becoming spatially varying.
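A minimal sketch of such a radial compression is given below. The closed form used here is the well-known transformation-optics prescription for a spherical cloak (radii compressed linearly, angles unchanged); it is offered as an illustration, not a quotation from the cited paper.

```python
# Minimal sketch of the radial compression used for a spherical cloak:
# points with r < R2 are mapped into the annulus R1 < r' < R2,
# while the angular coordinates are unchanged.
R1, R2 = 1.0, 2.0     # inner (hidden) and outer cloak radii, arbitrary units

def compress(r):
    """Map radius r in [0, R2] to r' in [R1, R2]."""
    return R1 + r * (R2 - R1) / R2

for r in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"r = {r:.1f} -> r' = {compress(r):.2f}")
# The origin maps to the inner radius R1 and the outer boundary R2
# maps to itself, leaving the hidden region r' < R1 field-free.
```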
There are issues to be dealt with to achieve invisibility cloaking. One issue, related to ray tracing , is the anisotropic effects of the material on the electromagnetic rays entering the "system". Parallel bundles of rays headed directly for the center are abruptly curved and, along with neighboring rays, are forced into tighter and tighter arcs . This is due to rapid changes in the now shifting and transforming permittivity ε ′ and permeability μ ′ . The second issue is that, while the selected metamaterials are capable of working within the parameters of the anisotropic effects and the continual shifting of ε ′ and μ ′ , the values for ε ′ and μ ′ cannot be very large or very small. The third issue is that the selected metamaterials are currently unable to achieve broad frequency-spectrum capabilities. This is because the rays must curve around the "concealed" sphere , and therefore have longer trajectories than in traversing free space , or air. However, the rays must arrive at the other side of the sphere in phase with the beginning radiated light . If this happens, the phase velocity exceeds the velocity of light in a vacuum , the speed limit of the universe. (Note, this does not violate the laws of physics.) With a required absence of frequency dispersion , the group velocity will be identical with the phase velocity . In the context of this experiment, group velocity can never exceed the velocity of light; hence the analytical parameters are effective for only one frequency . [ 2 ]
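A back-of-the-envelope estimate shows why this forces a phase velocity above c. Assuming, purely for illustration, that a detouring ray follows a semicircle of radius R while the undisturbed wave crosses the same region along the 2R diameter:

```python
import numpy as np

# Illustrative estimate (not from the source): a ray detouring along a
# semicircle of radius R travels pi*R instead of the straight-line 2R, so
# arriving in phase with the undisturbed wave requires an average phase
# index below one, i.e., a phase velocity above c.
R = 1.0
straight, detour = 2 * R, np.pi * R
n_required = straight / detour
print(f"average phase index required: {n_required:.3f}")   # ~0.637 < 1
```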
The goal then is to create no discernible difference between a concealed volume of space and the propagation of electromagnetic waves through empty space. It would appear that achieving a perfectly concealed (100%) hole, where an object could be placed and hidden from view, is not probable. The problem is the following: in order to carry images, light propagates in a continuous range of directions. The scattering data of electromagnetic waves, after bouncing off an object or hole, is unique compared to light propagating through empty space, and is therefore easily perceived. Light propagating through empty space is consistent only with empty space. This includes microwave frequencies. [ 9 ]
Although mathematical reasoning shows that perfect concealment is not probable because of the wave nature of light, this problem does not apply to electromagnetic rays, i.e., the domain of geometrical optics . Imperfections can be made arbitrarily and exponentially small for objects that are much larger than the wavelength of light. [ 9 ]
Mathematically, this implies n < 1, because the rays follow the shortest path and hence in theory create a perfect concealment. In practice, a certain amount of acceptable visibility occurs, as noted above. The range of the refractive index of the dielectric (optical material) needs to span a wide spectrum to achieve concealment, with the illusion created by wave propagation across empty space. The places where n < 1 give the shortest path for the ray around the object without phase distortion. Artificial propagation of empty space could be reached in the microwave-to- terahertz range. In stealth technology , impedance matching could result in absorption of beamed electromagnetic waves rather than reflection, hence evasion of detection by radar . These general principles also apply to sound waves , where the index n describes the ratio of the local phase velocity of the wave to the bulk value. Hence, it would be useful to protect a space from any sound-based detection, which also implies protection from sonar. Furthermore, these general principles are applicable in diverse fields such as electrostatics , fluid mechanics , classical mechanics , and quantum chaos . [ 9 ]
Mathematically, it can be shown that the wave propagation is indistinguishable from empty space where light rays propagate along straight lines. The medium performs an optical conformal mapping to empty space. [ 9 ]
The next step, then, is to actually conceal an object by controlling electromagnetic fields.
The demonstrated and theoretical ability to control electromagnetic fields has opened a new field, transformation optics . This nomenclature is derived from the coordinate transformations used to create variable pathways for the propagation of light through a material. This demonstration is based on previous theoretical prescriptions, along with the accomplishment of the prism experiment. One possible application of transformation optics and materials is electromagnetic cloaking for the purpose of rendering a volume or object undetectable to incident radiation, including radiated probing. [ 3 ] [ 35 ] [ 36 ]
This was the first demonstration of actually concealing an object with electromagnetic fields. It used the method of purposely designed spatial variation, an effect of embedding purposely designed electromagnetic sources in the metamaterial. [ 37 ]
As discussed earlier, the fields produced by the metamaterial are compressed into a shell (by coordinate transformations) surrounding the now concealed volume. Earlier this was supported only by theory; this experiment demonstrated that the effect actually occurs. Maxwell's equations are form-invariant under coordinate transformations: only the permittivity tensor and permeability tensor are affected, becoming spatially variant and directionally dependent along different axes. The researchers state:
By implementing these complex material properties, the concealed volume plus the cloak appear to have the properties of free space when viewed externally. The cloak thus neither scatters waves nor imparts a shadow in the transmitted field, either of which would enable the cloak to be detected. Other approaches to invisibility either rely on the reduction of backscatter or make use of a resonance in which the properties of the cloaked object and the cloak must be carefully matched.
...Advances in the development of [negative index metamaterials], especially with respect to gradient index lenses, have made the physical realization of the specified complex material properties feasible. We implemented a two-dimensional (2D) cloak because its fabrication and measurement requirements were simpler than those of a 3D cloak. [ 3 ]
Before the actual demonstration, the experimental limits of the transformational fields were determined computationally, in addition to simulations; both were used to determine the effectiveness of the cloak. [ 3 ]
A month prior to this demonstration, the results of an experiment to spatially map the internal and external electromagnetic fields of a negative-refractive metamaterial were published, in September 2006. [ 37 ] This was innovative because prior to this the microwave fields had been measured only externally. [ 37 ] In this September experiment the permittivity and permeability of the microstructures (instead of the external macrostructure) of the metamaterial samples were measured, as well as the scattering by the two-dimensional negative index metamaterials. [ 37 ] This gave an average effective refractive index, consistent with treating the metamaterial as homogeneous. [ 37 ]
Employing this technique for this experiment, spatial mapping of phases and amplitudes of the microwave radiations interacting with metamaterial samples was conducted. The performance of the cloak was confirmed by comparing the measured field maps to simulations. [ 3 ]
For this demonstration, the concealed object was a conducting cylinder at the inner radius of the cloak. As the largest possible object designed for this volume of space, it has the most substantial scattering properties. The conducting cylinder was effectively concealed in two dimensions. [ 3 ]
The definition of optical frequency, in the metamaterials literature, ranges from the far infrared, to the near infrared, through the visible spectrum, and includes at least a portion of the ultraviolet. To date, when the literature refers to optical frequencies, these are almost always frequencies in the infrared, which is below the visible spectrum. In 2009 a group of researchers announced cloaking at optical frequencies. In this case the cloaking frequency was centered at 1500 nm or 1.5 micrometers, in the infrared. [ 38 ] [ 39 ]
A laboratory metamaterial device applicable to ultrasound waves was demonstrated in January 2011. It can be applied to sound wavelengths corresponding to frequencies from 40 to 80 kHz.
The metamaterial acoustic cloak is designed to hide objects submerged in water. The metamaterial cloaking mechanism bends and twists sound waves by intentional design.
The cloaking mechanism consists of 16 concentric rings in a cylindrical configuration. Each ring has acoustic circuits. It is intentionally designed to guide sound waves in two dimensions.
Each ring has a different index of refraction . This causes sound waves to vary their speed from ring to ring. "The sound waves propagate around the outer ring, guided by the channels in the circuits, which bend the waves to wrap them around the outer layers of the cloak". It forms an array of cavities that slow the speed of the propagating sound waves. An experimental cylinder was submerged and then disappeared from sonar . Other objects of various shape and density were also hidden from the sonar. The acoustic cloak demonstrated effectiveness for frequencies of 40 kHz to 80 kHz. [ 40 ] [ 41 ] [ 42 ] [ 43 ]
In 2014 researchers created a 3D acoustic cloak from stacked plastic sheets dotted with repeating patterns of holes. The pyramidal geometry of the stack and the hole placement provide the effect. [ 44 ]
In 2014, scientists demonstrated good cloaking performance in murky water, showing that an object shrouded in fog can disappear completely when appropriately coated with metamaterial. This is due to the random scattering of light, such as that which occurs in clouds, fog, milk, frosted glass, etc., combined with the properties of the metamaterial coating. When light is diffused, a thin coat of metamaterial around an object can make it essentially invisible under a range of lighting conditions. [ 45 ] [ 46 ]
If a transformation to quasi- orthogonal coordinates is applied to Maxwell's equations in order to conceal a perturbation on a flat conducting plane rather than a singular point, as in the first demonstration of a transformation optics-based cloak, then an object can be hidden underneath the perturbation. [ 47 ] This is sometimes referred to as a "carpet" cloak.
As noted above, the original cloak demonstrated utilized resonant metamaterial elements to meet the effective material constraints. Utilizing a quasi-conformal transformation in this case, rather than the original non-conformal transformation, changed the required material properties. Unlike the original (singular expansion) cloak, the "carpet" cloak required less extreme material values. The quasi-conformal carpet cloak required isotropic, inhomogeneous materials which varied only in permittivity . Moreover, the permittivity was always positive. This allowed the use of non-resonant metamaterial elements to create the cloak, significantly increasing the bandwidth.
An automated process, guided by a set of algorithms , was used to construct a metamaterial consisting of thousands of elements, each with its own geometry . Developing the algorithm allowed the manufacturing process to be automated, which resulted in fabrication of the metamaterial in nine days. The previous device used in 2006 was rudimentary in comparison, and the manufacturing process required four months in order to create the device. [ 4 ] These differences are largely due to the different form of transformation: the original 2006 cloak transformed a singular point, while the ground-plane version transforms a plane, and the transformation in the carpet cloak was quasi-conformal, rather than non-conformal.
Other theories of cloaking discuss various science and research based theories for producing an electromagnetic cloak of invisibility. Theories presented employ transformation optics , event cloaking, dipolar scattering cancellation, tunneling light transmittance, sensors and active sources, and acoustic cloaking .
The research in the field of metamaterials has diffused into American government science research departments, including the US Naval Air Systems Command , US Air Force , and US Army , and many scientific institutions are involved. [ citation needed ]
Funding for research into this technology is provided by several American agencies. [ 48 ]
Through this research, it has been realized that developing a method for controlling electromagnetic fields can be applied to escape detection by radiated probing, or sonar technology, and to improve communications in the microwave range; that this method is relevant to superlens design and to the cloaking of objects within and from electromagnetic fields . [ 9 ]
On October 20, 2006, the day after Duke University achieved enveloping and "disappearing" an object in the microwave range, the story was reported by the Associated Press . [ 49 ] Media outlets covering the story included USA Today , MSNBC's Countdown With Keith Olbermann: Sight Unseen , The New York Times with Cloaking Copper, Scientists Take Step Toward Invisibility , the (London) Times with Don't Look Now—Visible Gains in the Quest for Invisibility , the Christian Science Monitor with Disappear Into Thin Air? Scientists Take Step Toward Invisibility , Australian Broadcasting, Reuters with Invisibility Cloak a Step Closer , and the (Raleigh) News & Observer with Invisibility Cloak a Step Closer . [ 49 ]
On November 6, 2006, the Duke University research and development team was selected as part of the Scientific American best 50 articles of 2006. [ 50 ]
In the month of November 2009, "research into designing and building unique 'metamaterials' has received a £4.9 million funding boost. Metamaterials can be used for invisibility 'cloaking' devices, sensitive security sensors that can detect tiny quantities of dangerous substances, and flat lenses that can be used to image tiny objects much smaller than the wavelength of light." [ 51 ]
In November 2010, scientists at the University of St Andrews in Scotland reported the creation of a flexible cloaking material they call "Metaflex", which may bring industrial applications significantly closer. [ 52 ]
In 2014, the world's first 3D acoustic cloaking device was built by Duke engineers. [ 53 ] | https://en.wikipedia.org/wiki/Metamaterial_cloaking
Metamaterials: Physics and Engineering Explorations is a book-length introduction to the fundamental research and advancements in electromagnetic composite substances known as electromagnetic metamaterials . The discussion encompasses examination of the physics of metamaterial interactions, their designs, and the perspectives of engineering regarding these materials. Also included throughout the book are potential applications, which are discussed at various points in each section of each chapter. The book encompasses a variety of theoretical , numerical , and experimental perspectives. [ 1 ] [ 2 ]
This book has been cited by a few hundred other peer-reviewed research efforts, mostly science articles. [ 3 ]
Nader Engheta received his Ph.D. in Electrical Engineering (with a minor in Physics), in 1982 from the California Institute of Technology . Currently he is a Professor of Electrical and Systems Engineering, and Professor of Bioengineering at the University of Pennsylvania . His current research activities include metamaterials , plasmonics , nano-optics , nanophotonics , bio-inspired sensing and imaging, miniaturized antennas and nanoantennas . [ 4 ] [ 5 ]
Richard W. Ziolkowski received both his M.S. and Ph.D. in physics , in 1975 and 1980, respectively from the University of Illinois at Urbana-Champaign . Currently he has a dual appointment at the University of Arizona . He is a Professor of Electrical and Computer Engineering , and a Professor of the Optical Sciences . His current research includes metamaterial physics and engineering related to low frequency and high frequency antenna systems, and includes nanoparticle lasers . [ 6 ] [ 7 ]
Through their respective research, both Engheta and Ziolkowski have contributed significantly to advancing metamaterials. Ziolkowski has been described as being at the leading edge of metamaterials research since a Defense Advanced Research Projects Agency (DARPA) workshop in November 1999.
Nader Engheta and Richard W. Ziolkowski are also the editors of this book. They have compiled the published research related to metamaterials at the end of each chapter. The content of each chapter describes the path the current research is taking in its respective domain. Included are descriptions of basic research (physics) and how it is applied (engineering). The chapters are written by contributors who are carrying out the actual research and applications, including some chapter contributions by Engheta and Ziolkowski.
Hence, the content of the book also consists of original research papers by researchers in the field, who are knowledgeable about metamaterials and who have made significant contributions to the advancement and understanding of metamaterials. [ note 1 ] These persons were invited to present their discoveries and some conclusions from researching metamaterials. Included in their findings are state-of-the-art developments in applications for antennas , waveguides , and related devices and components. [ 1 ] [ 2 ] [ 8 ] [ 9 ]
The first chapter opens with a brief overview of the history of metamaterials . Afterwards, historical treatment is interspersed throughout the book, framing the discussion of the related section or chapter.
The organizational structure of the book begins with dividing the subject, electromagnetic metamaterials, into two major classes of metamaterials. The first major class is the SNG and DNG metamaterials, and the second major class is EBG structured metamaterials . [ 1 ] [ 2 ] [ 10 ]
The organizational format places the SNG and DNG metamaterials into one class. This class is described by its common structure: the subwavelength size of the inclusions and the periodicity of the structure. The inclusions, or cells, are artificially arrayed into an ordered, repeating pattern of equal dimensions and equidistant spacing. Such structures are then conceptually described as being homogeneous and as effective media . [ 1 ] [ 2 ] [ 10 ]
EBG metamaterials, on the other hand, can be described by other periodic media concepts.
These classes are sub-divided further into their three-dimensional ( 3D volumetric ) and two-dimensional ( 2D planar or surface ) realizations. Examples of the aforementioned types of metamaterials are provided and their known and anticipated properties are described. [ 1 ] [ 2 ] [ 10 ]
In all, there are 14 chapters, along with a preface by the authors.
The book presents broad coverage of electromagnetic metamaterials. Coverage also includes theoretical , numerical , and experimental perspectives of the contributors, along with current and intended applications. The extensive peer reviewed article reference lists, at the end of each chapter, are noteworthy. [ 1 ] [ 2 ] [ 9 ] | https://en.wikipedia.org/wiki/Metamaterials:_Physics_and_Engineering_Explorations |
Metamaterials was a quarterly peer-reviewed scientific journal that was established in March 2007. It was published by Elsevier and the founding editor-in-chief was Mikhail Lapine ( Helsinki University of Technology ). [ 1 ] The journal published special issues occasionally. It covered research concerning metamaterials , such as artificial electromagnetic materials, which includes various types of composite periodic structures and frequency selective surfaces in the microwave and optical range . The journal was abstracted and indexed in Scopus . [ 2 ] It was discontinued in 2013. | https://en.wikipedia.org/wiki/Metamaterials_(journal) |
Metamaterials Handbook is a two-volume handbook on metamaterials edited by Filippo Capolino, professor of electrical engineering at the University of California . [ 1 ] [ 2 ]
The series is designed to cover all theory and application topics related to electromagnetic metamaterials. Many disciplines have combined to study and develop electromagnetic metamaterials, among them optics, physics, electromagnetic theory (including computational methods), microfabrication, microwaves, nanofabrication, nanotechnology, and nanochemistry. [ 1 ] [ 3 ] [ 4 ] [ 5 ]
Theory and Phenomena of Metamaterials is the first volume of the Metamaterials Handbook . It contains contributions from researchers (scientists) who have produced accepted results in the field of metamaterials . Most of the contributors are associated with Metamorphose VI AISBL , a non-profit European organization that focuses on artificial electromagnetic materials and metamaterials. Metamorphose provided access to the network of contributors (researchers) who work in a variety of scientific disciplines involved with metamaterials.
This book is in an article review format, covering prior work in metamaterials. It focuses on theories underpinning metamaterial research along with the properties of metamaterials. The text covers all areas of metamaterial research. [ 1 ] [ 6 ] [ 7 ] [ 8 ]
Applications of Metamaterials is the second volume of the Metamaterials Handbook . This book derives the organization of its topics from the previous volume. The theory, modeling, and basic properties of metamaterials that were explored in the first volume are now shown at work in applications. Devices based on electromagnetic metamaterials continue to expand the understanding of the principles and modeling begun in the first volume. The applications for metamaterials are shown to be wide-ranging, encompassing electronics , telecommunications , sensing, medical instrumentation, and data storage. This book also discusses the key domains where metamaterials have already been developed.
The material in this book is obtained from highly regarded sources, such as many scientific, peer reviewed , journal articles. [ 9 ] [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Metamaterials_Handbook |
Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories , which are mathematical theories about other mathematical theories. Emphasis on metamathematics (and perhaps the creation of the term itself) owes itself to David Hilbert 's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic " (Kleene 1952, p. 59). An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics.
Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century to focus on what was then called the foundational crisis of mathematics . Richard's paradox (Richard 1905), concerning certain 'definitions' of real numbers in the English language, is an example of the sort of contradictions that can easily occur if one fails to distinguish between mathematics and metamathematics. Something similar can be said of the well-known Russell's paradox (Does the set of all those sets that do not contain themselves contain itself?).
Metamathematics was intimately connected to mathematical logic , so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory , category theory , recursion theory and pure model theory .
Serious metamathematical reflection began with the work of Gottlob Frege , especially his Begriffsschrift , published in 1879.
David Hilbert was the first to invoke the term "metamathematics" with regularity (see Hilbert's program ), in the early 20th century. In his hands, it meant something akin to contemporary proof theory , in which finitary methods are used to study various axiomatized mathematical theorems (Kleene 1952, p. 55).
Other prominent figures in the field include Bertrand Russell , Thoralf Skolem , Emil Post , Alonzo Church , Alan Turing , Stephen Kleene , Willard Quine , Paul Benacerraf , Hilary Putnam , Gregory Chaitin , Alfred Tarski , Paul Cohen and Kurt Gödel .
Today, metalogic and metamathematics broadly overlap, and both have been substantially subsumed by mathematical logic in academia.
The discovery of hyperbolic geometry had important philosophical consequences for metamathematics. Before its discovery there was just one geometry and mathematics; the idea that another geometry existed was considered improbable.
When Gauss discovered hyperbolic geometry, it is said that he did not publish anything about it out of fear of the "uproar of the Boeotians ", which would ruin his status as princeps mathematicorum (Latin, "the Prince of Mathematicians"). [ 1 ] The "uproar of the Boeotians" came and went, and gave an impetus to metamathematics and great improvements in mathematical rigour , analytical philosophy and logic .
Begriffsschrift (German for, roughly, "concept-script") is a book on logic by Gottlob Frege , published in 1879, and the formal system set out in that book.
Begriffsschrift is usually translated as concept writing or concept notation ; the full title of the book identifies it as "a formula language , modeled on that of arithmetic , of pure thought ." Frege's motivation for developing his formal approach to logic resembled Leibniz 's motivation for his calculus ratiocinator (although, in his Foreword, Frege clearly denies that he achieved this aim, and also that constructing an ideal language like Leibniz's was his main aim, a task he declares to be hard and idealistic, though not impossible). Frege went on to employ his logical calculus in his research on the foundations of mathematics , carried out over the next quarter century.
Principia Mathematica , or "PM" as it is often abbreviated, was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. As such, this ambitious project is of great importance in the history of mathematics and philosophy, [ 2 ] being one of the foremost products of the belief that such an undertaking may be achievable. However, in 1931, Gödel's incompleteness theorem proved definitively that PM, and in fact any other attempt, could never achieve this goal; that is, for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them.
One of the main inspirations and motivations for PM was the earlier work of Gottlob Frege on logic, which Russell discovered allowed for the construction of paradoxical sets . PM sought to avoid this problem by ruling out the unrestricted creation of arbitrary sets. This was achieved by replacing the notion of a general set with the notion of a hierarchy of sets of different ' types ', a set of a certain type only allowed to contain sets of strictly lower types. Contemporary mathematics, however, avoids paradoxes such as Russell's in less unwieldy ways, such as the system of Zermelo–Fraenkel set theory .
Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic . The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics . The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem .
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an " effective procedure " (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers ( arithmetic ). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.
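Stated schematically, in a standard textbook formulation (not a quotation from the sources cited above): for any consistent, effectively axiomatized theory F capable of expressing arithmetic, there is a sentence G_F such that

```latex
F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F \quad\text{(first theorem)},
\qquad
F \nvdash \mathrm{Cons}(F) \quad\text{(second theorem)}
```

where Cons(F) is an arithmetized statement of F's consistency.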
The T-schema or truth schema (not to be confused with ' Convention T ') is used to give an inductive definition of truth which lies at the heart of any realisation of Alfred Tarski 's semantic theory of truth . Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett . [ 3 ]
The T-schema is often expressed in natural language , but it can be formalized in many-sorted predicate logic or modal logic ; such a formalisation is called a T-theory . T-theories form the basis of much fundamental work in philosophical logic , where they are applied in several important controversies in analytic philosophy .
As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S):
'S' is true if and only if S
Example: 'snow is white' is true if and only if snow is white.
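With a truth predicate and a naming device such as Gödel quotation, the schema admits a standard formal rendering (assumed here for illustration, not drawn from a particular author quoted above):

```latex
\mathrm{True}(\ulcorner S \urcorner) \leftrightarrow S
```

for each sentence S of the object language, where ⌜S⌝ denotes a name of S.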
The Entscheidungsproblem ( German for ' decision problem ') is a challenge posed by David Hilbert in 1928. [ 4 ] The Entscheidungsproblem asks for an algorithm that takes as input a statement of a first-order logic (possibly with a finite number of axioms beyond the usual axioms of first-order logic) and answers "Yes" or "No" according to whether the statement is universally valid , i.e., valid in every structure satisfying the axioms. By the completeness theorem of first-order logic , a statement is universally valid if and only if it can be deduced from the axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic.
In 1936, Alonzo Church and Alan Turing published independent papers [ 5 ] showing that a general solution to the Entscheidungsproblem is impossible, assuming that the intuitive notion of " effectively calculable " is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus ). This assumption is now known as the Church–Turing thesis . | https://en.wikipedia.org/wiki/Metamathematics
In biology , metamerism is the phenomenon of having a linear series of body segments fundamentally similar in structure, though not all such structures are entirely alike in any single life form because some of them perform special functions. [ 1 ] In animals, metameric segments are referred to as somites or metameres . In plants, they are referred to as metamers or, more concretely, phytomers .
In animals, zoologists define metamery as a mesodermal event resulting in serial repetition of unit subdivisions of ectoderm and mesoderm products. [ 1 ] Endoderm is not involved in metamery. Segmentation is not the same concept as metamerism: segmentation can be confined only to ectodermally derived tissue, e.g., in the Cestoda tapeworms . Metamerism is far more important biologically since it results in metameres - also called somites - that play a critical role in advanced locomotion .
One can divide metamerism into two main categories:
In addition, an animal may be classified as "pseudometameric", meaning that it has clear internal metamerism but no corresponding external metamerism - as is seen, for example, in Monoplacophora .
Humans and other chordates are conspicuous examples of organisms that have metameres intimately grouped into tagmata. In the Chordata the metameres of each tagma are fused to such an extent that few repetitive features are directly visible. Intensive investigation is necessary to discern the metamerism in the tagmata of such organisms. Examples of detectable evidence of vestigially metameric structures include branchial arches and cranial nerves .
Some schemes regard the concept of metamerism as one of the four principles of construction of the human body, common to many animals, along with general bilateral symmetry (or zygomorphism), pachymerism (or tubulation ), and stratification. [ 3 ] More recent schemes also include three other concepts: segmentation (conceived as different from metamerism), polarity and endocrinosity . [ 4 ]
A metamer is one of several segments that share in the construction of a shoot , or into which a shoot may be conceptually (at least) resolved. [ 5 ] In the metameristic model, a plant consists of a series of 'phytons' or phytomers , each consisting of an internode and its upper node with the attached leaf. As Asa Gray (1850) wrote: [ 6 ]
The branch, or simple stem itself, is manifestly an assemblage of similar parts, placed one above another in a continuous series, developed one from another in successive generations. Each one of these joints of stem, bearing its leaf at the apex, is a plant element; or as we term it a phyton,—a potential plant, having all the organs of vegetation, namely, stem, leaf, and in its downward development even a root, or its equivalent. This view of the composition of the plant, though by no means a new one, has not been duly appreciated. I deem it essential to a correct philosophical understanding of the plant.
Some plants, particularly grasses, demonstrate a rather clear metameric construction, but many others either lack discrete modules or their presence is more arguable. [ 5 ] Phyton theory has been criticized as an over-ingenious, academic conception which bears little relation to reality. [ 7 ] Eames (1961) concluded that "concepts of the shoot as consisting of a series of structural units have been obscured by the dominance of the stem- and leaf-theory. Anatomical units like these do not exist: the shoot is the basic unit." [ 8 ] Even so, others still consider comparative study along the length of the metameric organism to be a fundamental aspect of plant morphology . [ 9 ]
Metameric conceptions generally segment the vegetative axis into repeating units along its length, but constructs based on other divisions are possible. [ 5 ] The pipe model theory conceives of the plant (especially trees) as made up of unit pipes ('metamers'), each supporting a unit amount of photosynthetic tissue. [ 10 ] Vertical metamers are also suggested in some desert shrubs in which the stem is modified into isolated strips of xylem , each having continuity from root to shoot. [ 5 ] This may enable the plant to abscise a large part of its shoot system in response to drought, without damaging the remaining part.
In vascular plants , the shoot system differs fundamentally from the root system in that the former shows a metameric construction (repeated units of organs; stem, leaf, and inflorescence), while the latter does not. The plant embryo represents the first metamer of the shoot in spermatophytes or seed plants.
Plants (especially trees) are considered to have a 'modular construction,' a module being an axis in which the entire sequence of aerial differentiation is carried out from the initiation of the meristem to the onset of sexuality (e.g. flower or cone development) which completes its development. [ 5 ] These modules are considered to be developmental units, not necessarily structural. | https://en.wikipedia.org/wiki/Metamerism_(biology) |
In chemistry , metamerism is used to define the isomeric relationship between compounds that have the same polyvalent , heteroatomic functional group but differ in the main carbon chain or any of the side chains . It is, rather, an obsolete term for a type of isomerism and has not been recognised by IUPAC in its publications. [ 1 ] When the Swedish chemist Jöns Jacob Berzelius used the term in 1831, he did so to describe those substances which possess the same percentage composition but have different properties. What Berzelius implied to be called metamerism is now considered isomerism. [ 2 ]
The isomers which have been cited as examples of metamers in chemical literature consist primarily of ethers ; [ 3 ] but this could by the same reasoning be extended to thioethers , secondary as well as tertiary amines , esters , secondary as well as tertiary amides , (mixed) acid anhydrides etc.
Ketones, however, should be excluded from this class of isomeric relationship, as they are primarily examples of position isomerism: there is no heteroatom present in the functional group, so the two alkyl groups (main chain and side chain) are not disconnected from each other.
There have been disputes about metamerism being grouped with other isomerisms such as position and chain isomerism, [ 4 ] yet some authors still use it in their textbooks, mostly citing the examples of ethers and secondary amines. [ 5 ] | https://en.wikipedia.org/wiki/Metamerism_(chemistry)
Metamictisation (sometimes called metamictization or metamiction ) is a natural process resulting in the gradual and ultimately complete destruction of a mineral 's crystal structure , leaving the mineral amorphous . The affected material is therefore described as metamict .
Certain minerals occasionally contain interstitial impurities of radioactive elements, and it is the alpha radiation emitted by these impurities that is responsible for degrading a mineral's crystal structure through internal bombardment. The effects of metamictisation are extensive: other than negating any birefringence previously present, the process also lowers a mineral's refractive index , hardness , and specific gravity . The mineral's colour is also affected: metamict specimens are usually green, brown or blackish. Further, metamictisation diffuses the bands of a mineral's absorption spectrum . Curiously and inexplicably, the one attribute which metamictisation does not alter is dispersion . All metamict materials are themselves radioactive, some dangerously so.
An example of a metamict mineral is zircon . The presence of uranium and thorium atoms substituting for zirconium in the crystal structure is responsible for the radiation damage in this case. Unaffected specimens are termed high zircon while metamict specimens are termed low zircon . Other minerals known to undergo metamictisation include allanite , gadolinite , ekanite , thorite and titanite . Ekanite is almost invariably found completely metamict as thorium and uranium are part of its essential chemical composition .
Metamict minerals can have their crystallinity and properties restored through prolonged annealing .
A related phenomenon is the formation of pleochroic halos surrounding minute zircon inclusions within a crystal of biotite or other mineral. The spherical halos are produced by alpha particle radiation from the included uranium- or thorium-bearing species. Such halos can also be found surrounding monazite and other radioactive minerals.
| https://en.wikipedia.org/wiki/Metamictisation
A metamorphic reaction is a chemical reaction that takes place during the geological process of metamorphism wherein one assemblage of minerals is transformed into a second assemblage which is stable under the new temperature/pressure conditions resulting in the final stable state of the observed metamorphic rock . [ 1 ]
Examples include the production of talc under varied metamorphic conditions, such as the serpentine carbonation reaction sketched below.
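One commonly cited talc-producing reaction, supplied here for illustration because the original examples did not survive extraction, is the carbonation of serpentine to talc and magnesite:

2 Mg 3 Si 2 O 5 (OH) 4 (serpentine) + 3 CO 2 → Mg 3 Si 4 O 10 (OH) 2 (talc) + 3 MgCO 3 (magnesite) + 3 H 2 O

| https://en.wikipedia.org/wiki/Metamorphic_reaction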
Metanil Yellow ( Acid Yellow 36 ) is a dye of the azo class . In analytical chemistry, it is used as a pH indicator and it has a color change from red to yellow between pH 1.2 and 3.2. [ 1 ]
Although illegal for food use, Metanil Yellow has been used as an adulterant in turmeric and pigeon pea based food products, particularly in India. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Animal studies have suggested that Metanil Yellow is neurotoxic [ 3 ] and hepatotoxic . [ 6 ] | https://en.wikipedia.org/wiki/Metanil_Yellow |
Metaphenomics [ 1 ] studies the phenome of plants or other organisms by means of meta-analysis. Its main goal is to establish dose-response relationships of a wide range of phenotypic traits for a large set of abiotic environmental factors .
A popular way to study the effect of the environment on plants is to set up experiments where subgroups of individuals of a species of interest are exposed to different levels of one environmental factor (e.g. light, CO 2 ), while all other factors are similar. These studies have yielded a lot of insight into the way plants respond to the environment , but may be challenging to integrate by means of a classical meta-analysis . One of the reasons for that is that phenotypic traits often respond to the environment in a non-linear way. Rather than evaluating the difference between ‘low-CO 2 ’ and ‘high-CO 2 ’ grown plants, it would be better to derive dose-response curves which take into account at which CO 2 levels experiments were carried out. Metaphenomics uses a method to calculate dose-response curves from a variety of experiments, and is applicable to any phenotypic trait and many environmental variables. [ citation needed ]
Core of the method used in metaphenomics is to scale all phenotypic data for a given species or genotype across all the levels of the environmental variable of interest (say CO 2 ) to the value they have at a reference value of that environmental variable (for example, a CO 2 concentration of 400 ppm). In this way, inherent variation among species or genotypes in the trait of interest is removed, as for all experiments and species, the scaled value at 400 ppm will be 1.0. Subsequently, general dose-response curves can be derived by fitting mathematical equations to the data. [ 2 ]
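A minimal sketch of this scaling step in Python, assuming a tidy table of (species, CO 2 level, trait) observations; the function, column names, and toy numbers are illustrative, not drawn from the metaphenomics literature:

```python
import numpy as np
import pandas as pd

def scale_to_reference(df, ref_level=400.0):
    """Scale each species' trait values to the value interpolated at the
    reference CO2 level, so every species equals 1.0 at ref_level."""
    pieces = []
    for species, g in df.groupby("species"):
        g = g.sort_values("co2_ppm")
        # Interpolate the trait value at the reference CO2 level
        ref_value = np.interp(ref_level, g["co2_ppm"], g["trait"])
        pieces.append(g.assign(scaled_trait=g["trait"] / ref_value))
    return pd.concat(pieces, ignore_index=True)

# Toy data: two species measured at different CO2 levels (ppm)
df = pd.DataFrame({
    "species": ["A", "A", "A", "B", "B", "B"],
    "co2_ppm": [200, 400, 800, 300, 500, 700],
    "trait":   [5.0, 8.0, 10.0, 1.2, 1.5, 1.8],
})
scaled = scale_to_reference(df)
# A single dose-response curve (e.g., via scipy.optimize.curve_fit) can
# now be fitted to the pooled scaled_trait values across all species.
```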
The results generally are a family of curves where dose-response curves for one phenotypic trait are compared for a range of different environmental variables, [ 3 ] or where many different phenotypic traits are analysed for their response to one environmental factor. [ 4 ] [ 5 ] This provides a simple and quantitative overview of the many ways plants or other organisms respond to their environment. [ citation needed ] | https://en.wikipedia.org/wiki/Metaphenomics
Metaphocytes are myeloid -like cells considered among tissue-resident macrophages (TRMs) and are present in the skin, gill, and intestine of the zebrafish ( Danio rerio ). Originating from the ectoderm during development, metaphocytes share many similarities, in terms of cellular morphology and gene expression profile with macrophages (which are of mesodermal origin) in particular the Langerhans cells in the skin. [ 1 ] [ 2 ]
Similar to many immune cells, metaphocytes are highly motile cells found in mucosal tissues such as the skin, gills, and intestines. Interestingly, in contrast to conventional macrophages, metaphocytes do not migrate toward or respond to wound-induced inflammation, and they lack the ability to perform phagocytosis. [ 1 ] [ 2 ] [ 3 ] The main function of metaphocytes is to take up soluble antigens from the external environment and to transfer these antigens to Langerhans cells (the TRM of the skin), most probably to regulate the immune response. [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Metaphocyte
Metaplasia (from Greek ' change in form ' ) is the transformation of a cell type to another cell type. [ 1 ] The change from one type of cell to another may be part of a normal maturation process, or caused by some sort of abnormal stimulus. In simplistic terms, it is as if the original cells are not robust enough to withstand their environment, so they transform into another cell type better suited to their environment. If the stimulus causing metaplasia is removed or ceases, tissues return to their normal pattern of differentiation . Metaplasia is not synonymous with dysplasia , and is not considered to be an actual cancer . [ 2 ] It is also contrasted with heteroplasia, which is the spontaneous abnormal growth of cytologic and histologic elements. Today, metaplastic changes are usually considered to be an early phase of carcinogenesis , specifically for those with a history of cancers or who are known to be susceptible to carcinogenic changes. Metaplastic change is thus often viewed as a premalignant condition that requires immediate intervention, either surgical or medical, lest it lead to cancer via malignant transformation .
When cells are faced with physiological or pathological stresses, they respond by adapting in any of several ways, one of which is metaplasia. It is a benign (i.e. non-cancerous) change that occurs as a response to a change of milieu (physiological metaplasia) or to chronic physical or chemical irritation. One example of pathological irritation is cigarette smoke, which causes the mucus-secreting ciliated pseudostratified columnar respiratory epithelial cells that line the airways to be replaced by stratified squamous epithelium; another is a stone in the bile duct, which causes the replacement of the secretory columnar epithelium with stratified squamous epithelium ( squamous metaplasia ). Metaplasia is an adaptation that replaces one type of epithelium with another that is more likely to be able to withstand the stresses it is faced with. It is also accompanied by a loss of the original epithelial function, and is in some instances considered undesirable; this undesirability is underscored by the propensity for metaplastic regions to eventually turn cancerous if the irritant is not eliminated.
The cell of origin for many types of metaplasias are controversial or unknown. For example, there is evidence supporting several different hypotheses of origin in Barrett's esophagus . They include direct transdifferentiation of squamous cells to columnar cells, the stem cell changing from esophageal type to intestinal type, migration of gastric cardiac cells, and a population of resident embryonic cells present through adulthood.
Normal physiological metaplasia, such as that of the endocervix , is highly desirable.
The medical significance of metaplasia is that in some sites where pathological irritation is present, cells may progress from metaplasia, to develop dysplasia, and then malignant neoplasia (cancer). Thus, at sites where abnormal metaplasia is detected, efforts are made to remove the causative irritant, thereby decreasing the risk of progression to malignancy . The metaplastic area must be carefully monitored to ensure that dysplastic change does not begin to occur. A progression to significant dysplasia indicates that the area could need removal to prevent the development of cancer.
Barrett's esophagus is an abnormal change in the cells of the lower esophagus, thought to be caused by damage from chronic stomach acid exposure.
Common tissues susceptible to metaplasia, and the stimuli that cause the change, include the airway epithelium (cigarette smoke), the bile duct epithelium (gallstones), and the lower esophagus (chronic stomach acid exposure), as described above.
Intestinal metaplasia is a premalignant condition that increases the risk for subsequent gastric cancer . [ 4 ] Intestinal metaplasia lesions with an active DNA damage response will likely undergo extended latency in the premalignant state until further damaging hits override the DNA damage response leading to clonal expansion and progression. [ 4 ] The DNA damage response includes expression of proteins that detect DNA damages and activate downstream responses like DNA repair , cell cycle checkpoints or apoptosis . [ 4 ] | https://en.wikipedia.org/wiki/Metaplasia |
Metaproteomics (also Community Proteomics , Environmental Proteomics , or Community Proteogenomics ) is an umbrella term for experimental approaches to study all proteins in microbial communities and microbiomes from environmental sources. Metaproteomics is used to classify experiments that deal with all proteins identified and quantified from complex microbial communities. Metaproteomics approaches are comparable to gene-centric environmental genomics , or metagenomics . [ 1 ] [ 2 ]
The term "metaproteomics" was proposed by Francisco Rodríguez-Valera to describe the genes and/or proteins most abundantly expressed in environmental samples. [ 3 ] The term was derived from "metagenome". Wilmes and Bond proposed the term "metaproteomics" for the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time. [ 4 ] At the same time, the terms "microbial community proteomics" and "microbial community proteogenomics" are sometimes used interchangeably for different types of experiments and results.
Metaproteomics allows scientists to better understand organisms' gene functions, as genes in DNA are transcribed to mRNA, which is then translated to protein. Gene expression changes can therefore be monitored through this method. Furthermore, proteins represent cellular activity and structure, so using metaproteomics in research can yield functional information at the molecular level. Metaproteomics can also be used as a tool to assess the composition of a microbial community in terms of the biomass contributions of individual member species, and can thus complement approaches that assess community composition based on gene copy counts, such as 16S rRNA gene amplicon or metagenome sequencing. [ 5 ]
The first proteomics experiment was conducted with the invention of two-dimensional polyacrylamide gel electrophoresis (2D-PAGE). [ 6 ] [ 7 ] The 1980s and 1990s saw the development of mass spectrometry and mass spectrometry based proteomics. The current proteomics of microbial community makes use of both gel-based (one-dimensional and two-dimensional) and non-gel liquid chromatography based separation, where both rely on mass spectrometry based peptide identification.
While proteomics is largely a discovery-based approach that is followed by other molecular or analytical techniques to provide a full picture of the subject system, it is not limited to simple cataloging of proteins present in a sample. With the combined capabilities of "top-down" and "bottom-up" approaches, proteomics can pursue inquiries ranging from quantitation of gene expression between growth conditions (whether nutritional, spatial, temporal, or chemical) to protein structural information . [ 1 ]
A metaproteomics study of the human oral microbiome found 50 bacterial genera using shotgun proteomics . The results agreed with the Human Microbiome Project, a metagenomic based approach. [ 8 ]
Similarly, metaproteomics approaches have been used in larger clinical studies linking the bacterial proteome with human health. A recent paper used shotgun proteomics to characterize the vaginal microbiome, identifying 188 unique bacterial species in 688 women profiled. [ 9 ] This study linked vaginal microbiome groups to the efficacy of topical antiretroviral drugs to prevent HIV acquisition in women, which was attributed to bacterial metabolism of the drug in vivo. In addition, metaproteomic approaches have been used to study other aspects of the vaginal microbiome, including the immunological and inflammatory consequences of vaginal microbial dysbiosis, [ 10 ] as well as the influence of hormonal contraceptives on the vaginal microbiome. [ 11 ]
Aside from the oral and vaginal microbiomes , several intestinal microbiome studies have used metaproteomic approaches. A 2020 study done by Long et al. has shown, using metaproteomic approaches, that colorectal cancer pathogenesis may be due to changes in the intestinal microbiome . Several proteins examined in this study were associated with iron intake and transport as well as oxidative stress , as high intestinal iron content and oxidative stress are indicative of colorectal cancer. [ 12 ]
Another study done in 2017 by Xiong et al. used metaproteomics along with metagenomics in analyzing gut microbiome changes during human development . Xiong et al. found that the infant gut microbiome may be initially populated with facultative anaerobes such as Enterococcus and Klebsiella , and then later populated by obligate anaerobes like Clostridium , Bifidobacterium , and Bacteroides . While the human gut microbiome shifted over time, microbial metabolic functions remained consistent, including carbohydrate , amino acid and nucleotide metabolism . [ 13 ]
A similar study done in 2017 by Maier et al. combined metaproteomics with metagenomics and metabolomics to show the effects of resistant starch on the human intestinal microbiome. After subjects consumed diets high in resistant starch, it was discovered that several microbial proteins were altered, such as butyrate kinase , enoyl coenzyme A ( enoyl-CoA ) hydratase, phosphotransacetylase , adenylosuccinate synthase , adenine phosphoribosyltransferases , and guanine phosphoribosyltransferases . The human subjects experienced increases in colipase , pancreatic triglyceride lipase , and bile salt-stimulated lipase abundance, while also experiencing a decrease in α-amylase . [ 14 ] Metaproteomics has also been used to understand the human-microbiome interactions that may underlie cardiovascular health. Using machine learning, a 2025 study by Yang et al. showed that human and microbial proteins could identify those at high risk of cardiovascular disease in healthy and heart failure cohorts. [ 15 ] These proteins were primarily associated with intestinal inflammation and the production of short-chain fatty acids .
Overall, metaproteomics has gained immense popularity in human intestinal microbiome studies as it has led to important discoveries in the health field. [ citation needed ]
Metaproteomics has been especially useful in the identification of microbes involved in various biodegradation processes. A 2017 study done by Jia et al. has shown the application of metaproteomics in examining protein expression profiles of biofuel-producing microorganisms. According to this study, bacterial and archaeal proteins are involved in producing hydrogen and methane-derived biofuels. Bacterial proteins involved are ferredoxin-NADP reductase, acetate kinase, and NADH-quinone oxidoreductase found in the Firmicutes, Proteobacteria, Actinobacteria and Bacteroidetes taxa. These particular proteins are involved in carbohydrate, lipid, and amino acid metabolism. The archaeal proteins involved are acetyl-CoA decarboxylase and methyl-coenzyme M reductase found in Methanosarcina . These proteins participate in biochemical pathways involving acetic acid utilization, CO 2 reduction, and methyl nutrient usage. [ 16 ]
The first quantification method for metaproteomics was reported by Laloo et al. 2018 on an engineered biological reactor enriched for ammonia- and nitrite-oxidising bacteria. [ 17 ] Here the authors used a robust SWATH-MS quantification method (protein requirement 5 μg) to study changes in protein expression levels under a perturbed condition. The study noted that changes in the protein expression of the dominant species, the ammonia-oxidising bacteria, were clearly observed, but this was not so for the nitrite-oxidising bacteria, which were found in low abundance.
A 2019 study by Li et al. has demonstrated the use of metaproteomics in observing protein expression of polycyclic aromatic hydrocarbon (PAH) degradation genes. The authors of this study specifically focused on identifying the degradable microbial communities in activated sludge during wastewater treatment, as PAHs are highly prevalent wastewater pollutants. They showed that Burkholderiales bacteria are heavily involved in PAH degradation, and that the bacterial proteins are involved in DNA replication, fatty acid and glucose metabolism, stress response, protein synthesis, and aromatic hydrocarbon metabolism. [ 18 ]
A similar study done in 2020 by Zhang et al. involved metaproteomic profiling of azo dye-degrading microorganisms. As azo dyes are hazardous industrial pollutants, metaproteomics was used to observe the overall biodegradation mechanism. Pseudomonas , Burkholderia , Enterobacter , Lactococcus and Clostridium strains were identified using metagenomic shotgun sequencing, and many bacterial proteins were found to show degradative activity. These proteins, identified using metaproteomics, include those involved in the TCA cycle, glycolysis, and aldehyde dehydrogenation. Identification of these proteins therefore led the scientists to propose potential azo dye degradation pathways in Pseudomonas and Burkholderia . [ 19 ]
All in all, metaproteomics is applicable not only to human health studies, but also to environmental studies involving potentially harmful contaminants. | https://en.wikipedia.org/wiki/Metaproteomics |
Metascape is a free gene annotation and analysis resource that helps biologists make sense of one or multiple gene lists. Metascape provides automated meta-analysis tools to understand either common or unique pathways and protein networks within a group of orthogonal target-discovery studies.
In the "OMICs" age, it is important to gain biological insights into a list of genes. Although a number of bioinformatics sources exist for this purpose, such as DAVID , they are not all free, easy to use, and well maintained. To analyze multiple lists of genes originated from orthogonal but complementary "OMICs" studies, tools often require computational skills that are beyond the reach of many biologists. According to the Metascape blog, [ 1 ] a team of scientists self-organized to address this challenge. The team includes core members Yingyao Zhou, Bin Zhou, Lars Pache, Max Chang, Christopher Benner, and Sumit Chanda , as well as other contributors over the time. Metascape was first released as a beta version on Oct 8, 2015. The first Metascape application was published on Dec 9, 2015. [ 2 ] Metascape has gone through multiple releases since then. It currently supports key model organisms, pathway enrichment analysis, protein-protein interaction network and component analysis, automatic presentation of the results as publication-ready web report, Excel and PowerPoint presentations.
The paper titled "Metascape provides a biologist-oriented resource for the analysis of systems-level datasets" was published on Apr 3, 2019 in Nature Communications. [ 3 ]
Metascape implements a CAME analysis workflow.
Metascape integrates over 40 bioinformatics knowledgebases into a seamless user interface, where experimental biologists can use a single-click Express Analysis feature to turn multiple gene lists into interpretable results.
All analysis results are presented in a web report, which contains Excel annotation and enrichment sheets, PowerPoint slides, and custom analysis files (e.g., .cys file by Cytoscape , .svg by Circos ) for further offline analysis or processing.
One notable strength of Metascape is its visualization capability. Metascape had aided in the interpretation of 2,600 published studies as of December 2021, [ 4 ] and about two-thirds of those publications made use of graphs or sheets prepared by Metascape.
Metascape for Bioinformaticians (MSBio) was released in 2021 to meet the growing needs of computational biologists to automate Metascape batch analyses for large-scale gene lists. [ 5 ] MSBio leverages the power of container technology to encapsulate the computational platform in Docker containers. Academic users can conduct offline analyses, limited only by the hardware they have access to. Commercial users have the capability of adding proprietary knowledgebases and conducting secure computations using internal computational assets. MSBio databases are updated in synchronization with the Metascape website. | https://en.wikipedia.org/wiki/Metascape
Metasilicic acid is a hypothetical chemical compound with formula (HO) 2 SiO . [ 1 ] The free acid slowly polymerises in aqueous solution even at low concentrations and cannot be isolated under normal conditions. Compounds including the conjugate base are known as metasilicates and occur widely in nature as inosilicates . | https://en.wikipedia.org/wiki/Metasilicic_acid |
In electronics , metastability is the ability of a digital electronic system to persist for an unbounded time in an unstable equilibrium or metastable state. [ 1 ] In digital logic circuits, a digital signal is required to be within certain voltage or current limits to represent a '0' or '1' logic level for correct circuit operation; if the signal is within a forbidden intermediate range it may cause faulty behavior in logic gates the signal is applied to. In metastable states, the circuit may be unable to settle into a stable '0' or '1' logic level within the time required for proper circuit operation. As a result, the circuit can act in unpredictable ways, and may lead to a system failure, sometimes referred to as a "glitch". [ 2 ] Metastability is an instance of the Buridan's ass paradox.
Metastable states are inherent features of asynchronous digital systems , and of systems with more than one independent clock domain. In self-timed asynchronous systems, arbiters are designed to allow the system to proceed only after the metastability has resolved, so the metastability is a normal condition, not an error condition. [ 3 ] In synchronous systems with asynchronous inputs, synchronizers are designed to make the probability of a synchronization failure acceptably small. [ 4 ] Metastable states are avoidable in fully synchronous systems when the input setup and hold time requirements on flip-flops are satisfied.
A simple example of metastability can be found in an SR NOR latch , when both Set and Reset inputs are true (R=1 and S=1) and then both transition to false (R=0 and S=0) at about the same time. Both outputs Q and Q̄ are initially held at 0 by the simultaneous Set and Reset inputs. After both Set and Reset inputs change to false, the flip-flop will (eventually) end up in one of two stable states, with one of Q and Q̄ true and the other false. The final state will depend on which of R or S returns to zero first, chronologically, but if both transition at about the same time, the resulting metastability, with intermediate or oscillatory output levels, can take arbitrarily long to resolve to a stable state.
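This oscillatory failure mode can be illustrated with a toy gate-level simulation; the Python sketch below assumes an idealized unit gate delay, whereas real metastability is an analog phenomenon involving intermediate voltage levels.

    # Toy model of a cross-coupled NOR latch with a unit gate delay.
    # Real metastability is analog; this discrete model only illustrates
    # the oscillatory failure mode described above.
    def nor(a, b):
        return 0 if (a or b) else 1

    def simulate(r, s, q, q_bar, steps=6):
        history = []
        for _ in range(steps):
            # Both gates evaluate "simultaneously" from the previous outputs.
            q, q_bar = nor(r, q_bar), nor(s, q)
            history.append((q, q_bar))
        return history

    # Hold R = S = 1: both outputs are forced to 0.
    q, q_bar = simulate(1, 1, 0, 0, steps=1)[-1]

    # Release R and S to 0 at exactly the same instant: the latch bounces
    # between (0,0) and (1,1) instead of settling into a stable state.
    print(simulate(0, 0, q, q_bar))   # [(1, 1), (0, 0), (1, 1), ...]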
In electronics, an arbiter is a circuit designed to determine which of several signals arrives first. Arbiters are used in asynchronous circuits to order computational activities for shared resources, preventing incorrect concurrent operations. Arbiters are used on the inputs of fully synchronous systems, and also between clock domains, as synchronizers for input signals. Although they can reduce the occurrence of metastability to very low probabilities, all arbiters nevertheless have metastable states, which are unavoidable at the boundaries of regions of the input state space that result in different outputs. [ 5 ]
Synchronous circuit design techniques make digital circuits that are resistant to the failure modes that can be caused by metastability. A clock domain is defined as a group of flip-flops with a common clock. Such architectures can form a circuit guaranteed free of metastability (below a certain maximum clock frequency, above which first metastability, then outright failure occur), assuming a low- skew common clock. However, even then, if the system has a dependence on any continuous inputs then these are likely to be vulnerable to metastable states. [ 6 ]
Synchronizer circuits are used to reduce the likelihood of metastability when receiving an asynchronous input or when transferring signals between different clock domains. Synchronizers may take the form of a cascade of D flip-flops (in effect a short shift register). [ 7 ] Although each flip-flop stage adds an additional clock cycle of latency to the input data stream, each stage provides an opportunity to resolve metastability. Such synchronizers can be engineered to reduce metastability to a tolerable rate.
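The benefit of the extra stages is usually estimated with a first-order exponential model of metastability resolution; the sketch below uses the common textbook form MTBF = e^(t_r/τ) / (t_w × f_clk × f_data), with made-up device parameters rather than values from this article.

    import math

    # First-order synchronizer MTBF model (standard approximation):
    #   t_r   - resolution time allowed before the next stage samples (s)
    #   tau   - metastability resolution time constant of the flip-flop (s)
    #   t_w   - metastability vulnerability window of the flip-flop (s)
    #   f_clk - sampling clock frequency (Hz)
    #   f_dat - asynchronous input event rate (Hz)
    def mtbf(t_r, tau, t_w, f_clk, f_dat):
        return math.exp(t_r / tau) / (t_w * f_clk * f_dat)

    tau, t_w = 50e-12, 100e-12          # illustrative device parameters
    f_clk, f_dat = 100e6, 1e6
    period = 1 / f_clk

    # Each added flip-flop grants roughly one more clock period for the
    # metastability to resolve, improving MTBF exponentially.
    for stages in (1, 2, 3):
        t_r = stages * period - 1e-9    # reserve ~1 ns for setup time
        print(stages, mtbf(t_r, tau, t_w, f_clk, f_dat))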
Schmitt triggers can also be used to reduce the likelihood of metastability, but as the researcher Chaney demonstrated in 1979, even Schmitt triggers may become metastable. He further argued that it is not possible to entirely remove the possibility of metastability from unsynchronized inputs within finite time and that "there is a great deal of theoretical and experimental evidence that a region of anomalous behavior exists for every device that has two stable states." In the face of this inevitability, hardware can only reduce the probability of metastability, and systems can try to gracefully handle the occasional metastable event. [ 8 ]
Although metastability is well understood and architectural techniques to control it are known, it persists as a failure mode in equipment.
Serious computer and digital hardware bugs caused by metastability have a fascinating social history. Many engineers have refused to believe that a bistable device can enter a state that is neither true nor false, and that, for any given period of time, there is a positive probability that it will remain unresolved, albeit one that decreases exponentially over time. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] However, metastability is an inevitable result of any attempt to map a continuous domain to a discrete one. At the boundaries in the continuous domain between regions which map to different discrete outputs, points arbitrarily close together in the continuous domain map to different outputs, making the decision as to which output to select a difficult and potentially lengthy process. [ 14 ] If the inputs to an arbiter or flip-flop arrive almost simultaneously, the circuit most likely will traverse a point of metastability. Metastability remains poorly understood in some circles, and various engineers have proposed their own circuits said to solve or filter out the metastability; typically these circuits simply shift the occurrence of metastability from one place to another. [ 15 ] Chips using multiple clock sources are often tested with tester clocks that have fixed phase relationships, not the independent clocks drifting past each other that will be experienced during operation. This usually explicitly prevents the metastable failure mode that will occur in the field from being seen or reported. Proper testing for metastability frequently employs clocks of slightly different frequencies while verifying correct circuit operation. | https://en.wikipedia.org/wiki/Metastability_(electronics) |
In statistical mechanics , the metastate is a probability measure on the space of all thermodynamic states for a system with quenched randomness. The term metastate, in this context, was first used by Charles M. Newman and Daniel L. Stein in 1996. [ 1 ]
Two different versions have been proposed:
1) The Aizenman -Wehr construction, a canonical ensemble approach, constructs the metastate through an ensemble of states obtained by varying the random parameters in the Hamiltonian outside of the volume being considered. [ 2 ]
2) The Newman - Stein metastate, a microcanonical ensemble approach, constructs an empirical average from a deterministic (i.e., chosen independently of the randomness) subsequence of finite-volume Gibbs distributions . [ 1 ] [ 3 ] [ 4 ]
It was proved [ 4 ] for Euclidean lattices that there always exists a deterministic subsequence along which the Newman-Stein and Aizenman-Wehr constructions result in the same metastate. The metastate is especially useful in systems where deterministic sequences of volumes fail to converge to a thermodynamic state , and/or there are many competing observable thermodynamic states.
As an alternative usage, "metastate" can refer to thermodynamic states , where the system is in a metastable state (for example superheated or undercooled liquids, when the actual temperature of the liquid is above or below the boiling or freezing temperature, but the material is still in a liquid state). [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Metastate |
A metasyntactic variable is a specific word or set of words identified as a placeholder in computer science and specifically computer programming . These words are commonly found in source code and are intended to be modified or substituted before real-world usage. For example, foo and bar are used in over 330 Internet Engineering Task Force Requests for Comments , the documents which define foundational internet technologies like HTTP (web), TCP/IP , and email protocols . [ 1 ] [ 2 ]
By mathematical analogy , a metasyntactic variable is a word that is a variable for other words, just as in algebra letters are used as variables for numbers . [ 1 ] [ failed verification ]
Metasyntactic variables are used to name entities such as variables, functions , and commands whose exact identity is unimportant and which serve only to demonstrate a concept; this makes them useful for teaching programming.
Since English is the foundation language or lingua franca of most computer programming languages, variables that originate in English are commonly seen even in programs and examples of programs written for other spoken-language audiences.
The variables used in a particular context may depend on subcultures that develop around programming languages .
Metasyntactic variables used commonly across all programming languages include foobar , foo , bar , baz , qux , quux , corge , grault , garply , waldo , fred , plugh , xyzzy , and thud . [ 1 ] [ 3 ] Two of these words, plugh and xyzzy , are taken from the game Colossal Cave Adventure . [ 4 ]
A fuller reference can be found in The Hacker's Dictionary from MIT Press .
In Japanese, the words hoge (ほげ) [ 5 ] and fuga (ふが) are commonly used, with other common words and variants being piyo (ぴよ), hogera (ほげら), and hogehoge (ほげほげ). [ 6 ] [ circular reference ] The origin of hoge as a metasyntactic variable is not known, but it is believed to date to the early 1980s. [ 6 ]
In France, the word toto is widely used, with the variants tata , titi , and tutu as related placeholders. One commonly cited source for the use of toto is a reference to Tête à Toto , a stock character used in French jokes. [ citation needed ]
In Turkey, the words hede and hödö (usually spelt hodo due to the ASCII -only naming constraints of programming languages) are well-known metasyntactic variables that stem from popular humorous cartoon magazines of the 1990s such as LeMan. The words do not mean anything, and are used for precisely that reason. The terms were popularized more widely by the actor and stand-up comedian Cem Yılmaz in the late 1990s and early 2000s. [ 7 ]
In Italian software programming culture, it is common to encounter the names of Walt Disney characters (as found in the Italian versions of the shows) used as variables. These names often appear in pseudo-code, are referenced in software engineering classes, and are commonly employed when explaining algorithms to colleagues. Among the most frequently used are "pippo" (Goofy), "pluto", and "paperino" (Donald Duck). [ 8 ]
In the following example the function name foo and the variable name bar are both metasyntactic variables. Lines beginning with // are comments.
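A minimal fragment in this spirit (any such snippet would serve equally well):

    // foo and bar are placeholders marking where meaningful names would go.
    int foo(int bar) {
        return bar + 1;   // the body is equally arbitrary
    }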
Function prototypes with examples of different argument passing mechanisms: [ 9 ]
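A representative sketch of such prototypes:

    void foo(int bar);         // pass by value
    void foo(int* bar);        // pass by pointer
    void foo(int& bar);        // pass by reference
    void foo(const int& bar);  // pass by constant reference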
Example showing the function overloading capabilities of the C++ language
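A brief reconstruction of the kind of example intended here:

    #include <iostream>

    // Overloaded functions share the name foo but differ in parameter type.
    void foo(int bar)    { std::cout << "int: "    << bar << '\n'; }
    void foo(double bar) { std::cout << "double: " << bar << '\n'; }

    int main() {
        foo(42);    // selects foo(int)
        foo(3.14);  // selects foo(double)
    }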
Spam , ham , and eggs are the principal metasyntactic variables used in the Python programming language . [ 10 ] This is a reference to the famous comedy sketch, " Spam ", by Monty Python , the eponym of the language. [ 11 ] In the following example spam , ham , and eggs are metasyntactic variables and lines beginning with # are comments.
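A minimal sketch in that spirit:

    # spam, ham, and eggs are placeholders, just like foo and bar.
    def spam(ham, eggs=2):
        # Any body would do; the names are the point.
        return ham * eggs

    print(spam("spam ", eggs=3))   # spam spam spam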
Both the IETF RFCs and computer programming languages are rendered in plain text , making it necessary to distinguish metasyntactic variables by a naming convention, since it would not be obvious from context.
Here is an example from the official IETF document explaining the e-mail protocols (from RFC 772 - cited in RFC 3092):
(The documentation for texinfo emphasizes the distinction between metavariables and mere variables used in a programming language being documented in some texinfo file as: "Use the @var command to indicate metasyntactic variables. A metasyntactic variable is something that stands for another piece of text. For example, you should use a metasyntactic variable in the documentation of a function to describe the arguments that are passed to that function. Do not use @var for the names of particular variables in programming languages. These are specific names from a program, so @code is correct for them." [ 12 ] )
Another point reflected in the above example is the convention that a metavariable is to be uniformly substituted with the same instance in all its appearances in a given schema. This is in contrast with nonterminal symbols in formal grammars where the nonterminals on the right of a production can be substituted by different instances. [ 13 ]
It is common to use the name ACME in example SQL databases and as a placeholder company name for teaching purposes. The term 'ACME database' is commonly used to mean an example-only set of database data used solely for training or testing.
ACME is also commonly used in documentation which shows SQL usage examples, a common practice in many educational texts as well as technical documentation from companies such as Microsoft and Oracle . [ 14 ] [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Metasyntactic_variable |
A metasystem transition is the emergence , through evolution , of a higher level of organization or control .
A metasystem is formed by the integration of a number of initially independent components, such as molecules (as theorized for instance by hypercycles ), cells , or individuals, and the emergence of a system steering or controlling their interactions. As such, the collective of components becomes a new, goal-directed individual , capable of acting in a coordinated way. This metasystem is more complex , more intelligent , and more flexible in its actions than the initial component systems. Prime examples are the origin of life , the transition from unicellular to multicellular organisms, the emergence of eusociality or symbolic thought .
The concept of metasystem transition was introduced by the cybernetician Valentin Turchin in his 1970 book The Phenomenon of Science , and developed among others by Francis Heylighen in the Principia Cybernetica Project. Another related idea, that systems ("operators") evolve to become more complex by successive closures encapsulating components in a larger whole, is proposed in " the operator theory ", developed by Gerard Jagers op Akkerhuis.
Turchin has applied the concept of metasystem transition in the domain of computing, via the notion of metacompilation or supercompilation. A supercompiler is a compiler program that compiles its own code, thus increasing its own efficiency, producing a remarkable speedup in its execution. [ citation needed ]
The following is the classical sequence of metasystem transitions in the history of animal evolution according to Turchin, from the origin of animate life to sapient culture: control of position = movement; control of movement = irritability (simple reflex); control of irritability = (complex) reflex; control of reflex = associating (conditioned reflex); control of associating = human thinking; control of thinking = culture.
A number of thinkers have argued that the next human metasystem transition consists of a merger of biological metasystems with technological metasystems, especially information processing technology. Several cumulative major transitions of evolution have transformed life through key innovations in information storage and replication, including RNA , DNA , multicellularity , and also language and culture as inter-human information processing systems. [ 2 ] [ 3 ] [ 4 ] In this sense it can be argued that the carbon-based biosphere has generated a system (human society) capable of creating technology that will result in a comparable evolutionary transition. "Digital information has reached a similar magnitude to information in the biosphere... Like previous evolutionary transitions, the potential symbiosis between biological and digital information will reach a critical point where these codes could compete via natural selection. Alternatively, this fusion could create a higher-level superorganism employing a low-conflict division of labor in performing informational tasks... humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels, ...most transactions on the stock market are executed by automated trading algorithms, and our electric grids are in the hands of artificial intelligence. With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". [ 1 ] [ 5 ] | https://en.wikipedia.org/wiki/Metasystem_transition |
In logic , a metatheorem is a statement about a formal system proven in a metalanguage . Unlike theorems proved within a given formal system, a metatheorem is proved within a metatheory , and may reference concepts that are present in the metatheory but not the object theory. [ citation needed ]
A formal system is determined by a formal language and a deductive system ( axioms and rules of inference ). The formal system can be used to prove particular sentences of the formal language with that system. Metatheorems, however, are proved externally to the system in question, in its metatheory. Common metatheories used in logic are set theory (especially in model theory ) and primitive recursive arithmetic (especially in proof theory ). Rather than demonstrating particular sentences to be provable, metatheorems may show that each of a broad class of sentences can be proved, or show that certain sentences cannot be proved. [ citation needed ]
Examples of metatheorems include the deduction theorem , the cut-elimination theorem, Gödel's completeness and incompleteness theorems , and the proof of the consistency of classical propositional logic. | https://en.wikipedia.org/wiki/Metatheorem |
Metatranscriptomics is the set of techniques used to study gene expression of microbes within natural environments, i.e., the metatranscriptome. [ 1 ]
While metagenomics focuses on studying the genomic content and on identifying which microbes are present within a community, metatranscriptomics can be used to study the diversity of the active genes within such community, to quantify their expression levels and to monitor how these levels change in different conditions (e.g., physiological vs. pathological conditions in an organism). The advantage of metatranscriptomics is that it can provide information about differences in the active functions of microbial communities that would otherwise appear to have similar make-up. [ 2 ]
The microbiome has been defined as a microbial community occupying a well-defined habitat. [ 3 ] These communities are ubiquitous and can play a key role in maintenance of the characteristics of their environment, and an imbalance in these communities can negatively affect the activities of the setting in which they reside. To study these communities, and to then determine their impact and correlation with their niche, different omics approaches have been used. While metagenomics can help researchers generate a taxonomic profile of the sample, metatranscriptomics provides a functional profile by analysing which genes are expressed by the community. It is possible to infer what genes are expressed under specific conditions, and this can be done using functional annotations of expressed genes.
Since metatranscriptomics focuses on which genes are expressed, it enables characterization of the active functional profile of the entire microbial community. [ 4 ] The overview of gene expression in a given sample is obtained by capturing the total mRNA of the microbiome and performing whole-metatranscriptome shotgun sequencing .
Although microarrays can be exploited to determine the gene expression profiles of some model organisms, next-generation sequencing and third-generation sequencing are the preferred techniques in metatranscriptomics. The protocol used to perform a metatranscriptome analysis may vary depending on the type of sample that needs to be analysed, and many different protocols have been developed for studying the metatranscriptome of microbial samples. Generally, the steps include sample harvesting, RNA extraction (different extraction methods for different kinds of samples have been reported in the literature), mRNA enrichment, cDNA synthesis and preparation of metatranscriptomic libraries, sequencing, and data processing and analysis. mRNA enrichment is one of the most technically challenging steps, and several strategies have been proposed for it, some of which are not recommended because they have been reported to be highly biased. [ 6 ]
A typical metatranscriptome analysis follows one of two strategies: reads are either mapped against reference databases or assembled de novo into transcript fragments.
The first strategy maps reads to reference genomes in databases, collecting information that is useful for deducing the relative expression of individual genes. Metatranscriptomic reads are mapped against databases using alignment tools, such as Bowtie2 , BWA, and BLAST . The results are then annotated using resources such as GO , KEGG , COG, and Swiss-Prot . The final analysis of the results is carried out depending on the aim of the study. One of the latest metatranscriptomics techniques is stable isotope probing (SIP), which has been used to retrieve specific targeted transcriptomes of aerobic microbes in lake sediment. [ 7 ] The limitation of this strategy is its reliance on reference genomes available in databases.
The second strategy estimates the expression levels of the different genes by assembling metatranscriptomic reads into longer fragments called contigs using dedicated software. The Trinity software for RNA-seq , in comparison with other de novo transcriptome assemblers, was reported to recover more full-length transcripts over a broad range of expression levels, with a sensitivity similar to methods that rely on genome alignments. This is particularly important in the absence of a reference genome. [ 8 ]
A quantitative pipeline for transcriptomic analysis was developed by Li and Dewey [ 9 ] and called RSEM (RNA-Seq by Expectation Maximization). It can work as stand-alone software or as a plug-in for Trinity. RSEM starts with a reference transcriptome or assembly along with RNA-Seq reads generated from the sample and calculates normalized transcript abundance (meaning the number of RNA-Seq reads corresponding to each reference transcript or assembly). [ 10 ] [ 11 ]
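The kind of normalized abundance RSEM reports can be illustrated with a simplified transcripts-per-million (TPM) calculation; this sketch assumes uniquely mapped read counts and ignores RSEM's expectation-maximization handling of multi-mapping reads.

    # Simplified TPM (transcripts per million) normalization. RSEM itself
    # apportions multi-mapping reads by expectation-maximization; here
    # each read is assumed to map uniquely.
    def tpm(read_counts, lengths_bp):
        # Rate: reads per kilobase of transcript.
        rates = {t: read_counts[t] / (lengths_bp[t] / 1000.0) for t in read_counts}
        scale = sum(rates.values())
        return {t: rate / scale * 1e6 for t, rate in rates.items()}

    counts = {"txA": 500, "txB": 500, "txC": 100}       # toy data
    lengths = {"txA": 2000, "txB": 1000, "txC": 500}    # base pairs
    print(tpm(counts, lengths))   # values sum to one million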
Although both Trinity and RSEM were designed for transcriptomic datasets (i.e., obtained from a single organism), it may be possible to apply them to metatranscriptomic data (i.e., obtained from a whole microbial community). [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ]
The use of computational analysis tools has become more important as DNA sequencing capabilities have grown, particularly in metagenomic and metatranscriptomic analysis, which can generate a huge volume of data. Many different bioinformatic pipelines have been developed for these purposes, often as open source platforms such as HUMAnN and the more recent HUMAnN2, MetaTrans, SAMSA, Leimena-2013 and mOTUs2. [ 18 ]
HUMAnN2 is a bioinformatic pipeline developed from the earlier HUMAnN software, which was written during the Human Microbiome Project (HMP), and it implements a "tiered search" approach. In the first tier, HUMAnN2 screens DNA or RNA reads with MetaPhlAn2 to identify already-known microbes and constructs a sample-specific database by merging the pangenomes of the annotated species; in the second tier, the algorithm maps the reads against the assembled pangenome database; in the third tier, unaligned reads are used for a translated search against a protein database. [ 19 ]
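The control flow of such a tiered search might be sketched as follows; the helper names are hypothetical stand-ins for illustration, not HUMAnN2's actual API.

    # Hypothetical sketch of a three-tier search; none of these helpers
    # correspond to real HUMAnN2 functions.
    def tiered_search(reads, taxonomic_profiler, pangenome_db, protein_db):
        # Tier 1: identify known species, then build a sample-specific
        # database by merging the pangenomes of the annotated species.
        species = taxonomic_profiler(reads)
        sample_db = pangenome_db.subset(species)

        # Tier 2: nucleotide-level mapping against the sample-specific database.
        hits, unaligned = sample_db.map_reads(reads)

        # Tier 3: translated (protein-level) search for the leftover reads.
        hits += protein_db.translated_search(unaligned)
        return hits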
MetaTrans is a pipeline that exploits multithreading to improve efficiency. Data are obtained from paired-end RNA-Seq, mainly from 16S rRNA for taxonomy and mRNA for gene expression levels. The pipeline is divided into four major steps. First, paired-end reads are filtered for quality control purposes, then sorted and filtered for taxonomic analysis (by removal of tRNA sequences) or functional analysis (by removal of both tRNA and rRNA reads). For the taxonomic analysis, sequences are mapped against the 16S rRNA Greengenes v13.5 database using SOAP2, while for functional analysis sequences are mapped against a functional database, such as MetaHIT-2014, again using SOAP2. This pipeline is highly flexible, since it offers the possibility to use third-party tools and to improve single modules as long as the general structure is preserved. [ 20 ]
SAMSA is a pipeline designed specifically for metatranscriptomic data analysis, working in conjunction with the MG-RAST server for metagenomics. It is simple to use, requires little technical preparation and computational power, and can be applied to a wide range of microbes. First, raw sequencing reads are filtered for quality and then submitted to MG-RAST (which performs further steps such as quality control, gene calling, clustering of amino acid sequences, and use of sBLAT on each cluster to detect the best matches). Matches are then aggregated for taxonomic and functional analysis purposes. [ 21 ]
The Leimena-2013 pipeline does not have an official name and is usually referred to by the first author of the article in which it is described. It is built around alignment tools such as BLAST and MegaBLAST. Reads are clustered into groups of identical sequences and then processed for in-silico removal of tRNA and rRNA sequences. The remaining reads are mapped to NCBI databases using BLAST and MegaBLAST and classified by their bitscore. Sequences with higher bitscores are used to predict phylogenetic origin and function, while lower-scoring reads are aligned with the more sensitive BLASTX and can eventually be aligned against protein databases so that their function can be characterized. [ 12 ]
The mOTUs2 profiler, [ 22 ] which is based on essential housekeeping genes , is demonstrably well-suited for quantification of basal transcriptional activity of microbial community members. [ citation needed ] Depending on environmental conditions, the number of transcripts per cell varies for most genes. An exception is housekeeping genes, which are expressed constitutively and with low variability under different conditions. [ citation needed ] Thus, the abundance of transcripts from such genes strongly correlates with the abundance of active cells in a community.
Another method that can be exploited for metatranscriptomic purposes is tiling microarrays . In particular, microarrays have been used to measure microbial transcription levels, to detect new transcripts, and to obtain information about the structure of mRNAs (for instance, the UTR boundaries). Recently, they have also been used to find new regulatory ncRNAs. However, microarrays are affected by several pitfalls.
RNA-Seq can overcome these limitations: it does not require any previous knowledge about the genomes that have to be analysed and it provides high throughput validation of genes prediction, structure, expression. Thus, by combining the two approaches it is possible to have a more complete representation of bacterial transcriptome. [ 1 ]
The gut microbiome has emerged in recent years as an important player in human health. Its prevalent functions are related to the fermentation of indigestible food components, competition with pathogens, strengthening of the intestinal barrier, and stimulation and regulation of the immune system. [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] Although much has been learnt about the microbiome community in recent years, the wide diversity of microorganisms and molecules in the gut requires new tools to enable new discoveries. By focusing on changes in the expression of genes, metatranscriptomics can generate a more dynamic picture of the state and activity of the microbiome than metagenomics. It has been observed that metatranscriptomic functional profiles are more variable than would be predicted from metagenomic information alone. This suggests that non-housekeeping genes are not stably expressed in situ. [ 30 ] [ 31 ]
One example of metatranscriptomic application is in the study of the gut microbiome in inflammatory bowel disease. Inflammatory bowel disease (IBD) is a group of chronic diseases of the digestive tract that affects millions of people worldwide. [ 32 ] Several human genetic mutations have been linked to an increased susceptibility to IBD, but additional factors are needed for the full development of the disease.
Regarding the relationship between IBD and the gut microbiome, it is known that there is dysbiosis in patients with IBD, but microbial taxonomic profiles can differ greatly among patients, making it difficult to implicate specific microbial species or strains in disease onset and progression. In addition, the composition of the gut microbiome varies considerably over time among individuals, with more pronounced variations in patients with IBD. [ 33 ] [ 34 ] The functional potential of an organism, meaning the genes and pathways encoded in its genome, provides only indirect information about the level or extent of activation of such functions. So, the measurement of functional activity (gene expression) is critical to understanding the mechanism of gut microbiome dysbiosis.
Alterations in transcriptional activity in IBD, inferred from rRNA expression, indicate that some bacterial populations are active in patients with IBD, while other groups are inactive or latent. [ 35 ]
A metatranscriptomic analysis measuring the functional activity of the gut microbiome reveals insights only partially observable in metagenomic functional potential, including disease-linked observations for IBD. It has been reported that many IBD-specific signals are either more pronounced or only detectable at the RNA level. [ 33 ] These altered expression profiles are potentially the result of changes in the gut environment in patients with IBD, which include increased levels of inflammation, higher concentrations of oxygen, and a diminished mucous layer. [ 36 ] Metatranscriptomics has the advantage of allowing researchers to skip the assaying of biochemical products in situ (like mucus or oxygen) and enables evaluation of the effects of environmental changes on microbial expression patterns in vivo for large human populations. In addition, it can be coupled with longitudinal sampling to associate modulation of activity with disease progression. Indeed, it has been shown that while a particular pathway may remain stable over time at the genomic level, the corresponding expression varies with disease severity. [ 33 ] This suggests that microbial dysbiosis affects gut health through changes in the transcriptional programmes of an otherwise stable community. In this way, metatranscriptomic profiling emerges as an important tool for understanding the mechanisms of that relationship.
Some technical limitations of RNA measurements in stool relate to the fact that the extracted RNA can be degraded and, even when intact, represents only the organisms present in the stool sample.
Examples of techniques applied:
Microarrays: these allow the monitoring of changes in the expression levels of many genes in parallel, for both host and pathogen. Early microarray approaches provided the first global analyses of gene expression changes in pathogens such as Vibrio cholerae , Borrelia burgdorferi , Chlamydia trachomatis , Chlamydia pneumoniae and Salmonella enterica , revealing the strategies used by these microorganisms to adapt to the host.
In addition, microarrays provided the first global insights into the host innate immune response to PAMPs , as well as into the effects of bacterial infection on the expression of various host factors.
However, detecting both organisms at the same time through microarrays can be problematic.
Dual RNA-Seq: this technique allows the simultaneous study of both host and pathogen transcriptomes. It is possible to monitor the expression of genes at different time points of the infection process, making it possible to study the changes in cellular networks in both organisms, from the initial contact to the manipulation of the host (host–pathogen interplay).
Moreover, RNA-Seq is an important approach for identifying coregulated genes, enabling the organization of pathogen genomes into operons . Indeed, genome annotation has been done for some eukaryotic pathogens, such as Candida albicans , Trypanosoma brucei and Plasmodium falciparum .
Despite the increasing sensitivity and depth of sequencing now available, there are still few published RNA-Seq studies concerning the response of the mammalian host cell to the infection. [ 37 ] [ 38 ] | https://en.wikipedia.org/wiki/Metatranscriptomics |
In logic , a metavariable (also metalinguistic variable [ 1 ] or syntactical variable ) [ 2 ] is a symbol or symbol string which belongs to a metalanguage and stands for elements of some object language. For instance, in the sentence
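"Let A and B be arbitrary formulas of a formal language ℒ,"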
the symbols A and B are part of the metalanguage in which the statement about the object language ℒ is formulated.
John Corcoran considers this terminology unfortunate because it obscures the use of schemata and because such "variables" do not actually range over a domain. [ 3 ] : 220
The convention is that a metavariable is to be uniformly substituted with the same instance in all its appearances in a given schema. This is in contrast with nonterminal symbols in formal grammars where the nonterminals on the right of a production can be substituted by different instances. [ 4 ]
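A toy illustration of the convention, treating single capital letters as metavariables (the schema and bindings are invented for the example):

    import re

    # Uniform substitution: every occurrence of a metavariable in the
    # schema receives the same instance.
    def instantiate(schema, bindings):
        return re.sub(r"\b[A-Z]\b", lambda m: bindings[m.group(0)], schema)

    schema = "if A then (A and B)"
    print(instantiate(schema, {"A": "it rains", "B": "it pours"}))
    # -> "if it rains then (it rains and it pours)"
    # Both occurrences of A received the same instance, unlike a grammar
    # nonterminal, which may expand differently at each occurrence.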
Attempts to formalize the notion of metavariable result in some kind of type theory . [ 5 ] | https://en.wikipedia.org/wiki/Metavariable |
Metcalfe's law states that the financial value or influence of a telecommunications network is proportional to the square of the number of connected users of the system (n²). The law is named after Robert Metcalfe and was first proposed in 1980, albeit not in terms of users, but rather of "compatible communicating devices" (e.g., fax machines, telephones). [ 1 ] It later became associated with users on the Ethernet after a September 1993 Forbes article by George Gilder . [ 2 ]
Metcalfe's law characterizes many of the network effects of communication technologies and networks such as the Internet , social networking and the World Wide Web . Former Chairman of the U.S. Federal Communications Commission Reed Hundt said that this law gives the most understanding to the workings of the present-day Internet. [ 3 ] Mathematically, Metcalfe's law shows that the number of unique possible connections in an n-node network can be expressed as the triangular number n(n − 1)/2, which is asymptotically proportional to n².
The law has often been illustrated using the example of fax machines: a single fax machine on its own is useless, but the value of every fax machine increases with the total number of fax machines in the network, because the total number of people with whom each user may send and receive documents increases. [ 4 ] This is a common illustration of the network effect . Thus, in any social network, the greater the number of users with the service, the more valuable the service becomes to the community.
Metcalfe's law was conceived in 1983 in a presentation to the 3Com sales force. [ 5 ] It stated that the value V of a network would be proportional to the total number of possible connections, or approximately n².
The original incarnation was careful to delineate between a linear cost (C × n), non-linear growth (n²), and a non-constant proportionality factor called affinity (A). The break-even point, where costs are recouped, is given by: C × n = A × n(n − 1)/2. At some size, the right-hand side of the equation, the value V, exceeds the cost, and A describes the relationship between size and net value added. For large n, net network value is then: Π = n(A × (n − 1)/2 − C). Metcalfe properly dimensioned A as "value per user". Affinity is also a function of network size, and Metcalfe correctly asserted that A must decline as n grows large. In a 2006 interview, Metcalfe stated: [ 6 ]
There may be diseconomies of network scale that eventually drive values down with increasing size. So, if V = A × n², it could be that A (for "affinity," value per connection) is also a function of n and heads down after some network size, overwhelming n².
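Numerically, the break-even relation can be sketched as follows (the parameter values are arbitrary illustrations):

    # Net network value under Metcalfe's model: each of the n(n-1)/2
    # possible connections is worth A; each user costs C to support.
    def net_value(n, affinity, cost_per_user):
        return affinity * n * (n - 1) / 2 - cost_per_user * n

    # Solving C*n = A*n(n-1)/2 for n gives the break-even size 2C/A + 1.
    affinity, cost = 0.01, 5.0
    breakeven = 2 * cost / affinity + 1
    print(breakeven)                             # 1001.0 users
    print(net_value(breakeven, affinity, cost))  # 0.0 at break-even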
Network size, and hence value, does not grow unbounded but is constrained by practical limitations such as infrastructure, access to technology, and bounded rationality such as Dunbar's number . User growth n almost always reaches a saturation point, and competing technologies, substitutes, and technical obsolescence further constrain the growth of n. Growth of n is typically assumed to follow a sigmoid function such as a logistic curve or Gompertz curve .
A is also governed by the connectivity or density of the network topology. In an undirected network, every edge connects two nodes, so m edges have 2m endpoints, and the mean number of connections per node is c = 2m/n.
The maximum possible number of edges in a simple network (i.e. one with no multi-edges or self-edges) is C(n, 2) = n(n − 1)/2.
Therefore the density ρ of a network, the fraction of those possible edges that are actually present, is: ρ = m / (n(n − 1)/2) = 2m / (n(n − 1)) = c / (n − 1), which for large networks is approximated by ρ = c/n. [ 7 ]
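A quick numeric illustration of these definitions:

    from math import comb

    # Density: fraction of possible edges actually present.
    def density(n_nodes, m_edges):
        return m_edges / comb(n_nodes, 2)

    n, m = 1000, 5000
    c = 2 * m / n                # mean connections per node
    print(density(n, m))         # exact: 2m/(n(n-1)) ~ 0.01001
    print(c / n)                 # large-n approximation: 0.01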
Metcalfe's law assumes that the value contributed by each of the n nodes is equal. [ 3 ] If this is not the case, for example because one fax machine serves 60 workers in a company, the second fax machine serves half of that, the third one third, and so on, then the relative value of an additional connection decreases. Likewise, in social networks, if users that join later use the network less than early adopters, then the benefit of each additional user may lessen, making the overall network less efficient if costs per user are fixed.
Within the context of social networks, many, including Metcalfe himself, have proposed modified models in which the value of the network grows as n log n rather than n². [ 8 ] [ 3 ] David P. Reed and Andrew Odlyzko have also sought out possible relationships to Metcalfe's law, proposing alternative scaling laws for network value. Tongia and Wilson also examine the related question of the costs to those excluded. [ 9 ]
For more than 30 years, there was little concrete evidence in support of the law. Finally, in July 2013, Dutch researchers analyzed European Internet-usage patterns over a long-enough time [ specify ] and found n² proportionality for small values of n and n log n proportionality for large values of n. [ 10 ] A few months later, Metcalfe himself provided further support by using Facebook 's data over the preceding 10 years to show a good fit for Metcalfe's law. [ 11 ]
In 2015, Zhang, Liu, and Xu parameterized the Metcalfe function using data from Tencent and Facebook. Their work showed that Metcalfe's law held for both, despite differences in audience between the two sites (Facebook serving a worldwide audience and Tencent serving only Chinese users). The functions for the two sites were V_Tencent = 7.39 × 10⁻⁹ × n² and V_Facebook = 5.70 × 10⁻⁹ × n², respectively. [ 12 ] One of the earliest mentions of Metcalfe's law in the context of Bitcoin was in a 2014 Reddit post by Santostasi, who compared the observed generalized Metcalfe behavior for Bitcoin to Zipf's law and the theoretical Metcalfe result. [ 13 ] Metcalfe's law is a critical component of Santostasi's Bitcoin Power Law Theory. [ 14 ] In a working paper, Peterson linked time-value-of-money concepts to Metcalfe value using Bitcoin and Facebook as numerical examples of the proof, [ 15 ] and in 2018 applied Metcalfe's law to Bitcoin , showing that over 70% of the variance in Bitcoin value was explained by applying Metcalfe's law to increases in Bitcoin network size. [ 16 ]
In a 2024 interview, mathematician Terence Tao emphasized the importance of universality and networking within the mathematics community, citing Metcalfe's law in support. Tao believes that a larger audience leads to more connections, which ultimately results in positive developments within the community, stating that "my whole career experience has been sort of the more connections equals just better stuff happening". [ 17 ] | https://en.wikipedia.org/wiki/Metcalfe's_law |
A meteorite is a rock that originated in outer space and has fallen to the surface of a planet or moon . When the original object enters the atmosphere, various factors such as friction , pressure, and chemical interactions with the atmospheric gases cause it to heat up and radiate energy. It then becomes a meteor and forms a fireball , also known as a shooting star; astronomers call the brightest examples " bolides ". Once it settles on the larger body's surface, the meteor becomes a meteorite. Meteorites vary greatly in size. For geologists, a bolide is a meteorite large enough to create an impact crater . [ 2 ]
Meteorites that are recovered after being observed as they transit the atmosphere and impact Earth are called meteorite falls . All others are known as meteorite finds . Meteorites have traditionally been divided into three broad categories: stony meteorites that are rocks, mainly composed of silicate minerals ; iron meteorites that are largely composed of ferronickel ; and stony-iron meteorites that contain large amounts of both metallic and rocky material. Modern classification schemes divide meteorites into groups according to their structure, chemical and isotopic composition and mineralogy. "Meteorites" less than ~1 mm in diameter are classified as micrometeorites ; however, micrometeorites differ from meteorites in that they typically melt completely in the atmosphere and fall to Earth as quenched droplets. Extraterrestrial meteorites have been found on the Moon and on Mars. [ 3 ] [ 4 ] [ 5 ]
Most space rocks crashing into Earth come from a small number of sources: the origin of most meteorites can be traced to just a handful of asteroid breakup events – and possibly even individual asteroids . [ 6 ]
Most meteoroids disintegrate when entering the Earth's atmosphere. Usually, five to ten a year are observed to fall and are subsequently recovered and made known to scientists. [ 7 ] Few meteorites are large enough to create large impact craters . Instead, they typically arrive at the surface at their terminal velocity and, at most, create a small pit.
Large meteoroids may strike the earth with a significant fraction of their escape velocity (second cosmic velocity), leaving behind a hypervelocity impact crater. The kind of crater will depend on the size, composition, degree of fragmentation, and incoming angle of the impactor. The force of such collisions has the potential to cause widespread destruction. [ 8 ] [ 9 ] The most frequent hypervelocity cratering events on the Earth are caused by iron meteoroids, which are most easily able to transit the atmosphere intact. Examples of craters caused by iron meteoroids include Barringer Meteor Crater , Odessa Meteor Crater , Wabar craters , and Wolfe Creek crater ; iron meteorites are found in association with all of these craters. In contrast, even relatively large stony or icy bodies such as small comets or asteroids , up to millions of tons, are disrupted in the atmosphere, and do not make impact craters. [ 10 ] Although such disruption events are uncommon, they can cause a considerable concussion to occur; the famed Tunguska event probably resulted from such an incident. Very large stony objects, hundreds of meters in diameter or more, weighing tens of millions of tons or more, can reach the surface and cause large craters but are very rare. Such events are generally so energetic that the impactor is completely destroyed, leaving no meteorites. (The first example of a stony meteorite found in association with a large impact crater, the Morokweng impact structure in South Africa, was reported in May 2006.) [ 11 ]
Several phenomena are well documented during witnessed meteorite falls too small to produce hypervelocity craters. [ 12 ] The fireball that occurs as the meteoroid passes through the atmosphere can appear to be very bright, rivaling the sun in intensity, although most are far dimmer and may not even be noticed during the daytime. Various colors have been reported, including yellow, green, and red. Flashes and bursts of light can occur as the object breaks up. Explosions, detonations, and rumblings are often heard during meteorite falls, which can be caused by sonic booms as well as shock waves resulting from major fragmentation events. These sounds can be heard over wide areas, with a radius of a hundred or more kilometers. Whistling and hissing sounds are also sometimes heard but are poorly understood. Following the passage of the fireball, it is not unusual for a dust trail to linger in the atmosphere for several minutes.
As meteoroids are heated during atmospheric entry , their surfaces melt and experience ablation . They can be sculpted into various shapes during this process, sometimes resulting in shallow thumbprint-like indentations on their surfaces called regmaglypts . If the meteoroid maintains a fixed orientation for some time, without tumbling, it may develop a conical "nose cone" or "heat shield" shape. As it decelerates, eventually the molten surface layer solidifies into a thin fusion crust, which on most meteorites is black (on some achondrites , the fusion crust may be very light-colored). On stony meteorites, the heat-affected zone is at most a few mm deep; in iron meteorites, which are more thermally conductive, the structure of the metal may be affected by heat up to 1 centimetre (0.39 in) below the surface. Reports vary; some meteorites are reported to be "burning hot to the touch" upon landing, while others are alleged to have been cold enough to condense water and form a frost. [ 13 ] [ 14 ] [ 15 ]
Meteoroids that disintegrate in the atmosphere may fall as meteorite showers, which can range from only a few up to thousands of separate individuals. The area over which a meteorite shower falls is known as its strewn field . Strewn fields are commonly elliptical in shape, with the major axis parallel to the direction of flight. In most cases, the largest meteorites in a shower are found farthest down-range in the strewn field. [ 16 ]
Most meteorites are stony meteorites, classed as chondrites and achondrites . Only about 6% of meteorites are iron meteorites or a blend of rock and metal, the stony-iron meteorites . Modern classification of meteorites is complex. The review paper of Krot et al. (2007) [ 17 ] summarizes modern meteorite taxonomy.
About 86% of the meteorites are chondrites, [ 18 ] [ 19 ] [ 20 ] which are named for the small, round particles they contain. These particles, or chondrules , are composed mostly of silicate minerals that appear to have been melted while they were free-floating objects in space. Certain types of chondrites also contain small amounts of organic matter , including amino acids , and presolar grains . Chondrites are typically about 4.55 billion years old and are thought to represent material from the asteroid belt that never coalesced into large bodies. Like comets , chondritic asteroids are some of the oldest and most primitive materials in the Solar System . Chondrites are often considered to be "the building blocks of the planets".
About 8% of the meteorites are achondrites (meaning they do not contain chondrules), some of which are similar to terrestrial igneous rocks . Most achondrites are also ancient rocks, and are thought to represent crustal material of differentiated planetesimals. One large family of achondrites (the HED meteorites ) may have originated on the parent body of the Vesta Family , although this claim is disputed. [ 21 ] [ 22 ] Others derive from unidentified asteroids. Two small groups of achondrites are special, as they are younger and do not appear to come from the asteroid belt. One of these groups comes from the Moon, and includes rocks similar to those brought back to Earth by Apollo and Luna programs. The other group is almost certainly from Mars and constitutes the only materials from other planets ever recovered by humans.
About 5% of meteorites that have been seen to fall are iron meteorites composed of iron- nickel alloys , such as kamacite and/or taenite . Most iron meteorites are thought to come from the cores of planetesimals that were once molten. As with the Earth, the denser metal separated from silicate material and sank toward the center of the planetesimal, forming its core. After the planetesimal solidified, it broke up in a collision with another planetesimal. Due to the low abundance of iron meteorites in collection areas such as Antarctica, where most of the meteoric material that has fallen can be recovered, it is possible that the percentage of iron-meteorite falls is lower than 5%. This would be explained by a recovery bias; laypeople are more likely to notice and recover solid masses of metal than most other meteorite types. The abundance of iron meteorites relative to total Antarctic finds is 0.4%. [ 23 ] [ 24 ]
Stony-iron meteorites constitute the remaining 1%. They are a mixture of iron-nickel metal and silicate minerals. One type, called pallasites , is thought to have originated in the boundary zone above the core regions where iron meteorites originated. The other major type of stony-iron meteorites is the mesosiderites .
Tektites (from Greek tektos , molten) are not themselves meteorites, but are rather natural glass objects up to a few centimeters in size that were formed—according to most scientists—by the impacts of large meteorites on Earth's surface. A few researchers have favored tektites originating from the Moon as volcanic ejecta, but this theory has lost much of its support over the last few decades.
The diameter of the largest impactor to hit Earth on any given day is likely to be about 40 centimeters (16 inches), in a given year about four metres (13 ft), and in a given century about 20 m (66 ft). These statistics are obtained by the following:
Over at least the range from five centimeters (2.0 inches) to roughly 300 meters (980 feet), the rate at which Earth receives meteors obeys a power-law distribution as follows:
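N(>D) = 37 D^(−2.7)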
where N(>D) is the expected number of objects larger than a diameter of D meters to hit Earth in a year. [ 25 ] This is based on observations of bright meteors seen from the ground and space, combined with surveys of near-Earth asteroids . Above 300 m (980 ft) in diameter, the predicted rate is somewhat higher, with a 2 km (1.2 mi) asteroid (one teraton TNT equivalent ) every couple of million years – about 10 times as often as the power-law extrapolation would predict.
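A quick check, assuming the power-law fit above, reproduces the day, year, and century figures quoted earlier:

    # N(>D): expected impactors per year larger than D meters, using the
    # power-law fit quoted above.
    def n_per_year(d_meters):
        return 37 * d_meters ** -2.7

    print(n_per_year(0.40) / 365)   # ~1.2 per day     (D ~ 40 cm)
    print(n_per_year(4.0))          # ~0.9 per year    (D ~ 4 m)
    print(n_per_year(20.0) * 100)   # ~1.1 per century (D ~ 20 m)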
In 2015, NASA scientists reported that complex organic compounds found in DNA and RNA , including uracil , cytosine , and thymine , have been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine , found in meteorites. Pyrimidine and polycyclic aromatic hydrocarbons (PAHs) may have been formed in red giants or in interstellar dust and gas clouds, according to the scientists. [ 26 ]
In 2018, researchers found that 4.5 billion-year-old meteorites found on Earth contained liquid water along with prebiotic complex organic substances that may be ingredients for life. [ 27 ] [ 28 ]
In 2019, scientists reported detecting sugar molecules in meteorites for the first time, including ribose , suggesting that chemical processes on asteroids can produce some organic compounds fundamental to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth. [ 29 ] [ 30 ]
In 2022, a Japanese group reported that they had found adenine (A), thymine (T), guanine (G), cytosine (C) and uracil (U) inside carbon-rich meteorites. These compounds are building blocks of DNA and RNA , the genetic code of all life on Earth. These compounds have also been produced spontaneously in laboratory settings emulating conditions in outer space. [ 31 ] [ 32 ]
Until recently, [ when? ] the source of only about 6% of meteorites had been traced to their sources: the Moon, Mars, and asteroid Vesta. [ 33 ] [ 34 ] [ 35 ] Approximately 70% of meteorites found on Earth now appear to originate from break-ups of three asteroids. [ 36 ]
Most meteorites date from the early Solar System and are by far the oldest extant material on Earth. Analysis of terrestrial weathering due to water, salt, oxygen, etc. is used to quantify the degree of alteration that a meteorite has experienced. Several qualitative weathering indices have been applied to Antarctic and desertic samples. [ 37 ]
The most commonly employed weathering scale, used for ordinary chondrites , ranges from W0 (pristine state) to W6 (heavy alteration).
"Fossil" meteorites are sometimes discovered by geologists. They represent the highly weathered remains of meteorites that fell to Earth in the remote past and were preserved in sedimentary deposits sufficiently well that they can be recognized through mineralogical and geochemical studies. The Thorsberg limestone quarry in Sweden has produced an anomalously large number – exceeding one hundred – fossil meteorites from the Ordovician , nearly all of which are highly weathered L-chondrites that still resemble the original meteorite under a petrographic microscope , but which have had their original material almost entirely replaced by terrestrial secondary mineralization. The extraterrestrial provenance was demonstrated in part through isotopic analysis of relict spinel grains, a mineral that is common in meteorites, is insoluble in water, and is able to persist chemically unchanged in the terrestrial weathering environment. Scientists believe that these meteorites, which have all also been found in Russia and China, all originated from the same source , a collision that occurred somewhere between Jupiter and Mars. [ 38 ] [ 39 ] [ 40 ] [ 41 ] One of these fossil meteorites, dubbed Österplana 065 , appears to represent a distinct type of meteorite that is "extinct" in the sense that it is no longer falling to Earth, the parent body having already been completely depleted from the reservoir of near-Earth objects . [ 42 ]
A "meteorite fall", also called an "observed fall", is a meteorite collected after its arrival was observed by people or automated devices. Any other meteorite is called a "meteorite find". [ 43 ] [ 44 ] There are more than 1,100 documented falls listed in widely used databases, [ 45 ] [ 46 ] [ 47 ] most of which have specimens in modern collections. As of January 2019 [update] , the Meteoritical Bulletin Database had 1,180 confirmed falls. [ 45 ]
Most meteorite falls are collected on the basis of eyewitness accounts of the fireball or the impact of the object on the ground, or both. Therefore, despite the fact that meteorites fall with virtually equal probability everywhere on Earth, verified meteorite falls tend to be concentrated in areas with higher human population densities such as Europe, Japan, and northern India.
A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first of these was the Příbram meteorite , which fell in Czechoslovakia (now the Czech Republic) in 1959. [ 48 ] In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite.
Following the Příbram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Network , operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern US . This program also observed a meteorite fall, the Lost City chondrite, allowing its recovery and a calculation of its orbit. [ 49 ] Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, Innisfree , in 1977. [ 50 ] Finally, observations by the European Fireball Network , a descendant of the original Czech program that recovered Příbram, led to the discovery and orbit calculations for the Neuschwanstein meteorite in 2002. [ 51 ] NASA has an automated system that detects meteors and calculates the orbit, magnitude, ground track , and other parameters over the southeast USA, which often detects a number of events each night. [ 52 ]
Until the twentieth century, only a few hundred meteorite finds had ever been discovered. More than 80% of these were iron and stony-iron meteorites, which are easily distinguished from local rocks. To this day, few stony meteorites are reported each year that can be considered to be "accidental" finds. The reason there are now more than 30,000 meteorite finds in the world's collections started with the discovery by Harvey H. Nininger that meteorites are much more common on the surface of the Earth than was previously thought.
Meteorites that land in Canada are protected under the Cultural Property Export and Import Act . [ 53 ] In July 2024, a meteorite was recorded by security footage crashing into a residential property in Marshfield, Prince Edward Island . It is believed to be the first time such an event has been captured on camera with the sound of the impact recorded. [ 54 ] The meteorite was subsequently registered as the Charlottetown meteorite, after the city near where it landed. [ 55 ]
Nininger's strategy was to search for meteorites in the Great Plains of the United States, where the land was largely cultivated and the soil contained few rocks. Between the late 1920s and the 1950s, he traveled across the region, educating local people about what meteorites looked like and what to do if they thought they had found one, for example, in the course of clearing a field. The result was the discovery of more than 200 new meteorites, mostly stony types. [ 56 ]
In the late 1960s, Roosevelt County, New Mexico was found to be a particularly good place to find meteorites. After the discovery of a few meteorites in 1967, a public awareness campaign resulted in the finding of nearly 100 new specimens in the next few years, many of them by a single person, Ivan Wilson. In total, nearly 140 meteorites have been found in the region since 1967. In the area of the finds, the ground was originally covered by a shallow, loose soil sitting atop a hardpan layer. During the Dust Bowl era, the loose soil was blown off, leaving any rocks and meteorites that were present stranded on the exposed surface. [ 57 ]
Beginning in the mid-1960s, amateur meteorite hunters began scouring the arid areas of the southwestern United States. [ 58 ] To date, thousands of meteorites have been recovered from the Mojave , Sonoran , Great Basin , and Chihuahuan Deserts , many of them from dry lake beds. Significant finds include the three-tonne Old Woman meteorite , currently on display at the Desert Discovery Center in Barstow, California , and the Franconia and Gold Basin meteorite strewn fields, from each of which hundreds of kilograms of meteorites have been recovered. [ 59 ] [ 60 ] [ 61 ] A number of finds from the American Southwest have been submitted with false find locations, as many finders think it is unwise to share that information publicly, for fear of confiscation by the federal government and of competition with other hunters at published find sites. [ 62 ] [ 63 ] [ 64 ] Several recently found meteorites are on display in the Griffith Observatory in Los Angeles and at UCLA 's Meteorite Gallery. [ 65 ]
A few meteorites were found in Antarctica between 1912 and 1964. In 1969, the 10th Japanese Antarctic Research Expedition found nine meteorites on a blue ice field near the Yamato Mountains . With this discovery came the realization that movement of ice sheets might act to concentrate meteorites in certain areas. [ 67 ] After a dozen other specimens were found in the same place in 1973, a Japanese expedition was launched in 1974 dedicated to the search for meteorites. This team recovered nearly 700 meteorites. [ 68 ]
Shortly thereafter, the United States began its own program to search for Antarctic meteorites, operating along the Transantarctic Mountains on the other side of the continent: the Antarctic Search for Meteorites ( ANSMET ) program. [ 69 ] European teams, starting with a consortium called "EUROMET" in the 1990/91 season, and continuing with a program by the Italian Programma Nazionale di Ricerche in Antartide have also conducted systematic searches for Antarctic meteorites. [ 70 ]
The Antarctic Scientific Exploration of China has conducted successful meteorite searches since 2000. A Korean program (KOREAMET) was launched in 2007 and has collected a few meteorites. [ 71 ] The combined efforts of all of these expeditions have produced more than 23,000 classified meteorite specimens since 1974, with thousands more that have not yet been classified. For more information see the article by Harvey (2003). [ 72 ]
At about the same time as meteorite concentrations were being discovered in the cold desert of Antarctica, collectors discovered that many meteorites could also be found in the hot deserts of Australia . Several dozen meteorites had already been found in the Nullarbor region of Western and South Australia . Systematic searches between about 1971 and the present have recovered more than 500 others, [ 73 ] ~300 of which are currently well characterized. The meteorites can be found in this region because the land presents a flat, featureless plain covered by limestone . In the extremely arid climate, there has been relatively little weathering or sedimentation on the surface for tens of thousands of years, allowing meteorites to accumulate without being buried or destroyed. The dark-colored meteorites can then be recognized among the very different-looking limestone pebbles and rocks.
In 1986–87, a German team installing a network of seismic stations while prospecting for oil discovered about 65 meteorites on a flat, desert plain about 100 kilometres (62 mi) southeast of Dirj (Daraj), Libya . A few years later, a desert enthusiast saw photographs of meteorites being recovered by scientists in Antarctica, and thought that he had seen similar occurrences in northern Africa . In 1989, he recovered about 100 meteorites from several distinct locations in Libya and Algeria. Over the next several years, he and others who followed found at least 400 more meteorites. The find locations were generally in regions known as regs or hamadas : flat, featureless areas covered only by small pebbles and minor amounts of sand. [ 75 ] Dark-colored meteorites can be easily spotted in these places. In the case of several meteorite fields, such as Dar al Gani , Dhofar, and others, favorable light-colored geology consisting of basic rocks (clays, dolomites , and limestones ) makes meteorites particularly easy to identify. [ 76 ]
Although meteorites had been sold commercially and collected by hobbyists for many decades, up to the time of the Saharan finds of the late 1980s and early 1990s, most meteorites were deposited in or purchased by museums and similar institutions where they were exhibited and made available for scientific research . The sudden availability of large numbers of meteorites that could be found with relative ease in places that were readily accessible (especially compared to Antarctica), led to a rapid rise in commercial collection of meteorites. This process was accelerated when, in 1997, meteorites coming from both the Moon and Mars were found in Libya. By the late 1990s, private meteorite-collecting expeditions had been launched throughout the Sahara. Specimens of the meteorites recovered in this way are still deposited in research collections, but most of the material is sold to private collectors. These expeditions have now brought the total number of well-described meteorites found in Algeria and Libya to more than 500. [ 77 ]
Meteorite markets came into existence in the late 1990s, especially in Morocco . This trade was driven by Western commercialization and an increasing number of collectors. The meteorites were supplied by nomads and local people who combed the deserts looking for specimens to sell. Many thousands of meteorites have been distributed in this way, most of which lack any information about how, when, or where they were discovered. These are the so-called "Northwest Africa" meteorites. When they are classified, they are named "Northwest Africa" (abbreviated NWA) followed by a number. [ 78 ] It is generally accepted that NWA meteorites originate in Morocco, Algeria, Western Sahara, Mali, and possibly even further afield. Nearly all of these meteorites leave Africa through Morocco. Scores of important meteorites, including Lunar and Martian ones, have been discovered and made available to science via this route. A few of the more notable meteorites recovered include Tissint and NWA 7034 . Tissint was the first witnessed Martian meteorite fall in more than fifty years; NWA 7034 is the oldest meteorite known to come from Mars, and is a unique water-bearing regolith breccia.
In 1999, meteorite hunters discovered that the deserts of southern and central Oman were also favorable for the collection of many specimens. The gravel plains in the Dhofar and Al Wusta regions of Oman, south of the sandy deserts of the Rub' al Khali , had yielded about 5,000 meteorites as of mid-2009. Included among these are a large number of lunar and Martian meteorites, making Oman a particularly important area both for scientists and collectors. Early expeditions to Oman were mainly undertaken by commercial meteorite dealers; however, international teams of Omani and European scientists have since also collected specimens.
The recovery of meteorites from Oman is currently prohibited by national law, but a number of international hunters continue to remove specimens now deemed national treasures. The law provoked a small international incident , as its implementation preceded any public notification of it, resulting in the prolonged imprisonment of a large group of meteorite hunters, primarily from Russia but also including members from the United States and several European countries. [ citation needed ]
Meteorites have figured into human culture since their earliest discovery as ceremonial or religious objects, as the subject of writing about events occurring in the sky and as a source of peril. The oldest known iron artifacts are nine small beads hammered from meteoritic iron. They were found in northern Egypt and have been securely dated to 3200 BC. [ 79 ]
Although the use of the metal found in meteorites is also recorded in myths of many countries and cultures where the celestial source was often acknowledged, scientific documentation only began in the last few centuries.
Meteorite falls may have been the source of cultish worship . The cult in the Temple of Artemis at Ephesus, one of the Seven Wonders of the Ancient World , possibly originated with the observation and recovery of a meteorite that was understood by contemporaries to have fallen to the earth from Jupiter , the principal Roman deity. [ 80 ] There are reports that a sacred stone was enshrined at the temple that may have been a meteorite.
The Black Stone set into the wall of the Kaaba has often been presumed to be a meteorite, but the little available evidence for this is inconclusive. [ 81 ] [ 82 ] [ 83 ]
Some Native Americans treated meteorites as ceremonial objects. In 1915, a 61-kilogram (135 lb) iron meteorite was found in a Sinagua (c. 1100–1200 AD) burial cist near Camp Verde, Arizona , respectfully wrapped in a feather cloth. [ 84 ] A small pallasite was found in a pottery jar in an old burial at Pojoaque Pueblo , New Mexico. Nininger reports several other such instances, in the Southwest US and elsewhere, such as the discovery of Native American beads of meteoric iron in Hopewell burial mounds , and the discovery of the Winona meteorite in a Native American stone-walled crypt. [ 84 ] [ 85 ]
In medieval China during the Song dynasty , a meteorite strike was recorded by Shen Kuo in 1064 AD near Changzhou . He reported that "a loud noise that sounded like a thunder was heard in the sky; a giant star, almost like the moon, appeared in the southeast", and that the crater, with the still-hot meteorite inside, was later found nearby. [ 86 ]
Two of the oldest recorded meteorite falls in Europe are the Elbogen (1400) and Ensisheim (1492) meteorites. The German physicist Ernst Florens Chladni was the first to publish (in 1794) the idea that meteorites might be rocks that originated not from Earth, but from space. [ 87 ] His booklet was "On the Origin of the Iron Masses Found by Pallas and Others Similar to it, and on Some Associated Natural Phenomena" . [ 88 ] In it he compiled all available data on several meteorite finds and falls and concluded that they must have their origins in outer space. The scientific community of the time responded with resistance and mockery. [ 89 ] It took nearly ten years before the origin of meteorites gained general acceptance, through the work of the French scientist Jean-Baptiste Biot and the British chemist Edward Howard . [ 90 ] Biot's study, initiated by the French Academy of Sciences , was prompted by a fall of thousands of meteorites on 26 April 1803 from the skies of L'Aigle, France. [ 91 ] [ 92 ] [ 93 ]
Throughout history, many first- and second-hand reports speak of meteorites killing humans and other animals. One example, an event reported from 1490 AD in China, purportedly killed thousands of people. [ 94 ] John Lewis has compiled some of these reports, and summarizes: "No one in recorded history has ever been killed by a meteorite in the presence of a meteoriticist and a medical doctor" and "reviewers who make sweeping negative conclusions usually do not cite any of the primary publications in which the eyewitnesses describe their experiences, and give no evidence of having read them". [ 95 ]
Modern reports of meteorite strikes include:
Meteorites are always named for the places they were found, where practical, usually a nearby town or geographic feature. In cases where many meteorites were found in one place, the name may be followed by a number or letter (e.g., Allan Hills 84001 or Dimmitt (b)). The name designated by the Meteoritical Society is used by scientists, catalogers, and most collectors. [ 100 ]
| https://en.wikipedia.org/wiki/Meteorite |
A meteorite fall , also called an observed fall , is a meteorite collected after its fall from outer space was observed by people or automated devices. Any other meteorite is called a " find ". [ 1 ] [ 2 ] There are more than 1,300 documented falls listed in widely used databases, [ 3 ] [ 4 ] [ 5 ] most of which have specimens in modern collections. As of February 2023, the Meteoritical Bulletin Database had 1,372 confirmed falls. [ 3 ] [ 6 ]
Observed meteorite falls are important for several reasons.
Material from observed falls has not been subjected to terrestrial weathering, making such finds better candidates for scientific study. Historically, observed falls were the most compelling evidence supporting the extraterrestrial origin of meteorites. [ 7 ] Furthermore, observed falls are a more representative sample of the types of meteorites that fall to Earth. For example, iron meteorites take much longer to weather and are easier to identify as unusual objects than other types. This may explain the increased proportion of iron meteorites among finds (6.7%), over that among observed falls (4.4%). [ 3 ] Detailed statistics on falls are also available, for example by meteorite classification .
Only one known meteorite fall, the 2024 Charlottetown meteorite , was recorded with video including audio. The sound of the meteorite shattering upon impact has been described as similar to the sound of breaking ice. [ 8 ]
As of January 2019, the Meteoritical Bulletin Database had 1,180 confirmed falls. [ 3 ] Statistics by decade are listed in the table in this section.
The German physicist Ernst Chladni , sometimes considered the father of meteoritics , [ 9 ] was the first to publish in modern Western thought (in 1794) the then audacious idea that meteorites are rocks from space. [ 10 ] There were already several documented cases; one of the earliest was the Aegospotami meteorite of 467 BC, which became a landmark for 500 years and of which Diogenes of Apollonia said: [ 11 ]
With the visible stars revolve stones which are invisible, and for that reason nameless. They often fall on the ground and are extinguished, like the stone star that came down on fire at Aegospotami.
showing that the Greeks had arrived at the idea that meteorites are rocks from space much earlier.
Below is a list of eight confirmed falls from before 1600 AD. However, unlike the Loket (Elbogen) and Ensisheim meteorites, not all are as well documented.
While most confirmed falls involve masses from less than one kg to several kg, some reach 100 kg or more. A few have fragments that total more than one metric ton . The six largest falls are listed below; all but one (the 2013 Chelyabinsk meteorite ) occurred during the 20th century. Presumably, events of such magnitude happen a few times per century but, especially in remote areas, may go unreported.
For comparison, the largest finds are the 60-ton Hoba meteorite , a 30.8-ton fragment ( Gancedo ) and a 28.8-ton fragment ( El Chaco ) of the Campo del Cielo , and a 30.9-ton fragment ( Ahnighito ) of the Cape York meteorite .
As of 31 August 2021, 90 have been found since 2010.
On 18 August 1907, multiple newspapers [ 36 ] reported that a meteor fall had occurred in Amagansett, Long Island .
These were all found between 1610 and 2010 and are arranged alphabetically (mostly). | https://en.wikipedia.org/wiki/Meteorite_fall |
This award honors members of the Meteoritical Society who have advanced the goals of the Society to promote research and education in meteoritics and planetary science in ways other than by conducting scientific research. [ 1 ] Examples of activities that could be honored by the award include, but are not limited to, education and public outreach, service to the Society and the broader scientific community, and acquisition, classification and curation of new samples for research. This award may be given annually, and should be given at least every other year. Winners will be granted lifetime membership in the Meteoritical Society . | https://en.wikipedia.org/wiki/Meteoritical_Society's_Service_Award |
A meteoroid ( / ˈ m iː t i ə r ɔɪ d / MEE -tee-ə-royd ) [ 1 ] is a small rocky or metallic body in outer space .
Meteoroids are distinguished as objects significantly smaller than asteroids , ranging in size from grains to objects up to 1 m (3 ft 3 in) wide. [ 2 ] Objects smaller than meteoroids are classified as micrometeoroids or space dust . [ 2 ] [ 3 ] [ 4 ] Many are fragments from comets or asteroids, whereas others are collision impact debris ejected from bodies such as the Moon or Mars . [ 5 ] [ 6 ] [ 7 ]
The visible passage of a meteoroid, comet, or asteroid entering Earth's atmosphere is called a meteor , and a series of many meteors appearing seconds or minutes apart and appearing to originate from the same fixed point in the sky is called a meteor shower .
An estimated 25 million meteoroids, micrometeoroids and other space debris enter Earth's atmosphere each day, [ 8 ] which results in an estimated 15,000 tonnes of that material entering the atmosphere each year. [ 9 ] A meteorite is the remains of a meteoroid that has survived the ablation of its surface material during its passage through the atmosphere as a meteor and has impacted the ground.
In 1961, the International Astronomical Union (IAU) defined a meteoroid as "a solid object moving in interplanetary space, of a size considerably smaller than an asteroid and considerably larger than an atom". [ 10 ] [ 11 ] In 1995, Beech and Steel, writing in the Quarterly Journal of the Royal Astronomical Society , proposed a new definition where a meteoroid would be between 100 μm and 10 m (33 ft) across. [ 12 ] In 2010, following the discovery of asteroids below 10 m in size, Rubin and Grossman proposed a revision of the previous definition of meteoroid to objects between 10 μm (0.00039 in) and one meter (3 ft 3 in) in diameter in order to maintain the distinction. [ 2 ] According to Rubin and Grossman, the minimum size of an asteroid is given by what can be discovered from Earth-bound telescopes, so the distinction between meteoroid and asteroid is fuzzy. Some of the smallest asteroids discovered (based on absolute magnitude H ) are 2008 TS 26 with H = 33.2 [ 13 ] and 2011 CQ 1 with H = 32.1 , [ 14 ] both with an estimated size of one m (3 ft 3 in). [ 15 ] In April 2017, the IAU adopted an official revision of its definition, limiting size to between 30 μm (0.0012 in) and one meter in diameter, but allowing for a deviation for any object causing a meteor.
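The quoted size estimates can be checked against the standard relation between an asteroid's absolute magnitude H and its diameter D for an assumed geometric albedo p_V (a conventional formula, not one stated in this article):

\[ D \approx \frac{1329\ \mathrm{km}}{\sqrt{p_V}}\,10^{-H/5} \]

For H = 33.2 and an assumed p_V ≈ 0.14, this gives D ≈ 3553 km × 10^{-6.64} ≈ 0.8 m; for H = 32.1 it gives roughly 1.4 m. Both are of order one meter, consistent with the sizes quoted above, though the result depends strongly on the assumed albedo.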
Objects smaller than meteoroids are classified as micrometeoroids and interplanetary dust . The Minor Planet Center does not use the term "meteoroid".
Almost all meteoroids contain extraterrestrial nickel and iron. They have three main classifications: iron, stone, and stony-iron. Some stone meteoroids contain grain-like inclusions known as chondrules and are called chondrites . Stony meteoroids without these features are called " achondrites ", which are typically formed from extraterrestrial igneous activity; they contain little or no extraterrestrial iron. [ 17 ] The composition of meteoroids can be inferred as they pass through Earth's atmosphere from their trajectories and the light spectra of the resulting meteor. Their effects on radio signals also give information, especially useful for daytime meteors, which are otherwise very difficult to observe. From these trajectory measurements, meteoroids have been found to have many different orbits, some clustering in streams (see meteor showers ) often associated with a parent comet , others apparently sporadic. Debris from meteoroid streams may eventually be scattered into other orbits. The light spectra, combined with trajectory and light curve measurements, have yielded various compositions and densities, ranging from fragile snowball-like objects with density about a quarter that of ice, [ 18 ] to nickel-iron rich dense rocks. The study of meteorites also gives insights into the composition of non-ephemeral meteoroids.
Most meteoroids come from the asteroid belt , having been perturbed by the gravitational influences of planets, but others are particles from comets , giving rise to meteor showers . Some meteoroids are fragments from bodies such as Mars or the Moon , that have been thrown into space by an impact.
Meteoroids travel around the Sun in a variety of orbits and at various velocities. The fastest move at about 42 km/s (94,000 mph) through space in the vicinity of Earth's orbit. This is the escape velocity from the Sun at that distance, equal to √2 times Earth's orbital speed, and is the upper speed limit of objects in the vicinity of Earth, unless they come from interstellar space. Earth travels at about 29.6 km/s (66,000 mph), so when meteoroids meet the atmosphere head-on (which only occurs when meteors are in a retrograde orbit such as the Leonids , which are associated with the retrograde comet 55P/Tempel–Tuttle ) the combined speed may reach about 71 km/s (160,000 mph) (see Specific energy#Astrodynamics ). Meteoroids moving through Earth's orbital space average about 20 km/s (45,000 mph), [ 19 ] but due to Earth's gravity meteors such as the Phoenicids can make atmospheric entry at speeds as low as about 11 km/s.
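As a check on these figures (a standard orbital-mechanics calculation, not taken from the article): the escape speed at any distance from the Sun is √2 times the local circular orbital speed, so

\[ v_{\mathrm{esc}} = \sqrt{2}\, v_{\mathrm{orb}} \approx 1.414 \times 29.6\ \mathrm{km/s} \approx 42\ \mathrm{km/s}, \]

and a head-on encounter adds Earth's own speed: 42 + 29.6 ≈ 71.6 km/s, matching the roughly 71 km/s quoted above. The minimum entry speed of about 11 km/s is Earth's escape velocity, the floor to which Earth's gravity accelerates even a meteoroid that approaches slowly.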
On January 17, 2013, at 05:21 PST, a one-meter-sized comet from the Oort cloud entered Earth's atmosphere over California and Nevada . [ 20 ] The object had a retrograde orbit with perihelion at 0.98 ± 0.03 AU . It approached from the direction of the constellation Virgo (which was in the south about 50° above the horizon at the time), and collided head-on with Earth's atmosphere at 72 ± 6 km/s (161,000 ± 13,000 mph), [ 20 ] vaporising more than 100 km (330,000 ft) above ground over a period of several seconds.
When meteoroids intersect with Earth's atmosphere at night, they are likely to become visible as meteors . If meteoroids survive the entry through the atmosphere and reach Earth's surface, they are called meteorites . Meteorites are transformed in structure and chemistry by the heat of entry and force of impact. A noted 4-metre (13 ft) asteroid , 2008 TC 3 , was observed in space on a collision course with Earth on 6 October 2008 and entered Earth's atmosphere the next day, striking a remote area of northern Sudan. It was the first time that a meteoroid had been observed in space and tracked prior to impacting Earth. [ 10 ] NASA has produced a map showing the most notable asteroid collisions with Earth and its atmosphere from 1994 to 2013 from data gathered by U.S. government sensors. [ 21 ]
A meteorite is a portion of a meteoroid or asteroid that survives its passage through the atmosphere and hits the ground without being destroyed. [ 22 ] Meteorites are sometimes, but not always, found in association with hypervelocity impact craters ; during energetic collisions, the entire impactor may be vaporized, leaving no meteorites. Geologists use the term "bolide" in a different sense from astronomers, to indicate a very large impactor . For example, the USGS uses the term to mean a generic large crater-forming projectile in a manner "to imply that we do not know the precise nature of the impacting body ... whether it is a rocky or metallic asteroid, or an icy comet for example". [ 23 ]
Meteoroids also hit other bodies in the Solar System. On stony bodies with little or no atmosphere, such as the Moon or Mars, they leave enduring craters.
Meteoroid collisions with solid Solar System objects, including the Moon, Mercury , Callisto , Ganymede , and most small moons and asteroids , create impact craters, which are the dominant geographic features of many of those objects. On other planets and moons with active surface geological processes, such as Earth, Venus , Mars , Europa , Io , and Titan , visible impact craters may become eroded , buried, or transformed by tectonics over time. In early literature, before the significance of impact cratering was widely recognised, the terms cryptoexplosion or cryptovolcanic structure were often used to describe what are now recognised as impact-related features on Earth. [ 24 ] Molten terrestrial material ejected from a meteorite impact crater can cool and solidify into an object known as a tektite . These are often mistaken for meteorites. Terrestrial rock, sometimes with pieces of the original meteorite, created or modified by an impact of a meteorite is called impactite .
| https://en.wikipedia.org/wiki/Meteoroid |
Meteorological intelligence is information measured, gathered, compiled, exploited, analyzed and disseminated by meteorologists , climatologists and hydrologists to characterize the current state and/or predict the future state of the atmosphere at a given location and time. Meteorological intelligence is a subset of environmental intelligence and is synonymous with the term weather intelligence .
The earliest known use of the term "meteorological intelligence" in a written document dates to 1854, on pg. 168 of the Eighth Annual Report of the Board of Regents of the Smithsonian Institution . This report discusses the Smithsonian Institution's initiative to transmit meteorological intelligence via telegraph lines. An early reference to "meteorological intelligence" in England dates to an 1866 issue of The Edinburgh Review , which was a prominent Scottish journal during the 19th century (Reeve 1866, pg. 75).
Another documented, early use of the term dates to 1874 in a historical compilation entitled "The American Historical Record" (Lossing 1874, pg. 125). In this book, Lossing uses the term to refer to weather observations transmitted over telegraph lines for the purpose of studying the nature of storms, with the ultimate goal of enhancing public safety through the issuance of storm warnings. This mission was carried out starting in the 1870s by the Army Signal Service, which was responsible for communication (via telegraph) of technical intelligence for the army as well as "meteorological intelligence" for the general welfare of the country (Ingersoll 1879, pg. 156).
From the viewpoint of the intelligence community , the term meteorological intelligence is more limited in its use referring to the use of clandestine or technical means to learn about environmental conditions over enemy territory (Shulsky and Schmitt 2002) as in the North Atlantic weather war . In the military intelligence context, weather information is often referred to as meteorological or environmental intelligence (Hinsley 1990, pg. 420; Platt 1957, pg. 14; U.S. Congress, pg. 164).
With regard to private sector meteorology , the term meteorological intelligence is a broad term of art that is primarily associated with observed and forecast weather information provided to decision makers in one of a number of weather-sensitive business areas, including: energy, forestry , agriculture , telecommunications , transportation , aviation , entertainment , retail and construction (CMOS 2001, pg. 23). It is considered a key aspect of weather risk management for the legal and insurance industries. | https://en.wikipedia.org/wiki/Meteorological_intelligence |
Meteorology is the scientific study of the Earth 's atmosphere and short-term atmospheric phenomena (i.e. weather ), with a focus on weather forecasting . [ 1 ] It has applications in the military , aviation , energy production , transport , agriculture , construction , weather warnings and disaster management .
Along with climatology , atmospheric physics and atmospheric chemistry , meteorology forms the broader field of the atmospheric sciences . The interactions between Earth's atmosphere and its oceans (notably El Niño and La Niña ) are studied in the interdisciplinary field of hydrometeorology . Other interdisciplinary areas include biometeorology , space weather and planetary meteorology. Marine weather forecasting relates meteorology to maritime and coastal safety, based on atmospheric interactions with large bodies of water.
Meteorologists study meteorological phenomena driven by solar radiation , Earth's rotation , ocean currents and other factors. These include everyday weather such as clouds , precipitation and wind patterns , as well as severe weather events such as tropical cyclones and severe winter storms . Such phenomena are quantified using variables like temperature , pressure and humidity , which are then used to forecast weather at local ( microscale ), regional ( mesoscale and synoptic scale ), and global scales . Meteorologists collect data using basic instruments like thermometers , barometers and weather vanes (for surface-level measurements), alongside advanced tools like weather satellites , balloons , reconnaissance aircraft, buoys and radars . The World Meteorological Organization (WMO) ensures international standardization of meteorological research.
The study of meteorology dates back millennia . Ancient civilizations tried to predict weather through folklore , astrology and religious rituals . Aristotle's treatise Meteorology sums up early observations of the field, which advanced little during early medieval times, but experienced a resurgence during the Renaissance , when Alhazen and Descartes challenged Aristotelian theories, emphasizing scientific methods . In the 18th century, accurate measurement tools (e.g. barometer and thermometer) were developed and the first meteorological society was founded. In the 19th century, telegraph -based weather observation networks were formed across broad regions. [ 2 ] In the 20th century, numerical weather prediction (NWP), coupled with advanced satellite and radar technology, introduced sophisticated forecasting models. [ 3 ] Later, computers revolutionized forecasting by processing vast datasets in real time and automatically solving modelling equations. 21st-century meteorology is highly accurate and driven by big data and supercomputing . It is adopting innovations like machine learning , ensemble forecasting and high-resolution global climate modeling. [ 4 ] Climate change -induced extreme weather poses new challenges for forecasting and research, [ 5 ] while inherent uncertainty remains because of the atmosphere's chaotic nature (see butterfly effect ). [ 6 ]
The word meteorology is from the Ancient Greek μετέωρος metéōros ( meteor ) and -λογία -logia ( -(o)logy ), meaning "the study of things high in the air". [ citation needed ]
Early attempts at predicting weather were often related to prophecy and divining , and were sometimes based on astrological ideas. Ancient religions believed meteorological phenomena to be under the control of the gods. [ 7 ] The ability to predict rains and floods based on annual cycles was evidently used by humans at least from the time of agricultural settlement if not earlier. Early approaches to predicting weather were based on astrology and were practiced by priests. The Egyptians had rain-making rituals as early as 3500 BC. [ 7 ]
Ancient Indian Upanishads contain mentions of clouds and seasons . [ 8 ] The Samaveda mentions sacrifices to be performed when certain phenomena were noticed. [ 9 ] Varāhamihira 's classical work Brihatsamhita , written about 500 AD, [ 8 ] provides evidence of weather observation.
Cuneiform inscriptions on Babylonian tablets included associations between thunder and rain. The Chaldeans differentiated the 22° and 46° halos . [ 9 ]
The ancient Greeks were the first to make theories about the weather. Many natural philosophers studied the weather. However, as meteorological instruments did not exist, the inquiry was largely qualitative, and could only be judged by more general theoretical speculations. [ 10 ] Herodotus states that Thales predicted the solar eclipse of 585 BC. He studied Babylonian equinox tables. [ 11 ] According to Seneca, he explained that the cause of the Nile 's annual floods was northerly winds hindering its descent into the sea. [ 12 ] Anaximander and Anaximenes thought that thunder and lightning were caused by air smashing against the cloud, thus kindling the flame. Early meteorological theories generally considered that there was a fire-like substance in the atmosphere. Anaximander defined wind as a flowing of air, but this was not generally accepted for centuries. [ 13 ] A theory to explain summer hail was first proposed by Anaxagoras . He observed that air temperature decreased with increasing height and that clouds contain moisture. He also noted that heat caused objects to rise, and therefore the heat on a summer day would drive clouds to an altitude where the moisture would freeze. [ 14 ] Empedocles theorized on the change of the seasons. He believed that fire and water opposed each other in the atmosphere, and when fire gained the upper hand, the result was summer, and when water did, it was winter. Democritus also wrote about the flooding of the Nile. He said that snow in northern parts of the world melted during the summer solstice. This would cause vapors to form clouds, which would cause storms when driven to the Nile by northerly winds, thus filling the lakes and the Nile. [ 15 ] Hippocrates inquired into the effect of weather on health. Eudoxus claimed that bad weather followed four-year periods, according to Pliny. [ 16 ]
These early observations would form the basis for Aristotle 's Meteorology , written in 350 BC. [ 17 ] [ 18 ] Aristotle is considered the founder of meteorology. [ 19 ] One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle . His work would remain an authority on meteorology for nearly 2,000 years. [ 20 ]
The book De Mundo (composed before 250 BC or between 350 and 200 BC) noted: [ 21 ]
After Aristotle, progress in meteorology stalled for a long time. Theophrastus compiled a book on weather forecasting, called the Book of Signs , as well as On Winds . He gave hundreds of signs for weather phenomena for periods of up to a year. [ 22 ] His system was based on dividing the year by the setting and rising of the Pleiades , into halves at the solstices and equinoxes, and on the continuity of the weather over those periods. He also divided months at the new moon, fourth day, eighth day and full moon, as the points where a change in the weather was likely to occur. The day was divided into sunrise, mid-morning, noon, mid-afternoon and sunset, with corresponding divisions of the night, with change being likely at one of these divisions. [ 23 ] Applying the divisions and a principle of balance in the yearly weather, he came up with forecasts such as that if a lot of rain falls in the winter, the spring is usually dry. Rules based on the actions of animals are also present in his work, such as that if a dog rolls on the ground, it is a sign of a storm. Shooting stars and the Moon were also considered significant. However, he made no attempt to explain these phenomena, referring only to the Aristotelian method. [ 24 ] The work of Theophrastus remained a dominant influence in weather forecasting for nearly 2,000 years. [ 25 ]
Meteorology continued to be studied and developed over the centuries, but it was not until the Renaissance in the 14th to 17th centuries that significant advancements were made in the field. Scientists such as Galileo and Descartes introduced new methods and ideas, leading to the scientific revolution in meteorology.
Speculation on the cause of the flooding of the Nile ended when Eratosthenes , according to Proclus , stated that it was known that man had gone to the sources of the Nile and observed the rains, although interest in its implications continued. [ 26 ]
During the era of Roman Greece and Europe, scientific interest in meteorology waned. In the 1st century BC, most natural philosophers claimed that the clouds and winds extended up to 111 miles, but Posidonius thought that they reached up to five miles, after which the air is clear, liquid and luminous. He closely followed Aristotle's theories. By the end of the second century BC, the center of science shifted from Athens to Alexandria , home to the ancient Library of Alexandria . In the 2nd century AD, Ptolemy 's Almagest dealt with meteorology, because it was considered a subset of astronomy. He gave several astrological weather predictions. [ 27 ] He constructed a map of the world divided into climatic zones by their illumination, in which the length of the longest day (at the summer solstice) increased by half an hour per zone between the equator and the Arctic. [ 28 ] Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations. [ 29 ]
In 25 AD, Pomponius Mela , a Roman geographer, formalized the climatic zone system. [ 30 ] In 63–64 AD, Seneca wrote Naturales quaestiones . It was a compilation and synthesis of ancient Greek theories. However, theology was of foremost importance to Seneca, and he believed that phenomena such as lightning were tied to fate. [ 31 ] The second book (chapter) of Pliny 's Natural History covers meteorology. He states that more than twenty ancient Greek authors studied meteorology. He did not make any personal contributions, and the value of his work is in preserving earlier speculation, much like Seneca's work. [ 32 ]
From 400 to 1100, scientific learning in Europe was preserved by the clergy. Isidore of Seville devoted a considerable attention to meteorology in Etymologiae , De ordine creaturum and De natura rerum . Bede the Venerable was the first Englishman to write about the weather in De Natura Rerum in 703. The work was a summary of then extant classical sources. However, Aristotle's works were largely lost until the twelfth century, including Meteorologica . Isidore and Bede were scientifically minded, but they adhered to the letter of Scripture . [ 33 ]
Islamic civilization translated many ancient works into Arabic which were transmitted and translated in western Europe to Latin. [ 34 ]
In the 9th century, Al-Dinawari wrote the Kitab al-Nabat (Book of Plants), in which he deals with the application of meteorology to agriculture during the Arab Agricultural Revolution . He describes the meteorological character of the sky, the planets and constellations , the sun and moon , the lunar phases indicating seasons and rain, the anwa ( heavenly bodies of rain), and atmospheric phenomena such as winds, thunder, lightning, snow, floods, valleys, rivers, lakes. [ 35 ] [ 36 ]
In 1021, Alhazen showed that atmospheric refraction is also responsible for twilight in Opticae thesaurus ; he estimated that twilight begins when the sun is 19 degrees below the horizon , and used a geometric determination based on this to estimate the maximum possible height of the Earth's atmosphere as 52,000 passus (about 49 miles, or 79 km). [ 37 ]
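A common modern reconstruction of this geometric determination (the details are not given here, and the Earth radius below is the modern value) takes the top of the atmosphere to be the highest point still sunlit when the Sun is an angle θ below the horizon; a tangent-ray construction on a sphere of radius R then yields

\[ h \approx R\left(\frac{1}{\cos(\theta/2)} - 1\right) = 6371\ \mathrm{km}\times\left(\frac{1}{\cos 9.5^{\circ}} - 1\right) \approx 89\ \mathrm{km}, \]

the same order as Alhazen's roughly 79 km; the difference reflects the Earth radius and twilight angle he worked with.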
Adelard of Bath was one of the early translators of the classics. He also discussed meteorological topics in his Quaestiones naturales . He thought dense air produced propulsion in the form of wind. He explained thunder as due to ice colliding in clouds, which melted in summer. In the thirteenth century, Aristotelian theories reestablished dominance in meteorology. For the next four centuries, meteorological work was largely commentary . It has been estimated that over 156 commentaries on the Meteorologica were written before 1650. [ 38 ]
Experimental evidence was less important than appeal to the classics and authority in medieval thought. In the thirteenth century, Roger Bacon advocated experimentation and the mathematical approach. In his Opus majus , he followed Aristotle's theory on the atmosphere being composed of water, air, and fire, supplemented by optics and geometric proofs. He noted that Ptolemy's climatic zones had to be adjusted for topography . [ 39 ]
St. Albert the Great was the first to propose that each drop of falling rain had the form of a small sphere, and that this form meant that the rainbow was produced by light interacting with each raindrop. [ 40 ] Roger Bacon was the first to calculate the angular size of the rainbow. He stated that a rainbow summit cannot appear higher than 42 degrees above the horizon. [ 41 ]
In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon. Theoderic went further and also explained the secondary rainbow. [ 42 ]
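The 42° limit computed by Bacon, and the refraction-plus-internal-reflection mechanism identified by al-Fārisī and Theodoric, can be reproduced with the minimum-deviation argument later made famous by Descartes (a modern reconstruction, assuming a refractive index n ≈ 4/3 for water). For one internal reflection the total deviation of a ray is

\[ D(i) = 180^{\circ} + 2i - 4r, \qquad \sin i = n \sin r, \]

which is minimized when \( \cos^{2} i = (n^{2}-1)/3 \). For n = 4/3 this gives i ≈ 59.4°, r ≈ 40.2° and D_min ≈ 138°, so the primary bow appears concentrated at 180° − 138° ≈ 42° from the antisolar point, matching Bacon's figure.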
By the middle of the sixteenth century, meteorology had developed along two lines: theoretical science based on Meteorologica , and astrological weather forecasting. The pseudoscientific prediction by natural signs became popular and enjoyed protection of the church and princes. This was supported by scientists like Johannes Muller , Leonard Digges , and Johannes Kepler . However, there were skeptics. In the 14th century, Nicole Oresme believed that weather forecasting was possible, but that the rules for it were unknown at the time. Astrological influence in meteorology persisted until the eighteenth century. [ 43 ]
Gerolamo Cardano 's De Subtilitate (1550) was the first work to challenge fundamental aspects of Aristotelian theory. Cardano maintained that there were only three basic elements: earth, air, and water. He discounted fire because it needed material to spread and produced nothing. Cardano thought there were two kinds of air: free air and enclosed air. The former destroyed inanimate things and preserved animate things, while the latter had the opposite effect. [ 44 ]
Rene Descartes 's Discourse on the Method (1637) typifies the beginning of the scientific revolution in meteorology. His scientific method had four principles: to never accept anything unless one clearly knew it to be true; to divide every difficult problem into small problems to tackle; to proceed from the simple to the complex, always seeking relationships; to be as complete and thorough as possible with no prejudice. [ 45 ]
In the appendix Les Meteores , he applied these principles to meteorology. He discussed terrestrial bodies and vapors which arise from them, proceeding to explain the formation of clouds from drops of water, and winds, clouds then dissolving into rain, hail and snow. He also discussed the effects of light on the rainbow. Descartes hypothesized that all bodies were composed of small particles of different shapes and interwovenness. All of his theories were based on this hypothesis. He explained the rain as caused by clouds becoming too large for the air to hold, and that clouds became snow if the air was not warm enough to melt them, or hail if they met colder wind. Like that of his predecessors, Descartes's method was deductive, as meteorological instruments were not yet developed and extensively used. He introduced the Cartesian coordinate system to meteorology and stressed the importance of mathematics in natural science. His work established meteorology as a legitimate branch of physics. [ 46 ]
In the 18th century, the invention of the thermometer and barometer allowed for more accurate measurements of temperature and pressure, leading to a better understanding of atmospheric processes. This century also saw the birth of the first meteorological society, the Societas Meteorologica Palatina in 1780. [ 47 ]
In the 19th century, advances in technology such as the telegraph and photography led to the creation of weather observing networks and the ability to track storms. Additionally, scientists began to use mathematical models to make predictions about the weather. The 20th century saw the development of radar and satellite technology, which greatly improved the ability to observe and track weather systems. In addition, meteorologists and atmospheric scientists began producing routine weather forecasts and temperature predictions. [ 48 ]
In the 20th and 21st centuries, with the advent of computer models and big data, meteorology has become increasingly dependent on numerical methods and computer simulations. This has greatly improved weather forecasting and climate prediction. Additionally, meteorology has expanded to include other areas such as air quality, atmospheric chemistry, and climatology. Advances in observational, theoretical and computational technologies have enabled ever more accurate weather predictions and an improved understanding of weather patterns and air pollution. Today, with modern forecasting and satellite technology, meteorology has become an integral part of everyday life and is used for many purposes, such as aviation, agriculture, and disaster management. [ citation needed ]
In 1441, King Sejong 's son, Prince Munjong of Korea, invented the first standardized rain gauge . [ 49 ] These were sent throughout the Joseon dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer , the first known anemometer. [ 50 ] In 1607, Galileo Galilei constructed a thermoscope . In 1611, Johannes Kepler wrote the first scientific treatise on snow crystals: "Strena Seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow)". [ 51 ] In 1643, Evangelista Torricelli invented the mercury barometer . [ 50 ] In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer . [ 52 ] In 1742, Anders Celsius , a Swedish astronomer, proposed the "centigrade" temperature scale, the predecessor of the current Celsius scale. [ 53 ] In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure . In 1802–1803, Luke Howard wrote On the Modification of Clouds , in which he assigned cloud types Latin names. [ 54 ] In 1806, Francis Beaufort introduced his system for classifying wind speeds . [ 55 ] Near the end of the 19th century the first cloud atlases were published, including the International Cloud Atlas , which has remained in print ever since. The April 1960 launch of the first successful weather satellite , TIROS-1 , marked the beginning of the era in which weather information became available globally.
In 1648, Blaise Pascal rediscovered that atmospheric pressure decreases with height, and deduced that there is a vacuum above the atmosphere. [ 56 ] In 1716, Edmund Halley suggested that aurorae are caused by "magnetic effluvia" moving along the Earth's magnetic field lines. In 1738, Daniel Bernoulli published Hydrodynamics , initiating the kinetic theory of gases and establishing the basic laws for the theory of gases. [ 57 ] In 1761, Joseph Black discovered that ice absorbs heat without changing its temperature when melting. In 1772, Black's student Daniel Rutherford discovered nitrogen , which he called phlogisticated air , interpreting it within the phlogiston theory . [ 58 ] In 1777, Antoine Lavoisier discovered oxygen and developed an explanation for combustion. [ 59 ] In 1783, in his essay "Reflexions sur le phlogistique", [ 60 ] Lavoisier deprecated the phlogiston theory and proposed a caloric theory . [ 61 ] [ 62 ] In 1804, John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation . In 1808, John Dalton defended caloric theory in A New System of Chemistry and described how it combines with matter, especially gases; he proposed that the heat capacity of gases varies inversely with atomic weight . In 1824, Sadi Carnot analyzed the efficiency of steam engines using caloric theory; he developed the notion of a reversible process and, in postulating that no such thing exists in nature, laid the foundation for the second law of thermodynamics .
In 1494, Christopher Columbus experienced a tropical cyclone, which led to the first written European account of a hurricane. [ 63 ] In 1686, Edmund Halley presented a systematic study of the trade winds and monsoons and identified solar heating as the cause of atmospheric motions. [ 64 ] In 1735, an ideal explanation of global circulation through study of the trade winds was written by George Hadley . [ 65 ] In 1743, when Benjamin Franklin was prevented from seeing a lunar eclipse by a hurricane , he decided that cyclones move in a contrary manner to the winds at their periphery. [ 66 ] At first, understanding of the kinematics of how the rotation of the Earth affects airflow was only partial. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. [ 67 ] In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes, with the air within it deflected by the Coriolis force to create the prevailing westerly winds. [ 68 ] Late in the 19th century, the motion of air masses along isobars was understood to be the result of the large-scale interaction of the pressure gradient force and the deflecting force. By 1912, this deflecting force was named the Coriolis effect. [ 69 ] Just after World War I, a group of meteorologists in Norway led by Vilhelm Bjerknes developed the Norwegian cyclone model that explains the generation, intensification and ultimate decay (the life cycle) of mid-latitude cyclones , and introduced the idea of fronts , that is, sharply defined boundaries between air masses . [ 70 ] The group included Carl-Gustaf Rossby (who was the first to explain the large-scale atmospheric flow in terms of fluid dynamics ), Tor Bergeron (who first determined how rain forms) and Jacob Bjerknes .
In the late 16th century and first half of the 17th century a range of meteorological instruments were invented – the thermometer , barometer , hygrometer , as well as wind and rain gauges. In the 1650s natural philosophers started using these instruments to systematically record weather observations. Scientific academies established weather diaries and organised observational networks. [ 71 ] In 1654, Ferdinando II de Medici established the first weather observing network, that consisted of meteorological stations in Florence , Cutigliano , Vallombrosa , Bologna , Parma , Milan , Innsbruck , Osnabrück , Paris and Warsaw . The collected data were sent to Florence at regular time intervals. [ 72 ] In the 1660s Robert Hooke of the Royal Society of London sponsored networks of weather observers. Hippocrates ' treatise Airs, Waters, and Places had linked weather to disease. Thus early meteorologists attempted to correlate weather patterns with epidemic outbreaks, and the climate with public health. [ 71 ]
During the Age of Enlightenment meteorology tried to rationalise traditional weather lore, including astrological meteorology. But there were also attempts to establish a theoretical understanding of weather phenomena. Edmond Halley and George Hadley tried to explain trade winds . They reasoned that the rising mass of heated equator air is replaced by an inflow of cooler air from high latitudes. A flow of warm air at high altitude from equator to poles in turn established an early picture of circulation. Frustration with the lack of discipline among weather observers, and the poor quality of the instruments, led the early modern nation states to organise large observation networks. Thus, by the end of the 18th century, meteorologists had access to large quantities of reliable weather data. [ 71 ] In 1832, an electromagnetic telegraph was created by Baron Schilling . [ 73 ] The arrival of the electrical telegraph in 1837 afforded, for the first time, a practical method for quickly gathering surface weather observations from a wide area. [ 74 ]
This data could be used to produce maps of the state of the atmosphere for a region near the Earth's surface and to study how these states evolved through time. To make frequent weather forecasts based on these data required a reliable network of observations, but it was not until 1849 that the Smithsonian Institution began to establish an observation network across the United States under the leadership of Joseph Henry . [ 75 ] Similar observation networks were established in Europe at this time. The Reverend William Clement Ley was key to the understanding of cirrus clouds and to early understanding of jet streams . [ 76 ] Charles Kenneth Mackinnon Douglas , known as 'CKM' Douglas, read Ley's papers after his death and carried on the early study of weather systems. [ 77 ] Nineteenth-century researchers in meteorology were drawn from military or medical backgrounds, rather than trained as dedicated scientists. [ 78 ] In 1854, the United Kingdom government appointed Robert FitzRoy to the new office of Meteorological Statist to the Board of Trade with the task of gathering weather observations at sea. FitzRoy's office became the United Kingdom Meteorological Office in 1854, the second oldest national meteorological service in the world (the Central Institution for Meteorology and Geodynamics (ZAMG) in Austria, founded in 1851, is the oldest weather service in the world). The first daily weather forecasts made by FitzRoy's Office were published in The Times newspaper in 1860. The following year a system was introduced of hoisting storm warning cones at principal ports when a gale was expected.
FitzRoy coined the term "weather forecast" and tried to separate scientific approaches from prophetic ones. [ 79 ]
Over the next 50 years, many countries established national meteorological services. The India Meteorological Department (1875) was established to follow tropical cyclones and monsoons . [ 80 ] The Finnish Meteorological Central Office (1881) was formed from part of the Magnetic Observatory of Helsinki University . [ 81 ] Japan's Tokyo Meteorological Observatory, the forerunner of the Japan Meteorological Agency , began constructing surface weather maps in 1883. [ 82 ] The United States Weather Bureau (1890) was established under the United States Department of Agriculture . The Australian Bureau of Meteorology (1906) was established by a Meteorology Act to unify existing state meteorological services. [ 83 ] [ 84 ]
In 1904, Norwegian scientist Vilhelm Bjerknes first argued in his paper Weather Forecasting as a Problem in Mechanics and Physics that it should be possible to forecast weather from calculations based upon natural laws . [ 85 ] [ 86 ]
It was not until later in the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction . In 1922, Lewis Fry Richardson published "Weather Prediction By Numerical Process", [ 87 ] building on notes and derivations he had worked on while serving as an ambulance driver in World War I. He described how small terms in the prognostic fluid dynamics equations that govern atmospheric flow could be neglected, and how a numerical calculation scheme could be devised to allow predictions. Richardson envisioned a large auditorium of thousands of people performing the calculations. However, the sheer number of calculations required was too large to complete without electronic computers, and the size of the grid and time steps used in the calculations led to unrealistic results. Numerical analysis later showed that this was due to numerical instability .
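The instability mentioned here is commonly illustrated with the Courant–Friedrichs–Lewy (CFL) condition, formalized in 1928 (a textbook stability constraint, not a detail given in this article): an explicit finite-difference scheme remains stable only if no signal crosses more than one grid cell per time step,

\[ \frac{c\,\Delta t}{\Delta x} \leq 1, \]

where c is the fastest wave speed supported by the equations, Δt the time step and Δx the grid spacing. Fast gravity waves in the full equations force Δt to be far smaller than the steps practical for hand computation, one reason grid and time-step choices like Richardson's produce unrealistic results.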
Starting in the 1950s, numerical forecasts with computers became feasible. [ 88 ] The first weather forecasts derived this way used barotropic (single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves , that is, the pattern of atmospheric lows and highs . [ 89 ] In 1959, the UK Meteorological Office received its first computer, a Ferranti Mercury . [ 90 ]
In the 1960s, the chaotic nature of the atmosphere was first observed and mathematically described by Edward Lorenz , founding the field of chaos theory . [ 91 ] These advances have led to the current use of ensemble forecasting in most major forecasting centers, to take into account uncertainty arising from the chaotic nature of the atmosphere. [ 92 ] Mathematical models used to predict the long-term climate of the Earth ( climate models ) have been developed; their resolution today is as coarse as that of the older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases .
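Lorenz's result can be illustrated in a few lines of code. The sketch below (a minimal illustration, not from the article) integrates the Lorenz-63 system, the simplified convection model in which Lorenz first described atmospheric chaos, and shows two almost identical initial states diverging completely; this sensitivity is precisely the uncertainty that ensemble forecasting samples:

```python
# Minimal sketch: sensitivity to initial conditions in the Lorenz-63
# system. Two initial states differing by one part in a million end up on
# completely different parts of the attractor, illustrating why forecasts
# are run as ensembles of perturbed initial conditions.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one forward-Euler step."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

def trajectory(state, n_steps=3000):
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        state = lorenz_step(state)
        out[i] = state
    return out

a = trajectory(np.array([1.0, 1.0, 1.0]))
b = trajectory(np.array([1.0 + 1e-6, 1.0, 1.0]))  # tiny perturbation
sep = np.linalg.norm(a - b, axis=1)
print(f"separation at t=10: {sep[999]:.4f}; at t=30: {sep[-1]:.4f}")
```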
Meteorologists are scientists who study and work in the field of meteorology. [ 93 ] The American Meteorological Society publishes and continually updates an authoritative electronic Meteorology Glossary . [ 94 ] Meteorologists work in government agencies , private consulting and research services, industrial enterprises, utilities, radio and television stations , and in education . In the United States, meteorologists held about 10,000 jobs in 2018. [ 95 ]
Although weather forecasts and warnings are the best known products of meteorologists for the public, weather presenters on radio and television are not necessarily professional meteorologists. They are most often reporters with little formal meteorological training, using unregulated titles such as weather specialist or weatherman . The American Meteorological Society and National Weather Association issue "Seals of Approval" to weather broadcasters who meet certain requirements, but such certification is not mandatory for being hired by the media.
Each science has its own unique sets of laboratory equipment; in meteorology, there are many qualities of the atmosphere that can be measured. Rain, which can be observed or seen anywhere and at any time, was one of the first atmospheric qualities measured historically. Two other accurately measured qualities are wind and humidity; neither of these can be seen, but both can be felt. The devices to measure these three sprang up in the mid-15th century: respectively, the rain gauge , the anemometer, and the hygrometer. Many attempts had been made prior to the 15th century to construct adequate equipment to measure the many atmospheric variables, but many instruments were faulty in some way or were simply not reliable. Even Aristotle noted the difficulty of measuring the air in some of his work.
Sets of surface measurements are important data to meteorologists. They give a snapshot of a variety of weather conditions at one single location, usually at a weather station , a ship or a weather buoy . The measurements taken at a weather station can include any number of atmospheric observables. Usually, temperature, pressure , wind , and humidity are the variables measured by a thermometer, barometer, anemometer, and hygrometer, respectively. [ 96 ] Professional stations may also include air quality sensors ( carbon monoxide , carbon dioxide , methane , ozone , dust , and smoke ), ceilometer (cloud ceiling), falling precipitation sensor, flood sensor , lightning sensor , microphone ( explosions , sonic booms , thunder ), pyranometer / pyrheliometer / spectroradiometer (IR/Vis/UV photodiodes ), rain gauge / snow gauge , scintillation counter ( background radiation , fallout , radon ), seismometer ( earthquakes and tremors), transmissometer (visibility), and a GPS clock for data logging . Upper air data are of crucial importance for weather forecasting. The most widely used technique is launches of radiosondes . Supplementing the radiosondes, a network of aircraft data collection is organized by the World Meteorological Organization .
Remote sensing , as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. The common types of remote sensing are Radar , Lidar , and satellites (or photogrammetry ). Each collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. Radar and Lidar are not passive because both use EM radiation to illuminate a specific portion of the atmosphere. [ 97 ] Weather satellites along with more general-purpose Earth-observing satellites circling the earth at various altitudes have become an indispensable tool for studying a wide range of phenomena from forest fires to El Niño .
The study of the atmosphere can be divided into distinct areas that depend on both time and spatial scales. At one extreme of this scale is climatology. In the timescales of hours to days, meteorology separates into micro-, meso-, and synoptic scale meteorology. The geospatial size of each of these three scales relates directly to its characteristic timescale.
Other subclassifications are used to describe the unique, local, or broad effects within those subclasses.
Microscale meteorology is the study of atmospheric phenomena on a scale of about 1 kilometre (0.62 mi) or less. Individual thunderstorms, clouds, and local turbulence caused by buildings and other obstacles (such as individual hills) are modeled on this scale. [ 99 ] Misoscale meteorology is an informal subdivision.
Mesoscale meteorology is the study of atmospheric phenomena that have horizontal scales ranging from 1 km to 1000 km and a vertical scale that starts at the Earth's surface and includes the atmospheric boundary layer, troposphere, tropopause , and the lower section of the stratosphere . The terms meso-alpha, meso-beta, and meso-gamma, used to classify the horizontal scales of atmospheric processes, were introduced to the field of mesoscale meteorology by Isidoro Orlanski . [ 100 ] Mesoscale timescales last from less than a day to multiple weeks. The events typically of interest are thunderstorms , squall lines , fronts , precipitation bands in tropical and extratropical cyclones , and topographically generated weather systems such as mountain waves and sea and land breezes . [ 101 ]
Synoptic scale meteorology predicts atmospheric changes at scales of up to 1000 km in space and 10⁵ seconds (about 28 hours) in time. At the synoptic scale, the Coriolis acceleration acting on moving air masses (outside of the tropics) plays a dominant role in predictions. The phenomena typically described by synoptic meteorology include events such as extratropical cyclones, baroclinic troughs and ridges, frontal zones , and to some extent jet streams . All of these are typically given on weather maps for a specific time. The minimum horizontal scale of synoptic phenomena is limited to the spacing between surface observation stations . [ 102 ]
Global scale meteorology is the study of weather patterns related to the transport of heat from the tropics to the poles . Very large scale oscillations are of importance at this scale. These oscillations have time periods typically on the order of months, such as the Madden–Julian oscillation , or years, such as the El Niño–Southern Oscillation and the Pacific decadal oscillation . Global scale meteorology pushes into the range of climatology: the traditional definition of climate is pushed to larger timescales, and with an understanding of these longer-timescale global oscillations, their effect on climate and weather disturbances can be included in synoptic- and mesoscale-timescale predictions.
Numerical Weather Prediction is a main focus in understanding air–sea interaction, tropical meteorology, atmospheric predictability, and tropospheric/stratospheric processes. [ 103 ] The Naval Research Laboratory in Monterey, California, developed a global atmospheric model called Navy Operational Global Atmospheric Prediction System (NOGAPS). NOGAPS is run operationally at Fleet Numerical Meteorology and Oceanography Center for the United States Military. Many other global atmospheric models are run by national meteorological agencies.
Boundary layer meteorology is the study of processes in the air layer directly above Earth's surface, known as the atmospheric boundary layer (ABL). The effects of the surface – heating, cooling, and friction – cause turbulent mixing within the air layer. Significant movement of heat , matter , or momentum on time scales of less than a day is caused by turbulent motions. [ 104 ] Boundary layer meteorology includes the study of all types of surface–atmosphere boundary, including ocean, lake, urban land, and non-urban land.
Dynamic meteorology generally focuses on the fluid dynamics of the atmosphere. The idea of an air parcel is used to define the smallest element of the atmosphere, while ignoring the discrete molecular and chemical nature of the atmosphere. An air parcel is defined as an infinitesimal region in the fluid continuum of the atmosphere. The fundamental laws of fluid dynamics, thermodynamics, and motion are used to study the atmosphere. The physical quantities that characterize the state of the atmosphere are temperature, density, pressure, etc. These variables have unique values in the continuum. [ 98 ]
Weather forecasting is the application of science and technology to predict the state of the atmosphere at a future time and given location. Humans have attempted to predict the weather informally for millennia and formally since at least the 19th century. [ 105 ] [ 106 ] Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve. [ 107 ]
Once an all-human endeavor based mainly upon changes in barometric pressure , current weather conditions, and sky condition, [ 108 ] [ 109 ] forecast models are now used to determine future conditions. Human input is still required to pick the best possible forecast model to base the forecast upon, which involves pattern recognition skills, teleconnections , knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps narrow the error and pick the most likely outcome. [ 110 ] [ 111 ] [ 112 ]
There are a variety of end uses for weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. [ 113 ] Forecasts based on temperature and precipitation are important to agriculture, [ 114 ] [ 115 ] [ 116 ] [ 117 ] and therefore to traders within commodity markets. Temperature forecasts are used by utility companies to estimate demand over coming days. [ 118 ] [ 119 ] [ 120 ] On an everyday basis, people use weather forecasts to determine what to wear. Since outdoor activities are severely curtailed by heavy rain, snow, and wind chill , forecasts can be used to plan activities around these events, and to plan ahead and survive them.
Aviation meteorology deals with the impact of weather on air traffic management . [ 121 ] It is important for air crews to understand the implications of weather on their flight plan as well as their aircraft, as noted by the Aeronautical Information Manual : [ 122 ]
The effects of ice on aircraft are cumulative—thrust is reduced, drag increases, lift lessens, and weight increases. The results are an increase in stall speed and a deterioration of aircraft performance. In extreme cases, 2 to 3 inches of ice can form on the leading edge of the airfoil in less than 5 minutes. It takes but 1/2 inch of ice to reduce the lifting power of some aircraft by 50 percent and increases the frictional drag by an equal percentage. [ 123 ]
Meteorologists, soil scientists , agricultural hydrologists, and agronomists are people concerned with studying the effects of weather and climate on plant distribution, crop yield , water-use efficiency, phenology of plant and animal development, and the energy balance of managed and natural ecosystems. Conversely, they are interested in the role of vegetation on climate and weather. [ 124 ]
Hydrometeorology is the branch of meteorology that deals with the hydrologic cycle , the water budget, and the rainfall statistics of storms . [ 125 ] A hydrometeorologist prepares and issues forecasts of accumulating (quantitative) precipitation, heavy rain, heavy snow, and highlights areas with the potential for flash flooding. Typically the range of knowledge that is required overlaps with climatology, mesoscale and synoptic meteorology, and other geosciences. [ 126 ]
The multidisciplinary nature of the branch can result in technical challenges, since tools and solutions from each of the individual disciplines involved may behave slightly differently, be optimized for different hardware and software platforms, and use different data formats. There are some initiatives – such as the DRIHM project [ 127 ] – that are trying to address this issue. [ 128 ]
Nuclear meteorology investigates the distribution of radioactive aerosols and gases in the atmosphere. [ 129 ]
Maritime meteorology deals with air and wave forecasts for ships operating at sea. Organizations such as the Ocean Prediction Center , Honolulu National Weather Service forecast office, United Kingdom Met Office , KNMI and JMA prepare high seas forecasts for the world's oceans.
Military meteorology is the research and application of meteorology for military purposes. In the United States, the United States Navy 's Commander, Naval Meteorology and Oceanography Command oversees meteorological efforts for the Navy and Marine Corps while the United States Air Force 's Air Force Weather Agency is responsible for the Air Force and Army .
Environmental meteorology mainly analyzes industrial pollution dispersion physically and chemically based on meteorological parameters such as temperature, humidity, wind, and various weather conditions.
Meteorology applications in renewable energy include basic research, "exploration," and potential mapping of wind power and solar radiation for wind and solar energy.
In physics , the meter water equivalent (often m.w.e. or mwe ) is a standard measure of cosmic ray attenuation in underground laboratories . A laboratory at a depth of 1000 m.w.e. is shielded from cosmic rays equivalently to a lab 1,000 m (3,300 ft) below the surface of a body of water. Because laboratories at the same depth (in meters) can have greatly varied levels of cosmic ray penetration, the m.w.e. provides a convenient and consistent way of comparing cosmic ray levels in different underground locations. [ 1 ]
Cosmic ray attenuation is dependent on the density of the material of the overburden , so the m.w.e. is defined as the product of depth and density (also known as an interaction depth). Because the density of water is 1 g/cm 3 , 1 m (100 cm) of water gives an interaction depth of 1 hectogram per square centimetre (100 g/cm 2 ). Some publications use hg/cm 2 instead of m.w.e., although the two units are equivalent. [ 2 ]
For example, the Waste Isolation Pilot Plant , located 660 m (2,170 ft) deep in a salt formation, achieves 1585 m.w.e. shielding. Soudan Mine , at 713 m (2,339 ft) depth is only 8% deeper, but because it is in denser iron-rich rock it achieves 2100 m.w.e. shielding, 32% more.
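Because water has a density of 1 g/cm³, a site's m.w.e. figure is simply its vertical depth in meters multiplied by the mean overburden density in g/cm³. The sketch below illustrates this; the densities used are back-calculated from the shielding figures above and are illustrative values, not official site measurements:

```python
# Minimal sketch: m.w.e. = depth (m) x mean overburden density (g/cm^3),
# since water has a density of 1 g/cm^3. The densities below are
# back-calculated from the article's shielding figures, not measured values.
def meters_water_equivalent(depth_m: float, density_g_cm3: float) -> float:
    return depth_m * density_g_cm3

print(meters_water_equivalent(660, 2.40))  # WIPP (salt): ~1585 m.w.e.
print(meters_water_equivalent(713, 2.95))  # Soudan (iron-rich rock): ~2100 m.w.e.
```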
Another factor that must be accounted for is the shape of the overburden. While some laboratories are located beneath a flat ground surface, many are located in tunnels in mountains. Thus, the distance to the surface in directions other than straight up is less than it would be assuming a flat surface. This can increase the muon flux by a factor of 4 ± 2 . [ 3 ]
The usual conversion between m.w.e. and total muon flux is given by Mei and Hime: [ 4 ]

$$I_{\mu}(h_0) = \left(67.97\, e^{-h_0/0.285} + 2.071\, e^{-h_0/0.698}\right) \times 10^{-6}\ \mathrm{cm^{-2}\,s^{-1}}$$

where $h_0$ is the depth in km.w.e. and $I_{\mu}$ is the total muon flux per cm 2 ⋅s. (The first term dominates for depths up to 1681.5 m.w.e.; below that, the second term dominates. Thus, for great depths, the factor of 4 mentioned above corresponds to a difference of 698 ln 4 ≈ 968 m.w.e.)
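A short numerical sketch of this parameterization (the coefficients are those in the relation reconstructed above; treat the snippet as illustrative rather than as a substitute for the original paper):

```python
# Minimal sketch of the two-exponential Mei & Hime depth-intensity relation
# reconstructed above. Depth is in km.w.e.; the result is the total muon
# flux in muons per cm^2 per second. The first (short-attenuation-length)
# term dominates at depths shallower than ~1.68 km.w.e.
import math

def muon_flux(depth_kmwe: float) -> float:
    return (67.97 * math.exp(-depth_kmwe / 0.285)
            + 2.071 * math.exp(-depth_kmwe / 0.698)) * 1e-6

print(f"{muon_flux(1.585):.2e}")  # roughly the WIPP shielding quoted above
print(f"{muon_flux(6.0):.2e}")    # roughly the SNOLAB depth quoted below
```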
In addition to m.w.e., underground laboratory depth can also be measured in meters of standard rock. Standard rock is defined to have mass number A = 22, atomic number Z = 11 , and density 2.65 g/cm 3 (43.4 g/cu in). [ 5 ] Because most laboratories are under earth and not underwater, the depth in standard rock is often closer to the actual underground depth of the laboratory.
Underground laboratories exist at depths ranging from just below ground level to approximately 6000 m.w.e. at SNOLAB [ 4 ] and 6700 m.w.e. at the Jinping Underground Laboratory in China. [ 6 ]
A metering pump moves a precise volume of liquid in a specified time period, providing an accurate volumetric flow rate . [ 1 ] Delivery of fluids in precise adjustable flow rates is sometimes called metering . The term "metering pump" is based on the application or use rather than the exact kind of pump used, although a couple of types of pumps are far more suitable than most others. [ 2 ]
Although metering pumps can pump water , they are often used to pump chemicals , solutions , or other liquids. Many metering pumps are rated to be able to pump into a high discharge pressure . They are typically made to meter at flow rates which are practically constant (when averaged over time) within a wide range of discharge (outlet) pressure. Manufacturers provide each of their models of metering pumps with a maximum discharge pressure rating against which each model is guaranteed to be able to pump. An engineer, designer, or user should ensure that the pressure and temperature ratings and wetted pump materials are compatible for the application and the type of liquid being pumped.
Most metering pumps have a pump head and a motor . The liquid being pumped goes through the pump head, entering through an inlet line and leaving through an outlet line. The motor is commonly an electric motor which drives the pump head.
Some metering pumps can be used for dispensing . A metering pump is designed to deliver a continuous rate of flow, whereas a dispensing pump is designed to deliver a precise total amount.
Many metering pumps are piston-driven. Piston pumps are positive displacement pumps which can be designed to pump at practically constant flow rates (averaged over time) against a wide range of discharge pressure, including high discharge pressures of thousands of psi .
Piston-driven metering pumps commonly work as follows: There is a piston (sometimes called a plunger), typically cylindrical, which can go in and out of a correspondingly shaped chamber in the pump head. The inlet and outlet lines are joined to the piston chamber. There are two check valves , often ball check valves, attached to the pump head, one at the inlet line and the other at the outlet line. The inlet valve allows flow from the inlet line to the piston chamber, but not in the reverse direction. The outlet valve allows flow from the chamber to the outlet line, but not in reverse. The motor repeatedly moves the piston into and out of the piston chamber, causing the volume of the chamber to repeatedly become smaller and larger. When the piston moves out, a vacuum is created. Low pressure in the chamber causes liquid to enter and fill the chamber through the inlet check valve, but higher pressure at the outlet causes the outlet valve to shut. Then when the piston moves in, it pressurizes the liquid in the chamber. High pressure in the chamber causes the inlet valve to shut and forces the outlet valve to open, forcing liquid out at the outlet. These alternating suction and discharge strokes are repeated over and over to meter the liquid. In the back of the chamber, there is packing around the piston or a doughnut-shaped seal with a toroid-shaped sphincter-like spring inside compressing the seal around the piston. This holds the fluid pressure when the piston slides in and out and makes the pump leak-tight. The packing or seals can wear out after prolonged use and can be replaced. The metering rate can be adjusted by varying the stroke length by which the piston moves back and forth or by varying the speed of the piston motion.
A single-piston pump delivers liquid to the outlet only during the discharge stroke. If the piston's suction and discharge strokes occur at the same speed and liquid is metered out half the time the pump is working, then the overall metering rate averaged over time equals half the average flow rate during the discharge stroke. Some single-piston pumps may have a constant slow piston motion for discharge and a quick retract motion for refilling the pump head. In such cases, the overall metering rate is practically equal to the pumping rate during the discharge stroke.
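As a rough numerical illustration of this relationship (all figures below are hypothetical, chosen only for the example), the time-averaged metering rate follows from the displacement per stroke and the cycle rate, while the fraction of each cycle spent discharging sets the instantaneous rate during the discharge stroke:

```python
# Minimal sketch with made-up numbers: time-averaged delivery of a
# single-piston metering pump. One chamber volume is delivered per full
# suction/discharge cycle; with symmetric strokes (discharge_fraction=0.5)
# the instantaneous rate during discharge is twice the time-averaged rate.
import math

def metering_rates(piston_dia_mm, stroke_mm, cycles_per_min,
                   discharge_fraction=0.5):
    """Return (time-averaged, during-discharge) flow rates in mL/min."""
    stroke_volume_ml = math.pi / 4 * piston_dia_mm**2 * stroke_mm / 1000.0
    average = stroke_volume_ml * cycles_per_min
    during_discharge = average / discharge_fraction
    return average, during_discharge

avg, peak = metering_rates(piston_dia_mm=10.0, stroke_mm=15.0, cycles_per_min=60)
print(f"average {avg:.1f} mL/min, during discharge {peak:.1f} mL/min")
```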
Pumps used in high-pressure chromatography such as HPLC and ion chromatography are much like small piston metering pumps. For wear resistance and chemical resistance to solvents, etc., typically the pistons are made of artificial sapphire and the ball check valves have ruby balls and sapphire seats. To produce good chromatograms, it is desirable to have a pumping flow rate as constant as possible. Either a single piston pump with a quick refill is used or a double pump head with coordinated piston strokes is used to provide as constant a pumping rate as possible.
In order to avoid leakage at the packing or seal, particularly when a liquid is dangerous, toxic , or noxious, diaphragm pumps are used for metering. Diaphragm pumps have a diaphragm through which repeated compression/decompression motion is transmitted. The liquid does not penetrate through the diaphragm, so the liquid inside the pump is sealed off from the outside. Such motion changes the volume of a chamber in the pump head so that liquid enters through an inlet check valve during decompression and exits through an outlet check valve during compression, in a manner similar to piston pumps. Diaphragm pumps can also be made which discharge at fairly high pressure. Diaphragm metering pumps are commonly hydraulically driven.
Peristaltic pumps use motor-driven rollers to roll along flexible tubing, compressing it to push forward a liquid inside. Although peristaltic pumps can be used to meter at lower pressures, the flexible tubing is limited in the level of pressure it can withstand.
The maximum pressure rating of a metering pump is actually the top of the discharge pressure range the pump is guaranteed to pump against at a reasonably controllable flow rate. The pump itself is a pressurizing device often capable of exceeding its pressure rating, although not guaranteed to. For this reason, if there is any stop valve downstream of the pump, a pressure relief valve should be placed in between to prevent overpressuring of the tubing or piping line in case the stop valve is inadvertently shut while the pump is running. The relief valve setting should be below the maximum pressure rating that the piping, tubing, or any other components there could withstand.
Liquids are only very slightly compressible. This property of liquids lets metering pumps discharge liquids at high pressure. Since a liquid can be only slightly compressed during a discharge stroke, it is forced out of the pump head. Gases are much more compressible, so metering pumps are not good at pumping gases. Sometimes, a metering or similar pump has to be primed before operation, i.e., the pump head must be filled with the liquid to be pumped. When gas bubbles enter a pump head, the compression motion compresses the gas but has a hard time forcing it out of the pump head. The pump may stop pumping liquid with gas bubbles in the pump head even though mechanically the pump is going through the motions, repeatedly compressing and decompressing the bubbles. To prevent this type of "vapor lock", chromatography solvents are often degassed before pumping.
If the pressure at the outlet is lower than the pressure at the inlet and remains that way in spite of the pumping, then this pressure difference opens both check valves simultaneously and the liquid flows through the pump head uncontrollably from inlet to outlet. This can happen whether the pump is working or not. This situation can be avoided by placing a correctly rated positive pressure differential check valve downstream of the pump. Such a valve will only open if a minimum rated pressure differential across the valve is exceeded, something which most high-pressure metering pumps can easily exceed.
MethBase is a database of DNA methylation data derived from next-generation sequencing data. [ 1 ] MethBase provides a visualization of publicly available bisulfite sequencing and reduced representation bisulfite sequencing experiments through the UCSC Genome Browser . MethBase contents include single- CpG site resolution methylation levels for each CpG site in the genome of interest, annotation of regions of hypomethylation often associated with gene promoters , and annotation of allele -specific methylation associated with genomic imprinting .
Methanandamide ( AM-356 ) is a synthetically created stable chiral analog of anandamide . [ 1 ] Its effects have been observed to act on the cannabinoid receptors (specifically on CB 1 receptors, which are part of the central nervous system ) found in different organisms such as mammals, fish, and certain invertebrates (e.g. Hydra ).
Methanation is the conversion of carbon monoxide and carbon dioxide (CO x ) to methane (CH 4 ) through hydrogenation . The methanation reactions of CO x were first discovered by Sabatier and Senderens in 1902. [ 1 ]
CO x methanation has many practical applications. It is a means of carbon oxide removal from process gases and is also being discussed as an alternative to PROX in fuel processors for mobile fuel cell applications. [ 2 ]
Methanation as a means of producing synthetic natural gas has been considered since the 1970s. [ 1 ] More recently it has been considered as a way to store energy produced from solar or wind power using power-to-gas systems in conjunction with existing natural gas storage .
The following reactions describe the methanation of carbon monoxide and carbon dioxide, respectively:

CO + 3 H₂ → CH₄ + H₂O

CO₂ + 4 H₂ → CH₄ + 2 H₂O

Both methanation reactions are exothermic, with standard reaction enthalpies of approximately −206 kJ/mol for CO and −165 kJ/mol for CO₂. [ 1 ]
There is disagreement on whether the CO 2 methanation occurs by first associatively adsorbing an adatom hydrogen and forming oxygen intermediates before hydrogenation or dissociating and forming a carbonyl before being hydrogenated. [ 3 ] CO is believed to be methanated through a dissociative mechanism where the carbon-oxygen bond is broken before hydrogenation with an associative mechanism only being observed at high H 2 concentrations.
Methanation reactions over different supported metal catalysts, including Ni, [ 4 ] Ru [ 5 ] and Rh, [ 6 ] have been widely investigated for the production of CH 4 from syngas and other power-to-gas initiatives. [ 3 ] Nickel is the most widely used catalyst due to its high selectivity and low cost. [ 1 ]
Methanation is an important step in the creation of synthetic or substitute natural gas (SNG). [ 7 ] Coal or wood undergo gasification which creates a producer gas that must undergo methanation in order to produce a usable gas that just needs to undergo a final purification step.
The first commercial synthetic gas plant, the Great Plains Synfuels Plant in Beulah, North Dakota, opened in 1984. [ 1 ] It is still operational and produces 1500 MW worth of SNG using coal as the carbon source. In the years since its opening, other commercial facilities have been opened using other carbon sources, such as wood chips. [ 1 ]
In France, the AFUL Chantrerie, located in Nantes, commissioned the MINERVE demonstrator in November 2017. This methanation unit, with a capacity of 14 Nm³/day, was built by Top Industrie with the support of Leaf. The installation feeds a CNG station and injects methane into the natural gas boiler. [ 8 ]
In ammonia production CO and CO 2 are considered poisons to most commonly used catalysts. [ 9 ] Methanation catalysts are added after several hydrogen producing steps to prevent carbon oxide buildup in the ammonia synthesis loop as methane does not have similar adverse effects on ammonia synthesis rates.
Methane ( US : / ˈ m ɛ θ eɪ n / METH -ayn , UK : / ˈ m iː θ eɪ n / MEE -thayn ) is a chemical compound with the chemical formula CH 4 (one carbon atom bonded to four hydrogen atoms). It is a group-14 hydride , the simplest alkane , and the main constituent of natural gas . The abundance of methane on Earth makes it an economically attractive fuel , although capturing and storing it is difficult because it is a gas at standard temperature and pressure . In the Earth's atmosphere methane is transparent to visible light but absorbs infrared radiation , acting as a greenhouse gas . Methane is among the simplest of organic compounds and is also a hydrocarbon .
Naturally occurring methane is found both below ground and under the seafloor and is formed by both geological and biological processes. The largest reservoir of methane is under the seafloor in the form of methane clathrates . When methane reaches the surface and the atmosphere , it is known as atmospheric methane . [ 10 ]
The Earth's atmospheric methane concentration has increased by about 160% since 1750, with the overwhelming percentage caused by human activity. [ 11 ] It accounted for 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases , according to the 2021 Intergovernmental Panel on Climate Change report. [ 12 ] Strong, rapid and sustained reductions in methane emissions could limit near-term warming and improve air quality by reducing global surface ozone. [ 13 ]
Methane has also been detected on other planets, including Mars , which has implications for astrobiology research. [ 14 ]
Methane is a tetrahedral molecule with four equivalent C–H bonds . Its electronic structure is described by four bonding molecular orbitals (MOs) resulting from the overlap of the valence orbitals on C and H . The lowest-energy MO is the result of the overlap of the 2s orbital on carbon with the in-phase combination of the 1s orbitals on the four hydrogen atoms. Above this energy level is a triply degenerate set of MOs that involve overlap of the 2p orbitals on carbon with various linear combinations of the 1s orbitals on hydrogen. The resulting "three-over-one" bonding scheme is consistent with photoelectron spectroscopic measurements.
Methane is an odorless, colorless, and transparent gas at standard temperature and pressure . [ 15 ] It does absorb visible light, especially at the red end of the spectrum, due to overtone bands , but the effect is only noticeable if the light path is very long. This is what gives Uranus and Neptune their blue or bluish-green colors, as light passes through their atmospheres containing methane and is then scattered back out. [ 16 ]
The familiar smell of natural gas as used in homes is achieved by the addition of an odorant , usually blends containing tert -butylthiol , as a safety measure. Methane has a boiling point of −161.5 °C at a pressure of one atmosphere . [ 3 ] As a gas, it is flammable over a range of concentrations (5.4%–17%) in air at standard pressure .
Solid methane exists in several modifications , of which nine are known. [ 17 ] Cooling methane at normal pressure results in the formation of methane I. This substance crystallizes in the cubic system ( space group Fm3̄m). The positions of the hydrogen atoms are not fixed in methane I, i.e. methane molecules may rotate freely. Therefore, it is a plastic crystal . [ 18 ]
The primary chemical reactions of methane are combustion , steam reforming to syngas , and halogenation . In general, methane reactions are difficult to control.
Partial oxidation of methane to methanol (CH₃OH), a more convenient, liquid fuel, is challenging because the reaction typically progresses all the way to carbon dioxide and water even with an insufficient supply of oxygen . The enzyme methane monooxygenase produces methanol from methane, but cannot be used for industrial-scale reactions. [ 19 ] Some homogeneously catalyzed systems and heterogeneous systems have been developed, but all have significant drawbacks. These generally operate by generating protected products which are shielded from overoxidation. Examples include the Catalytica system , copper zeolites , and iron zeolites stabilizing the alpha-oxygen active site. [ 20 ]
One group of bacteria catalyze methane oxidation with nitrite as the oxidant in the absence of oxygen , giving rise to the so-called anaerobic oxidation of methane . [ 21 ]
Like other hydrocarbons , methane is an extremely weak acid . Its pKa in DMSO is estimated to be 56. [ 22 ] It cannot be deprotonated in solution, but the conjugate base is known in forms such as methyllithium .
A variety of positive ions derived from methane have been observed, mostly as unstable species in low-pressure gas mixtures. These include methenium or methyl cation CH₃⁺, the methane cation CH₄⁺, and methanium or protonated methane CH₅⁺. Some of these have been detected in outer space . Methanium can also be produced as diluted solutions from methane with superacids . Cations with higher charge, such as CH₆²⁺ and CH₇³⁺, have been studied theoretically and conjectured to be stable. [ 23 ]
Despite the strength of its C–H bonds, there is intense interest in catalysts that facilitate C–H bond activation in methane (and other lower numbered alkanes ). [ 24 ]
Methane's heat of combustion is 55.5 MJ/kg. [ 25 ] Combustion of methane is a multiple step reaction summarized as follows:

CH₄ + 2 O₂ → CO₂ + 2 H₂O (ΔH° = −891 kJ/mol)
Peters four-step chemistry is a systematically reduced four-step chemistry that explains the burning of methane.
Given appropriate conditions, methane reacts with halogen radicals as follows:

CH₄ + X₂ → CH₃X + HX
where X is a halogen : fluorine (F), chlorine (Cl), bromine (Br), or iodine (I). The mechanism for this process is called free radical halogenation . It is initiated when UV light or some other radical initiator (like peroxides ) produces a halogen atom . A two-step chain reaction ensues in which the halogen atom abstracts a hydrogen atom from a methane molecule, resulting in the formation of a hydrogen halide molecule and a methyl radical (•CH₃). The methyl radical then reacts with a molecule of the halogen to form a molecule of the halomethane, with a new halogen atom as byproduct. [ 26 ] Similar reactions can occur on the halogenated product, leading to replacement of additional hydrogen atoms by halogen atoms with dihalomethane , trihalomethane , and ultimately, tetrahalomethane structures, depending upon reaction conditions and the halogen-to-methane ratio.
This reaction is commonly used with chlorine to produce dichloromethane and chloroform via chloromethane . Carbon tetrachloride can be made with excess chlorine.
Methane may be transported as a refrigerated liquid (liquefied natural gas, or LNG ). While leaks from a refrigerated liquid container are initially heavier than air due to the increased density of the cold gas, the gas at ambient temperature is lighter than air. Gas pipelines distribute large amounts of natural gas, of which methane is the principal component.
Methane is used as a fuel for ovens, homes, water heaters, kilns, automobiles, [ 27 ] [ 28 ] turbines, etc.
As the major constituent of natural gas , methane is important for electricity generation by burning it as a fuel in a gas turbine or steam generator . Compared to other hydrocarbon fuels , methane produces less carbon dioxide for each unit of heat released. At about 891 kJ/mol, methane's heat of combustion is lower than that of any other hydrocarbon, but the ratio of the heat of combustion (891 kJ/mol) to the molecular mass (16.0 g/mol, of which 12.0 g/mol is carbon) shows that methane, being the simplest hydrocarbon, produces more heat per mass unit (55.7 kJ/g) than other complex hydrocarbons. In many areas with a dense enough population, methane is piped into homes and businesses for heating , cooking, and industrial uses. In this context it is usually known as natural gas , which is considered to have an energy content of 39 megajoules per cubic meter, or 1,000 BTU per standard cubic foot . Liquefied natural gas (LNG) is predominantly methane ( CH 4 ) converted into liquid form for ease of storage or transport.
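The per-mass figure quoted above follows directly from the molar heat of combustion and methane's molar mass; a quick arithmetic check:

```python
# Quick check of the figures in the text: dividing the molar heat of
# combustion by the molar mass gives methane's specific energy.
heat_of_combustion_kj_per_mol = 891.0  # kJ/mol, from the text
molar_mass_g_per_mol = 16.0            # g/mol, of which 12.0 g is carbon

specific_energy = heat_of_combustion_kj_per_mol / molar_mass_g_per_mol
print(f"{specific_energy:.1f} kJ/g")   # 55.7 kJ/g, matching the text
```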
Refined liquid methane as well as LNG is used as a rocket fuel , [ 29 ] when combined with liquid oxygen , as in the TQ-12 , BE-4 , Raptor , YF-215 , and Aeon engines. [ 30 ] Due to the similarities between methane and LNG such engines are commonly grouped together under the term methalox .
As a liquid rocket propellant , a methane/ liquid oxygen combination offers the advantage over kerosene / liquid oxygen combination, or kerolox, of producing small exhaust molecules, reducing coking or deposition of soot on engine components. Methane is easier to store than hydrogen due to its higher boiling point and density, as well as its lack of hydrogen embrittlement . [ 31 ] [ 32 ] The lower molecular weight of the exhaust also increases the fraction of the heat energy which is in the form of kinetic energy available for propulsion, increasing the specific impulse of the rocket. Compared to liquid hydrogen , the specific energy of methane is lower but this disadvantage is offset by methane's greater density and temperature range, allowing for smaller and lighter tankage for a given fuel mass. Liquid methane has a temperature range (91–112 K) nearly compatible with liquid oxygen (54–90 K). The fuel currently sees use in operational launch vehicles such as Zhuque-2 , Vulcan and New Glenn as well as in-development launchers such as Starship , Neutron , Terran R , Nova , and Long March 9 . [ 33 ]
Natural gas , which is mostly composed of methane, is used to produce hydrogen gas on an industrial scale. Steam methane reforming (SMR), or simply known as steam reforming, is the standard industrial method of producing commercial bulk hydrogen gas. More than 50 million metric tons are produced annually worldwide (2013), principally from the SMR of natural gas. [ 34 ] Much of this hydrogen is used in petroleum refineries , in the production of chemicals and in food processing. Very large quantities of hydrogen are used in the industrial synthesis of ammonia .
At high temperatures (700–1100 °C) and in the presence of a metal -based catalyst ( nickel ), steam reacts with methane to yield a mixture of CO and H 2 , known as "water gas" or " syngas ":

CH₄ + H₂O → CO + 3 H₂
This reaction is strongly endothermic (consumes heat, Δ H r = 206 kJ/mol).
Additional hydrogen is obtained by the reaction of CO with water via the water-gas shift reaction :

CO + H₂O → CO₂ + H₂
This reaction is mildly exothermic (produces heat, Δ H r = −41 kJ/mol).
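Adding the two reactions gives the overall conversion of methane and steam to carbon dioxide and hydrogen; by Hess's law, the enthalpies quoted above simply sum. A small check:

```python
# Hess's law check on the two reaction enthalpies quoted above:
# steam reforming (CH4 + H2O -> CO + 3 H2) plus the water-gas shift
# (CO + H2O -> CO2 + H2) gives the overall CH4 + 2 H2O -> CO2 + 4 H2.
dH_steam_reforming = +206.0  # kJ/mol, strongly endothermic
dH_water_gas_shift = -41.0   # kJ/mol, mildly exothermic

dH_overall = dH_steam_reforming + dH_water_gas_shift
print(f"CH4 + 2 H2O -> CO2 + 4 H2: dH = {dH_overall:+.0f} kJ/mol")  # +165 kJ/mol
```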
Methane is also subjected to free-radical chlorination in the production of chloromethanes, although methanol is a more typical precursor. [ 35 ]
Hydrogen can also be produced via the direct decomposition of methane, also known as methane pyrolysis , which, unlike steam reforming, produces no greenhouse gases (GHG). The heat needed for the reaction can also be GHG emission free, e.g. from concentrated sunlight, renewable electricity, or burning some of the produced hydrogen. If the methane is from biogas then the process can be a carbon sink . Temperatures in excess of 1200 °C are required to break the bonds of methane to produce hydrogen gas and solid carbon. [ 36 ]
However, through the use of a suitable catalyst the reaction temperature can be reduced to between 550 and 900 °C depending on the chosen catalyst. Dozens of catalysts have been tested, including unsupported and supported metal catalysts, carbonaceous and metal-carbon catalysts. [ 37 ]
The reaction is moderately endothermic as shown in the reaction equation below. [ 38 ]

CH₄(g) → C(s) + 2 H₂(g) (ΔH° = +74.8 kJ/mol)
As a refrigerant , methane has the ASHRAE designation R-50 .
Methane can be generated through geological, biological or industrial routes.
The two main routes for geological methane generation are (i) organic (thermally generated, or thermogenic) and (ii) inorganic ( abiotic ). [ 14 ] Thermogenic methane occurs due to the breakup of organic matter at elevated temperatures and pressures in deep sedimentary strata . Most methane in sedimentary basins is thermogenic; therefore, thermogenic methane is the most important source of natural gas. Thermogenic methane components are typically considered to be relic (from an earlier time). Generally, formation of thermogenic methane (at depth) can occur through organic matter breakup, or organic synthesis. Both ways can involve microorganisms ( methanogenesis ), but may also occur inorganically. The processes involved can also consume methane, with and without microorganisms.
The more important source of methane at depth (crystalline bedrock) is abiotic. Abiotic means that methane is created from inorganic compounds, without biological activity, either through magmatic processes or via water-rock reactions that occur at low temperatures and pressures, like serpentinization . [ 39 ] [ 40 ]
Most of Earth's methane is biogenic and is produced by methanogenesis , [ 41 ] [ 42 ] a form of anaerobic respiration only known to be conducted by some members of the domain Archaea . [ 43 ] Methanogens occur in landfills and soils , [ 44 ] ruminants (for example, cattle ), [ 45 ] the guts of termites, and the anoxic sediments below the seafloor and the bottom of lakes.
This multistep process is used by these microorganisms for energy. The net reaction of methanogenesis is:

CO₂ + 4 H₂ → CH₄ + 2 H₂O
The final step in the process is catalyzed by the enzyme methyl coenzyme M reductase (MCR). [ 46 ]
Wetlands are the largest natural sources of methane to the atmosphere, [ 47 ] accounting for approximately 20 – 30% of atmospheric methane. [ 48 ] Climate change is increasing the amount of methane released from wetlands due to increased temperatures and altered rainfall patterns. This phenomenon is called wetland methane feedback . [ 49 ]
Rice cultivation generates as much as 12% of total global methane emissions due to the long-term flooding of rice fields. [ 50 ]
Ruminants such as cattle belch out methane, accounting for about 22% of the U.S. annual methane emissions to the atmosphere. [ 51 ] One study reported that the livestock sector in general (primarily cattle, chickens, and pigs) produces 37% of all human-induced methane. [ 52 ] A 2013 study estimated that livestock accounted for 44% of human-induced methane and about 15% of human-induced greenhouse gas emissions. [ 53 ] Many efforts are underway to reduce livestock methane production, such as medical treatments and dietary adjustments, [ 54 ] [ 55 ] and to trap the gas to use its combustion energy. [ 56 ]
Most of the subseafloor is anoxic because oxygen is removed by aerobic microorganisms within the first few centimeters of the sediment . Below the oxygen-replete seafloor, methanogens produce methane that is either used by other organisms or becomes trapped in gas hydrates . [ 43 ] These other organisms that utilize methane for energy are known as methanotrophs ('methane-eating'), and are the main reason why little methane generated at depth reaches the sea surface. [ 43 ] Consortia of Archaea and Bacteria have been found to oxidize methane via anaerobic oxidation of methane (AOM); the organisms responsible for this are anaerobic methanotrophic Archaea (ANME) and sulfate-reducing bacteria (SRB). [ 57 ]
Given its cheap abundance in natural gas, there is little incentive to produce methane industrially. Methane can be produced by hydrogenating carbon dioxide through the Sabatier process . Methane is also a side product of the hydrogenation of carbon monoxide in the Fischer–Tropsch process , which is practiced on a large scale to produce longer-chain molecules than methane.
An example of large-scale coal-to-methane gasification is the Great Plains Synfuels plant, started in 1984 in Beulah, North Dakota as a way to develop abundant local resources of low-grade lignite , a resource that is otherwise difficult to transport for its weight, ash content, low calorific value and propensity to spontaneous combustion during storage and transport. A number of similar plants exist around the world, although mostly these plants are targeted towards the production of long chain alkanes for use as gasoline , diesel , or feedstock to other processes.
Power to methane is a technology that uses electrical power to produce hydrogen from water by electrolysis and uses the Sabatier reaction to combine hydrogen with carbon dioxide to produce methane.
Methane can be produced by protonation of methyl lithium or a methyl Grignard reagent such as methylmagnesium chloride . It can also be made from anhydrous sodium acetate and dry sodium hydroxide , mixed and heated above 300 °C (with sodium carbonate as a byproduct). In practice, a requirement for pure methane can easily be fulfilled by a steel gas bottle from standard gas suppliers.
Methane is the major component of natural gas, about 87% by volume. The major source of methane is extraction from geological deposits known as natural gas fields , with coal seam gas extraction becoming a major source (see coal bed methane extraction , a method for extracting methane from a coal deposit, while enhanced coal bed methane recovery is a method of recovering methane from non-mineable coal seams). It is associated with other hydrocarbon fuels, and sometimes accompanied by helium and nitrogen . Methane is produced at shallow levels (low pressure) by anaerobic decay of organic matter and reworked methane from deep under the Earth's surface. In general, the sediments that generate natural gas are buried deeper and at higher temperatures than those that contain oil .
Methane is generally transported in bulk by pipeline in its natural gas form, or by LNG carriers in its liquefied form; few countries transport it by truck.
Methane is an important greenhouse gas , responsible for around 30% of the rise in global temperatures since the industrial revolution. [ 58 ]
Methane has a global warming potential (GWP) of 29.8 ± 11 compared to CO 2 (potential of 1) over a 100-year period, and 82.5 ± 25.8 over a 20-year period. [ 59 ] This means that, for example, a leak of one tonne of methane is equivalent to emitting 82.5 tonnes of carbon dioxide. Burning methane and producing carbon dioxide also reduces the greenhouse gas impact compared to simply venting methane to the atmosphere.
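Converting a methane emission into its CO₂ equivalent is a straightforward multiplication by the GWP for the chosen time horizon; a minimal sketch using the central values quoted above (the stated uncertainty ranges are omitted):

```python
# Minimal sketch: methane emissions to CO2-equivalent using the central
# GWP values quoted in the text (uncertainty ranges omitted).
GWP_CH4 = {"100yr": 29.8, "20yr": 82.5}

def co2_equivalent_tonnes(methane_tonnes: float, horizon: str = "100yr") -> float:
    return methane_tonnes * GWP_CH4[horizon]

print(co2_equivalent_tonnes(1.0, "20yr"))   # 82.5 t CO2e, as in the text
print(co2_equivalent_tonnes(1.0, "100yr"))  # 29.8 t CO2e
```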
As methane is gradually converted into carbon dioxide (and water) in the atmosphere, these values include the climate forcing from the carbon dioxide produced from methane over these timescales.
Annual global methane emissions are currently approximately 580 Mt, [ 60 ] 40% of which is from natural sources and the remaining 60% originating from human activity, known as anthropogenic emissions. The largest anthropogenic source is agriculture , responsible for around one quarter of emissions, closely followed by the energy sector , which includes emissions from coal, oil, natural gas and biofuels. [ 61 ]
Historic methane concentrations in the world's atmosphere have ranged between 300 and 400 nmol/mol during glacial periods commonly known as ice ages , and between 600 and 700 nmol/mol during the warm interglacial periods. A 2012 NASA website said the oceans were a potential important source of Arctic methane, [ 62 ] but more recent studies associate increasing methane levels as caused by human activity. [ 11 ]
Global monitoring of atmospheric methane concentrations began in the 1980s. [ 11 ] The Earth's atmospheric methane concentration has increased 160% since preindustrial levels in the mid-18th century. [ 11 ] In 2013, atmospheric methane accounted for 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases. [ 63 ] Between 2011 and 2019 the annual average increase of methane in the atmosphere was 1866 ppb. [ 12 ] From 2015 to 2019 sharp rises in levels of atmospheric methane were recorded. [ 64 ] [ 65 ]
In 2019, the atmospheric methane concentration was higher than at any time in the last 800,000 years. As stated in the AR6 of the IPCC , "Since 1750, increases in CO 2 (47%) and CH 4 (156%) concentrations far exceed, and increases in N 2 O (23%) are similar to, the natural multi-millennial changes between glacial and interglacial periods over at least the past 800,000 years (very high confidence)". [ 12 ] [ a ] [ 66 ]
In February 2020, it was reported that fugitive emissions and gas venting from the fossil fuel industry may have been significantly underestimated. [ 67 ] [ 68 ] The largest annual increase occurred in 2021 with the overwhelming percentage caused by human activity. [ 11 ]
Climate change can increase atmospheric methane levels by increasing methane production in natural ecosystems, forming a climate change feedback . [ 43 ] [ 69 ] Another explanation for the rise in methane emissions could be a slowdown of the chemical reaction that removes methane from the atmosphere. [ 70 ]
Over 100 countries have signed the Global Methane Pledge , launched in 2021, promising to cut their methane emissions by 30% by 2030. [ 71 ] This could avoid 0.2 °C of warming globally by 2050, although there have been calls for higher commitments in order to reach this target. [ 72 ] The International Energy Agency 's 2022 report states "the most cost-effective opportunities for methane abatement are in the energy sector, especially in oil and gas operations". [ 73 ]
Methane clathrates (also known as methane hydrates) are solid cages of water molecules that trap single molecules of methane. Significant reservoirs of methane clathrates have been found in arctic permafrost and along continental margins beneath the ocean floor within the gas clathrate stability zone , located at high pressures (1 to 100 MPa; lower end requires lower temperature) and low temperatures (< 15 °C; upper end requires higher pressure). [ 74 ] Methane clathrates can form from biogenic methane, thermogenic methane, or a mix of the two. These deposits are both a potential source of methane fuel as well as a potential contributor to global warming. [ 75 ] [ 76 ] The global mass of carbon stored in gas clathrates is still uncertain and has been estimated as high as 12,500 Gt carbon and as low as 500 Gt carbon. [ 49 ] The estimate has declined over time with a most recent estimate of ≈1800 Gt carbon. [ 77 ] A large part of this uncertainty is due to our knowledge gap in sources and sinks of methane and the distribution of methane clathrates at the global scale. For example, a source of methane was discovered relatively recently in an ultraslow spreading ridge in the Arctic. [ 48 ] Some climate models suggest that today's methane emission regime from the ocean floor is potentially similar to that during the period of the Paleocene–Eocene Thermal Maximum ( PETM ) around 55.5 million years ago, although there are no data indicating that methane from clathrate dissociation currently reaches the atmosphere. [ 77 ] Arctic methane release from permafrost and seafloor methane clathrates is a potential consequence and further cause of global warming ; this is known as the clathrate gun hypothesis . [ 78 ] [ 79 ] [ 80 ] [ 81 ] Data from 2016 indicate that Arctic permafrost thaws faster than predicted. [ 82 ]
Methane "degrades air quality and adversely impacts human health, agricultural yields, and ecosystem productivity". [ 83 ]
Methane is extremely flammable and may form explosive mixtures with air. Methane gas explosions are responsible for many deadly mining disasters. [ 84 ] A methane gas explosion was the cause of the Upper Big Branch coal mine disaster in West Virginia on April 5, 2010, killing 29. [ 85 ] Natural gas accidental release has also been a major focus in the field of safety engineering , due to past accidental releases that concluded in the formation of jet fire disasters. [ 86 ] [ 87 ]
The 2015–2016 methane gas leak in Aliso Canyon, California was considered to be the worst in terms of its environmental effect in American history. [ 88 ] [ 89 ] [ 90 ] It was also described as more damaging to the environment than Deepwater Horizon 's leak in the Gulf of Mexico. [ 91 ]
In May 2023 The Guardian published a report blaming Turkmenistan as the worst in the world for methane super emitting . The data collected by Kayrros researchers indicate that two large Turkmen fossil fuel fields leaked 2.6 million and 1.8 million metric tonnes of methane in 2022 alone, pumping the CO 2 equivalent of 366 million tonnes into the atmosphere, surpassing the annual CO 2 emissions of the United Kingdom . [ 92 ]
Methane is also an asphyxiant if the oxygen concentration is reduced to below about 16% by displacement, as most people can tolerate a reduction from 21% to 16% without ill effects. The concentration of methane at which asphyxiation risk becomes significant is much higher than the 5–15% concentration in a flammable or explosive mixture. Methane off-gas can penetrate the interiors of buildings near landfills and expose occupants to significant levels of methane. Some buildings have specially engineered recovery systems below their basements to actively capture this gas and vent it away from the building.
Methane is abundant in many parts of the Solar System and potentially could be harvested on the surface of another Solar System body (in particular, using methane production from local materials found on Mars [ 93 ] or Titan ), providing fuel for a return journey. [ 29 ] [ 94 ]
Negative methane , the negative ion of methane, is also known to exist in interstellar space . [ 95 ] Its mechanism of formation is not fully understood.
Methane has been detected on all planets of the Solar System and most of the larger moons. With the possible exception of Mars , it is believed to have come from abiotic processes. [ 96 ] [ 97 ]
The Curiosity rover has documented seasonal fluctuations of atmospheric methane levels on Mars. These fluctuations peaked at the end of the Martian summer at 0.6 parts per billion. [ 98 ] [ 99 ] [ 100 ] [ 101 ] [ 102 ] [ 103 ] [ 104 ] [ 105 ]
Methane has been proposed as a possible rocket propellant on future Mars missions due in part to the possibility of synthesizing it on the planet by in situ resource utilization . [ 106 ] An adaptation of the Sabatier methanation reaction may be used with a mixed catalyst bed and a reverse water-gas shift in a single reactor to produce methane and oxygen from the raw materials available on Mars, utilizing water from the Martian subsoil and carbon dioxide in the Martian atmosphere . [ 93 ]
Methane could be produced by a non-biological process called serpentinization [ b ] involving water, carbon dioxide, and the mineral olivine , which is known to be common on Mars. [ 107 ]
Methane has been detected in vast abundance on Titan , the largest moon of Saturn . It comprises a significant portion of its atmosphere and also exists in a liquid form on its surface, where it comprises the majority of the liquid in Titan's vast lakes of hydrocarbons, the second largest of which is believed to be almost pure methane in composition. [ 108 ]
The presence of stable lakes of liquid methane on Titan, as well as the surface of Titan being highly chemically active and rich in organic compounds, has led scientists to consider the possibility of life existing within Titan's lakes, using methane as a solvent in place of the water used by Earth-based life [ 109 ] and deriving energy from atmospheric hydrogen reacting with acetylene . [ 110 ]
The discovery of methane is credited to Italian physicist Alessandro Volta , who characterized numerous properties including its flammability limit and origin from decaying organic matter. [ 111 ]
Volta was initially motivated by reports from his friend Father Carlo Giuseppe Campi of inflammable air present in marshes. While on a fishing trip to Lake Maggiore , straddling Italy and Switzerland, in November 1776, he noticed the presence of bubbles in the nearby marshes and decided to investigate. Volta collected the gas rising from the marsh and demonstrated that the gas was inflammable. [ 111 ] [ 112 ]
Volta noted that similar observations of inflammable air had appeared previously in the scientific literature, including a letter written by Benjamin Franklin . [ 113 ]
Following the Felling mine disaster of 1812 in which 92 men perished, Sir Humphry Davy established that the feared firedamp was in fact largely methane. [ 114 ]
The name "methane" was coined in 1866 by the German chemist August Wilhelm von Hofmann . [ 115 ] [ 116 ] The name was derived from methanol .
Etymologically, the word methane combines the chemical suffix " -ane ", which denotes substances belonging to the alkane family, with the word methyl . "Methyl" is derived from the German Methyl (1840) or directly from the French méthyle , a back-formation from the French méthylène (corresponding to English "methylene"), the root of which was coined by Jean-Baptiste Dumas and Eugène Péligot in 1834 from the Greek μέθυ méthy (wine, related to English "mead") and ὕλη hýlē (meaning "wood"). The radical is named after this because it was first detected in methanol , an alcohol first isolated by distillation of wood. The chemical suffix -ane derives from the coordinating chemical suffix -ine , which is from the Latin feminine suffix -ina , applied to represent abstracts. The coordination of "-ane", " -ene ", " -one ", etc. was proposed in 1866 by the German chemist August Wilhelm von Hofmann . [ 117 ]
The abbreviation CH 4 -C can mean the mass of carbon contained in a mass of methane; the mass of methane is always 1.33 times the mass of CH 4 -C. [ 118 ] [ 119 ] CH 4 -C can also mean the methane-to-carbon ratio, which is 1.33 by mass. [ 120 ] At atmospheric scales, methane is commonly measured in teragrams (Tg CH 4 ) or millions of metric tons (MMT CH 4 ), which mean the same thing. [ 121 ] Other standard units are also used, such as the nanomole (nmol, one billionth of a mole), mole (mol), kilogram , and gram . | https://en.wikipedia.org/wiki/Methane |
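The 1.33 factor above is simply the molar-mass ratio of CH 4 to carbon (≈ 16.04/12.01). A minimal sketch of the conversion (function names are illustrative):

    # Convert between the mass of methane and the mass of carbon it contains
    # (CH4-C), using the molar-mass ratio M(CH4)/M(C) = 16.04/12.01.
    M_CH4, M_C = 16.04, 12.01

    def ch4_to_ch4c(mass_ch4: float) -> float:
        """Mass of carbon (CH4-C) contained in a given mass of methane."""
        return mass_ch4 * M_C / M_CH4

    def ch4c_to_ch4(mass_c: float) -> float:
        """Mass of methane corresponding to a given mass of CH4-C."""
        return mass_c * M_CH4 / M_C

    print(ch4c_to_ch4(1.0))  # -> ~1.336, the document's 1.33 factor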
This page provides supplementary chemical data on methane .
Handling this chemical may require notable safety precautions. [ 1 ]
Table data obtained from CRC Handbook of Chemistry and Physics 44th ed. Annotation "(s)" indicates equilibrium temperature of vapor over solid. Otherwise temperature is equilibrium of vapor over liquid. Note that these are all negative temperature values. | https://en.wikipedia.org/wiki/Methane_(data_page) |
Methane functionalization is the process of converting methane in its gaseous state to another molecule with a functional group , typically methanol or acetic acid , through the use of transition metal catalysts .
In the realm of carbon-hydrogen bond activation and functionalization (C-H activation/functionalization), many recent efforts have been made in order to catalytically functionalize the C-H bonds in methane. The large abundance of methane in natural gas or shale gas deposits presents a large potential for its use as a feedstock in modern chemistry. However, given its gaseous natural state, it is quite difficult to transport economically. Its ideal use would be as a raw starting material for methanol or acetic acid synthesis, with plants built at the source to eliminate the issue of transportation. [ 1 ] Methanol, in particular, would be of great use as a potential fuel source, and many efforts have been applied to researching the feasibilities of a methanol economy .
The challenges of C-H activation and functionalization present themselves when several factors are taken into consideration. Firstly, the C-H bond is extremely inert and non-polar, with a high bond dissociation energy, making methane a relatively unreactive starting material. Secondly, any products formed from methane would likely be more reactive than the starting material, which would be detrimental to the selectivity and yield of the reaction. [ 1 ]
The main strategy currently used to increase the reactivity of methane uses transition metal complexes to activate the carbon-hydrogen bonds. In a typical C-H activation mechanism, a transition metal catalyst coordinates to the C-H bond to cleave it, and convert it into a bond with a lower bond dissociation energy. By doing so, the product can be used in further downstream reactions, since it will usually have a new functional group attached to the carbon. It is also important to note the difference between the terms "activation" and "functionalization," since both terms are often used interchangeably, but should be held distinct from each other. Activation refers to the coordination of a metal center to the C-H bond, whereas functionalization occurs when the coordinated metal complex is further reacted with a group "X" to result in the functionalized product. [ 1 ]
The four most common methods of transition metal catalyzed methane activation are the Shilov system , sigma bond metathesis , oxidative addition , and 1,2 addition reactions.
The Shilov system involves platinum-based complexes to produce metal alkyls. It was first discovered when hydrogen-deuterium exchange was observed in a deuterated solution containing the platinum tetrachloride anion. [ 2 ] Shilov et al. were then able to catalytically convert methane into methanol or methyl chloride when a Pt(IV) salt was used as a stoichiometric oxidant. The process is simplified down into three main steps: (1) C-H activation, (2) a redox reaction to form an octahedral intermediate, followed by (3) the formation of the carbon-oxygen bond to form methanol ( Figure 3 ). [ 3 ]
Sigma bond metathesis involves the formation of new C-H and metal-carbon bonds, where the metals are typically in the d 0 configuration. Starting with a metal alkyl, a C-H bond coordinates with the metal complex via sigma bonding. A four-membered transition state is created, where a new metal-carbon bond is formed, and the former C-H linkage is broken ( Figure 4 ). [ 1 ]
In oxidative addition , the metal center's oxidation state increases by 2 units during the process. First, the metal center coordinates with a sigma C-H bond to form an intermediate called a sigma-methane complex. The C-H linkage is then broken, as the metal becomes covalently bonded to both the carbon and the hydrogen ( Figure 5 ). [ 1 ]
Similar to sigma bond metathesis is the 1,2 addition reaction , where a four-membered transition state is also formed. However, a polarized double or triple metal-ligand bond is required in order to favor the formation of the desired product ( Figure 6 ). [ 1 ]
Once the C-H bond of methane is activated by bonding to a transition metal complex, the net functionalization of the alkyl metal complex into another hydrocarbon containing a functional group is actually much harder to achieve. In general, alkanes of various lengths have typically been functionalized by a number of more commonly known reactions: electrophilic activation (Shilov system, see above), dehydrogenation , borylation , hydrogen-deuterium exchange , and carbene / nitrene /oxo insertion. [ 1 ] The functionalization of methane in particular has been reported via four different methods that use homogeneous catalysts rather than heterogeneous catalysts . Heterogeneous systems, using copper- and iron-exchanged zeolites , are also being investigated. In these systems, reactive oxygen species such as Alpha-Oxygen are generated, which can perform a hydrogen atom abstraction . [ 4 ] Finally, photochemically excited elemental mercury has also been shown to activate hydrocarbons, including methane. [ 5 ]
In 1993, Periana et al. reported a synthesis of methyl bisulfate from methane using a mercury catalyst at 180 °C. [ 6 ] Mercuric bisulfate activates methane electrophilically to form a methyl-complex, which then reacts with sulfuric acid to produce methyl bisulfate. The resulting mercury complex Hg 2 (OSO 3 ) 2 is re-oxidized by sulfuric acid to regenerate the catalyst and restart the catalytic cycle ( Figure 7 ).
This method of functionalizing methane preceded the 1998 discovery by the same group of the so-called Catalytica system, the most active cycle to date in terms of turnover rate, yields, and selectivity. [ 7 ] Performing the reaction in sulfuric acid at 220 °C means that the catalyst must be able to withstand these harsh conditions. A platinum- bipyrimidine complex serves as the catalyst. The mechanism for this system is similar to the one described above, where methane is first activated electrophilically to form a methyl-platinum intermediate. The Pt(II) complex is then oxidized to Pt(IV) as two sulfuric acid groups are added to the complex. The reductive elimination of methyl bisulfate transforms the Pt(IV) species back to Pt(II) to regenerate the catalyst ( Figure 8 ).
In a hypothetical combined process, the Catalytica system could be used in a net conversion of methane to methanol. The methyl bisulfate produced in the cycle could be converted to methanol by hydrolysis, and the sulfur dioxide generated could be converted back to sulfuric acid. [ 1 ]
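Written out for illustration, this hypothetical combined process corresponds to the following net equations (these follow from the cycle described above and are not given explicitly in the cited source):

CH 3 OSO 3 H + H 2 O → CH 3 OH + H 2 SO 4 (hydrolysis of methyl bisulfate)

SO 2 + 1/2 O 2 + H 2 O → H 2 SO 4 (reoxidation of sulfur dioxide)

Net: CH 4 + 1/2 O 2 → CH 3 OH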
Periana's group was also able to convert methane into acetic acid using similar conditions to the Catalytica system. Palladium(II) salts were used in this process, and the products formed were a mixture of methanol and acetic acid, along with side products of carbon monoxide and possibly carbon dioxide due to over-oxidation. [ 8 ] The mechanism of reaction involves another electrophilic activation of methane, and when carbon monoxide is incorporated, the acetic acid derivative is generated through its activation to an acyl intermediate ( Figure 9 ).
Another example of acetic acid synthesis was demonstrated by Pombeiro et al., who used vanadium -based complexes in trifluoroacetic acid with peroxodisulfate as the oxidant. [ 9 ] The proposed mechanism is radical-based, with methane as the methyl source and trifluoroacetic acid as the carbonyl source. Minor side products were formed, including methyltrifluoroacetate and methylsulfate.
T. Don Tilley and coworkers were able to use the process of sigma-bond metathesis to design catalytic systems that work by the formation of carbon-carbon bonds. [ 10 ] They first demonstrated an example using a scandium -based system, where methane is dehydrogenated and silated. Starting from phenylsilane, methane pressure converts it into PhMeSiH 2 using a Cp* 2 ScMe catalyst. The scandium complex then transfers the methyl group to the silane by sigma-bond metathesis to form the product and the Cp* 2 ScH intermediate. Reaction of the hydride with methane, driven by the favorable formation of hydrogen gas, regenerates the methyl complex ( Figure 10 ).
Cp* 2 ScMe was also used as a catalyst in the formation of isobutane by adding methane to the double bond of propene . This was achieved when propene and methane were combined in the presence of the scandium catalyst and heated to 80 °C. [ 11 ]
Carbene insertion uses a different strategy for the functionalization of methane. A strategy using metallocarbenes has been demonstrated with several linear and branched alkanes using rhodium -, silver -, copper -, and gold -based catalysts. [ 12 ] With a carbene ligand attached to a metal center, it can be transferred from the coordination sphere and inserted into an activated C-H bond. In this case, there is no interaction between the metal center and the alkane in question, which distinguishes this method from the other methods mentioned above. The general mechanism for this cycle begins with the reaction of an electron-poor metal center with a diazo compound to form a metallo-carbene intermediate. For this reaction to occur, the diazo compound must be very electrophilic, since the C-H bond is such a poor nucleophile as well as being part of an unactivated alkane. The reaction then proceeds in a concerted manner, where the C-H bond of the incoming molecule coordinates with the carbene carbon of the metallocarbene complex. The hydrocarbon then dissociates from the metal center to regenerate the catalyst and free the newly formed carbon-carbon bond ( Figure 11 ).
This route is very successful for higher-order alkanes because no strong metal-carbon or metal-hydrogen bonds are formed that could prevent intermediates from reacting further. The reactions also take place at room temperature under mild conditions. However, when applying this method to methane specifically, the gaseous nature of methane requires an appropriate solvent. Reactions with other alkanes usually use the alkane in question as the solvent itself; however, any C-H bond with a lower BDE or higher polarity than methane will react first and prevent methane functionalization. Therefore, Pérez, Asensio, Etienne, et al. developed a solution using supercritical carbon dioxide as the solvent, which forms above the critical pressure of 73 bar and a temperature of 31 °C. [ 13 ] Under these conditions, scCO 2 behaves as a liquid, and since fluorinated compounds dissolve easily in scCO 2 , highly fluorinated silver-based catalysts were developed and tested with methane and ethyl diazoacetate. However, under the reaction conditions, a yield of only 19% of ethyl propionate was achieved. The reaction depends on a delicate balance between methane pressure and catalyst concentration, and consequently more work is being done to further improve yields. | https://en.wikipedia.org/wiki/Methane_functionalization |
A methane leak is a significant natural gas leak. The term is used for a class of methane emissions , which can come from an industrial facility or pipeline.
Satellite data enables the identification of super-emitter events (synonymous with ultra-emitters; see "Mitigation of Ultra-Emitters") that produce methane plumes . Over 1,000 methane leaks of this type were found worldwide in 2022. [ 1 ] As with other gas leaks , a leak of methane is a safety hazard: coalbed methane in the form of fugitive gas emission has always been a danger to miners. [ 2 ] Methane leaks also have a serious environmental impact. Natural gas contains methane, ethane , and other gases which, from the safety and environmental points of view, raise major issues for atmospheric composition and human health.
As a greenhouse gas and climate change contributor, methane ranks second, after carbon dioxide . [ 3 ] Fossil fuel exploration, transportation and production are responsible for about 40% of human-caused methane emissions. [ 1 ] Leaks too small to be spotted from space make up a long tail of emissions; they can be identified from planes flying at 900 meters (3,000 ft). [ 4 ] According to Fatih Birol of the International Energy Agency , "Methane emissions are still far too high, especially as methane cuts are among the cheapest options to limit near-term global warming". [ 1 ]
Individual methane leaks are reported as specific events with a large quantity of gas released. An example followed the 2022 Nord Stream pipeline sabotage . Following early reports that the escape might exceed 10 5 tonnes, the International Methane Emissions Observatory of the United Nations Environment Programme analyzed the release. In February 2023 it put the mass of methane gas released in the range of 7.5 to 23.0 x 10 4 tonnes . These figures amount to under 0.1% of annual human-made methane emissions. [ 5 ] [ 6 ]
Satellite data detection has shown that methane super emitter sites in Turkmenistan , the USA and Russia are responsible for the largest number of events from fossil fuel facilities. Estimated emissions from oil and gas ultra-emitters rank highest for Turkmenistan, with 1.3 megatonnes (Mt) of methane per year, followed by Russia, the United States, Iran , Kazakhstan, and Algeria . [ 7 ] Equipment failures are normally responsible for the releases, which can last for weeks. [ 8 ]
The Aliso Canyon gas leak of 2015 has been quantified as at least 1.09 x 10 5 tonnes of methane. [ 9 ] Satellite data for the Raspadskaya coal mine , Kemerovo Oblast , Russia indicated in 2022 an hourly methane leakage rate of 87 tonnes; [ 10 ] this compares to 60 tonnes per hour of natural gas leaking from the Aliso Canyon incident, considered among the worst recorded leak events. [ 11 ]
Spain 's Technical University of Valencia , in a study published in 2022, found that a super emitter event at a gas and oil platform in the Gulf of Mexico released around 4 x 10 4 tonnes of methane during a 17-day time period in December 2021 (hourly rate around 98 tonnes). [ 12 ] Another major event in 2022 was a leak of 427 tonnes an hour in August, near Turkmenistan's Caspian coast and a major pipeline. [ 8 ]
Ultra-emitters of methane are characterized by producing more than 25 tons/hour of CH 4 from oil and gas activities, and are in the top 1% of methane emitters in the world. [ 7 ] Reducing emissions from these sites can be done by enforcing leak detection and by reducing venting during routine maintenance. [ 7 ]
Ultra-emitters are common and particularly large in Russia, Iran, and Kazakhstan, representing 10-20% of annual reported emissions across the globe. [ 7 ] The U.S. accounts for about 5% of annual worldwide emissions, but this figure excludes emissions from drilling in the Permian basin, which accounts for 10% of U.S. natural gas production. [ 7 ] Drilling in the Permian basin creates about 2.7 Mt a year of emissions, which is 35% of U.S. oil and gas production emissions. [ 7 ]
Spending for mitigation of ultra-emitters is funded by the International Energy Agency (IEA) , Environmental Protection Agency (EPA) , and International Institute for Applied Systems Analysis (IIASA) . [ 7 ] Emissions from ultra-emitters are expected to be more cost-effective to mitigate than those from average-sized sources, owing to the efficiency of leak-detection and repair efforts. [ 7 ]
The geographic area of Lubbock has been a site of ongoing emissions research to assess the extent and environmental implications of methane leakage from abandoned wells. [ 13 ] Lubbock is located within the Permian Basin in West Texas, United States, and contains an estimated 1,781 drilled wells. [ 13 ] Aeromagnetic surveys are used to detect active and abandoned wells, including those with no visible aboveground markers. [ 13 ] Regular monitoring and repair initiatives targeting emissions from storage tanks can be particularly impactful in mitigating vented emissions. [ 13 ] Even with efforts to accurately measure the greenhouse gas emissions associated with abandoned wells, emissions data remain relatively uncertain due to concerns about gas characterization and source attribution. [ 13 ]
Usage of methane gas detection sensors varies based on region, environmental conditions, and purpose of measurements. Types of sensors include optical sensors, calorimetric sensors, pyroelectric sensors, semiconducting oxide sensors, and electrochemical sensors.
Optical sensors detect changes in light waves that interact with the receptor. They are optimal in regions where there could be electromagnetic interference and at high altitudes where oxygen content is low. [ 14 ] They are also non-destructive and result in little to no environmental harm. However, they have high costs in large settings and low selectivity. [ 14 ]
Calorimetric sensors measure the heat produced from a reaction and compare the value to reactant concentration. [ 14 ] These sensors are low cost and have a simple design. They are able to operate in harsh conditions but are susceptible to cracking and accelerated degradation. [ 14 ] They also require high power consumption to operate and have low detection accuracy.
Pyroelectric sensors convert thermal energy into electrical energy based on pyroelectricity . [ 14 ] They have good sensitivity and responsivity, can operate without oxygen, and have a wide measuring range. Among the limitations of pyroelectric sensors are cost and difficulty in manufacturing, but the most detrimental is the immobility of the sensor once positioned. [ 14 ]
Semiconducting metal oxide sensors measure methane by detecting the absorption of gas on the surface of a metal oxide , which changes its conductivity . [ 14 ] These instruments are low cost, lightweight, and have a long lifespan. They may not be used as widely due to their poor selectivity, sensitivity to changes in temperature and humidity, and significant additive dependence. [ 14 ]
Electrochemical sensors oxidize or reduce the gas detected at an electrode and measure the resulting current to find the methane gas concentration. [ 14 ] These instruments are low cost, non-hazardous, and have low volatility. They also have good selectivity specifically for methane gas and can detect small leaks. They may have slow response times or be susceptible to degradation or loss of electrodes; however, these sensors have returned promising results for the accurate detection of small methane leaks. [ 14 ]
Quantitative reports of methane leaks often use the standard cubic foot (scf) of the United States customary system . Because natural gas is a complex mixture of variable composition, and because conversion depends on pressure and temperature conditions, calculations converting scf to metric units of mass are only approximate. One published conversion figure equates 5 x 10 4 scf of natural gas to 1.32 short tons (1.20 t). [ 15 ]
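A minimal sketch of this conversion (assuming the single published figure above, so the factor is approximate and composition-dependent; the function name is illustrative):

    # Approximate conversion from standard cubic feet (scf) of natural gas
    # to metric tonnes, based on the published figure 5e4 scf = 1.20 t.
    TONNES_PER_SCF = 1.20 / 5.0e4  # ~2.4e-5 t/scf; varies with gas composition

    def scf_to_tonnes(scf: float) -> float:
        """Approximate mass, in metric tonnes, of a given gas volume in scf."""
        return scf * TONNES_PER_SCF

    print(scf_to_tonnes(1.0e6))  # one million scf -> ~24 tonnes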
For detection sensitivity, quantitative criteria are typically stated in units of standard cubic feet per hour (scf/h, "skiff", US) or thousand standard cubic feet per day (Mscf/d); or in metric units such as kilograms per hour (kg/h) and cubic meters per day (m 3 /d). [ 16 ]
To describe the mass balance of methane in the atmosphere, mass rates are given in units of Tg/yr, i.e. teragrams per year, where a teragram is 10 6 tonnes (megagrams). [ 17 ] The methane leak from the Permian Basin , a significant region of the Mid-Continent Oil Producing Area , was estimated for 2018/9 from satellite data as 2.7 Tg/yr. Expressed as a proportion of the mass of extracted gas, the leakage comes to 3.7%. [ 18 ] The 2021 Carbon Mapper project, a collaboration of the Jet Propulsion Laboratory and academia, detected 533 methane super-emitters in the Permian Basin. [ 19 ] | https://en.wikipedia.org/wiki/Methane_leak |
A methane reformer is a device, based on steam reforming , autothermal reforming or partial oxidation , that produces pure hydrogen gas from methane using a catalyst . There are multiple types of reformers in development, but the most common in industry are autothermal reforming (ATR) and steam methane reforming (SMR). Most methods work by exposing methane to a catalyst (usually nickel ) at high temperature and pressure.
Steam reforming (SR), sometimes referred to as steam methane reforming (SMR), uses an external source of hot gas to heat tubes in which a catalytic reaction takes place that converts steam and lighter hydrocarbons such as methane, biogas or refinery feedstock into hydrogen and carbon monoxide (syngas). Syngas reacts further to give more hydrogen and carbon dioxide in the reactor. The carbon oxides are removed before use by means of pressure swing adsorption (PSA) with molecular sieves for the final purification. The PSA works by adsorbing impurities from the syngas stream to leave pure hydrogen gas.
Autothermal reforming (ATR) uses oxygen and carbon dioxide or steam in a reaction with methane to form syngas . The reaction takes place in a single chamber where the methane is partially oxidized. The reaction is exothermic due to the oxidation.
When the ATR uses carbon dioxide, the H 2 :CO ratio produced is 1:1; when the ATR uses steam, the H 2 :CO ratio produced is 2.5:1.
The reactions can be described in the following equations, using CO 2 :
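2 CH 4 + O 2 + CO 2 → 3 H 2 + 3 CO + H 2 O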
And using steam:
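4 CH 4 + O 2 + 2 H 2 O → 10 H 2 + 4 CO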
The outlet temperature of the syngas is between 950 and 1100 °C and outlet pressure can be as high as 100 bar . [ 1 ]
The main difference between SMR and ATR is that SMR uses oxygen only indirectly, via air combusted as an external heat source, while ATR combusts oxygen directly in the reactor. The advantage of ATR is that the H 2 :CO ratio can be varied; this is particularly useful for producing certain second generation biofuels , such as DME , which requires a 1:1 H 2 :CO ratio.
Partial oxidation (POX) is a type of chemical reaction. It occurs when a substoichiometric fuel-air mixture is partially combusted in a reformer, creating a hydrogen-rich syngas which can then be put to further use.
The capital cost of steam reforming plants is prohibitive for small to medium size applications because the technology does not scale down well. Conventional steam reforming plants operate at pressures between 200 and 600 psi with outlet temperatures in the range of 815 to 925 °C. However, analyses have shown that even though it is more costly to construct, a well-designed SMR can produce hydrogen more cost-effectively than an ATR for smaller applications. [ 2 ] | https://en.wikipedia.org/wiki/Methane_reformer |
Methanediol , also known as formaldehyde monohydrate or methylene glycol , is an organic compound with chemical formula CH 2 (OH) 2 . It is the simplest geminal diol . In aqueous solutions it coexists with oligomers (short polymers). The compound is closely related and convertible to the industrially significant derivatives paraformaldehyde ( (CH 2 O) n ), formaldehyde ( H 2 C=O ), and 1,3,5-trioxane ( (CH 2 O) 3 ). [ 3 ]
Methanediol is a product of the hydration of formaldehyde. The equilibrium constant for hydration is estimated to be 10 3 ; [ 4 ] accordingly, CH 2 (OH) 2 predominates in dilute (<0.1%) solution. In more concentrated solutions, it oligomerizes to HO(CH 2 O) n H . [ 3 ]
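For illustration, with a hydration constant K = [CH 2 (OH) 2 ]/[CH 2 O] ≈ 10 3 , the hydrated fraction in dilute solution follows directly (a minimal sketch; the K value is the estimate cited in the text):

    # Fraction of dissolved formaldehyde present as the diol CH2(OH)2,
    # given the hydration equilibrium constant K = [diol]/[CH2O].
    K_HYDRATION = 1.0e3  # estimated value cited in the text

    fraction_diol = K_HYDRATION / (1.0 + K_HYDRATION)
    print(f"{fraction_diol:.3%}")  # -> 99.900%: the diol predominates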
The dianion, methanediolate, is believed to be an intermediate in the crossed Cannizzaro reaction .
Gaseous methanediol can be generated by electron irradiation and sublimation of a mixture of methanol and oxygen ices. [ 5 ]
Methanediol is believed to occur as an intermediate in the decomposition of carbonyl compounds in the atmosphere, and as a product of ozonolysis on these compounds. [ 5 ]
Methanediol, rather than formaldehyde, is listed as one of the main ingredients of " Brazilian blowout ", a hair-straightening formula marketed in the United States . The equilibrium with formaldehyde has caused concern since formaldehyde in hair straighteners is a health hazard. [ 6 ] [ 7 ] Research funded by the Professional Keratin Smoothing Council (PKSC), an industry association that represents selected manufacturers of professional-use only keratin smoothing products, has disputed the risk. [ 8 ] | https://en.wikipedia.org/wiki/Methanediol |
Methanedithiol is an organosulfur compound with the formula H 2 C(SH) 2 . A seldom used chemical, it forms when formaldehyde reacts with hydrogen sulfide under pressure:
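CH 2 O + 2 H 2 S → CH 2 (SH) 2 + H 2 O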
This reaction competes with formation of trithiane :
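3 CH 2 O + 3 H 2 S → (CH 2 S) 3 + 3 H 2 O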
Methanedithiol forms a solid dibenzoate derivative upon treatment with benzoic anhydride : [ 1 ]
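A balanced equation consistent with this transformation (shown for illustration; not written out in the cited source) is:

CH 2 (SH) 2 + 2 (C 6 H 5 CO) 2 O → CH 2 (SCOC 6 H 5 ) 2 + 2 C 6 H 5 COOH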
Methanetrithiol is also known. | https://en.wikipedia.org/wiki/Methanedithiol |
Methanesulfonic acid ( MsOH , MSA ) or methanesulphonic acid (in British English) is an organosulfur compound, a colorless liquid, with the molecular formula CH 3 SO 3 H and structure H 3 C − S(=O) 2 − OH . It is the simplest of the alkylsulfonic acids ( R−S(=O) 2 −OH ). Salts and esters of methanesulfonic acid are known as mesylates (or methanesulfonates, as in ethyl methanesulfonate ). It is hygroscopic in its concentrated form. Methanesulfonic acid can dissolve a wide range of metal salts, many of them in significantly higher concentrations than in hydrochloric acid (HCl) or sulfuric acid ( H 2 SO 4 ). [ 3 ]
German chemist Hermann Kolbe discovered MSA between 1842 and 1845 and originally termed it methyl hyposulphuric acid . [ 4 ] [ 5 ] [ 6 ]
The discovery stemmed from earlier work by Berzelius and Marcet in 1813, who treated carbon disulfide with moist chlorine and produced a compound they named "sulphite of chloride of carbon". By reacting it with barium hydroxide, Kolbe demonstrated it to actually be trichloromethylsulfonyl chloride ( CCl 3 SO 2 Cl in modern notation). [ 4 ] [ 5 ]
2 CCl 3 SO 2 Cl + 2 Ba(OH) 2 → Ba(CCl 3 SO 3 ) 2 + BaCl 2 + 2 H 2 O
From the resulting barium trichloromethylsulfonate, Kolbe isolated the free acid, which he was then able to dechlorinate sequentially with electrolytically generated atomic hydrogen to ultimately yield MSA. [ 4 ] [ 5 ]
CCl 3 SO 3 H + 2 H → CHCl 2 SO 3 H + HCl → … → CH 3 SO 3 H (overall: CCl 3 SO 3 H + 6 H → CH 3 SO 3 H + 3 HCl)
Kolbe's research on methanesulfonic and chloroacetic acids was hailed by Berzelius as strong evidence for his theory of copulated compounds, a modification of radical theory to accommodate substitution reactions which posited the combination of organic and inorganic moieties without significantly altering the properties of the latter. [ 6 ]
Later in the 19th century, the name transitioned to methyl sulphonic acid . Other historical laboratory synthesis routes included oxidizing methanethiol , dimethyl disulfide or methyl thiocyanate with nitric acid . [ 5 ]
The first commercial production of MSA, developed in the 1940s by Standard Oil of Indiana , was based on oxidation of dimethylsulfide by O 2 from air. Although inexpensive, this process suffered from poor product quality and explosion hazards.
Starting from the 1960s, it received a shortened name of mesylic acid [ 7 ] after the term for the " mesyl " group coined by Helferich et al. in 1938. [ 8 ]
In 1967, the Pennwalt Corporation (USA) developed a different process, in which dimethylsulfide (as a water-based emulsion) is oxidized with chlorine , followed by extraction and purification. In 2022 this chlorine-oxidation process was used only by Arkema (France) for making high-purity MSA. This process is not popular on a large scale, because it co-produces large quantities of hydrochloric acid .
Between 1970 and 2000, MSA was used only on a relatively small scale in niche markets (for example, in the microelectronic and electroplating industries since the 1980s), mainly because of its rather high price and limited availability. However, this situation changed around 2003, when BASF launched commercial production of MSA in Ludwigshafen based on a modified version of the aforementioned air oxidation process, oxidising dimethyl disulfide with nitric acid that is in turn regenerated using atmospheric oxygen. The dimethyl disulfide itself is produced in one step from methanol (obtained from syngas ), hydrogen and sulfur . [ 9 ]
An even better (lower-cost and environmentally friendlier) process of making methanesulfonic acid was developed in 2016 by Grillo-Werke AG (Germany). It is based on a direct reaction between methane and oleum at around 50 °C and 100 bar in the presence of a potassium persulfate initiator. [ 10 ] Further addition of sulfur trioxide gives methanedisulfonic acid instead. [ 11 ] This technology was acquired and commercialized by BASF in 2019. [ 12 ]
Since ca. 2000 methanesulfonic acid has become a popular replacement for other acids in numerous industrial and laboratory applications, because it: [ 13 ]
The closely related p -toluenesulfonic acid (PTSA) is solid.
Methanesulfonic acid can be used in the generation of borane (BH 3 ) : when methanesulfonic acid reacts with NaBH 4 in an aprotic solvent such as THF or DMSO , the complex of BH 3 with the solvent is formed. [ 14 ]
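A balanced equation consistent with this (using THF as the solvent; shown for illustration, not written out in the cited source) is:

NaBH 4 + CH 3 SO 3 H + THF → BH 3 ·THF + CH 3 SO 3 Na + H 2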
Solutions of methanesulfonic acid are used for the electroplating of tin and tin-lead solders. It is displacing the use of fluoroboric acid , which releases corrosive and volatile hydrogen fluoride . [ 15 ]
Methanesulfonic acid is also a primary ingredient in rust and scale removers. [ 16 ] It is used to clean off surface rust from ceramic, tiles and porcelain which are usually susceptible to acid attack. | https://en.wikipedia.org/wiki/Methanesulfonic_acid |
Methanesulfonic anhydride ( Ms 2 O ) is the acid anhydride of methanesulfonic acid . Like methanesulfonyl chloride (MsCl), it may be used to generate mesylates (methanesulfonyl esters).
Ms 2 O may be prepared by the dehydration of methanesulfonic acid with phosphorus pentoxide . [ 2 ]
Ms 2 O can be purified by distillation under vacuum (distillation of a solid) or by recrystallization from methyl tert-butyl ether / toluene .
Passage of hydrogen chloride through molten Ms 2 O yields MsCl. [ 3 ]
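This corresponds to the overall equation (shown for illustration; not written out in the cited source):

(CH 3 SO 2 ) 2 O + HCl → CH 3 SO 2 Cl + CH 3 SO 3 H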
Similar to MsCl, Ms 2 O can perform mesylation of alcohols to form sulfonates . Use of Ms 2 O avoids the alkyl chloride, which often appears as a side-product when MsCl is used. [ 4 ] Unlike MsCl, Ms 2 O may not be suitable for mesylation of unsaturated alcohols. [ 5 ]
Examples of mesylation of alcohols with Ms 2 O:
Ms 2 O also converts amines to sulfonamides . [ 7 ]
Assisted by a Lewis acid catalyst , Friedel-Crafts methylsulfonation of aryl rings can be achieved with Ms 2 O. In contrast to MsCl, both activated and deactivated benzene derivatives form the corresponding sulfonates in satisfactory yields with Ms 2 O. [ 8 ]
Examples of aromatic sulfonation with Ms 2 O:
Ms 2 O catalyzes the esterification of alcohols by carboxylic acids. 2-Naphthyl acetate was prepared from 2-naphthol and glacial (anhydrous) acetic acid in the presence of Ms 2 O. Both alcohols of ethylene glycol were successfully benzoylated with benzoic acid and Ms 2 O. However, for free alcohols on monosaccharides , the acetylation did not go to completion. [ 2 ]
Like the Pfitzner–Moffatt oxidation and Swern oxidation , Ms 2 O with DMSO can oxidize primary and secondary alcohols to aldehydes and ketones , respectively, in HMPA . [ 10 ] This method also applies to benzylic alcohols. [ 10 ] HMPA may be substituted by dichloromethane, but this may result in more side-products. [ 10 ] | https://en.wikipedia.org/wiki/Methanesulfonic_anhydride |
Methanesulfonyl chloride ( mesyl chloride ) is an organosulfur compound with the formula CH 3 SO 2 Cl . Using the organic pseudoelement symbol Ms for the methanesulfonyl (or mesyl) group CH 3 SO 2 –, it is frequently abbreviated MsCl in reaction schemes or equations. It is a colourless liquid that dissolves in polar organic solvents but is reactive toward water, alcohols, and many amines. The simplest organic sulfonyl chloride , it is used to make methanesulfonates and to generate the elusive molecule sulfene (methylenedioxosulfur(VI)). [ 7 ]
It is produced by the reaction of methane and sulfuryl chloride in a radical reaction :
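CH 4 + SO 2 Cl 2 → CH 3 SO 2 Cl + HCl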
Another method of production entails chlorination of methanesulfonic acid with thionyl chloride or phosgene :
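CH 3 SO 3 H + SOCl 2 → CH 3 SO 2 Cl + SO 2 + HCl

CH 3 SO 3 H + COCl 2 → CH 3 SO 2 Cl + CO 2 + HCl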
Methanesulfonyl chloride is a precursor to many compounds because it is highly reactive. It is an electrophile, functioning as a source of the "CH 3 SO 2 + " synthon . [ 7 ]
Methanesulfonyl chloride is mainly used to give methanesulfonates by its reaction with alcohols in the presence of a non-nucleophilic base . [ 8 ] In contrast to the formation of toluenesulfonates from alcohols and p -toluenesulfonyl chloride in the presence of pyridine, the formation of methanesulfonates is believed to proceed via a mechanism wherein methanesulfonyl chloride first undergoes an E1cb elimination to generate the highly reactive parent sulfene ( CH 2 =SO 2 ), followed by attack by the alcohol and rapid proton transfer to generate the observed product. This mechanistic proposal is supported by isotope labeling experiments and the trapping of the transient sulfene as cycloadducts. [ 9 ]
Methanesulfonates are used as intermediates in substitution reactions , elimination reactions , reductions , and rearrangement reactions . When treated with a Lewis acid , oxime methanesulfonates undergo facile Beckmann rearrangement . [ 10 ]
Methanesulfonates are occasionally used as a protecting group for alcohols. They are stable to acidic conditions and are cleaved back to the alcohol using sodium amalgam . [ 11 ]
Methanesulfonyl chloride reacts with primary and secondary amines to give methanesulfonamides . Unlike methanesulfonates, methanesulfonamides are very resistant toward hydrolysis under both acidic and basic conditions. [ 7 ] When used as a protecting group, they can be converted back to amines using lithium aluminium hydride or a dissolving metal reduction . [ 12 ]
In the presence of copper(II) chloride , methanesulfonyl chloride will add across alkynes to form β-chloro sulfones . [ 13 ]
Upon treatment with a base, such as triethylamine , methanesulfonyl chloride will undergo an elimination to form sulfene . Sulfene can undergo cycloadditions to form various heterocycles. α-Hydroxyketones react with sulfene to form five-membered sultones . [ 14 ]
Acyliminium ions can be formed from α-hydroxy amides using methanesulfonyl chloride and a base, typically triethylamine . [ 15 ]
Methanesulfonyl chloride is highly toxic by inhalation, corrosive , and acts as a lachrymator . It reacts with nucleophilic reagents (including water) in a strongly exothermic manner. When heated to decomposition point, it emits toxic vapors of sulfur oxides and hydrogen chloride . [ 16 ] | https://en.wikipedia.org/wiki/Methanesulfonyl_chloride |
Methanobactin (mb) is a class of copper-binding and reducing chromophoric peptides initially identified in the methanotroph Methylococcus capsulatus Bath, and later in Methylosinus trichosporium OB3b, during the isolation of the membrane-associated or particulate methane monooxygenase (pMMO). [ 1 ] It is thought to be secreted into the extracellular medium to recruit copper, a critical component of methane monooxygenase, the first enzyme in the series that catalyzes the oxidation of methane into methanol . Methanobactin functions as a chalkophore, similar to iron siderophores , by binding Cu(II) or Cu(I) and then shuttling the copper into the cell. Methanobactin has an extremely high affinity for Cu(I), with a binding constant of approximately 10 20 M −1 at pH 8. [ 2 ] Additionally, methanobactin can reduce Cu(II), which is toxic to cells, to Cu(I), the form used in pMMO. [ 3 ] Moreover, different species of methanobactin are hypothesized to be ubiquitous within the biosphere, especially in light of the discovery of molecules produced by other type II methanotrophs that similarly bind and reduce copper(II) to copper(I). [ 1 ]
Methanobactin OB3b is a commonly studied methanobactin. It has a molecular weight of 1154 Da when metal free. OB3b is composed of 9 amino acid residues with two oxazolone rings, which take part in binding to copper ions. [ 4 ] [ 5 ] The oxazolone rings are susceptible to cleavage under low pH conditions, which releases any metal ion bound to the rings. Copper is bound and reduced at a tetradentate binding site composed of 2 oxazolone rings and 2 modified enethiol groups. [ 4 ] In particular, the origin and function of these oxazolone rings in methanobactin OB3b have been the subject of research, since these domains appear unique.
In 2010, it was suggested that mb OB3b is derived from a small, ribosomally-produced peptide precursor with the sequence L-C-G-S-C-Y-P-C-S-C-M. [ 6 ] Functional mbOB3b is composed of (isobutyl group)-(Oxazolone ring A)-G-S-C-Y-(Oxazolone ring B)-S-M. [ 6 ] (Note that some specimens of mbOB3b are found without the C-terminal methionine and appear fully functional.)
It has been argued that the chromophoric rings of this particular species of methanobactin enable mbOB3b to bind and reduce other metals. For example, mbOB3b can reduce Ag(I) to Ag(0), Au(III) to Au(0), Cr(VI) to Cr(III), and Hg(II) to Hg(I); it is also able to bind Co(II), Zn (II), Mn(II), Pb(II), and U(IV). [ 1 ] Because of this, it is possible that methanobactin may have several medical and environmental applications as a metal chelator and reducing agent.
The mechanism of metal reduction is currently undetermined. It has been shown that the tetradentate binding configuration of copper(I) in mbOB3b necessitates the ligation of a water molecule to the copper ion. [ 7 ] This has been used to argue that water is the source of electrons for reducing the bound metal ion. Others have suggested that the disulfide bridge in methanobactin's structure is the source of the electron, though XPS has shown that this bond is still intact in copper-bound methanobactin. [ 6 ] The source of this reducing electron remains elusive.
Methanobactin SB2 is produced by Methylocystis bacteria. SB2 is much smaller than OB3b, with a molecular weight of 851 Da when metal free. [ 6 ] SB2 contains one imidazole ring and one oxazolone ring, as well as a sulfate group, that are thought to partake in binding copper. | https://en.wikipedia.org/wiki/Methanobactin |
Methanogens are anaerobic archaea that produce methane as a byproduct of their energy metabolism , i.e., catabolism . Methane production, or methanogenesis , is the only biochemical pathway for ATP generation in methanogens. All known methanogens belong exclusively to the domain Archaea, although some bacteria , plants , and animal cells are also known to produce methane. [ 1 ] However, the biochemical pathway for methane production in these organisms differs from that in methanogens and does not contribute to ATP formation. Methanogens belong to various phyla within the domain Archaea. Previous studies placed all known methanogens into the superphylum Euryarchaeota. [ 2 ] [ 3 ] However, recent phylogenomic data have led to their reclassification into several different phyla. [ 4 ] Methanogens are common in various anoxic environments, such as marine and freshwater sediments, wetlands , the digestive tracts of animals, wastewater treatment plants, rice paddy soil, and landfills . [ 5 ] While some methanogens are extremophiles , such as Methanopyrus kandleri , which grows between 84 and 110°C, [ 6 ] or Methanonatronarchaeum thermophilum , which grows at a pH range of 8.2 to 10.2 and a Na + concentration of 3 to 4.8 M, [ 7 ] most of the isolates are mesophilic and grow around neutral pH. [ 8 ]
Methanogens are usually cocci (spherical) or rods (cylindrical) in shape, but long filaments ( Methanobrevibacter filiformis , Methanospirillum hungatei ) and curved forms ( Methanobrevibacter curvatus , Methanobrevibacter cuticularis ) also occur. There are over 150 described species of methanogens, [ 9 ] which do not form a monophyletic group in the phylum Euryarchaeota (see Taxonomy). They are exclusively anaerobic organisms that cannot function under aerobic conditions due to the extreme oxygen sensitivity of methanogenesis enzymes and FeS clusters involved in ATP production. However, the degree of oxygen sensitivity varies, as methanogenesis has often been detected in temporarily oxygenated environments such as rice paddy soil, [ 10 ] [ 11 ] [ 12 ] and various molecular mechanisms potentially involved in oxygen and reactive oxygen species (ROS) detoxification have been proposed. [ 13 ] For instance, a recently identified species, Candidatus Methanothrix paradoxum , common in wetlands and soil, can function in anoxic microsites within aerobic environments, [ 14 ] but it is sensitive to oxygen even at trace levels and cannot usually sustain oxygen stress for a prolonged time. However, Methanosarcina barkeri , from the sister family Methanosarcinaceae, is exceptional in possessing a superoxide dismutase (SOD) enzyme and may survive longer than the others in the presence of O 2 . [ 3 ]
As is the case for other archaea, methanogens lack peptidoglycan , a polymer that is found in the cell walls of bacteria . [ 15 ] Instead, some methanogens have a cell wall formed by pseudopeptidoglycan (also known as pseudomurein ). Other methanogens have a paracrystalline protein array (S-layer) that fits together like a jigsaw puzzle . [ 5 ] In some lineages there are less common types of cell envelope such as the proteinaceous sheath of Methanospirillum or the methanochondroitin of Methanosarcina aggregated cells. [ 16 ]
In anaerobic environments , methanogens play a vital ecological role, removing excess hydrogen and fermentation products that have been produced by other forms of anaerobic respiration . [ 17 ] Methanogens typically thrive in environments in which all electron acceptors other than CO 2 (such as oxygen , nitrate , ferric iron (Fe(III)), and sulfate ) have been depleted. Such environments include wetlands and rice paddy soil, the digestive tracts of various animals (ruminants, arthropods, humans), [ 18 ] [ 19 ] [ 20 ] wastewater treatment plants and landfills, deep-water oceanic sediments, and hydrothermal vents. [ 21 ] Most of these environments are not categorized as extreme, and thus the methanogens inhabiting them are also not considered extremophiles. However, many well-studied methanogens are thermophiles such as Methanopyrus kandleri , Methanothermobacter marburgensis , Methanocaldococcus jannaschii . On the other hand, gut methanogens such as Methanobrevibacter smithii common in humans or Methanobrevibacter ruminantium omnipresent in ruminants are mesophiles . [ citation needed ]
In deep basaltic rocks near the mid-ocean ridges , methanogens can obtain their hydrogen from the serpentinization reaction of olivine as observed in the hydrothermal field of Lost City . The thermal breakdown of water and water radiolysis are other possible sources of hydrogen. Methanogens are key agents of remineralization of organic carbon in continental margin sediments and other aquatic sediments with high rates of sedimentation and high sediment organic matter. Under the correct conditions of pressure and temperature, biogenic methane can accumulate in massive deposits of methane clathrates [ 22 ] that account for a significant fraction of organic carbon in continental margin sediments and represent a key reservoir of a potent greenhouse gas. [ 23 ]
Methanogens have been found in several extreme environments on Earth – buried under kilometres of ice in Greenland and living in hot, dry desert soil. They are known to be the most common archaea in deep subterranean habitats. Live microbes making methane were found in a glacial ice core sample retrieved from about three kilometres under Greenland by researchers from the University of California, Berkeley . They also found a constant metabolism able to repair macromolecular damage, at temperatures of 145 to –40 °C. [ 6 ]
Another study [ 7 ] has also discovered methanogens in a harsh environment on Earth. Researchers studied dozens of soil and vapour samples from five different desert environments in Utah , Idaho and California in the United States , and in Canada and Chile . Of these, five soil samples and three vapour samples from the vicinity of the Mars Desert Research Station in Utah were found to have signs of viable methanogens. [ 8 ]
Some scientists have proposed that the presence of methane in the Martian atmosphere may be indicative of native methanogens on that planet. [ 24 ] In June 2019, NASA's Curiosity rover detected methane, commonly generated by underground microbes such as methanogens, which signals possibility of life on Mars . [ 25 ]
Closely related to the methanogens are the anaerobic methane oxidizers, which utilize methane as a substrate in conjunction with the reduction of sulfate and nitrate. [ 26 ] Most methanogens are autotrophic producers, but those that oxidize CH 3 COO − are classed as chemotrophs instead. [ citation needed ]
The digestive tract of animals is characterized by a nutrient-rich and predominantly anaerobic environment, making it an ideal habitat for many microbes, including methanogens. Despite this, methanogens and archaea, in general, were largely overlooked as part of the gut microbiota until recently. However, they play a crucial role in maintaining gut balance by utilizing end products of bacterial fermentation, such as H 2 , acetate, methanol, and methylamines. [ citation needed ]
Methanobrevibacter smithii is the predominant methanogenic archaeon in the microbiota of the human gut . [ 27 ] Recent extensive surveys of archaea presence in the animal gut, based on 16S rRNA analysis, have provided a comprehensive view of archaea diversity and abundance. [ 28 ] [ 29 ] [ 30 ] These studies revealed that only a few archaeal lineages are present, with the majority being methanogens, while non-methanogenic archaea are rare and not abundant. Taxonomic classification of archaeal diversity identified that representatives of only three phyla are present in the digestive tracts of animals: Methanobacteriota (order Methanobacteriales), Thermoplasmatota (order Methanomassiliicoccales), and Halobacteriota (orders Methanomicrobiales and Methanosarcinales). However, not all families and genera within these orders were detected in animal guts, but only a few genera, suggesting their specific adaptations to the gut environment. [ citation needed ]
Comparative proteomic analysis has led to the identification of 31 signature proteins which are specific for methanogens (also known as Methanoarchaeota). Most of these proteins are related to methanogenesis, and they could serve as potential molecular markers for methanogens. Additionally, 10 proteins found in all methanogens, which are shared by Archaeoglobus , suggest that these two groups are related. In phylogenetic trees, methanogens are not monophyletic and they are generally split into three clades. Hence, the unique shared presence of large numbers of proteins by all methanogens could be due to lateral gene transfers. [ 31 ] Additionally, more recent novel proteins associated with sulfide trafficking have been linked to methanogen archaea. [ 32 ] More proteomic analysis is needed to further differentiate specific genera within the methanogen class and reveal novel pathways for methanogenic metabolism. [ citation needed ]
Modern DNA or RNA sequencing approaches have elucidated several genomic markers specific to several groups of methanogens. One such study isolated nine methanogens from the genus Methanoculleus and found at least two trehalose synthase genes present in all nine genomes. [ 33 ] Thus far, these genes have been observed only in this genus; therefore, they can be used as a marker to identify Methanoculleus archaea. As sequencing techniques progress and databases become populated with an abundance of genomic data, a greater number of strains and traits can be identified, but many genera have remained understudied. For example, halophilic methanogens are potentially important microbes for carbon cycling in coastal wetland ecosystems but seem to be greatly understudied. One recent publication isolated a novel strain from the genus Methanohalophilus , which resides in sulfide-rich seawater. Interestingly, several portions of this strain's genome differ from those of other isolated strains of this genus ( Methanohalophilus mahii , Methanohalophilus halophilus , Methanohalophilus portucalensis , Methanohalophilus euhalbius ). Some differences include a highly conserved genome, sulfur and glycogen metabolisms and viral resistance. [ 34 ] Genomic markers consistent with a microbe's environment have been observed in many other cases. One such study found that methane-producing archaea in hydraulic fracturing zones have genomes that vary with vertical depth. Subsurface and surface genomes varied along with the constraints found in individual depth zones, though fine-scale diversity was also found in this study. [ 35 ] Genomic markers pointing at environmentally relevant factors are often non-exclusive. A survey of methanogenic Thermoplasmata found these organisms in human and animal intestinal tracts. This novel species was also found in other methanogenic environments such as wetland soils, though the group isolated from the wetlands did tend to have a larger number of genes encoding anti-oxidation enzymes that were not present in the same group isolated from the human and animal intestinal tract. [ 36 ] A common issue in identifying and discovering novel species of methanogens is that sometimes the genomic differences can be quite small, yet the research group decides they are different enough to separate into individual species. One study took a group of Methanocellales and ran a comparative genomic analysis. The three strains were originally considered identical, but a detailed approach to genomic isolation showed differences among these supposedly identical genomes. Differences were seen in gene copy number, and there was also metabolic diversity associated with the genomic information. [ 37 ]
Genomic signatures not only allow researchers to mark unique methanogens and genes relevant to environmental conditions; they have also led to a better understanding of the evolution of these archaea. Some methanogens must actively mitigate against oxic environments. Functional genes involved with the production of antioxidants have been found in methanogens, and some specific groups tend to have an enrichment of this genomic feature. Methanogens containing a genome with enriched antioxidant properties may provide evidence that this genomic addition occurred during the Great Oxygenation Event. [ 38 ] In another study, three strains from the lineage Thermoplasmatales isolated from animal gastro-intestinal tracts revealed evolutionary differences. The eukaryotic-like histone gene which is present in most methanogen genomes was not present, alluding to evidence that an ancestral branch was lost within Thermoplasmatales and related lineages. [ 39 ] Furthermore, the group Methanomassiliicoccus has a genome which appears to have lost many common genes coding for the first several steps of methanogenesis. These genes appear to have been replaced by genes coding for a novel methylated methanogenic pathway. This pathway has been reported in several types of environments, pointing to non-environment-specific evolution, and may point to an ancestral deviation. [ 40 ]
Methanogens are known to produce methane from substrates such as H 2 /CO 2 , acetate, formate , methanol and methylamines in a process called methanogenesis . [ 41 ] Different methanogenic reactions are catalyzed by unique sets of enzymes and coenzymes . While reaction mechanism and energetics vary from one reaction to another, all of these reactions contribute to net positive energy production by creating ion concentration gradients that are used to drive ATP synthesis. [ 42 ] The overall reaction for H 2 /CO 2 methanogenesis is:
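CO 2 + 4 H 2 → CH 4 + 2 H 2 O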
Well-studied organisms that produce methane via H 2 /CO 2 methanogenesis include Methanosarcina barkeri , Methanobacterium thermoautotrophicum , and Methanobacterium wolfei . [ 43 ] [ 44 ] [ 45 ] These organisms are typically found in anaerobic environments. [ 41 ]
In the earliest stage of H 2 /CO 2 methanogenesis, CO 2 binds to methanofuran (MF) and is reduced to formyl-MF. This endergonic reductive process (∆G˚'= +16 kJ/mol) is dependent on the availability of H 2 and is catalyzed by the enzyme formyl-MF dehydrogenase. [ 41 ]
The formyl constituent of formyl-MF is then transferred to the coenzyme tetrahydromethanopterin (H4MPT) and is catalyzed by a soluble enzyme known as formyltransferase . This results in the formation of formyl-H4MPT. [ 41 ]
Formyl-H4MPT is subsequently reduced to methenyl-H4MPT. Methenyl-H4MPT then undergoes a one-step hydrolysis followed by a two-step reduction to methyl-H4MPT. The two-step reversible reduction is assisted by coenzyme F 420 whose hydride acceptor spontaneously oxidizes. [ 41 ] Once oxidized, F 420 's electron supply is replenished by accepting electrons from H 2 . This step is catalyzed by methylene H4MPT dehydrogenase. [ 46 ]
Next, the methyl group of methyl-H4MPT is transferred to coenzyme M via a methyltransferase-catalyzed reaction. [ 47 ] [ 48 ]
The final step of H 2 /CO 2 methanogenesis involves methyl-coenzyme M reductase and two coenzymes: N-7 mercaptoheptanoylthreonine phosphate (HS-HTP) and coenzyme F 430 . HS-HTP donates electrons to methyl-coenzyme M allowing the formation of methane and mixed disulfide of HS-CoM. [ 49 ] F 430 , on the other hand, serves as a prosthetic group to the reductase. H 2 donates electrons to the mixed disulfide of HS-CoM and regenerates coenzyme M. [ 50 ]
Methanogens are widely used in anaerobic digesters to treat wastewater as well as aqueous organic pollutants. Industries have selected methanogens for their ability to perform biomethanation during wastewater decomposition, rendering the process sustainable and cost-effective. [ 51 ]
Bio-decomposition in the anaerobic digester involves a four-stage cooperative action performed by different microorganisms. [ 52 ] The first stage is the hydrolysis of insoluble polymerized organic matter by anaerobes such as Streptococcus and Enterobacterium. [ 53 ] In the second stage, acidogens break down dissolved organic pollutants in wastewater to fatty acids . In the third stage, acetogens convert fatty acids to acetates . In the final stage, methanogens metabolize acetates to gaseous methane . The byproduct methane leaves the aqueous layer and serves as an energy source to power wastewater processing within the digester, thus creating a self-sustaining mechanism. [ 54 ]
Methanogens also effectively decrease the concentration of organic matter in wastewater run-off. [ 55 ] For instance, agricultural wastewater , highly rich in organic material, has been a major cause of aquatic ecosystem degradation. The resulting chemical imbalances can lead to severe ramifications such as eutrophication . Through anaerobic digestion, the purification of wastewater can prevent unexpected blooms in water systems and confines methanogenesis to the digesters. This allocates biomethane for energy production and prevents methane, a potent greenhouse gas, from being released into the atmosphere. [ citation needed ]
The organic components of wastewater vary widely. Chemical structures of the organic matter select for specific methanogens to perform anaerobic digestion. For example, members of the genus Methanosaeta dominate the digestion of palm oil mill effluent (POME) and brewery waste. [ 55 ] Modernizing wastewater treatment systems to incorporate a higher diversity of microorganisms, and so decrease the organic content of the treated water, is under active research in the fields of microbiological and chemical engineering. [ 56 ] New generations of Staged Multi-Phase Anaerobic reactors and Upflow Sludge Bed reactor systems are designed with innovative features to counter high-load wastewater input, extreme temperature conditions, and possible inhibitory compounds. [ 57 ]
Initially, methanogens were considered to be bacteria, as it was not possible to distinguish archaea from bacteria before the introduction of molecular techniques such as DNA sequencing and PCR. Since the introduction of the domain Archaea by Carl Woese in 1977, [ 58 ] methanogens were for a prolonged period considered a monophyletic group, later named the Euryarchaeota (super)phylum. However, intensive studies of various environments have shown that ever more non-methanogenic lineages are interspersed among the methanogenic ones.
The development of genome sequencing directly from environmental samples (metagenomics) allowed the discovery of the first methanogens outside the Euryarchaeota superphylum. The first such putative methanogenic lineage was Bathyarchaeia, [ 59 ] a class within the Thermoproteota phylum. Later, it was shown that this lineage is not methanogenic but alkane-oxidizing, utilizing the highly divergent enzyme Acr, similar to the hallmark gene of methanogenesis, methyl-CoM reductase (McrABG). [ 60 ] The first isolate, Bathyarchaeum tardum , from the sediment of a coastal lake in Russia, was shown to metabolize aromatic compounds and proteins, [ 61 ] as had previously been predicted from metagenomic studies. [ 62 ] [ 63 ] [ 64 ] Nevertheless, more new putative methanogens outside of Euryarchaeota were discovered based on the presence of McrABG.
For instance, methanogens were found in the phyla Thermoproteota (orders Methanomethyliales, Korarchaeales, Methanohydrogenales, Nezhaarchaeales) and Methanobacteriota_B (order Methanofastidiosales). Additionally, some new lineages of methanogens were isolated in pure culture, which allowed the discovery of a new type of methanogenesis: H 2 -dependent methyl-reducing methanogenesis, which is independent of the Wood-Ljungdahl pathway. For example, in 2012, the order Methanoplasmatales from the phylum Thermoplasmatota was described as a seventh order of methanogens. [ 65 ] Later, the order was renamed Methanomassiliicoccales after Methanomassiliicoccus luminyensis , isolated from the human gut. [ 66 ] [ 67 ]
Another new lineage in the Halobacteriota phylum, order Methanonatronarchaeales, was discovered in alkaline saline lakes in Siberia in 2017. [ 68 ] [ 69 ] It also employs H 2 -dependent methyl-reducing methanogenesis but intriguingly harbors almost the full Wood-Ljungdahl pathway. However, it is disconnected from McrABG as no MtrA-H complex was detected. [ 70 ] [ 71 ]
The taxonomy of methanogens reflects the evolution of these archaea , with some studies suggesting that the Last Archaeal Common Ancestor was methanogenic. [ 72 ] If correct, this suggests that many archaeal lineages lost the ability to produce methane and switched to other types of metabolism. Currently, most of the isolated methanogens belong to one of three archaeal phyla ( classification GTDB release 220): Halobacteriota, Methanobacteriota, and Thermoplasmatota. Under the International Code of Nomenclature for Prokaryotes, [ 73 ] all three phyla belong to the same kingdom, Methanobacteriati. [ 74 ] [ 75 ] In total, more than 150 methanogen species are known in culture, with some represented by more than one strain . [ 76 ] | https://en.wikipedia.org/wiki/Methanogen |
Methanogens are a group of microorganisms that produce methane as a byproduct of their metabolism. They play an important role in the digestive system of ruminants . The digestive tract of ruminants contains four major parts: rumen , reticulum , omasum and abomasum . Food mixed with saliva first passes to the rumen, where it is broken into smaller particles, and then moves to the reticulum, where it is broken down further. Any indigestible particles are sent back to the rumen for rechewing . The majority of the anaerobic microbes assisting cellulose breakdown occupy the rumen and initiate the fermentation process. The animal absorbs fatty acids, vitamins and nutrients as the partially digested food passes from the rumen to the omasum. The pH level then decreases, initiating the release of enzymes for further breakdown of the food, which later passes to the abomasum, where the remaining nutrients are absorbed before excretion. This process takes about 9–12 hours.
Some of the microbes in the ruminant digestive system are: | https://en.wikipedia.org/wiki/Methanogens_in_digestive_tract_of_ruminants |
This page provides supplementary chemical data on methanol .
The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the safety data sheet ( SDS ) for this chemical from a reliable source such as SIRI , and follow its directions. SDSs are available from suppliers such as J.T. Baker and Loba Chemie.
Molar heat capacity (liquid): 79.9 J/(mol K) at 20 °C
Table data obtained from CRC Handbook of Chemistry and Physics 44th ed
Here is a similar formula from the 67th edition of the CRC handbook. Note that the form of this formula as given is a fit to the Clausius–Clapeyron equation, which is a good theoretical starting point for calculating saturation vapor pressures:
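As an illustration of such a fit, the sketch below evaluates the integrated two-parameter Clausius–Clapeyron form anchored at methanol's normal boiling point (about 337.8 K), with an enthalpy of vaporization of roughly 35.2 kJ/mol; these are approximate textbook values, not the CRC 67th-edition constants themselves.

```python
import math

# Clausius-Clapeyron estimate of methanol's saturation vapor pressure,
# anchored at the normal boiling point. Constants are approximate textbook
# values, not the CRC fit referenced above.
R = 8.314          # gas constant, J/(mol K)
DH_VAP = 35.2e3    # J/mol, approximate enthalpy of vaporization of methanol
T_BOIL = 337.8     # K, normal boiling point (P = 101.325 kPa)

def vapor_pressure_kpa(t_kelvin: float) -> float:
    """Saturation vapor pressure from the integrated Clausius-Clapeyron equation."""
    return 101.325 * math.exp(-DH_VAP / R * (1.0 / t_kelvin - 1.0 / T_BOIL))

for t_c in (0, 20, 40, 64.7):
    print(f"{t_c:5.1f} C -> {vapor_pressure_kpa(t_c + 273.15):7.2f} kPa")
```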
Data obtained from Lange's Handbook of Chemistry , 10th ed. and CRC Handbook of Chemistry and Physics 44th ed. The annotation d a °C/ b °C indicates the density of the solution at temperature a divided by the density of pure water at temperature b , a quantity known as specific gravity. When temperature b is 4 °C, the density of water is 0.999972 g/mL. | https://en.wikipedia.org/wiki/Methanol_(data_page)
The methanol economy is a suggested future economy in which methanol and dimethyl ether replace fossil fuels as a means of energy storage , ground transportation fuel, and raw material for synthetic hydrocarbons and their products. It offers an alternative to the proposed hydrogen economy or ethanol economy , although these concepts are not exclusive. Methanol can be produced from a variety of sources including fossil fuels ( natural gas , coal , oil shale , tar sands , etc.) as well as agricultural products and municipal waste , wood and varied biomass . It can also be made from chemical recycling of carbon dioxide .
Nobel prize laureate George A. Olah advocated a methanol economy. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Methanol is a fuel for heat engines and fuel cells. Due to its high octane rating it can be used directly as a fuel in flex-fuel cars (including hybrid and plug-in hybrid vehicles) using existing internal combustion engines (ICE). Methanol can also be burned in some other kinds of engine or to provide heat, as other liquid fuels are used. Fuel cells can use methanol either directly in Direct Methanol Fuel Cells (DMFC) or indirectly (after conversion into hydrogen by reforming) in a Reformed Methanol Fuel Cell (RMFC).
Green methanol is a liquid fuel that is produced by combining carbon dioxide and hydrogen ( CO 2 + 3 H 2 → CH 3 OH + H 2 O ) under pressure and heat with catalysts . It is a way to reuse carbon capture for recycling . Methanol can store hydrogen economically at standard outdoor temperatures and pressures , compared to liquid hydrogen and ammonia, which need to use a lot of energy to stay cold in their liquid state . [ 8 ] In 2023 the Laura Maersk was the first container ship to run on methanol fuel. [ 9 ] Ethanol plants in the Midwest are a good place for pure carbon capture to combine with hydrogen to make green methanol, with abundant wind and nuclear energy in Iowa , Minnesota , and Illinois . [ 10 ] [ 11 ] Mixing methanol with ethanol could make methanol a safer fuel to use, because methanol doesn't have a visible flame in daylight and doesn't emit smoke, while ethanol has a visible light-yellow flame. [ 12 ] [ 13 ] [ 14 ] Green hydrogen production at 70% efficiency, followed by methanol production from that hydrogen at 70% efficiency, would give a 49% overall energy conversion efficiency . [ 15 ]
Methanol is already used today on a large scale to produce a variety of chemicals and products. Global methanol demand as a chemical feedstock reached around 42 million metric tonnes per year as of 2015. [ 16 ] Through the methanol-to-gasoline (MTG) process, it can be transformed into gasoline. Using the methanol-to-olefin (MTO) process, methanol can also be converted to ethylene and propylene , the two chemicals produced in largest amounts by the petrochemical industry . [ 17 ] These are important building blocks for the production of essential polymers (LDPE, HDPE, PP) and like other chemical intermediates are currently produced mainly from petroleum feedstock. Their production from methanol could therefore reduce our dependency on petroleum. It would also make it possible to continue producing these chemicals when fossil fuels reserves are depleted.
Today most methanol is produced from methane through syngas . Trinidad and Tobago is the world's largest methanol producer, with exports mainly to the United States . [ 18 ] The feedstock for the production of methanol comes from natural gas.
The conventional route to methanol from methane passes through syngas generation by steam reforming combined (or not) with partial oxidation. Alternative ways to convert methane into methanol have also been investigated. These include:
All these synthetic routes emit the greenhouse gas carbon dioxide ( CO 2 ). To mitigate this, methanol can be made in ways that minimize the emission of CO 2 . One solution is to produce it from syngas obtained by biomass gasification. For this purpose any biomass can be used, including wood , wood wastes, grass, agricultural crops and their by-products, animal waste, aquatic plants and municipal waste. [ 19 ] There is no need to use food crops, as in the case of ethanol from corn, sugar cane and wheat.
Methanol can be synthesized from carbon and hydrogen from any source, including fossil fuels and biomass . CO 2 emitted from fossil fuel burning power plants and other industries and eventually even the CO 2 contained in the air, can be a source of carbon. [ 20 ] It can also be made from chemical recycling of carbon dioxide , which Carbon Recycling International has demonstrated with its first commercial scale plant. [ 21 ] Initially the major source will be the CO 2 rich flue gases of fossil-fuel-burning power plants or exhaust from cement and other factories. In the longer range however, considering diminishing fossil fuel resources and the effect of their utilization on Earth's atmosphere , even the low concentration of atmospheric CO 2 itself could be captured and recycled via methanol, thus supplementing nature's own photosynthetic cycle. Efficient new absorbents to capture atmospheric CO 2 are being developed, mimicking plants' ability. Chemical recycling of CO 2 to new fuels and materials could thus become feasible, making them renewable on the human timescale.
Methanol can also be produced at atmospheric pressure from CO 2 by catalytic hydrogenation of CO 2 with H 2 [ 22 ] where the hydrogen has been obtained from water electrolysis . This is the process used by Carbon Recycling International of Iceland . Methanol may also be produced through CO 2 electrochemical reduction , if electrical power is available. The energy needed for these reactions in order to be carbon neutral would come from renewable energy sources such as wind, hydroelectricity and solar as well as nuclear power. In effect, all of them allow free energy to be stored in easily transportable methanol, which is made immediately from hydrogen and carbon dioxide, rather than attempting to store energy in free hydrogen.
Or, with electric energy used to split water: 3 H 2 O → 3 H 2 + 1.5 O 2
Total: CO 2 + 2 H 2 O → CH 3 OH + 1.5 O 2
The necessary CO 2 would be captured from fossil fuel burning power plants and other industrial flue gases, including those of cement factories. With diminishing fossil fuel resources and therefore diminishing CO 2 emissions, the CO 2 content of the air could also be used. Considering the low concentration of CO 2 in air (0.04%), improved and economically viable technologies to absorb CO 2 will have to be developed. For this reason, extraction of CO 2 from water could be more feasible due to its higher concentration in dissolved form. [ 23 ] This would allow the chemical recycling of CO 2 , thus mimicking nature's photosynthesis.
At large scale, renewable methanol is produced mainly from fermented biomass and municipal solid waste (bio-methanol) or from renewable electricity (e-methanol). [ 24 ] Production costs for renewable methanol are currently about 300 to US$1000/t for bio-methanol, about 800 to US$1600/t for e-methanol from carbon dioxide of renewable sources, and about 1100 to US$2400/t for e-methanol from carbon dioxide of direct air capture . [ 19 ]
Methanol produced from CO 2 and water by the use of electricity is called e-methanol. Typically, hydrogen is produced by electrolysis of water and then converted with CO 2 to methanol. Currently the efficiency of hydrogen production by water electrolysis is 75 to 85%, [ 19 ] with a potential of up to 93% by 2030. [ 25 ] The efficiency of methanol synthesis from hydrogen and carbon dioxide is currently 79 to 80%. [ 19 ] Thus the efficiency of methanol production from electricity and carbon dioxide is about 59 to 78%. If CO 2 is not directly available but is obtained by direct air capture , the efficiency of methanol production from electricity is 50 to 60%. [ 19 ] [ 26 ] When methanol is used in a methanol fuel cell , the electrical efficiency of the fuel cell is about 35 to 50% (as of 2021). Thus the overall electrical efficiency, from electricity through e-methanol and back to electricity via a fuel cell, is about 21 to 34% for e-methanol from directly available CO 2 and about 18 to 30% for e-methanol from CO 2 obtained by direct air capture .
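The chained-efficiency arithmetic behind these figures can be reproduced with a short script. The stage ranges below are the ones quoted in this section; multiplying the stages is the only modeling assumption, and the text's 78% upper bound additionally reflects more optimistic stage values such as the projected 93% electrolysis efficiency.

```python
# Chained efficiencies for the e-methanol path, using the stage ranges
# quoted above (electrolysis 75-85%, methanol synthesis 79-80%, methanol
# fuel cell 35-50%). Multiplying stages is the only modeling assumption.

def chain(*stages: float) -> float:
    """Overall efficiency of sequential conversion stages."""
    total = 1.0
    for s in stages:
        total *= s
    return total

electrolysis = (0.75, 0.85)
synthesis = (0.79, 0.80)
fuel_cell = (0.35, 0.50)

p2m_lo = chain(electrolysis[0], synthesis[0])
p2m_hi = chain(electrolysis[1], synthesis[1])
p2p_lo = chain(electrolysis[0], synthesis[0], fuel_cell[0])
p2p_hi = chain(electrolysis[1], synthesis[1], fuel_cell[1])

print(f"power-to-methanol: {p2m_lo:.0%} to {p2m_hi:.0%}")  # ~59% to 68%
print(f"power-to-power:    {p2p_lo:.0%} to {p2p_hi:.0%}")  # ~21% to 34%
```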
If waste heat is used for high-temperature electrolysis, or if the waste heat of electrolysis, methanol synthesis and/or the fuel cell is used, the overall efficiency can be increased significantly beyond the electrical efficiency. [ 27 ] [ 28 ] For example, an overall efficiency of 86% can be reached by using the waste heat (e.g. for district heating ) produced during e-methanol production by electrolysis or by the subsequent methanol synthesis. [ 28 ] If the waste heat of a fuel cell is used, a fuel cell efficiency of 85 to 90% can be reached. [ 29 ] [ 30 ] The waste heat can, for example, be used for heating a vehicle or a household. Cooling can also be generated from waste heat with a refrigeration machine. With extensive use of waste heat, an overall efficiency of 70 to 80% can be reached for the production of e-methanol including its subsequent use in a fuel cell.
The electrical system efficiency, including all losses of peripheral devices (e.g. cathode compressor, stack cooling), amounts to about 40 to 50% for a methanol fuel cell of the RMFC type and 40 to 55% for a hydrogen fuel cell of the LT-PEMFC type. [ 31 ] [ 32 ] [ 33 ] [ 34 ]
Araya et al. compared the hydrogen path with the methanol path (for methanol from directly available CO 2 ). [ 31 ] The electrical efficiency from electricity supply to delivery of electricity by a fuel cell was determined with the following intermediate steps: power management, conditioning, transmission, hydrogen production by electrolysis, methanol synthesis or hydrogen compression respectively, fuel transportation, and the fuel cell. For the methanol path the efficiency was found to be 23 to 38%, and for the hydrogen path 24 to 41%. In the hydrogen path a large part of the energy is lost to hydrogen compression and transport, whereas the methanol path requires energy for methanol synthesis.
Helmers et al. compared the well-to-wheel (WTW) efficiency of vehicles. The WTW efficiency was determined as 10 to 20% for internal-combustion vehicles running on fossil gasoline, 15 to 29% for full hybrid electric vehicles running on fossil gasoline, 13 to 25% for internal-combustion vehicles running on fossil diesel, 12 to 21% for internal-combustion vehicles running on fossil CNG, 20 to 29% for fuel cell vehicles (e.g. running on fossil hydrogen or methanol), and 59 to 80% for battery electric vehicles . [ 35 ]
In the German study "Agora Energiewende", different drive technologies using renewable electricity for fuel production were examined; the WTW efficiency was determined as 13% for vehicles with internal combustion engines (running on synthetic fuel such as OME ), 26% for fuel cell vehicles (running on hydrogen) and 69% for battery electric vehicles . [ 36 ]
If renewable hydrogen is used, the well-to-wheel efficiency of a hydrogen fuel cell car is about 14 to 30%.
If renewable e-methanol is produced from directly available CO 2 , the well-to-wheel efficiency is about 11 to 21% for an internal-combustion vehicle and about 18 to 29% for a fuel cell vehicle running on this e-methanol. If the e-methanol is produced from CO 2 obtained by direct air capture , the corresponding figures are about 9 to 19% and about 15 to 26% respectively (as of 2021).
Methanol is cheaper than hydrogen. Bought in bulk (by the tank), fossil methanol costs about 0.3 to 0.5 USD/L. [ 37 ] One liter of methanol has the same energy content as 0.13 kg of hydrogen. [ 5 ] [ 6 ] The price of 0.13 kg of fossil hydrogen is currently about 1.2 to 1.3 USD in bulk (about 9.5 USD/kg at hydrogen refuelling stations). [ 38 ] For middle-scale amounts (delivery in an IBC container with 1000 L of methanol), fossil methanol usually costs about 0.5 to 0.7 USD/L, biomethanol about 0.7 to 2.0 USD/L and e-methanol [ 39 ] from CO 2 about 0.8 to 2.0 USD/L, plus a deposit for the IBC container. For middle-scale amounts of hydrogen (a bundle of gas cylinders), 0.13 kg of fossil hydrogen usually costs about 5 to 12 USD, plus a rental fee for the cylinders. The significantly higher price of hydrogen compared to methanol is caused, among other factors, by the complex logistics and storage of hydrogen. Whereas biomethanol and renewable e-methanol are available from distributors, [ 40 ] [ 41 ] green hydrogen is typically not yet available from distributors. Prices for renewable hydrogen as well as renewable methanol are expected to decrease in the future. [ 19 ]
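On an energy basis the quoted bulk prices can be compared directly, since one liter of methanol and 0.13 kg of hydrogen hold the same energy, roughly 15.8 MJ if methanol's approximate lower heating value is used; the sketch below relies on that single assumed figure.

```python
# Bulk fuel cost per unit of energy, from the prices quoted above.
# One liter of methanol ~ 0.13 kg of hydrogen ~ 15.8 MJ (approximate
# lower heating value of methanol, a textbook figure).
ENERGY_MJ = 15.8

bulk_prices_usd = {
    "fossil methanol (per L)": (0.3, 0.5),
    "fossil hydrogen (per 0.13 kg)": (1.2, 1.3),
}

for fuel, (lo, hi) in bulk_prices_usd.items():
    print(f"{fuel}: {lo / ENERGY_MJ:.3f} to {hi / ENERGY_MJ:.3f} USD/MJ")
# fossil methanol (per L): 0.019 to 0.032 USD/MJ
# fossil hydrogen (per 0.13 kg): 0.076 to 0.082 USD/MJ
```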
In the future, a high percentage of passenger cars are expected to be battery electric vehicles. For utility vehicles and trucks, the percentage of battery electric vehicles is expected to be significantly lower than for passenger cars. The remaining vehicles are expected to be fuel-based. While a methanol infrastructure for 10,000 refuelling stations would cost about 0.5 to 2.0 billion USD, a hydrogen infrastructure for 10,000 refuelling stations would cost about 16 to 1400 billion USD, depending strongly on the hydrogen throughput of the refuelling stations. [ 31 ] [ 42 ]
Methanol-fuelled internal combustion vehicles incur no significant additional costs compared to gasoline-fuelled vehicles. A passenger car with a methanol fuel cell, however, would cost about −600 to 2400 USD more than a passenger car with a hydrogen fuel cell (primarily the additional costs of the reformer, balance-of-plant components and perhaps the stack, minus the costs of the hydrogen tank [ 43 ] and high-pressure hydrogen instruments).
In the process of photosynthesis , green plants use the energy of sunlight to split water into free oxygen (which is released) and free hydrogen. Rather than attempt to store the hydrogen, plants immediately capture carbon dioxide from the air to allow the hydrogen to reduce it to storable fuels such as hydrocarbons (plant oils and terpenes ) and polyalcohols ( glycerol , sugars and starches ). In the methanol economy, any process which similarly produces free hydrogen, proposes to immediately use it "captively" to reduce carbon dioxide into methanol, which, like plant products from photosynthesis, has great advantages in storage and transport over free hydrogen itself.
Methanol is a liquid under normal conditions, allowing it to be stored, transported and dispensed easily, much like gasoline and diesel fuel . It can also be readily transformed by dehydration into dimethyl ether , a diesel fuel substitute with a cetane number of 55.
Methanol is water-soluble: an accidental release of methanol into the environment would cause much less damage than a comparable gasoline or crude oil spill . Unlike these fuels, methanol is biodegradable and totally soluble in water, and would be rapidly diluted to a concentration low enough for microorganisms to start biodegradation . This effect is already exploited in water treatment plants, where methanol is used for denitrification and as a nutrient for bacteria. [ 44 ] Accidental releases causing groundwater pollution have not been thoroughly studied yet, though it is believed that methanol would biodegrade relatively rapidly there as well.
Methanol economy advantages compared to a hydrogen economy: | https://en.wikipedia.org/wiki/Methanol_economy |
A methanol reformer is a device used in chemical engineering , especially in the area of fuel cell technology, which can produce pure hydrogen gas and carbon dioxide by reacting a methanol and water (steam) mixture.
Methanol is transformed into hydrogen and carbon dioxide by pressure and heat and interaction with a catalyst .
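The steam-reforming reaction is commonly written as:

CH 3 OH + H 2 O → CO 2 + 3 H 2 (endothermic, ∆H ≈ +49 kJ/mol)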
A mixture of water and methanol with a molar concentration ratio (water:methanol) of 1.0 - 1.5 is pressurized to approximately 20 bar , vaporized and heated to a temperature of 250 - 360 °C . The hydrogen that is created is separated through the use of Pressure swing adsorption or a hydrogen-permeable membrane made of polymer or a palladium alloy.
There are two basic methods of conducting this process.
With either design, not all of the hydrogen is removed from the product gases (raffinate). Since the remaining gas mixture still contains a significant amount of chemical energy, it is often mixed with air and burned to provide heat for the endothermic reforming reaction.
Methanol reformers are used as a component of stationary fuel cell systems or hydrogen fuel cell-powered vehicles (see Reformed methanol fuel cell ). A prototype car, the NECAR 5 , was introduced by Daimler-Chrysler in the year 2000. The primary advantage of a vehicle with a reformer is that it does not need a pressurized gas tank to store hydrogen fuel; instead methanol is stored as a liquid. The logistic implications of this are great; pressurized hydrogen is difficult to store and produce. Also, this could help ease the public's concern over the danger of hydrogen and thereby make fuel cell-powered vehicles more attractive. However, methanol, like gasoline , is toxic and (of course) flammable. The cost of the PdAg membrane and its susceptibility to damage by temperature changes provide obstacles to adoption.
While hydrogen power produces energy without CO 2 , a methanol reformer creates the gas as a byproduct.
Methanol (prepared from natural gas) that is used in an efficient fuel cell, however, releases less CO 2 in the atmosphere than gasoline, in a net analysis. [ 1 ] | https://en.wikipedia.org/wiki/Methanol_reformer |
Gas to liquids ( GTL ) is a refinery process to convert natural gas or other gaseous hydrocarbons into longer-chain hydrocarbons, such as gasoline or diesel fuel . Methane -rich gases are converted into liquid synthetic fuels . Two general strategies exist: (i) direct partial combustion of methane to methanol and (ii) Fischer–Tropsch -like processes that convert carbon monoxide and hydrogen into hydrocarbons. Strategy ii is followed by diverse methods to convert the hydrogen-carbon monoxide mixtures to liquids. Direct partial combustion has been demonstrated in nature but not replicated commercially. Technologies reliant on partial combustion have been commercialized mainly in regions where natural gas is inexpensive. [ 1 ] [ 2 ]
The motivation for GTL is to produce liquid fuels, which are more readily transported than methane. Methane must be cooled below its critical temperature of −82.3 °C in order to be liquified under pressure. Because of the associated cryogenic apparatus, LNG tankers are used for transport. Methanol is a conveniently handled combustible liquid, but its energy density is half of that of gasoline. [ 3 ]
A GtL process may be established via the Fischer–Tropsch process which comprises several chemical reactions that convert a mixture of carbon monoxide (CO) and hydrogen (H 2 ) into long chained hydrocarbons. These hydrocarbons are typically liquid or semi-liquid and ideally have the formula (C n H 2 n +2 ).
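The net Fischer–Tropsch reaction for alkane products is commonly written as:

(2 n +1) H 2 + n CO → C n H 2 n +2 + n H 2 O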
In order to obtain the mixture of CO and H 2 required for the Fischer–Tropsch process, methane (main component of natural gas) may be subjected to partial oxidation which yields a raw synthesis gas mixture of mostly carbon dioxide , carbon monoxide , hydrogen gas (and sometimes water and nitrogen). [ 4 ] The ratio of carbon monoxide to hydrogen in the raw synthesis gas mixture can be adjusted e.g. using the water gas shift reaction . Removing impurities, particularly nitrogen, carbon dioxide and water, from the raw synthesis gas mixture yields pure synthesis gas (syngas).
The pure syngas is routed into the Fischer–Tropsch process, where the syngas reacts over an iron or cobalt catalyst to produce synthetic hydrocarbons, including alcohols.
Methanol is made from methane (natural gas) in a series of three reactions:
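In the form usually given, these are steam reforming, the water-gas shift, and methanol synthesis:

CH 4 + H 2 O → CO + 3 H 2
CO + H 2 O → CO 2 + H 2
CO + 2 H 2 → CH 3 OH (with CO 2 + 3 H 2 → CH 3 OH + H 2 O as a parallel route)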
The methanol thus formed may be converted to gasoline by the Mobil process and methanol-to-olefins.
In the early 1970s, Mobil developed an alternative procedure in which natural gas is converted to syngas, and then methanol . The methanol reacts in the presence of a zeolite catalyst to form various compounds. In the first step methanol is partially dehydrated to give dimethyl ether :
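This dehydration is commonly written as:

2 CH 3 OH → CH 3 OCH 3 + H 2 O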
The mixture of dimethyl ether and methanol is then further dehydrated over a zeolite catalyst such as ZSM-5 , and in practice is polymerized and hydrogenated to give a gasoline with hydrocarbons of five or more carbon atoms making up 80% of the fuel by weight. The Mobil MTG process is practiced from coal-derived methanol in China by JAMG . A more modern implementation of MTG is the Topsøe improved gasoline synthesis (TiGAS). [ 5 ]
Methanol can be converted to olefins using zeolite and SAPO-based heterogeneous catalysts . Depending on the catalyst pore size, this process can afford either C2 or C3 products, which are important monomers. [ 6 ] [ 7 ]
Methanol to olefins technology is widely used in China in order to produce plastics from coal gasification. It is also discussed as a method to make fossil-free plastics in the future. [ 8 ]
A third gas-to-liquids process builds on the MTG technology by converting natural gas-derived syngas into drop-in gasoline and jet fuel via a thermochemical single-loop process. [ 9 ]
The STG+ process follows four principal steps in one continuous process loop. This process consists of four fixed bed reactors in series in which a syngas is converted to synthetic fuels. The steps for producing high-octane synthetic gasoline are as follows: [ 10 ]
With methane as the predominant target for GTL, much attention has focused on the three enzymes that process methane. These enzymes support the existence of methanotrophs , microorganisms that metabolize methane as their only source of carbon and energy. Aerobic methanotrophs harbor enzymes that oxygenate methane to methanol. The relevant enzymes are methane monooxygenases , which are found both in soluble and particulate (i.e. membrane-bound) varieties. They catalyze the oxygenation according to the following stoichiometry:
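For the soluble enzyme this is commonly written as:

CH 4 + O 2 + NADH + H + → CH 3 OH + H 2 O + NAD +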
Anaerobic methanotrophs rely on the bioconversion of methane using the enzymes called methyl-coenzyme M reductases . These organisms effect reverse methanogenesis . Strenuous efforts have been made to elucidate the mechanisms of these methane-converting enzymes, which would enable their catalysis to be replicated in vitro. [ 11 ]
Biodiesel can be made from CO 2 using the microbes Moorella thermoacetica and Yarrowia lipolytica . This process is known as biological gas-to-liquids. [ 12 ]
Using gas-to-liquids processes, refineries can convert some of their gaseous waste products ( flare gas ) into valuable fuel oils , which can be sold as is or blended only with diesel fuel . The World Bank estimates that over 150 billion cubic metres (5.3 × 10 ^ 12 cu ft) of natural gas are flared or vented annually, an amount worth approximately $30.6 billion, equivalent to 25% of the United States' gas consumption or 30% of the European Union's annual gas consumption, [ 13 ] a resource that could be put to use with GTL. Gas-to-liquids processes may also be used for the economic extraction of gas deposits in locations where it is not economical to build a pipeline. This process will become increasingly significant as crude oil resources are depleted .
Royal Dutch Shell produces a diesel from natural gas in a factory in Bintulu , Malaysia . Another Shell GTL facility is the Pearl GTL plant in Qatar , the world's largest GTL facility. [ 14 ] [ 15 ] Sasol has recently built the Oryx GTL facility in Ras Laffan Industrial City , Qatar and together with Uzbekneftegaz and Petronas builds the Uzbekistan GTL plant. [ 16 ] [ 17 ] [ 18 ] Chevron Corporation , in a joint venture with the Nigerian National Petroleum Corporation is commissioning the Escravos GTL in Nigeria , which uses Sasol technology. PetroSA , South Africa's national oil company, owns and operates a 22,000 barrels/day (capacity) GTL plant in Mossel Bay , using Sasol GTL technology. [ 19 ]
A new generation of GTL technology is being pursued for the conversion of unconventional, remote and problem gas into valuable liquid fuels. [ 20 ] [ 21 ] GTL plants based on innovative Fischer–Tropsch catalysts have been built by INFRA Technology . Other mainly U.S. companies include Velocys, ENVIA Energy, Waste Management, NRG Energy, ThyssenKrupp Industrial Solutions, Liberty GTL, Petrobras , [ 22 ] Greenway Innovative Energy, [ 23 ] Primus Green Energy, [ 24 ] Compact GTL, [ 25 ] and Petronas. [ 26 ] Several of these processes have proven themselves with demonstration flights using their jet fuels. [ 27 ] [ 28 ]
Another proposed solution to stranded gas involves use of novel FPSO for offshore conversion of gas to liquids such as methanol , diesel , petrol , synthetic crude , and naphtha . [ 29 ]
GTL using natural gas is more economical when there is a wide gap between the prevailing natural gas price and the crude oil price on a barrel of oil equivalent (BOE) basis; a coefficient of 0.1724 corresponds to full oil parity . [ 30 ] GTL offers a mechanism for bringing international diesel, gasoline and crude oil prices down toward the natural gas price as global natural gas production expands at prices cheaper than crude oil. When natural gas is converted into GTL products, the liquids are easier and cheaper to export than first converting the gas to LNG and then converting it back to liquid products in an importing country. [ 31 ] [ 32 ]
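The coefficient follows from the energy equivalence of the two fuels: one barrel of oil equivalent corresponds to roughly 5.8 million Btu (MMBtu) of natural gas, and 1/5.8 ≈ 0.1724, so gas priced (in USD/MMBtu) at 0.1724 times the crude price per barrel is at full oil parity. For example, crude at 80 USD/bbl corresponds to a parity gas price of about 0.1724 × 80 ≈ 13.8 USD/MMBtu.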
However, GTL fuels are much more expensive to produce than conventional fuels. [ 33 ] | https://en.wikipedia.org/wiki/Methanol_to_gasoline |
Methanol toxicity (also methanol poisoning ) is poisoning from methanol , characteristically via ingestion. [ 1 ] Symptoms may include an altered/decreased level of consciousness , poor or no coordination, vomiting , abdominal pain , and a specific smell on the breath. [ 1 ] [ 2 ] Decreased vision may start as early as twelve hours after exposure. [ 2 ] Long-term outcomes may include blindness and kidney failure . [ 1 ] Blindness may occur after drinking as little as 10 mL; death may occur after drinking quantities over 15 mL (median 100 mL, varies depending on body weight). [ 1 ] [ 4 ]
Methanol poisoning most commonly occurs following the drinking of windshield washer fluid . [ 2 ] This may be accidental or as part of an attempted suicide . Toxicity may also rarely occur through extensive skin exposure or breathing in fumes. [ 1 ] When the body breaks down methanol it results in the creation of metabolite byproducts such as formaldehyde , formic acid , and formate which cause much of the toxicity. [ 2 ] The diagnosis may be suspected when there is acidosis or an increased osmol gap and confirmed by directly measuring blood levels. [ 1 ] [ 2 ] Other conditions that can produce similar symptoms include infections , exposure to other toxic alcohols , serotonin syndrome , and diabetic ketoacidosis . [ 2 ]
Early treatment increases the chance of a good outcome. Treatment consists of stabilizing the person and using an antidote . The preferred antidote is fomepizole , with ethanol used if this is not available. Hemodialysis may also be used in those where there is organ damage or a high degree of acidosis . Other treatments may include sodium bicarbonate , folate , and thiamine . [ 2 ]
Outbreaks of methanol ingestion have occurred due to contamination of drinking alcohol . This is more common in the developing world . [ 2 ] In 2013 more than 1700 cases occurred in the United States. Those affected are usually adults and males. [ 3 ] Toxicity to methanol has been described as early as 1856. [ 5 ]
The initial symptoms of methanol intoxication include central nervous system depression , headache, dizziness, nausea, lack of coordination, and confusion. Sufficiently large doses cause unconsciousness and death. The initial symptoms of methanol exposure are usually less severe than the symptoms from the ingestion of a similar quantity of ethanol. [ 6 ] Once the initial symptoms have passed, a second set of symptoms arises, from 10 to as many as 30 hours after the initial exposure, that may include blurring, photophobia, snowstorm vision or complete loss of vision, acidosis , and putaminal hemorrhages, an uncommon but serious complication. [ 7 ] [ 8 ] These symptoms result from the accumulation of toxic levels of formate in the blood, and may progress to death by respiratory failure . Physical examination may show tachypnea , and eye examination may show dilated pupils with hyperemia of the optic disc and retinal edema .
Methanol has a moderate to high toxicity in humans. As little as 10 mL of pure methanol when drunk is metabolized into formic acid , which can cause permanent blindness by destruction of the optic nerve . 15 mL is potentially fatal, [ 1 ] although the median lethal dose is typically 100 mL (3.4 fl oz) (i.e. 1–2 mL/kg body weight of pure methanol). [ 4 ] Reference dose for methanol is 0.5 mg/kg/day. [ 9 ]
Methanol is not produced in toxic amounts by fermentation of agricultural products or by subsequent distillation. [ 10 ] However, in modern times, reducing the methanol content is sometimes desired in order to comply with regulations. [ 11 ] This can be achieved with the use of a molecular sieve . [ 12 ]
Because of its similarities in both appearance and odor to ethanol (the alcohol in beverages) or isopropyl alcohol , it is difficult to differentiate between the three. [ 13 ] As a result, ethanol is sometimes denatured (adulterated), and made poisonous, by the addition of methanol. The result is known as methylated spirit, "meths" ( British use) or "metho" ( Australian slang). [ 14 ]
This is not to be confused with "meth", a common abbreviation for methamphetamine and for methadone in Britain and the United States. [ citation needed ]
Despite its poisonous content, denatured alcohol is sometimes consumed as a surrogate alcohol . [ citation needed ]
Methanol is toxic by two mechanisms. First, methanol (whether it enters the body by ingestion , inhalation , or absorption through the skin) can be fatal due to its CNS depressant properties in the same manner as ethanol poisoning . Second, in a process of toxication , it is metabolized to formic acid (which is present as the formate ion) via formaldehyde in a process initiated by the enzyme alcohol dehydrogenase in the liver . [ 15 ] Methanol is converted to formaldehyde via alcohol dehydrogenase and formaldehyde is converted to formic acid (formate) via aldehyde dehydrogenase . The conversion to formate via ALDH proceeds completely, with no detectable formaldehyde remaining. [ 16 ] Formate is toxic because it inhibits mitochondrial cytochrome c oxidase , causing hypoxia at the cellular level, and metabolic acidosis , among a variety of other metabolic disturbances. [ 17 ]
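The toxication sequence described here can be summarized as:

CH 3 OH → (alcohol dehydrogenase) → HCHO → (aldehyde dehydrogenase) → HCOOH / HCOO −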
Methanol poisoning can be treated with fomepizole , or if unavailable, ethanol may be used. [ 15 ] [ 18 ] [ 19 ] Both drugs act to reduce the action of alcohol dehydrogenase on methanol by means of competitive inhibition . Ethanol , the active ingredient in alcoholic beverages, acts as a competitive inhibitor by more effectively binding and saturating the alcohol dehydrogenase enzyme in the liver, thus blocking the binding of methanol. Methanol is excreted by the kidneys without being converted into the toxic metabolites formaldehyde and formic acid. Alcohol dehydrogenase instead enzymatically converts ethanol to acetaldehyde , a less toxic organic molecule. [ 15 ] [ 20 ] Additional treatment may include sodium bicarbonate for metabolic acidosis, and hemodialysis or hemodiafiltration to remove methanol and formate from the blood. [ 15 ] Folinic acid or folic acid is also administered to enhance the metabolism of formate. [ 15 ]
There are cases of methanol resistance, such as that of Mike Malloy , whom someone tried and failed to poison with methanol in the early 1930s. [ 21 ]
In December 2016, 78 people died in Irkutsk, Russia , from methanol poisoning after ingesting a counterfeit body lotion that was primarily methanol rather than ethanol as labeled. The body lotion, before the event, had been used as a cheap substitute for vodka by the impoverished people in the region despite warnings on the lotion's bottles that it was not safe for drinking and long-standing problems with alcohol poisoning across the country. [ 22 ]
During the COVID-19 pandemic , Iranian media reported that nearly 300 people had died and over a thousand became ill due to methanol poisoning in the belief that drinking methanol could help with the disease. [ 23 ] In the United States, the Food and Drug Administration discovered that several brands of hand sanitizer manufactured in Mexico during the pandemic contained methanol, and urged the public to avoid using the affected products. [ 24 ]
In November 2024, six foreign tourists, comprising two Australian teenagers, two Danish women, a British lawyer, and an American tourist, died of suspected methanol poisoning after consuming contaminated alcohol at the Nana Backpackers Hostel in Vang Vieng , Laos. The tragedy, which also hospitalized several others, prompted Laotian authorities to detain eight hostel staff members and launch an investigation into the source of the contamination. Governments, including Australia, updated travel advisories, warning citizens about the dangers of consuming local alcohol in Southeast Asia. The incident significantly affected Laos's tourism-dependent economy, as travelers canceled trips and raised concerns over safety standards in the country. Tourism, a critical sector for Laos, faced losses as international confidence waned, prompting calls for stricter regulations on alcohol production and improved enforcement to restore trust. [ 25 ] [ 26 ] [ 27 ] | https://en.wikipedia.org/wiki/Methanol_toxicity |
A methanometer is an instrument used to measure methane gas in the air of a mine. The Mine Safety Appliances Company Ltd. manufactured the first model, the Type W8 Methanometer, around 1950, and it was approved for use by the Ventilation Regulations of 1947. The methanometer could be powered by an Edison battery cap lamp and could be carried on a miner's belt with other tools. Methane is the main gas present in firedamp . It is highly explosive and had previously been detected by the blue halo it gave to the flame of a safety lamp .
By 1983, several instruments for measuring methane concentration in air had been developed. [ 1 ] These relied on such methods as the differential heating of an incandescent platinum filament, or by measuring the higher absorption of infrared radiation by gas containing methane.
A catalytic-type methanometer uses an array of four heated wire filament elements, two active filaments are coated with a catalyst , arranged in a Wheatstone bridge with two inactive elements that have no coating. When exposed to methane-contaminated air, the coated filaments heat up due to oxidation of the methane, and the resulting imbalance in the resistance of active and inactive elements can be displayed on a calibrated meter. Such instruments require oxygen to work and can be inaccurate if methane concentration is very high. The catalyst can be poisoned by exposure to some chemicals, especially those containing sulphur , again making the instrument insensitive to methane. [ 2 ]
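A rough sketch of the resulting read-out follows: the bridge converts the methane-induced resistance rise of the active filaments into a differential voltage. The component values and the sensitivity constant K below are illustrative only, not specifications of any real instrument.

```python
# Idealized catalytic methanometer read-out: active (catalyst-coated) and
# inactive filaments form a Wheatstone bridge, and oxidation of methane on
# the active pair raises their resistance. All values are illustrative.

V_SUPPLY = 2.0   # bridge excitation voltage, volts
R0 = 10.0        # filament resistance in clean air, ohms
K = 0.02         # assumed fractional resistance rise per percent CH4

def bridge_output(ch4_percent: float) -> float:
    """Differential bridge voltage for a given methane concentration."""
    r_active = R0 * (1.0 + K * ch4_percent)   # heated by catalytic oxidation
    r_inactive = R0                           # reference arm, no catalyst
    return V_SUPPLY * (r_active - r_inactive) / (r_active + r_inactive)

for ch4 in (0.0, 0.5, 1.0, 2.0):   # percent methane in air
    print(f"{ch4:.1f}% CH4 -> {1000 * bridge_output(ch4):5.1f} mV")
```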
| https://en.wikipedia.org/wiki/Methanometer
Methanosarcina acetivorans is a versatile methane producing microbe which is found in such diverse environments as oil wells, trash dumps, deep-sea hydrothermal vents, and oxygen-depleted sediments beneath kelp beds. Only M. acetivorans and microbes in the genus Methanosarcina use all three known metabolic pathways for methanogenesis . [ 1 ] Methanosarcinides, including M. acetivorans , are also the only archaea capable of forming multicellular colonies, and even show cellular differentiation. The genome of M. acetivorans is one of the largest archaeal genomes ever sequenced. [ 2 ] Furthermore, one strain of M. acetivorans , M. a. C2A , has been identified to possess an F-type ATPase (unusual for archaea, but common for bacteria, mitochondria and chloroplasts ) along with an A-type ATPase. [ 3 ]
M. acetivorans has been noted for its ability to metabolize carbon monoxide to form acetate and formate . [ 4 ] It can also oxidize carbon monoxide into carbon dioxide . The carbon dioxide can then be converted into methane in a process which M. acetivorans uses to conserve energy. [ 5 ] It has been suggested that this pathway may be similar to metabolic pathways used by primitive cells. [ 6 ]
However, in the presence of minerals containing iron sulfides, as might have been found in sediments in a primordial environment, acetate would be catalytically converted into acetate thioester, a sulfur-containing derivative. Primitive microbes could obtain biochemical energy in the form of adenosine triphosphate (ATP) by converting acetate thioester back into acetate using PTS and ACK (phosphotransacetylase and acetate kinase), the acetate then being converted back into acetate thioester to complete the cycle. In such an environment, a primitive "protocell" could easily produce energy through this metabolic pathway, excreting acetate as waste. Furthermore, ACK catalyzes the synthesis of ATP directly; other pathways generate ATP only through complex multi-enzyme reactions involving protein pumps and osmotic imbalances across a membrane.
M. acetivorans was isolated in 1984 from marine sediment obtained at Scripps Canyon . [ 7 ] | https://en.wikipedia.org/wiki/Methanosarcina_acetivorans |
Methemoglobin (British: methaemoglobin , shortened MetHb ) (pronounced "met-hemoglobin") is a hemoglobin in the form of metalloprotein , in which the iron in the heme group is in the Fe 3+ ( ferric ) state, not the Fe 2+ ( ferrous ) of normal hemoglobin. Sometimes, it is also referred to as ferrihemoglobin. [ 2 ] Methemoglobin cannot bind oxygen , which means it cannot carry oxygen to tissues. It is bluish chocolate-brown in color. In human blood a trace amount of methemoglobin is normally produced spontaneously, but when present in excess the blood becomes abnormally dark bluish brown. The NADH -dependent enzyme methemoglobin reductase ( a type of diaphorase ) is responsible for converting methemoglobin back to hemoglobin .
Normally one to two percent of a person's hemoglobin is methemoglobin; a higher percentage than this can be genetic or caused by exposure to various chemicals and depending on the level can cause health problems known as methemoglobinemia . A higher level of methemoglobin will tend to cause a pulse oximeter to read closer to 85% regardless of the true level of oxygen saturation .
The word methemoglobin derives from the Ancient Greek prefix μετα- (meta-: behind, later, subsequent) and the word hemoglobin .
The name hemoglobin is itself derived from the words heme and globin , each subunit of hemoglobin being a globular protein with an embedded heme group.
Amyl nitrite is administered to treat cyanide poisoning . It works by converting hemoglobin to methemoglobin, which allows for the binding of cyanide (CN – ) anions by ferric (Fe 3+ ) cations and the formation of cyanomethemoglobin . The immediate goal of forming this cyanide adduct is to prevent the binding of free cyanide to the cytochrome a 3 group in cytochrome c oxidase . [ 8 ]
Methemoglobin is expressed as a concentration or a percentage. The percentage of methemoglobin is calculated by dividing the concentration of methemoglobin by the concentration of total hemoglobin. The percentage is likely a better indicator of illness severity than the overall concentration, as underlying medical conditions play an important role. For example, a methemoglobin concentration of 1.5 g/dL represents 10% in an otherwise healthy patient with a baseline hemoglobin of 15 g/dL, whereas the same 1.5 g/dL of methemoglobin in an anemic patient with a baseline hemoglobin of 8 g/dL represents 18.75%. The former patient is left with a functional hemoglobin concentration of 13.5 g/dL and may remain asymptomatic, while the latter, with a functional hemoglobin concentration of 6.5 g/dL, may be severely symptomatic with a methemoglobin percentage of less than 20%. [ 9 ]
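The calculation in the example above is simple enough to state exactly; the following sketch reproduces the two patients' figures.

```python
# Worked version of the percentage calculation described above:
# methemoglobin fraction is metHb concentration / total hemoglobin.

def methb_percent(methb_g_dl: float, total_hb_g_dl: float) -> float:
    """Methemoglobin as a percentage of total hemoglobin."""
    return 100.0 * methb_g_dl / total_hb_g_dl

# The two patients from the text: same 1.5 g/dL metHb, different baselines.
print(methb_percent(1.5, 15.0))  # healthy baseline -> 10.0 %
print(methb_percent(1.5, 8.0))   # anemic baseline  -> 18.75 %
```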
This may be further compounded by the "functional hemoglobin's" decreased ability to release oxygen in the presence of methemoglobin. Anemia , congestive heart failure , chronic obstructive pulmonary disease , and essentially any pathology that impairs the ability to deliver oxygen may worsen the symptoms of methemoglobinemia. [ 9 ]
Increased levels of methemoglobin are found in blood stains. Upon exiting the body, bloodstains transit from bright red to dark brown, which is attributed to oxidation of oxy-hemoglobin (HbO 2 ) to methemoglobin (met-Hb) and hemichrome (HC). [ 10 ] | https://en.wikipedia.org/wiki/Methemoglobin |
In organic chemistry , methenium (also called methylium , carbenium , [ 2 ] methyl cation , or protonated methylene ) is a cation with the formula CH + 3 . It can be viewed as a methylene radical ( : CH 2 ) with an added proton ( H + ), or as a methyl radical (• CH 3 ) with one electron removed. It is a carbocation and an enium ion , making it the simplest of the carbenium ions . [ 3 ]
Experiments and calculations generally agree that the methenium ion is planar, with threefold symmetry . [ 3 ] The carbon atom is a prototypical (and exact) example of sp 2 hybridization.
For mass spectrometry studies at low pressure, methenium can be obtained by ultraviolet photoionization of methyl radical, [ 3 ] or by collisions of monatomic cations such as C + and Kr + with neutral methane. [ 4 ] In such conditions, it will react with acetonitrile CH 3 CN to form the ion ( CH 3 ) 2 CN + . [ 5 ]
Upon capture of a low-energy electron (less than 1 eV ), it will spontaneously dissociate. [ 6 ]
It is seldom encountered as an intermediate in the condensed phase. It is proposed as a reactive intermediate that forms upon protonation or hydride abstraction of methane with FSO 3 H-SbF 5 . The methenium ion is very reactive, even towards alkanes . [ 7 ]
In June 2023, astronomers detected , for the first time outside the Solar System, methyl cation, CH 3 + (and/or carbon cation , C + ), the known basic ingredients of life , in interstellar space . [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Methenium |
In organic chemistry , a methine group or methine bridge is a trivalent functional group =CH− , derived formally from methane . It consists of a carbon atom bound by two single bonds and one double bond , where one of the single bonds is to a hydrogen . The group is also called methyne or methene , but its IUPAC systematic name is methylylidene or methanylylidene . [ 1 ]
This group is sometimes called "methylidyne", however that name belongs properly to either the methylidyne group ≡CH (connected to the rest of the molecule by a triple bond) or to the methylidyne radical ⫶ CH (the two atoms as a free molecule with dangling bonds).
The name "methine" is also widely used in non-systematic nomenclature for the methanetriyl group (IUPAC): a carbon atom with four single bonds, where one bond is to a hydrogen atom ( >CH− ). [ 2 ]
Two or more methine bridges can overlap, forming a chain or ring of carbon atoms connected by alternating single and double bonds, as in piperylene H 2 C=CH−CH=CH−CH 3 , or the compound
Every carbon atom in this molecule is a methine carbon atom, except for three; two that are attached to the two nitrogen atoms and not to any hydrogen atoms, and the carbon attached to the nitrogen atom, which is attached to two hydrogen atoms (far right). There is a five-carbon-atom poly-methine chain in the center of this molecule.
Chains of alternating single and double bonds often form conjugated systems . When closed, as in benzene (=CH−CH=) 3 , they often give aromatic character to the compound. | https://en.wikipedia.org/wiki/Methine_group |
Methiocarb is a carbamate pesticide (an acetylcholinesterase inhibitor ) which has been used as an insecticide , [ 1 ] [ 2 ] bird repellent , [ 3 ] acaricide [ 2 ] and molluscicide [ 2 ] since the 1960s. Methiocarb has contact and stomach action on mites and neurotoxic effects on molluscs . Seeds treated with methiocarb also affect birds. Other names for methiocarb are mesurol [ 4 ] and mercaptodimethur .
Due to its toxicity, methiocarb approval as a plant protection product has been withdrawn by the EU effective 2020. [ 5 ]
The carbamate functional group in methiocarb can be cleaved by cholinesterase, yielding a carbamate moiety, which binds to the cholinesterase, and a phenol.
Methiocarb ( 3 ) is synthesised by Bayer from 4-methylthio-3,5-xylenol ( 1 ) and methyl isocyanate ( 2 ). [ 6 ] The xylenol ( 1 ) will act as the nucleophile in this reaction attacking the partially positively charged carbon in the isocyanate ( 2 ).
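In general form (with Ar standing for the 4-methylthio-3,5-xylyl group), this addition can be written as:

Ar−OH + CH 3 −N=C=O → Ar−O−C(=O)−NH−CH 3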
Methiocarb acts by acetylcholinesterase inhibition . [ 1 ] The product of the cleavage of the carbamate group of methiocarb is methylcarbamic acid which is bound to cholinesterase after the reaction. The normal function of cholinesterase is to cleave the acetyl-choline bond which results in the binding of acetic acid to cholinesterase which is a fast reversible reaction. The carbamic acid also reversibly binds but the hydrolysis of the bond is slower and therefore the acid inhibits the function of cholinesterase which results in elevated acetylcholine levels. In comparison: organophosphates inhibit irreversibly and will therefore inhibit the acetylcholinesterase even more. [ citation needed ]
In addition to its cholinergic effects, methiocarb has been found to be an endocrine disruptor , acting as an estrogen , antiandrogen , and aromatase inhibitor . [ 7 ]
Methiocarb is biotransformed in the liver mainly by sulfoxidation . This can happen to methiocarb itself but also to the phenol group which is cleaved from methiocarb by choline-esterase. In some cases this same sulfur can be oxidised once more to give the sulfone . A minor pathway that occurs is the hydroxylation of the N-methyl. [ 8 ] [ 9 ]
Methiocarb can be taken up through different routes. The most common for humans is uptake through the skin or as an aerosol , because of its use as a pesticide in agriculture. For insects and birds the relevant route is oral. The NOAELs for these routes have been determined as follows: for the oral route, the NOAEL is set at 3.3 mg/kg per day for rats, based on a 2-year study; for absorption through the skin, the NOAEL is set at 150 mg/kg per day for rabbits, based on the reduction of food consumption. [ 8 ]
When methiocarb is fed to rats at a dose of 50 ppm , it gives a reduction of brain cholinesterase by 14% and 5% in males and females respectively.
When methiocarb is administered as an aerosol to rats, the highest concentration (96 mg/m3 in solvent) showed signs of involuntary muscle contraction ( tremors ). These signs weren't observed in the other groups. The brain acetylcholine esterase is reduced in comparison to the solvent controls, to 61% and 74% for males and females respectively. There were no changes in organ weight. The NOAEL was determined to be 6 mg/m3 based on the reduction of brain acetylcholine esterase activity. [ 8 ]
To determine the distribution of methiocarb through the body, carbon-14 ([ 14 C]) labeled methiocarb studies have been performed on rats. About 8 hours after intraperitoneal (IP) injection of [ 14 C]methiocarb, more than 20 units are present in the kidneys, 14 in the lungs, 14 in the heart, 6 in the body fat and 26 in the red blood cells, all values being measures of radioactivity in dpm × 10 3 /g of dried tissue. Thirty minutes after treatment, all tissues except body fat showed much higher values, indicating that elimination takes place shortly after injection. An increase in all tissues except the red blood cells was also observed between 2 and 4 hours after injection, indicating that redistribution takes place after two hours, shortly followed by elimination. This radioactivity study measured only the [ 14 C] label, so the compound could already have been metabolized to different compounds with different toxicities, which this study does not distinguish. [ 10 ]
In rats, cholinesterase activity fell to 50 percent of control values within 27 days when the dose applied in the diet was 2 mg/kg bw for the first three days and 4 mg/kg bw for the next 24 days. No abnormal clinical signs were observed. [ 8 ] [ 11 ] In rabbits, methiocarb was applied to the skin of groups of ten at doses of 0, 60, 150 or 375 mg/kg bw per day for 6 h/day. Two out of ten rabbits in the low-dose group did not survive, and the high-dose group had reduced food consumption. Cholinesterase activity was reduced in males at the high dose at 14 and 21 days of treatment. No intergroup differences in cholinesterase activity were observed among females. Erythrocyte acetylcholinesterase activity was apparently not inhibited in a dose-related fashion. The duration of the study was 24 days. [ 8 ] [ 12 ]
In mice, a long-term study of 50 males and 50 females was performed. The mice received diets containing methiocarb at doses of 0, 15, 43 and 130 mg/kg bw per day in males and 0, 20, 57, and 170 mg/kg bw per day in females. Food consumption, behaviour and mortality rate were not affected at any dose. The decrease in plasma acetylcholinesterase activity was greatest at one month, and the smallest reductions were observed at 24 months. Brain acetylcholinesterase activity was also lowered, more in males than in females. [ 8 ] [ 13 ] [ 14 ] In rats, a two-year study of 60 animals was performed. The rats received diets delivering 0, 3.3, 9.3 and 29 mg/kg bw per day in males and 0, 5, 14, and 42 mg/kg bw per day in females. Food consumption, behaviour and mortality rate were not affected at any dose. Total protein concentrations were raised at the higher doses of methiocarb. Plasma acetylcholinesterase activity was lowered at the high dose on day one and from eight weeks onwards in males, and on day one and at weeks 1, 2, 4 and 13 in females. No effect on brain acetylcholinesterase activity was observed. [ 8 ] [ 15 ] [ 16 ]
Because methiocarb is widely used as an insecticide on crops, environmental risks were also studied to establish safety risks for human health. The metabolism of methiocarb in plants, soil and water has been characterized using radiolabelled [ 14 C]methiocarb. In plants, the major metabolites were methiocarb sulfoxide and methiocarb sulfoxide phenol. Environmental fate in water and soil has been determined from the metabolites formed by anaerobic and aerobic degradation, photolysis , adsorption and leaching of methiocarb. In soil, the half-life is 20 days for methiocarb sulfone phenol, 2 days for methiocarb sulfoxide phenol, 1.5 days for methiocarb and 6 days for methiocarb sulfoxide. Methiocarb is metabolized mainly to methiocarb phenol and, to a minor extent, to methiocarb sulfoxide and methiocarb sulfoxide phenol. After 217 days, no methiocarb or metabolites remain in the soil, because much of the compound is mineralized to CO 2 . In water, no methiocarb remained after 32 days; the half-life of methiocarb in water is strongly pH-dependent, but at pH 7 it is about 28 days. [ 17 ]
Methiocarb is used as a toxicant for various purposes: against snails, insects and rodents, and even as a bird repellent.
As an insecticide it is effective against thrips , which it kills at a low dose. The LC 99.99 is 0.34 g/L for the suspension concentrate and 2.30 g/L for the wettable powder, the latter being somewhat too high for effective use. [ 18 ]
For use as a molluscicide, methiocarb is effective, but at a high dose. In a study with E. vermiculata , methiocarb proved most effective as a topical application (although DMSO was used as the solvent). The LD 50 is 414 μg per snail and the LD 99.99 is roughly estimated at 1400 μg per snail. Methomyl was more effective, with an LD 50 of 90 μg per snail, much lower than that of methiocarb. [ 19 ]
As snail bait, methiocarb is as effective as methomyl at 1% and 2% (mass percent); both reached an average mortality of 85% with the 2% bait. The LC 50 of methiocarb is, however, higher than that of methomyl (0.93 versus 0.31). [ 19 ]
In another comparison study between methiocarb and methomyl, this time with Monacha obstructa , methomyl again proved more effective: the LD 50 values were 12 μg per snail for methomyl and 27 μg per snail for methiocarb. The compounds were applied topically to the snails, after first being dissolved in 95% ethanol and diluted with water to the required concentrations. [ 20 ]
As an avian repellent to protect fruit, methiocarb was not effective in one study: the birds still damaged the figs. Because the methiocarb was sprayed onto the fruit, the birds pinched the fruits or peeled off the skin and ate the flesh of the figs, and in that manner they were exposed to little or none of the repellent. [ 21 ]
Another study, with quelea, investigated whether methiocarb adversely affected food choice. When quelea ate seeds treated with methiocarb, they subsequently chose other food, which shows that methiocarb can be effective as a bird repellent. [ 22 ]
One study showed methiocarb to be not very effective against mice as a rodenticide. In the first field trial, methiocarb snail pellets were spread across the land and killed almost 23% of the initial mouse population in one night, but the population did not decrease (probably because of reinvasion from neighbouring land). Carcasses were not searched for afterwards, but birds were seen scavenging on them.
In the second field trial, grain was coated with methiocarb or strychnine, giving mortality rates of 40% for methiocarb and 90% for strychnine. Although methiocarb seems effective at first, mice develop an aversion to it, which makes it a poor rodenticide. [ 23 ]
Methiocarb is a plant protection agent, and while suicide with this type of toxin is rare, one case of suicide with methiocarb has been reported.
An 80-year-old woman in Germany killed herself by drinking a bottle of Mesurol. The red/pink fluid was found on her clothes, face, and hands (probably because of vomiting) and in the gastrointestinal tract as well as in the respiratory tract .
The toxicological examination showed that methiocarb uptake was not complete and that the concentrations of methiocarb and its metabolite in the urine were low, owing to the short duration of exposure. Elevated concentrations of methiocarb may be the result of post-mortem uptake, but could also reflect post-mortem redistribution of methiocarb and its metabolites from the gastrointestinal tract.
The conclusion of the toxicological examination was death by acute poisoning of methiocarb. [ 24 ]
The amount of methiocarb in the stomach was calculated for comparison with the LD 50 in rats. It is estimated to have been 6.1 gram (assuming a stomach volume of 1 L). The woman weighed 53 kg, which gives about 115 mg/kg bw. Compared with the LD 50 for rats, which is 30 mg/kg, it is reasonable to conclude that this woman died of methiocarb poisoning.
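In symbols, the dose estimate is simply:

$$\frac{6.1\ \mathrm{g}}{53\ \mathrm{kg}} \approx 0.115\ \mathrm{g/kg} = 115\ \mathrm{mg/kg\ bw}$$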
Bear in mind that this is only the amount of methiocarb found in the stomach; the rest had already been distributed through the body. [ 24 ]
†Semiquantitative analysis was performed by assuming similar extinction coefficients for mercaptodimethur and its metabolite descarbamoylmercaptodimethur at a wavelength of 200 nm. [ 24 ] | https://en.wikipedia.org/wiki/Methiocarb
In organic chemistry, a methiodide is a chemical derivative produced by the reaction of a compound with methyl iodide . Methiodides are often formed through the methylation of tertiary amines :
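In the generic case, with R standing for the amine's organic substituents, the quaternization reads:

$$\mathrm{R_3N + CH_3I \longrightarrow [R_3NCH_3]^+\,I^-}$$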
Whereas the parent amines are hydrophobic and often oily, methiodides, being salts, are somewhat hydrophilic and exhibit high melting points . Methiodides exhibit altered pharmacological properties as well.
Examples include:
Tertiary phosphines and phosphite esters also form methiodides. [ 2 ]
| https://en.wikipedia.org/wiki/Methiodide
Methionine sulfoxide is the organic compound with the formula CH 3 S(O)CH 2 CH 2 CH(NH 2 )CO 2 H. It is an amino acid that occurs naturally although it is formed post-translationally .
Oxidation of the sulfur of methionine results in methionine sulfoxide or methionine sulfone. The sulfur-containing amino acids methionine and cysteine are more easily oxidized than the other amino acids. [ 1 ] [ 2 ] Unlike oxidation of other amino acids, the oxidation of methionine can be reversed by enzymatic action, specifically by enzymes in the methionine sulfoxide reductase family of enzymes. The three known methionine sulfoxide reductases are MsrA , MsrB , and fRmsr. [ 2 ] Oxidation of methionine results in a mixture of the two diastereomers methionine-S-sulfoxide and methionine-R-sulfoxide, which are reduced by MsrA and MsrB, respectively. [ 3 ] MsrA can reduce both free and protein-based methionine-S-sulfoxide, whereas MsrB is specific for protein-based methionine-R-sulfoxide. fRmsr, however, catalyzes the reduction of free methionine-R-sulfoxide. [ 2 ] Thioredoxin recycles some members of the methionine sulfoxide reductase family by reducing them, whereas others can be reduced by metallothionein . [ 4 ]
Methionine sulfoxide (MetO), the oxidized form of the amino acid methionine (Met), increases with age in body tissues, which is believed by some to contribute to biological ageing . [ 5 ] [ 6 ] Oxidation of methionine residues in tissue proteins can cause them to misfold or otherwise render them dysfunctional. [ 5 ] Uniquely, the methionine sulfoxide reductase (Msr) group of enzymes act with thioredoxin to catalyze the enzymatic reduction and repair of oxidized methionine residues. [ 5 ] Moreover, levels of methionine sulfoxide reductase A (MsrA) decline in aging tissues in mice and in association with age-related disease in humans. [ 5 ] There is thus a rationale for thinking that increased levels or activity of MsrA, by maintaining the structure and function of tissue proteins, might retard the rate of aging.
Indeed, transgenic Drosophila (fruit flies) that overexpress methionine sulfoxide reductase show extended lifespan . [ 7 ] However, the effects of MsrA overexpression in mice were ambiguous. [ 8 ] MsrA is found in both the cytosol and the energy-producing mitochondria , where most of the body's endogenous free radicals are produced. Transgenically increasing the levels of MsrA in either the cytosol or the mitochondria had no significant effect on lifespan as assessed by most standard statistical tests, and may possibly have led to early deaths in the cytosol-specific mice, although the survival curves appeared to suggest a slight increase in maximum (90%) survivorship, as did analysis using Boschloo's exact test, a binomial test designed to detect differences at the extremes of survivorship. [ 8 ]
The oxidation of methionine serves as a switch that deactivates certain protein activities, such as that of the E. coli ribosomal protein L12. [ 9 ] Proteins with a large number of methionine residues tend to reside within the lipid bilayer, as methionine is one of the most hydrophobic amino acids. Methionine residues that are exposed to the aqueous exterior are thus vulnerable to oxidation. The oxidized residues tend to be arrayed around the active site and may guard access to this site against reactive oxygen species. Once oxidized, the MetO residues are reduced back to methionine by the enzyme methionine sulfoxide reductase. Thus, an oxidation–reduction cycle occurs in which exposed methionine residues are oxidized (e.g., by H 2 O 2 ) to methionine sulfoxide residues, which are subsequently reduced. [ 10 ]
Methionine (protein) + H 2 O 2 → Methionine sulfoxide (protein) + H 2 O
Methionine sulfoxide (protein) + NADPH + H + → Methionine (protein) + NADP + + H 2 O | https://en.wikipedia.org/wiki/Methionine_sulfoxide
Method of Fluxions ( Latin : De Methodis Serierum et Fluxionum ) [ 1 ] is a mathematical treatise by Sir Isaac Newton which served as the earliest written formulation of modern calculus . The book was completed in 1671 and posthumously published in 1736. [ 2 ]
Fluxion is Newton's term for a derivative . He originally developed the method at Woolsthorpe Manor during the closing of Cambridge due to the Great Plague of London from 1665 to 1667. Newton did not choose to make his findings known (similarly, his findings which eventually became the Philosophiae Naturalis Principia Mathematica were developed at this time and hidden from the world in Newton's notes for many years). Gottfried Leibniz developed his form of calculus independently around 1673, seven years after Newton had developed the basis for differential calculus, as seen in surviving documents such as "the method of fluxions and fluents ..." from 1666. Leibniz, however, published his discovery of differential calculus in 1684, nine years before Newton formally published part of his fluxion-notation calculus in 1693. [ 3 ]
The calculus notation in use today is mostly that of Leibniz, although Newton's dot notation for differentiation x ˙ {\displaystyle {\dot {x}}} is frequently used to denote derivatives with respect to time.
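For a quantity x varying with time t, the two notations are related by

$$\dot{x} = \frac{dx}{dt}, \qquad \ddot{x} = \frac{d^{2}x}{dt^{2}}.$$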
Newton's Method of Fluxions was formally published posthumously, but following Leibniz's publication of the calculus a bitter rivalry erupted between the two mathematicians over who had developed the calculus first, provoking Newton to reveal his work on fluxions.
For a period of time encompassing Newton's working life, the discipline of analysis was a subject of controversy in the mathematical community. Although analytic techniques provided solutions to long-standing problems, including problems of quadrature and the finding of tangents, the proofs of these solutions were not known to be reducible to the synthetic rules of Euclidean geometry. Instead, analysts were often forced to invoke infinitesimal , or "infinitely small", quantities to justify their algebraic manipulations. Some of Newton's mathematical contemporaries, such as Isaac Barrow , were highly skeptical of such techniques, which had no clear geometric interpretation. Although in his early work Newton also used infinitesimals in his derivations without justifying them, he later developed something akin to the modern definition of limits in order to justify his work. [ 4 ] | https://en.wikipedia.org/wiki/Method_of_Fluxions |
In proof theory , the semantic tableau [ 1 ] ( / t æ ˈ b l oʊ , ˈ t æ b l oʊ / ; plural: tableaux ), also called an analytic tableau , [ 2 ] truth tree , [ 1 ] or simply tree , [ 2 ] is a decision procedure for sentential and related logics, and a proof procedure for formulae of first-order logic . [ 1 ] An analytic tableau is a tree structure computed for a logical formula, having at each node a subformula of the original formula to be proved or refuted. Computation constructs this tree and uses it to prove or refute the whole formula. [ 3 ] The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics . [ 4 ]
A method of truth trees contains a fixed set of rules for producing trees from a given logical formula, or set of logical formulas. Those trees will have more formulas at each branch, and in some cases, a branch can come to contain both a formula and its negation, which is to say, a contradiction. In that case, the branch is said to close . [ 1 ] If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, [ 1 ] and therefore false. Conversely, a tableau can also prove that a logical formula is tautologous : if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close. [ 1 ]
In his Symbolic Logic Part II , Charles Lutwidge Dodgson (also known by his literary pseudonym, Lewis Carroll) introduced the Method of Trees, the earliest modern use of a truth tree. [ 5 ]
The method of semantic tableaux was invented by the Dutch logician Evert Willem Beth (Beth 1955) [ 6 ] and simplified, for classical logic, by Raymond Smullyan (Smullyan 1968, 1995). [ 7 ] Smullyan's simplification, "one-sided tableaux", is described here. Smullyan's method has been generalized to arbitrary many-valued propositional and first-order logics by Walter Carnielli (Carnielli 1987). [ 8 ]
Tableaux can be intuitively seen as sequent systems upside-down. This symmetrical relation between tableaux and sequent systems was formally established in (Carnielli 1991). [ 9 ]
Assume an infinite set P V {\displaystyle PV} of propositional variables and define the set Φ {\displaystyle \Phi } of formulae by induction, represented by the following grammar:
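In the usual presentation, with p {\displaystyle p} ranging over P V {\displaystyle PV} , the grammar reads:

$$\phi ::= p \mid \neg\phi \mid (\phi \to \phi) \mid (\phi \lor \phi) \mid (\phi \land \phi)$$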
That is, the basic connectives are: negation ¬ {\displaystyle \neg } , implication → {\displaystyle \to } , disjunction ∨ {\displaystyle \lor } , and conjunction ∧ {\displaystyle \land } .
The truth or falsehood of a formula is called its truth value. A formula, or set of formulas, is said to be satisfiable if there is a possible assignment of truth-values to the propositional variables such that the entire formula, which combines the variables with connectives, is itself true as well. [ 1 ] Such an assignment is said to satisfy the formula. [ 2 ]
A tableau checks whether a given set of formulae is satisfiable or not. It can be used to check either validity or entailment: a formula is valid if its negation is unsatisfiable, and formulae A 1 , … , A n {\displaystyle A_{1},\ldots ,A_{n}} imply B {\displaystyle B} if { A 1 , … , A n , ¬ B } {\displaystyle \{A_{1},\ldots ,A_{n},\neg B\}} is unsatisfiable.
For any formulae X {\displaystyle X} , Y {\displaystyle Y} the following facts hold:
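In the standard presentation, these are the decomposition equivalences:

$$
\begin{aligned}
&\neg\neg X &&\text{is true iff } X \text{ is true}\\
&X \land Y &&\text{is true iff } X \text{ and } Y \text{ are both true}\\
&\neg(X \land Y) &&\text{is true iff } \neg X \text{ or } \neg Y \text{ is true}\\
&X \lor Y &&\text{is true iff } X \text{ or } Y \text{ is true}\\
&\neg(X \lor Y) &&\text{is true iff } \neg X \text{ and } \neg Y \text{ are both true}\\
&X \to Y &&\text{is true iff } \neg X \text{ or } Y \text{ is true}\\
&\neg(X \to Y) &&\text{is true iff } X \text{ and } \neg Y \text{ are both true}
\end{aligned}
$$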
The method of analytic tableaux is based on these facts. The main principle of propositional tableaux is to attempt to "break" complex formulae into smaller ones until complementary pairs of literals are produced or no further expansion is possible.
The method works on a tree whose nodes are labeled with formulae. At each step, this tree is modified; in the propositional case, the only allowed changes are additions of a node as descendant of a leaf. The procedure starts by generating the tree made of a chain of all formulae in the set to prove unsatisfiability. [ 10 ] Then, the following procedure may be repeatedly applied nondeterministically:
If a branch of the tableau contains a formula ...
The breakdown process terminates after a finite number of steps, because each application of a rule eliminates a connective, and there are only finitely many connectives in any formula.
Note : In systems based on the grammar
that do not treat negation as primitive but define it in terms of implication and falsity ( ¬ Φ = def Φ → ⊥ {\displaystyle \neg \Phi \,{\overset {\text{def}}{=}}\,\Phi \to \bot } ), the tableau rules for ¬ {\displaystyle \neg } are replaced by
The principle of tableaux is that formulae in nodes of the same branch are considered in conjunction, while different branches are considered to be disjoined. As a result, a tableau is a tree-like representation of a formula that is a disjunction of conjunctions. This formula is equivalent to the set whose unsatisfiability is to be proved. The procedure modifies the tableau in such a way that the formula represented by the resulting tableau is equivalent to the original one. One of these conjunctions may contain a pair of complementary literals, in which case that conjunction is proved to be unsatisfiable. If all conjunctions are proved unsatisfiable, the original set of formulae is unsatisfiable.
Every tableau can be considered as a graphical representation of a formula, which is equivalent to the set the tableau is built from. This formula is as follows: each branch of the tableau represents the conjunction of its formulae, and the tableau represents the disjunction of its branches. The expansion rules transform a tableau into one whose represented formula is equivalent. Since the tableau is initialized as a single branch containing the formulae of the input set, all subsequent tableaux obtained from it represent formulae which are equivalent to that set (in the variant where the initial tableau is the single node labeled true, the formulae represented by tableaux are consequences of the original set).
The method of tableaux works by starting with the initial set of formulae and then adding to the tableau simpler and simpler formulae until contradiction is shown in the simple form of opposite literals. Since the formula represented by a tableau is the disjunction of the formulae represented by its branches, contradiction is obtained when every branch contains a pair of opposite literals.
Once a branch contains a literal and its negation, its corresponding formula is unsatisfiable. As a result, this branch can be now "closed", as there is no need to further expand it. If all branches of a tableau are closed, the formula represented by the tableau is unsatisfiable; therefore, the original set is unsatisfiable as well. Obtaining a tableau where all branches are closed is a way for proving the unsatisfiability of the original set. In the propositional case, one can also prove that satisfiability is proved by the impossibility of finding a closed tableau, provided that every expansion rule has been applied everywhere it could be applied. In particular, if a tableau contains some open (non-closed) branches and every formula that is not a literal has been used by a rule to generate a new node on every branch the formula is in, the set is satisfiable.
This rule takes into account that a formula may occur in more than one branch (this is the case if there is at least a branching point "below" the node). In this case, the rule for expanding the formula has to be applied so that its conclusion(s) are appended to all of these branches that are still open, before one can conclude that the tableau cannot be further expanded and that the formula is therefore satisfiable.
The above rules for propositional tableau can be simplified by using uniform notation. In uniform notation, each formula is either of type α {\displaystyle \alpha } (alpha) or of type β {\displaystyle \beta } (beta). Each formula of type alpha is assigned the two components α 1 , α 2 {\displaystyle \alpha _{1},\alpha _{2}} , and each formula of type beta is assigned the two components β 1 , β 2 {\displaystyle \beta _{1},\beta _{2}} . Formulae of type alpha can be thought of as being conjunctive, as both α 1 {\displaystyle \alpha _{1}} and α 2 {\displaystyle \alpha _{2}} are implied by α {\displaystyle \alpha } being true. Formulae of type beta can be thought of as being disjunctive, as either β 1 {\displaystyle \beta _{1}} or β 2 {\displaystyle \beta _{2}} is implied by β {\displaystyle \beta } being true. The tables below show how to determine the type and the components of any given propositional formula . [ 14 ]
In each table, the left-most column shows all the possible structures for the formulae of type alpha or beta, and the right-most columns show their respective components.
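Following Smullyan's standard assignment, the tables are:

$$
\begin{array}{c|cc}
\alpha & \alpha_1 & \alpha_2 \\ \hline
X \land Y & X & Y \\
\neg(X \lor Y) & \neg X & \neg Y \\
\neg(X \to Y) & X & \neg Y \\
\neg\neg X & X & X
\end{array}
\qquad
\begin{array}{c|cc}
\beta & \beta_1 & \beta_2 \\ \hline
X \lor Y & X & Y \\
\neg(X \land Y) & \neg X & \neg Y \\
X \to Y & \neg X & Y
\end{array}
$$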
When constructing a propositional tableau using the above notation, whenever one encounters a formula of type alpha, its two components α 1 , α 2 {\displaystyle \alpha _{1},\alpha _{2}} are added to the current branch that is being expanded. Whenever one encounters a formula of type beta on some branch θ {\displaystyle \theta } , one can split θ {\displaystyle \theta } into two branches, one with the set { θ {\displaystyle \theta } , β 1 {\displaystyle \beta _{1}} } of formulae, and the other with the set { θ {\displaystyle \theta } , β 2 {\displaystyle \beta _{2}} } of formulae. [ 15 ]
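The alpha/beta classification translates directly into a small satisfiability checker. The following Python sketch is illustrative only: the tuple encoding of formulae and the names components and satisfiable are assumptions of this example, not part of any standard library.

```python
# A minimal propositional tableau checker using Smullyan's uniform notation.
# Formulae are nested tuples: ('var', 'p'), ('not', X), ('and', X, Y),
# ('or', X, Y), ('imp', X, Y). The encoding is illustrative, not standard.

def components(f):
    """Return ('alpha', a1, a2), ('beta', b1, b2), or None for a literal."""
    op = f[0]
    if op == 'and':
        return ('alpha', f[1], f[2])
    if op == 'or':
        return ('beta', f[1], f[2])
    if op == 'imp':
        return ('beta', ('not', f[1]), f[2])
    if op == 'not':
        g = f[1]
        if g[0] == 'not':   # double negation: both components are g[1]
            return ('alpha', g[1], g[1])
        if g[0] == 'and':   # negated conjunction is disjunctive
            return ('beta', ('not', g[1]), ('not', g[2]))
        if g[0] == 'or':    # negated disjunction is conjunctive
            return ('alpha', ('not', g[1]), ('not', g[2]))
        if g[0] == 'imp':   # negated implication is conjunctive
            return ('alpha', g[1], ('not', g[2]))
    return None             # a variable or a negated variable

def satisfiable(branch):
    """Expand a branch (list of formulae); True iff some sub-branch stays open."""
    for f in branch:
        c = components(f)
        if c is not None:
            rest = [g for g in branch if g != f]
            kind, c1, c2 = c
            if kind == 'alpha':   # conjunctive: extend the same branch
                return satisfiable(rest + [c1, c2])
            return (satisfiable(rest + [c1])   # disjunctive: split the branch
                    or satisfiable(rest + [c2]))
    # Only literals remain: the branch closes iff it holds complementary literals.
    lits = set(branch)
    return not any(('not', l) in lits for l in lits)

# Example: {p or q, not p, not q} is unsatisfiable, so every branch closes.
p, q = ('var', 'p'), ('var', 'q')
assert not satisfiable([('or', p, q), ('not', p), ('not', q)])
```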
A variant of tableau is to label nodes with sets of formulae rather than single formulae. [ 16 ] In this case, the initial tableau is a single node labeled with the set to be proved satisfiable. The formulae in a set are therefore considered to be in conjunction.
The rules of expansion of the tableau can now work on the leaves of the tableau, ignoring all internal nodes. For conjunction, the rule is based on the equivalence of a set containing a conjunction A ∧ B {\displaystyle A\land B} with the set containing both A {\displaystyle A} and B {\displaystyle B} in place of it. In particular, if a leaf is labeled with X ∪ { A ∧ B } {\displaystyle X\cup \{A\land B\}} , a node can be appended to it with label X ∪ { A , B } {\displaystyle X\cup \{A,B\}} :
For disjunction, a set X ∪ { A ∨ B } {\displaystyle X\cup \{A\lor B\}} is equivalent to the disjunction of the two sets X ∪ { A } {\displaystyle X\cup \{A\}} and X ∪ { B } {\displaystyle X\cup \{B\}} . As a result, if the first set labels a leaf, two children can be appended to it, labeled with the latter two formulae.
Finally, if a set contains both a literal and its negation, this branch can be closed:
A tableau for a given finite set X is a finite (upside down) tree with root X in which all child nodes are obtained by applying the tableau rules to their parents. A branch in such a tableau is closed if its leaf node contains "closed". A tableau is closed if all its branches are closed. A tableau is open if at least one branch is not closed.
Below are two closed tableaux for the set
Each rule application is marked at the right hand side. Both achieve the same effect; the first closes faster. The only difference is the order in which the reduction is performed.
and the second, longer one, with the rules applied in a different order:
The first tableau closes after only one rule application, while the second one misses the mark and takes much longer to close. Clearly, one would always prefer to find the shortest closed tableau, but no practical method is known that finds the shortest closed tableau for arbitrary input sets of formulae.
The three rules ( ∧ ) {\displaystyle (\land )} , ( ∨ ) {\displaystyle (\lor )} and ( i d ) {\displaystyle (id)} given above are then enough to decide whether a given set X ′ {\displaystyle X'} of formulae in negated normal form is jointly satisfiable:
Just apply all possible rules in all possible orders until we find a closed tableau for X ′ {\displaystyle X'} or until we exhaust all possibilities and conclude that every tableau for X ′ {\displaystyle X'} is open.
In the first case, X ′ {\displaystyle X'} is jointly unsatisfiable, and in the second case the leaf node of the open branch gives an assignment to the atomic formulae and negated atomic formulae which makes X ′ {\displaystyle X'} jointly satisfiable. Classical logic actually has the rather nice property that we need to investigate only (any) one tableau completely: if it closes, then X ′ {\displaystyle X'} is unsatisfiable, and if it is open, then X ′ {\displaystyle X'} is satisfiable. However, this property is not generally enjoyed by other logics.
These rules suffice for all of classical logic by taking an initial set of formulae X and replacing each member C by its logically equivalent negated normal form C' , giving a set of formulae X' . We know that X is satisfiable if and only if X' is satisfiable, so it suffices to search for a closed tableau for X' using the procedure outlined above.
By setting X = { ¬ A } {\displaystyle X=\{\neg A\}} one can test whether the formula A is a tautology of classical logic:
If the tableau for { ¬ A } {\displaystyle \{\neg A\}} closes then ¬ A {\displaystyle \neg A} is unsatisfiable and so A is a tautology since no assignment of truth values will ever make A false. Otherwise any open leaf of any open branch of any open tableau for { ¬ A } {\displaystyle \{\neg A\}} gives an assignment that falsifies A .
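With the illustrative satisfiable function sketched earlier, the tautology test is a one-liner:

```python
def is_tautology(f):
    # A is a tautology iff not-A is unsatisfiable (every tableau branch closes).
    return not satisfiable([('not', f)])

# Example: p -> (q -> p) is a classical tautology.
assert is_tautology(('imp', p, ('imp', q, p)))
```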
Tableaux are extended to first-order predicate logic by two rules for dealing with universal and existential quantifiers, respectively. Two different sets of rules can be used; both employ a form of Skolemization for handling existential quantifiers, but differ on the handling of universal quantifiers.
The set of formulae to check for validity is here supposed to contain no free variables; this is not a limitation as free variables are implicitly universally quantified, so universal quantifiers over these variables can be added, resulting in a formula with no free variables.
A first-order formula ∀ x . γ ( x ) {\displaystyle \forall x.\gamma (x)} implies all formulae γ ( t ) {\displaystyle \gamma (t)} where t {\displaystyle t} is a ground term . The following inference rule is therefore correct:
Contrary to the rules for the propositional connectives, multiple applications of this rule to the same formula may be necessary. As an example, the set { ¬ P ( a ) ∨ ¬ P ( b ) , ∀ x . P ( x ) } {\displaystyle \{\neg P(a)\lor \neg P(b),\forall x.P(x)\}} can only be proved unsatisfiable if both P ( a ) {\displaystyle P(a)} and P ( b ) {\displaystyle P(b)} are generated from ∀ x . P ( x ) {\displaystyle \forall x.P(x)} .
Existential quantifiers are dealt with by means of Skolemization. In particular, a formula with a leading existential quantifier like ∃ x . δ ( x ) {\displaystyle \exists x.\delta (x)} generates its Skolemization δ ( c ) {\displaystyle \delta (c)} , where c {\displaystyle c} is a new constant symbol.
The Skolem term c {\displaystyle c} is a constant (a function of arity 0) because the quantification over x {\displaystyle x} does not occur within the scope of any universal quantifier. If the original formula contained some universal quantifiers such that the quantification over x {\displaystyle x} was within their scope, these quantifiers have evidently been removed by the application of the rule for universal quantifiers.
The rule for existential quantifiers introduces new constant symbols. These symbols can be used by the rule for universal quantifiers, so that ∀ y . γ ( y ) {\displaystyle \forall y.\gamma (y)} can generate γ ( c ) {\displaystyle \gamma (c)} even if c {\displaystyle c} was not in the original formula but is a Skolem constant created by the rule for existential quantifiers.
The above two rules for universal and existential quantifiers are correct, and so are the propositional rules: if a set of formulae generates a closed tableau, this set is unsatisfiable. Completeness can also be proved: if a set of formulae is unsatisfiable, there exists a closed tableau built from it by these rules. However, actually finding such a closed tableau requires a suitable policy of application of rules. Otherwise, an unsatisfiable set can generate an infinite-growing tableau. As an example, the set { ¬ P ( f ( c ) ) , ∀ x . P ( x ) } {\displaystyle \{\neg P(f(c)),\forall x.P(x)\}} is unsatisfiable, but a closed tableau is never obtained if one unwisely keeps applying the rule for universal quantifiers to ∀ x . P ( x ) {\displaystyle \forall x.P(x)} , generating for example P ( c ) , P ( f ( c ) ) , P ( f ( f ( c ) ) ) , … {\displaystyle P(c),P(f(c)),P(f(f(c))),\ldots } . A closed tableau can always be found by ruling out this and similar "unfair" policies of application of tableau rules.
The rule for universal quantifiers ( ∀ ) {\displaystyle (\forall )} is the only non-deterministic rule, as it does not specify which term to instantiate with. Moreover, while the other rules need to be applied only once for each formula and each path the formula is in, this one may require multiple applications. Application of this rule can however be restricted by delaying the application of the rule until no other rule is applicable and by restricting the application of the rule to ground terms that already appear in the path of the tableau. The variant of tableaux with unification shown below aims at solving the problem of non-determinism.
The main problem of tableau without unification is how to choose a ground term t {\displaystyle t} for the universal quantifier rule. Indeed, every possible ground term can be used, but clearly most of them might be useless for closing the tableau.
A solution to this problem is to "delay" the choice of the term to the time when the consequent of the rule allows closing at least a branch of the tableau. This can be done by using a variable instead of a term, so that ∀ x . γ ( x ) {\displaystyle \forall x.\gamma (x)} generates γ ( x ′ ) {\displaystyle \gamma (x')} , and then allowing substitutions to later replace x ′ {\displaystyle x'} with a term. The rule for universal quantifiers becomes:
While the initial set of formulae is supposed not to contain free variables, a formula of the tableau may contain the free variables generated by this rule. These free variables are implicitly considered universally quantified.
This rule employs a variable instead of a ground term. What is gained by this change is that these variables can be then given a value when a branch of the tableau can be closed, solving the problem of generating terms that might be useless.
As an example, { ¬ P ( a ) , ∀ x . P ( x ) } {\displaystyle \{\neg P(a),\forall x.P(x)\}} can be proved unsatisfiable by first generating P ( x 1 ) {\displaystyle P(x_{1})} ; the negation of this literal is unifiable with ¬ P ( a ) {\displaystyle \neg P(a)} , the most general unifier being the substitution that replaces x 1 {\displaystyle x_{1}} with a {\displaystyle a} ; applying this substitution results in replacing P ( x 1 ) {\displaystyle P(x_{1})} with P ( a ) {\displaystyle P(a)} , which closes the tableau.
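The substitution in this example is found by syntactic unification. Below is a minimal sketch of most-general-unifier computation, under an illustrative term encoding of our own (strings starting with '?' are variables); nothing here is a standard library API.

```python
# Minimal syntactic unification for first-order terms.
# A string is a variable if it starts with '?', otherwise a constant;
# compound terms are tuples ('f', t1, ..., tn). The encoding is illustrative.

def substitute(t, s):
    """Apply substitution s (a dict) to term t, resolving chains of bindings."""
    if isinstance(t, str):
        return substitute(s[t], s) if t in s else t
    return (t[0],) + tuple(substitute(a, s) for a in t[1:])

def occurs(v, t, s):
    """Occurs check: does variable v appear in t under substitution s?"""
    t = substitute(t, s)
    if t == v:
        return True
    return not isinstance(t, str) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return a most general unifier extending s, or None if none exists."""
    s = dict(s or {})
    t1, t2 = substitute(t1, s), substitute(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str) and t1.startswith('?'):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if isinstance(t2, str) and t2.startswith('?'):
        return unify(t2, t1, s)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# The branch above closes by unifying P(x1) with P(a), yielding {x1 -> a}.
print(unify(('P', '?x1'), ('P', 'a')))   # {'?x1': 'a'}
```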
This rule closes at least a branch of the tableau—the one containing the considered pair of literals. However, the substitution has to be applied to the whole tableau, not only on these two literals. This is expressed by saying that the free variables of the tableau are rigid : if an occurrence of a variable is replaced by something else, all other occurrences of the same variable must be replaced in the same way. Formally, the free variables are (implicitly) universally quantified and all formulae of the tableau are within the scope of these quantifiers.
Existential quantifiers are dealt with by Skolemization. Contrary to the tableau without unification, Skolem terms may not be simple constants. Indeed, formulae in a tableau with unification may contain free variables, which are implicitly considered universally quantified. As a result, a formula like ∃ x . δ ( x ) {\displaystyle \exists x.\delta (x)} may be within the scope of universal quantifiers; if this is the case, the Skolem term is not a simple constant but a term made of a new function symbol and the free variables of the formula.
This rule incorporates a simplification over a rule where x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the free variables of the branch, not of δ {\displaystyle \delta } alone. This rule can be further simplified by the reuse of a function symbol if it has already been used in a formula that is identical to δ {\displaystyle \delta } up to variable renaming.
The formula represented by a tableau is obtained in a way that is similar to the propositional case, with the additional assumption that free variables are considered universally quantified. As for the propositional case, formulae in each branch are conjoined and the resulting formulae are disjoined. In addition, all free variables of the resulting formula are universally quantified. All these quantifiers have the whole formula in their scope. In other words, if F {\displaystyle F} is the formula obtained by disjoining the conjunction of the formulae in each branch, and x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the free variables in it, then ∀ x 1 , … , x n . F {\displaystyle \forall x_{1},\ldots ,x_{n}.F} is the formula represented by the tableau. The following considerations apply:
The following two variants are also correct.
Tableaux with unification can be proved complete: if a set of formulae is unsatisfiable, it has a tableau-with-unification proof. However, actually finding such a proof may be a difficult problem. Contrary to the case without unification, applying a substitution can modify the existing part of a tableau; while applying a substitution closes at least a branch, it may make other branches impossible to close (even if the set is unsatisfiable).
A solution to this problem is delayed instantiation : no substitution is applied until one that closes all branches at the same time is found. With this variant, a proof for an unsatisfiable set can always be found by a suitable policy of application of the other rules. This method however requires the whole tableau to be kept in memory: the general method closes branches, which can be then discarded, while this variant does not close any branch until the end.
The problem that some tableaux that can be generated are impossible to close even if the set is unsatisfiable is common to other sets of tableau expansion rules: even if some specific sequences of application of these rules allow constructing a closed tableau (if the set is unsatisfiable), some other sequences lead to tableaux that cannot be closed. General solutions for these cases are outlined in the "Searching for a tableau" section.
A tableau calculus is a set of rules that allows building and modification of a tableau. Propositional tableau rules, tableau rules without unification, and tableau rules with unification, are all tableau calculi. Some important properties a tableau calculus may or may not possess are completeness, destructiveness, and proof confluence.
A tableau calculus is called complete if it allows building a tableau proof for every given unsatisfiable set of formulae. The tableau calculi mentioned above can be proved complete.
A remarkable difference between tableau with unification and the other two calculi is that the latter two calculi only modify a tableau by adding new nodes to it, while the former one allows substitutions to modify the existing part of the tableau. More generally, tableau calculi are classed as destructive or non-destructive depending on whether they only add new nodes to tableau or not. Tableau with unification is therefore destructive, while propositional tableau and tableau without unification are non-destructive.
Proof confluence is the property of a tableau calculus being able to obtain a proof for an arbitrary unsatisfiable set from an arbitrary tableau, assuming that this tableau has itself been obtained by applying the rules of the calculus. In other words, in a proof confluent tableau calculus, from an unsatisfiable set one can apply whatever set of rules and still obtain a tableau from which a closed one can be obtained by applying some other rules.
A tableau calculus is simply a set of rules that prescribes how a tableau can be modified. A proof procedure is a method for actually finding a proof (if one exists). In other words, a tableau calculus is a set of rules, while a proof procedure is a policy of application of these rules. Even if a calculus is complete, not every possible choice of application of rules leads to a proof of an unsatisfiable set. For example, { P ( f ( c ) ) , R ( c ) , ¬ P ( f ( c ) ) ∨ ¬ R ( c ) , ∀ x . Q ( x ) } {\displaystyle \{P(f(c)),R(c),\neg P(f(c))\lor \neg R(c),\forall x.Q(x)\}} is unsatisfiable, but both tableaux with unification and tableaux without unification allow the rule for the universal quantifiers to be applied repeatedly to the last formula, while simply applying the rule for disjunction to the third one would directly lead to closure.
For proof procedures, a definition of completeness has been given: a proof procedure is strongly complete if it allows finding a closed tableau for any given unsatisfiable set of formulae. Proof confluence of the underlying calculus is relevant to completeness: proof confluence is the guarantee that a closed tableau can be always generated from an arbitrary partially constructed tableau (if the set is unsatisfiable). Without proof confluence, the application of a 'wrong' rule may result in the impossibility of making the tableau complete by applying other rules.
Propositional tableaux and tableaux without unification have strongly complete proof procedures. In particular, a complete proof procedure is that of applying the rules in a fair way. This is because the only way such calculi cannot generate a closed tableau from an unsatisfiable set is by not applying some applicable rules.
For propositional tableaux, fairness amounts to expanding every formula in every branch. More precisely, for every formula and every branch the formula is in, the rule having the formula as a precondition has been used to expand the branch. A fair proof procedure for propositional tableaux is strongly complete.
For first-order tableaux without unification, the condition of fairness is similar, with the exception that the rule for universal quantifiers might require more than one application. Fairness amounts to expanding every universal quantifier infinitely often. In other words, a fair policy of application of rules cannot keep applying other rules without expanding every universal quantifier in every branch that is still open once in a while.
If a tableau calculus is complete, every unsatisfiable set of formulae has an associated closed tableau. While this tableau can always be obtained by applying some of the rules of the calculus, the problem of which rules to apply for a given formula still remains. As a result, completeness does not automatically imply the existence of a feasible policy of application of rules that always leads to a closed tableau for every given unsatisfiable set of formulae. While a fair proof procedure is complete for ground tableau and tableau without unification, this is not the case for tableau with unification.
A general solution for this problem is that of searching the space of tableaux until a closed one is found (if any exists, that is, the set is unsatisfiable). In this approach, one starts with an empty tableau and then recursively applies every possible applicable rule. This procedure visits an (implicit) tree whose nodes are labeled with tableaux, such that the tableau in a node is obtained from the tableau in its parent by applying one of the valid rules.
Since each branch can be infinite, this tree has to be visited breadth-first rather than depth-first. This requires a large amount of space, as the breadth of the tree can grow exponentially. A method that may visit some nodes more than once but works in polynomial space is to visit the tree in a depth-first manner with iterative deepening : one first visits the tree depth-first up to a certain depth, then increases the depth and performs the visit again. This particular procedure uses the depth (which is also the number of tableau rules that have been applied) for deciding when to stop at each step. Various other parameters (such as the size of the tableau labeling a node) have been used instead.
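Schematically, the iterative-deepening search over the tree of tableaux looks as follows; expand and closed are placeholders standing in for a concrete calculus and its closure test:

```python
# Schematic depth-first iterative deepening over the (implicit) tree of tableaux.
# `expand(t)` yields every tableau obtainable from t by one rule application;
# `closed(t)` tests closure. Both are placeholders, not a concrete calculus.

def depth_limited(tableau, limit, expand, closed):
    if closed(tableau):
        return tableau
    if limit == 0:
        return None
    for child in expand(tableau):
        found = depth_limited(child, limit - 1, expand, closed)
        if found is not None:
            return found
    return None

def find_closed_tableau(initial, expand, closed, max_depth=50):
    # Shallow nodes are revisited on every round, but space stays polynomial.
    for limit in range(max_depth + 1):
        found = depth_limited(initial, limit, expand, closed)
        if found is not None:
            return found
    return None   # no closed tableau within max_depth rule applications
```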
The size of the search tree depends on the number of (children) tableaux that can be generated from a given (parent) one. Reducing the number of such tableaux therefore reduces the required search.
A way for reducing this number is to disallow the generation of some tableaux based on their internal structure. An example is the condition of regularity: if a branch contains a literal, using an expansion rule that generates the same literal is useless, because the branch containing two copies of the literal would have the same set of formulae as the original one. This expansion can be disallowed because if a closed tableau exists, it can be found without it. This restriction is structural because it can be checked by looking only at the structure of the tableau to expand.
Different methods for reducing search disallow the generation of some tableaux on the ground that a closed tableau can still be found by expanding the other ones. These restrictions are called global. As an example of a global restriction, one may employ a rule that specifies which of the open branches is to be expanded. As a result, if a tableau has for example two non-closed branches, the rule specifies which one is to be expanded, disallowing the expansion of the second one. This restriction reduces the search space because one possible choice is now forbidden; completeness is however not harmed, as the second branch will still be expanded if the first one is eventually closed. As an example, a tableau with root ¬ a ∧ ¬ b {\displaystyle \neg a\land \neg b} , child a ∨ b {\displaystyle a\lor b} , and two leaves a {\displaystyle a} and b {\displaystyle b} can be closed in two ways: applying ( ∧ ) {\displaystyle (\land )} first to a {\displaystyle a} and then to b {\displaystyle b} , or vice versa. There is clearly no need to follow both possibilities; one may consider only the case in which ( ∧ ) {\displaystyle (\land )} is first applied to a {\displaystyle a} and disregard the case in which it is first applied to b {\displaystyle b} . This is a global restriction because what allows neglecting this second expansion is the presence of the other tableau, where expansion is applied to a {\displaystyle a} first and b {\displaystyle b} afterwards.
When applied to sets of clauses (rather than of arbitrary formulae), tableaux methods allow for a number of efficiency improvements. A first-order clause is a formula ∀ x 1 , … , x n L 1 ∨ ⋯ ∨ L m {\displaystyle \forall x_{1},\ldots ,x_{n}L_{1}\lor \cdots \lor L_{m}} that does not contain free variables and such that each L i {\displaystyle L_{i}} is a literal. The universal quantifiers are often omitted for clarity, so that for example P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle P(x,y)\lor Q(f(x))} actually means ∀ x , y . P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle \forall x,y.P(x,y)\lor Q(f(x))} . Note that, if taken literally, these two formulae are not the same as for satisfiability: rather, the satisfiability of P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle P(x,y)\lor Q(f(x))} is the same as that of ∃ x , y . P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle \exists x,y.P(x,y)\lor Q(f(x))} . That free variables are universally quantified is not a consequence of the definition of first-order satisfiability; it is rather used as an implicit common assumption when dealing with clauses.
The only expansion rules that are applicable to a clause are ( ∀ ) {\displaystyle (\forall )} and ( ∨ ) {\displaystyle (\lor )} ; these two rules can be replaced by their combination without losing completeness. In particular, the following rule corresponds to applying in sequence the rules ( ∀ ) {\displaystyle (\forall )} and ( ∨ ) {\displaystyle (\lor )} of the first-order calculus with unification.
When the set to be checked for satisfiability is composed only of clauses, this rule and the unification rules are sufficient to prove unsatisfiability. In other words, the tableau calculus composed of ( C ) {\displaystyle (C)} and ( σ ) {\displaystyle (\sigma )} is complete.
Since the clause expansion rule only generates literals and never new clauses, the clauses to which it can be applied are only clauses of the input set. As a result, the clause expansion rule can be further restricted to the case where the clause is in the input set.
Since this rule directly exploits the clauses in the input set, there is no need to initialize the tableau to the chain of the input clauses. The initial tableau can therefore be initialized with the single node labeled t r u e {\displaystyle true} ; this label is often omitted as implicit. As a result of this further simplification, every node of the tableau (apart from the root) is labeled with a literal.
A number of optimizations can be used for clause tableaux. These optimizations are aimed at reducing the number of possible tableaux to be explored when searching for a closed tableau, as described in the "Searching for a closed tableau" section above.
Connection is a condition over tableau that forbids expanding a branch using clauses that are unrelated to the literals that are already in the branch. Connection can be defined in two ways:
Both conditions apply only to branches that contain more than just the root. The second definition allows for the use of a clause containing a literal that unifies with the negation of a literal in the branch, while the first further constrains that literal to be in the leaf of the current branch.
If clause expansion is restricted by connectedness (either strong or weak), its application produces a tableau in which a substitution can be applied to one of the new leaves, closing its branch. In particular, this is the leaf containing the literal of the clause that unifies with the negation of a literal in the branch (or the negation of the literal in the parent, in the case of strong connection).
Both conditions of connectedness lead to a complete first-order calculus: if a set of clauses is unsatisfiable, it has a closed connected (strongly or weakly) tableau. Such a closed tableau can be found by searching in the space of tableaux as explained in the "Searching for a closed tableau" section. During this search, connectedness eliminates some possible choices of expansion, thus reducing search. In other words, while the tableau in a node of the tree can in general be expanded in several different ways, connection may allow only a few of them, thus reducing the number of resulting tableaux that need to be further expanded.
This can be seen on the following (propositional) example. The tableau made of a chain t r u e − a {\displaystyle true-a} for the set of clauses { a , ¬ a ∨ b , ¬ c ∨ d , ¬ b } {\displaystyle \{a,\neg a\lor b,\neg c\lor d,\neg b\}} can be in general expanded using each of the four input clauses, but connection only allows the expansion that uses ¬ a ∨ b {\displaystyle \neg a\lor b} . This means that the tree of tableaux has four leaves in general but only one if connectedness is imposed. This means that connectedness leaves only one tableau to try to expand, instead of the four ones to consider in general. In spite of this reduction of choices, the completeness theorem implies that a closed tableau can be found if the set is unsatisfiable.
The connectedness conditions, when applied to the propositional (clausal) case, make the resulting calculus non-confluent. As an example, { a , b , ¬ b } {\displaystyle \{a,b,\neg b\}} is unsatisfiable, but applying ( C ) {\displaystyle (C)} to a {\displaystyle a} generates the chain t r u e − a {\displaystyle true-a} , which is not closed and to which no other expansion rule can be applied without violating either strong or weak connectedness. In the case of weak connectedness, confluence holds provided that the clause used for expanding the root is relevant to unsatisfiability, that is, it is contained in a minimally unsatisfiable subset of the set of clauses. Unfortunately, the problem of checking whether a clause meets this condition is itself a hard problem. In spite of non-confluence, a closed tableau can be found using search, as presented in the "Searching for a closed tableau" section above. While search is made necessary, connectedness reduces the possible choices of expansion, thus making search more efficient.
A tableau is regular if no literal occurs twice in the same branch. Enforcing this condition allows for a reduction of the possible choices of tableau expansion, as the clauses that would generate a non-regular tableau cannot be expanded.
These disallowed expansion steps are however useless. If B {\displaystyle B} is a branch containing a literal L {\displaystyle L} , and C {\displaystyle C} is a clause whose expansion violates regularity, then C {\displaystyle C} contains L {\displaystyle L} . In order to close the tableau, one needs to expand and close, among others, the branch obtained by extending B {\displaystyle B} with the literals of C {\displaystyle C} , in which L {\displaystyle L} occurs twice. However, the formulae in this branch are exactly the same as the formulae of B {\displaystyle B} alone. As a result, the same expansion steps that close this branch also close B {\displaystyle B} . This means that expanding C {\displaystyle C} was unnecessary; moreover, if C {\displaystyle C} contained other literals, its expansion generated other leaves that needed to be closed. In the propositional case, the expansions needed to close these leaves are completely useless; in the first-order case, they may affect the rest of the tableau only through some unifications, which can however be combined with the substitutions used to close the rest of the tableau.
In a modal logic , a model comprises a set of possible worlds , each one associated to a truth evaluation; an accessibility relation specifies when a world is accessible from another one. A modal formula may specify not only conditions over a possible world, but also on the ones that are accessible from it. As an example, ◻ A {\displaystyle \Box A} is true in a world if A {\displaystyle A} is true in all worlds that are accessible from it.
As for propositional logic, tableaux for modal logics are based on recursively breaking formulae into its basic components. Expanding a modal formula may however require stating conditions over different worlds. As an example, if ¬ ◻ A {\displaystyle \neg \Box A} is true in a world then there exists a world accessible from it where A {\displaystyle A} is false. However, one cannot simply add the following rule to the propositional ones.
In propositional tableaux all formulae refer to the same truth evaluation, but the precondition of the rule above holds in one world while the consequence holds in another. Not taking this into account would generate incorrect results. For example, formula a ∧ ¬ ◻ a {\displaystyle a\land \neg \Box a} states that a {\displaystyle a} is true in the current world and a {\displaystyle a} is false in a world that is accessible from it. Simply applying ( ∧ ) {\displaystyle (\land )} and the expansion rule above would produce a {\displaystyle a} and ¬ a {\displaystyle \neg a} , but these two formulae should not in general generate a contradiction, as they hold in different worlds. Modal tableaux calculi do contain rules of the kind of the one above, but include mechanisms to avoid the incorrect interaction of formulae referring to different worlds.
Technically, tableaux for modal logics check the satisfiability of a set of formulae: they check whether there exists a model M {\displaystyle M} and world w {\displaystyle w} such that the formulae in the set are true in that model and world. In the example above, while a {\displaystyle a} states the truth of a {\displaystyle a} in w {\displaystyle w} , the formula ¬ ◻ a {\displaystyle \neg \Box a} states the truth of ¬ a {\displaystyle \neg a} in some world w ′ {\displaystyle w'} that is accessible from w {\displaystyle w} and which may in general be different from w {\displaystyle w} . Tableaux calculi for modal logic take into account that formulae may refer to different worlds.
This fact has an important consequence: formulae that hold in a world may imply conditions over different successors of that world. Unsatisfiability may then be proved from the subset of formulae referring to a single successor. This holds if a world may have more than one successor, which is true for most modal logics. If this is the case, a formula like ¬ ◻ A ∧ ¬ ◻ B {\displaystyle \neg \Box A\land \neg \Box B} is true if a successor where ¬ A {\displaystyle \neg A} holds exists and a successor where ¬ B {\displaystyle \neg B} holds exists. Conversely, if one can show unsatisfiability of ¬ A {\displaystyle \neg A} in an arbitrary successor, the formula is proved unsatisfiable without checking for worlds where ¬ B {\displaystyle \neg B} holds. At the same time, if one can show unsatisfiability of ¬ B {\displaystyle \neg B} , there is no need to check ¬ A {\displaystyle \neg A} . As a result, while there are two possible ways to expand ¬ ◻ A ∧ ¬ ◻ B {\displaystyle \neg \Box A\land \neg \Box B} , one of these two ways is always sufficient to prove unsatisfiability if the formula is unsatisfiable. For example, one may expand the tableau by considering an arbitrary world where ¬ A {\displaystyle \neg A} holds. If this expansion leads to unsatisfiability, the original formula is unsatisfiable. However, it is also possible that unsatisfiability cannot be proved this way, and that the world where ¬ B {\displaystyle \neg B} holds should have been considered instead. As a result, one can always prove unsatisfiability by expanding either ¬ ◻ A {\displaystyle \neg \Box A} only or ¬ ◻ B {\displaystyle \neg \Box B} only; however, if the wrong choice is made the resulting tableau may not be closed. Expanding either subformula leads to tableau calculi that are complete but not proof-confluent. Searching as described in the "Searching for a closed tableau" section may therefore be necessary.
Depending on whether the precondition and consequence of a tableau expansion rule refer to the same world or not, the rule is called static or transitional. While rules for propositional connectives are all static, not all rules for modal connectives are transitional: for example, in every modal logic including axiom T , it holds that ◻ A {\displaystyle \Box A} implies A {\displaystyle A} in the same world. As a result, the corresponding (modal) tableau expansion rule is static, as both its precondition and consequence refer to the same world.
A method for avoiding formulae referring to different worlds interacting in the wrong way is to make sure that all formulae of a branch refer to the same world. This condition is initially true, as all formulae in the set to be checked for consistency are assumed to refer to the same world. When expanding a branch, two situations are possible: either the new formulae refer to the same world as the others in the branch, or they do not. In the first case, the rule is applied normally. In the second case, all formulae of the branch that do not also hold in the new world are deleted from the branch, and possibly added to all other branches that are still relative to the old world.
As an example, in S5 every formula ◻ A {\displaystyle \Box A} that is true in a world is also true in all accessible worlds (that is, in all accessible worlds both A {\displaystyle A} and ◻ A {\displaystyle \Box A} are true). Therefore, when applying ¬ ◻ B ¬ B {\displaystyle {\frac {\neg \Box B}{\neg B}}} , whose consequence holds in a different world, one deletes all formulae from the branch, but can keep all formulae ◻ A {\displaystyle \Box A} , as these hold in the new world as well. In order to retain completeness, the deleted formulae are then added to all other branches that still refer to the old world.
A different mechanism for ensuring the correct interaction between formulae referring to different worlds is to switch from formulae to labeled formulae: instead of writing A {\displaystyle A} , one would write w : A {\displaystyle w:A} to make it explicit that A {\displaystyle A} holds in world w {\displaystyle w} .
All propositional expansion rules are adapted to this variant by stating that they all refer to formulae with the same world label. For example, w : A ∧ B {\displaystyle w:A\land B} generates two nodes labeled with w : A {\displaystyle w:A} and w : B {\displaystyle w:B} ; a branch is closed only if it contains two opposite literals of the same world, like w : a {\displaystyle w:a} and w : ¬ a {\displaystyle w:\neg a} ; no closure is generated if the two world labels are different, like in w : a {\displaystyle w:a} and w ′ : ¬ a {\displaystyle w':\neg a} .
A modal expansion rule may have a consequence that refers to different worlds. For example, the rule for ¬ ◻ A {\displaystyle \neg \Box A} would be written as follows, where w ′ {\displaystyle w'} is a new world accessible from w {\displaystyle w} : w : ¬ ◻ A w ′ : ¬ A {\displaystyle {\frac {w:\neg \Box A}{w':\neg A}}}
The precondition and consequent of this rule refer to worlds w {\displaystyle w} and w ′ {\displaystyle w'} , respectively. The various calculi use different methods for keeping track of the accessibility of the worlds used as labels. Some include pseudo-formulae like w R w ′ {\displaystyle wRw'} to denote that w ′ {\displaystyle w'} is accessible from w {\displaystyle w} . Some others use sequences of integers as world labels, this notation implicitly representing the accessibility relation (for example, ( 1 , 4 , 2 , 3 ) {\displaystyle (1,4,2,3)} is accessible from ( 1 , 4 , 2 ) {\displaystyle (1,4,2)} .)
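For instance, with integer-sequence labels the accessibility relation reduces to a prefix check. A minimal illustrative sketch (the tuple encoding is a choice made here, not standard notation):

```python
def is_accessible(w, w_prime):
    """With integer-sequence world labels, w' is accessible from w
    exactly when w' extends w by one more integer, e.g. (1, 4, 2, 3)
    is accessible from (1, 4, 2)."""
    return len(w_prime) == len(w) + 1 and w_prime[:len(w)] == w

assert is_accessible((1, 4, 2), (1, 4, 2, 3))
assert not is_accessible((1, 4, 2), (1, 4, 3, 3))
```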
The problem of interaction between formulae holding in different worlds can be overcome by using set-labeling tableaux. These are trees whose nodes are labeled with sets of formulae; the expansion rules explain how to attach new nodes to a leaf, based only on the label of the leaf (and not on the label of other nodes in the branch).
Tableaux for modal logics are used to verify the satisfiability of a set of modal formulae in a given modal logic. Given a set of formulae S {\displaystyle S} , they check the existence of a model M {\displaystyle M} and a world w {\displaystyle w} such that M , w ⊨ S {\displaystyle M,w\models S} .
The expansion rules depend on the particular modal logic used. A tableau system for the basic modal logic K can be obtained by adding to the propositional tableau rules the following one: ( K ) ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B A 1 ; … ; A n ; ¬ B {\displaystyle (K)\ {\frac {\Box A_{1};\ldots ;\Box A_{n};\neg \Box B}{A_{1};\ldots ;A_{n};\neg B}}}
Intuitively, the precondition of this rule expresses the truth of all formulae A 1 , … , A n {\displaystyle A_{1},\ldots ,A_{n}} at all accessible worlds, and the truth of ¬ B {\displaystyle \neg B} at some accessible world. The consequence of this rule is a set of formulae that must be true at one of those worlds where ¬ B {\displaystyle \neg B} is true.
More technically, modal tableaux methods check the existence of a model M {\displaystyle M} and a world w {\displaystyle w} that make a set of formulae true. If ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B {\displaystyle \Box A_{1};\ldots ;\Box A_{n};\neg \Box B} are true in w {\displaystyle w} , there must be a world w ′ {\displaystyle w'} that is accessible from w {\displaystyle w} and that makes A 1 ; … ; A n ; ¬ B {\displaystyle A_{1};\ldots ;A_{n};\neg B} true. This rule therefore amounts to deriving a set of formulae that must be satisfied in such w ′ {\displaystyle w'} .
While the preconditions ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B {\displaystyle \Box A_{1};\ldots ;\Box A_{n};\neg \Box B} are assumed satisfied by M , w {\displaystyle M,w} , the consequences A 1 ; … ; A n ; ¬ B {\displaystyle A_{1};\ldots ;A_{n};\neg B} are assumed satisfied in M , w ′ {\displaystyle M,w'} : same model but possibly different worlds. Set-labeled tableaux do not explicitly keep track of the world where each formula is assumed true: two nodes may or may not refer to the same world. However, the formulae labeling any given node are assumed true at the same world.
As a result of the possibly different worlds where formulae are assumed true, a formula in a node is not automatically valid in all its descendants, as every application of the modal rule corresponds to a move from a world to another one. This condition is automatically captured by set-labeling tableaux, as expansion rules are based only on the leaf where they are applied and not on its ancestors.
Notably, ( K ) {\displaystyle (K)} does not directly extend to multiple negated boxed formulae such as in ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B 1 ; ¬ ◻ B 2 {\displaystyle \Box A_{1};\ldots ;\Box A_{n};\neg \Box B_{1};\neg \Box B_{2}} : while there exists an accessible world where B 1 {\displaystyle B_{1}} is false and one in which B 2 {\displaystyle B_{2}} is false, these two worlds are not necessarily the same.
Unlike the propositional rules, ( K ) {\displaystyle (K)} states conditions over all of its preconditions: the node label must consist exactly of formulae of the forms shown. For example, it cannot be applied to a node labeled by a ; ◻ b ; ◻ ( b → c ) ; ¬ ◻ c {\displaystyle a;\Box b;\Box (b\to c);\neg \Box c} ; while this set is inconsistent and this could easily be proved by applying ( K ) {\displaystyle (K)} , the rule cannot be applied because of the formula a {\displaystyle a} , which is not even relevant to the inconsistency. Removal of such formulae is made possible by the rule: ( θ ) Γ ; Δ Γ {\displaystyle (\theta )\ {\frac {\Gamma ;\Delta }{\Gamma }}} which expands a node labeled by a set of formulae with a child labeled by an arbitrary subset Γ {\displaystyle \Gamma } of it.
The addition of this rule (thinning rule) makes the resulting calculus non-confluent: a tableau for an inconsistent set may be impossible to close, even if a closed tableau for the same set exists.
Rule ( θ ) {\displaystyle (\theta )} is non-deterministic: the set of formulae to be removed (or to be kept) can be chosen arbitrarily; this creates the problem of choosing a set of formulae to discard that is not so large that it makes the resulting set satisfiable, and not so small that it leaves the necessary expansion rules inapplicable. Having a large number of possible choices makes the problem of searching for a closed tableau harder.
This non-determinism can be avoided by restricting the usage of ( θ ) {\displaystyle (\theta )} so that it is only applied before a modal expansion rule, and so that it only removes the formulae that make that other rule inapplicable. This condition can also be formulated by merging the two rules into a single one. The resulting rule produces the same result as the old one, but implicitly discards all formulae that made the old rule inapplicable. This mechanism for eliminating the separate ( θ ) {\displaystyle (\theta )} rule has been proved to preserve completeness for many modal logics.
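A minimal illustrative satisfiability checker for the basic modal logic K can merge the thinning step into the modal rule exactly as just described: when expanding a negated box it keeps only the bodies of the boxed formulae and discards everything else. The tuple encoding and function names below are choices made here, and the sketch is not an efficient prover:

```python
def nnf(f, neg=False):
    """Convert a formula to negation normal form. Formulas are tuples:
    ('atom', p), ('not', f), ('and', f, g), ('or', f, g),
    ('box', f), ('dia', f); 'natom' marks a negated atom."""
    op = f[0]
    if op == 'atom':
        return ('natom', f[1]) if neg else f
    if op == 'not':
        return nnf(f[1], not neg)
    if op in ('and', 'or'):
        dual = {'and': 'or', 'or': 'and'}
        return (dual[op] if neg else op, nnf(f[1], neg), nnf(f[2], neg))
    dual = {'box': 'dia', 'dia': 'box'}
    return (dual[op] if neg else op, nnf(f[1], neg))

def k_sat(formulas):
    """Tableau-style satisfiability test for a set of NNF formulas,
    all assumed true at a single world."""
    pending = [f for f in formulas if f[0] in ('and', 'or')]
    if pending:
        f, rest = pending[0], formulas - {pending[0]}
        if f[0] == 'and':
            return k_sat(rest | {f[1], f[2]})
        return k_sat(rest | {f[1]}) or k_sat(rest | {f[2]})
    # the branch is propositionally saturated: check for closure
    if {f[1] for f in formulas if f[0] == 'atom'} \
       & {f[1] for f in formulas if f[0] == 'natom'}:
        return False
    # merged (theta)+(K): each diamond spawns its own successor world
    # containing only the boxed bodies (everything else is discarded)
    boxed = {f[1] for f in formulas if f[0] == 'box'}
    return all(k_sat(boxed | {f[1]}) for f in formulas if f[0] == 'dia')

# the inconsistent set from the previous paragraph: a; box b; box(b -> c); not box c
s = {('atom', 'a'),
     ('box', ('atom', 'b')),
     ('box', ('or', ('not', ('atom', 'b')), ('atom', 'c'))),
     ('not', ('box', ('atom', 'c')))}
print(k_sat({nnf(g) for g in s}))  # False
```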
Axiom T expresses reflexivity of the accessibility relation: every world is accessible from itself. The corresponding tableau expansion rule is: ( T ) ◻ B B ; ◻ B {\displaystyle (T)\ {\frac {\Box B}{B;\Box B}}}
This rule relates conditions over the same world: if ◻ B {\displaystyle \Box B} is true in a world, by reflexivity B {\displaystyle B} is also true in the same world . This rule is static, not transactional, as both its precondition and consequent refer to the same world.
This rule copies ◻ B {\displaystyle \Box B} from the precondition to the consequent, in spite of this formula having been "used" to generate B {\displaystyle B} . This is correct, as the considered world is the same, so ◻ B {\displaystyle \Box B} also holds there. This "copying" is necessary in some cases. It is for example necessary to prove the inconsistency of ◻ ( a ∧ ¬ ◻ a ) {\displaystyle \Box (a\land \neg \Box a)} : the only applicable rules are, in order, ( T ) , ( ∧ ) , ( θ ) , ( K ) {\displaystyle (T),(\land ),(\theta ),(K)} , and the last of these is blocked if ◻ a {\displaystyle \Box a} is not copied.
A different method for dealing with formulae holding in alternate worlds is to start a different tableau for each new world that is introduced in the tableau. For example, ¬ ◻ A {\displaystyle \neg \Box A} implies that A {\displaystyle A} is false in an accessible world, so one starts a new tableau rooted at ¬ A {\displaystyle \neg A} . This new tableau is attached to the node of the original tableau where the expansion rule has been applied; a closure of this tableau immediately generates a closure of all branches where that node is, regardless of whether the same node is associated with other auxiliary tableaux. The expansion rules for the auxiliary tableaux are the same as for the original one; therefore, an auxiliary tableau can in turn have other (sub-)auxiliary tableaux.
The above modal tableaux establish the consistency of a set of formulae, and can be used for solving the local logical consequence problem. This is the problem of telling whether, for each model M {\displaystyle M} , if A {\displaystyle A} is true in a world w {\displaystyle w} , then B {\displaystyle B} is also true in the same world. This is the same as checking whether B {\displaystyle B} is true in a world of a model, under the assumption that A {\displaystyle A} is also true in the same world of the same model.
A related problem is the global consequence problem, where the assumption is that a formula (or set of formulae) G {\displaystyle G} is true in all possible worlds of the model. The problem is that of checking whether, in all models M {\displaystyle M} where G {\displaystyle G} is true in all worlds, B {\displaystyle B} is also true in all worlds.
Local and global assumption differ on models where the assumed formula is true in some worlds but not in others. As an example, { P , ¬ ◻ ( P ∧ Q ) } {\displaystyle \{P,\neg \Box (P\land Q)\}} entails ¬ ◻ Q {\displaystyle \neg \Box Q} globally but not locally. Local entailment does not hold in a model consisting of two worlds making P {\displaystyle P} and ¬ P , Q {\displaystyle \neg P,Q} true, respectively, and where the second is accessible from the first; in the first world, the assumptions are true but ¬ ◻ Q {\displaystyle \neg \Box Q} is false. This counterexample works because P {\displaystyle P} can be assumed true in a world and false in another one. If however the same assumption is considered global, ¬ P {\displaystyle \neg P} is not allowed in any world of the model.
These two problems can be combined, so that one can check whether B {\displaystyle B} is a local consequence of A {\displaystyle A} under the global assumption G {\displaystyle G} . Tableaux calculi can deal with a global assumption by a rule allowing its addition to every node, regardless of the world it refers to.
The following conventions are sometimes used.
When writing tableaux expansion rules, formulae are often denoted using a convention, so that for example α is always considered to be α 1 ∧ α 2 {\displaystyle \alpha _{1}\land \alpha _{2}} . The following table provides the notation for formulae in propositional, first-order, and modal logic.
Each label in the first column is taken to stand for any formula of the corresponding forms in the other columns. An overlined formula such as α 1 ¯ {\displaystyle {\overline {\alpha _{1}}}} indicates that α 1 {\displaystyle \alpha _{1}} is the negation of whatever formula appears in its place, so that for example in the formula ¬ ( a ∨ b ) {\displaystyle \neg (a\lor b)} the subformula α 1 {\displaystyle \alpha _{1}} is the negation of a .
Since every label indicates many equivalent formulae, this notation allows writing a single rule for all these equivalent formulae. For example, the conjunction expansion rule is formulated as: ( α ) α α 1 ; α 2 {\displaystyle (\alpha )\ {\frac {\alpha }{\alpha _{1};\alpha _{2}}}} | https://en.wikipedia.org/wiki/Method_of_analytic_tableaux
In mathematics , more specifically in dynamical systems , the method of averaging (also called averaging theory) exploits systems containing a time-scale separation: a fast oscillation versus a slow drift . It suggests performing an average over a given amount of time in order to iron out the fast oscillations and observe the qualitative behavior of the resulting dynamics. The approximate solution holds over a finite time interval inversely proportional to the parameter denoting the slow time scale. There is thus a customary trade-off between the accuracy of the approximation and the length of time over which it remains close to the solution of the original system.
More precisely, the system has the following form x ˙ = ε f ( x , t , ε ) , 0 ≤ ε ≪ 1 {\displaystyle {\dot {x}}=\varepsilon f(x,t,\varepsilon ),\quad 0\leq \varepsilon \ll 1} of a phase space variable x . {\displaystyle x.} The fast oscillation is given by f {\displaystyle f} versus a slow drift of x ˙ {\displaystyle {\dot {x}}} . The averaging method yields an autonomous dynamical system y ˙ = ε 1 T ∫ 0 T f ( y , s , 0 ) d s =: ε f ¯ ( y ) {\displaystyle {\dot {y}}=\varepsilon {\frac {1}{T}}\int _{0}^{T}f(y,s,0)~ds=:\varepsilon {\bar {f}}(y)} which approximates the solution curves of x ˙ {\displaystyle {\dot {x}}} inside a connected and compact region of the phase space and over a time of order 1 / ε {\displaystyle 1/\varepsilon } .
Under the validity of this averaging technique, the asymptotic behavior of the original system is captured by the dynamical equation for y {\displaystyle y} . In this way, qualitative methods for autonomous dynamical systems may be employed to analyze the equilibria and more complex structures, such as slow manifold and invariant manifolds , as well as their stability in the phase space of the averaged system.
In addition, in a physical application it might be reasonable or natural to replace a mathematical model, which is given in the form of the differential equation for x ˙ {\displaystyle {\dot {x}}} , with the corresponding averaged system y ˙ {\displaystyle {\dot {y}}} , in order to use the averaged system to make a prediction and then test the prediction against the results of a physical experiment. [ 1 ]
The averaging method has a long history, which is deeply rooted in perturbation problems that arose in celestial mechanics (see, for example in [ 2 ] ).
Consider a perturbed logistic growth x ˙ = ε ( x ( 1 − x ) + sin t ) x ∈ R , 0 ≤ ε ≪ 1 , {\displaystyle {\dot {x}}=\varepsilon (x(1-x)+\sin {t})\quad \quad x\in \mathbb {R} ,\quad 0\leq \varepsilon \ll 1,} and the averaged equation y ˙ = ε y ( 1 − y ) y ∈ R . {\displaystyle {\dot {y}}=\varepsilon y(1-y)\qquad y\in \mathbb {R} .} The purpose of the method of averaging is to tell us the qualitative behavior of the vector field when we average it over a period of time. It guarantees that the solution y ( t ) {\displaystyle y(t)} approximates x ( t ) {\displaystyle x(t)} for times t = O ( 1 / ε ) . {\displaystyle t={\mathcal {O}}(1/\varepsilon ).} Exceptionally, in this example the approximation is even better: it is valid for all times, as we show in a section below.
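A quick numerical illustration of this estimate (a sketch; the values of ε and the initial condition are arbitrary choices made here):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
t_end = 1.0 / eps                       # the guaranteed window, t = O(1/eps)
t = np.linspace(0.0, t_end, 2000)

orig = solve_ivp(lambda t, x: eps * (x * (1 - x) + np.sin(t)),
                 (0.0, t_end), [0.1], t_eval=t, rtol=1e-9, atol=1e-12)
avg = solve_ivp(lambda t, y: eps * y * (1 - y),
                (0.0, t_end), [0.1], t_eval=t, rtol=1e-9, atol=1e-12)

err = np.max(np.abs(orig.y[0] - avg.y[0]))
print(f"max |x - y| on [0, 1/eps] = {err:.4f}   (O(eps), eps = {eps})")
```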
We assume the vector field f : R n × R × R → R n {\displaystyle f:\mathbb {R} ^{n}\times \mathbb {R} \times \mathbb {R} \to \mathbb {R} ^{n}} to be of differentiability class C r {\displaystyle C^{r}} with r ≥ 2 {\displaystyle r\geq 2} (or we will simply say smooth), which we will denote f ∈ C r ( R n × R × R + ; R n ) {\displaystyle f\in C^{r}(\mathbb {R} ^{n}\times \mathbb {R} \times \mathbb {R} ^{+};\mathbb {R} ^{n})} . We expand this time-dependent vector field in a Taylor series (in powers of ε {\displaystyle \varepsilon } ) with remainder f [ k + 1 ] ( x , t , ε ) {\displaystyle f^{[k+1]}(x,t,\varepsilon )} . We introduce the following notation: [ 2 ] f ( x , t , ε ) = f 0 ( x , t ) + ε f 1 ( x , t ) + ⋯ + ε k f k ( x , t ) + ε k + 1 f [ k + 1 ] ( x , t , ε ) , {\displaystyle f(x,t,\varepsilon )=f^{0}(x,t)+\varepsilon f^{1}(x,t)+\dots +\varepsilon ^{k}f^{k}(x,t)+\varepsilon ^{k+1}f^{[k+1]}(x,t,\varepsilon ),} where f j = f ( j ) ( x , t , 0 ) j ! {\displaystyle f^{j}={\frac {f^{(j)}(x,t,0)}{j!}}} is the j {\displaystyle j} -th derivative with 0 ≤ j ≤ k {\displaystyle 0\leq j\leq k} . As we are concerned with averaging problems, in general f 0 ( x , t ) {\displaystyle f^{0}(x,t)} is zero, so we will be interested in vector fields given by f ( x , t , ε ) = ε f [ 1 ] ( x , t , ε ) = ε f 1 ( x , t ) + ε 2 f [ 2 ] ( x , t , ε ) . {\displaystyle f(x,t,\varepsilon )=\varepsilon f^{[1]}(x,t,\varepsilon )=\varepsilon f^{1}(x,t)+\varepsilon ^{2}f^{[2]}(x,t,\varepsilon ).} In addition, we define the following initial value problem to be in the standard form : [ 2 ] x ˙ = ε f 1 ( x , t ) + ε 2 f [ 2 ] ( x , t , ε ) , x ( 0 , ε ) =: x 0 ∈ D ⊆ R n , 0 ≤ ε ≪ 1. {\displaystyle {\dot {x}}=\varepsilon f^{1}(x,t)+\varepsilon ^{2}f^{[2]}(x,t,\varepsilon ),\qquad x(0,\varepsilon )=:x_{0}\in D\subseteq \mathbb {R} ^{n},\quad 0\leq \varepsilon \ll 1.}
Theorem (first-order averaging): Suppose that f 1 ∈ C r ( D × R ; R n ) {\displaystyle f^{1}\in C^{r}(D\times \mathbb {R} ;\mathbb {R} ^{n})} is periodic in t {\displaystyle t} with period T {\displaystyle T} and that f [ 2 ] ∈ C r ( D × R × R + ; R n ) {\displaystyle f^{[2]}\in C^{r}(D\times \mathbb {R} \times \mathbb {R} ^{+};\mathbb {R} ^{n})} , both with r ≥ 2 {\displaystyle r\geq 2} and bounded on bounded sets. Then for every connected and bounded D ⊂ R n {\displaystyle D\subset \mathbb {R} ^{n}} and every ε 0 > 0 {\displaystyle \varepsilon _{0}>0} there exist L > 0 {\displaystyle L>0} and c > 0 {\displaystyle c>0} such that the solution x ( t , ε ) {\displaystyle x(t,\varepsilon )} of the original system (a non-autonomous dynamical system) x ˙ = ε f 1 ( x , t ) + ε 2 f [ 2 ] ( x , t , ε ) , x 0 ∈ D ⊆ R n , 0 ≤ ε ≪ 1 , {\displaystyle {\dot {x}}=\varepsilon f^{1}(x,t)+\varepsilon ^{2}f^{[2]}(x,t,\varepsilon ),\qquad x_{0}\in D\subseteq \mathbb {R} ^{n},\quad 0\leq \varepsilon \ll 1,} and the solution y ( t , ε ) {\displaystyle y(t,\varepsilon )} of the averaged system (an autonomous dynamical system) y ˙ = ε 1 T ∫ 0 T f 1 ( y , s ) d s =: ε f ¯ 1 ( y ) , y ( 0 , ε ) = x 0 {\displaystyle {\dot {y}}=\varepsilon {\frac {1}{T}}\int _{0}^{T}f^{1}(y,s)~ds=:\varepsilon {\bar {f}}^{1}(y),\quad y(0,\varepsilon )=x_{0}} satisfy ‖ x ( t , ε ) − y ( t , ε ) ‖ < c ε {\displaystyle \|x(t,\varepsilon )-y(t,\varepsilon )\|<c\varepsilon } for 0 ≤ ε ≤ ε 0 {\displaystyle 0\leq \varepsilon \leq \varepsilon _{0}} and 0 ≤ t ≤ L / ε {\displaystyle 0\leq t\leq L/\varepsilon } .
Krylov and Bogoliubov realized that the slow dynamics of the system determine the leading order of the asymptotic solution. In order to prove this, they proposed a near-identity transformation, which turned out to be a change of coordinates with its own time scale, transforming the original system into the averaged one.
Throughout the history of the averaging technique, there is a class of systems that has been studied extensively and that provides meaningful examples, which we discuss below. This class of systems is given by: z ¨ + z = ε g ( z , z ˙ , t ) , z ∈ R , z ( 0 ) = z 0 a n d z ˙ ( 0 ) = v 0 , {\displaystyle {\ddot {z}}+z=\varepsilon g(z,{\dot {z}},t),\qquad z\in \mathbb {R} ,\quad z(0)=z_{0}~\mathrm {and} ~{\dot {z}}(0)=v_{0},} where g {\displaystyle g} is smooth. This system is similar to a linear system with a small nonlinear perturbation given by [ 0 ε g ( z , z ˙ , t ) ] {\displaystyle {\begin{bmatrix}0\\\varepsilon ~g(z,{\dot {z}},t)\end{bmatrix}}} : z 1 ˙ = z 2 , z 1 ( 0 ) = z 0 z 2 ˙ = − z 1 + ε g ( z 1 , z 2 , t ) , z 2 ( 0 ) = v 0 , {\displaystyle {\begin{aligned}{\dot {z_{1}}}&=z_{2},&z_{1}(0)&=z_{0}\\{\dot {z_{2}}}&=-z_{1}+\varepsilon g(z_{1},z_{2},t),&z_{2}(0)&=v_{0},\end{aligned}}} which, however, is not in the standard form; hence a transformation is required to put it explicitly into the standard form. [ 2 ] We can change coordinates using the variation of constants method. We look at the unperturbed system, i.e. ε = 0 {\displaystyle \varepsilon =0} , given by [ z 1 ˙ z 2 ˙ ] = [ 0 1 − 1 0 ] [ z 1 z 2 ] = A [ z 1 z 2 ] {\displaystyle {\begin{bmatrix}{\dot {z_{1}}}\\{\dot {z_{2}}}\end{bmatrix}}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}{\begin{bmatrix}z_{1}\\z_{2}\end{bmatrix}}=A{\begin{bmatrix}z_{1}\\z_{2}\end{bmatrix}}}
which has the fundamental solution Φ ( t ) = e A t {\displaystyle \Phi (t)=e^{At}} corresponding to a rotation. Then the time-dependent change of coordinates is z ( t ) = Φ ( t ) x {\displaystyle z(t)=\Phi (t)x} where x {\displaystyle x} denotes the coordinates with respect to the standard form.
If we take the time derivative in both sides and invert the fundamental matrix we obtain x ˙ = ε e − A t [ 0 g ~ ( x , x ˙ , t ) ] with g ~ ( x , x ˙ , t ) = g ( cos ( t ) x ( t ) + sin ( t ) x ˙ ( t ) , − sin ( t ) x ( t ) + cos ( t ) x ˙ ( t ) , t ) . {\displaystyle {\dot {x}}=\varepsilon e^{-At}{\begin{bmatrix}0\\~{\tilde {g}}(x,{\dot {x}},t)\end{bmatrix}}~{\text{ with }}~{\tilde {g}}(x,{\dot {x}},t)=g(\cos(t)x(t)+\sin(t){\dot {x}}(t),-\sin(t)x(t)+\cos(t){\dot {x}}(t),t).}
If g ∈ C 1 {\displaystyle g\in C^{1}} we may apply averaging so long as a neighborhood of the origin is excluded (since the polar coordinates fail): f ¯ 1 1 ( r ) = 1 2 π ∫ 0 2 π cos ( s − ϕ ) g ( r sin ( s − ϕ ) , r cos ( s − ϕ ) , s ) d s f ¯ 2 1 ( r ) = 1 2 π r ∫ 0 2 π sin ( s − ϕ ) g ( r sin ( s − ϕ ) , r cos ( s − ϕ ) , s ) d s , {\displaystyle {\begin{array}{rcl}{\bar {f}}_{1}^{1}(r)&=&\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }\cos(s-\phi )g(r\sin(s-\phi ),r\cos(s-\phi ),s)ds\\[4pt]{\bar {f}}_{2}^{1}(r)&=&\displaystyle {\frac {1}{2\pi r}}\int _{0}^{2\pi }\sin(s-\phi )g(r\sin(s-\phi ),r\cos(s-\phi ),s)ds,\end{array}}} where the averaged system is r ¯ ˙ = ε f ¯ 1 1 ( r ¯ ) ϕ ¯ ˙ = ε f ¯ 2 1 ( r ¯ ) . {\displaystyle {\begin{array}{lcr}{\dot {\bar {r}}}=\varepsilon {\bar {f}}_{1}^{1}({\bar {r}})\\{\dot {\bar {\phi }}}=\varepsilon {\bar {f}}_{2}^{1}({\bar {r}}).\end{array}}}
The method rests on assumptions and restrictions. These limitations play an important role when one averages an original equation that is not in the standard form; the following counterexample is intended to discourage such hurried averaging: [ 2 ] z ¨ + 4 ε cos 2 ( t ) z ˙ + z = 0 , z ( 0 ) = 0 , z ˙ ( 0 ) = 1 , {\displaystyle {\ddot {z}}+4\varepsilon \cos ^{2}{(t)}{\dot {z}}+z=0,\qquad z(0)=0,\quad {\dot {z}}(0)=1,} where we put g ( z , z ˙ , t ) = − 4 cos 2 ( t ) z ˙ {\displaystyle g(z,{\dot {z}},t)=-4\cos ^{2}(t){\dot {z}}} following the previous notation.
This system corresponds to a damped harmonic oscillator where the damping term oscillates between 0 {\displaystyle 0} and 4 ε {\displaystyle 4\varepsilon } . Naively averaging the friction term over one cycle of 2 π {\displaystyle 2\pi } yields the equation: z ¯ ¨ + 2 ε z ¯ ˙ + z ¯ = 0 , z ¯ ( 0 ) = 0 , z ¯ ˙ ( 0 ) = 1. {\displaystyle {\ddot {\bar {z}}}+2\varepsilon {\dot {\bar {z}}}+{\bar {z}}=0,\qquad {\bar {z}}(0)=0,\quad {\dot {\bar {z}}}(0)=1.} The solution is z ¯ ( t ) = 1 ( 1 − ε 2 ) 1 2 e − ε t sin ( ( 1 − ε 2 ) 1 2 t ) , {\displaystyle {\bar {z}}(t)={\frac {1}{(1-\varepsilon ^{2})^{\frac {1}{2}}}}e^{-\varepsilon t}\sin {((1-\varepsilon ^{2})^{\frac {1}{2}}t)}.} whose rate of convergence to the origin is ε {\displaystyle \varepsilon } . The averaged system obtained from the standard form instead yields: r ¯ ˙ = − 1 2 ε r ¯ ( 2 + cos ( 2 ϕ ¯ ) ) , r ¯ ( 0 ) = 1 ϕ ¯ ˙ = 1 2 ε sin ( 2 ϕ ¯ ) , ϕ ¯ ( 0 ) = 0 , {\displaystyle {\begin{array}{lcr}{\dot {\bar {r}}}=-{\frac {1}{2}}\varepsilon {\bar {r}}(2+\cos(2{\bar {\phi }})),~{\bar {r}}(0)=1\\{\dot {\bar {\phi }}}={\frac {1}{2}}\varepsilon \sin(2{\bar {\phi }}),~{\bar {\phi }}(0)=0,\end{array}}} which, written back in rectangular coordinates, is y ( t ) = e − 3 2 ε t sin t {\displaystyle y(t)=e^{-{\frac {3}{2}}\varepsilon t}\sin {t}} , showing explicitly that the rate of convergence to the origin is 3 2 ε {\textstyle {\frac {3}{2}}\varepsilon } , differing from the crude averaging above.
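A numerical check of the two predicted decay rates (a sketch; the value of ε and the exponential fit of the envelope are choices made here):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
t_end = 2.0 / eps

def rhs(t, u):
    z, v = u
    return [v, -z - 4 * eps * np.cos(t) ** 2 * v]

sol = solve_ivp(rhs, (0.0, t_end), [0.0, 1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, t_end, 4000)
z, v = sol.sol(t)
r = np.hypot(z, v)                      # amplitude proxy sqrt(z^2 + zdot^2)
rate = -np.polyfit(t, np.log(r), 1)[0]  # exponential decay rate of envelope
print(f"fitted decay rate                = {rate:.4f}")
print(f"crude averaging predicts           {eps}")
print(f"standard-form averaging predicts   {1.5 * eps}")
```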
Van der Pol was concerned with obtaining approximate solutions for equations of the type z ¨ − ε ( 1 − z 2 ) z ˙ + z = 0 , {\displaystyle {\ddot {z}}-\varepsilon (1-z^{2}){\dot {z}}+z=0,} where g ( z , z ˙ , t ) = ( 1 − z 2 ) z ˙ {\displaystyle g(z,{\dot {z}},t)=(1-z^{2}){\dot {z}}} following the previous notation; note the sign, so that the equation is of the form z ¨ + z = ε g {\displaystyle {\ddot {z}}+z=\varepsilon g} . This system is often called the Van der Pol oscillator . Applying periodic averaging to this nonlinear oscillator provides qualitative knowledge of the phase space without solving the system explicitly.
The averaged system is r ¯ ˙ = 1 2 ε r ¯ ( 1 − 1 4 r ¯ 2 ) ϕ ¯ ˙ = 0 , {\displaystyle {\begin{array}{lcr}{\dot {\bar {r}}}={\frac {1}{2}}\varepsilon {\bar {r}}(1-{\frac {1}{4}}{\bar {r}}^{2})\\{\dot {\bar {\phi }}}=0,\end{array}}} and we can analyze the fixed points and their stability. There is an unstable fixed point at the origin and a stable limit cycle represented by r ¯ = 2 {\displaystyle {\bar {r}}=2} .
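A short numerical check of the averaged prediction (a sketch; ε, the initial condition, and the sampling window are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
t_end = 20.0 / eps

def vdp(t, u):
    z, v = u
    return [v, eps * (1 - z ** 2) * v - z]   # z'' - eps(1-z^2) z' + z = 0

sol = solve_ivp(vdp, (0.0, t_end), [0.5, 0.0], rtol=1e-9, atol=1e-12,
                t_eval=np.linspace(0.9 * t_end, t_end, 2000))
r = np.hypot(sol.y[0], sol.y[1])
print(f"late-time amplitude ~ {r.mean():.3f}   (averaging predicts 2)")
```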
The existence of such a stable limit cycle can be stated as a theorem.
Theorem (Existence of a periodic orbit) [ 5 ] : If p 0 {\displaystyle p_{0}} is a hyperbolic fixed point of y ˙ = ε f ¯ 1 ( y ) {\displaystyle {\dot {y}}=\varepsilon {\bar {f}}^{1}(y)} , then there exists ε 0 > 0 {\displaystyle \varepsilon _{0}>0} such that for all 0 < ε < ε 0 {\displaystyle 0<\varepsilon <\varepsilon _{0}} , x ˙ = ε f 1 ( x , t ) + ε 2 f [ 2 ] ( x , t , ε ) {\displaystyle {\dot {x}}=\varepsilon f^{1}(x,t)+\varepsilon ^{2}f^{[2]}(x,t,\varepsilon )} has a unique hyperbolic periodic orbit γ ε ( t ) = p 0 + O ( ε ) {\displaystyle \gamma _{\varepsilon }(t)=p_{0}+{\mathcal {O}}(\varepsilon )} of the same stability type as p 0 {\displaystyle p_{0}} .
The proof can be found at Guckenheimer and Holmes, [ 5 ] Sanders et al. [ 2 ] and for the angle case in Chicone. [ 1 ]
The averaging theorem assumes the existence of a connected and bounded region D ⊂ R n {\displaystyle D\subset \mathbb {R} ^{n}} , which affects the time interval L {\displaystyle L} over which the result is valid. The following example illustrates this. Consider the equation z ¨ + z = 8 ε cos ( t ) z ˙ 2 , z ( 0 ) = 0 , z ˙ ( 0 ) = 1 , {\displaystyle {\ddot {z}}+z=8\varepsilon \cos {(t)}{\dot {z}}^{2},~z(0)=0,~{\dot {z}}(0)=1,} where g ( z , z ˙ , t ) = 8 z ˙ 2 cos ( t ) {\displaystyle g(z,{\dot {z}},t)=8{\dot {z}}^{2}\cos(t)} . The averaged system consists of r ¯ ˙ = 3 ε r ¯ 2 cos ( ϕ ¯ ) , r ¯ ( 0 ) = 1 ϕ ¯ ˙ = − ε r ¯ sin ( ϕ ¯ ) , ϕ ¯ ( 0 ) = 0 , {\displaystyle {\begin{array}{lcr}{\dot {\bar {r}}}=3\varepsilon {\bar {r}}^{2}\cos({\bar {\phi }}),~{\bar {r}}(0)=1\\{\dot {\bar {\phi }}}=-\varepsilon {\bar {r}}\sin({\bar {\phi }}),~{\bar {\phi }}(0)=0,\end{array}}} which under this initial condition indicates that the original solution behaves like z ( t ) = sin ( t ) 1 − 3 ε t + O ( ε ) , {\displaystyle z(t)={\frac {\sin(t)}{1-3\varepsilon t}}+{\mathcal {O}}(\varepsilon ),} where it holds on a bounded region over 0 ≤ ε t ≤ L < 1 3 {\displaystyle 0\leq \varepsilon t\leq L<{\frac {1}{3}}} .
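A numerical illustration of the bounded time of validity (a sketch; ε and the cutoff εt ≤ 0.25 < 1/3 are choices made here):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01

def rhs(t, u):
    z, v = u
    return [v, -z + 8 * eps * np.cos(t) * v ** 2]

t_end = 0.25 / eps                      # stay below eps*t = 1/3
t = np.linspace(0.0, t_end, 3000)
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 1.0], t_eval=t,
                rtol=1e-10, atol=1e-12)
approx = np.sin(t) / (1.0 - 3.0 * eps * t)
print(f"max error for eps*t <= 0.25: {np.max(np.abs(sol.y[0] - approx)):.3e}")
# both the approximation and the solution degenerate as eps*t -> 1/3
```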
Consider a damped pendulum whose point of suspension is vibrated vertically by a small amplitude, high frequency signal (this is usually known as dithering ). The equation of motion for such a pendulum is given by m ( l θ ¨ − a k ω 2 sin ω t sin θ ) = − m g sin θ − k ( l θ ˙ + a ω cos ω t sin θ ) {\displaystyle m(l{\ddot {\theta }}-ak\omega ^{2}\sin \omega t\sin \theta )=-mg\sin \theta -k(l{\dot {\theta }}+a\omega \cos \omega t\sin \theta )} where a sin ω t {\displaystyle a\sin \omega t} describes the motion of the suspension point, k {\displaystyle k} describes the damping of the pendulum, and θ {\displaystyle \theta } is the angle made by the pendulum with the vertical.
The phase space form of this equation is given by t ˙ = 1 θ ˙ = p p ˙ = 1 m l ( m a k ω 2 sin ω t sin θ − m g sin θ − k ( l p + a ω cos ω t sin θ ) ) {\displaystyle {\begin{aligned}{\dot {t}}&=1\\{\dot {\theta }}&=p\\{\dot {p}}&={\frac {1}{ml}}(mak\omega ^{2}\sin \omega t\sin \theta -mg\sin \theta -k(lp+a\omega \cos \omega t\sin \theta ))\end{aligned}}} where we have introduced the variable p {\displaystyle p} and written the system as an autonomous , first-order system in ( t , θ , p ) {\displaystyle (t,\theta ,p)} -space.
Suppose that the angular frequency of the vertical vibrations, ω {\displaystyle \omega } , is much greater than the natural frequency of the pendulum, g / l {\textstyle {\sqrt {g/l}}} . Suppose also that the amplitude of the vertical vibrations, a {\displaystyle a} , is much less than the length l {\displaystyle l} of the pendulum. The pendulum's trajectory in phase space will trace out a spiral around a curve C {\displaystyle C} , moving along C {\displaystyle C} at the slow rate g / l {\displaystyle {\sqrt {g/l}}} but moving around it at the fast rate ω {\displaystyle \omega } . The radius of the spiral around C {\displaystyle C} will be small and proportional to a {\displaystyle a} . The average behaviour of the trajectory, over a timescale much larger than 2 π / ω {\displaystyle 2\pi /\omega } , will be to follow the curve C {\displaystyle C} .
The averaging technique for initial value problems has so far been treated with validity error estimates on a time scale of order 1 / ε {\displaystyle 1/\varepsilon } . However, there are circumstances where the estimates can be extended to longer times, in some cases even to all times. [ 2 ] Below we deal with a system containing an asymptotically stable fixed point.
Theorem (Eckhaus [ 6 ] /Sanchez-Palencia [ 7 ] ) Consider the initial value problem x ˙ = ε f 1 ( x , t ) , x 0 ∈ D ⊆ R n , 0 ≤ ε ≪ 1. {\displaystyle {\dot {x}}=\varepsilon f^{1}(x,t),\qquad x_{0}\in D\subseteq \mathbb {R} ^{n},\quad 0\leq \varepsilon \ll 1.} Suppose that the averaged system y ˙ = ε lim T → ∞ 1 T ∫ 0 T f 1 ( y , s ) d s =: ε f ¯ 1 ( y ) , y ( 0 , ε ) = x 0 {\displaystyle {\dot {y}}=\varepsilon \lim _{T\to \infty }{\frac {1}{T}}\int _{0}^{T}f^{1}(y,s)~ds=:\varepsilon {\bar {f}}^{1}(y),\quad y(0,\varepsilon )=x_{0}} exists and has an asymptotically stable fixed point y = 0 {\displaystyle y=0} in the linear approximation. Suppose moreover that f ¯ 1 {\displaystyle {\bar {f}}^{1}} is continuously differentiable with respect to y {\displaystyle y} in D {\displaystyle D} and that y = 0 {\displaystyle y=0} has a domain of attraction D 0 ⊂ D {\displaystyle D^{0}\subset D} . Then for any compact K ⊂ D 0 {\displaystyle K\subset D^{0}} and for all x 0 ∈ K {\displaystyle x_{0}\in K} , ‖ x ( t ) − y ( t ) ‖ = O ( δ ( ε ) ) , 0 ≤ t < ∞ , {\displaystyle \|x(t)-y(t)\|={\mathcal {O}}(\delta (\varepsilon )),\quad 0\leq t<\infty ,} with δ ( ε ) = o ( 1 ) {\displaystyle \delta (\varepsilon )=o(1)} in the general case and O ( ε ) {\displaystyle {\mathcal {O}}(\varepsilon )} in the periodic case. | https://en.wikipedia.org/wiki/Method_of_averaging
The method of continued fractions is a method developed specifically for the solution of integral equations of quantum scattering theory, like the Lippmann–Schwinger equation or the Faddeev equations . It was invented by Horáček and Sasakawa [ 1 ] in 1983. The goal of the method is to solve the integral equation | ψ ⟩ = | ϕ ⟩ + G 0 V | ψ ⟩ {\displaystyle |\psi \rangle =|\phi \rangle +G_{0}V|\psi \rangle }
iteratively and to construct a convergent continued fraction for the T-matrix
The method has two variants. In the first one (denoted as MCFV) we construct approximations of the potential energy operator V {\displaystyle V} in the form of separable functions of rank 1, 2, 3 ... The second variant (the MCFG method [ 2 ] ) constructs finite-rank approximations to the Green's operator . The approximations are constructed within the Krylov subspace generated from the vector | ϕ ⟩ {\displaystyle |\phi \rangle } by the action of the operator A = G 0 V {\displaystyle A=G_{0}V} . The method can thus be understood as a resummation of the (in general divergent) Born series by Padé approximants . It is also closely related to the Schwinger variational principle .
In general, the method requires a similar amount of numerical work as the calculation of the terms of the Born series, but it provides much faster convergence of the results.
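The chain-fraction formulae themselves are not reproduced below, but the resummation idea mentioned above can be illustrated generically: a Padé approximant built from the first few Taylor ("Born-like") coefficients of a function can converge where the partial sums diverge. The test function √(1+g) and all parameter choices in this sketch are illustrative only and are not taken from the method's references:

```python
import numpy as np

def pade(c, m, n):
    """Build the [m/n] Pade approximant from Taylor coefficients
    c[0..m+n] by solving the linear system for the denominator."""
    A = np.array([[c[m + i - j] for j in range(1, n + 1)]
                  for i in range(1, n + 1)])
    b = -np.array([c[m + i] for i in range(1, n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, b)))
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

# Taylor coefficients of sqrt(1+g); the series diverges for |g| > 1
c = [1.0]
for k in range(1, 8):
    c.append(c[-1] * (1.5 - k) / k)

g = 2.0                                   # outside the radius of convergence
partial = sum(ck * g ** k for k, ck in enumerate(c))
p, q = pade(c, 3, 3)
resummed = np.polyval(p[::-1], g) / np.polyval(q[::-1], g)
print(f"partial sum: {partial:+.3f}   Pade [3/3]: {resummed:.6f}   "
      f"exact: {np.sqrt(1 + g):.6f}")
```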
The derivation of the method proceeds as follows. First we introduce a rank-one (separable) approximation to the potential V ≈ V | ϕ ⟩ ⟨ ϕ | V ⟨ ϕ | V | ϕ ⟩ . {\displaystyle V\approx {\frac {V|\phi \rangle \langle \phi |V}{\langle \phi |V|\phi \rangle }}.}
The integral equation for the rank-one part of the potential is easily soluble. The full solution of the original problem can therefore be expressed in terms of a new function | ψ 1 ⟩ {\displaystyle |\psi _{1}\rangle } . This function is the solution of a modified Lippmann–Schwinger equation
with | ϕ 1 ⟩ = G 0 V | ϕ ⟩ . {\displaystyle |\phi _{1}\rangle =G_{0}V|\phi \rangle .} The remainder potential term V 1 {\displaystyle V_{1}} is transparent to the incoming wave, V 1 | ϕ ⟩ = 0 , {\displaystyle V_{1}|\phi \rangle =0,} i.e., it is a weaker operator than the original one.
The new problem thus obtained for | ψ 1 ⟩ {\displaystyle |\psi _{1}\rangle } is of the same form as the original one and we can repeat the procedure.
This leads to recurrence relations
It is possible to show that the T-matrix of the original problem can be expressed in the form of a chain fraction
where we defined
In practical calculations the infinite chain fraction is replaced by a finite one, assuming that
This is equivalent to assuming that the remainder solution
is negligible. This is a plausible assumption, since the remainder potential V N {\displaystyle V_{N}} has all vectors | ϕ i ⟩ , i = 0 , 1 , … , N − 1 {\displaystyle |\phi _{i}\rangle ,i=0,1,\ldots ,N-1} in its null space, and it can be shown that this potential converges to zero and that the chain fraction converges to the exact T-matrix.
The second variant [ 2 ] of the method constructs approximations to the Green's operator
now with vectors
The chain fraction for the T-matrix now also holds, with a slightly different definition of the coefficients β i , γ i {\displaystyle \beta _{i},\gamma _{i}} . [ 2 ]
The expressions for the T-matrix resulting from both methods can be related to a certain class of variational principles. The first iteration of the MCFV method gives the same result as the Schwinger variational principle with the trial function | ψ ⟩ = | ϕ ⟩ {\displaystyle |\psi \rangle =|\phi \rangle } . Higher iterations with N terms in the continued fraction reproduce exactly 2 N (respectively 2 N + 1) terms of the Born series for the MCFV (respectively MCFG) method. The method was tested on the calculation of collisions of electrons with hydrogen atoms in the static-exchange approximation. In this case the method reproduces the exact scattering cross-section to 6 significant digits within 4 iterations. It can also be shown that both methods reproduce exactly the solution of the Lippmann–Schwinger equation when the potential is given by a finite-rank operator . The number of iterations is then equal to the rank of the potential. The method has been successfully used for the solution of problems in both nuclear [ 3 ] and molecular physics . [ 4 ] | https://en.wikipedia.org/wiki/Method_of_continued_fractions
In the mathematical field of enumerative combinatorics , identities are sometimes established by arguments that rely on singling out one "distinguished element" of a set.
Let A {\displaystyle {\mathcal {A}}} be a family of subsets of the set A {\displaystyle A} and let x ∈ A {\displaystyle x\in A} be a distinguished element of set A {\displaystyle A} . Then suppose there is a predicate P ( X , x ) {\displaystyle P(X,x)} that relates a subset X ⊆ A {\displaystyle X\subseteq A} to x {\displaystyle x} . Denote by A ( x ) {\displaystyle {\mathcal {A}}(x)} the set of subsets X {\displaystyle X} from A {\displaystyle {\mathcal {A}}} for which P ( X , x ) {\displaystyle P(X,x)} is true and by A − x {\displaystyle {\mathcal {A}}-x} the set of subsets X {\displaystyle X} from A {\displaystyle {\mathcal {A}}} for which P ( X , x ) {\displaystyle P(X,x)} is false. Then A ( x ) {\displaystyle {\mathcal {A}}(x)} and A − x {\displaystyle {\mathcal {A}}-x} are disjoint sets, so by the method of summation, the cardinalities are additive: [ 1 ] | A | = | A ( x ) | + | A − x | {\displaystyle |{\mathcal {A}}|=|{\mathcal {A}}(x)|+|{\mathcal {A}}-x|}
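For example, taking the predicate P ( X , x ) {\displaystyle P(X,x)} to be " x ∈ X {\displaystyle x\in X} " on the k-element subsets of a finite set splits them into those containing the distinguished element and those avoiding it, which yields Pascal's rule. A small illustrative sketch (the function names are choices made here):

```python
from itertools import combinations

def count_k_subsets(elements, k):
    """Count k-subsets by the distinguished-element decomposition:
    subsets containing the first element plus subsets avoiding it
    (Pascal's rule)."""
    if k == 0:
        return 1
    if len(elements) < k:
        return 0
    rest = elements[1:]               # drop the distinguished element
    return count_k_subsets(rest, k - 1) + count_k_subsets(rest, k)

# agrees with direct enumeration: C(5, 2) = 10
assert count_k_subsets(list('abcde'), 2) == 10
assert count_k_subsets(list('abcde'), 2) == sum(
    1 for _ in combinations('abcde', 2))
```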
Thus the distinguished element allows for a decomposition according to a predicate that is a simple form of a divide and conquer algorithm . In combinatorics, this allows for the construction of recurrence relations . Examples are in the next section. | https://en.wikipedia.org/wiki/Method_of_distinguished_element |
In mathematics, the method of dominant balance approximates the solution to an equation
by solving a simplified form of the equation containing 2 or more of the equation's terms that most influence (dominate) the solution and excluding terms contributing only small modifications to this approximate solution. Following an initial solution, iteration of the procedure may generate additional terms of an asymptotic expansion providing a more accurate solution. [ 1 ] [ 2 ]
An early example of the dominant balance method is the Newton polygon method. Newton developed this method to find an explicit approximation for an algebraic function . Newton expressed the function as proportional to the independent variable raised to a power , retained only the lowest-degree polynomial terms (dominant terms), and solved this simplified reduced equation to obtain an approximate solution. [ 3 ] [ 4 ] Dominant balance has a broad range of applications, solving differential equations arising in fluid mechanics , plasma physics , turbulence , combustion , nonlinear optics , geophysical fluid dynamics , and neuroscience . [ 5 ] [ 6 ]
The functions f ( z ) {\textstyle f(z)} and g ( z ) {\displaystyle g(z)} of parameter or independent variable z {\textstyle z} and the quotient f ( z ) / g ( z ) {\textstyle f(z)/g(z)} have limits as z {\textstyle z} approaches the limit L {\textstyle L} .
The function f ( z ) {\textstyle f(z)} is much less than g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} , written as f ( z ) ≪ g ( z ) ( z → L ) {\textstyle f(z)\ll g(z)\ (z\to L)} , if the limit of the quotient f ( z ) / g ( z ) {\textstyle f(z)/g(z)} is zero as z {\textstyle z} approaches L {\textstyle L} . [ 7 ]
The relation f ( z ) {\textstyle f(z)} is lower order than g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} , written using little-o notation f ( z ) = o ( g ( z ) ) ( z → L ) {\textstyle f(z)=o(g(z))\ (z\to L)} , is identical to the f ( z ) {\textstyle f(z)} is much less than g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} relation. [ 7 ]
The function f ( z ) {\textstyle f(z)} is equivalent to g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} , written as f ( z ) ∼ g ( z ) ( z → L ) {\textstyle f(z)\sim g(z)\ (z\to L)} , if the limit of the quotient f ( z ) / g ( z ) {\textstyle f(z)/g(z)} is 1 as z {\textstyle z} approaches L {\textstyle L} . [ 7 ]
This result indicates that the zero function , f ( z ) = 0 {\textstyle f(z)=0} for all values of z {\textstyle z} , can never be equivalent to any other function. [ 7 ]
Asymptotically equivalent functions remain asymptotically equivalent under integration if requirements related to convergence are met. There are more specific requirements for asymptotically equivalent functions to remain asymptotically equivalent under differentiation . [ 8 ]
An equation's approximate solution is s ( z ) {\textstyle s(z)} as z {\textstyle z} approaches limit L {\textstyle L} . The equation's terms that may be constants or contain this solution are T 0 ( s ) , T 1 ( s ) , … , T n ( s ) {\textstyle T_{0}(s),T_{1}(s),\ldots ,T_{n}(s)} . If the approximate solution is fully correct, the equation's terms sum to zero in this equation: T 0 ( s ) + T 1 ( s ) + … + T n ( s ) = 0. {\displaystyle T_{0}(s)+T_{1}(s)+\ldots +T_{n}(s)=0.} For distinct integer indices i , j {\textstyle i,j} , this equation is a sum of 2 terms and a remainder R i j ( s ) {\textstyle R_{ij}(s)} expressed as T i ( s ) + T j ( s ) + R i j ( s ) = 0 R i j ( s ) = ∑ k = 0 k ≠ i , k ≠ j n T k ( s ) . {\displaystyle {\begin{aligned}&T_{i}(s)+T_{j}(s)+R_{ij}(s)=0\\&R_{ij}(s)=\sum _{{k=0} \atop {k\neq i,k\neq j}}^{n}T_{k}(s).\end{aligned}}} Balance equation terms T i ( s ) {\textstyle T_{i}(s)} and T j ( s ) {\textstyle T_{j}(s)} means make these terms equal and asymptotically equivalent by finding the function s ( z ) {\textstyle s(z)} that solves the reduced equation T i ( s ) + T j ( s ) = 0 {\textstyle T_{i}(s)+T_{j}(s)=0} with T i ( s ) ≠ 0 {\textstyle T_{i}(s)\neq 0} and T j ( s ) ≠ 0 {\textstyle T_{j}(s)\neq 0} . [ 9 ]
This solution s ( z ) {\textstyle s(z)} is consistent if terms T i ( s ) {\textstyle T_{i}(s)} and T j ( s ) {\textstyle T_{j}(s)} are dominant ; dominant means the remaining equation terms R i j ( s ) {\textstyle R_{ij}(s)} are much less than terms T i ( s ) {\textstyle T_{i}(s)} and T j ( s ) {\textstyle T_{j}(s)} as z {\textstyle z} approaches L {\textstyle L} . [ 10 ] [ 11 ] A consistent solution that balances two equation terms may generate an accurate approximation to the full equation's solution for z {\textstyle z} values approaching L {\textstyle L} . [ 11 ] [ 12 ] Approximate solutions arising from balancing different terms of an equation may generate distinct approximate solutions e.g. inner and outer layer solutions . [ 5 ]
Substituting the scaled function s ( z ) = ( z − L ) p s ~ ( z ) {\textstyle s(z)=(z-L)^{p}{\tilde {s}}(z)} into the equation and taking the limit as z {\textstyle z} approaches L {\textstyle L} may generate simplified reduced equations for distinct exponent values of p {\textstyle p} . [ 9 ] These simplified equations are called distinguished limits and identify balanced dominant equation terms. [ 13 ] The scale transformation generates the scaled functions. The dominant balance method applies scale transformations to balance equation terms whose factors contain distinct exponents. For example, T i ( s ) {\textstyle T_{i}(s)} contains factor ( z − L ) q {\textstyle (z-L)^{q}} and term T j ( s ) {\textstyle T_{j}(s)} contains factor ( z − L ) r {\textstyle (z-L)^{r}} with q ≠ r {\textstyle q\neq r} . Scaled functions are applied to differential equations when z {\textstyle z} is an equation parameter, not the differential equation´s independent variable. [ 5 ] The Kruskal-Newton diagram facilitates identifying the required scaled functions needed for dominant balance of algebraic and differential equations. [ 5 ]
For differential equation solutions containing an irregular singularity , the leading behavior is the first term of an asymptotic series solution that remains when the independent variable z {\textstyle z} approaches an irregular singularity L {\textstyle L} . The controlling factor is the fastest changing part of the leading behavior. It is advised to "show that the equation for the function obtained by factoring off the dominant balance solution from the exact solution itself has a solution that varies less rapidly than the dominant balance solution." [ 11 ]
The input is the set of equation terms and the limit L. The output is the set of approximate solutions. For each pair of distinct equation terms T i ( s ) , T j ( s ) {\textstyle T_{i}(s),T_{j}(s)} the algorithm applies a scale transformation if needed, balances the selected terms by finding a function that solves the reduced equation and then determines if this function is consistent. If the function balances the terms and is consistent, the algorithm adds the function to the set of approximate solutions, otherwise the algorithm rejects the function. The process is repeated for each pair of distinct equation terms.
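A rough sympy sketch of this pairwise procedure, restricted to positive real branches for simplicity (the function and variable names are choices made here; the term list anticipates the algebraic example below):

```python
import itertools
import sympy as sp

z, s = sp.symbols('z s', positive=True)   # positive: real branches only
terms = [sp.Integer(1), -16 * s, z * s ** 5]

def dominant_balance(terms, var, limit_point=0):
    """For each pair of terms, solve the reduced equation, then keep
    the solution only if the remaining terms are negligible against
    the balanced pair at the limit point."""
    solutions = []
    for (i, ti), (j, tj) in itertools.combinations(enumerate(terms), 2):
        for sol in sp.solve(ti + tj, s):
            if sol == 0:
                continue
            rest = sum(t for k, t in enumerate(terms) if k not in (i, j))
            # consistency: remainder / balanced term -> 0 at the limit
            ratio = sp.limit((rest / ti).subs(s, sol), var, limit_point)
            if ratio == 0:
                solutions.append(sp.simplify(sol))
    return solutions

print(dominant_balance(terms, z))   # [1/16, 2/z**(1/4)] on the real branch
```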
The method may be iterated to generate additional terms of an asymptotic expansion to provide a more accurate solution. [ 11 ] Iterative methods such as the Newton-Raphson method may generate a more accurate solution. [ 4 ] A perturbation series , using the approximate solution as the first term, may also generate a more accurate solution. [ 5 ]
The dominant balance method will find an explicit approximate expression for the multi-valued function s = s ( z ) {\textstyle s=s(z)} defined by the equation 1 − 16 s + z s 5 = 0 {\textstyle 1-16s+zs^{5}=0} as z {\textstyle z} approaches zero. [ 14 ]
The set of equation terms is { 1 , − 16 s , z s 5 } {\textstyle \{1,-16s,zs^{5}\}} and the limit is zero.
Balancing the terms − 16 s {\textstyle -16s} and z s 5 {\textstyle zs^{5}} gives the reduced equation 16 s = z s 5 {\textstyle 16s=zs^{5}} , i.e. s 4 = 16 / z {\textstyle s^{4}=16/z} ; this balance is consistent because the remaining term 1 is much less than the balanced terms as z {\textstyle z} approaches zero. It yields four approximate solutions: s 1 ( z ) = 2 z 1 / 4 , s 2 ( z ) = − 2 z 1 / 4 , s 3 ( z ) = 2 i z 1 / 4 , s 4 ( z ) = − 2 i z 1 / 4 . {\displaystyle s_{1}(z)={\frac {2}{z^{1/4}}},s_{2}(z)={\frac {-2}{z^{1/4}}},s_{3}(z)={\frac {2i}{z^{1/4}}},s_{4}(z)={\frac {-2i}{z^{1/4}}}.} Balancing the terms 1 and − 16 s {\textstyle -16s} gives the consistent solution s = 1 / 16 {\textstyle s=1/16} , while balancing 1 and z s 5 {\textstyle zs^{5}} is inconsistent, because the omitted term − 16 s {\textstyle -16s} would then dominate.
The set of approximate solutions has 5 functions: { 1 16 , 2 z 1 / 4 , − 2 z 1 / 4 , 2 i z 1 / 4 , − 2 i z 1 / 4 } . {\displaystyle \left\{{\frac {1}{16}},{\frac {2}{z^{1/4}}},{\frac {-2}{z^{1/4}}},{\frac {2i}{z^{1/4}}},{\frac {-2i}{z^{1/4}}}\right\}.}
The approximate solutions are the first terms in the perturbation series solutions. [ 14 ]
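A quick numerical check of the five approximations against the exact roots, assuming a small illustrative value z = 10⁻⁸:

```python
import numpy as np

z = 1e-8
exact = np.roots([z, 0, 0, 0, -16, 1])    # z*s^5 - 16*s + 1, degree 5
approx = [1 / 16] + [c / z ** 0.25 for c in (2, -2, 2j, -2j)]
for a in approx:
    closest = exact[np.argmin(np.abs(exact - a))]
    print(f"approx {a!s:>22}   nearest exact root {closest:.6g}")
```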
The differential equation z 3 w ′ ′ − w = 0 {\textstyle z^{3}w^{\prime \prime }-w=0} is known to have a solution with an exponential leading term. [ 15 ] The transformation w ( z ) = e s ( z ) {\textstyle w(z)=e^{s(z)}} leads to the differential equation 1 − z 3 ( s ′ ) 2 − z 3 s ′ ′ = 0 {\textstyle 1-z^{3}(s^{\prime })^{2}-z^{3}s^{\prime \prime }=0} . The dominant balance method will find an approximate solution as z {\textstyle z} approaches zero. Scaled functions will not be used because z {\textstyle z} is the differential equation's independent variable, not a differential equation parameter. [ 10 ]
The set of equation terms is { 1 , − z 3 ( s ′ ) 2 , − z 3 s ′ ′ } {\textstyle \{1,-z^{3}(s^{\prime })^{2},-z^{3}s^{\prime \prime }\}} and the limit is zero.
Balancing the terms 1 and − z 3 ( s ′ ) 2 {\textstyle -z^{3}(s^{\prime })^{2}} gives s ′ = ± z − 3 / 2 {\textstyle s^{\prime }=\pm z^{-3/2}} ; the omitted term − z 3 s ″ {\textstyle -z^{3}s^{\prime \prime }} is then of order z 1 / 2 {\textstyle z^{1/2}} , so the balance is consistent. The set of approximate solutions has 2 functions: [ 10 ] { + 2 z − 1 / 2 , − 2 z − 1 / 2 } . {\displaystyle \left\{+2z^{-1/2},-2z^{-1/2}\right\}.}
Using the 1-term solution, a 2-term solution is s 2 ± ( z ) = ± 2 z − 1 / 2 + s ( z ) . {\displaystyle s_{2\pm }(z)=\pm 2z^{-1/2}+s(z).} Substitution of this 2-term solution into the original differential equation generates a new differential equation: [ 10 ] 1 − z 3 ( s 2 ± ′ ) 2 − z 3 s 2 ± ′ ′ = 0 ± 1 ∓ 4 3 z s ′ + 2 3 z 5 / 2 ( s ′ ) 2 + 2 3 z 5 / 2 s ′ ′ = 0. {\displaystyle {\begin{aligned}1-z^{3}(s_{2\pm }^{\prime })^{2}-z^{3}s_{2\pm }^{\prime \prime }&=0\\\pm 1\mp {\frac {4}{3}}zs^{\prime }+{\frac {2}{3}}z^{5/2}(s^{\prime })^{2}+{\frac {2}{3}}z^{5/2}s^{\prime \prime }&=0.\end{aligned}}}
The set of equation terms is { ± 1 , ∓ 4 3 z s ′ , 2 3 z 5 / 2 ( s ′ ) 2 , 2 3 z 5 / 2 s ′ ′ } {\textstyle \{\pm 1,\mp {\frac {4}{3}}zs^{\prime },{\frac {2}{3}}z^{5/2}(s^{\prime })^{2},{\frac {2}{3}}z^{5/2}s^{\prime \prime }\}} and the limit is zero.
Balancing the terms ± 1 {\textstyle \pm 1} and ∓ 4 3 z s ′ {\textstyle \mp {\frac {4}{3}}zs^{\prime }} gives s ′ = 3 / ( 4 z ) {\textstyle s^{\prime }=3/(4z)} and hence s ( z ) = 3 4 ln z {\textstyle s(z)={\tfrac {3}{4}}\ln z} . For other term pairs, the functions that solve the reduced equations are not consistent. [ 10 ]
The set of approximate solutions has 2 functions: [ 10 ] { + 2 z − 1 / 2 + 3 4 ln z , − 2 z − 1 / 2 + 3 4 ln z } . {\displaystyle \left\{+2z^{-1/2}+{\tfrac {3}{4}}\ln z,-2z^{-1/2}+{\tfrac {3}{4}}\ln z\right\}.}
The next iteration generates a 3-term solution s 3 ± ( z ) = ± 2 z − 1 / 2 + 3 4 ln ( z ) + h ( z ) {\textstyle s_{3\pm }(z)=\pm 2z^{-1/2}+{\tfrac {3}{4}}\operatorname {ln} (z)+h(z)} with h ( z ) ≪ 1 ( z → 0 ) {\textstyle h(z)\ll 1\ (z\to 0)} , and this means that a power series expansion can represent the remainder of the solution. The dominant balance method generates the leading term of this asymptotic expansion, with the constant A {\textstyle A} and the expansion coefficients determined by substitution into the full differential equation: w ( z ) ∼ A z 3 / 4 e ± 2 z − 1 / 2 ( 1 + a 1 z 1 / 2 + a 2 z + ⋯ ) ( z → 0 ) . {\displaystyle w(z)\sim Az^{3/4}e^{\pm 2z^{-1/2}}\left(1+a_{1}z^{1/2}+a_{2}z+\cdots \right)\quad (z\to 0).}
A partial sum of this non-convergent series generates an approximate solution. The leading term corresponds to the Liouville-Green (LG) or Wentzel–Kramers–Brillouin (WKB) approximation. [ 15 ] | https://en.wikipedia.org/wiki/Method_of_dominant_balance |
The method of exhaustion ( Latin : methodus exhaustionis ) is a method of finding the area of a shape by inscribing inside it a sequence of polygons (one at a time) whose areas converge to the area of the containing shape . If the sequence is correctly constructed, the difference in area between the n th polygon and the containing shape will become arbitrarily small as n becomes large. As this difference becomes arbitrarily small, the possible values for the area of the shape are systematically "exhausted" by the lower bound areas successively established by the sequence members.
The method of exhaustion typically required a form of proof by contradiction , known as reductio ad absurdum . This amounts to finding an area of a region by first comparing it to the area of a second region, which can be "exhausted" so that its area becomes arbitrarily close to the true area. The proof involves assuming that the true area is greater than the second area, proving that assertion false, assuming it is less than the second area, then proving that assertion false, too.
The idea originated in the late 5th century BC with Antiphon , although it is not entirely clear how well he understood it. [ 1 ] The theory was made rigorous a few decades later by Eudoxus of Cnidus , who used it to calculate areas and volumes. It was later reinvented in China by Liu Hui in the 3rd century AD in order to find the area of a circle. [ 2 ] The first use of the term was in 1647 by Gregory of Saint Vincent in Opus geometricum quadraturae circuli et sectionum .
The method of exhaustion is seen as a precursor to the methods of calculus . The development of analytical geometry and rigorous integral calculus in the 17th-19th centuries subsumed the method of exhaustion so that it is no longer explicitly used to solve problems. An important alternative approach was Cavalieri's principle , also termed the method of indivisibles which eventually evolved into the infinitesimal calculus of Roberval , Torricelli , Wallis , Leibniz , and others.
Euclid used the method of exhaustion to prove the following six propositions in the 12th book of his Elements .
Proposition 2 : The area of circles is proportional to the square of their diameters. [ 3 ]
Proposition 5 : The volumes of two tetrahedra of the same height are proportional to the areas of their triangular bases. [ 4 ]
Proposition 10 : The volume of a cone is a third of the volume of the corresponding cylinder which has the same base and height. [ 5 ]
Proposition 11 : The volume of a cone (or cylinder) of the same height is proportional to the area of the base. [ 6 ]
Proposition 12: The volume of a cone (or cylinder) that is similar to another is proportional to the cube of the ratio of the diameters of the bases. [ 7 ]
Proposition 18 : The volume of a sphere is proportional to the cube of its diameter. [ 8 ]
Archimedes used the method of exhaustion as a way to compute the area inside a circle by filling the circle with a sequence of polygons with an increasing number of sides and a corresponding increase in area. The quotients formed by the area of these polygons divided by the square of the circle radius can be made arbitrarily close to π as the number of polygon sides becomes large, proving that the area inside the circle of radius r is πr 2 , π being defined as the ratio of the circumference to the diameter (C/d).
He also provided the bounds 3 + 10 / 71 < π < 3 + 10 / 70 (giving a range of 1 / 497 ) by comparing the perimeters of the circle with the perimeters of the inscribed and circumscribed 96-sided regular polygons.
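These bounds can be reproduced numerically with the classical doubling recurrences (harmonic then geometric mean) for the half-perimeters of the circumscribed and inscribed polygons, starting from the regular hexagon; this is a sketch of the standard Archimedes–Borchardt scheme rather than Archimedes' own arithmetic:

```python
import math

# a = half-perimeter of the circumscribed n-gon (n*tan(pi/n)),
# b = half-perimeter of the inscribed n-gon   (n*sin(pi/n))
a, b, n = 2 * math.sqrt(3), 3.0, 6      # exact values for the hexagon
while n < 96:
    a = 2 * a * b / (a + b)             # harmonic mean:  circumscribed 2n-gon
    b = math.sqrt(a * b)                # geometric mean: inscribed 2n-gon
    n *= 2
print(f"{n}-gon bounds: {b:.6f} < pi < {a:.6f}")
print(f"Archimedes:    {3 + 10 / 71:.6f} < pi < {3 + 10 / 70:.6f}")
```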
Other results he obtained with the method of exhaustion included [ 9 ] | https://en.wikipedia.org/wiki/Method_of_exhaustion |
The method of images (or method of mirror images ) is a mathematical tool for solving differential equations , in which boundary conditions are satisfied by combining a solution not restricted by the boundary conditions with its possibly weighted mirror image. Generally, original singularities are inside the domain of interest but the function is made to satisfy boundary conditions by placing additional singularities outside the domain of interest. Typically the locations of these additional singularities are determined as the virtual location of the original singularities as viewed in a mirror placed at the location of the boundary conditions. Most typically, the mirror is a hyperplane or hypersphere .
The method of images can also be used in solving discrete problems with boundary conditions, such as counting the number of restricted discrete random walks .
The method of image charges is used in electrostatics to simply calculate or visualize the distribution of the electric field of a charge in the vicinity of a conducting surface. It is based on the fact that the tangential component of the electrical field on the surface of a conductor is zero, and that an electric field E in some region is uniquely defined by its normal component over the surface that confines this region (the uniqueness theorem ). [ 1 ]
The method of images may also be used in magnetostatics for calculating the magnetic field of a magnet that is close to a superconducting surface. The superconductor in so-called Meissner state is an ideal diamagnet into which the magnetic field does not penetrate. Therefore, the normal component of the magnetic field on its surface should be zero. Then the image of the magnet should be mirrored. The force between the magnet and the superconducting surface is therefore repulsive.
Compared to the case of the charge dipole above a flat conducting surface, the mirrored magnetization vector can be thought of as due to an additional sign change of an axial vector .
In order to take into account the magnetic flux pinning phenomenon in type-II superconductors , the frozen mirror image method can be used. [ 2 ]
Environmental engineers are often interested in the reflection (and sometimes the absorption) of a contaminant plume off of an impenetrable (no-flux) boundary. A quick way to model this reflection is with the method of images.
The reflections, or images , are oriented in space such that they perfectly replace any mass (from the real plume) passing through a given boundary. [ 3 ] A single boundary will necessitate a single image. Two or more boundaries produce infinite images. However, for the purposes of modeling mass transport—such as the spread of a contaminant spill in a lake—it may be unnecessary to include an infinite set of images when there are multiple relevant boundaries. For example, to represent the reflection within a certain threshold of physical accuracy, one might choose to include only the primary and secondary images.
The simplest case is a single boundary in 1-dimensional space. In this case, only one image is possible. If as time elapses, a mass approaches the boundary, then an image can appropriately describe the reflection of that mass back across the boundary.
Another simple example is a single boundary in 2-dimensional space. Again, since there is only a single boundary, only one image is necessary. This describes a smokestack, whose effluent "reflects" in the atmosphere off of the impenetrable ground, and is otherwise approximately unbounded.
Finally, we consider a mass release in 1-dimensional space bounded to its left and right by impenetrable boundaries. There are two primary images, each replacing the mass of the original release reflecting through each boundary. There are two secondary images, each replacing the mass of one of the primary images flowing through the opposite boundary. There are also two tertiary images (replacing the mass lost by the secondary images), two quaternary images (replacing the mass lost by the tertiary images), and so on ad infinitum.
For a given system, once all of the images are carefully oriented, the concentration field is given by summing the mass releases (the true plume in addition to all of the images) within the specified boundaries. This concentration field is only physically accurate within the boundaries; the field outside the boundaries is non-physical and irrelevant for most engineering purposes.
This method is a specific application of Green's functions . The method of images works well when the boundary is a flat surface and the distribution has a geometric center. This allows for simple mirror-like reflection of the distribution to satisfy a variety of boundary conditions. Consider the simple 1D case where there is a distribution of ⟨ c ⟩ {\displaystyle \langle c\rangle } as a function of x {\displaystyle x} and a single boundary located at x b {\displaystyle x_{b}} , with the real domain such that x ≥ x b {\displaystyle x\geq x_{b}} and the image domain x < x b {\displaystyle x<x_{b}} . Consider the solution f ( ± x + x 0 , t ) {\displaystyle f(\pm x+x_{0},t)} to satisfy the linear differential equation for any x 0 {\displaystyle x_{0}} , but not necessarily the boundary condition.
Note these distributions are typical in models that assume a Gaussian distribution . This is particularly common in environmental engineering, especially in atmospheric flows that use Gaussian plume models .
The mathematical statement of a perfectly reflecting boundary condition is as follows:
$$\nabla y(\mathbf{x}) \cdot \mathbf{n} = 0$$
This states that the derivative of the scalar function $y$ vanishes in the direction normal to the wall. In the 1D case, this simplifies to:
$$\frac{d\langle c \rangle}{dx} = 0$$
This condition is enforced with positive images, so that: [ citation needed ]
$$\langle c \rangle = f(x - x_0, t) + f(-x + (x_b - (x_0 - x_b)), t)$$
where the argument $-x + (x_b - (x_0 - x_b))$ translates and reflects the image into place. Taking the derivative with respect to $x$:
$$\left. \frac{d\langle c \rangle}{dx} \right|_{x_b} = \left. \frac{df(x - x_0, t)}{dx} \right|_{x_b} + \left. \frac{df(-x + (x_b - (x_0 - x_b)), t)}{dx} \right|_{x_b} = \left. \frac{df(x, t)}{dx} \right|_{x_b - x_0} - \left. \frac{df(x, t)}{dx} \right|_{x_b - x_0} = 0$$
Thus, the perfectly reflecting boundary condition is satisfied.
The statement of a perfectly absorbing boundary condition is as follows: [ citation needed ]
$$y(x_b) = 0$$
This condition is enforced using a negative mirror image:
$$\langle c \rangle = f(x - x_0, t) - f(-x + (x_b - (x_0 - x_b)), t)$$
And:
$$\langle c \rangle \big|_{x_b} = f(x_b - x_0, t) - f(-x_b + (x_b - (x_0 - x_b)), t) = f(x_b - x_0, t) - f(x_b - x_0, t) = 0$$
Thus this boundary condition is also satisfied.
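A short numerical check of both constructions, using a Gaussian diffusion kernel as $f$ (an assumption typical of the Gaussian-plume setting mentioned above; for this symmetric kernel the reflected term $f(-x + (x_b - (x_0 - x_b)), t)$ reduces to $f(x - x_{\text{image}}, t)$ with the image placed at $x_{\text{image}} = x_b - (x_0 - x_b)$; all parameter values are illustrative):

```python
import numpy as np

def f(x, t, D=1.0):
    """Gaussian diffusion kernel, one common choice for f."""
    return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

xb, x0, t = 0.0, 0.5, 0.1            # boundary, source position, time
x = np.linspace(xb, xb + 2.0, 4001)  # real domain x >= xb
x_image = xb - (x0 - xb)             # source reflected through the boundary

c_reflect = f(x - x0, t) + f(x - x_image, t)  # positive image
c_absorb  = f(x - x0, t) - f(x - x_image, t)  # negative image

dx = x[1] - x[0]
print((c_reflect[1] - c_reflect[0]) / dx)  # ~0: no-flux condition at x_b
print(c_absorb[0])                         # 0: zero-value condition at x_b
```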
The method of images can be used in discrete cases. For example, the number of random walks that start at position 0, take steps of size ±1, continue for a total of $n$ steps, and end at position $k$ is given by the binomial coefficient $\binom{n}{(n+k)/2}$, assuming that $|k| \leq n$ and $n + k$ is even. Suppose we have the boundary condition that walks are prohibited from stepping to −1 during any part of the walk. The number of restricted walks can be calculated by starting with the number of unrestricted walks that start at position 0 and end at position $k$ and subtracting the number of unrestricted walks that start at position −2 and end at position $k$. This is because, for any given number of steps, exactly as many unrestricted walks from 0 as unrestricted walks from −2 will reach −1; the two families are mirror images of each other. As such, the subtracted walks cancel out precisely those walks that the boundary condition has prohibited.
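This counting argument can be verified directly for small walks; in the sketch below (Python; the helper names are illustrative), the final line anticipates the Catalan-number special case discussed next.

```python
from math import comb
from itertools import product

def restricted_walks(n, k):
    """Walks of n steps of size +/-1 from 0 to k that never step to -1,
    counted by the reflection argument described above."""
    if (n + k) % 2 or abs(k) > n:
        return 0
    from_zero = comb(n, (n + k) // 2)           # unrestricted, start at 0
    from_minus_two = comb(n, (n + k + 2) // 2)  # unrestricted, start at -2
    return from_zero - from_minus_two

def brute_force(n, k):
    """Direct enumeration over all 2**n walks (small n only)."""
    count = 0
    for steps in product((1, -1), repeat=n):
        pos, valid = 0, True
        for s in steps:
            pos += s
            if pos < 0:          # stepped to -1: forbidden
                valid = False
                break
        if valid and pos == k:
            count += 1
    return count

assert all(restricted_walks(n, k) == brute_force(n, k)
           for n in range(9) for k in range(-n, n + 1))
print(restricted_walks(8, 0))  # 14, the Catalan number C_4
```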
For example, if the number of steps is $n = 2m$ and the final location is $k = 0$, then the number of restricted walks is the Catalan number
$$C_m = \binom{2m}{m} - \binom{2m}{m+1}.$$ | https://en.wikipedia.org/wiki/Method_of_images |
In mathematics , the method of matched asymptotic expansions [ 1 ] is a common approach to finding an accurate approximation to the solution to an equation , or system of equations . It is particularly used when solving singularly perturbed differential equations . It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt .
In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series [ 2 ] found by treating the problem as a regular perturbation (i.e. by setting a relatively small parameter to zero). The other subdomains consist of one or more small regions in which that approximation is inaccurate, generally because the perturbation terms in the problem are not negligible there. These areas are referred to as transition layers in general, and specifically as boundary layers or interior layers depending on whether they occur at the domain boundary (as is the usual case in applications) or inside the domain, respectively.
An approximation in the form of an asymptotic series is obtained in the transition layer(s) by treating that part of the domain as a separate perturbation problem. This approximation is called the inner solution , and the other is the outer solution , named for their relationship to the transition layer(s). The outer and inner solutions are then combined through a process called "matching" in such a way that an approximate solution for the whole domain is obtained. [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Consider the boundary value problem
$$\varepsilon y'' + (1 + \varepsilon) y' + y = 0,$$
where $y$ is a function of the independent time variable $t$, which ranges from 0 to 1; the boundary conditions are $y(0) = 0$ and $y(1) = 1$; and $\varepsilon$ is a small parameter, such that $0 < \varepsilon \ll 1$.
Since $\varepsilon$ is very small, our first approach is to treat the equation as a regular perturbation problem, i.e. make the approximation $\varepsilon = 0$, and hence find the solution to the problem
$$y' + y = 0.$$
Alternatively, consider that when $y$ and $t$ are both of size $O(1)$, the four terms on the left-hand side of the original equation are respectively of sizes $O(\varepsilon)$, $O(1)$, $O(\varepsilon)$ and $O(1)$. The leading-order balance on this timescale, valid in the distinguished limit $\varepsilon \to 0$, is therefore given by the second and fourth terms, i.e.
$$y' + y = 0.$$
This has solution $y = Ae^{-t}$ for some constant $A$. Applying the boundary condition $y(0) = 0$ would give $A = 0$; applying the boundary condition $y(1) = 1$ would give $A = e$. It is therefore impossible to satisfy both boundary conditions, so $\varepsilon = 0$ is not a valid approximation to make across the whole of the domain (i.e. this is a singular perturbation problem). From this we infer that there must be a boundary layer at one of the endpoints of the domain where $\varepsilon$ needs to be included. This region will be where $\varepsilon$ is no longer negligible compared to the independent variable $t$, that is, where $t$ and $\varepsilon$ are of comparable size: the boundary layer is adjacent to $t = 0$. Therefore, the other boundary condition $y(1) = 1$ applies in this outer region, so $A = e$; that is, $y_{\mathrm{O}} = e^{1-t}$ is an accurate approximate solution to the original boundary value problem in this outer region. It is the leading-order outer solution.
In the inner region, $t$ and $\varepsilon$ are both tiny, but of comparable size, so define the new $O(1)$ time variable $\tau = t/\varepsilon$. Rescale the original boundary value problem by replacing $t$ with $\tau\varepsilon$, and the problem becomes
$$\frac{1}{\varepsilon} y''(\tau) + (1 + \varepsilon) \frac{1}{\varepsilon} y'(\tau) + y(\tau) = 0,$$
which, after multiplying by $\varepsilon$ and taking $\varepsilon = 0$, is
$$y'' + y' = 0.$$
Alternatively, consider that when $t$ has reduced to size $O(\varepsilon)$, then $y$ is still of size $O(1)$ (using the expression for $y_{\mathrm{O}}$), and so the four terms on the left-hand side of the original equation are respectively of sizes $O(\varepsilon^{-1})$, $O(\varepsilon^{-1})$, $O(1)$ and $O(1)$. The leading-order balance on this timescale, valid in the distinguished limit $\varepsilon \to 0$, is therefore given by the first and second terms, i.e.
$$y'' + y' = 0.$$
This has solution $y = B - Ce^{-\tau}$ for some constants $B$ and $C$. Since $y(0) = 0$ applies in this inner region, $B = C$, so an accurate approximate solution to the original boundary value problem in this inner region (the leading-order inner solution) is
$$y_{\mathrm{I}} = B\left(1 - e^{-\tau}\right) = B\left(1 - e^{-t/\varepsilon}\right).$$
We use matching to find the value of the constant $B$. The idea of matching is that the inner and outer solutions should agree for values of $t$ in an intermediate (or overlap) region, i.e. where $\varepsilon \ll t \ll 1$. We need the outer limit of the inner solution to match the inner limit of the outer solution, i.e.
$$\lim_{\tau \to \infty} y_{\mathrm{I}} = \lim_{t \to 0} y_{\mathrm{O}},$$
which gives $B = e$.
The above problem is the simplest of the simple problems dealing with matched asymptotic expansions. One can immediately calculate that $e^{1-t}$ is the entire asymptotic series for the outer region, whereas the $O(\varepsilon)$ correction to the inner solution $y_{\mathrm{I}}$ is $-B\tau + B_1(1 - e^{-\tau})$, and the constant of integration $B_1$ must be obtained from inner-outer matching.
Notice that the intuitive idea of matching by taking limits, i.e. $\lim_{\tau \to \infty} y_{\mathrm{I}} = \lim_{t \to 0} y_{\mathrm{O}}$, does not apply at this level, simply because the correction term $-B\tau$ does not converge to a limit. The methods to follow in these types of cases are either (a) the method of an intermediate variable or (b) the Van Dyke matching rule. The former is cumbersome but always works, whereas the Van Dyke matching rule is easy to implement but of limited applicability. A concrete boundary value problem having all the essential ingredients is the following.
Consider the boundary value problem
$$\varepsilon y'' - x^2 y' - y = 1, \quad y(0) = y(1) = 1.$$
The conventional outer expansion $y_{\mathrm{O}} = y_0 + \varepsilon y_1 + \cdots$ gives $y_0 = \alpha e^{1/x} - 1$, where $\alpha$ must be obtained from matching.
The problem has boundary layers both on the left and on the right. The left boundary layer near $0$ has thickness $\varepsilon^{1/2}$, whereas the right boundary layer near $1$ has thickness $\varepsilon$. Let us first calculate the solution in the left boundary layer by rescaling $X = x/\varepsilon^{1/2}$, $Y = y$; the differential equation to satisfy on the left is then
$$Y'' - \varepsilon^{1/2} X^2 Y' - Y = 1, \quad Y(0) = 1,$$
and accordingly we assume an expansion $Y^l = Y_0^l + \varepsilon^{1/2} Y_{1/2}^l + \cdots$.
The $O(1)$ inhomogeneous condition on the left provides the reason to start the expansion at $O(1)$. The leading-order solution is $Y_0^l = 2e^{-X} - 1$.
This, with $1-1$ Van Dyke matching, gives $\alpha = 0$.
Let us now calculate the solution on the right, rescaling $X = (1-x)/\varepsilon$, $Y = y$; the differential equation to satisfy on the right is then
$$Y'' + (1 - 2\varepsilon X + \varepsilon^2 X^2) Y' - \varepsilon Y = \varepsilon, \quad Y = 1 \text{ at } X = 0 \text{ (i.e. } x = 1\text{)},$$
and accordingly we assume an expansion $Y^r = Y_0^r + \varepsilon Y_1^r + \cdots$.
The $O(1)$ inhomogeneous condition on the right provides the reason to start the expansion at $O(1)$. The leading-order solution is $Y_0^r = (1 - B) + Be^{-X}$. This, with $1-1$ Van Dyke matching, gives $B = 2$. Proceeding in a similar fashion to calculate the higher-order corrections, we obtain the solution in the left layer,
$$Y^l = 2e^{-X} - 1 + \varepsilon^{1/2} e^{-X} \left( \frac{X^3}{3} + \frac{X^2}{2} + \frac{X}{2} \right) + O(\varepsilon), \quad X = \frac{x}{\varepsilon^{1/2}},$$
the outer solution $y \equiv -1$, and the solution in the right layer,
$$Y^r = 2e^{-X} - 1 + 2\varepsilon e^{-X} \left( X + X^2 \right) + O(\varepsilon^2), \quad X = \frac{1-x}{\varepsilon}.$$
Returning to the first boundary value problem: to obtain our final, matched, composite solution, valid on the whole domain, one popular method is the uniform method. In this method, we add the inner and outer approximations and subtract their overlapping value, $y_{\mathrm{overlap}}$, which would otherwise be counted twice. The overlapping value is the outer limit of the inner boundary-layer solution, and the inner limit of the outer solution; these limits were above found to equal $e$. Therefore, the final approximate solution to this boundary value problem is
$$y(t) = y_{\mathrm{I}} + y_{\mathrm{O}} - y_{\mathrm{overlap}} = e\left(1 - e^{-t/\varepsilon}\right) + e^{1-t} - e = e\left(e^{-t} - e^{-t/\varepsilon}\right).$$
Note that this expression correctly reduces to the expressions for $y_{\mathrm{I}}$ and $y_{\mathrm{O}}$ when $t$ is $O(\varepsilon)$ and $O(1)$, respectively.
This final solution satisfies the problem's original differential equation (as may be shown by substituting it and its derivatives into the original equation). Also, the boundary conditions produced by this final solution match the values given in the problem, up to a constant multiple. This implies, due to the uniqueness of the solution, that the matched asymptotic solution is identical to the exact solution up to a constant multiple. This is not necessarily always the case; in general, any remaining terms should go to zero uniformly as $\varepsilon \to 0$.
Not only does our solution successfully solve the problem approximately, it closely approximates the problem's exact solution. It happens that this particular problem has an easily found exact solution
$$y(t) = \frac{e^{-t} - e^{-t/\varepsilon}}{e^{-1} - e^{-1/\varepsilon}},$$
which has the same form as the approximate solution up to the multiplying constant. The approximate solution is the first term in a binomial expansion of the exact solution in powers of $e^{1 - 1/\varepsilon}$.
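A quick numerical comparison of the exact and composite solutions (the values of $\varepsilon$ are arbitrary illustrations) shows the discrepancy shrinking at the rate set by the factor $e^{1 - 1/\varepsilon}$:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
for eps in (0.2, 0.1, 0.05):
    exact = (np.exp(-t) - np.exp(-t / eps)) / (np.exp(-1) - np.exp(-1 / eps))
    composite = np.e * (np.exp(-t) - np.exp(-t / eps))
    # The two differ only by the factor 1/(1 - e**(1 - 1/eps)).
    print(eps, np.max(np.abs(exact - composite)))
```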
Conveniently, we can see that the boundary layer, where $y'$ and $y''$ are large, is near $t = 0$, as we supposed earlier. If we had supposed it to be at the other endpoint and proceeded by making the rescaling $\tau = (1-t)/\varepsilon$, we would have found it impossible to satisfy the resulting matching condition. For many problems, this kind of trial and error is the only way to determine the true location of the boundary layer. [ 3 ]
The problem above is a simple example because it is a single equation with only one dependent variable, and there is one boundary layer in the solution. Harder problems may contain several co-dependent variables in a system of several equations, and/or with several boundary and/or interior layers in the solution.
It is often desirable to find more terms in the asymptotic expansions of both the outer and the inner solutions. The appropriate form of these expansions is not always clear: while a power-series expansion in $\varepsilon$ may work, sometimes the appropriate form involves fractional powers of $\varepsilon$, functions such as $\varepsilon \log \varepsilon$, et cetera. As in the above example, we will obtain outer and inner expansions with some coefficients which must be determined by matching. [ 7 ]
A method of matched asymptotic expansions - with matching of solutions in the common domain of validity - has been developed and used extensively by Dingle and Müller-Kirsten for the derivation of asymptotic expansions of the solutions and characteristic numbers (band boundaries) of Schrödinger-like second-order differential equations with periodic potentials - in particular for the Mathieu equation [ 8 ] (best example), Lamé and ellipsoidal wave equations, [ 9 ] oblate [ 10 ] and prolate [ 11 ] spheroidal wave equations, and equations with anharmonic potentials. [ 12 ]
Methods of matched asymptotic expansions have been developed to find approximate solutions to the Smoluchowski convection–diffusion equation , which is a singularly perturbed second-order differential equation. The problem has been studied particularly in the context of colloid particles in linear flow fields, where the variable is given by the pair distribution function around a test particle. In the limit of low Péclet number, the convection–diffusion equation also presents a singularity at infinite distance (where normally the far-field boundary condition should be placed) due to the flow field being linear in the interparticle separation. This problem can be circumvented with a spatial Fourier transform as shown by Jan Dhont. [ 13 ] A different approach to solving this problem was developed by Alessio Zaccone and coworkers and consists in placing the boundary condition right at the boundary layer distance, upon assuming (in a first-order approximation) a constant value of the pair distribution function in the outer layer due to convection being dominant there. This leads to an approximate theory for the encounter rate of two interacting colloid particles in a linear flow field in good agreement with the full numerical solution. [ 14 ] When the Péclet number is significantly larger than one, the singularity at infinite separation no longer occurs and the method of matched asymptotics can be applied to construct the full solution for the pair distribution function across the entire domain. [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Method_of_matched_asymptotic_expansions |
In applied mathematics, methods of mean weighted residuals (MWR) are methods for solving differential equations . The solutions of these differential equations are assumed to be well approximated by a finite sum of test functions $\phi_i$. In such cases, the selected method of weighted residuals is used to find the coefficient value of each corresponding test function. The resulting coefficients are made to minimize the error between the linear combination of test functions and the actual solution, in a chosen norm.
It is important to first sort out the notation used before presenting how this method is executed, in order to avoid confusion.
The method of mean weighted residuals solves
$$R\left(x, u, u_x, \ldots, \frac{d^n u}{dx^n}\right) = 0$$
by imposing that the degrees of freedom $a_i$ are such that
$$\left(R, w_j\right) = 0, \quad j = 1, \ldots, N,$$
is satisfied for a chosen set of weight functions $w_j$. Here the inner product $(f, g)$ is the standard function inner product with respect to some weighting function $r(x)$, which is usually determined by the basis function set or chosen according to whichever weighting function is most convenient. For instance, when the basis set is just the Chebyshev polynomials of the first kind, the weighting function is typically
$$r(x) = \frac{1}{\sqrt{1 - x^2}},$$
because inner products can then be computed more easily using a Chebyshev transform .
Additionally, all these methods have in common that they enforce boundary conditions in one of two ways. The first is to require that the basis functions (in the case of a linear combination) individually satisfy the boundary conditions of the original BVP. This works directly only if the boundary conditions are homogeneous, but it can be applied to problems with inhomogeneous boundary conditions by writing $u(x) = v(x) + L(x)$, where $L(x)$ is a known function satisfying the imposed boundary conditions; substituting this expression into the original differential equation yields a problem for $v(x)$ with homogeneous boundary conditions. The second is to impose the boundary conditions explicitly by removing $n$ rows of the matrix representing the discretised problem, where $n$ is the order of the differential equation, and substituting rows that represent the boundary conditions.
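To make the procedure concrete, the following is a minimal sketch (Python) of the Galerkin variant, in which the test functions are taken equal to the basis functions and each basis function individually satisfies the homogeneous boundary conditions, as described above. The model problem $u'' + 1 = 0$, $u(0) = u(1) = 0$, the basis $\phi_i(x) = x^{i+1}(1-x)$, and all names are assumptions made for the illustration.

```python
import numpy as np

# Galerkin weighted residuals for u'' + 1 = 0, u(0) = u(1) = 0,
# whose exact solution u = x(1 - x)/2 happens to lie in the basis span.
N = 3
xq, wq = np.polynomial.legendre.leggauss(12)  # Gauss nodes/weights on [-1, 1]
xq, wq = 0.5 * (xq + 1.0), 0.5 * wq           # mapped to [0, 1]

def phi(i, x):    # basis functions, zero at x = 0 and x = 1
    return x**(i + 1) * (1 - x)

def d2phi(i, x):  # second derivative of x**(i+1) - x**(i+2)
    # (for i = 0 the first term has coefficient 0, so it vanishes)
    return (i + 1) * i * x**(i - 1) - (i + 2) * (i + 1) * x**i

# Impose (R, phi_j) = 0 with residual R = u'' + 1 and u = sum_i a_i phi_i.
A = np.array([[np.sum(wq * phi(j, xq) * d2phi(i, xq)) for i in range(N)]
              for j in range(N)])
b = -np.array([np.sum(wq * phi(j, xq)) for j in range(N)])

a = np.linalg.solve(A, b)                     # degrees of freedom a_i
x = np.linspace(0.0, 1.0, 5)
u = sum(a[i] * phi(i, x) for i in range(N))
print(np.max(np.abs(u - x * (1 - x) / 2)))    # ~1e-16: recovered exactly
```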
The choice of test function, as mentioned earlier, depends on the specific method used (under the general heading of mean weighted residual methods). Commonly used MWR variants, roughly in order of popularity, include the Galerkin method (test functions equal to the basis functions), the collocation method (test functions are Dirac delta functions centred at the collocation points), the least-squares method, the method of moments, and the subdomain method. | https://en.wikipedia.org/wiki/Method_of_mean_weighted_residuals |
In calculus , the method of normals was a technique invented by Descartes for finding normal and tangent lines to curves . It represented one of the earliest methods for constructing tangents to curves. The method hinges on the observation that the radius of a circle is always normal to the circle itself. With this in mind Descartes would construct a circle that was tangent to a given curve. He could then use the radius at the point of intersection to find the slope of a normal line, and from this one can easily find the slope of a tangent line.
This was discovered about the same time as Fermat 's method of adequality . While Fermat's method had more in common with the infinitesimal techniques that were to be used later, Descartes' method was more influential in the early history of calculus. ( Katz 2008 )
One reason Descartes' method fell from favor was the algebraic complexity it involved. On the other hand, this method can be used to rigorously define the derivative for a wide class of functions using neither infinitesimal nor limit techniques. It is also related to a completely general definition of differentiability given by Carathéodory ( Range 2011 ).
| https://en.wikipedia.org/wiki/Method_of_normals |
Methods engineering is a subspecialty of industrial engineering and manufacturing engineering concerned with human integration in industrial production processes. [ 1 ]
Alternatively it can be described as the design of the productive process in which a person is involved. The task of the Methods engineer is to decide where humans will be utilized in the process of converting raw materials to finished products and how workers can most effectively perform their assigned tasks. [ 1 ] [ 2 ] The terms operation analysis, work design and simplification, and methods engineering and corporate re-engineering are frequently used interchangeably. [ 3 ]
Lowering costs and increasing reliability and productivity are the objectives of methods engineering. Methods efficiency engineering focuses on lowering costs through productivity improvement. It investigates the output obtained from each unit of input and the speed of each machine and man. Methods quality engineering focuses on increasing quality and reliability. These objectives are met in a five-step sequence: project selection, data acquisition and presentation, data analysis, development of an ideal method based on the data analysis, and, finally, presentation and implementation of the method. [ 3 ]
Methods engineers typically work on projects involving new product design, products with a high cost-of-production-to-profit ratio, and products with poor quality issues. Methods of project selection include Pareto analysis , fishbone diagrams, Gantt charts , PERT charts , and job/work site analysis guides.
Data that need to be collected include specification sheets for the product, design drawings, process plans, quantity and delivery requirements, and projections as to how the product will perform or has performed in the market. Process charts are used to describe a proposed or existing way of doing work utilizing machines and men. The Gantt process chart can assist in the analysis of man-to-machine interaction, and it can aid in establishing the optimum number of workers and machines subject to the financial constraints of the operation. A flow diagram is frequently employed to represent the manufacturing process associated with the product.
Data analysis enables the methods engineer to make decisions about several things, including: purpose of the operation, part design characteristics, specifications and tolerances of parts, materials, manufacturing process design, setup and tooling, working conditions, material handling, plant layout, and workplace design. [ 3 ] Knowing the specifics (who, what, when, where, why, and how) of product manufacturing assists in the development of an optimum manufacturing method.
Equations of synchronous and random servicing, as well as line balancing, are used to determine the ideal worker-to-machine ratio for the process or product chosen. Synchronous servicing is the case in which more than one machine is assigned to an operator, and the operator and the assigned machines are occupied during the whole operating cycle. Random servicing of a facility, as the name indicates, is a servicing process in which the times at which machines need servicing occur at random. Line balancing equations determine the ideal number of workers needed on a production line to enable it to work at capacity.
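As an illustration of synchronous servicing, one classical rule from methods-engineering textbooks (stated here as an assumption, since the article does not give the equation) bounds the number of machines $n$ one operator can serve by $n \leq (l + m)/(l + w)$, where $l$ is the loading/unloading time per machine, $m$ the machine running time, and $w$ the walk time between machines:

```python
def machines_per_operator(l, m, w):
    """Upper bound on machines per operator under synchronous servicing,
    using the classical rule n <= (l + m) / (l + w). The formula is a
    textbook assumption, not taken from this article."""
    return (l + m) / (l + w)

# Illustrative numbers: 1.0 min load/unload, 5.0 min machine run,
# 0.5 min walking between machines.
n = machines_per_operator(l=1.0, m=5.0, w=0.5)
print(n)  # 4.0 -> at most 4 machines per operator
```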
The industrial process or operation can be optimized using a variety of available methods. Each method design has its advantages and disadvantages. The best overall method is chosen using selection criteria and concepts involving value engineering , cost-benefit analysis , crossover charts, and economic analysis. The outcome of the selection process is then presented to the company for implementation at the plant. This last step involves "selling the idea" to the company brass, a skill the methods engineer must develop in addition to the normal engineering qualifications. | https://en.wikipedia.org/wiki/Methods_engineering |
There are many methods to investigate protein–protein interactions which are the physical contacts of high specificity established between two or more protein molecules involving electrostatic forces and hydrophobic effects . Each of the approaches has its own strengths and weaknesses, especially with regard to the sensitivity and specificity of the method. [ 1 ] A high sensitivity means that many of the interactions that occur are detected by the screen. A high specificity indicates that most of the interactions detected by the screen are occurring in reality.
Co-immunoprecipitation is considered [ citation needed ] to be the gold standard assay for protein–protein interactions, especially when it is performed with endogenous (not overexpressed and not tagged ) proteins. The protein of interest is isolated with a specific antibody . Interaction partners which stick to this protein are subsequently identified by Western blotting . [ 2 ] Interactions detected by this approach are considered to be real. However, this method can only verify interactions between suspected interaction partners. Thus, it is not a screening approach. A note of caution also is that immunoprecipitation experiments reveal direct and indirect interactions. Thus, positive results may indicate that two proteins interact directly or may interact via one or more bridging molecules. This could include bridging proteins, nucleic acids (DNA or RNA), or other molecules.
Bimolecular fluorescence complementation (BiFC) is a newer technique for observing the interactions of proteins. Combined with other new techniques, this method can be used to screen protein–protein interactions and their modulators, [ 3 ] such as DERB . [ 4 ]
Affinity electrophoresis is used for estimation of binding constants , as for instance in lectin affinity electrophoresis, or for characterization of molecules with specific features like glycan content or ligand binding.
Pull-down assays are a common variation of immunoprecipitation and immunoelectrophoresis and are used identically, although this approach is more amenable to an initial screen for interacting proteins.
Label transfer can be used for screening or confirmation of protein interactions and can provide information about the interface where the interaction takes place. Label transfer can also detect weak or transient interactions that are difficult to capture using other in vitro detection strategies. In a label transfer reaction, a known protein is tagged with a detectable label. The label is then passed to an interacting protein, which can then be identified by the presence of the label.
Phage display is used for the high-throughput screening of protein interactions.
In-vivo crosslinking of protein complexes using photo-reactive amino acid analogs was introduced in 2005 by researchers from the Max Planck Institute . [ 5 ] In this method, cells are grown with photoreactive diazirine analogs to leucine and methionine , which are incorporated into proteins. Upon exposure to ultraviolet light, the diazirines are activated and bind to interacting proteins that are within a few angstroms of the photo-reactive amino acid analog. [ 6 ]
The tandem affinity purification (TAP) method allows high-throughput identification of protein interactions. In contrast to the yeast two-hybrid approach, the accuracy of the method is comparable to that of small-scale experiments, [ 7 ] and the interactions are detected within the correct cellular environment, as in co-immunoprecipitation . However, the TAP tag method requires two successive steps of protein purification, and consequently it cannot readily detect transient protein–protein interactions. Recent genome-wide TAP experiments were performed by Krogan et al. and Gavin et al., providing updated protein interaction data for yeast. [ 8 ] [ 9 ]
Chemical cross-linking is often used to "fix" protein interactions in place before trying to isolate/identify interacting proteins. Common crosslinkers for this application include the non-cleavable NHS -ester cross-linker, bissulfosuccinimidyl suberate (BS3); a cleavable version of BS3, dithiobis(sulfosuccinimidyl propionate) (DTSSP); and the imidoester cross-linker dimethyl dithiobispropionimidate (DTBP) that is popular for fixing interactions in ChIP assays.
Chemical cross-linking followed by high mass MALDI mass spectrometry can be used to analyze intact protein interactions in place before trying to isolate/identify interacting proteins. This method detects interactions among non-tagged proteins and is available from CovalX .
SPINE (Strepprotein interaction experiment) [ 10 ] uses a combination of reversible crosslinking with formaldehyde and an incorporation of an affinity tag to detect interaction partners in vivo .
Quantitative immunoprecipitation combined with knock-down (QUICK) relies on co-immunoprecipitation, quantitative mass spectrometry ( SILAC ) and RNA interference (RNAi). This method detects interactions among endogenous non-tagged proteins. [ 11 ] Thus, it has the same high confidence as co-immunoprecipitation. However, this method also depends on the availability of suitable antibodies.
Proximity ligation assay (PLA) in situ is an immunohistochemical method utilizing so-called PLA probes for detection of proteins, protein interactions and modifications. Each PLA probe comes with a unique short DNA strand attached to it and either binds to species-specific primary antibodies or consists of directly DNA-labeled primary antibodies. [ 12 ] [ 13 ] When the PLA probes are in close proximity, the DNA strands can interact through a subsequent addition of two other circle-forming DNA oligonucleotides. After joining of the two added oligonucleotides by enzymatic ligation, they are amplified via rolling circle amplification using a polymerase. After the amplification reaction, several-hundredfold replication of the DNA circle has occurred, and fluorophore- or enzyme-labeled complementary oligonucleotide probes highlight the product. The resulting high concentration of fluorescent or chromogenic signal in each single-molecule amplification product is easily visible as a distinct bright spot when viewed with either a fluorescence microscope or a standard brightfield microscope.
Surface plasmon resonance (SPR) is the most common label-free technique for the measurement of biomolecular interactions. [ citation needed ] SPR instruments measure the change in the refractive index of light reflected from a metal surface (the "biosensor"). Binding of biomolecules to the other side of this surface leads to a change in the refractive index which is proportional to the mass added to the sensor surface. In a typical application, one binding partner (the "ligand", often a protein) is immobilized on the biosensor, and a solution with potential binding partners (the "analyte") is channelled over this surface. The build-up of analyte over time allows quantification of on-rates (k on ), off-rates (k off ), dissociation constants (K d ) and, in some applications, active concentrations of the analyte. [ 14 ] Several different vendors offer SPR-based devices. The best known are Biacore instruments, which were the first commercially available.
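A minimal sketch of the 1:1 Langmuir binding model commonly used to interpret such sensorgrams (the rate constants, concentration, and injection time below are purely illustrative assumptions):

```python
import numpy as np

# 1:1 binding model: dR/dt = kon * C * (Rmax - R) - koff * R
kon, koff = 1.0e5, 1.0e-3    # association (1/(M*s)) and dissociation (1/s)
C, Rmax = 50e-9, 100.0       # analyte concentration (M), max response (RU)

dt, n_steps, t_inject = 0.1, 6000, 300.0
R = np.zeros(n_steps)
for i in range(n_steps - 1):
    conc = C if i * dt < t_inject else 0.0  # analyte for 300 s, then buffer
    dRdt = kon * conc * (Rmax - R[i]) - koff * R[i]
    R[i + 1] = R[i] + dt * dRdt             # forward-Euler step

print("Kd =", koff / kon, "M")              # equilibrium constant = koff/kon
print("response at end of injection:", round(R[int(t_inject / dt)], 1))
```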
Dual polarisation interferometry (DPI) can be used to measure protein–protein interactions. DPI provides real-time, high-resolution measurements of molecular size, density and mass. While tagging is not necessary, one of the protein species must be immobilized on the surface of a waveguide. As well as kinetics and affinity, conformational changes during interaction can also be quantified.
Static light scattering (SLS) measures changes in the Rayleigh scattering of protein complexes in solution and can characterize both weak and strong interactions without labeling or immobilization of the proteins or other biomacromolecule. The composition-gradient, multi-angle static light scattering (CG-MALS) measurement mixes a series of aliquots of different concentrations or compositions, measures the effect of the changes in light scattering as a result of the interaction, and fits the correlated light scattering changes with concentration to a series of association models in order to find the best-fit descriptor. Weak, non-specific interactions are typically characterized via the second virial coefficient . For specific binding, this type of analysis can determine the stoichiometry and equilibrium association constant(s) of one or more associated complexes, [ 15 ] including challenging systems such as those that exhibit simultaneous homo- and hetero-association, multi-valent interactions and cooperativity.
Dynamic light scattering (DLS), also known as quasielastic light scattering (QELS), or photon correlation spectroscopy, processes the time-dependent fluctuations in scattered light intensity to yield the hydrodynamic radius of particles in solution. The hydrodynamic radius is the radius of a solid sphere with the same translational diffusion coefficient as that measured for the sample particle. As proteins associate, the average hydrodynamic radius of the solution increases. Application of the Method of Continuous Variation, otherwise known as the Job plot , with the solution hydrodynamic radius as the observable, enables in vitro determination of K d , complex stoichiometry, complex hydrodynamic radius, and the Δ H ° and Δ S ° of protein–protein interactions. [ 16 ] This technique does not entail immobilization or labeling. Transient and weak interactions can be characterized. Relative to static light scattering, which is based upon the absolute intensity of scattered light, DLS is insensitive to background light from the walls of containing structures. This insensitivity permits DLS measurements from 1 μL volumes in 1536 well plates, and lowers sample requirements into the femtomole range. This technique is also suitable for screening of buffer components and/or small molecule inhibitors/effectors.
Flow-induced dispersion analysis (FIDA), is a new capillary-based and immobilization-free technology used for characterization and quantification of biomolecular interaction and protein concentration under native conditions. [ 17 ] The technique is based on measuring the change in apparent size (hydrodynamic radius) of a selective ligand when interacting with the analyte of interest. A FIDA assay works in complex solutions (e.g. plasma [ 18 ] ), and provides information regarding analyte concentration, affinity constants, molecular size and binding kinetics. A single assay is typically completed in minutes and only requires a sample consumption of a few μL. [ 17 ]
Fluorescence polarization/anisotropy can be used to measure protein–protein or protein–ligand interactions. Typically one binding partner is labeled with a fluorescence probe (although sometimes intrinsic protein fluorescence from tryptophan can be used) and the sample is excited with polarized light. The increase in the polarization of the fluorescence upon binding of the labeled protein to its binding partner can be used to calculate the binding affinity.
With fluorescence correlation spectroscopy , one protein is labeled with a fluorescent dye and the other is left unlabeled. The two proteins are then mixed, and the data output the fractions of the labeled protein that are unbound and bound to the other protein, providing a measure of K D and binding affinity. Time-course measurements can also be taken to characterize binding kinetics. FCS also reveals the size of the formed complexes, so the stoichiometry of binding can be measured. A more powerful method is fluorescence cross-correlation spectroscopy (FCCS), which employs double labeling techniques and cross-correlation, resulting in vastly improved signal-to-noise ratios over FCS. Furthermore, two-photon and three-photon excitation practically eliminates photobleaching effects and provides ultra-fast recording of FCCS or FCS data.
Fluorescence resonance energy transfer (FRET) is a common technique when observing the interactions of different proteins. [ 19 ] [ 20 ] [ 21 ] [ 22 ] Applied in vivo, FRET has been used to detect the location and interactions of genes and cellular structures including integrins and membrane proteins. [ 23 ] FRET can be used to obtain information about metabolic or signaling pathways. [ 24 ] [ 25 ] [ 26 ]
Bio-layer interferometry (BLI) is a label-free technology for measuring biomolecular interactions [ 27 ] [ 28 ] (protein:protein or protein:small molecule). It is an optical analytical technique that analyzes the interference pattern of white light reflected from two surfaces: a layer of immobilized protein on the biosensor tip, and an internal reference layer. Any change in the number of molecules bound to the biosensor tip causes a shift in the interference pattern that can be measured in real time, providing detailed information regarding the kinetics of association and dissociation of the two molecules as well as the affinity constant for the protein interaction (k a , k d and K d ). Due to the sensor configuration, the technique is highly amenable to both purified and crude samples as well as high-throughput screening experiments. The detection method can also be used to determine the molar concentration of analytes.
Protein activity can be determined by NMR multi-nuclear relaxation measurements, or 2D-FT NMR spectroscopy in solutions, combined with nonlinear regression analysis of NMR relaxation or 2D-FT spectroscopy data sets. Whereas the concept of water activity is widely known and utilized in the applied biosciences, its complement, the protein activity, which quantitates protein–protein interactions, is much less familiar to bioscientists, as it is more difficult to determine in dilute solutions of proteins; protein activity is also much harder to determine for concentrated protein solutions, when protein aggregation, not merely transient protein association, is often the dominant process. [ 29 ]
Isothermal titration calorimetry (ITC) is considered the most quantitative technique available for measuring the thermodynamic properties of protein–protein interactions and is becoming a necessary tool for protein–protein complex structural studies. The technique relies upon the accurate measurement of heat changes that follow the interaction of protein molecules in solution, without the need to label or immobilize the binding partners, since the absorption or production of heat is an intrinsic property of virtually all biochemical reactions. ITC provides information regarding the stoichiometry, enthalpy, entropy, and binding kinetics of two interacting proteins. [ 30 ]
Microscale thermophoresis (MST), is a new method that enables the quantitative analysis of molecular interactions in solution at the microliter scale. The technique is based on the thermophoresis of molecules, which provides information about molecule size, charge and hydration shell. Since at least one of these parameters is typically affected upon binding, the method can be used for the analysis of each kind of biomolecular interaction or modification. The method works equally well in standard buffers and biological liquids like blood or cell-lysate. It is a free solution method which does not need to immobilize the binding partners. MST provides information regarding the binding affinity, stoichiometry, competition and enthalpy of two or more interacting proteins. [ 31 ] [ 32 ]
Rotating cell-based ligand binding assay using radioactivity or fluorescence is a recent method that measures molecular interactions in living cells in real time. This method allows the characterization of the binding mechanism, as well as K d , k on and k off . This principle is being applied in several studies, mainly with protein ligands and living mammalian cells. [ 33 ] [ 34 ] [ 35 ] [ 36 ] An alternative technology to measure protein interactions directly on cells is Real-Time Interaction Cytometry (RT-IC). [ 37 ] In this technology, living or fixed cells are physically retained on the surface of biosensor chips using biocompatible and flow-permeable polymer traps. [ 38 ] Binding and unbinding of automatically injected labeled analytes is measured by time-resolved fluorescence detection.
Single colour reflectometry (SCORE) is a label-free technology for measuring all kinds of biomolecular interactions in real-time. Similar to BLI, it exploits interference effects at thin layers. However, it does not need a spectral resolution but rather uses monochromatic light. Thus, it is possible to analyse not only a single interaction but high-density arrays with up to 10,000 interactions per cm 2 . [ 39 ]
switchSENSE is a technology based on DNA nanolevers on a chip surface. A fluorescent dye as well as the unlabeled ligand are attached to this nanolever. Upon binding of an analyte to the ligand, the real-time kinetic rates (k on , k off ) can be measured as changes in fluorescence intensity, and the K d can be derived. This method can be used to investigate protein-protein interactions, as well as modulators of protein-protein interactions, by assessing ternary complex formation. Examples of such modulators are PROTACs , which are investigated for their therapeutic potential in cancer therapy. Another example of such ternary interactions is bispecific antibodies binding to their two distinct antigens. [ 40 ] switchSENSE can additionally be utilized to detect conformational changes induced by ligands binding to a target protein. [ 41 ]
The yeast two-hybrid and bacterial two-hybrid screen [ 42 ] investigate the interaction between artificial fusion proteins. They do not require isolation of proteins but rather use transformation to express proteins in bacteria or yeast . The cells are designed in a way that an interaction activates the transcription of a reporter gene or a reporter enzyme . [ 43 ]
Most PPI methods require some computational data analysis. The methods in this section are primarily computational although they typically require data generated by wet lab experiments.
Protein–protein docking , the prediction of protein–protein interactions based only on the three-dimensional protein structures from X-ray diffraction of protein crystals , might not be satisfactory. [ 44 ] [ 45 ]
Network analysis includes the analysis of interaction networks using methods of graph theory or statistical methods. The goal of these studies is to understand the nature of interactions in the context of a cell or pathway , not just individual interactions. [ 46 ] | https://en.wikipedia.org/wiki/Methods_to_investigate_protein–protein_interactions |
Methomyl is a carbamate insecticide introduced in 1966. It is highly toxic to humans, livestock, pets, and wildlife. [ 3 ] The EU imposed a pesticide residue limit of 0.01 mg/kg for all fruit and vegetables. [ 4 ]
Methomyl is a common active ingredient in commercial fly bait, for which the label instructions in the United States warn that "It is a violation of Federal Law to use this product in a manner inconsistent with its labeling." "Off-label" uses and other uses not specifically targeted at problem insects are illegal, dangerous, and ill-advised. [ 5 ] [ 6 ]
Methomyl is a broad-spectrum insecticide that is used to kill insect pests . [ 7 ] Methomyl is registered for commercial/professional use under certain conditions on sites including field, vegetable, and orchard crops; turf ( sod farms only); livestock quarters; commercial premises; and refuse containers. Products containing 1% methomyl are available to the general public for retail sale, but more potent formulations are classified as restricted-use pesticides , not registered for homeowner or non-professional application. [ 7 ] However, Heliothis virescens developed resistance to methomyl within 5 years. [ 8 ] Other species, like Helicoverpa assulta , also developed resistance after exposure. [ 9 ]
In acute toxicity testing, methomyl is placed in EPA Toxicity Category I (the highest toxicity category out of four) via the oral route and in eye irritation studies. [ 7 ] It is in lower Toxicity Categories for inhalation (Category II), acute dermal effects (Category III), and acute skin irritation (Category IV). Methomyl is not likely to be a carcinogen (EPA carcinogen Category E). [ 7 ]
Methomyl has low persistence in the soil environment, with a reported half-life of approximately 14 days. [ 10 ] Because of its high solubility in water and low affinity for soil binding, methomyl may have potential for groundwater contamination. [ 7 ] [ 11 ] The estimated aqueous half-life for the insecticide is 6 days in surface water and over 25 weeks in groundwater . [ 11 ]
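Assuming simple first-order decay (a common model for pesticide degradation; the article quotes the half-lives but not the kinetic model), the figures above translate into remaining fractions as follows:

```python
import numpy as np

# First-order decay: C(t) = C0 * exp(-k * t), with k = ln(2) / t_half.
half_lives_days = {"soil": 14, "surface water": 6, "groundwater": 25 * 7}

for medium, t_half in half_lives_days.items():
    k = np.log(2) / t_half
    remaining = np.exp(-k * 90)  # fraction left after 90 days
    print(f"{medium}: {remaining:.1%} remaining after 90 days")
```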
Methomyl is synthesized in three steps: a thioester is prepared first; the oxime is then prepared from the thioester; and finally the product is formed from methyl isocyanate and the finished oxime.
Common names for methomyl include metomil and mesomile. Trade names include Acinate, Agrinate, DuPont 1179, Flytek, Kipsin, Lannate, Lanox, Memilene, Methavin, Methomex, Nudrin, NuBait, Pillarmate and SD 14999 [ 11 ] | https://en.wikipedia.org/wiki/Methomyl |
In chemistry , an alkoxide is the conjugate base of an alcohol and therefore consists of an organic group bonded to a negatively charged oxygen atom. They are written as RO − , where R is the organyl substituent . Alkoxides are strong bases [ citation needed ] and, when R is not bulky , good nucleophiles and good ligands . Alkoxides, although generally not stable in protic solvents such as water, occur widely as intermediates in various reactions, including the Williamson ether synthesis . [ 1 ] [ 2 ] Transition metal alkoxides are widely used for coatings and as catalysts . [ 3 ] [ 4 ]
Enolates are unsaturated alkoxides derived by deprotonation of a C−H bond adjacent to a ketone or aldehyde . The nucleophilic center for simple alkoxides is located on the oxygen, whereas the nucleophilic site on enolates is delocalized onto both carbon and oxygen sites. Ynolates are also unsaturated alkoxides derived from acetylenic alcohols.
Phenoxides are close relatives of the alkoxides, in which the alkyl group is replaced by a phenyl group. Phenol is more acidic than a typical alcohol; thus, phenoxides are correspondingly less basic and less nucleophilic than alkoxides. They are, however, often easier to handle and yield derivatives that are more crystalline than those of the alkoxides.
Alkali metal alkoxides are often oligomeric or polymeric compounds, especially when the R group is small (Me, Et). [ 3 ] [ page needed ] The alkoxide anion is a good bridging ligand , thus many alkoxides feature M 2 O or M 3 O linkages. In solution, the alkali metal derivatives exhibit strong ion-pairing, as expected for the alkali metal derivative of a strongly basic anion.
Alkoxides can be produced by several routes starting from an alcohol . Highly reducing metals react directly with alcohols to give the corresponding metal alkoxide. The alcohol serves as an acid , and hydrogen is produced as a by-product. A classic case is sodium methoxide , produced by the addition of sodium metal to methanol : [ citation needed ]
2 CH 3 OH + 2 Na → 2 CH 3 ONa + H 2
Other alkali metals can be used in place of sodium, and most alcohols can be used in place of methanol. Generally, the alcohol is used in excess and left to be used as a solvent in the reaction. Thus, an alcoholic solution of the alkali alkoxide is used. Another similar reaction occurs when an alcohol is reacted with a metal hydride such as NaH. The metal hydride removes the hydrogen atom from the hydroxyl group and forms a negatively charged alkoxide ion.
The alkoxide ion and its salts react with primary alkyl halides in an S N 2 reaction to form an ether via the Williamson ether synthesis . [ 1 ] [ 2 ]
Aliphatic metal alkoxides decompose in water as summarized in this idealized equation:
M(OR) n + n H 2 O → M(OH) n + n ROH
In the transesterification process, metal alkoxides react with esters to bring about an exchange of alkyl groups between metal alkoxide and ester. With the metal alkoxide complex in focus, the result is the same as for alcoholysis, namely the replacement of alkoxide ligands, but at the same time the alkyl groups of the ester are changed, which can also be the primary goal of the reaction. Sodium methoxide in solution, for example, is commonly used for this purpose, a reaction that is used in the production of biodiesel .
Many metal alkoxide compounds also feature oxo- ligands . Oxo-ligands typically arise via hydrolysis, often accidental, and via ether elimination. [ citation needed ]
Many metal alkoxides thermally decompose in the range of roughly 100–300 °C. [ citation needed ] Depending on process conditions, this thermolysis can afford [ clarification needed ] nanosized powders of oxide or metallic phases. This approach is the basis of processes for the fabrication of functional materials intended for aircraft, space, electronic, and chemical-industry applications: individual oxides, their solid solutions, complex oxides, and powders of metals and alloys active towards sintering. Decomposition of mixtures of mono- and heterometallic alkoxide derivatives has also been examined. This method represents a prospective approach with the advantage of producing functional materials with increased phase and chemical homogeneity and controllable grain size (including nanosized materials) at relatively low temperature (less than 500–900 °C) compared with conventional techniques. [ citation needed ]
Sodium methoxide, also called sodium methylate and sodium methanolate, is a white powder when pure. [ 6 ] It is used as an initiator of an anionic addition polymerization with ethylene oxide , forming a polyether with high molecular weight. [ citation needed ] Both sodium methoxide and its counterpart prepared with potassium are frequently used as catalysts for commercial-scale production of biodiesel . In this process, vegetable oils or animal fats, which chemically are fatty acid triglycerides, are transesterified with methanol to give fatty acid methyl esters (FAMEs).
Sodium methoxide is produced on an industrial scale and is available from a number of chemical companies.
Potassium methoxide in alcoholic solution is commonly used as a catalyst for transesterification in the production of biodiesel . [ 7 ] | https://en.wikipedia.org/wiki/Methoxide |
Methoxy arachidonyl fluorophosphonate , commonly referred to as MAFP , is an irreversible active site-directed enzyme inhibitor that inhibits nearly all serine hydrolases and serine proteases . [ 1 ] It inhibits phospholipase A2 and fatty acid amide hydrolase with special potency, displaying IC 50 values in the low-nanomolar range. In addition, it binds to the CB 1 receptor in rat brain membrane preparations ( IC 50 = 20 nM), [ 2 ] but does not appear to agonize or antagonize the receptor, [ 3 ] though some related derivatives do show cannabinoid-like properties. [ 4 ]
| https://en.wikipedia.org/wiki/Methoxy_arachidonyl_fluorophosphonate |
Methoxychlor is a synthetic organochloride insecticide , now obsolete. Tradenames for methoxychlor include Chemform , Maralate , Methoxo , Methoxcide , Metox , and Moxie .
Methoxychlor was used to protect crops, ornamentals, livestock, and pets against fleas, mosquitoes, cockroaches, and other insects. It was intended to be a replacement for DDT , but has since been banned for use as a pesticide based on its acute toxicity, bioaccumulation , and endocrine disruption activity. [ 3 ]
The amount of methoxychlor in the environment changes seasonally due to its use in farming and foresting. It does not dissolve readily in water, so it is mixed with a petroleum-based fluid and sprayed, or used as a dust. Sprayed methoxychlor settles on the ground or in aquatic ecosystems, where it can be detected in sediments . [ 4 ] Its degradation may take many months. Methoxychlor is ingested and absorbed by living organisms, and it accumulates in the food chain. Some metabolites may have unwanted side effects.
The use of methoxychlor as a pesticide was banned in the United States in 2003 [ 5 ] and in the European Union in 2002. [ 6 ]
The EPA lists methoxychlor as "a persistent, bioaccumulative, and toxic (PBT) chemical by the EPA Toxics Release Inventory (TRI) program", [ 3 ] and as such it is a waste minimization priority chemical. The 2023 Conference of the Parties of the United Nations Stockholm Convention on Persistent Organic Pollutants decided to eliminate the use of methoxychlor, by listing this chemical in Annex A to the Convention. [ 7 ]
Human exposure to methoxychlor occurs via air, soil, and water, [ 8 ] primarily in people who work with the substance or who are exposed to contaminated air, soil, or water. It is unknown how quickly and efficiently the substance is absorbed by humans exposed through contaminated air or skin contact. [ 8 ] In animal models , high doses can lead to neurotoxicity . [ 8 ] Some of methoxychlor's metabolites have estrogenic effects in adult and developing animals before and after birth. [ 8 ] One studied metabolite is 2,2-bis( p -hydroxyphenyl)-1,1,1-trichloroethane (HPTE), which shows reproductive toxicity in an animal model by reducing testosterone biosynthesis. [ 9 ] [ 10 ] Such effects adversely affect both the male and female reproductive systems . It is expected that this "could occur in humans" but has not been proven. [ 8 ] While one study has linked methoxychlor to the development of leukemia in humans, most studies in animals and humans have been negative, so the EPA has determined that it is not classifiable as a carcinogen . The EPA indicates that levels above the Maximum Contaminant Level of 40 ppb "cause" central nervous system depression, diarrhea, damage to the liver, kidney, and heart, and, with chronic exposure, growth retardation. [ 3 ]
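For orientation, the quoted Maximum Contaminant Level converts between common water-concentration units as follows (simple unit arithmetic; in dilute aqueous solution 1 ppb corresponds to about 1 µg/L):

\[
40\ \text{ppb} \approx 40\ \mu\text{g/L} = 0.04\ \text{mg/L}
\]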
Little information is available regarding effects on human pregnancy and children, but it is assumed from animal studies that methoxychlor crosses the placenta, and it has been detected in human milk. [ 8 ] Children's exposure may differ from that of adults because they tend to play on the ground; furthermore, their reproductive systems may be more sensitive to the effects of methoxychlor as an endocrine disruptor . [ citation needed ]
Food contamination may occur at low levels and it is recommended to wash all foods. [ 8 ] A number of hazardous waste sites are known to contain methoxychlor.
Maximum pesticide residue limits for the EU/UK are set at 0.01 mg/kg for both oranges and apples. | https://en.wikipedia.org/wiki/Methoxychlor
Methyl- n -amylnitrosamine (MNAN) is a potential carcinogen. [ 1 ] It is metabolized in the liver by the enzyme CYP2A6 .
| https://en.wikipedia.org/wiki/Methyl-n-amylnitrosamine
Methyl azide is an organic compound with the formula CH 3 N 3 . It is a volatile, colorless liquid and the simplest organic azide .
Methyl azide can be prepared by the methylation of sodium azide , for instance with dimethyl sulfate in alkaline solution, followed by passing through a tube of anhydrous calcium chloride or sodium hydroxide to remove contaminating hydrazoic acid . [ 1 ] The first synthesis was reported in 1905. [ 2 ]
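A standard reading of the methylation step described above is the following balanced equation (with sodium methyl sulfate as the by-product):

\[
\mathrm{NaN_3 + (CH_3)_2SO_4 \longrightarrow CH_3N_3 + CH_3OSO_3Na}
\]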
Decomposition to a nitrene is a first-order reaction :

CH 3 N 3 → CH 3 N + N 2
The product, like its notional tautomer methanimine , polymerizes at room temperature. [ 3 ]
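Since the decomposition is first order, its kinetics follow the standard exponential rate law (a generic sketch, with k the temperature-dependent rate constant, not a value given in this article):

\[
-\frac{d[\mathrm{CH_3N_3}]}{dt} = k\,[\mathrm{CH_3N_3}], \qquad [\mathrm{CH_3N_3}]_t = [\mathrm{CH_3N_3}]_0\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}
\]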
Methyl azide is a potential precursor in the synthesis of prebiotic molecules via nonequilibrium reactions on interstellar ices initiated by energetic galactic cosmic rays (GCR) and photons . [ 4 ]
Methyl azide is stable at ambient temperature but may explode when heated or disturbed. [ 1 ] The presence of mercury increases its sensitivity to shock and spark. It is incompatible with methanol and dimethyl malonate . [ 5 ] When heated to decomposition, it emits toxic fumes of NO x . [ citation needed ] It can be stored indefinitely in the dark at −80 °C. [ 1 ] | https://en.wikipedia.org/wiki/Methyl_azide
Methyl chlorate is a hypothetical organic compound with the chemical formula CH 3 ClO 3 . If it existed, it would be the methyl ester of chloric acid . Attempts to synthesize it have failed. [ 2 ] No physical properties are known. [ 3 ]
| https://en.wikipedia.org/wiki/Methyl_chlorate
Methyl chloroformate is a chemical compound with the chemical formula Cl−C(=O)−O−CH 3 . It is the methyl ester of chloroformic acid . It is an oily colorless liquid, although aged samples appear yellow. It is also known for its pungent odor.
Methyl chloroformate can be synthesized using anhydrous methanol and phosgene . [ 2 ]
Methyl chloroformate hydrolyzes in water to form methanol , hydrochloric acid , and carbon dioxide . [ 3 ] This decomposition happens violently in the presence of steam, causing foaming. The compound decomposes in heat, which can liberate hydrogen chloride, phosgene, chlorine, or other toxic gases. [ 4 ]
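The synthesis and hydrolysis described above correspond to the following balanced equations:

\[
\mathrm{CH_3OH + COCl_2 \longrightarrow ClC(O)OCH_3 + HCl}
\]
\[
\mathrm{ClC(O)OCH_3 + H_2O \longrightarrow CH_3OH + CO_2 + HCl}
\]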
Methyl chloroformate is used in organic synthesis for the introduction of the methoxycarbonyl functionality to a suitable nucleophile (i.e. carbomethoxylation). [ 5 ]
Methyl chloroformate forms highly flammable vapour–air mixtures. The compound has a flash point of 10 °C. [ 6 ] When heated, methyl chloroformate releases phosgene, and it produces hydrogen chloride on contact with water. It causes skin damage on contact. | https://en.wikipedia.org/wiki/Methyl_chloroformate
Methyl cyanoformate is the organic compound with the formula CH 3 OC(O)CN. It is used as a reagent in organic synthesis as a source of the methoxycarbonyl group, [ 1 ] in which context it is also known as Mander's reagent. When a lithium enolate is generated in diethyl ether or methyl t -butyl ether, treatment with Mander's reagent will selectively afford the C-acylation product. [ 1 ] Thus, for enolate acylation reactions in which C- vs. O-selectivity is a concern, methyl cyanoformate is often used in place of more common acylation reagents such as methyl chloroformate .
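A generic sketch of this C-acylation (the substituents R and R′ are placeholders rather than a specific literature substrate):

\[
\mathrm{R_2C{=}C(OLi)R' + CH_3OC(O)CN \longrightarrow R'C(O){-}CR_2{-}CO_2CH_3 + LiCN}
\]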
Methyl cyanoformate is also an ingredient in Zyklon A . It has lachrymatory effects. [ 2 ]
| https://en.wikipedia.org/wiki/Methyl_cyanoformate
Methyl fluoroacetate ( MFA ) is an organic compound with the chemical formula FCH 2 CO 2 CH 3 . It is the extremely toxic methyl ester of fluoroacetic acid . It is a colorless, odorless liquid at room temperature. It is used as a laboratory chemical and as a rodenticide . Because of its extreme toxicity, MFA was studied for potential use as a chemical weapon . [ 1 ]
The general population is not likely to be exposed to methyl fluoroacetate. People who use MFA for work, however, can breathe in or have direct skin contact with the substance. [ 2 ]
MFA was first synthesized in 1896 by the Belgian chemist Frédéric Swarts by reacting methyl iodoacetate with silver fluoride . It can also be synthesized by reacting methyl chloroacetate with potassium fluoride . [ 1 ]
Because of its toxicity, MFA was studied for potential use as a chemical weapon during World War II . Since it is colorless and odorless, it was considered an effective water poison: it could contaminate a water supply and kill a large part of the population. By the end of the war, several countries had begun to make methyl fluoroacetate to debilitate or kill the enemy. [ 2 ]
The synthesis of methyl fluoroacetate is a two-step process. [ 3 ]
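The displacement routes mentioned earlier can be written as balanced equations (the individual steps of the two-step process itself are not reproduced in this excerpt):

\[
\mathrm{ICH_2CO_2CH_3 + AgF \longrightarrow FCH_2CO_2CH_3 + AgI}
\]
\[
\mathrm{ClCH_2CO_2CH_3 + KF \longrightarrow FCH_2CO_2CH_3 + KCl}
\]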
Methyl fluoroacetate is a methyl ester of fluoroacetic acid .
MFA is a liquid that is odorless or has a faint, fruity smell. Its boiling point is 104.5 °C and its melting point is −35.0 °C. It is soluble in water (117 g/L at 25 °C) and slightly soluble in petroleum ether. [ 2 ]
MFA is resistant to displacement of its fluorine by nucleophiles , reflecting the higher stability of the C−F bond compared with the other carbon–halogen bonds ( C− Cl , C− Br , C− I ). The other haloacetates are more powerful alkylating agents that react with the −SH groups of proteins . MFA does not react in this way, which gives it a distinctive toxic action. [ 2 ] Moreover, MFA is a derivative of fluoroacetate (FA), which is similarly toxic and undergoes similar biotransformation.
Generally, fluoroacetates are toxic because they are converted, via fluoroacetyl coenzyme A , to fluorocitrate. Fluorocitrate competitively inhibits aconitate hydratase , which is needed for the conversion of citrate to isocitrate. [ 4 ] This interrupts the citric acid cycle (TCA cycle) and causes citrate to accumulate in the tissues and eventually in the plasma. MFA is mainly biotransformed by a glutathione transferase enzyme in a phase 2 biotransformation process . The GSH-dependent enzyme couples glutathione to MFA, thereby defluorinating it. As a result, a fluoride anion and S -carboxymethylglutathione are produced. The removal of fluoride is mediated by a fluoroacetate -specific defluorinase. The defluorinating activity is mainly present in the liver , but the kidneys , lungs , heart , and testicles also show activity. In the brain , there are no signs of defluorination. Eventually, fluorocitrate (FC) is formed, which is the main toxic compound. It binds the aconitase enzyme with very high affinity and thereby intervenes in the TCA cycle. Under normal circumstances citrate is converted onward to succinate , but this process is inhibited. The cycle stops, and oxidative phosphorylation is prevented because it requires the NADH , FADH2 and succinate supplied by the TCA cycle. Cellular respiration soon ceases. The poison acts very quickly and has no antidote .
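The "lethal synthesis" described above can be summarized schematically (a simplified sketch of the commonly described pathway; the enzyme names over the arrows are the standard attributions rather than ones stated in this article):

\[
\text{fluoroacetate} \xrightarrow{\text{acetyl-CoA synthetase}} \text{fluoroacetyl-CoA} \xrightarrow[\text{+ oxaloacetate}]{\text{citrate synthase}} \text{fluorocitrate} \longrightarrow \text{inhibition of aconitase}
\]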
Mammals are intolerant of MFA. However, a few Australian species (e.g. the brush-tailed possum ) show a level of tolerance to fluoroacetate by metabolizing it using glutathione- S -transferase . [ 5 ] This enzyme can remove fluoride from fluoroacetate or fluorocitrate; it detoxifies aryl and alkyl compounds by converting them into glutathione conjugates . The C−F bond is cleaved by nucleophilic attack at the carbon, forming S -carboxymethyl glutathione, which can afterward be excreted in the form of S -carboxymethylcysteine . [ 5 ]
The elimination half-life of biotransformed MFA is about 2 days. When administered, MFA mainly resides in blood plasma , but it can also be detected in the liver, kidney, and muscle tissue . [ 6 ]
MFA is a convulsant poison. It causes severe convulsions in poisoned victims. [ 7 ] Death results from respiratory failure . [ 6 ]
For a variety of animals, the toxicity of methyl fluoroacetate has been determined orally and through subcutaneous injection. The lethal dosage ranges from 0.1 mg/kg in dogs to 10–12 mg/kg in monkeys, indicating considerable variation. An order of decreasing susceptibility has been determined for these animals: dog, guinea-pig, cat, rabbit, goat, and then likely horse, rat, mouse, and monkey. For the rat and mouse, the toxicity by inhalation has been investigated more fully than for other animals; the 5-minute inhalation LD 50 values are 450 mg/m 3 for the rat and above 1,000 mg/m 3 for the mouse.

The pharmacological effects of the substance have been investigated by mouth and by injection in dogs, guinea-pigs, cats, rabbits, goats, horses, rats, mice, and monkeys. Methyl fluoroacetate causes progressive depression of respiration and is a convulsant poison in most animals. It is not toxic when applied to the skin, yet it is toxic by inhalation, by injection, and by mouth. For the rat, cat, and rhesus monkey, its effects have been found to be similar to those of nicotine , strychnine , leptazol , picrotoxin , and electrically induced convulsions; the convulsive pattern is considered most similar to that of leptazol. Little besides signs of asphyxia is found post-mortem in these animals.

Estimations have been made for blood sugar, hemoglobin, plasma proteins, non-protein nitrogen, and serum potassium, calcium, chloride, and inorganic phosphate in a small number of rabbits, dogs, and goats. Blood changes include a rise of 20 to 60% in hemoglobin, a rise of up to 90% in blood sugar, a rise of 70 to 130% in inorganic phosphate, and a less significant rise in serum potassium, with a terminal rise in non-protein nitrogen and potassium.

The whole central nervous system is affected by methyl fluoroacetate, as with leptazol, with the higher centers being more sensitive than the lower ones. Small doses have little effect on blood pressure, yet in large doses the compound has an action similar to that of nicotine. It stimulates the rate and volume of respiration and then causes respiratory failure, probably central in origin, as shown by graphic records. The knee-jerk reflex appears to be accentuated until convulsions occur, because irradiation of the stimuli is so facilitated. In the reflex arc of a spinal cat, nervous conduction is increased and the threshold stimulus lessened. Methyl fluoroacetate reduces the electric convulsive threshold about tenfold in rats.

Treatment is difficult because methyl fluoroacetate is both a powerful convulsant and a respiratory depressant, although suggestions for treatment in man have been made. Methyl fluoroacetate presents a serious hazard as a food and water contaminant if used as a poison against rodents and other vermin, as it is not easily detected or destroyed and is equally toxic by mouth and by injection. [ 6 ]
Methyl fluoroacetate is produced and used as a chemical reagent and it can be released to the environment through several waste streams. When it was used as a rodenticide, it was released directly to the environment where it would be broken down in the air. If released to air, an estimated vapor pressure of 31 mmHg at 25 °C indicates methyl fluoroacetate will exist solely as a vapor in the atmosphere. [ 2 ] Vapor-phase methyl fluoroacetate will be degraded in the atmosphere by reaction with photochemically produced hydroxyl radicals. The half-life for this reaction in air is estimated to be 98 days.
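Treating the 98-day atmospheric half-life as a first-order process gives the corresponding rate constant and, as a worked example, the fraction remaining after 30 days (illustrative arithmetic on the quoted figure):

\[
k = \frac{\ln 2}{98\ \text{d}} \approx 7.1 \times 10^{-3}\ \text{d}^{-1}, \qquad \frac{C(30\ \text{d})}{C_0} = e^{-k \cdot 30\ \text{d}} \approx e^{-0.21} \approx 0.81
\]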
MFA does not contain chromophores that absorb at wavelengths > 290 nm, and therefore it is not expected to be susceptible to direct photolysis by sunlight . [ 2 ]
The effects on animals occur rapidly and severely, ultimately resulting in death. Animals exposed to a high concentration of MFA vapor show no symptoms for 30–60 minutes. [ 6 ] Violent reactions and death then follow within a few hours, according to studies. After intravenous injection, mice, rats, and guinea pigs show symptoms within 15 minutes to 2 hours. The animals become quiet and limp. Rabbits show a similar latent period and muscle weakness. [ 6 ] Dogs show symptoms of hyperactivity. They are more sensitive because of their higher metabolic rate, and eventually they also suffer respiratory failure. Fish are more resistant because of their slow metabolism, [ 4 ] and the substance is therefore not expected to build up in fish. Also, Australian herbivores (e.g. possums and seed-eating birds) that live in habitats containing plants with traces of fluoroacetate have some tolerance. This can arise from detoxifying fluoroacetate or from greater resistance of aconitase to fluorocitrate in the presence of GSH. Some insects can store the toxin in vacuoles and use it later. [ 4 ] Because MFA is so hazardous, it cannot be used for poisoning animals without risking human life.
There is no known antidote against MFA, but there are some suggestions regarding the treatment of MFA poisoning. The advice is to give an intravenous injection of a fast-acting anesthetic directly after poisoning. The anesthetic should be pentothal sodium or evipan sodium, followed by an intramuscular injection of a long-acting cortical depressant such as sodium phenobarbitone, or rectal avertin. Afterward, careful supervision of the oxygen supply is necessary, together with a BLB (Boothby–Lovelace–Bulbulian) oxygen mask and the use of artificial respiration. Possibly, intravenous hypertonic glucose is required, as in status epilepticus. Finally, tubocurarine chloride should be applied carefully to control any convulsions. [ 6 ] If any vomiting occurs, lean the patient forward to maintain an open airway.
Alternatively, there is a therapy aimed at preventing fluorocitrate synthesis and the consequent blocking of aconitase within the mitochondria, and at providing a citrate outflow from the mitochondria to keep the TCA cycle going. So far, ethanol has proven the most effective against FC formation: when ethanol is oxidized, it increases blood acetate levels, which inhibits FC production. In humans, an oral dose of 40–60 mL of 96% ethanol is advised, followed by 1.0–1.5 g/kg of 5–10% ethanol intravenously during the first hour and 0.1 g/kg during the following 6–8 hours. This therapy is meant for fluoroacetate (FA) poisoning, which is closely related to MFA, so applying it to MFA may give different outcomes. [ 8 ]
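As illustrative arithmetic only (not clinical guidance), the quoted regimen for a hypothetical 70 kg adult works out to:

\[
70\ \text{kg} \times (1.0\text{–}1.5)\ \text{g/kg} = 70\text{–}105\ \text{g ethanol IV in the first hour}, \qquad 70\ \text{kg} \times 0.1\ \text{g/kg} = 7\ \text{g over the following 6–8 hours}
\]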
Treatment with monoacetin (glycerol monoacetate) has helped against FA poisoning. It increases blood acetate levels and decreases citrate levels in the heart, brain, and kidneys. However, this has only been tested experimentally. In monkeys, monoacetin even reversed the effects of FA: all biological effects normalized. [ 8 ] As with ethanol, monoacetin's effectiveness is established against FA poisoning.
To date, there is no proven treatment against MFA poisoning. However, the aforementioned treatments can provide starting points for therapy aimed at MFA, since FA and MFA are closely related compounds. [ 8 ] | https://en.wikipedia.org/wiki/Methyl_fluoroacetate
Methyl green (CI 42585) is a cationic (positively charged) stain related to Ethyl Green that has been used for staining DNA since the 19th century. [ 1 ] It has since been used for staining cell nuclei, either as part of the classical Unna-Pappenheim stain or as a nuclear counterstain. In recent years, its fluorescent properties [ 2 ] when bound to DNA have made it useful for far-red imaging of live cell nuclei. [ 3 ] Fluorescent DNA staining is routinely used in cancer prognosis. [ 4 ] Methyl green has also emerged as an alternative stain for DNA in agarose gels, fluorometric assays , and flow cytometry . [ 3 ] [ 5 ] It can also be used as an exclusion viability stain for cells.
Its interaction with DNA has been shown to be non-intercalating: rather than inserting itself between the bases, it binds electrostatically to the DNA major groove . [ 6 ] It is used in combination with pyronin in the methyl green–pyronin stain , which stains and differentiates DNA and RNA.
When excited at 244 or 388 nm in a neutral aqueous solution, methyl green produces a fluorescent emission at 488 or 633 nm, respectively. The presence or absence of DNA does not affect these fluorescence behaviors. When binding DNA under neutral aqueous conditions, methyl green also becomes fluorescent in the far red with an excitation maximum of 633 nm and an emission maximum of 677 nm. [ 3 ]
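From the quoted DNA-bound maxima, the far-red Stokes shift follows directly (simple arithmetic on the stated values):

\[
\Delta\lambda = 677\ \text{nm} - 633\ \text{nm} = 44\ \text{nm}, \qquad \Delta\tilde{\nu} = \frac{10^7}{633}\ \text{cm}^{-1} - \frac{10^7}{677}\ \text{cm}^{-1} \approx 1.0 \times 10^{3}\ \text{cm}^{-1}
\]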
Commercial methyl green preparations are often contaminated with crystal violet , which can be removed by chloroform extraction. [ 3 ] | https://en.wikipedia.org/wiki/Methyl_green