The O.E. Meinzer Award is the annual award of the Hydrogeology Division of the Geological Society of America. Established in 1965, it is named after Oscar Edward Meinzer, who has been called the "father of modern groundwater hydrology". [ 1 ] The Meinzer Award recognizes the author or authors of a publication or body of publications that have significantly advanced the science of hydrogeology or a closely related field. [ 2 ] A list of recipients from 1973 onward is available from the National Ground Water Association website. [ 3 ]
https://en.wikipedia.org/wiki/Meinzer_Award
Meiobenthos, also called meiofauna, are small benthic invertebrates that live in marine and freshwater environments. The term meiofauna loosely defines a group of organisms by their size—larger than microfauna but smaller than macrofauna—rather than by their taxonomy. This fauna includes both animals that turn into macrofauna later in life and those small enough to belong to the meiobenthos their entire life. In marine environments there can be thousands of individuals in 10 cubic centimetres of sediment, including animals such as nematodes, copepods, rotifers, tardigrades and ostracods; protists such as ciliates and foraminifers within the size range of the meiobenthos are also often included. In practice, the term usually covers organisms that can pass through a 1 mm mesh but are retained by a 45 μm mesh, though exact dimensions vary. [ 1 ] Whether an organism will pass through a 1 mm mesh also depends upon whether it is alive or dead at the time of sorting.

The term meiobenthos was coined in 1942 by the marine biologist Molly Mare, but organisms that fit the modern meiofauna category have been studied since the 18th century. Meiofauna are most commonly encountered in sediments, in both marine and freshwater settings, from the littoral zone to the deep sea. They can also be found on hard substrates, living on algae (the phytal environment) and on sessile animals (barnacles, mussel beds, etc.).

Sampling of the meiobenthos depends on the environment and on whether quantitative or qualitative samples are required. In sedimentary environments, the methodology used also depends on the physical morphology of the sediment. For qualitative sampling within the littoral zone, in both coarse and fine sediment, a bucket and spade will work. In the sub-littoral and in deep water, some form of grab (such as the Van Veen grab sampler) is required, although a fine-mesh net (about 0.25 mm or less) can also work. For quantitative sampling of sedimentary environments at all depths, a wide variety of samplers have been devised. The simplest is a plastic syringe with the end cut off to form a piston corer, which can be deployed in the littoral zone, or in the sub-littoral using SCUBA gear. Generally, the deeper the water, the more complicated the sampling process becomes. For sampling meiofauna on hard substrates or in phytal or epizoic environments, the only practical method is to cut or scrape off a known area of the substrate and place it in a plastic bag.

There are a wide variety of methods for extracting meiofauna from samples of their habitat, depending upon whether live or fixed specimens are required. When extracting live meiofauna, one has to contend with the large number of species that cling or attach themselves to the substrate when disturbed. Three methods are available for making the meiofauna release their grip. The first, and simplest, is osmotic shock, achieved by submerging the sample in fresh water for a few seconds (this works only for marine samples). This causes the organisms to release their hold, after which they can be shaken free from the substrate, filtered out through a 45 μm mesh and immediately returned to fresh filtered seawater. Many organisms come through this process unharmed as long as the osmotic shock does not last too long. The second method is the use of an anaesthetic.
The preferred anaesthetic among meiobenthologists is isotonic magnesium chloride (7.5 g MgCl2·6H2O per 100 mL of distilled water). The sample is immersed in the isotonic solution and left for about 15 minutes, after which the meiofauna are shaken free of the substrate, again filtered out through a 45 μm mesh and immediately returned to fresh filtered seawater. The third method is Uhlig's seawater-ice technique. [ 2 ] This relies on the organisms moving ahead of a front of ice-cold seawater travelling down through the sample, which ultimately forces them out of the sediment. It is most effective on samples from temperate and tropical regions.

For major studies in which large numbers of samples are collected concurrently, samples are normally fixed using a 10% formalin solution and the meiofauna extracted at a later date. There are two main extraction methods. The first, decantation, works best with coarse sediments: samples are shaken in an excess of water, the sediment is briefly allowed to settle, and the meiofauna are then filtered off. The second, the flotation technique, works best with finer sediments, where the mass of the sediment particles is close to that of the meiofauna. The best flotation medium for this technique is the colloidal silica Ludox: the sample is stirred into the Ludox solution and left to settle for 40 minutes, after which the meiofauna are filtered out. With fine sediments, extraction efficiency is improved by centrifugation. With both methods, repeated extractions (at least three) should be made on each sample to ensure that at least 95% of the meiofauna are extracted.
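The recommendation of at least three extractions follows from simple compounding: if each pass recovers roughly the same fraction of the remaining animals, cumulative recovery rises quickly. A minimal sketch, assuming a constant per-pass recovery rate (the rate values below are purely illustrative, not from the source):

```python
def cumulative_recovery(per_pass: float, passes: int) -> float:
    """Fraction of meiofauna recovered after `passes` extractions,
    assuming each pass recovers `per_pass` of what remains."""
    return 1.0 - (1.0 - per_pass) ** passes

def passes_needed(per_pass: float, target: float = 0.95) -> int:
    """Smallest number of extractions reaching the target recovery."""
    n = 0
    while cumulative_recovery(per_pass, n) < target:
        n += 1
    return n

if __name__ == "__main__":
    # Illustrative only: a per-pass efficiency of ~65% would be consistent
    # with needing three extractions to reach >=95% recovery.
    for p in (0.50, 0.65, 0.80):
        print(f"per-pass {p:.0%}: 3 passes -> {cumulative_recovery(p, 3):.1%}, "
              f"passes for 95% -> {passes_needed(p)}")
```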
https://en.wikipedia.org/wiki/Meiobenthos
Meiotic drive is a type of intragenomic conflict, whereby one or more loci within a genome manipulate the meiotic process in such a way as to favor the transmission of one or more alleles over another, regardless of phenotypic expression. More simply, meiotic drive occurs when one copy of a gene is passed on to offspring more than the expected 50% of the time. According to Buckler et al., "Meiotic drive is the subversion of meiosis so that particular genes are preferentially transmitted to the progeny. Meiotic drive generally causes the preferential segregation of small regions of the genome". [ 1 ] The first report of meiotic drive came from Marcus Rhoades, who in 1942 observed a violation of Mendelian segregation ratios for the R locus (a gene controlling the production of the purple pigment anthocyanin in maize kernels) in a maize line carrying abnormal chromosome 10 (Ab10). [ 2 ] Ab10 differs from the normal chromosome 10 by the presence of a 150-base-pair heterochromatic region called a 'knob', which functions as a centromere during division (hence called a 'neocentromere') and moves to the spindle poles faster than the centromeres during meiosis I and II. [ 3 ] The mechanism was later found to involve the activity of a kinesin-14 gene called Kinesin driver (Kindr). Kindr protein is a functional minus-end-directed motor, displaying quicker minus-end-directed motility than an endogenous kinesin-14 such as Kin11. As a result, Kindr outperforms the endogenous kinesins, pulling the 150 bp knobs to the poles faster than the centromeres and causing Ab10 to be preferentially inherited during meiosis. [ 4 ] Unequal inheritance of gametes has been observed since the 1950s, [ 5 ] in contrast to Gregor Mendel's First and Second Laws (the law of segregation and the law of independent assortment), which dictate that there is a random chance of each allele being passed on to offspring. Examples of selfish drive genes in animals have primarily been found in rodents and flies. These drive systems could play important roles in the process of speciation; one proposal, for instance, is that hybrid sterility (Haldane's rule) may arise from the divergent evolution of sex chromosome drivers and their suppressors. [ 6 ] Early observations of mouse t-haplotypes by Mary Lyon described numerous genetic loci on chromosome 17 that suppress X-chromosome sex ratio distortion. [ 7 ] [ 8 ] If a driver is left unchecked, it may lead to population extinction, as the population would fix for the driver (e.g. a selfish X chromosome), removing the Y chromosome (and therefore males) from the population. The idea that meiotic drivers and their suppressors may govern speciation is supported by observations that mouse Y chromosomes lacking certain genetic loci produce female-biased offspring, implying that these loci encode suppressors of drive. [ 9 ] Moreover, matings of certain mouse strains used in research result in unequal offspring ratios. One gene responsible for sex ratio distortion in mice is r2d2 (responder to meiotic drive 2), which predicts which strains of mice can successfully breed without offspring sex ratio distortion. [ 10 ] Selfish chromosomes of stalk-eyed flies have had ecological consequences: driving X chromosomes lead to reductions in male fecundity and mating success, leading to frequency-dependent selection that maintains both the driving alleles and wild-type alleles.
[ 11 ] Multiple species of fruit fly are known to have driving X chromosomes, of which the best-characterized are found in Drosophila simulans. Three independent driving X chromosomes are known in D. simulans, called Paris, Durham, and Winters. In Paris, the driving gene encodes a DNA-modelling protein ("heterochromatin protein 1 D2" or HP1D2), where the allele of the driving copy fails to prepare the male Y chromosome for meiosis. [ 12 ] In Winters, the gene responsible ("Distorter on the X" or Dox) has been identified, though the mechanism by which it acts is still unknown. [ 13 ] The strong selective pressure imposed by these driving X chromosomes has given rise to suppressors of drive, whose genes are partially known for Winters, Durham, and Paris. These suppressors encode hairpin RNAs matching the sequence of driver genes (such as Dox), leading host RNA interference pathways to degrade the Dox sequence. [ 14 ] Autosomal suppressors of drive are known in Drosophila mediopunctata, [ 15 ] Drosophila paramelanica, [ 16 ] Drosophila quinaria, [ 17 ] and Drosophila testacea, [ 18 ] emphasizing the importance of these drive systems in natural populations.
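The population-genetic consequence of drive can be illustrated with a toy model. A minimal sketch, not from the source: assume a single driver allele D that heterozygotes transmit with probability k > 0.5 (k = 0.5 recovers Mendelian segregation), random mating, and no counter-selection; the driver then spreads toward fixation.

```python
def next_freq(p: float, k: float) -> float:
    """Driver-allele frequency after one generation of random mating.

    Heterozygotes transmit the driver with probability k (Mendelian
    segregation is k = 0.5); DD homozygotes always transmit it.
    With Hardy-Weinberg genotype frequencies, p' = p^2 + 2*p*q*k.
    """
    q = 1.0 - p
    return p * p + 2.0 * p * q * k

def trajectory(p0: float, k: float, generations: int) -> list[float]:
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_freq(freqs[-1], k))
    return freqs

if __name__ == "__main__":
    # A driver transmitted 90% of the time spreads rapidly from rarity,
    # while a Mendelian allele (k = 0.5) stays at its starting frequency.
    for k in (0.5, 0.7, 0.9):
        freqs = trajectory(p0=0.01, k=k, generations=50)
        print(f"k={k:.1f}: gen 0 -> {freqs[0]:.3f}, gen 50 -> {freqs[-1]:.3f}")
```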
https://en.wikipedia.org/wiki/Meiotic_drive
A Meisenheimer complex or Jackson–Meisenheimer complex in organic chemistry is a 1:1 reaction adduct between an arene carrying electron-withdrawing groups and a nucleophile. These complexes are found as reactive intermediates in nucleophilic aromatic substitution, but stable and isolated Meisenheimer salts are also known. [ 1 ] [ 2 ] [ 3 ] The early development of this type of complex took place around the turn of the 20th century. In 1886 Janovski observed an intense violet color when he mixed meta-dinitrobenzene with an alcoholic solution of alkali. In 1895 Cornelis Adriaan Lobry van Troostenburg de Bruyn investigated a red substance formed in the reaction of trinitrobenzene with potassium hydroxide in methanol. In 1900 Jackson and Gazzolo reacted trinitroanisole with sodium methoxide and proposed a quinoid structure for the reaction product. In 1902 Jakob Meisenheimer [ 4 ] observed that by acidifying their reaction product, the starting material was recovered. With three electron-withdrawing groups, the negative charge in the complex is located at one of the nitro groups, according to the quinoid model. With less electron-poor arenes, this charge is delocalized over the entire ring (structure to the right in scheme 1). In one study [ 5 ] a Meisenheimer arene (4,6-dinitrobenzofuroxan) was allowed to react with a strongly electron-releasing arene (1,3,5-tris(N-pyrrolidinyl)benzene), forming a zwitterionic Meisenheimer–Wheland complex. The Wheland intermediate is the name typically given to the cationic reactive intermediate formed in electrophilic aromatic substitution, and can be considered an oppositely charged analog of the negatively charged Meisenheimer complex formed in nucleophilic aromatic substitution. Hence, the simultaneous occurrence of the Wheland and Meisenheimer intermediates in the single zwitterionic complex shown below led to its description as a Meisenheimer–Wheland complex. The structure of this complex was confirmed by NMR spectroscopy. The Janovski reaction is the reaction of 1,3-dinitrobenzene with an enolizable ketone to give the Meisenheimer adduct. In the Zimmermann reaction the Janovski adduct is oxidized by excess base to a strongly colored enolate, with subsequent reduction of the dinitro compound to the aromatic nitro amine. [ 6 ] This reaction is the basis of the Zimmermann test used for the detection of ketosteroids. [ 7 ] The Jackson–Meisenheimer complex was named after the American organic chemist Charles Loring Jackson (1847–1935) and the German organic chemist Jakob Meisenheimer (1876–1934). The Janovski reaction was named for the Czech chemist Jaroslav Janovski (1850–1907). [ 8 ] The Zimmermann reaction was named after the German chemist Wilhelm Zimmermann (1910–1982). [ 8 ] Lastly, the Wheland intermediate was named after the American chemist George Willard Wheland (1907–1976). [ 9 ]
https://en.wikipedia.org/wiki/Meisenheimer_complex
The Meissel–Mertens constant (named after Ernst Meissel and Franz Mertens), also referred to as the Mertens constant, Kronecker's constant (after Leopold Kronecker), Hadamard–de la Vallée-Poussin constant (after Jacques Hadamard and Charles Jean de la Vallée-Poussin), or the prime reciprocal constant, is a mathematical constant in number theory, defined as the limiting difference between the harmonic series summed only over the primes and the natural logarithm of the natural logarithm:

$$M = \lim_{n \to \infty} \left( \sum_{p \le n} \frac{1}{p} - \ln \ln n \right) = \gamma + \sum_{p} \left( \ln\!\left(1 - \frac{1}{p}\right) + \frac{1}{p} \right),$$

where the sums run over the primes $p$. Here γ is the Euler–Mascheroni constant, which has an analogous definition involving a sum over all integers (not just the primes). The value of M is approximately 0.2614972128476428. Mertens' second theorem establishes that the limit exists. The fact that there are two logarithms (log of a log) in the limit for the Meissel–Mertens constant may be thought of as a consequence of the combination of the prime number theorem and the limit of the Euler–Mascheroni constant. The Meissel–Mertens constant was used by Google when bidding in the Nortel patent auction. Google posted three bids based on mathematical constants: $1,902,160,540 (Brun's constant), $2,614,972,128 (Meissel–Mertens constant), and $3.14159 billion (π). [ 1 ]
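The defining limit converges very slowly, so numerical estimates usually use the equivalent prime sum on the right-hand side above, whose terms fall off like $1/(2p^2)$. A minimal sketch (sympy's prime generator is used for convenience):

```python
from math import log
from sympy import primerange

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def meissel_mertens(prime_limit: int) -> float:
    """Approximate M = gamma + sum over primes p of ln(1 - 1/p) + 1/p.

    Each term is O(1/p^2), so this converges far faster than the
    defining limit sum(1/p) - ln(ln n).
    """
    total = 0.0
    for p in primerange(2, prime_limit):
        total += log(1.0 - 1.0 / p) + 1.0 / p
    return EULER_GAMMA + total

if __name__ == "__main__":
    for limit in (10**3, 10**5, 10**6):
        print(f"primes below {limit:>9}: M ~ {meissel_mertens(limit):.10f}")
    # Approaches M = 0.2614972128... as the limit grows.
```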
https://en.wikipedia.org/wiki/Meissel–Mertens_constant
In condensed-matter physics, the Meissner effect (or Meißner–Ochsenfeld effect) is the expulsion of a magnetic field from a superconductor during its transition to the superconducting state when it is cooled below the critical temperature. This expulsion will repel a nearby magnet. The German physicists Walther Meißner (anglicized Meissner) and Robert Ochsenfeld [ 1 ] discovered this phenomenon in 1933 by measuring the magnetic field distribution outside superconducting tin and lead samples. [ 2 ] The samples, in the presence of an applied magnetic field, were cooled below their superconducting transition temperature, whereupon the samples cancelled nearly all interior magnetic fields. They detected this effect only indirectly, because the magnetic flux is conserved by a superconductor: when the interior field decreases, the exterior field increases. The experiment demonstrated for the first time that superconductors were more than just perfect conductors and provided a uniquely defining property of the superconducting state. The capacity for the expulsion effect is determined by the nature of the equilibrium formed by the neutralization within the unit cell of a superconductor. A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too strong. Superconductors can be divided into two classes according to how this breakdown occurs. Most pure elemental superconductors, except niobium and carbon nanotubes, are type I, while almost all impure and compound superconductors are type II. The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided

$$\nabla^2 \mathbf{H} = \frac{\mathbf{H}}{\lambda^2},$$

where H is the magnetic field and λ is the London penetration depth. This equation, known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface. This exclusion of magnetic field is a manifestation of the superdiamagnetism that emerges during the phase transition from conductor to superconductor, for example by reducing the temperature below the critical temperature. In a weak applied field (less than the critical field that breaks down the superconducting phase), a superconductor expels nearly all magnetic flux by setting up electric currents near its surface, as the magnetic field H induces a magnetization M within the London penetration depth from the surface. These surface currents shield the internal bulk of the superconductor from the external applied field. As the field expulsion, or cancellation, does not change with time, the currents producing this effect (called persistent currents or screening currents) do not decay with time. Near the surface, within the London penetration depth, the magnetic field is not completely cancelled. Each superconducting material has its own characteristic penetration depth. Any perfect conductor will prevent any change to the magnetic flux passing through its surface due to ordinary electromagnetic induction at zero resistance. However, the Meissner effect is distinct from this: when an ordinary conductor is cooled so that it makes the transition to a superconducting state in the presence of a constant applied magnetic field, the magnetic flux is expelled during the transition. This effect cannot be explained by infinite conductivity, but only by the London equation.
The placement and subsequent levitation of a magnet above an already superconducting material does not demonstrate the Meissner effect, while an initially stationary magnet later being repelled by a superconductor as it is cooled below its critical temperature does. The persistent currents that exist in the superconductor to expel the magnetic field are commonly misattributed to Lenz's law or Faraday's law. A reason this is not the case is that no change in flux was made to induce the current. Another explanation is that since the superconductor experiences zero resistance, there cannot be an induced emf in the superconductor. The persistent current therefore is not a result of Faraday's law. Superconductors in the Meissner state exhibit perfect diamagnetism, or superdiamagnetism, meaning that the total magnetic field is very close to zero deep inside them (many penetration depths from the surface). This means that their volume magnetic susceptibility is $\chi_v = -1$. Diamagnetics are defined by the generation of a magnetization which directly opposes the direction of an applied field. However, the fundamental origins of diamagnetism in superconductors and normal materials are very different. In normal materials diamagnetism arises as a direct result of the orbital motion of electrons about the nuclei of an atom, altered electromagnetically by the application of an applied field. In superconductors the illusion of perfect diamagnetism arises from persistent screening currents which flow to oppose the applied field (the Meissner effect), not solely from the orbital motion. The discovery of the Meissner effect led to the phenomenological theory of superconductivity by Fritz and Heinz London in 1935. This theory explained resistanceless transport and the Meissner effect, and allowed the first theoretical predictions for superconductivity to be made. However, this theory only explained experimental observations; it did not allow the microscopic origins of the superconducting properties to be identified. This was done successfully by the BCS theory in 1957, from which the penetration depth and the Meissner effect result. [ 5 ] However, some physicists argue that BCS theory does not explain the Meissner effect. [ 6 ] The Meissner superconductivity effect serves as an important paradigm for the generation mechanism of a mass M (i.e., a reciprocal range $\lambda_M := h/(Mc)$, where h is the Planck constant and c is the speed of light) for a gauge field. In fact, this analogy is an abelian example of the Higgs mechanism, [ 7 ] which generates the masses of the electroweak W± and Z gauge particles in high-energy physics. The length $\lambda_M$ is identical to the London penetration depth in the theory of superconductivity. [ 8 ] [ 9 ]
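For a field applied parallel to the surface of a superconductor occupying the half-space x > 0, the London equation above has the solution B(x) = B(0) e^(-x/λ). A minimal sketch of that profile; the penetration depth of 40 nm is an illustrative round value (typical elemental superconductors have λ of a few tens of nanometres), not taken from the source:

```python
from math import exp

LAMBDA_NM = 40.0  # illustrative London penetration depth, in nanometres

def field_ratio(depth_nm: float, lam_nm: float = LAMBDA_NM) -> float:
    """B(x)/B(0) for a field applied parallel to the surface of a
    superconductor filling x > 0, per the London-equation solution
    B(x) = B(0) * exp(-x / lambda)."""
    return exp(-depth_nm / lam_nm)

if __name__ == "__main__":
    # The field survives only within a few penetration depths of the surface.
    for x in (0, 40, 120, 400):
        print(f"x = {x:3d} nm: B/B0 = {field_ratio(x):.4f}")
```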
https://en.wikipedia.org/wiki/Meissner_effect
The astronomy of Meitei civilisation deals with celestial objects, space, and the physical universe as a whole. The Meitei language term "Khenchanglon" ( Meitei : ꯈꯦꯟꯆꯪꯂꯣꯟ / ꯈꯦꯟꯆꯡꯂꯣꯟ ) is derived from its ancient Meitei equivalent "Khenchonglon" ( Meitei : ꯈꯦꯟꯆꯣꯡꯂꯣꯟ ), literally meaning "the growing up, evolving or emergence of natural or celestial bodies and energies" and colloquially meaning "astronomy or astronomical bodies, like stars, constellations, planets, satellites, comets, meteors, etc." Meitei astronomy was also related to the tradition of astrology. [ 1 ]
https://en.wikipedia.org/wiki/Meitei_astronomy
The Mekong River Commission (MRC) is an "...inter-governmental organisation that works directly with the governments of Cambodia, Laos, Thailand, and Vietnam to jointly manage the shared water resources and the sustainable development of the Mekong River". [ 1 ] Its mission is "To promote and coordinate sustainable management and development of water and related resources for the countries' mutual benefit and the people's well-being". [ 2 ] The origins of the Mekong Committee are linked to the legacy of (de)colonialism in Indochina and subsequent geopolitical developments. The political, social, and economic conditions of the Mekong River basin countries have evolved dramatically since the 1950s, when the Mekong represented the "only large river left in the world, besides the Amazon, which remained virtually unexploited." [ 3 ] The impetus for the creation of the Mekong cooperative regime progressed in tandem with the drive for the development of the lower Mekong, following the 1954 Geneva Conference which granted Cambodia, Laos, and Vietnam independence from France. A 1957 United Nations Economic Commission for Asia and the Far East (ECAFE) report, Development of Water Resources in the Lower Mekong Basin, recommended development to the tune of 90,000 km² of irrigation and 13.7 gigawatts (GW) from five dams. [ 4 ] Based largely on the recommendations of ECAFE, the "Committee for Coordination on the Lower Mekong Basin" (known as the Mekong Committee) was established in September 1957 with the adoption of the Statute for the Committee for Coordination of Investigations into the Lower Mekong Basin. ECAFE's Bureau of Flood Control had prioritized the Mekong—of the 18 international waterways within its jurisdiction—in the hopes of creating a precedent for cooperation elsewhere. [ 5 ] The committee has been called "one of the UN's earliest spin-offs", [ 6 ] as the organization functioned under the aegis of the UN, with its Executive Agent (EA) chosen from the career staff of the United Nations Development Programme (UNDP). The US government—which feared that poverty in the basin would contribute to the strength of communist movements—proved one of the most vocal international backers of the committee, with the U.S. Bureau of Reclamation conducting a seminal 1956 study on the basin's potential. [ 7 ] [ 5 ] A 1962 study by U.S. geographer Gilbert F. White, Economic and Social Aspects of Lower Mekong Development, proved extremely influential, resulting (in White's own estimation) in the postponement of the construction of the still-unrealized mainstream Pa Mong Dam, which would have displaced a quarter-million people. [ 8 ] The influence of the United States in the committee's formation can also be seen in the development studies of General Raymond Wheeler, the former Chief of the Army Corps of Engineers, the role of C. Hart Schaaf as the Mekong Committee's Executive Agent from 1959 to 1969, and President Lyndon Johnson's promotion of the committee as having the potential to "dwarf even our own T.V.A." [ 5 ] However, US financial support was terminated in 1975 and did not resume for decades due to embargoes against Cambodia (until 1992) and Vietnam (until 1994), followed by periods of trade restrictions. [ 9 ] Makim, [ 10 ] however, argues that the committee was "largely unaffected by formal or informal U.S. preferences" given the ambivalence of some riparians about US technical support, in particular Cambodia's rejection of some specific types of assistance.
However, the fact remains that "international development agencies have always paid the bills for the Mekong regime," with European (especially Scandinavian) nations picking up the slack left by the United States, followed (to a lesser extent) by Japan. [ 11 ] The Mekong Committee was a forceful advocate for large-scale dams and other projects, primarily preoccupied with facilitating development. For example, the 1970 Indicative Basin Plan called for 30,000 km² of irrigation by the year 2000 (up from 2,130 km²) as well as 87 short-term tributary development projects and 17 long-term development projects on the mainstream. The Indicative Basin Plan was crafted largely in response to criticisms of the committee's "piecemeal" approach and declining political support for the organization; for example, the committee had received no funds from Thailand, normally the biggest contributor, during the 1970 fiscal year. [ 12 ] The completion of all 17 projects was never intended; rather, the list was meant to serve as a "menu" for international donors, who were to select 9 or 10 of the projects. [ 5 ] While a few of the short-term projects were implemented, none of the long-term projects prevailed in the political climate of the ensuing decade, which included the end of the Vietnam War in 1975. [ 13 ] Several tributary dams were constructed, but only one outside of Thailand: the Nam Ngum Dam (completed 1971) in Laos, whose electricity was sold to Thailand. [ 5 ] According to Makim, [ 14 ] Nam Ngum was the "only truly intergovernmental project achieved" by the committee. This period was also marked by efforts, between 1958 and 1975, to expand the jurisdiction and mandate of the committee, which did not receive the consent of all four riparians. [ 15 ] These efforts nonetheless culminated, in January 1975, in the adoption of a 35-article Joint Declaration of Principles for Utilization of the Waters of the Mekong Basin by the sixty-eighth session of the Mekong Committee, prohibiting "unilateral appropriation" without "prior approval" and "extra-basin diversion" without unanimous consent. [ 16 ] However, no committee sessions were held in 1976 or 1977, as no plenipotentiary members had been appointed by Cambodia, Laos, or Vietnam—all of which experienced regime change in 1975. [ 17 ] The rise of the xenophobic and paranoid Khmer Rouge government in Cambodia made Cambodia's continued participation unsustainable, so in April 1977 the other three riparians agreed to the Declaration Concerning the Interim Mekong Committee, which resulted in the establishment of the Interim Mekong Committee in January 1978. The weakened interim organization was only able to study large-scale projects and implement a few small-scale projects in Thailand and Laos, where the Dutch government, through the IMC, funded fisheries and agricultural development projects along the Nam Ngum as well as port facilities at Keng Kabao near Savannakhet; the institutional role of the organization nonetheless shifted largely to data collection. [ 17 ] The 1987 Revised Indicative Basin Plan—the high-water mark of the Interim Committee's activity—scaled back the ambitions of the 1970 plan, envisioning a cascade of smaller dams along the Mekong's mainstream, divided into 29 projects, 26 of which were strictly national in scope. [ 5 ] The Revised Indicative Basin Plan can also be seen as laying the groundwork for Cambodia's readmission, [ 18 ] which the Supreme National Council of Cambodia requested in June 1991.
Cambodia's readmission was largely a side-show that masked the true issue facing the riparians: the rapid economic growth experienced in Thailand relative to its neighbors had made even the modest sovereignty limitations imposed by Mekong agreements seem undesirable in Bangkok. Thailand and the other three riparians (led by Vietnam, the most powerful of the remaining three states) were locked in disagreement over whether Cambodia should be readmitted under the terms of the 1957 Statute (and more importantly, the 1975 Joint Declaration), with Thailand preferring to negotiate an entirely new framework to allow its planned Kong-Chi-Moon Project (and others) to proceed without a Vietnamese veto. [ 19 ] Article 10 of the Joint Declaration, requiring unanimous consent for all mainstream development and inter-basin diversion, proved to be the main sticking point of Cambodia's readmission, with Thailand perhaps prepared to walk away from the regime altogether. [ 5 ] The conflict came to a head in April 1992 when Thailand forced the executive agent of the committee, Chuck Lankester, to resign and leave the country after barring the secretariat from the March 1992 meeting. [ 20 ] This prompted a series of meetings organized by the UNDP (which was terrified that the regime in which it had invested so much might disappear), culminating in the April 1995 Agreement on the Cooperation for the Sustainable Development of the Mekong River Basin, signed by Cambodia, Laos, Thailand, and Vietnam in Chiang Rai, Thailand, creating the Mekong River Commission (MRC). Since the dramatic confrontation of 1992, several seemingly overlapping organizations have been created, including the Asian Development Bank's Greater Mekong Subregion (ADB-GMS, 1992), Japan's Forum of Comprehensive Development of Indochina (FCDI, 1993), the Quadripartite Economic Cooperation (QEC, 1993), the Association of South East Asian Nations and Japan's Ministry of International Trade and Industry's Working Group on Economic Cooperation in Indochina and Burma (AEM-MITI, 1994), and Myanmar and Singapore's (almost finalized) ASEAN-Mekong Basin Development Cooperation (ASEAN-ME, 1996). The MRC has evolved since 1995. Some of the "thorny issues" set aside during the negotiation of the agreement were at least partially resolved by the implementation of subsequent programmes such as the Water Utilization Programme (WUP), agreed to in 1999 and committed to implementation by 2005. [ 21 ] The commission's hierarchical structure has been repeatedly tweaked, as in July 2000 when the MRC Secretariat was restructured. The 2001 Work Programme has largely come to be viewed as a shift "from a project-oriented focus to an emphasis on better management and preservation of existing resources." [ 5 ] On paper, the Work Programme represents a rejection of the ambitious development schemes embodied by the 1970 and 1987 Indicative Basin Plans (calling for no mainstream dams) and a shift to a holistic rather than programmatic approach. [ 5 ] In part, these changes represent a response to criticism of the MRC's failure to undertake a "regional-scale project" or even a region-level focus. [ 22 ] 2001 also saw a major shift in the MRC—at least on paper—when it committed to a role as a "learning organization" with an emphasis on "the livelihoods of the people in the Mekong region." [ 23 ] In the same year its annual report emphasized the importance of "bottom-up" solutions and the "voice of the people directly affected."
[ 24 ] Similarly, the 2001 MRC Hydropower Development Strategy explicitly disavowed the "promotion of specific projects" in favor of "basin-wide issues." [ 25 ] In part, these shifts mark a retreat from past project failures and a recognition that the MRC faces multiple, and often more lucrative, competitors in the project arena. [ 26 ] [ 27 ] The MRC is governed by its four member countries through the Joint Committee and the Council. Members of the Joint Committee are usually senior civil servants heading government departments, with one member from each country. The Joint Committee meets two to three times a year to approve budgets and strategic plans. Members of the Council are cabinet ministers; the Council meets once a year. Technical and administrative support is provided by the MRC Secretariat. The secretariat is based in Vientiane, Laos, with over 120 staff including scientists, administrators, and technical staff, and is managed by a chief executive officer. In April 2010, the Mekong River Commission convened a summit in Hua Hin, Thailand; all six riparian nations were in attendance: China, Burma (Myanmar), Laos, Thailand, Cambodia, and Vietnam. [ 28 ] From its conception until 1995 the organization was under the leadership of an "executive agent"; since then it has had CEOs. [ 29 ] The Mekong River Commission and its predecessors have never included the People's Republic of China (which was not a member of the United Nations in 1957) or Burma (which does not significantly rely on or tap the Mekong), even though their territories contain the upper basin of the Mekong. The SERVIR-Mekong project, part of a joint initiative by the US Agency for International Development (USAID) and NASA involving five countries (Thailand, Cambodia, Laos, Vietnam, and Myanmar), aims to tap into the latest technologies to help the Mekong River region protect its vital ecosystem. [ 34 ] Although China contributes only 16–18 percent of the Mekong's overall water volume, the glacial melt waters of the Tibetan plateau take on increasing importance during the dry season. [ 35 ] The ability of upstream nations to undermine downstream cooperation was perhaps best symbolized by an April 1995 ceremonial boat trip from Thailand to Vietnam—to celebrate the signing of the 1995 Agreement—which ran aground mid-river as a result of China filling the reservoir of the Manwan Dam. [ 36 ] Although China and Burma became "dialogue partners" of the MRC in 1996 and slowly but steadily escalated their (non-binding) participation in its various forums, it is at present unthinkable that either would join the MRC in the near future. [ 37 ] In April 2002, China began providing daily water level data to the MRC during the flood season. [ 38 ] Critics noted that the emphasis on "flood control" rather than dry season flows represented an important omission given the concerns prioritized by the Mekong regime. [ 39 ] In July 2003, MRC CEO Joern Kristensen reported that China had agreed to scale back its plans to blast rapids by implementing only phase one (of three) of its Upper Mekong Navigation Improvement Project; however, China's future intentions in this area are far from certain. [ 40 ] One area in which China has been particularly reticent is in providing information about the operation of its dams, rather than just flow data, including refusing to join emergency meetings in 2004. [ 41 ] Only in 2005 did China agree to hold technical discussions directly with the MRC.
[ 42 ] On 2 June 2005, at the invitation of the Chinese Ministry of Foreign Affairs and the Ministry of Water Resources, MRC CEO Dr. Olivier Cogels and a delegation of the secretariat's senior staff made the first official visit to Beijing to hold technical consultations under the framework of cooperation between China and the MRC, within the scope of the Mekong Programme. The delegation identified a number of potential areas of cooperation with the Ministry of Foreign Affairs, the Ministry of Water Resources, and the Ministry of Communication, Information and Transport. These discussions resulted in China supplying the MRC (beginning in 2007) with 24-hour water level and 12-hour rainfall data for flood forecasts, in exchange for monthly flow data from the MRC Secretariat. [ 43 ] The incentives for China to enter into cooperative regimes on the Mekong are substantially reduced by the alternative of the Salween River as a commercial outlet for China's Yunnan province, made considerably more attractive by requiring negotiation solely with Burma, rather than with four different riparians. [ 36 ] News media and official sources often portray China's joining the commission as a panacea for resolving the overdevelopment of the Mekong. [ 44 ] However, there is no indication that China's joining the MRC would provide downstream riparians with any real capacity to challenge China's development plans, given the dramatic power imbalances exhibited by these countries' relations with China. [ 45 ] The MRC has been hesitant to fully register concerns about Chinese upstream hydro-development. For example, in a letter to the Bangkok Post, MRC CEO Dr. Olivier Cogels in fact argued that Chinese dams would increase the river's dry season volume, as their purpose was electricity generation and not irrigation. [ 46 ] While such dams certainly could increase dry season flows, the only certainty about future Chinese reservoir policies seems to be that they will be crafted outside of downstream cooperation regimes. [ 5 ] Public statements from MRC leaders in the same vein as Cogels' comments have—to some—earned the MRC a reputation of being complicit in letting "China's dam-building machine float downstream." [ 47 ]
https://en.wikipedia.org/wiki/Mekong_River_Commission
Melainabacteria, also known as Vampirovibrionophyceae, [ 2 ] is a class of bacteria within the phylum Cyanobacteriota. [ 3 ] Vampirovibrio chlorellavorus is the only species of the class that has been grown in cell culture. [ 3 ] Candidatus species of Melainabacteria have been discovered through DNA and RNA sequence analysis of samples from soil, the human gut and various aquatic habitats such as groundwater. Melainabacteria was originally designated a phylum when its DNA was discovered in 2013, then in 2014 was demoted to a class. [ 3 ] By analyzing genomes of Melainabacteria, predictions are possible about their cell structure and metabolic abilities. The deduced structure of the bacterial cell is similar to cyanobacteria in being surrounded by two membranes. [ 4 ] It differs from cyanobacteria in its predicted ability to move by flagella (like gram-negative flagella), though some members (e.g. Gastranaerophilales) appear to lack flagella. [ 4 ] It is predicted that Melainabacteria are not able to perform photosynthesis, but obtain energy by fermentation. (A cladogram in the original article places Vampirovibrio chlorellavorus and Candidatus genera such as "Ca. Caenarcanum", "Ca. Obscuribacter", "Ca. Gastranaerophilus" and "Ca. Scatousia", among others, within Melainabacteria, as a lineage alongside the photosynthetic Cyanobacteria and "Sericytochromatia".) Melainabacteria nucleic acids can be found in a range of environments, including soil, water, and animal habitats. They are often found in the human gut and, more rarely, in the respiratory tract, oral environments, and on the skin surface. They are also often found in natural environments such as soil, bioreactors, [ 3 ] and aphotic aquatic settings such as lake sediments and groundwater aquifers. [ 3 ] Cyanobacteria bloom in freshwater systems as a result of excess nutrients and high temperatures, producing a scum on the water surface that resembles spilled paint. [ 3 ] Because Melainabacteria belong to the Cyanobacteriota and thrive in groundwater systems, their presence there has raised concern. The genomes of Melainabacteria were found to be bigger when found in aquifer systems and algal cultivation ponds than in the mammalian gut environment. [ 3 ] The Great Oxygenation Event (GOE) increased the abundance of oxygen in the atmosphere. [ 11 ] [ 12 ] Bacteria that existed before the GOE, such as the ancient Cyanobacteriota, did not rely on oxygen. Although they belong to the phylum Cyanobacteriota, Melainabacteria do not photosynthesize. [ 13 ] Cyanobacteria produced atmospheric oxygen and supported the development of early plant cells. [ 14 ] The genomes of Melainabacteria organisms isolated from groundwater indicate that the organisms have the capacity to fix nitrogen. Melainabacteria are predicted to lack linked electron transport chains but to have multiple methods of generating a membrane potential, which can then produce ATP via ATP synthase. They are thought to be able to use Fe hydrogenases for H2 production that can be consumed by other microorganisms. Melainabacteria from the human gut are also thought to synthesize several B and K vitamins, which suggests that these bacteria are beneficial to their host when consumed along with plant fibers.
[ 4 ] [ 15 ] Melainabacteria may play a role in digesting fiber in the human gut, [ 4 ] and their nucleic acids are more commonly found in herbivorous mammals and those with plant-rich diets. [ 4 ] Because plant-based diets require more fiber breakdown, Melainabacteria may aid in this digestive function. However, scientists do not know why these microbes are in the gut or how they got there. [ 4 ] Ongoing studies, such as "The human gut and groundwater harbor non-photosynthetic bacteria belonging to a new candidate phylum sibling to Cyanobacteria," are funded by various organizations including the National Institutes of Health, the David and Lucile Packard Foundation, The Hartwell Foundation, the Arnold and Mabel Beckman Foundation, the U.S. Department of Energy, the European Molecular Biology Organization and the Wellcome Trust. [ 14 ]
https://en.wikipedia.org/wiki/Melainabacteria
Melanie Jane Leng MBE is a Professor of Isotope Geosciences at the University of Nottingham, [ 1 ] working on isotopes, palaeoclimate and geochemistry. [ 2 ] [ 3 ] [ 4 ] [ 5 ] She also serves as the Chief Scientist for Environmental Change Adaptation and Resilience at the British Geological Survey and as Director of the Centre for Environmental Geochemistry, a collaboration between the University of Nottingham and the British Geological Survey. For many years (until 2019) she was the UK convenor and representative of the UK geoscience community on the International Continental Scientific Drilling Program. [ 6 ] Leng grew up in Scarborough, North Yorkshire, [ 7 ] and spent her childhood on the cliffs and beaches of the Lower Jurassic coast. Leng studied geology for GCSE and A Level. At Sixth Form College she took a field trip to Ravenscar and has described finding an ammonite there as the experience that hooked her on geology. She studied for a BSc in Earth Science at Oxford Polytechnic, gained her PhD at Aberystwyth University in 1990, [ 8 ] [ 9 ] then moved to the British Geological Survey to work in the isotope laboratory. [ 10 ] Leng has held several roles, most recently Chief Scientist for Environmental Change Adaptation and Resilience at the British Geological Survey. As Director of the Centre for Environmental Geochemistry, Leng leads research on environmental change, human impact, food security, and resource management. Leng has been involved in deep drilling as part of the International Continental Scientific Drilling Program, and has worked on Lake Ohrid in Macedonia and Lake Chala in East Africa. [ 9 ] She also heads the Stable Isotope Facility at the British Geological Survey, which is part of the National Environmental Isotope Facility. [ 11 ] Stable isotopes can be used to better understand climate change and human-landscape interactions, with increasing emphasis on the Anthropocene and the modern calibration period, on tracers of modern pollution, and on understanding the hydrological cycle, especially in areas suffering human impact. Leng takes part in expeditions, most recently the Natural Environment Research Council (NERC) mission called Ocean Regulation of Climate by Heat and Carbon Sequestration and Transports (ORCHESTRA). [ 12 ] [ 13 ] She actively blogs about her research. Leng serves on the editorial boards of the journals Quaternary Research, Quaternary Science Reviews, Scientific Reports and the Journal of Paleolimnology. [ 14 ] She has written several articles about successfully undertaking a PhD. [ 15 ] [ 16 ] Leng was appointed a Member of the Order of the British Empire (MBE) in the 2019 Birthday Honours. [ 17 ] She received an Honorary Doctor of Science (DSc) degree from Oxford Brookes University in 2022. [ 18 ]
https://en.wikipedia.org/wiki/Melanie_Leng
A melanoblast is a precursor cell of a melanocyte. These cells arise from trunk neural crest cells (in terms of axial level, from the neck to the posterior end) and migrate dorsolaterally between the ectoderm and the dorsal surface of the somites. [ 1 ] This developmental biology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Melanoblast
The melanocortins are a family of neuropeptide hormones which are the ligands of the melanocortin receptors. [ 1 ] The melanocortin system consists of melanocortin receptors, ligands, and accessory proteins. The genes of the melanocortin system are found in chordates. [ 2 ] Melanocortins were originally so named because their earliest known function was in melanogenesis. It is now known that the melanocortin system regulates diverse functions throughout the body, including inflammatory response, fibrosis, melanogenesis, steroidogenesis, energy homeostasis, sexual function, and exocrine gland function. [ 3 ] [ 4 ] [ 1 ] There are four endogenous melanocortin agonists, which are derived from post-translational processing of the precursor molecule proopiomelanocortin (POMC). [ 5 ] They are adrenocorticotropic hormone (ACTH), α-melanocyte-stimulating hormone (α-MSH), β-MSH, and γ-MSH. In addition to the agonists which activate melanocortin receptors, there are two antagonists which inhibit receptor activity, agouti and agouti-related protein (AgRP). Lastly, the ligand β-defensin 3 acts as a neutral melanocortin receptor antagonist. [ 6 ] The five melanocortin receptors are seven-transmembrane G protein-coupled receptors with differing ligand affinities, tissue and cell type expression, and downstream functions. [ 7 ] MC1R is expressed on melanocytes, macrophages, epithelial cells, endothelial cells, fibroblasts, monocytes and numerous other immune cells, but is also present in brain, testis, and intestine. [ 8 ] Its main functions are in melanogenesis and anti-inflammatory signaling. [ 9 ] MC2R is expressed in the adrenal cortex and adipocytes and promotes steroidogenesis. [ 10 ] MC3R and MC4R are primarily expressed in the brain and regulate energy homeostasis. [ 1 ] MC3R is additionally involved in immunomodulation, while MC4R has a role in sexual function. [ 11 ] [ 12 ] MC5R is highly expressed in skin and adrenal glands and has a role in exocrine function. [ 13 ] MC2R is activated exclusively by ACTH, whereas the other four receptors can be activated by ACTH, α-MSH, β-MSH, and γ-MSH, although the binding affinities differ. For all the melanocortin receptors, binding of an agonistic ligand activates the receptor, leading to dissociation of the G protein and activation of the enzyme adenylyl cyclase. Adenylyl cyclase then converts ATP into the second messenger cyclic AMP (cAMP), which in turn activates multiple downstream pathways. There are two known accessory proteins belonging to the melanocortin system which modulate function of the receptors: melanocortin-2-receptor accessory protein (MRAP) and MRAP2. [ 14 ] In 2019 the FDA approved the first drug targeting melanocortin receptors, Vyleesi (bremelanotide), developed by Palatin Technologies, Inc. The melanocortin system had been largely unexplored in drug development, but recent approvals and its wide applicability across indications have made it an active area of drug discovery; since Vyleesi's approval, multiple companies have initiated drug discovery programs targeting the melanocortin system. Bremelanotide (Vyleesi) is approved for treatment of acquired, generalized hypoactive sexual desire disorder (HSDD) in premenopausal women. [ 15 ] At therapeutic dose levels, it activates MC1R and MC4R. [ 16 ] Setmelanotide (Imcivree) is an MC4R agonist approved for chronic weight management in patients with genetic obesity.
[ 17 ] [ 18 ] Afamelanotide (Scenesse) is an MC1R agonist approved for patients with erythropoietic protoporphyria to increase pain-free light exposure. [ 19 ] PL9643, an ophthalmic solution, is being tested in phase 3 clinical trials to determine safety and efficacy in patients with dry eye. [ 20 ] PL9643 activates MC1R, MC3R, MC4R and MC5R; completed phase 2 studies demonstrated positive results for the treatment of dry eye disease. [ 21 ] Dersimelagon (MT-7117) is an orally administered MC1R agonist being tested in phase 3 clinical trials to evaluate safety and tolerability in patients with erythropoietic protoporphyria or X-linked protoporphyria. [ 22 ] Resomelagon (AP1189) is an orally administered MC1R and MC3R agonist being tested in three phase 2 clinical trials to study safety and efficacy in patients with rheumatoid arthritis and idiopathic membranous nephropathy. [ 23 ] [ 24 ] [ 25 ] The melanocortin system is one of the mammalian body's tools for regulating food intake in a push-pull fashion. [ 26 ] The only neurons known to release melanocortins are located in the arcuate nucleus of the hypothalamus, where there is one subpopulation called POMC neurons and another called AgRP neurons; [ 27 ] melanocortins are also produced by keratinocytes in response to UV exposure. When POMC neurons release α-MSH, appetite is decreased. On the other hand, when AgRP neurons release AgRP, appetite is stimulated. Leptin, the energy-surfeit hormone, and ghrelin, the hunger hormone, are upstream regulators of the melanocortin system in the brain. [ 27 ] These hormones also regulate the release of peptides other than the melanocortins. Disturbance of the leptin-melanocortin pathway can lead to early-onset obesity as well as various metabolic disorders and suppressed immune function. [ 28 ] This biochemistry article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Melanocortin
A melanomorph is a substance related to the pigment melanin. Melanomorphs originate from the aromatic amino acids tyrosine, tryptophan, and phenylalanine. They tend to absorb ultraviolet-B light, with absorption peaks around 280 nanometers (see also Beer's law). This biochemistry article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Melanomorph
A melanotroph (or melanotrope ) is a cell in the pituitary gland that produces α-melanocyte-stimulating hormone (α‐MSH) from its precursor pro-opiomelanocortin. Chronic stress can induce the secretion of α‐MSH in melanotrophs and lead to their subsequent degeneration. [ 1 ] This biochemistry article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Melanotroph
A melatonergic agent (or drug ) is a chemical which functions to directly modulate the melatonin system in the body or brain. Examples include melatonin receptor agonists and melatonin receptor antagonists . This drug article relating to the nervous system is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Melatonergic
In mathematics, the Melnikov method is a tool to identify the existence of chaos in a class of dynamical systems under periodic perturbation. The Melnikov method is used in many cases to predict the occurrence of chaotic orbits in non-autonomous smooth nonlinear systems under periodic perturbation. According to the method, it is possible to construct a function called the "Melnikov function" which can be used to predict either regular or chaotic behavior of a dynamical system. Thus, the Melnikov function will be used to determine a measure of distance between stable and unstable manifolds in the Poincaré map. Moreover, when this measure is equal to zero, by the method, those manifolds crossed each other transversally and from that crossing the system will become chaotic. This method appeared in 1890 by H. Poincaré [ 1 ] and by V. Melnikov in 1963 [ 2 ] and could be called the "Poincaré-Melnikov Method". Moreover, it was described by several textbooks as Guckenheimer & Holmes, [ 3 ] Kuznetsov, [ 4 ] S. Wiggins, [ 5 ] Awrejcewicz & Holicke [ 6 ] and others. There are many applications for Melnikov distance as it can be used to predict chaotic vibrations. [ 7 ] In this method, critical amplitude is found by setting the distance between homoclinic orbits and stable manifolds equal to zero. Just like in Guckenheimer & Holmes where they were the first who based on the KAM theorem , determined a set of parameters of relatively weak perturbed Hamiltonian systems of two-degrees-of-freedom, at which homoclinic bifurcation occurred. Consider the following class of systems given by x ˙ = ∂ H ∂ y ( x , y ) + ϵ g 1 ( x , y , t , ϵ ) y ˙ = − ∂ H ∂ x ( x , y ) + ϵ g 2 ( x , y , t , ϵ ) , ( 1 ) {\displaystyle {{\begin{array}{lcl}{\dot {x}}&=&{\frac {\partial H}{\partial y}}(x,y)+\epsilon g_{1}(x,y,t,\epsilon )\\{\dot {y}}&=&-{\frac {\partial H}{\partial x}}(x,y)+\epsilon g_{2}(x,y,t,\epsilon ),\end{array}}{(1)}}} or in vector form q ˙ = J D H ( q ) + ϵ g ( q , t , ϵ ) ( 2 ) {\displaystyle {{\dot {q}}=JDH(q)+\epsilon g(q,t,\epsilon )~\ ~\ {(2)}}} where q = ( x , y ) {\displaystyle q=(x,y)} , D H = ( ∂ H ∂ x , ∂ H ∂ y ) {\displaystyle DH=\left({\frac {\partial H}{\partial x}},{\frac {\partial H}{\partial y}}\right)} , g = ( g 1 , g 2 ) {\displaystyle g=(g_{1},g_{2})} and J = ( 0 1 − 1 0 ) . {\displaystyle J=\left({\begin{array}{cc}0&1\\-1&0\\\end{array}}\right).} Assume that system (1) is smooth on the region of interest, ϵ {\displaystyle \epsilon } is a small perturbation parameter and g {\displaystyle g} is a periodic vector function in t {\displaystyle t} with the period T = 2 π ω {\displaystyle T={\dfrac {2\pi }{\omega }}} . If ϵ = 0 {\displaystyle \epsilon =0} , then there is an unperturbed system q ˙ = J D H ( q ) . ( 3 ) {\displaystyle {{\dot {q}}=JDH(q).~\ ~\ {(3)}}} From this system (3), looking at the phase space in Figure 1, consider the following assumptions To obtain the Melnikov function, some tricks have to be used, for example, to get rid of the time dependence and to gain geometrical advantages new coordinate has to be used ϕ {\displaystyle \phi } that is cyclic type given by ϕ = ω t + ϕ 0 . {\displaystyle \phi =\omega t+\phi _{0}.} Then, the system (1) could be rewritten in vector form as follows q ˙ = J D H ( q ) + ϵ g ( q , ϕ , ϵ ) ϕ ˙ = ω . 
( 4 ) {\textstyle {{\begin{array}{lcl}{\dot {q}}&=&JDH(q)+\epsilon g(q,\phi ,\epsilon )\\{\dot {\phi }}&=&\omega .\end{array}}~\ ~\ {(4)}}} Hence, looking at Figure 2, the three-dimensional phase space R 2 × S 1 , {\displaystyle \mathbb {R} ^{2}\times \mathbb {S} ^{1},} where q ∈ R 2 {\displaystyle q\in \mathbb {R} ^{2}} and ϕ ∈ S 1 {\displaystyle \phi \in \mathbb {S} ^{1}} has the hyperbolic fixed point p 0 {\displaystyle p_{0}} of the unperturbed system becoming a periodic orbit γ ( t ) = ( p 0 , ϕ ( t ) ) . {\displaystyle \gamma (t)=(p_{0},\phi (t)).} The two-dimensional stable and unstable manifolds of γ ( t ) {\displaystyle \gamma (t)} by W s ( γ ( t ) ) {\displaystyle W^{s}(\gamma (t))} and W u ( γ ( t ) ) {\displaystyle W^{u}(\gamma (t))} are denoted, respectively. By the assumption A 1 , {\displaystyle A1,} W s ( γ ( t ) ) {\displaystyle W^{s}(\gamma (t))} and W u ( γ ( t ) ) {\displaystyle W^{u}(\gamma (t))} coincide along a two-dimensional homoclinic manifold. This is denoted by Γ γ = { ( q , ϕ ) ∈ R 2 × S 1 | q = q 0 ( − t 0 ) , t 0 ∈ R ; ϕ = ϕ 0 ∈ ( 0 , 2 π ] } , {\displaystyle \Gamma _{\gamma }=\{(q,\phi )\in \mathbb {R} ^{2}\times \mathbb {S} ^{1}|q=q_{0}(-t_{0}),t_{0}\in \mathbb {R} ;\phi =\phi _{0}\in (0,2\pi ]\},} where t 0 {\displaystyle t_{0}} is the time of flight from a point q 0 ( − t 0 ) {\displaystyle q_{0}(-t_{0})} to the point q 0 ( 0 ) {\displaystyle q_{0}(0)} on the homoclinic connection . In the Figure 3, for any point p ≡ ( q 0 ( − t 0 ) , ϕ 0 ) , {\displaystyle p\equiv (q_{0}(-t_{0}),\phi _{0}),} a vector is constructed π p {\displaystyle \pi _{p}} , normal to the Γ γ {\displaystyle \Gamma _{\gamma }} as follows π p ≡ ( D H ( q 0 ( − t 0 ) , 0 ) . {\displaystyle \pi _{p}\equiv (DH(q_{0}(-t_{0}),0).} Thus varying t 0 {\displaystyle t_{0}} and ϕ 0 {\displaystyle \phi _{0}} serve to move π p {\displaystyle \pi _{p}} to every point on Γ γ . {\displaystyle \Gamma _{\gamma }.} If ϵ ≠ 0 {\displaystyle \epsilon \neq 0} is sufficiently small, which is the system (2), then γ ( t ) {\displaystyle \gamma (t)} becomes γ ϵ ( t ) , {\displaystyle \gamma _{\epsilon }(t),} Γ γ {\displaystyle \Gamma _{\gamma }} becomes Γ γ ϵ , {\displaystyle \Gamma _{\gamma _{\epsilon }},} and the stable and unstable manifolds become different from each other. Furthermore, for this sufficiently small ϵ {\displaystyle \epsilon } in a neighborhood N ( ϵ 0 ) , {\displaystyle {\mathcal {N}}(\epsilon _{0}),} the periodic orbit γ ( t ) {\displaystyle \gamma (t)} of the unperturbed vector field (3) persists as a periodic orbit, γ ϵ ( t ) = γ ( t ) + O ( ϵ ) . {\displaystyle \gamma _{\epsilon }(t)=\gamma (t)+{\mathcal {O}}(\epsilon ).} Moreover, W l o c s ( γ ϵ ( t ) ) {\displaystyle W_{loc}^{s}(\gamma _{\epsilon }(t))} and W l o c u ( γ ϵ ( t ) ) {\displaystyle W_{loc}^{u}(\gamma _{\epsilon }(t))} are C r {\displaystyle C^{r}} ϵ {\displaystyle \epsilon } -close to W l o c s ( γ ( t ) ) {\displaystyle W_{loc}^{s}(\gamma (t))} and W l o c u ( γ ( t ) ) {\displaystyle W_{loc}^{u}(\gamma (t))} respectively. Consider the following cross-section of the phase space Σ ϕ 0 = { ( q , ϕ ) ∈ R 2 | ϕ = ϕ 0 } , {\displaystyle \Sigma ^{\phi _{0}}=\{(q,\phi )\in \mathbb {R} ^{2}|\phi =\phi _{0}\},} then ( q ( t ) , ϕ ( t ) ) {\displaystyle (q(t),\phi (t))} and ( q ϵ ( t ) , ϕ ( t ) ) {\displaystyle (q_{\epsilon }(t),\phi (t))} are the trajectories of the unperturbed and perturbed vector fields, respectively. 
The projections of these trajectories onto $\Sigma^{\phi_0}$ are given by $(q(t),\phi_0)$ and $(q_\epsilon(t),\phi_0)$. Looking at Figure 4, the splitting of $W^s(\gamma_\epsilon(t))$ and $W^u(\gamma_\epsilon(t))$ is defined by considering the points $p^s_\epsilon$ and $p^u_\epsilon$ where these manifolds intersect $\pi_p$ transversely. It is therefore natural to define the distance between $W^s(\gamma_\epsilon(t))$ and $W^u(\gamma_\epsilon(t))$ at the point $p$ as $d(p,\epsilon)\equiv|p^s_\epsilon-p^u_\epsilon|$, which can be rewritten as

$$d(p,\epsilon)=\frac{(p^s_\epsilon-p^u_\epsilon)\cdot(DH(q_0(-t_0)),0)}{\lVert (DH(q_0(-t_0)),0)\rVert}.$$

Since $p^s_\epsilon$ and $p^u_\epsilon$ lie on $\pi_p$, $p^s_\epsilon=(q^s_\epsilon,\phi_0)$ and $p^u_\epsilon=(q^u_\epsilon,\phi_0)$, and $d(p,\epsilon)$ can be rewritten as

$$d(t_0,\phi_0,\epsilon)=\frac{DH(q_0(-t_0))\cdot(q^u_\epsilon-q^s_\epsilon)}{\lVert DH(q_0(-t_0))\rVert}. \qquad (5)$$

The manifolds $W^s(\gamma_\epsilon(t))$ and $W^u(\gamma_\epsilon(t))$ may intersect $\pi_p$ in more than one point, as shown in Figure 5. For this to be possible, after every intersection, for $\epsilon$ sufficiently small, the trajectory must pass through $\mathcal{N}(\epsilon_0)$ again. Expanding eq. (5) in a Taylor series about $\epsilon=0$ gives

$$d(t_0,\phi_0,\epsilon)=d(t_0,\phi_0,0)+\epsilon\frac{\partial d}{\partial\epsilon}(t_0,\phi_0,0)+\mathcal{O}(\epsilon^2),$$

where $d(t_0,\phi_0,0)=0$ and
$$\frac{\partial d}{\partial\epsilon}(t_0,\phi_0,0)=\frac{DH(q_0(-t_0))\cdot\left(\left.\frac{\partial q^u_\epsilon}{\partial\epsilon}\right|_{\epsilon=0}-\left.\frac{\partial q^s_\epsilon}{\partial\epsilon}\right|_{\epsilon=0}\right)}{\lVert DH(q_0(-t_0))\rVert}.$$

When $d(t_0,\phi_0,\epsilon)=0$, the Melnikov function is defined to be

$$M(t_0,\phi_0)\equiv DH(q_0(-t_0))\cdot\left(\left.\frac{\partial q^u_\epsilon}{\partial\epsilon}\right|_{\epsilon=0}-\left.\frac{\partial q^s_\epsilon}{\partial\epsilon}\right|_{\epsilon=0}\right), \qquad (6)$$

since $DH(q_0(-t_0))=\left(\frac{\partial H}{\partial x}(q_0(-t_0)),\frac{\partial H}{\partial y}(q_0(-t_0))\right)$ is not zero on $q_0(-t_0)$ for finite $t_0$, so that

$$M(t_0,\phi_0)=0 \;\Rightarrow\; \frac{\partial d}{\partial\epsilon}(t_0,\phi_0,0)=0.$$

Using eq. (6) directly would require knowing the solution of the perturbed problem. To avoid this, Melnikov defined a time-dependent Melnikov function

$$M(t;t_0,\phi_0)\equiv DH(q_0(t-t_0))\cdot\left(\left.\frac{\partial q^u_\epsilon(t)}{\partial\epsilon}\right|_{\epsilon=0}-\left.\frac{\partial q^s_\epsilon(t)}{\partial\epsilon}\right|_{\epsilon=0}\right), \qquad (7)$$

where $q^u_\epsilon(t)$ and $q^s_\epsilon(t)$ are the trajectories starting at $q^u_\epsilon$ and $q^s_\epsilon$, respectively. Taking the time derivative of this function allows for some simplifications. The time derivative of one of the terms in eq. (7) is
$$\frac{d}{dt}\left(DH(q_0(t-t_0))\cdot\left.\frac{\partial q^{u,s}_\epsilon(t)}{\partial\epsilon}\right|_{\epsilon=0}\right)=\left(D^2H(q_0(t-t_0))\,\dot{q}_0(t-t_0)\right)\cdot\left.\frac{\partial q^{u,s}_\epsilon(t)}{\partial\epsilon}\right|_{\epsilon=0}+DH(q_0(t-t_0))\cdot\frac{d}{dt}\left.\frac{\partial q^{u,s}_\epsilon(t)}{\partial\epsilon}\right|_{\epsilon=0}. \qquad (8)$$

From the equation of motion, $\dot{q}^{u,s}_\epsilon(t)=JDH(q^{u,s}_\epsilon(t))+\epsilon g(q^{u,s}_\epsilon(t),t,\epsilon)$, so

$$\frac{d}{dt}\left.\frac{\partial q^{u,s}_\epsilon}{\partial\epsilon}\right|_{\epsilon=0}=JD^2H(q_0(t-t_0))\left.\frac{\partial q^{u,s}_\epsilon}{\partial\epsilon}\right|_{\epsilon=0}+g(q_0(t-t_0),t,0). \qquad (9)$$

Plugging equations (2) and (9) back into (8) gives

$$\begin{aligned}\frac{d}{dt}\left(DH(q_0(t-t_0))\cdot\left.\frac{\partial q^{u,s}_\epsilon}{\partial\epsilon}\right|_{\epsilon=0}\right)=\;& D^2H(q_0(t-t_0))\,JDH(q_0(t-t_0))\cdot\left.\frac{\partial q^{u,s}_\epsilon(t)}{\partial\epsilon}\right|_{\epsilon=0}\\ &+DH(q_0(t-t_0))\cdot JD^2H(q_0(t-t_0))\left.\frac{\partial q^{u,s}_\epsilon(t)}{\partial\epsilon}\right|_{\epsilon=0}\\ &+DH(q_0(t-t_0))\cdot g(q_0(t-t_0),\phi(t),0). \end{aligned} \qquad (10)$$

The first two terms on the right-hand side can be verified to cancel by explicitly evaluating the matrix multiplications and dot products; here $g(q,t,\epsilon)$ has been reparameterized as $g(q,\phi,\epsilon)$. Integrating the remaining term gives expressions for the original terms that do not depend on the solution of the perturbed problem:

$$\begin{aligned} DH(q_0(\tau-t_0))\cdot\left.\frac{\partial q^u_\epsilon(\tau)}{\partial\epsilon}\right|_{\epsilon=0}&=\int_{-\infty}^{\tau}DH(q_0(t-t_0))\cdot g(q_0(t-t_0),\omega t+\phi_0,0)\,dt\\ DH(q_0(\tau-t_0))\cdot\left.\frac{\partial q^s_\epsilon(\tau)}{\partial\epsilon}\right|_{\epsilon=0}&=\int_{\infty}^{\tau}DH(q_0(t-t_0))\cdot g(q_0(t-t_0),\omega t+\phi_0,0)\,dt \end{aligned} \qquad (11)$$

The integration bound at infinity has been chosen to be the time where $q^{u,s}_\epsilon(t)=\gamma(t)$, so that $\frac{\partial q^{u,s}_\epsilon(t)}{\partial\epsilon}=0$ there and the boundary terms are zero.
Combining these terms and setting $\tau=0$, the final form of the Melnikov distance is obtained:

$$M(t_0,\phi_0)=\int_{-\infty}^{+\infty}DH(q_0(t))\cdot g(q_0(t),\omega t+\omega t_0+\phi_0,0)\,dt. \qquad (12)$$

This equation leads to the following theorem.

Theorem 1: Suppose there is a point $(t_0,\phi_0)=(\bar{t}_0,\bar{\phi}_0)$ such that $M(\bar{t}_0,\bar{\phi}_0)=0$ and $\frac{\partial M}{\partial t_0}(\bar{t}_0,\bar{\phi}_0)\neq 0$ (a simple zero). Then, for $\epsilon$ sufficiently small, $W^s(\gamma_\epsilon(t))$ and $W^u(\gamma_\epsilon(t))$ intersect transversely at $(q_0(-t_0)+\mathcal{O}(\epsilon),\phi_0)$. Moreover, if $M(t_0,\phi_0)\neq 0$ for all $(t_0,\phi_0)\in\mathbb{R}^1\times\mathbb{S}^1$, then $W^s(\gamma_\epsilon(t))\cap W^u(\gamma_\epsilon(t))=\emptyset$.

By Theorem 1, a simple zero of the Melnikov function implies transversal intersections of the stable manifold $W^s(\gamma_\epsilon(t))$ and the unstable manifold $W^u(\gamma_\epsilon(t))$, which results in a homoclinic tangle. Such a tangle is a very complicated structure in which the stable and unstable manifolds intersect an infinite number of times. Consider a small element of phase volume departing from the neighborhood of a point near the transversal intersection, along the unstable manifold of a fixed point. Clearly, as this volume element approaches the hyperbolic fixed point it is distorted considerably, owing to the infinitely repeated intersections and the stretching (and folding) associated with the relevant invariant sets. It is therefore reasonable to expect the volume element to undergo an infinite sequence of stretch-and-fold transformations, as in the horseshoe map. This intuitive expectation is rigorously confirmed by the following theorem.

Theorem 2: Suppose that a diffeomorphism $P: M\to M$, where $M$ is an $n$-dimensional manifold, has a hyperbolic fixed point $\bar{x}$ with stable and unstable manifolds $W^s(\bar{x})$ and $W^u(\bar{x})$ that intersect transversely at some point $x_0\neq\bar{x}$, $W^s(\bar{x})\pitchfork W^u(\bar{x})$, where $\dim W^s+\dim W^u=n$. Then $M$ contains a hyperbolic set $\Lambda$, invariant under $P$, on which $P$ is topologically conjugate to a shift on finitely many symbols.

Thus, by Theorem 2, the dynamics near a transverse homoclinic point are topologically similar to the horseshoe map, with its sensitivity to initial conditions; hence, when the Melnikov distance (12) has a simple zero, the system is chaotic.
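As a concrete numerical illustration (not part of the original article), consider the standard textbook example of the damped, periodically forced Duffing oscillator, $\dot{x}=y$, $\dot{y}=x-x^3+\epsilon(\gamma\cos\omega t-\delta y)$, with $H=\frac{y^2}{2}-\frac{x^2}{2}+\frac{x^4}{4}$ and homoclinic orbit $q_0(t)=(\sqrt{2}\,\operatorname{sech} t,\,-\sqrt{2}\,\operatorname{sech} t\tanh t)$. Here $DH\cdot g=y_0(t)\left[\gamma\cos(\omega(t+t_0))-\delta y_0(t)\right]$, so eq. (12) can be evaluated by direct quadrature. The sketch below, in Python with illustrative parameter values, does exactly that and counts sign changes of $M(t_0)$ over one forcing period:

import numpy as np

gamma, delta, omega = 0.30, 0.25, 1.0   # assumed forcing, damping, frequency

def melnikov(t0, t_max=20.0, n=4001):
    # M(t0) = integral of y0(t) * [gamma*cos(omega*(t + t0)) - delta*y0(t)] dt,
    # evaluated by the trapezoidal rule along the analytic homoclinic orbit.
    t = np.linspace(-t_max, t_max, n)             # sech decays fast; +-20 is ample
    y0 = -np.sqrt(2.0) * np.tanh(t) / np.cosh(t)  # y-component of q0(t)
    f = y0 * (gamma * np.cos(omega * (t + t0)) - delta * y0)
    dt = t[1] - t[0]
    return float(np.sum((f[:-1] + f[1:]) * 0.5 * dt))

# Scan one forcing period for sign changes: a simple zero of M means a
# transverse homoclinic intersection (Theorem 1), hence predicted chaos.
t0s = np.linspace(0.0, 2.0 * np.pi / omega, 200)
M = np.array([melnikov(t0) for t0 in t0s])
sign_changes = int(np.sum(np.sign(M[:-1]) != np.sign(M[1:])))
print("sign changes of M(t0) over one period:", sign_changes)

Lowering gamma until M(t0) stops changing sign gives a numerical estimate of the critical forcing amplitude discussed above, below which no transverse homoclinic intersection, and hence no Melnikov-predicted chaos, occurs.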
https://en.wikipedia.org/wiki/Melnikov_distance
Melomics (derived from "genomics of melodies") is a computational system for the automatic composition of music (with no human intervention), based on bioinspired algorithms. [1] Melomics applies an evolutionary approach to music composition: music pieces are obtained by simulated evolution, with themes competing to better satisfy a fitness function, generally grounded in formal and aesthetic criteria. The Melomics system encodes each theme in a genome, and the entire population of music pieces undergoes evo-devo dynamics (i.e., each piece is read out in a way that mimics a complex embryological development process). [2] [3] [4] [5] The system is fully autonomous: once programmed, it composes music without human intervention. This technology has been transferred to industry through an academic spin-off, Melomics Media, which built and programmed a new computer cluster that has created a huge collection of popular music. The results of this evolutionary computation are stored on Melomics' site, [6] which now constitutes a vast repository of music content. A differentiating feature is that pieces are available in three types of format: playable (MP3), editable (MIDI and MusicXML) and readable (score in PDF). The Melomics computational system includes two computer clusters, Melomics109 and Iamus, dedicated to popular and artistic music, respectively. [2] [7] Melomics109 is a cluster programmed and integrated into the Melomics system. [8] Its first product is a vast repository of popular music compositions (roughly 1 billion), covering all essential styles. In addition to MP3, all songs are available in editable formats (MIDI), [9] and the music is licensed under CC0, meaning that it is freely downloadable. [8] [10] 0music is the first album published by Melomics109; it is available in MP3 and MIDI formats under a CC0 license. It has been argued that, by making such an amount of editable, original and royalty-free music accessible to people, Melomics may accelerate the commoditization of music and change the way music is composed and consumed in the future. [1] [9] [10] In the first stages of the development of the Melomics system, Iamus composed Opus one (on October 15, 2010), arguably the first fragment of professional contemporary classical music ever composed by a computer in its own style, rather than attempting to emulate the style of existing composers. The first full composition (also in contemporary classical style), Hello World!, premiered exactly one year after the creation of Opus one, on October 15, 2011. Four later works premiered on July 2, 2012, and were broadcast live [11] from the School of Computer Science at Universidad de Málaga [12] as part of the events of the Alan Turing year. The compositions performed at this event had previously been recorded at the Real Conservatorio María Cristina, Málaga (Spain), on March 2-3, 2012, and at Angel Studios, London (UK), on April 14, 2012, by the London Symphony Orchestra, creating Iamus' eponymous first album, which New Scientist reported as the "first complete album to be composed solely by a computer and recorded by human musicians." [13] Commenting on the quality and authenticity of the music, Stephen Smoliar, classical music critic at The San Francisco Examiner, wrote: "What is primary is the act of making the music itself engaged by the performers and how the listener responds to what those performers do...
what is most interesting about the documents generated by Iamus is their capacity to challenge the creative talents of performing musicians". [14] Melomics' empathic music has been tested in a number of therapeutic clinical trials, [15] [16] [17] [18] showing positive effects in reducing fear of heights, acute stress and pain perception. One of the studies found a reduction of almost two-thirds in pain perception in children undergoing a standard skin prick test during allergy testing, as compared with the standard procedure. [18] Some of these experiments made use of free mobile apps that adapt music to daily activity, [19] such as jogging [20] or commuting, [21] as well as to therapeutic uses such as lessening stress before an exam, [22] reducing chronic pain [23] and insomnia, [24] and helping children go to sleep. [25] Ongoing efforts to have Melomics adapt music in real time to changes in the physiological state of the listener, and to apply the technology to music branding, have also been reported. [10] [26]
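Melomics' actual genome encoding and evo-devo machinery are not disclosed in this article; purely as an illustration of the evolutionary scheme described above (a population of encoded themes competing against a fitness function), here is a deliberately simplified Python sketch in which the scale, the mutation scheme and the toy "aesthetic" fitness criterion are all invented:

import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale, MIDI note numbers

def random_theme(length=16):
    return [random.choice(SCALE) for _ in range(length)]

def fitness(theme):
    # Toy "formal/aesthetic" criterion: prefer small melodic steps
    # and reward ending on the tonic (middle C).
    steps = sum(abs(a - b) for a, b in zip(theme, theme[1:]))
    cadence = 5 if theme[-1] == 60 else 0
    return cadence - 0.1 * steps

def mutate(theme, rate=0.1):
    return [random.choice(SCALE) if random.random() < rate else n for n in theme]

population = [random_theme() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # selection of the fittest themes
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best theme:", max(population, key=fitness))

Each loop iteration mirrors one generation of simulated evolution: selection keeps the fittest themes, and mutated copies refill the population.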
https://en.wikipedia.org/wiki/Melomics
Melomics109 is a computer cluster located at Universidad de Málaga used to create digital music. It is part of the Spanish Supercomputing Network, and has been designed to increase the computational power provided by Iamus. [1] [2] Powered by Melomics' technology, the composing module of Melomics109 is able to create and synthesize music in a variety of musical styles. This music has been made freely accessible to everyone. [3] [4] The cluster consists of three cabinets with customized front panels. 0music is the first album composed and interpreted by Melomics109. It was launched in July 2014, [5] [6] and was released in audio (MP3) and editable format (MIDI), under CC0 (public domain) licensing. [7]
https://en.wikipedia.org/wiki/Melomics109
In chemistry, melon is a compound of carbon, nitrogen, and hydrogen of still somewhat uncertain composition, consisting mostly of heptazine units linked and closed by amine groups and bridges (–NH–, =NH, –NH2, etc.). [2] It is a pale yellow solid, insoluble in most solvents. [1] A careful 2001 study indicates the formula C60N91H33, consisting of ten imino-heptazine units connected into a linear chain by amino bridges; that is, H(–C6N8H2–NH–)10(NH2). [1] However, other researchers are still proposing different structures. Melon is the oldest known compound with the heptazine C6N7 core, having been described in the early 19th century. It was little studied until recently, when it was recognized as a notable photocatalyst and as a possible precursor to carbon nitride. [2] In 1834 Liebig described the compounds that he named melamine, melam, and melon. [3] [4] The compound received little attention for a long time, owing to its insolubility. In 1937 Linus Pauling showed by X-ray crystallography that the structure of melon and related compounds contained fused triazine rings. [4] In 1939, C. E. Redemann and others proposed a structure consisting of 2-amino-heptazine units connected by amine bridges through carbons 5 and 8. [1] The structure was revised in 2001 by T. Komatsu, who proposed a tautomeric structure. [1] [4] The compound can be extracted from the solid residue of the thermal decomposition of ammonium thiocyanate NH4SCN at 400 °C. [1] [5] (The thermal decomposition of solid melem, on the other hand, yields a graphite-like C-N material. [6]) According to Komatsu, a characterized form of melon consists of oligomers that can be described as condensations of ten units of a melem tautomer with loss of ammonia NH3. In this structure, 2-imino-heptazine units are connected by amino bridges, from carbon 8 of one unit to nitrogen 4 of the next. X-ray diffraction data and other evidence indicate that the oligomer is planar, with the triangular heptazine cores in alternating orientations. [1] The crystal structure of melon is orthorhombic, with estimated lattice constants a = 739.6 pm, b = 2092.4 pm and c = 1295.4 pm. [1] Heated to 700 °C, melon converts to a polymer of high molecular weight, consisting of longer chains with the same motif. [1] Melon can be converted to 2,5,8-trichloroheptazine, a useful reagent for the synthesis of heptazine derivatives. [5] In 2009, Xinchen Wang and others observed that melon acts as a catalyst for the splitting of water into hydrogen and oxygen, or for converting CO2 back into fuel, using energy from sunlight. It was the first metal-free photocatalyst, and it was seen to enjoy a number of advantages over previous compounds, including low material cost, simple synthesis, negligible toxicity, and exceptional chemical and thermal stability. The downside is its modest efficiency, which however seems amenable to improvement by doping or nanostructuring. [7] [2] Another wave of interest in melon occurred in the 1990s, when theoretical computations suggested that β-C3N4, a hypothetical carbon nitride compound structurally analogous to β-Si3N4, might be harder than diamond. Melon seemed to be a good precursor for another form of the material, "graphitic" carbon nitride or g-C3N4. [2]
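As a quick consistency check (an illustrative calculation, not from the source), the oligomer formula above can be tallied atom by atom to confirm that it reproduces Komatsu's composition C60N91H33:

from collections import Counter

core   = Counter({"C": 6, "N": 8, "H": 2})   # one C6N8H2 imino-heptazine unit
bridge = Counter({"N": 1, "H": 1})           # one -NH- amino bridge

total = Counter({"H": 1})                    # terminal H
for _ in range(10):                          # ten repeat units, each with a bridge
    total += core + bridge
total += Counter({"N": 1, "H": 2})           # terminal NH2

print(dict(total))                           # {'C': 60, 'N': 91, 'H': 33} (key order may vary)
assert total == Counter({"C": 60, "N": 91, "H": 33})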
https://en.wikipedia.org/wiki/Melon_(chemistry)
Melphalan, sold under the brand name Alkeran among others, is a chemotherapy medication used to treat multiple myeloma; malignant lymphoma; lymphoblastic and myeloblastic leukemia; childhood neuroblastoma; ovarian cancer; mammary adenocarcinoma; and uveal melanoma. [2] [3] [5] [6] It is taken by mouth or by injection into a vein. [6] Common side effects include nausea and bone marrow suppression. [6] Other severe side effects may include anaphylaxis and the development of other cancers. [6] Use during pregnancy may result in harm to the fetus. [7] Melphalan belongs to the class of nitrogen mustard alkylating agents. [6] It works by interfering with the creation of DNA and RNA. [6] Melphalan was approved for medical use in the United States in 1964. [6] It is on the World Health Organization's List of Essential Medicines. [8] It is available as a generic medication. [9] In the European Union, melphalan is indicated for the treatment of multiple myeloma; malignant lymphoma (Hodgkin and non-Hodgkin lymphoma); acute lymphoblastic and myeloblastic leukemia; childhood neuroblastoma; ovarian cancer; and mammary adenocarcinoma. [5] In the United States, melphalan is used as a high-dose conditioning treatment prior to hematopoietic progenitor (stem) cell transplantation in people with multiple myeloma. [3] [10] In the European Union, it is indicated, in combination with other cytotoxic medicinal products, as reduced-intensity conditioning treatment prior to allogeneic haematopoietic stem cell transplantation in malignant haematological diseases in adults. [5] In August 2023, the US Food and Drug Administration approved melphalan (Hepzato) as a liver-directed treatment for adults with uveal melanoma with unresectable hepatic metastases affecting less than 50% of the liver and no extrahepatic disease, or extrahepatic disease limited to the bone, lymph nodes, subcutaneous tissues, or lung that is amenable to resection or radiation. [11] [12] Common side effects include: [6] Less common side effects include: Melphalan chemically alters the DNA nucleotide guanine through alkylation and causes linkages between strands of DNA. This chemical alteration inhibits DNA synthesis and RNA synthesis, functions necessary for cells to survive, and causes cytotoxicity in both dividing and non-dividing tumor cells. [13] In the reported synthesis, 4-nitro-L-phenylalanine (1) was converted to its phthalimide by heating with phthalic anhydride, and this was converted to its ethyl ester (2). Catalytic hydrogenation produced the corresponding aniline; heating in acid with oxirane, followed by treatment with phosphorus oxychloride, provided the bis-chloride, and removal of the protecting groups by heating in hydrochloric acid gave melphalan (3). On 17 September 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for melphalan; [17] the applicant for this medicinal product is Adienne S.r.l. S.U. [17] Melphalan was approved for medical use in the European Union in November 2020. [5]
https://en.wikipedia.org/wiki/Melphalan
MelsecNet is a protocol developed and supported by Mitsubishi Electric for data delivery. [1] MelsecNet supports 239 networks. [2] The MelsecNet protocol has two variants: MELSECNET/H and its predecessor MELSECNET/10, which use high speed and redundant functionality to give deterministic delivery of large data volumes. Both variants can use either a coaxial bus or an optical loop for transmission. The coaxial bus type uses the token bus method with an overall distance of 500 metres (550 yd), while the optical loop type uses the token ring method and can support distances up to 30 kilometres (19 miles). MELSECNET/H supports a maximum of 19,200 bytes/frame and a maximum communication speed of 25 Mbit/s; MELSECNET/10 supports 960 bytes/frame and a baud rate of 10 Mbit/s. Mitsubishi provides a manual for both variants, MELSECNET/H and MELSECNET/10. [3]
https://en.wikipedia.org/wiki/MelsecNet
Melt blowing is a conventional fabrication method for micro- and nanofibers in which a polymer melt is extruded through small nozzles surrounded by high-speed blowing gas. The randomly deposited fibers form a nonwoven sheet product applicable to filtration, sorbents, apparel and drug delivery systems. The substantial benefits of melt blowing are simplicity, high specific productivity and solvent-free operation. By choosing an appropriate combination of polymers with optimized rheological and surface properties, scientists have been able to produce melt-blown fibers with an average diameter as small as 36 nm. [1] During volcanic activity, a fibrous material called Pele's hair may be drawn by vigorous wind from molten basaltic magma. [2] The same phenomenon underlies the melt blowing of polymers. The first research on melt blowing was a naval attempt in the US to produce fine filtration materials for radiation measurements on drone aircraft in the 1950s. [3] Later, Exxon Corporation developed the first industrial process based on the melt blowing principle with high throughput levels. [4] China produces 40% of the nonwoven fabric in the world, with the majority produced in Hebei province (2018). [5] Polymers with thermoplastic behavior are suitable for melt blowing. The main polymer types commonly processed with melt blowing: [6] Melt blowing is a manufacturing process used to create nonwoven fabrics and materials. It is particularly known for its ability to produce fine fibers, which can be used in various applications. Here's an overview of how melt blowing works: [7] The main uses of melt-blown nonwovens and other innovative approaches are as follows. [8] Nonwoven melt-blown fabrics are porous; as a result, they can filter liquids and gases. Their applications include water treatment, masks, and air-conditioning filters. During the COVID-19 pandemic, the price of melt-blown fabric spiked from a few thousand USD per ton to approximately 100 thousand USD per ton. Nonwoven materials can retain liquids several times their own weight; thus, those made from polypropylene are ideal for collecting oil contamination. [9] [10] The high absorption of melt-blown fabrics is exploited in disposable diapers and feminine hygiene products. [11] Melt-blown fabrics have three qualities that help make them useful for clothing, especially in harsh environments: thermal insulation, relative moisture resistance and breathability. Melt blowing can produce drug-loaded fibers for controlled drug delivery. [12] The high drug throughput rate (extrusion feeding), solvent-free operation and increased surface area of the product make melt blowing a promising new formulation technique.
https://en.wikipedia.org/wiki/Melt_blowing
Melt electrospinning is a processing technique to produce fibrous structures from polymer melts for applications that include tissue engineering, textiles and filtration. In general, electrospinning can be performed using either polymer melts or polymer solutions. However, melt electrospinning is distinct in that the collection of the fiber can be very focused; combined with moving collectors, melt electrospinning writing is a way to perform 3D printing. Since volatile solvents are not used, there are benefits for applications where solvent toxicity and accumulation during manufacturing are a concern. The first description of melt electrospinning was by Charles Norton in a patent approved in 1936. After this first discovery, it wasn't until 1981 that melt electrospinning was described again, by Larrondo and Manley as part of a three-paper series. [1] A meeting abstract on melt electrospinning in a vacuum was published by Reneker and Rangkupan 20 years later, in 2001. [2] Since that publication there have been regular articles on melt electrospinning, including reviews on the subject. [3] In 2011, melt electrospinning combined with a translating collector was proposed as a new class of 3D printing. [4] The same physics of electrostatic fiber drawing apply to melt electrospinning; what differ are the physical properties of the polymer melt compared with a polymer solution. Polymer melts are normally more viscous than polymer solutions, and elongated electrified jets have been reported. [5] The molten electrified jet also requires cooling to solidify, while solution electrospinning relies on evaporation. While melt electrospinning typically results in micron-diameter fibers, the path of the electrified jet can be predictable. [6] A minimum temperature is needed to ensure a molten polymer all the way to the tip of the spinneret. Spinnerets have a relatively short length compared with those used in solution electrospinning. The most significant parameter for controlling the fiber diameter is the flow rate of the polymer to the spinneret: in general, the higher the flow rate, the larger the fiber diameter. While reported flow rates are low, all of the fluid electrospun is collected, unlike in solution electrospinning, where a great part of the solvent is evaporated. The molecular weight is important as to whether the polymer can be melt electrospun. For linear homogeneous polymers, a low molecular weight (below 30,000 g/mol) can result in broken and poor-quality fibers. [7] At high molecular weights (above 100,000 g/mol), the polymer can be very difficult to flow through the spinneret. Many reported melt-electrospun fibers use molecular weights between 40,000 and 80,000 g/mol [4] or are blends of low and high molecular weight polymers. [8] Modifying the voltage does not greatly affect the resulting fiber diameter; however, it has been reported that an optimum voltage is needed to make high-quality and consistent fibers. Voltages from as low as 0.7 kV up to 60 kV have been used to melt electrospin. [9] [10] Different melt electrospinning machines have been built, some mounted vertically and some horizontally. The approach to heating the polymer varies and includes electrical heaters, heated air and circulating heaters. [3] One approach to melt electrospinning is to push a solid polymer filament into a laser beam, where it melts and is electrospun.
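The processing heuristics above (the molecular-weight window and the voltage range) can be collected into a simple pre-run check. The sketch below encodes only the rough numbers stated in this article for linear homogeneous polymers; the function name and the warning text are illustrative, not a published standard:

# Rough pre-run sanity check for melt electrospinning parameters, encoding
# only the heuristic ranges mentioned in the text (illustrative, not a standard).
def check_parameters(mw_g_per_mol: float, voltage_kv: float) -> list[str]:
    warnings = []
    if mw_g_per_mol < 30_000:
        warnings.append("low MW: broken / poor-quality fibers likely")
    elif mw_g_per_mol > 100_000:
        warnings.append("high MW: melt may be very difficult to flow through the spinneret")
    elif not (40_000 <= mw_g_per_mol <= 80_000):
        warnings.append("outside the 40k-80k g/mol range most reports use")
    if not (0.7 <= voltage_kv <= 60):
        warnings.append("voltage outside the reported 0.7-60 kV operating range")
    return warnings

print(check_parameters(mw_g_per_mol=55_000, voltage_kv=12))   # -> []
print(check_parameters(mw_g_per_mol=25_000, voltage_kv=80))   # -> two warnings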
Polymers exhibiting a melting point or glass transition temperature (Tg) are required for melt electrospinning, excluding thermosets (such as bakelite) and biologically derived polymers (such as collagen). A range of polymers has been melt electrospun; the examples reported are among the most used polymers, and a more comprehensive list can be found elsewhere. [3] Potential applications of melt electrospinning mirror those of solution electrospinning. Not using solvents to process a polymer assists in tissue engineering applications, where solvents are often toxic. Additionally, some polymers such as polypropylene or polyethylene are not readily dissolved, so melt electrospinning is one approach to electrospinning them into fibrous material. Melt electrospinning is used to process biomedical materials for tissue engineering research; volatile solvents are often toxic, so avoiding solvents has benefits in this field. Melt electrospun fibers were used as part of a "bimodal tissue scaffold", where both micron-scale and nano-scale fibers were deposited simultaneously. [13] Scaffolds made via melt electrospinning can be fully penetrated by cells, which in turn produce extracellular matrix within the scaffold. [17] Melt electrospinning can also produce drug-loaded fibers for drug delivery. It is a promising new formulation technique in the field of pharmaceutical technology for preparing amorphous solid dispersions or solid solutions with enhanced or controlled drug dissolution, because it combines the advantages of melt extrusion (e.g. solvent-free, effective amorphization, continuous process) and solvent-based electrospinning (increased surface area). [18] [19] [20] The electrified molten jet created via melt electrospinning has a more predictable path, and polymer fibers can be deposited accurately onto the collector. When the collector is moved at sufficient speed (referred to as the critical translation speed), straight melt electrospun fibers can be deposited in a layer-upon-layer approach. This enables the fabrication of complex, well-ordered structures. [4] In this respect, melt electrospinning writing (MEW) can be considered a class of 3D printing. Melt electrospinning writing has been performed using either a translating flat surface [4] or a rotating cylinder/mandrel. [11] Most polymers that can be melt electrospun can also be written, assuming the parameters can be tuned so as to produce a stable jet. Piezoelectric polymers such as polyvinylidene difluoride (PVDF) have also been shown to be processable via MEW, opening up potential applications in 3D-printed sensors, soft robotics, and further applications in biofabrication. [21]
https://en.wikipedia.org/wiki/Melt_electrospinning
The Melt Flow Index (MFI) is a measure of the ease of flow of the melt of a thermoplastic polymer. It is defined as the mass of polymer, in grams, flowing in ten minutes through a capillary of a specific diameter and length, under a pressure applied via prescribed gravimetric weights, at prescribed temperatures. [1] [2] Polymer processors usually correlate the value of MFI with the polymer grade they have to choose for different processes, and most often this value is not accompanied by units, because it is taken for granted to be g/10 min. Similarly, the test load of an MFI measurement is normally expressed in kilograms rather than any other units. The method is described in the similar standards ASTM D1238 [3] and ISO 1133. [4] To reduce equipment costs, an open-source-hardware MFI tester has been developed and validated on several thermoplastics, including polylactic acid, poly(ethylene terephthalate) glycol, and high-density polyethylene/poly(ethylene terephthalate) blends. [5] Melt flow rate is a measure of the ability of the material's melt to flow under pressure, and an indirect measure of molecular weight, with a high melt flow rate corresponding to a low molecular weight. Melt flow rate is inversely proportional to the viscosity of the melt at the conditions of the test, though it should be borne in mind that the viscosity of any such material depends on the applied force. Ratios between two melt flow rate values for one material at different gravimetric weights are often used as a measure of the broadness of the molecular weight distribution. Melt flow rate is very commonly used for polyolefins, polyethylene being measured at 190 °C and polypropylene at 230 °C. The plastics engineer should choose a material with a melt index high enough that the molten polymer can easily be formed into the intended article, but low enough that the mechanical strength of the final article is sufficient for its use. ISO standard 1133-1 governs the procedure for measurement of the melt flow rate. [6] The procedure for determining MFI is as follows: Synonyms of Melt Flow Index are Melt Flow Rate and Melt Index; more commonly used are their abbreviations MFI, MFR and MI. Confusingly, MFR may also indicate "melt flow ratio", the ratio between two melt flow rates at different gravimetric weights. More accurately, this should be reported as FRR (flow rate ratio), or simply flow ratio. FRR is commonly used as an indication of the way in which rheological behavior is influenced by the molecular weight distribution of the material. The terminology has shifted as follows:

formerly MFI (Melt Flow Index) → currently MFR (Melt mass-Flow Rate)
formerly MVI (Melt Volume Index) → currently MVR (Melt Volume-flow Rate)
formerly MFR (Melt Flow Ratio) → currently FRR (Flow Rate Ratio)

The flow parameter that is readily accessible to most processors is the MFI, and it is often used to judge how a polymer will process. However, MFI takes no account of shear, shear rate or shear history, and as such is not a good measure of the processing window of a polymer: it is a single-point viscosity measurement at a relatively low shear rate and temperature. It was once said that MFI gives polymer processors a 'dot' when what they actually need is a 'plot'; this is no longer entirely true, however, because an approach has been developed for estimating the rheogram merely from knowledge of the MFI.
[7] The MFI device is not an extruder in the conventional polymer-processing sense, in that there is no screw to compress, heat and shear the polymer. MFI additionally takes no account of long-chain branching [8] nor of the differences between shear and elongational rheology. [9] Therefore, two polymers with the same MFI will not necessarily behave the same under any given processing conditions. [10] The relationship between MFI and temperature can be used to obtain activation energies for polymers. [11] Activation energies derived from MFI values have the advantage of simplicity and easy availability. The concept of obtaining activation energy from MFI can be extended to copolymers as well, in which an anomalous temperature dependence of melt viscosity leads to two distinct values of activation energy for each copolymer. [12] For a detailed numerical simulation of the melt flow index, see [13] or [14]. In summary: MFI (currently MFR) = weight in grams of melted sample extruded in 10 minutes.
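Since MFR is simply the extruded mass scaled to a 10-minute interval, and FRR the ratio of two such measurements at different loads, the arithmetic is elementary. A minimal sketch follows (the function name and sample masses are invented for illustration; 190 °C / 2.16 kg is the polyethylene condition named above):

def mfr(mass_g, cut_time_s):
    # Mass of extrudate collected over cut_time_s seconds, scaled to g/10 min.
    return mass_g * 600.0 / cut_time_s

low_load  = mfr(0.85, 60.0)    # e.g. 2.16 kg load: 0.85 g in 60 s -> 8.5 g/10 min
high_load = mfr(9.35, 60.0)    # e.g. 21.6 kg load on the same melt
frr = high_load / low_load     # flow rate ratio, an MW-distribution indicator
print(f"MFR = {low_load:.1f} g/10 min, FRR = {frr:.1f}")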
https://en.wikipedia.org/wiki/Melt_flow_index
A melt inclusion is a small parcel or "blob" of melt that is entrapped by crystals growing [1] in magma that eventually forms igneous rocks. In many respects it is analogous to a fluid inclusion within magmatic hydrothermal systems. [2] [3] Melt inclusions tend to be microscopic in size and can be analyzed for volatile contents that are used to interpret the trapping pressures of the melt at depth. Melt inclusions are generally small: most are less than 80 micrometres across (a micrometre is one thousandth of a millimeter, or about 0.00004 inches). [4] They may contain a number of different constituents, including glass (which represents melt that has been quenched by rapid cooling), small crystals and a separate vapour-rich bubble. [5] They occur in the crystals found in igneous rocks, such as quartz, feldspar, olivine, pyroxene, nepheline, magnetite, perovskite and apatite. [6] [7] [8] Melt inclusions can be found in both volcanic and plutonic rocks. In addition, melt inclusions can contain immiscible melt phases, and their study is an exceptional way to find direct evidence for the presence of two or more melts at entrapment. [5] Although they are small, melt inclusions can provide an abundance of useful information. Using microscopic observations and a range of chemical microanalysis techniques, geochemists and igneous petrologists can obtain a range of unique information from melt inclusions. Various techniques are used to analyze melt inclusion H2O and CO2 contents and major, minor and trace elements, including double-sided FTIR micro-transmittance, [9] single-sided FTIR micro-reflectance, [10] Raman spectroscopy, [11] microthermometry, [12] secondary ion mass spectrometry (SIMS), laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS), scanning electron microscopy (SEM) and electron microprobe analysis (EMPA). [13] If a vapor bubble is present within the melt inclusion, its analysis must be taken into consideration when reconstructing the total volatile budget of the melt inclusion. [14] Microthermometry is the process of reheating a melt inclusion to its original melt temperature and then rapidly quenching it to form a homogeneous glass phase free of the daughter minerals or vapor bubbles that may originally have been contained within the melt inclusion. [15] Stage heating is the process of heating a melt inclusion on a microscope-mounted stage while flowing either helium gas (Vernadsky stage) [16] [17] or argon gas (Linkam TS1400XY) [18] over the stage, and then rapidly quenching the melt inclusion after it has reached its original melt temperature to form a homogeneous glass phase. Use of a heating stage allows observation of the changing phases of the melt inclusion as it is reheated to its original melt temperature. [19] Furnace heating allows one or more melt inclusions to be reheated in a furnace held at a constant pressure of one atmosphere to their original melt temperatures and then rapidly quenched in water to produce a homogeneous glass phase. [20] FTIR is an analytical method that focuses an infrared beam on a spot on the glass phase of the melt inclusion to determine an absorption (or extinction) coefficient for either H2O or CO2, associated with the wavelengths for each species, depending on the parent lithology that contained the melt inclusion.
[10] [21] Raman spectroscopy is similar to FTIR in using a laser focused on the glass phase of the melt inclusion [22] [23] or on a vapor bubble [24] that may be contained in the melt inclusion, to identify wavelengths associated with the Raman vibrational bands of volatiles such as H2O and CO2. Raman spectroscopy can also be used to determine the density of CO2 contained in a vapor bubble, if it is present at a high enough concentration within a melt inclusion. [11] SIMS is used to determine volatile and trace element concentrations by aiming an ion beam (16O− or 133Cs+) at the melt inclusion to produce secondary ions that can be measured by a mass spectrometer. [25] LA-ICPMS can also determine major and trace elements; however, with LA-ICPMS the melt inclusion and any accompanying materials within it are ionized, destroying the melt inclusion, and then analyzed with a mass spectrometer. [26] [27] Scanning electron microscopy is a useful tool to employ before any of the above analyses that may result in loss of the original material, since it can be used to check for daughter minerals or vapor bubbles and so helps determine the best technique for melt inclusion analysis. [4] Electron microprobe analysis is ubiquitous in the analysis of major and minor elements in melt inclusions and provides the oxide concentrations used in determining the parental magma types of melt inclusions and their phenocryst hosts. [28] Melt inclusions have been imaged in three dimensions using X-ray microtomography. [29] This method can determine the dimensions of the different phases present in melt inclusions more precisely than visible-light microscopy. Melt inclusions can be used to determine the composition, compositional evolution and volatile components [14] of magmas that existed in the history of magma systems, because melt inclusions act as tiny pressure vessels that isolate and preserve the ambient melt surrounding the crystal before it is modified by later processes, such as post-entrapment crystallization. [4] Given that melt inclusions form at varying pressures (P) and temperatures (T), they can also provide important information about the entrapment conditions (P-T) at depth and the volatile contents (H2O, CO2, S, Cl and F) that drive volcanic eruptions. [21] The presence of a vapor bubble adds an additional component for analysis, given that the vapor bubble could contain a significant proportion of the H2O and CO2 originally in the melt sampled by the melt inclusion. [16] [30] If the vapor bubble is composed primarily of CO2, Raman spectroscopy can be used to determine the density of the CO2 present. [31] [11] Major and minor element concentrations are generally determined using EMPA; commonly analyzed elements include Si, Ti, Al, Cr, Fe, Mn, Mg, Ca, Ni, Na, K, P, Cl, F and S. [32] Knowledge of the oxide concentrations of these major and minor elements helps determine the composition of the parental magma, the melt inclusion and the phenocryst hosts. [28] Trace element concentrations can be measured by SIMS analysis, with resolution in some cases as low as 1 ppm. [33] LA-ICPMS analyses can also be used to determine trace element concentrations; however, their lower resolution compared with SIMS does not allow determination of concentrations as low as 1 ppm.
[5] Olivine ((Mg,Fe)2SiO4) is generally the first mineral to crystallize in magmatic and volcanic systems, following Bowen's reaction series. [34] As such, olivine-hosted melt inclusions can be of utility in recording magmatic processes in systems where limited crystallization has occurred, such as in volcanic eruptions. Additionally, olivine-hosted melt inclusions can record magma compositions and volatile contents before the magma is significantly modified by fractional crystallization, assimilation, degassing or any other magmatic processes. For these reasons, olivine-hosted melt inclusions are a useful tool commonly utilized by geologists to better understand magmatic and volcanic systems. Experiments indicate that hydrogen diffusion via H+ can occur on timescales of minutes to hours [35] at temperatures as low as 800-1000 °C within the olivine crystal lattice (e.g., Mackwell and Kohlstedt, 1990). Hydrogen diffusion occurs at different rates along different crystallographic axes of the olivine crystal lattice, such that D[100] > D[010] > D[001] (Barth et al., 2023; Mackwell and Kohlstedt, 1990), and oxygen fugacity has been shown to have no effect on diffusion rates in olivine. The high diffusion rates of hydrogen in olivine and the insensitivity of those rates to oxygen fugacity suggest that hydrogen diffuses as protons (H+) via the interstitial mechanism, as well as by occupying silicon and metal sites. [36] Experiments have illuminated the mechanisms by which hydrogen diffuses in olivine-hosted melt inclusions, [37] concluding that hydrogen diffusion in olivine-hosted melt inclusions involves material exchange between the host olivine and the melt inclusion itself. Specifically, the formation or destruction of olivine with vacancies in the metallic site (which holds Fe and Mg) can assist in the dehydration or hydration of melt inclusions. For melt inclusion dehydration, the following reactions occur:

2 H2O + SiO2 → H4SiO4
H2O + SiO2 + MgO → MgH2SiO4
H2O + Fe2O3 + 2 SiO2 → H4SiO4

The above reactions indicate that hydrogen diffusion in melt inclusions involves silica, and that the major element composition of melt inclusions can be affected by both dehydration and hydration. The mechanisms and rates of hydrogen diffusion in olivine and olivine-hosted melt inclusions provide important context for chemical composition data collected by researchers. For example, the fast diffusion of hydrogen in olivine is important to consider when analyzing a xenolith that originated in the mantle. As a mantle rock ascends to the surface, it reaches pressures at which less hydrogen can be dissolved in the olivine, while temperatures likely remain high enough for significant diffusion to occur. As such, many olivine grains are likely significantly degassed of their water content by the time they reach the surface. Additionally, because silica compositions are affected by melt inclusion hydration and dehydration, changes in parental melt composition and the extent of dehydration can be determined from compositional discrepancies between melt inclusions and host rocks. The presence of water as hydrogen is an important constraint on mantle mechanical properties.
Water stored in olivine crystal defects can affect the strength of olivine, and this water content can in turn affect the rheology of the mantle. [38] Because hydrogen diffusivity is related to olivine electrical conductivity, which is in turn related to water content, determining the hydrogen diffusion history of an olivine crystal can place constraints on the water content of the magma from which it was sourced. [39] Hydrogen diffusion in olivine can also be used to determine the rate at which magma decompressed during a volcanic eruption. [40] Magma degasses water as it ascends, because it encounters lower pressures and therefore lower water solubilities, exsolving water into a vapor bubble. This degassing causes water to diffuse out of olivine crystals as they encounter disequilibrium conditions, producing a water concentration gradient from the core to the rim of the olivine crystal. The water concentrations can be measured using microanalytical techniques such as FTIR or SIMS, and the rate of diffusion can be modelled mathematically. The resulting diffusion profile allows determination of the decompression rate that the crystal experienced, and thus the rate of magma ascent during a volcanic eruption; this in turn allows assessment of the explosivity of a volcanic event. [41] Crystallization of phases within the melt inclusion can occur after its entrapment; this is known as post-entrapment crystallization. The changes in melt inclusion compositions that result from this process can be corrected with different models, considering a hydrated or dehydrated melt inclusion. [42] However, this process is mainly related to interdiffusion between two elements, Fe and Mg, rather than diffusion of hydrogen. The interdiffusion between these elements is recorded as 'Fe-loss' or 'Fe-gain', depending on whether the crystal was cooled or heated. [43] These temperature effects have been studied experimentally [44] and can also reflect a change in hydrogen diffusion rates. It is therefore important to consider the effects of post-entrapment crystallization and to correct for them; the correction allows accurate results and interpretations to be gathered from melt inclusion data. Henry Clifton Sorby, in 1858, was the first to document microscopic melt inclusions in crystals. [45] The study of melt inclusions has been driven more recently by the development of sophisticated chemical analysis techniques. Scientists from the former Soviet Union led the study of melt inclusions in the decades after World War II, [46] and developed methods for heating melt inclusions under a microscope, so that changes could be directly observed. A. T. Anderson explored the analysis of melt inclusions from basaltic magmas of Kilauea Volcano in Hawaii to determine initial volatile concentrations of magma at depth. [47]
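As a toy illustration of the mathematical modelling mentioned above (the diffusivity, duration, and concentrations below are invented, not a published calibration), water loss from a crystal rim can be approximated by one-dimensional diffusion into a semi-infinite medium with constant diffusivity D, giving an error-function profile whose width scales with the square root of D*t; fitting a measured core-to-rim profile for D*t then yields the diffusion timescale and hence the decompression rate:

import math

def water_profile(x_um, D_um2_per_s, t_s, c_core=10.0, c_rim=1.0):
    # Water content (arbitrary ppm-like units) at distance x_um from the rim
    # after t_s seconds, for a crystal initially at c_core with the rim held
    # at c_rim: c = c_rim + (c_core - c_rim) * erf(x / (2*sqrt(D*t))).
    return c_rim + (c_core - c_rim) * math.erf(x_um / (2.0 * math.sqrt(D_um2_per_s * t_s)))

# Example: D = 1 um^2/s, one hour of ascent; profile from rim toward core.
for x in (0, 25, 50, 100, 200):
    print(f"x = {x:3d} um: c = {water_profile(x, 1.0, 3600.0):.2f}")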
https://en.wikipedia.org/wiki/Melt_inclusion
Melt spinning is a metal forming technique typically used to form thin ribbons of metal or alloys with a particular atomic structure. [1] Important commercial applications of melt-spun metals include high-efficiency transformers (amorphous metal transformers), sensory devices, telecommunications equipment, and power electronics. [2] A typical melt spinning process involves casting molten metal by jetting it onto a rotating wheel or drum, which is cooled internally, usually by water or liquid nitrogen. The molten material rapidly solidifies upon contact with the large, cold surface area of the drum. The rotation of the drum constantly removes the solidified product while exposing new surface area to the molten metal stream, allowing for continuous production. The resulting ribbon is then directed along the production line to be packaged or machined into further products. [3] [4] The cooling rates achievable by melt spinning are on the order of 10^4-10^6 kelvins per second (K/s). Consequently, melt spinning is used to produce materials that require extremely high cooling rates to form, such as metallic glasses. Due to their rapid cooling, these products have a highly disordered atomic structure, which gives them unique magnetic and physical properties (see amorphous metals). [3] [5] [6] Several variations of the melt spinning process provide specific advantages; these include planar flow casting, twin roll melt spinning, and auto ejection melt spinning. Originating with Robert Pond in a series of related patents from 1958 to 1961 (US Patent Nos. 2825108, 2910744, and 2976590), the current concept of the melt spinner was outlined by Pond and Maddin in 1969. At first, the liquid was quenched on the inner surface of a drum; Liebermann and Graham further developed the process into a continuous casting technique by 1976, this time on the drum's outer surface. [7] The process can continuously produce thin ribbons of material, with sheets several inches in width commercially available. [8] In melt spinning, the alloy or metal is first melted in a crucible. Then an inert gas, usually argon, is used to jet the molten material out of a nozzle located on the underside of the crucible. The resulting stream of liquid is directed onto the outer circumferential surface of a rotating wheel or drum, which is cooled internally. The drum's outer surface is located extremely close to the nozzle but does not touch it. Generally, the velocity of the drum's surface must be between 10 m/s and 60 m/s, to avoid the formation of globules (droplets) at the low end and breaking of the ribbon at the high end. Once the stream contacts the drum's surface, a small puddle of melt (molten material) forms. Due to the low viscosity of the melt, the shear forces generated by the relative movement of the drum's surface underneath the melt extend only a few microns into the puddle; in other words, only a small amount of the puddle is affected by friction from the rotation of the drum. Consequently, as the drum spins, most of the melt puddle remains held between the nozzle and the drum by surface tension. However, the melt at the very bottom of the puddle, in direct contact with the drum, rapidly solidifies into a thin ribbon. The solidified ribbon is carried away from under the nozzle on the drum's surface for up to 10° of rotation before centrifugal force from the drum's rotation ejects it.
[1] [4] [9] This process occurs continuously: as solidified material is removed from underneath the puddle of melt, more liquid material is added to the puddle from the nozzle. There are many factors at play in even a basic melt spinning process, and the quality and dimensions of the product are determined by how the machine is operated and configured. Consequently, there are many studies exploring the effects of variations in the melt spinner's configuration on specific alloys; for example, one study reports the specific conditions that were found to work well for melt spinning Fe-B and Fe-Si-B alloys. In general, melt spinners run with some variation in the following variables, depending on the desired product. Since every material acts differently, the exact cause-effect relationship between each of these variables and the resulting ribbon is usually determined experimentally. Other, less commonly adjusted variables exist, but their effects on the final ribbon dimensions and structure are not all documented. [1] [10] [11] Different processes and techniques have been developed around melt spinning which offer advantages for industrial applications and product consistency. Planar flow casting (PFC) is a commonly used melt spinning process for the industrial fabrication of wide metallic glass sheets. In this process, the primary modification is that a much wider nozzle is used to eject the melt from the crucible; as a result, the melt puddle covers a larger area of the drum, which in turn forms a larger area of ribbon. [9] PFC is commonly performed in a vacuum to avoid oxidation of the molten material, which would affect the quality of the resulting product. Ribbons up to 200 mm wide have been achieved industrially using PFC. [12] In twin roll melt spinning, two rollers or drums are used instead of one. The rollers are placed side by side and rotated such that the one on the left spins clockwise and the one on the right spins counter-clockwise; this configuration pulls material passing between the rollers downward. The melt is jetted between the rollers, where it is cooled and ejected as a ribbon. The advantage of twin roll melt spinning is that it gives a high degree of control over the thickness of the resulting ribbon. With a single roller, controlling ribbon thickness is complicated, involving close control over the flow rate of the melt, the rotational speed of the wheel, and the temperature of the melt; with the twin roll setup, a particular and consistent thickness can be achieved simply by changing the distance between the rollers. To date, twin roll melt spinning remains limited almost exclusively to laboratory scales. [13] [14] Auto ejection melt spinning (AEMS) describes a type of melt spinning in which ejection of the melt occurs as soon as it has liquefied, eliminating the need for a technician to manually control the flow rate, temperature, and/or release timing of the melt stream. [1] This modification allows for much higher ribbon consistency between runs and a greater level of automation in the process. Melt spinning is used to manufacture thin metal sheets or ribbons that are nearly amorphous, or non-crystalline. The unique electric and magnetic properties of melt-spun metals are a consequence of this structure as well as of the composition of the alloy or metal used to form the ribbon. Normally, when a metallic material cools, the individual atoms solidify in strong, repeating patterns to form a crystalline solid.
However, in melt spinning, the melt is quenched (cooled) so rapidly that the atoms do not have time to form these ordered structures before they completely solidify. Instead, the atoms are solidified in positions resembling their liquid state. This physical structure gives rise to the magnetic and electric properties of amorphous metals. [ 6 ] The amorphous material produced by melt spinning is considered a soft magnet; that is to say, its coercivity is less than 1000 A·m −1 , which means that the metal's magnetism is more responsive to outside influences and as a result can be easily switched on and off. This makes amorphous metals particularly useful in applications requiring the repeated magnetization and demagnetization of a material in order to function. Certain amorphous alloys also provide the ability to enhance and/or channel flux created by electrical currents, making them useful for magnetic shielding and insulation. The exact magnetic properties of each alloy depend mostly on the atomic composition of the material. For example, nickel-iron alloys with a lower amount of nickel have a high electrical resistance , while those with a higher percentage of nickel have a high magnetic permeability . [ 15 ] [ 2 ]
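As a rough back-of-envelope companion to the process description above, the sketch below applies conservation of volume (everything leaving the nozzle ends up in the ribbon), the quoted 10–60 m/s wheel-speed window, and the ~10° contact arc. All numerical inputs are assumptions chosen for illustration; they are not taken from the cited sources.

```python
# Illustrative mass-balance estimate for melt-spun ribbon thickness and an
# order-of-magnitude cooling rate. All parameter values are assumed.

import math

Q = 1.0e-5   # volumetric melt flow through the nozzle [m^3/s] (assumed)
w = 0.01     # ribbon width [m] (assumed)
v = 30.0     # wheel surface speed [m/s] (mid-range of the 10-60 m/s window)

# Conservation of volume: everything leaving the nozzle ends up in the
# ribbon, so thickness = flow / (width * speed).
t = Q / (w * v)
print(f"ribbon thickness ~ {t * 1e6:.0f} um")      # ~33 um for these numbers

# Rough cooling-rate estimate: the ribbon stays on the wheel for up to ~10
# degrees of rotation before being ejected.
R = 0.15                                 # wheel radius [m] (assumed)
contact = (math.radians(10) * R) / v     # time in contact with the wheel [s]
dT = 1300.0                              # drop from melt to solidification [K] (assumed)
print(f"contact time ~ {contact * 1e3:.2f} ms, "
      f"cooling rate ~ {dT / contact:.1e} K/s")    # top of the 1e4-1e6 K/s regime
```

With these placeholder inputs the estimate lands at roughly 33 μm thickness and about 10⁶ K/s, consistent with the ribbon dimensions and quench rates described in the text.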
https://en.wikipedia.org/wiki/Melt_spinning
Melting , or fusion , is a physical process that results in the phase transition of a substance from a solid to a liquid . This occurs when the internal energy of the solid increases, typically by the application of heat or pressure , which raises the substance's temperature to the melting point . At the melting point, the ordering of ions or molecules in the solid breaks down to a less ordered state, and the solid melts to become a liquid. Substances in the molten state generally have reduced viscosity as the temperature increases. An exception to this principle is elemental sulfur , whose viscosity increases in the range of 130 °C to 190 °C due to polymerization . [ 1 ] Some organic compounds melt through mesophases , states of partial order between solid and liquid. From a thermodynamics point of view, at the melting point the change in Gibbs free energy ∆G of the substance is zero, but there are non-zero changes in the enthalpy ( H ) and the entropy ( S ), known respectively as the enthalpy of fusion (or latent heat of fusion) and the entropy of fusion . Melting is therefore classified as a first-order phase transition . Melting occurs when the Gibbs free energy of the liquid becomes lower than that of the solid for that material. The temperature at which this occurs is dependent on the ambient pressure. Low-temperature helium is the only known exception to the general rule that melting requires an input of heat. [ 2 ] Helium-3 has a negative enthalpy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative enthalpy of fusion below 0.8 K. This means that, at appropriate constant pressures, heat must be removed from these substances in order to melt them. [ 3 ] Among the theoretical criteria for melting, the Lindemann [ 4 ] and Born [ 5 ] criteria are those most frequently used as a basis to analyse the melting conditions. The Lindemann criterion states that melting occurs because of "vibrational instability": crystals melt when the average amplitude of thermal vibrations of atoms becomes relatively high compared with interatomic distances, i.e. when √⟨δu²⟩ > δ L R s , where δu is the atomic displacement, the Lindemann parameter δ L ≈ 0.20–0.25 and R s is one-half of the inter-atomic distance. [ 6 ] : 177 The "Lindemann melting criterion" is supported by experimental data both for crystalline materials and for glass-liquid transitions in amorphous materials. The Born criterion is based on a rigidity catastrophe caused by the vanishing elastic shear modulus: when the crystal no longer has sufficient rigidity to mechanically withstand the load, it becomes liquid. [ 7 ] Under a standard set of conditions, the melting point of a substance is a characteristic property. The melting point is often equal to the freezing point . However, under carefully created conditions, supercooling or superheating past the melting or freezing point can occur. Water on a very clean glass surface will often supercool several degrees below the freezing point without freezing. Fine emulsions of pure water have been cooled to −38 °C without nucleation to form ice. [ citation needed ] Nucleation occurs due to fluctuations in the properties of the material. [ citation needed ] If the material is kept still, there is often nothing (such as physical vibration) to trigger this change, and supercooling (or superheating) may occur. Thermodynamically, the supercooled liquid is in a metastable state with respect to the crystalline phase, and it is likely to crystallize suddenly.
Glasses are amorphous solids , which are usually fabricated when the molten material cools very rapidly to below its glass transition temperature, without sufficient time for a regular crystal lattice to form. Solids are characterised by a high degree of connectivity between their molecules, while fluids have lower connectivity of their structural blocks. Melting of a solid material can also be considered as percolation via broken connections between particles, e.g. connecting bonds. [ 8 ] In this approach, melting of an amorphous material occurs when the broken bonds form a percolation cluster, with T g dependent on quasi-equilibrium thermodynamic parameters of the bonds, e.g. on the enthalpy ( H d ) and entropy ( S d ) of formation of bonds in a given system at given conditions: [ 9 ] T g = H d / [ S d + R ln((1 − f c )/ f c )], where f c is the percolation threshold and R is the universal gas constant. Although H d and S d are not true equilibrium thermodynamic parameters and can depend on the cooling rate of a melt, they can be found from available experimental data on the viscosity of amorphous materials . Even below its melting point, quasi-liquid films can be observed on crystalline surfaces. The thickness of the film is temperature-dependent. This effect is common to all crystalline materials. This pre-melting shows its effects in, e.g., frost heave, the growth of snowflakes and, taking grain boundary interfaces into account, maybe even in the movement of glaciers . In ultrashort pulse physics, a so-called nonthermal melting may take place. It occurs not because of the increase of the atomic kinetic energy, but because of changes of the interatomic potential due to excitation of electrons. Since electrons act like a glue sticking atoms together, heating electrons by a femtosecond laser alters the properties of this "glue", which may break the bonds between the atoms and melt a material even without an increase of the atomic temperature. [ 10 ] In genetics , melting DNA means separating the double-stranded DNA into two single strands by heating or with chemical agents, as in the polymerase chain reaction .
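As a quick numerical check of the percolation expression for T g given above, the following minimal sketch evaluates it. The H d and S d values are placeholders of plausible magnitude rather than data for any specific glass, and f c ≈ 0.15 (the Scher–Zallen critical density) is a commonly used three-dimensional percolation threshold; treat all three as assumptions.

```python
# Evaluate T_g = H_d / (S_d + R*ln((1 - f_c)/f_c)) with placeholder inputs.

import math

R = 8.314      # J/(mol*K), universal gas constant
f_c = 0.15     # percolation threshold (assumed, Scher-Zallen-like value)
H_d = 300e3    # J/mol, enthalpy of bond (configuron) formation (assumed)
S_d = 100.0    # J/(mol*K), entropy of bond formation (assumed)

T_g = H_d / (S_d + R * math.log((1.0 - f_c) / f_c))
print(f"T_g ~ {T_g:.0f} K")   # ~2600 K for these placeholder numbers
```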
https://en.wikipedia.org/wiki/Melting
Melting-point depression is the phenomenon of reduction of the melting point of a material with a reduction of its size. This phenomenon is very prominent in nanoscale materials , which melt at temperatures hundreds of degrees lower than bulk materials. The melting temperature of a bulk material is not dependent on its size. However, as the dimensions of a material decrease towards the atomic scale, the melting temperature scales with the material dimensions. The decrease in melting temperature can be on the order of tens to hundreds of degrees for metals with nanometer dimensions. [ 1 ] [ 2 ] [ 3 ] Melting-point depression is most evident in nanowires , nanotubes and nanoparticles , which all melt at lower temperatures than bulk amounts of the same material. Changes in melting point occur because nanoscale materials have a much larger surface-to-volume ratio than bulk materials, drastically altering their thermodynamic and thermal properties. Melting-point depression has mostly been studied for nanoparticles, owing to their ease of fabrication and theoretical modeling. The melting temperature of a nanoparticle decreases sharply as the particle reaches a critical diameter, usually < 50 nm for common engineering metals. [ 1 ] [ 2 ] [ 4 ] Melting-point depression is a very important issue for applications involving nanoparticles, as it decreases the functional range of the solid phase. Nanoparticles are currently used or proposed for prominent roles in catalyst , sensor , medicinal, optical, magnetic, thermal, electronic, and alternative energy applications. [ 6 ] Nanoparticles must be in a solid state to function at elevated temperatures in several of these applications. Two techniques allow measurement of the melting point of a nanoparticle. The electron beam of a transmission electron microscope (TEM) can be used to melt nanoparticles. [ 7 ] [ 8 ] The melting temperature is estimated from the beam intensity, while changes in the diffraction conditions indicate the phase transition from solid to liquid. This method allows direct viewing of nanoparticles as they melt, making it possible to test and characterize samples with a wider distribution of particle sizes. The TEM limits the pressure range at which melting-point depression can be tested. More recently, researchers developed nanocalorimeters that directly measure the enthalpy and melting temperature of nanoparticles. [ 4 ] Nanocalorimeters provide the same data as bulk calorimeters; however, additional calculations must account for the presence of the substrate supporting the particles. A narrow size distribution of nanoparticles is required, since the procedure does not allow users to view the sample during the melting process and there is no way to characterize the exact size of melted particles during the experiment. Melting-point depression was predicted in 1909 by Pawlow. [ 9 ] It was directly observed inside an electron microscope in the 1960s–70s [ 10 ] for nanoparticles of Pb, [ 11 ] [ 12 ] Au, [ 13 ] [ 14 ] and In. [ 12 ] Nanoparticles have a much greater surface-to-volume ratio than bulk materials. The increased surface-to-volume ratio means surface atoms have a much greater effect on the chemical and physical properties of a nanoparticle. Surface atoms bind in the solid phase with less cohesive energy because they have fewer neighboring atoms in close proximity compared to atoms in the bulk of the solid.
Each chemical bond an atom shares with a neighboring atom provides cohesive energy, so atoms with fewer bonds and neighboring atoms have lower cohesive energy. The cohesive energy of the nanoparticle has been theoretically calculated as a function of particle size according to Equation 1. [ 15 ] E = E B ( 1 − d D ) {\displaystyle E=E_{B}\left(1-{\frac {d}{D}}\right)} Where: D = nanoparticle size As Equation 1 shows, the effective cohesive energy of a nanoparticle approaches that of the bulk material as the material extends beyond the atomic size range (D>>d). Atoms located at or near the surface of the nanoparticle have reduced cohesive energy due to a reduced number of cohesive bonds. An atom experiences an attractive force with all nearby atoms according to the Lennard-Jones potential . The cohesive energy of an atom is directly related to the thermal energy required to free the atom from the solid. According to Lindemann's criterion , the melting temperature of a material is proportional to its cohesive energy, a v (T M =Ca v ). [ 16 ] Since atoms near the surface have fewer bonds and reduced cohesive energy, they require less energy to free from the solid phase. Melting point depression of high surface-to-volume ratio materials results from this effect. For the same reason, surfaces of nanomaterials can melt at lower temperatures than the bulk material. [ 17 ] The theoretical size-dependent melting point of a material can be calculated through classical thermodynamic analysis. The result is the Gibbs–Thomson equation shown in Equation 2. [ 2 ] T M ( d ) = T M B ( 1 − 4 σ s l H f ρ s d ) {\displaystyle T_{M}(d)=T_{MB}\left(1-{\frac {4\sigma \,_{sl}}{H_{f}\rho \,_{s}d}}\right)} Where: T MB = bulk melting temperature Equation 2 gives the general relation between the melting point of a metal nanoparticle and its diameter. However, recent work indicates the melting point of semiconductor and covalently bonded nanoparticles may have a different dependence on particle size. [ 18 ] The covalent character of the bonds changes the melting physics of these materials. Researchers have demonstrated that Equation 3 more accurately models melting point depression in covalently bonded materials. [ 18 ] T M ( d ) = T M B ( 1 − ( c d ) 2 ) {\displaystyle T_{M}(d)=T_{MB}\left(1-\left({\frac {c}{d}}\right)^{2}\right)} Where: T MB =bulk melting temperature Equation 3 indicates that melting point depression is less pronounced in covalent nanoparticles due to the quadratic nature of particle size dependence in the melting Equation. The specific melting process for nanoparticles is currently unknown. The scientific community currently accepts several mechanisms as possible models of nanoparticle melting. [ 18 ] Each of the corresponding models effectively matches experimental data for the melting of nanoparticles. Three of the four models detailed below derive the melting temperature in a similar form using different approaches based on classical thermodynamics. The liquid drop model (LDM) assumes that an entire nanoparticle transitions from solid to liquid at a single temperature. [ 16 ] This feature distinguishes the model, as the other models predict melting of the nanoparticle surface prior to the bulk atoms. If the LDM is true, a solid nanoparticle should function over a greater temperature range than other models predict. The LDM assumes that the surface atoms of a nanoparticle dominate the properties of all atoms in the particle. 
The cohesive energy of the particle is identical for all atoms in the nanoparticle. The LDM represents the binding energy of nanoparticles as a function of the free energies of the volume and surface. [ 16 ] Equation 4 gives the normalized, size-dependent melting temperature of a material according to the liquid-drop model. T M ( d ) = 4 T M B H f d ( σ s v − σ l v ( ρ s ρ l ) 2 / 3 ) {\displaystyle T_{M}(d)={\frac {4T_{MB}}{H_{f}d}}\left(\sigma \,_{sv}-\sigma \,_{lv}\left({\frac {\rho \,_{s}}{\rho \,_{l}}}\right)^{2/3}\right)} Where: σ sv =solid-vapor interface energy The liquid shell nucleation model (LSN) predicts that a surface layer of atoms melts prior to the bulk of the particle. [ 19 ] The melting temperature of a nanoparticle is a function of its radius of curvature according to the LSN. Large nanoparticles melt at greater temperatures as a result of their larger radius of curvature. The model calculates melting conditions as a function of two competing order parameters using Landau potentials. One order parameter represents a solid nanoparticle, while the other represents the liquid phase. Each of the order parameters is a function of particle radius. The parabolic Landau potentials for the liquid and solid phases are calculated at a given temperature, with the lesser Landau potential assumed to be the equilibrium state at any point in the particle. In the temperature range of surface melting, the results show that the Landau curve of the ordered state is favored near the center of the particle while the Landau curve of the disordered state is smaller near the surface of the particle. The Landau curves intersect at a specific radius from the center of the particle. The distinct intersection of the potentials means the LSN predicts a sharp, unmoving interface between the solid and liquid phases at a given temperature. The exact thickness of the liquid layer at a given temperature is the equilibrium point between the competing Landau potentials. Equation 5 gives the condition at which an entire nanoparticle melts according to the LSN model. [ 20 ] T M ( d ) = 4 T M B H f d ( σ s v 1 − d 0 d − σ l v ( 1 − ρ s ρ l ) ) {\displaystyle T_{M}(d)={\frac {4T_{MB}}{H_{f}d}}\left({\frac {\sigma \,_{sv}}{1-{\frac {d_{0}}{d}}}}-\sigma \,_{lv}\left(1-{\frac {\rho \,_{s}}{\rho \,_{l}}}\right)\right)} Where: d 0 =atomic diameter The liquid nucleation and growth model (LNG) treats nanoparticle melting as a surface-initiated process. [ 21 ] The surface melts initially, and the liquid-solid interface quickly advances through the entire nanoparticle. The LNG defines melting conditions through the Gibbs-Duhem relations, yielding a melting temperature function dependent on the interfacial energies between the solid and liquid phases, the volumes and surface areas of each phase, and the size of the nanoparticle. The model calculations show that the liquid phase forms at lower temperatures for smaller nanoparticles. Once the liquid phase forms, the free energy conditions quickly change and favor melting. Equation 6 gives the melting conditions for a spherical nanoparticle according to the LNG model. [ 20 ] T M ( d ) = 2 T M B H f d ( σ s l − σ l v 3 ( σ s v − σ l v ρ s ρ l ) ) {\displaystyle T_{M}(d)={\frac {2T_{MB}}{H_{f}d}}\left(\sigma \,_{sl}-\sigma \,_{lv}3\left(\sigma \,_{sv}-\sigma \,_{lv}{\frac {\rho \,_{s}}{\rho \,_{l}}}\right)\right)} The bond-order-length-strength (BOLS) model employs an atomistic approach to explain melting point depression. 
[ 20 ] The model focuses on the cohesive energy of individual atoms rather than a classical thermodynamic approach. The BOLS model calculates the melting temperature for individual atoms from the sum of their cohesive bonds. As a result, the BOLS model predicts that the surface layers of a nanoparticle melt at lower temperatures than the bulk of the nanoparticle. The BOLS mechanism states that if one bond breaks, the remaining neighbouring ones become shorter and stronger. The cohesive energy, or the sum of bond energies, of the less coordinated atoms determines the thermal stability, including melting, evaporation and other phase transitions. The lowered coordination number (CN) changes the equilibrium bond length between atoms near the surface of the nanoparticle. The bonds relax towards equilibrium lengths, increasing the cohesive energy per bond between atoms, independent of the exact form of the specific interatomic potential . However, the integrated cohesive energy for surface atoms is much lower than for bulk atoms due to the reduced coordination number and an overall decrease in cohesive energy. Using a core–shell configuration, the melting-point depression of nanoparticles is dominated by the outermost two atomic layers, while atoms in the core interior retain their bulk nature. The BOLS model and the core–shell structure have been applied to other size dependencies of nanostructures such as mechanical strength, chemical and thermal stability, lattice dynamics (optical and acoustic phonons), photon emission and absorption, electronic core-level shift and work function modulation, magnetism at various temperatures, and dielectrics due to electron polarization, etc. Reproduction of experimental observations of the above-mentioned size dependencies has been realized. Quantitative information, such as the energy level of an isolated atom and the vibration frequency of an individual dimer, has been obtained by matching the BOLS predictions to the measured size dependency. [ 21 ] Nanoparticle shape impacts the melting point of a nanoparticle. Facets, edges and deviations from a perfect sphere all change the magnitude of melting-point depression. [ 16 ] These shape changes affect the surface-to-volume ratio, which affects the cohesive energy and thermal properties of a nanostructure. Equation 7 gives a general shape-corrected formula for the theoretical melting point of a nanoparticle based on its size and shape. [ 16 ] T M ( d ) = T M B ( 1 − c z d ) {\displaystyle T_{M}(d)=T_{MB}\left(1-{\frac {c}{zd}}\right)} Where: c = materials constant and z = shape parameter of the particle. The shape parameter is 1 for a sphere and 3/2 for a very long wire, indicating that melting-point depression is suppressed in nanowires compared to nanoparticles. Past experimental data show that nanoscale tin platelets melt within a narrow range of 10 °C of the bulk melting temperature. [ 8 ] The melting-point depression of these platelets was suppressed compared to spherical tin nanoparticles. [ 4 ] Several nanoparticle melting simulations theorize that the supporting substrate affects the extent of melting-point depression of a nanoparticle. [ 1 ] [ 22 ] [ 23 ] These models account for energetic interactions between the substrate materials. A free nanoparticle, as many theoretical models assume, has a different melting temperature (usually lower) than a supported particle due to the absence of cohesive energy between the nanoparticle and substrate.
However, measurement of the properties of a freestanding nanoparticle remains impossible, so the extent of the interactions cannot be verified through experiment. Ultimately, substrates currently support nanoparticles in all nanoparticle applications, so substrate/nanoparticle interactions are always present and must impact melting-point depression. Within the size–pressure approximation, which considers the stress induced by the surface tension and the curvature of the particle, it was shown that the size of the particle affects the composition and temperature of a eutectic point (Fe-C [ 1 ] ), the solubility of C in Fe [ 24 ] and Fe:Mo nanoclusters. [ 25 ] Reduced solubility can affect the catalytic properties of nanoparticles. In fact, it has been shown that size-induced instability of Fe-C mixtures represents the thermodynamic limit for the thinnest nanotube that can be grown from Fe nanocatalysts. [ 24 ]
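Several of the "Where:" symbol lists under Equations 1–6 above were truncated in extraction. For reference: in Equation 1, E B is the bulk cohesive energy and d the atomic size; in Equations 2–6, σ sl , σ sv and σ lv are the solid–liquid, solid–vapor and liquid–vapor interface energies, H f is the latent heat of fusion, ρ s and ρ l the solid and liquid densities, and d the particle diameter. The sketch below evaluates the Gibbs–Thomson relation (Equation 2) with illustrative values of the magnitude reported for gold; treat the numbers as assumptions for demonstration, not authoritative data.

```python
# Worked example of the Gibbs-Thomson relation (Equation 2 above):
#   T_M(d) = T_MB * (1 - 4*sigma_sl / (H_f * rho_s * d))

T_MB = 1337.0      # K, bulk melting point of gold
sigma_sl = 0.132   # J/m^2, solid-liquid interfacial energy (assumed)
H_f = 63.7e3       # J/kg, latent heat of fusion
rho_s = 19300.0    # kg/m^3, solid density

def t_melt(d):
    """Size-dependent melting point for a particle of diameter d in metres."""
    return T_MB * (1.0 - 4.0 * sigma_sl / (H_f * rho_s * d))

for d_nm in (5, 10, 20, 50):
    print(f"d = {d_nm:3d} nm  ->  T_M ~ {t_melt(d_nm * 1e-9):.0f} K")
# Depression of roughly 100 K at 5 nm, fading towards the bulk value by
# ~50 nm, matching the trend described in the text.
```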
https://en.wikipedia.org/wiki/Melting-point_depression
Melting curve analysis is an assessment of the dissociation characteristics of double-stranded DNA during heating. As the temperature is raised, the double strand begins to dissociate, leading to a rise in absorbance intensity ( hyperchromicity ). The temperature at which 50% of the DNA is denatured is known as the melting temperature . Measurement of the melting temperature can be used to infer the species present, because every organism yields a characteristic melting curve. The information gathered can also be used to infer the presence and identity of single-nucleotide polymorphisms (SNPs): G-C base pairs are joined by three hydrogen bonds while A-T base pairs are joined by only two, so DNA in which an A or T is mutated to a C or G will have a higher melting temperature. The information also gives vital clues to a molecule's mode of interaction with DNA. Molecules such as intercalators slot in between base pairs and interact through pi stacking . This has a stabilizing effect on DNA's structure, which leads to a rise in its melting temperature. Likewise, increasing salt concentrations help screen the negative repulsions between the phosphates in the DNA's backbone; this also leads to a rise in the DNA's melting temperature. Conversely, pH can have a negative effect on DNA's stability, which may lead to a lowering of its melting temperature. The energy required to break the base-base hydrogen bonding between two strands of DNA is dependent on their length, GC content and their complementarity. By heating a reaction mixture that contains double-stranded DNA sequences and measuring dissociation against temperature, these attributes can be inferred. Originally, strand dissociation was observed using UV absorbance measurements, [ 1 ] but techniques based on fluorescence measurements [ 2 ] are now the most common approach. The temperature-dependent dissociation between two DNA strands can be measured using a DNA-intercalating fluorophore such as SYBR green , EvaGreen or fluorophore-labelled DNA probes . In the case of SYBR green (which fluoresces 1000-fold more intensely while intercalated in the minor groove of two strands of DNA), the dissociation of the DNA during heating is measurable by the large reduction in fluorescence that results. [ 3 ] Alternatively, juxtapositioned probes (one featuring a fluorophore and the other, a suitable quencher ) can be used to determine the complementarity of the probe to the target sequence. [ 3 ] The graph of the negative first derivative of the melting curve may make it easier to pinpoint the temperature of dissociation (defined as 50% dissociation), by virtue of the peaks thus formed. SYBR Green enabled product differentiation in the LightCycler in 1997. [ 4 ] Hybridization probes (or FRET probes) were also demonstrated to provide very specific melting curves from the single-stranded (ss) probe-to-amplicon hybrid. Idaho Technology and Roche have done much to popularize this use on the LightCycler instrument. Since the late 1990s, product analysis via SYBR Green, other double-strand-specific dyes, or probe-based melting curve analysis has become nearly ubiquitous. The probe-based technique is sensitive enough to detect single-nucleotide polymorphisms (SNPs) and can distinguish between homozygous wildtype , heterozygous and homozygous mutant alleles by virtue of the dissociation patterns produced.
Without probes, amplicon melting (melting and analysis of the entire PCR product) was not generally successful at finding single-base variants through melting profiles. With higher-resolution instruments and advanced dyes, amplicon melting analysis of single-base variants is now possible with several commercially available instruments, for example: the Applied Biosystems 7500 Fast System and the 7900HT Fast Real-Time PCR System, Idaho Technology's LightScanner (the first plate-based high-resolution melting device), Qiagen's Rotor-Gene instruments, and Roche's LightCycler 480 instruments. Many research and clinical examples [ 5 ] exist in the literature that show the use of melting curve analysis to obviate or complement sequencing efforts, and thus reduce costs. While most quantitative PCR machines have the option of melting curve generation and analysis, the level of analysis and software support varies. High Resolution Melt (known as either Hi-Res Melting or HRM) is the advancement of this general technology and has begun to offer higher sensitivity for SNP detection within an entire dye-stained amplicon. Probeless melting curve systems are less expensive and simpler in design to develop. However, for genotyping applications, where large volumes of samples must be processed, the cost of development may be less important than the total throughput and ease of interpretation, thus favoring probe-based genotyping methods. Digital High Resolution Melting (dHRM) [ 6 ] is also used in conjunction with digital PCR (dPCR) to improve quantitative power by providing additional information on the melting behavior of the amplified DNA, which can help in distinguishing between different genetic variants and in ensuring the accuracy of the quantification. [ 7 ] dHRM is enabled by the use of sensitive DNA-binding dyes and digital PCR instrumentation, which allows for the collection of high-density data points to generate detailed melt profiles. These profiles can be used to identify even subtle differences in nucleic acid sequences, making dHRM a powerful tool for genotyping, mutation scanning, and methylation analysis. [ 8 ] dHRM is an advanced molecular technique used for the analysis of genetic variations, such as single-nucleotide polymorphisms (SNPs), mutations, and methylations, by monitoring the melting behavior of double-stranded DNA. [ 9 ] It is a post-PCR method that involves the gradual heating of PCR-amplified DNA in the presence of intercalating dyes that fluoresce when bound to double-stranded DNA. As the DNA melts, the fluorescence decreases, and the changes in fluorescence are monitored in real time with a digital PCR system. The resulting melting curves are then analyzed to detect genetic differences based on the melting temperatures of the DNA fragments. The technique has been further advanced by its application on digital microfluidics platforms, which can facilitate the analysis of single-nucleotide polymorphisms (SNPs) with high accuracy and sensitivity. [ 10 ] Additionally, massively parallel dHRM has been developed to enable rapid and absolutely quantitative sequence profiling, which can be particularly useful in clinical and industrial settings where accurate quantification of nucleic acids is critical. [ 11 ]
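To illustrate the negative-first-derivative analysis described above, here is a minimal sketch that generates a synthetic two-state melt curve and locates the melting temperature at the peak of −dF/dT. The melting temperature, transition width, baseline drift and noise level are all assumed values; a real instrument records the fluorescence trace directly.

```python
# Locate T_m from a (synthetic) dye-fluorescence melt curve via the peak of
# the negative first derivative, as melting-curve software does.

import numpy as np

np.random.seed(0)
Tm_true = 84.0                       # degC, assumed amplicon melting temperature
T = np.linspace(60.0, 95.0, 351)     # 0.1 degC steps, like a slow melt ramp

frac_ds = 1.0 / (1.0 + np.exp((T - Tm_true) / 0.8))   # fraction still double-stranded
F = 1000.0 * frac_ds - 2.0 * (T - 60.0)               # dye signal plus baseline drift
F += np.random.normal(0.0, 2.0, T.size)               # instrument noise

dFdT = -np.gradient(F, T)            # negative first derivative of the melt curve
Tm_est = T[np.argmax(dFdT)]          # derivative peak ~ 50% dissociation point
print(f"estimated T_m = {Tm_est:.1f} degC")           # close to the true 84.0
```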
https://en.wikipedia.org/wiki/Melting_curve_analysis
The melting point (or, rarely, liquefaction point ) of a substance is the temperature at which it changes state from solid to liquid . At the melting point the solid and liquid phase exist in equilibrium . The melting point of a substance depends on pressure and is usually specified at a standard pressure such as 1 atmosphere or 100 kPa . When considered as the temperature of the reverse change from liquid to solid, it is referred to as the freezing point or crystallization point . Because of the ability of substances to supercool , the freezing point can easily appear to be below its actual value. When the "characteristic freezing point" of a substance is determined, in fact, the actual methodology is almost always "the principle of observing the disappearance rather than the formation of ice, that is, the melting point ." [ 1 ] For most substances, melting and freezing points are approximately equal. For example, the melting and freezing points of mercury are 234.32 kelvins (−38.83 °C ; −37.89 °F ). [ 2 ] However, certain substances possess differing solid-liquid transition temperatures. For example, agar melts at 85 °C (185 °F; 358 K) and solidifies from 31 °C (88 °F; 304 K); such direction dependence is known as hysteresis . The melting point of ice at 1 atmosphere of pressure is very close [ 3 ] to 0 °C (32 °F; 273 K); this is also known as the ice point. In the presence of nucleating substances , the freezing point of water is not always the same as the melting point. In the absence of nucleators, water can exist as a supercooled liquid down to −48.3 °C (−54.9 °F; 224.8 K) before freezing. [ 4 ] The metal with the highest melting point is tungsten , at 3,414 °C (6,177 °F; 3,687 K); [ 5 ] this property makes tungsten excellent for use as electrical filaments in incandescent lamps . The often-cited carbon does not melt at ambient pressure but sublimes at about 3,700 °C (6,700 °F; 4,000 K); a liquid phase only exists above pressures of 10 MPa (99 atm) and an estimated 4,030–4,430 °C (7,290–8,010 °F; 4,300–4,700 K) (see carbon phase diagram ). Hafnium carbonitride (HfCN) is a refractory compound with the highest known melting point of any substance to date and the only one confirmed to have a melting point above 4,273 K (4,000 °C; 7,232 °F) at ambient pressure. Quantum mechanical computer simulations predicted that this alloy (HfN 0.38 C 0.51 ) would have a melting point of about 4,400 K. [ 6 ] This prediction was later confirmed by experiment, though a precise measurement of its exact melting point has yet to be made. [ 7 ] At the other end of the scale, helium does not freeze at all at normal pressure, even at temperatures arbitrarily close to absolute zero ; a pressure of more than twenty times normal atmospheric pressure is necessary. Many laboratory techniques exist for the determination of melting points. A Kofler bench is a metal strip with a temperature gradient (range from room temperature to 300 °C). Any substance can be placed on a section of the strip, revealing its thermal behaviour at the temperature at that point. Differential scanning calorimetry gives information on the melting point together with its enthalpy of fusion . A basic melting point apparatus for the analysis of crystalline solids consists of an oil bath with a transparent window (most basic design: a Thiele tube ) and a simple magnifier. Several grains of a solid are placed in a thin glass tube and partially immersed in the oil bath.
The oil bath is heated (and stirred) and with the aid of the magnifier (and external light source) melting of the individual crystals at a certain temperature can be observed. A metal block might be used instead of an oil bath. Some modern instruments have automatic optical detection. The measurement can also be made continuously with an operating process. For instance, oil refineries measure the freeze point of diesel fuel "online", meaning that the sample is taken from the process and measured automatically. This allows for more frequent measurements as the sample does not have to be manually collected and taken to a remote laboratory. [ citation needed ] For refractory materials (e.g. platinum, tungsten, tantalum, some carbides and nitrides, etc.) the extremely high melting point (typically considered to be above, say, 1,800 °C) may be determined by heating the material in a black body furnace and measuring the black-body temperature with an optical pyrometer . For the highest melting materials, this may require extrapolation by several hundred degrees. The spectral radiance from an incandescent body is known to be a function of its temperature. An optical pyrometer matches the radiance of a body under study to the radiance of a source that has been previously calibrated as a function of temperature. In this way, the measurement of the absolute magnitude of the intensity of radiation is unnecessary. However, known temperatures must be used to determine the calibration of the pyrometer. For temperatures above the calibration range of the source, an extrapolation technique must be employed. This extrapolation is accomplished by using Planck's law of radiation. The constants in this equation are not known with sufficient accuracy, causing errors in the extrapolation to become larger at higher temperatures. However, standard techniques have been developed to perform this extrapolation. [ citation needed ] Consider the case of using gold as the source (mp = 1,063 °C). In this technique, the current through the filament of the pyrometer is adjusted until the light intensity of the filament matches that of a black-body at the melting point of gold. This establishes the primary calibration temperature and can be expressed in terms of current through the pyrometer lamp. With the same current setting, the pyrometer is sighted on another black-body at a higher temperature. An absorbing medium of known transmission is inserted between the pyrometer and this black-body. The temperature of the black-body is then adjusted until a match exists between its intensity and that of the pyrometer filament. The true higher temperature of the black-body is then determined from Planck's Law. The absorbing medium is then removed and the current through the filament is adjusted to match the filament intensity to that of the black-body. This establishes a second calibration point for the pyrometer. This step is repeated to carry the calibration to higher temperatures. Now, temperatures and their corresponding pyrometer filament currents are known and a curve of temperature versus current can be drawn. This curve can then be extrapolated to very high temperatures. In determining melting points of a refractory substance by this method, it is necessary to either have black body conditions or to know the emissivity of the material being measured. The containment of the high melting material in the liquid state may introduce experimental difficulties. 
Melting temperatures of some refractory metals have thus been measured by observing the radiation from a black body cavity in solid metal specimens that were much longer than they were wide. To form such a cavity, a hole is drilled perpendicular to the long axis at the center of a rod of the material. These rods are then heated by passing a very large current through them, and the radiation emitted from the hole is observed with an optical pyrometer. The point of melting is indicated by the darkening of the hole when the liquid phase appears, destroying the black body conditions. Today, containerless laser heating techniques, combined with fast pyrometers and spectro-pyrometers, are employed to allow for precise control of the time for which the sample is kept at extreme temperatures. Such experiments of sub-second duration address several of the challenges associated with more traditional melting point measurements made at very high temperatures, such as sample vaporization and reaction with the container. For a solid to melt, heat is required to raise its temperature to the melting point. However, further heat needs to be supplied for the melting to take place: this is called the heat of fusion , and is an example of latent heat . [ 10 ] From a thermodynamics point of view, at the melting point the change in Gibbs free energy (ΔG) of the material is zero, but the enthalpy ( H ) and the entropy ( S ) of the material are increasing (ΔH, ΔS > 0). Melting occurs when the Gibbs free energy of the liquid becomes lower than that of the solid for that material. At each pressure this happens at a specific temperature. Since ΔG = ΔH − TΔS = 0 at the melting point, it can also be shown that T = ΔH/ΔS, where T , ΔS and ΔH are respectively the temperature at the melting point, the change of entropy of melting and the change of enthalpy of melting. The melting point is sensitive to extremely large changes in pressure , but generally this sensitivity is orders of magnitude less than that for the boiling point , because the solid-liquid transition represents only a small change in volume. [ 11 ] [ 12 ] If, as observed in most cases, a substance is more dense in the solid than in the liquid state, the melting point will increase with increases in pressure. Otherwise the reverse behavior occurs. Notably, this is the case for water, and also for Si, Ge, Ga and Bi. With extremely large changes in pressure, substantial changes to the melting point are observed. For example, the melting point of silicon at ambient pressure (0.1 MPa) is 1415 °C, but at pressures in excess of 10 GPa it decreases to 1000 °C. [ 13 ] Melting points are often used to characterize organic and inorganic compounds and to ascertain their purity . The melting point of a pure substance is always higher and has a smaller range than the melting point of an impure substance or, more generally, of mixtures. The higher the quantity of other components, the lower the melting point and the broader will be the melting point range, often referred to as the "pasty range". The temperature at which melting begins for a mixture is known as the solidus while the temperature where melting is complete is called the liquidus . Eutectics are special types of mixtures that behave like single phases. They melt sharply at a constant temperature to form a liquid of the same composition. Alternatively, on cooling a liquid with the eutectic composition will solidify as uniformly dispersed, small (fine-grained) mixed crystals with the same composition.
In contrast to crystalline solids, glasses do not possess a melting point; on heating they undergo a smooth glass transition into a viscous liquid . Upon further heating, they gradually soften, which can be characterized by certain softening points . The freezing point of a solvent is depressed when another compound is added, meaning that a solution has a lower freezing point than a pure solvent. This phenomenon is used in technical applications to avoid freezing, for instance by adding salt or ethylene glycol to water. [ citation needed ] In organic chemistry , Carnelley's rule , established in 1882 by Thomas Carnelley , states that high molecular symmetry is associated with high melting point . [ 14 ] Carnelley based his rule on examination of 15,000 chemical compounds. For example, for three structural isomers with molecular formula C 5 H 12 the melting point increases in the series isopentane, −160 °C (113 K); n-pentane, −129.8 °C (143 K); and neopentane, −16.4 °C (256.8 K). [ 15 ] Likewise in xylenes and also dichlorobenzenes the melting point increases in the order meta, ortho and then para . Pyridine has a lower symmetry than benzene, hence its lower melting point, but the melting point increases again for diazines and triazines . Many cage-like compounds like adamantane and cubane with high symmetry have relatively high melting points. A high melting point results from a high heat of fusion , a low entropy of fusion , or a combination of both. In highly symmetrical molecules the crystal phase is densely packed with many efficient intermolecular interactions, resulting in a higher enthalpy change on melting. An attempt to predict the bulk melting point of crystalline materials was first made in 1910 by Frederick Lindemann . [ 17 ] The idea behind the theory was the observation that the average amplitude of thermal vibrations increases with increasing temperature. Melting initiates when the amplitude of vibration becomes large enough for adjacent atoms to partly occupy the same space. The Lindemann criterion states that melting is expected when the root mean square vibration amplitude exceeds a threshold value. Assuming that all atoms in a crystal vibrate with the same frequency ν , the average thermal energy can be estimated using the equipartition theorem as [ 18 ] k B T = 4π²mν²u², where m is the atomic mass , ν is the frequency , u is the average vibration amplitude, k B is the Boltzmann constant , and T is the absolute temperature . If the threshold value of u 2 is c 2 a 2 , where c is the Lindemann constant and a is the atomic spacing , then the melting point is estimated as T m = 4π²mν²c²a²/k B . Several other expressions for the estimated melting temperature can be obtained depending on the estimate of the average thermal energy; another commonly used expression for the Lindemann criterion differs only in its numerical prefactor. [ 19 ] Substituting the Debye frequency for ν , ν = k B θ D /h, gives T m = 4π²mc²a²k B θ D ²/h², where θ D is the Debye temperature and h is the Planck constant . Values of c range from 0.15 to 0.3 for most materials. [ 20 ] In February 2011, Alfa Aesar released over 10,000 melting points of compounds from their catalog as open data [ 21 ] and similar data has been mined from patents . [ 22 ] The Alfa Aesar and patent data have been modeled using (respectively) random forests [ 21 ] and support vector machines . [ 22 ]
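For readers who want the algebra behind the Lindemann estimate just quoted, here is the short derivation. This is a sketch of the standard textbook argument; the rendered equations in the source did not survive extraction, so the forms above and below are the conventional ones rather than a verbatim reconstruction of the article's displayed math.

```latex
% One-dimensional harmonic oscillator of frequency \nu: equipartition gives
% an average total energy of k_B T, and the total energy of the oscillator
% equals m\omega^2\langle u^2\rangle with \omega = 2\pi\nu, so
k_B T = 4\pi^2 m \nu^2 \langle u^2 \rangle
\quad\Longrightarrow\quad
\langle u^2 \rangle = \frac{k_B T}{4\pi^2 m \nu^2}.

% Imposing the Lindemann threshold \langle u^2 \rangle = c^2 a^2 at T = T_m:
T_m = \frac{4\pi^2 m \nu^2 c^2 a^2}{k_B},
\qquad
T_m = \frac{4\pi^2 m c^2 a^2 k_B \theta_D^2}{h^2}
\quad\text{after substituting } \nu = k_B \theta_D / h.
```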
https://en.wikipedia.org/wiki/Melting_point
In the following table, the use row is the value recommended for use in other Wikipedia pages in order to maintain consistency across content. Values are quoted from http://www.webelements.com/ and other sources, in kelvin and in degrees Celsius.
https://en.wikipedia.org/wiki/Melting_points_of_the_elements_(data_page)
Meluadrine ( INN Tooltip International Nonproprietary Name ), also known as meluadrine tartrate ( JAN Tooltip Japanese Accepted Name ; developmental code name HSR-81 ) in the case of the tartrate salt , is a sympathomimetic and β 2 -adrenergic receptor agonist which was studied as a tocolytic drug but was never marketed. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] It was first described in the literature by 1994. [ 6 ] The drug is also known as ( R )-4-hydroxytulobuterol and is an active metabolite of tulobuterol . [ 7 ]
https://en.wikipedia.org/wiki/Meluadrine
Melvin Barnett Comisarow FRSC [ 1 ] is a Canadian physicist and analytical chemist who co-invented the Fourier-transform ion cyclotron resonance technique of Mass spectroscopy , together with Alan G. Marshall , [ 2 ] at the University of British Columbia . Comisarow was born in Alberta to a Ukrainian-Canadian family, and earned his bachelor's degree at the University of Alberta , 1963, before obtaining his PhD at Case Western Reserve University , under the supervision of George Andrew Olah in 1969, and subsequently a postdoc with John D. Baldeschwieler at Stanford University . [ 1 ] His first academic appointment was at the University of British Columbia, where he subsequently stayed until retirement. He is a fellow of the American Chemical Society , and the Royal Society of Canada , and has received numerous awards, including the Barringer Award of the Spectroscopy Society of Canada (1989); 1995 Field Franklin Award for Mass Spectroscopy, from the American Chemical Society; and the 1996 Fisher Award in Analytical Chemistry of Canadian Society for Chemistry. [ 1 ]
https://en.wikipedia.org/wiki/Melvin_Barnett_Comisarow
The Melvin Mooney Distinguished Technology Award is a professional award conferred by the ACS Rubber Division . Established in 1983, the award is named after Melvin Mooney , developer of the Mooney viscometer and of the Mooney-Rivlin hyperelastic law. The award consists of an engraved plaque and prize money, and honors individuals "who have exhibited exceptional technical competency by making significant and repeated contributions to rubber science and technology". [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Melvin_Mooney_Distinguished_Technology_Award
Melzer's reagent (also known as Melzer's iodine reagent , [ 1 ] Melzer's solution or informally as Melzer's ) is a chemical reagent used by mycologists to assist with the identification of fungi , and by phytopathologists for fungi that are plant pathogens . [ 1 ] Melzer's reagent is an aqueous solution of chloral hydrate , potassium iodide , and iodine . Depending on the formulation, it consists of approximately 2.50–3.75% potassium iodide and 0.75–1.25% iodine, with the remainder of the solution being 50% water and 50% chloral hydrate. [ 2 ] [ 3 ] Melzer's is toxic to humans if ingested due to the presence of iodine and chloral hydrate. [ 4 ] Due to the legal status of chloral hydrate, Melzer's reagent is difficult to obtain in the United States. [ 4 ] In response to difficulties obtaining chloral hydrate, scientists at Rutgers formulated Visikol [ 5 ] (compatible with Lugol's iodine ) as a replacement. In 2019, research showed that Visikol behaves differently from Melzer's reagent in several key situations, noting it should not be recommended as a viable substitute. [ 6 ] Melzer's reagent is part of a class of iodine/potassium iodide (IKI)-containing reagents used in biology; Lugol's iodine is another such formula. Melzer's is used by exposing fungal tissue or cells to the reagent, typically in a microscope slide preparation, and looking for any of three color reactions: amyloid (a positive blue to blue-black reaction), dextrinoid (a red-brown reaction) or inamyloid (no color change). Among the amyloid reactions, two types can be distinguished: euamyloid, where material reacts without potassium hydroxide pretreatment, and hemiamyloid, whose reaction depends on pretreatment, as discussed below. Melzer's reactions are typically almost immediate, though in some cases the reaction may take up to 20 minutes to develop. [ 2 ] The functions of the chemicals that make up Melzer's reagent are several. The chloral hydrate is a clearing agent , bleaching and improving the transparency of various dark-colored microscopic materials. The potassium iodide is used to improve the solubility of the iodine, which is otherwise only semi-soluble in water. Iodine is thought to be the main active staining agent in Melzer's; it is thought to react with starch-like polysaccharides in the cell walls of amyloid material; however, its mechanism of action is not entirely understood. It has been observed that hemiamyloid material reacts differently when exposed to Melzer's than it does when exposed to other IKI solutions such as Lugol's, and that in some cases an amyloid reaction is shown in material that had prior exposure to KOH, but an inamyloid reaction without such pretreatment. [ 7 ] [ 8 ] An experiment in which spores from 35 species of basidiomycetes were tested for reactions to both Melzer's and Lugol's showed that spores in a large percentage of the species tested display very different reactions between the two reagents. These varied from being weakly or non-reactive in Lugol's, to giving iodine-positive reactions in Lugol's but not in Melzer's, to even giving dextrinoid reactions in Lugol's while giving amyloid reactions in Melzer's. [ 4 ] Melzer's degrades into a cloudy precipitate when combined with alkaline solutions, [ 2 ] hence it cannot be used in combination or in direct series with such common mycological reagents as potassium hydroxide or ammonium hydroxide solutions. When potassium hydroxide is used as a pretreatment, the alkalinity must first be neutralized before adding Melzer's. The use of iodine-containing solutions as an aid to describing and identifying fungi dates back to the mid-19th century.
[ 4 ] Melzer's reagent was first described in 1924 [ 9 ] and takes its name from its inventor, the mycologist Václav Melzer , who modified an older chloral hydrate-containing IKI solution developed by botanist Arthur Meyer . [ 7 ] Melzer was a specialist in Russula , a genus in which the amyloidy on the spore ornamentation or entire spore is of great taxonomic significance. [ 10 ]
https://en.wikipedia.org/wiki/Melzer's_reagent
In computing, mem is a measurement unit for the number of memory accesses used or needed by a process, function, instruction set, algorithm or data structure. Mem has applications in computational complexity theory , computing efficiency, combinatorial optimization , supercomputing , computational cost ( algorithmic efficiency ) and other computational metrics. Example usage, when discussing the processing time of search tree nodes in finding 10 × 10 Latin squares: "A typical node of the search tree probably requires about 75 mems (memory accesses) for processing, to check validity. Therefore the total running time on a modern computer would be roughly the time needed to perform 2 × 10²⁰ mems." ( Donald Knuth , 2011, The Art of Computer Programming , Volume 4A, p. 6). Reducing mems as a speed and efficiency enhancement is not a linear benefit, as it trades off against increases in the cost of ordinary operations; one such optimization technique is called PForDelta. [ 1 ] Although lossless compression methods like Rice, Golomb and PFOR are most often associated with signal processing codecs, their ability to compress binary integers also makes them relevant for reducing mems at the price of extra operations (see Golomb coding for details). [ 2 ]
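Back-of-envelope arithmetic with the mem unit, using the figures from the Knuth quotation above, looks like the following sketch. The sustained memory-access rate is an assumption for illustration only; real rates depend heavily on cache behavior.

```python
# Convert a mem budget into node counts and rough wall-clock time.

total_mems = 2e20       # memory accesses for the whole Latin-square search (quoted)
mems_per_node = 75      # accesses needed to process one search-tree node (quoted)
rate = 1e9              # assumed sustained memory accesses per second

nodes = total_mems / mems_per_node
seconds = total_mems / rate
print(f"~{nodes:.1e} search-tree nodes")
print(f"~{seconds:.1e} s, about {seconds / 3.1536e7:.0f} years at 1e9 mems/s")
```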
https://en.wikipedia.org/wiki/Mem_(computing)
A membrane is a selective barrier; it allows some things to pass through but stops others. Such things may be molecules , ions , or other small particles. Membranes can be generally classified into synthetic membranes and biological membranes . [ 1 ] Biological membranes include cell membranes (outer coverings of cells or organelles that allow passage of certain constituents); [ 2 ] nuclear membranes , which cover a cell nucleus; and tissue membranes, such as mucosae and serosae . Synthetic membranes are made by humans for use in laboratories and industry (such as chemical plants ). This concept of a membrane has been known since the eighteenth century but was used little outside of the laboratory until the end of World War II. Drinking water supplies in Europe had been compromised by the war, and membrane filters were used to test for water safety. However, due to the lack of reliability, slow operation, reduced selectivity and elevated costs, membranes were not widely exploited. The first use of membranes on a large scale was with microfiltration and ultrafiltration technologies. Since the 1980s, these separation processes, along with electrodialysis , have been employed in large plants and, today, several experienced companies serve the market. [ 3 ] The degree of selectivity of a membrane depends on the membrane pore size. Depending on the pore size, they can be classified as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) membranes. Membranes can also be of various thicknesses, with homogeneous or heterogeneous structure. Membranes can be neutral or charged, and particle transport can be active or passive . The latter can be facilitated by pressure , concentration , chemical or electrical gradients of the membrane process. Microfiltration removes particles larger than 0.08–2 μm and operates within a range of 7–100 kPa. [ 4 ] Microfiltration is used to remove residual suspended solids (SS), to remove bacteria in order to condition the water for effective disinfection and as a pre-treatment step for reverse osmosis. [ 5 ] Relatively recent developments are membrane bioreactors (MBR), which combine microfiltration and a bioreactor for biological treatment. Ultrafiltration removes particles larger than 0.005–2 μm and operates within a range of 70–700 kPa. [ 4 ] Ultrafiltration is used for many of the same applications as microfiltration. Some ultrafiltration membranes have also been used to remove dissolved compounds with high molecular weight, such as proteins and carbohydrates. Also, they can remove viruses and some endotoxins. Nanofiltration is also known as "loose" RO and can reject particles smaller than 0.002 μm. Nanofiltration is used for the removal of selected dissolved constituents from wastewater. NF is primarily developed as a membrane softening process which offers an alternative to chemical softening. Likewise, nanofiltration can be used as a pre-treatment step before reverse osmosis. The main objectives of NF pre-treatment are to: [ 6 ] (1) minimize particulate and microbial fouling of the RO membranes by removal of turbidity and bacteria; (2) prevent scaling by removal of the hardness ions; and (3) lower the operating pressure of the RO process by reducing the feed-water total dissolved solids (TDS) concentration. Reverse osmosis is commonly used for desalination. As well, RO is commonly used for the removal of dissolved constituents from wastewater remaining after advanced treatment with microfiltration.
RO excludes ions but requires high pressures to produce deionized water (850–7000 kPa). RO is the most widely used desalination technology because of its simplicity of use and relatively low energy costs compared with distillation, which uses technology based on thermal processes. Note that RO membranes remove water constituents at the ionic level. To do so, most current RO systems use a thin-film composite (TFC), mainly consisting of three layers: a polyamide layer, a polysulphone layer and a polyester layer. [ 7 ] An emerging class of membranes rely on nanostructure channels to separate materials at the molecular scale. These include carbon nanotube membranes , graphene membranes, membranes made from polymers of intrinsic microporosity (PIMs), and membranes incorporating metal–organic frameworks (MOFs). These membranes can be used for size-selective separations such as nanofiltration and reverse osmosis, but also adsorption-selective separations such as olefins from paraffins and alcohols from water that traditionally have required expensive and energy-intensive distillation . In the membrane field, the term module is used to describe a complete unit composed of the membranes, the pressure support structure, the feed inlet, the outlet permeate and retentate streams, and an overall support structure. The principal types of membrane modules are plate-and-frame, spiral-wound, hollow-fiber and tubular modules. The key elements of any membrane process relate to the influence of a small number of operating parameters on the overall permeate flux. The total permeate flow from a membrane system is given by the following equation: Q p = F w × A, where Q p is the permeate stream flowrate [kg·s −1 ], F w is the water flux rate [kg·m −2 ·s −1 ] and A is the membrane area [m 2 ]. The permeability (k) of a membrane is given by the permeate flux divided by the trans-membrane pressure: k = F w / P TMP . The trans-membrane pressure (TMP) is given by the following expression: P TMP = (P f + P c )/2 − P p , where P TMP is the trans-membrane pressure [kPa], P f the inlet pressure of the feed stream [kPa], P c the pressure of the concentrate stream [kPa] and P p the pressure of the permeate stream [kPa]. The rejection (r) can be defined as the fraction of a given constituent that has been removed from the feedwater, r = (C f − C p )/C f , where C f and C p are the constituent concentrations in the feed and permeate streams. The corresponding mass balance equations are Q f = Q p + Q c and Q f C f = Q p C p + Q c C c , where Q f and Q c are the feed and concentrate flowrates and C c is the constituent concentration in the concentrate. To control the operation of a membrane process, two modes, concerning the flux and the TMP, can be used. These modes are (1) constant TMP and (2) constant flux. The operation modes will be affected when the rejected materials and particles in the retentate tend to accumulate in the membrane. At a given TMP, the flux of water through the membrane will decrease and at a given flux, the TMP will increase, reducing the permeability (k). This phenomenon is known as fouling , and it is the main limitation to membrane process operation. Two operation modes for membranes can be used: dead-end filtration and cross-flow filtration. Filtration leads to an increase in the resistance against the flow. In the case of the dead-end filtration process, the resistance increases according to the thickness of the cake formed on the membrane. As a consequence, the permeability (k) and the flux rapidly decrease, proportionally to the solids concentration, thus requiring periodic cleaning. For cross-flow processes, the deposition of material will continue until the forces binding the cake to the membrane are balanced by the forces of the fluid. At this point, cross-flow filtration will reach a steady-state condition, and thus the flux will remain constant with time. Therefore, this configuration will demand less periodic cleaning.
Fouling can be defined as the potential deposition and accumulation of constituents in the feed stream on the membrane. The loss of RO performance can result from irreversible organic and/or inorganic fouling and from chemical degradation of the active membrane layer. Microbiological fouling, generally defined as the consequence of irreversible attachment and growth of bacterial cells on the membrane, is also a common reason for discarding old membranes. A variety of oxidative solutions, cleaning agents and anti-fouling agents are widely used in desalination plants, and repeated and incidental exposure to them can adversely affect the membranes, generally through a decrease of their rejection efficiencies. [ 12 ] Fouling can take place through several physicochemical and biological mechanisms, all related to the increased deposition of solid material onto the membrane surface. The main mechanisms by which fouling can occur are: Since fouling is an important consideration in the design and operation of membrane systems, as it affects pre-treatment needs, cleaning requirements, operating conditions, cost and performance, it should be prevented and, if necessary, removed. Optimizing the operating conditions is important to prevent fouling. However, if fouling has already taken place, it should be removed by physical or chemical cleaning. Physical cleaning techniques for membranes include membrane relaxation and membrane backwashing. Relaxation and backwashing effectiveness will decrease with operation time as more irreversible fouling accumulates on the membrane surface. Therefore, besides the physical cleaning, chemical cleaning may also be recommended. It includes: Optimizing the operating conditions. Several measures can be taken to optimize the operating conditions of the membrane to prevent fouling, for instance: Membrane alteration. Recent efforts have focused on eliminating membrane fouling by altering the surface chemistry of the membrane material to reduce the likelihood that foulants will adhere to the membrane surface. The exact chemical strategy used depends on the chemistry of the solution being filtered. For example, membranes used in desalination might be made hydrophobic to resist fouling via accumulation of minerals, while membranes used for biologics might be made hydrophilic to reduce protein/organic accumulation. Modification of surface chemistry via thin-film deposition can thereby largely reduce fouling. One drawback to using modification techniques is that, in some cases, the flux rate and selectivity of the membrane process can be negatively impacted. [ 20 ] Once a membrane reaches a significant performance decline, it is discarded. Discarded RO membrane modules are currently classified worldwide as inert solid waste and are often disposed of in landfills, although they can also be energetically recovered. However, various efforts have been made over the past decades to avoid this, such as waste prevention, direct reapplication and recycling. In this regard, membranes also follow the waste management hierarchy: the most preferable action is to upgrade the design of the membrane, leading to a reduction in use for the same application, and the least preferred action is disposal and landfilling. [ 21 ] RO membranes present some environmental challenges that must be resolved in order to comply with circular economy principles; chief among them is a short service life of 5–10 years.
Over the past two decades, the number of RO desalination plants has increased by 70%. The size of these RO plants has also increased significantly, with some reaching a production capacity exceeding 600,000 m3 of water per day. This translates into some 14,000 tonnes of membrane waste landfilled every year. To increase the lifespan of a membrane, different prevention methods have been developed: combining the RO process with a pre-treatment process to improve efficiency, developing anti-fouling techniques, and developing suitable procedures for cleaning the membranes. Pre-treatment processes lower the operating costs because of the lesser amounts of chemical additives in the saltwater feed and the lower operational maintenance required for the RO system. [ 22 ] Four types of fouling are found on RO membranes: (i) inorganic (salt precipitation), (ii) organic, (iii) colloidal (particle deposition in the suspension) and (iv) microbiological (bacteria and fungi). Thus, an appropriate combination of pre-treatment procedures and chemical dosing, as well as an efficient cleaning plan that tackles these types of fouling, should enable the development of an effective anti-fouling technique. Most plants clean their membranes every week (CEB, chemically enhanced backwash). In addition to this maintenance cleaning, an intensive cleaning-in-place (CIP) is recommended two to four times annually. Reuse of RO membranes includes the direct reapplication of modules in other separation processes with less stringent specifications. The conversion of an RO TFC membrane into a porous membrane is possible by degrading the dense polyamide layer. Converting RO membranes by chemical treatment with different oxidizing solutions aims at removing the active layer of the polyamide membrane, intended for reuse in applications such as MF or UF. This extends their life by approximately two years. [ 23 ] A very limited number of reports have mentioned the potential of direct RO reuse. In one autopsy investigation, hydraulic permeability, salt rejection, and morphological and topographical characteristics were assessed using field emission scanning electron microscopy and atomic force microscopy. The old RO element's performance resembled that of nanofiltration (NF) membranes, so it was not surprising to see the permeability increase from 1.0 to 2.1 L·m−2·h−1·bar−1 and the NaCl rejection drop from >90% to 35–50%. [ 24 ] On the other hand, in order to maximize the overall efficiency of the process, it has lately become common practice to combine RO elements of varying performances within the same pressure vessel, which is called multi-membrane vessel design. In principle, this innovative hybrid system uses high-rejection, low-productivity membranes in the upstream segment of the filtration train, followed by high-productivity, low-energy membranes in the downstream section. There are two ways in which this design can help: either by decreasing energy use due to decreased pressure needs or by increasing output. Since this concept reduces the number of modules and pressure vessels needed for a given application, it has the potential to significantly reduce initial investment costs. It has been proposed to adapt this concept by internally reusing older RO membranes within the same pressure vessel. [ 25 ] Recycling of materials is a general term that involves physically transforming the material or its components so that they can be regenerated into other useful products.
The membrane modules are complex structures, consisting of a number of different polymeric components, and, potentially, the individual components can be recovered for other purposes. Plastic solid waste treatment and recycling can be separated into mechanical recycling, chemical recycling and energy recovery. Mechanical recycling characteristics: Chemical recycling characteristics: Energetic recovery characteristics: Distinct features of membranes are responsible for the interest in using them as an additional unit operation for separation processes in fluid processing. Some advantages noted include: [ 3 ] Membranes are used with pressure as the driving force in membrane filtration of solutes and in reverse osmosis. In dialysis and pervaporation, the chemical potential along a concentration gradient is the driving force. Perstraction, a membrane-assisted extraction process, also relies on a gradient in chemical potential. A submerged flexible mound breakwater, one application of membranes, can be employed for wave control in shallow water as an advanced alternative to conventional rigid submerged designs. [ 27 ] However, the overwhelming success of membranes in biological systems is not matched by their technical application. [ 28 ] The main reasons for this are:
https://en.wikipedia.org/wiki/Membrane
Membrane-introduction mass spectrometry (MIMS) is a method of introducing analytes into the mass spectrometer's vacuum chamber via a semi-permeable membrane. [ 1 ] [ 2 ] Usually a thin, gas-permeable, hydrophobic membrane is used, for example polydimethylsiloxane. Samples can be almost any fluid, including water, air or sometimes even solvents. The great advantage of this method of sample introduction is its simplicity. MIMS can be used to measure a variety of analytes in real time, with little or no sample preparation. MIMS is most useful for the measurement of small, non-polar molecules, since molecules of this type have a greater affinity for the membrane material than for the sample matrix. An advantage of this method is that complex sample components that cannot diffuse through the membrane are not incorporated into the mass spectrometric measurements, so only the (small) molecules of interest are analyzed.
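Transport through such a membrane inlet is commonly described by the solution-diffusion picture, in which the steady-state flux scales with the analyte's permeability, the partial-pressure difference and the inverse membrane thickness. The following toy estimate is a sketch under that assumption; the permeability value and all numbers are placeholders, not measured PDMS data.

```python
# Toy steady-state permeation estimate for a MIMS-style membrane inlet,
# using the standard solution-diffusion expression J = P * dp / l.

def permeation_flux(permeability: float, dp_pa: float, thickness_m: float) -> float:
    """Flux J = P * dp / l; units follow those chosen for the permeability P."""
    return permeability * dp_pa / thickness_m

# Hypothetical numbers: a 100 um membrane and a 10 Pa analyte partial-pressure
# difference between sample and vacuum side.
J = permeation_flux(permeability=1e-13, dp_pa=10.0, thickness_m=100e-6)
print(f"estimated analyte flux: {J:.2e} (placeholder permeability, arbitrary units)")
```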
https://en.wikipedia.org/wiki/Membrane-introduction_mass_spectrometry
The elastic membrane analogy, also known as the soap-film analogy, was first published by pioneering aerodynamicist Ludwig Prandtl in 1903. [ 1 ] [ 2 ] It describes the stress distribution on a long bar in torsion. The cross section of the bar is constant along its length, and need not be circular. The differential equation that governs the stress distribution on the bar in torsion is of the same form as the equation governing the shape of a membrane under differential pressure. Therefore, in order to discover the stress distribution on the bar, all one has to do is cut the shape of the cross section out of a piece of wood, cover it with a soap film, and apply a differential pressure across it. The slope of the soap film at any point of the cross section is then directly proportional to the stress in the bar at the same point on its cross section. While the membrane analogy allows the stress distribution on any cross section to be determined experimentally, it also allows the stress distribution on thin-walled, open cross sections to be determined by the same theoretical approach that describes the behavior of rectangular sections. Using the membrane analogy, any thin-walled cross section can be "stretched out" into a rectangle without affecting the stress distribution under torsion. The maximum shear stress, therefore, occurs at the edge of the midpoint of the stretched cross section, and is equal to 3T/bt², where T is the torque applied, b is the length of the stretched cross section, and t is the thickness of the cross section. It can be shown that the differential equation for the deflection surface of a homogeneous membrane, subjected to uniform lateral pressure, with uniform surface tension, and with the same outline as that of the cross section of a bar under torsion, has the same form as that governing the stress distribution over the cross section of a bar under torsion. This analogy was originally proposed by Ludwig Prandtl in 1903. [ 3 ] Prandtl's stretched-membrane concept was used extensively in the field of electron tube ("vacuum tube") design (1930s to 1960s) to model the trajectory of electrons within a device. The model is constructed by uniformly stretching a thin rubber sheet over a frame, and deforming the sheet upwards with physical models of electrodes, impressed into the sheet from below. The entire assembly is tilted, and steel balls (as electron analogs) are rolled down the assembly and their trajectories noted. The curved surface surrounding the "electrodes" represents the complex increase in field strength as the electron-analog approaches the "electrode"; the upward distortion in the sheet is a close analogy to field strength.
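The "stretched out" rectangle result above is easy to apply numerically. The following Python sketch evaluates the quoted formula τ_max = 3T/(bt²) for a thin-walled open section; the section dimensions and torque are illustrative values of my own.

```python
# Maximum shear stress in a thin-walled open section under torsion,
# using the "stretched out" rectangle result quoted above: tau = 3T / (b t^2).

def max_shear_stress(torque_nm: float, length_m: float, thickness_m: float) -> float:
    """tau_max = 3T / (b t^2) [Pa] for a thin rectangle of length b, thickness t."""
    return 3.0 * torque_nm / (length_m * thickness_m ** 2)

# Example: a channel section whose flanges and web unroll to a 0.3 m long,
# 5 mm thick rectangle, carrying 100 N*m of torque.
tau = max_shear_stress(torque_nm=100.0, length_m=0.3, thickness_m=0.005)
print(f"tau_max = {tau / 1e6:.1f} MPa")   # ~40 MPa
```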
https://en.wikipedia.org/wiki/Membrane_analogy
Membrane biology is the study of the biological and physicochemical characteristics of membranes, with applications in the study of cellular physiology. [ 1 ] Membrane bioelectrical impulses are described by the Hodgkin cycle. Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yield information on the thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information about, and modeling of, various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of the bending and elasticity of membranes on inter-cell connections. [ 2 ]
https://en.wikipedia.org/wiki/Membrane_biology
Membrane bioreactors are combinations of membrane processes like microfiltration or ultrafiltration with a biological wastewater treatment process, the activated sludge process. These technologies are now widely used for municipal and industrial wastewater treatment. [ 1 ] The two basic membrane bioreactor configurations are the submerged membrane bioreactor and the side stream membrane bioreactor. [ 2 ] In the submerged configuration, the membrane is located inside the biological reactor and submerged in the wastewater, while in a side stream membrane bioreactor, the membrane is located outside the reactor as an additional step after biological treatment. Water scarcity has prompted efforts to reuse waste water once it has been properly treated, known as "water reclamation" (also called wastewater reuse, water reuse, or water recycling). Among the treatment technologies available to reclaim wastewater, membrane processes stand out for their capacity to retain solids and salts and even to disinfect water, producing water suitable for reuse in irrigation and other applications. A semipermeable membrane is a material that allows the selective flow of certain substances. In the case of water purification or regeneration, the aim is to allow the water to flow through the membrane whilst retaining undesirable particles on the originating side. By varying the type of membrane, it is possible to achieve better retention of different kinds of pollutants. Some of the required characteristics of a membrane for wastewater treatment are chemical and mechanical resistance over five years of operation and the capacity to operate stably over a wide pH [ 3 ] range. There are two main types of membrane materials available on the market: organic-based polymeric membranes and ceramic membranes. Polymeric membranes are the most commonly used materials in water and wastewater treatment. In particular, polyvinylidene difluoride (PVDF) is the most prevalent material due to its long lifetime and chemical and mechanical resistance. [ 3 ] Common ceramic membrane materials include silicon carbide (SiC), titanium dioxide (titania, TiO2) and zirconium dioxide (zirconia, ZrO2). When used with domestic wastewater, membrane bioreactor processes can produce effluent of high enough quality for discharge into ocean, surface or brackish waters, or for reuse in urban irrigation. Other advantages of membrane bioreactors over conventional processes include reduced footprints and simpler retrofitting. It is possible to operate membrane bioreactor processes at higher mixed liquor suspended solids concentrations than in conventional settlement separation systems, thus reducing the reactor volume needed to achieve the same loading rate. Recent technical innovation and significant membrane cost reduction have enabled membrane bioreactors to become an established process option to treat wastewater. [ 1 ] Membrane bioreactors have become an attractive option for the treatment and reuse of industrial and municipal wastewater, as evidenced by their consistently rising numbers and capacity. The membrane bioreactor market was estimated to be worth around US$216 million in 2006 [ 4 ] and US$838.2 million in 2011, supporting projections that the market was growing at an average rate of 22.4% and would reach a size of US$3.44 billion in 2018.
[ 5 ] The global membrane bioreactor market is expected to grow in the near future due to various driving forces, for instance the increasing scarcity of water worldwide, which makes wastewater reclamation more profitable and is likely to be further aggravated by continuing climate change. [ 6 ] Growing environmental concerns over industrial wastewater disposal, along with declining freshwater resources across developing economies, also account for increasing demand for membrane bioreactor technology. Population growth, urbanization, and industrialization will further complicate the business outlook. [ 7 ] However, high initial investments and operational expenditure may hamper the global membrane bioreactor market. In addition, technological limitations, particularly the recurrent costs of membrane fouling, are likely to hinder wider adoption. Ongoing research and development toward increasing output and minimizing sludge formation is anticipated to fuel industry growth. [ 5 ] Membrane bioreactors can be used to reduce the footprint of an activated sludge sewage treatment system by removing some of the liquid component of the mixed liquor. This leaves a concentrated waste product that is then treated using the activated sludge process. Recent studies show the opportunity to use nanomaterials for the realization of more efficient and sustainable membrane bioreactors for wastewater treatment. [ 8 ] Membrane bioreactors were introduced in the late 1960s, shortly after commercial-scale ultrafiltration and microfiltration membranes became available. The original designs were introduced by Dorr-Oliver Inc. and combined the use of an activated sludge bioreactor with a cross-flow membrane filtration loop. The flat sheet membranes used in this process were polymeric and featured pore sizes ranging from 0.003 to 0.01 μm. Although the idea of replacing the settling tank of the conventional activated sludge process was attractive, it was difficult to justify the use of such a process because of the high cost of membranes, the low economic value of the product (tertiary effluent) and the sometimes rapid loss of performance due to membrane fouling. As a result, the initial design focus was on the attainment of high fluxes, and it was therefore necessary to pump the mixed liquor and its suspended solids at high cross-flow velocity, at significant energy demand (of the order of 10 kWh/m³ of product), to reduce fouling. Because of the poor economics of the first-generation devices, they only found applications in niche areas with special needs, such as isolated trailer parks or ski resorts. The next breakthrough for the membrane bioreactor came in 1989 with the introduction of submerged membrane bioreactor configurations. Until then, membrane bioreactors were designed with a separation device located external to the reactor (side stream membrane bioreactors) and relied on high trans-membrane pressure to maintain filtration. The submerged configuration takes advantage of coarse bubble aeration to produce mixing and limit fouling. The energy demand of the submerged system can be up to two orders of magnitude lower than that of side stream systems, although submerged systems operate at a lower flux and therefore demand more membrane area. In submerged configurations, aeration is considered one of the major parameters in process performance, both hydraulic and biological.
Aeration maintains solids in suspension, scours the membrane surface, and provides oxygen to the biomass, leading to better biodegradability and cell synthesis. Submerged membrane bioreactor systems became preferred to side stream configurations, especially for domestic wastewater treatment. The next key steps in membrane bioreactor development were the acceptance of modest fluxes (25 percent or less of those in the first generation) and the idea to use two-phase (bubbly) flow to control fouling. The lower operating cost obtained with the submerged configuration along with the steady decrease in the membrane cost led to an exponential increase in membrane bioreactor plant installations from the mid-1990s. Since then, further improvements in membrane bioreactor design and operation have been introduced and incorporated into larger plants. While earlier devices were operated at solid retention times as high as 100 days with mixed liquor suspended solids up to 30 g/L, the recent trend is to apply lower solid retention times (around 10–20 days), resulting in more manageable suspended solids levels (10 to 15 g/L). Thanks to these new operating conditions, the oxygen transfer and the pumping cost in the reactors have tended to decrease and the overall maintenance has been simplified. There is now a range of membrane bioreactor systems available commercially, most of which use submerged membranes although some side stream modules are available; these side stream systems also use two-phase flow for fouling control. Typical hydraulic retention times range between 3 and 10 hours. For the most part, hollow fiber and flat sheet membrane configurations are utilized in membrane bioreactor applications. [ 9 ] Despite the more favorable energy usage of submerged membranes, there continued to be a market for the side stream configuration, particularly in smaller flow industrial applications. For ease of maintenance, side stream configurations can be installed on a lower level in a plant building, and thus membrane replacement can be undertaken without specialized lifting equipment. As a result, research and development has continued to improve the side stream configurations, and this has culminated in recent years with the development of low energy systems which incorporate more sophisticated control of the operating parameters coupled with periodic backwashes, which enable sustainable operation at energy usage as low as 0.3 kWh/m3 of product. In the immersed Membrane Bioreactor (iMBR) configuration, the filtration element is installed in either the main bioreactor vessel or in a separate tank. The modules are positioned above the aeration system, fulfilling two functions, the supply of oxygen and the cleaning of the membranes. The membranes can be a flat sheet or tubular or a combination of both and can incorporate an online backwash system which reduces membrane surface fouling by pumping membrane permeate back through the membrane. In systems where the membranes are in a separate tank from the bioreactor, individual trains of membranes can be isolated to undertake cleaning regimes incorporating membrane soaks, however, the biomass must be continuously pumped back to the main reactor to limit mixed liquor suspended solids concentration increases. Additional aeration is also required to provide air scouring to reduce fouling. Where the membranes are installed in the main reactor, membrane modules are removed from the vessel and transferred to an offline cleaning tank. 
[ 11 ] Usually, the internal/submerged configuration is used for larger-scale, lower-strength applications. [ 12 ] To optimize the reactor volume and minimize the production of sludge, submerged membrane bioreactor systems typically operate with mixed liquor suspended solids concentrations between 12,000 mg/L and 20,000 mg/L, hence they offer good flexibility in the selection of the design sludge retention time. It must be taken into account, however, that an excessively high content of mixed liquor suspended solids may render the aeration system less effective; the classical solution to this optimization problem is to keep the concentration of mixed liquor suspended solids close to 10,000 mg/L, guaranteeing good oxygen mass transfer together with a good permeation flux. This type of solution is widely accepted in larger-scale units, where the internal/submerged configuration is typically used, because of the higher relative cost of the membrane compared to the additional tank volume required. [ 13 ] Immersed MBR has been the preferred configuration due to its low energy consumption, high biodegradation efficiency and low fouling rate compared to side stream membrane bioreactors. In addition, iMBR systems can handle higher suspended solids concentrations: while traditional systems work only with suspended solids concentrations between 2.5 and 3.5 g/L, iMBR can handle concentrations between 4 and 12 g/L, an increase of roughly 300%. This type of configuration is adopted in industrial sectors including textiles, food & beverage, oil & gas, mining, power generation, and pulp & paper. [ 14 ] In side stream membrane bioreactor technology, the filtration modules are outside the aerobic tank, hence the name side stream configuration. As in the immersed or submerged configuration, the aeration system is also used to clean and supply oxygen to the bacteria that degrade the organic compounds. The biomass is either pumped directly through several membrane modules in series and back to the bioreactor, or pumped to a bank of modules, from which a second pump circulates the biomass through the modules in series. Cleaning and soaking of the membranes can be undertaken in situ with the use of an installed cleaning tank, pump and pipework. The quality of the final product is such that it can be reused in process applications thanks to the filtration capacity of the micro- and ultrafiltration membranes. Usually, the external/side stream configuration is used for smaller-scale, higher-strength applications; the main advantage of the external/side stream configuration is the possibility to design and size the tank and the membrane separately, with practical advantages for the operation and maintenance of the unit. As in other membrane processes, shear over the membrane surface is needed to prevent or limit fouling; the external/side stream configuration provides this shear using a pumping system, while the internal/submerged configuration provides it through aeration in the bioreactor, and there is an energy requirement to promote the shear by pumping. In this configuration fouling is more pronounced due to the higher fluxes involved. [ 15 ] Membrane bioreactor filtration performance inevitably decreases with filtration time due to the deposition of soluble and particulate materials onto and into the membrane, attributable to the interactions between activated sludge components and the membrane.
This major drawback and process limitation has been under investigation since the earliest membrane bioreactors and remains one of the most challenging issues facing further development. [ 16 ] [ 17 ] Fouling is the process by which the particles (colloidal particles, solute macromolecules) are deposited or adsorbed onto the membrane surface or pores by physical and chemical interactions or mechanical action. This produces a reduction in size or blockage of membrane pores. Membrane fouling can cause severe flux drops and affects the quality of the water produced. Severe fouling may require intense chemical cleaning or membrane replacement. [ 18 ] This increases the operating costs of a treatment plant. Membrane fouling has traditionally been thought to occur through four mechanisms: 1) complete pore blocking, 2) standard blocking, 3) intermediate blocking, and 4) cake layer formation. [ 2 ] There are various types of foulants: biological (bacteria, fungi), colloidal (clays, flocs), scaling (mineral precipitates), and organic (oils, polyelectrolytes, humics). Membrane fouling can be accommodated either by allowing a decrease in permeation flux while holding transmembrane pressure constant or by increasing transmembrane pressure to maintain constant flux. Most wastewater treatment plants are operated in constant flux mode, and hence fouling phenomena are generally tracked via the variation of transmembrane pressure with time. In recent reviews covering membrane applications to bioreactors, it has been shown that, as with other membrane separation processes, membrane fouling is the most serious problem affecting system performance. Fouling leads to a significant increase in hydraulic resistance, manifested as permeate flux declines or transmembrane pressure increases when the process is operated under constant-transmembrane-pressure or constant-flux conditions respectively. [ 19 ] In systems where flux is maintained by increasing transmembrane pressure, the energy required to achieve filtration increases. Frequent membrane cleaning is an alternative that significantly increases operating costs as a result of added cleaning agent costs, added production downtime, and more frequent membrane replacement. Membrane fouling results from the interaction between a membrane material and the components of the activated sludge liquor, which include biological flocs formed by a large range of living or dead microorganisms along with soluble and colloidal compounds. The suspended biomass has no fixed composition and varies with feed water composition and reactor operating conditions. Thus, though many investigations of membrane fouling have been published, the diverse range of operating conditions and feedwater matrices employed, the different analytical methods used, and the limited information reported in most studies on the suspended biomass composition, have made it difficult to establish any generic behavior pertaining to membrane fouling in membrane bioreactors specifically. Air-induced cross flow in submerged membrane bioreactors can efficiently remove or at least reduce the fouling layer on the membrane surface. A recent review reports the latest findings on applications of aeration in submerged membrane configuration and describes the performance benefits of gas bubbling. [ 17 ] The choice of aeration rate is a key parameter in submerged membrane bioreactor design, as there is generally an optimal air flow rate beyond which further increases in aeration have no benefits for preventing fouling. 
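The constant-flux behaviour described above is often rationalized with a resistance-in-series (Darcy) picture, in which a growing fouling resistance shows up as a rising transmembrane pressure at fixed flux. The following Python sketch illustrates this under assumed, purely illustrative values for the viscosity, resistances and flux; it is not a calibrated membrane bioreactor model.

```python
# Resistance-in-series sketch of fouling at constant flux:
# J = TMP / (mu * (Rm + Rf))  =>  TMP = J * mu * (Rm + Rf).
# At constant flux J, a growing fouling resistance Rf raises the TMP.

MU = 1.0e-3            # water viscosity [Pa*s]
RM = 1.0e12            # clean-membrane resistance [1/m], assumed
FLUX = 25.0 / 3.6e6    # 25 L/(m2*h) expressed in m/s

def tmp_at_constant_flux(fouling_resistance: float) -> float:
    """TMP [Pa] needed to hold FLUX as the fouling resistance Rf grows."""
    return FLUX * MU * (RM + fouling_resistance)

for hours, rf in [(0, 0.0), (24, 2e11), (48, 5e11)]:  # assumed Rf growth
    print(f"t = {hours:2d} h  ->  TMP = {tmp_at_constant_flux(rf) / 1000:.1f} kPa")
```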
Many other antifouling strategies can be applied in membrane bioreactor applications. They include, for example: In addition, different types and intensities of chemical cleaning may also be recommended on typical schedules: Intensive cleaning may also be carried out when further filtration cannot be sustained because of an elevated transmembrane pressure. Each of the four membrane bioreactor suppliers Kubota, Evoqua, Mitsubishi and GE Water has its own chemical cleaning recipes; these differ mainly in terms of concentrations and methods (see Table 1). Under normal conditions, the prevalent cleaning agents are NaOCl (sodium hypochlorite) and citric acid. It is common for membrane bioreactor suppliers to adapt specific protocols for chemical cleaning (i.e. chemical concentrations and cleaning frequencies) to individual facilities. [ 9 ] Simply due to the high number of microorganisms in membrane bioreactors, pollutant uptake rates can be increased. This leads to better degradation in a given time span or to smaller required reactor volumes. In comparison to the conventional activated sludge process, which typically achieves 95 percent removal, removal can be increased to 96 to 99 percent in membrane bioreactors (see table, [ 21 ] ). Chemical oxygen demand (COD) and biological oxygen demand (BOD5) removal is found to increase with mixed liquor suspended solids concentration. Above 15 g/L, COD removal becomes almost independent of biomass concentration, at >96 percent. [ 22 ] Arbitrarily high suspended solids concentrations are not employed, however, lest oxygen transfer be impeded by higher viscosity and non-Newtonian viscosity effects. Kinetics may also differ due to easier substrate access. In typical activated sludge process treatment, flocs may reach several hundred μm in size. This means that the substrate can reach the active sites only by diffusion, which causes an additional resistance and limits the overall reaction rate (diffusion-controlled). Hydrodynamic stress in membrane bioreactors reduces floc size (to 3.5 μm in side stream configurations) and thereby increases the effective reaction rate. As in the conventional activated sludge process, sludge yield decreases at higher solids retention times or biomass concentrations. Little or no sludge is produced at sludge loading rates of 0.01 kgCOD/(kgMLSS·d). [ 23 ] Because of the imposed biomass concentration limit, such low loading rates would result in enormous tank sizes or long hydraulic residence times in conventional activated sludge processes. Nutrient removal is one of the main concerns in modern wastewater treatment, especially in areas that are sensitive to eutrophication. Nitrogen (N) is a pollutant present in wastewater that must be eliminated for multiple reasons: it reduces dissolved oxygen in surface waters, is toxic to the aquatic ecosystem, poses a risk to public health and, together with phosphorus (P), is responsible for the excessive growth of photosynthetic organisms like algae. All these factors make its removal a focus of wastewater treatment. In wastewater, nitrogen can be present in multiple forms. As in the conventional activated sludge process, currently the most widely applied technology for N-removal from municipal wastewater is nitrification combined with denitrification, carried out by nitrifying bacteria together with facultative denitrifying organisms.
Besides phosphorus precipitation, enhanced biological phosphorus removal can be implemented, which requires an additional anaerobic process step. Some characteristics of membrane bioreactor technology render enhanced biological phosphorus removal in combination with post-denitrification an attractive alternative that achieves very low nutrient effluent concentrations. [ 22 ] To this end, a membrane bioreactor improves the retention of solids, which provides better biotreatment and supports the development of slower-growing microorganisms, especially nitrifiers, making membrane bioreactors particularly effective for N removal (nitrification). Anaerobic membrane bioreactors (sometimes abbreviated AnMBR) were introduced in the 1980s in South Africa. However, anaerobic processes are normally used when a low-cost treatment is required that enables energy recovery but does not achieve advanced treatment (low carbon removal, no nutrient removal). In contrast, membrane-based technologies enable advanced treatment (disinfection), but at a high energy cost. Therefore, the combination of both can only be economically viable if a compact process for energy recovery is desired, or when disinfection is required after anaerobic treatment (cases of water reuse with nutrients). If maximal energy recovery is desired, a single anaerobic process will always be superior to a combination with a membrane process. Recently, anaerobic membrane bioreactors have seen successful full-scale application to the treatment of some types of industrial wastewaters, typically high-strength wastes. Example applications include the treatment of alcohol stillage wastewater in Japan [ 24 ] and the treatment of salad dressing/barbecue sauce wastewater in the United States. [ 25 ] As in any other reactor, the hydrodynamics (or mixing) within a membrane bioreactor plays an important role in determining the pollutant removal and fouling control within the system. It has a substantial effect on energy usage and size requirements, and therefore on the whole-life cost of a membrane bioreactor. The removal of pollutants is greatly influenced by the length of time fluid elements spend in the membrane bioreactor (i.e. the residence time distribution). The residence time distribution is a description of the hydrodynamics of mixing in the system and is determined by the design of the reactor (e.g. size, inlet/recycle flow rates, wall/baffle/mixer/aerator positioning, mixing energy input). An example of the effect of mixing is that a continuous stirred-tank reactor will not have as high a pollutant conversion per unit volume of reactor as a plug flow reactor; a worked comparison is sketched below. The control of fouling, as previously mentioned, is primarily achieved via coarse bubble aeration. The distribution of bubbles around the membranes, the shear at the membrane surface for cake removal and the size of the bubbles are greatly influenced by the hydrodynamics of the system. The mixing within the system can also influence the production of possible foulants. For example, vessels that are not completely mixed (i.e. plug flow reactors) are more susceptible to the effects of shock loads, which may cause cell lysis and release of soluble microbial products. Many factors affect the hydrodynamics of wastewater processes and hence membrane bioreactors. These range from physical properties (e.g. mixture rheology and gas/liquid/solid density) to fluid boundary conditions (e.g. inlet/outlet/recycle flow rates, baffle/mixer position etc.).
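The worked comparison referenced above: for the same volume and first-order pollutant decay, the ideal plug flow reactor converts more than the continuous stirred-tank reactor. The rate constant and residence time below are assumed, illustrative values only.

```python
# Effect of mixing on conversion for first-order pollutant decay (rate k),
# comparing an ideal plug flow reactor (PFR) with a continuous
# stirred-tank reactor (CSTR) at the same hydraulic residence time tau.
import math

k = 0.5      # first-order rate constant [1/h], assumed
tau = 6.0    # hydraulic residence time [h], assumed

conversion_pfr = 1.0 - math.exp(-k * tau)     # X_PFR  = 1 - e^(-k*tau)
conversion_cstr = k * tau / (1.0 + k * tau)   # X_CSTR = k*tau / (1 + k*tau)

print(f"PFR conversion:  {conversion_pfr:.1%}")   # ~95.0%
print(f"CSTR conversion: {conversion_cstr:.1%}")  # ~75.0%
```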
However, some factors are peculiar to membrane bioreactors, including the filtration tank design (e.g. membrane type, multiple outlets attributed to membranes, membrane packing density, membrane orientation, etc.) and its operation (e.g. membrane relaxation, membrane backflush, etc.). The mixing modeling and design techniques applied to membrane bioreactors are very similar to those used for conventional activated sludge systems. They include the relatively quick and easy compartmental modeling technique, which derives only the residence time distribution of a process (e.g. the reactor) or a process unit (e.g. the membrane filtration vessel) and relies on broad assumptions about the mixing properties of each sub-unit. Computational fluid dynamics modeling, on the other hand, does not rely on broad assumptions about the mixing characteristics and instead attempts to predict the hydrodynamics from a fundamental level. It is applicable to all scales of fluid flow and can reveal much information about the mixing in a process, ranging from the residence time distribution to the shear profile on a membrane surface. A visualization of such modeling results is shown in the image. Investigations of membrane bioreactor hydrodynamics have occurred at many different scales, ranging from examination of shear stress at the membrane surface to residence time distribution analysis for a complete membrane bioreactor. Cui et al. (2003) [ 17 ] investigated the movement of Taylor bubbles [ 27 ] [ 28 ] [ 29 ] [ 30 ] through tubular membranes. Khosravi, M. (2007) [ 31 ] examined an entire membrane filtration vessel using CFD and velocity measurements. Brannock et al. (2007) [ 32 ] examined an entire MBR system using tracer study experiments and RTD analysis. Some of the advantages provided by membrane bioreactors are as follows. [ 33 ] The market for membrane bioreactors is segmented based on end-user type, such as municipal and industrial users, and end-user geography, for instance Europe, Middle East and Africa (EMEA), Asia-Pacific (APAC) and the Americas. [ 34 ] In this line, in 2016, some studies and reports showed that the APAC region took the lead in terms of market share, holding 41.90%. The EMEA region's market share was approximately 31.34%, and the Americas constituted 26.67% of the market. [ 34 ] APAC has the largest membrane bioreactor market. Developing economies such as India, China, Indonesia and the Philippines are major contributors to growth in this market region. APAC is considered one of the most disaster-prone regions in the world: in 2013, thousands of people died from water-related disasters in the region, accounting for nine-tenths of water-related deaths globally. In addition, the public water supply system in the region is not as developed as in countries such as the US, Canada and the countries of Europe. [ 34 ] The membrane bioreactor market in the EMEA region has witnessed stable growth. Countries such as Saudi Arabia, the UAE, Kuwait, Algeria, Turkey and Spain are major contributors to that growth. Scarcity of clean and fresh water is the key driver of the increasing demand for efficient water treatment technologies; in this regard, increased awareness about water treatment and safe drinking water is also driving growth. [ 34 ] Finally, the Americas region has been witnessing major demand from countries including the US, Canada, Antigua, Argentina, Brazil and Chile.
The membrane bioreactor market has grown on account of stringent regulatory enforcement towards the safe discharge of wastewater. The demand for this emerging technology comes mainly from the pharmaceuticals, food & beverages, automotive, and chemicals industries. [ 34 ]
https://en.wikipedia.org/wiki/Membrane_bioreactor
In cell biology, membrane-bound polyribosomes are attached to a cell's endoplasmic reticulum. [ 1 ] When certain proteins are synthesized by a ribosome, the ribosome can become "membrane-bound". The newly produced polypeptide chains are inserted directly into the endoplasmic reticulum by the ribosome and are then transported to their destinations. Bound ribosomes usually produce proteins that are used within the cell membrane or are expelled from the cell via exocytosis. [ 2 ] A membrane-bound polyribosome, as the name suggests, is composed of multiple ribosomes that are associated with a membrane. Proteins are synthesized via messenger ribonucleic acid (mRNA), which is released from the nucleus either into the cytoplasm or to the rough endoplasmic reticulum. [ 3 ] The rough endoplasmic reticulum branches off the cell nucleus and has multiple cisternae, or layered folds, that have interstitial space for protein extrusion. [ 3 ] Ribosomes are located either in the cytosol (the cellular fluid) or on the rough endoplasmic reticulum, and attach to messenger ribonucleic acid by separation and re-association of subunits around the strand. [ 3 ] In eukaryotic cells, the small subunit (40S) binds the messenger ribonucleic acid and moves along it codon by codon, while the large subunit (60S) attaches each amino acid coded for, making a polypeptide. [ 3 ] [ 4 ] A polysome forms when multiple ribosomes attach to the same strand of messenger ribonucleic acid. [ 3 ] The polypeptides ribosomes produce go on to be cell structural proteins, enzymes and many other things. [ 3 ] Ribosomes can also sometimes be associated with chloroplasts and mitochondria, but these are not membrane-bound. [ 3 ] Free-floating ribosomes can become membrane-bound through a process called translocation. [ 5 ] Through translocation, ribosomes in the cytosol producing proteins are moved to and attached to the membrane. [ 3 ] This process is responsible for development of the rough endoplasmic reticulum. [ 3 ] First, ribosomes begin protein synthesis at the N-terminus. [ 3 ] The first part of the polypeptide may be a signal sequence that tells the ribosome that the protein must be extruded into the rough endoplasmic reticulum. [ 3 ] The signal sequence triggers translocation by binding with a signal recognition particle (SRP), also located in the cytosol. [ 3 ] The signal recognition particle allows recognition and binding via a signal recognition particle receptor on the target membrane's surface. [ 3 ] The signal recognition particle receptor and signal recognition particle are both attached to a translocon and bound with guanosine triphosphate (GTP). [ 3 ] This guanosine triphosphate is hydrolyzed for energy, opening the translocon and allowing the ribosome to attach via its 60S subunit so that its signal sequence can enter the lumen, or interstitial space, of the rough endoplasmic reticulum. [ 3 ] The signal recognition particle and signal recognition particle receptor detach and can be recycled. [ 3 ] The signal sequence is cleaved inside the lumen of the rough endoplasmic reticulum, and the ribosome continues to produce the protein into the endoplasmic reticulum, where it is folded. [ 3 ] Upon completion of protein synthesis, the translocon closes and the ribosome detaches. During translocation, translation briefly stops until binding with the membrane is finished.
[ 3 ] It is also important to remember that ribosomes can associate and dissociate with the endoplasmic reticulum as needed for protein synthesis. [ 6 ] After synthesis into the rough endoplasmic reticulum, proteins may travel to the end of the rough endoplasmic reticulum, where they are exocytosed, or packaged into small vesicles formed via cleavage of the membrane of the rough endoplasmic reticulum. These vesicles are sent to the Golgi apparatus for sorting and release as needed by the cell. [ 3 ] Some proteins are released immediately, as the cell is in constant need of them, while some proteins are stored for release upon a signal. [ 3 ] The idea that translation and translocation occur simultaneously, except in some yeasts, was confirmed via microsomes. [ 3 ] Microsomes are small vesicles of rough endoplasmic reticulum membrane formed after disruption of the organelle via homogenization. [ 7 ] Homogenization is the physical disruption of cells. [ 7 ] Microsomes form after homogenization because of the membranous nature of the endoplasmic reticulum. [ 3 ] In a lipid bilayer, hydrophobic tails must come together and hydrophilic heads must face the external aqueous environment. [ 3 ] In an experiment, proteins were synthesized by ribosomes with microsomes added simultaneously and with microsomes added after synthesis. [ 3 ] In the group where microsomes were added simultaneously, the proteins were synthesized into the microsome with the signal sequence cleaved. [ 3 ] In the group where microsomes were added after protein synthesis, the proteins were located outside the microsome and retained their signal sequence. [ 3 ] Therefore, it is possible to tell whether a protein has been extruded into a microsome by its length (lack of an N-terminal signal sequence if extruded), resistance to proteases, lack of resistance to proteases in the presence of detergents, and glycosylation. [ 3 ] It was confirmed via SDS-PAGE of proteins in the presence of and without microsomes that non-extruded proteins are longer. [ 3 ] Protease resistance is due to the protection provided by the surrounding endoplasmic reticulum. [ 3 ] Glycosylation occurs via glycosyltransferases to help with folding and stabilization of proteins. [ 3 ] The cleavage of the signal sequence, resistance to proteases and glycosylation provided by the endoplasmic reticulum allow membrane-bound polyribosomes to produce proteins more effectively. [ 3 ] Presence of the signal sequence makes the protein bulkier, a different shape, and harder to store until the unusable signal sequence can be cleaved. [ 3 ] The protection from proteases provided by the endoplasmic reticulum prevents the protein from being degraded as it is formed. [ 3 ] Extrusion into the endoplasmic reticulum also ensures that the protein folds correctly. [ 3 ] Resident endoplasmic reticulum proteins like binding protein (BiP), protein disulfide isomerase (PDI) and glycosyltransferases (GTs) are all responsible for ensuring correct protein folding and stabilization as the protein is assembled. [ 3 ] Binding protein can actively help fold, or prevent folding of, proteins, while protein disulfide isomerase promotes the formation of disulfide bridges. [ 3 ] Glycosyltransferases promote glycosylation, the incorporation of a carbohydrate to improve the rigidity or structure of a protein. [ 3 ] Failure of proteins to fold correctly may result in the unfolded protein response.
[ 3 ] Unfolded proteins cause swelling of the endoplasmic reticulum as more unfolded proteins continue to be produced. [ 3 ] The unfolded protein response can result in endoplasmic reticulum stress, phosphorylation of PERK, phosphorylation of eIF2α, downregulation of protein production, and possibly apoptosis. [ 3 ] Apoptosis of affected cells may result in a disease like amyotrophic lateral sclerosis (ALS). [ 3 ] In amyotrophic lateral sclerosis, motor neurons undergo endoplasmic reticulum stress because of misfolded SOD1 proteins and undergo apoptosis, resulting in loss of nerve transmission and loss of muscle control. [ 3 ] Eventually, those with amyotrophic lateral sclerosis die from the loss of the nerve impulses that drive breathing.
https://en.wikipedia.org/wiki/Membrane_bound_polyribosome
Membrane distillation (MD) is a thermally driven separation process in which separation is driven by phase change. A hydrophobic membrane presents a barrier to the liquid phase, allowing the vapour phase (e.g. water vapour) to pass through the membrane's pores. [ 1 ] The driving force of the process is a partial vapour pressure difference, commonly triggered by a temperature difference. [ 2 ] [ 3 ] Most processes that use a membrane to separate materials rely on a static pressure difference as the driving force between the two bounding surfaces (e.g. reverse osmosis, RO), a difference in concentration (dialysis), or an electric field (ED). [ 4 ] The selectivity of a membrane can be due to the relation of the pore size to the size of the substance being retained, or its diffusion coefficient, or its electrical polarity. Membranes used for membrane distillation (MD) inhibit the passage of liquid water while allowing permeability for free water molecules and thus for water vapour. [ 1 ] These membranes are made of hydrophobic synthetic material (e.g. PTFE, PVDF or PP) and offer pores with a standard diameter between 0.1 and 0.5 μm (3.9 × 10−6 and 1.97 × 10−5 in). As water has strong dipole characteristics, whilst the membrane fabric is non-polar, the membrane material is not wetted by the liquid. [ 5 ] Even though the pores are considerably larger than the molecules, the high surface tension of water prevents the liquid phase from entering the pores. A convex meniscus develops into the pore. [ 6 ] This effect is named capillary action. Amongst other factors, the depth of impression can depend on the external pressure load on the liquid. A measure of the infiltration of the pores by the liquid is the contact angle Θ = 90° + Θ'. As long as Θ > 90°, and accordingly Θ' > 0°, no wetting of the pores will take place. If the external pressure rises above the so-called liquid entry pressure, then Θ' = 0°, resulting in a bypass of the pore. The driving force which delivers the vapour through the membrane, in order to collect it on the permeate side as product water, is the partial water vapour pressure difference between the two bounding surfaces. This partial pressure difference is the result of a temperature difference between the two bounding surfaces. As can be seen in the image, the membrane is charged with a hot feed flow on one side and a cooled permeate flow on the other side. The temperature difference through the membrane, usually between 5 and 20 K, conveys a partial pressure difference which ensures that the vapour developing at the membrane surface follows the pressure drop, permeating through the pores and condensing on the cooler side. [ 7 ] Many different membrane distillation techniques exist. The basic four techniques mainly differ in the arrangement of their distillate channel or the manner in which this channel is operated. The following technologies are most common: In DCMD, both sides of the membrane are charged with liquid: hot feed water on the evaporator side and cooled permeate on the permeate side. The condensation of the vapour passing through the membrane happens directly inside the liquid phase at the membrane boundary surface. Since the membrane is the only barrier blocking the mass transport, relatively high surface-related permeate flows can be achieved with DCMD. [ 8 ] A disadvantage is the high sensible heat loss, as the insulating properties of the single membrane layer are low.
However, a high heat loss between evaporator and condenser is also a result of the single membrane layer. This lost heat is not available to the distillation process, thus lowering the efficiency. [ 9 ] Unlike in other configurations of membrane distillation, in DCMD the cooling across the membrane is provided by the permeate flow rather than by feed preheating. Therefore, an external heat exchanger is also needed to recover heat from the permeate, and the high flow rate of the feed must be carefully optimized. [ 10 ] In air-gap MD, the evaporator channel resembles that in DCMD, whereas the permeate gap lies between the membrane and a cooled wall and is filled with air. The vapour passing through the membrane must additionally overcome this air gap before condensing on the cooler surface. The advantage of this method is the high thermal insulation towards the condenser channel, thus minimizing heat conduction losses. However, the disadvantage is that the air gap represents an additional barrier to mass transport, reducing the surface-related permeate output compared to DCMD. [ 13 ] A further advantage over DCMD is that volatile substances with a low surface tension, such as alcohol or other solvents, can be separated from diluted solutions, due to the fact that there is no contact between the liquid permeate and the membrane in AGMD. AGMD is especially advantageous compared to alternatives at higher salinity. [ 14 ] Variations on AGMD can include hydrophobic condensing surfaces [ 15 ] or porous condensers [ 16 ] for improved flux and energy efficiency. In AGMD, uniquely important design features include gap thickness, condensing surface hydrophobicity, gap spacer design and tilt angle. [ 17 ] Sweeping-gas MD, also known as air stripping, uses a channel configuration with an empty gap on the permeate side. This configuration is the same as in AGMD. Condensation of the vapour takes place outside the MD module in an external condenser. As with AGMD, volatile substances with a low surface tension can be distilled with this process. [ 18 ] The advantage of SWGMD over AGMD is the significant reduction of the barrier to mass transport through forced flow. Hereby, higher surface-related product water mass flows can be achieved than with AGMD. A disadvantage of SWGMD, caused by the gas component and therefore the higher total mass flow, is the necessity of a higher condenser capacity. When using smaller gas mass flows, there is a risk of the gas heating itself at the hot membrane surface, thus reducing the vapour pressure difference and therefore the driving force. One solution to this problem, for SWGMD and for AGMD, is the use of a cooled wall for the permeate channel, maintaining its temperature by flushing it with gas. [ 19 ] Vacuum MD contains an air-gap channel configuration. Once it has passed through the membrane, the vapour is sucked out of the permeate channel and condenses outside the module, as with SWGMD. VCMD and SWGMD can be applied for the separation of volatile substances from an aqueous solution or for the generation of pure water from concentrated salt water. One advantage of this method is that undissolved inert gases blocking the membrane pores are sucked out by the vacuum, leaving a larger effective membrane surface active. [ 20 ] Furthermore, a reduction of the boiling point results in a comparable amount of product at lower overall temperatures and lower temperature differences through the membrane.
A lower required temperature difference leaves a lower total and specific thermal energy demand. However, the generation of a vacuum, which must be adjusted to the salt water temperature, requires complex technical equipment and is therefore a disadvantage of this method. The electrical energy demand is much higher than with DCMD and AGMD. An additional problem is the increase of the pH value due to the removal of CO2 from the feed water. For vacuum membrane distillation to be efficient, it is often run in multistage configurations. [ 21 ] In the following, the principal channel configuration and operating method of a standard DCMD module, as well as of a DCMD module with separate permeate gap, shall be explained. The design in the adjacent image depicts a flat channel configuration, but can also be understood as a schema for flat-sheet, hollow-fibre or spiral-wound modules. The complete channel configuration consists of a condenser channel with inlet and outlet and an evaporator channel with inlet and outlet. These two channels are separated by the hydrophobic, microporous membrane. For cooling, the condenser channel is flooded with fresh water, and the evaporator with, for example, salty feed water. The coolant enters the condenser channel at a temperature of 20 °C (68 °F). After passing through the membrane, the vapour condenses in the cooling water, releasing its latent heat and leading to a temperature increase in the coolant. Sensible heat conduction also heats the cooling water through the surface of the membrane. Due to the mass transport through the membrane, the mass flow in the evaporator decreases whilst that in the condenser channel increases by the same amount. The mass flow of pre-heated coolant leaves the condenser channel at a temperature of about 72 °C (162 °F) and enters a heat exchanger, thus pre-heating the feed water. This feed water is then delivered to a further heat source and finally enters the evaporator channel of the MD module at a temperature of 80 °C (176 °F). The evaporation process extracts latent heat from the feed flow, which cools down the feed increasingly in the flow direction. Additional heat reduction occurs due to sensible heat passing through the membrane. The cooled feed water leaves the evaporator channel at approximately 28 °C. The temperature difference between the condenser inlet and the evaporator outlet, and that between the condenser outlet and the evaporator inlet, are about equal. In a PGMD module, the permeate channel is separated from the condenser channel by a condensation surface. This enables the direct use of a salt water feed as coolant, since it does not come into contact with the permeate. Considering this, the cooling or feed water entering the condenser channel at a temperature T1 can now also be used to cool the permeate. Condensation of vapour takes place inside the liquid permeate. Pre-heated feed water that was used to cool the condenser can be conducted directly to a heat source for final heating, after leaving the condenser at a temperature T2. After it has reached temperature T3, it is guided into the evaporator. Permeate is extracted at temperature T5 and the cooled brine is discharged at temperature T4. An advantage of PGMD over DCMD is the direct use of feed water as cooling liquid inside the module, and therefore the necessity of only one heat exchanger to heat the feed before entering the evaporator. Hereby, heat conduction losses are reduced and expensive components can be cut. A further advantage is the separation of permeate from coolant.
Therefore, the permeate does not have to be extracted later in the process, and the coolant's mass flow in the condenser channel remains constant. The low flow velocity of the permeate in the permeate gap is a disadvantage of this configuration, as it leads to poor heat conduction from the membrane surface to the condenser wall. High temperatures at the membrane's bounding surface on the permeate side are the result of this effect (temperature polarisation), which lowers the vapour-pressure difference and therefore the driving force of the process. It is beneficial, however, that heat conduction losses through the membrane are also lowered by this effect. This poor gap heat conduction is largely remedied in a variant of PGMD called conductive-gap membrane distillation (CGMD), which adds thermally conductive spacers to the gaps. [ 22 ] [ 23 ] Compared to AGMD, PGMD or CGMD achieves a higher surface-related permeate output, as the mass flow is not additionally inhibited by the diffusion resistance of an air layer. [ 7 ] The typical vacuum multi-effect membrane distillation (e.g. the memsys brand V-MEMD) module consists of a steam raiser, evaporation–condensation stages, and a condenser. Each stage recovers the heat of condensation, providing a multiple-effect design. Distillate is produced in each evaporation–condensation stage and in the condenser. [ 24 ] Steam raiser: The heat produced by the external heat source (e.g. solar thermal or waste heat) is exchanged in the steam raiser. The water in the steam raiser is at a lower pressure (e.g. 400 hPa) than the ambient. The hot steam flows to the first evaporation–condensation stage (stage 1). Evaporation–condensation stages: Stages are composed of alternating hydrophobic-membrane and foil (polypropylene, PP) frames. Feed (e.g. seawater) is introduced into stage 1 of the module. Feed flows serially through the evaporation–condensation stages. At the end of the last stage, it is ejected as brine. Stage 1: Steam from the steam raiser condenses on a PP foil at pressure level P1 and corresponding temperature T1. The combination of a foil and a hydrophobic membrane creates a channel for the feed, where the feed is heated by the heat of condensation of the vapour from the steam raiser. The feed evaporates under the negative pressure P2. The vacuum is always applied to the permeate side of the membranes. Stages 2, 3, 4, …: This process is repeated in the further stages, each of which is at a lower pressure and temperature. Condenser: The vapour produced in the final evaporation–condensation stage is condensed in the condenser, using the coolant flow (e.g. seawater). Distillate production: Condensed distillate is transported via the bottom of each stage by the pressure difference between stages. Design of the memsys module: Inside each memsys frame, and between frames, channels are created. Foil frames are the ‘distillate channels’. Membrane frames are the ‘vapour channels’. Between foil and membrane frames, ‘feed channels’ are created. Vapour enters the stage and flows into parallel foil frames. The only option for the vapour entering the foil frames is to condense, i.e. the vapour enters a ‘dead-end’ foil frame. Although it is called a ‘dead-end’ frame, it does contain a small channel to remove the non-condensable gases and to apply the vacuum. The condensed vapour flows into a distillate channel.
The heat of condensation is transported through the foil and is immediately converted into evaporation energy, generating new vapour in the seawater feed channel. The feed channel is bounded by one condensing foil and one membrane. The vapour leaves the membrane channels and is collected in a main vapour channel. The vapour leaves the stage via this channel and enters the next stage. Memsys has developed a highly automated production line for the modules, which can easily be extended. As the memsys process works at modest temperatures (less than 90 °C or 194 °F) and moderate negative pressure, all module components are made of polypropylene (PP). This eliminates corrosion and scaling and allows large-scale, cost-efficient production. Membrane distillation is very suitable for compact, solar-powered desalination units providing small and medium outputs of less than 10,000 litres per day (2,600 US gal/d). [ 25 ] The spiral-wound design patented by GORE in 1985 is especially well suited to this application. Within the MEMDIS project, which started in 2003, the Fraunhofer Institute for Solar Energy Systems ISE began developing MD modules as well as installing and analysing two different solar-powered operating systems, together with other project partners. The first system type is a so-called compact system, designed to produce a drinking-water output of 100–120 litres per day (26–32 US gal/d) from sea or brackish water. The main aim of the system design is a simple, self-sufficient, low-maintenance and robust plant for target markets in arid and semi-arid areas with little infrastructure. The second system type is a so-called two-loop plant with a capacity of around 2,000 litres per day (530 US gal/d). Here, the collector circuit is separated from the desalination circuit by a saltwater-resistant heat exchanger. [ 7 ] Based on these two system types, a number of prototypes were developed, installed and observed. The standard configuration of today's (2011) compact system is able to produce a distillate output of up to 150 litres per day (40 US gal/d). The required thermal energy is supplied by a 6.5 m 2 (70 sq ft) solar thermal collector field. Electrical energy is supplied by a 75 W PV module. This system type is being developed further and marketed by Solar Spring GmbH, a spin-off of the Fraunhofer Institute for Solar Energy Systems. Within the MEDIRAS project, a further EU project, an enhanced two-loop system was installed on the island of Gran Canaria. Built inside a 6.1 m (20 ft) container and equipped with a collector array of 225 m 2 (2,420 sq ft) and a heat storage tank, it makes a distillate output of up to 3,000 litres per day (790 US gal/d) possible. Further applications with up to 5,000 litres per day (1,300 US gal/d) have also been implemented, either 100% solar powered or as hybrid projects in combination with waste heat. The operation of membrane distillation systems faces several major barriers that may impair operation or prevent it from being a viable option. The principal challenge is membrane wetting, where saline feed leaks through the membrane, contaminating the permeate. [ 1 ] This is especially caused by membrane fouling, where particulates, salts, or organic matter deposit on the membrane surface. [ 26 ]
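Wetting resistance is commonly quantified by the liquid entry pressure (LEP), the minimum transmembrane pressure at which feed liquid penetrates the largest pores. Below is a minimal sketch based on the Young–Laplace relation; the surface tension, contact angle, and pore size are illustrative assumptions, not values from the cited studies.

```python
import math

def liquid_entry_pressure(gamma, theta_deg, r_max, B=1.0):
    """Young-Laplace estimate of the liquid entry pressure (Pa).

    gamma     : liquid surface tension (N/m)
    theta_deg : liquid-membrane contact angle (degrees, >90 for hydrophobic)
    r_max     : largest pore radius (m)
    B         : pore geometry factor (1.0 for ideal cylindrical pores)
    """
    return -2.0 * B * gamma * math.cos(math.radians(theta_deg)) / r_max

# Illustrative values: seawater-like surface tension, a PTFE-like contact
# angle of 120 degrees, and a 0.2 micrometre maximum pore radius.
lep = liquid_entry_pressure(gamma=0.072, theta_deg=120.0, r_max=0.2e-6)
print(f"Estimated LEP: {lep / 1e5:.1f} bar")  # ~3.6 bar for these inputs
```

Fouling deposits tend to reduce the effective contact angle; as the angle approaches 90°, the estimated LEP collapses toward zero, which is one way to see why fouling promotes wetting.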
Techniques to mitigate fouling include membrane superhydrophobicity, [ 27 ] [ 28 ] air backwashing to reverse [ 1 ] or prevent wetting, [ 29 ] choosing non-fouling operating conditions, [ 30 ] and maintaining air layers on the membrane surface. [ 29 ] The single biggest challenge for membrane distillation to become cost-effective is energy efficiency. Commercial systems have not reached energy consumption competitive with the leading thermal technologies such as multiple-effect distillation , although some have come close, [ 31 ] and research has shown potential for significant improvements in energy efficiency. [ 22 ]
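Two quantities recur throughout this article: the transmembrane vapour-pressure difference, which is the driving force in every MD configuration, and the gained output ratio (GOR), a common measure of thermal efficiency. The sketch below estimates both; the Antoine constants are the standard ones for water, while the membrane-surface temperatures, distillate rate, and heat input are assumptions for illustration only.

```python
def p_sat_water(T_celsius):
    """Saturation vapour pressure of water (Pa), Antoine equation.
    Constants valid roughly between 1 and 100 degrees C."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + T_celsius))
    return p_mmhg * 133.322

# Transmembrane driving force for assumed membrane-surface temperatures
T_feed, T_perm = 80.0, 30.0          # degrees C, illustrative
dp = p_sat_water(T_feed) - p_sat_water(T_perm)
print(f"Vapour-pressure driving force: {dp / 1000:.1f} kPa")  # ~43 kPa

# Gained output ratio: latent heat delivered as distillate per heat input
h_fg = 2.33e6                         # J/kg, latent heat near 80 degrees C
m_distillate = 1.0                    # kg/s, assumed
Q_input = 1.5e6                       # W of external heat, assumed
gor = m_distillate * h_fg / Q_input
print(f"GOR: {gor:.2f}")
```

A GOR above 1 indicates that internal heat recovery lets the plant distil more water than the external heat input alone could evaporate, which is the goal of multi-effect designs such as V-MEMD.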
https://en.wikipedia.org/wiki/Membrane_distillation
A membrane electrode assembly ( MEA ) is an assembled stack of proton-exchange membranes (PEM) or alkali anion-exchange membranes (AAEM), catalyst and flat plate electrodes used in fuel cells and electrolyzers . [ 1 ] [ 2 ] The PEM is sandwiched between two electrodes which have the catalyst embedded in them. The electrodes are electrically insulated from each other by the PEM. These two electrodes make up the anode and cathode respectively. The PEM is typically a fluoropolymer (PFSA) proton-permeable electrical insulator barrier. Hydrocarbon variants are currently being developed and are expected to succeed fluoropolymers. This barrier allows the transport of protons from the anode to the cathode through the membrane but forces the electrons to travel around a conductive path to the cathode. The most commonly used Nafion PEMs are Nafion XL, 112, 115, 117, and 1110. The electrodes are heat-pressed onto the PEM. Commonly used materials for these electrodes are carbon cloth or carbon-fiber papers. [ 3 ] NuVant produces a carbon cloth called ELAT which maximizes gas transport to the PEM as well as moving water vapor away from the PEM. Embedding ELAT with noble-metal catalyst allows this carbon cloth to also act as the electrode. Many other methods and procedures also exist for the production of MEAs, and these are quite similar between fuel cells and electrolyzers . [ 1 ] Platinum is one of the most commonly used catalysts; however, other platinum-group metals are also used. Ruthenium and platinum are often used together if carbon monoxide (CO) is a product of the electrochemical reaction, as CO poisons the catalyst and impacts the efficiency of the fuel cell. Due to the high cost of these and other similar materials, research is being undertaken to develop catalysts that use lower-cost materials, as the high costs are still a hindering factor in the widespread economical acceptance of fuel cell technology. Current MEAs achieve a service life of 7,300 hours under cycling conditions while reducing platinum-group-metal loading to 0.2 mg/cm². [ 4 ] At this time most companies manufacturing MEAs specialize solely in high-volume production, such as W. L. Gore & Associates , Johnson Matthey , and 3M . However, there are other companies which produce MEAs in different shapes or with different catalysts or membranes for evaluation, including De Nora Tech, Fuel Cell Store, FuelCellsEtc, and many others. The global market for membrane electrode assemblies was estimated to be worth US$672 million in 2023 and is forecast to reach US$3,853 million by 2030, with a CAGR of 28.2% during the forecast period 2024–2030. [ 5 ]
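As a quick arithmetic check on the figures just quoted, compounding the 2023 estimate at the stated CAGR over seven years lands close to the 2030 forecast; the snippet below verifies this using only the numbers from the paragraph above.

```python
# Verify the quoted market forecast: US$672M (2023) growing at 28.2% CAGR
# should land near US$3,853M by 2030 (7 compounding years).
start, cagr, years = 672.0, 0.282, 7
projected = start * (1 + cagr) ** years
implied_cagr = (3853.0 / start) ** (1 / years) - 1
print(f"Projected 2030 market: US${projected:.0f}M")   # ~US$3,825M
print(f"Implied CAGR: {implied_cagr:.1%}")              # ~28.3%
```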
https://en.wikipedia.org/wiki/Membrane_electrode_assembly
Membrane emulsification ( ME ) is a relatively novel technique for producing all types of single and multiple emulsions for DDS ( drug delivery systems ), solid microcarriers for encapsulation of drugs or nutrients, solder particles for surface-mount technology , and monodisperse polymer microspheres (for analytical column packing, enzyme carriers, liquid-crystal display spacers, and toner core particles). [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Membrane emulsification was introduced by Nakashima and Shimizu in the late 1980s in Japan. [ 7 ] [ 8 ] In this process, the dispersed phase is forced through the pores of a microporous membrane directly into the continuous phase. Emulsified droplets are formed and detached at the ends of the pores by a drop-by-drop mechanism. The advantages of membrane emulsification over conventional emulsification processes are that it yields very fine emulsions with controlled droplet sizes and narrow droplet-size distributions. Successful emulsification can be carried out with much lower consumption of emulsifier and energy, and because of the lowered shear-stress effect, membrane emulsification allows the use of shear-sensitive ingredients, such as starch and proteins. [ 9 ] The membrane emulsification process is generally carried out in cross-flow (continuous or batch) mode or in a stirred cell (batch). [ 10 ] [ 11 ] A major limiting factor of ME was the low dispersed-phase flux. In order to expand the industrial applications, the productivity of this method had to be increased. Some research has been aimed at solving this problem and others, such as membrane fouling. [ 12 ] [ 13 ] High dispersed-phase flux has now been shown to be possible using single-pass annular-gap crossflow membranes. [ 14 ]
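Because droplets form pore by pore, the minimum transmembrane pressure needed to push the dispersed phase through a pore can be estimated with the Young–Laplace capillary relation, and the resulting droplet diameter scales roughly with the pore diameter. The sketch below is a minimal illustration; the interfacial tension, pore size, and droplet-to-pore size factor are assumptions, not values from the cited studies.

```python
import math

def critical_pressure(gamma, d_pore, theta_deg=0.0):
    """Minimum transmembrane pressure (Pa) to force the dispersed phase
    through a cylindrical pore (Young-Laplace capillary pressure)."""
    return 4.0 * gamma * math.cos(math.radians(theta_deg)) / d_pore

# Illustrative oil-in-water case: 5 mN/m interfacial tension with
# emulsifier present, 1 micrometre pores.
gamma, d_pore = 5e-3, 1.0e-6
p_c = critical_pressure(gamma, d_pore)
print(f"Critical pressure: {p_c / 1000:.0f} kPa")   # ~20 kPa here

# Assumed rule of thumb: droplet diameter is a small multiple of the
# pore diameter for drop-by-drop detachment.
size_factor = 3.0
print(f"Expected droplet diameter: ~{size_factor * d_pore * 1e6:.0f} um")
```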
https://en.wikipedia.org/wiki/Membrane_emulsification
Membrane fouling is a process whereby a solution or a particle is deposited on a membrane surface or in membrane pores in processes such as membrane bioreactors , [ 1 ] reverse osmosis , [ 2 ] forward osmosis , [ 3 ] membrane distillation , [ 4 ] ultrafiltration , microfiltration , or nanofiltration , [ 5 ] so that the membrane's performance is degraded. It is a major obstacle to the widespread use of this technology . Membrane fouling can cause severe flux decline and affect the quality of the water produced. Severe fouling may require intensive chemical cleaning or membrane replacement. This increases the operating costs of a treatment plant . There are various types of foulants: colloidal (clays, flocs ), biological ( bacteria , fungi ), organic ( oils , polyelectrolytes , humics ) and scaling (mineral precipitates). [ 6 ] Fouling can be divided into reversible and irreversible fouling based on the attachment strength of particles to the membrane surface. Reversible fouling can be removed by a strong shear force or by backwashing . Formation of a strong fouling-layer matrix with the solute during continuous filtration can, however, transform reversible fouling into an irreversible fouling layer. Irreversible fouling is the strong attachment of particles which cannot be removed by physical cleaning. [ 7 ] Several factors affect membrane fouling. Recent fundamental studies indicate that fouling is influenced by system hydrodynamics, operating conditions, [ 8 ] membrane properties, and material (solute) properties. At low pressure, low feed concentration, and high feed velocity, concentration polarisation effects are minimal and flux is almost proportional to the trans-membrane pressure difference. However, in the high-pressure range, flux becomes almost independent of the applied pressure; this deviation from the linear flux–pressure relation is due to concentration polarisation . At a low feed flow rate or a high feed concentration, the limiting-flux situation is observed even at relatively low pressures. Flux, [ 3 ] transmembrane pressure (TMP), permeability, and resistance are the best indicators of membrane fouling. Under constant-flux operation, TMP increases to compensate for the fouling. Under constant-pressure operation, on the other hand, flux declines because of membrane fouling. In some technologies, such as membrane distillation , fouling reduces membrane rejection, and thus permeate quality (e.g. as measured by electrical conductivity) is a primary measurement for fouling. [ 8 ] Even though membrane fouling is an inevitable phenomenon during membrane filtration , it can be minimised by strategies such as cleaning, appropriate membrane selection and suitable choice of operating conditions. Membranes can be cleaned physically, biologically or chemically. Physical cleaning includes gas scour, sponges, water jets or backflushing using permeate [ 10 ] or pressurized air. [ 11 ] Biological cleaning uses biocides to remove all viable microorganisms , whereas chemical cleaning involves the use of acids and bases to remove foulants and impurities. Additionally, researchers have investigated the impact different coatings have on resistance to wear. A 2018 study from the Global Aqua Innovation Center in Japan reported improved surface-roughness properties of PA membranes by coating them with multi-walled carbon nanotubes. [ 12 ] Another strategy to minimise membrane fouling is the use of the appropriate membrane for a specific operation.
The nature of the feed water must first be known; then a membrane that is less prone to fouling with that solution is chosen. For aqueous filtration , a hydrophilic membrane is preferred, [ 13 ] while for membrane distillation a hydrophobic membrane is preferred. [ 14 ] Operating conditions during membrane filtration are also vital, as they affect how fouling develops during filtration. For instance, crossflow filtration is often preferred to dead-end filtration , because the turbulence generated during filtration entails a thinner deposit layer and therefore minimises fouling (e.g. the tubular pinch effect ). In some applications, such as many MBR applications, air scour is used to promote turbulence at the membrane surface. Membrane performance can suffer from fouling-induced mechanical degradation. This may result in unwanted pressure and flux gradients for both the solute and the solvent. The mechanism of membrane failure may be the direct consequence of fouling, by means of physical alterations to the membrane, or indirect, in which the foulant-removal processes themselves damage the membrane. It is important to note that the majority of membranes used commercially are polymers such as polyvinylidene fluoride (PVDF), polyacrylonitrile (PAN), polyethersulfone (PES) and polyamide (PA), materials that offer the elasticity and strength needed to withstand constant osmotic pressures. [ 15 ] The accumulation of foulants, however, degrades these properties through physical alterations to the membrane structure. The accumulation of foulants can lead to the formation of cracks, surface roughening, and changes in the pore-size distribution. [ 15 ] These physical changes result from impacts of hard material on the soft polymer membrane, weakening its structural integrity. Degradation of the mechanical structure makes the membranes more susceptible to mechanical damage, potentially reducing their overall lifespan. A 2006 study observed this degradation by uniaxially straining hollow fibers that were both clean and fouled; the researchers reported the relative embrittlement of the fouled fibers. [ 16 ] Beyond direct physical damage, fouling can also induce indirect effects on membrane mechanical properties through the strategies used to combat it. Backwashing subjects not only the particulates but also the membrane to strong shear forces. A greater fouling frequency therefore exposes the membrane to cyclic loading, which can lead to fatigue failure . This is a process whereby existing imperfections in the membrane (such as microcracks) can grow and propagate under the complex stress-state dynamics. These impacts are documented: a 2007 study simulated aging via cyclic backwash pulses and reported similar embrittlement. [ 17 ] Additionally, repeated chemical treatment of fouling subjects membranes to excessive amounts of chlorine or other treatment chemicals, which can cause degradation. [ 18 ] This chemical degradation can lead to delamination of the membrane components, ultimately leading to failure.
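The flux decline and TMP rise described above are commonly interpreted with a resistance-in-series model, in which a growing fouling resistance adds to the clean-membrane resistance. The sketch below is a minimal illustration; the viscosity, pressure, and resistance values are assumed for illustration, not taken from the cited studies.

```python
def flux(tmp, mu, r_membrane, r_fouling):
    """Permeate flux (m3/m2/s) from the resistance-in-series model:
    J = TMP / (mu * (Rm + Rf))."""
    return tmp / (mu * (r_membrane + r_fouling))

mu = 1.0e-3            # Pa*s, water near 20 C
r_m = 1.0e12           # 1/m, assumed clean-membrane resistance
tmp = 1.0e5            # Pa (1 bar)

# Constant-pressure operation: flux declines as the fouling layer grows.
for r_f in (0.0, 0.5e12, 2.0e12):
    j = flux(tmp, mu, r_m, r_f)
    print(f"Rf = {r_f:.1e} 1/m -> flux = {j * 3.6e6:.0f} L/m2/h")
```

Under constant-flux operation the same model runs in reverse: the required TMP, J·μ·(Rm + Rf), rises as the fouling resistance grows.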
https://en.wikipedia.org/wiki/Membrane_fouling
Gas mixtures can be effectively separated by synthetic membranes made from polymers such as polyamide or cellulose acetate , or from ceramic materials. [ 1 ] While polymeric membranes are economical and technologically useful, they are bounded in their performance by the Robeson limit (permeability must be sacrificed for selectivity and vice versa). [ 2 ] This limit affects polymeric membrane use for CO 2 separation from flue gas streams, since mass transport becomes limiting and CO 2 separation becomes very expensive due to low permeabilities. Membrane materials have expanded into the realm of silica , zeolites , metal-organic frameworks , and perovskites due to their strong thermal and chemical resistance as well as high tunability (ability to be modified and functionalized), leading to increased permeability and selectivity. Membranes can be used for separating gas mixtures, where they act as a permeable barrier through which different compounds move at different rates or not at all. The membranes can be nanoporous, polymeric, etc., and the gas molecules penetrate according to their size, diffusivity , or solubility. Gas separation across a membrane is a pressure-driven process, where the driving force is the difference in pressure between the inlet of the raw material and the outlet of the product. The membrane used in the process is generally a non-porous layer, so there will not be severe leakage of gas through the membrane. The performance of the membrane depends on permeability and selectivity. Permeability is affected by the penetrant size: larger gas molecules have a lower diffusion coefficient. The polymer chain flexibility and the free volume in the polymer of the membrane material influence the diffusion coefficient, as the space within the permeable membrane must be large enough for the gas molecules to diffuse across. The solubility is expressed as the ratio of the concentration of the gas in the polymer to the pressure of the gas in contact with it. Permeability is the ability of the membrane to allow the permeating gas to diffuse through the membrane material as a consequence of the pressure difference over the membrane, and can be measured in terms of the permeate flow rate, membrane thickness and area, and the pressure difference across the membrane. The selectivity of a membrane is a measure of the ratio of the permeabilities of the relevant gases for the membrane; it can be calculated as the ratio of the permeabilities of two gases in binary separation. [ 3 ] Membrane gas separation equipment typically pumps gas into the membrane module, and the targeted gases are separated based on differences in diffusivity and solubility. For example, oxygen will be separated from the ambient air and collected at the upstream side, and nitrogen at the downstream side. As of 2016, membrane technology was reported as capable of producing 10 to 25 tonnes of 25 to 40% oxygen per day. [ 3 ] There are three main diffusion mechanisms. The first, Knudsen diffusion , holds at very low pressures, where lighter molecules can move across a membrane faster than heavy ones, in a material with reasonably large pores. [ 4 ] The second, molecular sieving , is the case where the pores of the membrane are too small to let one component pass; this is typically not practical in gas applications, as the molecules are too small to design relevant pores.
In membranes with relatively large pores, the movement of molecules is best described by pressure-driven convective flow through capillaries, which is quantified by Darcy's law . The more general model in gas applications, however, is solution-diffusion, in which particles first dissolve into the membrane and then diffuse through it at different rates. This model is employed when the pores in the polymer membrane appear and disappear faster relative to the movement of the particles. [ 5 ] In a typical membrane system, the incoming feed stream is separated into two components: permeant and retentate. The permeant is the gas that travels across the membrane, and the retentate is what is left of the feed. On both sides of the membrane, a gradient of chemical potential is maintained by a pressure difference, which is the driving force for the gas molecules to pass through. The ease of transport of each species is quantified by the permeability , P i . With the assumptions of ideal mixing on both sides of the membrane, the ideal gas law , a constant diffusion coefficient and Henry's law , the flux of a species can be related to the pressure difference by Fick's law : [ 4 ]

$J_i = \frac{D_i K_i}{l}\,(p_i' - p_i'')$

where $J_i$ is the molar flux of species i across the membrane, $l$ is the membrane thickness, $D_i$ is the diffusivity, $K_i$ is the Henry coefficient, and $p_i'$ and $p_i''$ represent the partial pressures of species i at the feed and permeant sides respectively. The product $D_i K_i$ is often expressed as the permeability $P_i$ of the species i on the specific membrane being used. The flux of a second species, j, can be defined analogously:

$J_j = \frac{D_j K_j}{l}\,(p_j' - p_j'')$

With the expressions above, a membrane system for a binary mixture is sufficiently defined. It can be seen that the total flow across the membrane is strongly dependent on the relation between the feed and permeate pressures. The ratio of the feed pressure ($p'$) over the permeate pressure ($p''$) is defined as the membrane pressure ratio:

$\theta = \frac{p'}{p''}$

Writing $n_i'$ and $n_i''$ for the mole fractions of species i on the feed and permeant sides, a flow of species i (or j) across the membrane can only occur when its partial-pressure difference is positive, that is, when

$n_i'\,p' > n_i''\,p''$

In other words, the membrane will experience flow across it when there exists a concentration gradient between feed and permeate. If the gradient is positive, the flow will go from the feed to the permeate and species i will be separated from the feed. It follows that the maximum separation of species i is bounded by the pressure ratio:

$\frac{n_i''}{n_i'} \leq \theta$

Another important coefficient when choosing the optimum membrane for a separation process is the membrane selectivity, defined as the ratio of the permeability of species i to that of species j:

$\alpha_{ij} = \frac{P_i}{P_j}$

This coefficient indicates the degree to which the membrane is able to separate species i from j. A membrane selectivity of 1 indicates that the membrane has no potential to separate the two gases, since both diffuse equally through the membrane. In the design of a separation process, normally the pressure ratio and the membrane selectivity are prescribed by the pressures of the system and the permeability of the membrane . The level of separation achieved by the membrane (the concentration of the species to be separated) needs to be evaluated based on the aforementioned design parameters in order to evaluate the cost-effectiveness of the system. The concentrations of species i and j across the membrane can be evaluated from their respective diffusion flows across it.
In the case of a binary mixture, the concentration of species i on the permeant side follows from the ratio of the diffusion flows:

$n_i'' = \frac{J_i}{J_i + J_j}$

Substituting the flux expressions, writing the partial pressures as $p_i' = n_i' p'$ and $p_i'' = n_i'' p''$, and then using the relations $n_j' = 1 - n_i'$ and $n_j'' = 1 - n_i''$, this can be rearranged into a quadratic expression in $n_i''$,

$r\,(\alpha_{ij} - 1)\,(n_i'')^2 - \left[1 + (\alpha_{ij} - 1)(n_i' + r)\right] n_i'' + \alpha_{ij}\,n_i' = 0$

where $r = p''/p' = 1/\theta$. The physically meaningful root of this quadratic gives the permeant concentration:

$n_i'' = \frac{1 + (\alpha_{ij} - 1)(n_i' + r) - \sqrt{\left[1 + (\alpha_{ij} - 1)(n_i' + r)\right]^2 - 4\,\alpha_{ij}(\alpha_{ij} - 1)\,r\,n_i'}}{2\,r\,(\alpha_{ij} - 1)}$

Along the separation unit, the feed concentration decays with the diffusion across the membrane, causing the concentration at the membrane to drop accordingly. As a result, the total permeant flow (q″ out ) results from the integration of the diffusion flow across the membrane from the feed inlet (q′ in ) to the feed outlet (q′ out ). A mass balance across a differential length of the separation unit relates the local decline of the feed flow to the permeation flow through the membrane; because of the binary nature of the mixture, only one species needs to be evaluated. Prescribing a function n′ i = n′ i (x), the species balance can be integrated along the unit, and the membrane area required per unit length then follows from the local permeation flux. The material of the membrane plays an important role in its ability to provide the desired performance characteristics. It is optimal to have a membrane with a high permeability and sufficient selectivity, and it is also important to match the membrane properties to the system operating conditions (for example, pressures and gas composition). Synthetic membranes are made from a variety of polymers including polyethylene , polyamides , polyimides , cellulose acetate , polysulphone and polydimethylsiloxane . [ 7 ] Polymeric membranes are a common option for use in the capture of CO 2 from flue gas because of the maturity of the technology in a variety of industries, namely petrochemicals. The ideal polymer membrane has both a high selectivity and a high permeability . Polymer membranes are examples of systems that are dominated by the solution-diffusion mechanism. The membrane is considered to have holes in which the gas can dissolve (solubility) and through which the molecules can move from one cavity to the other (diffusion). [ 4 ] It was discovered by Robeson in the early 1990s that polymers with a high selectivity have a low permeability, and the opposite is true: materials with a low selectivity have a high permeability. This is best illustrated in a Robeson plot, where the selectivity is plotted as a function of the CO 2 permeation. In this plot, the upper bound of selectivity is approximately a linear function of the permeability on logarithmic axes. It was found that the solubility in polymers is mostly constant but the diffusion coefficients vary significantly, and this is where the engineering of the material occurs. Somewhat intuitively, the materials with the highest diffusion coefficients have a more open pore structure, thus losing selectivity. [ 8 ] [ 9 ] There are two methods that researchers are using to break the Robeson limit. One of these is the use of glassy polymers, whose phase transition and changes in mechanical properties make it appear that the material is absorbing molecules and thus surpasses the upper limit. The second method of pushing the boundaries of the Robeson limit is the facilitated transport method. As previously stated, the solubility of polymers is typically fairly constant, but the facilitated transport method uses a chemical reaction to enhance the permeability of one component without changing the selectivity. [ 10 ]
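A minimal numerical sketch of the permeant-composition formula above follows; the feed composition, selectivity, and pressure ratio are illustrative assumptions, not values from the cited literature.

```python
import math

def permeate_fraction(x_feed, alpha, theta):
    """Permeant mole fraction of the faster-permeating gas for a binary
    mixture in a perfectly mixed membrane stage (quadratic root above).

    x_feed : feed mole fraction of the faster-permeating species
    alpha  : membrane selectivity P_i / P_j (> 1)
    theta  : pressure ratio p_feed / p_permeate (> 1)
    """
    r = 1.0 / theta                      # permeate-to-feed pressure ratio
    b = 1.0 + (alpha - 1.0) * (x_feed + r)
    disc = b * b - 4.0 * alpha * (alpha - 1.0) * r * x_feed
    return (b - math.sqrt(disc)) / (2.0 * r * (alpha - 1.0))

# Example: 10% CO2 in N2, selectivity 50, pressure ratio 5 (illustrative)
y = permeate_fraction(x_feed=0.10, alpha=50.0, theta=5.0)
print(f"Permeant CO2 fraction: {y:.2f}")   # ~0.44
```

Note how the result stays below the pressure-ratio bound θ·n′ = 0.5 derived earlier: even an infinitely selective membrane could not enrich beyond that value at this pressure ratio.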
Nanoporous membranes are fundamentally different from polymer-based membranes, in that their chemistry is different and they do not follow the Robeson limit, for a variety of reasons. A nanoporous membrane can be pictured as a structure of cavities connected by window regions, with the molecules moving through the open space enclosed by the walls of the structure. In the engineering of these membranes, the size of the cavity (L cy x L cz ) and of the window region (L wy x L wz ) can be modified so that the desired permeation is achieved. It has been shown that the permeability of a membrane is the product of adsorption and diffusion. In low-loading conditions, the adsorption can be computed by the Henry coefficient. [ 4 ] If the assumption is made that the energy of a particle does not change when moving through this structure, only the entropy of the molecules changes based on the size of the openings. Considering first changes to the cavity geometry: the larger the cavity, the larger the entropy of the adsorbed molecules, which makes the Henry coefficient larger. For diffusion, an increase in entropy will lead to a decrease in free energy, which in turn leads to a decrease in the diffusion coefficient. Conversely, changing the window geometry will primarily affect the diffusion of the molecules and not the Henry coefficient. In summary, using the above simplified analysis, it is possible to understand why the upper limit of the Robeson line does not hold for nanostructures: the diffusion and Henry coefficients can be tuned largely independently of one another, so that the material can exceed the upper limit for polymer membranes. [ 4 ] Silica membranes are mesoporous and can be made with high uniformity (the same structure throughout the membrane). The high porosity of these membranes gives them very high permeabilities. Synthesized membranes have smooth surfaces and can be modified on the surface to drastically improve selectivity. Functionalizing silica membrane surfaces with amine-containing molecules (on the surface silanol groups) allows the membranes to separate CO 2 from flue gas streams more effectively. [ 2 ] Surface functionalization (and thus chemistry) can be tuned to be more efficient for wet flue gas streams as compared to dry flue gas streams. [ 11 ] While silica membranes were previously impractical because of their poor technical scalability and cost (they are very difficult to produce in an economical manner on a large scale), there have been demonstrations of a simple method of producing silica membranes on hollow polymeric supports. These demonstrations indicate that economical materials and methods can effectively separate CO 2 and N 2 . [ 12 ] Ordered mesoporous silica membranes have shown considerable potential for surface modification that allows for ease of CO 2 separation. Surface functionalization with amines leads to the reversible formation of carbamates (during CO 2 flow), increasing CO 2 selectivity significantly. [ 12 ] Zeolites are crystalline aluminosilicates with a regular repeating structure of molecular-sized pores. Zeolite membranes selectively separate molecules based on pore size and polarity and are thus highly tunable to specific gas separation processes. In general, smaller molecules and those with stronger zeolite-adsorption properties are adsorbed onto zeolite membranes with larger selectivity.
The capacity to discriminate based on both molecular size and adsorption affinity makes zeolite membranes attractive candidates for CO 2 separation from N 2 , CH 4 , and H 2 . Scientists have found that the gas-phase enthalpy (heat) of adsorption on zeolites increases as follows: H 2 < CH 4 < N 2 < CO 2 . [ 13 ] It is generally accepted that CO 2 has the largest adsorption energy because it has the largest quadrupole moment , thereby increasing its affinity for charged or polar zeolite pores. At low temperatures, the zeolite adsorption capacity is large, and the high concentration of adsorbed CO 2 molecules blocks the flow of other gases. Therefore, at lower temperatures, CO 2 selectively permeates through zeolite pores. Several recent research efforts have focused on developing new zeolite membranes that maximize the CO 2 selectivity by taking advantage of this low-temperature blocking phenomenon. Researchers have synthesized Y-type (Si:Al > 3) zeolite membranes which achieve room-temperature separation factors of 100 and 21 for CO 2 /N 2 and CO 2 /CH 4 mixtures respectively. [ 14 ] DDR-type and SAPO-34 membranes have also shown promise in separating CO 2 and CH 4 at a variety of pressures and feed compositions. [ 15 ] [ 16 ] SAPO-34 membranes, being nitrogen-selective, are also strong contenders for the natural gas sweetening process. [ 17 ] [ 18 ] [ 19 ] Researchers have also made an effort to utilize zeolite membranes for the separation of H 2 from hydrocarbons. Hydrogen can be separated from larger hydrocarbons such as C 4 H 10 with high selectivity. This is due to the molecular sieving effect, since zeolites have pores much larger than H 2 but smaller than these large hydrocarbons. Smaller hydrocarbons such as CH 4 , C 2 H 6 , and C 3 H 8 are small enough not to be separated by molecular sieving. Researchers achieved a higher selectivity for hydrogen when performing the separation at high temperatures, likely as a result of a decrease in the competitive adsorption effect. [ 20 ] There have been advances in zeolitic-imidazolate frameworks (ZIFs), a subclass of metal-organic frameworks (MOFs), that have allowed them to be useful for carbon dioxide separation from flue gas streams. Extensive modeling has been performed to demonstrate the value of using MOFs as membranes. [ 21 ] [ 22 ] MOF materials are adsorption-based, and thus can be tuned to achieve selectivity. [ 23 ] The drawback of MOF systems is their stability in water and other compounds present in flue gas streams. Select materials, such as ZIF-8, have demonstrated stability in water and benzene, components often present in flue gas mixtures. ZIF-8 can be synthesized as a membrane on a porous alumina support and has proven to be effective at separating CO 2 from flue gas streams. At a CO 2 /CH 4 selectivity similar to that of Y-type zeolite membranes, ZIF-8 membranes achieve unprecedented CO 2 permeance, two orders of magnitude above the previous standard. [ 24 ] Perovskites are mixed metal oxides with a well-defined cubic structure and a general formula of ABO 3 , where A is an alkaline-earth or lanthanide element and B is a transition metal . These materials are attractive for CO 2 separation because of the tunability of the metal sites as well as their stability at elevated temperatures. The separation of CO 2 from N 2 was investigated with an α-alumina membrane impregnated with BaTiO 3 . [ 25 ]
It was found that adsorption of CO 2 was favorable at high temperatures due to an endothermic interaction between CO 2 and the material, promoting mobile CO 2 that enhanced the CO 2 adsorption-desorption rate and surface diffusion. The experimental separation factor of CO 2 to N 2 was found to be 1.1–1.2 at 100 °C to 500 °C, which is higher than the separation-factor limit of 0.8 predicted by Knudsen diffusion . Though the separation factor was low, owing to pinholes observed in the membrane, this demonstrates the potential of perovskite materials and their selective surface chemistry for CO 2 separation. In special cases other materials can be utilized; for example, palladium membranes permit transport solely of hydrogen. [ 26 ] In addition to palladium membranes (which are typically palladium-silver alloys, to stop embrittlement of the alloy at lower temperatures), there is also a significant research effort looking into finding non-precious-metal alternatives. However, slow kinetics of exchange on the surface of the membrane and the tendency of the membranes to crack or disintegrate after a number of duty cycles or during cooling are problems yet to be fully solved. [ 27 ] Membranes are typically contained in one of three types of module. [ 7 ] Membranes are employed in a wide range of separations. [ 1 ] Oxygen-enriched air is in high demand for a range of medical and industrial applications, including chemical and combustion processes. Cryogenic distillation is the mature technology for commercial air separation for the production of large quantities of high-purity oxygen and nitrogen. However, it is a complex process, is energy-intensive, and is generally not suitable for small-scale production. Pressure swing adsorption is also commonly used for air separation and can likewise produce high-purity oxygen at medium production rates, but it still requires considerable space, high investment and high energy consumption. The membrane gas separation method is a relatively low-environmental-impact and sustainable process that provides continuous production, simple operation, lower pressure and temperature requirements, and compact space requirements. [ 28 ] [ 3 ] A great deal of research has been undertaken to utilize membranes instead of absorption or adsorption for carbon capture from flue gas streams; however, no current projects exist that utilize membranes. Process engineering, along with new developments in materials, has shown that membranes have the greatest potential for low energy penalty and cost compared to competing technologies. [ 4 ] [ 10 ] [ 29 ] Today, membranes are used for commercial separations involving N 2 from air, H 2 from ammonia in the Haber-Bosch process , natural gas purification , and tertiary-level enhanced oil recovery supply. [ 30 ] Single-stage membrane operations involve a single membrane with one selectivity value. Single-stage membranes were first used in natural gas purification, separating CO 2 from methane. [ 30 ] A disadvantage of single-stage membranes is the loss of product in the permeate due to the constraints imposed by the single selectivity value. Increasing the selectivity reduces the amount of product lost in the permeate, but comes at the cost of requiring a larger pressure difference to process an equivalent amount of a flue stream. In practice, the maximum pressure ratio economically possible is around 5:1. [ 31 ]
To combat the loss of product in the membrane permeate, engineers use “cascade processes”, in which the permeate is recompressed and interfaced with additional, higher-selectivity membranes. [ 30 ] The retentate streams can be recycled, which achieves a better yield of product. Single-stage membrane devices are not feasible for obtaining a high concentration of separated material in the permeate stream, because the pressure-ratio limit is economically unrealistic to exceed. Therefore, the use of multi-stage membranes is required to concentrate the permeate stream. The use of a second stage allows less membrane area and power to be used, because of the higher concentration that passes to the second stage as well as the lower volume of gas for the pump to process. [ 31 ] [ 10 ] Other measures, such as adding another stage that uses air to concentrate the stream, further reduce cost by increasing the concentration within the feed stream. [ 10 ] Combining multiple types of separation method allows further variation in creating economical process designs. Hybrid processes have a long-standing history in gas separation. [ 32 ] Typically, membranes are integrated into already existing processes, such that they can be retrofitted into already existing carbon capture systems. MTR (Membrane Technology and Research Inc.) and UT Austin have worked to create hybrid processes, utilizing both absorption and membranes, for CO 2 capture. First, an absorption column using piperazine as a solvent absorbs about half the carbon dioxide in the flue gas; then the use of a membrane results in 90% capture. [ 33 ] A parallel setup, with the membrane and absorption processes occurring simultaneously, is also used. Generally, these processes are most effective when the highest content of carbon dioxide enters the amine absorption column. Incorporating hybrid design processes allows for retrofitting into fossil fuel power plants. [ 33 ] Hybrid processes can also use cryogenic distillation and membranes. [ 34 ] For example, hydrogen and carbon dioxide can be separated, first using cryogenic gas separation, whereby most of the carbon dioxide exits first, then using a membrane process to separate the remaining carbon dioxide, after which it is recycled for further attempts at cryogenic separation. [ 34 ] Cost limits the pressure ratio in a membrane CO 2 separation stage to a value of about 5; higher pressure ratios eliminate any economic viability for CO 2 capture using membrane processes. [ 10 ] [ 35 ] Recent studies have demonstrated that multi-stage CO 2 capture/separation processes using membranes can be economically competitive with older and more common technologies such as amine-based absorption . [ 10 ] [ 34 ] Currently, both membrane and amine-based absorption processes can be designed to yield a 90% CO 2 capture rate. [ 29 ] [ 10 ] [ 35 ] [ 36 ] [ 33 ] [ 34 ] For carbon capture at an average 600 MW coal-fired power plant, the cost of CO 2 capture using amine-based absorption is in the $40–100 per ton of CO 2 range, while the cost of CO 2 capture using current membrane technology (including current process design schemes) is about $23 per ton of CO 2 . [ 10 ] Additionally, running an amine-based absorption process at an average 600 MW coal-fired power plant consumes about 30% of the energy generated by the power plant, while running a membrane process requires about 16% of the energy generated. [ 10 ] CO 2 transport (e.g.
to geologic sequestration sites, or to be used for EOR ) costs about $2–5 per ton of CO 2 . [ 10 ] This cost is the same for all types of CO 2 capture/separation processes such as membrane separation and absorption. [ 10 ] In terms of dollars per ton of captured CO 2 , the least expensive membrane processes being studied at this time are multi-step counter-current flow/sweep processes. [ 29 ] [ 10 ] [ 35 ] [ 36 ] [ 33 ] [ 34 ]
https://en.wikipedia.org/wiki/Membrane_gas_separation
A membrane osmometer is a device used to indirectly measure the number average molecular weight ($M_n$) of a polymer sample. One chamber contains pure solvent and the other chamber contains a solution in which the solute is a polymer with an unknown $M_n$. The osmotic pressure of the solvent across the semipermeable membrane is measured by the membrane osmometer. [ 1 ] This osmotic pressure measurement is used to calculate $M_n$ for the sample. A low-concentration solution is created by adding a small amount of polymer to a solvent. This solution is separated from pure solvent by a semipermeable membrane. Solute cannot cross the semipermeable membrane, but the solvent is able to cross the membrane. Solvent flows across the membrane to dilute the solution. The pressure required to stop the flow across the membrane is called the osmotic pressure. [ 1 ] The osmotic pressure is measured and used to calculate $M_n$. In an ideally dilute solution, van 't Hoff's law of osmotic pressure can be used to calculate $M_n$ from the osmotic pressure: [ 1 ]

$\Pi = \frac{cRT}{M_n}$

where $M_n$ is the number average molecular weight (mass/mole), $R$ is the gas constant , $T$ is the absolute temperature (typically in kelvins), $c$ is the concentration of polymer (mass/volume), and $\Pi$ is the osmotic pressure. In practice, the osmotic pressure produced by an ideally dilute solution would be too small to be accurately measured. For accurate $M_n$ measurements, solutions are not ideally dilute and a virial equation is used to account for deviations from ideal behavior and allow the calculation of $M_n$. The virial equation takes a form similar to van 't Hoff's law of osmotic pressure, but contains additional constants to account for non-ideal behavior:

$\frac{\Pi}{c} = RT\left(\frac{1}{M_n} + A_1 c + A_2 c^2 + A_3 c^3 + \dots\right)$

where the $A_n$ are constants and $c$ is still the concentration of polymer. [ 1 ] This virial equation may be represented in different additional forms:

$\frac{\Pi}{c} = \frac{A}{M_n} + Bc + Cc^2 + Dc^3 + \dots$

$\frac{\Pi}{c} = M_n\left(\Gamma_1 + \Gamma_2 c + \Gamma_3 c^2 + \Gamma_4 c^3 + \dots\right)$

where $B$ and $\Gamma$ are constants and $RTA_2 = B = \frac{RT}{M_n}\Gamma_2$. In a static osmometer, capillary tubes are attached to both the solvent and the solution compartments. In this case the osmotic pressure is provided by the additional pressure of the fluid in the solution compartment. The difference in the height of the fluid in the capillary tube of the solution compartment versus that of the solvent compartment is measured once the system reaches equilibrium, and used to calculate the osmotic pressure: [ 1 ]

$\Pi = \Delta H \rho g$

where $\Pi$ is the osmotic pressure, $\Delta H$ is the difference in height, $\rho$ is the density of the solution, and $g$ is the acceleration due to gravity. The main disadvantage of static osmometry is the long time it takes for equilibrium to be reached. It often takes 3 or more hours after the solute is added for the static osmometer to reach equilibrium. [ 2 ]
In a dynamic osmometer, the flow of solvent is measured and a counteracting pressure is created to stop the flow. The flow rate of the solvent is measured by the movement of an air bubble in a capillary tube on the solvent side. [ 2 ] The pressure of the solvent compartment is directly changed by raising or lowering a reservoir of solvent connected to the solvent compartment. [ 2 ] The pressure difference between the two compartments is the osmotic pressure. It can be calculated by measuring the change in height, or measured directly with a flexible diaphragm. [ 2 ] Since the pressure is directly changed, an accurate measurement of osmotic pressure can be achieved in 10–30 minutes. [ 2 ] Membrane osmometry measurements are best used for $30{,}000 < M_n < 1{,}000{,}000$ grams/mole. For $M_n$ above 1,000,000 grams/mole, the solute is too dilute to create a measurable osmotic pressure. [ 1 ] For $M_n$ below 30,000 grams per mole, the solute permeates through the membrane and the measurements are inaccurate. [ 2 ] Another issue for membrane osmometers is the limited choice of membrane types. The most common membrane used is cellulose acetate ; however, cellulose acetate can only be used with toluene and water. [ 3 ] While toluene and water are useful solvents for many compounds, not all polymers are miscible in toluene or water. Regenerated cellulose membranes can be used with many other solvents, but are hard to obtain. [ 3 ]
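As a minimal illustration of the virial extrapolation above, the reduced osmotic pressure $\Pi/c$ can be fitted against $c$ and extrapolated to zero concentration, where the intercept equals $RT/M_n$. The data points below are invented for illustration only.

```python
# Extrapolate reduced osmotic pressure to zero concentration:
# the intercept of Pi/c versus c equals RT/Mn (truncated virial equation).
R = 8.314        # J/(mol*K)
T = 298.15       # K

# Invented example data: concentration (kg/m^3), osmotic pressure (Pa)
data = [(1.0, 26.0), (2.0, 54.0), (3.0, 84.0), (4.0, 116.0)]

xs = [c for c, _ in data]
ys = [pi / c for c, pi in data]            # reduced osmotic pressure

# Least-squares line y = a + b*x by hand (no external libraries)
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar                        # intercept = RT/Mn

Mn = R * T / a                             # kg/mol
print(f"Mn ~ {Mn * 1000:.0f} g/mol")       # ~99,000 g/mol for this data
```

The recovered value falls inside the 30,000–1,000,000 g/mol window quoted above, where membrane osmometry is reliable.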
https://en.wikipedia.org/wiki/Membrane_osmometer
A membrane oxygenator is a device used to add oxygen to, and remove carbon dioxide from, the blood . It can be used in two principal modes: to imitate the function of the lungs in cardiopulmonary bypass (CPB) , and to oxygenate blood in longer-term life support, termed extracorporeal membrane oxygenation (ECMO). A membrane oxygenator consists of a thin gas-permeable membrane separating the blood and gas flows in the CPB circuit; oxygen diffuses from the gas side into the blood, and carbon dioxide diffuses from the blood into the gas for disposal. The history of the oxygenator , or artificial lung, dates back to 1885, when Von Frey and Gruber demonstrated the first disc oxygenator, in which blood was exposed to the atmosphere on rotating discs. [1] These pioneers noted the dangers of blood streaming, foaming and clotting. In the 1920s and 30s, research into developing extracorporeal oxygenation continued. Working independently, Brukhonenko in the USSR and John Heysham Gibbon in the US demonstrated the feasibility of extracorporeal oxygenation. Brukhonenko used excised dog lungs, while Gibbon used a direct-contact drum-type oxygenator, perfusing cats for up to 25 minutes in the 1930s. [2] Gibbon's pioneering work was rewarded in May 1953 with the first successful cardiopulmonary bypass operation. [3] The oxygenator was of the stationary-film type, in which oxygen was exposed to a film of blood as it flowed over a series of stainless steel plates. The disadvantages of direct contact between the blood and air were well recognized, and the less traumatic membrane oxygenator was developed to overcome them. The first membrane artificial lung was demonstrated in 1955 by the group led by Willem Kolff , [4] and in 1956 the first disposable-membrane oxygenator removed the need for time-consuming cleaning before re-use. [5] No patent was filed, as Kolff believed that doctors should make technology available to all, without mind to profit. The first membrane artificial lungs were composed of large flat sheets of thin silicone rubber used to separate blood and gas. Dr. Kolff recognized the need for a more compact lung design and constructed the first coiled lung design using polyethylene. However, these first designs were impractical due to high resistance and large priming volume. Inspired by Kolff's design, Theodor Kolobow designed the first successful spiral-coil membrane lung in the laboratory of George Henry Alexander Clowes , using a vinyl fiberglass screen to allow gas to flow more easily in the tube. For these and other innovations, including applying slight suction to form a tight seal and prevent hypobaric gas emboli , NIH was issued a patent in 1970 for the silicone rubber spiral-coil membrane lung invented by Dr. Kolobow. [ 1 ] Kolobow, with the assistance of Dr. Warren Zapol and NIH veterinarian Joseph Price , attempted the first in vivo experiments using the spiral membrane artificial lung on canines and lambs. The team went on to invent the first artificial placenta in 1967. [ 2 ] [ 1 ] The early artificial lungs used relatively impermeable polyethylene or Teflon homogeneous membranes, and it was not until more highly permeable silicone rubber membranes were introduced in the 1960s (and as hollow fibres in 1971) that the membrane oxygenator became commercially successful.
[6] [7] The introduction of microporous hollow fibres with very low resistance to mass transfer revolutionized the design of membrane modules, as the limiting factor in oxygenator performance became the blood-side resistance. [8] Current oxygenator designs typically use an extraluminal flow regime, in which the blood flows outside the gas-filled hollow fibres, for short-term life support, while only the homogeneous membranes are approved for long-term use.
https://en.wikipedia.org/wiki/Membrane_oxygenator
Membrane potential (also transmembrane potential or membrane voltage ) is the difference in electric potential between the interior and the exterior of a biological cell . It equals the interior potential minus the exterior potential. This is the energy (i.e. work ) per charge required to move a (very small) positive charge at constant velocity across the cell membrane from the exterior to the interior. (If the charge is allowed to change velocity, the change of kinetic energy and the production of radiation [ 1 ] must be taken into account.) Typical values of the membrane potential, normally given in units of millivolts and denoted as mV, range from −80 mV to −40 mV. For such typical negative membrane potentials, positive work is required to move a positive charge from the interior to the exterior. However, thermal kinetic energy allows ions to overcome the potential difference. For a selectively permeable membrane, this permits a net flow against the gradient. This is a kind of osmosis . All animal cells are surrounded by a membrane composed of a lipid bilayer with proteins embedded in it. The membrane serves as both an insulator and a diffusion barrier to the movement of ions . Transmembrane proteins , also known as ion transporter or ion pump proteins, actively push ions across the membrane and establish concentration gradients across the membrane, and ion channels allow ions to move across the membrane down those concentration gradients. Ion pumps and ion channels are electrically analogous to a set of capacitors and resistors inserted in the membrane, and therefore create a voltage between the two sides of the membrane, acting somewhat like a memristor . [ 2 ] All plasma membranes have an electrical potential across them, with the inside usually negative with respect to the outside. [ 3 ] The membrane potential has two basic functions. First, it allows a cell to function as a battery, providing power to operate a variety of "molecular devices" embedded in the membrane. [ 4 ] Second, in electrically excitable cells such as neurons and muscle cells , it is used for transmitting signals between different parts of a cell. Signals are generated in excitable cells by the opening or closing of ion channels at one point in the membrane, producing a local change in the membrane potential. This change in the electric field can be quickly sensed by either adjacent or more distant ion channels in the membrane. Those ion channels can then open or close as a result of the potential change, reproducing the signal. In non-excitable cells, and in excitable cells in their baseline states, the membrane potential is held at a relatively stable value, called the resting potential . For neurons, the resting potential is defined as ranging from −80 to −70 millivolts; that is, the interior of a cell has a negative baseline voltage of a bit less than one-tenth of a volt. The opening and closing of ion channels can induce a departure from the resting potential. This is called a depolarization if the interior voltage becomes less negative (say from −70 mV to −60 mV), or a hyperpolarization if the interior voltage becomes more negative (say from −70 mV to −80 mV). In excitable cells, a sufficiently large depolarization can evoke an action potential , in which the membrane potential changes rapidly and significantly for a short time (on the order of 1 to 100 milliseconds), often reversing its polarity. Action potentials are generated by the activation of certain voltage-gated ion channels .
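A small sketch of the sign conventions just described, classifying a change from the resting potential; the example values are the ones quoted in the paragraph above.

```python
def classify_change(v_before_mv, v_after_mv):
    """Classify a membrane-potential change using the usual convention:
    less negative = depolarization, more negative = hyperpolarization."""
    if v_after_mv > v_before_mv:
        return "depolarization"
    if v_after_mv < v_before_mv:
        return "hyperpolarization"
    return "no change"

# Examples from the text: -70 -> -60 mV and -70 -> -80 mV
print(classify_change(-70, -60))   # depolarization
print(classify_change(-70, -80))   # hyperpolarization
```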
In neurons, the factors that influence the membrane potential are diverse. They include numerous types of ion channels, some of which are chemically gated and some of which are voltage-gated. Because voltage-gated ion channels are controlled by the membrane potential, while the membrane potential itself is influenced by these same ion channels, feedback loops that allow for complex temporal dynamics arise, including oscillations and regenerative events such as action potentials. Differences in the concentrations of ions on opposite sides of a cellular membrane lead to a voltage called the membrane potential . [ 5 ] Many ions have a concentration gradient across the membrane, including potassium (K + ), which is at a high concentration inside and a low concentration outside the membrane. Sodium (Na + ) and chloride (Cl − ) ions are at high concentrations in the extracellular region and low concentrations in the intracellular regions. These concentration gradients provide the potential energy to drive the formation of the membrane potential. This voltage is established when the membrane has permeability to one or more ions. In the simplest case, if the membrane is selectively permeable to potassium, these positively charged ions can diffuse down the concentration gradient to the outside of the cell, leaving behind uncompensated negative charges. This separation of charges is what causes the membrane potential. The system as a whole is electro-neutral. The uncompensated positive charges outside the cell, and the uncompensated negative charges inside the cell, physically line up on the membrane surface and attract each other across the lipid bilayer . Thus, the membrane potential is physically located only in the immediate vicinity of the membrane. It is the separation of these charges across the membrane that is the basis of the membrane voltage. This picture is only an approximation of the ionic contributions to the membrane potential. Other ions, including sodium, chloride, and calcium, play a more minor role, even though they have strong concentration gradients, because they have more limited permeability than potassium. The membrane potential in a cell derives ultimately from two factors: electrical force and diffusion. Electrical force arises from the mutual attraction between particles with opposite electrical charges (positive and negative) and the mutual repulsion between particles with the same type of charge (both positive or both negative). Diffusion arises from the statistical tendency of particles to redistribute from regions where they are highly concentrated to regions where the concentration is low. Voltage, which is synonymous with difference in electrical potential , is the ability to drive an electric current across a resistance. Indeed, the simplest definition of a voltage is given by Ohm's law : V = IR, where V is voltage, I is current and R is resistance. If a voltage source such as a battery is placed in an electrical circuit, the higher the voltage of the source, the greater the amount of current that it will drive across the available resistance. The functional significance of voltage lies only in potential differences between two points in a circuit. The idea of a voltage at a single point is meaningless.
It is conventional in electronics to assign a voltage of zero to some arbitrarily chosen element of the circuit, and then assign voltages for other elements measured relative to that zero point. There is no significance in which element is chosen as the zero point—the function of a circuit depends only on the differences, not on voltages per se . However, in most cases and by convention, the zero level is assigned to the portion of a circuit that is in contact with ground. The same principle applies to voltage in cell biology. In electrically active tissue, the potential difference between any two points can be measured by inserting an electrode at each point, for example one inside and one outside the cell, and connecting both electrodes to the leads of what is in essence a specialized voltmeter. By convention, the zero potential value is assigned to the outside of the cell, and the sign of the potential difference between the outside and the inside is determined by the potential of the inside relative to the outside zero. In mathematical terms, the definition of voltage begins with the concept of an electric field E , a vector field assigning a magnitude and direction to each point in space. In many situations, the electric field is a conservative field , which means that it can be expressed as the gradient of a scalar function V , that is, E = −∇ V . This scalar field V is referred to as the voltage distribution. The definition allows for an arbitrary constant of integration—this is why absolute values of voltage are not meaningful. In general, electric fields can be treated as conservative only if magnetic fields do not significantly influence them, but this condition usually applies well to biological tissue. Because the electric field is the gradient of the voltage distribution, rapid changes in voltage within a small region imply a strong electric field; conversely, if the voltage remains approximately the same over a large region, the electric fields in that region must be weak. A strong electric field, equivalent to a strong voltage gradient, implies that a strong force is exerted on any charged particles that lie within the region. Electrical signals within biological organisms are, in general, driven by ions . [ 7 ] The most important cations for the action potential are sodium (Na + ) and potassium (K + ). [ 8 ] Both of these are monovalent cations that carry a single positive charge. Action potentials can also involve calcium (Ca 2+ ), [ 9 ] which is a divalent cation that carries a double positive charge. The chloride anion (Cl − ) plays a major role in the action potentials of some algae , [ 10 ] but plays a negligible role in the action potentials of most animals. [ 11 ] Ions cross the cell membrane under two influences: diffusion and electric fields . A simple example is two solutions—A and B—separated by a porous barrier: diffusion will ensure that they eventually mix into equal solutions. This mixing occurs because of the difference in their concentrations: ions diffuse from the region of high concentration toward the region of low concentration. To extend the example, let solution A have 30 sodium ions and 30 chloride ions, and let solution B have only 20 sodium ions and 20 chloride ions. Assuming the barrier allows both types of ions to travel through it, a steady state will be reached whereby both solutions have 25 sodium ions and 25 chloride ions. 
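As a rough illustration of the mixing example above, here is a minimal Python sketch (the rate constant and step count are hypothetical, chosen only for illustration) that relaxes the two compartments toward the equal-concentration steady state:

```python
# Minimal sketch: symmetric diffusion of Na+ (or Cl-) across a porous barrier.
# The rate constant k is an arbitrary illustrative value, not a measured one.

def mix(a: float, b: float, k: float = 0.1, steps: int = 200) -> tuple[float, float]:
    """Relax two well-stirred compartments toward equal ion counts.

    The flux is proportional to the concentration difference (Fick-like),
    so each step moves k * (a - b) ions from the fuller side to the emptier one.
    """
    for _ in range(steps):
        flux = k * (a - b)
        a -= flux
        b += flux
    return a, b

# Solution A starts with 30 Na+ ions, solution B with 20.
na_a, na_b = mix(30.0, 20.0)
print(round(na_a, 2), round(na_b, 2))  # -> 25.0 25.0, the steady state in the text
```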
If, however, the porous barrier is selective as to which ions are let through, then diffusion alone will not determine the resulting solution. Returning to the previous example, consider now a barrier that is permeable only to sodium ions. Only sodium is allowed to diffuse across the barrier from its higher concentration in solution A to the lower concentration in solution B. This results in a greater accumulation of sodium ions than chloride ions in solution B and a lesser number of sodium ions than chloride ions in solution A. This means that there is a net positive charge in solution B from the higher concentration of positively charged sodium ions than negatively charged chloride ions. Likewise, there is a net negative charge in solution A from the greater concentration of negative chloride ions than positive sodium ions. Since opposite charges attract and like charges repel, the ions are now influenced by electrical fields as well as by forces of diffusion. Therefore, positive sodium ions become less likely to travel to the now-more-positive solution B and more likely to remain in the now-more-negative solution A. The point at which the forces of the electric fields completely counteract the force due to diffusion is called the equilibrium potential. At this point, the net flow of the specific ion (in this case sodium) is zero. Every cell is enclosed in a plasma membrane , which has the structure of a lipid bilayer with many types of large molecules embedded in it. Because it is made of lipid molecules, the plasma membrane intrinsically has a high electrical resistivity, in other words a low intrinsic permeability to ions. However, some of the molecules embedded in the membrane are capable either of actively transporting ions from one side of the membrane to the other or of providing channels through which they can move. [ 12 ] In electrical terminology, the plasma membrane functions as a combined resistor and capacitor . Resistance arises from the fact that the membrane impedes the movement of charges across it. Capacitance arises from the fact that the lipid bilayer is so thin that an accumulation of charged particles on one side gives rise to an electrical force that pulls oppositely charged particles toward the other side. The capacitance of the membrane is relatively unaffected by the molecules that are embedded in it, so it has a more or less invariant value estimated at 2 μF/cm 2 (the total capacitance of a patch of membrane is proportional to its area). The conductance of a pure lipid bilayer is so low, on the other hand, that in biological situations it is always dominated by the conductance of alternative pathways provided by embedded molecules. Thus, the capacitance of the membrane is more or less fixed, but the resistance is highly variable. The thickness of a plasma membrane is estimated to be about 7–8 nanometers. Because the membrane is so thin, it does not take a very large transmembrane voltage to create a strong electric field within it. Typical membrane potentials in animal cells are on the order of 100 millivolts (that is, one tenth of a volt), but calculations show that this generates an electric field close to the maximum that the membrane can sustain—it has been calculated that a voltage difference much larger than 200 millivolts could cause dielectric breakdown , that is, arcing across the membrane. 
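To make the field-strength claim above concrete, here is a short sketch (plain Python, using the figures quoted in the text) estimating the electric field inside the membrane when it is treated as a thin parallel-plate capacitor:

```python
# Approximate the electric field across the lipid bilayer as E = V / d.

membrane_voltage = 0.1       # volts, the ~100 mV typical value quoted above
membrane_thickness = 7.5e-9  # meters, midpoint of the 7-8 nm range quoted above

field = membrane_voltage / membrane_thickness
print(f"{field:.2e} V/m")    # ~1.3e7 V/m: an enormous field from a small voltage
```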
The resistance of a pure lipid bilayer to the passage of ions across it is very high, but structures embedded in the membrane can greatly enhance ion movement, either actively ( active transport ) or passively ( facilitated diffusion ). The two types of structure that play the largest roles are ion channels and ion pumps , both usually formed from assemblages of protein molecules. Ion channels provide passageways through which ions can move. In most cases, an ion channel is permeable only to specific types of ions (for example, sodium and potassium but not chloride or calcium), and sometimes the permeability varies depending on the direction of ion movement. Ion pumps, also known as ion transporters or carrier proteins, actively transport specific types of ions from one side of the membrane to the other, sometimes using energy derived from metabolic processes to do so. Ion pumps are integral membrane proteins that carry out active transport , i.e., use cellular energy (ATP) to "pump" the ions against their concentration gradient. [ 13 ] Such ion pumps take in ions from one side of the membrane (decreasing their concentration there) and release them on the other side (increasing their concentration there). The ion pump most relevant to the action potential is the sodium–potassium pump , which transports three sodium ions out of the cell and two potassium ions in. [ 14 ] [ 15 ] As a consequence, the concentration of potassium ions K + inside the neuron is roughly 30-fold larger than the outside concentration, whereas the sodium concentration outside is roughly five-fold larger than inside. [ 15 ] [ 16 ] [ 17 ] In a similar manner, other ions, such as calcium , chloride and magnesium , have different concentrations inside and outside the neuron. [ 17 ] If the numbers of each type of ion were equal, the sodium–potassium pump would be electrically neutral, but, because of the three-for-two exchange, it gives a net movement of one positive charge from intracellular to extracellular for each cycle, thereby contributing to a positive voltage difference. The pump has three effects: (1) it makes the sodium concentration high in the extracellular space and low in the intracellular space; (2) it makes the potassium concentration high in the intracellular space and low in the extracellular space; (3) it gives the intracellular space a negative voltage with respect to the extracellular space. The sodium–potassium pump is relatively slow in operation. If a cell were initialized with equal concentrations of sodium and potassium everywhere, it would take hours for the pump to establish equilibrium. The pump operates constantly, but becomes progressively less efficient as the concentrations of sodium and potassium available for pumping are reduced. Ion pumps influence the action potential only by establishing the relative ratio of intracellular and extracellular ion concentrations. The action potential involves mainly the opening and closing of ion channels, not ion pumps. If the ion pumps are turned off by removing their energy source, or by adding an inhibitor such as ouabain , the axon can still fire hundreds of thousands of action potentials before their amplitudes begin to decay significantly. [ 13 ] In particular, ion pumps play no significant role in the repolarization of the membrane after an action potential. [ 8 ] Another functionally important ion pump is the sodium–calcium exchanger . 
This pump operates in a conceptually similar way to the sodium–potassium pump, except that in each cycle it exchanges three Na + from the extracellular space for one Ca 2+ from the intracellular space. Because the net flow of charge is inward, this pump runs "downhill", in effect, and therefore does not require any energy source except the membrane voltage. Its most important effect is to pump calcium outward—it also allows an inward flow of sodium, thereby counteracting the sodium–potassium pump, but, because overall sodium and potassium concentrations are much higher than calcium concentrations, this effect is relatively unimportant. The net result of the sodium–calcium exchanger is that in the resting state, intracellular calcium concentrations become very low. Ion channels are integral membrane proteins with a pore through which ions can travel between extracellular space and cell interior. Most channels are specific (selective) for one ion; for example, most potassium channels are characterized by a 1000:1 selectivity ratio for potassium over sodium, even though potassium and sodium ions have the same charge and differ only slightly in their radius. The channel pore is typically so small that ions must pass through it in single-file order. [ 19 ] Channel pores can be either open or closed for ion passage, although a number of channels demonstrate various sub-conductance levels. When a channel is open, ions permeate through the channel pore down the transmembrane concentration gradient for that particular ion. The rate of ionic flow through the channel, i.e. the single-channel current amplitude, is determined by the maximum channel conductance and the electrochemical driving force for that ion, which is the difference between the instantaneous value of the membrane potential and the value of the reversal potential . [ 20 ] A channel may have several different states (corresponding to different conformations of the protein), but each such state is either open or closed. In general, closed states correspond either to a contraction of the pore—making it impassable to the ion—or to a separate part of the protein stoppering the pore. For example, the voltage-dependent sodium channel undergoes inactivation , in which a portion of the protein swings into the pore, sealing it. [ 21 ] This inactivation shuts off the sodium current and plays a critical role in the action potential. Ion channels can be classified by how they respond to their environment. [ 22 ] For example, the ion channels involved in the action potential are voltage-sensitive channels ; they open and close in response to the voltage across the membrane. Ligand-gated channels form another important class; these ion channels open and close in response to the binding of a ligand molecule , such as a neurotransmitter . Other ion channels open and close with mechanical forces. Still other ion channels—such as those of sensory neurons —open and close in response to other stimuli, such as light, temperature or pressure. Leakage channels are the simplest type of ion channel, in that their permeability is more or less constant. The types of leakage channels that have the greatest significance in neurons are potassium and chloride channels. 
Even these are not perfectly constant in their properties: first, most of them are voltage-dependent in the sense that they conduct better in one direction than the other (in other words, they are rectifiers ); second, some of them are capable of being shut off by chemical ligands even though they do not require ligands in order to operate. Ligand-gated ion channels are channels whose permeability is greatly increased when some type of chemical ligand binds to the protein structure. Animal cells contain hundreds, if not thousands, of types of these. A large subset function as neurotransmitter receptors —they occur at postsynaptic sites, and the chemical ligand that gates them is released by the presynaptic axon terminal . One example of this type is the AMPA receptor , a receptor for the neurotransmitter glutamate that when activated allows passage of sodium and potassium ions. Another example is the GABA A receptor , a receptor for the neurotransmitter GABA that when activated allows passage of chloride ions. Neurotransmitter receptors are activated by ligands that appear in the extracellular area, but there are other types of ligand-gated channels that are controlled by interactions on the intracellular side. Voltage-gated ion channels , also known as voltage dependent ion channels , are channels whose permeability is influenced by the membrane potential. They form another very large group, with each member having a particular ion selectivity and a particular voltage dependence. Many are also time-dependent—in other words, they do not respond immediately to a voltage change but only after a delay. One of the most important members of this group is a type of voltage-gated sodium channel that underlies action potentials—these are sometimes called Hodgkin–Huxley sodium channels because they were initially characterized by Alan Lloyd Hodgkin and Andrew Huxley in their Nobel Prize-winning studies of the physiology of the action potential. The channel is closed at the resting voltage level, but opens abruptly when the voltage exceeds a certain threshold, allowing a large influx of sodium ions that produces a very rapid change in the membrane potential. Recovery from an action potential is partly dependent on a type of voltage-gated potassium channel that is closed at the resting voltage level but opens as a consequence of the large voltage change produced during the action potential. The reversal potential (or equilibrium potential ) of an ion is the value of transmembrane voltage at which diffusive and electrical forces counterbalance, so that there is no net ion flow across the membrane. This means that the transmembrane voltage exactly opposes the force of diffusion of the ion, such that the net current of the ion across the membrane is zero and unchanging. The reversal potential is important because it gives the voltage that acts on channels permeable to that ion—in other words, it gives the voltage that the ion concentration gradient generates when it acts as a battery . The equilibrium potential of a particular ion is usually designated by the notation E ion . The equilibrium potential for any ion can be calculated using the Nernst equation . [ 23 ] For example, the reversal potential for potassium ions is as follows: {\displaystyle E_{K}={\frac {RT}{zF}}\ln {\frac {[K^{+}]_{out}}{[K^{+}]_{in}}}} where R is the universal gas constant, T is the absolute temperature, z is the valence of the ion (+1 for potassium), F is the Faraday constant, and [K + ] out and [K + ] in are the extracellular and intracellular potassium concentrations, respectively. Even if two different ions have the same charge (i.e., K + and Na + ), they can still have very different equilibrium potentials, provided their outside and/or inside concentrations differ. 
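A minimal Python sketch of the Nernst calculation just described, using the neuronal concentrations quoted in the next paragraph. The 37 °C temperature is an assumption; the −84 mV figure quoted below presumably rests on a slightly lower temperature, since at 37 °C this formula gives about −89 mV:

```python
import math

R = 8.314      # J/(mol*K), universal gas constant
F = 96485.0    # C/mol, Faraday constant
T = 310.15     # K, i.e. 37 degrees C (assumed; published values vary with temperature)

def nernst(z: int, conc_out: float, conc_in: float) -> float:
    """Equilibrium (reversal) potential in volts for an ion of valence z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Concentrations in mM, as quoted for neurons in the text.
e_k = nernst(+1, conc_out=5.0, conc_in=140.0)    # about -89 mV at 37 degrees C
e_na = nernst(+1, conc_out=140.0, conc_in=12.0)  # about +66 mV, matching the text
print(f"E_K  = {e_k * 1000:.0f} mV")
print(f"E_Na = {e_na * 1000:.0f} mV")
```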
Take, for example, the equilibrium potentials of potassium and sodium in neurons. The potassium equilibrium potential E K is −84 mV with 5 mM potassium outside and 140 mM inside. On the other hand, the sodium equilibrium potential, E Na , is approximately +66 mV with approximately 12 mM sodium inside and 140 mM outside. [ note 1 ] A neuron 's resting membrane potential actually changes during the development of an organism. In order for a neuron to eventually adopt its full adult function, its potential must be tightly regulated during development. As an organism progresses through development the resting membrane potential becomes more negative. [ 24 ] Glial cells are also differentiating and proliferating as development progresses in the brain . [ 25 ] The addition of these glial cells increases the organism's ability to regulate extracellular potassium . The drop in extracellular potassium can lead to a decrease in membrane potential of 35 mV. [ 26 ] Cell excitability is the change in membrane potential that is necessary for cellular responses in various tissues. Cell excitability is a property that is induced during early embryogenesis. [ 27 ] Excitability of a cell has also been defined as the ease with which a response may be triggered. [ 28 ] The resting and threshold potentials form the basis of cell excitability, and these processes are fundamental for the generation of graded and action potentials. The most important regulators of cell excitability are the extracellular electrolyte concentrations (i.e. Na + , K + , Ca 2+ , Cl − , Mg 2+ ) and associated proteins. Important proteins that regulate cell excitability are voltage-gated ion channels , ion transporters (e.g. Na+/K+-ATPase , magnesium transporters , acid–base transporters ), membrane receptors and hyperpolarization-activated cyclic-nucleotide-gated channels . [ 29 ] For example, potassium channels and calcium-sensing receptors are important regulators of excitability in neurons , cardiac myocytes and many other excitable cells like astrocytes . [ 30 ] Calcium ion is also the most important second messenger in excitable cell signaling . Activation of synaptic receptors initiates long-lasting changes in neuronal excitability. [ 31 ] Thyroid , adrenal and other hormones also regulate cell excitability; for example, progesterone and estrogen modulate myometrial smooth muscle cell excitability. Many cell types are considered to have an excitable membrane. Excitable cells are neurons, muscle ( cardiac , skeletal , smooth ), vascular endothelial cells , pericytes , juxtaglomerular cells , interstitial cells of Cajal , many types of epithelial cells (e.g. beta cells , alpha cells , delta cells , enteroendocrine cells , pulmonary neuroendocrine cells , pinealocytes ), glial cells (e.g. astrocytes), mechanoreceptor cells (e.g. hair cells and Merkel cells ), chemoreceptor cells (e.g. glomus cells , taste receptors ), some plant cells and possibly immune cells . [ 32 ] Astrocytes display a form of non-electrical excitability based on intracellular calcium variations related to the expression of several receptors through which they can detect the synaptic signal. In neurons, there are different membrane properties in some portions of the cell; for example, dendritic excitability endows neurons with the capacity for coincidence detection of spatially separated inputs. 
[ 33 ] Electrophysiologists model the effects of ionic concentration differences, ion channels, and membrane capacitance in terms of an equivalent circuit , which is intended to represent the electrical properties of a small patch of membrane. The equivalent circuit consists of a capacitor in parallel with four pathways each consisting of a battery in series with a variable conductance. The capacitance is determined by the properties of the lipid bilayer, and is taken to be fixed. Each of the four parallel pathways comes from one of the principal ions, sodium, potassium, chloride, and calcium. The voltage of each ionic pathway is determined by the concentrations of the ion on each side of the membrane; see the Reversal potential section above. The conductance of each ionic pathway at any point in time is determined by the states of all the ion channels that are potentially permeable to that ion, including leakage channels, ligand-gated channels, and voltage-gated ion channels. For fixed ion concentrations and fixed values of ion channel conductance, the equivalent circuit can be further reduced, using the Goldman equation as described below, to a circuit containing a capacitance in parallel with a battery and conductance. In electrical terms, this is a type of RC circuit (resistance-capacitance circuit), and its electrical properties are very simple. Starting from any initial state, the current flowing across either the conductance or the capacitance decays with an exponential time course, with a time constant of τ = RC , where C is the capacitance of the membrane patch, and R = 1/g net is the net resistance. For realistic situations, the time constant usually lies in the 1–100 millisecond range. In most cases, changes in the conductance of ion channels occur on a faster time scale, so an RC circuit is not a good approximation; however, the differential equation used to model a membrane patch is commonly a modified version of the RC circuit equation. When the membrane potential of a cell goes for a long period of time without changing significantly, it is referred to as a resting potential or resting voltage. This term is used for the membrane potential of non-excitable cells, but also for the membrane potential of excitable cells in the absence of excitation. In excitable cells, the other possible states are graded membrane potentials (of variable amplitude), and action potentials, which are large, all-or-nothing rises in membrane potential that usually follow a fixed time course. Excitable cells include neurons , muscle cells, and some secretory cells in glands . Even in other types of cells, however, the membrane voltage can undergo changes in response to environmental or intracellular stimuli. For example, depolarization of the plasma membrane appears to be an important step in programmed cell death . [ 34 ] The interactions that generate the resting potential are modeled by the Goldman equation . [ 35 ] This is similar in form to the Nernst equation shown above, in that it is based on the charges of the ions in question, as well as the difference between their inside and outside concentrations. However, it also takes into consideration the relative permeability of the plasma membrane to each ion in question. The three ions that appear in this equation are potassium (K + ), sodium (Na + ), and chloride (Cl − ). Calcium is omitted, but can be added to deal with situations in which it plays a significant role. 
[ 36 ] The Goldman equation reads: {\displaystyle E_{m}={\frac {RT}{F}}\ln {\frac {P_{K}[K^{+}]_{out}+P_{Na}[Na^{+}]_{out}+P_{Cl}[Cl^{-}]_{in}}{P_{K}[K^{+}]_{in}+P_{Na}[Na^{+}]_{in}+P_{Cl}[Cl^{-}]_{out}}}} Being an anion, the chloride terms are treated differently from the cation terms; the intracellular concentration is in the numerator, and the extracellular concentration in the denominator, which is reversed from the cation terms. P i stands for the relative permeability of the ion type i. In essence, the Goldman formula expresses the membrane potential as a weighted average of the reversal potentials for the individual ion types, weighted by permeability. (Although the membrane potential changes about 100 mV during an action potential, the concentrations of ions inside and outside the cell do not change significantly. They remain close to their respective concentrations when the membrane is at resting potential.) In most animal cells, the permeability to potassium is much higher in the resting state than the permeability to sodium. As a consequence, the resting potential is usually close to the potassium reversal potential. [ 37 ] [ 38 ] The permeability to chloride can be high enough to be significant, but, unlike the other ions, chloride is not actively pumped, and therefore equilibrates at a reversal potential very close to the resting potential determined by the other ions. Values of resting membrane potential in most animal cells usually vary between the potassium reversal potential (usually around −80 mV) and around −40 mV. The resting potential in excitable cells (capable of producing action potentials) is usually near −60 mV—more depolarized voltages would lead to spontaneous generation of action potentials. Immature or undifferentiated cells show highly variable values of resting voltage, usually significantly more positive than in differentiated cells. [ 39 ] In such cells, the resting potential value correlates with the degree of differentiation: undifferentiated cells in some cases may not show any transmembrane voltage difference at all. Maintenance of the resting potential can be metabolically costly for a cell because of its requirement for active pumping of ions to counteract losses due to leakage channels. The cost is highest when the cell function requires an especially depolarized value of membrane voltage. For example, the resting potential in daylight-adapted blowfly ( Calliphora vicina ) photoreceptors can be as high as −30 mV. [ 40 ] This elevated membrane potential allows the cells to respond very rapidly to visual inputs; the cost is that maintenance of the resting potential may consume more than 20% of overall cellular ATP . [ 41 ] On the other hand, the high resting potential in undifferentiated cells does not necessarily incur a high metabolic cost. This apparent paradox is resolved by examination of the origin of that resting potential. Little-differentiated cells are characterized by extremely high input resistance, [ 39 ] which implies that few leakage channels are present at this stage of cell life. As a result, potassium permeability becomes similar to that for sodium ions, which places the resting potential in between the reversal potentials for sodium and potassium as discussed above. The reduced leakage currents also mean there is little need for active pumping in order to compensate, and therefore low metabolic cost. As explained above, the potential at any point in a cell's membrane is determined by the ion concentration differences between the intracellular and extracellular areas, and by the permeability of the membrane to each type of ion. 
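A small Python sketch of the Goldman–Hodgkin–Katz calculation described above. The chloride concentrations and the relative permeabilities used here (P K : P Na : P Cl = 1 : 0.05 : 0.45) are illustrative textbook-style assumptions, not values taken from this article:

```python
import math

R, F = 8.314, 96485.0
T = 310.15  # K (37 degrees C, assumed)

def goldman(p_k, p_na, p_cl, k_out, k_in, na_out, na_in, cl_out, cl_in):
    """Membrane potential (volts) from the GHK voltage equation.

    Note the chloride terms: being an anion, Cl- has its *intracellular*
    concentration in the numerator, as described in the text.
    """
    numerator = p_k * k_out + p_na * na_out + p_cl * cl_in
    denominator = p_k * k_in + p_na * na_in + p_cl * cl_out
    return (R * T / F) * math.log(numerator / denominator)

# Concentrations in mM (the K+ and Na+ values quoted earlier in the article;
# the Cl- values and the permeability ratios are assumed for illustration).
vm = goldman(1.0, 0.05, 0.45,
             k_out=5.0, k_in=140.0,
             na_out=140.0, na_in=12.0,
             cl_out=150.0, cl_in=10.0)
print(f"resting Vm = {vm * 1000:.0f} mV")  # ~ -68 mV, near the K+ reversal potential
```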
The ion concentrations do not normally change very quickly (with the exception of Ca 2+ , where the baseline intracellular concentration is so low that even a small influx may increase it by orders of magnitude), but the permeabilities of the ions can change in a fraction of a millisecond, as a result of activation of ligand-gated ion channels. The change in membrane potential can be either large or small, depending on how many ion channels are activated and what type they are, and can be either long or short, depending on the lengths of time that the channels remain open. Changes of this type are referred to as graded potentials , in contrast to action potentials, which have a fixed amplitude and time course. As can be derived from the Goldman equation shown above, the effect of increasing the permeability of a membrane to a particular type of ion shifts the membrane potential toward the reversal potential for that ion. Thus, opening Na + channels shifts the membrane potential toward the Na + reversal potential (about +66 mV with the concentrations quoted above). Likewise, opening K + channels shifts the membrane potential toward about −90 mV, and opening Cl − channels shifts it toward about −70 mV (the resting potential of most membranes). Thus, Na + channels shift the membrane potential in a positive direction, K + channels shift it in a negative direction (except when the membrane is hyperpolarized to a value more negative than the K + reversal potential), and Cl − channels tend to shift it towards the resting potential. Graded membrane potentials are particularly important in neurons , where they are produced by synapses —a temporary change in membrane potential produced by activation of a synapse by a single graded or action potential is called a postsynaptic potential . Neurotransmitters that act to open Na + channels typically cause the membrane potential to become more positive, while neurotransmitters that activate K + channels typically cause it to become more negative; those that inhibit these channels tend to have the opposite effect. Whether a postsynaptic potential is considered excitatory or inhibitory depends on the reversal potential for the ions of that current, and the threshold for the cell to fire an action potential (around −50 mV). A postsynaptic current with a reversal potential above threshold, such as a typical Na + current, is considered excitatory. A current with a reversal potential below threshold, such as a typical K + current, is considered inhibitory. A current with a reversal potential above the resting potential, but below threshold, will not by itself elicit action potentials, but will produce subthreshold membrane potential oscillations . Thus, neurotransmitters that act to open Na + channels produce excitatory postsynaptic potentials , or EPSPs, whereas neurotransmitters that act to open K + or Cl − channels typically produce inhibitory postsynaptic potentials , or IPSPs. When multiple types of channels are open within the same time period, their postsynaptic potentials summate (are added together). From the viewpoint of biophysics, the resting membrane potential is merely the membrane potential that results from the membrane permeabilities that predominate when the cell is resting. The above equation of weighted averages always applies, but the following approach may be more easily visualized. 
At any given moment, there are two factors for an ion that determine how much influence that ion will have over the membrane potential of a cell: the driving force for that ion, and the membrane's permeability to it. If the driving force is high, then the ion is being "pushed" across the membrane. If the permeability is high, it will be easier for the ion to diffuse across the membrane. So, in a resting membrane, while the driving force for potassium is low, its permeability is very high. Sodium has a huge driving force but almost no resting permeability. In this case, potassium carries about 20 times more current than sodium, and thus has 20 times more influence over E m than does sodium. However, consider another case—the peak of the action potential. Here, permeability to Na is high and K permeability is relatively low. Thus, the membrane moves to near E Na and far from E K . The more ions that are permeant, the more complicated it becomes to predict the membrane potential. However, this can be done using the Goldman–Hodgkin–Katz equation or the weighted means equation. By plugging in the concentration gradients and the permeabilities of the ions at any instant in time, one can determine the membrane potential at that moment. What the GHK equation means is that, at any time, the value of the membrane potential will be a weighted average of the equilibrium potentials of all permeant ions. The "weighting" is each ion's relative permeability across the membrane. While cells expend energy to transport ions and establish a transmembrane potential, they use this potential in turn to transport other ions and metabolites such as sugar. The transmembrane potential of the mitochondria drives the production of ATP , which is the common currency of biological energy. Cells may draw on the energy they store in the resting potential to drive action potentials or other forms of excitation. These changes in the membrane potential enable communication with other cells (as with action potentials) or initiate changes inside the cell, which happens in an egg when it is fertilized by a sperm . Changes in the dielectric properties of the plasma membrane may act as a hallmark of underlying conditions such as diabetes and dyslipidemia. [ 42 ] In neuronal cells, an action potential begins with a rush of sodium ions into the cell through sodium channels, resulting in depolarization, while recovery involves an outward rush of potassium through potassium channels. Both of these fluxes occur by passive diffusion . A dose of salt may trigger the still-working neurons of a fresh cut of meat into firing, causing muscle spasms. [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ]
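A sketch of the "weighted means" view described above, in its chord-conductance form. The conductance ratios are purely illustrative assumptions: roughly 20:1 in favor of potassium at rest, as the text suggests, and reversed at the action-potential peak:

```python
def weighted_mean_potential(conductances: dict[str, float],
                            reversal_mv: dict[str, float]) -> float:
    """Membrane potential as a conductance-weighted mean of reversal potentials."""
    total = sum(conductances.values())
    return sum(conductances[ion] * reversal_mv[ion] for ion in conductances) / total

e_rev = {"K": -84.0, "Na": +66.0}  # mV, the values quoted earlier in the article

# At rest potassium dominates (~20x); at the spike peak the ratio flips (assumed).
print(weighted_mean_potential({"K": 20.0, "Na": 1.0}, e_rev))  # ~ -77 mV, near E_K
print(weighted_mean_potential({"K": 1.0, "Na": 20.0}, e_rev))  # ~ +59 mV, near E_Na
```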
https://en.wikipedia.org/wiki/Membrane_potential
A membrane reactor is a physical device that combines a chemical conversion process with a membrane separation process to add reactants or remove products of the reaction. [ 1 ] Chemical reactors making use of membranes are usually referred to as membrane reactors. The membrane can be used for different tasks: [ 2 ] Membrane reactors are an example of the combination of two unit operations in one step, e.g., membrane filtration with the chemical reaction. [ 3 ] The integration of the reaction section with selective extraction of a reactant allows an enhancement of the conversion compared to the equilibrium value. This characteristic makes membrane reactors suitable for performing equilibrium-limited endothermic reactions . [ 4 ] Selective membranes inside the reactor lead to several benefits: the reactor section substitutes for several downstream processes . Moreover, removing a product makes it possible to exceed thermodynamic limitations. [ 5 ] In this way, it is possible to reach higher conversions of the reactants or to obtain the same conversion at a lower temperature. [ 5 ] Reversible reactions are usually limited by thermodynamics: when the direct and reverse reactions, whose rates depend on reactant and product concentrations, are balanced, a chemical equilibrium state is achieved. [ 5 ] If temperature and pressure are fixed, this equilibrium state constrains the ratio of product to reactant concentrations, preventing higher conversions. [ 5 ] This limit can be overcome by removing a product of the reaction: in this way, the system cannot reach equilibrium and the reaction continues, reaching higher conversions (or the same conversion at a lower temperature). [ 6 ] Nevertheless, there are several hurdles to industrial commercialization due to technical difficulties in designing membranes with long-term stability and due to the high costs of membranes. [ 7 ] Moreover, the technology still lacks a leading large-scale process, even though in recent years it has been successfully applied to hydrogen production and hydrocarbon dehydrogenation. [ 8 ] Generally, membrane reactors can be classified based on the membrane position and reactor configuration. [ 1 ] Usually there is a catalyst inside: if the catalyst is installed inside the membrane, the reactor is called a catalytic membrane reactor (CMR); [ 1 ] if the catalyst (and its support) are packed and fixed inside, the reactor is called a packed bed membrane reactor ; if the speed of the gas is high enough, and the particle size is small enough, fluidization of the bed occurs and the reactor is called a fluidized bed membrane reactor. [ 1 ] Other types of reactor take their name from the membrane material, e.g., the zeolite membrane reactor. Among these configurations, the greatest attention in recent years, particularly in hydrogen production, has been given to fixed bed and fluidized bed reactors: in these cases the standard reactor is simply integrated with membranes inside the reaction space. [ 9 ] Today hydrogen is mainly used in the chemical industry as a reactant in ammonia production and methanol synthesis, and in refinery processes for hydrocracking. [ 10 ] Moreover, there is a growing interest in its use as an energy carrier and as a fuel in fuel cells. [ 10 ] More than 50% of hydrogen is currently produced from steam reforming of natural gas, due to low costs and the fact that it is a mature technology. [ 11 ]
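To illustrate the equilibrium-shift argument above, here is a minimal sketch for a hypothetical reaction A ⇌ B + C with an assumed equilibrium constant (not a model of any specific industrial process), showing how withdrawing product B through a membrane raises the attainable conversion of A:

```python
# Hypothetical equilibrium A <-> B + C in a well-mixed batch (ideal, unit volume).
# K is an assumed dimensionless equilibrium constant, chosen only for illustration.

K = 0.5

def equilibrium_extent(a: float, b: float, c: float) -> float:
    """Extent x solving K = (b + x)(c + x) / (a - x), found by bisection."""
    lo, hi = 0.0, a
    for _ in range(100):
        x = (lo + hi) / 2
        if (b + x) * (c + x) - K * (a - x) > 0:
            hi = x
        else:
            lo = x
    return x

# Closed reactor: a single equilibration.
x0 = equilibrium_extent(1.0, 0.0, 0.0)
print(f"closed reactor conversion: {x0:.2f}")        # 0.50 with this K

# Membrane reactor idealized as stages: equilibrate, then remove all B (permeate).
a, b, c = 1.0, 0.0, 0.0
for _ in range(10):
    x = equilibrium_extent(a, b, c)
    a, b, c = a - x, 0.0, c + x  # B is withdrawn through the membrane
print(f"with product removal: {1.0 - a:.2f}")        # well above the closed-reactor value
```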
Traditional processes are composed of a steam reforming section, to produce syngas from natural gas, two water gas shift reactors which enrich the hydrogen in the syngas, and a pressure swing adsorption unit for hydrogen purification. [ 12 ] Membrane reactors achieve process intensification by combining all these sections in one single unit, with both economic and environmental benefits. [ 13 ] To be suitable for the hydrogen production industry, membranes must have a high flux, high selectivity towards hydrogen, low cost and high stability. [ 14 ] Among membranes, dense inorganic ones are the most suitable, having selectivities orders of magnitude higher than porous ones. [ 15 ] Among dense membranes, metallic ones are the most used due to higher fluxes compared to ceramic ones. [ 9 ] The most used material in hydrogen separation membranes is palladium, particularly its alloy with silver. This metal, even though it is more expensive than others, shows very high solubility towards hydrogen. [ 16 ] The transport of hydrogen through palladium membranes follows a solution/diffusion mechanism: a hydrogen molecule is adsorbed onto the surface of the membrane and split into hydrogen atoms; these atoms cross the membrane by diffusion (see palladium hydride ) and then recombine into hydrogen molecules on the low-pressure side of the membrane, from which they are desorbed. [ 14 ] In recent years, several studies have examined the integration of palladium membranes inside fluidized bed membrane reactors for hydrogen production. [ 17 ] Submerged and sidestream membrane bioreactors in wastewater treatment plants are the most developed filtration-based membrane reactors. The production of chlorine (Cl 2 ) and caustic soda (NaOH) from NaCl is carried out industrially by the chlor-alkali process using a cation-conducting polyelectrolyte membrane. It is used on a large scale and has replaced diaphragm electrolysis. Nafion has been developed as a bilayer membrane to withstand the harsh conditions during the chemical conversion. In biological systems, membranes fulfill a number of essential functions. The compartmentalization of biological cells is achieved by membranes. Their semi-permeability allows reactions and reaction environments to be kept separate. A number of enzymes are membrane-bound, and mass transport through the membrane is often active rather than passive, as it is in artificial membranes , allowing the cell to maintain gradients, for example by using active transport of protons or water. The use of a natural membrane was the first example of a membrane being exploited for a chemical reaction. By using the selective permeability of a pig's bladder , water could be removed from a condensation reaction to shift the equilibrium position of the reaction towards the condensation products according to Le Chatelier's principle . As enzymes are macromolecules and often differ greatly in size from reactants, they can be separated by size-exclusion membrane filtration with ultra- or nanofiltration artificial membranes. This is used on an industrial scale for the production of enantiopure amino acids by kinetic resolution of chemically derived racemic amino acids. The most prominent example is the production of L- methionine on a scale of 400 t/a. [ 18 ] The advantage of this method over other forms of catalyst immobilization is that the enzymes are not altered in activity or selectivity, as they remain solubilized. 
The principle can be applied to all macromolecular catalysts that can be separated from the other reactants by means of filtration. So far, only enzymes have been used to a significant extent. In pervaporation, dense membranes are used for separation. For dense membranes the separation is governed by the difference in the chemical potential of the components in the membrane. The selectivity of the transport through the membrane depends on the difference in the solubility of the materials in the membrane and their diffusivity through the membrane. An example is the selective removal of water using lipophilic membranes; this can be used to overcome thermodynamic limitations of condensation reactions, e.g., esterifications, by removing water. The STAR process [ citation needed ] is used for the catalytic conversion of methane from natural gas with oxygen from air to methanol by the partial oxidation 2 CH 4 + O 2 → 2 CH 3 OH. The partial pressure of oxygen has to be low to prevent the formation of explosive mixtures and to suppress the subsequent reaction to carbon monoxide , carbon dioxide and water . This is achieved by using a tubular reactor with an oxygen -selective membrane. The membrane allows the uniform distribution of oxygen, as the driving force for the permeation of oxygen through the membrane is the difference in partial pressures on the air side and the methane side.
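Returning to the palladium membranes described earlier in this article: for dissociative solution/diffusion of hydrogen through a dense metal layer, the flux is commonly modeled with Sieverts' law, i.e., proportional to the difference of the square roots of the hydrogen partial pressures. A minimal sketch, in which the permeability, thickness and pressures are assumed order-of-magnitude values rather than data for any real membrane:

```python
import math

def h2_flux_sieverts(permeability: float, thickness_m: float,
                     p_feed_pa: float, p_perm_pa: float) -> float:
    """Hydrogen flux through a dense Pd-based membrane (Sieverts' law).

    flux = (permeability / thickness) * (sqrt(p_feed) - sqrt(p_permeate)),
    with permeability assumed here in mol m^-1 s^-1 Pa^-0.5.
    """
    return (permeability / thickness_m) * (math.sqrt(p_feed_pa) - math.sqrt(p_perm_pa))

# Assumed inputs for a thin Pd-Ag layer: 10 bar feed, 1 bar permeate.
flux = h2_flux_sieverts(permeability=1e-8, thickness_m=5e-6,
                        p_feed_pa=10e5, p_perm_pa=1e5)
print(f"H2 flux = {flux:.2f} mol m^-2 s^-1")
```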
https://en.wikipedia.org/wiki/Membrane_reactor
Within molecular and cell biology, membrane ruffling (also known as cell ruffling ) is the formation of a motile cell surface that contains a meshwork of newly polymerized actin filaments . It can also be regarded as one of the earliest structural changes observed in the cell. The GTP-binding protein Rac is the regulator of this membrane ruffling. Changes in polyphosphoinositide metabolism and changes in the Ca 2+ level of the cell may also play an important role. A number of actin-binding and organizing proteins localize to membrane ruffles and potentially target transducing molecules. Membrane ruffling is a characteristic feature of many actively migrating cells. When the membrane is unable to attach to the substrate, the membrane protrusion is recycled back into the cell. The ruffling of membranes is thought to be controlled by a group of enzymes known as Rho GTPases , specifically RhoA , Rac1 and Cdc42 . Some bacteria such as enteropathogenic E. coli and enterohemorrhagic E. coli can induce membrane ruffling by secreting toxins via the type three secretion system and modifying the host cytoskeleton . Such toxins include EspT, Map, and SopE, which mimic RhoGEF and activate endogenous Rho GTPases to manipulate actin polymerisation in the infected cell. [ 1 ]
https://en.wikipedia.org/wiki/Membrane_ruffling
Membrane scaling occurs when one or more sparingly soluble salts (e.g., calcium carbonate, calcium phosphate, etc.) precipitate and form a dense layer on the membrane surface in reverse osmosis (RO) applications. [ 1 ] Figures 1 and 2 show scanning electron microscopy (SEM) images of the RO membrane surface without and with scaling, respectively. Membrane scaling, like other types of membrane fouling , increases energy costs due to higher operating pressure, and reduces permeate water production. [ 2 ] Furthermore, scaling may damage and shorten the lifetime of membranes due to frequent membrane cleanings, [ 3 ] and it is therefore a major operational challenge in RO applications. Membrane scaling can occur when sparingly soluble salts in the RO concentrate become supersaturated , meaning their concentrations exceed their equilibrium ( solubility ) levels. In RO processes, the increased concentration of sparingly soluble salts in the concentrate is primarily caused by the withdrawal of permeate water from the feedwater. The ratio of permeate water to feedwater is known as recovery, which is directly related to membrane scaling. Recovery needs to be as high as possible in RO installations to minimize specific energy consumption. However, at high recovery rates, the concentration of sparingly soluble salts in the concentrate can increase dramatically. For example, for 80% and 90% recovery, the concentration of salts in the concentrate can reach 5 and 10 times their concentration in the feedwater, respectively. If the calcium and phosphate concentrations in the RO feedwater are 200 mg/L and 5 mg/L, respectively, the concentrations in the RO concentrate will be 1000 mg/L and 50 mg/L at 90% recovery, exceeding the calcium phosphate solubility limit and resulting in calcium phosphate scaling. It is important to note that membrane scaling is not only dependent on supersaturation but also on crystallization kinetics, i.e., nucleation and crystal growth . The most common salts that cause scaling in RO processes include calcium carbonate, calcium sulphate, calcium phosphate and silica. There are a number of indices available to determine the scaling tendency of sparingly soluble salts in a water solution. These indices indicate whether a given scale-forming species is undersaturated , saturated, or supersaturated. Scaling does not occur when a compound is undersaturated, while it will take place sooner or later when a compound is supersaturated. The most commonly used indices to predict scaling in RO applications are the saturation index (SI), the saturation ratio (S r ) and the Langelier saturation index (LSI). The saturation index is defined as: {\displaystyle SI=\log {\frac {IAP}{K_{sp}}}} where IAP and K sp are the ion activity product and solubility product of the sparingly soluble salt, respectively. For instance, SI for calcium sulphate can be calculated as follows: {\displaystyle SI=\log {\frac {\gamma [Ca^{2+}]\,\gamma [SO_{4}^{2-}]}{K_{sp}}}} where γ is the activity coefficient, and [Ca 2+ ] and [SO 4 2− ] are the calcium and sulphate concentrations in mol/L, respectively. The saturation ratio is defined analogously: {\displaystyle S_{r}={\sqrt {\frac {IAP}{K_{sp}}}}} and, for calcium sulphate: {\displaystyle S_{r}={\sqrt {\frac {\gamma [Ca^{2+}]\,\gamma [SO_{4}^{2-}]}{K_{sp}}}}} LSI is used only for calcium carbonate scaling. 
On the other hand, SI and S r are applicable to all compounds. A positive value of SI or LSI indicates that scaling may occur in RO, whereas a negative value implies that scaling will not occur. Similarly, scaling may occur when S r > 1, but not when S r < 1. There are several methods for preventing scaling in RO applications, including acidification of the RO feed, lowering RO system recovery, and antiscalant addition. [ 4 ] Acidification of RO feedwater was one of the first methods for tackling calcium carbonate scaling in RO processes. [ 5 ] However, due to the risks associated with the use of acid, this method is becoming less common. Furthermore, acidification may not be effective for all types of scales; for example, it is very effective in preventing calcium carbonate scaling but not calcium sulphate scaling. [ 6 ] Another method of preventing scaling is to operate RO at low recovery (ratio of permeate water to the feedwater). The recovery of the RO application is reduced in this approach so as to bring the supersaturation level of the concentrate water down to undersaturated conditions. Low recovery also reduces the adverse effect of concentration polarization, because there is less solute accumulation at the membrane surface, reducing the potential for scale formation. This approach, however, is not very appealing or economical because it results in high specific energy consumption. Furthermore, the large amount of concentrate to be disposed of is a problem. Antiscalant addition to the RO feed is one of the most widely applied strategies for scale control. [ 7 ] Antiscalants can be used to increase the recovery of the RO process and primarily contain organic compounds with sulphonate , phosphonate , or carboxylic acid functional groups . [ 8 ] [ 9 ] The addition of antiscalants hinders the crystallization process, i.e., the nucleation and/or growth phase of scaling compounds. Antiscalants prevent scale formation by three mechanisms, namely threshold inhibition, crystal modification and dispersion. [ 10 ] Threshold inhibition is when antiscalant molecules adsorb on crystal nuclei and halt their nucleation process, whereas crystal modification and dispersion are the ability of antiscalants to stop the growth and/or agglomeration of crystals and particles. [ 11 ] For silica scale, antiscalants have an additional function: they prevent the polymerisation of silica monomers, hence preventing the growth of silica polymers. [ 12 ] [ 13 ] There are various commercial antiscalants on the market from suppliers such as Kurita, Avista and BASF. [ 14 ] [ 15 ] [ 16 ] In RO applications, antiscalants are chosen based on the composition of the feedwater, and their doses are usually calculated using computer programs created by antiscalant manufacturers. For example, Avista has chemical dosing software called AdvisorCI™ [ 17 ] that is used to compute accurate dosing of chemicals in RO systems.
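A small sketch tying together the recovery arithmetic and the saturation index defined above. The feed concentrations, activity coefficient and solubility product below are assumed round numbers for illustration, not design values:

```python
import math

def concentration_factor(recovery: float) -> float:
    """Concentrate-to-feed concentration ratio for a given recovery fraction."""
    return 1.0 / (1.0 - recovery)

def saturation_index(ca_mol: float, so4_mol: float,
                     gamma: float, ksp: float) -> float:
    """SI = log10(IAP / Ksp) for calcium sulphate, as defined in the text."""
    iap = (gamma * ca_mol) * (gamma * so4_mol)
    return math.log10(iap / ksp)

cf = concentration_factor(0.90)            # 90% recovery -> 10x, as in the text
ca, so4 = 0.005 * cf, 0.005 * cf           # feed at 5 mmol/L each (assumed)
si = saturation_index(ca, so4, gamma=0.5, ksp=4.9e-5)  # gamma and Ksp assumed
print(f"concentration factor: {cf:.0f}x, SI = {si:.2f}")
# SI > 0 here, so calcium sulphate scaling would be expected without an antiscalant.
```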
https://en.wikipedia.org/wiki/Membrane_scaling
Membrane structures are spatial structures made out of tensioned membranes. The structural use of membranes can be divided into pneumatic structures , tensile membrane structures , and cable domes. In these three kinds of structure, membranes work together with cables, columns and other construction members to find a form. Membranes are also used as non-structural cladding, as at the Beijing National Stadium where the spaces between the massive steel structural members are infilled with PTFE-coated glass fiber fabric and ETFE foil. The other major building on the site, built for the 2008 Summer Olympics , is the Beijing National Aquatics Center , also known as the Water Cube. It is entirely clad in 100,000 square metres of inflated ETFE foil cushions arranged as an apparently random cellular structure. The common membranes used in membrane structures include PTFE-coated glass fiber fabric, PVC-coated polyester fabric and ETFE foil. The International Centre for Numerical Methods in Engineering (CIMNE) in Barcelona holds the international conference Textile Composites and Inflatable Structures (Structural Membranes) every two years. [ 1 ] The conference has taken place in Barcelona, Stuttgart and Munich. The tenth edition of the conference will be organized in 2021 in Munich. [ 2 ]
https://en.wikipedia.org/wiki/Membrane_structure
Membrane technology encompasses the scientific processes used in the construction and application of membranes. Membranes are used to facilitate the transport or rejection of substances between mediums, and for the mechanical separation of gas and liquid streams. In the simplest case, filtration is achieved when the pores of the membrane are smaller than the diameter of the undesired substance, such as a harmful microorganism. Membrane technology is commonly used in industries such as water treatment, chemical and metal processing, pharmaceuticals, biotechnology and the food industry, as well as in the removal of environmental pollutants. After membrane construction, the prepared membrane needs to be characterized to determine parameters such as pore size, functional groups and material properties, which are difficult to determine in advance. In this process, techniques such as scanning electron microscopy , transmission electron microscopy , Fourier-transform infrared spectroscopy , X-ray diffraction , and liquid–liquid displacement porosimetry are utilized. Membrane technology covers all engineering approaches for the transport of substances between two fractions with the help of semi-permeable membranes . In general, mechanical separation processes for separating gaseous or liquid streams use membrane technology. In recent years, different methods have been used to remove environmental pollutants, such as adsorption , oxidation , and membrane separation. Pollution takes different forms, such as air pollution and wastewater pollution. [ 1 ] Industry accounts for a large share of environmental pollution (by some estimates more than 70%), and industries are therefore required to follow government rules such as the Air Pollution Control & Prevention Act of 1981, treating their waste before releasing it into the environment. [ 2 ] [ 3 ] Biomass-based membrane technology is one of the most promising tools for pollutant removal because of its low cost, high efficiency, and lack of secondary pollutants . [ 1 ] Typically polysulfone , polyvinylidene fluoride , and polypropylene are used in the membrane preparation process. These membrane materials are non-renewable and non-biodegradable, which creates harmful environmental pollution. [ 4 ] Researchers are therefore trying to synthesize eco-friendly membranes that avoid environmental pollution, for example biodegradable membranes made from naturally available materials such as biomass, which can be used to remove pollutants. [ 5 ] Membrane separation processes operate without heating and therefore use less energy than conventional thermal separation processes such as distillation , sublimation or crystallization . The separation process is purely physical and both fractions ( permeate and retentate ) can be obtained as useful products. Cold separation using membrane technology is widely used in the food technology , biotechnology and pharmaceutical industries. Furthermore, using membranes enables separations to take place that would be impossible using thermal separation methods. For example, it is impossible to separate the constituents of azeotropic liquids or solutes which form isomorphic crystals by distillation or recrystallization, but such separations can be achieved using membrane technology. 
Depending on the type of membrane, the selective separation of certain individual substances or substance mixtures is possible. Important technical applications include the production of drinking water by reverse osmosis . In wastewater treatment, membrane technology is becoming increasingly important. Ultra-/microfiltration can be very effective in removing colloids and macromolecules from wastewater. This is needed if wastewater is discharged into sensitive waters, especially those designated for contact water sports and recreation. About half of the market is in medical applications such as artificial kidneys to remove toxic substances by hemodialysis and artificial lungs for the bubble-free supply of oxygen to the blood . The importance of membrane technology is growing in the field of environmental protection ( Nano-Mem-Pro IPPC Database ). Even in modern energy recovery techniques, membranes are increasingly used, for example in fuel cells and in osmotic power plants . Two basic models can be distinguished for mass transfer through the membrane: the solution-diffusion model and transport through pores. In real membranes, these two transport mechanisms certainly occur side by side, especially during ultrafiltration. In the solution-diffusion model, transport occurs only by diffusion . The component that needs to be transported must first be dissolved in the membrane. The general approach of the solution-diffusion model is to assume that the chemical potentials of the feed and permeate fluids are in equilibrium with the adjacent membrane surfaces, such that appropriate expressions for the chemical potential in the fluid and membrane phases can be equated at the solution-membrane interface. This principle is more important for dense membranes without natural pores, such as those used for reverse osmosis and in fuel cells. During the filtration process a boundary layer forms on the membrane. This concentration gradient is created by molecules which cannot pass through the membrane. The effect is referred to as concentration polarization and, occurring during the filtration, leads to a reduced trans-membrane flow ( flux ). Concentration polarization is, in principle, reversible by cleaning the membrane, which results in the initial flux being almost totally restored. Using a tangential flow to the membrane (cross-flow filtration) can also minimize concentration polarization. Transport through pores – in the simplest case – occurs convectively . This requires the size of the pores to be smaller than the diameter of the components to be separated. Membranes that function according to this principle are used mainly in micro- and ultrafiltration. They are used to separate macromolecules from solutions , colloids from a dispersion , or to remove bacteria. During this process, the retained particles or molecules form a pulpy mass ( filter cake ) on the membrane, and this blockage of the membrane hampers the filtration. This blockage can be reduced by the use of the cross-flow method ( cross-flow filtration ). Here, the liquid to be filtered flows along the front of the membrane and is separated by the pressure difference between the front and back of the membrane into retentate (the flowing concentrate) on the front and permeate (filtrate) on the back. The tangential flow on the front creates a shear stress that cracks the filter cake and reduces fouling . According to the driving force of the operation, it is possible to distinguish pressure-driven, concentration-driven, electrically driven and temperature-driven membrane processes. There are two main flow configurations of membrane processes: cross-flow (or tangential flow) and dead-end filtration. 
In cross-flow filtration the feed flow is tangential to the surface of the membrane, retentate is removed from the same side further downstream, whereas the permeate flow is tracked on the other side. In dead-end filtration, the direction of the fluid flow is normal to the membrane surface. Both flow geometries offer some advantages and disadvantages. Generally, dead-end filtration is used for feasibility studies on a laboratory scale. Dead-end membranes are relatively easy to fabricate, which reduces the cost of the separation process. The dead-end membrane separation process is easy to implement and is usually cheaper than cross-flow membrane filtration. The dead-end filtration process is usually a batch-type process, where the filtering solution is loaded (or slowly fed) into the membrane device, which then allows passage of some particles subject to the driving force. The main disadvantage of dead-end filtration is extensive membrane fouling and concentration polarization. Fouling is usually induced faster at higher driving forces. Membrane fouling and particle retention in the feed solution also build up concentration gradients and particle backflow (concentration polarization). Tangential flow devices are more cost- and labor-intensive, but they are less susceptible to fouling due to the sweeping effects and high shear rates of the passing flow. The most commonly used synthetic membrane devices (modules) are flat sheets/plates, spiral-wound modules, and hollow fibers. Flat membranes used in filtration and separation processes can be enhanced with surface patterning, where microscopic structures are introduced to improve performance. These patterns increase surface area, optimize water flow, and reduce fouling, leading to higher permeability and longer membrane lifespan. Research has shown that such modifications can significantly enhance efficiency in water purification, energy applications, and industrial separations. [ 6 ] Flat plates are usually constructed as circular thin flat membrane surfaces to be used in dead-end geometry modules. Spiral-wound modules are constructed from similar flat membranes but in the form of a "pocket" containing two membrane sheets separated by a highly porous support plate. [ 7 ] Several such pockets are then wound around a tube to create a tangential flow geometry and to reduce membrane fouling. Hollow fiber modules consist of an assembly of self-supporting fibers with dense skin separation layers and a more open matrix helping to withstand pressure gradients and maintain structural integrity. [ 7 ] Hollow fiber modules can contain up to 10,000 fibers ranging from 200 to 2500 μm in diameter; the main advantage of hollow fiber modules is the very large surface area within an enclosed volume, increasing the efficiency of the separation process (see the sketch below). The disc tube module uses a cross-flow geometry and consists of a pressure tube and hydraulic discs, which are held by a central tension rod, and membrane cushions that lie between two discs. [ 8 ] The selection of synthetic membranes for a targeted separation process is usually based on a few requirements. Membranes have to provide enough mass transfer area to process large amounts of feed stream. The selected membrane has to have high selectivity (rejection) properties for certain particles; it has to resist fouling and have high mechanical stability. It also needs to be reproducible and to have low manufacturing costs.
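The area advantage of hollow-fiber modules can be made concrete with simple geometry. The sketch below (Python; the dimensions are arbitrary illustrative choices within the ranges quoted above, not a real module specification) compares the membrane area of a fiber bundle with that of a flat sheet occupying the same housing cross-section.

```python
import math

# Back-of-envelope comparison of membrane area for a hollow-fiber bundle
# versus a single flat sheet. Pure geometry; all dimensions are assumptions.

n_fibers = 10_000   # fibers per module (the upper end quoted in the text)
d = 500e-6          # fiber diameter, m (within the 200-2500 um range)
L = 1.0             # module length, m

fiber_area = n_fibers * math.pi * d * L            # total lateral fiber area
print(f"hollow-fiber area: {fiber_area:.1f} m^2")  # ~15.7 m^2

# A flat circular sheet filling a hypothetical 0.2 m diameter housing:
housing_d = 0.2
flat_area = math.pi * (housing_d / 2) ** 2
print(f"flat-sheet area:   {flat_area:.3f} m^2")   # ~0.031 m^2
```

The roughly 500-fold difference in area per module illustrates why hollow-fiber geometries dominate applications where packing density matters.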
The main modeling equation for dead-end filtration at constant pressure drop is represented by Darcy's law: [ 7 ]

dV_p/dt = Q = (Δp/μ) A (1/(R_m + R))

where V_p and Q are the volume of the permeate and its volumetric flow rate, respectively (proportional to the same characteristics of the feed flow), μ is the dynamic viscosity of the permeating fluid, A is the membrane area, and R_m and R are the respective resistances of the membrane and of the growing deposit of foulants. R_m can be interpreted as the membrane resistance to solvent (water) permeation. This resistance is an intrinsic membrane property and is expected to be fairly constant and independent of the driving force Δp. R is related to the type of membrane foulant, its concentration in the filtering solution, and the nature of foulant-membrane interactions. Darcy's law allows calculation of the membrane area for a targeted separation at given conditions. The solute sieving coefficient is defined by the equation: [ 7 ]

S = C_p / C_f

where C_f and C_p are the solute concentrations in the feed and permeate, respectively. Hydraulic permeability is defined as the inverse of resistance and is represented by the equation: [ 7 ]

L_p = J / Δp

where J is the permeate flux, i.e. the volumetric flow rate per unit of membrane area. The solute sieving coefficient and hydraulic permeability allow quick assessment of synthetic membrane performance. Membrane separation processes have a very important role in the separation industry. Nevertheless, they were not considered technically important until the mid-1970s. Membrane separation processes differ based on separation mechanisms and the size of the separated particles. The widely used membrane processes include microfiltration, ultrafiltration, nanofiltration, reverse osmosis, electrolysis, dialysis, electrodialysis, gas separation, vapor permeation, pervaporation, membrane distillation, and membrane contactors. [ 9 ] All processes except pervaporation involve no phase change. Most of these processes are pressure driven; exceptions include dialysis (driven by a concentration difference), electrodialysis (driven by an electric potential) and membrane distillation (driven by a temperature difference). Microfiltration and ultrafiltration are widely used in food and beverage processing (beer microfiltration, apple juice ultrafiltration), biotechnological applications and the pharmaceutical industry (antibiotic production, protein purification), water purification and wastewater treatment, the microelectronics industry, and others. Nanofiltration and reverse osmosis membranes are mainly used for water purification purposes. Dense membranes are utilized for gas separations (removal of CO2 from natural gas, separating N2 from air, organic vapor removal from air or a nitrogen stream) and sometimes in membrane distillation. The latter process helps in the separation of azeotropic compositions, reducing the costs of distillation processes. The pore sizes of technical membranes are specified differently depending on the manufacturer. One common distinction is by nominal pore size. It describes the maximum pore size distribution [ 10 ] and gives only vague information about the retention capacity of a membrane. The exclusion limit or "cut-off" of the membrane is usually specified in the form of the NMWC (nominal molecular weight cut-off, or MWCO, molecular weight cut-off, with units in daltons). It is defined as the minimum molecular weight of a globular molecule that is retained to 90% by the membrane.
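The relations given above (Darcy's law, the sieving coefficient and the hydraulic permeability) lend themselves to quick engineering estimates. The following Python sketch, with purely hypothetical parameter values, rearranges Darcy's law to size the membrane area for a target permeate flow and evaluates the other two quantities.

```python
# Minimal sketch of the dead-end filtration relations above.
# All numerical values are illustrative assumptions, not measured data.

def membrane_area(Q, delta_p, mu, R_m, R_f):
    """Darcy's law rearranged for area: Q = (delta_p / mu) * A / (R_m + R_f)."""
    return Q * mu * (R_m + R_f) / delta_p

def sieving_coefficient(c_permeate, c_feed):
    """S = C_p / C_f (dimensionless)."""
    return c_permeate / c_feed

def hydraulic_permeability(J, delta_p):
    """L_p = J / delta_p, with J the permeate flux per unit membrane area."""
    return J / delta_p

# Hypothetical operating point:
Q = 1.0e-6       # target permeate flow, m^3/s
delta_p = 1.0e5  # transmembrane pressure, Pa
mu = 1.0e-3      # water viscosity, Pa*s
R_m = 1.0e12     # membrane resistance, 1/m
R_f = 5.0e11     # foulant deposit resistance, 1/m

A = membrane_area(Q, delta_p, mu, R_m, R_f)   # ~0.015 m^2
J = Q / A                                     # permeate flux, m/s
print(f"required area: {A:.3e} m^2")
print(f"sieving coefficient: {sieving_coefficient(0.02, 1.0):.2f}")
print(f"hydraulic permeability: {hydraulic_permeability(J, delta_p):.3e} m/(s*Pa)")
```

A permeability that falls over time at fixed Δp signals growth of the foulant resistance R_f, which is the quantity the cross-flow configurations described earlier are designed to suppress.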
The cut-off, depending on the method, can be converted to the so-called D90, which is then expressed in a metric unit. In practice the MWCO of the membrane should be at least 20% lower than the molecular weight of the molecule that is to be separated. Using track-etched mica membranes, [ 11 ] Beck and Schultz [ 12 ] demonstrated that hindered diffusion of molecules in pores can be described by the Renkin [ 13 ] equation. Filter membranes are divided into four classes according to pore size: microfiltration, ultrafiltration, nanofiltration and reverse osmosis membranes. The form and shape of the membrane pores are highly dependent on the manufacturing process and are often difficult to specify. Therefore, for characterization, test filtrations are carried out, and the pore diameter refers to the diameter of the smallest particles which could not pass through the membrane. The rejection can be determined in various ways and provides an indirect measurement of the pore size. One possibility is the filtration of macromolecules (often dextran, polyethylene glycol or albumin); another is measurement of the cut-off by gel permeation chromatography. These methods are used mainly to measure membranes for ultrafiltration applications. Another testing method is the filtration of particles with defined size and their measurement with a particle sizer or by laser-induced breakdown spectroscopy (LIBS). A vivid characterization is to measure the rejection of dextran blue or other colored molecules. The retention of bacteriophages and bacteria, the so-called "bacteria challenge test", can also provide information about the pore size. To determine the pore diameter, physical methods such as porosimetry (mercury intrusion, liquid-liquid porosimetry and the bubble point test) are also used, but a certain form of the pores (such as cylindrical or concatenated spherical holes) is assumed. For membranes whose pore geometry does not match the ideal, such methods yield a "nominal" pore diameter, which characterizes the membrane but does not necessarily reflect its actual filtration behavior and selectivity. The selectivity is highly dependent on the separation process, the composition of the membrane and its electrochemical properties, in addition to the pore size. With high selectivity, isotopes can be enriched (uranium enrichment) in nuclear engineering, or industrial gases like nitrogen can be recovered (gas separation). Ideally, even racemic mixtures can be enriched with a suitable membrane. When choosing membranes, selectivity has priority over high permeability, as low flows can easily be offset by increasing the filter surface with a modular structure. In gas-phase filtration, different deposition mechanisms are operative, so that particles having sizes below the pore size of the membrane can be retained as well. Membranes are classified into two categories: synthetic membranes and natural membranes. Synthetic membranes are further classified into organic and inorganic membranes; organic membranes are typically polymeric, while inorganic membranes are typically ceramic. [ 15 ] Green or bio-membrane synthesis offers an environmentally protective alternative with broadly comparable performance. Biomass is used in the form of activated carbon nanoparticles derived from cellulose-based sources such as coconut shells, hazelnut shells, walnut shells and agricultural wastes such as corn stalks,
[ 4 ] which improve surface hydrophilicity, enlarge pore size and lower surface roughness; as a result, the separation and anti-fouling performance of the membranes improve simultaneously. [ 16 ] A biomass-based membrane is a membrane made from organic materials such as plant fibers. [ 4 ] These membranes are often used in water filtration and wastewater treatment applications. The fabrication of a pure biomass-based membrane is a complex process that involves a number of steps. The first step is to create a slurry of the organic materials. This slurry is then cast onto a substrate, such as a glass or metal plate. [ 17 ] The cast film is then dried, and the resulting membrane is subjected to a number of treatments, such as chemical or heat treatments, to improve its properties. One of the challenges in the fabrication of biomass-based membranes is to create a membrane with the desired properties. [ 18 ] A variety of instruments are used in membrane synthesis procedures. After casting and synthesis, the prepared membrane needs to be characterized to determine membrane parameters such as pore size, functional groups, wettability and surface charge. Knowing the membrane properties makes it possible to remove and treat the particulate pollutants that cause environmental pollution. [ 19 ] Instruments such as those listed earlier (SEM, TEM, FTIR, XRD and porosimetry) are used for this characterization. Water treatment is any process that improves the quality of water to make it more acceptable for a specific end-use. Membranes can be used to remove particulates from water by either size exclusion or charge separation. [ 20 ] In size exclusion, the pores in the membrane are sized such that only particles smaller than the pores can pass through. In the finest membranes (reverse osmosis), the pores are sized such that only water molecules can pass through, leaving dissolved contaminants behind. [ 21 ] Membranes are also utilized in gas separation: harmful gases such as carbon dioxide (CO2), nitrogen oxides (NOx) and sulphur oxides (SOx) can be removed to protect the environment. [ 22 ] Biomass-based membranes have been reported to be more effective for gas separation than commercial membranes. [ 23 ] In hemodialysis, a semipermeable membrane is used to remove waste products and excess fluids from the blood. [ 24 ]
https://en.wikipedia.org/wiki/Membrane_technology
Topology of a transmembrane protein refers to the locations of the N- and C-termini of the membrane-spanning polypeptide chain with respect to the inner or outer sides of the biological membrane occupied by the protein. [ 1 ] Several databases provide experimentally determined topologies of membrane proteins. They include Uniprot, TOPDB, [ 3 ] [ 4 ] [ 5 ] OPM, and ExTopoDB. [ 6 ] [ 7 ] There is also TOPDOM, a database of domains conservatively located on a particular side of the membrane. [ 8 ] Several computational methods were developed, with limited success, for predicting transmembrane alpha-helices and their topology. Pioneering methods utilized the fact that membrane-spanning regions contain more hydrophobic residues than other parts of the protein; however, applying different hydrophobicity scales altered the prediction results. Later, several statistical methods were developed to improve the topography prediction and a special alignment method was introduced. [ 9 ] According to the positive-inside rule, [ 10 ] cytosolic loops near the lipid bilayer contain more positively charged amino acids. Applying this rule resulted in the first topology prediction methods. There is also a negative-outside rule for transmembrane alpha-helices in single-pass proteins, although negatively charged residues are rarer than positively charged residues in transmembrane segments of proteins. [ 11 ] As more structures were determined, machine learning algorithms appeared. Supervised learning methods are trained on a set of experimentally determined structures; however, these methods depend strongly on their training set. [ 12 ] [ 13 ] [ 14 ] [ 15 ] Unsupervised learning methods are based on the principle that topology depends on the maximum divergence of the amino acid distributions in different structural parts. [ 16 ] [ 17 ] It was also shown that locking the location of a segment based on prior knowledge about the structure improves the prediction accuracy. [ 18 ] This feature has been added to some of the existing prediction methods. [ 17 ] [ 14 ] The most recent methods use consensus prediction (i.e. they use several algorithms to determine the final topology) [ 19 ] and automatically incorporate previously determined experimental information. [ 20 ] The HTP database [ 21 ] [ 22 ] provides a collection of topologies that are computationally predicted for human transmembrane proteins. Discriminating signal peptides from transmembrane segments is an additional problem in topology prediction, treated with limited success by different methods. [ 23 ] Both signal peptides and transmembrane segments contain hydrophobic regions which form α-helices. This causes cross-prediction between them, which is a weakness of many transmembrane topology predictors. By predicting signal peptides and transmembrane helices simultaneously (Phobius [ 14 ] ), the errors caused by cross-prediction are reduced and the performance is substantially increased. Another feature used to increase the accuracy of the prediction is homology (PolyPhobius). It is also possible to predict the topology of beta-barrel membrane proteins. [ 24 ] [ 25 ]
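The hydrophobicity-based approach described above is easy to sketch in code. The following Python example implements a simple Kyte-Doolittle sliding-window hydropathy scan; the 19-residue window and the threshold of about 1.6 are the conventional heuristic choices for transmembrane helices, and the toy sequence is an arbitrary illustration rather than a real protein. Modern predictors (statistical methods, machine learning, consensus approaches) are considerably more sophisticated than this.

```python
# Minimal sliding-window hydropathy scan using the Kyte-Doolittle scale.
# A window average above ~1.6 over a 19-residue window is the classic
# heuristic for flagging a candidate transmembrane helix.

KD = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def hydropathy_windows(seq, window=19):
    """Yield (start_index, mean_hydropathy) for every full-length window."""
    for i in range(len(seq) - window + 1):
        yield i, sum(KD[aa] for aa in seq[i:i + window]) / window

def candidate_tm_windows(seq, window=19, threshold=1.6):
    """Start indices of windows whose mean hydropathy exceeds the threshold."""
    return [i for i, h in hydropathy_windows(seq, window) if h > threshold]

# Toy sequence: a 19-residue hydrophobic stretch flanked by polar residues.
toy = "MKKDDSQN" + "LLVVAILFAIGLLVVAILF" + "QNDDKRKE"
print(candidate_tm_windows(toy))  # windows covering the hydrophobic core
```

Note that such a scan says nothing about orientation; in practice it would be combined with the positive-inside rule described above to assign the actual topology.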
https://en.wikipedia.org/wiki/Membrane_topology
The Membranome database provides structural and functional information about more than 6000 single-pass (bitopic) transmembrane proteins from Homo sapiens, Arabidopsis thaliana, Dictyostelium discoideum, Saccharomyces cerevisiae, Escherichia coli and Methanocaldococcus jannaschii. [ 1 ] Bitopic membrane proteins consist of a single transmembrane alpha-helix connecting water-soluble domains of the protein situated on opposite sides of a biological membrane. These proteins are frequently involved in signal transduction and communication between cells in multicellular organisms. The database provides information about the individual proteins, including computationally generated three-dimensional models of their transmembrane alpha-helices spatially arranged in the membrane, topology, intracellular localization, amino acid sequence, domain architecture, functional annotation and available experimental structures from the Protein Data Bank. It also provides a classification of bitopic proteins into 15 functional classes, more than 700 structural superfamilies and 1400 families, along with 3D structures of bitopic protein complexes, which are likewise classified into families. [ 1 ] The second Membranome version [ 2 ] provides 3D models of more than 2000 parallel homodimers formed by the TM α-helices of bitopic proteins from different organisms, which were generated using the TMDOCK program. [ 3 ] The models of the homodimers were verified through comparison with available experimental data for nearly 600 proteins. [ 4 ] The database includes downloadable coordinate files of transmembrane helices and their homodimers with calculated membrane boundaries. Membranome version 3.0 incorporates models generated by AlphaFold 2. [ 5 ] The database website provides access to related webservers, FMAP [ 6 ] and TMDOCK, which have been developed for modeling individual alpha-helices and their dimeric complexes in membranes. The database and webservers have been used in experimental and bioinformatics studies of bitopic membrane proteins. [ 7 ] [ 8 ] [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/Membranome_database
A memristor (/ˈmɛmrɪstər/; a portmanteau of memory resistor) is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. It was described and named in 1971 by Leon Chua, completing a theoretical quartet of fundamental electrical components which also comprises the resistor, capacitor and inductor. [ 1 ] Chua and Kang later generalized the concept to memristive systems. [ 2 ] Such a system comprises a circuit, of multiple conventional components, which mimics key properties of the ideal memristor component and is also commonly referred to as a memristor. Several such memristor system technologies have been developed, notably ReRAM. The identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated. [ 3 ] [ 4 ] Chua in his 1971 paper identified a theoretical symmetry between the non-linear resistor (voltage vs. current), non-linear capacitor (voltage vs. charge), and non-linear inductor (magnetic flux linkage vs. current). From this symmetry he inferred the characteristics of a fourth fundamental non-linear circuit element, linking magnetic flux and charge, which he called the memristor. In contrast to a linear (or non-linear) resistor, the memristor has a dynamic relationship between current and voltage, including a memory of past voltages or currents. Other scientists had proposed dynamic memory resistors such as the memistor of Bernard Widrow, but Chua introduced a mathematical generality. The memristor was originally defined in terms of a non-linear functional relationship between magnetic flux linkage Φ_m(t) and the amount of electric charge that has flowed, q(t): [ 1 ]

f(Φ_m(t), q(t)) = 0

The magnetic flux linkage, Φ_m, is generalized from the circuit characteristic of an inductor. It does not represent a magnetic field here; its physical meaning is discussed below. The symbol Φ_m may be regarded as the integral of voltage over time. [ 5 ] In the relationship between Φ_m and q, the derivative of one with respect to the other depends on the value of one or the other, and so each memristor is characterized by its memristance function describing the charge-dependent rate of change of flux with charge:

M(q) = dΦ_m/dq

Substituting the flux as the time integral of the voltage, and charge as the time integral of current, the more convenient form is:

M(q(t)) = (dΦ_m/dt)/(dq/dt) = V(t)/I(t)

To relate the memristor to the resistor, capacitor, and inductor, it is helpful to isolate the term M(q), which characterizes the device, and write it as a differential equation. Considering all meaningful ratios of the differentials of I, q, Φ_m, and V: no device can relate dI to dq, or dΦ_m to dV, because I is the time derivative of q and Φ_m is the integral of V with respect to time. It can be inferred from this that memristance is charge-dependent resistance. If M(q(t)) is a constant, then we obtain Ohm's law, R(t) = V(t)/I(t). If M(q(t)) is nontrivial, however, the equation is not equivalent because q(t) and M(q(t)) can vary with time.
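As a concrete illustration of the definition M(q) = dΦ_m/dq, the short sketch below differentiates an assumed flux-charge relationship symbolically (Python with SymPy). The chosen Φ_m(q) is a made-up example for illustration, not a model of any physical device.

```python
# Illustration of M(q) = dPhi_m/dq for an assumed (purely illustrative)
# nonlinear flux-charge relationship. Requires sympy.
import sympy as sp

q = sp.symbols('q', real=True)
R0, k = sp.symbols('R0 k', positive=True)

# Hypothetical flux-charge curve: a linear resistive term plus a cubic term.
phi = R0 * q + k * q**3 / 3

M = sp.diff(phi, q)     # memristance M(q) = dPhi_m/dq
print(M)                # R0 + k*q**2: the resistance depends on past charge

# With k -> 0 the flux-charge curve becomes a straight line and the device
# reduces to an ordinary ohmic resistor of value R0.
print(M.subs(k, 0))     # R0
```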
Solving for voltage as a function of time produces

V(t) = M(q(t)) I(t)

This equation reveals that memristance defines a linear relationship between current and voltage, as long as M does not vary with charge. Nonzero current implies time-varying charge. Alternating current, however, may reveal the linear dependence in circuit operation by inducing a measurable voltage without net charge movement, as long as the maximum change in q does not cause much change in M. Furthermore, the memristor is static if no current is applied. If I(t) = 0, we find V(t) = 0 and M(t) is constant. This is the essence of the memory effect. Analogously, we can define a memductance W(ϕ(t)): [ 1 ]

i(t) = W(ϕ(t)) v(t)

The power consumption characteristic recalls that of a resistor, I²R:

P(t) = I(t) V(t) = I²(t) M(q(t))

As long as M(q(t)) varies little, such as under alternating current, the memristor will appear as a constant resistor. If M(q(t)) increases rapidly, however, current and power consumption will quickly stop. M(q) is physically restricted to be positive for all values of q (assuming the device is passive and does not become superconductive at some q). A negative value would mean that it would perpetually supply energy when operated with alternating current. In order to understand the nature of memristor function, some knowledge of fundamental circuit-theoretic concepts is useful, starting with the concept of device modeling. [ 6 ] Engineers and scientists seldom analyze a physical system in its original form. Instead, they construct a model which approximates the behaviour of the system. By analyzing the behaviour of the model, they hope to predict the behaviour of the actual system. The primary reason for constructing models is that physical systems are usually too complex to be amenable to a practical analysis. In the 20th century, work was done on devices where researchers did not recognize the memristive characteristics. This has raised the suggestion that such devices should be recognised as memristors. [ 6 ] Pershin and Di Ventra [ 3 ] have proposed a test that can help to resolve some of the long-standing controversies about whether an ideal memristor does actually exist or is a purely mathematical concept. The rest of this article primarily addresses memristors as related to ReRAM devices, since the majority of work since 2008 has been concentrated in this area. Paul Penfield, in a 1974 MIT technical report, [ 7 ] mentions the memristor in connection with Josephson junctions. This was an early use of the word "memristor" in the context of a circuit device. One of the terms in the current through a Josephson junction is of the form

i_M(v) = ε cos(ϕ_0) v = W(ϕ_0) v

where ε is a constant based on the physical superconducting materials, v is the voltage across the junction and i_M is the current through the junction. Through the late 20th century, research regarding this phase-dependent conductance in Josephson junctions was carried out. [ 8 ] [ 9 ] [ 10 ] [ 11 ] A more comprehensive approach to extracting this phase-dependent conductance appeared with Peotta and Di Ventra's seminal paper in 2014. [ 12 ]
Due to the practical difficulty of studying the ideal memristor, we will discuss other electrical devices which can be modelled using memristors. For a mathematical description of memristive devices (systems), see § Theory. A discharge tube can be modelled as a memristive device, with resistance being a function of the number of conduction electrons n_e: [ 2 ]

v_M = R(n_e) i_M
dn_e/dt = β n_e + α R(n_e) i_M²

where v_M is the voltage across the discharge tube, i_M is the current flowing through it, and n_e is the number of conduction electrons. A simple memristance function is R(n_e) = F/n_e. The parameters α, β, and F depend on the dimensions of the tube and the gas filling. An experimental identification of memristive behaviour is the "pinched hysteresis loop" in the v-i plane. [ a ] [ 13 ] [ 14 ] Thermistors can be modeled as memristive devices: [ 14 ]

v = R_0(T_0) exp[β(1/T − 1/T_0)] i ≡ R(T) i
dT/dt = (1/C)[−δ(T − T_0) + R(T) i²]

where β is a material constant, T is the absolute body temperature of the thermistor, T_0 is the ambient temperature (both temperatures in kelvins), R_0(T_0) denotes the cold-temperature resistance at T = T_0, C is the heat capacitance and δ is the dissipation constant for the thermistor. A fundamental phenomenon that has hardly been studied is memristive behaviour in p-n junctions. [ 15 ] The memristor plays a crucial role in mimicking the charge-storage effect in the diode base, and is also responsible for the conductivity modulation phenomenon (which is so important during forward transients). In 2008, a team at HP Labs found experimental evidence for Chua's memristor based on an analysis of a thin film of titanium dioxide, thus connecting the operation of ReRAM devices to the memristor concept. According to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history, the so-called non-volatility property. [ 16 ] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again. [ 17 ] [ 18 ] The HP Labs result was published in the scientific journal Nature. [ 17 ] [ 19 ] Following this claim, Leon Chua has argued that the memristor definition could be generalized to cover all forms of two-terminal non-volatile memory devices based on resistance switching effects. [ 16 ] Chua also argued that the memristor is the oldest known circuit element, with its effects predating the resistor, capacitor, and inductor. [ 20 ] However, there are doubts as to whether a memristor can actually exist in physical reality. [ 21 ] [ 22 ] [ 23 ] [ 24 ] Additionally, some experimental evidence contradicts Chua's generalization, since a non-passive nanobattery effect is observable in resistance switching memory. [ 25 ]
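To make the thermistor model introduced above concrete, the following Python sketch integrates its temperature state equation with a forward-Euler step under a constant current drive. All parameter values are invented for illustration and do not correspond to any real device.

```python
import math

# Forward-Euler integration of the memristive thermistor model above:
#   v     = R0(T0) * exp(beta * (1/T - 1/T0)) * i  =  R(T) * i
#   dT/dt = (1/C) * (-delta * (T - T0) + R(T) * i**2)
# All parameter values below are illustrative assumptions.

beta = 3000.0    # material constant, K
T0 = 300.0       # ambient temperature, K
R0 = 10_000.0    # cold resistance at T = T0, ohm
C = 0.01         # heat capacitance, J/K
delta = 0.002    # dissipation constant, W/K

def R(T):
    return R0 * math.exp(beta * (1.0 / T - 1.0 / T0))

i = 0.002        # constant drive current, A
T = T0
dt = 0.01        # time step, s

for _ in range(100_000):                           # integrate 1000 s
    dTdt = (-delta * (T - T0) + R(T) * i * i) / C
    T += dt * dTdt

# Joule self-heating raises T and thereby lowers R(T): the device
# "remembers" its current history through its internal temperature state.
print(f"steady-state T ~ {T:.1f} K, R(T) ~ {R(T):.0f} ohm")
```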
A simple test has been proposed by Pershin and Di Ventra [ 3 ] to analyze whether such an ideal or generic memristor does actually exist or is a purely mathematical concept. Up to now, there seems to be no experimental resistance switching device (ReRAM) which can pass the test. [ 3 ] [ 4 ] These devices are intended for applications in nanoelectronic memory devices, computer logic, and neuromorphic/neuromemristive computer architectures. [ 26 ] [ 27 ] In 2013, Hewlett-Packard CTO Martin Fink suggested that memristor memory may become commercially available as early as 2018. [ 28 ] In March 2012, a team of researchers from HRL Laboratories and the University of Michigan announced the first functioning memristor array built on a CMOS chip. [ 29 ] According to the original 1971 definition, the memristor is the fourth fundamental circuit element, forming a non-linear relationship between electric charge and magnetic flux linkage. In 2011, Chua argued for a broader definition that includes all two-terminal non-volatile memory devices based on resistance switching. [ 16 ] Williams argued that MRAM, phase-change memory and ReRAM are memristor technologies. [ 32 ] Some researchers argued that biological structures such as blood [ 33 ] and skin [ 34 ] [ 35 ] fit the definition. Others argued that the memory device under development by HP Labs and other forms of ReRAM are not memristors, but rather part of a broader class of variable-resistance systems, [ 36 ] and that a broader definition of memristor is a scientifically unjustifiable land grab that favored HP's memristor patents. [ 37 ] In 2011, Meuffels and Schroeder noted that one of the early memristor papers included a mistaken assumption regarding ionic conduction. [ 38 ] In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors. [ 21 ] They indicated inadequacies in the electrochemical modeling presented in the Nature article "The missing memristor found", [ 17 ] because the impact of concentration polarization effects on the behavior of metal-TiO_{2−x}-metal structures under voltage or current stress was not considered. [ 25 ] In a kind of thought experiment, Meuffels and Soni [ 21 ] furthermore revealed a severe inconsistency: if a current-controlled memristor with the so-called non-volatility property [ 16 ] existed in physical reality, its behavior would violate Landauer's principle, which places a limit on the minimum amount of energy required to change "information" states of a system. This critique was finally adopted by Di Ventra and Pershin [ 22 ] in 2013. Within this context, Meuffels and Soni [ 21 ] pointed to a fundamental thermodynamic principle: non-volatile information storage requires the existence of free-energy barriers that separate the distinct internal memory states of a system from each other; otherwise, one would be faced with an "indifferent" situation, and the system would arbitrarily fluctuate from one memory state to another just under the influence of thermal fluctuations. When unprotected against thermal fluctuations, the internal memory states exhibit some diffusive dynamics, which causes state degradation. [ 22 ] The free-energy barriers must therefore be high enough to ensure a low bit-error probability of bit operation. [ 39 ] Consequently, there is always a lower limit of the energy requirement, depending on the required bit-error probability, for intentionally changing a bit value in any memory device. [ 39 ] [ 40 ]
In the general concept of a memristive system the defining equations are (see § Theory):

y(t) = g(x, u, t) u(t)
ẋ = f(x, u, t)

where u(t) is an input signal and y(t) is an output signal. The vector x represents a set of n state variables describing the different internal memory states of the device, and ẋ is the time-dependent rate of change of the state vector x. When one wants to go beyond mere curve fitting and aims at a real physical modeling of non-volatile memory elements, e.g., resistive random-access memory devices, one has to keep an eye on the aforementioned physical correlations. To check the adequacy of the proposed model and its resulting state equations, the input signal u(t) can be superposed with a stochastic term ξ(t), which takes into account the existence of inevitable thermal fluctuations. The dynamic state equation in its general form then finally reads:

ẋ = f(x, u(t) + ξ(t), t)

where ξ(t) is, e.g., white Gaussian current or voltage noise. On the basis of an analytical or numerical analysis of the time-dependent response of the system towards noise, a decision on the physical validity of the modeling approach can be made, e.g., whether the system would be able to retain its memory states in power-off mode. Such an analysis was performed by Di Ventra and Pershin [ 22 ] with regard to the genuine current-controlled memristor. As the proposed dynamic state equation provides no physical mechanism enabling such a memristor to cope with inevitable thermal fluctuations, a current-controlled memristor would erratically change its state in the course of time just under the influence of current noise. [ 22 ] [ 41 ] Di Ventra and Pershin [ 22 ] thus concluded that memristors whose resistance (memory) states depend solely on the current or voltage history would be unable to protect their memory states against unavoidable Johnson–Nyquist noise and would permanently suffer from information loss, a so-called "stochastic catastrophe". A current-controlled memristor can thus not exist as a solid-state device in physical reality. The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. "resistance-switching" memory devices (ReRAM)) cannot be associated with the memristor concept, i.e., such devices cannot by themselves remember their current or voltage history. Transitions between distinct internal memory or resistance states are of a probabilistic nature. The probability for a transition from state {i} to state {j} depends on the height of the free-energy barrier between both states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by "lowering" the free-energy barrier for the transition {i}→{j} by means of, for example, an externally applied bias. A "resistance switching" event can simply be enforced by setting the external bias to a value above a certain threshold value. This is the trivial case, i.e., the free-energy barrier for the transition {i}→{j} is reduced to zero.
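The noise test described above can be sketched numerically. The toy Python code below perturbs two state equations with Gaussian noise at zero applied signal: one state is protected by a double-well potential (playing the role of a free-energy barrier), the other has no barrier and simply integrates the noise. The models and parameters are illustrative assumptions, not a reproduction of Di Ventra and Pershin's analysis.

```python
import random

# Toy retention check under thermal noise, with u(t) = 0 + xi(t).
# Model A: dx/dt = -dU/dx + xi, double-well U(x) = (x^2 - 1)^2 -> barrier.
# Model B: dx/dt = xi                                          -> no barrier.
# Both start in the memory state x = +1 with the drive switched off.

random.seed(0)
dt, steps, sigma = 1e-3, 200_000, 0.5

xA = xB = 1.0
for _ in range(steps):
    xi = random.gauss(0.0, sigma) / dt**0.5   # white-noise scaling: dW ~ sqrt(dt)
    force = -4.0 * xA * (xA * xA - 1.0)       # -dU/dx, wells at x = +/-1
    xA += dt * (force + xi)
    xB += dt * xi

print(f"barrier-protected state: x = {xA:+.2f}  (typically stays near +1)")
print(f"barrier-free state:      x = {xB:+.2f}  (random walk, memory lost)")
```

The barrier-free state performs an unbounded random walk, which is the "stochastic catastrophe" argument in miniature: without free-energy barriers there is nothing to pin the memory state against noise.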
If biases below the threshold value are applied, there is still a finite probability that the device will switch in the course of time (triggered by a random thermal fluctuation), but, as one is dealing with probabilistic processes, it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processes. If the free-energy barriers are not high enough, the memory device can even switch spontaneously, without anything being done to it. When a two-terminal non-volatile memory device is found to be in a distinct resistance state {j}, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems. An extra thermodynamic curiosity arises from the definition that memristors/memristive devices should energetically act like resistors. The instantaneous electrical power entering such a device is completely dissipated as Joule heat to the surroundings, so no extra energy remains in the system after it has been brought from one resistance state x_i to another one x_j. Thus, the internal energy of the memristor device in state x_i, U(V, T, x_i), would be the same as in state x_j, U(V, T, x_j), even though these different states would give rise to different device resistances, which must themselves be caused by physical alterations of the device's material. Other researchers noted that memristor models based on the assumption of linear ionic drift do not account for the asymmetry between set time (high-to-low resistance switching) and reset time (low-to-high resistance switching) and do not provide ionic mobility values consistent with experimental data. Non-linear ionic-drift models have been proposed to compensate for this deficiency. [ 42 ] A 2014 article by ReRAM researchers concluded that Strukov's (HP's) initial/basic memristor modeling equations do not reflect the actual device physics well, whereas subsequent (physics-based) models, such as Pickett's model or Menzel's ECM model (Menzel is a co-author of that article), have adequate predictability but are computationally prohibitive. As of 2014, the search continues for a model that balances these issues; the article identifies Chang's and Yakopcic's models as potentially good compromises. [ 43 ] Martin Reynolds, an electrical engineering analyst with the research outfit Gartner, commented that while HP was being sloppy in calling their device a memristor, critics were being pedantic in saying that it was not a memristor. [ 44 ] Chua suggested experimental tests to determine whether a device may properly be categorized as a memristor. [ 2 ] According to Chua, [ 45 ] [ 46 ] all resistive switching memories, including ReRAM, MRAM and phase-change memory, meet these criteria and are memristors. However, the lack of data for the Lissajous curves over a range of initial conditions or over a range of frequencies complicates assessments of this claim. Experimental evidence shows that redox-based resistance memory (ReRAM) includes a nanobattery effect that is contrary to Chua's memristor model. This indicates that the memristor theory needs to be extended or corrected to enable accurate ReRAM modeling. [ 25 ] In 2008, researchers from HP Labs introduced a model for a memristance function based on thin films of titanium dioxide. [ 17 ]
For R_on ≪ R_off the memristance function was determined to be

M(q(t)) = R_off (1 − (μ_v R_on / D²) q(t))

where R_off represents the high resistance state, R_on represents the low resistance state, μ_v represents the mobility of dopants in the thin film, and D represents the film thickness. The HP Labs group noted that "window functions" were necessary to compensate for differences between experimental measurements and their memristor model due to non-linear ionic drift and boundary effects. For some memristors, applied current or voltage causes substantial change in resistance. Such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance. This assumes that the applied voltage remains constant. Solving for energy dissipation during a single switching event reveals that for a memristor to switch from R_on to R_off in time T_on to T_off, the charge must change by ΔQ = Q_on − Q_off:

E_switch = V² ∫_{T_off}^{T_on} dt/M(q(t)) = V² ∫_{Q_off}^{Q_on} dq/(I(q) M(q)) = V² ∫_{Q_off}^{Q_on} dq/V(q) = V ΔQ

Substituting V = I(q)M(q), and then ∫dq/V = ΔQ/V for constant V, produces the final expression. This power characteristic differs fundamentally from that of a metal oxide semiconductor transistor, which is capacitor-based. Unlike the transistor, the final state of the memristor in terms of charge does not depend on bias voltage. The type of memristor described by Williams ceases to be ideal after switching over its entire resistance range, creating hysteresis, also called the "hard-switching regime". [ 17 ] Another kind of switch would have a cyclic M(q) so that each off-on event would be followed by an on-off event under constant bias. Such a device would act as a memristor under all conditions, but would be less practical. In the more general concept of an n-th order memristive system the defining equations are

y(t) = g(x, u, t) u(t)
ẋ = f(x, u, t)

where u(t) is an input signal, y(t) is an output signal, the vector x represents a set of n state variables describing the device, and g and f are continuous functions. For a current-controlled memristive system the signal u(t) represents the current signal i(t) and the signal y(t) represents the voltage signal v(t). For a voltage-controlled memristive system the signal u(t) represents the voltage signal v(t) and the signal y(t) represents the current signal i(t). The pure memristor is a particular case of these equations, namely when x depends only on charge (x = q), since the charge is related to the current via the time derivative dq/dt = i(t). Thus for pure memristors f (i.e. the rate of change of the state) must be equal or proportional to the current i(t). One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. [ 47 ]
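A minimal numerical sketch of this linear-drift model is given below (Python). The parameter values are round numbers of the same order as those discussed for thin-film TiO2 devices, chosen purely for illustration; plotting v against i over the drive cycles would produce the pinched hysteresis loop discussed in this article.

```python
import math

# Minimal sketch of the linear ionic drift model quoted above:
#   R(x)  = R_on * x + R_off * (1 - x),   x = w/D in [0, 1]
#   dx/dt = (mu_v * R_on / D**2) * i(t)
# Parameters are illustrative round numbers, not fitted to a real device.

R_on, R_off = 100.0, 16_000.0   # ohm
mu_v = 1.0e-14                  # dopant mobility, m^2/(V*s)
D = 10e-9                       # film thickness, m
kq = mu_v * R_on / D**2         # state-equation prefactor, 1/C

f, I0 = 1.0, 100e-6             # sinusoidal current drive: 1 Hz, 100 uA
dt, x = 1e-5, 0.5
R_min, R_max = float('inf'), 0.0

for n in range(int(2.0 / dt)):  # simulate two drive periods
    t = n * dt
    i = I0 * math.sin(2 * math.pi * f * t)
    x = min(1.0, max(0.0, x + dt * kq * i))   # hard bounds ("hard switching")
    Rx = R_on * x + R_off * (1.0 - x)
    v = Rx * i                  # collecting (i, v) pairs gives the pinched loop
    R_min, R_max = min(R_min, Rx), max(R_max, Rx)

print(f"resistance swings between {R_min:.0f} and {R_max:.0f} ohm")

# Switching energy at constant V follows E = V * dQ from the derivation above;
# sweeping x across the full range [0, 1] requires dQ = 1 / kq of charge.
V = 1.0
print(f"E_switch ~ {V / kq:.1e} J for a full sweep at V = {V} V")
```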
For a current-controlled memristive system, the input u(t) is the current i(t), the output y(t) is the voltage v(t), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states, a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. [ 48 ] The concept of memristive networks was first introduced by Leon Chua in his 1976 paper "Memristive Devices and Systems". [ 2 ] Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws. A memristive network is a type of artificial neural network that is based on memristive devices, which are electronic components that exhibit the property of memristance. In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. These weights are adjusted during the training process, allowing the network to learn and adapt to new input data. One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. They also have the potential to be more energy efficient than traditional artificial neural networks, as they can store and process information using less power. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations. For the simplest model with only memristive devices with voltage generators in series, there is an exact, closed-form equation (the Caravelli–Traversa–Di Ventra equation, CTDV) [ 49 ] which describes the evolution of the internal memory of the network for each device. For a simple (though not realistic) memristor model of a switch between two resistance values, given by the Williams–Strukov model R(x) = R_off (1 − x) + R_on x, with dx/dt = I/β − αx, there is a set of nonlinearly coupled differential equations that takes the form:

dx⃗/dt = −α x⃗ + (1/β) (I − χ Ω X)⁻¹ Ω S⃗

where I is the identity matrix, X is the diagonal matrix with elements x_i on the diagonal, and α, β, χ are based on the memristors' physical parameters. The vector S⃗ is the vector of voltage generators in series with the memristors. The circuit topology enters only through the projector operator Ω² = Ω, defined in terms of the cycle matrix of the graph. The equation provides a concise mathematical description of the interactions due to Kirchhoff's laws. Interestingly, the equation shares many properties in common with a Hopfield network, such as the existence of Lyapunov functions and classical tunnelling phenomena. [ 50 ]
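A toy integration of this decaying-state model for a single voltage-driven device is sketched below (Python; all parameter values are invented for illustration). The full network case couples many such equations through the projector Ω in the CTDV equation; a single device with its series generator corresponds to the scalar case.

```python
# Toy single-device version of the decaying-state model quoted above:
#   R(x)  = R_off * (1 - x) + R_on * x
#   dx/dt = I / beta - alpha * x
# driven through a series voltage source S(t), so I = S(t) / R(x).
# All parameter values are illustrative assumptions.

R_on, R_off = 100.0, 10_000.0   # ohm
alpha, beta = 1.0, 1e-3         # decay rate (1/s), current-to-state scale (C)
dt = 1e-4
x = 0.0

def step(x, S):
    R = R_off * (1.0 - x) + R_on * x
    I = S / R
    x += dt * (I / beta - alpha * x)
    return min(1.0, max(0.0, x))

for _ in range(int(1.0 / dt)):   # 1 s write pulse at S = 2 V
    x = step(x, 2.0)
x_written = x
for _ in range(int(2.0 / dt)):   # 2 s with the source switched off
    x = step(x, 0.0)

print(f"x after write pulse:    {x_written:.3f}")
print(f"x after 2 s relaxation: {x:.3f}  (the alpha term erases the state)")
```

In a network, the same α-driven relaxation competes with the coupling term (1/β)(I − χΩX)⁻¹ΩS⃗, which is what makes the collective dynamics rich enough to support Lyapunov functions and Hopfield-like behavior.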
In the context of memristive networks, the CTDV equation may be used to predict the behavior of memristive devices under different operating conditions, or to design and optimize memristive circuits for specific applications. Some researchers have raised the question of the scientific legitimacy of HP's memristor models in explaining the behavior of ReRAM, [ 36 ] [ 37 ] and have suggested extended memristive models to remedy perceived deficiencies. [ 25 ] One example [ 51 ] attempts to extend the memristive systems framework by including dynamic systems incorporating higher-order derivatives of the input signal u(t) as a series expansion in which m is a positive integer, u(t) is an input signal, y(t) is an output signal, the vector x represents a set of n state variables describing the device, and the functions g and f are continuous functions. This equation produces the same zero-crossing hysteresis curves as memristive systems but with a different frequency response than that predicted by memristive systems. Another example suggests including an offset value a to account for an observed nanobattery effect which violates the predicted zero-crossing pinched hysteresis effect. [ 25 ] There exist implementations of memristors with a hysteretic current-voltage curve, or with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Memristors with a hysteretic current-voltage curve use a resistance that depends on the history of the current and voltage, and bode well for the future of memory technology due to their simple structure, high energy efficiency, and high integration [DOI: 10.1002/aisy.202200053]. Interest in the memristor revived when an experimental solid-state version was reported by R. Stanley Williams of Hewlett Packard in 2007. [ 52 ] [ 53 ] [ 54 ] The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. The device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current. Although not cited in HP's initial reports on their TiO2 memristor, the resistance switching characteristics of titanium dioxide were originally described in the 1960s. [ 55 ] The HP device is composed of a thin (50 nm) titanium dioxide film between two 5 nm thick electrodes, one titanium, the other platinum. Initially, there are two layers to the titanium dioxide film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers, meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift (see fast-ion conductor), changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole is dependent on how much charge has been passed through it in a particular direction, which is reversible by changing the direction of current. [ 17 ] Since the HP device displays fast-ion conduction at the nanoscale, it is considered a nanoionic device. [ 56 ] Memristance is displayed only when both the doped layer and depleted layer contribute to resistance. When enough charge has passed through the memristor that the ions can no longer move, the device enters hysteresis.
It ceases to integrate q = ∫I dt, but rather keeps q at an upper bound and M fixed, thus acting as a constant resistor until the current is reversed. Memory applications of thin-film oxides had been an area of active investigation for some time. IBM published an article in 2000 regarding structures similar to that described by Williams. [ 57 ] Samsung has a U.S. patent for oxide-vacancy based switches similar to that described by Williams. [ 58 ] In April 2010, HP Labs announced that they had practical memristors working at 1 ns (~1 GHz) switching times and 3 nm by 3 nm sizes, [ 59 ] which bodes well for the future of the technology. [ 60 ] At these densities it could easily rival the current sub-25 nm flash memory technology. Memristance has apparently been reported in nanoscale thin films of silicon dioxide as early as the 1960s. [ 61 ] However, hysteretic conductance in silicon was associated with memristive effects only in 2009. [ 62 ] More recently, beginning in 2012, Tony Kenyon, Adnan Mehonic and their group clearly demonstrated that the resistive switching in silicon oxide thin films is due to the formation of oxygen vacancy filaments in defect-engineered silicon dioxide, having probed directly the movement of oxygen under electrical bias and imaged the resultant conductive filaments using conductive atomic force microscopy. [ 63 ] In 2004, Krieger and Spitzer described dynamic doping of polymer and inorganic dielectric-like materials that improved the switching characteristics and retention required to create functioning nonvolatile memory cells. [ 64 ] They used a passive layer between the electrode and the active thin films, which enhanced the extraction of ions from the electrode. It is possible to use a fast-ion conductor as this passive layer, which allows a significant reduction of the ionic extraction field. In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor. [ 65 ] In 2010, Alibart, Gamrat, Vuillaume et al. [ 66 ] introduced a new hybrid organic/nanoparticle device (the NOMFET: Nanoparticle Organic Memory Field Effect Transistor), which behaves as a memristor [ 67 ] and which exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (an associative memory showing Pavlovian learning). [ 68 ] In 2012, Crupi, Pradhan and Tozer described a proof-of-concept design to create neural synaptic memory circuits using organic ion-based memristors. [ 69 ] The synapse circuit demonstrated long-term potentiation for learning as well as inactivity-based forgetting. Using a grid of circuits, a pattern of light was stored and later recalled. This mimics the behavior of the V1 neurons in the primary visual cortex that act as spatiotemporal filters that process visual signals such as edges and moving lines. In 2012, Erokhin and co-authors demonstrated a stochastic three-dimensional matrix with capabilities for learning and adapting based on polymeric memristors. [ 70 ] In 2014, Bessonov et al. reported a flexible memristive device comprising a MoOx/MoS2 heterostructure sandwiched between silver electrodes on a plastic foil. [ 71 ] The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost.
The memristive behaviour of the switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity and sustainability to mechanical deformations promise to emulate the appealing characteristics of biological neural systems in novel computing technologies. Atomristors are defined as electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2018, Ge and Wu et al. [ 72 ] in the Akinwande group at the University of Texas first reported a universal memristive effect in single-layer TMD (MX2, M = Mo, W; X = S, Se) atomic sheets based on a vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride, which is the thinnest memory material, at around 0.33 nm. [ 73 ] These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD/MOCVD, enabling low-cost fabrication. Afterwards, taking advantage of the low "on" resistance and large on/off ratio, a high-performance zero-power RF switch based on MoS2 or h-BN atomristors was demonstrated, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systems. [ 74 ] [ 75 ] In 2020, an atomistic understanding of the conductive virtual point mechanism was elucidated in an article in Nature Nanotechnology. [ 76 ] The ferroelectric memristor [ 77 ] is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two-order-of-magnitude resistance variation: R_OFF ≫ R_ON (an effect called tunnel electro-resistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither R_ON nor R_OFF, but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved. In 2013, Ageev, Blinov et al. [ 78 ] reported observing a memristor effect in structures based on vertically aligned carbon nanotubes, studying bundles of CNTs with a scanning tunneling microscope. Later it was found [ 79 ] that CNT memristive switching is observed when a nanotube has a non-uniform elastic strain ΔL ≠ 0. It was shown that the memristive switching mechanism of strained CNTs is based on the formation and subsequent redistribution of non-uniform elastic strain and a piezoelectric field E_def in the nanotube under the influence of an external electric field E(x, t). Biomaterials have been evaluated for use in artificial synapses and have shown potential for application in neuromorphic systems. [ 80 ]
In particular, the feasibility of using a collagen-based biomemristor as an artificial synaptic device has been investigated, [ 81 ] a synaptic device based on lignin demonstrated rising or falling current with consecutive voltage sweeps depending on the sign of the voltage, [ 82 ] and a natural silk fibroin demonstrated memristive properties; [ 83 ] spin-memristive systems based on biomolecules are also being studied. [ 84 ] In 2012, Sandro Carrara and co-authors proposed the first biomolecular memristor with the aim of realizing highly sensitive biosensors. [ 85 ] Since then, several memristive sensors have been demonstrated. [ 86 ] Chen and Wang, researchers at disk-drive manufacturer Seagate Technology, described three examples of possible magnetic memristors. [ 87 ] In one device, resistance occurs when the spin of electrons in one section of the device points in a different direction from those in another section, creating a "domain wall", a boundary between the two sections. Electrons flowing into the device have a certain spin, which alters the device's magnetization state. Changing the magnetization, in turn, moves the domain wall and changes the resistance. The work's significance led to an interview by IEEE Spectrum. [ 88 ] A first experimental proof of the spintronic memristor based on domain wall motion by spin currents in a magnetic tunnel junction was given in 2011. [ 89 ] The magnetic tunnel junction has been proposed to act as a memristor through several potentially complementary mechanisms, both extrinsic (redox reactions, charge trapping/detrapping and electromigration within the barrier) and intrinsic (spin-transfer torque). Based on research performed between 1999 and 2003, Bowen et al. published experiments in 2006 on a magnetic tunnel junction (MTJ) endowed with bi-stable spin-dependent states [ 90 ] (resistive switching). The MTJ consists of a SrTiO3 (STO) tunnel barrier that separates a half-metallic oxide LSMO electrode and a ferromagnetic metal CoCr electrode. The MTJ's usual two device resistance states, characterized by a parallel or antiparallel alignment of the electrode magnetizations, are altered by applying an electric field. When the electric field is applied from the CoCr to the LSMO electrode, the tunnel magnetoresistance (TMR) ratio is positive. When the direction of the electric field is reversed, the TMR is negative. In both cases, large amplitudes of TMR on the order of 30% are found. Since a fully spin-polarized current flows from the half-metallic LSMO electrode, within the Julliere model this sign change suggests a sign change in the effective spin polarization of the STO/CoCr interface. The origin of this multistate effect lies in the observed migration of Cr into the barrier and its state of oxidation. The sign change of TMR can originate from modifications to the STO/CoCr interface density of states, as well as from changes to the tunneling landscape at the STO/CoCr interface induced by CrOx redox reactions. Reports on MgO-based memristive switching within MgO-based MTJs appeared starting in 2008 [ 91 ] and 2009. [ 92 ] While the drift of oxygen vacancies within the insulating MgO layer has been proposed to describe the observed memristive effects, [ 92 ] another explanation could be charge trapping/detrapping on the localized states of oxygen vacancies [ 93 ] and its impact [ 94 ] on spintronics.
This highlights the importance of understanding what role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity [ 95 ] or multiferroicity. [ 96 ] The magnetization state of a MTJ can be controlled by Spin-transfer torque , and can thus, through this intrinsic physical mechanism, exhibit memristive behavior. This spin torque is induced by current flowing through the junction, and leads to an efficient means of achieving a MRAM . However, the length of time the current flows through the junction determines the amount of current needed, i.e., charge is the key variable. [ 97 ] The combination of intrinsic (spin-transfer torque) and extrinsic (resistive switching) mechanisms naturally leads to a second-order memristive system described by the state vector x = ( x 1 , x 2 ), where x 1 describes the magnetic state of the electrodes and x 2 denotes the resistive state of the MgO barrier. In this case the change of x 1 is current-controlled (spin torque is due to a high current density) whereas the change of x 2 is voltage-controlled (the drift of oxygen vacancies is due to high electric fields). The presence of both effects in a memristive magnetic tunnel junction led to the idea of a nanoscopic synapse-neuron system. [ 98 ] A fundamentally different mechanism for memristive behavior has been proposed by Pershin and Di Ventra . [ 99 ] [ 100 ] The authors show that certain types of semiconductor spintronic structures belong to a broad class of memristive systems as defined by Chua and Kang. [ 2 ] The mechanism of memristive behavior in such structures is based entirely on the electron spin degree of freedom which allows for a more convenient control than the ionic transport in nanostructures. When an external control parameter (such as voltage) is changed, the adjustment of electron spin polarization is delayed because of the diffusion and relaxation processes causing hysteresis. This result was anticipated in the study of spin extraction at semiconductor/ferromagnet interfaces, [ 101 ] but was not described in terms of memristive behavior. On a short time scale, these structures behave almost as an ideal memristor. [ 1 ] This result broadens the possible range of applications of semiconductor spintronics and makes a step forward in future practical applications. In 2017, Kris Campbell formally introduced the self-directed channel (SDC) memristor. [ 102 ] The SDC device is the first memristive device available commercially to researchers, students and electronics enthusiast worldwide. [ 103 ] The SDC device is operational immediately after fabrication. In the Ge 2 Se 3 active layer, Ge-Ge homopolar bonds are found and switching occurs. The three layers consisting of Ge 2 Se 3 /Ag/Ge 2 Se 3 , directly below the top tungsten electrode, mix together during deposition and jointly form the silver-source layer. A layer of SnSe is between these two layers ensuring that the silver-source layer is not in direct contact with the active layer. Since silver does not migrate into the active layer at high temperatures, and the active layer maintains a high glass transition temperature of about 350 °C (662 °F), the device has significantly higher processing and operating temperatures at 250 °C (482 °F) and at least 150 °C (302 °F), respectively. These processing and operating temperatures are higher than most ion-conducting chalcogenide device types, including the S-based glasses (e.g. 
GeS) that need to be photodoped or thermally annealed. These factors allow the SDC device to operate over a wide range of temperatures, including long-term continuous operation at 150 °C (302 °F). There exist implementations of memristors with both hysteretic current-voltage curve and hysteretic flux-charge curve [arXiv:2403.20051]. Memristors with both hysteretic current-voltage curve and hysteretic flux-charge curve use a memristance dependent on the history of the flux and charge. Those memristors can merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. Time-integrated Formingfree (TiF) memristors reveal a hysteretic flux-charge curve with two distinguishable branches in the positive bias range and with two distinguishable branches in the negative bias range. And TiF memristors also reveal a hysteretic current-voltage curve with two distinguishable branches in the positive bias range and with two distinguishable branches in the negative bias range. The memristance state of a TiF memristor can be controlled by both the flux and the charge [DOI: 10.1063/1.4775718]. A TiF memristor was first demonstrated by Heidemarie Schmidt and her team in 2011 [DOI: 10.1063/1.3601113]. This TiF memristor is composed of a BiFeO 3 thin film between metallically conducting electrodes, one gold, the other platinum. The hysteretic flux-charge curve of the TiF memristor changes its slope continuously in one branch in the positive and in one branch in the negative bias range (write branches) and has a constant slope in one branch in the positive and in one branch in the negative bias range (read branches) [arXiv:2403.20051]. According to Leon O. Chua [Reference 1: 10.1.1.189.3614 ] the slope of the flux-charge curve corresponds to the memristance of a memristor or to its internal state variables. The TiF memristors can be considered as memristors with a constant memristance in the two read branches and with a reconfigurable memristance in the two write branches. The physical memristor model which describes the hysteretic current-voltage curves of the TiF memristor implements static and dynamic internal state variables in the two read branches and in the two write branches [arXiv:2402.10358]. The static and dynamic internal state variables of a non-linear memristors can be used to implement operations on non-linear memristors representing linear, non-linear, and even transcendental, e.g. exponential or logarithmic, input-output functions. The transport characteristics of the TiF memristor in the small current – small voltage range are non-linear. This non-linearity well compares to the non-linear characteristics in the small current – small voltage range of the basic former and present building blocks in the arithmetic logic unit of von-Neumann computers, i.e. of vacuum tubes and of transistors. In contrast to vacuum tubes and transistors, the signal output of hysteretic flux-charge memristors, i.e. of TiF memristors, is not lost when the operation power is switched off before storing the signal output to the memory. Therefore, hysteretic flux-charge memristors are said to merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. The transport characteristics in the small current – small voltage range of hysteretic current-voltage memristors are linear. 
This explains why hysteretic current-voltage memristors are well established memory units and why they can not merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [arXiv:2403.20051]. Memristors remain a laboratory curiosity, as yet made in insufficient numbers to gain any commercial applications. A potential application of memristors is in analog memories for superconducting quantum computers. [ 12 ] Memristors can potentially be fashioned into non-volatile solid-state memory , which could allow greater data density than hard drives with access times similar to DRAM , replacing both components. [ 31 ] HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter, [ 104 ] and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm 3 ). [ 105 ] In May 2008 HP reported that its device reaches currently about one-tenth the speed of DRAM. [ 106 ] The devices' resistance would be read with alternating current so that the stored value would not be affected. [ 107 ] In May 2012, it was reported that the access time had been improved to 90 nanoseconds, which is nearly one hundred times faster than the contemporaneous Flash memory. At the same time, the energy consumption was just one percent of that consumed by Flash memory. [ 108 ] Memristors have applications in programmable logic [ 109 ] signal processing , [ 110 ] super-resolution imaging [ 111 ] physical neural networks , [ 112 ] control systems , [ 113 ] reconfigurable computing , [ 114 ] in-memory computing , [ 115 ] brain–computer interfaces [ 116 ] and RFID . [ 117 ] Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation [ 118 ] Several early works have been reported in this direction. [ 119 ] [ 120 ] In 2009, a simple electronic circuit [ 121 ] consisting of an LC network and a memristor was used to model experiments on adaptive behavior of unicellular organisms. [ 122 ] It was shown that subjected to a train of periodic pulses, the circuit learns and anticipates the next pulse similar to the behavior of slime molds Physarum polycephalum where the viscosity of channels in the cytoplasm responds to periodic environment changes. [ 122 ] Applications of such circuits may include, e.g., pattern recognition . The DARPA SyNAPSE project funded HP Labs, in collaboration with the Boston University Neuromorphics Lab, has been developing neuromorphic architectures which may be based on memristive systems. In 2010, Versace and Chandler described the MoNETA (Modular Neural Exploring Traveling Agent) model. [ 123 ] MoNETA is the first large-scale neural network model to implement whole-brain circuits to power a virtual and robotic agent using memristive hardware. [ 124 ] Application of the memristor crossbar structure in the construction of an analog soft computing system was demonstrated by Merrikh-Bayat and Shouraki. [ 125 ] In 2011, they showed [ 126 ] how memristor crossbars can be combined with fuzzy logic to create an analog memristive neuro-fuzzy computing system with fuzzy input and output terminals. Learning is based on the creation of fuzzy relations inspired from Hebbian learning rule . In 2013 Leon Chua published a tutorial underlining the broad span of complex phenomena and applications that memristors span and how they can be used as non-volatile analog memories and can mimic classic habituation and learning phenomena. 
The memistor and memtransistor are transistor-based devices which include memristor function. In 2009, Di Ventra , Pershin, and Chua extended [ 128 ] the notion of memristive systems to capacitive and inductive elements in the form of memcapacitors and meminductors, whose properties depend on the state and history of the system; the framework was further extended in 2013 by Di Ventra and Pershin. [ 22 ] In September 2014, Mohamed-Salah Abdelouahab , Rene Lozi , and Leon Chua published a general theory of 1st-, 2nd-, 3rd-, and nth-order memristive elements using fractional derivatives . [ 129 ] Sir Humphry Davy is said by some to have performed the first experiments which can be explained by memristor effects as long ago as 1808. [ 20 ] [ 130 ] However, the first device of a related nature to be constructed was the memistor (i.e. memory resistor), a term coined in 1960 by Bernard Widrow to describe a circuit element of an early artificial neural network called ADALINE . A few years later, in 1968, Argall published an article showing the resistance switching effects of TiO 2 , which were later claimed by researchers from Hewlett Packard to be evidence of a memristor. [ 55 ] Leon Chua postulated his new two-terminal circuit element in 1971. It was characterized by a relationship between charge and flux linkage as a fourth fundamental circuit element. [ 1 ] Five years later, he and his student Sung Mo Kang generalized the theory of memristors and memristive systems, including a property of zero crossing in the Lissajous curve characterizing current vs. voltage behavior. [ 2 ] On May 1, 2008, Strukov, Snider, Stewart, and Williams published an article in Nature identifying a link between the two-terminal resistance switching behavior found in nanoscale systems and memristors. [ 17 ] On 23 January 2009, Di Ventra , Pershin, and Chua extended the notion of memristive systems to capacitive and inductive elements, namely memcapacitors and meminductors, whose properties depend on the state and history of the system. [ 128 ] In July 2014, the MeMOSat/ LabOSat group [ 131 ] (composed of researchers from Universidad Nacional de General San Martín (Argentina) , INTI, CNEA , and CONICET ) put memory devices into low Earth orbit . [ 132 ] Since then, seven missions with different devices [ 133 ] have been performing experiments in low orbits onboard Satellogic 's Ñu-Sat satellites. [ 134 ] [ 135 ] On 7 July 2015, Knowm Inc announced the commercial availability of Self Directed Channel (SDC) memristors. [ 136 ] These devices remain available in small numbers. On 13 July 2018, MemSat (Memristor Satellite) was launched to fly a memristor evaluation payload. [ 137 ] In 2021, Jennifer Rupp and Martin Bazant of MIT started a "Lithionics" research programme to investigate applications of lithium beyond its use in battery electrodes , including lithium oxide -based memristors in neuromorphic computing . [ 138 ] [ 139 ] In May 2023, TECHiFAB GmbH [https://techifab.com/] announced the commercial availability of TiF memristors. [arXiv: 2403.20051, arXiv: 2402.10358] These TiF memristors remain available in small and medium quantities. In the September 2023 issue of Science , Chinese scientists Wenbin Zhang et al. described the development and testing of a memristor-based integrated circuit . [ 140 ]
https://en.wikipedia.org/wiki/Memcapacitor
A memristor ( /ˈmɛmrɪstər/ ; a portmanteau of memory resistor ) is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage . It was described and named in 1971 by Leon Chua , completing a theoretical quartet of fundamental electrical components which also comprises the resistor , capacitor and inductor . [ 1 ] Chua and Kang later generalized the concept to memristive systems . [ 2 ] Such a system comprises a circuit, of multiple conventional components, which mimics key properties of the ideal memristor component and is also commonly referred to as a memristor. Several such memristor system technologies have been developed, notably ReRAM . The identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated. [ 3 ] [ 4 ] Chua in his 1971 paper identified a theoretical symmetry between the non-linear resistor (voltage vs. current), non-linear capacitor (voltage vs. charge), and non-linear inductor (magnetic flux linkage vs. current). From this symmetry he inferred the characteristics of a fourth fundamental non-linear circuit element, linking magnetic flux and charge, which he called the memristor. In contrast to a linear (or non-linear) resistor, the memristor has a dynamic relationship between current and voltage, including a memory of past voltages or currents. Other scientists had proposed dynamic memory resistors such as the memistor of Bernard Widrow, but Chua introduced a mathematical generality. The memristor was originally defined in terms of a non-linear functional relationship between magnetic flux linkage Φ m ( t ) and the amount of electric charge that has flowed, q ( t ) : [ 1 ] {\displaystyle f(\Phi _{\mathrm {m} }(t),q(t))=0} The magnetic flux linkage , Φ m , is generalized from the circuit characteristic of an inductor. It does not represent a magnetic field here. Its physical meaning is discussed below. The symbol Φ m may be regarded as the integral of voltage over time. [ 5 ] In the relationship between Φ m and q , the derivative of one with respect to the other depends on the value of one or the other, and so each memristor is characterized by its memristance function describing the charge-dependent rate of change of flux with charge: {\displaystyle M(q)={\frac {\mathrm {d} \Phi _{\mathrm {m} }}{\mathrm {d} q}}\,.} Substituting the flux as the time integral of the voltage, and the charge as the time integral of the current, the more convenient form is: {\displaystyle M(q(t))={\cfrac {\mathrm {d} \Phi _{\mathrm {m} }/\mathrm {d} t}{\mathrm {d} q/\mathrm {d} t}}={\frac {V(t)}{I(t)}}\,.} To relate the memristor to the resistor, capacitor, and inductor, it is helpful to isolate the term M ( q ) , which characterizes the device, and write it as a differential equation. Of the meaningful ratios of the differentials of I , q , Φ m , and V , the resistor relates d V to d I , the capacitor d q to d V , the inductor dΦ m to d I , and the memristor dΦ m to d q . No device can relate d I to d q , or dΦ m to d V , because I is the time derivative of q and Φ m is the time integral of V . It can be inferred from this that memristance is charge-dependent resistance . If M ( q ( t )) is a constant, then we obtain Ohm's law , R ( t ) = V ( t )/ I ( t ) . If M ( q ( t )) is nontrivial, however, the equation is not equivalent, because q ( t ) and M ( q ( t )) can vary with time.
Solving for voltage as a function of time produces {\displaystyle V(t)=M(q(t))I(t)\,.} This equation reveals that memristance defines a linear relationship between current and voltage, as long as M does not vary with charge. Nonzero current implies time-varying charge. Alternating current , however, may reveal the linear dependence in circuit operation by inducing a measurable voltage without net charge movement, as long as the maximum change in q does not cause much change in M . Furthermore, the memristor is static if no current is applied. If I ( t ) = 0 , we find V ( t ) = 0 and M ( t ) is constant. This is the essence of the memory effect. Analogously, we can define W ( ϕ ( t )) as the memductance: [ 1 ] {\displaystyle i(t)=W(\phi (t))v(t)\,.} The power consumption characteristic recalls that of a resistor, I 2 R : {\displaystyle P(t)=I(t)V(t)=I^{2}(t)M(q(t))\,.} As long as M ( q ( t )) varies little, such as under alternating current, the memristor will appear as a constant resistor. If M ( q ( t )) increases rapidly, however, current and power consumption will quickly stop. M ( q ) is physically restricted to be positive for all values of q (assuming the device is passive and does not become superconductive at some q ). A negative value would mean that it would perpetually supply energy when operated with alternating current. In order to understand the nature of memristor function, some knowledge of fundamental circuit-theoretic concepts is useful, starting with the concept of device modeling . [ 6 ] Engineers and scientists seldom analyze a physical system in its original form. Instead, they construct a model which approximates the behaviour of the system. By analyzing the behaviour of the model, they hope to predict the behaviour of the actual system. The primary reason for constructing models is that physical systems are usually too complex to be amenable to practical analysis. In the 20th century, work was done on devices where researchers did not recognize the memristive characteristics. This has raised the suggestion that such devices should be recognized as memristors. [ 6 ] Pershin and Di Ventra [ 3 ] have proposed a test that can help to resolve some of the long-standing controversies about whether an ideal memristor does actually exist or is a purely mathematical concept. The rest of this article primarily addresses memristors as related to ReRAM devices, since the majority of work since 2008 has been concentrated in this area. Paul Penfield, in a 1974 MIT technical report, [ 7 ] mentions the memristor in connection with Josephson junctions . This was an early use of the word "memristor" in the context of a circuit device. One of the terms in the current through a Josephson junction is of the form {\displaystyle {\begin{aligned}i_{M}(v)&=\epsilon \cos(\phi _{0})v\\&=W(\phi _{0})v\end{aligned}}} where ϵ is a constant based on the physical superconducting materials, v is the voltage across the junction and i M is the current through the junction. Through the late 20th century, research regarding this phase-dependent conductance in Josephson junctions was carried out. [ 8 ] [ 9 ] [ 10 ] [ 11 ] A more comprehensive approach to extracting this phase-dependent conductance appeared with Peotta and Di Ventra's seminal paper in 2014. [ 12 ]
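As an illustration of the relation V ( t ) = M ( q ( t )) I ( t ), the following sketch drives an assumed linear memristance law M ( q ) = M 0 + kq (an arbitrary toy choice, not a model of any physical device) with a sinusoidal current and recovers the pinched v-i behaviour discussed below.

```python
import numpy as np

# Numerical sketch of an ideal charge-controlled memristor driven by a
# sinusoidal current.  The memristance law M(q) = M0 + k*q and all parameter
# values are arbitrary illustrative assumptions.
M0, k = 1_000.0, 5e5        # ohm, ohm per coulomb (assumed values)
I0, f = 1e-3, 1.0           # drive amplitude (A) and frequency (Hz)

t = np.linspace(0.0, 2.0 / f, 20_001)      # two drive periods
dt = t[1] - t[0]
i = I0 * np.sin(2.0 * np.pi * f * t)       # input current i(t)

q = np.cumsum(i) * dt                      # q(t) = integral of i dt
M = np.clip(M0 + k * q, 1.0, None)         # M(q) must stay positive
v = M * i                                  # V(t) = M(q(t)) * I(t)

# Whenever i(t) = 0, v(t) = 0 as well: the v-i trajectory is "pinched"
# at the origin, the hysteresis signature discussed in the text.
zeros = np.isclose(i, 0.0, atol=I0 * 1e-3)
print("max |v| near the current zero crossings:", np.abs(v[zeros]).max())
```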
Due to the practical difficulty of studying the ideal memristor, we will discuss other electrical devices which can be modelled using memristors. For a mathematical description of a memristive device (system), see § Theory . A discharge tube can be modelled as a memristive device, with resistance being a function of the number of conduction electrons n e : [ 2 ] {\displaystyle {\begin{aligned}v_{\mathrm {M} }&=R(n_{\mathrm {e} })i_{\mathrm {M} }\\{\frac {\mathrm {d} n_{\mathrm {e} }}{\mathrm {d} t}}&=-\beta n_{\mathrm {e} }+\alpha R(n_{\mathrm {e} })i_{\mathrm {M} }^{2}\end{aligned}}} where v M is the voltage across the discharge tube, i M is the current flowing through it, and n e is the number of conduction electrons. A simple memristance function is R ( n e ) = F / n e . The parameters α , β , and F depend on the dimensions of the tube and the gas filling. An experimental identification of memristive behaviour is the "pinched hysteresis loop" in the v – i plane. [ a ] [ 13 ] [ 14 ] Thermistors can be modeled as memristive devices: [ 14 ] {\displaystyle {\begin{aligned}v&=R_{0}(T_{0})\exp \left[\beta \left({\frac {1}{T}}-{\frac {1}{T_{0}}}\right)\right]i\\&\equiv R(T)i\\{\frac {\mathrm {d} T}{\mathrm {d} t}}&={\frac {1}{C}}\left[-\delta \cdot (T-T_{0})+R(T)i^{2}\right]\end{aligned}}} where β is a material constant, T is the absolute body temperature of the thermistor, T 0 is the ambient temperature (both temperatures in kelvins), R 0 ( T 0 ) denotes the cold-temperature resistance at T = T 0 , C is the heat capacitance, and δ is the dissipation constant of the thermistor. A fundamental phenomenon that has hardly been studied is memristive behaviour in p-n junctions . [ 15 ] The memristor plays a crucial role in mimicking the charge-storage effect in the diode base, and is also responsible for the conductivity modulation phenomenon (which is so important during forward transients). In 2008, a team at HP Labs found experimental evidence for Chua's memristor based on an analysis of a thin film of titanium dioxide , thus connecting the operation of ReRAM devices to the memristor concept. According to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history, the so-called non-volatility property . [ 16 ] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again. [ 17 ] [ 18 ] The HP Labs result was published in the scientific journal Nature . [ 17 ] [ 19 ] Following this claim, Leon Chua has argued that the memristor definition could be generalized to cover all forms of two-terminal non-volatile memory devices based on resistance switching effects. [ 16 ] Chua also argued that the memristor is the oldest known circuit element , with its effects predating the resistor , capacitor , and inductor . [ 20 ] However, there are doubts as to whether a memristor can actually exist in physical reality. [ 21 ] [ 22 ] [ 23 ] [ 24 ] Additionally, some experimental evidence contradicts Chua's generalization, since a non-passive nanobattery effect is observable in resistance switching memory. [ 25 ]
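Returning to the thermistor model above, a minimal forward-Euler integration makes the memristive coupling concrete: the body temperature T acts as the internal state variable. All parameter values here are assumptions chosen for illustration rather than data for a real thermistor.

```python
import numpy as np

# Forward-Euler integration of the thermistor model quoted above.  All
# parameter values are arbitrary assumptions chosen only for illustration.
R0, T0 = 10_000.0, 300.0      # cold resistance (ohm) at ambient T0 (K)
beta = 3500.0                 # material constant (K)
C, delta = 0.01, 0.002        # heat capacitance (J/K), dissipation (W/K)

def R(T):
    # R(T) = R0 * exp(beta * (1/T - 1/T0))
    return R0 * np.exp(beta * (1.0 / T - 1.0 / T0))

t = np.linspace(0.0, 10.0, 100_001)
dt = t[1] - t[0]
i = 2e-3 * np.sin(2.0 * np.pi * 0.5 * t)   # slow sinusoidal current drive

T = np.empty_like(t)
T[0] = T0
for n in range(len(t) - 1):
    # dT/dt = (1/C) * (-delta*(T - T0) + R(T)*i^2): Joule self-heating
    T[n + 1] = T[n] + dt / C * (-delta * (T[n] - T0) + R(T[n]) * i[n] ** 2)

v = R(T) * i                               # v = R(T) * i, the memristive law
print(f"temperature excursion: {T.min():.1f} K to {T.max():.1f} K")
print(f"resistance excursion: {R(T).min():.0f} ohm to {R(T).max():.0f} ohm")
```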
A simple test has been proposed by Pershin and Di Ventra [ 3 ] to analyze whether such an ideal or generic memristor does actually exist or is a purely mathematical concept. To date, there seems to be no experimental resistance switching device ( ReRAM ) which can pass the test. [ 3 ] [ 4 ] These devices are intended for applications in nanoelectronic memory devices, computer logic, and neuromorphic /neuromemristive computer architectures. [ 26 ] [ 27 ] In 2013, Hewlett-Packard CTO Martin Fink suggested that memristor memory may become commercially available as early as 2018. [ 28 ] In March 2012, a team of researchers from HRL Laboratories and the University of Michigan announced the first functioning memristor array built on a CMOS chip. [ 29 ] According to the original 1971 definition, the memristor is the fourth fundamental circuit element, forming a non-linear relationship between electric charge and magnetic flux linkage. In 2011, Chua argued for a broader definition that includes all two-terminal non-volatile memory devices based on resistance switching. [ 16 ] Williams argued that MRAM , phase-change memory and ReRAM are memristor technologies. [ 32 ] Some researchers argued that biological structures such as blood [ 33 ] and skin [ 34 ] [ 35 ] fit the definition. Others argued that the memory device under development by HP Labs and other forms of ReRAM are not memristors, but rather part of a broader class of variable-resistance systems, [ 36 ] and that a broader definition of memristor is a scientifically unjustifiable land grab that favored HP's memristor patents. [ 37 ] In 2011, Meuffels and Schroeder noted that one of the early memristor papers included a mistaken assumption regarding ionic conduction. [ 38 ] In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors. [ 21 ] They indicated inadequacies in the electrochemical modeling presented in the Nature article "The missing memristor found" [ 17 ] because the impact of concentration polarization effects on the behavior of metal/TiO 2−x /metal structures under voltage or current stress was not considered. [ 25 ] In a kind of thought experiment , Meuffels and Soni [ 21 ] furthermore revealed a severe inconsistency: if a current-controlled memristor with the so-called non-volatility property [ 16 ] existed in physical reality, its behavior would violate Landauer's principle , which places a limit on the minimum amount of energy required to change "information" states of a system. This critique was finally adopted by Di Ventra and Pershin [ 22 ] in 2013. Within this context, Meuffels and Soni [ 21 ] pointed to a fundamental thermodynamic principle: non-volatile information storage requires the existence of free-energy barriers that separate the distinct internal memory states of a system from each other; otherwise, one would be faced with an "indifferent" situation, and the system would arbitrarily fluctuate from one memory state to another just under the influence of thermal fluctuations . When unprotected against thermal fluctuations, the internal memory states exhibit some diffusive dynamics, which causes state degradation. [ 22 ] The free-energy barriers must therefore be high enough to ensure a low bit-error probability of operation. [ 39 ] Consequently, there is always a lower limit of the energy requirement – depending on the required bit-error probability – for intentionally changing a bit value in any memory device. [ 39 ] [ 40 ]
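The scale of this energy argument can be illustrated with a back-of-the-envelope Arrhenius estimate. The attempt time below is an assumed figure, so the result is only an order-of-magnitude sketch of the barrier height needed for non-volatile retention, compared against the Landauer bound.

```python
import numpy as np

# Rough version of the free-energy-barrier argument above: with an Arrhenius
# escape time t_retention ~ tau0 * exp(E_b / kT), estimate the barrier needed
# to hold a bit for ten years.  tau0 is an assumed attempt time; the result
# is a rough scale, not a device specification.
k_B = 1.380649e-23                        # Boltzmann constant (J/K)
T = 300.0                                 # temperature (K)
tau0 = 1e-13                              # attempt time (s), assumed
t_retention = 10 * 365.25 * 24 * 3600.0   # ten years in seconds

E_b = k_B * T * np.log(t_retention / tau0)
print(f"required barrier: {E_b / (k_B * T):.0f} kT "
      f"(~{E_b / 1.602e-19:.2f} eV at 300 K)")
# Compare with the Landauer bound kT*ln(2) for erasing one bit:
print(f"Landauer bound:   {np.log(2):.2f} kT")
```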
In the general concept of a memristive system, the defining equations are (see § Theory ): {\displaystyle {\begin{aligned}y(t)&=g(\mathbf {x} ,u,t)u(t),\\{\dot {\mathbf {x} }}&=f(\mathbf {x} ,u,t),\end{aligned}}} where u ( t ) is an input signal, and y ( t ) is an output signal. The vector x represents a set of n state variables describing the different internal memory states of the device, and ẋ is the rate of change of the state vector x with time. When one wants to go beyond mere curve fitting and aims at a real physical modeling of non-volatile memory elements, e.g., resistive random-access memory devices, one has to keep an eye on the aforementioned physical correlations. To check the adequacy of the proposed model and its resulting state equations, the input signal u ( t ) can be superposed with a stochastic term ξ ( t ) , which takes into account the existence of inevitable thermal fluctuations . The dynamic state equation in its general form then finally reads: {\displaystyle {\dot {\mathbf {x} }}=f(\mathbf {x} ,u(t)+\xi (t),t),} where ξ ( t ) is, e.g., white Gaussian current or voltage noise . On the basis of an analytical or numerical analysis of the time-dependent response of the system towards noise, a decision on the physical validity of the modeling approach can be made, e.g., whether the system would be able to retain its memory states in power-off mode. Such an analysis was performed by Di Ventra and Pershin [ 22 ] with regard to the genuine current-controlled memristor. As the proposed dynamic state equation provides no physical mechanism enabling such a memristor to cope with inevitable thermal fluctuations, a current-controlled memristor would erratically change its state in the course of time under the influence of current noise alone. [ 22 ] [ 41 ] Di Ventra and Pershin [ 22 ] thus concluded that memristors whose resistance (memory) states depend solely on the current or voltage history would be unable to protect their memory states against unavoidable Johnson–Nyquist noise and would permanently suffer from information loss, a so-called "stochastic catastrophe". A current-controlled memristor can thus not exist as a solid-state device in physical reality.
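The noise test just described can be mimicked in a few lines. The state equation, sensitivity constant and noise amplitude below are illustrative assumptions; the sketch only shows the qualitative point that a state unprotected by a free-energy barrier random-walks away from its stored value.

```python
import numpy as np

# Minimal sketch of the noise-superposition test: drive a current-controlled
# state variable with zero deterministic input plus white Gaussian current
# noise and watch the memory state diffuse.  The rate law dx/dt = c * i(t)
# is an illustrative assumption (no free-energy barrier protects the state).
rng = np.random.default_rng(0)

c = 1.0e4          # state sensitivity to charge (1/C), assumed
i_noise = 1e-6     # RMS current noise (A), stand-in for Johnson-Nyquist noise
dt = 1e-3
steps = 100_000

x = 0.5            # initial memory state, nominally "stored"
trace = np.empty(steps)
for n in range(steps):
    xi = i_noise * rng.standard_normal() / np.sqrt(dt)  # white noise sample
    x += c * xi * dt                                    # dx = c*(0 + xi)*dt
    x = min(max(x, 0.0), 1.0)                           # physical bounds
    trace[n] = x

# With no barrier, the state performs a bounded random walk: the stored
# value degrades instead of being retained (the "stochastic catastrophe").
print("state drift over the power-off interval:", abs(trace[-1] - 0.5))
```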
The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. "resistance-switching" memory devices ( ReRAM )) cannot be associated with the memristor concept, i.e., such devices cannot by themselves remember their current or voltage history. Transitions between distinct internal memory or resistance states are probabilistic in nature. The probability for a transition from state { i } to state { j } depends on the height of the free-energy barrier between both states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by "lowering" the free-energy barrier for the transition { i }→{ j } by means of, for example, an externally applied bias. A "resistance switching" event can simply be enforced by setting the external bias to a value above a certain threshold value. This is the trivial case, i.e., the free-energy barrier for the transition { i }→{ j } is reduced to zero. If biases below the threshold value are applied, there is still a finite probability that the device will switch in the course of time (triggered by a random thermal fluctuation), but, as one is dealing with probabilistic processes, it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching ( ReRAM ) processes. If the free-energy barriers are not high enough, the memory device can even switch spontaneously, without any applied bias. When a two-terminal non-volatile memory device is found to be in a distinct resistance state { j } , there is therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems. An extra thermodynamic curiosity arises from the definition that memristors/memristive devices should energetically act like resistors. The instantaneous electrical power entering such a device is completely dissipated as Joule heat to the surroundings, so no extra energy remains in the system after it has been brought from one resistance state x i to another one x j . Thus, the internal energy of the memristor device in state x i , U ( V , T , x i ) , would be the same as in state x j , U ( V , T , x j ) , even though the different states would give rise to different device resistances, which must themselves be caused by physical alterations of the device's material. Other researchers noted that memristor models based on the assumption of linear ionic drift do not account for asymmetry between set time (high-to-low resistance switching) and reset time (low-to-high resistance switching) and do not provide ionic mobility values consistent with experimental data. Non-linear ionic-drift models have been proposed to compensate for this deficiency. [ 42 ] A 2014 article by ReRAM researchers concluded that Strukov's (HP's) initial/basic memristor modeling equations do not reflect the actual device physics well, whereas subsequent (physics-based) models such as Pickett's model or Menzel's ECM model (Menzel is a co-author of that article) have adequate predictability but are computationally prohibitive. As of 2014, the search continued for a model that balances these issues; the article identifies Chang's and Yakopcic's models as potentially good compromises. [ 43 ] Martin Reynolds, an electrical engineering analyst with research outfit Gartner , commented that while HP was being sloppy in calling their device a memristor, critics were being pedantic in saying that it was not a memristor. [ 44 ] Chua suggested experimental tests to determine if a device may properly be categorized as a memristor. [ 2 ] According to Chua, [ 45 ] [ 46 ] all resistive switching memories including ReRAM , MRAM and phase-change memory meet these criteria and are memristors. However, the lack of data for the Lissajous curves over a range of initial conditions or over a range of frequencies complicates assessments of this claim. Experimental evidence shows that redox-based resistance memory ( ReRAM ) includes a nanobattery effect that is contrary to Chua's memristor model. This indicates that the memristor theory needs to be extended or corrected to enable accurate ReRAM modeling. [ 25 ] In 2008, researchers from HP Labs introduced a model for a memristance function based on thin films of titanium dioxide . [ 17 ]
For R on ≪ R off the memristance function was determined to be {\displaystyle M(q(t))=R_{\mathrm {off} }\cdot \left(1-{\frac {\mu _{v}R_{\mathrm {on} }}{D^{2}}}q(t)\right)} where R off represents the high resistance state, R on represents the low resistance state, μ v represents the mobility of dopants in the thin film, and D represents the film thickness. The HP Labs group noted that "window functions" were necessary to compensate for differences between experimental measurements and their memristor model due to non-linear ionic drift and boundary effects. For some memristors, applied current or voltage causes substantial change in resistance. Such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance. This assumes that the applied voltage remains constant. Solving for energy dissipation during a single switching event reveals that for a memristor to switch from R on to R off in time T on to T off , the charge must change by Δ Q = Q on − Q off : {\displaystyle {\begin{aligned}E_{\mathrm {switch} }&=V^{2}\int _{T_{\mathrm {off} }}^{T_{\mathrm {on} }}{\frac {\mathrm {d} t}{M(q(t))}}\\&=V^{2}\int _{Q_{\mathrm {off} }}^{Q_{\mathrm {on} }}{\frac {\mathrm {d} q}{I(q)M(q)}}\\&=V^{2}\int _{Q_{\mathrm {off} }}^{Q_{\mathrm {on} }}{\frac {\mathrm {d} q}{V(q)}}\\&=V\Delta Q\end{aligned}}} Substituting V = I ( q ) M ( q ) , and then ∫ d q / V = Δ Q / V for constant V , produces the final expression. This power characteristic differs fundamentally from that of a metal oxide semiconductor transistor , which is capacitor-based. Unlike the transistor, the final state of the memristor in terms of charge does not depend on bias voltage. The type of memristor described by Williams ceases to be ideal after switching over its entire resistance range, creating hysteresis , also called the "hard-switching regime". [ 17 ] Another kind of switch would have a cyclic M ( q ) so that each off-on event would be followed by an on-off event under constant bias. Such a device would act as a memristor under all conditions, but would be less practical. In the more general concept of an n -th order memristive system the defining equations are {\displaystyle {\begin{aligned}y(t)&=g({\textbf {x}},u,t)u(t),\\{\dot {\textbf {x}}}&=f({\textbf {x}},u,t)\end{aligned}}} where u ( t ) is an input signal, y ( t ) is an output signal, the vector x represents a set of n state variables describing the device, and g and f are continuous functions . For a current-controlled memristive system the signal u ( t ) represents the current signal i ( t ) and the signal y ( t ) represents the voltage signal v ( t ) . For a voltage-controlled memristive system the signal u ( t ) represents the voltage signal v ( t ) and the signal y ( t ) represents the current signal i ( t ) . The pure memristor is a particular case of these equations, namely when x depends only on charge ( x = q ). Since the charge is related to the current via the time derivative d q /d t = i ( t ) , for pure memristors f (i.e. the rate of change of the state) must be equal or proportional to the current i ( t ) . One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. [ 47 ]
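The memristance function and the switching-energy result E switch = V Δ Q above can be evaluated numerically. The parameter values below are assumed order-of-magnitude figures, not data from the HP publication.

```python
# The titanium dioxide memristance function quoted above, with the switching
# energy E = V * dQ.  All numerical values are assumed order-of-magnitude
# figures chosen only for illustration.
R_off = 16_000.0   # high-resistance state (ohm), assumed
R_on = 100.0       # low-resistance state (ohm), assumed
mu_v = 1e-14       # dopant mobility (m^2 s^-1 V^-1), assumed
D = 10e-9          # film thickness (m), assumed

def M(q):
    # M(q) = R_off * (1 - mu_v * R_on / D**2 * q), valid for R_on << R_off
    return R_off * (1.0 - mu_v * R_on / D**2 * q)

# Charge needed to sweep M from R_off (at q = 0) down to R_on:
dQ = (1.0 - R_on / R_off) * D**2 / (mu_v * R_on)
V = 1.0                       # constant applied voltage (V), assumed
E_switch = V * dQ             # E = V * dQ for a constant-voltage switch
print(f"switching charge: {dQ:.2e} C, switching energy: {E_switch:.2e} J")
```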
For a current-controlled memristive system, the input u ( t ) is the current i ( t ), the output y ( t ) is the voltage v ( t ), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states, a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. [ 48 ] The concept of memristive networks was first introduced by Leon Chua in his 1976 paper "Memristive Devices and Systems". [ 2 ] Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws. A memristive network is a type of artificial neural network based on memristive devices, electronic components that exhibit the property of memristance. In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. These weights are adjusted during the training process, allowing the network to learn and adapt to new input data. One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. They also have the potential to be more energy efficient than traditional artificial neural networks, as they can store and process information using less power. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations. For the simplest model with only memristive devices with voltage generators in series, there is an exact, closed-form equation (the Caravelli–Traversa–Di Ventra (CTDV) equation) [ 49 ] which describes the evolution of the internal memory of the network for each device. For a simple (but not realistic) memristor model of a switch between two resistance values, given by the Williams–Strukov model {\displaystyle R(x)=R_{off}(1-x)+R_{on}x} with {\displaystyle dx/dt=I/\beta -\alpha x} , there is a set of nonlinearly coupled differential equations that takes the form {\displaystyle {\frac {d{\vec {x}}}{dt}}=-\alpha {\vec {x}}+{\frac {1}{\beta }}\left(I-\chi \Omega X\right)^{-1}\Omega {\vec {S}},} where X is the diagonal matrix with elements x i on the diagonal, I is the identity matrix, and α , β , χ are based on the memristors' physical parameters. The vector S is the vector of voltage generators in series with the memristors. The circuit topology enters only through the projector operator Ω 2 = Ω, defined in terms of the cycle matrix of the graph. The equation provides a concise mathematical description of the interactions due to Kirchhoff 's laws. Interestingly, the equation shares many properties with a Hopfield network , such as the existence of Lyapunov functions and classical tunnelling phenomena. [ 50 ]
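For a single memristor in series with a single generator, the projector Ω reduces to the scalar 1 and the equation can be integrated directly. The following sketch, with assumed parameter values, shows the internal memory variable relaxing to a stable state.

```python
# Single-device specialisation of the network equation above: one memristor
# in series with one voltage generator, so the projector Omega reduces to 1.
# All parameter values are illustrative assumptions.
alpha, beta, chi = 5.0, 1.0, 0.9   # decay rate, charge scale, chi (assumed)
S = 1.0                            # series generator term (assumed)
R_on, R_off = 100.0, 16_000.0      # bounding resistances of the switch

x, dt = 0.0, 1e-3                  # internal memory variable in [0, 1]
for _ in range(20_000):
    # dx/dt = -alpha*x + (1/beta) * (1 - chi*x)^(-1) * S
    x += dt * (-alpha * x + S / (beta * (1.0 - chi * x)))
    x = min(max(x, 0.0), 1.0)      # hard bounds of the two-state switch

print(f"steady-state memory variable: x = {x:.4f}")
print(f"device resistance R(x) = {R_off * (1 - x) + R_on * x:.0f} ohm")
```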
In the context of memristive networks, the CTDV equation may be used to predict the behavior of memristive devices under different operating conditions, or to design and optimize memristive circuits for specific applications. Some researchers have raised the question of the scientific legitimacy of HP's memristor models in explaining the behavior of ReRAM , [ 36 ] [ 37 ] and have suggested extended memristive models to remedy perceived deficiencies. [ 25 ] One example [ 51 ] attempts to extend the memristive systems framework by including dynamic systems incorporating higher-order derivatives of the input signal u ( t ) as a series expansion, where m is a positive integer, u ( t ) is an input signal, y ( t ) is an output signal, the vector x represents a set of n state variables describing the device, and the functions g and f are continuous functions . This equation produces the same zero-crossing hysteresis curves as memristive systems, but with a frequency response different from that predicted by memristive systems. Another example suggests including an offset value a to account for an observed nanobattery effect, which violates the predicted zero-crossing pinched hysteresis effect. [ 25 ] There exist implementations of memristors with a hysteretic current-voltage curve, or with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Memristors with a hysteretic current-voltage curve use a resistance dependent on the history of the current and voltage; they bode well for the future of memory technology due to their simple structure, high energy efficiency, and high integration density [DOI: 10.1002/aisy.202200053]. Interest in the memristor revived when an experimental solid-state version was reported by R. Stanley Williams of Hewlett Packard in 2007. [ 52 ] [ 53 ] [ 54 ] The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. The device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current. Although not cited in HP's initial reports on their TiO 2 memristor, the resistance switching characteristics of titanium dioxide were originally described in the 1960s. [ 55 ] The HP device is composed of a thin (50 nm ) titanium dioxide film between two 5 nm thick electrodes , one titanium , the other platinum . Initially, there are two layers to the titanium dioxide film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers , meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift (see Fast-ion conductor ), changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole is dependent on how much charge has been passed through it in a particular direction, which is reversible by changing the direction of current. [ 17 ] Since the HP device displays fast-ion conduction at the nanoscale, it is considered a nanoionic device . [ 56 ] Memristance is displayed only when both the doped layer and the depleted layer contribute to resistance. When enough charge has passed through the memristor that the ions can no longer move, the device enters hysteresis .
It ceases to integrate q = ∫ I d t , but rather keeps q at an upper bound and M fixed, thus acting as a constant resistor until the current is reversed. Memory applications of thin-film oxides had been an area of active investigation for some time. IBM published an article in 2000 regarding structures similar to that described by Williams. [ 57 ] Samsung has a U.S. patent for oxide-vacancy based switches similar to that described by Williams. [ 58 ] In April 2010, HP Labs announced that they had practical memristors working at 1 ns (~1 GHz) switching times and 3 nm by 3 nm sizes, [ 59 ] which boded well for the future of the technology. [ 60 ] At these densities it could easily rival the current sub-25 nm flash memory technology. Memristance appears to have been reported in nanoscale thin films of silicon dioxide as early as the 1960s. [ 61 ] However, hysteretic conductance in silicon was associated with memristive effects only in 2009. [ 62 ] More recently, beginning in 2012, Tony Kenyon, Adnan Mehonic and their group clearly demonstrated that resistive switching in silicon oxide thin films is due to the formation of oxygen-vacancy filaments in defect-engineered silicon dioxide, having directly probed the movement of oxygen under electrical bias and imaged the resultant conductive filaments using conductive atomic force microscopy. [ 63 ] In 2004, Krieger and Spitzer described dynamic doping of polymer and inorganic dielectric-like materials that improved the switching characteristics and retention required to create functioning nonvolatile memory cells. [ 64 ] They used a passive layer between the electrode and the active thin films, which enhanced the extraction of ions from the electrode. It is possible to use a fast-ion conductor as this passive layer, which allows a significant reduction of the ionic extraction field. In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor. [ 65 ] In 2010, Alibart, Gamrat, Vuillaume et al. [ 66 ] introduced a new hybrid organic/ nanoparticle device (the NOMFET : Nanoparticle Organic Memory Field Effect Transistor), which behaves as a memristor [ 67 ] and which exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (an associative memory showing Pavlovian learning). [ 68 ] In 2012, Crupi, Pradhan and Tozer described a proof-of-concept design to create neural synaptic memory circuits using organic ion-based memristors. [ 69 ] The synapse circuit demonstrated long-term potentiation for learning as well as inactivity-based forgetting. Using a grid of circuits, a pattern of light was stored and later recalled. This mimics the behavior of the V1 neurons in the primary visual cortex that act as spatiotemporal filters that process visual signals such as edges and moving lines. In 2012, Erokhin and co-authors demonstrated a stochastic three-dimensional matrix, based on a polymeric memristor, with capabilities for learning and adapting. [ 70 ] In 2014, Bessonov et al. reported a flexible memristive device comprising a MoO x / MoS 2 heterostructure sandwiched between silver electrodes on a plastic foil. [ 71 ] The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost.
The memristive behaviour of the switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity and tolerance to mechanical deformation promise to emulate the appealing characteristics of biological neural systems in novel computing technologies. An atomristor is defined as an electrical device showing memristive behavior in atomically thin nanomaterials, or atomic sheets. In 2018, Ge and Wu et al. [ 72 ] in the Akinwande group at the University of Texas first reported a universal memristive effect in single-layer TMD (MX 2 , M = Mo, W; X = S, Se) atomic sheets based on a vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride , the thinnest memory material, at around 0.33 nm. [ 73 ] These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD / MOCVD , enabling low-cost fabrication. Afterwards, taking advantage of the low "on" resistance and large on/off ratio, a high-performance zero-power RF switch based on MoS 2 or h-BN atomristors was demonstrated, indicating a new application of memristors for 5G , 6G and THz communication and connectivity systems. [ 74 ] [ 75 ] In 2020, an atomistic understanding of the conductive virtual point mechanism was elucidated in an article in Nature Nanotechnology . [ 76 ] The ferroelectric memristor [ 77 ] is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two-order-of-magnitude resistance variation: R OFF ≫ R ON (an effect called tunnel electroresistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither R ON nor R OFF , but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved. In 2013, Ageev, Blinov et al. [ 78 ] reported observing a memristor effect in structures based on vertically aligned carbon nanotubes while studying bundles of CNTs by scanning tunneling microscopy . Later it was found [ 79 ] that CNT memristive switching is observed when a nanotube has a non-uniform elastic strain ΔL ≠ 0. It was shown that the memristive switching mechanism of a strained CNT is based on the formation and subsequent redistribution of non-uniform elastic strain and a piezoelectric field E def in the nanotube under the influence of an external electric field E ( x , t ).
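The gradual domain-reversal picture of the ferroelectric memristor described above can be caricatured with a toy model in which a switched domain fraction s conducts in parallel with the unswitched fraction. Both the parallel-conduction ansatz and the relaxation-type switching law are illustrative assumptions, not the model of the cited work.

```python
# Toy model of gradual ferroelectric domain reversal: a fraction s of the
# barrier has switched polarization, and the two domain populations conduct
# in parallel.  Resistances, coercive voltage and the rate law are assumed.
R_on, R_off = 1e4, 1e6     # tunnel resistances of the two polarizations (ohm)
V_c, tau = 1.0, 1e-2       # coercive voltage (V) and switching time scale (s)
dt = 1e-4

def resistance(s):
    # Parallel conduction through switched (s) and unswitched (1 - s) domains.
    return 1.0 / (s / R_on + (1.0 - s) / R_off)

s = 0.0
for V, steps in ((+1.5, 100), (+1.5, 100), (-1.5, 300)):
    for _ in range(steps):
        if abs(V) > V_c:                      # domains only move above V_c
            target = 1.0 if V > 0 else 0.0
            s += dt / tau * (target - s)      # nucleation-and-growth proxy
    print(f"after {V:+.1f} V pulse: s = {s:.3f}, R = {resistance(s):,.0f} ohm")
```

Successive identical pulses move the resistance through intermediate values, mirroring the fine tuning of the resistance described in the text.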
Biomaterials have been evaluated for use in artificial synapses and have shown potential for application in neuromorphic systems. [ 80 ] In particular, the feasibility of using a collagen-based biomemristor as an artificial synaptic device has been investigated, [ 81 ] while a synaptic device based on lignin demonstrated rising or falling current with consecutive voltage sweeps depending on the sign of the voltage. [ 82 ] Furthermore, natural silk fibroin demonstrated memristive properties, [ 83 ] and spin-memristive systems based on biomolecules are also being studied. [ 84 ] In 2012, Sandro Carrara and co-authors proposed the first biomolecular memristor, with the aim of realizing highly sensitive biosensors. [ 85 ] Since then, several memristive sensors have been demonstrated. [ 86 ] Chen and Wang, researchers at disk-drive manufacturer Seagate Technology , described three examples of possible magnetic memristors. [ 87 ] In one device, resistance occurs when the spin of electrons in one section of the device points in a different direction from those in another section, creating a "domain wall", a boundary between the two sections. Electrons flowing into the device have a certain spin, which alters the device's magnetization state. Changing the magnetization, in turn, moves the domain wall and changes the resistance. The work's significance led to an interview by IEEE Spectrum . [ 88 ] A first experimental proof of the spintronic memristor based on domain wall motion by spin currents in a magnetic tunnel junction was given in 2011. [ 89 ] The magnetic tunnel junction has been proposed to act as a memristor through several potentially complementary mechanisms, both extrinsic (redox reactions, charge trapping/detrapping and electromigration within the barrier) and intrinsic ( spin-transfer torque ). Based on research performed between 1999 and 2003, Bowen et al. published experiments in 2006 on a magnetic tunnel junction (MTJ) endowed with bi-stable spin-dependent states [ 90 ] ( resistive switching ). The MTJ consists of a SrTiO 3 (STO) tunnel barrier that separates a half-metallic oxide LSMO electrode and a ferromagnetic metal CoCr electrode. The MTJ's usual two device resistance states, characterized by a parallel or antiparallel alignment of electrode magnetization, are altered by applying an electric field. When the electric field is applied from the CoCr to the LSMO electrode, the tunnel magnetoresistance (TMR) ratio is positive. When the direction of the electric field is reversed, the TMR is negative. In both cases, large amplitudes of TMR on the order of 30% are found. Since a fully spin-polarized current flows from the half-metallic LSMO electrode, within the Julliere model this sign change suggests a sign change in the effective spin polarization of the STO/CoCr interface. The origin of this multistate effect lies in the observed migration of Cr into the barrier and its oxidation state. The sign change of TMR can originate from modifications to the STO/CoCr interface density of states, as well as from changes to the tunneling landscape at the STO/CoCr interface induced by CrOx redox reactions. Reports on memristive switching within MgO-based MTJs appeared starting in 2008 [ 91 ] and 2009. [ 92 ] While the drift of oxygen vacancies within the insulating MgO layer has been proposed to describe the observed memristive effects, [ 92 ] another explanation could be charge trapping/detrapping on the localized states of oxygen vacancies [ 93 ] and its impact [ 94 ] on spintronics.
This highlights the importance of understanding the role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity [ 95 ] or multiferroicity. [ 96 ] The magnetization state of an MTJ can be controlled by spin-transfer torque , and can thus, through this intrinsic physical mechanism, exhibit memristive behavior. This spin torque is induced by current flowing through the junction, and leads to an efficient means of implementing MRAM . However, the length of time the current flows through the junction determines the amount of current needed, i.e., charge is the key variable. [ 97 ] The combination of intrinsic (spin-transfer torque) and extrinsic (resistive switching) mechanisms naturally leads to a second-order memristive system described by the state vector x = ( x 1 , x 2 ), where x 1 describes the magnetic state of the electrodes and x 2 denotes the resistive state of the MgO barrier. In this case the change of x 1 is current-controlled (spin torque is due to a high current density) whereas the change of x 2 is voltage-controlled (the drift of oxygen vacancies is due to high electric fields). The presence of both effects in a memristive magnetic tunnel junction led to the idea of a nanoscopic synapse-neuron system. [ 98 ] A fundamentally different mechanism for memristive behavior has been proposed by Pershin and Di Ventra . [ 99 ] [ 100 ] The authors show that certain types of semiconductor spintronic structures belong to a broad class of memristive systems as defined by Chua and Kang. [ 2 ] The mechanism of memristive behavior in such structures is based entirely on the electron spin degree of freedom, which allows for more convenient control than the ionic transport in nanostructures. When an external control parameter (such as voltage) is changed, the adjustment of the electron spin polarization is delayed because of diffusion and relaxation processes, causing hysteresis. This result was anticipated in the study of spin extraction at semiconductor/ferromagnet interfaces, [ 101 ] but was not described in terms of memristive behavior. On a short time scale, these structures behave almost as an ideal memristor. [ 1 ] This result broadens the possible range of applications of semiconductor spintronics and represents a step toward future practical applications. In 2017, Kris Campbell formally introduced the self-directed channel (SDC) memristor. [ 102 ] The SDC device is the first memristive device available commercially to researchers, students and electronics enthusiasts worldwide. [ 103 ] The SDC device is operational immediately after fabrication. In the Ge 2 Se 3 active layer, Ge-Ge homopolar bonds are found and switching occurs. The three layers consisting of Ge 2 Se 3 /Ag/Ge 2 Se 3 , directly below the top tungsten electrode, mix together during deposition and jointly form the silver-source layer. A layer of SnSe lies between these two layers, ensuring that the silver-source layer is not in direct contact with the active layer. Since silver does not migrate into the active layer at high temperatures, and the active layer maintains a high glass transition temperature of about 350 °C (662 °F), the device has significantly higher processing and operating temperatures, of 250 °C (482 °F) and at least 150 °C (302 °F), respectively. These processing and operating temperatures are higher than those of most ion-conducting chalcogenide device types, including the S-based glasses (e.g. GeS) that need to be photodoped or thermally annealed. These factors allow the SDC device to operate over a wide range of temperatures, including long-term continuous operation at 150 °C (302 °F).
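The second-order memristive description of the magnetic tunnel junction given earlier (state vector x = ( x 1 , x 2 )) can likewise be sketched behaviorally: x 1 responds only above an assumed critical current, x 2 only above an assumed critical voltage. All rate laws, thresholds and resistance values are illustrative assumptions.

```python
# Behavioral sketch of a second-order memristive system: x1 (magnetic state)
# changes only when the current exceeds a spin-torque threshold, x2 (barrier
# state) only when the voltage exceeds a drift threshold.  All numbers and
# rate laws are illustrative assumptions.
I_c, V_c = 1e-3, 0.8           # critical current (A) and voltage (V), assumed
R_P, R_AP = 1_000.0, 2_000.0   # parallel / antiparallel resistances (ohm)
dR_bar = 500.0                 # barrier resistance tunable by vacancy drift

def resistance(x1, x2):
    # Magnetic and barrier contributions combine into the total resistance.
    return R_P + x1 * (R_AP - R_P) + x2 * dR_bar

x1 = x2 = 0.0
dt, rate = 1e-6, 1e3           # time step (s) and switching rate (1/s)
for V, steps in ((+1.2, 2_000), (-0.5, 2_000)):   # write pulse, then read
    for _ in range(steps):
        I = V / resistance(x1, x2)
        if abs(I) > I_c:       # current-controlled magnetic variable x1
            x1 = min(max(x1 + dt * rate * (1 if I > 0 else -1), 0.0), 1.0)
        if abs(V) > V_c:       # voltage-controlled barrier variable x2
            x2 = min(max(x2 + dt * rate * (1 if V > 0 else -1), 0.0), 1.0)
    print(f"V = {V:+.1f} V -> x1 = {x1:.2f}, x2 = {x2:.2f}, "
          f"R = {resistance(x1, x2):.0f} ohm")
```

The write pulse moves both state variables (and is self-limiting for x 1 , since the growing resistance pulls the current back below threshold), while the weaker pulse reads the device without disturbing either state.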
GeS) that need to be photodoped or thermally annealed. These factors allow the SDC device to operate over a wide range of temperatures, including long-term continuous operation at 150 °C (302 °F). There exist implementations of memristors with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Such memristors use a memristance that depends on the history of both the flux and the charge, and they can merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. Time-integrated Forming-free (TiF) memristors exhibit a hysteretic flux-charge curve and a hysteretic current-voltage curve, each with two distinguishable branches in the positive bias range and two distinguishable branches in the negative bias range. The memristance state of a TiF memristor can be controlled by both the flux and the charge [DOI: 10.1063/1.4775718]. A TiF memristor was first demonstrated by Heidemarie Schmidt and her team in 2011 [DOI: 10.1063/1.3601113]. This TiF memristor is composed of a BiFeO 3 thin film between metallically conducting electrodes, one gold, the other platinum. The hysteretic flux-charge curve of the TiF memristor changes its slope continuously in one branch in the positive and in one branch in the negative bias range (the write branches), and has a constant slope in one branch in the positive and in one branch in the negative bias range (the read branches) [arXiv:2403.20051]. According to Leon O. Chua, [ 1 ] the slope of the flux-charge curve corresponds to the memristance of a memristor or to its internal state variables. TiF memristors can therefore be considered memristors with a constant memristance in the two read branches and a reconfigurable memristance in the two write branches. The physical memristor model that describes the hysteretic current-voltage curves of the TiF memristor implements static and dynamic internal state variables in the two read branches and in the two write branches [arXiv:2402.10358]. The static and dynamic internal state variables of non-linear memristors can be used to implement operations on non-linear memristors representing linear, non-linear, and even transcendental (e.g. exponential or logarithmic) input-output functions. The transport characteristics of the TiF memristor in the small-current, small-voltage range are non-linear. This non-linearity compares well with the non-linear small-signal characteristics of the former and present basic building blocks of the arithmetic logic unit of von Neumann computers, i.e. vacuum tubes and transistors. In contrast to vacuum tubes and transistors, the signal output of hysteretic flux-charge memristors, i.e. of TiF memristors, is not lost when the operating power is switched off before the signal output is stored to memory. Therefore, hysteretic flux-charge memristors are said to merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. The transport characteristics of hysteretic current-voltage memristors in the small-current, small-voltage range are linear.
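The flux-charge description can be illustrated numerically: flux and charge are just the time integrals of the device voltage and current, so the memristance appears as the local slope of the flux-charge curve. The following minimal Python sketch recovers M(q) from a synthetic voltage/current record; the drive waveform and the linear M(q) are illustrative assumptions, not a model of a real TiF device.

```python
import numpy as np

# Synthetic "measurement" of a charge-controlled memristor driven by a
# strictly positive current, so that the charge q(t) grows monotonically.
# The linear memristance M(q) is an illustrative assumption.
t = np.linspace(0.0, 2.0, 20000)            # time (s)
i = 1e-3 * (1.2 + np.sin(2 * np.pi * t))    # drive current (A), always > 0
dt = t[1] - t[0]

q = np.cumsum(i) * dt                       # charge q(t) = integral of i dt
M_true = 1e3 + 5e5 * q                      # assumed memristance (ohm)
v = M_true * i                              # terminal voltage v(t) = M(q) i(t)

phi = np.cumsum(v) * dt                     # flux phi(t) = integral of v dt

# The memristance is recovered as the slope of the flux-charge curve.
M_est = np.gradient(phi, q)
print("max relative error of the slope estimate:",
      np.max(np.abs(M_est - M_true) / M_true))
```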
The linear small-signal characteristics explain why hysteretic current-voltage memristors are well-established memory units and why they cannot merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [arXiv:2403.20051]. Memristors remain a laboratory curiosity, as yet made in insufficient numbers to gain any commercial applications. A potential application of memristors is in analog memories for superconducting quantum computers. [ 12 ] Memristors can potentially be fashioned into non-volatile solid-state memory, which could allow greater data density than hard drives with access times similar to DRAM, replacing both components. [ 31 ] HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter, [ 104 ] and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm 3 ). [ 105 ] In May 2008, HP reported that its device reached about one-tenth the speed of DRAM. [ 106 ] The devices' resistance would be read with alternating current so that the stored value would not be affected. [ 107 ] In May 2012, it was reported that the access time had been improved to 90 nanoseconds, nearly one hundred times faster than contemporaneous flash memory, while consuming only one percent as much energy. [ 108 ] Memristors have applications in programmable logic, [ 109 ] signal processing, [ 110 ] super-resolution imaging, [ 111 ] physical neural networks, [ 112 ] control systems, [ 113 ] reconfigurable computing, [ 114 ] in-memory computing, [ 115 ] brain–computer interfaces [ 116 ] and RFID. [ 117 ] Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation. [ 118 ] Several early works have been reported in this direction. [ 119 ] [ 120 ] In 2009, a simple electronic circuit [ 121 ] consisting of an LC network and a memristor was used to model experiments on the adaptive behavior of unicellular organisms. [ 122 ] It was shown that, subjected to a train of periodic pulses, the circuit learns and anticipates the next pulse, similar to the behavior of the slime mold Physarum polycephalum, where the viscosity of channels in the cytoplasm responds to periodic environmental changes. [ 122 ] Applications of such circuits may include, e.g., pattern recognition. Under the DARPA SyNAPSE project, HP Labs, in collaboration with the Boston University Neuromorphics Lab, has been developing neuromorphic architectures that may be based on memristive systems. In 2010, Versace and Chandler described the MoNETA (Modular Neural Exploring Traveling Agent) model. [ 123 ] MoNETA is the first large-scale neural network model to implement whole-brain circuits to power a virtual and robotic agent using memristive hardware. [ 124 ] Application of the memristor crossbar structure in the construction of an analog soft computing system was demonstrated by Merrikh-Bayat and Shouraki. [ 125 ] In 2011, they showed [ 126 ] how memristor crossbars can be combined with fuzzy logic to create an analog memristive neuro-fuzzy computing system with fuzzy input and output terminals. Learning is based on the creation of fuzzy relations inspired by the Hebbian learning rule. In 2013, Leon Chua published a tutorial underlining the broad range of complex phenomena and applications that memristors span, and how they can be used as non-volatile analog memories and can mimic classic habituation and learning phenomena. [ 127 ]
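The in-memory computing and crossbar applications mentioned above rest on one simple circuit fact: in a crossbar of programmable conductances, each column current is an analog dot product of the row voltages with the stored conductance values. A minimal idealized sketch (ignoring wire resistance, sneak paths and device non-linearity; all numbers are illustrative):

```python
import numpy as np

# Idealized memristor crossbar: G[i, j] is the programmed conductance (S)
# of the device connecting row i to column j. Applying voltages v to the
# rows and holding the columns at virtual ground gives column currents
# I_j = sum_i G[i, j] * v[i], i.e. an analog matrix-vector product.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductance states (siemens)
v = np.array([0.2, -0.1, 0.3, 0.05])       # row read voltages (volts)

i_columns = G.T @ v                        # what the column amplifiers read
print(i_columns)
```

A vector-matrix multiply, the core operation of neural-network inference, thus happens in a single analog step inside the memory array itself.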
The memistor and memtransistor are transistor-based devices which include memristor function. In 2009, Di Ventra, Pershin, and Chua extended [ 128 ] the notion of memristive systems to capacitive and inductive elements in the form of memcapacitors and meminductors, whose properties depend on the state and history of the system; the theory was further extended in 2013 by Di Ventra and Pershin. [ 22 ] In September 2014, Mohamed-Salah Abdelouahab, Rene Lozi, and Leon Chua published a general theory of 1st-, 2nd-, 3rd-, and nth-order memristive elements using fractional derivatives. [ 129 ] Sir Humphry Davy is said by some to have performed the first experiments which can be explained by memristor effects as long ago as 1808. [ 20 ] [ 130 ] However, the first device of a related nature to be constructed was the memistor (i.e., memory resistor), a term coined in 1960 by Bernard Widrow to describe a circuit element of an early artificial neural network called ADALINE. A few years later, in 1968, Argall published an article showing the resistance-switching effects of TiO 2 , which was later claimed by researchers from Hewlett-Packard to be evidence of a memristor. [ 55 ] Leon Chua postulated his new two-terminal circuit element in 1971. It was characterized by a relationship between charge and flux linkage as a fourth fundamental circuit element. [ 1 ] Five years later, he and his student Sung Mo Kang generalized the theory of memristors and memristive systems, including a property of zero crossing in the Lissajous curve characterizing current vs. voltage behavior. [ 2 ] On May 1, 2008, Strukov, Snider, Stewart, and Williams published an article in Nature identifying a link between the two-terminal resistance-switching behavior found in nanoscale systems and memristors. [ 17 ] On 23 January 2009, Di Ventra, Pershin, and Chua extended the notion of memristive systems to capacitive and inductive elements, namely capacitors and inductors, whose properties depend on the state and history of the system. [ 128 ] In July 2014, the MeMOSat/LabOSat group [ 131 ] (composed of researchers from Universidad Nacional de General San Martín (Argentina), INTI, CNEA, and CONICET) put memory devices into a low Earth orbit. [ 132 ] Since then, seven missions with different devices [ 133 ] have been performing experiments in low orbits, onboard Satellogic's Ñu-Sat satellites. [ 134 ] [ 135 ] On 7 July 2015, Knowm Inc. announced the commercial availability of Self Directed Channel (SDC) memristors. [ 136 ] These devices remain available in small numbers. On 13 July 2018, MemSat (Memristor Satellite) was launched to fly a memristor evaluation payload. [ 137 ] In 2021, Jennifer Rupp and Martin Bazant of MIT started a "Lithionics" research programme to investigate applications of lithium beyond its use in battery electrodes, including lithium oxide-based memristors in neuromorphic computing. [ 138 ] [ 139 ] In May 2023, TECHiFAB GmbH [https://techifab.com/] announced the commercial availability of TiF memristors. [arXiv: 2403.20051, arXiv: 2402.10358] These TiF memristors remain available in small and medium numbers. In the September 2023 issue of Science, Chinese scientists Wenbin Zhang et al. described the development and testing of a memristor-based integrated circuit. [ 140 ]
https://en.wikipedia.org/wiki/Memductance
A memex (from "memory expansion") is a hypothetical electromechanical device for interacting with microform documents, described in Vannevar Bush's 1945 article "As We May Think". Bush envisioned the memex as a device in which individuals would compress and store all of their books, records, and communications, "mechanized so that it may be consulted with exceeding speed and flexibility". The individual was supposed to use the memex as an automatic personal filing system, making the memex "an enlarged intimate supplement to his memory". [ 1 ] The concept of the memex influenced the development of early hypertext systems and personal knowledge base software. [ 2 ] The hypothetical implementation depicted by Bush for the purpose of concrete illustration was based upon a document bookmark list of static microfilm pages and lacked a true hypertext system, where parts of pages would have internal structure beyond the common textual format. In "As We May Think", Vannevar Bush describes a memex as an electromechanical device enabling individuals to develop and read a large self-contained research library, create and follow associative trails of links and personal annotations, and recall these trails at any time to share them with other researchers. This device would closely mimic the associative processes of the human mind, but it would be gifted with permanent recollection. As Bush writes, "Thus science may implement the ways in which man produces, stores, and consults the record of the race". [ 3 ] The technology used would have been a combination of electromechanical controls and microfilm cameras and readers, all integrated into a large desk. Most of the microfilm library would have been contained within the desk, but the user could add or remove microfilm reels at will. A memex would hypothetically read and write content on these microfilm reels, using electric photocells to read coded symbols recorded next to individual microfilm frames while the reels spun at high speed, stopping on command. The coded symbols would enable the memex to index, search, and link content to create and follow associative trails. The top of the desk would have slanting translucent screens on which material could be projected for convenient reading. The top of the memex would have a transparent platen. When a longhand note, photograph, memorandum, or other item was placed on the platen, the depression of a lever would cause it to be photographed onto the next blank space in a section of the memex film. According to Bush, the memex could become "a sort of mechanized private file and library". [ 4 ] The memex device as described by Bush "would use microfilm storage, dry photography, and analog computing to give postwar scholars access to a huge, indexed repository of knowledge any section of which could be called up with a few keystrokes." [ 5 ] An associative trail as conceived by Bush would be a way to create a new linear sequence of microfilm frames across any arbitrary sequence of microfilm frames by creating a chained sequence of links in the way just described, along with personal comments and side trails. At the time, Bush saw the current ways of indexing information as limiting and instead proposed a way to store information that was analogous to the mental association of the human brain: storing information with the capability of easy access at a later time using certain cues (in this case, a series of numbers as a code to retrieve data). [ 6 ]
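In modern terms, Bush's associative trail is a user-defined chain over stored frames, with annotations and branching side trails that can be copied and shared whole. The small Python sketch below is a present-day illustration of that structure; the class and field names are invented for this example and are not Bush's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One stored item (a microfilm frame, in Bush's device)."""
    code: str        # the retrieval code recorded beside the frame
    content: str

@dataclass
class Trail:
    """An associative trail: a named, ordered chain of frames."""
    name: str
    frames: list = field(default_factory=list)
    comments: dict = field(default_factory=dict)     # frame code -> annotation
    side_trails: dict = field(default_factory=dict)  # frame code -> Trail

    def link(self, frame: Frame):
        self.frames.append(frame)

    def annotate(self, code: str, note: str):
        self.comments[code] = note

    def branch(self, code: str, side: "Trail"):
        self.side_trails[code] = side

# Building a trail; a copy of the whole structure (links plus annotations)
# could then be inserted into another memex, as Bush describes.
trail = Trail("bow-history")
trail.link(Frame("A12", "Article on the crossbow"))
trail.link(Frame("B07", "Elastic properties of bow materials"))
trail.annotate("B07", "Compare with Turkish bow construction")
```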
According to Bush, the memex would have features other than linking. The user could record new information on microfilm, by taking photos from paper or from a touch-sensitive translucent screen. A user could "...insert a comment of his own, either linking it into the main trail or joining it by a side trail to a particular item. ...Thus he builds a trail of his interest through the maze of materials available to him." [ 7 ] A user could also create a copy of an interesting trail (containing references and personal annotations) and "...pass it to his friend for insertion in his own memex, there to be linked into the more general trail." [ 7 ] In September 1945, Life magazine published an illustration by Alfred D. Crimi showing the "Memex desk". According to Life, the Memex desk "would instantly bring files and material on any subject to the operator's fingertips", and its mechanical core would include "a mechanism which automatically photographs longhand notes, pictures and letters, then files them in the desk for future reference." [ 8 ] Bush's 1945 "As We May Think" idea for the memex extended far beyond a mechanism that might augment the research of one individual working in isolation. In Bush's idea, the ability to connect, annotate, and share both published works and personal trails would profoundly change the process by which the "world's record" is created and used: Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client's interest. The physician, puzzled by a patient's reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. ... The historian, with a vast chronological account of a people, parallels it with a skip trail that stops only on the salient items and can follow at any time contemporary trails which lead him all over civilization at a particular epoch. There is a new profession of trailblazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world's record but for his disciples the entire scaffolding by which they were erected. — As We May Think Bush said of his "As We May Think" memex device that "technical difficulties of all sorts have been ignored," but that, "also ignored are means as yet unknown which may come any day to accelerate technical progress as violently as did the advent of the thermionic tube." [ 3 ] Michael Buckland concluded that viewing Bush's 1945 vision for an information retrieval machine in relation to the subsequent development of electronic computer technology is unhistorical. Buckland studied the historical background of information retrieval in and before 1939, because the memex was based on Bush's work during 1938–1940 in building a photoelectric microfilm selector, an electronic retrieval technology invented by Emanuel Goldberg for Zeiss Ikon in the 1920s.
According to Buckland, the legacy of Bush is twofold: a significant engineering achievement in building a rapid prototype microfilm selector, and "a speculative article" which, through "the social prestige of its author, has had an immediate and lasting effect in stimulating others." [ 9 ] The pioneer of human–computer interaction Douglas Engelbart was inspired by Bush's proposal for a co-evolution between humans and machines. [ 10 ] In a 1999 publication, Engelbart recollects that, upon reading "As We May Think" in 1945, he "became 'infected' with the idea of building a means to extend and navigate this great pool of human knowledge". [ 11 ] Around 1961, Engelbart re-read Bush's article, and from 1962 onward he developed a series of technical designs. [ 12 ] Engelbart updated the memex microfilm storage desk and thereby arrived at a pioneering vision for a personal computer connected to an electronic visual display and a mouse pointing device. [ 13 ] In 1962, Engelbart sent Bush a draft article for comment; Bush never replied. The article was published in 1963 under the title "A Conceptual Framework for the Augmentation of Man's Intellect". [ 14 ] In 1965, J. C. R. Licklider dedicated his book "Libraries of the Future" to Bush. Licklider wrote that he had often heard of the memex and "trails of reference", even before he had read "As We May Think". [ 15 ] Also in 1965, Ted Nelson coined the word hypertext in a paper that quoted Bush's memex idea at length. [ 16 ] In 1968, Nelson collaborated with Andries van Dam to implement the Hypertext Editing System (HES). [ 17 ] In his 1987 book "Literary Machines", Nelson defined hypertext as "non-sequential writing with reader-controlled links". [ 18 ] When Tim Berners-Lee built his ENQUIRE software at CERN in 1980, which led to his invention of the World Wide Web in 1989, the ideas developed by Bush, Engelbart and Nelson did not influence his work, since he was not aware of them. However, as Berners-Lee began to refine his ideas, the work of these predecessors would later help to confirm the legitimacy of his concept. [ 19 ] [ 20 ] In 2003, Microsoft promoted a life-logging research project under the name MyLifeBits as an attempt to fulfill Bush's memex vision. [ 21 ] In 1959, Vannevar Bush described an improved "Memex II". [ 22 ] In the manuscript draft of "Memex II" he wrote, "Professional societies will no longer print papers..." and stated that individuals will either order sets of papers to come on tape – complete with photographs and diagrams – or download 'facsimiles' by telephone. Each society would maintain a 'master memex' containing all papers, references, and tables, "intimately interconnected by trails, so that one may follow a detailed matter from paper to paper, going back through the classics, recording criticism in the margins." [ 23 ] In 1967, Vannevar Bush published a retrospective article entitled "Memex Revisited" [ 24 ] in his book Science Is Not Enough. Published 22 years after his initial conception of the memex, the article details the various technological advancements that had made his vision a possibility. Specifically, Bush cites photocells, transistors, cathode ray tubes, magnetic tape and videotape, "high-speed electric circuits", and the "miniaturization of solid-state devices" such as the TV and radio. The article claims that magnetic tape would be central to the creation of a modern Memex device.
The erasable quality of the tape is of special significance, as this would allow for modification of the information stored in the proposed Memex. [ 24 ] In the article, Bush stresses the continued importance of supplementing "how creative men think" and relates that the systems for indexing data are still insufficient and rely too much on linear pathways rather than the association-based system of the human brain. Bush writes that a machine with the "speed and flexibility" of the brain is not attainable, but improvements could be made in regard to the capacity to obtain informational "permanence and clarity". [ 24 ] Bush also relates that, unlike digital technology, the Memex would be of no significant aid to business or profitable ventures, and as a consequence its development would occur only long after the mechanization of libraries and the introduction of what he describes as the specialized "group machine", which would be useful for the sharing of ideas in fields such as medicine. Furthermore, although Bush discusses the compressional ability and rapidity so key to modern machines, he relates that speed will not be an integral part of the Memex, stating that a tenth of a second would be an acceptable interval for its data retrieval, rather than the billionths of a second that modern computers are capable of. "For Memex," he writes, "the problem is not swift access, but selective access". Bush states that although the code-reading and potential linking capabilities of the rapid selector would be key to the creation of the Memex, there is still the issue of enabling "moderately rapid access to really large memory storage". There remains an issue concerning selection, Bush conveys: despite improvements in the speed of digital selection, "selection, in the broad sense, is still a stone adze in the hands of the cabinetmaker". Bush goes on to discuss the record-making process and how the Memex could incorporate systems of voice control and user-propagated learning. [ 24 ] He proposes a machine that could respond to "simple remarks" as well as build trails [ 24 ] based on its user's "habits of association," as Belinda Barnet described them in "The Technical Evolution of Vannevar Bush's Memex." Barnet also makes the distinction between the idea of a constructive Memex and the "permanent trails" described in "As We May Think", and attributes Bush's machine-learning concepts to Claude Shannon's mechanical mouse and work with "feedback and machine learning". [ 25 ] Inspired by Bush's hypothetical device in his 1945 article, the Defense Advanced Research Projects Agency (DARPA) launched a program named Memex in 2014 to fight human trafficking crimes on the dark web. [ 26 ] DARPA later released the Memex artificial intelligence search technologies as open-source software. [ 27 ] In 2016, the DARPA Memex program received the 2016 Presidential Award for Extraordinary Efforts to Combat Trafficking in Persons for developing the anti-trafficking technology tool. [ 28 ] Dozens of law enforcement organizations worldwide use the Memex software to conduct investigations. [ 29 ]
https://en.wikipedia.org/wiki/Memex
A memristor (/ˈmɛmrɪstər/; a portmanteau of memory resistor) is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. It was described and named in 1971 by Leon Chua, completing a theoretical quartet of fundamental electrical components which also comprises the resistor, capacitor and inductor. [ 1 ] Chua and Kang later generalized the concept to memristive systems. [ 2 ] Such a system comprises a circuit, of multiple conventional components, which mimics key properties of the ideal memristor component and is also commonly referred to as a memristor. Several such memristor system technologies have been developed, notably ReRAM. The identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated. [ 3 ] [ 4 ] Chua in his 1971 paper identified a theoretical symmetry between the non-linear resistor (voltage vs. current), non-linear capacitor (voltage vs. charge), and non-linear inductor (magnetic flux linkage vs. current). From this symmetry he inferred the characteristics of a fourth fundamental non-linear circuit element, linking magnetic flux and charge, which he called the memristor. In contrast to a linear (or non-linear) resistor, the memristor has a dynamic relationship between current and voltage, including a memory of past voltages or currents. Other scientists had proposed dynamic memory resistors such as the memistor of Bernard Widrow, but Chua introduced a mathematical generality. The memristor was originally defined in terms of a non-linear functional relationship between the magnetic flux linkage Φ m (t) and the amount of electric charge that has flowed, q(t): [ 1 ] f(Φ m (t), q(t)) = 0. The magnetic flux linkage, Φ m , is generalized from the circuit characteristic of an inductor. It does not represent a magnetic field here; its physical meaning is discussed below. The symbol Φ m may be regarded as the integral of voltage over time. [ 5 ] In the relationship between Φ m and q, the derivative of one with respect to the other depends on the value of one or the other, and so each memristor is characterized by its memristance function, describing the charge-dependent rate of change of flux with charge: M(q) = dΦ m /dq. Substituting the flux as the time integral of the voltage, and the charge as the time integral of the current, gives the more convenient form M(q(t)) = (dΦ m /dt)/(dq/dt) = V(t)/I(t). To relate the memristor to the resistor, capacitor, and inductor, it is helpful to isolate the term M(q), which characterizes the device, and write it as a differential equation: each of the four fundamental elements is defined by one relation between differentials, the resistor by dV = R dI, the capacitor by dq = C dV, the inductor by dΦ m = L dI, and the memristor by dΦ m = M dq. These four relations cover all meaningful ratios of the differentials of I, q, Φ m , and V. No device can relate dI to dq, or dΦ m to dV, because I is the time derivative of q and Φ m is the integral of V with respect to time. It can be inferred from this that memristance is charge-dependent resistance. If M(q(t)) is a constant, then we obtain Ohm's law, R(t) = V(t)/I(t). If M(q(t)) is nontrivial, however, the equation is not equivalent, because q(t) and M(q(t)) can vary with time.
Solving for voltage as a function of time produces V(t) = M(q(t)) I(t). This equation reveals that memristance defines a linear relationship between current and voltage, as long as M does not vary with charge. Nonzero current implies time-varying charge. Alternating current, however, may reveal the linear dependence in circuit operation by inducing a measurable voltage without net charge movement, as long as the maximum change in q does not cause much change in M. Furthermore, the memristor is static if no current is applied: if I(t) = 0, we find V(t) = 0 and M(t) is constant. This is the essence of the memory effect. Analogously, we can define the memductance W(φ(t)) via i(t) = W(φ(t)) v(t). [ 1 ] The power consumption characteristic recalls that of a resistor, I²R: P(t) = I(t) V(t) = I²(t) M(q(t)). As long as M(q(t)) varies little, such as under alternating current, the memristor will appear as a constant resistor. If M(q(t)) increases rapidly, however, current and power consumption will quickly stop. M(q) is physically restricted to be positive for all values of q (assuming the device is passive and does not become superconductive at some q). A negative value would mean that the device would perpetually supply energy when operated with alternating current. In order to understand the nature of memristor function, some knowledge of fundamental circuit-theoretic concepts is useful, starting with the concept of device modeling. [ 6 ] Engineers and scientists seldom analyze a physical system in its original form. Instead, they construct a model which approximates the behaviour of the system. By analyzing the behaviour of the model, they hope to predict the behaviour of the actual system. The primary reason for constructing models is that physical systems are usually too complex to be amenable to practical analysis. In the 20th century, work was done on devices where researchers did not recognize the memristive characteristics. This has raised the suggestion that such devices should be recognised as memristors. [ 6 ] Pershin and Di Ventra [ 3 ] have proposed a test that can help to resolve some of the long-standing controversies about whether an ideal memristor does actually exist or is a purely mathematical concept. The rest of this article primarily addresses memristors as related to ReRAM devices, since the majority of work since 2008 has been concentrated in this area. Paul Penfield, in a 1974 MIT technical report, [ 7 ] mentions the memristor in connection with Josephson junctions. This was an early use of the word "memristor" in the context of a circuit device. One of the terms in the current through a Josephson junction is of the form i M (v) = ε cos(φ 0 ) v = W(φ 0 ) v, where ε is a constant based on the physical superconducting materials, v is the voltage across the junction and i M is the current through the junction. Through the late 20th century, research regarding this phase-dependent conductance in Josephson junctions was carried out. [ 8 ] [ 9 ] [ 10 ] [ 11 ] A more comprehensive approach to extracting this phase-dependent conductance appeared with Peotta and Di Ventra's seminal paper in 2014. [ 12 ]
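These defining relations are easy to explore numerically: drive a charge-controlled memristor with a sinusoidal current, accumulate the charge, and apply v(t) = M(q(t)) i(t). A minimal sketch, with an assumed linear M(q) kept positive throughout (illustrative values only):

```python
import numpy as np

# Ideal charge-controlled memristor under sinusoidal current drive.
# M(q) = M0 + k*q is an illustrative choice that stays positive here.
M0, k = 1.0e3, 4.0e5              # ohms, ohms per coulomb
f = 10.0                          # drive frequency (Hz)
t = np.linspace(0.0, 2.0 / f, 4000)
i = 2e-3 * np.sin(2 * np.pi * f * t)

q = np.cumsum(i) * (t[1] - t[0])  # q(t) = integral of i dt
v = (M0 + k * q) * i              # v(t) = M(q(t)) i(t)
p = v * i                         # instantaneous power, I^2 * M >= 0

# The v-i loop is "pinched": v is forced to zero wherever i is zero.
print("max |v| where i ~ 0:", np.abs(v[np.abs(i) < 1e-5]).max())
print("peak instantaneous power (W):", p.max())
```

Plotting v against i for such a run traces the pinched hysteresis loop: the curve passes through the origin on every cycle because v vanishes whenever i does.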
Due to the practical difficulty of studying the ideal memristor, we will discuss other electrical devices that can be modelled using memristors. For a mathematical description of a memristive device (system), see § Theory. A discharge tube can be modelled as a memristive device, with resistance being a function of the number of conduction electrons n e : [ 2 ] v M = R(n e ) i M , dn e /dt = β n e + α R(n e ) i M ², where v M is the voltage across the discharge tube, i M is the current flowing through it, and n e is the number of conduction electrons. A simple memristance function is R(n e ) = F/n e . The parameters α, β, and F depend on the dimensions of the tube and the gas fillings. An experimental identification of memristive behaviour is the "pinched hysteresis loop" in the v-i plane. [ a ] [ 13 ] [ 14 ] Thermistors can be modeled as memristive devices: [ 14 ] v = R 0 (T 0 ) exp[β(1/T − 1/T 0 )] i ≡ R(T) i, dT/dt = (1/C)[−δ(T − T 0 ) + R(T) i²], where β is a material constant, T is the absolute body temperature of the thermistor, T 0 is the ambient temperature (both temperatures in kelvins), R 0 (T 0 ) denotes the cold-temperature resistance at T = T 0 , C is the heat capacitance and δ is the dissipation constant for the thermistor. A fundamental phenomenon that has hardly been studied is memristive behaviour in p-n junctions. [ 15 ] The memristor plays a crucial role in mimicking the charge-storage effect in the diode base, and is also responsible for the conductivity-modulation phenomenon (which is so important during forward transients). In 2008, a team at HP Labs found experimental evidence for Chua's memristor based on an analysis of a thin film of titanium dioxide, thus connecting the operation of ReRAM devices to the memristor concept. According to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history, the so-called non-volatility property. [ 16 ] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again. [ 17 ] [ 18 ] The HP Labs result was published in the scientific journal Nature. [ 17 ] [ 19 ] Following this claim, Leon Chua argued that the memristor definition could be generalized to cover all forms of two-terminal non-volatile memory devices based on resistance-switching effects. [ 16 ] Chua also argued that the memristor is the oldest known circuit element, with its effects predating the resistor, capacitor, and inductor. [ 20 ] However, there are doubts as to whether a memristor can actually exist in physical reality. [ 21 ] [ 22 ] [ 23 ] [ 24 ] Additionally, some experimental evidence contradicts Chua's generalization, since a non-passive nanobattery effect is observable in resistance-switching memory. [ 25 ]
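The thermistor model above can be integrated directly to see the memory effect: the body temperature, the internal state, lags the drive, so the resistance depends on the recent current history. A minimal forward-Euler sketch with invented parameter values (not data for any real thermistor):

```python
import numpy as np

# First-order memristive model of a thermistor: v = R(T) i, with the body
# temperature T as the internal state. Parameter values are illustrative.
R0, beta = 10e3, 3500.0     # cold resistance (ohm), material constant (K)
T0 = 293.0                  # ambient temperature (K)
C, delta = 2e-3, 1e-3       # heat capacitance (J/K), dissipation (W/K)

def R(T):
    return R0 * np.exp(beta * (1.0 / T - 1.0 / T0))

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
i = 5e-3 * np.sin(2 * np.pi * 0.5 * t)   # slow sinusoidal current drive

T = T0
v = np.empty_like(t)
for n in range(len(t)):
    v[n] = R(T) * i[n]
    # dT/dt = (1/C) * (-delta*(T - T0) + R(T)*i^2): Joule heating vs cooling
    T += dt / C * (-delta * (T - T0) + R(T) * i[n] ** 2)

print("temperature rise at end of run (K):", T - T0)
```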
A simple test has been proposed by Pershin and Di Ventra [ 3 ] to analyze whether such an ideal or generic memristor does actually exist or is a purely mathematical concept. So far, there appears to be no experimental resistance-switching device (ReRAM) which can pass the test. [ 3 ] [ 4 ] These devices are intended for applications in nanoelectronic memory devices, computer logic, and neuromorphic/neuromemristive computer architectures. [ 26 ] [ 27 ] In 2013, Hewlett-Packard CTO Martin Fink suggested that memristor memory may become commercially available as early as 2018. [ 28 ] In March 2012, a team of researchers from HRL Laboratories and the University of Michigan announced the first functioning memristor array built on a CMOS chip. [ 29 ] According to the original 1971 definition, the memristor is the fourth fundamental circuit element, forming a non-linear relationship between electric charge and magnetic flux linkage. In 2011, Chua argued for a broader definition that includes all two-terminal non-volatile memory devices based on resistance switching. [ 16 ] Williams argued that MRAM, phase-change memory and ReRAM are memristor technologies. [ 32 ] Some researchers argued that biological structures such as blood [ 33 ] and skin [ 34 ] [ 35 ] fit the definition. Others argued that the memory device under development by HP Labs and other forms of ReRAM are not memristors, but rather part of a broader class of variable-resistance systems, [ 36 ] and that a broader definition of memristor is a scientifically unjustifiable land grab that favored HP's memristor patents. [ 37 ] In 2011, Meuffels and Schroeder noted that one of the early memristor papers included a mistaken assumption regarding ionic conduction. [ 38 ] In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors. [ 21 ] They indicated inadequacies in the electrochemical modeling presented in the Nature article "The missing memristor found", [ 17 ] because the impact of concentration polarization effects on the behavior of metal−TiO 2−x −metal structures under voltage or current stress was not considered. [ 25 ] In a kind of thought experiment, Meuffels and Soni [ 21 ] furthermore revealed a severe inconsistency: if a current-controlled memristor with the so-called non-volatility property [ 16 ] existed in physical reality, its behavior would violate Landauer's principle, which places a limit on the minimum amount of energy required to change the "information" states of a system. This critique was finally adopted by Di Ventra and Pershin [ 22 ] in 2013. Within this context, Meuffels and Soni [ 21 ] pointed to a fundamental thermodynamic principle: non-volatile information storage requires the existence of free-energy barriers that separate the distinct internal memory states of a system from each other; otherwise, one would be faced with an "indifferent" situation, and the system would arbitrarily fluctuate from one memory state to another just under the influence of thermal fluctuations. When unprotected against thermal fluctuations, the internal memory states exhibit some diffusive dynamics, which causes state degradation. [ 22 ] The free-energy barriers must therefore be high enough to ensure a low bit-error probability. [ 39 ] Consequently, there is always a lower limit of the energy requirement – depending on the required bit-error probability – for intentionally changing a bit value in any memory device. [ 39 ] [ 40 ]
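The scale of these requirements can be estimated with the standard Arrhenius escape picture: a state behind a barrier E b decays at a rate of roughly f 0 exp(−E b /k B T), so holding a bit for a time t with error probability P requires E b ≳ k B T ln(f 0 t/P). A back-of-the-envelope sketch, where the attempt frequency and the retention targets are generic assumptions rather than values from the cited papers:

```python
import numpy as np

kB_T = 1.380649e-23 * 300.0         # thermal energy at 300 K (J)
f0 = 1e12                           # assumed attempt frequency (1/s)
t_ret = 10 * 365.25 * 24 * 3600.0   # 10-year retention target (s)
P_err = 1e-9                        # tolerated probability of a thermal flip

# Arrhenius escape: P ~ f0 * t * exp(-E_b / kB_T) for small P,
# so the required barrier is E_b = kB_T * ln(f0 * t / P).
E_b = kB_T * np.log(f0 * t_ret / P_err)
print("required barrier: %.1f kT = %.2e J" % (E_b / kB_T, E_b))
print("Landauer bound kT ln 2 = %.2e J" % (kB_T * np.log(2.0)))
```

The resulting barrier of tens of k B T dwarfs the Landauer bound of k B T ln 2, illustrating why practical write energies sit far above the thermodynamic minimum.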
In the general concept of a memristive system, the defining equations are (see § Theory): y(t) = g(x, u, t) u(t), dx/dt = f(x, u, t), where u(t) is an input signal and y(t) is an output signal. The vector x represents a set of n state variables describing the different internal memory states of the device, and dx/dt is the time-dependent rate of change of the state vector x. When one wants to go beyond mere curve fitting and aims at a real physical modeling of non-volatile memory elements, e.g., resistive random-access memory devices, one has to keep an eye on the aforementioned physical correlations. To check the adequacy of a proposed model and its resulting state equations, the input signal u(t) can be superposed with a stochastic term ξ(t) which takes into account the existence of inevitable thermal fluctuations. The dynamic state equation in its general form then reads: dx/dt = f(x, u(t) + ξ(t), t), where ξ(t) is, e.g., white Gaussian current or voltage noise. On the basis of an analytical or numerical analysis of the time-dependent response of the system to noise, a decision on the physical validity of the modeling approach can be made, e.g., whether the system would be able to retain its memory states in power-off mode. Such an analysis was performed by Di Ventra and Pershin [ 22 ] with regard to the genuine current-controlled memristor. As the proposed dynamic state equation provides no physical mechanism enabling such a memristor to cope with inevitable thermal fluctuations, a current-controlled memristor would erratically change its state in the course of time just under the influence of current noise. [ 22 ] [ 41 ] Di Ventra and Pershin [ 22 ] thus concluded that memristors whose resistance (memory) states depend solely on the current or voltage history would be unable to protect their memory states against unavoidable Johnson–Nyquist noise and would permanently suffer from information loss, a so-called "stochastic catastrophe". A current-controlled memristor can thus not exist as a solid-state device in physical reality. The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g., "resistance-switching" memory devices (ReRAM)) cannot be associated with the memristor concept, i.e., such devices cannot by themselves remember their current or voltage history. Transitions between distinct internal memory or resistance states are of a probabilistic nature. The probability for a transition from state { i } to state { j } depends on the height of the free-energy barrier between both states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by "lowering" the free-energy barrier for the transition { i }→{ j } by means of, for example, an externally applied bias. A "resistance switching" event can simply be enforced by setting the external bias to a value above a certain threshold value. This is the trivial case, i.e., the free-energy barrier for the transition { i }→{ j } is reduced to zero.
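The same Arrhenius picture makes the bias dependence concrete if one assumes the applied bias lowers the barrier linearly. In the sketch below, the attempt frequency, barrier height and barrier-lowering coefficient are all illustrative assumptions:

```python
import numpy as np

# Arrhenius picture of bias-driven switching over a free-energy barrier.
# An applied bias V is assumed to lower the barrier linearly by gamma*V.
kT = 1.380649e-23 * 300.0   # thermal energy at 300 K (J)
f0 = 1e12                   # attempt frequency (1/s), assumed
E_b = 60 * kT               # zero-bias barrier height, assumed
gamma = 25 * kT             # barrier lowering per volt (J/V), assumed

def p_switch(V, t):
    """Probability that the device switches within time t at bias V."""
    rate = f0 * np.exp(-(E_b - gamma * V) / kT)
    return 1.0 - np.exp(-rate * t)

print("retention, V = 0, 10 years:", p_switch(0.0, 3.16e8))
print("write, V = 2 V, 100 ns    :", p_switch(2.0, 1e-7))
```

With these numbers, the unbiased state survives a ten-year retention test with negligible switching probability, while a 2 V pulse of 100 ns switches almost deterministically; sub-threshold biases fall in between, which is the stochastic regime described next.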
If biases below the threshold value are applied, there is still a finite probability that the device will switch in the course of time (triggered by a random thermal fluctuation), but – as one is dealing with probabilistic processes – it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processes. If the free-energy barriers are not high enough, the memory device can even switch spontaneously, without any applied bias. When a two-terminal non-volatile memory device is found to be in a distinct resistance state { j }, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems. An extra thermodynamic curiosity arises from the definition that memristors/memristive devices should energetically act like resistors. The instantaneous electrical power entering such a device is completely dissipated as Joule heat to the surroundings, so no extra energy remains in the system after it has been brought from one resistance state x i to another one x j . Thus, the internal energy of the memristor device in state x i , U(V, T, x i ), would be the same as in state x j , U(V, T, x j ), even though these different states would give rise to different device resistances, which must themselves be caused by physical alterations of the device's material. Other researchers noted that memristor models based on the assumption of linear ionic drift do not account for the asymmetry between set time (high-to-low resistance switching) and reset time (low-to-high resistance switching) and do not provide ionic mobility values consistent with experimental data. Non-linear ionic-drift models have been proposed to compensate for this deficiency. [ 42 ] A 2014 article from ReRAM researchers concluded that Strukov's (HP's) initial/basic memristor modeling equations do not reflect the actual device physics well, whereas subsequent (physics-based) models such as Pickett's model or Menzel's ECM model (Menzel is a co-author of that article) have adequate predictability but are computationally prohibitive. As of 2014, the search continued for a model that balances these issues; the article identifies Chang's and Yakopcic's models as potentially good compromises. [ 43 ] Martin Reynolds, an electrical engineering analyst with the research firm Gartner, commented that while HP was being sloppy in calling their device a memristor, critics were being pedantic in saying that it was not a memristor. [ 44 ] Chua suggested experimental tests to determine if a device may properly be categorized as a memristor: [ 2 ] the Lissajous curve in the voltage-current plane must be a pinched hysteresis loop when driven by any bipolar periodic voltage or current, irrespective of initial conditions; the area of each lobe of the pinched hysteresis loop must shrink as the frequency of the forcing signal increases; and, as the frequency tends to infinity, the hysteresis loop must degenerate to a straight line through the origin, whose slope depends on the amplitude and shape of the forcing signal. According to Chua, [ 45 ] [ 46 ] all resistive switching memories, including ReRAM, MRAM and phase-change memory, meet these criteria and are memristors. However, the lack of data for the Lissajous curves over a range of initial conditions or over a range of frequencies complicates assessments of this claim. Experimental evidence shows that redox-based resistance memory (ReRAM) includes a nanobattery effect that is contrary to Chua's memristor model. This indicates that the memristor theory needs to be extended or corrected to enable accurate ReRAM modeling. [ 25 ] In 2008, researchers from HP Labs introduced a model for a memristance function based on thin films of titanium dioxide.
[ 17 ] For R on ≪ R off the memristance function was determined to be M(q(t)) = R off · (1 − μ v R on q(t)/D²), where R off represents the high-resistance state, R on represents the low-resistance state, μ v represents the mobility of dopants in the thin film, and D represents the film thickness. The HP Labs group noted that "window functions" were necessary to compensate for differences between experimental measurements and their memristor model due to non-linear ionic drift and boundary effects. For some memristors, applied current or voltage causes substantial change in resistance. Such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance. This assumes that the applied voltage remains constant. Solving for energy dissipation during a single switching event reveals that for a memristor to switch from R on to R off in the time interval from T off to T on , the charge must change by ΔQ = Q on − Q off : E switch = V² ∫ dt/M(q(t)) = V² ∫ dq/(I(q) M(q)) = V² ∫ dq/V(q) = V ΔQ, where the time integral runs from T off to T on and the charge integrals run from Q off to Q on . Substituting V = I(q) M(q), and then ∫ dq/V = ΔQ/V for constant V, produces the final expression. This power characteristic differs fundamentally from that of a metal-oxide-semiconductor transistor, which is capacitor-based. Unlike the transistor, the final state of the memristor in terms of charge does not depend on bias voltage. The type of memristor described by Williams ceases to be ideal after switching over its entire resistance range, creating hysteresis, also called the "hard-switching regime". [ 17 ] Another kind of switch would have a cyclic M(q) so that each off-on event would be followed by an on-off event under constant bias. Such a device would act as a memristor under all conditions, but would be less practical. In the more general concept of an n-th order memristive system, the defining equations are y(t) = g(x, u, t) u(t), dx/dt = f(x, u, t), where u(t) is an input signal, y(t) is an output signal, the vector x represents a set of n state variables describing the device, and g and f are continuous functions. For a current-controlled memristive system the signal u(t) represents the current signal i(t) and the signal y(t) represents the voltage signal v(t). For a voltage-controlled memristive system the signal u(t) represents the voltage signal v(t) and the signal y(t) represents the current signal i(t). The pure memristor is a particular case of these equations, namely when x depends only on charge (x = q), since the charge is related to the current via the time derivative dq/dt = i(t). Thus for pure memristors f (i.e., the rate of change of the state) must be equal or proportional to the current i(t). One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. [ 47 ]
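The HP memristance function drops directly into the ideal-memristor relation v = M(q) i; clipping M(q) between R on and R off mimics the dopant front reaching the film boundaries (the hard-switching regime). A minimal sketch with representative but assumed parameter values:

```python
import numpy as np

# HP linear ionic drift model: M(q) = Roff * (1 - mu_v * Ron * q / D**2),
# clipped to [Ron, Roff] to mimic the dopant front reaching the film edges.
Ron, Roff = 100.0, 16e3     # ohms (illustrative)
mu_v = 1e-14                # dopant mobility (m^2 s^-1 V^-1), assumed
D = 10e-9                   # film thickness (m)

def M(q):
    return np.clip(Roff * (1.0 - mu_v * Ron * q / D**2), Ron, Roff)

f = 1.0
t = np.linspace(0.0, 1.0 / f, 10000)
i = 1e-4 * np.sin(2 * np.pi * f * t)   # sinusoidal current drive (A)
q = np.cumsum(i) * (t[1] - t[0])       # integrated charge (C)
v = M(q) * i                           # terminal voltage (V)

print("resistance swing: %.0f ohm -> %.0f ohm" % (M(q).max(), M(q).min()))
```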
For a current-controlled memristive system, the input u(t) is the current i(t), the output y(t) is the voltage v(t), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states, a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. [ 48 ] The concept of memristive networks was first introduced by Leon Chua in his 1976 paper "Memristive Devices and Systems". [ 2 ] Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws. A memristive network is a type of artificial neural network that is based on memristive devices, electronic components that exhibit the property of memristance. In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. These weights are adjusted during the training process, allowing the network to learn and adapt to new input data. One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. They also have the potential to be more energy-efficient than traditional artificial neural networks, as they can store and process information using less power. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations. For the simplest model, with only memristive devices and voltage generators in series, there is an exact, closed-form equation (the Caravelli–Traversa–Di Ventra equation, CTDV) [ 49 ] which describes the evolution of the internal memory of the network for each device. For a simple (but not realistic) memristor model of a switch between two resistance values, given by the Williams–Strukov model R(x) = R off (1 − x) + R on x, with dx/dt = I/β − αx, there is a set of nonlinearly coupled differential equations that takes the form dx/dt = −αx + (1/β)(1 − χΩX)⁻¹ΩS, where 1 denotes the identity matrix, X is the diagonal matrix with the elements x i on the diagonal, and α, β, χ are based on the memristors' physical parameters. The vector S is the vector of the voltage generators in series with the memristors. The circuit topology enters only through the projector operator Ω (with Ω² = Ω), defined in terms of the cycle matrix of the graph. The equation provides a concise mathematical description of the interactions due to Kirchhoff's laws. Interestingly, the equation shares many properties in common with a Hopfield network, such as the existence of Lyapunov functions and classical tunnelling phenomena. [ 50 ]
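A toy numerical experiment along these lines: build the projector Ω from a cycle matrix A as Ω = Aᵀ(A Aᵀ)⁻¹A and integrate the coupled state equations with forward Euler. The two-loop circuit, the sign conventions and all parameter values below are illustrative assumptions, not a reproduction of the published analysis.

```python
import numpy as np

# CTDV-type dynamics on a toy memristive network:
#   dx/dt = -alpha*x + (1/beta) * inv(I - chi*Omega*X) @ Omega @ S
# where Omega projects onto the cycle space of the circuit graph.
alpha, beta, chi = 0.1, 1.0, 0.9

# Cycle matrix A of a small circuit (rows: independent loops,
# columns: memristive edges). This particular A is just an example.
A = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 1.0]])
Omega = A.T @ np.linalg.inv(A @ A.T) @ A   # projector: Omega @ Omega = Omega

S = np.array([1.0, 0.0, 0.0, 0.5])          # voltage generators in series
x = np.full(4, 0.5)                         # internal memory states in [0, 1]

dt = 1e-2
for _ in range(2000):
    X = np.diag(x)
    dx = -alpha * x + np.linalg.inv(np.eye(4) - chi * Omega @ X) @ Omega @ S / beta
    x = np.clip(x + dt * dx, 0.0, 1.0)       # keep states physical

print("stationary memory states:", x)
```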
In the context of memristive networks, the CTDV equation may be used to predict the behavior of memristive devices under different operating conditions, or to design and optimize memristive circuits for specific applications. Some researchers have raised the question of the scientific legitimacy of HP's memristor models in explaining the behavior of ReRAM, [ 36 ] [ 37 ] and have suggested extended memristive models to remedy perceived deficiencies. [ 25 ] One example [ 51 ] attempts to extend the memristive systems framework by including dynamic systems incorporating higher-order derivatives of the input signal u(t) as a series expansion, where m is a positive integer, u(t) is an input signal, y(t) is an output signal, the vector x represents a set of n state variables describing the device, and the functions g and f are continuous functions. This equation produces the same zero-crossing hysteresis curves as memristive systems, but with a different frequency response than that predicted by memristive systems. Another example suggests including an offset value a to account for an observed nanobattery effect, which violates the predicted zero-crossing pinched hysteresis effect. [ 25 ] There exist implementations of memristors with a hysteretic current-voltage curve, or with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Memristors with a hysteretic current-voltage curve use a resistance that depends on the history of the current and voltage, and bode well for the future of memory technology due to their simple structure, high energy efficiency, and high integration density [DOI: 10.1002/aisy.202200053]. Interest in the memristor revived when an experimental solid-state version was reported by R. Stanley Williams of Hewlett-Packard in 2007. [ 52 ] [ 53 ] [ 54 ] The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. The device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of the current. Although not cited in HP's initial reports on their TiO 2 memristor, the resistance-switching characteristics of titanium dioxide were originally described in the 1960s. [ 55 ] The HP device is composed of a thin (50 nm) titanium dioxide film between two 5 nm thick electrodes, one titanium, the other platinum. Initially, there are two layers to the titanium dioxide film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers, meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift (see Fast-ion conductor), changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole is dependent on how much charge has been passed through it in a particular direction, which is reversible by changing the direction of current. [ 17 ] Since the HP device displays fast-ion conduction at the nanoscale, it is considered a nanoionic device. [ 56 ] Memristance is displayed only when both the doped layer and the depleted layer contribute to resistance. When enough charge has passed through the memristor that the ions can no longer move, the device enters hysteresis.
It ceases to integrate q = ∫I dt, but rather keeps q at an upper bound and M fixed, thus acting as a constant resistor until the current is reversed. Memory applications of thin-film oxides had been an area of active investigation for some time. IBM published an article in 2000 regarding structures similar to that described by Williams. [ 57 ] Samsung has a U.S. patent for oxide-vacancy-based switches similar to that described by Williams. [ 58 ] In April 2010, HP Labs announced that they had practical memristors working at 1 ns (~1 GHz) switching times and 3 nm by 3 nm sizes, [ 59 ] which bodes well for the future of the technology. [ 60 ] At these densities it could easily rival the current sub-25 nm flash memory technology. Memristance appears to have been reported in nanoscale thin films of silicon dioxide as early as the 1960s. [ 61 ] However, hysteretic conductance in silicon was associated with memristive effects only in 2009. [ 62 ] More recently, beginning in 2012, Tony Kenyon, Adnan Mehonic and their group demonstrated that the resistive switching in silicon oxide thin films is due to the formation of oxygen vacancy filaments in defect-engineered silicon dioxide, having directly probed the movement of oxygen under electrical bias and imaged the resultant conductive filaments using conductive atomic force microscopy. [ 63 ] In 2004, Krieger and Spitzer described dynamic doping of polymer and inorganic dielectric-like materials that improved the switching characteristics and retention required to create functioning nonvolatile memory cells. [ 64 ] They used a passive layer between the electrode and the active thin films, which enhanced the extraction of ions from the electrode. It is possible to use a fast-ion conductor as this passive layer, which allows a significant reduction of the ionic extraction field. In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor. [ 65 ] In 2010, Alibart, Gamrat, Vuillaume et al. [ 66 ] introduced a new hybrid organic/nanoparticle device (the NOMFET: Nanoparticle Organic Memory Field Effect Transistor), which behaves as a memristor [ 67 ] and exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (an associative memory showing Pavlovian learning). [ 68 ] In 2012, Crupi, Pradhan and Tozer described a proof-of-concept design for creating neural synaptic memory circuits using organic ion-based memristors. [ 69 ] The synapse circuit demonstrated long-term potentiation for learning as well as inactivity-based forgetting. Using a grid of circuits, a pattern of light was stored and later recalled. This mimics the behavior of the V1 neurons in the primary visual cortex, which act as spatiotemporal filters that process visual signals such as edges and moving lines. In 2012, Erokhin and co-authors demonstrated a stochastic three-dimensional matrix with capabilities for learning and adapting based on polymeric memristors. [ 70 ] In 2014, Bessonov et al. reported a flexible memristive device comprising a MoO x /MoS 2 heterostructure sandwiched between silver electrodes on a plastic foil. [ 71 ] The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost.
The memristive behaviour of the switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity, and tolerance of mechanical deformation promise to emulate the appealing characteristics of biological neural systems in novel computing technologies. An atomristor is defined as an electrical device showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2018, Ge and Wu et al. [ 72 ] in the Akinwande group at the University of Texas first reported a universal memristive effect in single-layer TMD (MX 2 , M = Mo, W; and X = S, Se) atomic sheets based on a vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride , which is the thinnest memory material, at around 0.33 nm. [ 73 ] These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD / MOCVD , enabling low-cost fabrication. Afterwards, taking advantage of the low "on" resistance and large on/off ratio, a high-performance zero-power RF switch was demonstrated based on MoS 2 or h-BN atomristors, indicating a new application of memristors for 5G , 6G and THz communication and connectivity systems. [ 74 ] [ 75 ] In 2020, an atomistic understanding of the conductive virtual point mechanism was elucidated in an article in Nature Nanotechnology. [ 76 ] The ferroelectric memristor [ 77 ] is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two-order-of-magnitude resistance variation: R OFF ≫ R ON (an effect called tunnel electroresistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither R ON nor R OFF , but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved. In 2013, Ageev, Blinov et al. [ 78 ] reported observing a memristor effect in structures based on vertically aligned carbon nanotubes while studying bundles of CNTs with a scanning tunneling microscope . Later it was found [ 79 ] that CNT memristive switching is observed when a nanotube has a non-uniform elastic strain ΔL₀. It was shown that the memristive switching mechanism of a strained CNT is based on the formation and subsequent redistribution of non-uniform elastic strain and a piezoelectric field E def in the nanotube under the influence of an external electric field E ( x , t ). Biomaterials have been evaluated for use in artificial synapses and have shown potential for application in neuromorphic systems.
[ 80 ] In particular, the feasibility of using a collagen‐based biomemristor as an artificial synaptic device has been investigated, [ 81 ] whereas a synaptic device based on lignin demonstrated rising or falling current with consecutive voltage sweeps depending on the sign of the voltage; [ 82 ] furthermore, a natural silk fibroin demonstrated memristive properties, [ 83 ] and spin-memristive systems based on biomolecules are also being studied. [ 84 ] In 2012, Sandro Carrara and co-authors proposed the first biomolecular memristor, with the aim of realizing highly sensitive biosensors. [ 85 ] Since then, several memristive sensors have been demonstrated. [ 86 ] Chen and Wang, researchers at disk-drive manufacturer Seagate Technology , described three examples of possible magnetic memristors. [ 87 ] In one device, resistance occurs when the spin of electrons in one section of the device points in a different direction from those in another section, creating a "domain wall", a boundary between the two sections. Electrons flowing into the device have a certain spin, which alters the device's magnetization state. Changing the magnetization, in turn, moves the domain wall and changes the resistance. The work's significance led to an interview by IEEE Spectrum . [ 88 ] A first experimental proof of the spintronic memristor based on domain wall motion by spin currents in a magnetic tunnel junction was given in 2011. [ 89 ] The magnetic tunnel junction has been proposed to act as a memristor through several potentially complementary mechanisms, both extrinsic (redox reactions, charge trapping/detrapping and electromigration within the barrier) and intrinsic ( spin-transfer torque ). Based on research performed between 1999 and 2003, Bowen et al. published experiments in 2006 on a magnetic tunnel junction (MTJ) endowed with bi-stable spin-dependent states [ 90 ] ( resistive switching ). The MTJ consists of a SrTiO3 (STO) tunnel barrier that separates a half-metallic oxide LSMO electrode and a ferromagnetic metal CoCr electrode. The MTJ's usual two device resistance states, characterized by a parallel or antiparallel alignment of electrode magnetization, are altered by applying an electric field. When the electric field is applied from the CoCr to the LSMO electrode, the tunnel magnetoresistance (TMR) ratio is positive. When the direction of the electric field is reversed, the TMR is negative. In both cases, large amplitudes of TMR on the order of 30% are found. Since a fully spin-polarized current flows from the half-metallic LSMO electrode, within the Julliere model , this sign change suggests a sign change in the effective spin polarization of the STO/CoCr interface. The origin of this multistate effect lies in the observed migration of Cr into the barrier and its oxidation state. The sign change of TMR can originate from modifications to the STO/CoCr interface density of states, as well as from changes to the tunneling landscape at the STO/CoCr interface induced by CrOx redox reactions. Reports of memristive switching within MgO-based MTJs appeared starting in 2008 [ 91 ] and 2009. [ 92 ] While the drift of oxygen vacancies within the insulating MgO layer has been proposed to describe the observed memristive effects, [ 92 ] another explanation could be charge trapping/detrapping on the localized states of oxygen vacancies [ 93 ] and its impact [ 94 ] on spintronics.
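For context, the Julliere model invoked above relates the TMR ratio to the effective spin polarizations P1 and P2 of the two electrode/barrier interfaces. Restating that standard formula briefly (a textbook relation, not a result of the cited work itself):

TMR = (R_AP − R_P) / R_P = 2 P1 P2 / (1 − P1 P2)

With P1 fixed by the half-metallic LSMO electrode, a negative TMR requires P2, the effective polarization of the STO/CoCr interface, to change sign, which is the inference drawn in the experiments described above.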
These observations highlight the importance of understanding what role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity [ 95 ] or multiferroicity. [ 96 ] The magnetization state of an MTJ can be controlled by spin-transfer torque , and can thus, through this intrinsic physical mechanism, exhibit memristive behavior. This spin torque is induced by current flowing through the junction, and leads to an efficient means of achieving an MRAM . However, the length of time the current flows through the junction determines the amount of current needed, i.e., charge is the key variable. [ 97 ] The combination of intrinsic (spin-transfer torque) and extrinsic (resistive switching) mechanisms naturally leads to a second-order memristive system described by the state vector x = ( x 1 , x 2 ), where x 1 describes the magnetic state of the electrodes and x 2 denotes the resistive state of the MgO barrier. In this case the change of x 1 is current-controlled (spin torque is due to a high current density) whereas the change of x 2 is voltage-controlled (the drift of oxygen vacancies is due to high electric fields). The presence of both effects in a memristive magnetic tunnel junction led to the idea of a nanoscopic synapse-neuron system. [ 98 ] A fundamentally different mechanism for memristive behavior has been proposed by Pershin and Di Ventra . [ 99 ] [ 100 ] The authors show that certain types of semiconductor spintronic structures belong to a broad class of memristive systems as defined by Chua and Kang. [ 2 ] The mechanism of memristive behavior in such structures is based entirely on the electron spin degree of freedom, which allows for more convenient control than the ionic transport in nanostructures. When an external control parameter (such as voltage) is changed, the adjustment of electron spin polarization is delayed because of diffusion and relaxation processes, causing hysteresis. This result was anticipated in the study of spin extraction at semiconductor/ferromagnet interfaces, [ 101 ] but was not described in terms of memristive behavior. On a short time scale, these structures behave almost as an ideal memristor. [ 1 ] This result broadens the possible range of applications of semiconductor spintronics and represents a step toward future practical applications. In 2017, Kris Campbell formally introduced the self-directed channel (SDC) memristor. [ 102 ] The SDC device is the first memristive device available commercially to researchers, students and electronics enthusiasts worldwide. [ 103 ] The SDC device is operational immediately after fabrication. In the Ge 2 Se 3 active layer, Ge-Ge homopolar bonds are found and switching occurs. The three layers consisting of Ge 2 Se 3 /Ag/Ge 2 Se 3 , directly below the top tungsten electrode, mix together during deposition and jointly form the silver-source layer. A layer of SnSe lies between these two layers, ensuring that the silver-source layer is not in direct contact with the active layer. Since silver does not migrate into the active layer at high temperatures, and the active layer maintains a high glass transition temperature of about 350 °C (662 °F), the device has significantly higher processing and operating temperatures of 250 °C (482 °F) and at least 150 °C (302 °F), respectively. These processing and operating temperatures are higher than those of most ion-conducting chalcogenide device types, including the S-based glasses (e.g.
GeS) that need to be photodoped or thermally annealed. These factors allow the SDC device to operate over a wide range of temperatures, including long-term continuous operation at 150 °C (302 °F). There also exist implementations of memristors with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Such memristors use a memristance dependent on the history of the flux and charge, and they can merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. Time-integrated forming-free (TiF) memristors reveal a hysteretic flux-charge curve, and likewise a hysteretic current-voltage curve, each with two distinguishable branches in the positive bias range and two in the negative bias range. The memristance state of a TiF memristor can be controlled by both the flux and the charge [DOI: 10.1063/1.4775718]. A TiF memristor was first demonstrated by Heidemarie Schmidt and her team in 2011 [DOI: 10.1063/1.3601113]. This TiF memristor is composed of a BiFeO 3 thin film between metallically conducting electrodes, one gold, the other platinum. The hysteretic flux-charge curve of the TiF memristor changes its slope continuously in one branch in the positive and one branch in the negative bias range (the write branches) and has a constant slope in one branch in the positive and one branch in the negative bias range (the read branches) [arXiv:2403.20051]. According to Leon O. Chua, [Reference 1: 10.1.1.189.3614 ] the slope of the flux-charge curve corresponds to the memristance of a memristor or to its internal state variables. TiF memristors can thus be considered memristors with a constant memristance in the two read branches and a reconfigurable memristance in the two write branches. The physical memristor model which describes the hysteretic current-voltage curves of the TiF memristor implements static and dynamic internal state variables in the two read branches and in the two write branches [arXiv:2402.10358]. The static and dynamic internal state variables of non-linear memristors can be used to implement operations on non-linear memristors representing linear, non-linear, and even transcendental (e.g. exponential or logarithmic) input-output functions. The transport characteristics of the TiF memristor in the small-current, small-voltage range are non-linear. This non-linearity compares well with the non-linear small-signal characteristics of the past and present building blocks of the arithmetic logic unit of von Neumann computers, i.e. of vacuum tubes and of transistors. In contrast to vacuum tubes and transistors, the signal output of hysteretic flux-charge memristors, i.e. of TiF memristors, is not lost when the operating power is switched off before the signal output has been stored to memory. Therefore, hysteretic flux-charge memristors are said to merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. The transport characteristics of hysteretic current-voltage memristors in the small-current, small-voltage range are linear.
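The slope correspondence invoked above is the standard definition of a charge-controlled memristor; restated briefly in Chua's notation (a summary of the cited definition, not a new result):

φ = φ̂(q),  M(q) = dφ/dq,  v(t) = M(q(t)) · i(t)

A read branch with constant slope dφ/dq thus corresponds to a fixed memristance M, while a write branch whose slope changes continuously corresponds to a memristance being reprogrammed by the applied flux and charge.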
This linearity explains why hysteretic current-voltage memristors are well-established memory units and why they cannot merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [arXiv:2403.20051]. Memristors remain a laboratory curiosity, as yet made in insufficient numbers to gain any commercial applications. A potential application of memristors is in analog memories for superconducting quantum computers. [ 12 ] Memristors can potentially be fashioned into non-volatile solid-state memory , which could allow greater data density than hard drives with access times similar to DRAM , replacing both components. [ 31 ] HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter, [ 104 ] and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm 3 ). [ 105 ] In May 2008, HP reported that its device then reached about one-tenth the speed of DRAM. [ 106 ] The devices' resistance would be read with alternating current so that the stored value would not be affected. [ 107 ] In May 2012, it was reported that the access time had been improved to 90 nanoseconds, nearly one hundred times faster than contemporaneous flash memory, while consuming just one percent as much energy. [ 108 ] Memristors have applications in programmable logic , [ 109 ] signal processing , [ 110 ] super-resolution imaging , [ 111 ] physical neural networks , [ 112 ] control systems , [ 113 ] reconfigurable computing , [ 114 ] in-memory computing , [ 115 ] brain–computer interfaces , [ 116 ] and RFID . [ 117 ] Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation. [ 118 ] Several early works have been reported in this direction, [ 119 ] [ 120 ] and the basic idea is sketched after this passage. In 2009, a simple electronic circuit [ 121 ] consisting of an LC network and a memristor was used to model experiments on adaptive behavior of unicellular organisms. [ 122 ] It was shown that, subjected to a train of periodic pulses, the circuit learns and anticipates the next pulse, similar to the behavior of the slime mold Physarum polycephalum , where the viscosity of channels in the cytoplasm responds to periodic environmental changes. [ 122 ] Applications of such circuits may include, e.g., pattern recognition . Under the DARPA SyNAPSE project, HP Labs, in collaboration with the Boston University Neuromorphics Lab, has been developing neuromorphic architectures that may be based on memristive systems. In 2010, Versace and Chandler described the MoNETA (Modular Neural Exploring Traveling Agent) model. [ 123 ] MoNETA is the first large-scale neural network model to implement whole-brain circuits to power a virtual and robotic agent using memristive hardware. [ 124 ] Application of the memristor crossbar structure in the construction of an analog soft computing system was demonstrated by Merrikh-Bayat and Shouraki. [ 125 ] In 2011, they showed [ 126 ] how memristor crossbars can be combined with fuzzy logic to create an analog memristive neuro-fuzzy computing system with fuzzy input and output terminals. Learning is based on the creation of fuzzy relations inspired by the Hebbian learning rule . In 2013, Leon Chua published a tutorial underlining the broad span of complex phenomena and applications that memristors cover, and how they can be used as non-volatile analog memories and can mimic classic habituation and learning phenomena. [ 127 ]
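The stateful implication logic mentioned above can be illustrated abstractly. The following Python sketch idealizes memristor states as Booleans and shows how the IMPLY operation studied in early works yields NAND, and hence functional completeness; all device physics is glossed over, and the function names are inventions of this sketch.

    # Stateful IMPLY logic, idealized: each memristor stores one bit as its
    # resistance state, and IMPLY overwrites the second operand in place.
    def imply(p: bool, q: bool) -> bool:
        return (not p) or q

    # NAND from two IMPLY steps plus one working memristor initialized to
    # logic 0, demonstrating functional completeness.
    def nand(p: bool, q: bool) -> bool:
        s = False          # working memristor, initialized to 0
        s = imply(q, s)    # s becomes NOT q
        s = imply(p, s)    # s becomes (NOT p) OR (NOT q) = NAND(p, q)
        return s

    for p in (False, True):
        for q in (False, True):
            assert nand(p, q) == (not (p and q))

In a physical crossbar, each variable would be the resistance state of one memristor, and each imply step would be performed in place by applying suitable voltages to two devices sharing a load resistor.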
The memistor and memtransistor are transistor-based devices which include memristor function. In 2009, Di Ventra , Pershin, and Chua extended [ 128 ] the notion of memristive systems to capacitive and inductive elements in the form of memcapacitors and meminductors, whose properties depend on the state and history of the system, further extended in 2013 by Di Ventra and Pershin. [ 22 ] In September 2014, Mohamed-Salah Abdelouahab , Rene Lozi , and Leon Chua published a general theory of 1st-, 2nd-, 3rd-, and nth-order memristive elements using fractional derivatives . [ 129 ] Sir Humphry Davy is said by some to have performed the first experiments which can be explained by memristor effects as long ago as 1808. [ 20 ] [ 130 ] However, the first device of a related nature to be constructed was the memistor (i.e. memory resistor), a term coined in 1960 by Bernard Widrow to describe a circuit element of an early artificial neural network called ADALINE . A few years later, in 1968, Argall published an article showing the resistance switching effects of TiO 2 , which was later claimed by researchers from Hewlett Packard to be evidence of a memristor. [ 55 ] [ citation needed ] Leon Chua postulated his new two-terminal circuit element in 1971. It was characterized by a relationship between charge and flux linkage as a fourth fundamental circuit element. [ 1 ] Five years later he and his student Sung Mo Kang generalized the theory of memristors and memristive systems, including a property of zero crossing in the Lissajous curve characterizing current vs. voltage behavior. [ 2 ] On May 1, 2008, Strukov, Snider, Stewart, and Williams published an article in Nature identifying a link between the two-terminal resistance switching behavior found in nanoscale systems and memristors. [ 17 ] On 23 January 2009, Di Ventra , Pershin, and Chua extended the notion of memristive systems to capacitive and inductive elements, namely capacitors and inductors , whose properties depend on the state and history of the system. [ 128 ] In July 2014, the MeMOSat/ LabOSat group [ 131 ] (composed of researchers from Universidad Nacional de General San Martín (Argentina) , INTI, CNEA , and CONICET ) put memory devices into low Earth orbit . [ 132 ] Since then, seven missions with different devices [ 133 ] have been performing experiments in low orbit, onboard Satellogic 's Ñu-Sat satellites. [ 134 ] [ 135 ] [ clarification needed ] On 7 July 2015, Knowm Inc announced the commercial availability of Self Directed Channel (SDC) memristors. [ 136 ] These devices remain available in small numbers. On 13 July 2018, MemSat (Memristor Satellite) was launched to fly a memristor evaluation payload. [ 137 ] In 2021, Jennifer Rupp and Martin Bazant of MIT started a "Lithionics" research programme to investigate applications of lithium beyond its use in battery electrodes , including lithium oxide -based memristors in neuromorphic computing . [ 138 ] [ 139 ] In May 2023, TECHiFAB GmbH [https://techifab.com/] announced the commercial availability of TiF memristors. [arXiv: 2403.20051, arXiv: 2402.10358] These TiF memristors remain available in small and medium quantities. In the September 2023 issue of Science , Chinese scientists Wenbin Zhang et al. described the development and testing of a memristor-based integrated circuit . [ 140 ]
https://en.wikipedia.org/wiki/Meminductor
Memo motion or spaced-shot photography is a tool of time and motion study that analyzes long operations by using a camera. It was developed in 1946 by Marvin E. Mundel at Purdue University , who first used it to save film material while planning studies of kitchen work. Mundel published the method in 1947, with several studies, in his textbook Systematic Motion and Time Study . [ 1 ] A study identified several advantages of Memo-Motion compared with other forms of time and motion study. [ 2 ] As a versatile tool of work study it was used in the US to some extent, but rarely in Europe and other industrial countries, mainly because of difficulties procuring the required cameras. Today Memo-Motion could see renewed use, because more and more workplaces present conditions that it can explore. The Scottish motion study pioneer Anne Gillespie Shaw used Memomotion in a number of films commissioned from her company, The Anne Shaw Organisation, for commercial and public sector organisations.
https://en.wikipedia.org/wiki/Memo_motion
In computing , memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls to pure functions and returning the cached result when the same inputs occur again. Memoization has also been used in other contexts (and for purposes other than speed gains), such as in simple mutually recursive descent parsing. [ 1 ] It is a type of caching , distinct from other forms of caching such as buffering and page replacement . In the context of some logic programming languages, memoization is also known as tabling . [ 2 ] The term memoization was coined by Donald Michie in 1968 [ 3 ] and is derived from the Latin word memorandum ('to be remembered'), usually truncated as memo in American English, and thus carries the meaning of 'turning [the results of] a function into something to be remembered'. While memoization might be confused with memorization (because they are etymological cognates ), memoization has a specialized meaning in computing. A memoized function "remembers" the results corresponding to some set of specific inputs. Subsequent calls with remembered inputs return the remembered result rather than recalculating it, thus eliminating the primary cost of a call with given parameters from all but the first call made to the function with those parameters. The set of remembered associations may be a fixed-size set controlled by a replacement algorithm or a fixed set, depending on the nature of the function and its use. A function can only be memoized if it is referentially transparent ; that is, only if calling the function has exactly the same effect as replacing that function call with its return value. (Special case exceptions to this restriction exist, however.) While related to lookup tables , since memoization often uses such tables in its implementation, memoization populates its cache of results transparently on the fly, as needed, rather than in advance. Memoized functions are optimized for speed in exchange for a higher use of computer memory space . The time/space "cost" of algorithms has a specific name in computing: computational complexity . All functions have a computational complexity in time (i.e. they take time to execute) and in space . Although a space–time tradeoff occurs (i.e., space used is speed gained), this differs from some other optimizations that involve time-space trade-off, such as strength reduction , in that memoization is a run-time rather than compile-time optimization. Moreover, strength reduction potentially replaces a costly operation such as multiplication with a less costly operation such as addition, and the results in savings can be highly machine-dependent (non-portable across machines), whereas memoization is a more machine-independent, cross-platform strategy. Consider the following pseudocode function to calculate the factorial of n : For every integer n such that n ≥ 0 , the final result of the function factorial is invariant ; if invoked as x = factorial(3) , the result is such that x will always be assigned the value 6. The non-memoized implementation above, given the nature of the recursive algorithm involved, would require n + 1 invocations of factorial to arrive at a result, and each of these invocations, in turn, has an associated cost in the time it takes the function to return the value computed. 
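As a concrete stand-in for the factorial pseudocode referenced above, a plain (non-memoized) recursive version might look like this in Python:

    def factorial(n):
        # Plain recursive factorial: 0! = 1 and n! = n * (n-1)!
        if n == 0:
            return 1
        return n * factorial(n - 1)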
Depending on the machine, the cost of each invocation might be the sum of:

1. The cost to set up the functional call stack frame.
2. The cost to compare n to 0.
3. The cost to subtract 1 from n.
4. The cost to set up the recursive call stack frame.
5. The cost to multiply the result of the recursive call to factorial by n.
6. The cost to store the return result so that it may be used by the calling context.

In a non-memoized implementation, every top-level call to factorial includes the cumulative cost of steps 2 through 6 proportional to the initial value of n . A memoized version of the factorial function follows (see the sketch after this passage). In this particular example, if factorial is first invoked with 5, and then invoked later with any value less than or equal to five, those return values will also have been memoized, since factorial will have been called recursively with the values 5, 4, 3, 2, 1, and 0, and the return values for each of those will have been stored. If it is then called with a number greater than 5, such as 7, only 2 recursive calls will be made (7 and 6), and the value for 5! will have been stored from the previous call. In this way, memoization allows a function to become more time-efficient the more often it is called, thus resulting in eventual overall speed-up. Memoization is heavily used in compilers for functional programming languages, which often use a call-by-name evaluation strategy. To avoid overhead with calculating argument values, compilers for these languages heavily use auxiliary functions called thunks to compute the argument values, and memoize these functions to avoid repeated calculations. While memoization may be added to functions internally and explicitly by a computer programmer in much the same way the memoized version of factorial is implemented, referentially transparent functions may also be automatically memoized externally . [ 1 ] The techniques employed by Peter Norvig have application not only in Common Lisp (the language in which his paper demonstrated automatic memoization), but also in various other programming languages . Applications of automatic memoization have also been formally explored in the study of term rewriting [ 4 ] and artificial intelligence . [ 5 ] In programming languages where functions are first-class objects (such as Lua , Python , or Perl [ 6 ] ), automatic memoization can be implemented by replacing (at run-time) a function with its calculated value once a value has been calculated for a given set of parameters. The function that does this value-for-function-object replacement can generically wrap any referentially transparent function. Consider the following strategy (assuming that functions are first-class values): in order to call an automatically memoized version of factorial , rather than calling factorial directly, code invokes memoized-call(factorial)( n ) . Each such call first checks to see if a holder array has been allocated to store results, and if not, attaches that array. If no entry exists at the position values[arguments] (where arguments are used as the key of the associative array), a real call is made to factorial with the supplied arguments. Finally, the entry in the array at the key position is returned to the caller. The above strategy requires explicit wrapping at each call to a function that is to be memoized. In those languages that allow closures , memoization can be effected implicitly via a functor factory that returns a wrapped memoized function object in a decorator pattern . Rather than call factorial , a new function object memfact is created by invoking construct-memoized-functor(factorial) . This assumes that the function factorial has already been defined before the call to construct-memoized-functor is made.
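The following Python sketches are hypothetical stand-ins for the pseudocode this passage describes: a memoized factorial, an explicit memoized_call wrapper, and a construct_memoized_functor factory in the decorator pattern. The underscore names mirror the article's hyphenated pseudocode names and are otherwise inventions of this sketch.

    # 1. A memoized factorial: remembered inputs are served from a table.
    results = {}

    def factorial(n):
        if n not in results:
            results[n] = 1 if n == 0 else n * factorial(n - 1)
        return results[n]

    print(factorial(5))  # computes and stores 0! .. 5!
    print(factorial(7))  # only two further recursive calls (7 and 6)

    # 2. Explicit wrapping at the call site: a holder table is attached
    #    to the function on first use and consulted before any real call.
    def memoized_call(fn):
        if not hasattr(fn, "values"):
            fn.values = {}                             # attach holder table
        def wrapper(*arguments):
            if arguments not in fn.values:
                fn.values[arguments] = fn(*arguments)  # real call
            return fn.values[arguments]
        return wrapper

    def plain_factorial(n):
        return 1 if n == 0 else n * plain_factorial(n - 1)

    print(memoized_call(plain_factorial)(5))  # -> 120

    # 3. A functor factory in the decorator pattern. The returned functor
    #    keeps its own alias (fn) to the original function, so rebinding
    #    the public name below cannot cause endless recursion.
    def construct_memoized_functor(fn):
        cache = {}
        def memoized(*args):
            if args not in cache:
                cache[args] = fn(*args)  # forward to the captured original
            return cache[args]
        return memoized

    memfact = construct_memoized_functor(plain_factorial)
    print(memfact(5))  # -> 120

    # Same-name replacement, as in languages like Lua: recursive calls in
    # plain_factorial resolve through the global name, so after rebinding,
    # inner recursive calls are memoized too.
    plain_factorial = construct_memoized_functor(plain_factorial)
    print(plain_factorial(6))  # -> 720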
From this point forward, memfact( n ) is called whenever the factorial of n is desired. In languages such as Lua, more sophisticated techniques exist which allow a function to be replaced by a new function with the same name, which would permit factorial = construct-memoized-functor(factorial) . Essentially, such techniques involve attaching the original function object to the created functor and forwarding calls to the original function being memoized via an alias when a call to the actual function is required (to avoid endless recursion ), as illustrated in the sketch above. (Note: Some of the steps involved may be implicitly managed by the implementation language and are provided for illustration.) When a top-down parser tries to parse an ambiguous input with respect to an ambiguous context-free grammar (CFG), it may need an exponential number of steps (with respect to the length of the input) to try all alternatives of the CFG in order to produce all possible parse trees. This would eventually require exponential memory space. Memoization was explored as a parsing strategy in 1991 by Peter Norvig, who demonstrated that an algorithm similar to the use of dynamic programming and state-sets in Earley's algorithm (1970), and tables in the CYK algorithm of Cocke, Younger and Kasami, could be generated by introducing automatic memoization to a simple backtracking recursive descent parser to solve the problem of exponential time complexity. [ 1 ] The basic idea in Norvig's approach is that when a parser is applied to the input, the result is stored in a memotable for subsequent reuse if the same parser is ever reapplied to the same input. Richard Frost and Barbara Szydlowski also used memoization to reduce the exponential time complexity of parser combinators , describing the result as a memoizing purely functional top-down backtracking language processor. [ 7 ] Frost showed that basic memoized parser combinators can be used as building blocks to construct complex parsers as executable specifications of CFGs. [ 8 ] [ 9 ] Memoization was again explored in the context of parsing in 1995 by Mark Johnson and Jochen Dörre. [ 10 ] [ 11 ] In 2002, it was examined in considerable depth by Bryan Ford in the form called packrat parsing . [ 12 ] In 2007, Frost, Hafiz and Callaghan [ citation needed ] described a top-down parsing algorithm that uses memoization to avoid redundant computations and to accommodate any form of ambiguous CFG in polynomial time ( Θ(n^4) for left-recursive grammars and Θ(n^3) for non-left-recursive grammars). Their top-down parsing algorithm also requires polynomial space for potentially exponential ambiguous parse trees by 'compact representation' and 'local ambiguities grouping'. Their compact representation is comparable with Tomita's compact representation of bottom-up parsing . [ 13 ] Their use of memoization is not only limited to retrieving the previously computed results when a parser is applied to the same input position repeatedly (which is essential for the polynomial time requirement); it is also specialized to perform additional tasks. Frost, Hafiz and Callaghan also described the implementation of the algorithm in PADL’08 [ citation needed ] as a set of higher-order functions (called parser combinators ) in Haskell , which enables the construction of directly executable specifications of CFGs as language processors.
Their polynomial algorithm's ability to accommodate 'any form of ambiguous CFG' with top-down parsing is vital with respect to syntax and semantics analysis during natural language processing . The X-SAIGA site has more about the algorithm and implementation details. While Norvig increased the power of the parser through memoization, the augmented parser was still as time complex as Earley's algorithm, which demonstrates a case of the use of memoization for something other than speed optimization. Johnson and Dörre [ 11 ] demonstrate another such non-speed related application of memoization: the use of memoization to delay linguistic constraint resolution to a point in a parse where sufficient information has been accumulated to resolve those constraints. By contrast, in the speed optimization application of memoization, Ford demonstrated that memoization could guarantee that parsing expression grammars could parse in linear time even those languages that resulted in worst-case backtracking behavior. [ 12 ] Consider the following grammar :

S → (A c) | (B d)
A → X (a|b)
B → X b
X → x [X]

(Notation note: In the above example, the production S → (A c ) | (B d ) reads: "An S is either an A followed by a c or a B followed by a d ." The production X → x [X] reads "An X is an x followed by an optional X .") This grammar generates one of the following three variations of string : xac , xbc , or xbd (where x here is understood to mean one or more x 's .) Next, consider how this grammar, used as a parse specification, might effect a top-down, left-right parse of the string xxxxxbd : the parser first tries S → A c, descending into A and from there into X , which consumes all five x 's; A then matches the b , but the expected c is not found (the next character is d ), so this alternative fails. The parser backs up to the start of the input and tries S → B d, and B again descends into X , re-consuming the same five x 's, before matching the b and the final d , at which point the parse succeeds. The key concept here is inherent in the phrase again descends into X . The process of looking forward, failing, backing up, and then retrying the next alternative is known in parsing as backtracking, and it is primarily backtracking that presents opportunities for memoization in parsing. Consider a function RuleAcceptsSomeInput(Rule, Position, Input) , where Rule is the name of the rule under consideration, Position is the offset currently under consideration in the input, and Input is the input under consideration. Let the return value of the function RuleAcceptsSomeInput be the length of the input accepted by Rule , or 0 if that rule does not accept any input at that offset in the string. In a backtracking scenario with such memoization, when X is first applied at offset 0, the length it accepts (here, 5) is memoized against X and that offset; when the parser fails on A c and retries B d, B 's descent into X at offset 0 simply returns the memoized length rather than re-parsing. In the above example, one or many descents into X may occur, allowing for strings such as xxxxxxxxxxxxxxxxbd . In fact, there may be any number of x 's before the b . While the call to S must recursively descend into X as many times as there are x 's, B will never have to descend into X at all, since the return value of RuleAcceptsSomeInput( X , 0, xxxxxxxxxxxxxxxxbd ) will be 16 (in this particular case). Those parsers that make use of syntactic predicates are also able to memoize the results of predicate parses as well, thereby reducing such constructions to one descent into A . If a parser builds a parse tree during a parse, it must memoize not only the length of the input that matches at some offset against a given rule, but also must store the sub-tree that is generated by that rule at that offset in the input, since subsequent calls to the rule by the parser will not actually descend and rebuild that tree. For the same reason, memoized parser algorithms that generate calls to external code (sometimes called a semantic action routine ) when a rule matches must use some scheme to ensure that such rules are invoked in a predictable order.
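A compact Python rendering of this memoized-descent idea, under the assumption that the grammar is the one shown above: each rule function returns the accepted length at an offset, or 0, mirroring RuleAcceptsSomeInput, and lru_cache plays the role of the memotable for X.

    from functools import lru_cache

    INPUT = "xxxxxbd"

    @lru_cache(maxsize=None)
    def X(pos):
        # X -> x [X]: one or more x's
        if pos < len(INPUT) and INPUT[pos] == "x":
            return 1 + X(pos + 1)
        return 0

    def A(pos):
        # A -> X (a | b)
        n = X(pos)
        if n and pos + n < len(INPUT) and INPUT[pos + n] in "ab":
            return n + 1
        return 0

    def B(pos):
        # B -> X b; X's result at this offset is reused from the memotable
        n = X(pos)
        if n and pos + n < len(INPUT) and INPUT[pos + n] == "b":
            return n + 1
        return 0

    def S(pos):
        # S -> (A c) | (B d); failure of the first alternative backtracks
        for rule, term in ((A, "c"), (B, "d")):
            n = rule(pos)
            if n and pos + n < len(INPUT) and INPUT[pos + n] == term:
                return n + 1
        return 0

    print(S(0) == len(INPUT))  # True: xxxxxbd parses via B d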
Since, for any given parser capable of backtracking or syntactic predicates, not every grammar will need backtracking or predicate checks, the overhead of storing each rule's parse results against every offset in the input (and storing the parse tree if the parsing process does that implicitly) may actually slow down a parser. This effect can be mitigated by explicit selection of those rules the parser will memoize. [ 14 ]
https://en.wikipedia.org/wiki/Memoization
In immunology , a memory B cell ( MBC ) is a type of B lymphocyte that forms part of the adaptive immune system . These cells develop within germinal centers of the secondary lymphoid organs . Memory B cells circulate in the bloodstream in a quiescent state, sometimes for decades. [ 1 ] Their function is to memorize the characteristics of the antigen that activated their parent B cell during initial infection, such that if the memory B cell later encounters the same antigen , it triggers an accelerated and robust secondary immune response . [ 2 ] [ 3 ] Memory B cells have B cell receptors (BCRs) on their cell membrane, identical to the one on their parent cell, that allow them to recognize antigen and mount a specific antibody response. [ 4 ] In a T-cell dependent development pathway, naïve follicular B cells are activated by antigen-presenting follicular B helper T cells (T FH ) during the initial infection, or primary immune response . [ 3 ] Naïve B cells circulate through follicles in secondary lymphoid organs (i.e. spleen and lymph nodes ) where they can be activated by a floating foreign peptide brought in through the lymph or by antigen presented by antigen-presenting cells (APCs) such as dendritic cells (DCs). [ 5 ] B cells may also be activated by binding foreign antigen in the periphery, after which they move into the secondary lymphoid organs. [ 3 ] A signal transduced by the binding of the peptide to the B cell causes the cells to migrate to the edge of the follicle bordering the T cell area. [ 5 ] The B cells internalize the foreign peptides, break them down, and express them on class II major histocompatibility complexes (MHCII), which are cell surface proteins. Within the secondary lymphoid organs, most of the B cells will enter B-cell follicles where a germinal center will form. Most B cells will eventually differentiate into plasma cells or memory B cells within the germinal center. [ 3 ] [ 6 ] The T FH cells that express T cell receptors (TCRs) cognate to the peptide (i.e. specific for the peptide-MHCII complex) at the border of the B cell follicle and T-cell zone will bind to the MHCII ligand. The T cells will then express the CD40 ligand (CD40L) molecule and will begin to secrete cytokines which cause the B cells to proliferate and to undergo class switch recombination , a change in the B cell's genetic coding that alters its immunoglobulin type. [ 7 ] [ 8 ] Class switching allows memory B cells to secrete different types of antibodies in future immune responses. [ 3 ] The B cells then either differentiate into plasma cells , germinal center B cells, or memory B cells depending on the expressed transcription factors . The activated B cells that expressed the transcription factor Bcl-6 will enter B-cell follicles and undergo germinal center reactions. [ 7 ] Once inside the germinal center, the B cells undergo proliferation, followed by mutation of the genetic coding region of their BCR , a process known as somatic hypermutation . [ 3 ] The mutations will either increase or decrease the affinity of the surface receptor for a particular antigen, a progression called affinity maturation . After acquiring these mutations, the receptors on the surface of the B cells (B cell receptors) are tested within the germinal center for their affinity to the current antigen. [ 9 ] B cell clones with mutations that have increased the affinity of their surface receptors receive survival signals via interactions with their cognate T FH cells.
[ 2 ] [ 3 ] [ 10 ] The B cells that do not have high enough affinity to receive these survival signals, as well as B cells that are potentially auto-reactive, will be selected against and die through apoptosis. [ 6 ] These processes increase variability at the antigen binding sites such that every newly generated B cell has a unique receptor. [ 11 ] After differentiation, memory B cells relocate to the periphery of the body, where they will be more likely to encounter antigen in the event of a future exposure. [ 6 ] [ 2 ] [ 3 ] Many of the circulating B cells become concentrated in areas of the body that have a high likelihood of coming into contact with antigen, such as the Peyer's patches . The process of differentiation into memory B cells within the germinal center is not yet fully understood. [ 3 ] Some researchers hypothesize that differentiation into memory B cells occurs randomly. [ 6 ] [ 4 ] Other hypotheses propose that the transcription factor NF-κB and the cytokine IL-24 are involved in the process of differentiation into memory B cells. [ 11 ] [ 3 ] An additional hypothesis states that the B cells with relatively lower affinity for antigen will become memory B cells, in contrast to B cells with relatively higher affinity that will become plasma cells. Not all B cells present in the body have undergone somatic hypermutation. IgM+ memory B cells that have not undergone class switch recombination demonstrate that memory B cells can be produced independently of the germinal centers. Upon infection with a pathogen, many B cells will differentiate into plasma cells , also called effector B cells, which produce a first wave of protective antibodies and help clear infection. [ 6 ] [ 2 ] Plasma cells secrete antibodies specific for the pathogen, but they cannot respond upon secondary exposure. A fraction of the B cells with BCRs cognate to the antigen differentiate into memory B cells that survive long-term in the body. [ 12 ] The memory B cells can maintain their BCR expression and will be able to respond quickly upon secondary exposure. [ 6 ] The memory B cells produced during the primary immune response are specific to the antigen involved during the first exposure. In a secondary response, the memory B cells specific to the antigen or similar antigens will respond. [ 3 ] When memory B cells reencounter their specific antigen, they proliferate and differentiate into plasma cells, which then respond to and clear the antigen. [ 3 ] The memory B cells that do not differentiate into plasma cells at this point can reenter the germinal centers to undergo further class switching or somatic hypermutation for further affinity maturation. [ 3 ] Differentiation of memory B cells into plasma cells is far faster than differentiation by naïve B cells, which allows memory B cells to produce a more efficient secondary immune response. [ 4 ] The efficiency and accumulation of the memory B cell response is the foundation for vaccines and booster shots. [ 4 ] [ 3 ] The phenotype of memory cells that prognosticates a plasma-cell or germinal-center-cell fate was discovered a few years ago. Based on expression microarray comparisons between memory B cells and naïve B cells, several surface proteins, such as CD80 , PD-L2 and CD73 , were identified that are only expressed on memory B cells; these markers also serve to divide memory B cells into multiple phenotypic subsets.
[ 13 ] Moreover, it has been shown that the memory cells that express CD80, PD-L2 and CD73 are more likely to become plasma cells. On the other hand, the cells that lack these markers are more likely to form germinal center cells. IgM+ memory B cells do not express CD80 or CD73, whereas IgG+ cells express them. Moreover, IgG+ cells are more likely to differentiate into antibody-secreting cells. [ 14 ] Memory B cells can survive for decades, which gives them the capacity to respond to multiple exposures to the same antigen. [ 3 ] The long-lasting survival is hypothesized to be a result of certain anti-apoptosis genes that are more highly expressed in memory B cells than in other subsets of B cells. [ 6 ] Additionally, the memory B cell does not need to have continual interaction with the antigen, nor with T cells, in order to survive long-term. [ 4 ] However, the lifespan of individual memory B cells remains poorly defined, although they have a critical role in long-term immunity. In one study using a B cell receptor (BCR) transgenic system (an H chain transgenic mouse model that lacked secreted Ig and therefore did not deposit Ag-containing immune complexes), it was shown that the number of memory B cells remains constant for a period of around 8–20 weeks after immunization. An experiment in which the cells were treated in vivo with bromodeoxyuridine also estimated the half-life of memory B cells at 8–10 weeks. [ 15 ] In other experiments in mice, it has been shown that the lifespan of memory B cells is at least 9 times greater than the lifespan of a follicular naïve B cell. [ 16 ] Memory B cells are typically distinguished by the cell surface marker CD27, although some subsets do not express CD27. Memory B cells that lack CD27 are generally associated with exhausted B cells or with certain conditions such as HIV infection, lupus, or rheumatoid arthritis. [ 2 ] [ 3 ] Because memory B cells have typically undergone class switching, they can express a range of immunoglobulin molecules, and some specific attributes are associated with particular immunoglobulin molecules. The integration of signalling pathways related to the BCRs and TLRs is important for modulating antibody production by the expanding memory B cells; different factors thus provide the information needed to secrete different types of antibodies. It has been demonstrated that the production of specific IgG1, anaphylactic IgG1 and total IgE depends on the signals produced by TLR2 and MyD88 . Moreover, the signal produced by TLR4 when stimulated by natterins (proteins obtained from T. nattereri fish venom) accelerates the synthesis of IgE, acting as an adjuvant, as shown in an in vivo experiment with mice. [ 17 ] The receptor CCR6 is generally a marker of B cells that will eventually differentiate into MBCs. This receptor detects chemokines , which are chemical messengers that allow the B cell to move within the body. Memory B cells may have this receptor to allow them to move out of the germinal center and into the tissues where they have a higher probability of encountering antigen. [ 6 ] It has been shown that memory B cells have high-level expression of CCR6 as well as an increased chemotactic response to the CCR6 ligand ( CCL20 ) in comparison with naïve B cells.
Nevertheless, the primary humoral response and the maintenance of memory B cells are not affected in CCR6-deficient mice. However, upon re-exposure to the antigen, memory B cells that do not express CCR6 fail to mount an effective secondary response. CCR6 is therefore essential for the ability of memory B cells to be recalled to their cognate antigen, as well as for the appropriate anatomical positioning of these cells. [ 18 ] One subset of cells differentiates from activated B cells into memory B cells before entering the germinal center. B cells that have a high level of interaction with T FH within the B cell follicle have a higher propensity of entering the germinal center. The B cells that develop into memory B cells independently from germinal centers likely experience CD40 and cytokine signaling from T cells. [ 4 ] Class switching can still occur prior to interaction with the germinal center, while somatic hypermutation only occurs after interaction with the germinal center. [ 4 ] The lack of somatic hypermutation is hypothesized to be beneficial; a lower level of affinity maturation means that these memory B cells are less specialized to a specific antigen and may be able to recognize a wider range of antigens. [ 11 ] [ 19 ] [ 4 ] T-independent memory B cells are a subset called B1 cells. These cells generally reside in the peritoneal cavity. When reintroduced to antigen, some of these B1 cells can differentiate into memory B cells without interacting with a T cell. [ 4 ] These B cells produce IgM antibodies to help clear infection. [ 20 ] T-bet memory B cells are a subset found to express the transcription factor T-bet. T-bet is associated with class switching. T-bet B cells are also thought to be important in immune responses against intracellular bacterial and viral infections. [ 21 ] Vaccines are based on the notion of immunological memory . The preventative injection of a non-pathogenic antigen into the organism allows the body to generate a durable immunological memory . The injection of the antigen leads to an antibody response followed by the production of memory B cells. These memory B cells are promptly reactivated upon infection with the antigen and can effectively protect the organism from disease. [ 22 ] Long-lived plasma cells and memory B cells are responsible for the long-term humoral immunity elicited by most vaccines. An experiment was carried out to observe the longevity of memory B cells after vaccination, in this case with the smallpox vaccine (DryVax), chosen because smallpox has been eradicated, making immune memory to smallpox a useful benchmark for understanding the longevity of memory B cells in the absence of restimulation. The study concluded that the specific memory B cells are maintained for decades, indicating that immunological memory is long-lived in the B cell compartment after a robust initial antigen exposure. [ 23 ]
https://en.wikipedia.org/wiki/Memory_B_cell
For computer memory , Memory ProteXion , found in IBM xSeries servers, is a form of " redundant bit steering ". This technology uses redundant bits in a data packet to recover from a DIMM failure. Memory ProteXion differs from normal ECC error correction in that it uses only 6 bits for ECC, leaving 2 bits spare. These 2 bits can be used to re-route data from failed memory, much like a hot spare on a RAID . The ECC is used to reconstruct the data, and the extra bits to store it. A single failure does not cause a predictive failure analysis (PFA) to be issued on the DIMM, but two or more failures will issue a PFA to inform the system administrator that a replacement is needed.
https://en.wikipedia.org/wiki/Memory_ProteXion
Memory T cells are a subset of T lymphocytes that might have some of the same functions as memory B cells . Their lineage is unclear. Antigen -specific memory T cells specific to viruses or other microbial molecules can be found in both the central memory T cell (T CM ) and effector memory T cell (T EM ) subsets. Although most information is currently based on observations in the cytotoxic T cell ( CD8 -positive) subset, similar populations appear to exist for both the helper T cells ( CD4 -positive) and the cytotoxic T cells. The primary function of memory cells is an augmented immune response after reactivation of those cells by reintroduction of the relevant pathogen into the body. This field is intensively studied, and some information may not yet be available. Clones of memory T cells expressing a specific T cell receptor can persist for decades in the body. Since memory T cells have shorter half-lives than naïve T cells do, continuous replication and replacement of old cells are likely involved in the maintenance process. [ 3 ] Currently, the mechanism behind memory T cell maintenance is not fully understood. Activation through the T cell receptor may play a role. [ 3 ] It has been found that memory T cells can sometimes react to novel antigens, potentially owing to the intrinsic diversity and breadth of T cell receptor binding targets. [ 3 ] These T cells could cross-react with environmental or resident antigens in our bodies (like bacteria in the gut) and proliferate. These events would help maintain the memory T cell population. [ 3 ] The cross-reactivity mechanism may be important for memory T cells in the mucosal tissues, since these sites have higher antigen density. [ 3 ] For those resident in blood, bone marrow, lymphoid tissues, and spleen, homeostatic cytokines (including IL-7 and IL-15 ) or major histocompatibility complex II (MHCII) signaling may be more important. [ 3 ] Memory T cells undergo different changes and play different roles in different life stages in humans. At birth and in early childhood, T cells in the peripheral blood are mainly naïve T cells. [ 10 ] Through frequent antigen exposure, the population of memory T cells accumulates. This is the memory generation stage, which lasts from birth to about 20–25 years of age, when our immune system encounters the greatest number of new antigens. [ 3 ] [ 10 ] During the memory homeostasis stage that comes next, the number of memory T cells plateaus and is stabilized by homeostatic maintenance. [ 10 ] At this stage, the immune response shifts more towards maintaining homeostasis, since few new antigens are encountered. [ 10 ] Tumor surveillance also becomes important at this stage. [ 10 ] At later stages of life, at about 65–70 years of age, the immunosenescence stage begins, in which immune dysregulation, decline in T cell function, and increased susceptibility to pathogens are observed. [ 3 ] [ 10 ] As of April 2020, the lineage relationship between effector and memory T cells is unclear. [ 11 ] [ 12 ] [ 13 ] Two competing models exist. One is called the On-Off-On model. [ 12 ] When naive T cells are activated by T cell receptor (TCR) binding to antigen and its downstream signaling pathway, they actively proliferate and form a large clone of effector cells. Effector cells undergo active cytokine secretion and other effector activities.
[ 11 ] After antigen clearance, some of these effector cells form memory T cells, either in a randomly determined manner or selected on the basis of their superior specificity. [ 11 ] These cells would revert from the active effector role to a state more similar to naive T cells and would be "turned on" again upon the next antigen exposure. [ 13 ] This model predicts that effector T cells can transit into memory T cells and survive, retaining the ability to proliferate. [ 11 ] It also predicts that certain gene expression profiles would follow the on-off-on pattern during the naive, effector, and memory stages. [ 13 ] Evidence supporting this model includes the finding of genes related to survival and homing that follow the on-off-on expression pattern, including interleukin-7 receptor alpha (IL-7Rα), Bcl-2, CD62L, and others. [ 13 ] The other model is the developmental differentiation model. [ 12 ] This model argues that the effector cells produced by highly activated naive T cells would all undergo apoptosis after antigen clearance. [ 11 ] Memory T cells are instead produced by naive T cells that are activated but never fully entered the effector stage. [ 11 ] The progenitors of memory T cells are not fully activated because they are not as specific to the antigen as the expanding effector T cells. Studies of cell division history found that telomere length and telomerase activity were reduced in effector T cells compared with memory T cells, suggesting that memory T cells did not undergo as much cell division as effector T cells, which is inconsistent with the On-Off-On model. [ 11 ] Repeated or chronic antigenic stimulation of T cells, as in HIV infection , would induce elevated effector functions but reduce memory. [ 12 ] It was also found that massively proliferated T cells are more likely to generate short-lived effector cells, while minimally proliferated T cells would form more long-lived cells. [ 11 ] Epigenetic modifications are involved in the transition from naive T cells. [ 14 ] For example, in CD4 + memory T cells, positive histone modifications mark key cytokine genes that are up-regulated during the secondary immune response, including IFNγ , IL4 , and IL17A . [ 14 ] Some of these modifications persisted after antigen clearance, establishing an epigenetic memory that allows a faster activation upon re-encounter with the antigen. [ 14 ] For CD8 + memory T cells, certain effector genes, such as IFNγ , are not expressed in the resting state but are transcriptionally poised for fast expression upon activation. [ 14 ] Additionally, the enhancement of expression for certain genes also depends on the strength of the initial TCR signaling for the progeny of memory T cells, which is correlated with the regulatory element activation that directly changes gene expression level. [ 14 ] Historically, memory T cells were thought to belong to either the effector (T EM cells) or central memory (T CM cells) subtypes, each with its own distinguishing set of cell surface markers (see below). [ 15 ] Subsequently, numerous additional populations of memory T cells were discovered, including tissue-resident memory T (T RM ) cells, stem memory T (T SCM ) cells, and virtual memory T cells . The single unifying theme for all memory T cell subtypes is that they are long-lived and can quickly expand to large numbers of effector T cells upon re-exposure to their cognate antigen. By this mechanism, they provide the immune system with "memory" against previously encountered pathogens.
Memory T cells may be either CD4+ or CD8+ and usually express CD45RO while lacking CD45RA. [16] Numerous other subpopulations of memory T cells have been suggested. Investigators have studied stem memory T (TSCM) cells. Like naive T cells, TSCM cells are CD45RO−, CCR7+, CD45RA+, CD62L+ (L-selectin), CD27+, CD28+, and IL-7Rα+, but they also express large amounts of CD95, IL-2Rβ, CXCR3, and LFA-1, and show numerous functional attributes distinctive of memory cells. [6] T cells possess the ability to be activated independently of their cognate antigen stimulation, i.e. without TCR stimulation. At early stages of infection, T cells specific for unrelated antigens are activated by the presence of inflammation alone. This happens in the inflammatory milieu resulting from microbial infection, cancer, or autoimmunity in both mice and humans, and occurs locally as well as systemically. [25][26][27][28][29] Moreover, bystander-activated T cells can migrate to the site of infection due to increased CCR5 expression. [26] This phenomenon is observed predominantly in memory CD8+ T cells, which have a lower activation threshold for cytokine stimulation than their naive counterparts and are therefore activated in this manner more easily. [25] Virtual memory CD8+ T cells also display heightened sensitivity to cytokine-induced activation in mouse models, but this has not been directly demonstrated in humans. [26] Conversely, TCR-independent activation of naive CD8+ T cells remains controversial. [26][28] Apart from infections, bystander activation also plays an important role in antitumor immunity. [30] In human cancerous tissues, a high number of virus-specific, rather than tumor-specific, CD8+ T cells has been detected. [30] This type of activation is considered beneficial for the host in terms of cancer clearance efficiency. [26] The major drivers of bystander activation are cytokines such as IL-15, IL-18, IL-12, and type I IFNs, often working synergistically. [25][26][28][29] IL-15 is responsible for the cytotoxic activity of bystander-activated T cells. It induces expression of NKG2D (a receptor typically expressed on NK cells) on memory CD8+ T cells, leading to innate-like cytotoxicity, i.e. recognition of NKG2D ligands as indicators of infection, cell stress, and cell transformation, as well as destruction of altered cells in an NK-like manner. [25][26][28][29] TCR activation has been shown to abrogate IL-15-mediated NKG2D expression on T cells. [28][29] Additionally, IL-15 induces expression of cytolytic molecules, drives cell expansion, and enhances the cell response to IL-18. [25][26][29] IL-18 is another cytokine involved in this process, typically acting in synergy with IL-12 to enhance the differentiation of memory T cells into effector cells, i.e. it induces IFN-γ production and cell proliferation. [25][26][29] Toll-like receptors (TLRs), especially TLR2, have been linked to TCR-independent activation of CD8+ T cells upon bacterial infection as well. [25][29] Although TCR-independent activation has been studied more extensively in CD8+ T cells, there is clear evidence of this phenomenon occurring in CD4+ T cells. However, it is considered less efficient, presumably due to lower CD122 (also known as IL2RB or IL15RB) expression. [31][32] Similarly to their CD8+ counterparts, memory and effector CD4+ T cells exhibit increased sensitivity to TCR-independent activation.
[26][32] IL-1β, synergistically with IL-12 and IL-23, stimulates memory CD4+ T cells and drives the Th17 response. [32] Moreover, IL-18, IL-12, and IL-27 induce cytokine expression in effector and memory CD4+ T cells, [32] and IL-2 is considered a strong activation inducer of CD4+ T cells that can replace TCR stimulation even in naive cells. [32] TLR2 has also been reported to be present on memory CD4+ T cells, which respond to its agonist with IFNγ production, even without TCR stimulation. [32] Bystander activation plays a role in limiting the spread of infection in its early stages and helps in tumor clearance. However, this type of activation can also have deleterious outcomes, especially in chronic infections and autoimmune diseases. [26][27][28][29] Liver injury during chronic hepatitis B virus infection is a result of non-HBV-specific CD8+ T cell infiltration into the tissue. [26] A similar situation occurs during acute hepatitis A virus infection, [26] and activated virus-unrelated CD4+ T cells contribute to ocular lesions in herpes simplex virus infections. [26][32] Increased IL-15 expression and subsequent excessive NKG2D expression have been linked to exacerbation of some autoimmune disorders, such as type 1 diabetes, multiple sclerosis, and inflammatory bowel diseases, for instance Crohn's disease and celiac disease. [25] Furthermore, enhanced TLR2 expression has been observed in the joints, cartilage, and bones of rheumatoid arthritis patients, and the presence of its ligand, peptidoglycan, has been detected in their synovial fluid. [25]
https://en.wikipedia.org/wiki/Memory_T_cell
Memory T cell inflation is the formation and maintenance of a large population of specific CD8+ T cells in reaction to cytomegalovirus (CMV) infection. [1][2][3][4] CMV is a widespread virus that affects 60–80% of the human population in developed countries. The virus is spread through saliva or urine, and in healthy individuals it can survive under immune system control without any visible symptoms. The life strategy of CMV is to integrate its DNA into the genome of host cells and escape the mechanisms of natural immunity. [5] The immune response against CMV is primarily provided by CD8+ T cells, which recognize viral fragments in the MHC class I complex on the surface of infected cells and destroy these cells. Specific CD8+ T cells are generated in secondary lymphoid organs, where naïve T cells encounter cytomegalovirus antigen on antigen-presenting cells. [6] This results in a population of migrating effector CD8+ T lymphocytes and a second, smaller population, called central memory T cells, that remains in the secondary lymphatic organs and the bone marrow. These cells are capable of responding and proliferating immediately after repeated pathogen recognition. [3][5] The memory cells generated in response to cytomegalovirus make up approximately 9.1–10.2% of all circulating CD4+ and CD8+ memory cells. [5] Generally, these cells express low levels of the lymph node localization markers CD62L and CCR7, and occur in peripheral organs. [3][6] They retain their standard functions, such as cytokine production and cytotoxicity. They do not express the costimulatory molecule CD28 or the inhibitory receptor PD-1 on their surface, but they do express the inhibitory molecules KLRG1 and CD85. [4] Remodeling of the immune response and a reduced ability to protect individuals from infectious diseases are observed with age. Especially in the elderly, long-term CMV infection leads to a rapid increase in the number of CMV-specific T cells. CMV-specific memory CD8+ T lymphocytes then predominate, and the total number of available naïve T lymphocytes decreases. [3][7] CD8+ T cells form up to 50% of all peripheral blood memory cells in CMV-positive elderly individuals. [7] The same effect on the immune system has been described for herpes viruses and parvoviruses. [5] A potential therapeutic use of memory cells is vaccination based on the induction of memory T cells in the periphery that will be capable of attacking the pathogen effectively and immediately. [5]
https://en.wikipedia.org/wiki/Memory_T_cell_inflation
Age-related memory loss, sometimes described as "normal aging" (also spelled "ageing" in British English), is qualitatively different from memory loss associated with types of dementia such as Alzheimer's disease, and is believed to have a different brain mechanism. [1] Mild cognitive impairment (MCI) is a condition in which people face memory problems more often than the average person their age. These symptoms, however, do not prevent them from carrying out normal activities and are not as severe as the symptoms of Alzheimer's disease (AD). Symptoms often include misplacing items, forgetting events or appointments, and having trouble finding words. [2][3] According to recent research, MCI is seen as the transitional state between the cognitive changes of normal aging and Alzheimer's disease. Several studies have indicated that individuals with MCI are at an increased risk for developing AD, ranging from one percent to twenty-five percent per year; in one study twenty-four percent of MCI patients progressed to AD in two years and twenty percent more over three years, whereas another study indicated that fifty-five percent of MCI subjects had progressed in four and a half years. [4][5] Some patients with MCI, however, never progress to AD. [6] Studies have also indicated patterns that are found in both MCI and AD. Much like patients with Alzheimer's disease, those with mild cognitive impairment have difficulty accurately defining words and using them appropriately in sentences when asked. While MCI patients performed worse on this task than the control group, AD patients performed worse overall. The abilities of MCI patients stood out, however, due to their ability to provide examples to make up for their difficulties. AD patients failed to use any compensatory strategies and therefore exhibited the difference in the use of episodic memory and executive functioning. [7] Normal aging is associated with a decline in various memory abilities in many cognitive tasks; the phenomenon is known as age-related memory impairment (AMI) or age-associated memory impairment (AAMI). The ability to encode new memories of events or facts, as well as working memory, shows decline in both cross-sectional and longitudinal studies. [8] Studies comparing the effects of aging on episodic memory, semantic memory, short-term memory, and priming find that episodic memory is especially impaired in normal aging; some types of short-term memory are also impaired. [9] The deficits may be related to impairments seen in the ability to refresh recently processed information. [10] Source information is one type of episodic memory that declines with old age; this kind of knowledge includes where and when the person learned the information. Knowing the source and context of information can be extremely important in daily decision-making, so this is one way in which memory decline can affect the lives of the elderly. Reliance on political stereotypes is therefore one way older adults use their knowledge about sources when making judgments, and the use of metacognitive knowledge gains importance. [11] This deficit may be related to declines in the ability to bind information together in memory during encoding and to retrieve those associations at a later time. [12][13] Throughout the many years of studying the progression of aging and memory, it has been hard to distinguish an exact link between the two.
Many studies have tested psychologists' theories over the years and have found solid evidence that older adults have a harder time recalling contextual information, while more familiar or automatic information typically stays well preserved throughout the aging process (Light, 2000). Also, there is an increase in irrelevant information as one ages, which can lead an elderly person to believe false information, since they are often in a state of confusion. [citation needed] Episodic memory is supported by networks spanning the frontal, temporal, and parietal lobes. The interconnections between the lobes are presumed to enable distinct aspects of memory. Whereas the effects of gray matter lesions have been extensively studied, less is known about the interconnecting fiber tracts. In aging, degradation of white matter structure has emerged as an important general factor, further focusing attention on the critical white matter connections. [citation needed] Exercise affects many people, young and old. [14] For the young, if exercise is introduced, it can form a constructive habit that persists throughout adulthood. For the elderly, especially those with Alzheimer's or other diseases that affect memory, exercise makes it likely that the hippocampus will retain its size and the person's memory will improve. [15] It is also possible that the years of education a person has had and the amount of attention they received as a child are variables closely related to the links between aging and memory. [citation needed] There is a positive correlation between early-life education and memory gains in older age. This effect is especially significant in women. [16] In particular, associative learning, which is another type of episodic memory, is vulnerable to the effects of aging, and this has been demonstrated across various study paradigms. [17] This has been explained by the Associative Deficit Hypothesis (ADH), which states that aging is associated with a deficiency in creating and retrieving links between single units of information. This can include knowledge about context, events, or items. The ability to bind pieces of information together with their episodic context into a coherent whole is reduced in the elderly population. [18] Furthermore, older adults' performance in free recall involves temporal contiguity to a lesser extent than that of younger people, indicating that associations based on contiguity become weaker with age. [19] Several reasons have been proposed as to why older adults use less effective encoding and retrieval strategies as they age. The first is the "disuse" view, which states that memory strategies are used less by older adults as they move further away from the educational system. Second is the "diminished attentional capacity" hypothesis, which holds that older people engage less in self-initiated encoding due to reduced attentional capacity. The third is "memory self-efficacy," which indicates that older people lack confidence in their own memory performance, leading to poorer outcomes. [17] It is known that patients with Alzheimer's disease and patients with semantic dementia both exhibit difficulty in tasks that involve picture naming and category fluency. This is tied to damage to their semantic network, which stores knowledge of meanings and understandings. [citation needed] One phenomenon, known as "senior moments", is a memory deficit that appears to have a biological cause.
When an older adult is interrupted while completing a task, it is likely that the original task at hand will be forgotten. Studies have shown that, unlike a younger brain, the brain of an older adult often cannot re-engage after an interruption and continues to focus on the interruption itself. [20] This inability to multi-task is normal with aging and is expected to become more apparent as more of the older generations remain in the workforce. A biological explanation for memory deficits in aging comes from a postmortem examination of five brains of elderly people with better memory than average. These people are called the "super aged," and it was found that these individuals had fewer fiber-like tangles of tau protein than typical elderly brains. However, a similar amount of amyloid plaque was found. [21] More recent research has extended established findings of age-related decline in executive functioning [22][23] by examining related cognitive processes that underlie healthy older adults' sequential performance. Sequential performance refers to the execution of a series of steps needed to complete a routine, such as the steps required to make a cup of coffee or drive a car. An important part of healthy aging involves older adults' use of memory and inhibitory processes to carry out daily activities in a fixed order without forgetting the sequence of steps that were just completed while remembering the next step in the sequence. A study from 2009 [24] examined how young and older adults differ in the underlying representation of a sequence of tasks and in their efficiency at retrieving the information needed to complete their routine. Findings from this study revealed that when older and young adults had to remember a sequence of eight animal images arranged in a fixed order, both age groups spontaneously used the organizational strategy of chunking to facilitate retrieval of information. However, older adults were slower at accessing each chunk than younger adults, and were better able to benefit from memory aids, such as verbal rehearsal, to remember the order of the fixed sequence. Results from this study suggest that there are age differences in the memory and inhibitory processes that affect people's sequences of actions, and that memory aids can facilitate retrieval of information in older age. The causes of memory issues in aging remain unclear, even after many theories have been tested. There has yet to be a distinct link established between the two, because it is hard to determine exactly how each aspect of aging affects memory. However, it is known that the brain shrinks with age as the ventricles expand within the skull. Unfortunately, it is hard to establish a solid link between the shrinking brain and memory loss, since it is not known exactly which areas of the brain shrink and what the importance of those areas truly is in the aging process (Baddeley, Anderson, & Eysenck, 2015). Recalling information or a situation that has happened can be very difficult, since different pieces of information about an event are stored in different areas. During recall of an event, the various pieces of information are pieced back together, and any missing information is filled in by the brain unconsciously, which can account for why people sometimes receive and believe false information (Swaab, 2014).
Memory lapses can be both aggravating and frustrating, but they are due to the overwhelming amount of information being taken in by the brain. Issues in memory can also be linked to several common physical and psychological causes, such as anxiety, dehydration, depression, infections, medication side effects, poor nutrition, vitamin B12 deficiency, psychological stress, substance abuse, chronic alcoholism, thyroid imbalances, and blood clots in the brain. Taking care of the body and mind with appropriate medication, medical check-ups, and daily mental and physical exercise can prevent some of these memory issues. [25] Some memory issues are due to stress, anxiety, or depression. A traumatic life event, such as the death of a spouse, can lead to changes in lifestyle and can leave an elderly person feeling unsure of themselves, sad, and lonely. Dealing with such drastic life changes can therefore leave some people confused or forgetful. While in some cases these feelings may fade, it is important to take these emotional problems seriously. By emotionally supporting a struggling relative and seeking help from a doctor or counselor, the forgetfulness can be improved. [3] Memory loss can come from different situations of trauma, including accidents, head injuries, and even situations of abuse in the past. Sometimes the memories of traumas can last a lifetime; at other times they can be forgotten, intentionally or not, and the causes are highly debated throughout psychology. It is possible that damage to the brain makes it harder for a person to encode and process information that should be stored in long-term memory (Nairne, 2000). There is support for environmental cues being helpful in the recovery and retrieval of information, meaning that a cue can carry enough significance to bring back the memory. [citation needed] Tests and data show that as people age, the contiguity effect, in which stimuli that occur close together in time become associated in memory, starts to weaken. [26] This is supported by the associative deficit theory of memory, which attributes older people's lower memory performance to their difficulty in creating and retaining cohesive episodes. The supporting research, after controlling for sex, education, and other health-related issues, shows that greater age was associated with lower hit rates and greater false alarm rates, as well as a more liberal response bias on recognition tests. [27] Older people have a higher tendency to make outside intrusions during a memory test. This can be attributed to the inhibition effect. Inhibition caused participants to take longer to recall or recognize an item, and also led them to make more frequent errors. For instance, in a study using metaphors as the test material, older participants rejected correct metaphors more often than literally false statements. [28] Working memory, which as previously stated is a memory system that stores and manipulates information while cognitive tasks are completed, shows great declines during the aging process. Various theories have been offered to explain why these changes may occur, including fewer attentional resources, slower speed of processing, less capacity to hold information, and lack of inhibitory control. All of these theories offer strong arguments, and it is likely that the decline in working memory is due to the problems cited in all of these areas.
[citation needed] Some theorists argue that the capacity of working memory decreases with age, and hence people are able to hold less information. [29] In this theory, declines in working memory are described as the result of limiting the amount of information an individual can simultaneously keep active, so that a higher degree of integration and manipulation of information is not possible because the products of earlier memory processing are forgotten before the subsequent products. [30] Another theory that has been examined to explain age-related declines in working memory is that there is a limit in attentional resources seen with age. This means that older individuals are less capable of dividing their attention between two tasks, and thus tasks with higher attentional demands are more difficult to complete due to a reduction in mental energy. [31] Tasks that are simple and more automatic, however, show fewer declines with age. Working memory tasks often involve divided attention, so they are more likely to strain the limited resources of aging individuals. [31] Speed of processing is another theory that has been raised to explain working memory deficits. Based on the various studies he has completed on this topic, Salthouse argues that as one ages, the speed of processing information decreases significantly. It is this decrease in processing speed that is then responsible for the inability to use working memory efficiently as one ages. [31] The younger person's brain is able to obtain and process information at a quicker rate, which allows for the subsequent integration and manipulation needed to complete the cognitive task at hand. As this processing slows, cognitive tasks that rely on quick processing speed become more difficult. [31] Finally, the theory of inhibitory control has been offered to account for the decline seen in working memory. This theory examines the idea that older adults are unable to suppress irrelevant information in working memory, so that the capacity for relevant information is limited. Less space for new stimuli may contribute to the declines seen in an individual's working memory as they age. [31] As the aging process continues, deficits are seen in the ability to integrate, manipulate, and reorganize the contents of working memory in order to complete higher-level cognitive tasks such as problem solving, decision making, goal setting, and planning. More research must be completed in order to determine the exact causes of these age-related deficits in working memory. It is likely that attention, processing speed, capacity reduction, and inhibitory control all play a role. The brain regions that are active during working memory tasks are also being evaluated, and research has shown that different parts of the brain are activated during working memory in younger adults as compared to older adults, suggesting that younger and older adults perform these tasks differently. [31] There are two different methods for studying the ways aging and memory affect each other: cross-sectional and longitudinal. Both methods have been used multiple times in the past, and each has advantages and disadvantages. Cross-sectional studies involve testing different groups of people of different ages on a single occasion. This is where most of the evidence for studies of memory and aging comes from.
The disadvantage of cross-sectional studies is not being able to compare current data to previous data, or to make predictions about future data. Longitudinal studies involve testing the same group of participants, carefully selected to reflect the full range of a population, multiple times over many years (Ronnlund, Nyberg, Backman, & Nilsson; Ronnlund & Nilsson, 2006). The advantages of longitudinal studies include being able to see the effects that aging has on performance for each participant, and even being able to distinguish early signs of memory-related diseases. However, this type of study can be very costly and time-consuming, which may make participant dropout over the course of the study more likely (Baddeley, Anderson, & Eysenck, 2015). A deficiency of the RbAp48 protein has been associated with age-related memory loss. [citation needed] In 2010, experiments that tested the significance of memory under-performance in an older adult group compared with a young adult group hypothesized that the age-related deficit in associative memory can be linked to a physical deficit. This deficit can be explained by inefficient processing in the medial-temporal regions. This region is important in episodic memory, which is one of the two types of long-term human memory, and it contains the hippocampi, which are crucial in creating associations between items in memory. [32] Age-related memory loss is believed to originate in the dentate gyrus, whereas Alzheimer's is believed to originate in the entorhinal cortex. [33] During normal aging, oxidative DNA damage in the brain accumulates in the promoters of genes involved in learning and memory, as well as in genes involved in neuronal survival. [34] Oxidative DNA damage includes DNA single-strand breaks, which can give rise to DNA double-strand breaks (DSBs). [35] DSBs accumulate in neurons and astrocytes of the hippocampus and frontal cortex at early stages and during the progression to Alzheimer's disease, a process that could be an important driver of neurodegeneration and cognitive decline. [36] Various actions have been suggested to prevent memory loss or even improve memory. [citation needed] The Mayo Clinic has suggested several steps: stay mentally active, socialize regularly, get organized, eat a healthy diet, include physical activity in one's daily routine, and manage chronic conditions. [37] Because some of the causes of memory loss include medications, stress, depression, heart disease, excessive alcohol use, thyroid problems, vitamin B12 deficiency, not drinking enough water, and not eating nutritiously, fixing those problems could be a simple, effective way to slow down dementia. Some say that exercise is the best way to prevent memory problems, because it increases blood flow to the brain and may help new brain cells grow. [citation needed] The treatment will depend on the cause of memory loss, but various drugs to treat Alzheimer's disease have been suggested in recent years. There are four drugs currently approved by the Food and Drug Administration (FDA) for the treatment of Alzheimer's, and they all act on the cholinergic system: Donepezil, Galantamine, Rivastigmine, and Tacrine. Although these medications are not a cure for Alzheimer's, symptoms may be reduced for up to eighteen months in mild or moderate dementia. These drugs do not forestall the ultimate decline to full Alzheimer's.
[38] Modality is also important in determining the strength of a memory. For instance, auditory presentation creates stronger memories than visual presentation, as shown by the higher recency and primacy effects of an auditory recall test compared to a visual one. Research has shown that auditory training, through instrumental musical activity or practice, can help preserve memory abilities as one ages. Specifically, in Hanna-Pladdy and McKay's experiment, they found that the number of years of musical training, all else being equal, leads to better performance in non-verbal memory and extends the preservation of cognitive abilities in one's advanced years. [39] By keeping the patient active, focusing on their positive abilities, and avoiding stress, everyday tasks can more easily be accomplished. Routines for bathing and dressing should be organized so that the individual still feels a sense of independence. Simple approaches, such as finding clothes with large buttons, elastic waistbands, or Velcro straps, can ease the struggles of getting dressed in the morning. Further, finances should be managed, or a trusted individual appointed to manage them. Changing passwords to prevent over-use and involving a trusted family member or friend in managing accounts can prevent financial issues. When household chores begin to pile up, finding ways to break down large tasks into small, manageable steps that can be rewarded helps. Finally, talking with and visiting a family member or friend with memory issues is very important. Using a respectful and simple approach, talking one-on-one can ease the pain of social isolation and bring much mental stimulation. [40] Many people who experience memory loss and other cognitive impairments can have changes in behavior that are challenging for caregivers to deal with (see also caregiver stress). To help, caregivers should learn different ways to communicate and to de-escalate potentially aggressive situations. Because decision-making skills can be impaired, it can be beneficial to give simple commands instead of asking multiple questions (see also caring for people with dementia). [41] Caregiving can be a physically, mentally, and emotionally taxing job. A caregiver also needs to remember to care for themselves; taking breaks, finding time for themselves, and possibly joining a support group are a few ways to avoid burnout. [41] In contrast, implicit, or procedural, memory typically shows no decline with age. [42] Other types of short-term memory show little decline, [9] and semantic knowledge (e.g. vocabulary) actually improves with age. [43] In addition, the enhancement seen in memory for emotional events is also maintained with age. [44] Declines in working memory have been cited as the primary reason for the age-related decline in a variety of cognitive tasks. These tasks include long-term memory, problem solving, decision making, and language. [31] Working memory involves the manipulation of information being obtained, and then using this information to complete a task. For example, the ability to recite a list of numbers one has just been given backwards requires working memory, rather than just simple rehearsal of the numbers, which would require only short-term memory. The ability to tap into one's working memory declines as the aging process progresses. [31] It has been seen that the more complex a task is, the more difficulty the aging person has with completing it.
Active reorganization and manipulation of information becomes increasingly harder as adults age. [45] When an older individual is completing a task, such as having a conversation or doing work, they are using their working memory to help them complete this task. As they age, their ability to multi-task seems to decline; thus, after an interruption, it is often more difficult for an aging individual to successfully finish the task at hand. [46] Additionally, working memory plays a role in the comprehension and production of speech. There is often a decline in sentence comprehension and sentence production as individuals age. Rather than this decline being linked directly to deficits in linguistic ability, it is actually deficits in working memory that contribute to these decreasing language skills. [47] Studies have shown that, in terms of short-term visual memory in aging, viewing time and task complexity affect performance. When there is a delay or when the task is complex, recall declines. [48] In a study conducted to measure whether visual memory deficits in older adults with age-related visual decline were caused by memory performance or by visual functioning, the relationships among age, visual acuity, and visual and verbal memory were examined in 89 community-dwelling volunteers aged 60–87 years. The finding was that the effect of vision was not specific to visual memory. [49] Therefore, vision was found to be correlated with general memory function in older adults and is not modality-specific. Most research on memory and aging has focused on how older adults perform worse at a particular memory task. However, researchers have also discovered that simply saying that older adults are doing the same thing, only less of it, is not always accurate. In some cases, older adults seem to be using different strategies than younger adults. For example, brain imaging studies have revealed that older adults are more likely to use both hemispheres when completing memory tasks than younger adults. [51] In addition, older adults sometimes show a positivity effect when remembering information, which seems to be a result of the increased focus on regulating emotion seen with age. [44] For instance, eye tracking reveals that older adults show preferential looking toward happy faces and away from sad faces. [52]
https://en.wikipedia.org/wiki/Memory_and_aging
Memory architecture describes the methods used to implement electronic computer data storage in a manner that combines the fastest, most reliable, most durable, and least expensive ways to store and retrieve information. Depending on the specific application, a compromise on one of these requirements may be necessary in order to improve another. Memory architecture also explains how binary digits are converted into electric signals and stored in memory cells, as well as the structure of a memory cell. For example, dynamic memory is commonly used for primary data storage due to its fast access speed. However, dynamic memory must be repeatedly refreshed with a surge of current dozens of times per second, or the stored data will decay and be lost. Flash memory allows for long-term storage over a period of years, but it is much slower than dynamic memory, and its storage cells wear out with frequent use. Similarly, the data bus is often designed to suit specific needs such as serial or parallel data access, and the memory may be designed to provide for parity error detection or even error correction. The earliest memory architectures are the Harvard architecture, which has two physically separate memories and data paths for program and data, and the Princeton architecture, which uses a single memory and data path for both program and data storage. [1] Most general-purpose computers use a hybrid split-cache modified Harvard architecture that appears to an application program to be a pure Princeton architecture machine with gigabytes of virtual memory, but internally (for speed) it operates with an instruction cache physically separate from a data cache, more like the Harvard model. [1] DSP systems usually have a specialized, high-bandwidth memory subsystem, with no support for memory protection or virtual memory management. [2] Many digital signal processors have three physically separate memories and datapaths: program storage, coefficient storage, and data storage. A series of multiply–accumulate operations fetch from all three areas simultaneously to efficiently implement audio filters as convolutions.
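To make the multiply–accumulate pattern concrete, the following is a minimal sketch in C of a FIR (finite impulse response) audio filter implemented as a convolution. The function names, tap count, and coefficient values are illustrative assumptions, not from the article; on a real DSP, the coefficient array, sample history, and instruction stream would typically live in the three separate memories described above, so each loop iteration can fetch from all of them in parallel.

```c
#include <stddef.h>
#include <stdio.h>

/* One output sample of a FIR filter, computed as a direct convolution.
 * On a typical DSP, coeffs[] would sit in coefficient memory, history[]
 * in data memory, and this loop in program memory, letting a single
 * multiply-accumulate per cycle fetch from all three banks at once. */
static float fir_sample(const float *coeffs, const float *history, size_t taps)
{
    float acc = 0.0f;                      /* accumulator */
    for (size_t k = 0; k < taps; k++)
        acc += coeffs[k] * history[k];     /* multiply-accumulate step */
    return acc;
}

int main(void)
{
    /* Illustrative 4-tap moving-average filter. */
    const float coeffs[4] = { 0.25f, 0.25f, 0.25f, 0.25f };
    float history[4] = { 0 };              /* most recent sample first */
    const float input[8] = { 1, 1, 1, 1, 0, 0, 0, 0 };

    for (size_t n = 0; n < 8; n++) {
        /* Shift the delay line and insert the newest sample. */
        for (size_t k = 3; k > 0; k--)
            history[k] = history[k - 1];
        history[0] = input[n];
        printf("y[%zu] = %f\n", n, fir_sample(coeffs, history, 4));
    }
    return 0;
}
```

On general-purpose hardware this compiles and runs as ordinary code; the architectural point is only that a DSP's separate memories remove the fetch bottleneck in the inner loop.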
https://en.wikipedia.org/wiki/Memory_architecture
The memory cell is the fundamental building block of computer memory. The memory cell is an electronic circuit that stores one bit of binary information; it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it. Over the history of computing, different memory cell architectures have been used, including core memory and bubble memory. Today, the most common memory cell architecture is MOS memory, which consists of metal–oxide–semiconductor (MOS) memory cells. Modern random-access memory (RAM) uses MOS field-effect transistors (MOSFETs) to build flip-flops, along with MOS capacitors for certain types of RAM. The SRAM (static RAM) memory cell is a type of flip-flop circuit, typically implemented using MOSFETs. These require very low power to maintain the stored value when not being accessed. A second type, DRAM (dynamic RAM), is based on MOS capacitors. Charging or discharging a capacitor stores either a '1' or a '0' in the cell. However, since the charge in the capacitor slowly dissipates, it must be refreshed periodically. Because of this refresh process, DRAM consumes more power, but it can achieve higher storage densities. Most non-volatile memory (NVM), on the other hand, is based on floating-gate memory cell architectures. Non-volatile memory technologies such as EPROM, EEPROM, and flash memory utilize floating-gate memory cells, which rely on floating-gate MOSFET transistors. The memory cell is the fundamental building block of memory. It can be implemented using different technologies, such as bipolar, MOS, and other semiconductor devices. It can also be built from magnetic material such as ferrite cores or magnetic bubbles. [1] Regardless of the implementation technology used, the purpose of the binary memory cell is always the same: it stores one bit of binary information that can be accessed by reading the cell, and it must be set to store a 1 and reset to store a 0. [2] Logic circuits without memory cells are called combinational, meaning the output depends only on the present input. But memory is a key element of digital systems. In computers, it allows both programs and data to be stored, and memory cells are also used for temporary storage of the output of combinational circuits, to be used later by digital systems. Logic circuits that use memory cells are called sequential circuits, meaning the output depends not only on the present input but also on the history of past inputs. This dependence on the history of past inputs makes these circuits stateful, and it is the memory cells that store this state. These circuits require a timing generator or clock for their operation. [3] Computer memory used in most contemporary computer systems is built mainly out of DRAM cells; since the DRAM cell layout is much smaller than that of SRAM, it can be more densely packed, yielding cheaper memory with greater capacity. Since the DRAM memory cell stores its value as the charge of a capacitor, and there are current leakage issues, its value must be constantly rewritten. This is one of the reasons DRAM cells are slower than the larger SRAM (static RAM) cells, whose value is always available. It is also the reason SRAM memory is used for the on-chip cache included in modern microprocessor chips.
[4] On December 11, 1946, Freddie Williams applied for a patent on his cathode-ray tube (CRT) storing device (the Williams tube) with 128 40-bit words. It was operational in 1947 and is considered the first practical implementation of random-access memory (RAM). [5] In that year, the first patent applications for magnetic-core memory were filed by Frederick Viehe. [6][7] Practical magnetic-core memory was developed by An Wang in 1948 and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialised with the Whirlwind computer in 1953. [8] Ken Olsen also contributed to its development. [9] Semiconductor memory began in the early 1960s with bipolar memory cells, made of bipolar transistors. While it improved performance, it could not compete with the lower price of magnetic-core memory. [10] In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field-effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. [11] Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. [12][13] The invention of the MOSFET enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements, a function previously served by magnetic cores. [14] The first modern memory cells were introduced in 1964, when John Schmidt designed the first 64-bit p-channel MOS (PMOS) static random-access memory (SRAM). [15][16] SRAM typically has six-transistor cells, whereas DRAM (dynamic random-access memory) typically has single-transistor cells. [17][15] In 1965, Toshiba's Toscal BC-1411 electronic calculator used a form of capacitive bipolar DRAM, storing 180 bits of data in discrete memory cells consisting of germanium bipolar transistors and capacitors. [18][19] MOS technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. [20] In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology. [21] The first commercial bipolar 64-bit SRAM was released by Intel in 1969 with the 3101 Schottky TTL. One year later, Intel released the first DRAM integrated circuit chip, the Intel 1103, based on MOS technology. By 1972, it beat previous records in semiconductor memory sales. [22] DRAM chips during the early 1970s had three-transistor cells, before single-transistor cells became standard in the mid-1970s. [17][15] CMOS memory was commercialized by RCA, which launched a 288-bit CMOS SRAM memory chip in 1968. [23] CMOS memory was initially slower than NMOS memory, which was more widely used by computers in the 1970s. [24] In 1978, Hitachi introduced the twin-well CMOS process with its HM6147 (4 kb SRAM) memory chip, manufactured with a 3 μm process. The HM6147 chip was able to match the performance of the fastest NMOS memory chip of the time, while consuming significantly less power.
With comparable performance and much lower power consumption, the twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computer memory in the 1980s. [24] The two most common types of DRAM memory cells since the 1980s have been trench-capacitor cells and stacked-capacitor cells. [25] Trench-capacitor cells are those in which holes (trenches) are made in a silicon substrate, whose side walls are used as the memory cell, whereas stacked-capacitor cells are the earliest form of three-dimensional memory (3D memory), in which memory cells are stacked vertically in a three-dimensional cell structure. [26] Both debuted in 1984, when Hitachi introduced trench-capacitor memory and Fujitsu introduced stacked-capacitor memory. [25] The floating-gate MOSFET (FGMOS) was invented by Dawon Kahng and Simon Sze at Bell Labs in 1967. [27] They proposed the concept of floating-gate memory cells, using FGMOS transistors, which could be used to produce reprogrammable ROM (read-only memory). [28] Floating-gate memory cells later became the basis for non-volatile memory (NVM) technologies including EPROM (erasable programmable ROM), EEPROM (electrically erasable programmable ROM), and flash memory. [29] Flash memory was invented by Fujio Masuoka at Toshiba in 1980. [30][31] Masuoka and his colleagues presented the invention of NOR flash in 1984 [32] and NAND flash in 1987. [33] Multi-level cell (MLC) flash memory was introduced by NEC, which in 1996 demonstrated cells with four charge levels, storing 2 bits per cell, in a 64 Mb flash chip. [25] 3D V-NAND, in which flash memory cells are stacked vertically using 3D charge trap flash (CTF) technology, was first announced by Toshiba in 2007 [34] and first commercially manufactured by Samsung Electronics in 2013. [35][36] The most used implementations of memory cells are described below. The flip-flop has many different implementations; its storage element is usually a latch consisting of a NAND gate loop or a NOR gate loop, with additional gates used to implement clocking. Its value is always available for reading as an output. The value remains stored until it is changed through the set or reset process. Flip-flops are typically implemented using MOSFETs. Floating-gate memory cells, based on floating-gate MOSFETs, are used for most non-volatile memory (NVM) technologies, including EPROM, EEPROM, and flash memory. [29] According to R. Bez and A. Pirovano: "A floating-gate memory cell is basically an MOS transistor with a gate completely surrounded by dielectrics (Fig. 1.2), the floating-gate (FG), and electrically governed by a capacitive-coupled control-gate (CG). Being electrically isolated, the FG acts as the storing electrode for the cell device. Charge injected into the FG is maintained there, allowing modulation of the 'apparent' threshold voltage (i.e. VT seen from the CG) of the cell transistor." [29]
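As a rough illustration of the NAND-loop storage element described above, here is a behavioral sketch in C of a cross-coupled NAND latch with active-low set and reset inputs. It models the feedback loop by iterating the two gate equations until they settle; the function and variable names are illustrative assumptions, and this is a logic-level intuition aid, not a circuit-accurate simulation.

```c
#include <stdio.h>

/* Behavioral model of a cross-coupled NAND latch (active-low /S and /R):
 *   Q    = NAND(nset,   Qbar)
 *   Qbar = NAND(nreset, Q)
 * Iterating the equations a few times mimics the feedback loop
 * settling into a stable state. With nset = nreset = 1 the latch
 * simply holds its stored bit, which is the "memory" behavior. */
static int q = 0, qbar = 1;

static void latch_step(int nset, int nreset)
{
    for (int i = 0; i < 4; i++) {          /* a few passes suffice to settle */
        int new_q    = !(nset   && qbar);  /* NAND gate 1 */
        int new_qbar = !(nreset && q);     /* NAND gate 2 */
        q = new_q;
        qbar = new_qbar;
    }
}

int main(void)
{
    latch_step(0, 1); printf("set:   Q=%d\n", q);  /* pulse /S low: Q becomes 1 */
    latch_step(1, 1); printf("hold:  Q=%d\n", q);  /* both high: value is kept */
    latch_step(1, 0); printf("reset: Q=%d\n", q);  /* pulse /R low: Q becomes 0 */
    latch_step(1, 1); printf("hold:  Q=%d\n", q);  /* stored 0 is maintained */
    return 0;
}
```

The "hold" lines are the point of the exercise: once set or reset, the loop keeps regenerating its own state, which is exactly what lets the SRAM cell retain a bit without refresh.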
https://en.wikipedia.org/wiki/Memory_cell_(computing)
Memory corruption occurs in a computer program when the contents of a memory location are modified due to programmatic behavior that exceeds the intention of the original programmer or the program/language constructs; this is termed a violation of memory safety. The most likely causes of memory corruption are programming errors (software bugs). When the corrupted memory contents are used later in that program, it leads either to a program crash or to strange and bizarre program behavior. Nearly 10% of application crashes on Windows systems are due to heap corruption. [1] Programming languages like C and C++ have powerful features of explicit memory management and pointer arithmetic. These features are designed for developing efficient applications and system software. However, using these features incorrectly may lead to memory corruption errors. Memory corruption is one of the most intractable classes of programming errors, for two reasons: the source of the corruption may be far removed, in both time and code, from the point where the symptoms appear, making it hard to correlate cause and effect; and the symptoms may appear only under unusual conditions, making the error hard to reproduce consistently. Memory corruption errors can be broadly classified into four categories: using uninitialized memory, using memory the program does not own (such as dereferencing an invalid pointer), using memory beyond its allocated bounds (buffer overflows), and faulty heap management (such as double frees or use after free). Many memory debuggers, such as Purify, Valgrind, Insure++, Parasoft C/C++test, and AddressSanitizer, are available to detect memory corruption errors.
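As a small illustration of two of the error classes above, the following deliberately buggy C toy program (an illustrative assumption, not from the article) contains a heap buffer overflow and a use-after-free. Compiling with Clang or GCC's `-fsanitize=address` flag enables AddressSanitizer, which reports both defects at runtime; Valgrind detects them as well.

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    /* Heap buffer overflow: 8 bytes are allocated, but strcpy writes
     * 9 (the 8 characters plus the terminating '\0'), corrupting the
     * byte just past the end of the allocation. */
    char *name = malloc(8);
    strcpy(name, "12345678");      /* out-of-bounds write */

    /* Use after free: the pointer is dereferenced after the block has
     * been returned to the allocator, so the read may see whatever the
     * allocator or another allocation has since placed there. */
    free(name);
    printf("%c\n", name[0]);       /* invalid read of freed memory */

    return 0;
}
```

Built normally, this program may appear to run fine, which is exactly the intractability the article describes; built with `gcc -g -fsanitize=address demo.c`, AddressSanitizer aborts at the first bad access and prints the allocation, free, and access stack traces.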
https://en.wikipedia.org/wiki/Memory_corruption
Memory erasure is the selective artificial removal of memories or associations from the mind. Memory erasure has been shown to be possible in some experimental conditions; some of the techniques currently being investigated are: drug-induced amnesia, selective memory suppression, destruction of neurons, interruption of memory, memory reconsolidation, [1] and the disruption of specific molecular mechanisms. [2] There are many reasons that research is being done on the selective removal of memories. Potential patients for this research include patients with psychiatric disorders such as post-traumatic stress disorder or substance use disorder, among others. [2] Memory erasure is also featured in numerous works of fiction, with fictional methods and properties that do not necessarily correspond with scientific reality. Research focused on gaining a better understanding of what memories are has been going on for many years, and so has research into memory erasure. The recent history of memory erasure research has focused on determining how the brain actively keeps memories stored and retrieves them. There have been several instances where researchers found drugs that, when applied to certain areas of the brain, usually the amygdala, had relative success in erasing some memories. As early as 2009, researchers were able to trace and destroy neurons involved in supporting the specific type of memory that they were trying to erase. These neurons were targeted by using replication-defective herpes simplex virus (HSV) to increase cyclic adenosine monophosphate response element-binding protein (CREB) in them. As a result, these neurons were activated far more often during fear memory training or testing, in both wild-type and CREB-deficient mice. For the study, transgenic mice were used that allowed the use of diphtheria toxin to preferentially target cells that were overexpressing CREB, since these were the cells more likely involved with fear memories. This caused the erasure of the target memory but allowed the mice to still form new fear memories, which confirmed the cells were involved only in storing fear memories and not in forming them. [3] Aside from the biotechnology approach to studying memory, research in psychiatry on how memories work has also been going on for several years. Some studies show that behavioral therapy can erase bad memories. [4] There is some evidence that psychodynamic therapy and other energy techniques [5] can help with forgetting memories, among other psychiatric issues, but there is no proven therapeutic approach for erasing bad memories. [6] There are several different types of possible patients who could draw great benefit from selective memory erasure; these include people with drug addiction or posttraumatic stress disorder (PTSD). PTSD patients may include war veterans, people who witnessed horrific events, victims of violent crimes, and people who lived through many other possibly traumatic events. These potential patients have unwanted memories that can be devastating to their daily lives and prevent them from functioning properly. [7] Research continues, and in 2020, researchers were looking at potential new approaches to PTSD treatment. [8][9] There are three main types of memories: sensory memory, short-term memory, and long-term memory.
Sensory memory, in short, is the ability to hold sensory information for a short period of time; for example, looking at an object and being able to remember what it looked like moments later. Short-term memory is memory that allows a person to recall information over a short period of time, from a few seconds to a minute, without actually practicing the memory. Long-term memory has a much larger capacity than the prior two and stores information from both of these types of memories to create long-lasting and extensive memory. Long-term memory is the largest target for research involving selective memory erasure. Within long-term memory there are several types of retention. [10] Implicit memory (or 'muscle memory') is generally described as the ability to remember how to use objects or specific movements of the body (e.g. using a hammer). Explicit memory (or 'declarative memory') is that which can be consciously drawn upon by a person to remember. Explicit memory can be split into further subcategories: episodic memory, which is the memory of specific events and the information surrounding them, and semantic memory, which is the ability to remember factual information (e.g. what numbers mean). [11] A type of memory of particular concern for memory erasure is emotional memory. These memories often involve several different aspects of information that can come from a variety of the categories mentioned above. Emotional memories are powerful memories that can elicit strong physiological effects on a person. [12] An example of an emotional memory can be found in patients with PTSD: for these patients, a traumatic event has left a lasting emotional memory that can have powerful effects on a person even without the memory being consciously retrieved. [13] Drug-induced amnesia is the idea of selectively losing or inhibiting the creation of memories using drugs. Amnesia can be used as a treatment for patients who have experienced psychological trauma or for medical procedures where full anesthesia is not an option. Drug-induced amnesia is also a side effect of other drugs, like alcohol and Rohypnol. Other drugs can also put their users in an amnesic state, in which they experience some type of amnesia because of the drug's use; examples include Triazolam, Midazolam, and Diazepam. [14] A growing body of evidence shows that memory depends largely on the brain's synaptic plasticity, with a large part of this being dependent on its ability to maintain long-term potentiation (LTP). [15] Studies on LTP have also started to indicate that several molecular mechanisms may be at the basis of memory storage. [16] A more recent approach to erasing memories and the associations the brain makes with objects is disrupting the specific molecular mechanisms in the brain that are actively keeping memories active. [17] Recovering methamphetamine (METH) addicts have reported that the sight of certain objects, such as a lighter, gum, or drug paraphernalia, can cause massive cravings that can sometimes overwhelm their mental strength and cause them to relapse. [2] This indicates that long-term memories can be called upon by various associations made with the memory, without the conscious effort of the person.
There is an increasing belief that memories are largely supported by functional and structural plasticity deriving from F-actin polymerization in postsynaptic dendritic spines at excitatory synapses. [2] Recent research has targeted this F-actin polymerization by using direct actin depolymerization or a myosin II inhibitor to disrupt the polymerized F-actin associated with METH memory associations. The study indicated that these types of associations can be disrupted days to weeks after consolidation. [2] Although the depolymerization techniques had no effect on food-reward-based or shock-based associations, the results support the idea that the actin cytoskeleton of meth-associated memories is constantly changing, making it uniquely sensitive to depolymerization during the maintenance phase. This is some of the first evidence showing that memories made with different associations are actively maintained using different molecular substrates. These results also show that the actin cytoskeleton may be a promising target for the selective disruption of unwanted long-term memories. [2] Selective memory suppression is the idea that someone can consciously block an unwanted memory. Several different therapeutic techniques or training regimes have been attempted to test this idea, with varied success. [18] Many of these techniques focus on blocking the retrieval of a memory, using suppression techniques to slowly teach the brain to suppress it. Although some of these techniques have been useful for some people, they have not been shown to be a clear-cut solution to forgetting memories. Because these memories are not truly erased but merely suppressed, the questions of how permanent the solution is and what actually happens to the memories can be troubling for some. [19] Selective memory suppression is also something that can occur without a person being consciously aware of suppressing the creation and retrieval of unwanted memories. When this occurs without the person knowing, it is usually referred to as memory inhibition; the memory itself is called a repressed memory. [20] One of the ways scientists have attempted to erase these memories through suppression is by interrupting the reconsolidation of a memory. Reconsolidation occurs when a person recalls a memory, usually a fearful one; the memory becomes susceptible to alteration and is then stored again. [21] This has led many researchers to believe that this time period is the best time for memories to be altered or erased. Studies using behavioral training have shown that memories can be erased by tampering with them during the reconsolidation phase. [22] With evidence showing that different memories excite different neurons or systems of neurons in the brain, [23] the technique of destroying select neurons in the brain to erase specific memories is also being researched. Studies have started to investigate the possibility of using distinct toxins, along with biotechnology that allows researchers to see which areas of the brain are being used during the reward-learning process of making a memory, to destroy target neurons. In a paper published in 2009, the authors showed that neurons in the lateral amygdala that had a higher level of cyclic adenosine monophosphate response element-binding protein (CREB) were activated preferentially over other neurons by fear memory expression. This indicated to them that these neurons were directly involved in making the memory trace for that fear memory.
They then trained mice using auditory fear training to produce a fear memory. They checked which neurons were overexpressing CREB and then, using an inducible diphtheria-toxin strategy , destroyed those neurons, resulting in persistent and strong erasure of the fear memory. [ 1 ] Researchers have also found that levels of the neurotransmitter acetylcholine can affect which memories are most prominent in our minds. [ 24 ] Because the brain is still poorly understood, this technique of destroying neurons may have a much larger effect on the patient than just the removal of the intended memories. Given this complexity, treatments that stun the neurons instead of destroying them could be another approach. [ 25 ] A way of selectively erasing memories may be possible through optogenetics , a type of gene therapy that targets specific neurons. In 2017, researchers at Stanford demonstrated a technique for observing hundreds of neurons firing in the brain of a live mouse, in real time, and linked that activity to long-term information storage. By using a virus to trigger production of a light-sensitive protein in neurons linked to a fear, they could erase the memory by weakening the pathways using light. [ 26 ] [ 27 ] There is an epistemological issue in determining whether the absence of evidence (i.e., a memory trace) is evidence of absence . In experimental studies, the absence of behavior indicative of memory is sometimes interpreted as the absence of the memory trace; however, the memory impairment may be temporary, reflecting deficits in recall. [ 28 ] Alternatively, the memory trace may be latent and demonstrable via its indirect effects on new learning. [ 29 ] [ 30 ] The measurement issue is compounded by the fact that memory processes are dynamic and may not always manifest in single locations or in static and easily identifiable changes detectable by current technologies. Michael Davis, a researcher at Emory University, argues that complete erasure can only be confidently concluded if all of the biological events that occurred when the memory was formed revert to their original status. [ 31 ] The current state of technology and methodology may not be sensitive enough to detect all types of memory traces. Davis contends that because making these measurements in a complex organism is implausible, the concept of complete memory erasure (what he deems the "strong form of forgetting") is not useful scientifically. [ 31 ] As with most new technologies, the ability to erase memories raises many ethical questions. One such question is that although there are some extremely painful memories that some people (for example, PTSD patients) would like to be rid of, not all unpleasant memories are bad. [ 7 ] The ability to soften or erase memories could have drastic effects on how society functions. The ability to remember unpleasant events from one's past has a large impact on the actions a person takes in the future. Remembering and learning from past mistakes is crucial to a person's emotional development and helps ensure they do not repeat previous errors. [ 32 ] The ability to erase or modify memory could also have a massive impact on the law, particularly on how the outcome of a trial is determined.
Another ethical question is how the government would use this technology and what restrictions would need to be put in place. Some worry that if soldiers can go into battle knowing that the memories created during that time can simply be erased, they may not uphold military morale and standards. [ 7 ] Many are also skeptical about who should be able to have such procedures performed on them, and are urging that laws be established to determine this. Memory erasure has also been a common topic of interest in science fiction and other fiction. Several notable comics, TV shows and movies feature memory erasure, including Telefon , Total Recall , Men in Black , Eternal Sunshine of the Spotless Mind , Black Mirror , Futurama , The Bourne Identity , NBC's Heroes and Dollhouse . [ 33 ] Novels that feature memory erasure include The Invincible by Stanisław Lem , some of the Harry Potter novels (including Harry Potter and the Chamber of Secrets ) by J. K. Rowling , and The Giver by Lois Lowry . Several works by Philip K. Dick are about memory erasure, including " Paycheck " and " We Can Remember It for You Wholesale " (which served as the inspiration for Total Recall ).
https://en.wikipedia.org/wiki/Memory_erasure
Memory foam consists mainly of polyurethane with additional chemicals that increase its viscosity and density . It is often referred to as " viscoelastic " polyurethane foam, or low-resilience polyurethane foam ( LRPu ). The foam bubbles or 'cells' are open, effectively creating a matrix through which air can move. Higher-density memory foam softens in reaction to body heat, allowing it to mold to a warm body in a few minutes. Newer foams may recover their original shape more quickly. [ 1 ] Memory foam derives its viscoelastic properties from several effects due to the material's internal structure. The network effect is the force working to restore the foam's structure when it is deformed; it is generated by the deformed porous material pushing outwards to restore its structure against an applied pressure. Three effects work against the network effect, slowing the regeneration of the foam's original structure, among them the pneumatic and adhesive effects discussed below. These effects are temperature-dependent, so the temperature range at which memory foam retains its properties is limited: if it is too cold, it hardens; if it is too hot, it acts like conventional foams, quickly springing back to its original shape. The underlying physics of this process can be described by polymeric creep . [ 2 ] [ 3 ] The pneumatic and adhesive effects are strongly correlated with the size of the pores within memory foam. Smaller pores lead to higher internal surface area and reduced air flow, increasing the adhesive and pneumatic effects. Thus the foam's properties can be controlled by changing its cell structure and porosity . Its glass transition temperature can also be modulated by using additives in the foam's material. [ 2 ] Memory foam's mechanical properties can affect the comfort of mattresses produced with it. There is also a trade-off between comfort and durability. Certain memory foams may have a more rigid cell structure, leading to a weaker distribution of weight but better recovery of the original structure, and hence improved cyclability and durability. A denser cell structure can also resist the penetration of water vapor , leading to reduced weathering and better durability and overall appearance. [ 4 ] Memory foam was developed in 1966 under a contract by NASA 's Ames Research Center to improve the safety of aircraft cushions. The temperature-sensitive memory foam was initially referred to as "slow spring back foam", though most called it "temper foam". [ 5 ] Created by feeding gas into a polymer matrix, it had an open-cell solid structure that matched pressure against it, yet slowly returned to its original shape. [ 6 ] Later commercialisation of the foam included use in medical equipment such as X-ray table pads, and sports equipment such as American / Canadian football helmet liners. When NASA released memory foam to the public domain in the early 1980s, Fagerdala World Foams was one of the few companies willing to work with it, as the manufacturing process remained difficult and unreliable. Their 1991 product, the Tempur-Pedic Swedish Mattress, eventually led to the mattress and cushion company Tempur World. Memory foam was subsequently used in medical settings. For example, when patients were required to lie immobile in bed on a firm mattress for an unhealthy period of time, the pressure on some of their body regions impaired blood flow, causing pressure sores or gangrene . Memory foam mattresses, as well as alternating-pressure air mattresses, significantly decreased such events.
[ 5 ] [ 7 ] Memory foam was initially too expensive for widespread use, but became cheaper over time. Its most common domestic uses are mattresses, pillows, shoes, and blankets. It has medical uses, such as wheelchair seat cushions, hospital bed pillows and padding for people suffering long-term pain or postural problems. Heat retention can be a disadvantage when used in mattresses and pillows, so in second-generation memory foam, companies began using an open cell structure to improve breathability. In 2006, the third generation of memory foam was introduced. Gel visco or gel memory foam consists of gel particles fused with visco foam to reduce trapped body heat, speed up spring-back time and help the mattress feel softer. This technology was originally developed and patented by Peterson Chemical Technology, [ 8 ] and gel mattresses became popular with the release of Serta's iComfort line and Simmons' Beautyrest line in 2011. Gel-infused memory foam was next developed with what were described as "beads" containing the gel which, as a phase-change material , achieved the desired temperature stabilization or cooling effect by changing from a solid to a liquid "state" within the capsule. Changing physical states can significantly alter a material's heat absorption properties. Since the development of gel memory foam, other materials have been added. Aloe vera , green tea extract and activated charcoal have been combined with it to reduce odors or provide aromatherapy while sleeping. Rayon has been used in woven mattress covers over memory foam beds to wick moisture away from the body to increase comfort. Phase-change materials (PCMs) have also been used in covers on memory foam pillows, beds, and mattress pads. Materials other than polyurethane also have the properties necessary to make memory foam. Polyethylene terephthalate , one such polymeric material, provides certain benefits over polyurethane, such as recyclability, lightness, and thermal insulation . [ 9 ] A memory foam mattress is usually denser than other foam mattresses, making it both more supportive and heavier. Memory foam mattresses are often sold for higher prices than traditional mattresses. Memory foam used in mattresses is commonly manufactured in densities ranging from less than 24 kg/m³ (1.5 lb/ft³) to 128 kg/m³ (8 lb/ft³). Most standard memory foam has a density of 16–80 kg/m³ (1–5 lb/ft³). Most bedding, such as topper pads and comfort layers in mattresses, has a density of 48–72 kg/m³ (3–4.5 lb/ft³). High densities such as 85 kg/m³ (5.3 lb/ft³) are used infrequently. The firmness (hard to soft) of memory foam is used in determining comfort. It is measured by a foam's indentation force deflection (IFD) rating. However, IFD is not a complete measure of a "soft" or "firm" feel: a foam of higher IFD but lower density can feel soft when compressed. IFD measures the force in newtons (or pounds-force ) required to make a dent 1 inch (25 mm) deep in a foam sample 500 mm × 500 mm × 100 mm (19.7 in × 19.7 in × 3.9 in) using a 323 cm² (50 sq in), 8-inch-diameter disc, known as IFD @ 25% compression. [ 10 ] IFD ratings for memory foams range between super soft (IFD 10) and semi-rigid (IFD 120). Most memory foam mattresses are firm (IFD 12 to IFD 16). Second and third generation memory foams have an open-cell structure that reacts to body heat and weight by molding to the sleeper's body, helping relieve pressure points, preventing pressure sores, etc.
[ 11 ] Manufacturers claim that this may help relieve pressure points, reducing pain and promoting more restful sleep, although there are no objective studies supporting the mattresses' claimed benefits. [ 12 ] Memory foam mattresses retain body heat, so they can be excessively warm in hot weather. However, gel-type memory foams tend to be cooler due to their greater breathability. [ 13 ] Emissions from memory foam mattresses may directly cause more respiratory irritation than other mattresses. Memory foam, like other polyurethane products, can be combustible. [ 14 ] Laws in several jurisdictions require that all bedding, including memory foam items, be resistant to ignition from an open flame such as a candle or cigarette lighter. US bedding laws that went into effect in 2010 changed the Cal-117 Bulletin for flame-retardancy (FR) testing. [ 15 ] There is concern that high levels of the fire retardant PBDE , commonly used in memory foam, could cause health problems for some users. [ 16 ] PBDEs are no longer used in most bedding foams, especially in the European Union. Manufacturers caution against leaving babies and small children unattended on memory foam mattresses, as they may find it difficult to turn over and may suffocate. [ 13 ] The United States Environmental Protection Agency has published two documents proposing National Emission Standards for Hazardous Air Pollutants (HAP) concerning hazardous emissions produced during the making of flexible polyurethane foam products. [ 17 ] The HAP emissions associated with polyurethane foam production include methylene chloride , toluene diisocyanate , methyl chloroform , methylene diphenyl diisocyanate , propylene oxide , diethanolamine , methyl ethyl ketone , methanol , and toluene . However, not all chemical emissions associated with the production of these materials have been classified. Methylene chloride makes up over 98 percent of the total HAP emissions from this industry. Short-term exposure to high concentrations of methylene chloride irritates the nose and throat. The effects of chronic (long-term) exposure to methylene chloride in humans involve the central nervous system and include headaches, dizziness, nausea, and memory loss. Animal studies indicate that inhalation of methylene chloride affects the liver, kidney, and cardiovascular system. Developmental or reproductive effects of methylene chloride have not been reported in humans, but limited animal studies have reported lowered fetal body weights in exposed rats. [ 18 ]
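The density figures above mix metric and imperial units. As a quick illustration of the arithmetic only (the conversion constant is standard; the ranges are simply those quoted above), a small Python sketch can reproduce the rounded figures:

```python
# Illustrative unit-conversion helper for the foam density figures quoted above.
# The constant is standard arithmetic, not data from any manufacturer.

LB_FT3_TO_KG_M3 = 16.0185  # 1 lb/ft^3 = 16.0185 kg/m^3

def lbft3_to_kgm3(d: float) -> float:
    """Convert a foam density from lb/ft^3 to kg/m^3."""
    return d * LB_FT3_TO_KG_M3

def kgm3_to_lbft3(d: float) -> float:
    """Convert a foam density from kg/m^3 to lb/ft^3."""
    return d / LB_FT3_TO_KG_M3

if __name__ == "__main__":
    # Reproduce the rounded figures used in the article.
    for lb in (1.5, 3.0, 4.5, 5.3, 8.0):
        print(f"{lb:>4} lb/ft^3 ~= {lbft3_to_kgm3(lb):6.1f} kg/m^3")
    # e.g. 1.5 lb/ft^3 ~= 24.0 kg/m^3 and 8.0 lb/ft^3 ~= 128.1 kg/m^3
```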
https://en.wikipedia.org/wiki/Memory_foam
Memory improvement is the act of enhancing one's memory. Factors motivating research on improving memory include conditions such as amnesia , age-related memory loss , people's desire to enhance their memory, and the search to determine the factors that impact memory and cognition . There are different techniques to improve memory, some of which include cognitive training , psychopharmacology , diet , stress management , and exercise . Each technique can improve memory in different ways. Neuroplasticity is the mechanism by which the brain encodes experience, learns new behaviors, and can relearn behaviors lost due to brain damage. [ 1 ] Experience-dependent neuroplasticity suggests that the brain changes in response to experience. When the learning of London taxicab drivers, who memorize maps of the city while studying to drive taxis, was studied over a period of time, it was found that grey matter volume increased in the posterior hippocampus , an area of the brain heavily involved in memory. The longer taxi drivers had navigated the streets of London, the greater the volume of grey matter in their posterior hippocampus. This suggests a correlation between mental training or exercise and the brain's capacity to manage greater volumes of more complex information. The increase in volume was, however, accompanied by a decrease in the taxi drivers' ability to acquire new visuo-spatial information. [ 2 ] Research has found that chronic and acute stress have adverse effects on memory processing systems . The discovery that the brain can change as a result of experience has resulted in the development of cognitive training . Cognitive training improves cognitive functioning, which can increase working memory capacity and improve cognitive skills and functions in clinical populations with working memory deficiencies. [ 15 ] Cognitive training may focus on factors such as attention , speed of processing , neurofeedback , dual-tasking and perceptual training. [ 15 ] Cognitive training has been shown to improve cognitive abilities for up to five years. In one experiment studying how the cognitive functions of older adults were affected by cognitive training involving memory, reasoning, and speed of processing, it was found that improvements in cognitive ability were maintained over time and had a positive transfer effect on everyday functioning. The results indicate that each type of cognitive training can produce immediate and lasting improvements in the corresponding cognitive ability, suggesting that training can be beneficial for improving memory. [ 16 ] Cognitive training in areas other than memory has been seen to generalize and transfer to memory systems. The Improvement in Memory with Plasticity-based Adaptive Cognitive Training (IMPACT) study by the American Geriatrics Society in 2009 demonstrated that cognitive training designed to improve the accuracy and speed of the auditory system also improved memory and attention system functioning. [ 17 ] Cognitive training can be categorized as strategy training or core training. The manner in which a training study is conducted may affect outcomes or perceptions of them. Expectancy and effort effects occur when the experimenter subconsciously influences the participants to perform a desired result. One form of expectancy bias is the placebo effect, which is caused by the expectation that training will have a positive influence on cognition. Control groups may be used to eliminate this bias, because participants in them would not expect to benefit from the training.
Researchers sometimes generalize their results, which can be misleading, for example by taking findings from a single task and interpreting the observed improvements as a broadly defined cognitive ability. Studies may also be inconsistent when a variety of comparison groups are used in working memory training, since outcomes are affected by the training and assessment timeline, assessment conditions, training setting and control group selection. [ 15 ] The Five x Five System is a set of scientifically validated memory enhancement tools. The system was created by Dr. Peter Marshall for research purposes at Royal Holloway, University of London. It involves five groups of five tactics designed to maximize storage and recall at each stage of the process of registering, short-term storage, long-term storage, consolidation and retrieval, and was designed to test the efficacy of memory training in school curricula. Each section is of equal text length so that it can be taught verbatim in the same amount of time by all competent teachers. [ 18 ]
Generation effect: The generation effect relies on the involvement of individuals in creating their own study materials in order to enhance encoding and long-term retrieval. [ 19 ] Though the underlying mechanisms of the generation effect are not fully understood, an analysis concluded that the effect is real. [ 20 ]
Testing effect: The testing effect is a derivative of the generation effect, as it involves generating the self-testing material. Repeatedly testing oneself enhances encoding, thus improving memory. [ 19 ] The testing effect occurs when most of the learning is allocated to declarative knowledge, and long-term memory is enhanced. [ 21 ] Practice is necessary for retrieving memories: [ 22 ] the more frequently a person practices memorization, the more capable they are of remembering it later. [ 22 ] Repeated retrieval practice facilitates the development of a retrieval structure that makes it easier to access long-term memories. [ 21 ] The testing effect occurs because of the development of an adequate retrieval structure. [ 21 ] Testing is different from re-reading because the information being learned is practiced and tested, forcing the information to be drawn from memory. [ 22 ] The testing effect allows information to be recalled over a longer period, as it serves as a self-testing tool, and aids in recalling information in the future. [ 23 ] This strategy is effective for information that will be tested and needs to be in long-term memory. [ 21 ]
Spacing effect: Taking scheduled breaks and having short study sessions has proven more effective for memory than one long study session. Memory can also be improved by sleeping after learning. [ 19 ] [ 24 ] Longer breaks between study sessions have been associated with better learning and retention, and encountering previously learned information after a break helps improve long- and short-term retention. [ 25 ]
Illusion of learning: Illusions of learning should be avoided when improving memory. Some learning and studying strategies may seem more effective than they actually are, leading individuals to think they know material when they do not. This can be caused by fluency and the familiarity effect. As people reread material over and over, it becomes easier to read, creating a sense of fluency.
However, this fluency does not indicate that encoding or retrieval of the material is being enhanced. The familiarity effect creates an illusion of learning: when individuals recognize a word or concept as familiar, they may interpret that as knowing and understanding the material. [ 19 ]
State-dependent learning: Retrieval is known to improve when the environment or mood state in which encoding happened matches the environment or mood state at the time of retrieval. [ 26 ]
Concept maps "are diagrams that link word concepts in a fluid manner to central key concepts." [ 21 ] They center around a main topic or idea, with lines protruding from the center toward related information. [ 27 ] Other concepts and ideas are then written at the end of each line. These related ideas are usually one or two words long, giving only the essence of what is needed for memory retrieval . [ 21 ] Related ideas can also be drawn at the ends of the lines, which may be especially useful given the drawing effect (people remember images better than words). [ 28 ] These diagrams are beneficial because they require the creator to link and integrate different ideas, which improves critical thinking and leads to more meaningful learning. [ 29 ] Concept maps also help facilitate the storage of material in long-term memory, and visually reveal any knowledge gaps that may be present. [ 21 ] Concept maps have been shown to improve people's ability to complete novel problem-solving tasks. [ 30 ] The drawing effect is another way to improve memory. Studies show that images are remembered better than words, which is now known as the picture-superiority effect. [ 28 ] Furthermore, another study found that when people study vocabulary, they remember more when they draw the definition than when they write it. [ 31 ] This is thought to be because drawing uses three different types of memory: elaborative, motor, and pictorial. [ 32 ] The benefit of using pictures to enhance memory is seen even at an older age, including in dementia patients. [ 32 ] The method of loci is a technique for memory recall in which items to be remembered are associated with different locations that are well known to the learner. [ 21 ] It is one of the oldest and most effective mnemonics based on visual imagery. [ 21 ] The more that visual memory is exercised through using objects to recall information, the higher the memory recall. [ 33 ] The locations used in the method of loci contribute to the effectiveness of recall. [ 21 ] Using the locations along a driving route to work is more effective than using a room within a home, because items in a room can be moved around while a route to work is more constant. [ 21 ] There are limitations to the method of loci: it is difficult to recall any given item without working through the list sequence, which can be time-consuming, [ 21 ] and it is not useful when an individual is trying to learn and remember the real world. [ 21 ] This and other mnemonic techniques are effective because they allow learners to apply their own knowledge to increase memory recall . [ 21 ] Psychopharmacology is the scientific study of the actions of drugs and their effects on mood , sensation , thought , and behavior .
There is evidence that aspects of memory can be improved by acting on selective neurotransmitter systems, such as the cholinergic system , which releases acetylcholine; this may have therapeutic benefits for patients with cognitive disorders. [ 34 ] Findings from studies have indicated that acute administration of nicotine can improve cognitive performance (particularly on tasks that require attention), short-term episodic memory and prospective memory task performance. Chronic use of low-dose nicotine in animals has been found to increase the number of neuronal nicotinic acetylcholine receptors (nAChRs) and improve performance on learning and memory tasks. [ 35 ] Short-term nicotine treatment using nicotine skin patches has shown that it may be possible to improve cognitive performance in a variety of groups, such as normal non-smoking adults, Alzheimer's disease patients, schizophrenics , and adults with attention-deficit hyperactivity disorder . [ 36 ] Similarly, evidence suggests that smoking improves visuospatial working memory impairments in schizophrenic patients, which may explain the high rate of tobacco smoking among people with schizophrenia. [ 37 ] Meditation , a form of mental training to focus attention, [ 12 ] has been shown to increase control over brain resource distribution, improving both attention and self-regulation . [ 13 ] The changes are potentially long-lasting, as meditation may strengthen neuronal circuits as selective attention improves. [ 38 ] Meditation may also expand the brain's limited cognitive capacity , affecting the way in which stimuli are processed. [ 12 ] Meditation practice has also been associated with physical changes in brain structure: magnetic resonance imaging (MRI) of Buddhist insight meditation practitioners who practiced mindfulness meditation found an increase in cortical thickness and hippocampus volume compared to a control group. [ 39 ] This research provides evidence that practicing meditation promotes neural plasticity and experience-dependent cortical plasticity. [ 40 ] Mindfulness, which is also known to increase openness to experience out of curiosity, interest and acceptance, [ 41 ] can increase one's capacity for focus and momentary awareness. Research shows that mindfulness can improve memory and influences stress-processing pathways in the amygdala and prefrontal cortex. [ 42 ] Mindfulness meditation works in association with the sympathetic nervous system (SNS) to regulate the hypothalamic-pituitary-adrenal (HPA) system and the sympathomedullary pathway (SAM) to maintain homeostasis in stress-reactive physiology. [ 43 ] In both human and animal studies, exercise has been shown to improve cognitive performance on encoding and retrieval tasks. Morris water maze and radial arm water maze studies of rodents found that, when compared to sedentary animals, exercised mice showed improved performance traversing the water maze and improved memory of the location of an escape platform. [ 44 ] Human studies have shown that cognitive performance is improved by physiological arousal, which speeds mental processes and improves memory storage and retrieval. [ 45 ] Ongoing exercise interventions have been found to favorably impact memory processes in older adults [ 46 ] and children. [ 47 ] Exercise has been found to positively regulate hippocampal neurogenesis , [ 48 ] which is considered one explanation for the positive influence of physical activity on memory performance.
Hippocampus-dependent learning can promote the survival of newborn neurons, which may serve as a foundation for the formation of new memories. [ 49 ] Exercise has been found to increase the level of brain-derived neurotrophic factor (BDNF) protein in rats, with elevated BDNF levels corresponding to strengthened performance on memory tasks. Data also suggest that BDNF availability at the beginning of cognitive testing is related to the overall acquisition of a new cognitive task and may be important in determining the strength of recall in memory tasks. [ 44 ] A meta-analysis concluded that resistance training , as compared to cardiovascular exercise, had no measurable effect on working memory. [ 50 ] Some evidence shows that the amount of effort put into exercising is positively correlated with the level of cognitive performance after working out, in both the short term and the long term. [ 51 ] Aristotle wrote a treatise about memory, De memoria et reminiscentia . To improve recollection, he advised that a systematic search should be made and that practice was helpful. He suggested grouping the items to be remembered in threes and then concentrating upon the central member of each triad. [ 52 ] Playing music has recently gained attention as a possible way to promote brain plasticity, and results suggest that learning music can improve different aspects of memory. Children who participated in one year of instrumental musical training showed improved verbal memory, whereas no such improvement was shown in children who discontinued musical training. [ 53 ] Similarly, adults with no previous musical training who participated in individualized piano instruction showed improved performance on tasks designed to test attention and working memory, compared to a healthy control group. [ 54 ] Evidence suggests that the improvements to verbal, working and long-term memory associated with musical training result from the enhanced verbal rehearsal mechanisms musicians possess. [ 55 ] Another study tested how learning a new activity impacts the memory and mental control of elderly patients. [ 56 ] The patients were divided into five groups that each spent 15 hours a week doing one of five different activities: learning digital photography, quilting , learning both digital photography and quilting, socializing with others, or doing solitary activities by themselves. It was found that all groups improved with regard to mental control and that learning new skills led to improved episodic memory. [ 56 ] Physical memory aids, which are typically worn on the wrist or finger, can help the user remember something they might otherwise forget. Such aids can be used by people with memory loss; typical memory aids for people with Alzheimer's include sticky notes and color-coded memory aids. [ 57 ] Tying a string around one's finger is a traditional way to remember something. [ 58 ] [ 59 ] One school yearbook from 1849 suggested that a string tied around a finger or a knot tied in the corner of a handkerchief was used by students to remember something important. [ 60 ] The oldest documented legend of a string used as a memory aid is the myth of Ariadne's thread , which describes Ariadne presenting a thread to her lover, Theseus, so that he could find his way out of the Minotaur's labyrinth. The knot-in-the-handkerchief memory aid was used by German philosopher Martin Heidegger.
[ 61 ] A memory clamp (also called a "reality clamp") is a generic name for a type of physical memory aid worn on the wrist or finger to help the user remember something they might otherwise forget. It was originally invented by physicist Rick Yukon, who gave it a deliberately intrusive shape and size so that its appearance would be difficult to ignore. [ 62 ] [ 63 ] Memory clamps are designed to be visually difficult to ignore, typically with bright colors and sometimes contrasting base colors, and to cause a slight amount of visual and physical discomfort, so that the user maintains at least partial awareness of the intrusion. A memory clamp is designed to be worn intermittently, so that the user does not become accustomed to it. [ 62 ] Other methods for remembering things include writing on one's own hand, sending a text message to oneself, or using sticky notes . [ 64 ] Wrist-worn, finger-worn and ankle-worn memory aids have been used for hundreds of years. [ 65 ]
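The spacing effect described earlier in this article is the basis of spaced-repetition study schedules. As a rough illustration only, a minimal scheduler might look like the following Python sketch; the starting interval (1 day) and growth factor (2.5) are arbitrary illustrative choices, not values drawn from the memory literature:

```python
from datetime import date, timedelta

# Minimal spaced-repetition scheduler illustrating the spacing effect.
# The starting interval and growth factor are illustrative assumptions,
# not empirically validated parameters.

def next_review(last_interval_days: float, remembered: bool) -> float:
    """Return the number of days to wait before the next review."""
    if remembered:
        return max(1.0, last_interval_days * 2.5)  # lengthen the gap
    return 1.0  # forgotten: restart with a short gap

if __name__ == "__main__":
    interval = 1.0
    today = date.today()
    # Simulate five successful reviews: gaps grow 1, 2.5, 6.25, ... days.
    for review in range(5):
        today += timedelta(days=round(interval))
        print(f"review {review + 1} on {today} (gap ~{interval:.1f} days)")
        interval = next_review(interval, remembered=True)
```

The design mirrors the finding quoted above that longer breaks between study sessions are associated with better retention: each successful recall earns a longer gap before the next review.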
https://en.wikipedia.org/wiki/Memory_improvement
Memory of Mankind on the Moon [ 1 ] was a time capsule that was launched onboard Astrobotic Technology's Peregrine lander . [ 2 ] [ 3 ] It was made in collaboration with Hungarian company Puli Space Technologies [ 4 ] [ 5 ] and Memory of Mankind .
https://en.wikipedia.org/wiki/Memory_of_Mankind_on_the_Moon
Memory operations per second , or MOPS , is a metric expressing the performance capacity of semiconductor memory . It can also be used to gauge the efficiency of RAM in the Windows operating environment. [ 1 ] [ 2 ] MOPS can be reduced when multiple applications are open at once without adequate job scheduling . [ 3 ]
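As a rough illustration of how an operations-per-second figure can be estimated, the following Python sketch times a large number of sequential writes to a preallocated buffer. This is a naive, interpreter-bound micro-benchmark written for illustration, not a standardized MOPS measurement:

```python
import time

# Naive estimate of memory operations per second: time many sequential
# writes into a preallocated buffer. In CPython the loop overhead dominates,
# so this measures far less than the hardware's true capability.

def estimate_mops(n_ops: int = 10_000_000) -> float:
    buf = bytearray(n_ops)          # preallocated 10 MB buffer
    start = time.perf_counter()
    for i in range(n_ops):
        buf[i] = 1                  # one memory write per iteration
    elapsed = time.perf_counter() - start
    return n_ops / elapsed / 1e6    # millions of memory ops per second

if __name__ == "__main__":
    print(f"~{estimate_mops():.1f} million memory writes/s (interpreter-bound)")
```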
https://en.wikipedia.org/wiki/Memory_operations_per_second
Memory transfer was a biological process proposed by James V. McConnell and others in the 1960s. The memory transfer hypothesis posits a chemical basis for memory, termed memory RNA , which could be passed on through tissue rather than requiring an intact nervous system. Since RNA encodes information [ 1 ] and living cells produce and modify RNA in reaction to external events, it might also be used in neurons to record stimuli. [ 2 ] [ 3 ] [ 4 ] This was proposed as an explanation for the results of McConnell's experiments, in which planarians retained memory of acquired information after regeneration . In McConnell's experiments, he classically conditioned planarians to contract their bodies upon exposure to light by pairing the light with an electric shock. [ 5 ] [ 6 ] The planarians retained this acquired information after being sliced and regenerated , even after multiple slicings produced a planarian in which none of the original trained tissue was present. [ 6 ] The same held true after trained planarians were ground up and fed to untrained cannibalistic planarians, usually Dugesia dorotocephala . [ 6 ] [ 7 ] As the nervous system was fragmented but the nucleic acids were not, this seemed to indicate the existence of memory RNA. [ 6 ] Some further experiments seemed to support the original findings in suggesting that some memories may be stored outside the brain, [ 1 ] [ 8 ] [ 9 ] but McConnell's experiments proved to be largely irreproducible, and it was later suggested that only sensitization was transferred, [ 5 ] or that no transfer occurred and the effect was due to stress hormones in the donor or pheromone trails left on dirty lab glassware. [ 2 ] Memory transfer through memory RNA is not currently a well-accepted explanation for the planarian behavior. [ 6 ]
https://en.wikipedia.org/wiki/Memory_transfer
In probability and statistics , memorylessness is a property of probability distributions . It describes situations where previous failures or elapsed time do not affect future trials or further wait time. Only the geometric and exponential distributions are memoryless. A random variable X is memoryless if

Pr(X > t + s | X > s) = Pr(X > t),

where Pr is computed from its probability mass function or probability density function when X is discrete or continuous respectively, and t and s are nonnegative numbers. [ 1 ] [ 2 ] In discrete cases, the definition describes the first success in an infinite sequence of independent and identically distributed Bernoulli trials , like the number of coin flips until landing heads. [ 3 ] In continuous situations, memorylessness models random phenomena, like the time between two earthquakes. [ 4 ] The memorylessness property asserts that the number of previously failed trials or the elapsed time is independent of, and has no effect on, future trials or the remaining wait time. The equality characterizes the geometric and exponential distributions in discrete and continuous contexts respectively: [ 1 ] [ 5 ] the geometric random variable is the only discrete memoryless distribution and the exponential random variable is the only continuous memoryless distribution. In discrete contexts, the definition is altered to Pr(X > t + s | X ≥ s) = Pr(X > t) when the geometric distribution starts at 0 instead of 1, so that the equality is still satisfied. [ 6 ] [ 7 ] If a continuous probability distribution is memoryless, then it must be the exponential distribution. From the memorylessness property,

Pr(X > t + s | X > s) = Pr(X > t).

The definition of conditional probability reveals that

Pr(X > t + s) / Pr(X > s) = Pr(X > t).

Rearranging the equality with the survival function , S(t) = Pr(X > t), gives

S(t + s) = S(t) S(s).

This implies that for any natural number k,

S(kt) = S(t)^k.

Similarly, by dividing the input of the survival function and taking the k-th root,

S(t/k) = S(t)^(1/k).

In general, the equality is true for any rational number in place of k. Since the survival function is continuous and rational numbers are dense in the real numbers (in other words, there is always a rational number arbitrarily close to any real number), the equality also holds for the reals. As a result,

S(t) = S(1)^t = e^(t ln S(1)) = e^(−λt),

where λ = −ln S(1) ≥ 0. This is the survival function of the exponential distribution. [ 5 ]
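The defining identity is easy to check numerically. The following Python sketch is an informal Monte Carlo verification for the exponential distribution; the rate and the thresholds s and t are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo check of memorylessness for the exponential distribution.
# The rate (lam) and the thresholds s, t are arbitrary illustrative choices.
rng = np.random.default_rng(0)
lam, s, t = 0.5, 2.0, 3.0
x = rng.exponential(scale=1 / lam, size=1_000_000)

survived_s = x[x > s]                # condition on X > s
lhs = np.mean(survived_s > s + t)    # Pr(X > t + s | X > s)
rhs = np.mean(x > t)                 # Pr(X > t)
print(f"conditional: {lhs:.4f}  unconditional: {rhs:.4f}  "
      f"exact: {np.exp(-lam * t):.4f}")
# All three agree up to sampling noise, illustrating the defining identity.
```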
https://en.wikipedia.org/wiki/Memorylessness
A memristor ( / ˈ m ɛ m r ɪ s t ər / ; a portmanteau of memory resistor ) is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage . It was described and named in 1971 by Leon Chua , completing a theoretical quartet of fundamental electrical components which also comprises the resistor , capacitor and inductor . [ 1 ] Chua and Kang later generalized the concept to memristive systems . [ 2 ] Such a system comprises a circuit, of multiple conventional components, which mimics key properties of the ideal memristor component and is also commonly referred to as a memristor. Several such memristor system technologies have been developed, notably ReRAM . The identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated. [ 3 ] [ 4 ] In his 1971 paper, Chua identified a theoretical symmetry between the non-linear resistor (voltage vs. current), non-linear capacitor (voltage vs. charge), and non-linear inductor (magnetic flux linkage vs. current). From this symmetry he inferred the characteristics of a fourth fundamental non-linear circuit element, linking magnetic flux and charge, which he called the memristor. In contrast to a linear (or non-linear) resistor, the memristor has a dynamic relationship between current and voltage, including a memory of past voltages or currents. Other scientists had proposed dynamic memory resistors, such as the memistor of Bernard Widrow, but Chua introduced a mathematical generality. The memristor was originally defined in terms of a non-linear functional relationship between magnetic flux linkage Φm(t) and the amount of electric charge that has flowed, q(t): [ 1 ]

f(Φm(t), q(t)) = 0

The magnetic flux linkage , Φm, is generalized from the circuit characteristic of an inductor. It does not represent a magnetic field here; its physical meaning is discussed below. The symbol Φm may be regarded as the integral of voltage over time. [ 5 ] In the relationship between Φm and q, the derivative of one with respect to the other depends on the value of one or the other, and so each memristor is characterized by its memristance function, describing the charge-dependent rate of change of flux with charge:

M(q) = dΦm / dq

Substituting the flux as the time integral of the voltage, and charge as the time integral of current, the more convenient form is:

M(q(t)) = (dΦm/dt) / (dq/dt) = V(t) / I(t)

To relate the memristor to the resistor, capacitor, and inductor, it is helpful to isolate the term M(q), which characterizes the device, and write it as a differential equation. Considering all meaningful ratios of differentials of I, q, Φm, and V: no device can relate dI to dq, or dΦm to dV, because I is the time derivative of q and Φm is the integral of V with respect to time. It can be inferred from this that memristance is charge-dependent resistance . If M(q(t)) is a constant, then we obtain Ohm's law , R(t) = V(t)/I(t). If M(q(t)) is nontrivial, however, the equation is not equivalent, because q(t) and M(q(t)) can vary with time.
Solving for voltage as a function of time produces

V(t) = M(q(t)) I(t)

This equation reveals that memristance defines a linear relationship between current and voltage, as long as M does not vary with charge. Nonzero current implies time-varying charge. Alternating current , however, may reveal the linear dependence in circuit operation by inducing a measurable voltage without net charge movement, as long as the maximum change in q does not cause much change in M. Furthermore, the memristor is static if no current is applied: if I(t) = 0, we find V(t) = 0 and M(t) is constant. This is the essence of the memory effect. Analogously, we can define a memductance W(φ(t)): [ 1 ]

i(t) = W(φ(t)) v(t)

The power consumption characteristic recalls that of a resistor, I²R:

P(t) = I(t) V(t) = I²(t) M(q(t))

As long as M(q(t)) varies little, such as under alternating current, the memristor will appear as a constant resistor. If M(q(t)) increases rapidly, however, current and power consumption will quickly stop. M(q) is physically restricted to be positive for all values of q (assuming the device is passive and does not become superconductive at some q); a negative value would mean that it would perpetually supply energy when operated with alternating current. In order to understand the nature of memristor function, some knowledge of fundamental circuit-theoretic concepts is useful, starting with the concept of device modeling . [ 6 ] Engineers and scientists seldom analyze a physical system in its original form; instead, they construct a model which approximates the behaviour of the system. By analyzing the behaviour of the model, they hope to predict the behaviour of the actual system. The primary reason for constructing models is that physical systems are usually too complex to be amenable to practical analysis. In the 20th century, work was done on devices in which researchers did not recognize the memristive characteristics. This has raised the suggestion that such devices should be recognised as memristors. [ 6 ] Pershin and Di Ventra [ 3 ] have proposed a test that can help to resolve some of the long-standing controversies about whether an ideal memristor does actually exist or is a purely mathematical concept. The rest of this article primarily addresses memristors as related to ReRAM devices, since the majority of work since 2008 has been concentrated in this area. In a 1974 MIT technical report, [ 7 ] Paul Penfield mentions the memristor in connection with Josephson junctions . This was an early use of the word "memristor" in the context of a circuit device. One of the terms in the current through a Josephson junction is of the form

i_M(v) = ε cos(φ0) v = W(φ0) v

where ε is a constant based on the physical superconducting materials, v is the voltage across the junction and i_M is the current through the junction. Through the late 20th century, research regarding this phase-dependent conductance in Josephson junctions was carried out. [ 8 ] [ 9 ] [ 10 ] [ 11 ] A more comprehensive approach to extracting this phase-dependent conductance appeared with Peotta and Di Ventra's seminal paper in 2014.
[ 12 ] Due to the practical difficulty of studying the ideal memristor, we will discuss other electrical devices which can be modelled using memristors. For a mathematical description of memristive devices (systems), see § Theory . A discharge tube can be modelled as a memristive device, with resistance being a function of the number of conduction electrons n_e: [ 2 ]

v_M = R(n_e) i_M
dn_e/dt = β n_e + α R(n_e) i_M²

where v_M is the voltage across the discharge tube, i_M is the current flowing through it, and n_e is the number of conduction electrons. A simple memristance function is R(n_e) = F / n_e. The parameters α, β, and F depend on the dimensions of the tube and the gas filling. An experimental identification of memristive behaviour is the "pinched hysteresis loop" in the v–i plane. [ a ] [ 13 ] [ 14 ] Thermistors can be modeled as memristive devices: [ 14 ]

v = R_0(T_0) exp[β (1/T − 1/T_0)] i ≡ R(T) i
dT/dt = (1/C) [−δ (T − T_0) + R(T) i²]

where β is a material constant, T is the absolute body temperature of the thermistor, T_0 is the ambient temperature (both temperatures in kelvins), R_0(T_0) denotes the cold-temperature resistance at T = T_0, C is the heat capacitance and δ is the dissipation constant for the thermistor. A fundamental phenomenon that has hardly been studied is memristive behaviour in p–n junctions . [ 15 ] The memristor plays a crucial role in mimicking the charge-storage effect in the diode base, and is also responsible for the conductivity-modulation phenomenon (which is so important during forward transients). In 2008, a team at HP Labs found experimental evidence for Chua's memristor based on an analysis of a thin film of titanium dioxide , thus connecting the operation of ReRAM devices to the memristor concept. According to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history, the so-called non-volatility property . [ 16 ] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again. [ 17 ] [ 18 ] The HP Labs result was published in the scientific journal Nature . [ 17 ] [ 19 ] Following this claim, Leon Chua argued that the memristor definition could be generalized to cover all forms of two-terminal non-volatile memory devices based on resistance-switching effects. [ 16 ] Chua also argued that the memristor is the oldest known circuit element , with its effects predating the resistor , capacitor , and inductor . [ 20 ] However, there are doubts as to whether a memristor can actually exist in physical reality. [ 21 ] [ 22 ] [ 23 ] [ 24 ] Additionally, some experimental evidence contradicts Chua's generalization, since a non-passive nanobattery effect is observable in resistance-switching memory.
[ 25 ] A simple test has been proposed by Pershin and Di Ventra [ 3 ] to analyze whether such an ideal or generic memristor does actually exist or is a purely mathematical concept. Up to now, there seems to be no experimental resistance-switching device ( ReRAM ) which can pass the test. [ 3 ] [ 4 ] These devices are intended for applications in nanoelectronic memory devices, computer logic, and neuromorphic /neuromemristive computer architectures. [ 26 ] [ 27 ] In 2013, Hewlett-Packard CTO Martin Fink suggested that memristor memory might become commercially available as early as 2018. [ 28 ] In March 2012, a team of researchers from HRL Laboratories and the University of Michigan announced the first functioning memristor array built on a CMOS chip. [ 29 ] According to the original 1971 definition, the memristor is the fourth fundamental circuit element, forming a non-linear relationship between electric charge and magnetic flux linkage. In 2011, Chua argued for a broader definition that includes all two-terminal non-volatile memory devices based on resistance switching. [ 16 ] Williams argued that MRAM , phase-change memory and ReRAM are memristor technologies. [ 32 ] Some researchers argued that biological structures such as blood [ 33 ] and skin [ 34 ] [ 35 ] fit the definition. Others argued that the memory device under development by HP Labs and other forms of ReRAM are not memristors, but rather part of a broader class of variable-resistance systems, [ 36 ] and that a broader definition of memristor is a scientifically unjustifiable land grab that favored HP's memristor patents. [ 37 ] In 2011, Meuffels and Schroeder noted that one of the early memristor papers included a mistaken assumption regarding ionic conduction. [ 38 ] In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors. [ 21 ] They indicated inadequacies in the electrochemical modelling presented in the Nature article "The missing memristor found", [ 17 ] because the impact of concentration-polarization effects on the behavior of metal– TiO 2−x –metal structures under voltage or current stress was not considered. [ 25 ] In a kind of thought experiment , Meuffels and Soni [ 21 ] furthermore revealed a severe inconsistency: if a current-controlled memristor with the so-called non-volatility property [ 16 ] existed in physical reality, its behavior would violate Landauer's principle , which places a limit on the minimum amount of energy required to change "information" states of a system. This critique was finally adopted by Di Ventra and Pershin [ 22 ] in 2013. Within this context, Meuffels and Soni [ 21 ] pointed to a fundamental thermodynamic principle: non-volatile information storage requires the existence of free-energy barriers that separate the distinct internal memory states of a system from each other; otherwise, one would be faced with an "indifferent" situation, and the system would arbitrarily fluctuate from one memory state to another just under the influence of thermal fluctuations . When unprotected against thermal fluctuations , the internal memory states exhibit some diffusive dynamics, which causes state degradation. [ 22 ] The free-energy barriers must therefore be high enough to ensure a low bit-error probability of bit operation. [ 39 ] Consequently, there is always a lower limit to the energy requirement, depending on the required bit-error probability, for intentionally changing a bit value in any memory device.
[ 39 ] [ 40 ] In the general concept of a memristive system, the defining equations are (see § Theory ):

y(t) = g(x, u, t) u(t),
ẋ = f(x, u, t),

where u(t) is an input signal and y(t) is an output signal. The vector x represents a set of n state variables describing the different internal memory states of the device, and ẋ is the time-dependent rate of change of the state vector x. When one wants to go beyond mere curve fitting and aims at a real physical modeling of non-volatile memory elements, e.g., resistive random-access memory devices, one has to keep an eye on the aforementioned physical correlations. To check the adequacy of the proposed model and its resulting state equations, the input signal u(t) can be superposed with a stochastic term ξ(t), which takes into account the existence of inevitable thermal fluctuations . The dynamic state equation in its general form then reads:

ẋ = f(x, u(t) + ξ(t), t),

where ξ(t) is, e.g., white Gaussian current or voltage noise . On the basis of an analytical or numerical analysis of the time-dependent response of the system to noise, a decision on the physical validity of the modeling approach can be made, e.g., whether the system would be able to retain its memory states in power-off mode. Such an analysis was performed by Di Ventra and Pershin [ 22 ] with regard to the genuine current-controlled memristor. As the proposed dynamic state equation provides no physical mechanism enabling such a memristor to cope with inevitable thermal fluctuations, a current-controlled memristor would erratically change its state in the course of time just under the influence of current noise. [ 22 ] [ 41 ] Di Ventra and Pershin [ 22 ] thus concluded that memristors whose resistance (memory) states depend solely on the current or voltage history would be unable to protect their memory states against unavoidable Johnson–Nyquist noise and would permanently suffer from information loss, a so-called "stochastic catastrophe". A current-controlled memristor can thus not exist as a solid-state device in physical reality. The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. "resistance-switching" memory devices ( ReRAM )) cannot be associated with the memristor concept, i.e., such devices cannot by themselves remember their current or voltage history. Transitions between distinct internal memory or resistance states are of a probabilistic nature. The probability for a transition from state { i } to state { j } depends on the height of the free-energy barrier between the two states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by "lowering" the free-energy barrier for the transition { i }→{ j } by means of, for example, an externally applied bias. A "resistance switching" event can simply be enforced by setting the external bias to a value above a certain threshold. This is the trivial case in which the free-energy barrier for the transition { i }→{ j } is reduced to zero.
If a bias below the threshold value is applied, there is still a finite probability that the device will switch in the course of time (triggered by a random thermal fluctuation), but, as one is dealing with probabilistic processes, it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching ( ReRAM ) processes. If the free-energy barriers are not high enough, the memory device can even switch spontaneously, without any applied bias. When a two-terminal non-volatile memory device is found to be in a distinct resistance state { j }, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems. An additional thermodynamic curiosity arises from the definition that memristors/memristive devices should energetically act like resistors. The instantaneous electrical power entering such a device is completely dissipated as Joule heat to the surroundings, so no extra energy remains in the system after it has been brought from one resistance state x_i to another one x_j. Thus, the internal energy of the memristor device in state x_i, U(V, T, x_i), would be the same as in state x_j, U(V, T, x_j), even though these different states would give rise to different device resistances, which must themselves be caused by physical alterations of the device's material. Other researchers have noted that memristor models based on the assumption of linear ionic drift do not account for the asymmetry between set time (high-to-low resistance switching) and reset time (low-to-high resistance switching), and do not provide ionic mobility values consistent with experimental data. Non-linear ionic-drift models have been proposed to compensate for this deficiency. [ 42 ] A 2014 article by ReRAM researchers concluded that Strukov's (HP's) initial/basic memristor modeling equations do not reflect the actual device physics well, whereas subsequent (physics-based) models, such as Pickett's model or Menzel's ECM model (Menzel is a co-author of that article), have adequate predictability but are computationally prohibitive. As of 2014, the search continued for a model that balances these issues; the article identifies Chang's and Yakopcic's models as potentially good compromises. [ 43 ] Martin Reynolds, an electrical engineering analyst with research firm Gartner , commented that while HP was being sloppy in calling their device a memristor, critics were being pedantic in saying that it was not a memristor. [ 44 ] Chua suggested experimental tests to determine if a device may properly be categorized as a memristor. [ 2 ] According to Chua, [ 45 ] [ 46 ] all resistive switching memories, including ReRAM , MRAM and phase-change memory , meet these criteria and are memristors. However, the lack of data for the Lissajous curves over a range of initial conditions or over a range of frequencies complicates assessments of this claim. Experimental evidence shows that redox-based resistance memory ( ReRAM ) includes a nanobattery effect that is contrary to Chua's memristor model. This indicates that the memristor theory needs to be extended or corrected to enable accurate ReRAM modeling. [ 25 ] In 2008, researchers from HP Labs introduced a model for a memristance function based on thin films of titanium dioxide .
In 2008, researchers from HP Labs introduced a model for a memristance function based on thin films of titanium dioxide . [ 17 ] For R on ≪ R off the memristance function was determined to be $M(q(t)) = R_\mathrm{off}\left(1 - \frac{\mu_v R_\mathrm{on}}{D^2}\, q(t)\right)$, where R off represents the high resistance state, R on represents the low resistance state, μ v represents the mobility of dopants in the thin film, and D represents the film thickness. The HP Labs group noted that "window functions" were necessary to compensate for differences between experimental measurements and their memristor model due to non-linear ionic drift and boundary effects. For some memristors, applied current or voltage causes substantial change in resistance. Such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance. This assumes that the applied voltage remains constant. Solving for energy dissipation during a single switching event reveals that for a memristor to switch from R on to R off in time T on to T off , the charge must change by ΔQ = Q on − Q off :

$E_\mathrm{switch} = V^2 \int_{T_\mathrm{off}}^{T_\mathrm{on}} \frac{\mathrm{d}t}{M(q(t))} = V^2 \int_{Q_\mathrm{off}}^{Q_\mathrm{on}} \frac{\mathrm{d}q}{I(q)\,M(q)} = V^2 \int_{Q_\mathrm{off}}^{Q_\mathrm{on}} \frac{\mathrm{d}q}{V(q)} = V\,\Delta Q$

Substituting V = I ( q ) M ( q ), and then ∫dq/V = ΔQ/V for constant V , produces the final expression. This power characteristic differs fundamentally from that of a metal oxide semiconductor transistor , which is capacitor-based. Unlike the transistor, the final state of the memristor in terms of charge does not depend on bias voltage. The type of memristor described by Williams ceases to be ideal after switching over its entire resistance range, creating hysteresis , also called the "hard-switching regime". [ 17 ] Another kind of switch would have a cyclic M ( q ) so that each off-on event would be followed by an on-off event under constant bias. Such a device would act as a memristor under all conditions, but would be less practical. In the more general concept of an n -th order memristive system the defining equations are $y(t) = g(\mathbf{x}, u, t)\,u(t)$, $\dot{\mathbf{x}} = f(\mathbf{x}, u, t)$, where u ( t ) is an input signal, y ( t ) is an output signal, the vector x represents a set of n state variables describing the device, and g and f are continuous functions . For a current-controlled memristive system the signal u ( t ) represents the current signal i ( t ) and the signal y ( t ) represents the voltage signal v ( t ). For a voltage-controlled memristive system the signal u ( t ) represents the voltage signal v ( t ) and the signal y ( t ) represents the current signal i ( t ). The pure memristor is a particular case of these equations, namely when x depends only on charge ( x = q ), since the charge is related to the current via the time derivative d q /d t = i ( t ). Thus for pure memristors f (i.e., the rate of change of the state) must be equal or proportional to the current i ( t ). One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. [ 47 ]
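As a concrete illustration of the linear ionic-drift model above, the following sketch (with arbitrary, illustrative parameter values) drives M(q) with a sinusoidal voltage and integrates dq/dt = i = v/M(q); since i = 0 whenever v = 0, the resulting i–v loop is pinched at the origin:

```python
import numpy as np

# Sketch of the HP linear ionic-drift model quoted above:
#   M(q) = R_off * (1 - mu_v * R_on / D**2 * q)
# driven by a sinusoidal voltage. All parameter values are illustrative.

R_off, R_on = 16_000.0, 100.0    # high / low resistance states (ohm)
mu_v = 1e-14                     # dopant mobility, illustrative (m^2 V^-1 s^-1)
D = 10e-9                        # film thickness (m)
k = mu_v * R_on / D**2           # lumped coefficient in M(q), units 1/C

f, dt = 1.0, 1e-5                # drive frequency (Hz) and time step (s)
t = np.arange(0.0, 2.0 / f, dt)  # two full drive periods

q = 0.0
for tk in t:
    v = np.sin(2 * np.pi * f * tk)                          # applied voltage
    M = float(np.clip(R_off * (1.0 - k * q), R_on, R_off))  # memristance
    i = v / M                            # instantaneous Ohm's law
    q += i * dt                          # dq/dt = i: charge integrates current

# The i-v loop is pinched at the origin: whenever v = 0, i = v / M = 0
# regardless of the accumulated charge q.
print(f"net charge after two periods: {q:.3e} C, memristance: {M:.1f} ohm")
```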
For a current-controlled memristive system, the input u ( t ) is the current i ( t ), the output y ( t ) is the voltage v ( t ), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states, a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. [ 48 ] The concept of memristive networks was first introduced by Leon Chua in his 1976 paper "Memristive Devices and Systems". [ 2 ] Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws. A memristive network is a type of artificial neural network that is based on memristive devices, which are electronic components that exhibit the property of memristance. In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. These weights are adjusted during the training process, allowing the network to learn and adapt to new input data. One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. They also have the potential to be more energy efficient than traditional artificial neural networks, as they can store and process information using less power. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations. For the simplest model, with only memristive devices and voltage generators in series, there is an exact, closed-form equation (the Caravelli–Traversa–Di Ventra equation , CTDV) [ 49 ] which describes the evolution of the internal memory of the network for each device. For a simple (though not realistic) memristor model of a switch between two resistance values, given by the Williams–Strukov model $R(x) = R_\mathrm{off}(1-x) + R_\mathrm{on}x$ with $dx/dt = I/\beta - \alpha x$, there is a set of nonlinearly coupled differential equations that takes the form $\dot{\vec{x}} = -\alpha \vec{x} + \frac{1}{\beta}\,(I - \chi \Omega X)^{-1}\, \Omega \vec{S}$, where $X$ is the diagonal matrix with elements $x_i$ on the diagonal, and $\alpha$, $\beta$, $\chi$ are based on the memristors' physical parameters. The vector $\vec{S}$ is the vector of voltage generators in series with the memristors. The circuit topology enters only in the projector operator $\Omega^2 = \Omega$, defined in terms of the cycle matrix of the graph. The equation provides a concise mathematical description of the interactions due to Kirchhoff 's laws. The equation shares many properties with a Hopfield network , such as the existence of Lyapunov functions and classical tunnelling phenomena. [ 50 ]
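The following minimal sketch (parameter values and the source vector are illustrative, and the equation is taken in the form quoted above) integrates the CTDV equation for the smallest nontrivial network, three memristors forming a single loop, where the projector Ω can be written down explicitly from the cycle matrix:

```python
import numpy as np

# Sketch of the Caravelli-Traversa-Di Ventra (CTDV) equation quoted above,
#   dx/dt = -alpha*x + (1/beta) * (I - chi * Omega @ X)^(-1) @ Omega @ s,
# for the smallest nontrivial network: three memristors forming one loop.
# Parameter values and the source vector are illustrative only.

alpha, beta, chi = 1.0, 1.0, 0.9   # decay rate, current scale, resistance contrast
C = np.array([[1.0, 1.0, 1.0]])    # cycle matrix of a 3-edge loop
Omega = C.T @ np.linalg.inv(C @ C.T) @ C   # projector onto the cycle space
s = np.array([0.2, 0.0, 0.0])      # one voltage generator in series

x = np.full(3, 0.5)                # internal memory states, window [0, 1]
I3 = np.eye(3)
dt, steps = 1e-2, 2000

for _ in range(steps):
    X = np.diag(x)
    drive = np.linalg.solve(I3 - chi * Omega @ X, Omega @ s)  # interaction term
    x = np.clip(x + dt * (-alpha * x + drive / beta), 0.0, 1.0)

print("steady memory states:", np.round(x, 3))
# Omega @ Omega == Omega: the coupling acts only inside the cycle space
# fixed by Kirchhoff's voltage law, which is how the topology enters.
```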
In the context of memristive networks, the CTDV equation may be used to predict the behavior of memristive devices under different operating conditions, or to design and optimize memristive circuits for specific applications. Some researchers have raised the question of the scientific legitimacy of HP's memristor models in explaining the behavior of ReRAM [ 36 ] [ 37 ] and have suggested extended memristive models to remedy perceived deficiencies. [ 25 ] One example [ 51 ] attempts to extend the memristive systems framework by including dynamic systems incorporating higher-order derivatives of the input signal u ( t ) as a series expansion, where m is a positive integer bounding the order of the derivatives, u ( t ) is an input signal, y ( t ) is an output signal, the vector x represents a set of n state variables describing the device, and the functions g and f are continuous functions . This equation produces the same zero-crossing hysteresis curves as memristive systems, but with a different frequency response than that predicted by memristive systems. Another example suggests including an offset value a to account for an observed nanobattery effect, which violates the predicted zero-crossing pinched hysteresis effect. [ 25 ] There exist implementations of memristors with a hysteretic current-voltage curve, or with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Memristors with a hysteretic current-voltage curve use a resistance dependent on the history of the current and voltage, and bode well for the future of memory technology due to their simple structure, high energy efficiency, and high integration [DOI: 10.1002/aisy.202200053]. Interest in the memristor revived when an experimental solid-state version was reported by R. Stanley Williams of Hewlett Packard in 2007. [ 52 ] [ 53 ] [ 54 ] The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. The device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current. Although not cited in HP's initial reports on their TiO 2 memristor, the resistance switching characteristics of titanium dioxide were originally described in the 1960s. [ 55 ] The HP device is composed of a thin (50 nm ) titanium dioxide film between two 5 nm thick electrodes , one titanium , the other platinum . Initially, there are two layers to the titanium dioxide film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers , meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift (see Fast-ion conductor ), changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole is dependent on how much charge has been passed through it in a particular direction, an effect which is reversible by changing the direction of current. [ 17 ] Since the HP device displays fast-ion conduction at the nanoscale, it is considered a nanoionic device . [ 56 ] Memristance is displayed only when both the doped layer and depleted layer contribute to resistance. When enough charge has passed through the memristor that the ions can no longer move, the device enters hysteresis .
It ceases to integrate q = ∫ I d t , but rather keeps q at an upper bound and M fixed, thus acting as a constant resistor until the current is reversed. Memory applications of thin-film oxides had been an area of active investigation for some time. IBM published an article in 2000 regarding structures similar to that described by Williams. [ 57 ] Samsung has a U.S. patent for oxide-vacancy based switches similar to that described by Williams. [ 58 ] In April 2010, HP labs announced that they had practical memristors working at 1 ns (~1 GHz) switching times and 3 nm by 3 nm sizes, [ 59 ] which bodes well for the future of the technology. [ 60 ] At these densities it could easily rival the current sub-25 nm flash memory technology. Memristance appears to have been reported in nanoscale thin films of silicon dioxide as early as the 1960s. [ 61 ] However, hysteretic conductance in silicon was associated with memristive effects only in 2009. [ 62 ] More recently, beginning in 2012, Tony Kenyon, Adnan Mehonic and their group demonstrated that the resistive switching in silicon oxide thin films is due to the formation of oxygen vacancy filaments in defect-engineered silicon dioxide, directly probing the movement of oxygen under electrical bias and imaging the resultant conductive filaments using conductive atomic force microscopy. [ 63 ] In 2004, Krieger and Spitzer described dynamic doping of polymer and inorganic dielectric-like materials that improved the switching characteristics and retention required to create functioning nonvolatile memory cells. [ 64 ] They used a passive layer between the electrode and the active thin films, which enhanced the extraction of ions from the electrode. It is possible to use a fast-ion conductor as this passive layer, which allows a significant reduction of the ionic extraction field. In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor. [ 65 ] In 2010, Alibart, Gamrat, Vuillaume et al. [ 66 ] introduced a new hybrid organic/ nanoparticle device (the NOMFET : Nanoparticle Organic Memory Field Effect Transistor), which behaves as a memristor [ 67 ] and which exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (an associative memory showing Pavlovian learning). [ 68 ] In 2012, Crupi, Pradhan and Tozer described a proof-of-concept design to create neural synaptic memory circuits using organic ion-based memristors. [ 69 ] The synapse circuit demonstrated long-term potentiation for learning as well as inactivity-based forgetting. Using a grid of circuits, a pattern of light was stored and later recalled. This mimics the behavior of the V1 neurons in the primary visual cortex that act as spatiotemporal filters that process visual signals such as edges and moving lines. In 2012, Erokhin and co-authors demonstrated a stochastic three-dimensional matrix with capabilities for learning and adapting based on polymeric memristors. [ 70 ] In 2014, Bessonov et al. reported a flexible memristive device comprising a MoO x / MoS 2 heterostructure sandwiched between silver electrodes on a plastic foil. [ 71 ] The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost.
The memristive behaviour of the switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity, and robustness to mechanical deformation promise to emulate the appealing characteristics of biological neural systems in novel computing technologies. Atomristors are defined as electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2018, Ge and Wu et al. [ 72 ] in the Akinwande group at the University of Texas first reported a universal memristive effect in single-layer TMD (MX 2 , M = Mo, W; X = S, Se) atomic sheets based on a vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride , which at around 0.33 nm is the thinnest memory material. [ 73 ] These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD / MOCVD , enabling low-cost fabrication. Afterwards, taking advantage of the low "on" resistance and large on/off ratio, a high-performance zero-power RF switch was demonstrated based on MoS 2 or h-BN atomristors, indicating a new application of memristors for 5G , 6G and THz communication and connectivity systems. [ 74 ] [ 75 ] In 2020, an atomistic understanding of the conductive virtual point mechanism was elucidated in an article in Nature Nanotechnology . [ 76 ] The ferroelectric memristor [ 77 ] is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two-order-of-magnitude resistance variation: R OFF ≫ R ON (an effect called Tunnel Electro-Resistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither R ON nor R OFF , but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved. In 2013, Ageev, Blinov et al. [ 78 ] reported observing a memristor effect in a structure based on vertically aligned carbon nanotubes, studying bundles of CNTs with a scanning tunneling microscope . Later it was found [ 79 ] that CNT memristive switching is observed when a nanotube has a non-uniform elastic strain ΔL ≠ 0. It was shown that the memristive switching mechanism of strained CNTs is based on the formation and subsequent redistribution of non-uniform elastic strain and a piezoelectric field E def in the nanotube under the influence of an external electric field E ( x , t ). Biomaterials have been evaluated for use in artificial synapses and have shown potential for application in neuromorphic systems. [ 80 ]
In particular, the feasibility of using a collagen-based biomemristor as an artificial synaptic device has been investigated, [ 81 ] and a synaptic device based on lignin demonstrated rising or falling current with consecutive voltage sweeps, depending on the sign of the voltage. [ 82 ] Furthermore, natural silk fibroin demonstrated memristive properties, [ 83 ] and spin-memristive systems based on biomolecules are also being studied. [ 84 ] In 2012, Sandro Carrara and co-authors proposed the first biomolecular memristor, with the aim of realizing highly sensitive biosensors. [ 85 ] Since then, several memristive sensors have been demonstrated. [ 86 ] Chen and Wang, researchers at disk-drive manufacturer Seagate Technology , described three examples of possible magnetic memristors. [ 87 ] In one device, resistance occurs when the spin of electrons in one section of the device points in a different direction from those in another section, creating a "domain wall", a boundary between the two sections. Electrons flowing into the device have a certain spin, which alters the device's magnetization state. Changing the magnetization, in turn, moves the domain wall and changes the resistance. The work's significance led to an interview by IEEE Spectrum . [ 88 ] A first experimental proof of the spintronic memristor based on domain wall motion by spin currents in a magnetic tunnel junction was given in 2011. [ 89 ] The magnetic tunnel junction has been proposed to act as a memristor through several potentially complementary mechanisms, both extrinsic (redox reactions, charge trapping/detrapping and electromigration within the barrier) and intrinsic ( spin-transfer torque ). Based on research performed between 1999 and 2003, Bowen et al. published experiments in 2006 on a magnetic tunnel junction (MTJ) endowed with bi-stable spin-dependent states [ 90 ] ( resistive switching ). The MTJ consists of an SrTiO 3 (STO) tunnel barrier that separates a half-metallic oxide LSMO electrode and a ferromagnetic metal CoCr electrode. The MTJ's usual two device resistance states, characterized by a parallel or antiparallel alignment of electrode magnetization, are altered by applying an electric field. When the electric field is applied from the CoCr to the LSMO electrode, the tunnel magnetoresistance (TMR) ratio is positive. When the direction of the electric field is reversed, the TMR is negative. In both cases, large amplitudes of TMR on the order of 30% are found. Since a fully spin-polarized current flows from the half-metallic LSMO electrode, within the Julliere model this sign change suggests a sign change in the effective spin polarization of the STO/CoCr interface. The origin of this multistate effect lies in the observed migration of Cr into the barrier and its oxidation state. The sign change of TMR can originate from modifications to the STO/CoCr interface density of states, as well as from changes to the tunneling landscape at the STO/CoCr interface induced by CrOx redox reactions. Reports of memristive switching within MgO-based MTJs appeared starting in 2008 [ 91 ] and 2009. [ 92 ] While the drift of oxygen vacancies within the insulating MgO layer has been proposed to describe the observed memristive effects, [ 92 ] another explanation could be charge trapping/detrapping on the localized states of oxygen vacancies [ 93 ] and its impact [ 94 ] on spintronics.
This highlights the importance of understanding what role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity [ 95 ] or multiferroicity. [ 96 ] The magnetization state of an MTJ can be controlled by spin-transfer torque , and can thus, through this intrinsic physical mechanism, exhibit memristive behavior. This spin torque is induced by current flowing through the junction, and leads to an efficient means of achieving an MRAM . However, the length of time the current flows through the junction determines the amount of current needed, i.e., charge is the key variable. [ 97 ] The combination of intrinsic (spin-transfer torque) and extrinsic (resistive switching) mechanisms naturally leads to a second-order memristive system described by the state vector x = ( x 1 , x 2 ), where x 1 describes the magnetic state of the electrodes and x 2 denotes the resistive state of the MgO barrier. In this case the change of x 1 is current-controlled (spin torque is due to a high current density) whereas the change of x 2 is voltage-controlled (the drift of oxygen vacancies is due to high electric fields). The presence of both effects in a memristive magnetic tunnel junction led to the idea of a nanoscopic synapse-neuron system. [ 98 ] A fundamentally different mechanism for memristive behavior has been proposed by Pershin and Di Ventra . [ 99 ] [ 100 ] The authors show that certain types of semiconductor spintronic structures belong to a broad class of memristive systems as defined by Chua and Kang. [ 2 ] The mechanism of memristive behavior in such structures is based entirely on the electron spin degree of freedom, which allows for more convenient control than ionic transport in nanostructures. When an external control parameter (such as voltage) is changed, the adjustment of the electron spin polarization is delayed because of diffusion and relaxation processes, causing hysteresis. This result was anticipated in the study of spin extraction at semiconductor/ferromagnet interfaces, [ 101 ] but was not described in terms of memristive behavior. On a short time scale, these structures behave almost as an ideal memristor. [ 1 ] This result broadens the possible range of applications of semiconductor spintronics and represents a step toward future practical applications. In 2017, Kris Campbell formally introduced the self-directed channel (SDC) memristor. [ 102 ] The SDC device is the first memristive device available commercially to researchers, students and electronics enthusiasts worldwide. [ 103 ] The SDC device is operational immediately after fabrication. In the Ge 2 Se 3 active layer, Ge-Ge homopolar bonds are found and switching occurs. The three layers consisting of Ge 2 Se 3 /Ag/Ge 2 Se 3 , directly below the top tungsten electrode, mix together during deposition and jointly form the silver-source layer. A layer of SnSe lies between these two layers, ensuring that the silver-source layer is not in direct contact with the active layer. Since silver does not migrate into the active layer at high temperatures, and the active layer maintains a high glass transition temperature of about 350 °C (662 °F), the device has significantly higher processing and operating temperatures, at 250 °C (482 °F) and at least 150 °C (302 °F) respectively. These processing and operating temperatures are higher than those of most ion-conducting chalcogenide device types, including the S-based glasses (e.g.
GeS) that need to be photodoped or thermally annealed. These factors allow the SDC device to operate over a wide range of temperatures, including long-term continuous operation at 150 °C (302 °F). There exist implementations of memristors with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Such memristors use a memristance dependent on the history of the flux and the charge, and can merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. Time-integrated Formingfree (TiF) memristors reveal both a hysteretic flux-charge curve and a hysteretic current-voltage curve, each with two distinguishable branches in the positive bias range and two distinguishable branches in the negative bias range. The memristance state of a TiF memristor can be controlled by both the flux and the charge [DOI: 10.1063/1.4775718]. A TiF memristor was first demonstrated by Heidemarie Schmidt and her team in 2011 [DOI: 10.1063/1.3601113]. This TiF memristor is composed of a BiFeO 3 thin film between metallically conducting electrodes, one gold, the other platinum. The hysteretic flux-charge curve of the TiF memristor changes its slope continuously in one branch of the positive and one branch of the negative bias range (the write branches) and has a constant slope in the other branch of each range (the read branches) [arXiv:2403.20051]. According to Leon O. Chua, [ 1 ] the slope of the flux-charge curve corresponds to the memristance of a memristor or to its internal state variables. TiF memristors can thus be considered memristors with a constant memristance in the two read branches and a reconfigurable memristance in the two write branches. The physical memristor model which describes the hysteretic current-voltage curves of the TiF memristor implements static and dynamic internal state variables in the two read branches and in the two write branches [arXiv:2402.10358]. The static and dynamic internal state variables of non-linear memristors can be used to implement operations on non-linear memristors representing linear, non-linear, and even transcendental (e.g., exponential or logarithmic) input-output functions. The transport characteristics of the TiF memristor in the small current – small voltage range are non-linear. This non-linearity compares well with the non-linear characteristics, in the small current – small voltage range, of the basic former and present building blocks of the arithmetic logic unit of von Neumann computers, i.e., vacuum tubes and transistors. In contrast to vacuum tubes and transistors, the signal output of hysteretic flux-charge memristors, i.e., of TiF memristors, is not lost when the operating power is switched off before the signal output has been stored to memory. Therefore, hysteretic flux-charge memristors are said to merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365].
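The flux-charge description can be made concrete with a short sketch: integrating voltage and current waveforms gives the flux φ(t) = ∫v dt and the charge q(t) = ∫i dt, and the local slope dφ/dq = v/i is the memristance. The device model below is a generic toy with a flux-dependent conductance, not the BiFeO 3 TiF device itself, and all values are illustrative:

```python
import numpy as np

# Toy illustration of the flux-charge description:
#   flux   phi(t) = integral of v dt
#   charge q(t)   = integral of i dt
#   memristance   = d(phi)/dq = v / i
# The device model is a generic toy with a flux-dependent conductance,
# not the BiFeO3 TiF memristor itself; all values are illustrative.

dt = 1e-4
t = np.arange(0.0, 2.0, dt)
v = np.sin(2 * np.pi * t)                  # synthetic drive voltage

phi = np.cumsum(v) * dt                    # flux: running integral of voltage
g = 1e-3 * (1.0 + 0.5 * np.tanh(phi))      # conductance grows with flux ("write")
i = g * v                                  # current through the device
q = np.cumsum(i) * dt                      # charge: running integral of current

# Local slope of the flux-charge curve, d(phi)/dq = v/i, skipping v ~ 0:
M = v / np.where(np.abs(i) > 1e-9, i, np.nan)
print(f"memristance along the sweep: {np.nanmin(M):.0f} .. {np.nanmax(M):.0f} ohm")
```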
By contrast, the transport characteristics of hysteretic current-voltage memristors in the small current – small voltage range are linear. This explains why hysteretic current-voltage memristors are well-established memory units and why they cannot merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [arXiv:2403.20051]. Memristors remain a laboratory curiosity, as yet made in insufficient numbers to gain any commercial applications. A potential application of memristors is in analog memories for superconducting quantum computers. [ 12 ] Memristors can potentially be fashioned into non-volatile solid-state memory , which could allow greater data density than hard drives with access times similar to DRAM , replacing both components. [ 31 ] HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter, [ 104 ] and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm 3 ). [ 105 ] In May 2008, HP reported that its device then reached about one-tenth the speed of DRAM. [ 106 ] The devices' resistance would be read with alternating current so that the stored value would not be affected. [ 107 ] In May 2012, it was reported that the access time had been improved to 90 nanoseconds, nearly one hundred times faster than contemporaneous flash memory, while consuming just one percent as much energy. [ 108 ] Memristors have applications in programmable logic , [ 109 ] signal processing , [ 110 ] super-resolution imaging , [ 111 ] physical neural networks , [ 112 ] control systems , [ 113 ] reconfigurable computing , [ 114 ] in-memory computing , [ 115 ] brain–computer interfaces , [ 116 ] and RFID . [ 117 ] Memristive devices can potentially be used for stateful logic implication, allowing a replacement for CMOS-based logic computation. [ 118 ] Several early works have been reported in this direction. [ 119 ] [ 120 ] In 2009, a simple electronic circuit [ 121 ] consisting of an LC network and a memristor was used to model experiments on the adaptive behavior of unicellular organisms. [ 122 ] It was shown that, subjected to a train of periodic pulses, the circuit learns and anticipates the next pulse, similar to the behavior of the slime mold Physarum polycephalum , where the viscosity of channels in the cytoplasm responds to periodic changes in the environment. [ 122 ] Applications of such circuits may include, e.g., pattern recognition . Funded by the DARPA SyNAPSE project, HP Labs, in collaboration with the Boston University Neuromorphics Lab, has been developing neuromorphic architectures that may be based on memristive systems. In 2010, Versace and Chandler described the MoNETA (Modular Neural Exploring Traveling Agent) model. [ 123 ] MoNETA is the first large-scale neural network model to implement whole-brain circuits to power a virtual and robotic agent using memristive hardware. [ 124 ] Application of the memristor crossbar structure in the construction of an analog soft computing system was demonstrated by Merrikh-Bayat and Shouraki. [ 125 ] In 2011, they showed [ 126 ] how memristor crossbars can be combined with fuzzy logic to create an analog memristive neuro-fuzzy computing system with fuzzy input and output terminals. Learning is based on the creation of fuzzy relations inspired by the Hebbian learning rule . In 2013, Leon Chua published a tutorial underlining the broad span of complex phenomena and applications that memristors cover, and how they can be used as non-volatile analog memories and can mimic classic habituation and learning phenomena. [ 127 ]
The memistor and memtransistor are transistor-based devices which include memristor function. In 2009, Di Ventra , Pershin, and Chua extended [ 128 ] the notion of memristive systems to capacitive and inductive elements in the form of memcapacitors and meminductors, whose properties depend on the state and history of the system; the framework was further extended in 2013 by Di Ventra and Pershin. [ 22 ] In September 2014, Mohamed-Salah Abdelouahab , Rene Lozi , and Leon Chua published a general theory of 1st-, 2nd-, 3rd-, and n -th-order memristive elements using fractional derivatives . [ 129 ] Sir Humphry Davy is said by some to have performed the first experiments which can be explained by memristor effects as long ago as 1808. [ 20 ] [ 130 ] However, the first device of a related nature to be constructed was the memistor (i.e., memory resistor), a term coined in 1960 by Bernard Widrow to describe a circuit element of an early artificial neural network called ADALINE . A few years later, in 1968, Argall published an article showing the resistance switching effects of TiO 2 , which was later claimed by researchers from Hewlett Packard to be evidence of a memristor. [ 55 ] [ citation needed ] Leon Chua postulated his new two-terminal circuit element in 1971. It was characterized by a relationship between charge and flux linkage as a fourth fundamental circuit element. [ 1 ] Five years later, he and his student Sung Mo Kang generalized the theory of memristors and memristive systems, including a property of zero crossing in the Lissajous curve characterizing current vs. voltage behavior. [ 2 ] On May 1, 2008, Strukov, Snider, Stewart, and Williams published an article in Nature identifying a link between the two-terminal resistance switching behavior found in nanoscale systems and memristors. [ 17 ] On 23 January 2009, Di Ventra , Pershin, and Chua extended the notion of memristive systems to capacitive and inductive elements, namely capacitors and inductors , whose properties depend on the state and history of the system. [ 128 ] In July 2014, the MeMOSat/ LabOSat group [ 131 ] (composed of researchers from Universidad Nacional de General San Martín (Argentina) , INTI, CNEA , and CONICET ) put memory devices into low Earth orbit . [ 132 ] Since then, seven missions with different devices [ 133 ] have been performing experiments in low orbit, onboard Satellogic 's Ñu-Sat satellites. [ 134 ] [ 135 ] [ clarification needed ] On 7 July 2015, Knowm Inc announced Self Directed Channel (SDC) memristors commercially. [ 136 ] These devices remain available in small numbers. On 13 July 2018, MemSat (Memristor Satellite) was launched to fly a memristor evaluation payload. [ 137 ] In 2021, Jennifer Rupp and Martin Bazant of MIT started a "Lithionics" research programme to investigate applications of lithium beyond its use in battery electrodes , including lithium oxide -based memristors in neuromorphic computing . [ 138 ] [ 139 ] In May 2023, TECHiFAB GmbH [https://techifab.com/] announced TiF memristors commercially. [arXiv: 2403.20051, arXiv: 2402.10358] These TiF memristors remain available in small and medium numbers. In the September 2023 issue of Science , Chinese scientists Wenbin Zhang et al. described the development and testing of a memristor-based integrated circuit . [ 140 ]
https://en.wikipedia.org/wiki/Memristance
Menaechmus ( Greek : Μέναιχμος , c. 380 – c. 320 BC) was an ancient Greek mathematician , geometer and philosopher [ 1 ] born in Alopeconnesus or Prokonnesos in the Thracian Chersonese , who was known for his friendship with the renowned philosopher Plato , for his apparent discovery of conic sections , and for his solution to the then-long-standing problem of doubling the cube using the parabola and hyperbola . Menaechmus is remembered by mathematicians for his discovery of the conic sections and his solution to the problem of doubling the cube. [ 2 ] Menaechmus likely discovered the conic sections, that is, the ellipse , the parabola , and the hyperbola , as a by-product of his search for the solution to the Delian problem . [ 3 ] Menaechmus knew that in a parabola the relation $y^2 = Lx$ holds, where L is a constant called the latus rectum , although he was not aware of the fact that any equation in two unknowns determines a curve. [ 4 ] He apparently derived these properties of conic sections and others as well. Using this information, it was now possible to find a solution to the problem of the duplication of the cube by solving for the points at which two conics intersect, a solution equivalent to solving a cubic equation. [ 4 ] In modern notation, let $xy = 1$ be a hyperbola and $y = ax^2 + bx + c$ be a parabola; then their intersections are the solutions to $ax^3 + bx^2 + cx = 1$. Now set $a = 1/2$, $b = 0$, $c = 0$: the intersection then satisfies $x^3 = 2$, so its x-coordinate is the side of a cube of twice the unit volume. [ 5 ] There are few direct sources for Menaechmus's work; his work on conic sections is known primarily from an epigram by Eratosthenes , and the accomplishment of his brother Dinostratus (who devised a method to create a square equal in area to a given circle using the quadratrix ) is known solely from the writings of Proclus . Proclus also mentions that Menaechmus was taught by Eudoxus . There is a curious statement by Plutarch to the effect that Plato disapproved of Menaechmus achieving his doubled cube solution with the use of mechanical devices; the proof currently known appears to be purely algebraic. Menaechmus was said to have been the tutor of Alexander the Great ; this belief derives from the following anecdote: supposedly, once, when Alexander asked him for a shortcut to understanding geometry, he replied "O King, for traveling over the country there are royal roads and roads for common citizens, but in geometry there is one road for all." [ 6 ] However, this quotation is first attested by Stobaeus , about 500 AD, and so whether Menaechmus really taught Alexander is uncertain. Where precisely he died is uncertain as well, though modern scholars believe that he died in Cyzicus .
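To see numerically how Menaechmus's construction doubles the cube, one can intersect the parabola y = x²/2 with the hyperbola xy = 1 (the choice a = 1/2, b = c = 0 above); a simple bisection recovers x = ∛2. The bracketing interval and iteration count below are arbitrary choices:

```python
# Numerical check of Menaechmus's cube duplication: intersecting the
# parabola y = x**2 / 2 with the hyperbola x*y = 1 (a = 1/2, b = c = 0
# in the notation above) gives x**3 = 2, i.e. x is the side of a cube
# of volume 2.

def f(x: float) -> float:
    """Difference between the two curves at abscissa x (x > 0)."""
    parabola = x * x / 2.0
    hyperbola = 1.0 / x
    return parabola - hyperbola

# Bisection on [1, 2]: f(1) < 0 < f(2), so the crossing lies in between.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid

x = (lo + hi) / 2.0
print(f"intersection abscissa x = {x:.12f}")
print(f"x**3 = {x**3:.12f}  (expected: 2, the doubled cube)")
```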
https://en.wikipedia.org/wiki/Menaechmus
Menarche ( / m ə ˈ n ɑːr k i / mə- NAR -kee ; from Ancient Greek μήν (mēn) ' month ' and ἀρχή (arkhē) ' beginning ' ) is the first menstrual cycle , or first menstrual bleeding , in female humans . From both social and medical perspectives, it is often considered the central event of female puberty , as it signals the possibility of fertility . Girls experience menarche at different ages, but the most common age is 12. [ 1 ] [ 2 ] Menarche occurring between the ages of 9 and 14 in the West is considered normal. [ 3 ] The timing of menarche is influenced by female biology , as well as genetic , environmental, and nutritional factors. [ 4 ] The mean age of menarche has declined over the last century, but the magnitude of the decline and the factors responsible remain subjects of contention. The worldwide average age of menarche is very difficult to estimate accurately, and it varies significantly by geographical region, race, ethnicity and other characteristics; menarche occurs mostly during a span of ages from 8 to 16, with a small percentage of girls having menarche by age 10 and the vast majority having it by age 14. [ 3 ] There is a later age of onset in Asian populations compared to the West, but it too is changing with time. For example, a Korean study in 2011 showed an overall average age of 12.7, with around 20% before age 12, and more than 90% by age 14. [ 5 ] A Chinese study from 2014 published in Acta Paediatrica showed similar results (an overall average of age 12.8 in 2005, down to age 12.3 in 2014) and a similar trend in time, but also similar findings about ethnic, cultural, and environmental effects. [ 6 ] The average age of menarche was about 12.7 years in Canada in 2001, [ 7 ] and 12.9 in the United Kingdom . [ 8 ] A study of girls in Istanbul , Turkey, in 2011 found the median age at menarche to be 12.7 years. [ better source needed ] [ 9 ] In the United States, an analysis of 10,590 women aged 15–44 taken from the 2013–2017 round of the CDC's National Survey of Family Growth found a median age of 11.9 years (down from 12.1 in 1995), with a mean of 12.5 years (down from 12.6). [ 10 ] Menarche is the culmination of a series of physiological and anatomical processes of puberty. Menarche is not painful and occurs without warning. [ 12 ] The menstruum , or flow , consists of a combination of fresh and clotted blood with endometrial tissue. Flow may be scanty in amount and might be as little as a single instance of "spotting". Like other menses , menarche may be accompanied by lower abdominal cramps. [ 12 ] For most girls, menarche does not mean that ovulation has occurred. In post-menarchal girls, about 80% of the cycles are anovulatory in the first year after menarche, 50% in the third, and 10% in the sixth year. [ 13 ] Regular ovulation is usually indicated by predictable and consistent intervals between menses, and predictable and consistent patterns of flow (e.g., heaviness or cramping). Continuing ovulation typically requires a body fat percentage of at least 22%. [ 11 ] Not every girl follows the typical pattern. Some girls ovulate prior to their first menstruation . Although unlikely, it is possible for a girl who has engaged in sexual intercourse shortly before her menarche to conceive and become pregnant (delaying her menarche until after the end of her pregnancy, if she carries to full term). Younger age of menarche is not correlated with a younger age of first sexual intercourse. [ 14 ]
When menarche occurs, it confirms that the girl has had a gradual estrogen -induced growth of the uterus , especially the endometrium , and that the "outflow tract" from the uterus, through the cervix to the vagina , is open. When experiencing menarche, the blood flow (colloquially described as having one's "period") can vary from a slow and spotty discharge to a consistent blood flow for 3–7 days. The color of the blood ranges from bright red to brown; this is normal. Periods may be light or heavy. [ 15 ] In very rare instances, menarche may occur at an unusually early age, preceding thelarche and other signs of puberty. This is termed isolated premature menarche , but other causes of vaginal bleeding must be investigated and excluded. [ 16 ] Isolated premature menarche is rarely the first manifestation of precocious puberty . [ citation needed ] When menarche has failed to occur for more than three years after thelarche, or beyond 15 years of age, the delay is referred to as primary amenorrhea . Certain systemic or chronic illnesses can delay menarche, such as diabetes mellitus type 1 , cystic fibrosis , asthma , inflammatory diseases , [ 17 ] and untreated celiac disease , [ 18 ] [ 19 ] among others. [ 20 ] Sometimes, lab tests do not return determinative results, so that underlying pathologies are not identified and the girl is diagnosed with constitutional growth delay . [ 21 ] Studies have been conducted to observe the association of the timing of menarche with various conditions and diseases. Some studies have shown that there may be an association between early or late-age menarche and cardiovascular disease , although the mechanism of the association is not well understood. [ 22 ] A systematic review has concluded that early onset of menarche is a risk factor for insulin resistance [ 23 ] and breast cancer. [ 24 ] There is conflicting evidence regarding the association between obesity and timing of menarche; a meta-analysis and systematic review has determined that more studies must be conducted to make any definitive conclusions about this association. [ 25 ] Several aspects of family structure and function, both antenatal and in early childhood, have been reported to be independently associated with earlier menarche. Other research has focused on the effect of childhood stress on the timing of puberty, especially female puberty. Stress is a vague term, and studies have examined conditions ranging from family tensions or conflict to wartime refugee status with threat to physical survival. The more dire social conditions have been found to be associated with delay of maturation, an effect that is compounded by inadequate diet and nutrition. There is mixed evidence as to whether milder degrees of stress can accelerate puberty in girls, as would be predicted by life history theory and as demonstrated in non-human mammals. [ 28 ] The understanding of these environmental effects is incomplete, and several cautions are relevant. There were few systematic studies of the timing of menarche before the second half of the 20th century. Most older estimates of the average onset of menarche were based on observation of a small, homogeneous, non-representative sample of the larger population, or based on recall by adult women, which is susceptible to error. Most sources agree that the average age of menarche in girls in modern societies has declined, though the reasons and the degree remain subjects of study.
From the sixth to the 15th centuries in Europe, most women reached menarche at about 14, between the ages of 12 and 15. [ 31 ] The average age of menarche dropped from 14–15 years in the early 20th century to 12–13 years in the present, although girls in the 19th century had a later age of menarche (16 to 18 years) compared to girls in earlier centuries. [ 32 ] A large North American survey reported a 2–3 month decline from the mid-1970s to the mid-1990s. [ 33 ] This long-term decline is called the secular trend. [ 35 ] [ 36 ] A 2011 study found that each 1 kg/m 2 increase in childhood body-mass index (BMI) can be expected to result in a 6.5% higher absolute risk of early menarche (before age 12 years). [ 34 ] In 2002, fewer than 10% of U.S. girls started to menstruate before 11 years of age, and 90% of all U.S. girls were menstruating by 13.75 years of age, with a median age of 12.43 years. This age at menarche is not much different (0.3 years earlier) from that reported for U.S. girls in 1973. Age at menarche for non-Hispanic black girls was significantly earlier than that of white girls, whereas non-white Mexican American girls were only slightly earlier than white girls. [ 37 ] Menstruation is a cultural as well as a scientific phenomenon, as many societies have rituals, social norms, and religious laws associated with it. These typically begin at menarche and may be enacted during each menstruation cycle. Menarche is important in determining a status change for girls: upon menarche and completion of the associated ritual, they have become women as defined by their culture. Canadian psychological researcher Niva Piran claims that menarche, or the perceived average age of puberty, is used in many cultures to separate girls from activity with boys and to begin the transition into womanhood. [ 38 ] For example, post-menarche, young women compete in field hockey while young men play ice hockey. Some cultures have observed rites of passage, such as a party or other celebration, for a girl experiencing menarche, in the past and the present. [ 39 ] In ancient Japan, when a Japanese girl had her first period, the family sometimes celebrated by eating red-colored rice and beans ( sekihan ). Although both blood and sekihan rice are red, this was not of symbolic significance. All rice in ancient Japan was red; it was also rare and precious. (At most other times, millet was eaten instead.) The celebration was kept a secret from extended family until the rice was served. [ 40 ] In South Indian Hindu communities, young women are given a special menarche ceremony called Ruthu Sadangu ; at that time, they begin to wear two-piece saris . [ 41 ] In Morocco , the girl is thrown a celebration. All of her family members are invited and the girl is showered with money and gifts. The Quinceañera in Latin America is similar, except that the specific age of 15, rather than menarche, marks the transition. The Mescalero Apaches place high importance on their menarche ceremony, which is regarded as the most important ritual in their tribe. Each year, there is an eight-day event celebrating all of the girls who have menstruated in the past year. The days are split between feasting and private ceremonies reflecting on their new womanly status. [ 42 ] In the United States , public schools have a sex education program that teaches girls about menstruation and what to expect at the onset of menarche; this takes place between the fifth and eighth grades.
As in most of the modern industrialized world, menstruation there is a private matter and a girl's menarche is not a community phenomenon. [ 43 ] The Ulithi tribe of Micronesia call a girl's menarche kufar . She goes to a menstrual house , where the women bathe her and recite spells. She will have to return to the menstruation hut every time she menstruates. Her parents build her a private hut that she will live in until she is married. [ 40 ] In Sri Lanka , an astrologer is contacted to study the alignment of the stars when the girl experiences menarche, because it is believed that her future can be predicted from it. The women of the family then gather in her home and scrub her in a ritual bathing ceremony. Her family then throws a familial party at which the girl wears white and may receive gifts. [ 40 ] In Ethiopia , Beta Jewish women were separated from male society and sent to menstruation huts during menarche and every menstruation following, as the blood associated with menstruation was believed in Beta Jewish culture to be impure . The Beta Jews built their villages around and near bodies of water specifically so that their women would have a place to clean themselves. The menstruation huts were built close to these bodies of water. [ 44 ] In India, purdah is practiced by some Hindu and Muslim communities. Women, starting at menarche and continuing with each subsequent period, are separated from men, and also wear different garments to conceal their skin during menstruation. In Australia , the Aboriginals [ which? ] treat a girl to "love magic". She is taught the ways of womanhood by the other women in her tribe. Her mother builds her a menstruation hut to which she confines herself for the remainder of her menses. The hut is burned and she is bathed in the river at the end of menstruation. When she returns to the village, [ clarification needed ] she is paired with a man who will be her husband. [ 40 ] In Nigeria , the Tiv ethnic group cut four lines into the abdomen of their girls during menarche. The lines are supposed to represent fertility. [ 40 ] The Navajo have a celebration called kinaalda (kinn-all-duh). Girls are expected to demonstrate their strength through footraces. The girls make a cornmeal pudding for the tribe to taste. The girls who experience menarche wear special clothes and style their hair like the Navajo goddess " Changing Woman ". [ 40 ] The Nuu-chah-nulth (also known as the Nootka) believe that physical endurance is the most important quality in young women. At menarche the girl is taken out to sea and left there to swim back. [ 40 ] Girls experiencing their first periods are part of many movies, particularly ones that include coming-of-age plot lines, such as The Blue Lagoon (1980), The Company of Wolves (1984), An Angel at My Table (1990), My Girl (1991), Return to the Blue Lagoon (1991), Eve’s Bayou (1997), and A Walk on the Moon (1999). Menarche is also discussed in an episode of the animated series Baymax! (2022), in which the eponymous healthcare robot helps a girl deal with her first period. In the horror movie Carrie (1976), an adaptation of the Stephen King novel of the same name , protagonist Carrie White experiences menarche as she showers after the school gym class. Unaware of what is happening to her, she panics and pleads for help, but the other girls respond by bullying her. Menarche unleashes Carrie's telekinetic powers, which are key to her wild transformation that causes death and destruction. [ 45 ]
This theme is common to horror movies, another example being the Canadian horror movie Ginger Snaps (2000), where the protagonist's first period is central to her gradual transformation into a werewolf . The theme of transformation around menarche is similarly present in Turning Red (2022), although the film also explores other aspects of puberty and the protagonist does not yet have her first period. One of the better-known middle-reader (ages 8 to 12) novels in the U.S. and Canada about the year leading up to menarche, from the 1970s to the 1990s, is " Are You There God? It's Me, Margaret " (1970) by Judy Blume .
https://en.wikipedia.org/wiki/Menarche
A Mendelian error , in the genetic analysis of a species, describes an allele in an individual which could not have been received from either of its biological parents by Mendelian inheritance . Inheritance is defined by a set of related individuals who have the same or similar phenotypes for a locus of a particular gene. A Mendelian error means that the very structure of the inheritance, as defined by analysis of the parental genes, is incorrect: one parent of one individual is not actually the parent indicated, so the assumption is that the parental information is incorrect. [ 1 ] Possible explanations for Mendelian errors are genotyping errors, erroneous assignment of the individuals as relatives, or de novo mutations. A Mendelian error is established by demonstrating the existence of a trait which is inconsistent with every possible combination of genotypes compatible with the individual. This method of determination requires pedigree checking, however, and establishing a contradiction between phenotype and pedigree is an NP-complete problem. Genetic inconsistencies which do not correspond to this definition are non-Mendelian errors. Statistical genetic analysis is used to detect these errors and to assess whether the individual may be linked to a specific disease caused by a single gene. Examples of such diseases in humans caused by single genes are Huntington's disease and Marfan syndrome . [ 2 ]
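A minimal sketch of the single-locus consistency check described above (the genotype representation and data are invented for illustration): a child's genotype is Mendelian-consistent if some pairing takes one allele from the mother and one from the father.

```python
from itertools import product

# Minimal Mendelian-consistency check for a single autosomal locus.
# A genotype is an unordered pair of alleles; a child is consistent if
# some pairing takes one allele from each parent. Data are invented.

Genotype = tuple[str, str]

def mendelian_consistent(child: Genotype, mother: Genotype, father: Genotype) -> bool:
    """True if the child could inherit one allele from each parent."""
    return any(
        sorted(child) == sorted((m, f))
        for m, f in product(mother, father)
    )

# A/A parents cannot produce an A/B child at this locus -> Mendelian error.
print(mendelian_consistent(("A", "B"), ("A", "A"), ("A", "A")))  # False
print(mendelian_consistent(("A", "B"), ("A", "A"), ("B", "B")))  # True
```

In practice, as the article notes, a single inconsistency may instead reflect a genotyping error or a de novo mutation, so pipelines typically flag such loci rather than immediately reassigning parentage.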
https://en.wikipedia.org/wiki/Mendelian_error
In Euclidean geometry , Menelaus's theorem , named for Menelaus of Alexandria , is a proposition about triangles in plane geometry . Suppose we have a triangle △ ABC , and a transversal line that crosses BC, AC, AB at points D, E, F respectively, with D, E, F distinct from A, B, C . A weak version of the theorem states that

$\left|\frac{\overline{AF}}{\overline{FB}}\right| \times \left|\frac{\overline{BD}}{\overline{DC}}\right| \times \left|\frac{\overline{CE}}{\overline{EA}}\right| = 1,$

where "| |" denotes absolute value (i.e., all segment lengths are positive). The theorem can be strengthened to a statement about signed lengths of segments , which provides some additional information about the relative order of collinear points. Here, the length AB is taken to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line; for example, $\frac{\overline{AF}}{\overline{FB}}$ is defined as having positive value when F is between A and B and negative otherwise. The signed version of Menelaus's theorem states

$\frac{\overline{AF}}{\overline{FB}} \times \frac{\overline{BD}}{\overline{DC}} \times \frac{\overline{CE}}{\overline{EA}} = -1.$

Equivalently, [ 1 ]

$\overline{AF} \times \overline{BD} \times \overline{CE} = -\,\overline{FB} \times \overline{DC} \times \overline{EA}.$

Some authors organize the factors differently and obtain the seemingly different relation [ 2 ]

$\frac{\overline{FA}}{\overline{FB}} \times \frac{\overline{DB}}{\overline{DC}} \times \frac{\overline{EC}}{\overline{EA}} = 1,$

but as each of these factors is the negative of the corresponding factor above, the relation is seen to be the same. The converse is also true: if points D, E, F are chosen on BC, AC, AB respectively so that

$\frac{\overline{AF}}{\overline{FB}} \times \frac{\overline{BD}}{\overline{DC}} \times \frac{\overline{CE}}{\overline{EA}} = -1,$

then D, E, F are collinear . The converse is often included as part of the theorem. (Note that the converse of the weaker, unsigned statement is not necessarily true.) The theorem is very similar to Ceva's theorem in that their equations differ only in sign. By re-writing each in terms of cross-ratios , the two theorems may be seen as projective duals . [ 3 ] A proof given by John Wellesley Russell uses Pasch's axiom to consider cases where a line does or does not meet a triangle. [ 4 ] First, the sign of the left-hand side will be negative since either all three of the ratios are negative, the case where the line DEF misses the triangle (see diagram), or one is negative and the other two are positive, the case where DEF crosses two sides of the triangle. To check the magnitude, construct perpendiculars from A, B, C to the line DEF and let their lengths be a, b, c respectively. Then by similar triangles it follows that

$\left|\frac{\overline{AF}}{\overline{FB}}\right| = \left|\frac{a}{b}\right|, \quad \left|\frac{\overline{BD}}{\overline{DC}}\right| = \left|\frac{b}{c}\right|, \quad \left|\frac{\overline{CE}}{\overline{EA}}\right| = \left|\frac{c}{a}\right|.$
{\displaystyle \left|{\frac {\overline {AF}}{\overline {FB}}}\right|=\left|{\frac {a}{b}}\right|,\quad \left|{\frac {\overline {BD}}{\overline {DC}}}\right|=\left|{\frac {b}{c}}\right|,\quad \left|{\frac {\overline {CE}}{\overline {EA}}}\right|=\left|{\frac {c}{a}}\right|.} Therefore, | A F ¯ F B ¯ | × | B D ¯ D C ¯ | × | C E ¯ E A ¯ | = | a b × b c × c a | = 1. {\displaystyle \left|{\frac {\overline {AF}}{\overline {FB}}}\right|\times \left|{\frac {\overline {BD}}{\overline {DC}}}\right|\times \left|{\frac {\overline {CE}}{\overline {EA}}}\right|=\left|{\frac {a}{b}}\times {\frac {b}{c}}\times {\frac {c}{a}}\right|=1.} For a simpler, if less symmetrical way to check the magnitude, [ 5 ] draw CK parallel to AB where DEF meets CK at K . Then by similar triangles | B D ¯ D C ¯ | = | B F ¯ C K ¯ | , | A E ¯ E C ¯ | = | A F ¯ C K ¯ | , {\displaystyle \left|{\frac {\overline {BD}}{\overline {DC}}}\right|=\left|{\frac {\overline {BF}}{\overline {CK}}}\right|,\quad \left|{\frac {\overline {AE}}{\overline {EC}}}\right|=\left|{\frac {\overline {AF}}{\overline {CK}}}\right|,} and the result follows by eliminating CK from these equations. The converse follows as a corollary. [ 6 ] Let D, E, F be given on the lines BC, AC, AB so that the equation holds. Let F' be the point where DE crosses AB . Then by the theorem, the equation also holds for D, E, F' . Comparing the two, A F ¯ F B ¯ = A F ′ ¯ F ′ B ¯ . {\displaystyle {\frac {\overline {AF}}{\overline {FB}}}={\frac {\overline {AF'}}{\overline {F'B}}}\ .} But at most one point can cut a segment in a given ratio so F = F'. The following proof [ 7 ] uses only notions of affine geometry , notably homotheties . Whether or not D, E, F are collinear, there are three homotheties with centers D, E, F that respectively send B to C , C to A , and A to B . The composition of the three then is an element of the group of homothety-translations that fixes B , so it is a homothety with center B , possibly with ratio 1 (in which case it is the identity). This composition fixes the line DE if and only if F is collinear with D, E (since the first two homotheties certainly fix DE , and the third does so only if F lies on DE ). Therefore D, E, F are collinear if and only if this composition is the identity, which means that the magnitude of the product of the three ratios is 1: D C → D B → × E A → E C → × F B → F A → = 1 , {\displaystyle {\frac {\overrightarrow {DC}}{\overrightarrow {DB}}}\times {\frac {\overrightarrow {EA}}{\overrightarrow {EC}}}\times {\frac {\overrightarrow {FB}}{\overrightarrow {FA}}}=1,} which is equivalent to the given equation. It is uncertain who actually discovered the theorem; however, the oldest extant exposition appears in Spherics by Menelaus. In this book, the plane version of the theorem is used as a lemma to prove a spherical version of the theorem. [ 8 ] In Almagest , Ptolemy applies the theorem on a number of problems in spherical astronomy. [ 9 ] During the Islamic Golden Age , Muslim scholars devoted a number of works that engaged in the study of Menelaus's theorem, which they referred to as "the proposition on the secants" ( shakl al-qatta' ). The complete quadrilateral was called the "figure of secants" in their terminology. 
[ 9 ] Al-Biruni 's work, The Keys of Astronomy , lists a number of those works, which can be classified into studies as part of commentaries on Ptolemy's Almagest as in the works of al-Nayrizi and al-Khazin where each demonstrated particular cases of Menelaus's theorem that led to the sine rule , [ 10 ] or works composed as independent treatises such as:
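As a concrete numerical check of the signed relation, the sketch below (the triangle and transversal coordinates are arbitrary illustrative choices, not taken from the sources) computes the three signed ratios for a transversal and verifies that their product is −1:

```python
# Numerical check of the signed version of Menelaus's theorem.
import numpy as np

def cross2(u, v):
    """z-component of the 2D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def intersect(p1, p2, q1, q2):
    """Intersection point of lines p1p2 and q1q2 (assumed non-parallel)."""
    d1, d2 = p2 - p1, q2 - q1
    t = cross2(q1 - p1, d2) / cross2(d1, d2)
    return p1 + t * d1

def signed_ratio(x, y, z):
    """Signed ratio XZ/ZY for three collinear points x, y, z."""
    d = y - x
    return np.dot(z - x, d) / np.dot(y - z, d)

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
P, Q = np.array([-1.0, 1.0]), np.array([5.0, 2.0])   # transversal line PQ

D = intersect(B, C, P, Q)   # PQ meets line BC at D
E = intersect(C, A, P, Q)   # PQ meets line CA at E
F = intersect(A, B, P, Q)   # PQ meets line AB at F

product = signed_ratio(A, B, F) * signed_ratio(B, C, D) * signed_ratio(C, A, E)
print(round(product, 10))   # -1.0, as the signed theorem predicts
```

For these particular coordinates the three signed ratios come out as −7/11, 11/10 and 10/7, whose product is exactly −1.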
https://en.wikipedia.org/wiki/Menelaus's_theorem
Meng Huo You ( Chinese : 猛火油 ; pinyin : měng huǒ yóu ; lit. 'fierce-fire oil') [1] is the name given to petroleum in ancient China , where it was used as an incendiary weapon in warfare. During the Eastern Han dynasty , the Chinese historian and poet Ban Gu recorded in the geography section of his Book of Han that a flammable liquid substance was found in Gao Nu County, located in the northeast portion of present-day Shaanxi Province . This "flammable liquid" was likely petroleum that had seeped through the ground and was floating on the local waters. Four hundred years later, during the late stages of the Southern and Northern dynasties , the historian Fan Ye recorded in the Book of Later Han that the collection and exploitation of petroleum had been practiced for some time: (延壽)縣南有山,石出泉水,大如,燃之極明,不可食。縣人謂之石漆。 In the southern mountains of Yan Shou county, there exists [a] water which springs from the rock and combusts brightly; the liquid is not for consumption. Local residents refer to it as "stone-lacquer". This "stone-lacquer" is likely petroleum. There are similar records from the Jin and Northern Wei dynasties as well. The earliest mention of "rock oil" (石油), the Chinese name for petroleum, is in the book "Grand Peace Records" from the Northern Song dynasty ; the substance was officially given its current name by the Song dynasty polymath scientist Shen Kuo , using the description found in his famous book Dream Pool Essays . Because petroleum continues to burn on water, it was widely used by feudal militaries. Some examples include the defense of the city of Jiuquan in Gansu Province in the year 578 against the invading Göktürk army, in which the defenders used petroleum to ignite and destroy the siege engines brought in by the invaders. In ancient China, the use of petroleum in warfare was most effective during the Five Dynasties and Ten Kingdoms period and under the following four major dynasties and powers from Song through to Yuan , including the Jin and Liao dynasties. During this time a small dynasty in Vietnam paid tribute to the Chinese emperor with petroleum. Before this time, fire attacks were limited to burning wood or animal fat; using petroleum greatly increased the potency of fire attacks, and the fact that trying to douse burning petroleum with water would only spread the fire more rapidly made burning oil an ideal weapon for destroying cities filled with wooden structures. Meng Huo You is in many ways similar to Greek fire . The devices used for the Chinese weapons had propelling systems as well. One of the key differences between the two weapons is the use of gunpowder as the ignition in the Chinese design. The Chinese continued to use the oil as a way of repelling nomadic invaders from the northwest. However, by the Ming and Qing dynasties, the newly mature technology of gunpowder had for the most part replaced these short-range flamethrowers , which saw little mention in the historical records of the last dynasties of imperial China.
https://en.wikipedia.org/wiki/Meng_Huo_You
In mathematics, a Menger space is a topological space that satisfies a certain basic selection principle that generalizes σ-compactness . A Menger space is a space in which for every sequence of open covers $\mathcal{U}_1,\mathcal{U}_2,\ldots$ of the space there are finite sets $\mathcal{F}_1\subset\mathcal{U}_1,\mathcal{F}_2\subset\mathcal{U}_2,\ldots$ such that the family $\mathcal{F}_1\cup\mathcal{F}_2\cup\cdots$ covers the space. In 1924, Karl Menger [ 1 ] introduced the following basis property for metric spaces: every basis of the topology contains a countable family of sets with vanishing diameters that covers the space. Soon thereafter, Witold Hurewicz [ 2 ] observed that Menger's basis property can be reformulated to the above form using sequences of open covers. Menger conjectured that in ZFC every Menger metric space is σ-compact. A. W. Miller and D. H. Fremlin [ 3 ] proved that Menger's conjecture is false, by showing that there is, in ZFC, a set of real numbers that is Menger but not σ-compact. The Fremlin–Miller proof was dichotomic, and the set witnessing the failure of the conjecture heavily depends on whether a certain (undecidable) axiom holds or not. Bartoszyński and Tsaban [ 4 ] gave a uniform ZFC example of a Menger subset of the real line that is not σ-compact. For subsets of the real line, the Menger property can be characterized using continuous functions into the Baire space $\mathbb{N}^{\mathbb{N}}$. For functions $f,g\in\mathbb{N}^{\mathbb{N}}$, write $f\leq^{*}g$ if $f(n)\leq g(n)$ for all but finitely many natural numbers $n$. A subset $A$ of $\mathbb{N}^{\mathbb{N}}$ is dominating if for each function $f\in\mathbb{N}^{\mathbb{N}}$ there is a function $g\in A$ such that $f\leq^{*}g$. Hurewicz proved that a subset of the real line is Menger if and only if every continuous image of that space into the Baire space is not dominating. In particular, every subset of the real line of cardinality less than the dominating number $\mathfrak{d}$ is Menger. The cardinality of Bartoszyński and Tsaban's counterexample to Menger's conjecture is $\mathfrak{d}$.
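To make the connection with σ-compactness explicit, the easy direction can be sketched in a few lines; this is a standard textbook argument, included here only for illustration:

```latex
% Sketch: every sigma-compact space is Menger.
\begin{proof}
Write $X=\bigcup_{n\in\mathbb{N}} K_n$ with each $K_n$ compact. Given a
sequence $\mathcal{U}_1,\mathcal{U}_2,\ldots$ of open covers of $X$, the
compact set $K_n$ is covered by $\mathcal{U}_n$, so there is a finite
subfamily $\mathcal{F}_n\subset\mathcal{U}_n$ covering $K_n$. Then
$\mathcal{F}_1\cup\mathcal{F}_2\cup\cdots$ covers $\bigcup_n K_n=X$,
witnessing the Menger property.
\end{proof}
```

The Miller–Fremlin and Bartoszyński–Tsaban results show that the converse fails: the Menger property is strictly weaker than σ-compactness, even for sets of reals.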
https://en.wikipedia.org/wiki/Menger_space
Meniscal cyst is a well-defined cystic lesion located along the peripheral margin of the meniscus , a part of the knee, nearly always associated with horizontal meniscal tears . Symptoms include pain and swelling or a focal mass at the level of the joint. The pain may be related to a meniscal tear or distension of the knee capsule or both. The mass varies in consistency from soft/ fluctuant to hard. Size is variable, and meniscal cysts are known to change in size with knee flexion/extension. Various etiologies have been proposed, including trauma , hemorrhage , chronic infection , and mucoid degeneration. The most widely accepted theory describes meniscal cysts as resulting from extrusion of synovial fluid through a peripherally extended horizontal meniscal tear, accumulating outside the joint capsule. They arise more commonly from the lateral joint margin, and occur most often in 20- to 40-year-old males. Magnetic resonance imaging is the modality of choice for the diagnosis of meniscal cysts. In their most subtle form, meniscal cysts present as focal areas of high signal intensity within a swollen meniscus. It is not uncommon for radiologists to miss this type of meniscal cyst because the signal intensity is not quite as great as that of fluid on T2-weighted sequences. [ 2 ] When this fluid is extruded into the adjacent soft tissues, the swollen meniscus subsequently assumes a more normal shape, and the extruded fluid demonstrates a higher T2 signal typical of parameniscal cysts . (Image captions: medial meniscus horizontal tear extending into a meniscal cyst; sagittal T2 images of a medial meniscus horizontal tear extending into a meniscal cyst; large medial meniscus cyst.) Treatment of meniscal cysts consists of a combination of cyst decompression ( intraarticular decompression versus open cystectomy ) and arthroscopic repair of any meniscal abnormalities. Success rates are significantly higher when both the cyst and the meniscal tear are treated compared to treating only one disease process.
https://en.wikipedia.org/wiki/Meniscal_cyst
The Menke nitration is the nitration of electron-rich aromatic compounds with cupric nitrate and acetic anhydride . The reaction introduces the nitro group predominantly in the position ortho to the activating group. [ 1 ] It may proceed via the intermediacy of acetyl nitrate . The reaction is named after the Dutch chemist J.B. Menke. [ 2 ]
https://en.wikipedia.org/wiki/Menke_nitration
Menopause , also known as the climacteric , is the time when menstrual periods permanently stop, marking the end of the reproductive stage for the female human. [ 1 ] [ 4 ] [ 5 ] It typically occurs between the ages of 45 and 55, although the exact timing can vary. [ 6 ] Menopause is usually a natural change related to a decrease in circulating blood estrogen levels. [ 1 ] It can occur earlier in those who smoke tobacco . [ 1 ] [ 7 ] Other causes include surgery that removes both ovaries , some types of chemotherapy , or anything that leads to a decrease in hormone levels. [ 8 ] [ 1 ] At the physiological level, menopause happens because of a decrease in the ovaries' production of the hormones estrogen and progesterone . [ 1 ] While typically not needed, measuring hormone levels in the blood or urine can confirm a diagnosis. [ 9 ] Menopause is the opposite of menarche , the time when periods start. [ 10 ] In the years before menopause, a woman's periods typically become irregular, [ 11 ] [ 12 ] which means that periods may be longer or shorter in duration, or be lighter or heavier in the amount of flow. [ 11 ] During this time, women often experience hot flashes ; these typically last from 30 seconds to ten minutes and may be associated with shivering, night sweats , and reddening of the skin. [ 11 ] Hot flashes [ 11 ] can recur for four to five years. [ 4 ] Other symptoms may include vaginal dryness , [ 13 ] trouble sleeping, and mood changes. [ 11 ] [ 14 ] The severity of symptoms varies between women. [ 4 ] Menopause before the age of 45 years is considered to be "early menopause", and ovarian failure or surgical removal of the ovaries before the age of 40 years is termed " premature ovarian insufficiency ". [ 15 ] In addition to symptoms (hot flushes/flashes, night sweats, mood changes, arthralgia and vaginal dryness), the physical consequences of menopause include bone loss, increased central abdominal fat, and adverse changes in a woman's cholesterol profile and vascular function. [ 15 ] These changes predispose postmenopausal women to increased risks of osteoporosis and bone fracture, and of cardio-metabolic disease ( diabetes and cardiovascular disease ). [ 15 ] Medical professionals often define menopause as having occurred when a woman has not had any menstrual bleeding for a year. [ 1 ] It may also be defined by a decrease in hormone production by the ovaries . [ 16 ] In those who have had surgery to remove their uterus but still have functioning ovaries, menopause is not considered to have yet occurred. [ 15 ] Following the removal of the uterus, symptoms of menopause typically occur earlier. [ 17 ] Iatrogenic menopause occurs when both ovaries are surgically removed ( oophorectomy ) along with the uterus for medical reasons. Medical treatment of menopause is aimed primarily at ameliorating symptoms and preventing bone loss. [ 18 ] Mild symptoms may be improved with treatment. With respect to hot flashes, avoiding nicotine, caffeine, and alcohol is often recommended; sleeping naked in a cool room and using a fan may help. The most effective treatment for menopausal symptoms is menopausal hormone therapy (MHT). [ 13 ] [ 18 ] Non-hormonal therapies for hot flashes include cognitive-behavioral therapy , clinical hypnosis , gabapentin , fezolinetant or selective serotonin reuptake inhibitors . [ 19 ] [ 20 ] These will not improve symptoms such as joint pain or vaginal dryness, which affect over 55% of women. [ 18 ] Exercise may help with sleeping problems.
Many of the concerns about the use of MHT raised by older studies are no longer considered barriers to MHT in healthy women. [ 18 ] High-quality evidence for the effectiveness of alternative medicine has not been found. [ 4 ] During the early menopause transition, the menstrual cycles remain regular but the interval between cycles begins to lengthen. Hormone levels begin to fluctuate. Ovulation may not occur with each cycle. [ 21 ] The term menopause refers to a point in time that follows one year after the last menstruation . [ 21 ] During the menopausal transition and after menopause, women can experience a wide range of symptoms. [ 11 ] However, for women who enter the menopause transition without having regular menstrual cycles (due to prior surgery, other medical conditions or ongoing hormonal contraception), menopause cannot be identified by bleeding patterns and is instead defined as the permanent loss of ovarian function. [ 18 ] During the transition to menopause, menstrual patterns can show shorter cycling (by 2–7 days); [ 21 ] longer cycles remain possible. [ 21 ] There may be irregular bleeding (lighter, heavier, spotting). [ 11 ] [ 21 ] Dysfunctional uterine bleeding is often experienced by women approaching menopause due to the hormonal changes that accompany the menopause transition. Spotting or bleeding may simply be related to vaginal atrophy , a benign sore ( polyp or lesion), or may be a functional endometrial response. The European Menopause and Andropause Society has released guidelines for assessment of the endometrium , which is usually the main source of spotting or bleeding. [ 22 ] In post-menopausal women, however, any unscheduled vaginal bleeding is of concern and requires an appropriate investigation to rule out the possibility of malignant diseases. Urogenital symptoms may appear during menopause and continue through postmenopause; they include painful intercourse , vaginal dryness and atrophic vaginitis (thinning of the membranes of the vulva , the vagina , the cervix and the outer urinary tract ). There may also be considerable shrinking and loss of elasticity in all of the outer and inner genital areas. Urinary urgency may also occur, and urinary incontinence in some women. [ 21 ] [ 23 ] The most common physical symptoms of menopause are heavy night sweats and hot flashes (also known as vasomotor symptoms). [ 24 ] Sleeping problems and insomnia are also common. [ 25 ] Other physical symptoms may be reported that are not specific to menopause but may be exacerbated by it, such as lack of energy , joint soreness , stiffness , back pain , breast enlargement, breast pain , heart palpitations , headache , dizziness , dry, itchy skin, thinning, tingling skin, rosacea and weight gain . [ 21 ] [ 26 ] Psychological symptoms are often reported but they are not specific to menopause and can be caused by other factors. [ 27 ] [ 28 ] They include anxiety , poor memory, inability to concentrate, depressive mood, irritability , mood swings , and less interest in sexual activity . [ 21 ] [ 29 ] [ 11 ] Menopause-related cognitive impairment can be confused with the mild cognitive impairment that precedes dementia . [ 30 ] There is evidence of small decreases in verbal memory, on average, which may be caused by the effects of declining estrogen levels on the brain, [ 31 ] or perhaps by reduced blood flow to the brain during hot flashes . [ 32 ] However, these tend to resolve for most women during the postmenopause.
Subjective reports of memory and concentration problems are associated with several factors, such as lack of sleep and stress. [ 28 ] [ 27 ] Exposure to endogenous estrogen during reproductive years provides women with protection against cardiovascular disease , which is lost around 10 years after the onset of menopause. The menopausal transition is associated with an increase in fat mass (predominantly in visceral fat ), an increase in insulin resistance , dyslipidaemia , and endothelial dysfunction . [ 33 ] Women with vasomotor symptoms during menopause seem to have an especially unfavorable cardiometabolic profile, [ 34 ] as do women with premature onset of menopause (before 45 years of age). [ 35 ] These risks can be reduced by managing risk factors, such as tobacco smoking, hypertension , increased blood lipids and body weight. [ 36 ] [ 37 ] The annual rates of bone mineral density loss are highest starting one year before the final menstrual period and continuing through the two years after it. [ 38 ] Thus, postmenopausal women are at increased risk of osteopenia , osteoporosis and fractures . Menopause is a normal event in a woman's life and a natural part of aging. [ 23 ] Menopause can also be induced early. [ 39 ] Induced menopause occurs as a result of medical treatment such as chemotherapy , radiotherapy , oophorectomy , or complications of tubal ligation , hysterectomy , unilateral or bilateral salpingo-oophorectomy or leuprorelin usage. [ 40 ] Menopause typically occurs at some point between 47 and 54 years of age. [ 6 ] According to various data, more than 95% of women have their last period between the ages of 44 and 56 (median 49–50); 2% of women have their last bleeding before the age of 40, 5% between the ages of 40 and 45, and the same proportion between the ages of 55 and 58. [ 41 ] The average age of the last period is 51 years in the United States, 50 years in Russia, 49 years in Greece, 47 years in Turkey, 47 years in Egypt and 46 years in India. [ 42 ] Beyond the influence of genetics, these differences are also due to early-life environmental conditions [ 43 ] and associated with epigenetic effects. [ 44 ] The menopausal transition or perimenopause leading up to menopause usually lasts 3–4 years (sometimes as long as 5–14 years). [ 1 ] [ 12 ] Undiagnosed and untreated coeliac disease is a risk factor for early menopause. Coeliac disease can present with several non-gastrointestinal symptoms, in the absence of gastrointestinal symptoms, and most cases escape timely recognition and go undiagnosed, leading to a risk of long-term complications. A strict gluten-free diet reduces the risk. Women with early diagnosis and treatment of coeliac disease present a normal duration of fertile life span. [ 45 ] [ 46 ] Women who have undergone hysterectomy with ovary conservation go through menopause on average 1.5 years earlier than the expected age. [ 18 ] In rare cases, a woman's ovaries stop working at a very early age, ranging anywhere from the age of puberty to age 40. This is known as premature ovarian failure or premature ovarian insufficiency (POI) and affects 1 to 2% of women by age 40. [ 47 ] [ 48 ] [ 49 ] It is diagnosed or confirmed by high blood levels of follicle stimulating hormone (FSH) and luteinizing hormone (LH) on at least three occasions at least four weeks apart.
[ 50 ] Premature ovarian insufficiency may be related to an autoimmune disorder and therefore might co-occur with other autoimmune disorders such as thyroid disease, adrenal insufficiency , and diabetes mellitus . [ 49 ] Other causes include chemotherapy , being a carrier of the fragile X syndrome gene, and radiotherapy . [ 49 ] However, in about 50–80% of cases of premature ovarian insufficiency, the cause is unknown, i.e., it is generally idiopathic . [ 48 ] [ 50 ] Early menopause can be related to cigarette smoking, higher body mass index , racial and ethnic factors, illnesses, and the removal of the uterus. [ 51 ] Menopause can be surgically induced by bilateral oophorectomy (removal of ovaries), [ 39 ] which is often, but not always, done in conjunction with removal of the fallopian tubes ( salpingo-oophorectomy ) and uterus ( hysterectomy ). [ 52 ] Cessation of menses as a result of removal of the ovaries is called "surgical menopause". Surgical treatments, such as the removal of ovaries, might cause periods to stop altogether. [ 53 ] The sudden and complete drop in hormone levels may produce extreme withdrawal symptoms, such as hot flashes, and the symptoms of early menopause may be more severe. [ 53 ] Removal of the uterus without removal of the ovaries does not directly cause menopause, although pelvic surgery of this type can often precipitate a somewhat earlier menopause, perhaps because of a compromised blood supply to the ovaries. [ medical citation needed ] The delay between such surgery and possible early menopause is due to the fact that the ovaries are still producing hormones. [ 53 ] The menopausal transition, and postmenopause itself, is a natural change, not usually a disease state or a disorder. The main cause of this transition is the natural depletion and aging of the finite amount of oocytes ( ovarian reserve ). This process is sometimes accelerated by other conditions and is known to occur earlier after a wide range of gynecologic procedures such as hysterectomy (with and without ovariectomy ), endometrial ablation and uterine artery embolisation . The depletion of the ovarian reserve causes an increase in circulating follicle-stimulating hormone (FSH) and luteinizing hormone (LH) levels because there are fewer oocytes and follicles responding to these hormones and producing estrogen. [ citation needed ] The transition has a variable degree of effects. [ 54 ] The stages of the menopause transition have been classified according to a woman's reported bleeding pattern, supported by changes in the pituitary follicle-stimulating hormone (FSH) levels. [ 55 ] In younger women, during a normal menstrual cycle the ovaries produce estradiol , testosterone and progesterone in a cyclical pattern under the control of FSH and luteinizing hormone (LH), which are both produced by the pituitary gland . During perimenopause (approaching menopause), estradiol levels and patterns of production remain relatively unchanged or may increase compared to young women, but the cycles become frequently shorter or irregular. [ 56 ] The often observed increase in estrogen is presumed to be a response to elevated FSH levels which, in turn, are hypothesized to be caused by decreased feedback by inhibin . [ 57 ] Similarly, decreased inhibin feedback after hysterectomy is hypothesized to contribute to increased ovarian stimulation and earlier menopause. [ 58 ] [ 59 ] The menopausal transition is characterized by marked, and often dramatic, variations in FSH and estradiol levels.
Because of this, measurements of these hormones are not considered to be reliable guides to a woman's exact menopausal status. [ 57 ] Menopause occurs because of the sharp decrease of estradiol and progesterone production by the ovaries. After menopause, estrogen continues to be produced mostly by aromatase in fat tissues and is produced in small amounts in many other tissues such as ovaries, bone, blood vessels, and the brain, where it acts locally. [ 60 ] The substantial fall in circulating estradiol levels at menopause impacts many tissues, from brain to skin. In contrast to the sudden fall in estradiol during menopause, the levels of total and free testosterone, as well as dehydroepiandrosterone sulfate (DHEAS) and androstenedione, appear to decline more or less steadily with age. An effect of natural menopause on circulating androgen levels has not been observed. [ 61 ] Thus specific tissue effects of natural menopause cannot be attributed to loss of androgenic hormone production. [ 62 ] Hot flashes and other vasomotor and body symptoms accompanying the menopausal transition are associated with estrogen insufficiency and changes that occur in the brain, primarily the hypothalamus ; these involve a complex interplay between the neurotransmitters kisspeptin , neurokinin B , and dynorphin , which are found in KNDy neurons in the infundibular nucleus. [ 63 ] Decreased inhibin feedback after hysterectomy is hypothesized to contribute to increased ovarian stimulation and earlier menopause. Hastened ovarian aging has been observed after endometrial ablation . While it is difficult to prove that these surgeries are causative, it has been hypothesized that the endometrium may be producing endocrine factors contributing to the endocrine feedback and regulation of the ovarian stimulation. Elimination of these factors contributes to faster depletion of the ovarian reserve. Reduced blood supply to the ovaries that may occur as a consequence of hysterectomy and uterine artery embolisation has been hypothesized to contribute to this effect. [ 58 ] [ 59 ] Impaired DNA repair mechanisms may contribute to earlier depletion of the ovarian reserve during aging. [ 64 ] As women age, double-strand breaks accumulate in the DNA of their primordial follicles. Primordial follicles are immature primary oocytes surrounded by a single layer of granulosa cells. An enzyme system is present in oocytes that ordinarily accurately repairs DNA double-strand breaks. This repair system is called "homologous recombinational repair", and it is especially effective during meiosis. Meiosis is the general process by which germ cells are formed in all sexual eukaryotes; it appears to be an adaptation for efficiently removing damages in germ line DNA. [ 65 ] Human primary oocytes are present at an intermediate stage of meiosis, termed prophase I (see Oogenesis ). Expression of four key DNA repair genes that are necessary for homologous recombinational repair during meiosis (BRCA1, MRE11, Rad51, and ATM) declines with age in oocytes. [ 64 ] This age-related decline in the ability to repair DNA double-strand damage can account for the accumulation of these damages, which then likely contributes to the depletion of the ovarian reserve. Ways of assessing the impact on women of some of these menopause effects include the Greene climacteric scale questionnaire, [ 66 ] the Cervantes scale [ 67 ] and the Menopause rating scale.
[ 68 ] The term "perimenopause", which literally means "around the menopause", refers to the menopause transition years before the date of the final episode of flow. [ 1 ] [ 12 ] [ 69 ] [ 70 ] According to the North American Menopause Society , this transition can last for four to eight years. [ 71 ] The Centre for Menstrual Cycle and Ovulation Research describes it as a six- to ten-year phase ending 12 months after the last menstrual period. [ 72 ] During perimenopause, estrogen levels average about 20–30% higher than during premenopause, often with wide fluctuations. [ 72 ] These fluctuations cause many of the physical changes during perimenopause as well as menopause, especially during the last 1–2 years of perimenopause (before menopause). [ 69 ] [ 73 ] Some of these changes are hot flashes , night sweats , difficulty sleeping, mood swings, vaginal dryness or atrophy , incontinence , osteoporosis , and heart disease. [ 72 ] Perimenopause is also associated with a higher likelihood of depression (affecting from 45 percent to 68 percent of perimenopausal women), which is twice as likely to affect those with a history of depression. [ 74 ] [ 75 ] During this period, fertility diminishes but is not considered to reach zero until the official date of menopause. The official date is determined retroactively, once 12 months have passed after the last appearance of menstrual blood. The menopause transition typically begins between 40 and 50 years of age (average 47.5). [ 76 ] [ 77 ] The duration of perimenopause may be up to eight years. [ 77 ] Women will often, but not always, start these transitions (perimenopause and menopause) about the same time as their mother did. [ 78 ] Some research appears to show that melatonin supplementation in perimenopausal women can improve thyroid function and gonadotropin levels, as well as restore fertility and menstruation and prevent depression associated with menopause. [ 79 ] The term "postmenopausal" describes women who have not experienced any menstrual flow for a minimum of 12 months, assuming that they have a uterus and are not pregnant or lactating . [ 52 ] The reason for this delay in declaring postmenopause is that periods are usually erratic during menopause. Therefore, a reasonably long stretch of time is necessary to be sure that the cycling has ceased. At this point a woman is considered infertile; however, the possibility of becoming pregnant has usually been very low (but not quite zero) for a number of years before this point is reached. [ citation needed ] In women with or without a uterus, menopause or postmenopause can also be identified by a blood test showing a very high follicle-stimulating hormone level, greater than 25 IU/L in a random blood draw; it rises as ovaries become inactive. [ 52 ] FSH continues to rise, as its counterpart estradiol continues to drop, for about 2 years after the last menstrual period, after which the levels of each of these hormones stabilize. The stabilization period after the beginning of early postmenopause has been estimated to last 3 to 6 years, so early postmenopause lasts altogether about 5 to 8 years, during which hormone withdrawal effects such as hot flashes disappear. [ 52 ] Finally, late postmenopause has been defined as the remainder of a woman's lifespan, when reproductive hormones do not change any more. [ citation needed ] A period-like flow during postmenopause, even spotting, may be a sign of endometrial cancer . Perimenopause is a natural stage of life.
It is not a disease or a disorder. Therefore, it does not automatically require any kind of medical treatment. However, in those cases where the physical, mental, and emotional effects of perimenopause are strong enough that they significantly disrupt the life of the woman experiencing them, palliative medical therapy may sometimes be appropriate. In the context of the menopause, menopausal hormone therapy (MHT) is the use of estrogen in women without a uterus and estrogen plus progestogen in women who have an intact uterus. [ 80 ] MHT may be reasonable for the treatment of menopausal symptoms, such as hot flashes. [ 81 ] It is the most effective treatment option, especially when delivered as a skin patch. [ 82 ] [ 83 ] Its use, however, appears to increase the risk of strokes and blood clots . [ 84 ] When used for menopausal symptoms, the global recommendation is that MHT should be prescribed for as long as there are defined treatment effects and goals for the individual woman. [ 18 ] MHT is also effective for preventing bone loss and osteoporotic fracture, [ 85 ] but it is generally recommended only for women at significant risk for whom other therapies are unsuitable. [ 86 ] MHT may be unsuitable for some women, including those at increased risk of cardiovascular disease, increased risk of thromboembolic disease (such as those with obesity or a history of venous thrombosis) or increased risk of some types of cancer. [ 86 ] There is some concern that this treatment increases the risk of breast cancer. [ 87 ] Women at increased risk of cardiometabolic disease and VTE may be able to use transdermal estradiol, which does not appear to increase risks in low to moderate doses. [ 18 ] Adding testosterone to hormone therapy has a positive effect on sexual function in postmenopausal women, although it may be accompanied by hair growth or acne if used in excess. Transdermal testosterone therapy in appropriate dosing is generally safe. [ 88 ] Selective estrogen receptor modulators (SERMs) are a category of drugs, either synthetically produced or derived from a botanical source, that act selectively as agonists or antagonists on the estrogen receptors throughout the body. The most commonly prescribed SERMs are raloxifene and tamoxifen . Raloxifene exhibits estrogen agonist activity on bone and lipids, and antagonist activity on breast and the endometrium. [ 89 ] Tamoxifen is in widespread use for treatment of hormone-sensitive breast cancer. Raloxifene prevents vertebral fractures in postmenopausal, osteoporotic women and reduces the risk of invasive breast cancer. [ 90 ] Some of the SSRIs and SNRIs appear to provide some relief from vasomotor symptoms. [ 20 ] [ 19 ] The most effective SSRIs and SNRIs are paroxetine , escitalopram , citalopram , venlafaxine , and desvenlafaxine . [ 19 ] They may, however, be associated with appetite and sleeping problems, constipation and nausea. [ 20 ] [ 91 ] Gabapentin or fezolinetant can also improve the frequency and severity of vasomotor symptoms. [ 20 ] [ 19 ] Side effects of using gabapentin include drowsiness and headaches. [ 20 ] [ 91 ] Cognitive behavioural therapy and clinical hypnosis can decrease the degree to which women are affected by hot flashes. [ 19 ] Mindfulness is not yet proven to be effective in easing vasomotor symptoms. [ 92 ] [ 93 ] [ 19 ] Exercise has been thought to reduce postmenopausal symptoms through the increase of endorphin levels, which decrease as estrogen production decreases. [ 94 ] However, there is insufficient evidence to suggest that exercise helps with the symptoms of menopause.
[ 19 ] Similarly, yoga has not been shown to be useful as a treatment for vasomotor symptoms. [ 19 ] However, a high BMI is a risk factor for vasomotor symptoms in particular, and weight loss may help with symptom management. [ 95 ] [ 19 ] There is no strong evidence that cooling techniques, such as using specific clothing or environment control tools (for example fans), help with symptoms. [ 19 ] Paced breathing and relaxation are not effective in easing symptoms. [ 19 ] There is no evidence of consistent benefit from taking any dietary supplements or herbal products for menopausal symptoms. [ 19 ] [ 96 ] [ 97 ] These widely marketed but ineffective supplements include soy isoflavones , pollen extracts , black cohosh and omega-3, among many others. [ 19 ] [ 98 ] [ 99 ] There is no evidence of consistent benefit from alternative therapies for menopausal symptoms despite their popularity. [ 97 ] [ 19 ] As of 2023, there is no evidence to support the efficacy of acupuncture as a management for menopausal symptoms. [ 19 ] [ 100 ] [ 97 ] A 2016 Cochrane review found insufficient evidence to show a difference between Chinese herbal medicine and placebo for vasomotor symptoms. [ 101 ] The menopause transition is a process involving hormonal, menstrual, and typically vasomotor changes. However, the experience of the menopause as a whole is very much influenced by psychological and social factors, such as past experience, lifestyle, social and cultural meanings of menopause, and a woman's social and material circumstances. Menopause has been described as a biopsychosocial experience, with social and cultural factors playing a prominent role in the way menopause is experienced and perceived. [ citation needed ] The paradigm within which a woman considers menopause influences the way she views it: women who understand menopause as a medical condition rate it significantly more negatively than those who view it as a life transition or a symbol of aging. [ 104 ] There is some evidence that negative attitudes and expectations, held before the menopause, predict symptom experience during the menopause, [ 105 ] and beliefs and attitudes toward menopause tend to be more positive in postmenopausal than in premenopausal women. [ 106 ] Women with more negative attitudes towards the menopause report more symptoms during this transition. [ 105 ] Menopause is a stage of life experienced in different ways. It can be characterized by personal challenges and changes in personal roles within the family and society. Women's approaches to changes during menopause are influenced by their personal, family and sociocultural background. [ 107 ] Women from different regions and countries also have different attitudes. Postmenopausal women had more positive attitudes toward menopause compared with peri- or premenopausal women. Other influencing factors of attitudes toward menopause include age, menopausal symptoms, psychological and socioeconomic status, profession and ethnicity. [ 108 ] Ethnicity and geography play roles in the experience of menopause. American women of different ethnicities report significantly different types of menopausal effects. One major study found Caucasian women most likely to report what are sometimes described as psychosomatic symptoms, while African-American women were more likely to report vasomotor symptoms. [ 109 ] There may be variations in the experiences of women from different ethnic backgrounds regarding menopause and care.
Immigrant women reported more vasomotor symptoms and other physical symptoms and poorer mental health than non-immigrant women, and were mostly dissatisfied with the care they had received. Self-management strategies for menopausal symptoms were also influenced by culture. [ 110 ] Two multinational studies of Asian women found that hot flushes were not the most commonly reported symptoms; instead, body and joint aches, memory problems, sleeplessness, irritability and migraines were. [ 111 ] In another study comparing experiences of menopause amongst White Australian women and women in Laos, Australian women reported higher rates of depression, as well as fears of aging, weight gain and cancer – fears not reported by Laotian women, who positioned menopause as a positive event. [ 112 ] Japanese women experience menopause effects, or kōnenki (更年期), in a different way from American women. [ 113 ] Japanese women report lower rates of hot flashes and night sweats; this can be attributed to a variety of factors, both biological and social. Historically, kōnenki was associated with wealthy middle-class housewives in Japan, i.e., it was a "luxury disease" that women from traditional, inter-generational rural households did not report. Menopause in Japan was viewed as a symptom of the inevitable process of aging, rather than a "revolutionary transition" or a "deficiency disease" in need of management. [ 113 ] As of 2005, in Japanese culture, reporting of vasomotor symptoms has been on the increase, with research finding that, of 140 Japanese participants, hot flashes were prevalent in 22.1%. [ 114 ] This was almost double the figure of 20 years prior. [ 115 ] While the exact cause for this is unknown, possible contributing factors include dietary changes, increased medicalisation of middle-aged women and increased media attention on the subject. [ 115 ] However, reporting of vasomotor symptoms is still "significantly" lower than in North America. [ 116 ] Additionally, while most women in the United States apparently have a negative view of menopause as a time of deterioration or decline, some studies seem to indicate that women from some Asian cultures have an understanding of menopause that focuses on a sense of liberation and celebrates the freedom from the risk of pregnancy. [ 117 ] Diverging from these conclusions, one study appeared to show that many American women "experience this time as one of liberation and self-actualization ". [ 118 ] In some women, menopause may bring about a sense of loss related to the end of fertility. In addition, this change often aligns with other stressors, such as the responsibility of looking after elderly parents or dealing with the emotional challenges of " empty nest syndrome " when children move out of the family home. This situation can be accentuated in cultures where being older is negatively perceived . Midlife is typically a life stage when men and women may be dealing with demanding life events and responsibilities, such as work, health problems, and caring roles. For example, in 2018 in the UK, women aged 45–54 reported more work-related stress than men or women of any other age group. [ 119 ] Hot flashes are often reported to be particularly distressing at work and lead to embarrassment and worry about potential stigmatisation . [ 120 ] A June 2023 study by the Mayo Clinic estimated an annual loss of $1.8 billion in the United States due to workdays missed as a result of menopause symptoms.
[ 121 ] This was one of the largest studies to date examining the impact of menopause symptoms on work outcomes. The research concluded there was a strong need to improve medical treatment for menopausal women and make the workplace environment more supportive in order to avoid such productivity losses. Menopause literally means the "end of monthly cycles" (the end of monthly periods or menstruation ), from the Greek words pausis ("pause") and mēn ("month"). This is a medical coinage; the Greek word for menses is actually different. In Ancient Greek, the menses were described in the plural, ta emmēnia ("the monthlies"), and its modern descendant has been clipped to ta emmēna . The Modern Greek medical term is emmenopausis in Katharevousa or emmenopausi in Demotic Greek . The Ancient Greeks did not produce medical concepts about any symptoms associated with the end of menstruation and did not use a specific word to refer to this time of a woman's life. The word menopause was invented by French doctors at the beginning of the nineteenth century. Greek etymology was reconstructed at this time, and it was the Parisian student doctor Charles-Pierre-Louis de Gardanne who invented a variation of the word in 1812, which was edited to its final French form in 1821. [ 122 ] Some of these doctors noted that peasant women had no complaints about the end of menses, while urban middle-class women had many troubling symptoms. Doctors at this time considered the symptoms to be the result of urban lifestyles of sedentary behaviour, alcohol consumption, too much time indoors, and over-eating, with a lack of fresh fruit and vegetables. [ 123 ] The word "menopause" was coined specifically for female humans, where the end of fertility is traditionally indicated by the permanent stopping of monthly menstruations. However, menopause exists in some other animals, many of which do not have monthly menstruation; [ 124 ] in this case, the term means a natural end to fertility that occurs before the end of the natural lifespan. In the 21st century, celebrities have spoken out about their experiences of the menopause, which has led to it becoming less of a taboo, as this has boosted awareness of the debilitating symptoms. Subsequently, TV shows have been running features on the menopause to help women experiencing symptoms. In the UK, Lorraine Kelly has been an advocate for getting women to speak about their experiences, including sharing her own. This has led to an increase in women seeking treatment such as HRT. [ 125 ] Davina McCall also led an awareness campaign based on a documentary on Channel 4 . [ 126 ] In the UK, Carolyn Harris sponsored the Menopause (Support and Services) Bill in June 2021. It was to exempt hormone replacement therapy from National Health Service prescription charges and to make provisions about menopause support and services, including public education and communication in supporting perimenopausal and post-menopausal women, and to raise awareness of menopause and its effects. The bill was withdrawn on 29 October 2021. [ 127 ] In the US, David McKinley , Republican from West Virginia, introduced the Menopause Research Act in September 2022 for $100 million in 2023 and 2024, but it stalled. [ 128 ] The majority of mammal species reach menopause, when they cease the production of ovarian follicles, which contain eggs (oocytes), between one-third and two-thirds of their maximum possible lifespan. [ 129 ] However, few live long enough in the wild to reach this point.
Humans are joined by a limited number of other species in which females live substantially longer than their ability to reproduce lasts. Examples of others include cetaceans : beluga whales , [ 130 ] narwhals , [ 130 ] orcas , [ 131 ] false killer whales [ 132 ] and short-finned pilot whales . [ 133 ] Menopause has been reported in a variety of other vertebrate species, but these examples tend to be from captive individuals, and thus are not necessarily representative of what happens in natural populations in the wild. Menopause in captivity has been observed in several species of nonhuman primates , [ 124 ] including rhesus monkeys [ 134 ] and chimpanzees . [ 135 ] Some research suggests that wild chimpanzees do not experience menopause, as their fertility declines are associated with declines in overall health. [ 136 ] Menopause has been reported in elephants in captivity [ 137 ] and in guppies . [ 138 ] Dogs do not experience menopause; the canine estrus cycle simply becomes irregular and infrequent. Although older female dogs are not considered good candidates for breeding, offspring have been produced by older animals; see Canine reproduction . Similar observations have been made in cats. [ 139 ] Life histories show a varying degree of senescence ; rapidly senescing organisms (e.g., Pacific salmon and annual plants ) do not have a post-reproductive life stage. Gradual senescence is exhibited by all placental mammalian life histories. [ original research? ] There are various theories on the origin and process of the evolution of the menopause . These attempt to suggest evolutionary benefits to the human species stemming from the cessation of women's reproductive capability before the end of their natural lifespan. It is conjectured that in highly social groups natural selection favors females that stop reproducing and devote that post-reproductive life span to continuing to care for existing offspring, both their own and those of others to whom they are related, especially their granddaughters and grandsons. [ 140 ]
https://en.wikipedia.org/wiki/Menopause
In organic chemistry , the Menshutkin reaction converts a tertiary amine into a quaternary ammonium salt by reaction with an alkyl halide . Similar reactions occur when tertiary phosphines are treated with alkyl halides. The reaction is the method of choice for the preparation of quaternary ammonium salts. [ 1 ] Some phase transfer catalysts (PTC) can be prepared according to the Menshutkin reaction, for instance the synthesis of triethyl benzyl ammonium chloride (TEBA) from triethylamine and benzyl chloride . Reactions are typically conducted in polar solvents such as alcohols. [ 1 ] Alkyl iodides are superior alkylating agents relative to the bromides, which in turn are superior to chlorides. As is typical for an SN2 process, benzylic, allylic, and α-carbonylated alkyl halides are excellent reactants. Even though alkyl chlorides are poor alkylating agents ( gem -dichlorides especially so), amines should not be handled in chlorinated solvents such as dichloromethane and dichloroethane, especially at high temperatures, due to the possibility of a Menshutkin reaction. (Kinetically facile reactions like acylations are nonetheless sometimes conducted in chlorinated solvents.) Highly nucleophilic tertiary amines like DABCO will react with dichloromethane at room temperature overnight, and at reflux (39–40 °C) over several hours, to give the quaternized product (see the article on Selectfluor ). Due to steric hindrance and unfavorable electronic properties, chloroform reacts very slowly with tertiary amines, over a period of several weeks to months. [ 2 ] Even pyridines, which are considerably less nucleophilic than typical tertiary amines, react with dichloromethane at room temperature over a period of several days to weeks to give bis(pyridinium)methane salts. [ 3 ] In addition to solvent and alkylating agent, other factors strongly influence the reaction. In one particular macrocycle system the reaction rate is not only accelerated (150,000-fold compared to quinuclidine ) but the halide order is also changed. The reaction is named after its discoverer, Nikolai Menshutkin , who described the procedure in 1890. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Depending on the source, his name (and the reaction named after him) is spelled Menšutkin, Menshutkin, or Menschutkin.
https://en.wikipedia.org/wiki/Menshutkin_reaction
Menstruation is the shedding of the uterine lining ( endometrium ). It occurs on a regular basis in uninseminated [ 1 ] sexually reproductive-age females of certain mammal species. Although there is some disagreement in definitions between sources, menstruation is generally considered to be limited to primates . It is common in simians ( Old World monkeys , New World monkeys , and apes ), but completely lacking in strepsirrhine primates and possibly weakly present in tarsiers . Beyond primates, it is known only in bats , the elephant shrew , and the spiny mouse species Acomys cahirinus . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Overt menstruation (where there is bleeding from the uterus through the vagina ) is found primarily in humans and close relatives such as chimpanzees . [ 7 ] Females of other species of placental mammals undergo estrous cycles , in which the endometrium is completely reabsorbed by the animal (covert menstruation) at the end of its reproductive cycle . [ 8 ] Many zoologists regard this as different from a "true" menstrual cycle. Female domestic animals used for breeding—for example dogs, pigs, cattle, or horses—are monitored for physical signs of an estrous cycle period, which indicates that the animal is ready for insemination . Females of most mammal species advertise fertility to males with visual behavioral cues, pheromones , or both. [ 9 ] This period of advertised fertility is known as oestrus , "estrus" or heat . [ 9 ] In species that experience estrus, females are generally only receptive to copulation while they are in heat [ 9 ] ( dolphins are an exception). [ 10 ] In the estrous cycles of most placentals , if no fertilization takes place, the uterus reabsorbs the endometrium. This breakdown of the endometrium without vaginal discharge is sometimes called covert menstruation . [ 11 ] Overt menstruation (where there is blood flow from the vagina) occurs primarily in humans and close evolutionary relatives such as chimpanzees. [ 7 ] Some species, such as domestic dogs , experience small amounts of vaginal bleeding while approaching heat; [ 12 ] this discharge has a different physiologic cause than menstruation. [ 13 ] A few mammals do not experience obvious, visible signs of fertility ( concealed ovulation ). In humans, studies show that both males and females can detect the fertility of females through hormonal signaling and alterations in scent ( fertility awareness ), but some research suggests that behavioral clues may be needed to consciously assess fertility. [ 14 ] [ 15 ] Orangutans also lack visible signs of impending ovulation. [ 16 ] Also, it has been said that the extended estrus period of the bonobo (reproductive-age females are in heat for 75% of their menstrual cycle) [ 17 ] has a similar effect to the lack of a "heat" in human females. [ 18 ] Most female mammals have an estrous cycle , yet only ten primate species, four bat species, the elephant shrew , and one known species of spiny mouse have a menstrual cycle. [ 19 ] [ 20 ] As these groups are not closely related, it is likely that four distinct evolutionary events have caused menstruation to arise. [ 21 ] There are varying views on evolution of overt menstruation in humans and related species, and the evolutionary advantages in losing blood associated with dismantling the uterine lining rather than absorbing it, as most mammals do. [ 22 ] The reason is likely related to differences in the ovulation process. 
[ 21 ] Most female placentals have a uterine lining that builds up when the animal begins ovulation, and later further increases in thickness and blood flow after a fertilized egg has successfully implanted. This final process of thickening is known as decidualization , and is usually triggered by hormones released by the embryo. In humans, decidualization happens spontaneously at the beginning of each menstrual cycle, triggered by hormonal signals from the ovaries. For this reason, the human uterine lining becomes fully thickened during each cycle as a defense against trophoblast penetration of the endometrial wall , [ 23 ] regardless of whether an egg becomes fertilized or successfully implants in the uterus. This produces more unneeded material per cycle than in non-menstruating mammals, which may explain why the extra material is not simply reabsorbed as it is in those species. In essence, menstruating animals treat every cycle as a possible pregnancy by thickening the protective layer around the endometrial wall, while non-menstruating placental mammals do not begin the pregnancy process until a fertilized egg has implanted in the uterine wall. [ medical citation needed ] For this reason, it is speculated that menstruation is a side effect of spontaneous decidualization, which evolved in some placental mammals due to its advantages over non-spontaneous decidualization. Spontaneous decidualization allows for more maternal control in the maternal-fetal conflict by increasing selectivity over the implanted embryo. [ 21 ] This may be necessary in humans and other primates, due to the abnormally large number of genetic disorders in these species. [ 24 ] Since most aneuploidy events result in stillbirth or miscarriage, there is an evolutionary advantage to ending the pregnancy early, rather than nurturing a fetus that will later miscarry. There is evidence to show that some abnormalities in the developing embryo can be detected by endometrial stromal cells in the uterus, but only upon differentiation into decidual cells . [ 24 ] This triggers epigenetic changes that prevent formation of the placenta, which prevents the embryo from implanting and leaves it to be removed in the next menstruation. [ better source needed ] [ 25 ] This failsafe mode is not possible in species where decidualization is controlled by hormonal triggers from the embryo. This is sometimes referred to as the choosy uterus theory, and it is theorized that this positive effect outweighs the negative impacts of menstruation in species with high aneuploidy rates and hence a high number of 'doomed' embryos. [ medical citation needed ] The female will ovulate spontaneously and be receptive to the male for breeding (express estrus) at regular biologically defined intervals. The female is receptive to males only while experiencing estrus. For breeding livestock, there are a number of advantages to be gained by finding methods to induce ovulation on a planned schedule, and thus synchronize the estrus cycle among many female animals. If animals can be bred on the same schedule, it increases convenience for the livestock owner, since the young animals will be at the same stage of development. Also, if artificial insemination (AI) is used for breeding, the AI technician's time can be used more efficiently, by breeding several females at the same time. In order to induce estrus, a variety of techniques have been tried in recent years, involving both more natural and more hormone-based methods.
Different ways of injecting or feeding hormones to livestock are costly and have variable success rates. [ 29 ] The average length (in days) of estrus and of the estrous cycle varies by species. [ 29 ]
https://en.wikipedia.org/wiki/Menstruation_(mammal)
The mental model theory of reasoning was developed by Philip Johnson-Laird and Ruth M.J. Byrne (Johnson-Laird and Byrne, 1991). It has been applied to the main domains of deductive inference including relational inferences such as spatial and temporal deductions; propositional inferences, such as conditional, disjunctive and negation deductions; quantified inferences such as syllogisms ; and meta-deductive inferences. Ongoing research on mental models and reasoning has led the theory to be extended to account for probabilistic inference (e.g., Johnson-Laird, 2006) and counterfactual thinking (Byrne, 2005). This psychology -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Mental_model_theory_of_reasoning
Mentat Portable Streams ( MPS ) was a platform-independent implementation of the UNIX System V STREAMS networking protocol stack , normally sold with the Mentat TCP stack providing TCP/IP support. Portable Streams was used in a number of commercial products, including Apple Computer 's Open Transport , AIX , VxWorks , Palm OS 's Cobalt, Novell 's UnixWare and other systems. [ 1 ] [ failed verification ] Mentat also ported the system to Linux [ 2 ] and Windows NT [ 3 ] [ failed verification ] as a standalone product. Portable Streams was written by Mentat, which was purchased by Packeteer in 2004. [ 4 ] This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Mentat_Portable_Streams
This page provides supplementary chemical data on Menthol . Handling this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as SIRI or the links below, and follow its directions.
https://en.wikipedia.org/wiki/Menthol_(data_page)
Independence of irrelevant alternatives ( IIA ) is an axiom of decision theory which codifies the intuition that a choice between A {\displaystyle A} and B {\displaystyle B} (both of which are relevant) should not depend on the quality of a third, unrelated outcome C {\textstyle C} . There are several different variations of this axiom, which are generally equivalent under mild conditions. As a result of its importance, the axiom has been independently rediscovered in various forms across a wide variety of fields, including economics , [ 1 ] cognitive science , social choice , [ 1 ] fair division , rational choice , artificial intelligence , probability , [ 2 ] and game theory . It is closely tied to many of the most important theorems in these fields, including Arrow's impossibility theorem , the Balinski–Young theorem , and the money pump arguments . In behavioral economics , failures of IIA (caused by irrationality ) are called menu effects or menu dependence . [ 3 ] This is sometimes explained with a short story by philosopher Sidney Morgenbesser : Morgenbesser, ordering dessert, is told by a waitress that he can choose between blueberry or apple pie. He orders apple. Soon the waitress comes back and explains cherry pie is also an option. Morgenbesser replies "In that case, I'll have blueberry." IIA rules out this kind of arbitrary behavior by stating that adding or removing a third option must not change which of the original two options is chosen, unless the new option is itself chosen. In economics, the axiom is connected to the theory of revealed preferences . Economists often invoke IIA when building descriptive (positive) models of behavior to ensure agents have well-defined preferences that can be used for making testable predictions . If agents' behavior or preferences are allowed to change depending on irrelevant circumstances, any model could be made unfalsifiable by claiming some irrelevant circumstance must have changed when repeating the experiment. Often, the axiom is justified by arguing that any irrational agent will be money pumped until going bankrupt, making their preferences unobservable or irrelevant to the rest of the economy. While economists must often make do with assuming IIA for reasons of computation or to make sure they are addressing a well-posed problem , experimental economists have shown that real human decisions often violate IIA. For example, the decoy effect shows that inserting a $5 medium soda between a $3 small and a $5.10 large can make customers perceive the large as a better deal (because it is "only 10 cents more than the medium"). Behavioral economics introduces models that weaken or remove many assumptions of consumer rationality, including IIA. This provides greater accuracy, at the cost of making the model more complex and more difficult to falsify. In social choice theory , independence of irrelevant alternatives is often stated as "if one candidate ( X {\displaystyle X} ) would win an election without a new candidate ( Y {\displaystyle Y} ), and Y {\displaystyle Y} is added to the ballot, then either X {\displaystyle X} or Y {\displaystyle Y} should win the election." Arrow's impossibility theorem shows that no reasonable (non-random, non- dictatorial ) ranked voting system can satisfy IIA. However, Arrow's theorem does not apply to rated voting methods. These can pass IIA under certain assumptions, but fail it if those assumptions are not met. Specific candidates that change the outcome without winning are called spoilers . [ 4 ] Methods that unconditionally pass IIA include sortition and random dictatorship .
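As an illustration (a minimal Python sketch added here, not part of the article or its sources), the spoiler effect just described can be made concrete with plurality voting; the ballot counts are hypothetical:

from collections import Counter

def plurality_winner(ballots):
    # Each ballot ranks candidates best-first; plurality counts only top choices.
    tally = Counter(ballot[0] for ballot in ballots)
    return tally.most_common(1)[0][0]

# Hypothetical electorate: 55 voters prefer A over B, 45 prefer B over A.
print(plurality_winner([("A", "B")] * 55 + [("B", "A")] * 45))  # -> A

# Candidate C enters and takes the top spot on 20 of A's ballots,
# while nobody's relative ranking of A versus B changes.
print(plurality_winner([("A", "B", "C")] * 35
                       + [("C", "A", "B")] * 20
                       + [("B", "A", "C")] * 45))  # -> B (C acts as a spoiler)

Because the A-versus-B preferences are identical in both elections, the change of winner from A to B is exactly the kind of IIA failure that makes C a spoiler.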
Deterministic voting methods that behave like majority rule when there are only two candidates can be shown to fail IIA by the use of a Condorcet cycle : Consider a scenario in which there are three candidates A {\displaystyle A} , B {\displaystyle B} , and C {\displaystyle C} , and the voters' preferences are as follows: (These are preferences, not votes, and thus are independent of the voting method.) 75% prefer C {\displaystyle C} over A {\displaystyle A} , 65% prefer B {\displaystyle B} over C {\displaystyle C} , and 60% prefer A {\displaystyle A} over B {\displaystyle B} . The presence of this societal intransitivity is the voting paradox . Regardless of the voting method and the actual votes, there are only three cases to consider: if A {\displaystyle A} wins, then removing B {\displaystyle B} from the ballot leaves a two-candidate election that C {\displaystyle C} wins, since 75% prefer C {\displaystyle C} over A {\displaystyle A} ; if B {\displaystyle B} wins, removing C {\displaystyle C} leaves an election that A {\displaystyle A} wins, since 60% prefer A {\displaystyle A} over B {\displaystyle B} ; and if C {\displaystyle C} wins, removing A {\displaystyle A} leaves an election that B {\displaystyle B} wins, since 65% prefer B {\displaystyle B} over C {\displaystyle C} . In every case, removing a losing candidate changes the outcome, so any such method fails IIA (a numerical check of this cycle appears at the end of this article). Generalizations of Arrow's impossibility theorem show that if the voters change their rating scales depending on the candidates who are running, the outcome of cardinal voting may still be affected by the presence of non-winning candidates. [ 5 ] Approval voting , score voting , and median voting may satisfy the IIA criterion if it is assumed that voters rate candidates individually and independently of knowing the available alternatives in the election, using their own absolute scale. If voters do not behave in accordance with this assumption, then those methods also fail the IIA criterion. Balinski and Laraki disputed that any interpersonal comparisons are required for rated voting rules to pass IIA. They argue the availability of a common language with verbal grades is sufficient for IIA, by allowing voters to give consistent responses to questions about candidate quality. In other words, they argue most voters will not change their beliefs about whether a candidate is "good", "bad", or "neutral" simply because another candidate joins or drops out of a race. [ 6 ] [ page needed ] Arguments have been made that IIA is itself an undesirable or unrealistic criterion. IIA is largely incompatible with the majority criterion unless there are only two alternatives, and the vast majority of voting systems fail the criterion. The satisfaction of IIA by approval and range voting rests on the unrealistic assumption that voters who have meaningful preferences between two alternatives, but who would approve or rate those two alternatives the same in an election with other irrelevant alternatives, would necessarily either cast a vote in which both alternatives are still approved or rated the same, or abstain, even in an election between only those two alternatives. If it is assumed to be at least possible that any voter having preferences might not abstain, or might rate their favorite and least favorite candidates differently, then these systems also fail IIA; allowing either of these conditions alone causes approval and range voting to fail IIA. The satisfaction of IIA thus leaves only voting methods that are undesirable in some other way, such as treating one of the voters as a dictator, or else requires making unrealistic assumptions about voter behavior. Amartya Sen argued that seemingly independent alternatives could provide context for individual choice, and thus that menu dependence might not be irrational. As an example, he described a person considering whether to take an apple out of a basket without being greedy.
If the only two options available are "take the apple" or "don't take the apple", this person may conclude that there is only one apple left and so refrain from taking the last apple as they don't want to be greedy. However, if a third option "take another apple" were available, that would provide context that there are more apples in the basket, and they would then be free to take the first apple. [ 7 ]
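Returning to the Condorcet cycle described earlier in this article, here is a short Python check (an added illustration, not from the article); the voter profile is one hypothetical distribution consistent with the stated percentages, namely 25% ranking A>B>C, 40% ranking B>C>A, and 35% ranking C>A>B:

# Hypothetical profile consistent with the text's pairwise percentages.
profile = {("A", "B", "C"): 0.25, ("B", "C", "A"): 0.40, ("C", "A", "B"): 0.35}

def pairwise_support(x, y):
    # Fraction of voters ranking candidate x above candidate y.
    return sum(weight for order, weight in profile.items()
               if order.index(x) < order.index(y))

for x, y in [("C", "A"), ("B", "C"), ("A", "B")]:
    print(f"{pairwise_support(x, y):.0%} prefer {x} over {y}")
# Prints 75%, 65%, 60%: every candidate loses one pairwise contest, so
# whichever candidate a method elects, removing one losing candidate
# flips the remaining two-candidate majority vote.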
https://en.wikipedia.org/wiki/Menu_dependence
Meprin A ( EC 3.4.24.18 , endopeptidase-2 , meprin-a , meprin , N-benzoyl-L-tyrosyl-p-aminobenzoic acid hydrolase , PABA-peptide hydrolase , PPH ) is an enzyme that cleaves protein and peptide substrates preferentially on the carboxyl side of hydrophobic residues. [ 1 ] This metalloprotease can be associated with inflammatory responses. [ 2 ] It can be found in the extracellular space, where it can also form complex structures by joining its monomers together. [ 2 ] Meprin A is a dimer composed of the products transcribed from the two genes MEP1A and MEP1B . This biochemistry article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Meprin_A
In mathematics , the Mercator series or Newton–Mercator series is the Taylor series for the natural logarithm : {\displaystyle \ln(1+x)=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-{\frac {x^{4}}{4}}+\cdots } In summation notation , {\displaystyle \ln(1+x)=\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}x^{n}.} The series converges to the natural logarithm (shifted by 1) whenever − 1 < x ≤ 1 {\displaystyle -1<x\leq 1} . The series was discovered independently by Johannes Hudde (1656) [ 1 ] and Isaac Newton (1665), but neither published the result. Nicholas Mercator also independently discovered it, and included values of the series for small values of the argument in his 1668 treatise Logarithmotechnia ; the general series was included in John Wallis 's 1668 review of the book in the Philosophical Transactions . [ 2 ] The series can be obtained by computing the Taylor series of ln ⁡ ( x ) {\displaystyle \ln(x)} at x = 1 {\displaystyle x=1} : {\displaystyle \ln(x)=(x-1)-{\frac {(x-1)^{2}}{2}}+{\frac {(x-1)^{3}}{3}}-\cdots } and substituting all x {\displaystyle x} with x + 1 {\displaystyle x+1} . Alternatively, one can start with the finite geometric series ( t ≠ − 1 {\displaystyle t\neq -1} ) {\displaystyle 1-t+t^{2}-\cdots +(-t)^{n-1}={\frac {1-(-t)^{n}}{1+t}},} which gives {\displaystyle {\frac {1}{1+t}}=1-t+t^{2}-\cdots +(-t)^{n-1}+{\frac {(-t)^{n}}{1+t}}.} It follows that {\displaystyle \int _{0}^{x}{\frac {dt}{1+t}}=\int _{0}^{x}\left(1-t+t^{2}-\cdots +(-t)^{n-1}\right)dt+\int _{0}^{x}{\frac {(-t)^{n}}{1+t}}\,dt} and by termwise integration, {\displaystyle \ln(1+x)=x-{\frac {x^{2}}{2}}+\cdots +(-1)^{n-1}{\frac {x^{n}}{n}}+(-1)^{n}\int _{0}^{x}{\frac {t^{n}}{1+t}}\,dt.} If − 1 < x ≤ 1 {\displaystyle -1<x\leq 1} , the remainder term tends to 0 as n → ∞ {\displaystyle n\to \infty } . This expression may be integrated iteratively k more times to yield a similar identity whose coefficients involve two polynomials in x . [ 3 ] Setting x = 1 {\displaystyle x=1} in the Mercator series yields the alternating harmonic series {\displaystyle \sum _{k=1}^{\infty }{\frac {(-1)^{k+1}}{k}}=\ln 2.} The complex power series {\displaystyle \sum _{n=1}^{\infty }{\frac {z^{n}}{n}}=z+{\frac {z^{2}}{2}}+{\frac {z^{3}}{3}}+{\frac {z^{4}}{4}}+\cdots } is the Taylor series for − log ⁡ ( 1 − z ) {\displaystyle -\log(1-z)} , where log denotes the principal branch of the complex logarithm . This series converges precisely for those complex numbers z {\displaystyle z} with | z | ≤ 1 , z ≠ 1 {\displaystyle |z|\leq 1,z\neq 1} . In fact, as seen by the ratio test , it has radius of convergence equal to 1, and therefore converges absolutely on every disk B (0, r ) with radius r < 1. Moreover, it converges uniformly on every nibbled disk B ( 0 , 1 ) ¯ ∖ B ( 1 , δ ) {\textstyle {\overline {B(0,1)}}\setminus B(1,\delta )} , with δ > 0. This follows at once from the algebraic identity {\displaystyle (1-z)\sum _{n=1}^{m}{\frac {z^{n}}{n}}=z-\sum _{n=2}^{m}{\frac {z^{n}}{n(n-1)}}-{\frac {z^{m+1}}{m}},} observing that the right-hand side is uniformly convergent on the whole closed unit disk.
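As a quick numerical sanity check (an added illustration, not part of the article), the partial sums of the series can be compared against the logarithm in Python:

import math

def mercator_partial_sum(x, n_terms):
    # Partial sum of x - x^2/2 + x^3/3 - ... up to n_terms terms.
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

for x in (0.5, -0.5, 1.0):
    print(f"x={x:+.1f}: series={mercator_partial_sum(x, 100000):.6f}, "
          f"ln(1+x)={math.log(1 + x):.6f}")
# At x = 1 the series becomes the alternating harmonic series and converges
# to ln 2, but only slowly, reflecting the boundary of the convergence interval.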
https://en.wikipedia.org/wiki/Mercator_series
The Mercedes-Benz Bionic is a concept car created by DaimlerChrysler AG under the Mercedes Group. It was first introduced in 2005 at the DaimlerChrysler Innovation Symposium in Washington, D.C. The Bionic is modeled after the yellow boxfish , Ostracion cubicus , [ 1 ] and has 80% lower nitrogen oxide emissions with its selective catalytic reduction technology. The Bionic is powered by a 103 kW direct-injection diesel engine with an average fuel economy of 54.7 MPG (US) (~4.3 L/100 km). [ 2 ] This engine also outputs around 140 hp (104 kW) and a little over 221 ft⋅lbf (300 N⋅m) of torque at around 1,600 rpm. The Bionic can go from 0 to 60 mph (0 to 97 km/h) in about eight seconds and has a top speed of a little over 190 km/h (118 mph). The exterior design was modeled after the yellow boxfish ( Ostracion cubicus ), a marine fish that lives in coral reefs. Mercedes-Benz decided to model the Bionic after this fish due to the supposed low coefficient of drag of its body shape [ 3 ] and the rigidity of its exoskeleton ; this influenced the car's unusual looks. It was believed that the shape of the boxfish would improve aerodynamics and stability. [ 4 ] However, in 2015, a paper in the Journal of the Royal Society Interface claimed that "The drag-reduction performance of the two boxfish species studied was relatively low compared with more generalized body shapes of fish". [ 5 ] [ 6 ] Other design features include rear wheels partially covered with plastic and an overall lightweight construction. Mercedes-Benz reported a drag coefficient of 0.19; [ 7 ] for comparison, the production vehicle with the lowest ever C d value was the GM EV1 , at 0.195. While the Bionic had a much larger internal volume than the EV1, the Bionic's larger frontal area made the EV1 more aerodynamic overall, as drag is proportional to the product of the frontal area and the drag coefficient. The vehicle was capable of seating four people. [ 2 ]
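To make the closing comparison concrete, here is a minimal Python sketch (an added illustration, not from the article); the frontal-area values are hypothetical placeholders, since the article quotes only the two drag coefficients:

# Drag force F = 0.5 * rho * v^2 * Cd * A depends on the product Cd * A
# (the "drag area"), so a lower Cd alone does not imply lower total drag.
def drag_force_newtons(cd, frontal_area_m2, speed_ms, air_density=1.225):
    return 0.5 * air_density * speed_ms ** 2 * cd * frontal_area_m2

cd_bionic, cd_ev1 = 0.19, 0.195        # drag coefficients from the article
area_bionic, area_ev1 = 2.5, 1.9       # hypothetical frontal areas in m^2

for name, cd, area in [("Bionic", cd_bionic, area_bionic),
                       ("EV1", cd_ev1, area_ev1)]:
    v = 100 / 3.6  # 100 km/h expressed in m/s
    print(f"{name}: CdA = {cd * area:.3f} m^2, "
          f"drag at 100 km/h ~ {drag_force_newtons(cd, area, v):.0f} N")

With these illustrative areas, the EV1's smaller frontal area gives it the lower drag area (CdA) despite its slightly higher drag coefficient, which is the article's point.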
https://en.wikipedia.org/wiki/Mercedes-Benz_Bionic
Mercedes Pascual is a Uruguayan theoretical ecologist , [ 1 ] and a Professor in the Department of Ecology and Evolution at the University of Chicago , where she leads the Laboratory for Modeling and Theory in Ecology and Epidemiology (MATE). [ 2 ] [ 3 ] She was previously the Rosemary Grant Collegiate Professor at the University of Michigan and a Howard Hughes Medical Institute Investigator. [ 4 ] Pascual has developed systems models for the study of complicated, irregular cycles in ecosystems, using mathematical, statistical and computational approaches. She applies these models to the study of food webs , [ 5 ] [ 6 ] [ 7 ] ecology , and epidemiology , in particular the evolution of infectious diseases . [ 8 ] She has discovered relationships between El Niño climate patterns and the occurrence of cholera outbreaks in Bangladesh . [ 1 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] One of the patterns she reports is that El Niño episodes are becoming an increasingly strong driver of disease outbreaks. Her work may be the first quantitative evidence to show global climate change affecting an infectious disease . [ 13 ] Other diseases that she studies include malaria [ 14 ] and influenza . [ 15 ] Her models can be used predictively in support of public health. [ 16 ] [ 17 ] Pascual was born in Uruguay and grew up in Argentina and Brazil . [ 18 ] Her father was a chemical engineer. [ 1 ] Pascual did undergraduate work in marine biology and mathematics at Universidade Santa Úrsula (USU, 1978–1979) and at Pontifical Catholic University of Rio de Janeiro (PUC, 1980). She received her Licentiate degree in biology from the Universidad de Ciencias Exactas y Naturales in Buenos Aires, Argentina in 1985. She received an M.Sc. in mathematics from New Mexico State University in Las Cruces, New Mexico in 1989. [ 19 ] Pascual earned her Ph.D. in biological oceanography from a joint program of Massachusetts Institute of Technology and Woods Hole Oceanographic Institution , attending from 1989 to 1995. [ 20 ] [ 1 ] She worked with Hal Caswell. [ 1 ] Her thesis was on Some Nonlinear Problems in Plankton Ecology. [ 20 ] She did postdoctoral work at Princeton University from 1995 to 1997. [ 21 ] In addition to other positions, Pascual held an assistant professorship at the University of Maryland [ 22 ] from 1997 to 2000. She joined the University of Michigan [ 13 ] as an assistant professor in the newly created department of Ecology and Evolutionary Biology in 2001. [ 23 ] She was an associate professor from 2004 to 2008, and the Rosemary Grant Collegiate Professor from 2008 to 2014. In addition, Pascual was a Howard Hughes Medical Institute Investigator from 2008 to 2015. As of 2015, Pascual became a Professor in the Department of Ecology and Evolution at the University of Chicago . [ 2 ] In 1996, Pascual received the U.S. Department of Energy Alexander Hollaender Distinguished Postdoctoral Fellowship to study at Princeton University. [ 21 ] She received a Centennial fellowship from the James S. McDonnell Foundation in 1999. [ 22 ] In 2002, Discover magazine recognized Pascual as one of the 50 most important women in science. [ 24 ] Pascual received the 2014 Robert H. MacArthur Award from the Ecological Society of America . [ 13 ] Pascual is a member of the American Association for the Advancement of Science , and served on its board of directors from 2015 to 2019. [ 8 ] In 2019, she was elected to the American Academy of Arts and Sciences . [ 25 ]
https://en.wikipedia.org/wiki/Mercedes_Pascual
A merchant's mark is an emblem or device adopted by a merchant, and placed on goods or products sold by him in order to keep track of them, or as a sign of authentication. It may also be used as a mark of identity in other contexts. Merchants' marks are as old as the sealings of the third millennium BCE found in Sumer that originated in the Indus Valley . [ 1 ] Impressions of cloth, strings and other packing material on the reverse of tags with seal impressions indicate that the Harappan seals were used to control economic administration and trade. [ 2 ] [ 3 ] Amphorae from the Roman Empire can sometimes be traced to their sources from the inscriptions on their handles. Commercial inscriptions in Latin , known as tituli picti , appear on Roman containers used for trade . [ 4 ] Symbolic merchants' marks continued to be used by artisans and townspeople of the medieval and early modern eras [ 5 ] to identify themselves and authenticate their goods. These distinctive and easily recognizable marks often appeared in their seals on documents and on products made for sale. They are often found on headstones , the covers of commercial ledgers, [ 6 ] and in works of stained glass , [ 7 ] brass, and stone, serving in place of heraldic imagery, which could not be used by the middle classes . They were the precursors of hallmarks , printer's marks , [ 8 ] and trademarks . To manage the risks of piracy or shipwreck, merchants often consigned a cargo to several vessels or caravans; a mark on a bale established legal ownership and avoided confusion. Early travellers, voyagers and merchants also displayed their merchants' marks to ward off evil. Adventurous travellers and sailors ascribed the terrors and perils of their life to the wrath of the Devil . To counter these dangers, merchants employed all sorts of religious and magical means to place their caravans, ships and merchandise under the protection of God and His Saints. One such symbol combined the mystical "Sign of Four" with the merchant's name or initials. The "Sign of Four" [ 9 ] was an outgrowth of an ancient symbol adopted by the Romans and by Christianity, Chi Rho (XP), standing for the first two letters of Christus in Greek letters; this was simplified to a reversed "4" in medieval times. The evolution of this symbol is shown in M. J. Shah's article. [ 10 ] The "Sign of Four" is called the "Staff of Mercury" ( Caduceus ) in German and Scandinavian literature on house marks. [ 11 ] The joint stock company or limited liability company was another way to reduce a merchant's risks of loss of ships and merchandise from dangerous voyages and travel. By royal charter a monopoly was assured and a merchant's personal liability was limited to the amount of his own investment. If a voyage succeeded, the gains accrued to all of the investors in proportion to their invested capital shares. Modern institutions, corporations and trademarks, find some of their origins in these symbolic and legal devices for limiting physical and pecuniary risks . [ citation needed ] When the East India Company was chartered by Elizabeth I , Queen of England in 1600, it was still customary for each merchant or Company of Merchant Adventurers to have a distinguishing mark which included the "Sign of Four" and served as a trademark. The East India Company's mark was made up from a '+', a '4' and the initials EIC. This mark forms the central emblem displayed on the Scinde Dawk postage stamps. [ 12 ] Also, it was a central motif of the East India Company's coinage. [ 13 ]
https://en.wikipedia.org/wiki/Merchant's_mark
The Merck Index is an encyclopedia of chemicals , drugs and biologicals with over 10,000 monographs on single substances or groups of related compounds [ 1 ] published online by the Royal Society of Chemistry . [ 2 ] The first edition of Merck's Index was published in 1889 by the German chemical company Emanuel Merck and was primarily used as a sales catalog for Merck's growing list of chemicals it sold. [ 2 ] The American subsidiary was established two years later and continued to publish it. During World War I the US government seized Merck's US operations and made it a separate American "Merck" company that continued to publish the Merck Index. In 2012 the Merck Index was licensed to the Royal Society of Chemistry . [ 3 ] An online version of The Merck Index, including historic records and new updates not in the print edition, [ 1 ] is commonly available through research libraries. It also includes an appendix with monographs on organic named reactions . The 15th edition was published in April 2013. Monographs in The Merck Index typically contain the substance's name and synonyms, chemical structure and formula, physical properties, and literature references. [ 1 ] This article about an encyclopedia is a stub . You can help Wikipedia by expanding it . This article about a chemistry -related book is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Merck_Index