the IRB. Additionally, precipitation contributes to increased groundwater flow and aquatic currents, which improve mineral advection. This enhanced advection facilitates the transport of ferric iron (Fe(III)), increasing its availability as the electron acceptor for IRB. Precipitation also affects IRB through rainwater-driven oxic weathering: by modifying the iron and mineral content of soil and groundwater, weathering further adjusts the conditions required for IRB activity. Similarly, periodic flooding of grasslands and rice paddies improves IRB metabolic efficacy. Sedimentation driven by the hydrological processes listed above, as well as by other natural processes, shapes the potential habitats of IRB. The rate of deposition strongly influences the creation and removal of aquatic sediment. With increased sedimentation, the amount of anoxic, iron-rich sediment increases, benefiting IRB dispersion; in contrast, removal of sediment by fast currents, or low rates of sedimentation, exposes deeper layers to oxygen, restricting IRB activity. This variation in aquatic sedimentation creates a dynamic setting for IRB growth. The seasonal freeze-thaw cycle in permafrost regions also creates dynamic conditions for IRB. Thawing permafrost produces an anoxic setting, which supports iron reduction and IRB metabolic activity; frozen permafrost, on the other hand, maintains oxic conditions that prevent IRB from conducting iron reduction. These seasonal fluctuations in permafrost create a cycle of changing opportunities for IRB metabolic activity and iron reduction. Iron-reducing bacteria (IRB) display exceptional tolerance of physicochemical factors such as temperature, salinity, and pH. Although most IRB prefer a mesophilic temperature range of 20 °C to 40 °C, a minority of extremophiles thrive in niches such as hydrothermal vents, which can exceed 80 °C.
{ "page_id": 79627568, "source": null, "title": "Dissimilatory iron reducing bacteria" }
Similarly, IRB are predominantly neutrophilic, preferring pH 6 to 8; however, a few IRB are considered acidophilic or alkaliphilic. This minority of IRB inhabits pH ranges below 5 and above 9, respectively, in ecological niches such as mine drainage sites or soda lake sediment. Just as with pH and temperature, IRB are largely resilient to saline conditions, as shown by their geographic distribution across both freshwater and marine environments. At the extreme, IRB have been observed in hypersaline lake sediments containing 5 M NaCl. Despite the extremophilic nature of some IRB, very few tolerate multiple extreme physicochemical factors, and IRB occupying niches outside their preferred physicochemical ranges experience reduced growth and metabolic activity. == Environmental significance == === Role in the iron cycle === The bacterial iron cycle is based on the bacterially mediated oxidation of Fe2+ bicarbonate under O2-limiting (anoxic) conditions and ends with the formation of Fe(OH)3 ferrihydrite, which is easily accessible for reduction by iron-reducing bacteria. Formation of ferrihydrite in anoxic environments occurs through two phototrophic pathways involving either cyanobacteria or purple nonsulfur bacteria. == Possible uses of IRB in bioremediation == === Mercury methylation === Mercury (Hg) pollution affects water sources and poses a human health concern. Mercury methylation is an anaerobic microbial process usually driven by dissimilatory sulfate-reducing bacteria (DSRB). This process produces methylmercury (MeHg), which can be transferred from sediments to water and organisms and bioaccumulates easily. Dissimilatory iron-reducing bacteria (DIRB) may also mediate mercury methylation: the mercury methylation rate has been shown to correlate positively with the iron reduction rate, implying that Fe(III) reduction stimulates the formation of MeHg. Iron availability influences this process in two ways. First, it can change the activity rate of DIRB relative to DSRB.
Second, it can alter the chemistry of mercury, affecting its bioavailability. However, the relationship between mercury methylation and iron reduction most likely depends on the physiological characteristics of the DIRB present in an area. === Removal of organic matter === Urbanization and poor sewage management introduce organic pollutants into rivers, where they accumulate in sediments. This leads to water degradation and anaerobic water bodies, in which organic matter (OM) is transformed into odorous substances. One possible solution to this problem is using DIRB to metabolize the OM in the sediments under anaerobic conditions. OM serves as an electron donor in sediments, supporting iron reduction by DIRB; consequently, higher sediment pollution levels result in increased iron reduction by DIRB. Large populations of DIRB sustain OM removal better, and sediments enriched with iron, nitrogen, and sulphur allow DIRB to improve iron reduction because, under iron-limiting conditions, they can use extracellular electron shuttles to further enhance Fe(III) reduction. DIRB communities with high diversity and adaptability are considered the best choice for OM bioremediation; however, their performance is affected by local environmental factors, and different DIRB differ in their environmental sensitivities. === Removal of heavy metals === Rapid industrialization has increased the demand for copper (Cu), and the combination of larger mines and declining copper ore grades has produced greater volumes of copper tailings. Copper tailings occupy extensive areas of farmland and forest, negatively impacting food safety. IRB reduce Fe(III) to Fe(II) and therefore decrease the environmental redox potential. Directly adding IRB to an environment leads to competition and coexistence with the local microbial community, causing shifts in microbial composition that alter the tailing environment. The iron reduction process also alters the microbial community, promoting detoxification of heavy metals in sediments.
Some bioremediation attempts have used both SRB and IRB, which increased the pH and permeation time of the copper tailings. Reducing tailings permeability decreases water and oxygen infiltration in mine tailings, reducing oxidation and creating an anoxic environment that increases iron reduction rates, immobilizes heavy metals, and supports their conversion into sulphides. However, remediation effects may vary with depth due to changes in microbial community composition. === Dechlorination of organic compounds === Chlorinated organic compounds are toxic to organisms and difficult to degrade. Some DIRB carry dechlorination genes that allow them to partially dechlorinate these pollutants; these abilities can be enhanced through genetic engineering. The dechlorination process starts when DIRB convert Fe(III) to Fe(II), which helps remove chlorinated pollutants. Fe(II) species can react with chlorinated organic compounds directly and accelerate dechlorination. Furthermore, the close association of Fe(II) atoms promotes the electron transfer reactions characteristic of dechlorination. This type of dechlorination is abiotic and interacts with the biotic (cell-mediated) pathway. However, excess DIRB can also hinder dechlorination; thus, maintaining an appropriate DIRB dosage is essential for effective pollutant removal. DIRB can also release nutrients into the environment, enhancing the dechlorination abilities of other bacteria. Even where possible, using only DIRB for dechlorination is not ideal, because they have a low degradation rate and produce toxic intermediates. For this reason, research is being done on coupling DIRB with other microorganisms, chemical materials, and technologies. == References ==
Zeitschrift für Physikalische Chemie (English: Journal of Physical Chemistry) is a monthly peer-reviewed scientific journal covering physical chemistry that is published by Oldenbourg Wissenschaftsverlag. Its English subtitle is "International Journal of Research in Physical Chemistry and Chemical Physics". It was established in 1887 by Wilhelm Ostwald, Jacobus Henricus van 't Hoff, and Svante August Arrhenius as the first scientific journal specifically for publications in the field of physical chemistry. The editor-in-chief is Klaus Rademann (Humboldt University of Berlin). == Abstracting and indexing == The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.408. == References == == External links == Official website
{ "page_id": 656692, "source": null, "title": "Zeitschrift für Physikalische Chemie" }
Pure and Applied Chemistry is the official journal of the International Union of Pure and Applied Chemistry (IUPAC). It is published monthly by Walter de Gruyter and contains recommendations and reports, as well as lectures from symposia. == References ==
{ "page_id": 8783159, "source": null, "title": "Pure and Applied Chemistry" }
In physics, an absorption edge (also known as an absorption discontinuity or absorption limit) is a sharp discontinuity in the absorption spectrum of a substance. These discontinuities occur at wavelengths where the energy of an absorbed photon corresponds to an electronic transition or ionization potential. When the quantum energy of the incident radiation becomes smaller than the work required to eject an electron from one of the quantum states of the absorbing atom, the incident radiation ceases to be absorbed by that state. For example, radiation incident on an atom at a wavelength whose corresponding energy is just below the binding energy of that atom's K-shell electron cannot eject the K-shell electron. Siegbahn notation is used for notating absorption edges. In compound semiconductors, the bonding between atoms of different species forms a set of dipoles. These dipoles can absorb energy from an electromagnetic field, achieving maximum coupling to the radiation when the frequency of the radiation equals a vibrational mode of the dipole. When this happens, the absorption coefficient peaks, yielding the fundamental edge. This occurs in the far infrared region of the spectrum. == See also == K-edge Siegbahn notation == References ==
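The edge condition described above, that radiation is absorbed by a shell only when the photon energy hc/λ reaches the shell's binding energy, can be sketched numerically. This is a minimal illustration, not part of the article: the function names are invented here, and the sample edge energy is an approximate value near the copper K edge (exact edge energies are tabulated per element).

```python
# Sketch: a photon can eject an electron from a shell only if its
# energy hc/lambda meets or exceeds the shell's binding energy,
# which is why absorption drops sharply past the edge wavelength.
H_C = 1239.84193  # eV*nm, product of Planck's constant and the speed of light

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nanometres."""
    return H_C / wavelength_nm

def absorbed_by_shell(wavelength_nm, binding_energy_ev):
    """True if radiation of this wavelength can eject an electron from the shell."""
    return photon_energy_ev(wavelength_nm) >= binding_energy_ev

# Illustrative K-shell binding energy (approximately the copper K edge;
# tabulated values should be used for real calculations).
K_EDGE_EV = 8979.0
edge_wavelength = H_C / K_EDGE_EV  # wavelength at which absorption jumps

assert absorbed_by_shell(edge_wavelength * 0.99, K_EDGE_EV)      # shorter wavelength: absorbed
assert not absorbed_by_shell(edge_wavelength * 1.01, K_EDGE_EV)  # just past the edge: transparent
```

Two wavelengths differing by only a few percent thus fall on opposite sides of the edge, which is the discontinuity the article describes.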
{ "page_id": 18482489, "source": null, "title": "Absorption edge" }
Varignon's theorem is a theorem of the French mathematician Pierre Varignon (1654–1722), published in 1687 in his book Projet d'une nouvelle mécanique. The theorem states that the torque of a resultant of two concurrent forces about any point is equal to the algebraic sum of the torques of its components about the same point. In other words, "If many concurrent forces are acting on a body, then the algebraic sum of torques of all the forces about a point in the plane of the forces is equal to the torque of their resultant about the same point." == Proof == Consider a set of N force vectors \mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_N that concur at a point \mathbf{O} in space. Their resultant is \mathbf{F} = \sum_{i=1}^{N} \mathbf{f}_i. The torque of each vector with respect to some other point \mathbf{O}_1 is \mathrm{T}_{O_1}^{\mathbf{f}_i} = (\mathbf{O} - \mathbf{O}_1) \times \mathbf{f}_i. Adding up the torques and pulling out the common factor (\mathbf{O} - \mathbf{O}_1), one sees that the result may be expressed solely in terms of \mathbf{F}, and is in fact the torque of \mathbf{F} with respect to the point \mathbf{O}_1: \sum_{i=1}^{N} \mathrm{T}_{O_1}^{\mathbf{f}_i} = (\mathbf{O} - \mathbf{O}_1) \times \left( \sum_{i=1}^{N} \mathbf{f}_i \right) = (\mathbf{O} - \mathbf{O}_1) \times \mathbf{F} = \mathrm{T}_{O_1}^{\mathbf{F}}. This proves the theorem: the sum of the torques about \mathbf{O}_1 equals the torque of the sum of the forces about that same point. == References == == External links == Varignon's Theorem at TheFreeDictionary.com
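As a quick numerical check of the torque identity in the proof, the following sketch (plain Python with hand-rolled 3-vector helpers; the force values are arbitrary) sums the torques of three concurrent forces about a reference point and compares the result with the torque of their resultant.

```python
# Numerical check of Varignon's theorem (mechanics): the sum of the
# torques of concurrent forces about a point equals the torque of
# their resultant about that same point. Vectors are plain 3-tuples.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vsub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def vadd(*vs):
    return tuple(map(sum, zip(*vs)))

O = (2.0, 1.0, 0.0)    # common point of application of the forces
O1 = (0.0, 0.0, 0.0)   # reference point for the torques
forces = [(3.0, 0.0, 1.0), (-1.0, 2.0, 0.0), (0.5, -0.5, 2.0)]

r = vsub(O, O1)  # lever arm (O - O1), shared by all forces since they concur at O
torque_sum = vadd(*[cross(r, f) for f in forces])  # sum of individual torques
resultant = vadd(*forces)                          # F = sum of forces
torque_of_resultant = cross(r, resultant)          # torque of F about O1

assert all(abs(a - b) < 1e-9 for a, b in zip(torque_sum, torque_of_resultant))
```

The assertion holds for any choice of O, O1, and forces, because the cross product distributes over the sum exactly as in the proof.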
{ "page_id": 38208829, "source": null, "title": "Varignon's theorem (mechanics)" }
The William Bate Hardy Prize is awarded by the Cambridge Philosophical Society. It is awarded once every three years “for the best original memoir, investigation or discovery by a member of the University of Cambridge in connection with Biological Science that may have been published during the three years immediately preceding”. == Recipients == (incomplete list; the prize had been awarded at least 22 times by 2014) 1966 Hugh Huxley (inaugural winner) 1969 Sydney Brenner and Ralph Riley 1976 Frederick Sanger 1978 Richard Henderson 1981 César Milstein 1984 John Gurdon 1987 Michael Berridge 1991 Azim Surani 1993 Martin Evans 1995 Nicholas Barry Davies 1998 Tim Clutton-Brock and Andrew Wyllie (shared) 2001 Michael Neuberger and James Cuthbert Smith (shared) 2004 Andrea Brand and Robin Irvine (shared) 2010 Beverley Glover, Peter Forster and Simon Conway Morris (shared) 2014 Serena Nik-Zainal == See also == List of biology awards == External links == Charter, Abstracts of Laws Prescribed by Charter, Bye Laws, Regulations from the William Hopkins Prize and the William Bate Hardy Prize == References ==
{ "page_id": 35849534, "source": null, "title": "William Bate Hardy Prize" }
{ "page_id": 263487, "source": null, "title": "Trans fat regulation" }
Trans fat regulation, which aims to limit the amount of trans fat (fat containing trans fatty acids) in industrial food products, has been enacted in many countries. These regulations were motivated by numerous studies that pointed to significant negative health effects of trans fat. It is generally accepted that trans fat in the diet is a contributing factor in several diseases, including cardiovascular disease, diabetes, and cancer. == History == As early as 1956, there were suggestions in the scientific literature that trans fats could be a cause of the large increase in coronary artery disease, but after three decades the concerns were still largely unaddressed. Instead, by the 1980s, fats of animal origin had become one of the greatest concerns of dieticians. Activists such as Phil Sokolof, who took out full-page ads in major newspapers, attacked the use of beef tallow in McDonald's french fries and urged fast-food companies to switch to vegetable oils. The result was an almost overnight switch by most fast-food outlets to trans fats. Studies in the early 1990s, however, brought renewed scrutiny and confirmation of the negative health impact of trans fats. In 1994, it was estimated that trans fats caused 20,000 deaths annually in the United States from heart disease. Mandatory food labeling for trans fats was introduced in several countries, and campaigns were launched by activists to bring attention to the issue and change the practices of food manufacturers. == International regulation == The international trade in food is standardized in the Codex Alimentarius. Hydrogenated oils and fats come under the scope of Codex Stan 19; non-dairy fat spreads are covered by Codex Stan 256-2007. In the Codex Alimentarius, trans fat to be labelled as such is defined as the geometrical isomers of monounsaturated and polyunsaturated fatty acids having non-conjugated [interrupted by at least one methylene group (−CH2−)] carbon-carbon double bonds in the trans configuration. This definition specifically excludes the trans fats (vaccenic acid and conjugated linoleic acid) that are present especially in human milk, dairy products, and beef. In 2018 the World Health Organization launched a plan to eliminate trans fat from the global food supply. It estimates that trans fat leads to more than 500,000 deaths from cardiovascular disease yearly. === Argentina === Trans fat content labeling has been required since August 2006. Since 2010, vegetable oils and fats sold directly to consumers may contain at most 2% trans fat of total fat, and other foods at most 5% of their total fat. Since 10 December 2014, Argentina has had in effect a total ban on food with trans fat, a regulation that could save the government more than US$100 million a year on healthcare. === Australia === The Australian federal government has indicated that it wants to actively pursue a policy of reducing trans fats in fast foods. The former federal assistant health minister, Christopher Pyne, asked fast food outlets to reduce their trans fat use. A draft plan was proposed, with a September 2007 timetable, to reduce reliance on trans fats and saturated fats. As of 2018, Australia's food labeling laws do not require trans fats to be shown separately from the total fat content. However, margarine in Australia has been mostly free of trans fat since 1996. === Austria === Trans fat content is limited to 4% of total fat, and to 2% in products that contain more than 20% fat. === Belgium === The Conseil Supérieur de la Santé published in 2012 a science-policy advisory report on industrially produced trans fatty acids, focusing on the general population. Its recommendation to the legislature was to
prohibit more than 2 g of trans fatty acids per 100 g of fat in food products. === Brazil === Resolution 360 of 23 December 2003 by the Brazilian ministry of health required for the first time in the country that the amount of trans fat be specified on the labels of food products. On 31 July 2006, such labelling of trans fat content became mandatory. In 2019, Anvisa published new legislation to reduce the total amount of trans fat in any industrialized food sold in Brazil to a maximum of 2% by the end of 2023. === Canada === In a process that began in 2004, Health Canada moved against partially hydrogenated oils (PHOs), the primary source of industrially produced trans fats in foods. On 15 September 2017, Health Canada announced that trans fat would be completely banned effective 15 September 2018; the ban came into effect as scheduled, and it is now illegal for manufacturers to add partially hydrogenated oils to foods sold in or imported into Canada. === Denmark === In March 2003, Denmark became the first country to effectively ban artificial trans fat. It limited the trans share to 2% of fats and oils destined for human consumption, a standard that partially hydrogenated oil fails. This restriction applies to the ingredients rather than the final products. This regulatory approach made Denmark the first country in which it was possible to eat "far less" than 1 g of industrially produced trans fats daily, even with a diet including processed foods. One public health study concluded that the Danish government's efforts to decrease trans fat intake from 6 g to 1 g per day over 20 years are
related to a 50% decrease in deaths from ischemic heart disease. === European Union === In 2004, the European Food Safety Authority produced a scientific opinion on trans fatty acids, concluding that "higher intakes of TFA may increase risk for coronary heart disease". Since 2 April 2021, foods in the EU intended for consumers have been required to contain less than 2 g of industrial trans fat per 100 g of fat. === Greece === Law in Greece limits the content of trans fats sold in school canteens to 0.1% (Ministerial Decision Υ1γ/ΓΠ/οικ 81025/ΦΕΚ 2135/τ.Β'/29-08-2013 as modified by Ministerial Decision Υ1γ/ Γ.Π/οικ 96605/ΦΕΚ 2800 τ.Β/4-11-201). === Iceland === Total trans fat content was limited in 2010 to 2% of total fat content. === Israel === Since 2014, it has been obligatory to mark food products containing more than 2% fat (by weight); the nutritional facts must state the amount of trans fats. === Romania === On 19 August 2020, the president promulgated Law 182/2020, which limits trans fats to a maximum of 2 grams per 100 grams of fat. Food producers who do not conform will be fined between 10,000 and 30,000 lei. The law, initiated in 2017 by Save Romania Union senator Adrian Wiener, came into force on 1 April 2021. === Saudi Arabia === The Saudi Food and Drug Authority (SFDA) requires importers and manufacturers to state trans fat amounts in the nutritional facts labels of food products according to the requirements of Saudi Standard Specifications/Gulf Specifications. Starting in 2020, the Saudi Minister of Health announced a ban on trans fat in all food products due to their health risks. === Singapore === The Ministry of Health announced a total ban on partially-hydrogenated oils (PHOs) on 6 March 2019. The target was set to
ban PHOs by June 2021, with the aim of encouraging healthy eating habits. The total ban on PHOs took effect on 1 June 2021. === Sweden === The parliament gave the government a mandate in 2011 to submit without delay a law prohibiting the use of industrially produced trans fats in foods; as of 2017, the law had not yet been implemented. === Switzerland === Switzerland followed Denmark's trans fats ban, implementing its own starting in April 2008. === United Kingdom === In October 2005, the Food Standards Agency (FSA) asked for better labelling in the UK. In the 29 July 2006 edition of the British Medical Journal, an editorial also called for better labelling. In January 2007, the British Retail Consortium announced that major UK retailers, including Asda, Boots, Co-op Food, Iceland, Marks and Spencer, Sainsbury's, Tesco and Waitrose, intended to cease adding trans fatty acids to their own products by the end of 2007. On 13 December 2007, the Food Standards Agency issued news releases stating that voluntary measures to reduce trans fats in food had already resulted in safe levels of consumer intake. On 15 April 2010, a British Medical Journal editorial called for trans fats to be "virtually eliminated in the United Kingdom by next year". The June 2010 National Institute for Health and Clinical Excellence (NICE) report Prevention of cardiovascular disease declared that 40,000 cardiovascular disease deaths in 2006 were "mostly preventable". To achieve this, NICE offered 24 recommendations, including product labelling, public education, protecting under-16s from marketing of unhealthy foods, promoting exercise and physically active travel, and even reforming the Common Agricultural Policy to reduce production of unhealthy foods. Fast-food outlets were mentioned as a risk factor, with (in 2007) 170 g of McDonald's fries and 160 g of nuggets containing 6 to
8 g of trans fats, conferring a substantially increased risk of death from coronary artery disease. NICE made three specific recommendations for diet: (1) reduction of dietary salt to 3 g per day by 2025; (2) halving consumption of saturated fats; and (3) eliminating the use of industrially produced trans fatty acids in food. However, the recommendations were greeted unhappily by the food industry, which stated that it was already voluntarily dropping trans fat levels to below the WHO recommendation of a maximum of 2%. Rejecting an outright ban, the Health Secretary Andrew Lansley launched a voluntary pledge on 15 March 2012 to remove artificial trans fats by the end of the year. Asda, Pizza Hut, Burger King, Tesco, Unilever and United Biscuits are some of the 73 businesses that have agreed to do so. Lansley and his special adviser Bill Morgan formerly worked for firms with interests in the food industry, and some journalists have alleged that this creates a conflict of interest. Many health professionals are not happy with the voluntary nature of the deal. Simon Capewell, Professor of Clinical Epidemiology at the University of Liverpool, felt that justifying intake on the basis of average figures was unsuitable, since some members of the community could considerably exceed the average. === United States === On 11 July 2003, the Food and Drug Administration (FDA) issued a regulation requiring manufacturers to list trans fat on the Nutrition Facts panel of foods and some dietary supplements. The new labeling rule became mandatory across the board on 1 January 2006, even for companies that had petitioned for extensions. However, unlike in many other countries, trans fat levels of less than 0.5 grams per serving can be listed as 0 grams of trans fat on the food label. According to a study published in the Journal
of Public Policy & Marketing, without an interpretive footnote or further information on recommended daily value, many consumers do not know how to interpret the meaning of trans fat content on the Nutrition Facts panel. Without specific prior knowledge about trans fat and its negative health effects, consumers, including those at risk for heart disease, may misinterpret nutrient information provided on the panel. The FDA did not approve nutrient content claims such as "trans fat free" or "low trans fat", as it could not determine a "recommended daily value". Nevertheless, the agency is planning a consumer study to evaluate consumer understanding of such claims and may consider a regulation allowing their use on packaged foods. However, there is no requirement to list trans fats on institutional food packaging; thus bulk purchasers such as schools, hospitals, jails and cafeterias are unable to evaluate the trans fat content of commercial food items. Critics of the plan, including FDA advisor Dr. Carlos Camargo, have expressed concern that the 0.5 gram per serving threshold is too high to call a food free of trans fat, because a person eating many servings of a product, or eating multiple products over the course of the day, may still consume a significant amount of trans fat. Despite this, the FDA estimates that by 2009, trans fat labeling will have prevented 600 to 1,200 cases of coronary artery disease, and 250 to 500 deaths, yearly. This benefit is expected to result from consumers choosing alternative foods lower in trans fats and from manufacturers reducing the amount of trans fats in their products. The American Medical Association supports any state and federal efforts to ban the use of artificial trans fats in U.S. restaurants and bakeries. The American Public Health Association adopted a new
policy statement regarding trans fats in 2007. These guidelines, entitled Restricting Trans Fatty Acids in the Food Supply, recommend that the government require nutrition facts labeling of trans fats on all commercial food products. They also urge federal, state, and local governments to ban and monitor the use of trans fats in restaurants. Furthermore, the APHA recommends barring the sale and availability of foods containing significant amounts of trans fat in public facilities, including universities, prisons, and day care facilities. In January 2007, faced with the prospect of an outright ban on the sale of their product, Crisco was reformulated to meet the United States Food and Drug Administration (FDA) definition of "zero grams trans fats per serving" (that is, less than one gram per tablespoon, or up to 7% by weight, or less than 0.5 grams per serving) by boosting the saturation and then diluting the resulting solid fat with unsaturated vegetable oils. In 2010, according to the FDA, the average American consumed 5.8 grams of trans fat per day (2.6% of energy intake). Monoglycerides and diglycerides are not considered fats by the FDA, despite contributing nearly the same calories per unit weight when ingested. On 7 November 2013, the FDA issued a preliminary determination that trans fats are not "generally recognized as safe", which was widely seen as a precursor to reclassifying trans fats as a "food additive", meaning they could not be used in foods without specific regulatory authorization. This would have the effect of virtually eliminating trans fats from the US food supply. The ruling was formally enacted on 16 June 2015, requiring that within three years, by 18 June 2018, no food prepared in the United States be allowed to include trans fats unless approved by the FDA. The FDA agreed in May 2018
to give companies one more year to find other ingredients for enhancing product flavors or greasing industrial baking pans, effectively banning trans fats in the United States from May 2019 onwards. Also, while new products can no longer be made with trans fats, foods already on the shelves were given some time to cycle out of the market. ==== State and local regulation ==== Even before the federal ban, the state of California and several U.S. cities took action to reduce consumption of trans fats. In 2005, Tiburon, California, became the first American city in which all restaurants voluntarily cooked with trans fat-free oils. In 2007, Montgomery County, Maryland, approved a ban on partially hydrogenated oils, becoming the first county in the nation to restrict trans fats. New York City embarked on a campaign in 2005 to reduce consumption of trans fats, noting that heart disease is the primary cause of resident deaths. This included a public education campaign and a request that restaurant owners voluntarily eliminate trans fat from their offerings. Finding that the voluntary program was not successful, New York City's Board of Health in 2006 solicited public comments on a proposal to ban artificial trans fats in restaurants. The board voted to ban trans fat in restaurant food on 5 December 2006. New York was the first large US city to strictly limit trans fats in restaurants. Restaurants were barred from using most frying and spreading fats containing artificial trans fats above 0.5 g per serving on 1 July 2007, and were to meet the same target in all of their foods by 1 July 2008. The Philadelphia City Council unanimously voted to enact a ban in February 2007. The ordinance does not apply to prepackaged foods sold in the city, but did
require restaurants in the city to stop frying food in trans fats by 1 September 2007. The ordinance also contained a provision, effective one year later, that barred trans fat from being used as an ingredient in commercial kitchens. On 10 October 2007, the Philadelphia City Council approved the use of trans fats by small bakeries throughout the city. Albany County, New York, passed a ban on trans fats, adopted after a unanimous vote by the county legislature on 14 May 2007. The decision was made after New York City's, but no implementation plan has been put into place. Legislators received a letter from Rick J. Sampson, president and CEO of the New York State Restaurant Association, calling on them to "delay any action on this issue until the full impact of the New York City ban is known." San Francisco officially asked its restaurants to stop using trans fat in January 2008. The voluntary program grants a city decal to restaurants that comply and apply for it; legislators say the next step will be a mandatory ban. Chicago also passed a partial ban on oils, along with posting requirements for fast food restaurants. Trans fat bans were also introduced in the state legislatures of Massachusetts, Maryland, and Vermont. In March 2008, the Boston Public Health Commission's Board of Health passed a regulation barring food service establishments from selling foods containing artificial trans fats at more than 0.5 grams per serving, similar to the New York City regulation; there are some exceptions for clearly labeled packaged foods and charitable bake sales. In July 2008, California became the first state to ban trans fats in restaurants, effective 1 January 2010; Governor Arnold Schwarzenegger signed the bill into law. California restaurants are prohibited from using
oil, shortening, and margarine containing artificial trans fats in spreads or for frying, with the exception of deep frying doughnuts. As of 1 January 2011, doughnuts and other baked goods have been prohibited from containing artificial trans fats. Packaged foods are not covered by the ban and can legally contain trans fats. ==== 2015–2018 federal phaseout ==== In 2009, at the age of 94, University of Illinois professor Fred Kummerow, a trans fat researcher who had campaigned for decades for a federal ban on the substance, filed a petition with the U.S. Food and Drug Administration (FDA) seeking elimination of artificial trans fats from the U.S. food supply. The FDA did not act on his petition for four years, and in 2013 Kummerow filed a lawsuit against the FDA and the U.S. Department of Health and Human Services, seeking to compel the FDA to respond to his petition and "to ban partially hydrogenated oils unless a complete administrative review finds new evidence for their safety." Kummerow's petition stated that "Artificial trans fat is a poisonous and deleterious substance, and the FDA has acknowledged the danger." Three months after the suit was filed, on 16 June 2015, the FDA moved to eliminate artificial trans fats from the U.S. food supply, giving manufacturers a deadline of three years. The FDA specifically ruled that trans fat was not generally recognized as safe and "could no longer be added to food after 18 June 2018, unless a manufacturer could present convincing scientific evidence that a particular use was safe." Kummerow stated: "Science won out." The ban is believed to prevent about 90,000 premature deaths annually. The FDA estimates the ban will cost the food industry $6.2 billion over 20 years as the industry reformulates products and substitutes new ingredients for trans fat. The benefits
are estimated at $140 billion over 20 years mainly from lower health care spending. Food companies can petition the FDA for approval of specific uses of partially hydrogenated oils if the companies submit data proving the oils' use is safe. === Manufacturer response === Palm oil, a natural oil extracted from the fruit of oil palm trees that is semi-solid at room temperature (15–25 degrees Celsius), can potentially serve as a substitute for partially hydrogenated fats in baking and processed food applications, although there is disagreement about whether replacing partially hydrogenated fats with palm oil confers any health benefits. A 2006 study supported by the National Institutes of Health and the USDA Agricultural Research Service concluded that palm oil is not a safe substitute for partially hydrogenated fats (trans fats) in the food industry, because palm oil results in adverse changes in the blood concentrations of LDL and apolipoprotein B just as trans fat does. The J.M. Smucker Company, American manufacturer of Crisco (the original partially hydrogenated vegetable shortening), in 2004 released a new formulation made from solid saturated palm oil cut with soybean oil and sunflower oil. This blend yielded an equivalent shortening much like the prior partially hydrogenated Crisco, and was labelled zero grams of trans fat per 1 tablespoon serving (as compared with 1.5 grams per tablespoon of original Crisco). As of 24 January 2007, Smucker claims that all Crisco shortening products in the US have been reformulated to contain less than one gram of trans fat per serving while keeping saturated fat content less than that of butter. The separately marketed trans fat free version introduced in 2004 was discontinued. On 22 May 2004, Unilever, the corporate descendant of Joseph Crosfield & Sons (the original producer of Wilhelm Normann's hydrogenation hardened oils) announced that they had eliminated trans
fats from all their margarine products in Canada, including their flagship Becel brand.
== See also ==
Diet and heart disease
Health crisis
Fat interesterification
== References ==
The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. The curse generally refers to issues that arise when the number of datapoints is small (in a suitably defined sense) relative to the intrinsic dimension of the data. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. In order to obtain a reliable result, the amount of data needed often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient. == Domains == === Combinatorics === In some problems, each variable can take one of several discrete values, or the range of possible values is divided to give a finite number of possibilities. Taking the variables together, a huge number of combinations of values must be considered. This effect is also known as the combinatorial explosion. Even in the simplest case of d {\displaystyle d} binary variables, the number of possible combinations already is 2 d {\displaystyle 2^{d}} , exponential in the dimensionality. Naively, each additional dimension doubles the effort needed to try all combinations. === Sampling === There is an exponential increase in volume associated with adding extra dimensions to a mathematical space. For example, 10^2 = 100 evenly spaced sample points suffice to sample a unit interval (try to visualize a "1-dimensional" cube) with no more than 10^−2 = 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice that has a spacing of 10^−2 = 0.01 between adjacent points would require 10^20 = (10^2)^10 sample points. In general, with a spacing distance of 10^−n the 10-dimensional hypercube appears to be a factor of 10^(n(10−1)) = (10^n)^10/(10^n) "larger" than the 1-dimensional hypercube, which is the unit interval. In the above example n = 2: when using a sampling distance of 0.01 the 10-dimensional hypercube appears to be 10^18 "larger" than the unit interval. This effect is a combination of the combinatorics problems above and the distance function problems explained below. === Optimization === When solving dynamic optimization problems by numerical backward induction, the objective function must be computed for each combination of values. This is a significant obstacle when the dimension of the "state variable" is large. === Machine learning === In machine learning problems that involve learning a "state-of-nature" from a finite number of data samples in a high-dimensional feature space with each feature having a range of possible values, typically an enormous amount of training data is required to ensure that there are several samples with each combination of values. In an abstract sense, as the number of features or dimensions grows, the amount of data we need to generalize accurately grows exponentially. A typical rule of thumb is that there should be at least 5 training examples for each dimension in the representation. In machine learning and insofar as predictive performance is concerned, the curse of dimensionality is used interchangeably with the peaking phenomenon, which is also known as Hughes phenomenon. This phenomenon states that with a fixed number of
training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased but beyond a certain dimensionality it starts deteriorating instead of improving steadily. Nevertheless, in the context of a simple classifier (e.g., linear discriminant analysis in the multivariate Gaussian model under the assumption of a common known covariance matrix), Zollanvari, et al., showed both analytically and empirically that as long as the relative cumulative efficacy of an additional feature set (with respect to features that are already part of the classifier) is greater (or less) than the size of this additional feature set, the expected error of the classifier constructed using these additional features will be less (or greater) than the expected error of the classifier constructed without them. In other words, both the size of additional features and their (relative) cumulative discriminatory effect are important in observing a decrease or increase in the average predictive power. In metric learning, higher dimensions can sometimes allow a model to achieve better performance. After normalizing embeddings to the surface of a hypersphere, FaceNet achieves the best performance using 128 dimensions as opposed to 64, 256, or 512 dimensions in one ablation study. A loss function for unitary-invariant dissimilarity between word embeddings was found to be minimized in high dimensions. === Data mining === In data mining, the curse of dimensionality refers to a data set with too many features. Consider the first table, which depicts 200 individuals and 2000 genes (features) with a 1 or 0 denoting whether or not they have a genetic mutation in that gene. A data mining application to this data set may be finding the correlation between specific genetic mutations and creating a classification algorithm such as a decision tree to determine whether an
individual has cancer or not. A common practice of data mining in this domain would be to create association rules between genetic mutations that lead to the development of cancers. To do this, one would have to loop through each genetic mutation of each individual and find other genetic mutations that occur over a desired threshold and create pairs. They would start with groups of two, then three, then four, until the search yields an empty set of groups. The complexity of this algorithm can lead to calculating all permutations of gene pairs for each individual or row. Given that the formula for calculating the permutations of n items with a group size of r is: n ! ( n − r ) ! {\displaystyle {\frac {n!}{(n-r)!}}} , calculating the number of three-gene permutations for any given individual gives 2000!/1997! = 7,988,004,000 different gene triples to evaluate for each individual. The number of groups created grows factorially as the group size increases. The growth is depicted in the permutation table (see right). As we can see from the permutation table above, one of the major problems data miners face regarding the curse of dimensionality is that the space of possible parameter values grows exponentially or factorially as the number of features in the data set grows. This problem critically affects both computational time and space when searching for associations or optimal features to consider. Another problem data miners may face when dealing with too many features is that the number of false predictions or classifications tends to increase as the number of features grows in the data set. In terms of the classification problem discussed above, keeping every data point could lead to a higher number of false positives and false negatives in
the model. This may seem counterintuitive, but consider the genetic mutation table from above, depicting all genetic mutations for each individual. Each genetic mutation, whether it correlates with cancer or not, will have some input or weight in the model that guides the decision-making process of the algorithm. There may be mutations that are outliers or ones that dominate the overall distribution of genetic mutations when in fact they do not correlate with cancer. These features may be working against one's model, making it more difficult to obtain optimal results. This problem is up to the data miner to solve, and there is no universal solution. The first step any data miner should take is to explore the data, in an attempt to gain an understanding of how it can be used to solve the problem. One must first understand what the data means, and what they are trying to discover, before they can decide if anything must be removed from the data set. Then they can create or use a feature selection or dimensionality reduction algorithm to remove samples or features from the data set if they deem it necessary. One example of such methods is the interquartile range method, used to remove outliers in a data set by computing the spread between the first and third quartiles of a feature and discarding values that fall far outside that range. === Distance function === When a measure such as a Euclidean distance is defined using many coordinates, there is little difference in the distances between different pairs of points. One way to illustrate the "vastness" of high-dimensional Euclidean space is to compare the proportion of an inscribed hypersphere with radius r {\displaystyle r} and dimension d {\displaystyle d} , to that of a hypercube with edges of length 2 r . {\displaystyle 2r.} The volume of such a sphere is 2
r d π d / 2 d Γ ( d / 2 ) {\displaystyle {\frac {2r^{d}\pi ^{d/2}}{d\;\Gamma (d/2)}}} , where Γ {\displaystyle \Gamma } is the gamma function, while the volume of the cube is ( 2 r ) d {\displaystyle (2r)^{d}} . As the dimension d {\displaystyle d} of the space increases, the hypersphere becomes an insignificant volume relative to that of the hypercube. This can clearly be seen by comparing the proportions as the dimension d {\displaystyle d} goes to infinity: V h y p e r s p h e r e V h y p e r c u b e = π d / 2 d 2 d − 1 Γ ( d / 2 ) → 0 {\displaystyle {\frac {V_{\mathrm {hypersphere} }}{V_{\mathrm {hypercube} }}}={\frac {\pi ^{d/2}}{d2^{d-1}\Gamma (d/2)}}\rightarrow 0} as d → ∞ {\displaystyle d\rightarrow \infty } . Furthermore, the distance between the center and the corners is r d {\displaystyle r{\sqrt {d}}} , which increases without bound for fixed r. In this sense when points are uniformly generated in a high-dimensional hypercube, almost all points are much farther than r {\displaystyle r} units away from the centre. In high dimensions, the volume of the d-dimensional unit hypercube (with coordinates of the vertices ± 1 {\displaystyle \pm 1} ) is concentrated near a sphere with the radius d / 3 {\displaystyle {\sqrt {d}}/{\sqrt {3}}} for large dimension d. Indeed, for each coordinate x i {\displaystyle x_{i}} the average value of x i 2 {\displaystyle x_{i}^{2}} in the cube is ⟨ x i 2 ⟩ = 1 2 ∫ − 1 1 x 2 d x = 1 3 {\displaystyle \left\langle x_{i}^{2}\right\rangle ={\frac {1}{2}}\int _{-1}^{1}x^{2}dx={\frac {1}{3}}} . The variance of x i 2 {\displaystyle x_{i}^{2}} for uniform distribution in the cube is 1 2 ∫ −
1 1 x 4 d x − ⟨ x i 2 ⟩ 2 = 4 45 {\displaystyle {\frac {1}{2}}\int _{-1}^{1}x^{4}dx-\left\langle x_{i}^{2}\right\rangle ^{2}={\frac {4}{45}}} Therefore, the squared distance from the origin, r 2 = ∑ i x i 2 {\textstyle r^{2}=\sum _{i}x_{i}^{2}} has the average value d/3 and variance 4d/45. For large d, the distribution of r 2 / d {\displaystyle r^{2}/d} is close to the normal distribution with the mean 1/3 and the standard deviation 2 / 45 d {\displaystyle 2/{\sqrt {45d}}} according to the central limit theorem. Thus, when uniformly generating points in high dimensions, both the "middle" of the hypercube and the corners are empty, and all the volume is concentrated near the surface of a sphere of "intermediate" radius d / 3 {\textstyle {\sqrt {d/3}}} . This also helps to understand the chi-squared distribution. Indeed, the (non-central) chi-squared distribution associated to a random point in the interval [-1, 1] is the same as the distribution of the length-squared of a random point in the d-cube. By the law of large numbers, this distribution concentrates itself in a narrow band around d times the standard deviation squared (σ²) of the original derivation. This illuminates the chi-squared distribution and also illustrates that most of the volume of the d-cube concentrates near the boundary of a sphere of radius σ d {\displaystyle \sigma {\sqrt {d}}} . A further development of this phenomenon is as follows. Any fixed distribution on the real numbers induces a product distribution on points in R d {\displaystyle \mathbb {R} ^{d}} . For any fixed n, it turns out that the difference between the minimum and the maximum distance between a random reference point Q and a list of n random data points P1,...,Pn becomes indiscernible compared to the minimum distance: lim d → ∞ E (
dist max ⁡ ( d ) − dist min ⁡ ( d ) dist min ⁡ ( d ) ) → 0 {\displaystyle \lim _{d\to \infty }E\left({\frac {\operatorname {dist} _{\max }(d)-\operatorname {dist} _{\min }(d)}{\operatorname {dist} _{\min }(d)}}\right)\to 0} . This is often cited as distance functions losing their usefulness (for the nearest-neighbor criterion in feature-comparison algorithms, for example) in high dimensions. However, recent research has shown this to only hold in the artificial scenario when the one-dimensional distributions on R {\displaystyle \mathbb {R} } are independent and identically distributed. When attributes are correlated, the data can become easier to analyze and provide higher distance contrast; the signal-to-noise ratio was found to play an important role, so feature selection should be used. More recently, it has been suggested that there may be a conceptual flaw in the argument that contrast-loss creates a curse in high dimensions. Machine learning can be understood as the problem of assigning instances to their respective generative process of origin, with class labels acting as symbolic representations of individual generative processes. The curse's derivation assumes all instances are independent, identical outcomes of a single high dimensional generative process. If there is only one generative process, there would exist only one (naturally occurring) class and machine learning would be conceptually ill-defined in both high and low dimensions. Thus, the traditional argument that contrast-loss creates a curse may be fundamentally inappropriate. In addition, it has been shown that when the generative model is modified to accommodate multiple generative processes, contrast-loss can morph from a curse to a blessing, as it ensures that the nearest-neighbor of an instance is almost-surely its most closely related instance. From this perspective, contrast-loss makes high dimensional distances especially meaningful and not especially non-meaningful as is often argued.
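The contrast-loss limit above is easy to observe empirically. The following sketch (a minimal Monte Carlo experiment; the function name, sample sizes, and seed are our own illustrative choices) draws i.i.d. uniform points in [0, 1]^d and estimates the relative contrast (dist_max − dist_min)/dist_min, which shrinks as the dimension grows:

```python
import math
import random

def relative_contrast(d, n=200, seed=0):
    """Estimate (dist_max - dist_min) / dist_min for n i.i.d. uniform
    points in [0, 1]^d, measured from a random query point."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(d)]
    dists = [math.dist([rng.random() for _ in range(d)], query)
             for _ in range(n)]
    return (max(dists) - min(dists)) / min(dists)

# Relative contrast collapses as dimensionality increases.
for d in (2, 10, 100, 1000):
    print(d, round(relative_contrast(d), 3))
```

As the surrounding text notes, this collapse is specific to independent, identically distributed attributes; correlated attributes can retain much higher distance contrast.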
=== Nearest neighbor search === The effect complicates nearest neighbor
search in high dimensional space. It is not possible to quickly reject candidates by using the difference in one coordinate as a lower bound for a distance based on all the dimensions. However, it has recently been observed that the mere number of dimensions does not necessarily result in difficulties, since relevant additional dimensions can also increase the contrast. In addition, for the resulting ranking it remains useful to discern close and far neighbors. Irrelevant ("noise") dimensions, however, reduce the contrast in the manner described above. In time series analysis, where the data are inherently high-dimensional, distance functions also work reliably as long as the signal-to-noise ratio is high enough. ==== k-nearest neighbor classification ==== Another effect of high dimensionality on distance functions concerns k-nearest neighbor (k-NN) graphs constructed from a data set using a distance function. As the dimension increases, the indegree distribution of the k-NN digraph becomes skewed with a peak on the right because of the emergence of a disproportionate number of hubs, that is, data-points that appear in many more k-NN lists of other data-points than the average. This phenomenon can have a considerable impact on various techniques for classification (including the k-NN classifier), semi-supervised learning, and clustering, and it also affects information retrieval. === Anomaly detection === In a 2012 survey, Zimek et al. 
identified the following problems when searching for anomalies in high-dimensional data:
- Concentration of scores and distances: derived values such as distances become numerically similar
- Irrelevant attributes: in high dimensional data, a significant number of attributes may be irrelevant
- Definition of reference sets: for local methods, reference sets are often nearest-neighbor based
- Incomparable scores for different dimensionalities: different subspaces produce incomparable scores
- Interpretability of scores: the scores often no longer convey a semantic meaning
- Exponential search space: the search space can no longer be systematically scanned
- Data snooping bias: given the large search space, for every desired significance a hypothesis can be found
- Hubness: certain objects occur more frequently in neighbor lists than others
Many of the analyzed specialized methods tackle one or another of these problems, but there remain many open research questions. === Blessing of dimensionality === Surprisingly and despite the expected "curse of dimensionality" difficulties, common-sense heuristics based on the most straightforward methods "can yield results which are almost surely optimal" for high-dimensional problems. The term "blessing of dimensionality" was introduced in the late 1990s. Donoho in his "Millennium manifesto" clearly explained why the "blessing of dimensionality" will form a basis of future data mining. The effects of the blessing of dimensionality were discovered in many applications and found their foundation in the concentration of measure phenomena. One example of the blessing of dimensionality phenomenon is linear separability of a random point from a large finite random set with high probability even if this set is exponentially large: the number of elements in this random set can grow exponentially with dimension. Moreover, this linear functional can be selected in the form of the simplest linear Fisher discriminant. This separability theorem was proven for a wide class of probability distributions: general uniformly log-concave distributions, product distributions in a cube and many other families (reviewed recently in ). "The blessing of dimensionality and the curse of dimensionality are two sides of the same coin." For example, the typical property of essentially high-dimensional probability distributions in a high-dimensional space is: the squared distance of random points to a selected point is, with high probability, close to the average (or median) squared distance.
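This concentration of squared distances can be checked numerically. The sketch below (sample sizes, seed, and names are illustrative, not from the cited literature) draws points uniformly from the cube [−1, 1]^d and confirms that r²/d clusters around the mean 1/3 with a standard deviation near 2/√(45d), matching the derivation in the distance-function section:

```python
import random
import statistics

def squared_norm_over_d(d, n=500, seed=1):
    """Sample n points uniformly in [-1, 1]^d and return r^2 / d for each;
    theory predicts mean 1/3 and standard deviation 2 / sqrt(45 d)."""
    rng = random.Random(seed)
    return [sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(d)) / d
            for _ in range(n)]

# Mean stays near 1/3 while the spread shrinks as d grows.
for d in (10, 100, 1000):
    vals = squared_norm_over_d(d)
    print(d, round(statistics.mean(vals), 3), round(statistics.stdev(vals), 4))
```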
This property significantly simplifies the expected geometry of data and indexing of high-dimensional data (blessing), but, at the same time,
it makes the similarity search in high dimensions difficult and even useless (curse). Zimek et al. noted that while the typical formalizations of the curse of dimensionality affect i.i.d. data, data that is well separated in each attribute remains tractable even in high dimensions, and argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error) to the data. In particular for unsupervised data analysis this effect is known as swamping.
== See also ==
== References ==
The history of molecular evolution starts in the early 20th century with "comparative biochemistry", but the field of molecular evolution came into its own in the 1960s and 1970s, following the rise of molecular biology. The advent of protein sequencing allowed molecular biologists to create phylogenies based on sequence comparison, and to use the differences between homologous sequences as a molecular clock to estimate the time since the last common ancestor. In the late 1960s, the neutral theory of molecular evolution provided a theoretical basis for the molecular clock, though both the clock and the neutral theory were controversial, since most evolutionary biologists held strongly to panselectionism, with natural selection as the only important cause of evolutionary change. After the 1970s, nucleic acid sequencing allowed molecular evolution to reach beyond proteins to highly conserved ribosomal RNA sequences, the foundation of a reconceptualization of the early history of life. == Early history == Before the rise of molecular biology in the 1950s and 1960s, a small number of biologists had explored the possibilities of using biochemical differences between species to study evolution. Alfred Sturtevant predicted the existence of chromosomal inversions in 1921 and with Dobzhansky constructed one of the first molecular phylogenies on 17 Drosophila pseudoobscura strains from the accumulation of chromosomal inversions observed from the hybridization of polytene chromosomes. Ernest Baldwin worked extensively on comparative biochemistry beginning in the 1930s, and Marcel Florkin pioneered techniques for constructing phylogenies based on molecular and biochemical characters in the 1940s. However, it was not until the 1950s that biologists developed techniques for producing biochemical data for the quantitative study of molecular evolution. The first molecular systematics research was based on immunological assays and protein "fingerprinting" methods.
Alan Boyden—building on immunological methods of George Nuttall—developed new techniques beginning in 1954, and in the early
1960s Curtis Williams and Morris Goodman used immunological comparisons to study primate phylogeny. Others, such as Linus Pauling and his students, applied newly developed combinations of electrophoresis and paper chromatography to proteins subject to partial digestion by digestive enzymes to create unique two-dimensional patterns, allowing fine-grained comparisons of homologous proteins. Beginning in the 1950s, a few naturalists also experimented with molecular approaches—notably Ernst Mayr and Charles Sibley. While Mayr quickly soured on paper chromatography, Sibley successfully applied electrophoresis to egg-white proteins to sort out problems in bird taxonomy, and soon supplemented that with DNA hybridization techniques—the beginning of a long career built on molecular systematics. While such early biochemical techniques found grudging acceptance in the biology community, for the most part they did not impact the main theoretical problems of evolution and population genetics. This would change as molecular biology shed more light on the physical and chemical nature of genes. === Genetic load, the classical/balance controversy, and the measurement of heterozygosity === At the time that molecular biology was coming into its own in the 1950s, there was a long-running debate—the classical/balance controversy—over the causes of heterosis, the increase in fitness observed when inbred lines are outcrossed. In 1950, James F. Crow offered two different explanations (later dubbed the classical and balance positions) based on the paradox first articulated by J. B. S. Haldane in 1937: the effect of deleterious mutations on the average fitness of a population depends only on the rate of mutations (not the degree of harm caused by each mutation) because more-harmful mutations are eliminated more quickly by natural selection, while less-harmful mutations remain in the population longer. H. J. Muller dubbed this "genetic load".
Muller, motivated by his concern about the effects of radiation on human populations, argued that heterosis is primarily the result of
deleterious homozygous recessive alleles, the effects of which are masked when separate lines are crossed—this was the dominance hypothesis, part of what Dobzhansky labeled the classical position. Thus, ionizing radiation and the resulting mutations produce considerable genetic load even if death or disease does not occur in the exposed generation, and in the absence of mutation natural selection will gradually increase the level of homozygosity. Bruce Wallace, working with J. C. King, used the overdominance hypothesis to develop the balance position, which left a larger place for overdominance (where the heterozygous state of a gene is more fit than the homozygous states). In that case, heterosis is simply the result of the increased expression of heterozygote advantage. If overdominant loci are common, then a high level of heterozygosity would result from natural selection, and mutation-inducing radiation may in fact facilitate an increase in fitness due to overdominance. (This was also the view of Dobzhansky.) Debate continued through the 1950s, gradually becoming a central focus of population genetics. A 1958 study of Drosophila by Wallace suggested that radiation-induced mutations increased the viability of previously homozygous flies, providing evidence for heterozygote advantage and the balance position; Wallace estimated that 50% of loci in natural Drosophila populations were heterozygous. Motoo Kimura's subsequent mathematical analyses reinforced what Crow had suggested in 1950: that even if overdominant loci are rare, they could be responsible for a disproportionate amount of genetic variability. Accordingly, Kimura and his mentor Crow came down on the side of the classical position.
Further collaboration between Crow and Kimura led to the infinite alleles model, which could be used to calculate the number of different alleles expected in a population, based on population size, mutation rate, and whether the mutant alleles were neutral, overdominant, or deleterious. Thus, the infinite alleles model offered a
potential way to decide between the classical and balance positions, if accurate values for the level of heterozygosity could be found. By the mid-1960s, the techniques of biochemistry and molecular biology—in particular protein electrophoresis—provided a way to measure the level of heterozygosity in natural populations: a possible means to resolve the classical/balance controversy. In 1963, Jack L. Hubby published an electrophoresis study of protein variation in Drosophila; soon after, Hubby began collaborating with Richard Lewontin to apply Hubby's method to the classical/balance controversy by measuring the proportion of heterozygous loci in natural populations. Their two landmark papers, published in 1966, established a significant level of heterozygosity for Drosophila (12%, on average). However, these findings proved difficult to interpret. Most population geneticists (including Hubby and Lewontin) rejected the possibility of widespread neutral mutations; explanations that did not involve selection were anathema to mainstream evolutionary biology. Hubby and Lewontin also ruled out heterozygote advantage as the main cause because of the segregation load it would entail, though critics argued that the findings actually fit well with overdominance hypothesis. === Protein sequences and the molecular clock === While evolutionary biologists were tentatively branching out into molecular biology, molecular biologists were rapidly turning their attention toward evolution. After developing the fundamentals of protein sequencing with insulin between 1951 and 1955, Frederick Sanger and his colleagues had published a limited interspecies comparison of the insulin sequence in 1956. Francis Crick, Charles Sibley and others recognized the potential for using biological sequences to construct phylogenies, though few such sequences were yet available. 
By the early 1960s, techniques for protein sequencing had advanced to the point that direct comparison of homologous amino acid sequences was feasible. In 1961, Emanuel Margoliash and his collaborators completed the sequence for horse cytochrome c (a longer and more widely distributed protein
than insulin), followed in short order by a number of other species. In 1962, Linus Pauling and Emile Zuckerkandl proposed using the number of differences between homologous protein sequences to estimate the time since divergence, an idea Zuckerkandl had conceived around 1960 or 1961. This began with Pauling's long-time research focus, hemoglobin, which was being sequenced by Walter Schroeder; the sequences not only supported the accepted vertebrate phylogeny, but also the hypothesis (first proposed in 1957) that the different globin chains within a single organism could also be traced to a common ancestral protein. Between 1962 and 1965, Pauling and Zuckerkandl refined and elaborated this idea, which they dubbed the molecular clock, and Emil L. Smith and Emanuel Margoliash expanded the analysis to cytochrome c. Early molecular clock calculations agreed fairly well with established divergence times based on paleontological evidence. However, the essential idea of the molecular clock—that individual proteins evolve at a regular rate independent of a species' morphological evolution—was extremely provocative (as Pauling and Zuckerkandl intended it to be). == The "molecular wars" == From the early 1960s, molecular biology was increasingly seen as a threat to the traditional core of evolutionary biology. Established evolutionary biologists—particularly Ernst Mayr, Theodosius Dobzhansky and G. G. Simpson, three of the founders of the modern evolutionary synthesis of the 1930s and 1940s—were extremely skeptical of molecular approaches, especially when it came to the connection (or lack thereof) to natural selection. Molecular evolution in general—and the molecular clock in particular—offered little basis for exploring evolutionary causation. According to the molecular clock hypothesis, proteins evolved essentially independently of the environmentally determined forces of selection; this was sharply at odds with the panselectionism prevalent at the time. 
Moreover, Pauling, Zuckerkandl, and other molecular biologists were increasingly bold in asserting the significance of "informational macromolecules" (DNA,
RNA and proteins) for all biological processes, including evolution. The struggle between evolutionary biologists and molecular biologists—with each group holding up their discipline as the center of biology as a whole—was later dubbed the "molecular wars" by Edward O. Wilson, who experienced firsthand the domination of his biology department by young molecular biologists in the late 1950s and the 1960s. In 1961, Mayr began arguing for a clear distinction between functional biology (which considered proximate causes and asked "how" questions) and evolutionary biology (which considered ultimate causes and asked "why" questions). He argued that both disciplines and individual scientists could be classified on either the functional or evolutionary side, and that the two approaches to biology were complementary. Mayr, Dobzhansky, Simpson and others used this distinction to argue for the continued relevance of organismal biology, which was rapidly losing ground to molecular biology and related disciplines in the competition for funding and university support. It was in that context that Dobzhansky first published his famous statement, "nothing in biology makes sense except in the light of evolution", in a 1964 paper affirming the importance of organismal biology in the face of the molecular threat; Dobzhansky characterized the molecular disciplines as "Cartesian" (reductionist) and organismal disciplines as "Darwinian". Mayr and Simpson attended many of the early conferences where molecular evolution was discussed, critiquing what they saw as the overly simplistic approaches of the molecular clock. The molecular clock, based on uniform rates of genetic change driven by random mutations and drift, seemed incompatible with the varying rates of evolution and environmentally-driven adaptive processes (such as adaptive radiation) that were among the key developments of the evolutionary synthesis.
At the 1962 Wenner-Gren conference, the 1964 Colloquium on the Evolution of Blood Proteins in Bruges, Belgium, and the 1964 Conference on Evolving Genes
and Proteins at Rutgers University, they engaged directly with the molecular biologists and biochemists, hoping to maintain the central place of Darwinian explanations in evolution as its study spread to new fields. === Gene-centered view of evolution === Though not directly related to molecular evolution, the mid-1960s also saw the rise of the gene-centered view of evolution, spurred by George C. Williams's Adaptation and Natural Selection (1966). Debate over units of selection, particularly the controversy over group selection, led to increased focus on individual genes (rather than whole organisms or populations) as the theoretical basis for evolution. However, the increased focus on genes did not mean a focus on molecular evolution; in fact, the adaptationism promoted by Williams and other evolutionary theorists further marginalized the apparently non-adaptive changes studied by molecular evolutionists. == The neutral theory of molecular evolution == The intellectual threat of molecular evolution became more explicit in 1968, when Motoo Kimura introduced the neutral theory of molecular evolution. Based on the available molecular clock studies (of hemoglobin from a wide variety of mammals, cytochrome c from mammals and birds, and triosephosphate dehydrogenase from rabbits and cows), Kimura (assisted by Tomoko Ohta) calculated an average rate of DNA substitution of one base pair change per 300 base pairs (encoding 100 amino acids) per 28 million years. For mammal genomes, this indicated a substitution rate of one every 1.8 years, which would produce an unsustainably high substitution load unless the preponderance of substitutions was selectively neutral. Kimura argued that neutral mutations occur very frequently, a conclusion compatible with the results of the electrophoretic studies of protein heterozygosity.
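Kimura's back-of-the-envelope arithmetic can be reproduced in a few lines. The per-site rate is the one quoted above; the genome size (~4×10^9 base pairs, a round figure for a haploid mammalian genome) is an assumption for illustration, so the result only matches the quoted figure to order of magnitude.

```python
# Hedged sketch of Kimura's 1968 substitution-rate arithmetic.
# Assumed input (not from the text above): a haploid mammalian genome
# of ~4e9 base pairs. The per-site rate is the one quoted in the article:
# one substitution per 300 bp per 28 million years.

BP_PER_SUBSTITUTION = 300      # one change per 300 base pairs...
YEARS_PER_INTERVAL = 28e6      # ...per 28 million years
GENOME_BP = 4e9                # assumed haploid genome size (bp)

# Genome-wide substitution rate, in substitutions per year:
subs_per_year = GENOME_BP / (BP_PER_SUBSTITUTION * YEARS_PER_INTERVAL)
years_per_sub = 1.0 / subs_per_year

print(f"{subs_per_year:.2f} substitutions per year genome-wide")
print(f"about one substitution every {years_per_sub:.1f} years")
```

With these assumed numbers the answer comes out to roughly one substitution every two years genome-wide, the same order of magnitude as the figure quoted above; a fixation rate that high is what forced the conclusion that most substitutions carry no substitution load, i.e. are selectively neutral.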
Kimura also applied his earlier mathematical work on genetic drift to explain how neutral mutations could come to fixation, even in the absence of natural selection; he soon convinced James F. Crow
of the potential power of neutral alleles and genetic drift as well. Kimura's theory—described only briefly in a letter to Nature—was followed shortly after with a more substantial analysis by Jack L. King and Thomas H. Jukes, who titled their first paper on the subject "Non-Darwinian Evolution". Though King and Jukes produced much lower estimates of substitution rates and the resulting genetic load in the case of non-neutral changes, they agreed that neutral mutations driven by genetic drift were both real and significant. The fairly constant rates of evolution observed for individual proteins were not easily explained without invoking neutral substitutions (though G. G. Simpson and Emil Smith had tried). Jukes and King also found a strong correlation between the frequency of amino acids and the number of different codons encoding each amino acid. This pointed to substitutions in protein sequences as being largely the product of random genetic drift. King and Jukes' paper, especially with its provocative title, was seen as a direct challenge to mainstream neo-Darwinism, and it brought molecular evolution and the neutral theory to the center of evolutionary biology. It provided a mechanism for the molecular clock and a theoretical basis for exploring deeper issues of molecular evolution, such as the relationship between rate of evolution and functional importance. The rise of the neutral theory marked a synthesis of evolutionary biology and molecular biology—though an incomplete one. With their work on firmer theoretical footing, in 1971 Emile Zuckerkandl and other molecular evolutionists founded the Journal of Molecular Evolution. === The neutralist-selectionist debate and near-neutrality === The critical responses to the neutral theory that soon appeared marked the beginning of the neutralist-selectionist debate.
In short, selectionists viewed natural selection as the primary or only cause of evolution, even at the molecular level, while neutralists held that neutral mutations were
widespread and that genetic drift was a crucial factor in the evolution of proteins. Kimura became the most prominent defender of the neutral theory—which would be his main focus for the rest of his career. With Ohta, he refocused his arguments on the rate at which drift could fix new mutations in finite populations, the significance of constant protein evolution rates, and the functional constraints on protein evolution that biochemists and molecular biologists had described. Though Kimura had initially developed the neutral theory partly as an outgrowth of the classical position within the classical/balance controversy (predicting high genetic load as a consequence of non-neutral mutations), he gradually deemphasized his original argument that segregational load would be impossibly high without neutral mutations (which many selectionists, and even fellow neutralists King and Jukes, rejected). From the 1970s through the early 1980s, both selectionists and neutralists could explain the observed high levels of heterozygosity in natural populations, by assuming different values for unknown parameters. Early in the debate, Kimura's student Tomoko Ohta focused on the interaction between natural selection and genetic drift, which was significant for mutations that were not strictly neutral, but nearly so. In such cases, selection would compete with drift: most slightly deleterious mutations would be eliminated by natural selection or chance; some would move to fixation through drift. The behavior of this type of mutation, described by an equation that combined the mathematics of the neutral theory with classical models, became the basis of Ohta's nearly neutral theory of molecular evolution. In 1973, Ohta published a short letter in Nature suggesting that a wide variety of molecular evidence supported the theory that most mutation events at the molecular level are slightly deleterious rather than strictly neutral. 
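The competition between selection and drift that Ohta describes can be made concrete with Kimura's diffusion-approximation formula for the probability that a single new mutation eventually fixes, a standard population-genetics result rather than something derived in this article; the population size and selection coefficients below are illustrative.

```python
import math

def fixation_probability(N: int, s: float) -> float:
    """Kimura's diffusion approximation for the fixation probability of a
    single new mutation (initial frequency p = 1/(2N)) with selection
    coefficient s in a diploid population of effective size N."""
    p = 1.0 / (2 * N)
    if s == 0.0:
        return p  # strictly neutral: fixation by drift alone
    return (1.0 - math.exp(-4 * N * s * p)) / (1.0 - math.exp(-4 * N * s))

N = 1000
u_neutral = fixation_probability(N, 0.0)          # exactly 1/(2N)
u_slightly_del = fixation_probability(N, -1e-4)   # |4Ns| < 1: drift still fixes some
u_strongly_del = fixation_probability(N, -1e-2)   # |4Ns| = 40: selection dominates

print(f"neutral:              {u_neutral:.2e}")
print(f"slightly deleterious: {u_slightly_del:.2e}")
print(f"strongly deleterious: {u_strongly_del:.2e}")
```

A strictly neutral mutation fixes with probability 1/(2N); a slightly deleterious one (|4Ns| of order one) fixes only somewhat less often; a strongly deleterious one essentially never does. Mutations in the intermediate regime, where selection and drift are comparable, are exactly the ones Ohta's nearly neutral theory concerns.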
Molecular evolutionists were finding that while rates of protein evolution (consistent with the
molecular clock) were fairly independent of generation time, rates of noncoding DNA divergence were inversely proportional to generation time. Noting that population size is generally inversely proportional to generation time, Tomoko Ohta proposed that most amino acid substitutions are slightly deleterious while noncoding DNA substitutions are more neutral. In this case, the faster rate of neutral evolution in proteins expected in small populations (due to genetic drift) is offset by longer generation times (and vice versa), but in large populations with short generation times, noncoding DNA evolves faster while protein evolution is retarded by selection (which is more significant than drift for large populations). Between then and the early 1990s, many studies of molecular evolution used a "shift model" in which the negative effect on the fitness of a population due to deleterious mutations shifts back to an original value when a mutation reaches fixation. In the early 1990s, Ohta developed a "fixed model" that included both beneficial and deleterious mutations, so that no artificial "shift" of overall population fitness was necessary. According to Ohta, however, the nearly neutral theory largely fell out of favor in the late 1980s, because the mathematically simpler neutral theory better suited the widespread molecular systematics research that flourished after the advent of rapid DNA sequencing. As more detailed systematics studies started to compare the evolution of genome regions subject to strong selection versus weaker selection in the 1990s, the nearly neutral theory and the interaction between selection and drift once again became an important focus of research. == Microbial phylogeny == While early work in molecular evolution focused on readily sequenced proteins and relatively recent evolutionary history, by the late 1960s some molecular biologists were pushing further toward the base of the tree of life by studying highly conserved nucleic acid sequences.
Carl
Woese, a molecular biologist whose earlier work was on the genetic code and its origin, began using small subunit ribosomal RNA to reclassify bacteria by genetic (rather than morphological) similarity. Work proceeded slowly at first, but accelerated as new sequencing methods were developed in the 1970s and 1980s. By 1977, Woese and George Fox announced that some bacteria, such as methanogens, lacked the rRNA units that Woese's phylogenetic studies were based on; they argued that these organisms were actually distinct enough from conventional bacteria and the so-called higher organisms to form their own kingdom, which they called archaebacteria. Though controversial at first (and challenged again in the late 1990s), Woese's work became the basis of the modern three-domain system of Archaea, Bacteria, and Eukarya (replacing the five-kingdom system that had emerged in the 1960s). Work on microbial phylogeny also brought molecular evolution closer to cell biology and origin of life research. The differences between archaea pointed to the importance of RNA in the early history of life. In his work with the genetic code, Woese had suggested RNA-based life had preceded the current forms of DNA-based life, as had several others before him—an idea that Walter Gilbert would later call the "RNA world". In many cases, genomics research in the 1990s produced phylogenies contradicting the rRNA-based results, leading to the recognition of widespread lateral gene transfer across distinct taxa. Combined with the probable endosymbiotic origin of organelle-filled eukarya, this pointed to a far more complex picture of the origin and early history of life, one which might not be describable in the traditional terms of common ancestry.
This article contains a list of the most studied restriction enzymes whose names start with Bsa to Bso inclusive. It contains approximately 90 enzymes. The following information is given: Enzyme: Accepted name of the molecule, according to the internationally adopted nomenclature, and bibliographical references. (Further reading: see the section "Nomenclature" in the article "Restriction enzyme".) PDB code: Code used to identify the structure of a protein in the PDB database of protein structures. The 3D atomic structure of a protein provides highly valuable information to understand the intimate details of its mechanism of action. Source: Organism that naturally produces the enzyme. Recognition sequence: Sequence of DNA recognized by the enzyme and to which it specifically binds. Cut: Cutting site and DNA products of the cut. The recognition sequence and the cutting site usually match, but sometimes the cutting site can be dozens of nucleotides away from the recognition site. Isoschizomers and neoschizomers: An isoschizomer is an enzyme that recognizes the same sequence as another. A neoschizomer is a special type of isoschizomer that recognizes the same sequence as another, but cuts in a different manner. At most the 8–10 most common isoschizomers are indicated for every enzyme, but there may be many more. Neoschizomers are shown in bold and green color font (e.g.: BamHI). When "None on date" is indicated, that means that there were no registered isoschizomers in the databases on that date with a clearly defined cutting site. Isoschizomers indicated in white font and grey background correspond to enzymes not listed in the current lists. == Whole list navigation == == Restriction enzymes == === Bsa - Bso === == Notes ==
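The notions of recognition sequence and cutting site described above can be sketched in a few lines. The helper function and toy sequence below are hypothetical, not part of any real library; BamHI's recognition sequence (GGATCC, cut as G^GATCC on the top strand) is the standard textbook example the article itself mentions.

```python
# Minimal sketch of "recognition sequence" vs. "cutting site", using
# BamHI (recognition GGATCC, cutting G^GATCC). find_cut_sites is a
# hypothetical helper for illustration only.

def find_cut_sites(dna: str, recognition: str, cut_offset: int) -> list[int]:
    """Return 0-based top-strand positions where the enzyme would cut,
    i.e. the index of the first base after the cut."""
    sites = []
    start = 0
    while (i := dna.find(recognition, start)) != -1:
        sites.append(i + cut_offset)  # cut falls inside the recognition site
        start = i + 1
    return sites

dna = "AAGGATCCTTGGATCCAA"
# BamHI cuts between the first G and the following GATCC (offset 1).
cuts = find_cut_sites(dna, "GGATCC", cut_offset=1)
print(cuts)  # [3, 11]: two recognition sites in this toy sequence
```

For enzymes whose cutting site lies dozens of nucleotides away from the recognition site (as the article notes can happen), the same sketch applies with a correspondingly larger `cut_offset`.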
{ "page_id": 27460930, "source": null, "title": "List of restriction enzyme cutting sites: Bsa–Bso" }
The Goldman–Hodgkin–Katz flux equation (or GHK flux equation or GHK current density equation) describes the ionic flux across a cell membrane as a function of the transmembrane potential and the concentrations of the ion inside and outside of the cell. Since both the voltage and the concentration gradients influence the movement of ions, this process is a simplified version of electrodiffusion. Electrodiffusion is most accurately defined by the Nernst–Planck equation; the GHK flux equation is a solution to the Nernst–Planck equation under the assumptions listed below. == Origin == The American David E. Goldman of Columbia University, and the English Nobel laureates Alan Lloyd Hodgkin and Bernard Katz derived this equation. == Assumptions == Several assumptions are made in deriving the GHK flux equation (Hille 2001, p. 445): the membrane is a homogeneous substance; the electrical field is constant, so that the transmembrane potential varies linearly across the membrane; the ions access the membrane instantaneously from the intra- and extracellular solutions; the permeant ions do not interact; and the movement of ions is affected by both concentration and voltage differences. == Equation == The GHK flux equation for an ion S (Hille 2001, p. 445):

\Phi_S = P_S z_S^2 \frac{V_m F^2}{RT} \, \frac{[S]_i - [S]_o \exp(-z_S V_m F/RT)}{1 - \exp(-z_S V_m F/RT)}

where Φ_S is the current density (flux) outward through the membrane carried by ion S, measured in amperes per square meter (A·m−2); P_S is the permeability of the membrane for ion S, measured in m·s−1; z_S is the valence of ion S; V_m is
{ "page_id": 6882629, "source": null, "title": "Goldman–Hodgkin–Katz flux equation" }
the transmembrane potential in volts; F is the Faraday constant, equal to 96,485 C·mol−1 or J·V−1·mol−1; R is the gas constant, equal to 8.314 J·K−1·mol−1; T is the absolute temperature, measured in kelvins (= degrees Celsius + 273.15); [S]_i is the intracellular concentration of ion S, measured in mol·m−3 or mmol·l−1; and [S]_o is the extracellular concentration of ion S, measured in mol·m−3. == Implicit definition of reversal potential == The reversal potential is shown to be contained in the GHK flux equation (Flax 2008). The proof is replicated from the reference (Flax 2008) here. We wish to show that when the flux is zero, the transmembrane potential is not zero. Formally it is written

\lim_{\Phi_S \to 0} V_m \neq 0

which is equivalent to writing

\lim_{V_m \to 0} \Phi_S \neq 0

which states that when the transmembrane potential is zero, the flux is not zero. However, due to the form of the GHK flux equation, when V_m = 0, \Phi_S = 0/0. This is a problem, as the value of 0/0 is indeterminate. We turn to l'Hôpital's rule to find the solution for the limit:

\lim_{V_m \to 0} \Phi_S = P_S \frac{z_S^2 F^2}{RT} \, \frac{[V_m([S]_i - [S]_o \exp(-z_S V_m F/RT))]'}{[1 - \exp(-z_S V_m F/RT)]'}

where [f]' represents the derivative
of f and the result is:

\lim_{V_m \to 0} \Phi_S = P_S z_S F ([S]_i - [S]_o)

It is evident from the previous equation that when V_m = 0, \Phi_S \neq 0 if ([S]_i - [S]_o) \neq 0, and thus \lim_{\Phi_S \to 0} V_m \neq 0, which is the definition of the reversal potential. By setting \Phi_S = 0 we can also obtain the reversal potential:

\Phi_S = 0 = P_S \frac{z_S^2 F^2}{RT} \, \frac{V_m([S]_i - [S]_o \exp(-z_S V_m F/RT))}{1 - \exp(-z_S V_m F/RT)}

which reduces to

[S]_i - [S]_o \exp(-z_S V_m F/RT) = 0

and produces the Nernst equation:

V_m = -\frac{RT}{z_S F} \ln\left(\frac{[S]_i}{[S]_o}\right)

== Rectification == Since one of the assumptions of the GHK flux equation is that the ions move independently of each other, the total flow of ions across the membrane is simply equal to the sum of two oppositely directed fluxes. Each flux approaches an asymptotic value as the membrane potential diverges from zero. These asymptotes are

\Phi_{S|i \to o} = P_S z_S^2 \frac{V_m F^2}{RT} [S]_i \quad \text{for} \quad V_m \gg 0
\Phi_{S|i \to o} = 0 \quad \text{for} \quad V_m \ll 0

and

\Phi_{S|o \to i} = P_S z_S^2 \frac{V_m F^2}{RT} [S]_o \quad \text{for} \quad V_m \ll 0
\Phi_{S|o \to i} = 0 \quad \text{for} \quad V_m \gg 0

where subscripts 'i' and 'o' denote the intra- and extracellular compartments, respectively. Intuitively one may understand these limits as follows: if an ion is only found outside a cell, then the flux is ohmic (proportional to voltage) when the voltage causes the ion to flow into the cell, but no voltage could cause the ion to flow out of the cell, since there are no ions inside the cell in the first place. Keeping all terms except V_m constant, each asymptote yields a straight line when plotting Φ_S against V_m. It is evident that the ratio between the two asymptotes is merely the ratio between the two concentrations of S, [S]_i and [S]_o. Thus, if the two concentrations are identical, the slope will be identical (and constant) throughout the voltage range (corresponding to Ohm's law scaled by the surface area). As the ratio between the two concentrations increases, so does the difference between the two slopes, meaning that the current is larger in one direction than the other, given an equal driving force of opposite signs. This is contrary to the result obtained if using Ohm's law scaled by
the surface area, and the effect is called rectification. The GHK flux equation is mostly used by electrophysiologists when the ratio between [S]i and [S]o is large and/or when one or both of the concentrations change considerably during an action potential. The most common example is probably intracellular calcium, [Ca2+]i, which during a cardiac action potential cycle can change 100-fold or more, and the ratio between [Ca2+]o and [Ca2+]i can reach 20,000 or more. == References == Hille, Bertil (2001). Ion channels of excitable membranes, 3rd ed., Sinauer Associates, Sunderland, Massachusetts. ISBN 978-0-87893-321-1 Flax, Matt R. and Holmes, W.Harvey (2008). Goldman-Hodgkin-Katz Cochlear Hair Cell Models – a Foundation for Nonlinear Cochlear Mechanics, Conference proceedings: Interspeech 2008. == See also == Goldman equation Nernst equation Reversal potential
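As a numerical illustration of the GHK flux equation, the l'Hôpital limit at V_m = 0, and the Nernst reversal potential derived above, a short sketch follows. The permeability and concentrations are assumed illustrative values (roughly potassium in a mammalian cell at body temperature), not taken from the article.

```python
# Numerical sketch of the GHK flux equation and its reversal potential.
# The K+ concentrations and permeability below are assumed illustrative values.
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(K*mol)
T = 310.0     # absolute temperature, K (~37 degrees Celsius)

def ghk_flux(P, z, Vm, conc_in, conc_out):
    """GHK flux (A/m^2) for an ion of valence z; Vm in V, concentrations in mol/m^3."""
    if Vm == 0.0:
        # l'Hopital limit derived in the article: Phi = P * z * F * ([S]i - [S]o)
        return P * z * F * (conc_in - conc_out)
    u = z * Vm * F / (R * T)
    return (P * z**2 * Vm * F**2 / (R * T)
            * (conc_in - conc_out * math.exp(-u)) / (1.0 - math.exp(-u)))

def nernst(z, conc_in, conc_out):
    """Reversal potential (V): the Vm at which the GHK flux is zero."""
    return -(R * T) / (z * F) * math.log(conc_in / conc_out)

# Assumed example: K+ with [K]i = 140 mM, [K]o = 5 mM, P = 1e-8 m/s.
E_K = nernst(+1, 140.0, 5.0)
flux_at_EK = ghk_flux(1e-8, +1, E_K, 140.0, 5.0)
print(f"E_K ~ {E_K * 1000:.1f} mV, flux at E_K ~ {flux_at_EK:.2e} A/m^2")
```

With these values the reversal potential comes out near −89 mV, and the flux evaluated there is zero to within floating-point error, consistent with the implicit definition of the reversal potential above. At V_m = 0 the flux is positive (outward), since [S]_i > [S]_o, matching the sign of the l'Hôpital limit.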
Chirality () is a property of asymmetry important in several branches of science. The word chirality is derived from the Greek χείρ (kheir), "hand", a familiar chiral object. An object or a system is chiral if it is distinguishable from its mirror image; that is, it cannot be superposed (not to be confused with superimposed) onto it. Conversely, a mirror image of an achiral object, such as a sphere, cannot be distinguished from the object. A chiral object and its mirror image are called enantiomorphs (Greek, "opposite forms") or, when referring to molecules, enantiomers. A non-chiral object is called achiral (sometimes also amphichiral) and can be superposed on its mirror image. The term was first used by Lord Kelvin in 1893 in the second Robert Boyle Lecture at the Oxford University Junior Scientific Club which was published in 1894: I call any geometrical figure, or group of points, 'chiral', and say that it has chirality if its image in a plane mirror, ideally realized, cannot be brought to coincide with itself. Human hands are perhaps the most recognized example of chirality. The left hand is a non-superposable mirror image of the right hand; no matter how the two hands are oriented, it is impossible for all the major features of both hands to coincide across all axes. This difference in symmetry becomes obvious if someone attempts to shake the right hand of a person using their left hand, or if a left-handed glove is placed on a right hand. In mathematics, chirality is the property of a figure that is not identical to its mirror image. == Mathematics == In mathematics, a figure is chiral (and said to have chirality) if it cannot be mapped to its mirror image by rotations and translations alone. For example, a right shoe is
{ "page_id": 32703814, "source": null, "title": "Chirality" }
different from a left shoe, and clockwise is different from anticlockwise. A chiral object and its mirror image are said to be enantiomorphs. The word enantiomorph stems from the Greek ἐναντίος (enantios) 'opposite' + μορφή (morphe) 'form'. A non-chiral figure is called achiral or amphichiral. The helix (and by extension a spun string, a screw, a propeller, etc.) and Möbius strip are chiral two-dimensional objects in three-dimensional ambient space. The J, L, S and Z-shaped tetrominoes of the popular video game Tetris also exhibit chirality, but only in a two-dimensional space. Many other familiar objects exhibit the same chiral symmetry of the human body, such as gloves, glasses (sometimes), and shoes. A similar notion of chirality is considered in knot theory, as explained below. Some chiral three-dimensional objects, such as the helix, can be assigned a right or left handedness, according to the right-hand rule. === Geometry === In geometry, a figure is achiral if and only if its symmetry group contains at least one orientation-reversing isometry. In two dimensions, every figure that possesses an axis of symmetry is achiral, and it can be shown that every bounded achiral figure must have an axis of symmetry. In three dimensions, every figure that possesses a plane of symmetry or a center of symmetry is achiral. There are, however, achiral figures lacking both plane and center of symmetry. In terms of point groups, all chiral figures lack an improper axis of rotation (Sn). This means that they cannot contain a center of inversion (i) or a mirror plane (σ). Only figures with a point group designation of C1, Cn, Dn, T, O, or I can be chiral. === Knot theory === A knot is called achiral if it can be continuously deformed into its mirror image,
otherwise it is called chiral. For example, the unknot and the figure-eight knot are achiral, whereas the trefoil knot is chiral. == Physics == In physics, chirality may be found in the spin of a particle, where the handedness of the object is determined by the direction in which the particle spins. Not to be confused with helicity, which is the projection of the spin along the linear momentum of a subatomic particle, chirality is an intrinsic quantum mechanical property, like spin. Although both chirality and helicity can have left-handed or right-handed properties, only in the massless case are they identical. In particular, for a massless particle the helicity is the same as the chirality, while for an antiparticle they have opposite sign. The handedness in both chirality and helicity relates to the rotation of a particle while it proceeds in linear motion, with reference to the human hands. The thumb of the hand points towards the direction of linear motion whilst the fingers curl into the palm, representing the direction of rotation of the particle (i.e. clockwise and counterclockwise). Depending on the linear and rotational motion, the particle can be defined as either left-handed or right-handed. A symmetry transformation between the two is called parity. Invariance under parity by a Dirac fermion is called chiral symmetry. === Electromagnetism === Electromagnetic waves can have handedness associated with their polarization. Polarization of an electromagnetic wave is the property that describes the orientation, i.e., the time-varying direction and amplitude, of the electric field vector. For example, the electric field vectors of left-handed or right-handed circularly polarized waves form helices of opposite handedness in space. Circularly polarized waves of opposite handedness propagate through chiral media at different speeds (circular birefringence) and with different losses (circular dichroism). Both phenomena are jointly known as optical
activity. Circular birefringence causes rotation of the polarization state of electromagnetic waves in chiral media and can cause a negative index of refraction for waves of one handedness when the effect is sufficiently large. While optical activity occurs in structures that are chiral in three dimensions (such as helices), the concept of chirality can also be applied in two dimensions. 2D-chiral patterns, such as flat spirals, cannot be superposed with their mirror image by translation or rotation in two-dimensional space (a plane). 2D chirality is associated with directionally asymmetric transmission (reflection and absorption) of circularly polarized waves. 2D-chiral materials that are also anisotropic and lossy exhibit different total transmission (reflection and absorption) levels for the same circularly polarized wave incident on their front and back. The asymmetric transmission phenomenon arises from different, e.g. left-to-right, circular polarization conversion efficiencies for opposite propagation directions of the incident wave, and therefore the effect is referred to as circular conversion dichroism. Just as the twist of a 2D-chiral pattern appears reversed for opposite directions of observation, 2D-chiral materials have interchanged properties for left-handed and right-handed circularly polarized waves that are incident on their front and back. In particular, left-handed and right-handed circularly polarized waves experience opposite directional transmission (reflection and absorption) asymmetries. While optical activity is associated with 3D chirality and circular conversion dichroism is associated with 2D chirality, both effects have also been observed in structures that are not chiral by themselves. For the observation of these chiral electromagnetic effects, chirality does not have to be an intrinsic property of the material that interacts with the electromagnetic wave. 
Instead, both effects can also occur when the propagation direction of the electromagnetic wave together with the structure of an (achiral) material form a chiral experimental arrangement. This case, where the mutual arrangement of achiral components forms
a chiral (experimental) arrangement, is known as extrinsic chirality. Chiral mirrors are a class of metamaterials that reflect circularly polarized light of a certain helicity in a handedness-preserving manner, while absorbing circular polarization of the opposite handedness. However, most absorbing chiral mirrors operate only in a narrow frequency band, as limited by the causality principle. Designs that let the unwanted handedness pass through, rather than absorbing it, allow chiral mirrors to achieve good broadband performance. == Chemistry == A chiral molecule is a type of molecule that has a non-superposable mirror image. The feature that is most often the cause of chirality in molecules is the presence of an asymmetric carbon atom. The term "chiral" in general describes an object that is non-superposable on its mirror image. In chemistry, chirality usually refers to molecules. Two mirror images of a chiral molecule are called enantiomers or optical isomers. Pairs of enantiomers are often designated as "right-handed", "left-handed" or, if they have no bias, "achiral". As polarized light passes through a chiral molecule, the plane of polarization, when viewed along the axis toward the source, is rotated clockwise (to the right) or anticlockwise (to the left). A right-handed rotation is dextrorotatory (d); rotation to the left is levorotatory (l). The d- and l-isomers have the same constitution but are non-superposable mirror images, called enantiomers. An equimolar mixture of the two optical isomers, called a racemic mixture, produces no net rotation of polarized light as it passes through. Left-handed molecules have l- prefixed to their names; d- is prefixed to right-handed molecules. However, this d- and l- notation for distinguishing enantiomers does not say anything about the actual spatial arrangement of the ligands/substituents around the stereogenic center, which is
defined as configuration. Another nomenclature system employed to specify configuration is the Fischer convention, also referred to as the D- and L-system. Here the relative configuration is assigned with reference to D-(+)-glyceraldehyde and L-(−)-glyceraldehyde, taken as standards. The Fischer convention is widely used in sugar chemistry and for α-amino acids. Due to its drawbacks, it has been almost entirely replaced by the Cahn-Ingold-Prelog convention, also known as the sequence rule or R and S nomenclature. This was further extended to assign absolute configuration to cis-trans isomers with the E-Z notation. Molecular chirality is of interest because of its application to stereochemistry in inorganic chemistry, organic chemistry, physical chemistry, biochemistry, and supramolecular chemistry. More recent developments in chiral chemistry include chiral inorganic nanoparticles, which may have tetrahedral geometry similar to that of the sp3 carbon stereocenters traditionally associated with chiral compounds, but at a larger scale. Helical and other symmetries of chiral nanomaterials have also been obtained. == Biology == All known life-forms show specific chiral properties in chemical structures as well as in macroscopic anatomy, development, and behavior. In any specific organism or evolutionarily related set thereof, individual compounds, organs, or behaviors are found in the same single enantiomorphic form. Deviations (the opposite form) are found in a small number of chemical compounds, organs, or behaviors, but such variation depends strictly on the genetic makeup of the organism. At the chemical (molecular) level, biological systems show extreme stereospecificity in synthesis, uptake, sensing, and metabolic processing. A living system usually deals with two enantiomers of the same compound in drastically different ways. 
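The relation between enantiomer composition and net optical rotation described in the d/l discussion above can be sketched numerically. The specific-rotation value below is illustrative only, not a measured constant for any particular compound:

```python
def observed_rotation(alpha_pure, frac_plus):
    """Net optical rotation of a mixture of enantiomers.

    alpha_pure: specific rotation of the pure (+) enantiomer (degrees);
    frac_plus:  mole fraction of the (+) form.
    The enantiomeric excess is ee = 2*frac_plus - 1, and the observed
    rotation scales linearly with ee.
    """
    ee = 2.0 * frac_plus - 1.0
    return alpha_pure * ee

print(observed_rotation(62.0, 0.5))  # 0.0  -> racemic mixture, no net rotation
print(observed_rotation(62.0, 1.0))  # 62.0 -> enantiopure (+) sample
print(observed_rotation(62.0, 0.0))  # -62.0 -> pure (-) enantiomer
```

The middle case reproduces the statement above that an equimolar (racemic) mixture produces no net rotation.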
In biology, homochirality is a common property of amino acids and carbohydrates. The chiral protein-making amino acids, which the ribosome assembles into proteins according to the genetic code, occur in the
L form. However, D-amino acids are also found in nature. The monosaccharides (carbohydrate units) are commonly found in the D-configuration. The DNA double helix is chiral (as any kind of helix is chiral), and the B-form of DNA shows a right-handed turn. Sometimes, when two enantiomers of a compound are found in organisms, they differ significantly in their taste, smell, and other biological actions. For example, (+)-carvone is responsible for the smell of caraway seed oil, whereas (−)-carvone is responsible for the smell of spearmint oil. However, it is a commonly held misconception that (+)-limonene is found in oranges (causing its smell) and (−)-limonene in lemons (causing its smell). In 2021, after rigorous experimentation, it was found that all citrus fruits contain only (+)-limonene and that the odor difference is due to other contributing factors. For artificial compounds, including medicines, the two enantiomers of a chiral drug sometimes show remarkable differences in their biological effects. Darvon (dextropropoxyphene) is a painkiller, whereas its enantiomer, Novrad (levopropoxyphene), is an anti-cough agent. In the case of penicillamine, the (S)-isomer is used in the treatment of primary chronic arthritis, whereas the (R)-isomer has no therapeutic effect and is highly toxic. In some cases, the less therapeutically active enantiomer can cause side effects. For example, (S)-naproxen is an analgesic but the (R)-isomer causes renal problems. In situations where one enantiomer of a racemic drug is active and the other has undesirable or toxic effects, one may switch from the racemate to a single-enantiomer drug for better therapeutic value.[1] Such a switch from a racemic drug to an enantiopure drug is called a chiral switch. The naturally occurring plant form of alpha-tocopherol (vitamin E) is RRR-α-tocopherol, whereas the synthetic form (all-racemic vitamin E, or dl-tocopherol) is equal parts of the stereoisomers
RRR, RRS, RSS, SSS, RSR, SRS, SRR, and SSR with progressively decreasing biological equivalency, so that 1.36 mg of dl-tocopherol is considered equivalent to 1.0 mg of d-tocopherol. Macroscopic examples of chirality are found in the plant kingdom, the animal kingdom and all other groups of organisms. A simple example is the coiling direction of any climber plant, which can grow to form either a left- or right-handed helix. In anatomy, chirality is found in the imperfect mirror image symmetry of many kinds of animal bodies. Organisms such as gastropods exhibit chirality in their coiled shells, resulting in an asymmetrical appearance. Over 90% of gastropod species have dextral (right-handed) shells in their coiling, but a small minority of species and genera are virtually always sinistral (left-handed). A very few species (for example Amphidromus perversus) show an equal mixture of dextral and sinistral individuals. In humans, chirality (also referred to as handedness or laterality) is an attribute of humans defined by their unequal distribution of fine motor skill between the left and right hands. An individual who is more dexterous with the right hand is called right-handed, and one who is more skilled with the left is said to be left-handed. Chirality is also seen in the study of facial asymmetry and is known as aurofacial asymmetry. According to the Axial Twist theory, vertebrate animals develop into a left-handed chirality. Due to this, the brain is turned around and the heart and bowels are turned by 90°. In the case of the health condition situs inversus totalis, in which all the internal organs are flipped horizontally (i.e. the heart placed slightly to the right instead of the left), chirality poses some problems should the patient require a liver or heart transplant, as these organs are chiral, thus meaning that the blood
vessels which supply these organs would need to be rearranged should a normal, non-situs inversus (situs solitus) organ be required. In the monocot bloodroot family, the species of the genera Wachendorfia and Barberetta have flowers whose style points either to the right or to the left, with both morphs occurring within the same populations. This is thought to increase outcrossing and so boost genetic diversity, which in turn may help the species survive in a changing environment. Remarkably, the related genus Dilatris also has chirally dimorphic flowers, but here both morphs occur on the same plant. In flatfish, the summer flounder or fluke are left-eyed, while halibut are right-eyed. == Resources and Research == === Journal === Chirality, a scientific journal focused on chirality in chemistry and biochemistry with respect to biological, chemical, materials, pharmacological, spectroscopic and physical properties. === Selected Books === Creutz, Michael (2018). From Quarks to Pions: Chiral Symmetry and Confinement. New Jersey: World Scientific. ISBN 978-981-322-923-5. Wolf, Christian (2008). Dynamic Stereochemistry of Chiral Compounds: Principles and Applications. Cambridge: RSC Publishing. ISBN 978-0-85404-246-3. Beesley, Thomas E.; Scott, Raymond P. W. (1998). Chiral Chromatography. Separation Science Series. Chichester: Wiley. ISBN 978-0-471-97427-7. == See also == Handedness Chiral drugs Chiral switch Chiral inversion Metachirality Orientation (space) Sinistral and dextral Tendril perversion Chirality (physics) == References == == External links == Hegstrom, Roger A.; Kondepudi, Dilip K. "The Handedness of the Universe" (PDF).
The molecular formula C18H25NO3 (molar mass: 303.40 g/mol) may refer to: EA-3580 CAR-302,668 MDPEP
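The quoted molar mass can be checked with a short script using standard atomic weights (rounded values):

```python
# Standard atomic weights (IUPAC, rounded to three decimals)
weights = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 18, "H": 25, "N": 1, "O": 3}  # C18H25NO3

molar_mass = sum(weights[el] * n for el, n in formula.items())
print(round(molar_mass, 2))  # 303.4, matching the stated 303.40 g/mol
```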
{ "page_id": 74712392, "source": null, "title": "C18H25NO3" }
Analytical Chemistry is a biweekly peer-reviewed scientific journal published since 1929 by the American Chemical Society. Articles address general principles of chemical measurement science and novel analytical methodologies. Topics commonly include chemical reactions and selectivity, chemometrics and data processing, electrochemistry, elemental and molecular characterization, imaging, instrumentation, mass spectrometry, microscale and nanoscale systems, -omics, sensing, separations, spectroscopy, and surface analysis. It is abstracted and indexed in Chemical Abstracts Service, CAB International, EBSCOhost, ProQuest, PubMed, Scopus, and the Science Citation Index Expanded. According to the Journal Citation Reports, it has a 2022 impact factor of 7.4. The editor-in-chief is Jonathan V. Sweedler (University of Illinois). == See also == List of chemistry journals == References == == External links == Official website
{ "page_id": 4392265, "source": null, "title": "Analytical Chemistry (journal)" }
Fructolysis refers to the metabolism of fructose from dietary sources. Though the metabolism of glucose through glycolysis uses many of the same enzymes and intermediate structures as fructolysis, the two sugars have very different metabolic fates in human metabolism. Under one percent of ingested fructose is directly converted to plasma triglyceride. 29–54% of fructose is converted in the liver to glucose, and about a quarter is converted to lactate. 15–18% is converted to glycogen. Glucose and lactate are then used normally as energy to fuel cells all over the body. Fructose is a dietary monosaccharide present naturally in fruits and vegetables, either as free fructose or as part of the disaccharide sucrose, and as its polymer inulin. It is also present in the form of refined sugars, including granulated sugars (white crystalline table sugar, brown sugar, confectioner's sugar, and turbinado sugar), refined crystalline fructose, high-fructose corn syrups, and honey. About 10% of the calories contained in the Western diet are supplied by fructose (approximately 55 g/day). Unlike glucose, fructose is not an insulin secretagogue and can in fact lower circulating insulin. In addition to the liver, fructose is metabolized in the intestines, testis, kidney, skeletal muscle, fat tissue and brain, but it is not transported into cells via insulin-sensitive pathways (the insulin-regulated transporters GLUT1 and GLUT4). Instead, fructose is taken up by GLUT5. Fructose in muscles and adipose tissue is phosphorylated by hexokinase. == Fructolysis and glycolysis are independent pathways == Although the metabolism of fructose and glucose share many of the same intermediate structures, they have very different metabolic fates in human metabolism. Fructose is metabolized almost completely in the liver in humans, and is directed toward replenishment of liver glycogen and triglyceride synthesis, while much of
{ "page_id": 17302858, "source": null, "title": "Fructolysis" }
dietary glucose passes through the liver and goes to skeletal muscle, where it is metabolized to CO2, H2O and ATP, and to fat cells where it is metabolized primarily to glycerol phosphate for triglyceride synthesis as well as energy production. The products of fructose metabolism are liver glycogen and de novo lipogenesis of fatty acids and eventual synthesis of endogenous triglyceride. This synthesis can be divided into two main phases: the first phase is the synthesis of the trioses dihydroxyacetone phosphate (DHAP) and glyceraldehyde; the second phase is the subsequent metabolism of these trioses, either in the gluconeogenic pathway for glycogen replenishment and/or complete metabolism in the fructolytic pathway to pyruvate, which enters the Krebs cycle, is converted to citrate and subsequently directed toward de novo synthesis of the free fatty acid palmitate. === The metabolism of fructose to DHAP and glyceraldehyde === The first step in the metabolism of fructose is the phosphorylation of fructose to fructose 1-phosphate by fructokinase (Km = 0.5 mM, ≈ 9 mg/100 ml), thus trapping fructose for metabolism in the liver. Hexokinase IV (glucokinase) also occurs in the liver and would be capable of phosphorylating fructose to fructose 6-phosphate (an intermediate in the gluconeogenic pathway); however, it has a relatively high Km (12 mM) for fructose, and, therefore, essentially all of the fructose is converted to fructose 1-phosphate in the human liver. Much of the glucose, on the other hand, is not phosphorylated (Km of hepatic glucokinase (hexokinase IV) = 10 mM), passes through the liver directed toward peripheral tissues, and is taken up by the insulin-dependent glucose transporter, GLUT4, present on adipose tissue and skeletal muscle. 
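The consequence of the Km values just quoted can be illustrated with the Michaelis-Menten rate law. The fructose concentration used below is an assumed, illustrative post-meal value, not a measurement from the source:

```python
def fractional_rate(s_mM, km_mM):
    """Michaelis-Menten fractional rate v/Vmax = [S] / (Km + [S])."""
    return s_mM / (km_mM + s_mM)

fructose_mM = 0.5  # assumed hepatic fructose concentration (illustrative)

# Fructokinase (Km = 0.5 mM) runs at half its maximal rate,
# while hexokinase IV (Km = 12 mM) barely engages the same substrate.
print(round(fractional_rate(fructose_mM, 0.5), 2))   # 0.5
print(round(fractional_rate(fructose_mM, 12.0), 2))  # 0.04
```

The roughly tenfold-plus difference in fractional rate shows why essentially all hepatic fructose is captured as fructose 1-phosphate by fructokinase rather than as fructose 6-phosphate by glucokinase.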
Fructose-1-phosphate then undergoes aldol cleavage by fructose-1-phosphate aldolase (aldolase B) to form dihydroxyacetone phosphate (DHAP) and glyceraldehyde; DHAP can either be isomerized to glyceraldehyde 3-phosphate by triosephosphate isomerase
or undergo reduction to glycerol 3-phosphate by glycerol 3-phosphate dehydrogenase. The glyceraldehyde produced may also be converted to glyceraldehyde 3-phosphate by glyceraldehyde kinase or converted to glycerol 3-phosphate by glyceraldehyde 3-phosphate dehydrogenase. The metabolism of fructose at this point yields intermediates of the gluconeogenic pathway leading to glycogen synthesis; alternatively, they can be oxidized to pyruvate and reduced to lactate, or decarboxylated to acetyl CoA in the mitochondria and directed toward the synthesis of free fatty acids, resulting finally in triglyceride synthesis. === Synthesis of glycogen from DHAP and glyceraldehyde-3-phosphate === The synthesis of glycogen in the liver following a fructose-containing meal proceeds from gluconeogenic precursors. Fructose is initially converted to DHAP and glyceraldehyde by fructokinase and aldolase B. The resultant glyceraldehyde then undergoes phosphorylation to glyceraldehyde-3-phosphate. Increased concentrations of DHAP and glyceraldehyde-3-phosphate in the liver drive the gluconeogenic pathway toward glucose-6-phosphate, glucose-1-phosphate and glycogen formation. It appears that fructose is a better substrate for glycogen synthesis than glucose and that glycogen replenishment takes precedence over triglyceride formation. Once liver glycogen is replenished, the intermediates of fructose metabolism are primarily directed toward triglyceride synthesis. === Synthesis of triglyceride from DHAP and glyceraldehyde-3-phosphate === Carbons from dietary fructose are found in both the FFA and glycerol moieties of plasma triglycerides (TG). Excess dietary fructose can be converted to pyruvate, enter the Krebs cycle and emerge as citrate directed toward free fatty acid synthesis in the cytosol of hepatocytes. The DHAP formed during fructolysis can also be converted to glycerol and then glycerol 3-phosphate for TG synthesis. 
Thus, fructose can provide trioses for both the glycerol 3-phosphate backbone, as well as the free fatty acids in TG synthesis. Indeed, fructose may provide the bulk of the carbohydrate directed toward de novo TG synthesis in humans. === Fructose induces hepatic lipogenic enzymes === Fructose
consumption results in the insulin-independent induction of several important hepatic lipogenic enzymes, including pyruvate kinase, NADP+-dependent malate dehydrogenase, citrate lyase, acetyl CoA carboxylase, fatty acid synthase, and pyruvate dehydrogenase. Although not a consistent finding among metabolic feeding studies, diets high in refined fructose have been shown to lead to hypertriglyceridemia in a wide range of populations, including individuals with normal glucose metabolism as well as individuals with impaired glucose tolerance, diabetes, hypertriglyceridemia, and hypertension. The hypertriglyceridemic effects observed are a hallmark of increased dietary carbohydrate, and the effect of fructose appears to depend on a number of factors, including the amount of dietary fructose consumed and the degree of insulin resistance. == Abnormalities in fructose metabolism == The lack of two important enzymes of fructose metabolism results in the development of two inborn errors of carbohydrate metabolism – essential fructosuria and hereditary fructose intolerance. In addition, reduced phosphorylation potential within hepatocytes can occur with intravenous infusion of fructose. === Inborn errors in fructose metabolism === ==== Essential fructosuria ==== The absence of fructokinase results in the inability to phosphorylate fructose to fructose-1-phosphate within the cell. As a result, fructose is neither trapped within the cell nor directed toward its metabolism. Free fructose concentrations in the liver increase and fructose is free to leave the cell and enter plasma. This results in an increase in the plasma concentration of fructose, eventually exceeding the kidneys' threshold for fructose reabsorption and resulting in the appearance of fructose in the urine. Essential fructosuria is a benign asymptomatic condition. 
==== Hereditary fructose intolerance ==== The absence of fructose-1-phosphate aldolase (aldolase B) results in the accumulation of fructose 1 phosphate in hepatocytes, kidney and
small intestines. An accumulation of fructose-1-phosphate following fructose ingestion inhibits glycogenolysis (breakdown of glycogen) and gluconeogenesis, resulting in severe hypoglycemia. The condition is symptomatic, with severe hypoglycemia, abdominal pain, vomiting, hemorrhage, jaundice, hepatomegaly, and hyperuricemia, eventually leading to liver and/or kidney failure and death. The incidence varies throughout the world, but it is estimated at 1:55,000 (range 1:10,000 to 1:100,000) live births. === Reduced phosphorylation potential === Intravenous (i.v.) infusion of fructose has been shown to lower the phosphorylation potential in liver cells by trapping inorganic phosphate (Pi) as fructose-1-phosphate. The fructokinase reaction occurs quite rapidly in hepatocytes, trapping fructose in cells by phosphorylation. On the other hand, the splitting of fructose-1-phosphate to DHAP and glyceraldehyde by aldolase B is relatively slow. Therefore, fructose-1-phosphate accumulates, with a corresponding reduction of the intracellular Pi available for phosphorylation reactions in the cell. This is why fructose is contraindicated for total parenteral nutrition (TPN) solutions and is never given intravenously as a source of carbohydrate. It has been suggested that excessive dietary intake of fructose may also result in reduced phosphorylation potential. However, this is still a contentious issue. Dietary fructose is not well absorbed, and increased dietary intake often results in malabsorption. Whether or not sufficient amounts of dietary fructose could be absorbed to cause a significant reduction in the phosphorylating potential of liver cells remains questionable, and there are no clear examples of this in the literature. == References == == External links == The Entry of Fructose and Galactose into Glycolysis, Chapter 16.1.11. Biochemistry, 5th edition, Jeremy M Berg, John L Tymoczko, and Lubert Stryer, New York: W H Freeman; 2002. Tappy, L; Lê, K. A. (2010). 
"Metabolic effects of fructose and the worldwide increase in obesity". Physiological Reviews. 90 (1): 23–46. doi:10.1152/physrev.00019.2009. PMID 20086073.
Kunihiko Fukushima (Japanese: 福島 邦彦, born 16 March 1936) is a Japanese computer scientist, most noted for his work on artificial neural networks and deep learning. He is currently working part-time as a senior research scientist at the Fuzzy Logic Systems Institute in Fukuoka, Japan. == Notable scientific achievements == In 1980, Fukushima published the neocognitron, the original deep convolutional neural network (CNN) architecture. Fukushima proposed several supervised and unsupervised learning algorithms to train the parameters of a deep neocognitron such that it could learn internal representations of incoming data. Today, however, the CNN architecture is usually trained through backpropagation. This approach is now heavily used in computer vision. In 1969, Fukushima introduced the ReLU (rectified linear unit) activation function in the context of visual feature extraction in hierarchical neural networks, calling it an "analog threshold element". (The ReLU had earlier been used by Alston Householder in 1941 as a mathematical abstraction of biological neural networks.) As of 2017 it is the most popular activation function for deep neural networks. == Education and career == In 1958, Fukushima received his Bachelor of Engineering in electronics from Kyoto University. He became a senior research scientist at the NHK Science & Technology Research Laboratories. In 1989, he joined the faculty of Osaka University. In 1999, he joined the faculty of the University of Electro-Communications. In 2001, he joined the faculty of Tokyo University of Technology. From 2006 to 2010, he was a visiting professor at Kansai University. Fukushima was the founding president of the Japanese Neural Network Society (JNNS). He was also a founding member of the board of governors of the International Neural Network Society (INNS), and president of the Asia-Pacific Neural Network Assembly (APNNA). He served on the board of governors of the INNS in
{ "page_id": 23594316, "source": null, "title": "Kunihiko Fukushima" }
1989-1990 and 1993-2005. == Awards == In 2020, Fukushima received the Bower Award and Prize for Achievement in Science. In 2022, Fukushima became a laureate of the Asian Scientist 100 by the Asian Scientist. He also received the IEICE Achievement Award and Excellent Paper Awards, the IEEE Neural Networks Pioneer Award, the APNNA Outstanding Achievement Award, the JNNS Excellent Paper Award and the INNS Helmholtz Award. == External links == ResearchMap profile == References ==
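The ReLU activation that the article above credits to Fukushima's 1969 "analog threshold element" is, in modern notation, simply f(x) = max(0, x). A minimal sketch:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: f(x) = max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x).tolist())  # [0.0, 0.0, 0.0, 1.5, 3.0]
```

Negative inputs are clipped to zero while positive inputs pass through unchanged, which is what makes the function so cheap to compute and differentiate in deep networks.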
The Knight shift is a shift in the nuclear magnetic resonance (NMR) frequency of a paramagnetic substance first published in 1949 by the UC Berkeley physicist Walter D. Knight. For an ensemble of N spins in a magnetic induction field $\vec{B}$, the nuclear Hamiltonian for the Knight shift is expressed in Cartesian form by: $${\hat{\mathcal{H}}}_{\text{KS}} = -\sum_{i}^{N} \gamma_i \, \hat{\vec{I}}_i \cdot \hat{\mathbf{K}}_i \cdot \vec{B},$$ where, for the i-th spin, $\gamma_i$ is the gyromagnetic ratio, $\hat{\vec{I}}_i$ is a vector of the Cartesian nuclear angular momentum operators, and the matrix $$\hat{\mathbf{K}}_i = \begin{pmatrix} K_{xx} & K_{xy} & K_{xz} \\ K_{yx} & K_{yy} & K_{yz} \\ K_{zx} & K_{zy} & K_{zz} \end{pmatrix}$$ is a second-rank tensor similar to the chemical shift shielding tensor. The Knight shift refers to the relative shift K in NMR frequency for atoms in a metal (e.g. sodium) compared with the same atoms in a nonmetallic environment (e.g. sodium chloride). The observed shift reflects the local magnetic field produced at the sodium nucleus by the magnetization of the conduction electrons. The average local field in sodium augments the applied resonance field by approximately one part per 1000. In nonmetallic sodium chloride the local field is negligible in comparison. The Knight shift is due to the conduction electrons in metals. They introduce an "extra" effective field at the nuclear site, due to the spin orientations of the conduction electrons in the presence of
{ "page_id": 1181004, "source": null, "title": "Knight shift" }
an external field. This is responsible for the shift observed in the nuclear magnetic resonance. The shift has two sources: one is the Pauli paramagnetic spin susceptibility of the conduction electrons, the other is the density of their s-state wavefunctions at the nucleus. Depending on the electronic structure, the Knight shift may be temperature dependent. However, in metals, which normally have a broad, featureless electronic density of states, Knight shifts are temperature independent. == References ==
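The "one part per 1000" magnitude quoted above for sodium translates into a frequency offset as follows. The applied field and the exact shift value below are illustrative assumptions, not figures from the source:

```python
import math

# Shifted resonance: nu = (gamma / 2*pi) * B * (1 + K)
gamma_na = 7.0808e7  # 23Na gyromagnetic ratio, rad s^-1 T^-1 (approximate)
B = 9.4              # applied field in tesla (illustrative)
K = 1.1e-3           # Knight shift for sodium metal, ~1 part per 1000 (assumed)

nu0 = gamma_na * B / (2 * math.pi)  # unshifted (NaCl-like) frequency, Hz
nu = nu0 * (1 + K)                  # Knight-shifted frequency in the metal

print(f"unshifted: {nu0 / 1e6:.2f} MHz, Knight shift offset: {(nu - nu0) / 1e3:.1f} kHz")
```

At these values the ~106 MHz sodium resonance moves by roughly a hundred kilohertz, which is easily resolved in an NMR spectrum and is how K is measured in practice.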
The electron affinity (Eea) of an atom or molecule is defined as the amount of energy released when an electron attaches to a neutral atom or molecule in the gaseous state to form an anion. X(g) + e− → X−(g) + energy This differs by sign from the energy change of electron capture ionization. The electron affinity is positive when energy is released on electron capture. In solid state physics, the electron affinity for a surface is defined somewhat differently (see below). == Measurement and use of electron affinity == This property is measured for atoms and molecules in the gaseous state only, since in a solid or liquid state their energy levels would be changed by contact with other atoms or molecules. A list of electron affinities was used by Robert S. Mulliken to develop an electronegativity scale for atoms, equal to the average of the electron affinity and ionization potential. Other theoretical concepts that use electron affinity include electronic chemical potential and chemical hardness. A molecule or atom that has a more positive value of electron affinity than another is often called an electron acceptor, and the less positive one an electron donor. Together they may undergo charge-transfer reactions. === Sign convention === To use electron affinities properly, it is essential to keep track of sign. For any reaction that releases energy, the change ΔE in total energy has a negative value and the reaction is called an exothermic process. Electron capture for almost all non-noble-gas atoms involves the release of energy and thus is exothermic. The positive values listed in tables of Eea are amounts or magnitudes. It is the word "released" within the definition "energy released" that supplies the negative sign to ΔE. Confusion arises in mistaking Eea for a
{ "page_id": 197964, "source": null, "title": "Electron affinity" }
change in energy, ΔE, in which case the positive values listed in tables would be for an endo-, not exothermic, process. The relation between the two is Eea = −ΔE(attach). However, if the value assigned to Eea is negative, the negative sign implies a reversal of direction, and energy is required to attach an electron. In this case, electron capture is an endothermic process and the relationship Eea = −ΔE(attach) is still valid. Negative values typically arise for the capture of a second electron, but also for the nitrogen atom. The usual expression for calculating Eea when an electron is attached is Eea = (Einitial − Efinal)attach = −ΔE(attach) This expression does follow the convention ΔX = X(final) − X(initial), since −ΔE = −(E(final) − E(initial)) = E(initial) − E(final). Equivalently, electron affinity can also be defined as the amount of energy required to detach an electron from an atom holding a single excess electron, i.e. a negative ion; that is, the energy change for the process X− → X + e− If the same table is employed for the forward and reverse reactions, without switching signs, care must be taken to apply the correct definition to the corresponding direction, attachment (release) or detachment (require). Since almost all detachments require an amount of energy listed on the table, those detachment reactions are endothermic, or ΔE(detach) > 0. Eea = (Efinal − Einitial)detach = ΔE(detach) = −ΔE(attach). == Electron affinities of the elements == Although Eea varies greatly across the periodic table, some patterns emerge. Generally, nonmetals have more positive Eea than metals. Atoms whose anions are more stable than the neutral atoms have a greater Eea. Chlorine most strongly attracts extra electrons; neon most weakly attracts an extra electron. The electron affinities of the noble gases have
not been conclusively measured, so they may or may not have slightly negative values. Eea generally increases across a period (row) in the periodic table prior to reaching group 18. This is caused by the filling of the valence shell of the atom; a group 17 atom releases more energy than a group 1 atom on gaining an electron because it obtains a filled valence shell and therefore is more stable. In group 18, the valence shell is full, meaning that added electrons are unstable, tending to be ejected very quickly. Counterintuitively, Eea does not decrease when progressing down most columns of the periodic table. For example, Eea actually increases consistently on descending the column for the group 2 data. Thus, electron affinity follows the same "left-right" trend as electronegativity, but not the "up-down" trend. The following data are quoted in kJ/mol. == Molecular electron affinities == The electron affinity of molecules is a complicated function of their electronic structure. For instance the electron affinity for benzene is negative, as is that of naphthalene, while those of anthracene, phenanthrene and pyrene are positive. In silico experiments show that the electron affinity of hexacyanobenzene surpasses that of fullerene. == "Electron affinity" as defined in solid state physics == In the field of solid state physics, the electron affinity is defined differently than in chemistry and atomic physics. For a semiconductor-vacuum interface (that is, the surface of a semiconductor), electron affinity, typically denoted by EEA or χ, is defined as the energy obtained by moving an electron from the vacuum just outside the semiconductor to the bottom of the conduction band just inside the semiconductor: E e a ≡ E v a c − E C {\displaystyle E_{\rm {ea}}\equiv E_{\rm {vac}}-E_{\rm {C}}} In an intrinsic semiconductor at absolute zero, this concept is
functionally analogous to the chemistry definition of electron affinity, since an added electron will spontaneously go to the bottom of the conduction band. At nonzero temperature, and for other materials (metals, semimetals, heavily doped semiconductors), the analogy does not hold since an added electron will instead go to the Fermi level on average. In any case, the value of the electron affinity of a solid substance is very different from the chemistry and atomic physics electron affinity value for an atom of the same substance in gas phase. For example, a silicon crystal surface has electron affinity 4.05 eV, whereas an isolated silicon atom has electron affinity 1.39 eV. The electron affinity of a surface is closely related to, but distinct from, its work function. The work function is the thermodynamic work that can be obtained by reversibly and isothermally removing an electron from the material to vacuum; this thermodynamic electron goes to the Fermi level on average, not the conduction band edge: W = E v a c − E F {\displaystyle W=E_{\rm {vac}}-E_{\rm {F}}} . While the work function of a semiconductor can be changed by doping, the electron affinity ideally does not change with doping and so it is closer to being a material constant. However, like work function the electron affinity does depend on the surface termination (crystal face, surface chemistry, etc.) and is strictly a surface property. In semiconductor physics, the primary use of the electron affinity is not actually in the analysis of semiconductor–vacuum surfaces, but rather in heuristic electron affinity rules for estimating the band bending that occurs at the interface of two materials, in particular metal–semiconductor junctions and semiconductor heterojunctions. In certain circumstances, the electron affinity may become negative. Often negative electron affinity is desired to obtain efficient cathodes that can supply
electrons to the vacuum with little energy loss. The observed electron yield as a function of various parameters such as bias voltage or illumination conditions can be used to describe these structures with band diagrams in which the electron affinity is one parameter. For one illustration of the apparent effect of surface termination on electron emission, see Figure 3 in Marchywka Effect. == See also == Electron-capture mass spectrometry Electronegativity Electron donor Ionization energy — a closely related concept describing the energy required to remove an electron from a neutral atom or molecule One-electron reduction Valence electron Vacuum level == References == Tro, Nivaldo J. (2008). Chemistry: A Molecular Approach (2nd Edn.). New Jersey: Pearson Prentice Hall. ISBN 0-13-100065-9. pp. 348–349. == External links == Electron affinity, definition from the IUPAC Gold Book
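The sign bookkeeping in the convention section above can be made concrete with a short numerical sketch. This is an illustration only; the chlorine value is assumed from standard tables (about +349 kJ/mol):

```python
# Minimal sketch of the electron-affinity sign conventions.
# Tabulated Eea (kJ/mol) is a positive magnitude for exothermic capture;
# the energy *change* of attachment carries the opposite sign.

def delta_e_attach(eea):
    """Energy change for X + e- -> X-  (negative = exothermic)."""
    return -eea

def delta_e_detach(eea):
    """Energy change for X- -> X + e-  (positive = endothermic)."""
    return eea

# Chlorine's tabulated Eea is roughly +349 kJ/mol (assumed value),
# so electron capture releases energy:
eea_cl = 349.0
assert delta_e_attach(eea_cl) < 0                      # exothermic attachment
assert delta_e_detach(eea_cl) > 0                      # endothermic detachment
assert delta_e_detach(eea_cl) == -delta_e_attach(eea_cl)
```

The two helper functions simply encode Eea = −ΔE(attach) = ΔE(detach), the relation stated in the text.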
A merodiploid is a partially diploid bacterium, which has its own chromosome complement and a chromosome fragment introduced by conjugation, transformation or transduction. It can also be defined as an essentially haploid organism that carries a second copy of a part of its genome. The term is derived from the Greek meros = part, and was originally used to describe both unstable partial diploidy, such as that which occurs briefly in recipients after mating with an Hfr strain, and the stable state, exemplified by F-prime strains (see Hfr and F-prime strains). Over time the usage has tended to confine the term to descriptions of stable genetic states. Merodiploidy refers to the partial duplication of chromosomes in a haploid organism. == References == == External links == "MCB 150 at the University of Illinois @ Urbana-Champaign". Life.illinois.edu. Retrieved 2016-05-13.
{ "page_id": 50529615, "source": null, "title": "Merodiploid" }
In chemistry, trigonal bipyramidal molecular geometry describes a molecular geometry with one atom at the center and five more atoms at the corners of a triangular bipyramid. This is one geometry for which the bond angles surrounding the central atom are not identical (see also pentagonal bipyramid), because there is no geometrical arrangement with five terminal atoms in equivalent positions. Examples of this molecular geometry are phosphorus pentafluoride (PF5) and phosphorus pentachloride (PCl5) in the gas phase. == Axial (or apical) and equatorial positions == The five atoms bonded to the central atom are not all equivalent, and two different types of position are defined. For phosphorus pentachloride as an example, the phosphorus atom shares a plane with three chlorine atoms at 120° angles to each other in equatorial positions, and two more chlorine atoms above and below the plane (axial or apical positions). According to the VSEPR theory of molecular geometry, an axial position is more crowded because an axial atom has three neighboring equatorial atoms (on the same central atom) at a 90° bond angle, whereas an equatorial atom has only two neighboring axial atoms at a 90° bond angle. For molecules with five identical ligands, the axial bond lengths tend to be longer because the ligand atom cannot approach the central atom as closely. As examples, in PF5 the axial P−F bond length is 158 pm and the equatorial is 152 pm, and in PCl5 the axial and equatorial are 214 and 202 pm respectively. In the mixed halide PF3Cl2 the chlorines occupy two of the equatorial positions, indicating that fluorine has a greater apicophilicity, or tendency to occupy an axial position. In general, ligand apicophilicity increases with electronegativity and also with pi-electron withdrawing ability, as in the sequence Cl < F < CN. Both factors decrease
{ "page_id": 1901903, "source": null, "title": "Trigonal bipyramidal molecular geometry" }
electron density in the bonding region near the central atom so that crowding in the axial position is less important. == Related geometries with lone pairs == The VSEPR theory also predicts that substitution of a ligand at a central atom by a lone pair of valence electrons leaves the general form of the electron arrangement unchanged with the lone pair now occupying one position. For molecules with five pairs of valence electrons including both bonding pairs and lone pairs, the electron pairs are still arranged in a trigonal bipyramid but one or more equatorial positions is not attached to a ligand atom so that the molecular geometry (for the nuclei only) is different. The seesaw molecular geometry is found in sulfur tetrafluoride (SF4) with a central sulfur atom surrounded by four fluorine atoms occupying two axial and two equatorial positions, as well as one equatorial lone pair, corresponding to an AX4E molecule in the AXE notation. A T-shaped molecular geometry is found in chlorine trifluoride (ClF3), an AX3E2 molecule with fluorine atoms in two axial and one equatorial position, as well as two equatorial lone pairs. Finally, the triiodide ion (I−3) is also based upon a trigonal bipyramid, but the actual molecular geometry is linear with terminal iodine atoms in the two axial positions only and the three equatorial positions occupied by lone pairs of electrons (AX2E3); another example of this geometry is provided by xenon difluoride, XeF2. == Berry pseudorotation == Isomers with a trigonal bipyramidal geometry are able to interconvert through a process known as Berry pseudorotation. Pseudorotation is similar in concept to the movement of a conformational diastereomer, though no full revolutions are completed. In the process of pseudorotation, two equatorial ligands (both of which have a shorter bond length than the third) "shift" toward the
molecule's axis, while the axial ligands simultaneously "shift" toward the equator, creating a constant cyclical movement. Pseudorotation is particularly notable in simple molecules such as phosphorus pentafluoride (PF5). == See also == AXE method Molecular geometry == References == == External links == Indiana University Molecular Structure Center Interactive molecular examples for point groups Molecular Modeling Animated Trigonal Planar Visual
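The 90°/120°/180° angle pattern described in the sections above can be verified numerically. The coordinates below are an idealized trigonal bipyramid with unit bond lengths, a geometric sketch not tied to any particular molecule:

```python
import itertools
import math

# Idealized ligand positions around a central atom at the origin:
# two axial sites on the z-axis, three equatorial sites at 120° in the xy-plane.
axial = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
equatorial = [
    (math.cos(math.radians(a)), math.sin(math.radians(a)), 0.0)
    for a in (0.0, 120.0, 240.0)
]

def angle_deg(u, v):
    """Angle at the central atom between two unit bond vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# Every axial-equatorial pair subtends 90° (so each axial ligand has
# three 90° neighbors, matching the VSEPR crowding argument)...
for ax, eq in itertools.product(axial, equatorial):
    assert abs(angle_deg(ax, eq) - 90.0) < 1e-9

# ...every equatorial-equatorial pair subtends 120°...
for u, v in itertools.combinations(equatorial, 2):
    assert abs(angle_deg(u, v) - 120.0) < 1e-9

# ...and the two axial ligands are trans to each other (180°).
assert abs(angle_deg(*axial) - 180.0) < 1e-9
```

The count of 90° neighbors (three for an axial site, two for an equatorial site) falls directly out of these coordinates.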
In probability theory, the KPZ fixed point is a Markov field, conjectured to be the universal limit of a wide range of stochastic models forming the universality class of a non-linear stochastic partial differential equation called the KPZ equation. Even though the universality class was already introduced in 1986 with the KPZ equation itself, the KPZ fixed point was not concretely specified until 2021, when the mathematicians Konstantin Matetski, Jeremy Quastel and Daniel Remenik gave an explicit description of its transition probabilities in terms of Fredholm determinants. == Introduction == All models in the KPZ class share a fluctuating height function, or some analogous quantity, which can be thought of as modeling the growth of the interface over time. The KPZ equation itself is also a member of this class and is the canonical model of random interface growth. The strong KPZ universality conjecture states that all models in the KPZ universality class converge, under a specific scaling of the height function, to the KPZ fixed point, with the limit depending only on the initial condition. Matetski, Quastel and Remenik constructed the KPZ fixed point for the ( 1 + 1 ) {\displaystyle (1+1)} -dimensional KPZ universality class (i.e. one space and one time dimension) on the Polish space of upper semicontinuous functions (UC) with the topology of local UC convergence. They did this by studying a particular model of the KPZ universality class, the TASEP ("totally asymmetric simple exclusion process"), with general initial conditions, and the random walk of its associated height function. They achieved this by rewriting the biorthogonal function of the correlation kernel that appears in the Fredholm determinant formula for the multi-point distribution of the particles in the Weyl chamber. Then they showed convergence to the fixed point. == KPZ fixed point ==
{ "page_id": 76547408, "source": null, "title": "KPZ fixed point" }
Let h ( t , x → ) {\displaystyle h(t,{\vec {x}})} denote the height function of some probabilistic model, with ( t , x → ) ∈ R × R d {\displaystyle (t,{\vec {x}})\in \mathbb {R} \times \mathbb {R} ^{d}} denoting space-time. So far only the case d = 1 {\displaystyle d=1} , also denoted ( 1 + 1 ) {\displaystyle (1+1)} , has been studied in depth, so we fix this dimension for the rest of the article. Within the KPZ universality class there exist two equilibrium points, or fixed points: the trivial Edwards-Wilkinson (EW) fixed point and the non-trivial KPZ fixed point. The KPZ equation connects them. The KPZ fixed point is defined as a height function h ( t , x → ) {\displaystyle {\mathfrak {h}}(t,{\vec {x}})} rather than as a particular model with a height function. === KPZ fixed point === The KPZ fixed point ( h ( t , x ) ) t ≥ 0 , x ∈ R {\displaystyle ({\mathfrak {h}}(t,x))_{t\geq 0,x\in \mathbb {R} }} is a Markov process whose n-point distribution for x 1 < x 2 < ⋯ < x n ∈ R {\displaystyle x_{1}<x_{2}<\cdots <x_{n}\in \mathbb {R} } and t > 0 {\displaystyle t>0} can be represented as P h ( 0 , ⋅ ) ( h ( t , x 1 ) ≤ a 1 , h ( t , x 2 ) ≤ a 2 , … , h ( t , x n ) ≤ a n ) = det ( I − K ) L 2 ( { x 1 , x 2 , … , x n } × R ) {\displaystyle \mathbb {P} _{{\mathfrak {h}}(0,\cdot )}({\mathfrak {h}}(t,x_{1})\leq a_{1},{\mathfrak {h}}(t,x_{2})\leq a_{2},\dots ,{\mathfrak {h}}(t,x_{n})\leq a_{n})=\det(I-K)_{L^{2}(\{x_{1},x_{2},\dots ,x_{n}\}\times \mathbb {R} )}} where a 1 , …
, a n ∈ R {\displaystyle a_{1},\dots ,a_{n}\in \mathbb {R} } and K {\displaystyle K} is a trace class operator called the extended Brownian scattering operator; the subscript means that the process starts in h ( 0 , ⋅ ) {\displaystyle {\mathfrak {h}}(0,\cdot )} . === KPZ universality conjectures === The KPZ conjecture states that the height function h ( t , x → ) {\displaystyle h(t,{\vec {x}})} of all models in the KPZ universality class at time t {\displaystyle t} fluctuates around the mean with an order of t 1 / 3 {\displaystyle t^{1/3}} and that the spatial correlation of the fluctuation is of order t 2 / 3 {\displaystyle t^{2/3}} . This motivates the so-called 1:2:3 scaling, which is the characteristic scaling for the KPZ fixed point. The EW fixed point also has a characteristic scaling, the 1:2:4 scaling. The fixed points are invariant under their associated scalings. ==== 1:2:3 scaling ==== The 1:2:3 scaling of a height function is, for ε > 0 {\displaystyle \varepsilon >0} , ε 1 / 2 h ( ε − 3 / 2 t , ε − 1 x ) − C ε t , {\displaystyle \varepsilon ^{1/2}h(\varepsilon ^{-3/2}t,\varepsilon ^{-1}x)-C_{\varepsilon }t,} where 1:3 and 2:3 stand for the proportions of the exponents and C ε {\displaystyle C_{\varepsilon }} is a constant. ==== Strong conjecture ==== The strong conjecture states that all models in the KPZ universality class converge under the 1:2:3 scaling of the height function, provided their initial conditions also converge, i.e. lim ε → 0 ε 1 / 2 ( h ( c 1 ε − 3 / 2 t , c 2 ε − 1 x ) − c 3 ε − 3 / 2 t ) = ( d ) h ( t , x ) {\displaystyle \lim \limits _{\varepsilon \to 0}\varepsilon
^{1/2}(h(c_{1}\varepsilon ^{-3/2}t,c_{2}\varepsilon ^{-1}x)-c_{3}\varepsilon ^{-3/2}t)\;{\stackrel {(d)}{=}}\;{\mathfrak {h}}(t,x)} with initial condition h ( 0 , x ) := lim ε → 0 ε 1 / 2 h ( 0 , c 2 ε − 1 x ) , {\displaystyle {\mathfrak {h}}(0,x):=\lim \limits _{\varepsilon \to 0}\varepsilon ^{1/2}h(0,c_{2}\varepsilon ^{-1}x),} where c 1 , c 2 , c 3 {\displaystyle c_{1},c_{2},c_{3}} are constants depending on the model. ==== Weak conjecture ==== If we remove the growth term in the KPZ equation, we get ∂ t h ( t , x ) = ν ∂ x 2 h + σ ξ , {\displaystyle \partial _{t}h(t,x)=\nu \partial _{x}^{2}h+\sigma \xi ,} which converges under the 1:2:4 scaling lim ε → 0 ε 1 / 2 ( h ( c 1 ε − 2 t , c 2 ε − 1 x ) − c 3 ε − 3 / 2 t ) = ( d ) h ( t , x ) {\displaystyle \lim \limits _{\varepsilon \to 0}\varepsilon ^{1/2}(h(c_{1}\varepsilon ^{-2}t,c_{2}\varepsilon ^{-1}x)-c_{3}\varepsilon ^{-3/2}t)\;{\stackrel {(d)}{=}}\;{\mathfrak {h}}(t,x)} to the EW fixed point. The weak conjecture states that the KPZ equation is the only heteroclinic orbit between the KPZ and EW fixed points. === Airy process === If one fixes the time dimension and looks at the limit lim t → ∞ t − 1 / 3 ( h ( c 1 t , c 2 t 2 / 3 x ) − c 3 t ) = ( d ) A ( x ) , {\displaystyle \lim \limits _{t\to \infty }t^{-1/3}(h(c_{1}t,c_{2}t^{2/3}x)-c_{3}t){\stackrel {(d)}{=}}\;{\mathcal {A}}(x),} then one gets the Airy process ( A ( x ) ) x ∈ R {\displaystyle ({\mathcal {A}}(x))_{x\in \mathbb {R} }} , which also occurs in the theory of random matrices. == References ==
In thermodynamics, vapor quality is the mass fraction in a saturated mixture that is vapor; in other words, saturated vapor has a "quality" of 100%, and saturated liquid has a "quality" of 0%. Vapor quality is an intensive property which can be used in conjunction with other independent intensive properties to specify the thermodynamic state of the working fluid of a thermodynamic system. It has no meaning for substances which are not saturated mixtures (for example, compressed liquids or superheated fluids). Vapor quality is an important quantity during the adiabatic expansion step in various thermodynamic cycles (like Organic Rankine cycle, Rankine cycle, etc.). Working fluids can be classified by using the appearance of droplets in the vapor during the expansion step. Quality χ can be calculated by dividing the mass of the vapor by the mass of the total mixture: χ = m vapor m total {\displaystyle \chi ={\frac {m_{\text{vapor}}}{m_{\text{total}}}}} where m indicates mass. Another definition used in chemical engineering defines quality (q) of a fluid as the fraction that is saturated liquid. By this definition, a saturated liquid has q = 0. A saturated vapor has q = 1. An alternative definition is the 'equilibrium thermodynamic quality'. It can be used only for single-component mixtures (e.g. water with steam), and can take values < 0 (for sub-cooled fluids) and > 1 (for super-heated vapors): χ eq = h − h f h f g {\displaystyle \chi _{\text{eq}}={\frac {h-h_{f}}{h_{fg}}}} where h is the mixture specific enthalpy, defined as: h = m f ⋅ h f + m g ⋅ h g m f + m g . {\displaystyle h={\frac {m_{f}\cdot h_{f}+m_{g}\cdot h_{g}}{m_{f}+m_{g}}}.} Subscripts f and g refer to saturated liquid and saturated gas respectively, and fg refers to vaporization. == Calculation == The above expression for vapor quality can be
{ "page_id": 9569619, "source": null, "title": "Vapor quality" }
expressed as: χ = y − y f y g − y f {\displaystyle \chi ={\frac {y-y_{f}}{y_{g}-y_{f}}}} where y {\displaystyle y} is equal to either specific enthalpy, specific entropy, specific volume or specific internal energy, y f {\displaystyle y_{f}} is the value of the specific property for the saturated liquid state, and y g {\displaystyle y_{g}} is the value for the saturated vapor state, so that y g − y f {\displaystyle y_{g}-y_{f}} spans the two-phase region under the vapor dome. Another expression of the same concept is: χ = m v m l + m v {\displaystyle \chi ={\frac {m_{v}}{m_{l}+m_{v}}}} where m v {\displaystyle m_{v}} is the vapor mass and m l {\displaystyle m_{l}} is the liquid mass. == Steam quality and work == The idea of vapor quality derives from the origins of thermodynamics, where an important application was the steam engine. Low-quality steam would contain a high moisture percentage and therefore damage components more easily; high-quality steam would not corrode the steam engine. Steam engines use water vapor (steam) to push pistons or turbines, and that movement creates work. Quantitatively, steam quality (steam dryness) is the proportion of saturated steam in a saturated water/steam mixture. In other words, a steam quality of 0 indicates 100% liquid, while a steam quality of 1 (or 100%) indicates 100% steam. The quality of steam on which steam whistles are blown is variable and may affect frequency. Steam quality determines the velocity of sound, which declines with decreasing dryness due to the inertia of the liquid phase. Also, the specific volume of steam for a given temperature decreases with decreasing dryness. Steam quality is very useful in determining the enthalpy of saturated water/steam mixtures, since the enthalpy
of steam (gaseous state) is many orders of magnitude higher than the enthalpy of water (liquid state). == References ==
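The equilibrium thermodynamic quality formula above, χeq = (h − hf)/hfg, can be sketched in a few lines. The steam-table values below are approximate and assumed for illustration only (saturated water at atmospheric pressure, hf ≈ 419 kJ/kg and hfg ≈ 2257 kJ/kg):

```python
def quality(h, h_f, h_fg):
    """Equilibrium thermodynamic quality chi = (h - h_f) / h_fg.
    Returns a value < 0 for sub-cooled liquid and > 1 for superheated vapor,
    as described in the single-component definition above."""
    return (h - h_f) / h_fg

# Approximate steam-table values for water at atmospheric pressure
# (assumed for illustration): h_f ~ 419 kJ/kg, h_fg ~ 2257 kJ/kg.
h_f, h_fg = 419.0, 2257.0

assert quality(h_f, h_f, h_fg) == 0.0          # saturated liquid
assert quality(h_f + h_fg, h_f, h_fg) == 1.0   # saturated vapor
assert quality(300.0, h_f, h_fg) < 0.0         # sub-cooled liquid
assert quality(2800.0, h_f, h_fg) > 1.0        # superheated vapor

# A 50/50 mixture by mass sits halfway up the vapor dome:
h_mix = h_f + 0.5 * h_fg
assert abs(quality(h_mix, h_f, h_fg) - 0.5) < 1e-12
```

The same function works with specific entropy, volume or internal energy in place of enthalpy, per the general expression χ = (y − yf)/(yg − yf).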
This article contains a list of the most studied restriction enzymes whose names start with C to D inclusive. It contains approximately 80 enzymes. The following information is given: Enzyme: Accepted name of the molecule, according to the internationally adopted nomenclature, and bibliographical references. (Further reading: see the section "Nomenclature" in the article "Restriction enzyme".) PDB code: Code used to identify the structure of a protein in the PDB database of protein structures. The 3D atomic structure of a protein provides highly valuable information to understand the intimate details of its mechanism of action. Source: Organism that naturally produces the enzyme. Recognition sequence: Sequence of DNA recognized by the enzyme and to which it specifically binds. Cut: Cutting site and DNA products of the cut. The recognition sequence and the cutting site usually match, but sometimes the cutting site can be dozens of nucleotides away from the recognition site. Isoschizomers and neoschizomers: An isoschizomer is an enzyme that recognizes the same sequence as another. A neoschizomer is a special type of isoschizomer that recognizes the same sequence as another, but cuts in a different manner. A maximum number of 8-10 most common isoschizomers are indicated for every enzyme but there may be many more. Neoschizomers are shown in bold and green color font (e.g.: BamHI). When "None on date" is indicated, that means that there were no registered isoschizomers in the databases on that date with a clearly defined cutting site. Isoschizomers indicated in white font and grey background correspond to enzymes not listed in the current lists: == Whole list navigation == == Restriction enzymes == === C === === D === == Notes ==
{ "page_id": 27460950, "source": null, "title": "List of restriction enzyme cutting sites: C–D" }
The International Day for Biological Diversity (or World Biodiversity Day) is a United Nations–sanctioned international day for the promotion of biodiversity issues. It is currently held on May 22. The International Day for Biological Diversity falls within the scope of the UN Post-2015 Development Agenda's Sustainable Development Goals. In this larger initiative of international cooperation, the topic of biodiversity concerns stakeholders in sustainable agriculture; desertification, land degradation and drought; water and sanitation; health and sustainable development; energy; science, technology and innovation, knowledge-sharing and capacity-building; urban resilience and adaptation; sustainable transport; climate change and disaster risk reduction; oceans and seas; forests; vulnerable groups including indigenous peoples; and food security. The critical role of biodiversity in sustainable development was recognized in a Rio+20 outcome document, "The World We Want: A Future for All". From its creation by the Second Committee of the UN General Assembly in 1993 until 2000, it was held on December 29 to celebrate the day the Convention on Biological Diversity went into effect. On December 20, 2000, the date was shifted to commemorate the adoption of the Convention on May 22, 1992, at the Rio de Janeiro Earth Summit, and partly to avoid the many other holidays that occur in late December. == Themes == == See also == United Nations Decade on Biodiversity (2011–2020) International Year of Biodiversity (2010) == References == == External links == Post-2015 Development Agenda "Considering Man's Place in the World," May 22, 2014 Sustainable Development Goals "The World We Want: A Future for All"
{ "page_id": 5113175, "source": null, "title": "International Day for Biological Diversity" }
In physics, particularly in quantum field theory, the Weyl equation is a relativistic wave equation for describing massless spin-1/2 particles called Weyl fermions. The equation is named after Hermann Weyl. The Weyl fermions are one of the three possible types of elementary fermions, the other two being the Dirac and the Majorana fermions. None of the elementary particles in the Standard Model are Weyl fermions. Before the confirmation of neutrino oscillations, it was considered possible that the neutrino might be a Weyl fermion (it is now expected to be either a Dirac or a Majorana fermion). In condensed matter physics, some materials can display quasiparticles that behave as Weyl fermions, leading to the notion of Weyl semimetals. Mathematically, any Dirac fermion can be decomposed as two Weyl fermions of opposite chirality coupled by the mass term. == History == The Dirac equation was published in 1928 by Paul Dirac, and was first used to model spin-1/2 particles in the framework of relativistic quantum mechanics. Hermann Weyl published his equation in 1929 as a simplified version of the Dirac equation. Wolfgang Pauli wrote in 1933 against Weyl's equation because it violated parity. However, three years earlier, Pauli had predicted the existence of a new elementary fermion, the neutrino, to explain beta decay; the neutrino was eventually described using the Weyl equation. In 1937, Conyers Herring proposed that Weyl fermions may exist as quasiparticles in condensed matter. Neutrinos were experimentally observed in 1956 as particles with extremely small masses (and historically were even sometimes thought to be massless). The same year, the Wu experiment showed that parity could be violated by the weak interaction, addressing Pauli's criticism. This was followed by the measurement of the neutrino's helicity in 1958. As experiments showed no signs of a neutrino mass, interest in
{ "page_id": 33162584, "source": null, "title": "Weyl equation" }
the Weyl equation resurfaced. Thus, the Standard Model was built under the assumption that neutrinos were Weyl fermions. While Italian physicist Bruno Pontecorvo had proposed in 1957 the possibility of neutrino masses and neutrino oscillations, it was not until 1998 that Super-Kamiokande eventually confirmed the existence of neutrino oscillations, and their non-zero mass. This discovery confirmed that Weyl's equation cannot completely describe the propagation of neutrinos, as the equations can only describe massless particles. In 2015, the first Weyl semimetal was demonstrated experimentally in crystalline tantalum arsenide (TaAs) by the collaboration of M.Z. Hasan's (Princeton University) and H. Ding's (Chinese Academy of Sciences) teams. Independently, the same year, M. Soljačić team (Massachusetts Institute of Technology) also observed Weyl-like excitations in photonic crystals. == Equation == The Weyl equation comes in two forms. The right-handed form can be written as follows: σ μ ∂ μ ψ = 0 {\displaystyle \sigma ^{\mu }\partial _{\mu }\psi =0} Expanding this equation, and inserting c {\displaystyle c} for the speed of light, it becomes I 2 1 c ∂ ψ ∂ t + σ x ∂ ψ ∂ x + σ y ∂ ψ ∂ y + σ z ∂ ψ ∂ z = 0 {\displaystyle I_{2}{\frac {1}{c}}{\frac {\partial \psi }{\partial t}}+\sigma _{x}{\frac {\partial \psi }{\partial x}}+\sigma _{y}{\frac {\partial \psi }{\partial y}}+\sigma _{z}{\frac {\partial \psi }{\partial z}}=0} where σ μ = ( σ 0 σ 1 σ 2 σ 3 ) = ( I 2 σ x σ y σ z ) {\displaystyle \sigma ^{\mu }={\begin{pmatrix}\sigma ^{0}&\sigma ^{1}&\sigma ^{2}&\sigma ^{3}\end{pmatrix}}={\begin{pmatrix}I_{2}&\sigma _{x}&\sigma _{y}&\sigma _{z}\end{pmatrix}}} is a vector whose components are the 2×2 identity matrix I 2 {\displaystyle I_{2}} for μ = 0 {\displaystyle \mu =0} and the Pauli matrices for μ = 1 , 2 , 3 , {\displaystyle \mu =1,2,3,} and ψ {\displaystyle \psi }
is the wavefunction – one of the Weyl spinors. The left-handed form of the Weyl equation is usually written as: σ ¯ μ ∂ μ ψ = 0 {\displaystyle {\bar {\sigma }}^{\mu }\partial _{\mu }\psi =0} where σ ¯ μ = ( I 2 − σ x − σ y − σ z ) . {\displaystyle {\bar {\sigma }}^{\mu }={\begin{pmatrix}I_{2}&-\sigma _{x}&-\sigma _{y}&-\sigma _{z}\end{pmatrix}}~.} The solutions of the right- and left-handed Weyl equations are different: they have right- and left-handed helicity, and thus chirality, respectively. It is convenient to indicate this explicitly, as follows: σ μ ∂ μ ψ R = 0 {\displaystyle \sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}=0} and σ ¯ μ ∂ μ ψ L = 0 . {\displaystyle {\bar {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}=0~.} == Plane wave solutions == The plane-wave solutions to the Weyl equation are referred to as the left and right handed Weyl spinors, each is with two components. Both have the form ψ ( r , t ) = ( ψ 1 ψ 2 ) = χ e − i ( k ⋅ r − ω t ) = χ e − i ( p ⋅ r − E t ) / ℏ {\displaystyle \psi \left(\mathbf {r} ,t\right)={\begin{pmatrix}\psi _{1}\\\psi _{2}\\\end{pmatrix}}=\chi e^{-i(\mathbf {k} \cdot \mathbf {r} -\omega t)}=\chi e^{-i(\mathbf {p} \cdot \mathbf {r} -Et)/\hbar }} , where χ = ( χ 1 χ 2 ) {\displaystyle \chi ={\begin{pmatrix}\chi _{1}\\\chi _{2}\\\end{pmatrix}}} is a momentum-dependent two-component spinor which satisfies σ μ p μ χ = ( I 2 E − σ → ⋅ p → ) χ = 0 {\displaystyle \sigma ^{\mu }p_{\mu }\chi =\left(I_{2}E-{\vec {\sigma }}\cdot {\vec {p}}\right)\chi =0} or σ ¯ μ p μ χ = ( I 2 E + σ → ⋅ p → ) χ = 0 {\displaystyle {\bar
{\sigma }}^{\mu }p_{\mu }\chi =\left(I_{2}E+{\vec {\sigma }}\cdot {\vec {p}}\right)\chi =0} . By direct manipulation, one obtains that ( σ ¯ ν p ν ) ( σ μ p μ ) χ = ( σ ν p ν ) ( σ ¯ μ p μ ) χ = p μ p μ χ = ( E 2 − p → ⋅ p → ) χ = 0 {\displaystyle \left({\bar {\sigma }}^{\nu }p_{\nu }\right)\left(\sigma ^{\mu }p_{\mu }\right)\chi =\left(\sigma ^{\nu }p_{\nu }\right)\left({\bar {\sigma }}^{\mu }p_{\mu }\right)\chi =p_{\mu }p^{\mu }\chi =\left(E^{2}-{\vec {p}}\cdot {\vec {p}}\right)\chi =0} , and concludes that the equations correspond to a particle that is massless. As a result, the magnitude of momentum p {\displaystyle \mathbf {p} } relates directly to the wave-vector k {\displaystyle \mathbf {k} } by the de Broglie relations as: | p | = ℏ | k | = ℏ ω c ⇒ | k | = ω c {\displaystyle |\mathbf {p} |=\hbar |\mathbf {k} |={\frac {\hbar \omega }{c}}\,\Rightarrow \,|\mathbf {k} |={\frac {\omega }{c}}} The equation can be written in terms of left and right handed spinors as: σ μ ∂ μ ψ R = 0 σ ¯ μ ∂ μ ψ L = 0 {\displaystyle {\begin{aligned}\sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}&=0\\{\bar {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}&=0\end{aligned}}} === Helicity === The left and right components correspond to the helicity λ {\displaystyle \lambda } of the particles, the projection of angular momentum operator J {\displaystyle \mathbf {J} } onto the linear momentum p {\displaystyle \mathbf {p} } : p ⋅ J | p , λ ⟩ = λ | p | | p , λ ⟩ {\displaystyle \mathbf {p} \cdot \mathbf {J} \left|\mathbf {p} ,\lambda \right\rangle =\lambda |\mathbf {p} |\left|\mathbf {p} ,\lambda \right\rangle } Here λ = ± 1 2 . {\textstyle \lambda =\pm {\frac {1}{2}}~.}
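The algebraic identity used above, (σ̄^ν p_ν)(σ^μ p_μ)χ = (E² − p⃗·p⃗)χ, can be checked numerically with explicit Pauli matrices. A minimal sketch in units with ħ = c = 1 (the momentum vector is an arbitrary example value):

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_p(E, p):
    """sigma^mu p_mu = E*I2 - sigma.p (right-handed form)."""
    return E * I2 - (p[0] * sx + p[1] * sy + p[2] * sz)

def sigma_bar_p(E, p):
    """sigma-bar^mu p_mu = E*I2 + sigma.p (left-handed form)."""
    return E * I2 + (p[0] * sx + p[1] * sy + p[2] * sz)

p = np.array([0.3, -1.2, 0.4])   # example momentum
E = np.linalg.norm(p)            # massless dispersion E = |p|

# The product of the two operators collapses to (E^2 - |p|^2) * I2,
# which vanishes on the massless shell:
prod = sigma_bar_p(E, p) @ sigma_p(E, p)
assert np.allclose(prod, (E**2 - p @ p) * I2)
assert np.allclose(prod, 0)

# det(E*I2 - sigma.p) = E^2 - |p|^2 = 0 on shell, so a nonzero spinor
# chi annihilated by sigma^mu p_mu exists, as claimed in the text.
assert abs(np.linalg.det(sigma_p(E, p))) < 1e-12
```

Off the massless shell (E ≠ |p|) the determinant is nonzero, so only E = |p| admits nontrivial plane-wave spinor solutions.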
{ "page_id": 33162584, "source": null, "title": "Weyl equation" }
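The "direct manipulation" step above rests on the operator identity (σ̄^ν p_ν)(σ^μ p_μ) = p_μ p^μ I₂, which holds for any four-momentum, on-shell or not; nonzero solutions χ therefore exist only when E² = p·p. A minimal numerical check (not from the article; numpy, with a deliberately off-shell, hypothetical (E, p)):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# arbitrary off-shell four-momentum in the (+,-,-,-) signature
E = 1.7
p = np.array([0.3, -1.2, 0.7])
sp = p[0] * sx + p[1] * sy + p[2] * sz

sigma_p = E * np.eye(2) - sp       # sigma^mu p_mu = I2 E - sigma.p
sigma_bar_p = E * np.eye(2) + sp   # sigma-bar^mu p_mu = I2 E + sigma.p

# (sigma-bar.p)(sigma.p) = (E^2 - p.p) I2, proportional to the identity
assert np.allclose(sigma_bar_p @ sigma_p, (E**2 - p @ p) * np.eye(2))
```

The identity follows from (σ·p)² = (p·p) I₂, so the product collapses to a multiple of the identity and annihilates χ only on the light cone.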
== Lorentz invariance ==

Both equations are Lorentz invariant under the Lorentz transformation {\displaystyle x\mapsto x^{\prime }=\Lambda x} where {\displaystyle \Lambda \in \mathrm {SO} (1,3)~.} More precisely, the equations transform as

{\displaystyle \sigma ^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {R}}(x)\mapsto \sigma ^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=\left(S^{-1}\right)^{\dagger }\sigma ^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {R}}(x)}

where {\displaystyle S^{\dagger }} is the Hermitian transpose, provided that the right-handed field transforms as

{\displaystyle \psi _{\rm {R}}(x)\mapsto \psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=S\psi _{\rm {R}}(x)}

The matrix {\displaystyle S\in \mathrm {SL} (2,\mathbb {C} )} is related to the Lorentz transform by means of the double covering of the Lorentz group by the special linear group {\displaystyle \mathrm {SL} (2,\mathbb {C} )}, given by

{\displaystyle \sigma _{\mu }{\Lambda ^{\mu }}_{\nu }=\left(S^{-1}\right)^{\dagger }\sigma _{\nu }S^{-1}}

Thus, if the untransformed differential vanishes in one Lorentz frame, then it also vanishes in another. Similarly,

{\displaystyle {\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {L}}(x)\mapsto {\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {L}}^{\prime }\left(x^{\prime }\right)=S{\overline {\sigma }}^{\mu }{\frac {\partial }{\partial x^{\mu }}}\psi _{\rm {L}}(x)}

provided that the left-handed field transforms as

{\displaystyle \psi _{\rm {L}}(x)\mapsto \psi _{\rm {L}}^{\prime }\left(x^{\prime }\right)=\left(S^{\dagger }\right)^{-1}\psi _{\rm {L}}(x)~.}

Proof: Neither of these transformation properties is in any way "obvious", and so they deserve a careful derivation. Begin with the form

{\displaystyle \psi _{\rm {R}}(x)\mapsto \psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=R\psi _{\rm {R}}(x)}

for some unknown {\displaystyle R\in \mathrm {SL} (2,\mathbb {C} )} to be determined. The Lorentz transform, in coordinates, is {\displaystyle x^{\prime \mu }={\Lambda ^{\mu }}_{\nu }x^{\nu }} or, equivalently, {\displaystyle x^{\nu }={\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }x^{\prime \mu }} This leads to

{\displaystyle {\begin{aligned}\sigma ^{\mu }\partial _{\mu }^{\prime }\psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)&=\sigma ^{\mu }{\frac {\partial }{\partial x^{\prime \mu }}}\psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)\\&=\sigma ^{\mu }{\frac {\partial x^{\nu }}{\partial x^{\prime \mu }}}{\frac {\partial }{\partial x^{\nu }}}R\psi _{\rm {R}}(x)\\&=\sigma ^{\mu }{\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }{\frac {\partial }{\partial x^{\nu }}}R\psi _{\rm {R}}(x)\\&=\sigma ^{\mu }{\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }\partial _{\nu }R\psi _{\rm {R}}(x)\end{aligned}}}

In order to make use of the Weyl map {\displaystyle \sigma _{\mu }{\Lambda ^{\mu }}_{\nu }=\left(S^{-1}\right)^{\dagger }\sigma _{\nu }S^{-1}} a few indexes must be raised and lowered. This is easier said than done, as it invokes the identity {\displaystyle \eta \Lambda ^{\mathsf {T}}\eta =\Lambda ^{-1}} where {\displaystyle \eta ={\mbox{diag}}(+1,-1,-1,-1)} is the flat-space Minkowski metric. The above identity is often used to define the elements {\displaystyle \Lambda \in \mathrm {SO} (1,3).} One takes the transpose: {\displaystyle {\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }={\left(\Lambda ^{-1{\mathsf {T}}}\right)_{\mu }}^{\nu }} to write

{\displaystyle {\begin{aligned}\sigma ^{\mu }{\left(\Lambda ^{-1}\right)^{\nu }}_{\mu }\partial _{\nu }R\psi _{\rm {R}}(x)&=\sigma ^{\mu }{\left(\Lambda ^{-1{\mathsf {T}}}\right)_{\mu }}^{\nu }\partial _{\nu }R\psi _{\rm {R}}(x)\\&=\sigma _{\mu }{\Lambda ^{\mu }}_{\nu }\partial ^{\nu }R\psi _{\rm {R}}(x)\\&=\left(S^{-1}\right)^{\dagger }\sigma _{\mu }\partial ^{\mu }S^{-1}R\psi _{\rm {R}}(x)\end{aligned}}}

One thus regains the original form if {\displaystyle S^{-1}R=1,} that is, {\displaystyle R=S.} Performing the same manipulations for the left-handed equation, one concludes that {\displaystyle \psi _{\rm {L}}(x)\mapsto \psi _{\rm {L}}^{\prime }\left(x^{\prime }\right)=L\psi _{\rm {L}}(x)} with {\displaystyle L=\left(S^{\dagger }\right)^{-1}.}

=== Relationship to Majorana ===

The Weyl equation is conventionally interpreted as describing a massless particle. However, with a slight alteration, one may obtain a two-component version of the Majorana equation. This arises because the special linear group {\displaystyle \mathrm {SL} (2,\mathbb {C} )} is isomorphic to the symplectic group {\displaystyle \mathrm {Sp} (2,\mathbb {C} )~.} The symplectic group is defined as the set of all complex 2×2 matrices that satisfy

{\displaystyle S^{\mathsf {T}}\omega S=\omega }

where

{\displaystyle \omega =i\sigma _{2}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}}

The defining relationship can be rewritten as {\displaystyle \omega S^{*}=\left(S^{\dagger }\right)^{-1}\omega } where {\displaystyle S^{*}} is the complex conjugate. The right-handed field, as noted earlier, transforms as {\displaystyle \psi _{\rm {R}}(x)\mapsto \psi _{\rm {R}}^{\prime }\left(x^{\prime }\right)=S\psi _{\rm {R}}(x)} and so the complex conjugate field transforms as

{\displaystyle \psi _{\rm {R}}^{*}(x)\mapsto \psi _{\rm {R}}^{\prime *}\left(x^{\prime }\right)=S^{*}\psi _{\rm {R}}^{*}(x)}

Applying the defining relationship, one concludes that

{\displaystyle m\omega \psi _{\rm {R}}^{*}(x)\mapsto m\omega \psi _{\rm {R}}^{\prime *}\left(x^{\prime }\right)=\left(S^{\dagger }\right)^{-1}m\omega \psi _{\rm {R}}^{*}(x)}

which is exactly the same Lorentz covariance property noted earlier. Thus, the linear combination, using an arbitrary complex phase factor {\displaystyle \eta =e^{i\phi }},

{\displaystyle i\sigma ^{\mu }\partial _{\mu }\psi _{\rm {R}}(x)+\eta m\omega \psi _{\rm {R}}^{*}(x)}

transforms in a covariant fashion; setting this to zero gives the complex two-component Majorana equation. The Majorana equation is conventionally written as a four-component real equation, rather than a two-component complex equation; the above can be brought into four-component form (see that article for details). Similarly, the left-chiral Majorana equation (including an arbitrary phase factor {\displaystyle \zeta }) is

{\displaystyle i{\overline {\sigma }}^{\mu }\partial _{\mu }\psi _{\rm {L}}(x)+\zeta m\omega \psi _{\rm {L}}^{*}(x)=0}

As noted earlier, the left and right chiral versions are related by a parity transformation. The skew complex conjugate {\displaystyle \omega \psi ^{*}=i\sigma ^{2}\psi ^{*}} can be recognized as the charge conjugate form of {\displaystyle \psi ~.} Thus, the Majorana equation can be read as an equation that connects a spinor to its charge-conjugate form. The two distinct phases on the mass term are related to the two distinct eigenvalues of the charge conjugation operator; see charge conjugation and Majorana equation for details.

Define a pair of operators, the Majorana operators,

{\displaystyle D_{\rm {L}}=i{\overline {\sigma }}^{\mu }\partial _{\mu }+\zeta m\omega K\qquad D_{\rm {R}}=i\sigma ^{\mu }\partial _{\mu }+\eta m\omega K}

where {\displaystyle K} is a short-hand reminder to take the complex conjugate. Under Lorentz transformations, these transform as

{\displaystyle D_{\rm {L}}\mapsto D_{\rm {L}}^{\prime }=SD_{\rm {L}}S^{\dagger }\qquad D_{\rm {R}}\mapsto D_{\rm {R}}^{\prime }=\left(S^{\dagger }\right)^{-1}D_{\rm {R}}S^{-1}}

whereas the Weyl spinors transform as

{\displaystyle \psi _{\rm {L}}\mapsto \psi _{\rm {L}}^{\prime }=\left(S^{\dagger }\right)^{-1}\psi _{\rm {L}}\qquad \psi _{\rm {R}}\mapsto \psi _{\rm {R}}^{\prime }=S\psi _{\rm {R}}}

just as above. Thus, the matched combinations of these are Lorentz covariant, and one may take

{\displaystyle D_{\rm {L}}\psi _{\rm {L}}=0\qquad D_{\rm {R}}\psi _{\rm {R}}=0}

as a pair of complex 2-spinor Majorana equations. The products {\displaystyle D_{\rm {L}}D_{\rm {R}}} and {\displaystyle D_{\rm {R}}D_{\rm {L}}} are both Lorentz covariant. The product is explicitly D R D L = ( i σ μ ∂ μ + η m ω K ) ( i σ ¯ μ ∂ μ + ζ m ω K ) = − ( ∂