id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
61,341,549 | https://en.wikipedia.org/wiki/Oana%20Caruana%20Pulpan | Oana Caruana Pulpan (born 17 April 1978) is a Maltese chess player. She was awarded the Woman FIDE Master title in 2009. She is a member of the Maltese Women's National Team. Her father introduced her to chess when she was very young, and she has been playing ever since.
She is sponsored by the company she works with, Actavis. She is also a pharmacologist. She teaches chess classes at St. Catherine's High School in Pembroke, Malta.
References
1978 births
Living people
Chess Woman FIDE Masters
Maltese chess players
Pharmacologists | Oana Caruana Pulpan | [
"Chemistry"
] | 119 | [
"Pharmacology",
"Biochemists",
"Pharmacologists"
] |
61,343,781 | https://en.wikipedia.org/wiki/Bifora%20%28architecture%29 | In architecture, a bifora is a type of window divided vertically into two openings by a small column or a mullion or a pilaster; the openings are topped by arches, round or pointed. Sometimes the bifora is framed by a further arch; the space between the two arches may be decorated with a coat of arms or a small circular opening (oculus).
The bifora was used in Byzantine architecture, including Italian buildings such as the Basilica of Sant'Apollinare Nuovo, in Ravenna. Typical of the Romanesque and Gothic periods, in which it became an ornamental motif for windows and belfries, the bifora was also often used during the Renaissance period. In Baroque architecture and Neoclassical architecture the bifora was largely forgotten, or replaced by elements like the three openings of the Venetian window. It was also copied in Moorish architecture in Spain, where it is known by an Arabic-derived name.
It came back into vogue in the nineteenth century, during the period of eclecticism and rediscovery of historical styles in Gothic Revival and Romanesque Revival architecture.
Gallery
See also
Monofora
Trifora
Quadrifora
Polifora
References
Architectural elements
Windows | Bifora (architecture) | [
"Technology",
"Engineering"
] | 246 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
61,348,137 | https://en.wikipedia.org/wiki/Gamma-ray%20burst%20precursor | A gamma-ray precursor is a short X-ray outburst event that comes before the main outburst of the gamma-ray burst progenitor. There is no consensus on the mechanism for this event, although several theories have been suggested.
History
The first gamma-ray precursor event was detected from GRB 900126, a long GRB. Because of the thermal nature of the emission, it was immediately recognized that the mechanism of emission for this event was likely internal to the neutron star and not from the accretion disk. Systematic surveys were subsequently carried out to find the percentage of gamma-ray bursts that contained precursor events. Although it was found that 3% of bursts in the BATSE catalogue had a precursor event, a later review found that 20% of long GRBs have a precursor event, although slightly different search criteria were used in that review. That percentage was also found in another study, while others have found percentages hovering around 10%.
Properties
The precursor event occurs in a wide range of time frames before the main burst. This time can range up to hundreds of seconds. The precursors typically, but not always, show a non-thermal spectrum. Notably, the first gamma-ray precursor to be detected showed a thermal spectrum, with a peak in the X-ray wavelengths. There is no set definition of a precursor. Some allow a broad definition, where the precursor is merely a less-energetic event that happens before the main burst, while some impose additional restrictions, such as the precursor having a longer duration than the actual burst. This is the main reason varying percentages of precursors in samples have been found.
Model
No consensus model exists for gamma-ray burst precursors. According to the collapsar model, a long GRB results from the collision of jets with the material surrounding a collapsed star. In this model, the precursor could be generated as the jet becomes optically thin. Under this theory, it is difficult to explain the large time gap (hundreds of seconds in some cases) between the precursor and the gamma-ray burst. Various mechanisms by which precursors would be completely separate phenomena from the main GRB event have also been proposed. In one such scenario, the precursor arises from the formation of a weak jet during the collapse of the progenitor. This theory explains the time gap between the precursor and burst, although no experimental evidence has differentiated it from others.
References
Gamma-ray bursts
Gamma-ray astronomy | Gamma-ray burst precursor | [
"Physics",
"Astronomy"
] | 500 | [
"Gamma-ray astronomy",
"Physical phenomena",
"Astronomical events",
"Gamma-ray bursts",
"Stellar phenomena",
"Astronomical sub-disciplines"
] |
61,349,287 | https://en.wikipedia.org/wiki/Limepit | A limepit is either a place where limestone is quarried, or a man-made pit used to burn limestone, in the same way that modern above-ground kilns and furnaces constructed of brick are now used for the calcination of limestone (calcium carbonate, CaCO3), by which quicklime (calcium oxide, CaO) is produced, an essential component in waterproofing and in wall plastering (plaster skim).
Primitive limepits
The production of lime in the Land of Israel has been dated as far back as the Canaanite period, and has continued in successive generations ever since. The man-made limepit was usually dug in ground near the place where limestone could be quarried. Remnants of old limepits have been unearthed in archaeological digs all throughout the Levant. In a country where hundreds of such limepits or limekilns for burning limestone were found, the Israel Antiquities Authority (IAA) describes dozens of them: one discovered in Kiryat Ye'arim, another in Har Giora - East (2 km. north of Bar-Giora), as well as in Neve Yaakov, among other places. Two lime kilns, stratigraphically dated to the late Hellenistic period, were excavated at Ramat Rachel, the latter being circular in shape (3.6 metres in diameter) and built into the ruins of a large pool, using earlier walls. A rounded kiln (2.5–2.8 metres in diameter) was found northeast of Jerusalem dating back to the Iron Age (seventh–sixth century BCE); it was built of stones and had a rectangular unit adjacent to it. In the Lachish area, several lime kilns were excavated by a team on behalf of the IAA; these kilns were partially hewn in the bedrock and partially built of fieldstones, and were last used at some point between the mid-15th century and the mid-17th century CE.
In Bedouin-Arab culture in Israel, the limepit was dug to a depth of about and about in diameter.
By all appearances, the pit was built on the same basic principle as a "Dakota fire pit," which has an air inlet at the base to allow ventilation, but on a larger scale. Air intake was achieved by digging an adjacent channel that ran a short distance into the limepit, or else one or more underground shafts at the floor level of the limepit leading from an open area, allowing a steady, free-flowing draught of air to be drawn into the limepit as the fire burned. In this way, there was no need for bellows to reach a high temperature; it was enough to stoke the fire with wood continuously for several days until it reached a temperature of 900 °C (about 1650 °F). Its mode of operation was similar to that of a shaft kiln. After cooling, the wood ashes that had accumulated were separated from the burnt blocks of limestone. The limestone blocks were then crushed and slaked (the process of adding water and constantly turning the lime to create a chemical reaction, whereby the burnt lime, also known as calcium oxide, is changed into calcium hydroxide), and mixed with an aggregate to form an adhesive paste (plaster) used in construction and for daubing buildings.
When properly burnt, limestone loses its carbonic acid (carbon dioxide, CO2) and is converted into caustic lime or quicklime (CaO). One hundred parts of raw limestone yields about 56 parts of quicklime. In the West, quicklime was formerly a major component in common mortar, besides its predominant use in plastering. In some Middle Eastern countries where rainfall was scarce in the dry season, lime production for use in plastering home-made cisterns (made impermeable by the addition of a pozzolanic agent) was especially important. This enabled people to collect the winter run-off of rain water and store it for later use, whether for personal or agricultural needs. Lime is also an important component in the production of Nabulsi soap, in dyeing fabrics, and as a depilatory.
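The stated yield, about 56 parts of quicklime per 100 parts of limestone, follows directly from the molar masses in the calcination reaction CaCO3 → CaO + CO2. A quick arithmetic check, assuming chemically pure limestone:

```python
# Calcination: CaCO3 -> CaO + CO2
# Standard atomic weights (g/mol)
Ca, C, O = 40.078, 12.011, 15.999

m_caco3 = Ca + C + 3 * O   # ~100.09 g/mol, limestone
m_cao = Ca + O             # ~56.08 g/mol, quicklime

# Parts of quicklime obtained from 100 parts of pure limestone
yield_parts = 100 * m_cao / m_caco3
print(round(yield_parts, 1))  # prints 56.0
```

The agreement with the traditional figure reflects the near-coincidence of the molar masses of CaCO3 and CaO with 100 and 56.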
Basic design
Many limepits were sunk into the ground, circular in form, at a depth of between 2.5 and 5 meters and 3 to 4.5 meters in diameter, and some were built with a retaining wall along the inside for support, usually constructed of uncut field-stones. Simpler limepits were made without supportive walls. In the following account, Abu-Rabiʻa describes the practice of Bedouins in the Negev during the late 19th and early 20th century:
Lime is derived from chalk by burning. The Bedouins used it in plastering their cisterns. Burning chalk stone was performed in simple kilns in close proximity to where the chalk was found. Lime kilns were made by digging a round hole, three metres wide, two and a half metres deep. After the hole was dug, the chalk and fuel for a fire would be brought to it. Stones of chalk (limestone) would be arranged in a circular dome in the pit. The burning process would last three to six days, without letup. After the burning was finished, the kiln would be left to cool for four to six days. The lime would then be taken out. The large lime blocks along the edge of the pit were considered of the highest quality, while the small pieces towards the center of the pit were considered grade B. One camel load, or cantur (qentar / quntar = 100 ratels, or 250–300 kilograms), of lime would fetch 40 grush on the Jerusalem market in the early 1880s.
In Israel, the principal fuel used to keep the lime-kiln burning was the dried brushwood of prickly burnet (Sarcopoterium spinosum) and savory (Satureja thymbra), where often camel loads of this dried wood would be hauled to the lime-kiln. Monolithic stone structures were already in use for burning limestone (nāri) during the Ottoman period, throughout the Levant. Modern kilns for burning lime first appeared in Palestine during the British Mandate.
Chemical changes
The limestones selected were those with the fewest impurities. Limepits were almost always built near the supply of limestone, and a sufficient pile of kindling wood was heaped up before the actual burning began, a supply that had to last between 3 and 7 days of continual burning, by night and by day. In the southern Mediterranean regions, one of the favorite wood sources was thorny burnet (Sarcopoterium spinosum). The fire was attended by men with long staves and pitchforks who pushed the burning material into the pit. Initially, a cloud of smoke billowed from the pit. After several days of burning, when the uppermost stone in the fire pit began to glow a fiery red, it signaled that the burning of the lime was finished, that the release of carbon dioxide from the limestone had been completed, and that the lime was now ready for marketing as lime or powder. After being allowed to cool, the burnt limestone, now light and brittle, was extracted from the pit. During the burning process, the limestone loses about 50% of its original weight. The lime becomes ready for use only after water has been added.
Gallery
See also
Lime kiln
Lime plaster
Qadad (Method of waterproofing cisterns in South Arabia)
References
Bibliography
Eliyahu-Behar, A.; Yahalom-Mack, N.; Ben-Shlomo, D. (2017). "Excavation and Analysis of an Early Iron Age Lime Kiln", Israel Exploration Journal 67, pp. 14–31
Kilns
History of chemistry
Soil-based building materials
Lime kilns
Construction in Asia
Fireplaces
Building materials
Plastering
Firing techniques | Limepit | [
"Physics",
"Chemistry",
"Engineering"
] | 1,672 | [
"Building engineering",
"Chemical equipment",
"Coatings",
"Architecture",
"Construction",
"Lime kilns",
"Materials",
"Kilns",
"Plastering",
"Matter",
"Building materials"
] |
70,752,698 | https://en.wikipedia.org/wiki/Stefan%20H%C3%BCfner | Stefan Hüfner (July 2, 1935 in Löwenberg, Silesia – January 17, 2013 in Saarbrücken, Saarland) was a German experimental physicist specialized in solid-state physics and photoemission spectroscopy.
Education and career
Hüfner studied mathematics and physics at the Goethe University of Frankfurt and the Technical University of Darmstadt. After graduating, he was from 1960 to 1966 a scientific assistant at the Institute for Technical Physics at the TU Darmstadt. In 1963 he received his doctorate there, supervised by Karl-Heinz Hellwege. In 1966 he obtained his habilitation in physics. He was a guest researcher at the Technical University of Munich and at the Bell Telephone Laboratories in Murray Hill, N.J., USA. From 1967 to 1968 he was a privatdozent at the TU Darmstadt and the doctoral supervisor of Peter Grünberg, who was awarded the Nobel Prize in Physics in 2007.
In 1968 he received a call to the professorship for experimental physics at the Free University of Berlin as the successor to Professor Gerhard Simonsohn. In 1975, Hüfner moved to the professorship for experimental physics at Saarland University. In 1994 he became founding speaker of the Collaborative Research Center "Interface-determined Materials". In 2001 he took over the office of university vice president for planning and strategy, which he held until the beginning of 2003. In September 2003 he retired.
Honors and awards
Hüfner was an emeritus member of the advisory board of the Max Planck Institute for Nuclear Physics in Heidelberg, the Max Planck Institute for Physics in Munich, the Max Planck Institute for Plasma Physics in Greifswald and Munich and the Max Planck Institute for Quantum Optics in Munich and other advisory boards of the Max Planck Society. Since 2004 he has been a member and chairman of the Technical Committee for Engineering Sciences of the Elite Network of Bavaria. In 2006/2007 he was a visiting professor at the University of British Columbia in Vancouver, Canada.
He received honorary doctorates from the University of Fribourg and the Free University of Berlin.
Works
Hüfner authored the classic textbook on photoemission spectroscopy, first published in 1995, which has gone through three editions in total.
In addition to numerous scientific publications, Hüfner also wrote several novels, including Der Tote von Dresden (Conte Verlag, 2004) and Artikel eins. Ein Zukunftsroman (Conte Verlag, 2006).
Bibliography
Textbooks and monographs
Fictions
Reviews
See also
Jürgen Kirschner
References
External links
Universität des Saarlandes: „Neuer Vizepräsident: Stefan Hüfner“, Januar 2001
2013 deaths
1935 births
Academic staff of Saarland University
Academic staff of the Free University of Berlin
20th-century German physicists
German materials scientists
German experimental physicists
Condensed matter physicists
Scientists at Bell Labs
Goethe University Frankfurt alumni
Technische Universität Darmstadt alumni
Academic staff of Technische Universität Darmstadt
People from Lwówek Śląski | Stefan Hüfner | [
"Physics",
"Materials_science"
] | 610 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
70,768,110 | https://en.wikipedia.org/wiki/PSR%20J0523%E2%88%927125 | PSR J0523−7125 is a pulsar that, due to its size and brightness, was initially believed to be a distant galaxy. It is located about 160,000 light-years away in the southern constellation of Dorado, near the center of the Large Magellanic Cloud. Investigation via the Australian Square Kilometre Array Pathfinder showed the pulsar to have a high circular polarization with a steep spectrum. Its rotation measure is twice as large as that of any other pulsar found in the Large Magellanic Cloud, and it is also one of the most luminous pulsars ever found.
References
Stars in the Large Magellanic Cloud
Dorado
Pulsars | PSR J0523−7125 | [
"Astronomy"
] | 156 | [
"Dorado",
"Constellations"
] |
70,768,864 | https://en.wikipedia.org/wiki/Mathematics%20of%20the%20Incas | The mathematics of the Incas (or of the Tawantinsuyu) was the set of numerical and geometric knowledge and instruments developed and used in the nation of the Incas before the arrival of the Spaniards. It can be characterized mainly by its usefulness in the economic field. The quipus and yupanas are proof of the importance of arithmetic in Inca state administration. This was embodied in a simple but effective arithmetic for accounting purposes, based on the decimal numeral system; the Incas also had a concept of zero and mastered addition, subtraction, multiplication, and division. Inca mathematics had an eminently practical character, applied to tasks of management, statistics, and measurement, far removed from the Euclidean conception of mathematics as a deductive corpus; it was suited and useful to the needs of a centralized administration.
On the other hand, the construction of roads, canals and monuments, as well as the layout of cities and fortresses, required the development of practical geometry, indispensable for the measurement of lengths and surfaces and for architectural design. At the same time, they developed important measurement systems for length and volume, which took parts of the human body as reference. In addition, they used suitable objects or actions that allowed results to be assessed in other ways that were nonetheless relevant and effective.
Inca numeral system
The prevailing numeral system was base-ten. One of the main references confirming this are the chronicles, which present a hierarchy of organized authorities using the decimal numeral system together with its counting device, the quipu.
The use of the decimal system among the Incas is also confirmed by the interpretation of the quipus, which are organized in such a way that the knots, according to their location, can represent units, tens, hundreds, and so on.
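The positional decimal encoding described above can be sketched in a few lines of Python; the list representation of knot counts is an illustrative assumption for the sake of the example, not a transcription of any actual quipu convention:

```python
def quipu_value(knots):
    """Decode a cord whose knot counts are listed from the units
    position upward, e.g. [3, 0, 2] -> 203.
    An empty position (0 knots) plays the role of zero."""
    total = 0
    for power, count in enumerate(knots):
        total += count * 10 ** power
    return total

print(quipu_value([3, 0, 2]))  # prints 203
print(quipu_value([0, 5]))     # prints 50
```

The empty middle position in the first example illustrates how an absent knot cluster can stand for zero in a positional system.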
However, the main confirmation of the use of this system is expressed in the denomination of the numbers in Quechua, in which the numbers are developed in decimal form. This can be appreciated in the following table:
Accounting systems
Quipus
The quipus constituted a mnemonic system based on knotted strings used to record all kinds of quantitative or qualitative information; in the case of mathematical results, the operations themselves had first been carried out on the "Inca abacuses", or yupanas. Although one of its functions is related to mathematics, as it was an instrument capable of accounting, it was also used to store information related to censuses, product quantities, and food kept in state warehouses. Quipus are even mentioned as instruments the Incas used to record their traditions and history in a way different from writing.
Several chroniclers also mention the use of quipus to store historical news. However, it has not yet been discovered how this system worked. In the Tahuantinsuyo, it was specialized personnel who handled the strings. They were known as quipucamayoc and they could be in charge of the strings of an entire region or suyu. Although the tradition is being lost, the quipus continue to be used as mnemonic instruments in some indigenous villages where they are used to record the product of the crops and the animals of the communities.
According to the Jesuit chronicler Bernabé Cobo, the Incas assigned the tasks related to accounting to certain specialists. These specialists were called quipo camayos, in whom the Incas placed all their trust. In his study of the quipu sample VA 42527 (Museum für Völkerkunde, Berlin), Sáez-Rodríguez noted that, in order to close the accounting books of the chacras, certain numbers were ordered according to their value in the agricultural calendar, a task for which the khipukamayuq, the accountant entrusted with the granary, was directly responsible.
Yupanas
In the case of numerical information, the mathematical operations were first carried out on the abacuses, or yupanas. These could be made of carved stone or clay and had boxes or compartments corresponding to the decimal units, which were counted or marked with the help of small stones or grains of corn or quinoa. Units, tens, hundreds, and so on could be indicated as each operation required.
Recent research on the yupanas suggests that they made it possible to calculate considerably large numbers using a probably non-decimal system based on the number 40. If true, it is curious to note the coincidence between the geometric progression achieved in the yupana and current processing systems; on the other hand, it would be contradictory for the Incas to have based their accounting system on the number 40. If the investigations continue and this fact is confirmed, it would be necessary to compare its use with the decimal system, which according to historical tradition and previous investigations was the one used by the Incas.
In October 2010, Peruvian researcher Andrés Chirinos, with the support of the Spanish Agency for International Development Cooperation (Agencia Española de Cooperación Internacional para el Desarrollo, AECID), reviewed drawings and ancient descriptions by the indigenous chronicler Guaman Poma de Ayala and finally deciphered the riddle of the yupana, which he calls a "pre-Hispanic calculator" capable of adding, subtracting, multiplying, and dividing. This made him hopeful of finally discovering how the quipus worked as well.
Units of measurement
There were different units of measurement for magnitudes such as length and volume in pre-Hispanic times. The Andean peoples, as in many other places in the world, took parts of the human body as a reference to establish their units of measurement. There was not a single system of units of obligatory and uniform use throughout the Andean world. Many documents and chronicles have recorded different systems of local origin that remained in use until the 16th century.
Length
Among the units of length measurement, there was the rikra (fathom), which is the distance measured between a man's thumbs with arms extended horizontally. The kukuchu tupu (kukush tupu) was equivalent to the Spanish codo (cubit) and was the distance measured from the elbow to the end of the fingers of the hand. There was also the capa (span), and the smallest was the yuku or jeme, the length between the index finger and the thumb when stretched as far apart as possible. The distance between two villages would have been evaluated by the number of chasquis required to carry an errand from one village to the other. They would have used direct proportionality between the circumference of a sheepfold and the number of chacra partitions.
Surface
The tupu was the unit of measurement of surface area. In general terms it was defined as the plot of land required for the maintenance of a married couple without children. Every hatun runa or "common man" received a plot of land upon marriage and its production had to satisfy the basic needs of food and trade of the spouses. It did not correspond to an exact measurement, since its dimensions varied according to the conditions of each land and from one ethnic group to another. The quality of the soil was taken into consideration and the necessary rest time was calculated accordingly, which had to be considered after a certain number of agricultural campaigns. After that time, the couple could claim a new tupu from their curaca.
Capacity
Among the units of measurement of capacity was the pokcha, equivalent to half a fanega, or 27.7 liters. Some crops, such as corn, were measured in containers; liquids were measured in a variety of pitchers (cántaros) and jars (tinajas). There were also straw or reed boxes in which objects were kept; these were used in warehouses to store delicate or exquisite products, such as dried fruits. Coca leaves were measured in runcu, or large baskets; other baskets were known as ysanga. Among these measures of capacity is the poctoy or purash (almozada), equivalent to the portion of grains or flour that can be held in the hollow formed by the two hands cupped together. The ancient inhabitants of the Andes also knew pan scales and net scales, as well as the huipe, an instrument similar to a steelyard. Apparently, its presence is associated with jewelry-making and metallurgy, trades in which exact weights are needed to use the right proportions of alloys.
Volume
The Incas measured volume especially for their colcas (trojas) and their tambos (state warehouses located at key points of the Qhapaq Ñan). They used the runqu (rongos: bales), portable containers or ishanka (baskets), or the capacity of a chacra. They would have handled the proportionality of the volumes of prisms with respect to their heights, without varying the bases.
Time
To measure time, they used the day (workday), which could comprise a morning or even just an afternoon. Time also served, indirectly, to gauge the distance between two cities; for example, 20 days from Cajamarca to Cusco was the accepted measure.
Months, years, and the phases of the moon, much consulted for sowing, hilling (aporques), and harvest tasks and in navigation, were also measured in days.
See also
Inca Empire
History of the Incas
History of Peru
Mathematics
Notes
References
Bibliography
Inca Empire
Inca mathematics
Inca culture
Pre-Columbian cultures
Numeral systems | Mathematics of the Incas | [
"Mathematics"
] | 1,960 | [
"Numeral systems",
"Mathematical objects",
"Numbers"
] |
76,602,480 | https://en.wikipedia.org/wiki/9-Oxodecenoic%20acid | 9-Oxodecenoic acid (9-oxo-2(E)-decenoic acid, also called 9-ODA) is an unsaturated ketocarboxylic or fatty acid and a pheromone secreted by the queen bee of the honeybee species Apis mellifera. It functions as a sex attractant that stimulates the olfactory receptors of male drones. Additionally, this acid plays a crucial role in regulating the colony's social structure; it inhibits the development of ovaries in worker bees, which are sterile females. However, its inhibitory effect on the worker bees' ovaries is only fully effective when combined with another pheromone, 9-hydroxydecenoic acid. When the queen bee is removed from the hive, the worker bees initiate the construction of new queen cells, and the previously inhibited worker bees develop functional ovaries. The exact biological mechanisms through which 9-oxodecenoic acid and related substances influence these processes are not fully understood, but they are thought to affect the nervous system in some way.
Synthesis
9-Oxodecenoic acid can be synthesized starting from azelaic acid.
An efficient synthesis is possible starting from diethyl-3-oxoglutarate. This is alkylated by 6-bromo-1-hexene and magnesium ethanolate, then decarboxylated. The double bond is oxidized to the aldehyde with osmium tetroxide and sodium periodate in aqueous tert-butanol. The acid group is introduced by condensation with malonic acid.
References
Pheromones
Alkenoic acids
Ketones | 9-Oxodecenoic acid | [
"Chemistry"
] | 350 | [
"Ketones",
"Pheromones",
"Chemical ecology",
"Functional groups"
] |
76,608,109 | https://en.wikipedia.org/wiki/Boronization | Boronization is a wall conditioning technique for fusion machines (such as tokamaks), where a thin film of boron is deposited on the walls of the vacuum vessel in order to reduce the impurity content (for example oxygen) which can be deleterious for fusion plasma operation.
This technique can be seen as a plasma-assisted chemical vapor deposition of boron. The typical workflow involves performing a glow discharge and injecting a gas containing boron into the vacuum vessel chamber.
Boronization as a wall conditioning technique was first developed for the TEXTOR tokamak at the Forschungszentrum Jülich. It is now a well-established technique and has been successfully applied on many machines, examples include DIII-D and ASDEX.
Real-time boron powder injection is an advanced technique that offers several advantages over traditional boronization. This method involves injecting submillimeter boron powder directly into the plasma during operation, where it evaporates and deposits a thin boron layer on plasma-facing surfaces. Unlike earlier approaches, it avoids the use of toxic diborane gas and allows continuous conditioning without interrupting plasma operations. This approach is particularly valuable in long-pulse or steady-state devices, where traditional coatings may degrade quickly, helping to maintain wall integrity and limit impurities entering the plasma. It has been studied in many devices like ASDEX Upgrade and DIII-D and is now also being considered for ITER.
See also
Glow discharge
Sputtering
Plasma surface interaction
References
Electrical discharge in gases
Fusion power | Boronization | [
"Physics",
"Chemistry"
] | 321 | [
"Matter",
"Physical phenomena",
"Electrical discharge in gases",
"Plasma physics",
"Plasma phenomena",
"Fusion power",
"Nuclear fusion",
"Ions"
] |
76,610,962 | https://en.wikipedia.org/wiki/Ferroelectric%20tunnel%20junction | A ferroelectric tunnel junction (FTJ) is a form of tunnel junction consisting of a ferroelectric dielectric sandwiched between two electrically conducting materials. Electrons do not pass directly through the junction; instead they cross the barrier via quantum tunnelling. The structure is similar to a ferroelectric capacitor, but the ferroelectric layer is fabricated thin enough to enable a significant tunneling current. The magnitude of the tunneling current is switched by the ferroelectric polarization and is characterized by the tunneling electroresistance (TER).
Two conditions must be met to manufacture a reliable FTJ: the ferroelectric layer must be at most 3 nm thick to allow electron tunneling (see the Tunneling section), and the interfaces on both sides must be energetically asymmetrical to obtain two separate potential barrier heights.
Description
Ferroelectric tunnel junctions are being developed as a memristive component for the semiconductor industry. As of early 2024, FTJ-based technologies are not commercially available. To enable sufficient tunneling probability, the ferroelectric layer must be thin enough (on the nanometer scale), ruling out many conventional ferroelectric materials. Ferroelectricity as a phenomenon was long thought to disappear at the thicknesses required for tunneling, which hindered research on the topic until the 2000s. Since then, significant ferroelectricity has been demonstrated in thin films, and FTJs have been shown to follow the proposed working principle.
While most ferroelectric materials require high fabrication temperatures, polycrystalline thin film hafnium oxide has been shown to be ferroelectric even with back-end complementary metal oxide semiconductor (CMOS) compatible fabrication temperatures, rendering FTJs especially interesting for the semiconductor industry.
The hafnium oxide is deposited using atomic layer deposition (ALD) to enable precise growth to form thin enough layers. FTJs have gained significant interest due to the memristive properties as well as CMOS compatible operating voltages and fabrication methods.
In addition to ferroelectric tunnel junctions, there are other ferroelectric devices, including ferroelectric capacitors (FeCAP), ferroelectric field-effect transistors (FeFET), ferroelectric random-access memory (FeRAM) and multiferroic tunnel junctions (MFTJ), which are ferroelectric tunnel junction with ferromagnetic materials as the two electrodes.
Basic operating principle
Ferroelectric tunnel junctions are devices where the current through the device can be controlled by the voltage driven across the device. These memristive components use ferroelectric behavior to change the tunneling probability through the device.
Ferroelectricity
In a simple explanation of ferroelectricity, the electric dipole moments of the crystalline unit cells point first in random directions. As a voltage is driven across the material, these dipole moments rotate to align with the electric field induced by the voltage difference. Once the voltage is lowered back to zero, the dipole moments remain aligned with the previous field. The sum of individual dipole moments form the polarization of the material. In non-ferroelectric materials the polarization relaxes back to zero once the voltage is brought down; in ferroelectric materials the polarization remains. When a voltage of the opposite sign is driven through the same piece of ferroelectric material, the polarization switches to point in the opposite direction. Again, the polarization remains even after the field is reduced to zero. This results in a hysteresis effect seen in the polarization-electric field (PE) curve.
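The switching and remanence described above can be sketched with a minimal Landau-type toy model (an illustration added here, not taken from the article; the quartic free energy and all numbers are modelling assumptions). The free energy F(P) = -P²/2 + P⁴/4 - EP has two wells whose relative depth is tilted by the field E, and overdamped relaxation dP/dt = -dF/dP leaves the polarization in whichever well the field history selected:

```python
def relax(P, E, dt=0.01, steps=2000):
    """Overdamped relaxation dP/dt = -dF/dP = P - P**3 + E (explicit Euler)."""
    for _ in range(steps):
        P += dt * (P - P**3 + E)
    return P

P = 0.01                  # start near the unpolarized state
P = relax(P, E=1.0)       # strong positive field drives P into the + well
P_up = relax(P, E=0.0)    # field removed: remanent polarization ~ +1
P = relax(P_up, E=-1.0)   # opposite field switches the polarization
P_down = relax(P, E=0.0)  # field removed again: remanent polarization ~ -1

print(round(P_up, 3), round(P_down, 3))
```

At E = 0 the model retains P ≈ +1 or P ≈ -1 depending on the sign of the previously applied field, reproducing the hysteresis loop qualitatively.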
Switching the ferroelectric polarization of the material affects the height of the potential barrier in the device. The potential barrier influences the tunneling probability and thus the current measured, which can be utilized as voltage-controlled memory.
Tunneling
As the name ferroelectric tunnel junction suggests, the devices operate based on quantum tunneling through a barrier. As electrons tunnel through the barrier, the resulting movement can be measured as current. The amplitude of the current is determined by the tunneling probability.
On the interface of the insulating potential barrier, when the energy of the incident wave is lower than the barrier energy, the wavefunction decays exponentially into the insulator. Depending on the ratio of the barrier thickness to the decay length in the material, there is a finite probability of tunneling through the barrier, represented (in the WKB approximation) by the transmission coefficient
$$T \approx \exp\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\left(V(x)-E\right)}\,\mathrm{d}x\right),$$
where $x_1$ and $x_2$ are the edges of the potential barrier, $V(x)$ is the height of the potential barrier at point $x$, $E$ is the energy of the electron, $m$ is the mass of the electron, and $\hbar$ is the reduced Planck constant.
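For a rectangular barrier the integral above can be evaluated numerically. The sketch below is an added illustration; the 1 eV barrier height and the film thicknesses are assumed values, not from the article. It shows the exponential sensitivity that restricts the ferroelectric layer to a few nanometres:

```python
# WKB transmission T = exp(-2/hbar * integral of sqrt(2m(V(x)-E)) dx)
# for a rectangular barrier, with illustrative (assumed) parameters.
import math

HBAR = 1.0546e-34   # reduced Planck constant (J*s)
M_E = 9.109e-31     # electron mass (kg)
EV = 1.602e-19      # joules per electron volt

def wkb_transmission(barrier_eV, energy_eV, thickness_nm, n=1000):
    """Numerically integrate the WKB exponent across the barrier."""
    dx = thickness_nm * 1e-9 / n
    # V(x) is constant here (rectangular barrier); the sum mirrors the integral
    integral = sum(math.sqrt(2 * M_E * (barrier_eV - energy_eV) * EV) * dx
                   for _ in range(n))
    return math.exp(-2 * integral / HBAR)

t_1nm = wkb_transmission(1.0, 0.0, 1.0)
t_3nm = wkb_transmission(1.0, 0.0, 3.0)
print(f"T(1 nm) = {t_1nm:.1e}, T(3 nm) = {t_3nm:.1e}")
```

Tripling the thickness suppresses the tunneling probability by roughly nine orders of magnitude for these parameters, which is why FTJs require films of at most a few nanometres.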
In addition to direct tunneling, Fowler–Nordheim tunneling and thermionic emission contribute significantly to the total current at different operating voltages.
Current state of research and development
As of now, FTJs are CMOS back-end compatible, whereas front-end compatibility is still under development. Nevertheless, back-end compatibility allows the integration of FTJs into current silicon semiconductor technology with relatively small investments in new fabrication infrastructure. As computing shifts increasingly from logic-centric to memory-centric architectures with the emergence of machine learning and artificial intelligence, research and development of power-efficient, fast, and reliable CMOS-compatible non-volatile memory is highly relevant.
Due to the non-destructive readout of the non-volatile memory implemented with FTJs, the components have gained interest in the field of neuromorphic computing. In addition, FTJs exhibit behavior such as accumulative switching, which is promising in hardware implementations of spiking neural networks.
The existence of interfacial layers between the metal and the ferroelectric material, also known as dead layers, causes changes in device characteristics that degrade the functionality of the device.
Other tunnel junctions
In addition to ferroelectric tunnel junctions, other more established and emerging devices based on the same principles exist. These include:
Magnetic tunnel junction: the electrons tunnel from one magnetic material to another via a thin insulating barrier.
Multijunction photovoltaic cell
Tunnel diode
Superconducting tunnel junction
A scanning tunneling microscope (STM) tip/air/substrate structure can also be viewed as a tunnel junction. Some research has been done with STM tips concerning ferroelectricity, for example in controlling domain switching with an STM tip. This is not a ferroelectric tunnel junction, however, since the ferroelectric material does not function as the potential barrier.
References
Ferroelectric materials | Ferroelectric tunnel junction | [
"Physics",
"Materials_science"
] | 1,355 | [
"Physical phenomena",
"Ferroelectric materials",
"Materials",
"Electrical phenomena",
"Hysteresis",
"Matter"
] |
75,036,186 | https://en.wikipedia.org/wiki/Weather%20of%202005 | The following is a list of weather events that occurred on Earth in the year 2005. The year began with a weak El Niño, although this would fade into a neutral phase later in the year. The most common weather events to have a significant impact are blizzards, cold waves, droughts, heat waves, wildfires, floods, tornadoes, and tropical cyclones.
Overview
Deadliest events
Types
The following listed different types of special weather conditions worldwide.
Cold snaps and winter storms
Floods
Heat waves and droughts
Tornadoes
Tropical cyclones
When the year began, a tropical low was active near the northwest coast of Australia, which soon became the first named storm of the year – Tropical Cyclone Raymond, which soon moved ashore the Kimberley region. Throughout the year, there were a total of nine named storms in the Australian basin. The strongest and most notable of these was powerful Cyclone Ingrid, which made landfalls in Queensland, Northern Territory, and Western Australia, the only cyclone on record to strike all three regions as a severe tropical cyclone. Two Australian storms entered the South-West Indian Ocean, where an additional six named storms developed. Also in the southern hemisphere, the South Pacific was active with eight named storms, including a succession of four cyclones that struck the Cook Islands – Meena, Nancy, Olaf, and Percy. The four cyclones' monetary damage totaled over US$25 million, equivalent to 14% of the country's gross domestic product (GDP).
The two deadliest tropical cyclones of the year were a part of the record-breaking 2005 Atlantic hurricane season. In October, Hurricane Stan and a broader weather system produced severe flooding across eastern Mexico and Central America, killing 1,668 people, with Guatemala hit the hardest. In late August, Hurricane Katrina became the costliest U.S. hurricane, leaving $125 billion in damage and 1,392 deaths. The strongest tropical cyclone of the year was Hurricane Wilma, which in October became the most intense Atlantic hurricane ever recorded, with a barometric pressure of 882 mbar (hPa). Wilma was one of four Category 5 hurricanes – the strongest ranking on the Saffir-Simpson scale – in the hyperactive season, along with Emily, Katrina, and Rita. The 2005 Atlantic hurricane season was the most active season on record, with 28 named storms in the Atlantic, including an unnamed subtropical storm, as well as Zeta, which developed in December and continued into early January 2006.
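For reference, the Saffir-Simpson scale mentioned above ranks hurricanes 1–5 by maximum sustained wind. The helper below is an added illustration (not from the article) using the US National Hurricane Center's thresholds in mph; "major" hurricanes are Category 3 and above:

```python
def saffir_simpson_category(wind_mph):
    """Saffir-Simpson category from sustained winds in mph (0 = below Cat 1)."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for floor, category in thresholds:
        if wind_mph >= floor:
            return category
    return 0

print(saffir_simpson_category(175))  # Category 5
print(saffir_simpson_category(74))   # minimal Category 1 hurricane
```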
Also in the northern hemisphere, there were 23 named storms in the western Pacific Ocean, including 13 typhoons, of which Haitang was the strongest. In the eastern Pacific, there were 15 named storms, of which Kenneth was the strongest and longest-lived. In the North Indian Ocean, there were four named storms, although none of them intensified beyond a cyclonic storm, or roughly a weak tropical storm.
Wildfires
Extratropical cyclones and other weather systems
Timeline
This is a timeline of deadly weather events during 2005.
January
January 16–25 – Cyclone Ernest struck southern Madagascar after previously moving around the northern and western portions of the country, killing 78 people.
February
March
March 1–August 31 – A drought across the American Midwest caused US$2.4 billion worth of crop damage.
March 2–15 – Heavy rains in Madagascar left 8,000 people homeless and caused 25 fatalities.
March 4–16 – Cyclone Ingrid became the first ever severe tropical cyclone to make landfalls in the Australian subdivisions of Queensland, Northern Territory, and Western Australia. In its formative stages, high waves from the cyclone killed five people when a boat capsized off Papua New Guinea.
March 10 – Inclement weather caused boat accidents that killed 29 people.
March 12–19 – Tropical Storm Roke, known locally as Auring, moved through the central Philippines, killing 18 people.
March 24–26 – Floods in the Malagasy province of Anosy killed four people.
April
May
May 12 – Floods caused a fatality in the Autonomous Region in Muslim Mindanao in the southern Philippines.
May 17–21 – Tropical Depression Adrian struck the Pacific coast of Honduras after weakening from hurricane intensity, killing five people across Central America.
June
June 8–13 – Tropical Storm Arlene struck the Florida panhandle, causing one drowning death.
June 27–July 5 – A land depression moved across India, producing flooding across Madhya Pradesh that killed 26 people.
June 28–30 – Tropical Storm Bret struck the Mexican state of Veracruz, killing two people.
July
July 3–7 – Hurricane Cindy killed three people as it moved through the southeast United States. Cindy produced an outbreak of 33 tornadoes, with one causing $40 million in damage to the Atlanta Motor Speedway.
July 4–13 – Hurricane Dennis moved through the Caribbean and Gulf of Mexico, striking Cuba and later the Florida panhandle. On July 8, Dennis became the strongest Atlantic hurricane before the month of August. The hurricane killed 88 people and left US$4.06 billion in damage.
July 10–20 – Typhoon Haitang hit Taiwan, killing 15 people, and it later hit Zhejiang in mainland China, killing another three people.
July 11–21 – Hurricane Emily moved through the Caribbean, striking Grenada and two locations in Mexico – along the Yucatán Peninsula and in Tamaulipas. Emily caused 17 fatalities and about US$1 billion in damage. On July 16, Emily broke the record for the strongest Atlantic hurricane before the month of August, set by Dennis eight days earlier.
July 18–20 – Tropical Storm Eugene brushed the southwest coast of Mexico, causing one death when a boat overturned.
July 23–25 – Tropical Storm Gert hit the Mexican state of Tamaulipas, killing one person.
July 29–31 – A depression moved ashore Bangladesh, with its heavy rains causing a fatality when a wall collapsed.
July 29–August 7 – Typhoon Matsa moved ashore southern Zhejiang in mainland China, killing 25 people.
August
August 2–11 – In South Korea, landslides from heavy rain killed 15 people.
August 4–18 – Hurricane Irene caused a rip current death as it moved offshore the eastern United States.
August 13–16 – A storm in Vietnam killed 13 people.
August 17–27 – Typhoon Mawar brushed eastern Japan, causing one death.
August 22–23 – Tropical Storm Jose hit the Mexican state of Veracruz, killing 16 people.
August 23–30 – Hurricane Katrina became the costliest American hurricane when it struck Florida, Louisiana, and Mississippi, with damage estimated at US$125 billion. Katrina was the deadliest Atlantic hurricane since 1928, with a death toll of 1,392 people, a toll more recently surpassed by Hurricane Maria in 2017. Katrina left large portions of the New Orleans area underwater after storm surge breached the levees. The hurricane's widespread effects resulted in the greatest number of displaced people in the country since the Dust Bowl.
August 24–September 1 – Typhoon Talim struck Taiwan, killing five. It later hit Fujian in mainland China, where the typhoon killed 167 people.
August 29–September 8 – Typhoon Nabi moved from the Northern Marianas Islands to Japan, killing 35 people.
September
September 1–10 – Hurricane Maria traversed the Atlantic Ocean, while its remnants impacted Europe, with a landslide in Bergen, Norway killing three people. Rip currents from Maria and nearby Hurricane Nate caused a drowning death in New Jersey.
September 6–17 – Hurricane Ophelia meandered off the east coast of the United States, killing three people.
September 12–17 – A depression struck Odisha and moved across India, killing six people in Madhya Pradesh from flooding.
September 14–16 – A depression struck Gujarat, killing 13 people.
September 16–18 – Tropical Storm Vicente killed 22 people when it struck Vietnam, including two drowning deaths in Hong Kong.
September 17–21 – Cyclonic Storm Pyarr originated offshore Bangladesh and moved ashore eastern India, killing 91 people between the two countries.
September 18–26 – Hurricane Rita became the strongest hurricane ever recorded in the Gulf of Mexico, before weakening and striking the U.S. Gulf coast near the border of Texas and Louisiana. There were 120 deaths, and damage was estimated at US$18.5 billion.
September 19–28 – Typhoon Damrey moved from the Philippines, through the southern Chinese island of Hainan, and with a final landfall Vietnam, killing at least 124 people.
September 25–October 3 – Typhoon Longwang struck eastern Taiwan, killing three people, and later mainland China in Fujian province, where the typhoon killed at least 133 people.
October
October 1–3 – Floods in Bangladesh killed 16 people and displaced 50,000.
October 1–5 – Hurricane Stan hit the Mexican states of Quintana Roo and Veracruz. The storm, along with a broader weather disturbance, killed 1,669 people across Mexico and Central America, particularly in Guatemala, while damage was estimated at US$2.7 billion. El Salvador's Santa Ana Volcano erupted on October 1, simultaneously with the flooding.
October 5–14 – Tropical Storm Tammy and a subtropical depression fueled moisture that produced flooding across the northeastern United States, resulting in ten deaths.
October 7–10 – Floods in Vietnam killed 17 people.
October 15–26 – Hurricane Wilma moved from the Caribbean into the Gulf of Mexico and across the western Atlantic Ocean, becoming the strongest Atlantic hurricane on record on October 19. At its peak, Wilma had an estimated barometric pressure of 882 mbar (hPa) and the smallest known eye of any Atlantic hurricane. Its winds reached 185 mph (295 km/h), making it the fourth Category 5 hurricane of the season. Along its path, Wilma killed 48 people and caused US$20.2 billion in damage.
October 20–28 – Floods in Vietnam killed 67 people.
October 21–29 – Monsoonal floods and a deep depression in southern India killed 127 people.
October 22–24 – Tropical Storm Alpha struck Hispaniola, killing 26 people. Alpha was the first tropical storm to be named using the Greek alphabet, due to the hyperactive season exhausting the regular naming list.
October 26–31 – Hurricane Beta struck Nicaragua after becoming the final of a record seven major hurricanes to occur during the season. Beta killed nine people.
October 28–November 2 – Typhoon Kai-tak struck Vietnam, killing 20 people.
November
November 14–21 – Tropical Storm Gamma moved across the Caribbean, causing 39 deaths, most of them in Honduras.
November 22–28 – Former Tropical Storm Delta struck the Canary Islands in the eastern Atlantic Ocean, leaving 19 people missing or killed, most of them from a shipwreck.
November 28–December 2 – Cyclonic Storm Baaz originated over the eastern Bay of Bengal and later struck India, killing 11 people in Thailand and another 11 in India.
December
References
Notes
Weather by year
2005-related lists
2005 meteorology | Weather of 2005 | [
"Physics"
] | 2,170 | [
"Weather",
"Physical phenomena",
"Weather by year"
] |
75,037,508 | https://en.wikipedia.org/wiki/Emotional%20selection%20%28information%29 | Emotional selection describes the perpetuation and evolution of information based on its ability to evoke emotions. The hypothesis posits that information spreads throughout populations based not just on its factual accuracy or utility, but also on the emotional impact it has on recipients. Emotional selection suggests that if a meme or a piece of information evokes strong emotions—whether positive or negative—it is more likely to be shared and propagated. The emotional response effectively acts as a selection mechanism, giving certain memes an advantage in the competition for attention and dissemination. This hypothesis underscores the importance of emotional resonance in the virality and longevity of information in cultural evolution.
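The selection mechanism can be sketched as a toy branching process (an illustration added here; the arousal scores, fan-out, and the linear share probability are all modelling assumptions, not claims from the article). A meme is re-shared with probability equal to its emotional-arousal score, and each share reaches a fixed number of new people:

```python
def expected_reach(arousal, fan_out=3, generations=8):
    """Total expected number of people reached after `generations` rounds."""
    carriers = 1.0   # expected current sharers (start with one poster)
    total = 1.0
    for _ in range(generations):
        carriers *= arousal * fan_out   # expected re-shares this generation
        total += carriers
    return total

calm = expected_reach(arousal=0.2)    # 0.2 * 3 < 1: the meme fizzles out
vivid = expected_reach(arousal=0.6)   # 0.6 * 3 > 1: geometric growth
print(round(calm, 1), round(vivid, 1))
```

With a fan-out of 3, an arousal score above the critical value 1/3 grows geometrically while a lower score dies out, so modest differences in emotional impact compound into large differences in reach — emotional impact acting as a selection filter.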
References
Selection
Evolutionary biology
Ecological processes
Emotion
Information theory | Emotional selection (information) | [
"Physics",
"Mathematics",
"Technology",
"Engineering",
"Biology"
] | 145 | [
"Evolutionary biology",
"Emotion",
"Physical phenomena",
"Earth phenomena",
"Telecommunications engineering",
"Behavior",
"Evolutionary processes",
"Selection",
"Applied mathematics",
"Computer science",
"Information theory",
"Ecological processes",
"Human behavior"
] |
75,048,709 | https://en.wikipedia.org/wiki/Primogenic%20Effect | In inorganic chemistry, the Primogenic Effect describes the change in excited state manifolds for first row vs second and third row metal complexes. The effect is used to rationalize the ability or inability of certain metal complexes to function as photosensitizers, which in turn is relevant to photocatalysis.
Complexes of the type [M(2,2’-bipyridine)3]2+ are low spin for M2+ = Fe(II), Ru(II), and Os(II). These species have similar ground state properties: they are diamagnetic and undergo reversible oxidation to the trications. As a consequence of the Primogenic Effect, the first excited state for [Fe(bipy)3]2+ is a ligand field state (LF state) with a high spin configuration. Such LF states characteristically decay to the ground state rapidly (femtoseconds). By contrast, for [Ru(bipy)3]2+ and [Os(bipy)3]2+, the first excited state is charge-transfer in character. Bonding in this kind of excited state can be described as [MIII(bipy−)(bipy)2]2+, i.e. an oxidized metal ion bound to one bipy radical anion as well as two ordinary bipy ligands. Such charge-separated states have relatively long lifetimes of 900 (Ru) and 25 (Os) nanoseconds. Nanosecond lifetimes are sufficiently long that these excited states can participate in bimolecular reactions, i.e. they can photosensitize. One consequence of the primogenic effect is that first-row metals are usually incapable of serving as photosensitizers. This failure is unfortunate because first row metals are far cheaper than second and third row metals.
The origin of the Primogenic Effect is traced to the presence (2nd and 3rd row metals) or absence (1st row metals) of radial nodes in the wave functions of the valence d orbitals.
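The node count follows the standard hydrogenic rule of n − l − 1 radial nodes, which can be tabulated directly (an added illustration):

```python
def radial_nodes(n, l):
    """Radial nodes of a hydrogen-like orbital with quantum numbers n, l."""
    if not 0 <= l < n:
        raise ValueError("require 0 <= l < n")
    return n - l - 1

L_D = 2  # angular momentum quantum number of a d orbital
for n, row in [(3, "first row"), (4, "second row"), (5, "third row")]:
    print(f"{n}d ({row} transition metals): {radial_nodes(n, L_D)} radial node(s)")
```

Only the 3d wave function (first-row transition metals) lacks a radial node, consistent with the origin of the effect described above.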
References
Chemical bonding
Electron states
Quantum chemistry
Photochemistry | Primogenic Effect | [
"Physics",
"Chemistry",
"Materials_science"
] | 442 | [
"Electron",
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
"Electron states",
" and optical physics"
] |
63,535,261 | https://en.wikipedia.org/wiki/Advanced%20Physical%20Layer | Ethernet Advanced Physical Layer (Ethernet-APL) describes a physical layer for the Ethernet communication technology which is especially developed for the requirements of the process industries. The development of Ethernet-APL was determined by the need for communication at high speeds and over long distances, the supply of power and communications signals via common single, twisted-pair (2-wire) cable as well as protective measures for the safe use within explosion hazardous areas.
Because it was created specifically for demanding industrial applications, Ethernet-APL, as a subset of the widely adopted Ethernet standard, offers a high level of robustness for extremely reliable operation.
Ethernet has long become the standard communication solution in the information technology field, while Industrial Ethernet is the common description of the variant of this standard for the manufacturing and process industries. Ethernet-APL provides the missing link, extending unified Ethernet communication all the way down to field instrumentation.
Structure
Being a physical layer, Ethernet-APL is independent of any protocol or communications stack and designed for wide adoption and application in process automation.
Ethernet as basis for APL
Ethernet-APL is a specific, single-pair Ethernet based on 10BASE-T1L as defined in IEEE 802.3cg, with additional provisions for process industries. The Ethernet-APL communication is thus part of and fully compatible with the IEEE 802.3 Ethernet specification.
Ethernet-APL communication relies on 10 Mbit/s full-duplex communication transported via one twisted-pair cable. It supports all major network topologies, including the well-known trunk-and-spur topology widely used in the process industries. The maximum trunk length is 1000 m into Zone 1 / Div. 2; the maximum spur length is specified as 200 m into Zone 0 / Div. 1.
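As an illustration only (this is not part of any Ethernet-APL specification or tooling; the function name and rule set are assumptions for the sketch), the cable-length limits quoted above can be encoded in a small topology check:

```python
TRUNK_LIMIT_M = 1000   # maximum trunk length (into Zone 1 / Div. 2)
SPUR_LIMIT_M = 200     # maximum spur length (into Zone 0 / Div. 1)

def check_segment(trunk_m, spurs_m):
    """Return a list of cable-length violations for a trunk-and-spur plan."""
    problems = []
    if trunk_m > TRUNK_LIMIT_M:
        problems.append(f"trunk {trunk_m} m exceeds {TRUNK_LIMIT_M} m limit")
    for i, spur in enumerate(spurs_m):
        if spur > SPUR_LIMIT_M:
            problems.append(f"spur {i} ({spur} m) exceeds {SPUR_LIMIT_M} m limit")
    return problems

print(check_segment(800, [50, 120]))    # [] -- plan is within limits
print(check_segment(1200, [50, 250]))   # trunk and one spur violate the limits
```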
Ethernet-APL incorporates a number of enhancements especially tailored to the demanding requirements of process and other industries like Intrinsic safety and adds port profiles for optional power supply and hazardous area protection.
Intrinsic safety
Intrinsic safety is a vital requirement especially by the worldwide process industries which demand an easy to implement solution to control and power field instruments in explosion hazardous areas. For this reason, optional intrinsic safety is fully integrated into the definitions of the Ethernet-APL communication standard.
In the technical specification 2-WISE the 2-wire intrinsically safe Ethernet is defined.
The intrinsic safety barrier is an electronic circuit at each output or input of a switch or instrument. It prevents ignitable electric energy from reaching the connector. The intrinsic safety barrier is separate from the communications circuit (PHY). This design principle ensures:
Chip manufacturers can build commercially available PHY in quantities also available for applications not requiring intrinsic safety
Device vendors can easily build intrinsically safe devices
Ethernet-APL is designed to support easy planning, validation, installation, documentation and implementation of the intrinsically safe operation of field instruments in hazardous areas. This includes among other aspects live work on cables and instruments without a hot work permit. All suitable products carry an approval by a notified body.
Port profile specifications
Part of the standards for Ethernet-APL include the definition of port profiles for interoperability in various application scenarios. This includes aspects such as segment type, differentiating a trunk-to-trunk port from a spur-to-spur port. Other specifications refer to the power characteristics, differentiating between source-to-load and unpowered-to-unpowered ports. Another provision is the definition of power classes. This includes the limitation of the maximum supply voltage and supply current for intrinsically safe power supply.
Further topics that are specified in the port profile specification are wiring rules, pin assignments for terminals and connectors as well as shielding and grounding rules.
References
Further reading
Ethernet
Industrial Ethernet
Industrial automation | Advanced Physical Layer | [
"Engineering"
] | 745 | [
"Industrial Ethernet",
"Industrial automation",
"Automation",
"Industrial engineering"
] |
63,539,376 | https://en.wikipedia.org/wiki/Two-dimensional%20critical%20Ising%20model | The two-dimensional critical Ising model is the critical limit of the Ising model in two dimensions. It is a two-dimensional conformal field theory whose symmetry algebra is the Virasoro algebra with the central charge $c = \tfrac{1}{2}$.
Correlation functions of the spin and energy operators are described by the minimal model. While the minimal model has been exactly solved (see Ising critical exponents), the solution does not cover other observables such as connectivities of clusters.
The minimal model
Space of states and conformal dimensions
The Kac table of the minimal model contains the three conformal dimensions $0$, $\tfrac{1}{16}$ and $\tfrac{1}{2}$.
This means that the space of states is generated by three primary states, which correspond to three primary fields or operators: the identity $\mathbf{1}$ with dimension $0$, the spin field $\sigma$ with dimension $\tfrac{1}{16}$, and the energy field $\epsilon$ with dimension $\tfrac{1}{2}$.
The decomposition of the space of states into irreducible representations of the product of the left- and right-moving Virasoro algebras is
$$\mathcal{H} = \mathcal{R}_0 \otimes \bar{\mathcal{R}}_0 \;\oplus\; \mathcal{R}_{\frac{1}{16}} \otimes \bar{\mathcal{R}}_{\frac{1}{16}} \;\oplus\; \mathcal{R}_{\frac{1}{2}} \otimes \bar{\mathcal{R}}_{\frac{1}{2}},$$
where $\mathcal{R}_\Delta$ is the irreducible highest-weight representation of the Virasoro algebra with the conformal dimension $\Delta$.
In particular, the Ising model is diagonal and unitary.
Characters and partition function
The characters of the three representations of the Virasoro algebra that appear in the space of states are
where is the Dedekind eta function, and are theta functions of the nome , for example .
The modular S-matrix, i.e. the matrix $\mathcal{S}$ such that $\chi_i\!\left(-\tfrac{1}{\tau}\right) = \sum_j \mathcal{S}_{ij}\,\chi_j(\tau)$, is
$$\mathcal{S} = \frac{1}{2}\begin{pmatrix} 1 & 1 & \sqrt{2} \\ 1 & 1 & -\sqrt{2} \\ \sqrt{2} & -\sqrt{2} & 0 \end{pmatrix},$$
where the fields are ordered as $(\mathbf{1}, \epsilon, \sigma)$.
The modular invariant partition function is
$$Z = \left|\chi_0\right|^2 + \left|\chi_{\frac{1}{16}}\right|^2 + \left|\chi_{\frac{1}{2}}\right|^2.$$
Fusion rules and operator product expansions
The fusion rules of the model are
$$\mathbf{1}\times\mathbf{1} = \mathbf{1},\qquad \mathbf{1}\times\sigma = \sigma,\qquad \mathbf{1}\times\epsilon = \epsilon,\qquad \sigma\times\sigma = \mathbf{1} + \epsilon,\qquad \sigma\times\epsilon = \sigma,\qquad \epsilon\times\epsilon = \mathbf{1}.$$
The fusion rules are invariant under the $\mathbb{Z}_2$ symmetry $\sigma \to -\sigma$.
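As a consistency check (an added illustration, not part of the article), the fusion rules follow from the modular S-matrix of the Ising model, $\mathcal{S} = \tfrac12\begin{pmatrix}1&1&\sqrt2\\1&1&-\sqrt2\\\sqrt2&-\sqrt2&0\end{pmatrix}$ in the ordering $(\mathbf{1},\epsilon,\sigma)$, via the Verlinde formula $N_{ij}{}^{k} = \sum_n \mathcal{S}_{in}\mathcal{S}_{jn}\mathcal{S}_{kn}/\mathcal{S}_{\mathbf{1}n}$ (real here since $\mathcal{S}$ is real and symmetric):

```python
import math

R2 = math.sqrt(2)
# Modular S-matrix of the Ising model in the ordering (1, epsilon, sigma)
S = [[0.5, 0.5, R2 / 2],
     [0.5, 0.5, -R2 / 2],
     [R2 / 2, -R2 / 2, 0.0]]

def fusion(i, j, k):
    """Fusion multiplicity N_ij^k from the Verlinde formula."""
    return round(sum(S[i][n] * S[j][n] * S[k][n] / S[0][n] for n in range(3)))

names = ["1", "epsilon", "sigma"]
for i in range(3):
    for j in range(i, 3):
        channels = [names[k] for k in range(3) if fusion(i, j, k) == 1]
        print(f"{names[i]} x {names[j]} = {' + '.join(channels)}")
```

The printout reproduces the fusion rules above, including sigma x sigma = 1 + epsilon.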
The three-point structure constants are
$$C_{\mathbf{1}\mathbf{1}\mathbf{1}} = C_{\mathbf{1}\epsilon\epsilon} = C_{\mathbf{1}\sigma\sigma} = 1,\qquad C_{\sigma\sigma\epsilon} = \tfrac{1}{2}.$$
Knowing the fusion rules and three-point structure constants, it is possible to write operator product expansions, for example
$$\sigma(z,\bar z)\,\sigma(0,0) = |z|^{-\frac14}\,\mathbf{1} + \tfrac12\,|z|^{\frac34}\,\epsilon(0,0) + \cdots,$$
where $\Delta_\sigma = \tfrac18$ and $\Delta_\epsilon = 1$ are the total conformal dimensions of the primary fields, and the omitted terms are contributions of descendant fields.
Correlation functions on the sphere
Any one-, two- and three-point function of primary fields is determined by conformal symmetry up to a multiplicative constant. This constant is set to be one for one- and two-point functions by a choice of field normalizations. The only non-trivial dynamical quantities are the three-point structure constants, which were given above in the context of operator product expansions.
with .
The three non-trivial four-point functions are of the type . For a four-point function , let and be the s- and t-channel Virasoro conformal blocks, which respectively correspond to the contributions of (and its descendants) in the operator product expansion , and of (and its descendants) in the operator product expansion . Let be the cross-ratio.
In the case of , fusion rules allow only one primary field in all channels, namely the identity field.
In the case of , fusion rules allow only the identity field in the s-channel, and the spin field in the t-channel.
In the case of , fusion rules allow two primary fields in all channels: the identity field and the energy field. In this case we write the conformal blocks in the case only: the general case is obtained by inserting the prefactor , and identifying with the cross-ratio.
In the case of , the conformal blocks are:
From the representation of the model in terms of Dirac fermions, it is possible to compute correlation functions of any number of spin or energy operators:
These formulas have generalizations to correlation functions on the torus, which involve theta functions.
Other observables
Disorder operator
The two-dimensional Ising model is mapped to itself by a high-low temperature duality. The image of the spin operator under this duality is a disorder operator $\mu$, which has the same left and right conformal dimensions $(h,\bar h) = \left(\tfrac{1}{16}, \tfrac{1}{16}\right)$ as the spin operator. Although the disorder operator does not belong to the minimal model, correlation functions involving the disorder operator can be computed exactly, for example
whereas
Connectivities of clusters
The Ising model has a description as a random cluster model due to Fortuin and Kasteleyn. In this description, the natural observables are connectivities of clusters, i.e. probabilities that a number of points belong to the same cluster.
The Ising model can then be viewed as the case $q = 2$ of the $q$-state Potts model, whose parameter $q$ can vary continuously, and is related to the central charge of the Virasoro algebra.
In the critical limit, connectivities of clusters have the same behaviour under conformal transformations as correlation functions of the spin operator. Nevertheless, connectivities do not coincide with spin correlation functions: for example, the three-point connectivity does not vanish, while the three-point function of the spin operator does. There are four independent four-point connectivities, and their sum coincides with the four-point function of the spin operator. Other combinations of four-point connectivities are not known analytically. In particular they are not related to correlation functions of the minimal model, although they are related to the $q\to 2$ limit of spin correlators in the $q$-state Potts model.
References
Exactly solvable models
Conformal field theory
Lattice models
Spin models
Statistical mechanics | Two-dimensional critical Ising model | [
"Physics",
"Materials_science"
] | 1,001 | [
"Spin models",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics"
] |
73,668,337 | https://en.wikipedia.org/wiki/National%20Quantum%20Mission%20India | National Quantum Mission India is an initiative by the Department of Science and Technology, Government of India, to foster scientific and industrial research and development in quantum technologies, in support of the national Digital India, Make in India, Skill India and Sustainable Development Goals.
Background
The Union Cabinet of the Government of India approved the National Quantum Mission at a cost of INR 6,003.65 crore (about US$730 million) for the period 2023–24 to 2030–31. Quantum key distribution (QKD) satellites are being developed by ISRO as part of the National Quantum Mission to provide secure communication.
Selection of Thematic Hubs (T-Hubs) and Technical Groups (TGs)
In January 2024, the National Quantum Mission issued a Call for Proposals (CFP), inviting top-tier academic and research institutions to contribute to the development of quantum technologies in four main areas:
1. Quantum Computing
2. Quantum Communication
3. Quantum Sensing & Metrology
4. Quantum Materials & Devices
The initiative garnered 384 submissions from across the country.
Four T-Hubs and 14 TGs Announced
On September 30, 2024, the National Quantum Mission announced the four T-Hubs.
After evaluation, 17 proposals were selected. The T-Hubs bring together 152 researchers from 43 institutions nationwide.
The four T-Hubs, comprising 14 Technical Groups in total, will be:
1. Indian Institute of Science (IISc) Bengaluru for Quantum Computing
2. Indian Institute of Technology Madras (IIT-M), together with the Centre for Development of Telematics, New Delhi, for Quantum Communication
3. Indian Institute of Technology Bombay (IIT-B) for Quantum Sensing & Metrology
4. Indian Institute of Technology Delhi (IIT-D) for Quantum Materials & Devices
References
Science and technology in India
Quantum computing
Quantum mechanics | National Quantum Mission India | [
"Physics"
] | 399 | [
"Theoretical physics",
"Quantum mechanics"
] |
73,672,727 | https://en.wikipedia.org/wiki/Van%20der%20Waals%20integration | van der Waals integration is a physical assembly strategy, in which prefabricated building blocks are physically assembled together through weak van der Waals interactions. This concept was originally proposed in two-dimensional materials research community to construct 2D van der Waals heterostructures.
A key benefit of van der Waals integration is that it offers an alternative way to integrate highly disparate material systems with unprecedented degrees of freedom, regardless of their crystal structures, lattice parameters, or orientations. As this physical assembly method does not involve one-to-one chemical bonds between adjacent layers, van der Waals integration can enable the creation of a wide spectrum of artificial van der Waals heterostructures and novel moiré superlattices through layer transfer. Highly disparate material systems with diverse functionalities can be integrated together with atomically clean and electronically sharp interfaces, eliminating the rigorous lattice-matching and process-compatibility requirements that apply to epitaxy. This approach has proven fruitful in 2D photonics, polariton physics, hetero-integrated photonics, and wearable optoelectronic applications.
References
Van der Waals molecules | Van der Waals integration | [
"Physics",
"Chemistry"
] | 236 | [
"Molecules",
"Matter",
"Van der Waals molecules"
] |
58,325,116 | https://en.wikipedia.org/wiki/Titu%27s%20lemma | In mathematics, the following inequality is known as Titu's lemma, Bergström's inequality, Engel's form or Sedrakyan's inequality, respectively, referring to the article About the applications of one useful inequality of Nairi Sedrakyan published in 1997, to the book Problem-solving strategies of Arthur Engel published in 1998 and to the book Mathematical Olympiad Treasures of Titu Andreescu published in 2003.
It is a direct consequence of Cauchy–Bunyakovsky–Schwarz inequality. Nevertheless, in his article (1997) Sedrakyan has noticed that written in this form this inequality can be used as a proof technique and it has very useful new applications. In the book Algebraic Inequalities (Sedrakyan) several generalizations of this inequality are provided.
Statement of the inequality
For any real numbers $a_1,\dots,a_n$ and positive reals $b_1,\dots,b_n$ we have (Nairi Sedrakyan (1997), Arthur Engel (1998), Titu Andreescu (2003)):
$$\frac{a_1^2}{b_1}+\frac{a_2^2}{b_2}+\cdots+\frac{a_n^2}{b_n} \;\ge\; \frac{\left(a_1+a_2+\cdots+a_n\right)^2}{b_1+b_2+\cdots+b_n}.$$
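The reduction to the Cauchy–Bunyakovsky–Schwarz inequality mentioned above can be made explicit: writing each term as $a_i = (a_i/\sqrt{b_i})\cdot\sqrt{b_i}$,

```latex
\left(\sum_{i=1}^{n} a_i\right)^{2}
  = \left(\sum_{i=1}^{n} \frac{a_i}{\sqrt{b_i}}\cdot\sqrt{b_i}\right)^{2}
  \le \left(\sum_{i=1}^{n} \frac{a_i^{2}}{b_i}\right)\left(\sum_{i=1}^{n} b_i\right),
```

and dividing both sides by $\sum_{i=1}^{n} b_i$ yields the inequality.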
Probabilistic statement
Similarly to the Cauchy–Schwarz inequality, one can generalize Sedrakyan's inequality to random variables.
In this formulation let $X$ be a real random variable, and let $Y$ be a positive random variable. $X$ and $Y$ need not be independent, but we assume $\operatorname{E}[X^2/Y]$ and $\operatorname{E}[Y]$ are both defined.
Then
$$\frac{\operatorname{E}[X]^2}{\operatorname{E}[Y]} \;\le\; \operatorname{E}\!\left[\frac{X^2}{Y}\right].$$
Direct applications
Example 1. Nesbitt's inequality.
For positive real numbers $a,b,c$:
$$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b} \;\ge\; \frac{3}{2}.$$
Example 2. International Mathematical Olympiad (IMO) 1995.
For positive real numbers $a,b,c$, where $abc=1$, we have that
$$\frac{1}{a^3(b+c)}+\frac{1}{b^3(c+a)}+\frac{1}{c^3(a+b)} \;\ge\; \frac{3}{2}.$$
Example 3.
For positive real numbers we have that
Example 4.
For positive real numbers we have that
Proofs
Example 1.
Proof: Use $\frac{a}{b+c}=\frac{a^2}{a(b+c)}$ and $(a+b+c)^2 \ge 3(ab+bc+ca)$ to conclude:
$$\frac{a^2}{a(b+c)}+\frac{b^2}{b(c+a)}+\frac{c^2}{c(a+b)} \;\ge\; \frac{(a+b+c)^2}{2(ab+bc+ca)} \;\ge\; \frac{3}{2}.$$
Example 2.
We have that
Example 3.
We have so that
Example 4.
We have that
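A quick numerical sanity check of the basic inequality on random inputs (the helper name below is ours, for illustration only):

```python
import random

def sedrakyan_sides(a, b):
    """Left and right sides of  sum a_i^2/b_i >= (sum a_i)^2 / sum b_i."""
    lhs = sum(x * x / y for x, y in zip(a, b))
    rhs = sum(a) ** 2 / sum(b)
    return lhs, rhs

# Check the inequality on 1000 random instances: real a_i, positive b_i.
rng = random.Random(0)
for _ in range(1000):
    n = rng.randint(1, 8)
    a = [rng.uniform(-5.0, 5.0) for _ in range(n)]
    b = [rng.uniform(0.1, 5.0) for _ in range(n)]
    lhs, rhs = sedrakyan_sides(a, b)
    assert lhs >= rhs - 1e-9, (a, b)
```

Equality holds exactly when the vectors $(a_i)$ and $(b_i)$ are proportional, as in the Cauchy–Schwarz case.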
References
Inequalities
Linear algebra
Operator theory
Articles containing proofs
Mathematical analysis
Probabilistic inequalities | Titu's lemma | [
"Mathematics"
] | 384 | [
"Mathematical theorems",
"Mathematical analysis",
"Binary relations",
"Theorems in probability theory",
"Probabilistic inequalities",
"Mathematical relations",
"Inequalities (mathematics)",
"Linear algebra",
"Articles containing proofs",
"Mathematical problems",
"Algebra"
] |
58,337,706 | https://en.wikipedia.org/wiki/Polyurethane%20urea%20elastomer | The polyurethane urea elastomer (PUU), or poly (urethane urea) elastomer, is a flexible polymeric material that is composed of linkages made out of polyurethane and polyurea compounds. Due to its hyperelastic properties, it is capable of bouncing back high-speed ballistic projectiles as if the material had “hardened” upon impact. PUUs were developed by researchers from the U.S. Army Research Laboratory (ARL) and the Army’s Institute for Soldier Nanotechnology at the Massachusetts Institute of Technology (MIT) to potentially replace polyethylene materials in body armor and other protective gear, such as combat helmets, face shields, and ballistic vests.
Composition
In general, PUUs are composed of both hard and soft segments that each play a role in the material’s physical properties. The soft segments consist of two types of chemical compounds, long-chain polyols and diisocyanates, that react and connect together with urethane linkages. On the other hand, the short-chain diamines react with the diisocyanates to form the hard segments that are held together with urea linkages. The mechanical properties of the PUU largely depend on the specific diisocyanates, long-chain polyols, and short-chain diamines in play, because how these components interact determines how well the soft and hard segments of the elastomers both crystallize and undergo microphase separation. As a result, variations in this molecular arrangement of chemical compounds have been shown to greatly affect the elastomer’s morphology and the macroscopic, mechanical properties that it exhibits.
Hyperelastic behavior
In 2017, researchers from the Army Research Laboratory and MIT reported that PUUs are capable of demonstrating hyperelastic properties, meaning that the material becomes extremely hardened upon being deformed within a very short time. As a result, the material may withstand ballistic impacts at exceptionally high speeds.
For the study, the researchers investigated the performance of different PUU variants where 4,4’-dicyclohexylmethane diisocyanate (HMDI) was chosen as the diisocyanate compound, diethyltoluenediamine (DETA) was chosen as the short-chain diamine compound, and poly(tetramethyleneoxide) (PTMO) was chosen as the long-chain polyol compound. Despite consisting of the same chemical compounds with the same stoichiometric ratio of 2:1:1 of [HDMI]:[DETA]:[PTMO], the samples differed regarding the molecular weight of their respective PTMO component, namely , , and , for the soft segments of the elastomers.
Each of the three samples was subjected to a laser-induced projectile impact test (LIPIT), which tested the dynamic response of the material by using a pulsed laser to shoot it with microparticles made of silica at speeds ranging from . The researchers found that the sample with the PTMO was the most rigid variant, with the particle exhibiting a shallow penetration of about upon impact despite travelling at before rebounding at . In contrast, the sample with the PTMO displayed a deeper penetration of about , but had a slower particle rebound of , making it the most rubber-like among the PUU samples. The strain rates associated with these impacts were on the order of 2.0 × 10⁸ s⁻¹ for the former and 8.1 × 10⁷ s⁻¹ for the latter.
However, all three PUU variants demonstrated rebound capabilities with no signs of post-mortem damage after impact from the microparticles. In contrast, when the LIPIT was performed on a ductile, glassy polycarbonate at speeds similar to those of the PTMO PUU variant, the polycarbonate displayed predominant deformation upon impact, despite its high fracture toughness and ballistic strength. According to the researchers, the effectiveness of the PUUs may come from how the molecules "resonate" upon impact, similar to chain mail, with oscillations at specific frequencies dissipating the absorbed energy. In comparison, the polycarbonate lacked the broad range of relaxation times that PUUs are known to have, a characteristic that reflects how efficiently the molecules in the polymer chains respond to an external impulse. As a result, the researchers concluded that even the most rubber-like variant of the PUU, specifically the PTMO sample, demonstrated greater robustness and dynamic stiffening than the glassy polycarbonate.
ARL researchers have stated that the PUU’s primary benefit comes not from its extra strength but its fabric-like flexibility, which demonstrates its potential as a replacement material for the rigid ceramic and metal plates generally found in military battle armor. However, as of 2018, the PUU is still under development in the testing phase.
References
External links
PU Polyurethane Open Belts
Military technology
Polyurethanes
Plastics
Elastomers
Body armor | Polyurethane urea elastomer | [
"Physics",
"Chemistry"
] | 1,033 | [
"Synthetic materials",
"Unsolved problems in physics",
"Elastomers",
"Amorphous solids",
"Plastics"
] |
66,341,566 | https://en.wikipedia.org/wiki/Adrian%20Constantin | Adrian Constantin (born 22 April 1970) is a Romanian-Austrian mathematician who does research in the field of nonlinear partial differential equations. He is a professor at the University of Vienna and has made groundbreaking contributions to the mathematics of wave propagation. He is listed as an ISI Highly Cited Researcher with more than 160 publications and 11,000 citations.
Life and career
Adrian Constantin was born in Timișoara, Romania, where he studied at the Nikolaus Lenau High School. He was later educated at the University of Nice Sophia Antipolis (BSc 1991, MSc 1992) and at New York University (NYU), where he got his PhD in 1996 under Henry McKean with the thesis "The Periodic Problem for the Camassa–Holm equation". He did post-doctoral work at the University of Basel and at the University of Zurich.
After a short period as a lecturer at the University of Newcastle upon Tyne, he became a professor at the University of Lund in 2000, and then was Erasmus Smith's Professor of Mathematics at Trinity College Dublin (TCD) from 2004 to 2008, and was made a fellow in 2005. Since then he has been university professor for partial differential equations at the University of Vienna, and also had a chair at King's College London during the period 2011-2014.
Constantin specializes in the role of mathematics in geophysics using nonlinear partial differential equations to mathematically model currents and waves in the oceans and in the atmosphere. These flows and waves play an important role in the El Niño climate phenomenon and in natural disasters such as tsunamis. His approach takes into account the fact that the surface of the earth is curved and the importance of the Coriolis force.
Awards and honours
2000: Highly Cited Researcher with more than 160 publications and 11,000 citations
2005: Göran Gustafsson Prize from the Royal Swedish Academy of Sciences
2007: Friedrich Wilhelm Bessel Research Prize from the Alexander von Humboldt Foundation
2009: Fluid Dynamics Research Prize from the Japan Society of Fluid Mechanics
2010: Advanced Grant from the European Research Council (ERC)
2012: Plenary lecture at the European Congress of Mathematicians (ECM) in Krakow
2020: Wittgenstein Award from The Austrian Ministry for Science
2022: Elected corresponding member of the Austrian Academy of Sciences, 22 April 2022
2022: Elected member of the German National Academy of Sciences Leopoldina, 16 March 2022
2022: Made an honorary citizen of the city of Timișoara
2024: Elected full member of the Austrian Academy of Sciences, 15 April 2024
Selected publications
papers
1998: Wave breaking for nonlinear nonlocal shallow water equations (with J. Escher), Acta Mathematica 181 229–243.
1999: A shallow water equation on the circle (with H. P. McKean), Comm. Pure Appl. Math. 52 949–982.
2000: Stability of peakons (with W. Strauss), Comm. Pure Appl. Math. 53 603–610.
2004: Exact steady periodic water waves with vorticity (with W. Strauss), Comm. Pure Appl. Math. 57 481–527.
2006: The trajectories of particles in Stokes waves, Invent. Math. 166 523–535.
2007: Global conservative solutions of the Camassa-Holm equation (with A. Bressan), Arch. Ration. Mech. Anal. 183 215–239.
2011: Analyticity of periodic traveling free surface water waves with vorticity (with J. Escher), Ann. of Math. 173 559–568.
2016: Global bifurcation of steady gravity water waves with critical layers (with W. Strauss and E. Varvaruca), Acta Mathematica 217 195–262.
2019: Equatorial wave-current interactions (with R. I. Ivanov), Comm. Math. Phys. 370 1–48.
2022: On the propagation of nonlinear waves in the atmosphere (with R. S. Johnson), Proceedings of the Royal Society A 478 (2260), 20210895
2022: Stratospheric planetary flows from the perspective of the Euler equation on a rotating sphere (with P. Germain), Arch. Ration. Mech. Anal., (245 587–644)
Books
2011: "Nonlinear Water Waves with Applications to Wave-Current Interactions and Tsunamis", Society for Industrial and Applied Mathematics, Philadelphia,
2016: "Fourier Analysis. Part 1. Theory", London Mathematical Society, Cambridge University Press,
2024: "Analysis I", Springer Spektrum, Berlin, Heidelberg,
References
External links
Adrian Constantin's homepage
Literature by and about Adrian Constantin in the catalog of the German National Library
1970 births
Living people
People from Timișoara
Austrian mathematicians
Romanian mathematicians
New York University alumni
Côte d'Azur University alumni
Romanian emigrants to Austria
Mathematical analysts
Partial differential equation theorists
Fluid dynamicists
Academics of Trinity College Dublin
Corresponding Members of the Austrian Academy of Sciences
Fellows of Trinity College Dublin
Professorships at King's College London
Academic staff of the University of Vienna | Adrian Constantin | [
"Chemistry",
"Mathematics"
] | 1,051 | [
"Mathematical analysis",
"Fluid dynamicists",
"Mathematical analysts",
"Fluid dynamics"
] |
66,342,149 | https://en.wikipedia.org/wiki/Convergence%20research | Convergence research aims to solve complex problems employing transdisciplinarity. While academic disciplines are useful for identifying and conveying coherent bodies of knowledge, some problems require collaboration among disciplines, both to enhance understanding of scientific phenomena and to resolve social issues. The two defining characteristics of convergence research are: 1) the nature of the problem, and 2) the collaboration among disciplines.
Definition
In 2002, the foundational report "Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science" (Roco et al. 2002 and 2003) and the article "Coherence and Divergence of Megatrends in Science and Engineering" (Roco, 2002) were published, followed by the international report "Convergence of Knowledge, Technology and Society: Beyond Convergence of Nano-Bio-Info-Cognitive Technologies" (Roco et al. 2013) and "Principles and Methods that Facilitate Convergence" (Roco 2016).
In 2016, convergence research was identified by the National Science Foundation as one of 10 Big Ideas for future investment. As defined by NSF, convergence research has two primary characteristics, namely:
"Research driven by a specific and compelling problem. Convergence research is generally inspired by the need to address a specific challenge or opportunity, whether it arises from deep scientific questions or pressing societal needs.
Deep integration across disciplines. As experts from different disciplines pursue common research challenges, their knowledge, theories, methods, data, research communities and languages become increasingly intermingled or integrated. New frameworks, paradigms or even disciplines can form sustained interactions across multiple communities."
National Research Council published a report on "Convergence: Facilitating Transdisciplinary Integration of Life Sciences, Physical Sciences, Engineering, and Beyond" in 2014.
An illustration of applying convergence principles to the National Nanotechnology Initiative is described in a 2013 publication.
An illustration of the application of convergence to health, science and engineering research is described in a 2016 publication.
Examples of convergence research
Biomedicine
Advancing healthcare and promoting wellness to the point of providing personalized medicine will increase health and reduce costs for everyone. While recognizing the potential benefits of personalized medicine, critics cite the importance of maintaining investments in public health as highlighted by the approaches to combat the COVID-19 pandemic.
Cyber-physical systems
The internet of things allows all people, machines, and infrastructure to be monitored, maintained, and operated in real time, everywhere. Because the United States Government is one of the largest users of "things", cybersecurity is critical to any effective system.
STEMpathy
Jobs that utilize skills in science, technology, engineering, and mathematics to provide care for human welfare through the use of empathy have been described as creating value with "hired hearts". Thomas Friedman coined the term "STEMpathy" to describe these jobs.
Sustainability
Beyond recycling, the goal of achieving zero waste means designing a closed loop of the material and energy necessary to operate the built environment. Individuals and organizations, including corporations and governments, increasingly are committing to achieving zero waste.
References
Biomedicine
Computer systems
Sustainability | Convergence research | [
"Technology",
"Engineering",
"Biology"
] | 627 | [
"Computer engineering",
"Biomedicine",
"Computer systems",
"Computer science",
"Computers"
] |
66,345,561 | https://en.wikipedia.org/wiki/Gap%20surface%20plasmon | A gap surface plasmon (or gap plasmon) is a guided electromagnetic wave which propagates in a transparent medium located between two extremely close metallic regions. Propagating in a gap between metals forces light to propagate partially inside the metallic regions, causing the gap plasmon to slow down.
The velocity of the gap-plasmon can be modulated by changing the thickness of the gap even by a few nanometers.
A gap plasmon is a guided mode, a solution of Maxwell's equations without a source. It is the form under which light propagates inside an extremely thin gap between two metals (of the same nature or not). As a gap plasmon, the electromagnetic wave can propagate up to four to five times slower than in vacuum. Such a guided mode only exists for magnetic fields parallel to the interfaces (p polarization). The distance between the metallic regions typically has to be smaller than 50 nm in order to noticeably slow the guided mode. The gap plasmon actually propagates partially inside the metal: its field penetrates the metal to a depth of typically 25 nm, called the skin depth. A slow guided mode presents a short effective wavelength and thus a very large wave vector (denoted kx when the wave propagates along an Ox axis). As the thickness of the dielectric region decreases, the gap plasmon is slowed by the metal and its effective index (as well as its wavevector) increases, while its effective wavelength shrinks.
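The thickness dependence described above can be made quantitative by numerically solving the standard transverse-magnetic dispersion relation of a metal-insulator-metal stack for the fundamental gap mode. The sketch below assumes a lossless, non-dispersive metal; the permittivities (eps_m = -16 for the metal, eps_d = 2.25 for the dielectric) and the 600 nm wavelength are illustrative values, not taken from the article:

```python
import math

def gap_plasmon_neff(gap_nm, lam0_nm=600.0, eps_d=2.25, eps_m=-16.0):
    """Effective index of the fundamental (even-magnetic-field) gap plasmon
    of a metal-insulator-metal stack, from the standard TM dispersion relation
        tanh(k_d t/2) = -eps_d k_m / (eps_m k_d),
    with k_j = sqrt(beta^2 - eps_j k0^2).  Lossless metal assumed."""
    k0 = 2.0 * math.pi / (lam0_nm * 1e-9)
    t = gap_nm * 1e-9

    def f(neff):
        beta = neff * k0
        kd = math.sqrt(beta ** 2 - eps_d * k0 ** 2)
        km = math.sqrt(beta ** 2 - eps_m * k0 ** 2)
        return math.tanh(kd * t / 2.0) + eps_d * km / (eps_m * kd)

    # f is strictly increasing in neff here, with f(lo) < 0 < f(hi),
    # so plain bisection finds the unique root of this branch.
    lo, hi = math.sqrt(eps_d) * (1.0 + 1e-6), 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Smaller gaps slow the mode down: the effective index rises as the gap shrinks.
for t in (50, 25, 10):
    print(t, "nm gap -> n_eff =", round(gap_plasmon_neff(t), 3))
```

Running it shows the effective index rising monotonically as the gap shrinks from 50 nm to 10 nm, i.e. the mode slows down, as described above.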
Devices based on gap plasmons, such as resonators, have a typical size of the order of the effective wavelength. Gap plasmon resonators are therefore in general small compared to the wavelength of light in vacuum. Such miniaturization is particularly sought after in plasmonics.
Applications
Gap plasmon resonators:
They can be obtained by self-assembly of chemically synthesized nanocubes or by lithography. A gap plasmon resonator is a cavity for the guided mode: the wave is reflected back and forth inside the resonator.
Such structures (see picture) present a very small volume compared to the wavelength in vacuum, which allows a very large Purcell effect to be reached. Such resonators can then be used to design metasurfaces, fabricate reflection holograms, or for subwavelength color printing.
Example : chemically synthesized silver nanocubes on a gold layer, separated by polymer (see picture).
Electro-optical modulators:
Electro-optical modulators are designed to modulate a light signal, i.e. they modulate the characteristics of a light beam (such as its wavelength, polarization state or intensity) to encode a signal. Gap-plasmon-based modulators are the smallest existing modulators, and losses are reduced thanks to this small size. They operate over a large frequency range; indeed, the upper frequency limit of such devices is currently beyond the reach of electronic measuring devices.
References
Electromagnetism
Metamaterials
Plasmonics | Gap surface plasmon | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 654 | [
"Plasmonics",
"Electromagnetism",
"Physical phenomena",
"Metamaterials",
"Materials science",
"Surface science",
"Fundamental interactions",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
67,858,994 | https://en.wikipedia.org/wiki/Iterative%20rational%20Krylov%20algorithm | The iterative rational Krylov algorithm (IRKA) is an iterative algorithm for model order reduction (MOR) of single-input single-output (SISO) linear time-invariant dynamical systems. At each iteration, IRKA performs an Hermite-type interpolation of the original system transfer function. Each interpolation requires solving shifted pairs of linear systems, each of size , where is the original system order and is the desired reduced model order (usually ).
The algorithm was first introduced by Gugercin, Antoulas and Beattie in 2008. It is based on a first order necessary optimality condition, initially investigated by Meier and Luenberger in 1967. The first convergence proof of IRKA was given by Flagg, Beattie and Gugercin in 2012, for a particular kind of systems.
MOR as an optimization problem
Consider a SISO linear time-invariant dynamical system, with input , and output :
Applying the Laplace transform, with zero initial conditions, we obtain the transfer function , which is a fraction of polynomials:
Assume is stable. Given , MOR tries to approximate the transfer function , by a stable rational transfer function , of order :
A possible approximation criterion is to minimize the absolute error in norm:
This is known as the optimization problem. This problem has been studied extensively, and it is known to be non-convex; which implies that usually it will be difficult to find a global minimizer.
Meier–Luenberger conditions
The following first order necessary optimality condition for the problem, is of great importance for the IRKA algorithm.
Note that the poles are the eigenvalues of the reduced matrix .
Hermite interpolation
An Hermite interpolant of the rational function , through distinct points , has components:
where the matrices and may be found by solving dual pairs of linear systems, one for each shift [Theorem 1.1]:
IRKA algorithm
As can be seen from the previous section, finding an Hermite interpolator of , through given points, is relatively easy. The difficult part is to find the correct interpolation points. IRKA tries to iteratively approximate these "optimal" interpolation points.
For this, it starts with arbitrary interpolation points (closed under conjugation), and then, at each iteration , it imposes the first order necessary optimality condition of the problem:
1. find the Hermite interpolant of , through the actual shift points: .
2. update the shifts by using the poles of the new :
The iteration is stopped when the relative change in the set of shifts of two successive iterations is less than a given tolerance. This condition may be stated as:
As already mentioned, each Hermite interpolation requires solving shifted pairs of linear systems, each of size :
Also, updating the shifts requires finding the poles of the new interpolant . That is, finding the eigenvalues of the reduced matrix .
Pseudocode
The following is a pseudocode for the IRKA algorithm [Algorithm 4.1].
algorithm IRKA
input: , , closed under conjugation
% Solve primal systems
% Solve dual systems
while relative change in {} > tol
% Reduced order matrix
% Update shifts, using poles of
% Solve primal systems
% Solve dual systems
end while
return % Reduced order model
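The pseudocode above can be made concrete. Below is a minimal NumPy sketch for the case E = I; the test system, the initial shifts, and the tolerance are illustrative assumptions, and a production implementation would keep the bases real-valued and use sparse solvers:

```python
import numpy as np

def irka(A, b, c, shifts, tol=1e-8, max_iter=200):
    """IRKA sketch for the SISO system x' = Ax + bu, y = c^T x (E = I).
    `shifts` are the initial interpolation points, closed under conjugation.
    Returns the reduced model (Ar, br, cr) and the converged shifts."""
    n = A.shape[0]
    I = np.eye(n)
    shifts = np.sort_complex(np.asarray(shifts, dtype=complex))

    def bases(s):
        # Primal solves (sigma_k I - A) v_k = b and dual solves (sigma_k I - A)^H w_k = c.
        V = np.column_stack([np.linalg.solve(sk * I - A, b) for sk in s])
        W = np.column_stack([np.linalg.solve((sk * I - A).conj().T, c) for sk in s])
        # Orthonormalize the columns for conditioning (the spans are unchanged).
        return np.linalg.qr(V)[0], np.linalg.qr(W)[0]

    for _ in range(max_iter):
        V, W = bases(shifts)
        Ar = np.linalg.solve(W.conj().T @ V, W.conj().T @ A @ V)  # oblique projection
        new_shifts = np.sort_complex(-np.linalg.eigvals(Ar))      # mirror the reduced poles
        rel_change = np.linalg.norm(new_shifts - shifts) / np.linalg.norm(shifts)
        shifts = new_shifts
        if rel_change < tol:
            break

    V, W = bases(shifts)                      # final Hermite interpolant
    Er = W.conj().T @ V
    Ar = np.linalg.solve(Er, W.conj().T @ A @ V)
    br = np.linalg.solve(Er, W.conj().T @ b)
    cr = V.T @ c
    return Ar, br, cr, shifts

# Demo: reduce a stable order-10 system to order 4.
n = 10
A = -np.diag(np.arange(1.0, n + 1))
b = np.ones(n)
c = np.ones(n)
Ar, br, cr, sigma = irka(A, b, c, shifts=[0.5, 1.5, 2.5, 3.5])

H = lambda s: c @ np.linalg.solve(s * np.eye(n) - A, b)            # full transfer function
Hr = lambda s: cr @ np.linalg.solve(s * np.eye(len(br)) - Ar, br)  # reduced transfer function
```

Here `sigma` carries the converged shifts; by construction the reduced transfer function matches the full one (value and derivative) at each of them.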
Convergence
A SISO linear system is said to have symmetric state space (SSS) whenever: This type of system appears in many important applications, such as the analysis of RC circuits and inverse problems involving 3D Maxwell's equations. For SSS systems with distinct poles, the following convergence result has been proven: "IRKA is a locally convergent fixed point iteration to a local minimizer of the optimization problem."
Although there is no convergence proof for the general case, numerous experiments have shown that IRKA often converges rapidly for different kind of linear dynamical systems.
Extensions
The IRKA algorithm has been extended by the original authors to multiple-input multiple-output (MIMO) systems, and also to discrete-time and differential-algebraic systems [Remark 4.1].
See also
Model order reduction
References
External links
Model Order Reduction Wiki
Numerical analysis
Mathematical modeling | Iterative rational Krylov algorithm | [
"Mathematics"
] | 857 | [
"Mathematical modeling",
"Applied mathematics",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
67,864,389 | https://en.wikipedia.org/wiki/Bolzano%20process | The Bolzano process is a means to reduce magnesium to metallic form. "Dolomite-ferrosilicon briquettes are stacked on a special charge support system through which internal electric heating is conducted to the charge. A complete reaction takes 20 to 24 hours at 1,200 °C."
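As in the closely related Pidgeon process, the chemistry underlying the Bolzano process is the silicothermic reduction of calcined dolomite; the overall reaction is commonly written as follows (the iron of the ferrosilicon serves only as a carrier for the silicon):

```latex
2\,(\mathrm{MgO \cdot CaO}) + \mathrm{Si\,(Fe)} \longrightarrow 2\,\mathrm{Mg\,(g)} + \mathrm{Ca_2SiO_4} + \mathrm{Fe}
```

The magnesium leaves the charge as a vapour and is condensed away from the reaction zone.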
In 2014, Brazilian operations produced 10-15 kilotons of Mg by this process.
Also in 2014, Nevada Clean Magnesium announced its Tami-Mosi plan to create an ASTM B-92 pilot plant. The mineral resource is estimated at 412 billion tons of 12.3% grade Mg. The company produced its first ingot from a pilot plant in December 2018.
References
Metallurgy
Metallurgical processes
Smelting
Magnesium processes | Bolzano process | [
"Chemistry",
"Materials_science",
"Engineering"
] | 153 | [
"Smelting",
"Metallurgical processes",
"Magnesium processes",
"Metallurgy",
"Materials science",
"nan"
] |
67,865,542 | https://en.wikipedia.org/wiki/2Blades | 2Blades is an agricultural phytopathology non-profit which performs research to improve durable genetic resistance in crops, and funds other researchers to do the same. 2Blades was co-founded by Dr. Roger Freedman and Dr. Diana Horvath in 2004.
Funding source
2Blades is partly funded by the Gatsby Charitable Foundation and does its research at The Sainsbury Laboratory, among other locations. One co-founder, Chairman Roger Freedman, also works for Gatsby, which was founded by Lord David Sainsbury. Freedman had pitched an idea to Sainsbury's venture capital company to begin investing in plant genetic engineering technologies; although the board did so, they found someone else to lead it. Freedman had wanted to run it, but was told by Sainsbury that the role was not for him. Indeed, soon thereafter Sainsbury set up another early-stage investment company specifically for Freedman and a colleague, and a separate non-profit through which Freedman could grant money, both for plant science. The non-profit was 2Blades.
Research activities
2Blades routinely works in partnership with other crop disease organizations such as CIMMYT and BGRI. The foundation also conducts research in partnership with industry, including Bayer CropScience and Monsanto. The organisation's End the Blight campaign has been joined by CIP (the International Potato Center) and Christopher Kennedy, Chairman of Joseph P. Kennedy Enterprises. This campaign is advancing research and delivering cultivars specifically targeting Phytophthora infestans in Africa. Mr Kennedy is chairman of the 2Blades African Potato Initiative, which is funding the delivery of a Victoria-based cultivar to East African markets.
Crops and pathogens of research interest to the foundation include P. infestans on potato, rye, Phakopsora pachyrhizi on soybean, Puccinia graminis f. sp. tritici on wheat, and Fusarium oxysporum f.sp. cubense on Musa spp.
References
Bibliography of affiliated personnel
Oadi N. Matny, Mehran Patpour, Ming Luo, Liqiong Xie, Soma Chakraborty, Aihua Wang, James A. Kolmer, Terese Richardson, Dhara Bhatt, Mohammad Hoque, Chris Sorenson, Burkhard Steuernagel, Brande B. H. Wulff, Narayana Upadhyaya, Rohit Mago, Sam Periyannan, Evans Lagudah, Roger Freedman, Lynne Reuber, Brian J. Steffenson, and Michael Ayliffe, A Wheat Multi-Transgene Cassette Provides Stem and Leaf Rust Resistance in the Field. In Plant and Animal Genome XXVIII Conference (January 11-15, 2020). PAG.
External links
Official website:
Phytopathology
Wheat diseases
Agronomy
Plant breeding
Genetic engineering and agriculture | 2Blades | [
"Chemistry",
"Engineering",
"Biology"
] | 749 | [
"Plant breeding",
"Genetic engineering and agriculture",
"Genetic engineering",
"Molecular biology"
] |
67,868,790 | https://en.wikipedia.org/wiki/Dmitrii%20Treschev | Dmitrii (or Dmitry) Valerevich Treschev (Дмитрий Валерьевич Трещёв, born 25 October 1964 in Olenegorsk, Murmansk Oblast) is a Russian mathematician and mathematical physicist, specializing in dynamical systems of classical mechanics.
Education and career
Treschev completed his secondary study in 1981 with degree from the Специализированный учебно-научный центр (СУНЦ) МГУ имени А.Н. Колмогорова (Specialized Educational and Scientific Center, МГУ, Physics and Mathematics Boarding School No. 18 named after A. N. Kolmogorov). Treschev completed his undergraduate study in 1986 with degree from the Faculty of Mechanics and Mathematics, Moscow State University. There in 1988 he received his Candidate of Sciences degree (PhD) with thesis Геометрические методы исследования периодических траекторий динамических систем (Geometric methods of investigation of periodic trajectories of dynamical systems) under the supervision of Valerii Vasilievich Kozlov. In 1992 Treschev received his Russian Doctor of Sciences degree (habilitation) with thesis Качественные методы исследования гамильтоновых систем, близких к интегрируемым (Qualitative methods for studying Hamiltonian systems close to integrable).
At the secondary school СУНЦ, Treschev taught as a professor in the Department of Mathematics from 1986 until his resignation. At Moscow State University, he has been a leading researcher since 1993, a professor since 1998, and head of the Department of Theoretical Mechanics since 2006. At the Steklov Institute, he became a chief researcher and the deputy director for research in 2005, and has been the director for research since 2017. He is the author or coauthor of over 70 scientific publications. Together with V. V. Kozlov, he supervises the seminar Избранные задачи классической динамики (Selected problems of classical dynamics).
Treschev's research deals with integrability and non-integrability, dynamical stability, KAM theory, separatrix splitting, averaging in slow-fast systems, chaos in Hamiltonian dynamics, Arnold diffusion, statistical mechanics, and ergodic theory. He has served on the editorial boards of the journals Nonlinearity (1st published in 1988), Chaos, Mathematical Notes, and Regular and Chaotic Dynamics (first published in 2007).
In 1995 Treschev was a Laureate of the State Prize of the Russian Federation for young scientists. In 2007 he was awarded the Lyapunov Prize. He was elected in 2003 a corresponding member and in 2016 a full member of the Russian Academy of Sciences. In 2002 he was an invited speaker with talk Continuous averaging in dynamical systems at the International Congress of Mathematicians in Beijing.
Selected publications
Articles
Books
References
External links
mathnet.ru
1964 births
Living people
Moscow State University alumni
Academic staff of Moscow State University
Academic staff of the Steklov Institute of Mathematics
20th-century Russian mathematicians
21st-century Russian mathematicians
Dynamical systems theorists
Mathematical physicists | Dmitrii Treschev | [
"Mathematics"
] | 786 | [
"Dynamical systems theorists",
"Dynamical systems"
] |
72,246,040 | https://en.wikipedia.org/wiki/Fast%20probability%20integration | Fast probability integration (FPI) is a method of determining the probability of a class of events, particularly a failure event, that is faster to execute than Monte Carlo analysis. It is used where large numbers of time-variant variables contribute to the reliability of a system. The method was proposed by Wen and Chen in 1987.
For a simple failure analysis with one stress variable, there will be a time-variant failure barrier, , beyond which the system will fail. This simple case may have a deterministic solution, but for more complex systems, such as crack analysis of a large structure, there can be a very large number of variables, for instance, because of the large number of ways a crack can propagate. In many cases, it is infeasible to produce a deterministic solution even when the individual variables are all individually deterministic. In this case, one defines a probabilistic failure barrier surface, , over the vector space of the stress variables.
If failure barrier crossings are assumed to comply with the Poisson counting process an expression for maximum probable failure can be developed for each stress variable. The overall probability of failure is obtained by averaging (that is, integrating) over the entire variable vector space. FPI is a method of approximating this integral. The input to FPI is a time-variant expression, but the output is time-invariant, allowing it to be solved by first-order reliability method (FORM) or second-order reliability method (SORM).
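To make the contrast concrete, the sketch below (not NESSUS's actual FPI routine; all numbers are invented) estimates the failure probability of a simple time-invariant limit state g = R − S, with independent normal capacity R and load S, using the first-order reliability method (FORM) named above, and cross-checks it against Monte Carlo sampling:

```python
import math
import random

# Illustrative sketch only (not NESSUS's FPI routine; numbers invented):
# limit state g = R - S with independent normal capacity R and load S;
# failure occurs when g < 0.

def form_failure_probability(mu_r, sigma_r, mu_s, sigma_s):
    """FORM estimate: reliability index beta, then P_f = Phi(-beta).

    For a linear limit state with normal inputs this is exact.
    """
    beta = (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)
    return 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta)

def mc_failure_probability(mu_r, sigma_r, mu_s, sigma_s, n=200_000, seed=0):
    """Brute-force Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    failures = sum(rng.gauss(mu_r, sigma_r) < rng.gauss(mu_s, sigma_s)
                   for _ in range(n))
    return failures / n

pf_form = form_failure_probability(10.0, 1.0, 7.0, 1.0)  # beta ≈ 2.12
pf_mc = mc_failure_probability(10.0, 1.0, 7.0, 1.0)
```

For this linear limit state the two estimates agree to sampling error; the appeal of reducing a time-variant problem to a form solvable by FORM or SORM is that the Monte Carlo side of such a comparison becomes far more expensive as the number of variables grows.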
An FPI package is included as part of the core modules of the NASA-designed NESSUS software. It was initially used to analyse risks and uncertainties concerning the Space Shuttle main engine, but is now used much more widely in a variety of industries.
References
Bibliography
Beck, André T.; Melchers, Robert E., "Fatigue and fracture reliability analysis under random loading", pp. 2201–2204 in, Bathe, K.J (ed), Proceedings of the Second MIT Conference on Computational Fluid and Solid Mechanics June 17–20, 2003, Elsevier, 2003 .
Murthy, Pappu L.N.; Mital, Subodh K.; Shah, Ashwin R., "Design tool developed for probabilistic modeling of ceramic matrix composite strength", pp. 127–128 in, Research & Technology 1998, NASA Lewis Research Center, 1999.
Riha, David S.; Thacker, Ben H.; Huyse, Luc J.; Enright, Mike P.; Waldhart, Chris J.; Francis, W. Loren; Nicolella, Dniel P.; Hudak, Stephen J.; Liang, Wuwei; Fitch, Simeon H.K., "Applications of reliability assessment for aerospace, automotive, bioengineering, and weapons systems", ch. 1 in, Nikolaidis, Efstratios; Ghiocel, Dan M.; Singhal, Suren, Engineering Design Reliability Applications: For the Aerospace, Automotive and Ship Industries, CRC Press, 2007 .
Shah, A.R.; Shiao, M.C.; Nagpal, V.K.; Chamis, C.C., Probabilistic Evaluation of Uncertainties and Risks in Aerospace Components, NASA Technical Memorandum 105603, March 1992.
Wen, Y.K.; Chen, H.C., "On fast integration for time variant structural reliability", Probabilistic Engineering Mechanics, vol. 2, iss. 3, pp. 156–162, September 1987.
Probabilistic models
Reliability engineering | Fast probability integration | [
"Engineering"
] | 745 | [
"Systems engineering",
"Reliability engineering"
] |
72,251,171 | https://en.wikipedia.org/wiki/Curtis%20House%2C%20Rickmansworth | The Curtis House, Rickmansworth was the first solar house constructed in the United Kingdom. The house, in Rickmansworth, England, was built in 1956 by British architect Edward Curtis, for his own occupation.
References
External links
"Dream House" - British Pathe film on YouTube showing the house
Houses in Hertfordshire
Solar design | Curtis House, Rickmansworth | [
"Engineering"
] | 67 | [
"Solar design",
"Energy engineering"
] |
72,252,525 | https://en.wikipedia.org/wiki/S2S%20%28mathematics%29 | In mathematics, S2S is the monadic second order theory with two successors. It is one of the most expressive natural decidable theories known, with many decidable theories interpretable in S2S. Its decidability was proved by Rabin in 1969.
Basic properties
The first order objects of S2S are finite binary strings. The second order objects are arbitrary sets (or unary predicates) of finite binary strings. S2S has functions s→s0 and s→s1 on strings, and predicate s∈S (equivalently, S(s)) meaning string s belongs to set S.
Some properties and conventions:
By default, lowercase letters refer to first order objects, and uppercase to second order objects.
The inclusion of sets makes S2S second order, with "monadic" indicating absence of k-ary predicate variables for k>1.
Concatenation of strings s and t is denoted by st, and is not generally available in S2S, not even s→0s. The prefix relation between strings is definable.
Equality is primitive, or it can be defined as s = t ⇔ ∀S (S(s) ⇔ S(t)) and S = T ⇔ ∀s (S(s) ⇔ T(s)).
In place of strings, one can use (for example) natural numbers with n→2n+1 and n→2n+2 but no other operations.
The set of all binary strings is denoted by {0,1}*, using Kleene star.
Arbitrary subsets of {0,1}* are sometimes identified with trees, specifically as a {0,1}-labeled tree {0,1}*; {0,1}* forms a complete infinite binary tree.
For formula complexity, the prefix relation on strings is typically treated as first order. Without it, not all formulas would be equivalent to Δ¹₂ formulas.
For properties expressible in S2S (viewing the set of all binary strings as a tree), for each node, only O(1) bits can be communicated between the left subtree and the right subtree and the rest (see communication complexity).
For a fixed k, a function from strings to k (i.e. natural numbers below k) can be encoded by a single set. Moreover, (s,t) ⇒ s01t′, where t′ doubles every character of t, is injective, and s ⇒ {s01t′: t∈{0,1}*} is S2S definable. By contrast, by a communication complexity argument, in S1S (below) a pair of sets is not encodable by a single set.
Weakenings of S2S: Weak S2S (WS2S) requires all sets to be finite (note that finiteness is expressible in S2S using Kőnig's lemma). S1S can be obtained by requiring that '1' does not appear in strings, and WS1S also requires finiteness. Even WS1S can interpret Presburger arithmetic with a predicate for powers of 2, as sets can be used to represent unbounded binary numbers with definable addition.
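The injectivity claim above — that pairing (s,t) as s, then 01, then t with every character doubled never collides — can be checked exhaustively on short strings (a minimal sketch; the helper names are ours):

```python
from itertools import product

def doubled(t):
    """Double every character: '01' -> '0011'."""
    return "".join(ch + ch for ch in t)

def encode(s, t):
    """Pair (s, t) as s + '01' + doubled(t)."""
    return s + "01" + doubled(t)

# All binary strings of length <= 4 (31 of them), hence 961 ordered pairs.
strings = [""] + ["".join(bits) for n in range(1, 5)
                  for bits in product("01", repeat=n)]
codes = {encode(s, t) for s in strings for t in strings}
assert len(codes) == len(strings) ** 2   # no two pairs share an encoding
```

The separator works because a doubled string consists of '00'/'11' pairs, so any stray '01' inside it sits at an odd offset and leaves an odd-length (hence invalid) tail.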
Decision complexity
S2S is decidable, and each of S2S, S1S, WS2S, WS1S has a nonelementary decision complexity corresponding to a linearly growing stack of exponentials. For the lower bound, it suffices to consider Σ¹₁ WS1S sentences. A single second order quantifier can be used to propose an arithmetic (or other) computation, which can be verified using first order quantifiers if we can test which numbers are equal. For this, if we appropriately encode numbers 1..m, we can encode a number with binary representation i₁i₂...iₘ as i₁ 1 i₂ 2 ... iₘ m, preceded by a guard. By merging testing of guards and reusing variable names, the number of bits is linear in the number of exponentials. For the upper bound, using the decision procedure (below), sentences with k-fold quantifier alternation can be decided in time corresponding to (k+O(1))-fold exponentiation of the sentence length (with uniform constants).
Axiomatization
WS2S can be axiomatized through certain basic properties plus induction schema.
S2S can be partially axiomatized by:
(1) ∃!s ∀t ( t0≠s ∧ t1≠s) (empty string, denoted by ε; ∃!s means "there is unique s")
(2) ∀s,t ∀i∈{0,1} ∀j∈{0,1} (si=tj ⇒ s=t ∧ i=j) (the use of i and j is an abbreviation; for i=j, 0 does not equal 1)
(3) ∀S (S(ε) ∧ ∀s (S(s) ⇒ S(s0) ∧ S(s1))⇒ ∀s S(s)) (induction)
(4) ∃S ∀s (S(s) ⇔ φ(s)) (S not free in φ)
(4) is the comprehension schema over formulas φ, which always holds for second order logic. As usual, if φ has free variables not shown, we take the universal closure of the axiom. If equality is primitive for predicates, one also adds extensionality S=T ⇔ ∀s (S(s) ⇔ T(s)). Since we have comprehension, induction can be a single statement rather than a schema.
The analogous axiomatization of S1S is complete. However, for S2S, completeness is open (as of 2021). While S1S has uniformization, there is no S2S definable (even allowing parameters) choice function that given a non-empty set S returns an element of S, and comprehension schemas are commonly augmented with various forms of the axiom of choice. However, (1)-(4) is complete when extended with a determinacy schema for certain parity games.
S2S can also be axiomatized by Π¹₃ sentences (using the prefix relation on strings as a primitive). However, it is not finitely axiomatizable, nor can it be axiomatized by Σ¹₃ sentences even if we add induction schema and a finite set of other sentences (this follows from its connection to Π¹₂-CA₀).
Theories related to S2S
For every finite k, the monadic second order (MSO) theory of countable graphs with treewidth ≤k (and a corresponding tree decomposition) is interpretable in S2S (see Courcelle's theorem). For example, the MSO theory of trees (as graphs) or of series-parallel graphs is decidable. Here (i.e. for bounded tree width), we can also interpret the finiteness quantifier for a set of vertices (or edges), and also count vertices (or edges) in a set modulo a fixed integer. Allowing uncountable graphs does not change the theory. Also, for comparison, S1S can interpret connected graphs of bounded pathwidth.
By contrast, for every set of graphs of unbounded treewidth, its existential (i.e. Σ¹₁) MSO theory is undecidable if we allow predicates on both vertices and edges. Thus, in a sense, decidability of S2S is the best possible. Graphs with unbounded treewidth have large grid minors, which can be used to simulate a Turing machine.
By reduction to S2S, the MSO theory of countable orders is decidable, as is the MSO theory of countable trees with their Kleene–Brouwer orders. However, the MSO theory of (ℝ, <) is undecidable. The MSO theory of ordinals <ω₂ is decidable; decidability for ω₂ is independent of ZFC (assuming Con(ZFC + weakly compact cardinal)). Also, an ordinal is definable using monadic second order logic on ordinals iff it can be obtained from definable regular cardinals by ordinal addition and multiplication.
S2S is useful for decidability of certain modal logics, with Kripke semantics naturally leading to trees.
S2S+U (or just S1S+U) is undecidable if U is the unbounding quantifier — UX Φ(X) iff Φ(X) holds for some arbitrarily large finite X. However, WS2S+U, even with quantification over infinite paths, is decidable, even with S2S subformulas that do not contain U.
Formula complexity
A set of binary strings is definable in S2S iff it is regular (i.e. forms a regular language). In S1S, a (unary) predicate on sets is (parameter-free) definable iff it is an ω-regular language. For S2S, for formulas that use their free variables only on strings not containing a 1, the expressiveness is the same as for S1S.
For every S2S formula φ(S1,...,Sk), (with k free variables) and finite tree of binary strings T, φ(S1∩T,...,Sk∩T) can be computed in time linear in |T| (see Courcelle's theorem), but as noted above, the overhead can be iterated exponential in the formula size (more precisely, the time is ).
For S1S, every formula is equivalent to a Δ¹₁ formula, and to a boolean combination of Π⁰₂ arithmetic formulas. Moreover, every S1S formula is equivalent to acceptance by a corresponding ω-automaton of the parameters of the formula. The automaton can be a deterministic parity automaton: A parity automaton has an integer priority for each state, and accepts iff the highest priority seen infinitely often is odd (alternatively, even).
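For intuition, acceptance by a deterministic parity automaton is easy to check on an ultimately periodic word u·vω: run the stem u, then iterate the period v until a (position-in-v, state) pair repeats, and inspect the highest priority on the resulting loop. A minimal sketch, using the "odd" convention above and an invented two-state automaton that accepts exactly the words containing infinitely many 1s:

```python
# Invented example automaton: state q1 means "just read a 1" and carries
# an odd priority, so the run sees an odd top priority infinitely often
# iff the input contains infinitely many 1s.
delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q0", ("q1", "1"): "q1"}
priority = {"q0": 2, "q1": 3}

def accepts(delta, priority, start, stem, period):
    """Does the automaton accept the ultimately periodic word stem·period^ω?"""
    state = start
    for ch in stem:                       # read the finite prefix u
        state = delta[(state, ch)]
    seen = {}                             # (position in v, state) -> step index
    trace = []                            # priorities along the run through v^ω
    pos, step = 0, 0
    while (pos, state) not in seen:       # iterate v until a configuration repeats
        seen[(pos, state)] = step
        state = delta[(state, period[pos])]
        trace.append(priority[state])
        pos = (pos + 1) % len(period)
        step += 1
    loop_start = seen[(pos, state)]
    # Highest priority on the loop = highest priority seen infinitely often.
    return max(trace[loop_start:]) % 2 == 1

accepts(delta, priority, "q0", "000", "01")   # infinitely many 1s -> True
accepts(delta, priority, "q0", "1", "0")      # finitely many 1s   -> False
```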
For S2S, using tree automata (below), every formula is equivalent to a Δ¹₂ formula. Moreover, every S2S formula is equivalent to a formula with just four quantifiers, ∃S∀T∃s∀t ... (assuming that our formalization has both the prefix relation and the successor functions). For S1S, three quantifiers (∃S∀s∃t) suffice, and for WS2S and WS1S, two quantifiers (∃S∀t) suffice; the prefix relation is not needed here for WS2S and WS1S.
However, with free second order variables, not every S2S formula can be expressed in second order arithmetic through just Π¹₁ transfinite recursion (see reverse mathematics). RCA₀ + (schema) {τ: τ is a true S2S sentence} is equivalent to (schema) {τ: τ is a Π¹₃ sentence provable in Π¹₂-CA₀}. Over a base theory, the schemas are equivalent to (schema over k) ∀S⊆ω ∃α₁<...<αₖ Lα₁(S) ≺Σ₁ ... ≺Σ₁ Lαₖ(S), where L is the constructible universe (see also large countable ordinal). Due to limited induction, Π¹₂-CA₀ does not prove that all true (under the standard decision procedure) Π¹₃ S2S statements are actually true, even though each such sentence is provable in Π¹₂-CA₀.
Moreover, given sets of binary strings S and T, the following are equivalent:
(1) T is S2S definable from some set of binary strings polynomial time computable from S.
(2) T can be computed from the set of winning positions for some game whose payoff is a finite boolean combination of Π⁰₂(S) sets.
(3) T can be defined from S in arithmetic μ-calculus (arithmetic formulas + least fixed-point logic)
(4) T is in the least β-model (i.e. an ω-model whose set-theoretic counterpart is transitive) containing S and satisfying all Π¹₃ consequences of Π¹₂-CA₀.
Models of S1S and S2S
In addition to the standard model (which is the unique MSO model for S1S and S2S), there are other models for S1S and S2S, which use some rather than all subsets of the domain (see Henkin semantics).
For every S⊆ω, sets recursive in S form an elementary submodel of the standard S1S model, and same for every non-empty collection of subsets of ω closed under Turing join and Turing reducibility.
This follows from relative recursiveness of S1S definable sets plus uniformization:
- φ(s) (as a function of s) can be computed from the parameters of φ and the values of φ(s) for a finite set of s (with its size bounded by the number of states in a deterministic automaton for φ).
- A witness for ∃S φ(S) can be obtained by choosing k and a finite fragment of S, and repeatedly extending S such that the highest priority during each extension is k and that the extension can be completed into S satisfying φ without hitting priorities above k (these are permitted only for the initial S). Also, by using lexicographically least shortest choices, there is an S1S formula φ' such that φ'⇒φ and ∃S φ(S) ⇔ ∃!S φ'(S) (i.e. uniformization; φ may have free variables not shown; φ' depends only on the formula φ).
The minimal model of S2S consists of all regular languages on binary strings. It is an elementary submodel of the standard model, so if an S2S parameter-free definable set of trees is non-empty, then it includes a regular tree. A regular language can also be treated as a regular {0,1}-labeled complete infinite binary tree (identified with predicates on strings). A labeled tree is regular if it can be obtained by unrolling a vertex-labeled finite directed graph with an initial vertex; a (directed) cycle in the graph reachable from the initial vertex gives an infinite tree. With this interpretation and encoding of regular trees, every true S2S sentence may already be provable in elementary function arithmetic. It is non-regular trees that may require nonpredicative comprehension for determinacy (below). There are nonregular (i.e. containing nonregular languages) models of S1S (and presumably S2S) (both with and without standard first order part) with a computable satisfaction relation. However, the set of recursive sets of strings does not form a model of S2S due to failure of comprehension and determinacy.
Decidability of S2S
The proof of decidability is by showing that every formula is equivalent to acceptance by a nondeterministic tree automaton (see tree automaton and infinite-tree automaton). An infinite tree automaton starts at the root and moves up the tree, and accepts iff every tree branch accepts. A nondeterministic tree automaton accepts iff player 1 has a winning strategy, where player 1 chooses an allowed (for the current state and input) pair of new states (p0,p1), while player 2 chooses the branch, with the transition to p0 if 0 is chosen and p1 otherwise. For a co-nondeterministic automaton, all choices are by player 2, while for deterministic, (p0,p1) is fixed by the state and input; and for a game automaton, the two players play a finite game to set the branch and the state. Acceptance on a branch is based on states seen infinitely often on the branch; parity automata are sufficiently general here.
For converting the formulas to automata, the base case is easy, and nondeterminism gives closure under existential quantifiers, so we only need closure under complementation. Using positional determinacy of parity games (which is where we need impredicative comprehension), non-existence of player 1 winning strategy gives a player 2 winning strategy S, with a co-nondeterministic tree automaton verifying its soundness. The automaton can then be made deterministic (which is where we get an exponential increase in the number of states), and thus existence of S corresponds to acceptance by a non-deterministic automaton.
Determinacy: Provably in ZFC, Borel games are determined, and the determinacy proof for boolean combinations of Π⁰₂ formulas (with arbitrary real parameters) also gives a strategy here that depends only on the current state and the position in the tree. The proof is by induction on the number of priorities. Assume that there are k priorities, with the highest priority being k, and that k has the right parity for player 2. For each position (tree position + state) assign the least ordinal α (if any) such that player 1 has a winning strategy with all entered (after one or more steps) priority k positions (if any) having labels <α. Player 1 can win if the initial position is labeled: Each time a priority k state is reached, the ordinal is decreased, and moreover in between the decreases, player 1 can use a strategy for k-1 priorities. Player 2 can win if the position is unlabeled: By the determinacy for k-1 priorities, player 2 has a strategy that wins or enters an unlabeled priority k state, in which case player 2 can again use that strategy. To make the strategy positional (by induction on k), when playing the auxiliary game, if two chosen positional strategies lead to the same position, continue with the strategy with the lower α, or for the same α (or for player 2) lower initial position (so we can switch a strategy finitely many times).
Automata determinization: For determinization of co-nondeterministic tree automata, it suffices to consider ω-automata, treating branch choice as input, determinizing the automaton, and using it for the deterministic tree automaton. Note that this does not work for nondeterministic tree automata as the determinization for going left (i.e. s→s0) can depend on the contents of the right branch; by contrast to nondeterminism, deterministic tree automata cannot even accept precisely nonempty sets. To determinize a nondeterministic ω-automaton M (for co-nondeterministic, take the complement, noting that deterministic parity automata are closed under complements), we can use a Safra tree with each node storing a set of possible states of M, and node creation and deletion based on reaching high priority states. For details, see the references.
Decidability of acceptance: Acceptance by a nondeterministic parity automaton of the empty tree corresponds to a parity game on a finite graph G. Using the above positional (also called memoryless) determinacy, this can be simulated by a finite game that ends when we reach a loop, with the winning condition based on the highest priority state in the loop. A clever optimization gives a quasipolynomial time algorithm, which is polynomial time when the number of priorities is small enough (which occurs commonly in practice).
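The positional determinacy used above also underlies Zielonka's classical recursive algorithm for solving finite parity games. The sketch below is exponential in the worst case (unlike the quasipolynomial method just mentioned), follows the convention that player 0 wins iff the highest priority seen infinitely often is even, and assumes every node has at least one outgoing edge:

```python
def attractor(nodes, edges, owner, target, player):
    """Nodes from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succ = edges[v] & nodes          # successors inside the subgame
            if (owner[v] == player and succ & attr) or \
               (owner[v] != player and succ <= attr):
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, edges, owner, priority):
    """Return {0: winning set of player 0, 1: winning set of player 1}."""
    if not nodes:
        return {0: set(), 1: set()}
    d = max(priority[v] for v in nodes)
    p = d % 2                                # player favoured by priority d
    top = {v for v in nodes if priority[v] == d}
    a = attractor(nodes, edges, owner, top, p)
    w = zielonka(nodes - a, edges, owner, priority)
    if not w[1 - p]:
        return {p: set(nodes), 1 - p: set()}
    b = attractor(nodes, edges, owner, w[1 - p], 1 - p)
    w2 = zielonka(nodes - b, edges, owner, priority)
    return {p: w2[p], 1 - p: w2[1 - p] | b}

# Two disjoint 2-cycles, all nodes owned by player 0: the cycle with even
# top priority (2) is won by player 0, the one with odd top priority (3)
# by player 1.
nodes = {0, 1, 2, 3}
edges = {0: {1}, 1: {0}, 2: {3}, 3: {2}}
owner = {v: 0 for v in nodes}
priority = {0: 2, 1: 1, 2: 3, 3: 0}
win = zielonka(nodes, edges, owner, priority)  # win[0] == {0, 1}, win[1] == {2, 3}
```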
Theory of trees: For decidability of MSO logic on trees (i.e. graphs that are trees), even with finiteness and modular counting quantifiers for first order objects, we can embed countable trees into the complete binary tree and use the decidability of S2S. For example, for a node s, we can represent its children by s1, s01, s001, and so on. For uncountable trees, we can use Shelah-Stup theorem (below). We can also add a predicate for a set of first order objects having cardinality ω₁, and the predicate for cardinality ω₂, and so on for infinite regular cardinals. Graphs of bounded tree width are interpretable using trees, and without predicates over edges this also applies to graphs of bounded clique width.
Combining S2S with other decidable theories
Tree extensions of monadic theories: By Shelah-Stup theorem, if a monadic relational model M is decidable, then so is its tree counterpart. For example, (modulo choice of formalization) S2S is the tree counterpart of {0,1}. In the tree counterpart, the first order objects are finite sequences of elements of M ordered by extension, and an M-relation Pi is mapped to Pi'(vd1,...,vdk) ⇔ Pi(d1,...,dk) with Pi' false otherwise (dj∈M, and v is a (possibly empty) sequence of elements of M). The proof is similar to the S2S decidability proof. At each step, a (nondeterministic) automaton gets a tuple of M objects (possibly second order) as input, and an M formula determines which state transitions are permitted. Player 1 (as above) chooses a mapping child⇒state that is permitted by the formula (given the current state), and player 2 chooses the child (of the node) to continue. To witness rejection by a non-deterministic automaton, for each (node, state) pick a set of (child, state) pairs such that for every choice, at least one of the pairs is hit, and such that all the resulting paths lead to rejection.
Combining a monadic theory with a first order theory: Feferman–Vaught theorem extends/applies as follows. If M is an MSO model and N is a first order model, then M remains decidable relative to a (Theory(M), Theory(N)) oracle even if M is augmented with all functions M→N where M is identified with its first order objects, and for each s∈M we use a disjoint copy of N, with the language modified accordingly. For example, if N is (,0,+,⋅), we can state ∀(function f) ∀s ∃r∈Ns f(s) +Ns r = 0Ns. If M is S2S (or more generally, the tree counterpart of some monadic model), the automata can now use N-formulas, and thereby convert f:M→Nk into a tuple of M sets. Disjointness is necessary as otherwise for every infinite N with equality, the extended S2S or just WS1S is undecidable. Also, for a (possibly incomplete) theory T, the theory TM of M-products of T is decidable relative to a (Theory(M), T) oracle, where a model of TM uses an arbitrary disjoint model Ns of T for each s∈M (as above, M is an MSO model; Theory(Ns) may depend on s). The proof is by induction on formula complexity. Let vs be the list of free Ns variables, including f(s) if function f is free. By induction, one shows that vs is only used through a finite set of N-formulas with |vs| free variables. Thus, we can quantify over all possible outcomes by using N (or T) to answer what is possible, and given a list of possibilities (or constraints), formulate a corresponding sentence in M.
Coding into extensions of S2S: Every decidable predicate on strings can be encoded (with linear time encoding and decoding) for decidability of S2S (even with the extensions above) together with the encoded predicate. Proof: Given a nondeterministic infinite tree automaton, we can partition the set of finite binary labeled trees (having labels over which the automaton can operate) into finitely many classes such that if a complete infinite binary tree can be composed of same-class trees, acceptance depends only on the class and the initial state (i.e. state the automaton enters the tree). (Note a rough similarity with the pumping lemma.) For example (for a parity automaton), assign trees to the same class if they have the same predicate that given initial_state and set Q of (state, highest_priority_reached) pairs returns whether player 1 (i.e. nondeterminism) can simultaneously force all branches to correspond to elements of Q. Now, for each k, pick a finite set of trees (suitable for coding) that belong to the same class for automata 1-k, with the choice of class consistent across k. To encode a predicate, encode some bits using k=1, then more bits using k=2, and so on.
References
Additional reference:
Mathematical logic
Computability theory
Articles containing proofs | S2S (mathematics) | [
"Mathematics"
] | 5,391 | [
"Computability theory",
"Articles containing proofs",
"Mathematical logic"
] |
72,252,618 | https://en.wikipedia.org/wiki/Tetrahedral%20bipyramid | In 4-dimensional geometry, the tetrahedral bipyramid is the direct sum of a tetrahedron and a segment, {3,3} + { }. Each face of a central tetrahedron is attached with two tetrahedra, creating 8 tetrahedral cells, 16 triangular faces, 14 edges, and 6 vertices. A tetrahedral bipyramid can be seen as two tetrahedral pyramids augmented together at their base.
It is the dual of a tetrahedral prism, , so it can also be given a Coxeter-Dynkin diagram, , and both have Coxeter notation symmetry [2,3,3], order 48.
Being convex with all regular cells (tetrahedra) means that it is a Blind polytope.
This bipyramid exists as the cells of the dual of the uniform rectified 5-simplex, and rectified 5-cube or the dual of any uniform 5-polytope with a tetrahedral prism vertex figure. And, as well, it exists as the cells of the dual to the rectified 24-cell honeycomb.
See also
Triangular bipyramid - A lower dimensional analogy of the tetrahedral bipyramid.
Octahedral bipyramid - A lower symmetry form of the 16-cell.
Cubic bipyramid
Dodecahedral bipyramid
Icosahedral bipyramid
References
4-polytopes | Tetrahedral bipyramid | [
"Mathematics"
] | 306 | [
"Geometry",
"Geometry stubs"
] |
72,252,654 | https://en.wikipedia.org/wiki/Icosahedral%20bipyramid | In 4-dimensional geometry, the icosahedral bipyramid is the direct sum of an icosahedron and a segment, Each face of a central icosahedron is attached with two tetrahedra, creating 40 tetrahedral cells, 80 triangular faces, 54 edges, and 14 vertices. An icosahedral bipyramid can be seen as two icosahedral pyramids augmented together at their bases.
It is the dual of a dodecahedral prism, Coxeter-Dynkin diagram , so the bipyramid can be described as . Both have Coxeter notation symmetry [2,3,5], order 240.
Having all regular cells (tetrahedra), it is a Blind polytope.
See also
Pentagonal bipyramid - A lower dimensional analogy
Tetrahedral bipyramid
Octahedral bipyramid - A lower symmetry form of the 16-cell.
Cubic bipyramid
Dodecahedral bipyramid
References
External links
Icosahedral tegum
4-polytopes | Icosahedral bipyramid | [
"Mathematics"
] | 219 | [
"Geometry",
"Geometry stubs"
] |
72,258,037 | https://en.wikipedia.org/wiki/Intracellular%20space | Intracellular space is the interior space of the plasma membrane. It contains about two-thirds of TBW. Cellular rupture may occur if the intracellular space becomes dehydrated, or if the opposite happens, where it becomes too bloated. Thus it is important for the liquid to stay in optimal quantity.
See also
Extracellular space
References
Cell anatomy
Cell biology | Intracellular space | [
"Biology"
] | 78 | [
"Cell biology"
] |
65,020,736 | https://en.wikipedia.org/wiki/15-minute%20city | The 15-minute city (FMC or 15mC) is an urban planning concept in which most daily necessities and services, such as work, shopping, education, healthcare, and leisure can be easily reached by a 15-minute walk, bike ride, or public transit ride from any point in the city. This approach aims to reduce car dependency, promote healthy and sustainable living, and improve wellbeing and quality of life for city dwellers.
Implementing the 15-minute city concept requires a multi-disciplinary approach, involving transportation planning, urban design, and policymaking, to create well-designed public spaces, pedestrian-friendly streets, and mixed-use development. This change in lifestyle may include remote working which reduces daily commuting and is supported by the recent widespread availability of information and communications technology. The concept has been described as a "return to a local way of life".
As people spend more time working from home or near their homes, there is less demand for large central office spaces and more need for flexible, local co-working spaces. The 15-minute city concept suggests a shift toward a decentralized network of workspaces within residential neighbourhoods, reducing the need for long commutes and promoting work-life balance.
The concept's roots can be traced to pre-modern urban planning traditions where walkability and community living were the primary focus before the advent of street networks and automobiles. In recent times, it builds upon similar pedestrian-centered principles found in New Urbanism, transit-oriented development, and other proposals that promote walkability, mixed-use developments, and compact, livable communities. Numerous models have been proposed about how the concept can be implemented, such as 15-minute cities being built from a series of smaller 5-minute neighborhoods, also known as complete communities or walkable neighborhoods.
The concept gained significant traction in recent years after Paris mayor Anne Hidalgo included a plan to implement the 15-minute city concept during her 2020 re-election campaign. Since then, a number of cities worldwide have adopted the same goal and many researchers have used the 15-minute model as a spatial analysis tool to evaluate accessibility levels within the urban fabric.
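As a toy illustration of how the 15-minute model is used as a spatial-accessibility measure (the street network, walking times, and amenity names below are all invented for the example), one can run a Dijkstra search with a 15-minute cutoff from an origin:

```python
import heapq

# Toy accessibility check: which destinations are reachable within a
# 15-minute walk of a given origin on a small street network?
# Edge weights are walking times in minutes (all data here invented).
street_network = {
    "home":   [("park", 5), ("school", 9)],
    "park":   [("home", 5), ("market", 7)],
    "school": [("home", 9), ("clinic", 8)],
    "market": [("park", 7)],
    "clinic": [("school", 8)],
}

def reachable_within(graph, origin, budget_minutes):
    """Dijkstra with a time cutoff: shortest times to nodes within budget."""
    best = {origin: 0.0}
    frontier = [(0.0, origin)]
    while frontier:
        t, node = heapq.heappop(frontier)
        if t > best.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, minutes in graph.get(node, []):
            nt = t + minutes
            if nt <= budget_minutes and nt < best.get(neighbor, float("inf")):
                best[neighbor] = nt
                heapq.heappush(frontier, (nt, neighbor))
    return best

times = reachable_within(street_network, "home", 15)
# park (5), school (9) and market (12) are within 15 minutes; clinic (17) is not.
```

Real analyses do the same thing on actual street networks (e.g. from OpenStreetMap data) for every residential location, then map how much of a city meets the 15-minute standard.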
History
The 15-minute city concept is derived from historical ideas about proximity and walkability, such as Clarence Perry's neighborhood unit. As an inspiration for the 15-minute city, Carlos Moreno, an advisor to Anne Hidalgo, cited Jane Jacobs's model presented in The Death and Life of Great American Cities.
The ongoing climate crisis and global COVID-19 pandemic have prompted a heightened focus on the 15-minute city concept. In July 2020, the C40 Cities Climate Leadership Group published a framework for cities to "build back better" using the 15-minute concept, referring specifically to plans implemented in Milan, Madrid, Edinburgh, and Seattle after COVID-19 outbreaks. Their report highlights the importance of inclusive community engagement through mechanisms like participatory budgeting and adjusting city plans and infrastructure to encourage dense, complete, overall communities.
A manifesto published in Barcelona in April 2020 by architecture theorist Massimo Paolini proposed radical change in the organization of cities in the wake of COVID-19, and was signed by 160 academics and 300 architects. The proposal has four key elements: reorganization of mobility, (re)naturalization of the city, de-commodification of housing, and de-growth.
In early 2023, far-right conspiracy theories began to flourish that described 15-minute cities as instruments of government repression, claiming that they were a pretext to introduce restrictions on travel by car. In fact, 15-minute city proposals do not involve any restrictions on travel by car; unrelated measures introduced to reduce traffic in some cities have been conflated with 15-minute cities.
Research models
The 15-minute city is a proposal for developing a polycentric city, where density is made pleasant, one's proximity is vibrant, and social intensity (a large number of productive, intricately linked social ties) is real. The key element of the model has been described by Carlos Moreno as "chrono-urbanism" or a refocus of interest on time value rather than time cost.
Moreno and the 15-minute city
Urbanist Carlos Moreno's 2021 article introduced the 15-minute city concept as a way to ensure that urban residents can fulfill six essential functions within a 15-minute walk or bike ride from their dwellings: living, working, commerce, healthcare, education and entertainment. The framework of this model has four components; density, proximity, diversity and digitalization.
Moreno cites the work of Nikos Salingaros, who theorizes that an optimal density for urban development exists which would encourage local solutions to local problems. The authors discuss proximity in terms of both space and time, arguing that a 15-minute city would reduce the space and time necessary for activity. Diversity in this 15-minute city model refers to mixed-use development and multicultural neighborhoods, both of which Moreno and others argue would improve the urban experience and boost community participation in the planning process. Digitalization is a key aspect of the 15-minute city derived from smart cities. Moreno and others argue that a Fourth Industrial Revolution has reduced the need for commuting because of access to technology like virtual communication and online shopping. They conclude by stating that these four components, when implemented at scale, would form an accessible city with a high quality of life.
Larson and the 20-minute city
Kent Larson described the concept of a 20-minute city in a 2012 TED talk, and his City Science Group at the MIT Media Lab has developed a neighborhood simulation platform to integrate the necessary design, technology, and policy interventions into "compact urban cells". In his "On Cities" masterclass for the Norman Foster Foundation, Larson proposed that the planet is becoming a network of cities, and that successful cities in the future will evolve into a network of high-performance, resilient, entrepreneurial communities.
D'Acci and the one-mile city
In 2013, Luca D'Acci (Associate Professor in Urban Studies at the Polytechnic University of Turin, Italy) proposed a city model "where each point can reach continuous natural areas, job locations, centralities, shops, amenities (recreational, medical, cultural), usual daily activities by 15/30 minute walking or within 15 minute biking". He called it a "one-mile green city", or "Isobenefit Urbanism". (The term "isobenefit" is a portmanteau word from "iso" meaning equal, and "benefit", which he defines as advantageous amenities, services, workplaces and green space.)
Weng and the 15-minute walkable neighborhood
In a 2019 article using Shanghai as a case study, Weng and his colleagues proposed the 15-minute walkable neighborhood with a focus on health, and specifically non-communicable diseases. The authors suggest that the 15-minute walkable neighborhood is a way to improve the health of residents, and they document existing disparities in walkability within Shanghai. They found that rural areas, on average, are significantly less walkable, and areas with low walkability tend to have a higher proportion of children. Compared to Moreno et al., the authors focused more on the health benefits of walking and differences in walkability and usage across age groups.
Da Silva and the 20-minute city
In their 2019 article, Da Silva et al. cite Tempe, Arizona, as a case study of an urban space where all needs could be met within 20 minutes by walking, biking, or transit. The authors found that Tempe is highly accessible, especially by bike, but that accessibility varies with geographic area. Compared to Moreno et al., the authors focused more on accessibility within the built environment.
Implementations
Asia
In 2019, Singapore's Land Transport Authority proposed a master plan that included the goals of "20-minute towns" and a "45-minute city" by 2040.
Israel has embraced the concept of a 15-minute city in new residential developments. According to Orli Ronen, the head of the Urban Innovation and Sustainability Lab at the Porter School for Environmental Studies at Tel Aviv University, new developments in Tel Aviv, Haifa, Beersheba, and central Jerusalem have delivered on the concept at least in part, but only Tel Aviv has been relatively successful.
Dubai launched the 20-minute city project in 2022, under which residents are able to access daily needs and destinations within 20 minutes by foot or bicycle. The plan involves placing 55% of residents within 800 meters of mass transit stations, allowing them to reach 80% of their daily needs and destinations.
In Quezon City, the largest city in the Philippines, the government announced in 2023 its plans to implement the 15-minute city concept to establish a walkable, people-friendly, and sustainable community for its residents. Influenced by the city of Paris, the government aims to make urban development people-centered and to further the city's goal of reaching carbon neutrality by 2050.
China
The 2016 Master Plan for Shanghai called for "15-minute community life circles", where residents could complete all of their daily activities within 15 minutes of walking. The community life circle has been implemented in other Chinese cities, like Baoding and Guangzhou. Xiong'an is also being developed under the 15-minute life circle concept.
The Standard for urban residential area planning and design (GB 50180–2018), a national standard that came into effect in 2018, stipulates four levels of residential areas: 15-min pedestrian-scale neighborhood, 10-min pedestrian-scale neighborhood, 5-min pedestrian-scale neighborhood, and a neighborhood block. Among them, "15-min pedestrian-scale neighborhood" means "residential area divided according to the principle that residents can meet their material, living and cultural demand by walking for 15 minutes; usually surrounded by urban trunk roads or site boundaries, with a population of 50,000 to 100,000 people (about 17,000 to 32,000 households) and complete supporting facilities."
Chengdu, to combat urban sprawl, commissioned the "Great City" plan, where development on the edges of the city would be dense enough to support all necessary services within a 15-minute walk.
Europe
The mayor of Paris, Anne Hidalgo, introduced the 15-minute city concept in her 2020 re-election campaign and began implementing it during the COVID-19 pandemic. For example, school playgrounds were converted to parks after school hours, while the Place de la Bastille and other squares have been revamped with trees and bicycle lanes.
Cagliari, a city on the Italian island of Sardinia, began a strategic plan to revitalize the city and improve walkability. The city actively solicited public feedback through a participatory planning process, as described in the Moreno model. A unique aspect of the plan calls for re-purposing public spaces and buildings that were no longer being used, relating to the general model of urban intensification.
In Utrecht, the fourth-largest city in the Netherlands, 100 percent of residents can reach all city necessities in a 15-minute bike ride, and 94% in a 10-minute bike ride. The local municipality has plans to improve this further by 2040.
In September 2023, the UK Government announced plans to "protect drivers from over-zealous traffic enforcement", in what it says is "part of a long-term plan to back drivers". These included plans "to stop councils implementing so called '15-minute cities', by consulting on ways to prevent schemes which aggressively restrict where people can drive".
The Polish city of Pleszew is claimed to be a 15-minute city.
Copenhagen's Nordhavn neighbourhood was developed according to a five-minute city concept. This is based on all daily amenities being located at a distance of 400 m from the nearest public transit stop – a distance walkable within 5 minutes.
North America
In 2012, Portland, Oregon, developed a plan for complete neighborhoods within the city, which are aimed at supporting youth, providing affordable housing, and promoting community-driven development and commerce in historically under-served neighborhoods. Similar to the Weng et al. model, the Portland plan emphasizes walking and cycling as ways to increase overall health and stresses the importance of the availability of affordable healthy food. The Portland plan calls for a high degree of transparency and community engagement during the planning process, which is similar to the diversity component of the Moreno et al. model.
In 2015, Kirkland, Washington, developed a "10-Minute Neighborhood Analysis" tool to guide the city's 2035 Comprehensive Plan. This tool is intended to guide community discussion about how the 10-Minute Neighborhood concept can improve livability and explore the policy changes necessary to achieve that vision.
South America
In March 2021, Bogotá, Colombia, implemented 84 kilometers of bike lanes to encourage social distancing during the COVID-19 pandemic. This expansion complemented the Ciclovía practice that originated in Colombia in 1974, where bicycles are given primary control of the streets. The resulting bicycle lane network is the largest of its kind in the world.
Oceania
The city of Melbourne, Australia, developed Plan Melbourne 2017–2050 to accommodate growth and combat sprawl. The plan contains multiple elements of the 15-minute city concept, including new bike lanes and the construction of "20-minute neighborhoods".
Societal effects
The 15-minute city, with its emphasis on walkability and accessibility, has been put forward as a way to better serve groups of people that have historically been left out of planning, such as women, children, people with disabilities, people with lived experience of mental illness, and the elderly.
Social infrastructure is also emphasized to maximize urban functions such as schools, parks, and complementary activities for residents. There is also a large focus on access to green space, which may promote positive environmental impacts such as increasing urban biodiversity and helping to protect the city from invasive species. Studies have found that increased access to green spaces can also have a positive impact on the mental and physical health of a city's inhabitants, reducing stress and negative emotions, increasing happiness, improving sleep, and promoting positive social interactions. Urban residents living near green spaces have also been found to exercise more, improving their physical and mental health.
Limitations
Limitations of the 15-minute city concept include the difficulty or impracticality of implementing it in established urban areas, where land use patterns and infrastructure are already in place. Additionally, the concept may not be feasible in areas with low population density, such as those with extensive urban sprawl, or in areas where lower-income workers commute long distances to or from work.
Noted exceptions include Chengdu, which used the 15-minute city concept to curb sprawl, and Melbourne, where Lord Mayor Sally Capp stressed the importance of public transit in expanding the radius of the 15-minute city.
In a paper published in the journal Sustainability, Georgia Pozoukidou and Zoi Chatziyiannaki write that the creation of dense, walkable urban cores often leads to gentrification or displacement of lower-income residents to outlying neighborhoods due to rising property values; to counteract this, the authors argue for affordable housing provisions to be integral with 15-minute city policies.
Furthermore, when the concept is applied as a literal spatial analysis research tool, it then refers to the use of an isochrone to express the radius of an area considered local. Isochrones have a long history of use in transportation planning and are constructed primarily using two variables: time and speed. However, the reliance on population-wide conventions, such as gait speed, to estimate the buffer zones of accessible areas may not accurately reflect the mobility capabilities of specific population groups, like the elderly. This may result in potential inaccuracies and fallacies in research models.
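As an illustration of that last point (an assumption-laden sketch, not part of the cited research), the snippet below shows how strongly the assumed gait speed drives the size of a circular 15-minute isochrone. The speeds used are illustrative values, and real isochrones follow the street network rather than straight-line distance, so these figures are upper bounds.

```python
# Sketch: radius and area of a circular 15-minute isochrone under different
# assumed gait speeds. The speeds below are illustrative assumptions, not
# figures from this article; real isochrones follow the street network.
import math

def isochrone_radius_m(minutes: float, speed_m_per_s: float) -> float:
    """Straight-line radius reachable in `minutes` at a constant speed."""
    return minutes * 60 * speed_m_per_s

# Illustrative gait speeds (assumed values for the sketch).
speeds = {"average adult": 1.4, "older adult": 0.9}

for group, v in speeds.items():
    r = isochrone_radius_m(15, v)
    area_km2 = math.pi * r**2 / 1e6
    print(f"{group}: radius {r:.0f} m, area {area_km2:.2f} km^2")
```

Because the covered area scales with the square of speed, halving the assumed walking speed quarters the area: this is why applying a population-wide "average adult" speed to elderly residents can substantially overstate what is locally accessible to them.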
Obstacles to implementation
In the United States, several factors make the implementation of 15-minute cities challenging. The biggest roadblock is strict zoning regulations, especially single-family zoning which makes high density housing construction illegal. NIMBYism is also an obstacle, as are parking requirements and the perceived low quality of urban schools which causes childbearing couples to move from urban areas to suburban areas.
Conspiracy theories
In 2023 conspiracy theories about the 15-minute concept began to flourish, which described the model as an instrument of government oppression. These claims are often part of or linked to other conspiracy theories such as QAnon, anti-vaccine theories or anti-5G misinformation that assert that Western governments seek to oppress their populations. Proponents of the 15-minute concept, including Carlos Moreno, have received death threats.
Some conspiracy theorists conflate the 15-minute concept with the British low-traffic neighbourhood approach, which includes license plate scanners in some implementations. This has led to assertions that the 15-minute model would fine residents for leaving their home districts, or that it would confine people in "open-air prisons". Conspiracy theorists believe the World Economic Forum (WEF) wants to lock people in their homes on the pretext of climate change. Such beliefs are part of a larger network of conspiracy theories surrounding the concept of a "Great Reset".
In a 2023 protest by some 2,000 demonstrators in Oxford, signs described 15-minute cities as "ghettos" and an instrument of "tyrannical control" by the WEF. Canadian media commentator Jordan Peterson has described 15-minute cities as a "perversion". QAnon supporters have claimed a February 2023 derailment of a train carrying hazardous chemicals in Ohio was part of a deliberate plot to force rural residents into 15-minute cities to restrict their personal freedom. Similar claims have been made about wildfires on the island of Maui in August 2023.
In 2023, the British Conservative government began to criticise the idea by name. In February 2023 the Conservative MP Nick Fletcher called 15-minute cities an "international socialist concept" during a debate in the UK Parliament, which was met with laughter. At the Conservative party conference in October 2023, Transport Secretary Mark Harper announced that he was "calling time on the misuse of so-called '15-minute cities'", criticising as "sinister" the idea that local councils could "decide how often you go to the shops" and "ration who uses the roads and when". No such powers have been proposed as part of the 15-minute city concept in the United Kingdom. Despite being specifically debunked in a guide given to MPs by the Leader of the House of Commons, Health Secretary Maria Caulfield included the fiction in a local election leaflet and reiterated it in a BBC interview.
See also
Fused grid Type of urban planning design that aims to reconcile the urban grid and Radburn design housing
Notes
References
Further reading
New Urbanism
Sustainable urban planning
Transportation planning
Sustainable transport
Mixed-use developments
Community building
Conspiracy theories | 15-minute city | [
"Physics"
] | 3,887 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
65,031,039 | https://en.wikipedia.org/wiki/Stormtroopers%20Advance%20Under%20a%20Gas%20Attack | Stormtroopers Advance Under a Gas Attack (German: Sturmtruppe geht unter Gas vor) is an engraving in aquatint by Otto Dix representing German soldiers in combat during the First World War. It is the twelfth in the series of fifty engravings entitled The War, published in 1924. Copies are kept at the German Historical Museum in Berlin, at the Museum of Modern Art in New York, and at the Minneapolis Institute of Art, among other public collections.
Description
The engraving is almost monochrome and rectangular in format (19.3 × 28.8 cm for the engraving, 34.8 × 47.3 cm for the sheet). It represents five German stormtroopers, recognizable by their steel helmets, all wearing gas masks as they advance on enemy lines during a gas attack.
References
Anti-war works
World War I in art
1920s prints
Otto Dix etchings | Stormtroopers Advance Under a Gas Attack | [
"Chemistry"
] | 188 | [
"Chemical war and weapons in popular culture",
"Chemical weapons"
] |
61,355,226 | https://en.wikipedia.org/wiki/NET%20Power%20Demonstration%20Facility | The NET Power Test Facility, located in La Porte, Tx, is an oxy-combustion, zero-emissions 50 MWth natural gas power plant owned and operated by NET Power. NET Power is owned by Constellation Energy Corporation, Occidental Petroleum Corporation (Oxy) Low Carbon Ventures, Baker Hughes Company and 8 Rivers Capital, the company holding the patents for the technology. The plant is a first of its kind Allam-Fetvedt Cycle which achieved first-fire in May of 2018. The Allam-Fetvedt cycle delivers lower cost power while eliminating atmospheric emissions. The plant was featured in The Global CCS Institutes 2018 Status of CCS report. In recognition of the Allam-Fetvedt Cycle demonstration plant in La Porte, Texas, NET Power was awarded the 2018 International Excellence in Energy Breakthrough Technological Project of the Year at the Abu Dhabi International Petroleum Exhibition and Conference (ADIPEC).
References
Energy
Natural gas-fired power stations in Texas | NET Power Demonstration Facility | [
"Physics"
] | 201 | [
"Energy (physics)",
"Energy",
"Physical quantities"
] |
63,544,575 | https://en.wikipedia.org/wiki/INTLAB | INTLAB (INTerval LABoratory) is an interval arithmetic library using MATLAB and GNU Octave, available in Windows and Linux, macOS. It was developed by S.M. Rump from Hamburg University of Technology. INTLAB was used to develop other MATLAB-based libraries such as VERSOFT and INTSOLVER, and it was used to solve some problems in the Hundred-dollar, Hundred-digit Challenge problems.
Version history
12/30/1998 Version 1
03/06/1999 Version 2
11/16/1999 Version 3
03/07/2002 Version 3.1
12/08/2002 Version 4
12/27/2002 Version 4.1
01/22/2003 Version 4.1.1
11/18/2003 Version 4.1.2
04/04/2004 Version 5
06/04/2005 Version 5.1
12/20/2005 Version 5.2
05/26/2006 Version 5.3
05/31/2007 Version 5.4
11/05/2008 Version 5.5
05/08/2009 Version 6
12/12/2012 Version 7
06/24/2013 Version 7.1
05/10/2014 Version 8
01/22/2015 Version 9
12/07/2016 Version 9.1
05/29/2017 Version 10
07/24/2017 Version 10.1
12/15/2017 Version 10.2
01/07/2019 Version 11
03/06/2020 Version 12
Functionality
INTLAB can help users solve a range of mathematical and numerical problems with interval arithmetic.
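INTLAB itself is a MATLAB/GNU Octave library; as a language-neutral sketch of the core idea it implements, the snippet below shows naive interval arithmetic, where each operation returns an interval guaranteed to contain the exact result. (INTLAB additionally uses directed rounding so its enclosures remain rigorous in floating point; that refinement is omitted here, and the snippet is an illustration, not INTLAB's API.)

```python
# Minimal interval-arithmetic sketch (illustration only; INTLAB itself is a
# MATLAB/Octave library and additionally controls rounding modes).
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Difference: subtract the opposite endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Product: the extremes occur at endpoint combinations.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, x: float) -> bool:
        return self.lo <= x <= self.hi

# Enclose f(x) = x*x - 2 for x somewhere in [1.4, 1.5]:
x = Interval(1.4, 1.5)
y = x * x - Interval(2.0, 2.0)
print(y)                # every true value of f on [1.4, 1.5] lies in this interval
print(y.contains(0.0))  # True: the enclosure cannot rule out a root
```

Verified methods of this kind let a program prove statements such as "this interval contains a root of f", which is the flavor of guarantee INTLAB provides on top of ordinary floating-point computation.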
Works cited by INTLAB
INTLAB is based on the previous studies of the main author, including his works with co-authors.
External links
See also
List of numerical analysis software
Comparison of linear algebra libraries
References
Numerical analysis
Numerical software
Computational science
Computer arithmetic | INTLAB | [
"Mathematics"
] | 361 | [
"Applied mathematics",
"Computational mathematics",
"Computational science",
"Computer arithmetic",
"Arithmetic",
"Mathematical relations",
"Numerical software",
"Numerical analysis",
"Approximations",
"Mathematical software"
] |
63,544,717 | https://en.wikipedia.org/wiki/Solidarity%20trial | The Solidarity trial for treatments is a multinational Phase III-IV clinical trial organized by the World Health Organization (WHO) and partners to compare four untested treatments for hospitalized people with severe COVID-19 illness. The trial was announced 18 March 2020, and as of 6 August 2021, 12,000 patients in 30 countries had been recruited to participate in the trial.
In May, the WHO announced an international coalition for simultaneously developing several candidate vaccines to prevent COVID-19 disease, calling this effort the Solidarity trial for vaccines.
The treatments being investigated are remdesivir, lopinavir/ritonavir combined, lopinavir/ritonavir combined with interferon-beta, and hydroxychloroquine or chloroquine. The hydroxychloroquine/chloroquine arm was discontinued in June 2020 after the trial concluded that it provided no benefit.
Solidarity trial for treatment candidates
The trial intends to rapidly assess thousands of COVID-19 infected people for the potential efficacy of existing antiviral and anti-inflammatory agents not yet evaluated specifically for COVID-19 illness, a process called "repurposing" or "repositioning" an already-approved drug for a different disease.
The Solidarity project is designed to give rapid insights to key clinical questions:
Do any of the drugs reduce mortality?
Do any of the drugs reduce the time a patient is hospitalized?
Do the treatments affect the need for people with COVID-19-induced pneumonia to be ventilated or maintained in intensive care?
Could such drugs be used to minimize the illness of COVID-19 infection in healthcare staff and people at high risk of developing severe illness?
Enrolling people with COVID-19 infection is simplified by gathering informed consent, and capturing data on an online clinical trial platform (Castor EDC). After the trial staff determine the drugs available at the hospital, the platform randomizes the hospitalized subject to one of the trial drugs or to the hospital standard of care for treating COVID-19. The trial physician records and submits follow-up information about the subject status and treatment, completing data input via the Castor EDC Platform. The design of the Solidarity trial is not double-blind – which is normally the standard in a high-quality clinical trial – but WHO needed speed with quality for the trial across many hospitals and countries. A global safety monitoring board of WHO physicians examine interim results to assist decisions on safety and effectiveness of the trial drugs, and alter the trial design or recommend an effective therapy. A similar web-based study to Solidarity, called "Discovery", was initiated in March across seven countries by INSERM (Paris, France).
The Solidarity trial seeks to implement coordination across hundreds of hospital sites in different countries – including those with poorly-developed infrastructure for clinical trials – yet needs to be conducted rapidly. According to John-Arne Røttingen, chief executive of the Research Council of Norway and chairman of the Solidarity trial international steering committee, the trial would be considered effective if therapies are determined to "reduce the proportion of patients that need ventilators by, say, 20%, that could have a huge impact on our national health-care systems."
Adaptive design
According to the WHO Director General, the aim of the trial is to "dramatically cut down the time needed to generate robust evidence about what drugs work", a process using an "adaptive design". The Solidarity and European Discovery trials apply adaptive design to rapidly alter trial parameters when results from the four experimental therapeutic strategies emerge.
Adaptive designs within ongoing Phase III-IV clinical trials – such as the Solidarity and Discovery projects – may shorten the trial duration and use fewer subjects, possibly expediting decisions for early termination to save costs if interim results are negative. If the Solidarity project shows early evidence of success, design changes across the project's international locations can be made rapidly to enhance overall outcomes of affected people and hasten use of the therapeutic drug.
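As a toy sketch of this idea (an assumption for illustration, not the Solidarity protocol or its actual stopping rules), the snippet below simulates a two-arm trial with interim looks and a simple futility rule: an arm showing no advantage at an interim analysis is stopped early, sparing the remaining planned enrollment.

```python
# Toy illustration (assumed rule, not the Solidarity protocol): an adaptive
# two-arm trial that looks at the data at interim points and stops early for
# futility when the treatment shows no advantage over control.
import random

def run_trial(p_control, p_treatment, interim_sizes, margin=0.0, seed=0):
    """Return ('stopped', n) or ('completed', n) under a toy futility rule."""
    rng = random.Random(seed)
    control, treatment = [], []
    for n in interim_sizes:
        # Enroll patients up to the next interim-analysis size.
        while len(control) < n:
            control.append(rng.random() < p_control)      # True = recovery
            treatment.append(rng.random() < p_treatment)
        effect = (sum(treatment) - sum(control)) / n      # observed difference
        if effect <= margin:                              # futility: stop early
            return "stopped", n
    return "completed", interim_sizes[-1]

# An ineffective drug (same recovery rate in both arms) tends to stop early,
# freeing the remaining planned enrollment for other candidates.
print(run_trial(0.5, 0.5, interim_sizes=[100, 200, 400]))
```

Real adaptive designs use pre-registered statistical boundaries and an independent monitoring board rather than a raw difference threshold, but the structure — enroll, look, possibly stop, repeat — is the same.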
Treatment candidates under study
The individual or combined drugs being studied in the Solidarity and Discovery projects are already approved for other diseases. They are:
Remdesivir
Lopinavir/ritonavir combined
Lopinavir/ritonavir combined with interferon-beta
Hydroxychloroquine or chloroquine (discontinued due to no benefit, June 2020)
Due to safety concerns and evidence of heart arrhythmias leading to higher death rates, the WHO suspended the hydroxychloroquine arm of the Solidarity trial in late May 2020, then reinstated it, then withdrew it again when an interim analysis in June showed that hydroxychloroquine provided no benefit to hospitalized people severely infected with COVID-19.
In October 2020, the World Health Organization Solidarity trial produced an interim report concluding that its "remdesivir, hydroxychloroquine, lopinavir and interferon regimens appeared to have little or no effect on hospitalized COVID-19, as indicated by overall mortality, initiation of ventilation and duration of hospital stay." Gilead – the manufacturer of remdesivir – criticized the Solidarity trial methodology after it showed no benefit of the treatments, claiming that the international nature of the Solidarity trial was a weakness, whereas many experts regard the multinational study as a strength. Purchase agreements between the EU and Gilead for remdesivir and granting of its Emergency Use Authorization by the US FDA during October were questioned by Solidarity trial scientists as not based on positive clinical trial data, when the interim analysis of the Solidarity trial had found remdesivir to be ineffective.
In January 2022, the Canadian component of the Solidarity trial reported that in-hospital people with COVID-19 treated with remdesivir had lower death rates (by about 4%) and reduced need for oxygen (less by 5%) and mechanical ventilation (less by 7%) compared to people receiving standard-of-care treatments.
Support and participation
During March, funding for the Solidarity trial reached million from 203,000 individual donations, charitable organizations and governments, with 45 countries involved in financing or trial management. As of 1 July 2020, nearly 5,500 patients in 21 countries of 39 that have approval to recruit were recruited to participate in the trial. More than 100 countries in all 6 WHO regions have expressed interest in participating.
Solidarity trial for vaccine candidates
The WHO has developed a multinational coalition of vaccine scientists defining a Global Target Product Profile (TPP) for COVID-19, identifying favorable attributes of safe and effective vaccines under two broad categories: "vaccines for the long-term protection of people at higher risk of COVID-19, such as healthcare workers", and other vaccines to provide rapid-response immunity for new outbreaks. The international TPP team was formed to 1) assess the development of the most promising candidate vaccines; 2) map candidate vaccines and their clinical trials worldwide, publishing a frequently-updated "landscape" of vaccines in development; 3) rapidly evaluate and screen for the most promising candidate vaccines simultaneously before they are tested in humans; and 4) design and coordinate a multiple-site, international randomized controlled trial (the Solidarity trial for vaccines) to enable simultaneous evaluation of the benefits and risks of different vaccine candidates under clinical trials in countries where there are high rates of COVID-19 disease, ensuring fast interpretation and sharing of results around the world. The WHO vaccine coalition will prioritize which vaccines should go into Phase II and III clinical trials, and determine harmonized Phase III protocols for all vaccines achieving the pivotal trial stage.
Solidarity Plus Trial
The WHO announced in August 2021 that it would roll out the next phase of the Solidarity trial, named the Solidarity PLUS trial, in 52 countries. The trial will enroll hospitalized patients to test three new drugs for potential treatment of COVID-19: artesunate, imatinib and infliximab. The selection of these therapies was made by an independent WHO expert panel. The drugs are already used for other indications: artesunate for malaria, imatinib for certain cancers, and infliximab, an anti-TNF agent, for Crohn's disease and rheumatoid arthritis. The drugs will be donated for the purposes of the trial by their manufacturers.
See also
COVID-19 drug repurposing research
COVID-19 drug development#Phase III-IV trials
RECOVERY Trial
PANORAMIC trial
AGILE trial
References
External links
"'Solidarity' clinical trial for COVID-19 treatment by the World Health Organization
COVID-19 (Questions & Answers) by the World Health Organization
COVID-19 (Q&A) by the US Centers for Disease Control and Prevention (CDC)
Coronaviruses by US National Institute for Allergy and Infectious Diseases
COVID-19 (Q&A) by the European Centre for Disease Prevention and Control
COVID-19 by the China National Health Commission
Anti-influenza agents
Clinical research
Clinical trials related to COVID-19
Drugs
Medical responses to the COVID-19 pandemic
Clinical trials | Solidarity trial | [
"Chemistry"
] | 1,852 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
63,548,048 | https://en.wikipedia.org/wiki/Subrata%20Roy%20%28scientist%29 | Subrata Roy (Bengali: সুব্রত রায়) is an Indian-born inventor, educator, and scientist known for his work in plasma-based flow control and plasma-based self-sterilizing technology. He is a professor of Mechanical and Aerospace Engineering at the University of Florida and the founding director of the Applied Physics Research Group at the University of Florida.
He is also the President and the founder of SurfPlasma Inc., a biotechnology company in Gainesville, Florida.
Biography
Subrata Roy earned his Ph.D. in engineering science from the University of Tennessee in Knoxville, Tennessee, in 1994. Roy was a senior research scientist at Computational Mechanics Corporation in Knoxville, Tennessee, and then professor of mechanical engineering at Kettering University until 2006. In 2006, Roy joined the University of Florida as a faculty member of the Department of Mechanical and Aerospace Engineering. He is a professor of Mechanical and Aerospace Engineering and the founding director of the Applied Physics Research Group at the University of Florida. He has also worked as a visiting professor at the University of Manchester and the Indian Institute of Technology Bombay.
Scientific work
Subrata Roy's research and scientific work encompasses computational fluid dynamics (CFD), plasma physics, heat transfer, magnetohydrodynamics, electric propulsion, and micro/nanoscale flows. In 2003, Roy incorporated Knudsen's theory that handles surface collisions of molecules by diffusive and specular reflections into hydrodynamic models, which has been used in shale gas seepage studies. In 2006, Roy invented the Wingless Electromagnetic Air Vehicle (WEAV) which was included in Scientific American in 2008 as the world's first wingless, electromagnetically driven air vehicle design. Roy is known for introducing various novel designs and configurations of plasma actuators for applications in mitigation of flow drag related fuel consumption, noise reduction, and active film cooling of turbine blades and propulsion. These designs and configurations include serpentine geometry plasma actuators, fan geometry plasma actuators, micro-scale actuators, multibarrier plasma actuators, and plasma actuated channels of atmospheric plasma actuators.
Roy also led multidisciplinary research on innovating eco-friendly ways of microorganism decontamination using plasma reactors.
Roy served as the Technical Discipline Chair for the 36th AIAA Thermophysics Conference in 2003, the 48th Aerospace Sciences Meeting (for Thermophysics) in 2010, the AIAA SciTech Plasma Dynamics and Lasers Conference in 2016, and served as the Forum Technical Chair for AIAA SciTech in 2018. Roy served (2005–2007) as an Associate Editor of the Journal of Fluids Engineering and served (2012–2017) as an Academic Editor of PLOS One. Roy serves as a nationally appointed member of the NATO Science and Technology Organisation working group on plasma actuator technologies; a member of the editorial board of Scientific Reports-Nature; and an Associate Editor of Frontiers in Physics, Frontiers in Astronomy and Space Sciences, and the Journal of Fluid Flow, Heat and Mass Transfer. Roy is an inducted Fellow of the National Academy of Inventors, a Distinguished Visiting Fellow of the Royal Academy of Engineering, a Fellow of the Royal Aeronautical Society, a lifetime member and Fellow of the American Society of Mechanical Engineers, and an Associate Fellow of the American Institute of Aeronautics and Astronautics.
Honors
Fellow, National Academy of Inventors
Distinguished Visiting Fellow, Royal Academy of Engineering
Fellow, Royal Aeronautical Society
Lifetime Fellow, American Society of Mechanical Engineers
Space Act Award 2016 NASA
References
External links
University of Florida faculty
Living people
American engineers
Plasma physicists
Computational fluid dynamicists
Scientists from Kolkata
Indian emigrants to the United States
Bengali scientists
American Hindus
20th-century Indian physicists
21st-century American inventors
University of Tennessee alumni
Jadavpur University alumni
Year of birth missing (living people)
American academics of Indian descent
Indian scholars | Subrata Roy (scientist) | [
"Physics"
] | 795 | [
"Plasma physicists",
"Plasma physics"
] |
63,550,672 | https://en.wikipedia.org/wiki/N%2CN-Dimethylaminomethylferrocene | N,N-Dimethylaminomethylferrocene is the dimethylaminomethyl derivative of ferrocene, (C5H5)Fe(C5H4CH2N(CH3)2. It is an air-stable, dark-orange syrup that is soluble in common organic solvents. The compound is prepared by the reaction of ferrocene with formaldehyde and dimethylamine:
(C5H5)2Fe + CH2O + HN(CH3)2 → (C5H5)Fe(C5H4CH2N(CH3)2) + H2O
It is a precursor to prototypes of ferrocene-containing redox sensors and diverse ligands.
The amine can be quaternized, which provides access to many derivatives.
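As a quick arithmetic check on the condensed formula implied above (C13H17FeN), the molar mass can be computed from standard atomic weights. This is a small illustrative script, not part of the source article:

```python
# Molar mass of N,N-dimethylaminomethylferrocene, condensed formula C13H17FeN.
# Standard atomic weights in g/mol (IUPAC values, rounded).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "Fe": 55.845, "N": 14.007}

def molar_mass(composition):
    """Sum atomic weights over an element -> atom-count mapping."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in composition.items())

# (C5H5)Fe(C5H4CH2N(CH3)2) flattens to 13 C, 17 H, 1 Fe, 1 N.
mass = molar_mass({"C": 13, "H": 17, "Fe": 1, "N": 1})
print(f"{mass:.2f} g/mol")  # → 243.13 g/mol
```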
References
Ferrocenes
Sandwich compounds
Cyclopentadienyl complexes | N,N-Dimethylaminomethylferrocene | [
"Chemistry"
] | 187 | [
"Organometallic chemistry",
"Cyclopentadienyl complexes",
"Sandwich compounds"
] |
63,553,560 | https://en.wikipedia.org/wiki/Cannon%E2%80%93Thurston%20map | In mathematics, a Cannon–Thurston map is any of a number of continuous group-equivariant maps between the boundaries of two hyperbolic metric spaces extending a discrete isometric actions of the group on those spaces.
The notion originated from a seminal 1980s preprint of James Cannon and William Thurston "Group-invariant Peano curves" (eventually published in 2007) about fibered hyperbolic 3-manifolds.
Cannon–Thurston maps provide many natural geometric examples of space-filling curves.
History
The Cannon–Thurston map first appeared in a mid-1980s preprint of James W. Cannon and William Thurston called "Group-invariant Peano curves". The preprint remained unpublished until 2007, but in the meantime had generated numerous follow-up works by other researchers.
In their paper Cannon and Thurston considered the following situation. Let M be a closed hyperbolic 3-manifold that fibers over the circle with fiber S. Then S itself is a closed hyperbolic surface, and its universal cover can be identified with the hyperbolic plane ℍ². Similarly, the universal cover of M can be identified with hyperbolic 3-space ℍ³. The inclusion S ⊆ M lifts to a π1(S)-invariant inclusion ℍ² ⊆ ℍ³. This inclusion is highly distorted because the action of π1(S) on ℍ³
is not geometrically finite.
Nevertheless, Cannon and Thurston proved that this distorted inclusion extends to a continuous π1(S)-equivariant map
j: S¹ → S²,
where S¹ = ∂ℍ² and S² = ∂ℍ³. Moreover, in this case the map j is surjective, so that it provides a continuous onto function from the circle onto the 2-sphere, that is, a space-filling curve.
Cannon and Thurston also explicitly described the map j, via collapsing stable and unstable laminations of the monodromy pseudo-Anosov homeomorphism of S for this fibration of M. In particular, this description implies that the map j is uniformly finite-to-one, with the pre-image of every point of S² having cardinality at most 2g, where g is the genus of S.
The paper of Cannon and Thurston generated a large amount of follow-up work, with other researchers analyzing the existence or non-existence of analogs of the map j in various other settings motivated by the Cannon–Thurston result.
Cannon–Thurston maps and Kleinian groups
Kleinian representations of surface groups
The original example of Cannon and Thurston can be thought of in terms of Kleinian representations of the surface group H = π1(S). As a subgroup of π1(M), the group H acts on ℍ³ by isometries, and this action is properly discontinuous. Thus one gets a discrete representation of H into Isom(ℍ³).
The group H also acts by isometries, properly discontinuously and co-compactly, on the universal cover ℍ² of S, with the limit set being equal to the whole boundary circle S¹. The Cannon–Thurston result can be interpreted as saying that these actions of H on ℍ² and ℍ³ induce a continuous H-equivariant map S¹ → S².
One can ask, given a hyperbolic surface S and a discrete representation ρ of π1(S) into Isom(ℍ³), whether there exists an induced continuous map between the corresponding boundaries.
For Kleinian representations of surface groups, the most general result in this direction is due to Mahan Mj (2014).
Let S be a complete connected finite volume hyperbolic surface. Thus S is a surface without boundary, with a finite (possibly empty) set of cusps. Then one can still identify the universal cover of S with ℍ² and its boundary with S¹ (even if S has some cusps). In this setting Mj proved the following theorem:
Let S be a complete connected finite volume hyperbolic surface and let H = π1(S). Let ρ: H → Isom(ℍ³) be a discrete faithful representation without accidental parabolics. Then ρ induces a continuous H-equivariant map ∂i: S¹ → S².
Here the "without accidental parabolics" assumption means that for h ∈ H, the element ρ(h) is a parabolic isometry of ℍ³ if and only if h is a parabolic isometry of ℍ². One of the important applications of this result is that in the above situation the limit set of ρ(H) is locally connected.
This result of Mj was preceded by numerous other results in the same direction, such as Minsky (1994), Alperin, Dicks and Porti (1999), McMullen (2001), Bowditch (2007) and (2013), Miyachi (2002), Souto (2006), Mj (2009), (2011), and others.
In particular, Bowditch's 2013 paper introduced the notion of a "stack" of Gromov-hyperbolic metric spaces and developed an alternative framework to that of Mj for proving various results about Cannon–Thurston maps.
General Kleinian groups
In a 2017 paper Mj proved the existence of the Cannon–Thurston map in the following setting:
Let ρ: G → Isom(ℍ³) be a discrete faithful representation, where G is a word-hyperbolic group and where ρ(G) contains no parabolic isometries of ℍ³. Then ρ induces a continuous G-equivariant map j: ∂G → S², where ∂G is the Gromov boundary of G, and where the image of j is the limit set of G in S².
Here "induces" means that the map from G ∪ ∂G to ℍ³ ∪ S² is continuous, where on G the map is given by an orbit map g ↦ gx₀ (for some basepoint x₀ ∈ ℍ³). In the same paper Mj obtains a more general version of this result, allowing G to contain parabolics, under some extra technical assumptions on G. He also provided a description of the fibers of j in terms of ending laminations.
Cannon–Thurston maps and word-hyperbolic groups
Existence and non-existence results
Let G be a word-hyperbolic group and let H ≤ G be a subgroup such that H is also word-hyperbolic. If the inclusion i:H → G extends to a continuous map ∂i: ∂H → ∂G between their hyperbolic boundaries, the map ∂i is called a Cannon–Thurston map. Here "extends" means that the map between the hyperbolic compactifications H ∪ ∂H and G ∪ ∂G, given by i on H and ∂i on ∂H, is continuous. In this setting, if the map ∂i exists, it is unique and H-equivariant, and the image ∂i(∂H) is equal to the limit set of H in ∂G.
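The extension property described above can be stated symbolically; the display below is a standard formulation, with the limit-set notation Λ chosen here for illustration:

```latex
% Cannon–Thurston map for a word-hyperbolic subgroup H of a word-hyperbolic group G:
% the inclusion i extends continuously to the hyperbolic compactifications.
\[
  \widehat{i}\colon H \cup \partial H \;\longrightarrow\; G \cup \partial G,
  \qquad
  \widehat{i}\big|_{H} = i, \qquad \widehat{i}\big|_{\partial H} = \partial i .
\]
% When \partial i exists it is unique, H-equivariant, and its image is the limit set of H:
\[
  \partial i(\partial H) \;=\; \Lambda_{\partial G}(H).
\]
```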
If H ≤ G is a quasi-isometrically embedded (i.e. quasiconvex) subgroup, then the Cannon–Thurston map ∂i: ∂H → ∂G exists and is a topological embedding.
However, it turns out that the Cannon–Thurston map exists in many other situations as well.
Mitra proved that if G is word-hyperbolic and H ≤ G is a normal word-hyperbolic subgroup, then the Cannon–Thurston map exists. (In this case if H and Q = G/H are infinite then H is not quasiconvex in G.) The original Cannon–Thurston theorem about fibered hyperbolic 3-manifolds is a special case of this result.
If H ≤ G are two word-hyperbolic groups and H is normal in G then, by a result of Mosher, the quotient group Q = G/H is also word-hyperbolic. In this setting Mitra also described the fibers of the map ∂i: ∂H → ∂G in terms of "algebraic ending laminations" on H, parameterized by the boundary points z ∈ ∂Q.
In another paper Mitra considered the case where a word-hyperbolic group G splits as the fundamental group of a graph of groups, where all vertex and edge groups are word-hyperbolic and the edge-monomorphisms are quasi-isometric embeddings. In this setting Mitra proved that the Cannon–Thurston map exists for the inclusion of every vertex group into G.
By combining and iterating these constructions, Mitra produced examples of hyperbolic subgroups of hyperbolic groups H ≤ G where the subgroup distortion of H in G is an arbitrarily high tower of exponentials, and the Cannon–Thurston map exists. Later Barker and Riley showed that one can arrange for H to have arbitrarily high primitive recursive distortion in G.
In a 2013 paper, Baker and Riley constructed the first example of a word-hyperbolic group G and a word-hyperbolic (in fact free) subgroup H ≤ G such that the Cannon–Thurston map does not exist.
Later Matsuda and Oguni generalized the Baker–Riley approach and showed that every non-elementary word-hyperbolic group H can be embedded in some word-hyperbolic group G in such a way that the Cannon–Thurston map does not exist.
Multiplicity of the Cannon–Thurston map
As noted above, if H is a quasi-isometrically embedded subgroup of a word-hyperbolic group G, then H is word-hyperbolic, and the Cannon–Thurston map exists and is injective. Moreover, it is known that the converse is also true: if H is a word-hyperbolic subgroup of a word-hyperbolic group G such that the Cannon–Thurston map exists and is injective, then H is quasi-isometrically embedded in G.
It is known, for general convergence-group reasons, that if H is a word-hyperbolic subgroup of a word-hyperbolic group G such that the Cannon–Thurston map exists, then every conical limit point for H in ∂G has exactly one pre-image under ∂i. However, the converse fails: if ∂i exists and is non-injective, then there always exists a non-conical limit point of H in ∂G with exactly one preimage under ∂i.
In the context of the original Cannon–Thurston paper, and for many generalizations to Kleinian representations, the Cannon–Thurston map is known to be uniformly finite-to-one. This means that for every boundary point, the full pre-image is a finite set with cardinality bounded by a constant depending only on S.
In general, it is known, as a consequence of the JSJ-decomposition theory for word-hyperbolic groups, that if 1 → H → G → Q → 1 is a short exact sequence of three infinite torsion-free word-hyperbolic groups, then H is isomorphic to a free product of some closed surface groups and of a free group.
If H = π1(S) is the fundamental group of a closed hyperbolic surface S, such hyperbolic extensions of H are described by the theory of "convex cocompact" subgroups of the mapping class group Mod(S). Every subgroup Γ ≤ Mod(S) determines, via the Birman short exact sequence, an extension
1 → H → EΓ → Γ → 1.
Moreover, the group EΓ is word-hyperbolic if and only if Γ ≤ Mod(S) is convex-cocompact.
In this case, by Mitra's general result, the Cannon–Thurston map ∂i:∂H → ∂EΓ does exist. The fibers of the map ∂i are described by a collection of ending laminations on S determined by Γ. This description implies that map ∂i is uniformly finite-to-one.
If Γ is a convex-cocompact purely atoroidal subgroup of Out(Fn) (where n ≥ 3), then the corresponding extension EΓ of Fn is word-hyperbolic. In this setting Dowdall, Kapovich and Taylor proved that the Cannon–Thurston map is uniformly finite-to-one, with point preimages having cardinality at most 2n. This result was first proved by Kapovich and Lustig under the extra assumption that Γ is infinite cyclic, that is, that Γ is generated by an atoroidal fully irreducible element of Out(Fn).
Ghosh proved that for an arbitrary atoroidal Γ (without requiring Γ to be convex cocompact) the Cannon–Thurston map is uniformly finite-to-one, with a bound on the cardinality of point preimages depending only on n. (However, Ghosh's result does not provide an explicit bound in terms of n, and it is still unknown if the 2n bound always holds in this case.)
It remains unknown whether, whenever H is a word-hyperbolic subgroup of a word-hyperbolic group G such that the Cannon–Thurston map exists, the map must be finite-to-one.
However, it is known that in this setting for every such that p is a conical limit point, the set has cardinality 1.
Generalizations, applications and related results
As an application of the result about the existence of Cannon–Thurston maps for Kleinian surface group representations, Mj proved that if G is a finitely generated Kleinian group whose limit set is connected, then the limit set is locally connected.
Leininger, Mj and Schleimer, given a closed hyperbolic surface S, constructed a 'universal' Cannon–Thurston map from a subset of the circle S¹ to the boundary of the curve complex of S with one puncture, such that this map, in a precise sense, encodes all the Cannon–Thurston maps corresponding to arbitrary ending laminations on S. As an application, they prove that this boundary of the curve complex is path-connected and locally path-connected.
Leininger, Long and Reid used Cannon–Thurston maps to show that any finitely generated torsion-free nonfree Kleinian group with limit set equal to , which is not a lattice and contains no parabolic elements, has discrete commensurator in .
Jeon and Ohshika used Cannon–Thurston maps to establish measurable rigidity for Kleinian groups.
Inclusions of relatively hyperbolic groups as subgroups of other relatively hyperbolic groups in many instances also induce equivariant continuous maps between their Bowditch boundaries; such maps are also referred to as Cannon–Thurston maps.
More generally, if G is a group acting as a discrete convergence group on two metrizable compacta M and Z, a continuous G-equivariant map M → Z (if such a map exists) is also referred to as a Cannon–Thurston map. Of particular interest in this setting is the case where G is word-hyperbolic and M = ∂G is the hyperbolic boundary of G, or where G is relatively hyperbolic and M = ∂G is the Bowditch boundary of G.
Mj and Pal obtained a generalization of Mitra's earlier result for graphs of groups to the relatively hyperbolic context.
Pal obtained a generalization of Mitra's earlier result, about the existence of the Cannon–Thurston map for short exact sequences of word-hyperbolic groups, to the relatively hyperbolic context.
Mj and Rafi used the Cannon–Thurston map to study which subgroups are quasiconvex in extensions of free groups and surface groups by convex cocompact subgroups of Out(Fn) and of mapping class groups.
References
Further reading
Group theory
Dynamical systems
Geometric topology
Geometric group theory | Cannon–Thurston map | [
"Physics",
"Mathematics"
] | 3,033 | [
"Geometric group theory",
"Group actions",
"Geometric topology",
"Group theory",
"Fields of abstract algebra",
"Topology",
"Mechanics",
"Symmetry",
"Dynamical systems"
] |
63,554,061 | https://en.wikipedia.org/wiki/Dysprosium%28III%29%20fluoride | Dysprosium(III) fluoride is an inorganic compound of dysprosium with a chemical formula DyF3.
Production
Dysprosium(III) fluoride can be produced by mixing dysprosium(III) chloride or dysprosium(III) carbonate into 40% hydrofluoric acid.
DyF3 can also be produced by hydrothermal reaction of dysprosium nitrate and sodium tetrafluoroborate at 200 °C.
DyF3 can also be produced when dysprosium oxide and ammonium bifluoride are mixed and heated to 300 °C until the oxide is porous, with continued heating to 700 °C. When hydrogen fluoride is introduced, a reaction occurs:
Dy2O3 + 6 HF → 2 DyF3 + 3 H2O
Properties
Dysprosium(III) fluoride is a white, odorless solid that is insoluble in water. It has an orthorhombic crystal structure with the space group Pnma (space group no. 62).
References
Fluorides
Dysprosium compounds
Lanthanide halides | Dysprosium(III) fluoride | [
"Chemistry"
] | 232 | [
"Inorganic compounds",
"Fluorides",
"Inorganic compound stubs",
"Salts"
] |
63,556,051 | https://en.wikipedia.org/wiki/METRNL | Meteorin-like/Meteorin-Beta (Metrnl)/IL-41, also known as subfatin and cometin, is a small (~27kDa) secreted cytokine, protein encoded by a gene called meteorin-like (METRNL).
Lower serum levels of Metrnl might be a risk factor for developing coronary artery disease and type 2 diabetes mellitus.
References
Human proteins
Hormones
Cell signaling
Signal transduction
Cytokines | METRNL | [
"Chemistry",
"Biology"
] | 99 | [
"Biochemistry",
"Cytokines",
"Neurochemistry",
"Signal transduction"
] |
76,616,559 | https://en.wikipedia.org/wiki/Cannabis%20irradiation | Cannabis irradiation is a process used in the cannabis industry to remove or inactivate microbial contaminants (mold, fungus, etc.) from cannabis meant for human consumption, using various forms of radiation. The radiation applied may include gamma radiation, electron beam irradiation, and X-rays. Cold plasma has also been studied experimentally in Israel. As of 2021, the most common radiation used to decontaminate was gamma rays. In the regulated Canadian market, "irradiation is considered to be an effective way of meeting [microbial contaminants] standards", and it has been "standard practice" since at least 2016 in Canada and the Netherlands.
See also
Cannabis product testing
Food irradiation
References
Sources
Further reading
Cannabis industry
Radiation | Cannabis irradiation | [
"Physics",
"Chemistry"
] | 158 | [
"Transport phenomena",
"Waves",
"Physical phenomena",
"Radiation"
] |
76,628,805 | https://en.wikipedia.org/wiki/Thorium%20dichloride | Thorium dichloride is a binary inorganic compound of thorium metal and chloride with the chemical formula .
Synthesis
Th-metal is dissolved in alkali chloride. Thorium tetrachloride melts up to .
References
Thorium compounds
Nuclear materials
Chlorides
Actinide halides | Thorium dichloride | [
"Physics",
"Chemistry"
] | 60 | [
"Chlorides",
"Inorganic compounds",
"Salts",
"Materials",
"Nuclear materials",
"Matter"
] |
76,633,054 | https://en.wikipedia.org/wiki/Californium%20dichloride | Californium dichloride is a binary inorganic compound of californium metal and chlorine with the chemical formula .
Synthesis
CfCl2 can be prepared by hydrogen reduction of californium(III) chloride (CfCl3) at a high temperature (600 °C):
2 CfCl3 + H2 → 2 CfCl2 + 2 HCl
Physical properties
The compound forms a moisture-sensitive amber solid.
References
Californium compounds
Nuclear materials
Chlorides
Actinide halides | Californium dichloride | [
"Physics",
"Chemistry"
] | 73 | [
"Chlorides",
"Inorganic compounds",
"Salts",
"Materials",
"Nuclear materials",
"Matter"
] |
59,987,820 | https://en.wikipedia.org/wiki/Orbiting%20Carbon%20Observatory%203 | The Orbiting Carbon Observatory-3 (OCO-3) is a NASA-JPL instrument designed to measure carbon dioxide in Earth's atmosphere. The instrument is mounted on the Japanese Experiment Module-Exposed Facility on board the International Space Station (ISS). OCO-3 was scheduled to be transported to space by a SpaceX Dragon from a Falcon 9 rocket on 30 April 2019, but the launch was delayed to 3 May, due to problems with the space station's electrical power system. This launch was further delayed to 4 May due to electrical issues aboard Of Course I Still Love You (OCISLY), the barge used to recover the Falcon 9’s first stage. OCO-3 was launched as part of CRS-17 on 4 May 2019 at 06:48 UTC. The nominal mission lifetime is ten years.
OCO-3 was assembled using spare materials from the Orbiting Carbon Observatory-2 satellite. Because the OCO-3 instrument is similar to the OCO-2 instrument, it is expected to have similar performance, with its measurements used to quantify atmospheric CO2 to 1 ppm precision or better at 3 Hz.
History and timeline
24 February 2009 - Orbiting Carbon Observatory was launched on a Taurus XL rocket but failed to achieve orbit when the fairing failed to separate from the satellite.
1 February 2010 - The 2010 President's budget included funding for development and re-flight of an OCO replacement.
October 2010 - The Orbiting Carbon Observatory-2 project went into implementation phase.
2 July 2014 - OCO-2 was successfully launched from Vandenberg Air Force Base with a Delta II rocket.
2015 - Funding for the OCO-3 project cancelled.
22 December 2015 - OCO-3 project authorized to proceed. Funding was included in the 2016 spending bill.
16 March 2017 - OCO-3 was not included in the proposed FY2018 presidential budget.
23 March 2018 - Funding for the OCO-3 project was restored.
May 2018 - Instrument underwent TVAC testing.
4 May 2019 - Launched using a Falcon 9 rocket from Cape Canaveral Air Force Station. The delivery was part of SpaceX CRS-17, which also included delivery of STP-H6 and a cargo resupply.
After arrival - Robotic installation onto Exposed Facility Unit 3 (EFU 3) on the JEM-EF.
Instrument design
OCO-3 is constructed from spare equipment from the OCO-2 mission. Thus its physical characteristics are similar, but with some adaptations. A 2-axis pointing mirror was added, which allows targeting of cities and other areas for area mapping (also called "snapshot mode"). A resolution context camera was also added. An onboard cryocooler will maintain the detectors at cryogenic temperatures. Entrance optics were modified to maintain a ground footprint similar to OCO-2's.
Similar to OCO and OCO-2, the main measurement will be of reflected near-IR sunlight. Grating spectrometers separate incoming light into different components of the electromagnetic spectrum (wavelengths, or "colors"). Because CO2 and molecular oxygen absorb light at specific wavelengths, the absorption levels at different wavelengths provide information on the amount of these gases. Three bands are used, called the Weak CO2 band (around 1.6 μm), the Strong CO2 band (around 2.0 μm), and the Oxygen-A band (around 0.76 μm). There are 1,016 spectral elements per band, and measurements are made simultaneously at 8 side-by-side locations or "footprints", 3 times per second.
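The sampling figures above imply a simple spectral data rate. The arithmetic below is an illustration derived only from the numbers stated in this section, not an official instrument specification:

```python
# Spectral samples per second implied by the OCO-3 figures in the text:
# 3 bands x 1,016 spectral elements x 8 footprints, measured 3 times per second.
bands = 3
spectral_elements_per_band = 1016
footprints = 8
measurements_per_second = 3

samples_per_sounding = bands * spectral_elements_per_band * footprints
samples_per_second = samples_per_sounding * measurements_per_second

print(samples_per_sounding)  # 24384 spectral samples per measurement
print(samples_per_second)    # 73152 spectral samples per second
```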
Expected data use
Overall measurements from OCO-3 will help quantify sources and sinks of carbon dioxide from terrestrial ecosystems, the oceans, and from anthropogenic sources. Due to the ISS orbit, measurements will be made at latitudes less than 52°. Data from OCO-3 are expected to significantly improve understanding of global emissions from human activities, for example, using measurements over cities. Near simultaneous observations from other instruments onboard the International Space Station such as ECOSTRESS (measuring plant temperatures) and Global Ecosystem Dynamics Investigation lidar (measuring forest structure) may be combined with OCO-3 observations to help improve the understanding of the terrestrial ecosystem. Similar to OCO-2, OCO-3 will also measure Solar Induced Fluorescence which is a process that occurs during plant photosynthesis.
See also
Greenhouse Gases Observing Satellite
Space-based measurements of carbon dioxide
Total Carbon Column Observing Network
References
Kibo (ISS module)
Satellite meteorology
SpaceX payloads contracted by NASA
Spacecraft instruments
Spectrometers
Spacecraft launched in 2019
Satellites monitoring GHG emissions | Orbiting Carbon Observatory 3 | [
"Physics",
"Chemistry",
"Environmental_science"
] | 941 | [
"Spectrum (physical sciences)",
"Environmental chemistry",
"Greenhouse gases",
"Spectrometers",
"Spectroscopy"
] |
59,990,826 | https://en.wikipedia.org/wiki/CRISPR%20gene%20editing | CRISPR gene editing (CRISPR, pronounced (crisper), refers to a clustered regularly interspaced short palindromic repeats") is a genetic engineering technique in molecular biology by which the genomes of living organisms may be modified. It is based on a simplified version of the bacterial CRISPR-Cas9 antiviral defense system. By delivering the Cas9 nuclease complexed with a synthetic guide RNA (gRNA) into a cell, the cell's genome can be cut at a desired location, allowing existing genes to be removed or new ones added in vivo.
The technique is considered highly significant in biotechnology and medicine as it enables editing genomes in vivo and is precise, cost-effective, and efficient. It can be used in the creation of new medicines, agricultural products, and genetically modified organisms, or as a means of controlling pathogens and pests. It also offers potential in the treatment of inherited genetic diseases as well as diseases arising from somatic mutations such as cancer. However, its use in human germline genetic modification is highly controversial. The development of this technique earned Jennifer Doudna and Emmanuelle Charpentier the Nobel Prize in Chemistry in 2020. The third researcher group that shared the Kavli Prize for the same discovery, led by Virginijus Šikšnys, was not awarded the Nobel prize.
Working like genetic scissors, the Cas9 nuclease opens both strands of the targeted sequence of DNA to introduce the modification by one of two methods. Knock-in mutations, facilitated via homology-directed repair (HDR), represent the traditional pathway of targeted genomic editing approaches. This allows for the introduction of targeted DNA damage and repair. HDR employs the use of similar DNA sequences to drive the repair of the break via the incorporation of exogenous DNA to function as the repair template. This method relies on the periodic and isolated occurrence of DNA damage at the target site in order for the repair to commence. Knock-out mutations caused by CRISPR-Cas9 result from the repair of the double-stranded break by means of non-homologous end joining (NHEJ) or POLQ/polymerase theta-mediated end-joining (TMEJ). These end-joining pathways can often result in random deletions or insertions at the repair site, which may disrupt or alter gene functionality. Therefore, genomic engineering by CRISPR-Cas9 gives researchers the ability to generate targeted random gene disruption.
While genome editing in eukaryotic cells has been possible using various methods since the 1980s, the methods employed had proven to be inefficient and impractical to implement on a large scale. With the discovery of CRISPR and specifically the Cas9 nuclease molecule, efficient and highly selective editing became possible. Cas9 derived from the bacterial species Streptococcus pyogenes has facilitated targeted genomic modification in eukaryotic cells by allowing for a reliable method of creating a targeted break at a specific location as designated by the crRNA and tracrRNA guide strands. Researchers can insert Cas9 and template RNA with ease in order to silence or cause point mutations at specific loci. This has proven invaluable for quick and efficient mapping of genomic models and biological processes associated with various genes in a variety of eukaryotes. Newly engineered variants of the Cas9 nuclease that significantly reduce off-target activity have been developed.
CRISPR-Cas9 genome editing techniques have many potential applications. The use of the CRISPR-Cas9-gRNA complex for genome editing was the AAAS's choice for Breakthrough of the Year in 2015. Many bioethical concerns have been raised about the prospect of using CRISPR for germline editing, especially in human embryos. In 2023, the first drug making use of CRISPR gene editing, Casgevy, was approved for use in the United Kingdom, to cure sickle-cell disease and beta thalassemia. Casgevy was approved for use in the United States on December 8, 2023, by the Food and Drug Administration.
History
Other methods
In the early 2000s, German researchers began developing zinc finger nucleases (ZFNs), synthetic proteins whose DNA-binding domains enable them to create double-stranded breaks in DNA at specific points. ZFNs have a higher precision and the advantage of being smaller than Cas9, but ZFNs are not as commonly used as CRISPR-based methods. In 2010, synthetic nucleases called transcription activator-like effector nucleases (TALENs) provided an easier way to target a double-stranded break to a specific location on the DNA strand. Both zinc finger nucleases and TALENs require the design and creation of a custom protein for each targeted DNA sequence, which is a much more difficult and time-consuming process than that of designing guide RNAs. CRISPRs are much easier to design because the process requires synthesizing only a short RNA sequence, a procedure that is already widely used for many other molecular biology techniques (e.g. creating oligonucleotide primers).
Whereas methods such as RNA interference (RNAi) do not fully suppress gene function, CRISPR, ZFNs, and TALENs provide full, irreversible gene knockout. CRISPR can also target several DNA sites simultaneously simply by introducing different gRNAs. In addition, the costs of employing CRISPR are relatively low.
Discovery
In 2005, Alexander Bolotin at the French National Institute for Agricultural Research (INRA) discovered a CRISPR locus that contained novel Cas genes, significantly one that encoded a large protein known as Cas9.
In 2006, Eugene Koonin at the US National Center for Biotechnology information, NIH, proposed an explanation as to how CRISPR cascades as a bacterial immune system.
In 2007, Philippe Horvath at Danisco France SAS displayed experimentally how CRISPR systems are an adaptive immune system, and integrate new phage DNA into the CRISPR array, which is how they fight off the next wave of attacking phage.
In 2012, the research team led by professor Jennifer Doudna (University of California, Berkeley) and professor Emmanuelle Charpentier (Umeå University) were the first people to identify, disclose, and file a patent application for the CRISPR-Cas9 system needed to edit DNA. They also published their finding that CRISPR-Cas9 could be programmed with RNA to edit genomic DNA, now considered one of the most significant discoveries in the history of biology.
Patents and commercialization
SAGE Labs (part of the Horizon Discovery group) had exclusive rights from one of those companies to produce and sell genetically engineered rats, and non-exclusive rights for mouse and rabbit models. Thermo Fisher Scientific had licensed intellectual property from ToolGen to develop CRISPR reagent kits.
Patent rights to CRISPR were contested. Several companies formed to develop related drugs and research tools. As companies ramped up financing, doubts as to whether CRISPR could be quickly monetized were raised. In 2014, Feng Zhang of the Broad Institute of MIT and Harvard and nine others were awarded US patent number 8,697,359 over the use of CRISPR–Cas9 gene editing in eukaryotes. Although Charpentier and Doudna (referred to as CVC) were credited for the conception of CRISPR, the Broad Institute was the first to achieve a "reduction to practice" according to patent judges Sally Gardner Lane, James T. Moore and Deborah Katz.
The first set of patents was awarded to the Broad team in 2015, prompting attorneys for the CVC group to request the first interference proceeding. In February 2017, the US Patent Office ruled on a patent interference case brought by University of California with respect to patents issued to the Broad Institute, and found that the Broad patents, with claims covering the application of CRISPR-Cas9 in eukaryotic cells, were distinct from the inventions claimed by University of California.
Shortly after, University of California filed an appeal of this ruling. In 2019 the second interference dispute was opened. This was in response to patent applications made by CVC that required the appeals board to determine the original inventor of the technology. The USPTO ruled in March 2022 against UC, stating that the Broad Institute were first to file. The decision affected many of the licensing agreements for the CRISPR editing technology that was licensed from UC Berkeley. UC stated its intent to appeal the USPTO's ruling.
Recent events
In March 2017, the European Patent Office (EPO) announced its intention to allow claims for editing all types of cells to the Max Planck Institute in Berlin, the University of California, and the University of Vienna, and in August 2017, the EPO announced its intention to allow CRISPR claims in a patent application that MilliporeSigma had filed. The patent situation in Europe was complex, with MilliporeSigma, ToolGen, Vilnius University, and Harvard contending for claims, along with the University of California and Broad.
In July 2018, the ECJ ruled that gene editing for plants was a sub-category of GMO foods and therefore that the CRISPR technique would henceforth be regulated in the European Union by their rules and regulations for GMOs.
In February 2020, a US trial showed safe CRISPR gene editing on three cancer patients.
In October 2020, researchers Emmanuelle Charpentier and Jennifer Doudna were awarded the Nobel Prize in Chemistry for their work in this field. They made history as the first two women to share this award without a male contributor.
In June 2021, the first, small clinical trial of intravenous CRISPR gene editing in humans concluded with promising results.
In September 2021, the first CRISPR-edited food went on public sale in Japan. Tomatoes were genetically modified for around five times the normal amount of possibly calming GABA. CRISPR was first applied in tomatoes in 2014.
In December 2021, it was reported that the first CRISPR-gene-edited marine animal/seafood (and second set of CRISPR-edited food) had gone on public sale in Japan: two fish, one species growing to twice the size of natural specimens due to disruption of leptin, which controls appetite, and the other growing to 1.2 times the natural average size with the same amount of food due to disabled myostatin, which inhibits muscle growth.
A 2022 study found that knowing more about CRISPR tomatoes had a strong effect on participants' preferences. "Almost half of the 32 participants from Germany who are scientists demonstrated constant choices, while the majority showed increased willingness to buy CRISPR tomatoes, mostly non-scientists."
In May 2021, UC Berkeley announced its intent to auction non-fungible tokens of both the patent for CRISPR gene editing and cancer immunotherapy, while retaining ownership of the patents. 85% of the funds gathered through the sale of the collection, named The Fourth Pillar, were to be used to finance research. It sold in June 2022 for 22 Ether.
In November 2023, the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA) became the first in the world to approve the use of the first drug based on CRISPR gene editing, Casgevy, to treat sickle-cell anemia and beta thalassemia. Casgevy, or exagamglogene autotemcel, directly acts on the genes of the stem cells inside the patient's bones, having them produce healthy red blood cells. This treatment thus avoids the need for regular, costly blood transfusions.
In December 2023, the FDA approved the first gene therapy in the US to treat patients with Sickle Cell Disease (SCD). The FDA approved two milestone treatments, Casgevy and Lyfgenia, representing the first cell-based gene therapies for the treatment of SCD.
Genome engineering
CRISPR-Cas9 genome editing uses a Type II CRISPR system. This system includes a ribonucleoprotein (RNP), consisting of Cas9, crRNA, and tracrRNA, along with an optional DNA repair template.
Major components
CRISPR-Cas9 often employs plasmids that code for the RNP components to transfect the target cells, or the RNP is assembled before addition to the cells via nucleofection. The main components of this plasmid are displayed in the image and listed in the table. The crRNA is uniquely designed for each application, as this is the sequence that Cas9 uses to identify and directly bind to specific sequences within the host cell's DNA. The crRNA must bind only where editing is desired. The repair template is also uniquely designed for each application, as it must complement to some degree the DNA sequences on either side of the cut and also contain whatever sequence is desired for insertion into the host genome.
Multiple crRNAs and the tracrRNA can be packaged together to form a single-guide RNA (sgRNA). This sgRNA can be included alongside the gene that codes for the Cas9 protein and made into a plasmid in order to be transfected into cells. Many online tools are available to aid in designing effective sgRNA sequences.
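The spacer-plus-scaffold assembly described above can be sketched in a few lines. The scaffold string below is a truncated placeholder for illustration only (a real sgRNA uses the full tracrRNA-derived scaffold), and `design_sgrna` is a hypothetical helper, not the API of any design tool:

```python
# Placeholder scaffold for illustration only; a real sgRNA carries the
# complete tracrRNA-derived scaffold sequence.
SCAFFOLD = "GUUUUAGAGCUAGAAAUAGCAAGUUAAAAUAAGG"

def design_sgrna(protospacer_dna):
    """Join a 20-nt DNA spacer (transcribed to RNA) to the scaffold,
    forming a single-guide RNA as described above."""
    if len(protospacer_dna) != 20:
        raise ValueError("spacer must be 20 nt")
    # Transcribe the DNA spacer to RNA (T -> U), then append the scaffold.
    spacer_rna = protospacer_dna.upper().replace("T", "U")
    return spacer_rna + SCAFFOLD
```

The spacer is the only part that changes between applications; the scaffold stays constant, which is what makes sgRNA design a matter of choosing 20 nucleotides.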
Alternatives to Cas9
Alternatives to Cas9 include other CRISPR-associated effector proteins such as Cpf1 (Cas12a).
Structure
CRISPR-Cas9 offers a high degree of fidelity and relatively simple construction. It depends on two factors for its specificity: the target sequence and the protospacer adjacent motif (PAM) sequence. The target sequence is 20 bases long and forms part of each CRISPR locus in the crRNA array. A typical crRNA array has multiple unique target sequences. Cas9 proteins select the correct location on the host's genome by using this sequence to base-pair with the host DNA. The sequence is not part of the Cas9 protein and as a result is customizable and can be independently synthesized.
The PAM sequence on the host genome is recognized by Cas9. Cas9 cannot be easily modified to recognize a different PAM sequence. However, this is ultimately not too limiting, as it is typically a very short and nonspecific sequence that occurs frequently at many places throughout the genome (e.g. the SpCas9 PAM sequence is 5'-NGG-3' and in the human genome occurs roughly every 8 to 12 base pairs).
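The targeting rule above lends itself to a simple computational sketch: scan one strand of a DNA sequence for every 20-nt stretch immediately followed by a 5'-NGG-3' PAM. This illustrative snippet ignores the reverse complement and the off-target scoring that real design tools perform:

```python
import re

def find_spcas9_targets(seq):
    """Return (position, protospacer, PAM) for every 20-nt site
    followed by a 5'-NGG-3' PAM on the given strand."""
    seq = seq.upper()
    sites = []
    # Zero-width lookahead so overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        sites.append((m.start(), m.group(1), m.group(2)))
    return sites
```

Because NGG is so short, candidate sites are abundant, which is why the text notes the PAM requirement is rarely limiting in practice.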
Once these sequences have been assembled into a plasmid and transfected into cells, the Cas9 protein with the help of the crRNA finds the correct sequence in the host cell's DNA and – depending on the Cas9 variant – creates a single- or double-stranded break at the appropriate location in the DNA.
Properly spaced single-stranded breaks in the host DNA can trigger homology directed repair (HDR), which is less error-prone than the non-homologous end joining (NHEJ) that typically follows a double-stranded break. Providing a DNA repair template allows for the insertion of a specific DNA sequence at an exact location within the genome. The repair template should extend 40 to 90 base pairs beyond the Cas9-induced DNA break. The goal is for the cell's native HDR process to utilize the provided repair template and thereby incorporate the new sequence into the genome. Once incorporated, this new sequence is part of the cell's genetic material and passes into its daughter cells. Combined transient inhibition of NHEJ and theta-mediated end joining (TMEJ) by a small molecule and siRNAs can increase HDR efficiency to up to 93% and simultaneously prevent off-target editing.
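As a rough illustration of the repair-template layout described above, the hypothetical helper below assembles a donor sequence by copying homology arms from either side of the cut site, with arm lengths in the 40-90 bp range. Real template design must also, for example, avoid re-cutting of the edited allele by Cas9:

```python
def build_hdr_template(genome, cut_site, insert, arm_length=60):
    """Assemble a donor template: the desired insert flanked by
    homology arms copied from either side of the Cas9 cut site."""
    left = genome[max(0, cut_site - arm_length):cut_site]
    right = genome[cut_site:cut_site + arm_length]
    return left + insert + right
```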
Delivery
Delivery of Cas9, sgRNA, and associated complexes into cells can occur via viral and non-viral systems. Electroporation of DNA, RNA, or ribonucleocomplexes is a common technique, though it can result in harmful effects on the target cells. Chemical transfection techniques utilizing lipids and peptides have also been used to introduce sgRNAs in complex with Cas9 into cells. Nanoparticle-based delivery has also been used for transfection. Types of cells that are more difficult to transfect (e.g., stem cells, neurons, and hematopoietic cells) require more efficient delivery systems, such as those based on lentivirus (LVs), adenovirus (AdV), and adeno-associated virus (AAV).
The efficiency of CRISPR-Cas9 has been found to increase greatly when its components, ranging from the full CRISPR/Cas9 construct to preassembled Cas9-gRNA complexes, are delivered in assembled form rather than expressed from transgenes. This has found particular value in genetically modified crops for mass commercialization. Since the host's replication machinery is not needed to produce these components, there is almost no chance of the sgRNA-encoding sequence persisting in the genome, decreasing the chance of off-target effects.
Controlled genome editing
Further improvements and variants of the CRISPR-Cas9 system have focused on introducing more control into its use. Specifically, the research aimed at improving this system includes improving its specificity, its efficiency, and the granularity of its editing power. Techniques can further be divided and classified by the component of the system they modify. These include using different variants or novel creations of the Cas protein, using an altogether different effector protein, modifying the sgRNA, or using an algorithmic approach to identify existing optimal solutions.
Specificity is an important aspect of the CRISPR-Cas9 system to improve, because the off-target effects it generates can have serious consequences for the genome of the cell and warrant caution in its use; minimizing off-target effects thus maximizes the safety of the system. Novel variations of Cas9 proteins that increase specificity include effector proteins with comparable efficiency and specificity to the original SpCas9 that are able to target previously untargetable sequences, and a variant that has virtually no off-target mutations. Research has also been conducted on engineering new Cas9 proteins, including some that partially replace RNA nucleotides in crRNA with DNA, and a structure-guided Cas9 mutant-generating procedure, all of which reduced off-target effects. Iteratively truncated sgRNAs and highly stabilized gRNAs have also been shown to decrease off-target effects. Computational methods, including machine learning, have been used to predict binding affinity and to design unique sequences that maximize specificity for given targets.
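As a crude illustration of the off-target problem: potential off-target sites differ from the guide at only a few positions, so a naive first-pass filter simply counts mismatches. The sketch below ignores mismatch position and PAM context, both of which real specificity models weight heavily:

```python
def count_mismatches(guide, site):
    """Hamming distance between a guide and an equal-length genomic
    site; fewer mismatches suggests higher off-target cleavage risk."""
    if len(guide) != len(site):
        raise ValueError("sequences must be the same length")
    return sum(a != b for a, b in zip(guide.upper(), site.upper()))
```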
Several variants of CRISPR-Cas9 allow gene activation or genome editing with an external trigger such as light or small molecules. These include photoactivatable CRISPR systems developed by fusing light-responsive protein partners with an activator domain and a dCas9 for gene activation, or by fusing similar light-responsive domains with two constructs of split-Cas9, or by incorporating caged unnatural amino acids into Cas9, or by modifying the guide RNAs with photocleavable complements for genome editing.
Methods to control genome editing with small molecules include an allosteric Cas9, with no detectable background editing, that activates binding and cleavage upon the addition of 4-hydroxytamoxifen (4-HT); a 4-HT-responsive intein-linked Cas9; and a Cas9 that is 4-HT-responsive when fused to four ERT2 domains. Intein-inducible split-Cas9 allows dimerization of Cas9 fragments, and a rapamycin-inducible split-Cas9 system was developed by fusing two constructs of split-Cas9 with FRB and FKBP fragments. Other studies have been able to induce transcription of Cas9 with a small molecule, doxycycline. Small molecules can also be used to improve homology directed repair, often by inhibiting the non-homologous end joining pathway and/or the theta-mediated end-joining pathway. A system using the Cpf1 effector protein was created that is induced by the small molecules VE-822 and AZD-7762. These systems allow conditional control of CRISPR activity for improved precision, efficiency, and spatiotemporal control. Spatiotemporal control is one way to limit off-target effects: only certain cells or parts of the organism may need to be modified, and light or small molecules can restrict activity accordingly. The efficiency of the CRISPR-Cas9 system is also greatly increased by proper delivery of the DNA instructions for creating the proteins and necessary reagents.
CRISPR also utilizes single base-pair editing proteins to make specific edits at one or two bases in the target sequence. Cas9 was fused with deaminase enzymes that initially could only convert C to T (and, on the complementary strand, G to A) or A to G. This was eventually accomplished without requiring any DNA cleavage. With the fusion of another enzyme, the base-editing CRISPR-Cas9 system can also convert C to G and its reverse.
CRISPR screening
The clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system is a gene-editing technology that can induce double-strand breaks (DSBs) anywhere guide ribonucleic acids (gRNA) can bind with the protospacer adjacent motif (PAM) sequence. Single-strand nicks can also be induced by Cas9 active-site mutants, also known as Cas9 nickases. By simply changing the sequence of gRNA, the Cas9-endonuclease can be delivered to a gene of interest and induce DSBs. The efficiency of Cas9-endonuclease and the ease by which genes can be targeted led to the development of CRISPR-knockout (KO) libraries both for mouse and human cells, which can cover either specific gene sets of interest or the whole-genome. CRISPR screening helps scientists to create a systematic and high-throughput genetic perturbation within live model organisms. This genetic perturbation is necessary for fully understanding gene function and epigenetic regulation. The advantage of pooled CRISPR libraries is that more genes can be targeted at once.
Knock-out libraries are created in a way that achieves equal representation and performance across all expressed gRNAs, and they carry an antibiotic or fluorescent selection marker that can be used to recover transduced cells. There are two plasmid systems in CRISPR/Cas9 libraries. The first is an all-in-one plasmid, where sgRNA and Cas9 are produced simultaneously in a transfected cell. The second is a two-vector system, in which the sgRNA and Cas9 plasmids are delivered separately. It is important to deliver the thousands of unique sgRNA-containing vectors to a single vessel of cells by viral transduction at low multiplicity of infection (MOI, typically 0.1–0.6); this minimizes the probability that an individual cell clone receives more than one type of sgRNA, which would otherwise lead to incorrect assignment of genotype to phenotype.
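The reason for keeping the MOI low can be illustrated with a simple Poisson model of viral transduction (a standard modeling assumption, not specific to any one library protocol): at MOI 0.3 only about 14% of transduced cells carry more than one vector, versus roughly 42% at MOI 1.0.

```python
from math import exp, factorial

def poisson(k, moi):
    """P(a cell receives exactly k viral integrations) under a
    Poisson model with mean equal to the MOI."""
    return moi**k * exp(-moi) / factorial(k)

def multi_sgrna_fraction(moi):
    """Among transduced (k >= 1) cells, the fraction carrying more
    than one vector, and hence more than one sgRNA."""
    p0 = poisson(0, moi)
    p1 = poisson(1, moi)
    return (1 - p0 - p1) / (1 - p0)
```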
Once a pooled library is prepared, it is necessary to carry out deep sequencing (next-generation sequencing, NGS) of PCR-amplified plasmid DNA in order to reveal the abundance of each sgRNA. Cells of interest can subsequently be infected with the library and then selected according to phenotype. There are two types of selection: negative and positive. Negative selection efficiently detects dead or slow-growing cells, and can identify survival-essential genes, which can further serve as candidates for molecularly targeted drugs. Positive selection, on the other hand, yields a collection of populations that acquired a growth advantage through random mutagenesis. After selection, genomic DNA is collected and sequenced by NGS. Depletion or enrichment of sgRNAs is detected and compared to the original sgRNA library, annotated with the target gene each sgRNA corresponds to. Statistical analysis then identifies genes that are significantly likely to be relevant to the phenotype of interest.
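The depletion/enrichment comparison described above is, at its simplest, a per-sgRNA log2 fold change of normalized read counts between the selected population and the original plasmid library. A minimal sketch, with a pseudocount to avoid division by zero (real screening pipelines such as MAGeCK add statistical modeling on top):

```python
from math import log2

def log2_fold_change(selected_counts, plasmid_counts, pseudocount=1.0):
    """Per-sgRNA log2 fold change of normalized read counts,
    selected screen vs. the original plasmid library."""
    total_sel = sum(selected_counts.values())
    total_pla = sum(plasmid_counts.values())
    lfc = {}
    for guide in plasmid_counts:
        # Normalize each count to its library size; pseudocount
        # keeps guides with zero reads computable.
        sel = (selected_counts.get(guide, 0) + pseudocount) / total_sel
        pla = (plasmid_counts[guide] + pseudocount) / total_pla
        lfc[guide] = log2(sel / pla)
    return lfc
```

Negative values correspond to depleted guides (negative selection, e.g. essential genes); positive values to enriched guides (positive selection).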
Apart from knock-out, there are also knock-down (CRISPRi) and activation (CRISPRa) libraries, which use the ability of catalytically deactivated Cas9 fusion proteins (dCas9) to bind target DNA; the gene of interest is not cut but is instead over-expressed or repressed. This made the CRISPR/Cas9 system even more versatile for gene editing. The inactive dCas9 protein modulates gene expression by targeting dCas9-repressors or -activators toward the promoter or transcriptional start sites of target genes. To repress genes, Cas9 can be fused to the KRAB effector domain, which forms a complex with the gRNA, whereas CRISPRa utilizes dCas9 fused to different transcriptional activation domains, which are further directed by gRNA to promoter regions to upregulate expression.
Applications
Disease models
Cas9 genomic modification has allowed for the quick and efficient generation of transgenic models within the field of genetics. Cas9 can be easily introduced into the target cells along with sgRNA via plasmid transfection in order to model the spread of diseases and the cell's response to and defense against infection. The ability of Cas9 to be introduced in vivo allows for the creation of more accurate models of gene function and mutation effects, all while avoiding the off-target mutations typically observed with older methods of genetic engineering.
The CRISPR and Cas9 revolution in genomic modeling does not extend only to mammals. Traditional genomic models such as Drosophila melanogaster, one of the first model organisms, have seen further refinement in their resolution with the use of Cas9. Expressing Cas9 from cell-specific promoters allows its activity to be restricted to particular cell types, making Cas9-based approaches to treating disease more precise. Cells that have undergone Cas9 therapy can also be removed and reintroduced to amplify the effects of the therapy.
CRISPR-Cas9 can be used to edit the DNA of organisms in vivo and to eliminate individual genes or even entire chromosomes from an organism at any point in its development. Chromosomes that have been successfully deleted in vivo using CRISPR techniques include the Y chromosome and X chromosome of adult lab mice and human chromosomes 14 and 21, in embryonic stem cell lines and aneuploid mice respectively. This method might be useful for treating genetic disorders caused by abnormal numbers of chromosomes, such as Down syndrome and intersex disorders.
Successful in vivo genome editing using CRISPR-Cas9 has been shown in numerous model organisms, including Escherichia coli, Saccharomyces cerevisiae, Candida albicans, Methanosarcina acetivorans, Caenorhabditis elegans, Arabidopsis spp., Danio rerio, and Mus musculus. Successes have been achieved in the study of basic biology, in the creation of disease models, and in the experimental treatment of disease models.
Concerns have been raised that off-target effects (editing of genes besides the ones intended) may confound the results of a CRISPR gene editing experiment (i.e. the observed phenotype change may not be due to modifying the target gene, but some other gene). Modifications to CRISPR have been made to minimize the possibility of off-target effects. Orthogonal CRISPR experiments are often recommended to confirm the results of a gene editing experiment.
CRISPR simplifies the creation of genetically modified organisms for research which mimic disease or show what happens when a gene is knocked down or mutated. CRISPR may be used at the germline level to create organisms in which the targeted gene is changed everywhere (i.e. in all cells/tissues/organs of a multicellular organism), or it may be used in non-germline cells to create local changes that only affect certain cell populations within the organism.
CRISPR can be utilized to create human cellular models of disease. For instance, when applied to human pluripotent stem cells, CRISPR has been used to introduce targeted mutations in genes relevant to polycystic kidney disease (PKD) and focal segmental glomerulosclerosis (FSGS). These CRISPR-modified pluripotent stem cells were subsequently grown into human kidney organoids that exhibited disease-specific phenotypes. Kidney organoids from stem cells with PKD mutations formed large, translucent cyst structures from kidney tubules. The cysts were capable of reaching macroscopic dimensions, up to one centimeter in diameter. Kidney organoids with mutations in a gene linked to FSGS developed junctional defects between podocytes, the filtering cells affected in that disease. This was traced to the inability of podocytes to form microvilli between adjacent cells. Importantly, these disease phenotypes were absent in control organoids of identical genetic background, but lacking the CRISPR modifications.
A similar approach was taken to model long QT syndrome in cardiomyocytes derived from pluripotent stem cells. These CRISPR-generated cellular models, with isogenic controls, provide a new way to study human disease and test drugs.
Biomedicine
CRISPR-Cas technology has been proposed as a treatment for multiple human diseases, especially those with a genetic cause. Its ability to modify specific DNA sequences makes it a tool with potential to fix disease-causing mutations. Early research in animal models suggests that therapies based on CRISPR technology have the potential to treat a wide range of diseases, including cancer, progeria, beta-thalassemia, sickle cell disease, hemophilia, cystic fibrosis, Duchenne muscular dystrophy, Huntington's disease, transthyretin amyloidosis and heart disease. CRISPR has also been used to cure malaria in mosquitoes, which could eliminate the vector and the disease in humans. CRISPR may also have applications in tissue engineering and regenerative medicine, such as by creating human blood vessels that lack expression of MHC class II proteins, which often cause transplant rejection.
In addition, clinical trials to cure beta thalassemia and sickle cell disease in human patients using CRISPR-Cas9 technology have shown promising results. In December 2023, the US Food and Drug Administration (FDA) approved the first cell-based gene therapies for treating sickle cell disease, Casgevy and Lyfgenia. Casgevy is the first FDA approved gene therapy to use the CRISPR-Cas9 technology and works by modifying a patient's hematopoietic stem cells.
Nevertheless, a few limitations remain for the technology's use in gene therapy: the relatively high frequency of off-target effects, the requirement for a PAM sequence near the target site, p53-mediated apoptosis triggered by CRISPR-induced double-strand breaks, and immunogenic toxicity due to the delivery system, which is typically viral.
Cancer
CRISPR has also found many applications in developing cell-based immunotherapies. The first clinical trial involving CRISPR started in 2016. It involved taking immune cells from people with lung cancer, using CRISPR to edit out the gene that expressed programmed cell death protein 1 (PD-1), then administering the altered cells back to the same person. Some 20 other trials were under way or nearly ready, mostly in China.
In 2016, the United States Food and Drug Administration (FDA) approved a clinical trial in which CRISPR would be used to alter T cells extracted from people with different kinds of cancer and then administer those engineered T cells back to the same people.
In November 2020, in mouse models, CRISPR was used effectively to treat glioblastoma (a fast-growing brain tumor) and metastatic ovarian cancer, two cancers with among the worst prognoses that are typically diagnosed at later stages. For metastatic ovarian cancer, the treatment inhibited tumor growth and increased survival by 80%; for glioblastoma, it induced tumor cell apoptosis, inhibited tumor growth by 50%, and improved survival by 30%.
In October 2021, CRISPR Therapeutics announced results from their ongoing US-based Phase 1 trial for an allogeneic T cell therapy. These cells are sourced from healthy donors and are edited to attack cancer cells and avoid being seen as a threat by the recipient's immune system, and then multiplied into huge batches which can be given to large numbers of recipients.
In December 2022, a 13-year-old British girl who had been diagnosed with incurable T-cell acute lymphoblastic leukaemia was cured by doctors at Great Ormond Street Hospital, in the first documented use of therapeutic gene editing for this purpose, after undergoing six months of an experimental treatment where previous attempts at other treatments had failed. The procedure included reprogramming a healthy T cell to destroy the cancerous T cells, first ridding her of the leukaemia, and then rebuilding her immune system from scratch using healthy immune cells. The team used base editing and had previously treated a case of acute lymphoblastic leukaemia in 2015 using TALENs.
Diabetes
Type 1 diabetes is an endocrine disorder that results from the loss of the pancreatic beta cells that produce insulin, a hormone vital for transporting blood sugar into cells for producing energy. Researchers have been trying to transplant healthy beta cells, using CRISPR to edit the cells in order to reduce the chance that the patient's body will reject the transplant.
On November 17, 2021, CRISPR Therapeutics and ViaCyte announced that the Canadian medical agency had approved their request for a clinical trial of VCTX210, a CRISPR-edited stem cell therapy designed to treat type 1 diabetes. This was significant because it was the first gene-edited therapy for diabetes to reach the clinic. The same companies also developed a novel treatment for type 1 diabetes that produces insulin via a small medical implant containing millions of pancreatic cells derived from CRISPR gene-edited stem cells.
In February 2022, a phase 1 trial was conducted in which one patient volunteer received treatment.
HIV/AIDS
Human immunodeficiency virus (HIV) is a virus that attacks the body's immune system. While effective treatments exist that allow patients to live healthy lives, HIV is a retrovirus, meaning that it integrates a copy of its genome into the human genome as a provirus. CRISPR can be used to selectively remove the virus from the genome by designing guide RNA that targets the integrated HIV provirus. One issue with this approach is that it requires removing the proviral genome from almost all infected cells, which is difficult to achieve in practice.
Initial results in the treatment and cure of HIV have been rather successful: in 2021, 9 out of 23 humanized mice treated with a combination of anti-retrovirals and CRISPR/Cas9 had the virus become undetectable, even after the usual rebound period. Neither treatment alone had such an effect. Clinical trials in humans of a CRISPR-Cas9-based therapy, EBT-101, started in 2022. In October 2023, an early-stage study of EBT-101 in three people reported that the treatment appeared to be safe with no major side effects, but no data on its effectiveness were disclosed. In March 2024, another CRISPR therapy, from researchers at the University of Amsterdam, was reported to eliminate HIV in cell cultures.
Infection
CRISPR-Cas-based "RNA-guided nucleases" can be used to target virulence factors, genes encoding antibiotic resistance, and other medically relevant sequences of interest. This technology thus represents a novel form of antimicrobial therapy and a strategy for manipulating bacterial populations. Recent studies suggest a correlation between interference by the CRISPR-Cas locus and the acquisition of antibiotic resistance. The system protects bacteria against invading foreign DNA, such as transposons, bacteriophages, and plasmids, and has been shown to exert strong selective pressure on the acquisition of antibiotic resistance and virulence factors in bacterial pathogens.
Therapies based on CRISPR–Cas3 gene editing technology delivered by engineered bacteriophages could be used to destroy targeted DNA in pathogens. Cas3 is more destructive than the better known Cas9.
Research suggests that CRISPR is an effective way to limit replication of multiple herpesviruses. It was able to eradicate viral DNA in the case of Epstein–Barr virus (EBV). Anti-herpesvirus CRISPRs have promising applications such as removing cancer-causing EBV from tumor cells, helping rid donated organs for immunocompromised patients of viral invaders, or preventing cold sore outbreaks and recurrent eye infections by blocking HSV-1 reactivation. These applications were awaiting testing.
CRISPR may revive the concept of transplanting animal organs into people. Retroviruses present in animal genomes could harm transplant recipients. In 2015, a team eliminated 62 copies of a particular retroviral DNA sequence from the pig genome in a kidney epithelial cell. Researchers later demonstrated, for the first time, that live pigs could be born after these retroviruses had been removed from the genome using CRISPR.
Neurological disorders
CRISPR can be used to suppress gain-of-function mutations and to repair loss-of-function mutations in neurological disorders. The gene-editing tool has also gained a foothold in in vivo applications for dissecting molecular pathways.
CRISPR is particularly well suited to the study of neurological disease for several reasons. For example, it allows researchers to quickly generate animal and human cell models, letting them study how genes function in the nervous system. By introducing disease-associated mutations into these cells, researchers can study the effects of the changes on nervous system development, function, and behavior, and uncover the molecular mechanisms that contribute to these disorders, which is essential for developing effective treatments. This is particularly useful for modeling and treating complex neurological disorders such as Alzheimer's, Parkinson's, and epilepsy, among others.
Alzheimer's Disease (AD) is a neurodegenerative disease categorized by neuron loss and an accumulation of intracellular neurofibrillary tangles and extracellular amyloid plaques in the brain. Three pathogenic genes that cause early onset AD in humans have been identified, specifically amyloid precursor protein (APP), presenilin 1 (PSEN1), and presenilin 2 (PSEN2). Over 300 mutations have been detected in these genes, resulting in an increase in total β-amyloid (Aβ), Aβ42/40 ratio, and/or Aβ polymerization.
CRISPR has been used to correct for the dystrophin gene, which is responsible for Duchenne muscular dystrophy, and for the SCN1A mutation responsible for the epilepsy disorder Dravet syndrome. A challenge of using CRISPR for neurological treatment is transferring its components across the blood-brain barrier. However, recent advancements in nanoparticle delivery systems and viral vectors have shown promise in overcoming this hurdle. Looking to the future, the use of CRISPR in neuroscience is expected to increase as technology evolves.
Blindness
The most common eye diseases worldwide are cataract and retinitis pigmentosa (RP). These can be caused by a missense mutation in the alpha chain that leads to permanent blindness. A feature relevant to the use of CRISPR in eye disease is that retinal tissue is immune-privileged, largely free from the body's immune response. Researchers' approach is to target the gene encoding the affected retinal protein and edit the genome.
Leber congenital amaurosis
The CRISPR treatment for LCA10 (the most common variant of Leber congenital amaurosis which is the leading cause of inherited childhood blindness) modifies the patient's defective photoreceptor gene.
In March 2020, the first patient volunteer in this US-based study, sponsored by Editas Medicine, was given a low-dose of the treatment to test for safety. In June 2021, enrollment began for a high-dose adult and pediatric cohort of 4 patient volunteers each. Dosing of the new cohorts was expected to be completed by July 2022. In November 2022, Editas reported that 20% of the patients treated had significant improvements, but also announced that the resulting target population was too small to support continued independent development.
Cardiovascular diseases
CRISPR technology has been shown to work efficiently in the treatment of heart disease. In familial hypercholesterolemia (FH), cholesterol deposition in the walls of the arteries blocks blood flow. This is caused by a mutation in the low-density lipoprotein cholesterol (LDL-C) receptor gene that results in excessive release of cholesterol into the blood. A nonsense mutation arising from deletion of a base pair in exon 4 of the LDL-C receptor gene is one such target for correction.
β-Hemoglobinopathies
These are genetic disorders caused by mutations that alter the structure of hemoglobin or substitute amino acids in the globin chains. The affected red blood cells (RBCs) cause a series of complications such as heart failure, blockage of blood vessels, growth defects, and vision problems. To treat β-hemoglobinopathies, a patient's multipotent cells have been transferred into a mouse model to study the efficiency of ex vivo gene therapy, resulting in expression of the corrected mRNA and rectification of the gene. Intriguingly, RBC half-life was also increased.
Hemophilia
Hemophilia is a blood disorder in which clotting factors do not work properly. Using CRISPR-Cas9, the editing components can be delivered with an adenoviral vector to help correct the defective gene.
Agriculture
The application of CRISPR in plants was first achieved in 2013. CRISPR-Cas9 has since become an influential tool for editing crop genomes and has made its mark on modern breeding systems.
To increase the yield of cereal crops, the balance of cytokinin can be changed. Cytokinin oxidase/dehydrogenase (CKX) is an enzyme that degrades cytokinin and thereby limits grain production in rice, so the gene that codes for this enzyme was knocked out to increase yield.
Grains contain a high amount of the polysaccharide amylose. CRISPR has been used to alter amino acids in the relevant biosynthetic enzymes to reduce saccharide production and thereby decrease the amylose content. Moreover, wheat contains the protein gluten, to which some people are intolerant, causing celiac disease. The gene-editing tool can target the gluten-coding genes, resulting in wheat with lower gluten production.
Resistance to disease
The biotic stress on plants can be reduced using CRISPR. Bacterial infection of rice activates the transcription of host genes whose products make the plant susceptible to disease. Using CRISPR, scientists have also been able to generate heritable resistance to powdery mildew.
Gene therapy
There are about 6,000 known genetic disorders, most of which are currently untreatable. The role of CRISPR in gene therapy is to substitute functional exogenous DNA in place of defective genes. Gene therapy has made a huge impact and opened many new possibilities in medical biotechnology.
Base editing
There are two types of base editors:
Cytidine base editors (CBEs) convert cytidine (C) to thymidine (T).
Adenine base editors (ABEs) convert adenine (A) to guanine (G).
The mutations were directly installed in cellular DNA so that the donor template is not required. The base editings can only edit point mutations. Moreover, they can only fix up to four-point mutations. To address this problem, the CRISPR system introduced a new technique known as Cas9 fusion to increase the scale of genes that can be edited.
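As a toy illustration of the editing rules above (a deliberately simplified model added here for clarity: real base editors act on single-stranded DNA within an R-loop, and the editing window varies by editor; the window used below is only an assumption):

```python
# Toy model of DNA base editing (not a real bioinformatics tool).
# A cytidine base editor (CBE) converts C->T and an adenine base
# editor (ABE) converts A->G, but only inside a small "editing
# window" of the protospacer; positions outside are untouched.

EDITS = {"CBE": ("C", "T"), "ABE": ("A", "G")}

def base_edit(protospacer: str, editor: str, window=(4, 8)) -> str:
    """Apply a base editor to a protospacer sequence.

    window is 1-indexed and inclusive; (4, 8) is an illustrative
    choice, not a property of any specific published editor.
    """
    src, dst = EDITS[editor]
    lo, hi = window
    out = []
    for pos, base in enumerate(protospacer, start=1):
        # Only the target base, and only inside the window, is changed.
        out.append(dst if base == src and lo <= pos <= hi else base)
    return "".join(out)

print(base_edit("ACCTCAGTAC", "CBE"))  # only the C inside the window changes
```

Note how the C at position 5 is converted while the Cs at positions 2, 3, and 10 are left alone; this windowed behavior is why base editors address only a subset of point mutations.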
Gene silencing and activating
Furthermore, the CRISPR-Cas9 system can modulate genes of interest by either activating or silencing them. A catalytically inactive variant called dCas9 ("dead" Cas9) is used to silence or activate the expression of genes without cutting DNA.
Limitations
Researchers face many challenges in gene editing. The major hurdles for clinical application are ethical issues and delivery to the target site. Because the components of the CRISPR system are taken from bacteria, they can provoke an immune response when transferred into host cells. Physical, chemical, and viral vectors are used as vehicles to deliver the complex into the host, which can cause complications such as cell damage leading to cell death. In the case of viral vectors, the packaging capacity of the virus is small while the Cas9 protein is large; to overcome this, new methods were developed using smaller Cas9 variants taken from other bacterial strains. Finally, a great deal of work is still needed to improve the system.
As a diagnostic tool
CRISPR-associated nucleases have been shown to be useful as a tool for molecular testing due to their ability to specifically target nucleic acid sequences against a high background of non-target sequences. In 2016, the Cas9 nuclease was used to deplete unwanted nucleotide sequences in next-generation sequencing libraries while requiring only 250 picograms of initial RNA input. Beginning in 2017, CRISPR-associated nucleases were also used for direct diagnostic testing of nucleic acids, down to single-molecule sensitivity. CRISPR diversity is used as an analysis target to discern phylogeny and diversity in bacteria, such as in xanthomonads by Martins et al., 2019. Early detection of plant pathogens by molecular typing of the pathogen's CRISPRs can be used in agriculture, as demonstrated by Shen et al., 2020.
By coupling CRISPR-based diagnostics to additional enzymatic processes, the detection of molecules beyond nucleic acids is possible. One example of a coupled technology is SHERLOCK-based Profiling of IN vitro Transcription (SPRINT). SPRINT can be used to detect a variety of substances, such as metabolites in patient samples or contaminants in environmental samples, with high throughput or with portable point-of-care devices. CRISPR-Cas platforms are also being explored for detection and inactivation of SARS-CoV-2, the virus that causes COVID-19. Two comprehensive diagnostic tests, AIOD-CRISPR and SHERLOCK, have been developed for SARS-CoV-2. The SHERLOCK test is based on a fluorescently labelled reporter RNA and is able to identify 10 copies of viral nucleic acid per microliter. AIOD-CRISPR enables robust and highly sensitive visual detection of the viral nucleic acid.
Genetic anthropology
CRISPR-Cas9 can be used to investigate and identify genetic differences between humans and other apes, especially in the brain. For example, archaic gene variants have been reintroduced into brain organoids to show an impact on neurogenesis and on the metaphase length of apical progenitors of the developing neocortex, and gene knockouts in embryonic stem cells have been used to identify a genetic regulator that drives evolutionary expansion of the human forebrain via an early cell-shape transition. One study described a major impact of an archaic gene variant on neurodevelopment which may be an artefact of a CRISPR side effect, as it could not be replicated in a subsequent study.
By technique
Knockdown/activation
Using "dead" versions of Cas9 (dCas9) eliminates CRISPR's DNA-cutting ability, while preserving its ability to target desirable sequences. Multiple groups added various regulatory factors to dCas9s, enabling them to turn almost any gene on or off or adjust its level of activity. Like RNAi, CRISPR interference (CRISPRi) turns off genes in a reversible fashion by targeting, but not cutting a site. The targeted site is methylated, epigenetically modifying the gene. This modification inhibits transcription. These precisely placed modifications may then be used to regulate the effects on gene expressions and DNA dynamics after the inhibition of certain genome sequences within DNA. Within the past few years, epigenetic marks in different human cells have been closely researched and certain patterns within the marks have been found to correlate with everything ranging from tumor growth to brain activity. Conversely, CRISPR-mediated activation (CRISPRa) promotes gene transcription. Cas9 is an effective way of targeting and silencing specific genes at the DNA level. In bacteria, the presence of Cas9 alone is enough to block transcription. For mammalian applications, a section of protein is added. Its guide RNA targets regulatory DNA sequences called promoters that immediately precede the target gene.
Cas9 was used to carry synthetic transcription factors that activated specific human genes. The technique achieved a strong effect by targeting multiple CRISPR constructs to slightly different locations on the gene's promoter.
RNA editing
In 2016, researchers demonstrated that CRISPR from an ordinary mouth bacterium could be used to edit RNA. The researchers searched databases containing hundreds of millions of genetic sequences for those that resembled CRISPR genes. They considered the fusobacterium Leptotrichia shahii. It had a group of genes that resembled CRISPR genes, but with important differences. When the researchers equipped other bacteria with these genes, which they called C2c2, they found that the organisms gained a novel defense. C2c2 has later been renamed to Cas13a to fit the standard nomenclature for Cas genes.
Many viruses encode their genetic information in RNA rather than DNA, and repurpose it to make new viruses. HIV and poliovirus are such viruses. Bacteria with Cas13 make molecules that can dismember RNA, destroying the virus. Tailoring these genes opened any RNA molecule to editing.
CRISPR-Cas systems can also be employed for editing of micro-RNA and long-noncoding RNA genes in plants.
Therapeutic applications
Comparison to DNA editing
Gene drive
Gene drives may provide a powerful tool to restore balance of ecosystems by eliminating invasive species. Concerns regarding efficacy, unintended consequences in the target species as well as non-target species have been raised particularly in the potential for accidental release from laboratories into the wild. Scientists have proposed several safeguards for ensuring the containment of experimental gene drives including molecular, reproductive, and ecological. Many recommend that immunization and reversal drives be developed in tandem with gene drives in order to overwrite their effects if necessary. There remains consensus that long-term effects must be studied more thoroughly particularly in the potential for ecological disruption that cannot be corrected with reversal drives.
In vitro genetic depletion
Unenriched sequencing libraries often have abundant undesired sequences. Cas9 can specifically deplete the undesired sequences with double strand breakage with up to 99% efficiency and without significant off-target effects as seen with restriction enzymes. Treatment with Cas9 can deplete abundant rRNA while increasing pathogen sensitivity in RNA-seq libraries.
Epigenome editing
Applications
CRISPR-directed integrases
Combining CRISPR-Cas9 with integrases enabled a technique for inserting large DNA sequences without problematic double-stranded breaks, as demonstrated in 2022. The researchers reported it could be used to deliver genes as long as 36,000 DNA base pairs to several types of human cells, and thereby potentially to treat diseases caused by a large number of mutations.
Prime editing
Prime editing is a CRISPR refinement to accurately insert or delete sections of DNA. CRISPR edits are not always perfect and the cuts can end up in the wrong place; both issues are a problem for using the technology in medicine. Prime editing does not cut the double-stranded DNA but instead uses the CRISPR targeting apparatus to shuttle an additional enzyme to a desired sequence, where it nicks a single DNA strand. The guide, called a pegRNA, contains an RNA template for a new DNA sequence to be added to the genome at the target location. That requires a second protein, attached to Cas9: a reverse transcriptase enzyme, which can make a new DNA strand from the RNA template and insert it at the nicked site. The three independent pairing events required each provide an opportunity to reject off-target sequences, which significantly increases targeting flexibility and editing precision. Prime editing was developed by researchers at the Broad Institute of MIT and Harvard in Massachusetts. More work is needed to optimize the methods.
Society and culture
Human germline modification
As of March 2015, multiple groups had announced ongoing research with the intention of laying the foundations for applying CRISPR to human embryos for human germline engineering, including labs in the US, China, and the UK, as well as US biotechnology company OvaScience. Scientists, including a CRISPR co-discoverer, urged a worldwide moratorium on applying CRISPR to the human germline, especially for clinical use. They said "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations". These scientists support further low-level research on CRISPR and do not see CRISPR as developed enough for any clinical use in making heritable changes to humans.
In April 2015, Chinese scientists reported results of an attempt to alter the DNA of non-viable human embryos using CRISPR to correct a mutation that causes beta thalassemia, a lethal heritable disorder. The study had previously been rejected by both Nature and Science in part because of ethical concerns. The experiments resulted in successfully changing only some of the intended genes, and had off-target effects on other genes. The researchers stated that CRISPR is not ready for clinical application in reproductive medicine. In April 2016, Chinese scientists were reported to have made a second unsuccessful attempt to alter the DNA of non-viable human embryos using CRISPR – this time to alter the CCR5 gene to make the embryo resistant to HIV infection.
In December 2015, an International Summit on Human Gene Editing took place in Washington under the guidance of David Baltimore. Members of national scientific academies of the US, UK, and China discussed the ethics of germline modification. They agreed to support basic and clinical research under certain legal and ethical guidelines. A specific distinction was made between somatic cells, where the effects of edits are limited to a single individual, and germline cells, where genome changes can be inherited by descendants. Heritable modifications could have unintended and far-reaching consequences for human evolution, genetically (e.g. gene–environment interactions) and culturally (e.g. social Darwinism). Altering of gametocytes and embryos to generate heritable changes in humans was defined to be irresponsible. The group agreed to initiate an international forum to address such concerns and harmonize regulations across countries.
In February 2017, the United States National Academies of Sciences, Engineering, and Medicine (NASEM) Committee on Human Gene Editing published a report reviewing ethical, legal, and scientific concerns of genomic engineering technology. The conclusion of the report stated that heritable genome editing is impermissible now but could be justified for certain medical conditions; however, they did not justify the usage of CRISPR for enhancement.
In November 2018, Jiankui He announced that he had edited two human embryos to attempt to disable the gene for CCR5, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier. He said that the girls still carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature. An international group of scientists called for a global moratorium on genetically editing human embryos.
Designer babies
The advent of CRISPR-Cas9 gene editing technology has led to the possibility of creating "designer babies." This technology could eliminate certain genetic diseases or improve health by enhancing certain genetic traits.
Policy barriers to genetic engineering
Policy regulations for the CRISPR-Cas9 system vary around the globe. In February 2016, British scientists were given permission by regulators to genetically modify human embryos by using CRISPR-Cas9 and related techniques. However, researchers were forbidden from implanting the embryos and the embryos were to be destroyed after seven days.
The US has an elaborate, interdepartmental regulatory system to evaluate new genetically modified foods and crops. For example, the Agriculture Risk Protection Act of 2000 gives the United States Department of Agriculture the authority to oversee the detection, control, eradication, suppression, prevention, or retardation of the spread of plant pests or noxious weeds to protect the agriculture, environment, and economy of the US. The act regulates any genetically modified organism that utilizes the genome of a predefined "plant pest" or any plant not previously categorized. In 2015, Yinong Yang successfully deactivated 16 specific genes in the white button mushroom to make them non-browning. Since he had not added any foreign-species (transgenic) DNA to his organism, the mushroom could not be regulated by the USDA under Section 340.2. Yang's white button mushroom was the first organism genetically modified with the CRISPR-Cas9 protein system to pass US regulation.
In 2016, the USDA sponsored a committee to consider future regulatory policy for upcoming genetic modification techniques. With the help of the US National Academies of Sciences, Engineering, and Medicine, special interests groups met on April 15 to contemplate the possible advancements in genetic engineering within the next five years and any new regulations that might be needed as a result. In 2017, the Food and Drug Administration proposed a rule that would classify genetic engineering modifications to animals as "animal drugs", subjecting them to strict regulation if offered for sale and reducing the ability for individuals and small businesses to make them profitable.
In China, where social conditions sharply contrast with those of the West, genetic diseases carry a heavy stigma. This leaves China with fewer policy barriers to the use of this technology.
Recognition
In 2012 and 2013, CRISPR was a runner-up in Science magazine's Breakthrough of the Year award. In 2015, it was the winner of that award. CRISPR was named as one of MIT Technology Review's 10 Breakthrough Technologies in 2014 and 2016. In 2016, Jennifer Doudna and Emmanuelle Charpentier, along with Rudolph Barrangou, Philippe Horvath, and Feng Zhang won the Gairdner International award. In 2017, Doudna and Charpentier were awarded the Japan Prize in Tokyo, Japan for their revolutionary invention of CRISPR-Cas9. In 2016, Charpentier, Doudna, and Zhang won the Tang Prize in Biopharmaceutical Science. In 2020, Charpentier and Doudna were awarded the Nobel Prize in Chemistry, the first such prize for an all-female team, "for the development of a method for genome editing."
See also
CRISPR/Cas Tools
The CRISPR Journal
Eugenics
DRACO
Zinc finger
Gene knockout
Genetics
Glossary of genetics
Human Nature (2019 documentary film)
LEAPER gene editing
Make People Better (2022 documentary)
RNAi
SiRNA
Surveyor nuclease assay
Synthetic biology
References
Biotechnology
Genetic engineering
Genome editing | CRISPR gene editing | [
"Chemistry",
"Engineering",
"Biology"
] | 12,206 | [
"Genetics techniques",
"Biological engineering",
"Genome editing",
"Genetic engineering",
"Biotechnology",
"nan",
"Molecular biology"
] |
59,991,544 | https://en.wikipedia.org/wiki/Non-bonding%20electron | A non-bonding electron is an electron not involved in chemical bonding. This can refer to:
Lone pair, with the electron localized on one atom.
Non-bonding orbital, with the electron delocalized throughout the molecule.
Chemical bonding | Non-bonding electron | [
"Physics",
"Chemistry",
"Materials_science"
] | 49 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
59,993,212 | https://en.wikipedia.org/wiki/Equivalence%20problem | In theoretical computer science and formal language theory, the equivalence problem is the question of determining, given two representations of formal languages, whether they denote the same formal language.
The complexity and decidability of this decision problem depend upon the type of representation under consideration.
For instance, in the case of finite-state automata, equivalence is decidable, and the problem is PSPACE-complete.
Further, in the case of deterministic pushdown automata, equivalence is decidable; Géraud Sénizergues won the Gödel Prize for this result. Subsequently, the problem was shown to lie in TOWER, the least non-elementary complexity class.
It becomes an undecidable problem for pushdown automata or any machine that can decide context-free languages or more powerful languages.
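The decidability claim for finite-state automata can be made concrete with a small sketch (illustrative code added here, not part of the article): for deterministic automata, two machines are inequivalent exactly when some reachable pair of states in the product automaton disagrees on acceptance, so a breadth-first search over state pairs decides equivalence.

```python
from collections import deque

# A DFA here is a triple (start, accepting_set, delta), where delta is
# a total transition function represented as a dict {(state, symbol): state}.

def dfa_equivalent(a, b, alphabet):
    """Decide language equivalence of two DFAs by exploring the product
    automaton: they differ iff some reachable state pair has exactly one
    accepting component."""
    start_a, acc_a, delta_a = a
    start_b, acc_b, delta_b = b
    seen = {(start_a, start_b)}
    queue = deque(seen)
    while queue:
        p, q = queue.popleft()
        if (p in acc_a) != (q in acc_b):
            return False  # a distinguishing word reaches this pair
        for sym in alphabet:
            nxt = (delta_a[(p, sym)], delta_b[(q, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True  # no reachable pair disagrees on acceptance

# Two DFAs over {0,1} accepting strings with an even number of 1s:
# a minimal 2-state machine and a redundant 4-state machine.
even1 = ("e", {"e"}, {("e", "0"): "e", ("e", "1"): "o",
                      ("o", "0"): "o", ("o", "1"): "e"})
even1b = ("a", {"a", "c"},
          {("a", "0"): "c", ("a", "1"): "b", ("b", "0"): "d", ("b", "1"): "a",
           ("c", "0"): "a", ("c", "1"): "d", ("d", "0"): "b", ("d", "1"): "c"})
print(dfa_equivalent(even1, even1b, "01"))
```

This pair-exploration runs in time proportional to the product of the two state counts; the PSPACE-completeness mentioned above arises only for nondeterministic automata, where determinization can blow up exponentially.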
References
Formal languages | Equivalence problem | [
"Mathematics",
"Technology"
] | 167 | [
"Formal languages",
"Mathematical logic",
"Computer science stubs",
"Computer science",
"Computing stubs"
] |
69,263,899 | https://en.wikipedia.org/wiki/Carla%20Seatzu | Carla Seatzu (born 1971) is an Italian electrical engineer whose research concerns discrete-event simulation, Petri nets, fault detection and isolation, and networked control systems, with applications in manufacturing and transportation. She is an ordinary professor (equivalent to full professor) in the faculty of engineering at the University of Cagliari.
Education and career
Seatzu earned a laurea in electrical engineering in 1996 at the University of Cagliari, at the same time passing the state examination in engineering. She completed her doctorate at the University of Cagliari in 2000. Her dissertation, Decentralized control of open-channel hydraulic systems, was supervised by Elio Usai.
After working as a research assistant and researcher at the University of Cagliari from 2000 to 2011, she became an associate professor there in 2011, and earned a habilitation in 2012–2013. She has been a full professor since 2017.
References
External links
1971 births
Living people
Italian electrical engineers
Italian women engineers
Control theorists
Academic staff of the University of Cagliari | Carla Seatzu | [
"Engineering"
] | 210 | [
"Control engineering",
"Control theorists"
] |
69,265,237 | https://en.wikipedia.org/wiki/Network%20Protocol%20Virtualization | Network Protocol Virtualization or Network Protocol Stack Virtualization is a concept of providing network connections as a service, without concerning application developer to decide the exact communication stack composition.
Concept
Network Protocol Virtualization (NPV) was firstly proposed by Heuschkel et al. in 2015 as a rough sketch as part of a transition concept for network protocol stacks. The concept evolved and was published in a deployable state in 2018.
The key idea is to decouple applications from their communication stacks. Today the socket API requires the application developer to compose the communication stack by hand by choosing between IPv4/IPv6 and UDP/TCP. NPV proposes that the network protocol stack instead be tailored to the observed network environment (e.g. link-layer technology, or current network performance). Thus, the network stack should not be composed at development time but at runtime, and it must be adaptable when needed.
Additionally, the decoupling relaxes the chains of the ISO OSI network layer model, and thus enables alternative concepts of communication stacks. Heuschkel et al. propose the concept of application-layer middleboxes as an example of adding further layers to the communication stack to enrich the communication with useful services (e.g. HTTP optimizations).
The figure illustrates the data flow. Applications interface with the NPV software through some kind of API. Heuschkel et al. proposed socket-API-equivalent replacements but envisioned more sophisticated interfaces for future applications. A scheduler assigns the application payload to one (of potentially many) communication stacks to be processed into network packets, which are sent using networking hardware. A management component decides how communication stacks are composed and the scheduling scheme. To support decisions, a management interface is provided to integrate the management system in software-defined networking contexts.
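The data flow described above can be sketched in a few lines (an illustrative toy, with hypothetical class and method names; this is not the actual VirtualStack API): applications hand payload to the virtualization layer, and a scheduler, rather than the application, picks which composed protocol stack processes it.

```python
# Toy sketch of the NPV idea: the application never names IPv4/IPv6 or
# UDP/TCP itself; a scheduler chooses a composed stack per message.
# All names here are hypothetical, invented for illustration.

class Stack:
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers          # e.g. ["IPv6", "UDP"]

    def send(self, payload: bytes) -> str:
        # Stand-in for processing payload into a network packet.
        return f"{'/'.join(self.layers)} packet ({len(payload)} bytes)"

class Scheduler:
    """Assigns payload to one of potentially many communication stacks,
    based on the observed network environment."""
    def __init__(self, stacks):
        self.stacks = stacks

    def pick(self, network_lossy: bool) -> Stack:
        # Toy management policy: prefer a reliable stack on lossy links.
        want = "TCP" if network_lossy else "UDP"
        return next(s for s in self.stacks if want in s.layers)

stacks = [Stack("fast", ["IPv6", "UDP"]), Stack("reliable", ["IPv4", "TCP"])]
sched = Scheduler(stacks)
chosen = sched.pick(network_lossy=True)
print(chosen.name, "->", chosen.send(b"hello"))
```

The point of the sketch is the inversion of control: stack composition and selection live behind the API, so the management component can swap policies at runtime without touching application code.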
NPV has been further investigated as a central element of LPWAN Internet of Things (IoT) scenarios. Specifically, the deployment of applications that are agnostic to the underlying transport, network, link and physical layers was explored by Rolando Herrero in 2020. In this context, NPV becomes a very successful and flexible tool to accomplish the deployment and management of constrained sensors, actuators and controllers in massive IoT access networks.
Implementations
Currently there is just one academic implementation available to demonstrate the concept. Heuschkel et al. published this implementation as demonstrator in 2016.
The last iteration of this code is available under AGPLv3 on Github.
See also
Application virtualization
Hardware virtualization
Virtualization
References
External links
An introduction to Virtualization
MAKI
VirtualStack (NPV Prototype)
Computer networking | Network Protocol Virtualization | [
"Technology",
"Engineering"
] | 533 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
69,267,058 | https://en.wikipedia.org/wiki/Alfa%20Romeo%20Tipo%201035 | The Alfa Romeo Tipo 1035 is a naturally-aspirated, 3.5-liter, V10 racing engine, designed and built by Alfa Romeo. It was originally specially designed for the Ligier Formula One team, but was later used in the experimental Alfa Romeo 164 Procar touring car, and the Alfa Romeo SE 048SP Group C sports prototype race car.
Engine design
In 1990, the Group C regulations underwent a major revamp, with the primary focus being on changing the engines to 3.5-litre units sourced from Formula One cars.
The project itself was a well-kept secret, and very little was ever revealed about the car's specifications. One thing that Alfa Romeo did reveal was that it used the 3.5-litre Tipo 1035 V10 engine from the still-born Alfa Romeo 164 Procar; this was a naturally aspirated 72-degree V10 originally designed for the Ligier Formula One team, which produced its claimed peak output at 13,300 RPM.
Applications
Alfa Romeo 164 Procar
Alfa Romeo SE 048SP
References
Engines by model
Gasoline engines by model
Alfa Romeo
Group C
Formula One engines
V10 engines
Alfa Romeo in motorsport
Alfa Romeo engines | Alfa Romeo Tipo 1035 | [
"Technology"
] | 245 | [
"Engines",
"Engines by model"
] |
58,350,478 | https://en.wikipedia.org/wiki/Kakutani%27s%20theorem%20%28measure%20theory%29 | In measure theory, a branch of mathematics, Kakutani's theorem is a fundamental result on the equivalence or mutual singularity of countable product measures. It gives an "if and only if" characterisation of when two such measures are equivalent, and hence it is extremely useful when trying to establish change-of-measure formulae for measures on function spaces. The result is due to the Japanese mathematician Shizuo Kakutani. Kakutani's theorem can be used, for example, to determine whether a translate of a Gaussian measure is equivalent to (only when the translation vector lies in the Cameron–Martin space of ), or whether a dilation of is equivalent to (only when the absolute value of the dilation factor is 1, which is part of the Feldman–Hájek theorem).
Statement of the theorem
For each $n \in \mathbb{N}$, let $\mu_n$ and $\nu_n$ be measures on the real line $\mathbb{R}$, and let $\mu = \bigotimes_{n \in \mathbb{N}} \mu_n$ and $\nu = \bigotimes_{n \in \mathbb{N}} \nu_n$ be the corresponding product measures on $\mathbb{R}^\infty$. Suppose also that, for each $n \in \mathbb{N}$, $\mu_n$ and $\nu_n$ are equivalent (i.e. have the same null sets). Then either $\mu$ and $\nu$ are equivalent, or else they are mutually singular. Furthermore, equivalence holds precisely when the infinite product
$$\prod_{n=1}^{\infty} \int_{\mathbb{R}} \sqrt{\frac{\mathrm{d}\mu_n}{\mathrm{d}\nu_n}} \,\mathrm{d}\nu_n$$
has a nonzero limit; or, equivalently, when the infinite series
$$\sum_{n=1}^{\infty} \log \int_{\mathbb{R}} \sqrt{\frac{\mathrm{d}\mu_n}{\mathrm{d}\nu_n}} \,\mathrm{d}\nu_n$$
converges.
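A standard consequence, worked out here as an illustration (this example is not part of the article's text): for a sequence of shifted Gaussian measures, Kakutani's criterion reduces to square-summability of the shifts.

```latex
% Shifted-Gaussian example: \mu_n = N(a_n, 1), \nu_n = N(0, 1).
% The density ratio is (d\mu_n / d\nu_n)(x) = e^{a_n x - a_n^2/2}, so
\int_{\mathbb{R}} \sqrt{\frac{\mathrm{d}\mu_n}{\mathrm{d}\nu_n}} \,\mathrm{d}\nu_n
  = \int_{\mathbb{R}} e^{a_n x/2 - a_n^2/4} \,
    \frac{e^{-x^2/2}}{\sqrt{2\pi}} \,\mathrm{d}x
  = e^{a_n^2/8 - a_n^2/4}
  = e^{-a_n^2/8}.
% The infinite product is therefore \exp(-\tfrac{1}{8}\sum_n a_n^2):
% the product measures are equivalent iff \sum_n a_n^2 < \infty,
% and mutually singular otherwise.
```

The middle step uses the Gaussian moment identity $\mathbb{E}[e^{tX}] = e^{t^2/2}$ for $X \sim N(0,1)$ with $t = a_n/2$.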
References
(See Theorem 2.12.7)
Probability theorems
Theorems in measure theory | Kakutani's theorem (measure theory) | [
"Mathematics"
] | 277 | [
"Theorems in mathematical analysis",
"Theorems in measure theory",
"Theorems in probability theory",
"Mathematical problems",
"Mathematical theorems"
] |
58,354,188 | https://en.wikipedia.org/wiki/Call-to-gate%20system | A call-to-gate system is an airport terminal design in which passengers are kept in a central area until shortly before their flight is due to board, rather than waiting near their gate. The international terminal at Calgary International Airport was the first terminal in North America to use this system, which is also used by European airports such as London Heathrow. The system is used to decrease the amount of time that passengers spend around the gate area, thereby increasing the amount of time they spend in the retail areas of the terminal.
References
Airport infrastructure | Call-to-gate system | [
"Engineering"
] | 109 | [
"Airport infrastructure",
"Aerospace engineering"
] |
58,362,519 | https://en.wikipedia.org/wiki/Alexander%20Boyd%20Stewart | Prof Alexander Boyd Stewart CBE FRSE FRIC (1904–1981) was a 20th century Scottish organic chemist and agriculturalist. He was President of the British Society of Soil Science.
Life
He was born on 3 November 1904 at Tarland in Aberdeenshire, the son of Donald Stewart, a farmer. He was educated at Robert Gordon's College in Aberdeen. He then studied science at Aberdeen University graduating MA in 1925 and BSc in 1928. He then continued as a postgraduate, gaining his doctorate (PhD) in 1932. He immediately obtained a post as Head of the Soil Fertility Department at the Macaulay Institute. Remaining at the institute he became its deputy director in 1954.
In 1955 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Donald McArthur, David Cuthbertson, A. T. Phillipson, Thomas Phemister, James Robert Matthews and Murray Macgregor.
In 1958 he left to become Professor of Agriculture at Aberdeen University. He was created a Commander of the Order of the British Empire (CBE) in 1962. He returned to the Macaulay Institute in 1964 as its director.
He retired in 1968 and died at his home, 3 Woodburn Place (a 1980s bungalow) in Aberdeen on 27 February 1981.
Family
In 1939 he married Alice F. Bowman.
Publications
Soil Fertility Investigations in India (1946)
Agriculture in the University of Aberdeen (1959)
References
1904 births
1981 deaths
British organic chemists
People from Tarland
People educated at Robert Gordon's College
Scottish agriculturalists
Alumni of the University of Aberdeen
Academics of the University of Aberdeen
Commanders of the Order of the British Empire
Fellows of the Royal Society of Edinburgh | Alexander Boyd Stewart | [
"Chemistry"
] | 333 | [
"Organic chemists",
"British organic chemists"
] |
78,029,987 | https://en.wikipedia.org/wiki/Marburg%20vaccine | A Marburg vaccine would protect against Marburg virus disease (MVD). There are currently no Food and Drug Administration-approved vaccines for the prevention of MVD. Many candidate vaccines have been developed and tested in various animal models. There is not yet an approved vaccine, because of economic factors in vaccine development, and because filoviruses killed few before the 2010s.
The most promising candidate vaccines are DNA vaccines or based on Venezuelan equine encephalitis virus replicons, vesicular stomatitis Indiana virus (VSIV) or filovirus-like particles (VLPs) as all of these candidates could protect nonhuman primates from marburgvirus-induced disease. DNA vaccines have entered clinical trials.
History
The first clinical study testing the efficacy of a Marburg virus vaccine was conducted in 2014. The study tested a DNA vaccine and concluded that inoculated individuals exhibited some level of antibody response. However, these vaccines were not expected to provide definitive immunity. Several animal models have proved useful in Marburg virus research, including hamsters, mice, and non-human primates (NHPs). Mice are useful in the initial phases of vaccine development as they are convenient models for mammalian disease, but their immune systems differ enough from humans' to warrant trials with other mammals. Of these models, infection in macaques seems to be the most similar to the effects in humans. A variety of other vaccines have been considered. Virus replicon particles (VRPs) were shown to be effective in guinea pigs, but lost efficacy once tested on NHPs. Additionally, an inactivated-virus vaccine proved ineffective. DNA vaccines showed some efficacy in NHPs, but all inoculated individuals showed signs of infection.
Because Marburg virus and Ebola virus belong to the same family, Filoviridae, some scientists have attempted to create a single-injection vaccine for both viruses. This would both make the vaccine more practical and lower the cost for developing countries. A single-injection vaccine has been shown not to cause any adverse reactogenicity (the potential adverse immune response to vaccination) in comparison to two separate vaccinations.
There is a candidate vaccine against the Marburg virus called rVSV-MARV. It was developed alongside vaccines for closely-related Ebolaviruses by the Canadian government in the early 2000s, twenty years before the outbreak. Production and testing of rVSV-MARV is blocked by legal monopolies held by the Merck Group. Merck acquired rights to all the closely-related candidate vaccines in 2014, but declined to work on most of them, including the Marburg vaccine, for economic reasons. While Merck returned the rights to the abandoned vaccines to the Public Health Agency of Canada, the vital rVSV vaccine production techniques which Merck had gained (while bringing the closely-related rVSV-ZEBOV vaccine into commercial use in 2019, with GAVI funding) remain Merck's, and cannot be used by anyone else wishing to develop a rVSV vaccine.
As of June 23, 2022, researchers working with the Public Health Agency of Canada conducted a study which showed promising results of a recombinant vesicular stomatitis virus (rVSV) vaccine in guinea pigs, entitled PHV01. According to the study, inoculation with the vaccine approximately one month prior to infection with the virus provided a high level of protection.
Even though there is much experimental research on Marburg virus, there is still no prominent vaccine. Human vaccination trials are either ultimately unsuccessful or are missing data specifically regarding Marburg virus. Due to the cost needed to handle Marburg virus at qualified facilities, the relatively few number of fatalities, and lack of commercial interest, the possibility of a vaccine has simply not come to fruition (see also economics of vaccines).
See also
Rwanda Marburg virus disease outbreak
References
Marburg virus disease
Hemorrhagic fevers
Vaccines | Marburg vaccine | [
"Biology"
] | 820 | [
"Vaccination",
"Vaccines"
] |
78,030,572 | https://en.wikipedia.org/wiki/Opus%20figlinum | Opus figlinum or opus figulinum, literally "pottery", is a type of masonry construction used in Roman architecture. It is a pavement formed of squares of pottery or terracotta, set flat and on edge alternately.
Description
Pavements in opus figlinum are usually made up of rectangular ceramic pottery or brick fragments of the same size, placed in groups of three. The orientation of adjacent groups is alternately vertical and horizontal; thus, the juxtaposition creates the visual impression of a braid pattern. The fragments are fixed to the subfloor with mortar. The fragments used are very small: about 2.5 x 2.5 x 2 cm.
References
Building stone
Pavements
Architectural elements
Ancient Roman construction techniques | Opus figlinum | [
"Technology",
"Engineering"
] | 149 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
78,034,325 | https://en.wikipedia.org/wiki/Type%20VIII%20secretion%20system | A Type VIII secretion system is a type of secretion system found within the inner and outer membranes of gram-negative bacteria. This system is also referred to as the curli biogenesis pathway or the extracellular nucleation-precipitation pathway. It is associated with the formation of biofilms and infecting hosts. Curli formation is especially efficient at evading the host's immune system due to the subunits being able to quickly assemble in a single process and not having intermediates. This system is associated with curli-specific genes and utilizes multiple proteins in its process to form curli fibers. These proteins include CsgA CsgB, CsgC, CsgD, CsgE, CsgF, and CsgG. Type VIII secretion system facilitates the assembly and translocation of curli fibers.
Curli fibers and their virulence
Curli fibers are made through the curli biogenesis system, also known as the type VIII secretion system, and are essentially long, linear structures made from proteins that are secreted to the outside of the cell into its surrounding environment. They are made mostly by gram-negative bacteria and, upon secretion, they form compact clusters around the outside of the cell. The main function of the curli fibers involves their interactions with biofilms. In pathogenic bacteria, curlis can contribute to virulence by helping in cell invasion and activating the innate immune response.
Knowing how curli fibers are made, and how the type VIII secretion system works, can help develop an inhibitor to stop or reduce the production of these curli fibers and overall reduce the virulence of the bacteria that produce them. Understanding these mechanisms can also play a big role in creating treatments for infections that are associated with biofilms.
Curli development and control
Curli biogenesis is an adaptable process that uses a direct route and can transform from an intrinsically disordered complex system to a simple amyloid state. The proteins in this system are encoded by two separate operons. One operon codes for CsgA, CsgB, and CsgC, whereas the other codes for CsgD, CsgE, CsgF, and CsgG. The two major subunits involved in this process are CsgA and CsgB, with CsgA being the most important to the system. CsgA and CsgB are responsible for the system's control and extension of fibers. CsgA can transition from a disordered to an ordered amyloid state, while CsgB functions as a nucleator to help promote the polymerization of CsgA. Then, CsgC is introduced as a chaperone and works to keep CsgA from reaching the amyloid state prematurely. The process by which CsgC prevents this is still poorly understood, but a positively charged beta-strand is the most widely theorized mechanism. CsgG is part of a secretion channel that facilitates the translocation of CsgA to the periplasm. CsgE functions as a specificity binder that helps guide CsgA to the CsgG secretion channel so that CsgA will be in the correct conformation for polymerization. Throughout this process, CsgF interacts with CsgA and CsgB to help enhance the assembly of CsgA and coordinate the nucleating activity of CsgB. CsgD functions as a transcriptional regulator that influences the expression of CsgA and CsgB in response to environmental factors.
The resulting structure is made up of alternating CsgA and CsgB subunits with a CsgF unit at the base, and the entire structure sits on the outside of the bacterial cell.
Secretion mechanisms
The secretion of the assembled units requires energy. Energy within a bacterial cell is typically supplied by ATP or GTP, the proton motive force, or other membrane potentials. However, with type VIII secretion systems, it is unlikely that energy is derived from one of these typical methods due to its location on the outer membrane of gram-negative bacteria. The CsgG protein complex is the channel used to allow the assembled CsgA, CsgB, and CsgF subunits to move through the membrane to the outside of the cell, where they remain in close proximity to the CsgG protein. It is thought that the energy released from the subunits folding and unfolding, as well as the potential from the movement of the subunits across the membrane, provides the necessary energy for secretion. While the type VIII secretion pathway is most desirable, some bacterial species may use the functional amyloid pathway, or Fap, to form a biofilm so they can attach to surfaces.
References
Secretion
Cell biology | Type VIII secretion system | [
"Biology"
] | 929 | [
"Cell biology"
] |
78,035,813 | https://en.wikipedia.org/wiki/Nemo%20Power%20Tools | Nemo Power Tools is the first manufacturer of a full line of power tools that are pressurized to work under water for boating, scuba and other deep sea activities. The company also uses the same technology to manufacture a clean room drill, the GRABO lifter and a special-ops line of underwater tools. The company was founded by Nimo Rotem and Oleg Zhukov.
History
Nemo Power Tools was started in 2010 when founder Nimo Rotem was approached by the Israeli military about creating an underwater drill for them. At the time he wasn't even a certified diver and had to acquire his PADI Open Water certification as part of the development process. In 2013, Nimo and Oleg decided to bring their tools to the commercial market.
Rotem built the first tool from scratch, featuring a durable die-cast aluminum body, powered by two 18-volt Li-ion batteries. It also includes a keyless metal chuck and a rotating seal, similar to those found in boat drive shafts. The initial drill was designed to work at depths of up to 165 feet, making it well suited to diving, boat repair and scientific research.
A freshwater version that can operate in chlorinated water was later developed, as well as a special-ops line of hammer drills with salt-water-resistant paint and no logos that can operate at depths of up to 328 feet. A similarly performing angle grinder was also developed to the same special-ops performance standards.
The company has continued to expand its underwater line of products to include a variety of drills, grinders, saws and a hull cleaner. The company also produces a 15,000 lumens underwater floodlight for working underwater.
Due to their unique ability to work in marine environments, their tools have been seen on a number of TV shows, including being heavily featured by Dustin Hurt on the Discovery Channel's Gold Rush: White Water.
Clean Room Drill
Using technology from their underwater tools, Nemo Power Tools created a clean room drill which seals the internal compartment and works with no internal ventilation, making the drill viable in clean room environments such as medical labs, pharmaceutical companies, and industrial manufacturing.
Manufactured with a sealed design and construction, the drill/driver generates minimal airborne particles. An independent third-party laboratory performed an airborne particle count statistical analysis, which confirmed that the Nemo Clean Room Drill satisfies the particulate requirements for an ISO Class 7 clean room at 0.5, 1.0, and 5.0 micrometers under operational conditions. This conclusion is based on the data collected and statistical analysis calculations in accordance with ISO 14644-1.
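The ISO Class 7 limits cited above can be computed from the class-limit concentration formula defined in ISO 14644-1, C = 10^N × (0.1/D)^2.08 particles per cubic metre, where N is the class number and D the particle size in micrometres. A short illustrative sketch:

```python
# ISO 14644-1 maximum particle concentration (particles per m^3) for
# cleanroom class N at particle size D (micrometres).
def iso_class_limit(n_class, d_um):
    return 10 ** n_class * (0.1 / d_um) ** 2.08

# Limits an ISO Class 7 room must satisfy at the three sizes cited above.
limits = {d: round(iso_class_limit(7, d)) for d in (0.5, 1.0, 5.0)}
```

For Class 7 this yields roughly 352,000 particles/m³ at 0.5 µm, 83,200 at 1.0 µm, and 2,930 at 5.0 µm, matching the standard's published table.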
GRABO
After working on a number of tools for the Nemo line of power tools, in 2019 Nemo and Oleg began playing with some of the technology they had developed and created the GRABO using sealing and fabrication methods from the marine industry.
The GRABO is a portable, battery-operated vacuum lifter designed to handle a wide range of materials, including tiles, metal, pavers, glass, and plywood and drywall sheet goods. Its dual-seal system is effective even on uneven surfaces like textured glass, riven tiles, and rough paving, as well as on dusty or wet areas. What sets it apart is its ability to grip textured materials as well as porous materials, such as plasterboard, plywood, and certain porous stones, by continuously running its vacuum motor to offset air loss through the material. The GRABO can lift up to 375 lbs.
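As a rough illustration of how such a vacuum lifter holds a load, the holding force is the pressure differential across the seal multiplied by the effective pad area. The pad dimensions and vacuum level below are illustrative assumptions, not GRABO specifications:

```python
# Back-of-envelope holding force of a vacuum lifter: F = dP * A.
def holding_force_newtons(dp_pascals, area_m2):
    return dp_pascals * area_m2

area = 0.25 * 0.18            # assumed 25 cm x 18 cm effective seal area
dp = 0.6 * 101325             # assumed vacuum of 60% of atmospheric pressure
force_lbf = holding_force_newtons(dp, area) / 4.44822  # newtons -> pounds-force
```

With these assumed figures the theoretical holding force is around 615 lbf, illustrating why a rated lift capacity (here, 375 lbs) sits comfortably below the theoretical maximum as a safety margin.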
In 2024, the company entered into a licensing partnership with Stanley Black & Decker to create and distribute the DeWalt GRABO.
References
External links
Grabo
American brands
Material handling
Woodworking hand-held power tools
Power tool manufacturers
Manufacturing companies based in Nevada
Las Vegas
Tool manufacturing companies of the United States | Nemo Power Tools | [
"Physics"
] | 776 | [
"Materials",
"Material handling",
"Matter"
] |
78,038,285 | https://en.wikipedia.org/wiki/Cryogenic%20Observatory%20for%20Signatures%20Seen%20in%20Next-Generation%20Underground%20Searches | The Cryogenic Observatory for SIgnatures seen in Next-generation Underground Searches (COSINUS) is a scientific collaboration aimed at developing cryogenic detectors for the direct detection of dark matter, particularly in relation to results observed by other experiments like DAMA/LIBRA. The goal of COSINUS is to confirm or refute these results by using different detection techniques while maintaining high sensitivity to dark matter interactions.
The participating institutes in the COSINUS collaboration include the Max Planck Institute for Physics (Germany), the Gran Sasso Science Institute (Italy), the Helsinki Institute of Physics (Finland), the Institute of High Energy Physics (Austria), the Technical University of Vienna (Austria), the University of L'Aquila (Italy), the Istituto Nazionale di Fisica Nucleare (Italy), and the Shanghai Institute of Ceramics, Chinese Academy of Sciences (China). The experiment is conducted in the underground laboratory of the Gran Sasso National Laboratory (LNGS) in Italy, which provides the necessary shielding from cosmic radiation and environmental interference for the detection of rare dark matter interactions.
Similar to CRESST, COSINUS utilizes cryogenic detectors that operate at temperatures of a few millikelvin to achieve high energy resolution. The detectors are designed to measure both phonon (heat) and photon (light) signals, using scintillating sodium iodide (NaI) crystals, to discriminate between dark matter signals and background noise.
COSINUS was inaugurated in Spring 2024 and will start recording data in early 2025.
References
External links
COSINUS Official Website
Gran Sasso National Laboratory
Experiments for dark matter search
Astronomical observatories
Dark matter | Cryogenic Observatory for Signatures Seen in Next-Generation Underground Searches | [
"Physics",
"Astronomy"
] | 346 | [
"Dark matter",
"Unsolved problems in astronomy",
"Astronomical observatories",
"Concepts in astronomy",
"Unsolved problems in physics",
"Astronomy organizations",
"Experiments for dark matter search",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
66,350,944 | https://en.wikipedia.org/wiki/Maxwell%E2%80%93Fricke%20equation | The Maxwell–Fricke equation relates the resistivity of blood to hematocrit. This relationship has been shown to hold for humans, and a variety on non-human warm-blooded species, including canines.
Equation
The Maxwell–Fricke equation is written as:
where ρ is the resistivity of blood, ρ1 is the resistivity of plasma, ρ2 is the resistivity of blood cells and φ is the hematocrit.
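As an illustration, the spherical-particle (Maxwell) limit of this family of mixture relations can be evaluated numerically; Fricke's generalization replaces the spherical shape factor with one appropriate to spheroidal cells. The resistivity values below are illustrative assumptions, not measured data:

```python
# Hedged sketch: Maxwell's mixture relation for effective resistivity,
# the spherical-inclusion special case of the Maxwell-Fricke family.
def blood_resistivity(rho_plasma, rho_cells, hematocrit):
    """Effective resistivity from plasma/cell resistivities and the
    cell volume fraction (hematocrit), assuming spherical inclusions."""
    s1 = 1.0 / rho_plasma   # plasma conductivity
    s2 = 1.0 / rho_cells    # cell conductivity
    beta = (s2 - s1) / (s2 + 2 * s1)
    s_eff = s1 * (1 + 2 * hematocrit * beta) / (1 - hematocrit * beta)
    return 1.0 / s_eff

# Red cells conduct far worse than plasma, so blood resistivity
# should rise monotonically with hematocrit.
rhos = [blood_resistivity(0.7, 15.0, h) for h in (0.0, 0.2, 0.4, 0.6)]
```

At zero hematocrit the effective resistivity equals the plasma resistivity, and it increases with the cell fraction, which is the qualitative behavior the Maxwell–Fricke relation captures.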
References
Mathematics in medicine | Maxwell–Fricke equation | [
"Mathematics"
] | 99 | [
"Applied mathematics",
"Mathematics in medicine",
"Applied mathematics stubs"
] |
70,780,040 | https://en.wikipedia.org/wiki/Symmetron | The symmetron is a hypothesized elementary particle that mediates a fifth force in particle physics. It emerged as one potential solution to the symmetron field, a hypothesizedscalar field.
See also
List of hypothetical particles
References
Hypothetical elementary particles
Bosons
Subatomic particles with spin 0
Force carriers | Symmetron | [
"Physics"
] | 69 | [
"Physical phenomena",
"Force carriers",
"Unsolved problems in physics",
"Bosons",
"Subatomic particles",
"Particle physics",
"Particle physics stubs",
"Fundamental interactions",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Matter"
] |
70,780,754 | https://en.wikipedia.org/wiki/Diffusive%E2%80%93thermal%20instability | Diffusive–thermal instability or thermo–diffusive instability is an intrinsic flame instability that occurs both in premixed flames and in diffusion flames and arises because of the difference in the diffusion coefficient values for the fuel and heat transport, characterized by non-unity values of Lewis numbers. The instability mechanism that arises here is the same as in Turing instability explaining chemical morphogenesis, although the mechanism was first discovered in the context of combustion by Yakov Zeldovich in 1944 to explain the cellular structures appearing in lean hydrogen flames. Quantitative stability theory for premixed flames were developed by Gregory Sivashinsky (1977), Guy Joulin and Paul Clavin (1979) and for diffusion flames by Jong S. Kim and Forman A. Williams (1996,1997).
Dispersion relation for premixed flames
To neglect the influences by hydrodynamic instabilities such as Darrieus–Landau instability, Rayleigh–Taylor instability etc., the analysis usually neglects effects due to the thermal expansion of the gas mixture by assuming a constant density model. Such an approximation is referred to as diffusive-thermal approximation or thermo-diffusive approximation which was first introduced by Grigory Barenblatt, Yakov Zeldovich and A. G. Istratov in 1962. With a one-step chemistry model and assuming the perturbations to a steady planar flame in the form , where is the transverse coordinate system perpendicular to flame, is the time, is the perturbation wavevector and is the temporal growth rate of the disturbance, the dispersion relation for one-reactant flames is given implicitly by
where , , is the Lewis number of the fuel and is the Zeldovich number. This relation provides in general three roots for in which the one with maximum would determine the stability character. The stability margins are given by the following equations
describing two curves in the vs. plane. The first curve is associated with the first of these conditions and the second curve with the second. The first curve separates the region of stable modes from the region corresponding to cellular instability, whereas the second condition indicates the presence of traveling and/or pulsating instabilities.
See also
Turing pattern
Darrieus–Landau instability
Kuramoto–Sivashinsky equation
Clavin–Garcia equation
Double diffusive convection
References
Fluid dynamics
Combustion
Fluid dynamic instabilities | Diffusive–thermal instability | [
"Chemistry",
"Engineering"
] | 494 | [
"Fluid dynamic instabilities",
"Chemical engineering",
"Combustion",
"Piping",
"Fluid dynamics"
] |
65,051,238 | https://en.wikipedia.org/wiki/Continuous%20spin%20particle | In theoretical physics, a continuous spin particle (CSP), sometimes called an infinite spin particle, is a massless particle never observed before in nature. This particle is one of Poincaré group's massless representations which, along with ordinary massless particles, was classified by Eugene Wigner in 1939. Historically, a compatible theory that could describe this elementary particle was unknown; however, 75 years after Wigner's classification, the first local action principle for bosonic continuous spin particles was introduced in 2014, and the first local action principle for fermionic continuous spin particles was suggested in 2015. It has been illustrated that this particle can interact with matter in flat spacetime. Supersymmetric continuous spin gauge theory has been studied in three and four spacetime dimensions.
In condensed matter systems, CSPs can be understood as massless generalizations of the anyon.
References
Hypothetical particles | Continuous spin particle | [
"Physics"
] | 182 | [
"Hypothetical particles",
"Unsolved problems in physics",
"Subatomic particles",
"Particle physics",
"Particle physics stubs",
"Physics beyond the Standard Model",
"Matter"
] |
73,677,675 | https://en.wikipedia.org/wiki/Combined%20photothermal%20and%20photodynamic%20therapy | Photodynamic/photothermal combination therapy involves the usage of a chemical compound or nanomaterial that, when irradiated at a certain wavelength, converts light energy into reactive oxygen species (ROS) and heat. This has shown to be highly effective in the treatment of skin infections, showing increased wound healing rates and a lower impact on human cell viability than photodynamic (PD) or photothermal (PT) therapies. The compounds involved often employ additional mechanisms of action or side effect reduction mechanisms, further increasing their efficacy.
Phototherapies are minimally invasive, with the primary toxicity issues surrounding phototoxicity and the nonspecific ROS and heat mechanisms of action affecting healthy human cells (albeit in lower amounts than the target cells). In skin wound infections, multiple phototherapeutic approaches have observed increased rates of wound closure over nontreated controls. This is typically due to an upregulation of vascular endothelial growth factor (VEGF) and hypoxia-inducible factor (HIF). Phototherapies are also active against both gram-positive and gram-negative bacteria, with photodynamic therapy having some exceptions.
To apply this technique, a photosensitizer is localized to the wound or tumor site, either topically or intravenously. Once localized, the target area is exposed to a laser of a selected wavelength and intensity for a predetermined irradiation time. The wavelength, localization technique, laser intensity, and irradiation period are determined based on the individual phototherapeutic agent, as these factors can vary greatly from compound to compound. Topical applications may be through the incorporation of the phototherapeutic agent with a hydrogel that will slowly leech the compound into the wound, allowing for a more controlled production of ROS and/or heat.
Phototherapy Types
Photodynamic Therapy
Photosensitizers and approved treatments
A photosensitizer is a chemical compound or nanomaterial capable of capturing light energy and using this energy to generate ROS. Currently, there are 6 photosensitizers that are clinically approved or undergoing clinical trials for the treatment of cancers and 1 approved for the treatment of eye disorders and diseases. Photodynamic therapy (PDT) is also often used for acne treatment as well as various dermatological conditions such as psoriasis, atopic dermatitis, and vitiligo. It is highly unlikely that bacteria would gain resistance to a photosensitizer or PDT treatment, as the photosensitizers can generate ROS within or outside of the target cell, both of which damage the membrane.
Mechanism of action
A photosensitizer generates ROS through one of two processes. Type I involves a redox reaction that results in the creation of superoxides (O2•−), hydroxyl radicals (OH•), and radical peroxides, whereas Type II generates singlet oxygen directly through an electron transfer from the photosensitizer. These ROS go on to nonspecifically damage a variety of cellular components, including proteins, DNA, and lipids as they seek to remove the radical.
Limitations
Due to the necessity of oxygen for PDT, these treatments do not work as well in hypoxic environments, including in developed tumors and some deep wounds. Dental infections tend to also respond better to photothermal therapy than photodynamic therapy, though both have a strong effect. The efficacy of PDT for antimicrobial usage is limited by the properties of the membrane of the target cell such as the electrical gradient (membrane potential) and lipid composition. Whereas high cell death is observed for Escherichia coli and Staphylococcus aureus, other bacterial species such as Klebsiella pneumoniae and Acinetobacter baumannii tend to see very low impact from PDT due to these factors. This limits potential as a broadband antibiotic, but may also allow for specificity in targeting the pathogenic cells over human and skin microbiome cells.
Photothermal Therapy
Indocyanine green is an FDA-approved photothermal agent that is primarily used in imaging techniques, but also displays anticancer and antimicrobial activity through photothermal therapy (PTT) treatments. Photothermal agents are active against diseased cells by accumulating in or around target cells, then converting light energy directly to heat, killing the target through heat-related damage.
PTT has a low level of selectivity beyond the accumulation stage, in which it tends to preferentially accumulate within diseased and bacterial cells. This increases broadband antibiotic activity and decreases the likelihood of resistance development, but also raises the impact on human cells. Human cells experience irreversible damage in the range of 46-60 °C, which is below temperatures reached by some photothermal agents during photothermal therapy. Human cell viability may be maintained through low temperature PTT (≤ 45 °C), which is typically only possible in combination with an additional antibiotic or photodynamic activity.
Combination Therapy - Antibacterial
Photodynamic/photothermal combination therapy combines the mechanisms of ROS production and heat generation into one treatment for a heightened effect on the target bacterial cells. In many cases, this can be done with a single compound or nanomaterial (phototherapeutic agent) and wavelength.
Advantages over monotherapy
Increased antibiotic efficacy
Due to the presence of both ROS and excess heat, target cells are less able to resist each effect. Increased heat corresponds to heightened cell membrane permeability, allowing the generation of ROS within the target cell. This also removes/reduces the selectivity observed for PDT, as it is able to enter the cell unhindered.
Lower side effects
Both photosensitizers and photothermal agents have some degree of selectivity for target cells over healthy human cells, but in utilizing both of these mechanisms this selectivity is bolstered. Increased antibiotic efficacy indicates a lower likelihood of requiring follow-up treatments, so the damage is minimal. In addition, some of these combination phototherapeutic agents have antioxidant/reactive oxygen scavenging properties, reducing the amount of collateral damage sustained by the surrounding human cells.
Incorporation of tertiary mechanisms
Many phototherapeutic agents that display both PD and PT activity come with added effects, such as antibiotic metal ions, physical antibiotic mechanisms, or peroxidase-like activity. These added effects further increase antibiotic activity, often demonstrating broadband activity with 99% cell death or above regardless of strain or drug resistance.
References
Light therapy
Medical physics
Medical treatments | Combined photothermal and photodynamic therapy | [
"Physics"
] | 1,370 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
73,682,080 | https://en.wikipedia.org/wiki/General%20equation%20of%20heat%20transfer | In fluid dynamics, the general equation of heat transfer is a nonlinear partial differential equation describing specific entropy production in a Newtonian fluid subject to thermal conduction and viscous forces:where is the specific entropy, is the fluid's density, is the fluid's temperature, is the material derivative, is the thermal conductivity, is the dynamic viscosity, is the second Lamé parameter, is the flow velocity, is the del operator used to characterize the gradient and divergence, and is the Kronecker delta.
If the flow velocity is negligible, the general equation of heat transfer reduces to the standard heat equation. It may also be extended to rotating, stratified flows, such as those encountered in geophysical fluid dynamics.
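The static limit can be illustrated numerically: with the flow velocity set to zero, one is left with the standard heat equation, which can be solved with a simple explicit finite-difference scheme. The grid, diffusivity, and boundary conditions below are illustrative choices:

```python
# Minimal sketch of the static limit: with negligible flow velocity the
# entropy equation reduces to the heat equation dT/dt = alpha * d2T/dx2.
def step(T, alpha, dx, dt):
    """One explicit Euler step; endpoint temperatures held fixed (Dirichlet)."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + alpha * dt / dx**2 * (T[i-1] - 2*T[i] + T[i+1])
    return new

T = [0.0] * 21
T[10] = 100.0                 # initial hot spot at the midpoint
alpha, dx = 1.0, 1.0
dt = 0.25 * dx**2 / alpha     # within the stability limit dt <= dx**2 / (2*alpha)
for _ in range(200):
    T = step(T, alpha, dx, dt)
```

After many steps the hot spot spreads out symmetrically and decays toward the boundary temperature, the qualitative behavior expected of pure thermal conduction.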
Derivation
Extension of the ideal fluid energy equation
For a viscous, Newtonian fluid, the governing equations for mass conservation and momentum conservation are the continuity equation and the Navier-Stokes equations:

where is the pressure and is the viscous stress tensor, with the components of the viscous stress tensor given by:

The energy of a unit volume of the fluid is the sum of the kinetic energy and the internal energy , where is the specific internal energy. In an ideal fluid, as described by the Euler equations, the conservation of energy is defined by the equation:

where is the specific enthalpy. However, for conservation of energy to hold in a viscous fluid subject to thermal conduction, the energy flux due to advection must be supplemented by a heat flux given by Fourier's law and a flux due to internal friction . Then the general equation for conservation of energy is:
Equation for entropy production
Note that the thermodynamic relations for the internal energy and enthalpy are given by:

We may also obtain an equation for the kinetic energy by taking the dot product of the Navier-Stokes equation with the flow velocity to yield:

The second term on the righthand side may be expanded to read:

With the aid of the thermodynamic relation for enthalpy and the last result, we may then put the kinetic energy equation into the form:

Now expanding the time derivative of the total energy, we have:

Then by expanding each of these terms, we find that:

And collecting terms, we are left with:

Now adding the divergence of the heat flux due to thermal conduction to each side, we have that:

However, we know that, by conservation of energy, the lefthand side is equal to zero, leaving us with:

The product of the viscous stress tensor and the velocity gradient can be expanded as:

Thus leading to the final form of the equation for specific entropy production:

In the case where thermal conduction and viscous forces are absent, the equation for entropy production collapses to $Ds/Dt = 0$, showing that ideal fluid flow is isentropic.
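As a concrete check of the viscous contribution to entropy production, the sketch below evaluates the Newtonian dissipation (viscous stress contracted with the velocity gradient) for a simple shear flow. The stress form is the standard Newtonian one and all numbers are illustrative; for incompressible simple shear the dissipation per unit volume reduces to the shear viscosity times the shear rate squared, which is positive, consistent with entropy production:

```python
# Viscous dissipation sigma'_ij * dv_i/dx_j for a Newtonian fluid,
# evaluated for simple shear v = (gamma * y, 0, 0).
def shear_dissipation(mu, lam, grad_v):
    """grad_v[i][j] holds dv_i/dx_j; mu = dynamic viscosity,
    lam = second (bulk-related) viscosity coefficient."""
    div_v = sum(grad_v[i][i] for i in range(3))
    total = 0.0
    for i in range(3):
        for j in range(3):
            sym = grad_v[i][j] + grad_v[j][i]
            stress = mu * (sym - (2 / 3) * (i == j) * div_v) + lam * (i == j) * div_v
            total += stress * grad_v[i][j]
    return total

gamma, mu = 2.0, 0.5
grad_v = [[0.0, gamma, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # dv_x/dy = gamma
d = shear_dissipation(mu, 0.1, grad_v)  # equals mu * gamma**2 for this flow
```

Because the flow is divergence-free, the second viscosity coefficient drops out and the result is exactly mu * gamma**2 = 2.0, a strictly positive entropy source.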
Application
This equation is derived in Section 49, at the opening of the chapter on "Thermal Conduction in Fluids" in the sixth volume of L.D. Landau and E.M. Lifshitz's Course of Theoretical Physics. It might be used to measure the heat transfer and air flow in a domestic refrigerator, to do a harmonic analysis of regenerators, or to understand the physics of glaciers.
See also
Dissipation
Heat transfer
Turbulent kinetic energy
References
Further reading
Partial differential equations
Heat transfer
Heat conduction
Equations of fluid dynamics | General equation of heat transfer | [
"Physics",
"Chemistry"
] | 686 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Equations of fluid dynamics",
"Equations of physics",
"Thermodynamics",
"Heat conduction",
"Fluid dynamics"
] |
73,682,940 | https://en.wikipedia.org/wiki/Data-driven%20model | Data-driven models are a class of computational models that primarily rely on historical data collected throughout a system's or process' lifetime to establish relationships between input, internal, and output variables. Commonly found in numerous articles and publications, data-driven models have evolved from earlier statistical models, overcoming limitations posed by strict assumptions about probability distributions. These models have gained prominence across various fields, particularly in the era of big data, artificial intelligence, and machine learning, where they offer valuable insights and predictions based on the available data.
Background
These models have evolved from earlier statistical models, which were based on certain assumptions about probability distributions that often proved to be overly restrictive. The emergence of data-driven models in the 1950s and 1960s coincided with the development of digital computers, advancements in artificial intelligence research, and the introduction of new approaches in non-behavioural modelling, such as pattern recognition and automatic classification.
Key Concepts
Data-driven models encompass a wide range of techniques and methodologies that aim to intelligently process and analyse large datasets. Examples include fuzzy logic, fuzzy and rough sets for handling uncertainty, neural networks for approximating functions, global optimization and evolutionary computing, statistical learning theory, and Bayesian methods. These models have found applications in various fields, including economics, customer relations management, financial services, medicine, and the military, among others.
Machine learning, a subfield of artificial intelligence, is closely related to data-driven modelling as it also focuses on using historical data to create models that can make predictions and identify patterns. In fact, many data-driven models incorporate machine learning techniques, such as regression, classification, and clustering algorithms, to process and analyse data.
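A minimal example of the data-driven approach described above: fitting a least-squares line to historical input/output pairs and using it to predict an unseen input. This is a pure-Python sketch and the data values are illustrative:

```python
# Minimal data-driven model: ordinary least-squares fit of a line
# to historical observations, then prediction for a new input.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx   # (slope, intercept)

# Historical observations of a roughly linear process, y ~ 2x + 1.
xs = [0, 1, 2, 3, 4, 5]
ys = [1.1, 2.9, 5.2, 7.0, 8.8, 11.1]
a, b = fit_line(xs, ys)
prediction = a * 6 + b              # model-based prediction for a new input
```

The fitted slope and intercept are recovered from the data alone, with no physical model of the underlying process, which is the defining feature of a data-driven model.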
In recent years, the concept of data-driven models has gained considerable attention in the field of water resources, with numerous applications, academic courses, and scientific publications using the term as a generalization for models that rely on data rather than physics. This classification has been featured in various publications and has even spurred the development of hybrid models in the past decade. Hybrid models attempt to quantify the degree of physically based information used in hydrological models and determine whether the process of building the model is primarily driven by physics or purely data-based. As a result, data-driven models have become an essential topic of discussion and exploration within water resources management and research.
The term "data-driven modelling" (DDM) refers to the overarching paradigm of using historical data in conjunction with advanced computational techniques, including machine learning and artificial intelligence, to create models that can reveal underlying trends and patterns and, in some cases, make predictions. Data-driven models can be built with or without detailed knowledge of the underlying processes governing the system behavior, which makes them particularly useful when such knowledge is missing or fragmented.
References
Models of computation
Machine learning
Data analysis
Statistical models | Data-driven model | [
"Engineering"
] | 573 | [
"Artificial intelligence engineering",
"Machine learning"
] |
72,261,704 | https://en.wikipedia.org/wiki/Transition%20metal%20sulfoxide%20complex | A transition metal sulfoxide complex is a coordination complex containing one or more sulfoxide ligands. The inventory is large.
Scope of sulfoxide ligands
The most common sulfoxide ligand is dimethyl sulfoxide (dmso). Many sulfoxides are known because an enormous range of organic substituents are possible. When the two substituents differ, the ligand is chiral. Chiral sulfoxides are configurationally stable. One example is methyl phenyl sulfoxide.
Structures
Sulfoxides can bind to metals by the oxygen atom or by sulfur. This dichotomy is called linkage isomerism. O-bonded sulfoxide ligands are far more common, especially for 1st row metals. S-bonded sulfoxides are only found for soft metal centers, such as Ru(II). Complexes with both O- and S-bonded sulfoxide ligands are known. In some cases, sulfoxides are bridging ligands, with S bonded to one metal and O bonded to the other.
Synthesis and reactions
Being a polar solvent with a high dielectric constant, dmso dissolves many metal salts to give the corresponding complexes. Other ligand-solvent combinations include acetonitrile and water, which respectively form metal-acetonitrile complexes and metal aquo complexes. Treatment of thioether complexes with peroxide reagents gives sulfoxide complexes. In rare cases, sulfoxide complexes are prepared by S-alkylation of sulfenito complexes.
Metal thioether complexes are susceptible to sulfoxidation with dimethyldioxirane.
Reactions
Being weakly basic, sulfoxide ligands are generally labile, i.e. they are rapidly displaced by other more basic ligands.
O-bonded sulfoxide ligands are susceptible to oxidation at sulfur. In this way, the weakly bonded ligand is converted into a leaving group, such as dimethylsulfone. Since dmso is susceptible to deprotonation by strong base, cationic dmso complexes might be expected to undergo H-D exchange under basic conditions. Such behavior is not observed even for the trication .
Several metal sulfoxide complexes have been investigated as catalysts. The molybdoenzyme DMSO reductase catalyzes the reduction of dmso to dimethyl sulfide.
Examples
Several homoleptic octahedral complexes of sulfoxides have been characterized by X-ray crystallography. These include the complexes for M = Cr(III), Mn(II), Fe(II), Fe(III), Co(II), Co(III), Ni(II), Cu(II), Zn(II), Cd(II), and Hg(II). All such derivatives feature O-bonded sulfoxides. The tricationic complex in features one S-bonded and five O-bonded sulfoxide ligands. The complex is square planar, in contrast to the derivative with dmso ligands. The square planar d8 complex features a pair of S- and O-bonded sulfoxide ligands.
References
Coordination complexes | Transition metal sulfoxide complex | [
"Chemistry"
] | 679 | [
"Coordination chemistry",
"Coordination complexes"
] |
72,262,256 | https://en.wikipedia.org/wiki/Penetration%20enhancer | Penetration enhancers (also called chemical penetration enhancers, absorption enhancers or sorption promotors) are chemical compounds that can facilitate the penetration of active pharmaceutical ingredients (API) into or through the poorly permeable biological membranes. These compounds are used in some pharmaceutical formulations to enhance the penetration of APIs in transdermal drug delivery and transmucosal drug delivery (for example, ocular, nasal, oral and buccal). They typically penetrate into the biological membranes and reversibly decrease their barrier properties.
Transdermal drug delivery
Human skin is a very impermeable membrane that protects the body from ingress of harmful substances and prevents water loss from underlying organs. However, this seriously limits the use of skin as a site for drug administration. One of the approaches to facilitate transdermal drug delivery is the use of penetration enhancers. Many different compounds have been explored as potential penetration enhancers to facilitate transdermal drug delivery. These include dimethylsulphoxide, azones (such as laurocapram), pyrrolidones (for example 2-pyrrolidone), alcohols (ethanol and decanol), glycols (for example propylene glycol), surfactants, urea, various hydrocarbons and terpenes. Different potential skin site and modes of action were identified for penetration enhancement through the skin. In some cases, penetration enhancers may disrupt the packing motif of the intercellular lipid matrix or keratin domains. In other cases, drug penetration to the skin is facilitated because the penetration enhancer saturates the tissue and becomes a better system to dissolve the molecules of API.
Ocular drug delivery
Topical administration to the eye is usually characterised by very poor drug bioavailability due to several natural defence mechanisms, including nasolacrimal drainage, blinking, and poor permeability of the cornea. Enhancement of the corneal permeability to drug molecules is one of the strategies to improve the efficiency of topical drug delivery to the eye. Several classes of compounds have been researched as potential penetration enhancers through ocular membranes. These include chelating agents, cyclodextrins, surfactants, bile acids and salts, and crown ethers. There are also reports on the use of cell penetrating peptides and chitosan as penetration enhancers in ocular drug delivery. The most commonly used penetration enhancers in ocular formulations are benzalkonium chloride and ethylenediamine tetraacetate (EDTA). Benzalkonium chloride is often used as an antimicrobial preservative in eye drops and EDTA is used as a chelating agent.
Nasal drug delivery
Cyclodextrins, chitosan, some surfactants, bile acids and salts, sodium tauro-24,25-dihydro-fusidate, and phospholipids were reported as penetration enhancers in nasal drug delivery both for humans and equines. Chitosan is one of the most widely researched penetration enhancers in nasal drug delivery and it enhances the penetration of drugs by opening the tight junctions in the cell membranes.
Oral drug delivery
Penetration enhancers have been applied to improve the absorption of poorly permeable, hydrophilic drugs or macromolecules. Permeation enhancers that have been used successfully for oral drug development include medium-chain fatty acids like caprylic acid or caprate, or its amino acid ester like Salcaprozate sodium (SNAC). The above-mentioned permeation/penetration enhancers have a surfactant-like activity where they perturb the intestinal epithelium, promoting transcellular or paracellular absorption.
References
Pharmacology | Penetration enhancer | [
"Chemistry"
] | 782 | [
"Pharmacology",
"Medicinal chemistry"
] |
72,267,085 | https://en.wikipedia.org/wiki/Weather%20drone | A weather drone, or weather-sensing uncrewed aerial vehicle (UAV), – is a remotely piloted aircraft weighing less than 25 kg and carrying sensors that collect thermodynamic and kinematic data from the mid and lower atmosphere (e.g. up to 6 km).
Weather drones are not yet used to support National Meteorological and Hydrological Services (NMHS), owing to ongoing negotiations on UAVs' access to airspace, compliance with airspace regulations, and the technological development still needed to meet the World Meteorological Organization's requirements.
Mostly, weather drones are deployed to support scientific research missions and industry-specific operations.
History
Early proposals
The first recorded UAV for measuring atmospheric parameters was in 1970, when a “small radio-controlled aircraft [was used] as a measuring platform” for sharing meteorological measurement results. The study was supported by the Air Force Cambridge Research Laboratory and NASA, Wallops Station. The authors pointed out the need for “a simple, economical, controllable, and recoverable platform to carry meteorological sensors and instrumentation” and demonstrated that using a small, radio-controlled aircraft to collect weather data was both feasible and useful.
The second milestone in the development of weather drones was the prototype built by a group of researchers at the University of Colorado, sponsored by the U.S. Office of Naval Research (ONR) in 1993. The goal of the fixed-wing drone called Aerosonde was to enable weather data collection in remote and inaccessible regions of the globe. In 1995, further developments were conducted in Australia by Environmental Systems and Services (ES&S) Pty Ltd. having the Australian Bureau of Meteorology and Insitu Group as subcontractors. In 1999, all operations and development started to be undertaken by Australian-based Aerosonde Ltd. Since 2007, Aerosonde Ltd. has been part of the American industrial conglomerate Textron Inc. By 2016, the Aerosonde had become an intelligence, surveillance and reconnaissance (ISR) aircraft for military operations and its weather data collection feature, secondary.
Later development
In 2009, the American National Research Council published the report “Observing Weather and Climate from the Ground Up: A Nationwide Network of Networks”, emphasizing the need for more adequate vertical mesoscale observation methods than radiosondes launched by weather balloons – the major system used to collect data from that atmospheric layer.
Since then, research programs focusing on weather drones have been increasing. The Center for Autonomous Sensing and Sampling at the University of Oklahoma is the most active group in this domain. Its researchers have been developing the CopterSonde and created the 3D Mesonet concept, a network of stations from which weather drones are launched every hour or two to collect data from the mesoscale.
In 2022, the US National Oceanic and Atmospheric Administration (NOAA) deployed a weather drone, the Area-I Altius-600, into a hurricane (Hurricane Ian) for the first time. The fixed-wing drone flew at lower heights (900 m - 1.3 km) inside the eye of the hurricane and into the eyewall to collect temperature, pressure, and moisture values.
Commercially available weather drones are scarce, with most of the market being supplied by Swiss company Meteomatics AG, developer and manufacturer of Meteodrones since 2013. In 2020, British company Menapia entered the market with MetSprite.
Types
Fixed-wing
The first weather drones were fixed-wing aircraft, as this allowed researchers to apply technological advances from the piloted-aircraft domain and to cover a larger area, owing to their capacity to fly for long hours.
Rotary-wing
Rotary-wing weather drones are more popular because they are more versatile, easier to operate, and better suited to vertical profiling than radiosondes, which drift away.
Advantages and limitations
In 2019, in cooperation with the French national meteorology service Météo-France, the World Meteorological Organization (WMO) organized the “WMO Workshop on Use of Unmanned Aerial Vehicles (UAV) for Operational Meteorology Report”, the first workshop to discuss the application of weather drones. Amongst the participants, there were members of national meteorological centers, university research groups, and private companies.
The workshop discussions concluded that weather drones were useful to collect in-situ measurements from the boundary layer, closing the data gap and improving the numerical weather prediction accuracy. But a list of barriers needed to be addressed before weather drones could support national meteorological services, including:
Lack of drone-specific regulations in national or region wide airspace regulation
Limited level of automation of flight, refueling, and maintenance of fuel levels
Furthermore, resolving in-flight atmospheric icing and excessive wind resistance was also needed to ensure weather drones' safety and prevent loss. Since the development of the first Aerosonde, in the 1990s, research has been conducted to solve the issue of icing, which has caused the loss of many aircraft. In 2016, Swiss company Meteomatics was the first organization to develop a deicing system that heats the rotor blades whenever icing risk is detected.
References
Meteorological instrumentation and equipment
Unmanned aerial vehicles | Weather drone | [
"Technology",
"Engineering"
] | 1,024 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
72,270,279 | https://en.wikipedia.org/wiki/Cubical%20bipyramid | In 4-dimensional geometry, the cubical bipyramid is the direct sum of a cube and a segment, {4,3} + { }. Each face of a central cube is attached with two square pyramids, creating 12 square pyramidal cells, 30 triangular faces, 28 edges, and 10 vertices. A cubical bipyramid can be seen as two cubic pyramids augmented together at their base.
It is the dual of an octahedral prism.
Being convex and regular-faced, it is a CRF polytope.
Coordinates
It is a Hanner polytope with coordinates:
[2] (0, 0, 0; ±1)
[8] (±1, ±1, ±1; 0)
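The vertex and edge counts can be checked directly from these Hanner coordinates. The following sketch (variable names are illustrative) enumerates the 10 vertices and counts the 12 cube edges plus the 16 lateral apex-to-cube edges:

```python
from itertools import product, combinations

# Hanner coordinates of the cubical bipyramid: two apexes on the
# fourth axis plus the eight vertices of a cube in the w = 0 hyperplane.
apexes = [(0, 0, 0, 1), (0, 0, 0, -1)]
cube = [(x, y, z, 0) for x, y, z in product((-1, 1), repeat=3)]
vertices = apexes + cube  # 10 vertices in total

# Cube edges join cube vertices differing in exactly one coordinate;
# each apex is joined to every cube vertex by a lateral edge.
cube_edges = [(a, b) for a, b in combinations(cube, 2)
              if sum(u != v for u, v in zip(a, b)) == 1]
lateral_edges = [(a, c) for a in apexes for c in cube]

print(len(vertices), len(cube_edges) + len(lateral_edges))  # → 10 28
```

Note that the two apexes are not joined by an edge (the segment between them passes through the interior of the polytope), so the total is 12 + 16 = 28, matching the count above.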
See also
Tetrahedral bipyramid
Dodecahedral bipyramid
Icosahedral bipyramid
References
External links
Cubic tegum
4-polytopes | Cubical bipyramid | [
"Mathematics"
] | 190 | [
"Geometry",
"Geometry stubs"
] |
72,272,631 | https://en.wikipedia.org/wiki/Dmitry%20Bandura | Dmitry Bandura is a Soviet-born Canadian scientist, notable for being one of the co-inventors of the Mass cytometry technology. Bandura co-founded DVS Sciences in 2004 (acquired by Fluidigm in 2014 and then renamed to Standard BioTools in 2022) along with Drs Vladimir Baranov, Scott D. Tanner, and Olga Ornatsky.
Biography
Bandura grew up in Chernivtsi, Ukraine, where he graduated from school #35 with distinction. He received an MSc in engineering physics in 1985 and a PhD in technical sciences, both supervised by Professor Alexander A. Sysoev at Moscow Engineering Physics Institute. His PhD thesis research focused on elemental analysis of hypervelocity microparticles via time-of-flight mass spectrometry (TOF-MS) of their impact-induced plasma.
Bandura emigrated to Australia in 1992, where he worked as a Research Physicist at GBC Scientific Equipment. There, he worked on the development of inductively coupled plasma mass spectrometry (ICP-MS), contributing to the release of the award-winning Optimass 8000 ICP-TOF-MS in 1998. Bandura then relocated to Toronto, Canada, where he joined MDS SCIEX (now Sciex) to continue working on the development of new ICP-MS instrumentation methods, particularly in the area of collision and reaction cells.
In 2005, together with Scott D. Tanner and Vladimir Baranov, Bandura began independently developing an ICP-TOF-MS based cytometer and became a researcher at the University of Toronto in March 2005. After securing ample funding by 2010 from various sources, including National Institutes of Health, Ontario Institute for Cancer Research (OICR), the Ministry of Research and Innovation, Ontario Centres of Excellence, Health Technology Exchange, and Genome Canada via the Ontario Genomics Institute, and venture capital from 5 AM Ventures, Bandura and the DVS Sciences team successfully commercialized their technology, leading to the acquisition of DVS Sciences by Fluidigm in 2014.
Bandura headed R&D and Canadian operations at Fluidigm Canada following the merger and Standard BioTools Canada (formerly DVS Sciences) following a capital infusion in 2022, stewarding the development of the next generation of mass cytometry and imaging mass cytometry instruments and reagents.
Awards and honors
2019 HUPO Award (Human Proteome Organization) for “development of a unique high-parameter mass cytometry technology that brings unprecedented understanding of single cell proteomics”, together with the co-inventors Scott D. Tanner, Vladimir Baranov and Olga Ornatsky.
The Analytical Scientist Innovation Award 2017: #1 New Product of 2017 for the Fluidigm Hyperion Imaging System, as a leader of the development team
2004 Elsevier / Spectrochimica Acta Atomic Spectroscopy Award for the most important paper published in Spectrochimica Acta Part B in 2002 (Title: Reaction cells and collision cells for ICP-MS: a tutorial review) in co-authorship with Scott D. Tanner and Vladimir Baranov
1998 R&D World 100 Winners in Analytical instrumentation for the GBC Optimass 8000
Fellow of the Royal Society of Chemistry (UK)
Publications
Sept 2010 - Highly Multiparametric Analysis by Mass Cytometry
Aug 2009 - Mass Cytometry: Technique for Real Time Single Cell Multitarget Immunoassay Based on Inductively Coupled Plasma Time-Of-Flight Mass Spectrometry (1148 citations as of January 28, 2023)
Sept 2002 - Reaction Cells and Collision Cells for ICP-MS: A Tutorial Review
Feb 2002 - A Sensitive and Quantitative Element-Tagged Immunoassay with ICPMS Detection
Feb 2002 - Detection of Ultratrace Phosphorus and Sulfur by Quadrupole ICPMS with Dynamic Reaction Cell
July 2001 - Reaction Chemistry and Collisional Processes in Multipole Devices for Resolving Isobaric Interferences in ICP–MS
A more complete listing of his publications can be found on Google scholar
Book
References
External links
Standard BioTools Corporate website (Formerly Fluidigm, Formerly DVS Sciences)
Living people
Canadian physicists
Mass spectrometrists
Moscow Engineering Physics Institute alumni
Scientists from Chernivtsi
Year of birth missing (living people) | Dmitry Bandura | [
"Physics",
"Chemistry"
] | 896 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
72,272,643 | https://en.wikipedia.org/wiki/Vladimir%20Baranov | Vladimir Baranov is a Soviet born Canadian scientist and one of the original co-inventors of Mass cytometry technology...
He co-founded DVS Sciences in 2004 (acquired by Fluidigm in 2014 and then renamed to Standard BioTools in 2022) along with Dmitry Bandura, Scott D. Tanner, and Olga Ornatsky.
Biography
In 1993, he immigrated to Canada. Prior to the formation of DVS Sciences, Baranov was a senior scientist at MDS SCIEX and a key member of the research team that developed and promoted the Dynamic Reaction Cell®, which remains today at the pinnacle of quadrupole ICP-MS technology.
In 2005, together with Scott D. Tanner and Dmitry Bandura, he began independently developing an ICP-TOF-MS based cytometer and became a researcher at the University of Toronto in March 2005. After securing ample funding by 2010 from various sources, including National Institutes of Health, Ontario Institute for Cancer Research (OICR), the Ministry of Research and Innovation, Ontario Centres of Excellence, Health Technology Exchange, and Genome Canada via the Ontario Genomics Institute, and venture capital from 5 AM Ventures, Baranov and the DVS Sciences team successfully commercialized their technology, leading to the acquisition of DVS Sciences by Fluidigm in 2014.
Baranov was a principal scientist at DVS Sciences (and then Fluidigm), developing instrumental concepts and algorithmics that advance the CyTOF® line of products. He also played a fundamental role in the development of the MaxPar line of metal-labeling reagents until his retirement in 2019.
Education
M.Sc. at Moscow State University
PhD in physical chemistry at Moscow State University - 1987
Career
Assistant to the chair of physical chemistry at Moscow State University.
Research associate at York University
Senior scientist at MDS SCIEX
Associate professor at UofT in IBBME (2005–2008) and chemistry (2008–2011).
Adjunct professor at York University.
Principal scientist at DVS Sciences - 2005–2019 (acquired by Fluidigm in 2014 and then Standard BioTools in 2022)
Research
Quadrupole theory (stability, acceptance and transmission of multipole RF and electrostatically driven devices), molecular gas dynamics, and supersonic beam expansion into vacuum.
Development of the DRC Collision/reaction cell.
Development of mass spectrometry (CyTOF), including fundamentals of operation and design of different MS instrumentation.
Awards and honors
2019 HUPO Award (Human Proteome Organization)
2004 Elsevier / Spectrochimica Acta Atomic Spectroscopy Award for the most important paper published in Spectrochimica Acta Part B in 2002 (Title: Reaction cells and collision cells for ICP-MS: a tutorial review) in co-authorship with Scott D. Tanner and Dmitry Bandura
2001 Manning Innovation Award, Award of Distinction: Baranov, together with Scott D. Tanner, received the Award of Distinction from the Manning Innovation Awards Foundation for the invention of the ICP-MS Dynamic Reaction Cell (collision/reaction cell).
1999 Pittcon Editors' Awards Perkin-Elmer Sciex for their ELAN 6100 DRC (Dynamic Reaction Cell) ICP-MS system.
Publications
Feb 2017 - Imaging Mass Cytometry.
Sept 2010 - Highly Multiparametric Analysis by Mass Cytometry.
July 2009 - Mass Cytometry: Technique for Real Time Single Cell Multitarget Immunoassay based on Inductively Coupled Plasma Time-Of-Flight Mass Spectrometry
Aug 2007 - Polymer‐Based Elemental Tags for Sensitive Bioassays.
Sept 2002 - Reaction Cells and Collision Cells for ICP-MS: a tutorial review.
May 2002 - A Sensitive and Quantitative Element-Tagged Immunoassay with ICPMS Detection.
Feb 2002 - Detection of Ultratrace Phosphorus and Sulfur by Quadrupole ICPMS with Dynamic Reaction Cell.
July 2001 - Reaction Chemistry and Collisional Processes in Multipole Devices for Resolving Isobaric Interferences in ICP–MS.
Jan 2000 - A Dynamic Reaction Cell for Inductively Coupled Plasma Mass Spectrometry (ICP-DRC-MS). Part III. Optimization and Analytical Performance.
Nov 1999 - A Dynamic Reaction Cell for Inductively Coupled Plasma Mass Spectrometry (ICP-DRC-MS). Part II. Reduction of Interferences Produced within the Cell.
March 1999 - Theory, Design, and Operation of a Dynamic Reaction Cell for ICP-MS.
Feb 1997 - Activation of Hydrogen and Methane by Thermalized FeO+ in the Gas Phase as Studied by Multiple Mass Spectrometric Techniques.
A more complete listing of his publications can be found on Google Scholar
References
External links
Standard BioTools Corporate website (Formerly Fluidigm, Formerly DVS Sciences)
Mass spectrometrists
Canadian chemists
Moscow State University alumni
Living people
Year of birth missing (living people) | Vladimir Baranov | [
"Physics",
"Chemistry"
] | 1,016 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
72,272,651 | https://en.wikipedia.org/wiki/Scott%20D.%20Tanner | Scott Tanner is a Canadian scientist, inventor, and entrepreneur. His areas of expertise include mass spectroscopy, especially inductively coupled plasma mass spectrometry (ICP-MS), and mass cytometry.
Tanner is best known for his work on the fundamentals of inductively coupled plasma mass spectrometry, for the invention of mass cytometry, and for co-founding (with Dmitry Bandura, Vladimir Baranov and Olga Ornatsky) DVS Sciences in 2004 (acquired by Fluidigm in 2014 and then renamed to Standard BioTools in 2022), the company that first commercialized the instrument and reagents of mass cytometry.
Early life and education
Tanner was born and raised in St. Catharines, Ontario Canada. He bought his first chemistry set, from his brother, at age 6. Through his early teenage years, he was provided with laboratory space at Brock University, under the guidance of Dr. E.A. Cherniak and Dr. F.P. Koffyberg, where he attempted to replicate Geiger–Marsden experiments also known as Rutherford's experiment (scattering of alpha particles by gold foil) using various home-built instruments, including cloud chambers.
Tanner graduated with a BSc in chemistry from York University in 1976. During his undergraduate years, he became a nationally ranked gymnast. An injury at the Olympic trials ended his competitive gymnastics career, and he took up marathon running during graduate school (best time 2:47:13). He received a Doctor of Philosophy (Chemistry) from York University in 1980, having studied ion-molecule reaction kinetics and flame ion chemistry with Drs. D.K Bohme and J.M. Goodings.
Biography
Tanner joined SCIEX, which later became MDS SCIEX, in 1980 as a research scientist. He became principal scientist in 2000. In his 25 years at SCIEX, Tanner developed and helped to commercialize a string of mass spectrometry products.
Tanner published over 74 peer-reviewed scientific articles, and holds 22 US patents (with corresponding filings in other countries), including 13 patents on Mass Cytometry technology
Tanner was a co-founder of DVS Sciences and, as the president and CEO, saw the company through the development and commercial launch of its first products.
The products that DVS Sciences brought to the global market were originally developed at the University of Toronto where Tanner was a professor in the Institute of Biomaterials and Biomedical Engineering and then in chemistry.
Career
Principal Scientist SCIEX - 1980-2005
Associate Professor (CLTA) at the University of Toronto, first in the Institute of Biomaterials and Biomedical Engineering (2005–2008) and then in Chemistry 2008–2013.
President DVS Sciences 2004 - 2015 (acquired by Fluidigm in 2014)
Adjunct Professor at York University in the Department of Chemistry 2015–2018
Volunteer Work
Chair of the Three Churches Heritage Foundation in Mahone Bay - 2020–Present
Books
2003 - Plasma Source Mass Spectrometry: Applications and Emerging Technologies
2001 - Plasma Source Mass Spectrometry: The New Millennium
1999 - Plasma Source Mass Spectrometry: New Developments and Applications
1997 - Plasma Source Mass Spectrometry: Developments and Applications
Research
Detection of TCDD (dioxin) in soil and water, explosives, chemical agents.
Development of the ELAN 6000 ICP-MS
Development of the DRC Collision/reaction cell
Development of Mass cytometry see also CyTOF
Awards and honors
2024 York U Alumni Award for Outstanding Achievement
2020 Lifetime Achievement Award in Plasma Spectrochemistry
2019 HUPO Award (Human Proteome Organization)
2014 Fellow of the American Institute for Medical and Biological Engineering
2011 University of Toronto Inventor of the Year Award for Biomedical and Life Sciences
2011 Thermo Fisher Scientific Spectroscopy Award
2004 Elsevier / Spectrochimica Acta Atomic Spectroscopy Award for the most important paper published in Spectrochimica Acta Part B in 2002 (Title: Reaction cells and collision cells for ICP-MS: a tutorial review) in co-authorship with Dmitry Bandura and Vladimir Baranov
2003 W.A.E. McBryde medal from the Canadian Chemical Society of the Chemical Institute of Canada
2001 Manning Innovation Award, Award of Distinction: Tanner, together with Vladimir Baranov, received the Award of Distinction from the Manning Innovation Awards Foundation for the invention of the ICP-MS Dynamic Reaction Cell (collision/reaction cell).
1999 Pittcon Editors' Awards Perkin-Elmer Sciex for their ELAN 6100 DRC (Dynamic Reaction Cell) ICP-MS system.
Fellow of the Royal Society of Chemistry (UK)
Fellow of the American Institute for Medical and Biological Engineering (AIMBE)
Publications
Scott has published more than 75 peer-reviewed articles. The 13 most cited (more than 200 citations each) include:
May 2011 - Single-Cell Mass Cytometry of Differential Immune and Drug Responses Across a Human Hematopoietic Continuum
Sept 2010 - Highly Multiparametric Analysis by Mass Cytometry
Aug 2009 - Mass Cytometry: Technique for Real Time Single Cell Multitarget Immunoassay Based on Inductively Coupled Plasma Time-Of-Flight Mass Spectrometry
Sept 2002 - Reaction Cells and Collision Cells for ICP-MS: A Tutorial Review
April 2002 - A Sensitive and Quantitative Element-Tagged Immunoassay with ICPMS Detection
April 2002 - Detection of Ultratrace Phosphorus and Sulfur by Quadrupole ICPMS with Dynamic Reaction Cell
July 2001 - Reaction Chemistry and Collisional Processes in Multipole Devices for Resolving Isobaric Interferences in ICP–MS
Aug 2000 - A Dynamic Reaction Cell for Inductively Coupled Plasma Mass Spectrometry (ICP-DRC-MS). Part III.
Nov 1999 - A Dynamic Reaction Cell for Inductively Coupled Plasma Mass Spectrometry (ICP-DRC-MS). Part II. Reduction of Interferences Produced within the Cell
March 1999 - Theory, Design, and Operation of a Dynamic Reaction Cell for ICP-MS
Jan 1995 - Characterization of Ionization and Matrix Suppression in Inductively Coupled ‘Cold’ Plasma Mass Spectrometry
June 1992 - Space Charge in ICP-MS: Calculation and Implications
July 1988 - Nonspectroscopic Interelement Interferences in Inductively Coupled Plasma Mass Spectrometry
A more complete listing of his publications can be found on Google Scholar
References
External links
Standard BioTools Corporate website (Formerly Fluidigm, Formerly DVS Sciences)
Mass spectrometrists
Canadian chemists
Fellows of the American Institute for Medical and Biological Engineering
Fellows of the Royal Society of Chemistry
York University alumni
Living people
Year of birth missing (living people) | Scott D. Tanner | [
"Physics",
"Chemistry"
] | 1,387 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
72,272,657 | https://en.wikipedia.org/wiki/Olga%20Ornatsky | Olga Ornatsky is a Soviet born, Canadian scientist. Ornatsky co-founded DVS Sciences in 2004 (acquired by Fluidigm in 2014 and then renamed to Standard BioTools in 2022) along with Dmitry Bandura, Vladimir Baranov and Scott D. Tanner.
Biography
Ornatsky graduated from the Moscow State University, Department of Biology. In 1989, she completed her Ph.D. in cell and molecular biology, and worked as a research scientist at the Cardiology Centre studying vascular smooth muscle involvement in atherosclerosis. In 1993, she immigrated to Canada, and became a postdoctoral research fellow at York University. She quickly progressed to become senior research associate in the Laboratory of Vascular Biology and Cardiac Surgery at St. Michael's Hospital (Toronto). Her achievements brought her to MDS Proteomics Inc. (now Protana Inc), where she led her research group as a senior scientist for four years.
In 2005, she left MDS to pursue a different direction. Together with the co-founders of DVS Sciences Inc., Scott D. Tanner, Vladimir Baranov and Dmitry Bandura, she helped develop the CyTOF™ Mass Cytometer for highly multi-parametric single-cell analysis at the Department of Chemistry, University of Toronto. Ornatsky held the position of director of Bioassay Development at DVS Sciences, Inc. After the merger with Fluidigm Inc. in 2014, she transitioned to principal scientist, Proteomics division, and led a group of biology and chemistry researchers involved in developing new metal-tagged affinity reagents, as well as methods and applications for Mass cytometry, until her retirement in 2019.
Education
M.Sc. in biology Moscow State University
Ph.D. in cell and molecular biology Moscow State University- In 1989
Career
Research scientist in cardiology at Moscow State University
Postdoctoral research fellow at York University - 1993
Senior research associate, Laboratory of Vascular Biology and Cardiac Surgery at St. Michael's Hospital (Toronto)
Senior scientist, MDS Proteomics, Inc. - 2001–2005
Research associate, Institute of Biomaterials and Biomedical Engineering (IBBME), University of Toronto
Principal scientist, biology DVS Sciences - 2005 - 2019 (acquired by Fluidigm in 2014 and then Standard BioTools in 2022)
Research
Ornatsky has more than fifteen years of experience in the commercial environment as a senior strategic product application developer and in providing advanced customer/collaborator support. Her primary field of expertise is in cellular and molecular biology, with the objective of developing bioanalytical assays for mass cytometry (CyTOF). She is a principal inventor on several patents.
Awards and honors
2019 HUPO Award (Human Proteome Organization)
Publications
Mar 2021 Establishing CD19 B-cell reference control materials for comparable and quantitative cytometric expression analysis
April 2020 Enabling Indium Channels for Mass Cytometry by Using Reinforced Cyclambased Chelating Polylysine
April 2020 Tantalum Oxide Nanoparticle-Based Mass Tag for Mass Cytometry
Dec 2019 Tumor Platinum Concentrations and Pathological Responses Following Cisplatin-Containing Chemotherapy in Gastric Cancer Patients
Dec 2019 Skin platinum deposition in colorectal cancer patients following oxaliplatin-based therapy
Nov 2019 Automated Data Cleanup for Mass Cytometry
Aug 2019 A Metal-Chelating Polymer for Chelating Zirconium and its Use in Mass Cytometry
June 2019 Multidimensional profiling of drug-treated cells by Imaging Mass Cytometry
July 2019 Human lymphoid organ cDC2 and macrophages play complementary roles in T follicular helper responses
Jan 2019 Lanthanide nanoparticles for high sensitivity multiparameter single cell analysis
Mar 2018 Aptamer-facilitated mass cytometry
Feb 2018 Tumor platinum concentrations and pathological responses following preoperative cisplatin-containing chemotherapy in gastric or gastroesophageal junction cancer patients.
Feb 2018 Platinum deposition in skin as a possible mechanism for peripheral sensory neuropathy (PSN) in patients (pts) with colorectal cancer (CRC) following oxaliplatin-based therapy.
Nov 2017 Simultaneous Detection of Protein and mRNA in Jurkat and KG‐1a Cells by Mass Cytometry
July 2017 Abstract 2104: In vitro drug effects on cancer cell morphology and functional state revealed by multiparameter imaging mass cytometry
June 2017 Liposome-Encapsulated NaLnF 4 Nanoparticles for Mass Cytometry: Evaluating Nonspecific Binding to Cells
Aug 2014 Metal-chelating polymers developed for mass cytometry as a potential route to high activity radioimmunotherapeutic agents
July 2014 Single cell measurement of the uptake, intratumoral distribution, and cell cycle effects of cisplatin using mass cytometry
Nov 2013 Dual-Purpose Polymer Labels (Majonis, Biomacromolecules, 2013)
May 2013 The Means: Cytometry and Mass Spectrometry Converge in a Single Cell Deep Profiling Platform
April 2013 Dual-Purpose Polymer Labels for Fluorescent and Mass Cytometric Affinity Bioassays
April 2013 An introduction to mass cytometry: Fundamentals and applications
July 2012 Metal-Chelating Polymers by Anionic Ring-Opening Polymerization and Their Use in Quantitative Mass Cytometry
July 2012 Human CD4+ lymphocytes for antigen quantification: Characterization using conventional flow cytometry and mass cytometry
Nov 2011 Massively Multiparameter Single Cell Analysis by Mass Cytometry
Sept 2011 Curious Results with Palladium- and Platinum-Carrying Polymers in Mass Cytometry Bioassays and an Unexpected Application as a Dead Cell Stain
Aug 2011 Development of mass cytometry methods for bacterial discrimination
June 2011 Surface Functionalization Methods To Enhance Bioconjugation in Metal-Labeled Polystyrene Particles
May 2011 Single-Cell Mass Cytometry of Differential Immune and Drug Responses Across a Human Hematopoietic Continuum
Jan 2011 Multiplexed protease assays using element-tagged substrates
Oct 2010 Synthesis of a Functional Metal-Chelating Polymer and Steps toward Quantitative Mass Cytometry Bioassays
Sept 2010 Highly multiparametric analysis by mass cytometry
Aug 2010 A Distinctive DNA Damage Response in Human Hematopoietic Stem Cells Reveals an Apoptosis-Independent Role for p53 in Self-Renewal
June 2010 Hybrid nanogels by encapsulation of lanthanide-doped LaF3 nanoparticles as elemental tags for detection by atomic mass spectrometry
Feb 2010 Metal-Containing Polystyrene Beads as Standards for Mass Cytometry
Feb 2010 Bio-Functional, Lanthanide-Labeled Polymer Particles by Seeded Emulsion Polymerization and their Characterization by Novel ICP-MS Detection
Feb 2010 Lanthanide-Containing Polymer Microspheres by Multiple-Stage Dispersion Polymerization for Highly Multiplexed Bioassays
Nov 2009 Development of inductively coupled plasma-mass spectrometry-based protease assays
July 2009 Mass Cytometry: Technique for Real Time Single Cell Multitarget Immunoassay Based on Inductively Coupled Plasma Time-of-Flight Mass Spectrometry
Mar 2009 The influence of PEG macromonomers on the size and properties of thermosensitive aqueous microgels
Feb 2009 ICP-MS-Based Multiplex Profiling of Glycoproteins Using Lectins Conjugated to Lanthanide-Chelating Polymers
Dec 2008 Flow cytometer with mass spectrometer detection for massively multiplexed single-cell biomarker assay
Dec 2008 Biocompatible Hybrid Nanogels
Aug 2008 Element-tagged immunoassay with ICP-MS detection: Evaluation and comparison to conventional immunoassays
May 2008 Study of Cell Antigens and Intracellular DNA by Identification of Element-Containing Labels and Metallointercalators Using Inductively Coupled Plasma Mass Spectrometry
Feb 2008 Development of analytical methods for multiplex bio-assay with inductively coupled plasma mass spectrometry
Dec 2007 Lanthanide-Containing Polymer Nanoparticles for Biological Tagging Applications: Nonspecific Endocytosis and Cell Adhesion
Aug 2007 Polymer-Based Elemental Tags for Sensitive Bioassays
Mar 2007 Multiplex bio-assay with inductively coupled plasma mass spectrometry: Towards a massively multivariate single-cell technology
Mar 2007 Large-scale mapping of human protein-protein interactions by mass spectrometry
Sept 2006 Messenger RNA Detection in Leukemia Cell lines by Novel Metal-Tagged in situ Hybridization using Inductively Coupled Plasma Mass Spectrometry
Feb 2006 Multiple cellular antigen detection by ICP-MS
Jan 2006 Phosphoproteomics in Drug Discovery and Development
Aug 2004 The Reproducible Acquisition of Comparative Liquid Chromatography/Tandem Mass Spectrometry Data from Complex Biological Samples
Mar 2004 Differential Phosphoprofiles of EGF and EGFR Kinase Inhibitor-Treated Human Tumor Cells and Mouse Xenografts
Jan 2004 Characterization of phosphorus content of biological samples by ICP-DRC-MS: Potential tool for cancer research
Jan 2002 Administration of exogenous endothelin-1 following vascular balloon injury: early and late effects on intimal hyperplasia
Jan 2001 Effects of Estrogen Replacement on Infarct Size, Cardiac Remodeling, and the Endothelin System After Myocardial Infarction in Ovariectomized Rats
Aug 1999 Post-translational control of the MEF2A transcriptional regulatory protein
Nov 1998 (MEF2) with a mitogen-activated protein kinase, ERK5/BMK1
Jan 1998 A Dominant-Negative Form of Transcription Factor MEF2 Inhibits Myogenesis
Aug 1997 Molecular cloning of up-regulated cytoskeletal genes from regenerating skeletal muscle: Potential role of myocyte enhancer factor 2 proteins in the activation of muscle-regeneration-associated genes
Nov 1996 MEF2 Protein Expression, DNA Binding Specificity and Complex Composition, and Transcriptional Activity in Muscle and Non-muscle Cells
Feb 1996 Dystrophin, vinculin, and aciculin in skeletal muscle subject to chronic use and disuse
Nov 1995 Effects of hypothyroidism and aortic constriction on mitochondria during cardiac hypertrophy
Nov 1995 Expression of stress proteins and mitochondrial chaperones in chronically stimulated skeletal muscle
June 1995 Mitochondrial biogenesis during pressure overload induced cardiac hypertrophy in adult rats
Mar 1989 Identification and immunolocalization of a new component of human cardiac muscle intercalated disc
Jan 1989 Modulation of human aorta smooth muscle cell phenotype: A study of muscle-specific variants of vinculin, caldesmon, and actin expression
Sept 1988 Immunolocalization of meta-vinculin in human smooth and cardiac muscles
June 1988 Diversity of vinculin/meta-vinculin in human tissues and cultivated cells. Expression of muscle specific variants of vinculin in human aorta smooth muscle cells
July 1987 Immunoreactive forms of caldesmon in cultivated human vascular smooth muscle cells
Feb 1987 Identification of smooth muscle-derived foam cells in the atherosclerotic plaque of human aorta with monoclonal antibody IIG10
Nov 1986 Metavinculin distribution in adult human tissues and cultured cells
April 1986 Red blood cell targeting to smooth muscle cells
Oct 1985 Monoclonal antibodies that distinguish between human aorta smooth muscle and endothelial cells
Systems immunology (WS-086) WS/PP-086-01 What is “Systems Immunology” for? WS/PP-086-02 Cholinergic stimulation modulates T cell-independent humoral immune responses in the spleen WS/PP-086-03 Next-generation 31-parameter flow cytometry reveals systems-level relationships in human bone marrow signaling and homeostasis WS/PP-086-04 Organization of the autoantibody repertoire in healthy newborns and adults revealed by system level informatics of antigen microarray data WS/PP-086-05 Methods for develo
Novel polymer-based elemental tags for sensitive bio-assays
References
External links
Standard BioTools Corporate website (Formerly Fluidigm, Formerly DVS Sciences)
Mass spectrometrists
21st-century Canadian biologists
Canadian women scientists
Moscow State University alumni
Living people
Year of birth missing (living people) | Olga Ornatsky | [
"Physics",
"Chemistry"
] | 2,786 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
63,556,609 | https://en.wikipedia.org/wiki/Ursescu%20theorem | In mathematics, particularly in functional analysis and convex analysis, the Ursescu theorem is a theorem that generalizes the closed graph theorem, the open mapping theorem, and the uniform boundedness principle.
Ursescu theorem
The following notation and notions are used, where is a set-valued function and is a non-empty subset of a topological vector space :
the affine span of is denoted by and the linear span is denoted by
denotes the algebraic interior of in
denotes the relative algebraic interior of (i.e. the algebraic interior of in ).
if is barreled for some/every while otherwise.
If is convex then it can be shown that for any if and only if the cone generated by is a barreled linear subspace of or equivalently, if and only if is a barreled linear subspace of
The domain of is
The image of is For any subset
The graph of is
is closed (respectively, convex) if the graph of is closed (resp. convex) in
Note that is convex if and only if for all and all
The inverse of is the set-valued function defined by For any subset
If is a function, then its inverse is the set-valued function obtained from canonically identifying with the set-valued function defined by
is the topological interior of with respect to where
is the interior of with respect to
Statement
Corollaries
Closed graph theorem
Uniform boundedness principle
Open mapping theorem
Additional corollaries
The following notation and notions are used for these corollaries, where is a set-valued function, is a non-empty subset of a topological vector space :
a convex series with elements of is a series of the form where all and is a series of non-negative numbers. If converges then the series is called convergent while if is bounded then the series is called bounded and b-convex.
is ideally convex if any convergent b-convex series of elements of has its sum in
is lower ideally convex if there exists a Fréchet space such that is equal to the projection onto of some ideally convex subset B of Every ideally convex set is lower ideally convex.
Related theorems
Simons' theorem
Robinson–Ursescu theorem
The implication (1) (2) in the following theorem is known as the Robinson–Ursescu theorem.
See also
Notes
References
Theorems involving convexity
Theorems in functional analysis | Ursescu theorem | [
"Mathematics"
] | 474 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
63,560,704 | https://en.wikipedia.org/wiki/Lippmann%20diagram | A Lippmann diagram is a graphical plot showing the solidus/solutus equilibrium states for a given binary solid solution (e.g., , barite/celestite) in equilibrium with an aqueous solution containing the two substituting ions: and (solid solution – aqueous solution system, or SS-AS). It was proposed in the 1970s by F. Lippmann to determine excess Gibbs functions. This diagram summarizes the thermodynamic basis of solid-solution aqueous-solution systems (SS-AS) equilibria and helps to predict the nucleation kinetics for solid solutions crystallizing from an aqueous solution.
In the diagram, the abscissa (horizontal axis) carries two superimposed scales: one for the solid-phase mole fraction and one for the aqueous activity fraction. The ordinate (vertical axis) represents the total solubility product.
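The quantity on the ordinate is Lippmann's "total solubility product" ΣΠ. As a sketch of the underlying relations for a binary solid solution (B,C)A, the notation below follows the usual presentation of Lippmann's formulation and is illustrative, not quoted from this article:

```latex
% Total solubility product of the aqueous solution (ion activities):
\Sigma\Pi = a_{A^{-}}\left(a_{B^{+}} + a_{C^{+}}\right)

% Solidus: equilibrium \Sigma\Pi as a function of the solid mole fractions X,
% with endmember solubility products K and solid activity coefficients \gamma:
\Sigma\Pi_{\mathrm{eq}} = K_{BA}\,\gamma_{BA}\,X_{BA} + K_{CA}\,\gamma_{CA}\,X_{CA}

% Solutus: equilibrium \Sigma\Pi as a function of the aqueous activity
% fractions \chi_{B} = a_{B^{+}}/(a_{B^{+}} + a_{C^{+}}),\ \chi_{C} = 1 - \chi_{B}:
\Sigma\Pi_{\mathrm{eq}} = \left( \frac{\chi_{B}}{K_{BA}\,\gamma_{BA}} + \frac{\chi_{C}}{K_{CA}\,\gamma_{CA}} \right)^{-1}
```

Plotting both curves, the solidus against the solid mole fraction X and the solutus against the aqueous activity fraction χ, on the same ΣΠ ordinate gives the two branches of the diagram.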
There are two variants of Lippmann diagrams:
Ion-activity Lippmann diagram
Total-scale Lippmann diagram
References
Solutions
Phase transitions
Diagrams | Lippmann diagram | [
"Physics",
"Chemistry"
] | 220 | [
"Physical phenomena",
"Phase transitions",
"Materials stubs",
"Phases of matter",
"Critical phenomena",
"Homogeneous chemical mixtures",
"Materials",
"Solutions",
"Statistical mechanics",
"Matter"
] |
63,562,105 | https://en.wikipedia.org/wiki/Surjection%20of%20Fr%C3%A9chet%20spaces | The theorem on the surjection of Fréchet spaces is an important theorem, due to Stefan Banach, that characterizes when a continuous linear operator between Fréchet spaces is surjective.
The importance of this theorem is related to the open mapping theorem, which states that a continuous linear surjection between Fréchet spaces is an open map. Often in practice, one knows that they have a continuous linear map between Fréchet spaces and wishes to show that it is surjective in order to use the open mapping theorem to deduce that it is also an open mapping. This theorem may help reach that goal.
Preliminaries, definitions, and notation
Let be a continuous linear map between topological vector spaces.
The continuous dual space of is denoted by
The transpose of is the map defined by If is surjective then will be injective, but the converse is not true in general.
The weak topology on (resp. ) is denoted by (resp. ). The set endowed with this topology is denoted by The topology is the weakest topology on making all linear functionals in continuous.
If then the polar of in is denoted by
If is a seminorm on , then denotes the vector space endowed with the weakest TVS topology making continuous. A neighborhood basis of at the origin consists of the sets as ranges over the positive reals. If is not a norm then is not Hausdorff and is a linear subspace of .
If is continuous then the identity map is continuous so we may identify the continuous dual space of as a subset of via the transpose of the identity map which is injective.
Surjection of Fréchet spaces
Extensions of the theorem
Lemmas
The following lemmas are used to prove the theorems on the surjectivity of Fréchet spaces. They are useful even on their own.
Applications
Borel's theorem on power series expansions
Linear partial differential operators
being means that for every relatively compact open subset of , the following condition holds:
to every there is some such that in .
being means that for every compact subset and every integer there is a compact subset of such that for every distribution with compact support in , the following condition holds:
if is of order and if then
See also
References
Bibliography
Theorems in functional analysis | Surjection of Fréchet spaces | [
"Mathematics"
] | 473 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
75,049,006 | https://en.wikipedia.org/wiki/Christopher%20B.%20Murray | Christopher Bruce Murray is the Richard Perry University Professor of Chemistry and Materials Science and Engineering at the University of Pennsylvania. He is a member of the National Academy of Engineering and a Fellow of the Materials Research Society. He was a Clarivate Citation Laureate in 2020. He is known for his contributions to quantum dots and other nanoscale materials.
Early life and education
Murray studied chemistry at St. Mary's University in Halifax, Nova Scotia, Canada from 1985, graduating with a Bachelor's Degree with Honors in Chemistry in 1988. He spent a year as a Rotary International Fellow at the University of Auckland in 1989. From 1990 he studied at the Massachusetts Institute of Technology (MIT), where he received his doctorate in chemistry in 1995.
Career
From 1995 Murray worked at the Thomas J. Watson Research Center at IBM. From 2000 to 2006 he headed their Nanoscale Materials and Devices Department. In 2006 the University of Pennsylvania announced his appointment as the Richard Perry University Professor, with appointments in Chemistry and Materials Science, in the schools of Arts and Sciences, and Engineering and Applied Science.
Research
Murray, David Norris and Manoj Nirmal were the first graduate students to work with Moungi Bawendi at MIT. As part of his thesis work, Murray helped to develop synthetic methods for making quantum dots, including identifying a longer chain version of trioctylphosphine oxide as being cheaper and having additional benefits when used in synthesis. In 1993, Murray, Norris and Bawendi published a breakthrough paper describing the hot injection synthesis method for making quantum dots. Both Murray's and Bawendi's contributions to the synthesis and characterization of semiconductor quantum dots were recognized by the American Chemical Society with its 1997 Nobel Laureate Signature Award.
Their method was both adaptable and reproducible, making it possible to consistently synthesise monodisperse nanoparticles and develop large-scale applications using quantum dots. Bawendi received the 2023 Nobel Prize in Chemistry for the development of this method.
Much of Murray's work has focused on the synthesis and characterization of nanoscale materials, including nanoscale magnets, semiconductor nanocrystals, and nanocrystal superlattices. Murray was recognized at IBM as a Master Inventor and patent evaluator. He holds at least 26 nanoscale patents.
Murray is concerned with the synthesis and self-assembly properties of nanocrystals and the potential to create new mesoscopic materials with interesting properties and potential applications in energy, environmental sustainability, health, and information processing.
Murray is the Founding Chair of the World Economic Forum’s Global Councils on Nanotechnology (2008-2009) and Global Council on Emerging Technologies (2009-2010).
Awards and honors
1997 Nobel Laureate Signature Award of the American Chemical Society (with Moungi Bawendi)
2000, one of the most influential innovators younger than 35, Technology Review
2011, honorary doctorate, Utrecht University
2012 Fellow, Materials Research Society, for "innovations in the synthesis of nanomaterials with precisely controlled dimensions by chemical approaches; outstanding contributions in nanoparticle self-assembly; and pioneering research in the design of nanoparticle-based devices."
2019, Member, National Academy of Engineering for the "invention and development of solvothermal synthesis of monodisperse nanocrystal quantum dots for displays, photovoltaics and memory."
2020, Clarivate Citation Laureate with Moungi G. Bawendi and Taeghwan Hyeon "For synthesis of nanocrystals with precise attributes for a wide range of applications in physical, biological, and medical systems."
Selected publications
References
21st-century American chemists
American chemists
Living people
Massachusetts Institute of Technology School of Science alumni
Physical chemists
Nanotechnologists
Scientists at Bell Labs
University of Pennsylvania faculty
Members of the United States National Academy of Engineering
Year of birth missing (living people) | Christopher B. Murray | [
"Materials_science"
] | 794 | [
"Nanotechnology",
"Nanotechnologists"
] |
75,049,975 | https://en.wikipedia.org/wiki/Compound%202f%20%28SARM%29 | Compound 2f (SARM) is a drug which acts as a selective androgen receptor modulator (SARM), originally developed by Takeda Pharmaceutical for the treatment of prostate cancer. It is a potent but tissue specific androgen agonist with an EC50 of 4.7nM, producing anabolic effects on muscles and in the central nervous system but with little effect on the prostate gland, and inducing sexual behaviour in animal studies.
See also
LY305
References
Selective androgen receptor modulators
Anilines
Nitriles | Compound 2f (SARM) | [
"Chemistry"
] | 111 | [
"Nitriles",
"Functional groups"
] |
75,050,943 | https://en.wikipedia.org/wiki/Serum%20vitamin%20B12 | Serum vitamin B12 is a medical laboratory test that measure vitamin B12 only in the blood binding to both transcobalamins. Most of the time, 80–94% of vitamin B12 in the blood binds to haptocorrin, while only 6–20% is binds to transcobalamin ll. Only transcobalamin ll is "active" and can be used by the body. Normal total body vitamin B12 is between 2 and 5 mg with 50% of that stored in the liver. Total serum vitamin B12 may not be a reliable biomarker for reflecting what the body stores inside cells. Vitamin B12 levels can be falsely high or low and data for sensitivity and specificity vary widely. There is no gold standard human assay to confirm a vitamin B12 deficiency.
Healthcare providers use this test when a vitamin B12 deficiency is suspected, which can cause anemia and irreversible nerve damage. The cutoff between normal vitamin B12 levels and deficiency varies by country and region. A diagnosis of vitamin B12 deficiency is determined by blood levels lower than 200 or 250 picograms per ml (148 or 185 picomoles per liter). Some people can have symptoms despite normal levels of the vitamin, or may have low levels despite having no symptoms. Other tests may be done to confirm an individual's status. Measuring vitamin B12 values during or after treatment, in order to gauge the effectiveness of treatment, is not useful.
Normal range
A blood test shows vitamin B12 levels in the blood. It can indicate a vitamin B12 deficiency, but not always, because it measures both the "active" forms of vitamin B12 that can be used by the body and the "inactive" forms, which cannot. However, normal or supraphysiological vitamin B12 levels should also be carefully assessed in the context of the individual's state of health. Elevated or normal serum vitamin B12 levels may still be associated with a functional vitamin deficiency. Functional deficiency has been described despite high B12 concentrations and is due to a failure of cellular uptake, intracellular processing, trafficking, or utilization. Conversely, low vitamin B12 levels may occur without a true deficiency for various reasons and circumstances. High or supraphysiological serum levels are usually not of concern, although without supplementation they have been associated with many pathological conditions.
Reference range for vitamin B12:
200 to 1000 picograms per ml (148 to 738 picomoles per liter)
Laboratories often use different units and "normal" may vary by population and the lab techniques used. Some researchers have suggested that current standards for vitamin B12 levels are too low.
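The two unit systems in the parenthetical conversions above are related through the molar mass of cobalamin. A small sketch of the conversion; the function names are illustrative and the molar mass value (for cyanocobalamin) is an assumption, not taken from this article:

```python
# Approximate molar mass of cyanocobalamin (vitamin B12), in g/mol (assumed value).
B12_MOLAR_MASS_G_PER_MOL = 1355.38

def pg_per_ml_to_pmol_per_l(pg_ml):
    """Convert a serum B12 concentration from pg/mL to pmol/L.

    1 pg/mL equals 1000 pg/L; dividing a mass concentration in pg/L by the
    molar mass in g/mol yields a molar concentration in pmol/L.
    """
    return pg_ml * 1000.0 / B12_MOLAR_MASS_G_PER_MOL

def pmol_per_l_to_pg_per_ml(pmol_l):
    """Inverse conversion: pmol/L back to pg/mL."""
    return pmol_l * B12_MOLAR_MASS_G_PER_MOL / 1000.0
```

With this factor, 200 pg/mL works out to roughly 148 pmol/L, matching the deficiency cutoff quoted earlier in the article.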
References
Blood tests | Serum vitamin B12 | [
"Chemistry"
] | 552 | [
"Blood tests",
"Chemical pathology"
] |
75,054,010 | https://en.wikipedia.org/wiki/Keel%20block | In marine terms, a keel block, is a concrete or dense wood cuboid that rests under a ship during a time of repair, construction, or in the event of a dock being drained. The block rests under the keel of a ship.
Purpose
The purpose of a keel block is to prevent the ship from sitting directly on the ground and to prevent damage or instability that sitting on the ground may cause.
References
Nautical terminology
Shipbuilding | Keel block | [
"Engineering"
] | 87 | [
"Shipbuilding",
"Marine engineering"
] |
75,059,246 | https://en.wikipedia.org/wiki/Energy%20expenditure | Energy expenditure, often estimated as the total daily energy expenditure (TDEE), is the amount of energy burned by the human body.
Causes of energy expenditure
Resting metabolic rate
Resting metabolic rate generally composes 60 to 75 percent of TDEE. Because adipose tissue does not use much energy to maintain, fat-free mass is a better predictor of metabolic rate. A taller person will typically have less fat mass than a shorter person at the same weight and therefore burn more energy. Men also carry more skeletal muscle tissue on average than women, and other sex differences in organ size account for sex differences in metabolic rate. Obese individuals burn more energy than lean individuals due to an increase in the amount of calories needed to maintain adipose tissue and other organs that grow in size in response to obesity. At rest, the largest fractions of energy are burned by the skeletal muscles, brain, and liver; around 20 percent each. Increasing skeletal muscle tissue can increase metabolic rate.
Activity
Energy burned during physical activity includes the exercise activity thermogenesis (EAT) and non-exercise activity thermogenesis (NEAT).
Thermic effect of food
Thermic effect of food is the amount of energy burned digesting food, around 10 percent of TDEE. Proteins are the component of food requiring the most energy to digest.
Changing energy expenditure
Weight change
Losing or gaining weight affects the energy expenditure. Reduced energy expenditure after weight loss can be a major challenge for people seeking to avoid weight regain after weight loss. It is controversial whether losing weight causes a decrease in energy expenditure greater than expected by the loss of adipose tissue and fat-free mass during weight loss. This excess reduction is termed adaptive thermogenesis and it is estimated that it might compose 50 to 100 kcal/day in people actively losing weight. Some studies have reported that it disappears after a short period of weight stability, while others report longer-lasting effects.
Changing the activity level
Increasing exercise is recommended as a way to increase energy expenditure in individuals seeking to lose weight.
Drugs
Some drugs used for weight loss work by increasing energy expenditure. Two of the earliest weight loss drugs, 2,4-dinitrophenol and thyroid hormone, increase energy expenditure, but both were withdrawn from use due to risks. Adrenergic agonists, especially those that work on the beta-2 adrenergic receptor, increase energy expenditure. Although some such as clenbuterol are used without medical approval for weight loss, none have achieved approval for this indication due to cardiac risks.
Other drugs such as atypical antipsychotics are believed to reduce energy expenditure.
Effects
Energy expenditure is a leading factor in regulating appetite and energy intake in humans.
Measurement
Formulas have been devised to estimate energy expenditure in humans, but they may not be accurate for people with certain illnesses or the elderly. Not all formulas are accurate in overweight or obese individuals.
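One widely used example of such a formula is the Mifflin–St Jeor equation for resting metabolic rate, which is then scaled by an activity multiplier to estimate TDEE. A minimal sketch; the function names and the activity-multiplier values are common conventions, not taken from this article:

```python
def mifflin_st_jeor_rmr(weight_kg, height_cm, age_yr, is_male):
    """Resting metabolic rate in kcal/day (Mifflin-St Jeor equation)."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return base + (5.0 if is_male else -161.0)

# Rule-of-thumb multipliers for scaling RMR up to total daily expenditure.
ACTIVITY_FACTORS = {
    "sedentary": 1.2,
    "light": 1.375,
    "moderate": 1.55,
    "active": 1.725,
}

def estimate_tdee(weight_kg, height_cm, age_yr, is_male, activity="sedentary"):
    """Total daily energy expenditure: RMR times an activity multiplier."""
    rmr = mifflin_st_jeor_rmr(weight_kg, height_cm, age_yr, is_male)
    return rmr * ACTIVITY_FACTORS[activity]
```

For a 30-year-old, 70 kg, 175 cm man this gives an RMR of about 1649 kcal/day; as the article notes, such formulas can misestimate expenditure in some populations.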
Wearable devices can help estimate energy expenditure from physical activity but their accuracy varies.
References
Human physiology
Metabolism | Energy expenditure | [
"Chemistry",
"Biology"
] | 608 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
63,567,957 | https://en.wikipedia.org/wiki/Living%20medicine | A living medicine is a type of biologic that consists of a living organism that is used to treat a disease. This usually takes the form of a cell (animal, bacterial, or fungal) or a virus that has been genetically engineered to possess therapeutic properties that is injected into a patient. Perhaps the oldest use of a living medicine is the use of leeches for bloodletting, though living medicines have advanced tremendously since that time.
Examples of living medicines include cellular therapeutics (including immunotherapeutics), phage therapeutics, and bacterial therapeutics, a subset of the latter being probiotics.
Development of living medicines
Development of living medicines is an extremely active research area in the fields of synthetic biology and microbiology. Currently, there is a large focus on: 1) identifying microbes that naturally produce therapeutic effects (for example, probiotic bacteria), and 2) genetically programming organisms to produce therapeutic effects.
Applications
Cancer therapy
There is tremendous interest in using bacteria as a therapy to treat tumors. In particular, tumor-homing bacteria that thrive in hypoxic environments are particularly attractive for this purpose, as they will tend to migrate to, invade (through the leaky vasculature in the tumor microenvironment) and colonize tumors. This property tends to increase their residence time in the tumor, giving them longer to exert their therapeutic effects, in contrast to other bacteria that would be quickly cleared by the immune system.
References
Bacteria and humans
Biological engineering
Biotechnology
Biotechnology products
Biopharmaceuticals
Pharmaceutical industry
Life sciences industry
Specialty drugs
Pharmacy | Living medicine | [
"Chemistry",
"Engineering",
"Biology"
] | 325 | [
"Pharmacology",
"Biological engineering",
"Specialty drugs",
"Life sciences industry",
"Biotechnology products",
"Pharmacy",
"Pharmaceutical industry",
"Biotechnology",
"Bacteria",
"nan",
"Bacteria and humans",
"Biopharmaceuticals"
] |
63,568,142 | https://en.wikipedia.org/wiki/Bacterial%20therapy | Bacterial therapy is the therapeutic use of bacteria to treat diseases. Bacterial therapeutics are living medicines, and may be wild type bacteria (often in the form of probiotics) or bacteria that have been genetically engineered to possess therapeutic properties that is injected into a patient.
Other examples of living medicines include cellular therapeutics (including immunotherapeutics) and phage therapeutics. Therapeutic bacteria can act as activators of anti-tumor immunity or as delivery vehicles for treatment, diagnosis, or imaging, complementing or synergizing with existing tools and approaches.
Development
Development of bacterial therapeutics is an extremely active research area in the fields of synthetic biology and microbiology. Currently, there is a large focus on: 1) identifying bacteria that naturally produce therapeutic effects (for example, probiotic bacteria), and 2) genetically programming bacteria to produce therapeutic effects.
Design
Optimal strain design often requires a balance between strain suitability for function in the target microenvironment and concerns for feasibility of manufacturing and clinical development.
The development workflow should incorporate technologies for optimizing strain potency, predictive in vitro and in vivo assays, and quantitative pharmacology models, to maximize translational potential for patient populations.
Applications
Cancer therapy
There is tremendous interest in using bacteria as a therapy to treat tumors. In particular, tumor-homing bacteria that thrive in hypoxic environments are particularly attractive for this purpose, as they will tend to migrate to, invade (through the leaky vasculature in the tumor microenvironment) and colonize tumors. This property tends to increase their residence time in the tumor, giving them longer to exert their therapeutic effects, in contrast to other bacteria that would be quickly cleared by the immune system. In addition, colonized bacteria can lyse the tumor, activate an anti-tumor immune response, can be engineered as delivery vehicles for anti-cancer therapeutics, and may have potential as contrast agents for cancer imaging. Microbial-based cancer therapy may offer an opportunity to address the issue of global cancer therapy disparity and introduce a more suitable cancer immunotherapy approach to low- and middle-income countries.
Mechanism
After systemic administration, bacteria localize to the tumor microenvironment. The interactions between bacteria, cancer cells, and the surrounding microenvironment cause various alterations in tumor-infiltrating immune cells, cytokines, and chemokines, which further facilitate tumor regression:
① Bacterial toxins from S. Typhimurium, Listeria, and Clostridium can kill tumor cells directly by inducing apoptosis or autophagy. Toxins delivered via Salmonella can upregulate Connexin 43 (Cx43), leading to bacteria-induced gap junctions between the tumor and dendritic cells (DCs), which allow cross-presentation of tumor antigens to the DCs.
② Upon exposure to tumor antigens and interaction with bacterial components, DCs secrete robust amounts of the proinflammatory cytokine IL-1β, which subsequently activates CD8+ T cells.
③ The antitumor response of the activated CD8+ T cells is further enhanced by bacterial flagellin (a protein subunit of the bacterial flagellum) via TLR5 activation. The perforin and granzyme proteins secreted by activated CD8+ T cells efficiently kill tumor cells in primary and metastatic tumors.
④ Flagellin and TLR5 signaling also decreases the abundance of CD4+ CD25+ regulatory T (Treg) cells, which subsequently improves the antitumor response of the activated CD8+ T cells.
⑤ S. Typhimurium flagellin stimulates NK cells to produce interferon-γ (IFN-γ), an important cytokine for both innate and adaptive immunity.
⑥ Listeria-infected MDSCs shift into an immune-stimulating phenotype characterized by increased IL-12 production, which further enhances the CD8+ T and NK cell responses.
⑦ Both S. Typhimurium and Clostridium infection can stimulate significant neutrophil accumulation. Elevated secretion of TNF-α and TNF-related apoptosis-inducing ligand (TRAIL) by neutrophils enhances the immune response and kills tumor cells by inducing apoptosis.
⑧ The macrophage inflammasome is activated through contact with bacterial components (LPS and flagellin) and Salmonella-damaged cancer cells, leading to elevated secretion of IL-1β and TNF-α into the tumor microenvironment.
Abbreviations: NK cell: natural killer cell. Treg cell: regulatory T cell. MDSCs: myeloid-derived suppressor cells. P2X7 receptor: purinoceptor 7-extracellular ATP receptor. LPS: lipopolysaccharide.
Clostridioides difficile infection therapy
Alterations in the gut microbiome are thought to be associated with C. difficile infection and recurrence. Therapies include probiotics and fecal microbiota transplantation.
Microbiome engineering
There is considerable interest in using bacterial therapeutics to alter the human gastrointestinal microbiota in order to treat conditions such as small intestinal bacterial overgrowth, the gut dysbiosis associated with the pathogenesis of food allergy, and other forms of dysbiosis.
See also
Living medicine
References
Bacteria and humans
Biological engineering
Biotechnology
Biotechnology products
Biopharmaceuticals
Pharmaceutical industry
Life sciences industry
Specialty drugs
Pharmacy | Bacterial therapy | [
"Chemistry",
"Engineering",
"Biology"
] | 1,163 | [
"Pharmacology",
"Biological engineering",
"Specialty drugs",
"Life sciences industry",
"Biotechnology products",
"Pharmacy",
"Pharmaceutical industry",
"Biotechnology",
"Bacteria",
"nan",
"Bacteria and humans",
"Biopharmaceuticals"
] |
63,570,220 | https://en.wikipedia.org/wiki/Integrable%20algorithm | Integrable algorithms are numerical algorithms that rely on basic ideas from the mathematical theory of integrable systems.
Background
The theory of integrable systems has advanced through its connections with numerical analysis. For example, the discovery of solitons came from numerical experiments on the KdV equation by Norman Zabusky and Martin David Kruskal. Today, various relations between numerical analysis and integrable systems have been found (the Toda lattice and numerical linear algebra, discrete soliton equations and series acceleration), and studies applying integrable systems to numerical computation are advancing rapidly.
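The Zabusky–Kruskal experiment mentioned above can be reproduced with their original leapfrog finite-difference scheme for the KdV equation. The sketch below is my own minimal implementation of that standard scheme; the cosine initial condition and δ = 0.022 follow the original 1965 setup, but the grid size and time step are illustrative, stability-safe choices rather than values from the paper.

```python
import numpy as np

def kdv_zabusky_kruskal(u0, dx, dt, delta, steps):
    """Zabusky-Kruskal (1965) leapfrog scheme for u_t + u u_x + delta^2 u_xxx = 0
    on a periodic grid. Advances `steps` time levels of size dt."""
    u_prev = u0.copy()
    u = u0.copy()  # crude start: the first leapfrog step reuses u0 at both levels
    for _ in range(steps):
        up1, um1 = np.roll(u, -1), np.roll(u, 1)
        up2, um2 = np.roll(u, -2), np.roll(u, 2)
        # nonlinear term uses the three-point average; dispersive term is centered
        u_next = (u_prev
                  - (dt / (3.0 * dx)) * (up1 + u + um1) * (up1 - um1)
                  - (delta**2 * dt / dx**3) * (up2 - 2 * up1 + 2 * um1 - um2))
        u_prev, u = u, u_next
    return u

# Cosine initial condition on [0, 2), as in the original experiment
x = np.linspace(0.0, 2.0, 256, endpoint=False)
u0 = np.cos(np.pi * x)
u = kdv_zabusky_kruskal(u0, x[1] - x[0], 1e-6, 0.022, 500)
```

A pleasant property of this scheme on a periodic grid is that both difference terms telescope to zero when summed over the domain, so the discrete "mass" (the sum of u) is conserved exactly, mirroring a conservation law of the continuous equation.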
Integrable difference schemes
Generally, it is hard to accurately compute the solutions of nonlinear differential equations due to their non-linearity. In order to overcome this difficulty, Ryogo Hirota constructed discrete versions of integrable systems with the viewpoint of "preserving the mathematical structures of integrable systems in the discrete versions".
At the same time, Mark J. Ablowitz and others not only constructed discrete soliton equations with discrete Lax pairs but also compared numerical results between integrable difference schemes and ordinary methods. As a result of their experiments, they found that integrable difference schemes can improve accuracy in some cases.
References
See also
Soliton
Integrable system
Numerical analysis
Computational science
Applied mathematics
Partial differential equations | Integrable algorithm | [
"Physics",
"Mathematics"
] | 275 | [
"Integrable systems",
"Applied mathematics",
"Theoretical physics",
"Computational mathematics",
"Computational science",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
63,572,008 | https://en.wikipedia.org/wiki/Thermodynamics%20and%20an%20Introduction%20to%20Thermostatistics | Thermodynamics and an Introduction to Thermostatistics is a textbook written by Herbert Callen that explains the basics of classical thermodynamics and discusses advanced topics in both classical and quantum frameworks. It covers the subject in an abstract and rigorous manner and contains discussions of applications. The textbook contains three parts, each building upon the previous. The first edition was published in 1960 and a second followed in 1985.
Overview
The first part of the book starts by presenting the problem thermodynamics is trying to solve and provides the postulates on which thermodynamics is founded. It then builds on this foundation to discuss reversible processes, heat engines, thermodynamic potentials, Maxwell's relations, the stability of thermodynamic systems, and first-order phase transitions. After laying down the basics of thermodynamics, the author goes on to discuss more advanced topics such as critical phenomena and irreversible processes.
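The postulational formalism underlying Part I can be summarized by the standard entropy-representation relations below. These are textbook-standard definitions consistent with Callen's approach (temperature as the reciprocal of ∂S/∂U), not equations quoted verbatim from the book:

```latex
% Fundamental relation in the entropy representation: S = S(U, V, N).
dS = \left(\frac{\partial S}{\partial U}\right)_{V,N} dU
   + \left(\frac{\partial S}{\partial V}\right)_{U,N} dV
   + \left(\frac{\partial S}{\partial N}\right)_{U,V} dN

% The intensive parameters are identified with the partial derivatives:
\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{V,N}, \qquad
\frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{U,N}, \qquad
-\frac{\mu}{T} = \left(\frac{\partial S}{\partial N}\right)_{U,V}
```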
The second part of the text presents the foundations of classical statistical mechanics. The concept of Boltzmann's entropy is introduced and used to describe the Einstein model, the two-state system, and the polymer model. Afterwards, the different statistical ensembles are discussed, from which the thermodynamic potentials are derived. Quantum fluids and fluctuations are also discussed.
The last part of the text is a brief discussion on symmetry and the conceptual foundations of thermostatistics. In the final chapter, Callen advances his thesis that the symmetries of the fundamental laws of physics underlie the very foundations of thermodynamics and seeks to illuminate the crucial role thermodynamics plays in science.
Callen advises that a one-semester course for advanced undergraduates should cover the first seven chapters plus chapters 15 and 16 if time permits.
Second edition
Background
The second edition provides a descriptive account of the thermodynamics of critical phenomena, which progressed dramatically in the 1960s and 1970s. Drawing on feedback from students and instructors, Callen improved many explanations, explicitly solved examples, and added many exercises, many of which have complete or partial answers. He also provided an introduction to statistical mechanics with an emphasis on the core principles rather than the applications. However, he sought to neither separate thermodynamics and statistical mechanics completely nor subsume the former under the latter under the banner of "thermal physics." Indeed, thermal physics courses often emphasize statistical mechanics at the expense of thermodynamics, despite its importance for industry, as a survey of business leaders conducted by the American Physical Society in 1971 suggested. Callen observed that thermodynamics had subsequently been de-emphasized.
Table of Contents
Part I: General Principles of Classical Thermodynamics
Introduction: The Nature of Thermodynamics and the Basis of Thermostatistics
Chapter 1: The Problem and the Postulates
Chapter 2: The Conditions of Equilibrium
Chapter 3: Some Formal Relationships, and Sample Systems
Chapter 4: Reversible Processes and the Maximum Work Theorem
Chapter 5: Alternative Formulations and Legendre Transformations
Chapter 6: The Extremum Principle in the Legendre Transformed Representations
Chapter 7: Maxwell Relations
Chapter 8: Stability of Thermodynamic Systems
Chapter 9: First-order Phase Transitions
Chapter 10: Critical Phenomena
Chapter 11: The Nernst Postulate
Chapter 12: Summary of Principles for General Systems
Chapter 13: Properties of Materials
Chapter 14: Irreversible Thermodynamics
Part II: Statistical Mechanics
Chapter 15: Statistical Mechanics in the Entropy Representation
Chapter 16: The Canonical Formalism; Statistical Mechanics in Helmholtz Representation
Chapter 17: Entropy and Disorder; Generalized Canonical Formulations
Chapter 18: Quantum Fluids
Chapter 19: Fluctuations
Chapter 20: Variational Properties, Perturbation Expansions, and Mean Field Theory
Part III: Foundations
Chapter 21: Postlude: Symmetry and the Conceptual Foundations of Thermostatistics
Appendix A: Some Relations Involving Partial Derivatives
Appendix B: Magnetic Systems
General References
Index
Reception
Robert B. Griffiths, a specialist in thermodynamics and statistical mechanics at Carnegie Mellon University, commented that both editions of this book present clearly and concisely the core of thermodynamics within the first eight chapters. At the time of writing (1987), Griffiths knew of other books that explained the principles of thermodynamics, but Callen's had the best presentation of the material. He believed Callen offered a pedagogical, if abrupt, treatment of the subject. The book begins in an abstract manner, assuming the existence and properties of entropy and deriving the consequences for various processes of interest, rather than approaching the subject through heat engines and thermodynamic cycles or through statistical mechanics and Boltzmann's entropy formula, S = k ln W. However, Griffiths argued that Callen's treatment of critical phenomena (Chapter 10) contains some technical flaws: Callen thought that classical analysis had broken down, but Griffiths wrote that the problem lies not in the breakdown of thermodynamics but rather in the Taylor expansion of thermodynamic quantities, and that precise expressions of the functions appearing in a fundamental relation should be determined by statistical mechanics and experiments, not thermodynamics. Nevertheless, Griffiths still believed this book to be an excellent resource for learning the basics of thermodynamics.
According to L.C. Scott, who studied statistical mechanics and biophysics at Oklahoma State University, Thermodynamics and an Introduction to Thermostatistics is a popular textbook that begins with some basic postulates based on intuitive classical, empirical, and macroscopic arguments. He found it remarkable that the whole edifice of classical thermodynamics follows from just a few basic assumptions. However, Scott preferred the discussion of temperature in Heat and Thermodynamics by Mark W. Zemansky and Richard H. Dittman because it is based on thermometry and forces students to contemplate the empirical basis of the concept of temperature, leaving aside the molecular basis of heat. He argued that such an approach yields greater appreciation for the meaning of temperature and its statistical-mechanical basis, which students will encounter later. In contrast, Callen's book does not mention temperature until Chapter 2, where Callen defines temperature as the reciprocal of the derivative of entropy with respect to internal energy and then shows, using the postulates, that this definition is consistent with our intuition. While Zemansky and Dittman covered the first law of thermodynamics empirically, Callen simply assumes the existence of the internal energy function and then invokes the conservative nature of inter-atomic forces. Whereas Zemansky and Dittman treated the second law of thermodynamics using heat engines and simply stated the Clausius and Kelvin formulations of it, in Callen's book the second law is contained within the postulates. Scott was unsure which approach is more understandable for students. In general, Zemansky and Dittman employed an empirical approach while Callen's is deductive. Scott opined that Zemansky and Dittman's book is more suitable for beginning students, while Callen's is more appropriate for an advanced course or as a reference.
Editions
See also
List of textbooks in thermodynamics and statistical mechanics
List of textbooks on classical mechanics and quantum mechanics
References
1960 books
1985 books
Physics textbooks
Thermodynamics literature | Thermodynamics and an Introduction to Thermostatistics | [
"Physics",
"Chemistry"
] | 1,512 | [
"Thermodynamics literature",
"Thermodynamics"
] |
63,576,316 | https://en.wikipedia.org/wiki/Trace%20metal%20stable%20isotope%20biogeochemistry | Trace metal stable isotope biogeochemistry is the study of the distribution and relative abundances of trace metal isotopes in order to better understand the biological, geological, and chemical processes occurring in an environment. Trace metals are elements such as iron, magnesium, copper, and zinc that occur at low levels in the environment. Trace metals are critically important in biology and are involved in many processes that allow organisms to grow and generate energy. In addition, trace metals are constituents of numerous rocks and minerals, thus serving as an important component of the geosphere. Both stable and radioactive isotopes of trace metals exist, but this article focuses on those that are stable. Isotopic variations of trace metals in samples are used as isotopic fingerprints to elucidate the processes occurring in an environment and answer questions relating to biology, geochemistry, and medicine.
Isotope notation
In order to study trace metal stable isotope biogeochemistry, it is necessary to compare the relative abundances of isotopes of trace metals in a given biological, geological, or chemical pool to a standard (discussed individually for each isotope system below) and monitor how those relative abundances change as a result of various biogeochemical processes. Conventional notations used to mathematically describe isotope abundances, as exemplified here for 56Fe, include the isotope ratio (56R), fractional abundance (56F) and delta notation (δ56Fe). Furthermore, as different biogeochemical processes vary the relative abundances of the isotopes of a given trace metal, different reaction pools or substances will become enriched or depleted in specific isotopes. This partial separation of isotopes between different pools is termed isotope fractionation, and is mathematically described by fractionation factors α or ε (which express the difference in isotope ratio between two pools), or by "cap delta" (Δ; the difference between two δ values). For a more complete description of these notations, see the isotope notation section in Hydrogen isotope biogeochemistry.
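These conventions can be made concrete with a short numerical sketch. The function names and the reference ratio used below are my own illustrative choices (the standard's ratio is not a certified value), but the formulas follow the definitions given in this section:

```python
def isotope_ratio(n_heavy, n_light):
    """Isotope ratio, e.g. 56R = 56Fe / 54Fe."""
    return n_heavy / n_light

def fractional_abundance(n_heavy, n_light):
    """Fractional abundance, e.g. 56F = 56Fe / (56Fe + 54Fe)."""
    return n_heavy / (n_heavy + n_light)

def delta_permil(r_sample, r_standard):
    """Delta notation in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

def epsilon_permil(r_pool_a, r_pool_b):
    """Fractionation between two pools: (alpha - 1) * 1000, with alpha = R_A / R_B."""
    return (r_pool_a / r_pool_b - 1.0) * 1000.0

def cap_delta(delta_a, delta_b):
    """'Cap delta': the difference between two delta values."""
    return delta_a - delta_b

# Illustrative example: a sample whose 56Fe/54Fe ratio is 0.1% higher than
# the standard has delta-56Fe of approximately +1 permil.
R_STD = 15.7  # illustrative 56Fe/54Fe reference ratio, not a certified value
print(delta_permil(R_STD * 1.001, R_STD))  # approximately 1.0
```

For small fractionations, ε between two pools and the Δ of their delta values are numerically close, which is why both notations appear in the literature.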
Naturally occurring trace metal isotope variations and fractionations
In nature, variations in isotopic ratios of trace metals on the order of a few tenths to several ‰ are observed within and across diverse environments spanning the geosphere, hydrosphere and biosphere. A complete understanding of all processes that fractionate trace metal isotopes is presently lacking, but in general, isotopes of trace metals are fractionated during various chemical and biological processes due to kinetic and equilibrium isotope effects.
Geochemical fractionations
Certain isotopes of trace metals are preferentially oxidized or reduced; thus, transitions between redox species of the metal ions (e.g., Fe2+ → Fe3+) are fractionating, resulting in different isotopic compositions between the different redox pools in the environment. Additionally, at high temperatures, metal ions can evaporate (and subsequently condense upon cooling), and the relative differences in isotope masses of a given heavy metal lead to fractionation during these evaporation and condensation processes. Diffusion of isotopes through a solution or material can also result in fractionation, as the lighter isotopes diffuse at a faster rate. Additionally, isotopes can have slight variations in their solubility and other chemical and physical properties, which can also drive fractionation.
Biological fractionations
In sediments, oceans, and rivers, distinct trace metal isotope ratios exist due to biological processes such as metal ion uptake and abiotic processes such as adsorption to particulate matter that preferentially remove certain isotopes. The trace metal isotopic composition of a given organism results from a combination of the isotopic compositions of source material (i.e., food and water) and any fractionations imparted during metal ion uptake, translocation and processing inside cells.
Applications of trace metal isotope ratios
Stable isotope ratios of trace metals can be used to answer a variety of questions spanning diverse fields, including oceanography, geochemistry, biology, medicine, anthropology and astronomy. In addition to their modern applications, trace metal isotopic compositions can provide insight into ancient biogeochemical processes that operated on Earth. These signatures arise because the processes that form and modify samples are recorded in the trace metal isotopic compositions of the samples. By analyzing and understanding trace metal isotopic compositions in biological, chemical or geological materials, one can answer questions such as the sources of nutrients for phytoplankton in the ocean, the processes that drove the formation of geologic structures, the diets of modern or ancient organisms, and the accretionary processes that took place in the early Solar System. Trace metal stable isotope biogeochemistry is still an emerging field, yet each trace metal isotope system has clear, powerful applications to diverse and important questions. Important heavy metal isotope systems are discussed (in order of increasing atomic mass) in the proceeding sections.
Iron
Stable isotopes and natural abundances
Naturally occurring iron has four stable isotopes, 54Fe, 56Fe, 57Fe, and 58Fe.
Stable iron isotopes are described as the relative abundance of each of the stable isotopes with respect to 54Fe. The standard for iron is elemental iron, IRMM-014, distributed by the Institute for Reference Materials and Measurements. The delta value of a sample is defined relative to this standard as:

$\delta^{56}\mathrm{Fe} = \left(\frac{(^{56}\mathrm{Fe}/^{54}\mathrm{Fe})_{\mathrm{sample}}}{(^{56}\mathrm{Fe}/^{54}\mathrm{Fe})_{\mathrm{IRMM\text{-}014}}} - 1\right) \times 1000$

Delta values are often reported as per mil values (‰), or part-per-thousand differences from the standard. Iron isotopic fractionation is also commonly described in units of per mil per atomic mass unit.
In many cases, the δ56Fe value can be related to the δ57Fe and δ58Fe values through mass-dependent fractionation, with slopes set approximately by the relative mass differences:

$\delta^{57}\mathrm{Fe} \approx 1.5 \times \delta^{56}\mathrm{Fe}, \qquad \delta^{58}\mathrm{Fe} \approx 2 \times \delta^{56}\mathrm{Fe}$
Chemistry
One of the most prevalent features of iron chemistry is its redox chemistry. Iron has three oxidation states: metallic iron (Fe0), ferrous iron (Fe2+), and ferric iron (Fe3+). Ferrous iron is the reduced form of iron, and ferric iron is the oxidized form. In the presence of oxygen, ferrous iron is oxidized to ferric iron; thus, ferric iron is the dominant redox state of iron at Earth's surface conditions, while ferrous iron dominates below the surface at depth. Because of this redox chemistry, iron can act as either an electron donor or acceptor, making it a metabolically useful species.
Each form of iron has a specific distribution of electrons (i.e., electron configuration), tabulated below:

Fe0: [Ar] 3d6 4s2
Fe2+: [Ar] 3d6
Fe3+: [Ar] 3d5
Equilibrium Isotope Fractionation
Variations in iron isotopes are caused by a number of chemical processes which result in the preferential incorporation of certain isotopes of iron into certain phases. Many of the chemical processes which fractionate iron are not well understood and are still being studied. The most well-documented chemical processes which fractionate iron isotopes relate to its redox chemistry, the evaporation and condensation of iron, and the diffusion of dissolved iron through systems. These processes are described in more detail below.
Fractionation as a result of redox chemistry
To first order, reduced iron favors isotopically light iron and oxidized iron favors isotopically heavy iron. This effect has been studied with regard to the abiotic oxidation of Fe2+ to Fe3+, which results in fractionation. The mineral ferrihydrite, which forms in acidic aquatic conditions, is precipitated via the oxidation of aqueous Fe2+ to Fe3+. Precipitated ferrihydrite has been found to be enriched in the heavy isotopes by 0.45‰ per atomic mass unit with respect to the starting material. This indicates that heavier iron isotopes are preferentially precipitated as a result of oxidizing processes.
Theoretical calculations in combination with experimental data have also aimed to quantify the fractionation between Fe(III)aq and Fe(II)aq in HCl. Based on modeling, the fractionation factor between the two species is temperature dependent, decreasing with increasing temperature (scaling approximately as 1/T², as is typical of equilibrium isotope effects).
Fractionation as a result of evaporation and condensation
Evaporation and condensation can give rise to both kinetic and equilibrium isotope effects. While equilibrium mass fractionation is present during evaporation and condensation, it is negligible compared to kinetic effects. During condensation, the condensate is enriched in the light isotope, whereas during evaporation, the gas phase is enriched in the light isotope. Using the kinetic theory of gases, a fractionation factor of α = 1.01835 is predicted for 56Fe/54Fe during the evaporation of a pool containing equimolar amounts of 56Fe and 54Fe. In evaporation experiments, the evaporation of FeO at 1,823 K gave a fractionation factor of α = 1.01877. Presently, there have been no experimental attempts to determine the 56Fe/54Fe fractionation factor of condensation.
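The α = 1.01835 value quoted above is simply the inverse-square-root-of-mass (Graham's law) prediction of kinetic theory, which can be verified directly (a one-line check of my own, not code from any study):

```python
import math

# Kinetic theory of gases: effusion/evaporation rates scale as 1/sqrt(mass),
# so the predicted instantaneous 56Fe/54Fe fractionation factor is
alpha_evap = math.sqrt(56.0 / 54.0)
print(round(alpha_evap, 5))  # matches the quoted value of 1.01835
```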
Fractionation as a result of diffusion
Kinetic fractionation of dissolved iron occurs as a result of diffusion. When isotopes diffuse, the lower-mass isotopes diffuse more quickly than the heavier isotopes, resulting in fractionation. This difference in diffusion rates has been approximated as:

$\frac{D_1}{D_2} = \left(\frac{m_2}{m_1}\right)^{\beta}$

In this equation, D1 and D2 are the diffusivities of the isotopes, m1 and m2 are the masses of the isotopes, and β is an empirical exponent that can vary between 0 and 0.5 depending on the system. More work is required to fully understand fractionation as a result of diffusion; studies of diffusion of iron in metal have consistently given β values of approximately 0.25. Iron diffusion between silicate melts and basaltic/rhyolitic melts has given lower β values (~0.030). In aqueous environments, a β value of 0.0025 has been obtained.
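As a quick numerical illustration of this mass dependence, using the β values quoted in the text (the function name is my own, not from any library):

```python
def diffusivity_ratio(m_light, m_heavy, beta):
    """D_light / D_heavy = (m_heavy / m_light) ** beta; lighter isotopes
    diffuse faster, so the ratio exceeds 1 whenever beta > 0."""
    return (m_heavy / m_light) ** beta

# 54Fe vs 56Fe with the beta values quoted in the text:
r_metal = diffusivity_ratio(54.0, 56.0, 0.25)    # diffusion in metal
r_melt = diffusivity_ratio(54.0, 56.0, 0.030)    # silicate melts
r_water = diffusivity_ratio(54.0, 56.0, 0.0025)  # aqueous environments
# The effect shrinks from ~0.9% faster diffusion of 54Fe in metal
# to ~0.009% in water.
```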
Fractionation as a result of phase partitioning
There may be equilibrium fractionation between coexisting minerals. This would be particularly relevant when considering the formation of planetary bodies early in the solar system. Experiments have aimed to simulate the formation of the Earth at high temperatures using a platinum-iron alloy and an analog for the silicate earth at 1,500 °C. However, the observed fractionation was very small, less than 0.2‰ per atomic mass unit. More experimental work is needed to fully understand this effect.
Biology
In biology, iron plays a number of roles. Iron is widespread in most living organisms and is essential for their function. In microbes, iron serves as an electron donor or acceptor in microbial metabolism, allowing microbes to generate energy. In the oceans, iron is essential for the growth and survival of phytoplankton, which use iron to fix nitrogen. Iron is also important in plants, given that they need iron to transfer electrons during photosynthesis. Finally, in animals, iron plays many roles; its most essential function is to transport oxygen in the bloodstream throughout the body. Iron thus undergoes many biological processes, each of which may preferentially use certain iron isotopes. While iron isotopic fractionations are observed in many organisms, they are still not well understood. Improved understanding of the iron isotope fractionations observed in biology will enable a more complete knowledge of the enzymatic, metabolic, and other biological pathways in different organisms. Below, the known iron isotopic variations for different classes of organisms are described.
Iron reducing bacteria
Iron reducing bacteria reduce ferric iron to ferrous iron under anaerobic conditions. One of the first studies of iron fractionation in iron-reducing bacteria examined the bacterium Shewanella algae. S. algae was grown on a ferrihydrite substrate and then allowed to reduce iron. The study found that S. algae preferentially reduced 54Fe over 56Fe, with a δ56/54Fe value of -1.3‰.
More recent experiments have studied the bacterium Shewanella putrefaciens and its reduction of Fe(III) in goethite. These studies have found δ56/54Fe values of -1.2‰ relative to the goethite. The kinetics of this fractionation were also studied in this experiment, and it was suggested that the iron isotope fractionation is likely related to the kinetics of the electron transfer step.
Most studies of other iron reducing bacteria have found δ56/54Fe values of approximately -1.3‰. At high Fe(III) reduction rates, δ56/54Fe values of -2 to -3‰ relative to the substrate have been observed. The study of iron isotopes in iron reducing bacteria enables an improved understanding of the metabolic processes operating in these organisms.
Iron oxidizing bacteria
While most iron is oxidized as a result of interaction with atmospheric oxygen or oxygenated waters, oxidation by bacteria is an active process in anoxic environments and in oxygenated, low pH (<3) environments. Studies of the acidophilic Fe(II)-oxidizing bacterium Acidithiobacillus ferrooxidans have been used to determine the fractionation caused by iron-oxidizing bacteria. In most cases, δ56/54Fe values between 2 and 3‰ were measured. However, a Rayleigh trend with a fractionation factor of αFe(III)aq-Fe(II)aq ~ 1.0022 was observed, which is smaller than the fractionation factor in the abiotic control experiments (αFe(III)aq-Fe(II)aq ~ 1.0034) and has been inferred to reflect a biological isotope effect. Iron isotopes can thus improve the understanding of the metabolic processes controlling iron oxidation and energy production in these organisms.
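The Rayleigh trend mentioned above follows the standard distillation relation R/R0 = f^(α−1) for the residual reactant pool. The sketch below applies that textbook formula with the experimentally inferred α ≈ 1.0022; the fraction-remaining values are illustrative, not data from the study:

```python
def rayleigh_residual_delta(delta0_permil, alpha, f_remaining):
    """Delta (per mil) of the residual reactant pool when a fraction
    f_remaining is left, from the Rayleigh relation R/R0 = f**(alpha - 1)."""
    r_over_r0 = f_remaining ** (alpha - 1.0)
    return (1.0 + delta0_permil / 1000.0) * r_over_r0 * 1000.0 - 1000.0

# With alpha(Fe(III)aq - Fe(II)aq) ~ 1.0022, the heavy isotope partitions into
# the Fe(III) product, so the residual Fe(II) pool grows progressively lighter
# as oxidation proceeds (f values are illustrative):
for f in (1.0, 0.5, 0.1):
    print(f, round(rayleigh_residual_delta(0.0, 1.0022, f), 3))
```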
Photoautotrophic bacteria, which oxidize Fe(II) under anaerobic conditions, have also been studied. The Thiodictyon bacteria precipitate poorly crystalline hydrous ferric oxide when they oxidize iron. The precipitate was enriched in 56Fe relative to Fe(II)aq, with a δ56/54Fe value of +1.5 ± 0.2‰.
Magnetotactic bacteria
Magnetotactic bacteria are bacteria with magnetosomes that contain magnetic crystals, usually magnetite or greigite, which allow them to orient themselves with the Earth’s magnetic field lines. These bacteria mineralize magnetite via the reduction of Fe(III), usually in microaerobic or anoxic environments. In the magnetotactic bacteria that have been studied, there was no significant iron isotope fractionation observed.
Phytoplankton
Iron is important for the growth of phytoplankton. In phytoplankton, iron is used for electron transfer reactions in photosynthesis in both photosystem I and photosystem II. Additionally, iron is an important component of the enzyme nitrogenase, which is used to fix nitrogen. In measurements at open ocean stations, phytoplankton are isotopically light, with the fractionation as a result of biological uptake measured between -0.25‰ and -0.13‰. A better understanding of this fractionation will allow more precise insight into phytoplankton photosynthetic processes.
Animals
Iron has many important roles in animal biology, specifically when considering oxygen transport in the bloodstream, oxygen storage in muscles, and enzymes. Known isotope variations are shown in the figure below. Iron isotopes could be useful tracers of the iron biochemical pathways in animals, and also be indicative of trophic levels in a food chain.
Iron isotope variations in humans reflect a number of processes. Specifically, iron in the bloodstream reflects dietary iron, which is isotopically lighter than iron in the geosphere. Iron isotopes are distributed heterogeneously throughout the body, primarily to red blood cells, the liver, muscle, skin, enzymes, nails, and hair. Iron losses from the body (intestinal bleeding, bile, sweat, etc.) favor the loss of isotopically heavy iron, with mean losses averaging a δ56Fe of +10‰. Iron absorption in the intestine favors lighter iron isotopes, largely because uptake is mediated by transport proteins such as transferrin in kinetically controlled processes, resulting in the preferential uptake of isotopically light iron.
The observed iron isotopic variations in humans and animals are particularly important as tracers. Iron isotopic signatures are utilized to determine the geographic origin of food. Additionally, anthropologists and paleontologists use iron isotope data in order to track the transfer of iron between the geosphere and the biosphere, specifically between plant foods and animals. This allows for the reconstruction of ancient dietary habits based on the variations in iron isotopes in food.
Geochemistry
By mass, iron is the most common element on Earth, and it is the fourth most abundant element in the Earth's crust. Thus, iron is widespread throughout the geosphere and is also common on other planetary bodies. Natural variations of iron in the geosphere are relatively small. Currently, the values of δ56/54Fe measured in rocks and minerals range from -2.5‰ to +1.5‰. Iron isotope composition is homogeneous in igneous rocks to ±0.05‰, indicating that much of the geologic isotopic variability results from the formation of rocks and minerals at low temperature. This homogeneity is particularly useful when tracing processes that fractionate iron through the system. While the fractionation of igneous rocks is relatively constant, there are larger variations in the iron isotopic composition of chemical sediments. Thus, iron isotopes are used to determine the origin of the protolith of heavily metamorphosed rocks of sedimentary origin. A better understanding of how iron isotopes fractionate in the geosphere can help clarify geologic processes of formation.
Natural iron isotopic variations
To date, iron is one of the most widely studied trace metals, and iron isotope compositions are relatively well-documented. Based on measurements, iron isotopes exhibit minimal variation (±3‰) in the terrestrial environment. A list of iron isotopic values of different materials from different environments is presented below.
In terrestrial environments
There is an extreme constancy in the isotopic composition of igneous rocks. The mean value of δ56Fe of terrestrial rocks is 0.00 ± 0.05‰. More precise isotopic measurements indicate that the small deviations from 0.00‰ may reflect a slight mass-dependent fractionation. This mass fractionation has been proposed to be FFe = 0.039 ± 0.008‰ per atomic mass unit relative to IRMM-014. There may also be slight isotopic variations in igneous rocks depending on their composition and process of formation. The average value of δ56Fe for ultramafic igneous rocks is -0.06‰, whereas the average value of δ56Fe for mid-ocean ridge basalts (MORB) is +0.03‰. Sedimentary rocks exhibit slightly larger variations in δ56Fe, with values between -1.6‰ and +0.9‰ relative to IRMM-014. The δ56Fe values of banded iron formations span the entire range observed on Earth, from -2.5‰ to +1‰.
In the oceans
There are slight iron isotopic variations in the oceans relative to IRMM-014, which likely reflect variations in the biogeochemical cycling of iron within a given ocean basin. In the southeastern Atlantic, δ56Fe values between -0.13 and +0.21‰ have been measured. In the north Atlantic, δ56Fe values between -1.35 and +0.80‰ have been measured. In the equatorial Pacific δ56Fe values between -0.03 and +0.58‰ have been measured. The supply of aerosol iron particles to the ocean have an isotopic composition of approximately 0‰. Dissolved iron riverine input to the ocean is isotopically light relative to igneous rocks, with δ56Fe values between -1 and 0‰.
Most modern marine sediments have δ56Fe values similar to those of igneous δ56Fe values. Marine ferromanganese nodules have δ56Fe values between -0.8 and 0‰.
In hydrothermal systems
Hot (> 300 °C) hydrothermal fluids from mid ocean ridges are isotopically light, with δ56Fe between -0.2 and -0.8‰. Particles in hydrothermal plumes are isotopically heavy relative to the hydrothermal fluids, with δ56Fe between 0.1 and 1.1‰. Hydrothermal deposits have average δ56Fe between -1.6 and 0.3‰. The sulfide minerals within these deposits have δ56Fe between -2.0 and 1.1‰.
In extraterrestrial objects
Variations in iron isotopic composition have been observed in meteorite samples from other planetary bodies. The Moon has variations in iron isotopes of 0.4‰ per atomic mass unit. Mars has very small isotope fractionation of 0.001 ± 0.006‰ per atomic mass unit. Vesta has iron fractionations of 0.010 ± 0.010‰ per atomic mass unit. The chondritic reservoir exhibits fractionations of 0.069 ± 0.010‰ per atomic mass unit. Isotopic variations observed on planetary bodies can help to constrain and better understand their formation and processes occurring in the early Solar System.
Measurement
High precision iron isotope measurements are obtained either via thermal ionization mass spectrometry (TIMS) or multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS).
Applications of iron isotopes
Iron isotopes have many applications in the geosciences, biology, medicine, and other fields. Their ability to act as isotopic tracers allows them to be used to study the formation of geologic units and as a potential proxy for life on Earth and other planets. Iron isotopes also have applications in anthropology and paleontology, as they are used to study the diets of ancient civilizations and animals. The widespread uses of iron in biology make its isotopes a promising frontier in biomedical research, specifically their use to prevent and treat blood conditions and other pathological blood diseases. Some of the more prevalent applications of iron isotopes are described below.
Banded iron formations
Banded iron formations (BIFs) are particularly important when considering the surface environments of the early Earth, which were significantly different from the surface environments observed today. This is manifested in the mineralogy of these formations, which are indicative of different redox conditions. Additionally, BIFs are interesting in that they were deposited while major changes were occurring in the atmosphere and in the biosphere 2.8 to 1.8 billion years ago. Iron isotopic studies can reveal the details of the formation of BIFs, which allows for the reconstruction of redox and climatic conditions at the time of deposition.
BIFs formed as a result of the oxidation of iron by oxygen, which was likely generated by the evolution of cyanobacteria. This was followed by the subsequent precipitation of iron particles in the ocean. Observed variations in the iron isotopic composition of BIFs span the entire range observed on Earth, with δ56/54Fe values between -2.5 and +1‰. These variations are hypothesized to arise for three reasons. The first relates to the varying mineralogy of the BIFs. Within the BIFs, minerals such as hematite, magnetite, siderite, and pyrite are observed. These minerals each exhibit different isotopic fractionations, likely as a result of their structures and the kinetics of their growth. The isotopic composition of the BIFs is indicative of the fluids from which they precipitated, which has applications when reconstructing environmental conditions of the ancient Earth. It has also been suggested that BIFs may be biologic in origin. The range of their δ56/54Fe values falls within the range of those observed to result from bacterial metabolic processes, such as those of anoxygenic phototrophic iron-oxidizing bacteria. Ultimately, an improved understanding of BIFs based on iron isotope fractionations would allow for the reconstruction of past environments and the constraint of processes occurring on the ancient Earth. However, given that the values observed as a result of biogenic and abiogenic fractionation are relatively similar, the exact processes that formed BIFs remain unclear. Thus, continued study and improved understanding of biologic and abiologic fractionation effects would provide better details regarding BIF formation.
Iron cycling in the ocean
Iron isotopes have become particularly useful in recent years for tracing biogeochemical cycling in the oceans. Iron is an important micronutrient for living species in the ocean, particularly for the growth of phytoplankton. Iron is estimated to limit phytoplankton growth in about one half of the ocean. As a result, the development of a better understanding of sources and cycling of iron in the modern oceans is important. Iron isotopes have been used to better constrain these pathways through data collected by the GEOTRACES program, which has collected iron isotopic data throughout the ocean. Based on the variations in iron isotopes, biogeochemical cycling and other processes controlling iron distribution in the ocean can be elucidated.
For example, the combination of iron concentration and iron isotope data can be used to determine the sources of oceanic iron. In the South Atlantic and in the Southern Ocean, isotopically light iron is observed in intermediate waters (200 - 1,300 meters), whereas isotopically heavy iron is observed in surface waters and deep waters (> 1,300 meters). To first order, this demonstrates that there are different sources, sinks, and processes contributing to the iron cycle in varying water masses. The isotopically light iron in intermediate waters suggests that the dominant iron sources include remineralized organic matter. This organic matter is isotopically light because phytoplankton preferentially take up light iron. In the surface ocean, the isotopically heavy iron reflects external sources of iron, such as dust, which is isotopically heavy relative to IRMM-014, and the sink of light isotopes as a result of their preferential uptake by phytoplankton. The isotopically heavy iron in the deep ocean suggests that the iron cycle there is dominated by the abiotic, non-reductive release of iron, via desorption or dissolution, from particles. Isotopic analyses similar to the one above are utilized throughout all of the world's oceans to better understand regional variability in the processes which control iron cycling. These analyses can then be synthesized to better model the global biogeochemical cycling of iron, which is particularly important when considering primary production in the ocean.
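The kind of source attribution described above can be illustrated with a simple two-endmember isotope mass balance. In this sketch only the ~0‰ dust endmember comes from the text; the remineralized endmember and the mixture value are hypothetical, and concentration weighting is neglected for simplicity:

```python
def mixing_fraction(d_mix: float, d_a: float, d_b: float) -> float:
    """Fraction of endmember A in a two-endmember mixture, from
    d_mix = f * d_a + (1 - f) * d_b (delta values treated as linearly
    mixable, a common small-signal simplification)."""
    return (d_mix - d_b) / (d_a - d_b)

# Hypothetical example: surface-water iron as a mix of dust-derived iron
# (δ56Fe ≈ 0.0‰, per the text) and remineralized organic iron (assumed -0.5‰)
f_dust = mixing_fraction(d_mix=-0.2, d_a=0.0, d_b=-0.5)
print(round(f_dust, 3))  # → 0.6 (60% dust-derived under these assumptions)
```

Real applications weight each endmember by its iron concentration as well as its δ value, but the algebra is the same.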
Constraining processes on extraterrestrial bodies
Iron isotopes have been applied for a number of purposes on planetary bodies. Their variations have been measured to more precisely determine the processes that occurred during planetary accretion. In the future, the comparison of observed biological fractionation of iron on Earth to fractionation on other planetary bodies may have astrobiological implications.
Planetary accretion
One of the primary challenges in the study of planetary accretion is the fact that many tracers of the processes occurring in the early Solar System have been eliminated as a result of subsequent geologic events. Because transition metals do not show large stable isotope fractionations as a result of these events and because iron is one of the most abundant elements in the terrestrial planets, its isotopic variability has been used as a tracer of early Solar System processes.
Variations in δ57/54Fe between samples from Vesta, Mars, the Moon, and Earth have been observed, and these variations cannot be explained by any known petrological, geochemical, or planetary processes; thus, it has been inferred that the observed fractionations are a result of planetary accretion. Notably, the isotopic compositions of the Earth and the Moon are much heavier than those of Vesta and Mars. This provides strong support for the giant-impact hypothesis, as such an impact would generate enormous amounts of energy, melting and vaporizing iron and leading to the preferential escape of the lighter iron isotopes to space. More of the heavier isotopes would remain, resulting in the heavier iron isotopic compositions observed for the Earth and the Moon. The samples from Vesta and Mars exhibit minimal fractionation, consistent with the theory of runaway growth for their formation, as this process would not yield significant fractionations. Further study of the stable isotopes of iron in other planetary bodies and samples could provide further evidence and more precise constraints for planetary accretion and other processes that occurred in the early Solar System.
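The preferential escape of light isotopes during impact-driven vaporization can be sketched with a Rayleigh distillation model. The fractionation factor and loss fraction below are purely illustrative assumptions, not values from the text:

```python
def rayleigh_residue(delta0: float, f: float, alpha: float) -> float:
    """δ value (‰) of the residual reservoir after Rayleigh distillation.
    delta0: initial δ (‰); f: fraction of iron remaining;
    alpha: vapor/residue fractionation factor (< 1 means the vapor is
    isotopically light, so the residue grows heavier as f decreases)."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# Hypothetical: vapor 0.1‰ lighter than the melt (alpha = 0.9999);
# losing 50% of the iron leaves the residue measurably heavier.
residue = rayleigh_residue(delta0=0.0, f=0.5, alpha=0.9999)
print(round(residue, 4))  # a small positive δ, ~+0.07‰
```

Even a tiny vapor-residue fractionation, integrated over large mass loss, can shift a planetary reservoir by the tenths of a per mil discussed above.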
Astrobiology
The use of iron isotopes may also have applications when studying potential evidence for life on other planets. The ability of microbes to utilize iron in their metabolisms makes it possible for organisms to survive in anoxic, iron-rich environments, such as Mars. Thus, the continual improvement of knowledge regarding the biological fractionations of iron observed on Earth can have applications when studying extraterrestrial samples in the future. While this field of research is still developing, this could provide evidence regarding whether a sample was generated as a result of biologic or abiologic processes depending on the isotopic fractionation. For example, it has been hypothesized that magnetite crystals found in Martian meteorites may have formed biologically as a result of their striking similarity to magnetite crystals produced by magnetotactic bacteria on Earth. Iron isotopes could be used to study the origin of the proposed "magnetofossils" and other rock formations on Mars.
Biomedical research
Iron plays many roles in human biology, specifically in oxygen transport, short-term oxygen storage, and metabolism. Iron also plays a role in the body's immune system. Current biomedical research aims to use iron isotopes to better understand the speciation of iron in the body, with hopes of eventually being able to reduce the availability of free iron, as this would help to defend against infection.
Iron isotopes can also be utilized to better understand iron absorption in humans. The iron isotopic composition of blood reflects an individual's long-term absorption of dietary iron. This allows for the study of genetic predisposition to blood conditions, such as anemia, which will ultimately enable the prevention, identification, and resolution of blood disorders. Iron isotopic data could also aid in identifying impairments of the iron absorption regulatory system in the body, which would help to prevent the development of pathological conditions related to issues with iron regulation.
Cobalt
Nickel
Copper
Stable isotopes and natural abundances
Copper has two naturally occurring stable isotopes, 63Cu and 65Cu, with natural abundances of approximately 69.2% and 30.8%, respectively:
The isotopic composition of Cu is conventionally reported in delta notation (in ‰) relative to a NIST SRM 976 standard: δ65Cu = [(65Cu/63Cu)sample / (65Cu/63Cu)NIST SRM 976 − 1] × 1000.
Chemistry
Copper can exist in non-ionic form (as Cu0) or in one of two redox states: Cu1+ (reduced) or Cu2+ (oxidized). Each form of Cu has a specific distribution of electrons (i.e., electron configuration): [Ar]3d104s1 for Cu0, [Ar]3d10 for Cu1+, and [Ar]3d9 for Cu2+.
The electronic configurations of Cu control the number and types of bonds Cu can form with other atoms (e.g., see Copper Biology section). These diverse coordination chemistries are what enable Cu to participate in many different biological and chemical reactions.
Finally, due to its full d-orbital, Cu1+ is diamagnetic. In contrast, Cu2+ has one unpaired electron in its d-orbital, making it paramagnetic. These different magnetic properties enable determination of Cu's redox state by techniques such as electron paramagnetic resonance (EPR) spectroscopy, which can identify atoms with unpaired electrons by exciting electron spins.
Equilibrium isotope fractionation
Transitions between redox species Cu1+ and Cu2+ fractionate Cu isotopes. 63Cu2+ is preferentially reduced over 65Cu2+, leaving the residual Cu2+ enriched in 65Cu. The equilibrium fractionation factor for speciation between Cu2+ and Cu1+ (αCu(II)-Cu(I)) is 1.00403 (i.e., dissolved Cu2+ is enriched in 65Cu by ~+4‰ relative to Cu1+).
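The relationship between the quoted α of 1.00403 and the ~+4‰ enrichment follows from the standard conversions between α, ε, and 1000 ln α, sketched here:

```python
import math

alpha = 1.00403  # equilibrium fractionation factor, Cu(II)/Cu(I), from the text

# Two common per-mil expressions of a fractionation factor:
eps = (alpha - 1.0) * 1000.0          # epsilon notation
cap_delta = 1000.0 * math.log(alpha)  # 1000 ln(alpha); nearly identical when eps is small

print(round(eps, 2), round(cap_delta, 2))  # → 4.03 4.02
```

Both conventions recover the ~+4‰ enrichment of dissolved Cu2+ relative to Cu1+ stated above; the two differ only at second order in ε.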
Biology
Copper can be found in the active sites of most enzymes that catalyze redox reactions (i.e., oxidoreductases), as it facilitates single electron transfers while reversibly oscillating between the Cu1+ and Cu2+ redox states. Enzymes typically contain between one (mononuclear) and four (tetranuclear) copper centers, which enable enzymes to catalyze different reactions. These copper centers coordinate with different ligands depending on the Cu redox state. Oxidized Cu2+ preferentially coordinates with "hard donor" ligands (e.g., N- or O-containing ligands such as histidine, aspartic acid, glutamic acid or tyrosine), while reduced Cu1+ preferentially coordinates with "soft donor" ligands (e.g., S-containing ligands such as cysteine or methionine). Copper's powerful redox capability makes it critically important for biology, but comes at a cost: Cu1+ is a highly toxic metal to cells because it readily abstracts single electrons from organic compounds and cellular material, leading to production of free radicals. Thus, cells have evolved specific strategies for carefully controlling the activity of Cu1+ while exploiting its redox behavior.
Examples of copper-based enzymes
Copper serves catalytic and structural roles in many essential enzymes in biology. In the context of catalytic activity, copper proteins function as electron or oxygen carriers, oxidases, mono- and dioxygenases and nitrite reductases. In particular, copper-containing enzymes include hemocyanins, one flavor of superoxide dismutase (SOD), metallothionein, cytochrome c oxidase, multicopper oxidase and particulate methane monooxygenase (pMMO).
Biological fractionation
Biological processes that fractionate Cu isotopes are not well-understood, but play an important role in driving the δ65Cu values of materials observed in the marine and terrestrial environments. The natural 65Cu/63Cu ratio varies according to copper's redox form and the ligand to which copper binds. Oxidized Cu2+ preferentially coordinates with hard donor ligands (e.g., N- or O-containing ligands), while reduced Cu1+ preferentially coordinates with soft donor ligands (e.g., S-containing ligands). As 65Cu is preferentially oxidized over 63Cu, these isotopes tend to coordinate with hard and soft donor ligands, respectively. Cu isotopes can fractionate upon Cu-bacteria interactions from processes that include Cu adsorption to cells, intracellular uptake, metabolic regulation and redox speciation. Fractionation of Cu isotopes upon adsorption to cellular walls appears to depend on the surface functional groups that Cu complexes with, and can span positive and negative values. Furthermore, bacteria preferentially incorporate the lighter Cu isotope intracellularly and into proteins. For example, E. coli, B. subtilis and a natural consortium of microbes sequestered Cu with apparent fractionations (ε65Cu) ranging from ~-1.0 to -4.4‰. Additionally, fractionation of Cu upon incorporation into the apoprotein of azurin was ~-1‰ in P. aeruginosa, and -1.5‰ in E. coli, while ε65Cu values of Cu incorporation into Cu-metallothionein and Cu-Zn-SOD in yeast were -1.7 and -1.2‰, respectively.
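An apparent fractionation ε relates the δ value of incorporated Cu to that of its source. A minimal sketch using the -1.5‰ E. coli azurin value from the text, with a hypothetical growth-medium composition:

```python
def delta_product(delta_source: float, eps: float) -> float:
    """Approximate δ (‰) of Cu incorporated by cells, given the source δ and
    an apparent fractionation eps (‰). Uses alpha = 1 + eps/1000; the linear
    shortcut δ_product ≈ δ_source + eps holds when both values are small."""
    alpha = 1.0 + eps / 1000.0
    return alpha * (delta_source + 1000.0) - 1000.0

# E. coli azurin incorporation, ε65Cu ≈ -1.5‰ (from the text), with a
# hypothetical growth medium at δ65Cu = +0.2‰:
print(round(delta_product(0.2, -1.5), 3))  # → -1.3 (approximately 0.2 - 1.5)
```

This is an open-system approximation; in a closed culture the residual medium would grow isotopically heavier as light Cu is consumed.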
Geochemistry
The concentration of Cu in bulk silicate Earth is ~30 ppm, slightly less than its average concentration (~72 ppm) in fresh mid-oceanic ridge basalt (MORB) glass. In nature, Cu commonly associates with Fe to form a variety of sulfides, as well as carbonates and hydroxides (e.g., chalcopyrite, chalcocite, cuprite and malachite). In mafic and ultramafic rocks, Cu tends to be concentrated in sulfidic materials. In freshwater, the predominant form of Cu is free Cu2+; in seawater, Cu complexes with carbonate ligands to form CuCO3 and Cu(CO3)22−.
Measurement
In order to measure Cu isotope ratios of various materials, several steps must be taken prior to the isotopic measurement in order to extract and purify copper. The first step in the analytical pipeline to measure Cu isotopes is to liberate Cu from its host material. Liberation should be quantitative, otherwise fractionation may be introduced at this step. Cu-containing rocks are generally dissolved with HF; biological materials are commonly digested with HNO3. Seawater samples must be concentrated due to the low (nM) concentrations of Cu in the ocean. The sample material is subsequently run through an anion-exchange column to isolate and purify Cu. This step can also introduce Cu isotope fractionation if Cu is not quantitatively recovered from the column. If samples are from seawater, other ions (e.g., Na+, Mg2+, Ca2+) must be removed in order to eliminate isobaric interferences during the isotope measurement. Prior to 1992, 65Cu/63Cu ratios were measured via thermal ionization mass spectrometry (TIMS). Today, Cu isotopic compositions are measured via multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS), which ionizes samples using inductively coupled plasma and introduces smaller errors than TIMS.
Natural copper isotopic variations
The field of Cu isotope biogeochemistry is still in a relatively early stage, so the Cu isotope compositions of materials in the environment are not well-documented. However, based on a compilation of measurements already made, it appears that Cu isotope ratios vary somewhat widely within and between environmental materials (e.g., plants, minerals, seawater, etc.), though as a whole, these ratios do not vary by more than ±10‰.
In humans
In human bodies, copper is an important constituent of many essential enzymes, including ceruloplasmin (which carries Cu and oxidizes Fe2+ in human plasma), cytochrome c oxidase, metallothionein and superoxide dismutase 1. Serum in human blood is typically 65Cu-depleted by ~0.8‰ relative to erythrocytes (i.e., red blood cells). In a study of 49 male and female blood donors, the average δ65Cu value of the donors' blood serum was -0.26 ± 0.40‰, while that of their erythrocytes was +0.56 ± 0.50‰. In a separate study, δ65Cu values of serum in 20 healthy patients ranged from -0.39 to +0.38‰, while the δ65Cu values of their erythrocytes ranged from +0.57 to +1.24‰. To balance Cu loss due to menstruation, a large portion of Cu in the blood of menstruating women comes from their liver. Due to fractionation associated with Cu transport from the liver to the blood, the total blood of pre-menopausal women is generally 65Cu-depleted relative to that of males and non-menstruating women. The δ65Cu values of healthy human liver tissue in 7 patients ranged from -0.45 to -0.11‰.
In the terrestrial environment
To first order, δ65Cu values in organisms are driven by the δ65Cu values of source materials. The δ65Cu values of various soils from different regions have been found to vary from -0.34 to +0.33‰ depending on the biogeochemical processes taking place in the soil and the ligands with which Cu complexes. Organic-rich soils generally have lighter δ65Cu values than mineral soils because the organic layers result from plant litter, which is isotopically light.
In plants, δ65Cu values vary between the different components (seeds, roots, stem and leaves). The δ65Cu values of the roots of rice, lettuce, tomato and durum wheat plants were found to be 0.5 to 1.0‰ 65Cu-depleted relative to their source, while their shoots were up to 0.5‰ lighter than the roots. Seeds appear to be the most isotopically light component of plants, followed by leaves, then stems.
Rivers sampled throughout the world have a range of dissolved δ65Cu values from +0.02 to +1.45‰. The average δ65Cu values of the Amazon, Brahmaputra and Nile rivers are 0.69, 0.64 and 0.58‰, respectively. The average δ65Cu value of the Chang Jiang river is 1.32‰, while that of the Missouri river is 0.13‰.
In rocks and minerals
In general, igneous, metamorphic and sedimentary processes do not appear to strongly fractionate Cu isotopes, while δ65Cu values of Cu minerals vary widely. The average Cu isotopic composition of bulk silicate Earth has been measured as 0.06 ± 0.20‰ based on 132 different terrestrial samples. MORBs and oceanic island basalts (OIBs) generally have homogenous Cu isotopic compositions that fall around 0‰, while arc and continental basalts have more heterogeneous Cu isotope compositions that range from -0.19 to +0.47‰. These Cu isotope ratios of basalts suggest that mantle partial melting imparts negligible Cu isotopic fractionation, while recycling of crustal materials leads to widely variable δ65Cu values. The Cu isotope compositions of copper-containing minerals vary over a wide range, likely due to alteration of the primary high-temperature deposits. In one study that investigated Cu isotopic compositions of various minerals from hydrothermal fields along the mid-Atlantic ridge, chalcopyrite from mafic igneous rocks had δ65Cu values of -0.1 to -0.2‰, while Cu minerals in black smokers (chalcopyrite, bornite, covellite and atacamite) exhibited a wider range of δ65Cu values from -1.0 to +4.0‰. Additionally, atacamite lining the outer rims of black smokers can be up to 2.5‰ heavier than chalcopyrite contained within the black smoker. δ65Cu values of Cu minerals (including chrysocolla, azurite, malachite, cuprite and native copper) in low-temperature deposits have been observed to vary widely over a range of -3.0 to +5.6‰.
In the marine environment
Cu is strongly cycled in the surface and deep ocean. In the deep ocean, Cu concentrations are ~5 nM in the Pacific and ~1.5 nM in the Atlantic. The deep/surface ratio of Cu in the ocean is typically <10, and vertical concentration profiles for Cu are roughly linear due to biological recycling and scavenging processes as well as adsorption to particles.
Due to equilibrium and biological processes that fractionate Cu isotopes in the marine environment, the bulk copper isotopic composition (δ65Cu = +0.6 to +1.5‰) is different from the δ65Cu values of the riverine input (δ65Cu = +0.02 to +1.45‰, with discharge-weighted average δ65Cu = +0.68‰) to the oceans. δ65Cu values of the surface layers of FeMn-nodules are fairly homogenous throughout the oceans (average = 0.31‰), suggesting low biological demand for Cu in the marine environment compared to that of Fe or Zn. Additionally, δ65Cu values in the Atlantic ocean do not markedly vary with depth, ranging from +0.56 to +0.72‰. However, Cu isotope compositions of material collected on sediment traps at depths of 1,000 and 2,500 m in the central Atlantic ocean show seasonal variation, with the heaviest δ65Cu values in the spring and summer seasons, suggesting seasonal preferential uptake of 63Cu by biological processes.
Equilibrium processes that fractionate Cu isotopes include high temperature ion exchange and redox speciation between mineral phases, and low temperature ion exchange between aqueous species or redox speciation between inorganic species. In riverine and marine environments, 65Cu/63Cu ratios are driven by preferential adsorption of 63Cu to particulate matter and preferential binding of 65Cu to organic complexes. As a net result, ocean sediments tend to be depleted in 65Cu relative to the bulk ocean. For example, the downcore δ65Cu values of a 760 cm sedimentary core taken from the Central Pacific ocean varied from -0.94 to -2.83‰, significantly lighter than the bulk ocean.
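The sense of this sediment-seawater partitioning can be framed as a two-reservoir isotope mass balance. In the sketch below only the +0.68‰ discharge-weighted riverine value comes from the text; the burial fraction and sediment δ65Cu are hypothetical:

```python
def complement_delta(d_total: float, f_a: float, d_a: float) -> float:
    """Isotope mass balance d_total = f_a*d_a + (1 - f_a)*d_b, solved for the
    complementary reservoir d_b (all deltas in ‰, f_a a mass fraction)."""
    return (d_total - f_a * d_a) / (1.0 - f_a)

# Hypothetical: if 20% of the riverine Cu input (δ65Cu = +0.68‰, from the
# text) is buried in isotopically light sediments at δ65Cu = -1.5‰, the
# remaining dissolved pool must be correspondingly heavy:
print(round(complement_delta(0.68, 0.2, -1.5), 3))  # → 1.225
```

Removing an isotopically light sink necessarily leaves the residual ocean 65Cu-enriched, consistent with the bulk seawater values quoted above.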
Applications of copper isotopes
Medicine
Due to its relatively short turnover time of ~6 weeks in the human body, Cu serves as an important indicator of cancer and other diseases that rapidly evolve. The serum of cancer patients contains significantly higher levels of Cu than that of healthy patients due to copper chelation by lactate, which is produced via anaerobic glycolysis by tumor cells. These imbalances in Cu homeostasis are reflected isotopically in the serum and organ tissues of patients with various types of cancer, where the serum of cancer patients is generally 65Cu-depleted relative to the serum of healthy patients, while organ tumors are generally 65Cu-enriched. In one study, the blood components of patients with hepatocellular carcinomas (HCC) was found to be, on average, depleted in 65Cu by 0.4‰ relative to the blood of non-cancer patients. In particular, the δ65Cu values of the serum in patients with HCC ranged from -0.66 to +0.47‰ (compared to serum δ65Cu values of -0.39 to +0.38‰ in matched control patients), and the δ65Cu values of the erythrocytes in the HCC patients ranged from -0.07 to +0.92‰ (compared to erythrocyte δ65Cu values of +0.57 to +1.24‰ in matched control patients). The liver tumor tissues in the HCC patients were 65Cu-enriched relative to healthy liver tissue in the same patients (δ65Culiver, HCC = -0.02 to +0.43‰; δ65Culiver, healthy = -0.45 to -0.11‰), and the magnitude of 65Cu-enrichment mirrored that of the 65Cu-depletion observed in the cancer patients' serum. Though our understanding of how copper isotopes are fractionated during cancer physiologies is still limited, it is clear that copper isotope ratios may serve as a powerful biomarker of cancer presence and progression.
Zinc
Stable isotopes and natural abundances
Zinc has five stable isotopes: 64Zn (natural abundance ≈ 49.2%), 66Zn (≈ 27.7%), 67Zn (≈ 4.0%), 68Zn (≈ 18.5%) and 70Zn (≈ 0.6%).
The isotopic composition of Zn is reported in delta notation (in ‰): δxZn = [(xZn/64Zn)sample / (xZn/64Zn)standard − 1] × 1000,
where xZn is a Zn isotope other than 64Zn (commonly either 66Zn or 68Zn). Standard reference materials used for Zn isotope measurements are JMC 3-0749C, NIST-SRM 683 or NIST-SRM 682.
Chemistry
Because it has just one valence state (Zn2+), zinc is a redox-inert element. The electronic configuration of Zn0 is [Ar]3d104s2, and that of Zn2+ is [Ar]3d10.
Biology
Zinc is present in almost 3,000 human proteins, and thus is essential for nearly all cellular functions. Zn is also a key constituent of enzymes involved in cell regulation. Consistent with its ubiquitous presence, total cellular Zn concentrations are typically very high (~200 μM), while the concentrations of free Zn ions in the cytoplasms of cells can be as low as a few hundred picomolar, maintained within a narrow range to avoid deficiency and toxicity. One feature of Zn that makes it so critical in cellular biology is its flexibility in coordination to different numbers and types of ligands. Zn can coordinate with anywhere between three and six N-, O- and S-containing ligands (such as histidine, glutamic acid, aspartic acid and cysteine), resulting in a large number of possible coordination chemistries. Zn tends to bind to metal sites of proteins with relatively high affinities compared to other metal ions which, aside from its important functions in enzymatic reactions, partly explains its ubiquitous presence in cellular enzymes.
Examples of zinc-based enzymes
Zn is present in the active sites of most hydrolytic enzymes, and is used as an electrophilic catalyst to activate water molecules that ultimately hydrolyze chemical bonds. Examples of zinc-based enzymes include superoxide dismutase (SOD), metallothionein, carbonic anhydrase, Zn finger proteins, alcohol dehydrogenase and carboxypeptidase.
Biological fractionation
Relatively little is known about isotopic fractionation of zinc by biological processes, but several studies have elucidated that Zn isotopes fractionate during surface adsorption, intracellular uptake processes and speciation. Many organisms, including certain species of fish, plants and marine phytoplankton, have both high- and low-affinity Zn transport systems, which appear to fractionate Zn isotopes differently. A study by John et al. observed apparent isotope effects associated with Zn uptake by the marine diatom Thalassiosira oceanica of -0.2‰ for high-affinity uptake (at low Zn concentrations) and -0.8‰ for low-affinity uptake (at high Zn concentrations). Additionally, in this study, unwashed cells were enriched in 65Zn, indicating preferential adsorption of 65Zn to the extracellular surfaces of T. oceanica. Results from John et al. demonstrating apparent discrimination against the heavy isotope (66Zn) during uptake conflict with results by Gélabert et al. in which marine phytoplankton and freshwater periphytic organisms preferentially took up 66Zn from solution. The latter authors explained these results as due to a preferential partitioning of 66Zn into a tetrahedrally coordinated structure (i.e., with carboxylate, amine or silanol groups on or inside the cell) over an octahedral coordination with six water molecules in the aqueous phase, consistent with quantum mechanical predictions. Kafantaris and Borrok grew model organisms B. subtilis, P. mendocina and E. coli, as well as a natural bacterial consortium collected from soil, on high and low concentrations of Zn. In the high [Zn] condition, the average fractionation of Zn isotopes imparted by cellular surface adsorption was +0.46‰ (i.e., 66Zn was preferentially adsorbed), while fractionation upon intracellular incorporation varied from -0.2 to +0.5‰ depending on the bacterial species and growth phase.
Empirical models of the low [Zn] condition estimated larger Zn isotope fractionation factors for surface adsorption ranging from +2 to +3‰. Overall, Zn isotope ratios in microbes appear to be driven by a number of complex factors including surface interactions, bacterial metal metabolism and metal speciation, but by understanding the relative contributions of these factors to Zn isotope signals, one can use Zn isotopes to investigate metal-binding pathways operating in natural communities of microbes.
Geochemistry
The concentration of Zn in bulk silicate Earth is ~55 ppm, while its average concentration in fresh mid-oceanic ridge basalt (MORB) glass is ~87 ppm. Like Cu, Zn commonly associates with Fe to form a variety of zinc sulfide minerals such as sphalerite. Additionally, Zn associates with carbonates and hydroxides to form numerous diverse minerals (e.g., smithsonite, sweetite, etc.). In mafic and ultramafic rocks, Zn tends to concentrate in oxides such as spinel and magnetite. In freshwater, Zn predominantly complexes with water to form an octahedrally coordinated aqua ion, Zn(H2O)62+. In seawater, Cl− ions replace up to four water molecules in the Zn aqua ion, forming a series of chloro complexes (ZnCl+ through ZnCl42−).
Measurement
The analytical pipeline for preparation of sample material for Zn isotope measurements is similar to that of Cu, consisting of digestion of host material or concentration from seawater, isolation and purification via anion-exchange chromatography, removal of ions of interfering mass (in particular, 64Ni) and isotope measurement via MC-ICP-MS (see Copper Isotope Measurement section for more details).
Natural zinc isotopic variations
As with Cu, the field of Zn isotope biogeochemistry is still in a relatively early stage, so the Zn isotope compositions of materials in the environment are not well-documented. However, based on a compilation of some reported measurements, it appears that Zn isotope ratios do not vary widely among environmental materials (e.g., plants, minerals, seawater, etc.), as δ66Zn values of materials typically fall within a range of -1 to +1‰.
In humans
Zn isotope ratios vary between individual blood components, bones and the different organs in humans, though in general, δ66Zn values fall within a narrow range. In the blood of healthy individuals, the Zn isotopic composition of erythrocytes is typically ~0.3‰ lighter than that of serum, and no significant differences in erythrocyte or serum δ66Zn values exist between men and women. For example, in the blood of 49 healthy blood donors, the average erythrocyte δ66Zn value was +0.44 ± 0.33‰, while that of serum was +0.17 ± 0.26‰. In a separate study on 29 donors, a similar average δ66Zn value of +0.29 ± 0.27‰ was obtained for the patients' serum. Additionally, in a small sample set of volunteers, whole blood δ66Zn values were ~+0.15‰ higher for vegetarians than for omnivores, suggesting diet plays an important role in driving Zn isotope compositions in the human body.
In the terrestrial environment
Zn isotope ratios vary on small scales throughout the terrestrial biosphere. Zn is released into soils during mineral weathering, and isotopes of Zn fractionate upon interaction with mineral and organic components in the soil. In 5 soil profiles collected from Iceland (all derived from the same parent basalt), soil δ66Zn values varied from +0.10 to +0.35‰, and the organic-rich layers were 66Zn-depleted relative to the mineral-rich layers, likely due to contribution by isotopically light organic matter and Zn loss by leaching.
Isotopic discrimination of Zn varies in different components of higher plants, likely due to the various processes involved in Zn uptake, binding, transport, diffusion, speciation and compartmentalization. For example, Weiss et al. observed heavier δ66Zn values in the roots of several plants (rice, lettuce and tomato) relative to the bulk solution in which the plants were grown, and the shoots of those plants were 66Zn-depleted relative to both their roots and bulk solution. Furthermore, Zn isotopes partition differently between different Zn-ligand complexes, so the form of Zn incorporated by organisms in the terrestrial biosphere plays a role in driving Zn isotope compositions of the organisms. In particular, based on ab initio calculations, Zn-phosphate complexes are expected to be isotopically heavier than Zn-citrates, Zn-malates and Zn-histidine complexes by 0.6 to 1‰.
The discharge- and [Zn]-weighted average δ66Zn value of rivers throughout the world is +0.33‰. In particular, the average δ66Zn values of the Kalix and Chang Jiang rivers are +0.64 and +0.56‰, respectively. The Amazon, Missouri and Brahmaputra rivers have average δ66Zn values near +0.30‰, and the average δ66Zn value of the Nile river is +0.21‰.
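The discharge- and concentration-weighting can be sketched as follows. The river names match the text, but the discharge and [Zn] figures are hypothetical placeholders, so the computed mean is illustrative only:

```python
# Discharge- and [Zn]-weighted mean d66Zn of rivers (sketch).
# Each river contributes in proportion to its Zn flux = discharge x [Zn].
# Discharges and concentrations below are hypothetical placeholders.

rivers = [
    # (name, discharge km3/yr, [Zn] nM, d66Zn per mil)
    ("Kalix",          10.0,  5.0, 0.64),
    ("Chang Jiang",   900.0, 20.0, 0.56),
    ("Amazon",       6600.0,  3.0, 0.30),
    ("Nile",           80.0,  8.0, 0.21),
]

def weighted_mean_d66zn(rows):
    fluxes = [q * c for _, q, c, _ in rows]
    return sum(f * d for f, (_, _, _, d) in zip(fluxes, rows)) / sum(fluxes)
```

The weighted mean necessarily falls between the lightest and heaviest individual river values, with the largest Zn fluxes dominating the result.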
In rocks and minerals
In general, δ66Zn values of various rocks and minerals do not appear to significantly vary. The δ66Zn value of bulk silicate Earth (BSE) is +0.28 ± 0.05‰. Fractionation of Zn isotopes by igneous processes is generally insignificant, and δ66Zn values of basalt fall within the range of +0.2 to +0.3‰, encompassing the value for BSE. δ66Zn values of clay minerals from diverse environments and of diverse ages have been found to fall within the same range as basalts, suggesting negligible fractionation between the basaltic precursors and sedimentary materials. Carbonates appear to be more 66Zn-enriched than other sedimentary and igneous rocks. For example, the δ66Zn value of a limestone core taken from the Central Pacific was +0.6‰ at the surface and increased to +1.2‰ with depth. The Zn isotopic compositions of various ores are not well-characterized, but smithsonites and sphalerites (Zn carbonates and Zn sulfides, respectively) collected from various localities in Europe had δ66Zn values ranging from -0.06 to +0.69‰, with smithsonite potentially ~0.3‰ heavier than sphalerite.
In the marine environment
Zn is an essential biological nutrient in the oceans, and its concentration is largely controlled by uptake by phytoplankton and remineralization. In addition to its critical role in many metalloenzymes (see Zinc Biology section), Zn is an important component of the carbonate shells of foraminifera and siliceous frustules in diatoms. The main inputs of Zn to the ocean are thought to be from rivers and dust. In some photic zones in the ocean, Zn is a limiting nutrient for phytoplankton, and thus its concentration in surface waters serves as one control on marine primary productivity. Zn concentrations are extremely low in the surface ocean (<0.1 nM) but are maximal at depth (~2 nM in the deep Atlantic; ~10 nM in the deep Pacific), indicating a deep regeneration cycle. The deep/surface ratio of Zn is typically on the order of 100, significantly larger than that observed for Cu.
A multitude of complex processes fractionate Zn isotopes in the marine environment. As seen with copper isotopes, the bulk isotopic composition of zinc in the oceans (δ66Zn = +0.5‰) is heavier than that of the riverine input (δ66Zn = +0.3‰), reflecting equilibrium, biological and other processes that affect Zn isotope ratios in the ocean. In the surface ocean, phytoplankton preferentially take up 64Zn, and as a result have average δ66Zn values of ~+0.16‰ (i.e., 0.34‰ lighter than the bulk ocean). This preferential removal of 64Zn by photosynthetic marine organisms in the photic zone is most prominent in the spring and summer seasons when primary productivity is highest, and the seasonal variability of Zn isotope ratios is reflected in the δ66Zn values of settling materials, which are heavier (e.g., by ~+0.20‰ in the Atlantic Ocean) during spring and summer than during the colder seasons. Additionally, the surface layers of Fe-Mn nodules are 66Zn-enriched at high latitudes (average δ66Zn = +1‰), while δ66Zn values of low-latitude samples are smaller and more variable (spanning +0.5 to +1‰). This observation has been interpreted as due to high levels of Zn consumption and preferential uptake of 64Zn above the seasonal thermocline at high latitudes during warmer seasons, and transfer of this heavy δ66Zn signal to the settling sedimentary Fe-Mn hydroxides.
Sources and sinks for Zn isotopes are further highlighted in the vertical profile of 66Zn/64Zn in the water column. In the upper 2,000 m of the Atlantic Ocean, δ66Zn values are highly variable near the surface (δ66Zn = +0.05 to +0.33‰) due to biological uptake and other surface processes, then gradually increase to ~+0.50‰ at 2,000 m depth. Potential sinks for light Zn isotopes, which enrich the residual bulk Zn isotope ratios in the ocean, include binding to and burial with sinking particulate matter, as well as Zn sulfide precipitation in buried sediments. As a result of preferential burial of 64Zn over the heavier Zn isotopes, sediments in the ocean are generally isotopically lighter than bulk seawater. For example, δ66Zn values in 8 sedimentary cores from three different continental margins were depleted in 66Zn relative to the bulk ocean (δ66Zncores = -0.15 to +0.2‰), and furthermore the vertical profiles of δ66Zn values in the cores showed no downcore isotopic variability, suggesting diagenesis does not significantly fractionate Zn isotopes.
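The inference that preferential burial of 64Zn leaves seawater heavier than its riverine input can be illustrated with a steady-state isotope mass balance: at steady state, the flux-weighted δ66Zn of all sinks must equal that of the input, so if each sink is offset from seawater by Δ = δ_sink − δ_sw, then δ_sw = δ_input − Σ fᵢΔᵢ. The sink fractions and offsets below are hypothetical, chosen only so the sketch reproduces the ~+0.5‰ bulk-ocean value quoted above:

```python
# Steady-state isotope mass balance for marine Zn (illustrative sketch).
# Sink fractions f_i (summing to 1) and offsets Delta_i = d_sink - d_sw
# are hypothetical; only d_input is taken from the text.

d_input = 0.30  # riverine d66Zn, per mil
sinks = [
    # (name, flux fraction f_i, offset Delta_i relative to seawater)
    ("particle scavenging", 0.6, -0.150),  # prefers 64Zn -> negative offset
    ("sulfide burial",      0.4, -0.275),
]

# d_sw = d_input - sum(f_i * Delta_i): light sinks push seawater heavy
d_seawater = d_input - sum(f * off for _, f, off in sinks)
assert d_seawater > d_input
```

With these illustrative numbers the balance yields δ66Zn(seawater) = +0.50‰, matching the bulk-ocean value cited above.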
Applications of zinc isotopes
Medicine
Zn isotopes may be useful as a tracer for breast cancer. Relative to non-cancerous patients, breast cancer patients are known to have significantly higher concentrations of Zn in their breast tissue, but lower concentrations in their blood serum and erythrocytes, due to overexpression of Zn transporters in breast cancer cells. Consistent with these body-wide shifts in Zn homeostasis, δ66Zn values in breast cancer tumors of 5 patients were found to be anomalously light (varying from -0.9 to -0.6‰) relative to healthy tissue in 3 breast cancer patients and 1 healthy control (δ66Zn = -0.5 to -0.3‰). In this study, δ66Zn values of blood and serum were not found to be significantly different between cancerous and non-cancerous patients, suggesting an unknown isotopically heavy pool of Zn must exist in cancer patients. Though results from this study are promising regarding the use of Zn isotope ratios as a biomarker for breast cancer, a mechanistic understanding of how Zn isotopes fractionate during tumor formation in breast cancer is still lacking. Fortunately, increasing attention is being devoted to the use of stable metal isotopes as tracers of cancer and other diseases, and the usefulness of these isotope systems in medical applications will become more apparent in the next few decades.
Molybdenum
Uranium
References
Biogeochemistry | Trace metal stable isotope biogeochemistry | [
"Chemistry",
"Environmental_science"
] | 13,355 | [
"Environmental isotopes",
"Environmental chemistry",
"Isotopes",
"Chemical oceanography",
"Biogeochemistry"
] |
78,045,164 | https://en.wikipedia.org/wiki/Neknampur%20Lake | Neknampur Lake, also known as Ibrahim Bagh Cheruvu, is a lake in Hyderabad, India. It was once part of a water reservoir network that was used for irrigation and providing drinking water in the surrounding areas.
History
The lake was first dug up in the late 16th century by Ibrahim Qutb Shah, the fourth ruler of Golconda, and later flooded by his grandson Abdullah Qutb Shah. The construction was entrusted to Neknam Khan, one of Shah's courtiers. Rather than using water from the adjacent Musi, Neknam Khan commissioned channels to fill the lake from water bodies behind the Golconda Fort. Neknampur Lake is one of the three major lakes that were created during the reign of Quli Qutub Shah alongside Ibrahimpatnam Lake and Hussainsagar. There was a proposal by Greater Hyderabad Municipal Corporation to use the lake to dump sewage from surrounding housing colonies. The lake is today divided into two parts known as Chinna Cheruvu, which is smaller, and Pedda Cheruvu, which is larger. The Chinna Cheruvu has been partially restored and converted into a scenic spot whereas the Pedda Cheruvu continues to struggle with pollution. The lake is polluted with various chemicals and also used as a garbage dump by the residential colonies surrounding it. Encroachments and illegal structures surrounding the lake were demolished by government authorities. However, these structures are being illegally rebuilt by the encroachers.
Restoration efforts
The lake was gradually occupied by land grabbers and converted into a dump yard for construction debris, garbage, sewage discharge and covered in water hyacinth. At one stage, the surface area of the lake was less than . Efforts to restore the lake were undertaken in 2016 with the help of NGOs based in Hyderabad. The restoration and rejuvenation of the lake included cleaning the lake and floating wetland treatment to tackle the growth of water hyacinth. Contaminants were removed using plants and with the use of microorganisms. NITI Aayog has recognised these efforts and "it has been identified as a role model for 'best restoration practices' in the country." Neknampur Lake restoration "has been recognised as a role model in the 'watershed development' category along with four other projects" in India. According to NITI Aayog, there has been a 90% reduction in Biochemical Oxygen Demand (BOD) of the lake. The Centre for Science and Environment (CSE) has also recognised Neknampur Lake "as the best model of lake restoration in India."
Reference
Nature conservation in India
Lakes of Hyderabad, India
Water conservation in India
Ecological restoration
Water pollution in India | Neknampur Lake | [
"Chemistry",
"Engineering"
] | 555 | [
"Ecological restoration",
"Environmental engineering"
] |
78,052,150 | https://en.wikipedia.org/wiki/EndoMac%20progenitor%20cell | EndoMac progenitor cells are a type of endothelial-macrophage progenitor cells, more specifically a population of hemangioblasts from postnatal tissue. They were discovered by Australian researchers in the aorta of mice.
References
Cell biology
Stem cells | EndoMac progenitor cell | [
"Biology"
] | 59 | [
"Cell biology"
] |
76,656,706 | https://en.wikipedia.org/wiki/Jong-Soo%20Rhyee | Jong-Soo Rhyee is a South Korean physicist and materials scientist. He is a professor in the Department of Applied Physics at the Applied Science College of Kyung Hee University and serves as the Outside Director at KPT, the Representative CEO of V-memory, and the CTO of R-Materials in South Korea.
Rhyee's research spans domains of materials science, encompassing magnetic and energy materials, crystal growth, thermoelectric materials, high thermal conductivity materials, magneto-caloric effect materials, unconventional properties of oxides and intermetallics, and superconductivity. He is the recipient of the 2009 Young Investigator Award from the International Thermoelectric Society and the 2018 IAAM Scientist Medal from the International Association of Advanced Materials.
Rhyee holds 19 Korean patents along with 32 international patents.
Education and early career
Rhyee obtained his Bachelor's in Physics from Chung-buk National University in 1998, and a Master's in Experimental Solid-State Physics from Pohang University of Science and Technology (POSTECH) in 2000 under advisor Sung Ik Lee. He pursued a Ph.D. in Magnetic Materials at Gwangju Institute of Science and Technology (GIST) from 2000 to 2005, researching Hexaboride compounds under advisor Beong Ki Cho.
Career
Rhyee worked as a Postdoc Researcher in the Crystal Growth group at the Max Planck Institute for Solid State Research in Germany from April 2006 to April 2007 and then served as an R&D Staff Researcher at the Materials Research Lab at the Samsung Advanced Institute of Technology (SAIT) from May 2007 to August 2010. He moved into academia as an assistant professor in the Department of Applied Physics of the Applied Science College at Kyung Hee University in South Korea in 2010, becoming associate professor in 2014 and professor in 2019.
While in the role of associate professor, Rhyee concurrently held the position of department chair for the Department of Applied Physics from March 2017 to February 2019, and as the Vice Dean of the Applied Science College at Kyung Hee University from March 2018 to February 2019. He has been serving as the Outside Director at KPT since June 2022, as well as the CTO at R-Materials in South Korea since January 2023, and has also been acting as the Representative CEO of V-memory in South Korea since January 2020.
Research
Rhyee's research has focused on developing new materials in fields such as magnetism, superconductivity, and energy materials. He has investigated crystal growth in intermetallic and oxide compounds, and studied thermoelectric materials for waste heat recovery and high thermal conductivity materials for electronic applications. Additionally, his research has encompassed magneto-caloric effect materials for solid-state cooling, unconventional properties of oxides and intermetallics, and quasi-one-dimensional electronic transport. He has also explored soft magnetic materials, topological and Weyl semimetallic systems, and superconductivity.
Magnetism and thermoelectric research
During his time at SAIT's Materials Research Center, Rhyee developed the high-performance thermoelectric material In4Se3−δ, published in Nature in 2009. This research proposed an approach to enhance ZT thermoelectric materials through Peierls distortion. He provided both experimental evidence and theoretical insights demonstrating that alloying SnTe with Ca significantly improved its transport properties, leading to a ZT of 1.35 at 873 K, the highest reported ZT value for singly doped SnTe materials. The study predicted approximately 10% efficiency for high-temperature thermoelectric power generation using SnTe-based materials, assuming a 400 K temperature difference. Furthermore, his work enhanced the thermoelectric properties of In4Se3–xCl0.03 bulk crystals through Ca alloying, and showed that intercalation of Cu nanoparticles between Te layers in Bi2Te3 transforms its native p-type character to n-type, reducing thermal conductivity and enhancing thermoelectric performance with a figure of merit (ZT) of 1.15 at approximately 300 K. His research also addressed the development of high-mobility transistors using CVD-grown MoSe2 films for applications like high-resolution displays.
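The figure of merit cited here is the standard dimensionless ZT = S²σT/κ (Seebeck coefficient S, electrical conductivity σ, thermal conductivity κ), and generator efficiency is commonly estimated from ZT with the classic single-leg formula. A sketch — the material parameters are hypothetical, and using the hot-side ZT = 1.35 rather than a temperature-averaged value slightly overestimates the ~10% efficiency quoted in the text:

```python
# Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa and the
# standard single-leg generator-efficiency estimate (sketch).
import math

def zt(seebeck, sigma, kappa, temperature):
    """Dimensionless figure of merit: S^2 * sigma * T / kappa."""
    return seebeck ** 2 * sigma * temperature / kappa

def max_efficiency(zt_avg, t_hot, t_cold):
    """Classic maximum efficiency of a thermoelectric generator."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt_avg)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# ZT = 1.35 at T_hot = 873 K with a 400 K temperature difference gives an
# efficiency on the order of 10%, consistent with the prediction in the text
eta = max_efficiency(1.35, 873.0, 473.0)
assert 0.08 < eta < 0.14
```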
Within his magnetism and thermoelectric research, Rhyee has explored unconventional magnetism in boride and intermetallic compounds, with a focus on magnetic polaronic transport and correlated properties. He examined the link between topological states and thermoelectricity, discovering that the topological phase transition in Dirac semimetals boosts thermoelectric performance. Further investigations revealed that selective charge Anderson localization is a novel avenue for enhancing thermoelectricity, yielding a ZT value of 2.0 in n-type thermoelectric power generation. In a collaborative work, he presented a novel magnetic field-induced type II Weyl semimetallic state in the Shastry-Sutherland lattice, characterized by non-trivial Berry phase, magnetic field-induced Weyl nodes and spin chirality, chiral anomaly, anomalous magnetoconductivity, and demonstrated topological phase evolution.
Awards and honors
2009 – Young Investigator Award, by International Thermoelectric Society
2018 – IAAM Scientist Medal, International Association of Advanced Materials
Selected articles
Rhyee, J. S., Lee, K. H., Lee, S. M., Cho, E., Kim, S. I., Lee, E., ... & Kotliar, G. (2009). Peierls distortion as a route to high thermoelectric performance in In4Se3-δ crystals. Nature, 459(7249), 965–968.
Rhyee, J. S., Ahn, K., Lee, K. H., Ji, H. S., & Shim, J. H. (2011). Enhancement of the Thermoelectric Figure‐of‐Merit in a Wide Temperature Range in In4Se3–xCl0. 03 Bulk Crystals. Advanced Materials, 23(19), 2191–2194.
Han, M. K., Ahn, K., Kim, H., Rhyee, J. S., & Kim, S. J. (2011). Formation of Cu nanoparticles in layered Bi 2 Te 3 and their effect on ZT enhancement. Journal of Materials Chemistry, 21(30), 11365–11370.
Al Rahal Al Orabi, R., Mecholsky, N. A., Hwang, J., Kim, W., Rhyee, J. S., Wee, D., & Fornari, M. (2016). Band degeneracy, low thermal conductivity, and high thermoelectric figure of merit in SnTe–CaTe alloys. Chemistry of Materials, 28(1), 376–384.
Rhyee, J. S., Kwon, J., Dak, P., Kim, J. H., Kim, S. M., Park, J., ... & Kim, S. (2016). High‐mobility transistors based on large‐area and highly crystalline CVD‐grown MoSe2 films on insulating substrates. Advanced Materials, 28(12), 2316–2321.
References
South Korean physicists
South Korean scientists
Materials scientists and engineers
Chungbuk National University alumni
Pohang University of Science and Technology alumni
Gwangju University alumni
Academic staff of Kyung Hee University
Year of birth missing (living people)
Living people | Jong-Soo Rhyee | [
"Materials_science",
"Engineering"
] | 1,603 | [
"Materials scientists and engineers",
"Materials science"
] |
76,671,176 | https://en.wikipedia.org/wiki/JANNAF | The JANNAF Interagency Propulsion Committee (JANNAF IPC, or simply JANNAF) is a joint-agency committee chartered by the USDOD and NASA. JANNAF is composed of two committees: the Technical Committee and the Programmatic & Industrial Base (PIB) Committee. The Technical Committee is itself divided into subcommittees focused on specific technology areas of mutual interest to the DoD and NASA. The JANNAF PIB Committee is a forum for the discussion of strategic program planning and industrial base capabilities in the area of rocket propulsion and energetic systems and components for military and civil space, tactical and strategic missiles, and large gun systems.
JANNAF was re-chartered on June 19, 2014, with the signatures of Frank Kendall III, Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)), and Robert Lightfoot Jr., Associate Administrator of the National Aeronautics and Space Administration (NASA).
References
External links
NIST-JANAF Thermochemical Tables
JANNAF/GL-2016-0001 Simulation Credibility - Advances in Verification, Validation, and Uncertainty Quantification
JANNAF DRAFT: Test and Evaluation Guideline for Liquid Rocket Engines
United States military associations
Aerospace engineering organizations
1945 establishments in Maryland | JANNAF | [
"Engineering"
] | 255 | [
"Aeronautics organizations",
"Aerospace engineering organizations",
"Aerospace engineering"
] |
58,383,744 | https://en.wikipedia.org/wiki/Separation%20principle%20in%20stochastic%20control | The separation principle is one of the fundamental principles of stochastic control theory, which states that the problems of optimal control and state estimation can be decoupled under certain conditions. In its most basic formulation it deals with a linear stochastic system
with a state process , an output process and a control , where is a vector-valued Wiener process, is a zero-mean Gaussian random vector independent of , , and , , , , are matrix-valued functions which generally are taken to be continuous of bounded variation. Moreover, is nonsingular on some interval . The problem is to design an output feedback law which maps the observed process to the control input in a nonanticipatory manner so as to minimize the functional
where denotes expected value, prime () denotes transpose, and and are continuous matrix functions of bounded variation, is positive semi-definite and is positive definite for all . Under suitable conditions, which need to be properly stated, the optimal policy can be chosen in the form
where is the linear least-squares estimate of the state vector obtained from the Kalman filter
where is the gain of the optimal linear-quadratic regulator obtained by taking and deterministic, and where is the Kalman gain. There is also a non-Gaussian version of this problem (to be discussed below) where the Wiener process is replaced by a more general square-integrable martingale with possible jumps. In this case, the Kalman filter needs to be replaced by a nonlinear filter providing an estimate of the (strict sense) conditional mean
where
is the filtration generated by the output process; i.e., the family of increasing sigma fields representing the data as it is produced.
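The structure of the optimal policy — an LQR gain computed as if the state were known, fed by a Kalman estimate — can be illustrated with a discrete-time scalar analogue of the system above. Everything here (dynamics, weights, noise levels) is an illustrative stand-in for the continuous-time formulation, not the formulation in the text:

```python
# Scalar discrete-time LQG sketch of the separation principle:
# design the LQR gain on the deterministic problem, estimate the state
# with a Kalman filter, and feed back u = -K * xhat.
import random

a, b, c = 1.2, 1.0, 1.0      # unstable plant x' = a x + b u + w, y = c x + v
q, r = 1.0, 1.0              # state and control weights in the cost
w_var, v_var = 0.01, 0.04    # process / measurement noise variances

# LQR gain K from the discrete algebraic Riccati equation (fixed-point iteration)
p = q
for _ in range(500):
    K = a * p * b / (r + b * p * b)
    p = q + a * p * a - a * p * b * K

random.seed(0)
x, xhat, P = 0.5, 0.0, 1.0   # true state, state estimate, estimate variance
for _ in range(200):
    u = -K * xhat                                  # certainty-equivalent control
    x = a * x + b * u + random.gauss(0.0, w_var ** 0.5)
    y = c * x + random.gauss(0.0, v_var ** 0.5)    # noisy observation
    # Kalman filter: predict, then correct with the innovation y - c * xpred
    xpred, Ppred = a * xhat + b * u, a * P * a + w_var
    G = Ppred * c / (c * Ppred * c + v_var)        # Kalman gain
    xhat = xpred + G * (y - c * xpred)
    P = (1.0 - G * c) * Ppred

assert abs(a - b * K) < 1.0   # closed loop is stable even though |a| > 1
```

Note the decoupling the separation principle asserts: the Kalman gain G is computed without reference to the cost weights q and r, while the control gain K is computed without reference to the noise variances.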
In the early literature on the separation principle it was common to allow as admissible controls all processes that are adapted to the filtration . This is equivalent to allowing all non-anticipatory Borel functions as feedback laws, which raises the question of existence of a unique solution to the equations of the feedback loop. Moreover, one needs to exclude the possibility that a nonlinear controller extracts more information from the data than what is possible with a linear control law.
Choices of the class of admissible control laws
Linear-quadratic control problems are often solved by a completion-of-squares argument. In our present context we have
in which the first term takes the form
where is the covariance matrix
The separation principle would now follow immediately if were independent of the control. However this needs to be established.
The state equation can be integrated to take the form
where is the state process obtained by setting and is the transition matrix function. By linearity, equals
where . Consequently,
but we need to establish that does not depend on the control. This would be the case if
where is the output process obtained by setting . This issue was discussed in detail by Lindquist. In fact, since the control process is in general a nonlinear function of the data and thus non-Gaussian, then so is the output process . To avoid these problems one might begin by uncoupling the feedback loop and determine an optimal control process in the class of stochastic processes that are adapted to the family of sigma fields. This problem, where one optimizes over the class of all control processes adapted to a fixed filtration, is called a stochastic open loop (SOL) problem. It is not uncommon in the literature to assume from the outset that the control is adapted to ; see, e.g., Section 2.3 in Bensoussan, also van Handel and Willems.
In Lindquist 1973 a procedure was proposed for how to embed the class of admissible controls in various SOL classes in a problem-dependent manner, and then construct the corresponding feedback law. The largest class of admissible feedback laws consists of the non-anticipatory functions such that the feedback equation has a unique solution and the corresponding control process is adapted to .
Next, we give a few examples of specific classes of feedback laws that belong to this general class, as well as some other strategies in the literature to overcome the problems described above.
Linear control laws
The admissible class of control laws could be restricted to contain only certain linear ones as in Davis. More generally, the linear class
where is a deterministic function and is an kernel, ensures that is independent of the control. In fact, the Gaussian property will then be preserved, and will be generated by the Kalman filter. Then the error process is generated by
which is clearly independent of the choice of control, and thus so is .
Lipschitz-continuous control laws
Wonham proved a separation theorem for controls in the class , even for a more general cost functional than J(u). However, the proof is far from simple and there are many technical assumptions. For example, must square and have a determinant bounded away from zero, which is a serious restriction. A later proof by Fleming and Rishel is considerably simpler. They also prove the separation theorem with quadratic cost functional for a class of Lipschitz continuous feedback laws, namely , where is a non-anticipatory function of which is Lipschitz continuous in this argument. Kushner proposed a more restricted class , where the modified state process is given by
leading to the identity .
Imposing delay
If there is a delay in the processing of the observed data so that, for each , is a function of , then , , see Example 3 in Georgiou and Lindquist. Consequently, is independent of the control. Nevertheless, the control policy must be such that the feedback equations have a unique solution.
Consequently, the problem with possibly control-dependent sigma fields does not occur in the usual discrete-time formulation. However, a procedure used in several textbooks to construct the continuous-time as the limit of finite difference quotients of the discrete-time , which does not depend on the control, is circular or at best incomplete; see Remark 4 in Georgiou and Lindquist.
Weak solutions
An approach introduced by Duncan and Varaiya and Davis and Varaiya, see also Section 2.4 in Bensoussan
is based on weak solutions of the stochastic differential equation. Considering such solutions of
we can change the probability measure (that depends on ) via a Girsanov transformation so that
becomes a new Wiener process, which (under the new probability measure) can be assumed to be unaffected by the control. The question of how this could be implemented in an engineering system is left open.
Nonlinear filtering solutions
Although a nonlinear control law will produce a non-Gaussian state process, it can be shown, using nonlinear filtering theory (Chapter 16.1 in Liptser and Shiryaev), that the state process is conditionally Gaussian given the filtration . This fact can be used to show that is actually generated by a Kalman filter (see Chapters 11 and 12 in Liptser and Shiryaev). However, this requires quite a sophisticated analysis and is restricted to the case where the driving noise is a Wiener process.
Additional historical perspective can be found in Mitter.
Issues on feedback in linear stochastic systems
At this point it is suitable to consider a more general class of controlled linear stochastic systems that also covers systems with time delays, namely
with a stochastic vector process which does not depend on the control. The standard stochastic system is then obtained as a special case where , and . We shall use the short-hand notation
for the feedback system, where
is a Volterra operator.
In this more general formulation the embedding procedure of Lindquist defines the class of admissible feedback laws as the class of non-anticipatory functions such that the feedback equation has a unique solution and is adapted to .
In Georgiou and Lindquist a new framework for the separation principle was proposed. This approach considers stochastic systems as well-defined maps between sample paths rather than between stochastic processes and allows us to extend the separation principle to systems driven by martingales with possible jumps. The approach is motivated by engineering thinking where systems and feedback loops process signals, and not stochastic processes per se or transformations of probability measures. Hence the purpose is to create a natural class of admissible control laws that make engineering sense, including those that are nonlinear and discontinuous.
The feedback equation has a unique strong solution if there exists a non-anticipating function such that satisfies the equation with probability one and all other solutions coincide with with probability one. However, in the sample-wise setting, more is required, namely that such a unique solution exists and that holds for all , not just almost all. The resulting feedback loop is deterministically well-posed in the sense that the feedback equations admit a unique solution that causally depends on the input for each input sample path.
In this context, a signal is defined to be a sample path of a stochastic process with possible discontinuities. More precisely, signals will belong to the Skorohod space , i.e., the space of functions which are continuous on the right and have a left limit at all points (càdlàg functions). In particular, the space of continuous functions is a proper subspace of . Hence the response of a typical nonlinear operation that involves thresholding and switching can be modeled as a signal. The same goes for sample paths of counting processes and other martingales. A system is defined to be a measurable non-anticipatory map sending sample paths to sample paths so that their outputs at any time is a measurable function of past values of the input and time. For example, stochastic differential equations with Lipschitz coefficients driven by a Wiener process
induce maps between corresponding path spaces, see page 127 in Rogers and Williams, and pages 126-128 in Klebaner. Also, under fairly general conditions (see e.g., Chapter V in Protter), stochastic differential equations driven by martingales with sample paths in have strong solutions that are semi-martingales.
For the time setting , the feedback system can be written , where can be interpreted as an input.
Definition. A feedback loop is deterministically well-posed if it has a unique solution for all inputs and is a system.
This implies that the processes and define identical filtrations. Consequently, no new information is created by the loop. However, what we need is that for . This is ensured by the following lemma (Lemma 8 in Georgiou and Lindquist).
Key Lemma. If the feedback loop is deterministically well-posed, is a system, and is a linear system having a right inverse that is also a system, then is a system and for .
The condition on in this lemma is clearly satisfied in the standard linear stochastic system, for which , and hence . The remaining conditions are collected in the following definition.
Definition. A feedback law is deterministically well-posed for the system if is a system and the feedback system deterministically well-posed.
Examples of simple systems that are not deterministically well-posed are given in Remark 12 in Georgiou and Lindquist.
A separation principle for physically realizable control laws
By only considering feedback laws that are deterministically well-posed, all admissible control laws are physically realizable in the engineering sense that they induce a signal that travels through the feedback loop.
The proof of the following theorem can be found in Georgiou and Lindquist 2013.
Separation theorem.
Given the linear stochastic system
where is a vector-valued Wiener process, is a zero-mean Gaussian random vector independent of , consider the problem of minimizing the quadratic functional J(u) over the class of all deterministically well-posed feedback laws . Then the unique optimal control law is given by where is defined as above and is given by the Kalman filter. More generally, if is a square-integrable martingale and is an arbitrary zero mean random vector, , where , is the optimal control law provided it is deterministically well-posed.
In the general non-Gaussian case, which may involve counting processes, the Kalman filter needs to be replaced by a nonlinear filter.
A Separation principle for delay-differential systems
Stochastic control for time-delay systems was first studied in Lindquist,
and Brooks, although Brooks relies on the strong assumption that the observation is functionally independent of the control , thus avoiding the key question of feedback.
Consider the delay-differential system
where is now a (square-integrable) Gaussian (vector) martingale, and where and are of bounded variation in the first argument and continuous on the right in the second, is deterministic for , and .
More precisely, for , for , and the total variation of is bounded by an integrable function in the variable , and the same holds for .
We want to determine a control law which minimizes
where is a positive Stieltjes measure. The corresponding deterministic problem obtained by setting is given by
with .
The following separation principle for the delay system above can be found in Georgiou and Lindquist 2013 and generalizes the corresponding result in Lindquist 1973
Theorem. There is a unique feedback law in the class of deterministically well-posed control laws that minimizes , and it is given by
where is the deterministic control gain and is given by the linear (distributed) filter
where is the innovation process
and the gain is as defined in page 120 in Lindquist.
References
Control theory
Stochastic control | Separation principle in stochastic control | [
"Mathematics"
] | 2,812 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
65,059,134 | https://en.wikipedia.org/wiki/Mercury%20%28crystallography%29 | Mercury is a freeware developed by the Cambridge Crystallographic Data Centre, originally designed as a crystal structure visualization tool. Mercury helps three dimensional visualization of crystal structure and assists in drawing and analysis of crystal packing and intermolecular interactions. Current version Mercury can read "cif", ".mol", ".mol2", ".pdb", ".res", ".sd" and ".xyz" types of files. Mercury has its own file format with filename extension ".mryx".
History
The Cambridge Crystallographic Data Centre (CCDC) developed and launched two programs, named ConQuest and Mercury, that run under Windows and various types of Unix, including Linux. ConQuest serves as a search interface to the Cambridge Structural Database (CSD), with Fortran code that performs a large variety of tasks, such as two-dimensional and three-dimensional substructure searching. Mercury was introduced as a crystal structure visualizer with facilities for exploring intermolecular contacts. The Mercury program is written entirely in object-oriented C++; the Qt library is used for building the GUI and OpenGL for three-dimensional graphics rendering. The primary objective of the first-generation Mercury was to provide three-dimensional viewing of crystal structures in the .MOL2, .PDB, .CIF and .MOL file formats. The first version had approximately 2,800 users signed up to the Mercury e-mail announcement list. Mercury 2.0 was launched in 2008, with additional tools to interpret and compare packing trends in crystal structures. Mercury versions released in 2015 and later provide additional functionality for 3D printing. The current version 4.0 of Mercury offers a substantially improved visual interface compared with its older versions.
Licence
Mercury is available as free download software; the full version of Mercury, with more advanced features, is available with a CSD licence, and the advanced features are disabled in the absence of such a licence. The Cambridge Crystallographic Data Centre (CCDC) provides CSD licences to academic institutions.
See also
Cambridge Crystallographic Data Centre
Crystallographic Information File
International Union of Crystallography
Protein Data Bank (file format)
CrystalExplorer
References
External links
Computational chemistry software | Mercury (crystallography) | [
"Chemistry"
] | 457 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
65,069,059 | https://en.wikipedia.org/wiki/WASP-35 | WASP-35 is a G-type main-sequence star about 660 light-years away. The star's age cannot be well constrained, but it is probably older than the Sun. WASP-35 is similar in concentration of heavy elements compared to the Sun.
The star has no detectable starspot activity. An imaging survey in 2015 found no detectable stellar companions, although a spectroscopic survey in 2016 yielded a suspected red dwarf companion with a temperature of .
Planetary system
In 2011 a transiting hot Jupiter planet, WASP-35b, was detected. The planet's equilibrium temperature is .
References
Eridanus (constellation)
G-type main-sequence stars
Planetary systems with one confirmed planet
Planetary transit variables
J05041962-0613473
Durchmusterung objects | WASP-35 | [
"Astronomy"
] | 159 | [
"Eridanus (constellation)",
"Constellations"
] |
75,063,524 | https://en.wikipedia.org/wiki/Bromal%20hydrate | Bromal hydrate is an organobromine compound with the chemical formula . It is the bromine analogue of chloral hydrate. Bromal hydrate forms when bromal is reacted with water. It decomposes to bromal and water upon distillation. It has hypnotic and analgesic properties but acts like a stimulant at lower doses. Bromal hydrate is more physiologically active than its chlorine analogue, chloral hydrate. Its direct effect on the heart muscles is stronger than that of chloral hydrate. Its analgesic effects were attributed to the proposed metabolism to bromoform.
It was also tried as a medication for epilepsy, but was found ineffective.
References
Hydrates
Hypnotics
Geminal diols
Organobromides | Bromal hydrate | [
"Chemistry",
"Biology"
] | 164 | [
"Hypnotics",
"Behavior",
"Sleep",
"Hydrates"
] |
75,064,357 | https://en.wikipedia.org/wiki/Tris%28bipyridine%29iron%28II%29%20chloride | Tris(bipyridine)iron(II) chloride is the chloride salt of the coordination complex tris(bipyridine)iron(II), . It is a red solid. In contrast to tris(bipyridine)ruthenium(II), this iron complex is not a useful photosensitizer because its excited states relax too rapidly, a consequence of the primogenic effect.
Tris(bipyridine)iron(II) chloride features an octahedral Fe(II) center bound to three 2,2'-bipyridine ligands. The complex has been isolated as salts with many anions.
Synthesis and reactions
The sulfate salt is produced by combining ferrous sulfate with excess bipy in aqueous solution. This result illustrates the preference of Fe(II) for bipyridine over water. Addition of cyanide to this solution precipitates solid .
Related complexes
Tris(o-phenanthroline)iron(II)
References
Iron complexes
Bipyridine complexes
Iron(II) compounds
Chlorides | Tris(bipyridine)iron(II) chloride | [
"Chemistry"
] | 229 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
75,065,921 | https://en.wikipedia.org/wiki/Emergent%20Universe | An emergent Universe scenario is a cosmological model that features the Universe being in a low-entropy "dormant" state before the Big Bang or the beginning of the cosmic inflation. Several such scenarios have been proposed in the literature.
"Cosmic egg" scenarios
A popular version proposed by George Ellis and others involves the Universe shaped like a 3-dimensional sphere (or another compact manifold) until a rolling scalar field begins inflating it. These models are notable as potentially avoiding both a Big Bang singularity and a quantum gravity era.
Criticism
This proposal has been criticised by Vilenkin and Mithani, and on different grounds by Aguirre and Kehayias, as inconsistent when quantum-mechanical effects are taken into account.
References
Physical cosmology | Emergent Universe | [
"Physics",
"Astronomy"
] | 152 | [
"Astrophysics",
"Theoretical physics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
75,068,957 | https://en.wikipedia.org/wiki/282%20%28number%29 | 282 (two hundred [and] eighty-two) is the natural number following 281 and preceding 283.
In mathematics
282 is an even composite number with three prime factors.
282 is a palindromic number, meaning it reads the same backwards as forwards. It is the smallest multi-digit palindromic number that lies between twin primes, pairs of prime numbers that differ by 2 (here 281 and 283).
282 is equal to the sum of its divisors that contain the digit 4: it is the sum of 47 + 94 + 141.
282 is the number of plane partitions of 9, that is, the number of rectangular arrays of non-negative integers summing to 9 that are non-increasing along every row and column.
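The properties listed above are easy to verify numerically. A short sketch (the helper functions are my own; the plane-partition count uses MacMahon's generating function prod_{k>=1} 1/(1 - x^k)^k truncated at degree 9):

```python
# Numerical checks of the properties of 282.

def is_palindrome(n):
    s = str(n)
    return s == s[::-1]

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Palindromic, and between the twin primes 281 and 283.
assert is_palindrome(282)
assert is_prime(281) and is_prime(283) and 283 - 281 == 2

# Sum of the divisors of 282 that contain the digit 4.
divisors = [d for d in range(1, 283) if 282 % d == 0]
assert [d for d in divisors if "4" in str(d)] == [47, 94, 141]
assert 47 + 94 + 141 == 282

# Plane partitions of 9: expand prod_{k>=1} 1/(1 - x^k)^k up to x^9
# by multiplying in each factor 1/(1 - x^k) a total of k times,
# each time via the ascending prefix-sum trick with stride k.
N = 9
coeffs = [1] + [0] * N
for k in range(1, N + 1):
    for _ in range(k):
        for i in range(k, N + 1):
            coeffs[i] += coeffs[i - k]
assert coeffs[9] == 282

print("all checks passed")
```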
References
Integers | 282 (number) | [
"Mathematics"
] | 146 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
75,077,678 | https://en.wikipedia.org/wiki/Overhaul%20hook%20ball | An overhaul hook ball, also known as an overhaul ball or headache ball, is a heavy weight that is attached to the end of a crane's cable, above the lifting hook. It is used to keep the cable under sufficient tension even when no load is attached. Although commonly spherical as the name suggests, overhaul balls may also be ellipsoidal or cylindrical.
Overhaul balls should be distinguished from wrecking balls, which, although superficially similar in appearance, are different devices serving a different purpose.
References
Lifting equipment | Overhaul hook ball | [
"Physics",
"Technology"
] | 104 | [
"Physical systems",
"Machines",
"Lifting equipment"
] |