Dataset columns: id (int64, 39 to 79M); url (string, length 32 to 168); text (string, length 7 to 145k); source (string, length 2 to 105); categories (list, length 1 to 6); token_count (int64, 3 to 32.2k); subcategories (list, length 0 to 27).
62,712,050
https://en.wikipedia.org/wiki/Hawking%20Index
The Hawking Index (HI) is a mock mathematical measure of how far people will, on average, read through a book before giving up. It was invented by American mathematician Jordan Ellenberg, who proposed it in a 2014 blog post for the Wall Street Journal. The index is named after English physicist Stephen Hawking, whose book A Brief History of Time has been dubbed "the most unread book of all time". Calculation Ellenberg's method of calculating the index draws on the "popular highlights", the five most highlighted passages marked by Amazon Kindle readers of each title. A wide spread of highlights throughout the work suggests that most readers read the entire book, resulting in a high score on the index. If the highlights occur only at the beginning of the book, fewer people are likely to have read it completely, and it will thus score low on the index. When the index was created, this information was easier to access, as "popular highlights" were visible to everyone, but this information has since been made available only to people who buy the books on Kindle. Hawking Index scores When Ellenberg first introduced the index, he applied it to a selection of then-popular books as his examples. References Literature Units of measurement
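The calculation described above is simple arithmetic: average the positions of the five most-highlighted passages and express that as a fraction of the book's length. The following is a minimal Python sketch of that idea, assuming the input is already the list of page numbers of the top highlights; the function name and the example figures are invented for illustration.

```python
def hawking_index(highlight_pages, total_pages):
    """Estimate a Hawking-Index-style score for a book.

    Following the method described above: average the page numbers of the
    most-highlighted passages and express the result as a percentage of the
    book's length. Highlights clustered near the start give a low score;
    highlights spread throughout the book give a high one.
    """
    if not highlight_pages or total_pages <= 0:
        raise ValueError("need at least one highlight and a positive page count")
    return 100.0 * sum(highlight_pages) / (len(highlight_pages) * total_pages)

# Invented example: highlights only in the opening chapters of a 400-page book
print(hawking_index([12, 25, 33, 41, 58], 400))        # ~8.5, a low score
# Invented example: highlights spread across the whole book
print(hawking_index([40, 120, 210, 300, 390], 400))    # ~53, a higher score
```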
Hawking Index
[ "Mathematics" ]
253
[ "Quantity", "Units of measurement" ]
65,566,394
https://en.wikipedia.org/wiki/Macroscope%20%28Wild-Leica%29
A macroscope or photomacroscope in its camera-equipped version (in German: makroskop / photomakroskop) is a type of optical microscope developed and named by Swiss microscope manufacturers Wild Heerbrugg and later, after that company's merger with Leica in 1987, by Leica Microsystems of Germany, optimised for high quality macro photography and/or viewing using a single objective lens and light path, rather than stereoscopic viewing of specimens, at magnifications up to around x40 (which can be increased further with optional supplementary lenses or higher power eyepieces). The Wild, subsequently Leica "macroscope" line was in production from approximately 1976–2003; it was succeeded by the Leica Z6 and Z16 offerings, which continued an equivalent (optically improved) functionality, but without the "macroscope" designation. The macroscope remains a useful, if somewhat specialised, instrument for examination of relevant specimens in various laboratories today. Description The macroscope outwardly resembles a binocular microscope (stereo microscope) but has a single light path in place of the dual light paths of the latter, which is relayed as an identical image to both eyepieces, and optionally, a camera port. In the Wild era it was offered as the M400, M410, M420 and M450 instruments, subsequently sold as the Leica M420 only. Optimised for macro photography in particular (since the camera path passed through the centre of the imaging lens, thereby offering the best optical performance), the instrument was not intended for mass sale but for specialised technical use, such as in universities and research laboratories, for example for production of images for use in scientific publications, or for inspection and production control for microscopic circuits, etc., in the semiconductor industry. In addition to their generally good optical performance, macroscopes offer a large, fixed working distance independent of magnification setting between the bottom of the objective lens and the subject, which is advantageous for manipulation of the specimens and/or introduction of supplementary lighting, etc. A further benefit of the macroscope principle (in contrast to the stereo microscope) is there is no parallax error (apparent lateral shift of the specimen) when acquiring "z-stacks" or focus stacked images for subsequent merging. Model features and history The Wild M400 and M450 were introduced in 1976, the M450 being essentially an M400 without the dedicated photo tube (thus intended for observation only). The M400 was sold with a dedicated range of camera bodies controlled by a large electronics box separate from the microscope, with some of the camera elements (the exposure sensor and the shutter) incorporated into the microscope body, and was an expensive model to both make and purchase. Both it and the later M420 incorporated a manual aperture control (to control depth-of-field) and it was available with a "Macrozoom" objective with a magnification range of 6.3x to 32x, i.e., an approximate zoom ratio of 1:5. Supplementary lenses were available which modified the objective magnification by 0.5x, 1.5x or 2x. There also was a version of the M450 marketed as the EpiMakroScop, which had the "Epizoom" objective, essentially a Macrozoom with a permanently attached 2x front element, thus twice the magnification but only half of the potential field-of-view of its "standard" equivalent. 
The (later) M420 was a cheaper option than the M400, lacking the electronic controls and in-microscope photography-associated features, but allowing the user to mount a camera of their choice on the top; the M410 was its sister model, designed for observation only (no photo tube). After the M420 production was taken over by Leica, initial models were offered with the same Macrozoom objective, and later ones with a new "Apozoom" objective possessing a zoom range between 5.8x and 35x (1:6), designed to provide higher resolution and better colour correction. Final magnification in the image plane was calculated incorporating a 1.25x magnification in both the viewing and photo tubes, leading to (for example) a range of 7.875x to 40x for the Macrozoom-equipped system (quoted range using the 10x eyepieces), further variable via a choice of different eyepiece magnifications and/or the additional supplementary lenses that could be mounted on the objective. Successors The original "macroscope" line was formally dropped in around 2003, in favour of a redesigned product range, the Leica Z6 APO (6.3x zoom range) and Z16 APO (16x zoom range); these follow similar design principles but no longer incorporate "macroscope" in their model names, although the word persists as a descriptive term in some product literature. A fluorescence-equipped version of these instruments is also marketed as the Leica MacroFluo. Similar/earlier instruments The term "macroscope" was not actually invented by Wild Heerbrugg but appears to have been a generic term used earlier by some other optical manufacturers including Bausch and Lomb (for a small monocular device), and EdnaLite Research Corporation, whose "MacroScope" (also marketed as the "S/P (=Scientific Products) Macroscope system") included "four separate levels of magnification". References Microscopes
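As a worked illustration of the magnification arithmetic quoted above (objective zoom multiplied by the 1.25x tube factor, optionally by a supplementary lens), here is a small Python sketch; only the zoom ranges, the tube factor and the supplementary-lens values come from the text, and the function itself is purely illustrative.

```python
def final_magnification(objective_zoom, tube_factor=1.25, supplementary=1.0):
    """Magnification in the image plane: objective zoom x tube factor x an
    optional supplementary lens (0.5x, 1.5x or 2x on these instruments)."""
    return objective_zoom * tube_factor * supplementary

# Macrozoom objective (6.3x-32x) through the 1.25x tube, as quoted in the text
print(final_magnification(6.3), final_magnification(32.0))    # 7.875 40.0

# Apozoom objective (5.8x-35x) with a 2x supplementary lens fitted
print(final_magnification(5.8, supplementary=2.0),
      final_magnification(35.0, supplementary=2.0))           # 14.5 87.5
```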
Macroscope (Wild-Leica)
[ "Chemistry", "Technology", "Engineering" ]
1,147
[ "Microscopes", "Measuring instruments", "Microscopy" ]
65,567,153
https://en.wikipedia.org/wiki/Serious%20mental%20illness
Serious mental illness (SMI) is characterized as any mental health condition that seriously or severely impairs one or more significant life activities, including day-to-day functioning. Five common examples of SMI include bipolar disorders, borderline personality disorder, psychotic disorders (e.g. schizophrenia), post-traumatic stress disorders, and major depressive disorders. People with SMI experience symptoms that, through the social, physical, and psychological limitations of their illnesses, prevent them from having experiences that contribute to a good quality of life. In 2021, 5.5% of U.S. adults were diagnosed with SMI, with the highest prevalence in the 18-to-25-year-old group (11.4%). In the same study, 65.4% of the adults diagnosed with SMI received mental health care services. SMI is a subset of AMI, an abbreviation for any mental illness. Hospitalizations Many people living with SMI experience institutional recidivism, which is the process of being admitted and readmitted into the hospital. This cycle is due in part to a lack of support available for people living with SMI after release from the hospital, frequent encounters between them and the police, as well as miscommunication between clinicians and police officers. There are also instances where poor insight into one's mental illness has resulted in increased psychiatric symptoms, which ultimately lead to hospitalization and a generally lower quality of life. Highly symptomatic patients are more likely to seek emergency room services. Patients with schizophrenia have the lowest risk of being hospitalized, likely due to frequent encounters with case managers to manage the chronic and persistent symptoms of schizophrenia. To reduce the occurrence of institutional recidivism, the Georgia chapter of the National Alliance on Mental Illness (NAMI) created the Opening Doors to Recovery (ODR) program. ODR established a treatment team of licensed mental health professionals, peer specialists, and family peer specialists (a family member of someone who has SMI) to reduce institutional recidivism by providing treatment, ensuring safe housing, and supporting recovery. SMI patients who were enrolled in ODR had fewer hospitalizations and fewer days in the hospital compared to before enrollment. Older adults with SMI are more likely to seek medical services and have longer hospital stays than patients who regularly see a doctor. People with SMI seek medical services for a variety of non-mental health conditions, including diabetes, coronary artery disease, congestive heart failure, urinary conditions, pneumonia, chronic obstructive pulmonary disease, thyroid disease, digestive conditions and cancer. This may be due to psychosomatic factors, as well as poor lifestyle habits associated with reduced mental health such as smoking, poor diet, and lack of exercise. People with SMI typically take antipsychotic medications to manage their condition; however, second-generation antipsychotics can cause poor glycemic control in patients with diabetes, compounding complications in this population. Second-generation antipsychotics, also known as atypical antipsychotics, are medications used to effectively treat the positive (e.g. hallucinations and delusions) and negative (e.g. flat affect and lack of motivation) symptoms of schizophrenia. 
As a result, people with both SMI and diabetes are more frequently readmitted to hospitals one month after their initial hospitalization. Notably, falls and substance abuse, including alcoholism, are increasingly reported among patients with SMI. Homelessness Adults with SMI are 25 to 50 percent more likely to experience homelessness compared to the general population. One predictor of homelessness is a poor therapeutic alliance with case managers. Adults with SMI often lack social support from family, friends and the community, which can put them at risk for experiencing homelessness. In 2019, the U.S. Department of Housing and Urban Development reported that 52,243 people with SMI were living on the street. During that time, 15,153 people with SMI were in transitional housing, which is temporary housing for people transitioning from emergency shelters to permanent housing, and 48,783 people with SMI were living in emergency shelters. People with SMI who experience homelessness have even greater difficulty accessing mental health and primary care services due to cost, lack of transportation, and lack of consistent access to a charged cell phone. These difficulties can add stress, which may be why people with SMI experience a high rate of suicidal ideation and suicide attempts. When surveyed, 8% of people with SMI who were homeless reported that they had made a suicide attempt in the past 30 days. Researchers found that the housing first approach to ending homelessness improved quality of life and psychosocial functioning faster than treatment as usual, also known as standard treatment. In addition, researchers found that SMI patients remained homeless for longer and had less housing stability when receiving mental health services without also receiving housing. Combining housing first with Assertive Community Treatment leads to improved quality of life one year after starting housing first, compared to receiving only outpatient mental health services. Additionally, housing first reduced the number of days hospitalized and the number of emergency room visits for people with SMI. Stigma People with SMI often experience stigma, driven in part by media representations that portray them as violent, criminal, and accountable for their condition because of weak character. People with SMI experience two kinds of stigma: public stigma and self-stigma. Public stigma refers to negative beliefs and perceptions that the public holds about SMI: for example, that people with SMI should be feared, that they are irresponsible and cannot be trusted with their own life decisions, and that they are childlike and need constant care. Self-stigma refers to prejudice that an individual with SMI may feel about themselves, such as "I am dangerous. I am afraid of myself. I am worthless." This can also manifest as an internalization of public stigma. In a study conducted on patients who were involuntarily hospitalized, researchers found that poor quality of life and low self-esteem could be predicted by high levels of self-stigma and fewer experiences of empowerment. Self-stigma can be reduced by increasing empowerment in individuals with SMI through counseling and/or peer support, and through others' self-disclosure of their own struggles with mental illness. People who suffer from SMI can reduce the amount of stigma that they experience by maintaining insight into their condition with the assistance of social supports. 
Consumer services, such as drop-in centers, peer support, mentoring services, and educational programs can increase empowerment in individuals with SMI. References Mental disorders Abnormal psychology Psychiatric assessment
Serious mental illness
[ "Biology" ]
1,374
[ "Mental disorders", "Behavior", "Behavioural sciences", "Abnormal psychology", "Human behavior" ]
65,571,716
https://en.wikipedia.org/wiki/Nickel%20monosilicide
Nickel monosilicide is an intermetallic compound of nickel and silicon. Like other nickel silicides, NiSi is of importance in the area of microelectronics. Preparation Nickel monosilicide can be prepared by depositing a nickel layer on silicon and subsequently annealing it. For Ni films with thicknesses above 4 nm, the normal phase sequence is Ni2Si at 250 °C, followed by NiSi at 350 °C and NiSi2 at approximately 800 °C. For films with an initial Ni thickness below 4 nm, a direct transition from orthorhombic Ni2Si to epitaxial NiSi2−x, skipping the nickel monosilicide phase, is observed. Uses Several properties make NiSi an important local contact material in the area of microelectronics, among them a reduced thermal budget, a low resistivity of 13–14 μΩ·cm, and a reduced Si consumption compared to alternative compounds. References Intermetallics Silicon compounds Nickel compounds
Nickel monosilicide
[ "Physics", "Chemistry", "Materials_science" ]
207
[ "Inorganic compounds", "Metallurgy", "Inorganic compound stubs", "Alloys", "Intermetallics", "Condensed matter physics" ]
50,507,581
https://en.wikipedia.org/wiki/Christiane%20Bonnelle
Christiane Bonnelle (died 21 August 2016) was a French physicist and pioneering spectroscopist, who served as professor emeritus at Pierre and Marie Curie University. Career and education Bonnelle studied at the Sorbonne, from which she received Bachelor of Science and Doctor of Science degrees. After completing her studies, Bonnelle joined CNRS in 1955, first as an intern and then as a researcher. In 1960 she started working as an assistant professor at the Sorbonne, becoming a professor in 1967. She moved to Pierre and Marie Curie University in 1974, where she became director of the Laboratory of Physical Chemistry in 1979. Bonnelle worked in this role until 1990. Research Bonnelle's research on solid-solid interfaces led to advances in measurement at the nanometre scale, which facilitated developments in the electron probes produced by CAMECA. Awards In 1967 she received a CNRS bronze medal. She is also a chevalier of the Ordre des Palmes Académiques. Selected publications "Distribution des états f dans les métaux et les oxyde de terres rares", Journal de Physique, 32, C4-230 (1971) with R.C. Karnatak (DOI: 10.1051/jphyscol:1971443). This paper described observations of excited states. "Analyses par spectroscopie X des distributions 5f de l'uranium dans le métal et UO2", Journal de Physique, 35, 295–299 (1974) with G. Lachère (DOI: 10.1051/jphys:01974003503029500). This paper dealt with radiative transitions of uranium. "Resonant X-ray Emission Spectroscopy in Solids", Advances in X-Ray Spectroscopy: Contributions in Honour of Professor Y. Cauchois (Elsevier, 2013). References French physical chemists Spectroscopists Year of birth missing 2016 deaths French women chemists French women physicists Chevaliers of the Ordre des Palmes Académiques 20th-century French chemists 20th-century French physicists 21st-century French physicists 20th-century French women scientists 21st-century French women scientists
Christiane Bonnelle
[ "Physics", "Chemistry" ]
454
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
50,512,532
https://en.wikipedia.org/wiki/Fixed-point%20ocean%20observatory
A fixed-point ocean observatory is an autonomous ocean-observing system of automatic sensors and samplers that continuously gathers data from the deep sea, the water column and the lower atmosphere, and transmits the data to shore in real or near-real time. Infrastructure Fixed-point ocean observatories are typically composed of a cable anchored to the sea floor to which several automatic sensors and samplers are attached. The cable ends with a buoy at the ocean surface that may carry further sensors. Most observatories have communicating buoys that transmit data to shore and allow the acquisition settings of the sensors to be changed as required. These unmanned platforms can be linked via a cable to the shore, transmitting data via an internet connection, or they can transmit data to relay buoys which provide a satellite link to the shore. An example of a network of observatories is the Ocean Observatories Initiative. Instrumentation A typical multi-disciplinary observatory is equipped with sensors and instruments to measure physical and biogeochemical variables along the water column. Additionally, the surface buoy can hold several sensors measuring atmospheric parameters at sea level. The main measured variables span these physical, biogeochemical and atmospheric parameters; to measure them, ocean observatories are typically equipped with instruments such as: ADCP – Acoustic Doppler current profiler, to measure currents; CTD (conductivity, temperature and depth) sensors, to measure conductivity and thermal variations at a known depth; Hydrophone – to record sounds; Sediment trap – to quantify the flux of sinking material; Deep sea camera – to capture footage on location; Seismometer – to record ground motion; CO2 analyser – to measure CO2; Dissolved oxygen sensor – to measure dissolved oxygen; Fluorometer – to measure chlorophyll; Turbidity sensor – to measure turbidity. Purpose Ocean observatories can collect data for different purposes, from scientific research to environmental monitoring for marine operations or governance, for the benefit of the economy and society as a whole. Ocean observatories provide real-time, or near-real-time, data that allow changes, such as geo-hazards, to be detected as they happen. Furthermore, continuous time-series data make it possible to investigate interannual-to-decadal changes, to capture episodic events, changes in ocean circulation, water properties, water mass formation and ecosystems, to quantify air-sea fluxes, and to analyse the role of the oceans in the climate. The data collected by the many ocean observatories around the globe on the sub-sea-floor, seafloor, and water column improve our knowledge of the ocean, including: Ocean physics and climate change Biodiversity and ecosystem assessment Carbon cycle and ocean acidification Geophysics and geodynamics Moreover, networks of ocean observatories can also be used to feed data into global ocean models and to calibrate them, thus allowing the investigation of future changes in ocean circulation and ecosystems. See also VENUS Canada, an ocean observatory operated by Ocean Networks Canada. NEPTUNE Canada, a sister observatory to VENUS, also operated by Ocean Networks Canada. MARS, a similar MBARI cable-based oceanography observatory. SATURN, Science and Technology University Research Network, a coastal margin, or river-to-ocean, testbed observatory for the United States Pacific Northwest, a project of the National Science Foundation Science and Technology Center for Coastal Margin Observation and Prediction. 
Ocean development References External links JERICO-NEXT Project (Joint European Research Infrastructure network for Coastal Observatories) European Multidisciplinary Seafloor and water-column Observatory Oceanography
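To make the infrastructure and instrumentation description above concrete, here is a small illustrative Python sketch of how one mooring's instrument configuration and a near-real-time data record might be represented; all names, depths and values are invented for illustration and do not describe any particular observatory.

```python
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str         # e.g. "ADCP", "CTD", "hydrophone"
    depth_m: float    # mounting depth along the mooring cable
    variables: tuple  # variables the instrument reports

# A hypothetical mooring combining sea-floor, water-column and surface sensors
mooring = [
    Instrument("seismometer", 4800.0, ("ground motion",)),
    Instrument("sediment trap", 4500.0, ("particle flux",)),
    Instrument("ADCP", 2000.0, ("current speed", "current direction")),
    Instrument("CTD", 1000.0, ("conductivity", "temperature", "depth")),
    Instrument("dissolved oxygen sensor", 500.0, ("dissolved oxygen",)),
    Instrument("surface buoy met sensors", 0.0, ("air temperature", "wind speed")),
]

def make_record(timestamp, instrument, values):
    """One near-real-time record, as might be relayed to shore via the buoy."""
    return {"time": timestamp, "instrument": instrument.name,
            "depth_m": instrument.depth_m, **values}

print(make_record("2016-05-01T00:00Z", mooring[3], {"temperature": 4.1}))
```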
Fixed-point ocean observatory
[ "Physics", "Environmental_science" ]
739
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
50,518,346
https://en.wikipedia.org/wiki/Astro%20microbiology
Astro microbiology, or exo microbiology, is the study of microorganisms in outer space. It stems from an interdisciplinary approach, which incorporates both microbiology and astrobiology. Astrobiology's efforts are aimed at understanding the origins of life and the search for life other than on Earth. Because microorganisms are the most widespread form of life on Earth, and are capable of colonising almost any environment, scientists usually focus on microbial life in the field of astrobiology. Moreover, small and simple cells usually evolve first on a planet rather than larger, multicellular organisms, and, as proposed by the panspermia hypothesis, have an increased likelihood of being transported from one planet to another. Planetary exploration The search for extraterrestrial microbial life has focused mostly on Mars due to its promising environment and close proximity; however, other astrobiological sites include the moons Europa, Titan and Enceladus. All of these sites currently have or have had a recent history of possessing liquid water, which scientists hypothesize to be the most consequential precursor for biological life. Europa and Enceladus appear to have large amounts of liquid water hidden beneath the layers of ice that cover their surfaces. Titan, on the other hand, is the only planetary body besides Earth with liquid hydrocarbons on its surface. Mars is the main area of interest for the search for life, primarily because of convincing evidence that suggests surface liquid water activity in recent history. Furthermore, Mars has an atmosphere containing abundant amounts of carbon and nitrogen, both essential elements needed for life. Discoveries In 1975, NASA's Viking program launched two identical landers to the surface of Mars, each carrying scientific instruments. Their biological experiments included gas chromatography–mass spectrometry to identify the components of Martian soil, analysis of gas exchange with the Martian environment, pyrolytic release of radioactive 14C to check if carbon fixation occurred, and labelled release of additional 14C alongside seven Miller-Urey products to discern metabolic processes. The labelled release experiment produced the only results that could not be conclusively explained by non-biological chemical reactions until NASA's Phoenix spacecraft showed the presence of perchlorates on Mars' surface in 2008. Such compounds could produce the radiolabeled CO2 recorded by the landers. In operation since 2012, NASA's Curiosity rover has found evidence of historical conditions on Mars being suitable for life, such as organic matter being preserved in rocks and evidence of past groundwater, though no lifeforms have been found. In 2014, Vladimir Solovyov of Russia's TASS news agency claimed that cosmonauts found plankton on the International Space Station. NASA officials found no evidence for the claims, though they predicted that some extremophile microorganisms could survive in space. Future missions Experimentation Earth Many studies on Earth have been conducted to collect data on the response of terrestrial microbes to various simulated environmental conditions of outer space. The responses of microbes, such as viruses, bacterial cells, bacterial and fungal spores, and lichens, to isolated factors of outer space (microgravity, galactic cosmic radiation, solar UV radiation, and space vacuum) were determined in space and laboratory simulation experiments. 
In general, microorganisms tended to thrive in the simulated space flight environment – subjects showed symptoms of enhanced growth and an uncharacteristic ability to proliferate despite the presence of normally suppressive levels of antibiotics. In fact, in one study, trace (background-level) antibiotic exposure resulted in the acquisition of antibiotic resistance under simulated microgravity. The mechanisms responsible for these enhanced responses have yet to be discovered. Space The ability of microorganisms to survive in an outer space environment was investigated to approximate the upper boundaries of the biosphere and to determine the accuracy of the interplanetary transport theory for microorganisms. Among the investigated variables, solar UV radiation had the most harmful effect on microbial samples. Among all the samples, only lichens (Rhizocarpon geographicum and Xanthoria elegans) fully survived the two weeks of exposure to outer space. Earth's ozone layer greatly protects against the deleterious effects of solar UV, which is why organisms typically are unable to survive without ozone protection. When shielded against solar UV, various samples were able to survive for long periods of time; spores of B. subtilis, for example, survived in space for up to 6 years. The data support the likelihood of interplanetary transfer of microorganisms within meteorites, a scenario known as the lithopanspermia hypothesis. Mars Modern technology has already allowed us to use microbes to assist in extracting materials on Earth, including over 25% of our current copper supply. Microbes could serve a similar purpose on other planets to mine resources, extract useful materials, or create self-sustaining reactors. The most promising of these candidates known to date are cyanobacteria. Billions of years ago, cyanobacteria helped create a habitable Earth by pumping oxygen into the atmosphere, and they manage to exist in the darkest corners of the Earth. Cyanobacteria, along with some other rock-eating microbes, seem to be able to withstand the harsh conditions of the vacuum of space without much effort. On Mars, however, cyanobacteria would not even have to endure such harsh conditions. Scientists are currently working on the possibility of installing bioreactors or similar facilities on Mars, which would run entirely on cyanobacteria and provide material for the creation of fuel cells, soil crust formation, regolith amelioration, extraction of useful metals/elements, nutrient release into the soil, and dust removal; a variety of other potentially useful functions are also being explored. References Microbiology
Astro microbiology
[ "Chemistry", "Biology" ]
1,194
[ "Microbiology", "Microscopy" ]
47,256,032
https://en.wikipedia.org/wiki/Evapoporometry
Evapoporometry is a method used to determine pore size in synthetic membranes. Based on the Kelvin equation, this technique is most accurate for detection of pore diameters between 4 nm and 150 nm. Theory Evapoporometry uses modified forms of the Kelvin equation to relate the evaporation of a wetting liquid (usually 2-propanol) from a membrane to the average diameter of the pores in that membrane. The primary equation used in this technique is d = −4γVm / (RT ln(w/w̄)), where d is the pore diameter, γ is the surface tension, Vm is the molar volume of the wetting liquid, R is the gas constant, T is the absolute temperature, w is the instantaneous evaporation rate, and w̄ is the average evaporation rate of the free-standing liquid layer (both rates in the same mass-per-time units). Method Evapoporometry has the significant advantage of requiring only a lab scale, 2-propanol (or another wetting fluid), and a cell in which to contain the sample and 2-propanol. The sample is immersed for some time in 2-propanol prior to measurement to ensure saturation of the pores, and is then placed into the cell in an analytical balance and immersed again in 2-propanol, after which the change in mass due to evaporation of the free-standing liquid layer and then draining of liquid from the pores is measured by the analytical balance. Instantaneous evaporation rates are calculated from the mass data and input into the above equation to yield a pore-size distribution for the sample. Statistical analysis of the free-standing evaporation data can be used to quantitatively determine the evaporation rate at which pore draining begins, defined relative to the average free-standing rate and its standard deviation. This analysis is enabled by the principle that evaporation from small pores will only occur after 2-propanol in larger pores has completely evaporated. It is important to note that there exists a nanoscale layer of wetting fluid on both the membrane and the test cell material, known as the "t-layer", the mass of which is to be excluded from measurement to increase accuracy; otherwise these points may be incorrectly attributed to subnanometer pores. Akhondi et al. describe methods for correction of the t-layers of the test cell and membrane, as well as a correction for swelling of membranes during the experiment. The correction for the t-layer of the test cell itself can be made by performing the evapoporometry procedure as described above with an empty test cell, integrating from the start point of pore draining until the point at which the calculated pore diameter equals 4 nm, to yield the mass of the t-layer. This mass plus the mass of the membrane's t-layer constitutes the endpoint of the pore diameter calculation for the principal evapoporometry measurement. See also Synthetic membrane Porosimetry Hollow fiber membrane External links Evapore, an open-source Python tool for Evapoporometry data analysis. References Polymers Polymer chemistry
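To make the relation above concrete, here is a minimal Python sketch of the pore-diameter calculation, assuming 2-propanol near room temperature; the property values are rough approximations chosen for illustration, not calibrated constants, and the function name is invented.

```python
import math

R = 8.314        # gas constant, J/(mol K)
GAMMA = 0.0213   # surface tension of 2-propanol, N/m (approximate, near 25 C)
V_M = 7.7e-5     # molar volume of liquid 2-propanol, m^3/mol (approximate)
T = 298.15       # absolute temperature, K

def pore_diameter(w_inst, w_free):
    """Pore diameter (m) from the Kelvin-type relation used in evapoporometry:
    d = -4*gamma*Vm / (R*T*ln(w/w_bar)), where w is the instantaneous
    evaporation rate and w_bar the average free-standing-layer rate."""
    return -4.0 * GAMMA * V_M / (R * T * math.log(w_inst / w_free))

# During pore draining the evaporation rate drops below the free-standing rate;
# a rate 5% below the free-standing average maps to a pore of roughly 50 nm.
d = pore_diameter(0.95, 1.00)
print(f"{d * 1e9:.1f} nm")
```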
Evapoporometry
[ "Chemistry", "Materials_science", "Engineering" ]
592
[ "Polymers", "Materials science", "Polymer chemistry" ]
47,259,004
https://en.wikipedia.org/wiki/Lightning%20splitter
A lightning splitter is an architectural design referring to wood-framed homes with a sharply angled multi-story gable roof. The sharply angled gable was believed in local Rhode Island folklore to split or deflect bolts of lightning. Architectural evidence suggests that new constructions and modifications of existing homes in the style occurred predominantly in the mid-19th century. By 1980, the number of surviving lightning splitter homes was believed to be about half a dozen. Background and design Lightning splitters are named from the folklore or superstitious belief that the sharp angle of the gable will deflect or split lightning. The unique style arose in and around Providence, Rhode Island, with the greatest number of constructions or modifications occurring in the mid-19th century. Surviving examples The Bicknell–Armington Lightning Splitter House is the only example that is listed on the National Register of Historic Places. Constructed around 1827 and heavily modified around 1850, this house is named for the first two owners of the property. The Daniel Pearce (c1755 - 1800) House at 53 Transit Street in Providence is another historic example. Constructed about 1781 and modified to the lightning splitter style circa 1860, the home exhibits a narrower and sharper gable angle than the Bicknell–Armington House. There is also a lightning splitter house in Buckland, MA, also known as the "Arvid Crittenden House", built in 1900. References Buildings and structures by type Rhode Island culture
Lightning splitter
[ "Engineering" ]
300
[ "Buildings and structures by type", "Architecture" ]
47,261,590
https://en.wikipedia.org/wiki/Vivado
Vivado Design Suite is a software suite for synthesis and analysis of hardware description language (HDL) designs, superseding Xilinx ISE with additional features for system on a chip development and high-level synthesis (HLS). Vivado represents a ground-up rewrite and re-thinking of the entire design flow (compared to ISE). Like the later versions of ISE, Vivado includes an in-built logic simulator. Vivado also introduces high-level synthesis, with a toolchain that converts C code into programmable logic. Replacing the 15-year-old ISE with Vivado Design Suite took 1000 man-years and cost US$200 million. Features Vivado was introduced in April 2012, and is an integrated design environment (IDE) with system-to-IC level tools built on a shared scalable data model and a common debug environment. Vivado includes electronic system level (ESL) design tools for synthesizing and verifying C-based algorithmic IP; standards-based packaging of both algorithmic and RTL IP for reuse; standards-based IP stitching and systems integration of all types of system building blocks; and the verification of blocks and systems. A free WebPACK Edition of Vivado provides designers with a limited version of the design environment. Components The Vivado High-Level Synthesis compiler enables C, C++ and SystemC programs to be directly targeted into Xilinx devices without the need to manually create RTL. Vivado HLS is widely reported to increase developer productivity and supports C++ classes, templates, functions and operator overloading. Vivado 2014.1 introduced support for automatically converting OpenCL kernels to IP for Xilinx devices. OpenCL kernels are programs that execute across various CPU, GPU and FPGA platforms. The Vivado Simulator is a component of the Vivado Design Suite. It is a compiled-language simulator that supports mixed-language designs, Tcl scripts, encrypted IP and enhanced verification. The Vivado IP Integrator allows engineers to quickly integrate and configure IP from the large Xilinx IP library. The Integrator is also tuned for MathWorks Simulink designs built with Xilinx's System Generator and Vivado High-Level Synthesis. The Vivado Tcl Store is a scripting system for developing add-ons to Vivado, and can be used to add and modify Vivado's capabilities. Tcl is the scripting language on which Vivado itself is based. All of Vivado's underlying functions can be invoked and controlled via Tcl scripts. Device support Vivado supports Xilinx's 7-series devices and all newer devices (UltraScale and UltraScale+ series). For development targeting older Xilinx devices and CPLDs, the discontinued Xilinx ISE has to be used. See also Xilinx ISE Intel Quartus Prime ModelSim References Computer-aided design software Electronic design automation software Digital electronics AMD software
Vivado
[ "Engineering" ]
632
[ "Electronic engineering", "Digital electronics" ]
47,263,487
https://en.wikipedia.org/wiki/Tumbler%20%28glass%29
A tumbler is a flat-floored beverage container usually made of plastic, glass or stainless steel. Theories vary as to the etymology of the word tumbler. One such theory is that the glass originally had a pointed or convex base and could not be set down without spilling. Another is that they had weighted bottoms which caused them to right themselves if knocked over. Originally, the term tumbler referred to a type of drinking glass with a pointed or rounded base, which prevented it from being put down until it was empty, encouraging the drinker to finish their beverage in one go. Over time, the design evolved into the flat-bottomed glassware we are familiar with today, which can comfortably sit on tables and counters without tipping over. The modern tumbler comes in various sizes and shapes, designed to accommodate a wide range of beverages from water and juice to sophisticated cocktails. Types of tumblers Dizzy Cocktail glass, a glass with a wide, shallow bowl, comparable to a normal cocktail glass but without the stem Collins glass, for a tall mixed drink Highball glass, for mixed drinks Iced tea glass Juice glass, for fruit juices and vegetable juices. Old fashioned glass, traditionally, for a simple cocktail or liquor "on the rocks". Contemporary American "rocks" glasses may be much larger, and used for a variety of beverages over ice Shot glass, a small glass for up to four ounces of liquor. The modern shot glass has a thicker base and sides than the older whiskey glass Table glass, faceted glass, or granyonyi stakan, common in Russia and made of particularly hard and thick glass Water glass Whiskey tumbler, a small, thin-walled glass for a straight shot of liquor Tumblers can also be adorned with decor, such as gemstones and rhinestones. Political The Jana Sena Party of India has been assigned a glass tumbler as its common election symbol, being one of twenty-nine political parties in India to have one. Culinary measurement unit The tumbler is a measurement unit for cooking in the United Kingdom. 1 tumbler is 10 British imperial fluid ounces (½ British imperial pint; about 9.61 US customary fluid ounces or 284.13 millilitres). The tumbler, the breakfast cup (8 British imperial fluid ounces), the cup (6 British imperial fluid ounces), the teacup (5 British imperial fluid ounces), the coffee cup (2 British imperial fluid ounces), and the wine glass (2 British imperial fluid ounces) are the traditional British equivalents of the US customary cup and the metric cup, used in situations where a US cook would use the US customary cup and a cook using metric units the metric cup. The breakfast cup is the most similar in size to the US customary cup and the metric cup. Which of these six units is used depends on the quantity or volume of the ingredient: there is a division of labour among these six units, as with the tablespoon and the teaspoon. British cookery books and recipes, especially those from the days before the UK's partial metrication, commonly use two or more of the aforesaid units simultaneously: for example, the same recipe may call for a 'tumblerful' of one ingredient and a 'wineglassful' of another one; or a 'breakfastcupful' or 'cupful' of one ingredient, a 'teacupful' of a second one, and a 'coffeecupful' of a third one. 
Unlike the US customary cup and the metric cup, a tumbler, a breakfast cup, a cup, a teacup, a coffee cup, and a wine glass are not measuring cups: they are simply everyday drinking vessels commonly found in British households and typically having the respective aforementioned capacities; due to long-term and widespread use, they have been transformed into measurement units for cooking. There is not a British imperial unit-based culinary measuring cup. See also Breakfast cup Cup (unit)#British cup Teacup (unit) Coffee cup (unit) Wine glass#Capacity measure Cooking weights and measures References Drinkware Glass jars Measurement Units of volume Imperial units Cooking weights and measures
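For reference, the traditional British measures listed above convert to metric straightforwardly; the short Python sketch below tabulates them, assuming 1 imperial fluid ounce = 28.4131 mL (the standard conversion). The dictionary and loop are purely illustrative.

```python
# Traditional British culinary measures in imperial fluid ounces, as listed above
IMPERIAL_FL_OZ_ML = 28.4131

BRITISH_MEASURES_FL_OZ = {
    "tumbler": 10,
    "breakfast cup": 8,
    "cup": 6,
    "teacup": 5,
    "coffee cup": 2,
    "wine glass": 2,
}

for name, fl_oz in BRITISH_MEASURES_FL_OZ.items():
    print(f"1 {name} = {fl_oz} imperial fl oz = {fl_oz * IMPERIAL_FL_OZ_ML:.1f} mL")
# 1 tumbler = 10 imperial fl oz = 284.1 mL, matching the figure quoted above
```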
Tumbler (glass)
[ "Physics", "Mathematics" ]
847
[ "Units of volume", "Physical quantities", "Quantity", "Measurement", "Size", "Units of measurement" ]
57,511,265
https://en.wikipedia.org/wiki/Kinetic%20isotope%20effects%20of%20RuBisCO
The kinetic isotope effect (KIE) of ribulose-1,5-bisphosphate carboxylase oxygenase (RuBisCO) is the isotopic fractionation associated solely with the step in the Calvin-Benson cycle where a molecule of carbon dioxide (CO2) is attached to the 5-carbon sugar ribulose-1,5-bisphosphate (RuBP) to produce two 3-carbon molecules of 3-phosphoglycerate (3 PGA). This chemical reaction is catalyzed by the enzyme RuBisCO, and this enzyme-catalyzed reaction creates the primary kinetic isotope effect of photosynthesis. It is also largely responsible for the isotopic compositions of photosynthetic organisms and the heterotrophs that eat them. Understanding the intrinsic KIE of RuBisCO is of interest to earth scientists, botanists, and ecologists because this isotopic biosignature can be used to reconstruct the evolution of photosynthesis and the rise of oxygen in the geologic record, reconstruct past evolutionary relationships and environmental conditions, and infer plant relationships and productivity in modern environments. Reaction details and energetics The fixation of CO2 by RuBisCO is a multi-step process. First, a CO2 molecule (not the molecule that is eventually fixed) attaches to the uncharged ε-amino group of lysine 201 in the active site to form a carbamate. This carbamate then binds to the magnesium ion (Mg2+) in RuBisCO's active site. A molecule of RuBP then binds to the Mg2+ ion. The bound RuBP then loses a proton to form a reactive enediolate species. The rate-limiting step of the Calvin-Benson cycle is the addition of CO2 to this 2,3-enediol form of RuBP. This is the stage where the intrinsic KIE of RuBisCO occurs, because a new C-C bond is formed. The newly formed 2-carboxy-3-keto-D-arabinitol 1,5-bisphosphate molecule is then hydrated and cleaved to form two molecules of 3-phosphoglycerate (3 PGA). 3 PGA is then converted into hexoses to be used in the photosynthetic organism's central metabolism. The isotopic substitutions that can occur in this reaction are for carbon, oxygen, and/or hydrogen, though currently a significant isotope effect is seen only for carbon isotope substitution. Isotopes are atoms that have the same number of protons but varying numbers of neutrons. "Lighter" isotopes (like the stable carbon-12 isotope) have a smaller overall mass, and "heavier" isotopes (like the stable carbon-13 isotope or radioactive carbon-14 isotope) have a larger overall mass. Stable isotope geochemistry is concerned with how varying chemical and physical processes preferentially enrich or deplete stable isotopes. Enzymes like RuBisCO cause isotopic fractionation because molecules containing lighter isotopes have higher zero-point energies (ZPE), the lowest possible quantum energy state for a given molecular arrangement. For this reaction, 13CO2 has a lower ZPE than 12CO2 and sits lower in the potential energy well of the reactants. When enzymes catalyze chemical reactions, the lighter isotope is preferentially selected because it has a lower activation energy, making it more energetically favorable to overcome the high potential-energy transition state and proceed through the reaction. Here, 12CO2 has a lower activation energy, so more 12CO2 than 13CO2 proceeds through the reaction, resulting in the product (3 PGA) being isotopically lighter. Ecological trade-offs influence isotope effects The observed intrinsic KIEs of RuBisCO have been correlated with two aspects of its enzyme kinetics: 1) Its "specificity" for CO2 over O2, and 2) Its rate of carboxylation. 
Specificity (SC/O) The reactive enediolate species is also sensitive to oxygen (O2), which results in the dual carboxylase / oxygenase activity of RuBisCO. This reaction is considered wasteful, as it produces products (3-phosphoglycerate and 2-phosphoglycolate) that must be catabolized through photorespiration. This process requires energy and is a missed opportunity for CO2 fixation, which results in a net loss of carbon fixation efficiency for the organism. The dual carboxylase / oxygenase activity of RuBisCO is exacerbated by the fact that O2 and CO2 are small, relatively indistinguishable molecules that can bind only weakly, if at all, in Michaelis-Menten complexes. There are four forms of RuBisCO (Form I, II, III, and IV), with Form I being the most abundant. Form I is used extensively by higher plants, eukaryotic algae, cyanobacteria, and Pseudomonadota (formerly proteobacteria). Form II is much less widespread and can be found in some species of Pseudomonadota and in dinoflagellates. RuBisCOs from different photosynthetic organisms display varying abilities to distinguish between CO2 and O2. This property can be quantified and is termed "specificity" (Sc/o). A higher value of Sc/o means that a RuBisCO's carboxylase activity is greater than its oxygenase activity. Rate of carboxylation (VC) and Michaelis-Menten constant (KC) The rate of carboxylation (VC) is the rate at which RuBisCO fixes CO2 to RuBP under substrate-saturated conditions. A higher value of VC corresponds to a higher rate of carboxylation. This rate of carboxylation can also be represented through its Michaelis-Menten constant KC, with a higher value of KC corresponding to a higher rate of carboxylation. VC is represented by Vmax, and KC is represented as KM in the generalized Michaelis-Menten curve. Although the rate of carboxylation varies among RuBisCO types, RuBisCO on average fixes only three molecules of CO2 per second. This is remarkably slow compared to typical enzyme catalytic rates, which usually catalyze reactions at the rate of thousands of molecules per second. Phylogenetic patterns It has been observed among natural RuBisCOs that an increased ability to distinguish between CO2 and O2 (larger values of Sc/o) corresponds with a decreased rate of carboxylation (lower values of VC and KC). The variation and trade-off between Sc/o and KC has been observed across all photosynthetic organisms, from photosynthetic bacteria and algae to higher plants. Organisms using RuBisCOs with high values of VC and KC and low values of Sc/o localize RuBisCO to areas within the cell with artificially high local CO2 concentrations. In cyanobacteria, concentrations of CO2 are increased using a carboxysome, an icosahedral protein compartment about 100 nm in diameter that selectively takes up bicarbonate and converts it to CO2 in the presence of RuBisCO. Organisms without a CO2-concentrating mechanism (CCM), like certain plants, instead utilize RuBisCOs with high values of Sc/o and low values of VC and KC. It has been theorized that groups with a CCM have been able to maximize KC at the expense of decreasing Sc/o, because artificially enhancing the concentration of CO2 would decrease the relative concentration of O2 and remove the need for high CO2 specificity. However, the opposite is true for organisms without a CCM, which must optimize Sc/o at the expense of KC because O2 is readily present in the atmosphere. 
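To make the kinetic quantities above concrete, here is a small Python sketch using the standard Michaelis-Menten rate law with competitive inhibition by O2 and the usual definition of specificity as a ratio of catalytic efficiencies; the numerical parameter values are placeholders for illustration, not measured RuBisCO constants.

```python
def carboxylation_rate(co2, o2, Vc, Kc, Ko):
    """Carboxylation rate with competitive inhibition by O2 (standard
    Michaelis-Menten form): v = Vc*[CO2] / (Kc*(1 + [O2]/Ko) + [CO2])."""
    return Vc * co2 / (Kc * (1.0 + o2 / Ko) + co2)

def specificity(Vc, Kc, Vo, Ko):
    """Sc/o = (Vc/Kc) / (Vo/Ko): the ratio of carboxylation to oxygenation
    efficiency; larger values mean better discrimination of CO2 over O2."""
    return (Vc / Kc) / (Vo / Ko)

# Placeholder parameters, purely for illustration (not measured constants)
print(carboxylation_rate(co2=10.0, o2=250.0, Vc=3.0, Kc=15.0, Ko=500.0))  # ~0.92
print(specificity(Vc=3.0, Kc=15.0, Vo=1.0, Ko=500.0))                     # 100.0
```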
This trade-off between Sc/o and VC or KC observed in extant organisms suggests that RuBisCO has evolved through geologic time to be maximally optimized in its current, modern environment. RuBisCO evolved over 2.5 billion years ago, when atmospheric CO2 concentrations were 300 to 600 times higher than present-day concentrations, and oxygen concentrations were only 5-18% of present-day levels. Therefore, because CO2 was abundant and O2 rare, there was no need for the ancestral RuBisCO enzyme to have high specificity. This is supported by the biochemical characterization of an ancestral RuBisCO enzyme, which has intermediate values of VC and SC/O between the extreme end-members. It has been theorized that this ecological trade-off is due to the form that 2-carboxy-3-keto-D-arabinitol 1,5-bisphosphate takes in its transient transition state before cleaving into two 3PGA molecules. The more closely the Mg2+-bound CO2 moiety resembles the carboxylate group in 2-carboxy-3-keto-D-arabinitol 1,5-bisphosphate, the greater the structural difference between the transition states of carboxylation and oxygenation. The larger structural difference allows RuBisCO to better distinguish between CO2 and O2, resulting in larger values of Sc/o. However, this increasing structural similarity between the transition state and the product state requires strong binding at the carboxyketone group, and this binding is so strong that the rate of cleavage into two product 3PGA molecules is slowed. Therefore, an increased specificity for CO2 over O2 necessitates a lower overall rate of carboxylation. This theory implies that there is a physical-chemistry limitation at the heart of RuBisCO's active site, which may preclude any effort to engineer a simultaneously more selective and faster RuBisCO. Isotope effects Sc/o has been positively correlated with the magnitude of carbon isotope fractionation (represented by Δ13C), with larger values of Sc/o corresponding to larger values of Δ13C. It has been theorized that because increasing Sc/o means the transition state is more like the product, the O2C–C2 bond will be shorter, resulting in higher overall potential and vibrational energy. This creates a higher-energy transition state, which makes it even harder for 13CO2 (lower in the potential energy well than 12CO2) to overcome the required activation energy. The RuBisCOs used by different photosynthetic organisms vary slightly in their enzyme structure, and these structural differences result in varying transition states. This diversity in enzyme structure is reflected in the resulting Δ13C values measured from different photosynthetic organisms. However, overlap exists between the Δ13C values of different groups because the carbon isotope values measured are generally those of the entire organism, and not just its RuBisCO enzyme. Many other factors, including growth rate and the isotopic composition of the starting substrate, can affect the carbon isotope values of the whole organism and cause the spread seen in C isotope measurements. See also Isotope geochemistry Fractionation of carbon isotopes in oxygenic photosynthesis Isotopes of carbon Isotopic signature References Chemical kinetics Photosynthesis Isotope separation
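The carbon-isotope quantities discussed above are conventionally expressed in delta notation. The short sketch below shows the standard definitions in Python; the reference ratio is the approximate VPDB value, the Δ13C expression is the common linear approximation, and the example numbers are illustrative only.

```python
R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta13C(r_sample):
    """delta13C in per mil relative to VPDB: (R_sample/R_std - 1) * 1000."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def fractionation(delta_source, delta_product):
    """Common linear approximation of the fractionation Delta13C between the
    substrate CO2 and the fixed carbon, in per mil."""
    return delta_source - delta_product

# Illustrative values: atmospheric CO2 near -8 per mil, C3 plant tissue near -28 per mil
print(fractionation(-8.0, -28.0))  # ~20 per mil for a typical C3 plant
```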
Kinetic isotope effects of RuBisCO
[ "Chemistry", "Biology" ]
2,341
[ "Biochemistry", "Chemical reaction engineering", "Photosynthesis", "Chemical kinetics" ]
57,511,618
https://en.wikipedia.org/wiki/SMSS%20J215728.21-360215.1
SMSS J215728.21-360215.1, commonly known as J2157-3602, is one of the fastest-growing black holes and one of the most powerful quasars known. The quasar is located at redshift 4.75, corresponding to a comoving distance of from Earth and to a light-travel distance of . It was discovered with the SkyMapper telescope at Australian National University's Siding Spring Observatory and announced in May 2018. It has an intrinsic bolometric luminosity of () and an absolute magnitude of -32.36. In July 2020, the black hole associated with the quasar was reported to have a mass of 34 billion solar masses, based on a study published in Monthly Notices of the Royal Astronomical Society. References Quasars Supermassive black holes Piscis Austrinus
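As a rough plausibility check of the quoted absolute magnitude, the standard magnitude-luminosity relation can be applied. The Python sketch below treats -32.36 as if it were a bolometric magnitude, which is an assumption since the photometric band is not specified above; the solar reference values are the standard IAU ones.

```python
M_BOL_SUN = 4.74    # solar absolute bolometric magnitude (IAU reference)
L_SUN_W = 3.828e26  # solar luminosity in watts (IAU reference)

def luminosity_from_abs_mag(M_bol):
    """Luminosity implied by an absolute bolometric magnitude:
    L = L_sun * 10**(-0.4 * (M_bol - M_bol_sun))."""
    return L_SUN_W * 10.0 ** (-0.4 * (M_bol - M_BOL_SUN))

# Treating the quoted -32.36 as bolometric (an assumption) gives a luminosity
# of order 10^14-10^15 solar luminosities.
L = luminosity_from_abs_mag(-32.36)
print(f"{L:.2e} W  (~{L / L_SUN_W:.1e} L_sun)")
```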
SMSS J215728.21-360215.1
[ "Physics", "Astronomy" ]
178
[ "Piscis Austrinus", "Black holes", "Galaxy stubs", "Unsolved problems in physics", "Supermassive black holes", "Constellations", "Astronomy stubs" ]
57,515,942
https://en.wikipedia.org/wiki/Monitored%20Emergency%20Use%20of%20Unregistered%20and%20Investigational%20Interventions
Monitored Emergency Use of Unregistered and Investigational Interventions (MEURI) is an ethical protocol developed by the World Health Organization to evaluate the potential use of experimental drugs in the event of public health emergencies. The protocol was created by the WHO Ebola Ethics Working Group in 2014 in the context of the 2014 West Africa Ebola outbreak. The WHO recommends that the term be preferred to the term "compassionate use" or "expanded access" for the controlled use of unregistered treatments in public health emergency measures. References Medical ethics Clinical pharmacology Health policy Ebola
Monitored Emergency Use of Unregistered and Investigational Interventions
[ "Chemistry" ]
118
[ "Pharmacology", "Clinical pharmacology" ]
54,200,847
https://en.wikipedia.org/wiki/Dislocation%20avalanches
Dislocation avalanches are rapid discrete events during plastic deformation, in which defects are reorganized collectively. This intermittent flow behavior has been observed in microcrystals, whereas macroscopic plasticity appears as a smooth process. Intermittent plastic flow has been observed in several different systems. In Al-Mg alloys, interaction between solutes and dislocations can cause sudden jumps during dynamic strain aging. In metallic glasses, it can be observed via shear banding with stress localization; and in single-crystal plasticity, it shows up as slip bursts. However, analysis of events differing by orders of magnitude in size, across different crystallographic structures, reveals power-law scaling between the number of events and their magnitude, or scale-free flow. This microscopic instability of plasticity can have profound consequences for the mechanical behavior of microcrystals. The increased relative size of the fluctuations makes it difficult to control the plastic forming process. Moreover, at small specimen sizes the yield stress is no longer well defined by the 0.2% plastic strain criterion, since this value varies from specimen to specimen. Similar intermittent effects have been studied in many completely different systems, including intermittency of energy dissipation in magnetism (Barkhausen effect), superconductivity, earthquakes, and friction. Background Macroscopic plasticity is well described by continuum models. Dislocation motion is characterized by an average velocity, as in Orowan's equation, which relates the plastic strain rate to the mobile dislocation density, the Burgers vector, and the average dislocation velocity. However, this approach completely fails to account for well-known intermittent deformation phenomena such as the spatial localization of dislocation flow into "slip bands" (also known as Lüders bands) and the temporal fluctuations in stress-strain curves (the Portevin–Le Chatelier effect, first reported in the 1920s). Experimental Approach Although evidence of intermittent flow behavior has long been known and studied, it is only in the past two decades that a quantitative understanding of the phenomenon has developed, with the help of novel experimental techniques. Acoustic emission Acoustic emission (AE) is used to record the crackling noise from deforming crystals. The amplitudes of the acoustic signals can be related to the area swept by fast-moving dislocations and hence to the energy dissipated during deformation events. The results show that the crackling noise is not smooth and has no specific energy scale. The effect of grain structure on "supercritical" flow has been studied in polycrystalline ice. Direct mechanical measurement Recent developments in small-scale mechanical testing, with sub-nm resolution in displacement and sub-μN resolution in force, now allow discrete events in stress and strain to be studied directly. Currently, the most prominent method is a miniaturized compression experiment, where a nanoindenter equipped with a flat indentation tip is used. Combined with in-situ techniques such as transmission electron microscopy, scanning electron microscopy, and micro-diffraction methods, this nanomechanical testing method can reveal rich detail of nanoscale plasticity instabilities in real time. One potential concern in nanomechanical measurement is: how fast can the system respond? Can the indentation tip remain in contact with the sample and track the deformation? Since dislocation velocity is strongly affected by stress, velocities can differ by many orders of magnitude between systems. 
Also, the multiscale nature of dislocation avalanche events gives dislocation velocities a large range. For example, single dislocations have been shown to move at speeds of ~10 ms−1 in pure Cu, but dislocation groups moved at ~10−6 ms−1 in Cu-0.5%Al. The opposite is found for iron, where dislocation groups move six orders of magnitude faster in an FeSi alloy than individual dislocations in pure iron. To resolve this issue, Sparks et al. designed an experiment measuring the first fracture of a Si beam and comparing it with theoretical predictions to determine the response speed of the system. In addition to regular compression experiments, in-situ electrical contact resistance (ECR) measurements were performed. During these in-situ tests, a constant voltage was applied during the deformation experiment to record the current evolution during intermittent plastic flow. The results show that the indentation tip remains in contact with the sample throughout the experiments, which demonstrates that the response speed is fast enough. Theoretical analysis and simulations Avalanche strain distributions have the general form P(s) = C s^(−t) exp(−s/s0), where C is a normalization constant, t is a scaling exponent, and s0 is the characteristic strain of the largest avalanches. Dislocation dynamics simulations have shown that t can sometimes be close to 1.5, but much higher exponents are also often observed, with values that may even approach 3 in special circumstances. While traditional mean-field theory predicts the value 1.5, more advanced mean-field investigations have demonstrated that larger exponents can be caused by non-trivial but prevalent mechanisms in crystal plasticity that suppress mobile dislocation populations. Effect of crystal structure on dislocation avalanches In FCC crystals, the scaled velocity shows a main peak in its distribution with a relatively smooth curve, as expected from theory except for some disagreement at low velocity. However, in BCC crystals, the distribution of scaled velocity is broader and much more dispersed. The results also show that the scaled velocity in BCC is much slower than in FCC, which is not predicted by mean-field theory. A possible explanation for this discrepancy is based on the different moving speeds of edge and screw dislocations in the two types of crystal. In FCC crystals, the two dislocation components move at the same velocity, resulting in a smooth averaged avalanche profile; whereas in BCC crystals, the edge components move fast and escape rapidly, while the screw parts propagate slowly, dragging down the overall velocity. Based on this explanation, a direction dependence of avalanche events would also be expected in HCP crystals, for which experimental data are currently lacking. References See also Dislocation Slip (materials science) Intermittency Lüders band Portevin–Le Chatelier effect Barkhausen effect Materials science Solid mechanics Deformation (mechanics) Plasticity (physics)
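For illustration, the truncated power-law form reconstructed above can be evaluated directly; in the Python sketch below the exponent and cutoff are placeholder values (the mean-field-like t = 1.5 mentioned in the text and an arbitrary s0), not fitted experimental parameters.

```python
import math

def avalanche_pdf(s, C, t, s0):
    """P(s) = C * s**(-t) * exp(-s/s0): power-law scaling of avalanche sizes
    with an exponential cutoff at the characteristic strain s0."""
    return C * s ** (-t) * math.exp(-s / s0)

# Mean-field-like exponent t = 1.5 and an arbitrary cutoff, for illustration
for s in (0.1, 1.0, 10.0, 100.0):
    print(s, avalanche_pdf(s, C=1.0, t=1.5, s0=50.0))
# Small avalanches are far more frequent than large ones, and events much
# larger than s0 are exponentially suppressed.
```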
Dislocation avalanches
[ "Physics", "Materials_science", "Engineering" ]
1,229
[ "Solid mechanics", "Applied and interdisciplinary physics", "Deformation (mechanics)", "Materials science", "Plasticity (physics)", "Mechanics", "nan" ]
54,201,297
https://en.wikipedia.org/wiki/Dacuronium%20bromide
Dacuronium bromide (INN, BAN) (developmental code name NB-68) is an aminosteroid neuromuscular blocking agent which was never marketed. It acts as a competitive antagonist of the nicotinic acetylcholine receptor (nAChR). References Muscle relaxants Nicotinic antagonists Quaternary ammonium compounds Abandoned drugs Neuromuscular blockers Cyclopentanols Piperidines Acetate esters Bromides
Dacuronium bromide
[ "Chemistry" ]
99
[ "Bromides", "Salts", "Drug safety", "Abandoned drugs" ]
54,202,739
https://en.wikipedia.org/wiki/Amphipathic%20lipid%20packing%20sensor%20motifs
Amphipathic Lipid Packing Sensor (ALPS) motifs were first identified in 2005 in ARFGAP1 and have since been reviewed. The curving of a phospholipid bilayer, for example into a liposome, disturbs the packing of the lipids on the side of the bilayer with the larger surface area (the outside of a liposome, for example). This less "ordered" or "looser" lipid packing is recognized by ALPS motifs. ALPS motifs are 20- to 40-amino-acid-long segments of proteins with a characteristic composition of amino acid residues: bulky hydrophobic residues such as phenylalanine, leucine, and tryptophan occur every 3 or 4 positions, with many polar but uncharged residues such as glycine, serine, and threonine in between. An ALPS motif is unstructured in solution but folds into an alpha helix when associated with the membrane bilayer, such that the hydrophobic residues insert between the loosely packed lipids and the polar residues point toward the aqueous cytoplasm. References Membrane biology Molecular biology Proteins
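As a toy illustration of the residue pattern just described, the following sketch scores a sequence window for ALPS-like composition. The window length, residue sets, and scoring scheme are assumptions chosen only for demonstration and do not correspond to any published ALPS-prediction algorithm.

```python
BULKY_HYDROPHOBIC = set("FLW")   # phenylalanine, leucine, tryptophan
POLAR_UNCHARGED = set("GST")     # glycine, serine, threonine

def alps_like_score(window: str) -> float:
    """Crude score: fraction of positions where a bulky hydrophobic residue
    recurs 3 or 4 positions later, weighted by the content of small polar
    uncharged residues in the window."""
    hits = sum(
        1
        for i, aa in enumerate(window)
        if aa in BULKY_HYDROPHOBIC
        and any(j < len(window) and window[j] in BULKY_HYDROPHOBIC for j in (i + 3, i + 4))
    )
    polar = sum(aa in POLAR_UNCHARGED for aa in window)
    return (hits / len(window)) * (polar / len(window))

def scan(sequence: str, window_len: int = 30):
    """Slide a 30-residue window (ALPS motifs span roughly 20-40 residues)."""
    return [
        (i, alps_like_score(sequence[i:i + window_len]))
        for i in range(len(sequence) - window_len + 1)
    ]
```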
Amphipathic lipid packing sensor motifs
[ "Chemistry", "Biology" ]
248
[ "Biomolecules by chemical classification", "Membrane biology", "Molecular biology", "Biochemistry", "Proteins" ]
48,992,685
https://en.wikipedia.org/wiki/Transition%20metal%20dithiophosphate%20complex
Transition metal dithiophosphate complexes are coordination compounds containing dithiophosphate ligands, i.e. ligands of the formula (RO)2PS2−. The homoleptic complexes have the formulas M[S2P(OR)2]2 and M[S2P(OR)2]3. These neutral complexes tend to be soluble in organic solvents, especially when R is branched. Perhaps the most important members are the zinc dialkyldithiophosphates, which are used as oil additives. Such compounds are prepared by the reaction of dialkyl dithiophosphoric acids, (RO)2PS2H, with metal oxides, chlorides, and acetates. References Phosphorothioates
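As an illustration of the preparation route just mentioned, a representative balanced equation for the zinc case is given below; the choice of zinc oxide as the metal source and of a generic alkyl group R is an assumption made for the example.

```latex
2\,(\mathrm{RO})_2\mathrm{P(S)SH} \;+\; \mathrm{ZnO}
\;\longrightarrow\;
\mathrm{Zn}\bigl[\mathrm{S}_2\mathrm{P}(\mathrm{OR})_2\bigr]_2 \;+\; \mathrm{H}_2\mathrm{O}
```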
Transition metal dithiophosphate complex
[ "Chemistry" ]
147
[ "Phosphorothioates", "Functional groups" ]
39,893,334
https://en.wikipedia.org/wiki/FuseNet
FuseNet is a nuclear-fusion-focused educational organization. Between 2008 and 2013 it was funded by a European Union grant under EURATOM: Fusion Energy Research. The FP7 Project The purpose of FuseNet is to coordinate and facilitate fusion education, to share best practices, to jointly develop educational tools, and to organize educational events. The members of FuseNet have jointly established academic criteria for the award of European Fusion Doctorate and Master Certificates. These criteria are set to stimulate a high level of fusion education throughout Europe. The Association FuseNet is the umbrella organization and single voice for the training and education of the next generation of fusion engineers and scientists. FuseNet is recognized as such by the European Commission. References External links FuseNet Website Nuclear fusion Science education 2008 establishments
FuseNet
[ "Physics", "Chemistry" ]
151
[ "Nuclear fusion", "Nuclear chemistry stubs", "Nuclear physics" ]
39,900,181
https://en.wikipedia.org/wiki/Light-oxygen-voltage-sensing%20domain
A Light-oxygen-voltage-sensing domain (LOV domain) is a protein sensor used by a large variety of higher plants, microalgae, fungi and bacteria to sense environmental conditions. In higher plants, they are used to control phototropism, chloroplast relocation, and stomatal opening, whereas in fungal organisms, they are used for adjusting the circadian temporal organization of the cells to the daily and seasonal periods. They are a subset of PAS domains. Chromophore Common to all LOV proteins is the blue-light-sensitive flavin chromophore, which in the signaling state is covalently linked to the protein core via an adjacent cysteine residue. LOV domains are found, for example, in phototropins, which are blue-light-sensitive protein complexes regulating a great diversity of biological processes in higher plants as well as in micro-algae. Phototropins are composed of two LOV domains, each containing a non-covalently bound flavin mononucleotide (FMN) chromophore in its dark-state form, and a C-terminal Ser-Thr kinase. Upon blue-light absorption, a covalent bond between the FMN chromophore and an adjacent reactive cysteine residue of the apo-protein is formed in the LOV2 domain. This subsequently mediates the activation of the kinase, which induces a signal in the organism through phototropin autophosphorylation. While the photochemical reactivity of the LOV2 domain has been found to be essential for the activation of the kinase, the in vivo functionality of the LOV1 domain within the protein complex still remains unclear. Fungus In the case of the fungus Neurospora crassa, the circadian clock is controlled by two light-sensitive domains, known as the white-collar complex (WCC) and the LOV domain vivid (VVD-LOV). WCC is primarily responsible for the light-induced transcription of the control gene frequency (FRQ) under daylight conditions, which drives the expression of VVD-LOV and governs the negative feedback loop onto the circadian clock. By contrast, the role of VVD-LOV is mainly modulatory and does not directly affect FRQ. Natural and engineered functions of LOV domains LOV domains have been found to control gene expression through DNA binding and to be involved in redox-dependent regulation, as in the bacterium Rhodobacter sphaeroides. Notably, LOV-based optogenetic tools have gained wide popularity in recent years to control a myriad of cellular events, including cell motility, subcellular organelle distribution, formation of membrane contact sites, microtubule dynamics, transcription, and protein degradation. See also FMN-binding fluorescent proteins References Sensory receptors Signal transduction Molecular biology Plant physiology
Light-oxygen-voltage-sensing domain
[ "Chemistry", "Biology" ]
604
[ "Plant physiology", "Plants", "Signal transduction", "Molecular biology", "Biochemistry", "Neurochemistry" ]
39,901,852
https://en.wikipedia.org/wiki/Reactive%20%26%20Functional%20Polymers
Reactive & Functional Polymers is a monthly peer-reviewed scientific journal, established in 1982 and published by Elsevier. It covers research on both the science and the technology of reactive and functional polymers, i.e. polymers bearing functional groups or other specific chemical reactivity or functionality. The journal publishes both original research and review papers. The editor-in-chief is Alexander Bismarck (University of Vienna). Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.975. References External links Materials science journals Chemistry journals Elsevier academic journals
Reactive & Functional Polymers
[ "Materials_science", "Engineering" ]
128
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
39,904,688
https://en.wikipedia.org/wiki/Blitzar
In astronomy, blitzars are a hypothetical type of neutron star, specifically pulsars that can rapidly collapse into black holes if their spinning slows down. Heino Falcke and Luciano Rezzolla proposed these stars in 2013 as an explanation for fast radio bursts. Overview These stars, if they exist, are thought to start from a neutron star with a mass that would cause it to collapse into a black hole if it were not rapidly spinning. Instead, the neutron star spins fast enough so that its centrifugal force overcomes gravity. This makes the neutron star a typical but doomed pulsar whose strong magnetic field radiates energy away and slows its spin. Eventually the weakening centrifugal force is no longer able to halt the pulsar from collapsing into a black hole. At that moment, part of the pulsar's magnetic field outside the black hole is suddenly cut off from its vanished source. This magnetic energy is instantly transformed into a burst of wide spectrum radio energy. As of January 2015, seven radio events detected so far might represent such possible collapses; they are projected to occur every 10 seconds within the observable universe. Because the magnetic field had previously cleared the surrounding space of gas and dust, there is no nearby material that will fall into the new black hole. Thus there is no burst of X-rays or gamma rays that usually happens when other black holes form. If blitzars exist, they may offer a new way to observe details of black hole formation. References Hypothetical stars Neutron stars Black holes
Blitzar
[ "Physics", "Astronomy" ]
314
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
38,500,906
https://en.wikipedia.org/wiki/Gagliardo%E2%80%93Nirenberg%20interpolation%20inequality
In mathematics, and in particular in mathematical analysis, the Gagliardo–Nirenberg interpolation inequality is a result in the theory of Sobolev spaces that relates the -norms of different weak derivatives of a function through an interpolation inequality. The theorem is of particular importance in the framework of elliptic partial differential equations and was originally formulated by Emilio Gagliardo and Louis Nirenberg in 1958. The Gagliardo-Nirenberg inequality has found numerous applications in the investigation of nonlinear partial differential equations, and has been generalized to fractional Sobolev spaces by Haïm Brezis and Petru Mironescu in the late 2010s. History The Gagliardo-Nirenberg inequality was originally proposed by Emilio Gagliardo and Louis Nirenberg in two independent contributions during the International Congress of Mathematicians held in Edinburgh from August 14, 1958 through August 21, 1958. In the following year, both authors improved their results and published them independently. Nonetheless, a complete proof of the inequality went missing in the literature for a long time. Indeed, to some extent, both original works of Gagliardo and Nirenberg do not contain a full and rigorous argument proving the result. For example, Nirenberg firstly included the inequality in a collection of lectures given in Pisa from September 1 to September 10, 1958. The transcription of the lectures was later published in 1959, and the author explicitly states only the main steps of the proof. On the other hand, the proof of Gagliardo did not yield the result in full generality, i.e. for all possible values of the parameters appearing in the statement. A detailed proof in the whole Euclidean space was published in 2021. From its original formulation, several mathematicians worked on proving and generalizing Gagliardo-Nirenberg type inequalities. The Italian mathematician Carlo Miranda developed a first generalization in 1963, which was addressed and refined by Nirenberg later in 1966. The investigation of Gagliardo-Nirenberg type inequalities continued in the following decades. For instance, a careful study on negative exponents has been carried out extending the work of Nirenberg in 2018, while Brezis and Mironescu characterized in full generality the embeddings between Sobolev spaces extending the inequality to fractional orders. Statement of the inequality For any extended real (i.e. possibly infinite) positive quantity and any integer , let denote the usual spaces, while denotes the Sobolev space consisting of all real-valued functions in such that all their weak derivatives up to order are also in . Both families of spaces are intended to be endowed with their standard norms, namely: where stands for essential supremum. Above, for the sake of convenience, the same notation is used for scalar, vector and tensor-valued Lebesgue and Sobolev spaces. The original version of the theorem, for functions defined on the whole Euclidean space , can be stated as follows. Notice that the parameter is determined uniquely by all the other ones and usually assumed to be finite. However, there are sharper formulations in which is considered (but other values may be excluded, for example ). Relevant corollaries of the Gagliardo-Nirenberg inequality The Gagliardo-Nirenberg inequality generalizes a collection of well-known results in the field of functional analysis. 
Indeed, given a suitable choice of the seven parameters appearing in the statement of the theorem, one obtains several useful and recurring inequalities in the theory of partial differential equations: The Sobolev embedding theorem establishes the existence of continuous embeddings between Sobolev spaces with different orders of differentiation and/or integrability. It can be obtained from the Gagliardo-Nirenberg inequality setting (so that the choice of becomes irrelevant, and the same goes for the associated requirement ) and the remaining parameters in such a way that and the other hypotheses are satisfied. The result reads then for any such that . In particular, setting and yields that , namely the Sobolev conjugate exponent of , and we have the embedding Notice that, in the embedding above, we also implicitly assume that and hence the first exceptional case does not apply. The Ladyzhenskaya inequality is a special case of the Gagliardo-Nirenberg inequality. Considering the most common cases, namely and , we have the former corresponding to the parameter choice yielding for any The constant is universal and can be proven to be . In three space dimensions, a slightly different choice of parameters is needed, namelyyieldingfor any . Here, it holds . The Nash inequality, which was published by John Nash in 1958, is yet another result generalized by the Gagliardo-Nirenberg inequality. Indeed, choosingone gets which is oftentimes recast asor its squared version. Proof of the Gagliardo-Nirenberg inequality A complete and detailed proof of the Gagliardo-Nirenberg inequality has been missing in literature for a long time since its first statements. Indeed, both original works of Gagliardo and Nirenberg lacked some details, or even presented only the main steps of the proof. The most delicate point concerns the limiting case . In order to avoid the two exceptional cases, we further assume that is finite and that , so in particular . The core of the proof is based on two proofs by induction. Throughout the proof, given and , we shall assume that . A double induction argument is applied to the couple of integers , representing the orders of differentiation. The other parameters are constructed in such a way that they comply with the hypotheses of the theorem. As base case, we assume that the Gagliardo-Nirenberg inequality holds for and (hence ). Here, in order for the inequality to hold, the remaining parameters should satisfy The first induction step goes as follows. Assume the Gagliardo-Nirenberg inequality holds for some strictly greater than and (hence ). We are going to prove that it also holds for and (with ). To this end, the remaining parameters necessarily satisfy Fix them as such. Then, let be such that From the base case, we can infer that Now, from the two relations between the parameters, through some algebraic manipulations we arrive at therefore the inequality with applied to implies The two inequalities imply the sought Gagliardo-Nirenberg inequality, namely The second induction step is similar, but allows to change. Assume the Gagliardo-Nirenberg inequality holds for some pair with (hence ). It is enough to prove that it also holds for and (with ). Again, fix the parameters in such a way that and let be such that The inequality with and applied to entails Since, by the first induction step, we can assume the Gagliardo-Nirenberg inequality holds with and , we get The proof is completed by combining the two inequalities. 
In order to prove the base case, several technical lemmas are necessary, while the remaining values of can be recovered by interpolation and a proof can be found, for instance, in the original work of Nirenberg. The Gagliardo-Nirenberg inequality in bounded domains In many problems coming from the theory of partial differential equations, one has to deal with functions whose domain is not the whole Euclidean space , but rather some given bounded, open and connected set In the following, we also assume that has finite Lebesgue measure and satisfies the cone condition (among those are the widely used Lipschitz domains). Both Gagliardo and Nirenberg found out that their theorem could be extended to this case adding a penalization term to the right hand side. Precisely, The necessity of a different formulation with respect to the case is rather straightforward to prove. Indeed, since has finite Lebesgue measure, any affine function belongs to for every (including ). Of course, it holds much more: affine functions belong to and all their derivatives of order greater than or equal to two are identically equal to zero in . It can be easily seen that the Gagliardo-Nirenberg inequality for the case fails to be true for any non constant affine function, since a contradiction is immediately achieved when and , and therefore cannot hold in general for integrable functions defined on bounded domains. That being said, under slightly stronger assumptions, it is possible to recast the theorem in such a way that the penalization term is "absorbed" in the first term at right hand side. Indeed, if , then one can choose and get This formulation has the advantage of recovering the structure of the theorem in the full Euclidean space, with the only caution that the Sobolev seminorm is replaced by the full -norm. For this reason, the Gagliardo-Nirenberg inequality in bounded domains is commonly stated in this way. Finally, observe that the first exceptional case appearing in the statement of the Gagliardo-Nirenberg inequality for the whole space is no longer relevant in bounded domains, since for finite measure sets we have that for any finite Generalization to non-integer orders The problem of interpolating different Sobolev spaces has been solved in full generality by Haïm Brezis and Petru Mironescu in two works dated 2018 and 2019. Furthermore, their results do not depend on the dimension and allow real values of and , rather than integer. Here, is either the full space, a half-space or a bounded and Lipschitz domain. If and is an extended real quantity, the space is defined as follows and if we set where and denote the integer part and the fractional part of , respectively, i.e. . In this definition, there is the understanding that , so that the usual Sobolev spaces are recovered whenever is a positive integer. These spaces are often referred to as fractional Sobolev spaces. A generalization of the Gagliardo-Nirenberg inequality to these spaces reads For example, the parameter choice gives the estimate The validity of the estimate is granted, for instance, from the fact that . See also Metric (mathematics) Functional analysis Function space References Theorems in functional analysis Inequalities Sobolev spaces
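For reference, the classical statement of the inequality on the whole space, in the form commonly attributed to Nirenberg, can be written as follows; the parameter names j, m, n, p, q, r, α follow the usual textbook convention and are not necessarily the labels used for the seven parameters in the article's own notation.

```latex
\|D^{j}u\|_{L^{p}(\mathbb{R}^{n})}
  \;\le\; C\,\|D^{m}u\|_{L^{r}(\mathbb{R}^{n})}^{\alpha}\,
           \|u\|_{L^{q}(\mathbb{R}^{n})}^{1-\alpha},
\qquad
\frac{1}{p} \;=\; \frac{j}{n} + \alpha\!\left(\frac{1}{r}-\frac{m}{n}\right) + \frac{1-\alpha}{q},
\qquad
\frac{j}{m}\le \alpha \le 1,
```

valid for all u in L^q(R^n) whose weak derivatives of order m lie in L^r(R^n), up to the standard exceptional cases (in particular, α = 1 must be excluded when 1 < r < ∞ and m − j − n/r is a nonnegative integer).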
Gagliardo–Nirenberg interpolation inequality
[ "Mathematics" ]
2,105
[ "Theorems in mathematical analysis", "Mathematical theorems", "Binary relations", "Theorems in functional analysis", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems" ]
38,502,761
https://en.wikipedia.org/wiki/Ada%20Yonath
Ada E. Yonath (, ; born 22 June 1939) is an Israeli crystallographer and Nobel laureate in Chemistry, best known for her pioneering work on the structure of ribosomes. She is the current director of the Helen and Milton A. Kimmelman Center for Biomolecular Structure and Assembly of the Weizmann Institute of Science. In 2009, Yonath received the Nobel Prize in Chemistry along with Venkatraman Ramakrishnan and Thomas A. Steitz for her studies on the structure and function of the ribosome, becoming the first Israeli woman to win the Nobel Prize out of ten Israeli Nobel laureates, the first woman from the Middle East to win a Nobel prize in the sciences, and the first woman in 45 years to win the Nobel Prize for Chemistry. Biography Ada Lifshitz (later Yonath) was born in the Geula quarter of Jerusalem. Her parents, Hillel and Esther Lifshitz, were Zionist Jews who immigrated to the British Mandate of Palestine (now Israel) from Zduńska Wola, Poland in 1933 before the establishment of Israel. Her father was a rabbi and came from a rabbinical family. They settled in Jerusalem and ran a grocery, but found it difficult to make ends meet. They lived in cramped quarters with several other families, and Yonath remembers "books" being the only thing she had to keep her occupied. Despite their poverty, her parents sent her to school in the upscale Beit HaKerem neighborhood to assure her a good education. When her father died at the age of 42, the family moved to Tel Aviv. Yonath was accepted to Tichon Hadash high school although her mother could not pay the tuition. She gave math lessons to students in return. As a youngster, she says she was inspired by the Polish and naturalized-French scientist Marie Curie. However, she stresses that Curie, whom she as a child was fascinated by after reading her biography, was not her "role model". She returned to Jerusalem for college, graduating from the Hebrew University of Jerusalem with a bachelor's degree in chemistry in 1962, and a master's degree in biochemistry in 1964. In 1968, she obtained her PhD from the Weizmann Institute of Science for X-ray crystallographic studies on the structure of collagen, with Wolfie Traub as her PhD advisor. She has one daughter, Hagit Yonath, a doctor at Sheba Medical Center, and a granddaughter, Noa. She is the cousin of anti-occupation activist Ruchama Marton. Scientific career Yonath accepted postdoctoral positions at Carnegie Mellon University (1969) and MIT (1970). While a postdoc at MIT she spent some time in the lab of subsequent 1976 chemistry Nobel Prize winner William N. Lipscomb, Jr. of Harvard University where she was inspired to pursue very large structures. In 1970, she established what was for nearly a decade the only protein crystallography laboratory in Israel. Then, from 1979 to 1984 she was a group leader with Heinz-Günter Wittmann at the Max Planck Institute for Molecular Genetics in Berlin. She was visiting professor at the University of Chicago in 1977–78. She headed a Max-Planck Institute Research Unit at DESY in Hamburg, Germany (1986–2004) in parallel to her research activities at the Weizmann Institute. Yonath focuses on the mechanisms underlying protein biosynthesis, by ribosomal crystallography, a research line she pioneered over twenty years ago despite considerable skepticism of the international scientific community. 
Ribosomes translate RNA into protein and because they have slightly different structures in microbes, when compared to eukaryotes, such as human cells, they are often a target for antibiotics. In 2000 and 2001, she determined the complete high-resolution structures of both ribosomal subunits and discovered within the otherwise asymmetric ribosome, the universal symmetrical region that provides the framework and navigates the process of polypeptide polymerization. Consequently, she showed that the ribosome is a ribozyme that places its substrates in stereochemistry suitable for peptide bond formation and for substrate-mediated catalysis. In 1993 she visualized the path taken by the nascent proteins, namely the ribosomal tunnel, and recently revealed the dynamics elements enabling its involvement in elongation arrest, gating, intra-cellular regulation and nascent chain trafficking into their folding space. Additionally, Yonath elucidated the modes of action of over twenty different antibiotics targeting the ribosome, illuminated mechanisms of drug resistance and synergism, deciphered the structural basis for antibiotic selectivity and showed how it plays a key role in clinical usefulness and therapeutic effectiveness, thus paving the way for structure-based drug design. For enabling ribosomal crystallography Yonath introduced a novel technique, cryo bio-crystallography, which became routine in structural biology and allowed intricate projects otherwise considered formidable. At the Weizmann Institute, Yonath is the incumbent of the Martin S. and Helen Kimmel Professorial Chair. Political Views She has called for the unconditional release of all Hamas prisoners, saying that "holding Palestinians captive encourages and perpetuates their motivation to harm Israel and its citizens ... once we don't have any prisoners to release they will have no reason to kidnap soldiers". Awards and recognition Yonath is a member of the United States National Academy of Sciences; the American Academy of Arts and Sciences; the Israel Academy of Sciences and Humanities; the European Academy of Sciences and Art and the European Molecular Biology Organization. On Saturday, 18 October 2014, Professor Yonath was named an ordinary member of the Pontifical Academy of Sciences by Pope Francis. Her awards and honors include the following: In 2002, Israel Prize In 2002, Harvey Prize In 2004, Massry Prize In 2004, Paul Karrer Gold Medal In 2005, Louisa Gross Horwitz Prize In 2006, Wolf Prize in Chemistry along with George Feher. In 2006, Rothschild Prize in Life Sciences. In 2006, The EMET Prize for Art, Science and Culture in Life Sciences, along with Professor Peretz Lavie (Medicine) and Professor Eli Keshet (Biology) In 2007, Paul Ehrlich and Ludwig Darmstaedter Prize along with Harry Noller In 2008, the Albert Einstein World Award of Science for her pioneering contributions to protein biosynthesis in the field of ribosomal crystallography and her introduction of innovative techniques in cryo bio-crystallography. In 2009, the Nobel Prize in Chemistry (co-recipient with Thomas Steitz and Venkatraman Ramakrishnan). She was the first Israeli woman to be awarded a Nobel Prize. In 2010, Wilhelm Exner Medal In 2011, Marie Curie Medal awarded by the Polish Chemical Society In 2013 she became a member of the German Academy of Sciences Leopoldina. 
In 2015, she was awarded Honorary Doctorates from the University of Southern California, the De La Salle University, Manila/Philippines; the Joseph Fourier University, Grenoble/France; the Medical University of Lodz, Lodz/Poland; and the University of Warwick, UK. In 2018, she was awarded an Honorary Doctorate from Carnegie Mellon University In 2020, she was elected a Foreign Member of the Royal Society In 2023, she was awarded an Honorary Doctorate from the Jagiellonian University. See also Women of Israel History of RNA biology List of Israel Prize recipients List of female Nobel laureates List of Israeli Nobel laureates List of Jewish Nobel laureates List of peace activists List of RNA biologists Timeline of women in science Women in chemistry References External links "APS user shares the “Israeli Nobel” for chemistry", from the Argonne National Laboratory Advanced Photon Source (APS), United States Department of Energy The Official Site of Louisa Gross Horwitz Prize Weizmann Institute of Science, Yonath-Site Ada Yonath's Publication list Talk of Ada Yonath at the Origins 2011 congress 1939 births Living people People from Rehovot Nobel laureates in Chemistry Women Nobel laureates Israeli Nobel laureates Albert Einstein World Award of Science Laureates Crystallographers Israeli biochemists Israel Prize in chemistry recipients Israel Prize women recipients Israeli Jews Israeli people of Polish-Jewish descent Israeli pacifists Israeli women chemists Israeli women scientists Jewish chemists Jewish pacifists Jews from Mandatory Palestine Jewish women scientists L'Oréal-UNESCO Awards for Women in Science laureates Members of the European Academy of Sciences and Arts Members of the European Molecular Biology Organization Members of the Israel Academy of Sciences and Humanities Foreign associates of the National Academy of Sciences Scientists from Jerusalem Scientists from Tel Aviv University of Chicago faculty Academic staff of Weizmann Institute of Science Weizmann Institute of Science alumni Wolf Prize in Chemistry laureates Women biologists Carnegie Mellon University fellows Massry Prize recipients Articles containing video clips 20th-century women scientists 21st-century women scientists Foreign members of the Royal Society Members of the German National Academy of Sciences Leopoldina
Ada Yonath
[ "Chemistry", "Materials_science", "Technology" ]
1,834
[ "Crystallographers", "Women Nobel laureates", "Crystallography", "Women in science and technology" ]
38,503,933
https://en.wikipedia.org/wiki/Alexander%20Boden
Alexander Boden (28 May 1913 – 18 December 1993) was a philanthropist, industrialist (manufacturing chemist), publisher (including education author and researcher), founder of the Boden Chair of Human Nutrition at the University of Sydney, a Fellow Australian Academy of Science 1982, a founder of Bioclone Australia, Hardman Chemicals and Science Press and was awarded Leighton Medal of Royal Australian Chemical Institute in 1986. He was educated at the University of Sydney (BSc 1933, Hon DSc 1984) and received an Order of Australia (AO) and he was also the author of A Handbook of Chemistry, initially published by the Shakespeare Head Press and later by his own Science Press. After he graduated, he joined a research laboratory, which he soon took over, and renamed it Hardman Australia. Hardman Australia was turned into a manufacturing company producing in particular DDT. In 1981 he formed Bioclone Australia, which exports diagnostic products. Alex was elected to the Australian Academy of Science on the nomination of Professor John Swan, upon nomination Professor Swan said: "Alex Boden was a man of remarkable talents, concealed by a modest, even humble, exterior. I never saw him angry. He was greatly admired as a man who had achieved much in life but whose ambition was to contribute to family, social and community welfare, to give rather than take, to be supportive of others, and above all to foster the advancement of science." Early years Alexander Boden was born on 28 May 1913, shortly after his parents William and Helena Boden arrived in Australia from Ireland. His parents established a drapery business in the main shopping centre of the Sydney suburb of Chatswood. Alex was the middle child between his two sisters. His father, William Boden, was born in Ballinasloe on the border of counties Galway and Roscommon. In Williams youth, he went to join his uncle in the latter's evidently prosperous drapery story in Magherafelt, County Londonderry. A surviving photograph of the staff of the store is impressive: some fifty men and women in starched collars and prim blouses stand in well-ordered ranks. The move to Australia in 1913 followed the emigration of William's two brothers and a sister. His mother, formerly Helena Isabella Hutchinson, a schoolteacher, came from Knockboy, near Broughshane, County Antrim, of a family of schoolteachers and clerics. Alex Boden's education was at Willoughby Public School and North Sydney Boys High School. His father's premises were owned by the pharmacists Washington H. Soul, Pattinson and Co. and one day, while the young Alex was still at school, his father asked his landlord what was the best career for a boy. ' Buyin' and sellin' ' was Dr. Pattinson's counsel. In a greatly expanded sense it could be said that Alex Boden followed this advice. Student years In 1929 Alex passed the Leaving Certificate with honours in Mathematics and Chemistry. An exhibition took him to the University of Sydney, where he enrolled in science. He was drawn to this area due to chemical experiments he undertook while at school. I can trace my interest in chemistry to my first chemical experiment in school, changing the colour of litmus paper. I took some paper home and spent an exciting afternoon changing it from pink to blue with vinegar and washing soda. This was something I could do without instruction or interference from others. The last sentence revealing the hallmark of his life – self-reliance. 
During his university years, Alex underwent extracurricular activity and set a possible record in ecumenism through his simultaneous membership of: the Student Christian Movement (he had been a Sunday school teacher at Willoughby Presbyterian church), of Professor John Anderson's notoriously subversive Freethought Society, and the Sydney University Regiment (Corporal 1931). He became a highly qualified Boy Scout leader. He spoke at the Sydney University Union's parliamentary-style Union Night debates and engaged in hockey and wrestling. Alex recorded in his notebooks with carefully marshalled tables of the books he had read and his opinions of them. In his first university year, he records reading, wholly or in part, about a hundred books. Representative entries from that year include Better Ballroom Dancing by Scott (75% read) with a note 'Correction of mistakes etc.'; Goodbye to All That by Robert Graves (all read) 'Good realistic. No censoring of language'; Handbook of Photography by Sinclair (most read) 'Pretty good but a bit old-fashioned'; Religion and Science by Draper (all read) 'V. readable'; English Regal Copper Coins, by Bamah (most read); 'Coins 16721860. No pics. may be good for reference'; La Vie des Abeilles by Maeterlinck (2/3 read) 'V.g. Hard French. Interesting and novel'; Communist Manifesto by Marx and Engels (all read) 'Quite fair. Rather old but still interesting'. Alex continued, with unabated assiduity and eclecticism, right through to the end of his fourth year – 400 titles, all similarly noted. He graduated with honours in the year 1933. While this account must shortly take up his business career, it will be convenient here to carry on with one of his subsequent extra-professional interests – the theatre. He joined The Playmakers and in 1934 made his debut in Crime Made Legal. Advance publicity noted that 'Alex Boden is a newcomer to the Society and is making his first appearance in the important part of Inspector Burke. His fine speaking voice and confident bearing are sure to find favour.' It must be assumed that they did, for he made at least a dozen subsequent appearances, mostly with Sydney's oldest repertory company, The Sydney Players. His notices were generally flattering, as in A Midsummer Night's Dream: 'As Theseus, Alex Boden was easily the most competent of last night's performers. He alone gave real dignity to his lines.' After 1936, however, the store of programmes and press clippings stops. Life had acquired other dimensions. He wrote in his notebook: Aged twenty-four and watching now the last grains of 1937 run through our fingers. A book [his Handbook of Chemistry] was born in January. Perhaps it will be worthy of rebirth. Almost a beginning on another. Finances are dull but they have been smoothed sufficiently to give a little takeoff for 1938. Sentiments not entirely controlled and showing no practical advance. See also List of Old Falconians (The notable alumni of North Sydney Boys High School) List of Fellows of the Australian Academy of Science Sydney University Regiment References External links Related Corporate Bodies Biographical Memoirs of Deceased Fellows – Encyclopedia of Australian Science Australian Academy of Science Related Events Boden Conference 1981 Published Resources Ross, I. G., 'Alexander Boden 1913–1993', Historical Records of Australian Science, vol. 11, no. 4, 1997, pp. 523–540 Online Resources Boden, Alexander, Trove, National Library of Australia, 2009. 
Sydney University School benefactors 1913 births 1993 deaths Australian chemists Fellows of the Australian Academy of Science People educated at North Sydney Boys High School Computational chemists Officers of the Order of Australia
Alexander Boden
[ "Chemistry" ]
1,482
[ "Computational chemistry", "Theoretical chemists", "Computational chemists" ]
38,505,581
https://en.wikipedia.org/wiki/TeraView
TeraView Limited, or TeraView, is a company that designs terahertz imaging and spectroscopy instruments and equipment for measurement and evaluation of pharmaceutical tablets, nanomaterials, ceramics and composites, integrated circuit chips and more. TeraView was co-founded by Michael Pepper (CSO) and Dr Don Arnone (CEO) as a spin-out of Toshiba Research Europe in April 2001. The company was set up to exploit the intellectual property and expertise developed in sourcing and detecting terahertz radiation (1 THz = 33.3 cm−1) using semiconductor technologies. Leading industry proponents of the technology sit on its Advisory Board, and TeraView maintains close links with the Cavendish Laboratory at the University of Cambridge, one of the research universities with an interest in terahertz techniques. It is also where Professor Pepper has held the position of Professor of Physics since 1987. Products TeraView has developed a number of instruments that harness the properties of terahertz radiation. Terahertz light has some interesting applications. Many common materials and living tissues are semi-transparent and have ‘terahertz fingerprints’, permitting them to be imaged, identified, and analyzed. Moreover, the non-ionizing properties of terahertz radiation and the relatively low power levels used indicate that it is safe. TeraPulse 4000 - spectrometer with modular sample compartment for transmission, attenuated total reflection analysis, cryostats, variable temperature cells and reflection modules for imaging. Spectral range – 0.06 THz to 5.0 THz (2 cm−1 – 150 cm−1) with a dynamic range of greater than 90 dB at peak. EOTPR 3000 - electro-optical terahertz pulsed reflectometer - spectrometer for identification and isolation of faults on interconnects of advanced packages, such as flip chip, package on package (PoP) and potentially through-silicon vias. By applying different software analysis packages, the same base technologies can be brought to bear on several applications. Research and development areas The company's primary focus of investigation is the development of terahertz light into a useful spectroscopic and imaging technique. The ‘terahertz gap’ – where until recently bright sources of light and sensitive means of detection were difficult to access – encompasses frequencies invisible to the naked eye in the electromagnetic spectrum, lying between microwave and infrared in the range from 0.3 to 3 THz. TeraView's existing instruments generate, detect and manipulate THz light and have been tested in a number of application areas. Pharmaceutical industry The applications of terahertz radiation in the pharmaceutical industry include nondestructive estimation of critical quality attributes in pharmaceutical products, such as crystalline structure, thickness and chemical composition analysis. TeraView has demonstrated that terahertz instruments can produce 3D coating thickness maps for multiple coating layers and structural feature models, allowing better understanding and control of product scale-up and manufacture. Medical imaging Due in part to its ability to recognize spectral fingerprints, terahertz pulsed imaging can be applied to provide contrast between different types of soft tissue. It is also a sensitive means of detecting the degree of water content and markers of cancer and other diseases. Attempts have been made to apply terahertz imaging to cancers such as breast cancer, as well as to other diseases in medicine, oral health care, and related areas. 
The company announced it has been cleared by the Medicines and Healthcare products Regulatory Agency (MHRA) to trial in-vivo terahertz spectroscopy for bio-medical research. The trials will be held in Guy's Hospital in London and aim to determine if the technology can be applied real-time for accurate removal of cancer tissue. Homeland security and defense Terahertz technology has the potential to safely, noninvasively and quickly image through different types of clothing and other concealment and confusion materials. It has been hypothesized that because THz light is absorbed by explosive materials at certain frequencies it may be possible to find unique 'terahertz fingerprints' that can be distinguished from clothing or other materials. This has never been proved in a practical sense. The company's technology has been used by the Naval Surface Warfare Command to test the presence of different types of plastic explosives through clothing, including PETN (Pentaerythritol tetranitrate). Material characterization THz spectroscopy can be used as a non-contact analytical method. The absorption coefficient and refractive index measured by terahertz pulsed spectroscopy can be used directly to obtain the high frequency-dependent complex conductivities of materials in the 0.1 – 3 THz (3 – 100 cm−1) region of the electromagnetic spectrum. The technology has been applied to some areas of solid state physics research such as semiconductors, high-temperature superconductors, terahertz metamaterials, carrier density dynamics, graphene, carbon nanotubes, magnetism and more. Nondestructive testing Terahertz light can be used as non-contact technique for analysis in material integrity studies. It has proved to be effective in nondestructive inspection of layers in paints and coatings, detecting structural defects in ceramic and composite materials and imaging the physical structure of paintings and manuscripts. The use of THz waves for non-destructive evaluation enables inspection of multi-layered structures and can identify abnormalities from foreign material inclusions, disbond and delamination, mechanical impact damage, heat damage, and water or hydraulic fluid ingression. The company's Chief Scientific Director, Sir Michael Pepper, explains that THz imaging can measure thickness across a substrate precisely and it can also obtain the density of the coating: "The radiation is reflected each time there is a change in material. The time of arrival is measured and then various algorithms complete the picture by developing 3D fine feature images and precise material identifications". Further research by the company and active collaboration with the University of Cambridge is aiming to develop a terahertz sensor that can be used to measure the quality of paint coatings on cars. Semiconductor industry Terahertz technology allows high resolution 3D imaging of semiconductor packages and integrated circuit devices. THz time-domain reflectometry (TDR) offers significant advantages in imaging resolution compared to existing fault isolation techniques and conventional millimetre wave systems. Working with Intel on the applications of THz technology for the semiconductor industry, TeraView developed a new technique which combines electro-optics and THz pulses in a non-destructive Electro Optical Terahertz Pulse Reflectometry (EOTPR) which operates at up to 2 THz with resolution of 10 μm for improved fault isolation and failure analysis process-flow studies. 
"The unique capabilities of terahertz TDR and its advantages over the conventional TDR have been recognized. With such revolutionary concept, innovative design and superior performance, EOTPR will become an essential tool for microelectronic package fault isolation and failure analysis." Yongming Cai, Zhiyong Wang, Rajen Dias, and Deepak Goyal, Intel Corporation. See also Terahertz radiation Terahertz time-domain spectroscopy Terahertz metamaterials Terahertz nondestructive evaluation Terahertz tomography Michael Pepper References External links TeraView Ltd TeraView Blog University of Cambridge, Department of Physics Cavendish Laboratory Terahertz technology
TeraView
[ "Physics" ]
1,526
[ "Spectrum (physical sciences)", "Electromagnetic spectrum", "Terahertz technology" ]
38,506,960
https://en.wikipedia.org/wiki/C12H15NO
The molecular formula C12H15NO (molar mass: 189.25 g/mol, exact mass: 189.1154 u) may refer to: 5-MAPB 6-MAPB Molecular formulas
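The quoted molar and exact masses follow from standard atomic weights and monoisotopic masses; a quick check is sketched below (the specific atomic-weight values are the usual IUPAC figures, not taken from the entry itself).

```python
# Average atomic weights (g/mol) and monoisotopic masses (u)
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

FORMULA = {"C": 12, "H": 15, "N": 1, "O": 1}  # C12H15NO

molar_mass = sum(n * ATOMIC_WEIGHT[el] for el, n in FORMULA.items())
exact_mass = sum(n * MONOISOTOPIC[el] for el, n in FORMULA.items())

print(f"molar mass : {molar_mass:.2f} g/mol")   # ~189.25
print(f"exact mass : {exact_mass:.4f} u")       # ~189.1154
```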
C12H15NO
[ "Physics", "Chemistry" ]
59
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
60,587,863
https://en.wikipedia.org/wiki/Pierre%20Deymier
Pierre A. Deymier is a researcher in phononics, acoustic metamaterials, and materials science. He is a Professor of Materials Science and Engineering and previously department head at the University of Arizona. He holds appointments with the applied mathematics graduate interdisciplinary program, BIO5 institute, and School of Sustainable Engineered Systems at the University of Arizona. More recently, he has proposed a novel approach akin to quantum computing using the properties of phonons rather than qubits, which he has dubbed "phi-bits" or "phase-bits". Biography Education Deymier received his engineer's degree in materials science in 1982 from the University of Montpellier in France and his Ph.D. in Materials Science & Engineering from MIT in 1985. His dissertation research was focused on computational materials science. He became assistant professor of materials science & engineering at the University of Arizona in 1985. Personal life His daughter, Alix Deymier, is a professor of biomedical engineering at the University of Connecticut. Publications Deymier has published over 180 peer-reviewed publications. Some of his most highly cited works are: Deymier, P. A. (Ed.). (2013). Acoustic metamaterials and phononic crystals (Vol. 173). Springer Science & Business Media. (Cited 714 times, according to Google Scholar) Vasseur, J. O., Deymier, P. A., Chenni, B., Djafari-Rouhani, B., Dobrzynski, L., & Prevost, D. (2001). Experimental and theoretical evidence for the existence of absolute acoustic band gaps in two-dimensional solid phononic crystals. Physical Review Letters, 86(14), 3012. (open access) (Cited 574 times, according to Google Scholar.) Sukhovich A, Merheb B, Muralidharan K, Vasseur JO, Pennec Y, Deymier PA, Page JH. Experimental and theoretical evidence for subwavelength imaging in phononic crystals. Physical Review Letters. 2009 Apr 17;102(15):154301 (open access) (Cited 314 times, according to Google Scholar.) Pennec Y, Vasseur JO, Djafari-Rouhani B, Dobrzyński L, Deymier PA. Two-dimensional phononic crystals: Examples and applications. Surface Science Reports. 2010 Aug 31;65(8):229-91. (Cited 491 times, according to Google Scholar.) Vasseur, J. O., Deymier, P. A., Djafari-Rouhani, B., Pennec, Y., & Hladky-Hennion, A. C. (2008). Absolute forbidden bands and waveguiding in two-dimensional phononic crystal plates. Physical Review B, 77(8), 085415. (open access) (Cited 307 times, according to Google Scholar.) Awards Felix Bloch Award, 2023. The prize honors individuals who have made “outstanding and sustained contributions in the field of phononics”. References External links Group Research Page - University of Arizona Living people Materials scientists and engineers University of Arizona faculty Year of birth missing (living people) Massachusetts Institute of Technology alumni University of Montpellier alumni
Pierre Deymier
[ "Materials_science", "Engineering" ]
695
[ "Materials scientists and engineers", "Materials science" ]
60,590,206
https://en.wikipedia.org/wiki/List%20of%20lime%20kilns
Historic or notable lime kilns include. Australia Lime Kiln Remains, Ipswich Pipers Creek Lime Kilns Raffan's Mill and Brick Bottle Kilns There were a number of lime kilns at Wool Bay, South Australia. One kiln remains and was listed along with the jetty under the name of Wool Bay Lime Kiln & Jetty on the South Australian Heritage Register on 28 November 1985. There also are or were lime kilns at: Adelaide Brighton Cement Anna Creek Station Blayney, New South Wales Bower, South Australia Claremont, Ipswich Coopers Creek, Victoria Galong, New South Wales Kingston and Arthur's Vale Historic Area Langshaw Marble Lime Works Marmor, Queensland New Farm, Queensland North Coogee Platina railway station Point Nepean Portland Cement Works Precinct Portland, New South Wales Quarry Amphitheatre Quartz Roasting Pits Complex South Fremantle, Western Australia Walkerville, Victoria Waurn Ponds, Victoria United Kingdom Annery kiln, Monkleigh, England Limekilns at Kiln Park, Pembrokeshire, Penally, Wales Cocking Lime Works, West Sussex, England Grove Lime Kiln, Isle of Portland, England Minera Limeworks, Wrexham, Wales Solva limekilns, Pembrokeshire, Wales There are or were lime kilns at many other places in the United Kingdom. United States See also Lime Kiln (disambiguation) :Category:Lime kilns in Canada :Category:Lime kilns in France :Category:Lime kilns in Germany :Category:Lime kilns in Hong Kong :Category:Lime kilns in Hungary :Category:Lime kilns in Ireland :Category:Lime kilns in Italy :Category:Lime kilns in Latvia :Category:Lime kilns in Portugal :Category:Lime kilns in Slovenia :Category:Lime kilns in South Africa :Category:Lime kilns in Sweden References lime kilns
List of lime kilns
[ "Chemistry", "Engineering" ]
409
[ "Lime kilns", "Kilns" ]
67,037,360
https://en.wikipedia.org/wiki/Molecular%20glue
Molecular glue refers to a class of chemical compounds or molecules that play a crucial role in binding and stabilizing protein-protein interactions in biological systems. These molecules act as "glue" by enhancing the affinity between proteins, ultimately influencing various cellular processes. Molecular glue compounds have gained significant attention in the fields of drug discovery, chemical biology, and fundamental research due to their potential to modulate protein interactions and thus impact various cellular pathways. They have unlocked avenues in medicine against targets previously thought to be "undruggable". History The concept of "molecular glue" originated in the late 20th century, with immunosuppressants like cyclosporine A (CsA) and FK506 identified as pioneering examples. CsA, discovered in 1971 during routine screening for antifungal antibiotics, exhibited immunosuppressive properties by inhibiting the peptidyl–prolyl isomerase activity of cyclophilin, ultimately preventing organ transplant rejection. By 1979, CsA was used clinically, and FK506 (tacrolimus), discovered in 1987 by Fujisawa, emerged as a more potent immunosuppressant. The ensuing 4-year race to understand CsA and FK506's mechanisms led to the identification of FKBP12 as a common binding partner, marking the birth of the "molecular glue" concept. The term molecular glue found its way into publications in 1992, highlighting the selective gluing of specific proteins by antigenic peptides, akin to immunosuppressants acting as docking assemblies. The term, however, remained esoteric and hidden from keyword searches. In the early 1990s, researchers delved into understanding the role of proximity in biological processes. The creation of synthetic "chemical inducers of proximity" (CIPs), such as FK1012, opened the door to more complex molecular glues. Rimiducid, a purposefully synthesized molecular glue, demonstrated its effectiveness in eliminating graft-versus-host disease by inducing dimerization of death-receptor fusion targets. The exploration of molecular glues took a significant turn in 1996 with the discovery that discodermolide stabilized the association of alpha and beta tubulin monomers, functioning as a "molecular clamp" rather than inducing neo-associations. In 2000, the revelation that a synthetic compound, synstab-A, could induce associations of native proteins marked a shift towards the discovery of non-natural molecular glues. In 2001, Kathleen Sakamoto, Craig M. Crews and Raymond J. Deshaies introduced the concept of PROTACs, which consist of a heterobifunctional molecule with a ligand of an E3 ubiquitin ligase linked to a ligand of a target protein. PROTACs are synthetic CIPs acting as protein degraders. In 2007, the term “molecular glue” became popularized after it was independently coined by Ning Zheng to describe the mechanism of action of auxin, a class of plant hormones regulating many aspects of plant growth and development. By promoting the interaction between a plant E3 ubiquitin ligase, TIR1, and its substrate proteins, auxin induces the degradation of a family of transcriptional repressors. Auxin is chemically known as indole-3-acetic acid and has a molecular weight of 175 daltons. Unlike PROTACs and immunosuppressants such as CsA and FK506, auxin is a chemically simple and monovalent compound with drug-like properties obeying Lipinski’s rule of five. 
With no detectable affinity to the polyubiquitination substrate proteins of TIR1, auxin leverages the intrinsic weak affinity between the E3 ligase and its substrate proteins to enable stable protein complex formation. The same mechanism of action is shared by jasmonate, another plant hormone involved in wound and stress responses. The term “molecular glue” has since been used, particularly in the context of targeted protein degradation, to specifically describe monovalent compounds with drug-like properties capable of promoting productive protein-protein interactions, instead of CIPs in general. In 2013, the mechanism of thalidomide analogs as molecular glue degraders had been revealed. Notably, thalidomide, discovered as a CRBN ligand in 2010, and lenalidomide enhance the binding of CK1α to the E3 ubiquitin ligase, solidifying their role as molecular glues. Subsequently, indisulam was identified as a molecular glue capable of degrading RBM39 by targeting DCAF15 in 2017. These compounds are considered molecular glues because of their monovalency and chemical simplicity, which are consistent with the definition proposed by Shiyun Cao and Ning Zheng. Analogous to auxin, these compounds are distinct from PROTACs, displaying no detectable affinity to the substrate proteins of the E3 ubiquitin ligases. The year 2020 saw the discovery of autophagic molecular degraders and the identification of BI-3802 as a molecular glue inducing the polymerization and degradation of BCL6. Additionally, chemogenomic screening revealed structurally diverse molecular glue degraders targeting cyclin K. The discovery that manumycin polyketides acted as molecular glues, fostering interactions between UBR7 and P53, further expanded the understanding of molecular glue functions. In recent years, the field of molecular glues has witnessed an explosion of discoveries targeting native proteins. Examples include synthetic FKBP12-binding glues like FKBP12-rapadocin, which targets the adenosine transporter SLC29A1. Thalidomide and lenalidomide, classified as immunomodulatory drugs (IMiDs), were identified as small-molecule glues inducing ubiquitination of transcription factors via E3 ligase complexes. Computational searches for molecular-glue degraders since 2020 have added novel probes to the ever-expanding landscape of molecular glues. Furthermore, computational methods are starting to shed light onto molecular glues mechanisms of action. The transformative power of molecular glues in medicine became evident as drugs like sandimmune, tacrolimus, sirolimus, thalidomide, lenalidomide, and taxotere proved effective. The concept of inducing protein associations has shown promise in gene therapy and has become a potent tool in understanding cell circuitry. As the field continues to advance, the discovery of new molecular glues offers the potential to reshape drug discovery and overcome previously labeled "undruggable" targets. The future of molecular glues holds promise for rewiring cellular circuitry and providing innovative solutions in precision medicine. Properties and mechanisms Molecular glue compounds are typically small molecules that can bridge interactions between proteins. They often have specific binding sites on their target proteins and can enhance the association between these proteins. They do so by changing the surfaces of the proteins, encouraging binding between them when they would not usually interact. 
Molecular glue can enhance the stability of protein complexes, making them more resistant to dissociation. This can have a profound impact on cellular processes, as many biological functions are carried out by protein complexes. By influencing protein-protein interactions, molecular glue can modify the functions, localization or stability of the target proteins. This can lead to both therapeutic and research applications. In the current era, molecular glues have become a more commonly utilized approach for targeted protein degradation, offering advantages over traditional small molecule drugs and PROTACs. The recognition of substrates by E3 ubiquitin ligases, governed by protein-protein interactions (PPIs), plays a critical role in cellular function. There is significant therapeutic potential in developing small molecules that modulate these interactions, especially in the context of hard-to-drug proteins. A recent study reported the identification and rational design of potent small molecules acting as molecular glues to enhance the interaction between an oncogenic transcription factor, β-Catenin, and its cognate E3 ligase, SCFβ-TrCP. These enhancers demonstrated the ability to potentiate ubiquitylation and induce the degradation of mutant β-Catenin both in vitro and in cellular systems. Unlike PROTACs, these drug-like small molecules insert into a naturally occurring PPI interface, optimizing contacts for both the substrate and ligase within a single molecular entity. Molecular glues offer a unique advantage in degrading non-ligand-bound proteins by promoting the PPI between ubiquitin ligase and the target protein. Notably, molecular glues exhibit superior therapeutic effects compared to small molecule drugs. This is attributed to their lower molecular weight, higher cell permeability, and better oral absorption, aligning with the "Five Rules for Drugs". In contrast, PROTACs face challenges such as high molecular weight, poor cell permeability, and unfavorable pharmacokinetic characteristics, hindering their clinical development. Recent advances in the field have led to the development of BCL6 and Cyclin K Degraders, which leverage both protein-ligand and protein-protein interfaces for tight complex formation. These molecular glue degrader drugs are characterized by their small size (<500 Da) and exhibit high affinity between the ligase and neosubstrate in the presence of the small molecule. The complementary nature of protein-protein interfaces suggests the potential for natural interactions between the two proteins even in the absence of the compound. The identification of molecular-glue-type degraders has typically occurred retrospectively and serendipitously, but recent chemical-profiling approaches aim to prospectively identify small molecules acting as molecular glues. Researchers are exploring alternative small molecules, like CR8, to induce ubiquitination of targets in a top-down approach for induced protein degradation. CR8, identified through correlation analysis, operates via protein degradation by inducing ubiquitination through a molecular glue-like mechanism. The study emphasizes the potential of small molecules beyond PROTACs for targeted protein degradation. There have also been reports of molecular glues that stabilize protein-RNA interactions and protein-lipid interactions. 
Applications Cancer therapy Molecular glue compounds have demonstrated significant potential in cancer treatment by influencing protein-protein interactions (PPIs) and subsequently modulating pathways promoting cancer growth. These compounds act as targeted protein degraders, contributing to the development of innovative cancer therapies. The high efficacy of small-molecule molecular glue compounds in cancer treatment is notable, as they can interact with and control multiple key protein targets involved in cancer etiology. This approach, with its wider range of action and ability to target "undruggable" proteins, holds promise for overcoming drug resistance and changing the landscape of drug development in cancer therapy. Neurodegenerative diseases Molecular glue compounds are being explored for their potential in influencing protein interactions associated with neurodegenerative diseases such as Alzheimer's and Parkinson's. By modulating these interactions, researchers aim to develop treatments that could slow or prevent the progression of these diseases. Additionally, the versatility of small-molecule molecular glue compounds in targeting various proteins implicated in disease mechanisms provides a valuable avenue for unraveling the complexities of neurodegenerative disorders. Antiviral research Molecular glue compounds, particularly those involved in targeted protein degradation (TPD), offer a novel strategy for inhibiting viral protein interactions and combating viral infections. Unlike traditional direct-acting antivirals (DAAs), TPD-based molecules exert their pharmacological activity through event-driven mechanisms, inducing target degradation. This unique approach can lead to prolonged pharmacodynamic efficacy with lower pharmacokinetic exposure, potentially reducing toxicity and the risk of antiviral resistance. The protein-protein interactions induced by TPD molecules may also enhance selectivity, making them a promising avenue for antiviral research. Chemical biology Molecular glue serves as a valuable tool in chemical biology, enabling scientists to manipulate and understand protein functions and interactions in a controlled manner. The emergence of targeted protein degradation as a modality in drug discovery has further expanded the applications of molecular glue in chemical biology. The ability of small-molecule molecular glue compounds to induce iterative cycles of target degradation provides researchers with a powerful method for studying protein-protein interactions and opens avenues for drug development in various human diseases. Challenges and future prospects While molecular glue compounds hold great potential in various fields, there are challenges to overcome. Ensuring the specificity of these compounds and minimizing off-target effects is essential. Additionally, understanding the long-term consequences of manipulating protein interactions is crucial for their safe and effective application in medicine. Ongoing research in molecular glue is unlocking new compounds and insights into their mechanisms. With an expanding understanding of protein-protein interactions, molecular glue holds significant promise across biology, medicine, and chemistry, potentially revolutionizing cellular processes and advancing innovative disease treatments. As this field progresses, it may open new therapeutic avenues and deepen our understanding of life's molecular intricacies. 
Examples Cyclophilin Cyclosporin [Cyclophilin A-Calcineurin] RMC-7977 [Cyclophilin A-KRAS] FKBP12 FK506 (tacrolimus) [FKBP12-Calcineurin] Rapamycin (sirolimus) [FKBP12-mTOR] WDB002 [FKBP12-CEP250] Auxin [TIR1-Aux/IAA] Other NST-628 [RAF-MEK] NVS-STG2 [STING] BIO-2007817 [Parkin-phosphoubiquitin] 14-3-3/ERα glues Degraders CRBN Lenalidomide [CRBN-IKZF1, IKZF3, CK1α] (see also thalidomide, pomalidomide, mezigdomide, iberdomide) NVP-DKY709 [CRBN-IKZF2] (see also PLX-4545) DEG-35 [CRBN-IKZF2, CK1α] SJ3149 [CRBN-IKZF1, IKZF3, CK1α] dWIZ [CRBN-WIZ] CC-90009 [CRBN-GSPT1] Other CR8 [CDK12-DDB1] PF-07208254 [BDK-BCKDH E2] E7820 [DCAF15-RBM39, RBM23] [see also indisulam, tasisulam] SRI-41315 [eRF1-ribosome] BI-3802 [BCL6] AMPTX- [BRD9-DCAF16] References Medicinal chemistry Biotechnology
Molecular glue
[ "Chemistry", "Biology" ]
3,072
[ "Biochemistry", "Biotechnology", "nan", "Medicinal chemistry" ]
67,041,304
https://en.wikipedia.org/wiki/Alkali%20sulfur%20liquid%20battery
Alkaline sulfur liquid battery (SLIQ) is a liquid battery which consists of only one rechargeable liquid, and a technology which can be used for grid storage. Battery chemistry and active material One of the most promising possibilities of enhancing battery energy storage is to use sulphur as the positive electrode. Lithium-sulphur batteries are a tempting solution due to sulphur having a high theoretical capacity (1675 mAh g−1), as well as being non-toxic, abundant, and very low in cost. The discharge reaction in a lithium-sulphur cell, when using elemental sulphur as the positive electrode, can be written in its simplified form as: S8 + 16Li → 8Li2S In reality, the reactions taking place are much more complicated and occur through several intermediates, collectively known as polysulphides. For the charging or discharging reactions to take place, carbon must be present as a catalytic current collector, allowing electrons to pass through. Carbon is used as a three-phase interface, giving a site where sulphur or polysulphides, lithium ions and electrons can all interact in one step. This greatly increases the rate of reaction, making lithium-sulphur cells a practical possibility, not just a theoretical one. For a long time, the exact nature of the intermediate reactions during the charging or discharging of the cell was left to speculation, due to the highly reactive nature of lithium polysulphides. This high reactivity makes it almost impossible to perform any type of analysis on the substrates, as they inevitably degrade within seconds. The voltage profile and the gravimetric capacity of a SLIQ battery closely resemble those of a lithium-sulphur cell. A SLIQ battery has four discrete stages during discharge, which appear clearly in a capacity-versus-voltage plot recorded during discharging. The first stage occurs between the voltages of 2.4 V and 2.1 V, with the sharp drop in voltage against capacity between 2.3 V and 2.1 V representing the second stage. The third stage is a gently sloping plateau between 2.1 V and 2.05 V, before the second sharp drop, the fourth stage, occurring between 2.05 V and 1.5 V. The reactions taking place in each stage of the discharge are listed below. The first-stage reactions are: S8 + 2e− → S8²⁻ S8 + 4e− → S7²⁻ + S²⁻ S8 + 4e− → S6²⁻ + S2²⁻ S8 + 4e− → S5²⁻ + S3²⁻ S8 + 4e− → 2S4²⁻ The second-stage reactions are: S8²⁻ + 2e− → S6²⁻ + S2²⁻ S8²⁻ + 2e− → S5²⁻ + S3²⁻ S8²⁻ + 2e− → 2S4²⁻ S7²⁻ + 2e− → S5²⁻ + S3²⁻ S6²⁻ + 2e− → 2S3²⁻ S5²⁻ + 2e− → S3²⁻ + S2²⁻ S4²⁻ + 2e− → 2S2²⁻ The third-stage reaction is: S3²⁻ + 2e− → S2²⁻ + S²⁻ The fourth-stage reaction is: S2²⁻ + 2e− → 2S²⁻ Longer-chain polysulphides (Li2Sx, where 4 ≤ x ≤ 8) are mainly generated in the first stage, with a smaller contribution being made by the second stage. This is important to note, as longer-chain polysulphides are responsible for one of the main issues in lithium-sulphur cells: polysulphide shuttling. SLIQ has been developed by solving the polysulphide shuttling issue using chemical, electrical and mechanical methods. The third and fourth stages of the discharge are where S²⁻ is produced. The corresponding product, Li2S, is insulating, and its formation can cause a passivation layer on the electrode, causing the sulphur to be under-utilised. The passivation layer conducts electrons even more poorly than elemental sulphur, a known insulator, so the sulphur must migrate to the current collector surface in order to react. 
The passivation layer further interferes with this process, as elemental sulphur (S8) does not easily move through Li2S, making the reaction kinetics very sluggish. SLIQ technology has been developed to provide practical solutions to these known problems. In this battery technology, the electrolyte and part of the cathode are converted to a continuously refreshing and free-flowing liquid, making this a different battery variant while solving some of the inherent problems explained above. The result is a refreshing polysulfide redox single liquid battery (SLIQ). In addition to low cost, the SLIQ has high energy density, a millisecond response time due to the use of Li-S chemistry, and a longer lifetime due to flushing and dosing techniques. The power and energy are independently scalable, giving complete flexibility: the size of the power stack defines the power of the battery and the amount of liquid in the tank defines the battery capacity. A high DC-DC efficiency of 96.8% has been demonstrated by the first demonstrator installed at Inverie on the Knoydart peninsula in Scotland, UK. Technologies such as the SLIQ are seen as important in the fight against the present climate crisis. Performance The alkaline sulfur liquid battery is an interesting concept due to its simplicity, low cost, durability, thermal stability (no thermal runaway), low carbon footprint, elimination of the need for rare-earth minerals for storage, and its applicability to transportation systems. The internal electrolytes and the catholyte get refreshed continuously, making the lifetime very long. This storage technology has a low carbon footprint per kWh of energy storage medium. A BEIS report describes this technology as a novel flow battery technology that is low-cost, long-lasting and easily scalable, and that can be used for very long term storage economically and sustainably. A demonstrator which has been operational since 2017 has shown an overall AC-AC round-trip efficiency of 93.15%, which is higher than that of other comparable battery storage systems. This technology is based on lithium-sulfur battery chemistry, which has a high theoretical energy density of 2600 Wh kg−1 and a theoretical capacity of 1675 mAh g−1. Therefore, the theoretical energy density and theoretical capacity of SLIQ technology can be very close to 2600 Wh kg−1 and 1675 mAh g−1 respectively. This alkali sulphur liquid battery has been included in an ARPA-E report on ‘The Cost and Performance Requirements for “Flexible” Advanced Nuclear Plants in Future Power Markets’ published on the official ARPA-E website. This ARPA-E report indicates that the SLIQ technology has a cost lower than $94/kWh with a lifetime exceeding 20 years, which makes this technology a candidate electrical storage technology for the fight against the climate crisis. Invention and awards The single liquid battery, or alkali sulfur liquid battery, was invented in 2013 by Pasidu Pallawela. According to the World Intellectual Property Organization (WIPO), Pasidu Pallawela and StorTera hold patent rights to this technology. This technology has been presented at several high-profile industrial energy conferences, such as the All-Energy conference and exhibition in the UK. As a business innovation, this technology was nominated for the MEL British Chamber of Commerce Awards in 2017 in the best business innovation category and won the category award. 
SLIQ also won several Rushlight Awards in 2020, winning the overall Rushlight Award, the Energy Environmental Group category award and the Energy Efficiency Group award. Prototypes and industrial applications The Sustainable Islands International website reports that a 30 kWh/8 kW prototype has been installed in Scotland to support a remote community and has been running successfully since 2013. This product has recently been selected by the Canadian and UK governments for the installation of large-scale SLIQ battery systems to support the grid and the storage of renewable generation. According to a news article in a British newspaper, this technology and its supporting electronics will demonstrate how such energy storage systems can increase the uptake of renewables, save money for customers and utilities, and accelerate carbon reductions by boosting the use of electric energy. The University of Strathclyde is leading a research project based on this technology aimed at reducing the cost and improving the performance of battery technologies for use in developing countries and emerging economies. With the development of this technology for developing countries, the Faraday Institution and the University of Strathclyde believe they can help communities with low or no connectivity to have reliable access to energy sources, bringing economic, social and environmental benefits to developing countries and emerging economies. In addition, this technology has been used to set up a smart energy network for Perth & Kinross Council to decarbonise all of its assets and to achieve net-zero status. References External links Pasidu Pallawela's Alkali sulfur liquid battery patent Sustainable technologies Electrochemical engineering Fuel cells Battery types
Alkali sulfur liquid battery
[ "Chemistry", "Engineering" ]
1,878
[ "Chemical engineering", "Electrical engineering", "Electrochemistry", "Electrochemical engineering" ]
64,105,317
https://en.wikipedia.org/wiki/Flortaucipir%20%2818F%29
{{DISPLAYTITLE:Flortaucipir (18F)}} Flortaucipir (18F), sold under the brand name Tauvid, is a radioactive diagnostic agent indicated for use with positron emission tomography (PET) imaging to image the brain. The most common adverse reactions include headache, injection site pain and increased blood pressure. Two proteins – tau and amyloid – are recognized as hallmarks of Alzheimer's disease. In people with Alzheimer's disease, pathological forms of tau proteins develop inside neurons in the brain, creating neurofibrillary tangles. After flortaucipir (18F) is administered intravenously, it binds to sites in the brain associated with this tau protein misfolding. The brain can then be imaged with a PET scan to help identify the presence of tau pathology. It is the first drug used to help image a distinctive characteristic of Alzheimer's disease in the brain called tau pathology. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication. Medical uses Flortaucipir (18F) is a radioactive diagnostic agent for adults with cognitive impairment who are being evaluated for Alzheimer's disease. It is indicated for positron emission tomography (PET) imaging of the brain to estimate the density and distribution of aggregated tau neurofibrillary tangles (NFTs), a primary marker of Alzheimer's disease. Flortaucipir (18F) is not indicated for use in the evaluation of people for chronic traumatic encephalopathy (CTE). Chemistry Chemically, flortaucipir F 18 is 7-(6-[F-18]fluoropyridin-3-yl)-5H-pyrido[4,3 b]indole. History Flortaucipir (aka 18F-T807) was discovered by the Siemens biomarker research group, headed by Hartmuth Kolb and Katrin Szardenings, who also conducted first in human trials. Flortaucipir (18F) was approved for medical use in the United States in May 2020. The safety and effectiveness of flortaucipir (18F) imaging was evaluated in two clinical studies. In each study, five evaluators read and interpreted the flortaucipir (18F) imaging. The evaluators were blinded to clinical information and interpreted the imaging as positive or negative. The first study enrolled 156 participants who were terminally ill and agreed to undergo flortaucipir (18F) imaging and participate in a post-mortem brain donation program. In 64 of the participants who died within nine months of the flortaucipir (18F) brain scan, evaluators' reading of the flortaucipir (18F) scan was compared to post-mortem readings from independent pathologists who evaluated the density and distribution of neurofibrillary tangles (NFTs) in the same brain. The study showed evaluators reading the flortaucipir (18F) images had a high probability of correctly evaluating participants with tau pathology and had an average-to-high probability of correctly evaluating participants without tau pathology. The second study included the same participants with terminal illness as the first study, plus 18 additional participants with terminal illness, and 159 participants with cognitive impairment being evaluated for Alzheimer's disease (the indicated patient population). The study gauged how well flortaucipir (18F) evaluators' readings agreed with each other's assessments of the readings. Perfect reader agreement would be 1, while no reader agreement would be 0. In this study, reader agreement was 0.87 across all 241 participants. 
In a separate subgroup analysis that included the 82 terminally ill participants diagnosed after death and the 159 participants with cognitive impairment, reader agreement was 0.90 for the participants in the indicated population and 0.82 in the terminally ill participants. The US Food and Drug Administration (FDA) approved flortaucipir (18F) based on evidence of 1921 participants from 19 trials conducted at 322 sites in the United States, Australia, Belgium, Canada, France, Japan, Netherlands and Poland. The ability of flortaucipir (18F) to detect tau pathology was assessed in participants with generally severe stages of dementia and may be lower in participants in earlier stages of cognitive decline than in the participants with terminal illness who were studied. Society and culture Legal status The US Food and Drug Administration (FDA) granted the application for flortaucipir (18F) priority review and it granted approval of Tauvid to Avid Radiopharmaceuticals, Inc. In June 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Tauvid, intended for the diagnosis of Alzheimer's disease. The applicant for this medicinal product is Eli Lilly Nederland B.V. Flortaucipir (18F) was approved for medical use in the European Union in August 2024. Names Flortaucipir (18F) is the international nonproprietary name (INN). References Further reading External links Alzheimer's disease Medicinal radiochemistry PET radiotracers Radiopharmaceuticals
Flortaucipir (18F)
[ "Chemistry" ]
1,135
[ "Medicinal radiochemistry", "PET radiotracers", "Radiopharmaceuticals", "Medicinal chemistry", "Chemicals in medicine" ]
64,110,135
https://en.wikipedia.org/wiki/Vinervine
Vinervine is a monoterpene indole alkaloid of the Vinca sub-group. It is a derivative of akuammicine, with one additional hydroxy (OH) group in the indole portion, hence it is also known as 12-hydroxyakuammicine. History The alkaloids are a large group of natural products which are classified according to the part-structure which members of a particular group contain. Vinervine is a monoterpene indole alkaloid of the Vinca sub-group which shares a common biosynthesis with other members, namely that they are derived from strictosidine. It was first characterised in 1964, and the structures of closely related materials, including akuammicine, were confirmed in 1983. Natural occurrence Vinervine is found in a variety of plants of the Apocynaceae family, including Vinca erecta, Tabernaemontana divaricata and several other flowering plant species that are native to Africa, Asia, and Europe. Biosynthesis As with other indole alkaloids, the biosynthesis of vinervine starts from the amino acid tryptophan. This is converted into strictosidine before further elaboration. Research Plant metabolites have long been studied for their biological activity and alkaloids in particular are major subjects for ethnobotanical research. However, vinervine has had little reported utility. See also Conophylline Vobtusine Rauwolscine Yohimbine References Further reading Indole alkaloids Alkaloids found in Apocynaceae Vinca alkaloids
Vinervine
[ "Chemistry" ]
338
[ "Alkaloids by chemical classification", "Indole alkaloids" ]
64,110,202
https://en.wikipedia.org/wiki/19%2C20-Dihydroervahanine%20A
19,20-Dihydroervahanine A is an alkaloid, a natural product which is found in the root of the Southeast Asian plant Tabernaemontana divaricata. It inhibits acetylcholinesterase in vitro more potently than galantamine. See also Coronaridine Ibogamine References Tryptamine alkaloids Acetylcholinesterase inhibitors
19,20-Dihydroervahanine A
[ "Chemistry" ]
84
[ "Tryptamine alkaloids", "Alkaloids by chemical classification" ]
64,117,353
https://en.wikipedia.org/wiki/Ytterbium%28II%29%20hydride
Ytterbium(II) hydride is the hydride of ytterbium with the chemical formula YbH2. In this compound, the ytterbium atom has an oxidation state of +2 and the hydrogen atoms have an oxidation state of -1. Its resistivity at room temperature is 10⁷ Ω·cm. Ytterbium hydride has high thermal stability. Production Ytterbium hydride can be produced by reacting ytterbium with hydrogen gas: Yb + H2 → YbH2 References Ytterbium(II) compounds Metal hydrides
Ytterbium(II) hydride
[ "Chemistry" ]
130
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
55,765,541
https://en.wikipedia.org/wiki/Eugene%20Terentjev
Eugene M. Terentjev (born 21 June 1959) is professor of Polymer physics at the University of Cambridge, and fellow of Queens' College where he is the Director of Studies in Natural Sciences. Terentjev earned his MSc in Physics from Moscow State University, and his PhD from Institute of Crystallography, Russian Academy of Sciences, Moscow. He then carried out postdoctoral research at Case Western Reserve University in Cleveland, Ohio, before moving to Cambridge in 1992. Terentjev's h-index is over 60, with over 16000 citations to his articles. His most notable contributions are in the scientific field of liquid crystal elastomers, and in biophysics. Selected publications "F1 rotary motor of ATP synthase is driven by the torsionally asymmetric drive shaft." O. Kulish, A.D. Wright, E.M. Terentjev: Sci. Rep., 6, 28180 (2016). "How cells feel: stochastic model for a molecular mechanosensor." M. Escude, M.K. Rigozzi, E.M. Terentjev: Biophys. J., 106, 124–133 (2014). "Mouldable liquid-crystalline elastomer actuators with exchangeable covalent bonds." Z. Pei, Y. Yang, Q. Chen, E.M. Terentjev, Y.Wei, Y. Ji: Nature Mater. 13, 36-41 (2013). "A chain mechanism for flagellum growth." L.D.B. Evans, S. Poulter, E.M. Terentjev, C. Hughes, G.M. Fraser: Nature, 504, 287-290 (2013). "Strength of nanotubes, filaments and nanowires from sonication-induced scission." Y.Y. Huang, T.P.J. Knowles and E.M. Terentjev: Adv. Mater. 21, 1–4 (2009). "Photo-mechanical actuation in polymer-nanotube composites." S.V. Ahir, E.M. Terentjev: Nature Mater. 4, 491–495 (2005). References External links Fellows of Queens' College, Cambridge Russian physicists Polymer scientists and engineers Scientists of the Cavendish Laboratory Living people 1959 births
Eugene Terentjev
[ "Chemistry", "Materials_science" ]
505
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
41,299,452
https://en.wikipedia.org/wiki/Min-entropy
The min-entropy, in information theory, is the smallest of the Rényi family of entropies, corresponding to the most conservative way of measuring the unpredictability of a set of outcomes, as the negative logarithm of the probability of the most likely outcome. The various Rényi entropies are all equal for a uniform distribution, but measure the unpredictability of a nonuniform distribution in different ways. The min-entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes) and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the number of outcomes with nonzero probability. As with the classical Shannon entropy and its quantum generalization, the von Neumann entropy, one can define a conditional version of min-entropy. The conditional quantum min-entropy is a one-shot, or conservative, analog of conditional quantum entropy. To interpret a conditional information measure, suppose Alice and Bob were to share a bipartite quantum state ρ_AB. Alice has access to system A and Bob to system B. The conditional entropy measures the average uncertainty Bob has about Alice's state upon sampling from his own system. The min-entropy can be interpreted as the distance of a state from a maximally entangled state. This concept is useful in quantum cryptography, in the context of privacy amplification. Definition for classical distributions If P = (p_1, ..., p_n) is a classical finite probability distribution, its min-entropy can be defined as H_min(P) = −log max_i p_i. One way to justify the name of the quantity is to compare it with the more standard definition of entropy, which reads H(P) = −∑_i p_i log p_i, and can thus be written concisely as the expectation value of −log p_i over the distribution. If instead of taking the expectation value of this quantity we take its minimum value, we get precisely the above definition of H_min(P). From an operational perspective, the min-entropy equals the negative logarithm of the probability of successfully guessing the outcome of a random draw from P. This is because it is optimal to guess the element with the largest probability and the chance of success equals the probability of that element. Definition for quantum states A natural way to generalize "min-entropy" from classical to quantum states is to leverage the simple observation that quantum states define classical probability distributions when measured in some basis. There is however the added difficulty that a single quantum state can result in infinitely many possible probability distributions, depending on how it is measured. A natural path is then, given a quantum state ρ, to still define H_min(ρ) as −log p_guess(ρ), but this time defining p_guess(ρ) as the maximum possible probability that can be obtained measuring ρ, maximizing over all possible projective measurements. Using this, one gets the operational definition that the min-entropy of ρ equals the negative logarithm of the probability of successfully guessing the outcome of any measurement of ρ. Formally, this leads to the definition H_min(ρ) = −log max_Π max_i Tr(Π_i ρ), where we are maximizing over the set of all projective measurements Π = (Π_i)_i, the Π_i represent the measurement outcomes in the POVM formalism, and Tr(Π_i ρ) is therefore the probability of observing the i-th outcome when the measurement is Π. 
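As a small numerical illustration of the classical definition just given (this example is not part of the original article, and the distribution is chosen arbitrarily), the min-entropy can be compared against the Shannon and Hartley entropies in a few lines of Python with NumPy:

import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])        # an arbitrary example distribution

min_entropy = -np.log2(p.max())                 # H_min = -log2(max_i p_i) = 1.0 bit
shannon_entropy = -np.sum(p * np.log2(p))       # H = 1.75 bits, never below H_min
max_entropy = np.log2(np.count_nonzero(p))      # Hartley entropy = 2.0 bits

# Operational reading: the best single guess succeeds with probability max_i p_i,
# so 2**(-min_entropy) recovers exactly that guessing probability (0.5 here).
p_guess = 2.0 ** (-min_entropy)

For this distribution the three values come out as 1, 1.75 and 2 bits, matching the ordering min-entropy ≤ Shannon entropy ≤ max-entropy described above.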
A more concise method to write the double maximization is to observe that any element of any POVM is a Hermitian operator Π such that 0 ≤ Π ≤ I, and thus we can equivalently directly maximize over these to get H_min(ρ) = −log max_{0 ≤ Π ≤ I} Tr(Π ρ). In fact, this maximization can be performed explicitly and the maximum is obtained when Π is the projection onto (any of) the largest eigenvalue(s) of ρ. We thus get yet another expression for the min-entropy as H_min(ρ) = −log ‖ρ‖_op, remembering that the operator norm of a Hermitian positive semidefinite operator equals its largest eigenvalue. Conditional entropies Let ρ_AB be a bipartite density operator on the space H_A ⊗ H_B. The min-entropy of A conditioned on B is defined to be H_min(A|B)_ρ ≡ −inf_{σ_B} D_max(ρ_AB ‖ I_A ⊗ σ_B), where the infimum ranges over all density operators σ_B on the space H_B. The measure D_max is the maximum relative entropy, defined as D_max(ρ ‖ σ) = inf{λ : ρ ≤ 2^λ σ}. The smooth min-entropy is defined in terms of the min-entropy: H_min^ε(A|B)_ρ ≡ sup_{ρ'} H_min(A|B)_{ρ'}, where the supremum ranges over density operators ρ' which are ε-close to ρ. This measure of ε-closeness is defined in terms of the purified distance P(ρ, σ) = √(1 − F(ρ, σ)²), where F(ρ, σ) is the fidelity measure. These quantities can be seen as generalizations of the von Neumann entropy. Indeed, the von Neumann entropy can be expressed as S(A|B)_ρ = lim_{ε→0} lim_{n→∞} (1/n) H_min^ε(Aⁿ|Bⁿ)_{ρ⊗ⁿ}. This is called the fully quantum asymptotic equipartition theorem. The smoothed entropies share many interesting properties with the von Neumann entropy. For example, the smooth min-entropy satisfies a data-processing inequality: H_min^ε(A|B)_ρ ≥ H_min^ε(A|BC)_ρ. Operational interpretation of smoothed min-entropy Henceforth, we shall drop the subscript from the min-entropy when it is obvious from the context on what state it is evaluated. Min-entropy as uncertainty about classical information Suppose an agent had access to a quantum system B whose state depends on some classical variable X. Furthermore, suppose that each of its elements x is distributed according to some distribution P_X(x). This can be described by the following state over the system XB: ρ_XB = ∑_x P_X(x) |x⟩⟨x| ⊗ ρ_B^x, where the |x⟩ form an orthonormal basis. We would like to know what the agent can learn about the classical variable X. Let p_g(X|B) be the probability that the agent guesses X when using an optimal measurement strategy, p_g(X|B) = ∑_x P_X(x) Tr(E_x ρ_B^x), where {E_x} is the POVM that maximizes this expression. It can be shown that this optimum can be expressed in terms of the min-entropy as p_g(X|B) = 2^(−H_min(X|B)). If the state ρ_XB is a product state, i.e. ρ_XB = σ_X ⊗ τ_B for some density operators σ_X and τ_B, then there is no correlation between the systems X and B. In this case, it turns out that 2^(−H_min(X|B)) = max_x P_X(x). Since the conditional min-entropy is always smaller than the conditional von Neumann entropy, it follows that p_g(X|B) ≥ 2^(−S(X|B)). Min-entropy as overlap with the maximally entangled state The maximally entangled state |φ⁺⟩ on a bipartite system H_A ⊗ H_B of dimension d × d is defined as |φ⁺⟩ = (1/√d) ∑_i |i⟩_A |i⟩_B, where the |i⟩_A and |i⟩_B form an orthonormal basis for the spaces A and B respectively. For a bipartite quantum state ρ_AB, we define the maximum overlap with the maximally entangled state as q_c(A|B) = d_A max_E F((I_A ⊗ E)(ρ_AB), |φ⁺⟩⟨φ⁺|)², where the maximum is over all CPTP operations E and d_A is the dimension of subsystem A. This is a measure of how correlated the state ρ_AB is. It can be shown that q_c(A|B) = 2^(−H_min(A|B)). If the information contained in A is classical, this reduces to the expression above for the guessing probability. Proof of operational characterization of min-entropy The proof is from a 2008 paper by König, Schaffner and Renner. It involves the machinery of semidefinite programs. Suppose we are given some bipartite density operator ρ_AB. From the definition of the min-entropy, we have This can be re-written as subject to the conditions We notice that the infimum is taken over compact sets and hence can be replaced by a minimum. This can then be expressed succinctly as a semidefinite program. 
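As a purely illustrative numerical sketch (not part of the original article), the conditional min-entropy can also be evaluated with an off-the-shelf convex-optimization package. The sketch below assumes the reformulation H_min(A|B)_ρ = −log2 min{Tr(X_B) : X_B ≥ 0, I_A ⊗ X_B ≥ ρ_AB}, and assumes Python with NumPy and cvxpy is available; the test state is chosen arbitrarily for the demonstration.

import numpy as np
import cvxpy as cp

def conditional_min_entropy(rho_ab, d_a, d_b):
    # SDP: minimize Tr(X) subject to X >= 0 and I_A (tensor) X >= rho_AB
    X = cp.Variable((d_b, d_b), hermitian=True)
    constraints = [X >> 0, cp.kron(np.eye(d_a), X) >> rho_ab]
    problem = cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints)
    problem.solve()
    return -np.log2(problem.value)

# Maximally entangled two-qubit state: its conditional min-entropy
# should come out close to -1, i.e. -log2 of the dimension of A.
phi = np.zeros((4, 1))
phi[0, 0] = phi[3, 0] = 1 / np.sqrt(2)
print(conditional_min_entropy(phi @ phi.T, 2, 2))

The minimization in this sketch plays the role of the primal problem that is developed analytically below.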
Consider the primal problem This primal problem can also be fully specified by the matrices where is the adjoint of the partial trace over . The action of on operators on can be written as We can express the dual problem as a maximization over operators on the space as Using the Choi–Jamiołkowski isomorphism, we can define the channel such that where the bell state is defined over the space . This means that we can express the objective function of the dual problem as as desired. Notice that in the event that the system is a partly classical state as above, then the quantity that we are after reduces to We can interpret as a guessing strategy and this then reduces to the interpretation given above where an adversary wants to find the string given access to quantum information via system . See also von Neumann entropy Generalized relative entropy max-entropy References Quantum mechanical entropy
Min-entropy
[ "Physics" ]
1,520
[ "Quantum mechanical entropy", "Entropy", "Physical quantities" ]
41,299,466
https://en.wikipedia.org/wiki/Generalized%20relative%20entropy
Generalized relative entropy (ε-relative entropy) is a measure of dissimilarity between two quantum states. It is a "one-shot" analogue of quantum relative entropy and shares many properties of the latter quantity. In the study of quantum information theory, we typically assume that information processing tasks are repeated multiple times, independently. The corresponding information-theoretic notions are therefore defined in the asymptotic limit. The quintessential entropy measure, von Neumann entropy, is one such notion. In contrast, the study of one-shot quantum information theory is concerned with information processing when a task is conducted only once. New entropic measures emerge in this scenario, as traditional notions cease to give a precise characterization of resource requirements. The ε-relative entropy is one such particularly interesting measure. In the asymptotic scenario, relative entropy acts as a parent quantity for other measures besides being an important measure itself. Similarly, the ε-relative entropy functions as a parent quantity for other measures in the one-shot scenario. Definition To motivate the definition of the ε-relative entropy, consider the information processing task of hypothesis testing. In hypothesis testing, we wish to devise a strategy to distinguish between two density operators ρ and σ. A strategy is a POVM with elements Q and I − Q. The probability that the strategy produces a correct guess on input ρ is given by Tr(Qρ), and the probability that it produces a wrong guess is given by Tr(Qσ). The ε-relative entropy captures the minimum probability of error when the state is σ, given that the success probability for ρ is at least a fixed threshold. For a given ε, the ε-relative entropy between two quantum states ρ and σ is defined as From the definition, it is clear that . This inequality is saturated if and only if , as shown below. Relationship to the trace distance Suppose the trace distance between two density operators ρ and σ is For , it holds that a) In particular, this implies the following analogue of the Pinsker inequality b) Furthermore, the proposition implies that for any , if and only if , inheriting this property from the trace distance. This result and its proof can be found in Dupuis et al. Proof of inequality a) Upper bound: Trace distance can be written as This maximum is achieved when is the orthogonal projector onto the positive eigenspace of . For any POVM element we have so that if , we have From the definition of the ε-relative entropy, we get Lower bound: Let be the orthogonal projection onto the positive eigenspace of , and let be the following convex combination of and : where This means and thus Moreover, Using , our choice of , and finally the definition of , we can re-write this as Hence Proof of inequality b) To derive this Pinsker-like inequality, observe that Alternative proof of the data processing inequality A fundamental property of von Neumann entropy is strong subadditivity. Let S(ρ) denote the von Neumann entropy of the quantum state ρ, and let ρ_ABC be a quantum state on the tensor product Hilbert space H_A ⊗ H_B ⊗ H_C. Strong subadditivity states that S(ρ_AB) + S(ρ_BC) ≥ S(ρ_ABC) + S(ρ_B), where ρ_AB, ρ_BC and ρ_B refer to the reduced density matrices on the spaces indicated by the subscripts. When re-written in terms of mutual information, this inequality has an intuitive interpretation; it states that the information content in a system cannot increase by the action of a local quantum operation on that system. 
In this form, it is better known as the data processing inequality, and is equivalent to the monotonicity of relative entropy under quantum operations: S(ρ ‖ σ) ≥ S(E(ρ) ‖ E(σ)) for every CPTP map E, where S(ρ ‖ σ) denotes the relative entropy of the quantum states ρ and σ. It is readily seen that the ε-relative entropy also obeys monotonicity under quantum operations: , for any CPTP map . To see this, suppose we have a POVM to distinguish between and such that . We construct a new POVM to distinguish between and . Since the adjoint of any CPTP map is also positive and unital, this is a valid POVM. Note that , where is the POVM that achieves . Not only is this interesting in itself, but it also gives us the following alternative method to prove the data processing inequality. By the quantum analogue of Stein's lemma, where the minimum is taken over such that Applying the data processing inequality to the states and with the CPTP map , we get Dividing by on either side and taking the limit as , we get the desired result. See also Entropic value at risk Quantum relative entropy Strong subadditivity Classical information theory Min-entropy References Quantum mechanical entropy
Generalized relative entropy
[ "Physics" ]
896
[ "Quantum mechanical entropy", "Entropy", "Physical quantities" ]
52,865,540
https://en.wikipedia.org/wiki/Bioremediation%20of%20polychlorinated%20biphenyls
Polychlorinated biphenyls, or PCBs, are a type of chemical that was widely used in the 1960s and 1970s, and which are a contamination source of soil and water. They are fairly stable and therefore persistent in the environment. Bioremediation of PCBs is the use of microorganisms to degrade PCBs from contaminated sites, relying on the co-metabolism of multiple microorganisms. Anaerobic microorganisms dechlorinate PCBs first, and other microorganisms that are capable of carrying out the BP pathway can break down the dechlorinated PCBs to usable intermediates like acetyl-CoA or carbon dioxide. If no BP pathway-capable microorganisms are present, dechlorinated PCBs can be mineralized with the help of fungi and plants. However, there are multiple limiting factors for this co-metabolism. Overview PCBs Polychlorinated biphenyls (PCBs) are various biphenyl-based artificial products that were widely used as dielectric fluids, industrial coolants, and lubricants in the 1960s and 1970s. There is no evidence that their synthesis occurs naturally. They are classified as persistent organic pollutants. PCBs share the basic chemical structure of biphenyl, with one or more of the hydrogen atoms on the aromatic rings replaced by chlorine atoms. PCBs are viscous liquids at normal temperature and have poor solubility in water. The aromatic hydrocarbon structure gives PCBs relatively high molecular stability. The chlorine substitution further reinforces their insolubility and chemical stability. Hence, the degradation of PCBs in the natural environment is very slow, taking from 3 to 37 years depending on the number of chlorine substitutions and their positions. Bioremediation Bioremediation is a waste removal method that uses microorganisms to degrade or remove wastes like organic waste and heavy metals from contaminated sites, including both soil and water. The advantages of bioremediation are that it is environment-friendly, inexpensive and can remove multiple wastes simultaneously compared with traditional chemical and physical processes. Degradation Process Various microorganisms are involved in a two-stage process of degradation of PCBs, which happens in aerobic and anaerobic environments. Degrading PCBs is similar to the degradation of biphenyl. However, the chlorines on PCBs prevent them from being utilized as a substrate of biphenyl degradation. Due to their high chemical stability, PCBs cannot be used as energy sources. However, due to the chlorination, PCBs can be used as electron acceptors in anaerobic respiration to store energy, which is also the first stage of the degradation pathway, reductive dechlorination. Once the PCBs are dechlorinated to a certain degree, usually fewer than five chlorines present in the structure and one aromatic ring bearing no chlorine, they can undergo the biphenyl degradation pathway (BP pathway) to be degraded to accessible carbon or CO2 in the aerobic environment. The BP pathway utilizes a series of enzymes (BphA, B, C, D, E, F, G) to convert biphenyl to TCA cycle intermediates (pyruvate and acetyl-CoA) and benzoate. However, there are few microorganisms that can dechlorinate the substrate under natural conditions. Even with selective media, the accumulation of PCB-dechlorinating microorganisms is still slow, which is one reason for the slow degradation rate. As a result, PCBs usually go through a co-metabolism pathway that involves different microorganism species. 
Generally speaking, there are four steps in this process: In order for PCBs to enter the cell, they firstly need to be solubilized. PCBs are dechlorinated by anaerobic bacteria, and the metabolites are then transported to aerobic bacteria or fungi through a biofilm. The presence of PCB metabolites triggers the expression of enzymes in the BP pathway. PCBs are broken down to acetyl-CoA, which can then be utilized, or to carbon dioxide. Limiting Factors PCBs entering the cells PCBs have low water solubility, so they adhere tightly to soil and cannot be easily accessed by bacteria. Especially if the contaminated site has been exposed to PCBs for a long period, PCBs can become integrated into soil or sediment matrices, further decreasing their bioavailability. Some surfactants can help solubilize PCBs but do not increase the rate of PCB degradation. However, if PCBs bind too tightly to the surfactants, this process cannot promote the absorption of PCBs and may even lower it. Also, many surfactants have been proven to be toxic to cells, and the high cost of surfactants is another issue. PCBs properties PCBs are toxic to bacterial communities above 1000 mg/kg. However, if the concentration is too low (lower than 50 mg/kg), the degradation slows down significantly, for there is not enough material to stimulate the expression of required genes and support the growth of competent microorganisms. PCBs include various compounds with slightly different structures. Those slight differences make big differences in metabolic rate. Generally speaking, the more chlorines in a PCB molecule, the harder it is to degrade. In particular, microorganisms cannot degrade di- and tetra-ortho-substituted congeners well. It is possible that those structures prevent enzymes from accessing reaction sites. Soil and sediment characteristics First, the soil and sediment structures will determine how tightly PCBs adhere to them and affect the absorption of PCBs into cells. The PCBs’ availability suffers from increasing organic carbon and clay content, for they promote the absorption of PCBs into soil or sediment matrices. Second, the soil contains the necessary nutrients for the growth of microbes and provides anaerobic and aerobic environments. Finally, the local microbial population has significant impacts on the rate of degradation of PCBs, which varies based on the microbial strains and their activities. If there is no history of exposure to PCBs, it may take months for microbes to activate their ability to dechlorinate PCBs and break them down. Gene expression and bottleneck effect of metabolites PCBs or biphenyl cannot provide energy for microbes, so they cannot serve as primary energy and carbon sources. As stated before, it sometimes takes months for microbes to activate their genes for dechlorination after the first exposure to PCBs. It has been proposed to use analogs to promote the activation of these genes. However, even after the metabolic pathway is activated, the intermediates of the pathway create a bottleneck effect due to their toxicity. Also, there is the possibility that the BP pathway leads to protoanemonin, which is a dead-end metabolite that cannot be utilized by cells. Due to the high energy cost of this pathway, if no preferred energy source is present in the system, cells will not activate this pathway. References Biochemistry Bioremediation Environmental microbiology
Bioremediation of polychlorinated biphenyls
[ "Chemistry", "Biology", "Environmental_science" ]
1,486
[ "Environmental soil science", "Biodegradation", "Ecological techniques", "nan", "Bioremediation", "Biochemistry", "Environmental microbiology" ]
52,866,154
https://en.wikipedia.org/wiki/C16H8
{{DISPLAYTITLE:C16H8}} The molecular formula C16H8 (molar mass: 200.24 g/mol, exact mass: 200.0626 u) may refer to: Bicalicene Quadrannulene See also List of compounds with carbon number 16 Molecular formulas
C16H8
[ "Physics", "Chemistry" ]
66
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
52,866,970
https://en.wikipedia.org/wiki/Histone%20variants
Histone variants are proteins that substitute for the core canonical histones (H3, H4, H2A, H2B) in nucleosomes in eukaryotes and often confer specific structural and functional features. The term might also include a set of linker histone (H1) variants, which lack a distinct canonical isoform. The differences between the core canonical histones and their variants can be summarized as follows: (1) canonical histones are replication-dependent and are expressed during the S-phase of cell cycle whereas histone variants are replication-independent and are expressed during the whole cell cycle; (2) in animals, the genes encoding canonical histones are typically clustered along the chromosome, are present in multiple copies and are among the most conserved proteins known, whereas histone variants are often single-copy genes and show high degree of variation among species; (3) canonical histone genes lack introns and use a stem loop structure at the 3’ end of their mRNA, whereas histone variant genes may have introns and their mRNA tail is usually polyadenylated. Complex multicellular organisms typically have a large number of histone variants providing a variety of different functions. Recent data are accumulating about the roles of diverse histone variants highlighting the functional links between variants and the delicate regulation of organism development. Histone variants nomenclature Different names historically assigned to homologous proteins in different species complicate the nomenclature of histone variants. A recently suggested unified nomenclature of histone variants follows phylogeny-based approach to naming the variants. According to this nomenclature, letter suffixes or prefixes are mainly used to denote structurally distinct monophyletic clades of a histone family (e.g. H2A.Z, H2B.W, subH2B). Number suffixes are assumed to be species-specific (e.g. H1.1), but are encouraged to be used consistently between species where unique orthologies are clear. However, due to historical reasons naming of certain variants may still deviate from these rules. Variants of histone H3 Throughout eukaryotes the most common histone H3 variants are H3.3 and centromeric H3 variant (cenH3, called also CENPA in humans). Well studied species specific variants include H3.1, H3.2, TS H3.4 (mammals), H3.5 (hominids), H3.Y (primates). Except for cenH3 histone, H3 variants are highly sequence conserved differing only by a few amino acids. Histone H3.3 has been found to play an important role in maintaining genome integrity during the mammalian development. Variants of histone H4 Histone H4 is one of the slowest evolving proteins with no functional variants in the majority of species. The reason for a lack of sequence variants remains unclear. Trypanosoma are known to have a variant of H4 named H4.V. In Drosophila there are H4 replacement genes that are constitutively expressed throughout the cell cycle that encode proteins that are identical in sequence to the major H4. Variants of histone H2A Histone H2A has the highest number of known variants, some of which are relatively well characterized. H2A.X is the most common H2A variant, with the defining sequence motif ‘SQ(E/D)Φ’ (where Φ-represents a hydrophobic residue, usually Tyr in mammals). It becomes phosphorylated during the DNA damage response, chromatin remodeling, and X-chromosome inactivation in somatic cells. 
H2A.X and canonical H2A have diverged several times in phylogenetic history, but each H2A.X version is characterized by similar structure and function, suggesting it may represent the ancestral state. H2A.Z regulates transcription, DNA repair, suppression of antisense RNA, and RNA Polymerase II recruitment. Notable features of H2A.Z include a sequence motif ‘DEELD,’ a one amino acid insertion in L1-loop, and a one amino acid deletion in the docking domain relative to canonical H2A. Variant H2A.Z.2 was suggested to be driving the progression of malignant melanoma. Canonical H2A can be exchanged in nucleosomes for H2A.Z with special remodeling enzymes. macroH2A contains a histone fold domain and an extra, long C-terminal macro domain which can bind poly-ADP-ribose. This histone variant is used in X-inactivation and transcriptional regulation. Structures of both domains are available, but the inter-domain linker is too flexible to be crystallized. H2A.B (Barr body deficient variant) is a rapidly evolving mammal specific variant, known for its involvement in spermatogenesis. H2A.B has a shortened docking domain, which wraps around a short DNA region. H2A.L and H2A.P variants are closely related to H2A.B, but are less studied. H2A.W is a plant specific variant with SPKK motifs at the N-terminus with a putative minor-groove-binding activity. H2A.1 is a mammalian testis, oocyte and zygote specific variant. It can preferentially dimerize with H2B.1. It is so far characterized only in mouse, but a similar gene in human is available which is located at the end of the largest histone gene cluster. Currently other less extensively studied H2A variants are starting to emerge such as H2A.J. Variants of histone H2B H2B histone type is known to have a limited number of variants at least in mammals, apicomplexa and sea urchins. H2B.1 is a testis, oocyte and zygote specific variant that forms subnucleosomal particles, at least, in spermatids. It can dimerize with H2A.L and H2A.1. H2B.W is involved in spermatogenesis, telomere associated functions in sperm and is found in spermatogenic cells. It is characterized by the extension of the N-terminal tail. subH2B participates in regulation of spermiogenesis and is found in non-nucleosomal particle in the subacrosome of spermatozoa. This variant has a bipartite nuclear localization signal. H2B.Z is an apicomplexan specific variant that is known to interact with H2A.Z. ‘sperm H2B’ is a putative group that contains sperm H2B histones from sea and sand urchins and potentially is common for Echinacea. Recently discovered variant H2B.E is involved in the regulation of olfactory neuron function in mice. Databases and resources "HistoneDB 2.0 - with variants", a database of histones and their variants maintained by National Center for Biotechnology Information, currently serves as the most comprehensive manually curated resource on histones and their variants that follows the new unified phylogeny-based nomenclature of histone variants. "Histome: The Histone Infobase" is manually curated database of histone variants in humans and associated post-translational modifications as well as modifying enzymes. MS_HistoneDB is a proteomics-oriented manually curated databases for mouse and human histone variants. References External links HistoneDB 2.0 - Database of histones and variants at NCBI Proteins
Histone variants
[ "Chemistry" ]
1,591
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
42,701,243
https://en.wikipedia.org/wiki/Property%20%28mathematics%29
In mathematics, a property is any characteristic that applies to a given set. Rigorously, a property p defined for all elements of a set X is usually defined as a function p: X → {true, false}, which is true whenever the property holds; or, equivalently, as the subset of X for which p holds, i.e. the set {x | p(x) = true}; p is its indicator function. However, it may be objected that the rigorous definition defines merely the extension of a property, and says nothing about what causes the property to hold for exactly those values. Examples Of objects: Parity is the property of an integer of being even or odd For more examples, see :Category:Algebraic properties of elements. Of operations: associative property commutative property of binary operations between real and complex numbers distributive property For more examples, see :Category:Properties of binary operations. See also Unary relation References Mathematical terminology Mathematical relations
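To make the definition given above concrete, here is a small illustrative sketch in Python (not part of the original article): the parity property of integers expressed both as a Boolean-valued function and as the corresponding subset of a finite set X, whose indicator function the property is.

# A property as a Boolean-valued function p: X -> {true, false}
def is_even(n):
    return n % 2 == 0

# Equivalently, the subset of X on which the property holds
X = range(10)
evens = {x for x in X if is_even(x)}     # {0, 2, 4, 6, 8}

# is_even behaves as the indicator function of that subset
assert all(is_even(x) == (x in evens) for x in X)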
Property (mathematics)
[ "Mathematics" ]
207
[ "Mathematical analysis", "Predicate logic", "Basic concepts in set theory", "Mathematical relations", "nan" ]
42,705,189
https://en.wikipedia.org/wiki/BTTC%20Centre
BTTC Centre is a Class A 12-story green building located at Ortigas Avenue corner Roosevelt Avenue, Greenhills, San Juan, Metro Manila, Philippines. It is the first green building in Greenhills, San Juan to receive a Gold pre-certification for Core & Shell under LEED. Developed by Hantex Corporation, it is an office, commercial and retail type of property with a floor plate of 1,384 square meters per floor. BTTC Centre is also among the 58 projects currently registered for LEED certification, together with the Zuellig Building in Makati and the Megaworld 8 Campus Building and Wells Fargo Headquarters, which are both in Bonifacio Global City. It is also an IT-Center PEZA Certified Building. Design and features BTTC Centre is managed by real estate services firm CBRE Philippines. Completed in December 2012, the building has maintenance systems and recycles water with its built-in sewerage treatment plant, and is installed with double-glazed glass to provide abundant entry of light while cutting approximately 70% of the heat from outside, keeping the building cool at all times. The BTTC Centre was designed by ADGo Architecture and Design, Inc., headed by its principal architect, Architect Daniel Go. See also San Juan References Sustainable building Buildings and structures in San Juan, Metro Manila Office buildings in Metro Manila
BTTC Centre
[ "Engineering" ]
277
[ "Building engineering", "Sustainable building", "Construction" ]
42,706,145
https://en.wikipedia.org/wiki/Sulfate%20attack%20in%20concrete%20and%20mortar
Cement hydration and strength development mainly depend on two silicate phases: tricalcium silicate (C3S) (alite), and dicalcium silicate (C2S) (belite). Upon hydration, the main reaction products are calcium silicate hydrates (C-S-H) and calcium hydroxide Ca(OH)2, written as CH in the cement chemist notation. C-S-H is the phase playing the role of the glue in the hardened cement paste and responsible for its cohesion. Cement also contains two aluminate phases: C3A and C4AF, respectively the tricalcium aluminate and the tetracalcium aluminoferrite. C3A hydration products are AFm, a calcium aluminoferrite monosulfate, and ettringite, a calcium aluminoferrite trisulfate (AFt). C4AF hydrates as hydrogarnet and ferrous ettringite. Sulfate attack typically happens to ground floor slabs in contact with soils containing a source of sulfates. Sulfates dissolved by ground moisture migrate into the concrete of the slab, where they react with different mineral phases of the hardened cement paste. The attack arises from soils containing sulfate ions, supplied by soluble and hygroscopic salts such as MgSO4 or Na2SO4. The tricalcium aluminate (C3A) hydrates are the first to interact with sulfate ions to form ettringite (AFt). Ettringite crystallizes into small acicular needles slowly growing in the concrete pores. Once the pores are completely filled, ettringite can develop a high crystallization pressure inside the pores, exerting a considerable tensile stress in the concrete matrix and causing the formation of cracks. Ultimately, Ca2+ ions in equilibrium with portlandite (Ca(OH)2) and C-S-H and dissolved in the concrete interstitial water can also react with sulfate ions to precipitate CaSO4·2H2O (gypsum). A fraction of the sulfate ions can also be trapped, or sorbed, into the layered structure of C-S-H. These successive reactions lead to the precipitation of expansive mineral phases inside the concrete porosity, responsible for the concrete degradation, cracks and ultimately the failure of the structure. External attack This is the more common type and typically occurs where groundwater containing dissolved sulfate is in contact with concrete. Sulfate ions diffusing into concrete react with portlandite (CH) to form gypsum: Ca(OH)2 + SO4²⁻ + 2H2O → CaSO4·2H2O + 2OH⁻ When the concentration of sulfate ions decreases, ettringite breaks down into monosulfate aluminates (AFm). When it reacts with concrete, the attack causes the slab to expand, lifting, distorting and cracking, as well as exerting a pressure onto the surrounding walls which can cause movements that significantly weaken the structure. Some infill materials frequently encountered in building foundations and causing sulfate attack are the following: Red Ash (shale) Black ash Slag Grey fly ash Other industrial materials and building rubble can also cause problems. These materials were used extensively in the North West of England as they were widely available waste products from industries such as coal mines, steelworks, foundries and power stations. Excess of gypsum in concrete If gypsum is present in excess in concrete, it reacts with the monosulfate aluminates to form ettringite. A fairly well-defined reaction front can often be observed in thin sections; ahead of the front the concrete is normal, or near normal. Behind the reaction front, the composition and the microstructure of concrete are modified. 
These changes may vary in type or severity but commonly include: Extensive cracking Expansion Loss of bond between the cement paste and aggregate Alteration of hardened cement paste composition, with the monosulfate aluminate phase converting to ettringite and, in later stages, gypsum formation. The necessary additional calcium is provided by the calcium hydroxide and calcium silicate hydrate in the cement paste. The effect of these changes is an overall loss of concrete strength. The above effects are typical of attack by solutions of sodium sulfate or potassium sulfate. Solutions containing magnesium sulfate are generally more aggressive, for the same concentration. This is because magnesium also takes part in the reactions, replacing calcium in the solid phases with the formation of brucite (magnesium hydroxide) and magnesium silicate hydrates. The displaced calcium precipitates mainly as gypsum. Sources of sulfates Oxidation of pyrite in clay formations in contact with concrete – this produces sulfuric acid which reacts with the concrete. Bacterial activity in sewers – anaerobic sulfate reduction at work in the organic-rich sludges accumulated under water in the conduits produces hydrogen sulfide gas (H2S). After its release into the air of the galleries, H2S is further oxidized into sulfuric acid by atmospheric oxygen. In masonry, sulfates produced by the oxidation of pyrite in clay materials can be present in bricks. They are gradually released over a long period of time, causing sulfate attack of the mortar, especially where moisture movement concentrates the sulfates. Seawater: sulfate is the second most abundant anion in seawater after chloride. Identification Sulfate attacks are identified through a remedial survey, but they can often be overlooked when undertaking a damp survey, as they tend to be considered a structural rather than a dampness issue even though moisture is required to promote the reaction. A visual and leveling inspection of the structure and the underlying terrain is the first step in recognizing a sulfate issue. To characterize the type and depth of the infill, exploration holes are needed. If water is present in the subfloor of the structure, a structural engineer may need to be instructed, subject to the level of damage or movement to the walls. Remedial action The remedial action depends on the severity of the attack and on the risk related to its evolution. If repairs are required because of the extent of the damage, the affected slab must often be demolished and removed; the spoil should not be used as hardcore under the replacement slab. History and literature Sulfur has long been known to contribute to damage in many materials, as in metal corrosion or concrete degradation. In King Lear, Shakespeare says: "There's hell, there's darkness, there is the sulphurous pit, Burning, scalding, stench, consumption; fie, fie, fie!" See also Concrete degradation Pitting corrosion (effect of sulfur and sulfides) References Further reading Chemistry of construction methods Concrete degradation
Sulfate attack in concrete and mortar
[ "Materials_science" ]
1,375
[ "Materials degradation", "Concrete degradation" ]
42,709,337
https://en.wikipedia.org/wiki/Gordon%E2%80%93Loeb%20model
The Gordon–Loeb model is an economic model that analyzes the optimal level of investment in information security. The benefits of investing in cybersecurity stem from reducing the costs associated with cyber breaches. The Gordon–Loeb model provides a framework for determining how much to invest in cybersecurity, using a cost-benefit approach. The model includes the following key components: Organizational data vulnerable to cyber-attacks, with vulnerability denoted by v, representing the probability of a breach occurring under current conditions. The potential loss from a breach, represented by L, which can be expressed in monetary terms. The expected loss is calculated as vL before additional cybersecurity investments. Investment in cybersecurity, denoted as z, reduces v based on the effectiveness of the security measures, known as the security breach probability function. Gordon and Loeb demonstrated that the optimal level of security investment, z*, does not exceed 37% of the expected loss from a breach. Specifically, z* ≤ (1/e)vL ≈ 0.37vL. Overview The model was first introduced by Lawrence A. Gordon and Martin P. Loeb in a 2002 paper published in ACM Transactions on Information and System Security, titled "The Economics of Information Security Investment". It was reprinted in the 2004 book Economics of Information Security. Both authors are professors at the University of Maryland's Robert H. Smith School of Business. The model is widely regarded as one of the leading analytical tools in cybersecurity economics. It has been extensively referenced in academic and industry literature. It has also been tested in various contexts by researchers such as Marc Lelarge and Yuliy Baryshnikov. The model has also been covered by mainstream media, including The Wall Street Journal and The Financial Times. Subsequent research has critiqued the model's assumptions, suggesting that for some security breach probability functions the optimal investment is not bounded by the 1/e factor, challenging its universality. Alternative formulations even propose that some loss functions may justify investment at the full estimated loss. See also Genuine progress indicator References Data security Mathematical economics
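A minimal numerical sketch of the model's cost-benefit logic is given below. It assumes one particular breach-probability family discussed in the Gordon–Loeb literature, S(z, v) = v/(az + 1)^b, and all parameter values are purely illustrative; the code simply searches for the investment level that maximizes the expected net benefit and compares it with the 1/e bound.

# Minimal sketch of Gordon-Loeb-style optimal cybersecurity investment.
# Assumes the "class I" breach-probability function S(z, v) = v / (a*z + 1)**b
# discussed in the Gordon-Loeb literature; all parameter values are illustrative.

def breach_probability(z, v, a=0.001, b=1.0):
    """Probability of a breach after investing z, given baseline vulnerability v."""
    return v / (a * z + 1.0) ** b

def expected_net_benefit(z, v, loss):
    """Reduction in expected loss achieved by the investment, minus its cost z."""
    return (v - breach_probability(z, v)) * loss - z

def optimal_investment(v, loss, z_max, steps=100_000):
    """Grid search for the investment level that maximizes the expected net benefit."""
    best_z, best_benefit = 0.0, 0.0
    for i in range(steps + 1):
        z = z_max * i / steps
        benefit = expected_net_benefit(z, v, loss)
        if benefit > best_benefit:
            best_z, best_benefit = z, benefit
    return best_z

if __name__ == "__main__":
    v, loss = 0.65, 1_000_000        # baseline vulnerability and potential loss (illustrative)
    z_star = optimal_investment(v, loss, z_max=loss)
    expected_loss = v * loss
    print(f"optimal investment  : {z_star:,.0f}")
    print(f"expected loss (v*L) : {expected_loss:,.0f}")
    print(f"ratio z*/(v*L)      : {z_star / expected_loss:.3f}  (Gordon-Loeb bound: 1/e = 0.368)")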
Gordon–Loeb model
[ "Mathematics", "Engineering" ]
408
[ "Applied mathematics", "Data security", "Mathematical economics", "Cybersecurity engineering" ]
42,710,134
https://en.wikipedia.org/wiki/D-DIA
The D-DIA or deformation-DIA is an apparatus used for high pressure and high temperature deformation experiments. The advantage of this apparatus is the ability to apply pressures up to approximately 15 GPa while independently creating uniaxial strains up to 50%. Theory The D-DIA utilizes the same principle that other high pressure apparatuses (such as the diamond anvil cell) use to create elevated pressure on a specimen. Pressure = Force/Area. By generating a force, in the case of the D-DIA through a hydraulic ram, a greater pressure can then be generated at the sample by decreasing the area of the anvil ends that are in contact with the sample assembly. Design The D-DIA is based on the similar DIA, which is a cubic-anvil apparatus. The D-DIA is a type of multi-anvil deformation apparatus that uses 6 cubically arranged anvils to provide independent pressurization and deformation of the sample. Four anvils of the cubic arrangement are oriented horizontally, opposing one another at 90°, and the remaining two anvils are oriented vertically within two guide blocks. The back side of each horizontal anvil comprises two faces of a virtual octahedron. By the symmetry imposed from the advancing guide blocks and anvils, all axes of the virtual octahedron are then strained equally and thus provide hydrostatic pressure to the sample. In order to create a deviatoric stress, oil is pumped using two differential rams behind the top and bottom anvils located within the guide blocks, allowing them to advance independently of the other four. By advancing just one anvil pair, a deviatoric stress is created, thus altering the previously cubic stress field to one that is tetragonal. The induced flow is approximately axially symmetric with respect to the cylindrical sample. By advancing an anvil pair, pressure would begin to increase on the sample as deformation progresses, but the D-DIA has the capability of bleeding off oil from the main ram (which engages the guide blocks) while advancing the differential pumps, in order to maintain a constant sample pressure during deformation. Sample assembly There are multiple designs of sample assemblies that are currently used in the D-DIA. The various sample assembly designs use different materials in their construction to accomplish different goals, but all contain the same common elements: internal resistive heater, pressure medium and upper/lower pistons. The overall shape of the sample assembly is a cube (typically around 6 mm); this shape allows each of the 6 anvils to make contact with a face of the sample assembly. The outer portion of the sample assembly is the pressure medium, which is commonly either boron epoxy (BE) or mullite. The choice of pressure medium used in the sample assembly depends on the ultimate goal of the experiment. Boron epoxy is a self-gasketing material in the D-DIA, which means it can produce a seal between all the anvils during deformation, but it has been shown to impart a significant amount of water to the sample during the experiment. This added water makes it impossible to conduct rheology experiments under anhydrous conditions. The other pressure medium material, mullite, leaves the sample very dry, but does not have the ability to self-gasket in the D-DIA. For this reason, when mullite is used as a pressure medium it needs to be used in combination with a gasket material.
Typically the gasket material used is pyrophyllite, and the mullite will be machined into a sphere which sits in pyrophyllite “seats”, forming a cube. In the sample assembly, inboard of the pressure medium and surrounding the sample, is an internal resistive heater. The heater is a sleeve into which the cylindrical sample fits; it is typically made of graphite, but can also be made of different types of metal. In deformation experiments pistons are needed on either side of the sample. Alumina is commonly used as it is harder than most sample materials, allowing deformation of the sample. Another design element that can be included in the sample assembly is a thermocouple. Thermocouples can be placed either as side entry (one that enters the center of the cube from an edge) or as top entry (one that enters the top face). In the case of the top entry thermocouple, it can simultaneously be used as the top piston, but the temperature is then read far from the sample center. The side entry thermocouple reads the temperature closer to the sample center, but its placement usually requires a hole to be drilled in the middle of the furnace, altering the heating characteristics of the furnace. To avoid the downsides associated with either thermocouple placement, some sample assemblies do not use a thermocouple; temperature is instead either calibrated from the relationship of watts vs. temperature or calculated using the known pressure and the sample volume determined from in-situ X-ray diffraction data. X-ray diffraction abilities The design of the anvils used in the D-DIA allows for the transmission of synchrotron X-ray radiation through the sample. These X-ray data can be used for both in-situ stress and strain measurements during the deformation of the sample. Strain In-situ strain measurements can be made by collecting and analyzing X-ray radiographs. Typically this is achieved by utilizing a fluorescent yttrium aluminum garnet (YAG) crystal in combination with a charge-coupled device (CCD) camera. By placing metal foils (typically platinum or nickel) on the top and bottom of the sample, the total sample length can be easily observed in the X-ray radiographs during the deformation experiment. Using the initial length measurement and subsequent length measurements during deformation, the following relation can be used to calculate strain: ε = (L0 – L)/L0, where strain is equal to the difference of the initial and final length, divided by the initial length. Stress The determination of stress is made utilizing data gathered from in-situ X-ray diffraction. Diffraction data are used to determine the d-spacing of certain crystallographic planes within the sample, and from these values of d-spacing there exist various ways to determine the stress state. A common way of calculating the differential stress inside the polycrystal utilizes the d-spacing values measured in the radial and axial directions of the cylindrical sample. This technique takes advantage of the cylindrically symmetric stress field that is imposed by the D-DIA, but also requires the assumption of a Reuss state (or isostress state) of stress throughout each grain in the polycrystal. The other common technique of deviatoric stress determination utilizes differential lattice strains and single crystal elastic constants. In this method the lattice strain is first calculated using measured d-spacing values dm(hkl), as well as d-spacing values determined under hydrostatic conditions dp(hkl).
εD(hkl) = [dm(hkl) − dp(hkl)] / dp(hkl). Once the lattice strains are calculated, the product of these values and the X-ray shear modulus, also known as the diffraction elastic constant GR(hkl), provides the stress on different lattice planes, τ(hkl): τ(hkl) = 2GR(hkl) εD(hkl). References Measurement
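As a small illustration of the two relations above, the sketch below computes the bulk sample strain from the foil-marker lengths seen in the radiographs and the lattice stress from diffraction d-spacings; all numerical values are invented for the example.

# Sketch of the two relations quoted above for D-DIA experiments: bulk sample
# strain from radiograph foil positions, and lattice stress from X-ray
# diffraction d-spacings. All numerical values are illustrative only.

def sample_strain(length_initial, length_current):
    """epsilon = (L0 - L) / L0, measured between the metal-foil markers."""
    return (length_initial - length_current) / length_initial

def lattice_strain(d_measured, d_hydrostatic):
    """epsilon_D(hkl) = (d_m - d_p) / d_p."""
    return (d_measured - d_hydrostatic) / d_hydrostatic

def lattice_stress(d_measured, d_hydrostatic, shear_modulus_xray):
    """tau(hkl) = 2 * G_R(hkl) * epsilon_D(hkl)."""
    return 2.0 * shear_modulus_xray * lattice_strain(d_measured, d_hydrostatic)

if __name__ == "__main__":
    # Radiograph foil separation before and during deformation (mm, illustrative).
    print(f"sample strain: {sample_strain(1.20, 1.08):.3f}")
    # d-spacings in angstroms and an X-ray shear modulus in GPa (illustrative).
    tau = lattice_stress(d_measured=2.338, d_hydrostatic=2.350, shear_modulus_xray=45.0)
    print(f"lattice stress on (hkl): {tau:.2f} GPa")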
D-DIA
[ "Physics", "Mathematics" ]
1,574
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
59,111,950
https://en.wikipedia.org/wiki/WENO%20methods
In the numerical solution of differential equations, WENO (weighted essentially non-oscillatory) methods are classes of high-resolution schemes. WENO methods are used in the numerical solution of hyperbolic partial differential equations. These methods were developed from ENO methods (essentially non-oscillatory). The first WENO scheme was developed by Liu, Osher and Chan in 1994. In 1996, Guang-Shan Jiang and Chi-Wang Shu developed a new WENO scheme called WENO-JS. Nowadays, there are many WENO methods. See also High-resolution scheme ENO methods References Further reading Numerical differential equations Computational fluid dynamics
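To illustrate the weighting idea, here is a minimal third-order WENO reconstruction using two two-point stencils. It is a sketch only: the fifth-order WENO-JS scheme uses three three-point stencils, and the choice of epsilon and of the power on the smoothness indicators follows common conventions rather than anything stated in this article.

import numpy as np

def weno3_reconstruct_left(u, eps=1e-6):
    """Third-order WENO reconstruction of the left-biased interface value u_{i+1/2}.

    Uses two 2-point candidate stencils with ideal weights (1/3, 2/3) and the usual
    smoothness indicators; a sketch of the WENO weighting idea, not WENO-JS itself.
    """
    u = np.asarray(u, dtype=float)
    um1, u0, up1 = u[:-2], u[1:-1], u[2:]          # u_{i-1}, u_i, u_{i+1}

    p0 = -0.5 * um1 + 1.5 * u0                     # candidate from stencil {i-1, i}
    p1 = 0.5 * u0 + 0.5 * up1                      # candidate from stencil {i, i+1}

    beta0 = (u0 - um1) ** 2                        # smoothness indicators
    beta1 = (up1 - u0) ** 2

    a0 = (1.0 / 3.0) / (eps + beta0) ** 2          # nonlinear weights
    a1 = (2.0 / 3.0) / (eps + beta1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)

    return w0 * p0 + w1 * p1                       # interface values for interior cells

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 41)
    u = np.where(x < 0.5, 1.0, 0.0)                # step: weights avoid the oscillatory stencil
    print(weno3_reconstruct_left(u)[18:22])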
WENO methods
[ "Physics", "Chemistry" ]
132
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
59,112,216
https://en.wikipedia.org/wiki/ENO%20methods
ENO (essentially non-oscillatory) methods are classes of high-resolution schemes in the numerical solution of differential equations. History The first ENO scheme was developed by Harten, Engquist, Osher and Chakravarthy in 1987. In 1994, the first weighted version of ENO was developed. See also High-resolution scheme WENO methods Shock-capturing method References Numerical differential equations Computational fluid dynamics
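A minimal sketch of the adaptive-stencil idea behind ENO schemes is shown below: a second-order reconstruction that, at each cell, picks whichever of the two candidate stencils has the smaller divided difference, so that stencils crossing a discontinuity are avoided. It is an illustration only, not any particular published scheme.

import numpy as np

def eno2_reconstruct_left(u):
    """Second-order ENO reconstruction of u_{i+1/2} from cell values.

    For each interior cell i, choose between the stencils {i-1, i} and {i, i+1}
    by comparing the magnitude of the first divided differences, then use the
    linear reconstruction from the smoother (smaller-difference) stencil.
    """
    u = np.asarray(u, dtype=float)
    um1, u0, up1 = u[:-2], u[1:-1], u[2:]
    left_diff = np.abs(u0 - um1)                   # divided differences (unit spacing)
    right_diff = np.abs(up1 - u0)
    p_left = 1.5 * u0 - 0.5 * um1                  # extrapolate from {i-1, i}
    p_right = 0.5 * (u0 + up1)                     # interpolate from {i, i+1}
    return np.where(left_diff <= right_diff, p_left, p_right)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 41)
    u = np.where(x < 0.5, 1.0, 0.0)                # discontinuity: the stencil crossing it is rejected
    print(eno2_reconstruct_left(u)[18:22])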
ENO methods
[ "Physics", "Chemistry", "Mathematics" ]
86
[ "Computational physics stubs", "Computational fluid dynamics", "Applied mathematics", "Computational physics", "Applied mathematics stubs", "Fluid dynamics stubs", "Fluid dynamics" ]
65,584,884
https://en.wikipedia.org/wiki/Carbonaceous%20sulfur%20hydride
Carbonaceous sulfur hydride (CSH) is a potential superconductor that was announced in October 2020 by the lab of Ranga Dias at the University of Rochester, in a Nature paper that was later retracted. It was reported to have a superconducting transition temperature of approximately 288 K (15 °C) at a pressure of 267 gigapascals (GPa), which would have made it the highest-temperature superconductor discovered. The paper faced criticism due to its non-standard data analysis calling into question its conclusions, and in September 2022 it was retracted by Nature. In July 2023 a second paper by the authors was retracted from Physical Review Letters due to suspected data fabrication, and in September 2023 a third paper by the authors about N-doped lutetium hydride was retracted from Nature. CSH is an uncharacterized ternary polyhydride compound of carbon, sulfur and hydrogen with a chemical formula that is thought to be CH8S. Measurements under extreme pressure are difficult, and in particular the elements are too light for an X-ray determination of crystal structure (X-ray crystallography). Background Prior to 1911, all known electrical conductors exhibited electrical resistance, due to collisions of the charge carriers with atoms in the material. Researchers discovered that in certain materials at low temperatures, the charge carriers interact with phonons in the material and form Cooper pairs, as described by BCS theory. This process results in the formation of a superconductor, with zero electrical resistance. During the transition to the superconducting state, the magnetic field lines are expelled from the interior of the material, which allows for the possibility of magnetic levitation. The effect has historically been known to occur only at low temperatures, but researchers have spent decades attempting to find a material that could operate at room temperature. Synthesis The material is a ternary polyhydride compound of carbon, sulfur and hydrogen with a chemical formula that is thought to be CSH8. As of October 2020, the material's molecular structure remains uncharacterized, as extreme pressures and the light elements used are unsuitable for most measurements, such as X-ray determination. The material was reportedly synthesized by compressing methane (CH4), hydrogen sulfide (H2S) and hydrogen (H2) in a diamond anvil cell and illuminating with a 532 nm green laser. A starting compound of carbon and sulfur is synthesized with a 1:1 molar ratio, formed into balls less than five microns in diameter, and placed into a diamond anvil cell. Hydrogen gas is then added and the system is compressed to 4.0 GPa and illuminated with a 532-nm laser for several hours. It was reported that the crystal is not stable below 10 GPa and can be destroyed if left at room temperature overnight. Other researchers were skeptical that such materials could serve as room temperature superconductors, as the absence of van Hove singularities or similar peaks in the electronic density of states of more than 3000 candidate phases rules out conventional superconductivity. Claims of superconductivity Superconductivity for sulfur hydrides without carbon was first reported in 2015. On 14 October 2020, a paper by Elliot Snider, et al. from the Dias lab was published, claiming that carbonaceous sulfur hydride was a room-temperature superconductor. Two years later, the paper was retracted.
The claims in the paper included a superconducting state at temperatures as high as about 288 K (15 °C), well above the existing record holder for high-temperature superconductivity. This state was claimed to be observable only at the very high pressure of 267 GPa, about a million times the pressure in a typical car tire. The report was published in Nature and received significant media coverage. Criticism and retraction The validity of these results was called into question by Jorge E. Hirsch as well as others. Unavailability of the data prompted an editor's note on the original paper. Additional criticism focused on the measurements of AC susceptibility used to test the superconductivity, as the more definitive Meissner effect was too hard to observe at the scale of the experiments. As of 2022, no other lab had been able to reproduce the result, and the criticisms of the data analysis in the paper had not been addressed. On February 15, 2022, Nature added a cautionary Editor's Note to the article, and on 26 September 2022, retracted the article entirely. By the end of 2023 two other papers from the lab had been retracted from Physical Review Letters and Nature, due to suspicions of data fabrication. At this point other publications by the lab were scrutinized more closely and as of March 2024 a total of nine of their papers had been retracted. References External links Hydrogen compounds Superconductors 2020 in science Sulfur compounds Carbon compounds Scientific controversies
Carbonaceous sulfur hydride
[ "Chemistry", "Materials_science" ]
1,010
[ "Superconductivity", "Superconductors" ]
61,747,014
https://en.wikipedia.org/wiki/Martin%27s%20sulfurane
Martin's sulfurane is the organosulfur compound with the formula Ph2S[OC(CF3)2Ph]2 (Ph = C6H5). It is a white solid that easily undergoes sublimation. The compound is an example of a hypervalent sulfur compound called a sulfurane. As such, the sulfur adopts a see-saw structure, with a lone pair of electrons as the equatorial fifth coordinate of a trigonal bipyramid, like that of sulfur tetrafluoride (SF4). The compound is a reagent in organic synthesis. One application is for the dehydration of a secondary alcohol to give an alkene: RCH(OH)CH2R' + Ph2S[OC(CF3)2Ph]2 → RCH=CHR' + Ph2SO + 2 HOC(CF3)2Ph References Trifluoromethyl compounds Reagents for organic chemistry Sulfur fluorides Fluorinating agents Hypervalent molecules Phenyl compounds
Martin's sulfurane
[ "Physics", "Chemistry" ]
219
[ "Molecules", "Fluorinating agents", "Hypervalent molecules", "Reagents for organic chemistry", "Matter" ]
61,749,817
https://en.wikipedia.org/wiki/Alain%20Bensoussan
Alain Bensoussan (born 12 May 1940) is a French mathematician. He is Professor Emeritus at the University of Paris-Dauphine and Professor at the University of Texas at Dallas. Early life and education Alain Bensoussan was born on 12 May 1940 in Tunis, Tunisia. Bensoussan is a former student of the École polytechnique (X1960), a graduate of ENSAE and a doctor of mathematics from the Faculty of Sciences in Paris (1969) under the supervision of Jacques-Louis Lions. Career He was a lecturer at the École polytechnique from 1970 to 1986 and a professor at the École normale supérieure from 1980 to 1985. He was Director of the European Institute for Advanced Studies in Management, Brussels from 1975 to 1977. He was President of INRIA from 1984 to 1996, President of the National Centre for Space Studies (CNES) from 1996 to 2003, President of the Council of the European Space Agency (ESA) from 1999 to 2002. Scientific works Alain Bensoussan's work has focused on automation and applied mathematics, but he has also focused on information and communication sciences and technologies as well as management and engineering sciences. He was one of the initiators of stochastic control for distributed systems and demonstrated in particular the principle of separation of estimation and control, which he then extended to differential sets. His former students include Peng Shige, Guy Pujolle, Étienne Pardoux, Jean-Michel Lasry. Awards and honours Member of the French Academy of Sciences Gay-Lussac Humboldt Prize (1984) IEEE Fellow (1985) Member of Academia Europaea (1985) Commandeur of the Ordre National du Mérite (2000) Member of the French Academy of Technologies (2000) Member of the International Academy of Astronautics (2001) Distinguished Public Service Medal, NASA (2001) Special award from the Association aéronautique et astronautique de France (2002) Officier of the Légion d'Honneur (2003) Officer of the Order of Merit of the Federal Republic of Germany (2003) SIAM Fellow, 2009 Fellow of the American Mathematical Society (2012) Reid Award (2014) Top 2% most highly cited scientists IEEE Control Systems Award 2024 Alain Bensoussan Fellowship / Alain Bensoussan Career Development Enhancer ERCIM - the European Research Consortium for Informatics and Mathematics - aims to foster collaborative work within the European research community and to increase co-operation with European industry. Leading European research institutes are members of ERCIM. The ERCIM Fellowship Programme has been established as one of the premier activities of ERCIM. The programme is open to young researchers from all over the world. It focuses on a broad range of fields in Computer Science and Applied Mathematics. This enables early-career scientists (obtained PhD degrees during the 8 years prior to the application year deadline) to conduct research at leading European centres outside their own country. Since its inception in 1991, over 750 fellows have benefited from the programme. Since 2005, ERCIM Fellowships are named Alain Bensoussan Fellowships, in honour of Professor Alain Bensoussan. The Alain Bensoussan Fellowship Programme has enabled bright young scientists from all over the world to work on challenging problems within ERCIM member institutes. The prestigious Alain Bensoussan Fellowships are co-funded by Marie Skłodowska-Curie Actions. 
Throughout the programme, the fellows are supported by the ERCIM Human Resources Task Force to drive their personal development scheme and to assist them in their future career plans, whether in European research institutions or in European Industry. Moreover, given the strategic nature of this training scheme focusing on ICT and novel technologies, this Fellowship Programme also enhances its impact over European research and competitiveness at large. References External links 20th-century French mathematicians 1940 births People from Tunis Members of the French Academy of Sciences Fellows of the American Mathematical Society Living people Tunisian emigrants to France 21st-century French mathematicians CNES presidents École Polytechnique alumni Academic staff of the École Normale Supérieure Members of Academia Europaea Fellows of the IEEE Control theorists Members of the French Academy of Technologies
Alain Bensoussan
[ "Engineering" ]
845
[ "Control engineering", "Control theorists" ]
61,750,020
https://en.wikipedia.org/wiki/Minimum%20information%20standard
Minimum information standards are sets of guidelines and formats for reporting data derived by specific high-throughput methods. Their purpose is to ensure the data generated by these methods can be easily verified, analysed and interpreted by the wider scientific community. Ultimately, they facilitate the transfer of data from journal articles (unstructured data) into databases (structured data) in a form that enables data to be mined across multiple data sets. Minimum information standards are available for a vast variety of experiment types including microarray (MIAME), RNAseq (MINSEQE), metabolomics (MSI) and proteomics (MIAPE). Minimum information standards typically have two parts. Firstly, there is a set of reporting requirements – typically presented as a table or a checklist. Secondly, there is a data format. Information about an experiment needs to be converted into the appropriate data format for it to be submitted to the relevant database. In the case of MIAME, the data format is provided in spreadsheet format (MAGE-TAB). Some of the communities that maintain minimum information standards also provide tools to help experimental researchers to annotate their data. MI Standards The individual minimum information standards are developed by communities of cross-disciplinary specialists focused on the issues specific to the method used in experimental biology. The standards then specify what information about the experiments (metadata) is crucial to report together with the resultant data in order to make the record comprehensive. The need for this standardization is largely driven by the development of high-throughput experimental methods that provide tremendous amounts of data. The development of minimum information standards for different methods has since 2008 been harmonized by the "Minimum Information about a Biomedical or Biological Investigation" (MIBBI) project. MIAPPE, Minimum Information About a Plant Phenotyping Experiment MIAPPE is an open, community driven project to harmonize data from plant phenotyping experiments. MIAPPE comprises a conceptual checklist of metadata required to adequately describe a plant phenotyping experiment. MIQE, Minimum Information for Publication of Quantitative Real-Time PCR Experiments Published in 2009, these guidelines form the basis of requirements by many journals when submitting qPCR data, although they are often not fully adhered to. MIAME, gene expression microarray Minimum Information About a Microarray Experiment (MIAME) describes the minimum information that is needed to enable the results of a microarray experiment to be interpreted unambiguously and potentially reproduced, and is aimed at facilitating the dissemination of data from microarray experiments. It was published by the FGED Society in 2001 and was the first published minimum information standard for high-throughput experiments in the life sciences. MIAME contains a number of extensions to cover specific biological domains, including MIAME-env, MIAME-nut and MIAME-tox, covering environmental genomics, nutritional genomics and toxicogenomics, respectively. MINI: Minimum Information about a Neuroscience Investigation MINI: Electrophysiology Electrophysiology is a technology used to study the electrical properties of biological cells and tissues. Electrophysiology typically involves measurements of voltage changes or electric current flow on a wide variety of scales, from single ion channel proteins to whole tissues.
This document is a single module, as part of the Minimum Information about a Neuroscience investigation (MINI) family of reporting guideline documents, produced by community consultation and continually available for public comment. A MINI module represents the minimum information that should be reported about a dataset to facilitate computational access and analysis to allow a reader to interpret and critically evaluate the processes performed and the conclusions reached, and to support their experimental corroboration. In practice a MINI module comprises a checklist of information that should be provided (for example about the protocols employed) when a data set is described for publication. The full specification of the MINI module can be found here. MIARE, RNAi experiment Minimum Information About an RNAi Experiment (MIARE) is a data reporting guideline which describes the minimum information that should be reported about an RNAi experiment to enable the unambiguous interpretation and reproduction of the results. MIACA, cell based assay Advances in genomics and functional genomics have enabled large-scale analyses of gene and protein function by means of high-throughput cell biological analyses. Thereby, cells in culture can be perturbed in vitro and the induced effects recorded and analyzed. Perturbations can be triggered in several ways, for instance with molecules (siRNAs, expression constructs, small chemical compounds, ligands for receptors, etc.), through environmental stresses (such as temperature shift, serum starvation, oxygen deprivation, etc.), or combinations thereof. The cellular responses to such perturbations are analyzed in order to identify molecular events in the biological processes addressed and understand biological principles. We propose the Minimum Information About a Cellular Assay (MIACA) for reporting a cellular assay, and CA-OM, the modular cellular assay object model, to facilitate exchange of data and accompanying information, and to compare and integrate data that originate from different, albeit complementary approaches, and to elucidate higher order principles. Documents describing MIACA are available and provide further information as well as the checklist of terms that should be reported. MIAPE, proteomic experiments The Minimum Information About a Proteomic Experiment documents describe information which should be given along with a proteomic experiment. The parent document describes the processes and principles underpinning the development of a series of domain specific documents which now cover all aspects of a MS-based proteomics workflow. MIMIx, molecular interactions This document has been developed and maintained by the Molecular Interaction worktrack of the HUPO-PSI (www.psidev.info) and describes the Minimum Information about a Molecular Interaction experiment. MIAPAR, protein affinity reagents The Minimum Information About a Protein Affinity Reagent has been developed and maintained by the Molecular Interaction worktrack of the HUPO-PSI (www.psidev.info)in conjunction with the HUPO Antibody Initiative and a European consortium of binder producers and seeks to encourage users to improve their description of binding reagents, such as antibodies, used in the process of protein identification. 
MIABE, bioactive entities The Minimum Information About a Bioactive Entity was produced by representatives from both large pharma and academia who are looking to improve the description of usually small molecules which bind to, and potentially modulate the activity of, specific targets in a living organism. This document encompasses drug-like molecules as well as herbicides, pesticides and food additives. It is primarily maintained through the EMBL-EBI Industry program (www.ebi.ac.uk/industry). MIGS/MIMS, genome/metagenome sequences This specification is being developed by the Genomic Standards Consortium MIFlowCyt, flow cytometry Minimum Information about a Flow Cytometry Experiment The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) is a standard related to flow cytometry which establishes criteria to record information on experimental overview, samples, instrumentation and data analysis. It promotes consistent annotation of clinical, biological and technical issues surrounding a flow cytometry experiment. MISFISHIE, In Situ Hybridization and Immunohistochemistry Experiments MIAPA, Phylogenetic Analysis Criteria for Minimum Information About a Phylogenetic Analysis were described in 2006. MIRAGE, Glycomics The MIRAGE project is supported and coordinated by the Beilstein-Institut to establish guidelines for data handling and processing in glycomics research MIAO, ORF MIAMET, METabolomics experiment MIAFGE, Functional Genomics Experiment MIRIAM, Minimum Information Required in the Annotation of Models The Minimal Information Required In the Annotation of Models (MIRIAM), is a set of rules for the curation and annotation of quantitative models of biological systems. MIASE, Minimum Information About a Simulation Experiment The Minimum Information About a Simulation Experiment (MIASE) is an effort to standardize the description of simulation experiments in the field of systems biology. CIMR, Core Information for Metabolomics Reporting STRENDA, Standards for Reporting Enzymology Data The Standards for Reporting Enzymology Data (STRENDA) is an initiative which specifically focuses on the development of guidelines for reporting (describing metadata) enzymology experiments with the aim to improve the quality of enzymology data published in the scientific literature. References External links MIBBI (Minimum Information for Biological and Biomedical Investigations) A ‘one-stop shop’ for exploring the range of extant projects, foster collaborative development and ultimately promote gradual integration. BioSharing catalogue Bioinformatics Knowledge representation
Minimum information standard
[ "Engineering", "Biology" ]
1,786
[ "Bioinformatics", "Biological engineering" ]
61,751,032
https://en.wikipedia.org/wiki/Compton%20generator
A Compton generator or Compton tube is an apparatus for an experiment to demonstrate the Earth's rotation, similar to the Foucault pendulum and to gyroscope devices. Arthur Compton (Nobel Prize in Physics in 1927) published it during his fourth year at the College of Wooster in 1913. Explanation of apparatus A Compton generator is a circular hollow glass ring tube shaped like a doughnut, the inside of which is filled with water. The ring lies flat on the table with the water in the ring stationary, and it is then turned over by rotating it 180 degrees around a diameter, such that it again lies flat on the table surface, which is horizontal. The result of the experiment is that the water moves with a certain constant drift velocity around the tube after the doughnut has been rotated. If there were no friction with the walls, the water would continue to circulate indefinitely. The ring used in the initial experiment was made of one-inch brass tubing bent into a circle eighteen inches in diameter; where the windows were placed, the tube was constricted to a diameter of about 3/8 inches (9.5 mm). Compton used small droplets of coal oil mixed in the water to measure the drift velocity under a microscope. Analysis Assume the diameter of the glass tube is much smaller than the diameter of the ring, and let R be the radius of the ring, ω the Earth's rotation rate and λ the latitude. Initially the ring is horizontal and the water is stationary. The ring is then quickly rotated by 180° around its East-West diameter and stopped, such that it again lies flat on the table surface, which is horizontal. At this time, the velocity of the water in the tube is given by v = 2ωR sin λ. Note that a rotation of the ring from one vertical position to the other, about a vertical diameter, instead produces a water velocity v = 2ωR cos λ. This is derived by first integrating the torque due to the Coriolis force around the ring, then integrating the torque over the time it takes for the ring to flip, to obtain the change in angular momentum. With these two equations, one can solve for both ω and λ, thus finding both the rotation speed of the Earth and the latitude of the apparatus. Experimental verification Compton used this measured drift velocity to determine his latitude to within 3% accuracy. He also used it to measure the rotational period of the Earth to an accuracy of 16 minutes per day (accuracy of 1%). By careful methods he could observe the effect in a ring with a radius of only 9 inches (23 cm). Earth's rotation rate is 7.3 × 10−5 radians/second. In the original report, Compton used a ring of 1 meter in radius at the College of Wooster (latitude 41 degrees). This would translate to a drift velocity of about 10−4 m/s (roughly 0.1 mm/s). References Bibliography Journal Books Physics experiments
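The sketch below evaluates the two flip relations quoted above for an 18-inch-diameter ring at Wooster's latitude, and then inverts a pair of measured drift speeds to recover the latitude and the Earth's rotation rate. The ring dimensions and latitude are illustrative, and the formulas are the reconstructed standard expressions rather than quotations from Compton's paper.

import math

OMEGA_EARTH = 7.292e-5          # Earth's rotation rate, rad/s

def drift_horizontal_flip(radius, latitude_deg, omega=OMEGA_EARTH):
    """Drift speed after flipping the horizontal ring about its E-W diameter: v = 2*omega*R*sin(lat)."""
    return 2.0 * omega * radius * math.sin(math.radians(latitude_deg))

def drift_vertical_flip(radius, latitude_deg, omega=OMEGA_EARTH):
    """Drift speed after flipping the ring between vertical positions: v = 2*omega*R*cos(lat)."""
    return 2.0 * omega * radius * math.cos(math.radians(latitude_deg))

def invert(v_horizontal, v_vertical, radius):
    """Recover latitude and rotation rate from the two measured drift speeds."""
    latitude = math.degrees(math.atan2(v_horizontal, v_vertical))
    omega = math.hypot(v_horizontal, v_vertical) / (2.0 * radius)
    return latitude, omega

if __name__ == "__main__":
    R, lat = 0.229, 40.8                         # ~18-inch-diameter ring at Wooster, Ohio (illustrative)
    vh, vv = drift_horizontal_flip(R, lat), drift_vertical_flip(R, lat)
    print(f"horizontal flip: {vh * 1e6:.1f} um/s,  vertical flip: {vv * 1e6:.1f} um/s")
    print("recovered latitude (deg) and omega (rad/s):", invert(vh, vv, R))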
Compton generator
[ "Physics" ]
567
[ "Experimental physics", "Physics experiments" ]
61,754,083
https://en.wikipedia.org/wiki/Emma%20Kendrick%20%28academic%29
Emma Kendrick is Professor of Energy Materials at the University of Birmingham where her work is focused on new materials for batteries and fuel cells. She is a Fellow of the Royal Society of Chemistry and Institute of Materials, Minerals and Mining. Early life and education Kendrick studied chemistry at the University of Manchester, later moving to University of Aberdeen in Scotland where she earned a master's degree in solid state chemistry. For her doctoral thesis, Kendrick went to Keele University to study low temperature synthetic routes to inorganic pigments. She later did postdoctoral research with Sandra Dann at the Loughborough University, as well as Peter Slater and Saiful Islam at the University of Surrey. Research and career Kendrick spent several years in industry, during which she worked at both Fife Batteries and Surion Energy Limited. She joined Sharp Corporation in 2010 where she established a research and development program in sodium-ion batteries, a low cost alternative to lithium-ion batteries. Her focus at Sharp was on the development of high energy density devices using cathodes optimized for stable voltage and capacity. She notably demonstrated a sodium-ion battery pouch cell with high volumetric energy density that has applications in the automotive and portable electronics industries, resulting in a promotion to Chief Technologist of Energy Storage. In 2016, Kendrick was appointed to Reader in Electrochemical Energy Materials at the Warwick Manufacturing Group. In 2018, Kendrick joined the University of Birmingham as a member of the Materials Chemistry Division of the Royal Society of Chemistry as well as serving on the materials science self-assessment team at the Engineering and Physical Sciences Research Council. She has several patents in chemical synthesis of materials for batteries. She holds an honorary position at University College London. She is a member of the Energy Research Accelerator Research Council. In addition to her sodium-ion battery materials development work, Kendrick has also established herself in the area of lithium-ion battery manufacturing and lithium-ion battery materials recycling, a new research program designed to reclaim and reuse material from end of life electric vehicle batteries. Kendrick is particularly concerned about the implications of supply chain issues associated the loss (or export) of rare and mined materials that are used in modern battery chemistries. She has pioneered efforts to increase the safety of the recovery processes used to reclaim battery materials, through the use of a brine discharge method using neutral salts that minimizes the rate of corrosion making it possible to recover the separated cathode and anode materials. In support of her recycling efforts, Kendrick has called on battery manufacturers to make batteries that are easier to dismantle. Her research is supported by the Faraday Battery Challenge, a four-year investment by the Government of the United Kingdom that looks to develop new lower cost materials, advance recycling processes, and identify battery degradation pathways. References British women scientists British women academics British chemists Alumni of the University of Manchester Alumni of the University of Aberdeen Alumni of Loughborough University Academics of the University of Birmingham Living people Solid state chemists Year of birth missing (living people)
Emma Kendrick (academic)
[ "Chemistry" ]
599
[ "Solid state chemists" ]
61,760,121
https://en.wikipedia.org/wiki/Rotor%20solidity
Rotor solidity is a dimensionless quantity used in design and analysis of rotorcraft, propellers and wind turbines. Rotor solidity is a function of the aspect ratio and number of blades in the rotor and is widely used as a parameter for ensuring geometric similarity in rotorcraft experiments. It provides a measure of how close a lifting rotor system is to an ideal actuator disk in momentum theory. It also plays an important role in determining the fluid speed across the rotor disk when lift is generated, and consequently the performance of the rotor, the amount of downwash around it, and the noise levels the rotor generates. It is also used to compare performance characteristics between rotors of different sizes. Typical values of rotor solidity ratio for helicopters fall in the range 0.05 to 0.12. Definitions Rotor solidity is the ratio of the area of the rotor blades to the area of the rotor disk. For a rotor with N blades, each of radius R and chord c, rotor solidity is σ = Ab/Ad = NcR/(πR²) = Nc/(πR), where Ab = NcR is the blade area and Ad = πR² is the disk area. For blades with a non-rectangular planform, solidity is often computed using an equivalent weighted form σe = ∫ w(r̄) σ(r̄) dr̄, integrated over the non-dimensional radius from 0 to 1, where: w(r̄) is a weighting function corresponding to the blade section, σ(r̄) is the local solidity corresponding to the blade section, and r̄ = r/R is the non-dimensional radial length to the blade section. The weighting function is determined by the aerodynamic performance parameter that is assumed to be constant in comparison to an equivalent rotor having a rectangular blade planform. For example, when the rotor thrust coefficient is assumed to be constant, the weighting function comes out to be w(r̄) = 3r̄², and the corresponding weighted solidity ratio is known as the thrust-weighted solidity ratio. When the rotor power or torque coefficient is assumed constant, the weighting function is w(r̄) = 4r̄³, and the corresponding weighted solidity ratio is known as the power- or torque-weighted solidity ratio. This solidity ratio is analogous to the activity factor used in propeller design and is also used in wind turbine analysis. However, it is rarely used in helicopter design. Geometric significance A crude idea of what a rotor or propeller geometry looks like can be obtained from the rotor solidity ratio. Rotors with stubbier and/or a larger number of blades have a larger solidity ratio since they cover a larger fraction of the rotor disk. Rotorcraft like helicopters typically use blades with very low solidity ratios compared to fixed-wing and marine propellers. References Aerodynamics
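A short sketch of these definitions is given below: the geometric solidity of a rectangular blade and a thrust-weighted solidity for a linearly tapered blade. The r̄² weighting used here is the standard thrust-weighted form quoted above (itself reconstructed rather than copied verbatim from the source), and all blade dimensions are invented for illustration.

import numpy as np

def geometric_solidity(num_blades, chord, radius):
    """sigma = N*c*R / (pi*R^2) = N*c / (pi*R) for rectangular blades."""
    return num_blades * chord / (np.pi * radius)

def thrust_weighted_solidity(num_blades, chord_of_r, radius, n=2000):
    """sigma_e = 3 * integral_0^1 sigma(r_bar) * r_bar^2 d(r_bar)  (r^2 thrust weighting).

    chord_of_r: local chord as a function of non-dimensional radius r_bar in [0, 1].
    """
    r_bar = np.linspace(0.0, 1.0, n)
    integrand = num_blades * chord_of_r(r_bar) / (np.pi * radius) * r_bar**2
    # trapezoidal integration over r_bar
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_bar))
    return 3.0 * integral

if __name__ == "__main__":
    N, R = 4, 7.0                                            # four blades, 7 m radius (illustrative)
    print(f"rectangular blade, c = 0.40 m: sigma = {geometric_solidity(N, 0.40, R):.3f}")
    taper = lambda r_bar: 0.50 - 0.20 * r_bar                # root chord 0.50 m, tip chord 0.30 m
    print(f"linearly tapered blade: thrust-weighted sigma = {thrust_weighted_solidity(N, taper, R):.3f}")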
Rotor solidity
[ "Chemistry", "Engineering" ]
474
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
38,507,281
https://en.wikipedia.org/wiki/C9H7NO4
{{DISPLAYTITLE:C9H7NO4}} The molecular formula C9H7NO4 (molar mass: 193.16 g/mol, exact mass: 193.0375 u) may refer to: DHICA Dopachrome Molecular formulas
C9H7NO4
[ "Physics", "Chemistry" ]
59
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
38,508,096
https://en.wikipedia.org/wiki/EPI-001
EPI-001 is the first inhibitor of the androgen receptor amino-terminal domain. The single stereoisomer of EPI-001, EPI-002, is a first-in-class drug that the USAN council assigned a new stem class "-aniten" and the generic name "ralaniten". This distinguishes the anitens novel molecular mechanism from anti androgens that bind the C-terminus ligand-binding domain and have the stem class "lutamide" (such as flutamide, nilutamide, bicalutamide, enzalutamide, etc.). EPI-001 and its stereoisomers and analogues were discovered by Marianne Sadar and Raymond Andersen, who co-founded the pharmaceutical company ESSA Pharma Inc (Vancouver, Canada) for the clinical development of anitens for the treatment of castration-resistant prostate cancer (CRPC). EPI-001 is an antagonist of the androgen receptor (AR) that acts by binding covalently to the N-terminal domain (NTD) of the AR and blocking protein-protein interactions required for transcriptional activity of the AR and its splice variants (IC50 for inhibition of AR NTD transactivation ≈ 6 μM). This is different from all currently-used antiandrogens, which, conversely, bind to the C-terminal ligand-binding domain (LBD) of the AR and competitively block binding and activation of the receptor by androgens. Due to its unique mechanism of action, EPI-001 type compounds may prove to be effective in the treatment of advanced prostate cancer resistant to conventional antiandrogens such as enzalutamide. EPI-001's successor, ralaniten acetate (EPI-506), a prodrug of ralaniten (EPI-002), one of the four stereoisomers of EPI-001, was under clinical investigation in a phase I study. EPI-506 was the first drug that directly binds to an intrinsically disordered region to be tested in humans and marks a leap in drug development from folded drug targets. Pharmacology Pharmacodynamics EPI-001 is a mixture of four stereoisomers. EPI-001 binds to the activation function-1 (AF-1) region in the NTD of the AR, as opposed to other AR antagonists, which bind to the C-terminal LBD. A functional AF-1 is essential for the AR to have transcriptional activity. If AF-1 is deleted or mutated, the AR will still bind androgens, but will have no transcriptional activity. Importantly, if the AR lacks an LBD, the receptor will be nuclear and constitutively-active. Constitutively active splice variants of the AR that lack the C-terminal LBD are correlated to CRPC and poor survival. EPI-001 is an inhibitor of constitutively active splice variant of ARs that lack the C-terminal LBD. Conventional antiandrogens do not inhibit constitutively-active variants of AR that have a truncated or deleted C-terminal LBD. In the absence of androgen, all known antiandrogens cause translocation of AR from the cytoplasm to the nucleus, whereas EPI-001 does not cause the AR to become nuclear. Binding of EPI-001 to the NTD of the AR blocks protein-protein interactions that are essential for its transcriptional activity. Specifically, EPI-001 blocks AR interactions with CREB-binding protein, RAP74, and between the NTD and C-terminal domain (termed N/C interaction) required for antiparallel dimer formation of AR. Unlike antiandrogens such as bicalutamide, EPI-001 does not cause the AR to bind to androgen response elements on the DNA of target genes. EPI-001 at extremely high concentrations of 50 to 200 uM has also been found to act as a selective PPARγ modulator (SPPARM), with both agonistic and antagonistic actions on the PPARγ. 
Via PPARγ activation, EPI-001 has been found to inhibit AR expression and activity in prostate cancer cells, indicating at least one AR-independent action by which EPI-001 exhibits antiandrogen properties in the prostate. EPI-001 inhibits AR-dependent proliferation of human prostate cancer cells while having no significant effects on cells that do not require the AR for growth and survival. EPI-001 has specificity to the AR (aside from the PPARγ) and has excellent anti-tumor activity in vivo with xenografts of CRPC. See also EPI-002 EPI-7386 References Abandoned drugs Alkylating agents 2,2-Bis(4-hydroxyphenyl)propanes Halohydrins Nonsteroidal antiandrogens Organochlorides PPAR agonists Triols Glycerols
EPI-001
[ "Chemistry" ]
1,048
[ "Alkylating agents", "Drug safety", "Abandoned drugs", "Reagents for organic chemistry" ]
38,513,558
https://en.wikipedia.org/wiki/Dualizing%20module
In abstract algebra, a dualizing module, also called a canonical module, is a module over a commutative ring that is analogous to the canonical bundle of a smooth variety. It is used in Grothendieck local duality. Definition A dualizing module for a Noetherian ring R is a finitely generated module M such that for any maximal ideal m, the R/m vector space Ext^n_R(R/m, M) vanishes if n ≠ height(m) and is 1-dimensional if n = height(m). A dualizing module need not be unique because the tensor product of any dualizing module with a rank 1 projective module is also a dualizing module. However this is the only way in which the dualizing module fails to be unique: given any two dualizing modules, one is isomorphic to the tensor product of the other with a rank 1 projective module. In particular if the ring is local the dualizing module is unique up to isomorphism. A Noetherian ring does not necessarily have a dualizing module. Any ring with a dualizing module must be Cohen–Macaulay. Conversely if a Cohen–Macaulay ring is a quotient of a Gorenstein ring then it has a dualizing module. In particular any complete local Cohen–Macaulay ring has a dualizing module. For rings without a dualizing module it is sometimes possible to use the dualizing complex as a substitute. Examples If R is a Gorenstein ring, then R considered as a module over itself is a dualizing module. If R is an Artinian local ring then the Matlis module of R (the injective hull of the residue field) is the dualizing module. The Artinian local ring R = k[x,y]/(x2,y2,xy) has a unique dualizing module, but it is not isomorphic to R. The ring Z[√−5] has two non-isomorphic dualizing modules, corresponding to the two classes of invertible ideals. The local ring k[x,y]/(y2,xy) is not Cohen–Macaulay so does not have a dualizing module. See also dualizing sheaf References Commutative algebra
Dualizing module
[ "Mathematics" ]
451
[ "Fields of abstract algebra", "Commutative algebra" ]
38,516,236
https://en.wikipedia.org/wiki/Pairwise%20error%20probability
Pairwise error probability is the probability that, for a transmitted signal X, a particular other signal X̂ of the constellation will be decided instead. This type of probability is called "pair-wise error probability" because it is defined for a pair of signal vectors in a signal constellation. It is mainly used in communication systems. Expansion of the definition In general, the received signal is a distorted version of the transmitted signal. Thus, we introduce the symbol error probability, which is the probability that the demodulator will make a wrong estimation of the transmitted symbol based on the received symbol; for equally likely symbols it is the average P(e) = (1/M) Σi P(e | Xi), where M is the size of the signal constellation. The pairwise error probability P(X → X̂) is defined as the probability that, when X is transmitted, X̂ is decided. P(e | X) can be expressed as the probability that at least one X̂ is closer than X to the received vector. Using the upper bound to the probability of a union of events, it can be written as P(e | X) ≤ Σ P(X → X̂), where the sum runs over all X̂ ≠ X. Finally, P(e) ≤ (1/M) ΣX ΣX̂≠X P(X → X̂). Closed form computation For the simple case of the additive white Gaussian noise (AWGN) channel, the received vector is Y = X + Z, where Z is white Gaussian noise. The PEP can then be computed in closed form: the relevant decision statistic is a Gaussian random variable with mean 0, and the tail probability of a zero-mean Gaussian random variable with standard deviation σ is given by the Q-function, Q(a/σ). Hence, P(X → X̂) = Q(d/√(2N0)), where d = ||X − X̂|| is the Euclidean distance between the two signal vectors and N0 is the noise power spectral density. See also Signal processing Telecommunication Electrical engineering Random variable References Further reading Signal processing Probability theory
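The sketch below evaluates this union bound for a QPSK constellation on an AWGN channel using the Q(d/√(2N0)) form reconstructed above; the constellation, the Es/N0 value and the helper names are illustrative assumptions rather than anything specified in the article.

import math
from itertools import product

def q_function(x):
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pairwise_error_probability(x, x_hat, n0):
    """AWGN pairwise error probability Q(d / sqrt(2*N0)) for Euclidean distance d."""
    d = math.dist(x, x_hat)
    return q_function(d / math.sqrt(2.0 * n0))

def union_bound_symbol_error(constellation, n0):
    """Average union bound: (1/M) * sum_i sum_{j != i} P(x_i -> x_j)."""
    m = len(constellation)
    total = sum(
        pairwise_error_probability(xi, xj, n0)
        for xi, xj in product(constellation, repeat=2)
        if xi != xj
    )
    return total / m

if __name__ == "__main__":
    # Unit-energy QPSK constellation; N0 chosen so that Es/N0 = 10 dB (illustrative).
    qpsk = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    es_over_n0 = 10.0 ** (10.0 / 10.0)
    print(f"union bound on symbol error: {union_bound_symbol_error(qpsk, 1.0 / es_over_n0):.2e}")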
Pairwise error probability
[ "Technology", "Engineering" ]
261
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
38,517,640
https://en.wikipedia.org/wiki/Flow%20distribution%20in%20manifolds
The flow in manifolds is extensively encountered in many industrial processes when it is necessary to distribute a large fluid stream into several parallel streams, or to collect them into one discharge stream, such as in fuel cells, heat exchangers, radial flow reactors, hydronics, fire protection, and irrigation. Manifolds can usually be categorized into one of the following types: dividing, combining, Z-type and U-type manifolds (Fig. 1). A key question is the uniformity of the flow distribution and pressure drop. Traditionally, most theoretical models are based on the Bernoulli equation after taking the frictional losses into account using a control volume (Fig. 2). The frictional loss is described using the Darcy–Weisbach equation. One obtains a governing equation of dividing flow in terms of the velocity, the pressure, the density, the hydraulic diameter, the frictional coefficient and the axial coordinate in the manifold, with ∆X = L/n, where n is the number of ports and L the length of the manifold (Fig. 2). This is the foundation of manifold and network models. Thus, a T-junction (Fig. 3) can be represented by two Bernoulli equations according to its two flow outlets. A flow in a manifold can be represented by a channel network model. A multi-scale parallel channel network is usually described as a lattice network by analogy with conventional electric circuit methods, for example in a generalized model of the flow distribution in channel networks of planar fuel cells. Similar to Ohm's law, the pressure drop is assumed to be proportional to the flow rate. The relationship of pressure drop, flow rate and flow resistance is then described as Q = ∆P/R, with f = 64/Re for laminar flow, where Re is the Reynolds number, and with the frictional resistance calculated using Poiseuille's law. Since the two outlet branches in Fig. 3 have the same diameter and length, their resistances are the same, R2 = R3. Thus the velocities, and hence the flow rates, in the two outlets should be equal according to these assumptions. This clearly contradicts observations, which show that the greater the velocity (or momentum), the larger the fraction of fluid that continues in the straight direction. Only under very slow laminar flow may Q2 be equal to Q3. This question was raised by the experiments of McNown and of Acrivos et al. Their experimental results showed a pressure rise after the T-junction due to flow branching. This phenomenon was explained by Wang. Because of inertial effects, the fluid prefers the straight direction. Thus the flow rate of the straight pipe is greater than that of the vertical one. Furthermore, the lower-energy fluid in the boundary layer branches into the channels while the higher-energy fluid in the pipe centre remains in the pipe, as shown in Fig. 4. Thus, mass, momentum and energy conservation must be employed together for the description of flow in manifolds. Wang recently carried out a series of studies of flow distribution in manifold systems. He unified the main models into one theoretical framework and developed the most generalised model, based on the same control volume in Fig. 2. The governing equations can be obtained for the dividing, combining, U-type and Z-type arrangements, in both continuous (differential) and discrete form. In the discrete governing equation of the dividing flow, the inertial effects are corrected by a momentum factor, β; this is a fundamental equation for most discrete models and can be solved for a manifold by recurrence and iteration methods. The continuous equation is the limiting case of the discrete one when ∆X → 0.
The generalised continuous equation simplifies to the Bernoulli equation without the potential energy term when β = 1, whilst it simplifies to Kee's model when β = 0. Moreover, it can be simplified to Acrivos et al.'s model after substituting the Blasius equation, f = 0.316 Re−1/4. Therefore, these main models are just special cases of the generalised equation. Similarly, one can obtain the governing equations of the combining, U-type and Z-type arrangements, each as a continuous equation or a corresponding discrete equation. The governing equations for the dividing, combining, U-type and Z-type manifolds are second-order nonlinear ordinary differential equations. The second term on the left-hand side represents a frictional contribution, known as the frictional term, and the third term represents the momentum contribution, known as the momentum term. Their analytical solutions had been well-known challenges in this field for 50 years, until 2008, when Wang elaborated the most complete analytical solutions. The present models have been extended to more complex configurations, such as single serpentine, multiple serpentine and straight parallel layout configurations, as shown in Fig. 5. Wang also established a direct, quantitative and systematic relationship between flow distribution, pressure drop, configurations, structures and flow conditions, and developed effective design procedures, measurements and criteria with characteristic parameters, as well as guidelines on how to ensure uniformity of flow distribution, as a powerful design tool. See also Plate heat exchanger Fuel Cells References Fluid dynamics
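The specific governing equations from the literature discussed above are not reproduced in the text, so the sketch below solves only a simple laminar resistance-network model of a dividing manifold (linear, Hagen–Poiseuille-like resistances plus mass conservation at each junction). It deliberately ignores the momentum and inertial corrections that the article emphasizes, and is a toy illustration of frictional flow maldistribution, not Wang's model.

import numpy as np

def dividing_manifold_flows(q_in, r_header, r_port):
    """Toy laminar resistance-network model of a dividing manifold.

    q_in     : total inlet flow
    r_header : list of n-1 resistances of header segments between successive ports
    r_port   : list of n resistances of the branch ports (all discharging to a
               common outlet pressure, taken as zero)

    Solves mass conservation at each header junction with linear resistances
    (delta-p = R * q). Inertial/momentum effects are deliberately ignored.
    """
    n = len(r_port)
    a = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        a[i, i] += 1.0 / r_port[i]                 # flow leaving through port i
        if i + 1 < n:                              # flow continuing down the header
            a[i, i] += 1.0 / r_header[i]
            a[i, i + 1] -= 1.0 / r_header[i]
        if i == 0:
            b[i] += q_in                           # inlet feeds the first junction
        else:                                      # flow arriving from upstream
            a[i, i] += 1.0 / r_header[i - 1]
            a[i, i - 1] -= 1.0 / r_header[i - 1]
    p = np.linalg.solve(a, b)                      # junction pressures
    return p / np.array(r_port)                    # individual port flows

if __name__ == "__main__":
    flows = dividing_manifold_flows(q_in=1.0, r_header=[1.0] * 4, r_port=[20.0] * 5)
    print("port flows:", np.round(flows, 3), " sum =", round(float(flows.sum()), 3))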
Flow distribution in manifolds
[ "Chemistry", "Engineering" ]
1,062
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
50,519,584
https://en.wikipedia.org/wiki/Time-variation%20of%20fundamental%20constants
The term physical constant expresses the notion of a physical quantity subject to experimental measurement which is independent of the time or location of the experiment. The constancy (immutability) of any "physical constant" is thus subject to experimental verification. Paul Dirac in 1937 speculated that physical constants such as the gravitational constant or the fine-structure constant might be subject to change over time in proportion to the age of the universe. Experiments conducted since then have put upper bounds on their time-dependence. This concerns the fine-structure constant, the gravitational constant and the proton-to-electron mass ratio specifically, for all of which there are ongoing efforts to improve tests on their time-dependence. The immutability of these fundamental constants is an important cornerstone of the laws of physics as currently known; the postulate of the time-independence of physical laws is tied to that of the conservation of energy (Noether's theorem), so that the discovery of any variation would imply the discovery of a previously unknown law of force. In a more philosophical context, the conclusion that these quantities are constant raises the question of why they have the specific value they do in what appears to be a "fine-tuned universe", while their being variable would mean that their known values are merely an accident of the current time at which we happen to measure them. Dimensionality It is problematic to discuss the proposed rate of change (or lack thereof) of a single dimensional physical constant in isolation. The reason for this is that the choice of a system of units may arbitrarily select any physical constant as its basis, making the question of which constant is undergoing change an artefact of the choice of units. For example, in SI units, the speed of light was given a defined value in 1983. Thus, it was meaningful to experimentally measure the speed of light in SI units prior to 1983, but it is not so now. Tests on the immutability of physical constants look at dimensionless quantities, i.e. ratios between quantities of like dimensions, in order to escape this problem. Changes in physical constants are not meaningful if they result in an observationally indistinguishable universe. For example, a "change" in the speed of light c would be meaningless if accompanied by a corresponding "change" in the elementary charge e so that the ratio e2:ħc (which fixes the fine-structure constant) remained unchanged. Natural units are systems of units entirely based on fundamental constants. In such systems, it is meaningful to measure any specific quantity which is not used in the definition of units. For example, in Stoney units, the elementary charge is set to e = 1 while the reduced Planck constant ħ is subject to measurement, and in Planck units, the reduced Planck constant is set to ħ = 1, while the elementary charge e is subject to measurement; in either case the measured value is determined by the fine-structure constant. The 2019 revision of the SI expresses all SI base units in terms of fundamental physical constants, effectively transforming the SI system into a system of natural units. Fine-structure constant In 1999, evidence for time variability of the fine-structure constant based on observation of quasars was announced, but a much more precise study based on CH molecules did not find any variation. An upper bound of 10−17 per year for the time variation, based on laboratory measurements, was published in 2008.
Observations of a quasar seen when the universe was only 0.8 billion years old, analysed with an artificial-intelligence method applied to data from the Very Large Telescope (VLT), found a spatial variation of the fine-structure constant preferred over a no-variation model. The time-variation of the fine-structure constant is equivalent to the time-variation of one or more of the speed of light, the Planck constant, the vacuum permittivity and the elementary charge, since α = e²/(4πε₀ħc). Speed of light Gravitational constant The gravitational constant is difficult to measure with precision, and conflicting measurements in the 2000s inspired controversial suggestions of a periodic variation of its value in a 2015 paper. However, while its value is not known to great precision, the possibility of observing type Ia supernovae which happened in the universe's remote past, paired with the assumption that the physics involved in these events is universal, allows for an upper bound of less than 10⁻¹⁰ per year for Ġ/G over the last nine billion years. The quantity Ġ/G is simply the rate of change in time of the gravitational constant, denoted Ġ, divided by its value G. As a dimensional quantity, the value of the gravitational constant and its possible variation will depend on the choice of units; in Planck units, for example, its value is fixed at 1 by definition. A meaningful test of the time-variation of G requires comparison with a non-gravitational force to obtain a dimensionless quantity, e.g. through the ratio of the gravitational force to the electrostatic force between two electrons, which in turn is related to the dimensionless fine-structure constant. Proton-to-electron mass ratio An upper bound on the change in the proton-to-electron mass ratio has been placed at 10⁻⁷ over a period of 7 billion years (or 10⁻¹⁶ per year) in a 2012 study based on the observation of methanol in a distant galaxy. Cosmological constant The cosmological constant is a measure of the energy density of the vacuum. It was first measured, and found to have a positive value, in the 1990s. It is currently (as of 2015) estimated at about 10⁻¹²² in Planck units. Possible variations of the cosmological constant over time or space are not amenable to observation, but it has been noted that, in Planck units, its measured value is suggestively close to the reciprocal of the square of the age of the universe, Λ ~ T⁻². Barrow and Shaw proposed a modified theory in which Λ is a field evolving in such a way that its value remains Λ ~ T⁻² throughout the history of the universe. See also Dirac large numbers hypothesis References Dara Faroughy, "Slowly evolving early universe and a phenomenological model for time-dependent fundamental constants and the leptonic masses" (2008), arXiv:0801.1935. Jean-Philippe Uzan, "Varying Constants, Gravitation and Cosmology", Living Rev. Relativ., 14.2 (2011). Physical constants Fundamental constants Time in physics
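The "reciprocal of the age squared" coincidence noted above can be checked with rough arithmetic. The sketch below (illustrative only; the 13.8-billion-year age of the universe is an assumed input, the Planck time is taken from scipy.constants) expresses the age in Planck times and squares the reciprocal.

```python
# Order-of-magnitude check of the Lambda ~ 1/T^2 coincidence.
# Assumed input: age of the universe ~13.8 Gyr; Planck time from scipy.constants.
from scipy.constants import physical_constants

t_planck = physical_constants["Planck time"][0]       # ~5.39e-44 s
age_universe_s = 13.8e9 * 365.25 * 24 * 3600          # ~4.35e17 s

T = age_universe_s / t_planck                         # age in Planck units, ~8e60
print(f"T     ~ {T:.2e} Planck times")
print(f"1/T^2 ~ {1/T**2:.1e}")   # ~1e-122, comparable to the measured value quoted above
```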
Time-variation of fundamental constants
[ "Physics", "Mathematics" ]
1,275
[ "Time in physics", "Physical phenomena", "Physical quantities", "Quantity", "Physical constants", "Fundamental constants" ]
50,525,886
https://en.wikipedia.org/wiki/Hydrogen%20isotope%20biogeochemistry
Hydrogen isotope biogeochemistry (HIBGC) is the scientific study of biological, geological, and chemical processes in the environment using the distribution and relative abundance of hydrogen isotopes. Hydrogen has two stable isotopes, protium H and deuterium H, which vary in relative abundance on the order of hundreds of permil. The ratio between these two species can be called the hydrogen isotopic signature of a substance. Understanding isotopic fingerprints and the sources of fractionation that lead to variation between them can be applied to address a diverse array of questions ranging from ecology and hydrology to geochemistry and paleoclimate reconstructions. Since specialized techniques are required to measure natural hydrogen isotopic composition (HIC), HIBGC provides uniquely specialized tools to more traditional fields like ecology and geochemistry. History of hydrogen isotopes Earliest work The study of hydrogen stable isotopes began with the discovery of deuterium by chemist Harold Urey. Even though the neutron was not realized until 1932, Urey began searching for "heavy hydrogen" in 1931. Urey and his colleague George Murphy calculated the redshift of heavy hydrogen from the Balmer series and observed very faint lines on a spectrographic study. To intensify the spectroscopic lines for publishable data, Murphy and Urey paired with Ferdinand Brickwedde and distilled a more concentrated pool of heavy hydrogen, now called deuterium. This work on hydrogen isotopes won Urey the 1934 Nobel Prize in Chemistry. Also in 1934, scientists Ernest Rutherford, Mark Oliphant, and Paul Harteck, produced the radioisotope tritium (hydrogen-3, H) by hitting deuterium with high-energy nuclei. The deuterium used in the experiment was a generous gift of heavy water from UC Berkeley physicist Gilbert N. Lewis. Bombarding deuterium produced two previously undetected isotopes, helium-3 (He) and H. Rutherford and his colleagues successfully created H, but incorrectly assumed that He was the radioactive component. The work of Luis Walter Alvarez and Robert Cornog first isolated H and reversed Rutherford's incorrect notion. Alvarez reasoned that tritium was radioactive, but did not measure the half-life, though calculations at the time suggested >10 years. At the end of World War II, physical chemist Willard Libby detected the residual radioactivity of a tritium sample with a Geiger counter, providing a more accurate understanding of the half-life, now accepted as 12.3 years. Impact on physical chemistry The discovery of hydrogen isotopes also impacted physics in the 1940s, as nuclear magnetic resonance spectroscopy was first invented. Organic chemists now use nuclear magnetic resonance (NMR) to map protein interactions or identify small compounds, but NMR was first a passion project of physicists. All three isotopes of hydrogen were found to have magnetic properties suitable for NMR spectroscopy. The first chemist to fully express an application of NMR was George Pake, who measured gypsum (CaSO4.2H2O) as a crystal and powder. The signal observed, called the Pake doublet, was from the magnetically active hydrogens in water. Pake then calculated the proton-proton bond length. NMR measurements were further revolutionized when commercial machines became available in the 1960s. Before this, NMR experiments involved constructing massive projects, locating large magnets, and hand wiring miles of copper coil. 
Proton NMR remained the most popular technique throughout advancements in following decades, but H and H were used in other flavors of NMR spectroscopy. H has a different magnetic moment and spin than H, but generally a much smaller signal. Historically, deuterium NMR is a poor alternative to proton NMR, but has been used to study the behavior of lipids on cell membranes. A variant of H NMR called H-SNIF has shown potential for understating position-specific isotope compositions and comprehending biosynthetic pathways. Tritium is also used in NMR, as it is the only nucleus more sensitive than H, generating very large signals. However, tritium's radioactivity discouraged many studies of H-NMR. While tritium's radioactivity discourages use in spectroscopy, tritium is essential for nuclear weapons. Scientists began understanding nuclear energy as early as the 1800s, but large advancements were made in studies of the atomic bomb in the early 1940s. Wartime research, especially the Manhattan Project, greatly advanced the understanding of radioactivity. H is a byproduct in reactors, a result of hitting lithium-6 with neutrons, producing almost 5 MeV of energy. In boosted fission weapons a mix of H and H is heated until there is thermonuclear fusion to produce helium and free neutrons. These fast neutrons then cause further fission, creating "boosting". In 1951, in Operation Greenhouse, a prototype named George, validated the proof of concept for such a weapon. However, the first true boosted fission bomb, Greenhouse Item, was successfully tested in 1952, giving a 45.5-kiloton yield, nearly double that of an unboosted bomb. The United States stopped producing tritium in nuclear reactors in 1988, but nuclear tests in the 1950s added large spikes of radionuclides to the air, especially carbon-14 and H. This complicated measurements for geologists using carbon dating. However, some oceanographers benefited from the H increase, using the signal in the water to trace physical mixing of water masses. Impact on biogeochemistry In biogeochemistry, scientists focused mainly on deuterium as a tracer for environmental processes, especially the water cycle. American geochemist Harmon Craig, once a graduate student of Urey, discovered the relationship between rainwater's hydrogen and oxygen isotope ratios. The linear correlation between the two heavy isotopes occurs worldwide and is called the global meteoric water line. By the late 1960s, the focus of hydrogen isotopes shifted away from water and toward organic molecules. Plants use water to form biomass, but a 1967 study by Zebrowski, Ponticorvo, and Rittenberg found that the organic material in plants had less H than the water source. Zebrowski's research measured the deuterium concentration of fatty acids and amino acids derived from sediments in the Mohole drilling project. Further studies by Bruce Smith and Samuel Epstein in 1970 confirmed the depletion of H in organics compared to environmental water. Another duo in 1970, Schiegl and Vogel, analyzed the HIC as water became biomass, as biomass became coal and oil, and as oil became natural gas. In each step they found H further depleted. A landmark paper in 1980 by Marilyn Epstep, now M. Fogel, and Thomas Hoering titled "Biogeochemistry of the stable hydrogen isotopes" refined the links between organic materials and sources. In this early stage of hydrogen stable isotope study, most isotope compositions or fractionations were reported as bulk measurements of all organic or all inorganic matter. 
Some exceptions include cellulose and methane, as these compounds are easily separated. Another advantage of methane for compound-specific measurements is the lack of hydrogen exchange. Cellulose has exchangeable hydrogen, but chemical derivatization can prevent swapping of cellulose hydrogen with water or mineral hydrogen sources. Cellulose and methane studies in the 1970s and 1980s set the standard for modern hydrogen isotope geochemistry. Measurement of individual compounds was made possible in the late 1990s and early 2000s with advances in mass spectrometry. The Thermo Delta+XL transformed measurements as the first instrument capable of compound specific isotope analysis. It was then possible to look at smaller samples with more precision. Hydrogen isotope applications quickly emerged in petroleum geochemistry by measuring oil, paleoclimatology by observing lipid biomarkers, and ecology by constructing trophic dynamics. Advances are underway in the clumped-isotope composition of methane after development of the carbonate thermometer. Precise measurements are also enabling focus on microbial biosynthetic pathways involving hydrogen. Ecologists studying trophic levels are especially interested in compound specific measurements for reconstructing past diets and tracing predator-prey relationships. Highly advanced machines now promise position-specific hydrogen-isotope analysis of biomolecules and natural gas. Important concepts Stable vs radioactive isotopes All isotopes of an element have the same number of protons with varying numbers of neutrons. Hydrogen has three naturally occurring isotopes: H, H and H; called protium (H), deuterium (D) and tritium (T), respectively. Both H and H are stable, while H is unstable and beta-decays to He. While there are some important applications of H in geochemistry (such as its use as an ocean circulation tracer) these will not be discussed further here. Isotope notation The study of stable isotope biogeochemistry involves the description of the relative abundances of various isotopes in a certain chemical pool, as well as the way in which physicochemical processes change the fraction of those isotopes in one pool vs. another. Various type of notation have been developed to describe the abundance and change in the abundance of isotopes in these processes, and these are summarized below. In most cases only the relative amounts of an isotope are of interest, the absolute concentration of any one isotope is of little importance. Isotope ratio and fractional abundance The most fundamental description of hydrogen isotopes in a system is the relative abundance of H and H. This value can be reported as isotope ratio R or fractional abundance F defined as: ^2R\ =\ \frac{^2H}{^1H} and ^2F\ =\ \frac{^2H}{{^1H}+{^2H}} where H is amount of isotope H. Fractional abundance is equivalent to mole fraction, and yields atom percent when multiplied by 100. In some instances atom percent excess is used, which reports the atom percent of a sample minus the atom percent of a standard. Delta (δ) notation Isotope ratios for a substance are often reported compared to a standard with known isotopic composition, and measurements of relative masses are always made in conjuncture with measuring a standard. For hydrogen, the Vienna Standard Mean Ocean Water standard is used which has an isotope ratio of 155.76±0.1 ppm. 
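A short sketch of the bookkeeping just defined may help: the snippet below (Python; purely illustrative, not part of the original article) converts between the isotope ratio R, the fractional abundance F and atom percent, using the VSMOW ratio quoted above.

```python
# Conversions between isotope ratio (R), fractional abundance (F) and atom percent,
# using the VSMOW ratio quoted above (155.76 ppm). Illustrative sketch only.
R_VSMOW = 155.76e-6          # 2H/1H of the VSMOW standard

def ratio_to_fraction(R):
    """Fractional abundance F = 2H / (1H + 2H) from the ratio R = 2H / 1H."""
    return R / (1.0 + R)

def fraction_to_ratio(F):
    """Isotope ratio R = 2H / 1H from the fractional abundance F."""
    return F / (1.0 - F)

F_VSMOW = ratio_to_fraction(R_VSMOW)
print(f"F_VSMOW      = {F_VSMOW:.6e}")          # ~1.557e-4
print(f"atom percent = {100 * F_VSMOW:.4f} %")  # ~0.0156 %
```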
The delta value as compared to this standard is defined as: \delta^2H_{VSMOW}\ =\ \frac{^2R_{sample}}{^2R_{VSMOW}}-1 These delta values are usually quite small, and are typically reported as per mil values (‰), obtained by multiplying the above equation by a factor of 1000. Measures of fractionation The study of HIBGC relies on the fact that various physicochemical processes preferentially enrich or deplete ²H relative to ¹H (see kinetic isotope effect [KIE], etc.). Various measures have been developed to describe the fractionation of an isotope between two pools, often the product and reactant of a physicochemical process. α notation describes the difference between two hydrogen pools A and B with the equation: \alpha_{A/B}\ =\ \frac{^2R_{A}}{^2R_{B}}\ =\ \frac{\delta^2H_{A}+1}{\delta^2H_{B}+1} where δ²H_A is the delta value of pool A relative to VSMOW. As many delta values do not vary greatly from one another, the α value is often very close to unity. A related measure called epsilon (ε) is often used, given simply by: \epsilon_{A/B}\ =\ \alpha_{A/B}-1 These values are often very close to zero, and are reported as per mil values by multiplying by 1000. One final measure is Δ, pronounced "cap delta", which is simply: \Delta_{A/B}\ =\ \delta^2H_{A}-\delta^2H_{B} Conservation of mass in mixing calculations ¹H and ²H are both stable isotopes, so the ²H/¹H ratio of a pool containing hydrogen remains constant as long as no hydrogen is added or removed, a property known as conservation of mass. When two pools of hydrogen A and B mix with molar amounts of hydrogen m_A and m_B, each with its own starting fractional abundance of deuterium (F_A and F_B), the fractional abundance of the resulting mixture is given by the following exact equation: ^2F_{\Sigma}\ =\ \frac{m_A\,^2F_A\ +\ m_B\,^2F_B}{m_A\ +\ m_B} The terms with Σ represent the values for the combined pools. The following approximation is often used for calculations regarding the mixing of two pools with known isotopic compositions: \delta^2H_{\Sigma}\ \approx\ x_A\,\delta^2H_A\ +\ x_B\,\delta^2H_B where x_A and x_B are the fractions of the total hydrogen contributed by pools A and B. This approximation is convenient and applicable with little error in most applications dealing with pools of hydrogen from natural processes. The difference between the delta value calculated with the approximate equation and that calculated with the exact equation grows with the isotopic contrast between the two pools, but it is quite small for nearly all mixing of naturally occurring isotope values, even for hydrogen, which can have quite large natural variations in δ values. The approximation is usually avoided when unnaturally large δ values are encountered, which is especially common in isotopic labeling experiments. Naturally occurring isotope variation Natural processes result in broad variations in the D/H ratio (DHR) in different pools of hydrogen. KIEs and physical changes such as precipitation and evaporation lead to these observed variations. Seawater varies only slightly, between 0 and −10 per mil, while atmospheric water can vary between about −200‰ and +100‰. Biomolecules synthesized by organisms retain some of the D/H signature of the water in which they were grown, plus a large fractionation factor which can be as great as several hundred ‰. Large D/H differences, of thousands of ‰, can be found between Earth and other planetary bodies such as Mars, likely due to variations in isotope fractionation during planet formation and the loss of hydrogen into space. List of well known fractionation effects A number of common processes fractionate hydrogen isotopes to produce the isotope variations found in nature. Common physical processes include precipitation and evaporation.
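Under the definitions above, the small sketch below (all sample numbers hypothetical) converts between δ values and ratios, evaluates α and ε for a lipid relative to its source water, and compares the exact mass-balance mixing equation with the common delta-value approximation.

```python
# Delta notation, fractionation and two-pool mixing, following the definitions above.
# All sample numbers are hypothetical illustrations.
R_VSMOW = 155.76e-6

def delta_to_R(delta_permil):
    return R_VSMOW * (delta_permil / 1000.0 + 1.0)

def R_to_delta(R):
    return (R / R_VSMOW - 1.0) * 1000.0

# Fractionation between a lipid (A) and its source water (B), per mil inputs:
dA, dB = -180.0, -60.0
alpha = (dA / 1000 + 1) / (dB / 1000 + 1)
eps_permil = (alpha - 1) * 1000
print(f"alpha = {alpha:.4f}, epsilon = {eps_permil:.1f} permil")

# Mixing two hydrogen pools: exact mass balance on fractional abundance
# versus the common approximation on delta values.
m1, d1 = 2.0, -150.0     # moles of H and delta of pool 1 (hypothetical)
m2, d2 = 1.0, +20.0      # moles of H and delta of pool 2 (hypothetical)
F = lambda d: delta_to_R(d) / (1 + delta_to_R(d))
F_mix = (m1 * F(d1) + m2 * F(d2)) / (m1 + m2)
d_exact = R_to_delta(F_mix / (1 - F_mix))
d_approx = (m1 * d1 + m2 * d2) / (m1 + m2)
print(f"exact {d_exact:.2f} vs approximate {d_approx:.2f} permil")
```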
Chemical reactions can also heavily influence the partitioning of heavy and light isotopes between pools. The rate of a chemical reaction depends in part on the energies of the chemical bonds formed and broken in the reaction. Since different isotopes have different masses, the bond energies differ between isotopologues of a chemical species. This will result in a difference in the rate of a reaction for the different isotopologues, resulting in a fractionation of the different isotopes between the reactant and product in a chemical reaction. This is known as the kinetic isotope effect (KIE). A classic example of KIE is the DHR difference in the equilibrium between HO and H which can have an α value of as much as 3–4. Isotope ratio as tracer for fingerprint In many areas of study the origin of a chemical or group of chemicals is of central importance. Questions such as the source of environmental pollutants, the origin of hormones in an athlete's body, or the authenticity of foods and flavorings are all examples where chemical compounds need to be identified and sourced. Hydrogen isotopes have found uses in these and many other diverse areas of study. Since many processes can affect the DHR of a given compound this ratio can be a diagnostic signature for compounds produced in a specific location or via a certain process. Once the DHRs of a number of sources are known the measurement of this ratio for a sample of unknown origin can often be used to link it back to a certain source or production method. Physical chemistry Hydrogen isotope formation H, with one proton and no neutrons, is the most abundant nuclide in the Solar System, formed in the earliest rounds of stellar explosions after the Big Bang. After the universe exploded into life, the hot and dense cloud of particles began to cool, first forming subatomic particles like quarks and electrons, which then condensed to form protons and neutrons. Elements larger than hydrogen and helium were produced with successive stars, forming from the energy released during supernovae. Deuterium, H, with one proton and one neutron, is also known to have cosmic origin. Like protium, deuterium was produced very early in the universe's history, during Big Bang nucleosynthesis (BBN). As protons and neutrons combined, helium-4 was produced with a deuterium intermediate. Alpha reactions with He produce many of the larger elements that dominate today's Solar System. However, before the universe cooled, high-energy photons destroyed any deuterium, preventing larger element formation. This is called the deuterium bottleneck, a restriction on the timeline for nucleosynthesis. All of today's deuterium originated from this proton-proton fusion after enough cooling. Tritium, H, with one proton and two neutrons, was produced by proton and neutron collisions in the early universe as well, but it has since radioactively decayed to helium-3. Today's tritium cannot be from BBN, due to tritium's short half-life, 12.3 years. Today's H concentration is instead governed by nuclear reactions and cosmic rays. The beta decay of H to He releases an electron and an antineutrino, and about 18 keV of energy. This is a low-energy decay, so the radiation cannot permeate skin. Tritium is thus only hazardous if directly ingested or inhaled. Quantum properties H is a spin-1/2 subatomic particle and therefore a fermion. Other fermions include neutrons, electrons, and tritium. 
Fermions are governed by the Pauli exclusion principle, where no two particles can have the same quantum number. However, bosons like deuterium and photons, are not bound by exclusion and multiple particles can occupy the same energy state. This fundamental difference in H and H manifests in many physical properties. Integer-spin particles like deuterium follow Bose–Einstein statistics while fermions with half-integer spins follow Fermi–Dirac statistics. Wave functions that describe multiple fermions must be antisymmetric with respect to swapping particles, while boson wave functions are symmetric. Because bosons are indistinguishable and can occupy the same state, collections of bosons behave very differently than fermions at colder temperatures. As bosons are cooled and relaxed to the lowest energy state, phenomena like superfluidity and superconductivity occur. Kinetic and equilibrium isotope effects Isotopes differ by number of neutrons, which directly impacts physical properties based on mass and size. Normal hydrogen (protium, H) has no neutron. Deuterium (H) has one neutron, and tritium (H) has two. Neutrons add mass to the atom, leading to different chemical physical properties. This effect is especially strong for hydrogen isotopes, since the added neutron doubles the mass from H to H. For heavier elements like carbon, nitrogen, oxygen, or sulfur, the mass difference is diluted. Physical chemists often model chemical bonding with the quantum harmonic oscillator (QHO), simplifying a hydrogen-hydrogen bond as two balls connected by a spring. The QHO is based on Hooke's law and is a good approximation of the Morse potential that accurately describes bonding. Modeling H/H in a chemical reaction demonstrates the energy distributions of isotopes in products and reactants. Lower energy levels for the heavier isotope H can be explained mathematically by the QHO's dependence on the inverse of the reduced mass μ. Thus, a larger reduced mass is a larger denominator and thus a smaller zero point energy and a lower energy state in the quantum well. Calculating the reduced mass of a H–H bond versus a H–H bond gives: The quantum harmonic oscillator has energy levels of the following form, where k is the spring constant and h is the Planck constant. The effects of this energy distribution manifest in the kinetic isotope effect (KIE) and the equilibrium isotope effect. In a reversible reaction, under equilibrium conditions, the reaction proceeds forward and backward, distributing the isotopes to minimize thermodynamic free energy. Some time later, at equilibrium, more heavy isotopes will be on the product side. The stability of the lower energy drives the products to be enriched in H relative to reactants. Conversely, under kinetic conditions, reactions are generally irreversible. The limiting step in the reaction is overcoming the activation energy barrier to reach an intermediate state. The lighter isotope has a higher energy state in the quantum well and will thus be preferentially formed into products. Thus under kinetic conditions the product will be relatively depleted in H. KIEs are common in biological systems and are especially important for HIBGC. KIEs usually result in larger fractionations than equilibrium reactions. In any isotope system, KIEs are stronger for larger mass differences. Light isotopes in most systems also tend to move faster but form weaker bonds. At high temperature, entropy explains a large signal in isotope composition. 
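The mass dependence just described can be made concrete with the standard harmonic-oscillator expressions, μ = m₁m₂/(m₁ + m₂) for the reduced mass and E_n = (n + ½)ħ√(k/μ) for the energy levels. The sketch below compares the zero-point energies of ¹H–¹H and ²H–¹H bonds; the force constant used is an assumed illustrative value (roughly that of the H₂ molecule), not a quantity taken from the text.

```python
# Zero-point energies of 1H-1H vs 2H-1H bonds in the harmonic-oscillator picture.
# The force constant (~575 N/m) is an assumed illustrative value.
from math import sqrt
from scipy.constants import hbar, atomic_mass as u, eV

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def zero_point_energy(k, mu):
    """E_0 = (1/2) * hbar * sqrt(k / mu) for a quantum harmonic oscillator."""
    return 0.5 * hbar * sqrt(k / mu)

k = 575.0                                   # N/m, assumed force constant
mu_HH = reduced_mass(1.008 * u, 1.008 * u)  # ~0.504 u
mu_DH = reduced_mass(2.014 * u, 1.008 * u)  # ~0.672 u

for label, mu in [("1H-1H", mu_HH), ("2H-1H", mu_DH)]:
    print(f"{label}: mu = {mu / u:.3f} u, ZPE = {zero_point_energy(k, mu) / eV:.3f} eV")
# The heavier isotopologue sits lower in the potential well, which is the origin of
# both the equilibrium and the kinetic isotope effects described above.
```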
However, when temperature decreases isotope effects are more expressed and randomness plays less of a role. These general trends are exposed in further understanding of bond breaking, diffusion or effusion, and condensation or evaporation reactions. Chemistry of hydrogen exchange One of the major complications in studying hydrogen isotopes is the issue of exchangeability. At many time scales, ranging from hours to geological epochs, scientists have to consider if the hydrogen moieties in studied molecules are the original species or if they represent exchange with water or mineral hydrogen near by. Research in this area is still inconclusive in regards to rates of exchange, but it is generally understood that hydrogen exchange complicates the preservation of information in isotope studies. Rapid exchange Hydrogen atoms easily separate from electronegative bonds such as hydroxyl bonds (O–H), nitrogen bonds (N–H), and thiol/mercapto bonds (S–H) on hour to day long timescales. This rapid exchange is particularly problematic for measurements of bulk organic matter with these functional groups because isotope compositions are more likely to reflect the source water and not the isotope effect. Therefore, records of paleoclimate that are not measuring ancient waters, rely on other isotopic markers. Advancements in the 1990s held promising potential to resolve this problem: samples were equilibrated with two variations of heavy water and compared. Their ratios represent an exchange factor that can calibrate measurements to correct for H/H swapping. Carbon bound hydrogen exchange For some time, researchers believed that large hydrocarbon molecules were impervious to hydrogen exchange, but recent work has identified many reactions that allow isotope reordering. The isotopic exchange becomes relevant at geologic time scales and has impacted work of biologists studying lipid biomarkers, and geologists studying ancient oil. Reactions responsible for exchange include Radical reactions that cleave C–H bonds. Ion exchange that of tertiary and aromatic hydrogen. Enolizations that activate hydrogens on ketone alpha carbons. Stereochemical exchange that causes stereochemical inversion. Constitutional exchange like methyl shifts, double bond migrations and carbon backbone rearrangements. Detailed kinetics of these reactions have not been determined. However, it is known that clay minerals catalyze ionic hydrogen exchange faster than other minerals. Thus hydrocarbons formed in clastic environments exchange more than those in carbonate settings. Aromatic and tertiary hydrogen also have greater exchange rates than primary hydrogen. This is due to the increasing stability of associated carbocations. Primary carbocations are considered too unstable to exist and have never been isolated in an FT-ICR spectrometer. On the other hand, tertiary carbocations are relatively stable and are often intermediates in organic chemistry reactions. This stability, which increases the likelihood of proton loss, is due to the electron donation of nearby carbon atoms. Resonance and nearby lone pairs can also stabilize carbocations via electron donation. Aromatic carbons are thus relatively easy to exchange. Many of these reactions have a strong temperature dependence; higher temperature typically accelerates exchange. However, different mechanisms may prevail at each temperature window. Ion exchange, for example, is most significant at low temperature. 
In such low-temperature environments, there is potential for preserving the original hydrogen isotope signal over hundreds of millions of years. However, many rocks in geologic time have reached significant thermal maturity. Even by the onset of the oil window it appears that much of the hydrogen has exchanged. Recently, scientists have explored a silver lining: hydrogen exchange is a zero order kinetic reaction (for carbon bound hydrogen at 80–100°C, the half-times are likely 10–10 years). Applying the mathematics of rate constants would allow extrapolation to original isotopic compositions. While this solution holds promise, there is too much disagreement in the literature for robust calibrations. Vapor isotope effects Vapor isotope effects occur for H, H, and H; since each isotope has different thermodynamic properties in the liquid and gas phases. For water, the condensed phase is more enriched while the vapor is more depleted. For example, rain condensing from a cloud, is heavier than the vapor starting point. Generally, the large variations in deuterium concentration in water are from fractionations between liquid, vapor, and solid reservoirs. In contrast to the fractionation pattern of water, non-polar molecules like oils and lipids, have gaseous counterparts enriched with deuterium relative to the liquid. This is thought to be associated with the polarity from hydrogen bonding in water that does not interfere in long-chain hydrocarbons. Observed variations in isotope abundance Due to physical and chemical fractionation processes, the variations in the isotopic compositions of elements are reported, and the standard atomic weights of hydrogen isotopes have been published by IUPAC's Commission on Atomic Weights and Isotopic Abundances. The HICs are reported relative to the International Atomic Energy Agency (IAEA) reference water. In the equilibrium isotope reactions of H/H in general, enrichment of the heavy isotope is observed in the compound with the higher oxidation state. However, in our natural environment, HIC varies greatly depending on the sources and organisms due to complexities of interacting elements in disequilibrium states. In this section, the observed variations in HIC of water sources (hydrosphere), living organisms (biosphere), organic substances (geosphere), and extraterrestrial materials in the Solar system are described. Hydrosphere Oceans Variations in δD of different water sources and ice caps are observed due to evaporation and condensation processes. (See section 6 for more details.) When seawater is well-mixed, the δD at equilibrium is near 0‰ (‰ SMOW) with a DHR of 155.76 ppm. However, continuous variations in δD are caused by evaporation or precipitation processes which lead to disequilibrium in fractionation processes. A large HIC gradient occurs in surface waters of the oceans, and the fluctuation value in the Northwest Atlantic surface water is around 20‰. According to the data examining the southern supersegment of the Pacific Ocean, as latitude decreases from 65˚S to 40˚S, δD fluctuates between around −50‰ and −70‰. The HIC of seawater (not just surface water) is mostly in the range of 0‰ to −10‰. The estimates of δD for different parts of the ocean across the world are shown on the map. Ice caps Typical δDs for ice sheets in the polar regions range from around −400‰ to −300‰ (‰SMOW). Ice caps' δDs are affected by distance from open ocean, latitude, atmospheric circulation, and the amount of insolation and temperature. 
The temperature change affects the HIC of ice caps, so the HIC of ice can give estimates for the historical climate cycles such as the timelines for interglacial and glacial periods. [See section 7.2. Paleo-reconstruction for more details] The δDs of ice caps from 70 km south of Vostok Station and in East Antarctica are −453.7‰ and −448.4‰ respectively, and are shown on the map. Atmosphere The analysis done based on satellite measurement data, estimates δD for the air in various parts of the world. The general trend is that δD is more negative at higher latitude, so air above Antarctica and the Arctic is D-depleted to around −230‰ to −260‰ or even lower. The estimated atmospheric δDs are shown on the map. A vast portion of global atmospheric water vapor comes from the Western Pacific near the tropics, (mean 2009) and the HIC of air depends on temperature and humidity. Hot, humid regions generally have higher δD. Water vapor in the air is in general more depleted than terrestrial water sources, since HO evaporates faster than HHO due to higher vapor pressure. On the other hand, rain water is in general more enriched than atmospheric water vapor. Precipitation δDs of annual precipitation in different regions of the world are shown on the map. The precipitation is more D-enriched near the equator in the Tropics. The δDs generally fall in the range of around −30 ~ −150‰ in the northern hemisphere and −30~+30‰ over land areas of the southern hemisphere. In North America, the δD of average monthly precipitation across regions is lower in January (ranging up to around −300‰ in Canada) than in July (up to around −190‰). The overall mean precipitation is determined by the balance between evaporation of water from the oceans and other surface water and condensation of water vapor in the form of rain. Net evaporation should equal net precipitation, and the δD for precipitation is around −22‰ (global average). The Global Network of Isotopes in Precipitation (GNIP) investigates and monitors the isotopic composition of precipitation at various sites all over the world. The mean precipitation can be estimated by the equation, δH = 8.17(±0.07) δO + 11.27(±0.65)‰ VSMOW. (Rozanski et al., 1993) This equation is the slightly modified version from the general global meteoric water line (GMWL) equation, δH = 8.13δO + 10.8, which provides the average relationship between δH and δO of natural terrestrial waters. Lakes and rivers The δDs vs. VSMOW of lakes in different regions are shown on the map. The general pattern observed, indicates that δDs of surface waters including lakes and rivers, are similar to that of local precipitation. Soil water The isotopic composition of soil is controlled by the input of precipitation. Therefore, the δD of soil is similar to that of local precipitation. However, due to evaporation, soil tends to be more D-enriched than precipitation. The degree of enrichment varies greatly depending on atmospheric humidity, local temperature as well as the depth of the soil beneath the surface. According to the study by Meinzer et al. (1999), as the depth in the soil increases, the δD of soil water decreases. Biosphere Marine algae The factors affecting δD of algal lipids are: δD of water, algal species (up to 160%), lipid type (up to 170%), salinity (+0.9±0.2% per PSU), growth rate (0 ~ −30% per day) and temperature (−2 ~ −8% per °C). In a study by Zhang et al. 
(2009), the δDs of fatty acids in Thalassiosira pseudonana chemostat cultures were −197.3‰, −211.2‰ and −208.0‰ for C14, C16 and C18 fatty acids respectively. The δD of C16 fatty acid in the algae A. e. unicocca at 25°C, was determined using the empirical equation y = 0.890x − 91.730, where x is the δD of water at harvest. For another algal species, B. v. aureus, the equation was y = 0.869x − 74.651. The degree of D/H fractionation in most algal lipids increases with increasing temperature and decreases with increasing salinity. The growth rates have different impacts on the D/H fractionation depending on the species types. Phytoplankton and bacteria The δD of lipids from phytoplankton is largely affected by δD of water, and there seems to be a linear correlation between those two values. The δD of most other biosynthetic products in phytoplankton or cyanobacteria are more negative than that of the surrounding water. The δD values of fatty acids in methanotrophs living in seawater lie between −50 and −170‰, and that of sterols and hopanols range between −150 and −270‰. The HIC of photoautotrophs can be estimated using the equation, , where , and are the DHRs of lipids, water, and substrates, respectively. is the mole fraction of lipid H derived from external water, whereas and denote the net isotopic fractionations associated with uptake and utilization of water and substrate hydrogen, respectively. For phototrophs, is calculated assuming that = 1. The isotopic fractionation between lipids and methane () is 0.94 for fatty acids and 0.79 for isoprenoid lipids. The isotopic fractionation between lipids and water () is 0.95 for fatty acids and 0.85 for isoprenoid lipids. For plants and algae, the isotopic fractionation between lipids and methane () is 0.94 for fatty acids and 0.79 for isoprenoid lipids. δD values for lipids in bacterial species Source: Lipids in organisms growing on heterotrophic substrates: Growing on sugar: depleted 200‰ ~ 300‰ relative to water Growing on direct precursor of TCA cycle (e.g. acetate (δD = −76‰) or succinate): enriched −50‰ ~ +200‰ relative to water : −150‰ ~ +200‰ Lipids in organisms growing photoautotrophically: Depleted 50‰ ~ 190‰ relative to water : −150‰ ~ −250‰ Lipids in organisms growing chemoautotrophically: : −200‰ ~ −400‰ Plants δDs for n-C alkane(‰) vs. VSMOW for different plant groups are as follows. Here, represents δDs for n-C alkane(‰) vs. VSMOW, and represents δDs for mean annual precipitation (‰) vs. VSMOW). For plant leaf wax, the relative humidity, the timing of leaf wax formation and the growth conditions including light levels affect the D/H fractionation of plant wax. From the Craig–Gordon model, it can be understood that leaf water in the growth chamber gasses is significantly D-enriched due to transpiration. Sugars The global abundance of H in plants is in the following order: phenylpropanoids > carbohydrates > bulk material > hydrolyzable lipids > steroids. In plants, δDs of carbohydrates, which typically range around −70‰ to −140‰, are good indicators of the photosynthetic metabolism. Photosynthetically produced hydrogen which is bound to carbon backbones is ~100‰–170‰ more D-depleted than the water in plant tissues. Heterotrophic processing of carbohydrates involves isomerization of triose phosphates and interconversion between fructose-6-phosphate and glucose-6-phosphate. These cellular processes promote the exchange between organic H and within the plant tissues leading to around 158‰ of D-enrichment of those exchanged sites. 
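The empirical linear relations quoted above for algal C16 fatty acids can be applied directly. The sketch below uses the coefficients given in the text for A. e. unicocca and B. v. aureus at 25°C; the function names and the input water δD values are hypothetical illustrations.

```python
# Applying the empirical relations quoted above for the dD of C16 fatty acids
# as a function of water dD at harvest (A. e. unicocca and B. v. aureus, 25 C).
# Function names and the input water dD values are hypothetical.
def dD_fatty_acid_unicocca(dD_water):
    return 0.890 * dD_water - 91.730

def dD_fatty_acid_aureus(dD_water):
    return 0.869 * dD_water - 74.651

for dD_water in (-50.0, 0.0, 20.0):
    print(dD_water,
          round(dD_fatty_acid_unicocca(dD_water), 1),
          round(dD_fatty_acid_aureus(dD_water), 1))
```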
The δD of plants such as sugar beet, orange and grape ranges from −132‰ to −117‰, and that of plants such as sugar cane and maize ranges from −91‰ to −75‰. The δD of Crassulacean acid metabolism (CAM) such as pineapple is estimated at around −75‰. Sugar beet and sugar cane contain sucrose, and maize contain glucose. Orange and pineapple are the sources of glucose and fructose. The deuterium content of the sugars from the above plant species are not distinctive. In plants, hydrogen attached to carbons in 4 and 5 positions of the glucose typically comes from NADPH in the photosynthetic pathway, and is found to be more D-enriched. Whereas in plants, hydrogen attached to carbons 1 and 6 positions is more D-enriched. D-enrichment patterns in CAM species tend to be closer to that in species. Bulk organic matter The HIC of leaf water is variable during the biosynthesis, and the enrichment in the whole leaf can be described by the equation, △D = △D × ([1 − e]/P) The typical δD of bulk plant is around −160‰, while δDs for cellulose and lignin are −110‰ and −70‰ respectively. Animals HIC in animal tissues is hard to estimate due to complexities in the diet intake and the isotopic composition of surrounding water sources. When fish species were investigated, average HIC of proteins was in a large range of −128‰ ~ +203‰. In the bulk tissue of organisms, all lipids were found to be D-depleted, and the values of δD for lipids tend to be lower than that for proteins. The average δD for Chironomid and fish protein was estimated to be in the range of −128‰ to +203‰. Most hydrogen in heterotrophic tissues comes from water not from diet sources, but the proportion coming from water varies. In general, hydrogen from water is transferred to NADPH and then taken up to the tissues. An apparent trophic effect (compounding effect) can be observed for δD in heterotrophs, so significant D-enrichments result from the intake of surrounding water the in aquatic food webs. The δD of proteins in animal tissues are in cases affected more by diet sources than by surrounding water. Though different δDs for the same class of compounds may arise in different organisms growing in water with the same δD, those compounds generally have the same δD within each organism itself. [See Section 7.5. Ecology for more details] Lipids δDs of fatty acids in living organisms, are typically −73‰ to −237‰. The δDs of individual fatty acids vary widely between cultures (−362‰ to +331‰), but typically by less than around 30‰ between different fatty acids from the same species. The differences in δD for the compounds within the same lipid class is generally less than 50‰, whereas the difference falls in the range of 50‰–150‰ for the compounds in different lipid classes. δDs for typical lipid groups are determined using the following equation: ; where = net or apparent fractionation, = lipid product and = source water. The δDs of common lipid classes found in living organisms are: n-alkyl: −170‰ ± 50‰ (113‰–262‰ more D-depleted than growth water) isoprenoid: −270‰ ± 75‰ (142‰–376‰ more D-depleted than growth water) phytol: −360‰ ± 50‰ (more depleted than the other two categories) Polyisoprenoid lipids are more depleted than acetogenic (n-alkyl) lipids with more negative δDs. Geosphere Oil Source: Oil samples from northeast Japan: from −130‰ to around −110‰ with higher maturity. 
Oil samples from Portiguar Basin: −90‰ (lancustrine environment), −120‰ to –135‰ (marine-evaporitic environment), Alkenones The isotopic composition of alkenones often reflect the isotopic enrichment or depletion of the surrounding environment, and δDs of alkenones in different regions are shown on the map. Coals Source: According to the studies by Reddings et al., δDs for coals from various sources range from around −90‰ to −170‰. The δDs of coals in different regions are shown on the map. Natural gas Source: Methane Methane produced by marine methanogens is typically more D-enriched than methane produced by methanogens grown in freshwater. δDs for thermogenic methane range from −275‰ to −100‰, and from −400‰ to −150‰ for microbial methane. H2 gas The δD of atmospheric H is around +180‰, the biggest δD known for natural terrestrials (mole fraction H: 183.8 ppm). The δD of natural gas from a Kansas well is around −836‰ (mole fraction H: 25.5 ppm) In electrolysis of water, hydrogen gas is produced at the cathode, but incomplete electrolysis of water may cause isotopic fractionation leading to enrichment of H in the sample water and the production of hydrogen gas with deuterium components. Mineral H The δDs of hydroxyl-bearing minerals of the mantle were estimated at −80‰ ~ −40‰ via analysis of the isotopic composition for juvenile water. Hydrogen minerals generally have large isotope effects, and the isotopic composition often follows the pattern observed for precipitation. Clay minerals The D/H fractionations in clays such as kaolinite, illite, smectite are in most cases consistent when no significant external forces are applied under constant temperature and pressure. The following is an empirically determined equation for estimating the D/H fractionation factor: 1000 In α = −2.2 × 10 × T − 7.7. The δDs vs. ‰SMOW for hydrogen minerals found in mantle, metamorphic rock, shales, marine clays, marine carbonates and sedimentary rocks are shown in the table. Extraterrestrial objects Variations of DHR in the Solar System Earth The HIC of mantle rocks on Earth is highly variable; and that of mantle water is around −80‰ ~ −50‰ depending on its states such as fluid, hydrous phase, hydroxyl point defect, juvenile water (from degassing of the mantle), magmatic water (water equilibrated with a magma). Sun The Sun's DHR is around 21 ± 5 × 10. Mars The current HIC is enriched by a factor of 5 relative to Earth's seawater due to continual losses of H in Martian atmosphere. Therefore, the δD is estimated at around +4000‰. The DHRs of Jupiter and Saturn are nearly in the order of 10, and the DHRs of Uranus and Neptune are closer to 10. Hydrogen is the most abundant element in the universe. Variations in isotopic composition of extraterrestrial materials stem from planetary accretion or other planetary processes such as atmospheric escape, and are larger for H and N than for C and O. The preservation of D-enrichment is observed in chondritic meteorites, interplanetary dust particles and cometary Volatiles. From the helium isotope abundance data, the cosmic DHR is estimated at around 20 ppm: much lower than the terrestrial DHR of 150 ppm. The enrichment of D/H from the proto-solar reservoir occurs for most of the planets except for Jupiter and Saturn, the massive gaseous planets. The DHRs of the atmospheres of Venus and Mars are ~2 × 10 and ~8 × 10 respectively. The DHRs of Uranus and Neptune are larger than that of protosolar reservoir by a factor of ~3 due to their deuterium-rich icy cores. 
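The D/H ratios discussed for planetary bodies translate directly into δD values against VSMOW. The sketch below (illustrative only) performs that conversion and reproduces the factor-of-five Martian enrichment quoted above; the second example applies the same conversion to the ~20 ppm cosmic D/H estimate.

```python
# Converting a D/H ratio to a delta value vs VSMOW, and checking the Mars figure
# quoted above (enrichment by a factor of ~5 over terrestrial seawater).
R_VSMOW = 155.76e-6

def dD_from_ratio(R):
    return (R / R_VSMOW - 1.0) * 1000.0      # per mil

print(round(dD_from_ratio(5 * R_VSMOW)))     # ~ +4000 permil, as quoted for Mars
print(round(dD_from_ratio(20e-6)))           # ~ -872 permil for the ~20 ppm cosmic estimate
```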
The DHRs for comets are much larger than the values for the planets in the Solar System with δD of around 1000‰. The HICs in the galaxy and the Solar System are shown in the table. Measurement techniques DHR can be determined with a combination of different preparation techniques and instruments for different purposes. There are several types of HIC measurement: (i) organic hydrogen or water are converted to H first, followed by high-precision isotope-ratio mass spectrometry (IRMS) measurement; (ii) H/H and O/O are directly measured as HO by laser spectroscopy also with high precision; (iii) the intact molecules are directly measured by NMR or mass spectrometry with lower precision than IRMS. Offline combustion and reduction Conversion to simple molecules (i.e. H for hydrogen) is required prior to IRMS for stable isotopes. This is for several reasons with regard to hydrogen: The classical offline preparation for the conversion is combustion over CuO at >800°C in sealed quartz tubes, followed by the isolation of resulting water and the reduction to H over hot metal at 400 ~1000°C on a vacuum line. The produced gas is then directly injected into the dual-inlet mass spectrometer for measurement. The metals used for reduction to H includes U, Zn, Cr, Mg and Mn, etc. U and Zn had been widely used since the 1950s until Cr was successfully employed in the late 1990s. The offline combustion/reduction has the highest accuracy and precision for HIC measurement without limits for sample types. The analytical uncertainty is typically 1~2‰ in δD. Thus it is still used today when highest levels of precision are required. However, the offline preparation procedure is very time-consuming and complicated. It also requires a large sample (several 100 mg). Thus, online preparation based on combustion/reduction coupled with the subsequent continuous flow-IRMS (CF-IRMS) system has been more often used nowadays. Chromium reduction or high temperature conversion are the dominant online preparation methods for detection of HIC by IRMS. High temperature conversion/elemental analyzer (TC/EA) TC/EA (or HTC, high temperature conversion; HTP, high temperature pyrolysis; HTCR, high temperature carbon reduction) is an "online" or "continuous flow" preparation method typically followed by IRMS detection. This is a "bulk" technique that measures all the hydrogen in a sample and provides the average isotope signal. The weighed sample is placed in a tin or silver capsule and dropped into a pyrolysis tube of TC/EA. The tube is made of glassy carbon with glassy carbon filling, so oxygen isotopes can be measured simultaneously without oxygen exchange with ceramic (AlO) surface. The molecules are then reduced into CO and H at high temperature (>1400°C) in the reactor. The gaseous products are separated through gas chromatography (GC) using helium as the carrier gas, followed by a split-flow interface, and finally detected by IRMS. TC/EA method can be problematic for organic compounds with halogen or nitrogen due to the competition between the pyrolysis byproducts (e.g. HCl and HCN) and H formation. In addition, it is susceptible to contamination with water, so samples must be scrupulously dried. An adaption of this method is to determine the non-exchangeable (C-H) and exchangeable hydrogen (bounds to other elements, e.g. O, S and N) in organic matter. The samples are equilibrated with water in sealed autosampler carousels at 115°C and then transferred into pyrolysis EA followed by IRMS measurement. 
TC/EA method is quick with fairly high precision (~1‰). It was limited to solid samples; however, liquid sample recently can also be measured in TC/EA-IRMS system by adapting an autosampler for liquids. The drawback of TC/EA is the relatively big sample size (~ mg), which is smaller than offline combustion/reduction but larger than GC/pyrolysis. It cannot separate different compounds as GC/pyrolysis does and thus only the average for the whole sample can be provided, which is also a drawback for some research. Gas chromatography/pyrolysis (GC/pyrolysis) GC-interface (combustion or pyrolysis) is also an online preparation method followed by IRMS detection. This is a 'compound-specific' method, allowing separation of analytes prior to measurement and thus providing information about the isotopic composition of each individual compound. After GC separation, samples are converted to smaller gaseous molecules for isotope measurements. GC/pyrolysis uses the pyrolysis interface between GC and IRMS for the conversion of H and O in the molecules into H and CO. GC-IRMS was first introduced by Matthews and Hayes in the late 1970s, and was later used for δC, δN, δO and δS. Helium is used as the carrier gas in the GC systems. However, the separation of DH (m/z=3) signal from the tail of He beam was problematic due to the intense signal of He. During the early 1990s, intense efforts were made in solving the difficulties to measure δD by GC/pyrolysis-IRMS. In 1999, Hilkert et al. developed a robust method by integrating the high temperature conversion (TC) into GC-IRMS and adding a pre-cup electrostatic sector and a retardation lens in front of the m/z=3 cup collector. Several different groups were working on this at the same time. This GC/pyrolysis-IRMS based on TC has been widely used for δD measurement nowadays. The commercial products of GC-IRMS include both combustion and pyrolysis interfaces so that δC and δD can be measured simultaneously. The significant advantage of GC/pyrolysis method for HIC measurement is that it can separate different compounds in the samples. It requires the smallest sample size (typically ~200 ng) relative to other methods and has a high precision of 1~5 ‰. But this method is relatively slow and limited to the samples which can be applied in GC system. Laser spectroscopy Laser spectroscopy (or cavity ring-down spectroscopy, CRDS) is able to directly measure H/H, O/O and O/O isotopic composition in water or methane. The use of laser spectroscopy on hydrogen isotopes was first reported by Bergamaschi et al. in 1994. They directly measured CHD/CH in atmospheric methane using a lead salt tunable diode laser spectroscopy. The development of CRDS was first reported by O'Keefe et al. in 1988. In 1999, Kerstel et al. successfully applied this technique to determine HIC in water. The system consists of a laser and a cavity equipped with high finesse reflectivity mirrors. Laser light is injected into the cavity, where the resonance takes place due to the constructive interference. The laser then is turned off. The decay of light intensity is measured. In the presence of a water sample, the photo-absorption by water isotopologues follows the kinetic law. The optical spectrum is obtained by recording ring-down time of the HO spectral features of interest at certain laser wavelength. The concentration of each isotopologue is proportional to the area under each measured isotopologue spectral feature. 
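As a rough illustration of the ring-down principle just described, the toy sketch below fits an exponential decay to a simulated intensity trace to recover the ring-down time; all numbers are synthetic and purely illustrative, not instrument specifications.

```python
# Toy illustration of cavity ring-down: recover the decay time tau from a
# simulated intensity trace I(t) = I0 * exp(-t / tau). Entirely synthetic numbers.
import numpy as np

tau_true = 20e-6                              # seconds, assumed ring-down time
t = np.linspace(0, 100e-6, 500)
intensity = np.exp(-t / tau_true) + np.random.normal(0, 1e-3, t.size)

# Fit ln(I) = -t / tau with a straight line, ignoring points near the noise floor.
mask = intensity > 0.05
slope, _ = np.polyfit(t[mask], np.log(intensity[mask]), 1)
tau_fit = -1.0 / slope
print(f"recovered tau ~ {tau_fit * 1e6:.1f} microseconds")
# A shortened ring-down time in the presence of a sample reflects absorption by the
# water isotopologues, whose spectral areas give their concentrations (see above).
```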
Laser spectroscopy is quick, simple, and relatively cheap; and the equipment is portable. So it can be used in the field for measuring water samples. H/H and O/O can be determined simultaneously from a single injection. It requires a small sample size, < 1 μL for water. Typical precision is ~ 1‰. However, this is a compound-specific instrument, i.e. only one specific compound can be measured. And coexisting organic compounds (i.e. ethanol) could interfere with the optical light absorption features of water, resulting in cross-contamination. SNIF-NMR H-Site-specific Natural Isotope Fractionation-Nuclear Magnetic Resonance (H-SNIF-NMR) is a type of NMR specialized in measuring the H concentration of organic molecules at natural abundances. The NMR spectra distinguish hydrogen atoms in different chemical environments (e.g. the order of carbon that hydrogen binds to, adjacent functional groups, and even geminal positions of methylene groups), making it a powerful tool for position-specific isotope analysis. The chemical shift (in frequency units) of H is 6.5x lower than that of H. Thus, it is hard to resolve H peaks. To provide enough resolution to separate H peaks, high-strength magnetic field instruments (~11.4T) are applied. Use of NMR to study hydrogen isotopes of natural products, was pioneered by Gerard Martin and his co-workers in the 1980s. For several decades it has been developed and expanded. The D/H NMR measurement is sometimes coupled with IR-MS measurement to create a referential standard. The sensitivity of SNIF-NMR is relatively low, typically requiring ~1 mmol of samples for each measurement. The precision with respect to isotope ratio is also poor compared to mass spectrometry. Even state-of-art instruments can only measure DHR with around 50~200‰ error depending on the compound. Therefore, so far technique can only distinguish the large D/H variations in preserved materials. In 2007, Philippe Lesot and his colleagues advanced this technique with a 2-dimensional NMR using chiral liquid crystals (CLC) instead of isotropic solvents to dissolve organic molecules. This enables the measurements of quadrupolar doublets for each nonequivalent deuterium atom. Thus reduces peak overlaps and provides more detailed information of hydrogen chemical environment. Mainstream uses of H-SNIF-NMR have been in source attribution, forensics and biosynthetic pathway studies. (See also Gray's section "Source attribution and Forensics") When measuring sugar compounds, a timesaving strategy is to convert them into ethanol through fermentation because H-SNIF NMR for ethanol is well established. Several studies have proved that hydrogen isotopes on the methyl and methylene position of the resulting ethanol is not affected by either fermentation rate or media. Another example is the study of monoterpenes. since the 1980s SNIF-NMR study of α-pinene has found large variations in DHR among its sites. Particularly ex-C position has a strong depletion (~-750‰), which was in disagreement with accepted biosynthetic mechanism (mevalonate mechanism) at that time, and lead to new development in pathways. More recently, Ina Ehlers published their work on the D6/D6 ratios of glucose molecules. The stereochemical diteterium distribution was found to correlate to photorespiration/photosynthesis ratios. Photorespiration/photosynthesis ratios are driven by fertilization, thus this might lead to new proxies in reconstructing paleo- concentration. 
Work has also been done for long-chain fatty acids and found that even-numbered sites, which are thought to be derived from C position of the acetyl group, are more enriched in H than odd-numbered hydrogen that come from C1 position of the acetyl group. Duan et al. reported a strong KIE during the desaturation from oleic acid to linoleic acid. In summary, the underlying physics of SNIF-NMR enables it to measure isotopomers. Another advantage of NMR measurement over mass spectrometry is that it analyzes samples non-destructively. H SNIF-NMR has been well industrialized in source identification and forensics, and has contributed much to biochemical pathway studies. The application of H SNIF-NMR to geological records is sporadic and still needs exploring. Intact molecular isotope ratio mass spectrometry Conventionally, mass spectrometry, such as gas chromatography-mass spectrometry (GC-MS) and gas chromatography -time of flight(GC-TOF), is a common technique for analyzing isotopically labeled molecules. This method involves ionizing and analyzing isotopologues of an intact organic molecule of interest rather than its products of pyrolysis or conversion. However, it does not work for natural abundance hydrogen isotopes because conventional mass spectrometers do not have enough mass-resolving power to measure the C/D isotopologues of intact organic molecules or molecular fragments at natural abundance. For example, to resolve the single D substituted isotopologue peak of any hydrocarbons one will have to be able to at least exclude single C substituted isotopologue peak, which sits at the same cardinal mass yet 0.0029 amu lighter and is of orders of magnitude more abundant. Recent advances in analytical instruments enable direct measurement of natural abundance DHRs in organic molecules. The new instruments have the same framework as any conventional gas source IRMS, but incorporate new features such as larger magnetic sector, double focusing sectors, quadrupole mass filter and multi-collectors. Two commercial examples are the Nu Panorama and the Thermo Scientific 253 Ultra. These instruments generally have good sensitivity and precision. Using only tens of nanomoles of methane, the Ultra can achieve a stable high precision of ~0.1‰ error in δD. One of the first examples of this type of measurement has been clumped isotopes of methane.(See section of "natural gas" in Fossil fuels) Another strength of this kind of instruments is the ability to do site-specific isotopic ratio measurements. This technique is based on measuring DHRs of fragments from the ion source (e.g. CHCH of propane molecule) that samples hydrogen atoms from different parts of the molecule. In summary, direct molecular mass-spectrometry has been commonly used to measure laboratory spiked isotope tracers. Recently advanced high resolution gas source isotope ratio mass spectrometers can measure hydrogen isotopes of organic molecules directly. These mass spectrometers can provide high precision and high sensitivity. The drawback of this type of instruments includes high cost, and standardization difficulty. Also, studying site-specific isotopes with mass spectrometry is less straightforward and needs more constraints than the SNIF-NMR method, and can only distinguish isotopologues but not isotopomers. Hydrologic cycle Isotope fractionation in the water cycle Water is the main source of hydrogen for all living things, so the isotopic composition of environmental water is a first-order control on that of the biosphere. 
The water (hydrological) cycle moves water around Earth's surface, significantly fractionating the hydrogen isotopes in water. As the atmosphere's main moisture source, the ocean has a fairly uniform HIC across the globe, around 0‰ (VSMOW). Variations of δD larger than 10‰ in the ocean are generally confined to surface water due to evaporation, sea ice formation, and addition of meteoric water by precipitation, rivers or icebergs. In the water cycle, the two main processes that fractionate hydrogen isotopes from seawater are evaporation and condensation. The oxygen isotopic composition (¹⁸O/¹⁶O) of water is also an important tracer in the water cycle, and cannot be separated from hydrogen isotopes when discussing the isotope fractionation processes associated with water. When water evaporates from the ocean to the air, both equilibrium and kinetic isotope effects determine the hydrogen and oxygen isotopic composition of the resulting water vapor. At the water-air interface, a stagnant boundary layer is saturated with water vapor (100% relative humidity), and the isotopic composition of water vapor in the boundary layer reflects equilibrium fractionation with the liquid water. The liquid-vapor equilibrium fractionations for hydrogen and oxygen isotopes are temperature-dependent, and are commonly described by the empirical expressions of Majoube (1971):

²ε(l/v) = 24.844(10⁶/T²) − 76.248(10³/T) + 52.612 (‰)

¹⁸ε(l/v) = 1.137(10⁶/T²) − 0.4156(10³/T) − 2.0667 (‰)

where T is temperature in kelvins. The amount of liquid-vapor equilibrium fractionation for hydrogen isotopes is about 8× that of oxygen isotopes at Earth-surface temperatures, which reflects the relative mass differences of the two isotope systems: ²H is 100% heavier than ¹H, whereas ¹⁸O is only 12.5% heavier than ¹⁶O. Above the boundary layer, there is a transition zone with relative humidity less than 100%, and there is a kinetic isotope fractionation associated with water vapor diffusion from the boundary layer to the transition zone, which is empirically related to the relative humidity (h) and is often parameterized as:

²ε(diff) = 12.5(1 − h) ‰

¹⁸ε(diff) = 14.2(1 − h) ‰

The KIE associated with diffusion reflects the mass difference of the heavy-isotope water molecules (¹H²H¹⁶O and ¹H₂¹⁸O) relative to the normal isotopologue (¹H₂¹⁶O). After water evaporates to the air, condensation and precipitation transport it and return it to the surface. Water vapor condenses in ascending air masses that develop a lower temperature and saturation vapor pressure. Since the cooling and condensation happen relatively slowly, condensation is governed by equilibrium isotope effects. However, as water vapor is progressively condensed and lost from the air during moisture transport, the isotopic composition of the remaining vapor, as well as of the resulting precipitation, can become strongly depleted through Rayleigh distillation. The equation for Rayleigh distillation is:

R_v/R_v0 = f^(α − 1)

where R_v0 is the isotope ratio in the initial water vapor, R_v is the isotope ratio in the remaining water vapor after some condensation, f is the fraction of water vapor remaining in the air, and α is the liquid-vapor equilibrium fractionation factor (α = 1 + ε). The isotopic composition of the resulting precipitation (R_p) can be derived from the composition of the remaining vapor:

R_p = α·R_v

As f decreases progressively during condensation, the remaining vapor becomes more and more depleted of the heavy isotopes, and the depletion becomes larger as f approaches zero.
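The Rayleigh relation above can be illustrated with a short Python sketch that tracks the δD of the remaining vapor and of the instantaneous precipitation as the fraction of vapor remaining (f) decreases; the starting δD and the fractionation factor used here are illustrative round numbers, not measured values.

def rayleigh_dD(dD_initial_vapor, f, alpha):
    # δD (permil) of the remaining vapor and of the instantaneous precipitation
    # after Rayleigh condensation, from R_v = R_v0 * f**(alpha - 1) and R_p = alpha * R_v.
    R0 = 1.0 + dD_initial_vapor / 1000.0   # ratio normalized to the standard
    Rv = R0 * f ** (alpha - 1.0)
    Rp = alpha * Rv
    return (Rv - 1.0) * 1000.0, (Rp - 1.0) * 1000.0

# Vapor starting at -80 permil with alpha(D, liquid-vapor) taken as ~1.08:
for f in (1.0, 0.5, 0.2, 0.05):
    dD_vapor, dD_precip = rayleigh_dD(-80.0, f, 1.08)
    print(f, round(dD_vapor, 1), round(dD_precip, 1))

Both the vapor and the precipitation become progressively more depleted as f shrinks, reproducing the qualitative behavior described above.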
Rayleigh distillation can explain some first-order spatial patterns observed in the isotopic composition of precipitation across the globe, including isotopic depletion from the tropics to the poles, isotopic depletion from coastal to inland regions, and isotopic depletion with elevation over a mountain range, all of which are associated with progressive moisture loss during transport. The Rayleigh distillation model can also be used to explain the strong correlation between δD and δ¹⁸O in global precipitation, expressed as the global meteoric water line (GMWL): δD = 8δ¹⁸O + 10 (later updated to δD = 8.17(±0.07)δ¹⁸O + 11.27(±0.65)). The slope of the GMWL reflects the relative magnitude of hydrogen and oxygen isotope fractionation during condensation. The intercept of the GMWL is non-zero (called deuterium excess, or d-excess), which means that ocean water (δD = δ¹⁸O = 0) does not fall on the GMWL. This intercept is associated with the KIE during evaporation, when water vapor diffuses from the saturated boundary layer to the unsaturated transition zone, and cannot be explained by the Rayleigh model. Nevertheless, the robust pattern in the GMWL strongly suggests a single dominant moisture source to the global atmosphere, which is the tropical West Pacific. It should also be pointed out that a local meteoric water line can have a different slope and intercept from the GMWL, due to differences in humidity and evaporation intensity at different places. Hydrogen and oxygen isotopes in water thus serve as an excellent tracer of the hydrological cycle both globally and locally. Water isotopes and climate Based on the processes that fractionate isotopes in the water cycle, the isotopic composition of meteoric water can be used to infer related environmental variables such as air temperature, precipitation amount, past elevations and lake levels, as well as to trace moisture sources. These studies form the field of isotope hydrology. Examples of isotope hydrology applications include: Temperature reconstruction The isotopic composition of precipitation can be used to infer changes in air temperature based on the Rayleigh process. Lower temperature corresponds to lower saturation vapor pressure, which leads to more condensation and drives the residual vapor toward isotopic depletion. The resulting precipitation thus has more negative δD and δ¹⁸O values at lower temperature. This precipitation isotope thermometer is more sensitive at lower temperatures and is widely applied at high latitudes. For example, δD and δ¹⁸O were found to have temperature sensitivities of 8‰/°C and 0.9‰/°C, respectively, in Antarctic snow, and of 5.6‰/°C and 0.69‰/°C across Arctic sites. δD and δ¹⁸O of ice cores in Greenland, Antarctica and alpine glaciers are important archives of temperature change in the geological past. Precipitation amount effect In contrast to temperature control at high latitudes, the isotopic composition of precipitation in the tropics is mainly influenced by rainfall amount (negative correlation). This "amount effect" is also observed for summer precipitation in the subtropics. Willi Dansgaard, who first proposed the term "amount effect", suggested several possible reasons for the correlation: (1) As cooling and condensation progress, the rainfall isotopic composition reflects an integrated isotopic depletion by the Rayleigh process; (2) A small amount of rainfall is more likely to be influenced by evaporation and exchange with surrounding moisture, which tends to make it more isotopically enriched.
At low latitudes, the amount effect for δO is around −1.6‰/100mm precipitation increase at island stations, and −2.0‰/100mm at continental stations. It was also noted that the amount effect was most pronounced when comparing isotopic composition of monthly precipitation at different places in the tropics. The amount effect is also expected for HIC, but there are not as many calibration studies. Across southeast Asia, the δD sensitivity to monthly precipitation amount varies between −15 and −25‰/100mm depending on location. In temperate regions, the isotopic composition of precipitation is dominated by rainfall amount in summer, but more controlled by temperature in the winter. The amount effect may also be complicated by changes in regional moisture sources. Reconstructions of rainfall amount in the tropics in the geological past are mostly based on δO of speleothems or δD of biogenic lipids, both of which are thought of as proxies for the isotopic composition of precipitation. Applications Isotope hydrology Hydrogen and oxygen isotopes also work as tracers for water budget in terrestrial reservoirs, including lakes, rivers, groundwater and soil water. For a lake, both the amount of water in the lake and the isotopic composition of the water are determined by a balance between inputs (precipitation, stream and ground water inflow) and outputs (evaporation, stream and ground water outflow). The isotopic composition of lake water can often be used to track evaporation, which causes isotope enrichment in the lake water, as well as a δD-δO slope that is shallower than the meteoric water line. The isotopic composition of river water is highly variable and have complicated sources over different timescales, but can generally be treated as a two-endmember mixing problem, a base-flow endmember (mainly ground water recharge) and an overland-flow endmember (mainly storm events). The isotope data suggest that the long-term integrated base-flow endmember is more important in most rivers, even during peak flows in summer. Systematic river isotope data were collected across the world by the Global Network of Isotopes in Rivers (GNIR).The isotopic composition of groundwater can also be used to trace its sources and flow paths. An example is a groundwater isotope mapping study in Sacramento, California, which showed lateral flow of river water with a distinct isotope composition into the groundwater that developed a significant water table depression due to pumping for human use. The same study also showed an isotopic signal of agricultural water being recharged into the giant alluvial aquifer in California's Central Valley. Finally, the isotopic composition of soil water is important for the study of plants. Below the water table, the soil has a relatively constant source of water with a certain isotopic composition. Above the water table, the isotopic composition of soil water is enriched by evaporation until a maximum at the surface. The vertical profile of isotopic composition of soil water is maintained by the diffusion of both liquid and vapor water. A comparison of soil water and plant xylem water δD can be used to infer the depth at which plant roots get water from the soil. Paleo-reconstruction Ice core records The isotopic compositions of ice cores from continental ice sheets and alpine glaciers have been developed as temperature proxies since the 1950s. 
Samuel Epstein was one of the first to show the applicability of this proxy by measuring oxygen isotopes in Antarctic snow, and he also pointed out complications in the stable isotope-temperature correlation caused by the history of the air masses from which the snow formed. Ice cores in Greenland and Antarctica can be thousands of meters thick and record the snow isotopic composition of the past few glacial-interglacial cycles. Ice cores can be dated by layer counting near the top and ice-flow modeling at depth, with additional age constraints from volcanic ash. Cores from Greenland and Antarctica can be aligned in age at high resolution by comparing globally well-mixed trace gas (e.g. CH₄) concentrations in the air bubbles trapped in the cores. Some of the first ice core records from Greenland and Antarctica with age estimates went back about 100,000 years, and showed a depletion in δD and δ¹⁸O during the last ice age. The ice core record has since been extended to the last 800,000 years in Antarctica, and at least 250,000 years in Greenland. One of the best δD-based ice core temperature records is from the Vostok ice core in Antarctica, which goes back 420,000 years. The δD-temperature (of the inversion layer where snow forms) conversion in East Antarctica, based on the modern spatial gradient of δD (9‰/°C), is ΔT = (ΔδD − 8Δδ¹⁸Osw)/9, which takes into account variations in seawater isotopic composition (Δδ¹⁸Osw) caused by global ice volume changes. Many local effects can influence ice δD in addition to temperature. These effects include moisture origin and transport pathways, evaporation conditions and precipitation seasonality, which can be accounted for in more complicated models. Nevertheless, the Vostok ice core record shows some very important results: (1) a consistent δD depletion of ~70‰ during the last four glacial periods compared to interglacial times, corresponding to a cooling of 8°C in Antarctica; (2) a consistent drop of atmospheric CO₂ concentration by 100 ppmv and a drop of CH₄ by ~300 ppbv during glacial times relative to interglacials, suggesting a role for greenhouse gases in regulating global climate; (3) Antarctic air temperature and greenhouse gas concentration changes precede global ice volume and Greenland air temperature changes during glacial terminations, and greenhouse gases may be an amplifier of insolation forcing during glacial-interglacial cycles. Greenland ice core isotope records, in addition to showing glacial-interglacial cycles, also show millennial-scale climate oscillations that may reflect reorganizations of ocean circulation caused by ice-melt discharges. There have also been ice core records generated from alpine glaciers on different continents. A record from the Andes Mountains in Peru shows a temperature decrease of 5-6°C in the tropics during the last ice age. A record from the Tibetan Plateau shows a similar isotope shift and cooling during the last ice age. Other existing alpine glacier isotope records include Mount Kilimanjaro in Tanzania, Mount Altai and the West Belukha Plateau in Russia, Mount Logan in Canada, the Fremont Glacier in Wyoming, USA, and the Illimani ice core in Bolivia, most of which cover an interval of the Holocene epoch.
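To make the East Antarctic δD-temperature conversion above concrete, the short Python sketch below applies ΔT = (ΔδD − 8Δδ¹⁸Osw)/9 to a hypothetical glacial-interglacial anomaly; the input values are illustrative round numbers, not measurements from any specific core.

def antarctic_delta_T(d_dD, d_d18O_sw):
    # Temperature anomaly (°C) from the δD anomaly of East Antarctic ice,
    # corrected for the seawater (ice-volume) term, using the ~9 permil/°C
    # modern spatial gradient quoted above.
    return (d_dD - 8.0 * d_d18O_sw) / 9.0

# Illustrative glacial case: ice δD ~70 permil lower than today, and seawater
# δ18O ~1 permil higher because of larger ice sheets.
print(round(antarctic_delta_T(-70.0, 1.0), 1))  # ≈ -8.7 °C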
Biomolecules The isotopic composition of biomolecules preserved in the sedimentary record can be used as a proxy for paleoenvironment reconstructions. Since water is the main hydrogen source for photoautotrophs, the HIC of their biomass can be related to the composition of their growth water and thereby used to gain insight into some properties of ancient environments. Studying hydrogen isotopes can be very valuable, as hydrogen is more directly related to climate than the other relevant stable isotope systems. However, hydrogen atoms bonded to oxygen, nitrogen or sulfur are exchangeable with environmental hydrogen, which makes this system less straightforward (see the earlier discussion of hydrogen exchange). To study the HIC of biomolecules, it is preferable to use compounds in which the hydrogen is largely bound to carbon, and thus not exchangeable on experimental timescales. By this criterion, lipids are a much better subject for hydrogen isotope studies than sugars or amino acids. The net fractionation between source water and lipids is denoted ε_l/w and is defined as:

ε_l/w = (δD_l + 1)/(δD_w + 1) − 1

where w refers to the water and l refers to the lipids. While the δD of source water is the biggest influence on the δD of lipids, discrepancies between fractionation factor values obtained from the slope and from the intercept of the regression suggest that the relationship is more complex than a two-pool fractionation. In other words, there are multiple fractionation steps that must be taken into account in understanding the isotopic composition of lipids. Cellulose The carbon-bonded HIC of cellulose, as inherited from leaf water, has the potential to preserve the original meteoric water signal. This was first demonstrated in the 1970s. In a systematic survey across North America, tree cellulose δD was found to have a temperature sensitivity of 5.8‰/°C, similar to the precipitation δD sensitivity of 5.6‰/°C. This spatial correlation may be complicated by local effects of soil evaporation and leaf transpiration, and the spatial gradient may not be representative of temporal changes in tree-ring cellulose at a single place. The mechanism that generates the δD signal in cellulose from meteoric water is not fully understood, but at least includes leaf water transpiration, synthesis of carbohydrates, synthesis of cellulose from photosynthetic sugars, and exchange of sugars with xylem water. Modeling studies show that the observed tree-ring cellulose δD can be reproduced when 36% of the hydrogen in sugars can exchange with xylem water, and effects such as humidity and rainfall seasonality may complicate the cellulose δD proxy. Despite these complications, tree-ring δD has been used for paleoclimate reconstructions of the past few millennia. For example, a tree-ring cellulose δD record from pine trees in the White Mountains, California, shows a 50‰ depletion from 6,800 years ago to the present. The cooling trend since the mid-Holocene thermal maximum is consistent with ice core and pollen records, but the corresponding magnitude of cooling is difficult to quantify because of complicating local effects such as humidity and soil water composition. The interpretation of cellulose isotopes and their applications remain an area of active study. Plant leaf waxes Terrestrial plants make leaf waxes to coat the surfaces of their leaves and minimize water loss. These waxes are largely straight-chain n-alkyl lipids. They are insoluble, non-volatile, chemically inert, and resistant to degradation, making them easily preserved in the sedimentary record and therefore good targets as biomarkers.
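As a numeric illustration of the net fractionation ε_l/w defined above, the Python sketch below evaluates it for δD values given in per mil; the example lipid and water values are hypothetical.

def epsilon_lipid_water(dD_lipid, dD_water):
    # Net lipid/water fractionation in permil from
    # ε_l/w = (δD_l + 1)/(δD_w + 1) − 1, with inputs in permil.
    dl = dD_lipid / 1000.0
    dw = dD_water / 1000.0
    return ((1.0 + dl) / (1.0 + dw) - 1.0) * 1000.0

# Hypothetical leaf-wax n-alkane at -160 permil grown in water at -30 permil:
print(round(epsilon_lipid_water(-160.0, -30.0), 1))  # ≈ -134.0 permil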
The main water source for land plants is soil water, which largely resembles the HIC of rain water but varies between environments, with recharge by precipitation, enrichment by evaporation, and exchange with atmospheric water vapor. There can be a significant offset between the δD value of source water and the δD value of leaf water at the site of lipid biosynthesis. No fractionation is associated with water uptake by roots, a process usually driven by capillary tension, with the one exception of xerophytes that burn ATP to pump water in extremely arid environments (with a roughly 10‰ depletion). However, leaf water can be substantially enriched relative to soil water due to transpiration, an evaporative process that is influenced by temperature, humidity and the composition of the surrounding water vapor. The leaf water HIC can be described with a modified Craig-Gordon model of the form:

ΔD_e = ε_eq + ε_k + (ΔD_v − ε_k)·(e_a/e_i)

where ΔD_e is the steady-state enrichment of leaf water, ε_eq is the temperature-dependent equilibrium fractionation between liquid water and vapor, ε_k is the KIE from diffusion between the leaf internal air space and the atmosphere, ΔD_v is the leaf/air disequilibrium, e_a is the atmospheric vapor pressure, and e_i is the internal leaf vapor pressure. The Péclet effect, which describes the opposing forces of advection and diffusion, can be added to the model as

℘ = EL/(CD)

where E is the transpiration rate, L is the length scale of transport, C is the concentration of water, and D is the diffusion coefficient. While the role of rain water δD as the fundamental control on the final δD of lipids is well documented, the importance of fractionation effects from rain water to soil water and leaf water on ε_l/w is appreciated but remains poorly understood. Organic biomolecules are generally depleted relative to the δD of leaf water. However, differences between organisms, biosynthetic pathways and the biological roles of different molecules can lead to huge variability in fractionation; the diversity of lipid biomarkers spans a 600‰ range of δD values. Lipid biosynthesis is biochemically complex, involving multiple enzyme-dependent steps that can lead to isotope fractionations. There are three major pathways of lipid biosynthesis, known as the mevalonate pathway, the acetogenic pathway, and the 1-deoxy-D-xylulose-5-phosphate/2-methylerythritol-4-phosphate pathway. The acetogenic pathway is responsible for the production of n-alkyl lipids like leaf waxes, and is associated with a smaller δD depletion relative to source water than the other two lipid biosynthesis pathways. While leaf water is the main source of hydrogen in leaf biomolecules, relatively depleted hydrogen from acetate or NADPH is often added during biosynthesis and contributes to the HIC of the final molecule. Secondary hydrogen exchange reactions, meaning hydrogenation and dehydrogenation reactions outside of the primary biosynthetic pathway, also contribute substantially to the variability of lipid HIC. It is important to note that biological differences in fractionation stem not only from biochemical differences between different molecules, but also from physiological differences between different organisms. For example, the δD values of multiple leaf wax molecules are enriched in shrubs (median ~ −90‰) relative to trees (median ~ −135‰), which are themselves enriched relative to both C3 grasses (median ~ −160‰) and C4 grasses (median ~ −140‰). Between individual species, substantial variation in δD has been documented.
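To make the modified Craig-Gordon expression above concrete, the Python sketch below evaluates the steady-state leaf water enrichment for illustrative parameter values; all of the numbers are hypothetical placeholders chosen only to show how the terms combine, not measured leaf or atmospheric data.

def leaf_water_enrichment(eps_eq, eps_k, dD_v, e_a, e_i):
    # Steady-state D enrichment of leaf water (permil) from
    # ΔD_e = ε_eq + ε_k + (ΔD_v − ε_k)·(e_a/e_i).
    # eps_eq: equilibrium liquid-vapor fractionation (permil)
    # eps_k:  kinetic (diffusive) fractionation (permil)
    # dD_v:   vapor/source-water disequilibrium (permil)
    # e_a, e_i: atmospheric and leaf-internal vapor pressures (same units)
    return eps_eq + eps_k + (dD_v - eps_k) * (e_a / e_i)

# Hypothetical values: eps_eq ≈ 80 permil, eps_k ≈ 25 permil, vapor 10 permil
# lighter than source water, and e_a/e_i ≈ 0.6 (i.e. ~60% relative humidity).
print(round(leaf_water_enrichment(80.0, 25.0, -10.0, 1.4, 2.3), 1))  # ≈ 83.7 permil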
Other physiological factors that contribute to variable leaf wax δD values include the seasonal timing of leaf development, response to external stress or environmental variability, and the presence or absence of stomata It can be difficult to distinguish between physiological factors and environmental factors, when many physiological adaptations are directly related to environment. Several environmental factors have been shown to contribute to leaf wax δD variability, in addition to environmental effects on the δD of source water. Humidity is known to impact lipid δD at moderate humidity levels, but not at particularly high (>80%) or low (<40%) humidity levels, and a broad trend of enriched δDs, meaning smaller ε, is seen in arid regions. Temperature and sunlight intensity, both correlated to latitude, have strong effects on the rates of metabolism and transpiration, and by extension on ε. Also, the average chain length of leaf wax molecules varies with geographic latitude, and ε has been shown to increase with increasing chain length. When using biomarkers as a proxy for reconstructing ancient environments, it is important to be aware of the biases inherent in the sedimentary record. Leaf matter incorporated into sediment is largely deposited in the autumn, so seasonal variations in leaf waxes must be considered accordingly. Furthermore, sediments average leaf waxes over lots of different plants in both space and time, making it difficult to calibrate the biological constraints on ε. Finally, preservation of biomolecules in the geologic record does not faithfully represent whole ecosystems, and there is always the threat of hydrogen exchange, particularly if the sediments are subjected to high temperatures. The HIC of leaf waxes can be summarized as the δD of rain water, with three main fractionation steps: evaporation from soil water, transpiration from leaf water, and lipid biosynthesis, which can be combined and measured as the net fractionation, ε. With ever-improving measurement techniques for single molecules, and correlation with other independent proxies in the geological record that can help constrain some variables, investigating the HIC of leaf waxes can be extremely productive. Leaf wax δD data has been successfully applied to improving our understanding of climate driven changes in terrestrial hydrology, by showing that ocean circulation and surface temperature have a significant effect on continental precipitation. Leaf wax δD values have also been used as records of paleoaltimetry to reconstruct the elevation gradients in ancient mountain ranges based on the effect of altitude on rain water δD. Alkenones Another group of molecules often used in paleoreconstructions are alkenones, long-chain largely unsaturated lipids produced exclusively by coccolithophores. Coccolithophores are marine haptophyte algae, and include the globally iconic species Emiliania huxleyi, one of the main CaCO producers in the ocean. The δDs of alkenones are highly correlated to the δDs of seawater, and so can be used to reconstruct paleoenvironmental properties that constrain the isotopic composition of sea water. The most notable reconstruction that alkenone δD values are applied to is the salinity of ancient oceans. Both the δDs of sea water and the fractionations associated with hyptophyte biochemistry (ε) are fairly well understood, so alkenones can be readily used to observe the secondary effect of salinity on δD. 
There is a well established positive linear correlation between salinity and ε, on the order of a ~3‰ change in fractionation per salinity unit. Hypothesized mechanisms for this effect include enrichment of D in intracellular water due to reduced exchange with extracellular water at higher salinity, removal of H from intracellular water due to increased production of solutes to maintain osmotic pressure at higher salinity, and lower haptophyte growth rates at higher salinity Alkenone δDs have been used successfully to reconstruct past salinity changes in the Mediterranean Sea, Black Sea, Panama Basin, and Mozambique Channel. As an extension of salinity, this data was also used to draw further conclusions about ancient environments, such as ancient freshwater flooding events, and the evolution of plankton in response to environmental changes Stable isotope paleoaltimetry The possibility of using water isotope depletion with elevation to reconstruct paleoaltimetry was demonstrated as early as the late 1960s, when Caltech geochemist Samuel Epstein tried to collect rainwater at different elevations in a single storm. The δO and δD lapse rates vary within -1 to -5‰/km and -10 to -40‰/km respectively, but can vary with locations and seasons, and are not exactly linear with altitude. One of the first studies in stable isotope paleoaltimetry demonstrated a meteoric water δD signature of -90 to -139‰ in fluid inclusions in quartz and adularia in an epithermal gold-silver deposit in Nevada, and suggested the applicability of stable isotopes in reconstruction of ancient topography in the Great Basin. The hydrogen and oxygen isotopes of hydrous silicate minerals have since then been used to reconstruct topographic histories in mountain ranges across the world, including the North American Cordillera, the Rocky Mountains, the Himalayas, the European Alps, and Southern Alps in New Zealand. Lab experiments with clay minerals have shown that the hydrogen and oxygen isotope compositions are relatively resistant to alteration at moderate temperature (<100°C), and can preserve the original meteoric water signal. One important effect of mountain ranges on rainfall stable isotopes is the rain shadow effect, in which an isotopic depletion happens in precipitation on the leeward side compared to the windward side. A change in the difference in isotopic composition of precipitation on the two sides of a mountain can be used to infer the magnitude of the rain shadow effect. In one such study, an isotope enrichment was observed in smectite on the east side of the Sierra Nevada in California from mid-Miocene to late Pliocene, suggesting a decrease in elevation during this period. Another study found δDs around −140‰ in muscovite in the North America Cordillera during the early Eocene, which would suggest an elevation 1km higher than today at the time. In addition to hydrous minerals, hydrogen isotopes in biomarkers such as leaf waxes have also been developed for paleoaltimetry studies. The δD lapse rate in leaf waxes (−21‰/km) falls in the range of meteoric water observations. As an example study, leaf wax δD data has been used to confirm hydrous mineral paleoaltimetry for the high elevation of the Sierra Nevada during the Eocene. Fossil fuels The HIC of oil, gas and coal is an important geochemical tool to study the formation, storage, migration and many other processes. 
The HIC signal of fossil fuels results from both inheritance of source material and water as well as fractionations during hydrocarbon generation and subsequent alteration by processes such as isotopic exchange or biodegradation. When interpreting HIC data of sedimentary organic matter one must take all the processes that might have an isotope effect into consideration. Almost all the organic hydrogen is exchangeable to some extent. Isotopic exchange of organic hydrogen will reorder the distribution of deuterium and often incorporate external hydrogen. Generally, more mature materials are more heavily exchanged. With effective exchange, aliphatic hydrogen can finally reach isotopic equilibrium at the final stage. Equilibrium fractionation factor varies between hydrogen sites. For example, aliphatic hydrogen isotope fractionation depends on the carbon atom that the hydrogen atom bonds with. To first order, alkyl HIC follows this trend: δD < δD < δD. The fractionation factors between carbon sites also decrease with increasing temperature. This can be potentially used as a thermo-history indicator. The fractionation between whole molecule and water can be estimated by averaging all hydrogen-positions, and this leads to a relatively small variation of equilibrium fractionation between different groups of hydrocarbons and water. A theoretical prediction estimated this to be −80‰ to −95‰ for steranes, −90‰ to −95‰ for hopanes, and −70‰ to −95‰ for typical cycloparaffins at 0−100°C. At the temperature of the oil window and gas window, the equilibrium fractionation between different group of organic molecules is relatively small, as compared with large primary signals. The study of hydrogen isotopes of fossil fuels has been applied as proxies and tools in the following aspects: Reconstruction of paleoenvironments of the sources. Because of the high-sensitivity of D content of terrestrial water to hydrological cycles, organic δD can reflect the environment of source formation. To the first order, DHRs of coals and n-alkanes from oils have been shown to correlate with paleolatitude. Source correlation. Marine and lacustrine environments are characterized by distinctly different δD values. Many studies have tried to relate measured δD with source types. For methane, D concentration and clumped isotopes is particularly diagnostic of sources. Possible maturity indicators. For example, isoprenoids synthesized by plants are strongly depleted in D (See "Observed variations in isotopic abundance" section), typically ~100‰ to n-alkyl lipids. This gap tends to decrease as rock matures because of the higher D/H exchange rates of isoprenoids. The correlation of δD difference between pristane, phytane and n-alkanes and other maturity indicators has been established across a wide maturity range. Another possible maturity indicator based on the "isotope slope" of vs. n-alkane chain length was proposed by Tang et al. Quantitative apportionment. Since alkanes are main components of oil and gas, the isotopic data of n-alkanes have been used to study their migration and mixing. The advantage of hydrogen isotopes over carbon is higher resolution because of larger fractionation. Studying the clumped isotopes of methane provides a new dimension of mixing-constraints. The mixing line in the clumped isotope notation space is a curve rather than a straight line. Fingerprinting pollutant/oil spills. Kerogens and coals The first stage that sedimentary organic matter (SOM) experiences after deposition is diagenesis. 
During diagenesis, biological decomposition can alter the DHR of organics. Several experimental studies have shown that some biodegraded materials become slightly enriched in D (less than 50‰). Most organics become kerogen by the end of diagenesis. Generally, δD of kerogen spans a wide range. Many factors contribute to the kerogen we observe in geologic records, including: Source water hydrogen isotope patterns: For example, lake systems are more sensitive to hydrologic cycles than marine environments. Differential fractionation for various organisms and metabolic pathways: differences in organic composition can also reflect in primary signal. Isotopic exchange, H loss and H addition: This can involve mixing water-derived D with the primary signal. Generation of bitumen, oil and gas: There's a fractionation between the product and kerogen. Research on the Australian basins showed that δD of lacustrine algal sourced kerogen with terrestrial contributions varies from −105‰ to −200‰, and δD of kerogen from near-coastal depositional environment has a narrower range, −75‰ to −120‰. The smaller span in DHRs of coastal kerogen is thought to reflect the relatively stable regional climate. Pedentchouk and his colleagues reported δD values of -70‰ to -120‰ in immature to low mature kerogen from early Cretaceous lacustrine sediments in West Africa. Coals are from type III kerogen mostly derived from land plants, which should have a primary D/H signal sensitive to local meteoric water. Reddings et al. analyzed coals of various origins and found them randomly scattered across the range of −90‰ to −170‰. Rigby et al. found D contents decrease from −70‰ to −100‰ with increasing maturity in coal from Bass Basin and attributed this to latter exchange with low D water. Smith et al. studied H isotopes of coal samples from Antarctica and Australia. They found a strong negative correlation between δD and inferred paleolatitude. For coal samples originating near the Equator, δD is around −50‰, while for those originating from polar regions, δD is around −150‰. This δD trend along latitude is consistent meteoric water trend and thus is an evidence that coals can preserve much of the original signals. There are two types of approach to study the alteration of DHRs of kerogen during catagenesis: (1) laboratory incubation of organic matter that enables mechanistic study with controlled experiments; (2) natural sample measurement that provides information of combined effects over geologic timescales. The complex composition and chemistry of kerogen complicates the results. Nevertheless, most research on HIC of kerogen show D enrichment with increasing maturity. Type II kerogen (marine derived) from New Albany Shale is reported to have δD rise from −120‰ to −70‰ as vitrinite reflectance increase from 0.3% to 1.5%. Two main mechanisms have been proposed for enrichment. One of them is kinetic fractionation during hydrocarbon generation while the other is isotopic exchange with surrounding water. Anhydrous incubation experiments have shown that the products are generally more D-depleted than their precursors, causing enrichment in residual kerogen. Schimmelmann et al. studied the relationship between terrestrially-derived oil and their source rock kerogens from four Australian Basins. They found that on average the oil is depleted to corresponding kerogen by 23‰. Hydrous incubation experiments suggest that 36–79% of bulk organic hydrogen may come from water at moderate maturity. 
While still under debate, it appears likely that incorporation of water hydrogen isotopes is the more dominant process for kerogen D- enrichment during catagenesis. In summary, D content of kerogen and coal is complicated and hard to resolve due to the complex chemistry. Nevertheless, studies have found the possible correlation between coal δD and paleo-latitude. Natural gas Commonly, HIC of natural gas from the same well has a trend of δD < δD < δD < δD. This is because most natural gas is thought to be generated by stepwise thermal cracking that is mostly irreversible and thus governed by normal kinetic isotope effects (KIE) that favor light isotopes. The same trend, known as "the normal order", holds for carbon isotopes in natural gas. For example, Angola gas reportedly has a methane δD range of −190‰ to −140‰, an ethane δD of −146‰ to −107‰, a propane δD of −116‰ to −90‰, and a butane δD of −118‰ to −85‰. However, some recent studies show that opposite patterns could also exist, meaning δD > δD > δD. This phenomenon is often called 'isotopic reversal' or 'isotopic rollover'. The isotopic order could also be partly reversed, like δD > δD < δD or δD < δD > δD. Burruss et al. found that in the deepest samples of northern Appalachian basin the hydrogen isotopic order for methane and ethane is reversed. Liu et al., also found partial reversal in oil-related gas from the Tarim Basin. The mechanism causing this reversal is still unknown. Possible explanations include mixing between gases of different maturities and sources, oxidation of methane, etc. Jon Telling et al., synthesized isotopically reversed (in both C and H) low-molecular alkanes using gas-phase radical recombination reactions in electrical discharge experiments, providing another possible mechanism. Methane is the main component of natural gas. Geosphere methane is intriguing for the large input of microbial methanogenesis. This process exhibits a strong KIE, resulting in greater D-depletion in methane relative to other hydrocarbons. δD ranges from −275‰ to −100‰ in thermogenic methane, and from −400‰ to −150‰ in microbial methane. Also, methane formed by marine methanogens is generally enriched in D relative to methane from freshwater methanogens. δD of methane has been plotted together with other geochemical tools (like δC, gas wetness) to categorize and identify natural gas. A δD-δC diagram (sometimes called CD diagram, Whiticar diagram, or Schoell diagram) is widely used to place methane in one of the three distinct groups: thermogenic methane that is higher in both δC and δD; marine microbial methane that is more depleted in C and freshwater microbial methane that is more depleted in D. Hydrogenotrophic methanogenesis produces less D-depleted methane relative to acetoclastic methanogenesis. The location where the organism lives and substrate concentration also affect isotopic composition: rumen methanogenesis, which occurs in a more closed system and with higher partial pressures of hydrogen, exhibits a greater fractionation (−300 to −400‰) than wetland methanogenesis (−250 to −170‰). Recent advances in analytical chemistry have enabled high-precision measurements of multiply substituted (or 'clumped') isotopologues like CHH. This is a novel tool for studying methane formation. 
This proxy is based on the abundance of clumped isotopologues of methane, which should be enriched compared to the stochastic distribution at thermodynamic equilibrium because the reduced zero-point energy for heavy-heavy isotope bonding is more than twice the reduced zero-point energy of heavy-light isotope bonding. The extent of enrichment decreases with increasing temperature, as higher entropy tends to randomize isotope distribution. Stolper et al. established this temperature calibration using laboratory equilibrated methane and field methane from known formation temperature, and applied this to several gas reservoirs to study natural gas formation and mixing. Wang et al. also reported strong non-equilibrium isotope effect in methane clumped isotopes from lab-cultured methanogens and field samples. These methane samples have relatively low abundance of clumped isotopologues, sometimes even lower than the stochastic distribution. This indicates that there are irreversible steps in enzymatic reactions during methanogenesis that fractionation against clumped isotopologues to create the depleted signal. Isotope clumping in methane has proven a robust proxy, and scientists are now moving towards higher-order alkane molecules like ethane for further work. Oil Oil is generally a product of thermal breakdown of type I and type II kerogen during catagenesis. The HIC should reflect the source kerogen signal, generation fractionation, isotopic exchange and other maturation effects. Thermal maturation at the oil window can erase much of the HIC primary signals. The formation of oil involves breaking C-C and C-H bonds, resulting in depletion of C and H in the products and enrichment in the residual reactants due to KIEs. Yongchun Tang and his colleagues modeled this process based on laboratory-calibrated kinetics data and found that the frequency factor ratio for D/H is 1.07. Moreover, oil is also affected by isotope fractionation from phase changes. However, the behavior of oil gas-liquid fractionation differs from water as the vapor phase of oil is H-enriched. This depletes residual oil as it gets evaporated. Biodegradation of oil is also expected to fractionate hydrogen isotopes, as enzymatic breaking of C-H bond has a normal KIE. Several degradation experiments show that this fractionation is generally mild, ranging from −11‰ to −79‰. This process should also enrich partially degraded oil. Finally, oil stored in a reservoir often had migrated through subsurface (aka geochromatography) from another source region, interacting with water. No data has been published to confirm the fractionation associated with migration, yet theoretical prediction shows that this is likely to be very small. Many studies of natural samples have shown slight increases in δD with thermal maturity. Amane Waseda reported δD of oil samples in northeast Japan to increase from around −130‰ to around −110‰ with higher maturity. At low thermal maturity, dos Santos Neto and Hayes reported δD of saturate fraction of oil in Portiguar Basin derived from a lacustrine environment is -90‰, and from a marine-evaporitic environment is -120‰ to −135‰. Bulk analysis of oil, which yields a complex mixture of organic compounds, obscures much of the valuable information. Switching to compound-specific study greatly expanded our understanding of hydrogen isotopes of oil. 
Analyzing HIC at the compound level avoids problems from differences in exchange rates, simplifies sources and products relationships, and draws a much more detailed picture. δDs of n-alkanes are generally thought to be representative of oil as they are the major components. Schimmelmann et al. confirmed that alkane fractions have almost the same DHRs as whole oils. Depending on source material type and maturity, δD of n-alkanes can vary from −100‰ to −180‰. A common phenomenon of various oil and matured rock derived n-alkanes is a trend of increasing δD with chain length. For example, Li et al. analyzed oils from the Western Canada Sedimentary Basin and found δD increased between 20‰ and 40‰ from C to C. This "isotope slope" is an artifact of kinetic fractionation associated with thermal cracking of carbon chains. This trend has been experimentally reproduced and theoretically modeled by Tang et al. N-alkanes are also known to preserve detailed information of source material. Li et al. studied oils from the marine-derived Upper Cretaceous Second White Speckled Shale and found strong depleted signal around −180‰ in C-C. The low δD of this marine samples was explained by the discharge of a large high latitude river. Schimmelmann et al. found that the δD of the oil sampled from coaly facies of the Crayfish group reaches down to −230‰ where as those sampled from algal facies of the same group are around −100‰. Such huge variation is hard to explain by any other causes than Australia splitting from Antarctica in late Cretaceous. Another special case reported by Xiong et al. studied Ordovician carbonates from Bohai Bay Basin. They found big differences between δD of n-alkanes, reflecting that the original signal is preserved rather than being homogenized. The result is not obvious as the sample is very mature (inferred vitrinite reflectance R up to 2.3). Thus this is strong evidence that carbonate systems have much lower catalytic efficiency of hydrogen exchange on hydrocarbons. Strong enrichment (~40‰) in odd carbon numbered alkanes to even carbon numbered alkanes is also found in some subset of samples and the reason is unclear at this point. This odd-even effect is also observed in immature clastic sediments. Ecohydrology Ecohydrology is concerned with the interaction between ecosystems and water cycling, from measuring the small scale drainage of water into soil to tracking the broad movements of water evaporating from trees. Because deuterium acts as a conservative tracer, it works well for tracking water movement through plants and ecosystems. Though water movement in single-process phenomena such as evaporation is relatively simple to track, many systems (e.g. cloud forests) in the environment have multiple sources, and tracking water movement becomes more complicated. Isotope spiking can also be done to determine water transport through soil and into plants by injecting deuterated water directly into the ground. Stable isotope analysis of xylem water can be used to follow the movement of water from soil into the plants and therefore provide a record of the depth of water acquisition. An advantage to using xylem water is that in theory, the HIC should directly reflect the input water without being affected by leaf transpiration. For example, Dawson and Ehleringer used this approach to determine whether trees that grow next to streams are using the surface waters from that stream. 
Water from the surface would have the same isotopic composition as the stream, while water from deeper in the ground would derive from past precipitation inputs. In this case, younger trees had a xylem water isotopic composition very close to that of the adjacent stream and likely used surface water to get established. Older trees had xylem water that was depleted relative to the stream, reflecting that they source their water from deeper underground. Other stable isotope studies have also determined that plants in redwood forests do not just take up water from their roots but acquire a significant proportion of water via stomatal uptake on leaves. Plant water can be used to characterize other plant physiological processes that affect the water cycle; for example, leaf water is widely used for modeling transpiration and water-use efficiency (WUE). In transpiration, the Craig-Gordon model for lake water enrichment through evaporation has been found experimentally to fit well for modelling leaf water enrichment. Transpiration can be measured by directly injecting deuterated water into the base of a tree, trapping all of the water vapor transpired from the leaves and measuring the condensate. Water use can also be measured, and is calculated from a heavy water injection as:

WU = M / Σ (C_i·Δt_i)

where WU is water use in kilograms/day, M is the mass of deuterated water injected in grams, the sum runs over the time intervals i up to T, the final day of the experiment, C_i is the concentration of deuterium in time interval i in grams/kilogram, and Δt_i is the length of time interval i in days. Though the water use calculated via thermal dissipation probing of some tropical plants, such as bamboos, correlates strongly with the water use measured by tracking D₂O movement, the exact values are not the same. In fact, with the legume tree Gliricidia sepium, which produces a heartwood, transpired water did not even correlate strongly with injected deuterated water concentrations, which would further complicate water use measurements from direct injections. This possibly occurred because the heartwood could accumulate heavy water rather than moving the water directly through the xylem to the leaves. WUE, the ratio of carbon fixation to transpiration, has previously been associated with ¹³C/¹²C ratios through a relation of the form:

WUE ∝ p_CO2(1 − φ)(ε_carb − Δ¹³C)/(ε_carb − ε_diff), with Δ¹³C = (δ¹³C_atm − δ¹³C_plant)/(1 + δ¹³C_atm)

where φ is the fraction of fixed carbon that is respired, p_CO2 is the partial pressure of CO₂ in the atmosphere, ε_carb is the fractionation of carboxylation, and ε_diff is the fractionation of diffusion in air. The relation between the δD of plant leaf waxes and WUE has been measured empirically and shows a negative correlation of δD with water-use efficiency. This can be explained in part by lower water-use efficiency being associated with higher transpiration rates. Transpiration exhibits a normal isotope effect, causing D-enrichment in plant leaf water and therefore enrichment of the leaf waxes. Ecology Migration patterns HIC can be useful in tracking animal migration. Animals with metabolically inert tissue (e.g. feathers or hair) synthesize that tissue using hydrogen from source water and food, but ideally do not incorporate subsequent water during migration. Because δD varies geographically, the difference between animal tissue δD and post-migration water δD, after accounting for the biological fractionation of assimilation, can provide information regarding animal movement.
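A minimal sketch of the tissue-water comparison described above: given a measured tissue δD and an assumed net fractionation between source water and tissue, candidate regions can be ranked by how closely their local water δD matches the inferred source water. The region names, the offset and all δD numbers below are hypothetical illustrations, not published calibrations.

def infer_origin(tissue_dD, tissue_offset, regional_water_dD):
    # Rank candidate regions by |inferred source-water δD − regional water δD|.
    # tissue_dD: δD of the metabolically inert tissue (permil)
    # tissue_offset: assumed net water-to-tissue fractionation (permil)
    # regional_water_dD: dict mapping region name -> mean local water δD (permil)
    inferred_water = tissue_dD - tissue_offset
    ranked = sorted(regional_water_dD.items(),
                    key=lambda kv: abs(kv[1] - inferred_water))
    return inferred_water, ranked

water, ranking = infer_origin(-120.0, -25.0,
                              {"north": -140.0, "central": -95.0, "south": -60.0})
print(round(water, 1), ranking[0][0])  # inferred water ≈ -95.0 permil -> "central"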
In monarch butterflies, for example, wing chitin is metabolically inert after it is built, so it can reflect the isotopic composition of the environmental water at the time and location of wing growth. This then creates a record of butterfly origin and can be used to determine migration distance. This approach can also be used in bats and birds, using hair and feathers, respectively. Since rainwater becomes depleted as elevation increases, this method can also track altitudinal migration. However, this is technically hard to do, and the resolution seems to be too poor to track small altitudinal changes. H is most useful in tracking movement of species between areas with large continental water variation, since species movement can be complicated by the similarity of local water δD between places. For example, source water from Baja California may have the same δD as water from Maine. Further, a proportion of the HIC in the tissue can exchange with water and complicate the interpretation of measurements. To determine this percentage of isotopic exchange, which varies according to local humidity levels, standards of metabolically inert tissue from the species of interest can be constructed and equilibriated to local conditions. This allows measured δD from different regions to be compared against each other. Trophic interactions Assimilation of diet into tissue has a tissue-specific fractionation known as the trophic discrimination factor. Diet sources can be tracked through a food web via deuterium isotope profiles, though this is complicated by deuterium having two potential sources – water and food. Food more strongly impacts δD than does exchange with surrounding water, and that signal is seen across trophic levels. However, different organisms derive organic hydrogen in varying ratios of water to food: for example, in quail, 20-30% of organic hydrogen was from water and the remainder from food. The precise percentage of hydrogen from water, depended on tissue source and metabolic activity. In chironomids, 31-47% of biomass hydrogen derived from water, and in microbes as much as 100% of fatty acid hydrogen can be derived from water depending on substrate. In caterpillars, diet δD from organic matter correlates linearly with tissue δD. The same relationship does not appear to hold consistently for diet δD from water, however – water derived from either the caterpillar or its prey plant is more H-enriched than their organic material. Going up trophic levels from prey (plant) to predator (caterpillar) results in an isotopic enrichment. This same trend of enrichment is seen in many other animals - carnivores, omnivores, and herbivores - and seems to follow N relative abundances. Carnivores at the same trophic level tend to exhibit the same level of H enrichment. Because, as mentioned earlier, the amount of organic hydrogen produced from water varies between species, a model of trophic level related to absolute fractionation is difficult to make if the participating species are not known. Consistency in measuring the same tissues is also important, as different tissues fractionate deuterium differently. In aquatic systems, tracking trophic interactions is valuable for not only understanding the ecology of the system, but also for determining the degree of terrestrial input. The patterns of deuterium enrichment consistent within trophic levels is a useful tool for assessing the nature of these interactions in the environment. 
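As an illustration of the two hydrogen sources discussed above (body water versus diet), the Python sketch below estimates tissue δD as a simple mixture of water-derived and diet-derived hydrogen; the fractions follow the quail example in the text (20-30% from water), while the δD values and the zero net fractionations are hypothetical simplifications.

def tissue_dD(f_water, dD_water, dD_diet, eps_water=0.0, eps_diet=0.0):
    # Two-source mixing estimate of tissue δD (permil): a fraction f_water of
    # tissue hydrogen comes from body water and the rest from diet, each source
    # optionally shifted by a net fractionation (eps, permil).
    return f_water * (dD_water + eps_water) + (1.0 - f_water) * (dD_diet + eps_diet)

# Hypothetical drinking water at -60 permil and diet at -120 permil:
for f in (0.2, 0.3):
    print(f, round(tissue_dD(f, -60.0, -120.0), 1))  # -108.0 and -102.0 permil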
Microbial metabolism Biological deuterium fractionation through metabolism is very organism and pathway dependent, resulting in a wide variability in fractionations. Despite this, some trends still hold. Hydrogen isotopes tend to fractionate very strongly in autotrophs relative to heterotrophs during lipid biosynthesis - chemoautotrophs produce extremely depleted lipids, with the fractionation ranging from roughly −200 to −400‰. This has been observed both in laboratory-grown cultures fed a known quantity of deuterated water and in the environment. Proteins, however, do not follow as significant a trend, with both heterotrophs and autotrophs capable of generating large and variable fractionations. In part, kinetic fractionation of the lighter isotope during formation of reducing equivalents NADH and NADPH result in lipids and proteins that are isotopically lighter. Salinity appears to play a role in the degree of deuterium fractionation as well; more saline waters affect growth rate, the rate of hydrogen exchange, and evaporation rate. All of these factors influence lipid δD upon hydrogen being incorporated into biomass. In coccolithophores Emiliania huxleyi and Gephyrocapsa oceanica, alkenone δD has been found to correlate strongly to organism growth rate divided by salinity. The relationship between deuterium fractionation and salinity could potentially be used in paleoenvironment reconstruction with preserved lipids in the rock record to determine, for example, ocean salinity at the time of organismal growth. However, the degree of fractionation is not necessarily consistent between organisms, complicating the determination of paleosalinity with this method. There also appears to be a negative correlation between growth rate and fractionation in these coccolithophores. Further experiments on unicellular algae Eudorina unicocca and Volvox aureus show no effect of growth rate (controlled by nitrogen limitation) on fatty acid δD. However, sterols become more D-depleted as growth rate increases, in agreement with alkenone isotopic composition in coccolithophores. Overall, although there are some strong trends with lipid δD, the specific fractionations are compound-specific. As a result, any attempt to create a salinometer through δD measurements will necessarily be specific to a single compound type. Environmental chemistry An important goal of environmental chemistry is tracing the source and degradation of pollutants. Various methods have been used for fingerprinting pools of environmental pollutants such as the bulk chemical composition of a spill, isotope ratios of the bulk chemical mixture, or isotope ratios of individual constituent compounds. Stable isotopes of carbon and hydrogen can be used as complementary fingerprinting techniques for natural gas. The DHR of hydrocarbons from the Deepwater Horizon oil spill was used to verify that they were likely from the Macondo well. HICs have also been used as a measure of the relative amount of biodegradation that has occurred in oil reservoirs in China, and studies on pure cultures of n-alkane degrading organisms have shown a chain-length dependence on the amount of hydrogen isotope fractionation during degradation. Additional studies have also shown hydrogen isotope effects in the degradation of methyl tert-butyl ether and toluene that have been suggested to be useful in the evaluation of the level of degradation of these polluting compounds in the environment. 
In both cases the residual unreacted compounds became H-enriched to a few tens of ‰, with variations exhibited between different organisms and degree of reaction completeness. These observations of heavy residual compounds have been applied to field observations of biodegradation reactions such as removal of benzene and ethylbenzene, which imparted a D/H fractionation of 27 and 50 ‰, respectively. Also, analysis of o-xylene in a polluted site showed high residual DHRs after biodegradation, consistent with activation of C-H bonds being a rate limiting step in this process Source attribution and forensics Stable isotope ratios have found uses in various instances where the authenticity or origin of a chemical compound is called into question. Such situations include assessing the authenticity of food, wine and natural flavors; drug screening in sports (see doping in sport); pharmaceuticals; illicit drugs; and even helping identify human remains. In these cases it is often not enough to detect or quantify a certain compound, since the question is the origin of the compound. The strength of hydrogen isotope analysis in answering these questions is that the DHR of a natural product is often related to the natural water DHRs in the area where the product was formed (see: Hydrologic cycle). Since DHRs vary significantly between different areas, this can be a powerful tool in locating the original source of many different substance. Food and flavor authentication Foods, flavorings and scents are often sold with the guarantee that chemical additives come from natural sources. This claim becomes hard to test when the chemical compound has a known structure and is readily synthesized in a lab. Authentication of claims about the origins of these chemicals has made good use of various stable isotopes, including those of hydrogen. Combined carbon and hydrogen isotope analysis has been used to test the authenticity of (E)-methyl cinnamate, γ-decalactone and δ-decalactone. Hydrogen and nitrogen isotope ratios have been used for the authentication of alkylpyrazines used as "natural" coffee flavorings. Doping The isotope ratio of carbon in athletes' steroids has been used to determine whether these steroids came from the athlete's body or an outside source. This test has been used in a number of high-profile anti-doping cases and has various benefits over simply characterizing the concentration of various compounds. Attempts are being made to create similar tests based on stable hydrogen isotopes which could be used to complement the existing testing methods. One concern with this method was that the natural steroids produced by the human body may vary significantly based on the H content of drinking water, leading to false detection of doping based on HIC differences. This concern has been addressed in a recent study which concluded that the effect of DHR of drinking water did not pose an insurmountable source of error for this anti-doping testing strategy. Pharmaceutical copies The pharmaceutical industry has revenues of hundreds of billions of dollars a year globally. With such a large industry counterfeiting and copyright infringement are serious issues, and hydrogen isotope fingerprinting has become a useful tool in verifying the authenticity of various drugs. As described in the preceding sections, the utility of DHRs is highest when combined with measurements of other isotope ratios. 
In an early study on the stable isotope compositions of tropicamide, hydrocortisone, quinine and tryptophan, the carbon, nitrogen, oxygen and hydrogen stable isotopes were analyzed by EA-IRMS, and clear distinctions could be made between manufacturers and even batches of the drugs based on their isotope signatures. In this study it was determined that the hydrogen and oxygen isotope ratios were the two best fingerprints for distinguishing between different drug sources. A follow-up study analyzing naproxen from various lots and manufacturers also showed a similar ability to distinguish between sources of the drugs. These isotope signatures can be used not only to distinguish between different manufacturers, but also between different synthetic pathways for the same compound. These studies relied on the natural variations that occurred in the synthesis of these drugs, but other studies have used starting ingredients intentionally labeled with deuterium and carbon-13, and showed that these labels could be traced into the final pharmaceutical product. DHRs can also be determined for specific sites in a drug by deuterium NMR; this has been used to distinguish between different synthetic methods for ibuprofen and naproxen in one study, and Prozac and fluoxetine in another. These studies show that bulk DHR information from EA-IRMS and site-specific DHRs from deuterium NMR have great utility for pharmaceutical drug authenticity testing. Illicit drugs The sources and production mechanisms of illegal drugs have been another area that has seen successful application of hydrogen isotope characterization. Usually, as with other applications of stable isotope techniques, results are best when data for multiple stable isotopes are compared with one another. δD, δ13C and δ15N have been used together to analyze tablets of MDA and MDMA and have successfully identified differences which could be used to differentiate between different sources or production mechanisms. The same combination of stable isotopes with the addition of δ18O was applied to heroin and associated packaging and could successfully distinguish between different samples. Analysis using deuterium NMR was also able to shed light on the origin and processing of both cocaine and heroin. In the case of heroin, this site-specific natural isotope fractionation measured by deuterium NMR (SNIF-NMR) method could be used for determining the geographic origin of the molecule by analyzing so-called natural sites (which were present in the morphine from which heroin is made), as well as gaining information on the synthesis process by analyzing the artificial sites (added during drug processing). Provenance of human remains The geographic variation in DHR in human drinking water is recorded in hair. Studies have shown a very strong relation between an individual's hair and drinking water DHRs. Since tap water DHR depends strongly on geography, a person's hair DHR can be used to determine regions where they most likely lived during hair growth. This idea has been used in criminal investigations to try to constrain the movements of a person prior to their death, in much the same way DHRs have been used to track animal migration. By analyzing sections of hair of varying ages, one can determine in which D/H regions a person was living at a specific time before their death. See also Abundance of chemical elements Hydrogen isotope Natural abundance Stable isotopes References Biochemistry methods Isotopes of hydrogen Deuterium Biogeochemistry Limnology Chemical oceanography
Hydrogen isotope biogeochemistry
[ "Chemistry", "Biology", "Environmental_science" ]
25,117
[ "Biochemistry methods", "Isotopes of hydrogen", "Environmental isotopes", "Environmental chemistry", "Isotopes", "Chemical oceanography", "Biogeochemistry", "Biochemistry" ]
50,532,084
https://en.wikipedia.org/wiki/Proteolysis%20targeting%20chimera
A proteolysis targeting chimera (PROTAC) is a molecule that can remove specific unwanted proteins. Rather than acting as a conventional enzyme inhibitor, a PROTAC works by inducing selective intracellular proteolysis. A heterobifunctional molecule with two active domains and a linker, a PROTAC consists of two covalently linked protein-binding molecules: one capable of engaging an E3 ubiquitin ligase, and another that binds to a target protein meant for degradation. Recruitment of the E3 ligase to the target protein results in ubiquitination and subsequent degradation of the target protein via the proteasome. Because PROTACs need only to bind their targets with high selectivity (rather than inhibit the target protein's enzymatic activity), there are currently many efforts to retool previously ineffective inhibitor molecules as PROTACs for next-generation drugs. Initially described by Kathleen Sakamoto, Craig Crews and Ray Deshaies in 2001, the PROTAC technology has been applied by a number of drug discovery labs using various E3 ligases, including pVHL, CRBN, Mdm2, beta-TrCP1, DCAF11, DCAF15, DCAF16, RNF114, and c-IAP1. Yale University licensed the PROTAC technology to Arvinas in 2013–14. In 2019, Arvinas put two PROTACs into clinical trials: bavdegalutamide (ARV-110), an androgen receptor degrader, and vepdegestrant (ARV-471), an estrogen receptor degrader. In 2021, Arvinas put a second androgen receptor PROTAC, luxdegalutamide (ARV-766), into the clinic. Mechanism of action PROTACs achieve degradation by "hijacking" the cell's ubiquitin–proteasome system (UPS), bringing together the target protein and an E3 ligase. First, the E1 ubiquitin-activating enzyme activates ubiquitin and transfers it to the E2 conjugating enzyme. The E2 then forms a complex with the E3 ligase. The E3 ligase targets proteins and covalently attaches the ubiquitin to the protein of interest. Eventually, after a ubiquitin chain is formed, the protein is recognized and degraded by the 26S proteasome. PROTACs take advantage of this cellular system by putting the protein of interest in close proximity to the E3 ligase to catalyze degradation. Unlike traditional inhibitors, PROTACs have a catalytic mechanism, with the PROTAC itself being recycled after the target protein is degraded. Design and development The protein-targeting warhead, the E3 ligase ligand, and the linker must all be considered in PROTAC development. Formation of a ternary complex between the protein of interest, the PROTAC, and the E3 ligase may be evaluated to characterize PROTAC activity, because it often leads to ubiquitination and subsequent degradation of the targeted protein. A hook effect (loss of degradation at very high PROTAC concentrations, when unproductive binary complexes outcompete the productive ternary complex) is commonly observed due to the bifunctional nature of the degrader. Currently, pVHL and CRBN have been used in preclinical trials as E3 ligases. However, there still remain hundreds of E3 ligases to be explored, some of which offer the opportunity for cell specificity. Benefits Compared to traditional inhibitors, PROTACs display multiple benefits that make them desirable drug candidates. Due to their catalytic mechanism, PROTACs can be administered at lower doses compared to their inhibitor analogues, though care needs to be taken in achieving oral bioavailability if administered by that route. Some PROTACs have been shown to be more selective than their inhibitor analogues, reducing off-target effects. PROTACs have the ability to target previously undruggable proteins, as they do not need to target catalytic pockets. 
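As a rough illustration of why this catalytic, event-driven mechanism permits sub-stoichiometric dosing, the toy simulation below tracks a target protein exposed to a degrader present at only 1% of the target's initial amount. All concentrations and rate constants are invented, and ternary-complex formation, ubiquitination and proteasomal degradation are collapsed into a single rate term, so this is a sketch of the principle rather than a model of any real PROTAC.

```python
# Toy kinetic sketch: a sub-stoichiometric, recycled degrader versus its target.
# All numbers are invented for illustration; real PROTAC kinetics involve
# explicit ternary-complex formation, ubiquitination and proteasomal steps.

target = 1000.0   # arbitrary units of target protein
degrader = 10.0   # only 1% of the target amount (sub-stoichiometric)
k_cat = 0.02      # per-unit-degrader degradation rate constant (invented)
k_syn = 0.5       # constant resynthesis of the target by the cell (invented)
dt, t_end = 0.1, 200.0

t = 0.0
while t < t_end:
    degraded = k_cat * degrader * target * dt   # the degrader is recycled, not consumed
    target = max(target - degraded + k_syn * dt, 0.0)
    t += dt

print(f"Target remaining after {t_end:g} time units: {target:.1f}")
```

In this sketch the degrader amount never changes, which is the essential difference from an occupancy-driven inhibitor that must be maintained at roughly stoichiometric levels to keep its target blocked.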
The ability to act without engaging a catalytic pocket also helps prevent the mutation-driven drug resistance often found with enzymatic inhibitors. PROTAC databases BioGRID is an open public resource containing manually curated molecular interaction data. In addition to its extensive catalogue of genetic and protein interactions, BioGRID also curates chemical interactions, including experimentally determined PROTACs and PROTAC-related molecules with accompanying target and E3 information. PROTACpedia is a manually curated and user-contributed PROTAC-specific public access database. The E3 Atlas is a comprehensive E3 database that characterizes the potential for specific E3 ligases to be employed in PROTAC design. References Pharmacology Biotechnology Chemical biology
Proteolysis targeting chimera
[ "Chemistry", "Biology" ]
962
[ "Pharmacology", "Biotechnology", "nan", "Medicinal chemistry", "Chemical biology" ]
62,725,162
https://en.wikipedia.org/wiki/Swietenia%20Puspa%20Lestari
Swietenia Puspa Lestari (born 23 December 1994) is an Indonesian underwater diver, environmental engineer and environmental activist. Life Lestari is a native of Pramuka Island in the Java Sea. A keen diver from childhood, she studied environmental engineering at Bandung Institute of Technology, graduating in 2017. She is executive director and co-founder of Jakarta-based Divers Clean Action (DCA) and leads a team of volunteer divers who clear rubbish, especially plastic waste from the reefs and recycle what they find. Beginning with three people in 2015, the DCA has grown to 12 team members and nearly 1,500 volunteers across Indonesia. Lestari related that diving to collect waste can be dangerous because of the high currents, but that the rapid increase in tourism since 2007 has led to far more trash being dumped into the formerly pristine seas around Indonesia's many islands. In 2017 Lestari founded the Indonesian Youth Marine Debris Summit (IYMDS). The same year, she represented Indonesia and spoke at the 2017 United Nations Climate Change Conference in Bonn, Germany. She also helped initiate an anti-plastic drinking straw campaign in Indonesia and convinced 700 restaurants to reduce the use of single-use straws. In 2019, Lestari was listed among the BBC's 100 Women, a list of 100 inspiring and influential women. Later that year, she was invited to attend Barack and Michelle Obama's 'Obama Foundation Leaders Forum', which was held in Kuala Lumpur, Malaysia, in December. She was subsequently included in Forbes' "30 Under 30 - Asia - Social Entrepreneurs 2020". References External links 21st-century Indonesian women scientists 1994 births Bandung Institute of Technology alumni Underwater divers Indonesian women environmentalists Living people Indonesian women engineers 21st-century women engineers Environmental engineers 21st-century Indonesian engineers
Swietenia Puspa Lestari
[ "Chemistry", "Engineering" ]
365
[ "Environmental engineers", "Environmental engineering" ]
47,268,817
https://en.wikipedia.org/wiki/PICRUSt
PICRUSt is a bioinformatics software package. The name is an abbreviation for Phylogenetic Investigation of Communities by Reconstruction of Unobserved States. The tool serves in the field of metagenomic analysis, where it allows inference of the functional profile of a microbial community based on a marker gene survey across one or more samples. In essence, PICRUSt takes a user-supplied operational taxonomic unit table (typically referred to as an OTU table), representing the marker gene sequences (most commonly a 16S cluster) together with their relative abundances in each of the samples. The output of PICRUSt is a sample-by-functional-gene-count matrix, giving the count of each functional gene in each of the samples surveyed. The ability of PICRUSt to estimate the functional-gene profile for a given sample relies on a set of known sequenced genomes. This could also be thought of as an automated alternative to manually researching the gene families likely to be present in organisms whose sequences are found in a 16S ribosomal RNA amplicon library. The description below corresponds to the original version of PICRUSt, but a major update to this tool is currently being developed. Genome prediction algorithm In an initial preprocessing phase, PICRUSt constructs confidence intervals and point predictions for the number of copies of each gene family in each bacterial and archaeal strain in a reference tree, using organisms with sequenced genomes as a reference. More specifically, for each gene family, PICRUSt maps known gene copy numbers (from complete sequenced genomes) onto a reference tree of life. These gene family copy numbers are treated as continuous traits, and an evolutionary model is constructed under the assumption of Brownian motion. These evolutionary models can be constructed with either maximum likelihood, relaxed maximum likelihood or Wagner parsimony. This evolutionary model is then used to predict both a point estimate and a confidence interval for the copy number of microorganisms without sequenced genomes. This 'genome prediction' step produces a large table of bacterial types (specifically operational taxonomic units or OTUs) vs. gene family copy numbers. This table is distributed to end users. It is important to note that this prediction method is not the same as a nearest neighbor approach (i.e. just looking up the nearest sequenced genome), and was shown to give a small but significant improvement in accuracy over that strategy. However, nearest neighbor prediction is available as an option in PICRUSt. Notably, while this functionality is typically used for prediction of gene copy numbers in bacteria, it could, in principle, be used for prediction of any other continuous trait given trait data for diverse organisms and a reference phylogeny. Langille et al. tested the accuracy of this genome prediction step using leave-one-out cross validation on the input set of sequenced genomes. Additional tests examined sensitivity to errors in phylogenetic inference, lack of genomic data, and the accuracy of the confidence intervals on gene content. A similar step predicts the copy number of 16S rRNA genes. Metagenome prediction algorithm When applying PICRUSt to a 16S rRNA gene library, PICRUSt matches reference operational taxonomic units against the tables, and retrieves a predicted 16S rRNA copy number and gene copy number for each gene family. 
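The arithmetic that PICRUSt performs in this step, described in the remainder of this section, can be illustrated with a minimal sketch. The OTU abundances and copy numbers below are invented, and the snippet is not the actual PICRUSt implementation, which reads these values from its precomputed reference tables.

```python
# Minimal sketch of PICRUSt-style metagenome prediction (hypothetical data).

otu_abundance = {"OTU_1": 120, "OTU_2": 30, "OTU_3": 50}      # observed 16S read counts
predicted_16s_copies = {"OTU_1": 4, "OTU_2": 1, "OTU_3": 2}   # predicted 16S copies per genome
predicted_gene_copies = {                                      # predicted gene-family copies per genome
    "OTU_1": {"geneA": 2, "geneB": 0},
    "OTU_2": {"geneA": 1, "geneB": 3},
    "OTU_3": {"geneA": 0, "geneB": 1},
}

metagenome = {}
for otu, reads in otu_abundance.items():
    organisms = reads / predicted_16s_copies[otu]      # correct for multiple 16S copies per genome
    for gene, copies in predicted_gene_copies[otu].items():
        metagenome[gene] = metagenome.get(gene, 0.0) + organisms * copies

print(metagenome)   # {'geneA': 90.0, 'geneB': 115.0}
```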
The abundance of each OTU is divided by its predicted 16S copy number (if a bacterium has multiple 16S copies, its apparent abundance in 16S rRNA data will be inflated), and then multiplied by the copy number of the gene family. This gives a prediction for the contribution of each OTU to the overall gene content of the sample (the metagenome). Finally, these individual contributions are summed together to produce an estimate of the genes present in the metagenome. Langille et al., 2013 tested the accuracy of this metagenome prediction step by using previously reported datasets in which the same biological sample was subjected to 16S rRNA gene amplification and shotgun metagenomics. In these cases, the shotgun metagenomic results were taken as a representation of the 'true' community, and the 16S rRNA gene amplicon libraries were fed into PICRUSt to attempt to predict those data. Test datasets included human microbiome samples from the Human Microbiome Project, soil samples, diverse mammalian samples, and samples from the Guerrero Negro microbial mats. The Nearest Sequenced Taxon Index Because PICRUSt, and evolutionary comparative genomics in general, depends on sequenced genomes, biological samples from well-studied environments (many sequenced genomes) will be better predicted than those from poorly studied environments. In order to assess how well the organisms in a sample are represented by available sequenced genomes, PICRUSt optionally allows users to calculate a Nearest Sequenced Taxon Index (NSTI) for their samples. This index reflects the average phylogenetic distance between each 16S rRNA gene sequence in the sample and a 16S rRNA gene sequence from a fully sequenced genome. In general, the lower the NSTI score, the more accurate PICRUSt's predictions are expected to be. For example, PICRUSt was shown to be much more accurate on diverse soil samples and samples from the Human Microbiome Project than on microbial mat samples from Guerrero Negro, which contained many bacteria without any sequenced relatives. Related tools Okuda et al., 2012 published a similar method that used a bounded k-nearest neighbor approach to predict virtual metagenomes. They validated their approach using 16S rRNA gene sequences extracted from shotgun metagenomes, and compared the predictions of their method against the full metagenome. CopyRighter, like PICRUSt, uses evolutionary modeling and phylogenetic trait prediction to estimate 16S rRNA gene sequence copy numbers for each bacterial and archaeal type in a sample, and then uses these estimates to correct estimates of community composition. PanFP is a similar method, but based on genome predictions for each taxonomic group. Benchmarking showed highly similar performance to PICRUSt when compared on the same datasets. One advantage is that all OTUs, not just those in a reference phylogeny, can be used. One disadvantage is that confidence intervals and evolutionary models are not constructed. PAPRICA is a metagenome prediction tool based on placing input 16S rRNA gene sequences into a known phylogenetic tree built from reference genomes. The main prediction output corresponds to Enzyme Commission numbers. Piphillin is a tool produced by the company Second Genome that produces metagenome predictions based on nearest-neighbour clustering of input 16S rRNA gene sequences with 16S rRNA gene sequences from reference genomes. There is a web portal for running this tool on the Second Genome website. This tool is under continual development and undergoing validation, as summarized in a 2020 publication. 
Tax4Fun is a similar tool based on linking the 16S ribosomal RNA genes from all KEGG organisms with 16S rRNA gene sequences found in the SILVA ribosomal RNA database. Originally this tool was restricted to 16S rRNA gene sequences found within the SILVA database. However, the latest version of this tool, Tax4Fun2, can be used with OTUs or amplicon sequence variants from any clustering pipeline. References Metagenomics Bioinformatics software Environmental microbiology
PICRUSt
[ "Biology", "Environmental_science" ]
1,484
[ "Bioinformatics", "Environmental microbiology", "Bioinformatics software" ]
47,273,717
https://en.wikipedia.org/wiki/Conformal%20bootstrap
The conformal bootstrap is a non-perturbative mathematical method to constrain and solve conformal field theories, i.e. models of particle physics or statistical physics that exhibit similar properties at different levels of resolution. Overview Unlike more traditional techniques of quantum field theory, conformal bootstrap does not use the Lagrangian of the theory. Instead, it operates with the general axiomatic parameters, such as the scaling dimensions of the local operators and their operator product expansion coefficients. A key axiom is that the product of local operators must be expressible as a sum over local operators (thus turning the product into an algebra); the sum must have a non-zero radius of convergence. This leads to decompositions of correlation functions into structure constants and conformal blocks. The main ideas of the conformal bootstrap were formulated in the 1970s by the Soviet physicists Alexander Polyakov and Alexander Migdal and the Italian physicists Sergio Ferrara, and Aurelio Grillo. Other early pioneers of this idea were and . In two dimensions, the conformal bootstrap was demonstrated to work in 1983 by Alexander Belavin, Alexander Polyakov and Alexander Zamolodchikov. Many two-dimensional conformal field theories were solved using this method, notably the minimal models and the Liouville field theory. In higher dimensions, the conformal bootstrap started to develop following the 2008 paper by Riccardo Rattazzi, Slava Rychkov, Erik Tonni and Alessandro Vichi. The method was since used to obtain many general results about conformal and superconformal field theories in three, four, five and six dimensions. Applied to the conformal field theory describing the critical point of the three-dimensional Ising model, it produced the most precise predictions for its critical exponents. Current research The international Simons Collaboration on the Nonperturbative Bootstrap unites researchers devoted to developing and applying the conformal bootstrap and other related techniques in quantum field theory. History of the name The modern usage of the term "conformal bootstrap" was introduced in 1984 by Belavin et al. In the earlier literature, the name was sometimes used to denote a different approach to conformal field theories, nowadays referred to as the skeleton expansion or the "old bootstrap". This older method is perturbative in nature, and is not directly related to the conformal bootstrap in the modern sense of the term. External links Open problems in conformal bootstrap References Conformal field theory
Conformal bootstrap
[ "Physics" ]
529
[ "Quantum mechanics", "Quantum physics stubs" ]
44,158,633
https://en.wikipedia.org/wiki/International%20Institute%20for%20Nanotechnology
The International Institute for Nanotechnology (IIN) was established by Northwestern University in 2000. It was the first institute of its kind in the United States and is one of the premier nanoscience research centers in the world. Today, the IIN represents and unites more than $1 billion in nanotechnology research, educational programs, and supporting infrastructure. IIN faculty includes 20 members of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. One of Northwestern University's largest collaborative efforts, IIN brings together more than 240 chemists, engineers, biologists, physicians, and business experts, to focus on society's most perplexing problems. But the IIN's influence extends far beyond Northwestern's campus. It has developed collaborative partnerships with academic institutions in 30 countries, as well as with more than a dozen U.S. federal agencies and 100 countries. Since its inception, more than 2,000 products and systems have been commercialized worldwide. Twenty-three start-up companies have been launched based upon IIN research, and they have attracted over $1 billion in venture capital funding. The IIN is changing the face of research in fields from medical diagnostics to materials science. The IIN drives innovation-based business formation, employment and economic growth. The role of the institute is to support meaningful efforts in nanotechnology, house state-of-the-art nanomaterials characterization facilities, and nucleate individual and group efforts aimed at addressing and solving key problems in nanotechnology. The IIN positions Northwestern University and its partners in academia, industry, and national labs as leaders in this exciting field. Research Areas Research is organized into the following eight pillars, each focused on a critical societal issue: NanoMedicine NanoOncology Molecular Electronics NanoEnabled Energy Solutions Environmental Nanotechnology NanoEnabled Solutions for Food and Water Nanotechnology for Security and Defense NanoEducation Northwestern Faculty Members Involved with IIN IIN faculty are drawn from 32 departments and four schools at Northwestern: Executive Council The IIN Executive Council is a group of business people, led by David Kabiller, committed to advocating for nanotechnology research and education; promoting the IIN as a high-impact philanthropic opportunity; and advising IIN leadership on philanthropy, marketing, and bringing technology from the laboratory to market. Kabiller Prize Nanomedicine is an emerging field that focuses on using nanotechnology to impact the field of medicine. Powerful new ways of studying, diagnosing, and treating diseases have been the dividends of basic research in the field of nanoscience. Indeed, this field and the materials devices that derive from it, have a chance to revolutionize medicine as we currently know it. Through a generous donation from entrepreneur David Kabiller, the IIN established the $250,000 Kabiller Prize in Nanoscience and Nanomedicine and the $10,000 Kabiller Young Investigator Award in Nanoscience and Nanomedicine. Every other year, the Kabiller Prize recognizes individuals who have made a career-long, significant impact in the field of nanotechnology applied to medicine and biology. The Kabiller Young Investigator Award recognizes individuals who have made breaking discoveries within the last few years in the same area that have the potential to make a lasting impact. 
Education The IIN seeks to develop and nurture the scientists, engineers, technicians, and teachers of tomorrow; enrich the academic environment; and inform and engage the public through the following programs: Undergraduate Research Ryan Graduate Fellowships IIN Postdoctoral Fellowships Worldwide Nanotechnology Town Halls Annual international IIN Symposium Nano Boot Camp for Clinicians All Scout Nano Day Frontiers in Nanotechnology Seminar Series Innovation Ecosystem The IIN has created a new kind of research coalition with a large precompetitive nanoscale science and engineering platform for developing applications, demonstrating manufacturability and training skilled researchers. Nanotechnology Corporate Partners (NCP) Program The IIN works on joint research initiatives with corporations including: Small Business Partnership Commercialization Program This program links institute researchers with venture capital experts and has resulted in the formation of the 23 companies below, which have collectively raised over $1 billion in financing: References Nanotechnology institutions
International Institute for Nanotechnology
[ "Materials_science" ]
868
[ "Nanotechnology", "Nanotechnology institutions" ]
44,160,992
https://en.wikipedia.org/wiki/Electron%20phenomenological%20spectroscopy
Electron phenomenological spectroscopy (EPS) is based on correlations between integral optical characteristics and the properties of a substance treated as a single quantum continuum: the spectrum-properties and color-properties relationships. According to these laws, the physicochemical properties of substance solutions in the ultraviolet (UV), visible and near-infrared (IR) regions of the electromagnetic spectrum are proportional to the quantity of radiation absorbed. These aspects of electron spectroscopy were demonstrated in the works of Mikhail Yu. Dolomatov and have been named electron phenomenological spectroscopy because the integral characteristics of the system are studied. Qualitatively new laws appear at the integral level. Unlike conventional spectroscopic methods, EPS studies substances as a comprehensive quantum continuum without separating the spectrum of the substance into characteristic spectral bands at certain frequencies or wavelengths of individual functional groups or components. New physical phenomena appear when the radiation-absorbing system is considered as a whole. For example, EPS is based on the regular correlation of physico-chemical properties with integral spectral characteristics in the UV and/or visible regions of the electromagnetic spectrum (the so-called spectrum-properties law). Color is also an integral characteristic of a visible spectrum, and a consequence of this is the so-called color-properties law. All of this allows EPS methods to be used for studying both individual and complex multicomponent substances. Methods of EPS were developed after 1988 by the group of Mikhail Yu. Dolomatov. EPS methods belong to a number of new, effective techniques of monitoring and control and can be used in the petroleum and petrochemical industries, environmental monitoring, electronics, biophysics, medicine, criminalistics, space exploration and other fields. References Spectroscopy
Electron phenomenological spectroscopy
[ "Physics", "Chemistry" ]
343
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
67,046,091
https://en.wikipedia.org/wiki/WISP%20%28particle%20physics%29
In particle physics, the acronym WISP refers to a largely hypothetical weakly interacting sub-eV particle, or weakly interacting slender particle, or weakly interacting slim particle – low-mass particles which rarely interact with conventional particles. The term is used to generally categorize a type of dark matter candidate, and is essentially synonymous with axion-like particle (ALP). WISPs are generally hypothetical particles. WISPs are the low-mass counterpart of weakly interacting massive particles (WIMPs). Discussion Except for conventional, active neutrinos, all WISPs are candidate dark matter constituents, and many proposed experiments to detect WISPs might possibly be able to detect several different kinds. “WISP” is most often used to refer to low-mass hypothetical particles which are viable dark matter candidates. Examples include: Axion – a long-standing hypothetical light particle related to the strong force Sterile neutrino – never-observed particles explicitly excluded from weak interactions (if they exist) as well as the strong and electromagnetic forces Supersymmetric particles, particularly the lightest supersymmetric particle, which might be a Neutralino – supersymmetric fermions that are electrically neutral mixtures of the superpartners of neutral bosons Excluded active neutrinos Although ordinary “active” neutrinos (left-chiral neutrinos and right-chiral antineutrinos) are particles known to exist, and though active neutrinos do indeed technically satisfy the description of the term, they are often excluded from lists of "WISP" particles. The reason that active neutrinos are often not included among WISPs is that they are no longer viable dark matter candidates: current estimated limits on their number density and mass indicate that their cumulative mass density could not be high enough to account for the amount of dark matter inferred from its observed effects, although they certainly do make some small contribution to dark matter density. Sources The various sources of WISPs could possibly include hot astrophysical plasmas and energy transport in stars. Note, however, that since they remain hypothetical (except for active neutrinos), the means of creation of WISPs depends on the theoretical framework used to propose them. See also Axion Feebly interacting particle (FIP) Hot dark matter Light dark matter Lightest supersymmetric particle (LSP) Sterile neutrino Weakly interacting massive particle (WIMP) References Dark matter Hypothetical elementary particles Physics beyond the Standard Model
WISP (particle physics)
[ "Physics", "Astronomy" ]
508
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Unsolved problems in physics", "Particle physics", "Particle physics stubs", "Exotic matter", "Hypothetical elementary particles", "Physics beyond the Standard Model", "Matter" ]
67,050,369
https://en.wikipedia.org/wiki/Modular%20Product%20Architecture
A Modular Product Architecture is a product design practice that uses principles of modularity. In short, a Modular Product Architecture can be defined as a collection of modules with unique functions and strategies, protected by interfaces, to deliver an evolving family of market-driven products. Karl Ulrich, Professor in Mechanical Engineering, defines a Product Architecture as “(1) the arrangement of functional elements; (2) the mapping from functional elements to physical components; (3) the specification of the interfaces among interacting physical components”. A Modular Product Architecture consists of interchangeable building blocks (modules) that can be combined into a variety of product variants. Assigning strategic intent to each module enables the producing company to connect its business objectives with the design of the product. This can be done by the use of Module Drivers. The Module Drivers were first defined in 1998 by Gunnar Erixon, PhD in Design Engineering at KTH Royal Institute of Technology, and grouped into Primary and Secondary Module Drivers. The primary drivers define the strategy of a module based on its need for development or variance, as follows. Carry Over: Describes a part that is unlikely to undergo any design changes during the life of the Modular Product Architecture. Common Unit: Describes a part that can be used for the entire product assortment or large parts of it. Technical Specification: Describes a part that carries the product's variance and performance properties. Styling: Describes visible parts of the product that represent identity towards the customer. These parts are strongly influenced by trends and are often connected to a brand or trademark. Technology Push: Describes a part that is likely to undergo design changes due to changing demands or technology shifts. Planned Development: Describes a part that the company intends to further develop, for example to better fulfill a customer value or to cut cost. The planned changes are described in the Modular Product Architecture development plan. Using standardized interfaces between the modules enables interchangeability of different variants of modules and ensures that the Modular Product Architecture can be maintained over time. This enables the producing company to continuously update and improve the Modular Product Architecture and respond to changing needs in the market. Modular Product Architectures can be developed by the use of Modular function deployment. References Modularity Systems engineering Product development
Modular Product Architecture
[ "Engineering" ]
442
[ "Systems engineering" ]
49,010,273
https://en.wikipedia.org/wiki/Quality%20control%20in%20tissue%20engineering
The rapid development in the multidisciplinary field of tissue engineering has resulted in a variety of new and innovative medicinal products, often carrying living cells, intended to repair, regenerate or replace damaged human tissue. Tissue engineered medicinal products (TEMPs) vary in terms of the type and origin of cells and the product’s complexity. As all medicinal products, the safety and efficacy of TEMPs must be consistent throughout the manufacturing process. Quality control and assurance are of paramount importance and products are constantly assessed throughout the manufacturing process to ensure their safety, efficacy, consistency and reproducibility between batches. The European Medicines Agency (EMA) is responsible for the development, assessment and supervision of medicines in the EU. The appointed committees are involved in referral procedures concerning safety or the balance of benefit/risk of a medicinal product. In addition, the committees organize inspections with regards to the conditions under which medicinal products are being manufactured. For example, the compliance with good manufacturing practice (GMP), good clinical practice (GCP), good laboratory practice (GLP) and pharmacovigilance (PhV). Risk assessment When quality control of TEMPs is considered, a risk assessment needs to be conducted. A risk is defined as a "potentially unfavourable effect that can be attributed to the clinical use of advanced therapy medicinal products (ATMPs) and is of concern to the patient and/or to other populations (e.g. caregivers and off-spring)". Some risks include immunogenicity, disease transmission, tumor formation, treatment failure, undesirable tissue formation, and inadvertent germ transduction. A risk factor is defined as a "qualitative or quantitative characteristic that contributes to a specific risk following handling and/or administration of an ATMP". The integration of all available information on risks and risk factors is called risk profiling. Due to the fact that every TEMP is different, the risks associated with each one of them vary and, subsequently, the procedures that must be implemented to ensure its quality are also unique to the product. Once the risks associated with the TEMP are identified, the appropriate tests must be developed and validated accordingly. Thus, there is no standard set of tests for the quality control of TEMPs. The EMA has released a set of regulatory guidelines on the topics to be considered by companies involved in the development and marketing of medicines for use in the European Union. These guidelines have to be followed in order for the marketing authorization of a product to be issued. Fictitious examples of risk analysis for further elucidation of the process are provided in the EMA guidelines. Quality considerations Careful and detailed documentation concerning the characteristics of the starting materials (e.g. history of the cell line derivation and cell banking) and manufacturing process steps (e.g. procurement of tissue or cells and manipulation) must be maintained. The cellular part of every cell-based medicinal product must be characterized in terms of identity, purity, potency, viability and suitability for the intended use. The non-cellular constituents must be also characterized with regards to their intended function in the final product. For example, scaffolds or membranes that are used to support the cells must be identified and characterized in terms of porosity, density, microscopic structure and particular size. 
The same requirement for characterization applies for biologically active molecules, such as growth factors or cytokines. Release specifications Proper quality control involves the release testing of the final product through updated and validated methods. The release specifications of the product must be selected on the basis of the parameters defined during the characterization studies and the appropriate release tests must be performed. In case a release test cannot be performed on the final product but only on previous stages of the manufacturing, exceptions can be made after proper justification. However, in these cases adequate quality control has to rise from the manufacturing process. Specifications about the stability of the product, the presence or not of genetically modified cells, structural components and whether it is a combination product must also be defined. References Tissue engineering Health care quality
Quality control in tissue engineering
[ "Chemistry", "Engineering", "Biology" ]
831
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
55,785,489
https://en.wikipedia.org/wiki/Glass%20melting%20furnace
A glass melting furnace is designed to melt raw materials into glass. Depending on the intended use, various designs of glass melting furnaces are available. They use different power sources, mainly fossil fuels or fully electric power; a combination of both energy sources is also realized. A glass melting furnace is made from a refractory material. Basics The glass raw materials are fed to the glass melting tank in batches or continuously. The components (the batch) are melted to form a liquid glass melt. In addition to the basic components, the batch also contains cullet from recycled glass to save energy. The cullet content can be up to approx. 85% - 90% (green glass), depending on the requirements of the desired glass color. When changing the glass color (recoloring), the entire process often takes several days in large glass melting furnaces. For economical operation, the glass melting furnaces are operated around the clock throughout the year for so-called mass glass (hollow glass, flat glass). Apart from at most one or two smaller planned intermediate repairs, during which the furnace is taken out of operation, a so-called furnace journey (campaign) up to the general repair (rebuild) can last 16 years or more (depending on the product group). The capacity can range from about one ton to over 2000 tons and the daily throughput can range from a few kilograms to over 1000 tons. The operating temperature inside the furnace, above the so-called glass bath, is about 1550 °C. This temperature is determined by the composition of the batch and by the required amount of molten glass - the daily production - as well as the design-related energy losses. Glass furnaces are operated with a flue gas heat recovery system to increase the energy efficiency. The reduction in emissions required for climate change mitigation has led to various concepts to reduce or replace the use of fossil fuels, as well as to avoid the CO2 released during the melting of the batch by increasing the recycled-cullet content. Day tanks This historical type of glass melting tank produces in batches (discontinuously); it is used to melt glasses that are only required in small quantities. The maximum melting area of day tanks is 10 m2, and the melting capacity is between 0.4 and 0.8 t/m2 of melting area. The pot furnace is one type of this. The furnace consists of a refractory masonry basin with a depth of 40 to 60 cm (bottom furnace), which is covered with a vault with a diameter of 70 to 80 cm (top furnace). At the beginning of the 21st century, day tanks still existed in some mouth-blown glassworks and artisan workshops, as well as in some special glass manufacturers, where small quantities of high quality glass are melted, e.g. optical glass. Day tanks are not taken out of service at the end of the day; the temperature is simply lowered overnight. Since the refractory material typically cannot tolerate large temperature changes, which lead to increased corrosion (consumption) of the lining, rapid cooling must be avoided in any case. If the day tank is taken out of operation, e.g. for maintenance, cooling and heating times (two to several days) adapted to the refractory material must be observed. Smaller furnaces (studio furnaces) in artisan studios are an exception; there, the refractory lining is designed accordingly. Continuously operated glass melting furnaces Continuously operated furnaces consist of two sections, the melting tank and the working tank. 
These are separated by a passage or a constriction (float glass). In the melting tank, the batch is melted and refined. The melt then passes through the passage into the working tank and from there into the feeder (forehearth). There the glass is removed. In hollow glass production, the glass-forming machine below is fed with glass gobs. In flat glass production (float glass), the glass is fed through special wide outlets as a glass ribbon over a so-called float bath of liquid tin (for flat glass without structure, e.g. window glass, car glass) or, for flat glass with structure, over a profiled roller. The melting tanks are made of refractory materials based on alumina (Al2O3), silica (SiO2), magnesia (MgO) and zirconia (ZrO2), as well as combinations of these, to produce the necessary refractory ceramic materials. When creating glass melting furnaces (melting tank including regenerative chambers), up to 2000 t of refractory material can be used for the hollow glass sector and up to 9000 t for the flat glass sector. The heat source in 2021 is typically natural gas, heavy and light oil, and electric current fed directly into the glass bath by means of electrodes. Fossil fuel heating is often combined with supplemental electric heating. Fully electrically heated glass melting tanks are also used. Using pure oxygen instead of air to burn fossil fuels (preferably gas) saves energy and, in the best case, reduces operating costs. The combustion temperature, and therefore the heat transfer, is higher and the volume of gas to be heated is lower. However, oxygen-fired glass furnaces are usually not viable for the production of bulk glass, such as hollow and flat glass, due to the high cost of oxygen production. There are many different types of glass melting furnaces. The types of furnaces used in glass manufacturing include the so-called "end-fired", "side-fired" and "oxy-fuel" furnaces. The latest development is the hybrid furnace; a number of projects of this type are currently under construction and some were already operational in 2023. Typically, the size of a furnace is classified by its production capacity in metric tonnes per day (MTPD). In order to save energy in the glass melting process, in addition to using as much recycled glass as possible (approx. 2% energy savings for every 10% of cullet), preheating the combustion air to as high a temperature as possible by means of a regenerator or a recuperator system is a fundamental part of the process. Regenerator In the most commonly used regenerator, the hot exhaust gases (1300 °C - 1400 °C) are fed discontinuously through chambers filled with a latticework of refractory, rectangular or specially shaped bricks. This so-called lattice work is heated in the process. After this warm-up period (storage of the thermal energy of the exhaust gas by the lattice), the direction of the gas flow is reversed and the fresh, cold air required for combustion now flows through the previously heated lattice work of the chamber. The combustion air is thereby preheated to approx. 1200 °C - 1300 °C. This results in considerable energy savings. After combustion, the exhaust gases in turn enter the lattice of another chamber, where they reheat the lattice that has cooled in the meantime. The process is repeated periodically at intervals of 20 to 30 minutes. The chambers are thus operated discontinuously. The degree of recovery is approx. 
65%. Recuperator A recuperator operates continuously and consists of a metallic heat exchanger between the exhaust gas and the fresh air. Because of the metallic exchanger surface (heat-resistant high-alloy steel tubes in combination with a metallic double shell), a recuperator can only be operated at lower exhaust gas temperatures and therefore works less effectively (about 40%). Thus, only relatively low preheating temperatures (max. 800 °C) are achieved here. A recuperator is less expensive to install and requires less space and investment. This results in advantages in terms of investment costs, which are, however, considerably reduced by the lower effectiveness and can even turn negative over a long period of operation. Where structural restrictions prevent the installation of a regenerator, combinations of regenerator and recuperator have also been developed and implemented in order to achieve the most energy-efficient operation of the system possible. As a further measure to utilize the heat content of the exhaust gas (temperature > 700 °C), downstream combined heat and power generation is technically possible and has already been tested on a large scale. However, the maintenance effort of such a system involves considerable costs and is therefore viewed critically with regard to operating costs. This particular concept of downstream energy recovery is therefore generally not pursued at present. Innovative revisions of this concept would have to be tested in a production environment over the long term and at great expense, which requires a willingness to take risks that, given the fierce competition in this industry, companies are generally unwilling to accept. Future development Triggered by the climate debate, several developments and research projects have now been launched to significantly reduce the climate-damaging CO2 emissions of glass production. Among other things, an initiative has been launched in Europe to establish a new type of glass melting furnace. Various European glass manufacturers are working on this project together with technology suppliers, with the aim of realizing a corresponding plant on an industrial scale. It was intended to put the plant into operation in 2022 with a melting capacity of 350 tons per day. This so-called hybrid furnace will be operated with 80% electricity generated from renewable energy sources and is expected to enable a reduction of CO2 emissions by 50%. The industry consortium, a community of interest of 19 European container glass companies, applied for financial support from the EU Innovation Fund. However, it was not awarded a grant, despite the project achieving very high evaluation scores in terms of innovation, sectoral approach and scalability. Although the companies involved volunteered to contribute financially to the project, the EU grant would still have represented a significant contribution to the additional CAPEX and OPEX compared to a conventional furnace. Without the EU grant, the project could not be pursued as initially planned, and the industry is now evaluating how to proceed with its decarbonisation efforts. At the end of 2024, a project furnace was built and went into commissioning. Furthermore, there are research projects to heat glass melting furnaces alternatively with so-called green hydrogen. The combustion of hydrogen only produces water vapor. 
However, the water vapor has an influence on the melting process and the glass composition, as well as on the properties of the glass produced. How this influence can be controlled and corrected is the subject of further investigation. A large-scale industrial trial was successfully conducted in August 2021. Hydrogen, however, has a considerably lower calorific value per cubic meter than natural gas, only about one-third as much. This results in new requirements for gas pipelines intended to transport hydrogen. The existing natural gas network is not readily designed for this. To provide the same amount of energy, the pipelines must either be approx. 70% larger or designed for a higher pressure, or a flow rate three times higher must be realized at the same pressure. The latter measure could be applied in existing pipeline networks. However, this can lead to increased vibrations, mainly caused by the existing installations in the pipeline, which promote the formation of cracks and can thus trigger major damage events in the long term. It is known that under certain conditions, 100% hydrogen will embrittle the material at such points, accelerating deeper crack formation. However, an initial partial admixture of hydrogen to the natural gas is possible and has already been implemented. At present, this is the subject of broad discussion among scientists as well as pipe suppliers. The alternative use of biofuel was also tested in a large-scale industrial trial; a CO2 reduction of 80% was achieved. However, the required gas quantities were not fully available for a longer period of time, so the large-scale test was limited to four days. References Glass production Furnaces Glass
Glass melting furnace
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,449
[ "Glass engineering and science", "Furnaces", "Glass", "Glass production", "Unsolved problems in physics", "Combustion engineering", "Homogeneous chemical mixtures", "Amorphous solids" ]
39,906,398
https://en.wikipedia.org/wiki/Slice%20theorem%20%28differential%20geometry%29
In differential geometry, the slice theorem states: given a manifold M on which a Lie group G acts by diffeomorphisms, for any x in M, the map G/G_x → M, [g] ↦ g·x (where G_x denotes the stabilizer of x) extends to an invariant neighborhood of G/G_x (viewed as a zero section) in the associated bundle G ×_{G_x} T_xM/T_x(G·x), so that it defines an equivariant diffeomorphism from the neighborhood to its image, which contains the orbit of x. The important application of the theorem is a proof of the fact that the quotient M/G admits a manifold structure when G is compact and the action is free. In algebraic geometry, there is an analog of the slice theorem; it is called Luna's slice theorem. Idea of proof when G is compact Since G is compact, there exists an invariant metric; i.e., G acts by isometries. One then adapts the usual proof of the existence of a tubular neighborhood using this metric. See also Luna's slice theorem, an analogous result for reductive algebraic group actions on algebraic varieties References External links On a proof of the existence of tubular neighborhoods Theorems in differential geometry
Slice theorem (differential geometry)
[ "Mathematics" ]
212
[ "Theorems in differential geometry", "Theorems in geometry" ]
39,914,061
https://en.wikipedia.org/wiki/Direct%20acoustic%20cochlear%20implant
A direct acoustic cochlear implant - also DACI - is an acoustic implant which converts sound into mechanical vibrations that directly stimulate the perilymph inside the cochlea. The hearing function of the external and middle ear is taken over by a small motor in the implant, which directly stimulates the cochlea. With a DACI, people with no or almost no residual hearing but with a still functioning inner ear can again perceive speech, sounds and music. DACI is an official product category, as indicated by the nomenclature of the GMDN. A DACI tries to provide an answer for people with hearing problems for which no solution exists today. People with some problems at the level of the cochlea can be helped with a hearing aid. A hearing aid picks up the incoming sound with a microphone and delivers it, amplified, through the natural pathway of the ear. At higher amplification levels, this may cause problems with feedback and distortion. A hearing aid also simply provides more loudness, not more resolution; users often describe this as, "it all sounds louder, but I understand nothing more than before." Once a hearing aid no longer offers a solution, one can switch to a cochlear implant. A cochlear implant captures the sound and sends it electrically, through the cochlea, to the auditory nerve. In this way, completely deaf patients can perceive sounds again. However, when there are problems not only at the level of the cochlea but also in the middle ear (so-called conductive losses), there are more efficient ways to get sound to the partially functioning cochlea. The most obvious solution is a BAHA, which brings the sound to the cochlea via bone conduction. However, for patients who have problems both in the cochlea and in the middle ear (i.e. patients with mixed losses), none of the above solutions is ideal. To this end, the direct acoustic cochlear implant was developed. A DACI brings the sound directly to the cochlea and provides the most natural way of sound amplification. History The first DACI was implanted in Hannover. In Belgium, the first DACI was implanted at the Catholic University Hospital of Leuven. In the Netherlands, the Radboud clinic in Nijmegen was the first, while in Poland it was first implanted at the Institute of Physiology and Pathology of Hearing in Warsaw. See also BAHA Hearing Cochlear implant References External links DACI in the Netherlands DACI in Belgium DACI in Poland Artificial organs Otology Neuroprosthetics Implants (medicine) Audiology Bionics Otorhinolaryngology
Direct acoustic cochlear implant
[ "Engineering", "Biology" ]
554
[ "Bionics", "Artificial organs" ]
42,713,242
https://en.wikipedia.org/wiki/Pentamethylcyclopentadienyl%20ruthenium%20dichloride%20dimer
Pentamethylcyclopentadienyl ruthenium dichloride is an organoruthenium compound with the formula [(C5(CH3)5)RuCl2]2, commonly abbreviated [Cp*RuCl2]2. This brown paramagnetic solid is a reagent in organometallic chemistry. It is an unusual example of a compound that exists as isomers that differ in the intermetallic separation, a difference that is manifested in a number of physical properties. Preparation, structure, reactions The compound has C2h symmetry, with each metal atom having pseudo-octahedral geometry. In the crystal structure, two isomers are observed in the unit cell, one with a 2.93 Å ruthenium–ruthenium bond and the other with a long internuclear distance of 3.75 Å. The former isomer is diamagnetic, and the latter is paramagnetic. It is prepared by the reaction of hydrated ruthenium trichloride with pentamethylcyclopentadiene. 2 Cp*H + 2 RuCl3·3H2O → [Cp*RuCl2]2 + 2 HCl + 6 H2O The reaction is accompanied by formation of decamethylruthenocene. Pentamethylcyclopentadienyl ruthenium dichloride can be reduced to the diamagnetic tetramer of Ru(II): 2 [Cp*RuCl2]2 + 2 Zn → [Cp*RuCl]4 + 2 ZnCl2 Methoxide can also be used to produce a related diruthenium(II) derivative, which is also diamagnetic: [Cp*RuCl2]2 + 3 NaOCH3 → [Cp*Ru(OCH3)]2 + 3 NaCl + CH2O + HCl Treating the tetramer with 1,5-cyclooctadiene in ethereal solvent gives the mononuclear complex chloro(1,5-cyclooctadiene)(pentamethylcyclopentadienyl)ruthenium(II). 0.25 [Cp*RuCl]4 + 1,5-cyclooctadiene → Cp*RuCl(1,5-cyclooctadiene) Compounds like Cp*RuCl(1,5-cyclooctadiene), the tetramer [Cp*RuCl]4, and related diamagnetic Cp*Ru(II) complexes have been investigated as hydrogenation catalysts. See also (Cymene)ruthenium dichloride dimer - [(cymene)RuCl2]2 Pentamethylcyclopentadienyl iridium dichloride dimer - [Cp*IrCl2]2 References Metallocenes Organoruthenium compounds Dimers (chemistry) Pentamethylcyclopentadienyl complexes Chloro complexes Ruthenium(III) compounds
Pentamethylcyclopentadienyl ruthenium dichloride dimer
[ "Chemistry", "Materials_science" ]
646
[ "Dimers (chemistry)", "Polymer chemistry" ]
42,718,949
https://en.wikipedia.org/wiki/Multi-messenger%20astronomy
Multi-messenger astronomy is the coordinated observation and interpretation of multiple signals received from the same astronomical event. Many types of cosmological events involve complex interactions between a variety of astrophysical processes, each of which may independently emit signals of a characteristic "messenger" type: electromagnetic radiation (including infrared, visible light and X-rays), gravitational waves, neutrinos, and cosmic rays. When received on Earth, identifying that disparate observations were generated by the same source can allow for improved reconstruction or a better understanding of the event, and reveals more information about the source. The main multi-messenger sources outside the heliosphere are: compact binary pairs (black holes and neutron stars), supernovae, irregular neutron stars, gamma-ray bursts, active galactic nuclei, and relativistic jets. The table below lists several types of events and expected messengers. Detection from one messenger and non-detection from a different messenger can also be informative. Lack of any electromagnetic counterpart, for example, could be evidence in support of the remnant being a black hole. Networks The Supernova Early Warning System (SNEWS), established in 1999 at Brookhaven National Laboratory and automated since 2005, combines multiple neutrino detectors to generate supernova alerts. (See also neutrino astronomy). The Astrophysical Multimessenger Observatory Network (AMON), created in 2013, is a broader and more ambitious project to facilitate the sharing of preliminary observations and to encourage the search for "sub-threshold" events which are not perceptible to any single instrument. It is based at Pennsylvania State University. Milestones 1940s: Some cosmic rays are identified as forming in solar flares. 1987: Supernova SN 1987A emitted neutrinos that were detected at the Kamiokande-II, IMB and Baksan neutrino observatories, a couple of hours before the supernova light was detected with optical telescopes. August 2017: A neutron star collision in the galaxy NGC 4993 produced the gravitational wave signal GW170817, which was observed by the LIGO/Virgo collaboration. After 1.7 seconds, it was observed as the gamma ray burst GRB 170817A by the Fermi Gamma-ray Space Telescope and INTEGRAL, and its optical counterpart SSS17a was detected 11 hours later at the Las Campanas Observatory, then by the Hubble Space Telescope and the Dark Energy Camera. Ultraviolet observations by the Neil Gehrels Swift Observatory, X-ray observations by the Chandra X-ray Observatory and radio observations by the Karl G. Jansky Very Large Array complemented the detection. This was the first gravitational wave event observed with an electromagnetic counterpart, thereby marking a significant breakthrough for multi-messenger astronomy. Non-observation of neutrinos was attributed to the jets being strongly off-axis. In October 2020, astronomers reported lingering X-ray emission from GW170817/GRB 170817A/SSS17a. September 2017 (announced July 2018): On September 22, the extremely-high-energy (about 290 TeV) neutrino event IceCube-170922A was recorded by the IceCube Collaboration, which sent out an alert with coordinates for the possible source. The detection of gamma rays above 100 MeV by the Fermi-LAT Collaboration and between 100 GeV and 400 GeV by the MAGIC Collaboration from the blazar TXS 0506+056 (reported September 28 and October 4, respectively) was deemed positionally consistent with the neutrino signal. 
The signals can be explained by ultra-high-energy protons accelerated in blazar jets, producing neutral pions (decaying into gamma rays) and charged pions (decaying into neutrinos). This was the first time that a neutrino detector had been used to locate an object in space, and the first time that a source of cosmic rays had been identified. October 2019 (announced February 2021): On October 1, a high-energy neutrino was detected at IceCube, and follow-up measurements in visible light, ultraviolet, X-rays and radio waves identified the tidal disruption event AT2019dsg as a possible source. November 2019 (announced June 2022): A second high-energy neutrino detected by IceCube was associated with the tidal disruption event AT2019fdr. June 2023: Astronomers led by Naoko Kurahashi Neilson used a new cascade neutrino technique to detect, for the first time, the release of neutrinos from the galactic plane of the Milky Way galaxy, creating the first neutrino-based galactic map. References External links AMON website Astronomy Observational astronomy Astrophysics Astronomical sub-disciplines
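The practical core of multi-messenger follow-up is a coincidence search: an alert from one instrument (for example a neutrino event with a timestamp and a sky position) is compared against candidates from another messenger within a time window and an angular error radius. The sketch below is a minimal illustration of that idea in Python; the event values, time window and error radii are invented for the example and are not taken from any of the detections described above, and real pipelines such as AMON use far more sophisticated statistical tests.

```python
from dataclasses import dataclass
from math import radians, degrees, sin, cos, acos

@dataclass
class Event:
    name: str
    t: float      # arrival time in seconds
    ra: float     # right ascension, degrees
    dec: float    # declination, degrees
    err: float    # positional uncertainty (radius), degrees

def angular_separation(a: Event, b: Event) -> float:
    """Great-circle separation between two sky positions, in degrees."""
    ra1, dec1, ra2, dec2 = map(radians, (a.ra, a.dec, b.ra, b.dec))
    cos_sep = sin(dec1) * sin(dec2) + cos(dec1) * cos(dec2) * cos(ra1 - ra2)
    return degrees(acos(max(-1.0, min(1.0, cos_sep))))

def coincident(a: Event, b: Event, time_window_s: float = 1000.0) -> bool:
    """Crude temporal + spatial coincidence test (thresholds are illustrative)."""
    close_in_time = abs(a.t - b.t) <= time_window_s
    close_on_sky = angular_separation(a, b) <= a.err + b.err
    return close_in_time and close_on_sky

# Invented example values, purely to exercise the functions.
neutrino = Event("nu-alert", t=1_000_000.0, ra=77.4, dec=5.7, err=1.0)
flare = Event("gamma-flare", t=1_000_350.0, ra=77.2, dec=5.8, err=0.2)
print(coincident(neutrino, flare))   # True for these made-up numbers
```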
Multi-messenger astronomy
[ "Physics", "Astronomy" ]
975
[ "nan", "Astronomical sub-disciplines", "Observational astronomy", "Astrophysics" ]
42,719,835
https://en.wikipedia.org/wiki/Uruz%20Project
The Uruz Project had the goal of breeding back the extinct aurochs (Bos p. primigenius). Uruz is the old Germanic word for aurochs. The Uruz Project was initiated in 2013 by the True Nature Foundation and presented at TEDx DeExtinction, a day-long conference organised by the Long Now Foundation with the support of TED and in partnership with National Geographic Society, to showcase the prospects of bringing extinct species back to life. The de-extinction movement itself is spearheaded by the Long Now Foundation. Technically, Bos primigenius is not wholly extinct. The wild subspecies B. p. primigenius, indicus and africanus are, but the species is still represented by domestic cattle. Most, or all, of the relevant Aurochs characteristics, and therefore the underlying DNA, needed to "breed back" an aurochs-like cattle type can be found in B. p. taurus. Domestic cattle originated in the middle east, and there also has been introgression of European aurochs into domestic cattle in ancient times. The Uruz Project's goal is to collect all relevant data and reunite scattered aurochs characteristics, and thus DNA, in one animal. Background Ecological restoration projects cannot be complete without bringing back those key elements that help shape and reshape wild landscapes. The European aurochs (Bos p. primigenius) was a large and long-horned wild bovine herbivore that existed from the most western tip of Europe until Siberia in present-day Russia. Aurochs have played a major role in human history. They are often depicted in rock-art, including the famous, well-conserved cave paintings made by Cro-Magnon people in the Lascaux Caves, estimated to be 17,300 years old. Aurochs and other large animals portrayed in Paleolithic cave art were often hunted for food. Hunting and habitat loss caused by humans, including agricultural land conversion, caused the aurochs to go extinct in 1627, when the last individual, a female, died in Poland’s Jaktorów Forest. The aurochs is one of the keystone species that is missing in Europe. Their grazing and browsing patterns, trampling of the soil and faeces had a profound impact on the vegetation and landscapes it inhabited. Grazing results in a greater variety of plant species, structures and ecological niches in a landscape that benefit both biodiversity and production. Megaherbivores like the aurochs also controlled vegetation development. Breeding strategy The Uruz Project aims to breed an aurochs-like breed of cattle from a limited number of carefully selected primitive cattle breeds with known Aurochs characteristics. The project uses Sayaguesa cattle, Maremmana primitiva or Hungarian grey cattle, Chianina and Watusi. The genome of the Aurochs has been completely reconstructed and serves as the baseline for the reconstruction of the Aurochs. See also Breeding back De-extinction Breeding of aurochs-like cattle Tauros Programme References External links Aurochs at the True Nature Foundation Long Now Foundation Genetic engineering Nature conservation organisations based in Europe Animal breeding Mammal conservation
Uruz Project
[ "Chemistry", "Engineering", "Biology" ]
657
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
60,600,619
https://en.wikipedia.org/wiki/Reflective%20transformative%20design
The reflective transformative design process (RTDP) is a design method developed at the Eindhoven University of Technology department of Industrial Design. It proposes a more dynamic and open design process than the more classical double diamond design process: the focus is on transforming society through design by continuously reflecting on the designer's vision and on the impact that the design will have. The sequence of the design path is not fixed; the focus is on reflection on the process and the result after each activity. It has overlap with the Lean startup method, in that it describes a cyclic, continuous process. It also has overlap with action research, where research is done by actively participating in the research; in the case of RTDP, the designer's artifact participates in the context. Elements of Reflective transformative design The main elements of RTDP are: Ideating/integrating/realizing, which contains methods like prototyping, Wizard of Oz and sketching; Envisioning/transforming: storytelling; Analyzing/abstracting: mapping, systems theory; Validating quality: this category is related to user testing or making a minimum viable product; Sensing/perceiving/doing: workmanship, hacking. The process is defined as a path through these activities, where the designer reflects on whether the design process is leading towards the envisioned goal. References Industrial design
Reflective transformative design
[ "Engineering" ]
263
[ "Industrial design", "Design stubs", "Design", "Design engineering" ]
57,525,937
https://en.wikipedia.org/wiki/Orbital%20angular%20momentum%20of%20free%20electrons
Electrons in free space can carry quantized orbital angular momentum (OAM) projected along the direction of propagation. This orbital angular momentum corresponds to helical wavefronts, or, equivalently, a phase proportional to the azimuthal angle. Electron beams with quantized orbital angular momentum are also called electron vortex beams. Theory An electron in free space travelling at non-relativistic speeds follows the Schrödinger equation for a free particle, that is $i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t)$, where $\hbar$ is the reduced Planck constant, $\Psi$ is the single-electron wave function, $m$ its mass, $\mathbf{r}$ the position vector, and $t$ is time. This equation is a type of wave equation, and when written in the Cartesian coordinate system ($x$, $y$, $z$), the solutions are given by a linear combination of plane waves, in the form of $\Psi_{\mathbf{p}}(\mathbf{r},t) \propto e^{i(\mathbf{p}\cdot\mathbf{r} - E t)/\hbar}$, where $\mathbf{p}$ is the linear momentum and $E$ is the electron energy, given by the usual dispersion relation $E = p^2/2m$. By measuring the momentum of the electron, its wave function must collapse and give a particular value. If the energy of the electron beam is selected beforehand, the total momentum (not its directional components) of the electrons is fixed to a certain degree of precision. When the Schrödinger equation is written in the cylindrical coordinate system ($\rho$, $\phi$, $z$), the solutions are no longer plane waves, but instead are given by Bessel beams, solutions that are a linear combination of $\Psi_{p_z,p_\rho,\ell}(\rho,\phi,z,t) \propto e^{i(p_z z - E t)/\hbar}\, J_\ell\!\left(\frac{p_\rho \rho}{\hbar}\right) e^{i\ell\phi}$, that is, the product of three types of functions: a plane wave with momentum $p_z$ in the $z$-direction, a radial component written as a Bessel function of the first kind $J_\ell$, where $p_\rho$ is the linear momentum in the radial direction, and finally an azimuthal component written as $e^{i\ell\phi}$, where the integer $\ell$ (sometimes written $m$ or $m_z$) is the magnetic quantum number related to the angular momentum $L_z = \hbar\ell$ in the $z$-direction. Thus, the dispersion relation reads $E = (p_z^2 + p_\rho^2)/2m$. By azimuthal symmetry, the wave function has the property that $\Psi(\phi) = \Psi(\phi + 2\pi)$, so $\ell$ is necessarily an integer, thus $L_z$ is quantized. If a measurement of $L_z$ is performed on an electron with selected energy, as $E$ does not depend on $\ell$, it can give any integer value of $\ell$ (in units of $\hbar$). It is possible to experimentally prepare states with non-zero $\ell$ by adding an azimuthal phase to an initial state with $\ell = 0$; experimental techniques designed to measure the orbital angular momentum of a single electron are under development. Simultaneous measurement of electron energy and orbital angular momentum is allowed because the Hamiltonian commutes with the angular momentum operator $\hat{L}_z$ related to $\ell$. Note that the equations above follow for any free quantum particle with mass, not necessarily electrons. The quantization of $L_z$ can also be shown in the spherical coordinate system, where the wave function reduces to a product of spherical Bessel functions and spherical harmonics. Preparation There are a variety of methods to prepare an electron in an orbital angular momentum state. All methods involve an interaction with an optical element such that the electron acquires an azimuthal phase. The optical element can be material, magnetostatic, or electrostatic. It is possible to either directly imprint an azimuthal phase, or to imprint an azimuthal phase with a holographic diffraction grating, where grating pattern is defined by the interference of the azimuthal phase and a planar or spherical carrier wave. Applications Electron vortex beams have a variety of proposed and demonstrated applications, including for mapping magnetization, studying chiral molecules and chiral plasmon resonances, and identification of crystal chirality.
Measurement Interferometric methods borrowed from light optics also work to determine the orbital angular momentum of free electrons in pure states. Interference with a planar reference wave, diffractive filtering and self-interference can serve to characterize a prepared electron orbital angular momentum state. In order to measure the orbital angular momentum of a superposition or of the mixed state that results from interaction with an atom or material, a non-interferometric method is necessary. Wavefront flattening, transformation of an orbital angular momentum state into a planar wave, or cylindrically symmetric Stern-Gerlach-like measurement is necessary to measure the orbital angular momentum mixed or superposition state. See also Matter wave References Angular momentum Electron beam Electron microscopy Orbital angular momentum of waves
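As a small self-contained check of the quantization argument above, the sketch below evaluates the expectation value of the angular momentum operator $\hat L_z = -i\hbar\,\partial/\partial\phi$ for a pure azimuthal phase $e^{i\ell\phi}$ sampled on a discretized ring; in units of $\hbar$ the result comes out as the integer $\ell$. The grid size and the choice $\ell = 3$ are arbitrary illustration values, not tied to any experiment.

```python
import numpy as np

# Numerical check (in units of hbar = 1) that an azimuthal phase factor
# exp(i*l*phi) carries orbital angular momentum l along z.

l = 3                                    # topological charge / OAM quantum number
n = 4096
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
psi = np.exp(1j * l * phi)               # azimuthal part of a vortex wave function

# L_z = -i d/dphi (hbar = 1); use a periodic central difference for d/dphi.
dphi = phi[1] - phi[0]
dpsi = (np.roll(psi, -1) - np.roll(psi, 1)) / (2.0 * dphi)
Lz_expectation = np.sum(np.conj(psi) * (-1j) * dpsi).real / np.sum(np.abs(psi) ** 2)

print(Lz_expectation)                    # ~ 3.0, i.e. <L_z> = l * hbar
```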
Orbital angular momentum of free electrons
[ "Physics", "Chemistry", "Mathematics" ]
835
[ "Electron", "Physical phenomena", "Electron microscopy", "Physical quantities", "Electron beam", "Quantity", "Angular momentum of light", "Waves", "Orbital angular momentum of waves", "Microscopy", "Angular momentum", "Momentum", "Moment (physics)" ]
57,528,170
https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Lorentz%20pendulum
The Rayleigh–Lorentz pendulum (or Lorentz pendulum) is a simple pendulum, but subjected to a slowly varying frequency due to an external action (the frequency is varied by varying the pendulum length); it is named after Lord Rayleigh and Hendrik Lorentz. This problem formed the basis for the concept of adiabatic invariants in mechanics. On account of the slow variation of frequency, it is shown that the ratio of average energy to frequency is constant. History The pendulum problem was first formulated by Lord Rayleigh in 1902, although some mathematical aspects had been discussed before by Léon Lecornu in 1895 and Charles Bossut in 1778. Unaware of Rayleigh's work, at the first Solvay conference in 1911, Hendrik Lorentz proposed a question, How does a simple pendulum behave when the length of the suspending thread is gradually shortened?, in order to clarify the quantum theory at that time. To that Albert Einstein responded the next day by saying that both energy and frequency of the quantum pendulum change such that their ratio is constant, so that the pendulum is in the same quantum state as the initial state. These two separate works formed the basis for the concept of adiabatic invariant, which found applications in various fields and in the old quantum theory. In 1958, Subrahmanyan Chandrasekhar took interest in the problem and studied it, renewing interest in the problem, which was subsequently studied by many other researchers such as John Edensor Littlewood. Mathematical description The equation of the simple harmonic motion with frequency $\omega$ for the displacement $x(t)$ is given by $\ddot{x} + \omega^2 x = 0$. If the frequency is constant, the solution is simply given by $x = A\cos(\omega t + \varphi)$. But if the frequency is allowed to vary slowly with time $t$, or precisely, if the characteristic time scale of the frequency variation is much larger than the period of oscillation, i.e., $\left|\frac{\mathrm{d}\omega}{\mathrm{d}t}\right| \ll \omega^2$, then it can be shown that $\frac{\bar{E}}{\omega} \approx \text{constant}$, where $\bar{E}$ is the average energy averaged over an oscillation. Since the frequency is changing with time due to external action, conservation of energy no longer holds and the energy over a single oscillation is not constant. During an oscillation, the frequency changes (however slowly), and so does the energy. Therefore, to describe the system, one defines the average energy per unit mass for a given potential $V(x)$ as follows: $\bar{E} = \frac{\oint \left(\frac{1}{2}\dot{x}^2 + V(x)\right)\,\mathrm{d}t}{\oint \mathrm{d}t}$, where the closed integral denotes that it is taken over a complete oscillation. Defined this way, it can be seen that the averaging is done by weighting each element of the orbit by the fraction of time that the pendulum spends in that element. For the simple harmonic oscillator, it reduces to $\bar{E} = \frac{1}{2}\,\omega^2(t)\,A^2(t)$, where both the amplitude $A$ and frequency $\omega$ are now functions of time. References Classical mechanics
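The adiabatic invariance of $\bar{E}/\omega$ is easy to check numerically: integrate $\ddot{x} + \omega(t)^2 x = 0$ with a frequency that drifts slowly and compare the energy-to-frequency ratio at the start and at the end. The sketch below is a minimal illustration; the drift rate, time span and initial conditions are arbitrary demonstration values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical illustration of the adiabatic invariance of E/omega for
# x'' + omega(t)^2 x = 0 with a slowly varying frequency.

def omega(t):
    return 1.0 + 1.0e-4 * t          # slow linear drift: omega goes from 1.0 to 1.5

def rhs(t, y):
    x, v = y
    return [v, -omega(t) ** 2 * x]

sol = solve_ivp(rhs, (0.0, 5000.0), [1.0, 0.0], max_step=0.1, rtol=1e-9, atol=1e-9)

x, v = sol.y
E = 0.5 * v ** 2 + 0.5 * omega(sol.t) ** 2 * x ** 2     # energy per unit mass
ratio = E / omega(sol.t)

print(ratio[0], ratio[-1])   # nearly equal, even though E itself grows by about 50%
```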
Rayleigh–Lorentz pendulum
[ "Physics" ]
535
[ "Mechanics", "Classical mechanics" ]
57,530,282
https://en.wikipedia.org/wiki/Tennis%20ball%20theorem
In geometry, the tennis ball theorem states that any smooth curve on the surface of a sphere that divides the sphere into two equal-area subsets without touching or crossing itself must have at least four inflection points, points at which the curve does not consistently bend to only one side of its tangent line. The tennis ball theorem was first published under this name by Vladimir Arnold in 1994, and is often attributed to Arnold, but a closely related result appears earlier in a 1968 paper by Beniamino Segre, and the tennis ball theorem itself is a special case of a theorem in a 1977 paper by Joel L. Weiner. The name of the theorem comes from the standard shape of a tennis ball, whose seam forms a curve that meets the conditions of the theorem; the same kind of curve is also used for the seams on baseballs. The tennis ball theorem can be generalized to any curve that is not contained in a closed hemisphere. A centrally symmetric curve on the sphere must have at least six inflection points. The theorem is analogous to the four-vertex theorem according to which any smooth closed plane curve has at least four points of extreme curvature. Statement Precisely, an inflection point of a doubly continuously differentiable ($C^2$) curve on the surface of a sphere is a point $p$ with the following property: let $I$ be the connected component containing $p$ of the intersection of the curve with its tangent great circle at $p$. (For most curves $I$ will just be the point $p$ itself, but it could also be an arc of the great circle.) Then, for $p$ to be an inflection point, every neighborhood of $I$ must contain points of the curve that belong to both of the hemispheres separated by this great circle. The theorem states that every curve that partitions the sphere into two equal-area components has at least four inflection points in this sense. Examples The tennis ball and baseball seams can be modeled mathematically by a curve made of four semicircular arcs, with exactly four inflection points where pairs of these arcs meet. A great circle also bisects the sphere's surface, and has infinitely many inflection points, one at each point of the curve. However, the condition that the curve divide the sphere's surface area equally is a necessary part of the theorem. Other curves that do not divide the area equally, such as circles that are not great circles, may have no inflection points at all. Proof by curve shortening One proof of the tennis ball theorem uses the curve-shortening flow, a process for continuously moving the points of the curve towards their local centers of curvature. Applying this flow to the given curve can be shown to preserve the smoothness and area-bisecting property of the curve. Additionally, as the curve flows, its number of inflection points never increases. This flow eventually causes the curve to transform into a great circle, and the convergence to this circle can be approximated by a Fourier series. Because curve-shortening does not change any other great circle, the first term in this series is zero, and combining this with a theorem of Sturm on the number of zeros of Fourier series shows that, as the curve nears this great circle, it has at least four inflection points. Therefore, the original curve also has at least four inflection points. Related theorems A generalization of the tennis ball theorem applies to any simple smooth curve on the sphere that is not contained in a closed hemisphere. As in the original tennis ball theorem, such curves must have at least four inflection points.
If a curve on the sphere is centrally symmetric, it must have at least six inflection points. A closely related theorem also concerns simple closed spherical curves, on spheres embedded into three-dimensional space. If, for such a curve, $o$ is any point of the three-dimensional convex hull of a smooth curve on the sphere that is not a vertex of the curve, then at least four points of the curve have osculating planes passing through $o$. In particular, for a curve not contained in a hemisphere, this theorem can be applied with $o$ at the center of the sphere. Every inflection point of a spherical curve has an osculating plane that passes through the center of the sphere, but this might also be true of some other points. This theorem is analogous to the four-vertex theorem, that every smooth simple closed curve in the plane has four vertices (extreme points of curvature). It is also analogous to a theorem of August Ferdinand Möbius that every non-contractible smooth curve in the projective plane has at least three inflection points. References External links Theorems in differential geometry Theorems about curves Spherical geometry Spherical curves
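The notion of an inflection point can be made concrete numerically: for a curve $\gamma(t)$ on the unit sphere, the sign of the scalar triple product $\det(\gamma, \dot\gamma, \ddot\gamma)$ tracks which side of the tangent great circle the curve bends towards, so inflection points show up as sign changes of this quantity. The sketch below applies this to a smooth seam-like closed curve, a great circle perturbed by a $\sin 2t$ term; that particular curve and its amplitude are illustration choices rather than the exact tennis-ball seam, but the curve is mapped to itself by a half-turn about the x-axis that swaps the two complementary regions, so it bisects the sphere's area. The count comes out as four, as the theorem requires.

```python
import numpy as np

# Count geodesic-curvature sign changes (inflection points) of a closed
# spherical curve.  For a unit-sphere curve g(t), the sign of det(g, g', g'')
# changes exactly where the curve stops bending to one side of its tangent
# great circle.

def seam(t, eps=0.3):
    """A smooth, seam-like closed curve on the unit sphere (illustrative)."""
    p = np.stack([np.cos(t), np.sin(t), eps * np.sin(2.0 * t)])
    return p / np.linalg.norm(p, axis=0)

n = 20000
dt = 2.0 * np.pi / n
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False) + 0.1   # offset avoids exact zeros
g = seam(t)

# Periodic central differences for the first and second derivatives.
d1 = (np.roll(g, -1, axis=1) - np.roll(g, 1, axis=1)) / (2.0 * dt)
d2 = (np.roll(g, -1, axis=1) - 2.0 * g + np.roll(g, 1, axis=1)) / dt ** 2

# Scalar triple product det(g, g', g'') at every sample.
triple = np.einsum('it,it->t', g, np.cross(d1, d2, axis=0))

sign_changes = int(np.sum(triple * np.roll(triple, 1) < 0))
print(sign_changes)   # 4 for this curve
```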
Tennis ball theorem
[ "Mathematics" ]
958
[ "Theorems in differential geometry", "Theorems about curves", "Theorems in geometry" ]
57,531,234
https://en.wikipedia.org/wiki/Goal%20structuring%20notation
Goal structuring notation (GSN) is a graphical diagram notation used to show the elements of an argument and the relationships between those elements in a clearer format than plain text. Often used in safety engineering, GSN was developed at the University of York during the 1990s to present safety cases. The notation gained popularity as a method of presenting safety assurances but can be applied to any type of argument and was standardized in 2011. GSN has been used to track safety assurances in industries such as clinical care aviation, automotive, rail, traffic management, and nuclear power and has been used in other contexts such as security cases, patent claims, debate strategy, and legal arguments. History The goal structuring notation was first developed at the University of York during the ASAM-II (A Safety Argument Manager II) project in the early 1990s, to overcome perceived issues in expressing safety arguments using the Toulmin method. The notation was further developed and expanded by Tim Kelly, whose PhD thesis contributed systematic methods for constructing and maintaining GSN diagrams, and the concept of ′safety case patterns′ to promote the re-use of argument fragments. During the late 1990s and early 2000s, the GSN methodology was taught in the Safety Critical Systems Engineering course at York, and various extensions to the GSN methodology were proposed by Kelly and other members of the university's High Integrity Systems Engineering group, led by Prof John McDermid. By 2007, goal structuring notation was sufficiently popular that a group of industry and academic users came together to standardise the notation and its surrounding methodology, resulting in the publication of the GSN Community Standard in 2011. From 2014, maintenance of the GSN standard moved under the auspices of the SCSC's Assurance Case Working Group. As at 2022, the standard has reached Version 3. Criticism Charles Haddon-Cave in his review of the Nimrod accident commented that the top goal of a GSN argument can drive a conclusion that is already assumed, such as that a platform is deemed acceptably safe. This could lead to the safety case becoming a "self-fulfilling prophesy", giving a "warm sense of over-confidence" rather than highlighting uncertainties, gaps in knowledge or areas where the mitigation argument was not straightforward. This had already been recognised by Habli and Kelly, who warned that a GSN diagram was just a depiction, not the safety case itself, and likened it to Magritte's painting The Treachery of Images. Haddon-Cave also criticised the practice of consultants producing "outsize GSN charts" that could be yards long and became an end in themselves rather than an aid to structured thinking. See also Design rationale References Safety Diagrams Notation
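In a GSN diagram a top-level claim (a goal) is decomposed, via argument strategies, into sub-goals that are ultimately supported by evidence (solutions), with context, assumptions and justifications attached where needed; those element names come from the published GSN standard rather than from the text above. The fragment below is a minimal, hypothetical sketch of such a structure as plain data, intended only to illustrate the kinds of elements and "supported by" relationships the notation records; it is not an implementation of the standard, and the claims and evidence are invented.

```python
from dataclasses import dataclass, field

# Minimal, hypothetical model of a GSN argument fragment.

@dataclass
class Element:
    ident: str
    text: str
    children: list = field(default_factory=list)   # "supported by" links

def show(e: Element, indent: int = 0) -> None:
    print("  " * indent + f"[{e.ident}] {e.text}")
    for child in e.children:
        show(child, indent + 1)

argument = Element("G1", "Goal: the braking controller is acceptably safe in its operating context", [
    Element("S1", "Strategy: argue over each identified hazard", [
        Element("G2", "Goal: hazard H1 (late braking) is mitigated", [
            Element("Sn1", "Solution: test report TR-042 covering H1 scenarios"),
        ]),
        Element("G3", "Goal: hazard H2 (spurious braking) is mitigated", [
            Element("Sn2", "Solution: fault-tree analysis FTA-007"),
        ]),
    ]),
])

show(argument)
```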
Goal structuring notation
[ "Mathematics" ]
557
[ "Symbols", "Notation" ]
57,531,279
https://en.wikipedia.org/wiki/John%20F.%20Archard
John Frederick Archard (1918–1989) was a British engineer known for his wear studies. Career Archard went to the Worthing High School for Boys before he entered the University College of Southampton. Afterwards, he served six years in the Royal Air Force (RAF), including at the headquarters of Coastal Command. As a member of the RAF radar staff, he also made a trip to Washington. In 1946, he returned to Southampton for postgraduate research in optics. Starting in 1949 he worked in the surface physics section of the Associated Electrical Industries Research Laboratory, where he investigated the lubrication of heavily loaded contacts. In the 1950s he developed an analytical model used to describe abrasive wear based on the theory of contact of asperities, which became known in the literature as wear equation or Archard equation. Archard was a reader at Leicester University until his retirement in the early 1980s. He ran a successful experimental tribology research program. He was a Fellow of the Physical Society and of the Institute of Physics. In 1989 he received the Mayo D. Hersey Award for his scientific contributions in the field of tribology. Private life Archard lived in Tilehurst, was married and had two sons. See also Archard wear equation References 1918 births 1989 deaths 20th-century British engineers English mechanical engineers Tribologists People educated at Worthing High School
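The wear model referred to here is usually quoted in the following standard form (a general statement of the Archard equation, not taken from this article's text): the volume of material removed by sliding wear is proportional to the normal load and the sliding distance and inversely proportional to the hardness of the softer surface.

```latex
% Archard wear equation (standard form)
% Q = wear volume removed per unit sliding distance
% K = dimensionless wear coefficient, W = normal load, H = hardness of the softer material
Q = K\,\frac{W}{H},
\qquad\text{equivalently}\qquad
V = K\,\frac{W\,s}{H}
% with V the total wear volume and s the total sliding distance
```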
John F. Archard
[ "Materials_science" ]
276
[ "Tribology", "Tribologists" ]
57,533,821
https://en.wikipedia.org/wiki/Bodenstein%20number
The Bodenstein number (abbreviated Bo, named after Max Bodenstein) is a dimensionless parameter in chemical reaction engineering, which describes the ratio of the amount of substance introduced by convection to that introduced by diffusion. Hence, it characterises the backmixing in a system and allows statements about whether and how much volume elements or substances within a chemical reactor mix due to the prevalent currents. It is defined as the ratio of the convection current to the dispersion current. The Bodenstein number is an element of the dispersion model of residence times and is therefore also called the dimensionless dispersion coefficient. Mathematically, two idealized extreme cases exist for the Bodenstein number. These, however, cannot be fully reached in practice: $\mathrm{Bo} \to 0$ corresponds to full backmixing, which is the ideal state to be reached in a continuous stirred-tank reactor. $\mathrm{Bo} \to \infty$ corresponds to no backmixing, but a continuous through flow as in an ideal flow channel. Control of the flow velocity within a reactor makes it possible to adjust the Bodenstein number to a pre-calculated desired value, so that the desired degree of backmixing of the substances in the reactor can be reached. Determination of the Bodenstein number The Bodenstein number is calculated according to $\mathrm{Bo} = \frac{u L}{D_\mathrm{ax}}$, where $u$: flow velocity, $L$: length of the reactor, $D_\mathrm{ax}$: axial dispersion coefficient. It can also be determined experimentally from the distribution of the residence times. Assuming an open system, $\sigma_\theta^2 = \frac{\sigma_t^2}{\tau^2} = \frac{2}{\mathrm{Bo}} + \frac{8}{\mathrm{Bo}^2}$ holds, where $\sigma_\theta^2$: dimensionless variance, $\sigma_t^2$: variance of the mean residence time, $\tau$: hydrodynamic residence time. References Dimensionless numbers of chemistry Dimensionless numbers of fluid mechanics Chemical engineering
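Given a measured residence time distribution, the open-system relation quoted above can be inverted for Bo. The sketch below is a minimal illustration: it computes the mean residence time and the dimensionless variance from a tracer curve and then solves the resulting quadratic for Bo. The synthetic tracer response is invented purely for the example.

```python
import numpy as np

# Estimate the Bodenstein number from tracer residence-time data, using the
# open-system relation  sigma_theta^2 = 2/Bo + 8/Bo^2.

t = np.linspace(0.0, 50.0, 2001)              # time, arbitrary units
dt = t[1] - t[0]
E = t * np.exp(-t / 5.0)                      # un-normalised synthetic tracer response
E = E / (np.sum(E) * dt)                      # normalise so that the integral of E dt is 1

tau = np.sum(t * E) * dt                      # mean (hydrodynamic) residence time
var_t = np.sum((t - tau) ** 2 * E) * dt       # variance of the residence time
sigma_theta2 = var_t / tau ** 2               # dimensionless variance

# Solve 8 x^2 + 2 x - sigma_theta2 = 0 for x = 1/Bo (positive root).
x = (-2.0 + np.sqrt(4.0 + 32.0 * sigma_theta2)) / 16.0
Bo = 1.0 / x
print(tau, sigma_theta2, Bo)
```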
Bodenstein number
[ "Chemistry", "Engineering" ]
325
[ "Chemical engineering", "Dimensionless numbers of chemistry", "nan" ]
54,210,779
https://en.wikipedia.org/wiki/Berman%20flow
In fluid dynamics, Berman flow is a steady flow created inside a rectangular channel with two equally porous walls. The concept is named after the scientist Abraham S. Berman who formulated the problem in 1953. Flow description Consider a rectangular channel of width much longer than the height. Let the distance between the top and bottom wall be $2h$ and choose the coordinates $(x, y)$ such that $y = 0$ lies midway between the two walls, with $y$ pointing perpendicular to the walls. Let both walls be porous with equal velocity $V$. Then the continuity equation and Navier–Stokes equations for incompressible fluid become $\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0$, $u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)$, $u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right)$, with boundary conditions $\partial u/\partial y = 0,\ v = 0$ at $y = 0$ and $u = 0,\ v = V$ at $y = h$. The boundary conditions at the center are due to symmetry. Since the solution is symmetric about the plane $y = 0$, it is enough to describe only half of the flow, say for $0 \leq y \leq h$. If we look for a solution $v$ that is independent of $x$, the continuity equation dictates that the horizontal velocity $u$ can at most be a linear function of $x$. Therefore, Berman introduced the following form: $v = V f(\eta),\quad u = \left(\bar{u}(0) - \frac{V x}{h}\right) f'(\eta),\quad \eta = \frac{y}{h}$, where $\bar{u}(0)$ is the average value (averaged cross-sectionally) of $u$ at $x = 0$, that is to say $\bar{u}(0) = \frac{1}{h}\int_0^h u(0, y)\,\mathrm{d}y$. This constant will be eliminated out of the problem and will have no influence on the solution. Substituting this into the momentum equations leads to $\frac{V}{h}\left(\bar{u}(0) - \frac{Vx}{h}\right)\left(f f'' - f'^2\right) = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \frac{\nu}{h^2}\left(\bar{u}(0) - \frac{Vx}{h}\right) f'''$ and $\frac{V^2}{h}\, f f' = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \frac{\nu V}{h^2}\, f''$. Differentiating the second equation with respect to $x$ gives $\partial^2 p/\partial x\,\partial y = 0$; this can be substituted into the first equation after taking the derivative with respect to $y$, which leads to $f'''' = Re\,(f f''' - f' f'')$, where $Re = Vh/\nu$ is the Reynolds number. Integrating once, we get $f''' = Re\,(f f'' - f'^2) + \beta$ with boundary conditions $f(0) = 0,\ f''(0) = 0,\ f(1) = 1,\ f'(1) = 0$. This third order nonlinear ordinary differential equation requires three boundary conditions, and the fourth boundary condition is used to determine the constant $\beta$; the equation is found to possess multiple solutions. Solving the equation for large Reynolds number is not a trivial computation. Limiting solutions In the limit $Re \to 0$, the solution can be written as $f(\eta) = \tfrac{1}{2}\,\eta\,(3 - \eta^2)$. In the limit $Re \to \infty$, the leading-order solution is given by $f(\eta) = \sin\!\left(\frac{\pi \eta}{2}\right)$. The above solution satisfies all the necessary boundary conditions even though the Reynolds number is infinite (see also Taylor–Culick flow). Axisymmetric case The corresponding problem in porous pipe flows was addressed by S. W. Yuan and A. Finkelstein in 1955. See also Taylor–Culick flow References Flow regimes Fluid dynamics
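The reduced boundary-value problem is straightforward to solve numerically at moderate Reynolds number. Below is a minimal sketch using SciPy's solve_bvp, treating the constant $\beta$ as an unknown parameter and using the $Re \to 0$ solution as the initial guess; the chosen Reynolds number is an arbitrary illustration value, and the sign conventions follow the form of the equation reconstructed above.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Solve the Berman problem  f''' = Re (f f'' - f'^2) + beta  on 0 <= eta <= 1
# with f(0) = 0, f''(0) = 0, f(1) = 1, f'(1) = 0, where beta is an unknown constant.

Re = 5.0   # illustrative value

def odes(eta, y, p):
    beta = p[0]
    f, fp, fpp = y
    return np.vstack([fp, fpp, Re * (f * fpp - fp ** 2) + beta])

def bcs(ya, yb, p):
    # Four residuals: three for the third-order system plus one fixing the parameter.
    return np.array([ya[0], ya[2], yb[0] - 1.0, yb[1]])

eta = np.linspace(0.0, 1.0, 101)
y_guess = np.vstack([1.5 * eta - 0.5 * eta ** 3,     # Re -> 0 solution as initial guess
                     1.5 - 1.5 * eta ** 2,
                     -3.0 * eta])
sol = solve_bvp(odes, bcs, eta, y_guess, p=[-3.0])

print(sol.status, sol.p[0])     # status 0 means converged; sol.p[0] is beta
print(sol.sol(0.5)[0])          # f at eta = 0.5, halfway between centerline and wall
```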
Berman flow
[ "Chemistry", "Engineering" ]
424
[ "Piping", "Chemical engineering", "Flow regimes", "Fluid dynamics" ]
54,213,333
https://en.wikipedia.org/wiki/Hardware%20security
Hardware security is a discipline that originated from cryptographic engineering and involves hardware design, access control, secure multi-party computation, secure key storage, ensuring code authenticity, and measures to ensure that the supply chain that built the product is secure, among other things. A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. Some providers in this discipline consider that the key difference between hardware security and software security is that hardware security is implemented using "non-Turing-machine" logic (raw combinatorial logic or simple state machines). One approach, referred to as "hardsec", uses FPGAs to implement non-Turing-machine security controls as a way of combining the security of hardware with the flexibility of software. Hardware backdoors are backdoors in hardware. Conceptually related, a hardware Trojan (HT) is a malicious modification of an electronic system, particularly in the context of an integrated circuit. A physical unclonable function (PUF) is a physical entity that is embodied in a physical structure and is easy to evaluate but hard to predict. Further, an individual PUF device must be easy to make but practically impossible to duplicate, even given the exact manufacturing process that produced it. In this respect it is the hardware analog of a one-way function. The name "physical unclonable function" might be a little misleading as some PUFs are clonable, and most PUFs are noisy and therefore do not achieve the requirements for a function. Today, PUFs are usually implemented in integrated circuits and are typically used in applications with high security requirements. Many attacks on sensitive data and resources reported by organizations occur from within the organization itself. See also U.S. NRC, 10 CFR 73.54 Cybersecurity - Protection of digital computer and communication systems and networks NEI 08-09: Cybersecurity Plan for Nuclear Power Plants Computer security compromised by hardware failure Computer compatibility Proprietary software Free and open-source software Comparison of open-source operating systems Trusted Computing Computational trust Fingerprint (computing) Side-channel attack Power analysis Electromagnetic attack Acoustic cryptanalysis Timing attack Supply chain security List of computer hardware manufacturers Consumer protection Security switch Vulnerability (computing) Defense strategy (computing) Turing completeness Universal Turing machine Finite-state machine Automata theory References External links Hardsec: practical non-Turing-machine security for threat elimination "Hardsec" concept outline Computer hardware Cyberwarfare Product design Cybersecurity engineering
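The "easy to evaluate, hard to duplicate, but noisy" character of a PUF can be illustrated with a toy model: treat the device as a fixed secret bit pattern determined at manufacture, let every read-out flip a few bits at random, and authenticate by Hamming distance against an enrolled reference rather than by exact equality. This is a deliberately simplified, hypothetical model for illustration only; it is not how any specific PUF circuit or enrollment protocol actually works.

```python
import secrets
import random

# Toy PUF model: a fixed random fingerprint per device, a small amount of
# read-out noise, and authentication by Hamming distance.
N_BITS = 256
NOISE_RATE = 0.03      # fraction of bits that flip on each evaluation (illustrative)
THRESHOLD = 32         # maximum Hamming distance accepted as "same device"

def make_device() -> list[int]:
    """Manufacturing: the fingerprint is random and unknown in advance."""
    return [secrets.randbits(1) for _ in range(N_BITS)]

def evaluate(device: list[int]) -> list[int]:
    """Reading the PUF is easy, but each read-out is slightly noisy."""
    return [b ^ (random.random() < NOISE_RATE) for b in device]

def hamming(a: list[int], b: list[int]) -> int:
    return sum(x != y for x, y in zip(a, b))

device = make_device()
enrolled = evaluate(device)                         # stored reference response
print(hamming(enrolled, evaluate(device)))          # small: same device, accepted
print(hamming(enrolled, evaluate(make_device())))   # ~N_BITS/2: a "clone" is rejected
```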
Hardware security
[ "Technology", "Engineering" ]
545
[ "Cybersecurity engineering", "Computer engineering", "Computer hardware", "Computer networks engineering", "Computer systems", "Computer science", "Product design", "Design", "Computers" ]
54,219,225
https://en.wikipedia.org/wiki/NNI-351
NNI-351 is an orally active inhibitor of and neurogenesis enhancer which is under development by NeuroNascent, Inc. for the treatment of Down syndrome, depression, and post-traumatic stress disorder (PTSD). As of 2017, it is in the preclinical development stage, and has yet to progress to human clinical trials. In July 2022, NNI-351 was granted orphan drug status by the FDA for the treatment of Fragile X syndrome. See also List of investigational antidepressants References External links NNI-351 - NeuroNascent, Inc NNI-351 - AdisInsight Methods and pharmaceutical compositions for treating down syndrome (patent) Diazepanes Ethers Experimental drugs 2-Fluorophenyl compounds Nitriles Quinolines Thioamides Thioketones
NNI-351
[ "Chemistry" ]
174
[ "Functional groups", "Organic compounds", "Thioketones", "Ethers", "Thioamides", "Nitriles" ]
64,120,117
https://en.wikipedia.org/wiki/Robodebt%20scheme
The Robodebt scheme was an unlawful method of automated debt assessment and recovery implemented in Australia under the Liberal-National Coalition governments of Tony Abbott, Malcolm Turnbull, and Scott Morrison, and employed by the Australian government agency Services Australia as part of its Centrelink payment compliance program. Put in place in July 2016 and announced to the public in December of the same year, the scheme aimed to replace the formerly manual system of calculating overpayments and issuing debt notices to welfare recipients with an automated data-matching system that compared Centrelink records with averaged income data from the Australian Taxation Office. The scheme has been the subject of considerable controversy, having been criticised by media, academics, advocacy groups, and politicians due to allegations of false or incorrectly calculated debt notices being issued, concerns over impacts on the physical and mental health of debt notice recipients, and questions around the lawfulness of the scheme. Robodebt has been the subject of an investigation by the Commonwealth Ombudsman, two Senate committee inquiries, several legal challenges, and a royal commission, Australia's highest form of public inquiry. In May 2020, the Morrison government announced that it would scrap the debt recovery scheme, with 470,000 wrongly-issued debts to be repaid in full. Amid enormous public pressure, Prime Minister Scott Morrison stated during Question Time that "I would apologise for any hurt or harm in the way that the Government has dealt with that issue and to anyone else who has found themselves in those situations." However, the Morrison government never offered a formal apology before it was voted out of office in 2022. The Australian government lost a 2019 lawsuit over the legality of the income averaging process and settled a class-action lawsuit in 2020. The scheme was further condemned by Federal Court Justice Bernard Murphy in his June 2021 ruling against the government, where he approved a A$1.8 billion settlement, including repayments of debts paid, wiping of outstanding debts, and legal costs. Going into the 2022 Australian federal election, Australian Labor Party (ALP) leader Anthony Albanese pledged to hold a royal commission into the Robodebt scheme if his party was elected. After winning the election, the Albanese government officially commenced the Royal Commission into the Robodebt Scheme in August 2022. The commission handed down its report in July 2023, which called the scheme a "costly failure of public administration, in both human and economic terms", and referred several individuals to law enforcement agencies for prosecution. The report also specifically criticised former Prime Minister Scott Morrison, who oversaw the introduction of the scheme when he was the Minister for Social Services, for misleading Cabinet and failing in his ministerial duties. In October 2022, the Albanese government effectively forgave the debts of 197,000 people that were still under review. In August 2023, the Albanese government passed a formal motion of apology in the House of Representatives, apologising for the scheme on behalf of the Parliament. Origins Background Since the late 1970s, the Australian Tax Office (ATO) has used data-matching systems to compare income data received from external sources with income reported by taxpayers, to ensure taxation compliance. 
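The core technical flaw alleged by critics was the income-averaging step: an annual income figure from the ATO was smeared evenly across fortnights and compared against what the person had reported (and been paid on) fortnight by fortnight, so anyone with lumpy earnings, such as casual work or a short contract outside their time on payments, could appear to have been overpaid. The sketch below illustrates the arithmetic with invented numbers and a deliberately simplified, hypothetical payment rule; it is not the actual Centrelink rate calculation.

```python
# Illustration of how fortnightly averaging of an annual income figure can
# manufacture an apparent overpayment.  The payment rule and all figures are
# invented for the example; the real Centrelink calculation is more complex.

FORTNIGHTS_IN_YEAR = 26
FREE_AREA = 300.0      # hypothetical income a recipient may earn per fortnight
TAPER = 0.5            # hypothetical reduction per dollar earned above the free area
MAX_RATE = 700.0       # hypothetical maximum fortnightly payment

def fortnightly_payment(income: float) -> float:
    reduction = max(0.0, income - FREE_AREA) * TAPER
    return max(0.0, MAX_RATE - reduction)

# A person on payments for 20 fortnights (earning nothing), who then leaves
# payments and earns 15,600 over the remaining 6 fortnights of the year.
on_payments = [0.0] * 20
annual_ato_income = 15_600.0

actually_paid = sum(fortnightly_payment(x) for x in on_payments)

# Robodebt-style recalculation: smear the annual figure evenly over every
# fortnight, including the ones in which the person was correctly paid.
averaged_income = annual_ato_income / FORTNIGHTS_IN_YEAR            # 600 per fortnight
recalculated = sum(fortnightly_payment(averaged_income) for _ in on_payments)

print(actually_paid, recalculated)
print("apparent 'debt':", actually_paid - recalculated)   # 14000 - 11000 = 3000
```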
In 2001, Services Australia (then the Department of Human Services) piloted a program that compared a customer’s Centrelink income details with ATO data, to identify discrepancies in the information provided to Centrelink. Where there was a discrepancy, Services Australia would decide if the customer had been overpaid and had a debt that should be recovered. This program (known as the Income Matching System, or IMS) was fully rolled out in 2004. The IMS identified roughly 300,000 possible discrepancies per year. Services Australia would identify and investigate roughly 20,000 of the highest risk discrepancies per year, but were unable to investigate the remaining discrepancies, due to the costs and resources involved in manually investigating and raising debts. The IMS continued largely unchanged until the introduction of the Robodebt scheme in 2016. Creation and announcement In April 2015, measures to create budgetary savings by increasing the pursuit of outstanding debts and investigation of cases of fraud in the Australian welfare system were first flagged by the Minister for Social Services Scott Morrison and the Minister for Human Services Marise Payne, and formally announced by the Abbott government in the 2015 Australian federal budget. Initial estimates in the 2015 budget projected that the scheme would recoup A$1.5 billion for the government. In 2015, the Department of Human Services conducted a two-stage pilot of the Robodebt scheme, targeting debts of selected welfare recipients that were accrued between 2011–2013. Following the 2015 Liberal Party Leadership Spill and 2016 Australian federal election, the Turnbull government implemented an overhaul of the federal welfare budget in an effort to crack down on Centrelink overpayments believed to have occurred between 2010 and 2013 under the Gillard government. On 20 September 2015, Prime Minister Malcolm Turnbull announced that Christian Porter would replace Scott Morrison as Social Services Minister as part of a Cabinet overhaul. In July 2016, the manual system began to be replaced with the Online Compliance Intervention, an automated data-matching technique with less human oversight, capable of identifying and issuing computer-generated debt notices to welfare recipients who had potentially been overpaid. The new system was fully online by September 2016. In December 2016, Minister for Social Services Christian Porter publicly announced the implementation of this new automated debt recovery scheme – which was given the colloquial name "Robodebt" by the media – was estimated to be capable of issuing debt notices at a rate of 20,000 a week. Operation and public reaction Iterations and official names The scheme went through several iterations and formal names, including: PAYG Manual Compliance Intervention program, from 1 July 2015 to 1 July 2016, including the associated pilot programs from early 2015 to 30 June 2015. Online Compliance Intervention from 1 July 2016 to 10 February 2017. Employment Income Confirmation from 11 February 2017 to 30 September 2018. Check and Update Past Income from 30 September 2018 to 29 May 2020. Debt recovery efforts In early January 2017, six months after the commencement of automated debt recovery, it was announced that the scheme had issued 169,000 debt notices and recovered . Based on these figures, it was suggested that a similar automated debt recovery system would be applied to the Aged Pension and Disability Pension, in order to potentially recover a further . 
The 2018 Australian federal budget indicated that the Robodebt data matching scheme would be extended into 2021 with the aim of recovering an additional from welfare recipients. Services Australia announced in September 2019 that expenditure on the Robodebt program was while recouping . Reactions and critiques Opponents of the Robodebt scheme said that errors in the system were leading to welfare recipients paying non-existent debts or debts that were larger than what they actually owed, whilst some welfare recipients had been required to make payments while contesting their debts. In some cases, the debts being pursued dated back further than the ATO requests that Australians retain their documentation. Particular criticism focused on the burden of proof being moved from Centrelink needing to verify the information, to being on the individual to prove they did not owe the funds, with human interaction being very limited in the dispatch of the debt letters. Politicians from the Australian Labor Party, Australian Greens, Pauline Hanson's One Nation, and Independent Andrew Wilkie criticized the scheme and its automated debt calculation methods. The scheme was also criticized by advocacy groups for people affected by poverty, disadvantage, and inequality, including the Australian Council of Social Services (ACOSS) and the Saint Vincent de Paul Society. Allegations of misconduct Allegations levelled against the scheme by the media, former and current welfare recipients, advocacy groups, politicians and relatives of welfare recipients include: Welfare recipients' suicide after receiving automated debt recovery notices for significant sums. Debt notices were issued to deceased people. Issuing debt notices to disability pensioners. Revelations that debt notices were issued to 663 vulnerable people (people with complex needs like mental illness and abuse victims) who died soon after. Initial investigations Commonwealth Ombudsman investigation After the Turnbull government implemented the Robodebt scheme, many recipients of debt notices filed complaints with the Commonwealth Ombudsman. This led to the agency investigating the scheme, with the final report and recommendations delivered in April 2017. The ombudsman recommended that the Department of Human Services (DHS) should: reassess the debts raised by the scheme improve the clarity of debt notices and give customers better information inform customers that their ATO income will be averaged across the relevant period if they do not enter their income information notify welfare recipients that debts based on averaged ATO income may be less accurate help welfare recipients to gather evidence with which to effectively respond to debt notices. The ombudsman also recommended that before expanding the scheme, the DHS should undertake a comprehensive evaluation of the scheme in its current form, and consider how to mitigate the risk of possible over-recovery of debts. First Senate committee inquiry The Robodebt scheme was the subject of a Senate committee inquiry beginning in 2017. The inquiry had a number of findings and made a number of recommendations, including: "That a lack of procedural fairness is evident in every stage of the program, which should be put on hold until all procedural fairness flaws are addressed". "That the Robodebt scheme disempowered people, causing emotional trauma, stress and shame". 
"That the Department of Human Services has a fundamental conflict of interest – the harder it is for people to navigate this system and prove their correct income data, the more money the department recoups". "That the Department of Human Services should resume full responsibility for calculating verifiable debts (including manual checking) relating to income support overpayments, which are based on actual fortnightly earnings and not an assumed average; and provide those issued debt notices with the debt calculation data required to be assured any debts are correct". Legal challenges In February 2019, Legal Aid Victoria announced a federal court challenge of the scheme's calculations used to estimate debt, stating that the calculations assumed that people are working regular, full-time hours when calculating income. In November 2019, the federal government agreed to orders by the Federal Court of Australia in Amato v the Commonwealth that the averaging process using ATO income data to calculate debts was unlawful, and announced that it would no longer raise debts without first gathering evidence – such as payslips – to prove a person had underreported their earnings to Centrelink. In September 2019 Gordon Legal announced their intention of filing a class action suit challenging the legal foundations of the Robodebt system. On 16 November 2020, the day before the trial was due to begin, the Australian government announced that it had struck a deal with Gordon Legal, to settle out-of-court. The deal saw 400,000 victims of Robodebt share in an additional compensation, on top of the additional 470,000 Robodebts (totalling around ) that the Commonwealth government had already agreed to refund or cease pursuing. Demise and further investigations Demise On 29 May 2020, Stuart Robert, Minister for Government Services announced that the Robodebt debt recovery scheme was to be scrapped by the Government, with 470,000 wrongly-issued debts to be repaid in full. Initially, the total sum of the repayments was estimated to be . However, in November 2020 this figure expanded to after the Australian government settled a class-action lawsuit before it could go to trial. On 31 May 2020, Attorney-General Christian Porter, who was Minister for Social Services when the Robodebt system was first implemented, and who had previously defended the scheme, conceded that the use of averaged income data to calculate welfare overpayments was unlawful, stating that there was "no lawful basis for it". After weeks of criticism from the Opposition, in June 2020, Prime Minister Scott Morrison, in response to a question from the opposition concerning a particular victim of the scheme, stated in parliament that "I would apologise for any hurt or harm in the way that the Government has dealt with that issue and to anyone else who has found themselves in those situations". As of 31 July 2020, it was announced that had been repaid to more than 145,000 welfare recipients. On 11 June 2021, the Federal Court approved a A$1.872 billion settlement incorporating repayment of A$751 million, wiping of all remaining debts, and the legal costs running to A$8.4 million. In ruling against the scheme, Justice Bernard Murphy described it as a "shameful chapter in the administration of the commonwealth" and "a massive failure of public administration”. The Federal Treasurer Josh Frydenberg said the government accepted the settlement, but distanced himself from the suicides and mental health issues surrounding the administration of the scheme. 
Services Australia has stated they will commence repayments in 2022 to people who have overpaid according to debt recalculations. In October 2022, the Albanese Government effectively forgave the debts of 197,000 people who were still under review. Second Senate committee inquiry The scheme was again the subject of a Senate committee inquiry, which began in 2019. In the July 2020 hearing, Kathryn Campbell (former head of Services Australia) denied that the scheme had led to welfare recipients suiciding after receiving debt notices, despite allegations from Centrelink staff and the family members of welfare recipients who took their own lives. Senator O'Neill in the August 2020 hearing, read two letters from mothers whose sons died by suicide following the receiving of a Robodebt notice. Initially meant to report its findings in December 2019, the inquiry's deadline was extended six times, with the Senate committee delivering its final report in May 2022. The five interim reports made several findings, including: "That the Robodebt scheme indiscriminately targeted some of Australia’s most vulnerable people, causing significant and widespread harm to their psychological and financial wellbeing". That the use of technology by Government must be supported by appropriate safeguards to protect vulnerable people That the Government had not applied the necessary rigour to ensure that people are always treated fairly That the program had ignored warnings that began within months of the July 2016 start of the scheme, and had continued to issue debt notices that had no basis in law. That the government was still withholding critical information about the Income Compliance Program and the committee had been hindered in producing its final report due to "entrenched resistance and opacity" from ministers and departments. That the Australian public, especially Robodebt victims, deserve to know what advice was provided to Government and how this advice informed decision-making. The sixth and final report made a single recommendation: "That the Commonwealth Government should establish a Royal Commission into the Robodebt scheme". Royal Commission and aftermath In June 2020, the Greens and Labor called for a Royal Commission into Robodebt, to "determine those responsible for the scheme, and its impact on Australians". These calls have been reiterated by university academics, and by ACOSS, which stated that "although some restitution has been delivered to victims of Robodebt, they have not received justice". In May 2022, the sixth and final report from the second Senate inquiry into the scheme recommended a Royal Commission, "to completely understand how the failures of the Income Compliance Program came to pass, and why they were allowed to continue for so long despite the dire impacts on people issued with debts". In June 2020 Labor had stated that only a Royal Commission would be able to obtain the truth about Robodebt. Labor subsequently budgeted $30M in its election costings for the 2022 election for a Royal Commission into the Robodebt Scheme. ACOSS chief executive Cassandra Goldie welcomed this saying "The Robodebt affair was not just a maladministration scandal, it was a human tragedy that resulted in people taking their lives". Following Labor’s election win, Prime Minister Anthony Albanese announced the Royal Commission into the Robodebt Scheme, with Letters Patent issued on 25 August 2022. 
The Royal Commission was chaired by former Queensland Supreme Court Justice Catherine Holmes and was expected to conclude on 18 April 2023. The deadline was extended twice, first until 30 June and later until 7 July 2023. In November 2022 it was disclosed that legal advice before the scheme started was that it did not comply with legislation. Commissioner Catherine Holmes asked DSS lawyer Anne Pulford, "You get an advice in draft, and if it's not favourable you just leave it that way?"; Pulford responded "Yes, Commissioner". The final report of the Royal Commission was released on 7 July 2023. Along with 57 recommendations, a sealed section referred several unnamed individuals for further investigation or action, to four separate bodies. Kathryn Campbell, then working on the AUKUS program at the Department of Defence, was suspended without pay from her role on 20 July. Kathryn Campbell resigned from the Department of Defence effective 21 July 2023. Colleen Taylor, a former employee of the department, received a 2024 King's Birthday Honour for her efforts to expose the scheme. Taylor had tried to raise concerns internally in 2017, and had testified at the Royal Commission. National Anti-Corruption Commission In June 2024, the National Anti-Corruption Commission (NACC) decided not to pursue investigations of six individuals referred to it by the Royal Commission. The NACC stated it was unlikely to obtain new evidence and noted that five out of the six were already under investigation by the Australian Public Service Commission. A former NSW Supreme Court judge, Anthony Whealy, stated that the NACC's refusal to investigate the individuals meant that it had "betrayed its core obligation and failed to carry out its primary statutory duty". The NACC's decision received over 1200 complaints, sparking an independent inquiry into the decision by the Inspector of the NACC, Ms Gail Furness SC. The Inspector obtained documents relating to the decision, and requested submissions from the NACC by October. The Inspector found that Commissioner Paul Brereton had a perceived conflict on interest due to a "close association" with one of the individuals involved, and should have recused himself from the decision. The NACC appointed an independent person to reconsider the decision not to investigate. Australian Public Service Commission In September 2024, the Australian Public Service Commission announced that its investigation into the individuals had concluded, leading to several fines and demotions. No individuals were fired from their role. Following the findings of public service misconduct, lawyers representing the class action announced they would appeal their previous $1.8B settlement, seeking compensation for the further breaches uncovered. See also Dutch childcare benefits scandal British Post Office scandal References Welfare in Australia Public policy in Australia Australia Political controversies in Australia Government by algorithm
Robodebt scheme
[ "Engineering" ]
3,829
[ "Government by algorithm", "Automation" ]
64,121,498
https://en.wikipedia.org/wiki/Lurbinectedin
Lurbinectedin, sold under the brand name Zepzelca, is a medication used for the treatment of small cell lung cancer. The most common side effects include leukopenia, lymphopenia, fatigue, anemia, neutropenia, increased creatinine, increased alanine aminotransferase, increased glucose, thrombocytopenia, nausea, decreased appetite, musculoskeletal pain, decreased albumin, constipation, dyspnea, decreased sodium, increased aspartate aminotransferase, vomiting, cough, decreased magnesium and diarrhea. Lurbinectedin is a synthetic tetrahydropyrrolo[4,3,2-de]quinolin-8(1H)-one alkaloid analogue with potential antineoplastic activity. Lurbinectedin covalently binds to residues lying in the minor groove of DNA, which may result in delayed progression through S phase, cell cycle arrest in the G2/M phase and cell death. Lurbinectedin was approved for medical use in the United States in June 2020. Medical uses Lurbinectedin is indicated for the treatment of adults with metastatic small cell lung cancer (SCLC) with disease progression on or after platinum-based chemotherapy. Structure Lurbinectedin is structurally similar to trabectedin, although the tetrahydroisoquinoline present in trabectedin is replaced with a tetrahydro β-carboline, which enables lurbinectedin to exhibit increased antitumor activity compared with trabectedin. Synthesis Synthesis of lurbinectedin starts from small, common starting materials and requires twenty-six individual steps to produce the drug with an overall yield of 1.6%. Mechanism of action According to PharmaMar, lurbinectedin inhibits the active transcription of the encoding genes. This has two consequences. It promotes tumor cell death and normalizes the tumor microenvironment. Active transcription is the process by which the information contained in the DNA sequence is transferred to an RNA molecule. This process depends on the activity of an enzyme called RNA polymerase II. Lurbinectedin inhibits transcription through a very precise mechanism. Firstly, lurbinectedin binds to specific DNA sequences. It is at these precise spots that RNA polymerase II, which slides down the DNA to produce RNA, is blocked and degraded. Lurbinectedin also has an important role in the tumor microenvironment. The tumor cells act upon macrophages to keep them from behaving like an activator of the immune system. Macrophages can contribute to tumor growth and progression by promoting tumor cell proliferation and invasion, fostering tumor angiogenesis and suppressing antitumor immune cells. Attracted to oxygen-starved (hypoxic) and necrotic tumor cells, they promote chronic inflammation. So not only do macrophages inhibit the immune system, preventing the destruction of tumor cells, but they also create tumor tissue that allows tumor growth. However, macrophages associated with tumors are cells that are addicted to the transcription process. Lurbinectedin acts specifically on the macrophages associated with tumors in two ways: firstly, by inhibiting transcription in the macrophages, which leads to their cell death, and secondly, by inhibiting the production of tumor growth factors. In this way, lurbinectedin normalizes the tumor microenvironment. History Lurbinectedin was approved for medical use in the United States in June 2020. Efficacy was demonstrated in the PM1183-B-005-14 trial (Study B-005; NCT02454972), a multicenter open-label, multi-cohort study enrolling 105 participants with metastatic SCLC who had disease progression on or after platinum-based chemotherapy.
Participants received lurbinectedin 3.2 mg/m2 by intravenous infusion every 21 days until disease progression or unacceptable toxicity. The trial was conducted at 26 sites in the United States, Great Britain, Belgium, France, Italy, Spain and Czech Republic. The U.S. Food and Drug Administration (FDA) granted the application for lurbinectedin priority review and orphan drug designations and granted the approval of Zepzelca to Pharma Mar S.A. Research Clinical Trials Lurbinectedin can be used as monotherapy in the treatment of SCLC. Lurbinectedin monotherapy demonstrated the following clinical results in relapsed extensive stage SCLC: For sensitive disease (chemotherapy-free interval of ≥ 90 days) overall response rate (ORR) was 46.6% with 79.3% disease control rate and median overall survival (OS) being increased to 15.2 months. For resistant disease (chemotherapy-free interval of < 90 days) overall response rate (ORR) was 21.3% with 46.8% disease control rate and 5.1 months median overall survival (OS). Lurbinectedin is also being investigated in combination with doxorubicin as second-line therapy in a randomized Phase III trial. While overall survival in this trial is not yet known, response rates at second line were 91.7% in sensitive disease with median progression-free survival of 5.8 months, and 33.3% in resistant disease with median progression-free of 3.5 months. Lurbinectedin is available in the U.S. under Expanded Access Program (EAP). References External links Antineoplastic drugs Lung cancer Orphan drugs Heterocyclic compounds with 7 or more rings Sulfur heterocycles Oxygen heterocycles Nitrogen heterocycles Tetrahydropyridines Indoles Spiro compounds Lactones Acetate esters Methoxy compounds
Lurbinectedin
[ "Chemistry" ]
1,219
[ "Organic compounds", "Spiro compounds" ]
64,122,159
https://en.wikipedia.org/wiki/Lergotrile
Lergotrile is an ergoline derivative which acts as a dopamine receptor agonist. It was developed for the treatment of Parkinson's disease, but failed in clinical trials due to liver toxicity. References Chloroarenes Dopamine agonists Ergolines Nitriles Prolactin inhibitors
Lergotrile
[ "Chemistry" ]
71
[ "Nitriles", "Functional groups" ]
64,125,292
https://en.wikipedia.org/wiki/Selective%20embryo%20abortion
Selective embryo abortion (also known as selective seed abortion and selective ovule abortion) is a form of non-random, premature termination of embryonic development in plants. Selective embryo abortion assumes that embryo termination depends on the genetic quality of seeds developing within an ovary, and predicts that successfully matured seeds will be of greater fitness than aborted seeds. Consequently, selective embryo abortion has the potential to act as a unique stage of natural selection, influencing the evolution of plant populations and species. This concept was described by botanist John T. Buchholz in 1922 under his framework of developmental selection, which referred to selective embryo abortion as “interovular selection.” Selective embryo abortion may result from competition among embryos for maternal resources. The maternal plant may also play an active role by recognizing and selectively aborting genetically inferior embryos. Evidence of offspring fitness effects supports the hypothesis that abortion is a form of selection. However, abortion in some species may be due to factors independent of embryo fitness, including the position of embryos within an ovary and late-acting self-incompatibility. Mechanisms of selective embryo abortion The body of literature on selective embryo abortion was primarily published in the 1980s. During this time period, researchers proposed and investigated several hypotheses for the mechanisms of selective embryo abortion. These can be broadly grouped into two camps: competition among developing embryos and active “female choice.” The former suggests that selective embryo abortion is driven by interactions among embryos as they compete for maternal resources such as sugars, water, and minerals. From the perspective of the source-sink hypothesis, each embryo acts as a sink, or recipient, of finite resources from roots and photosynthetic tissues. Since resources are limited, the number of existing ovules would be greater than the number of seeds the maternal plant can support, leading to competition for resources. Embryos may compete for resources by producing phytohormones involved in metabolism (such as auxin). Competition may also involve the production of biochemicals that directly hinder the development and growth of other embryos. In contrast, the hypothesis of female choice states that selective embryo abortion may be driven by the maternal plant, which identifies and aborts inferior embryos. However, it is unclear how the maternal plant may be able to assess genetic quality. It is possible, however, that interactions between the maternal plant and competing embryos may affect patterns of seed abortion; for example, patterns of resource consumption among embryos may signal to the maternal plant which embryos are of low fitness. Selective embryo abortion can also occur indirectly through fruit abortion. Indeed, seed and fruit development are interrelated and occur simultaneously. Accordingly, seed maturation—and therefore seed success—can be precluded by fruit abortion. Effects on offspring fitness In general, there is significant overlap in gene expression between embryo development and plant maturation. Selective embryo abortion may therefore act on traits affecting plant survival and fitness following germination. 
Most studies that tested the effects of selective embryo abortion on offspring fitness did so by reducing or eliminating competition among embryos; these studies typically evaluate differences in average fitness between offspring from unmanipulated plants and offspring from plants manipulated by random removal of embryos. Relative increases in certain measures of fitness among the former have been observed in species such as Cryptantha flava, Cryptantha officinale, Lotus corniculatus, and others. Position-dependent abortion Many species exhibit less variable patterns of embryo abortion. In species such as Medicago lupulina, Nemophila breviflora, and Phaseolus coccineus, abortion appears to be affected by the relative position of an ovule within an ovary. A variety of within-ovary, position-dependent patterns have been observed, including: consistent maturation of embryos in particular positions; greater probability of abortion among embryos closer to the style; greater probability of abortion among embryos closer to the peduncle; greater probability of maturation among embryos in the middle portion of the ovary; and alternation between matured and aborted embryos. M. lupulina and N. breviflora are also examples of species with a fixed number of matured seeds per fruit (in these cases, one seed), despite having multiple fertilized ovules. The arrangement of ovarian vascular bundles, which transport nutrients to ovules, has been proposed as a potential influence on position-dependent probabilities of abortion. Alternatively, in species where the order of ovule fertilization and relative positions of matured embryos correlate, fertilization time may have an effect; late-fertilized ovules are expected to lag behind in embryonic development, making them weaker competitors. More specifically, gametophytic selection may cause a correlation between fertilization order and position of matured embryos, since the fastest growing pollen tubes are expected to be the most fit and the first to fertilize ovules. In this scenario, the ovules fertilized first are expected to be stronger competitors due to their genetic quality, hence their higher probability of maturation. Thus, some cases of position-dependent abortion have the potential to be driven by selective embryo abortion. Abortion of self-fertilized embryos Early-acting inbreeding depression is a form of selective embryo abortion that acts on embryos produced by selfing or mating of close relatives. Inbreeding increases genetic homozygosity, allowing selection to eliminate recessive, deleterious or lethal alleles (the presence of these deleterious alleles is referred to as genetic load). Thus, selective embryo abortion would be expected to purge genetic load among inbred offspring by aborting those embryos with deleterious genotypes. However, late-acting self-incompatibility also causes abortion of self-fertilized seeds, confounding identification of early-acting inbreeding depression. References Plant reproduction
Selective embryo abortion
[ "Biology" ]
1,237
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
64,125,867
https://en.wikipedia.org/wiki/Total%20subset
In mathematics, more specifically in functional analysis, a subset T of a topological vector space X is said to be a total subset of X if the linear span of T is a dense subset of X. This condition arises frequently in many theorems of functional analysis. Examples Unbounded self-adjoint operators on Hilbert spaces are defined on total subsets. See also References Functional analysis Topological vector spaces
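As a brief illustration, the condition can be written compactly; the notation T and X follows the definition above, while the Hilbert-space example is a standard one added here for concreteness rather than taken from the article.

```latex
% T \subseteq X is total exactly when the closure of its linear span is all of X:
\[
  \overline{\operatorname{span}(T)} = X .
\]
% Standard example (added for illustration): an orthonormal basis (e_n) of a
% separable Hilbert space H is a total subset of H, since every element of H
% is a limit of finite linear combinations of the e_n.
```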
Total subset
[ "Mathematics" ]
76
[ "Functions and mappings", "Functional analysis", "Vector spaces", "Mathematical objects", "Topological vector spaces", "Space (mathematics)", "Mathematical relations" ]
64,126,587
https://en.wikipedia.org/wiki/Nuclear%20Destruction
Nuclear Destruction is a play-by-mail (PBM) game. It was published by Rick Loomis of Flying Buffalo Inc. in 1970. As the first professional PBM game, it started the commercial PBM industry. Offered by postal mail initially, the game is available by email as well in the 21st century. The game has been active for over five decades; as of October 2021, Rick Loomis PBM Games publishes it. Players use strategic missiles, factories, money, and other elements of gameplay with a focus on diplomacy to win by becoming the sole survivor. Development Nuclear Destruction, the first professional PBM game, was the first game offered by Flying Buffalo Inc. and started the professional PBM industry. Flying Buffalo Inc. initially offered the game through the mail, but in the 21st century it is also available as a play-by-email (PBEM) game. As of October 2021, Rick Loomis PBM Games publishes the game. Nuclear Destruction has been active since 1970. Gameplay According to reviewer Charles Mosteller, editor in chief of Suspense and Decision, a modern PBM magazine, Nuclear Destruction is a "Strategic missile game with emphasis on diplomacy". The object is to be the sole survivor at the end of the game, by arranging for the other players to be destroyed with nuclear missiles. Player tools include missiles, "anti-missiles", factories, and money for influencing other players. Flying Buffalo ran multiple versions of Nuclear Destruction. In 1973 there was a "Ladies ND" as well as a two-player version where, in the latter case, the players were the USSR and China. Also in 1973, groups of four to six players could play against each other in "Gang-War ND", and "Private ND" games were available for play with friends. In 1979, there was a "Partners ND" where two friends could play together; "Blitz ND", a costlier game with shorter turnaround times and priority mailing; and "Bribery ND" where players did not pay turn fees but could purchase extra resources (e.g., missiles and spies). Reception In Issue 9 of Command, Dennis Agosta admired Nuclear Destruction for the lack of any random factor. "It's intellect against intellect, where the outcome of the game is determined by how you and your allies (if any) make your moves." He concluded, "The excitement level of PBM Nuclear Destruction is very high, especially when the game is run on one or two week deadlines." See also List of play-by-mail games Notes References Bibliography Further reading American games Multiplayer games Nuclear warfare Play-by-mail games Strategy games Tabletop games Wargames Wargames introduced in the 1970s Wargames introduced in 1970 Grand strategy wargames
Nuclear Destruction
[ "Chemistry" ]
566
[ "Radioactivity", "Nuclear warfare" ]
64,129,234
https://en.wikipedia.org/wiki/Mercier%20criterion
The Mercier criterion is a stability criterion for plasma used in the theoretical study of plasma instability. It was first proposed in 1954 by C. Mercier, who applied a perturbation method (in which ω represents the frequency of the perturbation and ẑ is the unit vector in the z-direction) to the mathematical model of the plasma in order to carry out the stability calculations. References Plasma instabilities
Mercier criterion
[ "Physics" ]
70
[ "Plasma phenomena", "Physical phenomena", "Plasma instabilities" ]
51,348,948
https://en.wikipedia.org/wiki/ATM%20Industry%20Association
The ATM Industry Association (ATMIA), originally the ATM Owners Association, was established in 1997 in the United States as a global nonprofit trade association to service an industry built around the global growth of the ATM. History Liberalization of the retail banking markets in the US during the 1980s and early 1990s resulted in depository institutions losing their monopoly on ATMs while independent ATM deployers were allowed to compete in the provision of after-hours access to cash. Growth in this market led Tom Harper and Alan Fryrear to establish the ATM Owners' Association (ATMOA) in late 1997 with no staff (except Harper), zero budget, and only a handful of members. The first official ATMOA planning meeting took place on October 9, 1998, at the end of the Faulkner & Gray Advanced ATM Conference in San Diego, California. The group elected Lyle Elias as its new president, ratified a motion to change its name to the ATM Industry Association, formed several committees and took steps to launch its own industry conference. In 2000, Michael Lee joined ATMIA as its European executive director and in 2004, he was named chief executive officer and board member. Progress in bringing industry participants together resulted in the New York Times identifying ATMIA as "the leading trade group" in the global cash distribution industry in 2003. In 2016, ATMIA had over 8,000 members in 66 countries. The membership base included banks and other depository institutions, IADs, payment card companies, cash management service companies, interbank network companies, ATM design and manufacturing companies, and other related service providers. Impact ATMIA provides a forum for common issues among members. These include technical matters such as coordinating the global adoption of operating systems, promoting industry specific networking tools, advising on security of transactions, setting common standards to give access to people with disabilities, and the future of the ATM. It also promoted a worldwide standard for ATM security in collaboration with Accenture, as well as a global ATM benchmarking service. It was responsible for designing a recognizable worldwide "ATM here" sign, based on an international contest won by Andy Kitt, formerly of the NCR Corporation. The "Official Global Pictogram for the ATM" was then registered as an international public sign in 2008 (ISO 7001:PI CF 005). It has worked with James Shepherd-Barron in humanitarian efforts to facilitate the provision of cash and the extended use of mobile ATMs for victims of disasters and political conflict. ATMIA members and directors collaborate to address issues of global concern, such as the ATM ram raids in Australia in 2010 and, in 2012, money laundering regulations, including a framework for non-bank ATMs in Canada. ATMIA represents its membership before financial authorities and regulators, and also approaches legislators directly. This includes work on the provision of cash services for those in the lower income brackets and other vulnerable consumers. These actions have included studies on the worldwide use of banknotes and coins. 
See also Automated teller machine Payment systems Australian Payments Network Cash and cash equivalents Independent ATM deployer Operation Choke Point Talking ATM References External links Financial services companies established in 1997 Bankers associations Automated teller machines Banking in the United States 1997 establishments in the United States Organizations established in 1997 Organizations based in Sioux Falls, South Dakota Business and finance professional associations in the United States
ATM Industry Association
[ "Engineering" ]
660
[ "Automation", "Automated teller machines" ]
51,349,549
https://en.wikipedia.org/wiki/Cementitious%20foam%20insulation
Cementitious foam insulation is a cement-based thermal and acoustic insulation, with an R-value similar to that of fiberglass. It is installed as a foam with a consistency like shaving cream, or as pre-cast slabs. The current cost is similar to that of polyurethane foams. Unlike many foam-in-place polyurethane foams, it is nonflammable and non-toxic. As it is water-based, it offgasses water vapour while curing, requiring ventilation and in some cases a dehumidifier. It cures more slowly than organic foams. However, it does not offgas volatile organic compounds as many organic foams do. Like cement, it is water-soluble until cured; after curing it is water-resistant but water-permeable. It does not expand on setting, but may shrink slightly in open cavities. Structurally, it does not resemble concrete; at the low densities that make it well-insulating, it is quite fragile. It can be crumbled away to re-expose wiring or pipes, making a pile of grey powder. Also unlike concrete, it is quite lightweight. It is not a new product, having been around for some decades, but exclusive rights to an established cementitious foam product have recently been purchased by a company that has been giving it more publicity. References Building materials
Cementitious foam insulation
[ "Physics", "Engineering" ]
293
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
52,873,758
https://en.wikipedia.org/wiki/Polysome%20profiling
Polysome profiling is a technique in molecular biology that is used to study the association of mRNAs with ribosomes. It is different from ribosome profiling. Both techniques have been reviewed and both are used in analysis of the translatome, but the data they generate are at very different levels of specificity. When employed by experts, the technique is remarkably reproducible: the 3 profiles in the first image are from 3 different experiments. The procedure The procedure begins by making a cell lysate of the cells of interest. This lysate contains polysomes, monosomes (composed of one ribosome residing on an mRNA), the small (40S in eukaryotes) and large (60S in eukaryotes) ribosomal subunits, "free" mRNA and a host of other soluble cellular components. The procedure continues by making a sucrose gradient of continuously variable density in a centrifuge tube. At the concentrations used (15-45% in the example), sucrose does not disrupt the association of ribosomes and mRNA. The 15% portion of the gradient is at the top of the tube, while the 45% portion is at the bottom because of their different density. A specific amount (as measured by optical density) of the lysate is then layered gently on top of the gradient in the tube. The lysate, even though it contains a large amount of soluble material, is much less dense than 15% sucrose, and so it can be kept as a separate layer at the top of the tube if this is done gently. In order to separate the components of the lysate, the preparation is subjected to centrifugation. This accelerates the components of the lysate with many times the force of gravity and thus propels them through the gradient based upon how "big" the individual components are. The small (40S) subunits travel less far into the gradient than the large (60S) subunits. The 80S ribosomes on an mRNA travel further (note that the contribution of the size of the mRNA to the distance traveled is not significant). Polysomes composed of 2 ribosomes travel further, polysomes with 3 ribosomes travel further still, and on and on. The "size" of the components is designated by S, the svedberg unit. Note that one S = 10^−13 seconds, and that the concept of "big" is actually an oversimplification. After centrifugation, the contents of the tube are collected as fractions from the top (smaller, slower traveling) to bottom (bigger, faster traveling) and the optical density of the fractions is determined. The first fractions removed have a large amount of relatively small molecules, such as tRNAs, individual proteins, etc. Applications It is possible to use this technique to study the overall degree of translation in cells, but it can be used much more specifically to study individual proteins and their mRNAs. As an example shown in the lower portion of the figure, a protein that composes part of the small subunit can first be detected in the 40S fraction, then nearly disappears from the 60S fraction (the separations on these gradients are not absolute), then reappears in the 80S and polysome fractions. This indicates that there is at most very little of the protein found in the cell that is not part of the small subunit. In contrast, in the upper row of the immunoblot figure, a soluble protein appears both in the soluble fractions and associated with ribosomes and polysomes. The particular protein is a chaperone protein, which (in brief) helps to fold the nascent peptide as it is being extruded from the ribosome. 
As other work in the paper showed, there is a direct association of the chaperone with the ribosome. The technique can also be used to study the degree of translation of a particular mRNA. In these experiments, 5' and 3' sequences of an mRNA were investigated for their effects on the amount of mRNA produced and how well the mRNAs were translated. As shown, not all mRNA isoforms are translated with the same efficiency even though their coding sequences are the same. References Molecular biology
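Polysome profiles are often summarized numerically as a polysome-to-monosome (P/M) ratio: the area under the polysome portion of the absorbance trace divided by the area under the 80S (monosome) peak. The sketch below is a minimal illustration of that calculation only; it is not the software of any particular instrument, and the fraction boundaries and absorbance values in the example are hypothetical.

```python
import numpy as np

def polysome_to_monosome_ratio(absorbance, volume, monosome_window, polysome_window):
    """Estimate the P/M ratio from a gradient absorbance (A254) trace.

    absorbance      : 1-D array of A254 readings along the gradient (top to bottom)
    volume          : 1-D array of corresponding elution volumes (mL)
    monosome_window : (start_mL, end_mL) bounds of the 80S peak
    polysome_window : (start_mL, end_mL) bounds of the polysome region
    """
    absorbance = np.asarray(absorbance, dtype=float)
    volume = np.asarray(volume, dtype=float)

    def area(window):
        lo, hi = window
        mask = (volume >= lo) & (volume <= hi)
        # Trapezoidal integration of the trace over the selected window.
        return np.trapz(absorbance[mask], volume[mask])

    return area(polysome_window) / area(monosome_window)

if __name__ == "__main__":
    # Hypothetical trace: an 80S peak near 5 mL and broader polysome signal after 6 mL.
    volume = np.linspace(0, 12, 240)
    absorbance = (0.8 * np.exp(-((volume - 5.0) / 0.4) ** 2)    # 80S (monosome) peak
                  + 0.5 * np.exp(-((volume - 7.0) / 0.5) ** 2)  # disome
                  + 0.4 * np.exp(-((volume - 8.5) / 0.6) ** 2)  # trisome and larger
                  + 0.05)                                       # baseline
    ratio = polysome_to_monosome_ratio(absorbance, volume, (4.0, 6.0), (6.0, 11.0))
    print(f"P/M ratio: {ratio:.2f}")
```

A higher P/M ratio is generally read as a greater overall degree of translation, matching the qualitative interpretation of the profiles described above.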
Polysome profiling
[ "Chemistry", "Biology" ]
884
[ "Biochemistry", "Molecular biology" ]
52,885,568
https://en.wikipedia.org/wiki/Fluctuation%20electron%20microscopy
Fluctuation electron microscopy (FEM), originally called Variable Coherence Microscopy before decoherence effects in the sample rendered that naming moot, is a technique in electron microscopy that probes nanometer-scale or "medium-range" order in disordered materials. The first studies were performed on amorphous Si (Treacy and Gibson 1997) and later on hydrogenated amorphous silicon. References Electron microscopy techniques
Fluctuation electron microscopy
[ "Physics", "Materials_science" ]
89
[ "Materials science stubs", "Condensed matter stubs", "Condensed matter physics" ]
59,129,360
https://en.wikipedia.org/wiki/Beilinson%E2%80%93Bernstein%20localization
In mathematics, especially in representation theory and algebraic geometry, the Beilinson–Bernstein localization theorem relates D-modules on flag varieties G/B to representations of the Lie algebra attached to a reductive group G. It was introduced by Alexander Beilinson and Joseph Bernstein. Extensions of this theorem include the case of partial flag varieties G/P, where P is a parabolic subgroup, and a theorem relating D-modules on the affine Grassmannian to representations of the affine Kac–Moody algebra. Statement Let G be a reductive group over the complex numbers, and B a Borel subgroup. Then there is an equivalence of categories between D-modules on G/B and U(g)-modules on which the centre Z(U(g)) acts through the character χ described below. On the left is the category of D-modules on G/B. On the right, χ is a homomorphism χ : Z(U(g)) → C from the centre of the universal enveloping algebra, corresponding to the weight -ρ ∈ t* given by minus half the sum over the positive roots of g. The action of the Weyl group W on t* = Spec Sym(t), which enters through the Harish-Chandra isomorphism Z(U(g)) ≅ Sym(t)^W, is shifted so as to fix -ρ. Twisted version There is an analogous equivalence of categories, between Dλ-modules on G/B and U(g)-modules with central character χ, for any λ ∈ t* such that λ-ρ does not pair with any positive root α to give a nonpositive integer (it is "regular dominant"). Here χ is the central character corresponding to λ-ρ, and Dλ is the sheaf of rings on G/B formed by taking the *-pushforward of DG/U along the T-bundle G/U → G/B, a sheaf of rings whose center is the constant sheaf of algebras U(t), and taking the quotient by the central character determined by λ (not λ-ρ). Example: SL2 The Lie algebra of vector fields on the projective line P1 is identified with sl2 via the three vector fields ∂x, x∂x and x²∂x on the affine chart C ⊂ P1; it can be checked that linear combinations of these three vector fields are the only vector fields extending to ∞ ∈ P1. Here, the Casimir element Ω of U(sl2) is sent to zero. The only finite dimensional sl2 representation on which Ω acts by zero is the trivial representation k, which is sent to the constant sheaf, i.e. the ring of functions O ∈ D-Mod. The Verma module of weight 0 is sent to the D-module δ supported at 0 ∈ P1. Each finite dimensional representation corresponds to a different twist. References Hotta, R. and Tanisaki, T., 2007. D-modules, perverse sheaves, and representation theory (Vol. 236). Springer Science & Business Media. Beilinson, A. and Bernstein, J., 1993. A proof of Jantzen conjectures. ADVSOV, pp. 1–50. Representation theory Lie algebras Algebraic geometry
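For concreteness, one standard way to pin down the identification used in the SL2 example is written out below; the particular normalization and signs are an assumption added here (conventions vary between sources), not taken from the article.

```latex
% One choice of identification of sl_2 = span(e, h, f) with vector fields on
% the affine chart C \subset P^1 (signs/normalizations differ between sources):
\[
  e \mapsto \partial_x, \qquad h \mapsto -2x\,\partial_x, \qquad f \mapsto -x^2\,\partial_x .
\]
% These satisfy [h,e] = 2e, [h,f] = -2f, [e,f] = h, and their linear
% combinations are exactly the vector fields on C that extend to \infty \in P^1.
```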
Beilinson–Bernstein localization
[ "Mathematics" ]
564
[ "Representation theory", "Fields of abstract algebra", "Algebraic geometry" ]
59,129,761
https://en.wikipedia.org/wiki/Prothrombin%20fragment%201%2B2
Prothrombin fragment 1+2 (F1+2), also written as prothrombin fragment 1.2 (F1.2), is a polypeptide fragment of prothrombin (factor II) generated by the in vivo cleavage of prothrombin into thrombin (factor IIa) by the enzyme prothrombinase (a complex of factor Xa and factor Va). It is released from the N-terminus of prothrombin. F1+2 is a marker of thrombin generation and hence of coagulation activation. It is considered the best marker of in vivo thrombin generation. F1+2 levels can be quantified with blood tests and are used in the diagnosis of hyper- and hypocoagulable states and in the monitoring of anticoagulant therapy. F1+2 was initially determined with a radioimmunoassay, but is now measured with several enzyme-linked immunosorbent assays. The molecular weight of F1+2 is around 41 to 43 kDa. Its biological half-life is 90 minutes and it persists in blood for a few hours after formation. The half-life of F1+2 is relatively long, which makes it more reliable for measuring ongoing coagulation than other markers like thrombin–antithrombin complexes and fibrinopeptide A. Concentrations of F1+2 in healthy individuals range from 0.44 to 1.11 nM. F1+2 levels increase with age. Levels of F1+2 have been reported to be elevated in venous thromboembolism, protein C deficiency, protein S deficiency, atrial fibrillation, unstable angina, acute myocardial infarction, acute stroke, atherosclerosis, peripheral arterial disease, and in smokers. Anticoagulants have been found to reduce F1+2 levels. F1+2 levels are increased with pregnancy and by ethinylestradiol-containing birth control pills. Conversely, they do not appear to be increased with estetrol- or estradiol-containing birth control pills. However, F1+2 levels have been reported to be increased with oral estrogen-based menopausal hormone therapy, whereas transdermal estradiol-based menopausal hormone therapy appears to result in less or no consistent increase. References Blood tests Coagulation system
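As an illustrative calculation (an addition here, assuming simple first-order clearance, which the article does not specify), a 90-minute half-life is consistent with F1+2 persisting in blood for a few hours:

```latex
% Assumption: first-order (exponential) clearance with t_{1/2} = 90 min.
\[
  N(t) = N_0 \, 2^{-t / t_{1/2}}, \qquad
  N(3\,\mathrm{h}) = N_0 \, 2^{-180/90} = \tfrac{1}{4} N_0 ,
\]
% i.e. roughly 25% of a given amount of F1+2 remains after three hours,
% and about 10% after five hours.
```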
Prothrombin fragment 1+2
[ "Chemistry", "Biology" ]
498
[ "Blood tests", "Biotechnology stubs", "Biochemistry stubs", "Biochemistry", "Chemical pathology" ]