Dataset columns: id (int64, 580 to 79M), url (string, lengths 31 to 175), text (string, lengths 9 to 245k), source (string, lengths 1 to 109), categories (string, 160 classes), token_count (int64, 3 to 51.8k)
23,820,279
https://en.wikipedia.org/wiki/Gymnopilus%20caerulovirescens
Gymnopilus caerulovirescens is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species caerulovirescens Fungus species
Gymnopilus caerulovirescens
Biology
44
143,689
https://en.wikipedia.org/wiki/Vacuum%20flask
A vacuum flask (also known as a Dewar flask, Dewar bottle or thermos) is an insulating storage vessel that slows the speed at which its contents change in temperature. It greatly lengthens the time over which its contents remain hotter or cooler than the flask's surroundings by keeping heat exchange with those surroundings as close to zero as possible. Invented by James Dewar in 1892, the vacuum flask consists of two flasks, placed one within the other and joined at the neck. The gap between the two flasks is partially evacuated of air, creating a near-vacuum which significantly reduces heat transfer by conduction or convection. When used to hold cold liquids, this also virtually eliminates condensation on the outside of the flask. Vacuum flasks are used domestically to keep contents inside hot or cold for extended periods of time. They are also used for thermal cooking. Vacuum flasks are also used for many purposes in industry. History The vacuum flask was designed and invented by Scottish scientist James Dewar in 1892 as a result of his research in the field of cryogenics and is sometimes called a Dewar flask in his honour. While performing experiments to determine the specific heat of the element palladium, Dewar made a brass chamber that he enclosed in another chamber to keep the palladium at its desired temperature. He evacuated the air between the two chambers, creating a partial vacuum to keep the temperature of the contents stable. Dewar refused to patent his invention; this allowed others to develop the flask using new materials such as glass and aluminium, and it became a significant tool for chemical experiments as well as a common household item. Dewar's design was quickly transformed into a commercial item in 1904 when two German glassblowers, Reinhold Burger and Albert Aschenbrenner, discovered that it could be used to keep cold drinks cold and warm drinks warm, and invented a more robust flask design suited for everyday use. The Dewar flask design had never been patented, but the German men who discovered the commercial use for the product named it Thermos, and subsequently claimed both the rights to the commercial product and the trademark to the name. When Dewar later tried to claim the rights to the invention, he lost a court case to the company. The manufacturing and performance of the Thermos bottle were significantly improved and refined by the Viennese inventor and merchant Gustav Robert Paalen, who designed various types for domestic use, which he also patented, and distributed them widely through the Thermos Bottle Companies in the United States, Canada and the UK, which bought licences for their respective national markets. The American Thermos Bottle Company built up mass production in Norwich, Connecticut, which brought prices down and enabled the wide distribution of the product for at-home use. Over time, the company expanded the sizes, shapes and materials of these consumer products, primarily used for carrying coffee on the go and carrying liquids on camping trips to keep them either hot or cold. Eventually other manufacturers produced similar products for consumer use. The term "thermos" became a household name for vacuum flasks in general. Thermos and THERMOS remain registered trademarks in some countries, including the United States, but the lowercase "thermos" was declared a genericized trademark by court action in the United States in 1963. Design The vacuum flask consists of two vessels, one placed within the other and joined at the neck. 
The gap between the two vessels is partially evacuated of air, creating a partial-vacuum which reduces heat conduction or convection. Heat transfer by thermal radiation may be minimized by silvering flask surfaces facing the gap but can become problematic if the flask's contents or surroundings are very hot; hence vacuum flasks usually hold contents below the boiling point of water. Most heat transfer occurs through the neck and opening of the flask, where there is no vacuum. Vacuum flasks are usually made of metal, borosilicate glass, foam or plastic and have their opening stoppered with cork or polyethylene plastic. Vacuum flasks are often used as insulated shipping containers. Extremely large or long vacuum flasks sometimes cannot fully support the inner flask from the neck alone, so additional support is provided by spacers between the interior and exterior shell. These spacers act as a thermal bridge and partially reduce the insulating properties of the flask around the area where the spacer contacts the interior surface. Several technological applications, such as NMR and MRI machines, rely on the use of double vacuum flasks. These flasks have two vacuum sections. The inner flask contains liquid helium and the outer flask contains liquid nitrogen, with one vacuum section in between. The loss of precious helium is limited in this way. Other improvements to the vacuum flask include the vapour-cooled radiation shield and the vapour-cooled neck, both of which help to reduce evaporation from the flask. Research and industry In laboratories and industry, vacuum flasks are often used to hold liquefied gases (commonly liquid nitrogen with a boiling point of 77 K) for flash freezing, sample preparation and other processes where creating or maintaining an extreme low temperature is desired. Larger vacuum flasks store liquids that become gaseous at well below ambient temperature, such as oxygen and nitrogen; in this case the leakage of heat into the extremely cold interior of the bottle results in a slow boiling-off of the liquid so that a narrow unstoppered opening, or a stoppered opening protected by a pressure relief valve, is necessary to prevent pressure from building up and eventually shattering the flask. The insulation of the vacuum flask results in a very slow "boil" and thus the contents remain liquid for long periods without refrigeration equipment. Vacuum flasks have been used to house standard cells and ovenized Zener diodes, along with their printed circuit board, in precision voltage-regulating devices used as electrical standards. The flask helped with controlling the Zener temperature over a long time span and was used to reduce variations of the output voltage of the Zener standard owing to temperature fluctuation to within a few parts per million. One notable use was by Guildline Instruments, of Canada, in their Transvolt, model 9154B, saturated standard cell, which is an electrical voltage standard. Here a silvered vacuum flask was encased in foam insulation and, using a large glass vacuum plug, held the saturated cell. The output of the device was 1.018 volts and was held to within a few parts per million. The principle of the vacuum flask makes it ideal for storing certain types of rocket fuel, and NASA used it extensively in the propellant tanks of the Saturn launch vehicles in the 1960s and 1970s. 
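Since conduction and convection are largely suppressed by the evacuated gap, thermal radiation across the gap becomes the dominant residual heat path, which is why silvering the facing surfaces matters. The short Python sketch below gives a rough, illustrative estimate of that radiative heat flow using the standard grey-body exchange relation for concentric surfaces; the temperatures, areas and emissivities are assumed example values, not figures from this article.

```python
# Rough estimate of radiative heat flow across the evacuated gap of a vacuum
# flask, modelled as two concentric surfaces exchanging grey-body radiation.
# All numbers below are illustrative assumptions, not measured data.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_loss(t_inner, t_outer, area_inner, area_outer, eps_inner, eps_outer):
    """Net radiative heat flow (W) from the inner wall to the outer wall.

    Grey-body exchange between concentric surfaces:
        Q = sigma * A1 * (T1^4 - T2^4) / (1/eps1 + (A1/A2) * (1/eps2 - 1))
    """
    denom = 1.0 / eps_inner + (area_inner / area_outer) * (1.0 / eps_outer - 1.0)
    return SIGMA * area_inner * (t_inner**4 - t_outer**4) / denom

# Hot contents (~90 C) inside, room temperature (~20 C) outside, about
# 0.05 m^2 of inner wall area; compare bare glass with silvered surfaces.
hot, ambient = 363.0, 293.0
area_in, area_out = 0.05, 0.055
print("bare glass:", round(radiative_loss(hot, ambient, area_in, area_out, 0.9, 0.9), 2), "W")
print("silvered  :", round(radiative_loss(hot, ambient, area_in, area_out, 0.03, 0.03), 2), "W")
```

Dropping the emissivity from roughly 0.9 (plain glass) to a few hundredths (a silvered metal film) reduces the radiative loss by well over an order of magnitude, which is the effect the silvering described above is meant to achieve.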
The design and shape of the Dewar flask were used as a model for optical experiments, based on the idea that the shape of the two compartments with the space in between is similar to the way light enters the eye. The vacuum flask has also been used in experiments as a container for different chemicals in order to keep them at a consistent temperature. The industrial Dewar flask is the base for a device used to passively insulate medical shipments. Most vaccines are sensitive to heat and require a cold chain system to keep them at stable, near freezing temperatures. The Arktek device uses eight one-litre ice blocks to hold vaccines at under 10 °C. In the oil and gas industry, Dewar flasks are used to insulate the electronic components in wireline logging tools. Conventional logging tools (rated to 350 °F) are upgraded to high-temperature specifications by installing all sensitive electronic components in a Dewar flask. Safety Vacuum flasks pose an implosion hazard, and glass vessels under vacuum, in particular, may shatter unexpectedly. Chips, scratches or cracks can be a starting point for dangerous vessel failure, especially when the vessel temperature changes rapidly (when hot or cold liquid is added). Proper preparation of the Dewar vacuum flask by tempering prior to use is advised to maintain and optimize the functioning of the unit. Glass vacuum flasks are usually fitted into a metal base with the cylinder contained in or coated with mesh, aluminum or plastic to aid in handling, protect it from physical damage, and contain fragments should they break. In addition, cryogenic storage dewars are usually pressurized, and they may explode if pressure relief valves are not used. Thermal expansion has to be taken into account when engineering a vacuum flask. The outer and inner walls are exposed to different temperatures and will expand at different rates, so the flask can rupture due to the differential in thermal expansion between the two walls. Expansion joints are commonly used in tubular vacuum flasks to avoid rupture and maintain vacuum integrity. See also References Further reading Burger, R., "Double walled vessel with a space for a vacuum between the walls," December 3, 1907. External links 1892 introductions Cryogenics Scottish inventions 19th-century inventions Containers Bottles
Vacuum flask
Physics
1,874
60,605
https://en.wikipedia.org/wiki/Dust%20storm
A dust storm, also called a sandstorm, is a meteorological phenomenon common in arid and semi-arid regions. Dust storms arise when a gust front or other strong wind blows loose sand and dirt from a dry surface. Fine particles are transported by saltation and suspension, a process that moves soil from one place and deposits it in another. The arid regions of North Africa, the Middle East, Central Asia and China are the main terrestrial sources of airborne dust. It has been argued that poor management of Earth's drylands, such as neglecting the fallow system, is increasing the size and frequency of dust storms from desert margins and changing both the local and global climate, as well as impacting local economies. The term sandstorm is used most often in the context of desert dust storms, especially in the Sahara Desert, or places where sand is a more prevalent soil type than dirt or rock, when, in addition to fine particles obscuring visibility, a considerable amount of larger sand particles are blown closer to the surface. The term dust storm is more likely to be used when finer particles are blown long distances, especially when the dust storm affects urban areas. Causes As the force of wind passing over loosely held particles increases, particles of sand first start to vibrate, then to move across the surface in a process called saltation. As they repeatedly strike the ground, they loosen and break off smaller particles of dust which then begin to travel in suspension. At wind speeds above that which causes the smallest particles to suspend, there will be a population of dust grains moving by a range of mechanisms: suspension, saltation and creep. A study from 2008 finds that the initial saltation of sand particles induces a static electric field by friction. Saltating sand acquires a negative charge relative to the ground which in turn loosens more sand particles which then begin saltating. This process has been found to double the number of particles predicted by previous theories. Particles become loosely held mainly due to a prolonged drought or arid conditions, and high wind speeds. Gust fronts may be produced by the outflow of rain-cooled air from an intense thunderstorm. Or, the wind gusts may be produced by a dry cold front: that is, a cold front that is moving into a dry air mass and is producing no precipitation—the type of dust storm which was common during the Dust Bowl years in the U.S. Following the passage of a dry cold front, convective instability resulting from cooler air riding over heated ground can maintain the dust storm initiated at the front. In desert areas, dust and sand storms are most commonly caused by either thunderstorm outflows, or by strong pressure gradients which cause an increase in wind velocity over a wide area. The vertical extent of the dust or sand that is raised is largely determined by the stability of the atmosphere above the ground as well as by the weight of the particulates. In some cases, dust and sand may be confined to a relatively-shallow layer by a low-lying temperature inversion. In other instances, dust (but not sand) may be lifted as high as . Dust storms are a major health hazard. Drought and wind contribute to the emergence of dust storms, as do poor farming and grazing practices by exposing the dust and sand to the wind. Wildfires can lead to dust storms as well. One poor farming practice which contributes to dust storms is dryland farming. 
Particularly poor dryland farming techniques include intensive tillage and the absence of established crops or cover crops when storms strike at especially vulnerable times prior to revegetation. In a semi-arid climate, these practices increase susceptibility to dust storms. However, soil conservation practices may be implemented to control wind erosion. Physical and environmental effects A sandstorm can transport large volumes of sand unexpectedly. Dust storms can carry large amounts of dust, with the leading edge being composed of a wall of thick dust as much as high. Dust and sand storms which come off the Sahara Desert are locally known as a simoom or simoon (sîmūm, sîmūn). The haboob (həbūb) is a sandstorm prevalent in the region of Sudan around Khartoum, with occurrences being most common in the summer. The Sahara Desert is a key source of dust storms, particularly the Bodélé Depression and an area covering the confluence of Mauritania, Mali, and Algeria. Sahara dust is frequently emitted into the Mediterranean atmosphere and transported by the winds sometimes as far north as central Europe and Great Britain. Saharan dust storms have increased approximately 10-fold during the half-century since the 1950s, causing topsoil loss in Niger, Chad, northern Nigeria, and Burkina Faso. In Mauritania there were just two dust storms a year in the early 1960s; since 2007 there have been about 80 a year, according to English geographer Andrew Goudie, professor at the University of Oxford. Levels of Saharan dust coming off the east coast of Africa in June 2007 were five times those observed in June 2006, and were the highest observed since at least 1999, which may have cooled Atlantic waters enough to slightly reduce hurricane activity in late 2007. Dust storms have also been shown to increase the spread of disease across the globe. Bacteria and fungal spores in the ground are blown into the atmosphere by the storms along with the minute particles and interact with urban air pollution. Short-term effects of exposure to desert dust include immediately increased symptoms and worsening of lung function in individuals with asthma; increased mortality and morbidity from long-transported dust from both Saharan and Asian dust storms suggests that long-transported dust storm particles adversely affect the circulatory system. Dust pneumonia is the result of large amounts of dust being inhaled. Prolonged and unprotected exposure of the respiratory system in a dust storm can also cause silicosis, which, if left untreated, will lead to asphyxiation; silicosis is an incurable condition that may also lead to lung cancer. There is also the danger of keratoconjunctivitis sicca ("dry eyes") which, in severe cases without immediate and proper treatment, can lead to blindness. Economic impact Dust storms cause soil loss from the drylands, and worse, they preferentially remove organic matter and the nutrient-rich lightest particles, thereby reducing agricultural productivity. Also, the abrasive effect of the storm damages young crop plants. Dust storms also reduce visibility, affecting aircraft and road transportation. Dust can also have beneficial effects where it deposits: Central and South American rainforests get significant quantities of mineral nutrients from the Sahara; iron-poor ocean regions get iron; and dust in Hawaii increases plantain growth. 
In northern China as well as the mid-western U.S., ancient dust storm deposits known as loess are highly fertile soils, but they are also a significant source of contemporary dust storms when soil-securing vegetation is disturbed. The existence of some Iranian cities is challenged by dust storms. On Mars Dust storms are not limited to Earth and have also been known to form on Mars. These dust storms can extend over larger areas than those on Earth, sometimes encircling the planet, with wind speeds as high as . However, given Mars' much lower atmospheric pressure (roughly 1% that of Earth's), the intensity of Martian storms could never reach the hurricane-force winds experienced on Earth. Martian dust storms are formed when solar heating warms the Martian atmosphere and causes the air to move, lifting dust off the ground. The chance for storms is increased when there are great temperature variations like those seen at the equator during the Martian summer. See also References External links 12-hour U.S. map of surface dust concentrations Mouse-over an hour block on the row for 'Surface Dust Concentrations' Dust in the Wind Photos of the April 14 1935 and September 2 1934 dust storms in the Texas Panhandle hosted by the Portal to Texas History. University of Arizona Dust Model Page Photos of a sandstorm in Riyadh in 2009 from the BBC Newsbeat website Dust storm in Phoenix Arizona via YouTube Weather hazards Road hazards Articles containing video clips Hazards of outdoor recreation
Dust storm
Physics,Technology
1,650
3,037,867
https://en.wikipedia.org/wiki/Spatial%20frequency
In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance. The SI unit of spatial frequency is the reciprocal metre (m−1), although cycles per meter (c/m) is also common. In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (c/mm) or line pairs per millimeter (LP/mm). In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of the wavelength λ and is commonly denoted by ξ or sometimes ν: ξ = 1/λ. Angular wavenumber k, expressed in radians per metre (rad/m), is related to ordinary wavenumber and wavelength by k = 2πξ = 2π/λ. Visual perception In the study of visual perception, sinusoidal gratings are frequently used to probe the capabilities of the visual system, such as contrast sensitivity. In these stimuli, spatial frequency is expressed as the number of cycles per degree of visual angle. Sine-wave gratings also differ from one another in amplitude (the magnitude of difference in intensity between light and dark stripes), orientation, and phase. Spatial-frequency theory The spatial-frequency theory refers to the theory that the visual cortex operates on a code of spatial frequency, not on the code of straight edges and lines hypothesised by Hubel and Wiesel on the basis of early experiments on V1 neurons in the cat. In support of this theory is the experimental observation that the visual cortex neurons respond even more robustly to sine-wave gratings that are placed at specific angles in their receptive fields than they do to edges or bars. Most neurons in the primary visual cortex respond best when a sine-wave grating of a particular frequency is presented at a particular angle in a particular location in the visual field. (However, as noted by Teller (1984), it is probably not wise to treat the highest firing rate of a particular neuron as having a special significance with respect to its role in the perception of a particular stimulus, given that the neural code is known to be linked to relative firing rates. For example, in color coding by the three cones in the human retina, there is no special significance to the cone that is firing most strongly – what matters is the relative rate of firing of all three simultaneously. Teller (1984) similarly noted that a strong firing rate in response to a particular stimulus should not be interpreted as indicating that the neuron is somehow specialized for that stimulus, since there is an unlimited equivalence class of stimuli capable of producing similar firing rates.) The spatial-frequency theory of vision is based on two physical principles: Any visual stimulus can be represented by plotting the intensity of the light along lines running through it. Any curve can be broken down into constituent sine waves by Fourier analysis. The theory (for which empirical support has yet to be developed) states that in each functional module of the visual cortex, Fourier analysis (or its piecewise form) is performed on the receptive field and the neurons in each module are thought to respond selectively to various orientations and frequencies of sine wave gratings. 
When all of the visual cortex neurons that are influenced by a specific scene respond together, the perception of the scene is created by the summation of the various sine-wave gratings. (This procedure, however, does not address the problem of the organization of the products of the summation into figures, grounds, and so on. It effectively recovers the original (pre-Fourier analysis) distribution of photon intensity and wavelengths across the retinal projection, but does not add information to this original distribution. So the functional value of such a hypothesized procedure is unclear. Some other objections to the "Fourier theory" are discussed by Westheimer (2001).) One is generally not aware of the individual spatial frequency components since all of the elements are essentially blended together into one smooth representation. However, computer-based filtering procedures can be used to deconstruct an image into its individual spatial frequency components. Research on spatial frequency detection by visual neurons complements and extends previous research using straight edges rather than refuting it. Further research shows that different spatial frequencies convey different information about the appearance of a stimulus. High spatial frequencies represent abrupt spatial changes in the image, such as edges, and generally correspond to featural information and fine detail. M. Bar (2004) has proposed that low spatial frequencies represent global information about the shape, such as general orientation and proportions. Rapid and specialised perception of faces is known to rely more on low spatial frequency information. In the general population of adults, the threshold for spatial frequency discrimination is about 7%. It is often poorer in dyslexic individuals. Spatial frequency in MRI When spatial frequency is used as a variable in a mathematical function, the function is said to be in k-space. Two-dimensional k-space has been introduced into MRI as a raw data storage space. The value of each data point in k-space is measured in the unit of 1/meter, i.e. the unit of spatial frequency. It is very common that the raw data in k-space shows features of periodic functions. The periodicity is not spatial frequency, but is temporal frequency. An MRI raw data matrix is composed of a series of phase-variable spin-echo signals. Each of the spin-echo signals is a sinc function of time, which can be described by S(t) = ∫ ρ(r) e^{iωt} dr, where ω = ω₀ + γG·r. Here γ is the gyromagnetic ratio constant, and ω₀ is the basic resonance frequency of the spin. Due to the presence of the gradient G, the spatial information r is encoded onto the frequency ω. The periodicity seen in the MRI raw data is just this frequency ω, which is basically the temporal frequency in nature. In a rotating frame, ω₀ = 0, and ω is simplified to ω = γG·r. Just by letting k = γGt, the spin-echo signal is expressed in an alternative form S(k) = ∫ ρ(r) e^{ik·r} dr. Now, the spin-echo signal is in the k-space. It becomes a periodic function of k with r as the k-space frequency but not as the "spatial frequency", since "spatial frequency" is reserved for the name of the periodicity seen in the real space r. The k-space domain and the space domain form a Fourier pair. Two pieces of information are found in each domain, the spatial information and the spatial frequency information. The spatial information, which is of great interest to all medical doctors, is seen as periodic functions in the k-space domain and is seen as the image in the space domain. 
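As noted above, an image (space domain) and its spatial frequency content (k-space) form a Fourier pair, and computer-based procedures can deconstruct an image into its individual spatial frequency components. The Python sketch below illustrates this with NumPy only: it builds a synthetic sinusoidal grating with a known spatial frequency, takes the 2D Fourier transform, and reads the dominant spatial frequency back off the spectrum. The grating size and frequency are arbitrary example values, not figures from this article.

```python
import numpy as np

# Synthetic grating: intensity varies sinusoidally along x with a known spatial
# frequency (cycles per pixel). These parameters are arbitrary examples.
size = 256
fx_true = 16 / size                      # 16 cycles across the image = 0.0625 cycles/pixel
x = np.arange(size)
image = np.sin(2 * np.pi * fx_true * x)[np.newaxis, :].repeat(size, axis=0)

# 2D Fourier transform: each coefficient of the spectrum corresponds to one
# sinusoidal component at a particular spatial frequency and orientation.
spectrum = np.fft.fftshift(np.fft.fft2(image))
freqs = np.fft.fftshift(np.fft.fftfreq(size))   # cycles per pixel along each axis

# Locate the strongest non-DC component and report its spatial frequency.
power = np.abs(spectrum) ** 2
power[size // 2, size // 2] = 0.0               # ignore the DC (mean intensity) term
ky, kx = np.unravel_index(np.argmax(power), power.shape)
print(f"recovered spatial frequency: {abs(freqs[kx]):.4f} cycles/pixel "
      f"(true value {fx_true:.4f})")
```

Filtering the spectrum before transforming back (for example, zeroing all coefficients above or below some cut-off) is the standard way to isolate the low or high spatial frequency content of an image.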
The spatial frequency information, which might be of interest to some MRI engineers, is not easily seen in the space domain but is readily seen as the data points in the k-space domain. See also Fourier analysis Superlens Visual perception Fringe visibility Reciprocal space References External links Mathematical physics Space
Spatial frequency
Physics,Mathematics
1,420
3,735,222
https://en.wikipedia.org/wiki/NGC%203486
NGC 3486 is an intermediate barred spiral galaxy located about 27.4 million light years away in the constellation of Leo Minor. It has a morphological classification of SAB(r)c, which indicates it is a weakly barred spiral with an inner ring and loosely wound arms. This is a borderline, low-luminosity Seyfert galaxy with an active nucleus. However, no radio or X-ray emission has been detected from the core, and it may only have a small supermassive black hole with less than a million times the mass of the Sun. References External links Leo Minor 3486 Intermediate spiral galaxies 17850411
NGC 3486
Astronomy
132
66,491,838
https://en.wikipedia.org/wiki/Kenneth%20Z.%20Altshuler
Kenneth Z. Altshuler (April 11, 1929 – January 6, 2021) was an American psychiatrist and psychoanalyst. He was a Professor Emeritus of Psychiatry and the Chairman of the Department of Psychiatry at the University of Texas Southwestern Medical Center in Dallas. Early life and education Kenneth Z. Altshuler was born on April 11, 1929, in Paterson, New Jersey, to Jacob and Altie Altshuler. He graduated from Cornell University in 1948 and received his M.D. degree from the University at Buffalo School of Medicine in 1952, at age 23. He did an internship at Kings County Hospital Center. From 1953 to 1955, he served in the Navy, leaving the service with the rank of Lt. (J.G.) in the Medical Corps. After his military service, he underwent specialty training in psychiatry and psychoanalysis at the Columbia University Center for Psychoanalytic Training and Research. Career In 1973, Altshuler joined the Columbia University faculty, where he focused on research into mental illnesses among deaf patients and on geriatric psychiatry. From 1973 to 1977, he managed undergraduate education in psychiatry at Columbia University's College of Physicians and Surgeons in New York. In 1977, he left Columbia University and moved to Texas. He became the chairman of the Department of Psychiatry at UT Southwestern Medical Center in Dallas. There, he expanded the faculty from five to over one hundred full-time physicians and raised fifty-two million dollars in departmental endowments, including funds for ten chairs and two research centers. He retired in 2019, and was appointed a Professor Emeritus of Psychiatry. He served as a director of the National Board of Medical Examiners, as president of the American Association of Chairs of Departments of Psychiatry in 1990–1991, and as a board member and later, in 1996, president of the American Board of Psychiatry and Neurology. In 1999, he was appointed to the board of the Texas Department of Mental Health and Mental Retardation by then-Governor George W. Bush, and served for five years. He also served on the boards and advisory boards of local psychiatric and charity organizations. Personal life He had three children from his first marriage, Steven L. Altshuler, Lori L. Altshuler and Dara Altshuler, and six grandchildren. In 1987, he married Ruth Collins Sharp, an American philanthropist. He and his wife were known for their civic engagement in Dallas and philanthropic activities in North Texas, including to UT Southwestern. After his wife died in 2017, he established two funds at UT Southwestern, the Ruth & Ken Altshuler Fund for Clinical Psychiatry and the Kenneth Z. Altshuler Fund for Psychiatric Education, to support clinical research and education programs related to mental illness. Altshuler died from complications of COVID-19 on January 6, 2021, during the COVID-19 pandemic in Texas. 
Awards and honors Merit Award of the National Psychological Association for Psychoanalysis Honorary Doctorate of Science from the Gallaudet College for the Deaf Certificate of Special Achievement by the American Psychiatric Association for contribution to the program for the deaf in New York Certificate of Special Recognition by the American Psychiatric Association for contribution to the Community Mental Health program in Dallas Distinguished Alumnus Award of the University of Buffalo School of Medicine Trail Blazer Award by the Dallas Community Mental Health Center Wilson Award in Geriatric Psychiatry Psychiatric Excellence Award from the Texas Society of Psychiatric Physicians Texas Star Award from the Texas Mental Health Association Outstanding Psychiatric Award from the North Texas Society of Psychiatric Physicians Prism Award from the Dallas Mental Health Association The Psychiatric Out-Patient Clinic of Dallas Community Mental Health Center is named in his honor The Psychiatric Unit of Zale Lipshy Pavilion is named in his honor The Callier Center for Communication Disorders at University of Texas at Dallas established an annual award bearing his name – the Ruth and Ken Altshuler Callier Care Award The Metrocare Services established a research center bearing his name – the Altshuler Center for Education and Research Dallas County Mental Health and Mental Retardation renamed one of its clinics in his honor – the Kenneth Z. Altshuler Mental Health Clinic See also Ruth Sharp Altshuler Lori L. Altshuler References 1929 births 2021 deaths Physicians from Dallas Physicians from Paterson, New Jersey Military personnel from New Jersey American psychiatrists Cornell University alumni University at Buffalo alumni Columbia University Vagelos College of Physicians and Surgeons alumni University of Texas Southwestern Medical Center faculty Sleep researchers Deaths from the COVID-19 pandemic in Texas
Kenneth Z. Altshuler
Biology
901
20,362,394
https://en.wikipedia.org/wiki/Carboquone
Carboquone is a drug used in chemotherapy. References See also Chemotherapy Aziridine 1-Aziridinyl compounds Carbamates Alkylating antineoplastic agents Ethers 1,4-Benzoquinones
Carboquone
Chemistry
48
67,039,872
https://en.wikipedia.org/wiki/Katsumi%20Kaneko
Katsumi Kaneko was born in Yokohama (Kanagawa), Japan. He graduated with a Bachelor of Engineering degree in 1969 from Yokohama National University (Applied Chemistry), Yokohama. He received a master's degree in physical chemistry from The University of Tokyo in 1971. He received a Doctor of Science degree in solid state chemistry from The University of Tokyo in 1978 for his thesis entitled “Electrical Properties and Defect Structures of Iron Hydroxide Oxide Colloids”. Education He worked at Chiba University in the Faculty of Science until 2010, where he studied the surface chemistry of metal hydroxide oxides, gas adsorption, nanoporous materials, and nanospace molecular science. Later, he became the dean of the Faculty of Science and of the Graduate School of Science and Technology of Chiba University. He has been a distinguished professor at Shinshu University since 2010. Research and career He developed accurate characterization methods for nanoscale pores based on gas adsorption and established a new nanospace molecular science; he found an unusual in-pore high-pressure effect of nanoscale pores, in which molecules and/or atoms prefer to form a high-pressure phase even without compression. One representative example of the in-pore high-pressure effect is the spontaneous formation of an atomically one-dimensional sulfur chain with metallic properties inside a carbon nanotube under vacuum. He also found partial dehydration of ions confined in nanoscale pores, which is essential to understanding supercapacitors. He provided a reasonable clue, cluster-associated hydrophobic-to-hydrophilic transformation, to understanding the water adsorption (hydration) of hydrophobic nanoporous carbons. He contributed to understanding the adsorption of supercritical gases such as NO, CH4, and H2 on nanoporous materials. He introduced the concept of quasi-vaporization of supercritical gases through an intensive molecule-pore interaction, giving an efficient guideline for improving adsorption of supercritical gases. He has developed an efficient separation route for isotopic gases such as 18O2 and 16O2. He showed partial breaking of the Coulombic law in electrically conductive carbon pores, which induces association of cations or anions. He developed a sol-gel dispersant for single-wall carbon nanotubes, producing highly transparent conductive films and stretchable electrodes. Awards and honors He received an award from the Chemical Society of Japan in 1999 and the Charles Petinos Award from the American Carbon Society in 2007. He has been a fellow of the Chemical Society of Japan since 2011 and of the Royal Society of Chemistry and the International Adsorption Society since 2013, and is a Senior Member of the AIChE. Publications References Shinshu University alumni Fellows of the Royal Society of Chemistry Yokohama National University alumni Academic staff of Chiba University Japanese chemists Living people Year of birth missing (living people) Solid state chemists
Katsumi Kaneko
Chemistry
583
2,857,072
https://en.wikipedia.org/wiki/Triangulated%20irregular%20network
In computer graphics, a triangulated irregular network (TIN) is a representation of a continuous surface consisting entirely of triangular facets (a triangle mesh), used mainly as a discrete global grid in primary elevation modeling. The vertices of these triangles are created from field-recorded spot elevations obtained through a variety of means, including conventional surveying techniques, Global Positioning System Real-Time Kinematic (GPS RTK) positioning, and photogrammetry. Associated with three-dimensional data and topography, TINs are useful for the description and analysis of general horizontal distributions and relationships. Digital TIN data structures are used in a variety of applications, including geographic information systems (GIS) and computer-aided design (CAD), for the visual representation of a topographical surface. A TIN is a vector-based representation of the physical land surface or sea bottom, made up of irregularly distributed nodes and lines with three-dimensional coordinates that are arranged in a network of non-overlapping triangles. A TIN comprises a triangular network of vertices, known as mass points, with associated coordinates in three dimensions connected by edges to form a triangular tessellation. Three-dimensional visualizations are readily created by rendering of the triangular facets. In regions where there is little variation in surface height, the points may be widely spaced, whereas in areas of more intense variation in height the point density is increased. A TIN used to represent terrain is often called a digital elevation model (DEM), which can be further used to produce digital surface models (DSM) or digital terrain models (DTM). An advantage of using a TIN over a rasterized digital elevation model (DEM) in mapping and analysis is that the points of a TIN are distributed variably based on an algorithm that determines which points are most necessary to create an accurate representation of the terrain. Data input is therefore flexible and fewer points need to be stored than in a raster DEM with its regularly distributed points. While a TIN may be considered less suited than a raster DEM for certain kinds of GIS applications, such as analysis of a surface's slope and aspect, it is often used in CAD to create contour lines. A DTM and DSM can be formed from a DEM. A DEM can be interpolated from a TIN. TINs are based on a Delaunay triangulation or a constrained Delaunay triangulation. Delaunay conforming triangulations are recommended over constrained triangulations. This is because the resulting TINs are likely to contain fewer long, skinny triangles, which are undesirable for surface analysis. Additionally, natural neighbor interpolation and Thiessen (Voronoi) polygon generation can only be performed on Delaunay conforming triangulations. A constrained Delaunay triangulation can be considered when you need to explicitly define certain edges that are guaranteed not to be modified (that is, split into multiple edges) by the triangulator. Constrained Delaunay triangulations are also useful for minimizing the size of a TIN, since they have fewer nodes and triangles where breaklines are not densified. The TIN model was developed in the early 1970s as a simple way to build a surface from a set of irregularly spaced points. The first triangulated irregular network program for GIS was written by W. Randolph Franklin, under the direction of David Douglas and Thomas Peucker (Poiker), at Canada's Simon Fraser University, in 1973. 
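Since a TIN is essentially a Delaunay triangulation of irregularly spaced elevation points from which elevations can then be interpolated, the idea is easy to demonstrate in code. The Python sketch below is an illustration rather than anything from the article: it uses SciPy's Delaunay triangulation to build the facets and a linear (barycentric) interpolator over them to estimate an elevation at an arbitrary query point; the sample points and heights are made-up values.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Irregularly spaced spot elevations (x, y, z) -- made-up sample data standing
# in for surveyed, GPS RTK, or photogrammetric mass points. Corner points are
# included so the query location falls inside the triangulated area.
rng = np.random.default_rng(0)
corners = np.array([[0.0, 0.0], [0.0, 100.0], [100.0, 0.0], [100.0, 100.0]])
xy = np.vstack([corners, rng.uniform(0.0, 100.0, size=(50, 2))])    # metres
z = 200.0 + 0.5 * xy[:, 0] + 5.0 * np.sin(xy[:, 1] / 10.0)          # synthetic heights

# Build the TIN: a Delaunay triangulation of the horizontal point positions.
tin = Delaunay(xy)
print("triangles in TIN:", len(tin.simplices))

# Interpolate an elevation at a query point by linear interpolation over the
# triangle containing it (the usual TIN-to-DEM style query).
interpolate = LinearNDInterpolator(tin, z)
query = np.array([[42.0, 17.0]])
print("interpolated elevation at", query[0], "->", float(interpolate(query)[0]), "m")
```

A constrained Delaunay triangulation, as discussed above, would additionally force specified breakline edges to appear in the mesh; SciPy's Delaunay does not support that, so this sketch corresponds to the unconstrained (Delaunay conforming) case.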
File formats A variety of different file formats exist for saving TIN information, including Esri TIN, along with others such as AquaVeo and ICEM CFD. References External links UBC Geography PSU Education ArcGIS irregular network Geometric data structures Geographic data and information
Triangulated irregular network
Mathematics,Technology
763
935,328
https://en.wikipedia.org/wiki/Fusidic%20acid
Fusidic acid, sold under the brand name Fucidin among others, is a steroid antibiotic that is often used topically in creams or ointments and eyedrops but may also be given systemically as tablets or injections. The global problem of advancing antimicrobial resistance has led to a renewed interest in its use. Medical uses Fusidic acid is active in vitro against Staphylococcus aureus, most coagulase-positive staphylococci, beta-hemolytic streptococci, Corynebacterium species, and most Clostridium species. Fusidic acid has no known useful activity against enterococci or most Gram-negative bacteria (except Neisseria, Moraxella, Legionella pneumophila, and Bacteroides fragilis). Fusidic acid is active in vitro and clinically against Mycobacterium leprae but has only marginal activity against Mycobacterium tuberculosis. One use of fusidic acid is its activity against methicillin-resistant Staphylococcus aureus (MRSA). Although many strains of MRSA remain sensitive to fusidic acid, there is a low genetic barrier to drug resistance (a single point mutation is all that is required), so fusidic acid should never be used on its own to treat serious MRSA infection and should be combined with another antimicrobial such as rifampicin when administering oral or topical dosing regimens approved in Europe, Canada, and elsewhere. However, resistance selection is low when pathogens are challenged at high drug exposure. Topical fusidic acid is occasionally used as a treatment for acne vulgaris. As a treatment for acne, fusidic acid is often partially effective at improving acne symptoms. However, research studies have indicated that fusidic acid is not as highly active against Cutibacterium acnes as many other antibiotics that are commonly used as acne treatments. Fusidic acid is also found in several additional topical skin and eye preparations (e.g., Fucibet), although its use for these purposes is controversial. Side effects Fucidin tablets and suspension, whose active ingredient is sodium fusidate, occasionally cause liver damage, which can produce jaundice (yellowing of the skin and the whites of the eyes). This condition will almost always get better after the patient finishes taking Fucidin tablets or suspension. Other related side-effects include dark urine and lighter-than-usual feces. These, too, should normalize when the course of treatment is completed. Pharmacology Fusidic acid acts as a bacterial protein synthesis inhibitor by preventing the turnover of elongation factor G (EF-G) from the ribosome. Fusidic acid is effective primarily on Gram-positive bacteria such as Staphylococcus, Streptococcus and Corynebacterium species. Fusidic acid is a tetracyclic, naturally occurring steroid derived from the fungus Fusidium coccineum. It was first isolated in 1960 and developed by Leo Pharma in Ballerup, Denmark, being used clinically from 1962 onwards. It has also been isolated from Mucor ramannianus, an Acremonium species, and Isaria kogana. The drug is licensed for use as its sodium salt sodium fusidate, and it is approved for use under prescription in Australia, Canada, Colombia, the European Union, India, Japan, New Zealand, South Korea, Taiwan, Thailand, and the United Kingdom. Mechanism of action Fusidic acid binds to EF-G after translocation and GTP (guanosine-5'-triphosphate) hydrolysis. This interaction prevents the necessary conformational changes for EF-G release from the ribosome, effectively blocking the protein synthesis process. 
Fusidic acid can only bind to EF-G in the ribosome after GTP hydrolysis. Since translocation is a part of elongation and ribosome recycling, fusidic acid can block either or both steps of protein synthesis. Dose Fusidic acid should not be used on its own at low doses to treat S. aureus infections. However, it may be possible to use fusidic acid as monotherapy when used at higher doses. The use of topical preparations (skin creams and eye ointments) containing fusidic acid is strongly associated with the development of resistance, and there are voices advocating against the continued use of fusidic acid monotherapy in the community. Topical preparations used in Europe often contain fusidic acid and gentamicin in combination, which helps to prevent the development of resistance. Cautions There is inadequate evidence of safety in human pregnancy. Animal studies and many years of clinical experience suggest that fusidic acid is devoid of teratogenic effects (birth defects), but fusidic acid can cross the placental barrier. Resistance In vitro susceptibility studies of US strains of several bacterial species such as S. aureus, including MRSA and coagulase-negative Staphylococcus, indicate potent activity against these pathogens. Mechanisms of resistance have been extensively studied only in Staphylococcus aureus. The most studied mechanism is the development of point mutations in fusA, the chromosomal gene that codes for EF-G. The mutation alters EF-G so that fusidic acid is no longer able to bind to it. Resistance is readily acquired when fusidic acid is used alone and commonly develops during the course of treatment. As with most other antibiotics, resistance to fusidic acid arises less frequently when used in combination with other drugs. For this reason, fusidic acid should not be used on its own to treat serious Staph. aureus infections. However, at least in Canadian hospitals, data collected between 1999 and 2005 showed a rather low rate of resistance to fusidic acid among both methicillin-susceptible and methicillin-resistant strains, and mupirocin was found to be the more problematic topical antibiotic for the aforementioned conditions. Some bacteria also display 'FusB-type' resistance, which has been found to be the most prevalent in Staphylococcus spp. in many clinical isolates. This resistance mechanism is mediated by the fusB, fusC, and fusD genes, which are found primarily on plasmids but have also been found in chromosomal DNA. The product of fusB-type resistance genes is a 213-residue cytoplasmic protein which interacts in a 1:1 ratio with EF-G. FusB-type proteins bind in a region distinct from fusidic acid to induce a conformational change which results in liberation of EF-G from the ribosome, allowing the elongation factor to participate in another round of ribosome translocation. Interactions Fusidic acid should not be used with quinolone antibiotics, with which it is antagonistic. Although clinical practice over the past decade has supported the combination of fusidic acid and rifampicin, a recent clinical trial showed that there is an antagonistic interaction when both antibiotics are combined. On 8 August 2008, it was reported that the Irish Medicines Board was investigating the death of a 59-year-old Irish man who developed rhabdomyolysis after combining atorvastatin and fusidic acid, and three similar cases. 
In August 2011, the UK's Medicines and Healthcare products Regulatory Agency issued a Drug Safety Update warning that "systemic fusidic acid (Fucidin) should not be given with statins because of a risk of serious and potentially fatal rhabdomyolysis." Society and culture Brand names and preparations Fucidin (of Leo in Canada) Fucidin H (topical cream with hydrocortisone - Leo) Fucidin (of Leo in UK/ Leo-Ranbaxy-Croslands in India) Fuci-Ophthalmic (as eye gel in Damascus, Syria) Fucidine (of Leo in France, Germany and Spain) Fusicutan Creme (topical cream in Germany) Fucidin (of Leo in Norway and Israel) Fucidin (of Adcock Ingram, licensed from Leo, in South Africa) Fucithalmic (of Leo in the UK, the Netherlands, Denmark and Portugal) Fucicort (topical mixture with hydrocortisone) Fucibet (fusidic acid/betamethasone valerate topical cream) Ezaderm (topical mixture with betamethasone) (of United Pharmaceutical "UPM" in Jordan) Fuci (of pharopharm in Egypt) Fucizon (topical mixture with hydrocortisone of pharopharm in Egypt) Foban (topical cream in New Zealand) Betafusin (fusidic acid/betamethasone valerate topical cream in Greece) Betafucin (2% fusidic acid/1% betamethasone valerate topical cream in Egypt) (of Delta Pharma S.A.E., A.R.E. (Egypt)) Fusimax (of Roussette in India) Fusiderm (topical cream and ointment by Indi Pharma in India) Fusid (in Nepal) Fudic (topical cream in India) Fucidin (of Donghwa Pharm in South Korea) Dermy (topical cream of W. Woodwards in Pakistan) Fugen Cream (in Taiwan) Phudicin Cream (in China; 夫西地酸) Fucidin Fusidic Acid (in China; 夫西地酸, of Leo Laboratories Limited) Dermofucin cream, ointment and gel (in Jordan) Optifucin viscous eye drops (of API in Jordan) Verutex (of Roche in Brazil) Taksta (of Cempra in U.S. For export only in US) Futasole (of Julphar in Gulf and north Africa) Stanicid (2% ointment of Hemofarm in Serbia) Staphiderm Cream (Israel, by Trima) Fuzidin (tablets of Biosintez in Russia) Fuzimet (ointment with methyluracil of Biosintez in Russia) Axcel Fusidic Acid (2% cream and ointment of Kotra Pharma, Malaysia) Ofusidic (eye drops produced by Orchidia pharmaceutical in Egypt) Research An orally administered monotherapy with a high loading dose is under development in the United States. A different oral dosing regimen, based on the compound's pharmacokinetic/pharmacodynamic (PK-PD) profile, is in clinical development in the US as Taksta. Fusidic acid is being tested for indications beyond skin infections. There is evidence from compassionate use cases that fusidic acid may be effective in the treatment of patients with prosthetic joint-related chronic osteomyelitis. Biosynthesis The biosynthetic machinery in Fusidium coccineum (also known as Acremonium fusidioides) has been sequenced and analyzed. (3S)-2,3-oxidosqualene and fusidane are two intermediates. References External links Alkene derivatives CYP2D6 inhibitors Diols Acetate esters Steroid antibiotics Carboxylic acids Protein synthesis inhibitor antibiotics Tetracyclic compounds
Fusidic acid
Chemistry
2,405
44,046,734
https://en.wikipedia.org/wiki/Matched%20molecular%20pair%20analysis
Matched molecular pair analysis (MMPA) is a method in cheminformatics that compares the properties of two molecules that differ only by a single chemical transformation, such as the substitution of a hydrogen atom by a chlorine one. Such pairs of compounds are known as matched molecular pairs (MMPs). Because the structural difference between the two molecules is small, any experimentally observed change in a physical or biological property between the matched molecular pair can more easily be interpreted. The term was first coined by Kenny and Sadowski in the book Chemoinformatics in Drug Discovery. Introduction An MMP can be defined as a pair of molecules that differ in only a minor single point change. Matched molecular pairs (MMPs) are widely used in medicinal chemistry to study changes in compound properties (including biological activity, toxicity, environmental hazards and much more) that are associated with well-defined structural modifications. Single point changes in the molecule pairs are termed a chemical transformation or molecular transformation. Each molecular pair is associated with a particular transformation. An example of a transformation is the replacement of one functional group by another. More specifically, a molecular transformation can be defined as the replacement of a molecular fragment having one, two or three attachment points with another fragment. A molecular transformation that is useful in a specified context is termed a "significant" transformation. For example, a transformation may systematically decrease or increase a desired property of chemical compounds. Transformations that affect a particular property/activity in a statistically significant sense are called significant transformations. A transformation is considered significant if it increases the property value "more often" than it decreases it, or vice versa. Thus, the distribution of increasing and decreasing pairs should be significantly different from the binomial ("no effect") distribution with a particular p-value (usually 0.05); a minimal worked example of this test is sketched below. Significance of MMP based analysis MMP based analysis is an attractive method for computational analysis because MMPs can be algorithmically generated and they make it possible to associate defined structural modifications at the level of compound pairs with chemical property changes, including biological activity. Interpretable QSAR models MMPA is quite useful in the field of quantitative structure–activity relationship (QSAR) modelling studies. One of the issues with QSAR models is that they are difficult to interpret in a chemically meaningful manner. While simple linear regression models can be fairly easy to interpret, the most powerful algorithms, such as neural networks and support vector machines, are similar to "black boxes", which provide predictions that can't be easily interpreted. This problem undermines the applicability of QSAR models in helping the medicinal chemist to make decisions. If a compound is predicted to be active against some microorganism, what are the driving factors of its activity? Or, if it is predicted to be inactive, how can its activity be modulated? The black box nature of the QSAR model prevents it from addressing these crucial issues. The use of predicted MMPs makes it possible to interpret models and identify which MMPs were learned by the model. The MMPs which were not reproduced by the model could correspond to experimental errors or deficiencies of the model (inappropriate descriptors, too few data, etc.). 
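The significance criterion described above (does a transformation increase a property more often than it decreases it, compared with a 50/50 "no effect" expectation?) amounts to an ordinary two-sided binomial test. The Python sketch below illustrates it with SciPy; the pair counts are made-up values, not data from any published MMPA study.

```python
from scipy.stats import binomtest

# Made-up example: among 40 matched molecular pairs sharing one transformation
# (say, H -> Cl at a given position), the property of interest increased in 31
# pairs and decreased in 9. Under the "no effect" null hypothesis, increases
# and decreases are equally likely (p = 0.5).
n_increase, n_decrease = 31, 9

result = binomtest(n_increase, n=n_increase + n_decrease, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.4f}")

if result.pvalue < 0.05:
    print("transformation counts as significant at p < 0.05 for this hypothetical dataset")
else:
    print("no significant directional effect detected")
```

In a real analysis the same test would be repeated for each transformation extracted from the database, typically keeping only transformations backed by a reasonable number of matched pairs, as noted below for unsupervised MMPA.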
Analysis of MMPs (matched molecular pairs) can be very useful for understanding the mechanism of action. A medicinal chemist might be particularly interested in "activity cliffs". An activity cliff is a minor structural modification which changes the target activity significantly. Activity Cliff Activity cliffs are pairs or groups of compounds that are highly similar in structure but have large differences in potency towards the same target. Activity cliffs have received great attention in computational chemistry and drug discovery as they represent a discontinuity in the structure-activity relationship (SAR). This discontinuity also indicates high SAR information content, because small chemical changes in the set of similar compounds lead to large changes in activity. The assessment of activity cliffs requires careful consideration of similarity and potency difference criteria. Types of MMP based analysis Matched molecular pair analyses (MMPA) can be classified into two types: supervised and unsupervised MMPA. Supervised MMPA In supervised MMPA, the chemical transformations are predefined; the corresponding matched pair compounds are then found within the data set and the change in end point is computed for each transformation. Unsupervised MMPA Also known as automated MMPA. A machine learning algorithm is used to find all possible matched pairs in a data set according to a set of predefined rules. This results in much larger numbers of matched pairs and unique transformations, which are typically filtered during the process to identify those transformations that correspond to statistically significant changes in the targeted property with a reasonable number of matched pairs. Matched molecular series Here, instead of looking at a pair of molecules which differ only at one point, a series of more than two molecules differing at a single point is considered. The concept of matching molecular series was introduced by Wawer and Bajorath. It is argued that longer matched series are more likely to exhibit a preferred molecular transformation, while matched pairs exhibit only a small preference. Limitations The application of MMPA across large chemical databases for the optimization of ligand potency is problematic because the same structural transformation may increase, decrease, or not affect the potency of different compounds in the dataset. Selection of practically significant transformations from a dataset of molecules is a challenging issue in MMPA. Moreover, the effect of a particular molecular transformation can significantly depend on the chemical context of the transformation. Besides this, MMPA might pose some limitations in terms of computational resources, especially when dealing with databases of compounds with a large number of breakable bonds. Further, more atoms in the variable part of the molecule also lead to combinatorial explosion problems. To deal with this, the number of breakable bonds and the number of atoms in the variable part can be used to pre-filter the database. References Cheminformatics Biostatistics
Matched molecular pair analysis
Chemistry
1,196
2,624,245
https://en.wikipedia.org/wiki/Autoclaved%20aerated%20concrete
Autoclaved aerated concrete (AAC) is a lightweight, precast, cellular concrete building material. It is eco-friendly and suitable for producing concrete-like blocks. It is composed of quartz sand, calcined gypsum, lime, portland cement, water, and aluminium powder. AAC products are cured under heat and pressure in an autoclave. Developed in the mid-1920s, AAC provides thermal insulation as well as fire and mold resistance. Forms include blocks, wall panels, floor and roof panels, cladding (façade) panels and lintels. AAC products are used in construction, such as industrial buildings, residential houses, apartment buildings, and townhouses. Their applications include exterior and interior walls, firewalls, wet room walls, diffusion-open thermal insulation boards, intermediate floors, upper floors, stairs, opening crossings, beams, and pillars. Exterior uses require an applied finish to guard against weathering, such as a polymer-modified stucco or plaster compound, or a covering of siding materials such as natural or manufactured stone, veneer brick, metal, or vinyl siding. AAC materials can be routed, sanded, or cut to size on-site using a hand saw and standard power tools with carbon steel cutters. Names Autoclaved aerated concrete is also known by various other names, including autoclaved cellular concrete (ACC), autoclaved concrete, cellular concrete, porous concrete, Aircrete, Thermalite, Hebel, Aercon, Starken, Gasbeton, Airbeton, Durox, Siporex (silicon pore expansion), Suporex, H+H, and Ytong. History AAC was first created in the mid-1920s by the Swedish architect and inventor Dr. Johan Axel Eriksson (1888–1961), along with Professor Henrik Kreüger at the Royal Institute of Technology. The process was patented in 1924. In 1929, production started in Sweden in the city of Yxhult. "Yxhults Ånghärdade Gasbetong" later became the first registered building materials brand in the world: Ytong. Another brand, "Siporex", was established in Sweden in 1939, though following WWII activities were reduced to a minimum level, and no new plants have been built since the 1990s. Josef Hebel of Memmingen established another cellular concrete brand, Hebel, which opened its first plant in Germany in 1943. Ytong AAC was originally produced in Sweden using alum shale, which contained combustible carbon beneficial to the production process. However, these deposits were found to contain natural uranium, which decays over time to radon, which then accumulates in structures where the AAC was used. This problem was addressed in 1972 by the Swedish Radiation Safety Authority, and by 1975, Ytong abandoned alum shale in favor of a formulation made from quartz sand, calcined gypsum, lime (mineral), cement, water and aluminium powder, which is currently in use by most major brands. In 1978, Siporex Sweden opened the Siporex Factory in Saudi Arabia, establishing the Lightweight Construction Company - Siporex - LCC SIPOREX, targeting markets in the Middle East, Africa, and Japan. Currently LCC SIPOREX has three branches in Saudi Arabia. Today, the production of AAC is widespread, concentrated in Europe and Asia with some facilities located in the Americas. Egypt has the sole manufacturing plant in Africa. Although the European AAC market has seen a reduction in growth, Asia is experiencing a rapid expansion in the industry, driven by an escalating need for residential and commercial spaces. Currently, China has the largest Aircrete market globally, with several hundred manufacturing plants. 
The most significant AAC production and consumption occur in China, Central Asia, India, and the Middle East, reflecting the dynamic growth and demand in these regions. Like other masonry materials, the product Aircrete is sold under many different brand names. Ytong and Hebel are brands of the international operating company Xella, headquartered in Duisburg. Other brand names in Europe are H+H Celcon (Denmark) and Solbet (Poland). Uses AAC is a concrete-based material used for both exterior and interior construction. One of its advantages is quick and easy installation because the material can be routed, sanded, or cut to size on-site using a hand saw and standard power tools with carbon steel cutters. AAC is well suited for high-rise buildings and those with high temperature variations. Due to its lower density, high-rise buildings constructed using AAC require less steel and concrete for structural members. The mortar needed for laying AAC blocks is reduced due to the lower number of joints. Similarly, less material is required for rendering, because AAC can be shaped precisely before installation. Even though regular cement mortar can be used, most buildings that use AAC materials use thin bed mortar in thicknesses around , depending on the national building codes. Manufacturing Unlike most other concrete applications, AAC is produced using no aggregate larger than sand. Quartz sand (SiO2), calcined gypsum, lime (mineral) and/or cement and water are used as a binding agent. Aluminum powder is used at a rate of 0.05%–0.08% by volume (depending on the pre-specified density). In some countries, like India and China, fly ash generated from coal-fired power plants, and having 50–65% silica content, is used as an aggregate. When AAC is mixed and cast in forms, aluminium powder reacts with calcium hydroxide and water to form hydrogen. The hydrogen gas foams and doubles the volume of the raw mix creating gas bubbles up to in diameter—it has been described as having bubbles inside like "a chocolate Aero bar". At the end of the foaming process, the hydrogen escapes into the atmosphere and is replaced by air, leaving a product as light as 20% of the weight of conventional concrete. When the forms are removed from the material, it is solid but still soft. It is then cut into either blocks or panels and placed in an autoclave chamber for 12 hours. During this steam pressure hardening process, when the temperature reaches and the pressure reaches , quartz sand reacts with calcium hydroxide to form calcium silicate hydrate, which gives AAC its high strength and other unique properties. Because of the relatively low temperature used, AAC blocks are not considered to be a fired brick but a lightweight concrete masonry unit. After the autoclaving process, the material is stored and shipped to construction sites for use. Depending on its density, up to 80% of the volume of an AAC block is air. AAC's low density also accounts for its low structural compression strength. It can carry loads of up to , approximately 50% of the compressive strength of regular concrete. In 1978, the first AAC material factory - the LCC Siporex- Lightweight Construction Company - was opened in the Persian Gulf state of Saudi Arabia, supplying Gulf Cooperation Council countries with aerated blocks and panels. Since 1980, there has been a worldwide increase in the use of AAC materials. New production plants are being built in Australia, Bahrain, China, Eastern Europe, India and the United States. 
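The aeration and autoclaving steps described in the Manufacturing section above can be sketched with two idealized equations. These are a simplified illustration rather than a complete description of AAC chemistry: the aluminium–lime reaction is the commonly cited source of the hydrogen foaming gas, and the autoclave reaction is written here with tobermorite, one representative calcium silicate hydrate phase, as the product; the actual phase composition and stoichiometry vary with the mix.

```latex
% Aeration (foaming): aluminium powder reacts with lime and water,
% liberating hydrogen gas that expands the mix
2\,\mathrm{Al} + 3\,\mathrm{Ca(OH)_2} + 6\,\mathrm{H_2O}
    \longrightarrow 3\,\mathrm{CaO \cdot Al_2O_3 \cdot 6H_2O} + 3\,\mathrm{H_2}\uparrow

% Autoclaving (steam curing): quartz sand reacts with calcium hydroxide to
% form a calcium silicate hydrate (shown here as 1.1 nm tobermorite)
6\,\mathrm{SiO_2} + 5\,\mathrm{Ca(OH)_2}
    \longrightarrow \mathrm{Ca_5Si_6O_{16}(OH)_2 \cdot 4H_2O}
```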
AAC is increasingly used worldwide by developers. Reinforced autoclaved aerated concrete Reinforced autoclaved aerated concrete (RAAC) is a reinforced version of autoclaved aerated concrete, commonly used in roofing and wall construction. The first structural reinforced roof and floor panels were manufactured in Sweden, soon after the first autoclaved aerated concrete block plant started up there in 1929, but Belgian and German technologies became market leaders for RAAC elements after the Second World War. In Europe, it gained popularity in the mid-1950s as a cheaper and more lightweight alternative to conventional reinforced concrete, with documented widespread use in a number of European countries as well as Japan and former territories of the British Empire. RAAC was used in roof, floor and wall construction due to its lighter weight and lower cost compared to traditional concrete, and has good fire resistance properties; it does not require plastering to achieve good fire resistance and fire does not cause spalls. RAAC was used in construction in Europe, in buildings constructed after the mid-1950s. RAAC elements have also been used in Japan as walling units owing to their good behaviour in seismic conditions. RAAC has been shown to have limited structural reinforcement bar (rebar) integrity in 40 to 50 year-old RAAC roof panels, which began to be observed in the 1990s. The material is liable to fail without visible deterioration or warning. This is often caused by RAAC's high susceptibility to water infiltration due to its porous nature, which causes corrosion of internal reinforcements in ways that are hard to detect. This places increased tensile stress on the bond between the reinforcement and concrete, lowering the material's service life. Detailed risk analyses are required on a structure-by-structure basis to identify areas in need of maintenance and lower the chance of catastrophic failure. Professional engineering concern about the structural performance of RAAC was first publicly raised in the United Kingdom in 1995 following inspections of cracked units in British school roofs, and was subsequently reinforced in 2022 when the Government Property Agency declared the material to be life-expired, and in 2023 when, following the partial or total closure of 174 schools at risk of a roofing collapse, other buildings were found to have issues with their RAAC construction, with some of these only being discovered to have been made from RAAC during the crisis. During the 2023 crisis, it was observed that it was likely for RAAC in other countries to exhibit problems similar to those found in the United Kingdom. The original site of the Ontario Science Centre in Toronto, Canada, a major museum with similar roof construction, was ordered permanently closed 21 June 2024 because of severely deteriorated roof panels dating from its opening in 1969. While repair options were proposed, the centre's ultimate owner, the provincial government of Ontario, had previously announced plans to relocate the centre and therefore requested the facility be closed immediately rather than paying for repairs. Approximately 400 other public buildings in Ontario are understood to contain the material and are under review, but no other closures were anticipated at the time of the Science Centre closure. 
Eco-friendliness The high resource efficiency of autoclaved aerated concrete contributes to a lower environmental impact than conventional concrete, from raw material processing to the disposal of aerated concrete waste. Due to continuous improvements in efficiency, the production of aerated concrete blocks requires relatively little raw material per m3 of product, about five times less than the production of other building materials. There is no loss of raw materials in the production process, and all production waste is returned to the production cycle. Production of aerated concrete requires less energy than that of all other masonry products, thereby reducing the use of fossil fuels and the associated carbon dioxide (CO2) emissions. The curing process also saves energy, as the steam curing takes place at relatively low temperatures and the hot steam generated in the autoclaves is reused for subsequent batches. Advantages AAC has been produced for more than 70 years and has several advantages over other cement construction materials, one of the most important being its lower environmental impact. Improved thermal efficiency reduces the heating and cooling load in buildings. Its porous structure gives superior fire resistance. Workability allows accurate cutting, which minimizes the generation of solid waste during use. It is eco-friendly, does not pollute the environment, and counts as a green building material toward a LEED rating. Resource efficiency gives it a lower environmental impact than conventional concrete in all phases, from the processing of raw materials to the ultimate disposal of waste. The lighter weight makes the blocks easier to handle, saves cost and energy in transportation, reduces labour expenses, and increases a building's chances of surviving seismic activity. Larger block sizes lead to faster masonry work and reduce project costs for large constructions. Fire-resistant: AAC, like other concretes, is fire-resistant. Ease of handling: AAC blocks are lightweight, making them easier to lift, carry, and install, which streamlines construction and further improves efficiency compared to traditional materials. Good ventilation: the material is very airy and allows the diffusion of water, reducing humidity inside the building. AAC absorbs moisture and releases humidity, helping to prevent condensation and other problems related to mildew. Non-toxic: there are no toxic gases or other toxic substances in autoclaved aerated concrete. It does not attract rodents or other pests, and cannot be damaged by them. Accuracy: panels and blocks made of autoclaved aerated concrete are produced to the exact sizes needed before leaving the factory, so there is less need for on-site trimming. Since the blocks and panels fit so well together, less finishing material such as mortar is needed. Long-lasting: the material has a long service life because it is not affected by harsh climates or extreme weather changes and does not degrade under normal climatic conditions. Disadvantages AAC has been produced for more than 70 years. However, some disadvantages were found when it was introduced in the UK (where double-leaf masonry, also known as cavity walls, is the norm). The process of using AAC is somewhat complex, so builders have to undergo special training. Non-structural shrinkage cracks may appear in AAC blocks after installation in rainy weather or humid environments. This is more likely in poor-quality blocks that were not properly steam-cured.
However, most AAC block manufacturers are certified and their blocks are tested in certified labs, so poor-quality blocks are rare. Brittleness: the blocks require more care than clay bricks to avoid breakage during handling and transport. Fixings: the brittle nature of the blocks requires longer, thinner screws when fitting cabinets and wall hangings. Special wall fasteners (screw wall plug anchors) designed for autoclaved aerated concrete, as well as for gypsum board and plaster tiles, are available at a higher cost than standard expandable wall plugs, including special safety-relevant anchors for high load bearing. It is recommended that fixing holes be drilled using HSS drill bits at a steady, constant speed without hammer action. Masonry drill bits and standard expandable wall plugs are not suitable for use with AAC blocks. Using European standard density (400 kg/m3, B2,5), AAC blocks alone would require very thick walls (500 mm or thicker) to achieve the insulation levels required by newer building codes in Northern Europe (see the illustrative sketch below). References External links History of Autoclaved Aerated Concrete Using Autoclaved Aerated Concrete Correctly - Masonry Magazine, June 2008 Aircrete Products Association Autoclaved Aerated Concrete - Portland Cement Association Building materials Concrete Masonry Swedish inventions 1929 introductions
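The wall-thickness remark in the Disadvantages section above can be made concrete with a rough one-dimensional heat-loss estimate. The sketch below is illustrative only: the conductivity figure for low-density AAC and the target U-value are assumed, typical-order values rather than numbers taken from this article, and surface resistances, mortar joints, and finishes are ignored.

```python
# Rough single-layer U-value estimate for an AAC wall (illustrative assumptions).
LAMBDA_AAC = 0.11  # W/(m*K): assumed conductivity of ~400 kg/m3 AAC
TARGET_U = 0.20    # W/(m2*K): assumed wall target in newer Northern European codes

def u_value(thickness_m, conductivity=LAMBDA_AAC):
    """U-value of a homogeneous layer, ignoring surface resistances."""
    return conductivity / thickness_m

def required_thickness(target_u, conductivity=LAMBDA_AAC):
    """Wall thickness needed to reach a target U-value."""
    return conductivity / target_u

print(f"0.50 m AAC wall: U = {u_value(0.50):.2f} W/(m2*K)")
print(f"Thickness for U = {TARGET_U}: {required_thickness(TARGET_U):.2f} m")
```

With these assumptions a half-metre wall lands near U = 0.22 W/(m2*K), which is why single-leaf AAC walls of 500 mm or more are quoted above.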
Autoclaved aerated concrete
Physics,Engineering
3,036
49,931,215
https://en.wikipedia.org/wiki/Tardiness
Tardiness is the habit of being late or delaying arrival. Being late as a form of misconduct may be formally punishable in various arrangements, such as workplace, school, etc. An opposite personality trait is punctuality. Workplace tardiness United States Workplace tardiness is one attendance issue, along with the absence from work and failure to properly notify about absence or being late. To be at work on time is an implied obligation unless stated otherwise. It is a legal reason for discharge in cases when it is a demonstrable disregard of duty: repeated tardiness without compelling reasons, tardiness associated with other misconduct, and single inexcusable tardiness resulted in grave loss of employer's interests. If tardiness is minor or without interference with employer's operations, it is not to be legally considered as misconduct. Characteristics of tardy people Diana DeLonzor in her book Never Be Late Again: 7 Cures for the Punctually Challenged classified habitually tardy people into seven categories: a "rationalizer" insists on blaming the circumstances instead of acknowledging responsibility for tardiness. a "producer" tries to do as much as possible in time available and as a result has difficulties with too tight schedules. a "deadliner" enjoys the adrenaline rush during the attempts to beat the time target. an "indulger" has little self-control. for a "rebel" running late is defying the authority and the rules. an "absent-minded professor". an "evader" puts a higher priority to their own needs compared to being on time. Ethnic stereotypes There are several stereotypes that attribute tardiness to certain cultures. Those may be due to polychronic time. African time is the perceived cultural tendency toward a more relaxed attitude to time among Africans both in Africa and abroad. It is generally used in a pejorative and racist sense about tardiness in appointments, meetings, and events. CP Time (from "Colored People's Time") is a dated American expression similarly referring to a stereotype of African Americans as frequently being late. "Fiji Time" is a local saying in Fiji to refer to the habit of tardiness and the slow pace on the island, and the term is widely used by tourist focused businesses both in advertising and products and souvenirs. "Filipino Time" refers to the perceived habitual tardiness of Filipinos. It bears similarities with "African Time" and "CP Time" and the term is usually used in a pejorative sense as one of the defining negative traits of the Filipino. Filipino theologian José M. De Mesa pointed out that the widespread acceptance of "Filipino Time" as one of the traits that defines the Filipino is an example of successful internalization of the negative image of Filipinos as perceived by the Spanish and American colonizers. He argued that the persistence of this colonial self-image among Filipinos contributed to the weakening of their corporate cultural self and to the undermining of their growth, as it compelled many Filipinos to reject themselves and to be ashamed of their identity. He also noted that a local theologian was surprised to discover that many of the writings concerning Filipino self-identity mostly focused on the negative and disparaging traits such as "Filipino time", which is an evidence of the seeming penchant of Filipinos for self-flagellation. 
Some sources identify the origins of the Filipino's lack of punctuality to the Spanish colonial period, as arriving late was considered to be a sign of status back then, as depicted in a scene in Chapter 22 of José Rizal's novel El Filibusterismo. An alternative interpretation of "Filipino time" sets aside its negative connotations by considering the very concept as an example case of the unsuccessful attempt at imposing Western cultural standards (such as the notion of "time") on Filipino and other non-Western cultures and thus as a successful tool of national resistance. In some cases, however, this tardiness can be deliberately used as a form of showcasing power. The 1976 National Artist of the Philippines for Literature Nick Joaquin challenged the narrative of Spanish colonial roots of "Filipino time", instead identifying its origins in the pre-colonial culture of timelessness before the introduction of the "foreign tyrant clock" during the Spanish era, and thus to the local resistance against the transition from the pre-colonial clockless society to the foreign-imposed clock-based culture. Another related term is the "mañana habit" (; sometimes informally called as mamaya na) which denotes procrastination of Filipinos to do work or an activity mamaya na (later). Mañana attitude: The lax attitude to time is also attributed to South America and the procrastination is described by the euphemism "Mañana!", which literally means "tomorrow", but, as a joke goes, it is "anytime between tomorrow and never". In March 2007 the government of Peru announced the campaign, "La Hora sin Demora," or "Time without Delay" to combat the lateness habit known in the country as "hora peruana," or "Peruvian time". The habit of being late of former President of Peru, Alejandro Toledo was known as "Cabana time" after his place of birth. A term was coined, "Mañanaland", and used in several titles, e.g., Mañanaland (2020) by Pam Muñoz Ryan, The Gringo in Mañanaland (1995) by DeeDee Halleck, Mañanaland; adventuring with camera and rifle through California in Mexico (1928) by John Cudahy, A Gringo in Mañana-Land (1924) by Harry L. Foster, or Stories from Mañana Land (1922) by May Carr Hanley. Other terms referring to a loose attitude to time include "Hawaiian time" and "island time", as well as "Desi Standard Time" and "NDN time". See also Absenteeism Procrastination Time management Tardiness (scheduling) References Misconduct Legal terminology Time management
Tardiness
Physics
1,253
13,708,205
https://en.wikipedia.org/wiki/Ginsenoside
Ginsenosides or panaxosides are a class of natural product steroid glycosides and triterpene saponins. Compounds in this family are found almost exclusively in the plant genus Panax (ginseng), which has a long history of use in traditional medicine that has led to the study of pharmacological effects of ginseng compounds. As a class, ginsenosides exhibit a large variety of subtle and difficult-to-characterize biological effects when studied in isolation. Ginsenosides can be isolated from various parts of the plant, though typically from the roots, and can be purified by column chromatography. The chemical profiles of Panax species are distinct; although Asian ginseng, Panax ginseng, has been most widely studied due to its use in traditional Chinese medicine, there are ginsenosides unique to American ginseng (Panax quinquefolius) and Japanese ginseng (Panax japonicus). Ginsenoside content also varies significantly due to environmental effects. The leaves and stems have emerged as a more abundant and easier-to-extract source of ginsenosides. Nomenclature Ginsenosides are named according to their retention factor in thin layer chromatography (TLC). The letter or number after R is a serial indication of the retention factor, with '0' being most polar, followed by 'a' for the second-most polar, to 'h' being a fairly non-polar ginsenoside. Some of these groups turn out to consist of several molecules and are further broken down with numbers: for example, Ra1 is more polar than Ra2. Terms such as "20-gluco-f" may be used to indicate further modification. A different nomenclature is applied to so-called pseudoginsenosides and notoginsenosides. The difference in name reflects more about the circumstances of their discovery than about their chemical nature. Classification and structure Ginsenosides can be broadly divided into two groups based on the carbon skeletons of their aglycones: the four-ring dammarane family, which contains the majority of known ginsenosides, and the oleanane family. The dammaranes are further subdivided into two main groups, the protopanaxadiols (PPDs) and protopanaxatriols (PPTs), with other smaller groups such as the ocotillol-type pseudoginsenoside F11 and its derivatives. Each ginsenoside bears at least two or three hydroxyl groups, at the carbon-3 and -20 positions or at the carbon-3, -6, and -20 positions, respectively. In protopanaxadiols, sugar groups attach to the carbon-3 position of the carbon skeleton, while in protopanaxatriols sugar groups attach to the carbon-6 position. Well-known protopanaxadiols include Rb1, Rb2, Rc, Rd, Rg3, Rh2, and Rh3. Well-known protopanaxatriols include Re, Rg1, Rg2, and Rh1. Ginsenosides that are members of the oleanane family are pentacyclic, composed of a five-ring carbon skeleton. R0 (also written Ro) is an example. Biosynthesis The biosynthetic pathway of ginsenosides starts in a way common to most steroids, from squalene to 2,3-oxidosqualene via the action of squalene epoxidase, at which point dammaranes can be synthesized through dammarenediol synthase and oleananes through beta-amyrin synthase. As of 2021, the full conversion pathways to protopanaxadiol, protopanaxatriol, and oleanolic acid are known, with each step having been assigned at least one gene. Ocotillol synthesis remains unclear: 2,3-oxidosqualene is believed to first be converted into 2,3,22,23-dioxidosqualene. An unknown oxidosqualene cyclase produces 3-epicabraleadiol, which is the immediate precursor to ocotillol.
In the proposed pathway, squalene is synthesized from the assembly of two farnesyl diphosphate (FPP) molecules. Each molecule of FPP is in turn the product of two molecules of dimethylallyl diphosphate and two molecules of isopentenyl diphosphate (IPP). IPP is produced by the mevalonic pathway in the cytosol of a ginseng plant cell and by the methylerythritol phosphate pathway in the plant's plastid. Many UGT enzymes found in the genome of various Panax species are known to be responsible for attaching sugars onto the sterol skeleton, producing ginsenosides. A handful of reactions still don't have an identified UGT. Enzymes responsible for attaching other side chains such as acidic groups and acyls are not yet identified. Ginsenosides likely serve as mechanisms for plant defense. Exposing in vitro cultures of ginseng cells to the plant defense signal methyl jasmonate causes increased production of ginsenosides. Ginsenosides have been found to have both antimicrobial and antifungal properties. Ginsenoside molecules are naturally bitter-tasting and discourage insects and other animals from consuming the plant. It's also been proposed that ginsenosides may interfere with insect growth by mimicking ecdysteroids, though in Drosophilia fruit flies this mimicking activity actually increases fertility. Chemical reactions Steaming ginseng causes ginsenosides to lose their sugar and malonyl side chains, converting more polar molecules into the rarer (in nature), less-polar ones. This change may be responsible for the different effects attributed to red ginseng vs. white ginseng. The same is true of the pulp of the ginseng fruit. Similarly, heat and acid treatment of the stem and leaves can produce less-polar ginsenosides. In general, the less-polar molecules are believed to be easier to be absorbed and to bind onto cell membranes. Some reports claim a stronger biological activity in vitro. Metabolism Ginseng is generally consumed orally as a dietary supplement, and thus its component ginsenosides may be metabolized by gut flora to less-polar molecules. For example, ginsenosides Rb1 and Rb2 are converted to 20-b-O-glucopyranosyl-20(S)-protopanaxadiol or 20(S)-protopanaxadiol by human gut bacteria. This process is known to vary significantly between individuals. In some cases the metabolites of ginsenosides may be the biologically active compounds. Biological effects Most studies of the biological effects of ginsenosides have been in cell culture or animal models and thus their relevance to human biology is unknown. Effects on the cardiovascular system, central nervous system and immune system have been reported, primarily in rodents. Antiproliferative effects have also been described. Many studies suggest that ginsenosides have antioxidant properties. Ginsenosides have been observed to increase internal antioxidant enzymes and act as a free-radical scavenger. Ginsenosides Rg3 and Rh2 have been observed in cell models as having an inhibitory effect on the cell growth of various cancer cells while studies in animal models have suggested that ginsenosides have neuroprotective properties and could be useful in treating neurodegenerative disease such as Alzheimer's and Parkinson's diseases. Two broad mechanisms of action have been suggested for ginsenoside activity, based on their similarity to steroid hormones. They are amphiphilic and may interact with and change the properties of cell membranes. Some ginsenosides have also been shown to be partial agonists of steroid hormone receptors. 
It is not known how these mechanisms yield the reported biological effects of ginsenosides. The molecules as a class have low bioavailability due to both metabolism and poor intestinal absorption. Sources Although traditionally sourced from the root following folk medicine use, ginsenosides have been isolated from other parts of the plant. The concentration in the stems and leaves of Asian ginseng is 3-6%, compared to just 1-3% in the root. Compared to the root, ginseng fruit pulp contains 7 times the amount of ginsenoside Re and 4 times the amount of total ginsenosides. Cell and tissue culture has also produced significant amounts of ginsenoside, especially when key biosynthetic genes are overexpressed. See also Gintonin Pseudoginsenoside F11 References Saponins Triterpene glycosides
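The aglycone classification described in the Classification and structure section above lends itself to a small lookup table. The sketch below simply encodes the assignments listed in that section; it is illustrative and far from an exhaustive catalogue of known ginsenosides.

```python
# Minimal lookup of well-known ginsenosides by aglycone family, as listed above.
# PPD = protopanaxadiol (sugars at C-3); PPT = protopanaxatriol (sugars at C-6).
AGLYCONE_FAMILY = {
    **{g: "protopanaxadiol (dammarane)" for g in
       ["Rb1", "Rb2", "Rc", "Rd", "Rg3", "Rh2", "Rh3"]},
    **{g: "protopanaxatriol (dammarane)" for g in
       ["Re", "Rg1", "Rg2", "Rh1"]},
    "Ro": "oleanane (pentacyclic)",
}

def classify(name):
    """Return the aglycone family of a ginsenoside, if present in the table."""
    return AGLYCONE_FAMILY.get(name, "not in this minimal table")

for g in ["Rb1", "Re", "Ro", "Rf"]:
    print(g, "->", classify(g))
```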
Ginsenoside
Chemistry
1,844
54,571,234
https://en.wikipedia.org/wiki/Hunza%20diet
The Hunza cuisine, also called the Burusho cuisine, consists of the selective food and drink practices of the Burusho people (also called the Hunza people) of northern Pakistan. Alternative medicine and natural health advocates have argued, without providing any scientific evidence, that the Hunza diet can increase longevity to 120 years. The diet mostly consists of raw food, including nuts, fresh vegetables, dry vegetables, mint, fruits, and seeds, accompanied by yogurt. A cooked meal of daal with chappati is eaten for dinner. Longevity myth In the 1930s, Swiss-German physician Ralph Bircher conducted research on the Hunza diet. In his book about the Hunza, Jay Hoffman argued that, judging by the lifespan ratios of cats, dogs, and horses, humans should live 120 to 150 years, and that the Hunza diet is the key to this longevity. Such ideas, also promoted by natural health advocates, have been discredited. There is no reliable documentation validating the age of alleged Hunza supercentenarians. In 2005, the Encyclopedia of World Geography stated that "to date there is no credible evidence that determines that the Hunzakut diet of old, not to mention the current diet of the past four decades, contributes to longevity." Another myth associated with the Hunza people is that, because their diet is alleged to be high in apricot seeds, they are free from disease. This has proven to be untrue, as medical scientists have found that the Hunzas suffer from a variety of diseases, including cancer. See also Longevity myths Pakistani cuisine References Further reading Kinji Imanishi. (1963). Personality and Health in Hunza Valley. Kyoto University. Gerontology Hunza Longevity myths Nutrient-rich, low calorie diets Pakistani cuisine
Hunza diet
Biology
367
318,370
https://en.wikipedia.org/wiki/RuBisCO
Ribulose-1,5-bisphosphate carboxylase/oxygenase, commonly known by the abbreviations RuBisCo, rubisco, RuBPCase, or RuBPco, is an enzyme () involved in the light-independent (or "dark") part of photosynthesis, including the carbon fixation by which atmospheric carbon dioxide is converted by plants and other photosynthetic organisms to energy-rich molecules such as glucose. It emerged approximately four billion years ago in primordial metabolism prior to the presence of oxygen on Earth. It is probably the most abundant enzyme on Earth. In chemical terms, it catalyzes the carboxylation of ribulose-1,5-bisphosphate (also known as RuBP). Alternative carbon fixation pathways RuBisCO is important biologically because it catalyzes the primary chemical reaction by which inorganic carbon enters the biosphere. While many autotrophic bacteria and archaea fix carbon via the reductive acetyl CoA pathway, the 3-hydroxypropionate cycle, or the reverse Krebs cycle, these pathways are relatively small contributors to global carbon fixation compared to that catalyzed by RuBisCO. Phosphoenolpyruvate carboxylase, unlike RuBisCO, only temporarily fixes carbon. Reflecting its importance, RuBisCO is the most abundant protein in leaves, accounting for 50% of soluble leaf protein in plants (20–30% of total leaf nitrogen) and 30% of soluble leaf protein in plants (5–9% of total leaf nitrogen). Given its important role in the biosphere, the genetic engineering of RuBisCO in crops is of continuing interest (see below). Structure In plants, algae, cyanobacteria, and phototrophic and chemoautotrophic Pseudomonadota (formerly proteobacteria), the enzyme usually consists of two types of protein subunit, called the large chain (L, about 55,000 Da) and the small chain (S, about 13,000 Da). The large-chain gene (rbcL) is encoded by the chloroplast DNA in plants. There are typically several related small-chain genes in the nucleus of plant cells, and the small chains are imported to the stromal compartment of chloroplasts from the cytosol by crossing the outer chloroplast membrane. The enzymatically active substrate (ribulose 1,5-bisphosphate) binding sites are located in the large chains that form dimers in which amino acids from each large chain contribute to the binding sites. A total of eight large chains (= four dimers) and eight small chains assemble into a larger complex of about 540,000 Da. In some Pseudomonadota and dinoflagellates, enzymes consisting of only large subunits have been found. Magnesium ions () are needed for enzymatic activity. Correct positioning of in the active site of the enzyme involves addition of an "activating" carbon dioxide molecule () to a lysine in the active site (forming a carbamate). operates by driving deprotonation of the Lys210 residue, causing the Lys residue to rotate by 120 degrees to the trans conformer, decreasing the distance between the nitrogen of Lys and the carbon of . The close proximity allows for the formation of a covalent bond, resulting in the carbamate. is first enabled to bind to the active site by the rotation of His335 to an alternate conformation. is then coordinated by the His residues of the active site (His300, His302, His335), and is partially neutralized by the coordination of three water molecules and their conversion to −OH. This coordination results in an unstable complex, but produces a favorable environment for the binding of . Formation of the carbamate is favored by an alkaline pH. 
The pH and the concentration of magnesium ions in the fluid compartment (in plants, the stroma of the chloroplast) increases in the light. The role of changing pH and magnesium ion levels in the regulation of RuBisCO enzyme activity is discussed below. Once the carbamate is formed, His335 finalizes the activation by returning to its initial position through thermal fluctuation. Enzymatic activity RuBisCO is one of many enzymes in the Calvin cycle. When Rubisco facilitates the attack of at the C2 carbon of RuBP and subsequent bond cleavage between the C3 and C2 carbon, 2 molecules of glycerate-3-phosphate are formed. The conversion involves these steps: enolisation, carboxylation, hydration, C-C bond cleavage, and protonation. Substrates Substrates for RuBisCO are ribulose-1,5-bisphosphate and carbon dioxide (distinct from the "activating" carbon dioxide). RuBisCO also catalyses a reaction of ribulose-1,5-bisphosphate and molecular oxygen (O2) instead of carbon dioxide (). Discriminating between the substrates and O2 is attributed to the differing interactions of the substrate's quadrupole moments and a high electrostatic field gradient. This gradient is established by the dimer form of the minimally active RuBisCO, which with its two components provides a combination of oppositely charged domains required for the enzyme's interaction with O2 and . These conditions help explain the low turnover rate found in RuBisCO: In order to increase the strength of the electric field necessary for sufficient interaction with the substrates’ quadrupole moments, the C- and N- terminal segments of the enzyme must be closed off, allowing the active site to be isolated from the solvent and lowering the dielectric constant. This isolation has a significant entropic cost, and results in the poor turnover rate. Binding RuBP Carbamylation of the ε-amino group of Lys210 is stabilized by coordination with the . This reaction involves binding of the carboxylate termini of Asp203 and Glu204 to the ion. The substrate RuBP binds displacing two of the three aquo ligands. Enolisation Enolisation of RuBP is the conversion of the keto tautomer of RuBP to an enediol(ate). Enolisation is initiated by deprotonation at C3. The enzyme base in this step has been debated, but the steric constraints observed in crystal structures have made Lys210 the most likely candidate. Specifically, the carbamate oxygen on Lys210 that is not coordinated with the Mg ion deprotonates the C3 carbon of RuBP to form a 2,3-enediolate. Carboxylation Carboxylation of the 2,3-enediolate results in the intermediate 3-keto-2-carboxyarabinitol-1,5-bisphosphate and Lys334 is positioned to facilitate the addition of the substrate as it replaces the third -coordinated water molecule and add directly to the enediol. No Michaelis complex is formed in this process. Hydration of this ketone results in an additional hydroxy group on C3, forming a gem-diol intermediate. Carboxylation and hydration have been proposed as either a single concerted step or as two sequential steps. Concerted mechanism is supported by the proximity of the water molecule to C3 of RuBP in multiple crystal structures. Within the spinach structure, other residues are well placed to aid in the hydration step as they are within hydrogen bonding distance of the water molecule. C-C bond cleavage The gem-diol intermediate cleaves at the C2-C3 bond to form one molecule of glycerate-3-phosphate and a negatively charged carboxylate. 
Stereo specific protonation of C2 of this carbanion results in another molecule of glycerate-3-phosphate. This step is thought to be facilitated by Lys175 or potentially the carbamylated Lys210. Products When carbon dioxide is the substrate, the product of the carboxylase reaction is an unstable six-carbon phosphorylated intermediate known as 3-keto-2-carboxyarabinitol-1,5-bisphosphate, which decays rapidly into two molecules of glycerate-3-phosphate. This product, also known as 3-phosphoglycerate, can be used to produce larger molecules such as glucose. When molecular oxygen is the substrate, the products of the oxygenase reaction are phosphoglycolate and 3-phosphoglycerate. Phosphoglycolate is recycled through a sequence of reactions called photorespiration, which involves enzymes and cytochromes located in the mitochondria and peroxisomes (this is a case of metabolite repair). In this process, two molecules of phosphoglycolate are converted to one molecule of carbon dioxide and one molecule of 3-phosphoglycerate, which can reenter the Calvin cycle. Some of the phosphoglycolate entering this pathway can be retained by plants to produce other molecules such as glycine. At ambient levels of carbon dioxide and oxygen, the ratio of the reactions is about 4 to 1, which results in a net carbon dioxide fixation of only 3.5. Thus, the inability of the enzyme to prevent the reaction with oxygen greatly reduces the photosynthetic capacity of many plants. Some plants, many algae, and photosynthetic bacteria have overcome this limitation by devising means to increase the concentration of carbon dioxide around the enzyme, including carbon fixation, crassulacean acid metabolism, and the use of pyrenoid. Rubisco side activities can lead to useless or inhibitory by-products. Important inhibitory by-products include xylulose 1,5-bisphosphate and glycero-2,3-pentodiulose 1,5-bisphosphate, both caused by "misfires" halfway in the enolisation-carboxylation reaction. In higher plants, this process causes RuBisCO self-inhibition, which can be triggered by saturating and RuBP concentrations and solved by Rubisco activase (see below). Rate of enzymatic activity Some enzymes can carry out thousands of chemical reactions each second. However, RuBisCO is slow, fixing only 3–10 carbon dioxide molecules each second per molecule of enzyme. The reaction catalyzed by RuBisCO is, thus, the primary rate-limiting factor of the Calvin cycle during the day. Nevertheless, under most conditions, and when light is not otherwise limiting photosynthesis, the speed of RuBisCO responds positively to increasing carbon dioxide concentration. RuBisCO is usually only active during the day, as ribulose 1,5-bisphosphate is not regenerated in the dark. This is due to the regulation of several other enzymes in the Calvin cycle. In addition, the activity of RuBisCO is coordinated with that of the other enzymes of the Calvin cycle in several other ways: By ions Upon illumination of the chloroplasts, the pH of the stroma rises from 7.0 to 8.0 because of the proton (hydrogen ion, ) gradient created across the thylakoid membrane. The movement of protons into thylakoids is driven by light and is fundamental to ATP synthesis in chloroplasts (Further reading: Photosynthetic reaction centre; Light-dependent reactions). To balance ion potential across the membrane, magnesium ions () move out of the thylakoids in response, increasing the concentration of magnesium in the stroma of the chloroplasts. 
RuBisCO has a high optimal pH (can be >9.0, depending on the magnesium ion concentration) and, thus, becomes "activated" by the introduction of carbon dioxide and magnesium to the active sites as described above. By RuBisCO activase In plants and some algae, another enzyme, RuBisCO activase (Rca, , ), is required to allow the rapid formation of the critical carbamate in the active site of RuBisCO. This is required because ribulose 1,5-bisphosphate (RuBP) binds more strongly to the active sites of RuBisCO when excess carbamate is present, preventing processes from moving forward. In the light, RuBisCO activase promotes the release of the inhibitory (or — in some views — storage) RuBP from the catalytic sites of RuBisCO. Activase is also required in some plants (e.g., tobacco and many beans) because, in darkness, RuBisCO is inhibited (or protected from hydrolysis) by a competitive inhibitor synthesized by these plants, a substrate analog 2-carboxy-D-arabitinol 1-phosphate (CA1P). CA1P binds tightly to the active site of carbamylated RuBisCO and inhibits catalytic activity to an even greater extent. CA1P has also been shown to keep RuBisCO in a conformation that is protected from proteolysis. In the light, RuBisCO activase also promotes the release of CA1P from the catalytic sites. After the CA1P is released from RuBisCO, it is rapidly converted to a non-inhibitory form by a light-activated CA1P-phosphatase. Even without these strong inhibitors, once every several hundred reactions, the normal reactions with carbon dioxide or oxygen are not completed; other inhibitory substrate analogs are still formed in the active site. Once again, RuBisCO activase can promote the release of these analogs from the catalytic sites and maintain the enzyme in a catalytically active form. However, at high temperatures, RuBisCO activase aggregates and can no longer activate RuBisCO. This contributes to the decreased carboxylating capacity observed during heat stress. By activase The removal of the inhibitory RuBP, CA1P, and the other inhibitory substrate analogs by activase requires the consumption of ATP. This reaction is inhibited by the presence of ADP, and, thus, activase activity depends on the ratio of these compounds in the chloroplast stroma. Furthermore, in most plants, the sensitivity of activase to the ratio of ATP/ADP is modified by the stromal reduction/oxidation (redox) state through another small regulatory protein, thioredoxin. In this manner, the activity of activase and the activation state of RuBisCO can be modulated in response to light intensity and, thus, the rate of formation of the ribulose 1,5-bisphosphate substrate. By phosphate In cyanobacteria, inorganic phosphate (Pi) also participates in the co-ordinated regulation of photosynthesis: Pi binds to the RuBisCO active site and to another site on the large chain where it can influence transitions between activated and less active conformations of the enzyme. In this way, activation of bacterial RuBisCO might be particularly sensitive to Pi levels, which might cause it to act in a similar way to how RuBisCO activase functions in higher plants. By carbon dioxide Since carbon dioxide and oxygen compete at the active site of RuBisCO, carbon fixation by RuBisCO can be enhanced by increasing the carbon dioxide level in the compartment containing RuBisCO (chloroplast stroma). Several times during the evolution of plants, mechanisms have evolved for increasing the level of carbon dioxide in the stroma (see carbon fixation). 
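Because carboxylation and oxygenation compete for the same active site, it helps to set the two overall reactions side by side and to spell out the small bookkeeping behind the figures quoted in the Products section above. The equations below are the standard net reactions; the last line simply combines the stated 4:1 carboxylation-to-oxygenation ratio with the statement that photorespiration returns one CO2 for every two phosphoglycolate molecules, giving the net fixation of about 3.5 CO2 per five catalytic events.

```latex
% Competing overall reactions at the RuBisCO active site
\text{carboxylation:}\quad \mathrm{RuBP} + \mathrm{CO_2} + \mathrm{H_2O}
    \longrightarrow 2 \times \text{3-phosphoglycerate}

\text{oxygenation:}\quad \mathrm{RuBP} + \mathrm{O_2}
    \longrightarrow \text{3-phosphoglycerate} + \text{2-phosphoglycolate}

% Net CO2 per five catalytic events at a 4:1 ratio, with photorespiration
% releasing one CO2 per two phosphoglycolate molecules
4 \times (+1\ \mathrm{CO_2}) \;+\; 1 \times \left(-\tfrac{1}{2}\ \mathrm{CO_2}\right) \;=\; +3.5\ \mathrm{CO_2}
```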
The use of oxygen as a substrate appears to be a puzzling process, since it seems to throw away captured energy. However, it may be a mechanism for preventing carbohydrate overload during periods of high light flux. This weakness in the enzyme is the cause of photorespiration, such that healthy leaves in bright light may have zero net carbon fixation when the ratio of O2 to available to RuBisCO shifts too far towards oxygen. This phenomenon is primarily temperature-dependent: high temperatures can decrease the concentration of dissolved in the moisture of leaf tissues. This phenomenon is also related to water stress: since plant leaves are evaporatively cooled, limited water causes high leaf temperatures. plants use the enzyme PEP carboxylase initially, which has a higher affinity for . The process first makes a 4-carbon intermediate compound, hence the name plants, which is shuttled into a site of photosynthesis then decarboxylated, releasing to boost the concentration of . Crassulacean acid metabolism (CAM) plants keep their stomata closed during the day, which conserves water but prevents the light-independent reactions (a.k.a. the Calvin Cycle) from taking place, since these reactions require to pass by gas exchange through these openings. Evaporation through the upper side of a leaf is prevented by a layer of wax. Genetic engineering Since RuBisCO is often rate-limiting for photosynthesis in plants, it may be possible to improve photosynthetic efficiency by modifying RuBisCO genes in plants to increase catalytic activity and/or decrease oxygenation rates. This could improve sequestration of and be a strategy to increase crop yields. Approaches under investigation include transferring RuBisCO genes from one organism into another organism, engineering Rubisco activase from thermophilic cyanobacteria into temperature sensitive plants, increasing the level of expression of RuBisCO subunits, expressing RuBisCO small chains from the chloroplast DNA, and altering RuBisCO genes to increase specificity for carbon dioxide or otherwise increase the rate of carbon fixation. Mutagenesis in plants In general, site-directed mutagenesis of RuBisCO has been mostly unsuccessful, though mutated forms of the protein have been achieved in tobacco plants with subunit C4 species, and a RuBisCO with more C4-like kinetic characteristics have been attained in rice via nuclear transformation. Robust and reliable engineering for yield of RuBisCO and other enzymes in the C3 cycle was shown to be possible, and it was first achieved in 2019 through a synthetic biology approach. One avenue is to introduce RuBisCO variants with naturally high specificity values such as the ones from the red alga Galdieria partita into plants. This may improve the photosynthetic efficiency of crop plants, although possible negative impacts have yet to be studied. Advances in this area include the replacement of the tobacco enzyme with that of the purple photosynthetic bacterium Rhodospirillum rubrum. In 2014, two transplastomic tobacco lines with functional RuBisCO from the cyanobacterium Synechococcus elongatus PCC7942 (Se7942) were created by replacing the RuBisCO with the large and small subunit genes of the Se7942 enzyme, in combination with either the corresponding Se7942 assembly chaperone, RbcX, or an internal carboxysomal protein, CcmM35. Both mutants had increased fixation rates when measured as carbon molecules per RuBisCO. However, the mutant plants grew more slowly than wild-type. 
A recent theory explores the trade-off between the relative specificity (i.e., ability to favour fixation over O2 incorporation, which leads to the energy-wasteful process of photorespiration) and the rate at which product is formed. The authors conclude that RuBisCO may actually have evolved to reach a point of 'near-perfection' in many plants (with widely varying substrate availabilities and environmental conditions), reaching a compromise between specificity and reaction rate. It has been also suggested that the oxygenase reaction of RuBisCO prevents depletion near its active sites and provides the maintenance of the chloroplast redox state. Since photosynthesis is the single most effective natural regulator of carbon dioxide in the Earth's atmosphere, a biochemical model of RuBisCO reaction is used as the core module of climate change models. Thus, a correct model of this reaction is essential to the basic understanding of the relations and interactions of environmental models. Expression in bacterial hosts There currently are very few effective methods for expressing functional plant Rubisco in bacterial hosts for genetic manipulation studies. This is largely due to Rubisco's requirement of complex cellular machinery for its biogenesis and metabolic maintenance including the nuclear-encoded RbcS subunits, which are typically imported into chloroplasts as unfolded proteins. Furthermore, sufficient expression and interaction with Rubisco activase are major challenges as well. One successful method for expression of Rubisco in E. coli involves the co-expression of multiple chloroplast chaperones, though this has only been shown for Arabidopsis thaliana Rubisco. Depletion in proteomic studies Due to its high abundance in plants (generally 40% of the total protein content), RuBisCO often impedes analysis of important signaling proteins such as transcription factors, kinases, and regulatory proteins found in lower abundance (10-100 molecules per cell) within plants. For example, using mass spectrometry on plant protein mixtures would result in multiple intense RuBisCO subunit peaks that interfere and hide those of other proteins. Recently, one efficient method for precipitating out RuBisCO involves the usage of protamine sulfate solution. Other existing methods for depleting RuBisCO and studying lower abundance proteins include fractionation techniques with calcium and phytate, gel electrophoresis with polyethylene glycol, affinity chromatography, and aggregation using DTT, though these methods are more time-consuming and less efficient when compared to protamine sulfate precipitation. Evolution of RuBisCO Phylogenetic studies The chloroplast gene rbcL, which codes for the large subunit of RuBisCO has been widely used as an appropriate locus for analysis of phylogenetics in plant taxonomy. Origin Non-carbon-fixing proteins similar to RuBisCO, termed RuBisCO-like proteins (RLPs), are also found in the wild in organisms as common as Bacillus subtilis. This bacterium has a rbcL-like protein with a 2,3-diketo-5-methylthiopentyl-1-phosphate enolase function, part of the methionine salvage pathway. Later identifications found functionally divergent examples dispersed all over bacteria and archaea, as well as transitionary enzymes performing both RLP-type enolase and RuBisCO functions. 
It is now believed that the current RuBisCO evolved from a dimeric RLP ancestor, acquiring its carboxylase function first before further oligomerizing and then recruiting the small subunit to form the familiar modern enzyme. The small subunit probably first evolved in anaerobic and thermophilic organisms, where it enabled RuBisCO to catalyze its reaction at higher temperatures. In addition to its effect on stabilizing catalysis, it enabled the evolution of higher specificities for CO2 over O2 by modulating the effect that substitutions within RuBisCO have on enzymatic function. Substitutions that do not have an effect without the small subunit suddenly become beneficial when it is bound. Furthermore, the small subunit enabled the accumulation of substitutions that are only tolerated in its presence. Accumulation of such substitutions leads to a strict dependence on the small subunit, which is observed in extant Rubiscos that bind a small subunit. C4 With the widespread convergent evolution of the C4-fixation pathway in a diversity of plant lineages, ancestral C3-type RuBisCO evolved to have faster turnover of CO2 in exchange for lower specificity as a result of the greater localization of CO2 from the mesophyll cells into the bundle sheath cells. This was achieved through enhancement of the conformational flexibility of the "open-closed" transition in the Calvin cycle. Laboratory-based phylogenetic studies have shown that this evolution was constrained by the trade-off between stability and activity brought about by the series of mutations necessary for C4 RuBisCO. Moreover, in order to sustain the destabilizing mutations, the evolution to C4 RuBisCO was preceded by a period in which mutations granted the enzyme increased stability, establishing a buffer to sustain and maintain the mutations required for C4 RuBisCO. To assist with this buffering process, the newly evolved enzyme was found to have further developed a series of stabilizing mutations. While RuBisCO has always been accumulating new mutations, most of the mutations that have survived have not had significant effects on protein stability. The destabilizing C4 mutations on RuBisCO have been sustained by environmental pressures such as low CO2 concentrations, requiring a sacrifice of stability for new adaptive functions. History of the term The term "RuBisCO" was coined humorously in 1979, by David Eisenberg at a seminar honouring the retirement of the early, prominent RuBisCO researcher, Sam Wildman, and also alluded to the snack food trade name "Nabisco" in reference to Wildman's attempts to create an edible protein supplement from tobacco leaves. The capitalization of the name has long been debated. It can be capitalized for each letter of the full name (Ribulose-1,5 bisphosphate carboxylase/oxygenase), but it has also been argued that it should all be in lower case (rubisco), similar to other terms like scuba or laser. See also Carbon cycle Photorespiration Pyrenoid C3 carbon fixation C4 carbon fixation Crassulacean acid metabolism/CAM photosynthesis Carboxysome References Further reading External links Photosynthesis EC 4.1.1
RuBisCO
Chemistry,Biology
5,409
30,001,529
https://en.wikipedia.org/wiki/Exterior%20dimension
In geometry, exterior dimension is a type of dimension that can be used to characterize the scaling behavior of "fat fractals". A fat fractal is defined to be a subset of Euclidean space such that, for every point x of the set and every sufficiently small number ε, the ball of radius ε centered at x contains both a nonzero Lebesgue measure of points belonging to the fractal, and a nonzero Lebesgue measure of points that do not belong to the fractal. For such a set, the Hausdorff dimension is the same as that of the ambient space. The Hausdorff dimension of a set can be computed by "fattening" it (taking its Minkowski sum with a ball of radius ε) and examining how the volume of the resulting fattened set scales with ε, in the limit as ε tends to zero. The exterior dimension is computed in the same way, but looking at the volume of the difference set obtained by subtracting the original set from the fattened set. In the paper introducing exterior dimension, it was claimed that it would be applicable to networks of blood vessels. However, inconsistent behavior of these vessels in different parts of the body, the relatively low number of levels of branching, and the slow convergence of methods based on exterior dimension cast doubt on the practical applicability of this parameter. References Fractals Dimension
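The fattening construction just described can be written out explicitly. The formulas below are a sketch of one standard way to express it; the notation (S for the set, n for the dimension of the ambient Euclidean space, ε for the fattening radius, S_ε for the fattened set) is introduced here for illustration, and the precise definition in the original paper may differ in detail.

```latex
% Dimension from the fattened set (Minkowski / box-counting form)
\dim_{\mathrm{box}}(S) \;=\; n \;-\; \lim_{\varepsilon \to 0}
    \frac{\log \operatorname{vol}(S_\varepsilon)}{\log \varepsilon}

% Exterior dimension: use only the "new" volume added by fattening
\dim_{\mathrm{ext}}(S) \;=\; n \;-\; \lim_{\varepsilon \to 0}
    \frac{\log \operatorname{vol}(S_\varepsilon \setminus S)}{\log \varepsilon}
```

For a fat fractal, vol(S_ε) tends to the positive volume of S itself, so the first limit is 0 and the box dimension equals n, while the exterior dimension can still be fractional because the thin "crust" S_ε \ S shrinks like a power of ε.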
Exterior dimension
Physics,Mathematics
282
34,925,720
https://en.wikipedia.org/wiki/Leucosporidiales
The Leucosporidiales are an order of fungi in the class Microbotryomycetes. The order contains a single family, the Leucosporidiaceae, which in turn contains a single genus, Leucosporidium. The order comprises fungi that are mostly known from their yeast states, though some produce hyphal states in culture that give rise to teliospores from which auricularioid (laterally septate) basidia emerge. References Basidiomycota orders Monotypic fungus orders Yeasts Pucciniomycotina
Leucosporidiales
Biology
119
126,474
https://en.wikipedia.org/wiki/Symmetric%20matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, a matrix A is symmetric if and only if A = A^T. Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if a_ij denotes the entry in the i-th row and j-th column, then A is symmetric when a_ij = a_ji for all indices i and j. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. In linear algebra, a real symmetric matrix represents a self-adjoint operator represented in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them. Example The following 3 × 3 matrix A is symmetric: A = [[1, 7, 3], [7, 4, 5], [3, 5, 2]], since A = A^T. Properties Basic properties The sum and difference of two symmetric matrices is symmetric. This is not always true for the product: given symmetric matrices A and B, the product AB is symmetric if and only if A and B commute, i.e., if AB = BA. For any integer n, A^n is symmetric if A is symmetric. If A^{-1} exists, it is symmetric if and only if A is symmetric. The rank of a symmetric matrix A is equal to the number of non-zero eigenvalues of A. Decomposition into symmetric and skew-symmetric Any square matrix can uniquely be written as the sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let Mat_n denote the space of n × n matrices. If Sym_n denotes the space of n × n symmetric matrices and Skew_n the space of n × n skew-symmetric matrices, then Mat_n = Sym_n + Skew_n and Sym_n ∩ Skew_n = {0}, i.e. Mat_n = Sym_n ⊕ Skew_n, where ⊕ denotes the direct sum. Let X ∈ Mat_n; then X = (1/2)(X + X^T) + (1/2)(X − X^T). Notice that (1/2)(X + X^T) ∈ Sym_n and (1/2)(X − X^T) ∈ Skew_n. This is true for every square matrix X with entries from any field whose characteristic is different from 2. A symmetric n × n matrix is determined by n(n + 1)/2 scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by n(n − 1)/2 scalars (the number of entries above the main diagonal). Matrix congruent to a symmetric matrix Any matrix congruent to a symmetric matrix is again symmetric: if A is a symmetric matrix, then so is X^T A X for any matrix X. Symmetry implies normality A (real-valued) symmetric matrix is necessarily a normal matrix. Real symmetric matrices Denote by ⟨·, ·⟩ the standard inner product on R^n. The real n × n matrix A is symmetric if and only if ⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y ∈ R^n. Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces. The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every real symmetric matrix A there exists a real orthogonal matrix Q such that D = Q^T A Q is a diagonal matrix. Every real symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.
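As a quick numerical illustration of the Toeplitz decomposition and the real spectral theorem discussed above, the following sketch uses NumPy (assumed here as a dependency); numpy.linalg.eigh is the eigendecomposition routine for symmetric matrices, and the random matrix is just a stand-in example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))       # an arbitrary square matrix

# Toeplitz decomposition: X = symmetric part + skew-symmetric part
sym = (X + X.T) / 2
skew = (X - X.T) / 2
assert np.allclose(X, sym + skew)
assert np.allclose(sym, sym.T) and np.allclose(skew, -skew.T)

# Spectral theorem for the symmetric part: sym = Q diag(w) Q^T with Q orthogonal
w, Q = np.linalg.eigh(sym)            # w: real eigenvalues, Q: orthonormal eigenvectors
assert np.allclose(Q @ np.diag(w) @ Q.T, sym)
assert np.allclose(Q.T @ Q, np.eye(4))
```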
If and are real symmetric matrices that commute, then they can be simultaneously diagonalized by an orthogonal matrix: there exists a basis of such that every element of the basis is an eigenvector for both and . Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix (above), and therefore is uniquely determined by up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices. Complex symmetric matrices A complex symmetric matrix can be 'diagonalized' using a unitary matrix: thus if is a complex symmetric matrix, there is a unitary matrix such that is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians. In fact, the matrix is Hermitian and positive semi-definite, so there is a unitary matrix such that is diagonal with non-negative real entries. Thus is complex symmetric with real. Writing with and real symmetric matrices, . Thus . Since and commute, there is a real orthogonal matrix such that both and are diagonal. Setting (a unitary matrix), the matrix is complex diagonal. Pre-multiplying by a suitable diagonal unitary matrix (which preserves unitarity of ), the diagonal entries of can be made to be real and non-negative as desired. To construct this matrix, we express the diagonal matrix as . The matrix we seek is simply given by . Clearly as desired, so we make the modification . Since their squares are the eigenvalues of , they coincide with the singular values of . (Note, about the eigen-decomposition of a complex symmetric matrix , the Jordan normal form of may not be diagonal, therefore may not be diagonalized by any similarity transformation.) Decomposition Using the Jordan normal form, one can prove that every square real matrix can be written as a product of two real symmetric matrices, and every square complex matrix can be written as a product of two complex symmetric matrices. Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely. Cholesky decomposition states that every real positive-definite symmetric matrix is a product of a lower-triangular matrix and its transpose, If the matrix is symmetric indefinite, it may be still decomposed as where is a permutation matrix (arising from the need to pivot), a lower unit triangular matrix, and is a direct sum of symmetric and blocks, which is called Bunch–Kaufman decomposition A general (complex) symmetric matrix may be defective and thus not be diagonalizable. If is diagonalizable it may be decomposed as where is an orthogonal matrix , and is a diagonal matrix of the eigenvalues of . In the special case that is real symmetric, then and are also real. To see orthogonality, suppose and are eigenvectors corresponding to distinct eigenvalues , . Then Since and are distinct, we have . Hessian Symmetric matrices of real functions appear as the Hessians of twice differentiable functions of real variables (the continuity of the second derivative is not needed, despite common belief to the opposite). 
Every quadratic form $q$ on $\mathbb{R}^n$ can be uniquely written in the form $q(\mathbf{x}) = \mathbf{x}^{\mathsf{T}} A \mathbf{x}$ with a symmetric $n \times n$ matrix $A$. Because of the above spectral theorem, one can then say that every quadratic form, up to the choice of an orthonormal basis of $\mathbb{R}^n$, "looks like" $q(x_1, \ldots, x_n) = \lambda_1 x_1^2 + \cdots + \lambda_n x_n^2$ with real numbers $\lambda_i$. This considerably simplifies the study of quadratic forms, as well as the study of the level sets $\left\{ \mathbf{x} : q(\mathbf{x}) = 1 \right\}$, which are generalizations of conic sections. This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem. Symmetrizable matrix An $n \times n$ matrix $A$ is said to be symmetrizable if there exists an invertible diagonal matrix $D$ and a symmetric matrix $S$ such that $A = DS$. The transpose of a symmetrizable matrix is symmetrizable, since $A^{\mathsf{T}} = (DS)^{\mathsf{T}} = SD = D^{-1}(DSD)$ and $DSD$ is symmetric. A matrix $A = (a_{ij})$ is symmetrizable if and only if the following conditions are met: $a_{ij} = 0$ implies $a_{ji} = 0$ for all $1 \le i \le j \le n$; $a_{i_1 i_2} a_{i_2 i_3} \cdots a_{i_k i_1} = a_{i_2 i_1} a_{i_3 i_2} \cdots a_{i_1 i_k}$ for any finite sequence $\left(i_1, i_2, \ldots, i_k\right)$. See also Other types of symmetry or pattern in square matrices have special names; see for example: Skew-symmetric matrix (also called antisymmetric or antimetric) Centrosymmetric matrix Circulant matrix Covariance matrix Coxeter matrix GCD matrix Hankel matrix Hilbert matrix Persymmetric matrix Sylvester's law of inertia Toeplitz matrix Transpositions matrix See also symmetry in mathematics. Notes References External links A brief introduction and proof of eigenvalue properties of the real symmetric matrix How to implement a Symmetric Matrix in C++ Matrices
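A short NumPy sketch of the quadratic-form picture described above: after an orthonormal change of basis, the form becomes a weighted sum of squares. The matrix and test vector are illustrative assumptions.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric matrix defining q(x) = x^T A x
w, Q = np.linalg.eigh(A)            # orthonormal eigenbasis and real eigenvalues

rng = np.random.default_rng(0)
x = rng.standard_normal(2)          # arbitrary test vector
y = Q.T @ x                         # coordinates of x in the eigenbasis

q_original = x @ A @ x
q_diagonal = np.sum(w * y**2)       # lambda_1 * y_1^2 + lambda_2 * y_2^2
assert np.isclose(q_original, q_diagonal)
```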
Symmetric matrix
Mathematics
1,733
60,018,379
https://en.wikipedia.org/wiki/Bionic%20Leaf
The Bionic Leaf is a biomimetic system that gathers solar energy via photovoltaic cells that can be stored or used in a number of different functions. Bionic leaves can be composed of both synthetic (metals, ceramics, polymers, etc.) and organic materials (bacteria), or solely made of synthetic materials. The Bionic Leaf has the potential to be implemented in communities, such as urbanized areas to provide clean air as well as providing needed clean energy. History In 2009 at MIT, Daniel Nocera's lab first developed the "artificial leaf", a device made from silicon and an anode electrocatalyst for the oxidation of water, capable of splitting water into hydrogen and oxygen gases. In 2012, Nocera came to Harvard and The Silver Lab of Harvard Medical School joined Nocera’s team. Together the teams expanded the existing technology to create the Bionic Leaf. It merged the concept of the artificial leaf with genetically engineered bacteria that feed on the hydrogen and convert CO2 in the air into alcohol fuels or chemicals. The first version of the teams Bionic Leaf was created in 2015 but the catalyst used was harmful to the bacteria. In 2016, a new catalyst was designed to solve this issue, named the "Bionic Leaf 2.0". Other versions of artificial leaves have been developed by the California Institute of Technology and the Joint Center for Artificial Photosynthesis, the University of Waterloo, and the University of Cambridge. Mechanics Photosynthesis In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutanol. The efficiency of the Bionic Leaf's artificial photosynthesis is the result of bypassing obstacles in natural photosynthesis by virtue of its artificiality. In natural systems, there are numerous energy conversion bottlenecks that limit the overall efficiency of photosynthesis. As a result, most plants do not exceed 1% efficiency and even microalgae grown in bioreactors do not exceed 3%. Existing artificial photosynthetic solar-to-fuels cycles may exceed natural efficiencies but cannot complete the cycle via carbon fixation. When the catalysts of the Bionic Leaf are coupled with the bacterium Ralstonia eutropha, this results in a hybrid system capable of carbon dioxide fixation. This system can store more than half of its input energy as products of carbon dioxide fixation. Overall, the hybrid design allows for artificial photosynthesis with efficiencies rivaling that of natural photosynthesis. Artificial Photosynthesis Systems The Bionic Leaf is an artificial leaf that interfaces a triple-junction Si wafer with amorphous silicon photovoltaic with hydrogen- and oxygen-evolving catalysts made from a ternary alloy, nickel-molybdenum-zinc (NiMoZn) and a cobalt–phosphate cluster (Co-OEC). The Co-OEC is able to operate in natural water at room temperature. Accordingly, the Bionic Leaf can be immersed in water and when held up to sunlight, it can effect direct solar energy conversion via water-splitting. The Bionic Leaf, by virtue of the Co-OEC, also exhibits self-assembling and self-healing properties. 
The Co-OEC self-assembles upon oxidation of an earth metal ion from 2+ to 3+. It also self-heals upon application of a potential, wherein the cluster reforms due to equilibrium between aqueous cobalt and phosphate. The Bionic Leaf can be used in artificial photosynthetic systems. One such system is a hybrid water-splitting-biosynthetic system that can operate at low driving voltages. The catalyst system of the Bionic Leaf is used in conjunction with the bacterium Ralstonia eutropha. The bacterium is grown in contact with the catalysts and then consumes the produced H2 from the water-splitting reaction. After consumption, the bacterium synthesizes biomass and fuels or chemical products from low CO2 concentration in the presence of O2. The usage of the bacterium requires a biocompatible catalyst system that is not toxic to the bacterium and that lowers the overpotential for water splitting. The original catalyst used, the nickel-molybdenum-zinc (NiMoZn) alloy, poisoned the microbes by destroying the bacteria's DNA. Accordingly, this hybrid system uses a cobalt-phosphorus (Co-P) alloy cathode that is resistant to reactive oxygen species. This in turn leaves no excess metal and does not form oxygen radicals, leaving the microbes and DNA unharmed. This alloy drives the hydrogen evolution reaction while a cobalt-phosphate (CoPi) anode drives the oxygen evolution reaction. This new catalyst can run for up to 16 days at a time, compared to the nickel-molybdenum-zinc (NiMoZn) alloy. Applications Agriculture Early results from Dan Nocera, a researcher at Harvard University, gave insight into how his newly created bionic leaf can be used for fertilizer production. This new bionic leaf uses photovoltaic cells in conjunction with Xanthobacter autotrophicus bacteria to create a plastic called polyhydroxybutyrate (PHB). PHB supplies energy to the bacteria's natural enzymes, which then convert nitrogen gas from the air into ammonia. The bionic leaf can perform this process using renewable electricity, allowing for the sustainable production of ammonia and bio-fertilizers. Currently, the main industrial production of ammonia is performed by what is known as the Haber-Bosch Process, which uses natural gas as the main energy source. The bacteria within the bionic leaf also help to remove carbon dioxide from the environment. The bionic leaf must still pass an environmental impact study in order to determine whether these bacteria are safe to release into the wild. Although the bionic leaf currently operates at a mere 25% efficiency, research and development is still ongoing with the hope of improving the process. X. autotrophicus cells act as a living bio-fertilizer due to their ability to directly promote plant growth when applied to organic material. A study was conducted by comparing plants treated with no fertilizer to the same plants treated with increasing amounts of X. autotrophicus culture. The treated plants' root mass and total mass increased by approximately 130% and 100% respectively, compared to that of the untreated control group. Atmosphere Carbon dioxide, a greenhouse gas, traps heat in the atmosphere; the bionic leaf can potentially be used to reduce the carbon dioxide within the atmosphere. While running, the bionic leaf mimics photosynthesis by converting the carbon dioxide in air into fuels. The bionic leaf can eliminate 180 grams of carbon dioxide out of 230,000 liters of air for each kilowatt hour of energy it consumes. 
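A back-of-the-envelope sketch based only on the figure quoted above (180 g of CO2 removed per kWh consumed); the one-tonne target is an assumption chosen purely for illustration.

```python
# Arithmetic from the stated figure: 180 g of CO2 removed per kWh of energy consumed.
grams_per_kwh = 180.0

target_kg = 1000.0                                  # one tonne of CO2 (illustrative target)
kwh_needed = target_kg * 1000.0 / grams_per_kwh
print(f"~{kwh_needed:.0f} kWh per tonne of CO2")    # roughly 5,600 kWh
```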
While removing large amounts of carbon dioxide from the atmosphere is not yet possible on a large scale, this technology is useful in areas where carbon dioxide is produced, such as power plants. It can also be implemented within urban areas, providing clean air to the area. The technology may also be used on a smaller scale, helping communities produce, harness, and consume the energy they need. Bionic Facades Bionic leaves have been considered as an alternative to vertical greenery systems (VGS), also known as green facades. Like VGS, bionic facades can be implemented in buildings to reduce energy consumption from cooling, absorb solar radiation, and reduce CO2 emissions. Unlike their natural counterpart, bionic facades require less costly maintenance (irrigation, fertilization, pest-control) and can potentially be adjusted to external conditions like the changing of seasons. The general structure of the bionic leaves used for these experiments can be characterized as a photovoltaic (PV) cell or plate resistive heater backed with a ceramic evaporative matrix. An experiment comparing the performance of a PV panel alone versus the bionic leaf panel showed increased electricity production of up to 6.6% due to the evaporative cooling from the matrix. The bionic facade also lowered the ambient temperature at the building-to-air interface by an amount comparable to that of a green facade planted with ivy. The cooling effect paired with the electricity output of the bionic facade showed a CO2 emissions reduction that was 25 times greater than the daily average CO2 consumption of the ivy wall. See also Artificial Leaf Bioplastic Fuel Cell Metabolic Engineering Photosynthesis References Fuel production Photosynthesis Solar energy
Bionic Leaf
Chemistry,Biology
1,815
4,903,351
https://en.wikipedia.org/wiki/Oil%20filter
An oil filter is a filter designed to remove contaminants from engine oil, transmission oil, lubricating oil, or hydraulic oil. Their chief use is in internal-combustion engines for motor vehicles (both on- and off-road ), powered aircraft, railway locomotives, ships and boats, and static engines such as generators and pumps. Other vehicle hydraulic systems, such as those in automatic transmissions and power steering, are often equipped with an oil filter. Gas turbine engines, such as those on jet aircraft, also require the use of oil filters. Oil filters are used in many different types of hydraulic machinery. The oil industry itself employs filters for oil production, oil pumping, and oil recycling. Modern engine oil filters tend to be "full-flow" (inline) or "bypass". History Early automobile engines did not have oil filters, having only a rudimentary mesh sieve placed at the oil pump intake. Consequently, along with the generally low quality of oil available, very frequent oil changes were required. The Purolator oil filter was the first oil filter for the automobile; it revolutionized the filtration industry, and is still in production today. The Purolator was a bypass filter, whereby most of the oil was pumped from the oil sump directly to the engine's working parts, while a smaller proportion of the oil was sent through the filter via a second flow path, filtering the oil over time. Bypass and full-flow Full-flow A full-flow system will have a pump which sends pressurised oil through a filter to the engine bearings, after which the oil returns by gravity to the sump. In the case of a dry sump engine, the oil that reaches the sump is evacuated by a second pump to a remote oil tank. The function of the full-flow filter is to protect the engine from wear through abrasion. Bypass Modern bypass oil filter systems are secondary systems whereby a bleed from the main oil pump supplies oil to the bypass filter, the oil then passing not to the engine but returning to the sump or oil tank. The purpose of the bypass is to have a secondary filtration system to keep the oil in good condition, free of dirt, soot and water, providing much smaller particle retention than is practical for full flow filtration, the full-flow filter is still used to prevent any excessively large particles from causing substantial abrasion or acute blockage in the engine. Originally used on commercial and industrial diesel engines with large oil capacities where the cost of oil analysis testing and extra filtration to extended oil change intervals makes economic sense; bypass oil filters are becoming more common in private consumer applications. (It is essential that the bypass does not compromise the pressurised oilfeed within the full-flow system; one way to avoid such compromise is to have the bypass system as completely independent). Pressure relief valves Most pressurized lubrication systems incorporate an overpressure relief valve to allow oil to bypass the filter if its flow restriction is excessive, to protect the engine from oil starvation. Filter bypass may occur if the filter is clogged or the oil is thickened by cold weather. The overpressure relief valve is frequently incorporated into the oil filter. Filters mounted such that oil tends to drain from them usually incorporate an anti-drainback valve to hold oil in the filter after the engine (or other lubrication system) is shut down. 
This is done to avoid a delay in oil pressure buildup once the system is restarted; without an anti-drainback valve, pressurized oil would have to fill the filter before travelling onward to the engine's working parts. This situation can cause premature wear of moving parts due to initial lack of oil. Types of oil filter Mechanical Mechanical designs employ an element made of bulk material (such as cotton waste) or pleated Filter paper to entrap and sequester suspended contaminants. As material builds up on (or in) the filtration medium, oil flow is progressively restricted. This requires periodic replacement of the filter element (or the entire filter, if the element is not separately replaceable). Cartridge and spin-on Early engine oil filters were of cartridge (or replaceable element) construction, in which a permanent housing contains a replaceable filter element or cartridge. The housing is mounted either directly on the engine or remotely with supply and return pipes connecting it to the engine. In the mid-1950s, the spin-on oil filter design was introduced: a self-contained housing and element assembly which was to be unscrewed from its mount, discarded, and replaced with a new one. This made filter changes more convenient and potentially less messy, and quickly came to be the dominant type of oil filter installed by the world's automakers. Conversion kits were offered for vehicles originally equipped with cartridge-type filters. In the 1990s, European and Asian automakers in particular began to shift back in favor of replaceable-element filter construction, because it generates less waste with each filter change. American automakers have likewise begun to shift to replaceable-cartridge filters, and retrofit kits to convert from spin-on to cartridge-type filters are offered for popular applications. Commercially available automotive oil filters vary in their design, materials, and construction details. Ones that are made from completely synthetic material excepting the metal drain cylinders contained within are far superior and longer lasting than the traditional cardboard/cellulose/paper type that still predominate. These variables affect the efficacy, durability, and cost of the filter. Magnetic Magnetic filters use a permanent magnet or an electromagnet to capture ferromagnetic particles. An advantage of magnetic filtration is that maintaining the filter simply requires cleaning the particles from the surface of the magnet. Automatic transmissions in vehicles frequently have a magnet in the fluid pan to sequester magnetic particles and prolong the life of the media-type fluid filter. Some companies are manufacturing magnets that attach to the outside of an oil filter or magnetic drain plugs—first invented and offered for cars and motorcycles in the mid-1930s—to aid in capturing these metallic particles, though there is ongoing debate as to the effectiveness of such devices. Sedimentation A sedimentation or gravity bed filter allows contaminants heavier than oil to settle to the bottom of a container under the influence of gravity. Centrifugal A centrifuge oil cleaner is a rotary sedimentation device using centrifugal force rather than gravity to separate contaminants from the oil, in the same manner as any other centrifuge. Pressurized oil enters the center of the housing and passes into a drum rotor free to spin on a bearing and seal. The rotor has two jet nozzles arranged to direct a stream of oil at the inner housing to rotate the drum. 
The oil then slides to the bottom of the housing wall, leaving particulate oil contaminants stuck to the housing walls. The housing must periodically be cleaned, or the particles will accumulate to such a thickness as to stop the drum rotating. In this condition, unfiltered oil will be recirculated. Advantages of the centrifuge are: (i) that the cleaned oil may separate from any water which, being heavier than oil, settles at the bottom and can be drained off (provided any water has not emulsified with the oil); and (ii) they are much less likely to become blocked than a conventional filter. If the oil pressure is insufficient to spin the centrifuge, it may instead by driven mechanically or electrically. Note: some spin-off filters are described as centrifugal but they are not true centrifuges; rather, the oil is directed in such a way that there is a centrifugal swirl that helps contaminants stick to the outside of the filter. High efficiency (HE) High efficiency oil filters are a type of bypass filter that are claimed to allow extended oil drain intervals. HE oil filters typically have pore sizes of 3 micrometres, which studies have shown reduce engine wear. Some fleets have been able to increase their drain intervals up to 5-10 times. Filter placement in an oil system Deciding how clean the oil needs to be is important as cost increases rapidly with cleanliness. Having determined the optimum target cleanliness level for a contamination control programme, many engineers are then challenged by the process of optimizing the location of the filter. To ensure effective solid particle ingression balance, the engineer must consider various elements such as whether the filter will be for protection or for contamination control, ease of access for maintenance, and the performance of the unit being considered to meet the challenges of the target set. See also Air filter Fuel filter Impingement filter List of auto parts Oil-filter wrench References External links Oil filter cross reference Vehicle parts Oil filter
Oil filter
Chemistry,Technology,Engineering
1,813
53,091,827
https://en.wikipedia.org/wiki/NGC%20399
NGC 399 is a barred spiral galaxy located in the constellation Pisces. It was discovered on October 7, 1874, by Lawrence Parsons. It was described by Dreyer as "very faint, small, round." References External links 0399 18741007 Pisces (constellation) Barred spiral galaxies 004096
NGC 399
Astronomy
69
13,502,784
https://en.wikipedia.org/wiki/Rhizodermis
Rhizodermis is the root epidermis (also referred to as epiblem), the outermost primary cell layer of the root. Specialized rhizodermal cells, trichoblasts, form long tubular structures (from 5 to 17 micrometers in diameter and from 80 micrometers to 1.5 millimeters in length) almost perpendicular to the main cell axis – root hairs that absorb water and nutrients. Root hairs of the rhizodermis are always in close contact with soil particles and, because of their high surface-to-volume ratio, form an absorbing surface which is much larger than the transpiring surfaces of the plant. In some species of the family Fabaceae, the rhizodermis participates in the recognition and the uptake of nitrogen-fixing Rhizobia bacteria – the first stage of nodulation leading to the formation of root nodules. The rhizodermis plays an important role in nutrient uptake by plant roots. In contrast with the epidermis, the rhizodermis contains no stomata and is not covered by a cuticle. Its unique feature is the presence of root hairs. A root hair is the outgrowth of a single rhizodermal cell. Root hairs occur in high frequency in the absorptive zone of the root. A root hair derives from a trichoblast as a result of an unequal division. It contains a large vacuole; its cytoplasm and nucleus are displaced toward the apical region of the outgrowth. Although the cell does not divide, its DNA replicates, so the nucleus is polyploid. Root hairs live only for a few days, often dying off within 1–2 days due to mechanical damage. References Plant morphology
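A quick sketch of the surface-to-volume point made above, approximating a root hair as a cylinder with dimensions inside the stated ranges; the cylinder model and the chosen values are simplifying assumptions.

```python
import math

# Approximate a root hair as a cylinder, using dimensions within the stated ranges.
diameter_um = 10.0       # stated range: 5-17 micrometres
length_um = 1000.0       # stated range: 80 micrometres to 1.5 millimetres
r, h = diameter_um / 2, length_um

surface = 2 * math.pi * r * h          # lateral surface, ignoring the end caps
volume = math.pi * r**2 * h
print(f"surface/volume ≈ {surface / volume:.2f} per micrometre")  # 2/r ≈ 0.4 um^-1
```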
Rhizodermis
Biology
354
54,449,666
https://en.wikipedia.org/wiki/Semantic%20space
Semantic spaces in the natural language domain aim to create representations of natural language that are capable of capturing meaning. The original motivation for semantic spaces stems from two core challenges of natural language: Vocabulary mismatch (the fact that the same meaning can be expressed in many ways) and ambiguity of natural language (the fact that the same term can have several meanings). The application of semantic spaces in natural language processing (NLP) aims at overcoming limitations of rule-based or model-based approaches operating on the keyword level. The main drawback with these approaches is their brittleness, and the large manual effort required to create either rule-based NLP systems or training corpora for model learning. Rule-based and machine learning based models are fixed on the keyword level and break down if the vocabulary differs from that defined in the rules or from the training material used for the statistical models. Research in semantic spaces dates back more than 20 years. In 1996, two papers were published that raised a lot of attention around the general idea of creating semantic spaces: latent semantic analysis and Hyperspace Analogue to Language. However, their adoption was limited by the large computational effort required to construct and use those semantic spaces. A breakthrough with regard to the accuracy of modelling associative relations between words (e.g. "spider-web", "lighter-cigarette", as opposed to synonymous relations such as "whale-dolphin", "astronaut-driver") was achieved by explicit semantic analysis (ESA) in 2007. ESA was a novel (non-machine learning) based approach that represented words in the form of vectors with 100,000 dimensions (where each dimension represents an Article in Wikipedia). However practical applications of the approach are limited due to the large number of required dimensions in the vectors. More recently, advances in neural network techniques in combination with other new approaches (tensors) led to a host of new recent developments: Word2vec from Google, GloVe from Stanford University, and fastText from Facebook AI Research (FAIR) labs. See also Word embedding Semantic folding Distributional–relational database References Semantics Semantic relations Natural language processing
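A toy NumPy sketch of the core idea behind semantic spaces: words represented as vectors and compared by cosine similarity. The three 4-dimensional vectors are invented for illustration and are not taken from Word2vec, GloVe, fastText, or ESA.

```python
import numpy as np

# Toy 4-dimensional "semantic space"; real models use hundreds of dimensions.
vectors = {
    "spider": np.array([0.9, 0.1, 0.0, 0.2]),
    "web":    np.array([0.8, 0.2, 0.1, 0.3]),
    "driver": np.array([0.1, 0.9, 0.7, 0.0]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors["spider"], vectors["web"]))     # high: associatively related terms
print(cosine(vectors["spider"], vectors["driver"]))  # low: unrelated terms
```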
Semantic space
Technology
436
78,789,446
https://en.wikipedia.org/wiki/Anaprazole
Anaprazole is a pharmaceutical drug used for the treatment of duodenal ulcers. It is classified as a proton pump inhibitor (PPI). It was approved for use in China in 2023. It is formulated as its sodium salt, anaprazole sodium, in enteric-coated tablets. References Proton-pump inhibitors Benzimidazoles Benzofurans Ethers Sulfoxides Methoxy compounds
Anaprazole
Chemistry
91
13,625,222
https://en.wikipedia.org/wiki/MULTICOM
In U.S. and Canadian aviation, MULTICOM is a frequency allocation used as a Common Traffic Advisory Frequency (CTAF) by aircraft near airports where no air traffic control is available. Frequency allocations vary from region to region. Despite the use of uppercase letters, MULTICOM is not an abbreviation or acronym. In the United States, there is one MULTICOM frequency: 122.9 MHz. (See AIM table 4-1-2 or AIM table 4-1-1) At uncontrolled airports without a UNICOM, pilots are to self-announce on the MULTICOM frequency. In Australia, there is one MULTICOM frequency: 126.7 MHz. In Brazil, there is one MULTICOM frequency: 123.45 MHz. See also UNICOM CTAF Airbands Aviation communications Avionics Air traffic control
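A trivial lookup table collecting the MULTICOM frequencies stated above; purely an illustrative data structure, not an authoritative source for flight planning.

```python
# MULTICOM frequencies quoted in the text above, in MHz.
MULTICOM_MHZ = {
    "United States": 122.9,
    "Australia": 126.7,
    "Brazil": 123.45,
}

def multicom_frequency(region: str) -> float:
    """Return the MULTICOM frequency for a region; raises KeyError if unknown."""
    return MULTICOM_MHZ[region]

print(multicom_frequency("United States"))  # 122.9
```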
MULTICOM
Technology
173
32,848,676
https://en.wikipedia.org/wiki/Continuous%20big%20q-Hermite%20polynomials
In mathematics, the continuous big q-Hermite polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. give a detailed list of their properties. Definition The polynomials are given in terms of basic hypergeometric functions. References Orthogonal polynomials Q-analogs Special hypergeometric functions
Continuous big q-Hermite polynomials
Mathematics
64
73,033,921
https://en.wikipedia.org/wiki/National%20Biotechnology%20Development%20Agency
National Biotechnology Development Agency (NABDA) is an agency established in 2001 under the Federal Ministry of Science and Technology, that implements policies, explores resources, conducts research, promotes, coordinates and develops biotechnology in Nigeria. The NABDA also controls and supervises the introduction of genetically modified organisms into Nigeria. Background and history In July 2019, NABDA announced their invention of Anaerobic Digestion Technology digesters which can convert organic wastes to biogas. In May 2020, NABDA worked with Nigeria Centre for Disease Control in the process of enabling local production of locally made testing kits for Coronavirus. In March 2022, NABDA announced it was making research on reliable and affordable drugs for Lassa fever. In December 2022, NABDA announced that they had locally produced a starter culture for the preparation of yoghurt with the two germs, Lactobacillus bulgaricus and Streptococcus thermophilus. In the same month, NABDA announced their livestock genetic experiments around artificial insemination for milk and meat improvement. In January 2023, NABDA announced plans to release insect and drought-resistant maize to Nigerian farmers to improve food production. In the same month, they worked with National Committee on Naming, Registration and Release of Crop Varieties, Livestock Breed/Fisheries to release FARO68, a rice variety and 20 other crop varieties for farmers to boost food efficiency. Leadership Controversies In June 2017, the Economic and Financial Crimes Commission (EFCC) arrested the then Director General of NABDA, Professor Lucy Jumeyi Ogbadu for alleged criminal conspiracy and diversion of N23 Million in public funds. This was part of a larger sum of N603 million naira. In an October 24, 2017 letter signed by the secretary of the EFCC, Ogbadu was absolved of any involvement in the alleged fraud. However, on January 9, 2018, it faulted the clearing and went on to file 49 charges against her. On August 30, 2021, a witness invited by the EFCC, Christopher Orji, the director of Bioresources Development Centre, Langtang, a sub agency under NABDA reportedly committed suicide. In 2017, following the issuance of the permit for the commercialization of the genetically modified cotton, a group of 16 civil society organizations sued the National Biosafety Management Agency and National Biotechnology Development Agency. The court ruled in favour of the government agencies. In November 2020, staff of NABDA protested citing unpaid promotion arrears, promotion examinations not being conducted, corruption and poor welfare as reasons. In April 2022, Nnimmo Bassey decried NABDA's distribution of commercial quantities of Genetically Modified Cowpea to farmers without informing them on what the seeds were. Farmers went on to plant, harvest and sell the seeds without knowing they were GMCs. In September 2022, NABDA and other agencies insisted that such crops were safe for consumption and to tackle food crisis. In November 2022, the Independent Corrupt Practices Commission arraigned Alex Akpa, a former acting director-general of NABDA, Famous Daunemigha, an ex-member of the Governing Board of NABDA and Wesley Ebi Siasia, an ex-director, finance and accounts of the agency, and others over alleged N400 million fraud. During the trial, they pleaded not guilty. In November, 2024 Appointment Of Acting DG Of NABDA By Minister Creates Tension In Agency. 
https://forefrontng.com/appointment-of-acting-dg-of-nabda-by-minister-creates-tension-in-agency/ References Biotechnology by country Government agencies of Nigeria
National Biotechnology Development Agency
Biology
770
35,119,424
https://en.wikipedia.org/wiki/Journal%20of%20Analytical%20Toxicology
Journal of Analytical Toxicology (JAT) is a peer-reviewed scientific journal focusing on analytical toxicology. According to Journal Citation Reports it received an impact factor of 3.513, ranking it 23rd out of 92 journals in the category "Toxicology" and 20th out of 86 journals in the category "Analytical Chemistry". The editor is Bruce A. Goldberger (University of Florida, Gainesville). JAT is the official journal of the Society of Forensic Toxicologists. External links References Toxicology journals English-language journals Oxford University Press academic journals Academic journals established in 1977 9 times per year journals
Journal of Analytical Toxicology
Environmental_science
124
62,292,920
https://en.wikipedia.org/wiki/Vera%20Rubin%20Early%20Career%20Prize
The Vera Rubin Early Career Prize is named after Vera Rubin and is awarded by the Division on Dynamical Astronomy of the American Astronomical Society. The prize recognizes excellence in dynamical astronomy. Recipients must have received their doctorate no more than ten years prior. Winners See also List of astronomy awards References Astronomy prizes American Astronomical Society Early career awards
Vera Rubin Early Career Prize
Astronomy,Technology
67
37,328,605
https://en.wikipedia.org/wiki/Relative%20age%20effect
The term relative age effect (RAE), also known as birthdate effect or birth date effect, is used to describe a bias, evident in the upper echelons of youth sport and academia, where participation is higher amongst those born earlier in the relevant selection period (and lower for those born later in the selection period) than would be expected from the distribution of births. The selection period is usually the calendar year, the academic year or the sporting season. The difference in maturity often contributes to the effect, with age category, skill level and sport context also impacting the risk of the relative age effect. Mid to late adolescent, regional to nation, popular sports seeing the highest risk, and under 11, recreational, unpopular sports seeing the lowest risk. The terms month of birth bias and season of birth bias are used to describe similar effect but are fundamentally different. Season of birth examines the influence of different prenatal and perinatal seasonal environmental factors like sunlight, temperature, or viral exposure during gestation, that relate to health outcomes. Conversely, the relative age effect shifts with selection dates moving the advantage with the selection period. With influence from social agents, children born soon after the cut-off date are typically included, and a child born soon before the cut-off date excluded. In sport Youth sport participation is often organized into annual age-groups. The IOC, FIFA and the six international football confederations (AFC, CAF, CONCACAF, CONMEBOL, OFC and UEFA) all use 1 January as their administrative cut-off which is most commonly used but, 1 September is used in the UK, like many other locations around the world. This grouping can be seen in the first graph showing the distribution of births, by month, for the European Union over the ten years from 2000 to 2009. The birth rate correlates closely with the number of days in a month with a slight increase in the summer months. The second graph, by the month, shows the birth distribution of over 4,000 players involved in the qualifying squads for U17, U19 and U21 tournaments organised by UEFA in 2010–11. This declining distribution from the beginning of the year for professional athlete participation has been seen in sports like: association football, baseball, cricket, gymnastics, handball, ice hockey, rugby league, running, skiing, swimming, tennis, and the Youth Olympic Games, as well as non-physical sports like shooting. Malcolm Gladwell's book Outliers: The Story of Success and the book SuperFreakonomics by Steven Levitt and Stephen Dubner, popularised the issue in respect of Canadian ice-hockey players, European football players, and US Major League baseball players. Contributing factors Relative age effects are caused by birthdate eligibility rules but can be affected by parents, coaches and athletes through other mechanisms. The Pygmalion effect, Galatea effect, and Matthew effect are examples of effects which impact player motivation. In addition to these social factors contextual differences change the distribution with decreased effects in female sports, unpopular sports, at different ages, individual sports, or sports with a lower reliance on body size, with an expected increased effect in male sports, popular sports, or competitive sports. The sports popularity in a geographical or cultural area will affect the relative age distribution relative, with examples seen in volleyball and American football. 
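One common way a birthdate skew like this is quantified is a goodness-of-fit comparison between observed birth-quarter counts in a selected squad and the distribution expected from population births. The sketch below uses invented counts and assumes a roughly uniform expectation, both purely for illustration.

```python
from scipy.stats import chisquare

# Invented counts of selected players by birth quarter (Q1 = first quarter
# after the selection cut-off date), with an assumed uniform expectation
# scaled to the same total.
observed = [40, 28, 20, 12]                        # Q1..Q4
total = sum(observed)
expected = [total * p for p in (0.25, 0.25, 0.25, 0.25)]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")  # a small p suggests an RAE-like skew
```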
The early maturation levels giving physical advantages to first quarter individuals can create the bias, seen in players' height in basketball, dominant hand in tennis, or size in a cricket position, but physical size isn't always the cause. Older individuals also gain more competence and self-efficacy, increasing the performance gap. These advantages lead to increased dropout rates for Q1 births. However, the bias for sports where height and mass impedes flexibility, rotational speed and the strength to mass ratio, maturational delay may be preferred as seen in gymnastics. With an adult group the relative age has the opposite meaning, as performance declines in age, and is more significant with more physically demanding sports, depending on what age the average peak performance level is, in that sport. The "underdog effect" has shown that those late birth individuals may see better chances if they are selected to play, with the advantage decreasing after selection. Playing position, federation membership, and individual and team performance also contribute to the effect, with older players having a higher risk of injury. Reducing the relative age effect Various methods have been suggested and tested to reduce the relative age effect like moving the cut off dates, expanding the age group range, birthdate quotas for the players, the average team age (ATA) method for eligibility, or grouping by height and weight. Some methods have struggled to find success due to the effect moving with selection dates. Making the relative age known to the individuals in the environment have shown less bias in talent identification reducing the relative age effect. Birthday banding, and re-calculating scores based on relative age, are other methods used to reduce the effects, with bio-banding seeing the most research, showing benefit to early and late maturing players, both in academy football and in recreational football. Bio-banding can help promote appropriate training loads and reduce injury risk, while increasing technical demands from players, however, sports already categorized by maturation metrics like Judo, may not see those effects. More longitudinal studies are needed, alongside more reliable ways to band individuals, as biological, psychological and social development doesn't progress in synchrony, creating different imbalances in the groups. In education The Academic year is decided by education authorities with August or September being common cut-off dates in the Northern Hemisphere and February or March cut-off dates in the Southern Hemisphere. The third graph illustrates the relative age effect in graduations from the University of Oxford over a 10-year period, which has also been seen in UK Nobel laureates. The relative age effect and reversal effect are evident in education, with older students on average scoring higher marks, getting into more gifted and talented programs, and being more likely to attend higher education in academic schools over vocational schools, not necessarily due to higher intelligence. The Matthew effect again plays a role, as the skills learned early in education compound over time, increasing the advantage, with older students becoming more likely to take up leadership roles. However, like in sport, the effect diminishes over time after middle school, and those born later in the year perform better in university education. In leadership positions A relative age effect has also been observed in the context of leadership. 
An over-representation starts in high-school leadership activities such as sports team captain or club president. Then in adult life, this over-representation has been observed in top managerial positions (CEOs of S&P 500 companies), and in top political positions, both in the USA (senators and representatives), and in Finland (MPs). Seasonal birth effect Seasonal birth in humans varies, and alongside the relative age effect the epidemiology of seasonal births show over-representations in health conditions like ADHD and schizophrenia, with one study finding "that higher school starting age lowers the propensity to commit crime at young ages." However, other studies failed to replicate relative age effects on temperament, mood, or physical development. Obesity has been linked to season of birth with increased chances, potentially due to surrounding temperature at birth, with winter and spring having the highest correlation, but physical inactivity is still a larger risk factor. Summer babies have increased chances of specific learning difficulties, and winter and spring babies related to schizophrenia and mania/bipolar disorder. Schizoaffective disorder can be related to December–March births, major depression to March–May births, and autism to March births. Increased rates in seasonal affective disorder relate to the influence of seasonal birth in humans. References Sports science Educational assessment and evaluation Academia Epidemiology Ageism Social anthropology Pedagogy
Relative age effect
Environmental_science
1,604
5,699,083
https://en.wikipedia.org/wiki/Survival%20Under%20Atomic%20Attack
Survival Under Atomic Attack was the title of an official United States government booklet released in 1951 by the Executive Office of the President, the National Security Resources Board (document 130), and the Civil Defense Office. Released at the onset of the Cold War era, the pamphlet was in line with rising fears that the Soviet Union would launch a nuclear attack against the United States, and outlined what to do in the event of an atomic attack. The booklet introduced general public to the effects of nuclear weapons and was aimed at calming down the fears surrounding them. Survival Under Atomic Attack was the first entry in a series of government publications and communications that employed the strategy of "emotion management" in order to neutralize the horrifying aspects of nuclear weapons. Purpose Published in 1950 by the Government Printing Office, one year after the Soviet Union detonated their first atomic bomb, the booklet explains how to protect oneself, one's food and water supply, and one's home. It also covered how to prevent burns and what to do if exposed to radiation. The U.S Strategic bombing survey had assessed the civilian response in Hiroshima and Nagasaki beginning as early as August–September 1945 and its report was "Based on a detailed investigation of all the facts, and supported by the testimony of the surviving Japanese leaders involved...". Secondly, the Atomic Bomb Casualty Commission was active from 1946 to 1975 studying the effects of the two bombs on survivors in both cities and thus represented four years of post-bombing study at the time of publication. Center Insert The four pages in the center of the brochure (15, 16, 17, 18) were designed to be torn out. "Remove this sheet and keep it with you until you've memorized it." Kill the Myths (15) Atomic Weapons Will Not Destroy The Earth Atomic bombs hold more death and destruction than man ever before has wrapped up in a single package, but their over-all power still has very definite limits. Not even hydrogen bombs will blow the earth apart or kill us all by radioactivity. Doubling Bomb Power Does Not Double Destruction Modern A-bombs can cause heavy damage 2 miles away, but doubling their power would extend that range only to 2.5 miles. To stretch the damage range from 2 to 4 miles would require a weapon more than 8 times the rated power of present models. Radioactivity Is Not The Bomb's Greatest Threat In most atom raids, blast and heat are by far the greatest dangers that people must face. Radioactivity alone would account for only a small percentage of all human deaths and injuries, except in underground or underwater explosions. Radiation Sickness Is Not Always Fatal In small amounts, radioactivity seldom is harmful. Even when serious radiation sickness follows a heavy dosage, there is still a good chance for recovery. Six Survival Secrets For Atomic Attacks (16, 17) Always Put First Things First And (16) 1. Try To Get Shielded If you have time, get down in a basement or subway. Should you unexpectedly be caught out-of-doors, seek shelter alongside a building, or jump in any handy ditch or gutter. 2. Drop Flat On Ground Or Floor To keep from being tossed about and to lessen the chances of being struck by falling and flying objects, flatten out at the base of a wall, or at the bottom of a bank. 3. Bury Your Face In Your Arms When you drop flat, hide your eyes in the crook of your elbow. 
That will protect your face from flash burns, prevent temporary blindness and keep flying objects out of your eyes. Never Lose Your Head And (17) 4. Don't Rush Outside Right After A Bombing After an air burst, wait a few minutes then go help to fight fires. After other kinds of bursts wait at least 1 hour to give lingering radiation some chance to die down. 5. Don't Take Chances With Food Or Water In Open Containers To prevent radioactive poisoning or disease, select your food and water with care. When there is reason to believe they may be contaminated, stick to canned and bottled things if possible. 6. Don't Start Rumors In the confusion that follows a bombing, a single rumor might touch off a panic that could cost your life. Five Keys To Household Safety (18) 1. Strive For "Fireproof Housekeeping" Don't let trash pile up, and keep waste paper in covered containers. When an alert sounds, do all you can to eliminate sparks by shutting off the oil burner and covering all open flames. 2. Know Your Own Home Know which is the safest part of your cellar, learn how to turn off your oil burner and what to do about utilities. 3. Have Emergency Equipment And Supplies Handy Always have a good flashlight, a radio, first-aid equipment and a supply of canned goods in the house. 4. Close All Windows And Doors And Draw The Blinds If you have time when an alert sounds, close the house up tight in order to keep out fire sparks and radioactive dusts and to lessen the chances of being cut by flying glass. Keep the house closed until all danger is past. 5. Use the Telephone Only For True Emergencies Do not use the phone unless absolutely necessary. Leave the lines open for real emergency traffic. See also List of books about nuclear issues Continuity of government Duck and Cover (film) Fallout Protection Nuclear warfare Protect and Survive Survivalism United States Civil Defense Nuclear War Survival Skills References External links Survival under Atomic Attack, (PDF-3 Mb). 1951, Reprint by City of Boston, Department of Civil Defense via us.archive.org Shelter from Atomic Attack in Existing Buildings, 1952, archive.org Ten for Survival : Survive Nuclear Attack, 1961, archive.org 1950 non-fiction books Disaster preparedness in the United States Publications of the United States government Works about the Cold War Nuclear warfare Books about nuclear issues Cold War history of the United States United States civil defense
Survival Under Atomic Attack
Chemistry
1,197
5,916,553
https://en.wikipedia.org/wiki/NGC%201042
NGC 1042 is a spiral galaxy located in the constellation Cetus. It was discovered on 10 November 1885 by American astronomer Lewis Swift. The galaxy has an apparent magnitude of 14.0. NGC 1042 is a low-luminosity active galaxy. Furthermore, its luminosity class is III–IV and it has a broad HI line. NGC 1042 is also known to host an intermediate-mass black hole in its center. NGC 1042 contains an ultraluminous X-ray source called NGC 1042 ULX1. Morphology NGC 1042 is a late-type galaxy, classified as type SAB(rs)cd. It has a bulgeless structure with spiral arms consisting of two symmetric arms located on the inner side and continuous long outer arms, with an Arm Class 9 classification. The spiral galaxy type of NGC 1042 is a mystery; some astronomers have classified it as a barred spiral galaxy based on ellipse fitting via B- and H-band images, while others have classified it as an unbarred spiral galaxy. Further evidence suggests that the inner arms of NGC 1042 are curved with a bar-like structure that can be mistaken for a bar. Nearby galaxies NGC 1042 appears near the spiral galaxy NGC 1035 in the sky, with both having similar redshifts. The two objects may therefore be physically associated with each other. In addition, NGC 1042 is a member of the NGC 1052 group. It is shown to be the only galaxy with a large gas reservoir, indicating it was stripped of gas during a past interaction with NGC 1052. See also List of NGC objects (1001–2000) References External links Picture of NGC 1042 Intermediate spiral galaxies Cetus 1042 010122 -02-07-054 02379-0838 18851110 Discoveries by Lewis Swift
NGC 1042
Astronomy
374
2,916,580
https://en.wikipedia.org/wiki/Ethylene%20glycol%20dinitrate
Ethylene glycol dinitrate, abbreviated EGDN and NGC, also known as Nitroglycol, is a colorless, oily, explosive liquid obtained by nitrating ethylene glycol. It is similar to nitroglycerine in both manufacture and properties, though it is more volatile and less viscous. Unlike nitroglycerine, the chemical has a perfect oxygen balance, meaning that its ideal exothermic decomposition would completely convert it to low energy carbon dioxide, water, and nitrogen gas, with no excess unreacted substances, without needing to react with anything else. History and production Pure EGDN was first produced by the Belgian chemist Louis Henry (1834–1913) in 1870 by dropping a small amount of ethylene glycol into a mixture of nitric and sulfuric acids cooled to 0 °C. The previous year, August Kekulé had produced EGDN by the nitration of ethylene, but this was actually contaminated with beta-nitroethyl nitrate. Other investigators preparing NGC before publication in 1926 of Rinkenbach's work included: Champion (1871), Neff (1899) & Wieland & Sakellarios (1920), Dautriche, Hough & Oehme. The American chemist William Henry Rinkenbach (1894–1965) prepared EGDN by nitrating purified glycol obtained by fractioning the commercial product under pressure of 40mm Hg, and at a temperature of 120°. For this 20g of middle fraction of purified glycol was gradually added to mixture of 70g nitric acid and 130g sulfuric acid, maintaining the temperature at 23°. The resulting 49g of crude product was washed with 300ml of water to obtain 39.6g of purified product. The low yield so obtained could be improved by maintaining a lower temperature and using a different nitrating acid mixture. 1) Direct Nitration of Glycol is carried out in exactly the same manner, with the same apparatus, and with the same mixed acids as nitration of glycerine. In the test nitration of anhydrous glycol (100g) with 625g of mixed acid 40% & 60% at 10-12°, the yield was 222g and it dropped to 218g when the temp was raised to 29-30°. When 500g of mixed acid 50% & 50% was used at 10-12°, the yield increased to 229g. In commercial nitration, the yields obtained from 100 kg anhydrous glycol and 625 kg of mixed acid containing 41%, 58% & water 1% were 222.2 kg of NGc at nitrating temp of 10-12° and only 218.3 kg at 29-30°. This means 90.6% of theory, as compared to 93.6% with NG. C2H4(OH)2 + 2 HNO3 → C2H4(ONO2)2 + 2 H2O or through the reaction of ethylene oxide and dinitrogen pentoxide: C2H4O + N2O5 → C2H4(ONO2)2 2) Direct Production of NGc from Gaseous Ethylene. 3) Preparation of NGc from Ethylene Oxide. 4) Preparation of NGc by method of Messing from ethylene through chlorohydrin & ethylene oxide. 5) Preparation of NGc by duPont method. Properties Physical properties Ethylene glycol dinitrate is a colorless volatile liquid when in pure state, but is yellowish when impure. Molar weight 152.07, N 18.42%, OB to CO2 0%, OB to CO +21%; colorless volatile liquid when in pure state; yellowish liquid in crude state; sp gr 1.488 at 20/4° or 1.480 at 25°; n_D 1.4452 at 25° or 1.4472 at 20°; freezing point -22.75° (versus +13.1° for NG); frozen point given in is -22.3°; boiling point 199° at 760mm Hg (with decomposition). Brisance by lead block compression (Hess crusher test) is 30.0 mm, versus 18.5 mm for NG and 16 mm for TNT (misleading, needs to give exact density and mass of explosive (25 or 50 g). Brisance by sand test, determined in mixtures with 40% kieselguhr, gave for NGc mixtures slightly higher results then with those containing NG. 
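A short sketch of the textbook oxygen-balance formula applied to EGDN (C2H4N2O6), consistent with the 0% figure quoted above; the formula used here is the common definition relative to CO2 and is stated as an assumption rather than taken from the text.

```python
# Oxygen balance (to CO2) of an explosive C_x H_y N_w O_z, textbook formula:
# OB% = -1600 / M * (2x + y/2 - z)
def oxygen_balance(x: int, y: int, z: int, molar_mass: float) -> float:
    return -1600.0 / molar_mass * (2 * x + y / 2 - z)

# EGDN: C2H4(ONO2)2 = C2 H4 N2 O6, molar mass ~152.06 g/mol
print(oxygen_balance(x=2, y=4, z=6, molar_mass=152.06))  # 0.0 -> perfect oxygen balance
```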
Chemical properties When ethylene glycol dinitrate is rapidly heated to 215 °C, it explodes; this is preceded by partial decomposition similar to that of nitroglycerin. EGDN has a slightly higher brisance than nitroglycerin. Ethylene glycol dinitrate reacts violently with potassium hydroxide, yielding ethylene glycol and potassium nitrate: C2H4(ONO2)2 + 2 KOH → C2H4(OH)2 + 2 KNO3 Other EGDN was used in manufacturing explosives to lower the freezing point of nitroglycerin, in order to produce dynamite for use in colder weather. Due to its volatility it was used as a detection taggant in some plastic explosives, e.g. Semtex, to allow more reliable explosive detection, until 1995 when it was replaced by dimethyldinitrobutane. It is considerably more stable than glyceryl trinitrate owing to the lack of secondary hydroxyl groups in the precursor polyol. Like other organic nitrates, ethylene glycol dinitrate is a vasodilator. See also Methyl nitrate Erythritol tetranitrate Xylitol pentanitrate Mannitol hexanitrate RE factor References External links WebBook page for ethylene glycol dinitrate CDC - NIOSH Pocket Guide to Chemical Hazards Nitrate esters Antianginals Explosive chemicals Liquid explosives Explosive detection Sugar alcohol explosives Glycol esters
Ethylene glycol dinitrate
Chemistry
1,289
56,458
https://en.wikipedia.org/wiki/Apraxia
Apraxia is a motor disorder caused by damage to the brain (specifically the posterior parietal cortex or corpus callosum), which causes difficulty with motor planning to perform tasks or movements. The nature of the damage determines the disorder's severity, and the absence of sensory loss or paralysis helps to explain the level of difficulty. Children may be born with apraxia; its cause is unknown, and symptoms are usually noticed in the early stages of development. Apraxia occurring later in life, known as acquired apraxia, is typically caused by traumatic brain injury, stroke, dementia, Alzheimer's disease, brain tumor, or other neurodegenerative disorders. The multiple types of apraxia are categorized by the specific ability and/or body part affected. The term "apraxia" comes . Types The several types of apraxia include: Apraxia of speech (AOS) is having difficulty planning and coordinating the movements necessary for speech (e.g. potato=totapo, topato). AOS can independently occur without issues in areas such as verbal comprehension, reading comprehension, writing, articulation, or prosody. Buccofacial or orofacial apraxia, the most common type of apraxia, is the inability to carry out facial movements on demand. For example, an inability to lick one's lips, wink, or whistle when requested to do so. This suggests an inability to carry out volitional movements of the tongue, cheeks, lips, pharynx, or larynx on command. Constructional apraxia is the inability to draw, construct, or copy simple configurations, such as intersecting shapes. These patients have difficulty copying a simple diagram or drawing basic shapes. Gait apraxia is the loss of ability to have normal function of the lower limbs such as walking. This is not due to loss of motor or sensory functions. Ideational/conceptual apraxia is having an inability to conceptualize a task and impaired ability to complete multistep actions. This form of apraxia consists of an inability to select and carry out an appropriate motor program. For example, the patient may complete actions in incorrect orders, such as buttering bread before putting it in the toaster, or putting on shoes before putting on socks. Also, a loss occurs in the ability to voluntarily perform a learned task when given the necessary objects or tools. For instance, if given a screwdriver, these patients may try to write with it as if it were a pen, or try to comb their hair with a toothbrush. Ideomotor apraxia is having deficits in the ability to plan or complete motor actions that rely on semantic memory. These patients are able to explain how to perform an action, but unable to "imagine" or act out a movement such as "pretend to brush your teeth" or "pucker as though you bit into a sour lemon." When the ability to perform an action automatically when cued remains intact, though, this is known as automatic-voluntary dissociation. For example, they may not be able to pick up a phone when asked to do so, but can perform the action without thinking when the phone rings. Limb-kinetic apraxia is having the inability to perform precise, voluntary movements of extremities. For example, a person affected by limb apraxia may have difficulty waving hello, tying shoes, or typing on a computer. This type is common in patients who have experienced a stroke, some type of brain trauma, or have Alzheimer's disease. Oculomotor apraxia is having difficulty moving the eye on command, especially with saccade movements that direct the gaze to targets. 
This is one of the three major components of Balint's syndrome. Causes Apraxia is most often due to a lesion located in the dominant (usually left) hemisphere of the brain, typically in the frontal and parietal lobes. Lesions may be due to stroke, acquired brain injuries, or neurodegenerative diseases such as Alzheimer's disease or other dementias, Parkinson's disease, or Huntington's disease. Also, apraxia possibly may be caused by lesions in other areas of the brain. Ideomotor apraxia is typically due to a decrease in blood flow to the dominant hemisphere of the brain and particularly the parietal and premotor areas. It is frequently seen in patients with corticobasal degeneration. Ideational apraxia has been observed in patients with lesions in the dominant hemisphere near areas associated with aphasia, but more research is needed on ideational apraxia due to brain lesions. The localization of lesions in areas of the frontal and temporal lobes would provide an explanation for the difficulty in motor planning seen in ideational apraxia, as well as for the difficulty of distinguishing it from certain aphasias. Constructional apraxia is often caused by lesions of the inferior nondominant parietal lobe, and can be caused by brain injury, illness, tumor, or other condition that can result in a brain lesion. Diagnosis Although qualitative and quantitative studies exist, little consensus exists on the proper method to assess for apraxia. The criticisms of past methods include failure to meet standard psychometric properties and research-specific designs that translate poorly to nonresearch use. The Test to Measure Upper Limb Apraxia (TULIA) is one method of determining upper limb apraxia through the qualitative and quantitative assessment of gesture production. In contrast to previous publications on apraxic assessment, the reliability and validity of TULIA was thoroughly investigated. The TULIA consists of subtests for the imitation and pantomime of nonsymbolic ("put your index finger on top of your nose"), intransitive ("wave goodbye"), and transitive ("show me how to use a hammer") gestures. Discrimination (differentiating between well- and poorly performed tasks) and recognition (indicating which object corresponds to a pantomimed gesture) tasks are also often tested for a full apraxia evaluation. However, a strong correlation may not be seen between formal test results and actual performance in everyday functioning or activities of daily living (ADLs). A comprehensive assessment of apraxia should include formal testing, standardized measurements of ADLs, observation of daily routines, self-report questionnaires, and targeted interviews with the patients and their relatives. As stated above, apraxia should not be confused with aphasia (the inability to understand language); however, they frequently occur together. Apraxia is so often accompanied by aphasia that many believe that if a person displays AOS, it should be assumed that the patient also has some level of aphasia. Treatment Treatment for individuals with apraxia includes speech therapy, occupational therapy, and physical therapy. Currently, no medications are indicated for the treatment of apraxia, only therapy treatments. Generally, treatments for apraxia have received little attention for several reasons, including the tendency for the condition to resolve spontaneously in acute cases.
Additionally, the very nature of the automatic-voluntary dissociation of motor abilities that defines apraxia means that patients may still be able to automatically perform activities if cued to do so in daily life. Nevertheless, patients experiencing apraxia have less functional independence in their daily lives, and evidence for the treatment of apraxia is scarce. However, a literature review of apraxia treatment to date reveals that although the field is in its early stages of treatment design, certain aspects can be included to treat apraxia. One method is through rehabilitative treatment, which has been found to positively impact apraxia, as well as ADLs. In this review, rehabilitative treatment consisted of 12 different contextual cues, which were used to teach patients how to produce the same gesture under different contextual situations. Additional studies have also recommended varying forms of gesture therapy, whereby the patient is instructed to make gestures (either using objects or symbolically meaningful and nonmeaningful gestures) with progressively less cuing from the therapist. Patients with apraxia may need to use a form of alternative and augmentative communication depending on the severity of the disorder. In addition to using gestures as mentioned, patients can also use communication boards or more sophisticated electronic devices if needed. No single type of therapy or approach has been proven as the best way to treat a patient with apraxia, since each patient's case varies. One-on-one sessions usually work the best, though, with the support of family members and friends. Since everyone responds to therapy differently, some patients will make significant improvements, while others will make less progress. The overall goal for treatment of apraxia is to treat the motor plans for speech, not to treat at the phoneme (sound) level. Individuals with apraxia of speech should receive treatment that focuses on the repetition of target words and rate of speech. The overall goal for treatment of apraxia should be to improve speech intelligibility, rate of speech, and articulation of targeted words. See also Praxis (process) Ataxia Aging movement control Developmental coordination disorder (also known as developmental dyspraxia) Lists of language disorders References Further reading Kasper, D.L.; Braunwald, E.; Fauci, A.S.; Hauser, S.L.; Longo, D.L.; Jameson, J.L. Harrison's Principles of Internal Medicine. New York: McGraw-Hill, 2005. Manasco, H. (2014). Introduction to Neurogenic Communication Disorders. Jones & Bartlett Publishers. External links Acquired Apraxia of Speech: A Treatment Overview Apraxia: Symptoms, Causes, Tests, Treatments ApraxiaKids GettingTheWordOutOnApraxia.com: A Community for Parents of Children with Apraxia Complications of stroke Dementia Aphasias Motor control
Apraxia
Biology
2,064
14,440,076
https://en.wikipedia.org/wiki/VN1R1
Vomeronasal type-1 receptor 1 is a protein that in humans is encoded by the VN1R1 gene. Function Pheromones are chemical signals that elicit specific behavioral responses and physiologic alterations in recipients of the same species. The protein encoded by this gene is similar to pheromone receptors and is primarily localized to the olfactory mucosa. An alternate splice variant of this gene is thought to exist, but its full length nature has not been determined. Ligands Decanal Hedione Iso E Super References Further reading G protein-coupled receptors
VN1R1
Chemistry
123
652,791
https://en.wikipedia.org/wiki/NetIQ
NetIQ is a security software company. In 2023 it was acquired by OpenText. NetIQ was previously based in San Jose, California, with products that provide identity and access management, security and data center management. Its flagship offerings are NetIQ AppManager, NetIQ Identity Manager and NetIQ Access Manager. OpenText has owned NetIQ since 2023 after its acquisition of Micro Focus, which acquired The Attachmate Group, which acquired NetIQ in 2006, six years after the latter acquired Mission Critical Software. History NetIQ was founded by Ching-fa Hwang, Her-daw Che, Hon Wong, Ken Prayoon Cheng and Tom Kemp in September 1995; AppManager was introduced in 1996. Its February 2000 merger with Mission Critical Software widened the company's focus to include systems management as well as performance. In 2001, NetIQ acquired Webtrends, whose software "monitors corporate Internet traffic"; NetIQ sold Webtrends in 2005. In 2006, NetIQ was acquired by AttachmateWRQ. After that company acquired Novell in 2011, it changed its name to The Attachmate Group. NetIQ added identity and security products as well as data center and virtualization to their offerings. It was the Attachmate acquisition that led to the alignment of NetIQ products into three categories: identity and access management, security management and data center management. In 2014, The Attachmate Group was merged into Micro Focus International. NetIQ also announced Identity Manager version 4.5. See also References External links Further reading OpenText 1995 establishments in California 2000 software 2006 mergers and acquisitions 2006 software Companies based in San Jose, California Computer security software Federated identity Identity management systems Information technology companies of the United States Information technology management Micro Focus International Identity Manager Software companies established in 1995 Defunct software companies of the United States
NetIQ
Technology,Engineering
372
3,077,796
https://en.wikipedia.org/wiki/Fluorodeoxyglucose%20%2818F%29
[18F]Fluorodeoxyglucose (INN), or fluorodeoxyglucose F 18 (USAN and USP), also commonly called fluorodeoxyglucose and abbreviated [18F]FDG, 2-[18F]FDG or FDG, is a radiopharmaceutical, specifically a radiotracer, used in the medical imaging modality positron emission tomography (PET). Chemically, it is 2-deoxy-2-[18F]fluoro-D-glucose, a glucose analog, with the positron-emitting radionuclide fluorine-18 substituted for the normal hydroxyl group at the C-2 position in the glucose molecule. The uptake of [18F]FDG by tissues is a marker for the tissue uptake of glucose, which in turn is closely correlated with certain types of tissue metabolism. After [18F]FDG is injected into a patient, a PET scanner can form two-dimensional or three-dimensional images of the distribution of [18F]FDG within the body. Since its development in 1976, [18F]FDG has had a profound influence on research in the neurosciences. The subsequent discovery in 1980 that [18F]FDG accumulates in tumors underpins the evolution of PET as a major clinical tool in cancer diagnosis. [18F]FDG is now the standard radiotracer used for PET neuroimaging and cancer patient management. The images can be assessed by a nuclear medicine physician or radiologist to provide diagnoses of various medical conditions. History In 1968, Dr. Josef Pacák, Zdeněk Točík and Miloslav Černý at the Department of Organic Chemistry, Charles University, Czechoslovakia were the first to describe the synthesis of FDG. Later, in the 1970s, Tatsuo Ido and Al Wolf at the Brookhaven National Laboratory were the first to describe the synthesis of FDG labeled with fluorine-18. The compound was first administered to two normal human volunteers by Abass Alavi in August 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of [18F]FDG in that organ (see history reference below). Beginning in August 1990, and continuing throughout 1991, a shortage of oxygen-18, a raw material for FDG, made it necessary to ration isotope supplies. Israel's oxygen-18 facility had shut down due to the Gulf War, and the U.S. government had shut down its carbon, oxygen and nitrogen isotope facility at Los Alamos National Laboratory, leaving Isotec as the main supplier. Synthesis [18F]FDG was first synthesized via electrophilic fluorination with [18F]F2. Subsequently, a "nucleophilic synthesis" was devised with the same radioisotope. As with all radioactive 18F-labeled radioligands, the fluorine-18 must be made initially as the fluoride anion in a cyclotron. Synthesis of complete [18F]FDG radioactive tracer begins with synthesis of the unattached fluoride radiotracer, since cyclotron bombardment destroys organic molecules of the type usually used for ligands, and in particular, would destroy glucose. Cyclotron production of fluorine-18 may be accomplished by bombardment of neon-20 with deuterons, but usually is done by proton bombardment of 18O-enriched water, causing a (p,n) reaction (sometimes called a "knockout reaction", a common type of nuclear reaction with high probability in which an incoming proton "knocks out" a neutron) in the 18O. This produces "carrier-free" dissolved [18F]fluoride ([18F]F−) ions in the water. The 109.8-minute half-life of fluorine-18 makes rapid and automated chemistry necessary after this point. Anhydrous fluoride salts, which are easier to handle than fluorine gas, can be produced in a cyclotron.
To achieve this chemistry, the [18F]F− is separated from the aqueous solvent by trapping it on an ion-exchange column, and eluted with an acetonitrile solution of 2,2,2-cryptand and potassium carbonate. Evaporation of the eluate gives [(crypt-222)K]+ [18F]F− (2) . The fluoride anion is nucleophilic, so anhydrous conditions are required to avoid competing reactions involving hydroxide, which is also a good nucleophile. The use of the cryptand to sequester the potassium ions avoids ion-pairing between free potassium and fluoride ions, rendering the fluoride anion more reactive. Intermediate 2 is treated with the protected mannose triflate (1); the fluoride anion displaces the triflate leaving group in an SN2 reaction, giving the protected fluorinated deoxyglucose (3). Base hydrolysis removes the acetyl protecting groups, giving the desired product (4) after removing the cryptand via ion-exchange: Mechanism of action, metabolic end-products, and metabolic rate [18F]FDG, as a glucose analog, is taken up by high-glucose-using cells such as brain, brown adipocytes, kidney, and cancer cells, where phosphorylation prevents the glucose from being released again from the cell, once it has been absorbed. The 2-hydroxyl group (–OH) in normal glucose is needed for further glycolysis (metabolism of glucose by splitting it), but [18F]FDG is missing this 2-hydroxyl. Thus, in common with its sister molecule 2-deoxy-D-glucose, FDG cannot be further metabolized in cells. The [18F]FDG-6-phosphate formed when [18F]FDG enters the cell cannot exit the cell before radioactive decay. As a result, the distribution of [18F]FDG is a good reflection of the distribution of glucose uptake and phosphorylation by cells in the body. The fluorine in [18F]FDG decays radioactively via beta-decay to 18O−. After picking up a proton H+ from a hydronium ion in its aqueous environment, the molecule becomes glucose-6-phosphate labeled with harmless nonradioactive "heavy oxygen" in the hydroxyl at the C-2 position. The new presence of a 2-hydroxyl now allows it to be metabolized normally in the same way as ordinary glucose, producing non-radioactive end-products. Although in theory all [18F]FDG is metabolized as above with a radioactivity elimination half-life of 110 minutes (the same as that of fluorine-18), clinical studies have shown that the radioactivity of [18F]FDG partitions into two major fractions. About 75% of the fluorine-18 activity remains in tissues and is eliminated with a half-life of 110 minutes, presumably by decaying in place to O-18 to form [18O]O-glucose-6-phosphate, which is non-radioactive (this molecule can soon be metabolized to carbon dioxide and water, after nuclear transmutation of the fluorine to oxygen ceases to prevent metabolism). Another fraction of [18F]FDG, representing about 20% of the total fluorine-18 activity of an injection, is excreted renally by two hours after a dose of [18F]FDG, with a rapid half-life of about 16 minutes (this portion makes the renal-collecting system and bladder prominent in a normal PET scan). This short biological half-life indicates that this 20% portion of the total fluorine-18 tracer activity is eliminated renally much more quickly than the isotope itself can decay. Unlike normal glucose, FDG is not fully reabsorbed by the kidney. Because of this rapidly excreted urine 18F, the urine of a patient undergoing a PET scan may therefore be especially radioactive for several hours after administration of the isotope. 
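The physical decay governing all of these figures can be illustrated with a short Python sketch. It is a minimal worked example, assuming only the 109.8-minute half-life of fluorine-18 quoted in this article; the function and variable names are illustrative and not part of any standard library.

```python
# Fraction of injected fluorine-18 activity remaining after pure physical decay.
T_HALF_F18 = 109.8  # physical half-life of fluorine-18, in minutes (value from the text)

def remaining_fraction(t_minutes: float, half_life: float = T_HALF_F18) -> float:
    """Return the fraction of the initial activity left after t_minutes of decay."""
    return 0.5 ** (t_minutes / half_life)

# Examples: the ~60-minute uptake period, one physical half-life, and 24 hours.
for t in (60, 110, 24 * 60):
    print(f"after {t:4d} min: {remaining_fraction(t):.5f} of the injected activity remains")
```

Running this reproduces the orders of magnitude discussed in the surrounding text: roughly two-thirds of the activity remains after the one-hour uptake period, about half after one half-life, and only around 0.01% after 24 hours.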
All radioactivity of [18F]FDG, both the 20% which is rapidly excreted in the first several hours of urine which is made after the exam, and the 80% which remains in the patient, decays with a half-life of 110 minutes (just under two hours). Thus, within 24 hours (13 half-lives after the injection), the radioactivity in the patient and in any initially voided urine which may have contaminated bedding or objects after the PET exam will have decayed to 2^−13 = 1/8192 of the initial radioactivity of the dose. In practice, patients who have been injected with [18F]FDG are told to avoid the close vicinity of especially radiation-sensitive persons, such as infants, children and pregnant women, for at least 12 hours (7 half-lives, or decay to 1/128 of the initial radioactive dose). Production Alliance Medical and Siemens Healthcare are the only producers in the United Kingdom. A dose of FDG in England costs about £130. In Northern Ireland, where there is a single supplier, doses cost up to £450. IBA Molecular North America and Zevacor Molecular, both of which are owned by Illinois Health and Science (IBAM having been purchased as of 1 August 2015), Siemens' PETNET Solutions (a subsidiary of Siemens Healthcare), and Cardinal Health are producers in the U.S. Distribution The labeled [18F]FDG compound has a relatively short shelf life which is dominated by the physical decay of fluorine-18 with a half-life of 109.8 minutes, or slightly less than two hours. Still, this half-life is sufficiently long to allow shipping the compound to remote PET scanning facilities, in contrast to other medical radioisotopes like carbon-11. Due to transport regulations for radioactive compounds, delivery is normally done by specially licensed road transport, but means of transport may also include dedicated small commercial jet services. Transport by air allows expanding the distribution area around a [18F]FDG production site to deliver the compound to PET scanning centres even hundreds of miles away. Recently, on-site cyclotrons with integral shielding and portable chemistry stations for making [18F]FDG have accompanied PET scanners to remote hospitals. This technology holds some promise for replacing part of the scramble to transport [18F]FDG from the site of manufacture to the site of use. Applications In PET imaging, [18F]FDG is primarily used for imaging tumors in oncology, where a static [18F]FDG PET scan is performed and the tumor [18F]FDG uptake is analyzed in terms of Standardized Uptake Value (SUV). FDG PET/CT can be used for the assessment of glucose metabolism in the heart and the brain. [18F]FDG is taken up by cells, and subsequently phosphorylated by hexokinase (whose mitochondrial form is greatly elevated in rapidly growing malignant tumours). Phosphorylated [18F]FDG cannot be further metabolised and is thus retained by tissues with high metabolic activity, such as most types of malignant tumours. As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin's disease, non-Hodgkin lymphoma, colorectal cancer, breast cancer, melanoma, and lung cancer. It has also been approved for use in diagnosing Alzheimer's disease. In body-scanning applications in searching for tumor or metastatic disease, a dose of [18F]FDG in solution (typically 5 to 10 millicuries, or 200 to 400 MBq) is injected rapidly into a saline drip running into a vein, in a patient who has been fasting for at least six hours, and who has a suitably low blood sugar.
(This is a problem for some diabetics; usually PET scanning centers will not administer the isotope to patients with blood glucose levels over about 180 mg/dL = 10 mmol/L, and such patients must be rescheduled). The patient must then wait about an hour for the sugar to distribute and be taken up into organs which use glucose – a time during which physical activity must be kept to a minimum, in order to minimize uptake of the radioactive sugar into muscles (this causes unwanted artifacts in the scan, interfering with reading especially when the organs of interest are inside the body vs. inside the skull). Then, the patient is placed in the PET scanner for a series of one or more scans which may take from 20 minutes to as long as an hour (often, only about one-quarter of the body length may be imaged at a time). References Aldoses Deoxy sugars Medicinal radiochemistry Neuroimaging Organofluorides PET radiotracers Pyranoses Radiopharmaceuticals
Fluorodeoxyglucose (18F)
Chemistry
2,839
38,790,613
https://en.wikipedia.org/wiki/Median%20aerodynamic%20diameter
Median aerodynamic diameter (MAD) is one of two parameters influencing the deposition of inhaled particles, the other being the geometric standard deviation of the particle size distribution. The MAD is the value of aerodynamic diameter for which 50% of some quantity in a given aerosol is associated with particles smaller than the MAD, and 50% of the quantity is associated with particles larger than the MAD. It simplifies the true distribution of aerodynamic diameters of a given aerosol as a single value. It is also used to describe those particle sizes for which deposition depends chiefly on inertial impaction and sedimentation. Activity median aerodynamic diameter In the context of radiation protection, activity median aerodynamic diameter (AMAD) is the MAD for the airborne activity in a given aerosol. Internal dosimetry uses it as a means of simplifying the true distribution of aerodynamic diameters of a given aerosol. Count median aerodynamic diameter Count median aerodynamic diameter (CMAD) is only used rarely. Half of the particles (by count) of a given aerosol have the aerodynamic diameter smaller than the CMAD, and the other half larger. A similar quantity, count median (geometric) diameter (CMD) is more common. Mass median aerodynamic diameter Mass median aerodynamic diameter (MMAD) is the MAD for mass. References Nuclear safety and security Radioactive contamination
Median aerodynamic diameter
Physics,Chemistry,Technology
269
70,651
https://en.wikipedia.org/wiki/Van%20der%20Waals%20radius
The van der Waals radius, r_w, of an atom is the radius of an imaginary hard sphere representing the distance of closest approach for another atom. It is named after Johannes Diderik van der Waals, winner of the 1910 Nobel Prize in Physics, as he was the first to recognise that atoms were not simply points and to demonstrate the physical consequences of their size through the van der Waals equation of state. van der Waals volume The van der Waals volume, V_w, also called the atomic volume or molecular volume, is the atomic property most directly related to the van der Waals radius. It is the volume "occupied" by an individual atom (or molecule). The van der Waals volume may be calculated if the van der Waals radii (and, for molecules, the inter-atomic distances and angles) are known. For a single atom, it is the volume of a sphere whose radius is the van der Waals radius of the atom: V_w = (4/3)πr_w³. For a molecule, it is the volume enclosed by the van der Waals surface. The van der Waals volume of a molecule is always smaller than the sum of the van der Waals volumes of the constituent atoms: the atoms can be said to "overlap" when they form chemical bonds. The van der Waals volume of an atom or molecule may also be determined by experimental measurements on gases, notably from the van der Waals constant b, the polarizability α, or the molar refractivity A. In all three cases, measurements are made on macroscopic samples and it is normal to express the results as molar quantities. To find the van der Waals volume of a single atom or molecule, it is necessary to divide by the Avogadro constant N_A. The molar van der Waals volume should not be confused with the molar volume of the substance. In general, at normal laboratory temperatures and pressures, the atoms or molecules of a gas only occupy about 1/1000 of the volume of the gas, the rest being empty space. Hence the molar van der Waals volume, which only counts the volume occupied by the atoms or molecules, is usually about 1000 times smaller than the molar volume for a gas at standard temperature and pressure. Table of van der Waals radii Van der Waals radii of the elements are tabulated in picometers (pm); unless indicated otherwise, the data is from Mathematica's ElementData function from Wolfram Research, Inc.
Methods of determination Van der Waals radii may be determined from the mechanical properties of gases (the original method), from the critical point, from measurements of atomic spacing between pairs of unbonded atoms in crystals or from measurements of electrical or optical properties (the polarizability and the molar refractivity). These various methods give values for the van der Waals radius which are similar (1–2 Å, 100–200 pm) but not identical. Tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will often have different values for the van der Waals radius of the same atom. Indeed, there is no reason to assume that the van der Waals radius is a fixed property of the atom in all circumstances: rather, it tends to vary with the particular chemical environment of the atom in any given case. Van der Waals equation of state The van der Waals equation of state is the simplest and best-known modification of the ideal gas law to account for the behaviour of real gases: (p + a(n/V)²)(V − nb) = nRT, where p is the pressure, n is the number of moles of the gas in question, a and b depend on the particular gas, V is the volume, R is the molar gas constant and T is the absolute temperature; a is a correction for intermolecular forces and b corrects for finite atomic or molecular sizes; the value of b equals the van der Waals volume per mole of the gas. Their values vary from gas to gas. The van der Waals equation also has a microscopic interpretation: molecules interact with one another. The interaction is strongly repulsive at a very short distance, becomes mildly attractive at the intermediate range, and vanishes at a long distance. The ideal gas law must be corrected when attractive and repulsive forces are considered. For example, the mutual repulsion between molecules has the effect of excluding neighbors from a certain amount of space around each molecule. Thus, a fraction of the total space becomes unavailable to each molecule as it executes random motion. In the equation of state, this volume of exclusion (nb) should be subtracted from the volume of the container (V), thus: (V − nb). The other term introduced in the van der Waals equation, a(n/V)², describes a weak attractive force among molecules (known as the van der Waals force), which increases when n increases or V decreases and molecules become more crowded together. The van der Waals constant b can be used to calculate the van der Waals volume of an atom or molecule with experimental data derived from measurements on gases. For helium, b = 23.7 cm³/mol. Helium is a monatomic gas, and each mole of helium contains 6.022×10²³ atoms (the Avogadro constant, N_A), so the volume per atom is V_w = b/N_A. Therefore, the van der Waals volume of a single atom is V_w = 39.36 Å³, which corresponds to r_w = 2.11 Å (≈ 200 picometers). This method may be extended to diatomic gases by approximating the molecule as a rod with rounded ends, where the diameter of the rod is twice the van der Waals radius and the length of its cylindrical part is the internuclear distance. The algebra is more complicated, but the relation can be solved by the normal methods for cubic functions.
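As a worked restatement of the helium calculation just described, the following Python sketch converts the van der Waals constant b into a per-atom volume and an equivalent hard-sphere radius. It assumes only the hard-sphere relations given above and the value b = 23.7 cm³/mol quoted in the text; the function name is illustrative.

```python
from math import pi

N_A = 6.022e23        # Avogadro constant, atoms per mole
A3_PER_CM3 = 1e24     # 1 cm^3 = 1e24 cubic angstroms

def vdw_volume_and_radius(b_cm3_per_mol):
    """Hard-sphere estimate: V_w = b / N_A and V_w = (4/3) * pi * r_w**3."""
    v_w = b_cm3_per_mol / N_A * A3_PER_CM3          # per-atom volume in cubic angstroms
    r_w = (3.0 * v_w / (4.0 * pi)) ** (1.0 / 3.0)   # equivalent radius in angstroms
    return v_w, r_w

v_w, r_w = vdw_volume_and_radius(23.7)   # helium, b = 23.7 cm^3/mol
print(f"V_w = {v_w:.2f} cubic angstroms, r_w = {r_w:.2f} angstroms")
# prints roughly V_w = 39.36 and r_w = 2.11, matching the values quoted in the text
```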
Crystallographic measurements The molecules in a molecular crystal are held together by van der Waals forces rather than chemical bonds. In principle, the closest that two atoms belonging to different molecules can approach one another is given by the sum of their van der Waals radii. By examining a large number of structures of molecular crystals, it is possible to find a minimum radius for each type of atom such that other non-bonded atoms do not encroach any closer. This approach was first used by Linus Pauling in his seminal work The Nature of the Chemical Bond. Arnold Bondi also conducted a study of this type, published in 1964, although he also considered other methods of determining the van der Waals radius in coming to his final estimates. Some of Bondi's figures are given in the table at the top of this article, and they remain the most widely used "consensus" values for the van der Waals radii of the elements. Scott Rowland and Robin Taylor re-examined these 1964 figures in the light of more recent crystallographic data: on the whole, the agreement was very good, although they recommend a value of 1.09 Å for the van der Waals radius of hydrogen as opposed to Bondi's 1.20 Å. A more recent analysis of the Cambridge Structural Database, carried out by Santiago Alvarez, provided a new set of values for 93 naturally occurring elements. A simple example of the use of crystallographic data (here neutron diffraction) is to consider the case of solid helium, where the atoms are held together only by van der Waals forces (rather than by covalent or metallic bonds) and so the distance between the nuclei can be considered to be equal to twice the van der Waals radius. The measured density of solid helium at 1.1 K and 66 atm gives its molar volume V_m. The van der Waals volume is given by V_w = (π/√18)·V_m/N_A, where the factor of π/√18 arises from the close packing of spheres: this gives V_w = 23.0 Å³, corresponding to a van der Waals radius r_w = 1.76 Å. Molar refractivity The molar refractivity A of a gas is related to its refractive index n by the Lorentz–Lorenz equation, A = [(n² − 1)/(n² + 2)]·V_m, where V_m is the molar volume. The refractive index of helium measured at 0 °C and 101.325 kPa gives the corresponding molar refractivity A; dividing by the Avogadro constant gives V_w = 0.8685 Å³, corresponding to r_w = 0.59 Å. Polarizability The polarizability α of a gas is related to its electric susceptibility χ by the relation α = ε₀χ/N, where N is the number density of atoms, and the electric susceptibility may be calculated from tabulated values of the relative permittivity ε using the relation χ = ε − 1. The electric susceptibility of helium measured at 0 °C and 101.325 kPa gives the corresponding polarizability α; expressed as a polarizability volume, α/(4πε₀), this gives the van der Waals volume of helium V_w = 0.2073 Å³ by this method, corresponding to r_w = 0.37 Å. When the atomic polarizability is quoted in units of volume such as Å³, as is often the case, it is equal to the van der Waals volume. However, the term "atomic polarizability" is preferred as polarizability is a precisely defined (and measurable) physical quantity, whereas "van der Waals volume" can have any number of definitions depending on the method of measurement. See also Atomic radii of the elements (data page) van der Waals force van der Waals molecule van der Waals strain van der Waals surface References Further reading External links van der Waals Radius of the elements at PeriodicTable.com van der Waals Radius – Periodicity at WebElements.com Chemical properties Intermolecular forces Radius Atomic radius
Van der Waals radius
Physics,Chemistry,Materials_science,Engineering
2,646
75,336,541
https://en.wikipedia.org/wiki/Jay%20Quade
Jay Quade (born December 13, 1955) is an American geochemist and geologist and former middle-distance runner. He is known for pioneering research applying geochemical isotopic methods for investigations of tectonics, global climate change, and the paleontology of Darwinian evolution. Biography Jay Quade was born and grew up in Nevada. As a teenager, he set two all-time Nevada State high school track and field records. At the University of New Mexico, he had a track scholarship for four years. He was twice an NCAA All-American in track and once an NCAA champion in track (relay race). In 1977 he became a geologist employed by the Mineral Exploration Division of Utah International, Inc. In 1978 he graduated with a B.S. in geology from the University of New Mexico. In 1982 he graduated with an M.S. in geology from the University of Arizona. From 1982 to 1989 he worked as a geologist in Nevada — from 1982 to 1984 for Noranda Exploration, Inc., from 1984 to 1986 for the Desert Research Institute, and from 1986 to 1989 for Mifflin & Associates (a mining consulting firm founded in 1986 by the geologist Martin David Mifflin). From 1989 to 1990 Quade was a graduate student at the University of Utah, where he received his Ph.D. in 1990. In 1991 he was a postdoc at the Australian National University. At the University of Arizona, he was appointed to an assistant professorship in 1992, an associate professorship in 1998, and a full professorship in 2003. Quade's research is remarkably varied, including low-temperature geochemistry, radiometric dating using a variety of isotopes, and theoretical reconstructions of paleoenvironments, mostly from the Cenozoic. Some of his projects have involved archaeologists and anthropologists. Quade, with Thure E. Cerling and other colleagues, did important research on the stable isotope composition of soil carbonate in the Great Basin. In 2001, Quade, with Nathan B. English, Julio L. Betancourt, and Jeffrey S. Dean, published an important paper on the deforestation of Chaco Canyon. As a geological team member, Quade has done fieldwork on stratigraphy and paleohydrologic reconstruction in the western USA, gold deposits in Oregon, Alaska, and Nevada, and paleo-lake hydrology in Mongolia, Tibet, Chile, Argentina, and the western USA. From 1985 to 2015 his fieldwork on low-temperature geochemistry was done all over the world: parts of the US, Asia, Australia, and South America, as well as Greece and Ethiopia. In 2001 Quade won the Farouk El-Baz Award of the Geological Society of America (GSA). In 2015 he was elected a Fellow of the Geological Society of America and also a Fellow of the American Geophysical Union (AGU). In 2017 he was elected a Fellow of the Geochemical Society. In 2016 he received a Lady Davis Fellowship from the Hebrew University, and in 2017 a Japan Society for the Promotion of Science Fellowship from the University of Tokyo. In 2018 he was awarded the Arthur L. Day Medal. In Nevada on December 21, 1984, Jay Quade married Barbra A. Valdez. They have three children. Selected publications Articles (See carbon fixation.)
Books References External links 1955 births Living people Geochemists 20th-century American geologists 21st-century American geologists Members of the United States National Academy of Sciences Fellows of the American Association for the Advancement of Science University of New Mexico alumni University of Arizona alumni University of Utah alumni University of Arizona faculty Fellows of the American Geophysical Union Fellows of the Geological Society of America Scientists from Nevada New Mexico Lobos men's track and field athletes NCAA Division I Indoor Track and Field Championships winners American male middle-distance runners 20th-century American sportsmen
Jay Quade
Chemistry
788
49,863,574
https://en.wikipedia.org/wiki/Freddy%20Cachazo
Freddy Alexander Cachazo is a Venezuelan-born theoretical physicist who holds the Gluskin Sheff Freeman Dyson Chair in Theoretical Physics at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada. He is known for his contributions to quantum field theory through the study of scattering amplitudes, in particular in quantum chromodynamics, N = 4 supersymmetric Yang–Mills theory and quantum gravity. His contributions include the BCFW recursion relations, the CSW vertex expansion and the amplituhedron. In 2014, Cachazo was awarded the New Horizons Prize for uncovering numerous structures underlying scattering amplitudes in gauge theories and gravity. Academic career After graduating from Simón Bolívar University in 1996, Cachazo attended a year-long Postgraduate Diploma Programme at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy. He was admitted to Harvard University, where he completed his Ph.D. under the supervision of Cumrun Vafa in 2002. Cachazo was a post-doctoral member of the Institute for Advanced Study (IAS) in Princeton, New Jersey in 2002-05 and 2009-10. In 2005, he became a faculty member at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, as well as an adjunct faculty member at the nearby University of Waterloo. He currently holds the Gluskin Sheff Freeman Dyson Chair in Theoretical Physics. Cachazo's research concerns quantum field theory, the underlying theory describing fundamental interactions of particles and space-time itself. His research program is to understand their deep structure through the study of scattering amplitudes. Such understanding allows for both efficient computation of the probabilities of physical processes occurring and insights into the unknown structures of gauge theories and gravity. Together with Ruth Britto, Bo Feng and Edward Witten, he introduced the recursion relations for the computation of scattering amplitudes, which opened a new window for computations required at particle accelerators, such as the Large Hadron Collider. With Nima Arkani-Hamed and collaborators, he studied N = 4 supersymmetric Yang–Mills theory and showed how to compute amplitudes at any order in perturbation theory. He co-discovered a new formalism unifying gauge theory and gravity in any space-time dimension, known as the Cachazo-He-Yuan formulation. Awards and honors In 2009, he was awarded the Gribov Medal of the European Physical Society for outstanding work by a young physicist. Two years later he won the Rutherford Medal, an equivalent prize awarded by the Royal Society of Canada. In 2012, the Canadian Association of Physicists awarded Cachazo the Herzberg Medal. Finally, he won the 2014 New Horizons Prize, which is regarded by many as the most prestigious award for young theoretical physicists. Selected publications References Living people Harvard University alumni 21st-century Canadian physicists Theoretical physicists Year of birth missing (living people)
Freddy Cachazo
Physics
604
18,934,464
https://en.wikipedia.org/wiki/Embrace%2C%20extend%2C%20and%20extinguish
"Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate", is a phrase that the U.S. Department of Justice found was used internally by Microsoft to describe its strategy for entering product categories involving widely used open standards, extending those standards with proprietary capabilities, and using the differences to strongly disadvantage its competitors. Origin The strategy and phrase "embrace and extend" were first described outside Microsoft in a 1996 article in The New York Times titled "Tomorrow, the World Wide Web! Microsoft, the PC King, Wants to Reign Over the Internet", in which writer John Markoff said, "Rather than merely embrace and extend the Internet, the company's critics now fear, Microsoft intends to engulf it." The phrase "embrace and extend" also appears in a facetious motivational song by an anonymous Microsoft employee, and in an interview of Steve Ballmer by The New York Times. A variant of the phrase, "embrace, extend then innovate", is used in J Allard's 1994 memo "Windows: The Next Killer Application on the Internet" to Paul Maritz and other executives at Microsoft. The memo starts with a background on the Internet in general, and then proposes a strategy on how to turn Windows into the next "killer app" for the Internet: The addition "extinguish" was introduced in the United States v. Microsoft Corp. antitrust trial when then vice president of Intel, Steven McGeady, used the phrase to explain Maritz's statement in a 1995 meeting with Intel that described Microsoft's strategy to "kill HTML by extending it". Strategy The strategy's three phases are: Embrace: Development of software substantially compatible with an Open Standard. Extend: Addition of features not supported by the Open Standard, creating interoperability problems. Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors who are unable to support the new extensions. Microsoft claims the original strategy is not anti-competitive, but rather an exercise to implement features it believes customers want. Examples by Microsoft Browser incompatibilities: The plaintiffs in an antitrust case claimed Microsoft had added support for ActiveX controls in the Internet Explorer Web browser to break compatibility with Netscape Navigator, which used components based on Java and Netscape's own plugin system. On CSS, data:, etc.: A decade after the original Netscape-related antitrust suit, the Web browser company Opera Software filed an antitrust complaint against Microsoft with the European Union, saying it "calls on Microsoft to adhere to its own public pronouncements to support these standards, instead of stifling them with its notorious 'Embrace, Extend and Extinguish' strategy". Office documents: In a memo to the Office product group in 1998, Bill Gates stated: "One thing we have got to change in our strategy – allowing Office documents to be rendered very well by other people's browsers is one of the most destructive things we could do to the company. We have to stop putting any effort into this and make sure that Office documents very well depends on PROPRIETARY IE capabilities. Anything else is suicide for our platform. This is a case where Office has to avoid doing something to destory Windows." 
Breaking Java's portability: The antitrust case's plaintiffs also accused Microsoft of using an "embrace and extend" strategy with regard to the Java platform, which was designed explicitly with the goal of developing programs that could run on any operating system, be it Windows, Mac, or Linux. They claimed that, by omitting the Java Native Interface (JNI) from its implementation and providing J/Direct for a similar purpose, Microsoft deliberately tied Windows Java programs to its platform, making them unusable on Linux and Mac systems. According to an internal communication, Microsoft sought to downplay Java's cross-platform capability and make it "just the latest, best way to write Windows applications". Microsoft paid Sun Microsystems US$20 million in January 2001 to settle the resulting legal implications of their breach of contract. More Java issues: Sun sued Microsoft over Java again in 2002 and Microsoft agreed to settle out of court for US$2 billion. Instant messaging: In 2001, CNET described an instance concerning Microsoft's instant messaging program. "Embrace" AOL's IM protocol, the de facto standard of the 1990s and early 2000s. "Extend" the standard with proprietary Microsoft addons which added new features, but broke compatibility with AOL's software. Gain dominance, since Microsoft had 95% OS share and their MSN Messenger was provided for free. Finally, "extinguish" and lock out AOL's IM software, since AOL was unable to use the modified MS-patented protocol. Email protocols: Microsoft supported POP3, IMAP, and SMTP email protocols in their Microsoft Outlook email client. At the same time, they developed their own email protocol, MAPI, which has since been documented but is largely unused by third parties. Microsoft announced that it would end support for the less secure basic authentication, which lacks support for multi-factor authentication, for access to Exchange Online APIs for Office 365 customers; this disables most use of IMAP or POP3 and requires significant upgrades in applications to the more secure OAuth2-based authentication in order to continue using those protocols. Some customers have responded by simply shutting off the older protocols. Web browsers Netscape During the browser wars, Netscape implemented the "font" tag, among other HTML extensions, without seeking review from a standards body. With the rise of Internet Explorer, the two companies became locked in a dead heat to out-implement each other with non-standards-compliant features. In 2004, to prevent a repeat of the "browser wars", and the resulting morass of conflicting standards, the browser vendors Apple Inc. (Safari), Mozilla Foundation (Firefox), and Opera Software (Opera browser) formed the Web Hypertext Application Technology Working Group (WHATWG) to create open standards to complement those of the World Wide Web Consortium. Microsoft refused to join, citing the group's lack of a patent policy as the reason. Google Chrome With its dominance in the web browser market, Google has been accused of using Google Chrome and Blink development to push new web standards that are proposed in-house by Google and subsequently implemented by its services first and foremost. These have led to performance disadvantages and compatibility issues with competing browsers, and in some cases, developers intentionally refusing to test their websites on any other browser than Chrome.
Tom Warren of The Verge went as far as comparing Chrome to Internet Explorer 6, the default browser of Windows XP that was often targeted by competitors due to its similar ubiquity in the early 2000s. See also 32-bit vs 64-bit AARD code Criticism of Microsoft Halloween documents Microsoft and open source Network effect Path dependence Vendor lock-in Enshittification Planned obsolescence References External links Report on Microsoft documents relating to Office and IE Embrace, extend and extinguish Microsoft criticisms and controversies Interoperability Marketing techniques Spheres of influence Standards
Embrace, extend, and extinguish
Engineering
1,492
24,559,076
https://en.wikipedia.org/wiki/The%20Amateur%20Astronomer
The Amateur Astronomer was a four-page bulletin published between 1929 and 1935 by the Amateur Astronomers Association of New York. C. S. Brainin was the first editor; a section called "Meteor Notes" was edited by Virginia Geiger starting in 1933. In 1935, The Amateur Astronomer merged into The Sky published by the Hayden Planetarium. In 1941, The Sky merged with The Telescope to become Sky & Telescope, which has remained in print since then. References External links Web site of the Amateur Astronomers Association of New York 1929 establishments in New York (state) 1935 disestablishments in New York (state) Amateur astronomy Monthly magazines published in the United States Science and technology magazines published in the United States Astronomy magazines Defunct magazines published in the United States Magazines established in 1929 Magazines disestablished in 1935 Magazines published in New York City
The Amateur Astronomer
Astronomy
170
58,091,424
https://en.wikipedia.org/wiki/Pool%20fire
A pool fire is a type of diffusion flame where a layer of volatile liquid fuel is evaporating and burning. The fuel layer can be either on a horizontal solid substrate or floating on a higher-density liquid, usually water. Pool fires are an important scenario in fire process safety and combustion science, as large amounts of liquid fuels are stored and transported by different industries. Physical properties The most important physical parameter describing a pool fire is the heat release rate, which determines the minimum safe distance needed to avoid burns from thermal radiation. The heat release rate is limited by the rate of evaporation of the fuel, as the combustion reaction takes place in the gas phase. The evaporation rate, in turn, is determined by other physical parameters, such as the depth, surface area and shape of the pool, as well as the fuel boiling point, heat of vaporization, heat of combustion, thermal conductivity and others. A feedback loop exists between the heat release rate and evaporation rate, as a significant part of the energy released in the combustion reaction will be transmitted from the gas phase to the liquid fuel, and can supply the needed heat of vaporization. In the case of large pool fires, most of the heat transfer happens in the form of thermal radiation. Typical fuels in accidental pool fires, or experiments simulating them, include aliphatic hydrocarbons (n-heptane, liquefied propane gas), aromatic hydrocarbons (toluene, xylene), alcohols (methanol, ethanol) or mixtures thereof (kerosene). A pool fire involving a water-insoluble fuel should not be extinguished with water, as this can trigger explosive boiling and spattering of the burning material. Open-top tank fires are pool fires of industrial scale that occur when the roof of an atmospheric tank fails due to an internal tank blast and the contents of the tank then catch fire. If a layer of water is present underneath the fuel and the fuel is a mixture of chemical species with several different boiling points, a boilover may eventually occur, greatly aggravating the fire. The boilover onset occurs as soon as a hot zone propagates down through the fuel, reaching the water and making it boil. See also Radiative transfer Fire safety References Types of fire Process safety
Pool fire
Chemistry,Engineering
481
54,349,009
https://en.wikipedia.org/wiki/Ugeoji
In Korean cuisine, ugeoji () is outer leaves or stems of cabbage, radish, and other greens, which are removed while trimming the vegetables. Ugeoji is often used in soups and stews, including haejang-guk (hangover soup). Gallery See also Siraegi – dried radish greens References Food ingredients Korean cuisine
Ugeoji
Technology
81
96,910
https://en.wikipedia.org/wiki/Respiratory%20complex%20I
Respiratory complex I (also known as NADH:ubiquinone oxidoreductase, Type I NADH dehydrogenase and mitochondrial complex I) is the first large protein complex of the respiratory chains of many organisms from bacteria to humans. It catalyzes the transfer of electrons from NADH to coenzyme Q10 (CoQ10) and translocates protons across the inner mitochondrial membrane in eukaryotes or the plasma membrane of bacteria. This enzyme is essential for the normal functioning of cells, and mutations in its subunits lead to a wide range of inherited neuromuscular and metabolic disorders. Defects in this enzyme are responsible for the development of several pathological processes such as ischemia/reperfusion damage (stroke and cardiac infarction), Parkinson's disease and others. Function Complex I is the first enzyme of the mitochondrial electron transport chain. There are three energy-transducing enzymes in the electron transport chain - NADH:ubiquinone oxidoreductase (complex I), Coenzyme Q – cytochrome c reductase (complex III), and cytochrome c oxidase (complex IV). Complex I is the largest and most complicated enzyme of the electron transport chain. The reaction catalyzed by complex I is: NADH + H+ + CoQ + 4 H+(in) → NAD+ + CoQH2 + 4 H+(out). In this process, the complex translocates four protons across the inner membrane per molecule of oxidized NADH, helping to build the electrochemical potential difference used to produce ATP. Escherichia coli complex I (NADH dehydrogenase) is capable of proton translocation in the same direction as the established Δψ, showing that in the tested conditions, the coupling ion is H+. Na+ transport in the opposite direction was observed, and although Na+ was not necessary for the catalytic or proton transport activities, its presence increased the latter. H+ was translocated by the Paracoccus denitrificans complex I, but in this case, H+ transport was not influenced by Na+, and Na+ transport was not observed. Possibly, the E. coli complex I has two energy coupling sites (one Na+-independent and the other Na+-dependent), as observed for the Rhodothermus marinus complex I, whereas the coupling mechanism of the P. denitrificans enzyme is completely Na+-independent. It is also possible that another transporter catalyzes the uptake of Na+. Complex I energy transduction by proton pumping may not be exclusive to the R. marinus enzyme. The Na+/H+ antiport activity seems not to be a general property of complex I. However, the existence of Na+-translocating activity of the complex I is still in question. The reaction can be reversed – referred to as aerobic succinate-supported NAD+ reduction by ubiquinol – in the presence of a high membrane potential, but the exact catalytic mechanism remains unknown. The driving force of this reaction is a potential across the membrane which can be maintained either by ATP-hydrolysis or by complexes III and IV during succinate oxidation. Complex I may have a role in triggering apoptosis. In fact, there has been shown to be a correlation between mitochondrial activities and programmed cell death (PCD) during somatic embryo development. Complex I is not homologous to the Na+-translocating NADH dehydrogenase (NDH) family (TC# 3.D.1), a member of the Na+-transporting Mrp superfamily. As a result of two NADH molecules being oxidized to NAD+, three molecules of ATP can be produced by Complex V (ATP synthase) downstream in the respiratory chain. Mechanism Overall mechanism All redox reactions take place in the hydrophilic domain of complex I.
NADH initially binds to complex I, and transfers two electrons to the flavin mononucleotide (FMN) prosthetic group of the enzyme, creating FMNH2. The electron acceptor – the isoalloxazine ring – of FMN is identical to that of FAD. The electrons are then transferred through the FMN via a series of iron-sulfur (Fe-S) clusters, and finally to coenzyme Q10 (ubiquinone). This electron flow changes the redox state of the protein, inducing conformational changes of the protein which alter the pKa values of ionizable side chains, and causes four hydrogen ions to be pumped out of the mitochondrial matrix. Ubiquinone (CoQ) accepts two electrons to be reduced to ubiquinol (CoQH2). Electron transfer mechanism The proposed pathway for electron transport prior to ubiquinone reduction is as follows: NADH – FMN – N3 – N1b – N4 – N5 – N6a – N6b – N2 – Q, where Nx is a labelling convention for iron-sulfur clusters. The high reduction potential of the N2 cluster and the relative proximity of the other clusters in the chain enable efficient electron transfer over long distance in the protein (with transfer times from NADH to the N2 iron-sulfur cluster of about 100 μs). The equilibrium dynamics of Complex I are primarily driven by the quinone redox cycle. In conditions of high proton motive force (and accordingly, a ubiquinol-concentrated pool), the enzyme runs in the reverse direction. Ubiquinol is oxidized to ubiquinone, and the resulting released protons reduce the proton motive force. Proton translocation mechanism The coupling of proton translocation and electron transport in Complex I is currently proposed as being indirect (long range conformational changes) as opposed to direct (redox intermediates in the hydrogen pumps as in heme groups of Complexes III and IV). The architecture of the hydrophobic region of complex I shows multiple proton transporters that are mechanically interlinked. The three central components believed to contribute to this long-range conformational change event are the pH-coupled N2 iron-sulfur cluster, the quinone reduction, and the transmembrane helix subunits of the membrane arm. Transduction of conformational changes to drive the transmembrane transporters linked by a 'connecting rod' during the reduction of ubiquinone can account for two or three of the four protons pumped per NADH oxidized. The remaining proton must be pumped by direct coupling at the ubiquinone-binding site. It is proposed that direct and indirect coupling mechanisms account for the pumping of the four protons. The N2 cluster's proximity to a nearby cysteine residue results in a conformational change upon reduction in the nearby helices, leading to small but important changes in the overall protein conformation. Further electron paramagnetic resonance studies of the electron transfer have demonstrated that most of the energy that is released during the subsequent CoQ reduction is on the final ubiquinol formation step from semiquinone, providing evidence for the "single stroke" H+ translocation mechanism (i.e. all four protons move across the membrane at the same time). Alternative theories suggest a "two stroke mechanism" where each reduction step (semiquinone and ubiquinol) results in a stroke of two protons entering the intermembrane space. The resulting ubiquinol localized to the membrane domain interacts with negatively charged residues in the membrane arm, stabilizing conformational changes.
An antiporter mechanism (Na+/H+ swap) has been proposed using evidence of conserved Asp residues in the membrane arm. The presence of Lys, Glu, and His residues enables proton gating (a protonation followed by deprotonation event across the membrane) driven by the pKa of the residues. Composition and structure NADH:ubiquinone oxidoreductase is the largest of the respiratory complexes. In mammals, the enzyme contains 44 separate water-soluble peripheral membrane proteins, which are anchored to the integral membrane constituents. Of particular functional importance are the flavin prosthetic group (FMN) and eight iron-sulfur clusters (FeS). Of the 44 subunits, seven are encoded by the mitochondrial genome. The structure is an "L" shape with a long membrane domain (with around 60 trans-membrane helices) and a hydrophilic (or peripheral) domain, which includes all the known redox centres and the NADH binding site. All thirteen of the E. coli proteins that comprise NADH dehydrogenase I are encoded within the nuo operon, and are homologous to mitochondrial complex I subunits. The antiporter-like subunits NuoL/M/N each contain 14 conserved transmembrane (TM) helices. Two of them are discontinuous, but subunit NuoL contains a 110 Å long amphipathic α-helix, spanning the entire length of the domain. The subunit NuoL is related to Na+/H+ antiporters of TC# 2.A.63.1.1 (PhaA and PhaD). Three of the conserved, membrane-bound subunits in NADH dehydrogenase are related to each other, and to Mrp sodium-proton antiporters. Structural analysis of two prokaryotic complexes I revealed that the three subunits each contain fourteen transmembrane helices that overlay in structural alignments: the translocation of three protons may be coordinated by a lateral helix connecting them. Complex I contains a ubiquinone binding pocket at the interface of the 49-kDa and PSST subunits. Close to iron-sulfur cluster N2, the proposed immediate electron donor for ubiquinone, a highly conserved tyrosine constitutes a critical element of the quinone reduction site. A possible quinone exchange path leads from cluster N2 to the N-terminal beta-sheet of the 49-kDa subunit. All 45 subunits of the bovine NDHI have been sequenced. Each complex contains noncovalently bound FMN, coenzyme Q and several iron-sulfur centers. The bacterial NDHs have 8-9 iron-sulfur centers. A recent study used electron paramagnetic resonance (EPR) spectra and double electron-electron resonance (DEER) to determine the path of electron transfer through the iron-sulfur complexes, which are located in the hydrophilic domain. Seven of these clusters form a chain from the flavin to the quinone binding sites; the eighth cluster is located on the other side of the flavin, and its function is unknown. The EPR and DEER results suggest an alternating or “roller-coaster” potential energy profile for the electron transfer between the active sites and along the iron-sulfur clusters, which can optimize the rate of electron travel and allow efficient energy conversion in complex I. (Notes to a subunit table not reproduced here: some subunits are found in all species except fungi; some may or may not be present in a given species; some are found only in fungal species such as Schizosaccharomyces pombe; and NDUFA4 has recently been described as a subunit of complex IV rather than of complex I.) Inhibitors Inhibition of complex I is the mode of action of the METI acaricides and insecticides: fenazaquin, fenpyroximate, pyrimidifen, pyridaben, tebufenpyrad, and tolfenpyrad. They are assigned to IRAC group 21A. 
Perhaps the best-known inhibitor of complex I is rotenone, which is used as a piscicide and was previously in common use as an organic pesticide, but is now banned in many countries. It is in IRAC group 21B. Rotenone and rotenoids are isoflavonoids occurring in several genera of tropical plants such as Antonia (Loganiaceae), Derris and Lonchocarpus (Faboideae, Fabaceae). There have been reports of the indigenous people of French Guiana using rotenone-containing plants to fish - due to its ichthyotoxic effect - as early as the 17th century. Rotenone binds to the ubiquinone binding site of complex I, as does piericidin A, another potent inhibitor and a close structural homologue of ubiquinone. Acetogenins from Annonaceae are even more potent inhibitors of complex I. They cross-link to the ND2 subunit, which suggests that ND2 is essential for quinone-binding. Rolliniastatin-2, an acetogenin, is the first complex I inhibitor found that does not share the same binding site as rotenone. Bullatacin (an acetogenin found in Asimina triloba fruit) is the most potent known inhibitor of NADH dehydrogenase (ubiquinone) (IC50 = 1.2 nM, stronger than rotenone). Despite more than 50 years of study of complex I, no inhibitors blocking the electron flow inside the enzyme have been found. Hydrophobic inhibitors like rotenone or piericidin most likely disrupt the electron transfer between the terminal FeS cluster N2 and ubiquinone. It has been shown that long-term systemic inhibition of complex I by rotenone can induce selective degeneration of dopaminergic neurons. Complex I is also blocked by adenosine diphosphate ribose – a reversible competitive inhibitor of NADH oxidation – by binding to the enzyme at the nucleotide binding site. Both hydrophilic NADH and hydrophobic ubiquinone analogs act at the beginning and the end of the internal electron-transport pathway, respectively. The antidiabetic drug metformin has been shown to induce a mild and transient inhibition of the mitochondrial respiratory chain complex I, and this inhibition appears to play a key role in its mechanism of action. Inhibition of complex I has been implicated in hepatotoxicity associated with a variety of drugs, for instance flutamide and nefazodone. Further, complex I inhibition was shown to trigger NAD+-independent glucose catabolism. Active/inactive transition The catalytic properties of eukaryotic complex I are not simple. Two catalytically and structurally distinct forms exist in any given preparation of the enzyme: one is the fully competent, so-called “active” A-form and the other is the catalytically silent, dormant, “inactive” D-form. After exposure of the idle enzyme to elevated but physiological temperatures (>30 °C) in the absence of substrate, the enzyme converts to the D-form. This form is catalytically incompetent but can be activated by the slow reaction (k~4 min−1) of NADH oxidation with subsequent ubiquinone reduction. After one or several turnovers the enzyme becomes active and can catalyse the physiological NADH:ubiquinone reaction at a much higher rate (k~10^4 min−1). In the presence of divalent cations (Mg2+, Ca2+), or at alkaline pH, the activation takes much longer. The high activation energy (270 kJ/mol) of the deactivation process indicates the occurrence of major conformational changes in the organisation of complex I. However, until now, the only conformational difference observed between these two forms is the number of cysteine residues exposed at the surface of the enzyme. 
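The quoted activation energy of 270 kJ/mol implies an extremely steep temperature dependence for the deactivation step. As a rough illustration, and assuming simple Arrhenius behaviour (which the article itself does not assert), the sketch below estimates how much faster the active-to-deactive transition proceeds at 37 °C than at 30 °C.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Arrhenius estimate; assumes the deactivation step follows simple Arrhenius
       kinetics with the activation energy quoted in the text. */
    const double R  = 8.314;         /* gas constant, J/(mol K)  */
    const double Ea = 270000.0;      /* activation energy, J/mol */
    const double T1 = 273.15 + 30.0; /* 30 degrees C, in kelvin  */
    const double T2 = 273.15 + 37.0; /* 37 degrees C, in kelvin  */

    double ratio = exp((Ea / R) * (1.0 / T1 - 1.0 / T2));
    printf("Deactivation at 37 C is ~%.0f times faster than at 30 C\n", ratio);
    return 0;
}

A temperature change of only seven degrees speeds the transition up by roughly an order of magnitude, which is consistent with the interpretation that a large conformational rearrangement is involved.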
Treatment of the D-form of complex I with the sulfhydryl reagents N-ethylmaleimide or DTNB blocks critical cysteine residues, abolishing the ability of the enzyme to respond to activation and thus inactivating it irreversibly. The A-form of complex I is insensitive to sulfhydryl reagents. It was found that these conformational changes may have important physiological significance. The inactive, but not the active, form of complex I was susceptible to inhibition by nitrosothiols and peroxynitrite. It is likely that the transition from the active to the inactive form of complex I takes place during pathological conditions when the turnover of the enzyme is limited at physiological temperatures, such as during hypoxia, ischemia or when the tissue nitric oxide:oxygen ratio increases (i.e. metabolic hypoxia). Production of superoxide Recent investigations suggest that complex I is a potent source of reactive oxygen species. Complex I can produce superoxide (as well as hydrogen peroxide) through at least two different pathways. During forward electron transfer, only very small amounts of superoxide are produced (probably less than 0.1% of the overall electron flow). During reverse electron transfer, complex I might be the most important site of superoxide production within mitochondria, with around 3-4% of electrons being diverted to superoxide formation. Reverse electron transfer is the process by which electrons from the reduced ubiquinol pool (supplied by succinate dehydrogenase, glycerol-3-phosphate dehydrogenase, electron-transferring flavoprotein or dihydroorotate dehydrogenase in mammalian mitochondria) pass through complex I to reduce NAD+ to NADH, driven by the electric potential across the inner mitochondrial membrane. Although it is not precisely known under what pathological conditions reverse-electron transfer would occur in vivo, in vitro experiments indicate that this process can be a very potent source of superoxide when succinate concentrations are high and oxaloacetate or malate concentrations are low. This can take place during tissue ischaemia, when oxygen delivery is blocked. Superoxide is a reactive oxygen species that contributes to cellular oxidative stress and is linked to neuromuscular diseases and aging. NADH dehydrogenase produces superoxide by transferring one electron from FMNH2 (or semireduced flavin) to oxygen (O2). The radical flavin leftover is unstable, and transfers the remaining electron to the iron-sulfur centers. It is the ratio of NADH to NAD+ that determines the rate of superoxide formation. Pathology Mutations in the subunits of complex I can cause mitochondrial diseases, including Leigh syndrome. Point mutations in various complex I subunits derived from mitochondrial DNA (mtDNA) can also result in Leber's Hereditary Optic Neuropathy. There is some evidence that complex I defects may play a role in the etiology of Parkinson's disease, perhaps because of reactive oxygen species (complex I can, like complex III, leak electrons to oxygen, forming highly toxic superoxide). Although the exact etiology of Parkinson's disease is unclear, it is likely that mitochondrial dysfunction, along with proteasome inhibition and environmental toxins, may play a large role. In fact, the inhibition of complex I has been shown to cause the production of peroxides and a decrease in proteasome activity, which may lead to Parkinson's disease. Additionally, Esteves et al. 
(2010) found that cell lines with Parkinson's disease show increased proton leakage in complex I, which causes decreased maximum respiratory capacity. Brain ischemia/reperfusion injury is mediated via complex I impairment. Recently it was found that oxygen deprivation leads to conditions in which mitochondrial complex I lose its natural cofactor, flavin mononucleotide (FMN) and become inactive. When oxygen is present the enzyme catalyzes a physiological reaction of NADH oxidation by ubiquinone, supplying electrons downstream of the respiratory chain (complexes III and IV). Ischemia leads to dramatic increase of succinate level. In the presence of succinate mitochondria catalyze reverse electron transfer so that fraction of electrons from succinate is directed upstream to FMN of complex I. Reverse electron transfer results in a reduction of complex I FMN, increased generation of ROS, followed by a loss of the reduced cofactor (FMNH2) and impairment of mitochondria energy production. The FMN loss by complex I and I/R injury can be alleviated by the administration of FMN precursor, riboflavin. Recent studies have examined other roles of complex I activity in the brain. Andreazza et al. (2010) found that the level of complex I activity was significantly decreased in patients with bipolar disorder, but not in patients with depression or schizophrenia. They found that patients with bipolar disorder showed increased protein oxidation and nitration in their prefrontal cortex. These results suggest that future studies should target complex I for potential therapeutic studies for bipolar disorder. Similarly, Moran et al. (2010) found that patients with severe complex I deficiency showed decreased oxygen consumption rates and slower growth rates. However, they found that mutations in different genes in complex I lead to different phenotypes, thereby explaining the variations of pathophysiological manifestations of complex I deficiency. Exposure to pesticides can also inhibit complex I and cause disease symptoms. For example, chronic exposure to low levels of dichlorvos, an organophosphate used as a pesticide, has been shown to cause liver dysfunction. This occurs because dichlorvos alters complex I and II activity levels, which leads to decreased mitochondrial electron transfer activities and decreased ATP synthesis. In chloroplasts A proton-pumping, ubiquinone-using NADH dehydrogenase complex, homologous to complex I, is found in the chloroplast genomes of most land plants under the name ndh. This complex is inherited from the original symbiosis from cyanobacteria, but has been lost in most eukaryotic algae, some gymnosperms (Pinus and gnetophytes), and some very young lineages of angiosperms. The purpose of this complex is originally cryptic as chloroplasts do not participate in respiration, but now it is known that ndh serves to maintain photosynthesis in stressful situations. This makes it at least partially dispensable in favorable conditions. It is evident that angiosperm lineages without ndh do not last long from their young ages, but how gymnosperms survive on land without ndh for so long is unknown. 
Genes The following is a list of humans genes that encode components of complex I: NADH dehydrogenase (ubiquinone) 1 alpha subcomplex NDUFA1 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 1, 7.5kDa NDUFA2 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 2, 8kDa NDUFA3 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 3, 9kDa NDUFA4 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4, 9kDa - recently described to be part of complex IV NDUFA4L – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like NDUFA4L2 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 NDUFA5 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 5, 13kDa NDUFA6 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 6, 14kDa NDUFA7 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 7, 14.5kDa NDUFA8 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 8, 19kDa NDUFA9 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 9, 39kDa NDUFA10 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 10, 42kDa NDUFA11 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 11, 14.7kDa NDUFA12 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 12 NDUFA13 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 13 NDUFAB1 – NADH dehydrogenase (ubiquinone) 1, alpha/beta subcomplex, 1, 8kDa NDUFAF1 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 1 NDUFAF2 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 2 NDUFAF3 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 3 NDUFAF4 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 4 NADH dehydrogenase (ubiquinone) 1 beta subcomplex NDUFB1 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 1, 7kDa NDUFB2 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 2, 8kDa NDUFB3 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 3, 12kDa NDUFB4 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 4, 15kDa NDUFB5 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 5, 16kDa NDUFB6 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 6, 17kDa NDUFB7 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 7, 18kDa NDUFB8 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 8, 19kDa NDUFB9 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 9, 22kDa NDUFB10 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 10, 22kDa NDUFB11 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 11, 17.3kDa NADH dehydrogenase (ubiquinone) 1, subcomplex unknown NDUFC1 – NADH dehydrogenase (ubiquinone) 1, subcomplex unknown, 1, 6kDa NDUFC2 – NADH dehydrogenase (ubiquinone) 1, subcomplex unknown, 2, 14.5kDa NADH dehydrogenase (ubiquinone) Fe-S protein NDUFS1 – NADH dehydrogenase (ubiquinone) Fe-S protein 1, 75kDa (NADH-coenzyme Q reductase) NDUFS2 – NADH dehydrogenase (ubiquinone) Fe-S protein 2, 49kDa (NADH-coenzyme Q reductase) NDUFS3 – NADH dehydrogenase (ubiquinone) Fe-S protein 3, 30kDa (NADH-coenzyme Q reductase) NDUFS4 – NADH dehydrogenase (ubiquinone) Fe-S protein 4, 18kDa (NADH-coenzyme Q reductase) NDUFS5 – NADH dehydrogenase (ubiquinone) Fe-S protein 5, 15kDa (NADH-coenzyme Q reductase) NDUFS6 – NADH dehydrogenase (ubiquinone) Fe-S protein 6, 13kDa (NADH-coenzyme Q reductase) NDUFS7 – NADH dehydrogenase (ubiquinone) Fe-S protein 7, 20kDa (NADH-coenzyme Q reductase) NDUFS8 – NADH dehydrogenase (ubiquinone) Fe-S protein 8, 23kDa (NADH-coenzyme Q reductase) NADH dehydrogenase (ubiquinone) flavoprotein 1 NDUFV1 – NADH dehydrogenase (ubiquinone) flavoprotein 1, 51kDa NDUFV2 – NADH dehydrogenase 
(ubiquinone) flavoprotein 2, 24kDa NDUFV3 – NADH dehydrogenase (ubiquinone) flavoprotein 3, 10kDa mitochondrially encoded NADH dehydrogenase subunit MT-ND1 - mitochondrially encoded NADH dehydrogenase subunit 1 MT-ND2 - mitochondrially encoded NADH dehydrogenase subunit 2 MT-ND3 - mitochondrially encoded NADH dehydrogenase subunit 3 MT-ND4 - mitochondrially encoded NADH dehydrogenase subunit 4 MT-ND4L - mitochondrially encoded NADH dehydrogenase subunit 4L MT-ND5 - mitochondrially encoded NADH dehydrogenase subunit 5 MT-ND6 - mitochondrially encoded NADH dehydrogenase subunit 6 References External links Institute of Science and Technology Austria (ISTA): Sazanov Group MRC MBU Sazanov group Interactive Molecular model of NADH dehydrogenase (Requires MDL Chime) Complex I homepage Complex I news facebook page Cellular respiration Glycolysis EC 7.1.1 Integral membrane proteins
Respiratory complex I
Chemistry,Biology
6,281
54,595,901
https://en.wikipedia.org/wiki/Androsterone%20sulfate
Androsterone sulfate, also known as 3α-hydroxy-5α-androstan-17-one 3α-sulfate, is an endogenous, naturally occurring steroid and one of the major urinary metabolites of androgens. It is a steroid sulfate which is formed from sulfation of androsterone by the steroid sulfotransferase SULT2A1 and can be desulfated back into androsterone by steroid sulfatase. See also Androsterone glucuronide Steroid sulfate C19H30O5S References External links Metabocard for Androsterone Sulfate (HMDB02759) - Human Metabolome Database 5α-Reduced steroid metabolites Androgen esters Androstanes Human metabolites Ketones Sulfate esters
Androsterone sulfate
Chemistry,Biology
181
28,933,065
https://en.wikipedia.org/wiki/Geographic%20center%20of%20the%20United%20States
The geographic center of the United States is a point approximately north of Belle Fourche, South Dakota at . It has been regarded as such by the United States Coast and Geodetic Survey and the U.S. National Geodetic Survey (NGS) since the additions of Alaska and Hawaii to the United States in 1959. Overview This is distinct from the contiguous geographic center, which has not changed since the 1912 admissions of New Mexico and Arizona to the 48 contiguous United States, and falls near the town of Lebanon, Kansas. This served as the overall geographic center of the United States for 47 years, until the 1959 admissions of Alaska and Hawaii moved the geographic center of the overall United States approximately northwest by north. While any measurement of the exact center of a land mass will always be imprecise due to changing shorelines and other factors, the NGS coordinates identify the center of the fifty states as an uninhabited parcel of private pastureland approximately east of the cornerpoint where the South Dakota–Wyoming–Montana borders meet. According to the NGS data sheet, the actual marker is "set in an irregular mass of concrete 36 inches below the surface of the ground." For public commemoration, a nearby proxy marker is located in a park in Belle Fourche, where one will find a flag atop a small concrete slab bearing a United States Coast and Geodetic Survey Reference Marker. Contiguous United States The geographic center of the 48 contiguous or conterminous United States, determined in a 1918 survey, is located at , about northwest of the center of Lebanon, Kansas, approximately south of the Kansas–Nebraska border. The determination is accurate to about . While any measurement of the exact center of a land mass will always be imprecise due to changing shorelines and other factors, the NGS coordinates are recognized in a historical marker in a small park at the intersection of AA Road and K-191. It is accessible by a turn-off from U.S. Route 281. It is distinct from the geographic center of the 50 United States located at a point northeast of Belle Fourche, South Dakota, reflecting the 1959 additions of the states of Alaska and Hawaii. In a technical glitch, a farmstead northeast of Potwin, Kansas, became the default geolocation of 600 million IP addresses (due to a lack of fine granularity) when the Massachusetts-based digital mapping company MaxMind changed the putative geographic center of the contiguous United States from to . Marker In order to protect the privacy of the private land owner where the point identified by the 1918 survey falls, a proxy marker was erected in 1940 about half a mile (800 m) away, at the 130/AA intersection (). Its inscription reads: The GEOGRAPHIC CENTER of the UNITED STATES LAT. 39°50' LONG. −98°35' NE 1/4 – SE 1/4 – S32 – T2S – R11W Located by L.T. Hagadorn of Paulette & Wilson – Engineers and L.A. Beardslee – County Engineer. From data furnished by United States Coast and Geodetic Survey. Sponsored by Lebanon Hub Club. Lebanon, Kansas. April 25, 1940 An American flag usually flies atop a pole placed on the monument. A covered picnic area and the U.S. Center Chapel, a small eight-pew chapel, are nearby. Method of measurement In 1918, the United States Coast and Geodetic Survey found this location by balancing on a point a cardboard cutout shaped like the U.S. This method was accurate to within , but while the Geodetic Survey no longer endorses any location as the center of the U.S., the identification of Lebanon, Kansas, has remained. 
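The cardboard-cutout balancing used in 1918 is a physical centroid measurement. A modern numerical analogue is to treat area-weighted sample points of the territory as unit vectors on a sphere, average them, and convert the mean vector back to latitude and longitude. The sketch below illustrates that idea only; the sample points and weights are invented placeholders, and this is not the procedure used by the Coast and Geodetic Survey or the NGS.

#include <stdio.h>
#include <math.h>

/* Hypothetical area-weighted samples (degrees); placeholders for illustration. */
typedef struct { double lat, lon, weight; } Sample;

int main(void) {
    const double PI  = 3.14159265358979323846;
    const double DEG = PI / 180.0;
    Sample pts[] = {
        { 39.0,  -98.0, 1.0 },   /* a contiguous-states sample                 */
        { 44.0, -103.0, 0.4 },   /* a northern-plains sample                   */
        { 64.0, -150.0, 0.6 },   /* an Alaskan sample pulls the mean northwest */
    };
    int n = sizeof pts / sizeof pts[0];

    double x = 0.0, y = 0.0, z = 0.0, w = 0.0;
    for (int i = 0; i < n; i++) {
        double la = pts[i].lat * DEG, lo = pts[i].lon * DEG;
        x += pts[i].weight * cos(la) * cos(lo);   /* convert to a 3-D unit vector */
        y += pts[i].weight * cos(la) * sin(lo);
        z += pts[i].weight * sin(la);
        w += pts[i].weight;
    }
    x /= w; y /= w; z /= w;                       /* weighted mean vector */

    double lat = atan2(z, sqrt(x * x + y * y)) / DEG;
    double lon = atan2(y, x) / DEG;
    printf("Weighted spherical centroid: %.2f, %.2f\n", lat, lon);
    return 0;
}

Real determinations also have to pick a reference ellipsoid and decide how to treat shorelines and water bodies, which is part of why the Survey regards any single "center" as inherently imprecise.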
Cultural references The geographic center of the contiguous United States is mentioned in Neil Gaiman's American Gods as a neutral ground where the modern and the old gods can meet despite the war between them. In the 1969 Disney movie The Computer Wore Tennis Shoes, the final question of the college knowledge program is, "A small Midwest city is located exactly on an area designated as the 'geographic center of the United States.' For ten points and $100,000, can you tell us the name of that city?" The answer of Lebanon, Kansas is accepted as correct. A 2021 Jeep Super Bowl commercial titled "The Middle", starring Bruce Springsteen, features the U.S. Center Chapel in Lebanon, Kansas. Belle Fourche, South Dakota, is referenced as the geographic center of the U.S. in "A Serpent's Tooth: A Longmire Mystery Book 9" by Craig Johnson. See also Center of population Geographic centers of the United States Mean center of the United States population Median center of the United States population United States Coast and Geodetic Survey (USC&GS) References External links In the Middle of Nowhere, a Nation’s Center, New York Times Smith County Map, KDOT Kansas Travel article Center for Land Use Interpretation article about the origins and accuracy of the marker Roadside America article USGS information The Center of the United States article about applying mathematical methods to geography Geography of the United States United States Historic surveying landmarks in the United States Geography of Smith County, Kansas Geography of Butte County, South Dakota 1918 establishments in Kansas
Geographic center of the United States
Physics,Mathematics
1,072
77,852,453
https://en.wikipedia.org/wiki/Algorithmic%20wage%20discrimination
Algorithmic wage discrimination is the utilization of algorithmic bias to enable wage discrimination where workers are paid different wages for the same work. The term was coined by Veena Dubal, a law professor at the University of California College of the Law, San Francisco, in a 2023 publication. United States In the United States, algorithmic wage discrimination may be illegal under United States antitrust laws. References Information ethics Discrimination Bias
Algorithmic wage discrimination
Technology,Biology
86
15,032,003
https://en.wikipedia.org/wiki/Exoplanet%20orbital%20and%20physical%20parameters
This page describes exoplanet orbital and physical parameters. Orbital parameters Most known extrasolar planet candidates have been discovered using indirect methods and therefore only some of their physical and orbital parameters can be determined. For example, out of the six independent parameters that define an orbit, the radial-velocity method can determine four: semi-major axis, eccentricity, longitude of periastron, and time of periastron. Two parameters remain unknown: inclination and longitude of the ascending node. Distance from star and orbital period There are exoplanets that are much closer to their parent star than any planet in the Solar System is to the Sun, and there are also exoplanets that are much further from their star. Mercury, the closest planet to the Sun at 0.4 astronomical units (AU), takes 88 days for an orbit, but the smallest known orbits of exoplanets have orbital periods of only a few hours, see Ultra-short period planet. The Kepler-11 system has five of its planets in smaller orbits than Mercury's. Neptune is 30 AU from the Sun and takes 165 years to orbit it, but there are exoplanets that are thousands of AU from their star and take tens of thousands of years to orbit, e.g. GU Piscium b. The radial-velocity and transit methods are most sensitive to planets with small orbits. The earliest discoveries such as 51 Peg b were gas giants with orbits of a few days. These "hot Jupiters" likely formed further out and migrated inwards. The direct imaging method is most sensitive to planets with large orbits, and has discovered some planets that have planet–star separations of hundreds of AU. However, protoplanetary disks are usually only around 100 AU in radius, and core accretion models predict giant planet formation to be within 10 AU, where the planets can coalesce quickly enough before the disk evaporates. Very-long-period giant planets may have been rogue planets that were captured, or formed close-in and gravitationally scattered outwards, or the planet and star could be a mass-imbalanced wide binary system with the planet being the primary object of its own separate protoplanetary disk. Gravitational instability models might produce planets at multi-hundred AU separations but this would require unusually large disks. For planets with very wide orbits up to several hundred thousand AU it may be difficult to observationally determine whether the planet is gravitationally bound to the star. Most planets that have been discovered are within a couple of AU from their host star because the most used methods (radial-velocity and transit) require observation of several orbits to confirm that the planet exists and there has only been enough time since these methods were first used to cover small separations. Some planets with larger orbits have been discovered by direct imaging but there is a middle range of distances, roughly equivalent to the Solar System's gas giant region, which is largely unexplored. Direct imaging equipment for exploring that region was installed on two large telescopes that began operation in 2014, e.g. Gemini Planet Imager and VLT-SPHERE. The microlensing method has detected a few planets in the 1–10 AU range. It appears plausible that in most exoplanetary systems, there are one or two giant planets with orbits comparable in size to those of Jupiter and Saturn in the Solar System. Giant planets with substantially larger orbits are now known to be rare, at least around Sun-like stars. 
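The periods quoted above follow directly from Kepler's third law. A minimal sketch, assuming circular orbits and a planet of negligible mass compared with its star (the example separations are round numbers chosen for illustration):

#include <stdio.h>
#include <math.h>

/* Orbital period in years from semi-major axis (AU) and stellar mass (solar masses),
   via Kepler's third law for a planet of negligible mass. */
static double period_years(double a_au, double m_star) {
    return sqrt(a_au * a_au * a_au / m_star);
}

int main(void) {
    printf("0.39 AU around a Sun-like star:  %.0f days\n", period_years(0.39, 1.0) * 365.25);
    printf("30 AU around a Sun-like star:    %.0f years\n", period_years(30.0, 1.0));
    printf("2000 AU around a Sun-like star:  %.0f years\n", period_years(2000.0, 1.0));
    return 0;
}

The same relation shows why close-in planets are so much easier to confirm: a survey only a few years long already covers hundreds of orbits at 0.05 AU but only a fraction of a single orbit beyond 10 AU.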
The distance of the habitable zone from a star depends on the type of star and this distance changes during the star's lifetime as the size and temperature of the star changes. Eccentricity The eccentricity of an orbit is a measure of how elliptical (elongated) it is. All the planets of the Solar System except for Mercury have near-circular orbits (e<0.1). Most exoplanets with orbital periods of 20 days or less have near-circular orbits, i.e. very low eccentricity. That is thought to be due to tidal circularization: reduction of eccentricity over time due to gravitational interaction between two bodies. The mostly sub-Neptune-sized planets found by the Kepler spacecraft with short orbital periods have very circular orbits. By contrast, the giant planets with longer orbital periods discovered by radial-velocity methods have quite eccentric orbits. (As of July 2010, 55% of such exoplanets have eccentricities greater than 0.2, whereas 17% have eccentricities greater than 0.5.) Moderate to high eccentricities (e>0.2) of giant planets are not an observational selection effect, because a planet can be detected about equally well regardless of the eccentricity of its orbit. The statistical significance of elliptical orbits in the ensemble of observed giant planets is somewhat surprising, because current theories of planetary formation suggest that low-mass planets should have their orbital eccentricity circularized by gravitational interactions with the surrounding protoplanetary disk. However, as a planet grows more massive and its interaction with the disk becomes nonlinear, it may induce eccentric motion of the surrounding disk's gas, which in turn may excite the planet's orbital eccentricity. Low eccentricities are correlated with high multiplicity (number of planets in the system). Low eccentricity is needed for habitability, especially advanced life. For weak Doppler signals near the limits of the current detection ability, the eccentricity becomes poorly constrained and biased towards higher values. It is suggested that some of the high eccentricities reported for low-mass exoplanets may be overestimates, because simulations show that many observations are also consistent with two planets on circular orbits. Reported observations of single planets in moderately eccentric orbits have about a 15% chance of being a pair of planets. This misinterpretation is especially likely if the two planets orbit with a 2:1 resonance. With the exoplanet sample known in 2009, a group of astronomers estimated that "(1) around 35% of the published eccentric one-planet solutions are statistically indistinguishable from planetary systems in 2:1 orbital resonance, (2) another 40% cannot be statistically distinguished from a circular orbital solution" and "(3) planets with masses comparable to Earth could be hidden in known orbital solutions of eccentric super-Earths and Neptune mass planets". Radial velocity surveys found exoplanet orbits beyond 0.1 AU to be eccentric, particularly for large planets. Transit data obtained by the Kepler spacecraft, is consistent with the RV surveys and also revealed that smaller planets tend to have less eccentric orbits. Inclination vs. spin–orbit angle Orbital inclination is the angle between a planet's orbital plane and another plane of reference. For exoplanets, the inclination is usually stated with respect to an observer on Earth: the angle used is that between the normal to the planet's orbital plane and the line of sight from Earth to the star. 
Therefore, most planets observed by the transit method are close to 90 degrees. Because the word 'inclination' is used in exoplanet studies for this line-of-sight inclination then the angle between the planet's orbit and the star's rotation must use a different word and is termed the spin–orbit angle or spin–orbit alignment. In most cases the orientation of the star's rotational axis is unknown. The Kepler spacecraft has found a few hundred multi-planet systems and in most of these systems the planets all orbit in nearly the same plane, much like the Solar System. However, a combination of astrometric and radial-velocity measurements has shown that some planetary systems contain planets whose orbital planes are significantly tilted relative to each other. More than half of hot Jupiters have orbital planes substantially misaligned with their parent star's rotation. A substantial fraction of hot-Jupiters even have retrograde orbits, meaning that they orbit in the opposite direction from the star's rotation. Rather than a planet's orbit having been disturbed, it may be that the star itself flipped early in their system's formation due to interactions between the star's magnetic field and the planet-forming disk. Periastron precession Periastron precession is the rotation of a planet's orbit within the orbital plane, i.e. the axes of the ellipse change direction. In the Solar System, perturbations from other planets are the main cause, but for close-in exoplanets the largest factor can be tidal forces between the star and planet. For close-in exoplanets, the general relativistic contribution to the precession is also significant and can be orders of magnitude larger than the same effect for Mercury. Some exoplanets have significantly eccentric orbits, which makes it easier to detect the precession. The effect of general relativity can be detectable in timescales of about 10 years or less. Nodal precession Nodal precession is rotation of a planet's orbital plane. Nodal precession is more easily seen as distinct from periastron precession when the orbital plane is inclined to the star's rotation, the extreme case being a polar orbit. WASP-33 is a fast-rotating star that hosts a hot Jupiter in an almost polar orbit. The quadrupole mass moment and the proper angular momentum of the star are 1900 and 400 times, respectively, larger than those of the Sun. This causes significant classical and relativistic deviations from Kepler's laws. In particular, the fast rotation causes large nodal precession because of the star's oblateness and the Lense–Thirring effect. Rotation and axial tilt In April 2014, the first measurement of a planet's rotation period was announced: the length of day for the super-Jupiter gas giant Beta Pictoris b is 8 hours (based on the assumption that the axial tilt of the planet is small.) With an equatorial rotational velocity of 25 km per second, this is faster than for the giant planets of the Solar System, in line with the expectation that the more massive a giant planet, the faster it spins. Beta Pictoris b's distance from its star is 9 AU. At such distances the rotation of Jovian planets is not slowed by tidal effects. Beta Pictoris b is still warm and young and over the next hundreds of millions of years, it will cool down and shrink to about the size of Jupiter, and if its angular momentum is preserved, then as it shrinks, the length of its day will decrease to about 3 hours and its equatorial rotation velocity will speed up to about 40 km/s. 
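The projected spin-up of Beta Pictoris b can be checked with a simple angular-momentum argument. Assuming the planet contracts homologously to roughly Jupiter's radius and that its spin angular momentum (proportional to M R² ω for a fixed mass distribution) is conserved, the rotation period scales with the square of the radius. The sketch below uses only the 8-hour period and 25 km/s equatorial speed given in the text, plus Jupiter's radius; the homologous-contraction assumption is a simplification.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Back-of-the-envelope spin-up of Beta Pictoris b under angular-momentum
       conservation; assumes homologous contraction to Jupiter's radius. */
    const double PI      = 3.14159265358979323846;
    const double p_now_h = 8.0;      /* current rotation period, hours (from the text)  */
    const double v_now   = 25.0;     /* current equatorial speed, km/s (from the text)  */
    const double r_jup   = 71492.0;  /* Jupiter's equatorial radius, km (assumed final) */

    double r_now = v_now * p_now_h * 3600.0 / (2.0 * PI);   /* implied current radius, km */
    double p_new = p_now_h * pow(r_jup / r_now, 2.0);       /* period scales as radius^2  */
    double v_new = 2.0 * PI * r_jup / (p_new * 3600.0);     /* new equatorial speed, km/s */

    printf("Implied current radius: %.0f km (~%.1f Jupiter radii)\n", r_now, r_now / r_jup);
    printf("Period after contraction: %.1f hours\n", p_new);
    printf("Equatorial speed after contraction: %.0f km/s\n", v_new);
    return 0;
}

The result, roughly 3 hours and 40 km/s, matches the figures quoted above.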
The images of Beta Pictoris b do not have high enough resolution to directly see details but doppler spectroscopy techniques were used to show that different parts of the planet were moving at different speeds and in opposite directions from which it was inferred that the planet is rotating. With the next generation of large ground-based telescopes it will be possible to use doppler imaging techniques to make a global map of the planet, like the mapping of the brown dwarf Luhman 16B in 2014. A 2017 study of the rotation of several gas giants found no correlation between rotation rate and mass of the planet. As of 2024 the axial tilt of 4 exoplanets have been measured with one of them VHS 1256 b having a Uranus like tilt of 90 degrees +- 25 degrees. Origin of spin and tilt of terrestrial planets Giant impacts have a large effect on the spin of terrestrial planets. The last few giant impacts during planetary formation tend to be the main determiner of a terrestrial planet's rotation rate. On average the spin angular velocity will be about 70% of the velocity that would cause the planet to break up and fly apart; the natural outcome of planetary embryo impacts at speeds slightly larger than escape velocity. In later stages terrestrial planet spin is also affected by impacts with planetesimals. During the giant impact stage, the thickness of a protoplanetary disk is far larger than the size of planetary embryos so collisions are equally likely to come from any direction in three-dimensions. This results in the axial tilt of accreted planets ranging from 0 to 180 degrees with any direction as likely as any other with both prograde and retrograde spins equally probable. Therefore, prograde spin with a small axial tilt, common for the Solar System's terrestrial planets except Venus, is not common in general for terrestrial planets built by giant impacts. The initial axial tilt of a planet determined by giant impacts can be substantially changed by stellar tides if the planet is close to its star and by satellite tides if the planet has a large satellite. Tidal effects For most planets, the rotation period and axial tilt (also called obliquity) are not known, but a large number of planets have been detected with very short orbits (where tidal effects are greater) that will probably have reached an equilibrium rotation that can be predicted (i.e. tidal lock, spin–orbit resonances, and non-resonant equilibria such as retrograde rotation). Gravitational tides tend to reduce the axial tilt to zero but over a longer timescale than the rotation rate reaches equilibrium. However, the presence of multiple planets in a system can cause axial tilt to be captured in a resonance called a Cassini state. There are small oscillations around this state and in the case of Mars these axial tilt variations are chaotic. Hot Jupiters' close proximity to their host star means that their spin–orbit evolution is mostly due to the star's gravity and not the other effects. Hot Jupiters' rotation rate is not thought to be captured into spin–orbit resonance because of the way in which such a fluid-body reacts to tides; a planet like this therefore slows down into synchronous rotation if its orbit is circular, or, alternatively, it slows down into a non-synchronous rotation if its orbit is eccentric. Hot Jupiters are likely to evolve towards zero axial tilt even if they had been in a Cassini state during planetary migration when they were further from their star. 
Hot Jupiters' orbits will become more circular over time; however, the presence of other planets in the system on eccentric orbits, even ones as small as Earth and as far away as the habitable zone, can continue to maintain the eccentricity of the Hot Jupiter so that the length of time for tidal circularization can be billions instead of millions of years. The rotation period of planet HD 80606 b is predicted to be about 1.9 days. HD 80606 b avoids spin–orbit resonance because it is a gas giant. The eccentricity of its orbit means that it avoids becoming tidally locked. Physical parameters Mass When a planet is found by the radial-velocity method, its orbital inclination i is unknown and can range from 0 to 90 degrees. The method is unable to determine the true mass (M) of the planet, but rather gives a lower limit for its mass, M sin i. In a few cases an apparent exoplanet may be a more massive object such as a brown dwarf or red dwarf. However, the probability of a small value of i (say less than 30 degrees, which would give a true mass at least double the observed lower limit) is relatively low (1 − √3/2 ≈ 13%) and hence most planets will have true masses fairly close to the observed lower limit. If a planet's orbit is nearly perpendicular to the line of vision (i.e. i close to 90°), a planet can be detected through the transit method. The inclination will then be known, and the inclination combined with M sin i from radial-velocity observations will give the planet's true mass. Also, astrometric observations and dynamical considerations in multiple-planet systems can sometimes provide an upper limit to the planet's true mass. In 2013 it was proposed that the mass of a transiting exoplanet can also be determined from the transmission spectrum of its atmosphere, as it can be used to constrain independently the atmospheric composition, temperature, pressure, and scale height; however, a 2017 study found that the transmission spectrum cannot unambiguously determine the mass. Transit-timing variation can also be used to find a planet's mass. Radius, density, and bulk composition Prior to recent results from the Kepler space observatory, most confirmed planets were gas giants comparable in size to Jupiter or larger because they are most easily detected. However, the planets detected by Kepler are mostly between the size of Neptune and the size of Earth. If a planet is detectable by both the radial-velocity and the transit methods, then both its true mass and its radius can be determined, as well as its density. Planets with low density are inferred to be composed mainly of hydrogen and helium, whereas planets of intermediate density are inferred to have water as a major constituent. A planet of high density is inferred to be rocky, like Earth and the other terrestrial planets of the Solar System. Gas giants, puffy planets, and super-Jupiters Gaseous planets can be hot because of extreme proximity to their host star, or because they are still hot from their formation and are expanded by the heat. For colder gas planets, there is a maximum radius, slightly larger than Jupiter's, which occurs when the mass reaches a few Jupiter masses. Adding mass beyond this point causes the radius to shrink. Even when taking heat from the star into account, many transiting exoplanets are much larger than expected given their mass, meaning that they have surprisingly low density. See the magnetic field section for one possible explanation. 
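Returning briefly to the minimum-mass geometry discussed under Mass above: for randomly oriented orbits the probability that the inclination is smaller than some angle i is 1 − cos i, and the true mass exceeds the measured M sin i by a factor of 1/sin i. The sketch below tabulates both quantities; it is pure geometry and uses no exoplanet data.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Geometry of the M sin i ambiguity for randomly oriented orbits. */
    const double PI = 3.14159265358979323846;
    double angles_deg[] = { 10.0, 30.0, 60.0, 90.0 };
    int n = sizeof angles_deg / sizeof angles_deg[0];

    for (int k = 0; k < n; k++) {
        double rad    = angles_deg[k] * PI / 180.0;
        double prob   = 1.0 - cos(rad);   /* chance a random orbit is at least this face-on    */
        double factor = 1.0 / sin(rad);   /* true mass relative to M sin i at this inclination */
        printf("i < %2.0f deg: probability %5.1f%%, true mass >= %.1f x M sin i\n",
               angles_deg[k], 100.0 * prob, factor);
    }
    return 0;
}

The 30-degree row reproduces the roughly 13% figure quoted above.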
Besides the inflated hot Jupiters, there is another type of low-density planet: super-puffs with masses only a few times Earth's but with radii larger than Neptune. The planets around Kepler-51 are far less dense (far more diffuse) than the inflated hot Jupiters as can be seen in the plots on the right where the three Kepler-51 planets stand out in the diffusity vs. radius plot. Ice giants and super-Neptunes Kepler-101b was the first super-Neptune discovered. It has three times Neptune's mass but its density suggests that heavy elements make up more than 60% of its total mass, unlike hydrogen–helium-dominated gas giants. Super-Earths, mini-Neptunes, and gas dwarfs If a planet has a radius and/or mass between that of Earth and Neptune, then there is a question about whether the planet is rocky like Earth, a mixture of volatiles and gas like Neptune, a small planet with a hydrogen/helium envelope (mini-Jupiter), or of some other composition. Some of the Kepler transiting planets with radii in the range of 1–4 Earth radii have had their masses measured by radial-velocity or transit-timing methods. The calculated densities show that up to 1.5 Earth radii, these planets are rocky and that density increases with increasing radius due to gravitational compression. However, between 1.5 and 4 Earth radii the density decreases with increasing radius. This indicates that above 1.5 Earth radii, planets tend to have increasing amounts of volatiles and gas. Despite this general trend, there is a wide range of masses at a given radius, which could be because gas planets can have rocky cores of different masses and compositions, and could also be due to photoevaporation of volatiles. Thermal evolutionary atmosphere models suggest a radius of 1.75 times that of Earth as a dividing line between rocky and gaseous planets. Excluding close-in planets that have lost their gas envelope due to stellar irradiation, studies of the metallicity of stars suggest a dividing line of 1.7 Earth radii between rocky planets and gas dwarfs, then another dividing line at 3.9 Earth radii between gas dwarfs and gas giants. These dividing lines are statistical trends and do not apply universally, because there are many other factors besides metallicity that affect planet formation, including distance from star – there may be larger rocky planets that formed at larger distances. An independent reanalysis of the data suggests that there are no such dividing lines and that there is a continuum of planet formation between 1 and 4 Earth radii and no reason to suspect that the amount of solid material in a protoplanetary disk determines whether super-Earths or mini-Neptunes form. Studies done in 2016 based on over 300 planets suggest that most objects over approximately two Earth masses collect significant hydrogen–helium envelopes, meaning rocky super-Earths may be rare. The discovery of the low-density Earth-mass planet Kepler-138d shows that there is an overlapping range of masses in which both rocky planets and low-density planets occur. A low-mass low-density planets could be an ocean planet or super-Earth with a remnant hydrogen atmosphere, or a hot planet with a steam atmosphere, or a mini-Neptune with a hydrogen–helium atmosphere. Another possibility for a low-mass low-density planet is that it has a large atmosphere made up chiefly of carbon monoxide, carbon dioxide, methane, or nitrogen. 
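Because a radial-velocity (or transit-timing) mass and a transit radius combine into a bulk density, a one-line density estimate is often the first compositional discriminator applied in the classifications described above. A minimal sketch in Earth units; the three example planets are invented for illustration and are not objects from the article.

#include <stdio.h>

/* Bulk density relative to Earth from mass (Earth masses) and radius (Earth radii). */
static double density_relative_to_earth(double m_earth, double r_earth) {
    return m_earth / (r_earth * r_earth * r_earth);
}

int main(void) {
    const double rho_earth = 5.51;   /* Earth's mean density, g/cm^3 */

    /* Hypothetical planets for illustration only. */
    printf("Rocky super-Earth (5 Me, 1.4 Re):   %.1f g/cm^3\n",
           rho_earth * density_relative_to_earth(5.0, 1.4));
    printf("Mini-Neptune (5 Me, 2.5 Re):        %.1f g/cm^3\n",
           rho_earth * density_relative_to_earth(5.0, 2.5));
    printf("Jupiter-like giant (318 Me, 11 Re): %.1f g/cm^3\n",
           rho_earth * density_relative_to_earth(318.0, 11.0));
    return 0;
}

The same mass at 1.4 and at 2.5 Earth radii differs in bulk density by a factor of almost six, which is why the 1.5–2 Earth-radius range discussed above separates predominantly rocky planets from those with substantial volatile envelopes.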
Massive solid planets In 2014, new measurements of Kepler-10c found it to be a Neptune-mass planet (17 Earth masses) with a density higher than Earth's, indicating that Kepler-10c is composed mostly of rock with possibly up to 20% high-pressure water ice but without a hydrogen-dominated envelope. Because this is well above the 10-Earth-mass upper limit that is commonly used for the term 'super-Earth', the term mega-Earth has been coined. A similarly massive and dense planet could be Kepler-131b, although its density is not as well measured as that of Kepler 10c. The next most massive known solid planets are half this mass: 55 Cancri e and Kepler-20b. Gas planets can have large solid cores. The Saturn-mass planet HD 149026 b has only two-thirds of Saturn's radius, so it may have a rock–ice core of 60 Earth masses or more. CoRoT-20b has 4.24 times Jupiter's mass but a radius of only 0.84 that of Jupiter; it may have a metal core of 800 Earth masses if the heavy elements are concentrated in the core, or a core of 300 Earth masses if the heavy elements are more distributed throughout the planet. Transit-timing variation measurements indicate that Kepler-52b, Kepler-52c and Kepler-57b have maximum masses between 30 and 100 times that of Earth, although the actual masses could be much lower. With radii about 2 Earth radii in size, they might have densities larger than that of an iron planet of the same size. They orbit very close to their stars, so they could each be the remnant core (chthonian planet) of an evaporated gas giant or brown dwarf. If a remnant core is massive enough it could remain in such a state for billions of years despite having lost the atmospheric mass. Solid planets up to thousands of Earth masses may be able to form around massive stars (B-type and O-type stars; 5–120 solar masses), where the protoplanetary disk would contain enough heavy elements. Also, these stars have high UV radiation and winds that could photoevaporate the gas in the disk, leaving just the heavy elements. For comparison, Neptune's mass equals 17 Earth masses, Jupiter has 318 Earth masses, and the 13-Jupiter-mass limit used in the IAU's working definition of an exoplanet equals approximately 4000 Earth masses. Cold planets have a maximum radius because adding more mass at that point causes the planet to compress under the weight instead of increasing the radius. The maximum radius for solid planets is lower than the maximum radius for gas planets. Shape When the size of a planet is described using its radius, this is approximating the shape by a sphere. However, the rotation of a planet causes it to be flattened at the poles; so the equatorial radius is larger than the polar radius, making it closer to an oblate spheroid. The oblateness of transiting exoplanets will affect the transit light curves. At the limits of current technology it has been possible to show that HD 189733b is less oblate than Saturn. If the planet is close to its star, then gravitational tides will elongate the planet in the direction of the star, making the planet closer to a triaxial ellipsoid. Because tidal deformation is along a line between the planet and the star, it is difficult to detect from transit photometry; it will have an effect on the transit light curves an order of magnitude less than that caused by rotational deformation even in cases where tidal deformation is larger than rotational deformation (as is the case for tidally locked hot Jupiters). 
Material rigidity of rocky planets and rocky cores of gas planets will cause further deviations from the aforementioned shapes. Thermal tides caused by unevenly irradiated surfaces are another factor. See also Detecting Earth from distant star-based systems Notes References External links Kepler public data archive by the Space Telescope Science Institute Strömgren Survey for Asteroseismology and Galactic Archaeology Exoplanet catalogs and databases Extrasolar Planets Encyclopaedia by the Paris Observatory The Habitable Exoplanets Catalog by UPR Arecibo New Worlds Atlas by the NASA/JPL PlanetQuest Astrobiology Planetary science
Exoplanet orbital and physical parameters
Astronomy,Biology
5,222
31,276,014
https://en.wikipedia.org/wiki/Celivarone
Celivarone is an experimental drug being tested for use in pharmacological antiarrhythmic therapy. Cardiac arrhythmia is any abnormality in the electrical activity of the heart. Arrhythmias range from mild to severe, sometimes causing symptoms like palpitations, dizziness, fainting, and even death. They can manifest as slow (bradycardia) or fast (tachycardia) heart rate, and may have a regular or irregular rhythm. Molecular causes of cardiac arrhythmias The causes of cardiac arrhythmias are numerous, from structural changes in the conduction system (the sinoatrial and atrioventricular nodes, or His-Purkinje system) and cardiac muscle, to mutations in genes coding for ion channels of the heart. Movement of ions, particularly Na+, Ca2+ and K+, causes depolarizations of cell membranes in node cells, which are then transmitted to cardiac muscle cells to induce contraction. After depolarization, the ions are moved back to their original locations, leading to repolarization of the membrane and relaxation. Disruptions in ion flow affect the heart's ability to contract by altering the resting membrane potential, affecting the cell's ability to conduct or transmit an action potential (AP), or by affecting the rate or force of contraction. The specific molecular changes involved in arrhythmias depend on the nature of the problem. Ion channel mutations can alter protein conformation, and so change the amount of current flowing through these channels. Due to changes in amino acids and binding domains, mutations may also affect the ability of these channels to respond to physiological changes in cardiac demand. Mutations resulting in loss of function of K+ channels can result in delayed repolarization of the cardiac muscle cells. Similarly, gain of function of Na+ and Ca2+ channels results in delayed repolarization, and Ca2+ overload causing increased Ca2+ binding to cardiac troponin C, more actin-myosin interactions and causing an increased contractility, respectively. Mutations cause many arrhythmic conditions, including atrial fibrillation (AF), atrial flutter (AFl), and ventricular fibrillation (V-Fib). Arrhythmias can also be induced by altered activity of the vagus nerve and activation of β1 adrenergic receptors. Mechanism of action Celivarone is a non-iodinated benzofuran derivative, structurally related to amiodarone, a drug commonly used to treat arrhythmias. Celivarone has potential as an antiarrhythmic agent, attributable to its multifactorial mechanism of action; blocking Na+, L-type Ca2+ and many types of K+ channels (IKr, IKs, IKACh and IKv1.5), as well as inhibiting β1 receptors, all in dose-dependent manners. The mechanisms by which celivarone modifies ion flow through these channels is unknown, but hearts demonstrate longer PQ intervals and decreased cell shortening, indicative of blocked L-type Ca2+ channels, depressed maximum current with each action potential with no change in the resting membrane potential, caused by blocked Na+ channels, and longer action potential duration due to K+ channel blocks. Celivarone is therefore described as having class I, II, III, and IV antiarrhythmic properties. Indications for use Celivarone displays some atrial selectivity, suggesting it may be most effective at targeting atrial arrhythmias like atrial fibrillation and atrial flutter. These conditions are characterized by rapid atrial rates, 400–600 bpm for atrial fibrillation and 150–300 bpm for atrial flutter. 
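The dose-dependent channel block described above is conventionally summarised by a Hill-type concentration-response curve. The sketch below is a generic illustration of that relationship only; the IC50 and Hill coefficient are arbitrary placeholders, not measured celivarone values.

#include <stdio.h>
#include <math.h>

/* Fraction of channels blocked at concentration c under a Hill-type model. */
static double fraction_blocked(double c, double ic50, double hill) {
    return 1.0 / (1.0 + pow(ic50 / c, hill));
}

int main(void) {
    const double ic50 = 1.0;   /* placeholder half-maximal concentration (arbitrary units) */
    const double hill = 1.0;   /* placeholder Hill coefficient                             */

    for (double c = 0.125; c <= 8.0; c *= 2.0) {
        printf("concentration %5.3f x IC50 -> %5.1f%% of channels blocked\n",
               c / ic50, 100.0 * fraction_blocked(c, ic50, hill));
    }
    return 0;
}

Separate curves of this kind, one per channel type, are what "dose-dependent" block of the Na+, Ca2+ and K+ currents refers to in practice.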
Studies have shown celivarone is capable of cardioversion, maintaining normal sinus cardiac rhythms, being effective in hypokalemic, vasotonic, and stretch-induced atrial fibrillation, as well as ischemic and reperfusion ventricular fibrillation. Since it affects multiple ion channels, it also shows promise in treating genetic forms of arrhythmia caused by several ion channel mutations. Future research Celivarone may be an effective antihypertensive therapy, as it inhibits both angiotensin II and phenylephrine induced hypertension in dogs, despite having no affinity for these receptors. Atrial fibrillation is especially common in hypertensive adults so a single drug to combat both problems is desirable. The non-iodinated nature of celivarone means that the harmful side-effects on the thyroid commonly seen with amiodarone therapy are eliminated, making the drug an attractive alternative. Higher oral bioavailability, shorter duration of action, and lower accumulation in body tissues are also benefits of celivarone. Presently, two studies are underway to determine if the effects observed in the animal models are reproducible in a human population. See also Amiodarone Benzbromarone Benziodarone Budiodarone Dronedarone References Antiarrhythmic agents Benzofurans Sanofi Amines Carboxylate esters Isopropyl esters Diarylketones Butyl compounds
Celivarone
Chemistry
1,101
4,157,594
https://en.wikipedia.org/wiki/Great%20dodecahemidodecahedron
In geometry, the great dodecahemidodecahedron is a nonconvex uniform polyhedron, indexed as U70. It has 18 faces (12 pentagrams and 6 decagrams), 60 edges, and 30 vertices. Its vertex figure is a crossed quadrilateral. Aside from the regular small stellated dodecahedron {5/2,5} and great stellated dodecahedron {5/2,3}, it is the only nonconvex uniform polyhedron whose faces are all non-convex regular polygons (star polygons), namely the star polygons {5/2} and {10/3}. It is a hemipolyhedron with 6 decagrammic faces passing through the model center. Related polyhedra Its convex hull is the icosidodecahedron. It also shares its edge arrangement with the great icosidodecahedron (having the pentagrammic faces in common) and the great icosihemidodecahedron (having the decagrammic faces in common). Gallery See also List of uniform polyhedra References External links Uniform polyhedra and duals Uniform polyhedra
Great dodecahemidodecahedron
Physics
250
62,286,468
https://en.wikipedia.org/wiki/GraphBLAS
GraphBLAS is an API specification that defines standard building blocks for graph algorithms in the language of linear algebra. GraphBLAS is built upon the notion that a sparse matrix can be used to represent graphs as either an adjacency matrix or an incidence matrix. The GraphBLAS specification describes how graph operations (e.g. traversing and transforming graphs) can be efficiently implemented via linear algebraic methods (e.g. matrix multiplication) over different semirings. The development of GraphBLAS and its various implementations is an ongoing community effort, including representatives from industry, academia, and government research labs. Background Graph algorithms have long taken advantage of the idea that a graph can be represented as a matrix, and graph operations can be performed as linear transformations and other linear algebraic operations on sparse matrices. For example, matrix-vector multiplication can be used to perform a step in a breadth-first search. The GraphBLAS specification (and the various libraries that implement it) provides data structures and functions to compute these linear algebraic operations. In particular, GraphBLAS specifies sparse matrix objects which map well to graphs where vertices are likely connected to relatively few neighbors (i.e. the degree of a vertex is significantly smaller than the total number of vertices in the graph). The specification also allows for the use of different semirings to accomplish operations in a variety of mathematical contexts. Originally motivated by the need for standardization in graph analytics, similar to its namesake BLAS, the GraphBLAS standard has also begun to interest people outside the graph community, including researchers in machine learning and bioinformatics. GraphBLAS implementations have also been used in high-performance graph database applications such as RedisGraph. Specification The GraphBLAS specification has been in development since 2013, and has reached version 2.1.0 as of December 2023. While formally a specification for the C programming language, a variety of programming languages have been used to develop implementations in the spirit of GraphBLAS, including C++, Java, and Nvidia CUDA. Compliant implementations and language bindings There are currently two fully-compliant reference implementations of the GraphBLAS specification. Bindings assuming a compliant specification exist for the Python, MATLAB, and Julia programming languages. Linear algebraic foundations The mathematical foundations of GraphBLAS are based in linear algebra and the duality between matrices and graphs. Each graph operation in GraphBLAS operates on a semiring, which is made up of the following elements: a scalar addition operator (⊕), a scalar multiplication operator (⊗), and a set (or domain) of valid values. Note that the zero element (i.e. the element that represents the absence of an edge in the graph) can also be reinterpreted. For example, the following algebras can be implemented in GraphBLAS: the standard arithmetic (plus-times) algebra over real numbers, the min-plus (tropical) and max-plus algebras, and the logical or-and (Boolean) algebra, among others. All the examples above satisfy the following two conditions in their respective domains: Additive identity, a ⊕ 0 = a; Multiplicative annihilation, a ⊗ 0 = 0. For instance, a user can specify the min-plus algebra over the domain of double-precision floating point numbers with GrB_Semiring_new(&min_plus_semiring, GrB_MIN_FP64, GrB_PLUS_FP64). 
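In the full C API, a semiring object is assembled from an additive monoid (a binary operator together with its identity element) and a multiplicative binary operator, so the min-plus example above is typically written in two steps. A sketch, assuming the standard GrB_* names from the C specification and omitting error handling:

#include <math.h>        /* INFINITY is the identity of min over doubles */
#include "GraphBLAS.h"

/* Builds a min-plus ("tropical") semiring over double-precision values. */
GrB_Info build_min_plus(GrB_Semiring *min_plus)
{
    GrB_Monoid min_monoid;

    /* The semiring's "addition": min, with identity +infinity. */
    GrB_Monoid_new_FP64(&min_monoid, GrB_MIN_FP64, INFINITY);

    /* The semiring's "multiplication": ordinary addition of edge weights. */
    return GrB_Semiring_new(min_plus, min_monoid, GrB_PLUS_FP64);
}

Used with GrB_mxm or GrB_vxm, such a semiring turns a matrix-vector product into one relaxation step of a shortest-path computation, which is the canonical example of how changing the semiring changes the meaning of a graph operation.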
Functionality While the GraphBLAS specification generally allows significant flexibility in implementation, some functionality and implementation details are explicitly described: GraphBLAS objects, including matrices and vectors, are opaque data structures. Non-blocking execution mode, which permits lazy or asynchronous evaluation of certain operations. Masked assignment, denoted , which assigns elements of matrix to matrix only in positions where the mask matrix is non-zero. The GraphBLAS specification also prescribes that library implementations be thread-safe. Example code The following is a GraphBLAS 2.1-compliant example of a breadth-first search in the C programming language. #include <stdlib.h> #include <stdio.h> #include <stdint.h> #include <stdbool.h> #include "GraphBLAS.h" /* * Given a boolean n x n adjacency matrix A and a source vertex s, performs a BFS traversal * of the graph and sets v[i] to the level in which vertex i is visited (v[s] == 1). * If i is not reachable from s, then v[i] = 0 does not have a stored element. * Vector v should be uninitialized on input. */ GrB_Info BFS(GrB_Vector *v, GrB_Matrix A, GrB_Index s) { GrB_Index n; GrB_Matrix_nrows(&n,A); // n = # of rows of A GrB_Vector_new(v,GrB_INT32,n); // Vector<int32_t> v(n) GrB_Vector q; // vertices visited in each level GrB_Vector_new(&q, GrB_BOOL, n); // Vector<bool> q(n) GrB_Vector_setElement(q, (bool)true, s); // q[s] = true, false everywhere else /* * BFS traversal and label the vertices. */ int32_t level = 0; // level = depth in BFS traversal GrB_Index nvals; do { ++level; // next level (start with 1) GrB_apply(*v, GrB_NULL, GrB_PLUS_INT32, GrB_SECOND_INT32, q, level, GrB_NULL); // v[q] = level GrB_vxm(q, *v, GrB_NULL, GrB_LOR_LAND_SEMIRING_BOOL, q, A, GrB_DESC_RC); // q[!v] = q ||.&& A; finds all the // unvisited successors from current q GrB_Vector_nvals(&nvals, q); } while (nvals); // if there is no successor in q, we are done. GrB_free(&q); // q vector no longer needed return GrB_SUCCESS; } See also Basic Linear Algebra Subprograms (BLAS) LEMON Graph Library References External links GraphBLAS Forum Numerical linear algebra Numerical software Graph description languages
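The masked-assignment notation referred to above is usually written C⟨M⟩ = A in the GraphBLAS literature (the symbol did not survive in this text, so the notation here is reconstructed rather than quoted). With the default descriptor its element-wise meaning is
C_{ij} \leftarrow A_{ij} \ \text{wherever } M_{ij} \neq 0, \qquad C_{ij} \ \text{left unchanged wherever } M_{ij} = 0.
The BFS example above uses the complemented form of this mechanism (the GrB_DESC_RC descriptor) so that updates apply only to vertices that have not yet been assigned a level.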
GraphBLAS
Mathematics
1,349
4,694,572
https://en.wikipedia.org/wiki/Pecom%2064
Pecom 64 was an educational and/or home computer developed by Elektronska Industrija Niš of Serbia in 1985. Modern emulators for the system exist, along with software preservation efforts. Specifications The machine had the following specifications: CPU: CDP 1802B 5V7 running at 2.813 MHz ROM: 16 KB, with optional 16 KB upgrade containing enhanced editor and assembler RAM: 32 KB Secondary storage: cassette tape VIS (Video Interface System): CDP1869 / CDP1870 Text modes: 40 columns x 24 lines Character set: 128 programmable characters Character size: 6x9 pixels Graphics modes: None, but the character set was reprogrammable (semigraphics) to simulate a 240x216 high-resolution display Colours: A total of 8 foreground colours are available (with a limited choice of 4 per character and 1 per line of that character) and 8 background colours (defined for the whole screen). Sound: 2 channels: one for tone generation with a span of 8 octaves, and 1 for special effects/white noise. Volume programmable in 16 steps. I/O ports: cassette tape storage, composite and RF video, RS-232 and expansion connector Power supply: 220V AC, 0.02 A, 4.5 W (built-in transformer) See also Pecom 32 Very similar hardware and BASIC to that used in the COMX-35 References Home computers EI Niš Computer-related introductions in 1985
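The simulated resolution quoted above follows directly from the text-mode geometry in the specification list (simple arithmetic, added here for clarity):
40 \times 6 = 240 \ \text{pixels across}, \qquad 24 \times 9 = 216 \ \text{pixels down},
hence the 240x216 semigraphics display obtained by reprogramming the character set.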
Pecom 64
Technology
306
23,820,110
https://en.wikipedia.org/wiki/Gymnopilus%20alabamensis
Gymnopilus alabamensis is a species of mushroom in the family Hymenogastraceae. It was first described by American mycologist Murrill in 1917. See also List of Gymnopilus species References External links Index Fungorum alabamensis Taxa named by William Alphonso Murrill Fungus species
Gymnopilus alabamensis
Biology
72
43,536
https://en.wikipedia.org/wiki/Beeswax
Italic text Beeswax (also known as cera alba) is a natural wax produced by honey bees of the genus Apis. The wax is formed into scales by eight wax-producing glands in the abdominal segments of worker bees, which discard it in or at the hive. The hive workers collect and use it to form cells for honey storage and larval and pupal protection within the beehive. Chemically, beeswax consists mainly of esters of fatty acids and various long-chain alcohols. Beeswax has been used since prehistory as the first plastic, as a lubricant and waterproofing agent, in lost wax casting of metals and glass, as a polish for wood and leather, for making candles, as an ingredient in cosmetics and as an artistic medium in encaustic painting. Beeswax is edible, having similarly negligible toxicity to plant waxes, and is approved for food use in most countries and in the European Union under the E number E901. However, due to its inability to be broken down by the human digestive system, it has insignificant nutritional value. Production Beeswax is formed by worker bees, which secrete it from eight wax-producing mirror glands on the inner sides of the sternites (the ventral shield or plate of each segment of the body) on abdominal segments 4 to 7. The sizes of these wax glands depend on the age of the worker, and after many daily flights, these glands gradually begin to atrophy. The new wax is initially glass-clear and colorless, becoming opaque after chewing and being contaminated with pollen by the hive worker bees, becoming progressively yellower or browner by incorporation of pollen oils and propolis. The wax scales are about across and thick, and about 1100 are needed to make a gram of wax. Worker bees use the beeswax to build honeycomb cells. For the wax-making bees to secrete wax, the ambient temperature in the hive must be . The book Beeswax Production, Harvesting, Processing and Products suggests of beeswax is sufficient to store of honey. Another study estimated that of wax can store of honey. Sugars from honey are metabolized into beeswax in wax-gland-associated fat cells. The amount of honey used by bees to produce wax has not been accurately determined, but according to Whitcomb's 1946 experiment, of honey yields of wax. Processing Beeswax as a product for human use may come from cappings cut off the cells in the process of extraction, from old comb that is scrapped, or from unwanted burr comb and brace comb removed from a hive. Its color varies from nearly white to brownish, but most often is a shade of yellow, depending on purity, the region, and the type of flowers gathered by the bees. The wax from the brood comb of the honey bee hive tends to be darker than wax from the honeycomb because impurities accumulate more quickly in the brood comb. Due to the impurities, the wax must be rendered before further use. The leftovers are called slumgum, and is derived from old breeding rubbish (pupa casings, cocoons, shed larva skins, etc.), bee droppings, propolis, and general rubbish. The wax may be clarified further by heating in water. As with petroleum waxes, it may be softened by dilution with mineral oil or vegetable oil to make it more workable at room temperature. Physical characteristics Beeswax is a fragrant solid at room temperature. The colors are light yellow, medium yellow, or dark brown and white. Beeswax is a tough wax formed from a mixture of several chemical compounds. Beeswax has a relatively low melting point range of . If beeswax is heated above discoloration occurs. 
The flash point of beeswax is . When natural beeswax is cold, it is brittle, and its fracture is dry and granular. At room temperature (conventionally taken as about ), it is tenacious and it softens further at human body temperature (). Chemical composition An approximate chemical formula for beeswax is C15H31COOC30H61. Its main constituents are palmitate, palmitoleate, and oleate esters of long-chain (30–32 carbons) aliphatic alcohols, with the ratio of triacontanyl palmitate CH3(CH2)29O-CO-(CH2)14CH3 to cerotic acid CH3(CH2)24COOH, the two principal constituents, being 6:1. Beeswax can be classified generally into European and Oriental types. The saponification value is lower (3–5) for European beeswax, and higher (8–9) for Oriental types. The analytical characterization can be done by high-temperature gas chromatography. Adulteration Beeswax faces challenges in the market due to the presence of various suppliers, making it difficult to distinguish authentic from fake variants. Adulterated beeswax often contains paraffin and other toxic additives, posing potential health risks and lacking the genuine honey-scented aroma of pure beeswax. Pharmaceutical grades of pure beeswax are distributed in the shape of pellets for the cosmetic, phamaceutical and food industries, among other uses. Production In 2020, world production of beeswax was 62,116 tonnes, led by India with 38% of the total. Uses Candle-making has long involved the use of beeswax, which burns readily and cleanly, and this material was traditionally prescribed for the making of the Paschal candle or "Easter candle". Beeswax candles are purported to be superior to other wax candles, because they burn brighter and longer, do not bend, and burn cleaner. It is further recommended for the making of other candles used in the liturgy of the Roman Catholic Church. Beeswax is also the candle constituent of choice in the Eastern Orthodox Church. Refined beeswax plays a prominent role in art materials both as a binder in encaustic paint and as a stabilizer in oil paint to add body. Beeswax is an ingredient in surgical bone wax, which is used during surgery to control bleeding from bone surfaces; shoe polish and furniture polish can both use beeswax as a component, dissolved in turpentine or sometimes blended with linseed oil or tung oil; modeling waxes can also use beeswax as a component; pure beeswax can also be used as an organic surfboard wax. Beeswax blended with pine rosin is used for waxing, and can serve as an adhesive to attach reed plates to the structure inside a squeezebox. It can also be used to make Cutler's resin, an adhesive used to glue handles onto cutlery knives. It is used in Eastern Europe in egg decoration; it is used for writing, via resist dyeing, on batik eggs (as in pysanky) and for making beaded eggs. Beeswax is used by percussionists to make a surface on tambourines for thumb rolls. It can also be used as a metal injection moulding binder component along with other polymeric binder materials. Beeswax was formerly used in the manufacture of phonograph cylinders. It may still be used to seal formal legal or royal decree and academic parchments such as placing an awarding stamp imprimatur of the university upon completion of postgraduate degrees. Purified and bleached beeswax is used in the production of food, cosmetics, and pharmaceuticals. The three main types of beeswax products are yellow, white, and beeswax absolute. 
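The approximate formula given above lends itself to a quick molar-mass estimate. The short sketch below is illustrative only; standard atomic masses are assumed, and nothing in it comes from the article beyond the formula C15H31COOC30H61, which collects to C46H92O2:
# Approximate molar mass of beeswax's nominal formula C15H31COOC30H61 (= C46H92O2).
# Illustrative sketch; atomic masses are standard average values.
atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}
empirical_formula = {"C": 46, "H": 92, "O": 2}
molar_mass = sum(atomic_mass[element] * count for element, count in empirical_formula.items())
print(f"approximate molar mass: {molar_mass:.1f} g/mol")  # roughly 677 g/mol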
Yellow beeswax is the crude product obtained from the honeycomb, white beeswax is bleached or filtered yellow beeswax, and beeswax absolute is yellow beeswax treated with alcohol. In food preparation, it is used as a coating for cheese; by sealing out the air, protection is given against spoilage (mold growth). Beeswax may also be used as a food additive E901, in small quantities acting as a glazing agent, which serves to prevent water loss, or used to provide surface protection for some fruits. Soft gelatin capsules and tablet coatings may also use E901. Beeswax is also a common ingredient of natural chewing gum. The wax monoesters in beeswax are poorly hydrolysed in the guts of humans and other mammals, so they have insignificant nutritional value. Some birds, such as honeyguides, can digest beeswax. Beeswax is the main diet of wax moth larvae. The use of beeswax in skin care and cosmetics has been increasing. A German study found beeswax to be superior to similar barrier creams (usually mineral oil-based creams such as petroleum jelly), when used according to its protocol. Beeswax is used in lip balm, lip gloss, hand creams, salves, and moisturizers; and in cosmetics such as eye shadow, blush, and eye liner. Beeswax is also an important ingredient in moustache wax and hair pomades, which make hair look sleek and shiny. In oil spill control, beeswax is processed to create Petroleum Remediation Product (PRP). It is used to absorb oil or petroleum-based pollutants from water. Historical uses Beeswax was among the first plastics to be used, alongside other natural polymers such as gutta-percha, horn, tortoiseshell, and shellac. For thousands of years, beeswax has had a wide variety of applications; it has been found in the tombs of Egypt, in wrecked Viking ships, and in Roman ruins. Beeswax never goes bad and can be heated and reused. Historically, it has been used: As candles - the oldest intact beeswax candles north of the Alps were found in the Alamannic graveyard of Oberflacht, Germany, dating to 6th/7th century AD In the manufacture of cosmetics As a modelling material in the lost-wax casting process, or cire perdue For wax tablets used for a variety of writing purposes In encaustic paintings such as the Fayum mummy portraits In bow making To strengthen and preserve sewing thread, cordage, shoe laces, etc. As a component of sealing wax To strengthen and to forestall splitting and cracking of wind instrument reeds To form the mouthpieces of a didgeridoo, and the frets on the Philippine kutiyapi – a type of boat lute As a sealant or lubricant for bullets in cap and ball firearms To stabilize the military explosive Torpex – before being replaced by a petroleum-based product In producing Javanese batik As an ancient form of dental tooth filling As the joint filler in the slate bed of pool and billiard tables. See also Carnauba wax Candelilla wax Paraffin wax Ozokerite (ceresin) Spermaceti References External links The chemistry of bees Joel Loveridge, School of Chemistry, University of Bristol, accessed November 2005 Bee products Animal glandular products Waxes Biodegradable materials Sewing equipment Articles containing video clips E-number additives
Beeswax
Physics,Chemistry
2,318
10,945,580
https://en.wikipedia.org/wiki/Schwarz%20integral%20formula
In complex analysis, a branch of mathematics, the Schwarz integral formula, named after Hermann Schwarz, allows one to recover a holomorphic function, up to an imaginary constant, from the boundary values of its real part. Unit disc Let f be a function holomorphic on the closed unit disc {z ∈ C | |z| ≤ 1}. Then f(z) = \frac{1}{2\pi i} \oint_{|\zeta|=1} \frac{\zeta+z}{\zeta-z} \operatorname{Re}(f(\zeta)) \, \frac{d\zeta}{\zeta} + i \operatorname{Im}(f(0)) for all |z| < 1. Upper half-plane Let f be a function holomorphic on the closed upper half-plane {z ∈ C | Im(z) ≥ 0} such that, for some α > 0, |z^α f(z)| is bounded on the closed upper half-plane. Then f(z) = \frac{1}{\pi i} \int_{-\infty}^{+\infty} \frac{\operatorname{Re}(f(\zeta))}{\zeta - z} \, d\zeta for all Im(z) > 0. Note that, as compared to the version on the unit disc, this formula does not have an arbitrary constant added to the integral; this is because the additional decay condition makes the conditions for this formula more stringent. Corollary of Poisson integral formula The formula follows from the Poisson integral formula applied to u = Re(f): u(z) = \frac{1}{2\pi} \int_0^{2\pi} u(e^{i\psi}) \operatorname{Re}\left( \frac{e^{i\psi}+z}{e^{i\psi}-z} \right) d\psi, since the Poisson kernel is the real part of the Schwarz kernel (e^{i\psi}+z)/(e^{i\psi}-z). By means of conformal maps, the formula can be generalized to any simply connected open set. Notes and references Ahlfors, Lars V. (1979), Complex Analysis, Third Edition, McGraw-Hill. Remmert, Reinhold (1990), Theory of Complex Functions, Second Edition, Springer. Saff, E. B., and A. D. Snider (1993), Fundamentals of Complex Analysis for Mathematics, Science, and Engineering, Second Edition, Prentice Hall. Theorems in complex analysis
Schwarz integral formula
Mathematics
323
1,368,932
https://en.wikipedia.org/wiki/PEG%20ratio
The 'PEG ratio' (price/earnings to growth ratio) is a valuation metric for determining the relative trade-off between the price of a stock, the earnings generated per share (EPS), and the company's expected growth. In general, the P/E ratio is higher for a company with a higher growth rate. Thus, using just the P/E ratio would make high-growth companies appear overvalued relative to others. It is assumed that by dividing the P/E ratio by the earnings growth rate, the resulting ratio is better for comparing companies with different growth rates. The PEG ratio is considered to be a convenient approximation. It was originally developed by Mario Farina who wrote about it in his 1969 Book, A Beginner's Guide To Successful Investing In The Stock Market. It was later popularized by Peter Lynch, who wrote in his 1989 book One Up on Wall Street that "The P/E ratio of any company that's fairly priced will equal its growth rate", i.e., a fairly valued company will have its PEG equal to 1. The formula can be supported theoretically by reference to the Sum of perpetuities method. Basic formula The rate is expressed as a percent value, and should use real growth only, to correct for inflation. For example, if a company is growing at 30% a year in real terms, and has a P/E of 30.00, it would have a PEG of 1.00. A lower ratio than 1.00 indicates an undervalued stock and a value above 1.00 indicates overvalued. The P/E ratio used in the calculation may be projected or trailing, and the annual growth rate may be the expected growth rate for the next year or the next five years. As an indicator PEG is a widely employed indicator of a stock's possible true value. Similar to PE ratios, a lower PEG means that the stock is undervalued more. It is favored by many over the price/earnings ratio because it also accounts for growth. See also PVGO. The PEG ratio of 1 is sometimes said to represent a fair trade-off between the values of cost and the values of growth, indicating that a stock is reasonably valued given the expected growth. A crude analysis suggests that companies with PEG values between 0 and 1 may provide higher returns. A PEG Ratio can also be a negative number if a stock's present income figure is negative (negative earnings), or if future earnings are expected to drop (negative growth). PEG ratios calculated from negative present earnings are viewed with skepticism as almost meaningless, other than as an indication of high investment risk. Criticism The PEG ratio is commonly used and provided by numerous sources of financial and stock information. Despite its wide use, the PEG ratio is only a rough rule of thumb. Criticisms of the PEG ratio include that it is an oversimplified ratio that fails to usefully relate the price/earnings ratio to growth because it fails to factor in return on equity (ROE) or the required return factor (T). When the PEG is quoted in public sources it makes a great deal of difference whether the earnings used in calculating the PEG is the past year's EPS, the estimated future year's EPS, or even selected analysts' speculative estimates of growth over the next five years. Use of the coming year's expected growth rate is considered preferable as the most reliable of the future-looking estimates. Yet which growth rate was selected for calculating a particular published PEG ratio may not be clear, or may require a close reading of the footnotes for the given figure. 
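The displayed formula in the "Basic formula" section above did not survive in this text. The standard definition, consistent with the worked example that follows it (a P/E of 30 and 30% growth giving a PEG of 1.00), is PEG = (P/E ratio) / (annual EPS growth rate in percent). A minimal illustrative calculation in Python (the helper name and the extra sample figures are illustrative, not from the article):
# PEG = price/earnings ratio divided by the expected annual EPS growth rate (in percent).
def peg_ratio(price_earnings: float, growth_rate_percent: float) -> float:
    if growth_rate_percent == 0:
        raise ValueError("growth rate of zero gives an undefined PEG")
    return price_earnings / growth_rate_percent

print(peg_ratio(30.0, 30.0))  # 1.0 -> 'fairly priced' under the Lynch rule of thumb
print(peg_ratio(25.0, 50.0))  # 0.5 -> read as undervalued by this metric
print(peg_ratio(40.0, 10.0))  # 4.0 -> read as overvalued by this metric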
The PEG ratio's validity is particularly questionable when used to compare companies expecting high growth with those expecting low-growth, or to compare companies with high P/E with those with a low P/E. It is more apt to be considered when comparing so-called growth companies (those growing earnings significantly faster than the market). Growth rate numbers are expected to come from an impartial source. This may be from an analyst, whose job it is to be objective, or the investor's own analysis. Management is not impartial and it is assumed that their statements have a bit of puffery, going from a bit optimistic to completely implausible. This is not always true, since some managers tend to predict modest results only to have things come out better than claimed. A prudent investor should investigate for himself whether the estimates are reasonable, and what should be used to compare the stock price. PEG calculations based on five-year growth estimates are especially subject to over-optimistic growth projections by analysts, which on average are not achieved, and to discounting the risk of outright loss of invested capital. Advantages Investors may prefer the PEG ratio because it explicitly puts a value on the expected growth in earnings of a company. The PEG ratio can offer a suggestion of whether a company's high P/E ratio reflects an excessively high stock price or is a reflection of promising growth prospects for the company. Disadvantages The PEG ratio is less appropriate for measuring companies without high growth. Large, well-established companies, for instance, may offer dependable dividend income, but little opportunity for growth. A company's growth rate is an estimate. It is subject to the limitations of projecting future events. Future growth of a company can change due to any number of factors: market conditions, expansion setbacks, and hype of investors. Also, the convention that "PEG=1" is appropriate is somewhat arbitrary and considered a rule-of-thumb metric. . The simplicity and convenience of calculating PEG leaves out several important variables. First, the absolute company growth rate used in the PEG does not account for the overall growth rate of the economy, and hence an investor must compare a stock's PEG to average PEG's across its industry and the entire economy to get any accurate sense of how competitive a stock is for investment. A low (attractive) PEG in times of high growth in the entire economy may not be particularly impressive when compared to other stocks, and vice versa for high PEG's in periods of slow growth or recession. In addition, company growth rates that are much higher than the economy's growth rate are unstable and vulnerable to any problems the company may face that would prevent it from keeping its current rate. Therefore, a higher-PEG stock with a steady, sustainable growth rate (compared to the economy's growth) can often be a more attractive investment than a low-PEG stock that may happen to just be on a short-term growth "streak". A sustained higher-than-economy growth rate over the years usually indicates a highly profitable company, but can also indicate a scam, especially if the growth is a flat percentage no matter how the rest of the economy fluctuates (as was the case for several years for returns in Bernie Madoff's Ponzi scheme). Finally, the volatility of highly speculative and risky stocks, which have low price/earnings ratios due to their very low price, is also not corrected for in PEG calculations. 
These stocks may have low PEG's due to a very low short-term (~1 year) PE ratio (e.g. 100% growth rate from $1 to $2 /stock) that does not indicate any guarantee of maintaining future growth or even solvency. References External links Investopedia - PEG Ratio Financial ratios Investment indicators
PEG ratio
Mathematics
1,516
67,337,770
https://en.wikipedia.org/wiki/Hydroxymethylation
Hydroxymethylation is a chemical reaction that installs the CH2OH group. The transformation can be implemented in many ways and applies to both industrial and biochemical processes. Hydroxymethylation with formaldehyde A common method for hydroxymethylation involves the reaction of formaldehyde with active C-H and N-H bonds: R3C-H + CH2O → R3C-CH2OH R2N-H + CH2O → R2N-CH2OH A typical active C-H bond is provided by a terminal acetylene or the alpha protons of an aldehyde. In industry, hydroxymethylation of acetaldehyde with formaldehyde is used in the production of pentaerythritol. P-H bonds are also prone to reaction with formaldehyde. Tetrakis(hydroxymethyl)phosphonium chloride ([P(CH2OH)4]Cl) is produced in this way from phosphine (PH3). Hydroxymethylation in demethylation 5-Methylcytosine is a common epigenetic marker. The methyl group is modified by oxidation in a process called hydroxymethylation: RCH3 + O → RCH2OH This oxidation is thought to be a prelude to removal, regenerating cytosine. Representative reactions A two-step hydroxymethylation of aldehydes involves methylenation followed by hydroboration-oxidation: RCHO + Ph3P=CH2 → RCH=CH2 + Ph3PO RCH=CH2 + R2BH → RCH2-CH2BR2 RCH2-CH2BR2 + H2O2 → RCH2-CH2OH + "HOBR2" Silylmethyl Grignard reagents are nucleophilic reagents for hydroxymethylation of ketones: R2C=O + ClMgCH2SiR'3 → R2C(OMgCl)CH2SiR'3 R2C(OMgCl)CH2SiR'3 + H2O + H2O2 → R2C(OH)CH2OH + "HOSiR'3" Reactions of hydroxymethylated compounds A common reaction of hydroxymethylated compounds is further reaction with a second equivalent of an active X-H bond: hydroxymethylation: X-H + CH2O → X-CH2OH crosslinking: X-H + X-CH2OH → X-CH2-X + H2O This pattern is illustrated by the use of formaldehyde in the production of various polymers and resins from phenol-formaldehyde condensations (Bakelite, Novolak, and calixarenes). Similar crosslinking occurs in urea-formaldehyde resins. The hydroxymethylation of N-H and P-H bonds can often be reversed by base. This reaction is illustrated by the preparation of tris(hydroxymethyl)phosphine: [P(CH2OH)4]Cl + NaOH → P(CH2OH)3 + H2O + H2C=O + NaCl When conducted in the presence of chlorinating agents, hydroxymethylation leads to chloromethylation as illustrated by the Blanc chloromethylation. Related Hydroxyethylation involves the installation of the CH2CH2OH group, as practiced in ethoxylation. Aminomethylation is often effected with Eschenmoser's salt, [(CH3)2NCH2]OTf. References Carbon-carbon bond forming reactions
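For the pentaerythritol route mentioned above, the overall stoichiometry (reconstructed here for illustration; the article itself gives no equation) combines three successive hydroxymethylations at the alpha carbon of acetaldehyde with a final crossed Cannizzaro step that consumes a fourth equivalent of formaldehyde: CH3CHO + 4 CH2O + NaOH → C(CH2OH)4 + HCOONa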
Hydroxymethylation
Chemistry
786
176,735
https://en.wikipedia.org/wiki/Neolithic%20architecture
Neolithic architecture refers to structures encompassing housing and shelter from approximately 10,000 to 2,000 BC, the Neolithic period. In southwest Asia, Neolithic cultures appear soon after 10,000 BC, initially in the Levant (Pre-Pottery Neolithic A and Pre-Pottery Neolithic B) and from there into the east and west. Early Neolithic structures and buildings can be found in southeast Anatolia, Syria, and Iraq by 8,000 BC, with agricultural societies first appearing in southeast Europe by 6,500 BC, and central Europe by ca. 5,500 BC (the earliest cultural complexes of which include the Starčevo-Koros (Cris), Linearbandkeramik, and Vinča). Architectural advances are an important part of the Neolithic period (10,000-2000 BC), during which some of the major innovations of human history occurred. The domestication of plants and animals, for example, led to both new economics and a new relationship between people and the world, an increase in community size and permanence, a massive development of material culture, and new social and ritual solutions to enable people to live together in these communities. New styles of individual structures and their combination into settlements provided the buildings required for the new lifestyle and economy, and were also an essential element of change. Housing The Neolithic people in the Levant, Anatolia, Syria, northern Mesopotamia and central Asia were great builders, utilising mud-brick to construct houses and villages. At Çatalhöyük, houses were plastered and painted with elaborate scenes of humans and animals. In Europe, the Neolithic long house with a timber frame, pitched, thatched roof, and walls finished in wattle and daub could be very large, presumably housing a whole extended family. Villages might comprise only a few such houses. Neolithic pile dwellings have been excavated in Sweden (Alvastra pile dwelling) and in the circum-Alpine area, with remains being found at the Mondsee and Attersee lakes in Upper Austria. Early archaeologists like Ferdinand Keller thought they formed artificial islands, much like the Scottish crannogs, but today it is clear that the majority of settlements were located on the shores of lakes and were only inundated later on. Reconstructed pile dwellings are shown in open-air museums in Unteruhldingen and Zürich (Pfahlbauland). In Romania, Moldova, and Ukraine, Neolithic settlements included wattle-and-daub structures with thatched roofs and floors made of logs covered in clay. This is also when the burdei pit-house (below-ground) style of house construction was developed, which was still used by Romanians and Ukrainians until the 20th century. Neolithic settlements and "cities" include: Göbekli Tepe in Turkey, ca. 9,000 BC Tell es-Sultan (Jericho) in the Levant, Neolithic from around 8,350 BC, arising from the earlier Epipaleolithic Natufian culture Nevali Cori in Turkey, ca. 8,000 BC Çatalhöyük in Turkey, 7,500 BC Mehrgarh in Pakistan, 7,000 BC Knap of Howar and Skara Brae, the Orkney Islands, Scotland, from 3,500 BC over 3,000 settlements of the Cucuteni-Trypillian culture, some with populations up to 15,000 residents, flourished in present-day Romania, Moldova and Ukraine from 5,400 to 2,800 BC. Tombs and ritual monuments Elaborate tombs for the dead were also built. These tombs are particularly numerous in Ireland, where there are many thousand still in existence. Neolithic people in the British Isles built long barrows and chamber tombs for their dead and causewayed camps, henges and cursus monuments.
Megalithic architecture Megaliths found in Europe and the Mediterranean were also erected in the Neolithic period. These monuments include megalithic tombs, temples and several structures of unknown function. Tomb architecture is normally easily distinguished by the presence of human remains that had originally been buried, often with recognizable intent. Other structures may have had a mixed use, now often characterised as religious, ritual, astronomical or political. The modern distinction between various architectural functions with which we are familiar today, now makes it difficult for us to think of some megalithic structures as multi-purpose socio-cultural centre points. Such structures would have served a mixture of socio-economic, ideological, political functions and indeed aesthetic ideals. The megalithic structures of Ġgantija, Tarxien, Ħaġar Qim, Mnajdra, Ta' Ħaġrat, Skorba and smaller satellite buildings on Malta and Gozo, first appearing in their current form around 3600 BC, represent one of the earliest examples of a fully developed architectural statement in which aesthetics, location, design and engineering fused into free-standing monuments. Stonehenge, the other well-known building from the Neolithic would later, 2600 and 2400 BC for the sarsen stones, and perhaps 3000 BC for the blue stones, be transformed into the form that we know so well. At its height Neolithic architecture marked geographic space; their durable monumentality embodied a past, perhaps made up of memories and remembrance. In the Central Mediterranean, Malta also became home of a subterranean skeuomorphised form of architecture around 3600 BC. At the Ħal-Saflieni Hypogeum, the inhabitants of Malta carved out an underground burial complex in which surface architectural elements were used to embellish a series of chambers and entrances. It is at the Neolithic Ħal-Saflieni Hypogeum that the earliest known skeuomorphism first occurred in the world. This architectural device served to define the aesthetics of the underworld in terms that well known in the larger megaliths. On Malta and Gozo, surface and subterranean architecture defined two worlds, which later, in the Greek world, would manifest themselves in the myth of Hades and the world of the living. In Malta, therefore, we encounter Neolithic architecture which is demonstrably not purely functional, but which was conceptual in design and purpose. Other structures Early Neolithic water wells from the Linear Pottery culture have been found in central Germany near Leipzig. These structures are built in timber with complicated woodworking joints at the edges and are dated between 5,200 and 5,100 BC. The world's oldest known engineered roadway, the Sweet Track in England, also dates from this time. See also Ancestral Puebloans Architectural history Megalithic Temples of Malta Ħal-Saflieni Hypogeum Womb tomb Proto-city References External links Russian Architecture: Pre-History Architectural history architecture
Neolithic architecture
Engineering
1,352
78,255,463
https://en.wikipedia.org/wiki/Octadecanolide
Octadecanolide is an organic compound with the chemical formula C18H34O2. It is a cyclic ester or lactone, more specifically a macrolide. Occurrence Several species of bees (such as some of genera Colletes, Halictus, Lasioglossum) and butterflies (such as some of genus Heliconius) use octadecanolide as a pheromone. The Dufour's gland of bees in the Halictinae subfamily contains octadecanolide along with other macrocyclic lactones, which could be used for a range of different applications like nest building, larval food and chemical communication. References Lactones Heterocyclic compounds with 1 ring Pheromones
Octadecanolide
Chemistry
150
44,924,659
https://en.wikipedia.org/wiki/Wire%20Swiss
Wire Swiss GmbH is a software company with headquarters in Zug, Switzerland. Its development center is in Berlin, Germany. The company is best known for its messaging application called Wire. The Wire app allows users to exchange end-to-end encrypted instant messages, as well as make voice and video calls. The software is available for the iOS, Android, macOS, Linux and Windows operating systems and WebRTC-compatible web browsers. It uses the Internet to make voice and video calls; send text messages, files, images, videos, audio files and user drawings depending on the clients used. It can be used on any of the available clients, requiring a phone number or email for registration. It is hosted inside the European Union and protected by European Union laws. Many employees working on Wire have previously worked with Skype, and Skype's co-founder Janus Friis is backing the project. Audio quality is one of Wire's key selling points. Since January 2024 the company is headed by Benjamin Schilz as CEO. Before joining Wire, Schilz founded Acorus Networks and later worked for the security company F5. History Wire Swiss GmbH was founded in Fall 2012 by Jonathan Christensen, Alan Duric and Priidu Zilmer, who previously worked at Skype and Microsoft. Jonathan Christensen previously co-founded Camino Networks in 2005 with Alan Duric, who also co-founded Telio. Camino networks was later acquired by Skype, a division of Microsoft Corporation. At Skype, Jonathan was responsible for getting Skype into new platforms such as Internet televisions and set-top boxes, while Priidu Zilmer, former head of design at Vdio, lead the Skype design team. On December 7, 2017, the company announced that former Huddle CEO Morten Brøgger had replaced Alan Duric as the company's CEO, and that Duric would join Wire’s Board of Directors and resume his role as CTO/COO. The company launched the Wire app on December 3, 2014. Shortly after its launch, the company retracted a claim from their website that the app's messages and conversation history could only be read by the conversation participants. In August 2015, the company added group calling to their app. From its launch until March 2016, Wire's messages were only encrypted between the client and the company's server. In March 2016, the company added end-to-end encryption for its messaging traffic, as well as a video calling feature. Wire Swiss GmbH released the source code of the wire client applications under the GPLv3 license in July 2016. The company also published a number of restrictions that apply to users who have compiled their own applications. Among other things, they may not change the way the applications connect and interact with the company's centralized servers. Wire Swiss started open sourcing Wire's server code in April 2017. On September 19, 2017, the company announced that they had finished open sourcing the server code, licensed under the AGPL. In July 2019 Wire raised $8.2m investment from Morpheus Ventures and others. On July 18 of the same month, 100% of the company's shares have been taken over by Wire Holdings Inc., Delaware, USA. As of August 13, 2020 the Wire Group Holding GmbH from Germany is the sole shareholder of Wire Swiss GmbH. App Features Wire allows users to exchange text, voice, photo, video and music messages. The application also supports group messaging. The app allows group calling with up to ten participants. A stereo feature places participants in "virtual space" so that users can differentiate voice directionality. 
The application adapts to varying network conditions. The application supports the exchange of animated GIFs up to 5MB through a media integration with a company called Giphy. The iOS and Android versions also include a sketch feature that allows users to draw a sketch into a conversation or over a photo. YouTube, SoundCloud, Spotify and Vimeo integrations allow users to share music and videos within chats. Wire is available on mobile and web. The web service is called Wire for Web. Wire activity is synced on iOS, Android and web apps. The desktop version supports screen sharing. Wire also includes a function for ephemeral messaging in 1:1 and group conversations. With Wire for Teams, Wire introduced a paid product with a series of features available to businesses. It offers the administration of team members: Adding and removing people, assigning roles, and inviting guests to specific chats. Technical Wire provides end-to-end encryption for its instant messages. Wire's instant messages are encrypted with Proteus, a protocol that Wire Swiss developed based on the Signal Protocol. Wire's voice calls are encrypted with DTLS and SRTP, and its video calls with RTP. In addition to this, client-server communication is protected by Transport Layer Security. Business model Wire Swiss GmbH receives financial backing from a firm called Iconical. According to an article published by Reuters, Wire Swiss has not disclosed how much funding it has received, and in March 2016, it had yet to discover a sustainable business model. Wire Executive Chairman Janus Friis told Bloomberg that the company will "never create an advertising-based business model", but "might charge for certain premium services in the future". In July 2017, Wire Swiss announced the beta version of an end-to-end encrypted team messaging platform. In October 2017, Wire officially released the team messaging platform as a subscription based communication solution for small businesses. See also Comparison of instant messaging clients Comparison of VoIP software Internet privacy List of video telecommunication services and product brands Secure instant messaging References External links Swiss companies established in 2012 Privately held companies of Switzerland Mobile telecommunication services Swiss brands
Wire Swiss
Technology
1,194
4,914,004
https://en.wikipedia.org/wiki/Rachitrema
Rachitrema is a poorly known genus of ichthyosaur from the Triassic of France. Its remains were found in France by two independent collectors, towards the end of the nineteenth century. They were only isolated bone fragments. Classification The type species is R. pellati, described by Sauvage in 1883. When first described, Sauvage classified it as a dinosaur. Later, Franz Nopcsa referred the genus to Anchisauridae, while Karl Alfred von Zittel referred it to either Zanclodontidae or Megalosauridae. The ichthyosaur nature of Rachitrema was recognized by Friedrich von Huene, who synonymized it with Shastasaurus. Sauvage conceded that Rachitrema was non-dinosaurian, and the ichthyosaur classification of the genus became universally accepted by several authors. McGowan and Motani (2003) considered Rachitrema dinosaurian without comment. However, recent re-examination of the type material of Rachitrema reaffirms the ichthyosaurian classification of the genus, with most of the original remains referable to Ichthyosauria, and the rest being indeterminate beyond Reptilia. References External links Dinosaur Mailing List entry, which discusses the genus Nomina dubia Fossils of France Fossil taxa described in 1883
Rachitrema
Biology
280
34,292,864
https://en.wikipedia.org/wiki/Australian%20Faunal%20Directory
The Australian Faunal Directory (AFD) is an online catalogue of taxonomic and biological information on all animal species known to occur within Australia. It is a database produced by the Department of Climate Change, Energy, the Environment and Water of the Government of Australia. By May 12, 2021, the Australian Faunal Directory had collected information about 126,442 species and subspecies. It includes the data from the discontinued Zoological Catalogue of Australia and is regularly updated. Started in the 1980s, its goal is to compile a "list of all Australian fauna including terrestrial vertebrates, ants and marine fauna" and to create an "Australian biotaxonomic information system". References External links http://www.environment.gov.au/science/abrs/publications/fauna http://www.environment.gov.au/science/abrs/publications/zoological-catalogue-of-australia Fauna of Australia Australian science websites
Australian Faunal Directory
Biology
189
211,830
https://en.wikipedia.org/wiki/Horologium%20%28constellation%29
Horologium (Latin , the pendulum clock, from Greek , ) is a constellation of six stars faintly visible in the southern celestial hemisphere. It was first described by the French astronomer Nicolas-Louis de Lacaille in 1756 and visualized by him as a clock with a pendulum and a second hand. In 1922 the constellation was redefined by the International Astronomical Union (IAU) as a region of the celestial sphere containing Lacaille's stars, and has since been an IAU designated constellation. Horologium's associated region is wholly visible to observers south of 23°N. The constellation's brightest star—and the only one brighter than an apparent magnitude of 4—is Alpha Horologii (at 3.85), an aging orange giant star that has swollen to around 11 times the diameter of the Sun. The long-period variable-brightness star, R Horologii (4.7 to 14.3), has one of the largest variations in brightness among all stars in the night sky visible to the unaided eye. Four star systems in the constellation are known to have exoplanets; at least one—Gliese 1061—contains an exoplanet in its habitable zone. History The French astronomer Nicolas-Louis de Lacaille first described the constellation as l'Horloge à pendule & à secondes (Clock with pendulum and seconds hand) in 1756, after he had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in previously uncharted regions of the southern celestial hemisphere, which were not visible from Europe. All but one honoured scientific instruments, and so symbolised the Age of Enlightenment. The constellation name was Latinised to Horologium in a catalogue and updated chart published posthumously in 1763. The Latin term is ultimately derived from the Ancient Greek ὡρολόγιον, for an instrument for telling the hour. Characteristics Covering a total of 248.9 square degrees or 0.603% of the sky, Horologium ranks 58th in area out of the 88 modern constellations. Its position in the southern celestial hemisphere means the whole constellation is visible to observers south of 23°N. Horologium is bordered by five constellations: Eridanus (the Po river or Nile river), Caelum (the chisel), Reticulum (the reticle), Dorado (the dolphin/swordfish), and Hydrus (the male water snake). The three letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Hor". The official constellation boundaries are defined by a twenty-two-sided polygon (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −39.64° and −67.04°. Features Stars Horologium has one star brighter than apparent magnitude 4, and 41 stars brighter than or equal to magnitude 6.5. Lacaille charted and designated 11 stars in the constellation, giving them the Bayer designations Alpha (α Hor) through Lambda Horologii (λ Hor) in 1756. In the mid-19th century, English astronomer Francis Baily removed the designations of two—Epsilon and Theta Horologii—as he held they were too faint to warrant naming. He was unable to find a star that corresponded to the coordinates of Lacaille's Beta Horologii. Determining that the coordinates were wrong, he assigned the designation to another star. Kappa Horologii, too, was unable to be verified—although it most likely was the star HD 18292—and the name fell out of use. 
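The share of the sky quoted above can be checked directly; the only outside figure used is the standard total of about 41,253 square degrees for the whole celestial sphere, which is not stated in the article:
\frac{248.9}{41\,253} \approx 0.00603 \approx 0.603\%,
matching the percentage given for Horologium's area.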
In 1879, American astronomer Benjamin Apthorp Gould assigned designations to what became Mu and Nu Horologii as he felt they were bright enough to warrant them. At magnitude 3.9, Alpha Horologii is the brightest star in the constellation, located 115 (±0.5) light-years from Earth. German astronomer Johann Elert Bode depicted it as the pendulum of the clock, while Lacaille made it one of the weights. It is an orange giant star of spectral type K2III that has swollen to around 11 times the diameter of the Sun, having spent much of its life as a white main-sequence star. At an estimated 1.55 times the mass of the Sun, it is radiating 38 times the Sun's luminosity from its photosphere at an effective (surface) temperature of 5,028K. At magnitude 4.93, Delta Horologii is the second-brightest star in the constellation, and forms a wide optical double with Alpha. Delta itself is a true binary system composed of a white main sequence star of spectral type A5V that is 1.41 times as massive as the Sun with a magnitude of 5.15 and its fainter companion of magnitude 7.29. The system is located 179 (±4) light-years from the earth. At magnitude 5.0, Beta Horologii is a white giant 63 times as luminous as the Sun with an effective temperature of 8,303K. It is 312 (±4) light-years from Earth, and has been little-studied. Lambda Horologii is an ageing yellow-white giant star of spectral type F2III that spins around at 140km/second, and is hence mildly flattened at its poles (oblate). It is 161 (±1) light-years from Earth. With a magnitude of 5.24, Nu Horologii is a white main sequence star of spectral type A2V located 169 (±1) light-years from Earth that is around 1.9 times as massive as the Sun. Estimated to be around 540 million years old, it has a debris disk that appears to have two components: an inner disk is orbiting at a distance of , while an outer disk lies from the star. The estimated mass of the disks is the mass of the Earth. Horologium has several variable stars. R Horologii is a red giant Mira variable with one of the widest ranges in brightness known of stars in the night sky visible to the unaided eye. It is around 1,000 light-years from Earth. It has a minimum magnitude of 14.3 and a maximum magnitude of 4.7, with a period of approximately 13 months. T and U Horologii are also Mira variables. The Astronomical Society of Southern Africa reported in 2003 that observations of these two stars were needed as data on their light curves was incomplete. TW Horologii is a semiregular variable red giant star that is classified as a carbon star, and is 1,370 (±70) light-years from Earth. Iota Horologii is a yellow-white dwarf star 1.23 (±0.12) times as massive and 1.16 (±0.04) times as wide as the Sun with a spectral type of F8V, 57 (±0.05) light-years from Earth. Its chemical profile, movement and age indicate it formed within the Hyades cluster but has drifted around 130 light-years away from the other members. It has a planet at least 2.5 times as massive as Jupiter orbiting it every 307 days. HD 27631 is a Sun-like star located 164 (±0.3) light-years from Earth which was found to have a planet at least 1.45 times as massive as Jupiter that takes 2,208 (±66) days (six years) to complete an orbit. WASP-120 is a yellow-white main-sequence star around 1.4 times as massive as the Sun with a spectral type of F5V that is estimated to be 2.6 (±0.5) billion years old. 
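As an illustration of the planetary data above (a derived figure, not one stated in the article), Kepler's third law gives the approximate orbital distance of the planet around Iota Horologii from the 307-day period and the 1.23-solar-mass host:
a \approx \left( \frac{M_\ast}{M_\odot} \left( \frac{P}{1\,\text{yr}} \right)^{2} \right)^{1/3} \text{AU} = \left( 1.23 \times (307/365.25)^2 \right)^{1/3} \text{AU} \approx 0.95\ \text{AU},
placing the planet at roughly the Earth–Sun distance from its star.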
It has a massive planet around 4.85 times the mass of Jupiter that completes its orbit every 3.6 days, and has an estimated surface temperature of 1,880 (±70)K. With an apparent magnitude of 13.06, Gliese 1061 is a red dwarf of spectral type M5.5V that has 12% of the mass and 15% of the diameter of the Sun, and shines with only 0.17% of its luminosity. Located 12 light-years away from Earth, it is the 20th-closest single star or stellar system to the Sun. In August 2019, it was announced that it had three planets, one of which lay in its habitable zone. Deep-sky objects Horologium is home to many deep-sky objects, including several globular clusters. NGC 1261 is a globular cluster of magnitude 8, located 53,000 light-years from Earth. It lies 4.7 degrees north-northeast of Mu Horologii. The globular cluster Arp-Madore 1 is the most remote known globular cluster in the Milky Way at a distance of from Earth. NGC 1512 is a barred spiral galaxy 2.1 degrees west-southwest of Alpha Horologii with an apparent magnitude of 10.2. About five arcmin (13.8 kpc) away is the dwarf lenticular galaxy NGC 1510. The two are in the process of a merger which has been going on for 400 million years. The Horologium-Reticulum Supercluster is a galaxy supercluster, second in size only to the Shapley Supercluster in the local universe (anything within 200 mpc of Earth). It contains over 20 Abell galaxy clusters and covers more than 100 deg2 of the sky, centered roughly at equatorial coordinates α = , δ = . See also Horologium (Chinese astronomy) Notes References Works cited External links The Deep Photographic Guide to the Constellations: Horologium The clickable Horologium Starry Night Photography – Horologium Constellation Southern constellations Constellations listed by Lacaille
Horologium (constellation)
Astronomy
2,094
4,051,750
https://en.wikipedia.org/wiki/Dagmar%20bumper
Dagmar bumpers (also known as "bullet bumpers") is a slang term for chrome conical-shaped bumper guards that began to appear on the front bumper/grille assemblies of certain American automobiles following World War II. They reached their peak in the mid-1950s. Derivation The term evokes the prominent bosom of Dagmar, a buxom early-1950s television personality featuring low-cut gowns and conical bra cups. She was amused by the tribute. History As originally conceived by Harley Earl, GM Vice President of Design, the conical bumper guards would mimic artillery shells. Placed inboard of the headlights on front bumpers of Cadillacs, they were intended to both convey the image of a speeding projectile and protect vehicles' front ends in collisions. The similarity of these features to the then popular bullet bra as epitomized by buxom television personality Dagmar was inescapable. As the 1950s wore on and American automakers' use of chrome grew more flamboyant, they grew more pronounced. The black rubber tips they gained on the 1957 Cadillac Eldorado Brougham and other models were known as pasties. In the early 1960s, American car designers shed both rear tailfins and prominent bumper guards. Use Postwar Cadillacs began sporting conical bumper guards in the 1946 model year. In 1951 models, some were raised into the grille. In 1957, black rubber tips appeared. The element continued to become more pronounced in size through 1958, but were eliminated in the 1959 Cadillac redesign. Mercury sported Dagmars in 1953 through the 1956 model year. Lincoln added Dagmars in 1960, with a black rubber ring separating the body from the chrome tip. Buick added Dagmars on its 1954 and 1955 models, in 1954 as part of the bumper assembly, and moved into the grille in 1955. Packard included large Dagmars on the bumper in 1955 and 1956 models. Full-sized Chevys in 1961 and 1963 also had small rubber Dagmars on the front bumper, and 1962 Ford Galaxie had small rubber Dagmars as an option. GAZ-13 Chaika had similar designs until their discontinuation in the 1980s Other iterations In 1974, British motoring press applied the name of statuesque British actress Sabrina to oversized pairs of protruding rubber bumper blocks added to MG MGB, MG Midget, Triumph Spitfire and Triumph TR6 sports cars to meet strengthened US auto safety regulations. The term, which was not common in the U.S., lingered at least to the mid-1990s in some areas. Gallery References External links 1961 Chevrolet Impala with Dagmar bumpers 1955 Packard Caribbean with Dagmar bumpers Vehicle design Slang Automotive body parts Automotive styling features
Dagmar bumper
Engineering
557
47,863,556
https://en.wikipedia.org/wiki/Volcanic%20ash
Volcanic ash consists of fragments of rock, mineral crystals, and volcanic glass, produced during volcanic eruptions and measuring less than 2 mm (0.079 inches) in diameter. The term volcanic ash is also often loosely used to refer to all explosive eruption products (correctly referred to as tephra), including particles larger than 2 mm. Volcanic ash is formed during explosive volcanic eruptions when dissolved gases in magma expand and escape violently into the atmosphere. The force of the gases shatters the magma and propels it into the atmosphere where it solidifies into fragments of volcanic rock and glass. Ash is also produced when magma comes into contact with water during phreatomagmatic eruptions, causing the water to explosively flash to steam leading to shattering of magma. Once in the air, ash is transported by wind up to thousands of kilometres away. Due to its wide dispersal, ash can have a number of impacts on society, including animal and human health problems, disruption to aviation, disruption to critical infrastructure (e.g., electric power supply systems, telecommunications, water and waste-water networks, transportation), primary industries (e.g., agriculture), and damage to buildings and other structures. Formation Volcanic ash is formed during explosive volcanic eruptions and phreatomagmatic eruptions, and may also be formed during transport in pyroclastic density currents. Explosive eruptions occur when magma decompresses as it rises, allowing dissolved volatiles (dominantly water and carbon dioxide) to exsolve into gas bubbles. As more bubbles nucleate a foam is produced, which decreases the density of the magma, accelerating it up the conduit. Fragmentation occurs when bubbles occupy ~70–80 vol% of the erupting mixture. When fragmentation occurs, violently expanding bubbles tear the magma apart into fragments which are ejected into the atmosphere where they solidify into ash particles. Fragmentation is a very efficient process of ash formation and is capable of generating very fine ash even without the addition of water. Volcanic ash is also produced during phreatomagmatic eruptions. During these eruptions fragmentation occurs when magma comes into contact with bodies of water (such as the sea, lakes and marshes) groundwater, snow or ice. As the magma, which is significantly hotter than the boiling point of water, comes into contact with water an insulating vapor film forms (Leidenfrost effect). Eventually this vapor film will collapse leading to direct coupling of the cold water and hot magma. This increases the heat transfer which leads to the rapid expansion of water and fragmentation of the magma into small particles which are subsequently ejected from the volcanic vent. Fragmentation causes an increase in contact area between magma and water creating a feedback mechanism, leading to further fragmentation and production of fine ash particles. Pyroclastic density currents can also produce ash particles. These are typically produced by lava dome collapse or collapse of the eruption column. Within pyroclastic density currents particle abrasion occurs as particles violently collide, resulting in a reduction in grain size and production of fine grained ash particles. In addition, ash can be produced during secondary fragmentation of pumice fragments, due to the conservation of heat within the flow. These processes produce large quantities of very fine grained ash which is removed from pyroclastic density currents in co-ignimbrite ash plumes. 
Physical and chemical characteristics of volcanic ash are primarily controlled by the style of volcanic eruption. Volcanoes display a range of eruption styles which are controlled by magma chemistry, crystal content, temperature and dissolved gases of the erupting magma and can be classified using the volcanic explosivity index (VEI). Effusive eruptions (VEI 1) of basaltic composition produce <10⁵ m³ of ejecta, whereas extremely explosive eruptions (VEI 5+) of rhyolitic and dacitic composition can inject large quantities (>10⁹ m³) of ejecta into the atmosphere. Properties Chemical The types of minerals present in volcanic ash are dependent on the chemistry of the magma from which it erupted. Considering that the most abundant elements found in silicate magma are silicon and oxygen, the various types of magma (and therefore ash) produced during volcanic eruptions are most commonly explained in terms of their silica content. Low energy eruptions of basalt produce a characteristically dark coloured ash containing ~45–55% silica that is generally rich in iron (Fe) and magnesium (Mg). The most explosive rhyolite eruptions produce a felsic ash that is high in silica (>69%) while other types of ash with an intermediate composition (e.g., andesite or dacite) have a silica content between 55 and 69%. The principal gases released during volcanic activity are water, carbon dioxide, hydrogen, sulfur dioxide, hydrogen sulfide, carbon monoxide and hydrogen chloride. The sulfur and halogen gases and metals are removed from the atmosphere by processes of chemical reaction, dry and wet deposition, and by adsorption onto the surface of volcanic ash. It has long been recognised that a range of sulfate and halide (primarily chloride and fluoride) compounds are readily mobilised from fresh volcanic ash. It is considered most likely that these salts are formed as a consequence of rapid acid dissolution of ash particles within eruption plumes, which is thought to supply the cations involved in the deposition of sulfate and halide salts. While some 55 ionic species have been reported in fresh ash leachates, the most abundant species usually found are the cations Na+, K+, Ca2+ and Mg2+ and the anions Cl−, F− and SO42−. Molar ratios between ions present in leachates suggest that in many cases these elements are present as simple salts such as NaCl and CaSO4. In a sequential leaching experiment on ash from the 1980 eruption of Mount St. Helens, chloride salts were found to be the most readily soluble, followed by sulfate salts. Fluoride compounds are in general only sparingly soluble (e.g., CaF2, MgF2), with the exception of fluoride salts of alkali metals and compounds such as calcium hexafluorosilicate (CaSiF6). The pH of fresh ash leachates is highly variable, depending on the presence of an acidic gas condensate (primarily as a consequence of the gases SO2, HCl and HF in the eruption plume) on the ash surface. The crystalline-solid structure of the salts acts more as an insulator than a conductor. However, once the salts are dissolved into a solution by a source of moisture (e.g., fog, mist, light rain, etc.), the ash may become corrosive and electrically conductive. A recent study has shown that the electrical conductivity of volcanic ash increases with (1) increasing moisture content, (2) increasing soluble salt content, and (3) increasing compaction (bulk density). The ability of volcanic ash to conduct electric current has significant implications for electric power supply systems.
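The silica thresholds given above translate directly into a small classification helper. This sketch is illustrative only; the class boundaries (45–55% basaltic, 55–69% intermediate, >69% felsic) are the ones quoted in the text, while the function itself and its naming are assumptions:
def classify_ash_by_silica(sio2_wt_percent: float) -> str:
    # Rough compositional class of volcanic ash from its silica content (wt% SiO2),
    # using the ranges quoted in the article.
    if sio2_wt_percent > 69:
        return "felsic (rhyolitic)"              # most explosive eruptions, light-coloured ash
    if sio2_wt_percent >= 55:
        return "intermediate (andesitic/dacitic)"
    if sio2_wt_percent >= 45:
        return "mafic (basaltic)"                # dark ash, rich in Fe and Mg
    return "below the compositional range discussed here"

print(classify_ash_by_silica(50))   # mafic (basaltic)
print(classify_ash_by_silica(72))   # felsic (rhyolitic)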
Physical Components Volcanic ash particles erupted during magmatic eruptions are made up of various fractions of vitric (glassy, non-crystalline), crystalline or lithic (non-magmatic) particles. Ash produced during low viscosity magmatic eruptions (e.g., Hawaiian and Strombolian basaltic eruptions) produce a range of different pyroclasts dependent on the eruptive process. For example, ash collected from Hawaiian lava fountains consists of sideromelane (light brown basaltic glass) pyroclasts which contain microlites (small quench crystals, not to be confused with the rare mineral microlite) and phenocrysts. Slightly more viscous eruptions of basalt (e.g., Strombolian) form a variety of pyroclasts from irregular sideromelane droplets to blocky tachylite (black to dark brown microcrystalline pyroclasts). In contrast, most high-silica ash (e.g. rhyolite) consists of pulverised products of pumice (vitric shards), individual phenocrysts (crystal fraction) and some lithic fragments (xenoliths). Ash generated during phreatic eruptions primarily consists of hydrothermally altered lithic and mineral fragments, commonly in a clay matrix. Particle surfaces are often coated with aggregates of zeolite crystals or clay and only relict textures remain to identify pyroclast types. Morphology The morphology (shape) of volcanic ash is controlled by a plethora of different eruption and kinematic processes. Eruptions of low-viscosity magmas (e.g., basalt) typically form droplet shaped particles. This droplet shape is, in part, controlled by surface tension, acceleration of the droplets after they leave the vent, and air friction. Shapes range from perfect spheres to a variety of twisted, elongate droplets with smooth, fluidal surfaces. The morphology of ash from eruptions of high-viscosity magmas (e.g., rhyolite, dacite, and some andesites) is mostly dependent on the shape of vesicles in the rising magma before disintegration. Vesicles are formed by the expansion of magmatic gas before the magma has solidified. Ash particles can have varying degrees of vesicularity and vesicular particles can have extremely high surface area to volume ratios. Concavities, troughs, and tubes observed on grain surfaces are the result of broken vesicle walls. Vitric ash particles from high-viscosity magma eruptions are typically angular, vesicular pumiceous fragments or thin vesicle-wall fragments while lithic fragments in volcanic ash are typically equant, or angular to subrounded. Lithic morphology in ash is generally controlled by the mechanical properties of the wall rock broken up by spalling or explosive expansion of gases in the magma as it reaches the surface. The morphology of ash particles from phreatomagmatic eruptions is controlled by stresses within the chilled magma which result in fragmentation of the glass to form small blocky or pyramidal glass ash particles. Vesicle shape and density play only a minor role in the determination of grain shape in phreatomagmatic eruptions. In this sort of eruption, the rising magma is quickly cooled on contact with ground or surface water. Stresses within the "quenched" magma cause fragmentation into five dominant pyroclast shape-types: (1) blocky and equant; (2) vesicular and irregular with smooth surfaces; (3) moss-like and convoluted; (4) spherical or drop-like; and (5) plate-like. Density The density of individual particles varies with different eruptions. 
The density of volcanic ash varies between 700 and 1200 kg/m3 for pumice, 2350–2450 kg/m3 for glass shards, 2700–3300 kg/m3 for crystals, and 2600–3200 kg/m3 for lithic particles. Since coarser and denser particles are deposited close to source, fine glass and pumice shards are relatively enriched in ash fall deposits at distal locations. The high density and hardness (~5 on the Mohs Hardness Scale) together with a high degree of angularity, make some types of volcanic ash (particularly those with a high silica content) very abrasive. Grain size Volcanic ash consists of particles (pyroclasts) with diameters less than 2 mm (particles larger than 2 mm are classified as lapilli), and can be as fine as 1 μm. The overall grain size distribution of ash can vary greatly with different magma compositions. Few attempts have been made to correlate the grain size characteristics of a deposit with those of the event which produced it, though some predictions can be made. Rhyolitic magmas generally produce finer grained material compared to basaltic magmas, due to the higher viscosity and therefore explosivity. The proportions of fine ash are higher for silicic explosive eruptions, probably because vesicle size in the pre-eruptive magma is smaller than those in mafic magmas. There is good evidence that pyroclastic flows produce high proportions of fine ash by communition and it is likely that this process also occurs inside volcanic conduits and would be most efficient when the magma fragmentation surface is well below the summit crater. Dispersal Ash particles are incorporated into eruption columns as they are ejected from the vent at high velocity. The initial momentum from the eruption propels the column upwards. As air is drawn into the column, the bulk density decreases and it starts to rise buoyantly into the atmosphere. At a point where the bulk density of the column is the same as the surrounding atmosphere, the column will cease rising and start moving laterally. Lateral dispersion is controlled by prevailing winds and the ash may be deposited hundreds to thousands of kilometres from the volcano, depending on eruption column height, particle size of the ash and climatic conditions (especially wind direction and strength and humidity). Ash fallout occurs immediately after the eruption and is controlled by particle density. Initially, coarse particles fall out close to source. This is followed by fallout of accretionary lapilli, which is the result of particle agglomeration within the column. Ash fallout is less concentrated during the final stages as the column moves downwind. This results in an ash fall deposit which generally decreases in thickness and grain size exponentially with increasing distance from the volcano. Fine ash particles may remain in the atmosphere for days to weeks and be dispersed by high-altitude winds. These particles can impact on the aviation industry (refer to impacts section) and, combined with gas particles, can affect global climate. Volcanic ash plumes can form above pyroclastic density currents. These are called co-ignimbrite plumes. As pyroclastic density currents travel away from the volcano, smaller particles are removed from the flow by elutriation and form a less dense zone overlying the main flow. This zone then entrains the surrounding air and a buoyant co-ignimbrite plume is formed. These plumes tend to have higher concentrations of fine ash particles compared to magmatic eruption plumes due to the abrasion within the pyroclastic density current. 
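As an illustration of why the densest, coarsest particles fall out close to the vent while fine glass shards remain airborne for days to weeks, the sketch below estimates a terminal fall speed using Stokes' law. This is a simplification that only holds for fine particles settling slowly in still air (low Reynolds number) and it ignores particle shape and the variation of the atmosphere with height; the particle values are taken from the density ranges quoted above.

```python
def stokes_settling_velocity(diameter_m: float,
                             particle_density: float,
                             air_density: float = 1.2,       # kg/m^3, near sea level
                             air_viscosity: float = 1.8e-5,  # Pa*s, dynamic viscosity of air
                             g: float = 9.81) -> float:
    """Terminal fall speed (m/s) of a small sphere under Stokes' law:
    v = (rho_p - rho_f) * g * d^2 / (18 * mu). Valid only for fine ash
    where the particle Reynolds number stays well below ~1."""
    return (particle_density - air_density) * g * diameter_m ** 2 / (18.0 * air_viscosity)


# A 30-micrometre glass shard (~2400 kg/m^3) settles at roughly 6-7 cm/s,
# whereas a 1-micrometre particle settles about a thousand times slower,
# consistent with fine ash staying aloft long enough for long-range transport.
print(stokes_settling_velocity(30e-6, 2400.0))
print(stokes_settling_velocity(1e-6, 2400.0))
```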
Impacts Population growth has caused the progressive encroachment of urban development into higher risk areas, closer to volcanic centres, increasing the human exposure to volcanic ash fall events. Direct health effects of volcanic ash on humans are usually short-term and mild for persons in normal health, though prolonged exposure potentially poses some risk of silicosis in unprotected workers. Of greater concern is the impact of volcanic ash on the infrastructure critical to supporting modern societies, particularly in urban areas, where high population densities create high demand for services. Several recent eruptions have illustrated the vulnerability of urban areas that received only a few millimetres or centimetres of volcanic ash. This has been sufficient to cause disruption of transportation, electricity, water, sewage and storm water systems. Costs have been incurred from business disruption, replacement of damaged parts and insured losses. Ash fall impacts on critical infrastructure can also cause multiple knock-on effects, which may disrupt many different sectors and services. Volcanic ash fall is physically, socially, and economically disruptive. Volcanic ash can affect both proximal areas and areas many hundreds of kilometres from the source, and causes disruptions and losses in a wide variety of different infrastructure sectors. Impacts are dependent on: ash fall thickness; the grain size and chemistry of the ash; whether the ash is wet or dry; the duration of the ash fall; and any preparedness, management and prevention (mitigation) measures employed to reduce effects from the ash fall. Different sectors of infrastructure and society are affected in different ways and are vulnerable to a range of impacts or consequences. These are discussed in the following sections. Human and animal health Ash particles of less than 10 μm diameter suspended in the air are known to be inhalable, and people exposed to ash falls have experienced respiratory discomfort, breathing difficulty, eye and skin irritation, and nose and throat symptoms. Most of these effects are short-term and are not considered to pose a significant health risk to those without pre-existing respiratory conditions. The health effects of volcanic ash depend on the grain size, mineralogical composition and chemical coatings on the surface of the ash particles. Additional factors related to potential respiratory symptoms are the frequency and duration of exposure, the concentration of ash in the air and the respirable ash fraction; the proportion of ash with less than 10 μm diameter, known as PM10. The social context may also be important. Chronic health effects from volcanic ash fall are possible, as exposure to free crystalline silica is known to cause silicosis. Minerals associated with this include quartz, cristobalite and tridymite, which may all be present in volcanic ash. These minerals are described as ‘free’ silica as the SiO2 is not attached to another element to create a new mineral. However, magmas containing less than 58% SiO2 are thought to be unlikely to contain crystalline silica. The exposure levels to free crystalline silica in the ash are commonly used to characterise the risk of silicosis in occupational studies (for people who work in mining, construction and other industries,) because it is classified as a human carcinogen by the International Agency for Research on Cancer. 
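Because health impact assessments hinge on the respirable (PM10) fraction described above, one simple way to summarise a measured grain-size distribution is to compute the mass fraction finer than 10 μm. The sketch below assumes the distribution is supplied as paired particle diameters and masses (or mass percentages); the function name and the input format are illustrative only, not a standard from the health literature.

```python
def pm10_mass_fraction(diameters_um, masses):
    """Fraction of total sample mass carried by particles finer than 10 micrometres
    (the respirable PM10 fraction). `diameters_um` and `masses` are parallel
    sequences describing a measured grain-size distribution."""
    total = sum(masses)
    if total == 0:
        return 0.0
    fine = sum(m for d, m in zip(diameters_um, masses) if d < 10.0)
    return fine / total


# Example: size bins (midpoint diameter in micrometres) and their mass shares
bins_um = [2, 5, 15, 63, 250]
mass_pct = [3, 7, 20, 40, 30]
print(pm10_mass_fraction(bins_um, mass_pct))   # -> 0.10, i.e. 10% of the mass is respirable
```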
Guideline values have been created for exposure, but with unclear rationale; UK guidelines for particulates in air (PM10) are 50 μg/m3 and USA guidelines for exposure to crystalline silica are 50 μg/m3. It is thought that the guidelines on exposure levels could be exceeded for short periods of time without significant health effects on the general population. There have been no documented cases of silicosis developed from exposure to volcanic ash. However, long-term studies necessary to evaluate these effects are lacking. Ingesting ash For surface water sources such as lakes and reservoirs, the volume available for dilution of ionic species leached from ash is generally large. The most abundant components of ash leachates (Ca, Na, Mg, K, Cl, F and SO4) occur naturally at significant concentrations in most surface waters and therefore are not affected greatly by inputs from volcanic ashfall, and are also of low concern in drinking water, with the exception of fluorine. The elements iron, manganese and aluminium are commonly enriched over background levels by volcanic ashfall. These elements may impart a metallic taste to water, and may produce red, brown or black staining of whiteware, but are not considered a health risk. Volcanic ashfalls are not known to have caused problems in water supplies for toxic trace elements such as mercury (Hg) and lead (Pb) which occur at very low levels in ash leachates. Ingesting ash may be harmful to livestock, causing abrasion of the teeth, and in cases of high fluorine content, fluorine poisoning (toxic at levels of >100 μg/g) for grazing animals. It is known from the 1783 eruption of Laki in Iceland that fluorine poisoning occurred in humans and livestock as a result of the chemistry of the ash and gas, which contained high levels of hydrogen fluoride. Following the 1995/96 Mount Ruapehu eruptions in New Zealand, two thousand ewes and lambs died after being affected by fluorosis while grazing on land with only 1–3 mm of ash fall. Symptoms of fluorosis among cattle exposed to ash include brown-yellow to green-black mottling of the teeth, and hypersensitivity to pressure in the legs and back. Ash ingestion may also cause gastrointestinal blockages. Sheep that ingested ash from the 1991 Mount Hudson volcanic eruption in Chile suffered from diarrhoea and weakness. Other effects on livestock Ash accumulating in the back wool of sheep may add significant weight, leading to fatigue and sheep that cannot stand up. Rainfall may make this burden significantly greater by adding weight to the ash. Pieces of wool may fall away and any remaining wool on sheep may be worthless as poor nutrition associated with volcanic eruptions impacts the quality of the fibre. As the usual pastures and plants become covered in volcanic ash during an eruption, some livestock may resort to eating whatever is available, including toxic plants. There are reports of goats and sheep in Chile and Argentina having spontaneous abortions in connection with volcanic eruptions. Infrastructure Electricity Volcanic ash can disrupt electric power supply systems at all levels of power generation, transformation, transmission, and distribution. 
There are four main impacts arising from ash-contamination of apparatus used in the power delivery process: Wet deposits of ash on high voltage insulators can initiate a leakage current (small amount of current flow across the insulator surface) which, if sufficient current is achieved, can cause ‘flashover’ (the unintended electrical discharge around or over the surface of an insulating material). If the resulting short-circuit current is high enough to trip the circuit breaker then disruption of service will occur. Ash-induced flashover across transformer insulation (bushings) can burn, etch or crack the insulation irreparably and can result in the disruption of the power supply. Volcanic ash can erode, pit, and scour metallic apparatus, particularly moving parts such as water and wind turbines and cooling fans on transformers or thermal power plants. The high bulk density of some ash deposits can cause line breakage and damage to steel towers and wooden poles due to ash loading. This is most hazardous when the ash and/or the lines and structures are wet (e.g., by rainfall) and there has been ≥10  mm of ashfall. Fine-grained ash (e.g., <0.5  mm diameter) adheres to lines and structures most readily. Volcanic ash may also load overhanging vegetation, causing it to fall onto lines. Snow and ice accumulation on lines and overhanging vegetation further increases the risk of breakage and or collapse of lines and other hardware. Controlled outages of vulnerable connection points (e.g., substations) or circuits until ash fall has subsided or for de-energised cleaning of equipment. Drinking water supplies Groundwater-fed systems are resilient to impacts from ashfall, although airborne ash can interfere with the operation of well-head pumps. Electricity outages caused by ashfall can also disrupt electrically powered pumps if there is no backup generation. The physical impacts of ashfall can affect the operation of water treatment plants. Ash can block intake structures, cause severe abrasion damage to pump impellers and overload pump motors. Ash can enter filtration systems such as open sand filters both by direct fallout and via intake waters. In most cases, increased maintenance will be required to manage the effects of an ashfall, but there will not be service interruptions. The final step of drinking water treatment is disinfection to ensure that final drinking water is free from infectious microorganisms. As suspended particles (turbidity) can provide a growth substrate for microorganisms and can protect them from disinfection treatment, it is extremely important that the water treatment process achieves a good level of removal of suspended particles. Chlorination may have to be increased to ensure adequate disinfection. Many households, and some small communities, rely on rainwater for their drinking water supplies. Roof-fed systems are highly vulnerable to contamination by ashfall, as they have a large surface area relative to the storage tank volume. In these cases, leaching of chemical contaminants from the ashfall can become a health risk and drinking of water is not recommended. Prior to an ashfall, downpipes should be disconnected so that water in the tank is protected. A further problem is that the surface coating of fresh volcanic ash can be acidic. Unlike most surface waters, rainwater generally has a very low alkalinity (acid-neutralising capacity) and thus ashfall may acidify tank waters. 
This may lead to problems with plumbosolvency, whereby the water is more aggressive towards materials that it comes into contact with. This can be a particular problem if there are lead-head nails or lead flashing used on the roof, and for copper pipes and other metallic plumbing fittings. During ashfall events, large demands are commonly placed on water resources for cleanup and shortages can result. Shortages compromise key services such as firefighting and can lead to a lack of water for hygiene, sanitation and drinking. Municipal authorities need to monitor and manage this water demand carefully, and may need to advise the public to utilise cleanup methods that do not use water (e.g., cleaning with brooms rather than hoses). Wastewater treatment Wastewater networks may sustain damage similar to water supply networks. It is very difficult to exclude ash from the sewerage system. Systems with combined storm water/sewer lines are most at risk. Ash will enter sewer lines where there is inflow/infiltration by stormwater through illegal connections (e.g., from roof downpipes), cross connections, around manhole covers or through holes and cracks in sewer pipes. Ash-laden sewage entering a treatment plant is likely to cause failure of mechanical prescreening equipment such as step screens or rotating screens. Ash that penetrates further into the system will settle and reduce the capacity of biological reactors as well as increasing the volume of sludge and changing its composition. Aircraft The principal damage sustained by aircraft flying into a volcanic ash cloud is abrasion to forward-facing surfaces, such as the windshield and leading edges of the wings, and accumulation of ash into surface openings, including engines. Abrasion of windshields and landing lights will reduce visibility forcing pilots to rely on their instruments. However, some instruments may provide incorrect readings as sensors (e.g., pitot tubes) can become blocked with ash. Ingestion of ash into engines causes abrasion damage to compressor fan blades. The ash erodes sharp blades in the compressor, reducing its efficiency. The ash melts in the combustion chamber to form molten glass. The ash then solidifies on turbine blades, blocking air flow and causing the engine to stall. The composition of most ash is such that its melting temperature is within the operating temperature (>1000 °C) of modern large jet engines. The degree of impact depends upon the concentration of ash in the plume, the length of time the aircraft spends within the plume and the actions taken by the pilots. Critically, melting of ash, particularly volcanic glass, can result in accumulation of resolidified ash on turbine nozzle guide vanes, resulting in compressor stall and complete loss of engine thrust. The standard procedure of the engine control system when it detects a possible stall is to increase power which would exacerbate the problem. It is recommended that pilots reduce engine power and quickly exit the cloud by performing a descending 180° turn. Volcanic gases, which are present within ash clouds, can also cause damage to engines and acrylic windshields, and can persist in the stratosphere as an almost invisible aerosol for prolonged periods of time. Occurrence There are many instances of damage to jet aircraft as a result of an ash encounter. On 24 June 1982, a British Airways Boeing 747-236B (Flight 9) flew through the ash cloud from the eruption of Mount Galunggung, Indonesia resulting in the failure of all four engines. 
The plane descended 24,000 feet (7,300 m) in 16 minutes before the engines restarted, allowing the aircraft to make an emergency landing. On 15 December 1989, a KLM Boeing 747-400 (Flight 867) also lost power to all four engines after flying into an ash cloud from Mount Redoubt, Alaska. After dropping 14,700 feet (4,500 m) in four minutes, the engines were started just 1–2 minutes before impact. Total damage was US$80 million and it took 3 months' work to repair the plane. In the 1990s, a further US$100 million of damage was sustained by commercial aircraft (some in the air, others on the ground) as a consequence of the 1991 eruption of Mount Pinatubo in the Philippines. In April 2010, airspace all over Europe was affected, with many flights cancelled-which was unprecedented-due to the presence of volcanic ash in the upper atmosphere from the eruption of the Icelandic volcano Eyjafjallajökull. On 15 April 2010, the Finnish Air Force halted training flights when damage was found from volcanic dust ingestion by the engines of one of its Boeing F-18 Hornet fighters. In June 2011, there were similar closures of airspace in Chile, Argentina, Brazil, Australia and New Zealand, following the eruption of Puyehue-Cordón Caulle, Chile. Detection Volcanic ash clouds are very difficult to detect from aircraft as no onboard cockpit instruments exist to detect them. However, a new system called Airborne Volcanic Object Infrared Detector (AVOID) has recently been developed by Dr Fred Prata while working at CSIRO Australia and the Norwegian Institute for Air Research, which will allow pilots to detect ash plumes up to 60 km (37 mi) ahead and fly safely around them. The system uses two fast-sampling infrared cameras, mounted on a forward-facing surface, that are tuned to detect volcanic ash. This system can detect ash concentrations of <1 mg/m3 to > 50 mg/m3, giving pilots approximately 7–10 minutes warning. The camera was tested by the easyJet airline company, AIRBUS and Nicarnica Aviation (co-founded by Dr Fred Prata). The results showed the system could work to distances of ~60 km and up to 10,000 ft but not any higher without some significant modifications. In addition, ground and satellite based imagery, radar, and lidar can be used to detect ash clouds. This information is passed between meteorological agencies, volcanic observatories and airline companies through Volcanic Ash Advisory Centers (VAAC). There is one VAAC for each of the nine regions of the world. VAACs can issue advisories describing the current and future extent of the ash cloud. Airport systems Volcanic ash not only affects in-flight operations but can affect ground-based airport operations as well. Small accumulations of ash can reduce visibility, produce slippery runways and taxiways, infiltrate communication and electrical systems, interrupt ground services, damage buildings and parked aircraft. Ash accumulation of more than a few millimeters requires removal before airports can resume full operations. Ash does not disappear (unlike snowfalls) and must be disposed of in a manner that prevents it from being remobilised by wind and aircraft. Land transport Ash may disrupt transportation systems over large areas for hours to days, including roads and vehicles, railways and ports and shipping. Falling ash will reduce the visibility which can make driving difficult and dangerous. In addition, fast travelling cars will stir up ash, generating billowing clouds which perpetuate ongoing visibility hazards. 
Ash accumulations will decrease traction, especially when wet, and cover road markings. Fine-grained ash can infiltrate openings in cars and abrade most surfaces, especially between moving parts. Air and oil filters will become blocked requiring frequent replacement. Rail transport is less vulnerable, with disruptions mainly caused by reduction in visibility. Marine transport can also be impacted by volcanic ash. Ash fall will block air and oil filters and abrade any moving parts if ingested into engines. Navigation will be impacted by a reduction in visibility during ash fall. Vesiculated ash (pumice and scoria) will float on the water surface in ‘pumice rafts’ which can clog water intakes quickly, leading to over heating of machinery. Communications Telecommunication and broadcast networks can be affected by volcanic ash in the following ways: attenuation and reduction of signal strength; damage to equipment; and overloading of network through user demand. Signal attenuation due to volcanic ash is not well documented; however, there have been reports of disrupted communications following the 1969 Surtsey eruption and 1991 Mount Pinatubo eruption. Research by the New Zealand-based Auckland Engineering Lifelines Group determined theoretically that impacts on telecommunications signals from ash would be limited to low frequency services such as satellite communication. Signal interference may also be caused by lightning, as this is frequently generated within volcanic eruption plumes. Telecommunication equipment may become damaged due to direct ash fall. Most modern equipment requires constant cooling from air conditioning units. These are susceptible to blockage by ash which reduces their cooling efficiency. Heavy ash falls may cause telecommunication lines, masts, cables, aerials, antennae dishes and towers to collapse due to ash loading. Moist ash may also cause accelerated corrosion of metal components. Reports from recent eruptions suggest that the largest disruption to communication networks is overloading due to high user demand. This is common of many natural disasters. Computers Computers may be impacted by volcanic ash, with their functionality and usability decreasing during ashfall, but it is unlikely they will completely fail. The most vulnerable components are the mechanical components, such as cooling fans, cd drives, keyboard, mice and touch pads. These components can become jammed with fine grained ash causing them to cease working; however, most can be restored to working order by cleaning with compressed air. Moist ash may cause electrical short circuits within desktop computers; however, will not affect laptop computers. Buildings and structures Damage to buildings and structures can range from complete or partial roof collapse to less catastrophic damage of exterior and internal materials. Impacts depend on the thickness of ash, whether it is wet or dry, the roof and building design and how much ash gets inside a building. The specific weight of ash can vary significantly and rain can increase this by 50–100%. Problems associated with ash loading are similar to that of snow; however, ash is more severe as 1) the load from ash is generally much greater, 2) ash does not melt and 3) ash can clog and damage gutters, especially after rain fall. Impacts for ash loading depend on building design and construction, including roof slope, construction materials, roof span and support system, and age and maintenance of the building. 
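To give a feel for the roof loads discussed here, the sketch below converts an ash thickness into a static load, applying the 50–100% weight increase for rain-soaked ash noted above (the upper bound is used for the wet case). The dry deposit bulk density of 1,000 kg/m3 is an assumed round figure for illustration only; real deposits vary widely, so this is a rough indication, not a structural design calculation.

```python
def ash_roof_load_kpa(thickness_m: float,
                      dry_bulk_density: float = 1000.0,  # kg/m^3, assumed for illustration
                      wet: bool = False,
                      g: float = 9.81) -> float:
    """Approximate static load (kPa) from an ash layer of given thickness.
    Wet ash is modelled with a 100% weight increase, the upper end of the
    50-100% range quoted in the text."""
    density = dry_bulk_density * (2.0 if wet else 1.0)
    return thickness_m * density * g / 1000.0  # Pa -> kPa


# A 100 mm dry ash fall adds roughly 1 kPa; the same layer soaked by rain can
# approach 2 kPa, which is why rainfall after an eruption raises the risk of
# roof collapse.
print(ash_roof_load_kpa(0.10))            # ~0.98 kPa
print(ash_roof_load_kpa(0.10, wet=True))  # ~1.96 kPa
```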
Generally flat roofs are more susceptible to damage and collapse than steeply pitched roofs. Roofs made of smooth materials (sheet metal or glass) are more likely to shed ash than roofs made with rough materials (thatch, asphalt or wood shingles). Roof collapse can lead to widespread injuries and deaths and property damage. For example, the collapse of roofs from ash during the 15 June 1991 Mount Pinatubo eruption killed about 300 people. Environment and agriculture Volcanic ash can have a detrimental impact on the environment which can be difficult to predict due to the large variety of environmental conditions that exist within the ash fall zone. Natural waterways can be impacted in the same way as urban water supply networks. Ash will increase water turbidity which can reduce the amount of light reaching lower depths, which can inhibit growth of submerged aquatic plants and consequently affect species which are dependent on them such as fish and shellfish. High turbidity can also affect the ability of fish gills to absorb dissolved oxygen. Acidification will also occur, which will reduce the pH of the water and impact the fauna and flora living in the environment. Fluoride contamination will occur if the ash contains high concentrations of fluoride. Ash accumulation will also affect pasture, plants and trees which are part of the horticulture and agriculture industries. Thin ash falls (<20 mm) may put livestock off eating, and can inhibit transpiration and photosynthesis and alter growth. There may be an increase in pasture production due to a mulching effect and slight fertilizing effect, such as occurred following the 1980 Mount St. Helens and 1995/96 Mt Ruapehu eruptions. Heavier falls will completely bury pastures and soil leading to death of pasture and sterilization of the soil due to oxygen deprivation. Plant survival is dependent on ash thickness, ash chemistry, compaction of ash, amount of rainfall, duration of burial and the length of plant stalks at the time of ash fall. Young forests (trees <2 years old) are most at risk from ash falls and are likely to be destroyed by ash deposits >100 mm. Ash fall is unlikely to kill mature trees, but ash loading may break large branches during heavy ash falls (>500 mm). Defoliation of trees may also occur, especially if there is a coarse ash component within the ash fall. Land rehabilitation after ash fall may be possible depending on the ash deposit thickness. Rehabilitation treatment may include: direct seeding of deposit; mixing of deposit with buried soil; scraping of ash deposit from land surface; and application of new topsoil over the ash deposit. Interdependence Critical infrastructure and infrastructure services are vital to the functionality of modern society, to provide: medical care, policing, emergency services, and lifelines such as water, wastewater, and power and transportation links. Often critical facilities themselves are dependent on such lifelines for operability, which makes them vulnerable to both direct impacts from a hazard event and indirect effects from lifeline disruption. The impacts on lifelines may also be inter-dependent. The vulnerability of each lifeline may depend on: the type of hazard, the spatial density of its critical linkages, the dependency on critical linkages, susceptibility to damage and speed of service restoration, state of repair or age, and institutional characteristics or ownership. 
The 2010 eruption of Eyjafjallajökull in Iceland highlighted the impacts of volcanic ash fall in modern society and our dependence on the functionality of infrastructure services. During this event, the airline industry suffered business interruption losses of €1.5–2.5 billion from the closure of European airspace for six days in April 2010 and subsequent closures into May 2010. Ash fall from this event is also known to have caused local crop losses in agricultural industries, losses in the tourism industry, destruction of roads and bridges in Iceland (in combination with glacial melt water), and costs associated with emergency response and clean-up. However, across Europe there were further losses associated with travel disruption, the insurance industry, the postal service, and imports and exports across Europe and worldwide. These consequences demonstrate the interdependency and diversity of impacts from a single event. Preparedness, mitigation and management Preparedness for ashfalls should involve sealing buildings, protecting infrastructure and homes, and storing sufficient supplies of food and water to last until the ash fall is over and clean-up can begin. Dust masks can be worn to reduce inhalation of ash and mitigate against any respiratory health effects. Goggles can be worn to protect against eye irritation. At home, staying informed about volcanic activity, and having contingency plans in place for alternative shelter locations, constitute good preparedness for an ash fall event. This can prevent some impacts associated with ash fall, reduce the effects, and increase the human capacity to cope with such events. A few items such as a flashlight, plastic sheeting to protect electronic equipment from ash ingress, and battery-operated radios, are extremely useful during ash fall events. Communication plans should be made beforehand to inform of mitigation actions being undertaken. Spare parts and back-up systems should be in place prior to ash fall events to reduce service disruption and return functionality as quickly as possible. Good preparedness also includes the identification of ash disposal sites, before ash fall occurs, to avoid further movement of ash and to aid clean-up. Some effective techniques for the management of ash have been developed including cleaning methods and cleaning apparatus, and actions to mitigate or limit damage. The latter include covering of openings such as air and water intakes, aircraft engines and windows during ash fall events. Roads may be closed to allow clean-up of ash falls, or speed restrictions may be put in place, in order to prevent motorists from developing motor problems and becoming stranded following an ash fall. To prevent further effects on underground water systems or waste water networks, drains and culverts should be unblocked and ash prevented from entering the system. Ash can be moistened (but not saturated) by sprinkling with water, to prevent remobilisation of ash and to aid clean-up. Prioritisation of clean-up operations for critical facilities and coordination of clean-up efforts also constitute good management practice. It is recommended to evacuate livestock in areas where ashfall may reach 5 cm or more. Volcanic ash soils Volcanic ash's primary use is that of a soil enricher. Once the minerals in ash are washed into the soil by rain or other natural processes, it mixes with the soil and forms an andisol layer. 
This layer is highly rich in nutrients and is very good for agricultural use; the presence of lush forests on volcanic islands is often as a result of trees growing and flourishing in the phosphorus and nitrogen-rich andisol. Volcanic ash can also be used as a replacement for sand. See also References External links What to do during an ash fall event The International Volcanic Health Hazard Network ASHTAM: The Aviation Volcanic Ash Information Site Volcanic Ash Testing Laboratory Collaborative volcano research and risk mitigation Information for understanding, preparing for and managing impacts of volcanic eruptions World Organization of Volcano Observatories Tephra Weather hazards Powders Pollution
Volcanic ash
Physics
8,477
42,630,261
https://en.wikipedia.org/wiki/G-less%20cassette
The G-less cassette transcription assay is a method used in molecular biology to determine promoter strength in vitro. The technique involves quantification of an mRNA product with the use of a plasmid. The G-less cassette is part of a pre-constructed vector, usually containing a multiple cloning site (MCS) upstream of the cassette. For this reason, promoters of interest can be inserted directly into the MCS to ultimately measure the accuracy and efficiency of a promoter in recruiting transcription machinery. Method The G-less cassette is a reporter gene that encodes a transcript lacking guanine nucleotides in the sense strand of the DNA (hence "G-less"). A plasmid containing such a gene is located downstream of a MCS. After the promoter is inserted into the MCS, transcription proceeds with the addition of radiolabeled UTP, CTP, and ATP (as well as non-radiolabeled/cold nucleotides) and continues until the end of the G-less cassette is reached and guanine residues are once again apparent in the sense strand of the DNA. The absence of GTP in vitro results in transcription being prematurely terminated at the first guanine residue in the sense strand following the cassette. Gel electrophoresis is performed on the transcription products and the amount of radioactivity is quantified by autoradiography or phosphorimaging to determine the strength of the promoter of interest. Application The G-less cassette technique is used to determine promoter strength beyond basal levels of transcription (i.e. in the presence of transcription activators or transcription factors). For example, to measure the effects of a TATA box consensus sequence modification in Saccharomyces cerevisiae in the presence of TFIID, G-less cassettes were implemented to measure the relative strength of each promoter. Advantages The G-less assay can be performed on a circular plasmid to measure levels of transcription. A circular plasmid provides a more efficient template in many systems when compared to other assays such as runoff transcription, in which a cleaved end is required. This method generates radiolabeled transcripts very efficiently because it bypasses the unnecessary process of performing other indirect mRNA product measurements. The promoter is inserted into a circular plasmid containing the G-less cassette, which will generate a transcript of a certain length that omits random and nonspecific transcription throughout the plasmid. Most crude systems, such as HeLa nuclear extracts, are used because they contain low amounts of contaminating GTP that lead to background transcription and may occasionally cause random transcription to read through the G-less cassette. References Gene expression Molecular biology
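The defining behaviour of the assay described above, transcription proceeding only until the first guanine must be incorporated, can be illustrated with a short sketch that predicts the expected transcript length from the sense-strand sequence downstream of the transcription start site. The function below is a conceptual illustration by the editor, not part of any published analysis pipeline, and it ignores practical issues such as GTP contamination and read-through mentioned in the text.

```python
def expected_gless_transcript_length(sense_strand_seq: str) -> int:
    """Predicted transcript length (nucleotides) when GTP is omitted in vitro:
    RNA polymerase copies the sense-strand sequence until the first G, i.e.
    the first position at which a GTP would have to be incorporated.
    If the supplied sequence contains no G, the full length is returned."""
    seq = sense_strand_seq.upper()
    first_g = seq.find("G")
    return len(seq) if first_g == -1 else first_g


# Hypothetical sense strand downstream of the start site: a 12-nt G-less
# stretch followed by the first guanine, so a 12-nt transcript is expected.
print(expected_gless_transcript_length("ATCCATTACTCAGATT"))  # -> 12
```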
G-less cassette
Chemistry,Biology
567
623,154
https://en.wikipedia.org/wiki/Assessment%20of%20kidney%20function
Assessment of kidney function occurs in different ways, using the presence of symptoms and signs, as well as measurements using urine tests, blood tests, and medical imaging. Functions of a healthy kidney include maintaining a person's fluid balance, maintaining an acid-base balance; regulating electrolytes sodium, and other electrolytes; clearing toxins; regulating blood pressure; and regulating hormones, such as erythropoietin; and activation of vitamin D. The kidney is also involved in maintaining blood pH balance. Description The functions of the kidney include maintenance of acid-base balance; regulation of fluid balance; regulation of sodium, potassium, and other electrolytes; clearance of toxins; absorption of glucose, amino acids, and other small molecules; regulation of blood pressure; production of various hormones, such as erythropoietin; and activation of vitamin D. The Glomerular filtration rate (GFR) is regarded as the best overall measure of the kidney's ability to carry out these numerous functions. An estimate of the GFR is used clinically to determine the degree of kidney impairment and to track the progression of the disease. The GFR, however, does not reveal the source of the kidney disease. This is accomplished by urinalysis, measurement of urine protein excretion, kidney imaging, and, if necessary, kidney biopsy. Much of renal physiology is studied at the level of the nephron the smallest functional unit of the kidney. Each nephron begins with a filtration component that filters the blood entering the kidney. This filtrate then flows along the length of the nephron, which is a tubular structure lined by a single layer of specialized cells and surrounded by capillaries. The major functions of these lining cells are the reabsorption of water and small molecules from the filtrate into the blood, and the secretion of wastes from the blood into the urine. Proper function of the kidney requires that it receives and adequately filters blood. This is performed at the microscopic level by many hundreds of thousands of filtration units called renal corpuscles, each of which is composed of a glomerulus and a Bowman's capsule. A global assessment of renal function is often ascertained by estimating the rate of filtration, called the glomerular filtration rate (GFR). Clinical assessment Clinical assessment can be used to assess the function of the kidneys. This is because a person with abnormally functioning kidneys may have symptoms that develop. For example, a person with chronic kidney disease may develop oedema due to failure of the kidneys to regulate water balance. They may develop evidence of chronic kidney disease, that can be used to assess its severity, for example high blood pressure, osteoporosis or anaemia. If the kidneys are unable to excrete urea, a person may develop a widespread itch or confusion. Urine tests Part of the assessment of kidney function includes the measurement of urine and its contents. Abnormal kidney function may cause too much or too little urine to be produced. The ability of the kidneys to filter protein is often measured, as urine albumin or urine protein levels, measured either at a single instance or, because of variation throughout the day, as 24-hour urine tests. Blood tests Blood tests are also used to assess kidney function. These include tests that are intended to directly measure the function of the kidneys, as well as tests that assess the function of the kidneys by looking for evidence of problems associated with abnormal function. 
One of the measures of kidney function is the glomerular filtration rate (GFR). Other tests that can assess the function of the kidneys include assessment of electrolyte levels such as potassium and phosphate, assessment of acid-base status by the measurement of bicarbonate levels from a vein, and assessment of the full blood count for anaemia. Glomerular filtration rate The glomerular filtration rate (GFR) describes the volume of fluid filtered from the renal (kidney) glomerular capillaries into the Bowman's capsule per unit time. Creatinine clearance (CCr) is the volume of blood plasma that is cleared of creatinine per unit time and is a useful measure for approximating the GFR. Creatinine clearance exceeds GFR due to creatinine secretion, which can be blocked by cimetidine. Both GFR and CCr may be accurately calculated by comparative measurements of substances in the blood and urine, or estimated by formulas using just a blood test result (eGFR and eCCr). The results of these tests are used to assess the excretory function of the kidneys. Staging of chronic kidney disease is based on categories of GFR as well as albuminuria and cause of kidney disease. Central to the physiologic maintenance of GFR is the differential basal tone of the afferent and efferent arterioles. In other words, the filtration rate is dependent on the difference between the higher blood pressure created by vasoconstriction of the input or afferent arteriole versus the lower blood pressure created by lesser vasoconstriction of the output or efferent arteriole. GFR is equal to the renal clearance ratio when any solute is freely filtered and is neither reabsorbed nor secreted by the kidneys. The rate therefore measured is the quantity of the substance in the urine that originated from a calculable volume of blood. Relating this principle to the clearance equation GFR = (urine concentration × urine flow) / plasma concentration: for the substance used, the product of urine concentration and urine flow equals the mass of substance excreted during the time that urine has been collected. This mass equals the mass filtered at the glomerulus as nothing is added or removed in the nephron. Dividing this mass by the plasma concentration gives the volume of plasma which the mass must have originally come from, and thus the volume of plasma fluid that has entered Bowman's capsule within the aforementioned period of time. The GFR is typically recorded in units of volume per time, e.g., milliliters per minute (mL/min). Compare to filtration fraction. There are several different techniques used to calculate or estimate the glomerular filtration rate (GFR or eGFR). The clearance equation above only applies for GFR calculation when the GFR is equal to the clearance rate. The normal range of GFR, adjusted for body surface area, is 100–130 (average 125) (mL/min)/(1.73 m2) in men and 90–120 (mL/min)/(1.73 m2) in women younger than the age of 40. In children, GFR measured by inulin clearance is 110 (mL/min)/(1.73 m2) until 2 years of age in both sexes, and then it progressively decreases. After age 40, GFR decreases progressively with age, by 0.4–1.2 mL/min per year. Estimated GFR (eGFR) is now recommended by clinical practice guidelines and regulatory agencies for routine evaluation of GFR whereas measured GFR (mGFR) is recommended as a confirmatory test when more accurate assessment is required. 
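The clearance relation described above can be turned into a small worked example. The sketch below computes a creatinine clearance from a timed urine collection and optionally normalises it to the conventional 1.73 m2 body surface area; the function names and argument layout are the editor's own, and the calculation assumes the urine and plasma concentrations are expressed in the same units.

```python
def clearance_ml_per_min(urine_conc: float,
                         plasma_conc: float,
                         urine_volume_ml: float,
                         collection_minutes: float) -> float:
    """Renal clearance C = (U x V) / P, where U and P are the urine and plasma
    concentrations of the marker (same units) and V is the urine flow rate."""
    urine_flow_ml_per_min = urine_volume_ml / collection_minutes
    return urine_conc * urine_flow_ml_per_min / plasma_conc


def normalise_to_bsa(clearance: float, body_surface_area_m2: float) -> float:
    """Scale a clearance to the conventional 1.73 m^2 reference body surface area."""
    return clearance * 1.73 / body_surface_area_m2


# Example: 24-hour collection (1440 min) of 1600 mL, urine creatinine 90 mg/dL,
# plasma creatinine 1.0 mg/dL, in a person with a body surface area of 1.9 m^2.
ccr = clearance_ml_per_min(90.0, 1.0, 1600.0, 1440.0)
print(round(ccr), round(normalise_to_bsa(ccr, 1.9)))  # -> 100 91
```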
Medical imaging The kidney function can also be assessed with medical imaging. Some forms of imaging, such as kidney ultrasound or CT scans, may assess kidney function by indicating chronic disease that can impact function, for example by showing a small or shrivelled kidney. Other tests, such as nuclear medicine tests, directly assess the function of the kidney by measuring the perfusion and excretion of radioactive substances through the kidneys. Kidney function in disease A decreased renal function can be caused by many types of kidney disease. Upon presentation of decreased renal function, it is recommended to perform a history and physical examination, as well as performing a renal ultrasound and a urinalysis. The most relevant items in the history are medications, edema, nocturia, gross hematuria, family history of kidney disease, diabetes and polyuria. The most important items in a physical examination are signs of vasculitis, lupus erythematosus, diabetes, endocarditis and hypertension. A urinalysis is helpful even when not showing any pathology, as this finding suggests an extrarenal etiology. Proteinuria and/or urinary sediment usually indicates the presence of glomerular disease. Hematuria may be caused by glomerular disease or by a disease along the urinary tract. The most relevant assessments in a renal ultrasound are renal sizes, echogenicity and any signs of hydronephrosis. Renal enlargement usually indicates diabetic nephropathy, focal segmental glomerular sclerosis or myeloma. Renal atrophy suggests longstanding chronic renal disease. Chronic kidney disease stages Risk factors for kidney disease include diabetes, high blood pressure, family history, older age, ethnic group and smoking. For most patients, a GFR over 60 (mL/min)/(1.73 m2) is adequate. However, a significant decline of the GFR from a previous test result can be an early indicator of kidney disease requiring medical intervention. The sooner kidney dysfunction is diagnosed and treated, the greater the odds of preserving remaining nephrons and preventing the need for dialysis. The severity of chronic kidney disease (CKD) is described by six stages; the most severe three are defined by the MDRD-eGFR value, and the first three also depend on whether there is other evidence of kidney disease (e.g., proteinuria): 0) Normal kidney function – GFR above 90 (mL/min)/(1.73 m2) and no proteinuria 1) CKD1 – GFR above 90 (mL/min)/(1.73 m2) with evidence of kidney damage 2) CKD2 (mild) – GFR of 60 to 89 (mL/min)/(1.73 m2) with evidence of kidney damage 3) CKD3 (moderate) – GFR of 30 to 59 (mL/min)/(1.73 m2) 4) CKD4 (severe) – GFR of 15 to 29 (mL/min)/(1.73 m2) 5) CKD5 (kidney failure) – GFR less than 15 (mL/min)/(1.73 m2) Some people add CKD5D for those stage 5 patients requiring dialysis; many patients in CKD5 are not yet on dialysis. Note: others add a "T" to patients who have had a transplant regardless of stage. Not all clinicians agree with the above classification, suggesting that it may mislabel patients with mildly reduced kidney function, especially the elderly, as having a disease. In 2009, Kidney Disease: Improving Global Outcomes (KDIGO) held a conference on CKD: Definition, Classification and Prognosis to address these controversies, gathering data on CKD prognosis to refine the definition and staging of CKD. 
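The staging bands listed above translate directly into a simple lookup. The sketch below maps an eGFR value (in (mL/min)/(1.73 m2)) to the stage labels used in this section; it is illustrative only and is not a substitute for the full clinical criteria, which also weigh albuminuria, the cause of disease and other evidence of kidney damage.

```python
def ckd_stage(egfr: float, kidney_damage: bool = False) -> str:
    """Map an eGFR ((mL/min)/(1.73 m^2)) to the stage bands listed above.
    For eGFR >= 60, classification as CKD also requires other evidence of
    kidney damage (e.g. proteinuria), reflected by `kidney_damage`."""
    if egfr >= 90:
        return "CKD1 (kidney damage with normal GFR)" if kidney_damage else "Normal kidney function"
    if egfr >= 60:
        return "CKD2 (mild)" if kidney_damage else "Mildly reduced GFR without other evidence of damage (not CKD)"
    if egfr >= 30:
        return "CKD3 (moderate)"
    if egfr >= 15:
        return "CKD4 (severe)"
    return "CKD5 (kidney failure)"


print(ckd_stage(75, kidney_damage=True))   # -> CKD2 (mild)
print(ckd_stage(42))                       # -> CKD3 (moderate)
```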
See also References External links Online calculators Online GFR Calculator Schwartz formula for estimating pediatric renal function Creatinine clearance calculator (Cockcroft-Gault Equation)- by MDCalc MDRD GFR Equation GFR calculator using Cystatin C Reference links National Kidney Disease Education Program website. Includes professional references and GFR calculators eGFR at Lab Tests Online Renal physiology Blood tests
Assessment of kidney function
Chemistry
2,383
11,421,363
https://en.wikipedia.org/wiki/Poxvirus%20AX%20element%20late%20mRNA%20cis-regulatory%20element
The Poxvirus AX element late mRNA family represents a cis-regulatory element present at the 3' end of poxvirus late ATI mRNA and is known as the AX element. The AX element is involved in directing the efficient production and orientation-dependent formation of late RNAs. It is likely that this element directs the endonucleolytic cleavage of the transcript. It has been shown that the F17R late mRNA transcript which is also cleaved is also likely to share a common factor in their mechanism despite a lack of any obvious similarity in its cis-regulatory RNA element. See also Potato virus X cis-acting regulatory element References External links Cis-regulatory RNA elements Poxviruses
Poxvirus AX element late mRNA cis-regulatory element
Chemistry
144
3,499,647
https://en.wikipedia.org/wiki/AD%20Leonis
AD Leonis (Gliese 388) is a red dwarf star. It is located relatively near the Sun, at a distance of , in the constellation Leo. AD Leonis is a main sequence star with a spectral classification of M3.5V. It is a flare star that undergoes random increases in luminosity. Properties AD Leonis is an M-type star with a spectral type M3.5eV, indicating it is a main sequence star that displays emission lines in its spectrum. At a trigonometric distance of , it has an apparent visual magnitude of 9.43. It has about 39–42% of the Sun's mass — above the mass at which a star is fully convective — and 39% of the Sun's radius. The projected rotation of this star is only 2.4 km/s, but it completes a rotation once every 2.227 days, indicating a relatively pole-on inclination of about . It is a relatively young star with an estimated age of 25–300 million years, and is considered a member of the young disk population. The variable nature of this star was first observed in 1949 by Katherine C. Gordon and Gerald E. Kron at Lick Observatory. AD Leonis is one of the most active flare stars known, and the emissions from the flares have been detected across the electromagnetic spectrum as high as the X-ray band. The net magnetic flux at the surface is about 3 kG. Besides star spots, about 73% of the surface is covered by magnetically active regions. Examination of the corona in X-rays shows compact loop structures that span up to 30% of the size of the star. The average temperature of the corona is around 6.39 MK. This star is orbiting through the Milky Way galaxy with an eccentricity of 0.028. This carries the star as close as 8.442 kpc from the galactic core, and as far as 8.926 kpc. The orbital inclination carries it as far as 0.121 kpc from the plane of the galaxy. In 2021, a superflare on AD Leo was observed simultaneously in X-rays by XMM-Newton and in optical light by TESS. Search for planets During a 1943 proper motion study by Dirk Reuyl at McCormick Observatory, AD Leonis was suspected of having a companion. However, a 1968 study by Sarah L. Lippincott at Sproul Observatory was unable to confirm this result. A 1997 search with a near-infrared speckle interferometer failed to detect a companion orbiting 1–10 AU from the star. In 2001, an optical coronagraph was used to examine the star, but no companion was found. As of 1981, there was no sign of variability in its radial velocity, which would otherwise indicate the presence of an unseen companion. In 2018, AD Leonis was found to have radial velocity variations with a period of 2.23 days. The star was found to rotate with the same period, suggesting that the stellar rotation may be the cause of the radial velocity signal, but it was thought possible that the signal was caused by a planet of in a spin-orbit resonance with the star. This was listed as a candidate planet in a 2019 preprint. However, subsequent studies starting in 2020 refuted the planet hypothesis, finding stellar activity to be the most likely explanation for the radial velocity variations. A 2022 study confidently ruled out planets more massive than orbiting at the stellar rotation period, as well as planets more than with periods up to 14 years. See also List of nearest stars and brown dwarfs Gamma Leonis, located just 5' from AD Leonis References External links Leo (constellation) Local Bubble M-type main-sequence stars 0388 Leonis, AD BD+20 2465 Flare stars
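The near pole-on orientation mentioned above follows from combining the projected rotation speed with the radius and rotation period quoted in this article. The back-of-envelope sketch below shows the arithmetic; the result is only a rough estimate, since it inherits the uncertainties in the adopted radius and v sin i, and published inclination values may differ.

```python
import math

R_SUN_KM = 695_700.0                 # nominal solar radius in km

radius_km = 0.39 * R_SUN_KM          # ~39% of the Sun's radius (from the text)
period_s = 2.227 * 86_400.0          # rotation period of 2.227 days
v_sin_i_km_s = 2.4                   # projected rotational velocity (from the text)

# Equatorial rotation speed implied by the radius and period.
v_eq_km_s = 2.0 * math.pi * radius_km / period_s

# sin(i) = (v sin i) / v_eq; clamp to 1 to guard against rounding.
inclination_deg = math.degrees(math.asin(min(1.0, v_sin_i_km_s / v_eq_km_s)))

print(f"v_eq ~ {v_eq_km_s:.1f} km/s, inclination ~ {inclination_deg:.0f} degrees")
# -> roughly 9 km/s and an inclination of order 15-20 degrees, i.e. close to pole-on
```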
AD Leonis
Astronomy
771
67,626,578
https://en.wikipedia.org/wiki/QX39
QX39 (Compound A, CA39) is a synthetic compound that activates chaperone-mediated autophagy (CMA) by increasing the expression of the lysosomal receptor for this pathway, LAMP2A lysosomes. It showed potent activity in vitro but has poor pharmacokinetic properties and was not suitable for animal research. Subsequent research led to the development of CA77.1, a CMA activator suitable for in vivo use. References Oxazines Chloroarenes
QX39
Chemistry
110
2,726,086
https://en.wikipedia.org/wiki/Uncleftish%20Beholding
"Uncleftish Beholding" is a short text by Poul Anderson, first published in the Mid-December 1989 issue of the magazine Analog Science Fiction and Fact (with no indication of its fictional or factual status) and included in his anthology All One Universe (1996). It is designed to illustrate what English might look like without its large number of words derived from languages such as French, Greek, and Latin, especially with regard to the proportion of scientific words with origins in those languages. Written as a demonstration of linguistic purism in English, the work explains atomic theory using Germanic words almost exclusively and coining new words when necessary; many of these new words have cognates in modern German, an important scientific language in its own right. The title phrase uncleftish beholding calques "atomic theory." To illustrate, the text begins: It goes on to define firststuffs (chemical elements), such as waterstuff (hydrogen), sourstuff (oxygen), and ymirstuff (uranium), as well as bulkbits (molecules), bindings (compounds), and several other terms important to uncleftish worldken (atomic science). and are the modern German words for hydrogen and oxygen, and in Dutch the modern equivalents are and . Sunstuff refers to helium, which derives from , the Ancient Greek word for 'sun'. Ymirstuff references Ymir, a giant in Norse mythology similar to Uranus in Greek mythology. Glossary The vocabulary used in "Uncleftish Beholding" does not completely derive from Anglo-Saxon. Around, from Old French (Modern French ), completely displaced Old English (modern English (now obsolete), cognate to German and Latin ) and left no "native" English word for this concept. The text also contains the French-derived words rest, ordinary and sort. The text gained increased exposure and popularity after being circulated around the Internet, and has served as inspiration for some inventors of Germanic English conlangs. Douglas Hofstadter, in discussing the piece in his book , jocularly refers to the use of only Germanic roots for scientific pieces as "Ander-Saxon." See also Anglish Thing Explainer References External links English language Atomic physics 1989 documents Works by Poul Anderson Linguistic purism Books written in fictional dialects
Uncleftish Beholding
Physics,Chemistry
486
66,004,740
https://en.wikipedia.org/wiki/Energy%20Regulators%20Association%20of%20East%20Africa
The Energy Regulators Association of East Africa (EREA) is a non-profit organisation mandated to spearhead harmonisation of energy regulatory frameworks, sustainable capacity building and information sharing among the List of energy regulatory bodies in the East African Community. Its key objective is to promote the independence of national regulators and support the establishment of a robust East African energy union. Foundation and mission On 28 May 2008, four national energy regulatory authorities voluntarily signed a "Memorandum of Understanding" for the establishment of the Energy Regulators Association of East Africa (EREA). Subsequently, it was recognized by the 8th Sectoral Council on Energy of East African Community (EAC) as a forum of energy regulators in the EAC on 21 June 2013. It was registered by the United Republic of Tanzania on 23 May 2019 into a company limited by guarantees and without share capital under the Companies Act, 2002, and the Memorandum of Association. The EREA represents seven members – the national energy regulators from the EAC Member States. The EREA works closely with the EAC, African Union Eastern Africa Power Pool (EAPP)-Independent Regulatory Board (IRB), National Association of Regulatory Utility Commissioners and The Regional Association of Energy Regulators for Eastern and Southern Africa (RAERESA). EREA's seat is in Arusha, Tanzania. Objectives and functions EREA is composed of nine Key Result Areas and the objectives are summarised as follows: Facilitating the harmonization of NRI’s policies, tariff structures and legislation in the Member States; Sustainable Capacity Building through the establishment of the Energy Regulation Centre of Excellence (ERCE) to support regional member institutions contribute to the advancement of research on regulatory issues Promoting regional co-operation in the planning and development of an integrated energy market and infrastructure. Promoting independent regulation in the East African Community. EREA was established to also, amongst other objectives; strengthen economic, commercial, social, cultural, political, technological and other ties for fast balanced and sustainable development within the East African region. Members and Governance EREA members include Energy and Water Utilities Regulatory Authority (EWURA) of Tanzania, Energy Petroleum Regulatory Authority (EPRA) of Kenya, Zanzibar Utility Regulatory Authority (ZURA) of Zanzibar, and Petroleum Authority of Uganda (PAU) of Uganda. Others include Electricity Regulatory Authority (ERA) of Uganda, Rwanda Utilities Regulatory Authority (RURA) of Rwanda and Autorité de Régulation des secteurs de l’Eau potable et de l’Energie (AREEN) of Burundi. EREA is also supporting the Government of the Republic of South Sudan to establish an independent regulatory authority which will eventually be integrated within the regional regulatory association. The Association has four organs and applies the principle of rotating leadership of the organs among its members. These organs are: (a) The General Assembly (G.A.) – the supreme organ, is currently chaired by AREEN-Burundi. The GA is the meeting of chairpersons and chief executive officers of the national regulatory authorities in EAC. (b) The Executive Council – is currently chaired by EWURA-Tanzania. This a meeting of Chief Executive Officers/Director Generals of the national regulatory authorities in EAC. (c) The Secretariat - is headed by the Executive Secretary-Dr. 
Geoffrey Aori Mabea, appointed for a three-year term; the position rotates among the countries of the East African Community. (d) Three Specialized Portfolio Committees for handling Economic, Legal, and Technical matters of the Association. EAC Electricity Markets The East African Community's Electricity Regulatory Index (ERI) The African Development Bank carried out a third Electricity Regulatory Index for Africa to assess the three main pillars of regulation: the Regulatory Governance Index (RGI), the Regulatory Substance Index (RSI) and the Regulatory Outcome Index (ROI). In the report, the East African Community member states show a significant improvement in the key regulatory indices. According to the African Development Bank, Uganda has maintained the top position for two consecutive years. End User Electricity Tariff Electricity Statistics for EAC See also Common Market for Eastern and Southern Africa (COMESA) Energy Regulators Regional Association (ERRA) Eastern Africa Power Pool (EAPP-IRB) Energy Regulation Centre of Excellence (ERCE) Regional Association of Energy Regulators for Eastern and Southern Africa RAERESA References External links EREA website EREA Magazine website Energy Regulation Centre of Excellence Organizations established in 2008 Energy markets International energy organizations East African Community Energy regulatory authorities Non-profit organizations based in Africa
Energy Regulators Association of East Africa
Engineering
922
505,425
https://en.wikipedia.org/wiki/DNA%20polymerase%20I
DNA polymerase I (or Pol I) is an enzyme that participates in the process of prokaryotic DNA replication. Discovered by Arthur Kornberg in 1956, it was the first known DNA polymerase (and the first known of any kind of polymerase). It was initially characterized in E. coli and is ubiquitous in prokaryotes. In E. coli and many other bacteria, the gene that encodes Pol I is known as polA. The E. coli Pol I enzyme is composed of 928 amino acids, and is an example of a processive enzyme — it can sequentially catalyze multiple polymerisation steps without releasing the single-stranded template. The physiological function of Pol I is mainly to support repair of damaged DNA, but it also contributes to connecting Okazaki fragments by deleting RNA primers and replacing the ribonucleotides with DNA. Discovery In 1956, Arthur Kornberg and colleagues discovered Pol I by using Escherichia coli (E. coli) extracts to develop a DNA synthesis assay. The scientists added 14C-labeled thymidine so that a radioactive polymer of DNA, not RNA, could be retrieved. To initiate the purification of DNA polymerase, the researchers added streptomycin sulfate to the E. coli extract. This separated the extract into a nucleic acid-free supernatant (S-fraction) and nucleic acid-containing precipitate (P-fraction). The P-fraction also contained Pol I and heat-stable factors essential for the DNA synthesis reactions. These factors were identified as nucleoside triphosphates, the building blocks of nucleic acids. The S-fraction contained multiple deoxynucleoside kinases. In 1959, the Nobel Prize in Physiology or Medicine was awarded to Arthur Kornberg and Severo Ochoa "for their discovery of the mechanisms involved in the biological synthesis of Ribonucleic acid and Deoxyribonucleic Acid." Structure and function General structure Pol I mainly functions in the repair of damaged DNA. Structurally, Pol I is a member of the alpha/beta protein superfamily, which encompasses proteins in which α-helices and β-strands occur in irregular sequences. E. coli DNA Pol I consists of multiple domains with three distinct enzymatic activities. Three domains, often referred to as thumb, finger and palm domain work together to sustain DNA polymerase activity. A fourth domain next to the palm domain contains an exonuclease active site that removes incorrectly incorporated nucleotides in a 3' to 5' direction in a process known as proofreading. A fifth domain contains another exonuclease active site that removes DNA or RNA in a 5' to 3' direction and is essential for RNA primer removal during DNA replication or DNA during DNA repair processes. E. coli bacteria produces 5 different DNA polymerases: DNA Pol I, DNA Pol II, DNA Pol III, DNA Pol IV, and DNA Pol V. Structural and functional similarity to other polymerases In DNA replication, the leading DNA strand is continuously extended in the direction of replication fork movement, whereas the DNA lagging strand runs discontinuously in the opposite direction as Okazaki fragments. DNA polymerases also cannot initiate DNA chains so they must be initiated by short RNA or DNA segments known as primers. In order for DNA polymerization to take place, two requirements must be met. First of all, all DNA polymerases must have both a template strand and a primer strand. Unlike RNA, DNA polymerases cannot synthesize DNA from a template strand. Synthesis must be initiated by a short RNA segment, known as RNA primer, synthesized by Primase in the 5' to 3' direction. 
DNA synthesis then occurs by the addition of a dNTP to the 3' hydroxyl group at the end of the preexisting DNA strand or RNA primer. Secondly, DNA polymerases can only add new nucleotides to the preexisting strand through hydrogen bonding. Since all DNA polymerases have a similar structure, they all share a two-metal ion-catalyzed polymerase mechanism. One of the metal ions activates the primer 3' hydroxyl group, which then attacks the primary 5' phosphate of the dNTP. The second metal ion will stabilize the leaving oxygen's negative charge, and subsequently chelates the two exiting phosphate groups. The X-ray crystal structures of polymerase domains of DNA polymerases are described in analogy to human right hands. All DNA polymerases contain three domains. The first domain, which is known as the "fingers domain", interacts with the dNTP and the paired template base. The "fingers domain" also interacts with the template to position it correctly at the active site. Known as the "palm domain", the second domain catalyses the reaction of the transfer of the phosphoryl group. Lastly, the third domain, which is known as the "thumb domain", interacts with double stranded DNA. The exonuclease domain contains its own catalytic site and removes mispaired bases. Among the seven different DNA polymerase families, the "palm domain" is conserved in five of these families. The "finger domain" and "thumb domain" are not consistent in each family due to varying secondary structure elements from different sequences. Function Pol I possesses four enzymatic activities: A 5'→3' (forward) DNA-dependent DNA polymerase activity, requiring a 3' primer site and a template strand A 3'→5' (reverse) exonuclease activity that mediates proofreading A 5'→3' (forward) exonuclease activity mediating nick translation during DNA repair. A 5'→3' (forward) RNA-dependent DNA polymerase activity. Pol I operates on RNA templates with considerably lower efficiency (0.1–0.4%) than it does DNA templates, and this activity is probably of only limited biological significance. In order to determine whether Pol I was primarily used for DNA replication or in the repair of DNA damage, an experiment was conducted with a deficient Pol I mutant strain of E. coli. The mutant strain that lacked Pol I was isolated and treated with a mutagen. The mutant strain developed bacterial colonies that continued to grow normally and that also lacked Pol I. This confirmed that Pol I was not required for DNA replication. However, the mutant strain also displayed characteristics which involved extreme sensitivity to certain factors that damaged DNA, like UV light. Thus, this reaffirmed that Pol I was more likely to be involved in repairing DNA damage rather than DNA replication. Mechanism In the replication process, RNase H removes the RNA primer (created by primase) from the lagging strand and then polymerase I fills in the necessary nucleotides between the Okazaki fragments (see DNA replication) in a 5'→3' direction, proofreading for mistakes as it goes. It is a template-dependent enzyme—it only adds nucleotides that correctly base pair with an existing DNA strand acting as a template. It is crucial that these nucleotides are in the proper orientation and geometry to base pair with the DNA template strand so that DNA ligase can join the various fragments together into a continuous strand of DNA. Studies of polymerase I have confirmed that different dNTPs can bind to the same active site on polymerase I. 
Polymerase I is able to actively discriminate between the different dNTPs only after it undergoes a conformational change. Once this change has occurred, Pol I checks for proper geometry and proper alignment of the base pair formed between the bound dNTP and the matching base on the template strand. Correctly paired A=T and G≡C base pairs are the only ones whose geometry fits the active site. Even so, roughly one in every 10⁴ to 10⁵ nucleotides is added incorrectly; Pol I can excise such errors using its 3'→5' proofreading exonuclease activity. Despite its early characterization, it quickly became apparent that polymerase I was not the enzyme responsible for most DNA synthesis—DNA replication in E. coli proceeds at approximately 1,000 nucleotides/second, while the rate of base pair synthesis by polymerase I averages only between 10 and 20 nucleotides/second. Moreover, its cellular abundance of approximately 400 molecules per cell did not correlate with the fact that there are typically only two replication forks in E. coli. Additionally, it is insufficiently processive to copy an entire genome, as it falls off after incorporating only 25–50 nucleotides. Its limited role in replication was confirmed when, in 1969, John Cairns isolated a viable polymerase I mutant that lacked the polymerase activity. Cairns' lab assistant, Paula De Lucia, created thousands of cell-free extracts from E. coli colonies and assayed them for DNA-polymerase activity. The 3,478th clone contained the polA mutant, which was named by Cairns to credit "Paula" [De Lucia]. It was not until the discovery of DNA polymerase III that the main replicative DNA polymerase was finally identified. Research applications DNA polymerase I obtained from E. coli is used extensively for molecular biology research. However, the 5'→3' exonuclease activity makes it unsuitable for many applications. This undesirable enzymatic activity can simply be removed from the holoenzyme to leave a useful molecule called the Klenow fragment, widely used in molecular biology. In fact, the Klenow fragment was used during the first protocols of polymerase chain reaction (PCR) amplification until Thermus aquaticus, the source of the heat-tolerant Taq polymerase, was discovered in 1976. Exposure of DNA polymerase I to the protease subtilisin cleaves the molecule into a smaller fragment, which retains only the DNA polymerase and proofreading activities. See also DNA polymerase II DNA polymerase III DNA polymerase V References EC 2.7.7 DNA replication Enzymes 1956 in biology
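The rate and fidelity figures quoted above can be turned into a rough back-of-the-envelope calculation of why Pol I is ill-suited to be the main replicase. The sketch below uses only the numbers given in the article plus a standard textbook figure for the E. coli genome size (~4.6 Mb) and the assumption of two replication forks moving bidirectionally from a single origin; those two inputs are assumptions, not claims made in the article.

```python
# Back-of-the-envelope numbers for E. coli DNA replication, using the rates and
# error frequencies quoted above. Genome size and fork count are assumed
# textbook values, not taken from the article.

GENOME_BP = 4.6e6                 # assumed E. coli genome size (base pairs)
FORKS = 2                         # assumed bidirectional replication forks
POL3_RATE = 1000                  # nucleotides/second (quoted above)
POL1_RATE = 15                    # nucleotides/second (midpoint of the 10-20 range above)
ERR_LOW, ERR_HIGH = 1e-5, 1e-4    # 1 per 10^5 to 1 per 10^4 misincorporations (quoted above)

bp_per_fork = GENOME_BP / FORKS

print(f"At the Pol III rate: {bp_per_fork / POL3_RATE / 60:.0f} min per genome")
print(f"At the Pol I rate:   {bp_per_fork / POL1_RATE / 3600:.1f} h per genome")
print(f"Raw misincorporations per genome copy (before repair): "
      f"{GENOME_BP * ERR_LOW:.0f}-{GENOME_BP * ERR_HIGH:.0f}")
```

Under these assumptions the replication-fork enzyme finishes in tens of minutes while Pol I would need nearly two days, and the quoted raw error rate would correspond to tens to hundreds of misincorporations per genome copy before proofreading and repair.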
DNA polymerase I
Biology
2,108
12,145,959
https://en.wikipedia.org/wiki/Machine-readable%20dictionary
Machine-readable dictionary (MRD) is a dictionary stored as machine-readable data instead of being printed on paper. It is an electronic dictionary and lexical database. A machine-readable dictionary is a dictionary in an electronic form that can be loaded in a database and can be queried via application software. It may be a single language explanatory dictionary or a multi-language dictionary to support translations between two or more languages or a combination of both. Translation software between multiple languages usually apply bidirectional dictionaries. An MRD may be a dictionary with a proprietary structure that is queried by dedicated software (for example online via internet) or it can be a dictionary that has an open structure and is available for loading in computer databases and thus can be used via various software applications. Conventional dictionaries contain a lemma with various descriptions. A machine-readable dictionary may have additional capabilities and is therefore sometimes called a smart dictionary. An example of a smart dictionary is the Open Source Gellish English dictionary. The term dictionary is also used to refer to an electronic vocabulary or lexicon as used for example in spelling checkers. If dictionaries are arranged in a subtype-supertype hierarchy of concepts (or terms) then it is called a taxonomy. If it also contains other relations between the concepts, then it is called an ontology. Search engines may use either a vocabulary, a taxonomy or an ontology to optimise the search results. Specialised electronic dictionaries are morphological dictionaries or syntactic dictionaries. The term MRD is often contrasted with NLP dictionary, in the sense that an MRD is the electronic form of a dictionary which was printed before on paper. Although being both used by programs, in contrast, the term NLP dictionary is preferred when the dictionary was built from scratch with NLP in mind. An ISO standard for MRD and NLP is able to represent both structures and is called Lexical Markup Framework. History The first widely distributed MRDs were the Merriam-Webster Seventh Collegiate (W7) and the Merriam-Webster New Pocket Dictionary (MPD). Both were produced by a government-funded project at System Development Corporation under the direction of John Olney. They were manually keyboarded as no typesetting tapes of either book were available. Originally each was distributed on multiple reels of magnetic tape as card images with each separate word of each definition on a separate punch card with numerous special codes indicating the details of its usage in the printed dictionary. Olney outlined a grand plan for the analysis of the definitions in the dictionary, but his project expired before the analysis could be carried out. Robert Amsler at the University of Texas at Austin resumed the analysis and completed a taxonomic description of the Pocket Dictionary under National Science Foundation funding, however his project expired before the taxonomic data could be distributed. Roy Byrd et al. at IBM Yorktown Heights resumed analysis of the Webster's Seventh Collegiate following Amsler's work. Finally, in the 1980s starting with initial support from Bellcore and later funded by various U.S. 
federal agencies, including NSF, ARDA, DARPA, DTO, and REFLEX, George Armitage Miller and Christiane Fellbaum at Princeton University completed the creation and wide distribution of a dictionary and its taxonomy in the WordNet project, which today stands as the most widely distributed computational lexicology resource. References Computational linguistics Dictionaries by type Lexicography
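The article describes an MRD as a lexical database that can be queried by application software, and distinguishes a plain dictionary from a taxonomy (a subtype–supertype hierarchy). The toy sketch below illustrates that distinction; all entries, field names and the hypernym chain are invented for illustration and are far simpler than a real MRD or WordNet-style resource.

```python
# A toy machine-readable dictionary: lemmas mapped to senses, plus a tiny
# subtype -> supertype taxonomy. All entries are invented for illustration.

mrd = {
    "dog":    {"pos": "noun", "gloss": "a domesticated canid kept as a pet or worker"},
    "canid":  {"pos": "noun", "gloss": "a mammal of the family Canidae"},
    "mammal": {"pos": "noun", "gloss": "a warm-blooded vertebrate with hair or fur"},
}

# Taxonomy: each term points to its supertype (hypernym).
taxonomy = {"dog": "canid", "canid": "mammal", "mammal": "animal"}

def lookup(lemma):
    """Flat dictionary query: return the stored entry, if any."""
    return mrd.get(lemma)

def hypernym_chain(term):
    """Taxonomy query: walk subtype -> supertype links until they run out."""
    chain = [term]
    while term in taxonomy:
        term = taxonomy[term]
        chain.append(term)
    return chain

print(lookup("dog"))
print(" -> ".join(hypernym_chain("dog")))   # dog -> canid -> mammal -> animal
```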
Machine-readable dictionary
Technology
710
7,723,926
https://en.wikipedia.org/wiki/Biseriate
Biseriate is a botanical term applied to both plantae and fungi, meaning 'arranged in two rows'. The term can refer to any number of structures found within these kingdoms, from arrangement of leaves to the placement of spores. It becomes useful in taxonomy for placing a species within a certain genus, family, or even order, based upon morphology, when making an initial choice or when DNA evidence is inconclusive. References https://web.archive.org/web/20061017163011/http://glossary.gardenweb.com/glossary/biseriate.html Botany
Biseriate
Biology
129
8,876,908
https://en.wikipedia.org/wiki/Natural%20Selection%202
Natural Selection 2 is a multiplayer video game which combines first-person shooter and real-time strategy rules. It is set in a science fiction universe in which a human team fights an alien team for control of resources and territory in large and elaborate indoor facilities. It is the sequel to Natural Selection. Gameplay Like its predecessor, Natural Selection 2 features two opposing teams of players, Kharaa (Aliens) and Frontiersmen (Marines), seeking to destroy the other's respective base. While the two teams have the same essential goals, gameplay for each team varies drastically. Marines largely rely on guns and other pieces of technology to annihilate the alien presence. Aliens, however, rely primarily on melee attacks. Certain alien lifeforms can walk on walls, fly, and even dash forward in the blink of an eye. Players also have a currency system which they use to buy better equipment or evolve into higher lifeforms. The primary feature that differentiates Natural Selection 2 from others in the FPS genre is its strategy component. Both teams may have one player act as a commander, who is given a top-down view of the map and plays the game in a Real-time strategy perspective. The commander can place buildings, research upgrades and has a number of abilities to support their team (dropping health and ammo packs, using certain support units to aid in combat or building, or erecting walls to block enemy movement), at the cost of resources. Buildings in Natural Selection are designed to aid players in their offensive, defensive, stealth and speed capabilities. For a team to achieve victory, they must eliminate all of the opposing team command structures (Hive of the Kharaa, Command Station of the Frontiersmen). The Aliens also have the option of eliminating all infantry portals (Marine spawn structure) and any alive Marines; however because eggs (Alien spawn structure) automatically spawn around hives, Marines cannot do the same. Development A game engine originally dubbed "Evolution" was developed specifically for the game. It has since been renamed "Spark". The game engine utilizes the Lua scripting language for game logic, allowing for easy expansion of the game's mechanics. Physics support is provided by several third-party libraries. The game was officially announced in October 2006. It was to be developed by the Natural Selection creator's newly founded company, Unknown Worlds. Charlie ‘Flayra’ Cleveland will continue his work on the game and Cory Strader (concept artist from Natural Selection) will also be contributing concept artwork. On December 1, 2006 the first major announcement of a possible feature was announced, named 'Dynamic Infestation'. A video containing an example of Dynamic Infestation was posted on the Unknown Worlds development blog. On August 31, 2007, podcasts by Max McGuire and Charlie Cleveland were released. These audio updates have since been released at irregular intervals. They discuss the development process, funding and focus, and serve as a basis for interviews with other names in the industry. On April 6, 2008, Unknown Worlds established an office. On July 10, 2008, Unknown Worlds announced their move from the Source Engine to an in-house developed engine dubbed "Spark". Concept artwork was often shown on the Unknown Worlds development blog. In October 2009, Unknown Worlds confirmed plans to support Mac OS X, Linux platforms and perhaps console. 
However, in February 2010, Max McGuire announced that OS X, Linux, and Xbox support would not be available at the game's initial launch. It was also revealed that Natural Selection amassed over $200,000 in pre-orders and $500,000 through angel investors. On April 9, 2010 a standalone Engine Build became available which included an external map creation utility. On May 7, the Engine Build started using Steam as its primary distribution and update source. On 13 July 2010, Unknown Worlds Entertainment announced that a private alpha was to be released through Steam for all Special Edition pre-order customers on 26 July 2010. It will be updated throughout the game development and eventually become the beta release. The full release version of the game will subsequently follow. The alpha test started on July 26, 2010, with those who pre-ordered the game's "Special Edition" able to activate it via Steam. The game was released on Steam on October 31, 2012. On November 18, 2010, Unknown Worlds Entertainment updated the status from private alpha to closed beta, allowing anyone who had previously pre-ordered either edition of the game, plus the first 10,000 pre-orders after the announcement was made, into the beta. This was primarily to bring in more capital. On February 14, 2023, Unknown Worlds Entertainment announced that the active development of Natural Selection 2 has ended. Post-release A few years after Natural Selection 2 was released, Unknown Worlds turned over development to a small team made up of community members. In November 2015, UWE took over development once more, with eight members of the community development team being hired, most working part-time. The initial announcement led to controversy in the community, with one community developer stating they would no longer be working on the game, believing he and others were poorly treated in the community. Reception The game sold 144,000 copies in its first week, earning over $1 million. As of February 26, 2013 the game has sold 300,000 copies. The review aggregator Metacritic shows generally favorable reviews, with a Metascore of 80. References External links 2012 video games First-person strategy video games Indie games Lua (programming language)-scripted video games Multiplayer online games Multiplayer video games Science fiction video games Video game sequels Video games developed in the United States Windows games Linux games Video games with Steam Workshop support Asymmetrical multiplayer video games Unknown Worlds Entertainment games Cancelled Linux games
Natural Selection 2
Physics
1,166
268,344
https://en.wikipedia.org/wiki/Efficiency
Efficiency is the often measurable ability to avoid making mistakes or wasting materials, energy, efforts, money, and time while performing a task. In a more general sense, it is the ability to do things well, successfully, and without waste. In more mathematical or scientific terms, it signifies the level of performance that uses the least amount of inputs to achieve the highest amount of output. It often specifically comprises the capability of a specific application of effort to produce a specific outcome with a minimum amount or quantity of waste, expense, or unnecessary effort. Efficiency refers to very different inputs and outputs in different fields and industries. In 2019, the European Commission said: "Resource efficiency means using the Earth's limited resources in a sustainable manner while minimising impacts on the environment. It allows us to create more with less and to deliver greater value with less input." Writer Deborah Stone notes that efficiency is "not a goal in itself. It is not something we want for its own sake, but rather because it helps us attain more of the things we value." Efficiency and effectiveness Efficiency is very often confused with effectiveness. In general, efficiency is a measurable concept, quantitatively determined by the ratio of useful output to total useful input. Effectiveness is the simpler concept of being able to achieve a desired result, which can be expressed quantitatively but does not usually require more complicated mathematics than addition. Efficiency can often be expressed as a percentage of the result that could ideally be expected, for example if no energy were lost due to friction or other causes, in which case 100% of fuel or other input would be used to produce the desired result. In some cases efficiency can be indirectly quantified with a non-percentage value, e.g. specific impulse. A common but confusing way of distinguishing between efficiency and effectiveness is the saying "Efficiency is doing things right, while effectiveness is doing the right things". This saying indirectly emphasizes that the selection of objectives of a production process is just as important as the quality of that process. The saying, though popular in business, obscures the more common sense of "effectiveness", which would suggest the mnemonic "Efficiency is doing things right; effectiveness is getting things done". This makes it clear that effectiveness, for example large production numbers, can also be achieved through inefficient processes if, for example, workers are willing or used to working longer hours or with greater physical effort than in other companies or countries, or if they can be forced to do so. Similarly, a company can achieve effectiveness, for example large production numbers, through inefficient processes if it can afford to use more energy per product, for example if energy prices or labor costs or both are lower than for its competitors. Inefficiency Inefficiency is the absence of efficiency. Kinds of inefficiency include: Allocative inefficiency refers to a situation in which the distribution of resources between alternatives does not fit with consumer taste (perceptions of costs and benefits). 
For example, a company may have the lowest costs in "productive" terms, but the result may be inefficient in allocative terms because the "true" or social cost exceeds the price that consumers are willing to pay for an extra unit of the product. This is true, for example, if the firm produces pollution (see also external cost). Consumers would prefer that the firm and its competitors produce less of the product and charge a higher price, to internalize the external cost. Distributive inefficiency refers to the inefficient distribution of income and wealth within a society. Decreasing marginal utilities of wealth, in theory, suggests that more egalitarian distributions of wealth are more efficient than inegalitarian distributions. Distributive inefficiency is often associated with economic inequality. Economic inefficiency refers to a situation where "we could be doing a better job," i.e., attaining our goals at lower cost. It is the opposite of economic efficiency. In the latter case, there is no way to do a better job, given the available resources and technology. Sometimes, this type of economic efficiency is referred to as the Koopmans efficiency. Keynesian inefficiency might be defined as incomplete use of resources (labor, capital goods, natural resources, etc.) because of inadequate aggregate demand. We are not attaining potential output, while suffering from cyclical unemployment. We could do a better job if we applied deficit spending or expansionary monetary policy. Pareto inefficiency is a situation in which one person can not be made better off without making anyone else worse off. In practice, this criterion is difficult to apply in a constantly changing world, so many emphasize Kaldor-Hicks efficiency and inefficiency: a situation is inefficient if someone can be made better off even after compensating those made worse off, regardless of whether the compensation actually occurs. Productive inefficiency says that we could produce the given output at a lower cost—or could produce more output for a given cost. For example, a company that is inefficient will have higher operating costs and will be at a competitive disadvantage (or have lower profits than other firms in the market). See Sickles and Zelenyuk (2019, Chapter 3) for more extensive discussions. Resource-market inefficiency refers to barriers that prevent full adjustment of resource markets, so that resources are either unused or misused. For example, structural unemployment results from barriers of mobility in labor markets which prevent workers from moving to places and occupations where there are job vacancies. Thus, unemployed workers can co-exist with unfilled job vacancies. X-inefficiency refers to inefficiency in the "black box" of production, connecting inputs to outputs. This type of inefficiency says that we could be organizing people or production processes more effectively. Often problems of "morale" or "bureaucratic inertia" cause X-inefficiency. Productive inefficiency, resource-market inefficiency, and X-inefficiency might be analyzed using data envelopment analysis and similar methods. Mathematical expression Efficiency is often measured as the ratio of useful output to total input, which can be expressed with the mathematical formula r=P/C, where P is the amount of useful output ("product") produced per the amount C ("cost") of resources consumed. 
This may correspond to a percentage if products and consumables are quantified in compatible units, and if consumables are transformed into products via a conservative process. For example, in the analysis of the energy conversion efficiency of heat engines in thermodynamics, the product P may be the amount of useful work output, while the consumable C is the amount of high-temperature heat input. Due to the conservation of energy, P can never be greater than C, and so the efficiency r is never greater than 100% (and in fact must be even less at finite temperatures). In science and technology In physics Useful work per quantity of energy, mechanical advantage over ideal mechanical advantage, often denoted by the Greek lowercase letter η (Eta): Electrical efficiency Energy conversion efficiency Mechanical efficiency Thermal efficiency, ratio of work done to thermal energy consumed Efficient energy use, the objective of maximising efficiency In thermodynamics: Energy conversion efficiency, measure of second law thermodynamic loss Radiation efficiency, ratio of radiated power to power absorbed at the terminals of an antenna Volumetric efficiency, in internal combustion engine design for the RAF Lift-to-drag ratio Faraday efficiency, electrolysis Quantum efficiency, a measure of sensitivity of a photosensitive device Grating efficiency, a generalization of the reflectance of a mirror, extended to a diffraction grating In economics Productivity improving technologies Economic efficiency, the extent to which waste or other undesirable features are avoided Market efficiency, the extent to which a given market resembles the ideal of an efficient market Pareto efficiency, a state of its being impossible to make one individual better off, without making any other individual worse off Kaldor-Hicks efficiency, a less stringent version of Pareto efficiency Allocative efficiency, the optimal distribution of goods Efficiency wages, paying workers more than the market rate for increased productivity Business efficiency, revenues relative to expenses, etc. Efficiency Movement, of the Progressive Era (1890–1932), advocated efficiency in the economy, society and government In other sciences In computing: Algorithmic efficiency, optimizing the speed and memory requirements of a computer program. A non-functional requirement (criterion for quality) in systems design and systems architecture which says something about the resource consumption for given load Efficiency factor, in data communications Storage efficiency, effectiveness of computer data storage Efficiency (statistics), a measure of desirability of an estimator Material efficiency, compares material requirements between construction projects or physical processes Administrative efficiency, measuring transparency within public authorities and simplicity of rules and procedures for citizens and businesses In biology: Photosynthetic efficiency Ecological efficiency See also Jevons paradox References Economic efficiency Heat transfer Engineering concepts Waste management Waste of resources
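The ratio r = P/C from the "Mathematical expression" section above is easy to illustrate numerically with the heat-engine case it mentions. The figures below (heat input, work output, operating temperatures) are invented for illustration; the Carnot bound 1 − Tc/Th is standard thermodynamics rather than something stated in the article.

```python
# Illustration of r = P / C for a heat engine, with invented numbers.
# P = useful work out, C = high-temperature heat in; conservation of energy
# keeps r below 1 (100%), and the Carnot limit bounds it further.

Q_in = 1000.0     # J of high-temperature heat supplied (invented figure)
W_out = 350.0     # J of useful work delivered (invented figure)

efficiency = W_out / Q_in            # r = P / C
print(f"Efficiency r = {efficiency:.0%}")          # 35%

T_hot, T_cold = 600.0, 300.0         # K, invented operating temperatures
carnot_limit = 1 - T_cold / T_hot
print(f"Carnot limit = {carnot_limit:.0%}")        # 50%
```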
Efficiency
Physics,Chemistry,Engineering
1,915
8,045,950
https://en.wikipedia.org/wiki/Group%20of%20pictures
In video coding, a group of pictures, or GOP structure, specifies the order in which intra- and inter-frames are arranged. The GOP is a collection of successive pictures within a coded video stream. Each coded video stream consists of successive GOPs, from which the visible frames are generated. Encountering a new GOP in a compressed video stream means that the decoder doesn't need any previous frames in order to decode the next ones, and allows fast seeking through the video. Elements A GOP can contain the following picture types: I frame (intra coded picture, also sometimes incorrectly called keyframe) – a picture that is coded independently of all other pictures. Each GOP begins (in decoding order) with this type of frame. IDR frame (Instantaneous Decoder Refresh): I frame with a marking indicating that no subsequent P frames have references reaching further back than this I frame. Through the use of these IDR frames, closed GOPs are formed that can’t refer to frames outside the GOP. IDR are the true keyframes together with clean random access frames (recovery points). P frame (predictive coded picture) – contains motion-compensated difference information relative to previously decoded pictures. In older designs such as MPEG-1, H.262/MPEG-2 and H.263, each P frame can only reference one picture, and that picture must precede the P frame in display order as well as in decoding order, and the reference must be an I or P frame. These constraints do not apply in the newer standards H.264/MPEG-4 AVC and HEVC. B frame (bipredictive coded picture) – contains motion-compensated difference information relative to previously decoded pictures. In older designs such as MPEG-1 and H.262/MPEG-2, each B frame can only reference two frames, the one which precedes the B frame in display order and the one which follows, and all referenced pictures must be I or P frames. These constraints do not apply in newer standards H.264/MPEG-4 AVC and HEVC. Sometimes, a codec will use unidirectional B-frames. This is a P-frame that, while it does not use data from a future frame, no other frames depend on it. A fundamental property of B-frames is that they can be dropped without affecting the correct decoding of other frames. D frame (DC direct coded picture) – serves as a fast-access representation of a frame for loss robustness or fast-forward. D frames are only used in MPEG-1 video. An I frame indicates the beginning of a GOP. Afterwards, several P and B frames follow. In older designs, the allowed ordering and referencing structure is relatively constrained. The I frames contain the full image and do not require any additional information to reconstruct them. Typically, encoders use GOP structures that cause each I frame to be a "clean random access point," such that decoding can start cleanly on an I frame and any errors within the GOP structure are corrected after processing a correct I frame. In the newer designs found in H.264/MPEG-4 AVC and HEVC, encoders have much more flexibility about referencing structures. They can use the same referencing structures as were previously used in older designs, or they can use more pictures as references and they can use more flexible ordering of the coding order relative to the display order. They are also allowed to use B frames as references when coding other (B or P) frames. This extra flexibility can improve compression efficiency, but it can cause propagation of errors if some data becomes lost or corrupted. 
One popular structure for use with the newer designs is the use of a hierarchy of B frames. Hierarchical B frames can provide very good compression efficiency and can also limit the propagation of errors, since the hierarchy can ensure that the number of pictures affected by any data corruption problem is strictly limited. Generally, the more I frames the video stream has, the more editable it is. However, having more I frames substantially increases the bit rate needed to code the video. Structure The GOP structure is often referred to by two numbers, for example M = 3, N = 12. The first number, M, tells the distance between two anchor frames (I or P), also known as the length of a "mini-GOP". The second number, N, tells the distance between two full images (I-frames): it is the GOP size. Instead of the M parameter, the maximal count of B-frames between two consecutive anchor frames can be used; this is the approach used by ffmpeg. Examples: For M = 3, N = 12, the GOP structure is IBBPBBPBBPBB. There are 2 B-frames between two consecutive anchor frames. For the sequence IBBBBPBBBBPBBBB, the GOP size is N = 15 and the anchor distance is M = 5. There are 4 B-frames between two consecutive anchor frames. The GOP structure does not need to stay fixed throughout encoding. Varying the GOP length to insert an I-frame on a scene change is a well-known technique. Newer techniques also vary the structure based on the amount of motion in the video. Additional concepts With H.264 and later designs which allow highly flexible reference structures, a B frame in one GOP is able to reference a frame in a different GOP, in particular even before the I frame, which makes the I frame non-IDR (not a keyframe). A GOP that contains any such outward-referencing frame is known as an "open GOP". The opposite is a self-contained GOP, known as a "closed GOP". In coding order, a GOP can begin with a B-frame, but it cannot end with one. An open GOP starts with a B-frame, and it is a little more efficient because starting with an I-frame means that an extra P-frame must be added to the end (a GOP cannot end with a B-frame). See also Video compression picture types Key frame References MPEG Video compression
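For the simple, fixed GOP layouts of the older codecs described above, the display-order frame pattern follows mechanically from M (anchor distance) and N (GOP size). The sketch below shows that rule; the function name and interface are illustrative only, and the flexible referencing structures of H.264/HEVC are deliberately not modelled.

```python
# Display-order frame pattern for a simple, fixed GOP (MPEG-1/2 style):
# M = anchor (I/P) distance, N = GOP length. Illustrative sketch only;
# hierarchical B-frame structures in newer codecs are not captured.

def gop_pattern(m: int, n: int) -> str:
    frames = []
    for i in range(n):
        if i == 0:
            frames.append("I")          # each GOP starts with an I frame
        elif i % m == 0:
            frames.append("P")          # anchor frames every m pictures
        else:
            frames.append("B")          # bidirectionally predicted frames in between
    return "".join(frames)

print(gop_pattern(3, 12))   # IBBPBBPBBPBB    (2 B-frames between anchors)
print(gop_pattern(5, 15))   # IBBBBPBBBBPBBBB (4 B-frames between anchors)
```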
Group of pictures
Technology
1,233
6,641,341
https://en.wikipedia.org/wiki/MRC%20%28file%20format%29
MRC is a file format that has become an industry standard in cryo-electron microscopy (cryoEM) and electron tomography (ET), where the result of the technique is a three-dimensional grid of voxels each with a value corresponding to electron density or electric potential. It was developed by the MRC (Medical Research Council, UK) Laboratory of Molecular Biology. In 2014, the format was standardised. The format specification is available on the CCP-EM website. The MRC format is supported by many of the software packages listed in b:Software Tools For Molecular Microscopy. See also CCP4 (file format) References External links MRC specification Computational chemistry Structural biology Computer file formats
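Since the article notes that the format was standardised in 2014, a minimal sketch of inspecting an MRC file is possible, assuming the MRC2014 layout: a 1024-byte header whose first four 32-bit integers give the grid dimensions NX, NY, NZ and the data MODE (mode 2 being 32-bit float). This is a sketch under that assumption, not a full reader: real code should honour the machine-stamp byte order, check the 'MAP ' identifier, and in practice a dedicated library is usually preferable.

```python
import struct

# Peek at an MRC file's dimensions, assuming the MRC2014 layout described above:
# a 1024-byte header whose first four little-endian int32 fields are NX, NY, NZ
# (grid size) and MODE (voxel data type). Sketch only; no endianness handling.

MODE_NAMES = {0: "int8", 1: "int16", 2: "float32", 6: "uint16"}

def read_mrc_dims(path):
    with open(path, "rb") as f:
        header = f.read(1024)
    nx, ny, nz, mode = struct.unpack_from("<4i", header, 0)
    return (nx, ny, nz), MODE_NAMES.get(mode, f"mode {mode}")

# Example usage (hypothetical file name):
# dims, dtype = read_mrc_dims("map.mrc")
# print(dims, dtype)   # e.g. (256, 256, 256) float32
```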
MRC (file format)
Chemistry,Biology
145
68,608,152
https://en.wikipedia.org/wiki/Soviet%20computing%20technology%20smuggling
Soviet computing technology smuggling, both attempted and actual, was a response to CoCom (Coordinating Committee for Multilateral Export Controls) restrictions on technology transfer. History Mainframe successes Initially the Soviet Union focused on mainframe computing technology, particularly the IBM 360 and 370. Between 1967 and 1972 much effort went into reverse engineering what they "acquired." Their first IBM-like machine was based on a 360/40 smuggled in via Poland. The second Soviet-built machine was from a 370/145. Their focus subsequently shifted to super-minicomputers. Failure in 1983 to import a VAX-11/782 did not stop their efforts. "Reverse-engineered and copied Apple IIe parts" brought microcomputers to the Soviet Union; they also brought computer viruses. IBM PC compatible computers were also smuggled in. Production of Iron Curtain mainframes, at one point, was estimated to be 180 per year. VAX failures The failure of the Soviets to acquire a VAX-11/782, a dual-processor variation of the VAX-11/780 (the original VAX), unraveled much of their smuggling system. U.S. Secretary of Defense Caspar Weinberger made a public display of the system, about which The Washington Post headlined "Seized Computer Put on Display" in late 1983. The computer had been exported from the United States to South Africa, from which it was to be clandestinely reshipped; it was seized "moments before its scheduled shipment to the Soviet Union." Weinberger stated at a news conference that the VAX was intended to assist production of "vastly more accurate . . . and more destructive weapons." Like the 360/40, the smuggling process involved multiple shipments. The 360 had been disassembled and placed in a large number of suitcases; a smaller number of "huge containers of parts" held the 782. The latter's route involved transhipping, more than half of it via Sweden and the rest via West Germany. A U.S. official described potential "military uses, including the operation of a missile guidance system." The exact configuration had still not been released more than a year later: AP News, which noted that the smuggling operation was spread across ten countries, cited $1.1 million as the system's price. The Los Angeles Times described the same system's price as $1.5 million. The New York Times wrote "between $1.5 and $2 million." Another VAX-smuggling attempt, five years later, involved a VAX 8800; this too ended in failure. This time also, the computer involved was a dual-processor system. American government wiretapping revealed that some of the parties involved even considered settling for a VAX 8700, a uni-processor system. See also Toshiba–Kongsberg scandal References Further reading Technobandits, by Linda Melvern, David Hebditch, and Nick Anning History of computing hardware History of international relations
Soviet computing technology smuggling
Technology
617
11,145,154
https://en.wikipedia.org/wiki/Photodisintegration
Photodisintegration (also called phototransmutation, or a photonuclear reaction) is a nuclear process in which an atomic nucleus absorbs a high-energy gamma ray, enters an excited state, and immediately decays by emitting a subatomic particle. The incoming gamma ray effectively knocks one or more neutrons, protons, or an alpha particle out of the nucleus. The reactions are called (γ,n), (γ,p), and (γ,α), respectively. Photodisintegration is endothermic (energy absorbing) for atomic nuclei lighter than iron and sometimes exothermic (energy releasing) for atomic nuclei heavier than iron. Photodisintegration is responsible for the nucleosynthesis of at least some heavy, proton-rich elements via the p-process in supernovae of type Ib, Ic, or II. This causes the iron to further fuse into the heavier elements. Photodisintegration of deuterium A photon carrying 2.22 MeV or more energy can photodisintegrate an atom of deuterium: ²H + γ → ¹H + n James Chadwick and Maurice Goldhaber used this reaction to measure the proton–neutron mass difference. This experiment proved that a neutron is not a bound state of a proton and an electron, as had been proposed by Ernest Rutherford. Photodisintegration of beryllium A photon carrying 1.67 MeV or more energy can photodisintegrate an atom of beryllium-9 (100% of natural beryllium, its only stable isotope): ⁹Be + γ → 2 ⁴He + n Antimony-124 is assembled with beryllium to make laboratory neutron sources and startup neutron sources. Antimony-124 (half-life 60.20 days) emits β− and 1.690 MeV gamma rays (also 0.602 MeV and 9 fainter emissions from 0.645 to 2.090 MeV), yielding stable tellurium-124: ¹²⁴Sb → ¹²⁴Te + β⁻ + γ Gamma rays from antimony-124 split beryllium-9 into two alpha particles and a neutron with an average kinetic energy of 24 keV (a so-called intermediate neutron in terms of energy). Other isotopes have higher thresholds for photoneutron production, as high as 18.72 MeV for carbon-12. Hypernovae In explosions of very large stars (250 or more solar masses), photodisintegration is a major factor in the supernova event. As the star reaches the end of its life, it reaches temperatures and pressures where photodisintegration's energy-absorbing effects temporarily reduce pressure and temperature within the star's core. This causes the core to start to collapse as energy is taken away by photodisintegration, and the collapsing core leads to the formation of a black hole. A portion of mass escapes in the form of relativistic jets, which could have "sprayed" the first metals into the universe. Photodisintegration in lightning Terrestrial lightning produces high-speed electrons that create bursts of gamma rays as bremsstrahlung. The energy of these rays is sometimes sufficient to start photonuclear reactions resulting in emitted neutrons. One such reaction, ¹⁴N(γ,n)¹³N, is the only natural process other than those induced by cosmic rays in which ¹³N is produced on Earth. The unstable isotopes remaining from the reaction may subsequently emit positrons by β+ decay. Photofission Photofission is a similar but distinct process, in which a nucleus, after absorbing a gamma ray, undergoes nuclear fission (splits into two fragments of nearly equal mass). See also Pair-instability supernova Silicon-burning process References Nuclear physics Nucleosynthesis Neutron sources
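The 2.22 MeV deuterium threshold quoted above can be checked directly from the rest masses of the particles involved. The sketch below uses standard reference values for those masses (they are not given in the article) and neglects the small nuclear-recoil correction.

```python
# Rough check of the deuterium photodisintegration threshold quoted above,
# using standard rest-mass values in MeV/c^2 (reference figures, not from the
# article) and neglecting the small recoil correction.

m_p = 938.272    # proton
m_n = 939.565    # neutron
m_d = 1875.613   # deuteron

threshold = m_p + m_n - m_d   # minimum photon energy for 2H + γ → 1H + n
print(f"E_min ≈ {threshold:.2f} MeV")   # ≈ 2.22 MeV, the deuteron binding energy
```

The result, about 2.22 MeV, is simply the deuteron binding energy: any photon below that energy cannot split the nucleus.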
Photodisintegration
Physics,Chemistry
879
1,218,135
https://en.wikipedia.org/wiki/Alfred%20Stowell%20Jones
Lieutenant Colonel Alfred Stowell Jones, VC (24 January 1832 – 29 May 1920) was an English recipient of the Victoria Cross, the highest and most prestigious award for gallantry in the face of the enemy that can be awarded to British and Commonwealth forces. Early life Jones was the son of the Archdeacon John Jones. He was educated at Liverpool College and Sandhurst and entered the 9th Lancers in 1852. Details on the Victoria Cross Jones was 25 years old, and a lieutenant in the 9th Lancers, British Army during the Indian Mutiny when the following deed on 8 June 1857 at Delhi, India took place for which he was awarded the VC: Later life Throughout the siege of Delhi he served as DAQMG to the cavalry and was mentioned in despatches three times and promoted Captain and Brevet-Major. After graduating from Staff College in 1861 he served on the Staff at the Cape 1861–67. He retired in 1872 with the rank of lieutenant colonel. After retiring from the military Jones became an environmental engineer and won a prize from the Royal Agricultural Society for best managed sewage farm. He lived at Ridge Cottage, Finchampstead, Berkshire. He died there, aged 88, on 29 May 1920 and is buried in the churchyard of St James in the village. Family Among his children were: Owen Jones, a lieutenant in the Royal Naval Reserve, married in 1902 Lillian Stevenson. Percy Jones, married in 1902 Olive Mary Edgar Clark, daughter of Major-General Edgar Clark, of the Bengal Staff Corps. References Sources Location of grave and VC medal (Berkshire) 9th Queen's Royal Lancers officers 1832 births 1920 deaths People educated at Liverpool College People from Finchampstead British recipients of the Victoria Cross Indian Rebellion of 1857 recipients of the Victoria Cross Graduates of the Royal Military College, Sandhurst Military personnel from Liverpool 19th-century British Army personnel 18th Royal Hussars officers Somerset Light Infantry officers Environmental engineers English civil engineers British Army recipients of the Victoria Cross Burials in Berkshire
Alfred Stowell Jones
Chemistry,Engineering
398
1,958,097
https://en.wikipedia.org/wiki/Fuzzy%20electronics
Fuzzy electronics is an electronic technology that uses fuzzy logic, instead of the two-state Boolean logic more commonly used in digital electronics. Fuzzy electronics is fuzzy logic implemented on dedicated hardware, as compared with fuzzy logic implemented in software running on a conventional processor. Fuzzy electronics has a wide range of applications, including control systems and artificial intelligence. History The first fuzzy electronic circuit was built by Takeshi Yamakawa et al. in 1980 using discrete bipolar transistors. The first industrial fuzzy application was in a cement kiln in Denmark in 1982. The first VLSI fuzzy electronics were developed by Masaki Togai and Hiroyuki Watanabe in 1984. In 1987, Yamakawa built the first analog fuzzy controller. The first digital fuzzy processors, by Togai, came in 1988 (Russo, pp. 2–6). In the early 1990s, the first fuzzy logic chips were presented to the public. In 1991, two companies, Omron and NEC, announced the development of dedicated fuzzy electronic hardware. Two years later, the Japanese Omron Corporation demonstrated a working fuzzy chip at a technical fair. See also Defuzzification Fuzzy set Fuzzy set operations References Bibliography Abraham Kandel, Gideon Langholz (eds), Fuzzy Hardware: Architectures and Applications, Springer Science & Business Media, 2012. Further reading Yamakawa, T.; Inoue, T.; Ueno, F.; Shirai, Y., "Implementation of Fuzzy Logic hardware systems – Three fundamental arithmetic circuits", Transactions of the Institute of Electronics and Communications Engineers, vol. 63, 1980, pp. 720–721. Togai, M.; Watanabe, H., "A VLSI implementation of a fuzzy inference engine: towards an expert system on a chip", Information Sciences, vol. 38, iss. 2, April 1986, pp. 147–163. External links Applications of Fuzzy logic in electronics Fuzzy logic Digital electronics Electronic engineering
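The article contrasts dedicated fuzzy hardware with fuzzy logic run in software on a conventional processor. The sketch below is a minimal software version of the kind of inference such hardware accelerates: triangular membership functions, rule firing strengths, and a weighted-average (centroid-style) defuzzification. The membership sets, rules and variable names are invented for illustration and do not correspond to any particular fuzzy chip.

```python
# Minimal software sketch of the fuzzy inference that dedicated fuzzy hardware
# accelerates. All sets, rules and numbers are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(temperature):
    # Fuzzify the input against two invented sets.
    cold = tri(temperature, 0, 10, 20)
    hot = tri(temperature, 15, 30, 45)

    # Two invented rules: IF cold THEN heater high (80); IF hot THEN heater low (10).
    rules = [(cold, 80.0), (hot, 10.0)]

    # Weighted-average (centroid-like) defuzzification over the rule outputs.
    total_weight = sum(w for w, _ in rules)
    if total_weight == 0:
        return 0.0
    return sum(w * out for w, out in rules) / total_weight

print(infer(12.0))   # fully "cold" input -> heater setting 80.0
print(infer(18.0))   # partly cold, partly hot -> a blended setting
```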
Fuzzy electronics
Technology,Engineering
400
662,088
https://en.wikipedia.org/wiki/Mathematical%20and%20theoretical%20biology
Mathematical and theoretical biology, or biomathematics, is a branch of biology which employs theoretical analysis, mathematical models and abstractions of living organisms to investigate the principles that govern the structure, development and behavior of such systems, as opposed to experimental biology, which deals with the conduct of experiments to test scientific theories. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side. Theoretical biology focuses more on the development of theoretical principles for biology while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged. Mathematical biology aims at the mathematical representation and modeling of biological processes, using techniques and tools of applied mathematics. It can be useful in both theoretical and practical research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. This requires precise mathematical models. Because of the complexity of living systems, theoretical biology employs several fields of mathematics, and has contributed to the development of new techniques. History Early history Mathematics has been used in biology as early as the 13th century, when Fibonacci used the famous Fibonacci series to describe a growing population of rabbits. In the 18th century, Daniel Bernoulli applied mathematics to describe the effect of smallpox on the human population. Thomas Malthus' 1798 essay on the growth of the human population was based on the concept of exponential growth. Pierre François Verhulst formulated the logistic growth model in 1836. Fritz Müller described the evolutionary benefits of what is now called Müllerian mimicry in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be, unless one includes Malthus's discussion of the effects of population growth that influenced Charles Darwin: Malthus argued that growth would be exponential (he used the word "geometric") while resources (the environment's carrying capacity) could only grow arithmetically. The term "theoretical biology" was first used as a monograph title by Johannes Reinke in 1901, and soon after by Jakob von Uexküll in 1920. One founding text is considered to be On Growth and Form (1917) by D'Arcy Thompson, and other early pioneers include Ronald Fisher, Hans Leo Przibram, Vito Volterra, Nicolas Rashevsky and Conrad Hal Waddington. Recent growth Interest in the field has grown rapidly from the 1960s onwards. 
Some reasons for this include: The rapid growth of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools Recent development of mathematical tools such as chaos theory to help understand complex, non-linear mechanisms in biology An increase in computing power, which facilitates calculations and simulations not previously possible An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and non-human animal research Areas of research Several areas of specialized research in mathematical and theoretical biology as well as external links to related projects in various universities are concisely presented in the following subsections, including also a large number of appropriate validating references from a list of several thousands of published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear, and supercomplex mechanisms, as it is being increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models. Abstract relational biology Abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)--systems introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization. Other approaches include the notion of autopoiesis developed by Maturana and Varela, Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints. Algebraic biology Algebraic biology (also known as symbolic systems biology) applies the algebraic methods of symbolic computation to the study of biological problems, especially in genomics, proteomics, analysis of molecular structures and study of genes. Complex systems biology An elaboration of systems biology to understand the more complex life processes was developed since 1970 in connection with molecular set theory, relational biology and algebraic biology. Computer models and automata theory A monograph on this topic summarizes an extensive amount of published research in this area up to 1986, including subsections in the following areas: computer modeling in biology and medicine, arterial system models, neuron models, biochemical and oscillation networks, quantum automata, quantum computers in molecular biology and genetics, cancer modelling, neural nets, genetic networks, abstract categories in relational biology, metabolic-replication systems, category theory applications in biology and medicine, automata theory, cellular automata, tessellation models and complete self-reproduction, chaotic systems in organisms, relational biology and organismic theories. Modeling cell and molecular biology This area has received a boost due to the growing importance of molecular biology. 
Mechanics of biological tissues Theoretical enzymology and enzyme kinetics Cancer modelling and simulation Modelling the movement of interacting cell populations Mathematical modelling of scar tissue formation Mathematical modelling of intracellular dynamics Mathematical modelling of the cell cycle Mathematical modelling of apoptosis Modelling physiological systems Modelling of arterial disease Multi-scale modelling of the heart Modelling electrical properties of muscle interactions, as in bidomain and monodomain models Computational neuroscience Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system. Evolutionary biology Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology. Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. Most population geneticists consider the appearance of new alleles by mutation, the appearance of new genotypes by recombination, and changes in the frequencies of existing alleles and genotypes at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development of coalescent theory is phylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic. Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far as 1798 when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlap with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions. In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics. Mathematical biophysics The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments. The following is a list of mathematical descriptions and their assumptions. Deterministic processes (dynamical systems) A fixed mapping between an initial state and a final state. 
Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space. Difference equations/Maps – discrete time, continuous state space. Ordinary differential equations – continuous time, continuous state space, no spatial derivatives. See also: Numerical ordinary differential equations. Partial differential equations – continuous time, continuous state space, spatial derivatives. See also: Numerical partial differential equations. Logical deterministic cellular automata – discrete time, discrete state space. See also: Cellular automaton. Stochastic processes (random dynamical systems) A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution. Non-Markovian processes – generalized master equation – continuous time with memory of past events, discrete state space, waiting times of events (or transitions between states) discretely occur. Jump Markov process – master equation – continuous time with no memory of past events, discrete state space, waiting times between events discretely occur and are exponentially distributed. See also: Monte Carlo method for numerical simulation methods, specifically dynamic Monte Carlo method and Gillespie algorithm. Continuous Markov process – stochastic differential equations or a Fokker–Planck equation – continuous time, continuous state space, events occur continuously according to a random Wiener process. Spatial modelling One classic work in this area is Alan Turing's paper on morphogenesis entitled The Chemical Basis of Morphogenesis, published in 1952 in the Philosophical Transactions of the Royal Society. Travelling waves in a wound-healing assay Swarming behaviour A mechanochemical theory of morphogenesis Biological pattern formation Spatial distribution modeling using plot samples Turing patterns Mathematical methods A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or at equilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur. Mathematical molecular bioscience Mathematical molecular bioscience is an interdisciplinary field that explores biological processes at the molecular level using advanced mathematics, such as geometry, topology, algebra, and combinatorics. It combines principles of biology, chemistry, physics, and bioinformatics to investigate the structure, function, and interactions of biomolecules, such as DNA, RNA, proteins, lipids, and carbohydrates. This field is foundational to understanding life at a microscopic scale and has broad applications in medicine, drug discovery, epidemiology, biotechnology, agriculture, protein engineering, and environmental science. There are a wide variety of research topics in this field. Algebraic geometry modeling of protein structure. Algebraic graph theory modeling of protein-ligand binding. Algebraic topology and natural vector models for phylogenetic analysis. Combinatorial approaches to RNA secondary structure. 
Combinatorial Laplacian and Hodge Laplacian models for biomolecules. De Rham–Hodge theory modeling of biomolecules. Differential geometry-based multiscale models of biomolecular structure, electrostatics, dynamics, and function. Geometric algebra models for biomolecules. Geometric and topological analysis of omics data. Geometric topology modeling of biomolecules. Graph theory modeling of RNA molecules. Grassmann manifolds for genome modeling. Gromov-Wasserstein and Wasserstein metrics for macromolecular comparison. Group, graph and tiling theory models in virology. Knot theory and Gauss linking integral models for DNA/RNA/proteins. Laplace-Beltrami operator modeling of biological surfaces. Mathematical models for molecular networks, including networks for gene regulation, protein-protein interaction, drug-target interaction, interactome, transcription, and metabolic pathway. Mathematics-assisted protein engineering. Number theory models for DNA coding. Optimal transport modeling of omics data. Quantum topology and Dirac modeling of biomolecules. Ricci curvature and mean curvature modeling of protein-ligand binding. Persistent homology modeling of biomolecules. Persistent Khovanov homology modeling of DNA, RNA, and proteins. Persistent topological Laplacian modeling of biomolecules. String theory and M-theory models for DNA packaging. Tor-algebra modeling of protein-protein interactions. Topological data analysis approach for protein classification. Persistent topology-enabled discovery of virus evolution mechanisms. Persistent topology-enabled prediction of emerging dominant viral variants. Molecular set theory Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine. In a more general sense, MST is the theory of molecular categories defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and the formulation of clinical biochemistry problems in mathematical formulations of pathological, biochemical changes of interest to Physiology, Clinical Biochemistry and Medicine. Organizational biology Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea. For example, abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)--systems introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization. Model example: the cell cycle The eukaryotic cell cycle is very complex and has been the subject of intense study, since its misregulation leads to cancers. It is possibly a good example of a mathematical model as it deals with simple calculus but gives valid results. Two research groups have produced several models of the cell cycle simulating several organisms. 
They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006). By means of a system of ordinary differential equations these models show the change in time (dynamical system) of the protein concentrations inside a single typical cell; this type of model is called a deterministic process (whereas a model describing a statistical distribution of protein concentrations in a population of cells is called a stochastic process). To obtain these equations an iterative series of steps must be done: first, the various models and observations are combined to form a consensus diagram, and the appropriate kinetic laws are chosen to write the differential equations, such as rate kinetics for stoichiometric reactions, Michaelis–Menten kinetics for enzyme–substrate reactions and Goldbeter–Koshland kinetics for ultrasensitive transcription factors; afterwards the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size. To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a starting vector (list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments. In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as a vector field, where each vector describes the change in concentration of two or more proteins, determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: a stable point, called a sink, that attracts in all directions (forcing the concentrations to be at a certain value); an unstable point, either a source or a saddle point, which repels (forcing the concentrations to change away from a certain value); and a limit cycle, a closed trajectory towards which several trajectories spiral (making the concentrations oscillate). A better representation, which handles the large number of variables and parameters, is a bifurcation diagram using bifurcation theory. The presence of these special steady-state points at certain values of a parameter (e.g.
mass) is represented by a point and once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations: the cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently, but once the phase has changed at a bifurcation event (Cell cycle checkpoint), the system cannot go back to the previous levels since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making a checkpoint irreversible. In particular the S and M checkpoints are regulated by means of special bifurcations called a Hopf bifurcation and an infinite period bifurcation. See also Biological applications of bifurcation theory Biophysics Biostatistics Entropy and life Ewens's sampling formula Journal of Theoretical Biology Logistic function Mathematical modelling of infectious disease Metabolic network modelling Molecular modelling Morphometrics Population genetics Spring school on theoretical biology Statistical genetics Theoretical ecology Turing pattern Notes References Theoretical biology Further reading External links The Society for Mathematical Biology The Collection of Biostatistics Research Archive
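The simulation step described in the model example above can be made concrete with a toy calculation. The sketch below is purely illustrative (the two-variable system and every rate constant are assumptions made here, not taken from any published cell-cycle model): it advances a small system of ordinary differential equations in small time increments, and two different starting vectors settle onto the same stable point, i.e. a sink of the vector field.

```python
# Minimal sketch (illustrative only): forward-Euler simulation of a toy
# two-protein system dx/dt = a - b*x - c*x*y, dy/dt = d*x - e*y.
# All rate constants are assumed values, not taken from any published model.

def simulate(x0, y0, a=1.0, b=0.5, c=0.8, d=0.6, e=0.7, dt=0.01, steps=5000):
    """Advance the system in small time increments from a starting vector."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = a - b * x - c * x * y
        dy = d * x - e * y
        x, y = x + dx * dt, y + dy * dt
        trajectory.append((x, y))
    return trajectory

# Two different starting vectors converge to the same steady state (a sink).
for start in [(0.1, 0.1), (3.0, 2.0)]:
    final = simulate(*start)[-1]
    print(f"start={start} -> steady state ~ ({final[0]:.3f}, {final[1]:.3f})")
```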
Mathematical and theoretical biology
Mathematics
3,825
48,772,958
https://en.wikipedia.org/wiki/Maritime%20mobile%20service
A maritime mobile service (also MMS or maritime mobile radiocommunication service) is a mobile service between coast stations and ship stations, or between ship stations, or between associated on-board communication stations. The service may also be used by survival craft stations and emergency position-indicating radiobeacon stations. Classification This radiocommunication service is classified in accordance with ITU Radio Regulations (article 1) as follows: Maritime mobile service Maritime mobile-satellite service (article 1.29) Port operations service (article 1.30) Ship movement service (article 1.31) Frequency allocation The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared. primary allocation: is indicated by writing in capital letters secondary allocation: is indicated by small letters exclusive or shared utilization: is within the responsibility of administrations However, military usage, in bands where there is civil usage, will be in accordance with the ITU Radio Regulations. In NATO countries, military utilizations will be in accordance with the NATO Joint Civil/Military Frequency Agreement (NJFA). Frequency range 415...495 kHz 505...526.5 kHz 1606.5...1625 kHz 1635...1800 kHz 2045...2160 kHz 2170...2173.5 kHz 2190.5...2194 kHz 2625...2650 kHz 4000...4438 kHz 6200...6525 kHz 8100...8815 kHz 12230...13200 kHz 16360...17410 kHz 18780...18900 kHz 19680...19800 kHz 22000...22855 kHz 25070...25210 kHz 26100...26175 kHz See also Radio station Radiocommunication service References Mobile services ITU Maritime communication
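The band list above can be treated as a simple lookup table. The following sketch is only an illustration of that idea (it transcribes the kHz ranges quoted in this article and is not an authoritative ITU allocation table):

```python
# Sketch only: the band list below is transcribed from the ranges quoted in
# this article (kHz); it is not an official ITU frequency allocation table.
MARITIME_MOBILE_BANDS_KHZ = [
    (415, 495), (505, 526.5), (1606.5, 1625), (1635, 1800),
    (2045, 2160), (2170, 2173.5), (2190.5, 2194), (2625, 2650),
    (4000, 4438), (6200, 6525), (8100, 8815), (12230, 13200),
    (16360, 17410), (18780, 18900), (19680, 19800), (22000, 22855),
    (25070, 25210), (26100, 26175),
]

def in_maritime_mobile_band(freq_khz: float) -> bool:
    """Return True if the frequency lies in one of the listed allocations."""
    return any(lo <= freq_khz <= hi for lo, hi in MARITIME_MOBILE_BANDS_KHZ)

print(in_maritime_mobile_band(8414.5))  # True: within the 8100...8815 kHz range
print(in_maritime_mobile_band(7000))    # False: not in any of the listed ranges
```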
Maritime mobile service
Technology
451
10,915,035
https://en.wikipedia.org/wiki/Rifle%20bedding
Rifle bedding is a gunsmithing process of providing a rigid and consistent foundation for a rifle’s operational components, by creating a stable and close-fitting bearing surface between the gun's functional parts (i.e. the receiver housing the barrelled action) and its structural support (i.e. the stock) that do not deform with heat, pressure and moisture, or shift under the shear stress of the recoil from firing. The bedding process is often an aftermarket modification, and is done for the goal of accurizing the rifle and (to a lesser extent) prolonging the service life of the stock. Purpose Increasing accuracy Bedding increases accuracy in part by relieving stress on the action. The rifle's action will rarely sit flush in the stock without bedding. This causes the action to flex when tightening the fixing screws that hold the action to the stock. If the stock is wooden, it will also expand or shrink significantly with environmental changes such as temperature and moisture, which causes changes in action screw tension. These result in inconsistencies during operation, which degrades accuracy. Bedding will create a flush bearing surface for the action and prevent flexing. Bedding also reduces minute movements of the action within the stock. Without bedding, the action may be more likely to shift after each shot. If the action shifts and does not return to same spot in the stock the rifle will lose the ability to maintain zero. The presence of the bedding material also adds a small amount of extra height to the action, and creates more of a gap between the barrel (which is fixed to the front of the action) and the fore-end of the stock, allowing the barrel to be better floated, which helps improve accuracy. Prolonging stock life Bedding can help prolong the life of the stock. Repeated shearing forces from recoils can create focal wear and chips in the stock surface, and eventually ruining the stock with repeated usage. Bedding redistributes stress over a larger area, reduces shifting between the action and the stock, and creates a hardier, protective epoxy coating over the softer stock contact surface, thus protecting it from mechanical wears over time. Methods Bedding involves molding an epoxy-based material onto the stock recess to fill away the gaps within its contact surface with the receiver (known as glass bedding), and/or inserting a metal cylinders (which act as compression members) around the action screws to reduce compressive shifting (known as pillar bedding). The receiver and the stock are sometimes fastened indirectly through an intermediate piece (usually made of rigid materials such as aluminium alloy) known as a bedding block, which multifunctionally serves as a larger pillar, a bedding surface and even recoil lugs. The contact interface on the stock may also be substituted by a metallic bedding frame known as a chassis, which is either embedded within the stock, or even completely replacing the stock like the lower receiver on many modern modular semi-automatic rifles. Skim bedding refers to an adjustment of a glass bedding job, usually after wear and tear from use, which consists of removing a small layer of the bedding material — usually up to around — and adding new bedding material on top of that. Several different bedding methods can be used depending on the type of stock, desired results and level of experience of the person attempting to perform the bedding. Methods include: Full contact bedding of the action with the barrel floated. 
Full contact bedding of the action and the barrel. Full contact bedding of the action with a pressure-bearing pad for the barrel. Pillar bedding of the action with the barrel floated. Full length aluminum action bedding block. Full contact bedding of the action with the barrel floated is a very common method for long range rifles with a heavy barrel. A free-floating barrel will generally produce the greatest accuracy. However, a pressure pad under the barrel just forward of the action can sometimes improve accuracy by acting on barrel harmonics and reducing stress on the action from the weight of the barrel. Pillar bedding can be used to float the action as well as the barrel, but the process is more difficult. Precautions If performed improperly, bedding can destroy a rifle. Mechanical locking occurs when bedding material is allowed to harden in holes or around protrusions on the action. If locking occurs, the action can be permanently fixed to the stock. Extreme measures may have to be taken to separate the stock from the action, possibly destroying one or both. Improperly applied or insufficient release agent can cause the bedding material to bind to the metal. If the trigger assembly is not removed prior to bedding, epoxy can seep into the trigger assembly and ruin it. References Further reading Firearm components
Rifle bedding
Technology
987
9,902,787
https://en.wikipedia.org/wiki/Structure%20theorem%20for%20finitely%20generated%20modules%20over%20a%20principal%20ideal%20domain
In mathematics, in the field of abstract algebra, the structure theorem for finitely generated modules over a principal ideal domain is a generalization of the fundamental theorem of finitely generated abelian groups and roughly states that finitely generated modules over a principal ideal domain (PID) can be uniquely decomposed in much the same way that integers have a prime factorization. The result provides a simple framework to understand various canonical form results for square matrices over fields. Statement When a vector space over a field F has a finite generating set, then one may extract from it a basis consisting of a finite number n of vectors, and the space is therefore isomorphic to F^n. The corresponding statement with F generalized to a principal ideal domain R is no longer true, since a basis for a finitely generated module over R might not exist. However such a module is still isomorphic to a quotient of some module R^n with n finite (to see this it suffices to construct the morphism that sends the elements of the canonical basis of R^n to the generators of the module, and take the quotient by its kernel.) By changing the choice of generating set, one can in fact describe the module as the quotient of some R^n by a particularly simple submodule, and this is the structure theorem. The structure theorem for finitely generated modules over a principal ideal domain usually appears in the following two forms. Invariant factor decomposition For every finitely generated module M over a principal ideal domain R, there is a unique decreasing sequence of proper ideals (d1) ⊇ (d2) ⊇ ⋯ ⊇ (dn) such that M is isomorphic to the sum of cyclic modules: M ≅ R/(d1) ⊕ R/(d2) ⊕ ⋯ ⊕ R/(dn). The generators di of the ideals are unique up to multiplication by a unit, and are called invariant factors of M. Since the ideals should be proper, these factors must not themselves be invertible (this avoids trivial factors in the sum), and the inclusion of the ideals means one has the divisibility d1 | d2 | ⋯ | dn. The free part is visible in the part of the decomposition corresponding to factors di = 0. Such factors, if any, occur at the end of the sequence. While the direct sum is uniquely determined by M, the isomorphism giving the decomposition itself is not unique in general. For instance if R is actually a field, then all occurring ideals must be zero, and one obtains the decomposition of a finite dimensional vector space into a direct sum of one-dimensional subspaces; the number of such factors is fixed, namely the dimension of the space, but there is a lot of freedom for choosing the subspaces themselves (if the dimension is greater than 1). The nonzero elements di, together with the number of di which are zero, form a complete set of invariants for the module. Explicitly, this means that any two modules sharing the same set of invariants are necessarily isomorphic. Some prefer to write the free part of M separately: M ≅ R^f ⊕ (⊕i R/(di)), where the visible di are nonzero, and f is the number of di's in the original sequence which are 0. Primary decomposition Every finitely generated module M over a principal ideal domain R is isomorphic to one of the form ⊕i R/(qi), where (qi) ≠ R and the (qi) are primary ideals. The qi are unique (up to multiplication by units). The elements qi are called the elementary divisors of M. In a PID, nonzero primary ideals are powers of primes, and so (qi) = (pi^ri). When qi = 0, the resulting indecomposable module is R itself, and this is inside the part of M that is a free module. The summands R/(qi) are indecomposable, so the primary decomposition is a decomposition into indecomposable modules, and thus every finitely generated module over a PID is a completely decomposable module.
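As a worked illustration (added here for concreteness; the matrix is an arbitrary example, not part of the theorem's statement), take R = Z and the module M presented by the relation matrix A below. Row and column operations over Z bring A to Smith normal form, whose diagonal entries are the invariant factors; factoring them into prime powers gives the elementary divisors.

```latex
% Illustrative example over R = \mathbb{Z}: M = \mathbb{Z}^2 / A\mathbb{Z}^2.
A = \begin{pmatrix} 2 & 0 \\ 4 & 6 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} 2 & 0 \\ 0 & 6 \end{pmatrix}
\quad \text{(Smith normal form)}, \qquad d_1 = 2 \mid d_2 = 6 .
% Invariant factor decomposition, refined into the primary decomposition:
M \;\cong\; \mathbb{Z}/2 \,\oplus\, \mathbb{Z}/6
  \;\cong\; \mathbb{Z}/2 \,\oplus\, \mathbb{Z}/2 \,\oplus\, \mathbb{Z}/3 .
% Invariant factors: 2, 6.   Elementary divisors: 2, 2, 3.
```

Here a single row operation (subtracting twice the first row from the second) already diagonalizes A, and the divisibility d1 | d2 holds as the theorem requires.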
Since PIDs are Noetherian rings, complete decomposability can be seen as a manifestation of the Lasker–Noether theorem. As before, it is possible to write the free part (where qi = 0) separately and express M as M ≅ R^f ⊕ (⊕i R/(qi)), where the visible qi are nonzero. Proofs One proof proceeds as follows: Every finitely generated module over a PID is also finitely presented because a PID is Noetherian, an even stronger condition than coherence. Take a presentation, which is a map R^r → R^g (relations to generators), and put it in Smith normal form. This yields the invariant factor decomposition, and the diagonal entries of Smith normal form are the invariant factors. Another outline of a proof: Denote by tM the torsion submodule of M. The torsion submodule can be embedded as a submodule of M and this gives a short exact sequence: 0 → tM → M → M/tM → 0, where the map M → M/tM is a projection. M/tM is a finitely generated torsion free module, and such a module over a commutative PID is a free module of finite rank, so it is isomorphic to R^n for a positive integer n. Since every free module is a projective module, there exists a right inverse of the projection map (it suffices to lift each of the generators of M/tM into M). By the splitting lemma (left split) M splits into M ≅ tM ⊕ R^n. For a prime element p in R we can then speak of Np = {m ∈ M : p^k·m = 0 for some k}. This is a submodule of tM, and it turns out that each Np is a direct sum of cyclic modules, and that tM is a direct sum of Np for a finite number of distinct primes p. Putting the previous two steps together, M is decomposed into cyclic modules of the indicated types. Corollaries This includes the classification of finite-dimensional vector spaces as a special case, where R = K is a field. Since fields have no non-trivial ideals, every finitely generated vector space is free. Taking R = Z yields the fundamental theorem of finitely generated abelian groups. Let T be a linear operator on a finite-dimensional vector space V over K. Taking R = K[T], the algebra of polynomials with coefficients in K evaluated at T, yields structure information about T. V can be viewed as a finitely generated module over K[T]. The last invariant factor is the minimal polynomial, and the product of invariant factors is the characteristic polynomial. Combined with a standard matrix form for K[T]/(p(T)), this yields various canonical forms: invariant factors + companion matrix yields Frobenius normal form (aka, rational canonical form) primary decomposition + companion matrix yields primary rational canonical form primary decomposition + Jordan blocks yields Jordan canonical form (this latter only holds over an algebraically closed field) Uniqueness While the invariants (rank, invariant factors, and elementary divisors) are unique, the isomorphism between M and its canonical form is not unique, and does not even preserve the direct sum decomposition. This follows because there are non-trivial automorphisms of these modules which do not preserve the summands. However, one has a canonical torsion submodule T, and similar canonical submodules corresponding to each (distinct) invariant factor, which yield a canonical sequence 0 ⊂ ⋯ ⊂ T ⊂ M. Compare composition series in Jordan–Hölder theorem. For instance, if M = Z ⊕ Z/2, and (1, 0), (0, 1) is one basis, then (1, 1), (0, 1) is another basis, and the change of basis matrix does not preserve the summand Z. However, it does preserve the Z/2 summand, as this is the torsion submodule (equivalently here, the 2-torsion elements). Generalizations Groups The Jordan–Hölder theorem is a more general result for finite groups (or modules over an arbitrary ring). In this generality, one obtains a composition series, rather than a direct sum.
The Krull–Schmidt theorem and related results give conditions under which a module has something like a primary decomposition, a decomposition as a direct sum of indecomposable modules in which the summands are unique up to order. Primary decomposition The primary decomposition generalizes to finitely generated modules over commutative Noetherian rings, and this result is called the Lasker–Noether theorem. Indecomposable modules By contrast, unique decomposition into indecomposable submodules does not generalize as far, and the failure is measured by the ideal class group, which vanishes for PIDs. For rings that are not principal ideal domains, unique decomposition need not even hold for modules over a ring generated by two elements. For the ring R = Z[√−5], both the module R and its submodule M generated by 2 and 1 + √−5 are indecomposable. While R is not isomorphic to M, R ⊕ R is isomorphic to M ⊕ M; thus the images of the M summands give indecomposable submodules L1, L2 < R ⊕ R which give a different decomposition of R ⊕ R. The failure of uniquely factorizing R ⊕ R into a direct sum of indecomposable modules is directly related (via the ideal class group) to the failure of the unique factorization of elements of R into irreducible elements of R. However, over a Dedekind domain the ideal class group is the only obstruction, and the structure theorem generalizes to finitely generated modules over a Dedekind domain with minor modifications. There is still a unique torsion part, with a torsionfree complement (unique up to isomorphism), but a torsionfree module over a Dedekind domain is no longer necessarily free. Torsionfree modules over a Dedekind domain are determined (up to isomorphism) by rank and Steinitz class (which takes value in the ideal class group), and the decomposition into a direct sum of copies of R (rank one free modules) is replaced by a direct sum into rank one projective modules: the individual summands are not uniquely determined, but the Steinitz class (of the sum) is. Non-finitely generated modules Similarly for modules that are not finitely generated, one cannot expect such a nice decomposition: even the number of factors may vary. There are Z-submodules of Q4 which are simultaneously direct sums of two indecomposable modules and direct sums of three indecomposable modules, showing the analogue of the primary decomposition cannot hold for infinitely generated modules, even over the integers, Z. Another issue that arises with non-finitely generated modules is that there are torsion-free modules which are not free. For instance, consider the ring Z of integers. Then Q is a torsion-free Z-module which is not free. Another classical example of such a module is the Baer–Specker group, the group of all sequences of integers under termwise addition. In general, the question of which infinitely generated torsion-free abelian groups are free depends on which large cardinals exist. A consequence is that any structure theorem for infinitely generated modules depends on a choice of set theory axioms and may be invalid under a different choice. References Theorems in abstract algebra Module theory de:Hauptidealring#Moduln über Hauptidealringen
Structure theorem for finitely generated modules over a principal ideal domain
Mathematics
2,210
15,114,275
https://en.wikipedia.org/wiki/DRYOS
DRYOS (also stylized as DryOS) is a proprietary real-time operating system made by Canon and is used in their digital cameras and camcorders. Since late 2007, DIGIC-based cameras are shipped using DRYOS. It replaces VxWorks from Wind River Systems which has been used before on DIGIC II and some DIGIC III equipped cameras. DRYOS had existed before and was in use in other Canon hardware, such as digital video cameras and high-end webcams. DRYOS has a 16-kilobyte kernel module at its core and is currently compatible with more than 10 CPU types. It provides a simulation-based development environment for debugging. Canon also developed a USB- and middleware-compatible device driver for file systems and network devices, e.g. video server. DRYOS aims to be compatible with μITRON 4.0 and with POSIX. Cameras with DRYOS The following cameras are known to run DRYOS: Canon PowerShot SX1 IS Canon PowerShot SX10 IS Canon PowerShot SX20 IS Canon PowerShot SX30 IS Canon PowerShot SX40 HS Canon PowerShot SX50 HS Canon PowerShot SX60 HS Canon PowerShot S5 IS Canon PowerShot S90 Canon PowerShot S95 Canon PowerShot G9 Canon PowerShot G10 Canon PowerShot G11 Canon PowerShot G12 Canon PowerShot A470 Canon PowerShot A480 Canon PowerShot A580 Canon PowerShot A590 IS Canon PowerShot A650 IS Canon PowerShot A720 IS Canon PowerShot A810 Canon PowerShot A1100 IS Canon PowerShot A2200 IS Canon PowerShot A2300 IS Canon PowerShot A3000 IS Canon PowerShot A3100 IS Canon PowerShot SD1100 IS Canon PowerShot SX100 IS Canon PowerShot SX110 IS Canon PowerShot SX120 IS Canon PowerShot SX130 IS Canon PowerShot SX160 IS Canon PowerShot SX200 IS Canon PowerShot SX230 IS Canon PowerShot SX230 HS Canon PowerShot SD780 IS Canon PowerShot SD880 IS Canon PowerShot SD990 IS (IXUS 980 IS) Canon PowerShot SD1400 IS Canon PowerShot ELPH100 HS (IXUS 115 HS) Canon EOS 5D Mark IV Canon EOS 80D Canon EOS 90D Canon EOS 650D Canon EOS 700D Canon EOS 750D Canon EOS 1100D Canon EOS 1200D Canon EOS 1300D Canon EOS 5D Mark III Canon EOS 7D Mark II Canon EOS M Canon EOS M2 Canon EOS M3 Canon EOS M10 Canon EOS M50 Canon EOS M100 Canon EOS R-series References External links Canon DRYOS technology explanation page (Archived January 16, 2008) Canon technology explanation page covering many Canon technologies, including DRYOS (Archived February 14, 2019) Real-time operating systems Camera firmware
DRYOS
Technology
618
37,325,129
https://en.wikipedia.org/wiki/Strontium%20oxalate
Strontium oxalate is a compound with the chemical formula SrC2O4. Strontium oxalate can exist either in a hydrated form or as the acidic salt of strontium oxalate. Strontium oxalate is soluble in 20 000 parts of water; in 1 900 parts of 3.5% acetic acid, in 115 parts of the 23% acid, but less soluble in the 35% acid; readily soluble in diluted HCl or nitric acid. Use in pyrotechnics With the addition of heat, strontium oxalate will decompose based on the following reaction: SrC2O4 → SrO + CO2 + CO. Strontium oxalate is a good agent for use in pyrotechnics since it decomposes readily with the addition of heat. When it decomposes into strontium oxide, it produces a red flame color. Since this reaction produces carbon monoxide, which can undergo a further reduction with magnesium oxide, strontium oxalate is an excellent red flame color producing agent in the presence of magnesium. If it is not in the presence of magnesium, strontium carbonate has been found to be a better option to produce an even greater effect. References Strontium compounds Oxalates Inorganic compounds
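A small worked calculation can make the decomposition stoichiometry tangible. The sketch below is an illustration only (atomic masses are standard approximate values, and the 1:1 strontium ratio follows from the decomposition reaction given above):

```python
# Illustrative stoichiometry sketch for SrC2O4 -> SrO + CO2 + CO (1:1 Sr ratio).
# Atomic masses are standard approximate values in g/mol.
ATOMIC_MASS = {"Sr": 87.62, "C": 12.011, "O": 15.999}

def molar_mass(counts):
    """Molar mass from a dict of element -> atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

M_SrC2O4 = molar_mass({"Sr": 1, "C": 2, "O": 4})   # ~175.6 g/mol
M_SrO    = molar_mass({"Sr": 1, "O": 1})           # ~103.6 g/mol

sample_g = 10.0                      # grams of anhydrous strontium oxalate
moles = sample_g / M_SrC2O4
print(f"{sample_g} g SrC2O4 -> {moles * M_SrO:.2f} g SrO on complete decomposition")
```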
Strontium oxalate
Chemistry
261
24,847,032
https://en.wikipedia.org/wiki/Digital%20Earth%20Reference%20Model
The term Digital Earth Reference Model (DERM) was coined by Tim Foresman in the context of a vision for an all-encompassing geospatial platform as an abstract for information flow in support of Al Gore's vision for a Digital Earth. The Digital Earth reference model seeks to facilitate and promote the use of georeferenced information from multiple sources over the Internet. A digital Earth reference model defines a fixed global reference frame for the Earth using four principles of a digital system, namely: Discrete partitioning using regular or irregular cell mesh, tiling or Grid; Data acquisition using signal processing theory (sampling and quantizing) for assigning binary values from continuous analog or other digital sources to the discrete cell partitions; An ordering or naming of cells that can provide both unique spatial indexing and geographic location address; A set of mathematical operations built on the indexing for algebraic, geometric, Boolean and image processing transforms, etc. The distinction between "digital" versus "analog" Earth reference model is made in the manner the entire Earth surface is covered. A tessellation refers to a finite number of objects/cells that cover the surface as discrete partitions, while a lattice refers to ordered sets of points that cover the surface in continuous vector space. The mathematical frame for a digital Earth reference model is a tessellation, while the mathematical frame for an analog Earth reference is a lattice. The value of a digital Earth reference model to encode information about the Earth is akin to the value obtained from other digital technologies, namely synchronization of the physical domain with the information domain, such as in digital audio and digital photography. Efficiencies are found in data storage, processing, integration, discovery, transmission, visualization, aggregation, and analytical, fusion and modeling transforms. Data referenced to a Digital Earth Reference Model (DERM) become ubiquitous, facilitating distributed spatial queries such as “What is here?” and “What has changed?”. Image and signal processing theory can be utilized to operate on data referenced to a DERM. The DERM structure is data independent, allowing for the general quantization of all georeferenced data sources onto the common grid. Applications, algorithms and operations can then be developed on the grid independent of data sources. Approaches using an analog reference require rigorous manual conflation to satisfy the creation of digital products such as digital maps or other cartographic, navigation or geospatial information (see also GIS). However, digital models are weaker at geometric transformations, where translation, scaling and rotation must conform to the discrete cell locations, whereas on an analog model with a continuum of locations geometric transformations are straightforward, with no requirements for reprocessing or resampling. A cell shape in such representations can be critical to the validity, adaptability and usefulness of the grid. As rectilinear structures are intuitive but lack optimization characteristics as a tessellation, especially when tiled to a sphere, other schemes including Voronoi regions, Peano curves, triangles and hexagonal tilings have been advanced as superior alternatives. Many ordering and naming models have been implemented as geospatial database indexing for efficient data retrieval (R-Trees, QTM, HHC).
Few of these models have encompassed a complete digital Earth reference model, in which the formation of digits both represents a hierarchy (the index contains a parent–child relationship) and monotonically converges by a set modulus to all vector reals. The International Society on Digital Earth has a standing committee considering DERM implementations and standards, which include both the Earth reference frame and the ancillary requirements for metadata and attribute semantics. References Geographic information systems
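One of the four principles listed above — an ordering or naming of cells whose index encodes a parent–child hierarchy — can be illustrated with a toy addressing scheme. The sketch below is an assumption on the author's part (a plain quadtree on a latitude/longitude rectangle), not any published DERM or discrete global grid standard:

```python
# Illustrative sketch of one DERM ingredient: a hierarchical cell naming scheme.
# This is a toy quadtree address on a plain lat/lon rectangle, not a published
# DERM/DGGS standard; digits 0-3 pick a quadrant at each level, so the index
# encodes both a unique cell and its parent-child hierarchy (parent = prefix).

def cell_index(lat: float, lon: float, levels: int = 8) -> str:
    """Return a hierarchical cell address for a point, one digit per level."""
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    digits = []
    for _ in range(levels):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        quad = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        digits.append(str(quad))
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return "".join(digits)

def parent(index: str) -> str:
    """The coarser cell containing this cell is the index without its last digit."""
    return index[:-1]

addr = cell_index(48.8584, 2.2945)   # a point in Paris
print(addr, parent(addr))            # child cell address and its parent cell
```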
Digital Earth Reference Model
Technology
742
56,343,495
https://en.wikipedia.org/wiki/Gas%20chromatography%E2%80%93vacuum%20ultraviolet%20spectroscopy
Gas chromatography–vacuum ultraviolet spectroscopy (GC-VUV) is a universal detection technique for gas chromatography. VUV detection provides both qualitative and quantitative spectral information for most gas phase compounds. GC-VUV spectral data is three-dimensional (time, absorbance, wavelength) and specific to chemical structure. Nearly all compounds absorb in the vacuum ultraviolet region of the electromagnetic spectrum with the exception of carrier gases hydrogen, helium, and argon. The high energy, short wavelength VUV photons probe electronic transitions, from the ground state to excited states, in almost all chemical bonds. The result is spectral "fingerprints" that are specific to individual compound structure and can be readily identified by the VUV library. Unique VUV spectra enable closely related compounds such as structural isomers to be clearly differentiated. VUV detectors complement mass spectrometry, which struggles with characterizing constitutional isomers and compounds with low mass quantitation ions. VUV spectra can also be used to deconvolve analyte co-elution, resulting in an accurate quantitative representation of individual analyte contribution to the original response. This characteristic lends itself to significantly reducing GC runtimes through flow rate-enhanced chromatographic compression. VUV spectroscopy follows the simple linear relationship between absorbance and concentration described by the Beer-Lambert Law, resulting in more accurate retention time-based identification. VUV absorbance spectra also exhibit feature similarity within compound classes, meaning VUV detectors can rapidly perform compound class characterization in complex samples through compound spectral shape and retention index information. Advances in technology have reduced the typical group analysis data processing time from 15 to 30 minutes to <1 minute per sample. History The first benchtop detector was introduced in 2014 with detection capabilities between 120 and 240 nm. This portion of the ultraviolet spectrum had historically been restricted to bright source synchrotron facilities due to significant background absorption challenges inherent to working within the wavelength range. Further detector platform development has extended the wavelength detection range out from 120 to 430 nm. How it works VUV detectors for gas chromatography VUV detectors are compatible with most gas chromatography (GC) manufacturers. The detectors can be connected through a heated transfer line inserted through a punch-out in the GC oven casing. A makeup flow of carrier gas is introduced at the end of the transfer line. Analytes arrive in the flow cell and are exposed to VUV light from a deuterium lamp. Specially coated reflective optics paired with a back-thinned charge-coupled device (CCD) enable the collection of high-quality VUV absorption data. Figure 1 shows a schematic of the analyte path from GC to VUV detector. VUV spectral identification Gas phase species absorb and display unique spectra between 120 and 240 nm, where high energy σ→σ*, n→σ*, π→π*, n → π* electronic transitions can be excited and probed. VUV spectra reflect the absorbance cross section of compounds and are specific to their electronic structure and functional group arrangement. The ability of VUV detectors to produce spectra for most compounds results in universal and highly selective compound identification. VUV spectroscopy data is highly characteristic while also providing quantitative information.
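The additivity of absorbance and the Beer-Lambert proportionality described above are what make library-based deconvolution a linear-algebra problem. The following minimal sketch is illustrative only (the spectra and concentrations are synthetic stand-ins, not real VUV library entries or any vendor's fitting software): it recovers the relative contributions of two co-eluting analytes from their summed spectrum by least squares.

```python
# Minimal sketch: Beer-Lambert additivity lets a measured mixture spectrum be
# fit as a linear combination of library spectra. Spectra here are synthetic
# Gaussians standing in for real VUV library entries (illustration only).
import numpy as np

wavelengths = np.linspace(125, 240, 200)                      # nm

def fake_spectrum(center, width):
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

library = np.column_stack([fake_spectrum(170, 8),             # "analyte A"
                           fake_spectrum(185, 12)])           # "analyte B"

true_conc = np.array([0.7, 0.3])                              # arbitrary units
measured = library @ true_conc + np.random.normal(0, 0.005, wavelengths.size)

# Least-squares fit of the summed absorbance against the library spectra.
fit, *_ = np.linalg.lstsq(library, measured, rcond=None)
print("recovered contributions:", np.round(fit, 3))           # ~ [0.7, 0.3]
```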
Many commonly used GC detectors such as the electron capture detector (ECD), flame ionization detector (FID), and thermal conductivity detector (TCD) produce quantitative but not qualitative detail. Gas chromatography–mass spectrometry (GC-MS) generates qualitative and quantitative data but has difficulty characterizing labile and low mass compounds, as well as differentiating between isomers. GC-VUV complements MS by overcoming its limitations and providing a secondary method of confirmation. It also offers a single-instrument alternative to the use of multiple detectors for qualitative and quantitative analysis. Naphthols, xylenes, and cis- and trans-fatty acids are compounds that are prohibitively difficult to distinguish according to their electron ionization mass spectral profiles. Xylenes present the additional challenge of natural co-elution that makes separating their isoforms problematic. Figure 2 shows the distinct VUV spectra of m-, p-, and o-xylene. These compounds can be differentiated despite their only difference being the position of two methyl groups around a benzene ring. The spectral differences of these isomers enable their co-elution to be resolved through spectral deconvolution. Fatty acid screening and profiling is an application that commonly requires the use of multiple detectors to achieve quantitative and qualitative results. FID is a quantitative detector that is suitable for routine screening when guided by retention index information. GC-MS has traditionally been used for qualitative compound profiling, but falls short where isobaric analytes are prevalent. It especially struggles with differentiating cis and trans fatty acid isomers. Electron impact ionization can also cause double bond migration and lead to ambiguous fatty acid structural data. Determining cis and trans fatty acid distribution in oils and fats is important in assessing their potential health impacts. VUV spectra of trans-containing fatty acid methyl ester (FAME) isomers typically found in butter and vegetable oils are shown in Figure 3. These trans-containing isomers separate chromatographically from cis-containing isomers and have the tendency to co-elute with each other and, in some cases, with select C20:1 isomers. GC-VUV is not only able to differentiate the C18:3 FAME variants, but is also capable of telling cis isomers apart from trans isomers. Degrees of unsaturation such as C20:1 vs. C18:3 can additionally be distinguished. Previous work has demonstrated how distinct VUV spectra enable straightforward deconvolution and accurate quantitation of cis and trans FAME isomers. Chromatographic compression and spectral deconvolution Unique VUV absorbance spectra not only enable unambiguous compound identification, but also allow GC run times to be deliberately shortened. VUV detectors operate at ambient pressure and are thus not flow rate limited. GC run times can be reduced by increasing the GC column flow and oven temperature program rates. Flow rate-enhanced chromatographic compression utilizes VUV spectral deconvolution to resolve any co-elution that may result from shortening GC runtimes. VUV absorption is additive, meaning that overlapping peaks give a spectrum that corresponds to the sum absorbance of each compound. The individual contribution of each analyte can be determined if the VUV spectra for co-eluting compounds are stored in the VUV library. The ability to differentiate coeluting analyte spectra and use them to deconvolve the overlapping signals is demonstrated in Figure 4.
The individual spectra of terpenes limonene and p-Cymene are shown in Panel A along with the summed absorbance of the selected retention time window (blue region in Panel B) and the fit with VUV library spectra. The R2 >0.999 fit result confirms their identities, and enables the deconvolution of these and other terpenes analyzed by GC-VUV as featured in Panel B. Testing for the presence of residual solvents in Active Pharmaceutical Ingredients (APIs) is critical for patient safety and commonly follows United States Pharmacopeia (USP) Method <467> guidelines, or more broadly, International Council for Harmonization (ICH) Guideline Q3C(R6). The gas chromatography (GC) runtime suggested by USP Method 467 is approximately 60 min. A generic method for residual solvent analysis by GC-MS describes conditions that include a runtime of approximately 30 minutes. A GC-VUV and static headspace method was developed using a chromatographic compression strategy that resulted in a GC runtime of 8 minutes. The GC-VUV method uses a flow rate of 4 mL/min and an oven ramp of 35 °C (held for 1 min), followed by an increase to 245 °C at a rate of 30 °C/min. Figure 5 compares the results when the general conditions of the GC-MS method were followed against the GC-VUV method run with Class 2 residual solvents. Tetralin eluted at approximately 35 minutes using the GC-MS method conditions, whereas the analyte had a retention time of less than 7 minutes when the GC-VUV method was applied. The co-elution of m- and p-xylene occurred in both GC-MS and GC-VUV method runs. VUV software matched the analyte absorbance of both isomers with VUV library spectra (Figure 2) to deconvolve the overlapping signals as displayed in Figure 6. Goodness of fit information ensures that the correct compound assignment takes place during the post-run data analysis. The flow rate-enhanced chromatographic compression strategy has been applied to a diverse set of applications since the development of the GC-VUV method for residual solvents analysis. The fast GC-VUV approach reduced GC runtimes for terpene analysis from 30 minutes to 9 minutes (the deconvolution of monoterpene isomers is shown in Figure 4). It has also been demonstrated that GC runtimes as short as 14 minutes can be used for PIONA compound analysis of gasoline samples. Typical GC separation times range between 1 – 2 hours using alternative methods. Compound class characterization GC-VUV can be used for bulk compositional analysis because compounds share spectral shape characteristics within a class. Proprietary software applies fitting procedures to quickly determine the relative contribution of each compound category present in a sample. Retention index information is used to limit the amount of VUV library searching and fitting performed for each analyte, enabling the automated data processing routine to be completed quickly. Compound class or specific compound concentrations can be reported as either mass or volume percent. GC-VUV bulk compound characterization was first applied to the analysis of paraffin, isoparaffin, olefin, naphthene, and aromatic (PIONA) hydrocarbons in gasoline streams. It is suitable for use with finished gasoline, reformate, reformer feed, FCC, light naphtha, and heavy naphtha samples. A typical chromatographic analysis is displayed in Figure 7. The inset shows how the analyte spectral response is fit with VUV library spectra for the selected time slice. 
A report detailing the carbon number breakdown within each PIONA compound class, as well as the relative mass or volume percent of classes, is shown. A table with mass % and carbon number data from a gasoline sample can be seen in Figure 8. Compound class characterization utilizes a method known as time interval deconvolution (TID), which has recently been applied to the analysis of terpenes. References Gas chromatography Spectroscopy
Gas chromatography–vacuum ultraviolet spectroscopy
Physics,Chemistry
2,261
66,181
https://en.wikipedia.org/wiki/Role-based%20access%20control
In computer systems security, role-based access control (RBAC) or role-based security is an approach to restricting system access to authorized users, and to implementing mandatory access control (MAC) or discretionary access control (DAC). Role-based access control is a policy-neutral access control mechanism defined around roles and privileges. The components of RBAC such as role-permissions, user-role and role-role relationships make it simple to perform user assignments. A study by NIST has demonstrated that RBAC addresses many needs of commercial and government organizations. RBAC can be used to facilitate administration of security in large organizations with hundreds of users and thousands of permissions. Although RBAC is different from MAC and DAC access control frameworks, it can enforce these policies without any complication. Design Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user, or changing a user's department. Three primary rules are defined for RBAC: Role assignment: A subject can exercise a permission only if the subject has selected or been assigned a role. Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized. Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject's active role. With rules 1 and 2, this rule ensures that users can exercise only permissions for which they are authorized. Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles. With the concepts of role hierarchy and constraints, one can control RBAC to create or simulate lattice-based access control (LBAC). Thus RBAC can be considered to be a superset of LBAC. When defining an RBAC model, the following conventions are useful: S = Subject = A person or automated agent R = Role = Job function or title which defines an authority level P = Permissions = An approval of a mode of access to a resource SE = Session = A mapping involving S, R and/or P SA = Subject Assignment PA = Permission Assignment RH = Partially ordered Role Hierarchy. RH can also be written: ≥ (The notation: x ≥ y means that x inherits the permissions of y.) A subject can have multiple roles. A role can have multiple subjects. A role can have many permissions. A permission can be assigned to many roles. An operation can be assigned to many permissions. A permission can be assigned to many operations. A constraint places a restrictive rule on the potential inheritance of permissions from opposing roles. Thus it can be used to achieve appropriate separation of duties. For example, the same person should not be allowed to both create a login account and to authorize the account creation. Thus, using set theory notation: PA ⊆ P × R and is a many to many permission to role assignment relation. SA ⊆ S × R and is a many to many subject to role assignment relation. A subject may have multiple simultaneous sessions with/in different roles.
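A minimal sketch of the relations just listed can make the design concrete. The example below is illustrative only (the role and permission names, and the single-level hierarchy traversal, are assumptions; it is not the NIST reference implementation): subjects acquire roles through SA, permissions are attached to roles through PA, and a senior role inherits a junior role's permissions through the hierarchy RH.

```python
# Minimal RBAC sketch: SA (subject-role), PA (permission-role) and a role
# hierarchy in which a senior role inherits the permissions of its juniors.
# Names and checks are illustrative assumptions, not a standard's reference code.

PA = {                                   # permission -> roles it is assigned to
    "read_record":  {"clerk", "manager"},
    "approve_loan": {"manager"},
}
SA = {                                   # subject -> assigned roles
    "alice": {"manager"},
    "bob":   {"clerk"},
}
RH = {"manager": {"clerk"}}              # manager >= clerk (inherits its permissions)

def effective_roles(subject):
    """Roles held directly plus (one level of) inherited junior roles."""
    roles = set(SA.get(subject, set()))
    for role in list(roles):
        roles |= RH.get(role, set())
    return roles

def can(subject, permission):
    """Permission authorization: allowed only via one of the subject's roles."""
    return bool(effective_roles(subject) & PA.get(permission, set()))

print(can("alice", "approve_loan"))      # True  (manager role)
print(can("bob", "approve_loan"))        # False (clerk only)
print(can("bob", "read_record"))         # True
```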
Standardized levels The NIST/ANSI/INCITS RBAC standard (2004) recognizes three levels of RBAC: core RBAC hierarchical RBAC, which adds support for inheritance between roles constrained RBAC, which adds separation of duties Relation to other models RBAC is a flexible access control technology whose flexibility allows it to implement DAC or MAC. DAC with groups (e.g., as implemented in POSIX file systems) can emulate RBAC. MAC can simulate RBAC if the role graph is restricted to a tree rather than a partially ordered set. Prior to the development of RBAC, the Bell-LaPadula (BLP) model was synonymous with MAC and file system permissions were synonymous with DAC. These were considered to be the only known models for access control: if a model was not BLP, it was considered to be a DAC model, and vice versa. Research in the late 1990s demonstrated that RBAC falls in neither category. Unlike context-based access control (CBAC), RBAC does not look at the message context (such as a connection's source). RBAC has also been criticized for leading to role explosion, a problem in large enterprise systems which require access control of finer granularity than what RBAC can provide as roles are inherently assigned to operations and data types. In resemblance to CBAC, an Entity-Relationship Based Access Control (ERBAC, although the same acronym is also used for modified RBAC systems, such as Extended Role-Based Access Control) system is able to secure instances of data by considering their association to the executing subject. Comparing to ACL Access control lists (ACLs) are used in traditional discretionary access-control (DAC) systems to affect low-level data-objects. RBAC differs from ACL in assigning permissions to operations which change the direct-relations between several entities (see: ACLg below). For example, an ACL could be used for granting or denying write access to a particular system file, but it wouldn't dictate how that file could be changed. In an RBAC-based system, an operation might be to 'create a credit account' transaction in a financial application or to 'populate a blood sugar level test' record in a medical application. A Role is thus a sequence of operations within a larger activity. RBAC has been shown to be particularly well suited to separation of duties (SoD) requirements, which ensure that two or more people must be involved in authorizing critical operations. Necessary and sufficient conditions for safety of SoD in RBAC have been analyzed. An underlying principle of SoD is that no individual should be able to effect a breach of security through dual privilege. By extension, no person may hold a role that exercises audit, control or review authority over another, concurrently held role. Then again, a "minimal RBAC Model", RBACm, can be compared with an ACL mechanism, ACLg, where only groups are permitted as entries in the ACL. Barkley (1997) showed that RBACm and ACLg are equivalent. In modern SQL implementations, like ACL of the CakePHP framework, ACLs also manage groups and inheritance in a hierarchy of groups. Under this aspect, specific "modern ACL" implementations can be compared with specific "modern RBAC" implementations, better than "old (file system) implementations". For data interchange, and for "high level comparisons", ACL data can be translated to XACML. Attribute-based access control Attribute-based access control or ABAC is a model which evolves from RBAC to consider additional attributes in addition to roles and groups. 
In ABAC, it is possible to use attributes of: the user e.g. citizenship, clearance, the resource e.g. classification, department, owner, the action, and the context e.g. time, location, IP. ABAC is policy-based in the sense that it uses policies rather than static permissions to define what is allowed or what is not allowed. Relationship-based access control Relationship-based access control or ReBAC is a model which evolves from RBAC. In ReBAC, a subject's permission to access a resource is defined by the presence of relationships between those subjects and resources. The advantage of this model is that allows for fine-grained permissions; for example, in a social network where users can share posts with other specific users. Use and availability The use of RBAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice. A 2010 report prepared for NIST by the Research Triangle Institute analyzed the economic value of RBAC for enterprises, and estimated benefits per employee from reduced employee downtime, more efficient provisioning, and more efficient access control policy administration. In an organization with a heterogeneous IT infrastructure and requirements that span dozens or hundreds of systems and applications, using RBAC to manage sufficient roles and assign adequate role memberships becomes extremely complex without hierarchical creation of roles and privilege assignments. Newer systems extend the older NIST RBAC model to address the limitations of RBAC for enterprise-wide deployments. The NIST model was adopted as a standard by INCITS as ANSI/INCITS 359-2004. A discussion of some of the design choices for the NIST model has also been published. Potential Vulnerabilities Role based access control interference is a relatively new issue in security applications, where multiple user accounts with dynamic access levels may lead to encryption key instability, allowing an outside user to exploit the weakness for unauthorized access. Key sharing applications within dynamic virtualized environments have shown some success in addressing this problem. See also References Further reading External links FAQ on RBAC models and standards Role Based Access Controls at NIST XACML core and hierarchical role based access control profile Institute for Cyber Security at the University of Texas San Antonio Practical experiences in implementing RBAC Computer security models Access control
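To contrast with the role-centred check sketched earlier, a correspondingly minimal ABAC-style example (the attributes and the policy rule are invented for illustration, not taken from any standard) evaluates a policy over attributes of the user, the resource and the request context rather than over role membership:

```python
# Minimal ABAC-style sketch: access is decided by a policy over attributes of
# the user, the resource and the request context (all values here are invented).

def policy(user, resource, context):
    """Example rule: clearance covers classification, same department, office hours only."""
    return (user["clearance"] >= resource["classification"]
            and user["department"] == resource["department"]
            and 8 <= context["hour"] < 18)

user     = {"clearance": 3, "department": "finance"}
resource = {"classification": 2, "department": "finance"}

print(policy(user, resource, {"hour": 10}))   # True
print(policy(user, resource, {"hour": 22}))   # False (outside office hours)
```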
Role-based access control
Engineering
1,925
72,909,447
https://en.wikipedia.org/wiki/Yukaghir%20birch-bark%20carvings
The Yukaghir birch-bark carvings were traditionally drawn by Yukaghir people of Siberia on birch barks for various purposes such as mapping, record-keeping, and party games. Russian writers observed these carvings in the 1890s, and based on their descriptions, several 20th-century scholars misunderstood them to be examples of a writing system. One particular carving became well known as the "Yukaghir love letter", but is actually the product of a guessing game. Types Three kinds of Yukaghir carvings are known from the accounts of the Russian writers S. Shargorodskii and Vladimir Jochelson: The so-called "Yukaghir love letters", which are actually the product of a guessing game at social gatherings (see below). Small-scale maps drawn by men to assist in travels for hunting and other purposes. These maps used a limited set of symbols to depict features such as rivers and dwellings, so it appears that the Yukaghir men had established certain mapping conventions. Depictions of record-keeping: for example, Shargorodskii provides a picture which, according to a Yukaghir man, records that a Yukaghir woman made a shawl for him and received payment in the form of several items, such as a comb, tobacco, and buttons. According to John DeFrancis, the Yukaghir carving is "an example not of writing but of anecdotic art", whose meaning is clear only to someone in contact with its creator or with another interpreter who understands it. The so-called "Yukaghir love letter" A notable example of the Yukaghir carving is a sketch by the Russian writer S. Shargorodskii (1895), reproduced by Gustav Krahmer (1896). Shargorodskii, a member of the revolutionary group Narodnaya Volya, had been exiled to Siberia by the Tsarist regime. He spent 1892–1893 in the Yukaghir village of Nelmenoye in the Kolyma river area, near the Arctic Ocean. He gained the trust of the local Yukaghir people, and joined them in social activities. In 1895, Shargorodskii published a 10-page article titled On Yukaghir Writing in the journal Zemlevedenie. Six photographs of the alleged Yukaghir writing system accompanied the article. Shargorodskii obtained the picture of what later came to be known as a "love letter" from a Yukaghir party game, similar to charades or twenty questions. He states that he observed such pictures being made during social gatherings: a young girl would start carving on a fresh birch bark, and the onlookers would make guesses about what she was depicting. Eventually, after several incorrect guesses, all the participants in the game would arrive at a common understanding of the picture. Since the participants knew each other well, they could easily deduce the meaning of the carvings; it was not easy for outsiders to understand the meaning, but Shargorodskii could do so with the help of his Yukaghir acquaintances. According to Shargorodskii, such birch bark carvings were drawn only by young women, and dealt only with love lives. In 1896, General-Major Gustav Krahmer published a translation of Shargorodskii's article in the geographical journal Globus. Shargorodskii had referred to the pictures as "writings" and "figures", but Krahmer presented them as "letters". In 1898, Shargorodskii's friend Vladimir Jochelson, a political exile turned ethnographer, published another example of the Yukaghir carving. Subsequently, several other writers reproduced these pictures. 
Jochelson wrote that the Yukaghir men often visited the Russian settlement of Srednekolymsk for various purposes; the Yukaghir pictures were expressions of sadness by jealous Yukaghir girls, who were concerned about losing their lovers to Russian women during such visits. German writer Karl Weule (1915) published a slightly different version of Shargorodskii's picture, drawn by the artist Paul Lindner, with the caption "Yukaghir Love Letter" in a popular museum booklet. Thus, Weule appears to have been primarily responsible for promoting the idea that the Yukaghir pictures represent love letters. David Diringer's widely read book The Alphabet (1948) included an illustration, likely based on Weule's work, with the caption "Sad love-story of a Yukaghir girl". According to one interpretation, the arrow shapes represent four adults and two children. The solid and broken lines connecting the arrows represent current and previous relationships between the adults. For years, the so-called "Yukaghir love letter" was held up as the best example of ideographic picture writing. British linguist Geoffrey Sampson (1985) included a modified version of this sketch in his Writing Systems. Sampson described the sketch as a love letter sent by a Yukaghir girl to a young man, presenting it as an example of a semasiographic writing system, which is capable of "communicating its meaning independently of speech". Although Sampson did not mention his source, American linguist John DeFrancis (1989) traced it to Diringer, and ultimately to Shargorodskii. DeFrancis asserted that the pictures were not letters, but the product of a party game, in which young women could publicly express their feelings about love and separation to a small circle of friends in a socially acceptable way. In a Linguistics article, Sampson admitted that the picture was "not an example of 'communication' at all", and that he had taken the picture (and its interpretation) from Diringer. Apparently unaware of DeFrancis' work, art historian James Elkins (1999) described the Yukaghir pictures as "diagrams of emotional attachments" and "texts, because they tell stories". American linguist J. Marshall Unger dismisses this interpretation as inaccurate. References Yukaghir people Russian folklore Siberian culture Pictograms Betula
Yukaghir birch-bark carvings
Mathematics
1,274
40,903,344
https://en.wikipedia.org/wiki/Campus%20Galli
Campus Galli is a Carolingian monastic community under construction in Meßkirch, Baden-Württemberg, Germany. The construction project includes plans to build a medieval monastery according to the early ninth-century Plan of Saint Gall using techniques from that era. The long-term financing of the project is to come from revenue generated from the site's operation as a tourist attraction. The construction site has been open to visitors since June 2013. Construction site The Carolingian monastery town is located in a wooded area approximately four kilometers north of the small town of Meßkirch in southern Germany. The buildings follow the designs in the Plan of Saint Gall, the only surviving major architectural drawing from the Middle Ages, and use, as far as possible, materials and methods contemporary with the time of Charlemagne, in keeping with the goals of experimental archaeology. The major raw materials, such as wood and stone, are obtained from the site. Between 20 and 30 staff members work permanently at the site, except during the winter rest period observed according to medieval custom, from 11 November (St. Martin's Day) until 2 April (Charlemagne's birthday). The total construction time is estimated at forty years. Volunteer workers not only help with the construction, but also act as costumed interpreters. The project was launched by the Aachen-based journalist Bert Geurten with 1 million euros provided by city, state and European-Union sources. An advisory board of 18 experts in fields including archaeology, history, theology, and veterinary medicine provides the scientific management and monitoring of the construction. Construction progress A small area in the forest was cleared by the end of June 2013, and temporary shelters for the craftsmen were built. A map of the site shows areas for carpenters, basket weavers, potters, blacksmiths, stonemasons, wood turners, broom makers, roofers, textile workers, and rope makers. There are also pens for pigs, goats and sheep, and a chicken coop, along with a beehive and a herb garden for medicinal plants. In the center of the site is the ongoing construction of a wooden church. Wooden church The construction of a wooden church was started in 2014, and the main structure was completed in 2015. Construction continues on the interior and details of the exterior. Scholarly research On 20 April 2018, Campus Galli was officially named a 'teaching and research site of the University of Tübingen'. The production of ceramics and the production and processing of mortars using medieval methods at Campus Galli are of interest to the Competence Center Archaeometry Baden-Wuerttemberg (CCA-BW). Mineralogical investigations can establish the connection between archaeological finds and materials produced according to traditional methods. Archaeological experiments on ceramic firing and joint courses for students of archaeology will also be conducted as part of the cooperation. See also Duncarron in Scotland, a reconstruction of a typical residence of a Scottish clan chief from the early part of the last millennium Guédelon Castle in France, a project for an authentic construction of a medieval castle in Treigny References External links Campus Galli, official website in English City of Meßkirch Experimental archaeology Buildings and structures in Sigmaringen (district) Tourist attractions in Baden-Württemberg Monasteries in Baden-Württemberg Buildings and structures under construction in Germany Masonry
Campus Galli
Engineering
671
24,521,633
https://en.wikipedia.org/wiki/8%20Aquarii
8 Aquarii (abbreviated 8 Aqr) is a blue-white subgiant of spectral class A4IV in the constellation Aquarius. 8 Aquarii is the Flamsteed designation. It is approximately 298 light-years away from Earth, based on parallax. It has approximately 1.7 times the mass of the Sun and is about 3 times hotter, which allows lines of ionized metals to appear in its spectrum, along with an abundance of metals. References A-type subgiants Aquarius (constellation) Aquarii, 008 Durchmusterung objects 199828 103640
8 Aquarii
Astronomy
122
22,743,077
https://en.wikipedia.org/wiki/Immunoglobulin%20Y
Immunoglobulin Y (abbreviated as IgY) is a type of immunoglobulin which is the major antibody in bird, reptile, and lungfish blood. It is also found in high concentrations in chicken egg yolk. As with the other immunoglobulins, IgY is a class of proteins which are formed by the immune system in reaction to certain foreign substances, and specifically recognize them. IgY is often mislabelled as Immunoglobulin G (IgG) in older literature, and sometimes even in commercial product catalogues, due to its functional similarity to mammalian IgG and Immunoglobulin E (IgE). However, this older nomenclature is obsolete, since IgY differs both structurally and functionally from mammalian IgG and does not cross-react with antibodies raised against mammalian IgG. Since chickens can lay eggs almost every day, and the yolk of an immunised hen's egg contains a high concentration of IgY, chickens are gradually becoming popular as a source of customised antibodies for research. (Usually, mammals such as rabbits or goats are injected with the antigen of interest by the researcher or a contract laboratory.) Ducks produce a truncated form of IgY which is missing part of the Fc region. As a result, it cannot bind complement or be picked up by macrophages. IgY has also been analyzed in the Chinese soft-shelled turtle, Pelodiscus sinensis. Characteristics In chickens, immunoglobulin Y is the functional equivalent of Immunoglobulin G (IgG). Like IgG, it is composed of two light and two heavy chains. Structurally, these two types of immunoglobulin differ primarily in the heavy chains, which in IgY have a molecular mass of about 65,100 atomic mass units (amu), and are thus larger than in IgG. The light chains in IgY, with a molecular mass of about 18,700 amu, are somewhat smaller than the light chains in IgG. The molecular mass of IgY thus amounts to about 167,000 amu. The steric flexibility of the IgY molecule is less than that of IgG. Functionally, IgY is partially comparable to Immunoglobulin E (IgE), as well as to IgG. However, in contrast to IgG, IgY does not bind to Protein A, to Protein G, or to cellular Fc receptors. Furthermore, IgY does not activate the complement system. The name Immunoglobulin Y was suggested in 1969 by G.A. Leslie and L.W. Clem, after they were able to show differences between the immunoglobulins found in chicken eggs and immunoglobulin G. Other synonymous names are Chicken IgG, Egg Yolk IgG, and 7S-IgG. Bioanalytic applications As compared to mammalian antibodies, IgY offers various advantages for the targeted extraction of antibodies and their application in bioanalysis. Since the antibodies are extracted from the yolks of laid eggs, the method of antibody production is non-invasive. Thus, no blood needs to be taken from the animals for the extraction of blood serum. The available quantity of a given antibody is considerably increased through repeated egg laying from the same hen. The cross-reactivity of IgY with proteins from mammals is also markedly less than that of IgG. Furthermore, the immune response against certain antigens in chickens is more strongly expressed than in rabbits or other mammals. Of the immunoglobulins arising during the immune response, only IgY is found in chicken eggs. Thus, in preparations from chicken eggs, there is no contamination with Immunoglobulin A (IgA) or Immunoglobulin M (IgM). The yield of IgY from a chicken egg is comparable to that of IgG from rabbit serum. 
One disadvantage of IgY, as compared to mammalian antibodies, is that the isolation of IgY from egg yolk is more difficult than the isolation of IgG from blood serum. This is due in large part to the fact that IgY cannot be bound by Protein A or Protein G. Thus, it cannot be separated from other components of the assay, for example from other proteins. Additionally, the egg yolk's rich store of lipids and lipoproteins must be removed. Antibody-containing blood serums, on the other hand, can sometimes be directly used in bioanalysis, i.e., without complicated isolation steps. Utilization in foods Particularly in Asian countries, IgY has been clinically tested as a food supplement and preservative. For example, yogurt products containing pathogen-specific IgY have been tested for their ability to reduce Helicobacter pylori in the stomach by hindering the attachment of the bacterium to the stomach lining. The IgY used for this purpose is extracted from the eggs of immunized hens. Antibodies against Salmonella and other bacteria, as well as against viruses, are produced in this manner, and employed as a nutritional component for protection against these pathogens. The Food Safety Lab of Ocean University of China has experimented with using IgY specific to the bacteria Shewanella putrefaciens and Pseudomonas fluorescens as a food preservative for fish. The shelf life of fish treated with the IgY was extended from 9 days to 12–15 days, demonstrating significant antimicrobial activity against the target bacteria. Anti-Fel d1 egg IgY immunoglobulin has been successfully tested to reduce active Fel d1 in cats' saliva in order to lower the allergenic potential of treated cats. Literature Rüdiger Schade, Irene Behn, Michael Erhard: Chicken Egg Yolk Antibodies, Production and Application. Springer-Verlag, Berlin 2001. G.A. Leslie, L.W. Clem: Phylogeny of immunoglobulin structure and function. 3. Immunoglobulins of the chicken. In: Journal of Experimental Medicine. 130(6)/1969. Rockefeller University Press, pp. 1337–1352. A. Polson, M.B. von Wechmar, M.H. van Regenmortel: Isolation of viral IgY antibodies from yolks of immunized hens. In: Immunological Communications. 9(5)/1980. Dekker New York, pp. 475–493. A. Polson, M.B. von Wechmar, G. Fazakerley: Antibodies to proteins from yolk of immunized hens. In: Immunological Communications. 9(5)/1980. Dekker New York, pp. 495–514. References Table comparing mammalian IgG and IgE with avian IgY and duck truncated IgY. Gallus Immunotech, accessed 28 October 2010. Glycoproteins Antibodies
Immunoglobulin Y
Chemistry
1,457
26,165,838
https://en.wikipedia.org/wiki/HD%20152079
HD 152079 is a star with an orbiting exoplanet in the southern constellation of Ara. It is located at a distance of 287 light-years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −21 km/s. At that distance the star is much too faint to be visible with the naked eye, having an apparent visual magnitude of 9.18. This is a G-type main-sequence star with a stellar classification of G6V. Age estimates range from 1.6 to 6.2 billion years. It has 1.15 times the mass of the Sun and 1.13 times the Sun's radius. This is a metal-rich star, having a higher iron abundance than the Sun. The star is radiating 1.44 times the luminosity of the Sun from its photosphere at an effective temperature of 5,907 K. Planetary system It has one confirmed exoplanet, discovered in 2010 by the Magellan Planet Search Program. This is a super-jovian object with an eccentric orbit and a orbital period. In 2018, an analysis of HARPS data suggested the presence of an additional outer companion with a mass at least 83% of the mass of Jupiter. References G-type main-sequence stars Planetary systems with one confirmed planet Ara (constellation) CD-46 11085 152079 082632
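The distance and luminosity figures above can be cross-checked with two standard relations. The short derivation below is a rough consistency check that uses only the numbers quoted in this entry (taking the quoted 1.13 as the radius ratio), not the underlying catalogue values, so small rounding differences are expected.

```latex
% Parallax-distance relation: d [pc] = 1 / p [arcsec]
\[
  d = \frac{287\ \mathrm{ly}}{3.2616\ \mathrm{ly\,pc^{-1}}} \approx 88\ \mathrm{pc}
  \quad\Longrightarrow\quad
  p \approx \frac{1}{88\ \mathrm{pc}} \approx 0.0114'' \approx 11.4\ \mathrm{mas}.
\]
% Stefan-Boltzmann scaling: L/L_sun = (R/R_sun)^2 (T_eff/T_sun)^4
\[
  \frac{L}{L_{\odot}} \approx (1.13)^{2}\left(\frac{5907\ \mathrm{K}}{5772\ \mathrm{K}}\right)^{4} \approx 1.40,
\]
% close to the quoted 1.44; the gap reflects rounding of the quoted radius and temperature.
```

Both results agree with the quoted values to within rounding, which is the level of consistency one would expect from the figures given here.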
HD 152079
Astronomy
290