id: int64 (range 39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (range 3 to 71.8k)
subcategories: list (length 0 to 30)
2,952,018
https://en.wikipedia.org/wiki/Novobiocin
Novobiocin, also known as albamycin, is an aminocoumarin antibiotic that is produced by the actinomycete Streptomyces niveus, which has recently been identified as a subjective synonym for S. spheroides, a member of the class Actinomycetia. Other aminocoumarin antibiotics include clorobiocin and coumermycin A1. Novobiocin was first reported in the mid-1950s (then called streptonivicin).

Clinical use
It is active against Staphylococcus epidermidis and may be used to differentiate it in culture from another coagulase-negative species, Staphylococcus saprophyticus, which is resistant to novobiocin. Novobiocin was licensed for clinical use under the tradename Albamycin (Upjohn) in the 1960s. Its efficacy has been demonstrated in preclinical and clinical trials. The oral form of the drug has since been withdrawn from the market due to lack of efficacy. A combination product of novobiocin and tetracycline, sold by Upjohn under brand names such as Panalba and Albamycin-T, was the subject of particularly intense FDA scrutiny before it was finally taken off the market. Novobiocin is an effective antistaphylococcal agent used in the treatment of MRSA.

Mechanism of action
The molecular basis of action of novobiocin, and of the related drugs clorobiocin and coumermycin A1, has been examined. Aminocoumarins are very potent inhibitors of bacterial DNA gyrase and work by targeting the GyrB subunit of the enzyme, which is involved in energy transduction. Novobiocin, like the other aminocoumarin antibiotics, acts as a competitive inhibitor of the ATPase reaction catalysed by GyrB. The potency of novobiocin is considerably higher than that of the fluoroquinolones, which also target DNA gyrase but at a different site on the enzyme; the GyrA subunit is involved in the DNA nicking and ligation activity. Novobiocin has been shown to weakly inhibit the C-terminus of the eukaryotic Hsp90 protein (high micromolar IC50). Modification of the novobiocin scaffold has led to more selective Hsp90 inhibitors. Novobiocin has also been shown to bind and activate the Gram-negative lipopolysaccharide transporter LptBFGC. The ATP-binding pocket of polymerase theta is blocked by novobiocin, resulting in a loss of ATPase activity. This in turn eliminates microhomology-mediated end joining as a pathway by which homologous recombination-deficient cells circumvent DNA-damaging agents. The action of novobiocin is synergistic with PARP inhibitors in reducing tumor size in a mouse model.

Structure
Novobiocin is an aminocoumarin. It may be divided into three entities: a benzoic acid derivative, a coumarin residue, and the sugar novobiose. X-ray crystallographic studies of the drug–receptor complex of novobiocin and DNA gyrase show that ATP and novobiocin have overlapping binding sites on the gyrase molecule. The overlap of the coumarin- and ATP-binding sites is consistent with aminocoumarins being competitive inhibitors of the ATPase activity.

Structure–activity relationship
In structure–activity relationship experiments it was found that removal of the carbamoyl group located on the novobiose sugar led to a dramatic decrease in the inhibitory activity of novobiocin.

Biosynthesis
This aminocoumarin antibiotic consists of three major substituents. The 3-dimethylallyl-4-hydroxybenzoic acid moiety, known as ring A, is derived from prephenate and dimethylallyl pyrophosphate. The aminocoumarin moiety, known as ring B, is derived from L-tyrosine. 
The final component of novobiocin is the sugar derivative L-noviose, known as ring C, which is derived from glucose-1-phosphate. The biosynthetic gene cluster for novobiocin was identified by Heide and coworkers in 1999 (published 2000) from Streptomyces spheroides NCIB 11891. They identified 23 putative open reading frames (ORFs) and more than 11 other ORFs that may play a role in novobiocin biosynthesis.

The biosynthesis of ring A (see Fig. 1) begins with prephenate, which is derived from the shikimic acid biosynthetic pathway. The enzyme NovF catalyzes the decarboxylation of prephenate while simultaneously reducing nicotinamide adenine dinucleotide phosphate (NADP+) to produce NADPH. Following this, NovQ catalyzes the electrophilic substitution of the phenyl ring with dimethylallyl pyrophosphate (DMAPP), otherwise known as prenylation. DMAPP can come from either the mevalonic acid pathway or the deoxyxylulose biosynthetic pathway. Next, the 3-dimethylallyl-4-hydroxybenzoate molecule is subjected to two oxidative decarboxylations by NovR and molecular oxygen. NovR is a non-heme iron oxygenase with a unique bifunctional catalysis. In the first stage both oxygens are incorporated from molecular oxygen, while in the second step only one is incorporated, as determined by isotope-labeling studies. This completes the formation of ring A.

The biosynthesis of ring B (see Fig. 2) begins with the natural amino acid L-tyrosine. This is adenylated and thioesterified onto the peptidyl carrier protein (PCP) of NovH by ATP and NovH itself. NovI then further modifies this PCP-bound molecule by oxidizing the β-position using NADPH and molecular oxygen. NovJ and NovK form a J2K2 complex, which is the active form of this benzylic oxygenase. This process uses NADP+ as a hydride acceptor in the oxidation of the β-alcohol. The resulting ketone prefers to exist as its enol tautomer in solution. Next, a still-unidentified protein catalyzes the selective oxidation of the benzene ring (as shown in Fig. 2). Upon oxidation this intermediate spontaneously lactonizes to form the aromatic ring B, losing NovH in the process.

The biosynthesis of L-noviose (ring C) is shown in Fig. 3. This process starts from glucose-1-phosphate, where NovV takes dTTP and replaces the phosphate group with a dTDP group. NovT then oxidizes the 4-hydroxy group using NAD+. NovT also accomplishes a dehydroxylation of the 6 position of the sugar. NovW then epimerizes the 3 position of the sugar. The methylation of the 5 position is accomplished by NovU and S-adenosyl methionine (SAM). Finally, NovS reduces the 4 position again using NADH, achieving epimerization of that position relative to the starting glucose-1-phosphate.

Rings A, B, and C are coupled together and modified to give the finished novobiocin molecule. Rings A and B are joined by the enzyme NovL, which uses ATP to activate the carboxylate group of ring A so that the carbonyl can be attacked by the amine group on ring B. The resulting compound is methylated by NovO and SAM prior to glycosylation. NovM adds ring C (L-noviose) to the hydroxyl group derived from tyrosine, with the loss of dTDP. Another methylation is accomplished by NovP and SAM at the 4 position of the L-noviose sugar. This methylation allows NovN to carbamylate the 3 position of the sugar, as shown in Fig. 4, completing the biosynthesis of novobiocin. 
References

External links
Novobiocin bound to proteins in the PDB
Novobiocin
[ "Biology" ]
1,741
[ "Antibiotics", "Biocides", "Biotechnology products" ]
2,952,019
https://en.wikipedia.org/wiki/Acoustically%20Navigated%20Geological%20Underwater%20Survey
The Acoustically Navigated Geological Underwater Survey (ANGUS) was a deep-towed still-camera sled operated by the Woods Hole Oceanographic Institution (WHOI) in the early 1970s. It was the first unmanned research vehicle made by WHOI. ANGUS was encased in a large steel frame designed to explore rugged volcanic terrain and able to withstand high-impact collisions. It was fitted with three 35 mm color cameras with of film. Together, its three cameras were able to photograph a strip of the sea floor with a width up to . Each camera was equipped with strobe lights, allowing it to photograph the ocean floor from above. On the bottom of the body was a downward-facing sonar system to monitor the sled's height above the ocean floor. It was capable of working in depths up to and could therefore reach roughly 98% of the sea floor. ANGUS could remain in the deep ocean for work sessions of 12 to 14 hours at a time, taking up to 16,000 photographs in one session. ANGUS was often used to scout locations of interest to be explored and sampled later by other vehicles such as Argo or Alvin. ANGUS was used to search for and photograph underwater geysers and the creatures living near them, and it was equipped with a heat sensor to alert the tether ship when it passed over one. It was used on expeditions such as Project FAMOUS (French-American Mid Ocean Undersea Study, 1973–1974), the discovery expedition with Argo that surveyed the wreckage of the Titanic (1985), and the return mission to the Titanic (1986). ANGUS was the only ROV used on both dives to the Titanic.

On Project FAMOUS, ANGUS helped change scientists' views of the ocean floor. It showed them how varied the geological formations and chemical compositions of sediments can be, disproving previous assumptions of ocean-floor uniformity. The project also provided new insight into the theory of seafloor spreading through observation and sampling of the rock formations around ridges and the horizontal formation of layers parallel to the ridge. In another expedition with ANGUS, in 1977, scientists monitored temperatures over the ocean floor for any fluctuation. It was not until late at night that the crew noticed temperatures rise drastically. After the vehicle's session they reviewed the photographs it had taken. ANGUS provided the first photographic evidence for hydrothermal vents and black smokers, returning with over 3,000 color photos showing the vents as well as colonies of clams and other organisms. The scientists later returned with Alvin to take samples.

Scientists nicknamed ANGUS "dope on a rope" due to its durability and lack of fragile sensors. It was also given the motto "takes a lickin' but it keeps on clickin'". ANGUS was retired in the late 1980s, having completed over 250 voyages.

References

External links
Project FAMOUS: Exploring the Mid-Atlantic Ridge
Acoustically Navigated Geological Underwater Survey
[ "Physics", "Environmental_science" ]
578
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
2,952,350
https://en.wikipedia.org/wiki/Cotton%20effect
In physics, the Cotton effect is the characteristic change in optical rotatory dispersion and/or circular dichroism in the vicinity of an absorption band of a substance. In a wavelength region where the light is absorbed, the absolute magnitude of the optical rotation at first varies rapidly with wavelength, crosses zero at absorption maxima, and then again varies rapidly with wavelength but in the opposite direction. This phenomenon was discovered in 1895 by the French physicist Aimé Cotton (1869–1951). The Cotton effect is called positive if the optical rotation first increases as the wavelength decreases (as first observed by Cotton), and negative if the rotation first decreases. A protein structure such as a beta sheet shows a negative Cotton effect.

See also
Cotton–Mouton effect

References
Cotton effect
[ "Physics", "Chemistry" ]
167
[ " and optical physics stubs", "Astrophysics", " molecular", "Atomic", "Polarization (waves)", "Physical chemistry stubs", " and optical physics" ]
2,952,362
https://en.wikipedia.org/wiki/Stellar%20engineering
Stellar engineering is a type of engineering (currently a form of exploratory engineering) concerned with creating or modifying stars through artificial means. While humanity does not yet possess the technological ability to perform stellar engineering of any kind, stellar manipulation (or husbandry), requiring substantially less technological advancement than would be needed to create a new star, could eventually be performed to stabilize or prolong the lifetime of a star, mine it for useful material (known as star lifting) or use it as a direct energy source. Since a civilization advanced enough to be capable of manufacturing a new star would likely have vast material and energy resources at its disposal, it almost certainly would not need to do so.

In science fiction
Many science fiction authors have explored the possible applications of stellar engineering, among them Iain M. Banks, Larry Niven and Arthur C. Clarke.

In the novel series Star Carrier by Ian Douglas, the Sh'daar species merge many stars to make blue giants, which then explode to become black holes. These perfectly synchronized black holes form a Tipler cylinder called the Texagu Resh gravitational anomaly.

In the novel series The Book of the New Sun by Gene Wolfe, the brightness of Urth's sun seems to have been reduced by artificial means.

In the season 3 (1989) episode "Take Me to Your Leader" of the 1987 Teenage Mutant Ninja Turtles cartoon, Krang, Shredder, Bebop and Rocksteady aim a Solar Siphon at the Sun and store the solar energy in compact batteries, freezing the Earth and making it too cold for people to resist them. Once the Turtles have defeated them, Donatello reverses the flow.

In episode 12 of Stargate Universe, Destiny is dropped prematurely out of FTL by an uncharted star that the crew determines to be artificially created and younger than 200 million years; the only planet in the system is an Earth-sized world with a biosphere exactly like Earth's.

In the TV series Firefly, set 500 years in the future, several gas giants are "helioformed" to create viable suns for the surrounding planets and moons.

In the Space Empires series, the last available technology for research is called Stellar Manipulation. In addition to the ability to create and destroy stars, this branch also gives a race the ability to create and destroy black holes, wormholes, nebulae, planets, ringworlds and sphereworlds. As described above, this technology is so advanced that once players are able to use it, they usually no longer need it. This is even more the case with the last two: once one of these megastructures is complete, the race controlling the ringworld or sphereworld has almost unlimited resources, usually leading to the defeat of the others.

In The Saga of the Seven Suns by Kevin J. Anderson, humans are able to convert gas giant planets into stars through the use of a "Klikiss Torch". This device creates a wormhole between two points in space, allowing a neutron star to be dropped into the planet and ignite stellar nuclear fusion.

References
Stellar engineering
[ "Technology" ]
646
[ "Exploratory engineering" ]
2,952,363
https://en.wikipedia.org/wiki/Optical%20rotatory%20dispersion
In optics, optical rotatory dispersion is the variation of the specific rotation of a medium with respect to the wavelength of light. It is usually described by the German physicist Paul Drude's empirical relation:

$$[\alpha]_\lambda^T = \frac{A}{\lambda^2 - \lambda_0^2}$$

where $[\alpha]_\lambda^T$ is the specific rotation at temperature $T$ and wavelength $\lambda$, and $A$ and $\lambda_0$ are constants that depend on the properties of the medium. Optical rotatory dispersion has applications in organic chemistry for determining the structure of organic compounds.

Principles of operation
When white light passes through a polarizer, the extent of rotation of light depends on its wavelength. Short wavelengths are rotated more than longer wavelengths, per unit of distance. Because the wavelength of light determines its color, a variation of color with distance through the sample tube is observed. This dependence of specific rotation on wavelength is called optical rotatory dispersion.

In all materials the rotation varies with wavelength. The variation is caused by two quite different phenomena. The first accounts in most cases for the majority of the variation in rotation and should not strictly be termed rotatory dispersion. It depends on the fact that optical activity is actually circular birefringence: a substance which is optically active transmits right circularly polarized light with a different velocity from left circularly polarized light. In addition to this pseudodispersion, which depends on the material thickness, there is a true rotatory dispersion which depends on the variation with wavelength of the indices of refraction for right and left circularly polarized light.

For wavelengths that are absorbed by the optically active sample, the two circularly polarized components will be absorbed to differing extents. This unequal absorption is known as circular dichroism. Circular dichroism causes incident linearly polarized light to become elliptically polarized. The two phenomena are closely related, just as are ordinary absorption and dispersion. If the entire optical rotatory dispersion spectrum is known, the circular dichroism spectrum can be calculated, and vice versa.

Chirality
In order for a molecule (or crystal) to exhibit circular birefringence and circular dichroism, it must be distinguishable from its mirror image. An object that cannot be superimposed on its mirror image is said to be chiral, and optical rotatory dispersion and circular dichroism are known as chiroptical properties. Most biological molecules have one or more chiral centers and undergo enzyme-catalyzed transformations that either maintain or invert the chirality at one or more of these centers. Still other enzymes produce new chiral centers, always with high specificity. These properties account for the fact that optical rotatory dispersion and circular dichroism are widely used in organic and inorganic chemistry and in biochemistry.

In the absence of magnetic fields, only chiral substances exhibit optical rotatory dispersion and circular dichroism. In a magnetic field, even substances that lack chirality rotate the plane of polarized light, as shown by Michael Faraday. Magnetic optical rotation is known as the Faraday effect, and its wavelength dependence is known as magnetic optical rotatory dispersion. In regions of absorption, magnetic circular dichroism is observable.

See also
Absorption
Circular dichroism
Enzyme
Magnetic circular dichroism
Polarimetry
Polarography
Hyper–Rayleigh scattering optical activity
Raman optical activity (ROA)
Stereochemistry

References
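To make the wavelength dependence of the Drude relation above concrete, here is a minimal numeric sketch in Python; the values of A and lambda0 are illustrative assumptions, not measurements for any real substance.

```python
# Minimal sketch of the single-term Drude relation for optical rotatory
# dispersion: [alpha](lambda) = A / (lambda^2 - lambda0^2).
# A and lambda0 below are illustrative placeholders, not measured values.

def drude_rotation(wavelength_nm: float, A: float = 1.0e7, lambda0_nm: float = 290.0) -> float:
    """Specific rotation (degrees) at a wavelength away from the absorption band."""
    return A / (wavelength_nm**2 - lambda0_nm**2)

for wl in (400, 500, 600, 700):
    print(f"{wl} nm: {drude_rotation(wl):+.2f} deg")
# Shorter wavelengths rotate more, as described above; the simple Drude
# form diverges near lambda0, where it no longer applies.
```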
Optical rotatory dispersion
[ "Physics", "Chemistry" ]
704
[ "Stereochemistry", "Astrophysics", "Space", "nan", "Spacetime", "Polarization (waves)" ]
2,952,442
https://en.wikipedia.org/wiki/Beijing%E2%80%93Shanghai%20high-speed%20railway
The Beijing–Shanghai high-speed railway (or Jinghu high-speed railway) is a high-speed railway that connects two major economic zones in the People's Republic of China: the Bohai Economic Rim and the Yangtze River Delta. Construction began on April 18, 2008, and the line opened to the public for commercial service on June 30, 2011. It is the world's longest high-speed line ever constructed in a single phase.

The line is one of the busiest high-speed railways in the world, transporting over 210 million passengers in 2019, more than the annual ridership of the entire TGV or Intercity Express network. It is also China's most profitable high-speed rail line, reporting a ¥11.9 billion ($1.86 billion) net profit in 2019.

The non-stop train from Beijing South station to Shanghai Hongqiao station was expected to take 3 hours and 58 minutes, making it the fastest scheduled train in the world, compared to 9 hours and 49 minutes on the fastest trains running on the parallel conventional railway. At first, trains were limited to a lower maximum speed, with the fastest train taking 4 hours and 48 minutes to travel from Beijing South to Shanghai Hongqiao, with one stop at Nanjing South. On September 21, 2017, full-speed operation was restored with the introduction of the China Standardized EMU. This reduced travel times between Beijing and Shanghai to about 4 hours 18 minutes on the fastest scheduled trains, making those services the fastest in the world. The Beijing–Shanghai high-speed railway went public on the Shanghai Stock Exchange in 2020.

Specifications
The Beijing–Shanghai High-Speed Railway Co., Ltd. was in charge of construction. The project was expected to cost 220 billion yuan (about $32 billion). An estimated 220,000 passengers are expected to use the trains each day, which is double the current capacity. During peak hours, trains should run every five minutes. About 87% of the railway is elevated, and there are 244 bridges along the line. The Danyang–Kunshan Grand Bridge is the longest bridge in the world, the viaduct between Langfang and Qingxian is the second longest, and the Cangde Grand Bridge between Beijing's 4th Ring Road and Langfang is the fifth longest. The line also includes 22 tunnels, and most of the track is ballastless.

According to Zhang Shuguang, then deputy chief designer of China's high-speed railway network, the line was designed for a high continuous operating speed, with provision for an even higher maximum. The planned average commercial speed from Beijing to Shanghai would cut the train travel time from 10 hours to 4 hours. The rolling stock used on this line consists mainly of CRH380 trains. The CTCS-3 train control system is used on the line, allowing high-speed running with a minimum train interval of 3 minutes. With a capacity of about 1,050 passengers per train, the energy consumption per passenger from Beijing to Shanghai should be less than 80 kWh.

History
Beijing and Shanghai were not linked by rail until 1912, when the Jinpu railway between Tianjin and Pukou was completed. Together with the existing Beijing–Tianjin railway, completed in 1900, and the Huning railway between Nanjing and Shanghai, opened in 1908, this formed a through route, interrupted only by a ferry across the Yangtze River between Pukou and Nanjing. A weekly Beijing–Shanghai direct train was first introduced in 1913. 
In 1933, a train ride from Beijing to Shanghai took around 44 hours. Passengers had to get off in Pukou with their luggage, board a ferry named "Kuaijie" across the Yangtze, and get on a connecting train in Xiaguan on the other side of the river. In 1933, the Nanjing Train Ferry was opened for service. The new train ferry, "Changjiang" (Yangtze), built by a British company, was able to carry 21 freight cars or 12 passenger cars. Passengers could remain on the train when crossing the river, and the travel time was thus cut to around 36 hours. The train service was suspended during the Japanese invasion.

In 1949, the journey from Shanghai's North railway station to Beijing (then Beiping) took 36 hours, 50 minutes. In 1956 the trip time was cut to 28 hours, 17 minutes. In the early 1960s, the travel time was further cut to 23 hours, 39 minutes. In October 1968, the Nanjing Yangtze River Bridge was opened, and the travel time was cut to 21 hours, 34 minutes. As new diesel locomotives were introduced in the 1970s, the speed was increased further; in 1986, the travel time was 16 hours, 59 minutes. China introduced six rounds of speed increases on the line from 1997 to 2007. In October 2001, train T13/T14 took about 14 hours from Beijing to Shanghai. On April 18, 2004, Z-series trains were introduced, cutting the trip time to 11 hours, 58 minutes. There were five trains departing around 7 pm every day, each 7 minutes apart, arriving at their destinations the next morning. The railway was completely electrified in 2006.

On April 18, 2007, the new CRH bullet train was introduced on the upgraded railway as part of the Sixth Railway Speed-Up Campaign. A day-time train, D31, served the route, departing from Beijing at 10:50 every morning and arriving at Shanghai at 20:49 in the evening, travelling mostly at an elevated speed (and faster still in a very short section between Anting and Shanghai West). In 2008, overnight sleeper CRH trains were introduced, replacing the locomotive-hauled Z sleeper trains. With a new high-speed intercity line opening between Nanjing and Shanghai in the summer of 2010, the sleeper trains made use of the high-speed line in the Shanghai–Nanjing section, travelling at high speed for a longer distance. The fastest sleeper trains took 9 hours, 49 minutes, with four intermediate stops.

As the Nanjing Yangtze Bridge connected the two sections of the railway into a continuous line, the entire railway between Beijing and Shanghai was renamed the Jinghu Railway, with Jing (京) being the standard Chinese abbreviation for Beijing, and Hu (沪) short for Shanghai. The Jinghu Railway has served as China's busiest railway for nearly a century. Due to rapid growth in passenger and freight traffic over the last 20 years, the line has reached and surpassed capacity.

Dedicated high-speed rail proposal
The Jinghu high-speed railway was proposed in the early 1990s, because one quarter of the country's population lived along the existing Beijing–Shanghai rail line. In December 1990, the Ministry of Railways submitted to the National People's Congress a proposal to build the Beijing–Shanghai high-speed railway parallel to the existing Beijing–Shanghai railway line. In 1995, Premier Li Peng announced that work on the Beijing–Shanghai high-speed railway would begin in the 9th Five-Year Plan (1996–2000). The Ministry's initial design for the high-speed rail line was completed, and a report was submitted for state approval in June 1998. 
The construction plan was set in 2004, after a five-year debate on whether to use steel-on-steel rail track or maglev technology. Maglev was not chosen due to its incompatibility with China's existing rail-and-track technology and its high price, roughly twice that of conventional rail technology.

Technology debate
Although engineers originally said construction could take until 2015, China's Ministry of Railways initially promised a 2010 opening date for the new line. However, the Ministry did not anticipate an ensuing debate over the possible use of maglev technology. Although more traditional steel-on-steel rail technology was chosen for the railway, the technology debate resulted in a substantial delay of the railway's feasibility studies, which were completed in March 2006. The current rolling stock is the CRH380AL, a Chinese electric high-speed train developed by China South Locomotive & Rolling Stock Corporation Limited (CSR). The CRH380A is one of four Chinese train series designed for the new standard operating speed on newly constructed Chinese high-speed main lines; the other three are the CRH380B, CRH380C and CRH380D.

Engineering challenges
Testing began shortly thereafter on the main-line section between Shanghai and Nanjing. This section of the line sits on the soft soil of the Yangtze Delta, giving engineers an example of the more difficult challenges they would face in later construction. In addition to these challenges, the high-speed trains use extensive amounts of aluminium alloy, with specially designed windscreen glass capable of withstanding bird strikes.

Construction
Construction work began on April 18, 2008. Track-laying started on July 19, 2010, and was completed on November 15, 2010. On December 3, 2010, a 16-car CRH380AL trainset set a speed record on the Zaozhuang West to Bengbu section of the line during a test run. On January 10, 2011, another 16-car modified CRH380BL train set a further speed record during a test run. The overhead catenary work was completed on February 4, 2011, for the entire line. According to CCTV, more than 130,000 construction workers and engineers were at work at the peak of the construction phase. According to the Ministry of Railways, construction used twice as much concrete as the Three Gorges Dam, and 120 times the amount of steel in the Beijing National Stadium. There are 244 bridges and 22 tunnels built to standardized designs, and the route is monitored by 321 seismic, 167 wind-speed and 50 rainfall sensors.

Start of service
Tickets were put on sale at 09:00 on June 24, 2011, and sold out within an hour. To compete with the new train service, airlines slashed the cost of flights between Beijing and Shanghai by up to 65%; economy air fares between Beijing and Shanghai fell by 52%. Sleeper bullet trains on the upgraded railway were cancelled at first, but later resumed. The new line will increase the freight capacity of the old line by 50 million tons per year between Beijing and Shanghai.

In its second week in service, the system experienced three malfunctions in four days. On July 10, 2011, trains were delayed after heavy winds and a thunderstorm caused power supply problems in Shandong. On July 12, 2011, trains were delayed again when another power failure occurred in Suzhou. On July 13, 2011, a transformer malfunction in Changzhou forced a train to halve its top speed, and passengers had to transfer to a backup train. 
Within two weeks after opening, airline prices had rebounded due to frequent malfunctions on the line. Airline ticket sales were down only 5% in July 2011 compared to June 2011, after the opening of the line. On August 12, 2011, after several delays caused by equipment problems, 54 CRH380BL trains running on this line were recalled by their manufacturer. They returned to regular service on November 16, 2011. A spokesman for the Ministry of Railways apologized for the glitches and delays, stating that in the two weeks since service had begun, only 85.6% of trains had arrived on time.

Finances
In 2006, it was estimated that the line would cost between CN¥130 billion (US$16.25 billion) and ¥170 billion ($21.25 billion). The following year, the estimated cost was revised to ¥200 billion ($25 billion), or ¥150 million per kilometer. Due to rapid rises in the costs of labor, construction materials and land acquisition over the intervening years, by July 2008 the estimated cost had increased to ¥220 billion ($32 billion). By then, the state-owned company Beijing–Shanghai High-Speed Railway, established to raise funds for the project, had raised ¥110 billion, with the remainder to be sourced from local governments, share offerings, bank loans and, for the first time for a railway project, foreign investment. In the end, investment in the project totaled ¥217.6 billion ($34.7 billion).

In 2016 it was revealed that, in the previous year, the Beijing–Shanghai High-Speed Railway Company (BSHSRC) had total assets of ¥181.54 billion ($28 billion), revenue of ¥23.42 billion ($3.6 billion) and a net profit of ¥6.58 billion (US$1 billion), and it was labeled the most profitable railway line in the world. In 2019, the Jinghu Express Railway Company submitted an application for an IPO; the company announced that the Jinghu HSR had recorded a net profit of ¥9.5 billion (US$1.35 billion) in the first nine months of 2019. In 2020, BSHSRC went public as the first high-speed rail operator in China to do so. The proceeds of the IPO will be used to purchase a 65% stake in the Beijing Fuzhou Railway Passenger Dedicated Line Anhui Company, which operates the Hefei–Bengbu high-speed railway, Hefei–Fuzhou high-speed railway (Anhui section), Shangqiu–Hangzhou high-speed railway (Anhui section, still under construction) and Zhengzhou–Fuyang high-speed railway (Anhui section).

Rolling stock
Services use the CR400AF, CR400BF, CRH380A, CRH380B, and CRH380C trainsets; prior to 2014, slower services used CRH2 and CRH5 trainsets. First and Second Class coaches are available on all trains. On the shorter trains, a six-person Premier Class compartment is available. The longer trains offer up to 28 Business Class seats and a full-length dining car.

Operation and ridership
More than 90 trains a day run between Beijing South and Shanghai Hongqiao from 07:00 until 18:00. The line's average ridership in its initial two weeks of operation was 165,000 passengers daily, while 80,000 passengers every day continued to ride the slower and less expensive old railway. The figure of 165,000 daily riders was three-quarters of the forecast of 220,000. Passenger numbers continued to grow after the opening, with 230,000 passengers using the line each day by 2013. By March 2013, the line had carried 100 million passengers. By 2015, ridership had grown to 489,000 passengers per day, and by 2017 average ridership had reached over 500,000 passengers per day. 
The line has gradually gained popularity over the years and is reaching its capacity on weekends and holidays. With the introduction of the China Standardized EMU, the line's top operating speed was raised on September 21, 2017. The fastest train (G7) completes the journey in 4 hours 18 minutes, making two stops along the way, at Jinan and Nanjing. In 2019, in response to high passenger demand, 17-car-long Fuxing trains started operating on the line.

Fares
On June 13, 2011, the list of fares was announced at a Ministry of Railways press conference. The fares from Beijing South to Shanghai Hongqiao, in RMB yuan, are listed below:

Note: *Only available on services using the CRH380AL, CRH380BL and CRH380CL trains

Online ticketing service
Passengers can buy tickets online. If the passenger uses a 2nd-generation PRC ID card or an international passport, this document can be used directly as the ticket to enter the station and pass the ticketing gates.

Components
Stations and service
There are 24 stations on the line. Cruising speeds depend on the service. Fares are calculated based on distance traveled, regardless of speed and travel time. More than 40 pairs of daily scheduled train services travel end-to-end along this route, and hundreds more use only a segment of it.

Note: * – Lines in italic text are under construction or planned

The travel time column in the following table lists only the shortest possible time to reach a given station from Beijing. Different services make different stops along the way, and no service stops at every station.

Bridges
The railway line has some of the longest bridges in the world. They include:
Danyang–Kunshan Grand Bridge – longest bridge in the world
Tianjin Grand Bridge – fourth longest bridge in the world
Beijing Grand Bridge
Cangzhou–Dezhou Grand Bridge
Nanjing Qinhuai River Bridge
Zhenjiang Beijing–Hangzhou Canal Bridge

Notes
From its native Mandarin name.

References
Beijing–Shanghai high-speed railway
[ "Engineering" ]
3,466
[ "Megaprojects" ]
2,952,567
https://en.wikipedia.org/wiki/Gas%20engine
A gas engine is an internal combustion engine that runs on a fuel gas (a gaseous fuel), such as coal gas, producer gas, biogas, landfill gas, natural gas or hydrogen. In the United Kingdom and other British English-speaking countries, the term is unambiguous. In the United States, due to the widespread use of "gas" as an abbreviation for gasoline (petrol), such an engine is sometimes called by a clarifying term, such as gaseous-fueled engine or natural gas engine. In modern usage, the term gas engine generally refers to a heavy-duty industrial engine capable of running continuously at full load for a high fraction of the 8,760 hours in a year, unlike a gasoline automobile engine, which is lightweight, high-revving and typically runs for no more than 4,000 hours in its entire life. Typical power outputs cover a wide range.

History
Lenoir
There were many experiments with gas engines in the 19th century, but the first practical gas-fuelled internal combustion engine was built by the Belgian engineer Étienne Lenoir in 1860. However, the Lenoir engine suffered from a low power output and high fuel consumption.

Otto and Langen
Lenoir's work was further researched and improved by the German engineer Nicolaus August Otto, who was later to invent the first four-stroke engine to efficiently burn fuel directly in a piston chamber. In August 1864 Otto met Eugen Langen who, being technically trained, glimpsed the potential of Otto's development, and one month after the meeting founded the first engine factory in the world, N.A. Otto & Cie, in Cologne. In 1867 Otto patented his improved design, and it was awarded the Grand Prize at the 1867 Paris World Exhibition. This atmospheric engine worked by drawing a mixture of gas and air into a vertical cylinder. When the piston has risen about eight inches, the gas-air mixture is ignited by a small pilot flame burning outside, which forces the piston (connected to a toothed rack) upwards, creating a partial vacuum beneath it. No work is done on the upward stroke. The work is done when the piston and toothed rack descend under the effects of atmospheric pressure and their own weight, turning the main shaft and flywheels as they fall. Its advantage over the existing steam engine was its ability to be started and stopped on demand, making it ideal for intermittent work such as barge loading or unloading.

Four-stroke engine
The atmospheric gas engine was in turn replaced by Otto's four-stroke engine. The changeover to four-stroke engines was remarkably rapid, with the last atmospheric engines being made in 1877. Liquid-fuelled engines soon followed, using diesel (around 1898) or gasoline (around 1900).

Crossley
The best-known builder of gas engines in the United Kingdom was Crossley of Manchester, who in 1869 acquired the United Kingdom and world (except German) rights to the patents of Otto and Langen for the new gas-fuelled atmospheric engine. In 1876 they acquired the rights to the more efficient Otto four-stroke cycle engine.

Tangye
There were several other firms based in the Manchester area as well. Tangye Ltd., of Smethwick, near Birmingham, sold its first gas engine, a 1 nominal horsepower two-cycle type, in 1881, and in 1890 the firm commenced manufacture of the four-cycle gas engine.

Preservation
The Anson Engine Museum in Poynton, near Stockport, England, has a collection of engines that includes several working gas engines, including the largest running Crossley atmospheric engine ever made. 
Current manufacturers
Manufacturers of gas engines include Hyundai Heavy Industries, Rolls-Royce (Bergen Engines AS), Kawasaki Heavy Industries, Liebherr, MTU Friedrichshafen, INNIO Jenbacher, Caterpillar Inc., Perkins Engines, MWM, Cummins, Wärtsilä, INNIO Waukesha, Guascor Energy, Deutz, MTU, MAN, Scania AB, Fairbanks-Morse, Doosan, Eaton (successor to another former large market-share holder, Cooper Industries), and Yanmar. Output ranges from micro combined heat and power (CHP) units upward. Generally speaking, the modern high-speed gas engine is very competitive with gas turbines up to a certain size, depending on circumstances, and the best ones are much more fuel-efficient than the gas turbines. Rolls-Royce (Bergen Engines), Caterpillar and many other manufacturers base their products on a diesel engine block and crankshaft. INNIO Jenbacher and Waukesha are the only two companies whose engines are designed and dedicated to gas alone.

Typical applications
Stationary
Typical applications are base-load or high-hour generation schemes, including combined heat and power (CHP), landfill gas, mines gas, well-head gas and biogas, where the waste heat from the engine may be used to warm the digesters. Gas engines are rarely used for standby applications, which remain largely the province of diesel engines. One exception is the small (<150 kW) emergency generator often installed by farms, museums, small businesses, and residences. Connected to either natural gas from the public utility or propane from on-site storage tanks, these generators can be arranged for automatic starting upon power failure.

Transport
Liquefied natural gas (LNG) engines are expanding into the marine market, as the lean-burn gas engine can meet the new emission requirements without any extra fuel treatment or exhaust cleaning systems. Use of engines running on compressed natural gas (CNG) is also growing in the bus sector. Users in the United Kingdom include Reading Buses. Use of gas buses is supported by the Gas Bus Alliance, and manufacturers include Scania AB.

Use of gaseous methane or propane
Since natural gas, chiefly methane, has long been an economical and readily available fuel, many industrial engines are either designed or modified to use gas, as distinguished from gasoline. Their operation produces less complex-hydrocarbon pollution, and the engines have fewer internal problems. One example is the liquefied petroleum gas (chiefly propane) engine used in vast numbers of forklift trucks. Common United States usage of "gas" to mean "gasoline" requires the explicit identification of a natural gas engine. There is also such a thing as "natural gasoline", but this term, which refers to a subset of natural gas liquids, is very rarely observed outside the refining industry.

Technical details
Fuel-air mixing
A gas engine differs from a petrol engine in the way the fuel and air are mixed. A petrol engine uses a carburetor or fuel injection, but a gas engine often uses a simple venturi system to introduce gas into the air flow. Early gas engines used a three-valve system, with separate inlet valves for air and gas.

Exhaust valves
The weak point of a gas engine compared to a diesel engine is the exhaust valves, since the gas engine's exhaust gases are much hotter for a given output, and this limits the power output. 
Thus, a diesel engine from a given manufacturer will usually have a higher maximum output than the same engine block size in the gas engine version. The diesel engine will generally have three different ratings — standby, prime, and continuous (known in the United Kingdom as the 1-hour, 12-hour and continuous ratings) — whereas the gas engine will generally only have a continuous rating, which will be less than the diesel continuous rating.

Ignition
Various ignition systems have been used, including hot-tube ignitors and spark ignition. Some modern gas engines are essentially dual-fuel engines: the main source of energy is the gas-air mixture, but it is ignited by the injection of a small volume of diesel fuel.

Energy balance
Thermal efficiency
Gas engines that run on natural gas typically have a thermal efficiency between 35% and 45% (LHV basis). As of 2018, the best engines can achieve a thermal efficiency of up to 50% (LHV basis); these are usually medium-speed engines, such as those from Bergen Engines. This fraction of the fuel energy arises at the output shaft; the remainder appears as waste heat. Large engines are more efficient than small engines. Gas engines running on biogas typically have a slightly lower efficiency (~1–2%), and syngas reduces the efficiency further still. GE Jenbacher's recent J624 engine is the world's first high-efficiency methane-fueled 24-cylinder gas engine.

When considering engine efficiency one should consider whether it is based on the lower heating value (LHV) or higher heating value (HHV) of the gas. Engine manufacturers will typically quote efficiencies based on the lower heating value of the gas, i.e. the efficiency after the latent heat of the water vapour formed in combustion has been deducted. Gas distribution networks will typically charge based upon the higher heating value of the gas, i.e. its total energy content. A quoted engine efficiency based on LHV might be, say, 44%, whereas the same engine might have an efficiency of 39.6% based on the HHV of natural gas. It is also important to ensure that efficiency comparisons are on a like-for-like basis. For example, some manufacturers use mechanically driven pumps whereas others use electrically driven pumps to circulate engine cooling water, and the electrical usage is sometimes ignored, giving a falsely high apparent efficiency compared to the direct-drive engines.

Combined heat and power
Engine reject heat can be used for building heating or heating a process. In an engine, roughly half the waste heat arises (from the engine jacket, oil cooler and after-cooler circuits) as hot water, which can be at up to 110 °C. The remainder arises as high-temperature heat which can generate pressurised hot water or steam by the use of an exhaust gas heat exchanger.

Engine cooling
The two most common engine types are air-cooled and water-cooled engines. Water-cooled engines nowadays use antifreeze in the coolant circuit. Some engines (air- or water-cooled) have an added oil cooler. Cooling is required to remove excessive heat, as overheating can cause engine failure, usually from wear, cracking or warping.

Gas consumption formula
The following formula gives the gas flow requirement of a gas engine at norm conditions and full load:

$$Q = \frac{P}{\eta_{mech} \cdot \mathrm{LHV}}$$
where:
$Q$ is the gas flow at norm conditions,
$P$ is the engine power,
$\eta_{mech}$ is the mechanical efficiency, and
LHV is the lower heating value of the gas.

Gallery of historic gas engines

See also
Autogas
CHP Directive
Cogeneration
Gas turbine
History of the internal combustion engine
List of natural gas vehicles
Anson Engine Museum

References
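To make the consumption formula and the LHV/HHV bookkeeping above concrete, here is a minimal Python sketch; the 10 kWh/Nm³ LHV and the 1.11 HHV/LHV ratio are typical assumed values for natural gas, not figures from the article.

```python
# Sketch: gas flow requirement and LHV -> HHV efficiency conversion.
# Assumed inputs (typical for natural gas, not from the article):
LHV_KWH_PER_NM3 = 10.0   # lower heating value, kWh per normal cubic metre
HHV_LHV_RATIO = 1.11     # HHV is roughly 11% above LHV for natural gas

def gas_flow_nm3_per_h(power_kw: float, efficiency_lhv: float) -> float:
    """Q = P / (eta * LHV), at norm conditions and full load."""
    return power_kw / (efficiency_lhv * LHV_KWH_PER_NM3)

def efficiency_hhv(efficiency_lhv: float) -> float:
    """Convert an LHV-based efficiency to an HHV basis."""
    return efficiency_lhv / HHV_LHV_RATIO

print(gas_flow_nm3_per_h(1000.0, 0.44))  # ~227 Nm3/h for a 1 MW engine at 44% (LHV)
print(efficiency_hhv(0.44))              # ~0.396, matching the 44% -> 39.6% example above
```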
Gas engine
[ "Physics", "Technology" ]
2,173
[ "Physical systems", "Machines", "Stationary engines", "Engines" ]
2,952,577
https://en.wikipedia.org/wiki/Safety%20integrity%20level
In functional safety, safety integrity level (SIL) is defined as the relative level of risk reduction provided by a safety instrumented function (SIF), i.e. a measure of the performance required of the SIF. In the functional safety standards based on the IEC 61508 standard, four SILs are defined, with SIL 4 being the most dependable and SIL 1 the least. The applicable SIL is determined based on a number of quantitative factors in combination with qualitative factors, such as risk assessments and safety lifecycle management. Other standards, however, may define the SIL numbers differently.

SIL allocation
Assignment, or allocation, of SIL is an exercise in risk analysis in which the risk associated with a specific hazard, which is intended to be protected against by a SIF, is calculated without the beneficial risk-reduction effect of the SIF. That unmitigated risk is then compared against a tolerable risk target. If the unmitigated risk is higher than tolerable, the difference between the two must be addressed through risk reduction provided by the SIF. This amount of required risk reduction is correlated with the SIL target: in essence, each order of magnitude of required risk reduction corresponds to an increase of one SIL, up to a maximum of SIL 4. Should the risk assessment establish that the required SIL cannot be achieved even by a SIL 4 SIF, then alternative arrangements must be designed, such as non-instrumented safeguards (e.g., a pressure relief valve).

There are several methods used to assign a SIL. These are normally used in combination, and may include:
Risk matrices
Risk graphs
Layer of protection analysis (LOPA)

Of the methods presented above, LOPA is by far the most commonly used in large industrial facilities, such as chemical process plants. The assignment may be tested using both pragmatic and controllability approaches, applying industry guidance such as that published by the UK HSE. SIL assignment processes that use the HSE guidance to ratify assignments developed from risk matrices have been certified to meet IEC 61508.

Problems
There are several problems inherent in the use of safety integrity levels. These can be summarized as follows:
Poor harmonization of definitions across the different standards bodies which utilize SIL.
Process-oriented metrics for derivation of SIL.
Estimation of SIL based on reliability estimates.
System complexity, particularly in software systems, making SIL estimation difficult to impossible.

These lead to such erroneous statements as the tautology "This system is a SIL N system because the process adopted during its development was the standard process for the development of a SIL N system", or use of the SIL concept out of context, such as "This is a SIL 3 heat exchanger" or "This software is SIL 2". According to IEC 61508, the SIL concept must be related to the dangerous failure rate of a system, not just its failure rate or the failure rate of a component part, such as the software. Definition of the dangerous failure modes by safety analysis is intrinsic to the proper determination of the failure rate.

SIL types and certification
The International Electrotechnical Commission's (IEC) standard IEC 61508 defines SIL using requirements grouped into two broad categories: hardware safety integrity and systematic safety integrity. A device or system must meet the requirements for both categories to achieve a given SIL. 
The SIL requirements for hardware safety integrity are based on a probabilistic analysis of the device. In order to achieve a given SIL, the device must meet targets for the maximum probability of dangerous failure and a minimum safe failure fraction. The concept of 'dangerous failure' must be rigorously defined for the system in question, normally in the form of requirement constraints whose integrity is verified throughout system development. The actual targets required vary depending on the likelihood of a demand, the complexity of the device(s), and the types of redundancy used.

PFD (probability of dangerous failure on demand) and RRF (risk reduction factor) for low-demand operation for the different SILs, as defined in IEC EN 61508, are as follows:

SIL 1: PFD 10^-2 to < 10^-1 (RRF 10 to 100)
SIL 2: PFD 10^-3 to < 10^-2 (RRF 100 to 1,000)
SIL 3: PFD 10^-4 to < 10^-3 (RRF 1,000 to 10,000)
SIL 4: PFD 10^-5 to < 10^-4 (RRF 10,000 to 100,000)

For continuous operation, these change to the following, where PFH is the probability of dangerous failure per hour:

SIL 1: PFH 10^-6 to < 10^-5
SIL 2: PFH 10^-7 to < 10^-6
SIL 3: PFH 10^-8 to < 10^-7
SIL 4: PFH 10^-9 to < 10^-8

Hazards of a control system must be identified and then analysed through risk analysis. Mitigation of these risks continues until their overall contribution to the hazard is considered acceptable. The tolerable level of these risks is specified as a safety requirement in the form of a target 'probability of a dangerous failure' in a given period of time, stated as a discrete SIL.

Certification schemes, such as the CASS Scheme (Conformity Assessment of Safety-related Systems), are used to establish whether a device meets a particular SIL. Third parties that can provide certification include Bureau Veritas, CSA Group, TÜV Rheinland, TÜV SÜD and UL, among others. Self-certification is also possible. The requirements of these schemes can be met either by establishing a rigorous development process, or by establishing that the device has sufficient operating history to argue that it has been proven in use. Certification is achieved by proving the functional safety capability (FSC) of the organization, usually by assessment of its functional safety management (FSM) program, and by assessment of the design and life-cycle activities of the product to be certified, conducted on the basis of specifications, design documents, test specifications and results, failure rate predictions, FMEAs, etc.

Electric and electronic devices can be certified for use in functional safety applications according to IEC 61508. There are a number of application-specific standards based on or adapted from IEC 61508, such as IEC 61511 for the process industry sector. This standard is used in the petrochemical and hazardous chemical industries, among others.

Standards
The following standards use SIL as a measure of reliability and/or risk reduction. 
ANSI/ISA S84 (functional safety of safety instrumented systems for the process industry sector)
IEC 61508 (functional safety of electrical/electronic/programmable electronic safety-related systems)
IEC 61511 (implementing IEC 61508 in the process industry sector)
IEC 61513 (implementing IEC 61508 in the nuclear industry)
IEC 62061 (implementing IEC 61508 in the domain of machinery safety)
EN 50128 (railway applications – software for railway control and protection)
EN 50129 (railway applications – safety-related electronic systems for signalling)
EN 50657 (railway applications – software on board of rolling stock)
EN 50402 (fixed gas detection systems)
ISO 26262 (automotive industry)
MISRA (guidelines for safety analysis, modelling, and programming in automotive applications)

See also
As low as reasonably practicable (ALARP)
High-integrity pressure protection system (HIPPS)
Reliability engineering
Spurious trip level (STL)

References

Further reading
Hartmann, H.; Thomas, H.; Scharpf, E. (2022). Practical SIL Target Selection – Risk Analysis per the IEC 61511 Safety Lifecycle. Exida.
Houtermans, M.J.M. (2014). SIL and Functional Safety in a Nutshell (2nd ed.). Prime Intelligence. ASIN B00MTWSBG2.
Medoff, M.; Faller, R. (2014). Functional Safety – An IEC 61508 SIL 3 Compliant Development Process (3rd ed.). Exida.
Punch, Marcus (2013). Functional Safety for the Mining and Machinery-based Industries (2nd ed.). Tenambit, N.S.W.: Marcus Punch.

External links
61508.org – The 61508 Association
Functional Safety, A Basic Guide
IEC Safety and functional safety – The IEC functional safety site
Safety Integrity Level Manual (Archived) – Pepperl+Fuchs SIL Manual
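As a numeric companion to the low-demand PFD/RRF bands quoted above, the following Python sketch maps a computed average PFD to its SIL band; the function name and threshold encoding are our own illustration of the IEC 61508 low-demand table, not code from any standard.

```python
# Sketch: map a low-demand probability of failure on demand (PFDavg)
# to a SIL band per the IEC 61508 low-demand table quoted above.
# The risk reduction factor (RRF) is simply 1 / PFDavg.

def sil_for_pfd(pfd_avg: float) -> int:
    """Return the SIL band (1-4) for a low-demand PFDavg, or 0 if outside all bands."""
    if 1e-5 <= pfd_avg < 1e-4:
        return 4
    if 1e-4 <= pfd_avg < 1e-3:
        return 3
    if 1e-3 <= pfd_avg < 1e-2:
        return 2
    if 1e-2 <= pfd_avg < 1e-1:
        return 1
    return 0  # outside the SIL 1-4 low-demand bands

pfd = 5e-4                   # hypothetical SIF with a PFDavg of 5e-4
print(sil_for_pfd(pfd))      # -> 3
print(round(1 / pfd))        # -> 2000, i.e. a risk reduction factor of 2,000
```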
Safety integrity level
[ "Chemistry", "Engineering" ]
1,641
[ "Chemical process engineering", "Safety engineering", "Process safety" ]
2,952,636
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28pressure%29
This is a tabulated listing of the orders of magnitude of pressure, expressed in pascals. psi values prefixed with + or − denote values relative to Earth's sea-level standard atmospheric pressure (psig); otherwise, psia is assumed.

References
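Since the listing distinguishes gauge (psig) from absolute (psia) readings, a small conversion sketch may help; it assumes only the standard conversion factors (1 psi = 6,894.757 Pa; 1 standard atmosphere = 101,325 Pa).

```python
# Sketch: convert between psia, psig and pascals.
# 1 psi = 6894.757 Pa; standard atmosphere = 101325 Pa (~14.696 psi).
PSI_TO_PA = 6894.757
ATM_PA = 101325.0

def psia_to_pa(psia: float) -> float:
    """Absolute pressure in pascals from an absolute (psia) reading."""
    return psia * PSI_TO_PA

def psig_to_pa(psig: float) -> float:
    """Absolute pressure in pascals from a gauge (psig) reading."""
    return psig * PSI_TO_PA + ATM_PA

print(psia_to_pa(14.696))  # ~101325 Pa, about one standard atmosphere
print(psig_to_pa(0.0))     # 101325 Pa: zero gauge pressure is 1 atm absolute
```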
Orders of magnitude (pressure)
[ "Mathematics" ]
61
[ "Quantity", "Orders of magnitude", "Units of measurement", "Units of pressure" ]
2,952,847
https://en.wikipedia.org/wiki/Hayride
A hayride, also known as a hayrack ride, is a traditional American and Canadian activity consisting of a recreational ride in a wagon or cart pulled by a tractor, horses or a truck, which has been loaded with hay or straw for comfortable seating.

Tradition
Hayrides traditionally have been held as celebratory activities, usually in connection with the celebration of the autumn harvest. Hayrides originated with farmhands and working farm children riding loaded hay wagons back to the barn for unloading, which was one of the few times during the day one could stop to rest during the frenetic days of the haying season. By the late 19th century, with the spread of the railroads, tourism and summer vacations in the country had become popular with urban families, many of whom had read idealized accounts of hayrides in children's books. To capitalize on the demand, local farmers began offering "genuine hayrides" on wagons loaded with hay, since one could make more cash income selling rides to "summer people" than by selling the same wagon-load of hay (although most farmers did both). During this era, farming was transforming from a subsistence system to a cash system, and there were few options for bringing real money into the average farm. Over time the hayride became a real tradition, although the original concept of riding on top of a load of hay was gradually replaced with a simple ride in a wagon sitting on a layer of hay intended to cushion the ride. This was considered far safer than (if not as fun as) riding perched 15–20 feet up on a slippery pile of hay on a moving vehicle.

Contemporary hayrides are often organized commercially, providing an additional source of income for farms, or exist on their own as a franchised activity during the fall season. During fall, a hayride may feature a stop at a pumpkin patch where passengers can pick a pumpkin, or be dropped off to pick apples. Hayrides may also deliver customers to the entrance of a corn maze.

Haunted hayrides
Hayrides at Halloween are often dubbed "haunted hayrides". These hayrides sometimes incorporate special effects and actors portraying ghosts, monsters, and other spooky creatures to attract thrill-seekers and capitalize on the Halloween season. Haunted hayrides are held all over North America, most prominently on the East Coast, including major ones in Crownsville, MD and Mountville, PA.

Accidents
Although hayrides are typically regarded as a lighthearted activity, there have been incidents where hayrides have flipped or veered off-road and resulted in injuries or death. Other accidents, such as a 1989 incident in Cormier-Village, can occur when hayrides collide with other vehicles on or near roads.

See also
Louisiana Hayride
1989 Cormier-Village hayride accident

References
Hayride
[ "Physics", "Technology" ]
593
[ "Physical systems", "Machines", "Amusement rides" ]
2,952,874
https://en.wikipedia.org/wiki/Retene
Retene, methyl isopropyl phenanthrene or 1-methyl-7-isopropylphenanthrene, C18H18, is a polycyclic aromatic hydrocarbon present in the coal tar fraction boiling above 360 °C. It occurs naturally in the tars obtained by the distillation of resinous woods. It crystallizes in large plates, which melt at 98.5 °C and boil at 390 °C. It is readily soluble in warm ether and in hot glacial acetic acid. Sodium and boiling amyl alcohol reduce it to a tetrahydroretene, but if it is heated with phosphorus and hydriodic acid to 260 °C, a dodecahydride is formed. Chromic acid oxidizes it to retene quinone, phthalic acid and acetic acid. It forms a picrate that melts at 123–124 °C. Retene is derived by degradation of specific diterpenoids biologically produced by conifer trees. The presence of traces of retene in the air is an indicator of forest fires; it is a major product of pyrolysis of conifer trees. It is also present in effluents from wood pulp and paper mills. Retene, together with cadalene, simonellite and ip-iHMN, is a biomarker of vascular plants, which makes it useful for paleobotanic analysis of rock sediments. The retene/cadalene ratio in sediments can reveal the proportion of the family Pinaceae in the biosphere. Health effects A recent study has shown that retene, a component of Amazonian organic PM10, is cytotoxic to human lung cells. References Petroleum products Phenanthrenes Biomarkers Isopropyl compounds Polycyclic aromatic hydrocarbons
Retene
[ "Chemistry", "Biology" ]
391
[ "Petroleum", "Biomarkers", "Petroleum products" ]
2,953,306
https://en.wikipedia.org/wiki/Mercedes-Benz%20COMAND
COMAND (Cockpit Management and Data system) is an in-car communications and entertainment system found on Mercedes-Benz vehicles. COMAND features a dedicated flat display screen. It includes software features such as a GPS navigation system, address book, telephone, and radio. Various devices such as CD/DVD changers, sound system, TV receiver and the Linguatronic voice control system can be installed as additional options. The first generations of COMAND used the D2B optical network standard whereas later models are based on MOST. COMAND systems provide integration between the various functions of the car such as multimedia, navigation and telephony. On vehicles with a Mercedes-Benz rear seat entertainment system, COMAND allows the rear seat displays to play content from the front system or from local sources like composite input. On newer Mercedes models, COMAND can control other vehicle functions such as the HVAC system, seat controls, and interior lighting. COMAND was introduced first on the S-Class and CL-Class models. Later, it became available on other Mercedes cars too. Model history COMAND 2.5 Somewhat confusingly, COMAND 2.5 (not to be confused with the much later COMAND-APS NTG2.5) actually refers to the first generation of COMAND systems, introduced on the W220. The "2.5" label seems to refer to the fact that the main COMAND unit for this first generation had a height of 2.5 DIN. This COMAND system had a cassette drive, a built-in CD drive for the navigation map discs, an FM/AM radio tuner, a 4-channel amplifier and external connectors to other systems. The European models used Tele Atlas map discs (CD). Towards the end of 1999, the system was upgraded to use the improved DX type navigation discs. COMAND 2.5 uses a D2B optical bus for connection to the external CD changer, the telephone system, the optional Bose surround sound system and the optional Linguatronic voice control system. The COMAND 2.5 unit was made by Bosch. COMAND 2.5 was an option on almost all models and denoted by Mercedes-Benz option code 352. COMAND 2.0/COMAND 2.0 MOPF The same COMAND 2.5 technology (with DX navigation maps) was later incorporated in a somewhat different form factor known as COMAND 2.0 (with "2.0" referring to the fact that this modified unit had a 2 DIN height), but with the cassette drive removed. These units were introduced on the W210 E-Class as rectangular units, and later introduced in a more rounded form on the W203 and R230 models, among other models. From S-Class model year 2003, COMAND 2.5 was replaced by a widescreen version of COMAND 2.0 known as COMAND 2.0 MOPF. Note that COMAND 2.5, COMAND 2.0 and COMAND 2.0 MOPF are collectively referred to as "COMAND 2" systems, despite these being the first generation COMAND systems and despite the successor being known as COMAND-APS NTG1. COMAND-APS NTG1 This new generation of COMAND systems was introduced on the model year 2002 W211 E-Class and was a complete redesign. The D2B optical ring network was replaced by MOST. The Mercedes-Benz option code became 527 (although, confusingly, COMAND 2.0 MOPF was also given this option code). These new MOST based systems were given the name COMAND-APS to distinguish them from the older D2B systems. The NTG1 system further distinguished itself from earlier models by having DVD based navigation instead of the CD based COMAND 2. This allowed a single disc to carry a whole region (such as Europe). NTG1 systems were also able to play MP3 CDs/DVDs.
COMAND-APS NTG2 The NTG2 evolution was a cheaper and more integrated version of COMAND-APS, having all core components in the single double DIN head unit instead of three separate components as had been the case with NTG1 (an audio gateway including the radio, amplifier, and MOST controller; a head unit with display; and a navigation processor) and thereby also simplifying the wiring. However, unlike NTG1 models, the NTG2 COMAND was unable to play MP3 discs. To use the navigation at the same time as listening to an audio disc requires the optional CD changer. This version of COMAND was used in various models such as the W203 C-Class and the W209 CLK and even the Vito/Viano. It was also fitted to the Smart Forfour. COMAND-APS NTG3/NTG3.5 Intended for the flagship W221 S-Class and the W216 CL-Class, COMAND NTG3 is a high-end system. Unlike for most other Mercedes-Benz cars, the S-Class and CL-Class from now on have COMAND as standard, allowing even better integration and for COMAND to operate even more vehicle functions. The navigation maps are stored on an internal hard disk instead of a DVD disc (although updates are installed from an update DVD). In addition to operating the audio, video, navigation and telecommunication systems, the NTG3 COMAND also controls a host of other features, such as the multi-contour and drive-dynamic seats, the HVAC system, the rear window shade, the vehicle locking, alarm and immobiliser, interior and exterior lighting functions, the optional ambient lighting feature, easy entry/exit settings, etc. The NTG3 system comes with a large, high resolution 8" TFT 16:9 widescreen colour COMAND display mounted higher and more directly into the driver's line of sight and with a separate, large rotary controller mounted on the centre console in between the front seats. This system was the first COMAND to support DAB radio on the MOST bus. NTG3 uses an in-dash DVD (and CD) changer and CompactFlash reader for MP3 music. A digital TV tuner and a Harman Kardon Logic7 surround sound system with 14 speakers and a 600W 13-channel DSP amplifier can be optionally installed and controlled via COMAND. Where a factory fitted rear seat entertainment package is installed, this can use the NTG3 digital TV tuner and the surround sound system for playing out its audio over the speakers instead of headphones. As with previous models, Linguatronic voice command and control is available as an option too. The main controller for NTG3 is a large rotary dial with haptic feedback, although there are also buttons for quick access to commonly used functions and even a "favourite" button that can be assigned a function of choice. In addition, buttons on the steering wheel give access to various COMAND functions too, and the large screen in the instrument cluster can display various COMAND related settings and information in addition to the main COMAND display. The model year 2009 W221 S-Class received an upgraded version, COMAND-APS NTG3.5. This includes improved Bluetooth support and split view (where the passenger watches a DVD while the driver sees other COMAND functions such as the navigation map on the same display). The new NTG3.5 system also features a new rights management provision to prevent the use of copied map update discs. This rights management function means it is not possible to upgrade NTG3 based vehicles to NTG3.5. COMAND-APS NTG4 The NTG4 system is a reduced cost version of NTG3 technology.
It was first introduced when the W204 C-Class launched in 2007 and features a 7" screen, much smaller than the bigger, higher-resolution 8" screen on the flagship W221. It is the first version of COMAND that supports the Mercedes Media Interface. Like NTG3, it stores maps on a hard disk and has a card reader for MP3 music. Amongst other implementation differences is the fact that in the W204 version the screen electrically folds in and out, whereas in other incarnations it is fixed. The Mercedes SLS has COMAND NTG4 too. Unlike the NTG3 system, NTG4 did not support 7 character UK postcodes when first released (only 5 character postcodes), but some NTG4 units (e.g. those in the MY2010 W212 and up) have a firmware upgrade available from Mercedes dealers which does add 7 character UK postcodes. NTG4 is also used in the Mercedes-Benz GLK-Class (X204). COMAND-APS NTG2.5 Somewhat confusingly, COMAND-APS NTG2.5 was introduced after NTG4, replacing COMAND in models previously fitted with NTG1 and NTG2 systems (apart from the W209 CLK, as that model was replaced by the W207 E-Class coupe). The unit has an SD card reader in addition to the DVD drive. An optional DVD changer can replace the single drive. The Mercedes-Benz option code is 512 for the single drive unit and 527 for the unit with the optional DVD changer. Like NTG4 models, it supports the Media Interface. The unit also stores the navigation maps on an internal HDD, which has some extra space for MP3 files. COMAND-APS NTG4.5 In late 2011, Mercedes started fitting the new COMAND-APS NTG4.5 to its W204 cars for Model Year 2012, and then to other models such as the W212, W207 and R172 SLK. It thus became the latest generation of COMAND for its non-flagship cars (i.e. cars other than the S-Class and CL-Class). It is also referred to as COMAND Online as it can use the mobile broadband connection on the telephone to connect to Mercedes-Benz Internet based services. It allows the running of various downloadable apps (such as Facebook) in the COMAND system. It uses a 7" colour display and has an SD card slot. The resolution is 800×480 pixels. Initially, the graphics were brown and black, and this system became known as GEN1. The colour scheme of the display was then changed to Grey/Black/Red at the same time as the graphics in the instrument cluster changed; this is known as GEN2 and coincided with the MY2013 update. Initially different firmware was used in GEN1 and GEN2, but later the firmware for GEN1 and GEN2 was merged into one firmware release and the colour scheme became a run-time configuration item. Again, Media Interface is supported and maps are stored on a hard disk and restricted, as with NTG3.5, by a PIN based rights management protection feature. Like NTG3/3.5 it also has a separate rotary controller (although smaller than for NTG3/3.5 based vehicles). COMAND-APS NTG4.7 COMAND NTG4.7 was installed in vehicles from June 2013 onwards, such as the GLK350 (2015). It supports direct tethering of iPhones and Android phones for internet functions, which was not previously possible with these devices. It also features internet radio plus slight changes to the route guidance. The differences from NTG4.5 are new Mercedes Apps, a CPU upgrade, an increase from an 80 GB to a 100 GB SATA hard disk, and improved navigation rendering performance, which together make the system more fluid.
Supported file formats are:
Audio:
MP3 (supported bit rates: fixed and variable 32 kbit/s to 320 kbit/s, sampling rates of 8 kHz to 48 kHz)
WMA (fixed 5 kbit/s to 384 kbit/s, sampling rates of 8 kHz to 48 kHz)
CD-A
AAC (Apple formats): .aac, .mp4, .m4a, and .m4b (copy-protected files are not supported)
Video:
MPEG
WMV
M4V
AVI
COMAND-APS NTG5.0
This is the last version to use the COMAND name and was introduced in the 2013/2014 S-Class, the 2014 GLA and the C-Class. It includes Mercedes Online Radio, which is broadcast from Europe 24/7. The maps zoom in/out more smoothly and are enriched with graphics and functions. The core of the system is an Intel Atom processor. There are three generations of NTG5: NTG5*1, NTG5*2 and NTG5.5. NTG5*1 has a keypad, whereas NTG5*2 and NTG5.5 do not. COMAND NTG5*1 has support for Apple CarPlay and Android Auto. NTG5*2 is fitted in S, CL, C, GLC, AMG-GTS, X-Class and the new generation Vito. NTG5*1 is fitted to facelift A, B, CLA, GLA, W207-E, W212-E, CLS, GLE and GLS vehicles. NTG5.5 was fitted to the new W213 E-Class and is replacing NTG5*2 as vehicles are face-lifted. The NTG5.0 plays the following file formats:
Audio:
MP3 v1, 128 kbit/s, 44.1 kHz, stereo (.mp3)
MP3 v1, 320 kbit/s, 44.1 kHz, stereo (.mp3)
ISO Media, MPEG v4 system, iTunes AAC-LC (.m4a)
PCM, 16 bit, stereo, 44100 Hz (.wav)
Video:
MPEG sequence, v1, system multiplex (DVD PAL, MPEG2) (.mpg)
MPEG-4 AVC H264, AAC audio (.mkv)
ISO Media, MPEG v4 system, version 1 (.mp4)
Mercedes-Benz User Experience (MBUX)
The COMAND name is no longer in use and has been replaced by MBUX in many Mercedes-Benz vehicles. A second-generation system, called "My MBUX", is being introduced to the new W223 series S-Class.
Marketing
OMD produced a mobile phone commercial for the MBUX feature.
References
Advanced driver assistance systems Automotive technology tradenames Human–computer interaction Mercedes-Benz
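The NTG4.7 MP3 envelope quoted above reduces to a simple range check. A minimal Python sketch; the function is illustrative and not part of any Mercedes-Benz API.

def mp3_supported(bitrate_kbps: int, sample_rate_hz: int) -> bool:
    """Check a file against the NTG4.7 MP3 limits quoted above:
    32-320 kbit/s (fixed or variable), 8-48 kHz sampling."""
    return 32 <= bitrate_kbps <= 320 and 8000 <= sample_rate_hz <= 48000

print(mp3_supported(192, 44100))  # True: inside both ranges
print(mp3_supported(24, 44100))   # False: below the 32 kbit/s floor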
Mercedes-Benz COMAND
[ "Engineering" ]
3,022
[ "Human–computer interaction", "Human–machine interaction" ]
2,953,344
https://en.wikipedia.org/wiki/Stagnation%20pressure
In fluid dynamics, stagnation pressure, also referred to as total pressure, is the pressure a fluid would attain if all of its kinetic energy were converted into pressure in a reversible manner; it is defined as the sum of the free-stream static pressure and the free-stream dynamic pressure. The Bernoulli equation applicable to incompressible flow shows that the stagnation pressure is equal to the static pressure and dynamic pressure combined. In compressible flows, stagnation pressure is equal to total pressure provided that the fluid entering the stagnation point is brought to rest isentropically. Stagnation pressure is sometimes referred to as pitot pressure because the two pressures are equal. Magnitude The magnitude of stagnation pressure can be derived from the Bernoulli equation for incompressible flow with no height changes. For any two points 1 and 2:

$p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2$

The two points of interest are 1) in the freestream flow at relative speed $v$, where the pressure is called the "static" pressure (for example well away from an airplane moving at speed $v$); and 2) at a "stagnation" point where the fluid is at rest with respect to the measuring apparatus (for example at the end of a pitot tube in an airplane). Then

$p_0 = p_s + \tfrac{1}{2}\rho v^2$

where:
$p_0$ is the stagnation pressure
$\rho$ is the fluid density
$v$ is the speed of the fluid
$p_s$ is the static pressure

So the stagnation pressure is increased over the static pressure by the amount $\tfrac{1}{2}\rho v^2$, which is called the "dynamic" or "ram" pressure because it results from fluid motion. In our airplane example, the stagnation pressure would be atmospheric pressure plus the dynamic pressure. In compressible flow however, the fluid density is higher at the stagnation point than at the static point. Therefore, $\tfrac{1}{2}\rho v^2$ can't be used for the dynamic pressure. For many purposes in compressible flow, the stagnation enthalpy or stagnation temperature plays a role similar to the stagnation pressure in incompressible flow. Compressible flow Stagnation pressure is the static pressure a gas retains when brought to rest isentropically from Mach number $M$:

$\frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{\frac{\gamma}{\gamma - 1}}$

or, assuming an isentropic process, the stagnation pressure can be calculated from the ratio of stagnation temperature to static temperature:

$\frac{p_0}{p} = \left(\frac{T_0}{T}\right)^{\frac{\gamma}{\gamma - 1}}$

where:
$p_0$ is the stagnation pressure
$p$ is the static pressure
$T_0$ is the stagnation temperature
$T$ is the static temperature
$\gamma$ is the ratio of specific heats

The above derivation holds only for the case when the gas is assumed to be calorically perfect (specific heats and the ratio of the specific heats are assumed to be constant with temperature). See also Hydraulic ram Stagnation temperature Notes References L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. Cengel, Y.; Boles, M., Thermodynamics: An Engineering Approach, McGraw-Hill. External links Pitot-Statics and the Standard Atmosphere F. L. Thompson (1937), The Measurement of Air Speed in Airplanes, NACA Technical Note #616, from SpaceAge Control. Fluid dynamics
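The two relations above are straightforward to evaluate numerically. A minimal Python sketch, assuming sea-level air (rho ≈ 1.225 kg/m³, gamma = 1.4); the function names are illustrative.

def stagnation_pressure_incompressible(p_static, rho, v):
    """p0 = p + 1/2 * rho * v^2 (Bernoulli, incompressible flow)."""
    return p_static + 0.5 * rho * v**2

def stagnation_pressure_compressible(p_static, mach, gamma=1.4):
    """Isentropic relation p0/p = (1 + (gamma-1)/2 * M^2)^(gamma/(gamma-1))."""
    return p_static * (1 + (gamma - 1) / 2 * mach**2) ** (gamma / (gamma - 1))

# Sea-level air at 50 m/s: dynamic pressure ~1.5 kPa on top of 101.3 kPa.
print(round(stagnation_pressure_incompressible(101325, 1.225, 50.0)))  # 102856
# At Mach 0.15 (roughly 51 m/s) the compressible result is nearly identical,
# showing the incompressible formula is a good low-Mach approximation.
print(round(stagnation_pressure_compressible(101325, 0.15)))           # ~102930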
Stagnation pressure
[ "Chemistry", "Engineering" ]
635
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
2,953,441
https://en.wikipedia.org/wiki/Canopy%20%28biology%29
In biology, the canopy is the aboveground portion of a plant cropping or crop, formed by the collection of individual plant crowns. In forest ecology, the canopy is the upper layer or habitat zone, formed by mature tree crowns and including other biological organisms (epiphytes, lianas, arboreal animals, etc.). The communities that inhabit the canopy layer are thought to be involved in maintaining forest diversity, resilience, and functioning. Shade trees normally have a dense canopy that blocks light from lower growing plants. Early observations of canopies were made from the ground using binoculars or by examining fallen material. Researchers would sometimes erroneously rely on extrapolation by using more reachable samples taken from the understory. In some cases, they would use unconventional methods such as chairs suspended on vines or hot-air dirigibles, among others. Modern technology, including adapted mountaineering gear, has made canopy observation significantly easier and more accurate, allowed for longer and more collaborative work, and broadened the scope of canopy study. Structure Canopy structure is the organization or spatial arrangement (three-dimensional geometry) of a plant canopy. Leaf area index, leaf area per unit ground area, is a key measure used to understand and compare plant canopies. The canopy is taller than the understory layer. The canopy holds 90% of the animals in the rainforest. Canopies can cover vast distances and appear to be unbroken when observed from an airplane. However, despite overlapping tree branches, rainforest canopy trees rarely touch each other. Rather, they are usually separated by a few feet. Dominant and co-dominant canopy trees form the uneven canopy layer. Canopy trees are able to photosynthesize relatively rapidly with abundant light, so the canopy supports the majority of primary productivity in forests. The canopy layer provides protection from strong winds and storms while also intercepting sunlight and precipitation, leading to a relatively sparsely vegetated understory layer. Forest canopies are home to unique flora and fauna not found in other layers of forests. The highest terrestrial biodiversity resides in the canopies of tropical rainforests. Many rainforest animals have evolved to live solely in the canopy and never touch the ground. The canopy of a rainforest is typically about 10 m thick, and intercepts around 95% of sunlight. The canopy is below the emergent layer, a sparse layer of very tall trees, typically one or two per hectare. With an abundance of water and a near ideal temperature in rainforests, light and nutrients are two factors that limit tree growth from the understory to the canopy. In the permaculture and forest gardening community, the canopy is the highest of seven layers. Ecology Forest canopies have unique structural and ecological complexities and are important for the forest ecosystem. They are involved in critical functions such as rainfall interception, light absorption, nutrient and energy cycling, gas exchange, and providing habitat for diverse wildlife. The canopy also plays a role in modifying the internal environment of the forest by acting as a buffer for incoming light, wind, and temperature fluctuations. The forest canopy layer supports a diverse range of flora and fauna.
It has been dubbed "the last biotic frontier" as it provides a habitat that has allowed for the evolution of countless species of plants, microorganisms, invertebrates (e.g., insects), and vertebrates (e.g., birds and mammals) that are unique to the upper layer of forests. Forest canopies are arguably considered some of the most species-rich environments on the planet. It is believed that the communities found within the canopy layer play an essential role in the functioning of the forest, as well as maintaining diversity and ecological resilience. Climate regulation Forest canopies are significantly involved in maintaining the stability of the global climate. They are responsible for at least half of the global carbon dioxide exchange between terrestrial ecosystems and the atmosphere. Forest canopies act as carbon sinks, reducing the increase of atmospheric CO2 caused by human activity. The destruction of forest canopies would lead to the release of carbon dioxide, resulting in an increased concentration of atmospheric CO2. This would then contribute to the greenhouse effect, thereby causing the planet to become warmer. Canopy interception See also References Further reading External links International Canopy Access Network Botanical terminology Forest ecology Habitat Rainforests
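Leaf area index, introduced above as leaf area per unit ground area, is a simple ratio. A minimal Python sketch with invented sample numbers:

def leaf_area_index(total_leaf_area_m2, ground_area_m2):
    """LAI = one-sided leaf area per unit ground area (dimensionless)."""
    return total_leaf_area_m2 / ground_area_m2

# e.g. 600 m^2 of leaf surface over a 100 m^2 plot gives LAI = 6.0,
# in the range reported for dense forest canopies.
print(leaf_area_index(600.0, 100.0))  # 6.0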
Canopy (biology)
[ "Biology" ]
869
[ "Botanical terminology" ]
2,953,519
https://en.wikipedia.org/wiki/Spring-gun
A spring-gun (also called a booby trap gun) is a gun, often a shotgun, rigged to fire when a string or other triggering device is tripped by contact of sufficient force to "spring" the trigger, so that anyone stumbling over or treading on it discharges the gun. Setting or maintaining a spring-gun is illegal in many places. Uses Spring-guns were formerly used as booby traps against poachers and trespassers. Since 1827, spring-guns and all man-traps have been illegal in England. Spring-guns are sometimes used to trap animals. Reported cases of such use are few, though several unconfirmed cases were noted over the 20th century. In the 18th century, spring-guns were often used to protect graveyards, offering an alarm system of sorts to protect newly buried bodies, which were often stolen by grave-robbers who supplied anatomists with cadavers. Spring-guns were often set to protect property. For this purpose, spring-guns are often placed in busy corridors, such as near doors. A trespasser opening the door completely would then be shot. Residents who are aware of the trap use a different door or open the door halfway and disconnect the tripwire. To reduce fatalities, non-lethal calibers are often used, or the spring-gun is fitted to fire less-lethal ammunition. For example, in the United States, most spring-guns are loaded with non-lethal caliber or shot to avoid liability arising from the use of deadly force in the protection of a property interest. Posting clear and unmistakable warning signs, as well as making entry to spring-gun guarded premises difficult for innocent persons (for example with high walls, fences and natural obstacles), are significant ways to reduce potential tort liability arising from the spring-gun's wounding of a careless or criminal intruder. Important US lawsuits regarding trespassers wounded by spring-guns include Katko v. Briney. Bird v. Holbrook is an 1825 English case also of great relevance, in which a spring-gun set to protect a tulip garden injured a trespasser who was recovering a stray bird. The man who set the spring-gun was liable for the damage caused. Another example was the Zf.Ger.38 used for training; originally intended to fire blank rounds, it was later used for static defence. Documented examples A historic use of a spring-gun occurred during the night of June 3 or early morning of June 4, 1775, when a spring-gun set by the British to protect the military stores in the Magazine in Williamsburg, Virginia, wounded two young men who had broken in. The subsequent outrage by the local population proved to be the final act of the Gunpowder Incident, leading Governor Dunmore to flee the city to a British warship and declare the Commonwealth of Virginia in a state of rebellion. In 1981, Rene Seiptius and two friends attempted to flee from East Germany to West Germany. While they managed to avoid land mines, they did trip a spring-gun, killing one of Rene's friends. In 1990, one man in a group of four burglars was killed during a burglary by a spring-gun that was set up by a business owner in Colorado. The business had been burglarized eight times during the previous two years, including at least one previous burglary by the man who died. The man who set the trap pled guilty to manslaughter.
To deter thefts, other businesses in the area put up signs claiming their premises were also booby trapped, with the unintended result that firefighters and other emergency personnel would refuse to enter these buildings during emergencies until they could be assured of their safety. Alternatives Alternative traps are mines such as gas mines or the directional mine, such as the SM-70, which was used on the inner German border to prevent refugees from escaping East Germany. Crowd-control munitions and gas mines can be less lethal, while concussion mines are meant to kill. The latter are thus only used in military perimeter defenses. See also Anti-handling device Area denial Cartridge trap Mantrap Booby trap Katko v. Briney Sentry gun References External links Methley Archive - Spring-guns in Methley Park to deter trespassers Area denial weapons Tripwire weapons
Spring-gun
[ "Engineering" ]
880
[ "Area denial weapons", "Military engineering" ]
2,953,922
https://en.wikipedia.org/wiki/Microcrystalline%20wax
Microcrystalline waxes are a type of wax produced by de-oiling petrolatum, as part of the petroleum refining process. In contrast to the more familiar paraffin wax which contains mostly unbranched alkanes, microcrystalline wax contains a higher percentage of isoparaffinic (branched) hydrocarbons and naphthenic hydrocarbons. It is characterized by the fineness of its crystals in contrast to the larger crystal of paraffin wax. It consists of high molecular weight saturated aliphatic hydrocarbons. It is generally darker, more viscous, denser, tackier and more elastic than paraffin waxes, and has a higher molecular weight and melting point. The elastic and adhesive characteristics of microcrystalline waxes are related to the non-straight chain components which they contain. Typical microcrystalline wax crystal structure is small and thin, making them more flexible than paraffin wax. It is commonly used in cosmetic formulations. Microcrystalline waxes when produced by wax refiners are typically produced to meet a number of ASTM specifications. These include congeal point (ASTM D938), needle penetration (ASTM D1321), color (ASTM D6045), and viscosity (ASTM D445). Microcrystalline waxes can generally be put into two categories: "laminating" grades and "hardening" grades. The laminating grades typically have a melting point of 140–175 °F (60–80 °C) and needle penetration of 25 or above. The hardening grades will range from about 175–200 °F (80–93 °C), and have a needle penetration of 25 or below. Color in both grades can range from brown to white, depending on the degree of processing done at the refinery level. Microcrystalline waxes are derived from the refining of the heavy distillates from lubricant oil production. This by-product must then be de-oiled at a wax refinery. Depending on the end use and desired specification, the product may then have its odor removed and color removed (which typically starts as a brown or dark yellow). This is usually done by means of a filtration method or by hydro-treating the wax material. Industries and applications Microcrystalline wax is often used in industries such as tire and rubber, candles, adhesives, corrugated board, cosmetics, castings, and others. Refineries may use blending facilities to combine paraffin and microcrystalline waxes; this is prevalent in the tire and rubber industries. Microcrystalline waxes have considerable application in the custom making of jewelry and small sculptures. Different formulations produce waxes from those soft enough to be molded by hand to those hard enough to be carved with rotary tools. The melted wax can be cast to make multiple copies that are further carved with details. Jewelry suppliers sell wax molded into the basic forms of rings as well as details that can be heat welded together and tubes and sheets for cutting and building the wax models. Rings may be attached to a wax "tree" so that many can be cast in one pouring. A brand of microcrystalline wax, Renaissance Wax, is also used extensively in museum and conservation settings for protection and polishing of antique woods, ivory, gemstones, and metal objects. It was developed by The British Museum in the 1950s to replace the potentially unstable natural waxes that were previously used such as beeswax and carnauba. Microcrystalline waxes are excellent materials to use when modifying the crystalline properties of paraffin wax. The microcrystalline wax has significantly more branching of the carbon chains that are the backbone of paraffin wax.
This is useful when some desired functional changes in the paraffin are needed, such as flexibility, higher melt point, and increased opacity. They are also used as slip agents in printing ink. Microcrystalline wax is used in such sports as ice hockey, skiing and snowboarding. It is applied to the friction tape of an ice hockey stick to prevent degradation of the tape due to water destroying the glue on the tape and also to increase control of the hockey puck due to the wax’s adhesive quality. It is also applied to the underside of skis and snowboards as glide wax to reduce friction and increase the gliding ability of the board, making it easier to control; stickier grades of kick or grip wax are also used on cross-country skis to allow the ski to alternately grip the snow and slip across it as the skier shifts their weight while striding. Microcrystalline wax was used in the final phases of the restoration of the Cosmatesque pavement, Westminster Abbey, London. Use in petrolatum Microcrystalline wax is also a key component in the manufacture of petrolatum. The branched structure of the carbon chain backbone allows oil molecules to be incorporated into the crystal lattice structure. The desired properties of the petrolatum can be modified by using microcrystalline wax bases of different congeal points (ASTM D938) and needle penetration (ASTM D1321). However, key industries that utilize petrolatum, such as the personal care, cosmetic, and candle industries, have pushed for more materials that are considered "green" and based on renewable resources. As an alternative, hybrid petrolatum can be used. Hybrid petrolatum utilizes a complex mixture of vegetable oils and waxes and combines them with petroleum and micro wax-based technologies. This allows a formulator to incorporate higher percentages of renewable resources while maintaining the beneficial properties of the petrolatum. References External links ASTM official website: wax tests Cosmetics chemicals Waxes Petroleum products Sculpture materials
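A toy Python illustration of the two grade buckets described earlier in this article, using the melting-point and needle-penetration ranges quoted above; the function and its cutoffs are illustrative only, not an ASTM procedure.

def classify_micro_wax(melt_point_f, needle_penetration):
    """Bucket a microcrystalline wax per the rough ranges quoted above:
    laminating: ~140-175 F melt point, needle penetration (ASTM D1321) >= 25
    hardening:  ~175-200 F melt point, needle penetration <= 25"""
    if melt_point_f <= 175 and needle_penetration >= 25:
        return "laminating grade"
    if melt_point_f >= 175 and needle_penetration <= 25:
        return "hardening grade"
    return "outside the two typical buckets"

print(classify_micro_wax(160, 30))  # laminating grade
print(classify_micro_wax(190, 15))  # hardening grade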
Microcrystalline wax
[ "Physics", "Chemistry" ]
1,196
[ "Petroleum products", "Petroleum", "Materials", "Matter", "Waxes" ]
2,954,049
https://en.wikipedia.org/wiki/Iterative%20learning%20control
Iterative Learning Control (ILC) is an open-loop control approach of tracking control for systems that work in a repetitive mode. Examples of systems that operate in a repetitive manner include robot arm manipulators, chemical batch processes and reliability testing rigs. In each of these tasks the system is required to perform the same action over and over again with high precision. This action is represented by the objective of accurately tracking a chosen reference signal on a finite time interval. Repetition allows the system to sequentially improve tracking accuracy, in effect learning the required input needed to track the reference as closely as possible. The learning process uses information from previous repetitions to improve the control signal, ultimately enabling a suitable control action to be found iteratively. The internal model principle yields conditions under which perfect tracking can be achieved, but the design of the control algorithm still leaves many decisions to be made to suit the application. A typical, simple control law is of the form:

$u_{p+1} = u_p + L e_p$

where $u_p$ is the input to the system during the pth repetition, $e_p$ is the tracking error during the pth repetition and $L$ is a design parameter representing operations on $e_p$. Achieving perfect tracking through iteration is represented by the mathematical requirement of convergence of the input signals as $p$ becomes large, whilst the rate of this convergence represents the desirable practical need for the learning process to be rapid. There is also the need to ensure good algorithm performance even in the presence of uncertainty about the details of process dynamics. The operation $L$ is crucial to achieving design objectives (i.e. trading off fast convergence and robust performance) and ranges from simple scalar gains to sophisticated optimization computations. In many cases a low-pass filter is added to the input to improve performance. The control law then takes the form:

$u_{p+1} = Q(u_p + L e_p)$

where $Q$ is a low-pass filtering matrix. This removes high-frequency disturbances which may otherwise be amplified during the learning process. References Control theory
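The update law above is easy to simulate. A minimal numerical sketch in Python, assuming a toy first-order discrete-time plant, a scalar learning gain, and a sinusoidal reference; all of these are illustrative choices, not a canonical setup from the ILC literature.

import numpy as np

def simulate_plant(u, a=0.9, b=0.5):
    """Toy first-order plant y[t+1] = a*y[t] + b*u[t], zero initial state."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

T = 50
reference = np.sin(np.linspace(0.0, 2.0 * np.pi, T))  # signal to track each trial
u = np.zeros(T)
L_gain = 0.8  # scalar learning gain, the simplest choice of the operator L above

for trial in range(30):
    error = reference - simulate_plant(u)
    # P-type ILC update, shifted one step to respect the plant's unit delay;
    # converges here since |1 - L_gain*b| = 0.6 < 1.
    u[:-1] += L_gain * error[1:]

final_error = np.max(np.abs(reference - simulate_plant(u)))
print(f"max tracking error after 30 trials: {final_error:.2e}")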
Iterative learning control
[ "Mathematics" ]
377
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
2,954,177
https://en.wikipedia.org/wiki/Piranha%20solution
Piranha solution, also known as piranha etch, is a mixture of sulfuric acid (H2SO4) and hydrogen peroxide (H2O2). The resulting mixture is used to clean organic residues off substrates, for example silicon wafers. Because the mixture is a strong oxidizing agent, it will decompose most organic matter, and it will also hydroxylate most surfaces (by adding OH groups), making them highly hydrophilic (water-compatible). This means the solution can also easily dissolve fabric and skin, potentially causing severe damage and chemical burns in case of inadvertent contact. It is named after the piranha fish due to its tendency to rapidly dissolve and 'consume' organic materials through vigorous chemical reactions. Preparation and use Many different mixture ratios are commonly used, and all are called piranha. A typical mixture is 3 parts of concentrated sulfuric acid and 1 part of hydrogen peroxide solution; other protocols may use a 4:1 or even 7:1 mixture. A closely related mixture, sometimes called "base piranha", is a 5:1:1 mixture of water, ammonia solution (NH4OH, i.e. aqueous NH3), and 30% hydrogen peroxide. As hydrogen peroxide is less stable at high pH than under acidic conditions, the alkalinity of base piranha (pH c. 11.6) also accelerates its decomposition; at higher pH, H2O2 will decompose violently. Piranha solution must be prepared with great care. It is highly corrosive and an extremely powerful oxidizer. Surfaces must be reasonably clean and completely free of organic solvents from previous washing steps before coming into contact with the solution. Piranha solution cleans by decomposing organic contaminants, and a large amount of contaminant will cause violent bubbling and a release of gas that can cause an explosion. Piranha solution should always be prepared by adding hydrogen peroxide to sulfuric acid slowly, never in reverse order. This minimises the concentration of hydrogen peroxide during the mixing process, helping to reduce instantaneous heat generation and explosion risk. Mixing the solution is an extremely exothermic process. If the solution is made rapidly, it will instantly boil, releasing large amounts of corrosive fumes. Even when made with care, the resulting heat can easily bring the solution temperature above 100 °C, and it must be allowed to cool reasonably before it is used. A sudden increase in temperature can also lead to a violent boiling of the extremely acidic solution. Solutions made using hydrogen peroxide at concentrations greater than about 50% may cause an explosion. The 1:1 acid–peroxide mixtures will also create an explosion risk even when using common 30 wt. % hydrogen peroxide. Once the mixture has stabilized, it can be further heated to sustain its reactivity. The hot (often bubbling) solution cleans organic compounds off substrates and oxidizes or hydroxylates most metal surfaces. Cleaning usually requires about 10 to 40 minutes, after which the substrates can be removed from the solution and rinsed with deionized water. The solution may be mixed before application or directly applied to the material, applying the sulfuric acid first, followed by the peroxide. Due to the self-decomposition of hydrogen peroxide, piranha solution should always be used freshly prepared (extemporaneous preparation). The solution should not be stored, as it generates gas and therefore cannot be kept in a closed container because of the risk of overpressure and explosion.
Because the solution violently reacts with many oxidizable substances commonly disposed of as chemical waste, any solution that has not yet completely self-decomposed or been safely neutralized must be left in an open container under a fume hood, and clearly marked. Applications Piranha solution is used frequently in the microelectronics industry, e.g. to clean photoresist or organic material residue from silicon wafers. It is also widely employed in wet etching of wafers in the semiconductor fabrication process. In the laboratory, this solution is sometimes used to clean glassware, though it is discouraged in many institutions and it should not be used routinely due to its dangers. Unlike chromic acid solutions, piranha does not contaminate glassware with chromium ions. Piranha solution is particularly useful when cleaning sintered (or "fritted") glass filters. A good porosity and sufficient permeability of the sintered glass filter are critical for its proper function, so it should never be cleaned with strong bases (NaOH, KOH, ...), which dissolve the silica of the glass sinter and clog the filter. Sintered glass also tends to trap small solid particles deep inside its porous structure, making it difficult to remove them. Where less aggressive cleaning methods fail, piranha solution can be used to return the sinter to a pristine white, free-flowing form without excessive damage to the pore dimensions. This is usually achieved by allowing the solution to percolate backward through the sintered glass. Although cleaning sintered glass with piranha solution will leave it as clean as possible without damaging the glass, it is not recommended due to the risk of explosion from reaction with traces of organic compounds, such as acetone. Piranha solution is also used to make glass more hydrophilic by hydroxylating its surface, thus increasing the number of silanol groups present on its surface. Mechanism The effectiveness of piranha solution in decomposing organic residues is due to two distinct processes operating at noticeably different rates. The first and faster process is the removal of hydrogen and oxygen as units of water by the concentrated sulfuric acid. This occurs because hydration of concentrated sulfuric acid is strongly thermodynamically favorable, with a standard enthalpy of reaction (ΔH) of −880 kJ/mol. The dehydration process exhibits itself as the rapid carbonisation of common organic materials, especially carbohydrates, when they enter in contact with sulfuric acid. With respect to organic residues such as thin films or wax, this results in the formation of carbon compounds rich in C=C double bonds. Simultaneously, sulfuric acid reacts with hydrogen peroxide to produce Caro's acid, which then undergoes homolytic cleavage to produce oxygen-based radicals. Thus, with the addition of sulfuric acid, hydrogen peroxide is converted from a relatively mild oxidizing agent into one sufficiently aggressive to dissolve elemental carbon, a material that is notoriously resistant to room-temperature aqueous reactions (as, e.g., with sulfochromic acid).
This transformation can also be viewed as the energetically favorable dehydration of hydrogen peroxide by concentrated sulfuric acid to form hydronium ions, bisulfate ions, and, transiently, atomic oxygen radicals (very labile O):

H2SO4 + H2O2 → H3O+ + HSO4− + O

The resulting oxyradicals then interact with carbon-based compounds, generating alkyl radicals while breaking C–H and C–C bonds. Finally, the alkyl radicals react with additional oxygen radicals, terminating the reaction and fully oxidizing the carbon to CO2. The carbon removed by the piranha solution may be either original residues or char from the dehydration step. The oxidation process is slower than the dehydration process, taking place over a period of minutes. The oxidation of carbon exhibits itself as a gradual clearing of suspended soot and carbon char left by the initial dehydration process. With time, piranha solutions in which organic materials have been immersed typically return to complete clarity, with no visible traces of the original organic materials remaining. A last, secondary contribution to piranha solution cleaning is its high acidity, which dissolves deposits such as metal oxides, hydroxides, and carbonates. However, since it is safer and easier to remove such deposits using milder acids, the solution is more typically used in situations where high acidity facilitates cleaning instead of complicating it. For substrates with low tolerance for acidity, an alkaline solution consisting of ammonium hydroxide and hydrogen peroxide, known as base piranha, is preferred. Etymology Piranha solution is named after the piranha fish. The name fits first because of the vigor of the dehydration process: large quantities of organic residues immersed in the solution are dehydrated so violently that the process resembles the fish's reputed feeding frenzy. The second and more definitive rationale for the name, however, is the dissolution ability of piranha solution, capable of "eating anything", in particular, elemental carbon in the form of soot or char. Safety and disposal Piranha solution is dangerous to handle, being both strongly acidic and a strong oxidizer. Solution that is no longer being used should never be left unattended if hot. It should never be stored in a closed receptacle because of the risk of gas overpressure and explosive burst with spills (especially with a fragile thin-walled volumetric flask). Piranha solution should never be disposed of with organic solvents (e.g. in waste solvent carboys), as this will cause a violent reaction and a substantial explosion, and any aqueous waste container containing even a weak or depleted piranha solution should be labelled appropriately to prevent this. The solution should be allowed to cool, and oxygen gas should be allowed to dissipate, prior to disposal. When cleaning glassware, it is both prudent and practical to allow the piranha solution to react overnight, taking care to leave the receptacles open under a ventilated fume cupboard. This allows the spent solution to degrade prior to disposal and is especially important if a large portion of peroxide was used in the preparation. While some institutions believe that used piranha solution should be collected as hazardous waste, others consider that it can be neutralized and poured down the drain with copious amounts of water. Improper neutralization can cause a fast decomposition, which releases pure oxygen (an increased risk of fire with flammable substances in a closed space).
One procedure for acid-base neutralization consists of pouring the piranha solution into a sufficiently large glass container filled with at least five times the solution's mass of ice (for cooling the exothermic reaction, and also for dilution purposes), then slowly adding 1 M sodium or potassium hydroxide solution until neutralized. If ice is not available, then the piranha solution can be added very slowly to a saturated solution of sodium bicarbonate in a large glass container, with a large amount of undissolved bicarbonate at the bottom that is renewed if it is depleted. The bicarbonate method also releases a large amount of gaseous CO2 and is therefore not preferred, since the mixture can easily overflow with a lot of foam if the addition of the piranha solution is not slow enough, and without cooling the solution can also become very hot. See also Aqua regia (HNO3 + 3 HCl) Fenton's reagent (H2O2 + Fe2+ catalyst) Green death Peroxydisulfuric acid, or Marshall's acid (H2S2O8) Peroxymonosulfuric acid, or Caro's acid (H2SO5) RCA clean (silicon wafer cleaning procedure) Chromic acid (H2CrO4) Superhydrophilicity Ultrahydrophobicity References External links Cleaning products Sulfur oxoacids Oxidizing mixtures Hydrogen peroxide
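The parts ratios quoted in the preparation section reduce to simple arithmetic. A minimal Python sketch of that bookkeeping only; the function is illustrative and in no way a substitute for the handling precautions described above.

def piranha_volumes(total_ml, acid_parts=3, peroxide_parts=1):
    """Split a target total volume by a parts ratio (3:1 typical;
    4:1 and 7:1 are also quoted above). Arithmetic only -- see the
    text for the serious mixing and disposal hazards."""
    parts = acid_parts + peroxide_parts
    return total_ml * acid_parts / parts, total_ml * peroxide_parts / parts

print(piranha_volumes(100.0))                # (75.0, 25.0) for 3:1
print(piranha_volumes(100.0, acid_parts=7))  # (87.5, 12.5) for 7:1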
Piranha solution
[ "Chemistry" ]
2,348
[ "Cleaning products", "Oxidizing mixtures", "Oxidizing agents", "Products of chemical industry" ]
2,954,244
https://en.wikipedia.org/wiki/Filtered%20algebra
In mathematics, a filtered algebra is a generalization of the notion of a graded algebra. Examples appear in many branches of mathematics, especially in homological algebra and representation theory. A filtered algebra over the field $k$ is an algebra $A$ over $k$ that has an increasing sequence $F_0 \subseteq F_1 \subseteq \cdots \subseteq F_i \subseteq \cdots \subseteq A$ of subspaces of $A$ such that $A = \bigcup_{i \in \mathbb{N}} F_i$ and that is compatible with the multiplication in the following sense: $F_m \cdot F_n \subseteq F_{m+n}$ for all $m, n \in \mathbb{N}$. Associated graded algebra In general, there is the following construction that produces a graded algebra out of a filtered algebra. If $A$ is a filtered algebra, then the associated graded algebra $\mathcal{G}(A)$ is defined as follows: as a vector space, $\mathcal{G}(A) = \bigoplus_{n \in \mathbb{N}} G_n$, where $G_0 = F_0$ and $G_n = F_n / F_{n-1}$ for $n > 0$. The multiplication is well-defined and endows $\mathcal{G}(A)$ with the structure of a graded algebra, with gradation $\{G_n\}_{n \in \mathbb{N}}$. Furthermore if $A$ is associative then so is $\mathcal{G}(A)$. Also, if $A$ is unital, such that the unit lies in $F_0$, then $\mathcal{G}(A)$ will be unital as well. As algebras $A$ and $\mathcal{G}(A)$ are distinct (with the exception of the trivial case that $A$ is graded) but as vector spaces they are isomorphic. (One can prove by induction that $\bigoplus_{i=0}^{n} G_i$ is isomorphic to $F_n$ as vector spaces.) Examples Any graded algebra graded by $\mathbb{N}$, for example $A = \bigoplus_{n \in \mathbb{N}} A_n$, has a filtration given by $F_n = \bigoplus_{i=0}^{n} A_i$. An example of a filtered algebra is the Clifford algebra $\mathrm{Cl}(V, q)$ of a vector space $V$ endowed with a quadratic form $q$. The associated graded algebra is $\bigwedge V$, the exterior algebra of $V$. The symmetric algebra on the dual of an affine space is a filtered algebra of polynomials; on a vector space, one instead obtains a graded algebra. The universal enveloping algebra of a Lie algebra $\mathfrak{g}$ is also naturally filtered. The PBW theorem states that the associated graded algebra is simply $\mathrm{Sym}(\mathfrak{g})$. Scalar differential operators on a manifold $M$ form a filtered algebra where the filtration is given by the degree of differential operators. The associated graded algebra is the commutative algebra of smooth functions on the cotangent bundle $T^*M$ which are polynomial along the fibers of the projection $\pi \colon T^*M \to M$. The group algebra of a group with a length function is a filtered algebra. See also Filtration (mathematics) Length function References Algebras Homological algebra
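As a worked instance of the differential-operator example above: for the first Weyl algebra, filtering by the order of the operator recovers polynomial functions on the cotangent bundle of the affine line. A brief sketch in LaTeX (standard material, included here for illustration):

\[
A_1 = k\langle x, \partial \rangle / (\partial x - x\partial - 1),
\qquad
F_n = \operatorname{span}\{\, x^a \partial^b : b \le n \,\}.
\]
Since $\partial x - x\partial = 1 \in F_0$, the images of $x$ and $\partial$ commute in $\mathcal{G}(A_1) = \bigoplus_n F_n / F_{n-1}$, giving
\[
\mathcal{G}(A_1) \cong k[x, \xi], \qquad \xi = \overline{\partial},
\]
the algebra of functions on the cotangent bundle of the affine line that are polynomial along the fibers.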
Filtered algebra
[ "Mathematics" ]
399
[ "Mathematical structures", "Algebras", "Fields of abstract algebra", "Category theory", "Algebraic structures", "Homological algebra" ]
2,954,287
https://en.wikipedia.org/wiki/Objectory
Objectory is an object-oriented methodology mostly created by Ivar Jacobson, who has contributed greatly to object-oriented software engineering. The framework of Objectory is a design technique called design with building blocks. With the building-block technique, a system is viewed as a set of connected blocks, with each block representing a system service. It is considered to be the first commercially available object-oriented methodology for developing large-scale industrial systems. This approach gives a global view of software development and focuses on cost efficiency. Its main techniques are conceptual modelling, object-oriented programming, and a block design technique. References Object-oriented programming
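A rough Python illustration of the building-block idea described above, where a system is assembled from blocks that each offer one service; all names here are invented for illustration and are not part of the Objectory method.

class Block:
    """A building block offering a single named system service."""
    def __init__(self, service):
        self.service = service
        self.connections = []  # other Blocks this block depends on

    def connect(self, other):
        """Wire this block to another block's service."""
        self.connections.append(other)

# Assemble a toy system out of service blocks:
ui = Block("user interface")
auth = Block("authentication")
store = Block("persistent storage")
ui.connect(auth)
auth.connect(store)
print([b.service for b in ui.connections])  # ['authentication']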
Objectory
[ "Engineering" ]
129
[ "Software engineering", "Software engineering stubs" ]
2,954,685
https://en.wikipedia.org/wiki/Povarov%20reaction
The Povarov reaction is an organic reaction described as a formal cycloaddition between an aromatic imine and an alkene. The imine in this organic reaction is a condensation reaction product from an aniline type compound and a benzaldehyde type compound. The alkene must be electron rich, which means that functional groups attached to the alkene must be able to donate electrons. Such alkenes are enol ethers and enamines. The reaction product in the original Povarov reaction is a quinoline. Because the reactions can be carried out with the three components premixed in one reactor, it is an example of a multi-component reaction. Reaction mechanism The reaction mechanism for the Povarov reaction to the quinoline is outlined in Scheme 1. In step one, aniline and benzaldehyde react to give the Schiff base in a condensation reaction. The Povarov reaction requires a Lewis acid such as boron trifluoride to activate the imine for an electrophilic addition of the activated alkene. This reaction step forms an oxonium ion, which then reacts with the aromatic ring in a classical electrophilic aromatic substitution. Two additional elimination reactions create the quinoline ring structure. The reaction is also classified as a subset of aza Diels-Alder reactions; however, it occurs by a step-wise rather than concerted mechanism. Examples The reaction depicted in Scheme 2 illustrates the Povarov reaction with an imine and an enamine in the presence of yttrium triflate as the Lewis acid. This reaction is regioselective because the iminium ion preferentially attacks the position ortho to the nitro group and not the para position. The nitro group is a meta directing substituent, but since this position is blocked, the most electron rich ring position is now ortho and not para. The reaction is also stereoselective because the enamine addition occurs with a diastereomeric preference for trans addition without formation of the cis isomer. This is in contrast to traditional Diels–Alder reactions, which are stereospecific based on the alkene geometry. In 2013, Doyle and coworkers reported a Povarov-type, formal [4+2]-cycloaddition reaction between donor-acceptor cyclopropenes and imines (Scheme 3). In the first step, a dirhodium catalyst effects diazo decomposition of a silyl enol ether diazo compound to yield a donor/acceptor cyclopropene. The donor/acceptor cyclopropene is then reacted with an aryl imine under scandium(III) triflate catalyzed conditions to yield cyclopropane-fused tetrahydroquinolines in good yields and diastereoselectivities. Treatment of these compounds with TBAF invokes a ring-expansion that provides the corresponding benzazepines. Variations One variation of the Povarov reaction is a four component reaction. Whereas in the traditional Povarov reaction the intermediate carbocation gives an intramolecular reaction with the aryl group, this intermediate can also be terminated by an additional nucleophile such as an alcohol. Scheme 4 depicts this 4 component reaction with the ethyl ester of glyoxylic acid, 3,4-dihydro-2H-pyran, aniline and ethanol with the Lewis acid scandium(III) triflate and molecular sieves. References See also Doebner reaction Doebner-Miller reaction Grieco three-component condensation Cycloadditions Multiple component reactions Quinoline forming reactions Name reactions
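A small cheminformatics sketch of the component bookkeeping in the reaction, using the open-source RDKit toolkit; the specific SMILES strings (aniline, benzaldehyde, and ethyl vinyl ether as the electron-rich alkene) are illustrative choices, not drawn from the schemes above.

from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

# Three Povarov components (illustrative choices):
aniline = Chem.MolFromSmiles("Nc1ccccc1")
benzaldehyde = Chem.MolFromSmiles("O=Cc1ccccc1")
enol_ether = Chem.MolFromSmiles("CCOC=C")  # ethyl vinyl ether, electron-rich alkene

# Step 1 condensation product: the aromatic imine (Schiff base), formed with loss of H2O.
imine = Chem.MolFromSmiles("C(=Nc1ccccc1)c1ccccc1")

for name, mol in [("aniline", aniline), ("benzaldehyde", benzaldehyde),
                  ("imine", imine), ("enol ether", enol_ether)]:
    print(f"{name:12s} {CalcMolFormula(mol)}")
# Mass balance of the condensation: C6H7N + C7H6O -> C13H11N + H2O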
Povarov reaction
[ "Chemistry" ]
774
[ "Name reactions" ]
2,954,769
https://en.wikipedia.org/wiki/Acceptance%20and%20commitment%20therapy
Acceptance and commitment therapy (ACT, typically pronounced as the word "act") is a form of psychotherapy, as well as a branch of clinical behavior analysis. It is an empirically-based psychological intervention that uses acceptance and mindfulness strategies along with commitment and behavior-change strategies to increase psychological flexibility. This approach was first called comprehensive distancing. Steven C. Hayes developed it around 1982 to integrate features of cognitive therapy and behavior analysis, especially behavior analytic data on the often negative effects of verbal rules and how they might be ameliorated. ACT protocols vary with the target behavior and the setting. For example, in behavioral health, a brief version of ACT is focused acceptance and commitment therapy (FACT). The goal of ACT is not elimination of difficult feelings, but to be present with what life brings and to "move toward valued behavior". Acceptance and commitment therapy invites people to open up to unpleasant feelings, not to overreact to them, and not to avoid situations that cause them. Its therapeutic effect aims to be a positive spiral, in which more understanding of one's emotions leads to a better understanding of the truth. In ACT, "truth" is measured through the concept of "workability", or what works to take another step toward what matters (e.g., values, meaning). Technique Basics ACT is developed within a pragmatic philosophy, functional contextualism. ACT is based on relational frame theory (RFT), a comprehensive theory of language and cognition that is derived from behavior analysis. Both ACT and RFT are based on B. F. Skinner's philosophy of radical behaviorism. ACT differs from some kinds of cognitive behavioral therapy (CBT) in that, rather than try to teach people to control their thoughts, feelings, sensations, memories, and other private events, ACT teaches them to "just notice", accept, and embrace their private events, especially previously unwanted ones. ACT helps the individual get in contact with a transcendent sense of self, "self-as-context"—the one who is always there observing and experiencing and yet distinct from one's thoughts, feelings, sensations, and memories. ACT tries to help the individual clarify values and then use them as the basis for action, bringing more vitality and meaning to life in the process, while increasing psychological flexibility. While Western psychology has typically operated under the "healthy normality" assumption, which states that humans naturally are psychologically healthy, ACT assumes that the psychological processes of a normal human mind are often destructive. The core conception of ACT is that psychological suffering is usually caused by experiential avoidance, cognitive entanglement, and resulting psychological rigidity that leads to a failure to take needed behavioral steps in accord with core values. As a simple way to summarize the model, ACT views the core of many problems to be due to the concepts represented in the acronym FEAR:
Fusion with your thoughts
Evaluation of experience
Avoidance of your experience
Reason-giving for your behavior
And the healthy alternative is to ACT:
Accept your thoughts and emotions
Choose a valued direction
Take action
Core principles
ACT commonly employs six core principles to help clients develop psychological flexibility:
Cognitive defusion: Learning methods to reduce the tendency to reify thoughts, images, emotions, and memories.
Acceptance: Allowing unwanted private experiences (thoughts, feelings and urges) to come and go without struggling with them.
Contact with the present moment: Awareness of the here and now, experienced with openness, interest, and receptiveness (e.g., mindfulness).
The observing self: Accessing a transcendent sense of self, a continuity of consciousness which is unchanging.
Values: Discovering what is most important to oneself.
Committed action: Setting goals according to values and carrying them out responsibly, in the service of a meaningful life.
Correlational evidence has found that absence of psychological flexibility predicts many forms of psychopathology. A 2005 meta-analysis showed that the six ACT principles, on average, account for 16–29% of the variance in psychopathology (general mental health, depression, anxiety) at baseline, depending on the measure, using correlational methods. A 2012 meta-analysis of 68 laboratory-based studies on ACT components has also provided support for the link between psychological flexibility concepts and specific components.
Research
The website of the Association for Contextual Behavioral Science states that there were over 1,100 randomized controlled trials (RCTs) of ACT, over 500 meta-analyses/systematic reviews, and 84 mediational studies of the ACT literature as of June 2024. Organizations that have stated that acceptance and commitment therapy is empirically supported in certain areas or as a whole according to their standards include (as of March 2022):
Society of Clinical Psychology (American Psychological Association/APA Division 12)
World Health Organization
UK National Institute for Health and Care Excellence
Australian Psychological Society
Netherlands Institute of Psychologists: Sections of Neuropsychology and Rehabilitation
Netherlands National Institute for Public Health and the Environment (RIVM)
Sweden Association of Physiotherapists
SAMHSA's National Registry of Evidence-based Programs and Practices
California Evidence-Based Clearinghouse for Child Welfare
U.S. Department of Veterans Affairs/Department of Defense
US Department of Justice - Office of Justice Programs
Washington State Institute for Public Policy
American Headache Society
History
In 2006, only about 30 randomized clinical trials and controlled time series evaluating ACT were known; by 2011 the number had doubled to more than 60 ACT randomized controlled trials, and in 2023 there were more than 1,000 randomized controlled trials of ACT worldwide. A 2008 meta-analysis concluded that the evidence was still too limited for ACT to be considered a supported treatment. A 2009 meta-analysis found that ACT was more effective than placebo and "treatment as usual" for most problems. A 2012 meta-analysis was more positive and reported that ACT outperformed CBT, except for treating depression and anxiety. A 2015 review found that ACT was better than placebo and typical treatment for anxiety disorders, depression, and addiction. Its effectiveness was similar to traditional treatments like cognitive behavioral therapy (CBT). The authors also noted that research methodologies had improved since the studies described in the 2008 meta-analysis. In 2020, a review of meta-analyses examined 20 meta-analyses that included 133 studies and 12,477 participants. The authors concluded ACT is efficacious for all conditions examined, including anxiety, depression, substance use, pain, and transdiagnostic groups.
Results also showed that ACT was generally superior to inactive controls, treatment as usual, and most active intervention conditions. In 2020–2021, after three RCTs of ACT by the World Health Organization (WHO), WHO released an ACT-based self-help course, Self-Help Plus (SH+), for "groups of up to 30 people who have lived through or are living through adversity". As of July 2023, there were six RCTs of Self-Help Plus. In 2022, a systematic review of meta-analyses about interventions for depressive symptoms in people living with chronic pain concluded that "Acceptance and commitment therapy for general chronic pain, and fluoxetine and web-based psychotherapy for fibromyalgia showed the most robust effects and can be prioritized for implementation in clinical practice". Professional organizations The Association for Contextual Behavioral Science is committed to research and development in the area of ACT, RFT, and contextual behavioral science more generally. As of 2023 it had over 8,000 members worldwide, about half outside of the United States. It holds annual "world conference" meetings each summer, with the location alternating between North America, Europe, and South America. The Association for Behavior Analysis International (ABAI) has a special interest group for practitioner issues, behavioral counseling, and clinical behavior analysis. ABAI has larger special interest groups for autism and behavioral medicine. ABAI serves as the core intellectual home for behavior analysts. ABAI sponsors three conferences per year: one multi-track in the U.S., one specific to autism, and one international. The Association for Behavioral and Cognitive Therapies (ABCT) also has an interest group in behavior analysis, which focuses on clinical behavior analysis. ACT work is commonly presented at ABCT and other mainstream CBT organizations. The British Association for Behavioural and Cognitive Psychotherapies (BABCP) has a large special interest group in ACT, with over 1,200 members. Doctoral-level behavior analysts who are psychologists belong to the American Psychological Association's (APA) Division 25 (Behavior Analysis). ACT has been called a "commonly used treatment with empirical support" within the APA-recognized specialty of behavioral and cognitive psychology. Similarities ACT, dialectical behavior therapy (DBT), functional analytic psychotherapy (FAP), mindfulness-based cognitive therapy (MBCT) and other acceptance- and mindfulness-based approaches have been grouped by Steven Hayes under the name "the third wave of cognitive behavior therapy". However, this classification has been criticized and not everyone agrees with it. For example, David Dozois and Aaron T. Beck argued that there is no "new wave" and that there are a variety of extensions of cognitive therapy; for example, Jeffrey Young's schema therapy came after Beck's cognitive therapy, but Young did not name his innovations "the third wave" or "the third generation" of cognitive behavior therapy. According to Hayes' classification, the first wave, behaviour therapy, commenced in the 1920s based on Pavlov's classical (respondent) conditioning and on operant conditioning, in which behavior is correlated with reinforcing consequences. The second wave emerged in the 1970s and included cognition in the form of irrational beliefs, dysfunctional attitudes or depressogenic attributions.
In the late 1980s, empirical limitations and philosophical misgivings about the second wave gave rise to Steven Hayes' ACT theory, which shifted the focus in abnormal behaviour away from its content or form towards the context in which it occurs. People's rigid ideas about themselves, their lack of focus on what is important in their life, and their struggle to change sensations, feelings or thoughts that are troublesome only serve to create greater distress. Steven C. Hayes characterized this third wave in his ABCT presidential address. ACT has also been adapted to create a non-therapy version of the same processes, called acceptance and commitment training. This training process, oriented towards the development of mindfulness, acceptance, and valued skills in non-clinical settings such as businesses or schools, has also been investigated in a handful of research studies with good preliminary results. The emphasis of ACT on ongoing present moment awareness, valued directions and committed action is similar to other psychotherapeutic approaches that, unlike ACT, are not as focused on outcome research or consciously linked to a basic behavioral science program, including approaches such as Gestalt therapy, Morita therapy, and others. Hayes and colleagues themselves stated in the book that introduced ACT that "many or even most of the techniques in ACT have been borrowed from elsewhere—from the human potential movement, Eastern traditions, behavior therapy, mystical traditions, and the like". Wilson, Hayes & Byrd explored at length the compatibilities between ACT and the 12-step treatment of addictions and argued that, unlike most other psychotherapies, both approaches can be implicitly or explicitly integrated due to their broad commonalities. Both approaches endorse acceptance as an alternative to unproductive control. ACT emphasizes the hopelessness of relying on ineffectual strategies to control private experience; similarly, the 12-step approach emphasizes the acceptance of powerlessness over addiction. Both approaches encourage a broad life-reorientation, rather than a narrow focus on the elimination of substance use, and both place great value on the long-term project of building a meaningful life aligned with the clients' values. ACT and the 12-step approach both endorse the pragmatic utility of cultivating a transcendent sense of self (higher power) within an unconventional, individualized spirituality. Finally, they both openly accept the paradox that acceptance is a necessary condition for change, and both encourage a playful awareness of the limitations of human thinking. Criticism The textbook Systems of Psychotherapy: A Transtheoretical Analysis includes various criticisms of third-wave behaviour therapy, including ACT, from the perspectives of other systems of psychotherapy, including the complaint that third-wave therapies "display an annoying tendency to gather effective methods from other traditions and label them as their own". Evidence-based practice In a 2012 blog post, psychologist James C. Coyne criticized the process and studies initially used by the APA to favorably evaluate ACT for the treatment of psychosis in its labeling system for evidence-based medicine. In particular, it relied on only one full randomized trial, supplemented by a pilot study and a feasibility study, despite the criteria for "strong evidence" requiring a treatment to be supported by many such trials.
The main study used (Bach, P., & Hayes, S.C., 2002) was alleged not to have clearly specified its hypothesis, that ACT reduces rehospitalization, in advance (a practice that can allow researchers to retrospectively cherry-pick the metric showing the largest positive change after treatment). In 2016, William O'Donohue and coauthors cited this and other critiques in a paper on "weak and pseudo-tests" of ACT; they added that while "no doubt there are studies of ACT that are quite good", they had examined three trials of ACT that were "weakened and thus made easier to pass", and they listed over 30 ways in which such trials were "weak or pseudo-tests". Drawing on concepts from Karl Popper's philosophy of science and Popper's critique of psychoanalysis as impossible to falsify, O'Donohue and colleagues advocated Popperian severe testing instead. Excessive promotion over other therapies In 2013, psychologist Jonathan W. Kanter said that Hayes and colleagues "argue that empirical clinical psychology is hampered in its efforts to alleviate human suffering and present contextual behavioral science (CBS) to address the basic philosophical, theoretical and methodological shortcomings of the field. CBS represents a host of good ideas but at times the promise of CBS is obscured by excessive promotion of Acceptance and Commitment Therapy (ACT) and Relational Frame Theory (RFT) and demotion of earlier cognitive and behavior change techniques in the absence of clear logic and empirical support." Nevertheless, Kanter concluded that "the ideas of CBS, RFT, and ACT deserve serious consideration by the mainstream community and have great potential to shape a truly progressive clinical science to guide clinical practice". Authors of a 2013 paper comparing ACT to cognitive therapy (CT) concluded that "although preliminary research on ACT is promising, we suggest that its proponents need to be appropriately humble in their claims. In particular, like CT, ACT cannot yet make strong claims that its unique and theory-driven intervention components are active ingredients in its effects." The authors of the paper suggested that many of the assumptions of ACT and CT "are pre-analytical, and cannot be directly pitted against one another in experimental tests." In 2012, ACT appeared to be about as effective as standard CBT, with some meta-analyses showing small differences in favor of ACT and others not. For example, a meta-analysis published by Francisco Ruiz in 2012 looked at 16 studies comparing ACT to standard CBT. ACT failed to separate from CBT on effect sizes for anxiety; however, modest benefits were found with ACT compared to CBT for depression and quality of life. The author did find separation between ACT and CBT on the "primary outcome", a heterogeneous class of 14 separate outcome measures that were aggregated into the effect size analysis. This analysis, however, is limited by the highly heterogeneous nature of the outcome variables used, which tends to increase the number needed to treat (NNT) required to replicate the reported effect size. More limited measures, such as depression, anxiety and quality of life, decrease the NNT, making the analysis more clinically relevant, and on these measures ACT did not outperform CBT. A 2012 clinical trial by Forman et al. found that Beckian CBT obtained better results than ACT. Several concerns, both theoretical and empirical, have arisen in response to the ascendancy of ACT.
One theoretical concern was that the primary authors of ACT and of the corresponding theories of human behavior, relational frame theory (RFT) and functional contextualism (FC), recommended their approach as the proverbial holy grail of psychological therapies. In 2012, in the preface to the second edition of Acceptance and Commitment Therapy, the primary authors of ACT clarified that "ACT has not been created to undercut the traditions from which it came, nor does it claim to be a panacea." See also Behavioral psychotherapy Contextualism Defence mechanism Humanistic psychology Positive psychology Solution-focused brief therapy References External links Contextualscience.org – Home for the Association for Contextual Behavioral Science, a professional organization dedicated to ACT, RFT, and functional contextualism. Also helpful for training opportunities for professionals interested in ACT and RFT. Most ACT workshops worldwide are listed here. Behaviorism Cognitive behavioral therapy Mindfulness (psychology) Treatment of obsessive–compulsive disorder
Acceptance and commitment therapy
[ "Biology" ]
3,556
[ "Behavior", "Behaviorism" ]
2,954,809
https://en.wikipedia.org/wiki/Ear%20tag
An ear tag is a plastic or metal object used for identification of domestic livestock. If the ear tag uses radio frequency identification (RFID) technology, it is referred to as an electronic ear tag. Electronic ear tags conform to the international standards ISO 11784 and ISO 11785, working at 134.2 kHz, as well as ISO/IEC 18000-6C, operating in the UHF spectrum. There are other non-standard systems such as Destron, working at 125 kHz. Although there are many shapes of ear tags, the main types in current use are as follows: Flag-shaped ear tag: two discs joined through the ear, one or both bearing a wide, flat plastic surface on which identification details are written or printed in large, easily legible script. Button-shaped ear tag: two discs joined through the ear. Plastic clip ear tag: a moulded plastic strip, folded over the edge of the ear and joined through it. Metal ear tag: an aluminium, steel or brass rectangle with sharp points, clipped over the edge of the ear, with the identification stamped into it. Electronic identification tags include the EID number and sometimes a management number on the button that appears on the back of the ear. These can at times be combined as a matched set, which pairs visual tags with electronic identification tags. Each of these except the metal type may carry an RFID chip, which normally carries an electronic version of the same identification number. Overview An ear tag usually carries an Animal Identification Number (AIN) or code for the animal, or for its herd or flock. Non-electronic ear tags may be simply handwritten for the convenience of the farmer (these are known as "management tags"). Alternatively, this identification number (ID) may be assigned by a not-for-profit organisation owned by cattle, sheep, goat and pig producers and funded by a levy on livestock sales with government input; an example is Meat & Livestock Australia (MLA). Electronic tags may also show other information about the animal, including other related identification numbers, such as the Property Identification Code (PIC) for the properties where the animal has been located. Depending on jurisdiction, the movement of certain species of livestock (primarily cattle, goats, sheep and pigs) must be recorded in the online database within 24 hours of the movement and must include the PICs of the properties the animals are travelling between. The National Livestock Identification System (NLIS) of Australia requires that all cattle be fitted with an RFID device in the form of an ear tag or rumen bolus (a cylindrical object placed in the rumen) before movement from the property, and that the movement be reported to the NLIS. However, if animals are tagged for internal purposes in a herd or farm, IDs need not be unique at larger scales. The NLIS now also requires sheep and goats to use an ear tag that has the Property Identification Code inscribed on it. These ear tags and boluses are complemented by transport documents supplied by vendors that are used for identification and tracking. A similar system is used for cattle in the European Union (EU), each bovine animal having a passport document and a tag in each ear carrying the same number.
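The identification code carried by ISO-compliant electronic ear tags (ISO 11784) is a 64-bit number. The following minimal Python sketch unpacks that code under the bit-field layout commonly described for the standard: a 1-bit animal/non-animal flag, 14 reserved bits, a 1-bit data-block indicator, a 10-bit country or manufacturer code, and a 38-bit national identification number. The integer representation, field names, and example values here are illustrative assumptions, and the over-the-air bit ordering defined by ISO 11785 (FDX-B) is not modeled.

def parse_iso11784(code: int) -> dict:
    """Unpack a 64-bit ISO 11784 identification code (bit 1 = most significant)."""
    if not 0 <= code < 2**64:
        raise ValueError("ISO 11784 codes are 64 bits")
    return {
        "animal": bool(code >> 63),            # bit 1: animal application flag
        "reserved": (code >> 49) & 0x3FFF,     # bits 2-15: reserved
        "data_block": bool((code >> 48) & 1),  # bit 16: extra data block follows
        "country_code": (code >> 38) & 0x3FF,  # bits 17-26: ISO 3166 numeric code
        "national_id": code & 0x3FFFFFFFFF,    # bits 27-64: 38-bit national ID
    }

# Hypothetical example: an animal tag with country code 036 (Australia)
# and national ID 1234567890.
example = (1 << 63) | (36 << 38) | 1234567890
print(parse_iso11784(example))
# {'animal': True, 'reserved': 0, 'data_block': False,
#  'country_code': 36, 'national_id': 1234567890}

Values of 900 and above in the country-code field are commonly allocated to tag manufacturers rather than to countries.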
Sheep and goats in the EU have tags in both ears, each carrying the official number of their flock and, for breeding stock, an individual number for each animal; in the case of sheep or goats intended for intra-community trade, one of these tags (the left one) must have an RFID chip (or the chip may instead be carried in a rumen bolus or on an anklet). Pigs in the EU are required to carry in one of the ears a tag with the number of the herd of birth, as well as the numbers of any other herds the pig was kept with for more than 30 days; tattooing may be used as a replacement. An ear tag can be applied with an ear tag applicator; however, there are also specially designed tags that can be applied by hand. Depending on the purpose of the tagging, an animal may be tagged on one ear or both. There may be requirements for the placement of ear tags, and care must be taken to ensure they are not placed too close to the edge of the ear pinna, which may leave the tag vulnerable to being ripped out accidentally. If there exists a national animal identification programme in a country, animals may be tagged on both ears for the sake of increased security and effectiveness, or as a legal requirement. If animals are tagged for private purposes, usually one ear is tagged. Australian sheep and goats are required to have visually readable ear tags printed with a Property Identification Code (PIC). They are complemented by movement documents supplied by consignors that are used for identification and tracking. Very small ear tags are available for laboratory animals such as mice and rats. They are usually sold with a device that pierces the animal's ear and installs the tag at the same time. Lab animals can also be identified by other methods such as ear punching or marking (also used for livestock; see below), implanted RFID tags (mice are too small to wear an ear tag containing an RFID chip), and dye. History Livestock ear tags were developed in 1799 under the direction of Sir Joseph Banks, President of the Royal Society, for identification of Merino sheep in the flock established for King George III. Matthew Boulton designed and produced the first batch of sheep ear tags, and produced subsequent batches, modified according to suggestions received from Banks. The first tags were made of tin. Ear tags were adopted for breed identification in the United States with the forming of the International Ohio Improved Chester Association as early as 1895; the association's Articles of Incorporation stipulated them as a means of animal and breed identification for the Improved Chester White. Although ear tags were developed in Canada as early as 1913 as a means to identify cattle when testing for tuberculosis, the significant increase in the use of ear tags came with the outbreak of BSE in the UK. Today, ear tags in a variety of designs are used throughout the world on many species of animal to ensure traceability, to help prevent theft and to control disease outbreaks. The first ear tags were primarily steel with nickel plating. After World War II, larger, flag-like, plastic tags were developed in the United States. Designed to be visible from a distance, these were applied by cutting a slit in the ear and slipping the arrow-shaped head of the tag through it so that the flag would hang from the ear. In 1953, the first two-piece, self-piercing plastic ear tag was developed and patented.
This tag, which combined the easy application of metal tags with the visibility and colour options of plastic tags, also limited the transfer of blood-borne diseases between animals during the application process. Some cattle ear tags contain chemicals to repel insects, such as buffalo flies, horseflies, etc. Metal ear tags are used to identify the date of regulation shearing of stud and show sheep. Today, a large number of manufacturers are in competition for the identification of the world livestock population. In 2004, the U.S. Government asked farmers to use EID, or electronic identification, ear tags on all their cattle. This request was part of the National Animal Identification System (NAIS), spurred by the discovery of the first case of mad cow disease in the United States. Due to poor performance and concern that other people could access their confidential information, only about 30 percent of cattle producers in the United States tried EID tags based on the low frequency standards, while the UHF standards are being mandated for use in Brazil, Paraguay, and Korea. The United States Department of Agriculture maintains a list of manufacturers approved to sell ear tags in the USA. Ear tags (conventional and electronic) are used in the EU as the official ID system for cattle, sheep and goats, in some cases combined with RFID devices. The International Committee for Animal Recording (ICAR) controls the issue of electronic tag numbers under ISO standard 11784. The National Livestock Identification System (NLIS) is Australia's system for tracing cattle, sheep and goats from birth to slaughter. In Canada, the Health of Animals Regulations require approved ear tags on all bison, cattle and sheep that leave the farm of origin, except that a bovine may be moved, without a tag, from the farm of origin to a tagging site. RFID (radio frequency identification) tags are used for cattle in Canada, and metal as well as RFID tags have been in use for sheep. Mandatory RFID tagging of sheep in Canada (which was previously scheduled to take effect January 1, 2013) has been deferred to a later date. Other forms of animal identification Pigs, cattle and sheep are frequently earmarked with pliers that notch registered owner and/or age marks into the ear. Mares on large horse breeding farms have a plastic tag attached to a neck strap for identification, which preserves their ears free of notches. Dairy cows are sometimes identified with ratchet-fastened plastic anklets fitted on the pastern for ready inspection during milking; however, NLIS requirements apply to all cattle, including both dairy and beef animals. More commonly, coloured electrical tape is used as a short-term ankle identifier for dairy animals, to mark when one teat should not be milked for any reason. Laboratory rodents are often marked with ear tags, ear notches or implantable microchips. The National Livestock Identification System (NLIS) in Australia formerly used cattle tail tags for property identification and hormone usage declaration. See also Branding iron British Cattle Movement Service Cattle crush Livestock branding Animal Identification References External links Department of Primary Industries National Livestock Identification System Livestock Animal equipment Radio-frequency identification Identification of domesticated animals
Ear tag
[ "Engineering", "Biology" ]
1,985
[ "Radio-frequency identification", "Radio electronics", "Animal equipment", "Animals" ]
2,954,872
https://en.wikipedia.org/wiki/Throat%20lozenge
A throat lozenge (also known as a cough drop, sore throat sweet, troche, cachou, pastille or cough sweet) is a small, typically medicated tablet intended to be dissolved slowly in the mouth to temporarily stop coughs, lubricate, and soothe irritated tissues of the throat (usually due to a sore throat or strep throat), possibly from the common cold or influenza. Cough tablets have taken the name lozenge, based on their original shape, a diamond. Ingredients Lozenges may contain benzocaine, an anaesthetic, or eucalyptus oil. Non-menthol throat lozenges generally use either zinc gluconate glycine or pectin as an oral demulcent. Several brands of throat lozenges contain dextromethorphan. Other varieties such as Halls contain menthol, peppermint oil and/or spearmint as their active ingredient(s). Honey lozenges are also available. The purpose of the throat lozenge is to calm the irritation that may be felt in the throat while swallowing, breathing, or even drinking certain fluids. However, one study found that excessive use of menthol cough drops can prolong coughs rather than relieve them. History Candies to soothe the throat date back to 1000 BC in Egypt's Twentieth Dynasty, when they were made from honey flavored with citrus, herbs, and spices. In the 19th century, physicians discovered morphine and heroin, which suppress coughing at its source—the brain. Popular formulations of that era included Smith Brothers Cough Drops, first advertised in 1852, and Luden's, created in 1879. Concern over the risk of opioid dependence led to the development of alternative medications. Brands Anta Cēpacol Chloraseptic Fisherman's Friend Halls Jinsangzi Läkerol Lockets Luden's Mynthon Negro Nin Jiom Pine Bros. Ricola Robitussin Smith Brothers Strepsils Sucrets Ülker Takabb Anti-Cough Pill Throzz Troketts Tunes Tyrozets (now discontinued) Vicks Strep-Drops Victory V Vigroids Zubes (now discontinued) See also Pastille Mint (candy) References External links Ingredients of a throat lozenge, Health Canada Drug delivery devices Dosage forms Medicine in the United States Army
Throat lozenge
[ "Chemistry" ]
492
[ "Pharmacology", "Drug delivery devices" ]
2,955,066
https://en.wikipedia.org/wiki/Silicon%20alkoxide
Silicon alkoxides are a group of alkoxides, chemical compounds of silicon and an alcohol, with the formula Si(OR)4, where R is an alkyl group. Silicon alkoxides are important precursors for the manufacture of silica-based aerogels. References Alkoxides Silicate esters
Silicon alkoxide
[ "Chemistry" ]
59
[ "Alkoxides", "Functional groups", "Organic compounds", "Bases (chemistry)", "Organic compound stubs", "Organic chemistry stubs" ]
2,955,185
https://en.wikipedia.org/wiki/Run-Time%20Abstraction%20Services
Run-Time Abstraction Services (RTAS) is run-time firmware that provides abstraction to the operating systems running on IBM System i and IBM System p computers. It contrasts with Open Firmware in that the latter is usually used only during boot, while RTAS is used during run-time. It was later renamed Open Power Abstraction Layer (OPAL) and is stored in PNOR (platform NOR) flash memory. Firmware IBM mainframe technology
Run-Time Abstraction Services
[ "Technology" ]
99
[ "Computing stubs" ]
2,955,276
https://en.wikipedia.org/wiki/Enol%20ether
In organic chemistry, an enol ether is an alkene with an alkoxy substituent. The general structure is R2C=CR-OR where R = H, alkyl or aryl. A common subfamily of enol ethers are vinyl ethers, with the formula ROCH=CH2. Important enol ethers include the reagent 3,4-dihydropyran and the monomers methyl vinyl ether and ethyl vinyl ether. Reactions and uses Akin to enamines, enol ethers are electron-rich alkenes by virtue of the electron donation from the heteroatom via pi-bonding. Enol ethers have oxonium ion character. By virtue of their bonding situation, enol ethers display distinctive reactivity. In comparison with simple alkenes, enol ethers exhibit enhanced susceptibility to attack by electrophiles such as Brønsted acids. Similarly, they undergo inverse electron demand Diels–Alder reactions. The reactivity of enol ethers is highly dependent on the presence of substituents alpha to oxygen. The vinyl ethers are susceptible to polymerization to give polyvinyl ethers. They also react readily with thiols in the thiol-ene reaction to form thioethers. This makes enol ether-functionalized monomers ideal for polymerization with thiol-based monomers to form thiol-ene networks. Some vinyl ethers find some use as inhalation anesthetics. Enol ethers bearing α substituents do not polymerize readily. They are mainly of academic interest, e.g. as intermediates in the synthesis of more complex molecules. The acid-catalyzed addition of hydrogen peroxide to vinyl ethers gives the hydroperoxide: C2H5OCH=CH2 + H2O2 → C2H5OCH(OOH)CH3 Nazi Germany used vinyl ether mixtures as rocket propellants during WWII, because their hypergolic combustion with a mixture of nitric and sulfuric acids is relatively insensitive to temperature. Preparation Vinyl ethers can be prepared from alcohols by iridium-catalyzed transesterification of vinyl esters, especially the widely available vinyl acetate: ROH + CH2=CHOAc → ROCH=CH2 + HOAc Vinyl ethers can also be prepared by the reaction of acetylene and alcohols in the presence of a base. Although enol ethers can be considered the ethers of the corresponding enolates, they are not prepared by alkylation of enolates. Some enol ethers are prepared from saturated ethers by elimination reactions. Occurrence in nature A prominent enol ether is phosphoenolpyruvate. The enzyme chorismate mutase catalyzes the Claisen rearrangement of the enol ether called chorismate to prephenate, an intermediate in the biosynthesis of phenylalanine and tyrosine. Batyl alcohol and related glyceryl ethers are susceptible to dehydrogenation, catalyzed by desaturases, to give the vinyl ethers called plasmalogens. See also Silyl enol ether References Functional groups
Enol ether
[ "Chemistry" ]
674
[ "Functional groups" ]
2,955,396
https://en.wikipedia.org/wiki/Ethylenediamine
Ethylenediamine (abbreviated as en when a ligand) is the organic compound with the formula C2H4(NH2)2. This colorless liquid with an ammonia-like odor is a basic amine. It is a widely used building block in chemical synthesis, with approximately 500,000 tonnes produced in 1998. Ethylenediamine is the first member of the so-called polyethylene amines. Synthesis Ethylenediamine is produced industrially by treating 1,2-dichloroethane with ammonia under pressure at 180 °C in an aqueous medium: ClCH2CH2Cl + 2 NH3 → H2NCH2CH2NH2 + 2 HCl In this reaction hydrogen chloride is generated, which forms a salt with the amine. The amine is liberated by addition of sodium hydroxide and can then be recovered by fractional distillation. Diethylenetriamine (DETA) and triethylenetetramine (TETA) are formed as by-products. Another industrial route to ethylenediamine involves the reaction of ethanolamine and ammonia: HOCH2CH2NH2 + NH3 → H2NCH2CH2NH2 + H2O This process involves passing the gaseous reactants over a bed of nickel heterogeneous catalysts. It can be prepared in the lab by the reaction of urea with either ethylene glycol or ethanolamine, followed by decarboxylation of the ethyleneurea intermediate. Ethylenediamine can be purified by treatment with sodium hydroxide to remove water followed by distillation. Applications Ethylenediamine is used in large quantities for production of many industrial chemicals. It forms derivatives with carboxylic acids (including fatty acids), nitriles, alcohols (at elevated temperatures), alkylating agents, carbon disulfide, and aldehydes and ketones. Because of its bifunctional nature, having two amino groups, it readily forms heterocycles such as imidazolidines. Precursor to chelation agents, drugs, and agrochemicals One of the most prominent derivatives of ethylenediamine is the chelating agent EDTA, which is derived from ethylenediamine via a Strecker synthesis involving cyanide and formaldehyde. Hydroxyethylethylenediamine is another commercially significant chelating agent. Numerous bioactive compounds and drugs contain the N–CH2–CH2–N linkage, including some antihistamines. Salts of ethylenebisdithiocarbamate are commercially significant fungicides under the brand names Maneb, Mancozeb, Zineb, and Metiram. Some imidazoline-containing fungicides are derived from ethylenediamine. Pharmaceutical ingredient Ethylenediamine is an ingredient in the common bronchodilator drug aminophylline, where it serves to solubilize the active ingredient theophylline. Ethylenediamine has also been used in dermatologic preparations, but has been removed from some because it causes contact dermatitis. When used as a pharmaceutical excipient, after oral administration its bioavailability is about 0.34, due to a substantial first-pass effect. Less than 20% is eliminated by renal excretion. Ethylenediamine-derived antihistamines are the oldest of the five classes of first-generation antihistamines, beginning with piperoxan, also known as benodain, discovered in 1933 at the Pasteur Institute in France, and also including mepyramine, tripelennamine, and antazoline. The other classes are derivatives of ethanolamine, alkylamine, piperazine, and others (primarily tricyclic and tetracyclic compounds related to phenothiazines, tricyclic antidepressants, as well as the cyproheptadine-phenindamine family). Role in polymers Ethylenediamine, because it contains two amine groups, is a widely used precursor to various polymers. Condensates derived from formaldehyde are plasticizers. It is widely used in the production of polyurethane fibers.
The PAMAM class of dendrimers are derived from ethylenediamine. Tetraacetylethylenediamine The bleaching activator tetraacetylethylenediamine is generated from ethylenediamine. The derivative N,N-ethylenebis(stearamide) (EBS) is a commercially significant mold-release agent and a surfactant in gasoline and motor oil. Other applications Ethylenediamine is also used: as a solvent, being miscible with polar solvents, and to solubilize proteins such as albumins and casein; in certain electroplating baths; as a corrosion inhibitor in paints and coolants; as ethylenediamine dihydroiodide (EDDI), which is added to animal feeds as a source of iodide; in chemicals for color photography developing, binders, adhesives, fabric softeners, curing agents for epoxies, and dyes; and as a compound to sensitize nitromethane into an explosive. This mixture was used at Picatinny Arsenal during World War II, giving the nitromethane and ethylenediamine mixture the nickname PLX, or Picatinny Liquid Explosive. Coordination chemistry Ethylenediamine is a well-known bidentate chelating ligand for coordination compounds, with the two nitrogen atoms donating their lone pairs of electrons when ethylenediamine acts as a ligand. It is often abbreviated "en" in inorganic chemistry. The complex [Co(en)3]3+ is a well-studied example. Schiff base ligands easily form from ethylenediamine. For example, the diamine condenses with 4-(trifluoromethyl)benzaldehyde to give the diimine. The salen ligands, some of which are used in catalysis, are derived from the condensation of salicylaldehydes and ethylenediamine. Related ligands Related derivatives of ethylenediamine include ethylenediaminetetraacetic acid (EDTA), tetramethylethylenediamine (TMEDA), and tetraethylethylenediamine (TEEDA). Chiral analogs of ethylenediamine include 1,2-diaminopropane and trans-diaminocyclohexane. Safety Ethylenediamine, like ammonia and other low-molecular-weight amines, is a skin and respiratory irritant. Unless tightly contained, liquid ethylenediamine will release toxic and irritating vapors into its surroundings, especially on heating. The vapors absorb moisture from humid air to form a characteristic white mist, which is extremely irritating to skin, eyes, lungs and mucous membranes. References External links IRIS EPA Ethylenediamine CDC - NIOSH Pocket Guide to Chemical Hazards Chemical data Diamines Amine solvents Chelating agents Fuel antioxidants Corrosion inhibitors Commodity chemicals Ethyleneamines Foul-smelling chemicals Organic compounds with 2 carbon atoms
Ethylenediamine
[ "Chemistry" ]
1,479
[ "Fuel antioxidants", "Products of chemical industry", "Organic compounds with 2 carbon atoms", "Organic compounds", "Corrosion inhibitors", "Chelating agents", "Commodity chemicals", "Process chemicals" ]
2,955,643
https://en.wikipedia.org/wiki/Diethylenetriamine
Diethylenetriamine (abbreviated Dien or DETA), also known as 2,2′-iminodi(ethylamine), is an organic compound with the formula HN(CH2CH2NH2)2. This colourless hygroscopic liquid is soluble in water and polar organic solvents, but not simple hydrocarbons. Diethylenetriamine is a structural analogue of diethylene glycol. Its chemical properties resemble those of ethylenediamine, and it has similar uses. It is a weak base and its aqueous solution is alkaline. DETA is a byproduct of the production of ethylenediamine from ethylene dichloride. Reactions and uses Diethylenetriamine is a common curing agent for epoxy resins in epoxy adhesives and other thermosets. It is N-alkylated upon reaction with epoxide groups, forming crosslinks. In coordination chemistry, it serves as a tridentate ligand forming complexes such as Co(dien)(NO2)3. Like some related amines, it is used in the oil industry for the extraction of acid gas. Like ethylenediamine, DETA can also be used to sensitize nitromethane, making a liquid explosive compound similar to PLX. This compound is cap-sensitive, with an explosive velocity of around 6,200 m/s, and is discussed in patent #3,713,915. Mixed with unsymmetrical dimethylhydrazine, it was used as Hydyne, a propellant for liquid-fuel rockets. DETA has been evaluated for use in the Countermine System under development by the U.S. Office of Naval Research, where it would be used to ignite and consume the explosive fill of land mines in beach and surf zones. See also Triethylenetetramine References Further reading External links Ethyleneamines Amine solvents Chelating agents Rocket fuels NMDA receptor antagonists Secondary amines
Diethylenetriamine
[ "Chemistry" ]
427
[ "Chelating agents", "Process chemicals" ]
2,955,674
https://en.wikipedia.org/wiki/Triethylenetetramine
Triethylenetetramine (TETA and trien), also known as trientine (INN) when used medically, is an organic compound with the formula [CH2NHCH2CH2NH2]2. The pure free base is a colorless oily liquid, but, like many amines, older samples assume a yellowish color due to impurities resulting from air oxidation. It is soluble in polar solvents. The branched isomer tris(2-aminoethyl)amine and piperazine derivatives may also be present in commercial samples of TETA. The hydrochloride salts are used medically as a treatment for copper toxicity. Uses Epoxy uses The reactivity and uses of TETA are similar to those of the related polyamines ethylenediamine and diethylenetriamine. It is primarily used as a crosslinker ("hardener") in epoxy curing. TETA, like other aliphatic amines, reacts more quickly and at lower temperatures than aromatic amines because of reduced steric hindrance: the linear nature of the molecule allows it to rotate and twist. Medical uses The hydrochloride salt of TETA, referred to as trientine hydrochloride, is a chelating agent that is used to bind and remove copper in the body to treat Wilson's disease, particularly in those who are intolerant to penicillamine. Some recommend trientine as first-line treatment, but experience with penicillamine is more extensive. Trientine hydrochloride (brand name Syprine) was approved for medical use in the United States in November 1985. Trientine tetrahydrochloride (brand name Cuprior) was approved for medical use in the European Union in September 2017. It is indicated for the treatment of Wilson's disease in adults, adolescents and children five years of age or older who are intolerant to D-penicillamine therapy. Trientine dihydrochloride (brand name Cufence) was approved for medical use in the European Union in July 2019. It is indicated for the treatment of Wilson's disease in adults, adolescents and children five years of age or older who are intolerant to D-penicillamine therapy. The most common side effects include nausea, especially when starting treatment, skin rash, duodenitis (inflammation of the duodenum, the part of the gut leading out of the stomach), and severe colitis (inflammation in the large bowel causing pain and diarrhea). Society and culture Controversies In the United States, Valeant Pharmaceuticals International raised the price of its Syprine brand of TETA from $625 to $21,267 for 100 pills over five years. The New York Times said that this "egregious" price increase caused public outrage. Teva Pharmaceuticals developed a generic, which patients and doctors expected to be cheaper, but when it was introduced in February 2018, Teva's price was $18,375 for 100 pills.
Aaron Kesselheim, who studies drug pricing at Harvard Medical School, said that drug companies price the product at what they think the market will bear. Production TETA is prepared by heating ethylenediamine or ethanolamine/ammonia mixtures over an oxide catalyst. This process gives a variety of amines, especially ethyleneamines, which are separated by distillation and sublimation. Coordination chemistry TETA is a tetradentate ligand in coordination chemistry, where it is referred to as trien. Octahedral complexes of the type M(trien)L2 can adopt several diastereomeric structures. References Ethyleneamines Chelating agents Orphan drugs Tetradentate ligands Secondary amines
Triethylenetetramine
[ "Chemistry" ]
1,108
[ "Chelating agents", "Process chemicals" ]
2,955,810
https://en.wikipedia.org/wiki/Glomeromycota
Glomeromycota (often referred to as glomeromycetes, as they include only one class, Glomeromycetes) are one of eight currently recognized divisions within the kingdom Fungi, with approximately 230 described species. Members of the Glomeromycota form arbuscular mycorrhizas (AMs) with the thalli of bryophytes and the roots of vascular land plants. Not all species have been shown to form AMs, and one, Geosiphon pyriformis, is known not to do so. Instead, it forms an endocytobiotic association with Nostoc cyanobacteria. The majority of evidence shows that the Glomeromycota are dependent on land plants (Nostoc in the case of Geosiphon) for carbon and energy, but there is recent circumstantial evidence that some species may be able to lead an independent existence. The arbuscular mycorrhizal species are terrestrial and widely distributed in soils worldwide, where they form symbioses with the roots of the majority of plant species (>80%). They can also be found in wetlands, including salt-marshes, and associated with epiphytic plants. According to multigene phylogenetic analyses, this taxon is located as a member of the phylum Mucoromycota. Currently, the phylum name Glomeromycota is invalid, and the subphylum Glomeromycotina should be used to describe this taxon. Reproduction The Glomeromycota have generally coenocytic (occasionally sparsely septate) mycelia and reproduce asexually through blastic development of the hyphal tip to produce spores (glomerospores, a type of blastospore) with diameters of 80–500 μm. In some, complex spores form within a terminal saccule. Recently it was shown that Glomus species contain 51 genes encoding all the tools necessary for meiosis. Based on these and related findings, it was suggested that Glomus species may have a cryptic sexual cycle. Colonization New colonization of AM fungi largely depends on the amount of inoculum present in the soil. Although pre-existing hyphae and infected root fragments have been shown to colonize the roots of a host successfully, germinating spores are considered to be the key players in new host establishment. Spores are commonly dispersed by fungal and plant burrowing herbivore partners, but some air dispersal capabilities are also known. Studies have shown that spore germination is specific to particular environmental conditions, such as the right amount of nutrients, temperature, or host availability. It has also been observed that the rate of root system colonization is directly correlated with spore density in the soil. In addition, new data also suggest that AM fungal host plants secrete chemical factors that attract and enhance the growth of developing spore hyphae towards the root system. The necessary components for the colonization of Glomeromycota include the host's fine root system, proper development of intracellular arbuscular structures, and a well-established external fungal mycelium. Colonization is accomplished by the interactions between germinating spore hyphae and the root hairs of the host or by the development of appressoria between epidermal root cells. The process is regulated by specialized chemical signaling and changes in gene expression of both the host and AM fungi. Intracellular hyphae extend up to the cortical cells of the root and penetrate the cell walls, but not the inner cellular membrane, creating an internal invagination. The penetrating hyphae develop a highly branched structure called an arbuscule, which functions only for a short period before degradation and absorption by the host's root cells.
A fully developed arbuscular mycorrhizal structure facilitates the two-way movement of nutrients between the host and mutualistic fungal partner. The symbiotic association allows the host plant to respond better to environmental stresses, and the non-photosynthetic fungi to obtain carbohydrates produced by photosynthesis. Phylogeny Initial studies of the Glomeromycota were based on the morphology of soil-borne sporocarps (spore clusters) found in or near colonized plant roots. Distinguishing features such as wall morphologies, size, shape, color, hyphal attachment and reaction to staining compounds allowed a phylogeny to be constructed. Superficial similarities led to the initial placement of the genus Glomus in the unrelated family Endogonaceae. Following broader reviews that cleared up the sporocarp confusion, the Glomeromycota were first proposed in the genera Acaulospora and Gigaspora before being accorded their own order with the three families Glomaceae (now Glomeraceae), Acaulosporaceae and Gigasporaceae. With the advent of molecular techniques, this classification has undergone major revision. An analysis of small subunit (SSU) rRNA sequences indicated that they share a common ancestor with the Dikarya. Nowadays it is accepted that Glomeromycota consists of four orders. Several species which produce glomoid spores (i.e., spores similar to those of Glomus) in fact belong to other deeply divergent lineages and were placed in the orders Paraglomerales and Archaeosporales. This new classification includes the Geosiphonaceae, which presently contains one fungus (Geosiphon pyriformis) that forms endosymbiotic associations with the cyanobacterium Nostoc punctiforme and produces spores typical of this division, in the Archaeosporales. Work in this field is incomplete, and members of Glomus may be better suited to different genera or families. Molecular biology The biochemical and genetic characterization of the Glomeromycota has been hindered by their biotrophic nature, which impedes laboratory culturing. This obstacle was eventually surpassed with the use of root cultures; most recently, a method which applies sequencing of single nuclei from spores has also been developed to circumvent this challenge. The first mycorrhizal gene to be sequenced was the small-subunit ribosomal RNA (SSU rRNA). This gene is highly conserved and commonly used in phylogenetic studies, so it was isolated from spores of each taxonomic group before amplification through the polymerase chain reaction (PCR). A metatranscriptomic survey of the Sevilleta Arid Lands found that 5.4% of the fungal rRNA reads mapped to Glomeromycota. This result was inconsistent with previous PCR-based studies of community structure in the region, suggesting that previous PCR-based studies may have underestimated Glomeromycota abundance due to amplification biases. See also Prototaxites References External links Tree of Life Glomeromycota Glomeromycota at the International Culture Collection of VA Mycorrhizal Fungi (INVAM) Glomeromycota at the University of Sydney Fungal Biology site 'AMF-phylogeny' – 'Glomeromycota database' website at the University of Munich Fungus phyla Fungi by classification Soil biology
Glomeromycota
[ "Biology" ]
1,507
[ "Fungi", "Eukaryotes by classification", "Soil biology", "Fungi by classification" ]
2,955,843
https://en.wikipedia.org/wiki/Satplan
Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem, which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT. Given a problem instance in planning, with a given initial state, a given set of actions, a goal, and a horizon length, a propositional formula is generated that is satisfiable if and only if there is a plan with the given horizon length. This is similar to the simulation of Turing machines with the satisfiability problem in the proof of Cook's theorem. A plan can be found by testing the satisfiability of the formulas for different horizon lengths. The simplest way of doing this is to go through horizon lengths sequentially: 0, 1, 2, and so on. See also Graphplan References H. A. Kautz and B. Selman (1992). Planning as satisfiability. In Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI'92), pages 359–363. H. A. Kautz and B. Selman (1996). Pushing the envelope: planning, propositional logic, and stochastic search. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI'96), pages 1194–1201. J. Rintanen (2009). Planning and SAT. In A. Biere, H. van Maaren, M. Heule and Toby Walsh, Eds., Handbook of Satisfiability, pages 483–504, IOS Press. Automated planning and scheduling
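To make the encoding concrete, here is a minimal Python sketch on a toy domain invented for illustration (one fluent, "the door is open", and one action, "open the door", over a horizon of length 1); a brute-force enumeration stands in for a real SAT solver such as DPLL or WalkSAT:

from itertools import product

# Propositional variables (1-based, as in DIMACS CNF):
#   1: open@0    2: act_open@0    3: open@1
cnf = [
    [-1],         # initial state: the door is closed at t=0
    [-2, -1],     # precondition: act_open at t=0 requires the door closed at t=0
    [-2, 3],      # effect: act_open at t=0 makes the door open at t=1
    [-3, 1, 2],   # frame axiom: if the door is open at t=1, it was already
                  # open at t=0 or act_open was executed at t=0
    [3],          # goal: the door is open at t=1
]
N_VARS = 3

def solve(cnf, n_vars):
    """Brute-force satisfiability check; returns an assignment or None."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            return bits
    return None

print(solve(cnf, N_VARS))  # (False, True, True): execute act_open at step 0

A real Satplan system generates such clauses automatically from a planning problem description and, if the formula is unsatisfiable, increases the horizon length and tries again.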
Satplan
[ "Technology" ]
354
[ "Computing stubs", "Computer science", "Computer science stubs" ]
2,955,951
https://en.wikipedia.org/wiki/Grid-leak%20detector
A grid leak detector is an electronic circuit that demodulates an amplitude modulated alternating current and amplifies the recovered modulating voltage. The circuit utilizes the non-linear cathode-to-control-grid conduction characteristic and the amplification factor of a vacuum tube. Invented by Lee De Forest around 1912, it was used as the detector (demodulator) in the first vacuum tube radio receivers until the 1930s. History Early applications of triode tubes (Audions) as detectors usually did not include a resistor in the grid circuit. The first use of a resistance in the grid circuit of a vacuum tube detector circuit may have been by Sewall Cabot in 1906. Cabot wrote that he made a pencil mark to discharge the grid condenser, after finding that touching the grid terminal of the tube would cause the detector to resume operation after having stopped. Edwin H. Armstrong, in 1915, described the use of "a resistance of several hundred thousand ohms placed across the grid condenser" for the purpose of discharging the grid condenser. The heyday of grid leak detectors was the 1920s, when battery-operated, multiple-dial tuned radio frequency receivers using low-amplification-factor triodes with directly heated cathodes were the contemporary technology. The Zenith Models 11, 12, and 14 are examples of these kinds of radios. After screen-grid tubes became available for new designs in 1927, most manufacturers switched to plate detectors, and later to diode detectors. The grid leak detector has been popular for many years with amateur radio operators and shortwave listeners who construct their own receivers. Functional overview The stage performs two functions: Detection: The control grid and cathode operate as a diode. At small radio frequency signal (carrier) amplitudes, square-law detection takes place due to the non-linear curvature of the grid current versus grid voltage characteristic. Detection transitions at larger carrier amplitudes to linear detection behavior due to unilateral conduction from the cathode to grid. Amplification: The varying direct current (dc) voltage of the grid acts to control the plate current. The voltage of the recovered modulating signal is increased in the plate circuit, resulting in the grid leak detector producing greater audio frequency output than a diode detector at small input signal levels. The plate current includes the radio frequency component of the received signal, which is made use of in regenerative receiver designs. Operation The control grid and cathode are operated as a diode while at the same time the control grid voltage exerts its usual influence on the electron stream from cathode to plate. In the circuit, a capacitor (the grid condenser) couples a radio frequency signal (the carrier) to the control grid of an electron tube. The capacitor also facilitates development of a dc voltage on the grid. The impedance of the capacitor is small at the carrier frequency and high at the modulating frequencies. A resistor (the grid leak) is connected either in parallel with the capacitor or from the grid to the cathode. The resistor permits dc charge to "leak" from the capacitor and is utilized in setting up the grid bias. At small carrier signal levels, typically not more than 0.1 volt, the grid to cathode space exhibits non-linear resistance. Grid current occurs during 360 degrees of the carrier frequency cycle.
The grid current increases more during the positive excursions of the carrier voltage than it decreases during the negative excursions, due to the parabolic grid current versus grid voltage curve in this region. This asymmetrical grid current develops a dc grid voltage that includes the modulation frequencies. In this region of operation, the demodulated signal is developed in series with the dynamic grid resistance, which is typically in the range of 50,000 to 250,000 ohms. This resistance and the grid condenser, along with the grid capacitance, form a low-pass filter that determines the audio frequency bandwidth at the grid. At carrier signal levels large enough to make conduction from cathode to grid cease during the negative excursions of the carrier, the detection action is that of a linear diode detector. Grid leak detection optimized for operation in this region is known as power grid detection or grid leak power detection. Grid current occurs only on the positive peaks of the carrier frequency cycle. The coupling capacitor will acquire a dc charge due to the rectifying action of the cathode to grid path. The capacitor discharges through the resistor (thus "grid leak") during the time that the carrier voltage is decreasing. The dc grid voltage will vary with the modulation envelope of an amplitude modulated signal. The plate current is passed through a load impedance chosen to produce the desired amplification in conjunction with the tube characteristics. In non-regenerative receivers, a capacitor of low impedance at the carrier frequency is connected from the plate to cathode to prevent amplification of the carrier frequency. Design The capacitance of the grid condenser is chosen to be around ten times the grid input capacitance and is typically 100 to 300 picofarads (pF), with the smaller value for screen grid and pentode tubes. The resistance and electrical connection of the grid leak, along with the grid current, determine the grid bias. For operation of the detector at maximum sensitivity, the bias is placed near the point on the grid current versus grid voltage curve where the maximum rectification effect occurs, which is the point of maximum rate of change of slope of the curve. If a dc path is provided from the grid leak to an indirectly heated cathode or to the negative end of a directly heated cathode, a negative initial-velocity grid bias is produced relative to the cathode, determined by the product of the grid leak resistance and the grid current. For certain directly heated cathode tubes, the optimum grid bias is at a positive voltage relative to the negative end of the cathode. For these tubes, a dc path is provided from the grid leak to the positive side of the cathode or the positive side of the "A" battery, providing a positive fixed bias voltage at the grid determined by the dc grid current and the resistance of the grid leak. As the resistance of the grid leak is increased, the grid resistance increases and the audio frequency bandwidth at the grid decreases, for a given grid condenser capacitance. For triode tubes, the dc voltage at the plate is chosen for operation of the tube at the same plate current usually used in amplifier operation and is typically less than 100 volts. For pentode and tetrode tubes, the screen grid voltage is chosen or made adjustable to permit the desired plate current and amplification with the chosen plate load impedance.
For grid leak power detection, the time constant of the grid leak and condenser must be shorter than the period of the highest audio frequency to be reproduced. A grid leak of around 250,000 to 500,000 ohms is suitable with a condenser of 100 pF. The grid leak resistance for grid leak power detection can be determined by R = 1/(2πfC), where R is the grid leak resistance, f is the highest audio frequency to be reproduced and C is the grid condenser capacitance. A tube requiring a comparatively large grid voltage for plate current cutoff is of advantage (usually a low amplification factor triode). The peak 100 percent modulated input signal voltage the grid leak detector can demodulate without excess distortion is about one half of the projected cutoff bias voltage, corresponding to a peak unmodulated carrier voltage of about one quarter of the projected cutoff bias. For power grid detection using a directly heated cathode tube, the grid leak resistor is connected between the grid and the negative end of the filament, either directly or through the RF transformer. Effect of tube type Tetrode and pentode tubes provide significantly higher grid input impedance than triodes, resulting in less loading of the circuit providing the signal to the detector. Tetrode and pentode tubes also produce significantly higher audio frequency output amplitude at small carrier input signal levels (around one volt or less) in grid leak detector applications than triodes. Advantages The grid leak detector potentially offers greater economy than the use of separate diode and amplifier tubes. At small input signal levels, the circuit produces higher output amplitude than a simple diode detector. Disadvantages One potential disadvantage of the grid leak detector, primarily in non-regenerative circuits, is the load it can present to the preceding circuit. The radio frequency input impedance of the grid leak detector is dominated by the tube's grid input impedance, which can be on the order of 6000 ohms or less for triodes, depending on tube characteristics and signal frequency. Other disadvantages are that it can produce more distortion and is less suitable for input signal voltages over a volt or two than the plate detector or diode detector. See also Tuned radio frequency receiver Regenerative radio receiver Radio References Further reading Schematic of Philco model 84, a superheterodyne cathedral radio from 1933 that uses a regenerative detector. (Note: The capacitor for the detector's control grid is the "tickler coil" winding on the IF transformer.) Analog circuits Radio electronics Radio technology Vacuum tubes History of radio
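As a quick numeric check of the design relation above, the following short Python sketch computes the grid leak resistance for an assumed design target of 5 kHz audio bandwidth (the 5 kHz figure is an illustrative assumption, not a value from the text) and the 100 pF condenser quoted above:

import math

C = 100e-12   # grid condenser: 100 pF, as quoted above
f = 5000.0    # highest audio frequency to reproduce: 5 kHz (assumed)

R = 1 / (2 * math.pi * f * C)   # R = 1/(2*pi*f*C)
tau = R * C                     # resulting grid leak/condenser time constant
print(f"R   = {R/1e3:.0f} kOhm")                                  # ~318 kOhm
print(f"tau = {tau*1e6:.1f} us vs audio period {1e6/f:.0f} us")   # 31.8 us vs 200 us

The computed value of roughly 318,000 ohms falls within the 250,000 to 500,000 ohm range quoted above for a 100 pF condenser, and the resulting time constant is comfortably shorter than the period of the highest audio frequency.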
Grid-leak detector
[ "Physics", "Technology", "Engineering" ]
1,931
[ "Information and communications technology", "Radio electronics", "Telecommunications engineering", "Analog circuits", "Vacuum tubes", "Vacuum", "Radio technology", "Electronic engineering", "Matter" ]
2,955,960
https://en.wikipedia.org/wiki/Social-desirability%20bias
In social science research, social-desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad" or undesirable behavior. The tendency poses a serious problem for conducting research with self-reports. This bias interferes with the interpretation of average tendencies as well as individual differences. Topics subject to social-desirability bias Topics where socially desirable responding (SDR) is of special concern are self-reports of abilities, personality, sexual behavior, and drug use. When confronted with the question "How often do you masturbate?", for example, respondents may be pressured by a social taboo against masturbation, and either under-report the frequency or avoid answering the question. Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severely underestimated. When confronted with the question "Do you use drugs/illicit substances?", the respondent may be influenced by the fact that controlled substances, including the more commonly used marijuana, are generally illegal. Respondents may feel pressured to deny any drug use or rationalize it, e.g. "I only smoke marijuana when my friends are around." The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions for different subgroups: whereas men tend to inflate their numbers, women tend to underestimate theirs. In either case, the mean reports from both groups are likely to be distorted by social-desirability bias. Other topics that are sensitive to social-desirability bias include: Feelings of psychological distress Self-reported personality traits, which correlate strongly with social-desirability bias Personal income and earnings, often inflated when low and deflated when high Feelings of low self-worth and/or powerlessness, often denied Excretory functions, often approached uncomfortably, if discussed at all Compliance with medicinal-dosing schedules, often inflated Family planning, including use of contraceptives and abortion Religion, often either avoided or uncomfortably approached Patriotism, either inflated or, if denied, done so with a fear of the other party's judgment Bigotry and intolerance, often denied, even if they exist within the respondent Intellectual achievements, often inflated Physical appearance, either inflated or deflated Acts of real or imagined physical violence, often denied Indicators of charity or "benevolence," often inflated Illegal acts, often denied Voter turnout Support for far-right parties Individual differences in socially desirable responding In 1953, Allen L. Edwards introduced the notion of social desirability to psychology, demonstrating the role of social desirability in the measurement of personality traits. He demonstrated that social desirability ratings of personality trait descriptions are very highly correlated with the probability that a subsequent group of people will endorse these trait self-descriptions. In his first demonstration of this pattern, the correlation between one group of college students' social desirability ratings of a set of traits and the probability that college students in a second group would endorse self-descriptions describing the same traits was so high that it could distort the meaning of the personality traits.
In other words, do these self-descriptions describe personality traits or social desirability? Edwards subsequently developed the first Social Desirability Scale (SDS), a set of 39 true–false questions extracted from the Minnesota Multiphasic Personality Inventory (MMPI), questions that judges could, with high agreement, order according to their social desirability. These items were subsequently found to be very highly correlated with a wide range of measurement scales, including MMPI personality and diagnostic scales. The SDS is also highly correlated with the Beck Hopelessness Inventory. The fact that people differ in their tendency to engage in socially desirable responding (SDR) is a special concern to those measuring individual differences with self-reports. Individual differences in SDR make it difficult to distinguish those people with good traits who are responding factually from those distorting their answers in a positive direction. When SDR cannot be eliminated, researchers may resort to evaluating the tendency and then controlling for it. A separate SDR measure must be administered together with the primary measure (test or interview) aimed at the subject matter of the research/investigation. The key assumption is that respondents who answer in a socially desirable manner on that scale are also responding desirably to all self-reports throughout the study. In some cases, the entire questionnaire package from high-scoring respondents may simply be discarded. Alternatively, respondents' answers on the primary questionnaires may be statistically adjusted commensurate with their SDR tendencies. For example, this adjustment is performed automatically in the standard scoring of MMPI scales. The major concern with SDR scales is that they confound style with content. After all, people actually differ in the degree to which they possess desirable traits (e.g. nuns versus criminals). Consequently, measures of social desirability confound true differences with social-desirability bias. Standard measures of individual SDR Until the 1990s, the most commonly used measure of socially desirable responding was the Marlowe–Crowne Social Desirability Scale. The original version comprised 33 true–false items. A shortened version, the Strahan–Gerbasi, comprises only ten items, but some have raised questions regarding the reliability of this measure. In 1991, Delroy L. Paulhus published the Balanced Inventory of Desirable Responding (BIDR): a questionnaire designed to measure two forms of SDR. This forty-item instrument provides separate subscales for "impression management," the tendency to give inflated self-descriptions to an audience, and self-deceptive enhancement, the tendency to give honest but inflated self-descriptions. The commercial version of the BIDR is called the "Paulhus Deception Scales (PDS)." Scales designed to tap response styles are available in all major languages, including Italian and German. Techniques to reduce social-desirability bias Anonymity and confidentiality Anonymous survey administration, compared with in-person or phone-based administration, has been shown to elicit higher reporting of items subject to social-desirability bias. In anonymous survey settings, the subject is assured that their responses will not be linked to them, and they are not asked to divulge sensitive information directly to a surveyor.
Anonymity can be established through self-administration of paper surveys returned by envelope, mail, or ballot box, or self-administration of electronic surveys via computer, smartphone, or tablet. Audio-assisted electronic surveys have also been established for low-literacy or non-literate study subjects. Confidentiality can be established in non-anonymous settings by ensuring that only study staff are present and by maintaining data confidentiality after surveys are complete. Including assurances of data confidentiality in surveys has a mixed effect on sensitive-question response; it may either increase response due to increased trust, or decrease response by increasing suspicion and concern. Specialized questioning techniques Several techniques have been established to reduce bias when asking questions sensitive to social desirability. Complex question techniques may reduce social-desirability bias, but may also be confusing or misunderstood by respondents. Beyond specific techniques, social-desirability bias may be reduced by neutral wording of questions and prompts. Ballot Box Method The Ballot Box Method (BBM) provides survey respondents anonymity by allowing them to respond in private, self-completing their responses to the sensitive survey questions on a secret ballot and submitting them to a locked box. The interviewer has no knowledge of what is recorded on the secret ballot and does not have access to the lock on the box, keeping the responses private and limiting the potential for SDB. However, a unique control number on each ballot allows the answers to be reunited with a corresponding questionnaire that contains less sensitive questions. The BBM has been used successfully to obtain estimates of sensitive sexual behaviours during an HIV prevention study, as well as of illegal environmental resource use. In a validation study where observed behaviour was matched to reported behaviour using various SDB control methods, the BBM was by far the most accurate bias reduction method, performing significantly better than the Randomized Response Technique (RRT). Randomized response techniques The randomized response technique asks a participant to respond with a fixed answer or to answer truthfully based on the outcome of a random act. For example, respondents secretly toss a coin and respond "yes" if it comes up heads (regardless of their actual answer to the question), and are instructed to respond truthfully if it comes up tails. This enables the researcher to estimate the actual prevalence of the given behavior among the study population without needing to know the true state of any one individual respondent. Research shows that the validity of the randomized response technique is limited. Validation research has shown that the RRT actually performs worse than direct questioning for some sensitive behaviours, and care should be taken when considering its use. Nominative and best-friend techniques The nominative technique asks a participant about the behavior of their close friends, rather than about their own behavior. Participants are asked how many close friends they know for certain have engaged in a sensitive behavior and how many other people they think know about that behavior. Population estimates of behaviors can be derived from the responses. The similar best-friend methodology asks the participant about the behavior of one best friend.
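The following minimal sketch shows how prevalence is recovered from the coin-flip randomized response design described above; it assumes a fair coin (heads forces a "yes", tails means answer truthfully), and the function name is illustrative:

```python
def rrt_prevalence(yes_fraction: float, p_forced_yes: float = 0.5) -> float:
    """Estimate the true prevalence pi of a sensitive behavior.

    With a fair coin, P(yes) = p_forced_yes + (1 - p_forced_yes) * pi,
    so pi = (P(yes) - p_forced_yes) / (1 - p_forced_yes).
    """
    return (yes_fraction - p_forced_yes) / (1.0 - p_forced_yes)

# If 58% of respondents answered "yes", the estimated true prevalence is 16%,
# without knowing the true state of any individual respondent.
print(rrt_prevalence(0.58))  # 0.16
```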
Unmatched-count technique The unmatched-count technique asks respondents to indicate how many of a list of several items they have done or are true for them. Respondents are randomized to receive either a list of non-sensitive items or that same list plus the sensitive item of interest. Differences in the total number of items between the two groups indicate how many of those in the group receiving the sensitive item said yes to it. Grouped-answer method The grouped-answer method, also known as the two-card or three-card method, combines answer choices such that the sensitive response is combined with at least one non-sensitive response option. Crosswise, triangular, and hidden-sensitivity methods These methods ask participants to select one response based on two or more questions, only one of which is sensitive. For example, a participant will be asked whether their birth year is even and whether they have performed an illegal activity; if the answer to both is yes or to both is no, they select A, and if the answer is yes to one but no to the other, they select B. Because sensitive and non-sensitive questions are combined, the participant's response to the sensitive item is masked. Research shows that the validity of the crosswise model is limited. Bogus pipeline Bogus-pipeline techniques are those in which a participant believes that an objective test, like a lie detector, will be used along with the survey response, whether or not that test or procedure is actually used. Researchers using this technique must convince the participants that there is a machine that can accurately measure their true attitudes and desires. While this can raise ethical questions surrounding deception in psychological research, the technique quickly became widely popular in the 1970s. However, by the 1990s the use of this technique began to wane. Interested in this change, Roese and Jamison (1993) drew on twenty years of research in a meta-analysis of the effectiveness of the bogus-pipeline technique in reducing social-desirability bias. They concluded that while the bogus-pipeline technique was significantly effective, it had perhaps become less used simply because it went out of fashion, or became cumbersome for researchers to use regularly. However, Roese and Jamison argued that there are simple adjustments that can be made to this technique to make it more user-friendly for researchers. Other response styles "Extreme-response style" (ERS) takes the form of exaggerated-extremity preference, e.g. for '1' or '7' on 7-point scales. Its converse, 'moderacy bias', entails a preference for middle-range (or midpoint) responses (e.g. 3–5 on 7-point scales). "Acquiescence" (ARS) is the tendency to respond to items with agreement/affirmation independent of their content ("yea"-saying). These kinds of response styles differ from social-desirability bias in that they are unrelated to the question's content and may be present in both socially neutral and socially favorable or unfavorable contexts, whereas SDR is, by definition, tied to the latter. See also Biased random walk on a graph Bradley effect Knowledge falsification Moralistic fallacy Preference falsification Pseudo-opinion Reactivity (psychology) Response bias Self-censorship Self-report study Silent majority Social influence bias Social media bias Social research Virtue signalling Watching-eye effect References Social influence Conformity Experimental bias Popular psychology Survey methodology
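A sketch of the estimator behind the unmatched-count technique described above: the difference between the mean item counts of the two randomized groups estimates the prevalence of the sensitive item (the counts here are illustrative, not real data):

```python
from statistics import mean

def unmatched_count_estimate(control_counts, treatment_counts):
    """Prevalence estimate for the sensitive item: mean count of the group
    that received the sensitive item minus mean count of the control group."""
    return mean(treatment_counts) - mean(control_counts)

control = [2, 3, 1, 2, 3, 2]    # counts from the non-sensitive list only
treatment = [3, 3, 2, 2, 4, 3]  # counts from the same list plus the sensitive item
print(unmatched_count_estimate(control, treatment))  # ~0.67 -> ~67% prevalence estimate
```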
Social-desirability bias
[ "Mathematics", "Biology" ]
2,630
[ "Behavior", "Conformity", "Statistical concepts", "Experimental bias", "Human behavior" ]
2,956,219
https://en.wikipedia.org/wiki/Post%E2%80%93earnings-announcement%20drift
In financial economics and accounting research, post–earnings-announcement drift or PEAD (also named the SUE effect) is the tendency for a stock's cumulative abnormal returns to drift in the direction of an earnings surprise for several weeks (even several months) following an earnings announcement. Cause and effect Once a firm's current earnings become known, the information content should be quickly digested by investors and incorporated into the efficient market price. However, it has long been known that this is not exactly what happens. For firms that report good news in quarterly earnings, their abnormal security returns tend to drift upwards for at least 60 days following the earnings announcement. Similarly, firms that report bad news in earnings tend to have their abnormal security returns drift downwards for a similar period. This phenomenon is called post-announcement drift. It was first documented in the information content study of Ray J. Ball & P. Brown, 'An empirical evaluation of accounting income numbers', Journal of Accounting Research, Autumn 1968, pp. 159–178. As one of the major earnings anomalies that run counter to market efficiency theory, PEAD is considered a robust finding and one of the most studied topics in the financial market literature. Hypotheses The phenomenon can be explained by a number of hypotheses. The most widely accepted explanation for the effect is investor under-reaction to earnings announcements. Bernard & Thomas (1989) and Bernard & Thomas (1990) provided a comprehensive summary of PEAD research. According to Bernard & Thomas (1990), PEAD patterns can be viewed as including two components. The first component is a positive autocorrelation between seasonal differences (i.e., seasonal random walk forecast errors – the difference between actual earnings and forecast earnings) that is strongest for adjacent quarters, being positive over the first three lag quarters. Second, there is a negative autocorrelation between seasonal differences that are four quarters apart. References Do Individual Investors Drive Post Earnings Announcement Drift? OSU Finance. Dharmesh, V. K., & Nakul, S. (1995). Contemporary Issues in Accounting (5th ed.): Post earnings announcement drift, p. 269. See also Earnings response coefficient Momentum Accounting research Stock market Financial economics Behavioral finance Financial markets
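A minimal sketch of the Bernard & Thomas autocorrelation pattern described above, where the seasonal random walk forecast error is earnings minus earnings four quarters earlier; the earnings series here is synthetic, so only real data would reproduce the reported signs:

```python
import numpy as np

def seasonal_forecast_errors(quarterly_eps: np.ndarray) -> np.ndarray:
    """Seasonal random walk forecast error: e_t = eps_t - eps_(t-4)."""
    return quarterly_eps[4:] - quarterly_eps[:-4]

def autocorr(x: np.ndarray, lag: int) -> float:
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return float(x[:-lag] @ x[lag:] / (x @ x))

# Synthetic stand-in for a firm's quarterly EPS history (oldest first);
# substitute real data to test the pattern.
rng = np.random.default_rng(0)
eps = np.cumsum(rng.normal(0.02, 0.10, size=40))

errors = seasonal_forecast_errors(eps)
for lag in (1, 2, 3, 4):
    print(lag, round(autocorr(errors, lag), 3))
# Bernard & Thomas report positive autocorrelation at lags 1-3 and negative at lag 4.
```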
Post–earnings-announcement drift
[ "Biology" ]
463
[ "Behavioral finance", "Behavior", "Human behavior" ]
2,956,278
https://en.wikipedia.org/wiki/Vascular%20bundle
A vascular bundle is a part of the transport system in vascular plants. The transport itself happens in vascular tissue, which exists in two forms: xylem and phloem. Both these tissues are present in a vascular bundle, which in addition will include supporting and protective tissues. There is also a tissue between the xylem and phloem: the cambium. The xylem typically lies towards the axis (adaxial) with the phloem positioned away from the axis (abaxial). In a stem or root this means that the xylem is closer to the centre of the stem or root while the phloem is closer to the exterior. In a leaf, the adaxial surface of the leaf will usually be the upper side, with the abaxial surface the lower side. The sugars synthesized by the plant using sunlight are transported by the phloem, which is closer to the lower surface. Aphids and leafhoppers feed on these sugars by tapping into the phloem; this is why aphids and leafhoppers are typically found on the underside of a leaf rather than on the top. The position of vascular bundles relative to each other may vary considerably: see stele. The size of a vascular bundle depends on the size of the vein it serves. Bundle-sheath cells The bundle-sheath cells are the photosynthetic cells arranged into a tightly packed sheath around the vein of a leaf. The sheath forms a protective covering on the leaf vein and consists of one or more cell layers, usually parenchyma. Loosely arranged mesophyll cells lie between the bundle sheath and the leaf surface. In C4 plants, the Calvin cycle is confined to the chloroplasts of these bundle-sheath cells. C2 plants also use a variation of this structure. References Further reading Campbell, N. A. & Reece, J. B. (2005). Photosynthesis. Biology (7th ed.). San Francisco: Benjamin Cummings. External links Curtis, Lersten, and Nowak cross section of a vascular bundle Mauseth another cross section of a vascular bundle Plant physiology Plant anatomy Tissues (biology)
Vascular bundle
[ "Biology" ]
442
[ "Plant physiology", "Plants" ]
2,956,372
https://en.wikipedia.org/wiki/Foot%20per%20second
The foot per second (plural feet per second) is a unit of both speed (scalar) and velocity (vector quantity, which includes direction). It expresses the distance in feet (ft) traveled or displaced, divided by the time in seconds (s). The corresponding unit in the International System of Units (SI) is the meter per second. Abbreviations include ft/s, fps, and the scientific notation ft s−1. Conversions 1 ft/s = 0.3048 m/s exactly (by definition of the international foot), which corresponds to 1.09728 km/h exactly, about 0.6818 mph, and about 0.5925 knots. See also Foot per second squared, a corresponding unit of acceleration. Feet per minute References Units of velocity Customary units of measurement in the United States
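A small conversion sketch using the exact definition 1 ft = 0.3048 m; the derived factors in the comments follow from that definition and the exact definitions of the mile and the nautical mile:

```python
FOOT_IN_METERS = 0.3048  # exact, by definition of the international foot

def fps_to(value_fps: float) -> dict:
    """Convert a speed in feet per second to several common units."""
    m_per_s = value_fps * FOOT_IN_METERS
    return {
        "m/s": m_per_s,
        "km/h": m_per_s * 3.6,           # 1 ft/s = 1.09728 km/h exactly
        "mph": value_fps * 3600 / 5280,  # 1 mile = 5280 ft exactly
        "knots": m_per_s * 3600 / 1852,  # 1 nautical mile = 1852 m exactly
    }

print(fps_to(1.0))  # {'m/s': 0.3048, 'km/h': 1.09728, 'mph': 0.6818..., 'knots': 0.5924...}
```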
Foot per second
[ "Mathematics" ]
121
[ "Quantity", "Units of velocity", "Units of measurement" ]
2,956,374
https://en.wikipedia.org/wiki/Navigational%20Aids%20for%20the%20History%20of%20Science%2C%20Technology%2C%20and%20the%20Environment%20Project
The Navigational Aids for the History of Science, Technology, and the Environment Project (NAHSTE) was a research archives/manuscripts cataloguing project based at the University of Edinburgh. Following a proposal led by Arnott Wilson in 1999, the project received £261,755 in funding from the Research Support Libraries Programme (RSLP) from 2000 until 2002. The project was designed to open up access to a variety of outstanding collections of archives and manuscripts held at the three partner Higher Education Institutions (HEIs) – the University of Edinburgh, the University of Glasgow and Heriot-Watt University – and to make them accessible on the Internet. The project additionally included linkages to related records held by non-HEI collaborators. Descriptions of the material conform to ISAD(G) (Second edition), whilst information about key individuals conforms to ISAAR(CPF). Catalogues were tagged using the Encoded Archival Description XML standard. Although the project was completed in 2002, the resulting web service continues to be hosted at Edinburgh. References External links Project homepage with links to online collections. Index (publishing) University of Edinburgh Open-access archives
Navigational Aids for the History of Science, Technology, and the Environment Project
[ "Technology" ]
227
[ "Computing stubs", "World Wide Web stubs" ]
2,956,700
https://en.wikipedia.org/wiki/Whitespace%20character
A whitespace character is a character data element that represents white space when text is rendered for display by a computer. For example, a space character (U+0020, ASCII 32) represents blank space such as a word divider in a Western script. A printable character results in output when rendered, but a whitespace character does not. Instead, whitespace characters define the layout of text to a limited degree, interrupting the normal sequence of rendering characters next to each other. The output of subsequent characters is typically shifted to the right (or to the left for right-to-left script) or to the start of the next line. The effect of multiple sequential whitespace characters is cumulative, such that the next printable character is rendered at a location based on the accumulated effect of the preceding whitespace characters. The term whitespace is rooted in the common practice of rendering text on white paper. Normally, a whitespace character is not rendered as white; it affects rendering, but it is not itself rendered. Overview A space character typically inserts horizontal space that is about as wide as a letter. For a monospaced font the width is the width of a letter, and for a variable-width font the width is font-specific. Some fonts support multiple space characters that have different widths. A tab character typically inserts horizontal space that is based on tab stops, which vary by application. A newline character sequence typically moves the render output location to the beginning of the next line. A single newline following text does not actually result in visible whitespace, but two sequential newline sequences between text blocks result in a blank line between the blocks. The height of the blank line varies by application. Using whitespace characters to lay out text is a convention. Applications sometimes render whitespace characters as visible markup so that a user can see what is normally not visible. Typically, a user types a space character by pressing the space bar, a tab character by pressing the tab key, and a newline by pressing the enter key. Unicode The Unicode Character Database defines twenty-five characters as whitespace ("WSpace=Y", "WS"). Seventeen use a definition of whitespace consistent with the algorithm for bidirectional writing ("Bidirectional Character Type=WS") and are known as "Bidi-WS" characters. The remaining characters may also be used, but are not of this "Bidi" type. Substitute images Unicode also provides some visible characters that can be used to represent various whitespace characters, in contexts where a visible symbol must be displayed: Exact space The Cambridge Z88 provided a special "exact space" (code point 160 aka 0xA0), invokable by a keyboard shortcut and displayed as "…" by the operating system's display driver. It was therefore also known as "dot space" in conjunction with BBC BASIC. Under code point 224 (0xE0) the computer also provided a special three-character-cells-wide SPACE symbol "SPC" (analogous to Unicode's single-cell-wide U+2420). Non-space blanks The Braille Patterns Unicode block contains U+2800 BRAILLE PATTERN BLANK, a Braille pattern with no dots raised. Some fonts display the character as a fixed-width blank; however, the Unicode standard explicitly states that it does not act as a space.
Unicode's coverage of the Korean alphabet includes several code points which represent the absence of a written letter, and thus do not display a glyph: Unicode includes a Hangul Filler character in the Hangul Compatibility Jamo block (U+3164). This is classified as a letter, but displayed as an empty space, like a Hangul block containing no jamo. It is used in KS X 1001 Hangul combining sequences to introduce them or denote the absence of a letter in a position, but not in Unicode's combining jamo system. Unicode's combining jamo system uses similar Hangul Choseong Filler and Hangul Jungseong Filler characters to denote the absence of a letter in initial or medial position within a syllable block, which are included in the Hangul Jamo block (U+115F, U+1160). Additionally, a Halfwidth Hangul Filler is included in the Halfwidth and Fullwidth Forms (U+FFA0), which is used when mapping from encodings which include characters from both Johab (or Wansung) and N-byte Hangul (or its EBCDIC counterpart), such as IBM-933, which includes both Johab and EBCDIC fillers. Whitespace and digital typography On-screen display Text editors, word processors, and desktop publishing software differ in how they represent whitespace on the screen, and how they represent spaces at the ends of lines longer than the screen or column width. In some cases, spaces are shown simply as blank space; in other cases they may be represented by an interpunct or other symbols. Many different characters (described below) could be used to produce spaces, and non-character functions (such as margins and tab settings) can also affect whitespace. Many of the Unicode space characters were created for compatibility with classic print typography. Even though digital typography has algorithmic kerning and justification, those space characters can be used to supplement the electronic formatting when needed. Variable-width general-purpose space In computer character encodings, there is a normal general-purpose space (Unicode character U+0020) whose width will vary according to the design of the typeface. Typical values range from 1/5 em to 1/3 em (in digital typography an em is equal to the nominal size of the font, so for a 10-point font the space will probably be between 2 and 3.3 points). Sophisticated fonts may have differently sized spaces for bold, italic, and small-caps faces, and often compositors will manually adjust the width of the space depending on the size and prominence of the text. In addition to this general-purpose space, it is possible to encode a space of a specific width. Hair spaces around dashes Em dashes used as parenthetical dividers, and en dashes when used as word joiners, are usually set continuous with the text. However, such a dash can optionally be surrounded with a hair space, U+200A, or thin space, U+2009. The hair space can be written in HTML by using the numeric character references &#x200A; or &#8202;, or the named entity &hairsp;, although the latter is not universally supported in browsers. The thin space has the named entity &thinsp; and the numeric references &#x2009; or &#8201;. These spaces are much thinner than a normal space (except in a monospaced, non-proportional font), with the hair space in particular being the thinnest of the horizontal whitespace characters. Computing applications Programming languages In most programming language syntax, whitespace characters can be used to separate tokens. For a free-form language, whitespace characters are ignored by code processors (e.g. the compiler).
Even when language syntax requires white space, multiple whitespace characters are often treated the same as a single one. In an off-side rule language, indentation white space is syntactically significant. In the satirical and contrarian language called Whitespace, whitespace characters are the only significant characters and normal text is ignored. Good use of white space in source code can group related logic and make the code easier to understand. Excessive use of whitespace, including at the end of a line where it provides no rendering behavior, is considered a nuisance. Most languages only recognize whitespace characters that have an ASCII code, and disallow most or all of the Unicode codes listed above. The C language defines whitespace characters to be "space, horizontal tab, new-line, vertical tab, and form-feed". The HTTP network protocol requires different types of whitespace to be used in different parts of the protocol, such as: only the space character in the status line, CRLF at the end of a line, and "linear whitespace" in header values. Command-line parsing Typical command-line parsers use the space character to delimit arguments. A value with an embedded space character is problematic, since it causes the value to parse as multiple arguments. Typically, a parser allows for escaping the normal argument parsing by enclosing the text in quotes. Consider listing the files in a directory named "foo bar". This command instead lists the files matching either "foo" or "bar": ls foo bar This command correctly specifies a single argument: ls "foo bar" Markup languages Some markup languages, such as SGML, preserve whitespace as written. Web markup languages such as XML and HTML treat whitespace characters specially, including space characters, for programmers' convenience. One or more space characters read by conforming display-time processors of those markup languages are collapsed to 0 or 1 space, depending on their semantic context. For example, double (or more) spaces within text are collapsed to a single space, and spaces which appear on either side of the "=" that separates an attribute name from its value have no effect on the interpretation of the document. Element end tags can contain trailing spaces, and empty-element tags in XML can contain spaces before the "/>". In these languages, unnecessary whitespace increases the file size, and so may slow network transfers. On the other hand, unnecessary whitespace can also inconspicuously mark code, similar to, but less obvious than, comments in code. This can be desirable to prove an infringement of license or copyright that was committed by copying and pasting. In XML attribute values, sequences of whitespace characters are treated as a single space when the document is read by a parser. Whitespace in XML element content is not changed in this way by the parser, but an application receiving information from the parser may choose to apply similar rules to element content. An XML document author can use the xml:space="preserve" attribute on an element to instruct the parser to discourage the downstream application from altering whitespace in that element's content. In most HTML elements, a sequence of whitespace characters is treated as a single inter-word separator, which may manifest as a single space character when rendering text in a language that normally inserts such space between words.
Conforming HTML renderers are required to apply a more literal treatment of whitespace within a few prescribed elements, such as the pre tag and any element for which CSS has been used to apply pre-like whitespace processing. In such elements, space characters will not be "collapsed" into inter-word separators. In both XML and HTML, the non-breaking space character, along with other non-"standard" spaces, is not treated as collapsible "whitespace", so it is not subject to the rules above. File names Such usage is similar to multiword file names written for operating systems and applications that are confused by embedded space codes—such file names instead use an underscore (_) as a word separator, as_in_this_phrase. Another such symbol was the blank symbol, ␢ (U+2422). This was used in the early years of computer programming when writing on coding forms. Keypunch operators immediately recognized the symbol as an "explicit space". It was used in BCDIC, EBCDIC, and ASCII-1963. See also Carriage return Em (typography) En (typography) Form feed Indent style Line feed Newline Programming style Prosigns for Morse code Space bar Space (punctuation) Tab key Trimming (computer programming) Whitespace (programming language) Zero-width space References External links Property List of Unicode Character Database Character encoding Source code
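A short sketch of two behaviors described above - HTML-style collapsing of whitespace runs and quote-aware command-line splitting - using only the Python standard library:

```python
import re
import shlex

# Collapse runs of whitespace to single spaces, as HTML renderers do for
# ordinary (non-pre) element content.
text = "double  spaces\tand\nnewlines"
print(re.sub(r"\s+", " ", text))    # "double spaces and newlines"

# Quote-aware splitting, as POSIX-style shells do when parsing command lines:
print(shlex.split('ls foo bar'))    # ['ls', 'foo', 'bar']  -> two arguments
print(shlex.split('ls "foo bar"'))  # ['ls', 'foo bar']     -> one argument
```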
Whitespace character
[ "Technology" ]
2,494
[ "Natural language and computing", "Character encoding" ]
4,060,393
https://en.wikipedia.org/wiki/Intracrine
Intracrine refers to a hormone that acts inside a cell, regulating intracellular events. In simple terms, it means that the cell stimulates itself by cellular production of a factor that acts within the cell. Steroid hormones act through intracellular (mostly nuclear) receptors and, thus, may be considered to be intracrines. In contrast, peptide or protein hormones, in general, act as endocrines, autocrines, or paracrines by binding to their receptors present on the cell surface. Several peptide/protein hormones or their isoforms also act inside the cell through different mechanisms. These peptide/protein hormones, which have intracellular functions, are also called intracrines. The term 'intracrine' is thought to have been coined to represent peptide/protein hormones that also have intracellular actions. Intracrine signalling can be better understood by comparing it to paracrine, autocrine and endocrine signalling. In the autocrine system, hormones secreted by a cell bind to autocrine receptors on that same cell. In the paracrine system, hormones secreted by a cell alter the functioning of nearby cells. The endocrine system refers to hormones from a cell affecting another cell that is very distant from the one that released the hormone. Paracrine physiology has been understood for decades, and the effects of paracrine hormones have been observed when, for example, an obesity-associated tumor is affected by local adipocytes even if it is not in direct contact with the fat pads in question. Endocrine physiology, on the other hand, is a growing field in which a new area, called intracrinology, has been explored. In intracrinology, locally produced sex steroids exert their action in the same cell in which they are produced. The biological effects produced by intracellular actions are referred to as intracrine effects, whereas those produced by binding to cell surface receptors are called endocrine, autocrine, or paracrine effects, depending on the origin of the hormone. The intracrine effects of some peptide/protein hormones are similar to their endocrine, autocrine, or paracrine effects; however, these effects differ for some other hormones. Intracrine can also refer to a hormone acting within the cell that synthesizes it. Examples of intracrine peptide hormones: There are several protein/peptide hormones that are also intracrines. Notable examples that have been described in the references include: Peptides of the renin–angiotensin system: angiotensin II and angiotensin (1-7) Fibroblast growth factor 2 Parathyroid hormone-related protein See also Local hormone Autocrine signalling References Park, Jiyoung; Euhus, David M.; Scherer, Philipp E. (August 2011). "Paracrine and Endocrine Effects of Adipose Tissue on Cancer Development and Progression". Endocrine Reviews. 32 (4): 550–570. Labrie, Fernand; Luu-The, Van; Labrie, Claude; Bélanger, Alain; Simard, Jacques; Lin, Sheng-Xiang; Pelletier, Georges (April 2003). "Endocrine and Intracrine Sources of Androgens in Women: Inhibition of Breast Cancer and Other Roles of Androgens and Their Precursor Dehydroepiandrosterone". Endocrine Reviews. 24 (2): 152–182. Cell biology
Intracrine
[ "Biology" ]
739
[ "Cell biology" ]
4,060,966
https://en.wikipedia.org/wiki/Leukocyte-promoting%20factor
Leukocyte-promoting factor, more commonly known as leukopoietin, is a category of substances produced by neutrophils when they encounter a foreign antigen. Leukopoietin stimulates the bone marrow to increase the rate of leukopoiesis in order to replace the neutrophils that will inevitably be lost when they begin to phagocytose the foreign antigens. Leukocyte-promoting factors include colony stimulating factors (CSFs) (produced by monocytes and T lymphocytes), interleukins (produced by monocytes, macrophages, and endothelial cells), prostaglandins, and lactoferrin. See also White blood cell Leukocytosis Complete blood count Indium-111 WBC scan Leukocyte extravasation References Cytokines Hormones of the blood Hematology
Leukocyte-promoting factor
[ "Chemistry", "Biology" ]
188
[ "Biotechnology stubs", "Signal transduction", "Biochemistry stubs", "Cytokines", "Biochemistry" ]
4,061,565
https://en.wikipedia.org/wiki/Charge%20sharing
Charge sharing is an effect of signal degradation through transfer of charges from one electronic domain to another. Charge sharing in semiconductor radiation detectors In pixelated semiconductor radiation detectors - such as photon-counting or hybrid pixel detectors - charge sharing refers to the diffusion of electrical charges, with a negative impact on image quality. Formation of charge sharing In the active detector layer of photon detectors, incident photons are converted to electron-hole pairs via the photoelectric effect. The resulting charge cloud is accelerated towards the readout electronics by an applied bias voltage. Because of thermal energy and repulsion due to the electric fields inside such a device, the charge cloud diffuses, effectively getting larger in lateral size. In pixelated detectors, this effect can lead to detection of parts of the initial charge cloud in neighbouring pixels. As the probability of this crosstalk increases towards pixel edges, it is more prominent in detectors with smaller pixel size. Furthermore, fluorescence of the detector material above its K-edge can produce additional charge carriers that add to the effect of charge sharing. Especially in photon counting detectors, charge sharing can lead to errors in the signal count. Problems of charge sharing Especially in photon counting detectors, the energy of an incident photon is correlated with the net sum of the charge in the primary charge cloud. Detectors of this kind often use thresholds to operate above a certain noise level but also to discriminate between incident photons of different energies. If a certain part of the charge cloud diffuses to the readout electronics of a neighbouring pixel, this results in the detection of two events with lower energy than the primary photon. Furthermore, if the resulting charge in one of the affected pixels is smaller than the threshold, the event is discarded as noise. In general, this leads to underestimation of the energy of incident photons. The registration of one incident photon in several pixels degrades spatial resolution, as the information about the primary interaction is smeared out. Furthermore, this effect leads to degradation of energy resolution due to the general underestimation. Especially in medical applications, charge sharing reduces the dose efficiency, meaning that the proportion of the incident dose useful for imaging applications is reduced. Correction of charge sharing There are several approaches to the correction of charge sharing. One approach is to discard all events where, within the same time window, there is a detector response in more than one corresponding pixel - which severely reduces detector efficiency and limits the possible maximum count rate. Another approach is to add the low-level signals of correlated events in neighbouring pixels and attribute the sum to the pixel with the largest signal. Other correction approaches basically rely on a deconvolution in the signal domain, taking the calibrated detector response into account. Charge sharing in digital electronics In digital electronics, charge sharing is an undesirable signal integrity phenomenon observed most commonly in the domino logic family of digital circuits. The charge sharing problem occurs when the charge which is stored at the output node in the precharge phase is shared among the output or junction capacitances of transistors which are in the evaluation phase.
Charge sharing may degrade the output voltage level or even cause an erroneous output value. References Digital electronics
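The charge-conservation arithmetic behind the effect can be illustrated with a minimal sketch (the capacitance and voltage values are illustrative, not from the article): when a precharged node is connected to an initially discharged parasitic capacitance, the final shared voltage is the total charge divided by the total capacitance.

```python
def shared_voltage(c1: float, v1: float, c2: float, v2: float) -> float:
    """Final voltage after charge sharing: V = (C1*V1 + C2*V2) / (C1 + C2)."""
    return (c1 * v1 + c2 * v2) / (c1 + c2)

# Domino-logic style example: a 10 fF output node precharged to 1.8 V shares
# charge with a 4 fF internal junction capacitance at 0 V.
v_final = shared_voltage(10e-15, 1.8, 4e-15, 0.0)
print(f"{v_final:.2f} V")  # 1.29 V -- a degraded logic-high level
```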
Charge sharing
[ "Engineering" ]
655
[ "Electronic engineering", "Digital electronics" ]
4,061,679
https://en.wikipedia.org/wiki/Pieter%20Van%20den%20Abeele
Pieter Van den Abeele is a computer programmer and the founder of the PowerPC version of Gentoo Linux, a distribution of the Linux computer operating system. He founded Gentoo for OS X, for which he received a scholarship from Apple Computer. In 2004 Pieter was invited to the OpenSolaris pilot program and assisted Sun Microsystems with building a development ecosystem around Solaris. Pieter was nominated for the OpenSolaris Community Advisory Board and managed a team of developers to make Gentoo available on the Solaris operating system as well. Pieter is a co-author of the Gentoo handbook. The teams managed by Pieter Van den Abeele have shaped the PowerPC landscape with several "firsts". Gentoo/PowerPC was the first distribution to introduce PowerPC Live CDs. Gentoo also beat Apple in releasing a full 64-bit PowerPC userland environment for the IBM PowerPC 970 (G5) processor. His Gentoo-based Home Media and Communication System, based on a Freescale Semiconductor PowerPC 7447 processor, won the Best of Show award at the inaugural 2005 Freescale Technology Forum in Orlando, Florida. Pieter is also a member of the Power.org consortium and participates in committees and workgroups focusing on disruptive business plays around the Power Architecture. References People in information technology Gentoo Linux people Living people Year of birth missing (living people)
Pieter Van den Abeele
[ "Technology" ]
296
[ "People in information technology", "Information technology" ]
4,061,767
https://en.wikipedia.org/wiki/Heaviside%E2%80%93Lorentz%20units
Heaviside–Lorentz units (or Lorentz–Heaviside units) constitute a system of units and quantities that extends the CGS with a particular set of equations that defines electromagnetic quantities, named for Oliver Heaviside and Hendrik Antoon Lorentz. They share with the CGS-Gaussian system the property that the electric constant ε0 and magnetic constant μ0 do not appear in the defining equations for electromagnetism, having been incorporated implicitly into the electromagnetic quantities. Heaviside–Lorentz units may be thought of as normalizing ε0 = 1 and μ0 = 1, while at the same time revising Maxwell's equations to use the speed of light c instead. The Heaviside–Lorentz unit system, like the International System of Quantities upon which the SI system is based, but unlike the CGS-Gaussian system, is rationalized, with the result that there are no factors of 4π appearing explicitly in Maxwell's equations. That this system is rationalized partly explains its appeal in quantum field theory: the Lagrangian underlying the theory does not have any factors of 4π when this system is used. Consequently, electromagnetic quantities in the Heaviside–Lorentz system differ by factors of √4π in the definitions of the electric and magnetic fields and of electric charge. The system is often used in relativistic calculations and in particle physics. It is particularly convenient when performing calculations in spatial dimensions greater than three, such as in string theory. Motivation In the mid-to-late 19th century, electromagnetic measurements were frequently made in either the so-named electrostatic (ESU) or electromagnetic (EMU) systems of units. These were based respectively on Coulomb's and Ampère's laws. Use of these systems, as with the subsequently developed Gaussian CGS units, resulted in many factors of 4π appearing in formulas for electromagnetic results, including those without any circular or spherical symmetry. For example, in the CGS-Gaussian system, the capacitance of a sphere of radius r is r, while that of a parallel plate capacitor is A/(4πd), where A is the area of the smaller plate and d is their separation. Heaviside, who was an important, though somewhat isolated, early theorist of electromagnetism, suggested in 1882 that the irrational appearance of 4π in these sorts of relations could be removed by redefining the units for charges and fields. In his 1893 book Electromagnetic Theory, Heaviside wrote in the introduction: Length–mass–time framework As in the Gaussian system, the Heaviside–Lorentz system uses the length–mass–time dimensions. This means that all of the units of electric and magnetic quantities are expressible in terms of the units of the base quantities length, time and mass. Coulomb's equation, used to define charge in these systems, is F = q1q2/r² in the Gaussian system, and F = q1q2/(4πr²) in the HL system. The units of charge then connect through 1 statC = √4π HLC, where 'HLC' is the HL unit of charge. The HL quantity describing a charge is then √4π ≈ 3.5 times larger than the corresponding Gaussian quantity. There are comparable relationships for the other electromagnetic quantities (see below). The commonly used set of units is the SI, which defines two constants, the vacuum permittivity (ε0) and the vacuum permeability (μ0). These can be used to convert SI units to their corresponding Heaviside–Lorentz values, as detailed below. For example, SI charge is √(ε0ML³)/T. When one puts ε0 = 8.854 × 10⁻¹² F/m, L = 0.01 m, M = 0.001 kg, and T = 1 s, this evaluates to 9.409 × 10⁻¹¹ C, the SI-equivalent of the Heaviside–Lorentz unit of charge.
Comparison of Heaviside–Lorentz with other systems of units This section has a list of the basic formulas of electromagnetism, given in the SI, Heaviside–Lorentz, and Gaussian systems. Here E and D are the electric field and displacement field, respectively, B and H are the magnetic fields, P is the polarization density, M is the magnetization, ρ is the charge density, J is the current density, c is the speed of light in vacuum, φ is the electric potential, A is the magnetic vector potential, F is the Lorentz force acting on a body of charge q and velocity v, ε is the permittivity, χe is the electric susceptibility, μ is the magnetic permeability, and χm is the magnetic susceptibility. Maxwell's equations The electric and magnetic fields can be written in terms of the potentials φ and A. The definition of the magnetic field in terms of A, B = ∇ × A, is the same in all systems of units, but the electric field is E = −∇φ − ∂A/∂t in the SI system, and E = −∇φ − (1/c) ∂A/∂t in the HL or Gaussian systems. Other basic laws Dielectric and magnetic materials Below are the expressions for the macroscopic fields D, P, H and M in a material medium. It is assumed here for simplicity that the medium is homogeneous, linear, isotropic, and nondispersive, so that the susceptibilities are constants. The relative permittivities in the three systems are dimensionless and have the same numeric value. By contrast, the electric susceptibility, while dimensionless in all three systems, has different values for the same material: χe(SI) = χe(HL) = 4π χe(G). The same statements apply for the corresponding magnetic quantities. Advantages and disadvantages of Heaviside–Lorentz units Advantages The formulas above are clearly simpler in HL units compared to either SI or Gaussian units. As Heaviside proposed, removing the 4π from Gauss's law and putting it in the force law considerably reduces the number of places 4π appears compared to Gaussian CGS units. Removing the explicit 4π from Gauss's law makes it clear that the inverse-square force law arises by the field spreading out over the surface of a sphere. This allows a straightforward extension to other dimensions. For example, the case of long, parallel wires extending straight in the z direction can be considered a two-dimensional system. Another example is in string theory, where more than three spatial dimensions often need to be considered. The equations are free of the constants ε0 and μ0 that are present in the SI system. (In addition, ε0 and μ0 are overdetermined, because ε0μ0 = 1/c².) The points below are true in both the Heaviside–Lorentz and Gaussian systems, but not in SI. The electric and magnetic fields E and B have the same dimensions in the Heaviside–Lorentz system, making it easy to recall where factors of c go in the Maxwell equations. Every time derivative comes with a 1/c, which makes it dimensionally the same as a space derivative. In contrast, in SI units the ratio E/B has the dimension of a velocity. Giving the E and B fields the same dimension makes the assembly into the electromagnetic tensor more transparent: there are no factors of c that need to be inserted when assembling the tensor out of the three-dimensional fields. Similarly, φ and A have the same dimensions and are the four components of the 4-potential. The fields D, H, P, and M also have the same dimensions as E and B. For vacuum, any expression involving D or H can simply be recast as the same expression with E or B. In SI units, D and P have the same units, as do H and M, but they have different units from each other and from E and B. Disadvantages Despite Heaviside's urgings, it proved difficult to persuade people to switch from the established units.
He believed that if the units were changed, "[o]ld style instruments would very soon be in a minority, and then disappear ...". Persuading people to switch was already difficult in 1893, and in the meanwhile there have been more than a century's worth of additional textbooks printed and voltmeters built. Heaviside–Lorentz units, like the Gaussian CGS units from which they generally differ by a factor of about 3.5, are frequently of rather inconvenient sizes. The ampere (coulomb/second) is a reasonable unit for measuring currents commonly encountered, but the ESU/s, as demonstrated above, is far too small. The Gaussian CGS unit of electric potential is named the statvolt. It is about 300 volts, a value which is larger than most commonly encountered potentials. The henry, the SI unit for inductance, is already on the large side compared to most inductors; the Gaussian unit is 12 orders of magnitude larger. A few of the Gaussian CGS units have names; none of the Heaviside–Lorentz units do. Textbooks in theoretical physics use Heaviside–Lorentz units nearly exclusively, frequently in their natural form, because the system's conceptual simplicity and compactness significantly clarify the discussions, and it is possible if necessary to convert the resulting answers to appropriate units after the fact by inserting appropriate factors of c, ε0 and μ0. Some textbooks on classical electricity and magnetism have been written using Gaussian CGS units, but recently some of them have been rewritten to use SI units. Outside of these contexts, including for example magazine articles on electric circuits, Heaviside–Lorentz and Gaussian CGS units are rarely encountered. Translating formulas between systems To convert any formula between the SI, Heaviside–Lorentz and Gaussian systems, the corresponding expressions shown in the table can be equated and hence substituted for each other. Replacing each quantity by its counterpart in the target system (or vice versa) will reproduce any of the specific formulas given in the list above. As an example, one can start with an equation in one system, substitute the corresponding expressions from the table, move the conversion factors across the resulting identities, and simplify to obtain the equation in the other system. Notes References Special relativity Electromagnetism Hendrik Lorentz
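A sketch of the √4π bookkeeping described above for converting numerical values between the Gaussian and Heaviside–Lorentz systems; the check uses the fact that the force qE must come out the same in both systems:

```python
import math

ROOT_4PI = math.sqrt(4 * math.pi)  # ~3.545, the factor by which the systems differ

def charge_gaussian_to_hl(q_gauss: float) -> float:
    """q_HL = sqrt(4*pi) * q_G"""
    return ROOT_4PI * q_gauss

def efield_gaussian_to_hl(e_gauss: float) -> float:
    """E_HL = E_G / sqrt(4*pi), so the force q*E is system-independent."""
    return e_gauss / ROOT_4PI

q_g, e_g = 2.0, 5.0  # illustrative Gaussian values
q_hl, e_hl = charge_gaussian_to_hl(q_g), efield_gaussian_to_hl(e_g)
assert math.isclose(q_g * e_g, q_hl * e_hl)  # same force either way
print(q_hl, e_hl)
```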
Heaviside–Lorentz units
[ "Physics" ]
1,939
[ "Electromagnetism", "Physical phenomena", "Special relativity", "Fundamental interactions", "Theory of relativity" ]
4,062,159
https://en.wikipedia.org/wiki/Ranelic%20acid
Ranelic acid is an organic acid capable of chelating metal cations. It forms the tetravalent ranelate anion, C12H6N2O8S4−. Strontium ranelate, the strontium salt of ranelic acid, is a drug used to treat osteoporosis and increase bone mineral density (BMD). References Carboxylic acids Nitriles Thiophenes Amines
Ranelic acid
[ "Chemistry" ]
92
[ "Carboxylic acids", "Functional groups", "Amines", "Nitriles", "Bases (chemistry)" ]
4,062,370
https://en.wikipedia.org/wiki/American%20Institute%20of%20Mining%2C%20Metallurgical%2C%20and%20Petroleum%20Engineers
The American Institute of Mining, Metallurgical, and Petroleum Engineers (AIME) is a professional association for mining and metallurgy, with over 145,000 members. The association was founded in 1871 by 22 mining engineers in Wilkes-Barre, Pennsylvania, and was one of the first national engineering societies in the country. The association's charter is to "advance and disseminate, through the programs of the Member Societies, knowledge of engineering and the arts and sciences involved in the production and use of minerals, metals, energy sources and materials for the benefit of humankind." It is the parent organization of four Member Societies: the Society for Mining, Metallurgy, and Exploration (SME), The Minerals, Metals & Materials Society (TMS), the Association for Iron and Steel Technology (AIST), and the Society of Petroleum Engineers (SPE). The organization is currently based in San Ramon, California. History Founded as the American Institute of Mining Engineers (AIME), the institute had a membership at the beginning of 1915 of over 5,000, made up of honorary, elected, and associate members. The annual meeting of the institute was held in February, with other meetings during the year as authorized by the council. The institute published three volumes of Transactions annually and a monthly Bulletin which appeared on the first of each month. The headquarters of the institute was in the Engineering Building in New York City. Following the creation of the Petroleum Division in 1922, the Iron and Steel Division in 1928 and the Institute of Metals Division in 1933, the name of the society was changed in 1957 to the American Institute of Mining, Metallurgical and Petroleum Engineers. Three of the current member societies were then created from the divisions, increasing to four in 1974 when the Iron and Steel Society (ISS) was formed. In 2004 ISS merged with the Association of Iron and Steel Engineers (AISE) to form the Association for Iron and Steel Technology (AIST), whilst remaining a member society of AIME. Awards The society presents some 25 awards every year at the annual conference. In addition, the member societies also disburse their own awards, including the Percy Nicholls Award, awarded by SME jointly with the American Society of Mechanical Engineers. Presidents The following individuals have held the position of President of this organization. 1871: David Thomas 1872–1874: Rossiter Worthington Raymond 1875: Alexander Lyman Holley 1876: Abram Stevens Hewitt 1877: Thomas Sterry Hunt 1878–1879: Eckley Brinton Coxe 1880: William Powell Shinn 1881: William Metcalf 1882: Richard Pennefather Rothwell 1883: Robert Woolston Hunt 1884–1885: James Cooper Bayles 1886: Robert Hallowell Richards 1887: Thomas Egleston 1888: William Bleeker Potter 1889: Richard Pearce 1890: Abram Stevens Hewitt 1891–1892: John Birkinbine 1893: Henry Marion Howe 1894: John Fritz 1895: Joseph D.
Weeks 1896: Edmund Gybbon Spilsbury 1897: Thomas Messinger Drown 1898: Charles Kirchhoff 1899–1900: James Douglas 1901–1902: Eben Erskine Olcott 1903: Albert Reid Ledoux 1904–1905: James Gayley 1906: Robert Woolston Hunt 1907–1908: John Hays Hammond 1909–1910: David William Brunton 1911: Charles Kirchhoff 1912: James Furman Kemp 1913: Charles Frederic Rand 1914: Benjamin Bowditch Thayer 1915: William Lawrence Saunders 1916: Louis Davidson Ricketts 1917: Philip North Moore 1918: Sidney Johnston Jennings 1919: Horace Vaughn Winchell 1920: Herbert Hoover 1921: Edwin Ludlow 1922: Arthur Smith Dwight 1923: Edward Payson Mathewson 1924: William Kelly 1925: John van Wicheren Reynders 1926: Samuel A. Taylor 1927: Everette Lee DeGolyer 1928: George Otis Smith 1929: Frederick Worthen Bradley 1930: William Hastings Bassett 1931: Robert Emmet Tally 1932: Scott Turner 1933: Frederick Mark Becket 1934: Howard Nicholas Eavenson 1935: Henry Andrew Buehler 1936: John Meston Lovejoy 1937: Rolland Craten Allen 1938: Daniel Cowan Jackling 1939: Donald Burton Gillies 1940: Herbert George Moulton 1941: John Robert Suman 1942: Eugene McAuliffe 1943: Champion Herbert Mathewson 1944: Chester Alan Fulton 1945: Harvey Seeley Mudd 1946: Louis S. Cates 1947: Clyde Williams 1948: William Embry Wrather 1949: Lewis Emanuel Young 1950: Donald Hamilton McLaughlin 1951: Willis McGerald Peirce 1952: Michael Lawrence Haider 1953: Andrew Fletcher 1954: Leo Frederick Reinartz 1955: Henry DeWitt Smith 1956: Carl Ernest Reistle Jr. 1957: Grover Justine Holt 1958: Augustus Braun Kinzel 1959: Howard Carter Pyle 1960: Joseph Lincoln Gillson 1961: Ronald Russel McNaughton 1962: Lloyd E. Elkins 1963: Roger Vern Pierce 1964: Karl Leroy Fetters 1965: Thomas Corwin Frick 1966: William Bishop Stephenson 1967: Walter R. Hibbard Jr. 1968: John Robertson McMillan 1969: James Boyd 1970: John C. Kinnear 1971: John Smith Bell 1972: Dennis L. McElroy 1973: James B. Austin 1974: Wayne E. Glenn 1975: James D. Reilly 1976: Julius J. Harwood 1977: H. Arthur Nedom 1978: Wayne L. Dowdey 1979: William H. Wise 1980: M. Scott Kraemer 1981: Robert H. Merrill 1982: Harold W. Paxton 1983: Edward E. Runyan 1984: Nelson Severinghaus, Jr. 1985: Norman T. Mills 1986: Arlen L. Edgar 1987: Alan Lawley 1988: Thomas V. Falkie 1989: Howard N. Hubbard, Jr. 1990: Donald G. Russell 1991: Milton E. Wadsworth 1992: Roshan B. Bhappu 1993: G. Hugh Walker 1994: Noel D. Rietman 1995: Frank V. Nolfi, Jr. 1996: Donald W. Gentry 1997: Leonard G. Nelson 1998: Roy H. Koerner 1999: Paul G. Campbell, Jr. 2000: Robert E. Murray 2001: Grant P. Schneider 2002: George H. Sawyer 2003: Robert H. Wagoner 2004: Robert C. Freas 2005: Alan W. Cramb 2006: James R. Jorden 2007: Dan J. Thoma 2008: Michael Karmis 2009: Ian Sadler 2010: DeAnn Craig 2011: Brajendra Mishra 2012: George W. Luxbacher 2013: Dale Heinz 2014: Behrooz Fattahi 2015: Garry W. Warren 2016: Nikhil Trivedi 2017: John G. Speer Vice presidents 1893–1894: Robert Gilmour Leckie Member Societies In addition to individual members, AIME's membership includes the following societies: Association for Iron and Steel Technology (AIST) The Society for Mining, Metallurgy & Exploration (SME) Society of Petroleum Engineers (SPE) The Minerals, Metals & Materials Society (TMS) Mining Engineering magazine The Society for Mining, Metallurgy & Exploration has published the monthly magazine Mining Engineering since 1949.
References External links Organizations based in Colorado Organizations established in 1871 1871 establishments in Pennsylvania Engineering societies based in the United States
American Institute of Mining, Metallurgical, and Petroleum Engineers
[ "Chemistry", "Materials_science", "Engineering" ]
1,480
[ "Mining engineering", "Metallurgy", "Petroleum engineering", " and Petroleum Engineers", "American Institute of Mining", " Metallurgical" ]
4,062,415
https://en.wikipedia.org/wiki/International%20Chemical%20Safety%20Cards
International Chemical Safety Cards (ICSC) are data sheets intended to provide essential safety and health information on chemicals in a clear and concise way. The primary aim of the Cards is to promote the safe use of chemicals in the workplace, and the main target users are therefore workers and those responsible for occupational safety and health. The ICSC project is a joint venture between the World Health Organization (WHO) and the International Labour Organization (ILO) with the cooperation of the European Commission (EC). This project began during the 1980s with the objective of developing a product to disseminate the appropriate hazard information on chemicals at the workplace in an understandable and precise way. The Cards are prepared in English by ICSC participating institutions and peer reviewed in semiannual meetings before being made public. Subsequently, national institutions translate the Cards from English into their native languages and these translated Cards are also published on the Web. The English collection of ICSC is the original version. To date, approximately 1700 Cards are available in English in HTML and PDF format. Translated versions of the Cards exist in different languages: Chinese, Dutch, Finnish, French, German, Hungarian, Italian, Japanese, Polish, Spanish and others. The objective of the ICSC project is to make essential health and safety information on chemicals available to as wide an audience as possible, especially at the workplace level. The project aims to keep improving the mechanism for the preparation of Cards in English as well as increasing the number of translated versions available; it therefore welcomes the support of additional institutions that could contribute not only to the preparation of ICSC but also to the translation process. Format ICSC cards follow a fixed format which is designed to give a consistent presentation of the information, and is sufficiently concise to be printed onto two sides of a harmonized sheet of paper, an important consideration to permit easy use in the workplace. The standard sentences and consistent format used in ICSC facilitate the preparation and computer-aided translation of the information in the Cards. Identification of chemicals The identification of the chemicals on the Cards is based on the UN numbers, the Chemical Abstracts Service (CAS) number and the Registry of Toxic Effects of Chemical Substances (RTECS/NIOSH) numbers. It is thought that the use of those three systems assures the most unambiguous method of identifying the chemical substances concerned, referring as it does to numbering systems that consider transportation matters, chemistry and occupational health. The ICSC project is not intended to generate any sort of classification of chemicals. It makes reference to existing classifications. As an example, the Cards cite the results of the deliberations of the UN Committee of Experts on the Transport of Dangerous Goods with respect to transport: the UN hazard classification and the UN packaging group, when they exist, are entered on the Cards. Moreover, the ICSC are so designed that room is reserved for the countries to enter information of national relevance. Preparation The preparation of ICSC is an ongoing process of drafting and peer reviewing by a group of scientists working for a number of specialized scientific institutions concerned with occupational health and safety in different countries. 
Chemicals are selected for new ICSC based on a range of criteria for concern (high production volume, incidence of health problems, high-risk properties). Chemicals can be proposed by countries or stakeholder groups such as trade unions. ICSC are drafted in English by participating institutions based on publicly available data, and are then peer reviewed by the full group of experts in biannual meetings before being made publicly available. Existing Cards are updated periodically by the same drafting and peer review process, in particular when significant new information becomes available. In this way approximately 50 to 100 new and updated ICSC become available each year, and the collection of Cards available has grown from a few hundred during the 1980s to more than 1700 today. Authoritative nature The international peer review process followed in the preparation of ICSC ensures the authoritative nature of the Cards and represents a significant asset of ICSC compared to other packages of information. ICSC have no legal status and may not meet all requirements included in national legislation. The Cards should complement any available Chemical Safety Data Sheet but cannot be a substitute for any legal obligation on a manufacturer or employer to provide chemical safety information. However, it is recognized that ICSC might be the principal source of information available for both management and workers in less developed countries or in small and medium-sized enterprises. In general, the information provided in the Cards is in line with the ILO Chemicals Convention (No. 170) and Recommendation (No. 177), 1990; the European Union Council Directive 98/24/EC; and the United Nations Globally Harmonized System of Classification and Labelling of Chemicals (GHS) criteria. Globally Harmonized System of Classification and Labelling of Chemicals (GHS) The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is now being widely used for the classification and labelling of chemicals worldwide. One of the aims of introducing the GHS was to make it easier for users to identify chemical hazards in the workplace in a more consistent way. GHS classifications have been added to new and updated ICSC since 2006, and the language and technical criteria underlying the standard phrases used in the Cards have been developed to reflect ongoing developments in the GHS to ensure consistent approaches. The addition of GHS classifications to ICSC has been recognized by the relevant United Nations committee as a contribution to assisting countries to implement the GHS, and as a way of making GHS classifications of chemicals available to a wider audience. Material Safety Data Sheets (MSDS) Great similarities exist between the various headings of the ICSC and the manufacturers' Safety Data Sheet (SDS) or Material Safety Data Sheet (MSDS) of the International Council of Chemical Associations. However, MSDS and the ICSC are not the same. The MSDS, in many instances, may be technically very complex and too extensive for shop-floor use; moreover, it is a management document. The ICSC, on the other hand, set out peer-reviewed information about substances in a more concise and simple manner. This is not to say that the ICSC should be a substitute for an MSDS; nothing can replace management's responsibility to communicate with workers on the exact chemicals, the nature of those chemicals used on the shop floor and the risk posed in any given workplace. 
Indeed, the ICSC and the MSDS can even be thought of as complementary. If the two methods for hazard communication can be combined, then the amount of knowledge available to the safety representative or shop floor workers will be more than doubled. References External links ICSC - official site at the International Labour Organization ICSC - official site at the World Health Organization Chemical safety
International Chemical Safety Cards
[ "Chemistry" ]
1,358
[ "Chemical safety", "Chemical accident", "nan" ]
4,062,502
https://en.wikipedia.org/wiki/Zero-product%20property
In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words, if ab = 0, then a = 0 or b = 0. This property is also known as the rule of zero product, the null factor law, the multiplication property of zero, the nonexistence of nontrivial zero divisors, or one of the two zero-factor properties. All of the number systems studied in elementary mathematics — the integers ℤ, the rational numbers ℚ, the real numbers ℝ, and the complex numbers ℂ — satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain. Algebraic context Suppose A is an algebraic structure. We might ask, does A have the zero-product property? In order for this question to have meaning, A must have both additive structure and multiplicative structure. Usually one assumes that A is a ring, though it could be something else, e.g. the set of nonnegative integers with ordinary addition and multiplication, which is only a (commutative) semiring. Note that if A satisfies the zero-product property, and if B is a subset of A, then B also satisfies the zero product property: if a and b are elements of B such that ab = 0, then either a = 0 or b = 0 because a and b can also be considered as elements of A. Examples A ring in which the zero-product property holds is called a domain. A commutative domain with a multiplicative identity element is called an integral domain. Any field is an integral domain; in fact, any subring of a field is an integral domain (as long as it contains 1). Similarly, any subring of a skew field is a domain. Thus, the zero-product property holds for any subring of a skew field. If p is a prime number, then the ring of integers modulo p has the zero-product property (in fact, it is a field). The Gaussian integers are an integral domain because they are a subring of the complex numbers. In the strictly skew field of quaternions, the zero-product property holds. This ring is not an integral domain, because the multiplication is not commutative. The set of nonnegative integers is not a ring (being instead a semiring), but it does satisfy the zero-product property. Non-examples Let ℤn denote the ring of integers modulo n. Then ℤ6 does not satisfy the zero product property: 2 and 3 are nonzero elements, yet 2 · 3 = 0. In general, if n is a composite number, then ℤn does not satisfy the zero-product property. Namely, if n = qm with 1 < q, m < n, then q and m are nonzero modulo n, yet qm = 0. The ring of 2×2 matrices with integer entries does not satisfy the zero-product property: if M = [1 0; 0 0] and N = [0 0; 0 1], then MN = 0, yet neither M nor N is zero. The ring of all functions f : [0, 1] → ℝ, from the unit interval to the real numbers, has nontrivial zero divisors: there are pairs of functions which are not identically equal to zero yet whose product is the zero function. In fact, it is not hard to construct, for any n ≥ 2, functions f1, …, fn, none of which is identically zero, such that fi fj is identically zero whenever i ≠ j. The same is true even if we consider only continuous functions, or only even infinitely smooth functions. On the other hand, analytic functions have the zero-product property. Application to finding roots of polynomials Suppose p and q are univariate polynomials with real coefficients, and x is a real number such that p(x)q(x) = 0. (Actually, we may allow the coefficients of p and q to come from any integral domain.) By the zero-product property, it follows that either p(x) = 0 or q(x) = 0. In other words, the roots of pq are precisely the roots of p together with the roots of q. Thus, one can use factorization to find the roots of a polynomial. For example, the polynomial x³ − 2x² − 5x + 6 factorizes as (x − 3)(x − 1)(x + 2); hence, its roots are precisely 3, 1, and −2. 
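The factorization example above is easy to check mechanically. A minimal Python sketch (not part of the original article; the helper name and the search range are choices made here for illustration):

```python
# Brute-force check of the zero-product property in the ring Z/nZ:
# the property holds exactly when no two nonzero residues multiply to zero.
def has_zero_product_property(n: int) -> bool:
    return all((a * b) % n != 0 for a in range(1, n) for b in range(1, n))

print(has_zero_product_property(6))  # False: 2 * 3 = 6 = 0 (mod 6)
print(has_zero_product_property(7))  # True: Z/7Z is a field

# Roots of p(x) = x^3 - 2x^2 - 5x + 6 = (x - 3)(x - 1)(x + 2) over the
# integers, found by scanning a small range around zero.
p = lambda x: x**3 - 2 * x**2 - 5 * x + 6
print([x for x in range(-10, 11) if p(x) == 0])  # [-2, 1, 3]
```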
In general, suppose R is an integral domain and p is a monic univariate polynomial of degree d ≥ 1 with coefficients in R. Suppose also that p has d distinct roots r1, …, rd in R. It follows (but we do not prove here) that p factorizes as p(x) = (x − r1)(x − r2)⋯(x − rd). By the zero-product property, it follows that r1, …, rd are the only roots of p: any root of p must be a root of (x − ri) for some i. In particular, p has at most d distinct roots. If however R is not an integral domain, then the conclusion need not hold. For example, the cubic polynomial x³ − x has six roots in ℤ6 (though it has only three roots in ℤ). See also Fundamental theorem of algebra Integral domain and domain Prime ideal Zero divisor Notes References David S. Dummit and Richard M. Foote, Abstract Algebra (3rd ed.), Wiley, 2003. External links PlanetMath: Zero rule of product Abstract algebra Elementary algebra Real analysis Ring theory 0 (number)
Zero-product property
[ "Mathematics" ]
971
[ "Algebra", "Ring theory", "Elementary algebra", "Elementary mathematics", "Fields of abstract algebra", "Abstract algebra" ]
4,062,527
https://en.wikipedia.org/wiki/International%20Programme%20on%20Chemical%20Safety
The International Programme on Chemical Safety (IPCS) was formed in 1980 and is a collaboration between three United Nations bodies, the World Health Organization, the International Labour Organization and the United Nations Environment Programme, to establish a scientific basis for safe use of chemicals and to strengthen national capabilities and capacities for chemical safety. A related joint project with the same aim, IPCS INCHEM, is a collaboration between IPCS and the Canadian Centre for Occupational Health and Safety (CCOHS). The IPCS identifies the following as "chemicals of major public health concern": Air pollution Arsenic Asbestos Benzene Cadmium Dioxin and dioxin-like substances Inadequate or excess fluoride Lead Mercury Highly hazardous pesticides See also Acceptable daily intake International Chemical Safety Card Concise International Chemical Assessment Document Food safety References External links Official WHO site Official site Chemical safety World Health Organization International Labour Organization United Nations Environment Programme
International Programme on Chemical Safety
[ "Chemistry" ]
181
[ "nan", "Chemical accident", "Chemical safety" ]
4,062,863
https://en.wikipedia.org/wiki/Content-addressable%20storage
Content-addressable storage (CAS), also referred to as content-addressed storage or fixed-content storage, is a way to store information so it can be retrieved based on its content, not its name or location. It has been used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations. Content-addressable storage is similar to content-addressable memory. CAS systems work by passing the content of the file through a cryptographic hash function to generate a unique key, the "content address". The file system's directory stores these addresses and a pointer to the physical storage of the content. Because an attempt to store the same file will generate the same key, CAS systems ensure that the files within them are unique, and because changing the file will result in a new key, CAS systems provide assurance that the file is unchanged. CAS became a significant market during the 2000s, especially after the introduction of the 2002 Sarbanes–Oxley Act in the United States, which required the storage of enormous numbers of documents for long periods, documents that would be retrieved only rarely. Ever-increasing performance of traditional file systems and new software systems have eroded the value of legacy CAS systems, which have become increasingly rare after roughly 2018. However, the principles of content addressability continue to be of great interest to computer scientists, and form the core of numerous emerging technologies, such as peer-to-peer file sharing, cryptocurrencies, and distributed computing. Description Location-based approaches Traditional file systems generally track files based on their filename. On random-access media like a floppy disk, this is accomplished using a directory that consists of some sort of list of filenames and pointers to the data. The pointers refer to a physical location on the disk, normally using disk sectors. On more modern systems and larger formats like hard drives, the directory is itself split into many subdirectories, each tracking a subset of the overall collection of files. Subdirectories are themselves represented as files in a parent directory, producing a hierarchy or tree-like organization. The series of directories leading to a particular file is known as a "path". In the context of CAS, these traditional approaches are referred to as "location-addressed", as each file is represented by a list of one or more locations, the path and filename, on the physical storage. In these systems, the same file with two different names will be stored as two files on disk and thus have two addresses. The same is true if the same file, even with the same name, is stored in more than one location in the directory hierarchy. This makes them less than ideal for a digital archive, where any unique information should only be stored once. As the concept of the hierarchical directory became more common in operating systems, especially during the late 1980s, this sort of access pattern began to be used by entirely unrelated systems. For instance, the World Wide Web uses a similar pathname/filename-like system known as the URL to point to documents. The same document on another web server has a different URL in spite of being identical content. Likewise, if an existing location changes in any way, for instance because the filename changes or the server moves to a new domain name, the document is no longer accessible. This leads to the common problem of link rot. 
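The key-generation scheme described at the start of this section can be made concrete in a few lines. The following Python sketch is an illustration only, not any vendor's implementation; SHA-256 stands in for whichever cryptographic hash a real system would use, and an in-memory dictionary stands in for the physical storage layer:

```python
import hashlib

class ContentStore:
    """Toy content-addressable store: blobs are keyed by their SHA-256 digest."""

    def __init__(self):
        self._blobs = {}  # content address -> bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()  # the "content address"
        # Identical content always hashes to the same key, so storing the
        # same file twice costs nothing: deduplication comes for free.
        self._blobs.setdefault(key, data)
        return key

    def get(self, key: str) -> bytes:
        data = self._blobs[key]
        # Recompute the hash on the way out: if it still matches the key,
        # the content is provably unchanged since it was stored.
        assert hashlib.sha256(data).hexdigest() == key, "blob corrupted"
        return data

store = ContentStore()
k1 = store.put(b"fixed-content document")
k2 = store.put(b"fixed-content document")   # duplicate: same address
assert k1 == k2
k3 = store.put(b"fixed-content document ")  # one extra space: new address
assert k1 != k3
```

Because the key is derived from the content alone, deduplication and integrity verification fall out of the design itself, which is exactly what the systems discussed below exploit.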
CAS and FCS Although location-based storage is widely used in many fields, this was not always the case. Previously, the most common way to retrieve data from a large collection was to use some sort of identifier based on the content of the document. For instance, the ISBN system is used to generate a unique number for every book. If one performs a web search for "ISBN 0465048994", one will be provided with a list of locations for the book Why Information Grows on the topic of information storage. Although many locations will be returned, they all refer to the same work, and the user can then pick whichever location is most appropriate. Additionally, if any one of these locations changes or disappears, the content can be found at any of the other locations. CAS systems attempt to produce ISBN-like results automatically and for any document. They do this by using a cryptographic hash function on the data of the document to produce what is sometimes known as a "key" or "fingerprint". This key is strongly tied to the exact content of the document; adding a single space at the end of the file, for instance, will produce a different key. In a CAS system, the directory does not map filenames onto locations, but uses the keys instead. This provides several benefits. For one, when a file is sent to the CAS for storage, the hash function will produce a key and then check to see if that key already exists in the directory. If it does, the file is not stored, as the one already in storage is identical. This allows CAS systems to easily avoid duplicate data. Additionally, as the key is based on the content of the file, retrieving a document with a given key ensures that the stored file has not been changed. The downside to this approach is that any change to the document produces a different key, which makes CAS systems unsuitable for files that are often edited. For all of these reasons, CAS systems are normally used for archives of largely static documents, and are sometimes known as "fixed content storage" (FCS). Because the keys are not human-readable, CAS systems implement a second type of directory that stores metadata that will help users find a document. These almost always include a filename, allowing the classic name-based retrieval to be used. But the directory will also include fields for common identification systems like ISBN or ISSN codes, user-provided keywords, time and date stamps, and full-text search indexes. Users can search these directories and retrieve a key, which can then be used to retrieve the actual document. Using a CAS is very similar to using a web search engine. The primary difference is that a web search is generally performed on a topic basis using an internal algorithm that finds "related" content and then produces a list of locations. The results may be a list of the identical content in multiple locations. In a CAS, more than one document may be returned for a given search, but each of those documents will be unique and presented only once. Another advantage to CAS is that the physical location in storage is not part of the lookup system. If, for instance, a library's card catalog stated a book could be found on "shelf 43, bin 10", then if the library is re-arranged the entire catalog would have to be updated. In contrast, the ISBN will not change, and the book can be found by looking for the shelf with those numbers. In the computer setting, a file in the DOS filesystem at the path A:\myfiles\textfile.txt points to the physical storage of the file in the myfiles subdirectory. 
This file disappears if the floppy is moved to the B: drive, and even moving its location within the disk hierarchy requires the user-facing directories to be updated. In CAS, only the internal mapping from key to physical location changes, and this exists in only one place and can be designed for efficient updating. This allows files to be moved among storage devices, and even across media, without requiring any changes to the retrieval. For data that changes frequently, CAS is not as efficient as location-based addressing. In these cases, the CAS device would need to continually recompute the address of data as it was changed. This would result in multiple copies of the entire almost-identical document being stored, the problem that CAS attempts to avoid. Additionally, the user-facing directories would have to be continually updated with these "new" files, which would become polluted by many similar documents that would make searching more difficult. In contrast, updating a file in a location-based system is highly optimized, only the internal list of sectors has to be changed and many years of tuning have been applied to this operation. Because CAS is used primarily for archiving, file deletion is often tightly controlled or even impossible under user control. In contrast, automatic deletion is a common feature, removing all files older than some legally defined requirement, say ten years. In distributed computing The simplest way to implement a CAS system is to store all of the files within a typical database to which clients connect to add, query, and retrieve files. However, the unique properties of content addressability mean that the paradigm is well suited for computer systems in which multiple hosts collaboratively manage files with no central authority, such as distributed file sharing systems, in which the physical location of a hosted file can change rapidly in response to changes in network topology, while the exact content of the files to be retrieved are of more importance to users than their current physical location. In a distributed system, content hashes are often used for quick network-wide searches for specific files, or to quickly see which data in a given file has been changed and must be propagated to other members of the network with minimal bandwidth usage. In these systems, content addressability allows highly variable network topology to be abstracted away from users who wish to access data, compared to systems like the World Wide Web, in which a consistent location of a file or service is key to easy use. Content-addressable networks History A hardware device called the Content Addressable File Store (CAFS) was developed by International Computers Limited (ICL) in the late 1960s and put into use by British Telecom in the early 1970s for telephone directory lookups. The user-accessible search functionality was maintained by the disk controller with a high-level application programming interface (API) so users could send queries into what appeared to be a black box that returned documents. The advantage was that no information had to be exchanged with the host computer while the disk performed the search. Paul Carpentier and Jan van Riel coined the term CAS while working at a company called FilePool in the late 1990s. FilePool was purchased by EMC Corporation in 2001 and was released the next year as Centera. 
The timing was perfect; the introduction of the Sarbanes–Oxley Act in 2002 required companies to store huge amounts of documentation for extended periods and required them to do so in a fashion that ensured they were not edited after-the-fact. A number of similar products soon appeared from other large-system vendors. In mid-2004, the industry group SNIA began working with a number of CAS providers to create standard behavior and interoperability guidelines for CAS systems. In addition to CAS, a number of similar products emerged that added CAS-like capabilities to existing products; notable among these was IBM Tivoli Storage Manager. The rise of cloud computing and the associated elastic cloud storage systems like Amazon S3 further diluted the value of dedicated CAS systems. Dell purchased EMC in 2016 and stopped sales of the original Centera in 2018 in favor of their elastic storage product. CAS was not associated with peer-to-peer applications until the 2000s, when rapidly proliferating Internet access in homes and businesses led to a large number of computer users who wanted to swap files, originally doing so on centrally managed services like Napster. However, an injunction against Napster prompted the independent development of file-sharing services such as BitTorrent, which could not be centrally shut down. In order to function without a central federating server, these services rely heavily on CAS to enforce the faithful copying and easy querying of unique files. At the same time, the growth of the open-source software movement in the 2000s led to the rapid proliferation of CAS-based services such as Git, a version control system that uses numerous cryptographic functions such as Merkle trees to enforce data integrity between users and allow for multiple versions of files with minimal disk and network usage. Around this time, individual users of public-key cryptography used CAS to store their public keys on systems such as key servers. The rise of mobile computing and high capacity mobile broadband networks in the 2010s, coupled with increasing reliance on web applications for everyday computing tasks, strained the existing location-addressed client–server model commonplace among Internet services, leading to an accelerated pace of link rot and an increased reliance on centralized cloud hosting. Furthermore, growing concerns about the centralization of computing power in the hands of large technology companies, potential monopoly power abuses, and privacy concerns led to a number of projects created with the goal of creating more decentralized systems. Bitcoin uses CAS and public/private key pairs to manage wallet addresses, as do most other cryptocurrencies. IPFS uses CAS to identify and address communally hosted files on its network. Numerous other peer-to-peer systems designed to run on smartphones, which often access the Internet from varying locations, utilize CAS to store and access user data for both convenience and data privacy purposes, such as secure instant messaging. Implementations Proprietary The Centera CAS system consists of a series of networked nodes (typically large servers running Linux), divided between storage nodes and access nodes. The access nodes maintain a synchronized directory of content addresses, and the corresponding storage node where each address can be found. When a new data element, or blob, is added, the device calculates a hash of the content and returns this hash as the blob's content address. 
As mentioned above, the hash is searched to verify that identical content is not already present. If the content already exists, the device does not need to perform any additional steps; the content address already points to the proper content. Otherwise, the data is passed off to a storage node and written to the physical media. When a content address is provided to the device, it first queries the directory for the physical location of the specified content address. The information is then retrieved from a storage node, and the actual hash of the data recomputed and verified. Once this is complete, the device can supply the requested data to the client. Within the Centera system, each content address actually represents a number of distinct data blobs, as well as optional metadata. Whenever a client adds an additional blob to an existing content block, the system recomputes the content address. To provide additional data security, the Centera access nodes, when no read or write operation is in progress, constantly communicate with the storage nodes, checking the presence of at least two copies of each blob as well as their integrity. Additionally, they can be configured to exchange data with a different, e.g., off-site, Centera system, thereby strengthening the precautions against accidental data loss. IBM has another flavor of CAS, which can be software-based, Tivoli Storage Manager 5.3, or hardware-based, the IBM DR550. The architecture is different in that it is based on a hierarchical storage management (HSM) design, which provides some additional flexibility, such as being able to support not only WORM disk but also WORM tape, and the migration of data from WORM disk to WORM tape and vice versa. This provides for additional flexibility in disaster recovery situations, as well as the ability to reduce storage costs by moving data off disk to tape. Another typical implementation is iCAS from iTernity. The concept of iCAS is based on containers. Each container is addressed by its hash value. A container holds different numbers of fixed-content documents. The container is not changeable, and the hash value is fixed after the write process. Open-source Venti: one of the first content-addressed storage servers, originally developed for Plan 9 from Bell Labs and now also available for Unix-like systems as part of Plan 9 from User Space. The first step towards an open-source CAS+ implementation is Twisted Storage. Tahoe Least-Authority File Store: an open-source implementation of CAS. Git: a userspace CAS filesystem. Git is primarily used as a source code control system. git-annex: a distributed file synchronization system that uses content-addressable storage for the files it manages. It relies on Git and symbolic links to index their filesystem location. Project Honeycomb: an open-source API for CAS systems. XAM: an interface developed under the auspices of the Storage Networking Industry Association. It provides a standard interface for archiving CAS (and CAS-like) products and projects. Perkeep: a 2011 project to bring the advantages of content-addressable storage "to the masses". It is intended for a wide variety of use cases, including distributed backup; a snapshotted-by-default, version-controlled filesystem; and decentralized, permission-controlled filesharing. Irmin: an OCaml "library for persistent stores with built-in snapshot, branching and reverting mechanisms"; it follows the same design principles as Git. Cassette: an open-source CAS implementation for C#/.NET. 
Arvados Keep: an open-source content-addressable distributed storage system. It is designed for large-scale, computationally intensive data science work such as storing and processing genomic data. Infinit: a content-addressable and decentralized (peer-to-peer) storage platform that was acquired by Docker Inc. InterPlanetary File System (IPFS): a content-addressable, peer-to-peer hypermedia distribution protocol. casync: a Linux software utility by Lennart Poettering to distribute frequently-updated file system images over the Internet. See also Content Addressable File Store Content-centric networking / Named data networking Data Defined Storage Write Once Read Many References External links Fast, Inexpensive Content-Addressed Storage in Foundation Venti: a new approach to archival storage Associative arrays Computer storage devices
Content-addressable storage
[ "Technology" ]
3,672
[ "Computer storage devices", "Recording devices" ]
4,062,914
https://en.wikipedia.org/wiki/Swedish%20Meteorological%20and%20Hydrological%20Institute
The Swedish Meteorological and Hydrological Institute (, SMHI) is a Swedish government agency and operates under the Ministry of Climate and Enterprise. SMHI has expertise within the areas of meteorology, hydrology and oceanography, and has extensive service and business operations within these areas. History On 1 January 1873, Statens Meteorologiska Centralanstalt was founded as an autonomous part of the Royal Swedish Academy of Sciences, but the first meteorological observations began on 1 July 1874. It was not until 1880 that the first forecasts were issued; these began to be broadcast on Stockholm radio on 19 February 1924. In 1908, the Hydrographic Office (Hydrografiska byrån, HB) was created. Its task was to map Sweden's fresh waters scientifically and to collaborate with the weather service in taking certain weather observations such as precipitation and snow cover. In 1919, the two services merged and became the Statens meteorologisk-hydrografiska anstalt (SMHA). In 1945, the service was renamed Sveriges meteorologiska och hydrologiska institut. It was located in Stockholm until 1975, when, following a decision taken in the Riksdag in 1971, it was relocated to Norrköping. Staff and organisation SMHI has offices in Gothenburg, Malmö, Sundsvall and Upplands Väsby, in addition to its headquarters in Norrköping. To the Swedish public SMHI is mostly known for the weather forecasts in the public-service radio provided by Sveriges Radio. Many of the other major media companies in Sweden also buy weather forecasts from SMHI. SMHI has about 650 employees. The research staff includes some 100 scientists at the Research Unit, which includes the Rossby Centre. The research division is divided into six units: Meteorological prediction and analysis Air quality Oceanography Hydrology Rossby Centre (Regional and Global Climate Modelling) Atmospheric Remote Sensing The regional and global climate modelling is done at the Rossby Centre, which was established at SMHI in 1997. Environmental research spans all six research units. There is also a project for providing contributions to the HIRLAM (High Resolution Limited Area Model) project. The main goal of the research division is to support the Institute and society with research and development. The scientists participate in many national and international research projects. Air quality research The air quality research unit of SMHI has 10 scientists, all of whom have expertise in air quality, atmospheric pollution transport, and atmospheric pollution dispersion modelling. Some of the atmospheric pollution dispersion models developed by the air quality research unit are: the DISPERSION21 model (also called DISPERSION 2.1) the MATCH model Allegations of harassment and corruption An anonymous letter sent to the Swedish ministry of environment in 2019, written by 100 SMHI employees, claims that harassment and threats from the management happen frequently within the institution, a claim that SMHI's former director general did not wish to address thoroughly. In 2020, it was revealed that the sea routing department had been sold to its recently resigned former director for a very low price, without any public offer. The matter was reported to the Swedish parliament. 
References External links SMHI website The Model Documententation System (MDS) of the European Topic Centre on Air and Climate Change (part of the European Environment Agency) Airviro web page Airviro page on Westlakes website Government agencies of Sweden Sweden Atmospheric dispersion modeling 1945 establishments in Sweden Government agencies established in 1945 National meteorological and hydrological services Oceanographic organizations Hydrology organizations
Swedish Meteorological and Hydrological Institute
[ "Chemistry", "Engineering", "Environmental_science" ]
716
[ "Hydrology", "Atmospheric dispersion modeling", "Environmental engineering", "Hydrology organizations", "Environmental modelling", "National meteorological and hydrological services" ]
4,062,934
https://en.wikipedia.org/wiki/Phenomics
Phenomics is the systematic study of the traits that make up an organism's phenotype, which changes over time due to development and aging, or through metamorphosis, such as when a caterpillar changes into a butterfly. The term phenomics was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of the phenotype, where a phenome is the set of traits (physical and biochemical traits) that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. An organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics. Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. Applications Plant sciences In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere. Standards, methods, tools, and instrumentation A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exists to analyze 2D and 3D imaging data of plants. These methods are available to the community in various implementations, ranging from end-user ready cyber-platforms in the cloud such as DIRT and PlantIt to programming frameworks for software developers such as PlantCV. Many research groups are focused on developing systems using the Breeding API, a standardized RESTful web service API specification for communicating plant breeding data. The Australian Plant Phenomics Facility (APPF), an initiative of the Australian government, has developed a number of new instruments for comprehensive and fast measurements of phenotypes in both the lab and the field. Research coordination and communities The International Plant Phenotyping Network (IPPN) is an organization that seeks to enable the exchange of knowledge, information, and expertise across the many disciplines involved in plant phenomics by providing a network linking members, platform operators, users, research groups, developers, and policy makers. Regional partners include the European Plant Phenotyping Network (EPPN), the North American Plant Phenotyping Network (NAPPN), and others. The European research infrastructure for plant phenotyping, EMPHASIS, enables researchers to use facilities, services and resources for multi-scale plant phenotyping across Europe. 
EMPHASIS aims to promote future food security and agricultural business in a changing climate by enabling scientists to better understand plant performance and translate this knowledge into application. See also PhenomicDB, a database combining phenotypic and genetic data from several species Phenotype microarray Human Phenotype Ontology, a formal ontology of human phenotypes References Further reading Branches of biology Omics
Phenomics
[ "Biology" ]
747
[ "Bioinformatics", "Omics", "nan" ]
4,062,960
https://en.wikipedia.org/wiki/Particle%20aggregation
Particle agglomeration refers to the formation of assemblages in a suspension and represents a mechanism leading to the functional destabilization of colloidal systems. During this process, particles dispersed in the liquid phase stick to each other, and spontaneously form irregular particle assemblages, flocs, or agglomerates. This phenomenon is also referred to as coagulation or flocculation and such a suspension is also called unstable. Particle agglomeration can be induced by adding salts or other chemicals referred to as coagulant or flocculant. Particle agglomeration can be a reversible or irreversible process. Particle agglomerates defined as "hard agglomerates" are more difficult to redisperse to the initial single particles. In the course of agglomeration, the agglomerates will grow in size, and as a consequence they may settle to the bottom of the container, which is referred to as sedimentation. Alternatively, a colloidal gel may form in concentrated suspensions which changes its rheological properties. The reverse process whereby particle agglomerates are re-dispersed as individual particles, referred to as peptization, hardly occurs spontaneously, but may occur under stirring or shear. Colloidal particles may also remain dispersed in liquids for long periods of time (days to years). This phenomenon is referred to as colloidal stability and such a suspension is said to be functionally stable. Stable suspensions are often obtained at low salt concentrations or by addition of chemicals referred to as stabilizers or stabilizing agents. The stability of particles, colloidal or otherwise, is most commonly evaluated in terms of zeta potential. This parameter provides a readily quantifiable measure of interparticle repulsion, which is the key inhibitor of particle aggregation. Similar agglomeration processes occur in other dispersed systems too. In emulsions, they may also be coupled to droplet coalescence, and not only lead to sedimentation but also to creaming. In aerosols, airborne particles may equally aggregate and form larger clusters (e.g., soot). Early stages A well dispersed colloidal suspension consists of individual, separated particles and is stabilized by repulsive inter-particle forces. When the repulsive forces weaken or become attractive through the addition of a coagulant, particles start to aggregate. Initially, particle doublets A2 will form from singlets A1 according to the scheme A1 + A1 → A2. In the early stage of the aggregation process, the suspension mainly contains individual particles. The rate of this phenomenon is characterized by the aggregation rate coefficient k. Since doublet formation is a second order rate process, the units of this coefficient are m3s−1 since particle concentrations are expressed as particle number per unit volume (m−3). Since absolute aggregation rates are difficult to measure, one often refers to the dimensionless stability ratio W, defined as W = kfast/k, where kfast is the aggregation rate coefficient in the fast regime, and k the coefficient at the conditions of interest. The stability ratio is close to unity in the fast regime, increases in the slow regime, and becomes very large when the suspension is stable. Often, colloidal particles are suspended in water. In this case, they accumulate a surface charge and an electrical double layer forms around each particle. The overlap between the diffuse layers of two approaching particles results in a repulsive double layer interaction potential, which leads to particle stabilization. 
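To give a sense of the magnitudes involved, the fast-regime coefficient can be estimated from Smoluchowski's classical result for diffusion-limited doublet formation, kfast = 8kBT/(3η). A minimal Python sketch (the temperature, viscosity, and particle concentration below are assumed illustrative values, not data from this article):

```python
# Order-of-magnitude estimate of fast (diffusion-limited) aggregation.
kB  = 1.381e-23   # Boltzmann constant, J/K
T   = 298.15      # temperature, K (assumed: room temperature)
eta = 0.89e-3     # viscosity of water at 25 C, Pa*s

k_fast = 8 * kB * T / (3 * eta)        # Smoluchowski rate coefficient, m^3/s
print(f"k_fast = {k_fast:.2e} m^3/s")  # ~1.2e-17 m^3/s

# Half-time of doublet formation for a suspension of number concentration
# N0; in Smoluchowski's theory t_half = 2 / (k_fast * N0).
N0 = 1e17                              # particles per m^3 (assumed)
print(f"t_half = {2 / (k_fast * N0):.2f} s")  # ~1.6 s
```

Measured fast aggregation rate coefficients are typically somewhat smaller than this estimate, mainly because hydrodynamic interactions slow the final approach of two particles.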
When salt is added to the suspension, the electrical double layer repulsion is screened, and van der Waals attraction becomes dominant and induces fast aggregation. The stability ratio shows a typical dependence on the electrolyte concentration, with well-separated regimes of slow and fast aggregation. The critical coagulation concentration (CCC) marking the transition between the two regimes depends strongly on the net charge of the counter ion, expressed in units of elementary charge. This dependence reflects the Schulze–Hardy rule, which states that the CCC varies as the inverse sixth power of the counter ion charge. The CCC also depends on the type of ion somewhat, even if they carry the same charge. This dependence may reflect different particle properties or different ion affinities to the particle surface. Since particles are frequently negatively charged, multivalent metal cations thus represent highly effective coagulants. Adsorption of oppositely charged species (e.g., protons, specifically adsorbing ions, surfactants, or polyelectrolytes) may destabilize a particle suspension by charge neutralization or stabilize it by buildup of charge, leading to fast aggregation near the charge neutralization point, and slow aggregation away from it. Quantitative interpretation of colloidal stability was first formulated within the DLVO theory. This theory confirms the existence of slow and fast aggregation regimes, even though in the slow regime the dependence on the salt concentration is often predicted to be much stronger than observed experimentally. The Schulze–Hardy rule can be derived from DLVO theory as well. Other mechanisms of colloid stabilization are equally possible, particularly those involving polymers. Adsorbed or grafted polymers may form a protective layer around the particles, induce steric repulsive forces, and lead to steric stabilization, as is the case with polycarboxylate ether (PCE), the latest generation of chemically tailored superplasticizers specifically designed to increase the workability of concrete while reducing its water content to improve its properties and durability. When polymer chains adsorb to particles loosely, a polymer chain may bridge two particles, and induce bridging forces. This situation is referred to as bridging flocculation. When particle aggregation is solely driven by diffusion, one refers to perikinetic aggregation. Aggregation can be enhanced through shear stress (e.g., stirring). The latter case is called orthokinetic aggregation. Later stages As the aggregation process continues, larger clusters form. The growth occurs mainly through encounters between different clusters, and therefore one refers to a cluster-cluster aggregation process. The resulting clusters are irregular, but statistically self-similar. They are examples of mass fractals, whereby their mass M grows with their typical size, characterized by the radius of gyration Rg, as a power law M ∝ Rg^d, where d is the mass fractal dimension. Depending on whether the aggregation is fast or slow, one refers to diffusion limited cluster aggregation (DLCA) or reaction limited cluster aggregation (RLCA). The clusters have different characteristics in each regime. DLCA clusters are loose and ramified (d ≈ 1.8), while RLCA clusters are more compact (d ≈ 2.1). The cluster size distribution is also different in these two regimes. DLCA clusters are relatively monodisperse, while the size distribution of RLCA clusters is very broad. 
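The power law M ∝ Rg^d provides a practical way to estimate the fractal dimension: on a log–log plot of cluster mass against radius of gyration, d is the slope. A short Python sketch with synthetic, DLCA-like numbers (illustrative values, not measurements):

```python
import numpy as np

# Synthetic cluster data: masses (number of primary particles) and radii
# of gyration (in units of the primary-particle radius).
Rg = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
M  = np.array([3.5, 12.0, 42.0, 145.0, 505.0])

# M ~ Rg^d  =>  log M = d * log Rg + const; fit d as the slope.
d, intercept = np.polyfit(np.log(Rg), np.log(M), 1)
print(f"estimated fractal dimension d = {d:.2f}")  # ~1.8, DLCA-like
```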
The larger the cluster size, the faster their settling velocity. Therefore, aggregating particles sediment, and this mechanism provides a way for separating them from suspension. At higher particle concentrations, the growing clusters may interlink, and form a particle gel. Such a gel is an elastic solid body, but differs from ordinary solids by having a very low elastic modulus. Homoaggregation versus heteroaggregation When aggregation occurs in a suspension composed of similar monodisperse colloidal particles, the process is called homoaggregation (or homocoagulation). When aggregation occurs in a suspension composed of dissimilar colloidal particles, one refers to heteroaggregation (or heterocoagulation). The simplest heteroaggregation process occurs when two types of monodisperse colloidal particles, A and B, are mixed. In the early stages, three types of doublets may form: A + A → A2, B + B → B2, and A + B → AB. While the first two processes correspond to homoaggregation in pure suspensions containing particles A or B, the last reaction represents the actual heteroaggregation process. Each of these reactions is characterized by the respective aggregation coefficients kAA, kBB, and kAB. For example, when particles A and B bear positive and negative charge, respectively, the homoaggregation rates may be slow, while the heteroaggregation rate is fast. In contrast to homoaggregation, the heteroaggregation rate accelerates with decreasing salt concentration. Clusters formed at later stages of such heteroaggregation processes are even more ramified than those obtained during DLCA (d ≈ 1.4). An important special case of a heteroaggregation process is the deposition of particles on a substrate. Early stages of the process correspond to the attachment of individual particles to the substrate, which can be pictured as another, much larger particle. Later stages may reflect blocking of the substrate through repulsive interactions between the particles, while attractive interactions may lead to multilayer growth, which is also referred to as ripening. These phenomena are relevant in membrane or filter fouling. Experimental techniques Numerous experimental techniques have been developed to study particle aggregation. Most frequently used are time-resolved optical techniques that are based on transmittance or scattering of light. Light transmission. The variation of transmitted light through an aggregating suspension can be studied with a regular spectrophotometer in the visible region. As aggregation proceeds, the medium becomes more turbid, and its absorbance increases. The increase of the absorbance can be related to the aggregation rate constant k, and the stability ratio can be estimated from such measurements. The advantage of this technique is its simplicity. Light scattering. These techniques are based on probing the scattered light from an aggregating suspension in a time-resolved fashion. Static light scattering yields the change in the scattering intensity, while dynamic light scattering yields the variation in the apparent hydrodynamic radius. At early stages of aggregation, the variation of each of these quantities is directly proportional to the aggregation rate constant k. At later stages, one can obtain information on the clusters formed (e.g., fractal dimension). Light scattering works well for a wide range of particle sizes. Multiple scattering effects may have to be considered, since scattering becomes increasingly important for larger particles or larger aggregates. Such effects can be neglected in weakly turbid suspensions. 
Aggregation processes in strongly scattering systems have been studied with transmittance, backscattering techniques or diffusing-wave spectroscopy. Single particle counting. This technique offers excellent resolution, whereby clusters made out of tens of particles can be resolved individually. The aggregating suspension is forced through a narrow capillary particle counter, and the size of each aggregate is analyzed by light scattering. From the scattering intensity, one can deduce the size of each aggregate, and construct a detailed aggregate size distribution. If the suspensions contain high amounts of salt, one could equally use a Coulter counter. As time proceeds, the size distribution shifts towards larger aggregates, and from this variation aggregation and breakup rates involving different clusters can be deduced. The disadvantage of the technique is that the aggregates are forced through a narrow capillary under high shear, and the aggregates may break up under these conditions. Indirect techniques. As many properties of colloidal suspensions depend on the state of aggregation of the suspended particles, various indirect techniques have been used to monitor particle aggregation too. While it can be difficult to obtain quantitative information on aggregation rates or cluster properties from such experiments, they can be most valuable for practical applications. Among these techniques, settling tests are the most relevant. When one inspects a series of test tubes with suspensions prepared at different concentrations of the flocculant, stable suspensions often remain dispersed, while the unstable ones settle. Automated instruments based on light scattering/transmittance to monitor suspension settling have been developed, and they can be used to probe particle aggregation. One must realize, however, that these techniques may not always reflect the actual aggregation state of a suspension correctly. For example, larger primary particles may settle even in the absence of aggregation, or aggregates that have formed a colloidal gel will remain in suspension. Other indirect techniques capable of monitoring the state of aggregation include, for example, filtration, rheology, absorption of ultrasonic waves, or dielectric properties. Relevance Particle aggregation is a widespread phenomenon, which spontaneously occurs in nature but is also widely explored in manufacturing. Some examples include the following. Formation of river deltas. When river water carrying suspended sediment particles reaches salty water, particle aggregation may be one of the factors responsible for river delta formation. Charged particles are stable in a river's fresh water containing low levels of salt, but they become unstable in sea water containing high levels of salt. In the latter medium, the particles aggregate, the larger aggregates sediment, and thus create the river delta. Papermaking. Retention aids are added to the pulp to accelerate paper formation. These aids are coagulating aids, which accelerate the aggregation between the cellulose fibers and filler particles. Frequently, cationic polyelectrolytes are used for that purpose. Water treatment. Treatment of municipal waste water normally includes a phase where fine solid particles are removed. This separation is achieved by addition of a flocculating or coagulating agent, which induces the aggregation of the suspended solids. The aggregates are normally separated by sedimentation, leading to sewage sludge. 
Commonly used flocculating agents in water treatment include multivalent metal ions (e.g., Fe3+ or Al3+), polyelectrolytes, or both. Cheese making. The key step in cheese production is the separation of the milk into solid curds and liquid whey. This separation is achieved by inducing the aggregation processes between casein micelles by acidifying the milk or adding rennet. The acidification neutralizes the carboxylate groups on the micelles and induces the aggregation. See also Aerosol Colloid Clarifying agent Double layer forces DLVO theory (stability of colloids) Electrical double layer Emulsion Flocculation Gel Nanoparticle Particle deposition Peptization Reaction rate Settling Smoluchowski coagulation equation Sol-gel Surface charge Suspension (chemistry) References External links Chemistry Materials science Colloidal chemistry
Particle aggregation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,903
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Materials science", "Colloids", "Surface science", "nan" ]
4,063,091
https://en.wikipedia.org/wiki/Thermal%20physics
Thermal physics is the combined study of thermodynamics, statistical mechanics, and the kinetic theory of gases. This umbrella subject is typically designed for physics students and functions to provide a general introduction to each of three core heat-related subjects. Other authors, however, define thermal physics loosely as a summation of only thermodynamics and statistical mechanics. Thermal physics can be seen as the study of systems with a large number of atoms; it unites thermodynamics with statistical mechanics. Overview Thermal physics, generally speaking, is the study of the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first law of thermodynamics and second law of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory. A central topic in thermal physics is the canonical probability distribution. The electromagnetic nature of photons and phonons is studied, which shows that the oscillations of electromagnetic fields and of crystal lattices have much in common. Waves form a basis for both, provided one incorporates quantum theory. Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas, i.e. in terms of fermions and bosons, Bose–Einstein condensation, Gibbs free energy, Helmholtz free energy, chemical equilibrium, phase equilibrium, the equipartition theorem, entropy at absolute zero, and transport processes such as mean free path, viscosity, and conduction. See also Heat transfer physics Information theory Philosophy of thermal and statistical physics Thermodynamic instruments References Further reading External links Thermal Physics Links on the Web Physics education Thermodynamics
Thermal physics
[ "Physics", "Chemistry", "Mathematics" ]
371
[ "Applied and interdisciplinary physics", "Thermodynamics", "Physics education", "Dynamical systems" ]
4,063,505
https://en.wikipedia.org/wiki/NGC%204038%20Group
The NGC 4038 Group is a group of galaxies in the constellations Corvus and Crater. The group may contain between 13 and 27 galaxies. The group's best-known members are the Antennae Galaxies (NGC 4038/NGC 4039), a well-known interacting pair of galaxies. Members Several galaxies have been consistently identified as group members in the Nearby Galaxies Catalog, the survey of Fouqué et al., the Lyons Groups of Galaxies (LGG) Catalog, and the three group lists created from the Nearby Optical Galaxy sample of Giuricin et al. Additionally, the references above frequently but inconsistently identify PGC 37513, PGC 37565, and UGCA 270 as members of this group. Based on the above references, the exact membership of this group is somewhat uncertain, as is the exact number of galaxies within the group. Location The NGC 4038 group, along with other galaxies and galaxy groups, is part of the Crater Cloud, which is a component of the Virgo Supercluster. See also M96 Group - a similar group of galaxies References Galaxy clusters Corvus (constellation) Crater (constellation) Virgo Supercluster
NGC 4038 Group
[ "Astronomy" ]
242
[ "Galaxy clusters", "Constellations", "Corvus (constellation)", "Crater (constellation)", "Astronomical objects" ]
4,063,872
https://en.wikipedia.org/wiki/C/1948%20V1
The Eclipse Comet of 1948, formally known as C/1948 V1, was an especially bright comet discovered during a solar eclipse on November 1, 1948. Although there have been several comets that have been seen during solar eclipses, the Eclipse Comet of 1948 is perhaps the best-known; it was, however, best viewed only from the Southern Hemisphere. When it was first discovered during totality, it was already quite bright, at magnitude –1.0; as it was near perihelion, this was its peak brightness. Its visibility during morning twilight improved as it receded outward from the Sun; it reached about magnitude zero in that apparition, and at one point displayed a tail roughly 30 degrees in length, before falling below naked-eye visibility by the end of December. References Non-periodic comets 1948 in science
C/1948 V1
[ "Astronomy" ]
166
[ "Astronomy stubs", "Comet stubs" ]
4,064,177
https://en.wikipedia.org/wiki/Aqueduct%20of%20Segovia
The Aqueduct of Segovia () is a Roman aqueduct in Segovia, Spain. It was built around the first century AD to channel water from mountain springs to the city's fountains, public baths, and private houses, and was in use until 1973. Its elevated section, with its complete arcade of 167 arches, is one of the best-preserved Roman aqueduct bridges and the foremost symbol of Segovia, as evidenced by its presence on the city's coat of arms. The Old Town of Segovia and the aqueduct were declared a UNESCO World Heritage Site in 1985. History As the aqueduct lacks a legible inscription (one was apparently located in the structure's attic, or top portion), the date of construction cannot be definitively determined. The general date of the aqueduct's construction was long a mystery, although it was thought to have been during the 1st century AD, during the reigns of the emperors Domitian, Nerva, and Trajan. At the end of the 20th century, Géza Alföldy deciphered the text on the dedication plaque by studying the anchors that held the now-missing bronze letters in place. He determined that Emperor Domitian (AD 81–96) ordered its construction, and the year 98 AD was proposed as the most likely date of completion. However, in 2016 archeological evidence was published which points to a slightly later date, after 112 AD, during the government of Trajan or in the beginning of the government of Emperor Hadrian, from 117 AD. The beginnings of Segovia are also not definitively known. The Arevaci people are known to have populated the area before it was conquered by the Romans. Roman troops sent to control the area stayed behind to settle there. The area fell within the jurisdiction of the Roman provincial court (Latin conventus iuridici, Spanish convento jurídico) located in Clunia. Description The aqueduct once transported water from the Frío River, which rises in the mountains of the La Acebeda region at some distance from the city. It runs a considerable course before arriving in the city. The construction of the aqueduct follows the principles laid out by Vitruvius in his De architectura, published in the first century BC. The water was first gathered in a tank known as El Caserón (or Big House), and was then led through a channel to a second tower known as the Casa de Aguas (or Waterhouse). There it was naturally decanted, and sand settled out before the water continued its route. Next the water traveled on a one-percent grade until it was high upon the Postigo, a rocky outcropping on which sits the walled city center with its Alcázar, or castle. To reach the old city, the water is conveyed by the aqueduct bridge. At Plaza de Díaz Sanz, the structure makes an abrupt turn and heads toward Plaza Azoguejo. It is there the monument begins to display its full splendor. At its tallest, the aqueduct reaches a height of 28.5 metres (93.5 ft), including nearly 6 metres (19.7 ft) of foundation. There are both single and double arches supported by pillars. From the point the aqueduct enters the city until it reaches Plaza de Díaz Sanz, it includes 75 single arches and 44 double arches (or 88 arches when counted individually), followed by four single arches, totalling 167 arches in all. The first section of the aqueduct contains 36 semi-circular arches, rebuilt in the 15th century to restore a portion destroyed by the Moors in 1072. The line of arches is organized in two levels, decorated simply, in which simple mouldings hold the frame and provide support to the structure. On the upper level, the arches are 5.1 metres (16.1 ft) wide.
The aqueduct is built in two levels, and the pillars of the upper level are both shorter and narrower than those on the lower level. The top of the structure contains the channel through which the water travels, a U-shaped conduit measuring 0.55 metres (1.8 ft) high by 0.46 metres (1.5 ft) wide. The top of each pillar has a cross-section measuring 1.8 by 2.5 metres (5.9 by 8.2 feet), while the base cross-section measures 2.4 by 3 metres (7.9 by 9.8 feet). The aqueduct is built of unmortared, brick-like granite blocks. During the Roman era, each of the three tallest arches displayed a sign in bronze letters, indicating the name of its builder along with the date of construction. Today, two niches are still visible, one on each side of the aqueduct. One of them is known to have held the image of Hercules, who, according to legend, was the founder of the city. That niche now contains an image of the Virgin. The other one used to hold an image of Saint Stephen, now lost. Distribution of the water Within the walled city there was a distribution system via a deposit called a castellum aquae. While the details of this system are not fully known, it has been established that the water followed a subterranean route. The main channel has been marked on the city's pavements. Subsequent history The first reconstruction of the aqueduct took place during the reign of King Ferdinand and Queen Isabella, known as Los Reyes Católicos or the Catholic Monarchs. Don Pedro Mesa, the prior of the nearby Jerónimos del Parral monastery, led the project. A total of 36 arches were rebuilt, with great care taken not to change any of the original work or style. Later, in the 16th century, the central niches and above-mentioned statues were placed on the structure. On 4 December, the day of Saint Barbara, who is the patron saint of artillery, the cadets of the local military academy drape the image of the Virgen de la Fuencisla in a flag. The aqueduct is the city's most important architectural landmark. It had been kept functioning throughout the centuries and preserved in excellent condition. It provided water to Segovia until the mid-19th century. Because of differential decay of stone blocks, water leakage from the upper viaduct, and pollution that caused the granite ashlar masonry to deteriorate and crack, the site was listed in the 2006 World Monuments Watch by the World Monuments Fund (WMF). Contrary to popular belief, vibrations caused by traffic that used to pass under the arches did not affect the aqueduct, owing to its great mass. WMF Spain brought together the Ministry of Culture, the regional government of Castilla y León, and other local institutions to collaborate in implementing the project, and provided assistance through the global financial services company American Express. Interpretation One of the buildings of Segovia's former mint, the Real Casa de Moneda, houses an aqueduct interpretation centre, developed with funding from European Economic Area grants. There is a connection between the mint and the aqueduct in that coins minted in Segovia used the aqueduct as a mint mark. Another link is that the building provided for the mint in the 16th century harnessed water power to drive its machinery, although the water was taken directly from the River Eresma rather than sourced from the aqueduct. See also List of aqueducts in the Roman Empire List of Roman aqueducts by date Ancient Roman technology Roman engineering References Further reading External links Club de Amigos del Acueducto Norma Barbacci, "Saving Segovia's Aqueduct," ICON Magazine, Winter 2006/2007, p.
38–41. Aqueduct of Segovia – Information and photos. 600 Roman aqueducts with 35 descriptions in detail among which the Segovia aqueduct World Monuments Fund – Acueducto de Segovia American Society of Civil Engineers - International Historic Civil Engineering Landmark Aqueducts in Spain Buildings and structures completed in the 1st century Buildings and structures in Segovia Historic Civil Engineering Landmarks Roman aqueducts outside Rome Roman bridges in Spain Tourist attractions in Castile and León Wikipedia articles incorporating text from Enciclopedia Libre Universal en Español World Heritage Sites in Spain Bridges in Castile and León
Aqueduct of Segovia
[ "Engineering" ]
1,632
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
4,064,391
https://en.wikipedia.org/wiki/Advanced%20Digital%20Information%20Corporation
Advanced Digital Information Corporation (ADIC) was an American manufacturer of tape libraries and storage management software which is now part of Quantum Corp. Their product line included both hardware, such as the Scalar line of robotic tape libraries, and software, such as the StorNext File System and the StorNext Storage Manager, a Hierarchical Storage Management system. Partners and resellers included Apple, Dell, EMC, Fujitsu-Siemens, HP, IBM and Sun. ADIC was acquired by Quantum in August 2006. References 1983 establishments in Washington (state) 2006 disestablishments in Washington (state) 2006 mergers and acquisitions American companies established in 1983 American companies disestablished in 2006 Computer companies established in 1983 Computer companies disestablished in 2006 Defunct companies based in Redmond, Washington Defunct computer companies of the United States Defunct computer hardware companies
Advanced Digital Information Corporation
[ "Technology" ]
173
[ "Computing stubs", "Computer company stubs" ]
4,064,439
https://en.wikipedia.org/wiki/Free%20entropy
A thermodynamic free entropy is an entropic thermodynamic potential analogous to the free energy. It is also known as a Massieu, Planck, or Massieu–Planck potential (or function), or (rarely) as free information. In statistical mechanics, free entropies frequently appear as the logarithm of a partition function. The Onsager reciprocal relations, in particular, are developed in terms of entropic potentials. In mathematics, free entropy means something quite different: it is a generalization of entropy defined in the subject of free probability. A free entropy is generated by a Legendre transformation of the entropy. The different potentials correspond to different constraints to which the system may be subjected. Examples The most common examples are the Massieu potential, $\Phi = S - \frac{U}{T} = -\frac{F}{T}$, and the Planck potential, $\Xi = \Phi - \frac{pV}{T} = S - \frac{U}{T} - \frac{pV}{T} = -\frac{G}{T}$, where $S$ is entropy, $\Phi$ is the Massieu potential, $\Xi$ is the Planck potential, $U$ is internal energy, $T$ is temperature, $p$ is pressure, $V$ is volume, $F$ is the Helmholtz free energy, $G$ is the Gibbs free energy, $N_i$ is the number of particles (or number of moles) composing the $i$-th chemical component, $\mu_i$ is the chemical potential of the $i$-th chemical component, and $s$ is the total number of components. Note that the use of the terms "Massieu" and "Planck" for explicit Massieu–Planck potentials is somewhat obscure and ambiguous. In particular, "Planck potential" has alternative meanings. The most standard notation for an entropic potential is $\psi$, used by both Planck and Schrödinger. (Note that Gibbs used $\psi$ to denote the free energy.) Free entropies were invented by French engineer François Massieu in 1869, and actually predate Gibbs's free energy (1875). Dependence of the potentials on the natural variables Entropy By the definition of a total differential, $dS = \frac{\partial S}{\partial U} dU + \frac{\partial S}{\partial V} dV + \sum_{i=1}^s \frac{\partial S}{\partial N_i} dN_i .$ From the equations of state, $dS = \frac{1}{T} dU + \frac{p}{T} dV + \sum_{i=1}^s \left(-\frac{\mu_i}{T}\right) dN_i .$ The differentials in the above equation are all of extensive variables, so they may be integrated to yield $S = \frac{U}{T} + \frac{pV}{T} + \sum_{i=1}^s \left(-\frac{\mu_i N_i}{T}\right) .$ Massieu potential / Helmholtz free entropy Starting over at the definition of $\Phi = S - \frac{U}{T}$ and taking the total differential, we have via a Legendre transform (and the chain rule) $d\Phi = dS - \frac{1}{T} dU - U\, d\frac{1}{T} = -U\, d\frac{1}{T} + \frac{p}{T} dV + \sum_{i=1}^s \left(-\frac{\mu_i}{T}\right) dN_i .$ The above differentials are not all of extensive variables, so the equation may not be directly integrated. From the definition we see that $\Phi = \frac{pV}{T} + \sum_{i=1}^s \left(-\frac{\mu_i N_i}{T}\right) .$ If reciprocal variables are not desired, $d\Phi = dS - \frac{T\, dU - U\, dT}{T^2},$ so that $d\Phi = \frac{U}{T^2} dT + \frac{p}{T} dV + \sum_{i=1}^s \left(-\frac{\mu_i}{T}\right) dN_i .$ Planck potential / Gibbs free entropy Starting over at the definition of $\Xi = \Phi - \frac{pV}{T}$ and taking the total differential, we have via a Legendre transform (and the chain rule) $d\Xi = d\Phi - \frac{p}{T} dV - V\, d\frac{p}{T} = -U\, d\frac{1}{T} - V\, d\frac{p}{T} + \sum_{i=1}^s \left(-\frac{\mu_i}{T}\right) dN_i .$ The above differentials are not all of extensive variables, so the equation may not be directly integrated. From the definition we see that $\Xi = \sum_{i=1}^s \left(-\frac{\mu_i N_i}{T}\right) .$ If reciprocal variables are not desired, $d\Xi = d\Phi - \frac{T\, d(pV) - pV\, dT}{T^2},$ which gives $d\Xi = \frac{U + pV}{T^2} dT - \frac{V}{T} dp + \sum_{i=1}^s \left(-\frac{\mu_i}{T}\right) dN_i .$ References Bibliography Thermodynamic entropy
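The statement that free entropies appear as the logarithm of a partition function can be checked numerically. Below is a small sketch (assumed toy energy levels and temperature, with the Boltzmann constant set to 1) verifying that the Massieu potential equals both ln Z and −F/T for a canonical ensemble.

```python
import math

# Numerical sketch (illustrative values, not from the article): for a
# canonical ensemble, the Massieu potential Phi = S - U/T equals ln Z.

energies = [0.0, 0.7, 1.3, 2.0]   # assumed energy levels
T = 0.9                           # assumed temperature (k_B = 1)

Z = sum(math.exp(-e / T) for e in energies)
probs = [math.exp(-e / T) / Z for e in energies]

U = sum(p * e for p, e in zip(probs, energies))   # internal energy
S = -sum(p * math.log(p) for p in probs)          # entropy
F = U - T * S                                     # Helmholtz free energy

phi = S - U / T                                   # Massieu potential
print(f"Phi  = {phi:.6f}")
print(f"ln Z = {math.log(Z):.6f}")                # equals Phi
print(f"-F/T = {-F / T:.6f}")                     # also equals Phi
```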
Free entropy
[ "Physics" ]
509
[ "Statistical mechanics", "Entropy", "Physical quantities", "Thermodynamic entropy" ]
4,065,365
https://en.wikipedia.org/wiki/Carlson%27s%20theorem
In mathematics, in the area of complex analysis, Carlson's theorem is a uniqueness theorem which was discovered by Fritz David Carlson. Informally, it states that two different analytic functions which do not grow very fast at infinity cannot coincide at the integers. The theorem may be obtained from the Phragmén–Lindelöf theorem, which is itself an extension of the maximum-modulus theorem. Carlson's theorem is typically invoked to defend the uniqueness of a Newton series expansion. Carlson's theorem has generalized analogues for other expansions. Statement Assume that $f$ satisfies the following three conditions. The first two conditions bound the growth of $f$ at infinity, whereas the third one states that $f$ vanishes on the non-negative integers. $f(z)$ is an entire function of exponential type, meaning that $|f(z)| \leq C e^{\tau |z|}$ for some real values $C$, $\tau$. There exists $c < \pi$ such that $|f(iy)| \leq C e^{c|y|}$ for all real $y$. $f(n) = 0$ for every non-negative integer $n$. Then $f$ is identically zero. Sharpness First condition The first condition may be relaxed: it is enough to assume that $f$ is analytic in $\operatorname{Re} z > 0$, continuous in $\operatorname{Re} z \geq 0$, and satisfies the growth bounds above. Second condition To see that the second condition is sharp, consider the function $f(z) = \sin(\pi z)$. It vanishes on the integers; however, it grows exponentially on the imaginary axis with a growth rate of $c = \pi$, and indeed it is not identically zero. Third condition A result due to Rubel relaxes the condition that $f$ vanish on the integers. Namely, Rubel showed that the conclusion of the theorem remains valid if $f$ vanishes on a subset $A \subset \{0, 1, 2, \ldots\}$ of upper density 1, meaning that $\limsup_{n \to \infty} \frac{|A \cap \{0, 1, \ldots, n\}|}{n+1} = 1 .$ This condition is sharp, meaning that the theorem fails for sets $A$ of upper density smaller than 1. Applications Suppose $f$ is a function that possesses all finite forward differences $\Delta^n f(0)$. Consider then the Newton series $g(x) = \sum_{n=0}^\infty \binom{x}{n} \Delta^n f(0),$ with $\binom{x}{n}$ the binomial coefficient and $\Delta^n f(0)$ the $n$-th forward difference. By construction, one then has that $f(k) = g(k)$ for all non-negative integers $k$, so that the difference $h(k) = f(k) - g(k)$ vanishes there. This is one of the conditions of Carlson's theorem; if $h$ obeys the others, then $h$ is identically zero, and the finite differences for $f$ uniquely determine its Newton series. That is, if a Newton series for $f$ exists, and the difference satisfies the Carlson conditions, then $f$ is unique. See also Newton series Mahler's theorem Table of Newtonian series References F. Carlson, Sur une classe de séries de Taylor, Dissertation, Uppsala, Sweden, 1914. E.C. Titchmarsh, The Theory of Functions (2nd Ed) (1939) Oxford University Press (See section 5.81) R. P. Boas, Jr., Entire functions, (1954) Academic Press, New York. Factorial and binomial topics Finite differences Theorems in complex analysis
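A small sketch of the Newton series application above: the forward differences at 0 determine an interpolant that agrees with the original function on the non-negative integers, and Carlson's theorem is what guarantees that a well-behaved interpolant agreeing there is unique. The sample polynomial and number of samples are arbitrary choices.

```python
from math import comb

# Illustrative sketch: reconstruct a function on the non-negative
# integers from its forward differences at 0 via the Newton series.

def forward_differences(values):
    """Return [D^0 f(0), D^1 f(0), ...] from the samples f(0), f(1), ..."""
    diffs, row = [], list(values)
    while row:
        diffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return diffs

def newton_series(diffs, x):
    """Evaluate sum_k D^k f(0) * C(x, k) at a non-negative integer x."""
    return sum(d * comb(x, k) for k, d in enumerate(diffs))

f = lambda n: n**3 - 2 * n + 5          # sample function (a polynomial)
samples = [f(n) for n in range(8)]
diffs = forward_differences(samples)

# The series reproduces f exactly at the sample points.
print(all(newton_series(diffs, n) == f(n) for n in range(8)))  # True
```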
Carlson's theorem
[ "Mathematics" ]
553
[ "Theorems in mathematical analysis", "Mathematical analysis", "Factorial and binomial topics", "Theorems in complex analysis", "Finite differences", "Combinatorics" ]
4,065,640
https://en.wikipedia.org/wiki/CRC%20Oil%20Storage%20Depot
CRC Oil Storage Depot was one of five oil terminals in Hong Kong and was owned by China Resources Petroleum Company Limited (CRC). See also Energy in Hong Kong References External links Texaco Oil Depot [1936-1988]. Contains a list of former oil depots in Hong Kong. Tsing Yi Oil terminals Energy infrastructure in Hong Kong
CRC Oil Storage Depot
[ "Chemistry" ]
70
[ "Petroleum", "Petroleum stubs" ]
4,065,906
https://en.wikipedia.org/wiki/Radio%20SHARK
radio SHARK (the capitalization is a trademarked logotype) is a computer-controlled radio designed by Griffin Technology, introduced in late 2004. A second generation (radio SHARK 2) superseded it in 2007; the two are distinguishable by color (the first model is white, the second is black). The radio connects to the computer through a USB interface, which also supplies power to the radio. The device is shaped like a shark fin and includes four internal LED lights attached to three pieces of clear plastic on each side of the device's case; two of the LEDs glow blue when the device is plugged in, and the other two glow red when it is recording radio. Software designed for the radio SHARK allows users to record radio programs at specific times and frequencies. The software also facilitates listening to "live" radio using time-shifting technology. Using the time-shifting features of the software, users can pause, rewind, and fast-forward "live" radio, in a manner similar to how users of TiVo or other digital video recorders can time-shift video. The radio SHARK uses the computer's hard drive to store audio files that allow for the time-shifting functionality. The radio SHARK tunes in (standard mode) 87.5 through 108.0 MHz FM, (Japanese mode) 76.0 through 90.0 MHz FM, and 522 through 1710 kHz AM. radio SHARK can tune both odd and even increments of FM frequencies, and either 9 or 10 kHz increments on AM. The radio SHARK is compatible with both Macintosh and Microsoft Windows. The Macintosh version of the radio SHARK software can load recorded audio files directly into iTunes, facilitating easy transfer of recorded radio programs to an iPod or CD. The product has now been discontinued by the manufacturer, who also says, "We do not support the use of this product in Lion, Mac OS 10.7 and later." External links radio Shark - official site radio Shark 2 - official site Macworld review Ars Technica review iLounge review of version 2 Computer peripherals
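The time-shifting behaviour described above amounts to a bounded buffer between a recorder and a playback cursor. Below is a conceptual toy sketch of that idea; it is an assumption-laden illustration, not Griffin's actual software, and all names in it are invented for the example.

```python
from collections import deque

# Toy sketch: time-shifted "live" audio as a bounded chunk buffer.
class TimeShiftBuffer:
    """Hold the last `capacity` audio chunks; playback may lag live."""

    def __init__(self, capacity):
        self.chunks = deque(maxlen=capacity)  # oldest chunks fall off
        self.play_pos = 0                     # playback cursor

    def record(self, chunk):
        # If the buffer is full, it slides under the playback cursor.
        if len(self.chunks) == self.chunks.maxlen and self.play_pos > 0:
            self.play_pos -= 1
        self.chunks.append(chunk)

    def rewind(self, n):
        self.play_pos = max(0, self.play_pos - n)

    def play(self):
        # Pausing is simply not calling play() while record() continues.
        if self.play_pos < len(self.chunks):
            chunk = self.chunks[self.play_pos]
            self.play_pos += 1
            return chunk
        return None  # caught up with the live broadcast

buf = TimeShiftBuffer(capacity=1000)
for i in range(5):
    buf.record(f"chunk-{i}")
print(buf.play(), buf.play())   # chunk-0 chunk-1
buf.rewind(1)
print(buf.play())               # chunk-1 again, as after a rewind
```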
Radio SHARK
[ "Technology" ]
417
[ "Computer peripherals", "Components" ]
4,066,001
https://en.wikipedia.org/wiki/%CE%A9-consistent%20theory
In mathematical logic, an ω-consistent (or omega-consistent, also called numerically segregative) theory is a theory (collection of sentences) that is not only (syntactically) consistent (that is, does not prove a contradiction), but also avoids proving certain infinite combinations of sentences that are intuitively contradictory. The name is due to Kurt Gödel, who introduced the concept in the course of proving the incompleteness theorem. Definition A theory T is said to interpret the language of arithmetic if there is a translation of formulas of arithmetic into the language of T so that T is able to prove the basic axioms of the natural numbers under this translation. A T that interprets arithmetic is ω-inconsistent if, for some property P of natural numbers (defined by a formula in the language of T), T proves P(0), P(1), P(2), and so on (that is, for every standard natural number n, T proves that P(n) holds), but T also proves that there is some natural number n such that P(n) fails. This may not generate a contradiction within T because T may not be able to prove for any specific value of n that P(n) fails, only that there is such an n. In particular, such n is necessarily a nonstandard integer in any model for T (Quine has thus called such theories "numerically insegregative"). T is ω-consistent if it is not ω-inconsistent. There is a weaker but closely related property of Σ1-soundness. A theory T is Σ1-sound (or 1-consistent, in another terminology) if every Σ01-sentence provable in T is true in the standard model of arithmetic N (i.e., the structure of the usual natural numbers with addition and multiplication). If T is strong enough to formalize a reasonable model of computation, Σ1-soundness is equivalent to demanding that whenever T proves that a Turing machine C halts, then C actually halts. Every ω-consistent theory is Σ1-sound, but not vice versa. More generally, we can define an analogous concept for higher levels of the arithmetical hierarchy. If Γ is a set of arithmetical sentences (typically Σ0n for some n), a theory T is Γ-sound if every Γ-sentence provable in T is true in the standard model. When Γ is the set of all arithmetical formulas, Γ-soundness is called just (arithmetical) soundness. If the language of T consists only of the language of arithmetic (as opposed to, for example, set theory), then a sound system is one whose model can be thought of as the set ω, the usual set of mathematical natural numbers. The case of general T is different, see ω-logic below. Σn-soundness has the following computational interpretation: if the theory proves that a program C using a Σn−1-oracle halts, then C actually halts. Examples Consistent, ω-inconsistent theories Write PA for the theory Peano arithmetic, and Con(PA) for the statement of arithmetic that formalizes the claim "PA is consistent". Con(PA) could be of the form "No natural number n is the Gödel number of a proof in PA that 0=1". Now, the consistency of PA implies the consistency of PA + ¬Con(PA). Indeed, if PA + ¬Con(PA) was inconsistent, then PA alone would prove ¬Con(PA)→0=1, and a reductio ad absurdum in PA would produce a proof of Con(PA). By Gödel's second incompleteness theorem, PA would be inconsistent. Therefore, assuming that PA is consistent, PA + ¬Con(PA) is consistent too. However, it would not be ω-consistent. This is because, for any particular n, PA, and hence PA + ¬Con(PA), proves that n is not the Gödel number of a proof that 0=1. 
However, PA + ¬Con(PA) proves that, for some natural number n, n is the Gödel number of such a proof (this is just a direct restatement of the claim ¬Con(PA)). In this example, the axiom ¬Con(PA) is Σ1, hence the system PA + ¬Con(PA) is in fact Σ1-unsound, not just ω-inconsistent. Arithmetically sound, ω-inconsistent theories Let T be PA together with the axioms c ≠ n for each natural number n, where c is a new constant added to the language. Then T is arithmetically sound (as any nonstandard model of PA can be expanded to a model of T), but ω-inconsistent (as it proves ∃x (x = c), yet proves c ≠ n for every number n). Σ1-sound ω-inconsistent theories using only the language of arithmetic can be constructed as follows. Let IΣn be the subtheory of PA with the induction schema restricted to Σn-formulas, for any n > 0. The theory IΣn+1 is finitely axiomatizable; let thus A be its single axiom, and consider the theory T = IΣn + ¬A. We can assume that A is an instance of the induction schema, which has the form (B(0) ∧ ∀x (B(x) → B(x+1))) → ∀x B(x). If we denote the formula (B(0) ∧ ∀x (B(x) → B(x+1))) → B(n) by P(n), then for every natural number n, the theory T (actually, even the pure predicate calculus) proves P(n). On the other hand, T proves the formula ∃n ¬P(n), because it is logically equivalent to the axiom ¬A. Therefore, T is ω-inconsistent. It is possible to show that T is Πn+3-sound. In fact, it is Πn+3-conservative over the (obviously sound) theory IΣn. The argument is more complicated (it relies on the provability of the Σn+2-reflection principle for IΣn in IΣn+1). Arithmetically unsound, ω-consistent theories Let ω-Con(PA) be the arithmetical sentence formalizing the statement "PA is ω-consistent". Then the theory PA + ¬ω-Con(PA) is unsound (Σ3-unsound, to be precise), but ω-consistent. The argument is similar to the first example: a suitable version of the Hilbert–Bernays–Löb derivability conditions holds for the "provability predicate" ω-Prov(A) = ¬ω-Con(PA + ¬A), hence it satisfies an analogue of Gödel's second incompleteness theorem. ω-logic The concept of theories of arithmetic whose integers are the true mathematical integers is captured by ω-logic. Let T be a theory in a countable language that includes a unary predicate symbol N intended to hold just of the natural numbers, as well as specified names 0, 1, 2, ..., one for each (standard) natural number (which may be separate constants, or constant terms such as 0, 1, 1+1, 1+1+1, ..., etc.). Note that T itself could be referring to more general objects, such as real numbers or sets; thus in a model of T the objects satisfying N(x) are those that T interprets as natural numbers, not all of which need be named by one of the specified names. The system of ω-logic includes all axioms and rules of the usual first-order predicate logic, together with, for each T-formula P(x) with a specified free variable x, an infinitary ω-rule of the form: from P(0), P(1), P(2), …, infer ∀x (N(x) → P(x)). That is, if the theory asserts (i.e. proves) P(n) separately for each natural number n given by its specified name, then it also asserts P collectively for all natural numbers at once via the evident finite universally quantified counterpart of the infinitely many antecedents of the rule. For a theory of arithmetic, meaning one with intended domain the natural numbers such as Peano arithmetic, the predicate N is redundant and may be omitted from the language, with the consequent of the rule for each P simplifying to ∀x P(x).
An ω-model of T is a model of T whose domain includes the natural numbers and whose specified names and symbol N are standardly interpreted, respectively as those numbers and the predicate having just those numbers as its domain (whence there are no nonstandard numbers). If N is absent from the language then what would have been the domain of N is required to be that of the model, i.e. the model contains only the natural numbers. (Other models of T may interpret these symbols nonstandardly; the domain of N need not even be countable, for example.) These requirements make the ω-rule sound in every ω-model. As a corollary to the omitting types theorem, the converse also holds: the theory T has an ω-model if and only if it is consistent in ω-logic. There is a close connection of ω-logic to ω-consistency. A theory consistent in ω-logic is also ω-consistent (and arithmetically sound). The converse is false, as consistency in ω-logic is a much stronger notion than ω-consistency. However, the following characterization holds: a theory is ω-consistent if and only if its closure under unnested applications of the ω-rule is consistent. Relation to other consistency principles If the theory T is recursively axiomatizable, ω-consistency has the following characterization, due to Craig Smoryński: T is ω-consistent if and only if T + RFN_T + Th_{Π02}(N) is consistent. Here, Th_{Π02}(N) is the set of all Π02-sentences valid in the standard model of arithmetic, and RFN_T is the uniform reflection principle for T, which consists of the axioms ∀x (Prov_T(⌜φ(x̄)⌝) → φ(x)) for every formula φ with one free variable. In particular, a finitely axiomatizable theory T in the language of arithmetic is ω-consistent if and only if T + PA is Σ02-sound. Notes Bibliography Kurt Gödel (1931). 'Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I'. In Monatshefte für Mathematik. Translated into English as On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Proof theory
Ω-consistent theory
[ "Mathematics" ]
2,188
[ "Mathematical logic", "Proof theory" ]
4,066,072
https://en.wikipedia.org/wiki/Fish%20curve
A fish curve is an ellipse negative pedal curve that is shaped like a fish. In a fish curve, the pedal point is at the focus for the special case of the squared eccentricity $e^2 = \tfrac{1}{2}$. The parametric equations for a fish curve correspond to those of the associated ellipse. Equations For an ellipse with the parametric equations $x = a \cos t, \quad y = \frac{a \sin t}{\sqrt{2}},$ the corresponding fish curve has parametric equations $x = a \cos t - \frac{a \sin^2 t}{\sqrt{2}}, \quad y = a \cos t \sin t .$ When the origin is translated to the node (the crossing point), the curve can also be written in Cartesian form. Properties Area The areas of the tail and of the head of the fish, and hence the overall area of the curve, can be expressed in closed form in terms of the semiaxis a. Curvature, arc length, and tangential angle Closed-form expressions are likewise known for the arc length, the curvature, and the tangential angle of the fish curve; the expression for the tangential angle involves the complex argument. References Plane curves
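Given the parametrization above, the enclosed area can be estimated numerically without the closed form, as in this minimal sketch using Green's theorem; the discretization size is an arbitrary choice, and note that the self-intersection at the node makes the result a signed total.

```python
import math

# Numerical sketch: signed area of the fish curve via Green's theorem,
# A = (1/2) * closed integral of (x dy - y dx), for a = 1.

a = 1.0
N = 100_000   # number of polygon segments (assumed resolution)

def xy(t):
    return (a * (math.cos(t) - math.sin(t) ** 2 / math.sqrt(2)),
            a * math.cos(t) * math.sin(t))

signed_area = 0.0
for i in range(N):
    x0, y0 = xy(2 * math.pi * i / N)
    x1, y1 = xy(2 * math.pi * (i + 1) / N)
    signed_area += 0.5 * (x0 * (y1 - y0) - y0 * (x1 - x0))

# Because the curve crosses itself at the node, head and tail are
# traversed with opposite orientations, so this total is signed.
print(f"signed area for a = 1: {signed_area:.4f}")
```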
Fish curve
[ "Mathematics" ]
181
[ "Planes (geometry)", "Euclidean plane geometry", "Plane curves" ]
4,066,199
https://en.wikipedia.org/wiki/Frenkel%20defect
In crystallography, a Frenkel defect is a type of point defect in crystalline solids, named after its discoverer Yakov Frenkel. The defect forms when an atom or smaller ion (usually a cation) leaves its place in the structure, creating a vacancy, and becomes an interstitial by lodging in a nearby location. In elemental systems, they are primarily generated during particle irradiation, as their formation enthalpy is typically much higher than for other point defects, such as vacancies, and thus their equilibrium concentration according to the Boltzmann distribution is below the detection limit. In ionic crystals, which usually possess a low coordination number or a considerable disparity in the sizes of the ions, this defect can also be generated spontaneously, where the smaller ion (usually the cation) is dislocated. Like a Schottky defect, the Frenkel defect is a stoichiometric defect (it does not change the overall stoichiometry of the compound). In ionic compounds, the vacancy and interstitial defect involved are oppositely charged, and one might expect them to be located close to each other due to electrostatic attraction. However, this is not likely the case in real materials, due to the smaller entropy of such a coupled defect, or because the two defects might collapse into each other. Also, because such coupled complex defects are stoichiometric, their concentration will be independent of chemical conditions. Effect on density Even though Frenkel defects involve only the migration of ions within the crystal, the total volume, and thus the density, does not necessarily remain unchanged: in particular for close-packed systems, the structural expansion due to the strains induced by the interstitial atom typically dominates over the structural contraction due to the vacancy, leading to a decrease of density. Examples Frenkel defects are exhibited in ionic solids with a large size difference between the anion and cation (with the cation usually smaller due to an increased effective nuclear charge). Some examples of solids which exhibit Frenkel defects: zinc sulfide, silver(I) chloride, silver(I) bromide (which also shows Schottky defects), and silver(I) iodide. These are due to the comparatively smaller size of the Zn2+ and Ag+ ions. For example, consider a structure formed by Xn− and Mn+ ions. Suppose an M ion leaves the M sublattice, leaving the X sublattice unchanged. The number of interstitials formed will equal the number of vacancies formed. One form of a Frenkel defect reaction in MgO, with the oxide anion leaving the structure and going into the interstitial site, written in Kröger–Vink notation: Mg_Mg^x + O_O^x → O_i'' + v_O^•• + Mg_Mg^x This can be illustrated with the example of the sodium chloride crystal structure. See also Deep-level transient spectroscopy (DLTS) Schottky defect Wigner effect Crystallographic defect References Further reading Crystallographic defects
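The equilibrium-concentration point made above can be illustrated with the standard textbook estimate n_F ≈ sqrt(N·N_i)·exp(−ΔH/2kT); the site densities and formation enthalpy below are assumed, illustrative numbers, not values from the article.

```python
import math

# Rough sketch: equilibrium Frenkel defect concentration versus
# temperature, using the textbook estimate
#     n_F ~= sqrt(N * N_i) * exp(-dH / (2 * k * T)),
# where N is the density of lattice sites, N_i the density of
# available interstitial sites, and dH the formation enthalpy.
# All numerical values below are assumptions for illustration.

k = 8.617e-5            # Boltzmann constant in eV/K
N = 1.0e28              # lattice sites per m^3 (assumed)
N_i = 1.0e28            # interstitial sites per m^3 (assumed)
dH = 2.0                # Frenkel formation enthalpy in eV (assumed)

for T in (300, 600, 900, 1200):
    n_F = math.sqrt(N * N_i) * math.exp(-dH / (2 * k * T))
    print(f"T = {T:5d} K: n_F ~ {n_F:.3e} per m^3")

# The steep Arrhenius dependence shows why, for high formation
# enthalpies, the equilibrium concentration at modest temperatures
# falls below any detection limit, as noted in the article.
```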
Frenkel defect
[ "Chemistry", "Materials_science", "Engineering" ]
627
[ "Crystallographic defects", "Crystallography", "Materials degradation", "Materials science" ]
4,066,270
https://en.wikipedia.org/wiki/Rolling%20circle%20replication
Rolling circle replication (RCR) is a process of unidirectional nucleic acid replication that can rapidly synthesize multiple copies of circular molecules of DNA or RNA, such as plasmids, the genomes of bacteriophages, and the circular RNA genome of viroids. Some eukaryotic viruses also replicate their DNA or RNA via the rolling circle mechanism. Rolling circle amplification (RCA), an isothermal DNA amplification technique, was developed as a simplified version of natural rolling circle replication. The RCA mechanism is widely used in molecular biology and biomedical nanotechnology, especially in the field of biosensing (as a method of signal amplification). Circular DNA replication Rolling circle DNA replication is initiated by an initiator protein encoded by the plasmid or bacteriophage DNA, which nicks one strand of the double-stranded, circular DNA molecule at a site called the double-strand origin, or DSO. The initiator protein remains bound to the 5' phosphate end of the nicked strand, and the free 3' hydroxyl end is released to serve as a primer for DNA synthesis by DNA polymerase III. Using the unnicked strand as a template, replication proceeds around the circular DNA molecule, displacing the nicked strand as single-stranded DNA. Displacement of the nicked strand is carried out by a host-encoded helicase called PcrA (the abbreviation standing for plasmid copy reduced) in the presence of the plasmid replication initiation protein. Continued DNA synthesis can produce multiple single-stranded linear copies of the original DNA in a continuous head-to-tail series called a concatemer. These linear copies can be converted to double-stranded circular molecules through the following process: first, the initiator protein makes another nick in the DNA to terminate synthesis of the first (leading) strand. RNA polymerase and DNA polymerase III then replicate the single-stranded origin (SSO) DNA to make another double-stranded circle. DNA polymerase I removes the primer, replacing it with DNA, and DNA ligase joins the ends to make another molecule of double-stranded circular DNA. In summary, a typical DNA rolling circle replication has five steps: Circular dsDNA is "nicked". The 3' end is elongated using the "unnicked" DNA as the leading-strand template; the 5' end is displaced. The displaced DNA is a lagging strand and is made double-stranded via a series of Okazaki fragments. Replication of both "unnicked" and displaced ssDNA. The displaced DNA circularizes. Virology Replication of viral DNA Some DNA viruses replicate their genomic information in host cells via rolling circle replication. For instance, human herpesvirus-6 (HHV-6) expresses a set of "early genes" that are believed to be involved in this process. The long concatemers that result are subsequently cleaved between the pac-1 and pac-2 regions of HHV-6's genome by ribozymes when it is packaged into individual virions. Human papillomavirus-16 (HPV-16) is another virus that employs rolling replication to produce progeny at a high rate. HPV-16 infects human epithelial cells and has a double-stranded circular genome. During replication, at the origin, the E1 hexamer wraps around the single-stranded DNA and moves in the 3' to 5' direction. In normal bidirectional replication, the two replication proteins will dissociate at the time of collision, but in HPV-16 it is believed that the E1 hexamer does not dissociate, hence leading to continuous rolling replication.
It is believed that this replication mechanism of HPV may have physiological implications for the integration of the virus into the host chromosome and the eventual progression to cervical cancer. In addition, geminiviruses also utilize rolling circle replication as their replication mechanism. These viruses are responsible for destroying many major crops, such as cassava, cotton, legumes, maize, tomato and okra. The virus has a circular, single-stranded DNA that replicates in host plant cells. The entire process is initiated by the geminiviral replication initiator protein, Rep, which is also responsible for altering the host environment to act as part of the replication machinery. Rep is also strikingly similar to most other rolling-circle replication initiator proteins of eubacteria, with the presence of motifs I, II, and III at its N terminus. During rolling circle replication, the ssDNA of the geminivirus is converted to dsDNA, and Rep then attaches to the dsDNA at the origin sequence TAATATTAC. After Rep, along with other replication proteins, binds to the dsDNA, it forms a stem loop where the DNA is then cleaved at the nonamer sequence, causing a displacement of the strand. This displacement allows the replication fork to progress in the 3' to 5' direction, which ultimately yields a new ssDNA strand and a concatemeric DNA strand. Bacteriophage T4 DNA replication intermediates include circular and branched circular concatemeric structures. These structures likely reflect a rolling circle mechanism of replication. Replication of viral RNA Some RNA viruses and viroids also replicate their genome through rolling circle RNA replication. For viroids, there are two alternative RNA replication pathways, followed respectively by members of the family Pospiviroidae (asymmetric replication) and Avsunviroidae (symmetric replication). In the family Pospiviroidae (PSTVd-like), the circular plus strand RNA is transcribed by a host RNA polymerase into oligomeric minus strands and then oligomeric plus strands. These oligomeric plus strands are cleaved by a host RNase and ligated by a host RNA ligase to reform the monomeric plus strand circular RNA. This is called the asymmetric pathway of rolling circle replication. The viroids in the family Avsunviroidae (ASBVd-like) replicate their genome through the symmetric pathway of rolling circle replication. In this symmetric pathway, oligomeric minus strands are first cleaved and ligated to form monomeric minus strands, and then are transcribed into oligomeric plus strands. These oligomeric plus strands are then cleaved and ligated to reform the monomeric plus strand. The symmetric pathway is so named because both plus and minus strands are produced in the same way. Cleavage of the oligomeric plus and minus strands is mediated by the self-cleaving hammerhead ribozyme structure present in the Avsunviroidae, but such a structure is absent in the Pospiviroidae. Rolling circle amplification (RCA) The derivative form of rolling circle replication has been successfully used for the amplification of DNA from very small amounts of starting material. This amplification technique is named rolling circle amplification (RCA).
Unlike conventional DNA amplification techniques such as the polymerase chain reaction (PCR), RCA is an isothermal nucleic acid amplification technique in which the polymerase continuously adds single nucleotides to a primer annealed to a circular template, resulting in a long ssDNA concatemer that contains tens to hundreds of tandem repeats complementary to the circular template (a toy illustration of this product is sketched at the end of this article). There are five important components required for performing an RCA reaction: A DNA polymerase A suitable buffer that is compatible with the polymerase A short DNA or RNA primer A circular DNA template Deoxynucleotide triphosphates (dNTPs) The polymerases used in RCA are Phi29, Bst, and Vent exo- DNA polymerase for DNA amplification, and T7 RNA polymerase for RNA amplification. Since Phi29 DNA polymerase has the best processivity and strand-displacement ability among the aforementioned polymerases, it has been most frequently used in RCA reactions. Unlike the polymerase chain reaction (PCR), RCA can be conducted at a constant temperature (room temperature to 65 °C) both in free solution and on top of immobilized targets (solid-phase amplification). There are typically three steps involved in a DNA RCA reaction: Circular template ligation, which can be conducted via template-mediated enzymatic ligation (e.g., with T4 DNA ligase) or template-free ligation using special DNA ligases (e.g., CircLigase). Primer-induced single-strand DNA elongation. Multiple primers can be employed to hybridize with the same circle; as a result, multiple amplification events can be initiated, producing multiple RCA products ("multiprimed RCA"). Amplification product detection and visualization, which is most commonly conducted through fluorescent detection, with fluorophore-conjugated dNTPs, fluorophore-tethered complementary probes, or fluorescently labeled molecular beacons. In addition to the fluorescent approaches, gel electrophoresis is also widely used for the detection of RCA products. RCA produces a linear amplification of DNA, as each circular template is copied at a constant speed for a certain amount of time. To increase the yield and achieve exponential amplification, as PCR does, several approaches have been investigated. One of them is hyperbranched rolling circle amplification, or HRCA, in which primers that anneal to the original RCA products are added and also extended; in this way the original RCA creates more template that can be amplified. Another is circle-to-circle amplification, or C2CA, in which the RCA products are digested with a restriction enzyme and ligated into new circular templates using a restriction oligo, followed by a new round of RCA with a larger number of circular templates for amplification. Applications of RCA RCA can amplify a single molecular binding event over a thousandfold, making it particularly useful for detecting targets with ultra-low abundance. RCA reactions can be performed not only in free-solution environments but also on a solid surface such as glass, micro- or nano-beads, microwell plates, microfluidic devices, or even paper strips. This feature makes it a very powerful tool for amplifying signals in solid-phase immunoassays (e.g., ELISA). In this way, RCA is becoming a highly versatile signal amplification tool with wide-ranging applications in genomics, proteomics, diagnostics and biosensing. Immuno-RCA Immuno-RCA is an isothermal signal amplification method for high-specificity and high-sensitivity protein detection and quantification.
This technique combines two fields: RCA, which allows nucleotide amplification, and immunoassay, which uses antibodies specific to intracellular or free biomarkers. As a result, immuno-RCA gives a specific amplified signal (high signal-to-noise ratio), making it suitable for detecting, quantifying and visualizing low-abundance protein markers in liquid-phase immunoassays and immunohistochemistry. Immuno-RCA follows a typical immunosorbent reaction, as in ELISA or immunohistochemistry tissue staining. The detection antibodies used in an immuno-RCA reaction are modified by attaching a ssDNA oligonucleotide to the end of the heavy chains, so the Fab (fragment, antigen-binding) section of the detection antibody can still bind to specific antigens while the oligonucleotide serves as a primer for the RCA reaction. The typical antibody-mediated immuno-RCA procedure is as follows: 1. A detection antibody recognizes a specific protein target. This antibody is also attached to an oligonucleotide primer. 2. When circular DNA is present, it is annealed, and the primer matches the circular DNA complementary sequence. 3. The complementary sequence of the circular DNA template is copied hundreds of times and remains attached to the antibody. 4. RCA output (elongated ssDNA) is detected with fluorescent probes using a fluorescence microscope or a microplate reader. Aptamer-based immuno-RCA In addition to antibody-mediated immuno-RCA, the ssDNA RCA primer can be conjugated to the 3' end of a DNA aptamer as well. The primer tail can be amplified through rolling circle amplification, and the product can be visualized through the labeling of a fluorescent reporter. Other applications of RCA Various derivatives of RCA have been widely used in the field of biosensing. For example, RCA has been successfully used for detecting the existence of viral and bacterial DNA from clinical samples, which is very beneficial for the rapid diagnosis of infectious diseases. It has also been used as an on-chip signal amplification method for nucleic acid (both DNA and RNA) microarray assays. In addition to its amplification function in biosensing applications, the RCA technique can be applied to the construction of DNA nanostructures and DNA hydrogels as well. The products of RCA can also be used as templates for the periodic assembly of nanospecies or proteins, the synthesis of metallic nanowires, and the formation of nano-islands. See also Selector-technique References External links DNA replication systems used with small circular DNA molecules Genomes 2, T. Brown et al., at NCBI Books MicrobiologyBytes: Viroids and Virusoids http://mcmanuslab.ucsf.edu/node/246 DNA replication
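The concatemeric structure of the RCA product described above can be shown with a toy sketch: one pass around the circle yields the reverse complement of the template, and repeated passes give tandem copies of that unit. The template sequence, repeat count, and function names below are all invented for the example, not a lab protocol.

```python
# Toy sketch: the tandem-repeat structure of an RCA product.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def rca_product(circular_template, n_repeats, primer_offset=0):
    """Concatemer produced by copying around the circle n_repeats times.

    The polymerase reads the template 3'->5' starting at primer_offset,
    so one pass yields the reverse complement of the circle; repeated
    passes give head-to-tail tandem copies of that unit.
    """
    # rotate the circle so synthesis starts at the primer position
    t = circular_template[primer_offset:] + circular_template[:primer_offset]
    unit = "".join(COMPLEMENT[b] for b in reversed(t))
    return unit * n_repeats

template = "ATGCGTACGTTAGC"        # assumed 14-nt circular template
product = rca_product(template, n_repeats=3)
print(product)
print(f"product length: {len(product)} nt "
      f"({len(product) // len(template)} tandem repeats)")
```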
Rolling circle replication
[ "Biology" ]
2,826
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
4,066,308
https://en.wikipedia.org/wiki/Multiple%20sequence%20alignment
Multiple sequence alignment (MSA) is the process or the result of sequence alignment of three or more biological sequences, generally protein, DNA, or RNA. These alignments are used to infer evolutionary relationships via phylogenetic analysis and can highlight homologous features between sequences. Alignments highlight mutation events such as point mutations (single amino acid or nucleotide changes), insertion mutations and deletion mutations, and alignments are used to assess sequence conservation and infer the presence and activity of protein domains, tertiary structures, secondary structures, and individual amino acids or nucleotides. Multiple sequence alignments require more sophisticated methodologies than pairwise alignments, as they are more computationally complex. Most multiple sequence alignment programs use heuristic methods rather than global optimization because identifying the optimal alignment between more than a few sequences of moderate length is prohibitively computationally expensive. However, heuristic methods generally cannot guarantee high-quality solutions and have been shown to fail to yield near-optimal solutions on benchmark test cases. Problem statement Given n sequences S1, S2, …, Sn, a multiple sequence alignment of this set is produced by inserting any number of gaps needed into each sequence until the modified sequences, S′1, S′2, …, S′n, all conform to the same length and no column of the modified sequences consists entirely of gaps. To return from each particular aligned sequence S′i to the original Si, remove all gaps. Graphing approach A general approach when calculating multiple sequence alignments is to use graphs to identify all of the different alignments. When finding alignments via graphs, a complete alignment is created in a weighted graph that contains a set of vertices and a set of edges. Each of the graph edges has a weight based on a certain heuristic that helps to score each alignment or subset of the original graph. Tracing alignments When determining the best-suited alignments for each MSA, a trace is usually generated. A trace is a set of realized, or corresponding and aligned, vertices that has a specific weight based on the edges that are selected between corresponding vertices. When choosing traces for a set of sequences it is necessary to choose a trace with a maximum weight to get the best alignment of the sequences. Alignment methods There are various alignment methods used within multiple sequence alignment to maximize scores and the correctness of alignments. Each is usually based on a certain heuristic with an insight into the evolutionary process. Most try to replicate evolution to get the most realistic alignment possible and thus best predict relations between sequences. Dynamic programming A direct method for producing an MSA uses the dynamic programming technique to identify the globally optimal alignment solution. For proteins, this method usually involves two sets of parameters: a gap penalty and a substitution matrix assigning scores or probabilities to the alignment of each possible pair of amino acids based on the similarity of the amino acids' chemical properties and the evolutionary probability of the mutation. For nucleotide sequences, a similar gap penalty is used, but a much simpler substitution matrix, wherein only identical matches and mismatches are considered, is typical.
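A minimal sketch of how such a match/mismatch substitution scheme and gap penalty combine to score a candidate alignment, using the sum-of-pairs objective discussed below; the score values and the toy alignment are assumptions for illustration, not any particular program's defaults.

```python
# Toy sum-of-pairs scoring of an MSA given as equal-length gapped rows.
MATCH, MISMATCH, GAP = 2, -1, -2   # assumed nucleotide-style scores

def pair_score(a, b):
    if a == "-" and b == "-":
        return 0                   # gap-gap pairs are conventionally ignored
    if a == "-" or b == "-":
        return GAP
    return MATCH if a == b else MISMATCH

def sum_of_pairs(msa):
    """Sum pair_score over every pair of characters in every column."""
    assert len({len(s) for s in msa}) == 1, "rows must have equal length"
    total = 0
    for col in zip(*msa):
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                total += pair_score(col[i], col[j])
    return total

msa = ["ACG-T",
       "AC-GT",
       "ACGGT"]
print(sum_of_pairs(msa))
```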
The scores in the substitution matrix may be either all positive or a mix of positive and negative in the case of a global alignment, but must be both positive and negative in the case of a local alignment. For n individual sequences, the naive method requires constructing the n-dimensional equivalent of the matrix formed in standard pairwise sequence alignment. The search space thus increases exponentially with increasing n and is also strongly dependent on sequence length. Expressed with the big O notation commonly used to measure computational complexity, a naïve MSA takes O(Length^Nseqs) time to produce. To find the global optimum for n sequences this way has been shown to be an NP-complete problem. In 1989, based on the Carrillo–Lipman algorithm, Altschul introduced a practical method that uses pairwise alignments to constrain the n-dimensional search space. In this approach pairwise dynamic programming alignments are performed on each pair of sequences in the query set, and only the space near the n-dimensional intersection of these alignments is searched for the n-way alignment. The MSA program optimizes the sum of all of the pairs of characters at each position in the alignment (the so-called sum-of-pairs score) and has been implemented in a software program for constructing multiple sequence alignments. In 2019, Hosseininasab and van Hoeve showed that by using decision diagrams, MSA may be modeled in polynomial space complexity. Progressive alignment construction The most widely used approach to multiple sequence alignment uses a heuristic search known as the progressive technique (also known as the hierarchical or tree method), developed by Da-Fei Feng and Doolittle in 1987. Progressive alignment builds up a final MSA by combining pairwise alignments, beginning with the most similar pair and progressing to the most distantly related. All progressive alignment methods require two stages: a first stage in which the relationships between the sequences are represented as a phylogenetic tree, called a guide tree, and a second step in which the MSA is built by adding the sequences sequentially to the growing MSA according to the guide tree. The initial guide tree is determined by an efficient clustering method such as neighbor-joining or the unweighted pair group method with arithmetic mean (UPGMA), and may use distances based on the number of identical two-letter sub-sequences (as in FASTA, rather than a dynamic programming alignment). Progressive alignments are not guaranteed to be globally optimal. The primary problem is that when errors are made at any stage in growing the MSA, these errors are then propagated through to the final result. Performance is also particularly bad when all of the sequences in the set are rather distantly related. Most modern progressive methods modify their scoring function with a secondary weighting function that assigns scaling factors to individual members of the query set in a nonlinear fashion based on their phylogenetic distance from their nearest neighbors. This corrects for the non-random selection of the sequences given to the alignment program. Progressive alignment methods are efficient enough to implement on a large scale for many (hundreds to thousands of) sequences. A popular progressive alignment method has been the Clustal family. ClustalW is used extensively for phylogenetic tree construction, in spite of the author's explicit warnings that unedited alignments should not be used in such studies, and as input for protein structure prediction by homology modeling.
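To illustrate the guide-tree stage just described, here is a small self-contained sketch (with assumed toy distance values) of UPGMA clustering; its merge order is the order in which a progressive method would join sequences and profiles.

```python
# Toy UPGMA: repeatedly merge the two clusters with the smallest
# average pairwise distance; the merge order is the guide-tree order.

def avg_dist(members_a, members_b, dist):
    pairs = [(x, y) for x in members_a for y in members_b]
    return sum(dist[tuple(sorted(p))] for p in pairs) / len(pairs)

def upgma_merge_order(names, dist):
    """dist maps a sorted name pair to a distance; returns merge steps."""
    clusters = {n: [n] for n in names}
    steps = []
    while len(clusters) > 1:
        a, b = min(
            ((x, y) for x in clusters for y in clusters if x < y),
            key=lambda p: avg_dist(clusters[p[0]], clusters[p[1]], dist),
        )
        steps.append((a, b))
        clusters[a + b] = clusters.pop(a) + clusters.pop(b)
    return steps

names = ["A", "B", "C", "D"]                     # assumed sequence labels
dist = {("A", "B"): 2, ("A", "C"): 6, ("A", "D"): 7,
        ("B", "C"): 6, ("B", "D"): 7, ("C", "D"): 3}
print(upgma_merge_order(names, dist))  # A and B merge first, then C and D
```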
The European Bioinformatics Institute (EMBL-EBI) announced that ClustalW2 would expire in August 2015. They recommend Clustal Omega, which performs alignments based on seeded guide trees and HMM profile–profile techniques for protein alignments. An alternative tool for progressive DNA alignments is multiple alignment using fast Fourier transform (MAFFT). Another common progressive alignment method, named T-Coffee, is slower than Clustal and its derivatives but generally produces more accurate alignments for distantly related sequence sets. T-Coffee calculates pairwise alignments by combining the direct alignment of the pair with indirect alignments that align each sequence of the pair to a third sequence. It uses the output from Clustal as well as another local alignment program, LALIGN, which finds multiple regions of local alignment between two sequences. The resulting alignment and phylogenetic tree are used as a guide to produce new and more accurate weighting factors. Because progressive methods are heuristics that are not guaranteed to converge to a global optimum, alignment quality can be difficult to evaluate and their true biological significance can be obscure. A semi-progressive method that improves alignment quality and does not use a lossy heuristic while running in polynomial time has been implemented in the program PSAlign. Iterative methods A set of methods to produce MSAs while reducing the errors inherent in progressive methods are classified as "iterative" because they work similarly to progressive methods but repeatedly realign the initial sequences as well as adding new sequences to the growing MSA. One reason progressive methods are so strongly dependent on a high-quality initial alignment is the fact that these alignments are always incorporated into the final result – that is, once a sequence has been aligned into the MSA, its alignment is not considered further. This approximation improves efficiency at the cost of accuracy. By contrast, iterative methods can return to previously calculated pairwise alignments or sub-MSAs incorporating subsets of the query sequence as a means of optimizing a general objective function such as finding a high-quality alignment score. A variety of subtly different iteration methods have been implemented and made available in software packages; reviews and comparisons have been useful but generally refrain from choosing a "best" technique. The software package PRRN/PRRP uses a hill-climbing algorithm to optimize its MSA alignment score and iteratively corrects both alignment weights and locally divergent or "gappy" regions of the growing MSA. PRRP performs best when refining an alignment previously constructed by a faster method. Another iterative program, DIALIGN, takes an unusual approach of focusing narrowly on local alignments between sub-segments or sequence motifs without introducing a gap penalty. The alignment of individual motifs is then achieved with a matrix representation similar to a dot-matrix plot in a pairwise alignment. An alternative method that uses fast local alignments as anchor points or seeds for a slower global-alignment procedure is implemented in the CHAOS/DIALIGN suite. A third popular iteration-based method, named MUSCLE (multiple sequence comparison by log-expectation), improves on progressive methods with a more accurate distance measure to assess the relatedness of two sequences.
The distance measure is updated between iteration stages (although, in its original form, MUSCLE contained only 2–3 iterations depending on whether refinement was enabled). Consensus methods Consensus methods attempt to find the optimal multiple sequence alignment given multiple different alignments of the same set of sequences. There are two commonly used consensus methods, M-COFFEE and MergeAlign. M-COFFEE uses multiple sequence alignments generated by seven different methods to generate consensus alignments. MergeAlign is capable of generating consensus alignments from any number of input alignments generated using different models of sequence evolution or different methods of multiple sequence alignment. The default option for MergeAlign is to infer a consensus alignment using alignments generated using 91 different models of protein sequence evolution. Hidden Markov models A hidden Markov model (HMM) is a probabilistic model that can assign likelihoods to all possible combinations of gaps, matches, and mismatches to determine the most likely MSA or set of possible MSAs. HMMs can produce a single highest-scoring output but can also generate a family of possible alignments that can then be evaluated for biological significance. HMMs can produce both global and local alignments. Although HMM-based methods have been developed relatively recently, they offer significant improvements in computational speed, especially for sequences that contain overlapping regions. Typical HMM-based methods work by representing an MSA as a form of directed acyclic graph known as a partial-order graph, which consists of a series of nodes representing possible entries in the columns of an MSA. In this representation a column that is absolutely conserved (that is, that all the sequences in the MSA share a particular character at a particular position) is coded as a single node with as many outgoing connections as there are possible characters in the next column of the alignment. In the terms of a typical hidden Markov model, the observed states are the individual alignment columns and the "hidden" states represent the presumed ancestral sequence from which the sequences in the query set are hypothesized to have descended. An efficient search variant of the dynamic programming method, named the Viterbi algorithm, is generally used to successively align the growing MSA to the next sequence in the query set to produce a new MSA. This is distinct from progressive alignment methods because the alignment of prior sequences is updated at each new sequence addition. However, like progressive methods, this technique can be influenced by the order in which the sequences in the query set are integrated into the alignment, especially when the sequences are distantly related. Several software programs are available in which variants of HMM-based methods have been implemented and which are noted for their scalability and efficiency, although properly using an HMM method is more complex than using more common progressive methods. The simplest is Partial-Order Alignment (POA); a similar but more general method is implemented in the Sequence Alignment and Modeling System (SAM) software package and in HMMER. SAM has been used as a source of alignments for protein structure prediction to participate in the Critical Assessment of Structure Prediction (CASP) structure prediction experiment and to develop a database of predicted proteins in the yeast species S. cerevisiae.
HHsearch is a software package for the detection of remotely related protein sequences based on the pairwise comparison of HMMs. A server running HHsearch (HHpred) was the fastest of 10 automatic structure prediction servers in the CASP7 and CASP8 structure prediction competitions. Phylogeny-aware methods Most multiple sequence alignment methods try to minimize the number of insertions/deletions (gaps) and, as a consequence, produce compact alignments. This causes several problems if the sequences to be aligned contain non-homologous regions, or if gaps are informative in a phylogeny analysis. These problems are common in newly produced sequences that are poorly annotated and may contain frame-shifts, wrong domains or non-homologous spliced exons. The first such method was developed in 2005 by Löytynoja and Goldman. The same authors released a software package called PRANK in 2008. PRANK improves alignments when insertions are present. Nevertheless, it runs slowly compared to progressive and/or iterative methods that have been refined over several years. In 2012, two new phylogeny-aware tools appeared. One is called PAGAN, which was developed by the same team as PRANK. The other is ProGraphMSA, developed by Szalkowski. Both software packages were developed independently but share common features, notably the use of graph algorithms to improve the recognition of non-homologous regions, and improvements in code that make both programs faster than PRANK. Motif finding Motif finding, also known as profile analysis, is a method of locating sequence motifs in global MSAs that is both a means of producing a better MSA and a means of producing a scoring matrix for use in searching other sequences for similar motifs. A variety of methods for isolating the motifs have been developed, but all are based on identifying short highly conserved patterns within the larger alignment and constructing a matrix similar to a substitution matrix that reflects the amino acid or nucleotide composition of each position in the putative motif. The alignment can then be refined using these matrices. In standard profile analysis, the matrix includes entries for each possible character as well as entries for gaps. Alternatively, statistical pattern-finding algorithms can identify motifs as a precursor to an MSA rather than as a derivation. In many cases when the query set contains only a small number of sequences or contains only highly related sequences, pseudocounts are added to normalize the distribution reflected in the scoring matrix. In particular, this corrects zero-probability entries in the matrix to values that are small but nonzero. Blocks analysis is a method of motif finding that restricts motifs to ungapped regions in the alignment. Blocks can be generated from an MSA or they can be extracted from unaligned sequences using a precalculated set of common motifs previously generated from known gene families. Block scoring generally relies on the spacing of high-frequency characters rather than on the calculation of an explicit substitution matrix. Statistical pattern-matching has been implemented using both the expectation-maximization algorithm and the Gibbs sampler. One of the most common motif-finding tools, named Multiple EM for Motif Elicitation (MEME), uses expectation maximization and hidden Markov methods to generate motifs that are then used as search tools by its companion MAST in the combined suite MEME/MAST.
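As a concrete illustration of the position-specific scoring matrices and pseudocounts described above, the sketch below builds a simple position weight matrix from a handful of aligned, ungapped DNA motif instances. The motif instances, the pseudocount of 1 (Laplace smoothing) and the uniform background are illustrative assumptions; real tools such as MEME estimate the motif instances rather than take them as given.

#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Toy set of aligned, ungapped motif instances (all the same length).
    std::vector<std::string> motifs = { "TACAA", "TACGC", "TACAC", "TACCC",
                                        "AACCC", "AATGC", "AATGC" };
    const std::string bases = "ACGT";
    const double pseudo = 1.0;       // Laplace pseudocount: no zero entries
    const double background = 0.25;  // uniform background base frequency
    const std::size_t L = motifs[0].size();
    for (std::size_t pos = 0; pos < L; ++pos) {
        std::printf("pos %zu:", pos);
        for (char b : bases) {
            double count = pseudo;
            for (const auto& m : motifs) if (m[pos] == b) count += 1.0;
            double p = count / (motifs.size() + 4.0 * pseudo);
            // Log-odds score against the background, as in a PSSM column.
            std::printf("  %c % .2f", b, std::log2(p / background));
        }
        std::printf("\n");
    }
}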
Non-coding multiple sequence alignment Non-coding DNA regions, especially transcription factor binding sites (TFBSs), are conserved, but not necessarily evolutionarily related, and may have converged from non-common ancestors. Thus, the assumptions used to align protein sequences and DNA coding regions are inherently different from those that hold for TFBS sequences. Although it is meaningful to align DNA coding regions for homologous sequences using mutation operators, alignment of binding site sequences for the same transcription factor cannot rely on evolutionarily related mutation operations. Similarly, the evolutionary operator of point mutations can be used to define an edit distance for coding sequences, but this has little meaning for TFBS sequences because any sequence variation has to maintain a certain level of specificity for the binding site to function. This becomes especially important when trying to align known TFBS sequences to build supervised models to predict unknown locations of the same TFBS. Hence, multiple sequence alignment methods need to adjust the underlying evolutionary hypothesis and the operators used, as in published work that incorporates neighbouring-base thermodynamic information to align binding sites by searching for the lowest-energy thermodynamic alignment that conserves binding-site specificity. Optimization Genetic algorithms and simulated annealing Two standard optimization techniques in computer science – genetic algorithms and simulated annealing, both of which were inspired by, but do not directly reproduce, physical processes – have also been used in an attempt to more efficiently produce quality MSAs. One such technique, genetic algorithms, has been used for MSA production in an attempt to broadly simulate the hypothesized evolutionary process that gave rise to the divergence in the query set. The method works by breaking a series of possible MSAs into fragments and repeatedly rearranging those fragments with the introduction of gaps at varying positions. A general objective function is optimized during the simulation, most generally the "sum of pairs" maximization function introduced in dynamic programming-based MSA methods. This technique has been implemented for protein sequences in the software program SAGA (Sequence Alignment by Genetic Algorithm); its equivalent for RNA is called RAGA. In the technique of simulated annealing, an existing MSA produced by another method is refined by a series of rearrangements designed to find better regions of alignment space than the one the input alignment already occupies. Like the genetic algorithm method, simulated annealing maximizes an objective function like the sum-of-pairs function. Simulated annealing uses a metaphorical "temperature factor" that determines the rate at which rearrangements proceed and the likelihood of each rearrangement; typical usage alternates periods of high rearrangement rates with relatively low likelihood (to explore more distant regions of alignment space) with periods of lower rates and higher likelihoods to more thoroughly explore local minima near the newly "colonized" regions. This approach has been implemented in the program MSASA (Multiple Sequence Alignment by Simulated Annealing). Mathematical programming and exact solution algorithms Mathematical programming models, and in particular mixed integer programming models, are another approach to solve MSA problems. The advantage of such optimization models is that they can be used to find the optimal MSA solution more efficiently compared to the traditional DP approach.
This is due, in part, to the applicability of decomposition techniques for mathematical programs, where the MSA model is decomposed into smaller parts and iteratively solved until the optimal solution is found. Example algorithms used to solve mixed integer programming models of MSA include branch and price and Benders decomposition. Although exact approaches are computationally slow compared to heuristic algorithms for MSA, they are guaranteed to reach the optimal solution eventually, even for large-size problems. Simulated quantum computing In January 2017, D-Wave Systems announced that its qbsolv open-source quantum computing software had been successfully used to find a faster solution to the MSA problem. Alignment visualization and quality control The necessary use of heuristics for multiple alignment means that for an arbitrary set of proteins, there is always a good chance that an alignment will contain errors. For example, an evaluation of several leading alignment programs using the BAliBase benchmark found that at least 24% of all pairs of aligned amino acids were incorrectly aligned. These errors can arise because of unique insertions into one or more regions of sequences, or through some more complex evolutionary process leading to proteins that do not align easily by sequence alone. As the number of sequences and their divergence increase, many more errors will be made simply because of the heuristic nature of MSA algorithms. Multiple sequence alignment viewers enable alignments to be visually reviewed, often by inspecting the quality of alignment for annotated functional sites on two or more sequences. Many also enable the alignment to be edited to correct these (usually minor) errors, in order to obtain an optimal 'curated' alignment suitable for use in phylogenetic analysis or comparative modeling. However, as the number of sequences increases, and especially in genome-wide studies that involve many MSAs, it is impossible to manually curate all alignments. Furthermore, manual curation is subjective. And finally, even the best expert cannot confidently align the more ambiguous cases of highly diverged sequences. In such cases it is common practice to use automatic procedures to exclude unreliably aligned regions from the MSA. For the purpose of phylogeny reconstruction (see below), the Gblocks program is widely used to remove alignment blocks suspected of low quality, according to various cutoffs on the number of gapped sequences in alignment columns. However, these criteria may excessively filter out regions with insertion/deletion events that may still be aligned reliably, and these regions might be desirable for other purposes such as detection of positive selection. A few alignment algorithms output site-specific scores that allow the selection of high-confidence regions. Such a service was first offered by the SOAP program, which tests the robustness of each column to perturbation in the parameters of the popular alignment program CLUSTALW. The T-Coffee program uses a library of alignments in the construction of the final MSA, and its output MSA is colored according to confidence scores that reflect the agreement between different alignments in the library regarding each aligned residue. Its extension, Transitive Consistency Score (TCS), uses T-Coffee libraries of pairwise alignments to evaluate any third-party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy.
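A heavily simplified sketch of the gap-cutoff idea used by programs such as Gblocks: drop every alignment column whose fraction of gap characters exceeds a threshold. Real tools also apply conservation and minimum-block-length criteria; the toy alignment and the cutoff value here are illustrative assumptions.

#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Toy protein alignment, one string per sequence, '-' marks a gap.
    std::vector<std::string> aln = { "MK-LVI-A", "MKALV--A", "MK-LVVQA" };
    const double maxGapFrac = 0.34;  // allow at most one gap in three rows
    std::vector<std::string> filtered(aln.size());
    for (std::size_t col = 0; col < aln[0].size(); ++col) {
        std::size_t gaps = 0;
        for (const auto& row : aln) if (row[col] == '-') ++gaps;
        // Keep the column only if its gap fraction is within the cutoff.
        if (static_cast<double>(gaps) / aln.size() <= maxGapFrac)
            for (std::size_t r = 0; r < aln.size(); ++r)
                filtered[r] += aln[r][col];
    }
    for (const auto& row : filtered) std::printf("%s\n", row.c_str());
}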
Another alignment program that can output an MSA with confidence scores is FSA, which uses a statistical model that allows calculation of the uncertainty in the alignment. The HoT (Heads-Or-Tails) score can be used as a measure of site-specific alignment uncertainty due to the existence of multiple co-optimal solutions. The GUIDANCE program calculates a similar site-specific confidence measure based on the robustness of the alignment to uncertainty in the guide tree that is used in progressive alignment programs. An alternative, more statistically justified approach to assess alignment uncertainty is the use of probabilistic evolutionary models for joint estimation of phylogeny and alignment. A Bayesian approach allows calculation of posterior probabilities of estimated phylogeny and alignment, which is a measure of the confidence in these estimates. In this case, a posterior probability can be calculated for each site in the alignment. Such an approach was implemented in the program BAli-Phy. There are free programs available for visualization of multiple sequence alignments, for example Jalview and UGENE. Phylogenetic use Multiple sequence alignments can be used to create a phylogenetic tree. This is possible for two reasons. The first is that functional domains known in annotated sequences can be used for alignment in non-annotated sequences. The second is that conserved regions known to be functionally important can be found. This makes it possible for multiple sequence alignments to be used to analyze and find evolutionary relationships through homology between sequences. Point mutations and insertion or deletion events (called indels) can be detected. Multiple sequence alignments can also be used to identify functionally important sites, such as binding sites, active sites, or sites corresponding to other key functions, by locating conserved domains. When looking at multiple sequence alignments, it is useful to consider different aspects of the sequences being compared. These aspects include identity, similarity, and homology. Identity means that the sequences have identical residues at their respective positions. Similarity, on the other hand, means that the sequences being compared have quantitatively similar residues at corresponding positions. For example, in terms of nucleotide sequences, pyrimidines are considered similar to each other, as are purines. Similarity ultimately leads to homology, in that the more similar sequences are, the closer they are to being homologous. This similarity in sequences can then go on to help find common ancestry. See also Alignment-free sequence analysis Cladistics Generalized tree alignment Multiple sequence alignment viewers PANDIT, a biological database covering protein domains Phylogenetics Sequence alignment software Structural alignment References Survey articles External links ExPASy sequence alignment tools Archived Multiple Alignment Resource Page – from the Virtual School of Natural Sciences Tools for Multiple Alignments – from Pôle Bioinformatique Lyonnais An entry point to clustal servers and information An entry point to the main T-Coffee servers An entry point to the main MergeAlign server and information European Bioinformatics Institute servers: ClustalW2 – general purpose multiple sequence alignment program for DNA or proteins. Muscle – MUltiple Sequence Comparison by Log-Expectation T-coffee – multiple sequence alignment.
MAFFT – Multiple Alignment using Fast Fourier Transform KALIGN – a fast and accurate multiple sequence alignment algorithm. Lecture notes, tutorials, and courses Multiple sequence alignment lectures – from the Max Planck Institute for Molecular Genetics Lecture Notes and practical exercises on multiple sequence alignments at the European Molecular Biology Laboratory (EMBL) Molecular Bioinformatics Lecture Notes Molecular Evolution and Bioinformatics Lecture Notes Bioinformatics Computational phylogenetics Markov models NP-complete problems
Multiple sequence alignment
[ "Mathematics", "Engineering", "Biology" ]
5,325
[ "Genetics techniques", "Biological engineering", "Computational phylogenetics", "Computational problems", "Bioinformatics", "Phylogenetics", "Mathematical problems", "NP-complete problems" ]
4,066,459
https://en.wikipedia.org/wiki/UnixWorld
UnixWorld (Unixworld: McGraw-Hill's magazine of open systems computing.) is a defunct magazine about Unix systems, published from May 1984 until December 1995. References Defunct computer magazines published in the United States Magazines established in 1984 Magazines disestablished in 1995 Magazines published in California Unix history
UnixWorld
[ "Technology" ]
61
[ "Computing stubs", "Computer magazine stubs" ]
4,066,625
https://en.wikipedia.org/wiki/Michael%20Howard%20%28Microsoft%29
Michael Howard (born 1965) is a software security expert from Microsoft. He is the author of several computer security books, the most famous being Writing Secure Code. Michael Howard is a frequent speaker at security-related conferences and frequently publishes articles on the subject. Books Michael Howard, David LeBlanc: Writing Secure Code (2nd edition). Michael Howard, John Viega, David LeBlanc: The 19 Deadly Sins of Software Security. Michael Howard: Designing Secure Web-Based Applications for Microsoft Windows 2000. External links Michael Howard's Blog Microsoft employees Living people Writers about computer security 1965 births Place of birth missing (living people)
Michael Howard (Microsoft)
[ "Technology" ]
131
[ "Computer security stubs", "Computing stubs" ]
4,066,767
https://en.wikipedia.org/wiki/Industrial%20water%20treatment
There are many uses of water in industry and, in most cases, the used water also needs treatment to render it fit for re-use or disposal. Raw water entering an industrial plant often needs treatment to meet tight quality specifications to be of use in specific industrial processes. Industrial water treatment encompasses all these aspects, which include industrial wastewater treatment, boiler water treatment and cooling water treatment. Overview Water treatment is used to optimize most water-based industrial processes, such as heating, cooling, processing, cleaning, and rinsing, so that operating costs and risks are reduced. Poor water treatment lets water interact with the surfaces of pipes and vessels which contain it. Steam boilers can scale up or corrode, and these deposits will mean more fuel is needed to heat the same amount of water. Cooling towers can also scale up and corrode, but left untreated, the warm, dirty water they can contain will encourage bacteria to grow, and Legionnaires' disease can be the fatal consequence. Water treatment is also used to improve the quality of water contacting the manufactured product (e.g., semiconductors) and/or can be part of the product (e.g., beverages, pharmaceuticals). In these instances, poor water treatment can cause defective products. In many cases, effluent water from one process can be suitable for reuse in another process if given suitable treatment. This can reduce costs by lowering charges for water consumption, reduce the costs of effluent disposal because of reduced volume, and lower energy costs due to the recovery of heat in recycled wastewater. Objectives Industrial water treatment seeks to manage four main problem areas: scaling, corrosion, microbiological activity and disposal of residual wastewater. Boilers do not have many problems with microbes as the high temperatures prevent their growth. Scaling occurs when the chemistry and temperature conditions are such that the dissolved mineral salts in the water are caused to precipitate and form solid deposits. These can be mobile, like a fine silt, or can build up in layers on the metal surfaces of the systems. Scale is a problem because it insulates, and heat exchange becomes less efficient as the scale thickens, which wastes energy. Scale also narrows pipe widths and therefore increases the energy used in pumping the water through the pipes. Corrosion occurs when the parent metal oxidises (as iron rusts, for example) and gradually the integrity of the plant equipment is compromised. The corrosion products can cause similar problems to scale, but corrosion can also lead to leaks, which in a pressurised system can lead to catastrophic failures. Microbes can thrive in untreated cooling water, which is warm and sometimes full of organic nutrients, as wet cooling towers are very efficient air scrubbers. Dust, flies, grass, fungal spores, and others collect in the water and create a sort of "microbial soup" if not treated with biocides. Many outbreaks of the deadly Legionnaires' disease have been traced to unmanaged cooling towers, and the UK has had stringent health and safety guidelines concerning cooling tower operations for many years, as have governmental agencies in other countries. Certain processes, like tanning and paper making, use heavy metals such as chromium. Although most of the metal is consumed in the process, some remains and is carried away with the water. Chromium is toxic when consumed in drinking water, so even the smallest residual amount must be removed.
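The scaling tendency described above is often screened with a saturation index. The sketch below evaluates one widely used textbook approximation of the Langelier Saturation Index (LSI): positive values suggest a tendency to deposit CaCO3 scale, negative values suggest corrosive, scale-dissolving water. The constants follow the common approximation and the sample water analysis is an illustrative assumption; treatment programs use more refined indices.

#include <cmath>
#include <cstdio>

int main() {
    // Sample water analysis (assumed values for illustration).
    double pH = 7.5;
    double tempC = 25.0;       // water temperature, deg C
    double tds = 300.0;        // total dissolved solids, mg/L
    double calcium = 120.0;    // calcium hardness as CaCO3, mg/L
    double alkalinity = 100.0; // total alkalinity as CaCO3, mg/L

    // Common approximation of the saturation pH (pHs).
    double A = (std::log10(tds) - 1.0) / 10.0;
    double B = -13.12 * std::log10(tempC + 273.0) + 34.55;
    double C = std::log10(calcium) - 0.4;
    double D = std::log10(alkalinity);
    double pHs = (9.3 + A + B) - (C + D);
    std::printf("pHs = %.2f, LSI = %.2f\n", pHs, pH - pHs);
}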
Disposal of residual industrial wastewaters Disposal of residual wastewaters from an industrial plant is a difficult and costly problem. Most petroleum refineries, chemical and petrochemical plants have onsite facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the local and/or national regulations regarding disposal of wastewaters into sewage treatment plants or into rivers, lakes or oceans. Processes Two of the main processes of industrial water treatment are boiler water treatment and cooling water treatment. A lack of proper water treatment can lead to the reaction of solids and bacteria within pipe work and boiler housing. Steam boilers can suffer from scale or corrosion when left untreated. Scale deposits can lead to weak and dangerous machinery, while additional fuel is required to heat the same level of water because of the rise in thermal resistance. Poor quality dirty water can become a breeding ground for bacteria such as Legionella, causing a risk to public health. Corrosion in low pressure boilers can be caused by dissolved oxygen, acidity and excessive alkalinity. Water treatment therefore should remove the dissolved oxygen and maintain the boiler water with the appropriate pH and alkalinity levels. Without effective water treatment, a cooling water system can suffer from scale formation, corrosion and fouling and may become a breeding ground for harmful bacteria. This reduces efficiency, shortens plant life and makes operations unreliable and unsafe. Boiler water treatment Boiler water treatment is a type of industrial water treatment focused on removal or chemical modification of substances potentially damaging to the boiler. Varying types of treatment are used at different locations to avoid scale, corrosion, or foaming. External treatment of raw water supplies intended for use within a boiler is focused on removal of impurities before they reach the boiler. Internal treatment within the boiler is focused on limiting the tendency of water to dissolve the boiler, and maintaining impurities in forms least likely to cause trouble before they can be removed from the boiler in boiler blowdown. A deaerator is used to reduce oxygen and nitrogen in boiler feedwater applications. Cooling water treatment Water cooling is a method of heat removal from components of machinery and industrial equipment. Water may be a more efficient heat transfer fluid where air cooling is ineffective. In most occupied climates water offers the thermal conductivity advantages of a liquid with unusually high specific heat capacity and the option of evaporative cooling. Low cost often allows rejection as waste after a single use, but recycling coolant loops may be pressurized to eliminate evaporative loss and offer greater portability and improved cleanliness. Unpressurized recycling coolant loops using evaporative cooling require a blowdown waste stream to remove impurities concentrated by evaporation. Disadvantages of water cooling systems include accelerated corrosion and maintenance requirements to prevent heat transfer reductions from biofouling or scale formation. Chemical additives to reduce these disadvantages may introduce toxicity to wastewater. Water cooling is commonly used for cooling automobile internal combustion engines and large industrial facilities such as nuclear and steam electric power plants, hydroelectric generators, petroleum refineries and chemical plants.
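The blowdown stream mentioned above is usually sized with a simple mass balance. "Cycles of concentration" (COC) is the ratio of dissolved-solids concentration in the recirculating water to that in the make-up water; at steady state, blowdown = evaporation / (COC - 1) and make-up = evaporation + blowdown (drift losses ignored here). The flow figures in the sketch are illustrative assumptions.

#include <cstdio>

int main() {
    double evaporation = 10.0;  // m^3/h lost to evaporation (assumed)
    for (double coc = 2.0; coc <= 6.0; coc += 1.0) {
        // Steady-state solids balance: makeup*Cm = blowdown*Cr, Cr/Cm = COC.
        double blowdown = evaporation / (coc - 1.0);
        double makeup = evaporation + blowdown;
        std::printf("COC %.0f: blowdown %5.2f m^3/h, make-up %5.2f m^3/h\n",
                    coc, blowdown, makeup);
    }
}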
Technologies Advancements in water treatment technology have affected all areas of industrial water treatment. Although mechanical filtration, such as reverse osmosis, is widely employed to filter contaminants, other technologies including the use of ozone generators, wastewater evaporation, electrodeionization and bioremediation are also able to address the challenges of industrial water treatment. Ozone treatment is a process in which ozone gas is injected into waste streams as a means to reduce or eliminate the need for water treatment chemicals or sanitizers that may be hazardous, including chlorine. Chemical treatment Chemical treatments rely on the addition of chemicals to make industrial water suitable for use or discharge. These include processes like chemical precipitation, chemical disinfection, advanced oxidation processes (AOPs), ion exchange, and chemical neutralization. AOPs are attractive in the treatment of hazardous wastewater due to their high oxidation potential and degradation performance. In AOPs, oxidants like Fenton's reagent, ozone or hydrogen peroxide are introduced into the wastewater to degrade harmful substances in industrial water for discharge. Physical treatment Physical treatment involves the separation of solids from industrial wastewater, either through filtration or dissolved air flotation. Filtration uses membranes or filters, such as sand filters, to achieve solid-liquid separation. In dissolved air flotation, pressurized air is pumped into the wastewater. The pressurized air then forms small bubbles, which adhere to the suspended matter, causing it to float to the surface of the water, where it can be removed by a skimming device or an overflow. Biological treatment Biological treatment is needed to treat wastewater containing biodegradable elements. It is commonly used in municipal and industrial wastewater management facilities and usually consists of adding common bacteria and other microbes, mostly environmentally friendly, to treat the water. It is a sustainable practice that has been successful for over a century. Slow sand filters use a biological process to purify raw water to produce potable water. They work by using a complex biological film that grows naturally on the surface of sand. This gelatinous biofilm, called the hypogeal layer or Schmutzdecke, is located in the upper few millimetres of the sand layer. The surface biofilm purifies the water as it flows through the layer, while the underlying sand provides a support medium for the biological treatment layer. The Schmutzdecke consists of bacteria, fungi, protozoa, rotifera and a range of aquatic insect larvae. As the biofilm ages, more algae may develop, and larger aquatic organisms including bryozoa, snails and annelid worms may be present. As water passes through the hypogeal layer, particles of matter are trapped in the mucilaginous matrix and soluble organic material is adsorbed. The contaminants are metabolised by the bacteria, fungi and protozoa. Slow sand filters are typically 1–2 metres deep, and have a hydraulic loading rate of 0.2–0.4 cubic metres per square metre per hour. Filters lose their performance as the biofilm thickens and reduces the rate of flow. The filter is refurbished by removing the biofilm and a thin upper layer of sand. Water is decanted back into the filter and re-circulated to enable a new biofilm to develop. Alternatively, wet harrowing involves stirring the sand and flushing the biolayer through for disposal.
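The hydraulic loading rate quoted above translates directly into a sizing rule: the required filter bed area is the design flow divided by the loading rate. The demand figure in the sketch below is an illustrative assumption.

#include <cstdio>

int main() {
    double demand = 50.0;  // m^3/h of water to be treated (assumed)
    // Sweep the 0.2-0.4 m^3/(m^2*h) loading range quoted for slow sand filters.
    for (double rate = 0.2; rate <= 0.401; rate += 0.1) {
        std::printf("loading %.1f m^3/(m^2*h) -> bed area %.0f m^2\n",
                    rate, demand / rate);
    }
}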
Ultraviolet irradiation Ultraviolet (UV) disinfection technology has become a common water treatment technology over the past two decades due to its ability to provide disinfected water without the use of harmful chemicals. The UV-C portion comprises wavelengths from 200 nm to 280 nm and is the range used for disinfection. UV-C photons penetrate cells and damage the nucleic acid, rendering them incapable of reproduction, or microbiologically inactive. Process water treatment technology Process water is water that is used in a variety of manufacturing operations, such as coating and plating, rinsing and spraying, and washing. Municipal and ground water often contain dissolved minerals that make them unsuitable for these processes because they would affect product quality and/or increase manufacturing costs. A proper incoming water treatment system can remedy these issues and create the right water conditions for specific industrial processes. See also Water treatment Wastewater treatment Wastewater quality indicators Cooling tower Fouling Pumpable ice technology References Industrial processes Water treatment
Industrial water treatment
[ "Chemistry" ]
2,204
[ "Water treatment", "Industrial water treatment" ]
4,066,812
https://en.wikipedia.org/wiki/Boiler%20water
Boiler water is liquid water within a boiler, or in associated piping, pumps and other equipment, that is intended for evaporation into steam. The term may also be applied to raw water intended for use in boilers, treated boiler feedwater, steam condensate being returned to a boiler, or boiler blowdown being removed from a boiler. Early practice Impurities in water will leave solid deposits as steam evaporates. These solid deposits thermally insulate heat exchange surfaces, initially decreasing the rate of steam generation, and potentially causing boiler metals to reach failure temperatures. Boiler explosions were not uncommon until surviving boiler operators learned how to periodically clean their boilers. Some solids could be removed by cooling the boiler so differential thermal expansion caused brittle crystalline solids to crack and flake off metal boiler surfaces. Other solids were removed by acid washing or mechanical scouring. Various rates of boiler blowdown could reduce the frequency of cleaning, but efficient operation and maintenance of individual boilers were determined by trial and error until chemists devised means of measuring and adjusting water quality to minimize cleaning requirements. Boiler water treatment Boiler water treatment is a type of industrial water treatment focused on removal or chemical modification of substances potentially damaging to the boiler. Varying types of treatment are used at different locations to avoid scale, corrosion, or foaming. External treatment of raw water supplies intended for use within a boiler is focused on removal of impurities before they reach the boiler. Internal treatment within the boiler is focused on limiting the tendency of water to dissolve the boiler, and maintaining impurities in forms least likely to cause trouble before they can be removed from the boiler in boiler blowdown. Within the boiler At the elevated temperatures and pressures within a boiler, water exhibits different physical and chemical properties than those observed at room temperature and atmospheric pressure. Chemicals may be added to maintain pH levels minimizing water solubility of boiler materials while allowing efficient action of other chemicals added to prevent foaming, to consume oxygen before it corrodes the boiler, to precipitate dissolved solids before they form scale on steam-generating surfaces, and to remove those precipitates from the vicinity of the steam-generating surfaces. Oxygen scavengers Sodium sulphite or hydrazine may be used to maintain reducing conditions within the boiler. Sulphite is less desirable in boilers operating at pressures above , because sulfates formed by combination with oxygen may form sulfate scale or decompose into corrosive sulfur dioxide or hydrogen sulfide at elevated temperatures. Excess hydrazine may evaporate with steam to provide corrosion protection by neutralizing carbon dioxide in the steam condensate system, but it may also decompose into ammonia, which will attack copper alloys. Products based on filming amines such as Helamin may be preferred for corrosion protection of condensate systems with copper alloys. Coagulation Boilers operating at pressures less than may use unsoftened feedwater with the addition of sodium carbonate or sodium hydroxide to maintain alkaline conditions to precipitate calcium carbonate, magnesium hydroxide and magnesium silicate.
Hard water treated this way causes a fairly high concentration of suspended solid particles within the boiler to serve as precipitation nuclei, preventing later deposition of calcium sulfate scale. Natural organic materials like starches, tannins and lignins may be added to control crystal growth and disperse precipitates. The soft sludge of precipitates and organic materials accumulates in quiescent portions of the boiler to be removed during bottom blowdown. Phosphates Boiler sludge concentrations created by coagulation treatment may be avoided by sodium phosphate treatment when water hardness is less than 60 mg/L. With adequate alkalinity, addition of sodium phosphate produces an insoluble precipitate of hydroxyapatite with magnesium hydroxide and magnesium and calcium silicates. Lignin may be processed for high temperature stability to control calcium phosphate scale and magnetic iron oxide deposits. Acceptable phosphate concentrations decrease from 140 mg/L in low pressure boilers to less than 40 mg/L at pressures above . Recommended alkalinity similarly decreases from 700 mg/L to 200 mg/L over the same pressure range. Foaming problems are more common with high alkalinity. Coordinated control of pH and phosphates attempts to limit caustic corrosion occurring from concentrations of hydroxyl ions under porous scale on steam generating surfaces within the boiler. High pressure boilers using demineralized water are most vulnerable to caustic corrosion. Hydrolysis of trisodium phosphate acts as a pH buffer, in equilibrium with disodium phosphate and sodium hydroxide. Chelants Chelants like ethylenediaminetetraacetic acid (EDTA) or nitrilotriacetic acid (NTA) form complex ions with calcium and magnesium. Solubility of these complex ions may reduce blowdown requirements if anionic carboxylate polymers are added to control scale formation. Potential decomposition at high temperatures limits chelant use to boilers operating at pressures less than . Decomposition products may cause metal corrosion in areas of stress and high temperature. Feedwater Many large boilers, including those used in thermal power stations, recycle condensed steam for re-use within the boiler. Steam condensate is distilled water, but it may contain dissolved gases. A deaerator is often used to convert condensate to feedwater by removing potentially damaging gases including oxygen, carbon dioxide, ammonia and hydrogen sulfide. Inclusion of a polisher (an ion-exchange vessel) helps to maintain water purity, and in particular protect the boiler from a condenser tube leak. Make-up water All boilers lose some water in steam leaks, and some is intentionally wasted as boiler blowdown to remove impurities accumulating within the boiler. Steam locomotives and boilers generating steam for use in direct contact with contaminating materials may not recycle condensed steam. Replacement water is required to continue steam production. Make-up water is initially treated to remove floating and suspended materials. Hard water intended for low-pressure boilers may be softened by substituting sodium for divalent cations of dissolved calcium and magnesium most likely to cause carbonate and sulfate scale. High-pressure boilers typically require water demineralized by reverse osmosis, distillation or ion-exchange. See also Dealkalization of water Sources References Boilers Water
Boiler water
[ "Chemistry", "Environmental_science" ]
1,285
[ "Water", "Boilers", "Hydrology", "Pressure vessels" ]
4,066,860
https://en.wikipedia.org/wiki/Photosynthetic%20efficiency
The photosynthetic efficiency is the fraction of light energy converted into chemical energy during photosynthesis in green plants and algae. Photosynthesis can be described by the simplified chemical reaction 6 H2O + 6 CO2 + energy → C6H12O6 + 6 O2 where C6H12O6 is glucose (which is subsequently transformed into other sugars, starches, cellulose, lignin, and so forth). The value of the photosynthetic efficiency is dependent on how light energy is defined – it depends on whether we count only the light that is absorbed, and on what kind of light is used (see Photosynthetically active radiation). It takes eight (or perhaps ten or more) photons to fix one molecule of CO2. The Gibbs free energy for converting a mole of CO2 to glucose is 114 kcal, whereas eight moles of photons of wavelength 600 nm contain 381 kcal, giving a nominal efficiency of 30%. However, photosynthesis can occur with light up to wavelength 720 nm so long as there is also light at wavelengths below 680 nm to keep Photosystem II operating (see Chlorophyll). Using longer wavelengths means less light energy is needed for the same number of photons and therefore for the same amount of photosynthesis. For actual sunlight, where only 45% of the light is in the photosynthetically active wavelength range, the theoretical maximum efficiency of solar energy conversion is approximately 11%. In actuality, however, plants do not absorb all incoming sunlight (due to reflection, respiration requirements of photosynthesis and the need for optimal solar radiation levels) and do not convert all harvested energy into biomass, which results in a maximum overall photosynthetic efficiency of 3 to 6% of total solar radiation. If photosynthesis is inefficient, excess light energy must be dissipated to avoid damaging the photosynthetic apparatus. Energy can be dissipated as heat (non-photochemical quenching), or emitted as chlorophyll fluorescence. Typical efficiencies Plants Quoted values sunlight-to-biomass efficiency The following is a breakdown of the energetics of the photosynthesis process from Photosynthesis by Hall and Rao: Starting with the solar spectrum falling on a leaf, 47% is lost to photons outside the 400–700 nm active range (chlorophyll uses photons between 400 and 700 nm, extracting the energy of one 700 nm photon from each one); 30% of the in-band photons are lost due to incomplete absorption or photons hitting components other than chloroplasts; 24% of the absorbed photon energy is lost by degrading short-wavelength photons to the 700 nm energy level; 68% of the used energy is lost in conversion into d-glucose; 35–45% of the glucose is consumed by the leaf in the processes of dark respiration and photorespiration. Stated another way: 100% sunlight → non-bioavailable photons waste is 47%, leaving 53% (in the 400–700 nm range) → 30% of photons are lost due to incomplete absorption, leaving 37% (absorbed photon energy) → 24% is lost due to wavelength-mismatch degradation to 700 nm energy, leaving 28.2% (sunlight energy collected by chlorophyll) → 68% is lost in conversion of ATP and NADPH to d-glucose, leaving 9% (collected as sugar) → 35–40% of sugar is recycled/consumed by the leaf in dark respiration and photorespiration, leaving 5.4% net leaf efficiency. Many plants lose much of the remaining energy on growing roots. Most crop plants store ~0.25% to 0.5% of the sunlight in the product (corn kernels, potato starch, etc.).
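The cascade above is just a chain of multiplications, which the short sketch below reproduces; each factor is (1 - fractional loss) from the Hall and Rao breakdown, and 40% is taken as a representative value for the 35-45% respiration loss.

#include <cstdio>

int main() {
    double e = 1.0;
    e *= 1.0 - 0.47;  // photons outside the 400-700 nm range -> 53%
    e *= 1.0 - 0.30;  // incomplete absorption                -> 37.1%
    e *= 1.0 - 0.24;  // degradation to 700 nm photon energy  -> 28.2%
    e *= 1.0 - 0.68;  // conversion to d-glucose              -> 9.0%
    e *= 1.0 - 0.40;  // dark respiration and photorespiration
    std::printf("net leaf efficiency ~ %.1f%%\n", e * 100.0);  // ~5.4%
}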
Photosynthesis increases linearly with light intensity at low intensity, but at higher intensity this is no longer the case (see Photosynthesis-irradiance curve). Above about 10,000 lux or ~100 watts/square meter the rate no longer increases. Thus, most plants can only use ~10% of full mid-day sunlight intensity. This dramatically reduces average achieved photosynthetic efficiency in fields compared to peak laboratory results. However, real plants (as opposed to laboratory test samples) have many redundant, randomly oriented leaves. This helps to keep the average illumination of each leaf well below the mid-day peak, enabling the plant to achieve a result closer to the expected laboratory test results using limited illumination. Only if the light intensity is above a plant-specific value, called the compensation point, does the plant assimilate more carbon and release more oxygen by photosynthesis than it consumes by cellular respiration for its own current energy demand. Photosynthesis measurement systems are not designed to directly measure the amount of light absorbed by the leaf. Nevertheless, the light response curves that such systems produce do allow comparisons in photosynthetic efficiency between plants. Algae and other monocellular organisms A 2010 study by the University of Maryland showed that photosynthesizing cyanobacteria are a significant species in the global carbon cycle, accounting for 20–30% of Earth's photosynthetic productivity and converting solar energy into biomass-stored chemical energy at the rate of ~450 TW. Some pigments, such as B-phycoerythrin, that are mostly found in red algae and cyanobacteria have much higher light-harvesting efficiency compared to that of other plants. Such organisms are potential candidates for biomimicry technology to improve solar panel design. Efficiencies of various biofuel crops Popular choices for plant biofuels include: oil palm, soybean, castor oil, sunflower oil, safflower oil, corn ethanol, and sugar cane ethanol. A 2008 Hawaiian oil palm plantation projection stated: "algae could yield from 5,000-10,000 gallons of oil per acre yearly, compared to 250-350 gallons for jatropha and 600-800 gallons for palm oil". That comes to 26 kW per acre or 7 W/m2. Typical insolation in Hawaii is around 230 W/m2, so this amounts to converting about 3% of the incident solar energy to chemical fuel. Total photosynthetic efficiency would include more than just the biodiesel oil, so this number is a lower bound. Contrast this with a typical photovoltaic installation, which would produce an average of roughly 22 W/m2 (roughly 10% of the average insolation) throughout the year. Furthermore, the photovoltaic panels would produce electricity, which is a high-quality form of energy, whereas converting the biodiesel into mechanical energy entails the loss of a large portion of the energy. On the other hand, a liquid fuel is much more convenient for a vehicle than electricity, which has to be stored in heavy, expensive batteries. For ethanol fuel in Brazil, one calculation gives: "Per hectare per year, the biomass produced corresponds to 0.27 TJ. This is equivalent to 0.86 W/m2. Assuming an average insolation of 225 W/m2, the photosynthetic efficiency of sugarcane is 0.38%."
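The unit conversions behind these comparisons can be sketched as follows: a yearly oil yield per acre is converted into average power per square metre and divided by insolation to get an efficiency. The assumed oil energy density (~35 MJ/L) is a round figure, so the results are order-of-magnitude only; the sugarcane check at the end reproduces the 0.86 W/m2 and 0.38% quoted above.

#include <cstdio>

int main() {
    const double litresPerGallon = 3.785;
    const double m2PerAcre = 4046.9;
    const double secondsPerYear = 3.156e7;
    const double mjPerLitre = 35.0;     // assumed plant-oil energy density
    const double insolation = 230.0;    // W/m^2, Hawaii figure quoted above

    double gallonsPerAcreYear = 600.0;  // palm oil, low end of quoted range
    double wattsPerAcre = gallonsPerAcreYear * litresPerGallon
                          * mjPerLitre * 1e6 / secondsPerYear;
    double wPerM2 = wattsPerAcre / m2PerAcre;
    std::printf("%.0f gal/acre/yr -> %.2f W/m^2 -> %.2f%% of insolation\n",
                gallonsPerAcreYear, wPerM2, 100.0 * wPerM2 / insolation);

    // Sugarcane: 0.27 TJ per hectare per year against 225 W/m^2 insolation.
    double sugarcane = 0.27e12 / secondsPerYear / 10000.0;  // W/m^2
    std::printf("sugarcane: %.2f W/m^2 -> %.2f%%\n",
                sugarcane, 100.0 * sugarcane / 225.0);
}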
Sucrose accounts for little more than 30% of the chemical energy stored in the mature plant; 35% is in the leaves and stem tips, which are left in the fields during harvest, and 35% is in the fibrous material (bagasse) left over from pressing. C3 vs. C4 and CAM plants C3 plants use the Calvin cycle to fix carbon. C4 plants use a modified Calvin cycle in which they separate Ribulose-1,5-bisphosphate carboxylase oxygenase (RuBisCO) from atmospheric oxygen, fixing carbon in their mesophyll cells and using oxaloacetate and malate to ferry the fixed carbon to RuBisCO and the rest of the Calvin cycle enzymes isolated in the bundle-sheath cells. The intermediate compounds both contain four carbon atoms, which gives C4. In Crassulacean acid metabolism (CAM), time isolates functioning RuBisCO (and the other Calvin cycle enzymes) from high oxygen concentrations produced by photosynthesis, in that O2 is evolved during the day, and allowed to dissipate then, while at night atmospheric CO2 is taken up and stored as malic or other acids. During the day, CAM plants close stomata and use stored acids as carbon sources for sugar, etc. production. The C3 pathway requires 18 ATP and 12 NADPH for the synthesis of one molecule of glucose (3 ATP + 2 NADPH per CO2 fixed) while the C4 pathway requires 30 ATP and 12 NADPH (C3 + 2 ATP per CO2 fixed). In addition, if each NADPH is taken as equivalent to 3 ATP, both pathways require an additional 36 ATP equivalents for their 12 NADPH. Despite this reduced ATP efficiency, C4 is an evolutionary advancement, adapted to areas of high levels of light, where the reduced ATP efficiency is more than offset by the use of increased light. The ability to thrive despite restricted water availability maximizes the ability to use available light. The simpler C3 cycle which operates in most plants is adapted to wetter, darker environments, such as many northern latitudes. Maize, sugar cane, and sorghum are C4 plants. These plants are economically important in part because of their relatively high photosynthetic efficiencies compared to many other crops. Pineapple is a CAM plant. Research Photorespiration One efficiency-focused research topic is improving the efficiency of photorespiration. Around 25% of the time RuBisCO incorrectly collects oxygen molecules instead of CO2, creating CO2 and ammonia that disrupt the photosynthesis process. Plants remove these byproducts via photorespiration, requiring energy and nutrients that would otherwise increase photosynthetic output. In C3 plants photorespiration can consume 20-50% of photosynthetic energy. Engineered tobacco One research effort shortened photorespiration pathways in tobacco. The engineered crops grew taller and faster, yielding up to 40% more biomass. The study employed synthetic biology to construct new metabolic pathways and assessed their efficiency with and without transporter RNAi. The most efficient pathway increased light-use efficiency by 17%. Expanding photosynthetically active radiation with pigment bioengineering Far-red In efforts to increase photosynthetic efficiency, researchers have proposed extending the spectrum of light that is available for photosynthesis. One approach involves incorporating pigments like chlorophyll d and f, which are capable of absorbing far-red light, into the photosynthetic machinery of higher plants. Naturally present in certain cyanobacteria, these chlorophylls enable photosynthesis with far-red light that standard chlorophylls a and b cannot utilize.
By adapting these pigments for use in higher plants, it is hoped that plants can be engineered to utilize a wider range of the light spectrum, potentially leading to increased growth rates and biomass production. Green Green light is considered the least efficient wavelength in the visible spectrum for photosynthesis and presents an opportunity for increased utilization. Chlorophyll c is a pigment found in marine algae with blue-green absorption and could be used to expand absorption in the green wavelengths in plants. Expression of the dinoflagellate CHLOROPHYLL C SYNTHASE gene in the plant Nicotiana benthamiana resulted in the heterologous production of chlorophyll c. This was the first successful introduction of a foreign chlorophyll molecule into a higher plant and is the first step towards bioengineering plants for improved photosynthetic performance across a variety of lighting conditions. Chloroplast biogenesis Research is being done into RCB and NCP, two non-catalytic thioredoxin-like proteins that activate chloroplast transcription. Knowing the exact mechanism can be useful to allow increasing photosynthesis (e.g., through genetic modification). Ecosystem research on photosynthetic efficiency Photosynthesis is the only process that allows the conversion of atmospheric carbon (CO2) to organic (solid) carbon, and this process plays an essential role in climate models. This led researchers to study sun-induced chlorophyll fluorescence (i.e., chlorophyll fluorescence that uses the Sun as the illumination source; the glow of a plant) as an indicator of the photosynthetic efficiency of a region. This is interesting for scientists since it shows them things like the CO2 absorption of forests, or the productivity of an agricultural region. FLEX is an upcoming European Space Agency satellite mission dedicated to this type of measurement. See also Energy conversion efficiency FLEX (satellite) Phototroph Photosynthetically active radiation References Ecological metrics Photosynthesis
Photosynthetic efficiency
[ "Chemistry", "Mathematics", "Biology" ]
2,701
[ "Metrics", "Ecological metrics", "Quantity", "Photosynthesis", "Biochemistry" ]
4,067,031
https://en.wikipedia.org/wiki/Ukkonen%27s%20algorithm
In computer science, Ukkonen's algorithm is a linear-time, online algorithm for constructing suffix trees, proposed by Esko Ukkonen in 1995. The algorithm begins with an implicit suffix tree containing the first character of the string. Then it steps through the string, adding successive characters until the tree is complete. This order of addition of characters gives Ukkonen's algorithm its "on-line" property. The original algorithm presented by Peter Weiner proceeded backward from the last character to the first, from the shortest to the longest suffix. A simpler algorithm was found by Edward M. McCreight, going from the longest to the shortest suffix. Implicit suffix tree While generating a suffix tree using Ukkonen's algorithm, we will see implicit suffix trees in intermediate steps depending on characters in string S. In implicit suffix trees, there will be no edge with a $ (or any other termination character) label and no internal node with only one edge going out of it. High level description of Ukkonen's algorithm Ukkonen's algorithm constructs an implicit suffix tree Ti for each prefix S[1...i] of S (S being the string of length n). It first builds T1 using the 1st character, then T2 using the 2nd character, then T3 using the 3rd character, ..., Tn using the nth character. You can find the following characteristics in a suffix tree that uses Ukkonen's algorithm: Implicit suffix tree Ti+1 is built on top of implicit suffix tree Ti. At any given time, Ukkonen's algorithm builds the suffix tree for the characters seen so far and so it has the on-line property, allowing the algorithm to have an execution time of O(n). Ukkonen's algorithm is divided into n phases (one phase for each character in the string of length n). Each phase i+1 is further divided into i+1 extensions, one for each of the i+1 suffixes of S[1...i+1]. Suffix extension is all about adding the next character into the suffix tree built so far. In extension j of phase i+1, the algorithm finds the end of S[j...i] (which is already in the tree due to previous phase i) and then extends S[j...i] to be sure the suffix S[j...i+1] is in the tree. There are three extension rules: If the path from the root labelled S[j...i] ends at a leaf edge (i.e., S[i] is the last character on the leaf edge), then character S[i+1] is just added to the end of the label on that leaf edge. If the path from the root labelled S[j...i] ends at a non-leaf edge (i.e., there are more characters after S[i] on the path) and the next character is not S[i+1], then a new leaf edge with label S[i+1] and number j is created starting from character S[i+1]. A new internal node will also be created if S[j...i] ends inside (in between) a non-leaf edge. If the path from the root labelled S[j..i] ends at a non-leaf edge (i.e., there are more characters after S[i] on the path) and the next character is S[i+1] (already in the tree), do nothing. One important point to note is that from a given node (root or internal), there will be one and only one edge starting from one character. There will not be more than one edge going out of any node starting with the same character. Run time The naive implementation for generating a suffix tree going forward requires O(n^2) or even O(n^3) time complexity in big O notation, where n is the length of the string. By exploiting a number of algorithmic techniques, Ukkonen reduced this to O(n) (linear) time for constant-size alphabets, and O(n log n) in general, matching the runtime performance of the earlier two algorithms.
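To make the extension rules concrete, below is a compact sketch in C++, adapted from a well-known competitive-programming formulation of Ukkonen's algorithm. It is a simplified illustration rather than the canonical presentation: it uses std::map for child lookup (adding a logarithmic factor over the constant-alphabet linear bound discussed above), open-ended leaf edges stand in for rule 1, and the node layout and names are assumptions of this sketch.

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct SuffixTree {
    enum { INF = 1 << 30 };                  // open right end for leaf edges
    struct Node {
        int l, r, parent, link;              // edge label is s[l, r)
        std::map<char, int> next;            // children keyed by first char
        Node(int l = 0, int r = 0, int parent = -1)
            : l(l), r(r), parent(parent), link(-1) {}
        int len() const { return r - l; }
    };
    struct State { int v, pos; };            // active point: pos chars into v's edge

    std::string s;
    std::vector<Node> t;
    State ptr{0, 0};

    explicit SuffixTree(const std::string& text) {
        t.emplace_back();                    // node 0 is the root
        for (char c : text) extend(c);
    }
    int child(int v, char c) const {
        auto it = t[v].next.find(c);
        return it == t[v].next.end() ? -1 : it->second;
    }
    // Walk from state st along the substring s[l, r).
    State go(State st, int l, int r) {
        while (l < r) {
            if (st.pos == t[st.v].len()) {   // at a node: pick the next edge
                st = {child(st.v, s[l]), 0};
                if (st.v == -1) return st;
            } else {
                if (s[t[st.v].l + st.pos] != s[l]) return {-1, -1};
                if (r - l < t[st.v].len() - st.pos) return {st.v, st.pos + r - l};
                l += t[st.v].len() - st.pos;
                st.pos = t[st.v].len();
            }
        }
        return st;
    }
    // Ensure a node exists exactly at the active point, splitting the edge if
    // needed (this is where rule 2 creates a new internal node).
    int split(State st) {
        if (st.pos == t[st.v].len()) return st.v;
        if (st.pos == 0) return t[st.v].parent;
        Node v = t[st.v];
        int id = (int)t.size();
        t.emplace_back(v.l, v.l + st.pos, v.parent);
        t[v.parent].next[s[v.l]] = id;
        t[id].next[s[v.l + st.pos]] = st.v;
        t[st.v].parent = id;
        t[st.v].l += st.pos;
        return id;
    }
    // Suffix link of v, computed lazily via the parent's link.
    int link(int v) {
        if (t[v].link != -1) return t[v].link;
        if (t[v].parent == -1) return 0;
        int to = link(t[v].parent);
        State st = go({to, t[to].len()}, t[v].l + (t[v].parent == 0), t[v].r);
        return t[v].link = split(st);
    }
    // One phase: extend the implicit tree with the next character. Rule 1 is
    // implicit in the open-ended leaf edges; rule 3 is the early return.
    void extend(char c) {
        s += c;
        int pos = (int)s.size();
        for (;;) {
            State nptr = go(ptr, pos - 1, pos);
            if (nptr.v != -1) { ptr = nptr; return; }  // rule 3: already present
            int mid = split(ptr);                      // rule 2: add a new leaf
            int leaf = (int)t.size();
            t.emplace_back(pos - 1, INF, mid);
            t[mid].next[s[pos - 1]] = leaf;
            int lnk = link(mid);                       // hop to the next suffix
            ptr = {lnk, t[lnk].len()};
            if (mid == 0) break;
        }
    }
};

int main() {
    SuffixTree st("xabxac");
    // 1 root + 2 internal nodes + 6 leaves, matching the worked example below.
    std::cout << "nodes: " << st.t.size() << '\n';
}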
Ukkonen's algorithm example To better illustrate how a suffix tree is constructed using Ukkonen's algorithm, we can consider the string S = xabxac. Start with an empty root node. Construct T1 for S[1] by adding the first character of the string. Rule 2 applies, which creates a new leaf node. Construct T2 for S[1..2] by adding suffixes of xa (xa and a). Rule 1 applies, which extends the path label on the existing leaf edge. Rule 2 applies, which creates a new leaf node. Construct T3 for S[1..3] by adding suffixes of xab (xab, ab and b). Rule 1 applies, which extends the path label on the existing leaf edge. Rule 2 applies, which creates a new leaf node. Construct T4 for S[1..4] by adding suffixes of xabx (xabx, abx, bx and x). Rule 1 applies, which extends the path label on the existing leaf edge. Rule 3 applies, do nothing. Construct T5 for S[1..5] by adding suffixes of xabxa (xabxa, abxa, bxa, xa and a). Rule 1 applies, which extends the path label on the existing leaf edge. Rule 3 applies, do nothing. Construct T6 for S[1..6] by adding suffixes of xabxac (xabxac, abxac, bxac, xac, ac and c). Rule 1 applies, which extends the path label on the existing leaf edge. Rule 2 applies, which creates a new leaf node (in this case, three new leaf edges and two new internal nodes are created). References External links Detailed explanation in plain English Fast String Searching With Suffix Trees Mark Nelson's tutorial. Has an implementation example written with C++. Implementation in C with detailed explanation Lecture slides by Guy Blelloch Ukkonen's homepage Text-Indexing project (Ukkonen's linear-time construction of suffix trees) Implementation in C Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Bioinformatics algorithms Algorithms on strings Substring indices
Ukkonen's algorithm
[ "Biology" ]
1,311
[ "Bioinformatics", "Bioinformatics algorithms" ]
4,067,253
https://en.wikipedia.org/wiki/Light-addressable%20potentiometric%20sensor
A light-addressable potentiometric sensor (LAPS) is a sensor that uses light (e.g. LEDs) to select what will be measured. Light can activate carriers in semiconductors. History An example is the pH-sensitive LAPS (range pH4 to pH10) that uses LEDs in combination with (semi-conducting) silicon and a pH-sensitive Ta2O5 (SiO2; Si3N4) insulator. The LAPS has several advantages over other types of chemical sensors. The sensor surface is completely flat; no structures, wiring or passivation are required. At the same time, the "light-addressability" of the LAPS makes it possible to obtain a spatially resolved map of the distribution of the ion concentration in the specimen. The spatial resolution of the LAPS is an important factor and is determined by the beam size and the lateral diffusion of photocarriers in the semiconductor substrate. By illuminating parts of the semiconductor surface, electron-hole pairs are generated and a photocurrent flows. The LAPS is a semiconductor-based chemical sensor with an electrolyte-insulator-semiconductor (EIS) structure. Under a fixed bias voltage, the AC (kHz range) photocurrent signal varies depending on the solution. A two-dimensional mapping of the surface from the LAPS is possible by using a scanning laser beam. Optoelectronics Sensors
Light-addressable potentiometric sensor
[ "Technology", "Engineering" ]
286
[ "Sensors", "Measuring instruments" ]
4,067,466
https://en.wikipedia.org/wiki/Sort%20%28C%2B%2B%29
sort is a generic function in the C++ Standard Library for doing comparison sorting. The function originated in the Standard Template Library (STL). The specific sorting algorithm is not mandated by the language standard and may vary across implementations, but the worst-case asymptotic complexity of the function is specified: a call to sort must perform no more than O(N log N) comparisons when applied to a range of N elements. Usage The sort function is included from the <algorithm> header of the C++ Standard Library, and carries three arguments: sort(first, last, cmp). Here, first and last are of a templated type that must be a random access iterator, and first and last must define a sequence of values, i.e., last must be reachable from first by repeated application of the increment operator to first. The third argument, also of a templated type, denotes a comparison predicate. This comparison predicate must define a strict weak ordering on the elements of the sequence to be sorted. The third argument is optional; if not given, the "less-than" (<) operator is used, which may be overloaded in C++. This code sample sorts a given array of integers (in ascending order) and prints it out. #include <algorithm> #include <cstddef> #include <iostream> #include <iterator> // for std::begin, std::end, std::size (C++17) int main() { int array[] = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 }; std::sort(std::begin(array), std::end(array)); for (std::size_t i = 0; i < std::size(array); ++i) { std::cout << array[i] << ' '; } std::cout << '\n'; } The same functionality using a container, using its begin and end methods to obtain iterators: #include <algorithm> #include <cstddef> #include <iostream> #include <vector> int main() { std::vector<int> vec = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 }; std::sort(vec.begin(), vec.end()); for (std::size_t i = 0; i < vec.size(); ++i) { std::cout << vec[i] << ' '; } std::cout << '\n'; } Genericity sort is specified generically, so that it can work on any random-access container and any way of determining that an element a of such a container should be placed before another element b. Although generically specified, sort is not easily applied to all sorting problems. A particular problem that has been the subject of some study is the following: Let a and b be two arrays, where there exists some relation between the element a[i] and the element b[i] for all valid indices i. Sort a while maintaining the relation with b, i.e., apply to b the same permutation that sorts a. Do the previous without copying the elements of a and b into a new array of pairs, sorting, and moving the elements back into the original arrays (which would require temporary space). A solution to this problem was suggested by A. Williams in 2002, who implemented a custom iterator type for pairs of arrays and analyzed some of the difficulties in correctly implementing such an iterator type. Williams's solution was studied and refined by K. Åhlander. Complexity and implementations The C++ standard requires that a call to sort performs O(N log N) comparisons when applied to a range of N elements. In previous versions of C++, such as C++03, only average complexity was required to be O(N log N). This was to allow the use of algorithms like (median-of-3) quicksort, which are fast in the average case, indeed significantly faster than other algorithms like heap sort with optimal worst-case complexity, and where the worst-case quadratic complexity rarely occurs. The introduction of hybrid algorithms such as introsort allowed both fast average performance and optimal worst-case performance, and thus the complexity requirements were tightened in later standards. Different implementations use different algorithms.
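To illustrate the comparison predicate described above, the following sketch passes a lambda that orders strings by length, with a lexicographic tie-break so the predicate remains a strict weak ordering; the word list is arbitrary.

#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> words = { "sort", "is", "generic", "in", "C++" };
    // Any callable defining a strict weak ordering may be used as the third
    // argument; here: shorter strings first, ties broken lexicographically.
    std::sort(words.begin(), words.end(),
              [](const std::string& a, const std::string& b) {
                  if (a.size() != b.size()) return a.size() < b.size();
                  return a < b;
              });
    for (const auto& w : words) std::printf("%s ", w.c_str());
    std::printf("\n");  // prints: in is C++ sort generic
}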
The GNU Standard C++ library, for example, uses a 3-part hybrid sorting algorithm: introsort is performed first (introsort itself being a hybrid of quicksort and heap sort), to a maximum depth given by 2×log2 n, where n is the number of elements, followed by an insertion sort on the result. Other types of sorting sort is not stable: equivalent elements that are ordered one way before sorting may be ordered differently after sorting. stable_sort ensures stability of the result at the expense of worse performance (in some cases), requiring only quasilinear time with exponent 2 – O(n log² n) – if additional memory is not available, but linearithmic time O(n log n) if additional memory is available. This allows the use of in-place merge sort for in-place stable sorting and regular merge sort for stable sorting with additional memory. Partial sorting is implemented by partial_sort, which takes a range of n elements and an integer m, and reorders the range so that the m smallest elements are in the first m positions in sorted order (leaving the remaining n − m elements in the remaining positions, in some unspecified order). Depending on design this may be considerably faster than a complete sort. Historically, this was commonly implemented using a heap-based algorithm that takes Θ(n log m) worst-case time. A better algorithm called quickselsort is used in the Copenhagen STL implementation, bringing the complexity down to Θ(n + m log m). Selection of the nth element is implemented by nth_element, which actually implements an in-place partial sort: it correctly sorts the nth element, and also ensures that this element partitions the range, so that elements before it are less than it and elements after it are greater than it. There is the requirement that this takes linear time on average, but there is no worst-case requirement; these requirements are exactly met by quickselect, for any choice of pivot strategy. Some containers, among them list, provide a specialised version of sort as a member function. This is because linked lists don't have random access (and therefore can't use the regular sort function), and the specialised version also preserves the values that list iterators point to. Comparison to qsort Aside from sort, the C++ standard library also includes the qsort function from the C standard library. Compared to qsort, the templated sort is more type-safe, since it does not require access to data items through unsafe void pointers, as qsort does. Also, qsort accesses the comparison function using a function pointer, necessitating large numbers of repeated function calls, whereas in sort, comparison functions may be inlined into the custom object code generated for each template instantiation. In practice, C++ code using sort is often considerably faster at sorting simple data like integers than equivalent C code using qsort. References External links C++ reference for std::sort Another C++ reference for std::sort C++ Standard Library Sorting algorithms
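Returning to the comparison with qsort above, a short sketch may make the type-safety and inlining points concrete; the comparator name is illustrative:

#include <algorithm>
#include <cstdlib>
#include <iterator>

// Comparator for qsort: called through a function pointer via untyped
// void pointers, so the compiler generally cannot inline it.
static int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

int main() {
    int a[] = { 3, 1, 2 };
    int b[] = { 3, 1, 2 };
    // C style: element count and size are passed explicitly, with no
    // type checking of the casts inside the comparison function.
    std::qsort(a, 3, sizeof(int), cmp_int);
    // C++ style: iterator-based and type-safe; the default operator<
    // comparison can be inlined into the instantiated template.
    std::sort(std::begin(b), std::end(b));
}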
Sort (C++)
[ "Mathematics" ]
1,448
[ "Order theory", "Sorting algorithms" ]
4,067,857
https://en.wikipedia.org/wiki/NAT%20Port%20Mapping%20Protocol
NAT Port Mapping Protocol (NAT-PMP) is a network protocol for establishing network address translation (NAT) settings and port forwarding configurations automatically without user effort. The protocol automatically determines the external IPv4 address of a NAT gateway, and provides a means for an application to communicate the parameters for communication to peers. Apple introduced NAT-PMP in 2005 as part of the Bonjour specification, as an alternative to the more common ISO Standard Internet Gateway Device Protocol implemented in many NAT routers. The protocol was published as an informational Request for Comments (RFC) by the Internet Engineering Task Force (IETF) in RFC 6886. NAT-PMP runs over the User Datagram Protocol (UDP) and uses port number 5351 on the server, whilst port 5350 is used on the client, as per the specification. It has no built-in authentication mechanisms because forwarding a port typically does not allow any activity that could not also be achieved using STUN methods. The benefit of NAT-PMP over STUN is that it does not require a STUN server, and a NAT-PMP mapping has a known expiration time, allowing the application to avoid sending inefficient keep-alive packets. NAT-PMP is the predecessor of the Port Control Protocol (PCP). See also Port Control Protocol (PCP) Internet Gateway Device Protocol (UPnP IGD) Universal Plug and Play (UPnP) NAT traversal STUN Zeroconf References External links Port Mapping Protocols Overview and Comparison 2024 — About UPnP IGD & PCP/NAT-PMP Apple Inc. services Network protocols Network address translation
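To make the request/response exchange described above concrete, the following is a minimal sketch of the "determine external address" request defined in RFC 6886 (version byte 0, opcode 0). It assumes a POSIX system; the gateway address is a placeholder, and the retransmission and exponential backoff behaviour the RFC mandates is omitted for brevity.

#include <arpa/inet.h>
#include <cstdio>
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in gw{};
    gw.sin_family = AF_INET;
    gw.sin_port = htons(5351);                        // NAT-PMP server port
    inet_pton(AF_INET, "192.168.1.1", &gw.sin_addr);  // placeholder gateway

    // Request: version 0, opcode 0 = determine external address.
    unsigned char req[2] = { 0, 0 };
    sendto(sock, req, sizeof req, 0,
           reinterpret_cast<sockaddr*>(&gw), sizeof gw);

    // Response (12 bytes): version, opcode (128 + request opcode),
    // 16-bit result code, 32-bit seconds since the gateway's epoch,
    // then the external IPv4 address.
    unsigned char resp[12];
    ssize_t n = recvfrom(sock, resp, sizeof resp, 0, nullptr, nullptr);
    if (n == 12 && resp[1] == 128) {
        unsigned short result;
        memcpy(&result, resp + 2, sizeof result);
        if (ntohs(result) == 0) {                     // 0 = success
            printf("external address: %u.%u.%u.%u\n",
                   resp[8], resp[9], resp[10], resp[11]);
        }
    }
    close(sock);
}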
NAT Port Mapping Protocol
[ "Technology" ]
339
[ "Computing stubs", "Computer network stubs" ]
4,067,918
https://en.wikipedia.org/wiki/Vertical%20and%20horizontal%20bundles
In mathematics, the vertical bundle and the horizontal bundle are vector bundles associated to a smooth fiber bundle. More precisely, given a smooth fiber bundle π : E → B, the vertical bundle VE and horizontal bundle HE are subbundles of the tangent bundle TE of E whose Whitney sum satisfies VE ⊕ HE ≅ TE. This means that, over each point e of E, the fibers VeE and HeE form complementary subspaces of the tangent space TeE. The vertical bundle consists of all vectors that are tangent to the fibers, while the horizontal bundle requires some choice of complementary subbundle. To make this precise, define the vertical space VeE at e to be ker(dπe). That is, the differential dπe : TeE → TbB (where b = π(e)) is a linear surjection whose kernel has the same dimension as the fibers of π. If we write F = π−1(b), then VeE consists of exactly the vectors in TeE which are also tangent to F. The name is motivated by low-dimensional examples like the trivial line bundle over a circle, which is sometimes depicted as a vertical cylinder projecting to a horizontal circle. A subspace HeE of TeE is called a horizontal space if TeE is the direct sum of VeE and HeE. The disjoint union of the vertical spaces VeE for each e in E is the subbundle VE of TE; this is the vertical bundle of E. Likewise, provided the horizontal spaces vary smoothly with e, their disjoint union is a horizontal bundle. The use of the words "the" and "a" here is intentional: each vertical subspace is unique, defined explicitly as the kernel of dπ. Excluding trivial cases, there are an infinite number of horizontal subspaces at each point. Also note that arbitrary choices of horizontal space at each point will not, in general, form a smooth vector bundle; they must also vary in an appropriately smooth way. The horizontal bundle is one way to formulate the notion of an Ehresmann connection on a fiber bundle. Thus, for example, if E is a principal G-bundle, then the horizontal bundle is usually required to be G-invariant: such a choice is equivalent to a connection on the principal bundle. This notably occurs when E is the frame bundle associated to some vector bundle, which is a principal bundle. Formal definition Let π:E→B be a smooth fiber bundle over a smooth manifold B. The vertical bundle is the kernel VE := ker(dπ) of the tangent map dπ : TE → TB. Since dπe is surjective at each point e, it yields a regular subbundle of TE. Furthermore, the vertical bundle VE is also integrable. An Ehresmann connection on E is a choice of a complementary subbundle HE to VE in TE, called the horizontal bundle of the connection. At each point e in E, the two subspaces form a direct sum, such that TeE = VeE ⊕ HeE. Example The Möbius strip is a line bundle over the circle, and the circle can be pictured as the middle ring of the strip. At each point on the strip, the projection map projects it towards the middle ring, and the fiber is perpendicular to the middle ring. The vertical bundle at this point is the tangent space to the fiber. A simple example of a smooth fiber bundle is a Cartesian product of two manifolds. Consider the bundle B1 := (M × N, pr1) with bundle projection pr1 : M × N → M : (x, y) → x. Applying the definition in the paragraph above to find the vertical bundle, we consider first a point (m,n) in M × N. Then the image of this point under pr1 is m. The preimage of m under this same pr1 is {m} × N, so that T(m,n) ({m} × N) = {m} × TN. The vertical bundle is then VB1 = M × TN, which is a subbundle of T(M × N). If we take the other projection pr2 : M × N → N : (x, y) → y to define the fiber bundle B2 := (M × N, pr2) then the vertical bundle will be VB2 = TM × N.
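In symbols, restating the product example above in LaTeX notation, the tangent space of the product splits as

$$T_{(m,n)}(M \times N) \;\cong\; T_m M \oplus T_n N,$$

with vertical spaces

$$V_{(m,n)}B_1 = \{0\} \oplus T_n N, \qquad V_{(m,n)}B_2 = T_m M \oplus \{0\},$$

so that the choice $H_{(m,n)}B_1 := T_m M \oplus \{0\}$ gives a horizontal space for $B_1$ at every point, and symmetrically for $B_2$.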
In both cases, the product structure gives a natural choice of horizontal bundle, and hence an Ehresmann connection: the horizontal bundle of B1 is the vertical bundle of B2 and vice versa. Properties Various important tensors and differential forms from differential geometry take on specific properties on the vertical and horizontal bundles, or even can be defined in terms of them. Some of these are: A vertical vector field is a vector field that is in the vertical bundle. That is, for each point e of E, one chooses a vector ve in VeE, where VeE is the vertical vector space at e. A differentiable r-form α on E is said to be a horizontal form if α(v1, ..., vr) = 0 whenever at least one of the vectors vi is vertical. The connection form vanishes on the horizontal bundle, and is non-zero only on the vertical bundle. In this way, the connection form can be used to define the horizontal bundle: The horizontal bundle is the kernel of the connection form. The solder form or tautological one-form vanishes on the vertical bundle and is non-zero only on the horizontal bundle. By definition, the solder form takes its values entirely in the horizontal bundle. For the case of a frame bundle, the torsion form vanishes on the vertical bundle, and can be used to define exactly that part that needs to be added to an arbitrary connection to turn it into a Levi-Civita connection, i.e. to make a connection be torsionless. Indeed, if one writes θ for the solder form, then the torsion tensor Θ is given by Θ = D θ (with D the exterior covariant derivative). For any given connection ω, there is a unique one-form σ on TE, called the contorsion tensor, that is vanishing in the vertical bundle, and is such that ω+σ is another connection 1-form that is torsion-free. The resulting one-form ω+σ is nothing other than the Levi-Civita connection. One can take this as a definition: since the torsion is given by Θ = Dθ = dθ + ω ∧ θ, the vanishing of the torsion is equivalent to having dθ + (ω + σ) ∧ θ = 0, and it is not hard to show that σ must vanish on the vertical bundle, and that σ must be G-invariant on each fibre (more precisely, that σ transforms in the adjoint representation of G). Note that this defines the Levi-Civita connection without making any explicit reference to any metric tensor (although the metric tensor can be understood to be a special case of a solder form, as it establishes a mapping between the tangent and cotangent bundles of the base space, i.e. between the horizontal and vertical subspaces of the frame bundle). When E is a principal bundle, the fundamental vector fields must necessarily live in the vertical bundle and vanish in any horizontal bundle. Notes References Differential topology Fiber bundles Connection (mathematics)
Vertical and horizontal bundles
[ "Mathematics" ]
1,424
[ "Topology", "Differential topology" ]
4,068,024
https://en.wikipedia.org/wiki/Diploma%20in%20Digital%20Applications
In England, Wales, Northern Ireland and the Isle of Man, the Diploma in Digital Applications (DiDA) was an optional information and communication technology (ICT) course, usually studied by Key Stage 4 or equivalent school students (aged 14–16). DiDA was introduced in 2005 (after a pilot starting in 2004) as a creation of the Edexcel examination board. DiDA was notable for its time in that it consisted entirely of coursework, completed on-computer; all work relating to the DiDA course was created, stored, assessed and moderated digitally. In the late 2000s it was generally taught as a replacement for GCSE ICT, and the GNVQ which had been withdrawn in 2007. DiDA faced controversy in its lifetime, particularly after the Wolf report found that it was primarily being taught by schools because it was the equivalent of studying four GCSEs at once, which had a major impact on league table scores. From 2012 a revised DiDA and CiDA were taught by a smaller number of centres, with the original qualification removed from league table consideration in 2014. The revised version was ultimately discontinued in 2020. At the scheme's launch, 200,000 students were enrolled on the qualification; this had declined to 6,000 on the revised version in 2016 and to 1,400 students by the time of the final report in 2020. Course The course consisted of five units. Using ICT was a compulsory unit. The other four units, Multimedia, Graphics, ICT in Enterprise and Computer Games Authoring, were optional. Students who completed the Using ICT module alone received an Award in Digital Applications (AiDA), which was equivalent to one GCSE or Standard Grade. Those who completed the Using ICT unit and any one of the other four units received a Certificate in Digital Applications (CiDA), which was equivalent to two GCSEs or Standard Grades. Students who completed four modules in total received the full Diploma in Digital Applications (DiDA), which was equivalent to four GCSEs or Standard Grades. Edexcel also made it possible for candidates to achieve a Certificate in Digital Applications Plus (CiDA+), equivalent to three GCSEs or Standard Grades, upon completion of Using ICT and another two units. The original 2004 pilot included three moderation windows; this was extended to four in the 2005 launch to give students one additional chance for a resit if they failed. Levelling & qualifications The qualification was available as the equivalent of one, two, or four GCSEs as AiDA, CiDA or DiDA respectively. Adobe Associate Certificates Students who successfully completed DiDA units D202 and D203 were eligible to claim Adobe Systems Associate Certification, provided they attained a merit or distinction grade along with other requirements. There were three different types: Web Media using Dreamweaver (Multimedia); Multimedia using Flash (Multimedia); and Web Graphics using Fireworks (Graphics). The Adobe certification scheme was not widely adopted by schools, as most did not have the teacher expertise required for its delivery. While the original DiDA specification was approved for use until 2014, Adobe discontinued Fireworks in 2013. Criticism Use in league table score inflation The qualification was initially designed in response to concerns that schools were using the older GNVQ as a way to inflate their league table performance in the mid 2000s, as it counted as four GCSEs but could be studied in the time of one.
Academies in particular relied on DiDA in the same way during the late 2000s, with one study discovering that hypothetically excluding DiDA from rankings caused the score of an academy to drop 21%. DiDA faced criticism from some IT experts early in its development, who described it as a "soft option". The Thomas Telford School, which built the online platform for the GNVQ, found that DiDA was "not a suitable alternative" to the GNVQ. Ofsted was similarly critical of the qualification, describing it as "of doubtful value". Like many other qualifications, DiDA was revised in 2012 to meet changing specifications from government, amid concerns that it offered "no basis for progression to further study or to meaningful employment", and was being taught in order to inflate league table scores at the expense of other qualifications. As a result of these changes the original qualification was removed from league table consideration in 2014. For 2015, the revised version was counted as a single qualification rather than four, and saw significantly less widespread adoption by schools. Format and difficulty Lewisham City Learning Centre was concerned about the volume of assessment evidence, with students required to create a large amount of documentation. Grading of these documents was determined by the structure, composition and language used, and not by the merit of the projects they were related to. Schools were forced to spend the majority of lesson time on these documents rather than on "higher level ICT skills", and avoided creative projects and professional software because of the time requirement. Few schools adopted the Adobe Associate Certification because of this issue. Teachers described the qualification as "very, very challenging" to teach, and many teachers were unsure of what students actually needed to do in order to pass. Speaking to The Guardian, the ICT head of Moor End Technology College commented of the pilot scheme: "Students who were able to get through GNVQ will struggle with Dida. It will be very difficult for us to match the kind of results we have achieved with GNVQ. To get four full A-Cs you have to complete four Dida units. In the pilot some of my students struggled to complete one". Many other schools ultimately found it too difficult for low achievers. Edexcel significantly lowered the grade boundaries for the 2006 academic year, with the pass threshold set at 36% due to these concerns. For 2007, 700 schools which had previously offered the diploma switched instead to the equivalent OCR Nationals. The scheme also faced organisational issues, with some centres continuing to teach using the 2005 version (discontinued in 2014) as late as 2018, by which point Pearson considered it "no longer fit for purpose". References External links DiDA SPB (Using ICT from 2006) DiDA SPB (Multimedia from 2006) (Graphics from 2011) DiDA SPB (ICT in Enterprise from 2006) DiDA ePortfolio-builder Website dedicated to the DiDA course Requirements for Adobe certification Educational qualifications in the United Kingdom Information technology in the United Kingdom Information technology qualifications
Diploma in Digital Applications
[ "Technology" ]
1,300
[ "Computer occupations", "Information technology qualifications" ]
4,068,475
https://en.wikipedia.org/wiki/ATEX%20directives
The ATEX directives are two of the EU directives describing the minimum safety requirements for workplaces and equipment used in explosive atmospheres. The name is an initialism of the term ATmosphères EXplosibles (French for "explosive atmospheres"). Directives Organizations in the EU must follow the directives to protect employees from explosion risk in areas with an explosive atmosphere. There are two ATEX Directives (one for the manufacturer and one for the user of the equipment): The ATEX 114 "equipment" Directive 2014/34/EU - Equipment and protective systems intended for use in potentially explosive atmospheres The ATEX 153 "workplace" Directive 1999/92/EC - Minimum requirements for improving the safety and health protection of workers potentially at risk from explosive atmospheres. Note: The ATEX 95 "equipment" Directive 94/9/EC was withdrawn on 20 April 2016, when it was replaced by the ATEX 114 Directive 2014/34/EU. ATEX Directive 2014/34/EU is mandatory for manufacturers as of 20 April 2016, as stated in article 44 of the Directive. ATEX Directive 2014/34/EU was published on 29 March 2014 by the European Parliament. It refers to the harmonization of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres. Regarding the ATEX 1999/92/EC Directive, the requirement is that employers must classify areas where potentially explosive atmospheres may occur into zones. The classification given to a particular zone, and its size and location, depends on the likelihood of an explosive atmosphere occurring and its persistence if it does. Equipment in use before July 2003 is allowed to be used indefinitely provided a risk assessment shows it is safe to do so. The aim of Directive 2014/34/EU is to allow the free trade of ‘ATEX’ equipment and protective systems within the EU by removing the need for separate testing and documentation for each member state. The regulations apply to all equipment intended for use in explosive atmospheres, whether electrical or mechanical, including protective systems. There are two categories of equipment: 'I' for mining and 'II' for surface industries. Manufacturers who apply its provisions and affix the CE marking and the Ex marking are able to sell their equipment anywhere within the European Union without any further requirements with respect to the risks covered being applied. The directive covers a large range of equipment, potentially including equipment used on fixed offshore platforms, in petrochemical plants, mines, flour mills, and other areas where a potentially explosive atmosphere may be present. In very broad terms, there are three preconditions for the directive to apply: the equipment must (a) have its own effective source of ignition, (b) be intended for use in a potentially explosive atmosphere (air mixtures), and (c) be under normal atmospheric conditions. The directive also covers components essential for the safe use, and safety devices directly contributing to the safe use, of the equipment in scope. These latter devices may be outside the potentially explosive environment. Manufacturers/suppliers (or importers, if the manufacturers are outside the EU) must ensure that their products meet essential health and safety requirements and undergo appropriate conformity procedures. This usually involves testing and certification by a ‘third-party’ certification body (known as a Notified Body, e.g.
UL, Vinçotte, Intertek, Sira, Baseefa, Lloyd's, TUV ICQC), but manufacturers/suppliers can ‘self-certify’ Category 3 equipment (technical dossier including drawings, hazard analysis and user's manual in the local language) and Category 2 non-electrical equipment. Still, for Category 2 the technical dossier must be lodged with a notified body. Once certified, the equipment is marked with the ‘CE’ marking (meaning it complies with ATEX and all other relevant directives) and the ‘Ex’ symbol to identify it as approved under the ATEX directive. The technical dossier must be kept for a period of 10 years. Certification ensures that the equipment or protective system is fit for its intended purpose and that adequate information is supplied with it to ensure that it can be used safely. There are four ATEX classifications to ensure that a specific piece of equipment or protective system is appropriate and can be safely used in a particular application: 1. Industrial or Mining Application; 2. Equipment Category; 3. Atmosphere; 4. Temperature. As an EU directive, ATEX finds its US equivalent in the HAZLOC standard. This standard, issued by the Occupational Safety and Health Administration, defines and classifies hazardous locations such as explosive atmospheres. Explosive atmospheres In DSEAR, an explosive atmosphere is defined as a mixture of dangerous substances with air, under atmospheric conditions, in the form of gases, vapours, mist or dust, in which, after ignition has occurred, combustion spreads to the entire unburned mixture. The aforementioned atmospheric conditions are temperatures of −20 to 40 °C and pressures of 0.8 to 1.1 bar. Zone classification The ATEX Directive covers explosions from flammable gas/vapors and combustible dust/fibers (which, contrary to common belief, can lead to hazardous explosions). The following are classifications for zones that can produce explosive atmospheres. Gas/Vapor/Mist: The following zones are each defined as a place in which an explosive atmosphere, consisting of a mixture of dangerous substances with air in the form of gas, vapor, or mist... Zone 0 – ...is present continuously or for long periods or frequently. Zone 1 – ...is likely to occur in normal operation occasionally. Zone 2 – ...is not likely to occur in normal operation, and if it does occur, will persist for a short period only. Dust/Fibers: These are defined as a place in which an explosive atmosphere is in the form of a cloud of combustible dust in the air... Zone 20 – ...is present continuously, or for long periods or frequently. Zone 21 – ...is likely to occur in normal operation occasionally. Zone 22 – ...is not likely to occur in normal operation but, if it does occur, will persist for a short period only. Effective ignition source "Effective ignition source" is a term defined in the European ATEX directive as an event that, in combination with sufficient oxygen and fuel, can cause an explosion. Methane, hydrogen, and coal dust are good examples of possible fuels.
Effective ignition sources are: Lightning strikes Stray currents Static electricity Some frequencies of electromagnetic waves (Light waves) Ultrasound (sound waves of higher frequency than humans can hear; the human hearing range is generally taken to span ~20 Hz to ~20 kHz, so ultrasound lies above ~20 kHz) Electrical switches (Toggling an electrical switch (particularly turning it off) can cause arcing inside the switch) Open flames (This may range from a lit cigarette to welding activity) Hot gases (This can include a gas that just has hot particulates in it) Mechanically generated impact sparks (For example, a hammer blow on a rusty steel surface compared to a hammer blow on a flint stone. The speed and impact angle (between surface and hammer) are important; a 90-degree blow on a surface is relatively harmless) Mechanically generated friction sparks (The combination of materials and speed determine the effectiveness of the ignition source. For example, 4.5 m/s steel-steel friction with a force greater than 2 kN is an effective ignition source. The combination of aluminum and rust is also notoriously dangerous. More than one red hot spark is often necessary in order to have an effective ignition source) Electric sparks (For example, a bad electrical connection or a faulty pressure transmitter) Electrostatic discharge (Static electricity can be generated by air sliding over a wing, or a non-conductive liquid flowing through a filter screen) Ionizing radiation Hot surfaces Exothermic reactions (A chemical reaction that expels heat from the involved substances into the surrounding area) Adiabatic compression (When air is pushed through a narrow passage quickly, causing the passage's surface to heat up) See also Electrical equipment in hazardous areas Dangerous Substances and Explosive Atmospheres Regulations 2002 (UK implementation of ATEX 137) References External links EPS Regulations 2016 (UK implementation of ATEX 114) IECEx IECEx website ATEX Directive 1999/92/EC ATEX Directive 2014/34/EU ATEX Guidelines (First edition – April 2016) The Dangerous Substances and Explosive Atmospheres Regulations 2002 (UK) Equipment and Protective Systems Intended for Use in Potentially Explosive Atmospheres Regulations 2016 (UK) European Union directives Explosion protection Electrical safety Certification marks Natural gas safety Regulation of chemicals in the European Union
ATEX directives
[ "Chemistry", "Mathematics", "Engineering" ]
1,761
[ "Explosion protection", "Regulation of chemicals in the European Union", "Regulation of chemicals", "Natural gas safety", "Symbols", "Combustion engineering", "Natural gas technology", "Explosions", "Certification marks" ]
4,068,867
https://en.wikipedia.org/wiki/Projective%20cone
A projective cone (or just cone) in projective geometry is the union of all lines that intersect a projective subspace R (the apex of the cone) and an arbitrary subset A (the basis) of some other subspace S, disjoint from R. In the special case that R is a single point, S is a plane, and A is a conic section on S, the projective cone is a conical surface; hence the name. Definition Let X be a projective space over some field K, and R, S be disjoint subspaces of X. Let A be an arbitrary subset of S. Then we define RA, the cone with top R and basis A, as follows: When A is empty, RA = A. When A is not empty, RA consists of all those points on a line connecting a point on R and a point on A. Properties As R and S are disjoint, one may deduce from linear algebra and the definition of a projective space that every point on RA not in R or A is on exactly one line connecting a point in R and a point in A. (RA) ∩ S = A When K is the finite field of order q, then |RA| = |A| · q^(r+1) + (q^(r+1) − 1)/(q − 1), where r = dim(R). See also Cone (geometry) Cone (algebraic geometry) Cone (topology) Cone (linear algebra) Conic section Ruled surface Hyperboloid Projective geometry
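A short derivation of the counting formula above, using only facts stated in this article (included as an illustration): an r-dimensional projective subspace over the field of order q has

$$|R| = \frac{q^{r+1}-1}{q-1}$$

points. For each point $a \in A$, the span of R and a is an (r+1)-dimensional subspace, so the points it contributes outside R number

$$\frac{q^{r+2}-1}{q-1} - \frac{q^{r+1}-1}{q-1} = q^{r+1}.$$

Because every point of RA outside R lies on exactly one line connecting R and A, these contributions are pairwise disjoint, giving

$$|RA| = |A|\,q^{r+1} + \frac{q^{r+1}-1}{q-1}.$$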
Projective cone
[ "Mathematics" ]
292
[ "Geometry", "Geometry stubs" ]
5,417,532
https://en.wikipedia.org/wiki/1-Butanol
1-Butanol, also known as butan-1-ol or n-butanol, is a primary alcohol with the chemical formula C4H9OH and a linear structure. Isomers of 1-butanol are isobutanol, butan-2-ol and tert-butanol. The unmodified term butanol usually refers to the straight chain isomer. 1-Butanol occurs naturally as a minor product of the ethanol fermentation of sugars and other saccharides and is present in many foods and drinks. It is also a permitted artificial flavorant in the United States, used in butter, cream, fruit, rum, whiskey, ice cream and ices, candy, baked goods, and cordials. It is also used in a wide range of consumer products. The largest use of 1-butanol is as an industrial intermediate, particularly for the manufacture of butyl acetate (itself an artificial flavorant and industrial solvent). It is a petrochemical derived from propylene. Estimated production figures for 1997 are: United States 784,000 tonnes; Western Europe 575,000 tonnes; Japan 225,000 tonnes. Production Since the 1950s, most 1-butanol is produced by the hydroformylation of propene (oxo process) to preferentially form the butyraldehyde n-butanal: CH3CH=CH2 + CO + H2 → CH3CH2CH2CHO Typical catalysts are based on cobalt and rhodium. Butyraldehyde is then hydrogenated to produce butanol: CH3CH2CH2CHO + H2 → CH3CH2CH2CH2OH A second method for producing butanol involves the Reppe reaction of propylene with CO and water: CH3CH=CH2 + 3 CO + 2 H2O → CH3CH2CH2CH2OH + 2 CO2 In former times, butanol was prepared from crotonaldehyde, which can be obtained from acetaldehyde. Butanol can also be produced by fermentation of biomass by bacteria. Prior to the 1950s, Clostridium acetobutylicum was used in industrial fermentation to produce butanol. Research in the past few decades has identified other microorganisms that can produce butanol through fermentation. Butanol can also be produced by furan hydrogenation over a Pd or Pt catalyst at high temperature and pressure. Industrial use About 85% of 1-butanol is used in the production of varnishes. It is a popular solvent, e.g. for nitrocellulose. A variety of butanol derivatives are used as solvents, e.g. butoxyethanol or butyl acetate. Many plasticizers are based on butyl esters, e.g., dibutyl phthalate. The monomer butyl acrylate is used to produce polymers. It is the precursor to n-butylamines. Biofuel 1-Butanol has been proposed as a substitute for diesel fuel and gasoline. It is produced in small quantities in nearly all fermentations (see fusel oil). Clostridium produces much higher yields of butanol. Research is underway to increase the biobutanol yield from biomass. Butanol is considered as a potential biofuel (butanol fuel). Butanol at 85 percent strength can be used in cars designed for gasoline without any change to the engine (unlike 85% ethanol), and it provides more energy for a given volume than ethanol, almost as much as gasoline. Therefore, a vehicle using butanol would return fuel consumption more comparable to gasoline than ethanol. Butanol can also be added to diesel fuel to reduce soot emissions. The production of, or in some cases the use of, the following substances may result in exposure to 1-butanol: artificial leather, butyl esters, rubber cement, dyes, fruit essences, lacquers, motion picture and photographic films, raincoats, perfumes, pyroxylin plastics, rayon, safety glass, shellac varnish, and waterproofed cloth.
Occurrence in nature Butan-1-ol occurs naturally as a result of carbohydrate fermentation in a number of alcoholic beverages, including beer, grape brandies, wine, and whisky. It has been detected in the volatiles of hops, jack fruit, heat-treated milks, musk melon, cheese, southern pea seed, and cooked rice. 1-Butanol is also formed during deep frying of corn oil, cottonseed oil, trilinolein, and triolein. Butan-1-ol is one of the "fusel alcohols" (from the German for "bad liquor"), which include alcohols that have more than two carbon atoms and have significant solubility in water. It is a natural component of many alcoholic beverages, albeit in low and variable concentrations. It (along with similar fusel alcohols) is reputed to be responsible for severe hangovers, although experiments in animal models show no evidence for this. 1-Butanol is used as an ingredient in processed and artificial flavorings, and for the extraction of lipid-free protein from egg yolk, natural flavouring materials and vegetable oils, the manufacture of hop extract for beermaking, and as a solvent in removing pigments from moist curd leaf protein concentrate. Metabolism and toxicity The acute toxicity of 1-butanol is relatively low, with oral LD50 values of 790–4,360 mg/kg (rat; comparable values for ethanol are 7,000–15,000 mg/kg). It is metabolized completely in vertebrates in a manner similar to ethanol: alcohol dehydrogenase converts 1-butanol to butyraldehyde; this is then converted to butyric acid by aldehyde dehydrogenase. Butyric acid can be fully metabolized to carbon dioxide and water by the β-oxidation pathway. In the rat, only 0.03% of an oral dose of 2,000 mg/kg was excreted in the urine. At sub-lethal doses, 1-butanol acts as a depressant of the central nervous system, similar to ethanol: one study in rats indicated that the intoxicating potency of 1-butanol is about 6 times higher than that of ethanol, possibly because of its slower transformation by alcohol dehydrogenase. Other hazards Liquid 1-butanol, as is common with most organic solvents, is extremely irritating to the eyes; repeated contact with the skin can also cause irritation. This is believed to be a generic effect of defatting. No skin sensitization has been observed. Irritation of the respiratory pathways occurs only at very high concentrations (>2,400 ppm). With a flash point of 35 °C, 1-butanol presents a moderate fire hazard: it is slightly more flammable than kerosene or diesel fuel but less flammable than many other common organic solvents. The depressant effect on the central nervous system (similar to ethanol intoxication) is a potential hazard when working with 1-butanol in enclosed spaces, although the odour threshold (0.2–30 ppm) is far below the concentration which would have any neurological effect. See also Butanol fuel External links References Alcohol solvents Primary alcohols GABAA receptor positive allosteric modulators Sedatives Hypnotics Alkanols Butyl compounds
1-Butanol
[ "Biology" ]
1,603
[ "Hypnotics", "Behavior", "Sleep" ]
5,419,082
https://en.wikipedia.org/wiki/Ettore%20Perazzoli
Ettore Perazzoli (June 15, 1974 – December 10, 2003) was an Italian free software developer. Biography Born in Milan, Italy, he studied engineering at the Politecnico di Milano university. He wrote a port of x64, a Commodore 64 emulator for Unix, to DOS, thus turning it into a cross-platform emulator, which was renamed VICE. He was a maintainer of VICE for many years, and started the Microsoft Windows port, which became the most popular version of VICE. He then started contributing to GNOME, a Linux desktop environment. He helped write GtkHTML, Nautilus and Evolution. A close friend of Nat Friedman and Miguel de Icaza, he was invited by them to work for Ximian, the company they founded. He accepted and in 2001 moved to Boston, United States, where Ximian was headquartered. At Ximian he led the effort to create Evolution and remained the project manager until he died. For personal use, he started writing an application in C# for managing digital photo albums. On November 8, 2003, he published it on the GNOME Concurrent Versions System (CVS) server under the name F-Spot. On December 12, 2003, the GnomeDesktop.org website announced his death. External links Article on Barrapunto about Ettore Perazzoli's death (in Spanish) 1974 births 2003 deaths GNOME developers Polytechnic University of Milan alumni
Ettore Perazzoli
[ "Technology" ]
300
[ "Computing stubs", "Computer specialist stubs" ]
5,419,130
https://en.wikipedia.org/wiki/Lipid%20signaling
Lipid signaling, broadly defined, refers to any biological cell signaling event involving a lipid messenger that binds a protein target, such as a receptor, kinase or phosphatase, which in turn mediates the effects of these lipids on specific cellular responses. Lipid signaling is thought to be qualitatively different from other classical signaling paradigms (such as monoamine neurotransmission) because lipids can freely diffuse through membranes (see osmosis). One consequence of this is that lipid messengers cannot be stored in vesicles prior to release and so are often biosynthesized "on demand" at their intended site of action. As such, many lipid signaling molecules cannot circulate freely in solution but, rather, exist bound to special carrier proteins in serum. Sphingolipid second messengers Ceramide Ceramide (Cer) can be generated by the breakdown of sphingomyelin (SM) by sphingomyelinases (SMases), which are enzymes that hydrolyze the phosphocholine group from the sphingosine backbone. Alternatively, this sphingosine-derived lipid (sphingolipid) can be synthesized from scratch (de novo) by the enzymes serine palmitoyl transferase (SPT) and ceramide synthase in organelles such as the endoplasmic reticulum (ER) and possibly in the mitochondria-associated membranes (MAMs) and the perinuclear membranes. Being located at a metabolic hub, ceramide leads to the formation of other sphingolipids, with the C1 hydroxyl (-OH) group as the major site of modification. A sugar can be attached to ceramide (glycosylation) through the action of the enzymes glucosyl or galactosyl ceramide synthases. Ceramide can also be broken down by enzymes called ceramidases, leading to the formation of sphingosine. Moreover, a phosphate group can be attached to ceramide (phosphorylation) by the enzyme ceramide kinase. It is also possible to regenerate sphingomyelin from ceramide by accepting a phosphocholine headgroup from phosphatidylcholine (PC) by the action of an enzyme called sphingomyelin synthase. The latter process results in the formation of diacylglycerol (DAG) from PC. Ceramide contains two hydrophobic ("water-fearing") chains and a neutral headgroup. Consequently, it has limited solubility in water and is restricted within the organelle where it was formed. Also, because of its hydrophobic nature, ceramide readily flip-flops across membranes, as supported by studies in membrane models and membranes from red blood cells (erythrocytes). However, ceramide can possibly interact with other lipids to form bigger regions called microdomains which restrict its flip-flopping abilities. This could have immense effects on the signaling functions of ceramide, because it is known that ceramide generated by acidic SMase enzymes in the outer leaflet of an organelle membrane may have different roles compared to ceramide that is formed in the inner leaflet by the action of neutral SMase enzymes. Ceramide mediates many cell-stress responses, including the regulation of programmed cell death (apoptosis) and cell aging (senescence). Much research has focused on defining the direct protein targets of action of ceramide. These include enzymes called ceramide-activated Ser-Thr phosphatases (CAPPs), such as protein phosphatases 1 and 2A (PP1 and PP2A), which were found to interact with ceramide in studies done in a controlled environment outside of a living organism (in vitro).
On the other hand, studies in cells have shown that ceramide-inducing agents such as tumor necrosis factor α (TNFα) and palmitate induce the ceramide-dependent removal of a phosphate group (dephosphorylation) from the retinoblastoma gene product RB and from the enzymes protein kinase B (PKB, of the AKT protein family) and protein kinase Cα (PKCα). Moreover, there is also sufficient evidence implicating ceramide in the activation of the kinase suppressor of Ras (KSR), PKCζ, and cathepsin D. Cathepsin D has been proposed as the main target for ceramide formed in organelles called lysosomes, making lysosomal acidic SMase enzymes one of the key players in the mitochondrial pathway of apoptosis. Ceramide was also shown to activate PKCζ, implicating it in the inhibition of AKT, regulation of the voltage difference between the interior and exterior of the cell (membrane potential), and signaling functions that favor apoptosis. Chemotherapeutic agents such as daunorubicin and etoposide enhance the de novo synthesis of ceramide in studies done on mammalian cells. The same results were found for certain inducers of apoptosis, particularly stimulators of receptors in a class of lymphocytes (a type of white blood cell) called B-cells. Regulation of the de novo synthesis of ceramide by palmitate may have a key role in diabetes and the metabolic syndrome. Experimental evidence shows that there is a substantial increase in ceramide levels upon adding palmitate. Ceramide accumulation activates PP2A and the subsequent dephosphorylation and inactivation of AKT, a crucial mediator in metabolic control and insulin signaling. This results in a substantial decrease in insulin responsiveness (i.e. to glucose) and in the death of insulin-producing cells in the pancreas called islets of Langerhans. Inhibition of ceramide synthesis in mice via drug treatments or gene-knockout techniques prevented insulin resistance induced by fatty acids, glucocorticoids or obesity. An increase in in vitro activity of acid SMase has been observed after applying multiple stress stimuli such as ultraviolet (UV) and ionizing radiation, binding of death receptors, and chemotherapeutic agents such as platinum, histone deacetylase inhibitors and paclitaxel. In some studies, SMase activation results in its transport to the plasma membrane and the simultaneous formation of ceramide. Ceramide transfer protein (CERT) transports ceramide from the ER to the Golgi for the synthesis of SM. CERT is known to bind phosphatidylinositol phosphates, hinting at its potential regulation via phosphorylation, a step of ceramide metabolism that can be enzymatically regulated by protein kinases and phosphatases, and by inositol lipid metabolic pathways. To date, there are at least 26 distinct enzymes, with varied subcellular localizations, that act on ceramide as either a substrate or product. Regulation of ceramide levels can therefore be performed by one of these enzymes in distinct organelles by particular mechanisms at various times. Sphingosine Sphingosine (Sph) is formed by the action of ceramidase (CDase) enzymes on ceramide in the lysosome. Sph can also be formed on the extracellular (outer leaflet) side of the plasma membrane by the action of a neutral CDase enzyme. Sph then is either recycled back to ceramide or phosphorylated by one of the sphingosine kinase enzymes, SK1 and SK2.
The product sphingosine-1-phosphate (S1P) can be dephosphorylated in the ER to regenerate sphingosine by certain S1P phosphatase enzymes within cells, where the salvaged Sph is recycled to ceramide. Sphingosine is a single-chain lipid (usually 18 carbons in length), rendering it sufficiently soluble in water. This explains its ability to move between membranes and to flip-flop across a membrane. Estimates conducted at physiological pH show that approximately 70% of sphingosine remains in membranes while the remaining 30% is water-soluble. Sph that is formed has sufficient solubility in the liquid found inside cells (cytosol). Thus, Sph may come out of the lysosome and move to the ER without the need for transport via proteins or membrane-enclosed sacs called vesicles. However, its positive charge favors partitioning into lysosomes. It is proposed that the role of SK1 located near or in the lysosome is to ‘trap’ Sph via phosphorylation. Since sphingosine exerts surfactant activity, it is one of the sphingolipids found at the lowest cellular levels. The low levels of Sph, and their increase in response to stimulation of cells, primarily by activation of ceramidase by growth-inducing proteins such as platelet-derived growth factor and insulin-like growth factor, are consistent with its function as a second messenger. It was found that immediate hydrolysis of only 3 to 10% of newly generated ceramide may double the levels of Sph. Treatment of HL60 cells (a type of leukemia cell line) with a plant-derived organic compound called phorbol ester increased Sph levels threefold, whereby the cells differentiated into white blood cells called macrophages. Treatment of the same cells with exogenous Sph caused apoptosis. A specific protein kinase, otherwise known as sphingosine-dependent protein kinase 1 (SDK1), phosphorylates 14-3-3 only in the presence of Sph. Sph is also known to interact with protein targets such as the protein kinase H homologue (PKH) and the yeast protein kinase (YPK). These targets in turn mediate the effects of Sph and its related sphingoid bases, with known roles in regulating the actin cytoskeleton, endocytosis, the cell cycle and apoptosis. It is important to note, however, that the second messenger function of Sph is not yet established unambiguously. Sphingosine-1-Phosphate Sphingosine-1-phosphate (S1P), like Sph, is composed of a single hydrophobic chain and has sufficient solubility to move between membranes. S1P is formed by phosphorylation of sphingosine by sphingosine kinase (SK). The phosphate group of the product can be detached (dephosphorylated) to regenerate sphingosine via S1P phosphatase enzymes within cells, or S1P can be broken down by S1P lyase enzymes to ethanolamine phosphate and hexadecenal. Similar to Sph, its second messenger function is not yet clear. However, there is substantial evidence implicating S1P in cell survival, cell migration, and inflammation. Certain growth-inducing proteins such as platelet-derived growth factor (PDGF), insulin-like growth factor (IGF) and vascular endothelial growth factor (VEGF) promote the formation of SK enzymes, leading to increased levels of S1P. Other factors that induce SK include cellular communication molecules called cytokines, such as tumor necrosis factor α (TNFα) and interleukin-1 (IL-1), hypoxia or lack of oxygen supply in cells, oxidized low-density lipoproteins (oxLDL) and several immune complexes.
S1P is probably formed at the inner leaflet of the plasma membrane in response to TNFα and other receptor activity-altering compounds called agonists. S1P, being present in low nanomolar concentrations in the cell, has to interact with high-affinity receptors that are capable of sensing its low levels. So far, the only identified receptors for S1P are the high-affinity G protein-coupled receptors (GPCRs), also known as S1P receptors (S1PRs). S1P is required to reach the extracellular side (outer leaflet) of the plasma membrane to interact with S1PRs and launch typical GPCR signaling pathways. However, the zwitterionic headgroup of S1P makes it unlikely to flip-flop spontaneously. To overcome this difficulty, the ATP-binding cassette (ABC) transporter C1 (ABCC1) serves as the "exit door" for S1P. On the other hand, the cystic fibrosis transmembrane regulator (CFTR) serves as the means of entry for S1P into the cell. In contrast to its low intracellular concentration, S1P is found in high nanomolar concentrations in serum, where it is bound to albumin and lipoproteins. Inside the cell, S1P can induce calcium release independent of the S1PRs—the mechanism of which remains unknown. To date, the intracellular molecular targets for S1P are still unidentified. The SK1-S1P pathway has been extensively studied in relation to cytokine action, with multiple functions connected to effects of TNFα and IL-1 favoring inflammation. Studies show that knockdown of key enzymes such as S1P lyase and S1P phosphatase increased prostaglandin production, in parallel with the increase of S1P levels. This strongly suggests that S1P, and not its downstream products, is the mediator of SK1 action. Research done on endothelial and smooth muscle cells is consistent with the hypothesis that S1P has a crucial role in regulating endothelial cell growth and movement. Recent work on a sphingosine analogue, FTY720, demonstrates its ability to act as a potent agonist of S1P receptors. FTY720 was further verified in clinical tests to have roles in immune modulation, such as in multiple sclerosis. This highlights the importance of S1P in the regulation of lymphocyte function and immunity. Most of the studies on S1P are used to further understand diseases such as cancer, arthritis and inflammation, diabetes, immune function and neurodegenerative disorders. Glucosylceramide Glucosylceramides (GluCer) are the most widely distributed glycosphingolipids in cells, serving as precursors for the formation of over 200 known glycosphingolipids. GluCer is formed by the glycosylation of ceramide in an organelle called the Golgi apparatus via enzymes called glucosylceramide synthase (GCS), or by the breakdown of complex glycosphingolipids (GSLs) through the action of specific hydrolase enzymes. In turn, certain β-glucosidases hydrolyze these lipids to regenerate ceramide. GluCer appears to be synthesized in the inner leaflet of the Golgi. Studies show that GluCer has to flip to the inside of the Golgi or transfer to the site of GSL synthesis to initiate the synthesis of complex GSLs. Transferring to the GSL synthesis site is done with the help of a transport protein known as four phosphate adaptor protein 2 (FAPP2), while the flipping to the inside of the Golgi is made possible by the ABC transporter P-glycoprotein, also known as the multi-drug resistance 1 transporter (MDR1). GluCer is implicated in post-Golgi trafficking and drug resistance, particularly to chemotherapeutic agents.
For instance, a study demonstrated a correlation between cellular drug resistance and modifications in GluCer metabolism. In addition to their role as building blocks of biological membranes, glycosphingolipids have long attracted attention because of their supposed involvement in cell growth, differentiation, and formation of tumors. The production of GluCer from Cer was found to be important in the growth of neurons or brain cells. On the other hand, pharmacological inhibition of GluCer synthase is being considered as a technique to avoid insulin resistance. Ceramide-1-Phosphate Ceramide-1-phosphate (C1P) is formed by the action of ceramide kinase (CK) enzymes on Cer. C1P carries an ionic charge at neutral pH and contains two hydrophobic chains, making it relatively insoluble in aqueous environments. Thus, C1P resides in the organelle where it was formed and is unlikely to spontaneously flip-flop across membrane bilayers. C1P activates phospholipase A2 and is found, along with CK, to be a mediator of arachidonic acid release in cells in response to a protein called interleukin-1β (IL-1β) and a lipid-soluble molecule that transports calcium ions (Ca2+) across the bilayer, also known as a calcium ionophore. C1P was also previously reported to encourage cell division (i.e., to be mitogenic) in fibroblasts, block apoptosis by inhibiting acid SMase in white blood cells within tissues (macrophages), and increase intracellular free calcium concentrations in thyroid cells. C1P also has known roles in vesicular trafficking, cell survival, phagocytosis ("cell eating") and macrophage degranulation. Phosphatidylinositol bisphosphate (PIP2) Lipid Agonist PIP2 binds directly to ion channels and modulates their activity. PIP2 was shown to directly agonize inward rectifying potassium channels (Kir). In this regard, intact PIP2 signals as a bona fide neurotransmitter-like ligand. PIP2's interaction with many ion channels suggests that the intact form of PIP2 has an important signaling role independent of second messenger signaling. Second messengers from phosphatidylinositol Phosphatidylinositol bisphosphate (PIP2) Second Messenger Systems A general second messenger system mechanism can be broken down into four steps. First, the agonist activates a membrane-bound receptor. Second, the activated G-protein produces a primary effector. Third, the primary effector stimulates second messenger synthesis. Fourth, the second messenger activates a certain cellular process. The G-protein coupled receptors for the PIP2 messenger system produce two effectors, phospholipase C (PLC) and phosphoinositide 3-kinase (PI3K). PLC as an effector produces two different second messengers, inositol triphosphate (IP3) and diacylglycerol (DAG). IP3 is soluble and diffuses freely into the cytoplasm. As a second messenger, it is recognized by the inositol triphosphate receptor (IP3R), a Ca2+ channel in the endoplasmic reticulum (ER) membrane, which stores intracellular Ca2+. The binding of IP3 to IP3R releases Ca2+ from the ER into the normally Ca2+-poor cytoplasm, which then triggers various events of Ca2+ signaling. Specifically in blood vessels, the increase in Ca2+ concentration from IP3 releases nitric oxide, which then diffuses into the smooth muscle tissue and causes relaxation. DAG remains bound to the membrane by its fatty acid "tails", where it recruits and activates both conventional and novel members of the protein kinase C family. Thus, both IP3 and DAG contribute to activation of PKCs.
Phosphoinositide 3-kinase (PI3K) as an effector phosphorylates phosphatidylinositol bisphosphate (PIP2) to produce phosphatidylinositol (3,4,5)-trisphosphate (PIP3). PIP3 has been shown to activate protein kinase B, increase binding to extracellular proteins and ultimately enhance cell survival. Activators of G-protein coupled receptors See main article on G-protein coupled receptors Lysophosphatidic acid (LPA) LPA is the result of phospholipase A2 action on phosphatidic acid. The SN-1 position can contain either an ester bond or an ether bond, with ether LPA being found at elevated levels in certain cancers. LPA binds the high-affinity G-protein coupled receptors LPA1, LPA2, and LPA3 (also known as EDG2, EDG4, and EDG7, respectively). Sphingosine-1-phosphate (S1P) S1P is present at high concentrations in plasma and is secreted locally at elevated concentrations at sites of inflammation. It is formed by the regulated phosphorylation of sphingosine. It acts through five dedicated high-affinity G-protein coupled receptors, S1P1 - S1P5. Targeted deletion of S1P1 results in lethality in mice, and deletion of S1P2 results in seizures and deafness. Additionally, a mere 3- to 5-fold elevation in serum S1P concentrations induces sudden cardiac death by an S1P3-receptor specific mechanism. Platelet activating factor (PAF) PAF is a potent activator of platelet aggregation, inflammation, and anaphylaxis. It is similar to the ubiquitous membrane phospholipid phosphatidylcholine, except that it contains an acetyl group at the SN-2 position and an ether linkage at the SN-1 position. PAF signals through a dedicated G-protein coupled receptor, PAFR, and is inactivated by PAF acetylhydrolase. Endocannabinoids The endogenous cannabinoids, or endocannabinoids, are endogenous lipids that activate cannabinoid receptors. The first such lipid to be isolated was anandamide, which is the arachidonoyl amide of ethanolamine. Anandamide is formed via enzymatic release from N-arachidonoyl phosphatidylethanolamine by the N-acyl phosphatidylethanolamine phospholipase D (NAPE-PLD). Anandamide activates both the CB1 receptor, found primarily in the central nervous system, and the CB2 receptor, which is found primarily in lymphocytes and the periphery. It is found at very low levels (nM) in most tissues and is inactivated by the fatty acid amide hydrolase. Subsequently, another endocannabinoid was isolated, 2-arachidonoylglycerol, which is produced when phospholipase C releases diacylglycerol, which is then converted to 2-AG by diacylglycerol lipase. 2-AG can also activate both cannabinoid receptors and is inactivated by monoacylglycerol lipase. It is present at approximately 100 times the concentration of anandamide in most tissues. Elevations in either of these lipids cause analgesia and anti-inflammation and tissue protection during states of ischemia, but the precise roles played by these various endocannabinoids are still not totally known, and intensive research into their function, metabolism, and regulation is ongoing. One saturated lipid from this class, often called an endocannabinoid but with no relevant affinity for the CB1 and CB2 receptors, is palmitoylethanolamide. This signaling lipid has great affinity for the GPR55 receptor and the PPAR alpha receptor. It was identified as an anti-inflammatory compound as early as 1957, and as an analgesic compound in 1975. Rita Levi-Montalcini first identified one of its biological mechanisms of action, the inhibition of activated mast cells.
Palmitoylethanolamide is the only endocannabinoid available on the market for treatment, as a food supplement. Prostaglandins Prostaglandins are formed through oxidation of arachidonic acid by cyclooxygenases and other prostaglandin synthases. There are currently nine known G-protein coupled receptors (eicosanoid receptors) that largely mediate prostaglandin physiology (although some prostaglandins activate nuclear receptors, see below). FAHFA FAHFAs (fatty acid esters of hydroxy fatty acids) are formed in adipose tissue, improve glucose tolerance, and also reduce adipose tissue inflammation. Palmitic acid esters of hydroxy-stearic acids (PAHSAs) are among the most bioactive members, able to activate the G-protein coupled receptor 120 (GPR120). The docosahexaenoic acid ester of hydroxy-linoleic acid (DHAHLA) exerts anti-inflammatory and pro-resolving properties. Retinol derivatives Retinaldehyde is a retinol (vitamin A) derivative responsible for vision. It combines with opsin to form rhodopsin, a well-characterized GPCR that holds 11-cis-retinal in its inactive state. Upon photoisomerization by a photon, the 11-cis-retinal is converted to all-trans-retinal, causing activation of rhodopsin; the resulting cascade changes the membrane potential of the photoreceptor neuron (a hyperpolarization in vertebrates), thereby enabling visual perception. Activators of nuclear receptors See the main article on nuclear receptors Steroid Hormones This large and diverse class of steroids is biosynthesized from isoprenoids and structurally resembles cholesterol. Mammalian steroid hormones can be grouped into five groups by the receptors to which they bind: glucocorticoids, mineralocorticoids, androgens, estrogens, and progestogens. Retinoic acid Retinol (vitamin A) can be metabolized to retinoic acid, which activates nuclear receptors such as the RAR to control differentiation and proliferation of many types of cells during development. Prostaglandins The majority of prostaglandin signaling occurs via GPCRs (see above), although certain prostaglandins activate nuclear receptors in the PPAR family. (See article eicosanoid receptors for more information). See also Allostery Cell signaling Protein dynamics Lysophospholipid receptors List of signaling molecule types References Signal transduction
Lipid signaling
[ "Chemistry", "Biology" ]
5,567
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
5,419,330
https://en.wikipedia.org/wiki/Bancroft%20rule
The Bancroft rule in colloidal chemistry states: "The phase in which an emulsifier is more soluble constitutes the continuous phase." This means that water-soluble surfactants tend to give oil-in-water emulsions and oil-soluble surfactants give water-in-oil emulsions. It is a general rule of thumb, still in use, but regarded as inferior to HLD (hydrophilic-lipophilic difference) theory, which takes many more factors into consideration. It was named after Wilder Dwight Bancroft, an American physical chemist, who proposed the rule in the 1910s. Technical details In all typical emulsions, tiny particles (the discrete phase) are suspended in a liquid (the continuous phase). In an oil-in-water emulsion, oil is the discrete phase, while water is the continuous phase. The Bancroft rule states that, contrary to common intuition, what makes an emulsion oil-in-water or water-in-oil is not the relative percentages of oil and water, but the phase in which the emulsifier is more soluble. Even if a formulation is 60% oil and 40% water, if the chosen emulsifier is more soluble in water, it will create an oil-in-water system. There are some exceptions to Bancroft's rule, but it is a very useful rule of thumb for most systems. The hydrophilic-lipophilic balance (HLB) of a surfactant can be used to determine whether it is a good choice for the desired emulsion. In oil-in-water emulsions, use emulsifying agents that are more soluble in water than in oil (high-HLB surfactants). In water-in-oil emulsions, use emulsifying agents that are more soluble in oil than in water (low-HLB surfactants). Bancroft's rule suggests that the type of emulsion is dictated by the emulsifier and that the emulsifier should be soluble in the continuous phase. This empirical observation can be rationalized by considering the interfacial tension at the oil-surfactant and water-surfactant interfaces. See also Azeotrope Bancroft point References Colloidal chemistry
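The HLB guidance above amounts to a one-line decision rule. The sketch below is a minimal illustration assuming the commonly quoted HLB ranges (roughly 3-6 favoring water-in-oil, 8-18 favoring oil-in-water); the function name and cutoff values are chosen for the example, and serious formulation work would rely on HLB tables or HLD theory instead.

# Illustrative decision rule for the Bancroft/HLB heuristic described above.
# The numeric cutoffs are commonly quoted rules of thumb, not exact science.
def likely_emulsion_type(hlb: float) -> str:
    if hlb <= 6:   # oil-soluble surfactant -> oil tends to be continuous
        return "water-in-oil (W/O)"
    if hlb >= 8:   # water-soluble surfactant -> water tends to be continuous
        return "oil-in-water (O/W)"
    return "borderline: the rule of thumb alone cannot decide"

print(likely_emulsion_type(4.3))   # e.g. sorbitan monooleate (Span 80) -> W/O
print(likely_emulsion_type(16.7))  # e.g. polysorbate 20 (Tween 20) -> O/W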
Bancroft rule
[ "Chemistry" ]
475
[ "Colloidal chemistry", "Surface science", "Colloids" ]
5,419,984
https://en.wikipedia.org/wiki/Torsion%20pendulum%20clock
"Kundo" redirects here. For other uses, see Kundo (disambiguation). A torsion pendulum clock, more commonly known as an anniversary clock or 400-day clock, is a mechanical clock which keeps time with a mechanism called a torsion pendulum. This is a weighted disk or wheel, often a decorative wheel with three or four chrome balls on ornate spokes, suspended by a thin wire or ribbon called a torsion spring (also known as a "suspension spring"). The torsion pendulum rotates about the vertical axis of the wire, twisting it, instead of swinging like an ordinary pendulum. The force of the twisting torsion spring reverses the direction of rotation, so the torsion pendulum oscillates slowly, clockwise and counterclockwise. The clock's gears apply a pulse of torque to the top of the torsion spring with each rotation to keep the wheel going. The Atmos clock made by the Swiss company Jaeger-LeCoultre is another style of this clock. The wheel and torsion spring function similarly to a watch's balance wheel and hairspring, as a harmonic oscillator to control the rate of the clock's hands. Description Torsion clocks are unusually delicate, ornamental machines which require stable conditions to operate properly. The clocks are protected from the vagaries of air currents by a glass dome. Clocks of this style were first made by Anton Harder around 1880, and they are also known as 400-day or anniversary clocks because many can run for an entire year on a single winding. Mechanism Torsion clocks are capable of running much longer between windings than clocks with an ordinary pendulum, because the torsion pendulum rotates slowly and takes little energy. However, they can be difficult to set up, and are usually not as accurate as clocks with ordinary pendulums. One reason is that the oscillation period of the torsion pendulum changes with temperature due to the temperature-dependent elasticity of the spring. Nivarox suspension spring wire is now standard; this makes the clock much more accurate. The clock can be made faster or slower by an adjustment screw mechanism on the torsion pendulum that moves the weight balls in or out from the axis. The closer in the balls are, the smaller the moment of inertia of the torsion pendulum and the faster it will turn, which causes the clock to speed up. One oscillation of the torsion pendulum usually takes 12, 15, or 20 seconds. The escapement mechanism, which changes the rotational motion of the clock's gears to pulses to drive the torsion pendulum, works rather like an anchor escapement. A crutch device at the top of the torsion spring engages a lever with two anchor-shaped arms; the two arms alternately engage the teeth of the escape wheel. As the anchor releases a tooth of the escape wheel, the lever, which is fixed to the anchor, moves to one side and, via the crutch, gives a small twist to the top of the torsion spring. This is just enough to keep the oscillation going. The Atmos clock, made by Jaeger-LeCoultre, is a type of torsion pendulum clock that winds itself. The mainspring which powers the clock's wheels is kept wound by small changes in atmospheric pressure and/or local temperature, using a bellows mechanism. Thus no winding key or battery is needed, and it can run for years without human intervention. History The torsion pendulum was invented by Robert Leslie in 1793. The torsion pendulum clock was first invented and patented by the American Aaron Crane in 1841. He made clocks that would run up to one year on a winding.
He also made precision astronomical regulator clocks based on the torsion pendulum, but only four were sold. The German Anton Harder apparently independently invented and patented the torsion clock in 1879–1880. He was inspired by watching a hanging chandelier rotate after a servant had turned it to light the candles. He formed the firm Jahresuhrenfabrik ('Year Clock Factory') and designed a clock that would run for a year, but its accuracy was poor. He sold the patent in 1884 to F. A. L. deGruyter of Amsterdam, who allowed the patent to expire in 1887. Other firms entered the market, beginning the German mass production of these clocks. Although they were successful commercially, torsion clocks remained poor timekeepers. In 1951, Charles Terwilliger of the Horolovar Co. invented a temperature-compensating suspension spring, which allowed fairly accurate clocks to be made. Footnotes External links International 400-day Clock Chapter #168, NAWCC, retrieved Aug. 30, 2007. Torsion clock branch of large clock collectors club. Publishes quarterly journal Torsion Times. Torsion clock gallery, Horology Web Ring, webhorology.com, retrieved Aug. 30, 2007. Pictures of torsion clocks from several private collections. Torsion clock collection, Battersea Clock Home, Flickr.com, retrieved Aug. 29, 2007. Pictures of a collection of 30 anniversary clocks in London, UK. The Danish Telavox (later Clementa) battery driven torsion pendulum clock. A collector's guide illustrating the history and varieties of cases and movements from 1942-1977 (?). Movement (clockwork) Pendulums Clock designs Timekeeping
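The rate adjustment described in the Mechanism section follows directly from the torsion pendulum relation T = 2π·sqrt(I/κ): screwing the balls inward lowers the moment of inertia I and shortens the period. The Python sketch below illustrates this with hypothetical values of I and the torsion constant κ, chosen only so the period lands near the 15-second figure quoted above.

# Torsion pendulum period, T = 2*pi*sqrt(I/kappa); all values are assumed.
import math

def torsion_period(inertia: float, kappa: float) -> float:
    """Return the oscillation period in seconds for a torsion pendulum."""
    return 2 * math.pi * math.sqrt(inertia / kappa)

kappa = 1.4e-6   # torsion constant of the suspension spring, N*m/rad (assumed)
i_out = 8.0e-6   # balls screwed outward, kg*m^2 (assumed)
i_in  = 7.0e-6   # balls screwed inward: smaller moment of inertia (assumed)

print(torsion_period(i_out, kappa))  # about 15.0 s per oscillation
print(torsion_period(i_in, kappa))   # about 14.1 s: the clock runs faster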
Torsion pendulum clock
[ "Physics" ]
1,109
[ "Spacetime", "Timekeeping", "Physical quantities", "Time" ]
5,420,644
https://en.wikipedia.org/wiki/Mivacurium%20chloride
Mivacurium chloride (formerly recognized as BW1090U81, BW B1090U or BW1090U) is a short-duration non-depolarizing neuromuscular-blocking drug, or skeletal muscle relaxant, used adjunctively in anesthesia to facilitate endotracheal intubation and to provide skeletal muscle relaxation during surgery or mechanical ventilation. Structure Mivacurium is a symmetrical molecule existing as a mixture of three of twenty possible isomers: the isomerism stems from chirality at the C-1 carbon position of both the tetrahydroisoquinolinium rings, as well as both the positively charged nitrogen (onium) heads, and the E/Z diastereomerism at the C=C double bond of the oct-4-ene diester bridge. Thus, owing to the symmetry and chirality, the three isomers of mivacurium are (E)-1R,1'R,2R,2'R (identified as BW1217U84), (E)-1R,1'R,2R,2'S (BW1333U83) and (E)-1R,1'R,2S,2'S (BW1309U83). These are also known as cis-cis, cis-trans and trans-trans mivacurium. The proportions are: (E)-cis-cis, 6% of the mixture; (E)-cis-trans, 36%; and (E)-trans-trans, 56%. Unlike the cis-cis isomer of atracurium (also known as 51W89 and eventually produced as the drug cisatracurium), the cis-cis isomer of mivacurium has by far the lowest potency as a muscle relaxant when compared with its other two stereoisomers, having approximately 10% of the activity of each of the other two structures. Mivacurium belongs to a class of compounds that is commonly and erroneously referred to as "benzylisoquinolines"; mivacurium is in fact a bisbenzyltetrahydroisoquinolinium agent, often abbreviated to bbTHIQ. The orientation of the two O atoms in the bridge is to the THIQ side of the carbonyl C=O group, whereas in atracurium the O atom is on the bridge side; atracurium's groups are "reversed ester" linkages. Mivacurium's conventional ester orientation makes degradation by ester hydrolysis through plasma cholinesterase more favourable. Pharmacology Having ten methoxy (-OCH3) groups, mivacurium is a more potent neuromuscular blocking drug than atracurium (which has eight), but is less potent than doxacurium (which has twelve). Like other non-depolarizing neuromuscular blocking agents, the pharmacological action of mivacurium is antagonism of nicotinic acetylcholine receptors. However, unlike other non-depolarizing neuromuscular blockers, it is metabolized by plasma cholinesterase (similar to the depolarizing neuromuscular blocking agent succinylcholine). Availability Mivacurium is available worldwide. It became unavailable in the United States in 2006 due to manufacturing issues, but was reintroduced in 2016. History Mivacurium represents the second generation of tetrahydroisoquinolinium neuromuscular blocking drugs in a long lineage of nicotinic acetylcholine receptor antagonists synthesized by Mary M. Jackson and James C. Wisowaty, PhD (both chemists within the Chemical Development Laboratories at Burroughs Wellcome Co., Research Triangle Park, NC) in collaboration with John J. Savarese MD (who at the time was an anesthesiologist in the Dept. of Anesthesia, Harvard Medical School at the Massachusetts General Hospital, Boston, MA). Specifically, mivacurium was first synthesized in 1981. Early structure-activity studies had confirmed that the bulky nature of the "benzylisoquinolinium" entity provided a non-depolarizing mechanism of action.
Partial saturation of the benzylisoquinoline ring to the tetrahydroisoquinoline ring provided an even further increase in potency of the molecules without detrimental effects on other pharmacological properties: this key finding led to the rapid adoption of the tetrahydroisoquinolinium structures as a standard building block (along with a 1-benzyl attachment), and it is the primary reason why the continued, unwarranted reference to "benzylisoquinolinium" is a complete misnomer for all clinically introduced and currently used neuromuscular blocking agents in this class, because they are all, in fact, tetrahydroisoquinoline derivatives. By definition, therefore, there has never been, in the history of clinical anesthetic practice, the use of a benzylisoquinoline neuromuscular blocking agent. The heritage of mivacurium, and indeed of its very closely related cousin doxacurium chloride, harks back to the synthesis of numerous compounds following structure-activity relationships that drove researchers to find the ideal replacement for succinylcholine (suxamethonium). Both mivacurium and doxacurium are descendants of early vigorous attempts to synthesize potent non-depolarizing agents with pharmacophores derived from cross-combinations of the non-depolarizing agent laudexium and the well-known depolarizing agent succinylcholine (suxamethonium chloride). Ironically, laudexium itself was invented by a cross-combination between the prototypical non-depolarizing agent d-tubocurarine and the depolarizing agent decamethonium. From the 1950s through to the 1970s, the present-day concept of a neuromuscular blocking agent with a rapid onset and an ultra-short duration of action had not taken root: researchers and clinicians were still on the quest for potent non-depolarizing replacements devoid of the histamine release and the dreaded "recurarizing" effects seen with tubocurarine and, more importantly, devoid of the depolarizing mechanism of action seen with succinylcholine and decamethonium. Clinical pharmacology and pharmacokinetics The first clinical trial of mivacurium (BW1090U), in 1984, was conducted in a cohort of 63 US patients undergoing surgical anesthesia at the Harvard Medical School at Massachusetts General Hospital, Boston, MA. Preliminary data from the study indicated that this agent elicited considerably less severe histamine release than that observed with its immediate predecessors in clinical testing, BW785U77 and BWA444U, which were discontinued from further clinical development. Mivacurium did not exhibit the ultra-short duration of action seen with BW785U, whereas BWA444U produced an intermediate duration of action. Mivacurium is a biodegradable neuromuscular blocking agent owing to its degradation by plasma cholinesterases: the esterases rapidly hydrolyze one ester moiety initially, resulting in two mono-quaternary metabolites, of which one still has an intact ester moiety. The second ester is metabolized much more slowly, although the loss of the bis-quaternary structure effectively terminates the neuromuscular blocking action. References Nicotinic antagonists Norsalsolinol ethers Chlorides Carboxylate esters
Mivacurium chloride
[ "Chemistry" ]
1,673
[ "Chlorides", "Inorganic compounds", "Salts" ]
5,421,591
https://en.wikipedia.org/wiki/Artin%20billiard
In mathematics and physics, the Artin billiard is a type of dynamical billiard first studied by Emil Artin in 1924. It describes the geodesic motion of a free particle on the non-compact Riemann surface $\Gamma\backslash\mathbb{H}$, where $\mathbb{H}$ is the upper half-plane endowed with the Poincaré metric and $\Gamma = \mathrm{PSL}(2,\mathbb{Z})$ is the modular group. It can be viewed as the motion on the fundamental domain of the modular group with the sides identified. The system is notable in that it is an exactly solvable system that is strongly chaotic: it is not only ergodic, but is also strong mixing. As such, it is an example of an Anosov flow. Artin's paper used symbolic dynamics for analysis of the system. The quantum mechanical version of Artin's billiard is also exactly solvable. The eigenvalue spectrum consists of a bound state and a continuous spectrum above the energy $E = 1/4$. The wave functions are given by Bessel functions. Exposition The motion studied is that of a free particle sliding frictionlessly, namely, one having the Hamiltonian $H(p,q) = \frac{1}{2m} p_i p_j g^{ij}(q)$, where $m$ is the mass of the particle, $q^i$ are the coordinates on the manifold, $p_i = m g_{ij} \dot{q}^j$ are the conjugate momenta, and $g^{ij}(q)$ is the metric tensor on the manifold. Because this is the free-particle Hamiltonian, the solution to the Hamilton-Jacobi equations of motion is simply given by the geodesics on the manifold. In the case of the Artin billiard, the metric is given by the canonical Poincaré metric $ds^2 = (dx^2 + dy^2)/y^2$ on the upper half-plane. The non-compact Riemann surface $\Gamma\backslash\mathbb{H}$ is a symmetric space, and is defined as the quotient of the upper half-plane modulo the action of the elements of $\mathrm{PSL}(2,\mathbb{Z})$ acting as Möbius transformations. The set $\{z \in \mathbb{H} : |z| \ge 1,\ |\mathrm{Re}(z)| \le 1/2\}$ is a fundamental domain for this action. The manifold has, of course, one cusp. This is the same manifold, when taken as the complex manifold, that is the space on which elliptic curves and modular functions are studied. References E. Artin, "Ein mechanisches System mit quasi-ergodischen Bahnen", Abh. Math. Sem. d. Hamburgischen Universität, 3 (1924) pp. 170–175. Chaotic maps Ergodic theory
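In the coordinates $z = x + iy$ used above, the Hamiltonian takes a simple explicit form. The display below is a routine specialization of the formulas above (with the mass kept explicit), added for illustration; it is not quoted from Artin's paper.

\[
g_{ij} = \frac{1}{y^2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad
H = \frac{y^2}{2m}\left(p_x^2 + p_y^2\right).
\]

The geodesics of this metric are vertical lines and half-circles meeting the real axis orthogonally, folded into the fundamental domain by the action of $\mathrm{PSL}(2,\mathbb{Z})$.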
Artin billiard
[ "Mathematics" ]
455
[ "Functions and mappings", "Mathematical objects", "Ergodic theory", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
5,421,628
https://en.wikipedia.org/wiki/Feed%20dog
A feed dog is a movable plate which pulls fabric through a sewing machine in discrete steps between stitches. Action A set of feed dogs typically resembles two or three short, thin metal bars, crosscut with diagonal teeth, which move both front to back and up and down in slots in a sewing machine's needle plate. They move front to back to advance the fabric, gripped between the dogs and the presser foot, toward the needle, and up and down so that they recess at the end of their stroke, release the fabric, and remain recessed while returning, before emerging again to begin a new stroke. Name A mechanical dog is so named because it suggests the jaw or teeth of a dog (the animal) clamped onto an object, refusing to let go. This arrangement is called "drop feed" in reference to the way the dogs drop below the needle plate when returning for the next stroke. Allen B. Wilson invented it during the period 1850 to 1854, while also developing the rotary hook. Wilson called it a "four-motion feed", in reference to the four movements the dogs perform during one full stitch: up into the fabric, back to pull the fabric along to the next stitch, down out of the fabric and below the needle plate, and then forward to return to the starting position. Stitch length Virtually all drop-feed sewing machines can vary their stitch length; this is typically controlled by a lever or dial on the front of the machine. They are usually also capable of reversing the feed dogs' motion to pull the fabric backwards to form a backstitch. See also Dog (engineering) References Sewing machines
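Wilson's four-motion feed can be stated as a simple repeating sequence. The Python sketch below merely enumerates the four phases named above; the phase strings and the default stitch length are illustrative values, not taken from any machine's specification.

# Illustrative enumeration of Wilson's four-motion feed cycle.
PHASES = (
    "rise up into the fabric",
    "move back, pulling the fabric along to the next stitch",
    "drop below the needle plate, releasing the fabric",
    "return forward to the starting position",
)

def four_motion_cycle(stitch_length_mm: float = 2.5) -> float:
    """Run one full cycle; return the fabric advance it produces (mm)."""
    for phase in PHASES:
        print(phase)
    return stitch_length_mm

advanced = sum(four_motion_cycle() for _ in range(3))  # three stitches
print(f"fabric advanced: {advanced} mm")               # 7.5 mm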
Feed dog
[ "Physics", "Technology" ]
324
[ "Physical systems", "Machines", "Sewing machines" ]
5,422,424
https://en.wikipedia.org/wiki/Chorobates
The chorobates, described by Vitruvius in Book VIII of De architectura, was used to measure horizontal planes and was especially important in the construction of aqueducts. Similar to modern spirit levels, the chorobates consisted of a beam of wood 6 m in length held by two supporting legs and equipped with plumb lines at each end. The legs were joined to the beam by two diagonal rods with carved notches. If the notches corresponding to the plumb lines matched on both sides, it showed that the beam was level. On top of the beam, a groove or channel was carved. If conditions were too windy for the plumb bobs to work effectively, the surveyor could pour water into the groove and measure the plane by checking the water level. Isaac Moreno Gallo's interpretation of the chorobates Isaac Moreno Gallo, a Technical Engineer of Public Works specialized in Ancient Rome's civil engineering, claims that the present-day representation of the chorobates (in a table-like shape) is mistaken, owing to a misinterpretation derived from an incorrect translation of the Latin term "ancones" used by Vitruvius. "...ea habet ancones in capitibus extremis aequali modo perfectos inque regulae capitibus ad nomam coagmentatos..." In this context, "ancones" could be translated as "limbs" or "arms", but also as "brackets" or "corbels" (Spanish: ménsulas). According to Isaac Moreno, this vertical design is far more efficient for optical leveling and makes more sense from a topographer's point of view. Furthermore, it preserves the original length described by Vitruvius (20 feet, or 5.92 meters), which the table-like versions of the chorobates persistently ignore. This "vertical" chorobates was indeed the predominant interpretation of the instrument in the oldest recorded representations: the engravings of 1547 included in Jean Goujon's translation of Vitruvius's works into French; the first edition of Vitruvius's works in Spanish, by Miguel de Urrea, in 1582; and, a few years later, Juan de Lastanosa's publication of "The Twenty-One Books of Engineering and Machines" of Gianello della Torre. All three consistently represented the chorobates in a very similar way, until Claude Perrault's 1673 translation radically altered the vertically shaped stand with "ménsulas" and turned it into the horizontal, table-like chorobates (with "legs" instead of "brackets") that has become the standard representation today. In his "Ars Mensoria" series, Isaac Moreno Gallo recreates practical demonstrations of Roman topographic instruments using his own replicas, among them the chorobates. See also Groma Dioptra Chorography Odometer References M. J. T. Lewis, Surveying Instruments of Greece and Rome, Cambridge University Press, 2001, p. 31. Isaac Moreno Gallo, Topografía Romana, 2004, p. 43. Surveying and engineering in Ancient Rome Chorobates described Measuring instruments Surveying Ancient Greece Ancient Roman architecture
Chorobates
[ "Technology", "Engineering" ]
692
[ "Surveying", "Civil engineering", "Measuring instruments" ]
5,422,591
https://en.wikipedia.org/wiki/Hypoglycin%20B
Hypoglycin B is a naturally occurring organic compound in the species Blighia sapida. It is particularly concentrated in the fruit of the plant especially in the seeds. Hypoglycin B is toxic if ingested and is one of the causative agents of Jamaican vomiting sickness. It is a dipeptide of glutamic acid and hypoglycin A. References Alpha-Amino acids Dipeptides Dicarboxylic acids Toxic amino acids
Hypoglycin B
[ "Chemistry" ]
101
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
5,422,778
https://en.wikipedia.org/wiki/Cyclobutadieneiron%20tricarbonyl
Cyclobutadieneiron tricarbonyl is an organoiron compound with the formula Fe(C4H4)(CO)3. It is a yellow oil that is soluble in organic solvents. It has been used in organic chemistry as a precursor for cyclobutadiene, which is an elusive species in the free state. Preparation and structure Cyclobutadieneiron tricarbonyl was first prepared in 1965 by Pettit from 3,4-dichlorocyclobutene and diiron nonacarbonyl: C4H4Cl2 + 2 Fe2(CO)9 → (C4H4)Fe(CO)3 + 2 Fe(CO)5 + 5 CO + FeCl2 The compound is an example of a piano stool complex. The C-C distances are 1.426 Å. Properties Oxidative decomplexation of cyclobutadiene is achieved by treating the tricarbonyl complex with ceric ammonium nitrate. The released cyclobutadiene is trapped with a quinone, which functions as a dienophile. Cyclobutadieneiron tricarbonyl displays aromaticity, as evidenced by some of its reactions, which can be classified as electrophilic aromatic substitution: it undergoes Friedel-Crafts acylation with acetyl chloride and aluminium chloride to give the acyl derivative 2, reacts with formaldehyde and hydrochloric acid to give the chloromethyl derivative 3, undergoes a Vilsmeier-Haack reaction with N-methylformanilide and phosphorus oxychloride to give the formyl derivative 4, and undergoes a Mannich reaction to give the amine derivative 5. The reaction mechanism is identical to that of EAS. Related compounds Several years before Pettit's work, (C4Ph4)Fe(CO)3 had been prepared from the reaction of iron carbonyl and diphenylacetylene. (Butadiene)iron tricarbonyl is isoelectronic with cyclobutadieneiron tricarbonyl. History In 1956, Longuet-Higgins and Orgel predicted the existence of transition-metal cyclobutadiene complexes, in which the degenerate eg orbital of cyclobutadiene has the correct symmetry for π interaction with the dxz and dyz orbitals of a suitable metal. A cyclobutadiene complex was synthesized three years after the prediction; this is a case of theory before experiment. References Iron carbonyl complexes Half sandwich compounds Iron(0) compounds Diene complexes
Cyclobutadieneiron tricarbonyl
[ "Chemistry" ]
536
[ "Organometallic chemistry", "Half sandwich compounds" ]
5,423,297
https://en.wikipedia.org/wiki/ICMP%20tunnel
An ICMP tunnel establishes a covert connection between two remote computers (a client and proxy), using ICMP echo requests and reply packets. An example of this technique is tunneling complete TCP traffic over ping requests and replies. Technical details ICMP tunneling works by injecting arbitrary data into an echo packet sent to a remote computer. The remote computer replies in the same manner, injecting an answer into another ICMP packet and sending it back. The client performs all communication using ICMP echo request packets, while the proxy uses echo reply packets. In theory, it is possible to have the proxy use echo request packets (which makes implementation much easier), but these packets are not necessarily forwarded to the client, as the client could be behind a translated address (NAT). This bidirectional data flow can be abstracted as an ordinary serial line. ICMP tunneling is possible because RFC 792, which defines the structure of ICMP packets, allows for an arbitrary data length for any type 0 (echo reply) or type 8 (echo message) ICMP packets. Uses ICMP tunneling can be used to bypass firewall rules through obfuscation of the actual traffic. Depending on the implementation of the ICMP tunneling software, this type of connection can also be categorized as an encrypted communication channel between two computers. Without proper deep packet inspection or log review, network administrators will not be able to detect this type of traffic through their network. Mitigation One way to prevent this type of tunneling is to block ICMP traffic, at the cost of losing some network functionality that people usually take for granted (e.g. it might take tens of seconds to determine that a peer is offline, rather than almost instantaneously). Another method for mitigating this type of attack is to only allow fixed-size ICMP packets through firewalls, which can impede or eliminate this type of behavior. ICMP tunnels are sometimes used to circumvent firewalls that block traffic between the LAN and the outside world, for example by commercial Wi-Fi services that require the user to pay for usage, or by a library that requires the user to first log in at a web portal. If the network operator makes the erroneous assumption that it is enough to block only normal transport protocols like TCP and UDP, but not core protocols such as ICMP, it is sometimes possible to use an ICMP tunnel to access the internet despite not having been authorized for network access. Encryption and per-user rules that disallow users from exchanging ICMP packets (and all other types of packets, for example by using IEEE 802.1X) with external peers before authorization solve this problem. See also ICMPv6 Smurf attack References External links Internet Control Message Protocol itun Simple IP over ICMP tunnel Hans ICMP tunnel for Linux (server and client) and BSD MacOSX (client only) ICMP-Shell a telnet-like protocol using only ICMP PingTunnel Tunnel TCP over ICMP ICMP Crafting by Stuart Thomas Using the ICMP tunneling tool Ping Tunnel Project Loki Article on ping tunneling in Phrack ICMP tunnel with C# source code icmptunnel IP over ICMP tunnel by Dhaval Kapil Tunneling protocols Internet privacy
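The payload-injection idea described under "Technical details" can be made concrete with a few lines of Python. This is a minimal sketch, not a working tunnel: it only shows how arbitrary bytes ride in the data field of a type 8 echo request, as RFC 792 permits. The destination address is a documentation-range placeholder, and sending raw ICMP this way requires administrator privileges on most systems.

# Minimal sketch: arbitrary data carried in an ICMP echo request payload.
# Requires root/administrator privileges for the raw socket.
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over the ICMP header and payload."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """Type 8, code 0; the tunneled bytes simply ride in the payload."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

packet = build_echo_request(0x1234, 1, b"tunneled-data-goes-here")
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                     socket.getprotobyname("icmp"))
sock.sendto(packet, ("192.0.2.1", 0))  # 192.0.2.1: documentation address

A cooperating proxy would answer with type 0 (echo reply) packets built the same way, which is why simple firewalls that pass "ping" traffic also pass the tunnel.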
ICMP tunnel
[ "Engineering" ]
689
[ "Computer networks engineering", "Tunneling protocols" ]
5,423,382
https://en.wikipedia.org/wiki/Pesticides%20Safety%20Directorate
The Pesticides Safety Directorate was an agency of the Department for Environment, Food and Rural Affairs (Defra). It was based in York, England, with about 200 scientific, policy and support staff, and was responsible for the authorisation of plant protection products and, from 2005, detergents, in the United Kingdom. In April 2008, it joined the Health and Safety Executive (HSE), and in April 2009, became part of a newly formed Chemicals Regulation Directorate (CRD) at the HSE. Aims of the Pesticides Safety Directorate To ensure the safe use of pesticides and detergents for people and the environment. To harmonise pesticide regulation within the European Community and provide a level playing field for crop protection. As part of the strategy for sustainable food and farming, to reduce negative impacts of pesticides on the environment. References (sourced from the Wayback Machine) External links Home page of the Pesticides Safety Directorate official website as preserved by The National Archives on 19 September 2006. Stewardship Community - working together to promote the safe, effective use of pesticides Agricultural organisations based in the United Kingdom Defunct executive agencies of the United Kingdom government Organisations based in York Garden pests Pesticides in the United Kingdom Research institutes in North Yorkshire Pesticide regulation
Pesticides Safety Directorate
[ "Chemistry", "Biology" ]
258
[ "Pests (organism)", "Regulation of chemicals", "Pesticide regulation", "Garden pests" ]
5,424,051
https://en.wikipedia.org/wiki/Monogamy%20in%20animals
Some animal species have a monogamous mating system, in which pairs bond to raise offspring. This is associated, usually implicitly, with sexual monogamy. Monogamous mating Monogamy is defined as a pair bond between two adult animals of the same species. This pair may cohabitate in an area or territory for some duration of time and, in some cases, may copulate and reproduce only with each other. Monogamy may either be short-term, lasting one to a few seasons, or long-term, lasting many seasons and, in extreme cases, life-long. Monogamy can be partitioned into two categories, social monogamy and genetic monogamy, which may occur together in some combination or completely independently of one another. As an example, in the cichlid species Variabilichromis moorii, a monogamous pair will care for eggs and young together, but the eggs may not all be fertilized by the male giving the care. Monogamy in mammals is rather rare, occurring in only 3–9% of these species. A larger percentage of avian species (about 90%) are known to have monogamous relationships, but most avian species practice social rather than genetic monogamy, in contrast to what was previously assumed by researchers. Monogamy is quite rare in fish and amphibians, but not unheard of, appearing in a select few species. Social monogamy Social monogamy refers to the cohabitation of one male and one female. The two individuals may cooperate in search of resources such as food and shelter and/or in caring for young. Paternal care in monogamous species is commonly displayed through carrying, feeding, defending, and socializing offspring. With social monogamy, sexual fidelity between the male and the female is not necessarily expected: in its purest form, social monogamy describes a socially bonded pair that is nevertheless polygynous or polyandrous through extra-pair coupling. Social monogamy has been shown to increase fitness in prairie voles. It has been shown that female prairie voles live longer when paired with males in a socially monogamous relationship. This could be because the shared energy expenditure by the male and female lowers each individual's input. In largemouth bass, females are sometimes seen to exhibit cuckoo behavior by laying some of their eggs in another female's nest, thus "stealing" fertilizations from other females. Sexual conflicts that have been proposed to arise from social monogamy include infidelity and parental investment. The proposed conflict is derived from the conflict-centric differential allocation hypothesis, which states that there is a tradeoff between investment and attractiveness. Genetic monogamy Genetic monogamy refers to a mating system in which fidelity of the bonding pair is exhibited. Though individual pairs may be genetically monogamous, no one species has been identified as fully genetically monogamous. In some species, genetic monogamy has been enforced. Female voles have shown no difference in fecundity with genetic monogamy, but it may be enforced by males in some instances. Mate guarding is a typical tactic in monogamous species. It is present in many animal species and can sometimes be expressed in lieu of parental care by males. This may be for many reasons, including paternity assurance. Evolution of monogamy in animals While the evolution of monogamy in animals cannot be broadly ascertained, there are several theories as to how monogamy may have evolved. Anisogamy Anisogamy is a form of sexual reproduction which involves the fusion of two unequally-sized gametes.
In many animals, there are two sexes: the male, in which the gamete is small, motile, usually plentiful, and less energetically expensive, and the female, in which the gamete is larger, more energetically expensive, made at a lower rate, and largely immobile. Anisogamy is thought to have evolved from isogamy, the fusion of similar gametes, multiple times in many different species. The introduction of anisogamy has caused males and females to tend to have different optimal mating strategies. This is because males may increase their fitness by mating with many females, whereas females are limited by their own fecundity. Females are therefore typically more likely to be selective in choosing mates. Monogamy is suggested to limit fitness differences, as males and females mate in pairs. This would seem to be non-beneficial to males, but may not be in all cases. Several behaviors and ecological factors may have led to the evolution of monogamy as a relevant mating strategy. Partner and resource availability, enforcement, mate assistance, and territory defense may be some of the most prevalent factors affecting animal behavior. Facultative monogamy First introduced by Kleiman, facultative monogamy occurs when females are widely dispersed. This can occur either because females in a species tend to be solitary or because the distribution of available resources causes females to thrive when separated into distinct territories. In these instances, there is less of a chance for a given male to find multiple females to mate with. In such a case, it becomes more advantageous for a male to remain with a female, rather than seeking out another and risking (a) not finding another female and/or (b) not being able to fight off another male from interfering with his offspring by mating with the female or through infanticide. In these situations, male-to-male competition is reduced and female choice is limited. The end result is that mate choice is more random than in a denser population, which has a number of effects, including limiting dimorphism and sexual selection. With resource availability limited, mating with multiple mates may be harder because the density of individuals is lowered. The habitat cannot sustain multiple mates, so monogamy may be more prevalent. This is because resources may be found more easily by the pair than by the individual. The argument for resource availability has been supported in many species, but in several species monogamy remains apparent even once resource availability increases. With increased resource availability, males may offset the restriction of their fitness through several means. In instances of social monogamy, males may offset any lowered fitness through extra-pair coupling. Extra-pair coupling refers to males and females mating with several partners while raising offspring with only one mate. The male may not be related to all of the offspring of his main mate, but some of his offspring are being raised in other broods by other males and females, thereby offsetting any limitation of monogamy. Such males are cuckolds, but because they have other female sexual partners, they cuckold other males and increase their own fitness. Males exhibit parental care habits in order to be an acceptable mate to the female. Any males that do not exhibit parental care would not be accepted as sexual partners by socially monogamous females, in an enforcement pattern. Obligate monogamy Kleiman also offered a second theory.
In obligate monogamy, the driving force behind monogamy is a greater need for paternal investment. This theory assumes that, without biparental care, the fitness level of offspring would be greatly reduced. This paternal care may or may not be equal to that of the maternal care. Related to paternal care, some researchers have argued that infanticide is the true cause of monogamy. This theory has not garnered much support, however, and has been critiqued by several authors, including Lukas and Clutton-Brock, and Dixson. Enforcement Monogamous mating may also be caused simply by enforcement through tactics such as mate guarding. In these species, the males will prevent other males from copulating with their chosen female, or vice versa. Males will help to fend off other aggressive males and keep their mate for themselves. This is not seen in all species: in some primates, the female may be more dominant than the male and may not need help to avoid unwanted mating; the pair may still benefit from some form of mate assistance, however, and monogamy may therefore be enforced to ensure the assistance of males. Bi-parental care is not seen in all monogamous species, however, so this may not be the only cause of female enforcement. Mate assistance and territory defense In species where mate guarding is not needed, there may still be a need for the pair to protect each other. An example of this would be sentinel behavior in avian species. The main advantage of sentinel behavior is that many survival tactics are improved. As stated, the male or female will act as a sentinel and signal to their mate if a predator is present. This can lead to an increase in survivorship, foraging, and incubation of eggs. Male care for offspring is rather rare in some taxa. This is because males may increase their fitness by searching for multiple mates. Females are limited in fitness by their fecundity, so multiple mating does not affect their fitness to the same extent. Males have the opportunity to find a new mate earlier than females when there is internal fertilization or when the females exhibit the majority of the care for the offspring. When males care for offspring as well as females, it is referred to as bi-parental care. Bi-parental care may occur when there is a lower chance of survival of the offspring without male care. The evolution of this care has been associated with energetically expensive offspring. Bi-parental care is exhibited in many avian species. In these cases, the male has a greater chance to increase his own fitness by seeing that his offspring live long enough to reproduce. If the male is not present in these populations, the survivorship of the offspring is drastically lowered and male fitness falls with it. Without monogamy, bi-parental care is less common and there is an increased chance of infanticide. Infanticide with monogamous pairing would lead to a lowered fitness for socially monogamous males and is not seen to a wide extent. Consequences of monogamous mating Monogamy as a mating system in animals has been thought to lower the levels of some pre- and post-copulatory competition methods. Because of this reduction in competition, the regulation of certain morphological characteristics may in some instances be relaxed. This results in a vast variety of morphological and physiological differences, such as sexual dimorphism and sperm quality. Sexual dimorphism Sexual dimorphism denotes the differences in males and females of the same species.
Even in animals with seemingly no visible morphological sexual dimorphism, there is still dimorphism in the gametes. Among mammals, males have the smaller gametes and females have the larger gametes. As soon as the two sexes emerge, the dimorphism in gamete structure and size may lead to further dimorphism in the species. Sexual dimorphism is often caused through evolution in response to male-male competition and female choice. In polygamous species there is a noted sexual dimorphism, typically seen in the sexual-signaling aspects of morphology. Males typically exhibit these dimorphic traits, which generally help in signaling to females or in male-male competition. In monogamous species, sexual conflict is thought to be lessened, and typically little to no sexual dimorphism is noted, as there is less ornamentation and armor. This is because there is a relaxation of sexual selection. This may have something to do with a feedback loop caused by a low population density: if sexual selection is too strenuous in a population of low density, the population will shrink, and in the following generations sexual selection will become less and less relevant as mating becomes more random. A similar feedback loop is thought to occur for sperm quality in genetically monogamous pairs. Sperm quality Once anisogamy has emerged in a species due to gamete dimorphism, there is an inherent level of competition, at the very least in the form of sperm competition. Sperm competition is defined as a post-copulatory mode of sexual selection which causes the diversity of sperm across species. As soon as sperm and egg are the predominant mating types, there is an increase in the need for the male gametes. This is because there will be a large number of unsuccessful sperm, which cost a certain expenditure of energy without any benefit from the individual sperm. Sperm in polygamous species have evolved for size, speed, structure, and quantity under this competition, which selects for competitive traits that can be pre- or post-copulatory. In species where cryptic female choice is one of the main sources of competition, females are able to choose sperm from among various male suitors, and typically the sperm of the highest quality are selected. In genetically monogamous species it can be expected that sperm competition is absent or otherwise severely limited. There is no selection for the highest-quality sperm among the sperm of multiple males, and copulation is more random than in polygamous situations. Therefore, sperm quality in monogamous species shows a higher variation, and lower-quality sperm have been noted in several species. The lack of sperm competition is not advantageous for sperm quality. An example of this is the Eurasian bullfinch, which exhibits relaxed selection and little sperm competition. The sperm of these males have a lower velocity than that of other closely related but polygamous passerine bird species, and the number of abnormalities in sperm structure, length, and count is increased when compared to similar bird families. Animals The evolution of mating systems in animals has received an enormous amount of attention from biologists. This section briefly reviews three main findings about the evolution of monogamy in animals. The amount of social monogamy in animals varies across taxa, with over 90% of birds engaging in social monogamy while only 3–9% of mammals are known to do the same.
Other factors may also contribute to the evolution of social monogamy. Moreover, different sets of factors may explain the evolution of social monogamy in different species. There is no one-size-fits-all explanation of why different species evolved monogamous mating systems. Sexual dimorphism Sexual dimorphism refers to differences in body characteristics between females and males. A frequently studied type of sexual dimorphism is body size. For example, among mammals, males typically have larger bodies than females. In other orders, however, females have larger bodies than males. Sexual dimorphism in body size has been linked to mating behavior. In polygynous species, males compete for control over sexual access to females. Large males have an advantage in the competition for access to females, and they consequently pass their genes along to a greater number of offspring. This eventually leads to large differences in body size between females and males. Polygynous males are often 1.5 to 2.0 times larger than females. In monogamous species, on the other hand, females and males have more equal access to mates, so there is little or no sexual dimorphism in body size. From a newer biological point of view, monogamy could result from mate guarding, engaged in as a result of sexual conflict. Some researchers have attempted to infer the evolution of human mating systems from the evolution of sexual dimorphism. Several studies have reported a large amount of sexual dimorphism in Australopithecus, an evolutionary ancestor of human beings that lived between 2 and 5 million years ago. These studies raise the possibility that Australopithecus had a polygamous mating system. Sexual dimorphism then began to decrease. Studies suggest sexual dimorphism reached modern human levels around the time of Homo erectus, 0.5 to 2 million years ago. This line of reasoning suggests human ancestors started out polygamous and began the transition to monogamy somewhere between 0.5 million and 2 million years ago. Attempts to infer the evolution of monogamy based on sexual dimorphism remain controversial for three reasons: The skeletal remains of Australopithecus are quite fragmentary. This makes it difficult to identify the sex of the fossils. Researchers sometimes identify the sex of the fossils by their size, which, of course, can exaggerate findings of sexual dimorphism. Recent studies using new methods of measurement suggest Australopithecus had the same amount of sexual dimorphism as modern humans. This raises questions about the amount of sexual dimorphism in Australopithecus. Humans may have been partially unique in that selection pressures for sexual dimorphism might have been related to the new niches that humans were entering at the time, and to how those might have interacted with potential early cultures and tool use. If these early humans had a differentiation of gender roles, with men hunting and women gathering, selection pressures in favor of increased size may have been distributed unequally between the sexes. Even if future studies clearly establish sexual dimorphism in Australopithecus, other studies have shown that the relationship between sexual dimorphism and mating system is unreliable. Some polygamous species show little or no sexual dimorphism. Some monogamous species show a large amount of sexual dimorphism. Studies of sexual dimorphism raise the possibility that early human ancestors were polygamous rather than monogamous. But this line of research remains highly controversial.
It may be that early human ancestors showed little sexual dimorphism, and it may be that sexual dimorphism in early human ancestors had no relationship to their mating systems. Testis size The relative sizes of male testes often reflect mating systems. In species with promiscuous mating systems, where many males mate with many females, the testes tend to be relatively large. This appears to be the result of sperm competition. Males with large testes produce more sperm and thereby gain an advantage in impregnating females. In polygynous species, where one male controls sexual access to females, the testes tend to be small. One male defends exclusive sexual access to a group of females and thereby eliminates sperm competition. Studies of primates support the relationship between testis size and mating system. Chimpanzees, which have a promiscuous mating system, have large testes compared to other primates. Gorillas, which have a polygynous mating system, have smaller testes than other primates. Humans, which have a socially monogamous mating system, have moderately sized testes. The moderate amount of sexual non-monogamy in humans may result in a low to moderate amount of sperm competition. Monogamy as a best response In species where the young are particularly vulnerable and may benefit from protection by both parents, monogamy may be an optimal strategy. Monogamy also tends to occur when populations are small and dispersed, which is not conducive to polygamous behavior, as the male would spend far more time searching for another mate. Monogamous behavior allows the male to have a mate consistently, without having to waste energy searching for other females. Furthermore, there is an apparent connection between the time a male invests in his offspring and monogamous behavior. A male which is required to care for the offspring to ensure their survival is much more likely to exhibit monogamous behavior than one that is not. The selection factors in favor of different mating strategies for a species of animal, however, may potentially operate on a large number of factors throughout that animal's life cycle. For instance, with many species of bear, the female will often drive a male off soon after mating and will later guard her cubs from him. It is thought that this may be because too many bears close to one another can deplete the food available to the relatively small but growing cubs. Monogamy may be social but rarely genetic. For example, in the cichlid species Variabilichromis moorii, a monogamous pair will care for their eggs and young, but the eggs are not all fertilized by the same male. Thierry Lodé argued that monogamy should result from a conflict of interest between the sexes called sexual conflict. Monogamous species There are species which have adopted monogamy with great success. For instance, the male prairie vole will mate exclusively with the first female he ever mates with. The vole is extremely loyal and will go as far as to attack other females that may approach him. This type of behavior has been linked to the hormone vasopressin, which is released when a male mates and cares for young. Due to this hormone's rewarding effects, the male experiences a positive feeling when maintaining a monogamous relationship. To further test this theory, the receptors that control vasopressin were placed into another species of vole that is promiscuous. After this addition, the originally unfaithful voles became monogamous with their selected partners.
These very same receptors can be found in the human brain, and have been found to vary at the individual level, which could explain why some human males tend to be more loyal than others. Black vultures stay together, as it is more beneficial for their young to be taken care of by both parents. They take turns incubating the eggs and then supplying their fledglings with food. Black vultures will also attack other vultures that are participating in extra-pair copulation, in an attempt to increase monogamy and decrease promiscuous behavior. Similarly, emperor penguins also stay together to care for their young. This is due to the harshness of the Antarctic weather, predators, and the scarcity of food. One parent will protect the chick while the other finds food. However, these penguins only remain monogamous until the chick is able to go off on its own; after the chick no longer needs their care, approximately 85% of parents will part ways and typically find a new partner every breeding season. Hornbills are a socially monogamous bird species that usually have only one mate throughout their lives, much like the prairie vole. The female will close herself up in a nest cavity, sealed with a nest plug, for two months. At this time, she will lay eggs and will be cared for by her mate. The male is willing to work to support himself, his mate, and his offspring in order to survive; however, unlike the emperor penguin, hornbills do not find new partners each season. It is relatively uncommon to find monogamous relationships in fish, amphibians and reptiles; however, the red-backed salamander and the Caribbean cleaner goby also practice monogamy. However, the male Caribbean cleaner goby has been found to separate from the female suddenly, leaving her abandoned. In a study conducted by Oregon State University, it was found that this fish practices not true monogamy but serial monogamy: the goby will have multiple monogamous relationships throughout its life, but only be in one relationship at a time. The red-backed salamander exhibited signs of social monogamy, the idea that animals form pairs to mate and raise offspring but still partake in extra-pair copulation with various males or females in order to increase their biological fitness. This is a relatively new concept in salamanders and has not been seen frequently; there is also a concern that the act of monogamy may inhibit the salamanders' reproductive rates and biological success. However, the study, conducted in cooperation between the University of Louisiana at Lafayette and the University of Virginia, showed that the salamanders are not inhibited by this monogamy if they show alternative strategies with other mates. Azara's night monkeys are another species that proved to be monogamous. In an 18-year study conducted by the University of Pennsylvania, these monkeys proved to be entirely monogamous, exhibiting no genetic or visual information that could lead to the assumption that extra-pair copulation was occurring. This answered the question of why the male owl monkey invests so much time in protecting and raising his own offspring. Because monogamy is often referred to as "placing all your eggs in one basket", the male wants to ensure his young survive, and thus pass on his genes. The desert grass spider, Agelenopsis aperta, is mostly monogamous as well.
Male size is the determining factor in fights over a female, with the larger male emerging as the winner, since his size signals likely success in future offspring. Other monogamous species include wolves, certain species of fox, otters, a few hooved animals, some bats, and the Eurasian beaver. This beaver is particularly interesting, as it practices monogamy in its reintroduction to certain parts of Europe; its American counterpart, however, is not monogamous at all and often partakes in promiscuous behavior. The two species are quite similar in ecology, but American beavers tend to be less aggressive than European beavers. In this instance, the scarcity of the European beaver population could drive its monogamous behavior; moreover, monogamy lowers the risk of parasite transmission, which is correlated with biological fitness. Monogamy is proving to be very efficient for this beaver, as its population is climbing. See also Monogamy topics Animal sexual behaviour#Monogamy Monogamy Social monogamy in mammalian species Varieties of monogamy Evolution topics Animal sexuality Evolution of sexual reproduction History of human sexuality Human evolution r/K selection theory References Bibliography Animal sexuality Ethology Evolution of animals Mating systems Monogamy
Monogamy in animals
[ "Biology" ]
5,269
[ "Behavior", "Animals", "Sexuality", "Behavioural sciences", "Animal sexuality", "Ethology", "Mating systems", "Mating", "Evolution of animals" ]
5,424,160
https://en.wikipedia.org/wiki/Chevalley%20basis
In mathematics, a Chevalley basis for a simple complex Lie algebra is a basis constructed by Claude Chevalley with the property that all structure constants are integers. Chevalley used these bases to construct analogues of Lie groups over finite fields, called Chevalley groups. The Chevalley basis is the Cartan–Weyl basis, but with a different normalization. The generators of a Lie group are split into the generators $H$ and $E$ indexed by the simple roots $\alpha_i$ and their negatives $-\alpha_i$. The Cartan–Weyl basis may be written as $[H, E_\alpha] = \alpha(H) E_\alpha$. Defining the dual root or coroot of $\alpha$ as $\alpha^\vee = 2\alpha/(\alpha,\alpha)$, where $(\cdot,\cdot)$ is the Euclidean inner product, one may perform a change of basis to define $H_{\alpha_i} = (\alpha_i^\vee, H)$. The Cartan integers are $A_{ij} = (\alpha_i, \alpha_j^\vee)$. The resulting relations among the generators are the following: $[H_{\alpha_i}, H_{\alpha_j}] = 0$, $[H_{\alpha_i}, E_{\alpha_j}] = A_{ji} E_{\alpha_j}$, $[E_{\alpha_i}, E_{-\alpha_i}] = H_{\alpha_i}$, and $[E_\beta, E_\gamma] = \pm (p+1) E_{\beta+\gamma}$, where in the last relation $p$ is the greatest positive integer such that $\gamma - p\beta$ is a root, and we set $E_{\beta+\gamma} = 0$ if $\beta+\gamma$ is not a root. For determining the sign in the last relation one fixes an ordering of roots which respects addition, i.e., if $\beta \leq \gamma$ then $\beta+\alpha \leq \gamma+\alpha$ provided that all four are roots. We then call $(\beta,\gamma)$ an extraspecial pair of roots if they are both positive and $\beta$ is minimal among all $\beta'$ that occur in pairs of positive roots $(\beta',\gamma')$ satisfying $\beta'+\gamma' = \beta+\gamma$. The sign in the last relation can be chosen arbitrarily whenever $(\beta,\gamma)$ is an extraspecial pair of roots. This then determines the signs for all remaining pairs of roots. References Lie groups Lie algebras
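As a concrete illustration (a standard example supplementing the article, not drawn from it), the simplest case is $\mathfrak{sl}_2(\mathbb{C})$, which has a single simple root $\alpha$ and Chevalley basis $\{H, E, F\}$ with $F = E_{-\alpha}$:

```latex
% Chevalley basis of sl_2(C): matrices and integer structure constants.
\[
H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
E = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
F = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\]
\[
[H, E] = 2E, \qquad [H, F] = -2F, \qquad [E, F] = H.
\]
% All structure constants (2, -2, 1) are integers, as required;
% here the single Cartan integer is A_{11} = (alpha, alpha^vee) = 2.
```

Reducing these integer structure constants modulo a prime is what makes the construction of Chevalley groups over finite fields possible.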
Chevalley basis
[ "Mathematics" ]
282
[ "Algebra stubs", "Mathematical structures", "Lie groups", "Algebraic structures", "Algebra" ]
5,424,364
https://en.wikipedia.org/wiki/Voigt%20effect
The Voigt effect is a magneto-optical phenomenon which rotates and elliptizes linearly polarised light sent into an optically active medium. The effect is named after the German scientist Woldemar Voigt, who discovered it in vapors. Unlike many other magneto-optical effects such as the Kerr or Faraday effect, which are linearly proportional to the magnetization (or to the applied magnetic field for a non-magnetized material), the Voigt effect is proportional to the square of the magnetization (or the square of the magnetic field) and can be seen experimentally at normal incidence. There are also other denominations for this effect, used interchangeably in the modern scientific literature: the Cotton–Mouton effect (in reference to the French scientists Aimé Cotton and Henri Mouton, who discovered the same effect in liquids a few years later) and magnetic-linear birefringence, with the latter reflecting the physical meaning of the effect. For a linearly polarized incident electromagnetic wave and an in-plane magnetized sample, the rotation in both reflection and transmission geometry is proportional to $\Delta n$, the difference of the refraction indices, which depends on the Voigt parameter (the same parameter as for the Kerr effect), on the material refraction indices, and on the parameter responsible for the Voigt effect; it is therefore proportional to $M^2$ (or to $B^2$ in the case of a paramagnetic material). Detailed calculation and an illustration are given in the sections below. Theory As with the other magneto-optical effects, the theory is developed in a standard way with the use of an effective dielectric tensor from which one calculates the system's eigenvalues and eigenvectors. As usual, from this tensor, magneto-optical phenomena are described mainly by the off-diagonal elements. Here, one considers an incident polarisation propagating in the z direction, with the electric field in the sample plane, and a homogeneously in-plane magnetized sample whose magnetization direction is counted from the [100] crystallographic direction. The aim is to calculate the rotation of polarization due to the coupling of the light with the magnetization; note that this rotation is experimentally a small quantity, of the order of mrad. The reduced magnetization vector is defined by $\vec{m} = \vec{M}/M_s$, with $M_s$ the magnetization at saturation. We emphasize that it is because the light propagation vector is perpendicular to the magnetization plane that it is possible to see the Voigt effect. Dielectric tensor Following the notation of Hubert, the generalized cubic dielectric tensor is built from the material dielectric constant, the Voigt parameter, and two cubic constants describing the magneto-optical effect, all depending on the reduced magnetization $\vec{m}$. The calculation is made in the spherical approximation. At the present moment, there is no evidence that this approximation is not valid, as observation of the Voigt effect is rare; the effect is extremely small with respect to the Kerr effect. Eigenvalues and eigenvectors To calculate the eigenvalues and eigenvectors, we consider the propagation equation derived from the Maxwell equations. When the magnetization is perpendicular to the propagation wavevector, contrary to the Kerr effect, the electric field $\vec{E}$ may have all three of its components different from zero, making the calculation rather more complicated and rendering the Fresnel equations invalid. A way to simplify the problem consists in using the electric displacement vector $\vec{D}$: since $\vec{\nabla}\cdot\vec{D} = 0$, $\vec{D}$ is perpendicular to the propagation wavevector.
The inverse dielectric tensor can seem complicated to handle, but here the calculation was made for the general case; the demonstration is easier to follow in the simplified special cases. Eigenvalues and eigenvectors are found by solving the propagation equation for $\vec{D}$, which gives a system of equations involving the elements of the inverse dielectric tensor. After a straightforward calculation of the system's determinant, one expands to second order in the Voigt parameter and to first order in the cubic magneto-optical constants. This leads to two eigenvalues corresponding to the two refraction indices $n_1$ and $n_2$, together with their corresponding eigenvectors. Reflection geometry Continuity relation Knowing the eigenvectors and eigenvalues inside the material, one has to calculate the reflected electromagnetic vector usually detected in experiments. We use the continuity equations for $\vec{E}$ and $\vec{H}$, where $\vec{H}$ is the induction defined from Maxwell's equations. Inside the medium, the electromagnetic field is decomposed on the previously derived eigenvectors, and the resulting system of equations is solved for the reflected components. Calculation of rotation angle The rotation angle and the ellipticity angle are defined from the ratio of the two reflected field components, as the real and imaginary parts of that ratio respectively. Using the two previously calculated components, one obtains the Voigt rotation, which, in the case where the material parameters are real, can be rewritten in terms of $\Delta n = n_1 - n_2$, the difference of the refraction indices. Consequently, one obtains a rotation proportional to $\Delta n$ that depends on the incident linear polarisation; for particular orientations of the incident polarisation, no Voigt rotation can be observed. The rotation is proportional to the square of the magnetization, since $\Delta n$ is quadratic in $\vec{m}$. Transmission geometry The calculation of the rotation of the Voigt effect in transmission is in principle equivalent to that of the Faraday effect. In practice, this configuration is generally not used for ferromagnetic samples, since the absorption length is short in this kind of material. However, the use of transmission geometry is more common for paramagnetic liquids or crystals, where the light can travel easily inside the material. The calculation for a paramagnetic material is exactly the same as for a ferromagnetic one, except that the magnetization is replaced by a field; for convenience, the field is added at the end of the calculation in the magneto-optical parameters. Consider transmitted electromagnetic waves propagating in a medium of length $L$. At position $L$, the transmitted field is a superposition of the two previously calculated eigenmodes, with a relative phase governed by $\Delta n$, the difference of the two refraction indices. The rotation is then calculated from the ratio of the transmitted components, with an expansion to first order in the cubic magneto-optical constants and second order in the Voigt parameter. Again one obtains a rotation proportional to the square of the magnetization and to $L$, the light propagation length; the dependence on the magnetization is the same as in the reflection geometry. In order to extract the Voigt rotation, we consider the material parameters to be real and calculate the real part of the transmitted ratio. In the approximation of no absorption, one obtains the Voigt rotation in transmission geometry. Illustration of Voigt effect in GaMnAs As an illustration of the application of the Voigt effect, we give an example in the magnetic semiconductor (Ga,Mn)As, where a large Voigt effect was observed.
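Before the experimental example, the dependences derived above can be illustrated numerically. The sketch below is a minimal illustration in the small-birefringence limit, using the standard approximation that the rotation goes as $\sin 2\beta$ for incident polarization at angle $\beta$ to the magnetization; the wavelength, the propagation length, and the coefficient linking $\Delta n$ to $m^2$ are made-up assumptions, not values from the article.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the article):
wavelength = 800e-9   # light wavelength (m)
L = 1e-6              # propagation length in the medium (m)
k_voigt = 1e-3        # assumed coefficient: delta_n = k_voigt * m**2

def voigt_rotation(beta, m):
    """Polarization rotation (rad) in transmission for incident polarization
    at angle beta (rad) to the magnetization, with reduced magnetization m.
    Small-retardance approximation:
    theta ~ (pi * L * delta_n / wavelength) * sin(2*beta)."""
    delta_n = k_voigt * m**2  # magnetic linear birefringence, quadratic in m
    return (np.pi * L * delta_n / wavelength) * np.sin(2.0 * beta)

# The rotation is even in m (same for +m and -m), and vanishes (up to
# floating-point noise) when the polarization lies along or perpendicular
# to the magnetization:
for beta_deg in (0, 45, 90):
    beta = np.radians(beta_deg)
    print(beta_deg, voigt_rotation(beta, 1.0), voigt_rotation(beta, -1.0))
```

The evenness in $m$ is what produces the symmetric hysteresis cycles discussed in the experimental illustration that follows.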
At low temperatures, for a material with an in-plane magnetization, (Ga,Mn)As exhibits a biaxial anisotropy, with the magnetization aligned along (or close to) the <100> directions. A typical hysteresis cycle containing the Voigt effect is shown in figure 1. This cycle was obtained by sending linearly polarized light along the [110] direction with an incident angle of approximately 3° (more details can be found in the references), and measuring the rotation of the reflected light beam due to magneto-optical effects. In contrast to the common longitudinal/polar Kerr effect, the hysteresis cycle is even with respect to the magnetization, which is a signature of the Voigt effect. This cycle was obtained with a light incidence very close to normal, and it also exhibits a small odd part; a correct treatment has to be carried out in order to extract the symmetric part of the hysteresis corresponding to the Voigt effect, and the asymmetric part corresponding to the longitudinal Kerr effect. In the case of the hysteresis presented here, the field was applied along the [1-10] direction. The switching mechanism is as follows: (1) we start with a high negative field, and the magnetization is close to the [-1-10] direction at position 1; (2) the magnetic field decreases, leading to a coherent rotation of the magnetization from 1 to 2; (3) at positive field, the magnetization switches abruptly from 2 to 3 by nucleation and propagation of magnetic domains, giving a first coercive field; (4) the magnetization stays close to state 3 while rotating coherently to state 4, closer to the applied field direction; (5) the magnetization again switches abruptly from 4 to 5 by nucleation and propagation of magnetic domains, because the final equilibrium position is closer to state 5 than to state 4 (its magnetic energy being lower), giving a second coercive field; (6) finally, the magnetization rotates coherently from state 5 to state 6. A simulation of this scenario is given in figure 2. As one can see, the simulated hysteresis is qualitatively the same as the experimental one. Notice that the amplitude at the two coercive fields is approximately twice the amplitude elsewhere. See also Atomic line filter Cotton–Mouton effect Faraday effect References Further reading Zhao, Zhong-Quan. Excited state atomic line filters. Retrieved March 26, 2006. Magneto-optic effects Polarization (waves)
Voigt effect
[ "Physics", "Chemistry", "Materials_science" ]
1,924
[ "Physical phenomena", "Electric and magnetic fields in matter", "Astrophysics", "Optical phenomena", "Magneto-optic effects", "Polarization (waves)" ]
5,424,787
https://en.wikipedia.org/wiki/Hadamard%27s%20dynamical%20system
In physics and mathematics, the Hadamard dynamical system (also called Hadamard's billiard or the Hadamard–Gutzwiller model) is a chaotic dynamical system, a type of dynamical billiards. Introduced by Jacques Hadamard in 1898, and studied by Martin Gutzwiller in the 1980s, it is the first dynamical system to be proven chaotic. The system considers the motion of a free (frictionless) particle on the Bolza surface, i.e., a two-dimensional surface of genus two (a donut with two holes) and constant negative curvature; this is a compact Riemann surface. Hadamard was able to show that every particle trajectory moves away from every other: that all trajectories have a positive Lyapunov exponent. Frank Steiner argues that Hadamard's study should be considered to be the first-ever examination of a chaotic dynamical system, and that Hadamard should be considered the first discoverer of chaos. He points out that the study was widely disseminated, and considers the impact of the ideas on the thinking of Albert Einstein and Ernst Mach. The system is particularly important in that in 1963, Yakov Sinai, in studying Sinai's billiards as a model of the classical ensemble of a Boltzmann–Gibbs gas, was able to show that the motion of the atoms in the gas follows the trajectories in the Hadamard dynamical system. Exposition The motion studied is that of a free particle sliding frictionlessly on the surface, namely, one having the Hamiltonian $H(p,q) = \frac{1}{2m} p_i p_j g^{ij}(q)$, where $m$ is the mass of the particle, $q^i$, $i = 1, 2$, are the coordinates on the manifold, $p_i$ are the conjugate momenta $p_i = m g_{ij} \frac{dq^j}{dt}$, and $g^{ij}(q)$ is the inverse of the metric tensor $g_{ij}(q)$ on the manifold. Because this is the free-particle Hamiltonian, the solutions to the Hamilton–Jacobi equations of motion are simply given by the geodesics on the manifold. Hadamard was able to show that all geodesics are unstable, in that they all diverge exponentially from one another, as $d(t) \approx d(0)\, e^{\lambda t}$, with positive Lyapunov exponent $\lambda = \sqrt{2E/m}\, \sqrt{-K}$, with $E$ the energy of a trajectory and $K$ being the constant negative curvature of the surface. References Chaotic maps Ergodic theory
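The exponential divergence can be made explicit with the Jacobi equation for geodesic deviation; the following short derivation is a standard supplement (not part of the original article), valid for any surface of constant negative curvature $K$:

```latex
% Jacobi equation for the separation J of two nearby geodesics,
% traversed at constant speed v = sqrt(2E/m) (arclength s = v t):
\[
  \frac{d^2 J}{dt^2} + K v^2 J = 0 .
\]
% For K < 0 the solutions grow exponentially,
\[
  J(t) \sim e^{\lambda t}, \qquad
  \lambda = v \sqrt{-K} = \sqrt{\frac{2E}{m}} \sqrt{-K} ,
\]
% which reproduces the positive Lyapunov exponent quoted above.
```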
Hadamard's dynamical system
[ "Mathematics" ]
460
[ "Functions and mappings", "Mathematical objects", "Ergodic theory", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
962,638
https://en.wikipedia.org/wiki/Messier%2054
Messier 54 (also known as M54 or NGC 6715) is a globular cluster in the constellation Sagittarius. It was discovered by Charles Messier in 1778 and then included in his catalog of comet-like objects. It is easily found in the sky, being close to the star ζ Sagittarii. It is, however, not resolvable into individual stars even with larger amateur telescopes. In July 2009, a team of astronomers reported that they had found evidence of an intermediate-mass black hole in the core of M54. Distance Previously thought to belong to the Milky Way at a distance from Earth of about 50,000 light-years, it was discovered in 1994 that M54 most likely belongs to the Sagittarius Dwarf Elliptical Galaxy (SagDEG), making it the first globular cluster formerly thought to be part of our galaxy to be reassigned to extragalactic status, even if it was not recognized as such for more than two centuries. As it is located at SagDEG's center, some authors think it may actually be its core; however, others have proposed that it is a real globular cluster that fell to the center of this galaxy due to the decay of its orbit caused by dynamical friction. Modern estimates now place M54 at a distance of some 87,000 light-years, translating into a true radius of some 150 light-years. It is one of the denser globular clusters, being of class III (I being the densest and XII the least dense). It shines with a luminosity roughly 850,000 times that of the Sun and has an absolute magnitude of −10.0. See also List of Messier objects Omega Centauri Mayall II Palomar 12 References and footnotes External links M54 @ SEDS Messier pages Globular clusters Sagittarius Dwarf Spheroidal Galaxy Sagittarius (constellation) 054 NGC objects Astronomical objects discovered in 1778 Discoveries by Charles Messier
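As a quick consistency check (a standard back-of-the-envelope calculation, not from the article, treating the quoted luminosity as V-band and taking the Sun's absolute magnitude as +4.83), the quoted luminosity and absolute magnitude agree:

```latex
\[
  M \simeq M_\odot - 2.5 \log_{10}\!\left(\frac{L}{L_\odot}\right)
    = 4.83 - 2.5 \log_{10}\!\left(8.5 \times 10^{5}\right)
    \approx 4.83 - 14.8 \approx -10.0 .
\]
```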
Messier 54
[ "Astronomy" ]
413
[ "Sagittarius (constellation)", "Constellations" ]
962,660
https://en.wikipedia.org/wiki/Line%2010%20%28Beijing%20Subway%29
Line 10 of the Beijing Subway () is the second loop line in Beijing's rapid transit network, as well as the second longest and most heavily used line. The line runs entirely underground through Haidian, Chaoyang and Fengtai Districts, either directly underneath or just beyond the 3rd Ring Road. The Line 10 loop is situated outside the Line 2 loop, which circumnavigates Beijing's old Inner City. Every subway line through the city centre intersects with Line 10, which has 24 transfer stations along its route, and 45 stations in all. Line 10's color is capri. Line 10 was the world's longest rapid transit loop line from its completion in May 2013 until March 2023, and is one of the longest entirely underground subway lines, requiring 104 minutes to complete one full journey in either direction. History Planning The Beijing Subway network was originally conceived to have only one loop line. The booming economy and explosive population growth of Beijing put huge demand on Line 2, surpassing its designed capacity. In 2001 and 2002, the China Academy of Urban Planning and Design proposed two "L-shaped" lines named Line 10 and Line 11. Together they would form a second loop around Beijing and relieve pressure on Line 2. Phase I On December 27, 2003, in preparation for the 2008 Summer Olympics in Beijing, Phase I of Line 10 started construction. On July 19, 2008, Phase I of Line 10 entered operation ahead of the opening of the Olympic Games, with 22 stations. Phase I consisted of the northern and eastern sides of Line 10's rectangular loop, forming an inverted L-shaped line. Phase II Construction on Phase II began on December 28, 2007. The original plan for Line 11 was not incorporated into the final network design; it was instead absorbed into Line 10, which would form the second full loop around Beijing. In 2010, the Ministry of Railways proposed that Fengtai Railway Station be renovated and expanded to become a bigger intercity rail terminal for Beijing, with access to the Beijing–Guangzhou high-speed railway. The rationale was to ease intercity traffic pressure on Beijing West railway station. Due to the need to reorganize the stations on Line 10 to better serve the new rail terminal, work stopped on two stations, namely Mengjiacun (孟家村) and Niwa (泥洼). The planning department proposed that the original Mengjiacun and Niwa subway stations be merged into the new Fengtai railway station, known as the "three stations into one" program. Local residents, after realizing their travel to a subway station would be greatly lengthened, quickly opposed the plan. Planners reconsidered and moved Niwa station north to its current position and Mengjiacun station north, to be renamed Fengtai Railway Station. The original station shells were demolished and new stations built in their respective new locations. Niwa station started reconstruction in February 2012, while Fengtai railway station started on April 11, 2012. This made the late 2012 opening date for that section of Line 10 highly unlikely, and the opening was postponed to the next year. On December 30, 2012, the first section of Phase II, consisting of the southern and western sides of the loop, opened. With Phase I and this section of Phase II open, Line 10 formed a "C" shape. The near completion of Line 10 led to rapid growth in its ridership. At the same time, some traffic from Line 1 was diverted to the parallel and newly opened Line 6, allowing Line 10 to overtake Line 1 as Beijing's busiest subway line.
The Beijing Subway started operating express trains that ran non-stop between Songjiazhuang and Jinsong to alleviate traffic in the southeastern section of Line 10. These express trains stopped operating after the completion of the loop. The loop was fully enclosed on May 5, 2013, with the opening of Fengtai and Niwa stations, as well as the infill station Jiaomen East. Initially, Line 10 services consisted of a "full-loop" service that made the journey through all 45 stations in 104 minutes, and "partial-loop" trains that ran from Chedaogou in the north-west to Songjiazhuang in the south-east before turning back. With the delivery of more rolling stock, "partial-loop" trains were removed and all trains now serve the full loop at a headway of 2 minutes and 15 seconds. By 2014, the completed loop carried on average 1.69 million passengers per day. By 2019, large sections of Line 10 operated above 100% capacity, particularly the eastern and northern sections. The Beijing Subway has responded by increasing the frequency of trains to every two minutes and removing some seats on trains to increase capacity. Operation From near Wanliu Park in Haidian District, Line 10 runs straight east, between the northern 3rd and 4th Ring Roads. At Xitucheng, the line meets the northern section of the Yuan dynasty earthen city wall, called tucheng. Jiandemen and Anzhenmen stations are named after former gates in the wall. At Beitucheng, Line 8 (Phase 1) extends off Line 10 and provides access to the Beijing Olympic Green. Farther east, Line 10 turns south and follows the eastern 3rd Ring Road straight south into Chaoyang District. The Bagou–Jingsong section constituted Phase I of Line 10, which first opened in July 2008, and connects the university district in Haidian with the embassy district and the Beijing CBD. A trip from Bagou to Jingsong takes about 40 minutes; the full loop takes about 104 minutes. Fare Fares start at RMB(¥) 3 and increase according to the distance-based fare scheme introduced in December 2014. Regular subway users can use a Yikatong card, which offers even cheaper journeys, as well as mobile phone apps, which support payment via a QR code. Hours of Operation The first train on the inner (clockwise) loop departs from Xiju towards Shoujingmao at 5:20am. The first train on the outer (counter-clockwise) loop departs from Shoujingmao towards Xiju at 6:12am. The last inner loop train leaves Xiju for Bagou at 11:29pm. The last outer loop train leaves Shoujingmao for Chedaogou at 11:06pm. Safety There are subway public security bureaus (police stations) located in several stations. Emergencies can be reported by calling 110 or 64011327. Stations Some trains terminate at stations marked '*'. Technology Rolling Stock Line 10 utilizes a fleet of 6-car DKZ15 trains manufactured by CRRC Changchun Railway Vehicles. Initially, when Phase I opened, the line was operated with a fleet of only 40 trainsets (240 cars). Some sets operated on the Olympic section of Line 8 before Line 8 was extended and acquired its own dedicated rolling stock. When Line 10 Phase II opened, the fleet was expanded to 84 trains. However, the two existing depots serving Line 10 had insufficient capacity for the entire fleet; therefore, only 76 trainsets could operate on the line, with 8 being temporarily stored in other Beijing Subway depots. With the opening of the new depot in Songjiazhuang and the need to reduce the headway on the line to decrease crowding, an additional 32 trainsets were ordered.
The fleet grew to 116 trainsets, allowing Line 10 to operate at a headway of 2 minutes throughout the line during rush hour. Some trains had some seats removed to increase capacity. Signaling system Siemens Transportation Systems and China Railway Signaling & Communication Corp. have equipped the entire line with Siemens's Trainguard MT Communication Based Train Control (CBTC) system. As a fallback, ETCS Level 1 is also available. Notes References External links "New Beijing subway to open soon." China Daily. February 28, 2008. Railway loop lines Beijing Subway lines Siemens Mobility projects Railway lines opened in 2008 2008 establishments in China 750 V DC railway electrification
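As a rough cross-check (an illustrative estimate, not a figure from the article's sources), the 104-minute loop time and the 2-minute rush-hour headway imply the approximate number of trains in simultaneous service:

```latex
\[
  N_{\text{per direction}} \approx \frac{104\ \text{min}}{2\ \text{min}} = 52,
  \qquad
  N_{\text{both directions}} \approx 104,
\]
```

which is consistent with a fleet of 116 trainsets once maintenance spares are accounted for.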
Line 10 (Beijing Subway)
[ "Technology", "Engineering" ]
1,643
[ "Siemens Mobility projects", "Transport systems" ]
962,678
https://en.wikipedia.org/wiki/ESET%20NOD32
ESET NOD32 Antivirus, commonly known as NOD32, is an antivirus software package made by the Slovak company ESET. ESET NOD32 Antivirus is sold in two editions, Home Edition and Business Edition. The Business Edition packages add ESET Remote Administrator, allowing for server deployment and management, mirroring of threat signature database updates and the ability to install on Microsoft Windows Server operating systems. History NOD32 The acronym NOD stands for Nemocnica na Okraji Disku ("Hospital at the end of the disk"), a pun related to the Czechoslovak medical drama series Nemocnice na kraji města (Hospital at the End of the City). The first version of NOD32, called NOD-ICE, was a DOS-based program. It was created in 1987 by Miroslav Trnka and Peter Paško, at a time when computer viruses were becoming increasingly prevalent on PCs running DOS. Due to the limitations of the OS (lack of multitasking, among others), it did not feature any on-demand/on-access protection or most of the other features of the current versions. Besides the virus scanning and cleaning functionality, it only featured heuristic analysis. With the increasing popularity of the Windows environment, the advent of 32-bit CPUs, a shift in the PC market and the increasing popularity of the Internet came the need for a completely different antivirus approach as well. Thus the original program was re-written and christened "NOD32" to emphasize both the radical shift from the previous version and its Win32 system compatibility. Initially the program gained popularity with IT workers in Eastern European countries, as ESET was based in Slovakia. Though the program's abbreviation was originally pronounced as individual letters, the worldwide use of the program led to the more common single-word pronunciation, sounding like the English word nod. Additionally, the "32" portion of the name was added with the release of a 32-bit version in the Windows 9x era. The company reached its 10,000th update to virus definitions on June 25, 2014. Mail Security for Microsoft Exchange Server On March 10, 2010, ESET released ESET Mail Security for Microsoft Exchange Server, which contains both antimalware and antispam modules. It supports Microsoft Exchange 5.5, 2000, 2003, 2007 and 2010. Mobile Security ESET Mobile Security is the replacement for ESET Mobile Antivirus, which provided anti-malware and antispam functionality. ESET Mobile Security contains all the features of the older product and adds new anti-theft features such as SIM locking and remote wipe, as well as a security audit and a firewall. Versions for Windows Mobile and Symbian OS were available as of September 2010, for both home and enterprise users. Remote Administrator ESET Remote Administrator is a central management console designed to allow network administrators to manage ESET software across a corporate network. Smart Security On November 5, 2007, ESET released an Internet security suite, ESET Smart Security version 3.0, to compete with security suites by other companies such as McAfee, Symantec, AVG and Kaspersky. ESET Smart Security incorporates anti-spam and a bidirectional firewall along with the traditional anti-malware features of ESET NOD32 Antivirus. On March 2, 2009, ESET Smart Security version 4.0 was released, adding integration of ESET SysInspector; support for Mozilla Thunderbird and Windows Live Mail; a new self-defense module; an updated firewall module; ESET SysRescue; and a wizard for creating bootable CDs and USB flash drives.
There were initially compatibility problems between ESET Smart Security 4.0 and Windows Vista Service Pack 2, but these were remedied by an update. On August 17, 2010, ESET Smart Security version 4.2 was released with new features, enhancements and changes. On September 14, 2011, ESET Smart Security version 5.0 was released. On January 15, 2013, ESET Smart Security version 6.0 was released. This version included an Anti-Theft feature for tracking lost, misplaced or stolen laptops. On October 16, 2013, ESET Smart Security version 7.0 was released. It offers enhanced operating-memory scanning and blocks misuse of known exploits. On October 2, 2014, ESET Smart Security version 8.0 was released. It adds exploit blocking for Java and botnet protection. On October 13, 2015, ESET Smart Security version 9.0 was released. SysInspector ESET SysInspector is a diagnostic tool which allows in-depth analysis of various aspects of the operating system, including running processes, registry content, startup items and network connections. Anti-Stealth Technology is used to discover hidden objects (rootkits) in the Master Boot Record, boot sector, registry entries, drivers, services and processes. SysInspector logs are standard XML files and can be submitted to IT experts for further analysis. Two logs can be compared to find the set of items not common to both. A log file can be saved as a service script for removing malicious objects from a computer. SysRescue Live ESET SysRescue Live is a Linux-based bootable Live CD/USB image that can be used to boot and clean heavily infected computers independently of the installed operating system. The program is offered free of charge, and can download updates if a network connection is present. Other programs ESET has released free standalone removers for widespread malware, such as Mebroot. Development File Security for Microsoft Windows Server On June 1, 2010, the first release candidate for ESET File Security for Microsoft Windows Server v4.3 was made available to the public. This program is an updated version of ESET NOD32 Antivirus Business Edition designed for Microsoft Windows Server operating systems, and contains a revised user interface, automatic exclusions for critical directories and files, and unspecified optimizations for operation on servers. Mobile Security On April 22, 2010, ESET Mobile Security for Windows Mobile and Symbian OS went into public beta. The Home Edition was released on September 2, 2010, and on January 20, 2011, the Business Edition went into beta. On April 29, 2011, a beta test version of ESET Mobile Security for Android was released. On August 10, 2011, the release candidate was made available. NOD32 for Mac OS X and Linux Desktop On December 2, 2009, ESET NOD32 Antivirus 4 for Mac OS X Desktop and ESET NOD32 Antivirus 4 for Linux Desktop were released for public testing. ESET stated the release automatically detects and cleans cross-platform malware, scans archives, automatically scans removable media such as USB flash drives when mounted, performs real-time scanning, provides reports and offers a GUI similar to the Microsoft Windows version. The second beta test versions were released on January 9, 2010, and the third on June 10, 2010. On September 13, 2010, ESET released ESET NOD32 Antivirus for Mac OS X Business Edition
and announced a release candidate for ESET Cybersecurity for Mac OS X. On September 24, 2010, ESET released a release candidate for ESET Cybersecurity for Mac OS X, and on January 21, 2011, ESET released a release candidate for ESET NOD32 Antivirus for Linux Desktop. Smart Security On May 5, 2011, ESET released a beta test version of ESET Smart Security 5.0. The beta version adds parental control, a cloud-based file reputation service, gamer mode, HIPS and improvements to its antispam, firewall and removable media control functions. On June 14, 2011, ESET released a release candidate for ESET Smart Security version 5.0. On August 5, 2014, ESET Smart Security version 8.0 public beta 1 was released. It offers enhanced exploit blocking and botnet detection. Discontinued products Mobile Antivirus ESET Mobile Antivirus was aimed at protecting smartphones from viruses, spyware, adware, trojans, worms, rootkits, and other unwanted software. It also provided antispam filtering for SMS messages. Versions for Windows Mobile and Symbian OS were available. ESET discontinued ESET Mobile Antivirus in January 2011 and provided ESET Mobile Security as a free upgrade to licensed users of ESET Mobile Antivirus. NOD32 Antivirus v2.7 and older On February 1, 2010, ESET discontinued version 2.7 of NOD32 Antivirus and all previous versions of NOD32 Antivirus. They were removed from the ESET website, including product pages and the e-Store. Version 2.7 was the last version supporting the Microsoft Windows 95/98/ME and Novell NetWare operating systems. Virus signature database updates and customer support were discontinued on February 1, 2012. Technical information On a network, NOD32 clients can update from a central "mirror server" on the network. Reception As of April 21, 2011, NOD32 Antivirus holds ICSA Labs certifications. As of September 29, 2018, NOD32 has accumulated 111 VB100 awards from Virus Bulletin; it has failed to receive this award only three times. In a comparative report that Virus Bulletin published on September 2, 2008, NOD32 detected 94.4% of all malware and 94.7% of spyware. It stood above competitors like Norton Internet Security and ZoneAlarm, but below Windows Live OneCare and Avira AntiVir. In the RAP averages quadrant between December 2011 and June 2012, Virus Bulletin found that ESET remained at roughly the same level, about 94%, and noted its ability to block spam and phishing, earning an award that only 19 other antivirus companies were able to acquire. On April 28, 2008, Robert Vamosi of CNET.com reviewed version 3.0 of NOD32 and gave it a score of 3.5 out of 5. On March 6, 2009, Seth Rosenblatt of Download.com reviewed the 4.0 version of NOD32 and gave it a rating of 4.6 out of 5. On September 15, 2011, Seth Rosenblatt of CNET reviewed the 5.0 version of NOD32 and gave it a rating of 5 out of 5. See also Antivirus software List of antivirus software Comparison of computer viruses References External links Virus Radar, a service run by ESET using NOD32 statistics The official German ESET Support Forum Antivirus software Antivirus software for Linux MacOS security software Linux security software Windows security software Computer security software
ESET NOD32
[ "Engineering" ]
2,158
[ "Cybersecurity engineering", "Computer security software" ]