3,145,482
https://en.wikipedia.org/wiki/Enos%20%28chimpanzee%29
Enos (born about 1957 – died November 4, 1962) was a chimpanzee launched into space by NASA, following his predecessor Ham. He was the only non-human primate to orbit the Earth, and the third hominid to do so after cosmonauts Yuri Gagarin and Gherman Titov. Enos's flight occurred on November 29, 1961. Enos was brought from the Miami Rare Bird Farm on April 3, 1960. He completed more than 1,250 training hours at the University of Kentucky and Holloman Air Force Base. Training was more intense for him than for Ham, who had become the first great ape in space in January 1961, because Enos was exposed to weightlessness and higher gs for longer periods of time. His training included psychomotor instruction and aircraft flights. Enos was selected for his Project Mercury flight only three days before launch. Two months prior, NASA launched Mercury-Atlas 4 on September 13, 1961, to conduct an identical mission with a "crewman simulator" on board. Enos flew into space aboard Mercury-Atlas 5 on November 29, 1961. He completed his first orbit in 1 hour and 28.5 minutes. Enos was scheduled to complete three orbits, but the mission was aborted after two, due to two issues: capsule overheating and a malfunctioning "avoidance conditioning" test subjecting the primate to 76 electrical shocks. According to one history of primatology, "The chimpanzee, about five years old, behaved like a true hero: despite the malfunctions of the electronic system, he conscientiously performed all the tasks he had learned during the entire flight of over three hours...Enos demonstrated that he was careful to successfully complete his mission and that he perfectly understood what was expected of him." After his space capsule made an ocean landing, Enos "had become angry and frustrated at the three-hour wait" before being retrieved by U.S. Navy seamen. The capsule was brought aboard in the late afternoon and Enos was immediately taken below deck by his Air Force handlers. 
The recovery ship USS Stormes then delivered Enos to the hospital at Kindley Air Force Base in Bermuda, where he was found to be in good shape. On December 1, 1961, Enos left Bermuda for Cape Canaveral, and eventually Holloman Air Force Base. Enos's flight was a full dress rehearsal for the next Mercury launch on February 20, 1962, which would make John Glenn the first American to orbit Earth. Enos was nicknamed "the Penis" due to his frequent fondling of himself. On November 4, 1962, Enos died of shigellosis-related dysentery, which was resistant to then-known antibiotics. He had been under constant observation for two months before his death. Pathologists reported no symptoms that could be attributed or related to his previous space flight.

See also
Monkeys and apes in space
Albert II, first monkey and first primate in space
Little Joe 2, flight with Sam, Project Mercury rhesus monkey
Animals in space
List of individual apes
One Small Step: The Story of the Space Chimps (2008 documentary)
Félicette, first cat in space

External links
Enos the Chimpanzee travels to space on NASA's Mercury Atlas 5 1960, YouTube video
"Voyage of space chimpanzee Enos ended in Bermuda" by Gail Westerfield
One Small Step: The Story of the Space Chimps, documentary on the history of chimpanzees used in space travel
Atlantic.com article about electrical shocks
MentalFloss.com on the 54th anniversary of the flight
Enos (chimpanzee)
[ "Chemistry", "Biology" ]
788
[ "Animal testing", "Space-flown life", "Animals in space" ]
3,145,530
https://en.wikipedia.org/wiki/Three-stratum%20theory
The three-stratum theory is a theory of cognitive ability proposed by the American psychologist John Carroll in 1993. It is based on a factor-analytic study of the correlations of individual-difference variables from data such as psychological tests, school marks and competence ratings, drawn from more than 460 datasets. These analyses suggested a three-layered model in which each layer accounts for the variations in the correlations within the previous layer. The three layers (strata) are defined as representing narrow, broad, and general cognitive ability. The factors describe stable and observable differences among individuals in the performance of tasks. Carroll argues further that they are not mere artifacts of a mathematical process, but likely reflect physiological factors explaining differences in ability (e.g., nerve firing rates). This does not alter the effectiveness of factor scores in accounting for behavioral differences. Carroll proposes a taxonomic dimension in the distinction between level factors and speed factors. The tasks that contribute to the identification of level factors can be sorted by difficulty, and individuals differentiated by whether they have acquired the skill to perform the tasks. Tasks that contribute to speed factors are distinguished by the relative speed with which individuals can complete them. Carroll suggests that the distinction between level and speed factors may be the broadest taxonomy of cognitive tasks that can be offered. Carroll distinguishes his hierarchical approach from taxonomic approaches such as Guilford's Structure of Intellect model (a three-dimensional model with contents, operations, and products).

Development of the three-stratum theory
The three-stratum theory is derived primarily from Spearman's (1927) model of general intelligence and Horn and Cattell's (1966) theory of fluid and crystallized intelligence. Carroll's model was also heavily influenced by the 1976 edition of the ETS standard kit.
His factor analyses were largely consistent with the Horn-Cattell model, except that Carroll believed that general intelligence was a meaningful construct. The model suggests that intelligence is best conceptualized in a hierarchy of three strata:
Stratum III (general intelligence): the g factor, which accounts for the correlations among the broad abilities at stratum II.
Stratum II (broad abilities): eight broad abilities—fluid intelligence, crystallized intelligence, general memory and learning, broad visual perception, broad auditory perception, broad retrieval ability, broad cognitive speediness, and processing speed.
Stratum I (specific level): more specific factors under stratum II.
Kevin McGrew (2005) integrated the Horn-Cattell model with Carroll's to create the Cattell-Horn-Carroll Theory of Cognitive Abilities (CHC theory), which has since been influential in guiding test development. Johnson and Bouchard have criticized CHC theory and the two major theories on which it is based, suggesting that their g-VPR model provides a better explanation of the available data.

See also
CHC theory
g factor
Fluid and crystallized intelligence
g-VPR model

Further reading
Keith, T. & Reynolds, M. (2010). Cattell-Horn-Carroll abilities and cognitive tests: What we've learned from 20 years of research. Psychology in the Schools, 47(7), 635-650.
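The layered structure described above can be sketched as a small data structure. This is purely an illustrative encoding of the hierarchy, not part of Carroll's formal model; the names are taken from the text:

```python
# Illustrative encoding of Carroll's hierarchy: Stratum III (g) subsumes
# the eight Stratum II broad abilities; narrow Stratum I factors (not
# listed exhaustively here) sit under each broad ability.
three_stratum = {
    "stratum_iii": "g (general intelligence)",
    "stratum_ii": [
        "fluid intelligence",
        "crystallized intelligence",
        "general memory and learning",
        "broad visual perception",
        "broad auditory perception",
        "broad retrieval ability",
        "broad cognitive speediness",
        "processing speed",
    ],
    "stratum_i": "narrow factors under each Stratum II ability",
}

assert len(three_stratum["stratum_ii"]) == 8  # the eight broad abilities
```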
Three-stratum theory
[ "Biology" ]
650
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
3,145,562
https://en.wikipedia.org/wiki/-zilla
-zilla is an English slang suffix, a libfix back-formation derived from the English name of the Japanese movie monster Godzilla. It is popular in the names of software and websites. It is also found often in popular culture to imply some form of excess, denoting the monster-like qualities of Godzilla. This trend has been observed since the popularization of the Mozilla Project, which itself included the Internet Relay Chat client ChatZilla. The use of the suffix was contested by Toho, owners of the trademark Godzilla, in a lawsuit against the website Davezilla and also against Sears for their mark Bagzilla. Toho has since trademarked the word "Zilla" and retroactively used it as an official name for the "Godzilla In Name Only" creature from the 1998 Roland Emmerich film.

List of items ending in -zilla
Some uses of the suffix -zilla include:

Businesses and products
AmiZilla, an Amiga port of Mozilla Firefox
Chipzilla, a humorous epithet for the Intel Corporation
Clonezilla, open-source disk-cloning software
FileZilla, an FTP program
Go!Zilla, a download manager program
Hubzilla, an open-source social network, part of the Fediverse
Mozilla, a group of Internet-related programs created by the Mozilla Foundation, also the name of the group's widely known web browser
Bugzilla, open-source bug-tracking software with a web-based interface
ChatZilla, an Internet Relay Chat program
Mozilla Application Suite
Classilla, a rebranded Mozilla Application Suite, an internet suite for the classic Mac OS
Ghostzilla, a web browser
GNUzilla, GNU's fork of the Mozilla Application Suite
Warpzilla, the Mozilla Application Suite for OS/2
Newszilla, the Usenet server of the Dutch internet provider XS4ALL
Podzilla, an open-source user interface for the iPodLinux project, which allows for alternative functionality of Apple Computer's iPod
Quizilla, an online personality-quiz website, which contains its own "Zillapedia"
RevZilla.com, an online motorcycle gear retailer
Shopzilla, a comparison-shopping search engine, formerly BizRate.com
Godzilla (Nissan GT-R), a grand touring/sports vehicle produced by Nissan

Entertainment
Bongzilla, a rock band from Madison, Wisconsin
"Bootzilla", a song recorded by Bootsy's Rubber Band
Bridezilla (band), an Australian indie rock band
Bridezilla (EP), a recording by the band Bridezilla
Bridezillas, a reality show which airs on the WE: Women's Entertainment network
Broadzilla, a rock band from Detroit, Michigan
Davezilla, a humor website
Godzilla, the franchise from which the -zilla suffix originates
Godzilla (1954 film), the first film in the franchise
Zilla, a fictional character originally known as Godzilla, from the 1998 American Godzilla film
Illzilla, an Australian hip hop group featuring live instruments
Popzilla, an animated TV series in production for MTV
Rapzilla, a Christian hip hop online magazine
Tekzilla, a weekly video podcast on the Revision3 network
"Throatzillaaa", a song recorded by Slayyyter

Miscellaneous
axizilla, a heavy form of axion in an extension of the Standard Model
bitchzilla, a severely disagreeable or aggressive woman
couplezilla, a couple who, in the course of planning their wedding, display difficult, selfish, narcissistic behavior relating to the event
cuntzilla, a term of abuse for a severely disagreeable or aggressive woman
Fedzilla, the federal government regarded as a rapacious monster with an appetite for political power, money, etc.
groomzilla, a demanding and perfectionist groom (man who is to be married)
Hogzilla, a large male wild hog hybrid that was stabbed and killed in 2004 in Georgia, United States
momzilla, a controlling or over-involved mother
Pigzilla, another large feral pig, or possible hoax, shot in 2007 in Alabama, United States
promzilla, a teenage girl who is obsessed with preparing for her prom and ensuring it turns out the way she envisions
Snowzilla (disambiguation)
Squawkzilla, a nickname for a prehistoric parrot species
weddingzilla, a person overly concerned with ensuring that a wedding goes exactly as they envision it
wimpzilla, a theoretical superheavy dark-matter particle, trillions of times more massive than other proposed types of dark matter

For derived words
Words ending with -zilla (list of examples in English Wiktionary)
-zilla (explaining the suffix in English Wiktionary)
-zilla
[ "Technology" ]
1,013
[ "Computing terminology" ]
3,146,241
https://en.wikipedia.org/wiki/Focus-plus-context%20screen
A focus-plus-context screen is a specialized type of display device that consists of one or more high-resolution "focus" displays embedded into a larger low-resolution "context" display. Image content is displayed across all display regions such that the scaling of the image is preserved, while its resolution varies across the display regions. The original focus-plus-context screen prototype consisted of an 18"/45 cm LCD screen embedded in a 5'/150 cm front-projected screen. Alternative designs have been proposed that achieve the mixed-resolution effect by combining two or more projectors with different focal lengths. While the high-resolution area of the original prototype was located at a fixed position, follow-up projects have obtained a movable focus area by using a Tablet PC. Patrick Baudisch is the inventor of focus-plus-context screens (2000, while at Xerox PARC).

Advantages
Allows users to leverage both their foveal and their peripheral vision
Cheaper to manufacture than a display that is high-resolution across the entire display surface
Displays the entirety and the details of large images in a single view. Unlike approaches that combine entirety and details in software (fisheye views), focus-plus-context screens do not introduce distortion.

Disadvantages
In existing implementations, the focus display is either fixed, or moving it is physically demanding

Notes
Yudhijit Bhattacharjee. "In a Seamless Image, the Great and Small." The New York Times, Thursday, March 14, 2002.

External links
Focus-plus-context screens homepage
Focus-plus-context screen
[ "Technology", "Engineering" ]
323
[ "User interfaces", "Computer science stubs", "Computer science", "Interfaces", "Electronic engineering", "Display technology", "Computing stubs" ]
3,146,340
https://en.wikipedia.org/wiki/Bourke%20engine
The Bourke engine was an attempt by Russell Bourke, in the 1920s, to improve the two-stroke internal combustion engine. Despite his finishing the design and building several working engines, the onset of World War II, a lack of test results, and the poor health of his wife combined to prevent his engine from ever coming successfully to market. The main claimed virtues of the design are that it has only two moving parts, is lightweight, has two power pulses per revolution, and does not need oil mixed into the fuel. The Bourke engine is a two-stroke design, with one horizontally opposed piston assembly using two pistons that move in the same direction at the same time, so that their operations are 180 degrees out of phase. The pistons are connected to a Scotch yoke mechanism in place of the more usual crankshaft mechanism, so the piston acceleration is perfectly sinusoidal. This causes the pistons to spend more time at top dead center than in conventional engines. The incoming charge is compressed in a chamber under the pistons, as in a conventional crankcase-charged two-stroke engine. The connecting-rod seal prevents the fuel from contaminating the bottom-end lubricating oil.

Operation
The operating cycle is very similar to that of a current production spark-ignition two-stroke with crankcase compression, with two modifications:
The fuel is injected directly into the air as it moves through the transfer port.
The engine is designed to run without spark ignition once it is warmed up. This is known as auto-ignition or dieseling: the air/fuel mixture starts to burn due to the high temperature of the compressed gas and/or the presence of hot metal in the combustion chamber.

Design features
The following design features have been identified:

Mechanical features
Scotch yoke, and linearly sliding connecting rods.
Fewer moving parts (only two moving assemblies per opposed cylinder pair); the opposed cylinders are combinable to make 2, 4, 6, 8, 10, 12 or any even number of cylinders.
The piston is connected to the Scotch yoke through a slipper bearing (a type of hydrodynamic tilting-pad fluid bearing).
Mechanical fuel injection.
Ports rather than valves.
Easy maintenance (top overhauling) with simple tools.
The Scotch yoke does not create lateral forces on the piston, reducing friction and piston wear.
O-rings are used to seal joints rather than gaskets.
The Scotch yoke makes the pistons dwell slightly longer at top dead center, so the fuel burns more completely in a smaller volume.

Gas flow and thermodynamic features
Low exhaust temperature (below that of boiling water), so metal exhaust components are not required; plastic ones can be used if strength is not needed from the exhaust system.
15:1 to 24:1 compression ratio for high efficiency; it can be easily changed as required for different fuels and operating requirements.
Fuel is vaporised when it is injected into the transfer ports, and the turbulence in the intake manifolds and the piston shape above the rings stratify the fuel–air mixture in the combustion chamber.
Lean burn for increased efficiency and reduced emissions.

Lubrication
This design uses oil seals to prevent pollution from the combustion chamber (created by piston-ring blow-by in four-strokes and by combustion itself in two-strokes) from contaminating the crankcase oil, extending the life of the oil, which is consumed slowly to keep the rings supplied with oil. Oil was shown to be consumed slowly, but Russell Bourke, its creator, still recommended checking its quantity and cleanliness. The lubricating oil in the base is protected from combustion-chamber pollution by an oil seal over the connecting rod. The piston rings are supplied with oil from a small supply hole in the cylinder wall at bottom dead center.
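The dwell claim above can be checked numerically: a Scotch yoke gives a purely sinusoidal piston position, while a conventional crank and connecting rod pulls the piston away from top dead center faster. A short sketch, where the rod-to-crank ratio L/R = 3 is an assumed, typical value, not a Bourke specification:

```python
import numpy as np

def scotch_yoke_pos(theta, R=1.0):
    # Scotch yoke: piston position is a pure sinusoid of crank angle.
    return R * np.cos(theta)

def crank_slider_pos(theta, R=1.0, L=3.0):
    # Conventional crankshaft with a connecting rod of length L.
    return R * np.cos(theta) + np.sqrt(L**2 - (R * np.sin(theta))**2)

def dwell_fraction(pos, tol=0.02):
    # Fraction of the cycle spent within tol of top dead center,
    # with the stroke normalized to [0, 1].
    x = (pos - pos.min()) / (pos.max() - pos.min())
    return float(np.mean(x > 1.0 - tol))

theta = np.linspace(0.0, 2.0 * np.pi, 100001)
sy = dwell_fraction(scotch_yoke_pos(theta))   # Scotch yoke dwell
cs = dwell_fraction(crank_slider_pos(theta))  # crank-slider dwell
```

With these assumed proportions the Scotch yoke spends a noticeably larger fraction of the cycle near top dead center than the crank-and-rod mechanism, consistent with the dwell claim in the text.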
Claimed and measured performance

Efficiency - A figure of 0.25 (lb/h)/hp is claimed, about the same as the best diesel engines, or roughly twice as efficient as the best two-strokes. This is equivalent to a thermodynamic efficiency of 55.4%, an exceedingly high figure for a small internal combustion engine. In a test witnessed by a third party, the actual fuel consumption was 1.1 hp/(lb/h), or 0.9 (lb/h)/hp, equivalent to a thermodynamic efficiency of about 12.5%, which is typical of a 1920s steam engine. A test of a 30 cubic inch Vaux engine, built by a close associate of Bourke, gave a fuel consumption of 1.48 lb/(bhp·h), or 0.7 hp/(lb/h), at maximum power.

Power to weight - The Silver Eagle was claimed to produce 25 hp from 45 lb, a power-to-weight ratio of 0.55 hp/lb. The larger 140 cubic inch engine was claimed to be good for 120 hp from 125 lb, or approximately 1 hp/lb. The Model H was claimed to produce 60 hp at a weight of 95 lb, giving a power-to-weight ratio of 0.63 hp/lb. The 30 cu in twin was reported to produce 114 hp at 15,000 rpm while weighing only 38 lb, an incredible 3 hp/lb. However, a 30 cu in replica from Vaux Engines produced just 8.8 hp at 4,000 rpm, even after substantial reworking. Other sources claim 0.9 to 2.5 hp/lb, although no independently witnessed test supporting these high figures has been documented. The upper end of this range is roughly twice as good as the best four-stroke production engines, or 0.1 hp/lb better than a Graupner G58 two-stroke. The lower claim is unremarkable, easily exceeded by production four-stroke engines, never mind two-strokes.

Emissions - Published test results showed virtually no hydrocarbons (80 ppm) or carbon monoxide (less than 10 ppm); however, no power output was given for these results, and NOx was not measured. The engine is also claimed to be able to operate on hydrogen or any hydrocarbon fuel without modification, producing only water vapor and carbon dioxide as emissions.
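The efficiency figures above follow from the standard conversion between brake specific fuel consumption and thermal efficiency: work delivered per horsepower-hour divided by the fuel energy burned. A sketch, where the gasoline lower heating value of 18,400 BTU/lb is an assumption (the value behind the article's exact percentages is not stated):

```python
def thermal_efficiency(bsfc_lb_per_hp_hr, lhv_btu_per_lb=18400.0):
    # Thermal efficiency = useful work out / fuel energy in.
    # One horsepower-hour is 2,544.43 BTU of work; 18,400 BTU/lb is
    # an assumed lower heating value for gasoline.
    HP_HR_IN_BTU = 2544.43
    return HP_HR_IN_BTU / (bsfc_lb_per_hp_hr * lhv_btu_per_lb)
```

Here thermal_efficiency(0.25) comes out near 0.55, matching the claimed 55.4% up to the assumed heating value, while thermal_efficiency(0.9) comes out around 0.15, the same order as the article's "about 12.5%" for the witnessed test.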
Engineering critique of the Bourke engine
The Bourke engine has some interesting features, but the extravagant claims for its performance are unlikely to be borne out by real tests, and many of the claims are contradictory.
Friction from the seal between the air-compression chamber and the crankcase, acting on the connecting rod, will reduce the efficiency.
Efficiency will be reduced by pumping losses, as the air charge is compressed and expanded twice but energy is extracted for power in only one of the expansions per piston stroke.
Engine weight is likely to be high, because the engine must be very strongly built to cope with the high peak pressures that result from rapid high-temperature combustion.
Each piston pair is highly imbalanced, as the two pistons move in the same direction at the same time, unlike in a boxer engine. This will limit the speed range, and hence the power, of the engine, and increase its weight due to the strong construction necessary to react the high forces in the components.
High-speed two-stroke engines tend to be inefficient compared with four-strokes because some of the intake charge escapes unburnt with the exhaust.
Use of excess air will reduce the torque available for a given engine size.
Forcing the exhaust out rapidly through small ports will incur a further efficiency loss.
Operating an internal combustion engine in detonation reduces efficiency, due to heat lost from the combustion gases being scrubbed against the combustion-chamber walls by the shock waves.
Emissions - although some tests have shown low emissions in some circumstances, these were not necessarily at full power. As the scavenge ratio (i.e., engine torque) is increased, more HC and CO will be emitted.
Increased dwell time at TDC will allow more heat to be transferred to the cylinder walls, reducing the efficiency.
When running in auto-ignition mode, the timing of the start of combustion is controlled by the operating state of the engine, rather than directly as in a spark-ignition or diesel engine. As such, it may be possible to optimize it for one operating condition, but not for the wide range of torques and speeds that an engine typically sees; the result will be reduced efficiency and higher emissions. If the efficiency is high, then combustion temperatures must be high, as required by the Carnot cycle, and the air-fuel mixture must be lean. High combustion temperatures and lean mixtures cause oxides of nitrogen (NOx) to be formed.

Patents
Russell Bourke obtained British and Canadian patents for the engine in 1939: GB514842 and CA381959. He also obtained in 1939.

External links
Running engine and CAD modeling
Bourke engine
[ "Technology" ]
1,801
[ "Engine technology", "Proposed engines", "Piston engines", "Engines" ]
3,146,410
https://en.wikipedia.org/wiki/Copy%20attack
The copy attack is an attack on certain digital watermarking systems proposed by M. Kutter, S. Voloshynovskiy, and A. Herrigel in a paper presented in January 2000 at the Photonics West SPIE convention. In some scenarios, a digital watermark is added to a piece of media, such as an image, film, or audio clip, to prove its authenticity. If a piece of media were presented and found to lack a watermark, it would be considered suspect. Alternatively, a security system could be devised to limit a user's ability to manipulate any piece of media containing a watermark; for instance, a DVD burner might prohibit making copies of a film that contained a watermark. The copy attack attempts to thwart the effectiveness of such systems by estimating the watermark from an originally watermarked piece of media, and then adding that estimate to an un-watermarked piece. In the first scenario above, this would allow an attacker to have an inauthentic image declared authentic, since it contains a watermark. In the second scenario, an attacker could flood the market with content which would ordinarily allow a user to manipulate it freely, but which, due to the presence of the watermark, would have limitations imposed on it. In this way, schemes that seek to limit the use of watermarked media may prove too unpopular for wide distribution. In a 2003 paper presented at the International Conference on Acoustics, Speech, and Signal Processing, John Barr, Brett Bradley, and Brett T. Hannigan of Digimarc describe a way to tie the content of the digital watermark to the underlying image, so that if the watermark were placed into a different image, the watermark detection system would not authenticate it.
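The attack described above can be illustrated with a minimal numpy sketch. This is not the authors' exact algorithm: here the watermark is estimated crudely as the high-frequency residual that a denoising (box-blur) filter removes from the watermarked image, and that residual is simply added to the target; the function names and filter choice are illustrative.

```python
import numpy as np

def mean_filter(img, k=3):
    # Crude denoiser: a k x k box blur built from padded shifts.
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def copy_attack(watermarked, target):
    # Step 1: estimate the watermark as the high-frequency residual
    # that the denoiser removes from the watermarked image.
    estimate = watermarked - mean_filter(watermarked)
    # Step 2: transplant the estimated watermark onto the unmarked target.
    return target + estimate
```

Against a correlation-based detector, the transplanted residual is often similar enough to the true watermark for the target image to be (falsely) declared watermarked.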
Copy attack
[ "Technology" ]
380
[ "Computer security exploits" ]
3,146,632
https://en.wikipedia.org/wiki/Bland%E2%80%93Altman%20plot
A Bland–Altman plot (difference plot) in analytical chemistry or biomedicine is a method of data plotting used in analyzing the agreement between two different assays. It is identical to a Tukey mean-difference plot, the name by which it is known in other fields, but it was popularised in medical statistics by J. Martin Bland and Douglas G. Altman.

Construction
Consider a sample consisting of n observations (for example, objects of unknown volume). Both assays (for example, different methods of volume measurement) are performed on each sample, resulting in 2n data points. Each of the n samples is then represented on the graph by assigning the mean of the two measurements as the x-value, and the difference between the two values as the y-value. The Cartesian coordinates of a given sample with values S1 and S2 determined by the two assays are

  S(x, y) = ( (S1 + S2)/2 , S1 − S2 ).

For comparing the dissimilarities between the two sets of samples independently of their mean values, it is more appropriate to look at the ratio of the pairs of measurements. Log transformation (base 2) of the measurements before the analysis enables the standard approach to be used, so the plot is given by

  S(x, y) = ( (log2 S1 + log2 S2)/2 , log2 S1 − log2 S2 ).

This version of the plot is used in the MA plot.

Application
One primary application of the Bland–Altman plot is to compare two clinical measurements, each of which produces some error. It can also be used to compare a new measurement technique or method with a gold standard, since even a gold standard does not, and should not, imply freedom from error. See Analyse-it, MedCalc, NCSS, GraphPad Prism, R, StatsDirect, or JASP for software providing Bland–Altman plots. Bland–Altman plots are extensively used to evaluate the agreement between two different instruments or two measurement techniques. They allow identification of any systematic difference between the measurements (i.e., fixed bias) and of possible outliers.
The mean difference is the estimated bias, and the SD of the differences measures the random fluctuations around this mean. If the mean difference differs significantly from 0 on the basis of a one-sample t-test, this indicates the presence of fixed bias. If there is a consistent bias, it can be adjusted for by subtracting the mean difference from the new method. It is common to compute 95% limits of agreement for each comparison (average difference ± 1.96 standard deviations of the difference), which tell us how far apart measurements by the two methods are likely to be for most individuals. If the differences within mean ± 1.96 SD are not clinically important, the two methods may be used interchangeably. The 95% limits of agreement can be unreliable estimates of the population parameters, especially for small sample sizes, so when comparing methods or assessing repeatability it is important to calculate confidence intervals for the 95% limits of agreement. This can be done by Bland and Altman's approximate method or by more precise methods. Bland–Altman plots are also used to investigate any possible relationship between the discrepancies between the measurements and the true value (i.e., proportional bias). The existence of proportional bias indicates that the methods do not agree equally through the range of measurements (i.e., the limits of agreement will depend on the actual measurement). To evaluate this relationship formally, the difference between the methods should be regressed on the average of the two methods. When a relationship between the differences and the true value is identified (i.e., a significant slope of the regression line), regression-based 95% limits of agreement should be provided.

See also
MA plot
Gardner–Altman plot

Notes
A similar method was proposed in 1981 by Eksborg. This method was based on Deming regression, a method introduced by Adcock in 1878.
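The quantities described above (per-sample means and differences, the estimated bias, and the 95% limits of agreement) are straightforward to compute. A minimal numpy sketch, with function and variable names of my own choosing:

```python
import numpy as np

def bland_altman(m1, m2):
    # m1, m2: paired measurements of the same n samples by two assays.
    m1 = np.asarray(m1, dtype=float)
    m2 = np.asarray(m2, dtype=float)
    means = (m1 + m2) / 2.0                      # x-axis: per-sample mean
    diffs = m1 - m2                              # y-axis: per-sample difference
    bias = diffs.mean()                          # estimated fixed bias
    sd = diffs.std(ddof=1)                       # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return means, diffs, bias, loa
```

The plot itself is then a scatter of diffs against means, with horizontal lines drawn at the bias and at the two limits of agreement.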
Bland and Altman's Lancet paper was number 29 in a list of the top 100 most-cited papers of all time, with over 23,000 citations.
Bland–Altman plot
[ "Chemistry" ]
829
[ "nan" ]
3,146,707
https://en.wikipedia.org/wiki/Local-density%20approximation
Local-density approximations (LDA) are a class of approximations to the exchange–correlation (XC) energy functional in density functional theory (DFT) that depend solely upon the value of the electronic density at each point in space (and not, for example, on derivatives of the density or the Kohn–Sham orbitals). Many approaches can yield local approximations to the XC energy. However, the overwhelmingly successful local approximations are those that have been derived from the homogeneous electron gas (HEG) model. In this regard, LDA is generally synonymous with functionals based on the HEG approximation, which are then applied to realistic systems (molecules and solids). In general, for a spin-unpolarized system, a local-density approximation for the exchange-correlation energy is written as

  E_xc^LDA[ρ] = ∫ ρ(r) ε_xc(ρ(r)) d³r,

where ρ is the electronic density and ε_xc is the exchange-correlation energy per particle of a homogeneous electron gas of charge density ρ. The exchange-correlation energy is decomposed into exchange and correlation terms linearly,

  E_xc = E_x + E_c,

so that separate expressions for E_x and E_c are sought. The exchange term takes on a simple analytic form for the HEG. Only limiting expressions for the correlation energy density are known exactly, leading to numerous different approximations for ε_c. Local-density approximations are important in the construction of more sophisticated approximations to the exchange-correlation energy, such as generalized gradient approximations (GGA) or hybrid functionals, as a desirable property of any approximate exchange-correlation functional is that it reproduce the exact results of the HEG for non-varying densities. As such, LDAs are often an explicit component of such functionals. The local-density approximation was first introduced by Walter Kohn and Lu Jeu Sham in 1965.
Applications
Local-density approximations, as with GGAs, are employed extensively by solid-state physicists in ab initio DFT studies to interpret electronic and magnetic interactions in semiconductor materials, including semiconducting oxides and spintronics. The importance of these computational studies stems from the complexity of these systems, which brings about a high sensitivity to synthesis parameters and necessitates first-principles analysis. The prediction of the Fermi level and band structure in doped semiconducting oxides is often carried out using LDA incorporated into simulation packages such as CASTEP and DMol3. However, the underestimation of band-gap values often associated with LDA and GGA approximations may lead to false predictions of impurity-mediated conductivity and/or carrier-mediated magnetism in such systems. Starting in 1998, the application of the Rayleigh theorem for eigenvalues has led to mostly accurate calculated band gaps of materials, using LDA potentials. A misunderstanding of the second theorem of DFT appears to explain most of the underestimation of band gaps in LDA and GGA calculations, as explained in the description of density functional theory, in connection with the statements of the two theorems of DFT.

Homogeneous electron gas
Approximations for ε_xc depending only upon the density can be developed in numerous ways. The most successful approach is based on the homogeneous electron gas. This is constructed by placing N interacting electrons into a volume V, with a positive background charge keeping the system neutral. N and V are then taken to infinity in a manner that keeps the density (ρ = N/V) finite. This is a useful approximation as the total energy consists of contributions only from the kinetic energy, the electrostatic interaction energy, and the exchange-correlation energy, and the wavefunction is expressible in terms of plane waves. In particular, for a constant density ρ, the exchange energy density is proportional to ρ^(1/3).
Exchange functional

The exchange-energy density of a HEG is known analytically. The LDA for exchange employs this expression under the approximation that the exchange energy in a system where the density is not homogeneous is obtained by applying the HEG results pointwise, yielding the expression

E_x^LDA[ρ] = −(3/4) (3/π)^(1/3) ∫ ρ(r)^(4/3) dr.

Correlation functional

Analytic expressions for the correlation energy of the HEG are available in the high- and low-density limits, corresponding to infinitely weak and infinitely strong correlation. For a HEG with density ρ, the high-density limit of the correlation energy density is

ε_c = A ln(r_s) + B + r_s (C ln(r_s) + D),

and the low-density limit

ε_c = (1/2) (g_0 / r_s + g_1 / r_s^(3/2) + …),

where the Wigner–Seitz parameter r_s is dimensionless. It is defined as the radius of a sphere which encompasses exactly one electron, divided by the Bohr radius. The Wigner–Seitz parameter is related to the density as

(4/3) π r_s³ = 1 / ρ.

An analytical expression for the full range of densities has been proposed based on many-body perturbation theory. The calculated correlation energies are in agreement with the results from quantum Monte Carlo simulation to within 2 milli-Hartree. Accurate quantum Monte Carlo simulations for the energy of the HEG have been performed for several intermediate values of the density, in turn providing accurate values of the correlation energy density.

Spin polarization

The extension of density functionals to spin-polarized systems is straightforward for exchange, where the exact spin-scaling is known, but for correlation further approximations must be employed.
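The density–r_s relation is a simple one-liner to invert. A small illustrative sketch in Python (atomic units, function names my own):

```python
import math

def rs_from_density(rho):
    # Invert (4/3) * pi * rs**3 = 1 / rho  (atomic units: rho in bohr^-3,
    # rs in units of the Bohr radius).
    return (3.0 / (4.0 * math.pi * rho)) ** (1.0 / 3.0)

def density_from_rs(rs):
    # Forward relation: exactly one electron per sphere of radius rs.
    return 1.0 / ((4.0 / 3.0) * math.pi * rs ** 3)
```

High density corresponds to small r_s (weak correlation) and low density to large r_s (strong correlation), which is why the two limiting expansions above are written in terms of r_s.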
A spin-polarized system in DFT employs two spin densities, ρ_α and ρ_β with ρ = ρ_α + ρ_β, and the form of the local-spin-density approximation (LSDA) is

E_xc^LSDA[ρ_α, ρ_β] = ∫ ρ(r) ε_xc(ρ_α(r), ρ_β(r)) dr.

For the exchange energy, the exact result (not just for local density approximations) is known in terms of the spin-unpolarized functional:

E_x[ρ_α, ρ_β] = (1/2) (E_x[2ρ_α] + E_x[2ρ_β]).

The spin-dependence of the correlation energy density is approached by introducing the relative spin polarization

ζ = (ρ_α − ρ_β) / (ρ_α + ρ_β).

Here ζ = 0 corresponds to the diamagnetic spin-unpolarized situation with equal α and β spin densities, whereas ζ = ±1 corresponds to the ferromagnetic situation where one spin density vanishes. The spin correlation energy density for given values of the total density and relative polarization, ε_c(ρ, ζ), is constructed so as to interpolate between the extreme values. Several forms have been developed in conjunction with LDA correlation functionals.

Exchange-correlation potential

The exchange-correlation potential corresponding to the exchange-correlation energy for a local density approximation is given by

v_xc(r) = δE_xc / δρ(r) = ε_xc(ρ(r)) + ρ(r) ∂ε_xc(ρ(r)) / ∂ρ(r).

In finite systems, the LDA potential decays asymptotically with an exponential form. This result is in error; the true exchange-correlation potential decays much more slowly, in a Coulombic (−1/r) manner. The artificially rapid decay manifests itself in the number of Kohn–Sham orbitals the potential can bind (that is, how many orbitals have energy less than zero). The LDA potential cannot support a Rydberg series, and those states it does bind are too high in energy. This results in the highest occupied molecular orbital (HOMO) energy being too high, so that any predictions for the ionization potential based on Koopmans' theorem are poor. Further, the LDA provides a poor description of electron-rich species such as anions, where it is often unable to bind an additional electron, erroneously predicting such species to be unstable. In the case of spin polarization, the exchange-correlation potential acquires spin indices.
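The exact exchange spin-scaling relation is easy to exercise numerically. An illustrative sketch (Dirac exchange per unit volume of a uniform gas stands in for the unpolarized functional; names and constants are my own, not from the article):

```python
import math

C_X = 0.75 * (3.0 / math.pi) ** (1.0 / 3.0)  # Dirac exchange constant

def ex_unpolarized(rho):
    # Exchange energy per unit volume of an unpolarized HEG: -C_X * rho^(4/3).
    return -C_X * rho ** (4.0 / 3.0)

def ex_spin(rho_a, rho_b):
    # Exact spin scaling: E_x[ra, rb] = (E_x[2*ra] + E_x[2*rb]) / 2.
    return 0.5 * (ex_unpolarized(2.0 * rho_a) + ex_unpolarized(2.0 * rho_b))

def zeta(rho_a, rho_b):
    # Relative spin polarization.
    return (rho_a - rho_b) / (rho_a + rho_b)
```

For equal spin densities the scaled form collapses back to the unpolarized functional, while at fixed total density full polarization lowers (makes more negative) the exchange energy, as expected.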
However, if one only considers the exchange part of the exchange-correlation energy, one obtains a potential that is diagonal in spin indices:

v_x^(σσ') (r) = δ_(σσ') δE_x / δρ_σ(r).

References Density functional theory
Local-density approximation
[ "Physics", "Chemistry" ]
1,470
[ "Density functional theory", "Quantum chemistry", "Quantum mechanics" ]
3,146,792
https://en.wikipedia.org/wiki/Reed%20reaction
The Reed reaction is a chemical reaction that utilizes light to oxidize hydrocarbons to alkylsulfonyl chlorides. This reaction is employed in modifying polyethylene to give chlorosulfonated polyethylene (CSPE), which is noted for its toughness.

Commercial implementations

Polyethylene is treated with a mixture of chlorine and sulfur dioxide under UV radiation. Vinylsulfonic acid can also be prepared beginning with the sulfochlorination of chloroethane. Dehydrohalogenation of the product gives vinylsulfonyl chloride, which subsequently is hydrolyzed to give vinylsulfonic acid.

Mechanism

The reaction occurs via a free-radical mechanism. UV light initiates homolysis of chlorine, producing a pair of chlorine atoms.

Chain initiation:
Cl2 → 2 Cl·  (hν)

Thereafter a chlorine atom attacks the hydrocarbon chain, abstracting hydrogen to form hydrogen chloride and an alkyl free radical. The resulting radical then captures SO2. The resulting sulfonyl radical attacks another chlorine molecule to produce the desired sulfonyl chloride and a new chlorine atom, which continues the reaction chain.

Chain propagation steps:
R-H + Cl· → R· + HCl
R· + SO2 → R-SO2·
R-SO2· + Cl2 → R-SO2-Cl + Cl·

See also Chain reaction

Historical readings Reed, C. F.

References Substitution reactions Carbon-heteroatom bond forming reactions Name reactions
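Summing the propagation steps gives the overall stoichiometry R-H + SO2 + Cl2 → R-SO2Cl + HCl (the chlorine atom consumed in the first step is regenerated in the last). A quick sanity check of the atom balance in Python, using ethane (R = C2H5) purely as an illustrative stand-in for the hydrocarbon:

```python
from collections import Counter

# Element counts for each species (R = C2H5, chosen only for illustration).
RH     = Counter({"C": 2, "H": 6})                            # ethane, R-H
SO2    = Counter({"S": 1, "O": 2})
Cl2    = Counter({"Cl": 2})
RSO2Cl = Counter({"C": 2, "H": 5, "S": 1, "O": 2, "Cl": 1})   # ethanesulfonyl chloride
HCl    = Counter({"H": 1, "Cl": 1})

# Counter addition sums element counts, so the net equation balances iff
# the two multisets of atoms are equal.
reactants = RH + SO2 + Cl2
products  = RSO2Cl + HCl
balanced  = reactants == products
```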
Reed reaction
[ "Chemistry" ]
359
[ "Name reactions", "Carbon-heteroatom bond forming reactions", "Organic reactions" ]
3,147,007
https://en.wikipedia.org/wiki/Johnson%20bar%20%28vehicle%29
Johnson bar is the term for several different hand-operated levers used in vehicles. Their distinguishing feature is a positive latch, typically spring-loaded, to hold the lever in a selected position, capable of being operated with one hand. Some Johnson bars have a fully ratcheting mechanism, some just a series of detents, and others yet simply engaged and disengaged positions. A common example is the Johnson bar-controlled parking brake found on many trucks and buses. Johnson bar is also the North American term for a steam engine's reversing lever, used to control the valve gear. The forward/reverse lever on Caterpillar tractors is also called a Johnson bar. Some light general aviation aircraft (including Piper Cherokees, Beech Musketeers, and some early-model Cessnas such as the Cessna 140) use Johnson bars to actuate flaps and wheel brakes. The Cessna 162 Skycatcher uses a Johnson bar for flap operation. A small number of older aircraft (including the Mooney M-18, some older M20s and some Progressive Aerodyne SeaReys) also have landing gear actuated by Johnson bars. The Boeing 707/720 aircraft had a Johnson bar for manually extending the nose landing gear, in case the normal gear extension failed. See also Cutoff Reversing lever References Vehicle design Vehicle parts
Johnson bar (vehicle)
[ "Technology", "Engineering" ]
276
[ "Vehicle parts", "Vehicle design", "Design", "Components" ]
3,147,062
https://en.wikipedia.org/wiki/Schur%20polynomial
In mathematics, Schur polynomials, named after Issai Schur, are certain symmetric polynomials in n variables, indexed by partitions, that generalize the elementary symmetric polynomials and the complete homogeneous symmetric polynomials. In representation theory they are the characters of polynomial irreducible representations of the general linear groups. The Schur polynomials form a linear basis for the space of all symmetric polynomials. Any product of Schur polynomials can be written as a linear combination of Schur polynomials with non-negative integral coefficients; the values of these coefficients are given combinatorially by the Littlewood–Richardson rule. More generally, skew Schur polynomials are associated with pairs of partitions and have similar properties to Schur polynomials.

Definition (Jacobi's bialternant formula)

Schur polynomials are indexed by integer partitions. Given a partition λ = (λ1, λ2, …, λn), where λ1 ≥ λ2 ≥ … ≥ λn and each λj is a non-negative integer, the functions

a_(λ1+n−1, λ2+n−2, …, λn+0)(x1, x2, …, xn) = det [ x_i^(λj+n−j) ]_(1≤i,j≤n)

are alternating polynomials by properties of the determinant. A polynomial is alternating if it changes sign under any transposition of the variables. Since they are alternating, they are all divisible by the Vandermonde determinant

a_(n−1, n−2, …, 0)(x1, x2, …, xn) = ∏_(1≤i<j≤n) (x_i − x_j).

The Schur polynomials are defined as the ratio

s_λ(x1, x2, …, xn) = a_(λ1+n−1, λ2+n−2, …, λn+0)(x1, x2, …, xn) / a_(n−1, n−2, …, 0)(x1, x2, …, xn).

This is known as the bialternant formula of Jacobi. It is a special case of the Weyl character formula. This is a symmetric function because the numerator and denominator are both alternating, and a polynomial since all alternating polynomials are divisible by the Vandermonde determinant.

Properties

The degree-d Schur polynomials in n variables are a linear basis for the space of homogeneous degree-d symmetric polynomials in n variables. For a partition λ, the Schur polynomial is a sum of monomials,

s_λ(x1, x2, …, xn) = Σ_T x^T = Σ_T x1^(t1) ⋯ xn^(tn),

where the summation is over all semistandard Young tableaux T of shape λ. The exponents t1, …, tn give the weight of T; in other words, each t_i counts the occurrences of the number i in T.
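The bialternant ratio can be evaluated at any point with distinct coordinates. A small self-contained sketch in pure Python with exact rational arithmetic (function names are my own):

```python
from fractions import Fraction

def det(m):
    # Determinant by Laplace expansion along the first row (fine for small n).
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def schur_eval(lam, xs):
    # Evaluate s_lambda(xs) via Jacobi's bialternant formula:
    # det[x_i^(lam_j + n - j)] divided by the Vandermonde determinant.
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))           # pad partition to length n
    num = [[Fraction(x) ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[Fraction(x) ** (n - 1 - j) for j in range(n)] for x in xs]
    return det(num) / det(den)

# s_(1) = x1 + x2 + x3 and s_(1,1) = e2 evaluated at the point (2, 3, 5).
s1 = schur_eval([1], [2, 3, 5])       # 2 + 3 + 5 = 10
s11 = schur_eval([1, 1], [2, 3, 5])   # 2*3 + 2*5 + 3*5 = 31
```

The ratio always comes out a polynomial value (here an integer), and permuting the evaluation point leaves it unchanged, illustrating the symmetry argued above.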
This can be shown to be equivalent to the definition from the first Giambelli formula using the Lindström–Gessel–Viennot lemma (as outlined on that page). Schur polynomials can be expressed as linear combinations of monomial symmetric functions m_μ with non-negative integer coefficients K_λμ called Kostka numbers,

s_λ = Σ_μ K_λμ m_μ.

The Kostka numbers K_λμ are given by the number of semi-standard Young tableaux of shape λ and weight μ.

Jacobi–Trudi identities

The first Jacobi–Trudi formula expresses the Schur polynomial as a determinant in terms of the complete homogeneous symmetric polynomials,

s_λ = det [ h_(λi − i + j) ]_(1≤i,j≤n),

where h_i := s_(i). The second Jacobi–Trudi formula expresses the Schur polynomial as a determinant in terms of the elementary symmetric polynomials,

s_λ = det [ e_(λ'i − i + j) ]_(1≤i,j≤n),

where e_i := s_(1^i) and λ' is the conjugate partition to λ. In both identities, functions with negative subscripts are defined to be zero.

The Giambelli identity

Another determinantal identity is Giambelli's formula, which expresses the Schur function for an arbitrary partition in terms of those for the hook partitions contained within the Young diagram. In Frobenius' notation, the partition is denoted (a1, …, ar | b1, …, br), where, for each diagonal element in position (i, i), a_i denotes the number of boxes to the right in the same row and b_i denotes the number of boxes beneath it in the same column (the arm and leg lengths, respectively). The Giambelli identity expresses the Schur function corresponding to this partition as the determinant

s_(a1, …, ar | b1, …, br) = det [ s_(ai | bj) ]_(1≤i,j≤r)

of those for hook partitions.

The Cauchy identity

The Cauchy identity for Schur functions (now in infinitely many variables), and its dual, state that

Σ_λ s_λ(x) s_λ(y) = ∏_(i,j) (1 − x_i y_j)^(−1)

and

Σ_λ s_λ(x) s_λ'(y) = ∏_(i,j) (1 + x_i y_j),

where the sum is taken over all partitions λ. (Equivalently, the two right-hand sides expand as Σ_λ h_λ(x) m_λ(y) and Σ_λ e_λ(x) m_λ(y), where h_λ and e_λ denote the complete homogeneous and elementary symmetric functions, respectively.) If the sum is taken over products of Schur polynomials in n variables x1, …, xn, the sum includes only partitions of length at most n, since otherwise the Schur polynomials vanish. There are many generalizations of these identities to other families of symmetric functions.
For example, Macdonald polynomials, Schubert polynomials and Grothendieck polynomials admit Cauchy-like identities.

Further identities

The Schur polynomial can also be computed via a specialization (t = 0) of a formula for Hall–Littlewood polynomials,

s_λ(x1, …, xn) = Σ_(w ∈ S_n / S_n^λ) w( x1^(λ1) ⋯ xn^(λn) ∏_(λi > λj) x_i / (x_i − x_j) ),

where S_n^λ is the subgroup of permutations σ such that λ_(σ(i)) = λ_i for all i, and w acts on variables by permuting indices.

The Murnaghan–Nakayama rule

The Murnaghan–Nakayama rule expresses a product of a power-sum symmetric function with a Schur polynomial, in terms of Schur polynomials:

p_r · s_λ = Σ_μ (−1)^(ht(μ/λ)−1) s_μ,

where the sum is over all partitions μ such that μ/λ is a rim hook of size r and ht(μ/λ) is the number of rows in the diagram μ/λ.

The Littlewood–Richardson rule and Pieri's formula

The Littlewood–Richardson coefficients depend on three partitions, say λ, μ and ν, of which λ and μ describe the Schur functions being multiplied, and ν gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients c_λμ^ν such that

s_λ s_μ = Σ_ν c_λμ^ν s_ν.

The Littlewood–Richardson rule states that c_λμ^ν is equal to the number of Littlewood–Richardson tableaux of skew shape ν/λ and of weight μ. Pieri's formula is a special case of the Littlewood–Richardson rule, which expresses the product h_r · s_λ in terms of Schur polynomials: the sum runs over all partitions obtained from λ by adding r boxes, no two in the same column. The dual version expresses e_r · s_λ in terms of Schur polynomials, with the sum over partitions obtained from λ by adding r boxes, no two in the same row.

Specializations

Evaluating the Schur polynomial s_λ in (1, 1, …, 1) gives the number of semi-standard Young tableaux of shape λ with entries in 1, 2, …, n. One can show, by using the Weyl character formula for example, that

s_λ(1, 1, …, 1) = ∏_(1≤i<j≤n) (λ_i − λ_j + j − i) / (j − i).

In this formula, λ, the tuple indicating the width of each row of the Young diagram, is implicitly extended with zeros until it has length n. The sum of the elements of λ is d. See also the Hook length formula which computes the same quantity for fixed λ.

Example

The following extended example should help clarify these ideas. Consider the case n = 3, d = 4. Using Ferrers diagrams or some other method, we find that there are just four partitions of 4 into at most three parts.
We have, for instance,

s_(2,1,1)(x1, x2, x3) = x1 x2 x3 (x1 + x2 + x3),
s_(2,2,0)(x1, x2, x3) = x1² x2² + x1² x3² + x2² x3² + x1² x2 x3 + x1 x2² x3 + x1 x2 x3²,

and so on, each computed from the bialternant formula as a determinant divided by the Vandermonde determinant ∏_(1≤i<j≤3) (x_i − x_j). Summarizing: Every homogeneous degree-four symmetric polynomial in three variables can be expressed as a unique linear combination of these four Schur polynomials, and this combination can again be found using a Gröbner basis for an appropriate elimination order. For example,

φ(x1, x2, x3) = x1² x2 x3 + x1 x2² x3 + x1 x2 x3²

is obviously a symmetric polynomial which is homogeneous of degree four, and we have

φ(x1, x2, x3) = s_(2,1,1)(x1, x2, x3).

Relation to representation theory

The Schur polynomials occur in the representation theory of the symmetric groups, general linear groups, and unitary groups. The Weyl character formula implies that the Schur polynomials are the characters of finite-dimensional irreducible representations of the general linear groups, and helps to generalize Schur's work to other compact and semisimple Lie groups. Several expressions arise for this relation, one of the most important being the expansion of the Schur functions s_λ in terms of the symmetric power functions p_ρ. If we write χ^λ_ρ for the character of the representation of the symmetric group indexed by the partition λ evaluated at elements of cycle type indexed by the partition ρ, then

s_λ = Σ_ρ (χ^λ_ρ / z_ρ) p_ρ,  with z_ρ = ∏_k k^(rk) r_k!,

where ρ = (1^(r1), 2^(r2), 3^(r3), ...) means that the partition ρ has r_k parts of length k. A proof of this can be found in R. Stanley's Enumerative Combinatorics Volume 2, Corollary 7.17.5. The integers χ^λ_ρ can be computed using the Murnaghan–Nakayama rule.

Schur positivity

Due to the connection with representation theory, a symmetric function which expands positively in Schur functions is of particular interest. For example, the skew Schur functions expand positively in the ordinary Schur functions, and the coefficients are Littlewood–Richardson coefficients. A special case of this is the expansion of the complete homogeneous symmetric functions h_λ in Schur functions. This decomposition reflects how a permutation module is decomposed into irreducible representations.
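The specialization s_λ(1, …, 1) discussed under Specializations, which counts semi-standard Young tableaux of shape λ with entries in 1, …, n (equivalently, the dimension of the corresponding GL_n representation), reduces to a finite product. A small sketch in pure Python (function name my own):

```python
from fractions import Fraction

def schur_at_ones(lam, n):
    # s_lambda(1, ..., 1) = prod over 1 <= i < j <= n of
    #   (lam_i - lam_j + j - i) / (j - i),
    # the Weyl dimension formula for GL_n; exact rationals avoid rounding.
    lam = list(lam) + [0] * (n - len(lam))   # pad partition with zeros to length n
    result = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            result *= Fraction(lam[i] - lam[j] + j - i, j - i)
    assert result.denominator == 1          # the product is always an integer
    return int(result)
```

For example, with n = 3 the partition (2) gives 6, matching the six monomials of the symmetric square, and (1, 1, 1) gives 1 (the determinant representation).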
Methods for proving Schur positivity

There are several approaches to prove Schur positivity of a given symmetric function F. If F is described in a combinatorial manner, a direct approach is to produce a bijection with semi-standard Young tableaux. The Edelman–Greene correspondence and the Robinson–Schensted–Knuth correspondence are examples of such bijections. A bijection with more structure is a proof using so-called crystals. This method can be described as defining a certain graph structure described with local rules on the underlying combinatorial objects. A similar idea is the notion of dual equivalence. This approach also uses a graph structure, but on the objects representing the expansion in the fundamental quasisymmetric basis. It is closely related to the RSK correspondence.

Generalizations

Skew Schur functions

Skew Schur functions s_λ/μ depend on two partitions λ and μ, and can be defined by the property

⟨s_λ/μ, s_ν⟩ = ⟨s_λ, s_μ s_ν⟩.

Here, the inner product is the Hall inner product, for which the Schur polynomials form an orthonormal basis. Similar to the ordinary Schur polynomials, there are numerous ways to compute these. The corresponding Jacobi–Trudi identities are

s_λ/μ = det [ h_(λi − μj − i + j) ]_(1≤i,j≤n),
s_λ'/μ' = det [ e_(λi − μj − i + j) ]_(1≤i,j≤n).

There is also a combinatorial interpretation of the skew Schur polynomials, namely as a sum over all semi-standard Young tableaux (or column-strict tableaux) of the skew shape λ/μ. The skew Schur polynomials expand positively in Schur polynomials. A rule for the coefficients is given by the Littlewood–Richardson rule.

Double Schur polynomials

The double Schur polynomials can be seen as a generalization of the shifted Schur polynomials. These polynomials are also closely related to the factorial Schur polynomials. Given a partition λ and a sequence a1, a2, …, one can define the double Schur polynomial s_λ(x || a) as a sum over all reverse semi-standard Young tableaux T of shape λ, with integer entries in 1, …, n. Here T(α) denotes the value in the box α in T and c(α) is the content of the box.
A combinatorial rule for the Littlewood–Richardson coefficients (depending on the sequence a) was given by A. I. Molev. In particular, this implies that the shifted Schur polynomials have non-negative Littlewood–Richardson coefficients. The shifted Schur polynomials can be obtained from the double Schur polynomials by an appropriate specialization of the variables and the sequence a. The double Schur polynomials are special cases of the double Schubert polynomials.

Factorial Schur polynomials

The factorial Schur polynomials may be defined as follows. Given a partition λ, and a doubly infinite sequence ..., a−1, a0, a1, ..., one can define the factorial Schur polynomial s_λ(x|a) as

s_λ(x|a) = Σ_T ∏_(α ∈ λ) (x_T(α) − a_(T(α)+c(α))),

where the sum is taken over all semi-standard Young tableaux T of shape λ, with integer entries in 1, ..., n. Here T(α) denotes the value in the box α in T and c(α) is the content of the box. There is also a determinant formula,

s_λ(x|a) = det [ (x_i | a)^(λj+n−j) ]_(1≤i,j≤n) / ∏_(1≤i<j≤n) (x_i − x_j),

where (y|a)^k = (y − a1) ... (y − ak). It is clear that if we let a_i = 0 for all i, we recover the usual Schur polynomial s_λ. The double Schur polynomials and the factorial Schur polynomials in n variables are related via the identity s_λ(x||a) = s_λ(x|u) where a_(n−i+1) = u_i.

Other generalizations

There are numerous generalizations of Schur polynomials:
Hall–Littlewood polynomials
Shifted Schur polynomials
Flagged Schur polynomials
Schubert polynomials
Stanley symmetric functions (also known as stable Schubert polynomials)
Key polynomials (also known as Demazure characters)
Quasi-symmetric Schur polynomials
Row-strict Schur polynomials
Jack polynomials
Modular Schur polynomials
Loop Schur functions
Macdonald polynomials
Schur polynomials for the symplectic and orthogonal group
k-Schur functions
Grothendieck polynomials (K-theoretical analogue of Schur polynomials)
LLT polynomials

See also Schur functor Littlewood–Richardson rule, where one finds some identities involving Schur polynomials.
References Homogeneous polynomials Invariant theory Representation theory of finite groups Symmetric functions Orthogonal polynomials Issai Schur
Schur polynomial
[ "Physics", "Mathematics" ]
2,454
[ "Symmetry", "Group actions", "Symmetric functions", "Invariant theory", "Algebra" ]
3,147,168
https://en.wikipedia.org/wiki/Kantubek
Kantubek (; ) is a ghost town on Vozrozhdeniya Island in the Aral Sea. The town is still found on maps but was abandoned in 1992 following the dissolution of the Soviet Union. It has since been demolished, and there are plans to make the area a national park. Kantubek used to have a population of approximately 1,500 and housed scientists and employees of the Soviet Union's top-secret Aralsk-7 biological weapons research and test site. Brian Hayes, a biochemical engineer with the United States Defense Threat Reduction Agency, led an expedition in the spring and summer of 2002 to neutralize what was believed to be the world's largest anthrax dumping grounds. His team of 113 people neutralized between 100 and 200 tonnes of anthrax over a three-month period. The cost of the cleanup operation was approximately US$5 million. See also Vozrozhdeniya Island Gruinard Island in Scotland, used for anthrax testing References Former populated places in Uzbekistan Ghost towns in Uzbekistan Biological warfare facilities Soviet biological weapons program 1992 disestablishments in Uzbekistan Populated places disestablished in 1992
Kantubek
[ "Biology" ]
239
[ "Biological warfare facilities", "Biological warfare" ]
3,147,292
https://en.wikipedia.org/wiki/Alacrite
Alacrite (also known as Alloy L-605, Cobalt L-605, Haynes 25, and occasionally F90) is a family of cobalt-based alloys. The alloy exhibits useful mechanical properties and is oxidation- and sulfidation-resistant. One member of the family, XSH Alacrite, is described as "a non-magnetic, stainless super-alloy whose high surface hardness enables one to achieve a mirror quality polish." The Institut National de Métrologie in France has also used the material as a kilogram mass standard. Composition and standardization L-605 is composed primarily of cobalt (Co), with a specified mixture of chromium (Cr), tungsten (W), nickel (Ni), iron (Fe) and carbon (C), as well as small amounts of manganese (Mn), silicon (Si), and phosphorus (P). The tungsten and nickel improve the alloy's machinability, while chromium contributes to its solid-solution strengthening. The following tolerances must be met to be considered an L-605 alloy: Properties and Applications The alloy was originally developed for application in aircraft, including combustion chambers, liners, afterburners and the hot section of gas turbines. It has also been used in aerospace components and turbine engines as well as drug-eluting and other kinds of stents due to its biocompatibility. When used for implantable medical devices, the ASTM F90-09 and ISO 5832-5:2005 specifications dictate how L-605 is manufactured and tested. References Biomaterials Cobalt alloys
Alacrite
[ "Physics", "Chemistry", "Biology" ]
336
[ "Biomaterials", "Alloy stubs", "Materials", "Alloys", "Medical technology", "Matter", "Cobalt alloys" ]
3,147,501
https://en.wikipedia.org/wiki/Copper%28II%29%20fluoride
Copper(II) fluoride is an inorganic compound with the chemical formula CuF2. The anhydrous form is a white, ionic, crystalline, hygroscopic salt with a distorted rutile-type crystal structure, similar to other fluorides of chemical formulae MF2 (where M is a metal). The dihydrate, , is blue in colour. Structure Copper(II) fluoride has a monoclinic crystal structure and cannot achieve a higher-symmetry structure. It forms rectangular prisms with a parallelogram base. Each copper ion has four neighbouring fluoride ions at 1.93 Å separation and two further away at 2.27 Å. This distorted octahedral [4+2] coordination is a consequence of the Jahn–Teller effect in d9 copper(II), and leads to a distorted rutile structure similar to that of chromium(II) fluoride, , which is a d4 compound. Uses Cupric fluoride catalyzes the decomposition of nitric oxides in emission control systems. Copper (II) fluoride can be used to make fluorinated aromatic hydrocarbons by reacting with aromatic hydrocarbons in an oxygen-containing atmosphere at temperatures above 450 °C (842 °F). This reaction is simpler than the Sandmeyer reaction, but is only effective in making compounds that can survive at the temperature used. A coupled reaction using oxygen and 2 HF regenerates the copper(II) fluoride, producing water. This method has been proposed as a "greener" method of producing fluoroaromatics since it avoids producing toxic waste products such as ammonium fluoride. Chemistry Copper(II) fluoride can be synthesized from copper and fluorine at temperatures of 400 °C (752 °F). It occurs as a direct reaction. Cu + F2 → CuF2 It loses fluorine in the molten stage at temperatures above 950 °C (1742 °F). 2CuF2 → 2CuF + F2 2CuF → CuF2 + Cu The complex anions of CuF3−, CuF42− and CuF64− are formed if CuF2 is exposed to substances containing fluoride ions F−. Solubility Copper(II) fluoride is slightly soluble in water, but starts to decompose when it is in hot water, producing basic F− and Cu(OH) ions. 
Toxicity There is little specific information on the toxicity of Copper(II) fluoride. Copper toxicity can affect the skin, eyes, and respiratory tract. Serious conditions include metal fume fever, and hemolysis of red blood cells. Copper can also cause damage to the liver and other major organs. Metal fluorides are generally safe at low levels and are added to water in many countries to protect against tooth decay. At higher levels they can cause toxic effects ranging from nausea and vomiting to tremors, breathing problems, serious convulsions and even coma. Brain and kidney damage can result. Chronic exposure can cause losses in bone density, weight loss and anorexia. Hazards Experiments using copper(II) fluoride should be conducted in a fume hood because metal oxide fumes can occur. The combination of acids with copper(II) fluoride may lead to the production of hydrogen fluoride, which is highly toxic and corrosive. References Dierks, S. "Copper Fluoride". http://www.espimetals.com/index.php/msds/537-copper-fluoride (accessed October 9). External links National Pollutant Inventory - Copper and compounds fact sheet National Pollutant Inventory - Fluoride and compounds fact sheet Fluorides Metal halides Copper(II) compounds
Copper(II) fluoride
[ "Chemistry" ]
792
[ "Inorganic compounds", "Fluorides", "Metal halides", "Salts" ]
3,147,522
https://en.wikipedia.org/wiki/Cadmium%20fluoride
Cadmium fluoride (CdF2) is a mostly water-insoluble source of cadmium used in oxygen-sensitive applications, such as the production of metallic alloys. In extremely low concentrations (ppm), this and other fluoride compounds are used in limited medical treatment protocols. Fluoride compounds also have significant uses in synthetic organic chemistry. The standard enthalpy of formation has been found to be −167.39 kcal·mol−1, the Gibbs energy of formation −155.4 kcal·mol−1, and the heat of sublimation 76 kcal·mol−1.

Preparation

Cadmium fluoride is prepared by the reaction of gaseous fluorine or hydrogen fluoride with cadmium metal or its salts, such as the chloride, oxide, or sulfate. It may also be obtained by dissolving cadmium carbonate in 40% hydrofluoric acid solution, evaporating the solution and drying in a vacuum at 150 °C. Another method of preparing it is to mix cadmium chloride and ammonium fluoride solutions, followed by crystallization. The insoluble cadmium fluoride is filtered from solution. Cadmium fluoride has also been prepared by reacting fluorine with cadmium sulfide. This reaction happens very quickly and forms nearly pure fluoride at much lower temperatures than other reactions used.

Uses

Electronic conductor

CdF2 can be transformed into an electronic conductor when doped with certain rare-earth elements or yttrium and treated with cadmium vapor under high-temperature conditions. This process creates blue crystals with varying absorption coefficients depending on the concentrations of the dopant. A proposed mechanism explains the conductivity of these crystals by a reaction of Cd atoms with interstitial F− ions. This creates more CdF2 molecules and releases electrons which are weakly bonded to trivalent dopant ions, resulting in n-type conductivity and a hydrogenic donor level.

Safety

Cadmium fluoride, like all cadmium compounds, is toxic and should be used with care.
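From the formation enthalpy and Gibbs energy quoted above, one can back out an approximate standard entropy of formation via ΔG = ΔH − TΔS. A quick check in Python; the temperature T = 298.15 K is my assumption of standard conditions, not stated in the article:

```python
# Thermodynamic values quoted in the article (kcal/mol).
dH = -167.39   # standard enthalpy of formation
dG = -155.4    # Gibbs energy of formation
T = 298.15     # assumed standard temperature in kelvin (not from the article)

# dG = dH - T*dS  =>  dS = (dH - dG) / T
dS_kcal = (dH - dG) / T      # kcal/(mol*K)
dS_cal = dS_kcal * 1000.0    # cal/(mol*K); roughly -40
```

The negative entropy of formation is the expected sign for a crystalline solid forming from an element plus a diatomic gas.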
Cadmium fluoride can cause potential health issues if it is not handled properly. It can cause irritation to the skin and the eyes, so gloves and protective eyewear are advised. The MSDS, or Material Safety Data Sheet, also includes warnings for ingestion and inhalation. Under acidic conditions, at high temperatures, and in moist environments, hydrogen fluoride and cadmium vapors may be released into the air. Inhalation may cause irritation of the respiratory system as well as congestion, fluorosis, and even pulmonary edema in extreme cases. Cadmium fluoride also has the same potential hazards caused by cadmium and fluoride. References External links National Pollutant Inventory - Cadmium and compounds fact sheet National Pollutant Inventory - Fluoride and compounds fact sheet Fluorides Metal halides Cadmium compounds Fluorite crystal structure
Cadmium fluoride
[ "Chemistry" ]
609
[ "Inorganic compounds", "Fluorides", "Metal halides", "Salts" ]
3,147,788
https://en.wikipedia.org/wiki/Copper%20indium%20gallium%20selenide
Copper indium gallium (di)selenide (CIGS) is a I-III-VI2 semiconductor material composed of copper, indium, gallium, and selenium. The material is a solid solution of copper indium selenide (often abbreviated "CIS") and copper gallium selenide. It has a chemical formula of CuIn1−xGaxSe2, where the value of x can vary from 0 (pure copper indium selenide) to 1 (pure copper gallium selenide). CIGS is a tetrahedrally bonded semiconductor, with the chalcopyrite crystal structure, and a bandgap varying continuously with x from about 1.0 eV (for copper indium selenide) to about 1.7 eV (for copper gallium selenide).

Structure

CIGS is a tetrahedrally bonded semiconductor, with the chalcopyrite crystal structure. Upon heating it transforms to the zincblende form, and the transition temperature decreases from 1045 °C for x = 0 to 805 °C for x = 1.

Applications

It is best known as the material for CIGS solar cells, a thin-film technology used in the photovoltaic industry. In this role, CIGS has the advantage of being able to be deposited on flexible substrate materials, producing highly flexible, lightweight solar panels. Improvements in efficiency have made CIGS an established technology among alternative cell materials.

See also Copper indium gallium selenide solar cells CZTS List of CIGS companies

References Semiconductor materials Copper(I) compounds Indium compounds Gallium compounds Selenides Renewable energy Dichalcogenides
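The composition dependence of the gap can be roughed out from the endpoint values quoted above. A minimal sketch assuming simple linear (Vegard-like) interpolation between 1.0 eV (x = 0) and 1.7 eV (x = 1); real alloys show some bowing, which this deliberately ignores:

```python
def cigs_bandgap(x, eg_cis=1.0, eg_cgs=1.7):
    # Linear interpolation of the CuIn(1-x)Ga(x)Se2 gap between the endpoint
    # compounds (eV); bowing in real alloys is neglected in this sketch.
    if not 0.0 <= x <= 1.0:
        raise ValueError("Ga fraction x must lie in [0, 1]")
    return (1.0 - x) * eg_cis + x * eg_cgs
```

Even this crude estimate shows why the Ga fraction is a useful tuning knob: the gap can be placed anywhere in the 1.0 to 1.7 eV window by composition alone.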
Copper indium gallium selenide
[ "Chemistry" ]
343
[ "Semiconductor materials" ]
3,147,900
https://en.wikipedia.org/wiki/Code-mixing
Code-mixing is the mixing of two or more languages or language varieties in speech. Some scholars use the terms "code-mixing" and "code-switching" interchangeably, especially in studies of syntax, morphology, and other formal aspects of language. Others assume more specific definitions of code-mixing, but these specific definitions may be different in different subfields of linguistics, education theory, communications etc. Code-mixing is similar to the use or creation of pidgins, but while a pidgin is created across groups that do not share a common language, code-mixing may occur within a multilingual setting where speakers share more than one language. As code-switching Some linguists use the terms code-mixing and code-switching more or less interchangeably. Especially in formal studies of syntax, morphology, etc., both terms are used to refer to utterances that draw from elements of two or more grammatical systems. These studies are often interested in the alignment of elements from distinct systems, or on constraints that limit switching. Some work defines code-mixing as the placing or mixing of various linguistic units (affixes, words, phrases, clauses) from two different grammatical systems within the same sentence and speech context, while code-switching is the placing or mixing of units (words, phrases, sentences) from two codes within the same speech context. The structural difference between code-switching and code-mixing is the position of the altered elements—for code-switching, the modification of the codes occurs intersententially, while for code-mixing, it occurs intrasententially. In other work the term code-switching emphasizes a multilingual speaker's movement from one grammatical system to another, while the term code-mixing suggests a hybrid form, drawing from distinct grammars. In other words, code-mixing emphasizes the formal aspects of language structures or linguistic competence, while code-switching emphasizes linguistic performance. 
While many linguists have worked to describe the difference between code-switching and borrowing of words or phrases, the term code-mixing may be used to encompass both types of language behavior.

In sociolinguistics

While linguists who are primarily interested in the structure or form of code-mixing may have relatively little interest in separating code-mixing from code-switching, some sociolinguists have gone to great lengths to differentiate the two phenomena. For these scholars, code-switching is associated with particular pragmatic effects, discourse functions, or associations with group identity. In this tradition, the terms code-mixing or language alternation are used to describe more stable situations in which multiple languages are used without such pragmatic effects. See also Code-mixing as fused lect, below.

In language acquisition

In studies of bilingual language acquisition, code-mixing refers to a developmental stage during which children mix elements of more than one language. Nearly all bilingual children go through a period in which they move from one language to another without apparent discrimination. This differs from code-switching, which is understood as the socially and grammatically appropriate use of multiple varieties. Beginning at the babbling stage, young children in bilingual or multilingual environments produce utterances that combine elements of both (or all) of their developing languages. Some linguists suggest that this code-mixing reflects a lack of control or ability to differentiate the languages. Others argue that it is a product of limited vocabulary; very young children may know a word in one language but not in another. More recent studies argue that this early code-mixing is a demonstration of a developing ability to code-switch in socially appropriate ways. For young bilingual children, code-mixing may be dependent on the linguistic context, cognitive task demands, and interlocutor.
Code-mixing may also function to fill gaps in their lexical knowledge. Some forms of code-mixing by young children may indicate risk for language impairment. In psychology and psycholinguistics In psychology and in psycholinguistics the label code-mixing is used in theories that draw on studies of language alternation or code-switching to describe the cognitive structures underlying bilingualism. During the 1950s and 1960s, psychologists and linguists treated bilingual speakers as, in Grosjean's terms, "two monolinguals in one person". This "fractional view" supposed that a bilingual speaker carried two separate mental grammars that were more or less identical to the mental grammars of monolinguals and that were ideally kept separate and used separately. Studies since the 1970s, however, have shown that bilinguals regularly combine elements from "separate" languages. These findings have led to studies of code-mixing in psychology and psycholinguistics. Sridhar and Sridhar define code-mixing as "the transition from using linguistic units (words, phrases, clauses, etc.) of one language to using those of another within a single sentence". They note that this is distinct from code-switching in that it occurs in a single sentence (sometimes known as intrasentential switching) and in that it does not fulfill the pragmatic or discourse-oriented functions described by sociolinguists. (See Code-mixing in sociolinguistics above.) The practice of code-mixing, which draws from competence in two languages at the same time suggests that these competencies are not stored or processed separately. Code-mixing among bilinguals is therefore studied in order to explore the mental structures underlying language abilities. As fused lect A mixed language or a fused lect is a relatively stable mixture of two or more languages. 
What some linguists have described as "codeswitching as unmarked choice" or "frequent codeswitching" has more recently been described as "language mixing", or in the case of the most strictly grammaticalized forms as "fused lects". In areas where code-switching among two or more languages is very common, it may become normal for words from both languages to be used together in everyday speech. Unlike code-switching, where a switch tends to occur at semantically or sociolinguistically meaningful junctures, this code-mixing has no specific meaning in the local context. A fused lect is identical to a mixed language in terms of semantics and pragmatics, but fused lects allow less variation since they are fully grammaticalized. In other words, there are grammatical structures of the fused lect that determine which source-language elements may occur. A mixed language is different from a creole language. Creoles are thought to develop from pidgins as they become nativized. Mixed languages develop from situations of code-switching. (See the distinction between code-mixing and pidgin above.) Local names There are many names for specific mixed languages or fused lects. These names are often used facetiously or carry a pejorative sense. Named varieties include the following, among others. Benglish Bisalog Bislish Chinglish Denglisch Dunglish Franglais Franponais Greeklish Hinglish Hokaglish Konglish Manglish Maltenglish Poglish Porglish Portuñol Singlish Spanglish Svorsk Tanglish Taglish Tenglish Turklish Notes References Syntax Linguistic morphology Education theory Human communication Sociolinguistics Psycholinguistics Language acquisition
Code-mixing
[ "Biology" ]
1,482
[ "Human communication", "Behavior", "Human behavior" ]
3,147,902
https://en.wikipedia.org/wiki/Deer%20horn
A deer horn, or deer whistle, is a whistle mounted on automobiles intended to help prevent collisions with deer. Air moving through the device produces sound (ultrasound in some models), intended to warn deer of a vehicle's approach. Deer are highly unpredictable, skittish animals whose normal reaction to an unfamiliar sound is to stop, look and listen to determine if they are being threatened. If the whistle gives them advance warning, they may freeze on the roadside, rather than running across the road into the path of the vehicle. In Australia, a different product, with electrically powered speakers (Shu Roo), is used to decrease collisions with kangaroos. Researchers with the University of Wisconsin–Madison measured three devices; a press report said the researchers found these three devices produced "low-pitched and ultrasonic sounds at speeds of 30 to 70 miles per hour; however, researchers were unable to verify that deer responded to the sounds." Researchers with the Georgia Game and Fish Department have pointed out several reasons why ultrasound devices may not work as advertised: Some deer whistles do not emit any ultrasonic sound under the advertised operating conditions (typically when the vehicle exceeds 30 mph). Ultrasonic sound does not carry very well. It does not travel a long enough distance to provide adequate warning, and also is stopped by virtually any intervening object, so any curves in a road will block the sound. Little is known about the auditory limits of deer, but current knowledge indicates that deer hear approximately the same frequencies as humans, and thus if humans can't hear a sound, deer probably can't either. If deer could hear ultrasound, it is unknown if it would alarm them or induce a flight response. In addition to the Georgia and Wisconsin studies, a study by the Ohio State Police Department indicated the whistles are ineffective. 
The Department of Zoology at the University of Melbourne did independent testing, funded by the Royal Automobile Club of Victoria, New South Wales Road Traffic Authority, National Roads and Motorists’ Association Limited, and Transport South Australia. They bought one Shu Roo and tested it on a sedan, a 4x4, an 18-seat bus, and a cargo truck. The Shu Roo could be heard by their test equipment above the sound of wind and vehicle engines at up to . Wind on test days ranged from 0 to . They also compared road collisions among fleet vehicles with and without Shu Roos, especially targeting bus and truck companies. They used pre-existing installations of Shu Roos at the participating companies, not random assignment. Vehicles averaged one collision with a kangaroo per , the same value with and without Shu Roos. They excluded two vehicles with Shu Roos which hit 39 and 25 kangaroos respectively, each in one night. The collisions of non-Shu Roo vehicles were concentrated in fewer vehicles than the collisions of Shu Roo vehicles, which may reflect routes or drivers. Fleet managers reported some Shu Roos did not stay on. It was hard to recruit professional drivers willing to report their mileage to the survey. An alternative in future studies would be to enlist a car hire company, since they already track mileage, could randomly assign devices to cars, and benefit from accurate results. References Further reading Vehicle safety technologies Sound production Human–animal interaction
Deer horn
[ "Biology" ]
658
[ "Human–animal interaction", "Animals", "Humans and other species" ]
3,147,924
https://en.wikipedia.org/wiki/Photoexcitation
Photoexcitation is the production of an excited state of a quantum system by photon absorption. The excited state originates from the interaction between a photon and the quantum system. A photon's energy is determined by the wavelength of the light that carries it: light with longer wavelengths carries photons of lower energy, while light with shorter wavelengths carries photons of higher energy. When a photon interacts with a quantum system, the wavelength therefore matters: a shorter wavelength transfers more energy to the quantum system than a longer one. On the atomic and molecular scale, photoexcitation is the photoelectrochemical process of electron excitation by photon absorption, when the energy of the photon is too low to cause photoionization. The absorption of the photon takes place in accordance with Planck's quantum theory. Photoexcitation plays a role in photoisomerization and is exploited in different techniques: Dye-sensitized solar cells make use of photoexcitation in inexpensive, mass-produced solar cells. These cells rely on a large surface area in order to catch and absorb as many high-energy photons as possible. Shorter wavelengths are more efficient for the energy conversion than longer wavelengths, since shorter wavelengths carry photons that are more energy-rich; light dominated by longer wavelengths therefore yields a slower and less efficient conversion of energy in dye-sensitized solar cells. Photochemistry Luminescence Optically pumped lasers use photoexcitation to give the excited atoms the large direct-gap gain needed for lasing. The carrier density needed for population inversion in germanium (Ge), a material often used in lasers, must reach 10²⁰ cm⁻³, and this is achieved via photoexcitation. 
Photoexcitation sends the electrons in atoms to an excited state. Once the number of atoms in the excited state exceeds the number in the normal ground state, population inversion occurs. This inversion, like the one produced in germanium, makes it possible for materials to act as lasers. Photochromic applications. Photochromism is the transformation between two forms of a molecule caused by absorbing a photon. For example, the BIPS molecule (2H-1-benzopyran-2,2-indolines) can convert from trans to cis and back by absorbing a photon. The different forms are associated with different absorption bands: in the cis form of BIPS, the transient absorption band lies at 21050 cm⁻¹, in contrast to the band of the trans form, which lies at 16950 cm⁻¹. The results are optically visible: BIPS in gels turns from a colorless appearance to a brown or pink color after repeated exposure to a high-energy UV pump beam, as the high-energy photons change the molecule's structure. On the nuclear scale, photoexcitation includes the production of nucleon and delta baryon resonances in nuclei. References Photochemistry Physical chemistry Time-resolved spectroscopy
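The wavelength–energy relation underlying photoexcitation is E = hc/λ. A minimal Python sketch (constants rounded, and the two wavelengths chosen purely for illustration):

```python
# Photon energy E = h*c / wavelength; shorter wavelength -> more energy.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, in eV."""
    return h * c / (wavelength_nm * 1e-9) / EV

# Ultraviolet (300 nm) vs. near-infrared (900 nm) photons:
print(photon_energy_ev(300))  # ~4.1 eV
print(photon_energy_ev(900))  # ~1.4 eV
```

The threefold difference in energy per photon is why shorter-wavelength light is described above as transferring more energy to a quantum system.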
Photoexcitation
[ "Physics", "Chemistry" ]
650
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Spectroscopy", "Time-resolved spectroscopy", "nan", "Physical chemistry" ]
3,147,940
https://en.wikipedia.org/wiki/Quantum%20heterostructure
A quantum heterostructure is a heterostructure in a substrate (usually a semiconductor material), where the structure's size restricts the movement of the charge carriers, forcing them into quantum confinement. This leads to the formation of a set of discrete energy levels at which the carriers can exist. Quantum heterostructures have a sharper density of states than structures of more conventional sizes. Quantum heterostructures are important for fabrication of short-wavelength light-emitting diodes and diode lasers, and for other optoelectronic applications, e.g. high-efficiency photovoltaic cells. Examples of quantum heterostructures confining the carriers in quasi-two, -one and -zero dimensions are: Quantum wells Quantum wires Quantum dots References See also http://www.ecse.rpi.edu/~schubert/Light-Emitting-Diodes-dot-org/chap04/chap04.htm Kitaev's periodic table Quantum electronics Nanomaterials Semiconductor structures
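The discrete levels produced by confinement can be illustrated with the textbook infinite-well formula E_n = n²h²/(8mL²). This is a generic sketch, and the well width and effective mass below are assumed illustrative values (roughly a 10 nm GaAs well), not figures from this article:

```python
# Energy levels of a carrier confined in an idealized infinite quantum well.
h = 6.626e-34        # Planck constant, J*s
m_e = 9.109e-31      # electron rest mass, kg
m_eff = 0.067 * m_e  # assumed effective mass (typical for GaAs electrons)
L = 10e-9            # assumed well width: 10 nm
EV = 1.602e-19       # joules per electronvolt

def level_ev(n):
    """Energy of the n-th confined level, E_n = n^2 h^2 / (8 m L^2), in eV."""
    return n**2 * h**2 / (8 * m_eff * L**2) / EV

# The carriers can only occupy these discrete energies (n = 1, 2, 3, ...):
for n in (1, 2, 3):
    print(n, round(level_ev(n), 3))  # energies scale as n^2
```

Shrinking L pushes the levels apart, which is how size controls the emission wavelength in quantum-well LEDs and lasers.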
Quantum heterostructure
[ "Physics", "Materials_science" ]
217
[ "Quantum electronics", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Nanomaterials", "Quantum physics stubs" ]
3,148,017
https://en.wikipedia.org/wiki/Nightingale%20floor
Nightingale floors are floors that make a squeaking sound when walked upon. These floors were used in the hallways of some temples and palaces, the most famous example being Nijō Castle, in Kyoto, Japan. Dry boards naturally creak under pressure, but these floors were built in a way that the flooring nails rub against a jacket or clamp, causing chirping noises. It is unclear whether the design was intentional; it seems that, at least initially, the effect arose by chance. An information sign at Nijō Castle states that "The singing sound is not actually intentional, stemming rather from the movement of nails against clumps in the floor caused by wear and tear over the years". Legend has it that the squeaking floors were used as a security device, assuring that no one could sneak through the corridors undetected. The English name "nightingale" refers to the Japanese bush warbler, or uguisu, which is a common songbird in Japan. Etymology refers to the Japanese bush warbler. The latter segment comes from , which can be used to mean "to lay/board (flooring)", as in the expression yukaita wo haru (床板を張る) meaning "to board a/the floor". The verb haru becomes nominalized as hari and voiced through rendaku to become bari. In this form it refers to the method of boarding, as in other words like herinbōnbari (ヘリンボーン張り), which refers to flooring laid in a Herringbone pattern. As such, uguisubari means "Warbler boarding". Construction The floors were made from dried boards. Upside-down V-shaped joints move within the boards when pressure is applied. 
Examples The following locations incorporate nightingale floors: Nijō Castle, Kyoto Chion-in, Kyoto Eikan-dō Zenrin-ji, Kyoto Daikaku-ji, Kyoto Ryōan-ji, Kyoto Tōji-in, Kyoto Erin-ji Temple, Tokyo Modern influences and related topics Melody Road in Hokkaido, Wakayama, and Gunma Singing Road in Anyang, Gyeonggi, South Korea Civic Musical Road in Lancaster, California Across the Nightingale Floor, 2002 novel by Lian Hearn Notes References A-Z Animals. "Uguisi" under "Animals". (2008). accessed November 3, 2012. http://a-z-animals.com/animals/uguisu/. Bunt, Jonathan and Gillian Hall, ed. Oxford Beginner's Japanese Dictionary. New York: Oxford University Press, 2000. Henshall, Kenneth G. A Guide to Remembering Japanese Characters. Vermont: Tuttle Publishing Company, 1998. Japan-guide.com. "Nijo Castle (Nihojo)" under "Kyoto Travel: Nijo Castle" (June 11, 2012). accessed November 3, 2012. http://www.japan-guide.com/e/e3918.html. Saiga-Jp.com. "Japanese Kanji Dictionary" under "Japanese Learning" (March 7, 2012). accessed November 4, 2012. https://web.archive.org/web/20101029180930/http://www.saiga-jp.com/kanji_dictionary.html. External links Kyoto Travel: Nijo Castle Floors Japanese architectural features Security engineering
Nightingale floor
[ "Engineering" ]
703
[ "Structural engineering", "Systems engineering", "Floors", "Security engineering" ]
3,148,232
https://en.wikipedia.org/wiki/Semimodule
In mathematics, a semimodule over a semiring R is an algebraic structure analogous to a module over a ring, with the exception that it forms only a commutative monoid with respect to its addition operation, as opposed to an abelian group. Definition Formally, a left R-semimodule consists of an additively-written commutative monoid M and a map from R × M to M, written (r, m) ↦ rm, satisfying the following axioms: r(m + m′) = rm + rm′; (r + r′)m = rm + r′m; (rr′)m = r(r′m); 1m = m; r0 = 0 = 0m. A right R-semimodule can be defined similarly. For modules over a ring, the last axiom follows from the others. This is not the case with semimodules. Examples If R is a ring, then any R-module is an R-semimodule. Conversely, it follows from the second, fourth, and last axioms that (−1)m is an additive inverse of m for all m, so any semimodule over a ring is in fact a module. Any semiring is a left and right semimodule over itself in the same way that a ring is a left and right module over itself. Every commutative monoid is uniquely an ℕ-semimodule in the same way that an abelian group is a ℤ-module. References Algebraic structures Module theory
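The last example can be sketched in code: a commutative monoid becomes an ℕ-semimodule by defining n·m as m added to itself n times. A minimal Python sketch checking the standard semimodule axioms on sample values (the (max, 0) monoid is an arbitrary choice for illustration):

```python
# n·m in a commutative monoid (add, zero): m added to itself n times.
def scale(n, m, add, zero):
    result = zero
    for _ in range(n):
        result = add(result, m)
    return result

# Example monoid: non-negative integers under max, with identity 0.
add, zero = max, 0

# Spot-check the semimodule axioms for sample r, r' in N and m, m' in M.
r, rp, m, mp = 3, 2, 5, 7
assert scale(r, add(m, mp), add, zero) == add(scale(r, m, add, zero), scale(r, mp, add, zero))  # r(m+m') = rm + rm'
assert scale(r + rp, m, add, zero) == add(scale(r, m, add, zero), scale(rp, m, add, zero))      # (r+r')m = rm + r'm
assert scale(r * rp, m, add, zero) == scale(r, scale(rp, m, add, zero), add, zero)              # (rr')m = r(r'm)
assert scale(1, m, add, zero) == m                                                              # 1m = m
assert scale(r, zero, add, zero) == zero == scale(0, m, add, zero)                              # r0 = 0 = 0m
```

Note that nothing here requires additive inverses, which is exactly the gap between a semimodule and a module.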
Semimodule
[ "Mathematics" ]
259
[ "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Algebraic structures", "Module theory" ]
3,148,264
https://en.wikipedia.org/wiki/Causal%20loop%20diagram
A causal loop diagram (CLD) is a causal diagram that visualizes how different variables in a system are causally interrelated. The diagram consists of a set of words and arrows. Causal loop diagrams are accompanied by a narrative which describes the causally closed situation the CLD describes. Closed loops, or causal feedback loops, in the diagram are very important features of CLDs because they may help identify non-obvious vicious circles and virtuous circles. The words with arrows coming in and out represent variables, or quantities whose value changes over time, and the links represent a causal relationship between the two variables (i.e., they do not represent a material flow). A link marked + indicates a positive relation where an increase in the causal variable leads, all else equal, to an increase in the effect variable, or a decrease in the causal variable leads, all else equal, to a decrease in the effect variable. A link marked − indicates a negative relation where an increase in the causal variable leads, all else equal, to a decrease in the effect variable, or a decrease in the causal variable leads, all else equal, to an increase in the effect variable. A positive causal link can be said to lead to a change in the same direction, while a negative link can be said to lead to a change in the opposite direction, i.e. if the variable in which the link starts increases, the other variable decreases, and vice versa. The words without arrows are loop labels. As with the links, feedback loops have either positive (i.e., reinforcing) or negative (i.e., balancing) polarity. CLDs contain labels for these processes, often using numbering (e.g., B1 for the first balancing loop being described in a narrative, B2 for the second one, etc.), and phrases that describe the function of the loop (e.g., "haste makes waste"). 
A reinforcing loop is a cycle in which the effect of a variation in any variable propagates through the loop and returns to reinforce the initial deviation (i.e. if a variable increases in a reinforcing loop the effect through the cycle will return an increase to the same variable, and vice versa). A balancing loop is a cycle in which the effect of a variation in any variable propagates through the loop and returns to the variable a deviation opposite to the initial one (i.e. if a variable increases in a balancing loop the effect through the cycle will return a decrease to the same variable, and vice versa). Balancing loops are typically goal-seeking, or error-sensitive, processes and are presented with the variable indicating the goal of the loop. Reinforcing loops are typically vicious or virtuous cycles. Example of a positive reinforcing loop shown in the illustration: The amount of the bank balance will affect the amount of the earned interest, as represented by the top blue arrow, pointing from bank balance to earned interest. Since an increase in the bank balance results in an increase in the earned interest, this link is positive, which is denoted with a +. The earned interest gets added to the bank balance, also a positive link, represented by the bottom blue arrow. The causal effect between these variables forms a positive reinforcing loop, represented by the green arrow, which is denoted with an R. (The terms positive and negative mean "tends to increase" and "tends to reduce", not subjective value judgements.) History The use of words and arrows (known in network theory as nodes and edges) to construct directed graph models of cause and effect dates back, at least, to the use of path analysis by Sewall Wright in 1918. According to George Richardson's book "Feedback Thought in Social Science and Systems Theory", the first published, formal use of a causal loop diagram to describe a feedback system was Magoroh Maruyama's 1963 article "The Second Cybernetics: Deviation-Amplifying Mutual Causal Processes". 
Positive and negative causal links A positive causal link means that the two variables change in the same direction, i.e. if the variable in which the link starts decreases, the other variable also decreases. Similarly, if the variable in which the link starts increases, the other variable increases. A negative causal link means that the two variables change in opposite directions, i.e. if the variable in which the link starts increases, then the other variable decreases, and vice versa. Example Reinforcing and balancing loops To determine if a causal loop is reinforcing or balancing, one can start with an assumption, e.g. "Variable 1 increases", and follow the loop around. The loop is: reinforcing if, after going around the loop, one ends up with the same result as the initial assumption; balancing if the result contradicts the initial assumption. Or to put it in other words: reinforcing loops have an even number of negative links (zero also is even, see example below); balancing loops have an odd number of negative links. Identifying reinforcing and balancing loops is an important step for identifying Reference Behaviour Patterns, i.e. possible dynamic behaviours of the system. Reinforcing loops are associated with exponential increases/decreases. Balancing loops are associated with reaching a plateau. If the system has delays (often denoted by drawing a short line across the causal link), the system might fluctuate. Example See also References External links WikiSD, the System Dynamics Society Wiki Learn to Read Causal Loop Diagrams via SystemsAndUs
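The even/odd counting rule above is easy to mechanize. A small Python sketch (the helper name and representation are invented here for illustration):

```python
# A loop is reinforcing iff it contains an even number of negative links,
# balancing iff it contains an odd number.
def loop_polarity(link_signs):
    """link_signs: the '+'/'-' polarity of each link around a closed loop."""
    negatives = link_signs.count('-')
    return "balancing" if negatives % 2 else "reinforcing"

print(loop_polarity(['+', '+']))       # two positive links: reinforcing
print(loop_polarity(['+', '-']))       # one negative link: balancing
print(loop_polarity(['-', '-', '+']))  # two negative links: reinforcing
```

This reproduces the tracing procedure in closed form: each negative link flips the sign of the propagated change, so an even number of flips returns the initial deviation unchanged.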
Causal loop diagram
[ "Physics" ]
1,115
[]
3,148,334
https://en.wikipedia.org/wiki/Gamma%20Herculis
Gamma Herculis, Latinized from γ Herculis, is a magnitude 3.74 binary star system in the northern constellation of Hercules. It is easily visible to the naked eye under good observing conditions. Properties This is known to be a spectroscopic binary system, although there is no information about the secondary component. Based upon parallax measurements, this system is located at a distance of about from the Earth. The spectrum of the primary star matches a stellar classification of A9III, which indicates this is a giant star that has exhausted the hydrogen at its core and evolved away from the main sequence. The effective temperature is about 7,031 K, giving the star a white hue characteristic of A-type stars. It is rotating rapidly with a projected rotational velocity of . The interferometry-measured angular diameter of this star is , which, at its estimated distance, equates to a physical radius of about six times the radius of the Sun. Observations by German astronomer Ernst Zinner in 1929 gave indications that this may be a variable star. It was listed in the New Catalogue of Suspected Variable Stars (1981) with a magnitude range of 3.74 to 3.81. Further observations up to 1991 showed a pattern of small, slow variations with a magnitude variation of 0.05. These appeared to repeat semi-regularly with a period of 183.6 days, although the spectroscopic data presented a shorter period of 165.9 days. Name It was a member of indigenous Arabic asterism al-Nasaq al-Sha'āmī, "the Northern Line" of al-Nasaqān "the Two Lines", along with β Her (Kornephoros), γ Ser (Zheng, Ching) and β Ser (Chow). 
According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, al-Nasaq al-Sha'āmī or Nasak Shamiya was the title for three stars: β Ser as Nasak Shamiya I, γ Ser as Nasak Shamiya II, and γ Her as Nasak Shamiya III (excluding β Her). In Chinese, (), meaning Right Wall of Heavenly Market Enclosure, refers to an asterism representing eleven old states in China and marking the right borderline of the enclosure, consisting of γ Herculis, β Herculis, κ Herculis, γ Serpentis, β Serpentis, δ Serpentis, α Serpentis, ε Serpentis, δ Ophiuchi, ε Ophiuchi and ζ Ophiuchi. Consequently, the Chinese name for γ Herculis itself is (, ), representing Héjiān (河間), possibly the Hejian Kingdom or Hejian Commandery (see Sima Yong, the Prince of Hejian, and Liu Wuzhou). Héjiān (河間) was westernized into Ho Keen by R.H. Allen, with the meaning "between the river". References Hercules (constellation) Spectroscopic binaries Herculis, 020 Herculis, Gamma A-type giants 080170 Semiregular variable stars 6095 147547 Durchmusterung objects
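The radius estimate quoted above follows from R = (θ/2)·d, with θ the interferometric angular diameter in radians and d the distance. Since the measured values are not reproduced in the text, the sketch below uses assumed illustrative numbers (θ ≈ 0.95 mas, d ≈ 59 pc) chosen only to land near the quoted six solar radii:

```python
import math

MAS_TO_RAD = math.radians(1) / 3600 / 1000  # milliarcseconds -> radians
PC_TO_M = 3.086e16                          # metres per parsec
R_SUN = 6.957e8                             # solar radius, m

def radius_in_suns(theta_mas, dist_pc):
    """Physical radius R = (theta/2) * d, expressed in solar radii."""
    theta_rad = theta_mas * MAS_TO_RAD      # assumed illustrative inputs
    return theta_rad * dist_pc * PC_TO_M / 2 / R_SUN

print(round(radius_in_suns(0.95, 59), 1))   # ~6 solar radii
```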
Gamma Herculis
[ "Astronomy" ]
655
[ "Hercules (constellation)", "Constellations" ]
3,148,419
https://en.wikipedia.org/wiki/Isotype%20%28picture%20language%29
Isotype (International System of Typographic Picture Education) is a method of showing social, technological, biological, and historical connections in pictorial form. It consists of a set of standardized and abstracted pictorial symbols to represent social-scientific data with specific guidelines on how to combine the identical figures using serial repetition. It was first known as the Vienna Method of Pictorial Statistics (Wiener Methode der Bildstatistik), due to its having been developed at the Gesellschafts- und Wirtschaftsmuseum in Wien (Social and Economic Museum of Vienna) between 1925 and 1934. The founding director of this museum, Otto Neurath, was the initiator and chief theorist of the Vienna Method. Gerd Arntz was the artist responsible for realising the graphics. The term Isotype was applied to the method around 1935, after its key practitioners were forced to leave Vienna by the rise of Austrian fascism. Origin and development The Gesellschafts- und Wirtschaftsmuseum was principally financed by the municipality of Vienna, during a period of expansive municipal social democratic governance known as Red Vienna within the new republic of Austria. An essential task of the museum was to inform the Viennese about their city. Neurath stated that the museum was not a treasure chest of rare objects, but a teaching museum. The aim was to "represent social facts pictorially" and to bring "dead statistics" to life by making them visually attractive and memorable. One of the museum's catch-phrases was: "To remember simplified pictures is better than to forget accurate figures". The principal instruments of the Vienna Method were pictorial charts, which could be produced in multiple copies and serve both permanent and travelling exhibitions. The museum also innovated with interactive models and other attention-grabbing devices, and there were even some early experiments with animated films. From its beginning the Vienna Method/Isotype was the work of a team. 
Neurath built up a kind of prototype for an interdisciplinary graphic design agency. In 1926 he encountered woodcut prints by the German artist Gerd Arntz and invited him to collaborate with the museum. There was a further meeting in 1928 when Neurath attended the Pressa international exhibition. Arntz moved to Vienna in 1929 and took up a full-time position there. His simplified graphic style benefited the design of repeatable pictograms that were integral to Isotype. The influence of these pictograms on today's information graphics is immediately apparent, although perhaps not yet fully recognized. A central task in Isotype was the "transformation" of complex source information into a sketch for a self-explanatory chart. The principal "transformer" from the beginning was Marie Reidemeister (who became Marie Neurath in 1941). A defining project of the first phase of Isotype (then still known as the Vienna Method) was the monumental collection of 100 statistical charts, Gesellschaft und Wirtschaft (1930). Principles The first rule of Isotype is that greater quantities are not represented by an enlarged pictogram but by a greater number of the same-sized pictogram. In Neurath’s view, variation in size does not allow accurate comparison (what is to be compared – height/length or area?) whereas repeated pictograms, which always represent a fixed value within a certain chart, can be counted if necessary. Isotype pictograms almost never depicted things in perspective in order to preserve this clarity, and there were other guidelines for graphic configuration and use of colour. The best exposition of Isotype technique remains Otto Neurath’s book International picture language (1936). "Visual education" was always the prime motive behind Isotype, which was worked out in exhibitions and books designed to inform ordinary citizens (including schoolchildren) about their place in the world. 
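The repetition rule, in which each identical pictogram stands for a fixed value rather than one symbol being scaled, can be sketched in a few lines of Python; the data, unit value, and icon below are invented for illustration:

```python
# One Isotype-style chart row: each icon represents `unit` of the quantity,
# so larger quantities get more icons, never a bigger icon.
def isotype_row(label, quantity, unit=10, icon="*"):
    full, rest = divmod(quantity, unit)
    # Isotype showed partial quantities as part-icons; a '+' stands in here.
    return f"{label:<8}{icon * full}{'+' if rest else ''}"

for label, qty in [("1900", 25), ("1930", 60)]:
    print(isotype_row(label, qty))
```

Because every icon has the same fixed value within a chart, quantities can be compared, and if necessary counted, directly from the rows.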
It was never intended to replace verbal language; it was a "helping language" always accompanied by verbal elements. Otto Neurath realized that it could never be a fully developed language, so instead he called it a “language-like technique”. Diffusion and adaptation As more requests came to the Vienna museum from abroad, a partner institute called Mundaneum (a name adopted from an abortive collaboration with Paul Otlet) was established in 1931/2 to promote international work. It formed branches containing small exhibitions in Berlin, The Hague, London and New York City. Members of the Vienna team travelled periodically to the Soviet Union during the early 1930s in order to help set up the 'All-union institute of pictorial statistics of Soviet construction and economy' (Всесоюзный институт изобразительной статистики советского строительства и хозяйства), commonly abbreviated to IZOSTAT (ИЗОСТАТ), which produced statistical graphics about the Five Year Plans, among other things. After the closure of the Gesellschafts- und Wirtschaftsmuseum in 1934 Neurath, Reidemeister and Arntz fled to the Netherlands, where they set up the International Foundation for Visual Education in The Hague. During the 1930s significant commissions were received from the US, including a series of mass-produced charts for the National Tuberculosis Association and Otto Neurath’s book Modern man in the making (1939), a high point of Isotype on which he, Reidemeister and Arntz worked in close collaboration. Rudolf Modley, who served as an assistant to Otto Neurath in Vienna, introduced ISOTYPE methods to the United States through his position as chief curator at the Museum of Science and Industry in Chicago. Furthermore, by 1934 Modley established Pictorial Statistics Incorporated in New York, a company which promoted the production and distribution of ISOTYPE-like pictographs for education, news, and other forms of communications. 
Beginning in 1936, Modley's pictographs were used in a nationwide public health campaign for US Surgeon General Thomas Parran's "War on Syphilis." Otto and Marie Neurath fled from German invasion to England, where they established the Isotype Institute in 1942. In Britain Isotype was applied to wartime publications sponsored by the Ministry of Information and to documentary films produced by Paul Rotha. After Otto Neurath’s death in 1945, Marie Neurath and her collaborators continued to apply Isotype to tasks of representing many kinds of complex information, especially in popular science books for young readers. A real test of the international ambitions of Isotype, as Marie Neurath saw it, was the project to design information for civic education, election procedure and economic development in the Western Region of Nigeria in the 1950s. Archive In 1971 the Isotype Institute gave its working material to the University of Reading, where it is housed in the Department of Typography & Graphic Communication as the Otto and Marie Neurath Isotype Collection. The responsibilities of the institute were transferred to the university in 1981. See also Communication Data visualization Information design Information graphics References Bibliography Otto Neurath, International picture language. London: Kegan Paul, 1936. Facsimile reprint: Department of Typography & Graphic Communication, University of Reading, 1980. Michael Twyman, ‘The significance of Isotype’, 1975 Robin Kinross, ‘On the Influence of Isotype’. Information Design Journal, ii/2, 1981, pp. 122–30. Marie Neurath and Robin Kinross. The transformer: principles of making Isotype charts. London: Hyphen Press, 2009. Otto Neurath, From hieroglyphics to Isotype: a visual autobiography. London, Hyphen Press, 2010. Christopher Burke, Eric Kindel, Sue Walker (eds), Isotype: design & contexts, 1925–1971. 
London, Hyphen Press, 2013 External links Isotype revisited Gerd Arntz Web Archive Otto Neurath | Pictorial Statistics Otto Neurath (OEGWM) Stroom Den Haag – After Neurath Engineered languages Infographics Graphic design Pictograms 1935 introductions Constructed languages introduced in the 1920s
Isotype (picture language)
[ "Mathematics" ]
1,700
[ "Symbols", "Pictograms" ]
3,148,426
https://en.wikipedia.org/wiki/Canadian%20IT%20Body%20of%20Knowledge
Canadian Information Technology Body of Knowledge (CITBOK) is a project sponsored and undertaken by the Canadian Information Processing Society (CIPS) to define and outline the body of knowledge that defines a Canadian Information Systems Professional (ISP). CIPS recognizes that in order to strengthen the criteria of the ISP designation successfully while still maintaining high standards, the association needs to develop a Canadian Information Technology Body of Knowledge (CITBOK) that will set standards for knowledge. The CITBOK is an outline of the knowledge bases that form the intellectual basis for the IT profession. CIPS has identified a core CITBOK that all professionals would be expected to master, and specialty bodies of knowledge that depend on the practitioner's area of practice. When setting this Canadian standard for Information Technology (IT) knowledge, CIPS looked to other organizations internationally for standards and bodies of knowledge that CIPS could adopt or adapt, especially for the specialty areas. For example, in 2004, CIPS adopted the British Computer Society (BCS) Professional Examination Study Guide Syllabus Diploma level (Core and 11 specialist modules) as the initial Body of Knowledge for CIPS. The CITBOK aims to: be an industry structure model that can be used to define the set of performance, training and development standards for all IT practitioners (CIPS members and non-members) in Canada; create alternate paths to certification based on concepts of CITBOK; establish guidelines for accreditation criteria based on concepts of CITBOK; and create a professional development model that sets the standards criteria for the knowledge base including knowledge, skills, and professional activities for IT practitioners. See also ITIL PMBOK SWEBOK References Research projects Information technology in Canada Information technology education Bodies of knowledge
Canadian IT Body of Knowledge
[ "Technology" ]
356
[ "Information technology", "Information technology education" ]
3,148,461
https://en.wikipedia.org/wiki/Pesticide%20residue
Pesticide residue refers to the pesticides that may remain on or in food after they are applied to food crops. The maximum allowable levels of these residues in foods are stipulated by regulatory bodies in many countries. Regulations such as pre-harvest intervals also prevent harvest of crop or livestock products if recently treated in order to allow residue concentrations to decrease over time to safe levels before harvest. Definition A pesticide is a substance or a mixture of substances used for killing pests: organisms dangerous to cultivated plants or to animals. The term covers various classes of pesticide, such as insecticides, fungicides, herbicides and nematocides. The definition of pesticide residue according to the World Health Organization (WHO) is: Any specified substances in or on food, agricultural commodities or animal feed resulting from the use of a pesticide. The term includes any derivatives of a pesticide, such as conversion products, metabolites, reaction products and impurities considered to be of toxicological significance. The term “pesticide residue” includes residues from unknown or unavoidable sources (e.g. environmental) as well as known uses of the chemical. The definition of a residue for compliance with maximum residue limits (MRLs) is that combination of the pesticide and its metabolites, derivatives and related compounds to which the MRL applies. Background Prior to 1940, pesticides consisted of inorganic compounds (copper, arsenic, mercury, and lead) and plant-derived products. Most of these were abandoned because they were highly toxic and ineffective. Since World War II, pesticides composed of synthetic organic compounds have been the most important form of pest control. The growth in these pesticides accelerated in the late 1940s after Paul Müller discovered the insecticidal properties of DDT in 1939. The effects of pesticides such as aldrin, dieldrin, endrin, chlordane, parathion, captan and 2,4-D were also discovered at this time.
Those pesticides were widely used because they controlled pests effectively. Environmental problems with DDT became increasingly apparent, since it is persistent and bioaccumulates in the body and the food chain. In 1962, Rachel Carson published Silent Spring to illustrate the risks of DDT and how it threatened biodiversity. DDT was banned for agricultural use in the United States in 1972, and the other persistent pesticides were banned under the 2001 Stockholm Convention. Persistent pesticides are no longer used for agriculture, and will not be approved by the authorities. Because the half-life in soil is long (2–15 years for DDT), residues can still be detected in humans, at levels 5 to 10 times lower than found in the 1970s. Regulations Each country adopts its own agricultural policies, Maximum Residue Limits (MRLs) and Acceptable Daily Intakes (ADIs). The level of pesticide usage varies by country because forms of agriculture differ between regions according to their geographical or climatic factors. Pre-harvest intervals are also set to require that a crop or livestock product not be harvested before a certain period after application, in order to allow the pesticide residue to decrease below maximum residue limits or other tolerance levels. Likewise, restricted-entry intervals specify how long residue concentrations must be allowed to decrease before a worker may, without protective equipment, re-enter an area where pesticides have been applied. International Some countries use the international maximum residue limits of the Codex Alimentarius to define the residue limits; this was established by the Food and Agriculture Organization of the United Nations (FAO) and the World Health Organization (WHO) in 1963 to develop international food standards, guidelines, codes of practice, and recommendations for food safety. Currently the CODEX has 185 Member Countries and 1 member organization (EU). The following is the list of maximum residue limits (MRLs) for spices adopted by the commission.
European Union The European Union has a searchable database with the Maximum Residue Limits (MRLs) for 716 pesticides. Under the previous system, revised in 2008, certain pesticide residues were regulated by the commission; others were regulated by Member States, and others were not regulated at all. New Zealand Food Standards Australia New Zealand develops the standards for levels of pesticide residues in foods through a consultation process. The New Zealand Food Safety Authority publishes the maximum limits of pesticide residues for foods produced in New Zealand. United Kingdom Monitoring of pesticide residues in the UK began in the 1950s. From 1977 to 2000 the work was carried out by the Working Party on Pesticide Residues (WPPR); in 2000 it was taken over by the Pesticide Residue Committee (PRC). The PRC advises the government through the Pesticides Safety Directorate and the Food Standards Agency (FSA). United States In the US, tolerances for the amount of pesticide residue that may remain on food are set by the EPA, and measures are taken to keep pesticide residues below the tolerances. The US EPA has a web page for the allowable tolerances. In order to assess the risks that pesticides pose to human health, the EPA analyzes both individual pesticide active ingredients and the common toxic effects of groups of pesticides, in what is called a cumulative risk assessment. Limits that the EPA sets on pesticides before approving them include a determination of how often the pesticide should be used and how it should be used, in order to protect the public and the environment. In the US, the Food and Drug Administration (FDA) and USDA also routinely check food for the actual levels of pesticide residues.
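In code, a tolerance check of the kind the monitoring agencies above perform reduces to comparing a measured concentration against the applicable limit. The sketch below uses made-up commodity/pesticide limits purely for illustration; real MRLs come from the regulatory databases mentioned in the text.

```python
# Toy maximum-residue-limit check. The limits below are invented placeholder
# numbers, NOT real Codex/EU/EPA values; consult the official databases.

MRL_MG_PER_KG = {
    ("apple", "captan"): 3.0,
    ("apple", "parathion"): 0.05,
}

def check_sample(commodity: str, pesticide: str, measured_mg_per_kg: float) -> bool:
    """Return True if the measured residue is at or below the applicable MRL."""
    limit = MRL_MG_PER_KG.get((commodity, pesticide))
    if limit is None:
        raise KeyError(f"no MRL on file for {pesticide} on {commodity}")
    return measured_mg_per_kg <= limit

print(check_sample("apple", "captan", 0.8))     # True: below the toy limit
print(check_sample("apple", "parathion", 0.2))  # False: exceeds the toy limit
```

In practice a compliance decision also accounts for measurement uncertainty and sampling protocol, which this comparison deliberately omits.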
A US organic food advocacy group, the Environmental Working Group, is known for creating a list of fruits and vegetables referred to as the Dirty Dozen; it lists produce with the highest number of distinct pesticide residues or the most samples with residue detected in USDA data. This list is generally considered misleading and lacks scientific credibility because it lists detections without accounting for the risk of the usually small amount of each residue with respect to consumer health. In 2016, over 99% of samples of US produce had no pesticide residue or had residue levels well below the EPA tolerance levels for each pesticide. Japan In Japan, pesticide residues are regulated by the Food Safety Act. Pesticide tolerances are set by the Ministry of Health, Labour and Welfare through the Drug and Food Safety Committee. Unlisted residue amounts are restricted to 0.01 ppm. China In China, the Ministry of Health and the Ministry of Agriculture have jointly established mechanisms and working procedures relating to maximum residue limit standards, while updating them continuously, according to the food safety law and regulations issued by the State Council. From GB25193-2010 to GB28260-2011, from Maximum Residue Limits for 12 Pesticides to 85 pesticides, they have improved the standards in response to Chinese national needs. Health impacts Accidental or inadvertent poisoning of agricultural workers due to exposure to pesticides is a very serious matter resulting in many deaths and hospitalizations. The effects of pesticides at high concentrations on human health are thus a matter of much study, resulting in many publications on the toxicology of pesticides. However, the maximum residue limits of pesticides in food are low, and are carefully set by the authorities to ensure, to their best judgement, no health impacts. According to the American Cancer Society there is no evidence that pesticide residues increase the risk of people getting cancer.
The American Cancer Society also advises washing fruit and vegetables before eating to remove both pesticide residue and other undesirable contaminants. There are many studies on the health differences between consumers of organic foods and consumers of conventionally grown foods. When the American Academy of Pediatrics reviewed the literature on organic foods in 2012, they found that "current evidence does not support any meaningful nutritional benefits or deficits from eating organic compared with conventionally grown foods, and there are no well-powered human studies that directly demonstrate health benefits or disease protection as a result of consuming an organic diet." Chinese incidents In China, a number of incidents have occurred where state limits were exceeded by large amounts or where the wrong pesticide was used. In August 1994, a serious incident of pesticide poisoning of sweet potato crops occurred in Shandong province, China. Because local farmers were not fully educated in the use of insecticides, they used the highly toxic pesticide parathion instead of trichlorphon. It resulted in over 300 cases of poisoning and 3 deaths. Also, there was a case where a large number of students were poisoned and 23 of them were hospitalized because of vegetables that contained excessive pesticide residues. Child neurodevelopment Many pesticides achieve their intended use of killing pests by disrupting the nervous system. Due to similarities in brain biochemistry among many different organisms, there is much speculation that these chemicals can have a negative impact on humans as well. Children are especially vulnerable to exposure to pesticides, especially at critical windows of development. Infants and children consume higher amounts of food relative to their body weight, and have a more permeable blood–brain barrier, both of which can contribute to increased risks from exposure to pesticide residues.
However, in 2008 the OECD reported that the existing guideline represents the best available science for assessing the potential for developmental neurotoxicity in human health risk assessment. See also Child development Dose–response relationship Environmental effects of pesticides Environmental issues with agriculture Food safety List of environmental issues Pesticide poisoning QuEChERS - method for testing pesticide residues References External links WHO fact sheet on pesticide residues in food The European Pesticide Residue Workshop Pesticide residue in Europe International Maximum Residue Level Database US EPA Pesticide Chemical Search CODEX Alimentarius International Food Standards Pesticides and Food: What the Pesticide Residue Limits are on Food Pesticides Soil contamination Food safety Food and the environment Environmental impact of agriculture
Pesticide residue
[ "Chemistry", "Biology", "Environmental_science" ]
1,964
[ "Toxicology", "Pesticides", "Environmental chemistry", "Soil contamination", "Biocides" ]
3,148,619
https://en.wikipedia.org/wiki/Fort%20Vaux
Fort Vaux (), in Vaux-devant-Damloup, Meuse, France, was a polygonal fort forming part of the ring of 19 large defensive works intended to protect the city of Verdun. Built from 1881 to 1884 for 1,500,000 francs, it housed a garrison of 150 men. Vaux was the second fort to fall in the Battle of Verdun after Fort Douaumont, which was captured by a small German raiding party in February 1916 in the confusion of the French retreat from the Woëvre plain. Vaux had been modernised before 1914 with reinforced concrete top protection like Fort Douaumont and was not destroyed by German heavy artillery fire, which had included shelling by howitzers. The superstructure of the fort was badly damaged but the garrison, the deep interior corridors and stations were intact when the fort was attacked on 2 June by German stormtroops. The defence of Fort Vaux was marked by the heroism and endurance of the garrison, including Major Sylvain-Eugène Raynal. Under his command, the French garrison repulsed German assaults, including fighting underground from barricades inside the corridors, in the first big engagement inside a fort during the First World War. The last men of the French garrison gave up after running out of water (some of which was poisoned), ammunition, medical supplies and food. Raynal sent several messages by homing pigeon (including Le Vaillant), requesting relief for his soldiers. In his last message, Raynal wrote "This is my last pigeon". After the surrender of the garrison on 7 June, Crown Prince Wilhelm, the commander of the 5th Army, presented Major Raynal with a French officer's sword as a sign of respect. Raynal and his soldiers remained in captivity in Germany until the Armistice of 11 November 1918. The fort was recaptured by French infantry on 2 November 1916 after an artillery bombardment involving two long-range railway guns. After its recapture, Fort Vaux was repaired and garrisoned.
Several underground galleries were dug to reach far outside the fort, one of them being long, the water reserve was quadrupled and light was provided by two electric generators. Some damage from the fighting on 2 June can still be seen. The underground installations of the fort are well preserved and are open to the public for guided visits. Battle of Verdun 11 September 1914, the 75 mm turret fires 22 rounds at a German detachment in the . 18 February 1915, the fort is bombarded for the first time by twelve 420 mm rounds which cause little damage. End of 1915, disarmament of the fort is carried out to send the guns and ammunition to the front line. The four 75 mm guns are removed from the casemates, leaving only the two in the turret. In January 1916, enough gunpowder is stored for the possible destruction of the fort in case of an enemy approach. From 21 to 26 February 1916, the fort is bombarded with shells of all sizes including 129 heavy shells. Pillboxes and armoured observatories are damaged and the gallery leading to the 75 mm turret is cut. Late February – early March 1916, the fort is frequently bombarded and the 75 mm turret is destroyed accidentally by heavy shells that cause the demolition explosives within to detonate. 14 May 1916, Commandant Raynal takes command of the fort, which has no artillery. 1 June 1916, the Germans begin preparations to enter the fort through the . They cannot be stopped due to the fort having no artillery. 2 and 3 June 1916, German troops led by Kurt Rackow attack the fort with flame throwers and force French troops outside to retreat into the fort. The Germans penetrate the fort through the coffers of the counterscarp (). 5 June 1916, Commandant Raynal requests the French army to bomb the fort, the top of which is occupied by the Germans, to allow part of the garrison to evacuate the fort. 7 June 1916, the water supply has been exhausted for three days and fighting takes place inside the galleries with grenades, guns and bayonets.
Commander Raynal is captured by the Germans under military honours for having fought bravely in extreme conditions with a thirsty garrison. From 8 June to 1 November 1916, the fort is used by the Germans as a shelter and command post for the area. The French attempt to retake the fort several times with enormous loss of life. They bombard the fort to destroy it with heavy shells, including super-heavy 400 mm rounds but the concrete walls resist. Life inside the structure becomes impossible and the Germans eventually abandon the fort at the end of October. 2 November 1916, the fort is recaptured without resistance by a French patrol which finds it empty. By the end of the battle, in December 1916, the fort is almost in the same condition as it was in June, except for some damage caused by French artillery. 1916–1918, the are rehabilitated before being rearmed; an observatory and an armoured command bunker are equipped with machine-guns. Further defences including machine-guns are fitted in place of the 75 mm turret, to defend the area between the ravine and the village of Dieppe-sous-Douaumont. Exits and entrances of the fort are equipped with masonry baffles, machine-guns and grenade launcher chutes. A network of tunnels long is dug beneath the fort and generators are used for lighting and ventilation. Footnotes References Further reading External links Les forts Séré de rivières Le fort de Vaux Memoirs & Diaries: Account of the assaults upon Fort Vaux, Verdun, June 1916 1884 establishments in France Military installations established in the 1880s 19th-century fortifications Battle of Verdun Buildings and structures in Meuse (department) Museums in Meuse (department) Séré de Rivières system World War I museums in France Wilhelm, German Crown Prince
Fort Vaux
[ "Engineering" ]
1,190
[ "Séré de Rivières system", "Fortification lines" ]
3,148,706
https://en.wikipedia.org/wiki/Permeameter
The permeameter is an instrument for rapidly measuring the magnetic permeability of samples of iron or steel with sufficient accuracy for many commercial purposes. The name was first applied by Silvanus P. Thompson to an apparatus devised by himself in 1890, which indicates the mechanical force required to detach one end of the sample, arranged as the core of a straight electromagnet, from an iron yoke of special form; when this force is known, the permeability can be easily calculated. References External links www.glossary.oilfield.slb.com/en/Terms/p/permeameter.aspx Measuring instruments
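The "easily calculated" step can be sketched as follows. Assuming the detachment force obeys the standard electromagnet tractive-force formula F = B²A/(2μ₀), which is a reasonable model for Thompson's apparatus though the text does not spell out his exact procedure, the measured pull gives the flux density B, and dividing by the applied magnetizing field H gives the permeability. The force, area and field values below are hypothetical readings.

```python
import math

# Hypothetical worked example for a Thompson-style permeameter. Assumes the
# detachment force obeys the tractive-force formula F = B^2 * A / (2 * mu0).

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def flux_density_from_pull(force_n: float, area_m2: float) -> float:
    """Flux density B (tesla) inferred from the pull needed to detach the sample."""
    return math.sqrt(2 * MU0 * force_n / area_m2)

def relative_permeability(b_tesla: float, h_a_per_m: float) -> float:
    """Relative permeability mu_r = B / (mu0 * H)."""
    return b_tesla / (MU0 * h_a_per_m)

# Invented reading: 80 N pull on a 1 cm^2 pole face at H = 1000 A/m
B = flux_density_from_pull(80.0, 1e-4)
print(round(B, 3))                              # B in tesla, about 1.4 T
print(round(relative_permeability(B, 1000.0)))  # dimensionless mu_r, order 10^3
```

The resulting μ_r on the order of a thousand is plausible for soft iron, which is consistent with the instrument being intended for commercial-grade iron and steel samples.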
Permeameter
[ "Materials_science", "Technology", "Engineering" ]
141
[ "Materials science stubs", "Electromagnetism stubs", "Measuring instruments" ]
3,148,908
https://en.wikipedia.org/wiki/Lacticaseibacillus%20rhamnosus
Lacticaseibacillus rhamnosus (previously Lactobacillus rhamnosus) is a bacterium that originally was considered to be a subspecies of L. casei, but genetic research found it to be a separate species in the L. casei clade, which also includes L. paracasei and L. zeae. It is a short Gram-positive homofermentative facultative anaerobic non-spore-forming rod that often appears in chains. Some strains of L. rhamnosus bacteria are being used as probiotics, and are particularly useful in treating infections of the female urogenital tract, including difficult-to-treat cases of bacterial vaginosis (BV). The species Lacticaseibacillus rhamnosus and Limosilactobacillus reuteri are commonly found in the healthy female genito-urinary tract and are helpful in regaining control of dysbiotic bacterial overgrowth during an active infection. L. rhamnosus is sometimes used in dairy products such as fermented milk and as a non-starter lactic acid bacterium (NSLAB) in long-ripened cheese. While frequently considered a beneficial organism, L. rhamnosus may not be as beneficial to certain subsets of the population; in rare circumstances, primarily those involving a weakened immune system, or in infants, it may cause endocarditis. Despite the rare infections caused by L. rhamnosus, the species is included in the list of bacterial species with qualified presumption of safety (QPS) status of the European Food Safety Authority. Genome Lacticaseibacillus rhamnosus is considered a nomadic organism and strains have been isolated from many different environments including the vagina and the gastrointestinal tract. L. rhamnosus strains have the capacity for strain-specific gene functions that are required to adapt to a large range of environments. Its core genome contains 2,164 genes, out of 4,711 genes in total (the pan-genome).
The accessory genome is dominated by genes encoding carbohydrate transport and metabolism, extracellular polysaccharides, biosynthesis, bacteriocin production, pili production, the CRISPR-Cas system with its clustered regularly interspaced short palindromic repeat (CRISPR) loci, and more than 100 transporter functions and mobile genetic elements such as phages, plasmid genes, and transposons. The genome of the strain L. rhamnosus LRB, isolated from a human baby tooth, consists of a circular chromosome of 2,934,954 bp with 46.78% GC content. This genome contains 2,749 genes in total, of which 2,672 are protein-coding sequences. This sample did not contain any plasmids. The most extensively studied strain, L. rhamnosus GG, a gut isolate, consists of a genome of 3,010,111 bp. Therefore, the LRB genome is shorter than GG’s genome. LRB lacks the spaCBA gene cluster of GG and is not expected to produce functional pili. This difference may help explain why each strain lives in a different habitat. Lacticaseibacillus rhamnosus GG (ATCC 53103) Lacticaseibacillus rhamnosus GG (ATCC 53103) is a strain of L. rhamnosus that was isolated in 1983 from the intestinal tract of a healthy human being; filed for a patent on 17 April 1985, by Sherwood Gorbach and Barry Goldin, the 'GG' derives from the first letters of their surnames. The patent refers to a strain of "L. acidophilus GG" with American Type Culture Collection (ATCC) accession number 53103; later reclassified as a strain of L. rhamnosus. The patent claims the L. rhamnosus GG (ATCC 53103) strain is acid- and bile-stable, has a great avidity for human intestinal mucosal cells, and produces lactic acid. Since the discovery of the L. rhamnosus GG (ATCC 53103) strain, it has been studied extensively for its various health benefits, and currently the L. rhamnosus GG (ATCC 53103) strain is the world's most studied probiotic bacterium with more than 800 scientific studies.
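The quoted genome statistics (2,934,954 bp at 46.78% GC for strain LRB) rest on a simple calculation that can be reproduced in a few lines; the helper below computes GC content for any sequence, using a tiny made-up fragment rather than the real chromosome.

```python
# Sanity-check helper for the genome figures quoted above. The sequence used
# here is a toy made-up fragment, not the actual L. rhamnosus LRB chromosome.

def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGCGGCCATTA"))  # 50.0 for this toy fragment

# Reported for strain LRB: 2,934,954 bp at 46.78% GC,
# i.e. roughly this many G+C bases in the chromosome:
print(round(2_934_954 * 0.4678))
```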
The genome sequence of Lactobacillus rhamnosus GG (ATCC 53103) was decoded in 2009. History In 1983, L. rhamnosus GG was isolated from the intestinal tract of a healthy human by Sherwood Gorbach and Barry Goldin. Medical research and use While L. rhamnosus GG (ATCC 53103) is able to survive the acid and bile of the stomach and intestine, and is claimed to colonize the digestive tract and balance the intestinal microbiota, evidence suggests that L. rhamnosus, comparable to virtually all probiotic lactobacilli, is only a transient inhabitant and not autochthonous. Lactobacillus rhamnosus GG binds to the gut mucosa. These features make it a favorable organism for the investigation of probiotic supplementation as a potential treatment for a variety of disease states. Diarrhea Lacticaseibacillus rhamnosus GG is beneficial in the prevention of rotavirus diarrhea in children. Prevention and treatment of various types of diarrhea have been shown in children and in adults. L. rhamnosus GG can be beneficial in the prevention of antibiotic-associated diarrhea and nosocomial diarrhea and this has been recently supported by European guidelines. Lactobacillus rhamnosus GG may reduce the risk of traveler's diarrhea. Acute gastroenteritis A position paper published by the ESPGHAN Working Group for Probiotics and Prebiotics based on a systematic review and randomized controlled trials (RCTs) suggested that L. rhamnosus GG (low quality of evidence, strong recommendation) may be considered in the management of children with acute gastroenteritis in addition to rehydration therapy. Atopic dermatitis, eczema Lacticaseibacillus rhamnosus GG has been found to be ineffective for treating eczema. However, in one non-randomized clinical observation dealing with resistant childhood atopic eczema, a substantial improvement in quality of life was reported in pediatric patients given Lactobacillus rhamnosus as a supplement.
Risks The use of L. rhamnosus GG for probiotic therapy has been linked with rare cases of sepsis in certain risk groups, primarily those with a weakened immune system and infants. Ingestion of GG is considered to be safe, and data show that a significant growth in the consumption of L. rhamnosus GG at the population level did not lead to an increase in Lactobacillus bacteraemia cases. Lacticaseibacillus rhamnosus GR-1 Lacticaseibacillus rhamnosus GR-1 was originally found in the urethra of a healthy female and is nowadays a model strain for vaginal probiotics. A genome comparison between L. rhamnosus GG and L. rhamnosus GR-1 shows that GR-1 lacks spaCBA-encoded pili, an important adhesin in L. rhamnosus GG adhesion to the intestinal epithelial cells. In contrast, L. rhamnosus GR-1 utilises lectin-like proteins to attach to carbohydrates on the surface of the target cell. Lectin-like proteins preferentially bind to nonkeratinized stratified squamous cells which are found in the urethra and vagina. The lectin-like protein 1 purified from L. rhamnosus GR-1 is found to prevent infection by the uropathogenic E. coli UTI89 by inhibiting its adhesion to epithelial cells and by disrupting its biofilm formation. Additionally, it can increase biofilm formation in other beneficial lactobacilli that inhabit the vagina. References Further reading External links Type strain of Lactobacillus rhamnosus at BacDive - the Bacterial Diversity Metadatabase Digestive system Probiotics Lactobacillaceae Bacteria described in 1989
Lacticaseibacillus rhamnosus
[ "Biology" ]
1,812
[ "Digestive system", "Organ systems" ]
3,148,927
https://en.wikipedia.org/wiki/Major%20actinide
Major actinides is a term used in the nuclear power industry that refers to the isotopes of plutonium (239Pu), uranium (235U, 238U) and thorium (232Th) present in nuclear fuel, as opposed to the minor actinides (neptunium, americium, curium, berkelium, and californium) and the other actinide isotopes present in the fuel. References
Major actinide
[ "Physics" ]
94
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
3,148,933
https://en.wikipedia.org/wiki/IUPAC%20nomenclature%20of%20inorganic%20chemistry
In chemical nomenclature, the IUPAC nomenclature of inorganic chemistry is a systematic method of naming inorganic chemical compounds, as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in Nomenclature of Inorganic Chemistry (which is informally called the Red Book). Ideally, every inorganic compound should have a name from which an unambiguous formula can be determined. There is also an IUPAC nomenclature of organic chemistry. System The names "caffeine" and "3,7-dihydro-1,3,7-trimethyl-1H-purine-2,6-dione" both signify the same chemical compound. The systematic name encodes the structure and composition of the caffeine molecule in some detail, and provides an unambiguous reference to this compound, whereas the name "caffeine" simply names it. These advantages make the systematic name far superior to the common name when absolute clarity and precision are required. However, for the sake of brevity, even professional chemists will use the non-systematic name almost all of the time, because caffeine is a well-known common chemical with a unique structure. Similarly, H2O is most often simply called water in English, though other chemical names do exist. Single atom anions are named with an -ide suffix: for example, H− is hydride. Compounds with a positive ion (cation): The name of the compound is simply the cation's name (usually the same as the element's), followed by the anion. For example, NaCl is sodium chloride, and CaF2 is calcium fluoride. Cations of transition metals able to take multiple charges are labeled with Roman numerals in parentheses to indicate their charge. For example, Cu+ is copper(I), Cu2+ is copper(II). An older, deprecated notation is to append -ous or -ic to the root of the Latin name to name ions with a lesser or greater charge. Under this naming convention, Cu+ is cuprous and Cu2+ is cupric. For naming metal complexes see the page on complex (chemistry). 
Oxyanions (polyatomic anions containing oxygen) are named with -ite or -ate, for a lesser or greater quantity of oxygen, respectively. For example, NO2− is nitrite, while NO3− is nitrate. If four oxyanions are possible, the prefixes hypo- and per- are used: hypochlorite is ClO−, perchlorate is ClO4−. The prefix bi- is a deprecated way of indicating the presence of a single hydrogen ion, as in "sodium bicarbonate" (NaHCO3). The modern method specifically names the hydrogen atom. Thus, NaHCO3 would be pronounced sodium hydrogen carbonate. Positively charged ions are called cations and negatively charged ions are called anions. The cation is always named first. Ions can be metals, non-metals or polyatomic ions. Therefore, the name of the metal or positive polyatomic ion is followed by the name of the non-metal or negative polyatomic ion. The positive ion retains its element name whereas for a single non-metal anion the ending is changed to -ide. Example: sodium chloride, potassium oxide, or calcium carbonate. When the metal has more than one possible ionic charge or oxidation number the name becomes ambiguous. In these cases the oxidation number (the same as the charge) of the metal ion is represented by a Roman numeral in parentheses immediately following the metal ion name. For example, in uranium(VI) fluoride the oxidation number of uranium is 6. Another example is the iron oxides. FeO is iron(II) oxide and Fe2O3 is iron(III) oxide. An older system used prefixes and suffixes to indicate the oxidation number, according to the following scheme: Thus the four oxyacids of chlorine are called hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2) and perchloric acid (HOClO3), and their respective conjugate bases are hypochlorite, chlorite, chlorate and perchlorate ions.
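The hypo-/-ite/-ate/per- pattern for the four chlorine oxyanions described above can be captured in a small lookup; a minimal sketch:

```python
# Minimal lookup for the chlorine oxyanion series:
# oxygen count -> hypo-/-ite/-ate/per- name.

CHLORINE_OXYANIONS = {
    1: "hypochlorite",  # ClO-
    2: "chlorite",      # ClO2-
    3: "chlorate",      # ClO3-
    4: "perchlorate",   # ClO4-
}

for n_oxygen, name in CHLORINE_OXYANIONS.items():
    print(n_oxygen, name)
```

The same four-step pattern applies to the corresponding oxyacids (hypochlorous, chlorous, chloric, perchloric), so the table generalizes by swapping the suffixes.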
This system has partially fallen out of use, but survives in the common names of many chemical compounds: the modern literature contains few references to "ferric chloride" (instead calling it "iron(III) chloride"), but names like "potassium permanganate" (instead of "potassium manganate(VII)") and "sulfuric acid" abound. Traditional naming Simple ionic compounds An ionic compound is named by its cation followed by its anion. See polyatomic ion for a list of possible ions. For cations that take on multiple charges, the charge is written using Roman numerals in parentheses immediately following the element name. For example, Cu(NO3)2 is copper(II) nitrate, because the charge of two nitrate ions () is 2 × −1 = −2, and since the net charge of the ionic compound must be zero, the Cu ion has a 2+ charge. This compound is therefore copper(II) nitrate. In the case of cations with a +4 oxidation state, the only acceptable format for the Roman numeral 4 is IV and not IIII. The Roman numerals in fact show the oxidation number, but in simple ionic compounds (i.e., not metal complexes) this will always equal the ionic charge on the metal. For a simple overview see , for more details see selected pages from IUPAC rules for naming inorganic compounds . List of common ion names Monatomic anions: chloride sulfide phosphide Polyatomic ions: ammonium hydronium nitrate nitrite hypochlorite chlorite chlorate perchlorate sulfite sulfate thiosulfate hydrogen sulfite (or bisulfite) hydrogen carbonate (or bicarbonate) carbonate phosphate hydrogen phosphate dihydrogen phosphate chromate dichromate borate arsenate oxalate cyanide thiocyanate permanganate Hydrates Hydrates are ionic compounds that have absorbed water. They are named as the ionic compound followed by a numerical prefix and -hydrate. 
The numerical prefixes used are listed below (see IUPAC numerical multiplier): mono-, di-, tri-, tetra-, penta-, hexa-, hepta-, octa-, nona-, deca-. For example, CuSO4·5H2O is "copper(II) sulfate pentahydrate". Molecular compounds Inorganic molecular compounds are named with a prefix (see list above) before each element. The more electronegative element is written last and with an -ide suffix. For example, H2O (water) can be called dihydrogen monoxide. Organic molecules do not follow this rule. In addition, the prefix mono- is not used with the first element; for example, SO2 is sulfur dioxide, not "monosulfur dioxide". Sometimes prefixes are shortened when the ending vowel of the prefix "conflicts" with a starting vowel in the compound. This makes the name easier to pronounce; for example, CO is "carbon monoxide" (as opposed to "monooxide"). Common exceptions The "a" of the penta- prefix is not dropped before a vowel. As the IUPAC Red Book 2005 page 69 states, "The final vowels of multiplicative prefixes should not be elided (although 'monoxide', rather than 'monooxide', is an allowed exception because of general usage)." There are a number of exceptions and special cases that violate the above rules. Sometimes the prefix is left off the initial atom: I2O5 is known as iodine pentaoxide, but it should be called diiodine pentaoxide. N2O3 is called nitrogen sesquioxide (sesqui- means one and a half). The main oxide of phosphorus is called phosphorus pentaoxide. It should actually be diphosphorus pentaoxide, but it is assumed that there are two phosphorus atoms (P2O5), as they are needed in order to balance the oxidation numbers of the five oxygen atoms. However, people have known for years that the real form of the molecule is P4O10, not P2O5, yet it is not normally called tetraphosphorus decaoxide. In writing formulas, ammonia is NH3 even though nitrogen is more electronegative (in line with the convention used by IUPAC as detailed in Table VI of the red book).
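The prefix rules above (mono- dropped on the first element, "monooxide" elided to "monoxide", the "a" of penta- retained) can be sketched as a small function. This is a hedged illustration: the helper and its argument format are invented for this example, while the rules it encodes follow the text.

```python
# Sketch of prefix naming for binary molecular compounds, following the
# rules described above: mono- is omitted on the first element, and only
# "monooxide" is elided to "monoxide" (the allowed exception); the "a"
# of penta- is NOT dropped before a vowel, hence "pentaoxide".

PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
            6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}

def binary_name(elem1, count1, elem2_ide, count2):
    first = elem1 if count1 == 1 else PREFIXES[count1] + elem1
    prefix = PREFIXES[count2]
    if prefix == "mono" and elem2_ide.startswith("o"):
        prefix = "mon"  # "monooxide" -> "monoxide", per general usage
    return f"{first} {prefix}{elem2_ide}"

print(binary_name("sulfur", 1, "oxide", 2))      # sulfur dioxide
print(binary_name("carbon", 1, "oxide", 1))      # carbon monoxide
print(binary_name("phosphorus", 2, "oxide", 5))  # diphosphorus pentaoxide
```

Note that the last call produces the systematic "diphosphorus pentaoxide" rather than the common "phosphorus pentaoxide", matching the exceptions discussed above.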
Likewise, methane is written as CH4 even though carbon is more electronegative (Hill system). Nomenclature of Inorganic Chemistry Nomenclature of Inorganic Chemistry, commonly referred to by chemists as the Red Book, is a collection of recommendations on IUPAC nomenclature, published at irregular intervals by the IUPAC. The last full edition was published in 2005, in both paper and electronic versions. See also IUPAC nomenclature IUPAC nomenclature of organic chemistry List of inorganic compounds Water of crystallization IUPAC nomenclature of inorganic chemistry 2005 (the Red Book) Nomenclature of Organic Chemistry (the Blue Book) Quantities, Units and Symbols in Physical Chemistry (the Green Book) Compendium of Chemical Terminology (the Gold Book) Compendium of Analytical Nomenclature (the Orange Book) References External links IUPAC website - Nomenclature IUPAC (old site) Red Book IUPAC (old site) Red Book - PDF (2005 Recommendations) Recommendations 2000-Red Book II (incomplete) IUPAC (old site) Nomenclature Books Series (commonly known as the "Colour Books") ChemTeam Highschool Tutorial Chemical nomenclature Inorganic chemistry Chemistry reference works
IUPAC nomenclature of inorganic chemistry
[ "Chemistry" ]
2,056
[ "nan" ]
3,149,297
https://en.wikipedia.org/wiki/Locative%20media
Locative media or location-based media (LBM) is a virtual medium of communication functionally bound to a location. The physical implementation of locative media, however, is not bound to the same location to which the content refers. Location-based media delivers multimedia and other content directly to the user of a mobile device dependent upon their location. Location information determined by means such as mobile phone tracking and other emerging real-time locating system technologies like Wi-Fi or RFID can be used to customize media content presented on the device. Locative media are digital media applied to real places that thus trigger real social interactions. While mobile technologies such as the Global Positioning System (GPS), laptop computers and mobile phones enable locative media, they are not the goal for the development of projects in this field. Description Media content is managed and organized externally to the device on a standard desktop, laptop, server, or cloud computing system. The device then downloads this formatted content with GPS or other RTLS coordinate-based triggers applied to each media sequence. As the location-aware device enters the selected area, centralized services trigger the assigned media, designed to be of optimal relevance to the user and their surroundings. Use of locative technologies "includes a range of experimental uses of geo-technologies including location-based games, artistic critique of surveillance technologies, experiential mapping, and spatial annotation." Location-based media allows for the enhancement of any given environment, offering explanation, analysis and detailed commentary on what the user is looking at through a combination of video, audio, images and text. The location-aware device can deliver interpretation of cities, parklands, heritage sites, sporting events or any other environment where location-based media is required.
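The trigger mechanism described above can be sketched as a simple geofence check. This is a minimal illustration under stated assumptions: the item names, coordinates, and radii are invented, and a real RTLS service would add caching, hysteresis, and privacy controls on top of the raw distance test.

```python
# Minimal geofence sketch of the trigger logic described above: media
# items are tagged with GPS coordinates and a radius, and a device
# position inside that radius triggers the item. The haversine formula
# gives great-circle distance; all names and coordinates are illustrative.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def triggered(media_items, device_lat, device_lon):
    """Return the names of all media items whose geofence contains the device."""
    return [m["name"] for m in media_items
            if haversine_m(device_lat, device_lon, m["lat"], m["lon"]) <= m["radius_m"]]

items = [{"name": "heritage-site-audio", "lat": 51.5007, "lon": -0.1246, "radius_m": 150},
         {"name": "park-video", "lat": 51.5074, "lon": -0.1657, "radius_m": 200}]
print(triggered(items, 51.5008, -0.1245))  # ['heritage-site-audio']
```

The device position here is a few metres from the first item and kilometres from the second, so only the first fires.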
The content production and pre-production are integral to the overall experience that is created, and must be performed with careful consideration of the location and the user's position within that location. The media offers a depth to the environment beyond that which is immediately apparent, allowing revelations about background, history and current topical feeds. Locative, ubiquitous and pervasive computing The term 'locative media' was coined by Karlis Kalnins. Locative media is closely related to augmented reality (reality overlaid with virtual reality) and pervasive computing (computers everywhere, as in ubiquitous computing). Whereas augmented reality strives for technical solutions, and pervasive computing is interested in embedded computers, locative media concentrates on social interaction with a place and with technology. Many locative media projects have a social, critical or personal (memory) background. Strictly speaking, any kind of link to additional information set up in space (together with the information that a specific place supplies) would constitute location-dependent media, but the term locative media is reserved for technical projects. Locative media works on locations, and yet many of its applications are still location-independent in a technical sense. As in the case of digital media, where the content rather than the medium itself is digital, in locative media the medium itself might not be location-oriented, whereas the content is location-oriented. Japanese mobile phone culture embraces location-dependent information and context-awareness. It is projected that in the near future locative media will develop into a significant factor in everyday life. Enabling technologies Locative media projects use technology such as the Global Positioning System (GPS), laptop computers, the mobile phone, Geographic Information Systems (GIS), and web map services such as Mapbox, OpenStreetMap, and Google Maps, among others.
Whereas GPS allows for the accurate detection of a specific location, mobile computers allow interactive media to be linked to this place. The GIS supplies arbitrary information about the geological, strategic or economic situation of a location. Web maps like Google Maps give a visual representation of a specific place. Another important new technology that links digital data to a specific place is radio-frequency identification (RFID), a successor to barcodes like Semacode. Research that contributes to the field of locative media happens in fields such as pervasive computing, context awareness and mobile technology. The technological background of locative media is sometimes referred to as "location-aware computing". Creative representation Place is often seen as central to creativity; in fact, "for some—regional artists, citizen journalists and environmental organizations for example—a sense of place is a particularly important aspect of representation, and the starting point of conversations." Locative media can propel such conversations in its function as a "poetic form of data visualization," as its output often traces how people move in, and by proxy, make sense of, urban environments. Given the dynamism and hybridity of cities and the networks which comprise them, locative media extends the internet landscape to physical environments where people forge social relations and actions which can be "mobile, plural, differentiated, adventurous, innovative, but also estranged, alienated, impersonalized." Moreover, in using locative technologies, users can expand how they communicate and assert themselves in their environment and, in doing so, explore this continuum of urban interactions. Furthermore, users can assume a more active role in constructing the environments they are situated in accordingly. 
In turn, artists have been intrigued with locative media as a means of "user-led mapping, social networking and artistic interventions in which the fabric of the urban environment and the contours of the earth become a 'canvas.'" Such projects demystify how resident behaviors in a given city contribute to the culture and sense of personality that cities are often perceived to take on. Design scholars Anne Galloway and Matthew Ward state that "various online lists of pervasive computing and locative media projects draw out the breadth of current classification schema: everything from mobile games, place-based storytelling, spatial annotation and networked performances to device-specific applications." A prominent use of locative media is in locative art. A sub-category of interactive art or new media art, locative art explores the relationships between the real world and the virtual or between people, places or objects in the real world. Examples Notable locative media projects include Bio Mapping by Christian Nold in 2004, locative art projects such as the SpacePlace ZKM/ZKMax bluecasting and participatory urban media access in Munich in 2005 and Britglyph by Alfie Dennen in 2009, and location-based games such as AR Quake by the Wearable Computer Lab at the University of South Australia and Can You See Me Now? in 2001 by Blast Theory in collaboration with the Mixed Reality Lab at the University of Nottingham. In 2005, the Silicon Valley–based collaborators of C5 first exhibited the C5 Landscape Initiative, a suite of four GPS inspired projects that investigate perception of landscape in light of locative media. In William Gibson's 2007 novel Spook Country, locative art is one of the main themes and set pieces in the story. Narrative projects which engage with locative media are sometimes referred to as Location-Aware Fiction, as explored in "Data and Narrative: Location Aware Fiction" a 2003 essay by Kate Armstrong. 
This location-aware fiction is also known as locative literature, where locative stories and poems can be experienced via digital portals, apps, QR codes and e-books, as well as via analogue forms such as labelling tape, Scrabble tiles, fridge magnets or Post-It notes, forms often used by the writer and artist Matt Blackwood. The Transborder Immigrant Tool by the Electronic Disturbance Theater is a locative media project aimed at providing life-saving directions to water for people trying to cross the US / Mexico border. The project attracted global media attention in 2009 and 2010. Articles included a Los Angeles Times cover story focusing on Ricardo Dominguez and an AP story interviewing Micha Cárdenas and Brett Stalbaum. The articles focused on concerns over the legality of the project and the ensuing investigations of the group, which are still underway. The Transborder Immigrant Tool has recently been included in a number of major exhibitions including Here, Not There at the Museum of Contemporary Art San Diego and the 2010 California Biennial at the Orange County Museum of Art. Invisible Threads by Stephanie Rothenberg and Jeff Crouse is a locative media project aimed at creating embodied awareness of sweatshops and just-in-time production through a virtual sweatshop in Second Life. It was performed at the Sundance Film Festival in 2008. The first mobile game to combine location-based data with augmented reality was Paranormal Activity: Sanctuary, published by Ogmento in February 2011. Inspired by the hit film franchise, the game offered a unique setting in which the player's home town, office or neighborhood became the front lines of a supernatural conflict. An application called Yesterscape was released for iPhone by the Japanese company QOOQ Inc. in 2013. This augmented reality camera app can show historical photos of a place, as if the user were looking through a time tunnel.
QOOQ Inc. also lets users add their own historical photos via a web interface, to be displayed through Yesterscape. In December 2022, virtual band Gorillaz teamed up with Nexus Studios to create the "Gorillaz Presents" app, which enables users to use Google's ARCore Geospatial API to watch a virtual performance of the band's song "Skinny Ape" at Times Square, New York and Piccadilly Circus, London. See also Location-based service Location-based game Mobile media Soundmap Spatial contextual awareness Urban informatics Virtual graffiti Location-based software (category) References External links Mobile technology articles on the MediaShift Idea Lab Digital media Geomarketing
Locative media
[ "Technology" ]
2,006
[ "Multimedia", "Digital media" ]
3,149,443
https://en.wikipedia.org/wiki/Hecke%20character
In number theory, a Hecke character is a generalisation of a Dirichlet character, introduced by Erich Hecke to construct a class of L-functions larger than Dirichlet L-functions, and a natural setting for the Dedekind zeta-functions and certain others which have functional equations analogous to that of the Riemann zeta-function. Definition A Hecke character is a character of the idele class group of a number field or global function field. It corresponds uniquely to a character of the idele group which is trivial on principal ideles, via composition with the projection map. This definition depends on the definition of a character, which varies slightly between authors: It may be defined as a homomorphism to the non-zero complex numbers (also called a "quasicharacter"), or as a homomorphism to the unit circle in C ("unitary"). Any quasicharacter (of the idele class group) can be written uniquely as a unitary character times a real power of the norm, so there is no big difference between the two definitions. The conductor of a Hecke character χ is the largest ideal m such that χ is a Hecke character mod m. Here we say that χ is a Hecke character mod m if χ (considered as a character on the idele group) is trivial on the group of finite ideles whose every v-adic component lies in 1 + mOv. Größencharakter A Größencharakter (often written Grössencharakter, Grossencharacter, etc.), origin of a Hecke character, going back to Hecke, is defined in terms of a character on fractional ideals. For a number field K, let m = mfm∞ be a K-modulus, with mf, the "finite part", being an integral ideal of K and m∞, the "infinite part", being a (formal) product of real places of K. 
Let Im denote the group of fractional ideals of K relatively prime to mf and let Pm denote the subgroup of principal fractional ideals (a) where a is near 1 at each place of m in accordance with the multiplicities of its factors: for each finite place v in mf, ordv(a − 1) is at least as large as the exponent for v in mf, and a is positive under each real embedding in m∞. A Größencharakter with modulus m is a group homomorphism from Im into the nonzero complex numbers such that on ideals (a) in Pm its value is equal to the value at a of a continuous homomorphism to the nonzero complex numbers from the product of the multiplicative groups of all Archimedean completions of K where each local component of the homomorphism has the same real part (in the exponent). (Here we embed a into the product of Archimedean completions of K using embeddings corresponding to the various Archimedean places on K.) Thus a Größencharakter may be defined on the ray class group modulo m, which is the quotient Im/Pm. Strictly speaking, Hecke made the stipulation about behavior on principal ideals only for those admitting a totally positive generator. So, in terms of the definition given above, he really only worked with moduli where all real places appeared. The role of the infinite part m∞ is now subsumed under the notion of an infinity-type. Relationship between Größencharakter and Hecke character The two notions are essentially the same, and correspond to each other one-to-one. The ideal-theoretic definition is much more complicated than the idelic one, and Hecke's motivation for his definition was to construct L-functions (sometimes referred to as Hecke L-functions) that extend the notion of a Dirichlet L-function from the rationals to other number fields. For a Größencharakter χ, its L-function is defined to be the Dirichlet series L(s, χ) = Σ χ(I) N(I)^(−s), the sum carried out over integral ideals I relatively prime to the modulus m of the Größencharakter. The notation N(I) means the ideal norm.
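For a concrete numerical illustration, take K = Q(i) and the trivial character: the Hecke L-function is then the Dedekind zeta function of Q(i), whose n-th Dirichlet coefficient counts the ideals of Z[i] of norm n. The sketch below checks a standard fact (not specific to this article): that count equals the divisor sum of the nontrivial character mod 4, reflecting the factorisation ζ_K(s) = ζ(s)·L(s, χ4).

```python
# Hedged numeric sketch: for K = Q(i), the n-th coefficient of the
# Dedekind zeta function (the Hecke L-function of the trivial character)
# is the number of ideals of Z[i] of norm n, which equals
# sum over d | n of chi4(d), chi4 the nontrivial character mod 4.

def chi4(d):
    return 0 if d % 2 == 0 else (1 if d % 4 == 1 else -1)

def ideal_count(n):
    return sum(chi4(d) for d in range(1, n + 1) if n % d == 0)

def ideal_count_direct(n):
    # Z[i] is a PID; ideals of norm n correspond to Gaussian integers
    # a + bi with a^2 + b^2 = n, counted up to the four units 1, i, -1, -i.
    sols = sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
               if a * a + b * b == n)
    return sols // 4

assert all(ideal_count(n) == ideal_count_direct(n) for n in range(1, 40))
print([ideal_count(n) for n in range(1, 11)])  # [1, 1, 0, 1, 2, 0, 0, 1, 1, 2]
```

For instance, there are two ideals of norm 5, generated by 2 + i and 2 − i, and none of norm 3, since 3 remains prime in Z[i].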
The common real part condition governing the behavior of Größencharakter on the subgroups Pm implies these Dirichlet series are absolutely convergent in some right half-plane. Hecke proved these L-functions have a meromorphic continuation to the whole complex plane, being analytic except for a simple pole of order 1 at s = 1 when the character is trivial. For primitive Größencharakter (defined relative to a modulus in a similar manner to primitive Dirichlet characters), Hecke showed these L-functions satisfy a functional equation relating the values of the L-function of a character and the L-function of its complex conjugate character. Consider a character ψ of the idele class group, taken to be a map into the unit circle which is 1 on principal ideles and on an exceptional finite set S containing all infinite places. Then ψ generates a character χ of the ideal group IS, the free abelian group on the prime ideals not in S. Take a uniformising element π for each prime p not in S and define a map Π from IS to idele classes by mapping each p to the class of the idele which is π in the p coordinate and 1 everywhere else. Let χ be the composite of Π and ψ. Then χ is well-defined as a character on the ideal group. In the opposite direction, given an admissible character χ of IS there corresponds a unique idele class character ψ. Here admissible refers to the existence of a modulus m based on the set S such that the character χ is 1 on the ideals which are 1 mod m. The characters are 'big' in the sense that the infinity-type when present non-trivially means these characters are not of finite order. The finite-order Hecke characters are all, in a sense, accounted for by class field theory: their L-functions are Artin L-functions, as Artin reciprocity shows. But even a field as simple as the Gaussian field has Hecke characters that go beyond finite order in a serious way (see the example below). 
Later developments in complex multiplication theory indicated that the proper place of the 'big' characters was to provide the Hasse–Weil L-functions for an important class of algebraic varieties (or even motives). Special cases A Dirichlet character is a Hecke character of finite order. It is determined by its values on the set of totally positive principal ideals which are 1 with respect to some modulus m. A Hilbert character is a Dirichlet character of conductor 1. The number of Hilbert characters is the order of the class group of the field. Class field theory identifies the Hilbert characters with the characters of the Galois group of the Hilbert class field. Examples For the field of rational numbers, the idele class group is isomorphic to the product of the positive reals with all the unit groups of the p-adic integers. So a quasicharacter can be written as a product of a power of the norm with a Dirichlet character. A Hecke character χ of the Gaussian integers of conductor 1 is of the form χ((a)) = |a|^s (a/|a|)^(4n) for s imaginary and n an integer, where a is a generator of the ideal (a). The only units are powers of i, so the factor of 4 in the exponent ensures that the character is well defined on ideals. Tate's thesis Hecke's original proof of the functional equation for L(s, χ) used an explicit theta-function. John Tate's 1950 Princeton doctoral dissertation, written under the supervision of Emil Artin, applied Pontryagin duality systematically to remove the need for any special functions. A similar theory was independently developed by Kenkichi Iwasawa; it was the subject of his 1950 ICM talk. A later reformulation in a Bourbaki seminar showed that parts of Tate's proof could be expressed by distribution theory: the space of distributions (for Schwartz–Bruhat test functions) on the adele group of K transforming under the action of the ideles by a given χ has dimension 1.
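The well-definedness claim in the Gaussian example above, that the factor 4n makes χ independent of the choice of generator, can be checked numerically with complex arithmetic. A small sketch (the particular values of a, s and n are arbitrary test inputs, not drawn from the text):

```python
# Numeric check that chi((a)) = |a|^s * (a/|a|)^(4n) is well defined on
# ideals of Z[i]: replacing the generator a by a unit multiple u*a
# (u in {1, i, -1, -i}) leaves the value unchanged, because u^(4n) = 1.
# As in the text, s is taken purely imaginary, so |chi| = 1 (unitary).

def chi(a, s, n):
    return abs(a) ** s * (a / abs(a)) ** (4 * n)

a = 3 + 2j   # arbitrary Gaussian integer generator
s = 1.7j     # purely imaginary exponent
n = 2
vals = [chi(u * a, s, n) for u in (1, 1j, -1, -1j)]
assert all(abs(v - vals[0]) < 1e-9 for v in vals)  # same value for all generators
print(round(abs(vals[0]), 6))  # 1.0
```

Dropping the factor of 4 (e.g. using exponent n = 1 on a/|a| alone) would make the value depend on which of the four associates is chosen, so the character would not descend to ideals.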
Algebraic Hecke characters An algebraic Hecke character is a Hecke character taking algebraic values: they were introduced by Weil in 1947 under the name type A0. Such characters occur in class field theory and the theory of complex multiplication. Indeed let E be an elliptic curve defined over a number field F with complex multiplication by the imaginary quadratic field K, and suppose that K is contained in F. Then there is an algebraic Hecke character χ for F, with exceptional set S the set of primes of bad reduction of E together with the infinite places. This character has the property that for a prime ideal p of good reduction, the value χ(p) is a root of the characteristic polynomial of the Frobenius endomorphism. As a consequence, the Hasse–Weil zeta function for E is a product of two Dirichlet series, for χ and its complex conjugate. Notes References J. Tate, Fourier analysis in number fields and Hecke's zeta functions (Tate's 1950 thesis), reprinted in Algebraic Number Theory edd J. W. S. Cassels, A. Fröhlich (1967) pp. 305–347. Number theory Zeta and L-functions
Hecke character
[ "Mathematics" ]
1,986
[ "Discrete mathematics", "Number theory" ]
3,149,490
https://en.wikipedia.org/wiki/Tate%27s%20thesis
In number theory, Tate's thesis is the 1950 PhD thesis of John Tate, completed under the supervision of Emil Artin at Princeton University. In it, Tate used a translation-invariant integration on the locally compact group of ideles to lift the zeta function twisted by a Hecke character, i.e. a Hecke L-function, of a number field to a zeta integral and study its properties. Using harmonic analysis, more precisely the Poisson summation formula, he proved the functional equation and meromorphic continuation of the zeta integral and the Hecke L-function. He also located the poles of the twisted zeta function. His work can be viewed as an elegant and powerful reformulation of a work of Erich Hecke on the proof of the functional equation of the Hecke L-function. Hecke used a generalized theta series associated to an algebraic number field and a lattice in its ring of integers. Iwasawa–Tate theory Kenkichi Iwasawa independently discovered essentially the same method (without an analog of the local theory in Tate's thesis) during the Second World War and announced it in his 1950 International Congress of Mathematicians paper and his letter to Jean Dieudonné written in 1952. Hence this theory is often called Iwasawa–Tate theory. In his letter to Dieudonné, Iwasawa derived over several pages not only the meromorphic continuation and functional equation of the L-function, but also the finiteness of the class number and Dirichlet's theorem on units, as immediate byproducts of the main computation. The theory in positive characteristic was developed one decade earlier by Ernst Witt, Hermann Ludwig Schmid, and Oswald Teichmüller. Iwasawa–Tate theory uses several structures which come from class field theory; however, it does not use any deep result of class field theory.
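The role of the Poisson summation formula can be illustrated with the classical identity it implies for the theta function, θ(1/x) = √x·θ(x), which is the analytic heart of Hecke's original theta-series proof. The check below is a numerical sketch of that identity, not Tate's adelic argument itself:

```python
# Numeric illustration: theta(x) = sum over n in Z of exp(-pi * n^2 * x)
# satisfies theta(1/x) = sqrt(x) * theta(x), a direct consequence of the
# Poisson summation formula -- the same tool Tate applied adelically to
# prove the functional equation without explicit theta series.
import math

def theta(x, terms=50):
    # Symmetric sum: the n = 0 term is 1, and n and -n contribute equally.
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * x)
                           for n in range(1, terms + 1))

for x in (0.5, 1.0, 2.0, 3.7):
    lhs = theta(1.0 / x)
    rhs = math.sqrt(x) * theta(x)
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
print("theta(1/x) = sqrt(x) * theta(x) verified numerically")
```

Fifty terms already give the identity to machine precision, since the summands decay like exp(−πn²x).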
Generalisations Iwasawa–Tate theory was extended to the general linear group GL(n) over an algebraic number field and automorphic representations of its adelic group by Roger Godement and Hervé Jacquet in 1972 which formed the foundations of the Langlands correspondence. Tate's thesis can be viewed as the GL(1) case of the work by Godement–Jacquet. See also Basic Number Theory References Algebraic number theory Zeta and L-functions 1950 in science 1950 documents
Tate's thesis
[ "Mathematics" ]
475
[ "Algebraic number theory", "Number theory" ]
3,149,608
https://en.wikipedia.org/wiki/2005%20Jilin%20chemical%20plant%20explosions
The Jilin chemical plant explosions were a series of explosions which occurred on November 13, 2005, in the No. 102 Petrochemical Plant in Jilin City, Jilin Province, China, over the period of an hour. The explosions killed six, injured dozens, and caused the evacuation of tens of thousands of residents. Explosions The cause of the blasts was determined two days afterwards: the accident occurred in a nitration unit of the plant's aniline production equipment, where a blockage in the T-102 tower had not been handled properly. The blasts were so powerful that they shattered windows 100 to 200 meters away; at least 70 people were injured and six were killed. The fires were finally put out early in the morning of November 14. Over 10,000 people were evacuated from the area, including local residents and students at the north campus of Beihua University and the Jilin Institute of Chemical Technology, for fear of further explosions and contamination with harmful chemicals. The CNPC, which owns the company in charge of the factory, Jilin Petrochemical Corporation, asked senior officials to investigate the cause of the incidents. The explosions were not thought to be related to terrorism, and the company told a press conference that they had occurred as a result of a chemical blockage that had gone unfixed. The municipal government asked hotels and restaurants in the city to provide rooms for the evacuated people. Taxi companies also aided in the evacuation. Water pollution The explosion severely polluted the Songhua River, with an estimated 100 tons of pollutants containing benzene and nitrobenzene entering the river. Exposure to benzene reduces red blood cell count and is linked to leukemia. An 80 km long toxic slick drifted down the Amur River, and the benzene level recorded was at one point 108 times above national safety levels.
The slick passed first on the Songhua River through several counties and cities of Jilin province, including Songyuan; it then entered the province of Heilongjiang, with Harbin, capital of Heilongjiang province and one of China's largest cities, being one of the first places to be affected. After traversing the eastern half of Heilongjiang including the city of Jiamusi, the slick converged into the Amur River at the mouth of the Songhua on the border between China and Russia. It passed by the Jewish Autonomous Oblast in Russia, then entered the Russian region of Khabarovsk Krai in the Russian Far East, passing through the cities of Khabarovsk and Komsomolsk-on-Amur before exiting into the Strait of Tartary, itself a bridge between the Sea of Okhotsk and the Sea of Japan portions of the Pacific Ocean. Jilin Province On November 13, a water plant in Jilin city, Jilin, was closed. Several hydropower stations in the upper reach of Songhua River began to increase their discharge flow. On November 15, Songyuan, Jilin, stopped using water from Songhua River. By November 18, water supplies in Songyuan, Jilin, were partially suspended. Water supplies in Songyuan, Jilin, were restored on November 23. Heilongjiang Province Harbin, the capital of Heilongjiang, is one of China's biggest cities with nearly ten million urban residents. It is also dependent on the Songhua River for its water supply. On November 21, the city government of Harbin announced that water supplies would be shut off at noon November 22 for four days for maintenance. Some residents of Harbin have complained that water in some parts of the city had been shut off much earlier than announced. The city also ordered all bathhouses and carwashes to close. 
At the same time as the enigmatic announcement, rumours ran wild about the possible cause of the shutoff, with some suggesting that an earthquake was imminent (causing some people to camp outdoors) and others claiming that terrorists had poisoned the city's water supply. The news of the shutoff caused panic buying of water, beverages, and foodstuffs in the city's supermarkets, while train tickets and flights out of the area were soon sold out. Meanwhile, dead fish were appearing along the banks of the Songhua upstream from Harbin, further compounding the fears of Harbin residents. Later on the same day, the city government issued another announcement, this time explicitly mentioning the Jilin explosions as the reason for the shutoff. The four-day shutoff was postponed to midnight on November 24. From 9 a.m. to 8 p.m. on November 23, the city temporarily restored the water supply to allow residents to stock up on water, since the slick had not yet reached the city. In the afternoon of the same day, schools in Harbin were closed for one week. Also on November 23, Harbin residents began to receive water from fire trucks, and began voluntary evacuation. The slick itself reached Harbin before dawn on November 24. On that day, the nitrobenzene level at Harbin was recorded at 16.87 times above the national safety level, while the benzene level was increasing, but had not yet exceeded national safety level. The nitrobenzene level doubled on November 25 (0.5805 mg/L), 33.15 times the national safety level, and began to decrease. The benzene level stayed under national safety level. At the same time, the tail of the slick left Zhaoyuan, Daqing, Heilongjiang. Premier Wen Jiabao of the State Council visited Harbin on November 26 to inspect the current situation, including the status of water pollution and water supply. 
In response to the crisis, trucks transported tens of thousands of metric tons of water from surrounding cities, and thousands of tons of activated carbon from all over the country, to Harbin. The government of Harbin also ordered the price of drinking water to be frozen at its November 20 level, in order to combat overpricing. In addition, Harbin began boring ninety-five more deep-water wells, to complement the existing 918 deep-water wells in the city. Fifteen hospitals were on stand-by for possible poisoning victims. Harbin was not the only city to be affected. The slick passed through the city of Jiamusi, which, however, relies more heavily on underground water supply and thus did not cut off water supplies. Nevertheless, on December 2, Jiamusi shut down its No. 7 Water Plant, which supplies around 70% of the city's water, and evacuated half of the population of its Liushu island. It is reported that the entry of several tributaries into the Songhua, such as the Hulan River and the Mudan River, diluted the slick. Water supply in Harbin was resumed on the evening of November 27. Khabarovsk Krai The slick reached the Amur River on December 16, and arrived at the Russian city of Khabarovsk four to five days later. In readiness, a communications hotline had been set up between Chinese and Russian agencies, and China offered water testing and purifying materials, including 1,000 tons of activated carbon, to Russia. Khabarovsk planned to shut off its water supply in "extreme circumstances", prompting residents to stock up on water. Maritime pollution After exiting the Amur River, the contaminants entered the Strait of Tartary and then the Sea of Okhotsk and the Sea of Japan, which have Japan, Korea and Far-East Russia on their littorals. Political fall-out Xie Zhenhua, China's Minister of the State Environmental Protection Administration, resigned and was succeeded by Zhou Shengxian, former director of the State Forestry Administration.
Criticism The Chinese press were critical of the authorities' response to the disaster. Jilin Petrochemicals, which runs the plant that suffered the explosions, initially denied that the explosion could have leaked any pollutants into the Songhua River, saying that it produced only water and carbon dioxide. The media focused mostly on Harbin, with almost no information on the slick's effect on cities and counties in Jilin province. Heilongjiang responded to the crisis a full week after the explosions occurred: the initial announcement attributed the impending shutoff to "maintenance" and gave only a day's notice; it was the second announcement, on the next day, that clarified the reason for the shutoff and postponed it. In response, Vice Governor Jiao Zhengzhong of Jilin province and Deputy General Manager Zeng Yukang of CNPC visited Harbin and expressed their apologies to the city. On 6 December, the vice-mayor of Jilin, Wang Wei, was found dead in his home. This followed a threat by the Chinese government to severely punish anyone who had covered up the severity of the accident. The threat applied only to the initial explosion and not the extended cover-up of the benzene slick. See also Environment of China Pollution in China References External links Explosions in 2005 Chemical Plant Explosions 2005 disasters in China 2005 industrial disasters Chemical plant explosions Environmental disasters in China Explosions in China Industrial fires and explosions in China Water pollution in China 2005 in the environment November 2005 events in China Man-made disasters in China
2005 Jilin chemical plant explosions
[ "Chemistry" ]
1,895
[ "Chemical plant explosions", "Explosions" ]
3,149,730
https://en.wikipedia.org/wiki/Lurex
Lurex is the registered brand name of the Lurex Company, Ltd. for a type of yarn with a metallic appearance. The yarn is made from synthetic film onto which a metallic aluminium, silver, or gold layer has been vaporized. "Lurex" may also refer to cloth created with the yarn. The word "lurex" is not a common noun in English: it is the trademark of the Lurex Company Limited, which launched production of such yarn, based on nylon and polyester, in the 1970s. The name was based on the English word "lure" ("temptation; attractiveness"). The Lurex Company Hugo Wolfram, father of mathematician Stephen Wolfram, served as Managing Director of the Lurex Company; he was also the author of three novels. Lurex in media Lurex has been a popular material for movie and television costumes. For example, the bodysuit worn by actress Julie Newmar as Catwoman in the Batman TV series of the 1960s is constructed of black Lurex. It was also used in the slasher movie franchise Scream for the Ghostface costume, known as Father Death, worn by the killers to conceal their identity during the murder sprees. The costume was originally going to be white, but was changed over concerns regarding comparisons to the Ku Klux Klan. The material is referenced in Australian group AC/DC's song "Rocker": "Lurex socks, blue suede shoes, V8 car, and tattoos". Its presence, for "sparkle", at the 1920s-themed 50th anniversary party for MOMA in New York City in 1979 was noted in a news story on the gala event. Larry Lurex was a name sometimes used by Queen lead singer Freddie Mercury, with a few records being released under that name. See also Metallic fiber Lamé (fabric) References External links Official Lurex website Yarn Synthetic fibers Technical fabrics
Lurex
[ "Chemistry" ]
374
[ "Polymer stubs", "Synthetic materials", "Organic chemistry stubs", "Synthetic fibers" ]
3,149,881
https://en.wikipedia.org/wiki/Irving%20Kaplansky
Irving Kaplansky (March 22, 1917 – June 25, 2006) was a mathematician, college professor, author, and amateur musician. Biography Kaplansky, or "Kap" as his friends and colleagues called him, was born in Toronto, Ontario, Canada, to Polish-Jewish immigrants; his father worked as a tailor, and his mother ran a grocery and, eventually, a chain of bakeries. He attended Harbord Collegiate Institute, receiving the Prince of Wales Scholarship as a teenager. He attended the University of Toronto as an undergraduate, finishing first in his class for three consecutive years. In his senior year, he competed in the first William Lowell Putnam Mathematical Competition, becoming one of the first five recipients of the Putnam Fellowship, which paid for graduate studies at Harvard University. Administered by the Mathematical Association of America, the competition is widely considered to be the most difficult mathematics examination in the world, and "its difficulty is such that the median score is often zero or one (out of 120) despite being attempted by students specializing in mathematics." After receiving his Ph.D. from Harvard in 1941 as Saunders Mac Lane's first student, he remained at Harvard as a Benjamin Peirce Instructor. In 1944 he moved with Mac Lane to Columbia University for one year to work with the Applied Mathematics Panel on war-related research: "miscellaneous studies in mathematics applied to warfare analysis with emphasis upon aerial gunnery, studies of fire control equipment, and rocketry and toss bombing". He was a member of the Institute for Advanced Study and attended the 1946 Princeton University Bicentennial. He was professor of mathematics at the University of Chicago from 1945 to 1984, and Chair of the department from 1962 to 1967. 
In 1968, Kaplansky was presented an honorary doctoral degree from Queen's University with the university noting "we honour as a Canadian whose clarity of lectures, elegance of writing, and profundity of research have won him widespread acclaim as the greatest mathematician this country has so far produced." From 1967 to 1969, Kaplansky wrote the mathematics section of Encyclopædia Britannica. Kaplansky was the Director of the Mathematical Sciences Research Institute from 1984 to 1992, and the President of the American Mathematical Society from 1985 to 1986. Kaplansky was also an accomplished amateur musician. He had perfect pitch, studied piano until the age of 15, earned money in high school as a dance band musician, taught Tom Lehrer, and played in Harvard's jazz band in graduate school. He also had a regular program on Harvard's student radio station. After moving to the University of Chicago, he stopped playing for two decades, but then returned to music as an accompanist for student-run Gilbert and Sullivan productions and as a calliope player in football game parades. He often composed music based on mathematical themes. One of those compositions, A Song About Pi, is a melody based on assigning notes to the first 14 decimal places of pi, and has occasionally been performed by his daughter, singer-songwriter Lucy Kaplansky. Mathematical contributions Kaplansky made major contributions to group theory, ring theory, the theory of operator algebras and field theory and created the Kaplansky density theorem, Kaplansky's game and Kaplansky conjecture. He published more than 150 articles and 11 mathematical books. Kaplansky was the doctoral supervisor of 55 students including notable mathematicians Hyman Bass, Susanna S. Epp, Günter Lumer, Eben Matlis, Donald Ornstein, Ed Posner, Alex F. T. W. Rosenberg, Judith D. Sally, and Harold Widom. He has over 950 academic descendants, including many through his academic grandchildren David J. 
Foulis (who studied with Kaplansky at the University of Chicago before completing his doctorate under the supervision of Kaplansky's student Fred Wright, Jr.) and Carl Pearcy (the student of H. Arlen Brown, who had been jointly supervised by Kaplansky and Paul Halmos). Awards and honors Kaplansky was a member of the National Academy of Sciences and the American Academy of Arts and Sciences, Director of the Mathematical Sciences Research Institute, and President of the American Mathematical Society. He was the plenary speaker at the British Mathematical Colloquium in 1966. He won the William Lowell Putnam Mathematical Competition, the Guggenheim Fellowship, the Jeffery–Williams Prize, and the Leroy P. Steele Prize. Selected publications Books revised edn. 1971 with several later reprintings 2nd edn. 1972 revised edn. 1974 several later reprintings 2nd edn. 1977 1st edn. 1966; revised 1974 with several later reprintings with I. N. Herstein: 2nd edn. 1978 Articles with I. S. Cohen: with Richard F. Arens: See also Kaplansky's theorem on projective modules Notes References Peterson, Ivars. (2013). "A Song about Pi" http://mtarchive.blogspot.com/2013/09/a-song-about-pi.html?m=1 Freund, Peter G. O. Irving Kaplansky and Supersymmetry. External links Irving Kaplansky + Ternary Quadratic Forms Irving Kaplansky + Lie Superalgebras search on author Irving Kaplansky from Google Scholar 1917 births Members of the United States National Academy of Sciences 2006 deaths 20th-century Canadian mathematicians Canadian people of Polish-Jewish descent Institute for Advanced Study visiting scholars Algebraists Group theorists Putnam Fellows University of Toronto alumni Harvard University alumni University of Chicago faculty Scientists from Toronto Canadian expatriate academics in the United States Presidents of the American Mathematical Society
Irving Kaplansky
[ "Mathematics" ]
1,128
[ "Algebra", "Algebraists" ]
3,150,389
https://en.wikipedia.org/wiki/Tissue%20factor%20pathway%20inhibitor
Tissue factor pathway inhibitor (or TFPI) is a single-chain polypeptide which can reversibly inhibit factor Xa (Xa). While Xa is inhibited, the Xa-TFPI complex can subsequently also inhibit the FVIIa-tissue factor complex. TFPI contributes significantly to the inhibition of Xa in vivo, despite being present at concentrations of only 2.5 nM. Genetics The gene for TFPI is located on chromosome 2q31-q32.1, and has nine exons which span 70 kb. A similar gene, termed TFPI2, has been identified on chromosome 7, at locus 7q21.3; in addition to TFPI activity, its product also has retinal pigment epithelial cell growth-promoting properties. Protein structure TFPI has a relative molecular mass of 34,000 to 40,000 depending on the degree of proteolysis of the C-terminal region. TFPI consists of a highly negatively charged amino-terminus, three tandemly linked Kunitz domains, and a highly positively charged carboxy-terminus. With its Kunitz domains, TFPI exhibits significant homology with human inter-alpha-trypsin inhibitor and bovine basic pancreatic trypsin inhibitor. Interactions Tissue factor pathway inhibitor has been shown to interact with Factor X. See also Hemostasis References External links (TFPI1), (TFPI2) Further reading Coagulation system
Tissue factor pathway inhibitor
[ "Chemistry" ]
311
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
3,150,639
https://en.wikipedia.org/wiki/Saturated%20calomel%20electrode
The saturated calomel electrode (SCE) is a reference electrode based on the reaction between elemental mercury and mercury(I) chloride. It has been widely replaced by the silver chloride electrode; however, the calomel electrode has a reputation of being more robust. The aqueous phase in contact with the mercury and the mercury(I) chloride (Hg2Cl2, "calomel") is a saturated solution of potassium chloride in water. The electrode is normally linked via a porous frit (sometimes coupled to a salt bridge) to the solution in which the other electrode is immersed. In cell notation the electrode is written as: Cl−(4 M) | Hg2Cl2(s) | Hg(l) | Pt Theory of electrolysis Solubility product The electrode is based on the redox half reactions Hg2^2+ + 2e− ⇌ 2Hg(l) and Hg2Cl2(s) + 2e− ⇌ 2Hg(l) + 2Cl−. The half reactions can be balanced against each other to give the reaction Hg2Cl2(s) ⇌ Hg2^2+ + 2Cl−, which is the precipitation (dissolution) reaction, with the equilibrium constant of the solubility product Ksp = aHg2^2+ · (aCl−)^2. The Nernst equations for these half reactions are E = E0(Hg2^2+/Hg) + (RT/2F) ln aHg2^2+ and E = E0(Hg2Cl2/Hg) − (RT/2F) ln (aCl−)^2, where E0 is the standard electrode potential for each reaction and aHg2^2+ is the activity of the mercury(I) cation. At equilibrium the two expressions for the potential are equal; this equality allows us to find the solubility product. Due to the high concentration of chloride ions, the concentration of mercury ions ([Hg2^2+]) is low. This reduces the risk of mercury poisoning for users and other mercury problems. SCE potential The only variable in this equation is the activity (or concentration) of the chloride anion. But since the inner solution is saturated with potassium chloride, this activity is fixed by the solubility of potassium chloride. This gives the SCE a potential of +0.248 V vs. SHE at 20 °C and +0.244 V vs. SHE at 25 °C, but slightly higher when the chloride solution is less than saturated. For example, a 3.5 M KCl electrolyte solution has an increased reference potential of +0.250 V vs. SHE at 25 °C, while a 1 M solution has a +0.283 V potential at the same temperature. 
Application The SCE is used in pH measurement, cyclic voltammetry and general aqueous electrochemistry. This electrode and the silver/silver chloride reference electrode work in the same way. In both electrodes, the activity of the metal ion is fixed by the solubility of the metal salt. The calomel electrode contains mercury, which poses much greater health hazards than the silver metal used in the Ag/AgCl electrode. See also Cyclic voltammetry Standard hydrogen electrode Table of standard electrode potentials Reference electrode References Electrodes
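To illustrate the chloride-activity dependence described in the "SCE potential" section, here is a numerical sketch (not part of the original article). It assumes the commonly tabulated standard potential E0 ≈ +0.268 V vs. SHE for the unit-activity calomel couple and treats the chloride activity as the only variable, per the Nernst equation:

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def calomel_potential(a_cl, T=298.15, E0=0.268):
    """Potential (V vs. SHE) of the calomel half-cell
    Hg2Cl2(s) + 2e- <-> 2Hg(l) + 2Cl-, via the Nernst equation:
    E = E0 - (RT/2F) * ln((a_Cl)^2) = E0 - (RT/F) * ln(a_Cl).
    a_cl is the chloride activity; E0 = +0.268 V is an assumed
    textbook value for the unit-activity couple."""
    return E0 - (R * T / F) * math.log(a_cl)

# Higher chloride activity lowers the electrode potential, which is
# why the saturated electrode sits below the standard +0.268 V.
for a in (0.1, 1.0, 4.0):
    print(f"a(Cl-) = {a:4.1f}  ->  E = {calomel_potential(a):+.3f} V vs. SHE")
```

Note the sketch ignores activity coefficients, which is why its 1 M figure differs slightly from the measured +0.283 V quoted above.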
Saturated calomel electrode
[ "Chemistry" ]
583
[ "Electrochemistry", "Electrodes" ]
3,150,728
https://en.wikipedia.org/wiki/RecLOH
RecLOH is a term in genetics, an abbreviation for "Recombinant Loss of Heterozygosity". It denotes a type of mutation that occurs in DNA through recombination. From a pair of equivalent ("homologous") but slightly different (heterozygous) genes, a pair of identical genes results. In this case there is a non-reciprocal exchange of genetic code between the chromosomes, in contrast to chromosomal crossover, because genetic information is lost. For Y chromosome In genetic genealogy, the term is used particularly concerning similar-seeming events in Y chromosome DNA. This type of mutation happens within one chromosome, and does not involve a reciprocal transfer. Rather, one homologous segment "writes over" the other. The mechanism is presumed to be different from RecLOH events in autosomal chromosomes, since the target is the very same chromosome instead of the homologous one. During the mutation, one of these copies overwrites the other; thus the differences between the two are lost. Because differences are lost, heterozygosity is lost. Recombination on the Y chromosome takes place not only during meiosis but at virtually every mitosis when the Y chromosome condenses, because it does not require pairing between chromosomes. The recombination frequency even exceeds the frame-shift mutation frequency (slipped-strand mispairing) of (average fast) Y-STRs; however, many recombination products may lead to infertile germ cells and "daughter out". Recombination events (RecLOH) can be observed if Y-STR databases are searched for twin alleles at 3 or more duplicated markers on the same palindrome (hairpin). E.g. DYS459, DYS464 and DYS724 (CDY) are located on the same palindrome P1. A high proportion of 9-9, 15-15-17-17, 36-36 combinations and similar twin-allelic patterns will be found. PCR typing technologies have been developed (e.g. DYS464X) that are able to verify that there are most frequently really two alleles of each, so we can be sure that there is no gene deletion. 
Family genealogies have shown many times that parallel changes on all markers located on the same palindrome are frequently observed, and that the result of those changes is always twin alleles. So a 9–10, 15-16-17-17, 36-38 haplotype can change in one recombination event to the one mentioned above, because all three markers (DYS459, DYS464 and DYS724) are affected by one and the same RecLOH event. See also Null allele Paternal mtDNA transmission List of genetic genealogy topics References External links RecLOH explained Genetics Genetic genealogy
RecLOH
[ "Biology" ]
603
[ "Genetics" ]
3,151,041
https://en.wikipedia.org/wiki/Poncelet
The poncelet (symbol p) is an obsolete unit of power, once used in France and replaced by the cheval-vapeur (ch, metric horsepower). The unit was named after Jean-Victor Poncelet. One poncelet is defined as the power required to raise a hundred-kilogram mass (quintal) at a velocity of one metre per second (100 kilogram-force·m/s). 1 p = 980.665 W = 4/3 ch ≈ 1.315 hp (imperial horsepower) References Units of power Obsolete units of measurement Metrication in France
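The conversions stated above can be verified with a few lines of code. This snippet is illustrative only; it simply encodes the definitions (100 kgf·m/s for the poncelet, 75 kgf·m/s for the metric horsepower, and the standard 745.7 W figure for the imperial horsepower):

```python
G_N = 9.80665            # standard gravity, m/s^2 (1 kgf = G_N newtons)

PONCELET_W = 100 * G_N   # 1 poncelet = 100 kgf*m/s = 980.665 W
CH_W = 75 * G_N          # 1 metric horsepower (ch) = 75 kgf*m/s
HP_W = 745.69987158227   # 1 imperial (mechanical) horsepower in watts

print(PONCELET_W)        # watts in one poncelet
print(PONCELET_W / CH_W) # poncelet in ch: exactly 100/75 = 4/3
print(PONCELET_W / HP_W) # poncelet in imperial hp: about 1.315
```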
Poncelet
[ "Physics", "Mathematics" ]
113
[ "Obsolete units of measurement", "Physical quantities", "Quantity", "Power (physics)", "Units of power", "Units of measurement" ]
3,151,160
https://en.wikipedia.org/wiki/European%20and%20Developing%20Countries%20Clinical%20Trials%20Partnership
The European and Developing Countries Clinical Trials Partnership (EDCTP) is a partnership between the European Union (EU), Norway, Switzerland and developing countries and other donors, as well as the pharmaceutical industry, to enable clinical trials and the development of new medicines and vaccines against HIV/AIDS, tuberculosis, and malaria. The need for global action against these diseases in order to promote poverty reduction has been recognised by the United Nations, the G8, and the African Union, and the program envisioned the provision of €600 million for the period 2003–2007 in order to translate medical research results into clinical applications relevant to the needs of developing countries. The European Council adopted the Programme for Action: Accelerated action on HIV/AIDS, malaria and tuberculosis in the context of poverty reduction (COM (2001)96, ) on 15 May 2001, following its launch by the European Commission. The Commission proposal was adopted on 16 June 2003 by the Council and the European Parliament. The Programme was to be based on three central pillars: "the impact of existing interventions, the affordability of key pharmaceuticals and trade, and the research and development of specific global public goods." These aims relate specifically with the EU stance on access to essential medicines. Projects and activities EDCTP-funded activities are based on: North/North networking and co-ordination Partnerships among the EU and associated countries, allowing focused collaboration of national and European efforts that were not previously coordinated. North/South networking and co-ordination Partnerships among EU and developing countries that focuses specifically on developing countries' needs, who are jointly involved in setting the research priorities. South/South networking and co-ordination Efforts aimed at creating lasting long-term partnerships between African scientists and research institutions in Africa. 
Supporting relevant clinical trials Acceleration of the development of new or improved drugs, vaccines and microbicides against HIV/AIDS, malaria and tuberculosis, with a focus on phase II and III clinical trials in sub-Saharan Africa. Capacity building Manifested as a general strengthening of clinical research capacity in Africa, including training activities, workshops and meetings, and the upgrading of clinical trial sites in Africa. Funding The total budget of the EDCTP is €600 million for the period 2003–2007, of which one third (€200 million) is to be provided by the EU budget, an equivalent amount by Member States' activities, and the remaining €200 million by industry, charities, and private organisations. The Partnership is intended to be a long-term initiative (10–20 years), and a separate legal entity has been created to maintain its independence and flexibility. Taken together, clinical trials based in developing countries where the diseases are endemic, capacity building, and South-South networking are expected to make up over 90% of the overall budget. Member states and partners These are as follows: Europe EU member states, plus Norway and Switzerland. Developing countries All sub-Saharan African countries. Industry and commercial Bayer GlaxoSmithKline Novartis Pharma Novo Nordisk Sanofi Pasteur Sanofi-Aventis. 
International initiatives The World Health Organization (WHO-TDR) () Drugs for Neglected Diseases initiative (http://www.dndi.org) International Aids Vaccine Initiative (IAVI) (http://www.iavi.org) European Vaccine Initiative (EVI) (http://www.euvaccine.eu) TB Alliance (http://www.tballiance.org) Private non-profit organisations Aeras Global TB Vaccine Foundation (https://web.archive.org/web/20090318151237/http://www.aeras.org/home/home.php) The Bill and Melinda Gates Foundation (https://web.archive.org/web/20121012112446/http://www.gatesfoundation.org/Pages/home.aspx) International Partnership for Microbicides (https://web.archive.org/web/20071008192438/http://www.ipm-microbicides.org/) Medicines for Malaria Venture (https://web.archive.org/web/20090214212631/http://www.mmv.org/index.php) The Wellcome Trust (http://www.wellcome.ac.uk) See also European Organisation for Research and Treatment of Cancer (EORTC) European Medicines Agency (EMEA) References External links European and Developing Countries Clinical Trials Partnership (EDCTP) Clinical research International medical and health organizations National agencies for drug regulation Medical and health organisations based in the Netherlands
European and Developing Countries Clinical Trials Partnership
[ "Chemistry" ]
961
[ "National agencies for drug regulation", "Drug safety" ]
8,964,423
https://en.wikipedia.org/wiki/Transfer-matrix%20method%20%28statistical%20mechanics%29
In statistical mechanics, the transfer-matrix method is a mathematical technique used to write the partition function in a simpler form. It was introduced in 1941 by Hans Kramers and Gregory Wannier. In many one-dimensional lattice models, the partition function is first written as an n-fold summation over each possible microstate, and also contains an additional summation of each component's contribution to the energy of the system within each microstate. Overview Higher-dimensional models contain even more summations. For systems with more than a few particles, such expressions can quickly become too complex to work out directly, even by computer. Instead, the partition function can be rewritten in an equivalent way. The basic idea is to write the partition function in the form Z = v0 · (W1 W2 … WN) · vN+1, where v0 and vN+1 are vectors of dimension p and the p × p matrices Wk are the so-called transfer matrices. In some cases, particularly for systems with periodic boundary conditions, the partition function may be written more simply as Z = tr(W1 W2 … WN), where "tr" denotes the matrix trace. In either case, the partition function may be solved exactly using eigenanalysis. If the matrices are all the same matrix W, the partition function may be approximated as the Nth power of the largest eigenvalue of W, since the trace of W^N is the sum of the Nth powers of the eigenvalues of W, and for large N this sum is dominated by the largest eigenvalue. The transfer-matrix method is used when the total system can be broken into a sequence of subsystems that interact only with adjacent subsystems. For example, a three-dimensional cubical lattice of spins in an Ising model can be decomposed into a sequence of two-dimensional planar lattices of spins that interact only adjacently. 
The dimension p of the p × p transfer matrix equals the number of states the subsystem may have; the transfer matrix itself Wk encodes the statistical weight associated with a particular state of subsystem k − 1 being next to another state of subsystem k. Importantly, transfer matrix methods allow one to tackle probabilistic lattice models from an algebraic perspective, allowing for instance the use of results from representation theory. As an example of observables that can be calculated from this method, the probability of a particular state occurring at position x is given by: Where is the projection matrix for state , having elements Transfer-matrix methods have been critical for many exact solutions of problems in statistical mechanics, including the Zimm–Bragg and Lifson–Roig models of the helix-coil transition, transfer matrix models for protein-DNA binding, as well as the famous exact solution of the two-dimensional Ising model by Lars Onsager. See also Transfer operator References Notes Statistical mechanics Mathematical physics Lattice models
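As an illustration of the overview above (this sketch is ours, not part of the article), the method can be applied to the one-dimensional zero-field Ising ring. Its 2 × 2 transfer matrix W has entries exp(βJ·s·s′) for neighbouring spins s, s′ = ±1, with eigenvalues 2·cosh(βJ) and 2·sinh(βJ), so the periodic-boundary partition function tr(W^N) can be computed from the eigenvalues and compared against brute-force enumeration of all 2^N microstates:

```python
import itertools
import math

def ising_Z_brute(N, beta, J=1.0):
    """Partition function of the 1D Ising ring (periodic boundary)
    by explicit enumeration of all 2^N spin microstates; only
    feasible for small N."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        Z += math.exp(-beta * E)
    return Z

def ising_Z_transfer(N, beta, J=1.0):
    """Same partition function as tr(W^N), using the eigenvalues
    of the 2x2 transfer matrix W = [[e^{bJ}, e^{-bJ}],
    [e^{-bJ}, e^{bJ}]], namely 2*cosh(bJ) and 2*sinh(bJ)."""
    lam1 = 2 * math.cosh(beta * J)
    lam2 = 2 * math.sinh(beta * J)
    return lam1 ** N + lam2 ** N

beta, N = 0.7, 10
print(ising_Z_brute(N, beta))      # exact enumeration over 2^N states
print(ising_Z_transfer(N, beta))   # tr(W^N) from the two eigenvalues
print((2 * math.cosh(beta)) ** N)  # lambda_max^N, dominant for large N
```

The last line shows the approximation discussed above: for large N the trace is dominated by the Nth power of the largest eigenvalue.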
Transfer-matrix method (statistical mechanics)
[ "Physics", "Materials_science", "Mathematics" ]
579
[ "Applied mathematics", "Theoretical physics", "Lattice models", "Computational physics", "Condensed matter physics", "Statistical mechanics", "Mathematical physics" ]
8,964,614
https://en.wikipedia.org/wiki/Evaporation%20%28deposition%29
Evaporation is a common method of thin-film deposition. The source material is evaporated in a vacuum. The vacuum allows vapor particles to travel directly to the target object (substrate), where they condense back to a solid state. Evaporation is used in microfabrication, and to make macro-scale products such as metallized plastic film. History Evaporation deposition was first observed in incandescent light bulbs during the late nineteenth century. The problem of bulb blackening was one of the main obstacles to making bulbs with long life, and received a great amount of study by Thomas Edison and his General Electric company, as well as many others working on their own lightbulbs. The phenomenon was first adapted to a process of vacuum deposition by Pohl and Pringsheim in 1912. However, it found little use until the 1930s, when people began experimenting with ways to make aluminum-coated mirrors for use in telescopes. Aluminum was far too reactive to be used in chemical wet deposition or electroplating methods. John D. Strong was successful in making the first aluminum telescope-mirrors in the 1930s using evaporation deposition. Because it produces an amorphous (glassy) coating rather than a crystalline one, with high uniformity and precise control of thickness, thereafter it became a common process for producing thin-film optical coatings from a variety of materials, both metal and non-metal (dielectric), and has been adopted for many other uses, such as coating plastic toys and automobile parts, the production of semiconductors and microchips, and Mylar films with uses ranging from capacitors to spacecraft thermal control. Physical principle Evaporation involves two basic processes: a hot source evaporates a material and it condenses on a colder substrate that is below its melting point. It resembles the familiar process by which liquid water appears on the lid of a boiling pot. 
However, the gaseous environment and heat source (see "Equipment" below) are different. Liquids such as water cannot exist in a vacuum, because they require some level of external pressure to hold the atoms and molecules together. In a vacuum, materials sublimate (vaporize), expand outward, and upon contact with a surface condense back into a solid (deposit) without ever passing through a liquid state. Thus, in comparison to water, the process is more like frost forming on a window. Evaporation takes place in a vacuum, i.e. vapors other than the source material are almost entirely removed before the process begins. In high vacuum (with a long mean free path), evaporated particles can travel directly to the deposition target without colliding with the background gas. (By contrast, in the boiling pot example, the water vapor pushes the air out of the pot before it can reach the lid.) At a typical pressure of 10⁻⁴ Pa, a 0.4-nm particle has a mean free path of 60 m. Hot objects in the evaporation chamber, such as heating filaments, produce unwanted vapors that limit the quality of the vacuum. Evaporated atoms that collide with foreign particles may react with them; for instance, if aluminium is deposited in the presence of oxygen, it will form aluminium oxide. They also reduce the amount of vapor that reaches the substrate, which makes the thickness difficult to control. Evaporated materials deposit nonuniformly if the substrate has a rough surface (as integrated circuits often do). Because the evaporated material attacks the substrate mostly from a single direction, protruding features block the evaporated material from some areas. This phenomenon is called "shadowing" or "step coverage." When evaporation is performed in poor vacuum or close to atmospheric pressure, the resulting deposition is generally non-uniform and tends not to be a continuous or smooth film. Rather, the deposition will appear fuzzy. Equipment Any evaporation system includes a vacuum pump. 
It also includes an energy source that evaporates the material to be deposited. Many different energy sources exist: In the thermal method, metal material (in the form of wire, pellets, shot) is fed onto heated semimetal (ceramic) evaporators known as "boats" due to their shape. A pool of melted metal forms in the boat cavity and evaporates into a cloud above the source. Alternatively the source material is placed in a crucible, which is radiatively heated by an electric filament, or the source material may be hung from the filament itself (filament evaporation). Molecular beam epitaxy is an advanced form of thermal evaporation. In the electron-beam method, the source is heated by an electron beam with an energy up to 15 keV. In flash evaporation, a fine wire or powder of source material is fed continuously onto a hot ceramic or metallic bar, and evaporates on contact. Resistive evaporation is accomplished by passing a large current through a resistive wire or foil containing the material to be deposited. The heating element is often referred to as an "evaporation source". Wire type evaporation sources are made from tungsten wire and can be formed into filaments, baskets, heaters or looped shaped point sources. Boat type evaporation sources are made from tungsten, tantalum, molybdenum or ceramic type materials capable of withstanding high temperatures. Induction heating evaporation involves the heating of a source material using an induction heater. Some systems mount the substrate on an out-of-plane planetary mechanism. The mechanism rotates the substrate simultaneously around two axes, to reduce shadowing. Optimization Purity of the deposited film depends on the quality of the vacuum, and on the purity of the source material. At a given vacuum pressure the film purity will be higher at higher deposition rates as this minimises the relative rate of gaseous impurity inclusion. The thickness of the film will vary due to the geometry of the evaporation chamber. 
Collisions with residual gases aggravate nonuniformity of thickness. Wire filaments for evaporation cannot deposit thick films, because the size of the filament limits the amount of material that can be deposited. Evaporation boats and crucibles offer higher volumes for thicker coatings. Thermal evaporation offers faster evaporation rates than sputtering. Flash evaporation and other methods that use crucibles can deposit thick films. In order to deposit a material, the evaporation system must be able to vaporize it. This makes refractory materials such as tungsten hard to deposit by methods that do not use electron-beam heating. Electron-beam evaporation allows tight control of the evaporation rate. Thus, an electron-beam system with multiple beams and multiple sources can deposit a chemical compound or composite material of known composition. Step coverage Applications An important example of an evaporative process is the production of aluminized PET film packaging film in a roll-to-roll web system. Often, the aluminum layer in this material is not thick enough to be entirely opaque since a thinner layer can be deposited more cheaply than a thick one. The main purpose of the aluminum is to isolate the product from the external environment by creating a barrier to the passage of light, oxygen, or water vapor. Evaporation is commonly used in microfabrication to deposit metal films. Comparison to other deposition methods Alternatives to evaporation, such as sputtering and chemical vapor deposition, have better step coverage. This may be an advantage or disadvantage, depending on the desired result. Sputtering tends to deposit material more slowly than evaporation. Sputtering uses a plasma, which produces many high-speed atoms that bombard the substrate and may damage it. Evaporated atoms have a Maxwellian energy distribution, determined by the temperature of the source, which reduces the number of high-speed atoms. 
However, electron beams tend to produce X-rays (Bremsstrahlung) and stray electrons, each of which can also damage the substrate.

References

Semiconductor Devices: Physics and Technology, by S.M. Sze, has an especially detailed discussion of film deposition by evaporation.
R. D. Mathis Company Evaporation Sources Catalog, by R. D. Mathis Company, pages 1 through 7 and page 12, 1992.

External links

Thin film evaporation reference - properties of common materials
Web-page of Society of Vacuum Coaters
Examples of evaporation sources

Physical vapor deposition techniques Thin film deposition Semiconductor device fabrication
Evaporation (deposition)
https://en.wikipedia.org/wiki/Category%20utility
Category utility is a measure of "category goodness" defined in and . It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as "cue validity" and "collocation index". It provides a normative information-theoretic measure of the predictive advantage gained by the observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over the observer who does not possess knowledge of the category structure. In this sense the motivation for the category utility measure is similar to the information gain metric used in decision tree learning. In certain presentations, it is also formally equivalent to the mutual information, as discussed below. A review of category utility in its probabilistic incarnation, with applications to machine learning, is provided in .

Probability-theoretic definition of category utility

The probability-theoretic definition of category utility given in and is as follows:

CU(C, F) = Σ_{c ∈ C} p(c) [ Σ_i Σ_j p(f_i = v_ij | c)² − Σ_i Σ_j p(f_i = v_ij)² ]

where F = {f_i}, i = 1, …, n, is a size-n set of m-ary features, and C is a set of categories. The term p(f_i = v_ij) designates the marginal probability that feature f_i takes on value v_ij, and the term p(f_i = v_ij | c) designates the category-conditional probability that feature f_i takes on value v_ij given that the object in question belongs to category c. The motivation and development of this expression for category utility, and the role of the multiplicand p(c) as a crude overfitting control, is given in the above sources. Loosely, the term Σ_c p(c) Σ_i Σ_j p(f_i = v_ij | c)² is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while Σ_i Σ_j p(f_i = v_ij)² is the expected number of attribute values that can be correctly guessed by an observer using the same strategy but without any knowledge of the category labels.
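As a concrete sketch, the probability-theoretic definition can be implemented directly, estimating all probabilities as empirical frequencies from a small labelled dataset. The data layout below, and the choice not to divide by the number of categories (as the COBWEB algorithm does), are conventions assumed here, not details taken from the sources:

```python
from collections import Counter

def category_utility(items):
    """items: list of (category, {feature: value}) pairs.

    Computes CU = sum_c p(c) * [ sum_{i,j} p(f_i=v_ij|c)^2
                                 - sum_{i,j} p(f_i=v_ij)^2 ]
    with all probabilities estimated as empirical frequencies.
    """
    n = len(items)
    features = {f for _, fv in items for f in fv}
    # Baseline: sum of squared marginal feature-value probabilities
    baseline = 0.0
    for f in features:
        counts = Counter(fv[f] for _, fv in items)
        baseline += sum((c / n) ** 2 for c in counts.values())
    cu = 0.0
    for cat, n_c in Counter(c for c, _ in items).items():
        within = 0.0
        for f in features:
            counts = Counter(fv[f] for c, fv in items if c == cat)
            within += sum((k / n_c) ** 2 for k in counts.values())
        cu += (n_c / n) * (within - baseline)
    return cu

# Category perfectly predicts the feature -> positive utility
print(category_utility([("A", {"x": 0}), ("A", {"x": 0}),
                        ("B", {"x": 1}), ("B", {"x": 1})]))
# Feature independent of category -> utility 0
print(category_utility([("A", {"x": 0}), ("A", {"x": 1}),
                        ("B", {"x": 0}), ("B", {"x": 1})]))
```

The bracketed difference in the code is exactly the gap between the two expected-correct-guess quantities described in the text.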
Their difference therefore reflects the relative advantage accruing to the observer by having knowledge of the category structure. Information-theoretic definition of category utility The information-theoretic definition of category utility for a set of entities with size- binary feature set , and a binary category is given in as follows: where is the prior probability of an entity belonging to the positive category (in the absence of any feature information), is the conditional probability of an entity having feature given that the entity belongs to category , is likewise the conditional probability of an entity having feature given that the entity belongs to category , and is the prior probability of an entity possessing feature (in the absence of any category information). The intuition behind the above expression is as follows: The term represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category . Similarly, the term represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category . The sum of these two terms in the brackets is therefore the weighted average of these two costs. The final term, , represents the cost (in bits) of optimally encoding (or transmitting) feature information when no category information is available. The value of the category utility will, in the above formulation, be non-negative. Category utility and mutual information and mention that the category utility is equivalent to the mutual information. Here is a simple demonstration of the nature of this equivalence. Assume a set of entities each having the same features, i.e., feature set , with each feature variable having cardinality . 
That is, each feature has the capacity to adopt any of m distinct values (which need not be ordered; all variables can be nominal); for the special case m = 2 these features would be considered binary, but more generally, for any m, the features are simply m-ary. For the purposes of this demonstration, without loss of generality, the feature set can be replaced with a single aggregate variable F that has cardinality m^n, and adopts a unique value corresponding to each feature combination in the Cartesian product. (Ordinality does not matter, because the mutual information is not sensitive to ordinality.) In what follows, a term such as p(F = f) or simply p(f) refers to the probability with which F adopts the particular value f. (Using the aggregate feature variable replaces multiple summations, and simplifies the presentation to follow.) For this demonstration, also assume a single category variable C, which has cardinality k. This is equivalent to a classification system in which there are k non-intersecting categories. In the special case of k = 2 we recover the two-category case discussed above. From the definition of mutual information for discrete variables, the mutual information between the aggregate feature variable F and the category variable C is given by:

I(F; C) = Σ_f Σ_c p(f, c) log [ p(f, c) / (p(f) p(c)) ]

where p(f) is the prior probability of feature variable F adopting value f, p(c) is the marginal probability of category variable C adopting value c, and p(f, c) is the joint probability of variables F and C simultaneously adopting those respective values. In terms of the conditional probabilities this can be re-written (or defined) as

I(F; C) = Σ_f Σ_c p(f | c) p(c) log [ p(f | c) / p(f) ]

If the original definition of the category utility from above is rewritten in this notation, the resulting equation clearly has the same form as the equation above expressing the mutual information between the feature set and the category variable; the difference is that the sum in the category utility equation runs over independent binary variables, whereas the sum in the mutual information runs over values of the single m^n-ary variable F.
The two measures are actually equivalent then only when the features , are independent (and assuming that terms in the sum corresponding to are also added). Insensitivity of category utility to ordinality Like the mutual information, the category utility is not sensitive to any ordering in the feature or category variable values. That is, as far as the category utility is concerned, the category set {small,medium,large,jumbo} is not qualitatively different from the category set {desk,fish,tree,mop} since the formulation of the category utility does not account for any ordering of the class variable. Similarly, a feature variable adopting values {1,2,3,4,5} is not qualitatively different from a feature variable adopting values {fred,joe,bob,sue,elaine}. As far as the category utility or mutual information are concerned, all category and feature variables are nominal variables. For this reason, category utility does not reflect any gestalt aspects of "category goodness" that might be based on such ordering effects. One possible adjustment for this insensitivity to ordinality is given by the weighting scheme described in the article for mutual information. Category "goodness": models and philosophy This section provides some background on the origins of, and need for, formal measures of "category goodness" such as the category utility, and some of the history that lead to the development of this particular metric. What makes a good category? At least since the time of Aristotle there has been a tremendous fascination in philosophy with the nature of concepts and universals. What kind of entity is a concept such as "horse"? Such abstractions do not designate any particular individual in the world, and yet we can scarcely imagine being able to comprehend the world without their use. Does the concept "horse" therefore have an independent existence outside of the mind? If it does, then what is the locus of this independent existence? 
The question of locus was an important issue on which the classical schools of Plato and Aristotle famously differed. However, they remained in agreement that universals did indeed have a mind-independent existence. There was, therefore, always a fact to the matter about which concepts and universals exist in the world. In the late Middle Ages (perhaps beginning with Occam, although Porphyry also makes a much earlier remark indicating a certain discomfort with the status quo), however, the certainty that existed on this issue began to erode, and it became acceptable among the so-called nominalists and empiricists to consider concepts and universals as strictly mental entities or conventions of language. On this view of concepts—that they are purely representational constructs—a new question then comes to the fore: "Why do we possess one set of concepts rather than another?" What makes one set of concepts "good" and another set of concepts "bad"? This is a question that modern philosophers, and subsequently machine learning theorists and cognitive scientists, have struggled with for many decades. What purpose do concepts serve? One approach to answering such questions is to investigate the "role" or "purpose" of concepts in cognition. Thus the answer to "What are concepts good for in the first place?" by and many others is that classification (conception) is a precursor to induction: By imposing a particular categorization on the universe, an organism gains the ability to deal with physically non-identical objects or situations in an identical fashion, thereby gaining substantial predictive leverage (; ). As J.S. 
Mill puts it, From this base, Mill reaches the following conclusion, which foreshadows much subsequent thinking about category goodness, including the notion of category utility: One may compare this to the "category utility hypothesis" proposed by : "A category is useful to the extent that it can be expected to improve the ability of a person to accurately predict the features of instances of that category." Mill here seems to be suggesting that the best category structure is one in which object features (properties) are maximally informative about the object's class, and, simultaneously, the object class is maximally informative about the object's features. In other words, a useful classification scheme is one in which category knowledge can be used to accurately infer object properties, and property knowledge can be used to accurately infer object classes. One may also compare this idea to Aristotle's criterion of counter-predication for definitional predicates, as well as to the notion of concepts described in formal concept analysis.

Attempts at formalization

A variety of different measures have been suggested with an aim of formally capturing this notion of "category goodness," the best known of which is probably the "cue validity". Cue validity of a feature f with respect to category c is defined as the conditional probability of the category given the feature, p(c|f), or as the deviation of this conditional probability from the category base rate, p(c|f) − p(c). Clearly, these measures quantify only inference from feature to category (i.e., cue validity), but not from category to feature, i.e., the category validity p(f|c). Also, while the cue validity was originally intended to account for the demonstrable appearance of basic categories in human cognition—categories of a particular level of generality that are evidently preferred by human learners—a number of major flaws in the cue validity quickly emerged in this regard.
One attempt to address both problems by simultaneously maximizing both feature validity and category validity was made by in defining the "collocation index" as the product p(c|f) p(f|c), but this construction was fairly ad hoc (see ). The category utility was introduced as a more sophisticated refinement of the cue validity, which attempts to more rigorously quantify the full inferential power of a class structure. As shown above, on a certain view the category utility is equivalent to the mutual information between the feature variable and the category variable. It has been suggested that categories having the greatest overall category utility are not only the "best" in a normative sense, but also those human learners prefer to use, e.g., "basic" categories. Other related measures of category goodness are "cohesion" and "salience".

Applications

Category utility is used as the category evaluation measure in the popular conceptual clustering algorithm called COBWEB.

See also

Abstraction Concept learning Universals Unsupervised learning

References
Category utility
https://en.wikipedia.org/wiki/Glory%20hole%20%28petroleum%20production%29
A glory hole in the context of the offshore petroleum industry is an excavation into the sea floor designed to protect the wellhead equipment installed at the surface of a petroleum well from icebergs or pack ice. An economically attractive alternative for exploiting offshore petroleum resources is a floating platform; however, ice can pose a serious hazard to this solution. While floating platforms can be built to withstand ice loading up to a design threshold, for the largest icebergs or the thickest pack ice the only sensible alternative is to move out of the way. Floating platforms can be disconnected from the wellheads in order to allow them to be moved away from threatening ice, but the wellhead equipment is fixed in place and hence vulnerable. The keel of an iceberg or pack ice can extend far below the surface of the water. If this keel extends deep enough to make contact with the sea floor, it will scour the sea floor as the ice moves with the current. To protect the wellhead equipment from possible scouring, a glory hole is excavated into the sea floor. This excavation must be deep enough to allow adequate clearance between the top of the wellhead equipment and the surrounding sea floor. The resulting glory hole can be either open or cased. A cased glory hole utilizes steel casing as a retaining wall while an open glory hole is simply an excavation. Due to the cost of excavating individual glory holes, typically each glory hole will contain several wellheads. Locating multiple wellheads within a single glory hole is made possible by the use of directional drilling. Etymology The usage of the term glory hole in this context almost certainly is taken from its historical usage in the mining industry to refer to excavations. See also Glory hole (mining) External links DREDGING NEWS ONLINE: Boskalis covers itself in glory in Terra Nova field Oil platforms Petroleum production
Glory hole (petroleum production)
https://en.wikipedia.org/wiki/Sand%20table
A sand table uses constrained sand for modelling or educational purposes. The original version of a sand table may be the abax used by early Greek students. In the modern era, one common use for a sand table is to make terrain models for military planning and wargaming. Abax An abax was a table covered with sand commonly used by students, particularly in Greece, to perform studies such as writing, geometry, and calculations. The abax was the predecessor to the abacus. Objects, such as stones, were added for counting and then columns for place-valued arithmetic. The demarcation between an abax and an abacus seems to be poorly defined in history; moreover, modern definitions of the word abacus universally describe it as a frame with rods and beads and, in general, do not include the definition of "sand table". The sand table may well have been the predecessor to some board games. ("The word abax, or abacus, is used both for the reckoning-board with its counters and the play-board with its pieces, ..."). Abax is from the old Greek for "sand table". Ghubar An Arabic word for sand (or dust) is ghubar (or gubar), and Western numerals (the decimal digits 0–9) are derived from the style of digits written on ghubar tables in North-West Africa and Iberia, also described as the 'West Arabic' or 'gubar' style. Military use Sand tables have been used for military planning and wargaming for many years as a field expedient, small-scale map, and in training for military actions. In 1890 a Sand table room was built at the Royal Military College of Canada for use in teaching cadets military tactics; this replaced the old sand table room in a pre-college building, in which the weight of the sand had damaged the floor. The use of sand tables increasingly fell out of favour with improved maps, aerial and satellite photography, and later, with digital terrain simulations. 
More modern sand tables have incorporated Augmented Reality, such as the Augmented Reality Sandtable (ARES) developed by the Army Research Laboratory. Today, virtual and conventional sand tables are used in operations training. In 1991, "Special Forces teams discovered an elaborate sand-table model of the Iraqi military plan for the defense of Kuwait City. Four huge red arrows from the sea pointed at the coastline of Kuwait City and the huge defensive effort positioned there. Small fences of concertina wire marked the shoreline and models of artillery pieces lined the shore area. Throughout the city were plastic models of other artillery and air defense positions, while thin, red-painted strips of board designated supply routes and main highways." In 2006, Google Earth users looking at satellite photography of China found a "sand table" scale model several kilometres across, strikingly reminiscent of a mountainous region (Aksai Chin) which China occupies militarily in a disputed zone with India, 2400 km from the model's location. Speculation has been rife that the model is used for military familiarisation exercises.

Education

A sand table is a device useful for teaching in the early grades and for special needs children.

See also

Sandcastle

References and notes

Taylor, E. B., LL.D (1879), "The History of Games", Fortnightly Review, republished in The Eclectic Magazine, New York, W. H. Bidwell, ed., pp. 21–30

External links

The History of Computing Project
"abacus." The American Heritage Dictionary of the English Language, Fourth Edition. Houghton Mifflin Company, 2004. via Dictionary.com Retrieved 28 August 2007.
Sand table in "TACTICS, The Practical Art of Leading Troops in War" 1922

Writing media History of computing History of education History of mathematics Mathematical tools Cartography Sand
Sand table
https://en.wikipedia.org/wiki/Cosine%20similarity
In data analysis, cosine similarity is a measure of similarity between two non-zero vectors defined in an inner product space. Cosine similarity is the cosine of the angle between the vectors; that is, it is the dot product of the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval [-1, 1]. For example, two proportional vectors have a cosine similarity of 1, two orthogonal vectors have a similarity of 0, and two opposite vectors have a similarity of -1. In some contexts, the component values of the vectors cannot be negative, in which case the cosine similarity is bounded in [0, 1]. For example, in information retrieval and text mining, each word is assigned a different coordinate and a document is represented by the vector of the numbers of occurrences of each word in the document. Cosine similarity then gives a useful measure of how similar two documents are likely to be, in terms of their subject matter, and independently of the length of the documents. The technique is also used to measure cohesion within clusters in the field of data mining. One advantage of cosine similarity is its low complexity, especially for sparse vectors: only the non-zero coordinates need to be considered. Other names for cosine similarity include Orchini similarity and Tucker coefficient of congruence; the Otsuka–Ochiai similarity (see below) is cosine similarity applied to binary data.

Definition

The cosine of two non-zero vectors can be derived by using the Euclidean dot product formula: A · B = ‖A‖ ‖B‖ cos θ. Given two n-dimensional vectors of attributes, A and B, the cosine similarity, cos(θ), is represented using a dot product and magnitude as

cos(θ) = (A · B) / (‖A‖ ‖B‖) = Σᵢ Aᵢ Bᵢ / ( √(Σᵢ Aᵢ²) √(Σᵢ Bᵢ²) )

where Aᵢ and Bᵢ are the ith components of vectors A and B, respectively.
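A minimal implementation of this definition (plain Python, no external libraries) behaves exactly as the special cases above describe:

```python
import math

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their lengths
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 2], [2, 4]))    # proportional vectors -> ~1.0
print(cosine_similarity([1, 0], [0, 1]))    # orthogonal vectors   -> 0.0
print(cosine_similarity([1, 2], [-1, -2]))  # opposite vectors     -> ~-1.0
```

Note that scaling either argument by a positive constant leaves the result unchanged, which is the magnitude-independence property stated above.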
The resulting similarity ranges from -1, meaning exactly opposite, to 1, meaning exactly the same, with 0 indicating orthogonality or decorrelation, while in-between values indicate intermediate similarity or dissimilarity. For text matching, the attribute vectors A and B are usually the term frequency vectors of the documents. Cosine similarity can be seen as a method of normalizing document length during comparison. In the case of information retrieval, the cosine similarity of two documents will range from 0 to 1, since the term frequencies cannot be negative. This remains true when using TF-IDF weights. The angle between two term frequency vectors cannot be greater than 90°. If the attribute vectors are normalized by subtracting the vector means, the measure is called the centered cosine similarity and is equivalent to the Pearson correlation coefficient.

Cosine distance

When the distance between two unit-length vectors is defined to be the length of their vector difference, then ‖a − b‖ = √(2(1 − cos θ)). Nonetheless, the cosine distance is often defined without the square root or factor of 2: D_C = 1 − S_C, where S_C is the cosine similarity. It is important to note that, by virtue of being proportional to squared Euclidean distance, the cosine distance is not a true distance metric; it does not exhibit the triangle inequality property — or, more formally, the Schwarz inequality — and it violates the coincidence axiom. To repair the triangle inequality property while maintaining the same ordering, one can convert to Euclidean distance or angular distance. Alternatively, the triangular inequality that does work for angular distances can be expressed directly in terms of the cosines; see below.

Angular distance and similarity

The normalized angle, referred to as angular distance, between any two vectors is a formal distance metric and can be calculated from the cosine similarity.
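The failure of the triangle inequality for the cosine distance can be seen with a small numeric sketch (the vectors below are arbitrary illustrations):

```python
import math

def cosine_distance(a, b):
    # Cosine distance defined as 1 minus the cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

a, b, c = (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)
# The "detour" through c is shorter than the direct distance:
# d(a, b) = 1, while d(a, c) + d(c, b) = 2 * (1 - cos 45°) ≈ 0.586,
# so d(a, b) > d(a, c) + d(c, b), violating the triangle inequality.
print(cosine_distance(a, b), cosine_distance(a, c) + cosine_distance(c, b))
```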
The complement of the angular distance metric can then be used to define an angular similarity function bounded between 0 and 1, inclusive. When the vector elements may be positive or negative:

angular distance = arccos(cosine similarity) / π
angular similarity = 1 − angular distance

Or, if the vector elements are always positive:

angular distance = 2 · arccos(cosine similarity) / π
angular similarity = 1 − angular distance

Unfortunately, computing the inverse cosine (arccos) function is slow, making the use of the angular distance more computationally expensive than using the more common (but not metric) cosine distance above.

L2-normalized Euclidean distance

Another effective proxy for cosine distance can be obtained by normalisation of the vectors, followed by the application of normal Euclidean distance. Using this technique each term in each vector is first divided by the magnitude of the vector, yielding a vector of unit length. Then the Euclidean distance over the end-points of any two vectors is a proper metric which gives the same ordering as the cosine distance (a monotonic transformation of Euclidean distance; see below) for any comparison of vectors, and furthermore avoids the potentially expensive trigonometric operations required to yield a proper metric. Once the normalisation has occurred, the vector space can be used with the full range of techniques available to any Euclidean space, notably standard dimensionality reduction techniques. This normalised form distance is often used within many deep learning algorithms.

Otsuka–Ochiai coefficient

In biology, there is a similar concept known as the Otsuka–Ochiai coefficient, named after Yanosuke Otsuka (also spelled as Ōtsuka, Ootsuka or Otuka) and Akira Ochiai, also known as the Ochiai–Barkman or Ochiai coefficient, which can be represented as:

K = |A ∩ B| / √(|A| × |B|)

Here, A and B are sets, and |A| is the number of elements in A. If sets are represented as bit vectors, the Otsuka–Ochiai coefficient can be seen to be the same as the cosine similarity. It is identical to the score introduced by Godfrey Thomson.
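The coincidence of the two measures on binary data can be sketched directly (the example sets and the universe used for the bit-vector encoding are arbitrary):

```python
import math

def otsuka_ochiai(a, b):
    # K = |A ∩ B| / sqrt(|A| * |B|) for sets A and B
    return len(a & b) / math.sqrt(len(a) * len(b))

def cosine_similarity(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

A, B = {"w1", "w2", "w3"}, {"w2", "w3", "w4"}
# The same sets encoded as bit vectors over the universe {w1, w2, w3, w4}
u, v = [1, 1, 1, 0], [0, 1, 1, 1]
print(otsuka_ochiai(A, B))      # 2/3
print(cosine_similarity(u, v))  # 2/3 — identical on binary data
```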
In a recent book, the coefficient is tentatively misattributed to another Japanese researcher with the family name Otsuka. The confusion arises because in 1957 Akira Ochiai attributes the coefficient only to Otsuka (no first name mentioned) by citing an article by Ikuso Hamai, who in turn cites the original 1936 article by Yanosuke Otsuka.

Properties

The most noteworthy property of cosine similarity is that it reflects a relative, rather than absolute, comparison of the individual vector dimensions. For any positive constant a and vector V, the vectors V and aV are maximally similar. The measure is thus most appropriate for data where frequency is more important than absolute values; notably, term frequency in documents. However, more recent metrics with a grounding in information theory, such as Jensen–Shannon, SED, and triangular divergence, have been shown to have improved semantics in at least some contexts. Cosine similarity is related to Euclidean distance as follows. Denote Euclidean distance by the usual ‖A − B‖, and observe that

‖A − B‖² = ‖A‖² + ‖B‖² − 2 A · B (polarization identity)

by expansion. When A and B are normalized to unit length, ‖A‖² = ‖B‖² = 1, so this expression is equal to 2(1 − cos θ). In short, the cosine distance can be expressed in terms of Euclidean distance as D_C = ‖A − B‖² / 2 for unit-length vectors. The Euclidean distance ‖A − B‖ is called the chord distance (because it is the length of the chord on the unit circle) and it is the Euclidean distance between the vectors which were normalized to unit sum of squared values within them. Null distribution: For data which can be negative as well as positive, the null distribution for cosine similarity is the distribution of the dot product of two independent random unit vectors. This distribution has a mean of zero and a variance of 1/n (where n is the number of dimensions), and although the distribution is bounded between -1 and +1, as n grows large the distribution is increasingly well-approximated by the normal distribution.
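The relation between cosine distance and half the squared Euclidean distance of length-normalized vectors can be checked numerically; the vectors below are chosen arbitrarily:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

a, b = [3.0, 1.0, 2.0], [1.0, 4.0, 0.5]
an, bn = normalize(a), normalize(b)
sq_euclid = sum((x - y) ** 2 for x, y in zip(an, bn))
# Half the squared Euclidean distance of the normalized vectors
# equals the cosine distance of the original vectors.
print(cosine_distance(a, b), sq_euclid / 2)
```

This is also why L2-normalization followed by ordinary Euclidean distance preserves the cosine-distance ordering.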
For other types of data, such as bitstreams that take only the values 0 or 1, the null distribution takes a different form and may have a nonzero mean.

Triangle inequality for cosine similarity

The ordinary triangle inequality for angles (i.e., arc lengths on a unit hypersphere) gives us that

|θ_AC − θ_CB| ≤ θ_AB ≤ θ_AC + θ_CB

Because the cosine function decreases as an angle in radians increases, the sense of these inequalities is reversed when we take the cosine of each value:

cos(θ_AC + θ_CB) ≤ cos(θ_AB) ≤ cos(θ_AC − θ_CB)

Using the cosine addition and subtraction formulas, these two inequalities can be written in terms of the original cosines,

cos(θ_AC) cos(θ_CB) − sin(θ_AC) sin(θ_CB) ≤ cos(θ_AB) ≤ cos(θ_AC) cos(θ_CB) + sin(θ_AC) sin(θ_CB)

This form of the triangle inequality can be used to bound the minimum and maximum similarity of two objects A and B if the similarities to a reference object C are already known. This is used for example in metric data indexing, but has also been used to accelerate spherical k-means clustering in the same way the Euclidean triangle inequality has been used to accelerate regular k-means.

Soft cosine measure

A soft cosine ("soft" similarity) between two vectors considers similarities between pairs of features. The traditional cosine similarity considers the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in VSM, which helps generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity. For example, in the field of natural language processing (NLP) the similarity among features is quite intuitive. Features such as words, n-grams, or syntactic n-grams can be quite similar, though formally they are considered as different features in the VSM. For example, the words "play" and "game" are different words and thus mapped to different points in VSM; yet they are semantically related. In the case of n-grams or syntactic n-grams, Levenshtein distance can be applied (in fact, Levenshtein distance can be applied to words as well).
For calculating soft cosine, a matrix s is used to indicate similarity between features. It can be calculated through Levenshtein distance, WordNet similarity, or other similarity measures. Then we just multiply by this matrix. Given two N-dimension vectors a and b, the soft cosine similarity is calculated as follows:

soft_cosine(a, b) = Σ_{i,j} s_ij a_i b_j / ( √(Σ_{i,j} s_ij a_i a_j) √(Σ_{i,j} s_ij b_i b_j) )

where s_ij = similarity(feature_i, feature_j). If there is no similarity between features (s_ii = 1, s_ij = 0 for i ≠ j), the given equation is equivalent to the conventional cosine similarity formula. The time complexity of this measure is quadratic, which makes it applicable to real-world tasks. Note that the complexity can be reduced to subquadratic. An efficient implementation of such soft cosine similarity is included in the Gensim open source library.

See also

Sørensen–Dice coefficient Hamming distance Correlation Jaccard index SimRank Information retrieval

References

External links

Weighted cosine measure
A tutorial on cosine similarity using Python

Information retrieval techniques Similarity measures Data analysis
Cosine similarity
https://en.wikipedia.org/wiki/Relative%20permeability
In multiphase flow in porous media, the relative permeability of a phase is a dimensionless measure of the effective permeability of that phase. It is the ratio of the effective permeability of that phase to the absolute permeability. It can be viewed as an adaptation of Darcy's law to multiphase flow. For two-phase flow in porous media given steady-state conditions, we can write

q_i = −(k_i / μ_i) ∇P_i

where q_i is the flux, ∇P_i is the pressure drop, and μ_i is the viscosity. The subscript i indicates that the parameters are for phase i. k_i is here the phase permeability (i.e., the effective permeability of phase i), as observed through the equation above. Relative permeability, k_ri, for phase i is then defined from k_i = k_ri k, as

k_ri = k_i / k

where k is the permeability of the porous medium in single-phase flow, i.e., the absolute permeability. Relative permeability must be between zero and one. In applications, relative permeability is often represented as a function of water saturation; however, owing to capillary hysteresis one often resorts to a function or curve measured under drainage and another measured under imbibition. Under this approach, the flow of each phase is inhibited by the presence of the other phases. Thus the sum of relative permeabilities over all phases is less than 1. However, apparent relative permeabilities larger than 1 have been obtained since the Darcean approach disregards the viscous coupling effects derived from momentum transfer between the phases (see assumptions below). This coupling could enhance the flow instead of inhibiting it. This has been observed in heavy oil petroleum reservoirs when the gas phase flows as bubbles or patches (disconnected).

Modelling assumptions

The above form for Darcy's law is sometimes also called Darcy's extended law, formulated for horizontal, one-dimensional, immiscible multiphase flow in homogeneous and isotropic porous media.
The interactions between the fluids are neglected, so this model assumes that the solid porous media and the other fluids form a new porous matrix through which a phase can flow, implying that the fluid-fluid interfaces remain static in steady-state flow, which is not true, but this approximation has proven useful anyway. Each of the phase saturations must be larger than the irreducible saturation, and each phase is assumed continuous within the porous medium. Based on data from special core analysis laboratory (SCAL) experiments, simplified models of relative permeability as a function of saturation (e.g. water saturation) can be constructed. This article will focus on an oil-water system. Saturation scaling The water saturation \( S_w \) is the fraction of the pore volume that is filled with water, and similarly for the oil saturation \( S_o \). Thus, saturations are themselves scaled properties or variables. This gives the constraint \( S_w + S_o = 1 \). The model functions or correlations for relative permeabilities in an oil-water system are therefore usually written as functions of only water saturation, and this makes it natural to select water saturation as the horizontal axis in graphical presentations. Let \( S_{wir} \) (also denoted \( S_{wc} \) and sometimes \( S_{wi} \)) be the irreducible (or minimal or connate) water saturation, and let \( S_{orw} \) be the residual (minimal) oil saturation after water flooding (imbibition). The flowing water saturation window in a water invasion / injection / imbibition process is bounded by a minimum value \( S_{wir} \) and a maximum value \( 1 - S_{orw} \). In mathematical terms the flowing saturation window is written as \( S_{wir} \leq S_w \leq 1 - S_{orw} \). By scaling the water saturation to the flowing saturation window, we get a (new or another) normalized water saturation value \( S_{wn} = \frac{S_w - S_{wir}}{1 - S_{wir} - S_{orw}} \) and a normalized oil saturation value \( S_{on} = \frac{S_o - S_{orw}}{1 - S_{wir} - S_{orw}} = 1 - S_{wn} \). Endpoints Let \( K_{row} \) be oil relative permeability, and let \( K_{rw} \) be water relative permeability. There are two ways of scaling phase permeability (i.e. effective permeability of the phase). If we scale phase permeability w.r.t. absolute water permeability (i.e. the single-phase water permeability), we get an endpoint parameter for both oil and water relative permeability. If we scale phase permeability w.r.t. oil permeability with irreducible water saturation present, the oil endpoint is one, and we are left with only the water endpoint parameter. In order to satisfy both options in the mathematical model, it is common to use two endpoint symbols in the model for two-phase relative permeability. The endpoint parameters of oil and water relative permeability are \( K_{row}^{end} = K_{row}(S_{wir}) \) and \( K_{rw}^{end} = K_{rw}(S_{orw}) \). These symbols have their merits and limits. The symbol \( K_{row}^{end} \) emphasizes that it represents the top point of \( K_{row} \). It occurs at irreducible water saturation, and it is the largest value of \( K_{row} \) that can occur for initial water saturation. A competing endpoint symbol occurs in imbibition flow in oil-gas systems. If the permeability basis is oil with irreducible water present, then \( K_{row}^{end} = 1 \). The symbol \( K_{rw}^{end} \) emphasizes that it is occurring at the residual oil saturation. An alternative symbol to \( K_{rw}^{end} \) is \( K_{rw}^{o} \), which emphasizes that the reference permeability is oil permeability with irreducible water present. The oil and water relative permeability models are then written as \( K_{row} = K_{row}^{end} \, \tilde{K}_{row}(S_{wn}) \) and \( K_{rw} = K_{rw}^{end} \, \tilde{K}_{rw}(S_{wn}) \). The functions \( \tilde{K}_{row} \) and \( \tilde{K}_{rw} \) are called normalised relative permeabilities or shape functions for oil and water, respectively. The endpoint parameters \( K_{row}^{end} \) and \( K_{rw}^{end} \) (which is a simplification of \( K_{rw}(S_{orw}) \)) are physical properties that are obtained either before or together with the optimization of shape parameters present in the shape functions. There are often many symbols in articles that discuss relative permeability models and modelling. A number of busy core analysts, reservoir engineers and scientists often skip using tedious and time-consuming subscripts, and write e.g. Krow instead of \( K_{row} \) or krow or oil relative permeability. A variety of symbols are therefore to be expected, and accepted as long as they are explained or defined. The effects that slip or no-slip boundary conditions in pore flow have on endpoint parameters are discussed by Berg et al. 
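The saturation scaling above can be sketched in a few lines of Python. Here Swir and Sorw denote the irreducible water saturation and the residual oil saturation after water flooding; the numeric values in the example are illustrative only, not from the article:

```python
def normalized_water_saturation(sw, swir, sorw):
    """Scale Sw to the flowing window [Swir, 1 - Sorw], giving Swn in [0, 1].

    Swir: irreducible water saturation; Sorw: residual oil saturation
    after water flooding.  Son = 1 - Swn is the normalized oil saturation.
    """
    if not (0.0 <= swir < 1.0 and 0.0 <= sorw < 1.0 and swir + sorw < 1.0):
        raise ValueError("need 0 <= Swir, Sorw and Swir + Sorw < 1")
    return (sw - swir) / (1.0 - swir - sorw)

# Example: Swir = 0.2, Sorw = 0.3 gives the flowing window [0.2, 0.7];
# its midpoint Sw = 0.45 maps to Swn = 0.5.
swn = normalized_water_saturation(0.45, 0.2, 0.3)
```

The endpoints of the window map to Swn = 0 and Swn = 1, which is what lets the shape functions be written on a common [0, 1] axis.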
Corey-model An often used approximation of relative permeability is the Corey correlation, which is a power law in the normalized saturation. The Corey correlations of the relative permeability for oil and water are then \( K_{row} = K_{row}^{end} \left(1 - S_{wn}\right)^{N_o} \) and \( K_{rw} = K_{rw}^{end} \, S_{wn}^{N_w} \). If the permeability basis is normal oil with irreducible water present, then \( K_{row}^{end} = 1 \). The empirical parameters \( N_o \) and \( N_w \) are called curve shape parameters or simply shape parameters, and they can be obtained from measured data either by analytical interpretation of measured data, or by optimization using a core flow numerical simulator to match the experiment (often called history matching). \( N_o = N_w = 2 \) is sometimes appropriate. The physical properties \( K_{row}^{end} \) and \( K_{rw}^{end} \) are obtained either before or together with the optimizing of \( N_o \) and \( N_w \). In case of a gas-water system or a gas-oil system there are Corey correlations similar to the oil-water relative permeability correlations shown above. LET-model The Corey correlation or Corey model has only one degree of freedom for the shape of each relative permeability curve, the shape parameter N. The LET correlation adds more degrees of freedom in order to accommodate the shape of relative permeability curves in SCAL experiments and in 3D reservoir models that are adjusted to match historic production. These adjustments frequently include relative permeability curves and endpoints. The LET-type approximation is described by 3 parameters L, E, T. The correlation for water and oil relative permeability with water injection is thus \( K_{rw} = \frac{K_{rw}^{end} \, S_{wn}^{L_w}}{S_{wn}^{L_w} + E_w \left(1 - S_{wn}\right)^{T_w}} \) and \( K_{row} = \frac{K_{row}^{end} \left(1 - S_{wn}\right)^{L_o}}{\left(1 - S_{wn}\right)^{L_o} + E_o \, S_{wn}^{T_o}} \), written using the same normalization as for Corey. Only \( S_{wir} \), \( S_{orw} \), \( K_{rw}^{end} \) and \( K_{row}^{end} \) have direct physical meaning, while the parameters L, E and T are empirical. The parameter L describes the lower part of the curve, and by similarity and experience the L-values are comparable to the appropriate Corey parameter. The parameter T describes the upper part (or the top part) of the curve in a similar way that the L-parameter describes the lower part of the curve. The parameter E describes the position of the slope (or the elevation) of the curve. 
A value of one is a neutral value, and the position of the slope is governed by the L- and T-parameters. Increasing the value of the E-parameter pushes the slope towards the high end of the curve. Decreasing the value of the E-parameter pushes the slope towards the lower end of the curve. Experience using the LET correlation indicates the following reasonable ranges for the parameters L, E, and T: L ≥ 0.1, E > 0 and T ≥ 0.1. In case of a gas-water system or a gas-oil system there are LET correlations similar to the oil-water relative permeability correlations shown above. Evaluations After Morris Muskat and colleagues established the concept of relative permeability in the late 1930s, the number of correlations, i.e. models, for relative permeability has steadily increased. This creates a need for evaluation of the most common correlations at the current time. Two of the latest (per 2019) and most thorough evaluations were done by Moghadasi et al. and by Sakhaei et al. Moghadasi et al. evaluated the Corey, Chierici and LET correlations for oil/water relative permeability using a sophisticated method that takes into account the number of uncertain model parameters. They found that LET, with the largest number (three) of uncertain parameters, was clearly the best one for both oil and water relative permeability. Sakhaei et al. evaluated 10 common and widely used relative permeability correlations for gas/oil and gas/condensate systems, and found that LET showed best agreement with experimental values for both gas and oil/condensate relative permeability. See also TEM-function Permeability (earth sciences) Capillary pressure Imbibition Drainage Buckley–Leverett equation References External links Relative Permeability Curves Fluid dynamics Porous media
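The Corey and LET correlations described above can be sketched as plain functions of the normalized water saturation. This is a minimal sketch: the endpoint values and shape parameters used as defaults are illustrative, not values from the article.

```python
def corey_krw(swn, krw_end=0.6, nw=2.0):
    """Water relative permeability: Corey power law in normalized saturation."""
    return krw_end * swn**nw

def corey_kro(swn, kro_end=1.0, no=2.0):
    """Oil relative permeability: Corey power law."""
    return kro_end * (1.0 - swn)**no

def let_krw(swn, krw_end=0.6, L=2.0, E=1.0, T=1.5):
    """LET correlation for water: L, E, T give more shape freedom than Corey."""
    return krw_end * swn**L / (swn**L + E * (1.0 - swn)**T)

def let_kro(swn, kro_end=1.0, L=2.0, E=1.0, T=1.5):
    """LET correlation for oil, same normalization as for Corey."""
    return kro_end * (1.0 - swn)**L / ((1.0 - swn)**L + E * swn**T)
```

Both families pass through the endpoints by construction: the water curves are zero at Swn = 0 and reach their endpoint value at Swn = 1, and the oil curves do the reverse, which is the behaviour the endpoint parameters encode.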
Relative permeability
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,021
[ "Physical phenomena", "Physical quantities", "Porous media", "Quantity", "Chemical engineering", "Materials science", "Piping", "Physical properties", "Fluid dynamics" ]
8,967,165
https://en.wikipedia.org/wiki/Multibody%20system
Multibody system is the study of the dynamic behavior of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements. Introduction The systematic treatment of the dynamic behavior of interconnected bodies has led to a large number of important multibody formalisms in the field of mechanics. The simplest bodies or elements of a multibody system were treated by Newton (free particle) and Euler (rigid body). Euler introduced reaction forces between bodies. Later, a series of formalisms was derived, notably Lagrange's formalisms based on minimal coordinates and a second formulation that introduces constraints. Basically, the motion of bodies is described by their kinematic behavior. The dynamic behavior results from the equilibrium of applied forces and the rate of change of momentum. Nowadays, the term multibody system is related to a large number of engineering fields of research, especially in robotics and vehicle dynamics. As an important feature, multibody system formalisms usually offer an algorithmic, computer-aided way to model, analyze, simulate and optimize the arbitrary motion of possibly thousands of interconnected bodies. Applications While single bodies or parts of a mechanical system are studied in detail with finite element methods, the behavior of the whole multibody system is usually studied with multibody system methods within the following areas: Aerospace engineering (helicopter, landing gears, behavior of machines under different gravity conditions) Biomechanics Combustion engine, gears and transmissions, chain drive, belt drive Dynamic simulation Hoist, conveyor, paper mill Military applications Particle simulation (granular media, sand, molecules) Physics engine Robotics Vehicle simulation (vehicle dynamics, rapid prototyping of vehicles, improvement of stability, comfort optimization, improvement of efficiency, ...) Example The following example shows a typical multibody system. 
It is usually denoted as a slider-crank mechanism. The mechanism is used to transform rotational motion into translational motion by means of a rotating driving beam, a connection rod and a sliding body. In the present example, a flexible body is used for the connection rod. The sliding mass is not allowed to rotate and three revolute joints are used to connect the bodies. While each body has six degrees of freedom in space, the kinematical conditions lead to one degree of freedom for the whole system. Concept A body is usually considered to be a rigid or flexible part of a mechanical system (not to be confused with the human body). An example of a body is the arm of a robot, a wheel or axle in a car or the human forearm. A link is the connection of two or more bodies, or a body with the ground. The link is defined by certain (kinematical) constraints that restrict the relative motion of the bodies. Typical constraints are: cardan joint or universal joint; 4 kinematical constraints prismatic joint; relative displacement along one axis is allowed, constrains relative rotation; implies 5 kinematical constraints revolute joint; only one relative rotation is allowed; implies 5 kinematical constraints; see the example above spherical joint; constrains relative displacements in one point, relative rotation is allowed; implies 3 kinematical constraints There are two important terms in multibody systems: degree of freedom and constraint condition. Degree of freedom The degrees of freedom denote the number of independent kinematical possibilities to move. In other words, degrees of freedom are the minimum number of parameters required to completely define the position of an entity in space. A rigid body has six degrees of freedom in the case of general spatial motion, three of them translational degrees of freedom and three rotational degrees of freedom. 
In the case of planar motion, a body has only three degrees of freedom, with only one rotational and two translational degrees of freedom. The degrees of freedom in planar motion can be easily demonstrated using a computer mouse. The degrees of freedom are: left-right, forward-backward and the rotation about the vertical axis. Constraint condition A constraint condition implies a restriction in the kinematical degrees of freedom of one or more bodies. The classical constraint is usually an algebraic equation that defines the relative translation or rotation between two bodies. It is furthermore possible to constrain the relative velocity between two bodies, or between a body and the ground. This is, for example, the case for a rolling disc, where the point of the disc that contacts the ground always has zero relative velocity with respect to the ground. If the velocity constraint condition cannot be integrated in time to form a position constraint, it is called non-holonomic. This is the case for the general rolling constraint. In addition to that there are non-classical constraints that might even introduce a new unknown coordinate, such as a sliding joint, where a point of a body is allowed to move along the surface of another body. In the case of contact, the constraint condition is based on inequalities and therefore such a constraint does not permanently restrict the degrees of freedom of bodies. Equations of motion The equations of motion are used to describe the dynamic behavior of a multibody system. Each multibody system formulation may lead to a different mathematical appearance of the equations of motion while the physics behind them is the same. The motion of the constrained bodies is described by means of equations that result basically from Newton's second law. The equations are written for general motion of the single bodies with the addition of constraint conditions. 
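The bookkeeping described in the Concept section (six degrees of freedom per spatial body, minus the kinematical constraints implied by each joint) can be sketched in a few lines of Python. This is a minimal sketch for open kinematic chains; mechanisms with kinematic loops, like the slider-crank above, can be overconstrained, in which case this naive count no longer applies.

```python
# Kinematical constraints implied by each joint type, as listed above
# (spatial mechanisms).
JOINT_CONSTRAINTS = {"revolute": 5, "prismatic": 5, "cardan": 4, "spherical": 3}

def spatial_dof(n_moving_bodies, joints):
    """Naive degree-of-freedom count for an open kinematic chain:
    six DOF per rigid body minus the constraints of each joint."""
    return 6 * n_moving_bodies - sum(JOINT_CONSTRAINTS[j] for j in joints)

# One body on a revolute joint: 6 - 5 = 1 DOF (a hinge).
# One body on a spherical joint: 6 - 3 = 3 DOF (a ball joint).
```

A serial chain of two bodies connected by two revolute joints gives 12 - 10 = 2 degrees of freedom, matching a spatial double hinge.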
Usually the equations of motion are derived from the Newton-Euler equations or Lagrange's equations. The motion of rigid bodies is described by means of \( \mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} - \mathbf{Q}_v + \mathbf{C}_{\mathbf{q}}^{T}\boldsymbol{\lambda} = \mathbf{F} \) (1) and \( \mathbf{C}(\mathbf{q}, t) = 0 \) (2). These types of equations of motion are based on so-called redundant coordinates, because the equations use more coordinates than degrees of freedom of the underlying system. The generalized coordinates are denoted by \( \mathbf{q} \); the mass matrix is represented by \( \mathbf{M}(\mathbf{q}) \), which may depend on the generalized coordinates. \( \mathbf{C} \) represents the constraint conditions and the matrix \( \mathbf{C}_{\mathbf{q}} \) (sometimes termed the Jacobian) is the derivative of the constraint conditions with respect to the coordinates. This matrix is used to apply constraint forces to the corresponding equations of the bodies. The components of the vector \( \boldsymbol{\lambda} \) are also denoted as Lagrange multipliers. In a rigid body, possible coordinates could be split into two parts, \( \mathbf{q} = \left[\mathbf{x}^T \; \boldsymbol{\psi}^T\right]^T \), where \( \mathbf{x} \) represents translations and \( \boldsymbol{\psi} \) describes the rotations. Quadratic velocity vector In the case of rigid bodies, the so-called quadratic velocity vector \( \mathbf{Q}_v \) is used to describe Coriolis and centrifugal terms in the equations of motion. The name reflects that \( \mathbf{Q}_v \) includes quadratic terms of velocities; it results from partial derivatives of the kinetic energy of the body. Lagrange multipliers The Lagrange multiplier is related to a constraint condition and usually represents a force or a moment, which acts in "direction" of the constraint degree of freedom. The Lagrange multipliers do no "work", as compared to external forces that change the potential energy of a body. Minimal coordinates The equations of motion (1, 2) are represented by means of redundant coordinates, meaning that the coordinates are not independent. This can be exemplified by the slider-crank mechanism shown above, where each body has six degrees of freedom while most of the coordinates are dependent on the motion of the other bodies. For example, 18 coordinates and 17 constraints could be used to describe the motion of the slider-crank with rigid bodies. 
However, as there is only one degree of freedom, the equation of motion could also be represented by means of one equation and one degree of freedom, using e.g. the angle of the driving link as degree of freedom. The latter formulation then has the minimum number of coordinates needed to describe the motion of the system and can thus be called a minimal coordinates formulation. The transformation of redundant coordinates to minimal coordinates is sometimes cumbersome and only possible in the case of holonomic constraints and without kinematical loops. Several algorithms have been developed for the derivation of minimal coordinate equations of motion, notably the so-called recursive formulation. The resulting equations are easier to solve because, in the absence of constraint conditions, standard time integration methods can be used to integrate the equations of motion in time. While the reduced system might be solved more efficiently, the transformation of the coordinates might be computationally expensive. In very general multibody system formulations and software systems, redundant coordinates are used in order to make the systems user-friendly and flexible. Flexible multibody There are several cases in which it is necessary to consider the flexibility of the bodies, for example in cases where flexibility plays a fundamental role in kinematics, as well as in compliant mechanisms. Flexibility can be taken into account in different ways. 
There are three main approaches: Discrete flexible multibody: the flexible body is divided into a set of rigid bodies connected by elastic stiffnesses representative of the body's elasticity. Modal condensation: elasticity is described through a finite number of modes of vibration of the body, exploiting the degrees of freedom linked to the amplitudes of the modes. Full flex: all the flexibility of the body is taken into account by discretizing the body into sub-elements whose individual displacements are linked through elastic material properties. See also Dynamic simulation Multibody simulation (solution techniques) Physics engine References J. Wittenburg, Dynamics of Systems of Rigid Bodies, Teubner, Stuttgart (1977). J. Wittenburg, Dynamics of Multibody Systems, Berlin, Springer (2008). K. Magnus, Dynamics of multibody systems, Springer Verlag, Berlin (1978). P.E. Nikravesh, Computer-Aided Analysis of Mechanical Systems, Prentice-Hall (1988). E.J. Haug, Computer-Aided Kinematics and Dynamics of Mechanical Systems, Allyn and Bacon, Boston (1989). H. Bremer and F. Pfeiffer, Elastische Mehrkörpersysteme, B. G. Teubner, Stuttgart, Germany (1992). J. García de Jalón, E. Bayo, Kinematic and Dynamic Simulation of Multibody Systems - The Real-Time Challenge, Springer-Verlag, New York (1994). A.A. Shabana, Dynamics of multibody systems, Second Edition, John Wiley & Sons (1998). M. Géradin, A. Cardona, Flexible multibody dynamics – A finite element approach, Wiley, New York (2001). E. Eich-Soellner, C. Führer, Numerical Methods in Multibody Dynamics, Teubner, Stuttgart, 1998 (reprint Lund, 2008). T. Wasfy and A. Noor, "Computational strategies for flexible multibody systems," ASME. Appl. Mech. Rev. 2003;56(6):553-613. External links http://real.uwaterloo.ca/~mbody/ Collected links of John McPhee Mechanics Dynamical systems
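The redundant-coordinate equations of motion (1, 2) can be illustrated with a planar pendulum treated as a constrained point mass. This is a minimal sketch: the pendulum example and all numeric values are illustrative and not from the article, and the Lagrange multiplier is eliminated in closed form rather than with a general linear solver.

```python
# Planar pendulum with redundant coordinates q = (x, y), pivot at the origin,
# and the constraint C(q) = x^2 + y^2 - L^2 = 0.  This is the structure of
# equations (1)-(2): M*qddot + J^T*lam = f together with J*qddot = gamma,
# where J = dC/dq = [2x, 2y].  All numbers are illustrative.
m, g, L = 1.0, 9.81, 2.0

def accelerations(x, y, vx, vy):
    """Return (ax, ay, lam) for one time instant.

    Differentiating C twice gives J*qddot = -2*(vx^2 + vy^2) =: gamma.
    Eliminating qddot from the saddle-point system yields lam in closed
    form (Schur complement of the mass matrix).
    """
    fx, fy = 0.0, -m * g                      # applied force: gravity only
    gamma = -2.0 * (vx * vx + vy * vy)
    lam = (2 * x * fx + 2 * y * fy - m * gamma) / (4.0 * (x * x + y * y))
    ax = (fx - 2 * x * lam) / m               # M*qddot = f - J^T*lam
    ay = (fy - 2 * y * lam) / m
    return ax, ay, lam

# At the lowest point (0, -L) with horizontal speed v, the constraint force
# |J^T * lam| = 2*L*|lam| reproduces the classic string tension m*(g + v^2/L).
ax, ay, lam = accelerations(0.0, -L, 3.0, 0.0)
```

In larger systems the same saddle-point structure is assembled for all bodies and solved numerically at each time step instead of being eliminated by hand.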
Multibody system
[ "Physics", "Mathematics", "Engineering" ]
2,238
[ "Mechanics", "Mechanical engineering", "Dynamical systems" ]
8,967,469
https://en.wikipedia.org/wiki/Geodetic%20effect
The geodetic effect (also known as geodetic precession, de Sitter precession or de Sitter effect) represents the effect of the curvature of spacetime, predicted by general relativity, on a vector carried along with an orbiting body. For example, the vector could be the angular momentum of a gyroscope orbiting the Earth, as carried out by the Gravity Probe B experiment. The geodetic effect was first predicted by Willem de Sitter in 1916, who provided relativistic corrections to the Earth–Moon system's motion. De Sitter's work was extended in 1918 by Jan Schouten and in 1920 by Adriaan Fokker. It can also be applied to a particular secular precession of astronomical orbits, equivalent to the rotation of the Laplace–Runge–Lenz vector. The term geodetic effect has two slightly different meanings as the moving body may be spinning or non-spinning. Non-spinning bodies move in geodesics, whereas spinning bodies move in slightly different orbits. The difference between de Sitter precession and Lense–Thirring precession (frame dragging) is that the de Sitter effect is due simply to the presence of a central mass, whereas Lense–Thirring precession is due to the rotation of the central mass. The total precession is calculated by combining the de Sitter precession with the Lense–Thirring precession. Experimental confirmation The geodetic effect was verified to a precision of better than 0.5% by Gravity Probe B, an experiment which measures the tilting of the spin axis of gyroscopes in orbit about the Earth. The first results were announced on April 14, 2007, at the meeting of the American Physical Society. Formulae To derive the precession, assume the system is in a rotating Schwarzschild metric. The nonrotating metric is \( ds^2 = \left(1 - \frac{2m}{r}\right) dt^2 - \frac{dr^2}{1 - 2m/r} - r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right) \), where c = G = 1. We introduce a rotating coordinate system, with an angular velocity \( \omega \), such that a satellite in a circular orbit in the θ = π/2 plane remains at rest. 
This gives us the transformation \( \phi \rightarrow \phi + \omega t \). In this coordinate system, an observer at radial position r sees a vector positioned at r as rotating with angular frequency ω. This observer, however, sees a vector positioned at some other value of r as rotating at a different rate, due to relativistic time dilation. Transforming the Schwarzschild metric into the rotating frame, and assuming that \( \theta \) is a constant, we find \( ds^2 = \left(1 - \frac{2m}{r} - r^2 \omega^2 \beta^2\right) dt^2 - 2 r^2 \omega \beta^2 \, dt \, d\phi - \frac{dr^2}{1 - 2m/r} - r^2 \beta^2 \, d\phi^2 \) with \( \beta = \sin\theta \). For a body orbiting in the θ = π/2 plane, we will have β = 1, and the body's world-line will maintain constant spatial coordinates for all time. Now, the metric is in the canonical form \( ds^2 = e^{2\Phi}\left(dt - w_i \, dx^i\right)^2 - k_{ij} \, dx^i \, dx^j \). From this canonical form, we can easily determine the rotational rate of a gyroscope in proper time; for free falling observers there is no acceleration, and the condition this imposes leads to \( \omega = \sqrt{\frac{m}{r^3}} \). This is essentially Kepler's law of periods, which happens to be relativistically exact when expressed in terms of the time coordinate t of this particular rotating coordinate system. In the rotating frame, the satellite remains at rest, but an observer aboard the satellite sees the gyroscope's angular momentum vector precessing at the rate ω. This observer also sees the distant stars as rotating, but they rotate at a slightly different rate due to time dilation. Let τ be the gyroscope's proper time. Then \( \frac{d\tau}{dt} = \sqrt{1 - \frac{2m}{r} - \frac{m}{r}} = \sqrt{1 - \frac{3m}{r}} \). The −2m/r term is interpreted as the gravitational time dilation, while the additional −m/r is due to the rotation of this frame of reference. Let α' be the accumulated precession in the rotating frame. Since \( \alpha' = -\omega\tau \), the precession over the course of one orbit, relative to the distant stars, is given by: \( \alpha = \alpha' + 2\pi = 2\pi\left(1 - \sqrt{1 - \frac{3m}{r}}\right) \). With a first-order Taylor series we find \( \alpha \approx \frac{3\pi m}{r} \). Thomas precession One can attempt to break down the de Sitter precession into a kinematic effect called Thomas precession combined with a geometric effect caused by gravitationally curved spacetime. 
At least one author does describe it this way, but others state that "The Thomas precession comes into play for a gyroscope on the surface of the Earth ..., but not for a gyroscope in a freely moving satellite." An objection to the former interpretation is that the Thomas precession required has the wrong sign. The Fermi-Walker transport equation gives both the geodetic effect and Thomas precession and describes the transport of the spin 4-vector for accelerated motion in curved spacetime. The spin 4-vector is orthogonal to the velocity 4-vector. Fermi-Walker transport preserves this relation. If there is no acceleration, Fermi-Walker transport is just parallel transport along a geodesic and gives the spin precession due to the geodetic effect. For the acceleration due to uniform circular motion in flat Minkowski spacetime, Fermi Walker transport gives the Thomas precession. See also Frame-dragging Geodesics in general relativity Gravity well Timeline of gravitational physics and relativity Notes References Wolfgang Rindler (2006) Relativity: special, general, and cosmological (2nd Ed.), Oxford University Press, External links Gravity Probe B websites at NASA and Stanford University Precession in Curved Space "The Geodetic Effect" Geodetic Effect General relativity
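The magnitude of the effect for a low Earth orbit can be estimated from the standard first-order result for geodetic precession, roughly 3πGM/(c²r) radians per orbit (restoring units with m = GM/c²), together with the Keplerian period. The orbit radius below is an assumed value, roughly that of Gravity Probe B (~642 km altitude):

```python
import math

# Geodetic (de Sitter) precession rate for a circular orbit.
GM = 3.986004e14          # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8          # speed of light, m/s
r = 7.02e6                # orbit radius, m (assumed, roughly Gravity Probe B)

alpha = 3.0 * math.pi * GM / (c * c * r)         # radians per orbit
period = 2.0 * math.pi * math.sqrt(r**3 / GM)    # seconds per orbit
rad_per_sec = alpha / period
rate_arcsec_per_year = rad_per_sec * 3.15576e7 * (180.0 / math.pi) * 3600.0
# about 6.6 arcseconds per year, the order of magnitude measured by Gravity Probe B
```

This is far larger than the Lense–Thirring (frame-dragging) contribution for the same orbit, which is why Gravity Probe B could resolve the geodetic term much more precisely.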
Geodetic effect
[ "Physics" ]
1,123
[ "General relativity", "Theory of relativity" ]
8,967,572
https://en.wikipedia.org/wiki/Polyisoprene
Polyisoprene is, strictly speaking, a collective name for polymers that are produced by polymerization of isoprene. In practice, polyisoprene is commonly used to refer to synthetic cis-1,4-polyisoprene, made by the industrial polymerisation of isoprene. Natural forms of polyisoprene are also used in substantial quantities, the most important being "natural rubber" (mostly cis-1,4-polyisoprene), which is derived from the sap of trees. Both synthetic polyisoprene and natural rubber are highly elastic and consequently used to make tires and a variety of other applications. The trans isomer, which is much harder than the cis isomer, has also seen significant use in the past. It too has been synthesised and extracted from plant sap, the latter resin being known as gutta-percha. These were widely used as an electrical insulator and as components of golf balls. Annual worldwide production of synthetic polyisoprene was 13 million tons in 2007 and 16 million tons in 2020. Synthesis In principle, the polymerization of isoprene can result in four different isomers. The relative amount of each isomer in the polymer depends on the mechanism of the polymerization reaction. Anionic chain polymerization, which is initiated by n-butyllithium, produces polyisoprene dominated by cis-1,4 units: 90–92% of repeating units are cis-1,4-, 2–3% trans-1,4- and 6–7% 3,4-units. Coordinative chain polymerization: With the Ziegler–Natta catalyst TiCl4/Al(i-C4H9)3, a more pure cis-1,4-polyisoprene similar to natural rubber is formed. With the Ziegler–Natta catalyst VCl3/Al(i-C4H9)3, trans-dominant polyisoprene is formed. Polyisoprene dominated by 1,2- and 3,4-units is produced with a MoO2Cl2 catalyst supported by a phosphorus ligand and an Al(OPhCH3)(i-Bu)2 co-catalyst. History The first reported commercialisation of a stereoregular poly-1,4-isoprene with > 90% cis (90% to 92%) was in 1960 by the Shell Chemical Company. Shell used an alkyl lithium catalyst. 
90% cis-1,4 content proved insufficiently crystalline to be useful. In 1962, Goodyear succeeded in making a 98.5% cis polymer using a Ziegler-Natta catalyst, and this went on to commercial success. Usage Natural rubber and synthetic polyisoprene are used primarily for tires. Other applications include latex products, footwear, belting and hoses and condoms. Natural gutta-percha and synthetic trans-1,4-polyisoprene were used for golf balls. See also Synthetic rubber References Organic polymers Rubber
Polyisoprene
[ "Chemistry" ]
619
[ "Organic compounds", "Organic polymers" ]
8,967,705
https://en.wikipedia.org/wiki/Sympathetic%20resonance
Sympathetic resonance or sympathetic vibration is a harmonic phenomenon wherein a passive string or vibratory body responds to external vibrations to which it has a harmonic likeness. The classic example is demonstrated with two similarly-tuned tuning forks. When one fork is struck and held near the other, vibrations are induced in the unstruck fork, even though there is no physical contact between them. In similar fashion, strings will respond to the vibrations of a tuning fork when sufficient harmonic relations exist between them. The effect is most noticeable when the two bodies are tuned in unison or an octave apart (corresponding to the first and second harmonics, integer multiples of the inducing frequency), as there is the greatest similarity in vibrational frequency. Sympathetic resonance is an example of injection locking occurring between coupled oscillators, in this case coupled through vibrating air. In musical instruments, sympathetic resonance can produce both desirable and undesirable effects. According to The New Grove Dictionary of Music and Musicians: Sympathetic resonance in music instruments Sympathetic resonance has been applied to musical instruments from many cultures and time periods, and to string instruments in particular. In instruments with undamped strings (e.g. harps, guitars and kotos), strings will resonate at their fundamental or overtone frequencies when other nearby strings are sounded. For example, an A string at 440 Hz will cause an E string at 330 Hz to resonate, because they share an overtone of 1320 Hz (the third harmonic of A and fourth harmonic of E). Sympathetic resonance is a factor in the timbre of a string instrument. Tailed bridge guitars like the Fender Jaguar differ in timbre from guitars with short bridges, due to the resonance that occurs in their extended floating bridge. 
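The shared-overtone arithmetic in the example above amounts to a least-common-multiple computation, assuming idealized harmonic strings with integer fundamentals:

```python
from math import gcd

def first_shared_overtone(f1, f2):
    """Lowest frequency (Hz) that is an integer harmonic of both fundamentals."""
    return f1 * f2 // gcd(f1, f2)  # least common multiple

# An A string at 440 Hz and an E string at 330 Hz share the overtone at
# 1320 Hz: the 3rd harmonic of 440 Hz and the 4th harmonic of 330 Hz.
shared = first_shared_overtone(440, 330)
```

Real strings are slightly inharmonic (overtones are stretched by stiffness), so in practice the coupling is strongest when the shared partials fall within each string's resonance bandwidth rather than coinciding exactly.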
Certain instruments are built with sympathetic strings, auxiliary strings which are not directly played but sympathetically produce sound in response to tones played on the main strings. Sympathetic strings can be found on Indian musical instruments such as the sitar, Western Baroque instruments such as the viola d'amore and folk instruments such as the hurdy-gurdy and Hardanger fiddle. Some pianos are built with sympathetic strings, a practice known as aliquot stringing. Sympathetic resonance is sometimes an unwanted effect that must be mitigated when designing an instrument. For example, to dampen resonance in the headstock, some electric guitars use string trees near their tuning pegs. Similarly, the string length behind the bridge must be made as short as possible to dampen resonance. Historical mentions The phenomenon is described by the Jewish scholar R. Isaac Arama (died 1494) in his book Akeydat Yitzchak as a metaphor for the bilateral influence between the human being and the world: everything a person does resonates with the entire world and thus causes similar acts everywhere. The human is the active string, the one being struck, and the world is the passive instrument that resonates at the same frequencies the human activates in himself. References Acoustics Resonance
Sympathetic resonance
[ "Physics", "Chemistry" ]
609
[ "Resonance", "Physical phenomena", "Classical mechanics", "Acoustics", "Waves", "Scattering" ]
8,967,742
https://en.wikipedia.org/wiki/Allied%20Telesis
Allied Telesis, formerly Allied Telesyn, is a network infrastructure and telecommunications company headquartered in Tokyo, Japan, with branches in San Jose, California, and elsewhere. The company was established in 1987 as a provider of Ethernet and IP access equipment along with IP triple play networks over copper and fiber access infrastructure. Company history March 1987, System Plus Co. is established with ¥1 million capital stock. September 1987, the company is renamed Allied Telesis K.K. April 1990, Capital stock is increased to ¥99 million. February 1991, Allied Telesyn Intl. (Asia) Pte., Ltd. is established in Singapore. June 1995, Allied Telesyn Intl. Pty Ltd. is established in Australia. November 1995, Malaysia Sales Office opens. June 1997, Capital stock is increased to ¥734 million. July 1997, Taiwan Representative Office is launched. May 1999, Acquires a networking division from Teltrend Ltd., US. May 1999, Centrecom Systems Ltd. is established in UK. June 2000, Allied Telesyn Europe Service S.r.l. is established in Italy. June 2000, Allied Telesyn Korea Co., Ltd. is established in the Republic of Korea. July 2000, Allied Telesis K.K. is listed on the Second Section of the Tokyo Stock Exchange. October 2000, Allied Telesyn Labs New Zealand Ltd., an R&D center, is established in Christchurch, New Zealand. March 2001, Allied Telesyn Philippines Inc. is established in the Philippines as a software development base. March 2001, Allied Telesyn International m.b.H is established in Austria. September 2001, Allied Telesis (Suzhou) Co., Ltd. is established in China. October 2001, Allied Telesyn Networks Inc., an R&D center, is established in North Carolina, US. January 2002, Allied Telesis International SA is established in Switzerland. February 2002, Allied Telesyn International S.L.U. is established in Spain. July 2004, Allied Telesis K.K. is renamed Allied Telesis Holdings K.K. March 2005, Allied Telesis K.K. acquires ROOT Inc, a wireless networking company. 
May 2005, Allied Telesis Capital Corp is established in Washington state, US.
December 2006, Allied Telesis Capital Corp opens a branch on Yokota Air Base, Japan.
June 2007, Allied Telesis launches the SwitchBlade x908 advanced Layer 3 high-capacity stackable chassis switch.
July 2007, the Allied Telesis Yokota AFB branch rolls out IPTV to the Yokota community as part of its IVVD contract with AAFES.
Summer 2008, the Yokota AFB branch adds 23 channels to its video lineup.
September 2008, the Yokota AFB branch upgrades to a tier-one voice carrier for telephony calls to the states.
November 2008, Allied Telesis launches "green" eco-friendly networking products.
October 2012, Allied Telesis launches the SwitchBlade x8112 advanced Layer 3 twelve-slot chassis switch.
April 2014, Allied Telesis launches the SBx81CFC960 controller card with terabit fabric for the SwitchBlade x8112.
July 2014, Allied Telesis launches the x310 series stackable edge switches.
May 2015, Allied Telesis launches the x930 series high-performance distribution switch.
May 2015, Allied Telesis launches the AR3050S and AR4050S next-generation firewall appliances.
August 2015, Sri Lanka's Expressway Traffic Management System adopts Allied Telesis solutions.
Products
Allied Telesis is primarily a provider of equipment for enterprise customers, educational and government segments, and businesses. The company offers POTS-to-10G iMAP (integrated Multiservice Access Platform) and iMG (intelligent Multiservice Gateway) products with secure VPN routing equipment. See also AlliedWare Plus SwitchBlade List of networking hardware vendors References Computer companies of Japan Computer hardware companies Electronics companies of Japan Networking hardware companies Telecommunications equipment vendors Electronics companies established in 1987 Telecommunications companies established in 1987 1987 establishments in Japan Companies listed on the Tokyo Stock Exchange Japanese brands
Allied Telesis
[ "Technology" ]
852
[ "Computer hardware companies", "Computers" ]
8,968,314
https://en.wikipedia.org/wiki/GatorBox
The GatorBox is a LocalTalk-to-Ethernet bridge, a router used on Macintosh-based networks to allow AppleTalk communications between clients on LocalTalk and Ethernet physical networks. The GatorSystem software also allowed TCP/IP and DECnet protocols to be carried to LocalTalk-equipped clients via tunneling, providing them with access to these normally Ethernet-only systems. The GatorBox was designed and manufactured by Cayman Systems, Inc. When the GatorBox is running GatorPrint software, computers on the Ethernet network can send print jobs to printers on the LocalTalk network using the 'lpr' print spool command. When the GatorBox is running GatorShare software, computers on the LocalTalk network can access Network File System (NFS) hosts on Ethernet. Specifications The original GatorBox (model: 10100) is a desktop model that has a 10 MHz Motorola 68000 CPU, 1 MB RAM, 128 KB EPROM for boot program storage, 2 KB NVRAM for configuration storage, LocalTalk Mini-DIN-8 connector, Serial port Mini-DIN-8 connector, BNC connector, AUI connector, and is powered by an external power supply (16 VAC 1 A transformer that is connected by a 2.5 mm plug). This model requires a software download when it is powered on to be able to operate. The GatorBox CS (model: 10101) is a desktop model that uses an internal power supply (120/240 V, 1.0 A, 50–60 Hz). The GatorMIM CS is a media interface module that fits in a Cabletron Multi-Media Access Center (MMAC). The GatorBox CS/Rack (model: 10104) is a rack-mountable version of the GatorBox CS that uses an internal power supply (120/240 V, 1.0 A, 50–60 Hz). The GatorStar GXM integrates the GatorMIM CS with a 24 port LocalTalk repeater. The GatorStar GXR integrates the GatorBox CS/Rack with a 24 port LocalTalk repeater. This model does not have a BNC connector and the serial port is a female DE-9 connector. 
All "CS" models have 2 MB of memory and can boot from images of the software that have been downloaded into the EPROM using the GatorInstaller application. Software There are three disks in the GatorBox software package. Note that the content of the disks for an original GatorBox differs from that of the GatorBox CS models. Configuration - contains GatorKeeper, MacTCP folder and either GatorInstaller (for CS models) or GatorBox TFTP and GatorBox UDP-TFTP (for original GatorBox model) Application - contains GatorSystem, GatorPrint or GatorShare, which is the software that runs in the GatorBox. The application software for the GatorBox CS product family has a "CS" at the end of the filename. GatorPrint includes GatorSystem functionality. GatorShare includes GatorSystem and GatorPrint functionality. Network Applications - NCSA Telnet, UnStuffit Software Requirements The GatorKeeper 2.0 application requires Macintosh System 6.0.2 up to 7.5.1, Finder version 6.1 (or later), and MacTCP (not Open Transport). See also Kinetics FastPath Line Printer Daemon protocol – Print spooling LocalTalk-to-Ethernet bridge – Other LocalTalk-to-Ethernet bridges MacIP – A tunneling protocol carrying Internet Protocol in AppleTalk References External links GatorBox CS configuration information Internet Archive copy of a configuration guide produced by the University of Illinois Juiced.GS magazine Volume 10, Issue 4 (Dec 2005) – contains an article on how to set up a GatorBox for use with an Apple IIGS Software and scanned manuals for the GatorBox and GatorBox CS Networking hardware
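The GatorPrint path described above follows the classic BSD lpd pattern: the Unix host treats the GatorBox as a remote print server. A sketch of what an /etc/printcap entry might look like (the host name, queue name, and spool directory below are hypothetical):

```
lw|LaserWriter on LocalTalk via GatorBox:\
        :rm=gatorbox.example.com:\
        :rp=LaserWriter:\
        :sd=/var/spool/lpd/lw:
```

A job would then be submitted with lpr -Plw file.ps, and the GatorBox forwards it to the LocalTalk printer.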
GatorBox
[ "Engineering" ]
841
[ "Computer networks engineering", "Networking hardware" ]
8,968,929
https://en.wikipedia.org/wiki/Wave%20equation%20analysis
Wave equation analysis is a numerical method of analysis for the behavior of driven foundation piles. It predicts the pile capacity versus blow count relationship (bearing graph) and pile driving stress. The model mathematically represents the pile driving hammer and all its accessories (ram, cap, and cap block), as well as the pile, as a series of lumped masses and springs in a one-dimensional analysis. The soil response for each pile segment is modeled as viscoelastic-plastic. The method was first developed in the 1950s by E.A. Smith of the Raymond Pile Driving Company. Wave equation analysis of piles has seen many improvements since the 1950s, such as the inclusion of a thermodynamic diesel hammer model and residual stress. Commercial software packages (such as AllWave-PDP and GRLWEAP) are now available to perform the analysis. One of the principal uses of this method is the performance of a driveability analysis to select the parameters for safe pile installation, including recommendations on cushion stiffness, hammer stroke and other driving system parameters that optimize blow counts and pile stresses during pile driving, for example when a soft or hard layer would otherwise cause excessive stresses or unacceptable blow counts. References Smith, E.A.L. (1960) Pile-Driving Analysis by the Wave Equation. Journal of the Engineering Mechanics Division, Proceedings of the American Society of Civil Engineers. Vol. 86, No. EM 4, August. External links The Wave Equation Page for Piling Deep foundations Soil mechanics
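Smith's lumped-mass idealization lends itself to a short numerical sketch. The following Python fragment is illustrative only: all parameter values (segment mass and stiffness, ram weight, quake, ultimate resistance, damping) are invented for the example and do not come from any real pile record or from the commercial codes named above.

```python
import numpy as np

def smith_pile(n_seg=10, m_seg=50.0, k_pile=5e8, m_ram=2000.0, v_ram=4.0,
               quake=2.5e-3, r_ult=1e5, j_damp=0.16, dt=1e-5, steps=2000):
    """Explicit time stepping of a ram plus pile segments (lumped masses
    joined by springs), with an elastic-plastic soil spring (quake q,
    ultimate resistance Ru) and Smith velocity damping J on each segment."""
    u = np.zeros(n_seg + 1)            # displacements: index 0 is the ram
    v = np.zeros(n_seg + 1)
    v[0] = v_ram                       # ram arrives with its impact velocity
    plastic = np.zeros(n_seg)          # permanent set of each soil spring
    masses = np.array([m_ram] + [m_seg] * n_seg)
    k_soil = r_ult / quake             # soil spring stiffness up to Ru
    for _ in range(steps):
        f = np.zeros(n_seg + 1)
        for i in range(n_seg):         # pile springs between adjacent masses
            s = k_pile * (u[i] - u[i + 1])
            if i == 0:
                s = max(s, 0.0)        # ram/cap interface cannot pull
            f[i] -= s
            f[i + 1] += s
        for i in range(n_seg):         # soil reaction on each pile segment
            r_static = np.clip(k_soil * (u[i + 1] - plastic[i]), -r_ult, r_ult)
            plastic[i] = u[i + 1] - r_static / k_soil   # plastic sliding
            f[i + 1] -= r_static * (1.0 + j_damp * v[i + 1])
        v += f / masses * dt           # explicit (semi-implicit Euler) update
        u += v * dt
    return u

u_final = smith_pile()
print(u_final[1])   # permanent set at the pile head, in meters
```

Varying the assumed soil resistance and re-running gives set-per-blow values from which a bearing graph of the kind described above could be assembled.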
Wave equation analysis
[ "Physics" ]
303
[ "Soil mechanics", "Applied and interdisciplinary physics" ]
8,969,039
https://en.wikipedia.org/wiki/Electrochromic%20device
An electrochromic device (ECD) controls optical properties such as optical transmission, absorption, reflectance and/or emittance in a continual but reversible manner on application of voltage (electrochromism). This property enables an ECD to be used for applications like smart glass, electrochromic mirrors, and electrochromic display devices. History The history of electro-coloration goes back to 1704, when Diesbach discovered Prussian blue (hexacyanoferrate), which changes color from transparent to blue under oxidation of iron. In the 1930s, Kobosew and Nekrassow first noted electrochemical coloration in bulk tungsten oxide. While working at Balzers in Liechtenstein, T. Kraus provided a detailed description of the electrochemical coloration in a thin film of tungsten trioxide (WO3) on 30 July 1953. In 1969, S. K. Deb demonstrated electrochromic coloration in WO3 thin films. Deb observed electrochromic color by applying an electric field on the order of 10⁴ V cm⁻¹ across a WO3 thin film. In fact, the real birth of EC technology is usually attributed to S. K. Deb's seminal paper of 1973, wherein he described the coloration mechanism in WO3. Electrochromism occurs due to electrochemical redox reactions that take place in electrochromic materials. Various types of materials and structures can be used to construct electrochromic devices, depending on the specific applications. Device structure Electrochromic (sometimes called electrochromatic) devices are one kind of electrochemical cell. The basic structure of an ECD consists of two EC layers separated by an electrolytic layer. The ECD is driven by an external voltage, applied via conducting electrodes on either side of the two EC layers. Electrochromic devices can be categorized into two types depending on the kind of electrolyte used: laminated ECDs use a liquid gel, while solid-electrolyte ECDs use a solid inorganic or organic material.
The basic structure of an electrochromic device comprises five superimposed layers, either on one substrate or positioned between two substrates in a laminated configuration. In this structure there are three principally different kinds of layered materials in the ECD: The EC layer and ion-storage layer conduct ions and electrons and belong to the class of mixed conductors. The electrolyte is a pure ion conductor and separates the two EC layers. The transparent conductors are pure electron conductors. Optical absorption occurs when electrons move into the EC layers from the transparent conductors along with charge-balancing ions entering from the electrolyte. Solid-state devices In solid-state electrochromic devices, a solid inorganic or organic material is used as the electrolyte. Ta2O5 and ZrO2 are the most extensively studied inorganic solid electrolytes. Laminated devices Laminated electrochromic devices contain a liquid gel which is used as the electrolyte. Mode of operation Typically, ECDs are of two types depending on the mode of device operation, namely the transmission mode and the reflectance mode. In the transmission mode, the conducting electrodes are transparent and control the light intensity passing through them; this mode is used in smart-window applications. In the reflectance mode, one of the transparent conducting electrodes (TCE) is replaced with a reflective surface like aluminum, gold or silver, which controls the reflected light intensity; this mode is useful in rear-view mirrors of cars and EC display devices. Applications Smart windows Windows have both direct and indirect impacts on building energy consumption. Electrochromic windows, or the application of electrochromic switchable glazes deposited on to windows, also known as smart windows, are a technology for energy efficiency used in buildings by controlling the amount of sunlight passing through.
The solar-optical properties of electrochromic coatings vary over a wide range in response to an applied electrical signal, which in the laboratory is typically applied by techniques such as cyclic voltammetry (CV). Specifically, these smart windows are made of tungsten oxide (WO3). Tungsten oxide is a standard material for electrochromic devices because of its wide optical window, ranging from 400 to 630 nm, and prolonged cyclic stability on the order of thousands of cycles. To enhance the electrochromic performance of tungsten oxide coatings, they can be prepared by introducing a small amount of dopamine (DA) into a peroxo tungstic acid (PTA) precursor sol to form tungsten complexes on the surface of nanoparticles. This processing method shows promising cyclic stability, lasting up to thirty-five thousand cycles, greater than that of regular WO3, since new ligand formation promotes plasmonic tuning in nanoparticle electrochemistry. They can also produce less glare than fritted glass. The efficiency of electrochromic windows depends on the intrinsic properties of the coating, the placement of the coating within a window system, and parameters related to the building they are used for. In addition, electrochromic coating efficiency depends directly on the growth kinetics of the thin-film layers: thinner or uneven coatings produce a weaker optical signal than thicker, more uniform films. These windows usually contain layers for tinting in response to increases in incoming sunlight and to protect from UV radiation. For example, the glass developed by Gesimat has a tungsten oxide layer, a polyvinyl butyral layer and a Prussian blue layer sandwiched between two dual layers of glass and glass coated with fluorine-doped tin oxide.
The tungsten oxide and Prussian blue layers form the positive and negative ends of a battery using the incoming light energy. The polyvinyl butyral (PVB) forms the central layer and serves as a polymer electrolyte. This allows for the flow of ions which, in turn, generates a current. Mirrors Electrochromic mirrors use a combination of optoelectronic sensors and complex electronics that monitor both ambient light and the intensity of the light shining on the surface. As soon as glare makes contact with the surface, these mirrors automatically dim reflections of flashing light from following vehicles at night so that a driver can see them without discomfort. These mirrors, however, only dim relative to the amount of light that shines on them. Other displays Electrochromic displays can operate in one of two modes: a reflective mode, in which incident light is redirected from a surface, or a transmissive mode, in which light passes through a substrate; the majority of displays operate in the reflective mode. Even though electrochromic devices are considered to be more "passive" since they do not emit light and need external illumination to function, electrochromic coatings on devices have been proposed for flat panel displays and visual-display units (VDUs). For example, an electrochromic coating was featured on an iPod in the early 2000s, and the Nanochromic screen surpassed that of the original iPod in terms of its fidelity in display quality and screen brightness. Electrochromics have been used for other display applications as well; however, the technology is still somewhat nascent and competes with liquid-crystal displays (LCDs) and their presence in the market.
Electrochromic devices do have advantages over LCD-based optoelectronics: they consume little to no power in producing an image and in holding it on the display, and there is no inherent restriction on the size of such a device, which depends only on manufacturing capability and the number of electrodes. But they are not regularly used because of their slow response times, τ, estimated from the diffusion relation l = (Dτ)^0.5, i.e. τ ≈ l²/D for a film of thickness l. For type I (solution-phase) electrochromic species, the diffusion coefficient is on the order of 10⁻⁷ cm²/s. In comparison, for type III electrochromic species, the diffusion coefficient is on the order of 10⁻¹² cm²/s, which results in a longer response time, on the order of ten seconds compared to almost a millisecond when using type I devices. Such electrochromic displays, to be used commercially, need to be optimized at the materials processing and synthesis level to compete with LCDs in advanced display technologies beyond the iPod. Other applications include dynamically tinting goggles and motorcycle helmet visors, and special paper for drawing on with a stylus. References Optical devices
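The diffusion-limited scaling just described can be checked with a two-line calculation; the 100 nm film thickness below is an illustrative assumption, while the diffusion coefficients are the order-of-magnitude values quoted above.

```python
# tau ~ l^2 / D: time for ions to diffuse through a film of thickness l.
l = 100e-7                 # assumed 100 nm film thickness, in cm
d_type1 = 1e-7             # cm^2/s, solution-phase (type I) species
d_type3 = 1e-12            # cm^2/s, solid-film (type III) species

tau1 = l**2 / d_type1      # ~1e-3 s: millisecond-scale switching
tau3 = l**2 / d_type3      # ~1e2 s: five orders of magnitude slower
print(tau1, tau3)
```

Whatever thickness is assumed, the ratio tau3/tau1 equals D(type I)/D(type III) ≈ 10⁵, which is why solid-film electrochromic displays respond orders of magnitude more slowly than solution-phase ones.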
Electrochromic device
[ "Materials_science", "Engineering" ]
1,788
[ "Glass engineering and science", "Optical devices" ]
8,969,189
https://en.wikipedia.org/wiki/Electronic%20switch
In electronics, an electronic switch is a switch controlled by an active electronic component or device. Because they have no moving parts, such switches are called solid-state switches, which distinguishes them from mechanical switches. Electronic switches are considered binary devices because they dramatically change the conductivity of a path in an electrical circuit between two extremes when switching between their two states of on and off. History By metonymy, a variety of devices that conceptually connect or disconnect signals and communication paths between electrical devices are called "switches", analogous to the way mechanical switches connect and disconnect paths for electrons to flow between two conductors. The traditional relay is an electromechanical switch that uses an electromagnet controlled by a current to operate a mechanical switching mechanism. Other operating principles are also used (for instance, solid-state relays, invented in 1971, control power circuits with no moving parts, instead using a semiconductor device such as a silicon-controlled rectifier or triac to perform the switching). Early telephone systems used an electromagnetically operated Strowger switch to connect telephone callers; later telephone exchanges contain one or more electromechanical crossbar switches. Thus the term 'switched' is applied to telecommunications networks, and signifies a network that is circuit switched, providing dedicated circuits for communication between end nodes, such as the public switched telephone network. The term switch has since spread to a variety of digital active devices such as transistors and logic gates whose function is to change their output state between logic states or connect different signal lines. The common feature of all these usages is that they refer to devices that control a binary state: on or off, closed or open, connected or not connected, conducting or not conducting, low impedance or high impedance.
Types The diode can be treated as a switch that conducts significantly only when forward biased and is otherwise effectively disconnected (high impedance). Specific diode types that can change switching state quickly, such as the Schottky diode and the 1N4148, are called "switching diodes". Vacuum tubes can be used in high-voltage applications. The transistor can be operated as a switch. The bipolar junction transistor (BJT) cutoff and saturation regions of operation can respectively be treated as an open and a closed switch. The most widely used electronic switch in digital circuits is the metal–oxide–semiconductor field-effect transistor (MOSFET). The analogue switch uses two MOSFET transistors in a transmission gate arrangement as a switch that works much like a relay, with some advantages and several limitations compared to an electromechanical relay. The power transistor(s) in a switching voltage regulator, such as a power supply unit, are used like a switch to alternately let power flow and block power from flowing. Hall switches are a type of Hall sensor that combine the analog Hall effect with threshold detection to produce a magnetically operated switch. The opto-isolator uses light from an LED, controlled by a current, which is received by a phototransistor to switch a galvanically isolated circuit. The insulated-gate bipolar transistor (IGBT) combines advantages of BJTs and power MOSFETs. A silicon controlled rectifier (SCR) can be used for high-speed switching in power control applications. A TRIAC (TRIode AC), equivalent to two back-to-back SCRs, is a bidirectional switching device. DIAC stands for DIode AC switch. A gate turn-off thyristor (GTO) is a bipolar switching device. Electronic switches may also consist of complex configurations that are assisted by physical contact, for instance resistive or capacitive sensing touchscreens. Network switches reconfigure connections between different ports of computers in a computer network.
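All of the devices above are used in the same idealized way: a control input selects between a very low and a very high impedance. A minimal sketch of that two-state model (the on/off resistances below are invented for illustration, not values from any datasheet):

```python
def switch_current(v_supply, r_load, on, r_on=0.1, r_off=1e9):
    """Load current for an ideal two-state switch in series with a
    resistive load: near-short when on, near-open when off."""
    r = r_on if on else r_off
    return v_supply / (r_load + r)

i_on = switch_current(5.0, 100.0, True)    # ~50 mA flows when conducting
i_off = switch_current(5.0, 100.0, False)  # a few nA: effectively open
print(i_on, i_off)
```

The many orders of magnitude between the on and off currents are what justify treating these analog devices as binary switches.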
Applications Electronic switches are used in all kinds of common and industrial applications. See also Relay References Electronic circuits
Electronic switch
[ "Engineering" ]
810
[ "Electronic engineering", "Electronic circuits" ]
8,969,359
https://en.wikipedia.org/wiki/Engineering%20research
Engineering research is research oriented towards achieving a specific goal that would be useful, employing the powerful tools already developed in engineering as well as in non-engineering sciences such as physics, mathematics, computer science, chemistry, and biology. Often, some of the knowledge required to develop such tools is nonexistent or simply not good enough, and the engineering research takes the form of a non-engineering science. Since engineering is extensive, it comprises specialised areas such as bioengineering, mechanical engineering, chemical engineering, electrical and computer engineering, civil and environmental engineering, agricultural engineering, etc. The largest professional organisation is the IEEE, which today includes much more than the original electrical and electronic engineering. Major contributors to engineering research around the world include governments, private business, and academia. The results of engineering research can emerge in journal articles, at academic conferences, and in the form of new products on the market. Much engineering research in the United States of America takes place under the aegis of the Department of Defense. Military-related research into science and technology has led to "dual-use" applications, with the adaptation of weaponry, communications and other defense systems for the military and other applications for civilian use. Programmable digital computers and the Internet which connects them, the GPS satellite network, fiber-optic cable, radar and lasers provide examples. See also List of engineering schools Engineer's degree Engineering studies Engineering education research References Research by field Engineering disciplines
Engineering research
[ "Engineering" ]
308
[ "nan" ]
8,969,589
https://en.wikipedia.org/wiki/Windows%20IoT
Windows IoT, short for Windows Internet of Things and formerly known as Windows Embedded, is a family of operating systems from Microsoft designed for use in embedded systems. Microsoft has three different subfamilies of operating systems for embedded devices targeting a wide market, ranging from small-footprint, real-time devices to point of sale (POS) devices like kiosks. Windows Embedded operating systems are available to original equipment manufacturers (OEMs), who make them available to end users preloaded with their hardware, in addition to volume license customers in some cases. In April 2018, Microsoft released Azure Sphere, another operating system designed for IoT applications, running on the Linux kernel. The IoT family Microsoft rebranded "Windows Embedded" to "Windows IoT" starting with the release of embedded editions of Windows 10. Enterprise Windows IoT Enterprise is binary identical to Windows 10 and Windows 11 Enterprise, and functions exactly the same, but is licensed and priced for use in embedded devices. It replaces Embedded Industry and Embedded Standard. Plain unlabeled, Retail/Thin Client, Tablet, and Small Tablet SKUs are available, again differing only in licensing. It includes a minor change that allows the use of smaller storage devices, with the possibility of more changes being made in the future. In addition, starting with the LTSC edition of version 21H2, Windows 10 IoT Enterprise LTSC will gain an extra five years of support compared to Windows 10 Enterprise LTSC. Mobile Windows 10 IoT Mobile, also known as Windows 10 IoT Mobile Enterprise, is a binary equivalent of Windows 10 Mobile Enterprise licensed for IoT applications. It has been unsupported since January 14, 2020. Core Windows 10 IoT Core is considered by some to be the successor to Windows Embedded Compact, although it maintains very little compatibility with it.
Optimized for smaller and lower-cost industry devices, it is also provided free of charge for use in devices like the Raspberry Pi for hobbyist use. Core Pro Windows 10 IoT Core Pro provides the ability to defer and control updates and is licensed only via distributors; it is otherwise identical to the normal IoT Core edition. Server Windows Server IoT 2019 is a full, binary equivalent version of Windows Server 2019, intended to aggregate data from many 'things'. Like the IoT Enterprise variants, it remains identical in behavior to its regularly licensed counterpart, but differs only in licensing terms. It also is offered in both LTSC and SAC options. Embedded family Embedded Compact Windows Embedded Compact (previously known as Windows Embedded CE or Windows CE) is the variant of Windows Embedded for very small computers and embedded systems, including consumer electronics devices like set-top boxes and video game consoles. Windows Embedded Compact is a modular real-time operating system with a specialized kernel that can run in under 1 MB of memory. It comes with the Platform Builder tool that can be used to add modules to the installation image to create a custom installation, depending on the device used. Windows Embedded Compact is available for ARM, MIPS, SuperH and x86 processor architectures. Microsoft made available a specialized variant of Windows Embedded Compact, known as Windows Mobile, for use in mobile phones. It is a customized image of Windows Embedded Compact along with specialized modules for use in Mobile phones. Windows Mobile was available in four editions: Windows Mobile Classic (for Pocket PC), Windows Mobile Standard (for smartphones) and Windows Mobile Professional (for PDA/Pocket PC Phone Edition) and Windows Mobile for Automotive (for communication/entertainment/information systems used in automobiles). Modified variants of Windows Mobile were used for Portable Media Centers. 
In 2010, Windows Mobile was replaced by Windows Phone 7, which was also based on Windows Embedded Compact, but was not compatible with any previous products. Windows Embedded Compact 2013 is a real-time operating system which runs on ARM, x86, SH, and derivatives of those architectures. It included .NET Framework, UI framework, and various open source drivers and services as 'modules'. Embedded Standard Windows Embedded Standard is the brand of Windows Embedded operating systems designed to provide enterprises and device manufacturers the freedom to choose which capabilities will be part of their industry devices and intelligent system solutions, intended to build ATMs and devices for the healthcare and manufacturing industries, creating industry-specific devices. This brand consists of Windows NT 4.0 Embedded, Windows XP Embedded, Windows Embedded Standard 2009 (WES09), Windows Embedded Standard 7 (WES7, known as Windows Embedded Standard 2011 prior to release), and Windows Embedded 8 Standard. It provides the full Win32 API. Windows Embedded Standard 2009 includes Silverlight, .NET Framework 3.5, Internet Explorer 7, Windows Media Player 11, RDP 6.1, Network Access Protection, Microsoft Baseline Security Analyzer and support for being managed by Windows Server Update Services and System Center Configuration Manager. Windows Embedded Standard 7 is based on Windows 7 and was previously codenamed Windows Embedded 'Quebec'. Windows Embedded Standard 7 includes Windows Vista and Windows 7 features such as Aero, SuperFetch, ReadyBoost, Windows Firewall, Windows Defender, address space layout randomization, Windows Presentation Foundation, Silverlight 2, Windows Media Center among several other packages. It is available in IA-32 and x64 variants and was released in 2010. It has a larger minimum footprint (~300 MB) compared to 40 MB of XPe and also requires product activation. Windows Embedded Standard 7 was released on April 27, 2010. 
Windows Embedded 8 Standard was released on March 20, 2013. IE11 for this edition of Windows 8 was released in April 2019, with support for IE10 ending on January 31, 2020. For Embedded Systems (FES) Binary-identical variants of the editions available at retail, licensed exclusively for use in embedded devices. They are available for both IA-32 and x64 processors. The subfamily is known to include Windows for Workgroups 3.11, Windows 95 to 98, Windows NT Workstation, Windows 2000 Professional, Windows ME, Windows XP Professional, Windows Vista Business and Ultimate, Windows 7 Professional and Ultimate, Windows 8 Pro and Enterprise, and Windows 8.1 Pro and Enterprise. This subfamily originally simply had Embedded tacked onto the end of the SKU name until sometime around the release of Windows XP, when the naming scheme changed to FES. Examples of this include Windows NT Workstation Embedded, Windows 2000 Pro Embedded, and Windows ME Embedded. Microsoft changed the moniker for FES products again starting with some Windows 8/8.1-based SKUs, simply labeling them Windows Embedded before the Windows version and edition. Two examples of this are Windows Embedded 8 Pro and Windows Embedded 8.1 Enterprise. Server Windows Embedded Server FES products include Server, Home Server, SQL Server, Storage Server, DPM Server, ISA Server, UAG Server, TMG Server, and Unified Data Storage Server, of various years including 2000, 2003, 2003 R2, 2004, 2005, 2006, 2007, 2008, 2008 R2, 2012, and 2012 R2. Embedded Industry Windows Embedded Industry is the brand of Windows Embedded operating systems for industry devices and once only for point of sale systems. This brand was originally limited to the Windows Embedded for Point of Service operating system released in 2006, which is based on Windows XP with SP2.
Since then, Microsoft has released an updated version of Windows Embedded for Point of Service named Windows Embedded POSReady 2009, this time based on Windows XP with SP3. In 2011, Windows Embedded POSReady 7, based on Windows 7 SP1, was released, succeeding POSReady 2009. Microsoft has since changed the name of this product from "Windows Embedded POSReady" to "Windows Embedded Industry". Microsoft released Windows Embedded 8 Industry in April 2013, followed by 8.1 Industry in October 2013. Embedded NAVReady Windows Embedded NAVReady, also known as Navigation Ready, is a plug-in component for Windows CE 5.0. It is intended to be useful for building portable handheld navigation devices. Embedded Automotive Windows Embedded Automotive, formerly Microsoft Auto, Windows CE for Automotive, Windows Automotive, and Windows Mobile for Automotive, is an embedded operating system based on Windows CE for use on computer systems in automobiles. The latest release, Windows Embedded Automotive 7, was announced on October 19, 2010. Embedded Handheld On January 10, 2011, Microsoft announced Windows Embedded Handheld 6.5. The operating system has compatibility with Windows Mobile 6.5 and is presented as an enterprise handheld device, targeting retailers, delivery companies, and other companies that rely on handheld computing. Windows Embedded Handheld retains backward compatibility with legacy Windows Mobile applications. Windows Embedded 8.1 Handheld was released for manufacturing on April 23, 2014. Known simply as Windows Embedded 8 Handheld (WE8H) prior to release, it was designed as the next generation of Windows Embedded Handheld for line-of-business handheld devices and built on Windows Phone 8.1, with which it also has compatibility. Five Windows Embedded 8.1 Handheld devices have been released, manufactured by Bluebird, Honeywell and Panasonic.
See also Eclipse ThreadX (previously Microsoft's Azure ThreadX, now donated as open source to the Eclipse Foundation) References Further reading External links IoT Embedded operating systems ARM operating systems X86-64 operating systems
Windows IoT
[ "Technology" ]
1,892
[ "Computing platforms", "Microsoft Windows" ]
8,969,762
https://en.wikipedia.org/wiki/CWSDPMI
CWSDPMI is a 32-bit DPMI host written by Charles W. Sandmann from 1996 to 2010, currently at release 7 (r7). It is loosely based on the earlier GO32.EXE code used in DJGPP v1. It can provide DPMI 0.90+ 32-bit services for programs compiled with the latest versions of DJGPP and similar compilers. Since r5, it can also be used for programs requiring a DPMI stub in lieu of PMODE/DJ. It supports up to 4 GB of memory, virtual memory, and hardware interrupt reflection from real mode to protected mode. Programs compiled with DJGPP v2 require a DPMI host, which is usually CWSDPMI.EXE or CWSDPR0.EXE. In the case of CWSDPMI.EXE, the default paging/virtual memory file is C:\CWSDPMI.SWP. It is capable of running on a 386 in under 512 KB of RAM. CWSDPMI is functionally similar to other 32-bit DPMI hosts such as HDPMI32, which is part of the HX DOS Extender. CWSDPMI r7 is free and open-source software. CWSDPMI editions CWSDSTUB.EXE is a stub loader image for DJGPP which includes CWSDPMI. CWSDPR0.EXE is an alternative version, implemented at the request of id Software when writing Quake, which runs at ring 0 with virtual memory disabled. It may be used if access to ring 0 features is desired. It currently does not switch stacks on hardware interrupts, so some DJGPP features such as SIGINT and SIGFPE are not supported and will generate a double fault or stack fault error. Developer Charles W. Sandmann also hoped to eventually supply code for CWSDPMI r7 that would allow CWSDPMI to map up to 64 GB of memory into the address space upon request. See also DOS extender References External links Official CWSDPMI website DOS extenders DOS software 1996 software
CWSDPMI
[ "Technology", "Engineering" ]
432
[ "Computing stubs", "Computer engineering stubs", "Computer engineering" ]
8,969,863
https://en.wikipedia.org/wiki/Microcosm%20%28clock%29
Microcosm was a unique clock made by Henry Bridges of Waltham Abbey, England. It stood 10–12 feet high and six feet across the base. It toured Great Britain, North America and possibly Europe as a visual and musical entertainment, as well as demonstrating astronomical movements. It was first advertised for exhibition in 1733, but it is also claimed that Sir Isaac Newton, who died in 1727, checked the mechanism. Several prints survive of Microcosm, including one of 1734 showing Newton and Bridges. When Henry Bridges died in 1754 he left the clock to his three youngest children to be sold. It is unclear when the clock left the Bridges family, but it continued touring until 1775, when it vanished. Parts of the astronomical clock were found in Paris in 1929 and are now in the British Museum. When on tour, the entrance fee was 1s, which was high for the time. Souvenir pamphlets were also sold. It had four parts, from the top: three scenes which alternated (nine muses playing musical instruments, Orpheus in the forest, and a grove with birds flying and singing); beneath a grand arch, two astronomical clocks, one showing the Ptolemaic system, the other the Copernican; two planetariums, one showing the Solar System, with 10 months of movement shown in 10 minutes, and the other showing Jupiter and its four satellites, while on the front face was a seascape with ships sailing and swans swimming, and in the foreground horse-drawn carriages galloping and a gunpowder mill and a windmill turning; and, in the pedestal, a working carpenters’ yard. The machine played mechanical music but the organ could also be played by hand. The music was mostly new, some composed especially for it. 
John R Milburn stated: ‘There were other broadly similar though less comprehensive devices in existence in the first half of the eighteenth century… The importance of Bridges’ ‘Microcosm’, however, lies in the nature of its displays (combining automated pictures to attract the multitude with educational astronomical models) and the widespread publicity that accompanied it on its travels’. It was viewed by George Washington and by members of the Lunar Society; Richard L Edgeworth left an account of seeing it at Chester in his biography, linking it with the notions of child-centred education promoted by Rousseau. The mechanism was constantly being updated, and the clock was part of the circuit of travelling science shows of the early to mid 18th century, providing education to the public who could afford it. References Individual clocks Astronomical clocks Prehistory and Europe objects in the British Museum
Microcosm (clock)
[ "Astronomy" ]
513
[ "Time in astronomy", "Astronomical clocks", "Astronomical instruments" ]
8,970,065
https://en.wikipedia.org/wiki/Content%20Vectoring%20Protocol
In computer networks, Content Vectoring Protocol (CVP) is a protocol for forwarding data that is crossing a firewall to an external scanning device. An example of this is where all HTTP traffic is virus-scanned before being sent out to the user. This protocol is identified as part of Check Point training as being one of the benefits of their products. It is not known whether this is just a re-working of another protocol that has been re-branded by Check Point or whether it is a generic Internet protocol. Its default is to use TCP port 18181. It is used by some servers implementing a firewall to inspect HTTP content. Whether the whole of the content is inspected is entirely up to the administrator managing the firewall, who can direct either all Internet traffic, or only content coming from specific sources, to be inspected via the protocol. References Network protocols
Content Vectoring Protocol
[ "Technology" ]
188
[ "Computing stubs", "Computer network stubs" ]
8,970,644
https://en.wikipedia.org/wiki/Dynamic%20load%20testing
Dynamic load testing (or dynamic loading) is a method to assess a pile's bearing capacity by applying a dynamic load (a falling mass) to the pile head while recording acceleration and strain on the pile head. Dynamic load testing is a high-strain dynamic test which can be applied after pile installation for concrete piles. For steel or timber piles, dynamic load testing can be done during or after installation. The procedure is standardized by ASTM D4945-00 Standard Test Method for High Strain Dynamic Testing of Piles. It may be performed on all piles, regardless of their installation method. In addition to bearing capacity, dynamic load testing gives information on resistance distribution (shaft resistance and end bearing) and evaluates the shape and integrity of the foundation element. The foundation bearing capacity results obtained with dynamic load tests correlate well with the results of static load tests performed on the same foundation element. See also Pile integrity test References Rausche, F., Moses, F., Goble, G. G., September 1972. Soil Resistance Predictions From Pile Dynamics. Journal of the Soil Mechanics and Foundations Division, American Society of Civil Engineers. Reprinted in Current Practices and Future Trends in Deep Foundations, Geotechnical Special Publication No. 125, DiMaggio, J. A., and Hussein, M. H., Eds, August 2004. American Society of Civil Engineers: Reston, VA; 418–440. Rausche, F., Goble, G.G. and Likins, G.E., Jr. (1985). Dynamic Determination of Pile Capacity. Journal of the Geotechnical Engineering Division, 111(3), 367–383. Salgado, R. (2008). The Engineering of Foundations. New York: McGraw-Hill, Chapter 14 (pp. 669–713). Scanlan, R.H., and Tomko, J.J., 1960, "Dynamic Prediction of Pile Static Bearing Capacity", Journal of the Soil Mechanics and Foundations Division, American Society of Civil Engineers, Vol. 86, No. SM4; 35–61.
External links Instrumentation and Pictures of Dynamic Load Test of Piles In situ foundation tests Deep foundations
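The method described above rests on one-dimensional stress-wave mechanics: the hammer blow sends a compression wave down the pile, and the pile-head sensors record the pulse and its reflections. A minimal sketch of the underlying arithmetic, using typical textbook material values for concrete (assumed for illustration, not taken from the references above):

```python
import math

# Sketch of the wave mechanics behind high-strain dynamic testing.
# The impact sends a stress wave down the pile at speed c = sqrt(E / rho);
# the reflection from the pile toe returns after a round trip of 2L / c,
# which is what the pile-head instrumentation records and interprets.
E = 30e9        # Young's modulus of concrete, Pa (assumed typical value)
rho = 2400.0    # density of concrete, kg/m^3 (assumed typical value)
L = 20.0        # pile length, m (assumed for illustration)

c = math.sqrt(E / rho)   # stress-wave speed in the pile, m/s
t_return = 2 * L / c     # arrival time of the toe reflection, s

print(f"wave speed ~ {c:.0f} m/s, toe reflection after {t_return * 1000:.1f} ms")
```

The short round-trip time (on the order of milliseconds) is why high-frequency acceleration and strain sensing at the pile head is required.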
Dynamic load testing
[ "Engineering" ]
454
[ "Civil engineering", "Civil engineering stubs" ]
8,970,838
https://en.wikipedia.org/wiki/Twinge%20attack
In Internet-based computer networking, a Twinge attack is a flood of false ICMP packets in an attempt to cripple a system. The attack is spoofed; that is, random fake Internet source addresses are used in the ICMP packets, which makes identification of the source of the malicious packets difficult. The idea of the attack is to either degrade the performance of the attacked computer or make it crash. The attacking program is called Twinge, but the ICMP packets it generates have a particular signature which gives their identity away. As long as the computer is safely behind a router or a firewall, there is nothing to worry about with this attack. With this attack, the adversary intends to prevent the system from operating normally, i.e. a denial of service. Configuring upstream network devices (including firewalls and routers) to ignore ICMP packets from the public Internet will almost certainly prevent the attack from succeeding. References Denial-of-service attacks
Twinge attack
[ "Technology" ]
201
[ "Computer security stubs", "Computing stubs", "Denial-of-service attacks", "Computer security exploits" ]
8,970,987
https://en.wikipedia.org/wiki/Text%20Analysis%20Portal%20for%20Research
TAPoR (Text Analysis Portal for Research) is a gateway that highlights tools and code snippets usable for textual criticism of all types. The project is housed at the University of Alberta, and is currently led by Geoffrey Rockwell, Stéfan Sinclair, Kirsten C. Uszkalo, and Milena Radzikowska. Users of the portal explore tools to use in their research, and can rate, review, and comment on tools, browse curated lists of recommended tools, and add tags to tools. Tool pages on TAPoR consist of a short description, authorial information, a screenshot of the tool, tags, suggested related tools, and user ratings and comments. Code snippet pages also contain an excerpt of code and a link to the full code's location online. An earlier version of the portal was based at McMaster University, and consisted of a network of six leading humanities computing centres in Canada: McMaster, University of Victoria (in collaboration with Malaspina UC), University of Alberta, University of Toronto, Université de Montréal (law) and University of New Brunswick. TAPoR developed a network of nodes at universities across Canada which would have servers and local labs where the best text tools, be they from industry or other sources, could be aggregated and made available. These would be supplemented by representative texts and special infrastructure ... This earlier version allowed researchers to experiment with text analysis tools, either using them without an account through the "TAPoR Tools" interface, or getting an account where they could define texts they wanted to operate on and create a list of favorite tools. TAPoR has also sponsored CaSTA (Canadian Symposium on Text Analysis) conferences, including The Face of Text (CaSTA 2004), which focused on text visualization. Selected papers from "The Face of Text" were published by Text Technology, a journal of computer text processing. 
Citations External links TAPoR 3.0 TAPoR 1.0 on the Wayback Machine McMaster University Computational linguistics
Text Analysis Portal for Research
[ "Technology" ]
407
[ "Natural language and computing", "Computational linguistics" ]
8,971,558
https://en.wikipedia.org/wiki/Robert-Bourassa%20Reservoir
The Robert-Bourassa Reservoir () is a man-made lake in northern Quebec, Canada. It was created in the mid-1970s as part of the James Bay Project and provides the needed water for the Robert-Bourassa and La Grande-2-A generating stations. It has a maximum surface area of , and a surface elevation between and . The reservoir has an estimated volume of , of which is available for hydro-electric power generation. The reservoir is formed behind the Robert-Bourassa Dam that was built across a valley of the La Grande River. This dam was constructed from 1974 to 1978, is 550 m (1,800 ft) wide at its base, and has 23 million m3 (30 million yd3) of fill. There are another 31 smaller dikes keeping the water inside the reservoir. See also List of lakes of Quebec References External links International Lake Environment Committee – La Grande 2 Reservoir Lakes of Nord-du-Québec Reservoirs in Quebec James Bay Project
Robert-Bourassa Reservoir
[ "Engineering" ]
201
[ "James Bay Project", "Macro-engineering" ]
8,971,676
https://en.wikipedia.org/wiki/Zinc%20proteinate
Zinc proteinate is the final product resulting from the chelation of zinc with amino acids and/or partially hydrolyzed proteins. It is used as a nutritional animal feed supplement formulated to prevent and/or correct zinc deficiency in animals. Zinc proteinate can be used in place of zinc sulfate and zinc methionine. References External links Association of American Feed Control Officials Life Cycle Trace Mineral Needs for Reducing Stress in Beef Production, Montana State University Dietary minerals proteinate
Zinc proteinate
[ "Chemistry" ]
95
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
8,971,904
https://en.wikipedia.org/wiki/Coercive%20function
In mathematics, a coercive function is a function that "grows rapidly" at the extremes of the space on which it is defined. Depending on the context, different exact definitions of this idea are in use. Coercive vector fields A vector field f : Rⁿ → Rⁿ is called coercive if f(x) · x / ‖x‖ → +∞ as ‖x‖ → +∞, where "·" denotes the usual dot product and ‖x‖ denotes the usual Euclidean norm of the vector x. A coercive vector field is in particular norm-coercive, since ‖f(x)‖ ≥ (f(x) · x) / ‖x‖ for x ≠ 0, by the Cauchy–Schwarz inequality. However a norm-coercive mapping is not necessarily a coercive vector field. For instance the rotation by 90°, f(x₁, x₂) = (−x₂, x₁), is a norm-coercive mapping which fails to be a coercive vector field, since f(x) · x = 0 for every x. Coercive operators and forms A self-adjoint operator A : H → H, where H is a real Hilbert space, is called coercive if there exists a constant c > 0 such that ⟨Ax, x⟩ ≥ c‖x‖² for all x in H. A bilinear form a : H × H → R is called coercive if there exists a constant c > 0 such that a(x, x) ≥ c‖x‖² for all x in H. It follows from the Riesz representation theorem that any symmetric (defined as a(x, y) = a(y, x) for all x, y in H), continuous (|a(x, y)| ≤ k‖x‖ ‖y‖ for all x, y in H and some constant k > 0) and coercive bilinear form a has the representation a(x, y) = ⟨Ax, y⟩ for some self-adjoint operator A : H → H, which then turns out to be a coercive operator. Also, given a coercive self-adjoint operator A, the bilinear form a defined as above is coercive. If A : H → H is a coercive operator then it is a coercive mapping (in the sense of coercivity of a vector field, where one has to replace the dot product with the more general inner product); indeed, ⟨Ax, x⟩ / ‖x‖ ≥ c‖x‖ → +∞ as ‖x‖ → +∞. One can also show that the converse holds true if A is self-adjoint. The definitions of coercivity for vector fields, operators, and bilinear forms are closely related and compatible. 
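The rotation counterexample can be checked numerically. A small illustrative sketch (the function names below are mine, chosen for clarity): the 90° rotation preserves the norm, so it is norm-coercive, yet its coercivity quotient is identically zero.

```python
import math

# Illustrative check: R(x1, x2) = (-x2, x1) preserves the Euclidean norm,
# so it is norm-coercive, yet R(x) . x = 0 for every x, so the quotient
# R(x) . x / ||x|| never diverges and R is not a coercive vector field.
def rotate90(v):
    x1, x2 = v
    return (-x2, x1)

def coercivity_quotient(f, v):
    """f(v) . v / ||v|| -- must tend to +infinity for a coercive field."""
    fv = f(v)
    dot = sum(a * b for a, b in zip(fv, v))
    return dot / math.hypot(*v)

identity = lambda v: v  # the identity field IS coercive: quotient = ||v||

for r in (1.0, 100.0, 1e6):
    v = (r, 0.0)
    print(coercivity_quotient(rotate90, v), coercivity_quotient(identity, v))
```

However far out one samples, the rotation's quotient stays at zero while the identity's grows like ‖v‖, exactly as the definition requires.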
Norm-coercive mappings A mapping f : X → X′ between two normed vector spaces (X, ‖·‖_X) and (X′, ‖·‖_X′) is called norm-coercive if and only if ‖f(x)‖_X′ → +∞ as ‖x‖_X → +∞. More generally, a function f : X → X′ between two topological spaces X and X′ is called coercive if for every compact subset K′ of X′ there exists a compact subset K of X such that f(X ∖ K) ⊆ X′ ∖ K′. The composition of a bijective proper map followed by a coercive map is coercive. (Extended valued) coercive functions An (extended valued) function f : Rⁿ → R ∪ {−∞, +∞} is called coercive if f(x) → +∞ as ‖x‖ → +∞. A real-valued coercive function is, in particular, norm-coercive. However, a norm-coercive function is not necessarily coercive. For instance, the identity function on R is norm-coercive but not coercive. See also Radially unbounded functions References Functional analysis General topology Types of functions
Coercive function
[ "Mathematics" ]
568
[ "General topology", "Functions and mappings", "Functional analysis", "Mathematical objects", "Topology", "Mathematical relations", "Types of functions" ]
8,972,094
https://en.wikipedia.org/wiki/Aqua%20Traiana
The Aqua Traiana (later rebuilt and named the Acqua Paola) was a 1st-century Roman aqueduct built by Emperor Trajan and inaugurated in 109 AD. It channelled water from sources around Lake Bracciano, 40 km (25 mi) north-west of Rome, to ancient Rome. It joined the earlier Aqua Alsietina to share a common lower route into Rome. It did not finally fall into disuse until the 17th century. History Frontinus indicated in c. 98 AD that a new aqueduct was being planned, and completion took about a decade. The inauguration of the aqueduct was recorded in the Fasti Ostienses as being dedicated with great fanfare in 109, which stated that the water was tota urbe salientem (issuing throughout the city). The date of inauguration was also significant for its intended uses, being only a few months before the opening of the Naumachia Traiani, the vast, grandstand-encircled pool on the west bank of the Tiber intended for naval spectacles (and only two days after that of the Baths of Trajan on the Oppian Hill, in the heart of Rome, overlooking the lower Forum Romanum and Colosseum). Later the Aqua Traiana powered Rome's important flour mills, which became critical to its survival during the Gothic Siege of Rome (537–538), when the Janiculum mills were famously put out of action by the Ostrogoths, who cut the urban aqueducts. General Belisarius restored the supply of flour by using mills floating in the Tiber. This aqueduct alone was soon repaired, but recent excavations revealed that one of the two major branches of the aqueduct that had powered the mills was never cleared of its blockage from the siege. Nevertheless, the aqueduct continued to supply the Vatican and western regions of Rome until at least the 9th century. Sources of the aqueduct The Aqua Traiana was fed by a collection of aquifer sources around the western and northern sides of Lake Bracciano. 
The sources were identified in the 19th century in the following groups, running clockwise around the lake from Bracciano: the seven sources in the Villa Flavia / Fosso di Grotta Renara area, which were gathered together into three tanks named by Cassio and Lanciani as Greca, Spineta and Pisciarello (the seventeenth-century architect Carlo Fontana names three tanks as Botte Greca, Botte Ornava, and Botte Arciprete (Arch-Priest), then places one additional tank further down the Fosso di Grotta Renara as the Botte di Pisciarelli; one tank is currently called 'Fonte Micciaro'); the sources in the Fosso di Fiora area, which include the source at the monumental Fiora Nymphaeum and another source at the 'Carestia' Nymphaeum approximately 1 km from the Fiora, which now lies in ruin but is documented by various maps in the Orsini collection; a collection of sources at the Vicarello Baths; one source close to the contemporary Acqua delle Donne restaurant; the Sette Botti (seven tanks) immediately to the east of the Acqua delle Donne; various sources to the north of Monte Rocca Romana in the territory of Bassano Romano and along the Fosso Della Calandrina, including the notable Fonte Ceraso; the Aquarelli sources to the north-east of the lake; and the Acqua D'Impolline due east of the lake. The yields of various of these sources were measured and compared in the early 1690s. The most significant and copious source of the Aqua Traiana was pinpointed as close to the Fosso di Fiora in the modern district of Manziana. Subsequently, little more was published about the sources for over 150 years, probably because of the difficulty of accessing the terrain. Some additional sources of the Trajan aqueduct were identified in 1999 as Acqua Praecilia, located near Manziana. The initial flow of water is enriched along the way by other sources and is carried by the Archi di Boccalupo bridge. At one point there is a hole from which water flows into a collection pool. 
The height of the Archi di Boccalupo reaches 15 m and it has a brick curtain that alternates with opus reticulatum. Recent research, and in particular the publication of the Santa Fiora, the primary source, in 2010, spurred other explorers, who have been finding new sources and parts of the network. Distribution of Aqua Traiana within Rome How distribution was achieved is mostly subject to speculation, but some suggest that the aqueduct crossed the River Tiber on a high bridge in the area of the modern Ponte Sublicio, and curved around the Aventine before heading north to the Oppio. On the Janiculum hill, the aqueduct fed a number of water mills, including a sophisticated mill complex revealed by excavations in the 1990s under the present American Academy in Rome. Dilapidation and revival as Acqua Paola Although the Aqua Traiana, along with all the other aqueducts, was cut by the Ostrogoths in 537, it was the only one restored by Belisarius before his departure in 547, in order to supply water to the grain mills. Over the next few centuries it once again fell into ruin and ceased to function. It was restored a second time around the year 775 by Pope Adrian I as a way of alleviating the need for the Roman people to carry water in casks from the Tiber to supply the fountains at Saint Peter's Basilica. Subsequently, it once again fell into disrepair. Camillo Borghese, on his accession in 1605 as Pope Paul V, initiated work on rebuilding the Aqua Traiana, supervised from 1609 by Giovanni Fontana. At that time, the Roman suburbs west of the Tiber, including the Vatican, were suffering from a chronic water shortage. The new pope persuaded the Municipality of Rome to pay for the development of an aqueduct to provide a better water supply to that part of the city. In 1612, the aqueduct was completed. 
It was initially called the Acqua Sabbatina or Acqua Bracciano, but was renamed Acqua Paola in honour of Paul V. Not all the original Aqua Traiana sources were available to contribute water to the Acqua Paola. The most copious sources at Santa Fiora, for example, had long since been purloined by Duke Paolo Giordano Orsini, who had diverted them to power mills and industry in the city of Bracciano. The fountain at the end of the aqueduct was referred to as "Il Fontanone" – the Big Fountain – because of its size. It was in the form of a free-standing triumphal arch constructed in white marble with granite columns on high socles. Most of the material was pillaged from the Forum of Nerva. Originally, it consisted of three large central arches, separated by columns, and a smaller one on each side. Water gushed into five basins at the base of each arch. The designer was Paul V's usual architect, Flaminio Ponzio. Among the team of sculptors involved was Ippolito Buzzi, who was responsible for the Borghese coat-of-arms, flanked by the Borghese eagle and dragon and held aloft by putti, presumably to Ponzio's design. Then, in 1690, Pope Alexander VIII commissioned Carlo Fontana, Giovanni's nephew, to enlarge the fountain. Carlo replaced the five small basins with an enormous single one, the Fontana dell'Acqua Paola, which remains to this day. In more recent times, a small garden has been arranged, hidden behind the structure. See also List of aqueducts in the city of Rome List of aqueducts in the Roman Empire List of Roman aqueducts by date Parco degli Acquedotti Ancient Roman technology Roman engineering References Notes "Trajan's aqueduct sourced by UK father and son", The Times. 2010-04-29. (Archive) "Two-thousand-year-old Roman aqueduct discovered", The Daily Telegraph. 2010-04-29 Fea, C., Storia 1. delle acque antiche sorgenti in Roma, perdute..., 1832 External links Roman Fountains Satellite photo Acqua Paola is the white hemicircle in the center. 
To the Northeast is San Pietro in Montorio and the Bramante Tempietto. Touring Club Italiano, Roma e Dintorni 1965 p. 454 Il Fontanone Video by Maurizio Meyer My Rome Excavation and historical context of Aqua Traiana at Janiculum mills, 1998–1999 Aqua Trajana in A Topographical Dictionary of Ancient Rome, Samuel Ball Platner (as completed and revised by Thomas Ashby, 1929) Video of the underground structure at the source of the Aqua Traiana taken by British film-makers in 2009. Interactive Atlas Aqua Traiana/Paola American Society of Civil Engineers – International Historic Civil Engineering Landmark Buildings and structures completed in the 2nd century Buildings and structures completed in 1612 Traiana Acqua Paola Trajan 2nd-century establishments in Italy 109 establishments Historic Civil Engineering Landmarks 100s establishments in the Roman Empire 1612 establishments in Italy
Aqua Traiana
[ "Engineering" ]
1,938
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
8,972,424
https://en.wikipedia.org/wiki/World%20Ocean%20Atlas
The World Ocean Atlas (WOA) is a data product of the Ocean Climate Laboratory of the National Centers for Environmental Information (U.S.). The WOA consists of a climatology of fields of in situ ocean properties for the World Ocean. It was first produced in 1994 (based on the earlier Climatological Atlas of the World Ocean, 1982), with later editions at roughly four-year intervals in 1998, 2001, 2005, 2009, 2013, 2018, and 2023. Dataset The World Ocean Atlas (WOA) is based on profile data from the World Ocean Database (WOD) Project. The fields that make up the WOA dataset consist of objectively analysed global grids at 1° spatial resolution. The fields are three-dimensional, and data are typically interpolated onto 33 standardised vertical intervals from the surface (0 m) to the abyssal seafloor (5500 m). In terms of temporal resolution, averaged fields are produced for annual, seasonal and monthly time-scales. The WOA fields include ocean temperature, salinity, dissolved oxygen, apparent oxygen utilisation (AOU), percent oxygen saturation, phosphate, silicic acid, and nitrate. Early editions of the WOA additionally included fields such as mixed layer depth and sea surface height. In addition to the averaged fields of ocean properties, the WOA also contains fields of statistical information concerning the constituent data from which the averages were produced. These include fields such as the number of data points each average is derived from, their standard deviation and standard error. A lower-resolution (5°) version of the WOA is also available. The WOA dataset is primarily available as compressed ASCII, but since WOA 2005 a netCDF version has also been produced. 
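The grid geometry described above (1° global resolution, 33 standard depth levels) can be sketched concretely. The helper below is illustrative only and not part of any WOA distribution; the dimension sizes follow from the text.

```python
# Sketch of WOA-style grid geometry: 33 standard depth levels on a
# one-degree global grid (180 latitude x 360 longitude cells).
# The index helper is illustrative, not part of any WOA product.
N_DEPTHS, N_LAT, N_LON = 33, 180, 360

def cell_index(lat, lon):
    """Map latitude/longitude (degrees) to 1-degree cell indices.

    Row 0 is the cell spanning 90S-89S; column 0 spans 180W-179W.
    """
    row = int(lat + 90.0)
    col = int(lon + 180.0)
    if not (0 <= row < N_LAT and 0 <= col < N_LON):
        raise ValueError("coordinate outside the global grid")
    return row, col

# One depth level stored as a flat list; None marks cells with no data (land).
surface = [None] * (N_LAT * N_LON)
row, col = cell_index(40.5, -70.5)   # a point in the north-west Atlantic
surface[row * N_LON + col] = 18.2    # e.g. a hypothetical annual-mean value
```

A full property field in this layout would simply stack 33 such levels, which is the three-dimensional structure the netCDF distribution encodes.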
Gallery See also CORA dataset European Atlas of the Seas Geochemical Ocean Sections Study (GEOSECS) Global Ocean Data Analysis Project (GLODAP) World Ocean Circulation Experiment (WOCE) References External links NODC Ocean Climate Laboratory datasets and products Chemical oceanography Oceanography World Ocean
World Ocean Atlas
[ "Physics", "Chemistry", "Environmental_science" ]
434
[ "Chemical oceanography", "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
8,974,081
https://en.wikipedia.org/wiki/ARTHUR
ARTHUR (an acronym for "artillery hunting radar") is a counter-battery radar system originally developed jointly for, and in close co-operation with, the Norwegian and Swedish armed forces by Ericsson Microwave Systems in both Sweden and Norway. It is also used by the British Army, under the designation TAIPAN. It is a mobile, passive electronically scanned array C-band radar for enemy field artillery acquisition, developed primarily as the core element of a brigade- or division-level counter-battery sensor system. The vehicle carrying the radar was originally a Bandvagn 206 developed and produced by Hägglund & Söner, but it is now more often delivered on trucks with ISO fasteners. The radar is now developed by Saab AB Electronic Defense Systems (after EMW was sold to Saab in June 2006) and Saab Technologies Norway AS. Role ARTHUR detects hostile artillery by tracking projectiles in flight. The original ARTHUR Mod A can locate guns at 15–20 km and 120 mm mortars at 30–35 km with a circular error probable (CEP) of 0.45% of range. This is accurate enough for effective counter-battery fire by friendly artillery batteries. ARTHUR can operate as a stand-alone, medium-range weapon-locating radar or as a long-range weapon-locating system consisting of two to four radars working in coordination. This flexibility enables the system to maintain constant surveillance of an area of interest. The upgraded ARTHUR Mod B met the British Army's MAMBA requirement for locating guns, mortars and rockets. It can locate guns at 20–25 km and 120 mm mortars at 35–40 km with a CEP of 0.35% of range. MAMBA was successfully used by the British Army in Iraq and Afghanistan, with an availability of 90%. ARTHUR Mod C has a larger antenna and can detect guns at 31 km, mortars at 55 km and rockets at 50–60 km depending on their size, and can locate targets at a rate of 100 per minute with a CEP of 0.2% of range for guns and rockets and 0.1% for mortars. 
ARTHUR WLR Mod D has several improvements, including an instrumented range of up to 100 km, an accuracy of 0.15% of range, and the ability to cover an arc of 120°. The detection range is between 0.8 and 100 km and could possibly increase to 200 km. More than 100 targets can be tracked at the same time. It was delivered to the British Army in 2024, under the designation TAIPAN. It can be carried by a C-130 or slung under a heavy-lift helicopter such as a Chinook. Its air mobility allows it to be used by light and rapid-reaction forces such as airborne and marine units. Nordic Battle Group The use of ARTHUR in Nordic Battle Groups will primarily concentrate on preventing the use of artillery barrages in civilian areas, since the radar can identify an artillery unit guilty of targeting civilians. It will also be used to warn friendly mission troops of incoming indirect fire. Operational modes ARTHUR can be operated in two main modes: weapon locating and fire direction. Weapon locating is used to determine the location of the guns, mortars or rocket launchers that fired and their target area. Fire direction is used to adjust the fire of the operator's own artillery onto target coordinates. Weapon locating When locating enemy artillery, the radar tracks the up-going trajectory of shells, calculates their points of origin and impact and, with other information, displays it to the radar operator(s). Depending on national tactics, techniques and procedures, the commander's orders and the situation, this information may be used to alert any troops in the impact area and to engage the hostile batteries with counter-battery fire. If the users have digital communications networks, these messages may be sent automatically. ARTHUR can determine whether the firing weapon is of artillery, rocket or mortar type based upon the curve of the trajectory, the munition's speed, and its range. 
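The percentage-of-range accuracy figures quoted for the various Mods translate directly into absolute location errors. A back-of-the-envelope sketch (my own arithmetic on the figures in the text, not manufacturer data):

```python
# Convert a circular error probable (CEP) quoted as a percentage of range
# into metres at a given detection range. The figures below use the Mod B
# (0.35% of range) and Mod D (0.15% of range) values given in the text.
def cep_metres(range_km, cep_percent_of_range):
    """Absolute CEP in metres for a target at range_km."""
    return range_km * 1000.0 * cep_percent_of_range / 100.0

print(cep_metres(20, 0.35))   # Mod B, gun at 20 km
print(cep_metres(40, 0.35))   # Mod B, mortar at 40 km
print(cep_metres(100, 0.15))  # Mod D at its maximum instrumented range
```

So a Mod B fix on a gun at 20 km is good to roughly 70 m, tight enough for the counter-battery fire missions described above.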
Fire direction When in fire direction mode the radar calculates the expected impact location of friendly fire. From this, corrections are calculated and reported in order to hit the target coordinates. Sweden also uses the radar for 'fall of shot' calibration. Threats Radars are easy to detect and locate if the enemy has the necessary ELINT/ESM capability. The consequences of this detection are likely to be attack by artillery fire or aircraft (including anti-radiation missiles) or ECM. In other circumstances ground attack with direct fire or short-range indirect fire is the main threat. The usual measures against the first are using a radar horizon to screen from ground-based detection, minimising transmission time, deploying radars singly and moving frequently. Swedish ARTHUR units usually operate in groups of three that guard the immediate surroundings. Operators Current operators – Canadian Army: on lease for operations in Afghanistan. – Czech Armed Forces: 3 units – Danish Army: being phased out under the last agreement for the Danish Defence – Hellenic Army – Italian Army: initially leased for use in Iraq in 2003/2004. In 2009 Italy decided to buy the system, which has been in use since at least 2013 in 5 units, to be replaced by 8 Thales Ground Master 200 MM/C in 2024 – South Korea: reportedly purchased Mod C in 2007; in 2011 Saab received a follow-up order. About 20 are positioned near the Korean Demilitarized Zone. – British Army: as of 22 July 2024, five next-generation ARTHUR Mod D systems, named TAIPAN by the British Army, have been delivered, accepted, and are now with 5th Regiment Royal Artillery, replacing the previous generation, designated MAMBA. – Armed Forces of Ukraine – Bulgarian Land Forces: Bulgaria's 4th Artillery Regiment began operating the ARTHUR Mod C in late 2024. 
See also AN/TPQ-36 Firefinder radar AN/TPQ-37 Firefinder radar SLC-2 Radar Swathi Weapon Locating Radar Penicillin (counter-artillery system) Red Color References External links at Saab One35th.com Military encyclopedia Info about ARTHUR from army.cz Ground radars Weapon locating radar Military radars of Sweden Warning systems Military radars of the United Kingdom Military vehicles introduced in the 1990s
ARTHUR
[ "Technology", "Engineering" ]
1,267
[ "Warning systems", "Safety engineering", "Measuring instruments", "Weapon locating radar" ]
17,634,827
https://en.wikipedia.org/wiki/Hill%20reagent
Hill reagents, introduced in 1937 by Robin Hill, are dyes that act as artificial electron acceptors, changing color when they are reduced; they allowed the discovery of electron transport chains during photosynthesis. An example of a Hill reagent is 2,6-dichlorophenolindophenol (DCPIP). References Oxidizing agents Photosynthesis
Hill reagent
[ "Chemistry", "Biology" ]
81
[ "Redox", "Photosynthesis", "Oxidizing agents", "Biochemistry", "Chemical process stubs" ]
17,636,860
https://en.wikipedia.org/wiki/Glacio-geological%20databases
Glacio-geological databases compile data on glacially associated sedimentary deposits and erosional activity from former and current ice-sheets, usually from published peer-reviewed sources. Their purposes are generally directed towards two ends: (Mode 1) compiling information about glacial landforms, which often inform about former ice-flow directions; and (Mode 2) compiling information which dates the absence or presence of ice. These databases are used for a variety of purposes: (i) as bibliographic tools for researchers; (ii) as the quantitative basis of mapping of landforms or dates of ice presence/absence; and (iii) as quantitative databases which are used to constrain physically based mathematical models of ice-sheets. Antarctic Ice Sheet: The AGGDB is a Mode 2 glacio-geological database for the Antarctic ice-sheet using information from around 150 published sources, covering glacial activity mainly from the past 30,000 years. It is available online, and aims to be comprehensive to the end of 2007. British Ice Sheet: BRITICE is a Mode 1 database which aims to map all glacial landforms of Great Britain. Eurasian Ice Sheet: DATED-1 is a Mode 2 database for the Eurasian ice-sheet. Its sister-project DATED-2 uses the information in DATED-1 to map the retreat of the Eurasian ice-sheet since the Last Glacial Maximum. See also Glacial landforms Sediment Geology Ice sheet Exposure Age Dating Radio-carbon dating References Quaternary Physical geography Physical oceanography Scientific databases Geographical databases
Glacio-geological databases
[ "Physics" ]
309
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
17,637,008
https://en.wikipedia.org/wiki/Monohydrogen%20phosphate
Hydrogen phosphate or monohydrogen phosphate (systematic name) is the inorganic ion with the formula [HPO4]2-. Its formula can also be written as [PO3(OH)]2-. Together with dihydrogen phosphate, hydrogen phosphate occurs widely in natural systems. Their salts are used in fertilizers and in cooking. Most hydrogen phosphate salts are colorless, water soluble, and nontoxic. It is a conjugate acid of phosphate [PO4]3- and a conjugate base of dihydrogen phosphate [H2PO4]−. It is formed when a pyrophosphate anion reacts with water by hydrolysis, which gives hydrogen phosphate: [P2O7]4− + H2O → 2 [HPO4]2− Acid-base equilibria Hydrogen phosphate is an intermediate in the multistep conversion of phosphoric acid to phosphate: H3PO4 ⇌ [H2PO4]− + H+ ⇌ [HPO4]2− + 2 H+ ⇌ [PO4]3− + 3 H+ Examples Diammonium phosphate, (NH4)2HPO4 Disodium phosphate, Na2HPO4, with varying amounts of water of hydration References Anions Phosphates
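The multistep conversion of phosphoric acid to phosphate determines which species dominates at a given pH. As an illustrative sketch (using the commonly quoted pKa values of about 2.15, 7.20 and 12.37 for phosphoric acid, which are textbook figures rather than values from this article), the equilibrium fractions can be computed:

```python
# Fraction of each phosphate species as a function of pH, from the
# three acid dissociation constants of phosphoric acid.  The pKa
# values (~2.15, 7.20, 12.37) are standard textbook figures.
K1, K2, K3 = 10**-2.15, 10**-7.20, 10**-12.37

def phosphate_fractions(pH):
    """Return fractions of (H3PO4, H2PO4-, HPO4 2-, PO4 3-) at a given pH."""
    h = 10**-pH
    # Relative concentrations, taking [H3PO4] = 1:
    terms = [1.0, K1 / h, K1 * K2 / h**2, K1 * K2 * K3 / h**3]
    total = sum(terms)
    return [t / total for t in terms]

# At physiological pH 7.4, hydrogen phosphate (index 2) is the most
# abundant species, just ahead of dihydrogen phosphate (index 1):
fr = phosphate_fractions(7.4)
dominant = max(range(4), key=lambda i: fr[i])
print(dominant)  # -> 2
```

This is why buffers of dihydrogen and hydrogen phosphate are effective near neutral pH.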
Monohydrogen phosphate
[ "Physics", "Chemistry" ]
230
[ "Matter", "Anions", "Salts", "Phosphates", "Ions" ]
17,637,383
https://en.wikipedia.org/wiki/Discrete%20Morse%20theory
Discrete Morse theory is a combinatorial adaptation of Morse theory developed by Robin Forman. The theory has various practical applications in diverse fields of applied mathematics and computer science, such as configuration spaces, homology computation, denoising, mesh compression, and topological data analysis. Notation regarding CW complexes Let $\mathcal{X}$ be a CW complex and denote by $\mathcal{K}$ its set of cells. Define the incidence function $\kappa\colon \mathcal{K} \times \mathcal{K} \to \mathbb{Z}$ in the following way: given two cells $\sigma$ and $\tau$ in $\mathcal{K}$, let $\kappa(\sigma, \tau)$ be the degree of the attaching map from the boundary of $\sigma$ to $\tau$. The boundary operator $\partial$ is the endomorphism of the free abelian group generated by $\mathcal{K}$ defined by $\partial \sigma = \sum_{\tau \in \mathcal{K}} \kappa(\sigma, \tau)\, \tau$. It is a defining property of boundary operators that $\partial \circ \partial = 0$. In more axiomatic definitions one can find the requirement that $\sum_{\tau \in \mathcal{K}} \kappa(\sigma, \tau)\, \kappa(\tau, \rho) = 0$ for all cells $\sigma, \rho$, which is a consequence of the above definition of the boundary operator and the requirement that $\partial \circ \partial = 0$. Discrete Morse functions A real-valued function $f\colon \mathcal{K} \to \mathbb{R}$ is a discrete Morse function if it satisfies the following two properties: For any cell $\sigma$, the number of cells $\tau$ in the boundary of $\sigma$ which satisfy $f(\tau) \geq f(\sigma)$ is at most one. For any cell $\sigma$, the number of cells $\tau$ containing $\sigma$ in their boundary which satisfy $f(\tau) \leq f(\sigma)$ is at most one. It can be shown that the cardinalities in the two conditions cannot both be one simultaneously for a fixed cell $\sigma$, provided that $\mathcal{X}$ is a regular CW complex. In this case, each cell $\sigma$ can be paired with at most one exceptional cell $\tau$: either a boundary cell with larger value, or a co-boundary cell with smaller value. The cells which have no pairs, i.e., whose function values are strictly higher than their boundary cells and strictly lower than their co-boundary cells are called critical cells. Thus, a discrete Morse function partitions the CW complex into three distinct cell collections: $\mathcal{K} = \mathcal{A} \sqcup \mathcal{Q} \sqcup \mathcal{R}$, where: $\mathcal{A}$ denotes the critical cells which are unpaired, $\mathcal{Q}$ denotes cells which are paired with boundary cells, and $\mathcal{R}$ denotes cells which are paired with co-boundary cells.
By construction, there is a bijection of sets between the $n$-dimensional cells in $\mathcal{R}$ and the $(n+1)$-dimensional cells in $\mathcal{Q}$, which can be denoted by $p^n\colon \mathcal{R}^n \to \mathcal{Q}^{n+1}$ for each natural number $n$. It is an additional technical requirement that for each $\sigma \in \mathcal{R}^n$, the degree of the attaching map from the boundary of its paired cell $p^n(\sigma)$ to $\sigma$ is a unit in the underlying ring of $\mathcal{X}$. For instance, over the integers $\mathbb{Z}$, the only allowed values are $\pm 1$. This technical requirement is guaranteed, for instance, when one assumes that $\mathcal{X}$ is a regular CW complex over $\mathbb{Z}$. The fundamental result of discrete Morse theory establishes that the CW complex $\mathcal{X}$ is isomorphic on the level of homology to a new complex $\mathcal{M}$ consisting of only the critical cells. The paired cells in $\mathcal{Q}$ and $\mathcal{R}$ describe gradient paths between adjacent critical cells which can be used to obtain the boundary operator on $\mathcal{M}$. Some details of this construction are provided in the next section. The Morse complex A gradient path is a sequence of paired cells $\gamma = (\sigma_1, \tau_1, \sigma_2, \tau_2, \ldots, \sigma_k, \tau_k)$ satisfying $\tau_i = p(\sigma_i)$ and $\sigma_{i+1} \in \partial \tau_i$ with $\sigma_{i+1} \neq \sigma_i$. The index of this gradient path is defined to be the integer $\nu(\gamma) = \prod_{i=1}^{k-1} \frac{-\kappa(\tau_i, \sigma_{i+1})}{\kappa(\tau_i, \sigma_i)}$. The division here makes sense because the incidence between paired cells must be $\pm 1$. Note that by construction, the values of the discrete Morse function $f$ must decrease across $\gamma$. The path $\gamma$ is said to connect two critical cells $A, A' \in \mathcal{A}$ if $\kappa(A, \sigma_1) \neq 0 \neq \kappa(\tau_k, A')$. This relationship may be expressed as $A \stackrel{\gamma}{\to} A'$. The multiplicity of this connection is defined to be the integer $m(\gamma) = \kappa(A, \sigma_1) \cdot \nu(\gamma) \cdot \kappa(\tau_k, A')$. Finally, the Morse boundary operator on the critical cells $\mathcal{A}$ is defined by $\Delta A = \sum_{A'} \Big( \kappa(A, A') + \sum_{A \stackrel{\gamma}{\to} A'} m(\gamma) \Big)\, A'$, where the inner sum is taken over all gradient path connections $\gamma$ from $A$ to $A'$. Basic results Many of the familiar results from continuous Morse theory apply in the discrete setting. The Morse inequalities Let $\mathcal{M}$ be a Morse complex associated to the CW complex $\mathcal{X}$. The number $m_q$ of $q$-cells in $\mathcal{M}$ is called the $q$-th Morse number. Let $b_q$ denote the $q$-th Betti number of $\mathcal{X}$. Then, for any $N$, the following inequalities hold: $m_N \geq b_N$, and $m_N - m_{N-1} + \dots \pm m_0 \geq b_N - b_{N-1} + \dots \pm b_0$. Moreover, the Euler characteristic of $\mathcal{X}$ satisfies $\chi(\mathcal{X}) = m_0 - m_1 + m_2 - \dots$ Discrete Morse homology and homotopy type Let $\mathcal{X}$ be a regular CW complex with boundary operator $\partial$ and a discrete Morse function $f\colon \mathcal{K} \to \mathbb{R}$.
Let $\mathcal{M}$ be the associated Morse complex with Morse boundary operator $\Delta$. Then, there is an isomorphism of homology groups $H_*(\mathcal{M}, \Delta) \cong H_*(\mathcal{X}, \partial)$, and similarly for the homotopy groups. Applications Discrete Morse theory finds its application in molecular shape analysis, skeletonization of digital images/volumes, graph reconstruction from noisy data, denoising noisy point clouds and analysing lithic tools in archaeology. See also Digital Morse theory Stratified Morse theory Shape analysis Topological combinatorics Discrete differential geometry References Combinatorics Morse theory Computational topology
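As a small worked illustration of the definitions above (an added example, not part of the original article), consider the circle $S^1$ as a regular CW complex:

```latex
% Illustrative example: the circle S^1 with two vertices a, b and two
% edges e_1, e_2, carrying the discrete Morse function
f(a) = 0, \qquad f(b) = 1, \qquad f(e_1) = 1, \qquad f(e_2) = 2.
% The edge e_1 has exactly one boundary cell with f(b) \geq f(e_1),
% so b and e_1 are paired, while a and e_2 are unpaired (critical).
% The Morse complex thus has one critical 0-cell and one critical
% 1-cell with vanishing Morse boundary operator, recovering
H_0(S^1) \cong H_1(S^1) \cong \mathbb{Z},
% and the Morse inequalities hold with equality:
m_0 = b_0 = 1, \qquad m_1 = b_1 = 1.
```

Note that the constant function would fail the definition: a discrete Morse function on the circle needs at least one critical cell in each of the dimensions 0 and 1.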
Discrete Morse theory
[ "Mathematics" ]
861
[ "Discrete mathematics", "Computational topology", "Computational mathematics", "Combinatorics", "Topology" ]
17,638,001
https://en.wikipedia.org/wiki/Stokes%20stream%20function
In fluid dynamics, the Stokes stream function is used to describe the streamlines and flow velocity in a three-dimensional incompressible flow with axisymmetry. A surface with a constant value of the Stokes stream function encloses a streamtube, everywhere tangential to the flow velocity vectors. Further, the volume flux within this streamtube is constant, and all the streamlines of the flow are located on this surface. The velocity field associated with the Stokes stream function is solenoidal—it has zero divergence. This stream function is named in honor of George Gabriel Stokes. Cylindrical coordinates Consider a cylindrical coordinate system ( ρ , φ , z ), with the z–axis the line around which the incompressible flow is axisymmetrical, φ the azimuthal angle and ρ the distance to the z–axis. Then the flow velocity components uρ and uz can be expressed in terms of the Stokes stream function ψ by: $u_\rho = -\frac{1}{\rho}\frac{\partial \psi}{\partial z}, \qquad u_z = +\frac{1}{\rho}\frac{\partial \psi}{\partial \rho}.$ The azimuthal velocity component uφ does not depend on the stream function. Due to the axisymmetry, all three velocity components ( uρ , uφ , uz ) only depend on ρ and z and not on the azimuth φ. The volume flux, through the surface bounded by a constant value ψ of the Stokes stream function, is equal to 2π ψ. Spherical coordinates In spherical coordinates ( r , θ , φ ), r is the radial distance from the origin, θ is the zenith angle and φ is the azimuthal angle. In axisymmetric flow, with θ = 0 the rotational symmetry axis, the quantities describing the flow are again independent of the azimuth φ. The flow velocity components ur and uθ are related to the Stokes stream function through: $u_r = \frac{1}{r^2 \sin\theta}\frac{\partial \psi}{\partial \theta}, \qquad u_\theta = -\frac{1}{r \sin\theta}\frac{\partial \psi}{\partial r}.$ Again, the azimuthal velocity component uφ is not a function of the Stokes stream function ψ. The volume flux through a stream tube, bounded by a surface of constant ψ, equals 2π ψ, as before. Vorticity The vorticity is defined as: $\boldsymbol{\omega} = \nabla \times \boldsymbol{u} = \omega\, \hat{\boldsymbol{\varphi}}$, where $\omega = -\frac{E^2 \psi}{r \sin\theta}$, with the operator $E^2 = \frac{\partial^2}{\partial r^2} + \frac{\sin\theta}{r^2}\, \frac{\partial}{\partial \theta}\!\left(\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\right)$ and $\hat{\boldsymbol{\varphi}}$ the unit vector in the φ–direction.
Derivation of vorticity using a Stokes stream function Consider the vorticity as defined by $\boldsymbol{\omega} = \nabla \times \boldsymbol{u}$. From the definition of the curl in spherical coordinates, the $r$– and $\theta$–components are equal to zero, since $u_\varphi = 0$ and $u_r$, $u_\theta$ are independent of $\varphi$. Substituting $u_r = \frac{1}{r^2 \sin\theta}\frac{\partial \psi}{\partial \theta}$ and $u_\theta = -\frac{1}{r \sin\theta}\frac{\partial \psi}{\partial r}$ into the remaining $\varphi$–component, the vorticity vector is found to be equal to: $\boldsymbol{\omega} = -\frac{1}{r \sin\theta}\left(\frac{\partial^2 \psi}{\partial r^2} + \frac{\sin\theta}{r^2}\, \frac{\partial}{\partial \theta}\!\left(\frac{1}{\sin\theta}\frac{\partial \psi}{\partial \theta}\right)\right) \hat{\boldsymbol{\varphi}}.$ Comparison with cylindrical The cylindrical and spherical coordinate systems are related through $z = r \cos\theta$ and $\rho = r \sin\theta$. Alternative definition with opposite sign As explained in the general stream function article, definitions using an opposite sign convention – for the relationship between the Stokes stream function and flow velocity – are also in use. Zero divergence In cylindrical coordinates, the divergence of the velocity field u becomes: $\nabla \cdot \boldsymbol{u} = \frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho\, u_\rho\right) + \frac{\partial u_z}{\partial z} = -\frac{1}{\rho}\frac{\partial^2 \psi}{\partial \rho\, \partial z} + \frac{1}{\rho}\frac{\partial^2 \psi}{\partial z\, \partial \rho} = 0,$ as expected for an incompressible flow. And in spherical coordinates: $\nabla \cdot \boldsymbol{u} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 u_r\right) + \frac{1}{r \sin\theta}\frac{\partial}{\partial \theta}\left(u_\theta \sin\theta\right) = \frac{1}{r^2 \sin\theta}\frac{\partial^2 \psi}{\partial r\, \partial \theta} - \frac{1}{r^2 \sin\theta}\frac{\partial^2 \psi}{\partial \theta\, \partial r} = 0.$ Streamlines as curves of constant stream function From calculus it is known that the gradient vector $\nabla \psi$ is normal to the curve $\psi = C$ (see e.g. Level set#Level sets versus the gradient). If it is shown that $\boldsymbol{u} \cdot \nabla \psi = 0$ everywhere, using the formula for $\boldsymbol{u}$ in terms of $\psi$, then this proves that level curves of $\psi$ are streamlines. Cylindrical coordinates In cylindrical coordinates, $\nabla \psi = \frac{\partial \psi}{\partial \rho}\boldsymbol{e}_\rho + \frac{\partial \psi}{\partial z}\boldsymbol{e}_z$ and $\boldsymbol{u} = -\frac{1}{\rho}\frac{\partial \psi}{\partial z}\boldsymbol{e}_\rho + \frac{1}{\rho}\frac{\partial \psi}{\partial \rho}\boldsymbol{e}_z$. So that $\boldsymbol{u} \cdot \nabla \psi = -\frac{1}{\rho}\frac{\partial \psi}{\partial z}\frac{\partial \psi}{\partial \rho} + \frac{1}{\rho}\frac{\partial \psi}{\partial \rho}\frac{\partial \psi}{\partial z} = 0.$ Spherical coordinates And in spherical coordinates $\nabla \psi = \frac{\partial \psi}{\partial r}\boldsymbol{e}_r + \frac{1}{r}\frac{\partial \psi}{\partial \theta}\boldsymbol{e}_\theta$ and $\boldsymbol{u} = \frac{1}{r^2 \sin\theta}\frac{\partial \psi}{\partial \theta}\boldsymbol{e}_r - \frac{1}{r \sin\theta}\frac{\partial \psi}{\partial r}\boldsymbol{e}_\theta$. So that $\boldsymbol{u} \cdot \nabla \psi = \frac{1}{r^2 \sin\theta}\left(\frac{\partial \psi}{\partial \theta}\frac{\partial \psi}{\partial r} - \frac{\partial \psi}{\partial r}\frac{\partial \psi}{\partial \theta}\right) = 0.$ Notes References Originally published in 1879, the 6th extended edition appeared first in 1932. Reprinted in: Fluid dynamics
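The defining relations can also be verified symbolically. The sketch below is an added illustration: it assumes the common sign convention $u_\rho = -\frac{1}{\rho}\frac{\partial \psi}{\partial z}$, $u_z = +\frac{1}{\rho}\frac{\partial \psi}{\partial \rho}$, and takes the Stokes stream function of a uniform axial flow, ψ = ½Uρ², checking that it reproduces u_z = U with zero divergence and that u·∇ψ = 0:

```python
# Symbolic check of the Stokes stream function relations in cylindrical
# coordinates, using sympy.  Convention: u_rho = -(1/rho) dpsi/dz,
# u_z = +(1/rho) dpsi/drho.  Example: uniform axial flow psi = U*rho^2/2.
import sympy as sp

rho, z, U = sp.symbols('rho z U', positive=True)
psi = U * rho**2 / 2            # Stokes stream function of a uniform flow

u_rho = -sp.diff(psi, z) / rho  # radial velocity component
u_z = sp.diff(psi, rho) / rho   # axial velocity component

# Incompressibility: div u = (1/rho) d(rho*u_rho)/drho + du_z/dz
div_u = sp.diff(rho * u_rho, rho) / rho + sp.diff(u_z, z)

# Streamlines are level curves of psi: u . grad(psi) should vanish
advect = u_rho * sp.diff(psi, rho) + u_z * sp.diff(psi, z)

print(sp.simplify(u_z), sp.simplify(div_u), sp.simplify(advect))
# -> U 0 0
```

The same check works for any axisymmetric ψ(ρ, z), since the divergence cancels identically by the equality of mixed partial derivatives.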
Stokes stream function
[ "Chemistry", "Engineering" ]
722
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
17,638,299
https://en.wikipedia.org/wiki/Polylysine
Polylysine refers to several types of lysine homopolymers, which may differ from each other in terms of stereochemistry (D/L; the L form is natural and usually assumed) and link position (α/ε). Of these types, only ε-poly-L-lysine is produced naturally. Chemical structure The precursor amino acid lysine contains two amino groups, one at the α-carbon and one at the ε-carbon. Either can be the location of polymerization, resulting in α-polylysine or ε-polylysine. Polylysine is a homopolypeptide belonging to the group of cationic polymers: at pH 7, polylysine contains a positively charged hydrophilic amino group. α-Polylysine is a synthetic polymer, which can be composed of either L-lysine or D-lysine. "L" and "D" refer to the chirality at lysine's central carbon. This results in poly-L-lysine (PLL) and poly-D-lysine (PDL) respectively. ε-Polylysine (ε-poly-L-lysine, EPL) is typically produced by bacterial fermentation as a homopolypeptide of approximately 25–30 L-lysine residues. According to research, ε-polylysine is adsorbed electrostatically to the cell surface of the bacteria, followed by a stripping of the outer membrane. This eventually leads to an abnormal distribution of the cytoplasm, causing damage to the bacterial cell. ε-Poly-L-lysine is used as a natural preservative in food products. Production Production of polylysine by natural fermentation is only observed in strains of bacteria in the genus Streptomyces. Streptomyces albulus is most often used in scientific studies and is also used for the commercial production of ε-polylysine. α-Polylysine is synthetically produced by a basic polycondensation reaction. History The production of ε-polylysine by natural fermentation was first described by researchers Shoji Shima and Heiichi Sakai in 1977. Since the late 1980s, ε-polylysine has been approved by the Japanese Ministry of Health, Labour and Welfare as a preservative in food.
In January 2004, ε-polylysine became generally recognized as safe (GRAS) certified in the United States. ε-Polylysine In food ε-Polylysine is used commercially as a food preservative in Japan, Korea and in imported items sold in the United States. Food products containing polylysine are mainly found in Japan. The use of polylysine is common in food applications such as boiled rice, cooked vegetables, soups, noodles and sliced fish (sushi). Literature studies have reported an antimicrobial effect of ε-polylysine against yeast, fungi, Gram-positive bacteria and Gram-negative bacteria. Polylysine has a light yellow appearance and is slightly bitter in taste whether in powder or liquid form. α-Polylysine In tissue culture α-Polylysine is commonly used to coat tissue cultureware as an attachment factor which improves cell adherence. This phenomenon is based on the interaction between the positively charged polymer and negatively charged cells or proteins. While the poly-L-lysine (PLL) precursor amino acid occurs naturally, the poly-D-lysine (PDL) precursor is an artificial product. The latter is therefore thought to be resistant to enzymatic degradation and so may prolong cell adherence. Polylysine in drug delivery Polylysine exhibits high positive charge density which allows it to form soluble complexes with negatively charged macromolecules. Polylysine homopolymers or block copolymers have been widely used for delivery of DNA and proteins. Polylysine-based nanoparticles have also been shown to passively accumulate in the injured sites of blood vessels after stroke due to incorporation into newly formed thrombus, which offers a new way to deliver therapeutic agents specifically to the sites of injury after vascular damage. Chemical modification In 2010, hydrophobically modified ε-polylysine was synthesized by reacting EPL with octenyl succinic anhydride (OSA). It was found that OSA-g-EPLs had glass transition temperatures lower than EPL. 
They were able to form polymer micelles in water and to lower the surface tension of water, confirming their amphiphilic properties. The antimicrobial activities of OSA-g-EPLs were also examined, and the minimum inhibitory concentrations of OSA-g-EPLs against Escherichia coli O157:H7 remained the same as that of EPL. Therefore, modified EPLs have the potential of becoming bifunctional molecules, which can be used either as surfactants or emulsifiers in the encapsulation of water-insoluble drugs or as antimicrobial agents. References Food additives Food preservatives Polymers Amino acid derivatives
Polylysine
[ "Chemistry", "Materials_science" ]
1,098
[ "Polymers", "Polymer chemistry" ]
17,638,686
https://en.wikipedia.org/wiki/Chafery
A chafery is a variety of hearth used in ironmaking for reheating a bloom of iron, in the course of its being drawn out into a bar of wrought iron. The equivalent term for a bloomery was string hearth, except in 17th century Cumbria, where the terminology was that of the finery forge. A finery forge for the Walloon process would typically have one chafery to work two fineries (but sometimes one or three fineries). Chaferies were also used in the potting and stamping forges of the Industrial Revolution. Metallurgy References
Chafery
[ "Chemistry", "Materials_science", "Engineering" ]
122
[ "Metallurgy", "Materials science", "nan" ]
17,639,376
https://en.wikipedia.org/wiki/Sir%20Philip%20Sidney%20game
In biology and game theory, the Sir Philip Sidney game is used as a model for the evolution and maintenance of informative communication between relatives. Developed by John Maynard Smith as a model for chick begging behavior, it has been studied extensively including the development of many modified versions. It was named after a story about Sir Philip Sidney who, fatally wounded, allegedly gave his water to another, saying, "thy necessity is yet greater than mine." The phenomenon Young birds and other animals beg for food from their parents. It appears that in many species the intensity of begging varies with the need of the chick and that parents give more food to those chicks that beg more. Since parents respond differentially, chicks have an incentive to overstate their need since it will result in them receiving more food. If all chicks overstate their need, parents have an incentive to ignore the begging and give food using some other rule. This situation represents a case of animal signaling where there arises an evolutionary question to explain the maintenance of the signal. The Sir Philip Sidney game formalizes a signalling theory suggestion from Amotz Zahavi, the handicap principle, that reliability is maintained by making the signal costly to produce—chicks expend energy in begging. Since it requires energy to beg, only chicks in dire need should be willing to expend the energy to secure food. The game There are two individuals, the signaler and the responder. The responder has some good which can be transferred to the signaler or not. If the responder keeps the good, the responder has a fitness of 1, otherwise the responder has a fitness of (1-d). The signaler can be in one of two states, healthy or needy. If the signaler receives the good, his fitness will be 1. Otherwise his fitness will be (1-b) or (1-a) if healthy or needy respectively (where a>b). The signaler can send a signal or not. 
If he sends the signal he incurs a cost of c regardless of the outcome. If individuals maximize their own fitness the responder should never transfer the good, since he is reducing his own fitness for no gain. However, it is supposed that the signaler and responder are related by some degree r. Each individual attempts to maximize his inclusive fitness, and so in some cases the responder would like to transfer the good. The case of interest is where the responder only wants to transfer the good to the needy signaler, but the signaler would want the good regardless of his state. This creates a partial conflict of interest, where there would be an incentive for deception. Maynard Smith showed, however, that for certain values of c, honest signaling can be an evolutionarily stable strategy. This suggests that it might be sustained by evolution. Criticisms The empirical study of chick begging has cast some doubt on the appropriateness of the Sir Philip Sidney game and on the handicap principle as an explanation for chick begging behavior. Several empirical studies have attempted to measure the cost of begging, in effect measuring c. These studies have found that although there is a cost, it is far lower than would be sufficient to sustain honesty. Since the actual benefits of food are hard to calculate, the required value of c cannot be determined exactly, but it nonetheless has raised concern. In addition to the empirical concern, there has been theoretical concern. In a series of papers, Carl Bergstrom and Michael Lachmann suggest that in many biologically possible cases we should not expect to find signaling despite the fact that it is an evolutionarily stable strategy. They point out that whenever a signaling strategy is evolutionarily stable, non-signaling equilibria are as well. As a result, evolutionary stability alone does not require the evolution of signaling.
In addition, they note that in many of these cases the signaling equilibrium is Pareto inferior to the non-signaling one – both the chick and parent are worse off than if there was no signaling. Since one would expect non-signaling to be the ancestral state, it is unclear how evolution would move a population from a superior equilibrium to an inferior one. Both of these concerns led Bergstrom and Lachmann to suggest a modified game where honesty is maintained, not by signal cost, but instead by the common interest inherent in interaction among relatives. In their partial pooling model, individuals have no incentive to lie, because the lie would harm their relative proportionally more than it would help them. As a result, they do better by remaining honest. References Game theory game classes Animal communication Philip Sidney
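The inclusive-fitness logic of the basic game can be made concrete. The sketch below is an illustrative textbook-style analysis (not Maynard Smith's original derivation): with the responder playing "transfer only if signaled," honesty requires that the needy gain by signaling, that the healthy do not, and that the responder prefer transferring to the needy but not to the healthy.

```python
# Illustrative inclusive-fitness check for the basic Sir Philip Sidney
# game (a sketch of a common textbook analysis, not Maynard Smith's
# original presentation).  The responder transfers only on a signal.
def honest_signaling_stable(a, b, c, d, r):
    """True if honest signaling is stable for the given parameters.

    a, b: fitness losses of a needy/healthy signaler without the good (a > b)
    c:    cost of sending the signal
    d:    fitness loss to the responder from giving up the good
    r:    relatedness between signaler and responder
    """
    needy_signals = a - r * d > c       # needy gains by signaling
    healthy_silent = c > b - r * d      # healthy does not gain by signaling
    give_to_needy = r * a > d           # responder prefers to transfer
    keep_from_healthy = d > r * b       # responder prefers to keep
    return (needy_signals and healthy_silent
            and give_to_needy and keep_from_healthy)

# With a = 0.9, b = 0.1, d = 0.2, r = 0.5, a signal cost of c = 0.3
# satisfies all four conditions, while a cost-free signal (c = 0) fails
# the "healthy stays silent" condition:
print(honest_signaling_stable(0.9, 0.1, 0.3, 0.2, 0.5))  # True
print(honest_signaling_stable(0.9, 0.1, 0.0, 0.2, 0.5))  # False
```

The second call illustrates the handicap logic in the text: without a sufficient signal cost, healthy signalers would also signal and the signal would carry no information.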
Sir Philip Sidney game
[ "Mathematics" ]
929
[ "Game theory game classes", "Game theory" ]
17,639,853
https://en.wikipedia.org/wiki/Tecticornia%20bibenda
Tecticornia bibenda is a species of plant in the subfamily Salicornioideae of the family Amaranthaceae from Western Australia. Its segmented stem gives it an appearance similar to the Michelin Man. T. bibenda ranked tenth in the top species of 2008 by the International Institute for Species Exploration. References bibenda Caryophyllales of Australia Halophytes Eudicots of Western Australia Plants described in 2007 Taxa named by Kelly Anne Shepherd
Tecticornia bibenda
[ "Chemistry" ]
100
[ "Halophytes", "Salts" ]
17,639,871
https://en.wikipedia.org/wiki/VIA%20OpenBook
VIA OpenBook is a laptop reference design from VIA Technologies, announced in 2008. The laptop case design was released as open source. Specifications Dimensions Dimensions: 24.0w x 17.5d x 3.62h cm (at battery), (9.45w x 6.89d x 1.43h in) Weight: under 1 kg Processor, memory Processor: 1.0 GHz VIA Nano ULV Chipset: VIA VX800 unified Memory: DDR2 SO-DIMM up to 2 GB Hard disk: 160 GB or above Networking, wireless Networking: 10/100/1000 Mbit/s Broadcom Giga NIC Ethernet Wireless: 802.11b/g Broadcom or 802.16e GCT Bluetooth, Wi-Fi, WiMAX, EV-DO /W-CDMA, HSDPA, GPS options. Peripherals Screen: LED 8.9" WVGA 1024 x 600 Graphics: VIA Chrome9 HC3 DX9 3D engine with shared system memory up to 256 MB Card reader: 4-in-1 embedded USB: 3 x (Ver. 2.0 Type A Port) Audio: Realtek HD audio codec, 2 speakers Audio jacks: 1 microphone-in, 1 headphone out Camera: CCD 2.01 megapixel, dual-headed rotary Battery Battery: 4 cell See also Open-design movement External links VIA Unveils VIA OpenBook Mini-Note Reference Design (Press Release) References Subnotebooks VIA Technologies Netbooks Open design Computer-related introductions in 2008
VIA OpenBook
[ "Engineering" ]
324
[ "Design", "Open design" ]
17,639,930
https://en.wikipedia.org/wiki/List%20of%20Angry%20Video%20Game%20Nerd%20episodes
Angry Video Game Nerd (abbreviated as AVGN) is an American web series of comedy-themed retrogaming reviews, created by and starring James Rolfe. The show revolves around reviews that involve acerbic rants about low quality video games. From the beginning of season 2, new episodes were aired first on GameTrailers.com, but they now air first at Cinemassacre.com, with episodes later being re-aired on Rolfe's own YouTube channel. Episodes are usually scheduled for release on the first or second Wednesday of each month; originally, Rolfe's early work schedule allowed for two episodes per month, but other work commitments changed this to its present arrangement. The only Angry Video Game Nerd episode that never officially made it to, or remained on, YouTube in its original form (although it is still available for viewing) was Atari Porn, which was removed after the site flagged it for inappropriate content per its community guidelines. On November 21, 2023, a heavily censored version was officially uploaded to the Cinemassacre YouTube channel, under the name Atari Pork. The two-part review of Teenage Mutant Ninja Turtles 3 was removed for copyright issues; however, an edited version was reuploaded in 2020 as one video. Two other episodes were later removed for using movie clips from copyrighted films – Rocky and Super Mario Bros. 3 – but were later reuploaded to YouTube after being amended and changed to comply with the website's policies. Series overview Episodes Season 1 (2004–06) Season 2 (2007–08) Season 3 (2008–09) Season 4 (2009–10) Season 5 (2010–11) Season 6 (2011) Season 7 (2012–13) Starting with this season, episodes no longer aired on GameTrailers. Season 8 (2014) Episodes 5 to 16 were released as part of the "12 Days of Shitsmas" event in December 2014. Angry Video Game Nerd: The Movie (2014) At the end of the Spielberg Games review, it was implied that E.T. would be reviewed in The Angry Video Game Nerd: The Movie.
Eventually, at TooManyGames 2011 and Magfest 2012, Rolfe confirmed that he would review E.T. in the film. E.T. programmer, Howard Scott Warshaw, also makes an appearance in the film. The film premiered July 21, 2014. Season 9 (2015) Season 10 (2016) Season 11 (2017) Season 12 (2018) Season 13 (2019) Season 14 (2020) Season 15 (2021) Season 16 (2022) Season 17 (2023) Season 18 (2024) Related videos Cinemassacre The following collection of videos features appearances by either James Rolfe or his character The Nerd: ScrewAttack The following collection of videos features appearances by either James Rolfe or his character The Nerd: Channel Awesome The following collection of videos features appearances by either James Rolfe or his character The Nerd. This includes Channel Awesome's collection of videos from the special crossover series between Angry Video Game Nerd and Nostalgia Critic: Cinevore Studios/Mixed Nuts Productions The following collection of videos features appearances by James Rolfe's character, the Nerd: GameTrailers The following collection of videos features appearances by either James Rolfe or his character The Nerd: Pat the NES Punk The following collection of videos features appearances by either James Rolfe or his character The Nerd: Other The following collection of videos features appearances by either James Rolfe or his character The Nerd: Clip Collection videos Bad Game Cover Art In 2015, from December 1 to 25, a series of mini-episodes was released in the style of an advent calendar, in which the Nerd comments on poor examples of video game cover art. The following lists these episodes: Shorts YouTube Shorts featuring the Angry Video Game Nerd. Home releases These DVDs and Blu-rays have not been sold in stores, with the exception of the ScrewAttack store. References External links AVGN Full Episode List at Cinemassacre Productions Cinemassacre Angry Video Game Nerd, the Angry Video Nerd, The Angry Video Game Nerd, the
List of Angry Video Game Nerd episodes
[ "Technology" ]
857
[ "Computing-related lists", "Video game lists" ]
17,640,740
https://en.wikipedia.org/wiki/Hayyim%20Selig%20Slonimski
Ḥayyim Selig ben Ya'akov Slonimski (March 31, 1810 – May 15, 1904), also known by his acronym ḤaZaS, was a Hebrew publisher, mathematician, astronomer, inventor, science writer, and rabbi. He was among the first to write books on science for a broad Jewish audience, and was the founder of Ha-Tsfira, the first Hebrew-language newspaper with an emphasis on the sciences. Biography Ḥayyim Selig Slonimski was born in Bialystok, in the Grodno Governorate of the Russian Empire (present-day Poland), the oldest son of Rabbi Avraham Ya'akov Bishka and Leah (Neches) Bishka. His father belonged to a family of rabbis, writers, publishers and printers, and his mother was the daughter of Rabbi Yeḥiel Neches, an owner of a well-known beit midrash in Bialystok. Slonimski had a traditional Jewish upbringing and Talmudic education; without a formal secular education, Slonimski taught himself mathematics, astronomy, and foreign languages. An advocate for the education of Eastern European Jews in the sciences, Slonimski introduced a vocabulary of technical terms created partly by himself into the Hebrew language. At age 24 he finished writing a textbook on mathematics, of which, due to lack of funds, only the first part was published, in 1834, under the title Mosedei Ḥokhmah. The following year, Slonimski released Sefer Kokhva de-Shavit (1835), a collection of essays on Halley's comet and other astronomy-related topics such as the laws of Kepler and Newton's laws of motion. In 1838 Slonimski settled in Warsaw, where he became acquainted with mathematician and inventor Abraham Stern (1768–1842), whose youngest daughter Sarah Gitel he would later marry in 1842. There he published another astronomical work, the highly popular Toldot ha-Shamayim (1838). He also tried his hand at the applied sciences, and a number of his technological inventions received recognition and awards.
The most notable of his inventions was his calculating machine, created in 1842 based on his tables, which he exhibited to the St. Petersburg Academy of Sciences, and for which he was awarded the 1844 Demidov Prize of 2,500 rubles by the Russian Academy of Sciences. He also received a title of honorary citizen, which granted him the right to live outside of the Pale of Settlement to which Jews were normally restricted. In 1844 he published a new formula in Crelle's Journal for calculating the Jewish calendar. In 1853 he invented a chemical process for plating iron vessels with lead to prevent corrosion, and in 1856 a device for simultaneously sending multiple telegrams using just one telegraphic wire. The system of multiple telegraphy perfected by Lord Kelvin in 1858 was based on Slonimski's discovery. Slonimski lived between 1846 and 1858 in Tomaszów Mazowiecki, an industrial town in central Poland. He corresponded with several scientists, notably Alexander von Humboldt, and wrote a sketch of Humboldt's life. In February 1862 in Warsaw, Slonimski launched Ha-Tsfira, the first Hebrew newspaper in Poland, and was the publisher, editor, and chief contributor. It ceased publication after six months due to his departure on the eve of the January Uprising from Warsaw to Zhitomir, the capital of the Ukrainian province Volhynia. There Slonimski was appointed as principal of the rabbinical seminary in Zhitomir and as government censor of Hebrew books. After the seminary was closed by the Russian government in 1874, Slonimski resumed the publication of Ha-Tsfira, first in Berlin and then again in Warsaw, after he obtained the necessary permission from the tsarist government. The newspaper would quickly become a central cultural institution of Polish Jewry. He died in Warsaw on May 15, 1904. 
The Stalin controversy In 1952 Josef Stalin made a speech in which, among other things, he claimed that it was a Russian who had beaten America in the 19th century in the development of the telegraph. While Stalin's claim was mocked in the United States, Slonimski's grandson, the musicologist Nicolas Slonimsky, was able to confirm the accuracy of some of Stalin's claims. Major works Mosede Ḥokmah (1834), on the fundamental principles of higher algebra Sefer Kukba di-Shebit (1835), essays on Halley's Comet and on astronomy in general Toledot ha-Shamayim (1838), on astronomy and optics Yesode ha-'Ibbur (1852), on the Jewish calendar system and its history Meẓi'ut ha-Nefesh ve-Ḳiyyumah (1852), on the immortality of the soul Ot Zikkaron (1858), a biographical sketch of Alexander von Humboldt See also Slonimski's Theorem References Footnotes External links 1810 births 1904 deaths People from Białystok People from Belostoksky Uyezd Polish Orthodox Jews Polish inventors Inventors from the Russian Empire Jewish scholars Jewish Polish writers Polish mathematicians Jewish scientists Jewish astronomers Demidov Prize laureates Writers from the Russian Empire People of the Haskalah
Hayyim Selig Slonimski
[ "Astronomy" ]
1,099
[ "Astronomers", "Jewish astronomers" ]
17,640,975
https://en.wikipedia.org/wiki/This%20Tiny%20World
This Tiny World () is a 1972 Dutch short documentary film about antique mechanical toys, produced by Charles and Martina Huguenot van der Linden. It won an Oscar in 1973 for Documentary Short Subject. References External links 1972 films 1970s Dutch-language films Dutch short documentary films 1972 independent films Best Documentary Short Subject Academy Award winners Dutch independent films 1972 short documentary films Films about toys Antiques Mechanical toys
This Tiny World
[ "Physics", "Technology" ]
79
[ "Physical systems", "Machines", "Mechanical toys" ]
17,641,068
https://en.wikipedia.org/wiki/Fixed%20platform
A fixed platform is a type of offshore platform used for the extraction of petroleum or gas. These platforms are built on concrete and/or steel legs planted directly onto the seabed, supporting a deck with space for drilling rigs, production facilities and crew quarters. Such platforms are, by virtue of their immobility, designed for very long-term use. Various types of structure are used: steel jacket, concrete caisson, floating steel and even floating concrete. Steel jackets are vertical sections made of tubular steel members, and are usually piled into the seabed. Concrete caisson structures, pioneered by the Condeep concept, often have built-in oil storage in tanks below the sea surface; these tanks also provided flotation, allowing the structures to be built close to shore (Norwegian fjords and Scottish firths are popular because they are sheltered and deep enough) and then floated to their final position, where they are sunk to the seabed. Fixed platforms are economically feasible for installation in water depths up to about 500 feet (150 m); at greater depths, a floating production system, or a subsea pipeline to land or to shallower water for processing, would usually be considered. See also List of tallest oil platforms List of tallest freestanding steel structures Bullwinkle Platform Pompano Platform Offshore geotechnical engineering Protocol for the Suppression of Unlawful Acts against the Safety of Fixed Platforms Located on the Continental Shelf References Oil platforms Petroleum production
Fixed platform
[ "Chemistry", "Engineering" ]
294
[ "Oil platforms", "Petroleum technology", "Natural gas technology", "Structural engineering" ]
17,641,350
https://en.wikipedia.org/wiki/Dental%20cement
Dental cements have a wide range of dental and orthodontic applications. Common uses include temporary restoration of teeth, cavity linings to provide pulpal protection, sedation or insulation, and cementing fixed prosthodontic appliances. Recent uses of dental cement also include two-photon calcium imaging of neuronal activity in the brains of animal models in basic experimental neuroscience. Traditionally, cements have separate powder and liquid components which are manually mixed; thus working time, amount and consistency can be individually adapted to the task at hand. Some cements, such as glass ionomer cement (GIC), come in capsules and are mechanically mixed using rotating or oscillating mixing machines. Resin cements are not cements in a narrow sense, but rather polymer-based composite materials. ISO 4049:2019 classifies these polymer-based luting materials according to curing mode as class 1 (self-cured), class 2 (light-cured), or class 3 (dual-cured). Most of the commercially available products are class 3 materials, combining chemical- and light-activation mechanisms. Ideal cement properties High biocompatibility – zinc phosphate cement is considered the most biocompatible material with a low allergy potential, despite the occasional initial acid pain (a consequence of an inadequate powder/liquid ratio) Non-irritant – polycarboxylate cement is considered the most sensitive type due to the properties of polyacrylic acid (PAA) Antibacterial properties to prevent secondary caries Provide a good marginal (bacteria-tight) seal to prevent marginal leakage Resistant to dissolution in saliva, or other oral fluid – a primary cause of decementation is dissolution of the cement at the margins of a restoration High strength in tension, shear and compression to resist stress at the restoration–tooth interface. High compressive strength (minimum 50 MPa acc.
to ISO 9917-1) Adequate working and setting time Good aesthetics Good thermal insulation properties as a liner under metal restorations Opacity – for diagnostic purposes on radiographs. Low film thickness (maximum 25 microns acc. to ISO 9917-1). Low allergy potential Low shrinkage Retention – if an adhesive bond occurs between the cement and the restorative material, retention is greatly enhanced. Otherwise, the retention depends on the geometry of the tooth preparation. Cements based on phosphoric acid Dental cements based on organometallic chelate compounds Dental applications Dental cements can be utilised in a variety of ways depending on the composition and mixture of the material. The following categories outline the main uses of cements in dental procedures. Temporary restorations Unlike composite and amalgam restorations, cements are usually used as a temporary restorative material. This is generally due to their reduced mechanical properties which may not withstand long-term occlusal load. Glass ionomer cement (GIC) Zinc polycarboxylate cement Zinc oxide eugenol cement Resin-modified glass ionomer cement (RMGIC) Bonded amalgam restorations Amalgam does not bond to tooth tissue and therefore requires mechanical retention in the form of undercuts, slots and grooves. However, if insufficient tooth tissue remains after cavity preparation to provide such retentive features, a cement can be utilised to help retain the amalgam in the cavity. Historically, zinc phosphate and polycarboxylate cements were used for this technique; however, since the mid-1980s composite resins have been the material of choice due to their adhesive properties. Common resin cements utilised for bonded amalgams are RMGIC and dual-cure resin based composite. Liners and pulp protection When a cavity reaches close proximity to the pulp chamber, it is advisable to protect the pulp from further insult by placing a base or liner as a means of insulation from the definitive restoration. 
Cements indicated for liners and bases include: Zinc oxide eugenol Zinc polycarboxylate Resin-modified glass ionomer cement (RMGIC) Pulp capping is a method to protect the pulp chamber if the clinician suspects it may have been exposed by caries or cavity preparation. Indirect pulp caps are indicated for suspected micro-exposures, whereas direct pulp caps are placed on a visibly exposed pulp. In order to encourage pulpal recovery, it is important to use a sedative, non-cytotoxic material such as setting calcium hydroxide cement. Luting cements Luting materials are used to cement fixed prosthodontics such as crowns and bridges. Luting cements are often of similar composition to restorative cements; however, they usually have less filler, meaning the cement is less viscous. Resin-modified glass ionomer cement (RMGIC) Glass ionomer cement (GIC) Zinc polycarboxylate cement Zinc oxide eugenol luting cement Summary of clinical applications Composition and classification ISO classification Cements are classified on the basis of their components. Generally, they can be classified into three categories: Water-based acid-base cements: zinc phosphate (Zn3(PO4)2), zinc polyacrylate (polycarboxylate), glass ionomer (GIC). These contain metal oxide or silicate fillers embedded in a salt matrix. Non-aqueous/oil-based acid-base cements: zinc oxide eugenol and non-eugenol zinc oxide. These contain metal oxide fillers embedded in a metal salt matrix. Resin-based: acrylate or methacrylate resin cements, including the latest generation of self-adhesive resin cements, which contain silicate or other types of fillers in an organic resin matrix.
Cements can be classified based on the type of their matrix: Phosphate (zinc phosphate, silicophosphate) Polycarboxylate (zinc polycarboxylate, glass ionomer) Phenolate (zinc oxide eugenol and ethoxybenzoic acid [EBA]) Resin (polymeric) Based on time of use: Conventional (zinc phosphate, zinc polycarboxylate, zinc oxide eugenol, glass ionomer cement) Contemporary (resin cements, resin-modified glass ionomers). Resin-based cements These cements are resin-based composites. They are commonly used to definitively cement indirect restorations, especially resin-bonded bridges and ceramic or indirect composite restorations, to the tooth tissue. They are usually used in conjunction with a bonding agent, as they have no ability to bond to the tooth, although there are some products that can be applied directly to the tooth (self-etching products). There are three main resin-based cements: Light-cured – requires a curing lamp to complete the set Dual-cured – can be light-cured at the restoration margins but chemically cures in areas that the curing lamp cannot penetrate Self-etch – these etch the tooth surface and do not require an intermediate bonding agent Resin cements come in a range of shades to improve aesthetics. Mechanical properties Fracture toughness Thermocycling significantly reduces the fracture toughness of all resin-based cements except RelyX Unicem 2 and G-CEM LinkAce. Compressive strength All automixed resin-based cements have greater compressive strength than their hand-mixed counterparts, except for Variolink II. Zinc polycarboxylate cements Zinc polycarboxylate was invented in 1968 and was revolutionary, as it was the first cement to exhibit the ability to chemically bond to the tooth surface. Very little pulpal irritation is seen with its use due to the large size of the polyacrylic acid molecule. This cement is commonly used for the installation of crowns, bridges, inlays, onlays, and orthodontic appliances.
Composition: Powder + liquid reaction Zinc oxide (powder) + poly(acrylic) acid (liquid) = Zinc polycarboxylate Zinc polycarboxylate is also sometimes referred to as zinc polyacrylate or zinc polyalkenoate Components of the powder include zinc oxide, stannous fluoride, magnesium oxide, silica and also alumina Components of the liquid include poly(acrylic) acid, itaconic acid and maleic acid. Adhesion: Zinc polycarboxylate cements adhere to enamel and dentine by means of a chelation reaction. Indications for use: Temporary restorations Inflamed pulp Bases Cementation of crowns Zinc phosphate cements Zinc phosphate was the very first dental cement to appear on the dental marketplace and is seen as the "standard" against which other dental cements are compared. The many uses of this cement include permanent cementation of crowns, orthodontic appliances, intraoral splints, inlays, post systems, and fixed partial dentures. Zinc phosphate exhibits a very high compressive strength, average tensile strength and appropriate film thickness when applied according to manufacturer guidelines. However, issues with the clinical use of zinc phosphate are its initially low pH when applied in an oral environment (linked to pulpal irritation) and the cement's inability to chemically bond to the tooth surface, although this has not affected the successful long-term use of the material. Composition: Phosphoric acid liquid Zinc oxide powder Formerly the most commonly used luting agent, zinc phosphate cement works successfully for permanent cementation. It does not possess anticariogenic effects, is not adherent to tooth structure, and has a moderate degree of intraoral solubility. However, zinc phosphate cement can irritate nerve pulp; hence, pulp protection is required, but the use of polycarboxylate cement (zinc polycarboxylate or glass ionomer) is highly recommended, since it is a more biologically compatible cement.
Known contraindications of dental cements Dental materials such as fillings and orthodontic instruments must satisfy biocompatibility requirements, as they will be in the oral cavity for a long period of time. Some dental cements can contain chemicals that may induce allergic reactions in various tissues of the oral cavity. Common allergic reactions include stomatitis/dermatitis, urticaria, swelling, rash and rhinorrhea. These may predispose to life-threatening conditions such as anaphylaxis, oedema and cardiac arrhythmias. Eugenol is widely used in dentistry for different applications including impression pastes, periodontal dressings, cements, filling materials, endodontic sealers and dry socket dressings. Zinc oxide eugenol is a cement commonly used for provisional restorations and root canal obturation. Although classified as non-cariogenic by the US Food and Drug Administration, eugenol is proven to be cytotoxic, with the risk of anaphylactic reactions in certain patients. Zinc oxide eugenol constitutes a mixture of zinc oxide and eugenol that forms a polymerised eugenol cement. The setting reaction produces an end product called zinc eugenolate, which readily hydrolyses, producing free eugenol that causes adverse effects on fibroblast and osteoclast-like cells. At high concentrations localised necrosis and reduced healing occur, whereas at low concentrations contact dermatitis is the common clinical manifestation. Allergic contact dermatitis has been proven to be the most frequent clinical occurrence, usually localised to soft tissues, with the buccal mucosa being the most prevalent site. Normally a patch test done by dermatologists is used to diagnose the condition. Glass ionomer cements have been used to substitute for zinc oxide eugenol cements (thus removing the allergen), with positive outcomes for patients. References Acid-base Cements (1993) A. D. Wilson and J.W. Nicholson Dental materials
Dental cement
[ "Physics" ]
2,441
[ "Materials", "Dental materials", "Matter" ]
17,641,424
https://en.wikipedia.org/wiki/Induced%20gas%20flotation
Induced gas flotation (IGF) is a water treatment process that clarifies wastewater (or other waters) by removing suspended matter such as oil or solids. The removal is achieved by injecting gas bubbles into the water or wastewater in a flotation tank or basin. The small bubbles adhere to the suspended matter causing the suspended matter to float to the surface of the water where it may then be removed by a skimming device. Induced gas flotation is very widely used in treating the industrial wastewater effluents from oil refineries, petrochemical and chemical plants, natural gas processing plants and similar industrial facilities. A very similar process known as dissolved air flotation is also used for waste water treatment. Froth flotation is commonly used in the processing of mineral ores. IGF units in the oil industry do not use air as the flotation medium due to the explosion risk. These IGF units use natural gas or nitrogen to create the bubbles. Process description The feed water to the IGF float tank is often (but not always) dosed with a coagulant (such as ferric chloride or aluminum sulfate) to flocculate the suspended matter. The bubbles may be generated by an impeller, eductors or a sparger. The bubbles adhere to the suspended matter, causing the suspended matter to float to the surface and form a froth layer which is then removed by a skimmer. The froth-free water exits the float tank as the clarified effluent from the IGF unit. Some IGF unit designs utilize parallel plate packing material to provide more separation surface and therefore to enhance the separation efficiency of the unit. See also API oil-water separator Flotation process Industrial water treatment Industrial wastewater treatment List of waste-water treatment technologies References Environmental engineering Oil refining Flotation processes Water treatment Waste treatment technology
Induced gas flotation
[ "Chemistry", "Engineering", "Environmental_science" ]
388
[ "Water treatment", "Chemical engineering", "Petroleum technology", "Water pollution", "Environmental engineering", "Civil engineering", "Oil refining", "Flotation processes", "Water technology", "Waste treatment technology" ]
17,641,821
https://en.wikipedia.org/wiki/List%20of%20U.S.%20chemical%20weapons%20topics
The United States chemical weapons program began in 1917 during World War I with the creation of the U.S. Army's Gas Service Section and ended 73 years later in 1990 with the country's practical adoption of the Chemical Weapons Convention (signed 1993; entered into force, 1997). Destruction of stockpiled chemical weapons began in 1985 and is still ongoing. The U.S. Army Medical Research Institute of Chemical Defense, at Aberdeen Proving Ground, Maryland, continues to operate for purely defensive research and education purposes. Agencies and organizations Army agencies and schools The U.S. chemical weapons programs have generally been run by the U.S. Army: American Expeditionary Force Gas Service Section American Expeditionary Force Chemical Service Section U.S. Army Gas School U.S. Army Edgewood Chemical Biological Center U.S. Army Soldier and Biological-Chemical Command United States Army Chemical Corps, originally the Chemical Warfare Service United States Army Medical Research Institute of Chemical Defense U.S. Army Chemical Materials Agency Program Executive Office, Assembled Chemical Weapons Alternatives United States Army CBRN School Units Chemical mortar battalion 1st Gas Regiment 2nd Chemical Mortar Battalion Modern chemical depots Active bases Blue Grass Army Depot Pueblo Chemical Depot Closed bases Johnston Atoll Chemical Agent Disposal System (closed 2000) Edgewood Chemical Activity at Aberdeen Proving Ground (closed 2006) Hawthorne Army Depot (eliminated shells 1999) Newport Chemical Depot (closed 2008) Pine Bluff Chemical Activity (closed 2014) Umatilla Chemical Depot (closed 2014) Anniston Chemical Activity (closed 2013) Deseret Chemical Depot with Tooele Chemical Agent Disposal Facility (closed 2013) Older chemical weapons program locations Camp American University Camp Leach Dugway Proving Ground Rocky Mountain Arsenal Navajo Ordnance Depot Treaties, laws and policy The U.S. 
is party to several treaties which limit chemical weapons: Chemical Weapons Convention Chemical Weapons Convention Implementation Act of 1998 Executive Order 11850 Executive Order 13049 Executive Order 13128 Hague Conventions (1899 and 1907) Treaty relating to the Use of Submarines and Noxious Gases in Warfare - Failed because France objected to clauses relating to submarine warfare Geneva Protocol Public Law 99-145 Weapons Canceled weapon projects While these weapon systems were developed, they were not produced or stored in the US chemical weapons stockpile. BIGEYE bomb XM-736 8-inch binary projectile Vehicles LCI(M), infantry landing craft armed with 4.2 in mortar M1135 nuclear, biological, and chemical reconnaissance vehicle, a variation of the Stryker vehicle M93 Fox MQM-58 Overseer Declared stockpile and other weapons M1 chemical mine M1 chemical bomb M10 smoke tank M104 155 mm shell M110A1/A2 155 mm shell M114 bomblet M121/A1 155 mm shell M122 155 mm shell M125 bomblet, (developed as E54R6) chemical bomblet used with M34A1 cluster bomb M134 bomblet, (developed as E130R1), chemical bomblet for use with Honest John rockets M138 bomblet, sub-munition for the M43 cluster bomb M139 cluster bomblets for the MGR-1 Honest John rocket and other missile systems M2 mortar shell (M2A1) for the M2 4.2 Inch Mortar M23 chemical mine M34A1 cluster bomb (developed as E101R3), first U.S. air-delivered nerve agent weapon M360 105 mm shell M426 8-inch shell M43 cluster bomb M44 generator cluster M47 bomb, 100 lb. 
World War II-era chemical bomb M55 rocket M6 canister, BZ sub-munition for the M44 generator cluster M60 105 mm shell M687 155 mm shell MC-1 bomb Mk 94 bomb Mk 95 bomb Weteye bomb, also known as the Mk-116 bomb Stockpiled chemical agents Agents stockpiled at the time of Chemical Weapons Convention: isopropyl aminoethylmethyl phosphonite, or QL, part of a binary weapon (VX) Methylphosphonyl difluoride (known to the military as DF) and a mixture of isopropyl alcohol and isopropyl amine (known as OPA), a binary chemical weapon (sarin) Mustard gas Sarin (GB) VX Rainbow Herbicides Older chemical agents Phosgene Chlorine BZ Other equipment Chemical Agent Identification Set (CAIS) People sniffer Exercises, incidents, and accidents Operations and exercises Operation Blue Skies Operation CHASE, an operation that dumped conventional and chemical munitions at sea Operation Davy Jones' Locker, a post-World War II operation aimed at dumping German chemical weapons at seas Operation Geranium, a 1948 operation that dumped lewisite into the Atlantic Ocean. Operation Paperclip, a program beginning in 1945 to bring German scientists to the U.S. Operation Ranch Hand, defoliant operations during the Vietnam War Operation Red Hat, an early 1970 program to repatriate weapons from Okinawa Operation Rock Ready, 1980's testing and rebuilding of the M17 series protective mask Operation Snoopy, Vietnam War people sniffer operations. Operation Steel Box, an operation which moved chemical weapons out of Germany in 1990. Accidents Bombing of the SS John Harvey during the Air Raid on Bari Dugway sheep incident Chemical testing Edgewood Arsenal human experiments Operation LAC, (Large Area Coverage), 1958 test that dropped microscopic particles over much of the United States Operation Top Hat, a 1953 Chemical Corps exercise testing decontamination methods on human subjects Project SHAD Chemical defense program United States Army Medical Research Institute of Chemical Defense See also List of U.S. 
biological weapons topics United States and weapons of mass destruction MK ULTRA, the CIA-led program to test various chemicals References United States Chemical warfare
List of U.S. chemical weapons topics
[ "Chemistry" ]
1,163
[ "nan" ]
17,642,425
https://en.wikipedia.org/wiki/Scherrer%20equation
The Scherrer equation, in X-ray diffraction and crystallography, is a formula that relates the size of sub-micrometre crystallites in a solid to the broadening of a peak in a diffraction pattern. It is often referred to, incorrectly, as a formula for particle size measurement or analysis. It is named after Paul Scherrer. It is used in the determination of the size of crystals in the form of powder. The Scherrer equation can be written as: $\tau = \dfrac{K\lambda}{\beta\cos\theta}$ where: $\tau$ is the mean size of the ordered (crystalline) domains, which may be smaller than or equal to the grain size, which in turn may be smaller than or equal to the particle size; $K$ is a dimensionless shape factor, with a value close to unity. The shape factor has a typical value of about 0.9, but varies with the actual shape of the crystallite; $\lambda$ is the X-ray wavelength; $\beta$ is the line broadening at half the maximum intensity (FWHM), after subtracting the instrumental line broadening, in radians. This quantity is also sometimes denoted as $\Delta(2\theta)$; $\theta$ is the Bragg angle. Applicability The Scherrer equation is limited to nano-scale crystallites, or, more strictly, the coherently scattering domain size, which can be smaller than the crystallite size (due to factors mentioned below). It is not applicable to grains larger than about 0.1 to 0.2 μm, which precludes those observed in most metallographic and ceramographic microstructures. The Scherrer equation provides a lower bound on the coherently scattering domain size, referred to here as the crystallite size for readability. The reason for this is that a variety of factors can contribute to the width of a diffraction peak besides instrumental effects and crystallite size; the most important of these are usually inhomogeneous strain and crystal lattice imperfections. Sources of peak broadening include dislocations, stacking faults, twinning, microstresses, grain boundaries, sub-boundaries, coherency strain, chemical heterogeneities, and crystallite smallness.
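As a worked example of the equation $\tau = K\lambda/(\beta\cos\theta)$, the short Python sketch below converts a peak width into a mean crystallite size. The input values (Cu Kα radiation, K = 0.9, a 0.5° peak at 2θ = 40°) are illustrative assumptions, not measurements from the article.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Mean crystallite size tau = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians (instrumental broadening
    assumed already subtracted); theta is the Bragg angle, half of 2-theta."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values: Cu K-alpha (0.15406 nm), a 0.5-degree-wide peak at 2-theta = 40 degrees
size = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.5, two_theta_deg=40.0)
print(f"{size:.1f} nm")  # → 16.9 nm
```

Note that the result is well inside the nano-scale range where the equation applies; a peak only a few hundredths of a degree wide would put the size near the 0.1–0.2 μm limit discussed above.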
These and other imperfections may also result in peak shift, peak asymmetry, anisotropic peak broadening, or other peak shape effects. If all of these other contributions to the peak width, including instrumental broadening, were zero, then the peak width would be determined solely by the crystallite size and the Scherrer equation would apply. If the other contributions to the width are non-zero, then the crystallite size can be larger than that predicted by the Scherrer equation, with the "extra" peak width coming from the other factors. The concept of crystallinity can be used to collectively describe the effect of crystal size and imperfections on peak broadening. Although "particle size" is often used in reference to crystallite size, this term should not be used in association with the Scherrer method, because particles are often agglomerations of many crystallites, and XRD gives no information on the particle size. Other techniques, such as sieving, image analysis, or visible light scattering, do directly measure particle size. The crystallite size can be thought of as a lower limit of the particle size. Derivation for a simple stack of planes To see where the Scherrer equation comes from, it is useful to consider the simplest possible example: a set of N planes separated by the distance a. The derivation for this simple, effectively one-dimensional case is straightforward. First, the structure factor for this case is derived, and then an expression for the peak widths is determined. Structure factor for a set of N equally spaced planes This system, effectively a one-dimensional perfect crystal, has a structure factor or scattering function $S(q) = \frac{1}{N}\sum_{j=1}^{N}\sum_{k=1}^{N}\mathrm{e}^{-iq(x_j - x_k)}$ where for N planes $x_n = (n-1)a$: each sum is a simple geometric series; defining $x = \mathrm{e}^{iqa}$, one series sums to $(x^N - 1)/(x - 1)$, and the other series analogously, which gives: $S(q) = \frac{1}{N}\,\frac{(x^N - 1)(x^{-N} - 1)}{(x - 1)(x^{-1} - 1)}$ which is further simplified by converting to trigonometric functions: $S(q) = \frac{1}{N}\,\frac{1 - \cos(Nqa)}{1 - \cos(qa)}$ and finally: $S(q) = \frac{\sin^2(Nqa/2)}{N\sin^2(qa/2)}$ which gives a set of peaks at $q_P = 2\pi P/a$ (integer P), all with heights N.
Determination of the profile near the peak, and hence the peak width From the definition of FWHM, for a peak at $q_P$ and with a FWHM of $\Delta q$, $S(q_P \pm \Delta q/2) = N/2$, as the peak height is N. If we take the plus sign (the peak is symmetric, so either sign will do): $\frac{\sin^2\!\left(Na\,\Delta q/4\right)}{N\sin^2\!\left(a\,\Delta q/4\right)} = \frac{N}{2}$ If $a\,\Delta q$ is small and N is not too small, then $\sin(a\Delta q/4) \approx a\Delta q/4$, and we can write the equation as a single non-linear equation $\sin^2(x)/x^2 = 1/2$, for $x = Na\,\Delta q/4$. The solution to this equation is $x = 1.39$. Therefore, the size of the set of planes is related to the FWHM in q by $\Delta q = \frac{4 \times 1.39}{Na} \simeq \frac{5.56}{Na}$ To convert to an expression for crystal size in terms of the peak width in the scattering angle $2\theta$ used in X-ray powder diffraction, we note that the scattering vector $q = \frac{4\pi}{\lambda}\sin(\theta_s/2)$, where the $\theta_s$ here is the angle between the incident wavevector and the scattered wavevector, which is different from the $\theta$ in the $2\theta$ scan. Then the peak width in the variable $2\theta$ is approximately $\Delta(2\theta) = \frac{\lambda}{2\pi\cos\theta}\,\Delta q$, and so $\tau = Na = \frac{0.88\,\lambda}{\Delta(2\theta)\cos\theta}$ which is the Scherrer equation with K = 0.88. This only applies to a perfect 1D set of planes. In the experimentally relevant 3D case, the form of $S(q)$, and hence the peaks, depends on the crystal lattice type, and the size and shape of the nanocrystallite. The underlying mathematics becomes more involved than in this simple illustrative example. However, for simple lattices and shapes, expressions have been obtained for the FWHM, for example by Patterson. Just as in 1D, the FWHM varies as the inverse of the characteristic size. For example, for a spherical crystallite with a cubic lattice, the factor of 5.56 simply becomes 6.96 when the size is the diameter D, i.e., the diameter of a spherical nanocrystal is related to the peak FWHM by $\Delta q = \frac{6.96}{D}$ or, in $2\theta$: $\Delta(2\theta) = \frac{6.96}{2\pi}\,\frac{\lambda}{D\cos\theta} \simeq \frac{1.11\,\lambda}{D\cos\theta}$ Peak broadening due to disorder of the second kind The finite size of a crystal is not the only possible reason for broadened peaks in X-ray diffraction. Fluctuations of atoms about the ideal lattice positions that preserve the long-range order of the lattice only give rise to the Debye-Waller factor, which reduces peak heights but does not broaden them.
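The 1D result can be checked numerically. The sketch below (an illustrative aside; the helper names are invented for this example) scans the interference function $\sin^2(Nqa/2)/(N\sin^2(qa/2))$ around its first peak and recovers a FWHM of approximately 5.56/(Na).

```python
import math

def interference(q, N, a):
    """S(q) = sin^2(N q a / 2) / (N sin^2(q a / 2)); equals N at q = 2*pi*m/a."""
    x = q * a / 2
    s = math.sin(x)
    if abs(s) < 1e-12:  # limiting value exactly at a peak
        return N
    return math.sin(N * x) ** 2 / (N * s * s)

def fwhm_first_peak(N, a=1.0, steps=100_000):
    """Numerically bracket the half-maximum points around the first peak at q = 2*pi/a."""
    q0, half = 2 * math.pi / a, N / 2
    span = 10.0 / (N * a)  # search window comfortably wider than the peak
    qs = [q0 - span + 2 * span * i / steps for i in range(steps + 1)]
    above = [q for q in qs if interference(q, N, a) >= half]
    return max(above) - min(above)

N = 50
print(round(fwhm_first_peak(N) * N, 2))  # → 5.57, i.e. FWHM ≈ 5.56/(N a)
```

The product FWHM × N stays near 5.56 as N grows, confirming that the width scales as the inverse of the number of planes.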
However, fluctuations that cause the correlations between nearby atoms to decrease as their separation increases do broaden peaks. This can be studied and quantified using the same simple one-dimensional stack of planes as above. The derivation follows that in chapter 9 of Guinier's textbook. This model was pioneered by, and applied to a number of materials by, Hosemann and collaborators over a number of years. They termed this disorder of the second kind, and referred to this imperfect crystalline ordering as paracrystalline ordering. Disorder of the first kind is the source of the Debye-Waller factor. To derive the model we start with the definition of the structure factor, but now we consider, for simplicity, an infinite crystal, i.e., $N \to \infty$, and we consider pairs of lattice sites. For large $N$, for each of these planes there are two neighbours $m$ planes away, so the double sum above becomes a single sum over pairs of neighbours on either side of a plane, at positions $\pm m$ lattice spacings away, times $N$. So then $S(q) = 1 + 2\sum_{m=1}^{\infty}\int_{-\infty}^{\infty} p_m(x)\cos(qx)\,\mathrm{d}x$ where $p_m(x)$ is the probability density function for the separation $x$ of a pair of planes, $m$ lattice spacings apart. For the separation of neighbouring planes we assume for simplicity that the fluctuations around the mean neighbour spacing of a are Gaussian, i.e., that $p_1(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left[-\frac{(x-a)^2}{2\sigma^2}\right]$ and we also assume that the fluctuations between a plane and its neighbour, and between this neighbour and the next plane, are independent. Then $p_2$ is just the convolution of two $p_1$s, etc. As the convolution of two Gaussians is just another Gaussian, we have that $p_m(x) = \frac{1}{\sqrt{2\pi m\sigma^2}}\exp\!\left[-\frac{(x-ma)^2}{2m\sigma^2}\right]$ The sum in $S(q)$ is then just a sum of Fourier transforms of Gaussians, and so $S(q) = 1 + 2\sum_{m=1}^{\infty} r^m\cos(mqa)$ for $r = \mathrm{e}^{-q^2\sigma^2/2}$. The sum is just the real part of the geometric sum $\sum_{m=1}^{\infty}\left(r\,\mathrm{e}^{iqa}\right)^m$ and so the structure factor of the infinite but disordered crystal is $S(q) = \frac{1 - r^2}{1 - 2r\cos(qa) + r^2}$ This has peaks at maxima $q_m = 2\pi m/a$, where $\cos(q_m a) = 1$. These peaks have heights $S(q_m) = \frac{1+r}{1-r} \approx \frac{4}{q_m^2\sigma^2} = \frac{a^2}{\pi^2\sigma^2 m^2}$ i.e., the heights of successive peaks drop off as the square of the order of the peak (and so as $q_m^2$). Unlike finite-size effects that broaden peaks but do not decrease their height, disorder lowers peak heights.
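This peak-height result is easy to verify numerically. The sketch below assumes the closed form S(q) = (1 − r²)/(1 − 2r cos(qa) + r²) with r = exp(−q²σ²/2) for this Gaussian nearest-neighbour model (the function names and the value of σ are illustrative), and checks that the peak heights fall off as 1/m².

```python
import math

# Structure factor of an infinite 1D paracrystal ("disorder of the second kind"),
# assuming Gaussian nearest-neighbour spacing fluctuations of width sigma:
#   S(q) = (1 - r^2) / (1 - 2 r cos(q a) + r^2),  with  r = exp(-q^2 sigma^2 / 2)
def paracrystal_S(q, a=1.0, sigma=0.03):
    r = math.exp(-(q * sigma) ** 2 / 2)
    return (1 - r * r) / (1 - 2 * r * math.cos(q * a) + r * r)

# Heights at the peak positions q_m = 2*pi*m/a drop off roughly as 1/m^2:
heights = [paracrystal_S(2 * math.pi * m) for m in (1, 2, 3)]
print([round(heights[0] / h, 1) for h in heights])  # → [1.0, 4.0, 9.0]
```

The height ratios 1 : 4 : 9 match the 1/m² falloff, in contrast to the finite-size case, where all peaks have the same height N.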
Note that here we are assuming that the disorder is relatively weak, so that we still have relatively well-defined peaks. This is the limit $q_m^2\sigma^2 \ll 1$, where $r \simeq 1 - q_m^2\sigma^2/2$. In this limit, near a peak we can approximate $\cos(qa) \simeq 1 - (a\,\Delta q)^2/2$, with $\Delta q = q - q_m$, and obtain $S(q) \simeq \frac{S(q_m)}{1 + \left[2a\,\Delta q/(q_m^2\sigma^2)\right]^2}$ which is a Lorentzian or Cauchy function, of FWHM $q_m^2\sigma^2/a$, i.e., the FWHM increases as the square of the order of the peak, and so as the square of the wavevector $q_m$ at the peak. Finally, the product of the peak height and the FWHM is constant and equals $4/a$, in the $q_m^2\sigma^2 \ll 1$ limit. For the first few peaks where $m$ is not large, this is just the $\sigma/a \ll 1$ limit. Thus finite-size effects and this type of disorder both cause peak broadening, but there are qualitative differences. Finite-size effects broaden all peaks equally and do not affect peak heights, while this type of disorder both reduces peak heights and broadens peaks by an amount that increases as $q_m^2$. This, in principle, allows the two effects to be distinguished. Also, it means that the Scherrer equation is best applied to the first peak, as disorder of this type affects the first peak the least. Coherence length Within this model the degree of correlation between a pair of planes decreases as the distance between these planes increases, i.e., a pair of planes 10 planes apart have positions that are more weakly correlated than a pair of planes that are nearest neighbours. The correlation is given by $p_m(x)$, for a pair of planes m planes apart. For sufficiently large m the pair of planes are essentially uncorrelated, in the sense that the uncertainty in their relative positions is so large that it is comparable to the lattice spacing, a. This defines a correlation length, $\xi$, defined as the separation at which the width of $p_m(x)$, which is $\sqrt{m}\,\sigma$, equals a. This gives $\xi = ma = \frac{a^3}{\sigma^2}$ which is in effect an order-of-magnitude estimate for the size of domains of coherent crystalline lattices. Note that the FWHM of the first peak scales as $q_1^2\sigma^2/a = 4\pi^2\sigma^2/a^3$, so the coherence length is approximately 1/FWHM for the first peak. Further reading B.D. Cullity & S.R.
Stock, Elements of X-Ray Diffraction, 3rd Ed., Prentice-Hall Inc., 2001, p 96-102, . R. Jenkins & R.L. Snyder, Introduction to X-ray Powder Diffractometry, John Wiley & Sons Inc., 1996, p 89-91, . H.P. Klug & L.E. Alexander, X-Ray Diffraction Procedures, 2nd Ed., John Wiley & Sons Inc., 1974, p 687-703, . B.E. Warren, X-Ray Diffraction, Addison-Wesley Publishing Co., 1969, p 251-254, . References Diffraction
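The paracrystalline structure factor described above is easy to evaluate numerically. The short sketch below (function name and the lattice parameters are illustrative) checks that the exact peak heights $(1+r)/(1-r)$ agree with the small-$q\sigma$ estimate $4/(q_P\sigma)^2$, and that successive peaks fall off roughly as $1/P^2$:

```python
import math

def paracrystal_structure_factor(q, a=1.0, sigma=0.05):
    """S(q) = (1 - r**2) / (1 + r**2 - 2*r*cos(q*a)), r = exp(-q**2 * sigma**2 / 2)."""
    r = math.exp(-0.5 * (q * sigma) ** 2)
    return (1 - r ** 2) / (1 + r ** 2 - 2 * r * math.cos(q * a))

a, sigma = 1.0, 0.05
for P in (1, 2, 3):
    q_P = 2 * math.pi * P / a              # peak positions q_P = 2*pi*P/a
    height = paracrystal_structure_factor(q_P, a, sigma)
    approx = 4 / (q_P * sigma) ** 2        # small-(q*sigma) peak-height estimate
    print(P, round(height, 1), round(approx, 1))
```

For these parameters the first peak height is about 40.5 while the second is about 10.2, illustrating the $1/P^2$ fall-off that distinguishes this disorder from pure finite-size broadening.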
Scherrer equation
[ "Physics", "Chemistry", "Materials_science" ]
2,281
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
17,642,726
https://en.wikipedia.org/wiki/Shape%20factor%20%28image%20analysis%20and%20microscopy%29
Shape factors are dimensionless quantities used in image analysis and microscopy that numerically describe the shape of a particle, independent of its size. Shape factors are calculated from measured dimensions, such as diameter, chord lengths, area, perimeter, centroid, moments, etc. The dimensions of the particles are usually measured from two-dimensional cross-sections or projections, as in a microscope field, but shape factors also apply to three-dimensional objects. The particles could be the grains in a metallurgical or ceramic microstructure, or the microorganisms in a culture, for example. The dimensionless quantities often represent the degree of deviation from an ideal shape, such as a circle, sphere or equilateral polyhedron. Shape factors are often normalized, that is, the value ranges from zero to one. A shape factor equal to one usually represents an ideal case or maximum symmetry, such as a circle, sphere, square or cube.

Aspect ratio The most common shape factor is the aspect ratio, a function of the largest diameter $d_{\max}$ and the smallest diameter $d_{\min}$ orthogonal to it:

$$AR = \frac{d_{\min}}{d_{\max}}$$

The normalized aspect ratio varies from approaching zero for a very elongated particle, such as a grain in a cold-worked metal, to near unity for an equiaxed grain. The reciprocal of the right side of the above equation is also used, such that the AR varies from one to approaching infinity.

Circularity Another very common shape factor is the circularity (or isoperimetric quotient), a function of the perimeter P and the area A:

$$f_{\mathrm{circ}} = \frac{4\pi A}{P^2}$$

The circularity of a circle is 1, and much less than one for a starfish footprint. The reciprocal of the circularity equation is also used, such that $f_{\mathrm{circ}}$ varies from one for a circle to infinity.

Elongation shape factor The less-common elongation shape factor is defined as the square root of the ratio of the two second moments of area, $i_1 \geq i_2$, of the particle around its principal axes:

$$f_{\mathrm{elong}} = \sqrt{\frac{i_2}{i_1}}$$
Compactness shape factor The compactness shape factor is a function of the polar second moment of area $i_p$ of a particle and of a circle of equal area A, for which the polar second moment is $A^2/2\pi$:

$$f_{\mathrm{comp}} = \frac{A^2}{2\pi\, i_p}$$

The $f_{\mathrm{comp}}$ of a circle is one, and much less than one for the cross-section of an I-beam.

Waviness shape factor The waviness shape factor of the perimeter is a function of the convex portion $P_{\mathrm{cvx}}$ of the perimeter to the total:

$$f_{\mathrm{wav}} = \frac{P_{\mathrm{cvx}}}{P}$$

Some properties of metals and ceramics, such as fracture toughness, have been linked to grain shapes.

An application of shape factors Greenland, the largest island in the world, has an area of 2,166,086 km2; a coastline (perimeter) of 39,330 km; a north–south length of 2670 km; and an east–west length of 1290 km. The aspect ratio of Greenland is

$$AR = \frac{1290}{2670} = 0.483$$

The circularity of Greenland is

$$f_{\mathrm{circ}} = \frac{4\pi \times 2{,}166{,}086}{(39{,}330)^2} = 0.0176$$

The aspect ratio is agreeable with an eyeball-estimate on a globe. Such an estimate on a typical flat map, using the Mercator projection, would be less accurate due to the distorted scale at high latitudes. The circularity is deceptively low, due to the fjords that give Greenland a very jagged coastline (see the coastline paradox). A low value of circularity does not necessarily indicate a lack of symmetry, and shape factors are not limited to microscopic objects. References Further reading J.C. Russ & R.T. Dehoff, Practical Stereology, 2nd Ed., Kluwer Academic, 2000. E.E. Underwood, Quantitative Stereology, Addison-Wesley Publishing Co., 1970. G.F. VanderVoort, Metallography: Principles and Practice, ASM International, 1984. Image processing Microscopy
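The Greenland figures quoted above can be checked directly in a few lines; the function names below are illustrative:

```python
import math

def aspect_ratio(d_min, d_max):
    # Normalized aspect ratio: near 1 for an equiaxed shape, toward 0 for an elongated one.
    return d_min / d_max

def circularity(area, perimeter):
    # Isoperimetric quotient 4*pi*A / P**2: 1 for a circle, smaller for jagged outlines.
    return 4 * math.pi * area / perimeter ** 2

# Greenland, using the dimensions quoted in the text (km and km^2).
print(round(aspect_ratio(1290, 2670), 3))       # east-west over north-south length
print(round(circularity(2_166_086, 39_330), 4)) # area over squared coastline
```

This reproduces the values 0.483 and 0.0176, the low circularity reflecting the fjord-riddled coastline.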
Shape factor (image analysis and microscopy)
[ "Chemistry" ]
749
[ "Microscopy" ]
17,644,264
https://en.wikipedia.org/wiki/List%20of%20bamboo%20species
Bamboo is a group of woody perennial plants in the true grass family Poaceae. In the tribe Bambuseae, also known as bamboo, there are 91 genera and over 1,000 species. The size of bamboo varies from small annuals to giant timber bamboo. Bamboo evolved 30 to 40 million years ago, after the Cretaceous–Paleogene extinction event. Bamboo species can be divided into two groups: sympodial (clumping) and monopodial (running) species. Sympodial species grow from the soil in a slowly expanding tuft, while monopodial species send underground rhizomes to produce shoots several metres from the original "parent" plant. Species References External links Species Bamboo Bamboos, List Articles containing video clips
List of bamboo species
[ "Biology" ]
178
[ "Lists of biota", "Lists of plants", "Plants" ]
17,644,328
https://en.wikipedia.org/wiki/List%20of%20parasites%20of%20humans
Endoparasites Protozoan organisms Helminths (worms) Helminth organisms (also called helminths or intestinal worms) include: Tapeworms Flukes Roundworms Other organisms Ectoparasites References Parasites parasites of humans
List of parasites of humans
[ "Biology" ]
53
[ "Parasites of humans", "Humans and other species" ]
17,644,390
https://en.wikipedia.org/wiki/Factory%20Physics
Factory Physics is a book written by Wallace Hopp and Mark Spearman, which introduces a science of operations for manufacturing management. According to the book's preface, Factory Physics is "a systematic description of the underlying behavior of manufacturing systems. Understanding it enables managers and engineers to work with the natural tendencies of manufacturing systems to: Identify opportunities for improving existing systems; Design effective new systems; Make the trade-offs needed to coordinate policies from disparate areas". The book is used both in industry and in academia for reference and teaching on operations management. It describes a new approach to manufacturing management based on the laws of Factory Physics science. The fundamental Factory Physics framework states that the essential components of all value streams or production processes or service processes are demand and transformation, which are described by structural elements of flows and stocks. There are very specific practical, mathematical relationships that enable one to describe and control the performance of flows and stocks. The book states that, in the presence of variability, there are only three buffers available to synchronize demand and transformation with lowest cost and highest service level: Capacity Inventory Response time The book states that its approach enables practical, predictive understanding of flows and stocks and how to best use the three levers to optimally synchronize demand and transformation. This work won the 1996 Institute of Industrial Engineers IIE/Joint Publishers Book of the Year Award. Editions Factory Physics: Foundations of Manufacturing Management, first edition, 1996. 668pp. Factory Physics: Foundations of Manufacturing Management, second edition, 2000. 698pp. Factory Physics: Foundations of Manufacturing Management, third edition, 2008. 720pp. 
See also CONWIP Supply chain management References 1996 non-fiction books Business books Operations research Collaborative non-fiction books
Factory Physics
[ "Mathematics" ]
351
[ "Applied mathematics", "Operations research" ]
17,644,838
https://en.wikipedia.org/wiki/Fourier%E2%80%93Bros%E2%80%93Iagolnitzer%20transform
In mathematics, the FBI transform or Fourier–Bros–Iagolnitzer transform is a generalization of the Fourier transform developed by the French mathematical physicists Jacques Bros and Daniel Iagolnitzer in order to characterise the local analyticity of functions (or distributions) on Rn. The transform provides an alternative approach to analytic wave front sets of distributions, developed independently by the Japanese mathematicians Mikio Sato, Masaki Kashiwara and Takahiro Kawai in their approach to microlocal analysis. It can also be used to prove the analyticity of solutions of analytic elliptic partial differential equations as well as a version of the classical uniqueness theorem, strengthening the Cauchy–Kowalevski theorem, due to the Swedish mathematician Erik Albert Holmgren (1872–1943).

Definitions The Fourier transform of a Schwartz function f in S(Rn) is defined by

$$\mathcal{F}f(\xi) = (2\pi)^{-n/2} \int_{\mathbb{R}^n} f(x)\, \mathrm{e}^{-i x \cdot \xi}\, \mathrm{d}x$$

The FBI transform of f is defined for a ≥ 0 by

$$\mathcal{F}_a f(\xi) = (2\pi)^{-n/2} \int_{\mathbb{R}^n} f(x)\, \mathrm{e}^{-a|\xi||x|^2/2}\, \mathrm{e}^{-i x \cdot \xi}\, \mathrm{d}x$$

Thus, when a = 0, it essentially coincides with the Fourier transform. The same formulas can be used to define the Fourier and FBI transforms of tempered distributions in S′(Rn).

Inversion formula The Fourier inversion formula allows a function f to be recovered from its Fourier transform. In particular

$$f(0) = (2\pi)^{-n/2} \int_{\mathbb{R}^n} \mathcal{F}f(\xi)\, \mathrm{d}\xi$$

Similarly, at a positive value of a, f(0) can be recovered from the FBI transform of f(x) by an analogous, slightly more involved inversion formula.

Criterion for local analyticity Bros and Iagolnitzer showed that a distribution f is locally equal to a real analytic function at y, in the direction ξ, if and only if its FBI transform satisfies an inequality of the form

$$|\mathcal{F}_a f(\xi)| \leq C\, \mathrm{e}^{-\varepsilon|\xi|}$$

for some constants C, ε > 0, for |ξ| sufficiently large.

Holmgren's uniqueness theorem A simple consequence of the Bros and Iagolnitzer characterisation of local analyticity is the following regularity result of Lars Hörmander and Mikio Sato. Theorem. Let P be an elliptic partial differential operator with analytic coefficients defined on an open subset X of Rn. If Pf is analytic in X, then so too is f. 
When "analytic" is replaced by "smooth" in this theorem, the result is just Hermann Weyl's classical lemma on elliptic regularity, usually proved using Sobolev spaces (Warner 1983). It is a special case of more general results involving the analytic wave front set (see below), which imply Holmgren's classical strengthening of the Cauchy–Kowalevski theorem on linear partial differential equations with real analytic coefficients. In modern language, Holmgren's uniqueness theorem states that any distributional solution of such a system of equations must be analytic and therefore unique, by the Cauchy–Kowalevski theorem.

The analytic wave front set The analytic wave front set or singular spectrum WFA(f) of a distribution f (or more generally of a hyperfunction) can be defined in terms of the FBI transform as the complement of the conical set of points (x, λξ) (λ > 0) such that the FBI transform satisfies the Bros–Iagolnitzer inequality for y the point at which one would like to test for analyticity, and |ξ| sufficiently large and pointing in the direction one would like to look for the wave front, that is, the direction at which the singularity at y, if it exists, propagates. J.M. Bony proved that this definition coincided with other definitions introduced independently by Sato, Kashiwara and Kawai and by Hörmander. If P is an mth order linear differential operator having analytic coefficients, with principal symbol $p_m(x, \xi)$ and characteristic variety

$$\operatorname{char} P = \{ (x, \xi) : p_m(x, \xi) = 0,\ \xi \neq 0 \}$$

then

$$WF_A(f) \subseteq WF_A(Pf) \cup \operatorname{char} P$$

In particular, when P is elliptic, char P = ∅, so that WFA(Pf) = WFA(f). This is a strengthening of the analytic version of elliptic regularity mentioned above. References (Chapter 9.6, The Analytic Wavefront Set.) 2nd ed., Birkhäuser (2002). (Chapter 9, FBI Transform in a Hypo-Analytic Manifold.) Fourier analysis Transforms Generalized functions Mathematical physics
Fourier–Bros–Iagolnitzer transform
[ "Physics", "Mathematics" ]
831
[ "Functions and mappings", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Mathematical relations", "Transforms", "Mathematical physics" ]
14,789,983
https://en.wikipedia.org/wiki/CLD%20chromophore
CLD-1 is a vibrant blue dye originally synthesized for application in nonlinear electro-optics. References Nonlinear optics Dyes Nitriles
CLD chromophore
[ "Chemistry" ]
29
[ "Nitriles", "Functional groups" ]
14,790,084
https://en.wikipedia.org/wiki/Logarithmic%20decrement
Logarithmic decrement, $\delta$, is used to find the damping ratio of an underdamped system in the time domain. The method of logarithmic decrement becomes less and less precise as the damping ratio increases past about 0.5; it does not apply at all for a damping ratio greater than 1.0 because the system is overdamped.

Method The logarithmic decrement is defined as the natural log of the ratio of the amplitudes of any two successive peaks:

$$\delta = \frac{1}{n} \ln \frac{x(t)}{x(t + nT)}$$

where x(t) is the overshoot (amplitude minus final value) at time t and $x(t + nT)$ is the overshoot of the peak n periods away, where n is any integer number of successive, positive peaks. The damping ratio is then found from the logarithmic decrement by:

$$\zeta = \frac{1}{\sqrt{1 + \left(\frac{2\pi}{\delta}\right)^2}}$$

Thus logarithmic decrement also permits evaluation of the Q factor of the system:

$$Q = \frac{1}{2\zeta}$$

The damping ratio can then be used to find the natural frequency ωn of vibration of the system from the damped natural frequency ωd:

$$\omega_d = \frac{2\pi}{T}, \qquad \omega_n = \frac{\omega_d}{\sqrt{1 - \zeta^2}}$$

where T, the period of the waveform, is the time between two successive amplitude peaks of the underdamped system.

Simplified variation The damping ratio can be found for any two adjacent peaks. This method is used when $n = 1$ and is derived from the general method above:

$$\zeta = \frac{1}{\sqrt{1 + \left(\frac{2\pi}{\ln(x_0/x_1)}\right)^2}}$$

where x0 and x1 are amplitudes of any two successive peaks. For a system where $\zeta \ll 1$ (not too close to the critically damped regime, where $\zeta \approx 1$):

$$\zeta \approx \frac{\ln(x_0/x_1)}{2\pi}$$

Method of fractional overshoot The method of fractional overshoot can be useful for damping ratios between about 0.5 and 0.8. The fractional overshoot OS is:

$$\mathrm{OS} = \frac{x_p - x_f}{x_f}$$

where xp is the amplitude of the first peak of the step response and xf is the settling amplitude. Then the damping ratio is

$$\zeta = \frac{1}{\sqrt{1 + \left(\frac{\pi}{\ln \mathrm{OS}}\right)^2}}$$

See also Damping factor References Kinematic properties Logarithms
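The peak-ratio method described above can be sketched in a few lines. The synthetic peak values below are illustrative: they are generated from a known damping ratio, so the sketch checks that the formula recovers it:

```python
import math

def damping_ratio_from_peaks(x_t, x_tnT, n=1):
    """Damping ratio via the logarithmic decrement of two peaks n periods apart."""
    delta = math.log(x_t / x_tnT) / n
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi / delta) ** 2)

# Synthetic underdamped response with zeta = 0.1: each period the peak
# amplitude decays by exp(-2*pi*zeta / sqrt(1 - zeta**2)).
zeta_true = 0.1
decay = math.exp(-2 * math.pi * zeta_true / math.sqrt(1 - zeta_true ** 2))
x0, x1 = 1.0, decay
print(round(damping_ratio_from_peaks(x0, x1), 3))  # recovers ~0.1
```

Using n > 1 (peaks several periods apart) averages out measurement noise in the same way.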
Logarithmic decrement
[ "Physics", "Mathematics" ]
381
[ "Logarithms", "Mechanical quantities", "Physical quantities", "Quantity", "E (mathematical constant)", "Kinematic properties" ]
14,790,437
https://en.wikipedia.org/wiki/Dual%20loop
Dual-loop is a method of electrical circuit termination used in electronic security applications, particularly modern intruder alarms. It is called 'dual-loop' because two circuits (alarm and anti-tamper) are combined into one using resistors. Its use became widespread in the early 21st century, replacing the basic closed-circuit system, mainly because of changes in international standards and practices. Dual-loop allows the burglar alarm control panel to read the values of end-of-line resistors for the purpose of telling a zone's status. For example: if an alarm system's software uses 2K ohms as its non-alarm value, an inactive detector will give a reading of 2K ohms as the circuit is passing through just one resistor. When the detector goes into an active state (i.e. a door contact being opened), the circuit path has been altered and it must now pass through a second resistor wired in series with the first. This gives a reading of 4K ohms and will trigger an intruder alarm. If a resistance reading is not recognised by the system either due to short-circuit or open-circuit, an anti-tamper alarm will trigger. Dual loop is more commonly known as Balanced EOL (End of Line) resistors. It is more secure than the former Double Pole loop, but nevertheless can be bypassed by someone with sufficient knowledge of security alarm systems. References External links Calculators for resistors in series and parallel. Analog circuits
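The resistance readings described above map naturally onto a small decision function. The sketch below is illustrative only: the 2 kΩ end-of-line value and the 10% tolerance are assumptions for the example, not values from any particular alarm panel.

```python
def zone_status(resistance_ohms, eol=2000, tolerance=0.1):
    """Classify a dual-loop (Balanced EOL) zone from its measured resistance.

    A healthy circuit reads the single end-of-line resistor (eol); an opened
    detector contact routes the circuit through a second series resistor,
    doubling the reading. Any unrecognised value (short or open circuit)
    triggers the anti-tamper alarm.
    """
    def near(target):
        return abs(resistance_ohms - target) <= tolerance * target

    if near(eol):
        return "normal"
    if near(2 * eol):
        return "alarm"
    return "tamper"

print(zone_status(2000))   # normal: single EOL resistor in circuit
print(zone_status(4100))   # alarm: second resistor in series (within tolerance)
print(zone_status(0))      # tamper: short circuit
```

A real panel would also debounce readings over time, but the resistor arithmetic is exactly this simple.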
Dual loop
[ "Engineering" ]
306
[ "Analog circuits", "Electronic engineering" ]
14,791,367
https://en.wikipedia.org/wiki/Phrasal%20template
A phrasal template is a phrase-long collocation that contains one or several empty slots which may be filled by words to produce individual phrases. Description A phrasal template is a phrase-long collocation that contains one or several empty slots which may be filled by words to produce individual phrases. Often there are some restrictions on the grammatic category of the words allowed to fill particular slots. Phrasal templates are akin to forms, in which blanks are to be filled with appropriate data. The term phrasal template first appeared in a linguistic study of prosody in 1983 but doesn't appear to have come into common use until the late 1990s. An example is the phrase "common stocks rose <Number> to <Number>", e.g., "common stocks rose 1.72 to 340.36". The neologism "snowclone" was introduced to refer to a special case of phrasal templates that "clone" popular clichés. For example, a misquotation of Diana Vreeland's "Pink is the navy blue of India" may have given rise to the template "<color> is the new black", which in turn evolved into "<X> is the new <Y>". Use The word game Mad Libs makes use of phrasal templates. The notion is used in natural language processing systems and in natural language generation, such as in application-oriented report generators. See also Backus–Naur form Computational humor – usage of phrasal templates for generation of jokes by computer Joke cycle Phrase structure rules Phraseme Snowclone The king is dead, long live the king! References Grammar frameworks Syntactic entities Computational linguistics
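Filling the slots of a phrasal template is straightforward to model in code; the sketch below uses Python's standard-library `string.Template`, with slot names chosen for illustration, to instantiate the snowclone discussed above:

```python
import string

def fill_template(template, **slots):
    """Fill the named slots of a phrasal template with the given words."""
    return string.Template(template).substitute(slots)

snowclone = "$x is the new $y"
print(fill_template(snowclone, x="Orange", y="black"))  # Orange is the new black
print(fill_template("common stocks rose $a to $b", a="1.72", b="340.36"))
```

Restrictions on the grammatical category of each slot (as in Mad Libs, which asks for "a noun" or "an adjective") would be enforced by the caller; the template mechanism itself only performs substitution.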
Phrasal template
[ "Technology" ]
360
[ "Natural language and computing", "Computational linguistics" ]