Dataset columns: id (int64, 39 to 79M); url (string, 31–227 chars); text (string, 6–334k chars); source (string, 1–150 chars); categories (list, 1–6 items); token_count (int64, 3–71.8k); subcategories (list, 0–30 items).
11,684,259
https://en.wikipedia.org/wiki/Nigrospora%20sphaerica
Nigrospora sphaerica is an airborne filamentous fungus in the phylum Ascomycota. It is found in soil, air, and plants as a leaf pathogen. It can occur as an endophyte where it produces antiviral and antifungal secondary metabolites. Sporulation of N. sphaerica causes its initial white coloured colonies to rapidly turn black. N. sphaerica is often confused with the closely related species N. oryzae due to their morphological similarities. History N. sphaerica was first identified by E. W. Mason in 1927. In 1913, S. F. Ashby and E. F. Shepherd isolated fungal cultures from banana plants and sugarcane, respectively, which were classified under the genus Nigrospora due to its morphology. Mason studied these cultures and noticed the persistent appearance of two distinct mean spore sizes. The persistence of the division in spore size led to the classification, by Mason, of the larger spore isolates as N. sphaerica, and the smaller isolates as N. oryzae. Since its classification in 1927, it has been under the class Sordariomycetes. Growth and morphology N. sphaerica colonies grow rapidly and appear hairy or woolly. The conidiophores are short and clustered surfacing from mycelium. They appear translucent in colour and have an average range of 8-11μm in diameter. The conidiophores are often straight stalks or slightly curved. Conidia grow from the tips of the translucent conidiophores. The conidia are brownish black, oblate spheroid, and single celled. On average they range from 16-18μm in diameter. The initial white translucent looking colony of N. sphaerica turns brown/black due to mass sporulation of conidia from the conidiophores. In laboratories, N. sphaerica is grown on potato dextrose agar (PDA) at room temperature. Habitat and ecology N. sphaerica is commonly found in air, soil, various plants, and some cereal grains. It is rarely found in indoor environments. N. sphaerica has been identified in many areas around the world, however it is most prevalent in tropical and subtropical countries. A study shows N. sphaerica to be the most abundant airborne fungal species found in various urban sites in Singapore. Air samples were collected using an RCS microbial air sampler. Fungal spores trapped on the agar strips were developed and counted. They were then cultured into isolates allowing for identification by morphology. Results showed N. sphaerica with the highest spore counts at ground levels and low altitudes around 40m. During asexual reproduction N. sphaerica releases spores known as conidia. The conidia are ejected out forcefully at maximum horizontal distances of 6.7 cm, and 2 cm vertically. Discharge of spores occurs in all directions. The mechanism for projection relies on the conidiophore consisting of a flask-shaped support cell that bears the conidium. Liquid from the support cell squirts through the supporting cell projecting the spore outwards. This characteristic of forcible spore discharge is rarely seen in hyphomycetes. N. sphaerica requires moisture to release spores into the air, therefore accumulation begins around 2:00 a.m. with peak time of abundance occurring around 10:00 a.m. Spore count rapidly decreases after 10:00 a.m. and remains low throughout the day. Plant pathogenicity Decaying plants is one of the most common places where N. sphaerica is found. Many studies around the world found N. sphaerica as a leaf pathogen. N. sphaerica was isolated from various plants displaying leaf spots. These reported cases reveal newly identified plant hosts for the pathogen N. 
sphaerica that have been validated through Koch’s postulates. The fungus causes a progressively fatal leaf spot disease of a range of plants including blueberry (Vaccinium corymbosum), licorice (Glycyrrhiza glabra), and Wisteria sinensis (Chinese wisteria). Initial lesions resemble small red spots around 2–5 mm across, particularly near the tips and edges of leaves, eventually resulting in complete defoliation. The fungus also causes a blight disease of the commercial tea plant, Camellia sinensis. Symptoms of blight were observed in commercial tea estates in Darjeeling, India. The disease affected plants of all ages, being especially pronounced in younger plants. Fungal colonies displayed an initial white colour that eventually turned gray/brown. Based on these morphological characteristics, N. sphaerica was identified as the fungal pathogen. Inoculation of the pathogen using a conidial suspension spray, and re-isolation of N. sphaerica, satisfied Koch’s postulates. rRNA sequence comparison of the ITS region confirmed the identity of N. sphaerica. Cases of leaf spot disease of kiwi fruit (Actinidia deliciosa) have been reported from orchards in Huangshan, Anhui Province, China. Infected leaves browned and defoliated. Conidia morphology and culture properties suggested N. sphaerica as the etiological agent, later confirmed by Koch’s postulates and ITS identification. Human pathogenicity The most common human response to N. sphaerica is hay fever or asthma. N. sphaerica is not widely considered a true human pathogen; however, there are various reported cases of Nigrospora species in human eye and skin infections. Of those, there have only been a handful of reported cases of N. sphaerica infection in humans. One case study identified N. sphaerica as the cause of onychomycosis, a fungal infection of the nail, in a 21-year-old man. Fungal spores found in the body of the nail resembled the characteristic morphology of N. sphaerica, and DNA sequence analysis further confirmed the identity. Another case found N. sphaerica isolated from a corneal ulcer. A woman in south India was diagnosed with a fungal corneal ulcer after being hit in the eye by a cow’s tail. Analysis of corneal scrapings showed the presence of hyphal elements, suggesting a fungal cause of the ulcer. Isolated cultures were grown and examined, and the conidia and colony characteristics of the culture led to the identification of N. sphaerica as the fungal pathogen. It was hypothesized that this unusual case of fungal corneal ulcer was caused by transfer of spores to the patient’s eye through contamination with soil (a common habitat of the fungus) or other matter from the cow’s tail. Secondary metabolites Although N. sphaerica is often considered a pathogen, it can also act as an endophyte depending on its host. Various studies have identified novel metabolites isolated from N. sphaerica. Some of these metabolites act as phytotoxins, while others have antiviral or antifungal properties. The purposes of many of these metabolites are not fully understood and remain an area for further study. Aphidicolin is a mycotoxin originally known to be produced by the fungus Cephalosporium aphidicola. This antiviral compound was isolated from the mycelium culture filtrate of N. sphaerica. Epoxyexserophilone is a metabolite similar to the phytotoxin exserohilone. Fermentation of N. sphaerica led to the production of epoxyexserophilone. 
An etiolated wheat coleoptile bioassay indicated that the compound is biologically inactive, and it is ineffective against both gram-positive and gram-negative bacteria. Nigrosporolide is a 14-membered lactone produced by N. sphaerica. It is structurally related to the phytotoxic metabolite seiricuprolide, which is produced by the fungus Seiridium cupressi. The compound has been shown to fully inhibit the growth of etiolated wheat coleoptiles at a concentration of 10⁻³ M. Phomalactone (5,6-dihydro-5-hydroxy-6-prop-2-enyl-2H-pyran-2-one) is also produced by N. sphaerica. It inhibits mycelial growth of the plant-pathogenic fungus Phytophthora infestans. The metabolite also inhibits sporangium and zoospore germination of both P. infestans and Phytophthora capsici, and the same study shows that it reduces the progression of late blight disease in tomatoes caused by P. infestans. References Trichosphaeriales Fungi described in 1882 Fungus species
Nigrospora sphaerica
[ "Biology" ]
1,868
[ "Fungi", "Fungus species" ]
11,684,319
https://en.wikipedia.org/wiki/Coniothyrium%20fuckelii
Coniothyrium fuckelii is a fungal plant pathogen that causes stem canker and has also been known to cause infections in immunocompromised humans. The two diseases most commonly associated with garden rose dieback are grey mould (Botrytis cinerea) and rose canker (Coniothyrium fuckelii, syn. Paraconiothyrium fuckelii and Leptosphaeria coniothyrium). The fungal infection of rose canker often occurs through badly timed pruning cuts or injuries to the crown of the rose plant. It then produces tiny black fruiting bodies that are only just visible on the bark of affected branches or stems. This fungus also causes cane blight disease of raspberry bushes. See also List of foliage plant diseases (Agavaceae) References Fungal plant pathogens and diseases Pleosporales Fungus species
Coniothyrium fuckelii
[ "Biology" ]
184
[ "Fungi", "Fungus species" ]
11,684,875
https://en.wikipedia.org/wiki/Semiregular%20polytope
In geometry, by Thorold Gosset's definition a semiregular polytope is usually taken to be a polytope that is vertex-transitive and has all its facets being regular polytopes. E.L. Elte compiled a longer list in 1912 as The Semiregular Polytopes of the Hyperspaces, which included a wider definition. Gosset's list In three-dimensional space and below, the terms semiregular polytope and uniform polytope have identical meanings, because all uniform polygons must be regular. However, since not all uniform polyhedra are regular, the number of semiregular polytopes in dimensions higher than three is much smaller than the number of uniform polytopes in the same number of dimensions. The three convex semiregular 4-polytopes are the rectified 5-cell, snub 24-cell and rectified 600-cell. The only semiregular polytopes in higher dimensions are the k21 polytopes, where the rectified 5-cell is the special case of k = 0. These were all listed by Gosset, but a proof of the completeness of this list was not published until much later, in separate works treating four dimensions and higher dimensions. Gosset's 4-polytopes (with his names in parentheses): Rectified 5-cell (Tetroctahedric), Rectified 600-cell (Octicosahedric), Snub 24-cell (Tetricosahedric). Semiregular E-polytopes in higher dimensions: 5-demicube (5-ic semi-regular), a 5-polytope; 221 polytope (6-ic semi-regular), a 6-polytope; 321 polytope (7-ic semi-regular), a 7-polytope; 421 polytope (8-ic semi-regular), an 8-polytope. Euclidean honeycombs Semiregular polytopes can be extended to semiregular honeycombs. The semiregular Euclidean honeycombs are the tetrahedral-octahedral honeycomb (3D), the gyrated alternated cubic honeycomb (3D) and the 521 honeycomb (8D). Gosset honeycombs: Tetrahedral-octahedral honeycomb or alternated cubic honeycomb (Simple tetroctahedric check; also a quasiregular polytope), and Gyrated alternated cubic honeycomb (Complex tetroctahedric check). Semiregular E-honeycomb: 521 honeycomb (9-ic check), an 8D Euclidean honeycomb. Gosset additionally allowed Euclidean honeycombs as facets of higher-dimensional Euclidean honeycombs, giving the following additional figures: Hypercubic honeycomb prism, named by Gosset as the (n – 1)-ic semi-check (analogous to a single rank or file of a chessboard), and Alternated hexagonal slab honeycomb (tetroctahedric semi-check). Hyperbolic honeycombs There are also hyperbolic uniform honeycombs composed of only regular cells, including: Hyperbolic uniform honeycombs (3D honeycombs): Alternated order-5 cubic honeycomb (also quasiregular), Tetrahedral-octahedral honeycomb, Tetrahedron-icosahedron honeycomb. Paracompact uniform honeycombs (3D honeycombs, which include uniform tilings as cells): Rectified order-6 tetrahedral honeycomb, Rectified square tiling honeycomb, Rectified order-4 square tiling honeycomb, Alternated order-6 cubic honeycomb (also quasiregular), Alternated hexagonal tiling honeycomb, Alternated order-4 hexagonal tiling honeycomb, Alternated order-5 hexagonal tiling honeycomb, Alternated order-6 hexagonal tiling honeycomb, Alternated square tiling honeycomb (also quasiregular), Cubic-square tiling honeycomb, Order-4 square tiling honeycomb, Tetrahedral-triangular tiling honeycomb. 9D hyperbolic paracompact honeycomb: 621 honeycomb (10-ic check). See also Semiregular polyhedron References Uniform polytopes
Semiregular polytope
[ "Physics" ]
948
[ "Uniform polytopes", "Symmetry" ]
11,685,115
https://en.wikipedia.org/wiki/Overlap%E2%80%93add%20method
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal with a finite impulse response (FIR) filter : where for outside the region   This article uses common abstract notations, such as or in which it is understood that the functions should be thought of in their totality, rather than at specific instants (see Convolution#Notation). The concept is to divide the problem into multiple convolutions of with short segments of : where is an arbitrary segment length. Then: and can be written as a sum of short convolutions: where the linear convolution is zero outside the region And for any parameter it is equivalent to the -point circular convolution of with in the region   The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: where: DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over discrete points, and is customarily chosen such that is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency. Pseudocode The following is a pseudocode of the algorithm: (Overlap-add algorithm for linear convolution) h = FIR_filter M = length(h) Nx = length(x) N = 8 × 2^ceiling( log2(M) ) (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.) step_size = N - (M-1) (L in the text above) H = DFT(h, N) position = 0 y(1 : Nx + M-1) = 0 while position + step_size ≤ Nx do y(position+(1:N)) = y(position+(1:N)) + IDFT(DFT(x(position+(1:step_size)), N) × H) position = position + step_size end Efficiency considerations When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about complex multiplications for the FFT, product of arrays, and IFFT. Each iteration produces output samples, so the number of complex multiplications per output sample is about: For example, when and equals whereas direct evaluation of would require up to complex multiplications per output sample, the worst case being when both and are complex-valued. Also note that for any given has a minimum with respect to Figure 2 is a graph of the values of that minimize for a range of filter lengths (). Instead of , we can also consider applying to a long sequence of length samples. The total number of complex multiplications would be: Comparatively, the number of complex multiplications required by the pseudocode algorithm is: Hence the cost of the overlap–add method scales almost as while the cost of a single, large circular convolution is almost . The two methods are also compared in Figure 3, created by Matlab simulation. The contours are lines of constant ratio of the times it takes to perform both methods. When the overlap-add method is faster, the ratio exceeds 1, and ratios as high as 3 are seen. See also Overlap–save method Circular_convolution#Example Notes References Further reading Signal processing Transforms Fourier analysis Numerical analysis
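A minimal NumPy sketch of the overlap-add procedure outlined in the pseudocode above (illustrative only, not a reference implementation). It follows the pseudocode's block-size choice N = 8 × 2^ceil(log2 M); the function name overlap_add_convolve is invented for this example, and unlike the pseudocode the loop also handles the final partial segment.

import numpy as np

def overlap_add_convolve(x, h):
    """Linear convolution of a long signal x with an FIR filter h via overlap-add."""
    M, Nx = len(h), len(x)
    N = 8 * 2 ** int(np.ceil(np.log2(M)))        # FFT size, as in the pseudocode
    step = N - (M - 1)                           # segment length L
    H = np.fft.fft(h, N)                         # filter spectrum, computed once
    y = np.zeros(Nx + M - 1)
    for pos in range(0, Nx, step):
        seg = x[pos:pos + step]                  # next short segment of x
        yk = np.fft.ifft(np.fft.fft(seg, N) * H).real   # N-point circular convolution
        end = min(pos + N, len(y))
        y[pos:end] += yk[:end - pos]             # overlap and add
    return y

# Quick sanity check against direct convolution:
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
h = rng.standard_normal(201)                     # M = 201, so N = 2048 here
assert np.allclose(overlap_add_convolve(x, h), np.convolve(x, h))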
Overlap–add method
[ "Mathematics", "Technology", "Engineering" ]
712
[ "Functions and mappings", "Telecommunications engineering", "Computer engineering", "Signal processing", "Mathematical objects", "Computational mathematics", "Mathematical relations", "Transforms", "Numerical analysis", "Approximations" ]
11,685,459
https://en.wikipedia.org/wiki/Sound%20speed%20gradient
In acoustics, the sound speed gradient is the rate of change of the speed of sound with distance, for example with depth in the ocean, or height in the Earth's atmosphere. A sound speed gradient leads to refraction of sound wavefronts in the direction of lower sound speed, causing the sound rays to follow a curved path. The radius of curvature of the sound path is inversely proportional to the gradient. When the sun warms the Earth's surface, there is a negative temperature gradient in the atmosphere. The speed of sound decreases with decreasing temperature, so this also creates a negative sound speed gradient. The sound wave front travels faster near the ground, so the sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The opposite effect happens when the ground is covered with snow, or in the morning over water, when the sound speed gradient is positive. In this case, sound waves can be refracted from the upper levels down to the surface. In underwater acoustics, the speed of sound depends on pressure (hence depth), temperature, and salinity of seawater, leading to vertical speed gradients similar to those that exist in atmospheric acoustics. However, when the sound speed gradient is zero, the sound speed is the same ("isospeed") throughout the water column, i.e. there is no change in sound speed with depth. The same effect occurs in an isothermal atmosphere under the ideal gas assumption. References See also SOFAR channel Wind gradient Acoustics Spatial gradient
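A rough numerical illustration of the statement above that the radius of curvature of a sound path is inversely proportional to the gradient: for a nearly horizontal ray, ray acoustics gives approximately R = c / |dc/dz|. The sound speed and gradient in the sketch below are assumed typical deep-ocean values, not figures taken from this article.

# Illustrative sketch only; the numeric values are assumptions.
c = 1500.0        # sound speed in seawater, m/s (assumed typical value)
dcdz = 0.017      # pressure-driven increase of sound speed with depth, (m/s)/m (assumed)
R = c / abs(dcdz)
print(f"approximate radius of curvature: {R / 1000:.0f} km")   # about 88 km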
Sound speed gradient
[ "Physics" ]
323
[ "Classical mechanics", "Acoustics" ]
11,686,224
https://en.wikipedia.org/wiki/Chichibabin%20pyridine%20synthesis
The Chichibabin pyridine synthesis is a method for synthesizing pyridine rings. The reaction involves the condensation of aldehydes, ketones, α,β-unsaturated carbonyl compounds, or any combination of the above, with ammonia. It was reported by Aleksei Chichibabin in 1924. Methyl-substituted pyridines, which show widespread uses among multiple fields of applied chemistry, are prepared by this methodology. Representative syntheses The syntheses are presently conducted commercially in the presence of oxide catalysts such as modified alumina (Al2O3) or silica (SiO2). The reactants are passed over the catalyst at 350–500 °C. 2-Methylpyridine and 4-methylpyridine are produced as a mixture from acetaldehyde and ammonia. 3-Methylpyridine and pyridine are produced from acrolein and ammonia. The reaction of acrolein and propionaldehyde with ammonia affords mainly 3-methylpyridine. 5-Ethyl-2-methylpyridine is produced from paraldehyde and ammonia. Mechanism and optimizations These syntheses involve many reactions such as imine synthesis, base-catalyzed aldol condensations, and Michael reactions. Many efforts have been made to improve the method, for example by conducting the reaction in the gas phase in the presence of aluminium(III) oxide or a zeolite (yield 98.9% at 500 K). From nitriles Of the many variations that have been explored, one approach employs nitriles as the nitrogen source. For example, acrylonitrile and acetone afford 2-methylpyridine uncontaminated with the 4-methyl derivative. In another variation, alkynes and nitriles react in the presence of organocobalt catalysts, a reaction inspired by alkyne trimerization. See also Chichibabin reaction Gattermann–Skita synthesis Hantzsch pyridine synthesis Ciamician–Dennstedt rearrangement References Pyridine forming reactions Heterocycle forming reactions Name reactions Soviet inventions
Chichibabin pyridine synthesis
[ "Chemistry" ]
455
[ "Name reactions", "Ring forming reactions", "Heterocycle forming reactions", "Organic reactions" ]
11,686,976
https://en.wikipedia.org/wiki/Aluminium-conductor%20steel-reinforced%20cable
Aluminium conductor steel-reinforced cable (ACSR) is a type of high-capacity, high-strength stranded conductor typically used in overhead power lines. The outer strands are high-purity aluminium, chosen for its good conductivity, low weight, low cost, resistance to corrosion and decent mechanical stress resistance. The centre strand is steel for additional strength to help support the weight of the conductor. Steel is of higher strength than aluminium which allows for increased mechanical tension to be applied on the conductor. Steel also has lower elastic and inelastic deformation (permanent elongation) due to mechanical loading (e.g. wind and ice) as well as a lower coefficient of thermal expansion under current loading. These properties allow ACSR to sag significantly less than all-aluminium conductors. As per the International Electrotechnical Commission (IEC) and The CSA Group (formerly the Canadian Standards Association or CSA) naming convention, ACSR is designated A1/S1A. Design The aluminium alloy and temper used for the outer strands in the United States and Canada is normally 1350-H19 and elsewhere is 1370-H19, each with 99.5+% aluminium content. The temper of the aluminium is defined by the aluminium version's suffix, which in the case of H19 is extra hard. To extend the service life of the steel strands used for the conductor core they are normally galvanized, or coated with zinc to prevent corrosion. The diameters of the strands used for both the aluminum and steel strands vary for different ACSR conductors. ACSR cable still depends on the tensile strength of the aluminium; it is only reinforced by the steel. Because of this, its continuous operating temperature is limited to , the temperature at which aluminium begins to anneal and soften over time. For situations where higher operating temperatures are required, aluminium-conductor steel-supported (ACSS) may be used. Steel core The standard steel core used for ACSR is galvanized steel, but zinc, 5% or 10% aluminium alloy and trace mischmetal coated steel (sometimes called by the trade-names Bezinal or Galfan) and aluminium-clad steel (sometimes called by the trade-name Alumoweld) are also available. Higher strength steel may also be used. In the United States the most commonly used steel is designated GA2 for galvanized steel (G) with class A zinc coating thickness (A) and regular strength (2). Class C zinc coatings are thicker than class A and provide increased corrosion protection at the expense of reduced tensile strength. A regular strength galvanized steel core with Class C coating thickness would be designated GC2. Higher strength grades of steel are designated high-strength (3), extra-high-strength (4), and ultra-high-strength (5). An ultra-high-strength galvanized steel core with class A coating thickness would be designated GA5. The use of higher strength steel cores increases the tensile strength of the conductor allowing for higher tensions which results in lower sag. Zinc-5% aluminium mischmetal coatings are designated with an "M". These coatings provide increased corrosion protection and heat resistance compared to zinc alone. Regular strength Class "A" mischmetal thickness weight coated regular strength steel would be designated MA2. Aluminium-clad steel is designated as "AW". Aluminium-clad steel offers increased corrosion protection and conductivity at the expense of reduced tensile strength. Aluminium-clad steel is commonly specified for coastal applications. IEC and CSA use a different naming convention. 
The most commonly used steel is S1A for S1 regular strength steel with a class A coating. S1 steel has slightly lower tensile strength than the regular strength steel used in the United States. Per the Canadian CSA standards the S2A strength grade is classified as High Strength steel. The equivalent material per the ASTM standards is the GA2 strength grade and called Regular Strength steel. The CSA S3A strength grade is classified as Extra High Strength steel. The equivalent material per the ASTM standards is the GA3 strength grade called High Strength. The present day CSA standards for overhead electrical conductor do not yet officially recognize the ASTM equivalent GA4 or GA5 grades. The present day CSA standards do not yet officially recognize the ASTM "M" family of zinc alloy coating material. Canadian utilities are using conductors built with the higher strength steels with the "M" zinc alloy coating. Lay Lay of a conductor is determined by four extended fingers; "right" or "left" direction of the lay is determined depending if it matches finger direction from right hand or left hand respectively. Overhead aluminium (AAC, AAAC, ACAR) and ACSR conductors in the USA are always manufactured with the outer conductor layer with a right-hand lay. Going toward the center, each layer has alternating lays. Some conductor types (e.g. copper overhead conductor, OPGW, steel EHS) are different and have left-hand lay on the outer conductor. Some South American countries specify left-hand lay for the outer conductor layer on their ACSR, so those are wound differently than those used in the USA. Sizing ACSR conductors are available in numerous specific sizes, with single or multiple center steel wires and generally larger quantities of aluminium strands. Although rarely used, there are some conductors that have more steel strands than aluminum strands. An ACSR conductor can in part be denoted by its stranding, for example, an ACSR conductor with 72 aluminium strands with a core of 7 steel strands will be called 72/7 ACSR conductor. Cables generally range from #6 AWG ("6/1" – six outer aluminum conductors and one steel reinforcing conductor) to 2167 kcmil ("72/7" – seventy two outer aluminum conductors and seven steel reinforcing conductors). Naming convention To help avoid confusion due to the numerous combinations of stranding of the steel and aluminium strands, code words are used to specify a specific conductor version. In North America bird names are used for the code words while animal names are used elsewhere. For instance in North America, Grosbeak is a (636 kcmil) ACSR conductor with 26/7 Aluminium/Steel stranding whereas Egret is the same total aluminium size (, 636 kcmil conductor) but with 30/19 Aluminium/Steel stranding. Although the number of aluminium strands is different between Grosbeak and Egret, differing sizes of the aluminium strands are used to offset the change in the number of strands such that the total amount of aluminium remains the same. Differences in the number of steel strands result in varying weights of the steel portion and also result in different overall conductor diameters. Most utilities standardize on a specific conductor version when various versions of the same amount of aluminum to avoid issues related to different size hardware (such as splices). Due to the numerous different sizes available, utilities often skip over some of the sizes to reduce their inventory. The various stranding versions result in different electrical and mechanical characteristics. 
Ampacity ratings Manufacturers of ACSR typically provide ampacity tables for a defined set of assumptions. Individual utilities normally apply different ratings due to using varying assumptions (which may be a result in higher or lower amperage ratings than those the manufacturers provide). Significant variables include wind speed and direction relative to the conductor, sun intensity, emissivity, ambient temperature, and maximum conductor temperature. Conducting properties In three phase electrical power distribution, conductors must be designed to have low electrical impedance in order to assure that the power lost in the distribution of power is minimal. Impedance is a combination of two quantities: resistance and reactance. The resistances of ASCR conductors are tabulated for different conductor designs by the manufacturer at DC and AC frequency assuming specific operating temperatures. The reasons that resistance changes with frequency are largely due to the skin effect, the proximity effect, and hysteresis loss. Depending on the geometry of the conductor as differentiated by the conductor name, these phenomena have varying degrees of affecting the overall resistance in the conductor at AC vs DC frequency. Often not tabulated with ACSR conductors is the electrical reactance of the conductor, which is due largely to the spacing between the other current carrying conductors and the conductor radius. The reactance of the conductor contributes significantly to the overall current that needs to travel through the line, and thus contributes to resistive losses in the line. For more information on transmission line inductance and capacitance, see electric power transmission and overhead power line. Skin effect The skin effect decreases the cross sectional area in which the current travels through the conductor as AC frequency increases. For alternating current, most (63%) of the electric current flows between the surface and the skin depth, δ, which depends on the frequency of the current and the electrical (conductivity) and magnetic properties of the conductor. This decreased area causes the resistance to rise due to the inverse relationship between resistance and conductor cross sectional area. The skin effect benefits the design, as it causes the current to be concentrated towards the low-resistivity aluminum on the outside of the conductor. To illustrate the impact of the skin effect, the American Society for Testing and Materials (ASTM) standard includes the conductivity of the steel core when calculating the DC and AC resistance of the conductor, but the IEC and CSA Group standards do not. Proximity effect In a conductor (ACSR and other types) carrying AC current, if currents are flowing through one or more other nearby conductors the distribution of current within each conductor will be constrained to smaller regions. The resulting current crowding is termed as the proximity effect. This crowding gives an increase in the effective AC resistance of the circuit, with the effect at 60 Hertz being greater than at 50 Hertz. Geometry, conductivity, and frequency are factors in determining the amount of proximity effect. The proximity effect is result of a changing magnetic field which influences the distribution of an electric current flowing within an electrical conductor due to electromagnetic induction. When an alternating current (AC) flows through an isolated conductor, it creates an associated alternating magnetic field around it. 
The alternating magnetic field induces eddy currents in adjacent conductors, altering the overall distribution of current flowing through them. The result is that the current is concentrated in the areas of the conductor furthest away from nearby conductors carrying current in the same direction. Hysteresis loss Hysteresis in an ACSR conductor is due to the atomic dipoles in the steel core changing direction due to induction from the 60 or 50 Hertz AC current in the conductor. Hysteresis losses in ACSR are undesirable and can be minimized by using an even number of aluminium layers in the conductor. Due to the cancelling effect of the magnetic field from the opposing lay (right-hand and left-hand) conductors for two aluminium layers there is significantly less hysteresis loss in the steel core than there would be for one or three aluminium layers where the magnetic field does not cancel out. The hysteresis effect is negligible on ACSR conductors with even numbers of aluminium layers and so it is not considered in these cases. For ACSR conductors with an odd number of aluminium layers however, a magnetization factor is used to accurately calculate the AC resistance. The correction method for single-layer ACSR is different than that used for three-layer conductors. Due to applying the magnetization factor, a conductor with an odd number of layers has an AC resistance slightly higher than an equivalent conductor with an even number of layers. Due to higher hysteresis losses in the steel and associated heating of the core, an odd-layer design will have a lower ampacity rating (up to a 10% de-rate) than an equivalent even-layer design. All standard ACSR conductors smaller than Partridge ( {266.8 kcmil} 26/7 Aluminium/Steel) have only one layer due to their small diameters so the hysteresis losses cannot be avoided. Non-standard designs ACSR is widely used due to its efficient and economical design. Variations of standard (sometimes called traditional or conventional) ACSR are used in some cases due to the special properties they offer which provide sufficient advantage to justify their added expense. Special conductors may be more economic, offer increased reliability, or provide a unique solution to an otherwise difficult, of impossible, design problem. The main types of special conductors include "trapezoidal wire conductor" (TW) - a conductor having aluminium strands with a trapezoidal shape rather than round) and "self-damping" (SD), sometimes called "self-damping conductor" (SDC). A similar, higher temperature conductor made from annealed aluminium is called "aluminium conductor steel supported" (ACSS) is also available. Trapezoidal wire Trapezoidal-shaped wire (TW) can be used in lieu of round wire in order to "fill in the gaps" and have a 10–15% smaller overall diameter for the same cross-sectional area or a 20–25% larger cross-sectional area for the same overall diameter. Ontario Hydro (Hydro One) introduced trapezoidal-shaped wire ACSR conductor designs in the 1980s to replace existing round-wire ACSR designs (they called them compact conductors; these conductor types are now called ACSR/TW). Ontario Hydro's trapezoidal-shaped wire (TW) designs used the same steel core but increased the aluminium content of the conductor to match the overall diameter of the former round-wire designs (they could then use the same hardware fittings for both the round and the TW conductors). 
Hydro One's designs for their trapezoidal ACSR/TW conductors only use even numbers of aluminium layers (either two layers or four layers). They do not use designs which have an odd number of layers (three layers) because that design incurs higher hysteresis losses in the steel core. Also in the 1980s, Bonneville Power Administration (BPA) introduced TW designs where the size of the steel core was increased to maintain the same Aluminium/Steel ratio. Self-damping Self-damping (ACSR/SD) is a nearly obsolete conductor technology and is rarely used for new installations. It is a concentric-lay stranded, self-damping conductor designed to control wind-induced (Aeolian-type) vibration in overhead transmission lines by internal damping. Self-damping conductors consist of a central core of one or more round steel wires surrounded by two layers of trapezoidal-shaped aluminium wires. One or more layers of round aluminium wires may be added as required. SD conductor differs from conventional ACSR in that the aluminium wires in the first two layers are trapezoidal shaped and sized so that each aluminium layer forms a stranded tube which does not collapse onto the layer beneath when under tension, but maintains a small annular gap between layers. The trapezoidal wire layers are separated from each other and from the steel core by the two smaller annular gaps that permit movement between the layers. The round aluminium wire layers are in tight contact with each other and the underlying trapezoidal wire layer. Under vibration, the steel core and the aluminium layers vibrate with different frequencies and impact damping results. This impact damping is sufficient to keep any Aeolian vibration to a low level. The use of trapezoidal strands also results in reduced conductor diameter for a given AC resistance per mile. The major advantages of ACSR/SD are: High self-damping allows the use of higher unloaded tension levels, resulting in reduced maximum sag and thus reduced structure height and/or fewer structures per km [or per mile]. Reduced diameter for a given AC resistance yields reduced transverse wind and ice loading on structures. The major disadvantages of ACSR/SD are: There will most likely be increased installation and clipping costs due to special hardware requirements and specialized stringing methods. The conductor design always requires the use of a steel core even in light loading areas. Aluminium-conductor steel supported Aluminium-conductor steel supported (ACSS) conductor visually appears to be similar to standard ACSR but the aluminium strands are fully annealed. Annealing the aluminium strands reduces the composite conductor strength, but after installation, permanent elongation of the aluminium strands results in a much larger percentage of the conductor tension being carried in the steel core than is true for standard ACSR. This in turn yields reduced composite thermal elongation and increased self-damping. The major advantages of ACSS are: Since the aluminium strands are "dead-soft" to begin with, the conductor may be operated at temperatures in excess of without loss of strength. Since the tension in the aluminium strands is normally low, the conductor's self-damping of Aeolian vibration is high and it may be installed at high unloaded tension levels without the need for separate Stockbridge-type dampers. The major disadvantages of ACSS are: In areas experiencing heavy ice load, the reduced strength of this conductor relative to standard ACSR may make it less desirable. 
The softness of the annealed aluminium strands and the possible need for pre-stressing prior to clipping and sagging may raise installation costs. Twisted pair Twisted pair (TP) conductor (sometimes called by the trade-names T-2 or VR) has the two sub-conductors twisted (usually with a left-hand lay) about one another generally with a lay length of approximately three meters (nine feet). The conductor cross-section of the TP is a rotating "figure-8". The sub-conductors can be any type of standard ACSR conductor but the conductors need to match one another to provide mechanical balance. The major advantages of TP conductor are: The use of the TP conductor reduces the propensity of ice/wind galloping starting on the line. In an ice storm when ice deposits start to accumulate along the conductor the twisted conductor profile prevents a uniform airfoil shape from forming. With a standard round conductor the airfoil shape results in uplift of the conductor and initiation of the galloping motion. The TP conductor profile and this absence of the uniform airfoil shape inhibits the initiation of the galloping motion. The reduction in motion during icing events helps prevent the phase conductors from contacting each other causing a fault and an associated outage of the electrical circuit. With the reduction in large amplitude motions, closer phase spacing or longer span lengths can be used. This in turn can result in a lower cost of construction. TP conductor is generally installed only in areas that normally are exposed to wind speed and freezing temperature conditions associated with ice buildup. The non-round shape of this conductor reduces the amplitude of Aeolian vibration and the accompanying fatigue inducing strains near splices and conductor attachment clamps. TP conductors can gently rotate to dissipate energy. As a result, TP conductor can be installed to higher tension levels and reduced sags. The major disadvantages of TP conductor are: The non-round cross-section yields wind and ice loadings which are about 11% higher than standard conductor of the same AC resistance per mile. The installation of, and hardware for this conductor, can be somewhat more expensive than standard conductor. Splicing Many electrical circuits are longer than the length of conductor which can be contained on one reel. As a result, splicing is often necessary to join conductors to provide the desired length. It is important that the splice not be the weak link. A splice (joint) must have high physical strength along with a high electrical current rating. Within the limitations of the equipment used to install the conductor from the reels, a sufficient length of conductor is generally purchased that the reel can accommodate to avoid more splices than are absolutely necessary. Splices are designed to run cooler than the conductor. The temperature of the splice is kept lower by having a larger cross-sectional area and thus less electrical resistance than the conductor. Heat generated at the splice is also dissipated faster due to the larger diameter of the splice. Failures of splices are of concern, as a failure of just one splice can cause an outage that affects a large amount of electrical load. Most splices are compression-type splices (crimps). These splices are inexpensive and have good strength and conductivity characteristics. 
Some splices, called automatics, use a jaw-type design that is faster to install (does not require the heavy compression equipment) and are often used during storm restoration when speed of installation is more important than the long term performance of the splice. Causes for splice failures are numerous. Some of the main failure modes are related to installation issues, such as: insufficient cleaning (wire brushing) of the conductor to eliminate the aluminium oxide layer (which has a high resistance {is a poor electrical conductor}), improper application of conducting grease, improper compression force, improper compression locations or number of compressions. Splice failures can also be due to Aeolian vibration damage as the small vibrations of the conductor over time cause damage (breakage) of the aluminium strands near the ends of the splice. Special splices (two-piece splices) are required on SD-type conductors as the gap between the trapezoidal aluminium layer and the steel core prevents the compression force on the splice to the steel core to be adequate. A two-piece design has a splice for the steel core and a longer and larger-diameter splice for the aluminium portion. The outer splice must be threaded on first and slid along the conductor and the steel splice compressed first and then the outer splice is slid back over the smaller splice and then compressed. This complicated process can easily result in a poor splice. Splices can also fail partially, where they have higher resistance than expected, usually after some time in the field. These can be detected using thermal camera, thermal probes, and direct resistance measurements, even when the line is energized. Such splices usually require replacement, either on deenergized line, by doing a temporary bypass to replace it, or by adding a big splice over the existing splice, without disconnecting. Conductor coatings When ACSR is new, the aluminium has a shiny surface which has a low emissivity for heat radiation and a low absorption of sunlight. As the conductor ages the color becomes dull gray due to the oxidation reaction of the aluminium strands. In high pollution environments, the color may turn almost black after many years of exposure to the elements and chemicals. For aged conductor, the emissivity for heat radiation and the absorption of sunlight increases. Conductor coatings are available that have a high emissivity for high heat radiation and a low absorption of sunlight. These coatings would be applied to new conductor during manufacture. These types of coatings have the ability to potentially increase the current rating of the ACSR conductor. For the same amount of amperage, the temperature of the same conductor will be lower due to the better heat dissipation of the higher emissivity coating. See also ACCC conductor Copper clad steel References Power engineering Electrical wiring
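As a numerical aside to the skin-effect discussion earlier in this article, the sketch below evaluates the classical skin-depth formula delta = sqrt(rho / (pi * f * mu)) for aluminium at power frequencies. The resistivity figure is an assumed handbook value for hard-drawn 1350 aluminium, and the comparison with typical strand-layer thickness is only indicative.

import math

RHO_AL = 2.83e-8                 # resistivity of 1350-H19 aluminium, ohm*m (assumed)
MU_0 = 4 * math.pi * 1e-7        # permeability of free space, H/m

def skin_depth(rho, f, mu=MU_0):
    """Classical skin depth delta = sqrt(rho / (pi * f * mu)), in metres."""
    return math.sqrt(rho / (math.pi * f * mu))

for f in (50.0, 60.0):
    print(f"{f:.0f} Hz: skin depth ~ {skin_depth(RHO_AL, f) * 1000:.1f} mm")
# ~12.0 mm at 50 Hz and ~10.9 mm at 60 Hz, comparable to the thickness of the
# aluminium layers on typical ACSR, so at power frequency the current still
# crowds toward the outer aluminium strands rather than the steel core.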
Aluminium-conductor steel-reinforced cable
[ "Physics", "Engineering" ]
4,766
[ "Electrical systems", "Building engineering", "Energy engineering", "Physical systems", "Power engineering", "Electrical engineering", "Electrical wiring" ]
11,688,511
https://en.wikipedia.org/wiki/Motorola%20W220
The Motorola W220 is an entry-level flip-phone for the GSM network, introduced in 2006. The phone features dual-band capabilities, an FM radio, and a 65k color screen. Visually its design was based on the popular Razr phones from the same manufacturer. External links Motorola W220 Motorola W220 - Full phone specifications W220 Mobile phones introduced in 2006
Motorola W220
[ "Technology" ]
83
[ "Mobile technology stubs", "Mobile phone stubs" ]
11,688,824
https://en.wikipedia.org/wiki/Fission%20product%20yield
Nuclear fission splits a heavy nucleus such as uranium or plutonium into two lighter nuclei, which are called fission products. Yield refers to the fraction of a fission product produced per fission. Yield can be broken down by: Individual isotope Chemical element spanning several isotopes of different mass number but same atomic number. Nuclei of a given mass number regardless of atomic number. Known as "chain yield" because it represents a decay chain of beta decay. Isotope and element yields will change as the fission products undergo beta decay, while chain yields do not change after completion of neutron emission by a few neutron-rich initial fission products (delayed neutrons), with half-life measured in seconds. A few isotopes can be produced directly by fission, but not by beta decay because the would-be precursor with atomic number one less is stable and does not decay (atomic number grows by 1 during beta decay). Chain yields do not account for these "shadowed" isotopes; however, they have very low yields (less than a millionth as much as common fission products) because they are far less neutron-rich than the original heavy nuclei. Yield is usually stated as percentage per fission, so that the total yield percentages sum to 200%. Less often, it is stated as percentage of all fission products, so that the percentages sum to 100%. Ternary fission, about 0.2–0.4% of fissions, also produces a third light nucleus such as helium-4 (90%) or tritium (7%). Mass vs. yield curve If a graph of the mass or mole yield of fission products against the atomic number of the fragments is drawn then it has two peaks, one in the area zirconium through to palladium and one at xenon through to neodymium. This is because the fission event causes the nucleus to split in an asymmetric manner, as nuclei closer to magic numbers are more stable. Yield vs. Z - This is a typical distribution for the fission of uranium. Note that in the calculations used to make this graph the activation of fission products was ignored and the fission was assumed to occur in a single moment rather than a length of time. In this bar chart results are shown for different cooling times (time after fission). Because of the stability of nuclei with even numbers of protons and/or neutrons the curve of yield against element is not a smooth curve. It tends to alternate. In general, the higher the energy of the state that undergoes nuclear fission, the more likely a symmetric fission is, hence as the neutron energy increases and/or the energy of the fissile atom increases, the valley between the two peaks becomes more shallow; for instance, the curve of yield against mass for Pu-239 has a more shallow valley than that observed for U-235, when the neutrons are thermal neutrons. The curves for the fission of the later actinides tend to make even more shallow valleys. In extreme cases such as 259Fm, only one peak is seen. Yield is usually expressed relative to number of fissioning nuclei, not the number of fission product nuclei, that is, yields should sum to 200%. The table in the next section ("Ordered by yield") gives yields for notable radioactive (with half-lives greater than one year, plus iodine-131) fission products, and (the few most absorptive) neutron poison fission products, from thermal neutron fission of U-235 (typical of nuclear power reactors), computed from . 
The yields in the table sum to only 45.5522%, including 34.8401% which have half-lives greater than one year: The remainder and the unlisted 54.4478% decay with half-lives less than one year into nonradioactive nuclei. This is before accounting for the effects of any subsequent neutron capture; e.g.: 135Xe capturing a neutron and becoming nearly stable 136Xe, rather than decaying to 135Cs which is radioactive with a half-life of 2.3 million years Nonradioactive 133Cs capturing a neutron and becoming 134Cs, which is radioactive with a half-life of 2 years Many of the fission products with mass 147 or greater such as 147Pm, 149Sm, 151Sm, and 155Eu have significant cross sections for neutron capture, so that one heavy fission product atom can undergo multiple successive neutron captures. Besides fission products, the other types of radioactive products are plutonium containing 238Pu, 239Pu, 240Pu, 241Pu, and 242Pu, minor actinides including 237Np, 241Am, 243Am, curium isotopes, and perhaps californium reprocessed uranium containing 236U and other isotopes tritium activation products of neutron capture by the reactor or bomb structure or the environment Fission products from U-235 Cumulative fission yields Cumulative fission yields give the amounts of nuclides produced either directly in the fission or by decay of other nuclides. Ordered by mass number Decays, even if lengthy, are given down to the stable nuclide. Decays with half lives longer than a century are marked with a single asterisk (), while decays with a half life longer than a hundred million years are marked with two asterisks (). Half lives, decay modes, and branching fractions Ordered by thermal neutron absorption cross section References External links HANDBOOK OF NUCLEAR DATA FOR SAFEGUARDS: DATABASE EXTENSIONS, AUGUST 2008 The Live Chart of Nuclides - IAEA Color-map of yields, and detailed data by click on a nuclide. Nuclear fission Nuclear chemistry
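A minimal sketch of the two yield normalizations described above: "percent per fission" sums to 200% because each fission produces two product nuclei, while "percent of all fission products" sums to 100%. The chain-yield numbers in the dictionary are hypothetical placeholders, not measured values.

# Hypothetical placeholder values, illustrating the normalization only.
yields_per_fission = {"A=95": 6.5, "A=134": 7.9, "all other chains": 185.6}

total = sum(yields_per_fission.values())
assert abs(total - 200.0) < 1e-9                 # per-fission yields sum to 200%

# Converting to "percent of all fission products" halves each figure:
yields_of_products = {k: v * 100.0 / total for k, v in yields_per_fission.items()}
print(yields_of_products)                        # these values sum to 100%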
Fission product yield
[ "Physics", "Chemistry" ]
1,148
[ "Nuclear chemistry", "Nuclear fission", "nan", "Nuclear physics" ]
11,688,880
https://en.wikipedia.org/wiki/Bare%20Point%20water%20treatment%20plant
The Bare Point Water Treatment Plant is the primary water filtration plant in the city of Thunder Bay, Ontario, drawing 113.6 million litres (25 million gallons) from Lake Superior per day. The plant uses a Zeeweed 1000 Version 3 Ultra-Filtration system, the first of its kind in the world, which reduces the need for harmful chemicals. The Zeeweed system uses long thin straws that suck up water then force it through the small holes of a membrane to filter out particles. The Bare Point plant is located on the shore of Thunder Bay, at the northeastern corner of Thunder Bay's city limits, accessible off Lakeshore Drive. It was first constructed in 1903 and expanded in 1978 and again in 2007 to its current capacity. The plant's treatment method uses pre-chlorination, then coagulation-flocculation followed by membrane ultrafiltration and post chlorine disinfection. References External links Thunder Bay.ca: Treating our Drinking Water   ZeeWeed 1000 Ultrafiltration Membrane Buildings and structures in Thunder Bay Water treatment facilities
Bare Point water treatment plant
[ "Chemistry" ]
217
[ "Water treatment", "Water treatment facilities" ]
11,689,708
https://en.wikipedia.org/wiki/Fritware
Fritware, also known as stone-paste, is a type of pottery in which ground glass (frit) is added to clay to reduce its fusion temperature. The mixture may include quartz or other siliceous material. An organic compound such as gum or glue may be added for binding. The resulting mixture can be fired at a lower temperature than clay alone. A glaze is then applied on the surface. Fritware was invented to give a strong white body, which, combined with tin-glazing of the surface, allowed it to approximate the result of Chinese porcelain. Porcelain was not manufactured in the Islamic world until modern times, and most fine Islamic pottery was made of fritware. Frit was also a significant component in some early European porcelains. Composition and techniques Fritware was invented in the Medieval Islamic world to give a strong white body, which, combined with tin-glazing of the surface, allowed it to approximate the white colour, translucency, and thin walls of Chinese porcelain. True porcelain was not manufactured in the Islamic world until modern times, and most fine Islamic pottery was made of fritware. Frit was also a significant component in some early European porcelains. Although its production centres may have shifted with time and imperial power, fritware remained in continued use throughout the Islamic world with little significant innovation. The technique was used to create many other significant artistic traditions such as lustreware, Raqqa ware, and Iznik pottery. Raw materials in one contemporary recipe used in Jaipur are quartz powder, glass power, fuller's earth, borax and tragacanth gum. Raw materials for a glaze are reported to be glass powder, lead oxide, borax, potassium nitrate, zinc oxide and boric acid. The blue decoration is cobalt oxide. History Frit is crushed glass that is used in ceramics. The pottery produced from the manufacture of frit is often called 'fritware' but has also been referred to as "stonepaste" and "faience" among other names. Fritware was innovative because the glaze and the body of the ceramic piece were made of nearly the same materials, allowing them to fuse better, be less likely to flake, and could also be fired at a lower temperature. The manufacture of proto-fritware began in Iraq in the 9th century AD under the Abbasid Caliphate, and with the establishment of Samarra as its capital in 836, there is extensive evidence of ceramics in the court of the Abbasids both in Samarra and Baghdad. A ninth-century corpus of 'proto-stonepaste' from Baghdad has "relict glass fragments" in its fabric. The glass is alkali-lime-lead-silica and, when the paste was fired or cooled, wollastonite and diopside crystals formed within the glass fragments. The lack of "inclusions of crushed pottery" suggests these fragments did not come from a glaze. The reason for their addition would have been to release alkali into the matrix on firing, which would "accelerate vitrification at a relatively low firing temperature, and thus increase the hardness and density of the [ceramic] body." Following the fall of the Abbasid Caliphate, the main centres of manufacture moved to Egypt where true fritware was invented between the 10th and the 12th centuries under the Fatimids, but the technique then spread throughout the Middle East. There are many variations on designs, colour, and composition, the last often attributed to the differences in mineral compositions of soil and rock used in the production of fritware. 
The bodies of the fritware ceramics were always made quite thin to imitate their porcelain counterparts in China, a practice not common before the discovery of the frit technique which produced stronger ceramics. In the 13th century the town of Kashan in Iran was an important centre for the production of fritware. Abū'l-Qāsim, who came from a family of tilemakers in the city, wrote a treatise in 1301 on precious stones that included a chapter on the manufacture of fritware. His recipe specified a fritware body containing a mixture of 10 parts silica to 1 part glass frit and 1 part clay. The frit was prepared by mixing powdered quartz with soda which acted as a flux. The mixture was then heated in a kiln. The internal circulation of pottery within the Islamic world from its earliest days was quite common, with the movement of ideas regarding pottery without their physical presence in certain areas being readily apparent. The movement of fritware into China - whose monopoly on porcelain production had prompted the Islamic world to produce fritware to begin with - impacted Chinese porcelain decoration, deriving the signature cobalt blue colour from Islamic traditions of fritware decoration. The transfer of this artistic idea was likely a consequence of the enhanced connection and trade relations between the Middle and Near East and Far East Asia under the Mongols beginning in the 13th century. The Middle and Near East had an initial monopoly on the cobalt colour due to its own richness in cobalt ore, which was especially abundant in Qamsar and Anarak in Persia. Iznik pottery was produced in Ottoman Turkey beginning in the last quarter of 15th century AD. It consists of a body, slip, and glaze, where the body and glaze are 'quartz-frit'. The 'frits' in both cases "are unusual in that they contain lead oxide as well as soda"; the lead oxide would help reduce the thermal expansion coefficient of the ceramic. Microscopic analysis reveals that the material that has been labeled 'frit' is 'interstitial glass' which serves to connect the quartz particles. The glass was added as frit and the interstitial glass formed on firing. In 2011, 29 potteries, employing a total of 300 persons, making fritware were identified in Jaipur. Applications Fritware served a wide variety of purposes in the medieval Islamic world. As a porcelain substitute, the fritware technique was used to craft bowls, vases, and pots, not only as symbols of luxury but also to practical ends. It was similarly used by medieval tilemakers to craft strong tiles with a colourless body that provided a suitable base for underglaze and decoration. Fritware was also known to be used to craft objects beyond pottery and tiling, and has been found to be used in the twelfth century to make objects like chess sets. There is also a tradition of using fritware to create intricate figurines, with surviving examples from the Seljuk Empire. It was also used as the ceramic body for Islamic lustreware, a technique that puts a lustred ceramic glaze onto pottery. Blue pottery A small manufacturing cluster of fritware exists around Jaipur, Rajasthan in India, where it is known as 'Blue Pottery' due its most popular glaze. The Blue Pottery of Jaipur technique may have arrived in India with the Mughals, with production in Jaipur dating to at least as early as the 17th century. References Further reading "Technology of Frit Making in Iznik." Okyar F. Euro Ceramics VIII, Part 3. Trans Tech Publications. 2004, p. 2391-2394. 
Published for The European Ceramic Society. Pancaroğlu, O. (2007). Perpetual glory: Medieval Islamic ceramics from the Harvey B. Plotnick Collection (M. Bayani, Trans.). Chicago, IL: Art Institute of Chicago. Watson, O. (2004). Ceramics from Islamic lands. New York, NY: Thames & Hudson. Pottery Arabic pottery Iranian pottery Ceramic materials Islamic pottery Arab inventions
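The body recipe quoted above is given in parts by weight. Below is a minimal sketch, in Python, of turning Abū'l-Qāsim's 10:1:1 proportions into weights for a batch of a chosen size; the helper name and the 12 kg batch are illustrative assumptions, not taken from any historical source.

```python
def stonepaste_batch(total_kg: float) -> dict:
    """Split a batch into the 10:1:1 proportions (silica : glass frit : clay) quoted above."""
    parts = {"silica": 10, "glass frit": 1, "clay": 1}
    total_parts = sum(parts.values())                       # 12 parts in all
    return {name: round(total_kg * p / total_parts, 2) for name, p in parts.items()}

# An illustrative 12 kg batch works out to whole-number weights:
print(stonepaste_batch(12.0))   # {'silica': 10.0, 'glass frit': 1.0, 'clay': 1.0}
```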
Fritware
[ "Engineering" ]
1,580
[ "Ceramic engineering", "Ceramic materials" ]
11,689,793
https://en.wikipedia.org/wiki/Furuno
(commonly known as Furuno) is a Japanese electronics company whose main products are marine electronics, including marine radar systems, fish finders, and navigational instruments. The company also manufactures global positioning systems and medical equipment, and entered the weather radar market in 2013. History Furuno Electric Shokai was founded in Nagasaki, Japan in 1948. The same year, Furuno commercialized the world's first practical fish finder. Manufacturing continued to ramp up as the decade came to a close, and by the mid-1950s, Furuno was producing various marine products, such as early examples of commercial marine radars. In 1973, Furuno created an early iteration of satellite positioning receivers for vessels at sea. Later that decade, Furuno entered the United States market, establishing an HQ in the United States as Furuno USA. Following this expansion and continued growth, Furuno continued expanding its marine-based radar products. In 2009, Furuno acquired San Francisco-based eRide, Inc., a fabless semiconductor company. Following this acquisition, in 2013, Furuno introduced an X-band weather radar, the smallest of its kind. In 2015, the company's GNSS Receiver Modules were used in radio controlled flying quadcopters. In popular culture Furuno's marine electronic devices have been featured in Licence to Kill (1989) as product placement. Gallery Notes External links Furuno Global page Wiki collection of bibliographic works on Furuno Electronics companies of Japan Defense companies of Japan Global Positioning System Fishing equipment manufacturers Avionics companies Marine electronics Navigation system companies Radar manufacturers Health care companies of Japan Companies listed on the Tokyo Stock Exchange Companies based in Hyōgo Prefecture Electronics companies established in 1938 1938 establishments in Japan Japanese brands
Furuno
[ "Technology", "Engineering" ]
356
[ "Wireless locating", "Aircraft instruments", "Aerospace engineering", "Global Positioning System", "Marine engineering", "Marine electronics" ]
11,690,640
https://en.wikipedia.org/wiki/Operation%20Looking%20Glass
Looking Glass (or Operation Looking Glass) is the historic code name for an airborne command and control center operated by the United States. In more recent years it has been more officially referred to as the ABNCP (Airborne National Command Post). It provides command and control of U.S. nuclear forces in the event that ground-based command centers have been destroyed or otherwise rendered inoperable. In such an event, the general officer aboard the Looking Glass serves as the Airborne Emergency Action Officer (AEAO) and by law assumes the authority of the National Command Authority and could command execution of nuclear attacks. The AEAO is supported by a battle staff of approximately 20 people, with another dozen responsible for the operation of the aircraft systems. The name Looking Glass, which is another name for a mirror, was chosen for the Airborne Command Post because the mission operates in parallel with the underground command post at Offutt Air Force Base. History The code name "Looking Glass" came from the aircraft's ability to "mirror" the command and control functions of the underground command post at the U.S. Air Force's Strategic Air Command (SAC) headquarters at Offutt AFB, Nebraska. The SAC Airborne Command Post or "Looking Glass" was initiated in 1960, with the conversion of 5 KC-135A tankers into Airborne Command Posts. On July 1, 1960, operational testing began under the code name Looking Glass, operated by the 34th Air Refueling Squadron at Offutt AFB. The mission transferred to the 38th Strategic Reconnaissance Squadron in August 1966, to the 2nd Airborne Command and Control Squadron in April 1970, to the 7th Airborne Command and Control Squadron in July 1994, and to USSTRATCOM's Strategic Communications Wing One in October 1998. The Strategic Air Command put the Looking Glass mission on continuous airborne alert starting February 3, 1961, with aircraft from the 34th Air Refueling Squadron based at its headquarters at Offutt AFB, backed up by aircraft flying with the Second Air Force / 913th Air Refueling Squadron at Barksdale AFB, Louisiana, Eighth Air Force / 99th Air Refueling Squadron at Westover AFB, Massachusetts, and Fifteenth Air Force / 22d Air Refueling Squadron, March AFB, California. EC-135 Looking Glass aircraft were airborne 24 hours a day for over 29 years, until July 24, 1990, when "The Glass" ceased continuous airborne alert, but remained on ground or airborne alert 24 hours a day. Looking Glass mirrors ground-based command, control, and communications (C3 or C³) located at the USSTRATCOM Global Operations Center (GOC) at Offutt AFB. The EC-135 Looking Glass aircraft were equipped with the Airborne Launch Control System, capable of transmitting launch commands to U.S.
ground-based intercontinental ballistic missiles (ICBMs) in the event that the ground launch control centers were rendered inoperable. The Looking Glass was also designed to help ensure continuity and reconstitution of the US government in the event of a nuclear attack on North America. Although the two types of aircraft are distinct, the "Doomsday Plane" nickname is also frequently associated with the Boeing E-4 "Nightwatch" Advanced Airborne Command Post mission and aircraft. The Looking Glass was the anchor in what was known as the World Wide Airborne Command Post (WWABNCP) network. This network of specially equipped EC-135 aircraft would launch from ground alert status and establish air-to-air wireless network connections in the event of a U.S. national emergency. Members of the WWABNCP network included: Operation "Silk Purse" for the Commander in Chief, U.S. European Command (USCINCEUR), based at RAF Mildenhall in the United Kingdom (callsign Seabell); Operation "Scope Light" for the Commander in Chief, U.S. Atlantic Command (CINCLANT), based at Langley AFB, VA; Operation "Blue Eagle" for the Commander in Chief, U.S. Pacific Command (USCINCPAC), based at Hickam AFB, HI; and Operation "Nightwatch", which supported the President of the United States and was based at Andrews AFB, Maryland. In the early 1970s the E-4A aircraft replaced the EC-135Js on this mission. The Eastern Auxiliary (East Aux) and Western Auxiliary (West Aux) Command Posts were also part of the WWABNCP ("wah-bin-cop") network and were capable of assuming responsibility for Looking Glass as the anchor. The West Aux mission, flown by the 906th Air Refueling Squadron, was based at Minot AFB, North Dakota, and moved to the 4th Airborne Command and Control Squadron at Ellsworth AFB, South Dakota, in April 1970; the East Aux mission, flown by the 301st Air Refueling Squadron, was based at Lockbourne AFB, Ohio, and moved to the 3rd Airborne Command & Control Squadron at Grissom AFB, Indiana, in April 1970. After 1975, the East Aux role was assumed by the Looking Glass backup ground alert aircraft launched from Offutt AFB. In June 1992, United States Strategic Command took over the Looking Glass mission from the Strategic Air Command, as SAC was disbanded and Strategic Command assumed the nuclear deterrence mission. Current status On October 1, 1998, the United States Navy fleet of E-6Bs replaced the EC-135C in performing the "Looking Glass" mission, previously carried out for 37 years by the U.S. Air Force. Unlike the original Looking Glass aircraft, the E-6Bs are modified Boeing 707 aircraft, not the military-only KC-135. The E-6B provides the National Command Authority with the same capability as the EC-135 fleet to control the nation's intercontinental ballistic missile (ICBM) force, nuclear-capable bombers and submarine-launched ballistic missiles (SLBM). With the assumption of this mission, a USSTRATCOM battle staff now flies with the TACAMO crew. If the USSTRATCOM Global Operations Center (GOC) is unable to function in its role, the E-6B Looking Glass can assume command of all U.S. nuclear-capable forces. Flying aboard each ABNCP is a crew of 22, which includes an air crew, a Communications Systems Officer and team, an Airborne Emergency Action Officer (an Admiral or General officer), a Mission Commander, a Strike Advisor, an Airborne Launch Control System/Intelligence Officer, a Meteorological Effects Officer, a Logistics Officer, a Force Status Controller, and an Emergency Actions NCO.
In addition to being able to direct the launch of ICBMs using the Airborne Launch Control System, the E-6B can communicate Emergency Action Messages (EAM) to nuclear submarines running at depth by extending a two-and-a-half-mile-long trailing wire antenna (TWA) for use with the Survivable Low Frequency Communications System (SLFCS), as the EC-135C could. There was some speculation that the "mystery plane" seen flying over the White House on September 11, 2001, was some newer incarnation of Looking Glass. However, the plane circling the White House on 9/11 was an E-4B (callsign ADDIS77/VENUS77) acting as the tertiary NAOC (Nightwatch) aircraft which was launched from ground alert at Andrews Air Force Base. See also TACAMO Boeing EC-135 Boeing E-4 Advanced Airborne Command Post ("Nightwatch") E-6 Mercury Airborne Launch Control System Airborne Launch Control Center Decapitation strike Letters of last resort Dead Hand (Perimeter) Continuity of Operations Plan Single Integrated Operational Plan Nuclear utilization target selection References External links The History of the PACCS USSTRATCOM ABNCP Fact Sheet Ghosts of the East Coast: Doomsday Ships Cold War museum United States command and control aircraft Disaster preparedness in the United States Nuclear warfare United States nuclear command and control 1961 establishments in the United States
Operation Looking Glass
[ "Chemistry" ]
1,769
[ "Radioactivity", "Nuclear warfare" ]
11,691,345
https://en.wikipedia.org/wiki/USA-87
USA-87, also known as GPS IIA-8, GPS II-17 and GPS SVN-29, was an American navigation satellite which formed part of the Global Positioning System. It was the eighth of nineteen Block IIA GPS satellites to be launched. Background Global Positioning System (GPS) was developed by the U.S. Department of Defense to provide all-weather round-the-clock navigation capabilities for military ground, sea, and air forces. Since its implementation, GPS has also become an integral asset in numerous civilian applications and industries around the globe, including recreational uses (e.g., boating, aircraft, hiking), corporate vehicle fleet tracking, and surveying. GPS employs 24 spacecraft in 20,200 km circular orbits inclined at 55.0°. These vehicles are placed in 6 orbit planes with four operational satellites in each plane. GPS Block II was the operational system, following the demonstration system composed of Block I (Navstar 1 - 11) spacecraft. These spacecraft were 3-axis stabilized, nadir pointing using reaction wheels. Dual solar arrays supplied 710 watts of power. They used S-band (SGLS) communications for control and telemetry and Ultra high frequency (UHF) cross-link between spacecraft. The payload consisted of two L-band navigation signals at 1575.42 MHz (L1) and 1227.60 MHz (L2). Each spacecraft carried 2 rubidium and 2 cesium clocks and nuclear detonation detection sensors. Built by Rockwell Space Systems for the U.S. Air Force, the spacecraft measured 5.3 m across with solar panels deployed and had a design life of 7.5 years. Launch USA-87 was launched at 22:16:00 UTC on 18 December 1992, atop a Delta II launch vehicle, flight number D217, flying in the 7925-9.5 configuration. The launch took place from Launch Complex 17B (LC-17B) at the Cape Canaveral Air Force Station (CCAFS), and placed USA-87 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37XFP apogee motor. Mission On 25 January 1993, USA-87 was in an orbit with a perigee of , an apogee of , a period of 720.00 minutes, and 54.74° of inclination to the equator. It had PRN 29, and operated in slot 5 of plane F of the GPS constellation. The satellite had a mass of . It had a design life of 7.5 years, and ceased operations on 23 October 2007. References Spacecraft launched in 1992 GPS satellites USA satellites
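As a rough cross-check of the orbital figures quoted above (a roughly 20,200 km circular orbit and a period of about 720 minutes), the sketch below applies Kepler's third law in Python; the constants and the function name are illustrative and not taken from the article.

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_378.137       # km, Earth's equatorial radius

def circular_period_minutes(altitude_km: float) -> float:
    """Orbital period of a circular orbit from Kepler's third law."""
    a = R_EARTH + altitude_km                        # semi-major axis in km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

# A GPS-style orbit at about 20,200 km altitude:
print(round(circular_period_minutes(20_200), 1))     # roughly 718-719 min, i.e. about 12 hours
```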
USA-87
[ "Technology" ]
542
[ "Global Positioning System", "GPS satellites" ]
11,691,353
https://en.wikipedia.org/wiki/Healthy%20Skepticism
Healthy Skepticism Inc is an international non-profit organisation whose main aim is to "improve health by reducing harm from misleading drug promotion". Healthy Skepticism was founded in 1983 with the name Medical Lobby for Appropriate Marketing (MaLAM). It was begun by an Australian medical student, Peter R. Mansfield, who had the idea for the organisation during his final year elective in Bangladesh in 1982. MaLAM initially focused on campaigning against questionable marketing practices in developing countries. These included the promotion of appetite stimulants, tonics and anabolic steroids to parents of malnourished children. MaLAM was modelled on Amnesty International and wrote open letters to the international headquarters of pharmaceutical companies questioning them about specific advertisements. MaLAM letters were signed by supporters around the world and contributed to many improvements in drug marketing, several products being removed from the market and many advertising claims changed. MaLAM reported more than 450 alleged violations of the voluntary code of conduct of the drug industry in 1987, and also questioned drug advertisements in Australia during 1993 to 1997 with funding from the Australian federal government. In 2001, the organisation's name was changed to Healthy Skepticism and its work widened to include research, education and advocacy about misleading drug promotion in all countries. It has concentrated recently on raising awareness amongst health professionals of the influence that marketing techniques have on their decisions, and the psychological factors which make them vulnerable to that influence. References Pharmaceutical industry Medical and health organisations based in Australia
Healthy Skepticism
[ "Chemistry", "Biology" ]
292
[ "Pharmaceutical industry", "Pharmacology", "Life sciences industry" ]
11,691,830
https://en.wikipedia.org/wiki/Washington%20Air%20Route%20Traffic%20Control%20Center
Washington Air Route Traffic Control Center (ZDC) is an Area Control Center operated by the Federal Aviation Administration and located at Lawson Rd SE, Leesburg, Virginia, United States. The primary responsibility of ZDC is the separation of airplane flights and the expedited sequencing of arrivals and departures along STARs (Standard Terminal Arrival Routes) and SIDs (Standard Instrument Departures) for the Washington-Baltimore Metropolitan Area, the New York Metropolitan Area, and Philadelphia among many other areas. Washington Center is the second busiest (after Atlanta) ARTCC in the United States. Between January 1, 2017 and December 31, 2017, Washington Center handled 2,554,410 aircraft operations. The Washington ARTCC covers of airspace that includes airports in Maryland, Pennsylvania, West Virginia, Delaware, New Jersey, Virginia, and North Carolina. Basic breakdown of sectors ZDC is divided into 8 areas, numbered 1 through 8, that make up 46 sectors. They are mainly broken down into low altitude, intermediate, high altitude and Super High Altitude sectors. There are 18 low sectors, 14 high sectors, 5 super high sectors and 4 various other type sectors, including 1 high/low altitude sector and 3 intermediate altitude sectors. AREA 1 Super-high sectors 72 Shenandoah SHD 127.925 MHz/269.375 MHz 07 Wahoo WAH 121.925 MHz/346.375 MHz During periods of traffic saturation in the Shenandoah Sector, ZDC will split the Shenandoah sector into two sectors, making Wahoo a second Super High Sector over the Roanoke and Lynchburg, Virginia areas. Shenandoah normally overlaps the Tech High and Gordonsville High Sectors above FL330. When traffic demand is high, Wahoo is activated to overlap the Gordonsville High Sector above FL330 covering J37 and J75 from GVE VOR south while Shenandoah will overlap Tech High Sector above FL 330 and cover traffic on J48 from CSN VOR south, J22 from MOL VOR south and J53 south of ASBUR intersection near LWB VOR. High sectors 32 Gordonsville GVE 133.725/351.900 52 Tech TEC 133.575/270.350 Intermediate sectors 60 Montebello MOL 121.675/284.700 Low sectors 31 Azalea AZA 135.400/263.100 AREA 2 High sectors 36 Raleigh RDU 118.925/322.450 Low sectors 26 Sampson SAM 135.300/285.500 27 Liberty LIB 135.200/348.650 28 Rocky Mount RMT 118.475/279.650 AREA 3 High sectors 37 Marlinton MAR 133.025/323.025 Low sectors 02 Casanova CSN 133.200/282.200 05 Linden LDN 133.550/322.550 22 South Boston SBV 124.050/352.000 29 NEW Valley VAL (formerly Hot Springs HSP)134.400/353.900 Sector 30 OLD Valley VAL has been changed to NEW 72 Shenandoah SHD Super Hi Sector in Area 1 above. Old Valley Sector merged into Sector 29 Hot Springs. 
AREA 4 Super-high sectors 42 Bryce BRY 118.025/226.675 High sectors 03 Moorefield MOR 133.275/371.900 04 Pinion PIN 133.975/307.025 Low sectors 01 Elkins EKN 128.600/387.100 06 Hagerstown HGR 134.150/227.125 15 Blue Ridge BLR 133.650/285.600 AREA 5 Super-high sectors 39 Snow Hill SWL 121.375/236.825 50 Yorktown YKT 120.75/317.725 High sectors 34 Norfolk ORF 133.825/327.800 54 Salisbury SBY 120.975/257.700 Low sectors 23 Cape Charles CCV 132.550/256.80 51 Casino CAS 127.700/285.400 53 Kenton ENO 132.050/354.150 AREA 6 Super-high sectors 09 Dixon DIW 118.825/360.650 High sectors 35 Wilmington ILM 124.025/269.150 38 Tar River TYI 132.225/354.100 Low sectors 24 Cofield CVI 123.850/323.000 25 New Bern EWN 135.500/281.425 33 Franklin FKN 132.025/290.425 AREA 7 High-altitude sectors 10 Bay BAY 132.275/379.300 12 Brooke BRV 126.875/327.000 16 Hopewell HPW 121.875/323.225 Intermediate sectors 11 Calvert CAL 133.900/281.400 20 Blackstone BKT 127.750/235.625 Low sectors 14 Irons IRS 132.950/351.800 21 Dominion DOM 118.750/377.100 AREA 8 High sectors 58 Coyle CYN 121.025/254.300 59 Sea Isle SIE 133.125/281.450 High-low sectors 19 Woodstown OOD 125.450/363.000 18 DuPont DQO 132.525/287.900 Low sectors 17 Swann SWN 134.500/360.700 References External links Washington ARTCC on IVAO Washington Air Route Traffic Control Center Washington Center Weather Service Unit (CWSU) (NWS/FAA) Air traffic control centers Air traffic control in the United States WAAS reference stations Aviation in Washington, D.C. Year of establishment missing Leesburg, Virginia
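The sector listing above follows a fixed pattern: sector number, sector name, identifier, then the paired VHF/UHF frequencies in MHz. The sketch below shows one way such entries could be held in code; the data structure and lookup helper are illustrative, and only a few entries are transcribed from the list above.

```python
from typing import NamedTuple, Optional

class Sector(NamedTuple):
    number: int
    name: str
    ident: str
    vhf_mhz: float    # VHF frequency in MHz
    uhf_mhz: float    # UHF frequency in MHz

# A few entries transcribed from the Area 1 and Area 4 listings above.
ZDC_SECTORS = [
    Sector(72, "Shenandoah", "SHD", 127.925, 269.375),
    Sector(32, "Gordonsville", "GVE", 133.725, 351.900),
    Sector(42, "Bryce", "BRY", 118.025, 226.675),
]

def find_by_ident(ident: str) -> Optional[Sector]:
    """Look a sector up by its identifier, e.g. 'GVE'."""
    return next((s for s in ZDC_SECTORS if s.ident == ident), None)

print(find_by_ident("GVE"))
```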
Washington Air Route Traffic Control Center
[ "Technology" ]
1,125
[ "Global Positioning System", "WAAS reference stations" ]
11,692,451
https://en.wikipedia.org/wiki/Centimorgan
In genetics, a centimorgan (abbreviated cM) or map unit (m.u.) is a unit for measuring genetic linkage. It is defined as the distance between chromosome positions (also termed loci or markers) for which the expected average number of intervening chromosomal crossovers in a single generation is 0.01. It is often used to infer distance along a chromosome. However, it is not a true physical distance. Relation to physical distance The number of base pairs to which it corresponds varies widely across the genome (different regions of a chromosome have different propensities towards crossover) and it also depends on whether the meiosis in which the crossing-over takes place is a part of oogenesis (formation of female gametes) or spermatogenesis (formation of male gametes). In humans one centimorgan corresponds to about 1 Mb (1,000,000 base pairs or nucleotides) on average. The relationship is only rough, as the physical chromosomal distance corresponding to one centimorgan varies from place to place in the genome, and also varies between males and females since recombination during gamete formation in females is significantly more frequent than in males. Kong et al. calculated that the female genome is 4460 cM long, while the male genome is only 2590 cM long. In contrast, in Plasmodium falciparum one centimorgan corresponds to about 15 kb; markers separated by 15,000 nucleotides have an expected rate of chromosomal crossovers of 0.01 per generation. Note that non-syntenic genes (genes residing on different chromosomes) are inherently unlinked, and cM distances are not applicable to them. Relation to the probability of recombination Because genetic recombination between two markers is detected only if there are an odd number of chromosomal crossovers between the two markers, the distance in centimorgans does not correspond exactly to the probability of genetic recombination. Under the Haldane mapping function, devised by J. B. S. Haldane, the number of chromosomal crossovers is assumed to follow a Poisson distribution, so a genetic distance of d centimorgans will lead to an odd number of chromosomal crossovers, and hence a detectable genetic recombination, with probability p = e^(−d/100)·sinh(d/100) = (1 − e^(−d/50))/2, where sinh is the hyperbolic sine function. The probability of recombination is approximately d/100 for small values of d and approaches 50% as d goes to infinity. The formula can be inverted, giving the distance in centimorgans as a function of the recombination probability p: d = 50 ln(1/(1 − 2p)) = −50 ln(1 − 2p). Shared centimorgans Genealogists often use "shared centimorgans" as a proxy for closeness of relationship in a family tree, with the expected shared amount roughly halving with each generational step. So if two individuals share on average: 7050 cM, they are identical twins (or actually the same person; 0 steps); 3525 cM, they are parent & child (1 step); 1762 cM, they are (very likely) grandparent & grandchild, or half siblings (2 steps); 881 cM, they are (likely) great grandparent & great grandchild, or half aunt/uncle and half niece/nephew (3 steps); 440 cM, they are (probably) great great grandparent & great great grandchild, or half great aunt/uncle & half great niece/nephew, or half cousins (4 steps); etc. The margin of error increases with each step, so that beyond about 4 steps the ranges overlap to such an extent as to make it difficult to establish how many steps are involved, and beyond about 7 steps any relationship at all is tenuous. The self/twin figure of 7050 cM corresponds to the sum of the cM lengths of human DNA for males and females.
When multiple genetic lines are inherited, they combine as root-sum-of-squares, so that full siblings share around 2493 cM or √2 times as much as half siblings. Because some recombinations result in unviable gametes, or offspring that cannot themselves reproduce, the observed genetic distances in families tend to be lower (shared cM higher) than predicted by models based purely on physical recombination rates. Etymology The centimorgan was named in honor of geneticist Thomas Hunt Morgan by J. B. S. Haldane. Its parent unit, the morgan, is rarely used today. See also Mutation rate References Further reading Units of measurement Genetics
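A minimal sketch, in Python, of the Haldane mapping function discussed above and its inverse; the function names are illustrative.

```python
import math

def recombination_fraction(d_cm: float) -> float:
    """Haldane map function: probability of detecting a recombination between
    two loci d_cm centimorgans apart (odd number of crossovers, Poisson model)."""
    return 0.5 * (1.0 - math.exp(-d_cm / 50.0))

def map_distance_cm(p: float) -> float:
    """Inverse Haldane function: map distance in cM from a recombination fraction p < 0.5."""
    return -50.0 * math.log(1.0 - 2.0 * p)

print(round(recombination_fraction(10.0), 4))   # ~0.0906, close to 10/100 for small distances
print(round(map_distance_cm(0.0906), 1))        # ~10.0 cM, recovering the input distance
```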
Centimorgan
[ "Mathematics", "Biology" ]
936
[ "Quantity", "Genetics", "Units of measurement" ]
11,693,063
https://en.wikipedia.org/wiki/Clitocybe%20nebularis
Clitocybe nebularis or Lepista nebularis, commonly known as the clouded agaric, cloudy clitocybe, or cloud funnel, is an abundant gilled fungus which appears both in conifer-dominated forests and broad-leaved woodland in Europe and North America. Appearing in Britain from mid to late autumn, it is edible, but may cause gastrointestinal issues. Taxonomy The species was first described and named as Agaricus nebularis in 1789 by August Johann Georg Karl Batsch. It was later placed in the genus Clitocybe in 1871 by Paul Kummer as Clitocybe nebularis. After much consideration by many mycologists over the years, during which it was placed for periods in both Lepista and Gymnopus, it was returned to Clitocybe, where it retains the specific epithet and the 1871 accreditation today. Clitocybe nebularis var. alba Bataille (1911) differs only in having a milk-white cap, and is very rare. Description The cap of the mushroom is 5–25 cm (2–8 in) in diameter, convex with an incurved margin, becoming plane to depressed in shape. Cap colours are generally greyish to light brownish-grey, and often covered in a whitish bloom when young. The surface of the cap is usually dry to moist, and radially fibrillose. The gills are pale, adnate to short-decurrent, close and usually forked. The stem measures long and 2–4 cm wide; it is stout, swollen towards the base, becomes hollow with age, and is easily broken. It is usually somewhat lighter than the cap. The flesh is white, and very thick. It usually has a foul-smelling odour, which has been described as slightly farinaceous to spicy, or rancid. The spores are yellow and elliptical. This species is host to the parasitic gilled mushroom Volvariella surrecta, which is found on older specimens. Edibility The species is edible but even a small portion can cause gastrointestinal disturbances for some people. Gallery Similar species The species may be confused with the poisonous Entoloma sinuatum in both Europe and North America, though this species has pink sinuate gills. It also resembles Leucopaxillus albissimus and Tricholoma saponaceum. Leucopaxillus giganteus is also similar in stature, but is whiter. Infundibulicybe geotropa has a pale brown cap. References nebularis Edible fungi Fungi of Europe Fungi of North America Fungi described in 1789 Taxa named by August Batsch Fungus species
Clitocybe nebularis
[ "Biology" ]
554
[ "Fungi", "Fungus species" ]
11,693,664
https://en.wikipedia.org/wiki/Photothermal%20therapy
Photothermal therapy (PTT) refers to efforts to use electromagnetic radiation (most often in infrared wavelengths) for the treatment of various medical conditions, including cancer. This therapy is an extension of photodynamic therapy, in which a photosensitizer is excited with specific band light. This activation brings the sensitizer to an excited state where it then releases vibrational energy (heat), which is what kills the targeted cells. Unlike photodynamic therapy, photothermal therapy does not require oxygen to interact with the target cells or tissues. Current studies also show that photothermal therapy is able to use longer wavelength light, which is less energetic and therefore less harmful to other cells and tissues. Nanoscale materials Most materials of interest currently being investigated for photothermal therapy are on the nanoscale. One of the key reasons behind this is the enhanced permeability and retention effect observed with particles in a certain size range (typically 20–300 nm). Particles in this range have been observed to preferentially accumulate in tumor tissue. When a tumor forms, it requires new blood vessels in order to fuel its growth; these new blood vessels in/near tumors have different properties as compared to regular blood vessels, such as poor lymphatic drainage and a disorganized, leaky vasculature. These factors lead to a significantly higher concentration of certain particles in a tumor as compared to the rest of the body. Gold NanoRods (AuNR) Huang et al. investigated the feasibility of using gold nanorods for both cancer cell imaging and photothermal therapy. The authors conjugated antibodies (anti-EGFR monoclonal antibodies) to the surface of gold nanorods, allowing the gold nanorods to bind specifically to certain malignant cancer cells (HSC and HOC malignant cells). After incubating the cells with the gold nanorods, an 800 nm Ti:sapphire laser was used to irradiate the cells at varying powers. The authors reported successful destruction of the malignant cancer cells, while nonmalignant cells were unharmed. When AuNRs are exposed to NIR light, the oscillating electromagnetic field of the light causes the free electrons of the AuNR to collectively coherently oscillate. Changing the size and shape of AuNRs changes the wavelength that gets absorbed. A desired wavelength would be between 700–1000 nm because biological tissue is optically transparent at these wavelengths. While all AuNPs are sensitive to changes in their shape and size, the properties of Au nanorods are extremely sensitive to any change in their length and width, and hence in their aspect ratio. When light is shone on a metal NP, the NP forms a dipole oscillation along the direction of the electric field. When the oscillation reaches its maximum, this frequency is called the surface plasmon resonance (SPR). AuNRs have two SPR spectrum bands: one in the NIR region, caused by the longitudinal oscillation, which tends to be stronger and at a longer wavelength, and one in the visible region, caused by the transverse electronic oscillation, which tends to be weaker and at a shorter wavelength. The SPR characteristics account for the increase in light absorption for the particle. As the AuNR aspect ratio increases, the absorption wavelength is redshifted and light scattering efficiency is increased.
The electrons excited by the NIR light lose energy quickly after absorption via electron-electron collisions, and as these electrons relax back down, the energy is released as phonons that then heat the environment of the AuNP, which in cancer treatments would be the cancerous cells. This process is observed when a continuous-wave laser is directed onto the AuNP. Pulsed laser beams generally result in melting of the AuNP or ablation of the particle. Continuous-wave lasers take minutes rather than the single pulse time of a pulsed laser, but they are able to heat larger areas at once. Gold Nanoshells Gold nanoshells, silica nanoparticles coated with a thin layer of gold, have been conjugated to antibodies (anti-HER2 or anti-IgG) via PEG linkers. After incubation of SKBr3 cancer cells with the gold nanoshells, an 820 nm laser was used to irradiate the cells. Only the cells incubated with the gold nanoshells conjugated with the specific antibody (anti-HER2) were damaged by the laser. Another category of gold nanoshells uses a gold layer on liposomes as a soft template. In this case, a drug can also be encapsulated inside and/or in the bilayer, and the release can be triggered by laser light. thermo Nano-Architectures (tNAs) The failure of clinical translation of nanoparticle-mediated PTT is mainly ascribed to concerns about their persistence in the body. Indeed, the optical response of anisotropic nanomaterials can be tuned in the NIR region by increasing their size to up to 150 nm. On the other hand, body excretion of non-biodegradable noble metal nanomaterials above 10 nm occurs through the hepatobiliary route in a slow and inefficient manner. A common approach to avoid metal persistence is to reduce the nanoparticle size below the threshold for renal clearance, i.e. ultrasmall nanoparticles (USNPs), while the maximum light-to-heat transduction occurs for nanoparticles below 5 nm. On the other hand, the surface plasmon of excretable gold USNPs is in the UV/visible region (far from the first biological window), severely limiting their potential application in PTT. Excretion of metals has been combined with NIR-triggered PTT by employing ultrasmall-in-nano architectures composed of metal USNPs embedded in biodegradable silica nanocapsules. tNAs are the first reported NIR-absorbing plasmonic ultrasmall-in-nano platforms that jointly combine: i) photothermal conversion efficacy suitable for hyperthermia, ii) multiple photothermal sequences and iii) renal excretion of the building blocks after the therapeutic action. Nowadays, the therapeutic effect of tNAs has been assessed on valuable 3D models of human pancreatic adenocarcinoma. Graphene and graphene oxide Graphene is viable for photothermal therapy. An 808 nm laser at a power density of 2 W/cm2 was used to irradiate the tumor sites on mice for 5 minutes. As noted by the authors, the power densities of lasers used to heat gold nanorods range from 2 to 4 W/cm2. Thus, these nanoscale graphene sheets require a laser power on the lower end of the range used with gold nanoparticles to photothermally ablate tumors. In 2012, Yang et al. incorporated the promising results regarding nanoscale reduced graphene oxide reported by Robinson et al. into another in vivo mouse study. The therapeutic treatment used in this study involved the use of nanoscale reduced graphene oxide sheets, nearly identical to the ones used by Robinson et al. (but without any active targeting sequences attached).
Nanoscale reduced graphene oxide sheets were successfully irradiated in order to completely destroy the targeted tumors. Most notably, the required power density of the 808 nm laser was reduced to 0.15 W/cm2, an order of magnitude lower than previously required power densities. This study demonstrates the higher efficacy of nanoscale reduced graphene oxide sheets as compared to both nanoscale graphene sheets and gold nanorods. Conjugated polymers (CPs) PTT utilizes photothermal transduction agents (PTAs), which can transform light energy to heat through the photothermal effect to raise the temperature of the tumor area and thus cause the ablation of tumor cells. Specifically, ideal PTAs should have high photothermal conversion efficiency (PCE), excellent optical stability and biocompatibility, and strong light absorption in the near-infrared (NIR) region (650-1350 nm) due to the deep-tissue penetration and minimal absorption of NIR light in biological tissues. PTAs mainly include inorganic materials and organic materials. Inorganic PTAs, such as noble metal materials, carbon-based nanomaterials, and other 2D materials, have high PCE and excellent photostability, but they are not biodegradable and thus have potential long-term toxicity in vivo. Organic PTAs, including small molecule dyes and conjugated polymers (CPs), have good biocompatibility and biodegradability, but poor photostability. Among them, small molecule dyes, such as cyanine, porphyrin, phthalocyanine, are limited in the field of cancer treatment because of their susceptibility to photobleaching and poor tumor enrichment ability. Conjugated polymers with large π−π conjugated skeletons and a high degree of electron delocalization show potential for PTT due to their strong NIR absorption, excellent photostability, low cytotoxicity, outstanding PCE, good dispersibility in aqueous medium, increased accumulation at the tumor site, and long blood circulation time. Moreover, conjugated polymers can be easily combined with other imaging agents and drugs to construct multifunctional nanomaterials for selective and synergistic cancer therapy. The CPs used for tumor PTT mainly include polyaniline (PANI), polypyrrole (PPy), polythiophene (PTh), polydopamine (PDA), donor−acceptor (D-A) conjugated polymers, and poly(3,4-ethylenedioxythiophene):poly(4-styrenesulfonate) (PEDOT:PSS). Photothermal conversion mechanism The nonradiative process for heat generation in organic PTAs is different from that of inorganic PTAs such as metals and semiconductors, which is related to surface plasmon resonance. Conjugated polymers are first activated to the excited state (S1) under light irradiation, and the excited state (S1) then decays back to the ground state (S0) via three processes: (I) emitting a photon (fluorescence), (II) intersystem crossing, and (III) nonradiative relaxation (heat generation). Because these three pathways of the S1 decaying back to the S0 are usually competitive in photosensitive materials, light emission and intersystem crossing must be efficiently reduced in order to increase the heat generation and improve the photothermal conversion efficiency. For conjugated polymers, on the one hand, their unique structures lead to closed stacking of the molecular sensitizers with highly frequent intermolecular collisions which can efficiently quench the fluorescence and intersystem crossing, and thus enhance the yield of nonradiative relaxation.
On the other hand, compared with monomeric phototherapeutic molecules, conjugated polymers possess higher stability in vivo against disassembly and photobleaching, longer blood circulation time, and more accumulation at the tumor site due to the enhanced permeability and retention (EPR) effect. Therefore, conjugated polymers have high photothermal conversion efficiency and a large amount of heat generation. One of the most widely used equations to calculate the photothermal conversion efficiency (η) of organic PTAs is as follows: η = (h·A·ΔTmax − Qs) / [I·(1 − 10^(−Aλ))], where h is the heat transfer coefficient, A is the container surface area, ΔTmax is the maximum temperature change of the solution, Aλ is the light absorbance, I is the laser power density, and Qs is the heat associated with the light absorbance of the solvent. Furthermore, various efficient methods, especially the donor-acceptor (D-A) strategy, have been designed to enhance the photothermal conversion efficiency and heat generation of conjugated polymers. The D-A assembly system in the conjugated polymers contributes to strong intermolecular electron transfer from the donor to the acceptor, thus bringing efficient quenching of fluorescence and intersystem crossing, and improved heat generation. In addition, the HOMO-LUMO gap of the D−A conjugated polymers can be easily tuned through changing the selection of electron donor (ED) and electron acceptor (EA) moieties, and thus D−A structured polymers with extremely low band gaps can be developed to improve the NIR absorption and photothermal conversion efficiency of CPs. Polyaniline (PANI) Polyaniline (PANI) is one of the earliest types of conjugated polymers reported for tumor PTT. Polypyrrole (PPy) Polypyrrole (PPy) is suited for PTT applications because of its strong NIR absorbance, large PCE, stability, and biocompatibility. In vivo experiments show that tumors treated with PPy NPs could be effectively eliminated under the irradiation of an 808 nm laser (1 W cm−2, 5 min). PPy nanosheets exhibit promising photothermal ablation ability toward cancer cells in the NIR II window for deep-tissue PTT. PPy nanoparticles and their derivative nanomaterials can also be combined with imaging contrast agents and diverse drugs to construct multifunctional theranostic applications in imaging-guided PTT and synergistic treatment, including fluorescent imaging, magnetic resonance imaging (MRI), photoacoustic imaging (PA), computed tomography (CT), photodynamic therapy (PDT), chemotherapy, etc. For example, PPy has been used to encapsulate ultrasmall iron oxide nanoparticles (IONPs) and finally develop IONP@PPy NPs for in vivo MR and PA imaging-guided PTT. Polypyrrole (I-PPy) nanocomposites have been investigated for CT imaging-guided tumor PTT. Polythiophene (PTh) Polythiophene (PTh) and its derivative-based polymers are another kind of conjugated polymer used for PTT. Polythiophene-based polymers usually exhibit excellent photostability, large light-harvesting ability, easy synthesis, and facile functionalization with different substituents. A conjugated copolymer (C3) with promising photothermal properties can be prepared by linking 2-N,N′-bis(2-(ethyl)hexyl)-perylene-3,4,9,10-tetra-carboxylic acid bis-imide to a thienylvinylene oligomer. C3 was coprecipitated with PEG-PCL and indocyanine green (ICG) to obtain PEG-PCL-C3-ICG nanoparticles for fluorescence-guided photothermal/photodynamic therapy against oral squamous cell carcinoma (OSCC).
A biodegradable PLGA-PEGylated DPPV (poly{2,2′-[(2,5-bis(2-hexyldecyl)-3,6-dioxo-2,3,5,6-tetrahydropyrrolo[3,4-c]-pyrrole-1,4-diyl)-dithiophene]-5,5′-diyl-alt-vinylene}) conjugated polymer has been reported for PA-guided PTT with a PCE of 71% (at 808 nm, 0.3 W cm−2). The vinylene bonds in the main chain improve the biodegradability, biocompatibility and photothermal conversion efficiency of CPs. Polydopamine (PDA) Dopamine is one of the neurotransmitters in the body that help cells send impulses. Polydopamine (PDA) is obtained through the self-aggregation of dopamine to form a melanin-like substance under mild alkaline conditions. PDA has strong NIR absorption, good photothermal stability, excellent biocompatibility and biodegradability, and high photothermal conversion efficiency. Furthermore, with its π-conjugated structure and different active groups, PDA can be easily combined with various materials to achieve multiple functions, such as fluorescence imaging, MRI, CT, PA, targeted therapy etc. In view of this, PDA and its composite nanomaterials have broad application prospects in the biomedical field. Dopamine-melanin colloidal nanospheres are efficient near-infrared photothermal therapeutic agents for in vivo cancer therapy. PDA can also be modified on the surface of other PTAs, such as gold nanorods and carbon-based materials, to enhance the photothermal stability and efficiency in vivo. For example, PDA-modified spiky gold nanoparticles (SGNP@PDAs) have been investigated for chemo-photothermal therapy. Donor−Acceptor (D−A) CPs Donor−acceptor (D−A) conjugated polymers have been investigated for medicinal purposes. Nano-PCPDTBT CPs have two moieties: 2-ethylhexyl cyclopentadithiophene and 2,1,3-benzothiadiazole. When the PCPDTBT nanoparticle solution (0.115 mg/mL) was exposed to an 808 nm NIR laser (0.6 W/cm2), the temperature could be increased by more than 30 °C. Wang et al. designed four NIR-absorbing D-A structured conjugated polymer dots (Pdots) containing diketopyrrolo-pyrrole (DPP) and thiophene units as effective photothermal materials with PCEs of up to 65% for in vivo cancer therapy. Zhang et al. constructed PBIBDF-BT D-A CPs by using an isoindigo derivative (BIBDF) and bithiophene (BT) as the EA and ED, respectively. PBIBDF-BT was further modified with poly(ethylene glycol)-block-poly(hexyl ethylene phosphate) (mPEG-b-PHEP) to obtain PBIBDF-BT@NP PPE with a PCE of 46.7% and high stability in physiological environments. Yang's group designed PBTPBF-BT CPs, in which the bis(5-oxothieno[3,2-b]pyrrole-6-ylidene)-benzodifurandione (BTPBF) and the 3,3′-didodecyl-2,2′-bithiophene (BT) units act as the EA and ED, respectively. The D-A CPs have a maximum absorption peak at 1107 nm and a relatively high photothermal conversion efficiency (66.4%). Pu et al. synthesized PC70BM-PCPDTBT D-A CPs via nanoprecipitation of the EA (6,6)-phenyl-C71-butyric acid methyl ester (PC70BM) and the ED PCPDTBT (SPs) for PA-guided PTT. Wang et al. developed D-A CPs TBDOPV-DT containing thiophene-fused benzodifurandione-based oligo(p-phenylenevinylene) (TBDOPV) as the EA unit and 2,2′-bithio-phene (DT) as the ED unit. TBDOPV-DT CPs have a strong absorption at 1093 nm and achieve highly efficient NIR-II photothermal conversion. PEDOT:PSS Poly(3,4-ethylenedioxythiophene):poly(4-styrenesulfonate) (PEDOT:PSS) is often used in organic electronics and has strong NIR absorption. In 2012, Liu's group first reported PEGylated PEDOT:PSS polymeric nanoparticles (PEDOT:PSS-PEG) for near-infrared photothermal therapy of cancer.
PEDOT:PSS-PEG nanoparticles have high stability in vivo and a long blood circulation half-life of 21.4 ± 3.1 h. The PTT in animals showed no appreciable side effects for the tested dose and an excellent therapeutic efficacy under 808 nm laser irradiation. Kang et al. synthesized magneto-conjugated polymer core−shell MNP@PEDOT:PSS nanoparticles for multimodal imaging-guided PTT. Furthermore, PEDOT:PSS NPs can serve not only as PTAs but also as drug carriers to load various types of drugs, such as SN38, the chemotherapy drug DOX and the photodynamic agent chlorin e6 (Ce6), thus achieving synergistic cancer therapy. See also Photomedicine Light Therapy Hyperthermia therapy Experimental cancer treatment References Further reading Medical physics Photochemistry Medical treatments Light therapy Experimental cancer treatments Oncothermia
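The photothermal conversion efficiency equation quoted in the article above can be evaluated directly once the relevant quantities have been measured. The sketch below is a minimal Python rendering of that equation; the numerical values are invented purely for illustration, and the units must be chosen consistently.

```python
def photothermal_conversion_efficiency(h, A, dT_max, Q_s, I, A_lambda):
    """eta = (h*A*dT_max - Q_s) / (I * (1 - 10**(-A_lambda))), as quoted above."""
    return (h * A * dT_max - Q_s) / (I * (1.0 - 10.0 ** (-A_lambda)))

# Purely illustrative numbers, not taken from any cited experiment:
eta = photothermal_conversion_efficiency(
    h=20.0,        # heat transfer coefficient (W m^-2 K^-1)
    A=1.0e-4,      # surface area of the sample container (m^2)
    dT_max=25.0,   # maximum temperature rise of the solution (K)
    Q_s=5.0e-3,    # heat from solvent absorption alone (W)
    I=0.5,         # laser power term; units must match h*A*dT_max (here W)
    A_lambda=1.0,  # absorbance at the laser wavelength
)
print(f"eta = {eta:.2f}")   # 0.10 in this made-up example, i.e. about 10%
```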
Photothermal therapy
[ "Physics", "Chemistry" ]
4,462
[ "nan", "Applied and interdisciplinary physics", "Medical physics" ]
2,140,720
https://en.wikipedia.org/wiki/Domain-specific%20modeling
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, such as computer software. It involves systematic use of a domain-specific language to represent the various facets of a system. Domain-specific modeling languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system. Overview Domain-specific modeling often also includes the idea of code generation: automating the creation of executable source code directly from the domain-specific models. Being free from the manual creation and maintenance of source code means domain-specific modeling can significantly improve developer productivity. The reliability of automatic generation compared to manual coding will also reduce the number of defects in the resulting programs, thus improving quality. Domain-specific modeling differs from earlier code generation attempts in the CASE tools of the 1980s or UML tools of the 1990s. In both of these, the code generators and modeling languages were built by tool vendors. While it is possible for a tool vendor to create a domain-specific language and generators, it is more normal for domain-specific modeling to occur within one organization. One or a few expert developers create the modeling language and generators, and the rest of the developers use them. Having the modeling language and generator built by the organization that will use them allows a tight fit with their exact domain and a quick response to changes in the domain. Domain-specific languages can usually cover a range of abstraction levels for a particular domain. For example, a domain-specific modeling language for mobile phones could allow users to specify high-level abstractions for the user interface, as well as lower-level abstractions for storing data such as phone numbers or settings. Likewise, a domain-specific modeling language for financial services could permit users to specify high-level abstractions for clients, as well as lower-level abstractions for implementing stock and bond trading algorithms. Topics Defining domain-specific languages To define a language, one needs a language to write the definition in. The language of a model is often called a metamodel, hence the language for defining a modeling language is a meta-metamodel. Meta-metamodels can be divided into two groups: those that are derived from or customizations of existing languages, and those that have been developed specifically as meta-metamodels. Derived meta-metamodels include entity–relationship diagrams, formal languages, extended Backus–Naur form (EBNF), ontology languages, XML schema, and Meta-Object Facility (MOF). The strengths of these languages tend to be in the familiarity and standardization of the original language. The ethos of domain-specific modeling favors the creation of a new language for a specific task, and so there are unsurprisingly new languages designed as meta-metamodels. The most widely used family of such languages is that of OPRR, GOPRR, and GOPPRR, which focus on supporting things found in modeling languages with the minimum effort. Tool support for domain-specific languages Many general-purpose modeling languages already have tool support available in the form of CASE tools. Domain-specific modeling languages tend to have too small a market size to support the construction of a bespoke CASE tool from scratch.
Instead, most tool support for domain-specific modeling languages is built on existing domain-specific modeling frameworks or through domain-specific modeling environments. A domain-specific modeling environment may be thought of as a metamodeling tool, i.e., a modeling tool used to define a modeling tool or CASE tool. The resulting tool may either work within the domain-specific modeling environment, or less commonly be produced as a separate stand-alone program. In the more common case, the domain-specific modeling environment supports an additional layer of abstraction when compared to a traditional CASE tool. Using a domain-specific modeling environment can significantly lower the cost of obtaining tool support for a domain-specific modeling language, since a well-designed environment will automate the creation of program parts that are costly to build from scratch, such as domain-specific editors, browsers and components. The domain expert only needs to specify the domain-specific constructs and rules, and the domain-specific modeling environment provides a modeling tool tailored for the target domain. Most existing domain-specific modeling takes place with domain-specific modeling environments, either commercial, such as MetaEdit+ or Actifsource, open source, such as GEMS, or academic, such as GME. The increasing popularity of domain-specific modeling has led to domain-specific modeling frameworks being added to existing IDEs, e.g. the Eclipse Modeling Project (EMP) with EMF and GMF, or Microsoft's DSL Tools for Software Factories. Domain-specific modeling and UML The Unified Modeling Language (UML) is a general-purpose modeling language for software-intensive systems that is designed to support mostly object-oriented programming. Consequently, in contrast to domain-specific modeling languages, UML is used for a wide variety of purposes across a broad range of domains. The primitives offered by UML are those of object-oriented programming, while domain-specific languages offer primitives whose semantics are familiar to all practitioners in that domain. For example, in the domain of automotive engineering, there will be software models to represent the properties of an anti-lock braking system, or a steering wheel, etc. UML includes a profile mechanism that allows it to be constrained and customized for specific domains and platforms. UML profiles use stereotypes, stereotype attributes (known as tagged values before UML 2.0), and constraints to restrict and extend the scope of UML to a particular domain. Perhaps the best-known example of customizing UML for a specific domain is SysML, a domain-specific language for systems engineering. UML is a popular choice for various model-driven development approaches whereby technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model. For instance, application profiles of the legal document standard Akoma Ntoso can be developed by representing legal concepts and ontologies in UML class objects.
See also Computer-aided software engineering Domain-driven design Domain-specific language Framework-specific modeling language General-purpose modeling Domain-specific multimodeling Model-driven engineering Model-driven architecture Software factories Discipline-Specific Modeling References External links Domain-specific modeling for generative software development , Web-article by Martijn Iseger, 2010 Domain Specific Modeling in IoC frameworks Web-article by Ke Jin, 2007 Domain-Specific Modeling for Full Code Generation from Methods & Tools Web-article by Juha-Pekka Tolvanen, 2005 Creating a Domain-Specific Modeling Language for an Existing Framework Web-article by Juha-Pekka Tolvanen, 2006 Programming language topics Simulation programming languages
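As a rough illustration of the code-generation idea described in this article, the sketch below expresses a tiny, made-up domain-specific model for a mobile-phone settings screen as plain data and generates source text from it; the model format, field names, and generated output are hypothetical and not tied to any particular DSM tool.

```python
# A made-up, minimal domain-specific model: a declarative description of a
# settings screen, written in domain terms rather than as UI code.
SETTINGS_MODEL = {
    "screen": "Settings",
    "fields": [
        {"name": "ringtone", "type": "choice", "options": ["Classic", "Silent"]},
        {"name": "phone_number", "type": "text"},
    ],
}

def generate_ui_stub(model: dict) -> str:
    """Generate (hypothetical) user-interface source text from the domain model."""
    lines = [f"screen {model['screen']} {{"]
    for field in model["fields"]:
        if field["type"] == "choice":
            options = ", ".join(field["options"])
            lines.append(f"    dropdown {field['name']} [{options}]")
        else:
            lines.append(f"    textbox {field['name']}")
    lines.append("}")
    return "\n".join(lines)

print(generate_ui_stub(SETTINGS_MODEL))
```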
Domain-specific modeling
[ "Engineering" ]
1,410
[ "Software engineering", "Programming language topics" ]
2,140,750
https://en.wikipedia.org/wiki/Sebkha%20de%20Ndrhamcha
The Sebkha de Ndrhamcha () is a large salt pan in Mauritania that is about in diameter and the lowest point in Mauritania. The Atlantic Ocean borders it to the west, and the Sahara Desert lies directly to its east. Landforms of Mauritania Salt flats Lowest points of countries
Sebkha de Ndrhamcha
[ "Chemistry" ]
67
[ "Salt flats", "Salts" ]
2,140,843
https://en.wikipedia.org/wiki/Tissue%20microarray
Tissue microarrays (also TMAs) consist of paraffin blocks in which up to 1000 separate tissue cores are assembled in array fashion to allow multiplex histological analysis. History The major limitations in molecular clinical analysis of tissues include the cumbersome nature of procedures, limited availability of diagnostic reagents and limited patient sample size. The technique of tissue microarray was developed to address these issues. Multi-tissue blocks were first introduced by H. Battifora in 1986 with his so-called “multitumor (sausage) tissue block" and modified in 1990 with its improvement, "the checkerboard tissue block" . In 1998, J. Kononen and collaborators developed the current technique, which uses a novel sampling approach to produce tissues of regular size and shape that can be more densely and precisely arrayed. Procedure In the tissue microarray technique, a hollow needle is used to remove tissue cores as small as 0.6 mm in diameter from regions of interest in paraffin-embedded tissues such as clinical biopsies or tumor samples. These tissue cores are then inserted in a recipient paraffin block in a precisely spaced, array pattern. Sections from this block are cut using a microtome, mounted on a microscope slide and then analyzed by any method of standard histological analysis. Each microarray block can be cut into 100 – 500 sections, which can be subjected to independent tests. Tests commonly employed in tissue microarray include immunohistochemistry, and fluorescent in situ hybridization. Tissue microarrays are particularly useful in analysis of cancer samples. One variation is a Frozen tissue array. Use in research The use of tissue microarrays in combination with immunohistochemistry has been a preferred method to study and validate cancer biomarkers in various defined cancer patient cohorts. The possibility to assemble a large number of representative cancer samples from a defined patient cohort that also has a corresponding clinical database, provides a powerful resource to study how different protein expression patterns correlate with different clinical parameters. Since patient samples are assembled into the same block, sections can be stained with the same protocol to avoid experimental variability and technical artefacts. Clinical cancer patient cohorts and corresponding tissue microarray sets have been used to study diagnostic, prognostic and treatment predictive cancer biomarkers in most forms of cancer, including lung, breast, colorectal and renal cell cancer. Immunohistochemistry combined with tissue microarrays has also been used with success in large scale efforts to create a map of protein expression on a more global scale. See also Cytomics References Battifora H: The multitumor (sausage) tissue block: novel method for immunohistochemical antibody testing. Lab Invest 1986, 55:244-248. Battifora H, Mehta P: The checkerboard tissue block. An improved multitissue control block. Lab Invest 1990, 63:722-724. Kononen J, Bubendorf L, Kallioniemi A, Barlund M, Schraml P, Leighton S, Torhorst J, Mihatsch MJ, Sauter G, Kallioniemi OP: Tissue microarrays for high-throughput molecular profiling of tumor specimens. Nat Med 1998, 4:844-847. External links National Cancer Institute Tissue Array Research Program Tissues (biology) Microarrays
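To make the "precisely spaced, array pattern" described above concrete, the sketch below computes centre coordinates for cores laid out on a regular grid; the 0.6 mm core diameter comes from the article, while the gap, grid size, and function name are illustrative assumptions.

```python
def core_positions(rows: int, cols: int, core_diameter_mm: float = 0.6, gap_mm: float = 0.2):
    """Centre coordinates (x, y) in mm for tissue cores arranged in a regular grid.
    The 0.6 mm diameter is from the article text; the 0.2 mm gap is an assumption."""
    pitch = core_diameter_mm + gap_mm          # centre-to-centre spacing
    return [(col * pitch, row * pitch) for row in range(rows) for col in range(cols)]

positions = core_positions(rows=20, cols=50)    # 1,000 cores, the upper figure quoted above
print(len(positions), positions[:3])
```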
Tissue microarray
[ "Chemistry", "Materials_science", "Biology" ]
711
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
2,140,955
https://en.wikipedia.org/wiki/ENCODE
The Encyclopedia of DNA Elements (ENCODE) is a public research project which aims "to build a comprehensive parts list of functional elements in the human genome." ENCODE also supports further biomedical research by "generating community resources of genomics data, software, tools and methods for genomics data analysis, and products resulting from data analyses and interpretations." The current phase of ENCODE (2016-2019) is adding depth to its resources by growing the number of cell types, data types, assays and now includes support for examination of the mouse genome. History ENCODE was launched by the US National Human Genome Research Institute (NHGRI) in September 2003. Intended as a follow-up to the Human Genome Project, the ENCODE project aims to identify all functional elements in the human genome. The project involves a worldwide consortium of research groups, and data generated from this project can be accessed through public databases. The initial release of ENCODE was in 2013 and since has been changing according to the recommendations of consortium members and the wider community of scientists who use the Portal to access ENCODE data. The two-part goal for ENCODE is to serve as a publicly accessible data base for "experimental protocols, analytical procedures and the data themselves," and "the same interface should serve carefully curated metadata that record the provenance of the data and justify its interpretation in biological terms." The project began its fourth phase (ENCODE 4) in February 2017. Motivation and Significance Humans are estimated to have approximately 20,000 protein-coding genes, which account for about 1.5% of DNA in the human genome. The primary goal of the ENCODE project is to determine the role of the remaining component of the genome, much of which was traditionally regarded as "junk". The activity and expression of protein-coding genes can be modulated by the regulome - a variety of DNA elements, such as promoters, transcriptional regulatory sequences, and regions of chromatin structure and histone modification. It is thought that changes in the regulation of gene activity can disrupt protein production and cell processes and result in disease. Determining the location of these regulatory elements and how they influence gene transcription could reveal links between variations in the expression of certain genes and the development of disease. ENCODE is also intended as a comprehensive resource to allow the scientific community to better understand how the genome can affect human health, and to "stimulate the development of new therapies to prevent and treat these diseases". The ENCODE Consortium The ENCODE Consortium is composed primarily of scientists who were funded by US National Human Genome Research Institute (NHGRI). Other participants contributing to the project are brought up into the Consortium or Analysis Working Group. The pilot phase consisted of eight research groups and twelve groups participating in the ENCODE Technology Development Phase. After 2007, the number of participants expanded to 440 scientists based in 32 laboratories worldwide as the pilot phase was officially over. At the moment the consortium consists of different centers which perform different tasks. ENCODE is a member of the International Human Epigenome Consortium (IHEC). NHGRI's main requirement for the products from ENCODE-funded research is to be shared in a free, highly accessible manner to all researchers to promote genomic research. 
ENCODE research allows the reproducibility and thus transparency of the software, methods, data, and other tools related to the genomic analysis. The ENCODE Project ENCODE has been implemented in four phases: the pilot phase and the technology development phase, which were initiated simultaneously; the production phase; and a fourth phase. The fourth phase is a continuation of the third, and includes functional characterization and further integrative analysis for the encyclopedia. The goal of the pilot phase was to identify a set of procedures that, in combination, could be applied cost-effectively and at high-throughput to accurately and comprehensively characterize large regions of the human genome. The pilot phase was intended to reveal gaps in the current set of tools for detecting functional sequences, and to show whether some methods in use at the time were inefficient or unsuitable for large-scale utilization. Some of these problems were to be addressed in the ENCODE technology development phase, which aimed to devise new laboratory and computational methods that would improve our ability to identify known functional sequences or to discover new functional genomic elements. The results of the first two phases determined the best path forward for analyzing the remaining 99% of the human genome in a cost-effective and comprehensive production phase. The ENCODE Phase I Project: The Pilot Project The pilot phase tested and compared existing methods to rigorously analyze a defined portion of the human genome sequence. It was organized as an open consortium and brought together investigators with diverse backgrounds and expertise to evaluate the relative merits of each of a diverse set of techniques, technologies and strategies. The concurrent technology development phase of the project aimed to develop new high-throughput methods to identify functional elements. The goal of these efforts was to identify a suite of approaches that would allow the comprehensive identification of all the functional elements in the human genome. Through the ENCODE pilot project, the National Human Genome Research Institute (NHGRI) assessed the abilities of different approaches to be scaled up for an effort to analyze the entire human genome and to find gaps in the ability to identify functional elements in genomic sequence. The ENCODE pilot project process involved close interactions between computational and experimental scientists to evaluate a number of methods for annotating the human genome. A set of regions representing approximately 1% (30 Mb) of the human genome was selected as the target for the pilot project and was analyzed by all ENCODE pilot project investigators. All data generated by ENCODE participants on these regions was rapidly released into public databases. Target Selection For use in the ENCODE pilot project, defined regions of the human genome - corresponding to 30Mb, roughly 1% of the total human genome - were selected. These regions served as the foundation on which to test and evaluate the effectiveness and efficiency of a diverse set of methods and technologies for finding various functional elements in human DNA. Prior to embarking upon the target selection, it was decided that 50% of the 30Mb of sequence would be selected manually while the remaining sequence would be selected randomly. 
The two main criteria for manually selected regions were: 1) the presence of well-studied genes or other known sequence elements, and 2) the existence of a substantial amount of comparative sequence data. A total of 14.82Mb of sequence was manually selected using this approach, consisting of 14 targets that range in size from 500kb to 2Mb. The remaining 50% of the 30Mb of sequence was composed of thirty 500kb regions selected according to a stratified random-sampling strategy based on gene density and level of non-exonic conservation. The decision to use these particular criteria was made in order to ensure a good sampling of genomic regions varying widely in their content of genes and other functional elements. The human genome was divided into three parts - top 20%, middle 30%, and bottom 50% - along each of two axes: 1) gene density and 2) level of non-exonic conservation with respect to the orthologous mouse genomic sequence (see below), for a total of nine strata. From each stratum, three random regions were chosen for the pilot project. For those strata underrepresented by the manual picks, a fourth region was chosen, resulting in a total of 30 regions. For all strata, a "backup" region was designated for use in the event of unforeseen technical problems. In greater detail, the stratification criteria were as follows: Gene density: The gene density score of a region was the percentage of bases covered either by genes in the Ensembl database, or by human mRNA best BLAT (BLAST-like alignment tool) alignments in the UCSC Genome Browser database. Non-exonic conservation: The region was divided into non-overlapping subwindows of 125 bases. Subwindows that showed less than 75% base alignment with mouse sequence were discarded. For the remaining subwindows, the percentage with at least 80% base identity to mouse, and which did not correspond to Ensembl genes, GenBank mRNA BLASTZ alignments, Fgenesh++ gene predictions, TwinScan gene predictions, spliced EST alignments, or repeated sequences (DNA), was used as the non-exonic conservation score. The above scores were computed within non-overlapping 500 kb windows of finished sequence across the genome, and used to assign each window to a stratum. Pilot Phase Results The pilot phase was successfully finished and the results were published in June 2007 in Nature and in a special issue of Genome Research; the results published in the first paper mentioned advanced the collective knowledge about human genome function in several major areas, included in the following highlights: The human genome is pervasively transcribed, such that the majority of its bases are associated with at least one primary transcript and many transcripts link distal regions to established protein-coding loci. Many novel non-protein-coding transcripts have been identified, with many of these overlapping protein-coding loci and others located in regions of the genome previously thought to be transcriptionally silent. Numerous previously unrecognized transcription start sites have been identified, many of which show chromatin structure and sequence-specific protein-binding properties similar to well-understood promoters. Regulatory sequences that surround transcription start sites are symmetrically distributed, with no bias towards upstream regions. Chromatin accessibility and histone modification patterns are highly predictive of both the presence and activity of transcription start sites. 
Distal DNaseI hypersensitive sites have characteristic histone modification patterns that reliably distinguish them from promoters; some of these distal sites show marks consistent with insulator function. DNA replication timing is correlated with chromatin structure. A total of 5% of the bases in the genome can be confidently identified as being under evolutionary constraint in mammals; for approximately 60% of these constrained bases, there is evidence of function on the basis of the results of the experimental assays performed to date. Although there is general overlap between genomic regions identified as functional by experimental assays and those under evolutionary constraint, not all bases within these experimentally defined regions show evidence of constraint. Different functional elements vary greatly in their sequence variability across the human population and in their likelihood of residing within a structurally variable region of the genome. Surprisingly, many functional elements are seemingly unconstrained across mammalian evolution. This suggests the possibility of a large pool of neutral elements that are biochemically active but provide no specific benefit to the organism. This pool may serve as a 'warehouse' for natural selection, potentially acting as the source of lineage-specific elements and functionally conserved but non-orthologous elements between species. The ENCODE Phase II Project: The Production Phase Project In September 2007, National Human Genome Research Institute (NHGRI) began funding the production phase of the ENCODE project. In this phase, the goal was to analyze the entire genome and to conduct "additional pilot-scale studies". As in the pilot project, the production effort is organized as an open consortium. In October 2007, NHGRI awarded grants totaling more than $80 million over four years. The production phase also includes a Data Coordination Center, a Data Analysis Center, and a Technology Development Effort. At that time the project evolved into a truly global enterprise, involving 440 scientists from 32 laboratories worldwide. Once the pilot phase was completed, the project "scaled up" in 2007, profiting immensely from new-generation sequencing machines. And the data was, indeed, big; researchers generated around 15 terabytes of raw data. By 2010, over 1,000 genome-wide data sets had been produced by the ENCODE project. Taken together, these data sets show which regions are transcribed into RNA, which regions are likely to control the genes that are used in a particular type of cell, and which regions are associated with a wide variety of proteins. The primary assays used in ENCODE are ChIP-seq, DNase I Hypersensitivity, RNA-seq, and assays of DNA methylation. Production Phase Results In September 2012, the project released a much more extensive set of results, in 30 papers published simultaneously in several journals, including six in Nature, six in Genome Biology and a special issue with 18 publications of Genome Research. The authors described the production and the initial analysis of 1,640 data sets designed to annotate functional elements in the entire human genome, integrating results from diverse experiments within cell types, related experiments involving 147 different cell types, and all ENCODE data with other resources, such as candidate regions from genome-wide association studies (GWAS) and evolutionary constrained regions. 
Together, these efforts revealed important features about the organization and function of the human genome, which were summarized in an overview paper as follows: The vast majority (80.4%) of the human genome participates in at least one biochemical RNA and/or chromatin associated event in at least one cell type. Much of the genome lies close to a regulatory event: 95% of the genome lies within 8kb of a DNA-protein interaction (as assayed by bound ChIP-seq motifs or DNaseI footprints), and 99% is within 1.7kb of at least one of the biochemical events measured by ENCODE. Primate-specific elements as well as elements without detectable mammalian constraint show, in aggregate, evidence of negative selection; thus some of them are expected to be functional. Classifying the genome into seven chromatin states suggests an initial set of 399,124 regions with enhancer-like features and 70,292 regions with promoter-like features, as well as hundreds of thousands of quiescent regions. High-resolution analyses further subdivide the genome into thousands of narrow states with distinct functional properties. It is possible to quantitatively correlate RNA sequence production and processing with both chromatin marks and transcription factor (TF) binding at promoters, indicating that promoter functionality can explain the majority of RNA expression variation. Many non-coding variants in individual genome sequences lie in ENCODE-annotated functional regions; this number is at least as large as those that lie in protein coding genes. SNPs associated with disease by GWAS are enriched within non-coding functional elements, with a majority residing in or near ENCODE-defined regions that are outside of protein coding genes. In many cases, the disease phenotypes can be associated with a specific cell type or TF. The most striking finding was that the fraction of human DNA that is biologically active is considerably higher than even the most optimistic previous estimates. In an overview paper, the ENCODE Consortium reported that its members were able to assign biochemical functions to over 80% of the genome. Much of this was found to be involved in controlling the expression levels of coding DNA, which makes up less than 1% of the genome. The most important new elements of the "encyclopedia" include: A comprehensive map of DNase 1 hypersensitive sites, which are markers for regulatory DNA that is typically located adjacent to genes and allows chemical factors to influence their expression. The map identified nearly 3 million sites of this type, including nearly all that were previously known and many that are novel. A lexicon of short DNA sequences that form recognition motifs for DNA-binding proteins. Approximately 8.4 million such sequences were found, comprising a fraction of the total DNA roughly twice the size of the exome. Thousands of transcription promoters were found to make use of a single stereotyped 50-base-pair footprint. A preliminary sketch of the architecture of the network of human transcription factors, that is, factors that bind to DNA in order to promote or inhibit the expression of genes. The network was found to be quite complex, with factors that operate at different levels as well as numerous feedback loops of various types. A measurement of the fraction of the human genome that is capable of being transcribed into RNA. This fraction was estimated to add up to more than 75% of the total DNA, a much higher value than previous estimates. 
The project also began to characterize the types of RNA transcripts that are generated at various locations. Data Management and Analysis Capturing, storing, integrating, and displaying the diverse data generated is challenging. The ENCODE Data Coordination Center (DCC) organizes and displays the data generated by the labs in the consortium, and ensures that the data meets specific quality standards when it is released to the public. Before a lab submits any data, the DCC and the lab draft a data agreement that defines the experimental parameters and associated metadata. The DCC validates incoming data to ensure consistency with the agreement. It also ensures that all data is annotated using appropriate ontologies. It then loads the data onto a test server for preliminary inspection, and coordinates with the labs to organize the data into a consistent set of tracks. When the tracks are ready, the DCC Quality Assurance team performs a series of integrity checks, verifies that the data is presented in a manner consistent with other browser data, and perhaps most importantly, verifies that the metadata and accompanying descriptive text are presented in a way that is useful to users. The data is released on the public UCSC Genome Browser website only after all of these checks have been satisfied. In parallel, data is analyzed by the ENCODE Data Analysis Center, a consortium of analysis teams from the various production labs plus other researchers. These teams develop standardized protocols to analyze data from novel assays, determine best practices, and produce a consistent set of analytic methods such as standardized peak callers and signal generation from alignment pile-ups. The National Human Genome Research Institute (NHGRI) has identified ENCODE as a "community resource project". This important concept was defined at an international meeting held in Ft. Lauderdale in January 2003 as a research project specifically devised and implemented to create a set of data, reagents, or other material whose primary utility will be as a resource for the broad scientific community. Accordingly, the ENCODE data release policy stipulates that data, once verified, will be deposited into public databases and made available for all to use without restriction. Other projects With the continuation of the third phase, the ENCODE Consortium has become involved with additional projects whose goals run parallel to the ENCODE project. Some of these projects were part of the second phase of ENCODE. modENCODE project The MODel organism ENCyclopedia Of DNA Elements (modENCODE) project is a continuation of the original ENCODE project targeting the identification of functional elements in selected model organism genomes, specifically Drosophila melanogaster and Caenorhabditis elegans. The extension to model organisms permits biological validation of the computational and experimental findings of the ENCODE project, something that is difficult or impossible to do in humans. Funding for the modENCODE project was announced by the National Institutes of Health (NIH) in 2007 and included several different research institutions in the US. The project completed its work in 2012. In late 2010, the modENCODE consortium unveiled its first set of results with publications on annotation and integrative analysis of the worm and fly genomes in Science. Data from these publications is available from the modENCODE web site. 
modENCODE was run as a Research Network and the consortium was formed by 11 primary projects, divided between worm and fly. The projects spanned the following: Gene structure mRNA and ncRNA expression profiling Transcription factor binding sites Histone modifications and replacement Chromatin structure DNA replication initiation and timing Copy number variation. modERN modERN, short for the model organism encyclopedia of regulatory networks, branched from the modENCODE project. The project has merged the C. elegans and Drosophila groups and focuses on the identification of additional transcription factor binding sites of the respective organisms. The project began at the same time as Phase III of ENCODE, and plans to end in 2017. To date, the project has released 198 experiments, with around 500 other experiments submitted and currently being processed by the DCC. Genomics of Gene Regulation In early 2015, the NIH launched the Genomics of Gene Regulation (GGR) program. The goal of the program, which will last for three years, is to study gene networks and pathways in different systems of the body, with the hope of furthering understanding of the mechanisms controlling gene expression. Although the ENCODE project is separate from GGR, the ENCODE DCC has been hosting GGR data in the ENCODE portal. Roadmap In 2008, NIH began the Roadmap Epigenomics Mapping Consortium, whose goal was to produce "a public resource of human epigenomic data to catalyze basic biology and disease-oriented research". In February 2015, the consortium released an article titled "Integrative analysis of 111 reference human epigenomes" that fulfilled the consortium's goal. The consortium integrated information and annotated regulatory elements across 127 reference epigenomes, 16 of which were part of the ENCODE project. Data for the Roadmap project can either be found in the Roadmap portal or ENCODE portal. fruitENCODE project The fruitENCODE: an encyclopedia of DNA elements for fruit ripening is a plant ENCODE project that aims to generate DNA methylation, histone modification, DHS, gene expression, and transcription factor binding datasets for all fleshy fruit species at different developmental stages. Prerelease data can be found in the fruitENCODE portal. Criticism of the project Although the consortium claims they are far from finished with the ENCODE project, many reactions to the published papers and the news coverage that accompanied the release were favorable. The Nature editors and ENCODE authors "... collaborated over many months to make the biggest splash possible and capture the attention of not only the research community but also of the public at large". The ENCODE project's claim that 80% of the human genome has biochemical function was rapidly picked up by the popular press who described the results of the project as leading to the death of junk DNA. However the conclusion that most of the genome is "functional" has been criticized on the grounds that the ENCODE project used a liberal definition of "functional", namely that anything that is transcribed must be functional. This conclusion was arrived at despite the widely accepted view, based on genomic conservation estimates from comparative genomics, that many DNA elements such as pseudogenes that are transcribed are nevertheless non-functional. Furthermore, the ENCODE project has emphasized sensitivity over specificity leading possibly to the detection of many false positives. 
The somewhat arbitrary choice of cell lines and transcription factors, as well as the lack of appropriate control experiments, were additional major criticisms of ENCODE, since random DNA mimics ENCODE-like 'functional' behavior. In response to some of the criticisms, other scientists argued that the widespread transcription and splicing that are observed in the human genome directly by biochemical testing are a more accurate indicator of genetic function than genomic conservation estimates, because conservation estimates are relative and difficult to align owing to the large variations in genome sizes of even closely related species, are partially tautological, and are not based on direct testing of the genome for functionality. Conservation estimates may be used to provide clues to identify possible functional elements in the genome, but they do not limit or cap the total amount of functional elements that could possibly exist in the genome. Furthermore, much of the genome that is being disputed by critics seems to be involved in epigenetic regulation such as gene expression and appears to be necessary for the development of complex organisms. The ENCODE results were not necessarily unexpected since increases in attributions of functionality were foreshadowed by previous decades of research. Additionally, others have noted that the ENCODE project from the very beginning had a scope that was based on seeking biomedically relevant functional elements in the genome, not evolutionarily functional elements, which are not necessarily the same thing since evolutionary selection is neither sufficient nor necessary to establish a function. It is a very useful proxy to relevant functions, but an imperfect one and not the only one. Recently, ENCODE researchers reiterated that its main goal is identifying functional elements in the human genome. In a follow-up paper in 2020, ENCODE stated that functional annotation of identified elements is "still in its infancy." In response to the complaints about the definition of the word "function", some have noted that ENCODE did define what it meant and since the scope of ENCODE was seeking biomedically relevant functional elements in the genome, then the conclusion of the project should be interpreted "as saying that 80 % of the genome is engaging in relevant biochemical activities that are very likely to have causal roles in phenomena deemed relevant to biomedical research." Ewan Birney, one of the ENCODE researchers, commented that "function" was used pragmatically to mean "specific biochemical activity" which included different classes of assays: RNA, "broad" histone modifications, "narrow" histone modifications, DNaseI hypersensitive sites, Transcription Factor ChIP-seq peaks, DNaseI Footprints, Transcription Factor bound motifs, and Exons. In 2014, ENCODE researchers noted that in the literature, functional parts of the genome have been identified differently in previous studies depending on the approaches used. There have been three general approaches used to identify functional parts of the human genome: genetic approaches (which rely on changes in phenotype), evolutionary approaches (which rely on conservation) and biochemical approaches (which rely on biochemical testing and were used by ENCODE). 
All three have limitations: genetic approaches may miss functional elements that do not manifest physically on the organism, evolutionary approaches have difficulties using accurate multispecies sequence alignments since genomes of even closely related species vary considerably, and with biochemical approaches, though having high reproducibility, the biochemical signatures do not always automatically signify a function. They concluded that in contrast to evolutionary and genetic evidence, biochemical data offer clues about both the molecular function served by underlying DNA elements and the cell types in which they act and ultimately all three approaches can be used in a complementary way to identify regions that may be functional in human biology and disease. Furthermore, they noted that the biochemical maps provided by ENCODE were the most valuable things from the project since they provide a starting point for testing how these signatures relate to molecular, cellular, and organismal function. The project has also been criticized for its high cost (~$400 million in total) and favoring big science which takes money away from highly productive investigator-initiated research. The pilot ENCODE project cost an estimated $55 million; the scale-up was about $130 million and the US National Human Genome Research Institute NHGRI could award up to $123 million for the next phase. Some researchers argue that a solid return on that investment has yet to be seen. There have been attempts to scour the literature for the papers in which ENCODE plays a significant part and since 2012 there have been 300 papers, 110 of which come from labs without ENCODE funding. An additional problem is that ENCODE is not a unique name dedicated to the ENCODE project exclusively, so the word 'encode' comes up in many genetics and genomics literature. Another major critique is that the results do not justify the amount of time spent on the project and that the project itself is essentially unfinishable. Although often compared to Human Genome Project (HGP) and even termed as the HGP next step, the HGP had a clear endpoint which ENCODE currently lacks. The authors seem to sympathize with the scientific concerns and at the same time try to justify their efforts by giving interviews and explaining ENCODE details not just to the scientific public, but also to mass media. They also claim that it took more than half a century from the realization that DNA is the hereditary material of life to the human genome sequence, so that their plan for the next century would be to really understand the sequence itself. FactorBook The analysis of transcription factor binding data generated by the ENCODE project is currently available in the web-accessible repository FactorBook. Essentially, Factorbook.org is a Wiki-based database for transcription factor-binding data generated by the ENCODE consortium. In the first release, Factorbook contains: 457 ChIP-seq datasets on 119 TFs in a number of human cell lines The average profiles of histone modifications and nucleosome positioning around the TF-binding regions Sequence motifs enriched in the regions and the distance and orientation preferences between motif sites. 
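The "average profiles of histone modifications and nucleosome positioning around the TF-binding regions" listed above are, computationally, a signal track averaged over windows centred on binding sites. The sketch below illustrates that idea on synthetic data; it is not FactorBook's actual pipeline, and the signal values and site positions are invented for the example.

```python
# Illustrative sketch (not FactorBook's actual pipeline): computing an average
# signal profile around transcription-factor binding sites. The "signal" array
# stands in for a per-base coverage track (e.g. a histone-modification ChIP-seq
# signal) and the site positions are made up for the example.
import numpy as np

def average_profile(signal, site_centers, flank=2000):
    """Average a per-base signal over windows of +/- `flank` bases around each site."""
    window = 2 * flank + 1
    profiles = []
    for center in site_centers:
        start, end = center - flank, center + flank + 1
        if start < 0 or end > len(signal):
            continue  # skip sites too close to the ends of the track
        profiles.append(signal[start:end])
    return np.mean(profiles, axis=0) if profiles else np.zeros(window)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.poisson(2.0, size=1_000_000).astype(float)   # background coverage
    sites = rng.integers(10_000, 990_000, size=500)            # hypothetical binding sites
    for s in sites:                                            # add enrichment near each site
        signal[s - 150 : s + 150] += 5.0
    profile = average_profile(signal, sites, flank=1000)
    print("signal at site center:", profile[1000], "vs. at window edge:", profile[0])
```

In real use the coverage track would come from a processed ChIP-seq signal file and the site positions from called peak summits.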
See also GENCODE SIMAP Functional genomics Human Genome Project 1000 Genomes Project International HapMap Project List of biological databases References External links Official list of ENCODE project publications ENCODE project at the National Human Genome Research Institute Encyclopedia of DNA Elements at the UCSC Genome Browser ENCODE/GENCODE project at the Wellcome Trust Sanger Institute ENCODE-sponsored introductory tutorial FactorBook modENCODE ENCODE threads Explorer at the Nature (journal) Biological databases Genetics or genomics research institutions Population genetics organizations 21st-century encyclopedias
ENCODE
[ "Biology" ]
5,956
[ "Bioinformatics", "Biological databases" ]
2,140,957
https://en.wikipedia.org/wiki/Huggies%20Pull-Ups
Pull-Ups is a brand of disposable diapers made under the Huggies brand of baby products. The product was first introduced in 1989 and became popular with the slogan "I'm a big kid now!" The training pants are marketed with purple packaging: boys' designs are blue and currently feature characters from the Disney Junior show Mickey Mouse Funhouse; girls' designs are purple with the Disney Junior show Minnie's Bow-Toons characters. Huggies Pull-Ups variations Huggies Pull-Ups have been distributed in four different types, which have remained in place since 2011 (not counting the renaming of Wetness Liner). Learning Designs In March 2005, the original Huggies Pull-Ups were renamed Learning Designs after the small pictures that fade when they become wet. Wetness Liner Wetness Liner Pull-Ups Training Pants were first distributed in 2005 as a competitor to the now defunct Pampers Feel 'N Learn. These Pull-Ups were much like Learning Designs Pull-Ups, except that a special liner was added to the Wetness Liner ones. This liner is placed on the inside of the Pull-up, where the wearer is most likely to wet, and is sensitive to urine. When the Wetness Liner is exposed to urine, it causes the wearer to feel uncomfortable, and learn that they shouldn't wet themselves and should use the toilet instead. Wetness Liner Pull-Ups also have the Learning Designs, which also fade when the wearer wets the pull-up. Cool-Alert name change In 2006, the Wetness Liner Pull-Ups were replaced by Cool-Alert. This variation has remained in place since then. GoodNites GoodNites are used to control bedwetting. In 2008, the GoodNites disposable underwear split from the Pull-Ups brand and merged with the Huggies brand; then in 2011, GoodNites split from the Huggies brand and formed its own brand with the same name as the product. Night-Time At the same time that Wetness Liner was renamed Cool Alert, Pull-Ups introduced Night-Time Pull-Ups. The Night-Time Pull-Ups were very much like regular Pull-Ups pants, except that they have more absorbency and bedtime designs featuring Toy Story characters for boys and girls. The Night-Time Pull-Ups are not available in the 4T-5T or 5T-6T sizes. Potty training The main use for Pull-Ups Training Pants is as an aid for toilet training toddlers and to help them learn not to wet. Although up until 2000 Pull-Ups Training Pants were nothing more than diapers that go off and on like underwear, there have since been several changes to them. The first was the addition of magic stars/flowers (renamed Learning Designs on March 2, 2005) on the inside (in 2005-2007 only) and on the front of the pant, which fade when the wearer wets it as a way of discouraging wetting and as a motivation to stay dry in time to make it to the potty; if the wearer stays dry, the stars/flowers will stay on the Pull-Up. Next was the addition of Easy-Open sides. These made it so that the sides of the Pull-Up still go off and on like underwear, but enable parents to easily open the Pull-Up to check to see if the wearer soiled the Pull-Up, or to quickly change a messy Pull-Up. Though many enjoy this feature, some parents have criticized it for causing the Pull-Up to rip too easily. History 1989 Huggies introduced Pull-Ups brand disposable training pants. 1991 The first Pull-Ups commercial aired on television, and "I'm a big kid now!" became the brand's main slogan. 
1992 Single-sex Pull-Ups training pants were introduced with customized absorbency placed where boys and girls wet the most and also gender-specific prints: vehicles for boys and animals for girls. 1994 GoodNites disposable underwear for older children were introduced. Leak guards were added to handle wetness better than any other training pant. 1995 A back label was added to the pants to distinguish the obverse from the reverse. 1996 Realistic underwear designs were introduced, with a fly front style for boys and lace style for girls. 1997 Disney character designs were introduced, starring Mickey Mouse for boys and Minnie Mouse for girls. 1999 GoodNites introduced an XL size fitting kids well over 100lbs, offered, like all GoodNites of this era, in white only. 2000 Pull-Ups added a wetness indicator on the front of the pants to tell whether or not the wearer is wet. 2003 Toy Story and Disney Princesses designs debuted for boys and girls respectively. The slogan that was used in the original late-1980s and early-1990s commercials, "I'm a big kid now!", was recycled for the product's recent commercials. 2004 Single-sex underwear was introduced with customized absorbency placed where boys and girls wet the most and also gender-specific prints. 2005 Training pants with a Wetness Liner were introduced; these are similar to the Learning Designs training pants, but contain a liner that feels unpleasant when wet, letting the wearer feel when (s)he is wet. 2006 Night-Time training pants were introduced, Wetness Liner training pants were renamed to Cool-Alert, and Cars designs were introduced for boys to match the film's release. 2008 GoodNites halts its connection with Pull-Ups and is now linked to Huggies and Kimberly-Clark. 2009 The infamous Potty Dance debuted on the airwaves. This was deemed appalling due to suggestive movement of pelvic areas, and was since pulled and replaced with a non-offensive version. Flying stars were added to the bright orange background. 2010 Pull-Ups offered a phone call service associated with Disney. As a reward for finishing potty training, the parent of the wearer could request a phone call in which the caller pretended to be a Disney Princess or Toy Story character. This limited time offer is currently defunct. Cars designs were replaced with Toy Story 3 designs which corresponded with the latter movie's release. 2011 Toy Story 3 designs were replaced with Cars 2 designs which corresponded with the latter film's release. GoodNites halts its connection with Huggies but is still connected with Kimberly-Clark. 2012 Minnie Mouse returned on some girls' Pull-Ups. The sides on boys' Pull-Ups were recolored from blue to red. 2013 The sides of boys' Pull-Ups were recolored from red to blue. The sides of girls' Pull-Ups were recolored from pink to purple. Monsters University designs were added for both genders to correspond with the film's release. Cinderella was replaced by Ariel on girls' Learning Designs and by Rapunzel on girls' Night*Time Pull-Ups. Toy Story designs return for boys. March 31, 2013 Pull-Ups Cool Alert was discontinued in the United States. 2014 Monsters University, Minnie Mouse, and Toy Story designs were replaced with Doc McStuffins designs for girls and Jake and the Never Land Pirates designs for boys. Pull-Ups Cool Alert returns exclusively online at Amazon, Diapers.com, Drugstore.com, Walmart, Sam’s Club, Target and Peapod.com. Pull-Ups made their training pants more absorbent. 
2015 Sofia the First designs debut for girls. Mickey Mouse returned on some boys' Pull-Ups. Minnie Mouse returned on some girls' Pull-Ups. Mater returned on some boys' Night*Time Pull-Ups. Cinderella returned on some girls' Night*Time Pull-Ups. Cool Alert becomes available at retailers everywhere under its new name, Cool & Learn. Pull-Ups' iconic "Big Kid" child photo is removed from packaging. The Pull-Ups logo is changed to resemble its 2009 version sans a yellow outline. 2016 Whisker Haven Tales with the Palace Pets designs debut for girls and Kion from The Lion Guard designs debut for boys. Night*Time Pull-Ups introduced Miles Callisto from Miles from Tomorrowland designs for boys. Belle returns on some girls' Night*Time Pull-Ups. 2017 The Lion Guard designs are replaced with Lightning McQueen and Jackson Storm designs in a single case. Whisker Haven Tales with the Palace Pets designs are replaced with Doc McStuffins and Minnie Mouse designs in a single case. Cars 3 designs are added for both genders to correspond with the film's release. Toy Story designs replaced the Miles from Tomorrowland prints on the Night*Time variant. 2018 The Lion Guard designs are replaced with Mickey and the Roadster Racers designs. 2019 12M-24M sizes are introduced. Toy Story 4 designs are added for both genders to correspond with the film's release. 2020 New packaging is introduced with the following changes: The iconic "Big Kid" child photo returns; however, instead of using a potty or lying in a bed, the child is simply smiling while wearing the training pants. The base color for both genders is purple, with accents of blue or pink for boys and girls, respectively. Pull-Ups refreshed its logo, with a flat design and small tweaking on the lettering. Toy Story 4 designs were replaced by Mickey Mouse designs on boys' Learning Designs and by Cars designs on boys' Cool & Learn Pull-Ups. Toy Story 4 designs were replaced by Minnie Mouse designs on girls' Learning Designs and by Belle from Beauty and the Beast designs on girls' Cool & Learn Pull-Ups. A plant-based line titled "New Leaf" was introduced with Frozen II designs for both genders. Controversy The Cool-Alert Pull-Ups were controversial because the wearer would likely either get a rash or not feel the cooling effect when (s)he wets the pant. The 2009 Potty Dance commercial aggravated parents due to its suggestive dancing, mainly when the toddlers put their hands on their genitalia and make circular motions with their hips. It was pulled from the airwaves and replaced with a more appropriate version by Ralph's World, which replaces the offensive movements with sidesteps. Competition Ever since Huggies Pull-Ups became popular, several other brands have tried to copy the product. The first competitor besides store-brand training pants was Pampers Trainers, made from 1993 until 1995. In 2002, Pampers introduced "Easy Ups" training pants. The Pampers brand also had training pants with a wetness liner called "Feel 'N Learn" which were made from 2004 until 2007. Luvs also had a line of training pants made in the 1990s. Sponsorships Pull-Ups is the official sponsor of ESPN Radio's coverage of Major League Baseball, as well as Westwood One's coverage of Sunday Night Football, and is a sponsor on many terrestrial broadcast television stations and children's TV brands, including Nickelodeon, Disney, Cartoon Network, Universal Kids (formerly known as (PBS Kids) Sprout), etc. 
See also Huggies Toilet training External links Pull-Ups Training Pants Official Website Products introduced in 1989 Kimberly-Clark brands Diaper brands Toilet training
Huggies Pull-Ups
[ "Biology" ]
2,299
[ "Excretion", "Toilet training" ]
2,141,003
https://en.wikipedia.org/wiki/Indium%20gallium%20phosphide
Indium gallium phosphide (InGaP), also called gallium indium phosphide (GaInP), is a semiconductor composed of indium, gallium and phosphorus. It is used in high-power and high-frequency electronics because of its superior electron velocity with respect to the more common semiconductors silicon and gallium arsenide. It is used mainly in HEMT and HBT structures, but also for the fabrication of high efficiency solar cells used for space applications and, in combination with aluminium (AlGaInP alloy), to make high brightness LEDs with orange-red, orange, yellow, and green colors. Some semiconductor devices such as EFluor Nanocrystal use InGaP as their core particle. Indium gallium phosphide is a solid solution of indium phosphide and gallium phosphide. Ga0.5In0.5P is a solid solution of special importance, which is almost lattice matched to GaAs. This allows, in combination with (AlxGa1−x)0.5In0.5P, the growth of lattice matched quantum wells for red emitting semiconductor lasers, e.g., red emitting (650nm) RCLEDs or VCSELs for PMMA plastic optical fibers. Ga0.5In0.5P is used as the high energy junction on double and triple junction photovoltaic cells grown on GaAs. Recent years have shown GaInP/GaAs tandem solar cells with AM0 (sunlight incidence in space = 1.35 kW/m2) efficiencies in excess of 25%. A different composition of GaInP, lattice matched to the underlying GaInAs, is utilized as the high energy junction in GaInP/GaInAs/Ge triple junction photovoltaic cells. Growth of GaInP by epitaxy can be complicated by the tendency of GaInP to grow as an ordered material, rather than a truly random solid solution (i.e., a mixture). This changes the bandgap and the electronic and optical properties of the material. See also Gallium phosphide Indium(III) phosphide Indium gallium nitride Indium gallium arsenide GaInP/GaAs solar cell References E.F. Schubert "Light emitting diodes", External links EMCORE Solar Cells Spectrolab Solar Cells NSM Archive Phosphides Indium compounds Gallium compounds III-V semiconductors III-V compounds Solar cells Light-emitting diode materials
Indium gallium phosphide
[ "Physics", "Chemistry", "Materials_science" ]
528
[ "Materials science stubs", "Inorganic compounds", "Semiconductor materials", "Condensed matter physics", "III-V semiconductors", "Light-emitting diode materials", "Condensed matter stubs", "III-V compounds" ]
2,141,015
https://en.wikipedia.org/wiki/Indium%20gallium%20arsenide
Indium gallium arsenide (InGaAs) (alternatively gallium indium arsenide, GaInAs) is a ternary alloy (chemical compound) of indium arsenide (InAs) and gallium arsenide (GaAs). Indium and gallium are group III elements of the periodic table while arsenic is a group V element. Alloys made of these chemical groups are referred to as "III-V" compounds. InGaAs has properties intermediate between those of GaAs and InAs. InGaAs is a room-temperature semiconductor with applications in electronics and photonics. The principal importance of GaInAs is its application as a high-speed, high sensitivity photodetector of choice for optical fiber telecommunications. Nomenclature Indium gallium arsenide (InGaAs) and gallium-indium arsenide (GaInAs) are used interchangeably. According to IUPAC standards the preferred nomenclature for the alloy is GaxIn1-xAs where the group-III elements appear in order of increasing atomic number, as in the related alloy system AlxGa1-xAs. By far, the most important alloy composition from technological and commercial standpoints is Ga0.47In0.53As, which can be deposited in single crystal form on indium phosphide (InP). Materials synthesis GaInAs is not a naturally-occurring material. Single crystal material is required for electronic and photonic device applications. Pearsall and co-workers were the first to describe single-crystal epitaxial growth of In0.53Ga0.47As on (111)-oriented and on (100)-oriented InP substrates. Single crystal material in thin-film form can be grown by epitaxy from the liquid-phase (LPE), vapour-phase (VPE), by molecular beam epitaxy (MBE), and by metalorganic chemical vapour deposition (MO-CVD). Today, most commercial devices are produced by MO-CVD or by MBE. The optical and mechanical properties of InGaAs can be varied by changing the ratio of InAs and GaAs. Most InGaAs devices are grown on indium phosphide (InP) substrates. In order to match the lattice constant of InP and avoid mechanical strain, Ga0.47In0.53As is used. This composition has an optical absorption edge at 0.75 eV, corresponding to a cut-off wavelength of λ=1.68 μm at 295 K. By increasing the mole fraction of InAs further compared to GaAs, it is possible to extend the cut-off wavelength up to about λ=2.6 μm. In that case special measures have to be taken to avoid mechanical strain from differences in lattice constants. GaAs is lattice-mismatched to germanium (Ge) by 0.08%. With the addition of 1.5% InAs to the alloy, In0.015Ga0.985As becomes lattice-matched to the Ge substrate, reducing stress in subsequent deposition of GaAs. Electronic and optical properties InGaAs has a lattice parameter that increases linearly with the concentration of InAs in the alloy. The liquid-solid phase diagram shows that during solidification from a solution containing GaAs and InAs, GaAs is taken up at a much higher rate than InAs, depleting the solution of GaAs. During growth from solution, the composition of first material to solidify is rich in GaAs while the last material to solidify is richer in InAs. This feature has been exploited to produce ingots of InGaAs with graded composition along the length of the ingot. However, the strain introduced by the changing lattice constant causes the ingot to be polycrystalline and limits the characterization to a few parameters, such as bandgap and lattice constant with uncertainty due to the continuous compositional grading in these samples. 
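The linear composition dependence of the lattice parameter just described (Vegard's law) and the relation between bandgap and optical cut-off wavelength lend themselves to a short numerical illustration. The sketch below is not taken from the article; the lattice constants are commonly quoted room-temperature literature values (in angstroms) and should be checked against a materials handbook before being relied upon.

```python
# Illustration (assumed literature values) of Vegard's-law interpolation of the
# InGaAs lattice parameter, and conversion of a bandgap energy to an optical
# cut-off wavelength via lambda = hc / E.
A_GAAS = 5.6533    # GaAs lattice constant, angstroms
A_INAS = 6.0583    # InAs lattice constant, angstroms
A_INP = 5.8687     # InP lattice constant, angstroms
HC_EV_UM = 1.2398  # Planck constant times speed of light, in eV*micrometres

def ingaas_lattice_constant(x_inas):
    """Vegard's law: linear interpolation between GaAs and InAs for In(x)Ga(1-x)As."""
    return x_inas * A_INAS + (1.0 - x_inas) * A_GAAS

def inas_fraction_matched_to_inp():
    """InAs mole fraction at which In(x)Ga(1-x)As is lattice-matched to InP."""
    return (A_INP - A_GAAS) / (A_INAS - A_GAAS)

def cutoff_wavelength_um(bandgap_ev):
    """Optical absorption edge (cut-off wavelength) in micrometres for a given bandgap."""
    return HC_EV_UM / bandgap_ev

if __name__ == "__main__":
    x = inas_fraction_matched_to_inp()
    print(f"InAs fraction lattice-matched to InP: x = {x:.3f}")
    print(f"Lattice constant at that composition: {ingaas_lattice_constant(x):.4f} A")
    print(f"Cut-off wavelength for Eg = 0.75 eV: {cutoff_wavelength_um(0.75):.2f} um")
```

With these values the matched composition comes out near x = 0.53, consistent with the Ga0.47In0.53As composition quoted above, and the simple hc/Eg estimate for a 0.75 eV bandgap gives roughly 1.65 μm, close to the quoted 1.68 μm cut-off.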
Properties of single crystal GaInAs Single crystal GaInAs Single crystal epitaxial films of GaInAs can be deposited on a single crystal substrate of III-V semiconductor having a lattice parameter close to that of the specific gallium indium arsenide alloy to be synthesized. Three substrates can be used: GaAs, InAs and InP. A good match between the lattice constants of the film and substrate is required to maintain single crystal properties and this limitation permits small variations in composition on the order of a few percent. Therefore, the properties of epitaxial films of GaInAs alloys grown on GaAs are very similar to GaAs and those grown on InAs are very similar to InAs, because lattice mismatch strain does not generally permit significant deviation of the composition from the pure binary substrate. is the alloy whose lattice parameter matches that of InP at 295 K. GaInAs lattice-matched to InP is a semiconductor with properties quite different from GaAs, InAs or InP. It has an energy band gap of 0.75 eV, an electron effective mass of 0.041 and an electron mobility close to 10,000 cm2·V−1·s−1 at room temperature, all of which are more favorable for many electronic and photonic device applications when compared to GaAs, InP or even Si. Measurements of the band gap and electron mobility of single-crystal GaInAs were first published by Takeda and co-workers. FCC lattice parameter Like most materials, the lattice parameter of GaInAs is a function of temperature. The measured coefficient of thermal expansion is  K−1. This is significantly larger than the coefficient for InP which is  K−1. A film that is exactly lattice-matched to InP at room temperature is typically grown at 650 °C with a lattice mismatch of +. Such a film has a mole fraction of GaAs = 0.47. To obtain lattice matching at the growth temperature, it is necessary to increase the GaAs mole fraction to 0.48. Bandgap energy The bandgap energy of GaInAs can be determined from the peak in the photoluminescence spectrum, provided that the total impurity and defect concentration is less than  cm−3. The bandgap energy depends on temperature and increases as the temperature decreases, as can be seen in Fig. 3 for both n-type and p-type samples. The bandgap energy at room temperature for standard InGaAs/InP (53% InAs, 47% GaAs), is 0.75 eV and lies between that of Ge and Si. By coincidence the bandgap of GaInAs is perfectly placed for photodetector and laser applications for the long-wavelength transmission window, (the C-band and L-band) for fiber-optic communications. Effective mass The electron effective mass of GaInAs m*/m° = 0.041 is the smallest for any semiconductor material with an energy bandgap greater than 0.5 eV. The effective mass is determined from the curvature of the energy-momentum relationship: stronger curvature translates into lower effective mass and a larger radius of delocalization. In practical terms, a low effective mass leads directly to high carrier mobility, favoring higher speed of transport and current carrying capacity. A lower carrier effective mass also favors increased tunneling current, a direct result of delocalization. The valence band has two types of charge carriers: light holes: m*/m° = 0.051 and heavy holes: m*/m° = 0.2. The electrical and optical properties of the valence band are dominated by the heavy holes, because the density of these states is much greater than that for light holes. 
This is also reflected in the mobility of holes at 295 K, which is a factor of 40 lower than that for electrons. Mobility of electrons and holes Electron mobility and hole mobility are key parameters for design and performance of electronic devices. Takeda and co-workers were the first to measure electron mobility in epitaxial films of InGaAs on InP substrates. Measured carrier mobilities for electrons and holes are shown in Figure 4. The mobility of carriers in this alloy is unusual in two regards: The very high value of electron mobility The unusually large ratio of electron to hole mobility. The room temperature electron mobility for reasonably pure samples approaches ·V−1·s−1, which is the largest of any technologically important semiconductor, although significantly less than that for graphene. The mobility is proportional to the carrier conductivity. As mobility increases, so does the current-carrying capacity of transistors. A higher mobility shortens the response time of photodetectors. A larger mobility reduces series resistance, and this improves device efficiency and reduces noise and power consumption. The minority carrier diffusion constant is directly proportional to carrier mobility. The room temperature diffusion constant for electrons at ·s−1 is significantly larger than that of Si, GaAs, Ge or InP, and determines the ultra-fast response of photodetectors. The ratio of electron to hole mobility is the largest of currently-used semiconductors. Applications Photodetectors The principal application of GaInAs is as an infrared detector. The spectral response of a GaInAs photodiode is shown in Figure 5. GaInAs photodiodes are the preferred choice in the wavelength range of 1.1 μm < λ < 1.7 μm. For example, compared to photodiodes made from Ge, GaInAs photodiodes have faster time response, higher quantum efficiency and lower dark current for the same sensor area. GaInAs photodiodes were invented in 1977 by Pearsall. Avalanche photodiodes offer the advantage of additional gain at the expense of response time. These devices are especially useful for detection of single photons in applications such as quantum key distribution where response time is not critical. Avalanche photodetectors require a special structure to reduce reverse leakage current due to tunnelling. The first practical avalanche photodiodes were designed and demonstrated in 1979. In 1980, Pearsall developed a photodiode design that exploits the uniquely short diffusion time of the high-mobility electrons in GaInAs, leading to an ultrafast response time. This structure was further developed and subsequently named the UTC, or uni-travelling carrier photodiode. In 1989, Wey and co-workers designed and demonstrated a p-i-n GaInAs/InP photodiode with a response time shorter than 5 picoseconds for a detector surface measuring 5 μm x 5 μm. Other important innovations include the integrated photodiode – FET receiver and the engineering of GaInAs focal-plane arrays. Lasers Semiconductor lasers are an important application for GaInAs, following photodetectors. GaInAs can be used as a laser medium. Devices have been constructed that operate at wavelengths of 905 nm, 980 nm, 1060 nm, and 1300 nm. InGaAs quantum dots on GaAs have also been studied as lasers. 
GaInAs/InAlAs quantum-well lasers can be tuned to operate at the λ = 1500 nm low-loss, low-dispersion window for optical fiber telecommunications. In 1994, GaInAs/AlInAs quantum wells were used by Jérôme Faist and co-workers who invented and demonstrated a new kind of semiconductor laser based on photon emission by an electron making an optical transition between subbands in the quantum well. They showed that the photon emission regions can be cascaded in series, creating the quantum cascade laser (QCL). The energy of photon emission is a fraction of the bandgap energy. For example, GaInAs/AlInAs QCL operates at room temperature in the wavelength range 3 μm < λ < 8 μm. The wavelength can be changed by modifying the width of the GaInAs quantum well. These lasers are widely used for chemical sensing and pollution control. Photovoltaics and transistors GaInAs is used in triple-junction photovoltaics and also for thermophotovoltaic power generation. In0.015Ga0.985As can be used as an intermediate band-gap junction in multi-junction photovoltaic cells with a perfect lattice match to Ge. The perfect lattice match to Ge reduces defect density, improving cell efficiency. HEMT devices using InGaAs channels are one of the fastest types of transistor. In 2012, MIT researchers announced the smallest transistor ever built from a material other than silicon. The metal oxide semiconductor field-effect transistor (MOSFET) is 22 nanometers long. This is a promising accomplishment, but more work is needed to show that the reduced size results in improved electronic performance relative to that of silicon or GaAs-based transistors. In 2014, researchers at Penn State University developed a novel device prototype designed to test nanowires made of compound semiconductors such as InGaAs. The goal of this device was to see if a compound material would retain its superior mobility at nanoscale dimensions in a FinFET device configuration. The results of this test sparked more research, by the same research team, into transistors made of InGaAs, which showed that, in terms of on-current at lower supply voltage, InGaAs performed very well compared to existing silicon devices. In February 2015, Intel indicated it may use InGaAs for its 7 nanometer CMOS process in 2017. Safety and toxicity The synthesis of GaInAs, like that of GaAs, most often involves the use of arsine (AsH3), an extremely toxic gas. Synthesis of InP likewise most often involves phosphine (PH3). Inhalation of these gases neutralizes oxygen absorption by the bloodstream and can be fatal within a few minutes if toxic dose levels are exceeded. Safe handling involves using a sensitive toxic gas detection system and self-contained breathing apparatus. Once GaInAs is deposited as a thin film on a substrate, it is basically inert and is resistant to abrasion, sublimation or dissolution by common solvents such as water, alcohols or acetones. In device form the volume of the GaInAs is usually less than , and can be neglected compared to the volume of the supporting substrate, InP or GaAs. The National Institutes of Health studied these materials and found: No evidence of carcinogenic activity of gallium arsenide in male F344/N rats exposed to 0.01, 0.1, or Carcinogenic activity in female F344/N rats No evidence of carcinogenic activity in male or female B6C3F1 mice exposed to 0.1, 0.5, or . 
The World Health Organization's International Agency for Research on Cancer's review of the NIH toxicology study concluded: There is inadequate evidence in humans for the carcinogenicity of gallium arsenide. There is limited evidence in experimental animals for the carcinogenicity of gallium arsenide. The gallium moiety may be responsible for lung cancers observed in female rats. REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) is a European initiative to classify and regulate materials that are used, or produced (even as waste) in manufacturing. REACH considers three toxic classes: carcinogenic, reproductive, and mutagenic capacities. The REACH classification procedure consists of two basic phases. In phase one the hazards intrinsic to the material are determined, without any consideration of how the material might be used or encountered in the work place or by a consumer. In phase two the risk of harmful exposure is considered along with procedures that can mitigate exposure. Both GaAs and InP are in phase 1 evaluation. The principal exposure risk occurs during substrate preparation where grinding and polishing generate micron-size particles of GaAs and InP. Similar concerns apply to wafer dicing to make individual devices. This particle dust can be absorbed by breathing or ingestion. The increased ratio of surface area to volume for such particles increases their chemical reactivity. Toxicology studies are based on rat and mice experiments. No comparable studies test the effects of ingesting GaAs or InP dust in a liquid slurry. The REACH procedure, acting under the precautionary principle, interprets "inadequate evidence for carcinogenicity" as "possible carcinogen". As a result, the European Chemicals Agency classified InP in 2010 as a carcinogen and reproductive toxin: Classification & labelling in accordance with Directive 67/548/EEC Classification: Carc. Cat. 2; R45 Repr. Cat. 3; R62 and ECHA classified GaAs in 2010 as a carcinogen and reproductive toxin: Classification & labelling in accordance with Directive 67/548/EEC: Classification: Carc. Cat. 1; R45 Repro. Cat. 2; R60 See also Gallium arsenide Indium arsenide Indium gallium phosphide Indium gallium zinc oxide References External links NSM data archive at the Ioffe Institute, St. Petersburg, Russia Arsenides Indium compounds Gallium compounds III-V semiconductors III-V compounds Infrared sensor materials
Indium gallium arsenide
[ "Chemistry" ]
3,489
[ "Semiconductor materials", "III-V compounds", "Inorganic compounds", "III-V semiconductors" ]
2,141,062
https://en.wikipedia.org/wiki/Traffic%20camera
A traffic camera is a video camera which observes vehicular traffic on a road. Typically, traffic cameras are put along major roads such as highways, freeways, expressways and arterial roads, and are connected by optical fibers buried alongside or under the road, with electricity provided either by mains power in urban areas, by solar panels or other alternative power sources which provide consistent imagery without the threat of power outages. A monitoring center receives the live video in real-time, and serves as a dispatcher if there is a traffic collision, some other disruptive incident or safety issue. Traffic cameras form a part of most intelligent transportation systems. They are especially valuable in tunnels, where safety equipment can be activated remotely based on information provided by the cameras and other sensors. On surface roads, they are typically mounted on high poles or masts, sometimes with street lights. On arterial roads, they are often mounted on traffic light poles at intersections, where problems are most likely. In remote areas beyond the electrical grid, they are usually powered by a renewable source such as solar power, which also provides a backup source to urban camera infrastructure. Traffic cameras are distinct from road safety cameras, which are put in specific places to enforce rules of the road by taking still photos in a much higher image resolution upon a trigger. Traffic cameras are only for observation and continuously take lower-resolution video, often in full motion, though they may be remotely controlled to focus on an incident in the distance or at an orientation normally outside its field of view, such as a frontage road. Many transmit in analog television formats, though many are being converted to high definition or 4K resolution video as equipment is replaced. Some have a compass built in which displays the cardinal direction at which the camera is aimed, though many installers also provide a reference image with the cardinal direction. These cameras are required to be able to resist weather conditions permanently. A non-public use for traffic cameras is video tolling, where motorists drive through open road tolling gantries, have an image of their license plate taken, and then are billed after automatic number-plate recognition has read the license plate and cross-referenced it with motor vehicle databases. Many transportation departments have linked their camera networks to the internet so that travelers can view traffic conditions. They may show either streaming video or still imagery which refreshes at a set interval, helping travelers determine whether an alternate route should be taken. In the United States and Canada, these often are displayed on state or municipally-run 5-1-1 websites (511 being a telephone service relaying traffic information). These images may be combined with in-road sensors that measure traffic timing and mapping providers such as Google Maps/Waze that allow user-generated traffic information. Many states and provinces consider this information public domain, thus many television stations air live traffic camera imagery during traffic reports on their local news broadcasts, or simply as a moving background during newscasts. Some cable TV systems provide these pictures full-time on a governmental access channel, and some broadcast stations set aside a full digital subchannel solely for traffic information and camera imagery, such as WMVT-DT3 in Milwaukee and WFMZ-DT2 in Allentown, Pennsylvania. 
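Where an agency publishes interval-refreshed still images rather than streaming video, a traveler-facing application can obtain current conditions simply by polling the image URL on that interval. The following is a minimal illustrative sketch in Python; the endpoint URL and the 60-second refresh interval are assumptions made for the example, not details taken from any particular 5-1-1 service.

# Illustrative sketch only: polls a hypothetical 5-1-1 style still-image
# camera endpoint at a fixed refresh interval and keeps the latest frames.
import time
import urllib.request

CAMERA_URL = "https://example.gov/511/cameras/I-94-at-exit-305.jpg"  # hypothetical endpoint
REFRESH_SECONDS = 60  # assumed refresh interval for the published still image

def fetch_latest_frame(url: str) -> bytes:
    # Download the current still image from the camera endpoint.
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read()

def poll_camera(url: str, interval: int, iterations: int = 3) -> None:
    # Fetch a frame every `interval` seconds and save it to disk.
    for i in range(iterations):
        frame = fetch_latest_frame(url)
        with open(f"frame_{i:03d}.jpg", "wb") as f:
            f.write(frame)
        print(f"saved frame {i} ({len(frame)} bytes)")
        time.sleep(interval)

if __name__ == "__main__":
    poll_camera(CAMERA_URL, REFRESH_SECONDS)

In practice a viewer would also handle timeouts and HTTP errors; the sketch only shows the basic fetch-on-interval loop implied by refresh-at-a-set-interval imagery.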
However, in some cases for toll roads and other private road authorities, such as the Illinois State Toll Highway Authority, these images are the property of the toll agency (or private company which runs a toll road), and are released exclusively to one station (e.g. ISTHA feeds only to WMAQ-TV). Gallery External links Traffic Cams throughout the province of British Columbia, Canada Traffic Cams throughout the province of Saskatchewan, Canada Traffic Cams throughout the province of Manitoba, Canada Map of live traffic cameras in U.S. Cities Map of live traffic cameras the Twin Cities Live traffic cameras in Queensland, Australia Department of Main Roads Traffic cameras in the State of New Jersey Department of Transportation Live traffic cameras in New Zealand NZ Transport Agency Live traffic cameras in Southern and Eastern Ontario, Canada Intelligent transportation systems
Traffic camera
[ "Technology" ]
821
[ "Information systems", "Warning systems", "Intelligent transportation systems", "Transport systems" ]
2,141,292
https://en.wikipedia.org/wiki/Nicoll%20Highway%20collapse
The Nicoll Highway collapse occurred in Singapore on 20 April 2004 at 3:30pm local time when a Mass Rapid Transit (MRT) tunnel construction site caved in, leading to the collapse of the Nicoll Highway near the Merdeka Bridge. Four workers were killed and three were injured, delaying the construction of the Circle Line (CCL). The collapse was caused by a poorly designed strut-waler support system, a lack of monitoring and proper management of data caused by human error, and organisational failures of the Land Transport Authority (LTA) and construction contractors Nishimatsu and Lum Chang. The Singapore Civil Defence Force extracted three bodies from the site but were unable to retrieve the last due to unstable soil. An inquiry was conducted by Singapore's Manpower Ministry from August 2004 to May 2005, after which three Nishimatsu engineers and an LTA officer were charged under the Factories Act and Building Control Act respectively, and all four defendants were fined. The contractors gave S$ (US$) each to the families of the victims as unconditional compensation. Following the incident, the collapsed site was refilled, and Nicoll Highway was rebuilt and reopened to traffic on 4 December 2004. Heng Yeow Pheow, an LTA foreman whose body was never recovered, was posthumously awarded the Pingat Keberanian (Medal of Valour) for helping his colleagues to safety ahead of himself. In response to inquiry reports, the LTA and the Building and Construction Authority (BCA) revised their construction safety measures so they were above industry standards. The CCL tunnels were realigned, with Nicoll Highway station rebuilt to the south of the original site underneath Republic Avenue. The station and tunnels opened on 17 April 2010, three years later than planned. Background Nicoll Highway and Merdeka Bridge The Singapore Improvement Trust first planned Nicoll Highway in the late 1940s to relieve the heavy rush-hour traffic along Kallang Road and provide an alternative route from Singapore's city centre to Katong and Changi. These plans were finalised in July 1953; they included construction of a bridge spanning the Kallang and Rochor rivers. The construction contract for Kallang Bridge was awarded to Paul Y. Construction Company in association with Messrs Hume Industries and Messrs Sime Darby for $4.485 million (US$ million in 2021) in December 1954. On 22 June 1956, Kallang Bridge was renamed Merdeka Bridge to reflect "the confidence and aspiration of the people of Singapore". Merdeka Bridge and Nicoll Highway opened on 17 August that year; crowds gathered on both ends of the bridge to witness the opening ceremony. By August 1967, the highway and the bridge had been widened to accommodate seven lanes. Nicoll Highway station Nicoll Highway station was first announced in November 1999 as part of the Mass Rapid Transit's (MRT) Marina Line (MRL), which consisted of six stations from Dhoby Ghaut to Stadium. In 2001, Nicoll Highway station became part of Circle Line (CCL) Stage 1 when the MRL was incorporated into the CCL. The contract for the construction of Nicoll Highway station and tunnels was awarded to a joint venture between Nishimatsu and Lum Chang Building Contractors Pte Ltd at S$270 million (US$ million) on 31 May 2001. In 1996, the joint venture was investigated for breaching safety rules in a previous project; infringements included loose planks on its scaffolding. In 1997, the companies damaged underground telecommunications cables in another underpass construction project.
The site was on land reclaimed during the 1970s and consisted of silty old alluvium and a layer of marine clay resulting from sea-level changes of the Kallang Basin. The station and tunnels were constructed from the "bottom-up": cut-and-cover excavation was supported by a network of steel king posts, walers, and struts to keep the site open. Incident At about 3:30pm local time on 20 April 2004, tunnels linking to Nicoll Highway station caved in along with a stretch of Nicoll Highway near the abutment of Merdeka Bridge. The incident happened when most of the workers were on a tea break. The collapse of a tunnel's retaining wall created a hole 100 m long, 130 m wide, and 30 m deep (330 by 430 by 100 ft). One person was found dead and three others, who were working on driving machinery at the bottom of the site, were initially reported missing. They included a foreman who had helped evacuate his workers to safety when the site collapsed but did not escape in time because a flight of exit stairs collapsed. Three injured workers were taken to hospital for treatment; two of them were discharged the same day. No motorists were driving along the stretch of road when it collapsed and others stopped in time. Three power cables were severed, resulting in a 15-minute blackout in the Esplanade, Suntec City, and Marina Square regions. The collapse of the highway damaged a gas service line. From initial reports, eyewitnesses heard explosions and saw flames flashing across the highway; the Land Transport Authority (LTA), Singapore's transport agency, said it had no evidence of an explosion and that the witnesses might have mistaken the loud sound of the collapse for an explosion. As a precautionary measure, gas supply to the damaged pipe was shut off. Rescue and safety measures The Singapore Civil Defence Force (SCDF) arrived at the site at 3:42pm. After rescuing the three injured people, specialist SCDF units, such as the Disaster Assistance and Rescue Team and Search Platoon, arrived as reinforcements to search for the missing workers. The first dead victim was found at 6:07pm. All machinery was turned off as the SCDF used a life-detector device in the collapse site but nothing was detected and sniffer dogs were brought into the search. The second body was recovered at 11:42pm on 21 April. Prime Minister Goh Chok Tong visited the site on 21 April; he praised the coordination between the SCDF and the Public Utilities Board (PUB) for the ongoing rescue efforts and expressed relief at the small number of fatalities. Goh extended his condolences to the families of the victims and said the rescue efforts should be the priority rather than apportioning blame. He added the government would convene a public inquiry. President S R Nathan visited the site on 22 April to pay tribute to the rescue workers. A third body was recovered from the site on 22 April at 12:15am. The SCDF had to vertically excavate through a pile of rubble and debris located within three cavities, two of which were flooded and blocked by twisted steel beams and struts. The operation presented significant difficulty due to the limited space for manoeuvring within the cavities and the lack of visibility in flooded areas. The LTA detected stability problems on 23 April at 1:05am and grouting was implemented to stabilise the soil while water was pumped out from cavities, allowing rescuers to further investigate. Heavy rain in the afternoon caused soil erosion and halted the search.
Because of the instability of the collapsed area that could bury rescue workers and cause more damage to the surrounding area, the search for the foreman, Heng Yeow Pheow, was called off at 3:30pm. The Nicoll Highway collapse led to the deaths of four people: Vadivil s/o Nadesan, crane operator: A Malaysian of Indian descent, his body was the first to be recovered. The 45-year-old had tried to escape by jumping out of his crane when the incident occurred. He was found caught between a pick-up truck and a container. Liu Rong Quan, construction worker: The body of the 36-year-old Chinese national was found wedged between the wheel and chassis of a truck. Liu had started working at the site ten days before the incident. John Tan Lock Yong, LTA engineer: Tan, the third victim to be found, was found between a tipper truck and a container. Tan had been working on the station construction project for two years. Heng Yeow Pheow, LTA foreman whose body was never recovered. According to survivor accounts, Heng had hurried his workers to safety, saving eight workers, but he was trapped when the collapse occurred. Safety measures were implemented after the collapse to minimise further damage to the collapsed area. A damaged canal had to be blocked up to prevent water from the Kallang River from entering the site, and canvas sheets were laid on slopes in the site to protect the soil. While the surrounding buildings were assessed to be safe, they were later monitored for stability with additional settlement markers and electro-level beams that were installed at the nearby Golden Mile Complex. The LTA halted work at 16 of the 24 CCL excavation sites so these could be reviewed. Near the incident site, the approach slab before the abutment of Merdeka Bridge had collapsed. To prevent displacement of the first span triggering the collapse of the bridge, the first and second spans of the bridge were cut to isolate the first span. This also allowed Crawford Underpass beneath the bridge to be reopened. This project began on 23 April and was completed on 28 April. Eight prism points and five tiltmeters were installed to monitor any bridge movements. The collapsed site was quickly stabilised through the injection of concrete into areas that were vulnerable to movement or further collapse. Several vehicles, equipment and construction materials were retrieved using a specialised crane. The remaining equipment and materials at the site were buried under infill to avoid further collapse. Access to the collapsed site via the completed parts of the tunnel and the shaft was sealed off. Committee of Inquiry Singaporean authorities dismissed terrorism and sabotage as causes of the incident. On 22 April, Singapore's Ministry of Manpower established a Committee of Inquiry (COI) to investigate the cause of the incident. Senior District Judge Richard Magnus was appointed Chairman; he was assisted by assessors Teh Cee Ing from the Nanyang Technological University (NTU) and Lau Joo Ming from the Housing and Development Board. The COI called for 143 witnesses to provide evidence, including 14 experts. The COI visited the site on 23 April and the inquiry was originally scheduled for 1 June. Because all parties involved would need two-and-a-half months to prepare due to complex technical content, the inquiry was postponed to 2 August. 
Inquiry At the first hearing of the inquiry, the inquiry panel established that there were "fundamental" design flaws in the worksite due to incorrect analysis of soil conditions by the contractors, leading to more pressure on the retaining walls. In April, the LTA had said the collapse happened without warning but the LTA had already found flaws in Nishimatsu–Lum Chang's design in October 2001: the contractor used a design-software simulator with incorrect parameters. An alternative design had been proposed in consultation with an NTU professor but the contractor had rejected the design. The LTA technical advisor for design management had advised against excavation of the site due to incorrect data. In the two months before the cave-in, the tunnel's retaining walls had moved more than the maximum allowed. The contractors had petitioned the LTA to increase the agreed maximum threshold of movement. The contractors had miscalculated the amount of stress on the retaining walls but gave the LTA repeated assurances that their calculations were in order. Nishimatsu's senior on-site supervisor Teng Fong Sin claimed ignorance of the significance of the trigger values taken from the retaining wall. Teng said even if he had been aware of the significance, he lacked the authority to halt the ongoing work. No readings were taken in the two days leading up to the collapse. This was because soil-monitoring instruments, which were placed roughly in the centre of the collapsed area, had been buried and the site supervisor Chakkarapani Balasubramani did not take the readings, although he raised the issue with the main contractor and was told the instruments would be dug out. Nishimatsu engineer Arumaithurai Ahilan said he saw "nothing alarming" in the soil-movement readings and accused Balasubramani of lying in testimony. While he was also alerted to other ground movements, Nishimatsu addressed these cracks by applying cement patches, and no further corrective actions were taken because the buildings did not suffer any structural damage. According to a system analyst from Monosys, the project's subcontractor, the strain-detecting sensors recorded readings that were still below trigger values at 3 pm. These readings were the last obtained before the collapse at 3:30pm. The steel beams to hold up the walls had not been constructed when workers dug further into the site. LTA supervisor Phang Kok Pin, whose duty was to confirm the correct installation of support beams, said he visited the pit typically once or twice a day. He conducted only sporadic inspections and heavily relied on reports from Nishimatsu contractors to confirm the accurate installation of the beams. Nishimatsu supervisors were warned about failing support structures on the day of the collapse but instructed the subcontracting site supervisor Nallusamy Ramadoss to continue installing struts and pouring cement on the buckled struts to strengthen the wall. The struts continued to bend further before the collapse; Ramadoss warned his workers of the danger and evacuated them to safety. Some workers said they were not warned of any danger or given any safety briefings but escaped in time. Other workers also reported hearing "thungs" of bent walers before the cable bridge swayed, and everything around them trembled and collapsed. Resumption and conclusion The inquiry was adjourned on 30 August and resumed on 6 September. 
An interim report that was released to the government on 13 September noted "glaring and critical shortcomings" in the construction project that were seen in other ongoing construction projects. Additionally, inexperienced personnel had been appointed to monitor the safety of the retaining wall system. The interim report recommended a more-effective safety management system, an industry standard for the safety of temporary works, and a higher standard of reliability and accuracy in monitoring data. The interim report was released so "corrective measures" could be implemented for other construction projects. The LTA project manager Wong Hon Peng, who was informed of the deflection readings four days before the collapse, admitted his lack of respect for safety, saying that his initial response was "any solution adopted should not bring about claims against LTA" and that he failed to take heed of the warnings. The project manager from Nishimatsu, Yoshiaki Chikushi, also said he was unaware of the extent to which the struts supporting the construction site had buckled, and was consulting with the LTA on the day of the collapse after being alerted to the failing struts. To meet deadlines, Chikushi had accelerated the hacking of a wall that led to the removal of support beams in the excavation, and approved the grouting method that left gaps under some cables running across the site. He did not consider how these methods would cause problems. The final phase of the hearing, which involved the consultation of experts on the causes of weakening of the retaining wall, began on 24 January 2005 and concluded on 2 February. More than 170 witnesses were brought in during the 80 days of the inquiry. The COI released its final report on 13 February 2005; it concluded the incident was preventable and had been caused by human error and organisational failures. The strut-waler support system was poorly designed and was weaker than it should have been, and there was a lack of monitoring and proper management of data. The COI report said the "warning signs", such as excessive wall deflections and surging inclinometer readings, were not seriously addressed, and blamed the collapse on the contractor. The people responsible were accused of indifference and laxity towards the worksite safety of the construction project. To address the lack of safety culture stated in the report, the COI restated several recommendations from its interim report to improve the safety of construction projects. The government accepted the report's recommendations. Aftermath Compensation to the victims Families of the victims were given S$ (US$) each as unconditional, ex gratia compensation by Nishimatsu and Lum Chang. Heng's family received an additional S$380,000 (US$) in settlements from the three construction firms involved with the collapse and S$ (US$) in public donations. The money from the public donations was diverted into a trust fund, set up by Heng's Member of Parliament Irene Ng, from which expenses for his children's upkeep could be drawn until 2019. Honours and awards Nine SCDF officers who were involved in the search and rescue efforts were awarded the Pingat Keberanian (Medal of Valour). SCDF Commissioner James Tan, who was in charge of the rescue team, was awarded the Pingat Pentadbiran Awam – Emas (Public Administration Medal – Gold) and 18 other SCDF officers were awarded other State medals.
In May 2004, Heng was posthumously honoured with the Pingat Keberanian for prioritising the safety of his colleagues over his own escape. In 2014, three former colleagues whom Heng rescued inaugurated a memorial bench at Tampines Tree Park dedicated to the foreman. The ceremony, initiated by MP Irene Ng, was attended by Heng's wife and his two children. The bench was funded by the Tampines Changkat Citizens' Consultative Committee. A commemorative stone and plaque were also erected at the former site marking where Heng was believed to be buried. On every anniversary, workers from Kori Construction visit the site to offer prayers and incense in honour of Heng. Criminal trials The COI determined that Nishimatsu, L&M Geotechnic, Monosys and thirteen professionals from the LTA and Nishimatsu were responsible for the collapse. Those who received warnings included Nishimatsu personnel, an LTA engineer, soil engineers, and L&M Geotechnic and Monosys, which were engaged in soil analysis. Three others were given counselling by the Manpower Ministry. Nishimatsu and three of its personnel faced criminal charges under the Factories Act. A qualified person from the LTA, who was project director of the CCL and responsible for monitoring the site's readings, faced charges under the Building Control Act. The CCL project director's trial began on 3 October 2005; he was found guilty and fined S$8,000 (US$) on 24 November. On 28 April 2006, three senior executives from Nishimatsu were fined; the company's project director was fined S$120,000 (US$) for his failure to take appropriate measures concerning the buckling walls and for compromising safety due to flawed monitoring of instruments. The company's design manager and project coordinator were each fined S$200,000 (US$) for giving "blind approval" to the flawed designs. Construction safety reforms The LTA and the Building and Construction Authority (BCA) introduced new safety protocols such as a new Project Safety Review which identifies and reduces risks of hazards. Safety requirements are now set above industry standards, which include doubling scaffold access for evacuation routes in an emergency and one man-cage at each excavation area for rescuers. The LTA no longer allows contractors to engage their own geotechnical firms, but appoints an independent monitoring firm to check on instruments. Contractors are also no longer permitted to design and supervise their own temporary works, with the work carried out by independent consultants. Under the Safety Performance Scheme, contractors are now offered incentives or penalties and are required to maintain a Risk Register that identifies all hazards. The contractors and LTA meet every six months over safety performance, and identify and mitigate potential risks during the progress of works. These new regulations were reported to have driven up the costs of CCL construction works, alongside inflation and increasing costs of concrete. Highway reinstatement Following the collapse, the LTA closed off the stretch of Nicoll Highway from Middle Road to Mountbatten Road. Alternative roads leading into the city, including the junction of Kallang Road and Crawford Street, were widened to accommodate diverted traffic. The LTA also converted a bus-only lane at Lorong 1 Geylang towards Mountbatten Road into a traffic lane. On 25 April 2004, a part of Nicoll Highway running from Mountbatten Road to Stadium Drive was restored for motorists accessing the area around National Stadium of Singapore.
Crawford Underpass, which runs under Merdeka Bridge, reopened on 29 April. After the collapsed site was refilled, the highway was rebuilt on bored piles so the rebuilt stretch would not be affected by future excavation works. Reconstruction of the highway began on 24 August 2004 and the new stretch of highway reopened on 4 December. Station relocation and opening On 4 February 2005, the LTA announced Nicoll Highway station would be relocated south of the original site along Republic Avenue with a new tunnel alignment between Millenia (now Promenade) and Boulevard (since renamed) stations. The LTA decided against rebuilding at the original site due to higher costs and engineering challenges posed by debris left there. Prior to the collapse, Nicoll Highway and the adjacent Promenade station were planned to have a cross-platform interchange with an unspecified future line; those plans had to be revised because the new Nicoll Highway station had no provision to be an interchange. The new tunnels were designed by Aecom consultants and tunnels to the previous site were demolished with special machinery from Japan. The new station was built using the top-down method while the tunnels were bored, minimising their impact on the environment. Retaining walls for the new station site were thick and were entrenched underground to twice the previous depth. To reduce ground movement, the walls would be embedded into hard layers of soil. To ensure stability and prevent movement of the bored tunnels, the contractor implemented perforated vertical drains, and ground improvement efforts were undertaken in the vicinity of tunnel drainage sumps and cross-passages. On 29 September 2005, the LTA marked the start of the new Nicoll Highway station's construction with a groundbreaking ceremony, during which the diaphragm walls were first installed. Due to the tunnel collapse, the completion date of CCL Stage 1 was initially delayed from 2007 to 2009, and further postponed until 2010. Nicoll Highway station opened on 17 April 2010, along with the stations on CCL Stages 1 and 2. References Sources External links IAP class probes Singapore highway collapse – MIT report Nicoll Highway collapse – Singapore Civil Defence Force Nicoll Highway Collapse – Singapore Infopedia 2004 disasters in Singapore 2004 in Singapore Engineering failures Accidents and incidents involving Mass Rapid Transit (Singapore) Railway accidents in 2004 Construction accidents April 2004 events in Singapore Building and structure collapses in 2004 Building and structure collapses in Asia Kallang Railway accidents and incidents in Singapore Man-made disasters in Singapore
Nicoll Highway collapse
[ "Technology", "Engineering" ]
4,723
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
2,141,586
https://en.wikipedia.org/wiki/Bromate
The bromate anion, BrO3−, is a bromine-based oxoanion. A bromate is a chemical compound that contains this ion. Examples of bromates include sodium bromate (NaBrO3) and potassium bromate (KBrO3). Bromates are formed in many different ways in municipal drinking water. The most common is the reaction of ozone and bromide: Br− + O3 → BrO3−. Electrochemical processes, such as electrolysis of brine without a membrane operating to form hypochlorite, will also produce bromate when bromide ion is present in the brine solution. Photoactivation (sunlight exposure) will encourage liquid or gaseous bromine to generate bromate in bromide-containing water. In laboratories, bromates can be synthesized by dissolving bromine (Br2) in a concentrated solution of potassium hydroxide (KOH). The following reactions will take place (via the intermediate creation of hypobromite): Br2 + 2 OH− → Br− + BrO− + H2O and 3 BrO− → BrO3− + 2 Br− Human health issues Bromate in drinking water is toxic because it is a suspected human carcinogen. Its presence in Coca-Cola's Dasani bottled water forced a recall of that product in the UK. Bromate formation during ozonation Although few by-products are formed by ozonation, ozone reacts with bromide ions in water to produce bromate. Bromide can be found in sufficient concentrations in fresh water to produce (after ozonation) more than 10 ppb of bromate, the maximum contaminant level established by the USEPA. Proposals to reduce bromate formation include: lowering the water pH below 6.0, limiting the doses of ozone, using an alternate water source with a lower bromide concentration, pretreatment with ammonia, and addition of small concentrations of chloramines prior to ozonation. Reservoir pollution On December 14, 2007, the Los Angeles Department of Water and Power (LADWP) announced that it would drain Silver Lake Reservoir and Elysian Reservoir due to bromate contamination. At the Silver Lake and Elysian reservoirs a combination of bromide from well water, chlorine, and sunlight had formed bromate. The decontamination took four months, during which the contaminated water was discharged. On June 9, 2008, the LADWP began covering the surface of the open Ivanhoe Reservoir with black plastic shade balls to block the sunlight which causes the naturally present bromide to react with the chlorine used in treatment. Three million of the 40-cent balls are required to cover the Ivanhoe and Elysian reservoirs. Natural occurrence Currently, no bromate-bearing minerals (i.e., minerals with the bromate ion as an essential constituent) are known. See also Other bromine anions: References Carcinogens
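For context on the 10 ppb figure above, a simple mass balance gives a worst-case estimate: if every bromide ion in the source water were converted to bromate during ozonation (real conversion is only partial), the resulting bromate mass concentration would exceed the bromide concentration by the ratio of the molar masses of BrO3− and Br−. A minimal Python sketch of that arithmetic follows; the example bromide level is an assumed, illustrative value, not a measurement.

# Worst-case bromate estimate from a source-water bromide concentration,
# assuming complete conversion of Br- to BrO3- during ozonation.
M_BR = 79.904                  # molar mass of Br, g/mol
M_BRO3 = 79.904 + 3 * 15.999   # molar mass of BrO3-, g/mol (about 127.9)

def max_bromate_ppb(bromide_ppb: float) -> float:
    # Upper-bound bromate (ug/L) if all bromide were oxidized to bromate.
    return bromide_ppb * (M_BRO3 / M_BR)

if __name__ == "__main__":
    bromide = 15.0  # assumed raw-water bromide level, ug/L (illustrative only)
    print(f"{bromide} ppb bromide -> at most {max_bromate_ppb(bromide):.1f} ppb bromate")
    # Compare against the 10 ppb USEPA maximum contaminant level noted above.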
Bromate
[ "Chemistry", "Environmental_science" ]
568
[ "Carcinogens", "Bromates", "Toxicology", "Oxidizing agents" ]
2,142,202
https://en.wikipedia.org/wiki/Crenation
Crenation (from modern Latin crenatus meaning "scalloped or notched", from popular Latin crena meaning "notch") in botany and zoology, describes an object's shape, especially a leaf or shell, as being round-toothed or having a scalloped edge. The descriptor can apply to objects of different types, including cells, where one mechanism of crenation is the contraction of a cell after exposure to a hypertonic solution, due to the loss of water through osmosis. In a hypertonic environment, the cell has a lower concentration of solutes than the surrounding extracellular fluid, and water diffuses out of the cell by osmosis, causing the cytoplasm to decrease in volume. As a result, the cell shrinks and the cell membrane develops abnormal notchings. Pickling cucumbers and salt-curing of meat are two practical applications of crenation. Plasmolysis is the term which describes plant cells when the cytoplasm shrinks from the cell wall in a hypertonic environment. In plasmolysis, the cell wall stays intact, but the plasma membrane shrinks and the chloroplasts of the plant cell concentrate in the center of the cell. Red blood cells Crenation is also used to describe a feature of red blood cells. These erythrocytes look as if they have projections extending from a smaller central area, like a spiked ball. The crenations may be either large, irregular spicules of acanthocytes, or smaller, more numerous, regularly irregular projections of echinocytes. Acanthocytes and echinocytes may arise from abnormalities of the cell membrane lipids or proteins, or from other disease processes, or as an ex vivo artifact. See also Crenellation Cytorrhysis Hemolysis Plasmolysis References External links Image from Cornell.edu Crenation at medical-dictionary.thefreedictionary.com Animal physiology Membrane biology Solutions
Crenation
[ "Chemistry", "Biology" ]
415
[ "Animals", "Animal physiology", "Membrane biology", "Homogeneous chemical mixtures", "Molecular biology", "Solutions" ]
2,142,269
https://en.wikipedia.org/wiki/Precision%20Time%20Protocol
The Precision Time Protocol (PTP) is a protocol for clock synchronization throughout a computer network with relatively high precision and therefore potentially high accuracy. In a local area network (LAN), accuracy can be sub-microsecond, making it suitable for measurement and control systems. PTP is used to synchronize financial transactions, mobile phone tower transmissions, sub-sea acoustic arrays, and networks that require precise timing but lack access to satellite navigation signals. The first version of PTP, IEEE 1588-2002, was published in 2002. IEEE 1588-2008, also known as PTP Version 2, is not backward compatible with the 2002 version. IEEE 1588-2019 was published in November 2019 and includes backward-compatible improvements to the 2008 publication. IEEE 1588-2008 includes a profile concept defining PTP operating parameters and options. Several profiles have been defined for applications including telecommunications, electric power distribution and audiovisual uses. IEEE 802.1AS is an adaptation of PTP, called gPTP, for use with Audio Video Bridging (AVB) and Time-Sensitive Networking (TSN). History According to John Eidson, who led the IEEE 1588-2002 standardization effort, "IEEE 1588 is designed to fill a niche not well served by either of the two dominant protocols, NTP and GPS. IEEE 1588 is designed for local systems requiring accuracies beyond those attainable using NTP. It is also designed for applications that cannot bear the cost of a GPS receiver at each node, or for which GPS signals are inaccessible." PTP was originally defined in the IEEE 1588-2002 standard, officially entitled Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems, and published in 2002. In 2008, IEEE 1588-2008 was released as a revised standard; also known as PTP version 2 (PTPv2), it improves accuracy, precision and robustness but is not backward compatible with the original 2002 version. IEEE 1588-2019 was published in November 2019, is informally known as PTPv2.1 and includes backwards-compatible improvements to the 2008 publication. Architecture The IEEE 1588 standards describe a hierarchical master–slave architecture for clock distribution consisting of one or more network segments and one or more clocks. An ordinary clock is a device with a single network connection that is either the source of or the destination for a synchronization reference. A source is called a leader, a.k.a. master, and a destination is called a follower, a.k.a. slave. A boundary clock has multiple network connections and synchronizes one network segment to another. A single synchronization leader is selected, a.k.a. elected, for each network segment. The root timing reference is called the grandmaster. A relatively simple PTP architecture consists of ordinary clocks on a single-segment network with no boundary clocks. A grandmaster is elected and all other clocks synchronize to it. IEEE 1588-2008 introduces the transparent clock, a clock associated with network equipment used to convey PTP messages. The transparent clock modifies PTP messages as they pass through the device. Timestamps in the messages are corrected for time spent traversing the network equipment. This scheme improves distribution accuracy by compensating for delivery variability across the network. PTP typically uses the same epoch as Unix time (start of 1 January 1970). While Unix time is based on Coordinated Universal Time (UTC) and is subject to leap seconds, PTP is based on International Atomic Time (TAI).
The PTP grandmaster communicates the current offset between UTC and TAI, so that UTC can be computed from the received PTP time. Protocol details Synchronization and management of a PTP system are achieved through the exchange of messages across the communications medium. To this end, PTP uses the following message types. Sync, Follow_Up, Delay_Req and Delay_Resp messages are used by ordinary and boundary clocks and communicate time-related information used to synchronize clocks across the network. Pdelay_Req, Pdelay_Resp and Pdelay_Resp_Follow_Up are used by transparent clocks to measure delays across the communications medium so that they can be compensated for by the system. Transparent clocks and the messages associated with them are not available in the original IEEE 1588-2002 (PTPv1) standard, and were added in PTPv2. Announce messages are used by the best master clock algorithm in IEEE 1588-2008 to build a clock hierarchy and select the grandmaster. Management messages are used by network management to monitor, configure and maintain a PTP system. Signaling messages are used for non-time-critical communications between clocks. Signaling messages were introduced in IEEE 1588-2008. Messages are categorized as event and general messages. Event messages are time-critical in that the accuracy of their transmission and receipt timestamps directly affects clock distribution accuracy. Sync, Delay_Req, Pdelay_Req and Pdelay_Resp are event messages. General messages are more conventional protocol data units in that the data in these messages is of importance to PTP, but their transmission and receipt timestamps are not. Announce, Follow_Up, Delay_Resp, Pdelay_Resp_Follow_Up, Management and Signaling messages are members of the general message class. Message transport PTP messages may use the User Datagram Protocol over Internet Protocol (UDP/IP) for transport. IEEE 1588-2002 uses only IPv4 transports, but this has been extended to include IPv6 in IEEE 1588-2008. In IEEE 1588-2002, all PTP messages are sent using multicast messaging, while IEEE 1588-2008 introduced an option for devices to negotiate unicast transmission on a port-by-port basis. Multicast transmissions use IP multicast addressing, for which multicast group addresses are defined for IPv4 and IPv6. Time-critical event messages (Sync, Delay_Req, Pdelay_Req and Pdelay_Resp) are sent to port number 319. General messages (Announce, Follow_Up, Delay_Resp, Pdelay_Resp_Follow_Up, management and signaling) use port number 320. In IEEE 1588-2008, encapsulation is also defined for DeviceNet, ControlNet and PROFINET. Domains A domain is an interacting set of clocks that synchronize to one another using PTP. Clocks are assigned to a domain by virtue of the contents of the Subdomain name (IEEE 1588-2002) or the domainNumber (IEEE 1588-2008) fields in PTP messages they receive or generate. Domains allow multiple clock distribution systems to share the same communications medium. Best master clock algorithm The best master clock algorithm (BMCA) performs a distributed selection of the best clock to act as leader based on the following clock properties: Identifier: A universally unique numeric identifier for the clock. This is typically constructed based on a device's MAC address. Quality: Both versions of IEEE 1588 attempt to quantify clock quality based on expected timing deviation, technology used to implement the clock or location in a clock stratum schema, although only V1 (IEEE 1588-2002) has a stratum data field.
PTP V2 (IEEE 1588-2008) defines the overall quality of a clock by using the data fields clockAccuracy and clockClass. Priority: An administratively assigned precedence hint used by the BMCA to help select a grandmaster for the PTP domain. IEEE 1588-2002 used a single Boolean variable to indicate precedence. IEEE 1588-2008 features two 8-bit priority fields. Variance: A clock's estimate of its stability based on observation of its performance against the PTP reference. IEEE 1588-2008 uses a hierarchical selection algorithm based on the following properties, in the indicated order: Priority 1: the user can assign a static priority to each clock, preemptively defining a precedence among them. Smaller numeric values indicate higher priority. Class: each clock is a member of a given class, each class getting its own priority. Accuracy: precision between the clock and UTC, in nanoseconds (ns). Variance: variability of the clock. Priority 2: a final user-defined priority, defining the backup order in case the other criteria were not sufficient. Smaller numeric values indicate higher priority. Unique identifier: MAC address-based selection is used as a tiebreaker when all other properties are equal (a sketch of this ordering appears below). IEEE 1588-2002 uses a selection algorithm based on similar properties. Clock properties are advertised in IEEE 1588-2002 Sync messages and in IEEE 1588-2008 Announce messages. The current leader transmits this information at regular intervals. A clock that considers itself a better leader will transmit this information in order to invoke a change of leader. Once the current leader recognizes the better clock, the current leader stops transmitting Sync messages and associated clock properties (Announce messages in the case of IEEE 1588-2008) and the better clock takes over as leader. The BMCA only considers the self-declared quality of clocks and does not take network link quality into consideration. Synchronization Via the BMCA, PTP selects a source of time for an IEEE 1588 domain and for each network segment in the domain. Clocks determine the offset between themselves and their leader. Let the variable t represent physical time. For a given follower device, the offset o(t) at time t is defined by o(t) = s(t) − m(t), where s(t) represents the time measured by the follower clock at physical time t, and m(t) represents the time measured by the leader clock at physical time t. The leader periodically broadcasts the current time as a message to the other clocks. Under IEEE 1588-2002 broadcasts are up to once per second. Under IEEE 1588-2008, up to 10 per second are permitted. Each broadcast begins at time T1 with a Sync message sent by the leader to all the clocks in the domain. A clock receiving this message takes note of the local time T1' when this message is received. The leader may subsequently send a multicast Follow_Up message with an accurate timestamp T1. Not all leaders have the ability to present an accurate timestamp in the Sync message. It is only after the transmission is complete that they are able to retrieve an accurate timestamp for the Sync transmission from their network hardware. Leaders with this limitation use the Follow_Up message to convey T1. Leaders with PTP capabilities built into their network hardware are able to present an accurate timestamp in the Sync message and do not need to send Follow_Up messages. In order to accurately synchronize to their leader, clocks must individually determine the network transit time of the Sync messages. The transit time is determined indirectly by measuring round-trip time from each clock to its leader.
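The hierarchical ordering used by the IEEE 1588-2008 best master clock algorithm, as listed above (priority 1, then class, accuracy, variance, priority 2, and finally the unique identifier as a tiebreaker, with smaller values preferred), maps naturally onto an ordered-tuple comparison. The following Python sketch illustrates that ordering only; the field names are simplified placeholders rather than the exact dataset member names defined in the standard.

# Sketch of a BMCA-style ordering: candidate clocks are compared field by
# field (priority1, class, accuracy, variance, priority2, identifier),
# lower values winning at each step. Field names are simplified placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClockDataset:
    priority1: int      # administratively assigned, smaller = preferred
    clock_class: int
    accuracy: int       # coarser accuracy encoded as a larger value
    variance: int
    priority2: int      # administratively assigned backup ordering
    identity: int       # e.g. derived from the MAC address, used as tiebreaker

def comparison_key(c: ClockDataset):
    # Ordered tuple so that min() picks the preferred candidate.
    return (c.priority1, c.clock_class, c.accuracy,
            c.variance, c.priority2, c.identity)

def select_grandmaster(candidates: list[ClockDataset]) -> ClockDataset:
    # Pick the best clock among the advertised datasets.
    return min(candidates, key=comparison_key)

if __name__ == "__main__":
    gps_backed = ClockDataset(128, 6, 0x21, 200, 128, 0x001B19000001)
    free_running = ClockDataset(128, 248, 0xFE, 65535, 128, 0x001B19000002)
    print(select_grandmaster([gps_backed, free_running]))  # the GPS-backed clock wins

As the article notes, a real implementation compares only these self-declared properties; it does not weigh network link quality.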
The clocks initiate an exchange with their leader designed to measure the transit time d. The exchange begins with a clock sending a Delay_Req message at time T2 to the leader. The leader receives and timestamps the Delay_Req at time T2' and responds with a Delay_Resp message. The leader includes the timestamp T2' in the Delay_Resp message. Through these exchanges a clock learns T1, T1', T2, and T2'. If d is the transit time for the Sync message, and o is the constant offset between leader and follower clocks, then T1' − T1 = d + o and T2' − T2 = d − o. Combining the above two equations, we find that o = ((T1' − T1) − (T2' − T2))/2. The clock now knows the offset o during this transaction and can correct itself by this amount to bring it into agreement with its leader. One assumption is that this exchange of messages happens over a period of time so small that this offset can safely be considered constant over that period. Another assumption is that the transit time of a message going from the leader to a follower is equal to the transit time of a message going from the follower to the leader. Finally, it is assumed that both the leader and follower can accurately measure the time they send or receive a message. The degree to which these assumptions hold true determines the accuracy of the clock at the follower device. Optional features The IEEE 1588-2008 standard lists the following set of features that implementations may choose to support: Alternate Time-Scale Grand Master Cluster Unicast Masters Alternate Master Path Trace IEEE 1588-2019 adds additional optional and backward-compatible features: Modular transparent clocks Special PTP ports to interface with transports with built-in time distribution Unicast Delay_Req and Delay_Resp messages Manual port configuration overriding BMCA Asymmetry calibration Ability to utilize a physical layer frequency reference (e.g. Synchronous Ethernet) Profile isolation Inter-domain interactions Security TLV for integrity checking Standard performance reporting metrics Slave port monitoring Related initiatives The International IEEE Symposium on Precision Clock Synchronization for Measurement, Control and Communication (ISPCS) is an IEEE-organized annual event that includes a plugtest and a conference program with paper and poster presentations, tutorials and discussions covering several aspects of PTP. The Institute of Embedded Systems (InES) of the Zurich University of Applied Sciences/ZHAW is addressing the practical implementation and application of PTP. IEEE 1588 is a key technology in the LXI Standard for Test and Measurement communication and control. IEEE 802.1AS-2011 is part of the IEEE Audio Video Bridging (AVB) group of standards. It specifies a profile for use of IEEE 1588-2008 for time synchronization over a virtual bridged local area network as defined by IEEE 802.1Q. In particular, 802.1AS defines how IEEE 802.3 (Ethernet), IEEE 802.11 (Wi-Fi), and MoCA can all be parts of the same PTP timing domain. SMPTE 2059-2 is a PTP profile for use in synchronization of broadcast media systems. The AES67 audio networking interoperability standard includes a PTPv2 profile compatible with SMPTE ST2059-2. Dante uses PTPv1 for synchronization. Q-LAN and RAVENNA use PTPv2 for time synchronization. The White Rabbit Project combines Synchronous Ethernet and PTP.
Precision Time Protocol Industry Profile PTP profiles (L2P2P and L3E2E) for industrial automation in IEC 62439-3 IEC/IEEE 61850-9-3 PTP profile for substation automation adopted by IEC 61850 Parallel Redundancy Protocol use of PTP profiles (L2P2P and L3E2E) for industrial automation in parallel networks PTP is being studied to be applied as a secure time synchronization protocol in power systems' Wide Area Monitoring See also Notes References External links NIST IEEE 1588 site PTP documentation at InES PTP and Synchronization of LTE mobile networks PTP explained under the installation / maintenance point of view Hirschmann PTP Whitepaper PTP overview in Cisco CGS 2520 Switch Software Configuration Guide Perspectives and priorities on RuggedCom Smart Grid Research IEC 61850 Technologies Projects with Smart Substation Solution The White Rabbit Project PTP IEC&IEEE Precision Time Protocol, Pacworld, September 2016 IEC 62439-3 Annexes A-E Redundant attachment of clocks and network management PTPv2 Timing protocol in AV networks FSMLabs: Single source IEEE PTP 1588 cannot meet financial regulatory standards Synchronization IEEE standards Network time-related software Network protocols Application layer protocols
Precision Time Protocol
[ "Technology", "Engineering" ]
3,187
[ "Computer standards", "Telecommunications engineering", "IEEE standards", "Synchronization" ]
2,142,286
https://en.wikipedia.org/wiki/Critical%20engine
The critical engine of a multi-engine fixed-wing aircraft is the engine that, in the event of failure, would most adversely affect the performance or handling abilities of an aircraft. On propeller aircraft, there is a difference in the remaining yawing moments after failure of the left or the right (outboard) engine when all propellers rotate in the same direction, due to the P-factor. On turbojet and turbofan twin-engine aircraft, there usually is no difference between the yawing moments after failure of a left or right engine in no-wind conditions. Description When one of the engines on a typical multi-engine aircraft becomes inoperative, a thrust imbalance exists between the operative and inoperative sides of the aircraft. This thrust imbalance causes several negative effects in addition to the loss of one engine's thrust. The tail-design engineer is responsible for determining the size of vertical stabilizer that will comply with the regulatory requirements for the control and performance of an aircraft after engine failure, such as those set by the Federal Aviation Administration and European Aviation Safety Agency. The experimental test pilot and flight-test engineer use flight testing to determine which of the engines is the critical engine. Factors affecting engine criticality Asymmetrical yaw When one engine fails, a yawing moment develops, which applies a rotational force to the aircraft that tends to turn it toward the wing that carries the engine that failed. A rolling moment might also develop, due to the asymmetry of the lift on each wing, with greater lift generated by the wing with the operating engine. The yawing and rolling moments apply rotational forces that tend to yaw and roll the aircraft towards the failed engine. This tendency is counteracted by the pilot's use of the flight controls, which include the rudder and ailerons. Due to P-factor, a clockwise-rotating right-hand propeller on the right wing typically develops its resultant thrust vector at a greater lateral distance from the aircraft's center of gravity than the clockwise-rotating left-hand propeller (Figure 1). The failure of the left-hand engine will therefore result in a larger yawing moment from the operating right-hand engine than vice versa. Since the operating right-hand engine produces a larger yawing moment, the pilot will need to use larger deflections of the flight controls or a higher speed in order to maintain control of the aircraft. Thus, the failure of the left-hand engine has a greater impact than failure of the right-hand engine, and the left-hand engine is called the critical engine. On aircraft with propellers that rotate counter-clockwise, such as the de Havilland Dove, the right engine would be the critical engine. Most aircraft that have counter-rotating propellers do not have a critical engine defined by the above mechanism because the two propellers are made to rotate inward from the tops of their arcs; both engines are equally critical. Some aircraft, such as the Lockheed P-38 Lightning, purposefully have propellers that rotate outward from the tops of their arcs, to reduce downward air turbulence, known as downwash, on the central horizontal stabilizer, which makes it easier to fire guns from the aircraft. Both engines of such aircraft are critical, and the remaining yawing moment after an engine failure is larger than it would be with inward-rotating propellers.
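The asymmetry described above can be made concrete with a simple moment calculation: the yawing moment the pilot must counteract equals the operating engine's thrust multiplied by the lateral distance from the centre of gravity to that engine's effective thrust line, and P-factor shifts each thrust line toward the down-going blade. The Python sketch below uses illustrative numbers only; the thrust and moment-arm values are assumptions chosen to show the effect, not data for any particular aircraft.

# Illustrative comparison of remaining yawing moments after an engine failure
# on a light twin with both propellers rotating clockwise (viewed from behind).
# All numbers are assumptions for illustration only.
THRUST_N = 2200.0          # assumed thrust of the operating engine, newtons

# Effective lateral moment arms of each thrust line at high angle of attack.
# P-factor shifts the thrust line toward the down-going (right-hand) blade,
# so the RIGHT engine's arm grows while the LEFT engine's arm shrinks.
ARM_LEFT_ENGINE_M = 1.9    # assumed arm when the LEFT engine is still running
ARM_RIGHT_ENGINE_M = 2.3   # assumed arm when the RIGHT engine is still running

def yawing_moment(thrust_n: float, arm_m: float) -> float:
    # Moment about the yaw axis produced by the operating engine (N*m).
    return thrust_n * arm_m

if __name__ == "__main__":
    after_right_failure = yawing_moment(THRUST_N, ARM_LEFT_ENGINE_M)   # left engine running
    after_left_failure = yawing_moment(THRUST_N, ARM_RIGHT_ENGINE_M)   # right engine running
    print(f"Right engine failed: {after_right_failure:.0f} N*m to counteract")
    print(f"Left engine failed:  {after_left_failure:.0f} N*m to counteract")
    # The larger moment follows failure of the left engine, which is why the
    # left engine is the critical engine for this direction of propeller rotation.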
Aircraft with propellers in a push-pull configuration, such as the Cessna 337, may have a critical engine if failure of one engine has a greater negative effect on aircraft control or climb performance than failure of the other engine. The failure of a critical engine in an aircraft with propellers in a push-pull configuration typically will not generate large yawing or rolling moments. Effect of the critical engine on minimum control speed The standards and certifications that specify airworthiness require that the manufacturer determine a minimum control speed (VMC) at which a pilot can retain control of the aircraft after failure of the critical engine, and publish this speed in the section of the airplane flight manual on limitations. The published minimum control speeds (VMCs) of the aircraft are measured when the critical engine fails or is inoperative, so the effect of the failure of the critical engine is included in the published VMCs. When any one of the other engines fails or is inoperative, the actual VMC that the pilot experiences in flight will be slightly lower, which is safer, but this difference is not documented in the manual. The critical engine is one of the factors that influences the VMCs of the aircraft. The published VMCs are safe regardless of which engine fails or is inoperative, and pilots do not need to know which engine is critical in order to fly safely. The critical engine is defined in aviation regulations for the purpose of designing the tail, and for experimental test pilots to measure VMCs in flight. Other factors, such as bank angle and thrust, have a much greater effect on VMCs than the difference between a critical and a non-critical engine. The Airbus A400M has an atypical design, because it has counter-rotating propellers on both wings. The propellers on a wing rotate in opposite directions to each other: the propellers rotate from the top of the arc downward toward each other. If both engines on a wing are operative, the shift of the thrust vector with increasing angle of attack is always towards the other engine on the same wing. The effect is that the resultant thrust vector of both engines on the same wing does not shift as the angle of attack of the airplane increases, as long as both engines are operating. There is no total P-factor, and failure of either outboard engine (i.e. engine 1 or 4) will result in no difference in the magnitude of the remaining thrust yawing moments with increasing angle of attack, only in their direction, left or right. The minimum control speed during takeoff (VMC) and during flight (VMCA) after failure of either one of the outboard engines will be the same unless boosting systems that may be required for controlling the airplane are installed on only one of the outboard engines. Both outboard engines would be critical. If an outboard engine fails, such as engine 1 as shown in Figure 2, the moment arm of the vector of the remaining thrust on that wing moves from a point between the engines to a point slightly outboard of the remaining inboard engine. The magnitude of the remaining thrust on that wing is 50% of the thrust on the opposite wing. The resulting thrust yawing moment is much smaller in this case than for conventional propeller rotation. The maximum rudder yawing moment to counteract the asymmetrical thrust can be smaller and, consequently, the size of the vertical tail can be smaller.
The feathering system of the large, 8-bladed, 17.5-foot (5.33 m) diameter propellers must be automatic, very rapid and failure-free, to ensure the lowest possible propeller drag following a propulsion-system malfunction. If not, a failure of the feathering system of an outboard engine will increase propeller drag, which in turn increases the thrust yawing moment considerably, thus increasing the actual VMC(A). The control power generated by the small vertical tail and rudder alone is low because of their small size. Only a rapid reduction of thrust on the opposite engine, or an increased airspeed, can restore the required control power to maintain straight flight following the failure of a feathering system. Designing and approving the feathering system for this airplane is challenging for design engineers and certification authorities. On airplanes with very powerful engines, the problem of asymmetrical thrust is solved by applying automatic thrust asymmetry compensation, but this has consequences for takeoff performance. Elimination The Rutan Boomerang is an asymmetrical aircraft designed with engines of slightly different power outputs, producing an aircraft that eliminates the dangers of asymmetric thrust in the event of failure of either of its two engines. References External links On-line Engine-out Trainer at University of North Dakota Aircraft engines Aviation safety
Critical engine
[ "Technology" ]
1,606
[ "Engines", "Aircraft engines" ]
2,142,405
https://en.wikipedia.org/wiki/Palm%20i705
The Palm i705 was an upgrade from the last series of Palm PDAs to use the now-discontinued Palm.net service via Mobitex to access the World Wide Web from Palm devices. It featured 8 MB of onboard memory and an SD/MMC slot for additional storage or SDIO cards. It used the Motorola Dragonball VZ 33 MHz processor and ran Palm OS 4.1. It was noted as being the first Palm.net-capable device without a flip-out antenna and with an internal rechargeable battery, although it was the third and final of the three models manufactured by Palm that were capable of utilizing this network. See also Palm.net Palm (PDA) Palm OS PalmSource, Inc. Palm, Inc. Graffiti (Palm OS) External links Palm i705 Handheld Debuts: Only Secure, Integrated Wireless, Email Solution With Web Access, Palm Press Release, January 28, 2002 i705 68k-based mobile devices
Palm i705
[ "Technology" ]
196
[ "Mobile computer stubs", "Mobile technology stubs" ]
2,142,653
https://en.wikipedia.org/wiki/Crankcase
A crankcase is the housing in a piston engine that surrounds the crankshaft. In most modern engines, the crankcase is integrated into the engine block. Two-stroke engines typically use a crankcase-compression design, resulting in the fuel/air mixture passing through the crankcase before entering the cylinder(s). This design of the engine does not include an oil sump in the crankcase. Four-stroke engines typically have an oil sump at the bottom of the crankcase and the majority of the engine's oil is held within the crankcase. The fuel/air mixture does not pass through the crankcase, though a small amount of exhaust gasses often enter as "blow-by" from the combustion chamber, particularly in engines with worn rings. The crankcase often forms the upper half of the main bearing journals (with the bearing caps forming the other half), although in some engines the crankcase completely surrounds the main bearing journals. An open-crank engine has no crankcase. This design was used in early engines and remains in use in some large marine diesel engines. Two-stroke engines Crankcase-compression Many two-stroke engines use a crankcase-compression design, where a partial vacuum draws the fuel/air mixture into the engine as the piston moves upwards. Then as the piston travels downward, the inlet port is uncovered and the compressed fuel/air mixture is pushed from the crankcase into the combustion chamber. Crankcase-compression designs are often used in small petrol (gasoline) engines for motorcycles, generator sets and garden equipment. This design has also been used in some small diesel engines, however it is less common. Both sides of the piston are used as working surfaces: the upper side is the power piston, the lower side acts as a pump. Therefore an inlet valve is not required. Unlike other types of engines, there is no supply of oil to the crankcase, because it handles the fuel/air mixture. Instead, two-stroke oil is mixed with the fuel used by the engine and burned in the combustion chamber. Lubricating crankcase Large two-stroke engines do not use crankcase compression, but instead a separate scavenge blower or supercharger to draw the fuel/air mixture into the compression chamber. Therefore the crankcases are similar to a four-stroke engine in that they are solely used for lubrication purposes. Four-stroke engines Most four-stroke engines use a crankcase that contains the engine's lubricating oil, as either a wet sump system or the less common dry sump system. Unlike a two-stroke (crankcase-compression) engine, the crankcase in a four-stroke engine is not used for the fuel/air mixture. Oil circulation Engine oil is recirculated around a four-stroke engine (rather than burning it as happens in a two-stroke engine) and much of this occurs within the crankcase. Oil is stored either at the bottom of the crankcase (in a wet sump engine) or in a separate reservoir (in a dry sump system). From here the oil is pressurized by an oil pump (and usually passes through an oil filter) before it is squirted into the crankshaft and connecting rod bearings and onto the cylinder walls, and eventually drips off into the bottom of the crankcase. Even in a wet sump system, the crankshaft has minimal contact with the sump oil. Otherwise, the high-speed rotation of the crankshaft would cause the oil to froth, making it difficult for the oil pump to move the oil, which can starve the engine of lubrication. 
Oil from the sump may splash onto the crankshaft due to g-forces or bumpy roads, which is referred to as windage. Ventilation of combustion gasses Although the piston rings are intended to seal the combustion chamber from the crankcase, it is normal for some combustion gases to escape around the piston rings and enter the crankcase. This phenomenon is known as blow-by. If these gases accumulated within the crankcase, it would cause unwanted pressurisation of the crankcase, contamination of the oil and rust from condensation. To prevent this, modern engines use a crankcase ventilation system to expel the combustion gases from the crankcase. In most cases, the gases are passed through to the intake manifold. Open-crank engines Early engines were of the "open-crank" style, that is, there was no enclosed crankcase. The crankshaft and associated parts were open to the environment. That made for a messy environment, because oil spray from the moving parts was not contained. Another disadvantage was that dirt and dust could get into moving engine parts, causing excessive wear and possible malfunction of the engine. Frequent cleaning of the engine was required to keep it in normal working order. Some two-stroke diesel engines, such as the large slow-speed engines used in ships, have the crankcase as a separate space from the cylinders, or as an open crank. The spaces between the crosshead piston and the crankshaft, may be largely open for maintenance access. See also Tunnel crankcase References Engine technology Engine components
Crankcase
[ "Technology" ]
1,055
[ "Engine technology", "Engine components", "Engines" ]
2,142,734
https://en.wikipedia.org/wiki/Notation%20in%20probability%20and%20statistics
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols. Probability theory Random variables are usually written in upper case Roman letters, such as X or Y and so on. Random variables, in this context, usually refer to something in words, such as "the height of a subject" for a continuous variable, or "the number of cars in the school car park" for a discrete variable, or "the colour of the next bicycle" for a categorical variable. They do not represent a single number or a single category. For instance, if P(X = x) is written, then it represents the probability that a particular realisation of a random variable (e.g., height, number of cars, or bicycle colour), X, would be equal to a particular value or category (e.g., 1.735 m, 52, or purple), x. It is important that X and x are not confused into meaning the same thing. X is an idea, x is a value. Clearly they are related, but they do not have identical meanings. Particular realisations of a random variable are written in corresponding lower case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X. A cumulative probability is formally written P(X ≤ x) to distinguish the random variable from its realization. The probability is sometimes written ℙ to distinguish it from other functions and measure P, to avoid having to define "P is a probability", and ℙ(X = x) is short for P({ω ∈ Ω : X(ω) = x}), where Ω is the event space, X is a random variable that is a function of ω (i.e., it depends upon ω), and x is some outcome of interest within the domain specified by X (say, a particular height, or a particular colour of a car). Pr(A) notation is used alternatively. P(A ∩ B) or P(A, B) indicates the probability that events A and B both occur. The joint probability distribution of random variables X and Y is denoted as P(X, Y), while the joint probability mass function or probability density function is denoted as f(x, y) and the joint cumulative distribution function as F(x, y). P(A ∪ B) or P(A ∨ B) indicates the probability of either event A or event B occurring ("or" in this case means one or the other or both). σ-algebras are usually written with uppercase calligraphic letters (e.g. ℱ for the set of sets on which we define the probability P). Probability density functions (pdfs) and probability mass functions are denoted by lowercase letters, e.g. f(x) or fX(x). Cumulative distribution functions (cdfs) are denoted by uppercase letters, e.g. F(x) or FX(x). Survival functions or complementary cumulative distribution functions are often denoted by placing an overbar over the symbol for the cumulative: F̄(x) = 1 − F(x), or denoted as S(x). In particular, the pdf of the standard normal distribution is denoted by φ(z), and its cdf by Φ(z). Some common operators: E[X]: expected value of X; Var(X): variance of X; Cov(X, Y): covariance of X and Y. "X is independent of Y" is often written X ⊥ Y or X ⊥⊥ Y, and "X is independent of Y given W" is often written X ⊥ Y | W or X ⊥⊥ Y | W. P(A | B), the conditional probability, is the probability of A given B. Statistics Greek letters (e.g. θ, β) are commonly used to denote unknown parameters (population parameters). A tilde (~) denotes "has the probability distribution of". Placing a hat, or caret (also known as a circumflex), over a true parameter denotes an estimator of it, e.g., θ̂ is an estimator for θ. The arithmetic mean of a series of values x1, x2, ..., xn is often denoted by placing an "overbar" over the symbol, e.g. x̄, pronounced "x bar". Some commonly used symbols for sample statistics are given below: the sample mean x̄, the sample variance s², the sample standard deviation s, the sample correlation coefficient r, the sample cumulants k1, k2, ... .
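To make the sample-statistic symbols above concrete, the following formulas are the standard textbook definitions written in the notation just described; they are added here for illustration rather than taken from the article.

\[
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad
s = \sqrt{s^2}, \qquad
r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}
         {\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}.
\]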
Some commonly used symbols for population parameters are given below: the population mean μ, the population variance σ², the population standard deviation σ, the population correlation ρ, the population cumulants κ1, κ2, ... . x(k) is used for the kth order statistic, where x(1) is the sample minimum and x(n) is the sample maximum from a total sample size n. Critical values The α-level upper critical value of a probability distribution is the value exceeded with probability α, that is, the value xα such that F(xα) = 1 − α, where F is the cumulative distribution function. There are standard notations for the upper critical values of some commonly used distributions in statistics: zα or z(α) for the standard normal distribution; tα,ν or t(α, ν) for the t-distribution with ν degrees of freedom; χ²α,ν or χ²(α, ν) for the chi-squared distribution with ν degrees of freedom; Fα,ν1,ν2 or F(α, ν1, ν2) for the F-distribution with ν1 and ν2 degrees of freedom. Linear algebra Matrices are usually denoted by boldface capital letters, e.g. A. Column vectors are usually denoted by boldface lowercase letters, e.g. x. The transpose operator is denoted by either a superscript T (e.g. Aᵀ) or a prime symbol (e.g. A′). A row vector is written as the transpose of a column vector, e.g. xᵀ or x′. Abbreviations Common abbreviations include: a.e. almost everywhere a.s. almost surely cdf cumulative distribution function cmf cumulative mass function df degrees of freedom (also ν) i.i.d. independent and identically distributed pdf probability density function pmf probability mass function r.v. random variable w.p. with probability; wp1 with probability 1 i.o. infinitely often, i.e. lim sup An ult. ultimately, i.e. lim inf An See also Glossary of probability and statistics Combinations and permutations History of mathematical notation References External links Earliest Uses of Symbols in Probability and Statistics, maintained by Jeff Miller. Notation Mathematical notation
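As a worked illustration of the critical-value notation described above (standard usage, added here for illustration), the α-level upper critical value of the standard normal distribution satisfies

\[
P(Z > z_\alpha) = 1 - \Phi(z_\alpha) = \alpha
\quad\Longleftrightarrow\quad
z_\alpha = \Phi^{-1}(1-\alpha),
\]

so that, for example, z0.05 ≈ 1.645 and z0.025 ≈ 1.960.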
Notation in probability and statistics
[ "Mathematics" ]
1,111
[ "Applied mathematics", "Probability and statistics", "nan" ]
2,142,850
https://en.wikipedia.org/wiki/Glossary%20of%20probability%20and%20statistics
This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design. See also Notation in probability and statistics Probability axioms Glossary of experimental design List of statistical topics List of probability topics Glossary of areas of mathematics Glossary of calculus References External links Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton) Glossary Statistics-related lists Probability and statistics Probability and statistics Wikipedia glossaries using description lists
Glossary of probability and statistics
[ "Mathematics" ]
149
[ "Applied mathematics", "Probability and statistics" ]
2,142,913
https://en.wikipedia.org/wiki/Diquark
In particle physics, a diquark, or diquark correlation/clustering, is a hypothetical state of two quarks grouped inside a baryon (that consists of three quarks). Corresponding models of baryons are referred to as quark–diquark models. The diquark is often treated as a single subatomic particle with which the third quark interacts via the strong interaction. The existence of diquarks inside the nucleons is a disputed issue, but it helps to explain some nucleon properties and to reproduce experimental data sensitive to the nucleon structure. Diquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872). Formation The force between the two quarks in a diquark is attractive when both the colors and spins are antisymmetric. When the two quarks are correlated in this way, they tend to form a very low energy configuration. This low energy configuration has become known as a diquark. Controversy Many scientists theorize that a diquark should not be considered a particle. Even though diquarks contain two quarks, they are not color neutral, and therefore cannot exist as isolated bound states. Instead, they tend to float freely inside hadrons as composite entities; while free-floating they have a size of about . This also happens to be the same size as the hadron itself. Uses Diquarks are conceptual building blocks, and as such give scientists an ordering principle for the most important states in the hadronic spectrum. There are many different pieces of evidence that suggest diquarks play a fundamental role in the structure of hadrons. One of the most compelling pieces of evidence comes from a recent study of baryons. In this study the baryons had one heavy and two light quarks. Since the heavy quark is inert, the scientists were able to discern the properties of the different quark configurations in the hadronic spectrum. Λ and Σ baryon experiment An experiment was conducted using diquarks in an attempt to study the Λ and Σ baryons that are produced when fast-moving quarks form hadrons. In the experiment the quarks ionized the vacuum, producing quark–antiquark pairs, which then converted themselves into mesons. When generating a baryon by assembling quarks, it is helpful if the quarks first form a stable two-quark state. The Λ and the Σ are both created from up, down and strange quarks. Scientists found that the Λ contains the [ud] diquark, whereas the Σ does not. From this experiment scientists inferred that Λ baryons are more common than Σ baryons, and indeed they are more common by a factor of 10. References Further reading Quarks Hypothetical elementary particles
Diquark
[ "Physics" ]
602
[ "Hypothetical elementary particles", "Unsolved problems in physics", "Physics beyond the Standard Model" ]
2,143,519
https://en.wikipedia.org/wiki/Serge%20Vaudenay
Serge Vaudenay (born 5 April 1968) is a French cryptographer and professor, director of the Communications Systems Section at the École Polytechnique Fédérale de Lausanne. Serge Vaudenay entered the École Normale Supérieure in Paris as a normalien student in 1989. In 1992, he passed the agrégation in mathematics. He completed his Ph.D. studies at the computer science laboratory of the École Normale Supérieure, and defended it in 1995 at the Paris Diderot University; his advisor was Jacques Stern. From 1995 to 1999, he was a senior research fellow at the French National Centre for Scientific Research (CNRS). In 1999, he moved to a professorship at the École Polytechnique Fédérale de Lausanne, where he leads the Laboratory of Security and Cryptography (LASEC). LASEC is host to two popular security programs developed by its members: iChair, developed by Thomas Baignères and Matthieu Finiasz, a popular on-line submission and review server used by many cryptography conferences; and Ophcrack, a Microsoft Windows password cracker based on rainbow tables, by Philippe Oechslin. In spring 2020, together with Martin Vuagnoux, he identified various security vulnerabilities in SwissCovid, the Swiss digital contact-tracing application. The system would thus allow a third party to trace the movements of a phone using the application by means of Bluetooth sensors scattered along its path, for example in a building. Another possible attack would be to copy identifiers from the phones of people who may be ill (for example, in a hospital), and to reproduce those identifiers in order to receive notification of exposure to COVID-19 and illegitimately benefit from quarantine (thus entitling them to paid leave, a postponed examination, or other benefits). Vaudenay and his team have developed several security protocols for a number of projects, in particular to reinforce the biometric identification technology based on vein scanning developed by Lambert Sonna Momo. Vaudenay has published several papers related to cryptanalysis and design of block ciphers and protocols. He is one of the authors of the IDEA NXT (FOX) algorithm (together with Pascal Junod). He was the inventor of the padding oracle attack on the CBC mode of encryption. Vaudenay also discovered a severe vulnerability in the SSL/TLS protocol; the attack he devised could lead to the interception of passwords. He also published a paper about biased statistical properties in the Blowfish cipher and is one of the authors of the best attack on the Bluetooth cipher E0. In 1997 he introduced decorrelation theory, a system for designing block ciphers to be provably secure against many cryptanalytic attacks. Vaudenay was appointed program chair of Eurocrypt 2006, PKC 2005 and FSE 1998, and in 2006 was elected as a board member of the International Association for Cryptologic Research. References External links Serge Vaudenay's Homepage LASEC at EPFL iChair at LASEC Ophcrack at Sourceforge French cryptographers 1968 births Living people People from Saint-Maur-des-Fossés Modern cryptographers École Normale Supérieure alumni French computer scientists Academic staff of the École Polytechnique Fédérale de Lausanne
Serge Vaudenay
[ "Technology" ]
708
[ "Computing stubs", "Computer specialist stubs" ]
2,143,560
https://en.wikipedia.org/wiki/Ky%20Fan%20inequality
In mathematics, there are two different results that share the common name of the Ky Fan inequality. One is an inequality involving the geometric mean and arithmetic mean of two sets of real numbers of the unit interval. The result was published on page 5 of the book Inequalities by Edwin F. Beckenbach and Richard E. Bellman (1961), who refer to an unpublished result of Ky Fan. They mention the result in connection with the inequality of arithmetic and geometric means and Augustin Louis Cauchy's proof of this inequality by forward-backward-induction; a method which can also be used to prove the Ky Fan inequality. This Ky Fan inequality is a special case of Levinson's inequality and also the starting point for several generalizations and refinements; some of them are given in the references below. The second Ky Fan inequality is used in game theory to investigate the existence of an equilibrium. Statement of the classical version If 0 ≤ xi ≤ 1/2 for i = 1, ..., n, then
\[ \frac{\bigl(\prod_{i=1}^n x_i\bigr)^{1/n}}{\bigl(\prod_{i=1}^n (1-x_i)\bigr)^{1/n}} \;\le\; \frac{\tfrac1n\sum_{i=1}^n x_i}{\tfrac1n\sum_{i=1}^n (1-x_i)} \]
with equality if and only if x1 = x2 = ⋅ ⋅ ⋅ = xn. Remark Let A_n and G_n denote the arithmetic and geometric mean, respectively, of x1, . . ., xn, and let A_n′ and G_n′ denote the arithmetic and geometric mean, respectively, of 1 − x1, . . ., 1 − xn. Then the Ky Fan inequality can be written as
\[ \frac{G_n}{G_n'} \le \frac{A_n}{A_n'}, \]
which shows the similarity to the inequality of arithmetic and geometric means given by Gn ≤ An. Generalization with weights If xi ∈ [0, 1/2] and γi ∈ [0,1] for i = 1, . . ., n are real numbers satisfying γ1 + . . . + γn = 1, then
\[ \frac{\prod_{i=1}^n x_i^{\gamma_i}}{\prod_{i=1}^n (1-x_i)^{\gamma_i}} \;\le\; \frac{\sum_{i=1}^n \gamma_i x_i}{\sum_{i=1}^n \gamma_i (1-x_i)} \]
with the convention 0⁰ := 0. Equality holds if and only if either γixi = 0 for all i = 1, . . ., n, or all xi > 0 and there exists x ∈ (0, 1/2] such that x = xi for all i = 1, . . ., n with γi > 0. The classical version corresponds to γi = 1/n for all i = 1, . . ., n. Proof of the generalization Idea: Apply Jensen's inequality to the strictly concave function
\[ f(x) = \ln x - \ln(1-x) = \ln\frac{x}{1-x}, \qquad 0 < x \le \tfrac12. \]
Detailed proof: (a) If at least one xi is zero, then the left-hand side of the Ky Fan inequality is zero and the inequality is proved. Equality holds if and only if the right-hand side is also zero, which is the case when γixi = 0 for all i = 1, . . ., n. (b) Assume now that all xi > 0. If there is an i with γi = 0, then the corresponding xi > 0 has no effect on either side of the inequality, hence the ith term can be omitted. Therefore, we may assume that γi > 0 for all i in the following. If x1 = x2 = . . . = xn, then equality holds. It remains to show strict inequality if not all xi are equal. The function f is strictly concave on (0, 1/2], because we have for its second derivative
\[ f''(x) = -\frac{1}{x^2} + \frac{1}{(1-x)^2} < 0, \qquad 0 < x < \tfrac12. \]
Using the functional equation for the natural logarithm and Jensen's inequality for the strictly concave f, we obtain that
\[ \ln\frac{\prod_{i=1}^n x_i^{\gamma_i}}{\prod_{i=1}^n (1-x_i)^{\gamma_i}} = \sum_{i=1}^n \gamma_i f(x_i) < f\Bigl(\sum_{i=1}^n \gamma_i x_i\Bigr) = \ln\frac{\sum_{i=1}^n \gamma_i x_i}{\sum_{i=1}^n \gamma_i (1-x_i)}, \]
where we used in the last step that the γi sum to one. Taking the exponential of both sides gives the Ky Fan inequality. The Ky Fan inequality in game theory A second inequality is also called the Ky Fan Inequality, because of a 1972 paper, "A minimax inequality and its applications". This second inequality is equivalent to the Brouwer Fixed Point Theorem, but is often more convenient. Let S be a compact convex subset of a finite-dimensional vector space V, and let f be a function from S × S to the real numbers that is lower semicontinuous in x, concave in y and has f(z, z) ≤ 0 for all z in S. Then there exists x* ∈ S such that f(x*, y) ≤ 0 for all y ∈ S. This Ky Fan Inequality is used to establish the existence of equilibria in various games studied in economics. References External links Inequalities Articles containing proofs
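As a quick numerical sanity check of the classical Ky Fan inequality stated above, the following short script is added for illustration; the sample values are arbitrary numbers in (0, 1/2] and are not taken from the article.

import math

# Numerical check of the classical Ky Fan inequality for 0 < x_i <= 1/2:
#   G / G' <= A / A',
# where G, A are the geometric and arithmetic means of the x_i and
# G', A' those of the complements 1 - x_i.

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def arithmetic_mean(values):
    return sum(values) / len(values)

x = [0.1, 0.25, 0.4, 0.5]        # all in (0, 1/2]
comp = [1 - v for v in x]        # complements 1 - x_i

lhs = geometric_mean(x) / geometric_mean(comp)
rhs = arithmetic_mean(x) / arithmetic_mean(comp)

print(f"G/G' = {lhs:.6f}, A/A' = {rhs:.6f}")
assert lhs <= rhs + 1e-12        # Ky Fan: strict inequality unless all x_i are equal

For this sample the left-hand side is about 0.396 and the right-hand side about 0.455, so the inequality holds strictly, as expected since the values are not all equal.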
Ky Fan inequality
[ "Mathematics" ]
837
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Articles containing proofs", "Mathematical theorems" ]
2,143,634
https://en.wikipedia.org/wiki/Water-meadow
A water-meadow (also water meadow or watermeadow) is an area of grassland subject to controlled irrigation to increase agricultural productivity. Water-meadows were mainly used in Europe from the 16th to the early 20th centuries. Working water-meadows have now largely disappeared, but the field patterns and water channels of derelict water-meadows remain common in areas where they were used, such as parts of Northern Italy, Switzerland and England. Derelict water-meadows are often of importance as wetland wildlife habitats. Water-meadows should not be confused with flood-meadows, which are naturally covered in shallow water by seasonal flooding from a river. "Water-meadow" is sometimes used more loosely to mean any level grassland beside a river. Types Two main types of water-meadow were used. Catchwork water-meadow These were used for fields on slopes, and relatively little engineering skill was required to construct them. Water from a stream or spring was fed to the top of a sloping field, and gentle sloping terraces were formed along which the water could trickle in a zig-zag fashion down the field. The water could be used again for fields lower down the slope. Bedwork water-meadow Bedwork or floated water-meadows were built on almost-level fields along broad river valleys; they required careful construction to ensure correct operation. A leat, called a main, carrier or top carrier, diverted water from the river and carried it down the valley at a gentler slope than the river, producing a hydrostatic head between the two. Mains were often along the edge of the valley, each main supplying up to about of the valley. The water from the main was used to supply many smaller carriers, on the crests of ridges built across the fields. The channel on the crest of each ridge would overflow slowly down the sides (the panes) of the ridge, the channel eventually tapering to an end at the tip of the ridge. The seeping water would then collect between the ridges, in drains or drawns, these joining to form a bottom carrier or tail drain which returned the water to the river. The ridges and the drains made an interlocking grid (like interlaced fingers), but the ridge-top channels and the drains did not connect directly. A by-carrier took any water not needed for irrigation straight from the main back to the river. The ridges varied in height depending on the available head – usually from around . The pattern of carriers and drains was generally regular, but it was adapted to fit the natural topography of the ground and the locations of suitable places for the offtake and return of water. The water flow was controlled by a system of hatches (sluice gates) and stops (small earth or wooden-board dams). Irrigation could be provided separately for each section of water-meadow. Sometimes aqueducts took carriers over drains, and causeways and culverts provided access for wagons. The working or floating (irrigation) and maintenance of the water-meadow was done by a highly skilled craftsman called a drowner or waterman, who was often employed by several adjacent farmers. The terminology used for watermeadows varied considerably with locality and dialect. Uses Water-meadow irrigation did not aim to flood the ground, but to keep it continuously damp – a working water-meadow has no standing water. Irrigation in early spring kept frosts off the ground and so allowed grass to grow several weeks earlier than otherwise, and in dry summer weather irrigation kept the grass growing. 
It also allowed the ground to absorb any plant nutrients or silt carried by the river water – this fertilised the grassland, and incidentally also reduced eutrophication of the river water by nutrient pollution. The grass was used both for making hay and for grazing by livestock (usually cattle or sheep). Derelict water-meadows Former water-meadows are found along many river valleys, where the sluice gates, channels and field ridges may still be visible (however the ridges should not be confused with ridge and furrow topography, which is found on drier ground and has a very different origin in arable farming). The drains in a derelict water-meadow are generally clogged and wet, and most of the carrier channels are dry, with the smaller ones on the ridge-tops often invisible. If any main carrier channels still flow, they usually connect permanently to the by-carriers. The larger sluices may be concealed under the roots of trees (such as crack willows), which have grown up from seedlings established in the brickwork. The complex mixture of wet and drier ground often gives derelict water-meadows particularly high wetland biodiversity. Working water-meadows Derelict water-meadows can be transformed into wildlife protection and conservation areas by repairing and operating the irrigation, as is the case of Josefov Meadows in the Czech Republic. By imitating the natural river flooding which is rare in modern straightened and dammed rivers, a rich biodiversity can be restored and attract and sustain many rare and protected wetland species. See also Flood-meadow Coastal plain Field Flooded grasslands and savannas Grassland Pasture Plain Prairie Riparian zone Wet meadow Floodplain Berm Further reading Hadrian Cook and Tom Williamson (eds.), Water Management in the English Landscape: Field, Marsh and Meadow. Edinburgh: Edinburgh University Press, 1999. External links Includes detailed description of bedwork and catchwork water-meadows. Upper Test Valley Description of the upper River Test valley in southern England, including description of catchwork water-meadows. Harnham Water Meadows Includes animation of water flow. Water Meadows: The lush pastures of the river valleys Description, terminology and diagrams of floated water-meadows. Nitrogen Transformations in Wetlands: Effects of Water Flow Patterns—PhD thesis on watermeadows (PDF) Parapotamische Nutzungssysteme – Wiesenwässerung am Fuß des Kaiserstuhls—PhD thesis on watermeadows Environmental terminology History of agriculture Landscape history Meadows Rivers Water and the environment Wetlands
Water-meadow
[ "Environmental_science" ]
1,219
[ "Hydrology", "Wetlands" ]
2,143,855
https://en.wikipedia.org/wiki/Sauter%20mean%20diameter
In fluid dynamics, Sauter mean diameter (SMD) is an average measure of particle size. It was originally developed by German scientist Josef Sauter in the late 1920s. It is defined as the diameter of a sphere that has the same volume/surface area ratio as a particle of interest. Several methods have been devised to obtain a good estimate of the SMD. Definition The Sauter diameter (SD, also denoted D[3,2] or d_32) for a given particle is defined as
\[ SD = \frac{d_v^3}{d_s^2}, \]
where d_s is the so-called surface diameter and d_v is the volume diameter, defined as
\[ d_s = \sqrt{\frac{A_p}{\pi}}, \qquad d_v = \Bigl(\frac{6 V_p}{\pi}\Bigr)^{1/3}. \]
The quantities A_p and V_p are the ordinary surface area and volume of the particle, respectively. The equation may be simplified further as
\[ SD = 6\,\frac{V_p}{A_p}. \]
Over a sample of particles this is averaged to obtain the Sauter mean diameter (SMD),
\[ SMD = \frac{\sum_i d_{v,i}^3}{\sum_i d_{s,i}^2} = 6\,\frac{\sum_i V_{p,i}}{\sum_i A_{p,i}}, \]
which for spherical particles of diameters d_i reduces to the ratio of the summed cubes to the summed squares of the diameters. This provides intrinsic data that help determine the particle size for fluid problems. Applications The SMD can be defined as the diameter of a drop having the same volume/surface area ratio as the entire spray. SMD is especially important in calculations where the active surface area is important. Such areas include catalysis and applications in fuel combustion. See also Sphericity References Fluid dynamics Length
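To make the population-level definition concrete, here is a small illustrative script; it is not from the original article and the droplet diameters are made up. It computes the Sauter mean diameter of a set of spherical droplets both from the d³/d² ratio and from the total volume and surface area, which should agree.

import math

def sauter_mean_diameter(diameters):
    """Sauter mean diameter D[3,2] of spherical particles: sum(d^3) / sum(d^2)."""
    return sum(d**3 for d in diameters) / sum(d**2 for d in diameters)

# Hypothetical droplet diameters in micrometres.
d = [10.0, 12.0, 18.0, 25.0, 40.0]

d32 = sauter_mean_diameter(d)

# Cross-check via 6 * total volume / total surface area of the same spheres.
total_volume = sum(math.pi * di**3 / 6.0 for di in d)
total_area = sum(math.pi * di**2 for di in d)
d32_check = 6.0 * total_volume / total_area

print(f"D[3,2] = {d32:.3f} um, cross-check = {d32_check:.3f} um")
assert math.isclose(d32, d32_check)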
Sauter mean diameter
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
253
[ "Scalar physical quantities", "Physical quantities", "Distance", "Chemical engineering", "Quantity", "Size", "Length", "Piping", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
2,144,050
https://en.wikipedia.org/wiki/Hydrion%20paper
Hydrion is a trademarked name for a popular line of compound pH indicators, marketed by Micro Essential Laboratory Inc., exhibiting a series of color changes (typically producing a recognizably different color for each pH unit) over a range of pH values. Although solutions are available, the most common forms of Hydrion are a series of papers impregnated with various mixtures of indicator dyes. It is considered a "universal indicator". See also PHydrion External links Micro Essential Laboratory, Inc. website PH indicators
Hydrion paper
[ "Chemistry", "Materials_science" ]
112
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
2,144,362
https://en.wikipedia.org/wiki/Voice%20user%20interface
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text to speech to play a reply. A voice command device is a device controlled with a voice user interface. Voice user interfaces have been added to automobiles, home automation systems, computer operating systems, home appliances like washing machines and microwave ovens, and television remote controls. They are the primary way of interacting with virtual assistants on smartphones and smart speakers. Older automated attendants (which route phone calls to the correct extension) and interactive voice response systems (which conduct more complicated transactions over the phone) can respond to the pressing of keypad buttons via DTMF tones, but those with a full voice user interface allow callers to speak requests and responses without having to press any buttons. Newer voice command devices are speaker-independent, so they can respond to multiple voices, regardless of accent or dialectal influences. They are also capable of responding to several commands at once, separating vocal messages, and providing appropriate feedback, accurately imitating a natural conversation. Overview A VUI is the interface to any speech application. Only a short time ago, controlling a machine by simply talking to it was only possible in science fiction. Until recently, this area was considered to be artificial intelligence. However, advances in technologies like text-to-speech, speech-to-text, natural language processing, and cloud services contributed to the mass adoption of these types of interfaces. VUIs have become more commonplace, and people are taking advantage of the value that these hands-free, eyes-free interfaces provide in many situations. VUIs need to respond to input reliably, or they will be rejected and often ridiculed by their users. Designing a good VUI requires interdisciplinary talents of computer science, linguistics and human factors psychology – all of which are skills that are expensive and hard to come by. Even with advanced development tools, constructing an effective VUI requires an in-depth understanding of both the tasks to be performed, as well as the target audience that will use the final system. The closer the VUI matches the user's mental model of the task, the easier it will be to use with little or no training, resulting in both higher efficiency and higher user satisfaction. A VUI designed for the general public should emphasize ease of use and provide a lot of help and guidance for first-time callers. In contrast, a VUI designed for a small group of power users (including field service workers), should focus more on productivity and less on help and guidance. Such applications should streamline the call flows, minimize prompts, eliminate unnecessary iterations and allow elaborate "mixed initiative dialogs", which enable callers to enter several pieces of information in a single utterance and in any order or combination. In short, speech applications have to be carefully crafted for the specific business process that is being automated. Not all business processes render themselves equally well for speech automation. In general, the more complex the inquiries and transactions are, the more challenging they will be to automate, and the more likely they will be to fail with the general public. In some scenarios, automation is simply not applicable, so live agent assistance is the only option. 
A legal advice hotline, for example, would be very difficult to automate. On the flip side, speech is perfect for handling quick and routine transactions, like changing the status of a work order, completing a time or expense entry, or transferring funds between accounts. History Early applications for VUI included voice-activated dialing of phones, either directly or through a (typically Bluetooth) headset or vehicle audio system. In 2007, a CNN business article reported that voice command was over a billion dollar industry and that companies like Google and Apple were trying to create speech recognition features. In the years since the article was published, the world has witnessed a variety of voice command devices. Additionally, Google has created a speech recognition engine called Pico TTS and Apple released Siri. Voice command devices are becoming more widely available, and innovative ways for using the human voice are always being created. For example, Business Week suggests that the future remote controller is going to be the human voice. Currently Xbox Live allows such features and Jobs hinted at such a feature on the new Apple TV. Voice command software products on computing devices Both Apple Mac and Windows PC provide built in speech recognition features for their latest operating systems. Microsoft Windows Two Microsoft operating systems, Windows 7 and Windows Vista, provide speech recognition capabilities. Microsoft integrated voice commands into their operating systems to provide a mechanism for people who want to limit their use of the mouse and keyboard, but still want to maintain or increase their overall productivity. Windows Vista With Windows Vista voice control, a user may dictate documents and emails in mainstream applications, start and switch between applications, control the operating system, format documents, save documents, edit files, efficiently correct errors, and fill out forms on the Web. The speech recognition software learns automatically every time a user uses it, and speech recognition is available in English (U.S.), English (U.K.), German (Germany), French (France), Spanish (Spain), Japanese, Chinese (Traditional), and Chinese (Simplified). In addition, the software comes with an interactive tutorial, which can be used to train both the user and the speech recognition engine. Windows 7 In addition to all the features provided in Windows Vista, Windows 7 provides a wizard for setting up the microphone and a tutorial on how to use the feature. Mac OS X All Mac OS X computers come pre-installed with the speech recognition software. The software is user-independent, and it allows for a user to, "navigate menus and enter keyboard shortcuts; speak checkbox names, radio button names, list items, and button names; and open, close, control, and switch among applications." However, the Apple website recommends a user buy a commercial product called Dictate. Commercial products If a user is not satisfied with the built in speech recognition software or a user does not have a built speech recognition software for their OS, then a user may experiment with a commercial product such as Braina Pro or DragonNaturallySpeaking for Windows PCs, and Dictate, the name of the same software for Mac OS. Voice command mobile devices Any mobile device running Android OS, Microsoft Windows Phone, iOS 9 or later, or Blackberry OS provides voice command capabilities. 
In addition to the built-in speech recognition software for each mobile phone's operating system, a user may download third party voice command applications from each operating system's application store: Apple App store, Google Play, Windows Phone Marketplace (initially Windows Marketplace for Mobile), or BlackBerry App World. Android OS Google has developed an open source operating system called Android, which allows a user to perform voice commands such as: send text messages, listen to music, get directions, call businesses, call contacts, send email, view a map, go to websites, write a note, and search Google. The speech recognition software is available for all devices since Android 2.2 "Froyo", but the settings must be set to English. Google allows for the user to change the language, and the user is prompted when he or she first uses the speech recognition feature if he or she would like their voice data to be attached to their Google account. If a user decides to opt into this service, it allows Google to train the software to the user's voice. Google introduced the Google Assistant with Android 7.0 "Nougat". It is much more advanced than the older version. Amazon.com has the Echo that uses Amazon's custom version of Android to provide a voice interface. Microsoft Windows Windows Phone is Microsoft's mobile device's operating system. On Windows Phone 7.5, the speech app is user independent and can be used to: call someone from your contact list, call any phone number, redial the last number, send a text message, call your voice mail, open an application, read appointments, query phone status, and search the web. In addition, speech can also be used during a phone call, and the following actions are possible during a phone call: press a number, turn the speaker phone on, or call someone, which puts the current call on hold. Windows 10 introduces Cortana, a voice control system that replaces the formerly used voice control on Windows phones. iOS Apple added Voice Control to its family of iOS devices as a new feature of iPhone OS 3. The iPhone 4S, iPad 3, iPad Mini 1G, iPad Air, iPad Pro 1G, iPod Touch 5G and later, all come with a more advanced voice assistant called Siri. Voice Control can still be enabled through the Settings menu of newer devices. Siri is a user independent built-in speech recognition feature that allows a user to issue voice commands. With the assistance of Siri a user may issue commands like, send a text message, check the weather, set a reminder, find information, schedule meetings, send an email, find a contact, set an alarm, get directions, track your stocks, set a timer, and ask for examples of sample voice command queries. In addition, Siri works with Bluetooth and wired headphones. Apple introduced Personal Voice as an accessibility feature in iOS 17, launched on September 18, 2023. This feature allows users to create a personalized, machine learning-generated (AI) version of their voice for use in text-to-speech applications. Designed particularly for individuals with speech impairments, Personal Voice helps preserve the unique sound of a user's voice. It enhances Siri and other accessibility tools by providing a more personalized and inclusive user experience. Personal Voice reflects Apple's ongoing commitment to accessibility and innovation. Amazon Alexa In 2014 Amazon introduced the Alexa smart home device. Its main purpose was just a smart speaker, that allowed the consumer to control the device with their voice. 
Eventually, it evolved into a device with the ability to control home appliances by voice. Now many home appliances can be controlled with Alexa, including light bulbs and thermostats. By allowing voice control, Alexa can connect to smart home technology, allowing users to lock their house, control the temperature, and activate various devices. This form of A.I. allows someone to simply ask a question, and in response Alexa searches for, finds, and recites the answer. Speech recognition in cars As car technology improves, more features will be added to cars and these features could potentially distract a driver. Voice commands for cars, according to CNET, should allow a driver to issue commands without being distracted. CNET stated that Nuance was suggesting that in the future they would create software that resembled Siri, but for cars. Most speech recognition software on the market in 2011 had only about 50 to 60 voice commands, but Ford Sync had 10,000. However, CNET suggested that even 10,000 voice commands were not sufficient given the complexity and the variety of tasks a user may want to do while driving. Voice command for cars is different from voice command for mobile phones and for computers because a driver may use the feature to look for nearby restaurants, look for gas, get driving directions, check road conditions, and find the location of the nearest hotel. Currently, technology allows a driver to issue voice commands on both a portable GPS like a Garmin and a car manufacturer's navigation system. List of Voice Command Systems Provided By Motor Manufacturers: Ford Sync Lexus Voice Command Chrysler UConnect Honda Accord GM IntelliLink BMW Mercedes Pioneer Harman Hyundai Non-verbal input While most voice user interfaces are designed to support interaction through spoken human language, there have also been recent explorations in designing interfaces that take non-verbal human sounds as input. In these systems, the user controls the interface by emitting non-speech sounds such as humming, whistling, or blowing into a microphone. One such example of a non-verbal voice user interface is Blendie, an interactive art installation created by Kelly Dobson. The piece comprised a classic 1950s-era blender which was retrofitted to respond to microphone input. To control the blender, the user must mimic the whirring mechanical sounds that a blender typically makes: the blender will spin slowly in response to a user's low-pitched growl, and increase in speed as the user makes higher-pitched vocal sounds. Another example is VoiceDraw, a research system that enables digital drawing for individuals with limited motor abilities. VoiceDraw allows users to "paint" strokes on a digital canvas by modulating vowel sounds, which are mapped to brush directions. Modulating other paralinguistic features (e.g. the loudness of their voice) allows the user to control different features of the drawing, such as the thickness of the brush stroke. Other approaches include adopting non-verbal sounds to augment touch-based interfaces (e.g. on a mobile phone) to support new types of gestures that wouldn't be possible with finger input alone. Design challenges Voice interfaces pose a substantial number of challenges for usability. In contrast to graphical user interfaces (GUIs), best practices for voice interface design are still emergent.
Discoverability With purely audio-based interaction, voice user interfaces tend to suffer from low discoverability: it is difficult for users to understand the scope of a system's capabilities. In order for the system to convey what is possible without a visual display, it would need to enumerate the available options, which can become tedious or infeasible. Low discoverability often results in users reporting confusion over what they are "allowed" to say, or a mismatch in expectations about the breadth of a system's understanding. Transcription While speech recognition technology has improved considerably in recent years, voice user interfaces still suffer from parsing or transcription errors in which a user's speech is not interpreted correctly. These errors tend to be especially prevalent when the speech content uses technical vocabulary (e.g. medical terminology) or unconventional spellings such as musical artist or song names. Understanding Effective system design to maximize conversational understanding remains an open area of research. Voice user interfaces that interpret and manage conversational state are challenging to design due to the inherent difficulty of integrating complex natural language processing tasks like coreference resolution, named-entity recognition, information retrieval, and dialog management. Most voice assistants today are capable of executing single commands very well but limited in their ability to manage dialogue beyond a narrow task or a couple turns in a conversation. Privacy implications Privacy concerns are raised by the fact that voice commands are available to the providers of voice-user interfaces in unencrypted form, and can thus be shared with third parties and be processed in an unauthorized or unexpected manner. Additionally to the linguistic content of recorded speech, a user's manner of expression and voice characteristics can implicitly contain information about his or her biometric identity, personality traits, body shape, physical and mental health condition, sex, gender, moods and emotions, socioeconomic status and geographical origin. See also Speech synthesis List of speech recognition software Natural-language user interface User interface design Voice browser Speech recognition in Linux Linguatronic Voice computing References External links Voice Interfaces: Assessing the Potential by Jakob Nielsen The Rise of Voice: A Timeline Voice First Glossary of Terms Voice First A Reading List User interface techniques Voice technology Speech recognition History of human–computer interaction
Voice user interface
[ "Technology" ]
3,169
[ "History of human–computer interaction", "History of computing" ]
2,144,400
https://en.wikipedia.org/wiki/Titanic%20conspiracy%20theories
On April 14, 1912, the Titanic collided with an iceberg, damaging the hull's plates below the waterline on the starboard side, causing the front compartments to flood. The ship then sank two hours and forty minutes later, with approximately 1,496 fatalities as a result of drowning or hypothermia. Since then, many conspiracy theories have been suggested regarding the disaster. These theories have been refuted by subject-matter experts. Pack ice The pack ice theory is not a conspiracy theory since it accepts that the sinking was an accident. However, it differs from the commonly accepted theory and is considered implausible by the vast majority of historians. Captain L. M. Collins, a former member of the Ice Pilotage Service, concluded, based on three pieces of evidence, his own experience of ice navigation, and witness statements given at the two post-disaster enquiries, that what the Titanic hit was not an iceberg but low-lying pack ice. His book, The Sinking of the Titanic: The Mystery Solved (2003), goes into further detail about the events. There were no reports of haze the entire night of the sinking, but at 11.30 pm the two lookouts spotted what they believed to be haze on the horizon, extending approximately 20° on either side of the ship's bow. Collins believes that what they saw was not haze but a strip of pack ice ahead of the ship. Each witness had a different description of the ice: high by the lookouts, high by Quartermaster Rowe on the deck, and only very low in the water by Fourth Officer Boxhall, on the starboard side near the darkened bridge. An optical phenomenon that is well known to ice navigators, in which the flat sea and extreme cold distort the appearance of objects near the waterline, making them appear to be at the height of the ship's lights, about above the surface near the bow, and high alongside the superstructure, probably explains the differences between the witnesses' descriptions. Had she struck an iceberg while turning with her rudder hard over, the Titanic would have pivoted about a point one-third of the way back from the bow and crushed her starboard side into the ice; this would have caused the ship to flood, capsize, and sink within minutes, damaging the starboard side of the hull and potentially the superstructure. Olympic exchange hypothesis One of the controversial and elaborate theories surrounding the sinking of the Titanic was advanced by Robin Gardiner in his book Titanic: The Ship That Never Sank? (1998). Gardiner draws on several events and coincidences that occurred in the months, days, and hours leading up to the sinking of the Titanic, and concludes that the ship that sank was in fact Titanic's sister ship Olympic, disguised as Titanic, as an insurance scam by its owners, the International Mercantile Marine Group, controlled by American financier J.P. Morgan, which had acquired the White Star Line in 1902. Researchers Bruce Beveridge and Steve Hall took issue with many of Gardiner's claims in their book Olympic and Titanic: The Truth Behind the Conspiracy (2004). Author Mark Chirnside has also raised serious questions about the switch theory. British historian Gareth Russell, for his part, calls the theory "so painfully ridiculous that one can only lament the thousands of trees which lost their lives to provide the paper on which it has been articulated." He notes that, "since the sister ships had significant interior architectural and design differences, switching them secretly in a week would be nearly impossible from a practical standpoint.
A switch would also not be economically worthwhile, since the ship's owners could have simply damaged the ship while docked (for instance, by setting a fire) and collected the insurance money from that 'accident', which would have been far less severe, and infinitely less stupid, than sailing her out into the middle of the Atlantic with thousands of people, and their luggage, on board, and ramming her into an iceberg". Deliberately sunk Another claim, which started gaining traction in late 2017, says that J.P. Morgan deliberately sank the ship in order to kill off several millionaires who were in opposition to the Federal Reserve. Some of the wealthiest men in the world were aboard the Titanic for her maiden voyage, several of whom, including John Jacob Astor IV, Benjamin Guggenheim, and Isidor Straus, were allegedly opposed to the creation of a U.S. central bank. No evidence of their opposition to Morgan's centralized banking ideas has been found: Astor and Guggenheim never spoke publicly on the subject, while Straus spoke in favor of the concept. All three men died during the sinking. Conspiracy theorists suggest that J. P. Morgan, the 74-year-old financier who set up the eponymous banking firm, arranged to have the men board the ship and then sank it to eliminate them. Morgan cancelled his ticket for Titanic's maiden voyage due to a reported illness. Guggenheim's ticket wasn't even purchased before Morgan's cancellation. Morgan, nicknamed the "Napoleon of Wall Street", had helped create General Electric, U.S. Steel, and International Harvester, and was credited with almost single-handedly saving the U.S. banking system during the Panic of 1907. Morgan did have a hand in the creation of the Federal Reserve, and owned the International Mercantile Marine, which owned the White Star Line, and thus the Titanic. Morgan, who had attended the Titanic's launching in 1911, had booked a personal suite aboard the ship with his own private promenade deck and a bath equipped with specially designed cigar holders. He was reportedly booked on the ship's maiden voyage but instead cancelled the trip and remained at the French resort of Aix-les-Bains to enjoy his morning massages and sulfur baths. His allegedly last-minute cancellation has fuelled speculation among conspiracy theorists that he knew of the ship's fate. This theory has been refuted by Titanic experts George Behe, Don Lynch, and Ray Lepien, who have each provided alternative, more widely accepted explanations as to why Morgan cancelled his trip. Conspiracy theorist Stew Peters has advanced an alternative version of the theory, alleging the Rothschilds were behind both the Federal Reserve and the Titanic's sinking. Peters also claimed that the Titan submersible implosion was orchestrated via sabotage in order to prevent its own passengers from discovering that the Titanic was sunk by a "controlled demolition" instead of an iceberg. Closed watertight doors Another theory involves Titanic's watertight doors. This theory suggests that if these doors had been opened, the Titanic would have settled on an even keel and therefore, perhaps, remained afloat long enough for rescue ships to arrive. However, this theory has been rebutted for two reasons: first, the first four compartments were naturally watertight, thus it was impossible to reduce the amount of water in the bow significantly. Second, Bedford and Hacket have shown by calculations that any significant amount of water aft of boiler room No.
4 would have resulted in the capsizing of the Titanic, which would have occurred about 30 minutes earlier than the actual time of sinking. Additionally, the lighting would have been lost about 70 minutes after the collision due to the flooding of the boiler rooms. Bedford and Hacket also analyzed the hypothetical case that there were no bulkheads at all. In that case, the vessel would have capsized about 70 minutes before the actual time of sinking and lighting would have been lost about 40 minutes after the collision. Later, in a 1998 documentary titled Titanic: Secrets Revealed, the Discovery Channel ran model simulations which also rebutted this theory. The simulations indicated that opening Titanic's watertight doors would have caused the ship to capsize more than half an hour earlier than it actually sank, supporting the findings of Bedford and Hacket. Expansion joints hypothesis Titanic researchers continued to debate the causes and mechanics of the ship's breakup. In his book A Night to Remember, Walter Lord described Titanic as assuming an "absolutely perpendicular" position shortly before its final plunge. This view remained largely unchallenged even after the wreck was discovered by Robert Ballard in 1985, which confirmed that Titanic had broken in two pieces at or near the surface; paintings by noted marine artist Ken Marschall, as well as the sinking as imagined onscreen in James Cameron's film Titanic, depicted the ship attaining a steep angle prior to the breakup. Most researchers acknowledged that Titanic's aft expansion joint (designed to allow for flexing of the hull in a seaway) played little to no role in the ship's breakup, though debate continued as to whether the ship had broken from the top downwards or from the bottom upwards. In 2005, a History Channel expedition to the wreck site scrutinized two large sections of Titanic's keel, which constituted the portion of the ship's bottom from immediately below the site of the break. With assistance from naval architect Roger Long, the team analysed the wreckage and developed a new break-up scenario which was publicised in the television documentary Titanic's Final Moments: Missing Pieces in 2006. One hallmark of this new theory was the claim that Titanic's angle at the time of the breakup was far less than had been commonly assumed: according to Long, no greater than 11°. Long also suspected that Titanic's breakup may have begun with the premature failure of the ship's aft expansion joint, and ultimately exacerbated the loss of life by causing Titanic to sink faster than anticipated. In 2006, the History Channel sponsored dives on Titanic's newer sister ship Britannic, which verified that the design of Britannic's expansion joints was superior to that incorporated in the Titanic. To further explore Long's theory, the History Channel commissioned a new computer simulation by JMS Engineering. The simulation, whose results were featured in the 2007 documentary Titanic's Achilles Heel, partially refuted Long's suspicions by demonstrating that Titanic's expansion joints were strong enough to deal with any and all stresses the ship could reasonably be expected to encounter in service and, during the sinking, actually outperformed their design specifications. Most importantly, however, the expansion joints were part of the superstructure, which was situated above the strength deck (B-deck) and therefore above the top of the structural hull girder. Thus, the expansion joints contributed nothing to the structural support of the hull.
They played no role in the breaking of the hull; they simply opened up and parted as the hull flexed or broke beneath them. Brad Matsen's 2008 book Titanic's Last Secrets endorses the expansion joint theory. An often overlooked point is that the collapse of the first funnel at a relatively shallow angle occurred when the forward expansion joint, over which several funnel stays crossed, opened as the hull began to stress. The opening of the joint stretched and snapped the stays. The forward momentum of the ship as she took a sudden lurch forwards and downwards sent the unsupported funnel toppling onto the starboard bridge wing. One theory that would support the fracturing of the hull is that the Titanic partly grounded on the shelf of ice below the waterline as she collided with the iceberg, perhaps damaging the keel and underbelly. Later during the sinking, it was noticed that Boiler Room No. 4 flooded from below the floor grates rather than from over the top of the watertight bulkhead. This would be consistent with additional damage along the keel compromising the integrity of the hull. Fire in coal bunker This claim states that a fire began in one of Titanic's coal bunkers approximately 10 days prior to the ship's departure, and continued to burn for several days into the voyage. Fires occurred frequently on board steamships due to spontaneous combustion of the coal. The fires had to be extinguished with fire hoses, by moving the coal on top to another bunker, and by removing the burning coal and feeding it into the furnace. This event has led some authors to theorize that the fire exacerbated the effects of the iceberg collision by reducing the structural integrity of the hull and a critical bulkhead. Senan Molony has suggested that attempts to extinguish the fire – by shoveling burning coals into the engine furnaces – may have been the primary reason for the Titanic steaming at full speed prior to the collision, despite ice warnings. Most experts disagree. Samuel Halpern has concluded that "the bunker fire would not have weakened the watertight bulkhead sufficiently to cause it to collapse." It has also been suggested that the coal bunker fire actually helped Titanic to last longer during the sinking and prevented the ship from rolling over to starboard after the impact, due to the subtle port list created by the moving of coal inside the ship prior to the encounter with the iceberg. Several foremost Titanic experts have published a detailed rebuttal of Molony's claims. See also Encyclopedia Titanica Legends and myths regarding The Titanic References Bibliography External links Was there a fire aboard Titanic?, CBC News Olympic & Titanic – An Analysis Of The Robin Gardiner Conspiracy Theory Science and technology-related conspiracy theories Pseudohistory conspiracy theories Death conspiracy theories Pseudoscience
Titanic conspiracy theories
[ "Technology" ]
2,656
[ "Science and technology-related conspiracy theories" ]
2,144,488
https://en.wikipedia.org/wiki/Speech%20Recognition%20Grammar%20Specification
Speech Recognition Grammar Specification (SRGS) is a W3C standard for how speech recognition grammars are specified. A speech recognition grammar is a set of word patterns, and tells a speech recognition system what to expect a human to say. For instance, if you call an auto-attendant application, it will prompt you for the name of a person (with the expectation that your call will be transferred to that person's phone). It will then start up a speech recognizer, giving it a speech recognition grammar. This grammar contains the names of the people in the auto attendant's directory and a collection of sentence patterns that are the typical responses from callers to the prompt. SRGS specifies two alternate but equivalent syntaxes, one based on XML, and one using augmented BNF format. In practice, the XML syntax is used more frequently. Both the ABNF and XML forms have the expressive power of a context-free grammar. A grammar processor that does not support recursive grammars has the expressive power of a finite-state machine or regular expression language. If the speech recognizer returned just a string containing the actual words spoken by the user, the voice application would have to do the tedious job of extracting the semantic meaning from those words. For this reason, SRGS grammars can be decorated with tag elements, which, when executed, build up the semantic result. SRGS does not specify the contents of the tag elements: this is done in a companion W3C standard, Semantic Interpretation for Speech Recognition (SISR). SISR is based on ECMAScript, and ECMAScript statements inside the SRGS tags build up an ECMAScript semantic result object that is easy for the voice application to process. Both SRGS and SISR are W3C Recommendations, the final stage of the W3C standards track. The W3C VoiceXML standard, which defines how voice dialogs are specified, depends heavily on SRGS and SISR. Examples Here is an example of the augmented BNF of SRGS, as it could be used in an auto attendant application:

#ABNF 1.0 ISO-8859-1;

// Default grammar language is US English
language en-US;

// Single language attachment to tokens
// Note that "fr-CA" (Canadian French) is applied to only
// the word "oui" because of precedence rules
$yes = yes | oui!fr-CA;

// Single language attachment to an expansion
$people1 = (Michel Tremblay | André Roy)!fr-CA;

// Handling language-specific pronunciations of the same word
// A capable speech recognizer will listen for Mexican Spanish and
// US English pronunciations.
$people2 = Jose!en-US | Jose!es-MX;

/**
 * Multi-lingual input possible
 * @example may I speak to André Roy
 * @example may I speak to Jose
 */
public $request = may I speak to ($people1 | $people2);

Here is the same SRGS example, using the XML form:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN"
                  "http://www.w3.org/TR/speech-grammar/grammar.dtd">

<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.w3.org/2001/06/grammar
                             http://www.w3.org/TR/speech-grammar/grammar.xsd"
         xml:lang="en-US" version="1.0">

  <!-- single language attachment to tokens
       "yes" inherits US English language
       "oui" is Canadian French language -->
  <rule id="yes">
    <one-of>
      <item>yes</item>
      <item xml:lang="fr-CA">oui</item>
    </one-of>
  </rule>

  <!-- Single language attachment to an expansion -->
  <rule id="people1">
    <one-of xml:lang="fr-CA">
      <item>Michel Tremblay</item>
      <item>André Roy</item>
    </one-of>
  </rule>

  <!-- Handling language-specific pronunciations of the same word
       A capable speech recognizer will listen for Mexican Spanish and
       US English pronunciations. -->
  <rule id="people2">
    <one-of>
      <item xml:lang="en-US">Jose</item>
      <item xml:lang="es-MX">Jose</item>
    </one-of>
  </rule>

  <!-- Multi-lingual input is possible -->
  <rule id="request" scope="public">
    <example> may I speak to André Roy </example>
    <example> may I speak to Jose </example>
    may I speak to
    <one-of>
      <item> <ruleref uri="#people1"/> </item>
      <item> <ruleref uri="#people2"/> </item>
    </one-of>
  </rule>
</grammar>

See also SISR VoiceXML Pronunciation Lexicon Specification (PLS) Natural Language Semantics Markup Language JSGF External links SRGS Specification (W3C Recommendation) SISR Specification (W3C Recommendation) VoiceXML Forum World Wide Web Consortium standards XML-based standards
Speech Recognition Grammar Specification
[ "Technology" ]
1,246
[ "Computer standards", "XML-based standards" ]
2,144,540
https://en.wikipedia.org/wiki/Toxaphene
Toxaphene was an insecticide used primarily for cotton in the southern United States during the late 1960s and the 1970s. Toxaphene is a mixture of over 670 different chemicals and is produced by reacting chlorine gas with camphene. It is most commonly found as a yellow to amber waxy solid. Toxaphene was banned in the United States in 1990 and was banned globally by the 2001 Stockholm Convention on Persistent Organic Pollutants. It is a very persistent chemical that can remain in the environment for 1–14 years without degrading, particularly in the soil. Testing performed on animals, mostly rats and mice, has demonstrated that toxaphene is harmful. Exposure to toxaphene has been shown to stimulate the central nervous system, as well as induce morphological changes in the thyroid, liver, and kidneys. Toxaphene has been shown to cause adverse health effects in humans. The main sources of exposure are through food, drinking water, breathing contaminated air, and direct contact with contaminated soil. Exposure to high levels of toxaphene can cause damage to the lungs, nervous system, liver, and kidneys and, in extreme cases, may even cause death. It is thought to be a potential carcinogen in humans, though this has not yet been proven. Composition Toxaphene is a synthetic organic mixture composed of over 670 chemicals, formed by the chlorination of camphene (C10H16) to an overall chlorine content of 67–69% by weight. The bulk of the compounds (mostly chlorobornanes, chlorocamphenes, and other bicyclic chloroorganic compounds) found in toxaphene have chemical formulas ranging from C10H11Cl5 to C10H6Cl12, with a mean formula of C10H10Cl8. The formula weights of these compounds range from 308 to 551 grams/mole; the theoretical mean formula has a value of 414 grams/mole. Toxaphene is usually seen as a yellow to amber waxy solid with a piney odor. It is highly insoluble in water but freely soluble in aromatic hydrocarbons and readily soluble in aliphatic organic solvents. It is stable at room temperature and pressure. It is volatile enough to be transported for long distances through the atmosphere. Applications Advertisements for Toxaphene were seen in agricultural periodicals such as Farm Journal as early as 1950. Toxaphene was primarily used as a pesticide for cotton in the southern United States during the late 1960s and 1970s. It was also used on small grains, maize, vegetables, and soybeans. Beyond crops, it was also used to control ectoparasites such as lice, flies, ticks, and mange and scab mites on livestock. In some cases it was used to kill undesirable fish species in lakes and streams. The breakdown of usage can be summarized as: 85% on cotton, 7% to control insect pests on livestock and poultry, 5% on other field crops, 3% on soybeans, and less than 1% on sorghum. The first recorded usage of toxaphene was in 1966 in the United States, and by the early to mid 1970s, toxaphene was the United States' most heavily used pesticide. Over 34 million pounds of toxaphene were used annually from 1966 to 1976. As a result of Environmental Protection Agency restrictions, annual toxaphene usage fell to 6.6 million pounds in 1982. In 1990, the EPA banned all usage of toxaphene in the United States. Toxaphene is still used in countries outside the United States, but much of this usage is undocumented. Between 1970 and 1995, global usage of toxaphene was estimated to be 670 million kilograms (1.5 billion pounds).
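As a quick arithmetic check on the composition figures above (an illustration added here, not part of the cited sources), the short Python sketch below recomputes the molar masses of the lightest, heaviest, and nominal mean congeners from standard atomic weights; the results come out near the 308, 551, and 414 grams/mole quoted in the Composition section.

# Rough molar-mass check for the toxaphene congeners named above.
# Atomic weights are standard values rounded to two decimal places.
ATOMIC_WEIGHTS = {"C": 12.01, "H": 1.01, "Cl": 35.45}

def molar_mass(composition):
    """Sum the atomic weights for a composition given as {element: count}."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

congeners = {
    "lightest (C10H11Cl5)": {"C": 10, "H": 11, "Cl": 5},
    "heaviest (C10H6Cl12)": {"C": 10, "H": 6, "Cl": 12},
    "nominal mean (C10H10Cl8)": {"C": 10, "H": 10, "Cl": 8},
}

for name, comp in congeners.items():
    print(f"{name}: {molar_mass(comp):.1f} g/mol")
# Prints roughly 308.5, 551.6 and 413.8 g/mol, consistent with the
# 308-551 g/mol range and ~414 g/mol mean given in the text.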
Production Toxaphene was first produced in the United States in 1947, although it was not heavily used until 1966. By 1975, toxaphene production reached its peak at 59.4 million pounds annually. Production decreased more than 90% from this value by 1982 due to Environmental Protection Agency restrictions. Overall, an estimated 234,000 metric tons (over 500 million pounds) have been produced in the United States. Between 25% and 35% of the toxaphene produced in the United States has been exported. There are currently 11 toxaphene suppliers worldwide. Environmental effects When released into the environment, toxaphene can be quite persistent and exists in the air, soil, and water. In water, it can evaporate easily and is fairly insoluble. Its solubility is 3 mg/L of water at 22 degrees Celsius. Toxaphene breaks down very slowly and has a half-life of up to 12 years in the soil. It is most commonly found in air, in soil, and in sediment at the bottom of lakes or streams. It can also be present in many parts of the world where it was never used, because toxaphene is able to evaporate and travel long distances through air currents. In the air, toxaphene is eventually degraded by sunlight through dechlorination. The degradation of toxaphene usually occurs under aerobic conditions. The levels of toxaphene have decreased since its ban. However, due to its persistence, it can still be found in the environment today. Exposure The three main paths of exposure to toxaphene are ingestion, inhalation, and absorption. For humans, the main source of toxaphene exposure is through ingested seafood. When toxaphene enters the body, it usually accumulates in fatty tissues. It is broken down through dechlorination and oxidation in the liver, and the byproducts are eliminated through feces. People who live near an area with high toxaphene contamination are at high risk of exposure through inhalation of contaminated air or direct skin contact with contaminated soil or water. Eating large quantities of fish on a daily basis also increases susceptibility to toxaphene exposure. Finally, exposure through drinking water contaminated by toxaphene runoff from the soil is rare but possible. However, toxaphene has rarely been seen at high levels in drinking water due to its nearly complete insolubility in water. Shellfish, algae, fish and marine mammals have all been shown to exhibit high levels of toxaphene. People in the Canadian Arctic, where a traditional diet consists of fish and marine animals, have been shown to consume ten times the accepted daily intake of toxaphene. Also, blubber from beluga whales in the Arctic was found to have unhealthy and toxic levels of toxaphene. Health effects In humans When inhaled or ingested, sufficient quantities of toxaphene can damage the lungs, nervous system, and kidneys, and may cause death. The major health effects of toxaphene involve central nervous system stimulation leading to convulsive seizures. The dose necessary to induce nonfatal convulsions in humans is about 10 milligrams per kilogram body weight per day. Several deaths linked to toxaphene have been documented in which an unknown quantity of toxaphene was ingested intentionally or accidentally from food contamination. The deaths are attributed to respiratory failure resulting from seizures. Chronic inhalation exposure in humans results in reversible respiratory toxicity.
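To illustrate the persistence figures given in the environmental effects section above, the following sketch treats soil residues as simple first-order decay with the 12-year half-life quoted earlier; this is an idealization added for illustration, not a result from the literature, and real degradation rates vary with conditions.

# Idealized first-order decay using the ~12-year soil half-life cited above.
# Actual degradation depends on soil and climate; this is only illustrative.
HALF_LIFE_YEARS = 12.0

def fraction_remaining(years, half_life=HALF_LIFE_YEARS):
    """Fraction of the original toxaphene left after `years` of ideal decay."""
    return 0.5 ** (years / half_life)

for t in (1, 12, 24, 50):
    print(f"after {t:>2} years: {fraction_remaining(t):.1%} remaining")
# Roughly 94% after 1 year, 50% after 12, 25% after 24 and about 6% after 50,
# consistent with residues still being detectable long after the ban.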
A study conducted between 1954 and 1972 of male agricultural workers and agronomists exposed to toxaphene and other pesticides showed higher proportions of bronchial carcinoma in the exposed group than in the unexposed general population. However, toxaphene may not have been the main pesticide responsible for tumor production. Tests on lab animals show that toxaphene causes liver and kidney cancer, so the EPA has classified it as a Group B2 carcinogen, meaning it is a probable human carcinogen. The International Agency for Research on Cancer has classified it as a Group 2B carcinogen. Toxaphene can be detected in blood, urine, breast milk, and body tissues if a person has been exposed to high levels, but it is removed from the body quickly, so detection has to occur within several days of exposure. It is not known whether toxaphene can affect reproduction in humans. In animals Toxaphene was used to treat mange in cattle in California in the 1970s, and there were reports of cattle deaths following the toxaphene treatment. Chronic oral exposure in animals affects the liver, the kidney, the spleen, the adrenal and thyroid glands, the central nervous system, and the immune system. Toxaphene stimulates the central nervous system by antagonizing inhibitory signalling in neurons, preventing their hyperpolarization and thereby increasing neuronal activity. Regulations Toxaphene has been found on at least 68 of the 1,699 National Priorities List sites identified by the United States Environmental Protection Agency. Toxaphene has been forbidden in Germany since 1980. Most uses of toxaphene were cancelled in the U.S. in 1982, with the exception of use on livestock in emergency situations and for controlling insects on banana and pineapple crops in Puerto Rico and the U.S. Virgin Islands. All uses of toxaphene were cancelled in the U.S. in 1990. Toxaphene has been banned in 37 countries, including Austria, Belize, Brazil, Costa Rica, Dominican Republic, Egypt, the EU, India, Ireland, Kenya, Korea, Mexico, Panama, Singapore, Thailand and Tonga. Its use has been severely restricted in 11 other countries, including Argentina, Colombia, Dominica, Honduras, Nicaragua, Pakistan, South Africa, Turkey, and Venezuela. In the Stockholm Convention on POPs, which came into effect on 17 May 2004, twelve POPs were listed for elimination or for restriction of their production and use. The OCPs or pesticide-POPs identified on this list have been termed the "dirty dozen" and include aldrin, chlordane, DDT, dieldrin, endrin, heptachlor, hexachlorobenzene, mirex, and toxaphene. The EPA has determined that lifetime exposure to 0.01 milligrams per liter of toxaphene in drinking water is not expected to cause any adverse noncancer effects if the only source of exposure is drinking water, and has established the maximum contaminant level (MCL) of toxaphene at 0.003 mg/L. The United States Food and Drug Administration (FDA) uses the same level for bottled water, having determined that the concentration of toxaphene in bottled drinking water should not exceed 0.003 milligrams per liter. The United States Department of Transportation lists toxaphene as a hazardous material and has special requirements for marking, labeling, and transporting the material. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C.
11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. Trade names Trade names and synonyms include Chlorinated camphene, Octachlorocamphene, Camphochlor, Agricide Maggot Killer, Alltex, Allotox, Crestoxo, Compound 3956, Estonox, Fasco-Terpene, Geniphene, Hercules 3956, M5055, Melipax, Motox, Penphene, Phenacide, Phenatox, Strobane-T, Toxadust, Toxakil, Vertac 90%, Toxon 63, Attac, Anatox, Royal Brand Bean Tox 82, Cotton Tox MP82, Security Tox-Sol-6, Security Tox-MP cotton spray, Security Motox 63 cotton spray, Agro-Chem Brand Torbidan 28, and Dr Roger's TOXENE. References External links ASTDR ToxFAQs for Toxaphene CDC - NIOSH Pocket Guide for Chemical Hazards - Chlorinated Camphene Obsolete pesticides Organochloride insecticides IARC Group 2B carcinogens Endocrine disruptors Cycloalkenes Persistent organic pollutants under the Stockholm Convention Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution Bicyclic compounds
Toxaphene
[ "Chemistry" ]
2,646
[ "Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution", "Endocrine disruptors", "Persistent organic pollutants under the Stockholm Convention" ]
2,144,587
https://en.wikipedia.org/wiki/Bema
A bema is an elevated platform used as an orator's podium. The term can refer to the raised area in a sanctuary. In Jewish synagogues, where it is used for Torah reading during services, the term used is bima or bimah. Ancient Greece The Ancient Greek bēma means both 'platform' and 'step', being derived from bainein ('to go'). The original use of the bema in Athens was as a tribunal from which orators addressed the citizens as well as the courts of law, for instance, in the Pnyx. In Greek law courts the two parties to a dispute presented their arguments each from separate bemas. By metonymy, bema was also a place of judgement, being the extension of the raised seat of the judge, as described in the New Testament, and further as the seat of the Roman emperor and of God when speaking in judgment. Judaism Etymology The post-Biblical Hebrew bima, 'platform' or 'pulpit', is almost certainly derived from the Ancient Greek word for a raised platform, bema. A philological link to the Biblical Hebrew bama, 'high place', has been suggested. Alternative names The bimah (Hebrew plural: bimot) in synagogues is also known as the almemar or almemor among some Ashkenazi Jews, from Arabic minbar "pulpit". Among Sephardic Jews, it is known as a tevah "box, case" or migdal-etz ('tower of wood'). Purpose The bimah is raised to demonstrate the importance of the Torah reader at that moment, and to make the recitation of the Torah easier to hear. Description and use The bima became a standard fixture in synagogues, where the weekly Torah portion and haftara are read. In antiquity, the bima was made of stone, but in modern times it is usually a rectangular wooden platform approached by steps. As in the Temple, the synagogal bima is typically elevated by two or three steps. A raised bima will generally have a railing. This was a religious safety requirement for a bima more than ten handbreadths high. A lower bimah (even one step) will typically have a railing as a practical measure to prevent someone from inadvertently stepping off. In Orthodox Judaism, the bima is located in the center of the synagogue, separate from the Torah ark. In other branches of Judaism, the bima and the Ark are joined together. Reform Judaism moved the bima close to or around the Torah ark. At the celebration of Shavuot, when synagogues are decorated with flowers, many synagogues have special arches that they place over the bima and adorn with floral displays. Christianity The ceremonial use of a bema carried over from Judaism into early Christian church architecture. It was originally a raised platform with a lectern and seats for the clergy, from which lessons from the Scriptures were read and the sermon was delivered. In Western Christianity the bema developed over time into the chancel (or presbytery) and the pulpit. In the Byzantine, Armenian, West Syriac and Alexandrian Rites of Eastern Christianity, bema generally remains the name of the platform that composes the sanctuary; it consists of both the area behind the iconostasion and the platform in front of it, from which the deacon leads the ektenias (litanies), together with the ambo, from which the priest delivers the sermon and distributes Holy Communion. It may be approached by one or several steps.
The bema is composed of the altar (the area behind the iconostasion), the soleas (the pathway in front of the iconostasion), and the ambo (the area in front of the Holy Doors which projects westward into the nave). Orthodox laity do not normally step up onto the bema except to receive Holy Communion. Islam In Islam, the minbar "pulpit" is a standard furnishing in every congregational mosque. The earliest record of a minbar dates back to between 628 and 631. See also Ambon (liturgy) High place, raised place of worship Peak sanctuaries Templon Tribune (architecture) References External links Sacral architecture Synagogue architecture Pulpits Church architecture Eastern Christian liturgical objects Architecture of Athens Ancient Athens
Bema
[ "Engineering" ]
907
[ "Sacral architecture", "Architecture" ]
2,144,841
https://en.wikipedia.org/wiki/Mental%20management
Mental management is a concept within the field of cognitive psychology that explores the cognitive, cerebral or thought-based processes in their different forms. Originally developed during the 1970s by French educator and philosopher Antoine de La Garanderie, mental management was developed for individuals to use their own mental activities and processes more effectively. History Developed during the 1970s, mental management theory has expanded on several previous academic dialogues. Having first emerged for educational purposes, the mental management approach has been developed on the research of French educator and philosopher Antoine de La Garanderie who identified distinct mental processes that support learning. His research builds on several previous theories, including the works on introspection by Jean Martin Charcot, Alfred Binet, Maine de Biran and Albert Burloud, who was La Garanderie's professor. Today, its studies are used in four domains systematically; individual functioning; education; therapy; and business. There have been developments over the past century that have combined original and newfound scientific techniques and studies to further our understanding of mental processes and hence, improve mental management. This includes the following activities and processes to manage the mind well: attention; retrieval; comprehension; thinking; and imagining. Managing the mind well involves maximising the effectiveness and efficiency of cognitive processes associated through applying deliberate approaches, processes and activities. The computer revolution of the 1950s led in turn to a ‘cognitive revolution’ in psychology during the 1960s and 1970s with the focus upon information processing (via analogies to computers and programs) leading to an interest in internal mental processes, rather than just in the overt human behaviour. This led to the development of the cognitive mental process model, which is compared to the behaviourist model to outline the distinction of the scope of cognitive psychology in which the components of mental management will be explored in more detail. Theory French philosopher and educator Antoine de La Garanderie's research led to creating awareness in individuals on their own mental activities and processes and to make them develop and use them effectively. His theory aimed to establish educational profiles based on the "mental gestures of learning" specific to each child. According to him, each individual's understanding, memorising and reproducing techniques for information are differentiated. This concept can show the different cognitive mechanisms which play a part in thinking and learning, and ultimately make up mental management. The three main components of mental management include project; evocation; and mental gestures. By defining a project, the learning strategy will differ and the requirements will enable them to implement an effective and appropriate method. Evocation is the voluntary mental reconstruction procedure of all perceptions coming from the external world through senses. Evocation is the foundation stone of La Garanderie's theory, allowing individuals to reproduce what they learn at a later point in time. There are three specific profiles of evocation corresponding to a particular channel that the child uses to imagine the information: 1. Visual evocation includes the transformation of textual information into visual information, for example drawings, diagrams and the use of colour in underlining. 2. 
Auditory evocation includes oral or mental repetition of information, for example listening to lessons on an audio device. 3. Kinaesthetic evocation includes movements, feelings, smells and tastes, for example the use of gestures and movements to learn. Constituting mental management is the organisation, improvement and use of these activities and processes. Cognitive psychology Cognitive psychology is the study of the mind as an information processor. It rose to great importance in the mid-1950s due to the dissatisfaction with the behaviourist psychological models. The emphasis of psychology shifted away from studying the mind in favour of understanding human information processing, relating to perception, attention, language, memory, thinking, and consciousness. The main concern of cognitive psychology is how information is received from the senses, processed by the brain and how this processing directs how humans behave. It is a multifaceted approach in which various cognitive functions work together to understand not only individuals and groups, but also society as a whole. Behaviourist model vs. cognitive model Mental Management falls within the cognitive model of psychology and needs to be distinguished from the behaviourist model, which considers mental processes to be unobservable and therefore akin to a ‘black box’. More specifically, the behaviourist model assumes that the process linking behaviour to the stimulus cannot be studied. It therefore describes the conceptualisation of psychological disorders in terms of overt behaviour patterns produced by learning and the influence of reinforcement contingencies. Treatment techniques associated with this approach include systematic de-sensitisation and modelling and focusing on modifying ineffective or maladaptive patterns. In contrast, the cognitive model represents a theoretical view of thought and mental operations, provides explanations for observed phenomena and makes predictions about behavioural consequences. Specifically, it describes the mental events that connect the input from the environment with the behavioural output. The approach assumes that people are continually creating and accessing internal representations (models) of what they are experiencing in the world for the purposes of perception, comprehension, and behaviour selection (action). Treatment techniques associated with this approach include cognitive behavioural therapy which involves defining, observing, analysing and interpreting patterns, and reframing these as more optimal ways of thinking. Processes of mental management There are five different processes of mental management, which La Garanderie distinguishes as different types of mental gestures. Attention In psychology, attention is defined as "a state in which cognitive resources are focused on certain aspects of the environment rather than on others and the central nervous system is in a state of readiness to respond to stimuli." In mental management it describes the essential first step required to enable the subsequent step of retrieving memorized information. The gesture of attention is linked to the perception from our five senses and the evocation of the subject. For example, a parent or teacher can activate a child's attention with instructive phrases using the imperative tense. Retrieval Retrieval is defined by the American Psychological Association as "the process of recovering or locating information stored in memory. Retrieval is the final stage of memory, after encoding and retention." 
These associated stages are dealt with on an implicit basis in mental management. Retrieval is distinguished by La Garanderie as the gesture of memorisation, which involves bringing back evocations for the purpose of reproducing them in the short-, medium-, and long-term. Comprehension Comprehension is defined as the "act or capability of understanding something, especially the meaning of a communication," by the American Psychological Association. It involves making sense in a subjective sense which does not require the understanding to be correct. La Garanderie distinguishes comprehension as the gesture of understanding, which allows us to constantly shift between what is perceived and what is evoked to find the meaning of new information. Thinking The American Psychological Association defines thinking as a "cognitive behaviour in which ideas, images, mental representations or other hypothetical elements of thought are experienced or manipulated." In the context of mental management, the thinking process also involves "self-reflection" which involves the "examination, contemplation and analysis of ones thoughts, feeling and actions". Thinking, or the gesture of reflection, involves selecting the notions or theory that has already been learnt, and allow us to think through the task to be accomplished. Imagining Imagination is the faculty that produces ideas and images in the absence of direct sensory data, often by combining fragments of previous sensory experiences into new syntheses. It is a critical component of mental management as it captures the change involved in improving or optimising the mental processes. The gesture of creative imagination allows for an individual to invent new approaches based on what they already know. This allows individuals to make comparisons and develop responses to problems outside of a logical framework. Measurement of mental processes The measurement of mental processes can involve invasive or non-invasive ways to measure human activity in the brain, known as neuroimaging. Neuroimaging is defined as "a clinical specialty concerned with producing images of the brain by non-invasive techniques, such as computed tomography and magnetic resonance imaging." Computed tomography is “radiography in which a three-dimensional image of a body structure is constructed by computer from a series of plane cross-sectional images made along an axis.” Magnetic resonance imaging, commonly referred to as MRI, is "a noninvasive diagnostic technique that produces computerised images of internal body tissues and is based on nuclear magnetic resonance of atoms within the body induced by the application of radio waves." These advances help to understand brain specificity, and therefore were able to contribute to theories of mental management processes. Through these methods of measuring mental processes, it was found that different parts of the brain are responsible for different mental processes, for example that the frontal lobe is responsible for abstract thinking. Practical applications of mental management The principles of mental management apply practically to many aspects in the real world, particularly in the areas of education and individual development. To develop the mental management processes in education, necessary knowledge, skills, methods and techniques are taught to students to help them understand how their minds work and to help them discover the ways to work their minds more efficiently in the education process. 
These teachings are carried out in three steps: (1) getting to know the mind; (2) developing skills; and (3) attaining mental independence. These practices are now being applied beyond education at an individual level in the context of self-improvement as well as in the work domain, including managers and leaders. Mental management theories were applied to a real-life case study in Mancinelli, Gentili, Priori, and Valituttis’ 2004 study on concept maps in kindergarten. The study's finding was that real learning is derived from the child's evocation, as defined by La Garanderie. Evocation is the voluntary mental reconstruction procedure of all perceptions coming from the external world through senses. The project concluded that the child builds meanings of their senses through evocation, with perception and evocation both key in specifying the information to learn from. Without such a mental activity, learning is partial and lacks important parts. Further findings showed that the observer's mental characteristics affect the contents derived by the child from observation. The interaction, perception and evocation processes unveil correct concepts and misconceptions. Using methods of La Garanderie's pedagogic dialogue, children were reached mentally and disclosed their thoughts on their experience. The child's thoughts were found to be guided by the project's success or failure and accordingly the child's thoughts divert from their infantile characteristics and progress and adjust slowly. References Cognitive psychology Pedagogy
Mental management
[ "Biology" ]
2,148
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
2,145,009
https://en.wikipedia.org/wiki/Heliocentric%20orbit
A heliocentric orbit (also called circumsolar orbit) is an orbit around the barycenter of the Solar System, which is usually located within or very near the surface of the Sun. All planets, comets, and asteroids in the Solar System, and the Sun itself are in such orbits, as are many artificial probes and pieces of debris. The moons of planets in the Solar System, by contrast, are not in heliocentric orbits, as they orbit their respective planet (although the Moon has a convex orbit around the Sun). The barycenter of the Solar System, while always very near the Sun, moves through space as time passes, depending on where other large bodies in the Solar System, such as Jupiter and other large gas planets, are located at that time. A similar phenomenon allows the detection of exoplanets by way of the radial-velocity method. The helio- prefix is derived from the Greek word "ἥλιος", meaning "Sun", and also Helios, the personification of the Sun in Greek mythology. The first spacecraft to be put in a heliocentric orbit was Luna 1 in 1959. An incorrectly timed upper-stage burn caused it to miss its planned impact on the Moon. Trans-Mars injection A trans-Mars injection (TMI) is a heliocentric orbit in which a propulsive maneuver is used to set a spacecraft on a trajectory, also known as Mars transfer orbit, which will place it as far as Mars orbit. Every two years, low-energy transfer windows open up, which allow movement between the two planets with the lowest possible energy requirements. Transfer injections can place spacecraft into either a Hohmann transfer orbit or bi-elliptic transfer orbit. Trans-Mars injections can be either a single maneuver burn, such as that used by the NASA MAVEN orbiter in 2013, or a series of perigee kicks, such as that used by the ISRO Mars Orbiter Mission in 2013. See also Astrodynamics Earth's orbit Geocentric orbit Heliocentrism List of artificial objects in heliocentric orbit List of orbits Low-energy transfer References Orbits Astrodynamics Spacecraft propulsion Orbital maneuvers Exploration of Mars
Heliocentric orbit
[ "Engineering" ]
457
[ "Astrodynamics", "Aerospace engineering" ]
2,145,101
https://en.wikipedia.org/wiki/Floating%20breech
A floating breech is a breechblock of a firearm that is not held rigidly to the barrel at the moment of firing, but instead is free to move in the opposite direction to the projectile. This can help to reduce the recoil induced in the body of the firearm so long as the subsequent motion of the breechblock is retarded in some manner - either by a spring, or by back-pressure against a piston attached to the breechblock provided by tapping the expelled propellant gases. The motion of the breech and/or the expansion of the expelled gases can also be used to power a case-ejection mechanism and/or reloading mechanism. If the breechblock and barrel are locked for firing but the barrel is not fixed to the body, it is described as a floating action. If the ammunition is caseless, the time required to expel the previous case is removed from the cycle time of an automatic firearm and a higher rate of fire can be obtained than with normal ammunition. The Heckler & Koch G11 Assault rifle uses caseless ammunition, but has a rotating breech, not a floating breech. However, the barrel, breech and magazine as a whole 'float' within the housing of the weapon. Note that the breechblock in this firearm rotates about an axis perpendicular to the main axis of the barrel, whereas most other rotating breechblocks rotate about the same axis as the long axis of the barrel. See also Floating chamber blowback References Firearm components
Floating breech
[ "Technology" ]
310
[ "Firearm components", "Components" ]
2,145,168
https://en.wikipedia.org/wiki/Metric%20tensor%20%28general%20relativity%29
In general relativity, the metric tensor (in this context often abbreviated to simply the metric) is the fundamental object of study. The metric captures all the geometric and causal structure of spacetime, being used to define notions such as time, distance, volume, curvature, angle, and separation of the future and the past. In general relativity, the metric tensor plays the role of the gravitational potential in the classical theory of gravitation, although the physical content of the associated equations is entirely different. Gutfreund and Renn say "that in general relativity the gravitational potential is represented by the metric tensor." Notation and conventions This article works with a metric signature that is mostly positive (); see sign convention. The gravitation constant will be kept explicit. This article employs the Einstein summation convention, where repeated indices are automatically summed over. Definition Mathematically, spacetime is represented by a four-dimensional differentiable manifold and the metric tensor is given as a covariant, second-degree, symmetric tensor on , conventionally denoted by . Moreover, the metric is required to be nondegenerate with signature . A manifold equipped with such a metric is a type of Lorentzian manifold. Explicitly, the metric tensor is a symmetric bilinear form on each tangent space of that varies in a smooth (or differentiable) manner from point to point. Given two tangent vectors and at a point in , the metric can be evaluated on and to give a real number: This is a generalization of the dot product of ordinary Euclidean space. Unlike Euclidean space – where the dot product is positive definite – the metric is indefinite and gives each tangent space the structure of Minkowski space. Local coordinates and matrix representations Physicists usually work in local coordinates (i.e. coordinates defined on some local patch of ). In local coordinates (where is an index that runs from 0 to 3) the metric can be written in the form The factors are one-form gradients of the scalar coordinate fields . The metric is thus a linear combination of tensor products of one-form gradients of coordinates. The coefficients are a set of 16 real-valued functions (since the tensor is a tensor field, which is defined at all points of a spacetime manifold). In order for the metric to be symmetric giving 10 independent coefficients. If the local coordinates are specified, or understood from context, the metric can be written as a symmetric matrix with entries . The nondegeneracy of means that this matrix is non-singular (i.e. has non-vanishing determinant), while the Lorentzian signature of implies that the matrix has one negative and three positive eigenvalues. Physicists often refer to this matrix or the coordinates themselves as the metric (see, however, abstract index notation). With the quantities being regarded as the components of an infinitesimal coordinate displacement four-vector (not to be confused with the one-forms of the same notation above), the metric determines the invariant square of an infinitesimal line element, often referred to as an interval. The interval is often denoted The interval imparts information about the causal structure of spacetime. When , the interval is timelike and the square root of the absolute value of is an incremental proper time. Only timelike intervals can be physically traversed by a massive object. When , the interval is lightlike, and can only be traversed by (massless) things that move at the speed of light. 
When , the interval is spacelike and the square root of acts as an incremental proper length. Spacelike intervals cannot be traversed, since they connect events that are outside each other's light cones. Events can be causally related only if they are within each other's light cones. The components of the metric depend on the choice of local coordinate system. Under a change of coordinates , the metric components transform as Properties The metric tensor plays a key role in index manipulation. In index notation, the coefficients of the metric tensor provide a link between covariant and contravariant components of other tensors. Contracting the contravariant index of a tensor with one of a covariant metric tensor coefficient has the effect of lowering the index and similarly a contravariant metric coefficient raises the index Applying this property of raising and lowering indices to the metric tensor components themselves leads to the property For a diagonal metric (one for which coefficients ; i.e. the basis vectors are orthogonal to each other), this implies that a given covariant coefficient of the metric tensor is the inverse of the corresponding contravariant coefficient , etc. Examples Flat spacetime The simplest example of a Lorentzian manifold is flat spacetime, which can be given as with coordinates and the metric These coordinates actually cover all of . The flat space metric (or Minkowski metric) is often denoted by the symbol and is the metric used in special relativity. In the above coordinates, the matrix representation of is (An alternative convention replaces coordinate by , and defines as in .) In spherical coordinates , the flat space metric takes the form where is the standard metric on the 2-sphere. Black hole metrics The Schwarzschild metric describes an uncharged, non-rotating black hole. There are also metrics that describe rotating and charged black holes. Schwarzschild metric Besides the flat space metric the most important metric in general relativity is the Schwarzschild metric which can be given in one set of local coordinates by where, again, is the standard metric on the 2-sphere. Here, is the gravitation constant and is a constant with the dimensions of mass. Its derivation can be found here. The Schwarzschild metric approaches the Minkowski metric as approaches zero (except at the origin where it is undefined). Similarly, when goes to infinity, the Schwarzschild metric approaches the Minkowski metric. With coordinates the metric can be written as Several other systems of coordinates have been devised for the Schwarzschild metric: Eddington–Finkelstein coordinates, Gullstrand–Painlevé coordinates, Kruskal–Szekeres coordinates, and Lemaître coordinates. Rotating and charged black holes The Schwarzschild solution supposes an object that is not rotating in space and is not charged. To account for charge, the metric must satisfy the Einstein field equations like before, as well as Maxwell's equations in a curved spacetime. A charged, non-rotating mass is described by the Reissner–Nordström metric. Rotating black holes are described by the Kerr metric (uncharged) and the Kerr–Newman metric (charged). Other metrics Other notable metrics are: Alcubierre metric, de Sitter/anti-de Sitter metrics, Friedmann–Lemaître–Robertson–Walker metric, Isotropic coordinates, Lemaître–Tolman metric, Peres metric, Rindler coordinates, Weyl–Lewis–Papapetrou coordinates, Gödel metric. Some of them are without the event horizon or can be without the gravitational singularity. 
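Several inline formulas in this article appear to have been lost in extraction. As a hedged reconstruction from the surrounding prose (standard textbook forms, not necessarily the exact expressions the article originally displayed), the key equations referenced above can be written in LaTeX as follows:

% Reconstructed standard forms for the elided expressions above,
% using the mostly positive signature (- + + +) stated in the conventions.
\begin{align}
  g &= g_{\mu\nu}\, \mathrm{d}x^{\mu} \otimes \mathrm{d}x^{\nu},
      \qquad g_{\mu\nu} = g_{\nu\mu}
      && \text{metric in local coordinates} \\
  \mathrm{d}s^{2} &= g_{\mu\nu}\, \mathrm{d}x^{\mu}\, \mathrm{d}x^{\nu}
      && \text{invariant interval} \\
  \eta_{\mu\nu} &= \operatorname{diag}(-1,\ 1,\ 1,\ 1)
      && \text{flat (Minkowski) metric} \\
  \mathrm{d}s^{2} &= -\left(1 - \frac{2GM}{c^{2}r}\right) c^{2}\,\mathrm{d}t^{2}
      + \left(1 - \frac{2GM}{c^{2}r}\right)^{-1} \mathrm{d}r^{2}
      + r^{2}\, \mathrm{d}\Omega^{2}
      && \text{Schwarzschild line element}
\end{align}
% Here d\Omega^{2} denotes the standard metric on the 2-sphere, as in the text above.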
Volume The metric induces a natural volume form (up to a sign), which can be used to integrate over a region of a manifold. Given local coordinates for the manifold, the volume form can be written where is the determinant of the matrix of components of the metric tensor for the given coordinate system. Curvature The metric completely determines the curvature of spacetime. According to the fundamental theorem of Riemannian geometry, there is a unique connection on any semi-Riemannian manifold that is compatible with the metric and torsion-free. This connection is called the Levi-Civita connection. The Christoffel symbols of this connection are given in terms of partial derivatives of the metric in local coordinates by the formula (where commas indicate partial derivatives). The curvature of spacetime is then given by the Riemann curvature tensor which is defined in terms of the Levi-Civita connection ∇. In local coordinates this tensor is given by: The curvature is then expressible purely in terms of the metric and its derivatives. Einstein's equations One of the core ideas of general relativity is that the metric (and the associated geometry of spacetime) is determined by the matter and energy content of spacetime. Einstein's field equations: where the Ricci curvature tensor and the scalar curvature relate the metric (and the associated curvature tensors) to the stress–energy tensor . This tensor equation is a complicated set of nonlinear partial differential equations for the metric components. Exact solutions of Einstein's field equations are very difficult to find. See also Alternatives to general relativity Introduction to the mathematics of general relativity Mathematics of general relativity Ricci calculus References See general relativity resources for a list of references. Tensors in general relativity Time in physics
Metric tensor (general relativity)
[ "Physics", "Engineering" ]
1,804
[ "Time in physics", "Physical phenomena", "Tensors", "Physical quantities", "Tensor physical quantities", "Tensors in general relativity" ]
2,145,170
https://en.wikipedia.org/wiki/Multicategory
In mathematics (especially category theory), a multicategory is a generalization of the concept of category that allows morphisms of multiple arity. If morphisms in a category are viewed as analogous to functions, then morphisms in a multicategory are analogous to functions of several variables. Multicategories are also sometimes called operads, or colored operads. Definition A (non-symmetric) multicategory consists of a collection (often a proper class) of objects; for every finite sequence of objects () and every object Y, a set of morphisms from to Y; and for every object X, a special identity morphism (with n = 1) from X to X. Additionally, there are composition operations: Given a sequence of sequences of objects, a sequence of objects, and an object Z: if for each , fj is a morphism from to Yj; and g is a morphism from to Z: then there is a composite morphism from to Z. This must satisfy certain axioms: If m = 1, Z = Y0, and g is the identity morphism for Y0, then g(f0) = f0; if for each , nj = 1, , and fj is the identity morphism for Yj, then ; and an associativity condition: if for each and , is a morphism from to , then are identical morphisms from to Z. Comcategories A comcategory (co-multi-category) is a totally ordered set O of objects, a set A of multiarrows with two functions where O% is the set of all finite ordered sequences of elements of O. The dual image of a multiarrow f may be summarized A comcategory C also has a multiproduct with the usual character of a composition operation. C is said to be associative if there holds a multiproduct axiom in relation to this operator. Any multicategory, symmetric or non-symmetric, together with a total-ordering of the object set, can be made into an equivalent comcategory. A multiorder is a comcategory satisfying the following conditions. There is at most one multiarrow with given head and ground. Each object x has a unit multiarrow. A multiarrow is a unit if its ground has one entry. Multiorders are a generalization of partial orders (posets), and were first introduced (in passing) by Tom Leinster. Examples There is a multicategory whose objects are (small) sets, where a morphism from the sets X1, X2, ..., and Xn to the set Y is an n-ary function, that is a function from the Cartesian product X1 × X2 × ... × Xn to Y. There is a multicategory whose objects are vector spaces (over the rational numbers, say), where a morphism from the vector spaces X1, X2, ..., and Xn to the vector space Y is a multilinear operator, that is a linear transformation from the tensor product X1 ⊗ X2 ⊗ ... ⊗ Xn to Y. More generally, given any monoidal category C, there is a multicategory whose objects are objects of C, where a morphism from the C-objects X1, X2, ..., and Xn to the C-object Y is a C-morphism from the monoidal product of X1, X2, ..., and Xn to Y. An operad is a multicategory with one unique object; except in degenerate cases, such a multicategory does not come from a monoidal category. Examples of multiorders include pointed multisets , integer partitions , and combinatory separations . The triangles (or compositions) of any multiorder are morphisms of a (not necessarily associative) category of contractions and a comcategory of decompositions. The contraction category for the multiorder of multimin partitions is the simplest known category of multisets. 
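The first example above (sets with n-ary functions as multimorphisms) can be made concrete with a short sketch. The code below is illustrative only; the function names are invented here, and the composition mirrors the composite-morphism clause of the definition.

# Sketch of the multicategory of sets: a multimorphism from (X1, ..., Xn) to Y
# is an n-ary Python function. `compose` plugs the outputs of f_0, ..., f_{m-1}
# into an m-ary morphism g, as in the composite-morphism clause above.
def compose(g, fs, arities):
    """Compose m-ary g with morphisms fs, where fs[j] takes arities[j] inputs."""
    def composite(*args):
        outputs, i = [], 0
        for f, n in zip(fs, arities):
            outputs.append(f(*args[i:i + n]))
            i += n
        return g(*outputs)
    return composite

identity = lambda x: x            # the identity multimorphism (n = 1)

f0 = lambda a, b: a + b           # f0 : (int, int) -> int
f1 = lambda s: len(s)             # f1 : (str,) -> int
g = lambda u, v: u * v            # g  : (int, int) -> int

h = compose(g, [f0, f1], [2, 1])  # h  : (int, int, str) -> int
print(h(2, 3, "abcd"))            # (2 + 3) * len("abcd") = 20
# Composing g with identity morphisms leaves its behaviour unchanged,
# matching the identity axioms in the definition:
print(compose(g, [identity, identity], [1, 1])(6, 7))  # 42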
Applications Multicategories are often incorrectly considered to belong to higher category theory, as their original application was the observation that the operators and identities satisfied by higher categories are the objects and multiarrows of a multicategory. The study of n-categories was in turn motivated by applications in algebraic topology and attempts to describe the homotopy theory of higher dimensional manifolds. However it has mostly grown out of this motivation and is now also considered to be part of pure mathematics. The correspondence between contractions and decompositions of triangles in a multiorder allows one to construct an associative algebra called its incidence algebra. Any element that is nonzero on all unit arrows has a compositional inverse, and the Möbius function of a multiorder is defined as the compositional inverse of the zeta function (constant-one) in its incidence algebra. History Multicategories were first introduced under that name by Jim Lambek in "Deductive systems and categories II" (1969) He mentions (p. 108) that he was "told that multicategories have also been studied by [Jean] Benabou and [Pierre] Cartier", and indeed Leinster opines that "the idea might have occurred to anyone who knew what both a category and a multilinear map were". References Category theory
Multicategory
[ "Mathematics" ]
1,160
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
2,145,345
https://en.wikipedia.org/wiki/Ganglia%20%28software%29
Ganglia is a scalable, distributed monitoring tool for high-performance computing systems, clusters and networks. The software is used to view either live or recorded statistics covering metrics such as CPU load averages or network utilization for many nodes. Ganglia software is bundled with enterprise-level Linux distributions such as Red Hat Enterprise Linux (RHEL) or the CentOS repackaging of the same. Ganglia grew out of requirements for monitoring systems by Berkeley (University of California) but now sees use by commercial and educational organisations such as Cray, MIT, NASA and Twitter. Ganglia It is based on a hierarchical design targeted at federations of clusters. It relies on a multicast-based listen/announce protocol to monitor state within clusters and uses a tree of point-to-point connections amongst representative cluster nodes to federate clusters and aggregate their state. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is currently in use on over 500 clusters around the world. It has been used to link clusters across university campuses and around the world and can scale to handle clusters with 2000 nodes. The ganglia system comprises two unique daemons, a PHP-based web front-end, and a few other small utility programs. Ganglia Monitoring Daemon (gmond) Gmond is a multi-threaded daemon which runs on each cluster node you want to monitor. Installation does not require having a common NFS filesystem or a database back-end, installing special accounts or maintaining configuration files. Gmond has four main responsibilities: Monitor changes in host state. Announce relevant changes. Listen to the state of all other ganglia nodes via a unicast or multicast channel. Answer requests for an XML description of the cluster state. Each gmond transmits information in two different ways: Unicasting or Multicasting host state in external data representation (XDR) format using UDP messages. Sending XML over a TCP connection. Ganglia Meta Daemon (gmetad) Federation in Ganglia is achieved using a tree of point-to-point connections amongst representative cluster nodes to aggregate the state of multiple clusters. At each node in the tree, a Ganglia Meta Daemon (gmetad) periodically polls a collection of child data sources, parses the collected XML, saves all numeric, volatile metrics to round-robin databases and exports the aggregated XML over a TCP socket to clients. Data sources may be either gmond daemons, representing specific clusters, or other gmetad daemons, representing sets of clusters. Data sources use source IP addresses for access control and can be specified using multiple IP addresses for failover. The latter capability is natural for aggregating data from clusters since each gmond daemon contains the entire state of its cluster. Ganglia PHP Web Front-end The Ganglia web front-end provides a view of the gathered information via real-time dynamic web pages. Most importantly, it displays Ganglia data in a meaningful way for system administrators and computer users. 
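As an illustration of the XML export described above, here is a minimal client sketch. It assumes a gmetad reachable on localhost at its default XML port (8651, mentioned below in connection with the web front-end) and the usual CLUSTER/HOST/METRIC element layout of the Ganglia XML tree; the host, port, and metric names are assumptions to adjust for a real deployment.

import socket
import xml.etree.ElementTree as ET

# Minimal sketch: read the aggregated cluster state that gmetad exports over
# a TCP socket, then summarize a couple of typical metrics per host.
GMETAD_HOST = "localhost"   # assumed host running gmetad
GMETAD_PORT = 8651          # assumed default gmetad XML port

def fetch_ganglia_xml(host=GMETAD_HOST, port=GMETAD_PORT):
    """Connect to gmetad and return the XML document it pushes, as text."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    root = ET.fromstring(fetch_ganglia_xml())
    for cluster in root.iter("CLUSTER"):
        print("cluster:", cluster.get("NAME"))
        for host in cluster.iter("HOST"):
            metrics = {m.get("NAME"): m.get("VAL") for m in host.iter("METRIC")}
            print("  ", host.get("NAME"),
                  "load_one =", metrics.get("load_one"),
                  "cpu_num =", metrics.get("cpu_num"))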
Although the web front-end to ganglia started as a simple HTML view of the XML tree, it has evolved into a system that keeps a colorful history of all collected data. The Ganglia web front-end caters to system administrators and users. For example, one can view the CPU utilization over the past hour, day, week, month, or year. The web front-end shows similar graphs for memory usage, disk usage, network statistics, number of running processes, and all other Ganglia metrics. The web front-end depends on the existence of the gmetad which provides it with data from several Ganglia sources. Specifically, the web front-end will open the local port 8651 (by default) and expects to receive a Ganglia XML tree. The web pages themselves are highly dynamic; any change to the Ganglia data appears immediately on the site. This behavior leads to a very responsive site, but requires that the full XML tree be parsed on every page access. Therefore, the Ganglia web front-end should run on a fairly powerful, dedicated machine if it presents a large amount of data. The Ganglia web front-end is written in PHP, and uses graphs generated by gmetad to display history information. It has been tested on many flavours of Unix (primarily Linux) with the Apache webserver and the PHP5 module. References External links Wikimedia Ganglia instance Free network management software Free software programmed in C Free software programmed in Perl Free software programmed in Python Internet Protocol based network software Parallel computing System administration System monitors Software using the BSD license Multi-agent network management software
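To make the data flow concrete, the sketch below polls a gmetad (or gmond) instance for its XML tree and prints a few metrics. It is a minimal illustration only: the host, the port 8651 mentioned above, and the HOST/METRIC element and NAME/VAL attribute names are assumptions about a typical Ganglia XML dump, not guaranteed details of any particular installation.

```python
import socket
import xml.etree.ElementTree as ET

def fetch_ganglia_xml(host="127.0.0.1", port=8651, timeout=5.0):
    """Read the full XML tree that gmetad/gmond writes to a connecting TCP client."""
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    tree = ET.fromstring(fetch_ganglia_xml())
    # One line per host metric, e.g. "node01 load_one 0.42"
    for host in tree.iter("HOST"):
        for metric in host.iter("METRIC"):
            print(host.get("NAME"), metric.get("NAME"), metric.get("VAL"))
```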
Ganglia (software)
[ "Technology" ]
1,008
[ "Information systems", "System administration" ]
10,704,974
https://en.wikipedia.org/wiki/Structural%20risk%20minimization
Structural risk minimization (SRM) is an inductive principle of use in machine learning. Commonly in machine learning, a generalized model must be selected from a finite data set, with the consequent problem of overfitting – the model becoming too strongly tailored to the particularities of the training set and generalizing poorly to new data. The SRM principle addresses this problem by balancing the model's complexity against its success at fitting the training data. This principle was first set out in a 1974 book by Vladimir Vapnik and Alexey Chervonenkis and uses the VC dimension. In practical terms, Structural Risk Minimization is implemented by minimizing $E_{\mathrm{train}} + \beta H(W)$, where $E_{\mathrm{train}}$ is the training error, the function $H(W)$ is called a regularization function, and $\beta$ is a constant. $H(W)$ is chosen such that it takes large values on parameters $W$ that belong to high-capacity subsets of the parameter space. Minimizing $H(W)$ in effect limits the capacity of the accessible subsets of the parameter space, thereby controlling the trade-off between minimizing the training error and minimizing the expected gap between the training error and test error. The SRM problem can be formulated in terms of data. Given $n$ data points consisting of data $x$ and labels $y$, the objective is often expressed in the following manner: $\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\left(h_{\theta}(x_i) - y_i\right)^2 + \lambda \lVert \theta \rVert$. The first term is the mean squared error (MSE) term between the value of the learned model, $h_{\theta}(x_i)$, and the given labels $y_i$. This term is the training error, $E_{\mathrm{train}}$, that was discussed earlier. The second term, $\lambda \lVert \theta \rVert$, places a prior over the weights, to favor sparsity and penalize larger weights. The trade-off coefficient, $\lambda$, is a hyperparameter that places more or less importance on the regularization term. Larger $\lambda$ encourages sparser weights at the expense of a more optimal MSE, and smaller $\lambda$ relaxes regularization allowing the model to fit to data. Note that as $\lambda \to \infty$ the weights become zero, and as $\lambda \to 0$, the model typically suffers from overfitting. See also Vapnik–Chervonenkis theory Support vector machines Model selection Occam Learning Empirical risk minimization Ridge regression Regularization (mathematics) References External links Structural risk minimization at the support vector machines website. Machine learning
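As a concrete illustration of the trade-off described above, the sketch below fits polynomial models of increasing capacity to a small dataset and scores each one by training error plus a complexity penalty, picking the model that minimizes the penalized (structural) risk. The quadratic penalty form, the synthetic data, and the value of the trade-off coefficient are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)  # noisy samples

def penalized_risk(x, y, degree, lam=1e-3):
    """Empirical risk (MSE) of a degree-d polynomial fit plus a crude capacity penalty."""
    coeffs = np.polyfit(x, y, degree)              # higher degree = higher-capacity model class
    residuals = np.polyval(coeffs, x) - y
    train_error = np.mean(residuals ** 2)          # E_train
    complexity = lam * np.sum(coeffs ** 2)         # stand-in for the regularization term
    return train_error + complexity

best_degree = min(range(1, 11), key=lambda d: penalized_risk(x, y, d))
print("degree chosen by penalized risk:", best_degree)
```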
Structural risk minimization
[ "Technology", "Engineering" ]
447
[ "Machine learning", "Computer science stubs", "Computer science", "Artificial intelligence engineering", "Computing stubs" ]
10,705,807
https://en.wikipedia.org/wiki/Gillham%20code
Gillham code is a zero-padded 12-bit binary code using a parallel nine- to eleven-wire interface, the Gillham interface, that is used to transmit uncorrected barometric altitude between an encoding altimeter or analog air data computer and a digital transponder. It is a modified form of a Gray code and is sometimes referred to simply as a "Gray code" in avionics literature. History The Gillham interface and code are an outgrowth of the 12-bit IFF Mark X system, which was introduced in the 1950s. The civil transponder interrogation modes A and C were defined in air traffic control (ATC) and secondary surveillance radar (SSR) in 1960. The code is named after Ronald Lionel Gillham, a signals officer at Air Navigational Services, Ministry of Transport and Civil Aviation, who had been appointed a civil member of the Most Excellent Order of the British Empire (MBE) in the Queen's 1955 Birthday Honours. He was the UK's representative to the International Air Transport Association (IATA) committee developing the specification for the second generation of air traffic control system, known in the UK as "Plan Ahead", and is said to have had the idea of using a modified Gray code. The final code variant was developed in late 1961 for the ICAO Communications Division meeting (VII COM) held in January/February 1962, and described in a 1962 FAA report. The exact timeframe and circumstances of the term Gillham code being coined are unclear, but by 1963 the code was already recognized under this name. By the mid-1960s the code was also known as MOA–Gillham code or ICAO–Gillham code. ARINC 572 specified the code as well in 1968. Once recommended by the ICAO for automatic height transmission for air traffic control purposes, the interface is now discouraged and has been mostly replaced by modern serial communication in newer aircraft. Altitude encoder An altitude encoder takes the form of a small metal box containing a pressure sensor and signal conditioning electronics. The pressure sensor is often heated, which requires a warm-up time during which height information is either unavailable or inaccurate. Older style units can have a warm-up time of up to 10 minutes; more modern units warm up in less than 2 minutes. Some of the very latest encoders incorporate unheated 'instant on' type sensors. During the warm-up of older style units the height information may gradually increase until it settles at its final value. This is not normally a problem as the power would typically be applied before the aircraft enters the runway and so it would be transmitting correct height information soon after take-off. The encoder has an open-collector output, compatible with 14 V or 28 V electrical systems. Coding The height information is represented as 11 binary digits in a parallel form using 11 separate lines designated D2 D4 A1 A2 A4 B1 B2 B4 C1 C2 C4. As a twelfth bit, the Gillham code contains a D1 bit but this is unused and consequently set to zero in practical applications. Different classes of altitude encoder do not use all of the available bits. All use the A, B and C bits; increasing altitude limits require more of the D bits. Up to and including 30700 ft does not require any of the D bits (9-wire interface). This is suitable for most light general aviation aircraft. Up to and including 62700 ft requires D4 (10-wire interface). Up to and including 126700 ft requires D4 and D2 (11-wire interface). D1 is never used. 
Decoding Bits D2 (msbit) through B4 (lsbit) encode the pressure altitude in 500 ft increments (above a base altitude of −1000±250 ft) in a standard 8-bit reflected binary code (Gray code). The specification stops at code 1000000 (126500±250 ft), above which D1 would be needed as a most significant bit. Bits C1, C2 and C4 use a mirrored 5-state 3-bit Gray BCD code of a Giannini Datex code type (with the first 5 states resembling O'Brien code type II) to encode the offset from the 500 ft altitude in 100 ft increments. Specifically, if the parity of the 500 ft code is even then codes 001, 011, 010, 110 and 100 encode −200, −100, 0, +100 and +200 ft relative to the 500 ft altitude. If the parity is odd, the assignments are reversed. Codes 000, 101 and 111 are not used. The Gillham code can be decoded using various methods. Standard techniques use hardware or software solutions. The latter often uses a lookup table but an algorithmic approach can be taken. See also Air traffic control radar beacon system (ATCRBS) Selective Identification Feature (SIF) IFF code Flight level ARINC 429 Notes References Further reading (NB. Supersedes MIL-HDBK-231(AS) (1970-07-01).) Annex 10 - Volume IV - Surveillance Radar and Collision Avoidance Systems ; 4th Edition; ICAO; 280 pages; 2007. DO-181E Minimum Operational Performance Standards for ATCRBS / Mode S Airborne Equipment; Rev E; RTCA; 2011. Data transmission Avionics
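A software decoder following the description above can be sketched as follows. The bit ordering, the remapping of the C-bit value 7 to 5, and the −1200 ft constant are inferred from the stated −1000 ± 250 ft base and ±200 ft offsets, so treat this as an illustrative reading of the text rather than a certified avionics implementation; D1 is assumed to be zero as noted above.

```python
def gray_to_binary(gray):
    """Convert a reflected-binary (Gray) code word to plain binary."""
    binary = 0
    while gray:
        binary ^= gray
        gray >>= 1
    return binary

def gillham_to_altitude_ft(d2, d4, a1, a2, a4, b1, b2, b4, c1, c2, c4):
    """Decode one Gillham word into pressure altitude in feet."""
    # 500 ft steps: 8-bit Gray code D2 D4 A1 A2 A4 B1 B2 B4 (msb..lsb)
    n500 = gray_to_binary(
        (d2 << 7) | (d4 << 6) | (a1 << 5) | (a2 << 4)
        | (a4 << 3) | (b1 << 2) | (b2 << 1) | b4
    )
    # 100 ft steps: 5-state Gray-coded C1 C2 C4 (valid values decode to 1, 2, 3, 4, 7)
    n100 = gray_to_binary((c1 << 2) | (c2 << 1) | c4)
    if n100 == 7:
        n100 = 5
    n100 -= 1                 # now 0..4, i.e. -200..+200 ft around the 500 ft level
    if n500 % 2:              # odd 500 ft code: the 100 ft assignments are mirrored
        n100 = 4 - n100
    return n500 * 500 + n100 * 100 - 1200

# Example: all-zero D/A/B bits with C1 C2 C4 = 0 1 0 decodes to -1000 ft
print(gillham_to_altitude_ft(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0))
```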
Gillham code
[ "Technology" ]
1,105
[ "Avionics", "Aircraft instruments" ]
10,705,831
https://en.wikipedia.org/wiki/Isochore%20%28genetics%29
In genetics, an isochore is a large region of genomic DNA (greater than 300 kilobases) with a high degree of uniformity in GC content; that is, guanine (G) and cytosine (C) bases. The distribution of bases within a genome is non-random: different regions of the genome have different amounts of G-C base pairs, such that regions can be classified and identified by the proportion of G-C base pairs they contain. Bernardi and colleagues first noticed the compositional non-uniformity of vertebrate genomes using thermal melting and density gradient centrifugation. The DNA fragments extracted by the gradient centrifugation were later termed "isochores", which was subsequently defined as "very long (much greater than 200 KB) DNA segments" that "are fairly homogeneous in base composition and belong to a small number of major classes distinguished by differences in guanine-cytosine (GC) content". Subsequently, the isochores "grew" and were claimed to be ">300 kb in size." The theory proposed that the isochore composition of genomes varies markedly between "warm-blooded" (homeotherm) vertebrates and "cold-blooded" (poikilotherm) vertebrates and later became known as the isochore theory. The thermodynamic stability hypothesis The isochore theory purported that the genome of "warm-blooded" vertebrates (mammals and birds) are mosaics of long isochoric regions of alternating GC-poor and GC-rich composition, as opposed to the genome of "cold-blooded" vertebrates (fishes and amphibians) that were supposed to lack GC-rich isochores. These findings were explained by the thermodynamic stability hypothesis, attributing genomic structure to body temperature. GC-rich isochores were purported to be a form of adaptation to environmental pressures, as an increase in genomic GC-content could protect DNA, RNA, and proteins from degradation by heat. Despite its attractive simplicity, the thermodynamic stability hypothesis has been repeatedly shown to be in error . Many authors showed the absence of a relationship between temperature and GC-content in vertebrates, while others showed the existence of GC-rich domains in "cold-blooded" vertebrates such as crocodiles, amphibians, and fish. Principles of the isochore theory The isochore theory was the first to identify the nonuniformity of nucleotide composition within vertebrate genomes and predict that the genome of "warm-blooded" vertebrates such as mammals and birds are mosaic of isochores (Bernardi et al. 1985). The human genome, for example, was described as a mosaic of alternating low and high GC content isochores belonging to five compositional families, L1, L2, H1, H2, and H3, whose corresponding ranges of GC contents were said to be <38%, 38%-42%, 42%-47%, 47%-52%, and >52%, respectively. The main predictions of the isochore theory are that: GC content of the third codon position (GC3) of protein coding genes is correlated with the GC content of the isochores embedding the corresponding genes. The genome organization of warm-blooded vertebrates is a mosaic of mostly GC-rich isochores. Genome organization of cold-blooded vertebrates is characterized by low GC content levels and lower compositional heterogeneity than warm-blooded vertebrates. Homogeneous domains do not reach the high GC levels attained by the genomes of warm-blooded vertebrates. The neutralist-selectionist controversy Two opposite explanations that endeavored to explain the formations of isochores were vigorously debated as part of the neutralist-selectionist controversy. 
The first view was that isochores reflect variable mutation processes among genomic regions consistent with the neutral model. Alternatively, isochores were posited as a result of natural selection for a certain compositional environment required by certain genes. Several hypotheses derive from the selectionist view, such as the thermodynamic stability hypothesis and the biased gene conversion hypothesis. Thus far, none of the theories provides a comprehensive explanation of the genome structure, and the topic is still under debate. The rise and fall of the isochore theory The isochore theory became one of the most useful theories in molecular evolution for many years. It was the first and most comprehensive attempt to explain the long-range compositional heterogeneity of vertebrate genomes within an evolutionary framework. Despite the interest in the early years in the isochore model, in recent years, the theory's methodology, terminology, and predictions have been challenged. Because this theory was proposed in the 20th century before complete genomes were sequenced, it could not be fully tested for nearly 30 years. In the beginning of the 21st century, when the first genomes were made available, it became clear that isochores do not exist in the human genome nor in other mammalian genomes. When attempts to find isochores failed, many attacked the very existence of isochores. The most important predictor of isochores, GC3, was shown to have no predictive power for the GC content of nearby genomic regions, refuting findings from over 30 years of research, which were the basis for many isochore studies. Isochore-originators replied that the term was misinterpreted, as isochores are not "homogeneous" but rather fairly homogeneous regions with a heterogeneous nature (especially of GC-rich regions) at the 5 kb scale, which only added to the already growing confusion. The reason for this ongoing frustration was the ambiguous definition of isochores as long and homogeneous, which allowed some researchers to discover "isochores" and others to dismiss them, although both camps used the same data. The unfortunate side effect of this controversy was an "arms race" in which isochores are frequently redefined and relabeled following conflicting findings that failed to reveal a "mosaic of isochores." The unfortunate outcome of this controversy and the ensuing terminological and methodological confusion was a loss of interest in isochores by the scientific community. When the most important core concept in the isochore literature, the thermodynamic stability hypothesis, was rejected, the theory lost its appeal. Even today, there is no clear definition of isochores, nor is there an algorithm that detects isochores. Isochores are detected manually by visual inspection of GC content curves; however, because this approach lacks scientific merit and is difficult to replicate by independent groups, the findings remain disputed. The compositional domain model As the study of isochores was de facto abandoned by most scientists, an alternative theory was proposed to describe the compositional organization of genomes in accordance with the most recent genomic studies. The Compositional Domain Model depicts genomes as a medley of short and long homogeneous and nonhomogeneous domains. The theory defines "compositional domains" as genomic regions with distinct GC-contents as determined by a computational segmentation algorithm.
The homogeneity of compositional domains is compared to that of the chromosome on which they reside using the F-test, which separates them into compositionally homogeneous domains and compositionally nonhomogeneous domains based on the outcome of the test. Compositionally homogeneous domains that are sufficiently long (≥ 300 kb) are termed isochores or isochoric domains. These terms are in accordance with the literature as they provide a clear distinction between isochoric and nonisochoric domains. A comprehensive study of the human genome revealed a genomic organization where two-thirds of the genome is a mixture of many short compositionally homogeneous domains and relatively few long ones. The remaining portion of the genome is composed of nonhomogeneous domains. In terms of coverage, only 1% of the total number of compositionally homogeneous domains could be considered "isochores", which covered less than 20% of the genome. Since its inception the theory has received wide attention and has been extensively used to explain findings emerging from over a dozen new genome sequencing studies. However, many important questions remain open, such as which evolutionary forces shaped the structure of compositional domains and the ways they differ between different species. References DNA Molecular biology Biological classification
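The basic quantity behind all of the above can be illustrated with a toy calculation: scan a sequence in fixed windows, compute the GC fraction of each window, and bin it into the five compositional families (L1–H3) named earlier. This is only a didactic sketch; it is not the published segmentation algorithm or the F-test procedure discussed above, and the window size is an arbitrary choice.

```python
def gc_fraction(seq):
    """Fraction of G and C bases in a DNA string (case-insensitive)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def classify_gc(fraction):
    """Map a GC fraction onto the five families described above (L1, L2, H1, H2, H3)."""
    pct = 100 * fraction
    if pct < 38:
        return "L1"
    if pct < 42:
        return "L2"
    if pct < 47:
        return "H1"
    if pct < 52:
        return "H2"
    return "H3"

def window_families(seq, window=100_000):
    """Yield (start, GC%, family) for consecutive fixed-size windows."""
    for start in range(0, len(seq) - window + 1, window):
        frac = gc_fraction(seq[start:start + window])
        yield start, round(100 * frac, 1), classify_gc(frac)

# Tiny demonstration with a short artificial sequence and a small window
for row in window_families("ATATGCGCGCATATATGCGCGCGCGCAT", window=10):
    print(row)
```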
Isochore (genetics)
[ "Chemistry", "Biology" ]
1,770
[ "Biochemistry", "nan", "Molecular biology" ]
10,706,645
https://en.wikipedia.org/wiki/Bachelor%20of%20Computer%20Information%20Systems
The Bachelor of Computer Information Systems, also known as Bachelor of Computer & Information Science at the University of Oregon and The Ohio State University (abbreviated BSc CIS), is an undergraduate or bachelor's degree that focuses on practical applications of technology to support organizations while adding value to their offerings. In order to apply technology effectively in this manner, a broad range of subjects is covered, such as communications, business, networking, software design, and mathematics. The degree spans much of the computing field and is therefore applicable to computing positions in a variety of sectors. Some computer information systems programs have received accreditation from ABET, the recognized U.S. accreditor of college and university programs in applied science, computing, engineering, and technology. References Computer Information Systems
Bachelor of Computer Information Systems
[ "Technology" ]
155
[ "Computer science education", "Computer science" ]
10,707,241
https://en.wikipedia.org/wiki/Consumer%20ethnocentrism
Consumer ethnocentrism is a psychological concept that describes how consumers purchase products based on country of origin. It refers to ethnocentric views held by consumers in one country, the in-group, towards products from another country, the out-group (Shimp & Sharma, 1987). Consumers may believe that it is not appropriate, and possibly even immoral, to buy products from other countries. Consumer ethnocentrism is derived from the more general psychological concept of ethnocentrism. Basically, ethnocentric individuals tend to view their group as superior to others. As such, they view other groups from the perspective of their own and reject those that are different while accepting those that are similar (Netemeyer et al., 1991; Shimp & Sharma, 1987). This, in turn, derives from earlier sociological theories of in-groups and out-groups (Shimp & Sharma, 1987). Ethnocentrism, it is consistently found, is normal for an in-group to an out-group (Jones, 1997; Ryan & Bogart, 1997). Purchasing foreign products may be viewed as improper because it costs domestic jobs and hurts the economy. The purchase of foreign products may even be seen as simply unpatriotic (Klein, 2002; Netemeyer et al., 1991; Sharma, Shimp, & Shin, 1995; Shimp & Sharma, 1987). Attributes Consumer ethnocentrism gives individuals an understanding of what purchases are acceptable to the in-group, as well as feelings of identity and belonging. For consumers who are not ethnocentric, and for polycentric consumers, products are evaluated on their own merits regardless of their country of origin, or are possibly even viewed more positively because they are foreign (Shimp & Sharma, 1987; Vida & Dmitrovic, 2001). Brodowsky (1998) studied consumer ethnocentrism among car buyers in the United States and found a strong positive relationship between high ethnocentrism and country-based bias in the evaluation of automobiles. Consumers with low ethnocentrism appeared to evaluate automobiles based more on the merits of the actual automobile rather than its country of origin. Brodowsky suggests that understanding consumer ethnocentrism is critical for understanding country-of-origin effects. Several antecedents of consumer ethnocentrism have been identified by various studies. Consumers who tend to be less ethnocentric are those who are young, those who are male, those who are better educated, and those with higher income levels (Balabanis et al., 2001; Good & Huddleston, 1995; Sharma et al., 1995). Balabanis et al. found that the determinants of consumer ethnocentrism may vary from country to country and culture to culture. In Turkey, patriotism was found to be the most important motive for consumer ethnocentrism (Acikdilli et al., 2017). This, it was theorized, was due to Turkey's collectivist culture, with patriotism being an important expression of loyalty to the group. In the more individualistic Czech Republic, feelings of nationalism based on a sense of superiority and dominance appeared to provide the most important contribution to consumer ethnocentrism. The CETSCALE Shimp and Sharma (1987) developed consumer ethnocentrism into a measurable construct through the use of the consumer ethnocentric tendencies scale (CETSCALE). The initial development of the CETSCALE began with 225 different questions, which were narrowed down to 100 before being sent to a survey group for the first purification study. Through repeated purification studies, the number of questions was finally reduced to 17.
Repeated studies by Shimp and Sharma validated the CETSCALE in the U.S. While the 17-item CETSCALE is the original version developed by Shimp and Sharma (1987), shortened versions have been used. One, with 10 items, was developed alongside the full version. This is probably the most frequently used version of the CETSCALE, owing to its relatively small number of questions (Balabanis et al., 2001; Klein, 2002; Klein et al., 1998; Neese & Hult, 2002; Netemeyer et al., 1991; Vida & Dmitrovic, 2001). Other versions have been used with success, including a version used by Klein (2002) with just four items that was found to have a .96 correlation with the 10-item version. The first major test of the validity of the CETSCALE in countries other than the U.S. was carried out in 1991 (Netemeyer et al., 1991; Wang, 1996). Netemeyer et al. surveyed students in the U.S., France, Japan, and West Germany and compared the results. Both the 17-item version and the 10-item version were tested. It was found that both versions of the CETSCALE were reliable across the different cultures where they were tested. The results also helped validate the CETSCALE as a measure of consumer ethnocentricity. Since that time, the CETSCALE has been used in many studies in many different countries and cultures. References External links Ethnocentrism Consumer behaviour
Consumer ethnocentrism
[ "Biology" ]
1,128
[ "Behavior", "Consumer behaviour", "Human behavior" ]
10,708,073
https://en.wikipedia.org/wiki/Web%20series
A web series (also known as webseries, short-form series, and web show) is a series of short scripted or non-scripted online videos, generally in episodic form, released on the Internet (i.e. World Wide Web), which first emerged in the late 1990s and became more prominent in the early 2000s. A single instance of a web series program can be called an episode or a webisode. The scale of a web series is small and a typical episode can be anywhere from three to fifteen minutes in length. Web series are distributed online on video sharing websites and apps, such as YouTube, Vimeo and TikTok, and can be watched on devices such as smartphones, tablets, desktops, laptops, and Smart TVs (or television sets connected to the Internet with a media streaming device). They can also be released on social media platforms. Because of the nature of the Internet, a web series may be interactive and immersive. Web series are classified as new media. Web series are different from streaming television series, as the latter is intended to be watched on streaming platforms such as Netflix, Amazon Prime Video, and Hotstar. Although the design of a web series can be similar to that of a television series, its development and production do not entail the same financial investment required for a television series. The popularity of some web series, however, has led to them being optioned for television. A number of awards have been established to celebrate excellence in web series, like the Streamys, Webbys, IAWTV, and Indie Series Awards, although the Streamys and IAWTV also cover programs on streaming platforms. Most major award ceremonies have also created web series and digital media award categories, including the Emmy Awards and the Canadian Screen Awards. There are also several web series festivals, most notably in Los Angeles and Vancouver. History 1990s In April 1995, "Global Village Idiots", an episode of the reality-based program Rox on public access cable television in Bloomington, Indiana, was uploaded to the Internet, making Rox the first series distributed via the web. The same year, Scott Zakarin created The Spot, an episodic online story which integrated photos, videos, and blogs into the storyline. Likened to Melrose Place-on-the-Web, The Spot featured a rotating cast of characters playing trendy twenty-somethings who rented rooms in a fabled Santa Monica, California beach house called "The Spot". The Spot earned the title of Infoseek's "Cool Site of the Year," an award which later became the Webby. In January 1999, Showtime licensed the animated sci-fi web series WhirlGirl, making it the first independently produced web series licensed by a national television network. In February 1999, the series premiered simultaneously on Showtime and online. The character occasionally appeared on Showtime, for example hosting a "Lethal Ladies" programming block, but spent most of her time online, appearing in 100 webisodes. 2000s As broadband bandwidth began to increase in speed and availability, delivering high-quality video over the Internet became a reality. In the early 2000s, the Japanese anime industry began broadcasting original net animation (ONA), a type of original video animation (OVA) series, on the Internet. Early examples of ONA series include Infinite Ryvius: Illusion (2000), Ajimu (2001), and Mahou Yuugi (2001). In 2003, Microsoft launched MSN Video, offering NBC-related content.
Its web series Weird TV 2000, a spin-off of the syndicated television series Weird TV, featured dozens of shorts, comedy sketches, and mini-documentaries produced exclusively for MSN Video. The video-sharing site YouTube was launched in early 2005, allowing users to share television programs. YouTube co-founder Jawed Karim said the inspiration for YouTube first came from Janet Jackson's role in the 2004 Super Bowl incident, when her breast was exposed during her performance, and later from the 2004 Indian Ocean tsunami. Karim could not easily find video clips of either event online, which led to the idea of a video sharing site. From 2003 to 2006, many independent web series began to garner and achieve significant popularity, most notably the science fiction series known as Red vs. Blue by Rooster Teeth. The series was distributed independently using online portals YouTube and Revver, as well as the Rooster Teeth website, acquiring over 100 million social media views during its run. (Rooster Teeth would eventually create computer-animated web series RWBY in 2013.) In 2004, adult animated series Salad Fingers was created, which amassed a cult following. The drama Sam Has 7 Friends, which ran in the summer and fall of 2006, was nominated for a Daytime Emmy Award, and was temporarily removed from the Internet when it was acquired by Michael Eisner. In 2004–2005, Spanish producer Pedro Alonso Pablos recorded a series of video interviews featuring actors and directors such as Guillermo del Toro, Santiago Segura, Álex de la Iglesia, and Keanu Reeves, which were distributed through his own website. lonelygirl15, California Heaven, "The Burg", and SamHas7Friends also gained popularity during this time, acquiring audiences in the millions. (Science fiction thriller lonelygirl15 was so successful that it secured a sponsorship deal with Neutrogena in 2007.) In 2004, Stewart St. John, executive producer and head writer of 1990s webisodics The Spot, revived the brand for online audiences as The Spot (2.0), with a new cast, and as a separate soap opera on Sprint PCS Vision-enabled cell phones, creating the first American mobile phone series. St. John and partner Todd Fisher produced over 2,500 daily videos of the mobile soap, driving story lines across platforms to its web counterpart. In 2007, the creators of lonelygirl15 followed up the series' success with KateModern, a comedy-drama series that debuted on social network Bebo, and took place in the same fictional universe as their previous show. Big Fantastic created and produced Prom Queen, which was financed and distributed by Vuguru, and debuted on MySpace. These web serials highlighted interactivity with the audience in addition to the narrative on relatively low budgets. In contrast, the eight-webisode series Sanctuary, starring actor/producer Amanda Tapping, cost $4.3 Million to produce. Both Sanctuary and Prom Queen were nominated for a Daytime Emmy Award. Award-winning producer/director Marshall Herskovitz created the drama Quarterlife, which debuted on MySpace and was later distributed on NBC. In 2008, major television studios began releasing web series, such as the ABC comedy show Squeegies, the NBC sci-fi show Gemini Division, and the Bravo reality series The Malan Show. Warner Bros. relaunched The WB as an online network beginning with original mystery web series, Sorority Forever, created and produced by Big Fantastic and executive produced by McG. 
Meanwhile, MTV announced a new original web series created by Craig Brewer, $5 Cover, that brought together the indie music world and new media expansion. Joss Whedon created, produced and self-financed musical comedy-drama Dr. Horrible's Sing-Along Blog starring Neil Patrick Harris and Felicia Day. Big Fantastic wrote and produced Foreign Body, a mystery web series that served as a prequel to Robin Cook's novel of the same name. Beckett and Goodfried founded a new Internet studio, EQAL, and produced a spin-off from lonelygirl15 titled LG15: The Resistance. Mainstream press began to provide coverage. In the United Kingdom, KateModern ended its run on Bebo. Bebo also hosted a six-month-long reality travel show, The Gap Year, produced by Endemol UK, which also made interactive sci-fi drama Kirill for MSN. During MIPCOM in October 2008, MySpace announced plans for a second series and indicated that it was in talks with Australian cable network Foxtel to distribute their first series on network television. Additionally, MySpace spoke of their plans to produce versions of the MySpace Road Tour reality series in other countries. The emerging potential for success in web video caught the attention of top entertainment executives in America, including former Disney executive Michael Eisner, head of the Tornante Company at the time. Tornante's Vuguru subdivision partnered with Canadian media conglomerate Rogers Media on October 26, 2009, securing plans to produce upwards of 30 new web shows a year. Rogers Media agreed to help fund and distribute Vuguru's upcoming productions, thereby solidifying a connection between old and new media. In 2009, the first web series festival was established, named the Los Angeles Web Series Festival. Production and distribution The rise in popularity of mobile Internet video, along with technological improvements to storage, bandwidth, and bitrates, led to the erasure of accessibility and affordability barriers. This meant that high-speed broadband and streaming video capabilities for producing and distributing a web series became a feasible alternative to "traditional" series production, which was formerly mostly done for broadcast and cable television. In comparison with traditional TV series production, web series are typically less expensive to produce. This has allowed a wider range of creators to develop web series. As well, since web series are made available online, instead of being aired at a single preset time to specific regions, they enable producers to reach a potentially global audience who can access the shows 24 hours a day and seven days a week, at the time of their choosing. Moreover, in the 2010s, the rising affordability of tablets and smartphones and the rising ownership rates of these devices in industrialized nations mean that web series are available to a wider range of potential viewers, including commuters, travelers, and other people who are on the go. The emerging potential for success in web video has caught the eye of some of the top entertainment executives in America, including former Disney executive and current head of the Tornante Company, Michael Eisner. Eisner's Vuguru subdivision of Tornante partnered with Canadian media conglomerate Rogers Media on October 26, 2009, securing plans to produce over 30 new web shows a year. Rogers Media will help fund and distribute Vuguru's upcoming productions, solidifying a connection between traditional media and new media such as web series.
Web series can be distributed directly from the producers' websites, through streaming services or via online video sharing websites . Awards The Webby Awards, established in 1996 by the International Academy of Digital Arts and Sciences (IADAS), and the Indie Series Awards, established in 2009 by We Love Soaps, recognize independently produced comedy, drama, and reality TV entertainment created for the web. In 2009, the International Academy of Web Television (IAWTV) was founded with the mission to support and recognize artistic and technological achievements in the digital entertainment industry. It administered the selection of winners for the Streamy Awards, (which awards web series content) in 2009 and 2010. Due to the poor reception and execution of the 2010 Streamy Awards, IAWTV decided to halt its production of the award ceremony. The IAWTV followed this decision by forming the IAWTV Awards (which recognize creators, cast, and crew of short form digital series from around the world) in 2012. See also Duanju Web anime Webtoon Tubefilter Digital content Podcast International Emmy Award for Best Short-Form Series Los Angeles Web Series Festival Melbourne WebFest Vancouver Web Series Festival Channel 101 Shorty Awards List of web series Notes References Further reading External links Snobby Robot (magazine for web series creators) Internet properties established in 1995 Digital media New media Broadcasting
Web series
[ "Technology" ]
2,379
[ "Multimedia", "New media", "Digital media" ]
10,708,254
https://en.wikipedia.org/wiki/System%20safety
The system safety concept calls for a risk management strategy based on identification, analysis of hazards and application of remedial controls using a systems-based approach. This is different from traditional safety strategies which rely on control of conditions and causes of an accident based either on the epidemiological analysis or as a result of investigation of individual past accidents. The concept of system safety is useful in demonstrating adequacy of technologies when difficulties are faced with probabilistic risk analysis. The underlying principle is one of synergy: a whole is more than sum of its parts. Systems-based approach to safety requires the application of scientific, technical and managerial skills to hazard identification, hazard analysis, and elimination, control, or management of hazards throughout the life-cycle of a system, program, project or an activity or a product. "Hazop" is one of several techniques available for identification of hazards. System approach A system is defined as a set or group of interacting, interrelated or interdependent elements or parts, that are organized and integrated to form a collective unity or a unified whole, to achieve a common objective. This definition lays emphasis on the interactions between the parts of a system and the external environment to perform a specific task or function in the context of an operational environment. This focus on interactions is to take a view on the expected or unexpected demands (inputs) that will be placed on the system and see whether necessary and sufficient resources are available to process the demands. These might take form of stresses. These stresses can be either expected, as part of normal operations, or unexpected, as part of unforeseen acts or conditions that produce beyond-normal (i.e., abnormal) stresses. This definition of a system, therefore, includes not only the product or the process but also the influences that the surrounding environment (including human interactions) may have on the product’s or process’s safety performance. Conversely, system safety also takes into account the effects of the system on its surrounding environment. Thus, a correct definition and management of interfaces becomes very important. Broader definitions of a system are the hardware, software, human systems integration, procedures and training. Therefore, system safety as part of the systems engineering process should systematically address all of these domains and areas in engineering and operations in a concerted fashion to prevent, eliminate and control hazards. A “system", therefore, has implicit as well as explicit definition of boundaries to which the systematic process of hazard identification, hazard analysis and control is applied. The system can range in complexity from a crewed spacecraft to an autonomous machine tool. The system safety concept helps the system designer(s) to model, analyse, gain awareness about, understand and eliminate the hazards, and apply controls to achieve an acceptable level of safety. Ineffective decision making in safety matters is regarded as the first step in the sequence of hazardous flow of events in the "Swiss cheese" model of accident causation. Communications regarding system risk have an important role to play in correcting risk perceptions by creating, analysing and understanding information model to show what factors create and control the hazardous process. 
For almost any system, product, or service, the most effective means of limiting product liability and accident risks is to implement an organized system safety function, beginning in the conceptual design phase and continuing through to its development, fabrication, testing, production, use and ultimate disposal. The aim of the system safety concept is to gain assurance that a system and associated functionality behaves safely and is safe to operate. This assurance is necessary. Technological advances in the past have produced positive as well as negative effects. Root cause analysis A root cause analysis identifies the set of multiple causes that together might create a potential accident. Root cause techniques have been successfully borrowed from other disciplines and adapted to meet the needs of the system safety concept, most notably the tree structure from fault tree analysis, which was originally an engineering technique. The root cause analysis techniques can be categorised into two groups: a) tree techniques, and b) check list methods. There are several root causal analysis techniques, e.g. Management Oversight and Risk Tree (MORT) analysis. Others are Event and Causal Factor Analysis (ECFA), Multilinear Events Sequencing, Sequentially Timed Events Plotting Procedure, and Savannah River Plant Root Cause Analysis System. Use in other fields Safety engineering Safety engineering describes some methods used in nuclear and other industries. Traditional safety engineering techniques are focused on the consequences of human error and do not investigate the causes or reasons for the occurrence of human error. System safety concept can be applied to this traditional field to help identify the set of conditions for safe operation of the system. Modern and more complex systems in military and NASA with computer application and controls require functional hazard analyses and a set of detailed specifications at all levels that address safety attributes to be inherent in the design. The process following a system safety program plan, preliminary hazard analyses, functional hazard assessments and system safety assessments are to produce evidence based documentation that will drive safety systems that are certifiable and that will hold up in litigation. The primary focus of any system safety plan, hazard analysis and safety assessment is to implement a comprehensive process to systematically predict or identify the operational behavior of any safety-critical failure condition or fault condition or human error that could lead to a hazard and potential mishap. This is used to influence requirements to drive control strategies and safety attributes in the form of safety design features or safety devices to prevent, eliminate and control (mitigation) safety risk. In the distant past hazards were the focus for very simple systems, but as technology and complexity advanced in the 1970s and 1980s more modern and effective methods and techniques were invented using holistic approaches. Modern system safety is comprehensive and is risk based, requirements based, functional based and criteria based with goal structured objectives to yield engineering evidence to verify safety functionality is deterministic and acceptable risk in the intended operating environment. 
Software intensive systems that command, control and monitor safety-critical functions require extensive software safety analyses to influence detail design requirements, especially in more autonomous or robotic systems with little or no operator intervention. Systems of systems, such as a modern military aircraft or fighting ship with multiple parts and systems with multiple integration, sensor fusion, networking and interoperable systems, will require much partnering and coordination with multiple suppliers and vendors responsible for ensuring safety is a vital attribute planned in the overall system. Weapon system safety Weapon System Safety is an important application of the system safety field, due to the potentially destructive effects of a system failure or malfunction. A healthy skeptical attitude towards the system, when it is at the requirements definition and drawing-board stage, by conducting functional hazard analyses, would help in learning about the factors that create hazards and mitigations that control the hazards. A rigorous process is usually formally implemented as part of systems engineering to influence the design and improve the situation before the errors and faults weaken the system defences and cause accidents. Typically, weapons systems pertaining to ships, land vehicles, guided missiles and aircraft differ in hazards and effects; some are inherent, such as explosives, and some are created due to the specific operating environments (as in, for example, aircraft sustaining flight). In the military aircraft industry, safety-critical functions are identified and the overall design architecture of hardware, software and human systems integration are thoroughly analyzed and explicit safety requirements are derived and specified during a proven hazard analysis process to establish safeguards to ensure essential functions are not lost or function correctly in a predictable manner. Conducting comprehensive hazard analyses and determining credible faults, failure conditions, contributing influences and causal factors, that can contribute to or cause hazards, are an essential part of the systems engineering process. Explicit safety requirements must be derived, developed, implemented, and verified with objective safety evidence and ample safety documentation showing due diligence. Highly complex software intensive systems with many complex interactions affecting safety-critical functions require extensive planning, special know-how, use of analytical tools, accurate models, modern methods and proven techniques. Prevention of mishaps is the objective. References External links Organisations System Safety Society Naval Safety Center Naval Ordnance Safety & Security Activity System safety guidance FAA System Safety Handbook Safety engineering
System safety
[ "Engineering" ]
1,649
[ "Safety engineering", "Systems engineering" ]
10,708,900
https://en.wikipedia.org/wiki/Reversible%20reference%20system%20propagation%20algorithm
Reversible reference system propagation algorithm (r-RESPA) is a time stepping algorithm used in molecular dynamics. It evolves the system state over time by applying the propagator $e^{iL\Delta t}$, where $L$ is the Liouville operator. References Molecular dynamics Hamiltonian mechanics
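The sketch below shows the generic idea behind a reference-system (multiple time step) propagator: forces are split into fast and slow parts, slow forces are applied at the outer step, and fast forces are integrated with a smaller inner step. It is a minimal illustration of this class of integrators under assumed force-splitting functions, not the exact scheme of the published r-RESPA algorithm.

```python
import numpy as np

def respa_step(x, v, m, fast_force, slow_force, dt_outer, n_inner):
    """One outer step of a two-level reference-system integrator:
    slow forces kick at dt_outer, fast forces are integrated by
    velocity Verlet with the smaller step dt_outer / n_inner."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * slow_force(x) / m      # half kick from slow forces
    for _ in range(n_inner):                        # inner loop, fast forces only
        v = v + 0.5 * dt_inner * fast_force(x) / m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * fast_force(x) / m
    v = v + 0.5 * dt_outer * slow_force(x) / m      # closing half kick from slow forces
    return x, v

# Toy system: stiff harmonic bond (fast) plus a weak constant pull (slow)
fast = lambda x: -100.0 * x
slow = lambda x: np.full_like(x, 0.5)
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = respa_step(x, v, 1.0, fast, slow, dt_outer=0.05, n_inner=10)
print(x, v)
```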
Reversible reference system propagation algorithm
[ "Physics", "Chemistry", "Mathematics" ]
50
[ "Molecular physics", "Theoretical physics", "Classical mechanics", "Computational physics", "Molecular dynamics", "Hamiltonian mechanics", "Computational chemistry", "Dynamical systems" ]
10,708,943
https://en.wikipedia.org/wiki/Pen%20%28enclosure%29
A pen is an enclosure for holding livestock. It may also perhaps be used as a term for an enclosure for other animals such as pets that are unwanted inside the house. The term describes types of enclosures that may confine one or many animals. Construction and terminology vary depending on the region of the world, purpose, animal species to be confined, local materials used and tradition. Pen or penning as a verb refers to the act of confining animals in an enclosure. Similar terms are kraal, boma, and corrals. Encyclopædia Britannica notes usage of the term "kraal" for elephant corrals in India, Sri Lanka, and Thailand. Australia and New Zealand In Australia and New Zealand a pen is a small enclosure for livestock (especially sheep or cattle), which is part of a larger construction, e.g. calf pen, forcing pen (or yard) in sheep or cattle yards, or a sweating pen or catching pen in a shearing shed. In Australian and New Zealand English, a paddock may encompass a large, fenced grazing area of many acres, not to be confused with the American English use of paddock as interchangeable with corral or pen, describing smaller, confined areas. Britain In British English, a sheep pen is also called a folding, sheepfold or sheepcote. Modern shepherds more commonly use terms such as closing or confinement pen for small sheep pens. Most structures today referred to as sheepfolds are ancient dry stone semicircles. India Kraal term is used for an elephant enclosure, as for jailing an elephant who had injured two villagers in Kanha Tiger Reserve in 2020. Sri Lanka Panamure was an enclosure and associated town founded in 1896 within a forest owned by Francis Molamure, where 10 roundups of wild elephants occurred, the last in 1950. The term kraal referred to the enclosure and to a roundup/hunt. Thailand The Elephant Kraal of Ayutthaya, in Ayutthaya, a provincial capital, dates from the 1500s. The last roundup of wild elephants was in 1903. United States In the United States, the term pen usually describes outdoor small enclosures for holding animals. These may be for encasing livestock or pets that cannot be kept indoors. Pens may be named by their purpose, such as a holding pen, used for short-term confinement. A pen for cattle may also be called a corral, a term borrowed from the Spanish language. Groups of pens that are part of a larger complex may be called a stockyard, where a series of pens hold a large number of animals, or a feedlot, which is a type of stockyard used to confine animals that are being fattened. A large pen for horses is called a paddock (Eastern US) or a corral (Western US). In some places, an exhibition arena may be called a show pen. A small pen for horses (no more than 15–20 feet on any side) is only known as a pen if it lacks any roof or shelter, otherwise, it is called a stall and is part of a stable. A large fenced grazing area of many acres is called a pasture, or, in some cases, rangeland. Notable corrals Several notable corrals are known in the United States, including many listed on the National Register of Historic Places, either in intact form or in ruins. Other regions Primitive pens in South Africa are called kraals. Keddah is the term used in India for the enclosure constructed to entrap elephants, while in Sri Lanka the word employed in the same meaning is corral. In Indonesia it called kandang. 
Exercise pen For pets, specialized folding fencing referred to as an exercise pen, x-pen, or ex-pen, is used to surround an area, usually outdoors but not always, in which the animals can freely move around. They are commonly used for dogs, such as to give puppies or adult dogs more space than dog crates, but can also be used for rabbits and other animals. Exercise pens are usually made of sturdy wire, but can also be plastic or wood. Horses, during training, are often exercised in a round pen, sometimes referred to as an exercise pen. Pen mating Pen mating means that, ideally, a cohort of females is brought into the male's pen and he services them all while they are in the pen. This is the least labor-intensive mating system because the females are just left to mate at will. This mating is also the most efficient in terms of male power and efficiency as they do not need to do much in terms of exercising their power. See also Pinfold and pound (village) are synonyms of animal pound, where a poundmaster may operate Boô Kraal Boma (enclosure) Compound (enclosure) References "Macquarie Dictionary, The", 2nd edition, 1991 External links Corlannau / Sheepfolds Agricultural buildings Animal equipment Livestock Buildings and structures used to confine animals Livestock herding equipment an:Corral
Pen (enclosure)
[ "Biology" ]
1,043
[ "Animal equipment", "Animals" ]
10,708,962
https://en.wikipedia.org/wiki/Bruce%20Bueno%20de%20Mesquita
Bruce Bueno de Mesquita (; born November 24, 1946) is a political scientist, professor at New York University, and senior fellow at Stanford University's Hoover Institution. Biography Bueno de Mesquita graduated from Stuyvesant High School in 1963, (along with Richard Axel and Alexander Rosenberg), earned his BA degree from Queens College, New York in 1967 and then his MA and PhD from the University of Michigan. He specializes in international relations, foreign policy, and nation building. He is one of the originators of selectorate theory, and was also the director of New York University's Alexander Hamilton Center for Political Economy from 2006 to 2016. He was a founding partner at Mesquita & Roundell, until that company merged with his other company, Selectors, LLC, that used the selectorate model for macro-level policy analysis. Now, the company is called Selectors, LLC and uses both the forecasting model and the selectorate approach in consulting. Bueno de Mesquita is discussed in an August 16, 2009 Sunday New York Times Magazine article entitled "Can Game Theory Predict When Iran Will Get the Bomb?" In December 2008 he was also the subject of a History Channel two-hour special entitled "The Next Nostradamus" and has been featured on the 2021 Netflix series How to Become a Tyrant. He is the author of many books, including The Dictator's Handbook, co-authored with Alastair Smith, and the book The Invention of Power (January 2022). Work in forecasting Into the early 2000s, Bueno de Mesquita was known for his development of an expected utility model (EUM) capable of predicting the outcome of policy events over a unidimensional policy space. His EUM used Duncan Black's median voter theorem to calculate the median voter position of an N-player bargaining game and solved for the median voter position as the outcome of several bargaining rounds using other ad-hoc components in the process. The first implementation of the EUM was used to successfully predict the successor of Indian Prime Minister Y. B. Chavan after his government collapsed (this was additionally the first known time the model was tested). Bueno de Mesquita's model not only correctly predicted that Charan Singh would become prime minister (a prediction that few experts in Indian politics at the time predicted) but also that Y. B. Chavan would be in Singh's cabinet, that Indira Gandhi would briefly support Chavan's government, and that the government would soon collapse (all events that did occur). From the early success of his model, Bueno de Mesquita began a long and continuing career of consulting using refined implementations of his forecasting model. A declassified assessment by the Central Intelligence Agency rated his model as being 90 percent accurate. Since 2005 or so, Bueno de Mesquita developed a superior model, now known as the Predictioneer's Game or PG that forecasts in a multi-dimensional space, uses the Schofield mean voter theorem, and solves for Perfect Bayesian Equilibrium in an N-player bargaining game that includes the possibility of coercion, essentially a greatly generalized version of the 2-player game in War and Reason. This model predicts significantly more accurately and does a substantially better job of identifying opportunities that players have to improve the outcome by exploiting uncertainties. This model is documented in A New Model for Predicting Policy Choices: Preliminary Tests, and discussed and applied to examples in The Predictioneer's Game. 
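For readers unfamiliar with the median voter idea mentioned above, the toy sketch below computes a weighted median position for a set of stakeholders, each described by a policy position, a capability, and a salience weight. These input fields are how such forecasting models are commonly described in textbooks and are assumptions here; the sketch is not a reproduction of Bueno de Mesquita's actual (unreleased) model.

```python
def weighted_median_position(actors):
    """Return the position at which at least half of the total influence
    (capability * salience) lies at or below -- a toy stand-in for the
    median-voter position used in expected-utility forecasting models."""
    total = sum(a["capability"] * a["salience"] for a in actors)
    running = 0.0
    for actor in sorted(actors, key=lambda a: a["position"]):
        running += actor["capability"] * actor["salience"]
        if running >= total / 2:
            return actor["position"]

# Hypothetical stakeholders on a 0-100 policy scale
actors = [
    {"position": 20, "capability": 0.8, "salience": 0.9},
    {"position": 55, "capability": 1.0, "salience": 0.5},
    {"position": 90, "capability": 0.4, "salience": 0.7},
]
print(weighted_median_position(actors))
```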
Bueno de Mesquita's forecasting models have greatly contributed to the study of political events using forecasting methods, especially through his numerous papers that document elements of his models and predictions. Bueno de Mesquita has published dozens of forecasts in academic journals. His models have never been released in their entirety to the general public. Publications Forecasting Political Events: The Future of Hong Kong (with David Newman and Alvin Rabushka). New Haven: Yale University Press, 1985. War and Reason (with David Lalman). Yale University Press, 1994. Predicting Politics. Columbus, OH: Ohio State University Press, 2002. (with Alastair Smith, Randolph M. Siverson, James D. Morrow) (with Kiron K. Skinner, Serhiy Kudelia, Condoleezza Rice) Principles of International Politics. 2013. The Invention of Power: Popes, Kings, and the Birth of the West. PublicAffairs. 2022. ISBN 9781541768758 Family Bueno de Mesquita has three children and six grandchildren. His son, Ethan Bueno de Mesquita, is a political scientist currently serving as dean of the Harris School of Public Policy at the University of Chicago. References External links Bruce Bueno de Mesquita's faculty page at NYU Bruce Bueno de Mesquita's biography at Hoover To See The Future, Use The Logic Of Self-Interest – NPR audio clip (TED2009) The New Nostradamus – on the use by Bruce Bueno de Mesquita of rational choice theory in political forecasting Example of Model Applied To Iran's Nuclear Program Compilation of criticisms of Bueno de Mesquita's models Game theorists Living people 1946 births American political scientists Queens College, City University of New York alumni University of Michigan College of Literature, Science, and the Arts alumni Scientists from New York City New York University faculty American people of Portuguese-Jewish descent Presidents of the International Studies Association American people of Portuguese descent
Bruce Bueno de Mesquita
[ "Mathematics" ]
1,152
[ "Game theorists", "Game theory" ]
10,709,738
https://en.wikipedia.org/wiki/DVB-RCS
DVB-RCS (Digital Video Broadcasting - Return Channel via Satellite) provides a method by which the DVB-S platform (and in theory also the DVB-S2 platform) can become a bi-directional, asymmetric wireless data path between broadcasters and customers. It is a specification for an interactive on-demand multimedia satellite communication system formulated in 1999 by the DVB consortium. Without this method, various degrees of interactivity can be offered, without implying any return channel back from the user to the service provider: Data Carrousel or Electronic Program Guides (EPG) are examples of such enhanced TV services which make use of "local interactivity", without any return path from customer to provider. Chronology The 5th revision of the DVB-RCS standard was completed in 2008. A major update included the very first broadband mobile standardization. This extended version, formally referred to as "ETSI EN 301 790 v 1.5.1", is also known as "DVB-RCS+M". The "+M" version added several new features, such as the ability to use "DVB-S2" bursts in the uplink channel back to the satellite. It incorporated signal fade mitigation techniques and other solutions to combat short term signal loss. In contrast to other satellite communications systems, DVB-RCS was created in an open environment where any DVB member can participate. DVB membership is open to all companies willing to subscribe. The work group called "DVB TM-RCS" is currently pursuing other technical solutions for the approved commercial system. In 2009 technical work started for a new version of DVB-RCS called "DVB-RCS NG" (Next Generation). In this more powerful version of the standard, "RCS2", there will be support for Higher Layers for Satellite (HLS) communication. Evolution In older systems, interactive video broadcasting was possible as a result of using physical cables for connectivity. However, in remote areas cable connections may be unavailable, so two-way communication was impossible via traditional means. One possible solution was to use a satellite linked connection for the return (uplink) channel in addition to the standard downlink channel. This option is more expensive to implement than with cabled connections in built-up areas, but may be more cost effective for remote areas where the costs of laying cable to users would not be recovered for a long time. Additional costs involved in RCS systems include the costs of a two-way satellite antenna and renting data bandwidth from a satellite communications provider. Advantages DVB-RCS is a mature open satellite communication standard with highly efficient bandwidth management. This makes it a cost-efficient alternative solution for many users. It also provides an established foundation for further satellite communications research. Hardware implementation To implement this kind of communication, a user will require a device called a SIT (Satellite Interactive Terminal, "astromodem" or satellite modem). A suitable satellite dish is also required. Some systems are supplied as a pre-built combination. The user receives multimedia stream transmissions via the downlink signals from the satellite. The user sends requests for service signals via the "SIT" and the uplink channel to the satellite. Upon receipt of the command from the user, the satellite sends the user request data to the service provider. This takes about 0.5 seconds to connect each way with the satellite.
(roughly 1 second for the satellite uplink and downlink combined, and another second for the hop to the service provider and back, giving a total round-trip delay time of about 2 seconds). This technology can also be used for internet access via satellite. The downward route is from the service provider to the satellite (via a standard uplink station), then via the satellite's downlink to the user's "SIT". Signals are modulated using QPSK or GMSK. The corresponding upward route is via the uplink channel provided by the "SIT": data requests are transferred via the satellite to the service provider. The signal is then processed by a burst demodulator (using the MF-TDMA protocol via the data scheduler), and the requested data is routed over the wired internet. The protocol used for the satellite-SIT portion of the journey is Multiple Frequency Time Division Multiple Access (MF-TDMA). Using this protocol, the user receives data in packets (bursts) that may not form a continuous stream, but that, when stored and rearranged, generate a virtual two-dimensional data array. A scheduler is used to maintain these bursts and eliminate duplicates. The protocol is implemented in such a way that different users receive varying numbers of packet bursts; this helps to regulate the data stream over the satellite link according to user demand. Standards The standard that implements DVB-RCS is ETSI EN 301 790. References External links https://web.archive.org/web/20070412202934/https://www.dvb.org/documents/newsletters/DVB-SCENE%20June%202002.pdf https://satlabs.org/dvbrcssymposium https://www.ist-satsix.org Digital Video Broadcasting Satellite broadcasting
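A minimal sketch of the burst reassembly idea described above: bursts arriving on different carriers and time slots are placed into a two-dimensional (carrier × time-slot) grid, duplicates are dropped in a scheduler-like step, and payloads are re-ordered into a continuous stream. The frame dimensions, burst fields and duplicate-elimination policy here are illustrative assumptions, not the DVB-RCS (ETSI EN 301 790) burst format; a real terminal would derive the grid layout from the network's frame composition signalling.

```python
from dataclasses import dataclass

# Illustrative frame dimensions; a real DVB-RCS system takes these from the
# network's signalling, not from fixed constants.
NUM_CARRIERS = 4   # frequency dimension of the MF-TDMA frame
NUM_SLOTS = 8      # time-slot dimension of the MF-TDMA frame

@dataclass
class Burst:
    carrier: int     # carrier frequency index the burst arrived on
    slot: int        # time slot within the frame
    seq: int         # sequence number used to restore payload order
    payload: bytes

def reassemble(bursts):
    """Place bursts into a carrier x slot grid, drop duplicates, and
    return the payloads joined in sequence order."""
    grid = [[None] * NUM_SLOTS for _ in range(NUM_CARRIERS)]
    for b in bursts:
        if grid[b.carrier][b.slot] is None:   # scheduler-style duplicate elimination
            grid[b.carrier][b.slot] = b
    received = [b for row in grid for b in row if b is not None]
    received.sort(key=lambda b: b.seq)        # rearrange into a continuous stream
    return b"".join(b.payload for b in received)

# Bursts arriving out of order, with one duplicate that is discarded.
bursts = [
    Burst(carrier=1, slot=3, seq=2, payload=b"world"),
    Burst(carrier=0, slot=0, seq=1, payload=b"hello "),
    Burst(carrier=1, slot=3, seq=2, payload=b"world"),   # duplicate
]
print(reassemble(bursts))   # b'hello world'
```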
DVB-RCS
[ "Engineering" ]
1,093
[ "Telecommunications engineering", "Satellite broadcasting" ]
10,710,154
https://en.wikipedia.org/wiki/Design%20rationale
A design rationale is an explicit documentation of the reasons behind decisions made when designing a system or artifact. As initially developed by W.R. Kunz and Horst Rittel, design rationale seeks to provide argumentation-based structure to the political, collaborative process of addressing wicked problems. Overview A design rationale is the explicit listing of decisions made during a design process, and the reasons why those decisions were made. Its primary goal is to support designers by providing a means to record and communicate the argumentation and reasoning behind the design process. It should therefore include: the reasons behind a design decision, the justification for it, the other alternatives considered, the trade-offs evaluated, and the argumentation that led to the decision. Several scientific areas are involved in the study of design rationale, such as computer science, cognitive science, artificial intelligence, and knowledge management. For supporting design rationale, various frameworks have been proposed, such as QOC, DRCS, IBIS, and DRL. History While argumentation formats can be traced back to Stephen Toulmin's work in the 1950s on data, claims, warrants, backings and rebuttals, the origin of design rationale can be traced back to W.R. Kunz and Horst Rittel's development of the Issue-Based Information System (IBIS) notation in 1970. Several variants on IBIS have since been proposed. The first was the Procedural Hierarchy of Issues (PHI), first described in Ray McCall's PhD dissertation, although not named at the time. IBIS was also modified, in this case to support software engineering, by Potts & Bruns. The Potts & Bruns approach was then extended by the Decision Representation Language (DRL), which itself was extended by RATSpeak. Questions, Options and Criteria (QOC), also known as Design Space Analysis, is an alternative representation for argumentation-based rationale, as are Win-Win and the Decision Recommendation and Intent Model (DRIM). The first Rationale Management System (RMS) was PROTOCOL, which supported PHI; it was followed by the other PHI-based systems MIKROPOLIS and PHIDIAS. The first system providing IBIS support was Hans Dehlinger's STIEC. Rittel developed a small system in 1983 (also not published), and the better-known gIBIS (graphical IBIS) was developed in 1987. Not all successful DR approaches involve structured argumentation. For example, Carroll and Rosson's Scenario-Claims Analysis approach captures rationale in scenarios that describe how the system is used and how well the system features support the user goals. Carroll and Rosson's approach to design rationale is intended to help designers of computer software and hardware identify underlying design tradeoffs and make inferences about the impact of potential design interventions. Key concepts in design rationale There are a number of ways to characterize DR approaches. Some key distinguishing features are how the rationale is captured, how it is represented, and how it can be used. Rationale capture Rationale capture is the process of acquiring rationale information into a rationale management system. Capture methods A method called "Reconstruction" captures rationales in a raw form, such as video, and then reconstructs them into a more structured form. The advantage of the Reconstruction method is that rationales can be captured carefully and the capturing process does not disrupt the designer.
However, this method may involve high costs and may introduce the biases of the person reconstructing the rationales. The "Record-and-replay" method simply captures rationales as they unfold. Rationales are captured synchronously in a video conference, or asynchronously via a bulletin board or email-based discussion. The method is most helpful where the system uses an informal or semi-formal representation. The "Methodological byproduct" method captures rationales during the design process by following a schema; however, such a schema is hard to design. The advantage of this method is its low cost. With a rich knowledge base (KB) created in advance, the "Apprentice" method captures rationales by asking questions whenever it is confused by, or disagrees with, the designer's actions. This method benefits not only the user but also the system. In the "Automatic generation" method, design rationales are automatically generated from an execution history at low cost, and consistent, up-to-date rationales can be maintained. However, the cost of compiling the execution history is high, due to the complexity and difficulty of some machine-learning problems. The "Historian" method lets a person or a computer program watch all of the designer's actions without making suggestions; rationales are captured during the design process. Rationale representation The choice of design rationale representation is very important for ensuring that the captured rationales are what is needed and can be used efficiently. According to the degree of formality, the approaches used to represent design rationale can be divided into three main categories: informal, semiformal, and formal. In an informal representation, rationales can be recorded and captured using traditional methods and media, such as word processors, audio and video recordings, or even handwriting. However, these descriptions are difficult to interpret automatically or to support with other computer-based tools. In a formal representation, the rationale must be collected under a strict format so that it can be interpreted and understood by computers. However, because of the strict format imposed by formal representations, the contents can hardly be understood by human beings, and the process of capturing design rationale requires more effort and therefore becomes more intrusive. Semiformal representations try to combine the advantages of informal and formal representations. On one hand, the information captured should be able to be processed by computers so that more computer-based support can be provided. On the other hand, the procedure and method used to capture design rationale information should not be very intrusive. In a system with a semiformal representation, the expected information is suggested, and users can capture rationale by following instructions to either fill in attributes according to templates or simply type natural-language descriptions. Argumentation-based models The Toulmin model One commonly accepted way of representing semiformal design rationale is to structure it as argumentation. The earliest argumentation-based model used by many design rationale systems is the Toulmin model. The Toulmin model defines the rules of design rationale argumentation with six steps: a claim is made; supporting data are provided; a warrant provides evidence for the existing relations; the warrant can be supported by a backing; model qualifiers (some, many, most, etc.)
are provided; and possible rebuttals are also considered. One advantage of the Toulmin model is that it uses words and concepts that can be easily understood by most people. Issue-Based Information System (IBIS) Another important approach to the argumentation of design rationale is Rittel and Kunz's IBIS (Issue-Based Information System), which is not actually a software system but an argumentative notation. It has been implemented in software form by gIBIS (graphical IBIS), itIBIS (text-based IBIS), Compendium, and other software. IBIS uses rationale elements (denoted as nodes) such as issues, positions, arguments and resolutions, and several relationships (such as "more general than", "logical successor to", "temporal successor to", "replaces" and "similar to") to link the issue discussions. Procedural Hierarchy of Issues (PHI) PHI (Procedural Hierarchy of Issues) extended IBIS to noncontroversial issues and redefined the relationships. PHI adds the subissue relationship, which means that one issue's resolution depends on the resolution of another issue. Questions, Options, and Criteria (QOC) QOC (Questions, Options, and Criteria) is used for design space analysis. Similar to IBIS, QOC identifies the key design problems as questions and the possible answers to questions as options. In addition, QOC uses criteria to explicitly describe the methods for evaluating the options, such as the requirements to be satisfied or the properties desired. The options are linked with criteria positively or negatively, and these links are defined as assessments. Decision Representation Language (DRL) DRL (Decision Representation Language) extends the Potts and Bruns model of DR and defines the primary elements as decision problems, alternatives, goals, claims and groups. Lee (1991) has argued that DRL is more expressive than other languages. DRL focuses more on the representation of decision making and its rationale than on design rationale as such. RATSpeak RATSpeak, based on DRL, was developed and is used as the representation language in SEURAT (Software Engineering Using RATionale). RATSpeak takes requirements (functional and non-functional) into account as part of the arguments for alternatives to the decision problems. SEURAT also includes an Argument Ontology, a hierarchy of argument types that includes the types of claims used in the system. WinWin Spiral Model The WinWin Spiral Model, which is used in the WinWin approach, adds WinWin negotiation activities (identifying the key stakeholders of the system, identifying each stakeholder's win conditions, and negotiation) to the front of each cycle of the spiral software development model, in order to achieve a mutually satisfactory (win-win) agreement for all stakeholders of the project. In the WinWin Spiral Model, the goals of each stakeholder are defined as win conditions. Once there is a conflict between win conditions, it is captured as an Issue. The stakeholders then invent Options and explore trade-offs to resolve the issue. When the issue is solved, an Agreement is reached that satisfies the stakeholders' win conditions and captures the agreed option. The design rationale behind the decisions is captured during the WinWin process and can be used by stakeholders and designers to improve their later decision making. The WinWin Spiral Model reduces the overhead of capturing design rationale by providing stakeholders with a well-defined negotiation process.
An ontology of decision rationale has also been defined and used to address the problem of supporting decision maintenance in the WinWin collaboration framework. Design Recommendation and Intent Model (DRIM) DRIM (Design Recommendation and Intent Model) is used in SHARED-DRIM. The main structure of DRIM is a proposal, which consists of the intents of each designer, the recommendations that satisfy those intents, and the justifications of the recommendations. Negotiation is also needed when conflicts exist between the intents of different designers. The accepted recommendation becomes a design decision, and the rationale of the proposed but unaccepted recommendations is also recorded during this process, which can be useful during iterative design and/or system maintenance. Applications Design rationale has the potential to be used in many different ways. One set of uses, defined by Burge and Brown (1998), is: Design verification – The design rationale can be used to verify whether the design decisions and the product itself reflect what the designers and the users actually wanted. Design evaluation – The design rationale is used to evaluate the various design alternatives discussed during the design process. Design maintenance – The design rationale helps to determine the changes that are necessary to modify the design. Design reuse – The design rationale is used to determine how the existing design could be reused for a new requirement, with or without changes. If the design needs to be modified, the DR also suggests what needs to be changed in it. Design teaching – The design rationale can be used as a resource to teach people who are unfamiliar with the design and the system. Design communication – The design rationale facilitates better communication among the people involved in the design process and thus helps to produce a better design. Design assistance – The design rationale can be used to verify the design decisions made during the design process. Design documentation – The design rationale is used to document the entire design process, which involves the meeting-room deliberations, the alternatives discussed, the reasons behind the design decisions, and the product overview. DR is used by research communities in software engineering, mechanical design, artificial intelligence, civil engineering, and human-computer interaction research. In software engineering, it can be used to support the designers' ideas during requirements analysis, to capture and document design meetings, and to predict possible issues arising from a new design approach. In software architecture and outsourcing solution design, it can justify the outcome of architectural decisions and serve as a design guide. In civil engineering, it helps to coordinate the variety of work that designers do at the same time in different areas of a construction project. It also helps the designers to understand and respect each other's ideas and to resolve any possible issues. The DR can also be used by project managers to keep the project plan and project status up to date. In addition, project team members who missed a design meeting can refer back to the DR to learn what was discussed on a particular topic. The unresolved issues captured in the DR can be used to organize further meetings on those topics. Design rationale helps designers avoid repeating mistakes made in previous designs, and can also help to avoid duplication of work.
In some cases DR could save time and money when a software system is upgraded from its previous versions. There are several books and articles that provide excellent surveys of rationale approaches applied to HCI, Engineering Design and Software Engineering. See also Goal structuring notation IDEF6 Method engineering Problem structuring methods Theory of justification References Further reading Books Special Issues Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM), Special Issue: Fall 2008, Vol.22 No.4 Design Rationale http://web.cs.wpi.edu/~aiedam/SpecialIssues/Burge-Bracewell.html Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM), Special Issue on Representing and Using Design Rationale, 1997, Vol.11 No.2, Cambridge University Press Workshops Second Workshop on SHAring and Reusing architectural Knowledge - Architecture, rationale, and Design Intent (SHARK/ADI 2007), (RC.rug.nl) as part of the 29th Int. Conf. on Software Engineering (ICSE 2007) (CS.ucl.ac.uk) Workshop on Design Rationale: Problems and Progress (Muohio.edu) Workshop Chairs: Janet Burge and Rob Bracewell, Held 9 July 2006 in conjunction with Design, Computing, and Cognition '06. Eindhoven, (wwwfaculty.arch.usyd.edu.au) Netherlands External links Bcisive.austhink.com: A commercial software package designed for design rationale and decision rationale more broadly. Graphical interface, sharing capabilities. Compendium: A hypermedia tool that provides visual knowledge management capabilities based around IBIS. Free Java application, binary and source, with an active user community who meet annually. designVUE: A tool for visual knowledge capture based on IBIS and other methods. Free Java application. SEURAT: An Eclipse plug-in that integrates rationale capture and use with a software development environment. SEURAT is available as an open source project in GitHub (). Argument mapping Design Justification (epistemology) Software design
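As a concrete illustration of the IBIS-style notation described above, the following sketch shows how issues, positions and arguments might be held as a small linked graph in memory. The node and link vocabularies loosely follow the notation; the class names and the example discussion are purely illustrative, and this is not the data model of gIBIS, Compendium, SEURAT or any other tool mentioned here.

```python
from dataclasses import dataclass, field

# Node and link vocabularies loosely following the IBIS notation described
# above; they are illustrative, not the schema of any particular tool.
NODE_TYPES = {"issue", "position", "argument", "resolution"}
LINK_TYPES = {"responds-to", "supports", "objects-to", "resolves",
              "more-general-than", "replaces", "similar-to"}

@dataclass
class Node:
    kind: str                                   # one of NODE_TYPES
    text: str                                   # the statement being recorded
    links: list = field(default_factory=list)   # (link_type, target Node) pairs

    def link(self, link_type: str, target: "Node") -> None:
        assert self.kind in NODE_TYPES and link_type in LINK_TYPES
        self.links.append((link_type, target))

# A tiny design discussion captured as rationale.
issue = Node("issue", "How should the user's session state be stored?")
cookie = Node("position", "Keep it in a signed cookie.")
server = Node("position", "Keep it server-side, keyed by a session id.")
reason = Node("argument", "Server-side storage avoids cookie size limits.")

cookie.link("responds-to", issue)
server.link("responds-to", issue)
reason.link("supports", server)

# Walk the flat node list and print every recorded relationship.
for node in (cookie, server, reason):
    for link_type, target in node.links:
        print(f"[{node.kind}] {node.text!r} --{link_type}--> [{target.kind}] {target.text!r}")
```

Real rationale management systems add persistence, attribution and versioning on top of a structure of roughly this shape.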
Design rationale
[ "Engineering" ]
3,152
[ "Design", "Software design" ]
10,711,519
https://en.wikipedia.org/wiki/Torkel%20Franz%C3%A9n
Torkel Franzén (1 April 1950, Norrbotten County – 19 April 2006, Stockholm) was a Swedish academic. Biography Franzén worked at the Department of Computer Science and Electrical Engineering at Luleå University of Technology, Sweden, in the fields of mathematical logic and computer science. He was known for his work on Gödel's incompleteness theorems and for his contributions to Usenet. He was active in the online science fiction fan community, and even issued his own electronic fanzine Frotz on his fiftieth birthday. He died of bone cancer at age 56. Selected works Gödel's Theorem: An Incomplete Guide to its Use and Abuse. Wellesley, Massachusetts: A K Peters, Ltd., 2005. x + 172 pp. . Inexhaustibility: A Non-Exhaustive Treatment. Wellesley, Massachusetts: A K Peters, Ltd., 2004. Lecture Notes in Logic, #16, Association for Symbolic Logic. . The Popular Impact of Gödel's Incompleteness Theorem, Notices of the American Mathematical Society, 53, #4 (April 2006), pp. 440–443. Provability and Truth (Acta universitatis stockholmiensis, Stockholm Studies in Philosophy 9) (1987) See also Gödel's incompleteness theorems References External links Home page Raatikainen, Panu. Review of Gödel's Theorem: An Incomplete Guide to Its Use and Abuse. Notices of the American Mathematical Society, Vol. 54, No. 3 (March 2007), pp. 380–3. 1950 births 2006 deaths Usenet people 20th-century Swedish mathematicians 21st-century Swedish mathematicians Mathematical logicians Academic staff of the Luleå University of Technology
Torkel Franzén
[ "Mathematics" ]
352
[ "Mathematical logic", "Mathematical logicians" ]
10,714,141
https://en.wikipedia.org/wiki/Touch%20memory
Touch Memory (or contact memory) is an electronic identification device packaged in a coin-shaped stainless steel container. Touch memory is accessed when a touch probe comes into contact with a memory button. Read and/or write operations between the probe and memory chip are performed with just a momentary contact. Thousands of reads and writes can be performed with a single chip and data integrity can last over 100 years. Touch memory complements such technologies as bar codes, RFID tags, magnetic stripe, proximity cards and smart cards. Uses Touch memory is used in such areas as Access control Asset management eCash Gaming systems Thermochron applications Time and attendance Examples The US Postal Service uses touch memory for tracking collection times on its large collection boxes. Healthcare, transportation, and trade show organizations also use the technology. Advantages Unlike bar codes and magnetic stripe cards, many touch memory solutions can be written to as well as being read. Communication rate, and product breadth, of touch memory goes well beyond the simple memory products typically available with RFID. The durability of the stainless-steel-clad touch memory is much greater than the thin plastic of a smart card. See also 1-Wire protocol References Automatic identification and data capture Radio-frequency identification
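Touch memory buttons in the iButton family are typically read over the 1-Wire bus listed under "See also". On a Linux host with a 1-Wire bus master attached and the kernel's w1 subsystem enabled, enumerated devices appear under /sys/bus/w1/devices/, where each directory name encodes the family code and the 64-bit registration number; for an ID-only button such as the DS1990A, reading that identity is the whole transaction. The sketch below simply lists those identifiers under the assumption of that standard sysfs layout; whether a given touch probe is exposed this way depends entirely on the adapter and driver in use.

```python
from pathlib import Path

# Standard sysfs location of the Linux w1 (1-Wire) subsystem. Whether a given
# touch probe exposes buttons here depends on the bus master and kernel config.
W1_DEVICES = Path("/sys/bus/w1/devices")

def list_touch_memory_ids():
    """Return (family_code, serial) pairs for enumerated 1-Wire devices.

    Directory names have the form '<family>-<serial>', e.g. '01-0000abcdef12'
    for an ID-only button in the DS1990A family."""
    ids = []
    if not W1_DEVICES.exists():
        return ids                       # no 1-Wire bus master present
    for entry in W1_DEVICES.iterdir():
        name = entry.name
        if "-" in name and not name.startswith("w1_bus_master"):
            family, serial = name.split("-", 1)
            ids.append((family, serial))
    return ids

if __name__ == "__main__":
    for family, serial in list_touch_memory_ids():
        print(f"family {family}  serial {serial}")
```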
Touch memory
[ "Technology", "Engineering" ]
247
[ "Radio electronics", "Computer hardware stubs", "Radio-frequency identification", "Data", "Automatic identification and data capture", "Computing stubs" ]
10,714,473
https://en.wikipedia.org/wiki/Polymerase%20chain%20reaction%20optimization
The polymerase chain reaction (PCR) is a commonly used molecular biology tool for amplifying DNA, and various techniques for PCR optimization have been developed by molecular biologists to improve PCR performance and minimize failure. Contamination and PCR The PCR method is extremely sensitive, requiring only a few DNA molecules in a single reaction for amplification across several orders of magnitude. Therefore, adequate measures to avoid contamination from any DNA present in the lab environment (bacteria, viruses, or human sources) are required. Because products from previous PCR amplifications are a common source of contamination, many molecular biology labs have implemented procedures that involve dividing the lab into separate areas. One lab area is dedicated to preparation and handling of pre-PCR reagents and the setup of the PCR reaction, and another area to post-PCR processing, such as gel electrophoresis or PCR product purification. For the setup of PCR reactions, many standard operating procedures involve using pipettes with filter tips and wearing fresh laboratory gloves, and in some cases a laminar flow cabinet with a UV lamp as a work station (to destroy any extraneous DNA). PCR is routinely assessed against a negative control reaction that is set up identically to the experimental PCR, but without template DNA, and performed alongside the experimental PCR. Hairpins Secondary structures in the DNA can result in folding or knotting of the DNA template or primers, leading to decreased product yield or failure of the reaction. Hairpins, which consist of internal folds caused by base-pairing between nucleotides in inverted repeats within single-stranded DNA, are common secondary structures and may result in failed PCRs. Typically, primer design that includes a check for potential secondary structures in the primers, or addition of DMSO or glycerol to the PCR to minimize secondary structures in the DNA template, is used in the optimization of PCRs that have a history of failure due to suspected DNA hairpins. Polymerase errors Taq polymerase lacks a 3′ to 5′ exonuclease activity. Thus, Taq has no proofreading activity, which consists of excision of any newly misincorporated nucleotide base from the nascent (i.e., extending) DNA strand that does not match its opposite base in the complementary DNA strand. The lack of 3′ to 5′ proofreading in the Taq enzyme results in a high error rate (mutations per nucleotide per cycle), which affects the fidelity of the PCR, especially if errors occur early in the PCR with low amounts of starting material, causing accumulation of a large proportion of amplified DNA with an incorrect sequence in the final product. Several "high-fidelity" thermostable DNA polymerases, having engineered 3′ to 5′ exonuclease activity, have become available that permit more accurate amplification for use in PCRs for sequencing or cloning of products. Examples of polymerases with 3′ to 5′ exonuclease activity include: KOD DNA polymerase, a recombinant form of Thermococcus kodakaraensis KOD1; Vent, which is extracted from Thermococcus litoralis; Pfu DNA polymerase, which is extracted from Pyrococcus furiosus; Pwo, which is extracted from Pyrococcus woesii; and Q5 polymerase, with 280× higher-fidelity amplification compared with Taq. Magnesium concentration Magnesium is required as a co-factor for thermostable DNA polymerase.
Taq polymerase is a magnesium-dependent enzyme, and determining the optimum concentration to use is critical to the success of the PCR reaction. Some components of the reaction mixture, such as the template concentration, dNTPs, and the presence of chelating agents (EDTA) or proteins, can reduce the amount of free magnesium present, thus reducing the activity of the enzyme. Primers that bind to incorrect template sites are stabilized in the presence of excessive magnesium concentrations, resulting in decreased specificity of the reaction. Excessive magnesium concentrations also stabilize double-stranded DNA and prevent complete denaturation of the DNA during PCR, reducing the product yield. Inadequate thawing of MgCl2 may result in the formation of concentration gradients within the magnesium chloride solution supplied with the DNA polymerase, and also contributes to many failed experiments. Size and other limitations PCR works readily with a DNA template of up to two to three thousand base pairs in length. However, above this size, product yields often decrease, as, with increasing length, stochastic effects such as premature termination by the polymerase begin to affect the efficiency of the PCR. It is possible to amplify larger pieces of up to 50,000 base pairs with a slower heating cycle and special polymerases. These are polymerases fused to a processivity-enhancing DNA-binding protein, enhancing adherence of the polymerase to the DNA. Other valuable properties of the chimeric polymerases TopoTaq and PfuC2 include enhanced thermostability, specificity, and resistance to contaminants and inhibitors. They were engineered using the unique helix-hairpin-helix (HhH) DNA-binding domains of topoisomerase V from the hyperthermophile Methanopyrus kandleri. Chimeric polymerases overcome many limitations of native enzymes and are used in direct PCR amplification from cell cultures and even food samples, thus bypassing laborious DNA isolation steps. A robust strand-displacement activity of the hybrid TopoTaq polymerase helps solve PCR problems that can be caused by hairpins and G-loaded double helices. Helices with a high G-C content possess a higher melting temperature, often impairing PCR, depending on the conditions. Non-specific priming Non-specific binding of primers occurs frequently and can have several causes. These include repeat sequences in the DNA template, non-specific binding between primer and template, high or low G-C content in the template, or incomplete primer binding, leaving the 5' end of the primer unattached to the template. Non-specific binding of degenerate primers is also common. Manipulation of the annealing temperature and magnesium ion concentration may be used to increase specificity. For example, lower concentrations of magnesium or other cations may prevent non-specific primer interactions, thus enabling successful PCR. A "hot-start" polymerase enzyme, whose activity is blocked unless it is heated to a high temperature (e.g., 90–98 ˚C) during the denaturation step of the first cycle, is commonly used to prevent non-specific priming during reaction preparation at lower temperatures. Chemically mediated hot-start PCRs require higher temperatures and longer incubation times for polymerase activation, compared with antibody- or aptamer-based hot-start PCRs. Other methods to increase specificity include nested PCR and touchdown PCR. Computer simulations of theoretical PCR results (Electronic PCR) may be performed to assist in primer design.
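As a rough illustration of the primer-design calculations mentioned above, the Wallace rule estimates the melting temperature of a short oligonucleotide as 2 °C per A or T plus 4 °C per G or C, and a common rule of thumb then sets the annealing temperature a few degrees below the lower of the two primer values. Both are coarse approximations (electronic-PCR and primer-design tools use nearest-neighbour thermodynamics instead), and the primer sequences and the 5 °C offset in this sketch are illustrative choices only.

```python
def wallace_tm(primer: str) -> int:
    """Wallace-rule melting temperature: 2 degC per A/T plus 4 degC per G/C.
    A coarse estimate, reasonable only for short oligonucleotides."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def suggested_annealing(primer_fwd: str, primer_rev: str, offset: int = 5) -> int:
    """Common starting point: a few degrees below the lower of the two primer Tm values."""
    return min(wallace_tm(primer_fwd), wallace_tm(primer_rev)) - offset

# Illustrative primer sequences only; real designs would also be checked for
# hairpins and 3' complementarity (see the primer-dimer discussion below).
fwd = "AGCGGATAACAATTTCACACAGG"
rev = "CGCCAGGGTTTTCCCAGTCACGAC"
print(wallace_tm(fwd), wallace_tm(rev), suggested_annealing(fwd, rev))
```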
Touchdown polymerase chain reaction (or touchdown-style polymerase chain reaction) is a method of polymerase chain reaction by which primers avoid amplifying nonspecific sequences. The annealing temperature during a polymerase chain reaction determines the specificity of primer annealing. The melting point of the primer sets the upper limit on the annealing temperature. At temperatures just below this point, only very specific base pairing between the primer and the template will occur. At lower temperatures, the primers bind less specifically. Nonspecific primer binding obscures polymerase chain reaction results, as the nonspecific sequences to which primers anneal in early steps of amplification will "swamp out" any specific sequences because of the exponential nature of polymerase amplification. The earliest steps of a touchdown polymerase chain reaction cycle have high annealing temperatures. The annealing temperature is then decreased in increments for every subsequent set of cycles (the number of cycles at each step and the size of the temperature decrement are chosen by the experimenter). The primer will anneal at the highest temperature it can tolerate, which is the temperature least permissive of nonspecific binding. Thus, the first sequence amplified is the one between the regions of greatest primer specificity; it is most likely that this is the sequence of interest. These fragments will be further amplified during subsequent rounds at lower temperatures, and will outcompete the nonspecific sequences to which the primers may bind at those lower temperatures. If the primer initially (during the higher-temperature phases) binds to the sequence of interest, subsequent rounds of polymerase chain reaction can be performed upon the product to further amplify those fragments. Primer dimers Annealing of the 3' end of one primer to itself or to the second primer may cause primer extension, resulting in the formation of so-called primer dimers, visible as low-molecular-weight bands on PCR gels. Primer dimer formation often competes with formation of the DNA fragment of interest, and may be avoided by using primers designed such that they lack complementarity, especially at the 3' ends, to themselves or to the other primer used in the reaction. If primer design is constrained by other factors and if primer dimers do occur, methods to limit their formation may include optimisation of the MgCl2 concentration or increasing the annealing temperature in the PCR. Deoxynucleotides Deoxynucleotides (dNTPs) may bind Mg2+ ions and thus affect the concentration of free magnesium ions in the reaction. In addition, excessive amounts of dNTPs can increase the error rate of DNA polymerase and even inhibit the reaction. An imbalance in the proportion of the four dNTPs can result in misincorporation into the newly formed DNA strand and contribute to a decrease in the fidelity of DNA polymerase. References Biochemistry methods DNA Molecular biology Polymerase chain reaction
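The touchdown strategy described above amounts to a simple temperature schedule: start annealing near the primer melting temperature, step the annealing temperature down by a fixed increment each cycle until a final, more permissive temperature is reached, and then hold it there for the remaining cycles. The sketch below generates such a schedule; the 0.5 °C step, the 65 °C to 60 °C range and the 25 holding cycles are typical textbook choices, not values required by any particular protocol.

```python
def touchdown_schedule(start_temp: float, final_temp: float,
                       step: float = 0.5, hold_cycles: int = 25):
    """Yield (cycle_number, annealing_temp) pairs for a touchdown PCR run.

    The annealing temperature drops by `step` each cycle until it reaches
    `final_temp`, and is then held there for `hold_cycles` further cycles."""
    cycle, temp = 1, start_temp
    while temp > final_temp:
        yield cycle, round(temp, 1)
        cycle += 1
        temp -= step
    for _ in range(hold_cycles):
        yield cycle, final_temp
        cycle += 1

# Touch down from 65 degC to 60 degC in 0.5 degC steps, then hold for 25 cycles.
schedule = list(touchdown_schedule(65.0, 60.0))
print(schedule[:3], "...", schedule[-1])
# [(1, 65.0), (2, 64.5), (3, 64.0)] ... (35, 60.0)
```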
Polymerase chain reaction optimization
[ "Chemistry", "Biology" ]
2,081
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "Molecular biology", "Biochemistry" ]
15,861,652
https://en.wikipedia.org/wiki/Euglobulin%20lysis%20time
The euglobulin lysis time (ELT) is a test that measures overall fibrinolysis. The test is performed by mixing citrated platelet-poor plasma with acid in a glass test tube. This acidification causes the precipitation of certain clotting factors in a complex called the euglobulin fraction. The euglobulin fraction contains the important fibrinolytic factors fibrinogen, PAI-1, tissue plasminogen activator (tPA), plasminogen, and, to a lesser extent, α2-antiplasmin. The euglobulin fraction also contains factor VIII. After precipitation, the euglobulin fraction is resuspended in a borate solution. Clotting is then activated by the addition of calcium chloride at 37 °C. Historically, the subsequent amount of fibrinolysis was determined by eye, by observing the clot within the test tube at ten-minute intervals until complete lysis had occurred. Newer automated methods have also been developed. These methods use the same principle as the older technique, but use a spectrophotometer to track clot lysis over time as a change in optical density. References Blood tests
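A sketch of how a reading might be derived from the optical-density trace mentioned above. The endpoint used here, the time at which turbidity has fallen half-way back from its post-clotting maximum towards baseline, is only one possible convention; automated instruments differ, so both the endpoint rule and the sample data are assumptions made for illustration.

```python
def euglobulin_lysis_time(times_min, od_values, fraction=0.5):
    """Estimate a lysis time from a turbidity (optical density) trace.

    Endpoint convention (an assumption for illustration): the clot is taken to
    have lysed once the OD has fallen back by `fraction` of the rise from the
    starting baseline to its post-clotting maximum. Real instruments differ."""
    baseline = od_values[0]
    peak_idx = max(range(len(od_values)), key=lambda i: od_values[i])
    threshold = baseline + (1 - fraction) * (od_values[peak_idx] - baseline)
    for t, od in zip(times_min[peak_idx:], od_values[peak_idx:]):
        if od <= threshold:
            return t          # minutes from activation to the lysis endpoint
    return None               # endpoint not reached within the recorded trace

# Simulated trace: OD rises as the clot forms, then falls as it lyses.
times = list(range(0, 181, 10))   # readings every 10 minutes
od = [0.05, 0.30, 0.45, 0.50, 0.50, 0.48, 0.45, 0.40, 0.33, 0.26,
      0.20, 0.15, 0.11, 0.08, 0.06, 0.05, 0.05, 0.05, 0.05]
print(euglobulin_lysis_time(times, od), "minutes")   # 90 minutes
```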
Euglobulin lysis time
[ "Chemistry" ]
251
[ "Blood tests", "Chemical pathology" ]
15,861,718
https://en.wikipedia.org/wiki/Amorphous%20calcium%20phosphate
Amorphous calcium phosphate (ACP) is a glassy solid that forms when solutions of dissolved phosphate and calcium salts are mixed (e.g. (NH4)2HPO4 + Ca(NO3)2). The resulting amorphous mixture consists mostly of calcium and phosphate, but also contains varying amounts of water and hydrogen and hydroxide ions, depending on the synthesis conditions. Such mixtures are also known as calcium phosphate cement. ACP is generally categorized as either "amorphous tricalcium phosphate" (ATCP) or calcium-deficient hydroxyapatite (CDHA). CDHA is sometimes termed "apatitic calcium triphosphate." The composition of amorphous calcium phosphate is CaxHy(PO4)z·nH2O, where n is between 3 and 4.5. CDHA has a general formula of Ca9(HPO4)(PO4)5(OH). Precipitation from a moderately supersaturated, basic solution containing a magnesium salt produces amorphous magnesium calcium phosphate (AMCP), in which magnesium is incorporated into the ACP structure. A commercial preparation of ACP is casein phosphopeptide-amorphous calcium phosphate (CPP-ACP), derived from cow milk. It is sold under various brand names, including Recaldent and Tooth Mousse, and is intended to be applied directly to teeth. Its clinical usefulness is unproven. Biogenic ACP Biogenic ACP has been found in the inner ear of embryonic sharks, mammalian milk and dental enamel. However, whilst its unequivocal presence in bones and teeth is debated, there is evidence that transient amorphous precursors are involved in the development of bone and teeth. The ACP in bovine milk (CPP-ACP) is believed to involve calcium phosphate nanoclusters in a shell of casein phosphopeptides. A typical casein micelle of radius 100 nm contains around 10,000 casein molecules and 800 nanoclusters of ACP, each about 4.8 nm in diameter. The concentration of calcium phosphate is higher in milk than in serum, but it rarely forms deposits of insoluble phosphates. Unfolded phosphopeptides are believed to sequester ACP nanoclusters and form stable complexes in other biofluids, such as urine and blood serum, preventing deposition of insoluble calcium phosphates and calcification of soft tissue. In the laboratory, stored samples of formulations of artificial blood, serum, urine and milk (which approximate the pH of the naturally occurring fluid) deposit insoluble phosphates; the addition of suitable phosphopeptides prevents precipitation. Posner's clusters Following investigations into the composition of amorphous calcium phosphates precipitated under different conditions, Posner and Betts suggested in the mid-1970s that the structural unit of ACP was a neutral cluster, Ca9(PO4)6. Calculations support the description of a cluster with a central Ca2+ ion surrounded by six phosphate PO43− anions, which in turn are surrounded by eight further calcium ions. The resulting cluster is estimated to have a diameter of around 950 pm (0.95 nm). These are now generally referred to as Posner's clusters. Precipitated ACP is believed to be made up of particles containing a number of Posner's clusters with water in the intervening spaces. While plasma-spray-coated ACP may contain Posner's clusters, there cannot be any water present. More recent studies have proposed that Posner's clusters could act as neural qubits, because their entangled 31P nuclear spins have a long relaxation time and the cluster has S6 symmetry. The idea is that Posner molecules could meld and release calcium ions, which would in turn stimulate neurons.
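As a quick check on the formulas quoted above, the calcium-to-phosphorus molar ratio of the Posner cluster Ca9(PO4)6 and of CDHA, Ca9(HPO4)(PO4)5(OH), can be computed directly from the formulas; both come out at 1.5, between the 1.33 of octacalcium phosphate and the 1.67 of stoichiometric hydroxyapatite (the two comparison phases are standard textbook reference compositions, not taken from this article). A short sketch of the arithmetic:

```python
# Ca/P molar ratios computed directly from the formulas quoted in the text;
# octacalcium phosphate and hydroxyapatite are added as standard reference
# compositions for comparison.
phases = {
    "Posner's cluster Ca9(PO4)6":              (9, 6),
    "CDHA Ca9(HPO4)(PO4)5(OH)":                (9, 6),
    "octacalcium phosphate Ca8(HPO4)2(PO4)4":  (8, 6),
    "hydroxyapatite Ca10(PO4)6(OH)2":          (10, 6),
}
for name, (ca, p) in phases.items():
    print(f"{name}: Ca/P = {ca / p:.2f}")
```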
Use in dental treatment Amorphous calcium phosphate in combination with casein phosphopeptide has been used as a dental treatment for incipient dental decay. ACP sees its main use as an occluding agent, which aids in reducing sensitivity. Studies have shown that it does form a remineralized phase of hydroxyapatite consistent with natural enamel. In addition, clinical studies have shown that patients who whiten their teeth have reduced sensitivity after treatment. It is believed that ACP hydrolyzes under physiological temperatures and pH to form octacalcium phosphate as an intermediate, and then surface apatite. Method of mineralization ACP lacks the long-range, periodic atomic-scale order of crystalline calcium phosphates. Its X-ray diffraction pattern is broad and diffuse, with a single maximum and none of the distinct features of well-crystallized hydroxyapatite. Under electron microscopy, it appears as small spheroidal particles on the scale of tens of nanometres. In aqueous media, ACP is easily transformed into crystalline phases such as octacalcium phosphate and apatite through the growth of microcrystallites. It has been demonstrated that ACP has better osteoconductivity and biodegradability than tricalcium phosphate and hydroxyapatite in vivo. Moreover, it can increase the alkaline phosphatase activity of mesoblasts, enhance cell proliferation and promote cell adhesion. The unique role of ACP during the formation of mineralized tissues makes it a promising candidate material for tissue repair and regeneration. ACP may also be a potential remineralizing agent in dental applications. Recently developed ACP-filled bioactive composites are believed to be effective anti-demineralizing/remineralizing agents for the preservation and repair of tooth structures. See also Remineralisation of teeth References External links Dental materials
Amorphous calcium phosphate
[ "Physics" ]
1,204
[ "Materials", "Dental materials", "Matter" ]
15,862,818
https://en.wikipedia.org/wiki/Olivier%20Fourdan
Olivier Fourdan is the creator of the Xfce desktop environment, for which development began at the end of 1996. He started his career as a new technologies production engineer as well as in web development and embedded Linux systems. Fourdan has been working for Red Hat since 2007, interrupted by 2 years at Intel during 2013 and 2014. As of 2017, he is active in the adoption of Wayland, working on many different components, amongst them GTK, Mutter, GNOME Control Center, XWayland, and Mesa3D. References French computer programmers Free software programmers Linux people Living people Year of birth missing (living people) Red Hat employees Xfce
Olivier Fourdan
[ "Technology" ]
133
[ "Computing stubs", "Computer specialist stubs" ]
15,863,269
https://en.wikipedia.org/wiki/HD%20125351
HD 125351 or A Boötis (A Boo) is a spectroscopic binary in the constellation Boötes. The system has an apparent magnitude of +4.97, with a spectrum matching a K-type giant star. It is approximately 233 light-years from Earth. References External links HR 5361 Image HD 125351 Bootis, A 125351 069879 5361 Durchmusterung objects K-type giants Boötes Spectroscopic binaries
HD 125351
[ "Astronomy" ]
102
[ "Boötes", "Constellations" ]
15,864,343
https://en.wikipedia.org/wiki/List%20of%20IEEE%20publications
The publications of the Institute of Electrical and Electronics Engineers (IEEE) constitute around 30% of the world literature in the electrical and electronics engineering and computer science fields; the IEEE publishes well over 100 peer-reviewed journals. The content of these journals, as well as the content from several hundred annual conferences, is available in the IEEE's online digital library. The IEEE also publishes more than 750 conference proceedings every year. In addition, the IEEE Standards Association maintains over 1,300 standards in engineering. Some of the journals are published in association with other societies, such as the Association for Computing Machinery (ACM), the American Society of Mechanical Engineers (ASME), the Optical Society (OSA), and the Minerals, Metals & Materials Society (TMS). Journals Magazines Other Communications and Networks, Journal of, by the Korean Institute of Communications Sciences (KICS) and technically cosponsored by the IEEE Communications Society See also :Category:IEEE conferences, many with published proceedings. List of American Society of Mechanical Engineers academic journals List of American Society of Civil Engineers academic journals References IEEE
List of IEEE publications
[ "Engineering" ]
217
[ "Electrical engineering", "Electrical-engineering-related lists" ]
15,865,266
https://en.wikipedia.org/wiki/Waru%20Waru
Waru Waru is an Aymara term for the agricultural technique developed by pre-Hispanic people in the Andes region of South America from Ecuador to Bolivia; this regional agricultural technique is also referred to as camellones in Spanish. Functionally similar agricultural techniques have been developed in other parts of the world, all of which fall under the broad category of raised field agriculture. This type of altiplano field agriculture consists of parallel canals alternated by raised planting beds, which would be strategically located on floodplains or near a water source so that the fields could be properly irrigated. These flooded fields were composed of soil that was rich in nutrients due to the presence of aquatic plants and other organic materials. Through the process of mounding up this soil to create planting beds, natural, recyclable fertilizer was made available in a region where nitrogen-rich soils were rare. By trapping solar radiation during the day, this raised field agricultural method also protected crops from freezing overnight. These raised planting beds were irrigated very efficiently by the adjacent canals which extended the growing season significantly, allowing for more food yield. Waru Waru were able to yield larger amounts of food than previous agricultural methods due to the overall efficiency of the system. This technique is dated to around 300 B.C., and is most commonly associated with the Tiwanaku culture of the Lake Titicaca region in southern Bolivia, who used this method to grow crops like potatoes and quinoa. This type of agriculture also created artificial ecosystems, which attracted other food sources such as fish and lake birds. Past cultures in the Lake Titicaca region likely utilized these additional resources as a subsistence method. It combines raised beds with irrigation channels to prevent damage by soil erosion during floods. These fields ensure both collecting of water (either fluvial water, rainwater or phreatic water) and subsequent drainage. The drainage aspect of this method makes it particularly useful in many areas subjected to risks of brutal floods, such as tropical parts of Bolivia and Peru where it emerged. Raised field agricultural methods have been used in many other countries such as China, Mexico and Belize. Mexican Chinampas were similar to Waru Waru in that they were created on or near a water source in order to properly irrigate crops. Raised fields are known in Belize from various sites, including Pulltrouser Swamp. Modern Uses In the 1960s, geographers William Denevan, George Plafker, and Kenneth Lee found evidence of raised-field agriculture that had been utilized in the Llanos de Moxos region of Bolivia's Amazon basin, a region that was previously thought to have been unable to sustain large-scale agriculture because of what was believed to have been an unfavorable rainforest environment. This discovery led to a joint experimental archaeology project in the region involving archaeologist Clark Erickson, the Inter-American Foundation, the Parroquia of San Ignacio, the Bolivian Institute of Archaeology, and the University of Pennsylvania Museum of Archaeology and Anthropology. The goal of this experiment was to attempt to restore indigenous raised-field agriculture in the region. This project began in 1990 at the Biological Station of the Beni Department in Bolivia. Because of the experiment's success, it was later implemented further in collaboration with local indigenous communities. 
The indigenous community provided land for the project, and the Inter-American Foundation paid them wages to build and maintain the plots, which successfully produced manioc and maize. These plots did not require extensive upkeep following the initial season's planting, and were self-sufficient because of the artificial ecosystems that they created. This agricultural method was also revived by Alan Kolata of the University of Chicago in 1984, in Tiwanaku, Bolivia, as well as in Puno, Peru. Research on Waru Waru and its effectiveness in the past has led to a resurgence of the technique amongst contemporary Aymara- and Quechua-speaking native peoples in Bolivia and Peru. By utilizing this centuries-old technique, modern people in the region have been able to make use of the harsh altiplano landscape around Lake Titicaca. This method is now being used in different areas of South America where farming is difficult, such as the altiplano and the Amazon basin. Because of this method, indigenous people are now able to farm the landscape much more efficiently and without the use of modern equipment. This method also allows large-scale agriculture to be performed in the Amazon basin without having to rely on deforestation. Experiments Research was done at two raised-field sites by Diego Sanchez de Lozada et al. in the northern altiplano of Bolivia near Lake Titicaca in an effort to better understand the effects of frost on potato crops. At high altitude, these crops were subject to temperature and moisture variation. The temperature of the soil on top of the high raised mounds was about 1 degree Celsius higher than that of the ground in nearby fields, showing that the raised-field technique was able to partially mitigate frost effects on potato crops at night. Temperature and moisture analysis of the raised fields showed that the higher temperature present was due to above-ground processes, which caused cold air to fall into the canals and not onto the planted rows. The frost-mitigation effects of the raised-field system kept crops from freezing overnight, which increased crop yield. History Lake Titicaca Region Sixteenth-century Spanish accounts of the Lake Titicaca region mentioned the different types of agriculture utilized by the native peoples in detail; however, there was never any mention of raised fields in their records. The lack of Spanish accounts strongly suggests that these Waru Waru were no longer in use by the time the conquistadors reached the Lake Titicaca region. The raised fields of the region are numerous and range in size, varying in width, length, and height. These pre-Hispanic fields cover an extensive area of land in Bolivia and Peru, and sit above an altitude of around 3,800 m. Radiocarbon dates taken from habitation sites associated with raised-field agriculture in the region indicate usage sometime between 1000 B.C. and A.D. 400. Thermoluminescence dating was also used to date pottery shards in associated areas, the results of which agree with the radiocarbon dates. Field stratigraphy was used to provide relative dates for the usage of certain raised fields in the area. The habitation sites associated with these fields indicate large populations and long-term occupations, suggesting that raised-field agriculture was able to sustain large numbers of people.
These dates from Andean sites suggest that this form of agriculture was a relatively early phenomenon in the area that slowly expanded throughout the region, and was utilized by various cultures during different time periods. See also Chinampa References Permaculture Irrigation History of agriculture Sustainable agriculture Quechua Ecological techniques Biological techniques and tools Ancient inventions
Waru Waru
[ "Biology" ]
1,395
[ "Ecological techniques", "nan" ]
15,865,362
https://en.wikipedia.org/wiki/Dead%20Sea%20salt
Dead Sea salt refers to salt and other mineral deposits extracted or taken from the Dead Sea. The composition of this material differs significantly from oceanic salt. History Dead Sea salt was used by the peoples of Ancient Egypt and it has been utilized in various unguents, skin creams, and soaps since then. Mineral composition The Dead Sea's mineral composition varies with season, rainfall, depth of deposit, and ambient temperature. Most oceanic salt is approximately 85 wt.% sodium chloride (the same salt as table salt) while Dead Sea salt is only 30.5 wt.% of this, with the remainder composed of other dried minerals and salts. The concentrations of the major ions present in the Dead Sea water are given in the following table: The chemical composition of the crystallized Dead Sea salts does not necessarily correspond to the results presented in this table because of composition changes due to the process of fractional crystallization. The main detritic minerals present in the Dead Sea mud were carried by runoff streams flowing into the Dead Sea. They constituted large mud deposits intermixed with salt layers during the Holocene era. Their elemental composition expressed as equivalent oxides (except for Cl– and Br–) is given here below: Except for chloride and bromide, the results of the elemental composition of the Dead Sea mud given here above are presented as equivalent oxides for the sake of convenience. To illustrate this chemical convention, the neutral sodium sulfate (Na2SO4) is reported here as basic sodium oxide (Na2O) and acidic sulfur trioxide (SO3), neither of which can naturally occur under these free forms in this mud. However, one will note that the elemental composition given here above is incomplete as a major component is lacking in this table: carbon dioxide (CO2) accounting for the significant carbonate fraction present in this mud. Therapeutic benefits Dead Sea salts have been claimed to treat the following conditions: Rheumatologic conditions Rheumatologic conditions can be treated in the balneotherapy of rheumatoid arthritis, psoriatic arthritis, and osteoarthritis. The minerals are absorbed while soaking, stimulating blood circulation. Common skin ailments Skin disorders such as acne and psoriasis may be relieved by regularly soaking the affected area in water with added Dead Sea salt. The National Psoriasis Foundation recommends Dead Sea and Dead Sea salts as effective treatments for psoriasis. High concentration of magnesium in Dead Sea salt may be helpful in improving skin hydration and reducing inflammation, although Epsom salt is a much less expensive salt that also contains high amounts of magnesium and therefore may be equally as useful for this purpose. Allergies The high concentration of bromide and magnesium in the Dead Sea salt may help relieve allergic reactions of the skin by reducing inflammation. Skin ageing Dead Sea salt may reduce the depth of skin wrinkling, a form of skin ageing. See also Tourism in the Palestinian territories Tourism in Jordan Tourism in Israel Medical tourism in Israel References Bathing Salts Skin care Traditional medicine Dead Sea
Dead Sea salt
[ "Chemistry" ]
621
[ "Salts" ]
15,865,418
https://en.wikipedia.org/wiki/Exposure%20action%20value
An Exposure Action Value (EAV) or Action Value (AV) is a limit set on occupational exposure to noise; when that value is exceeded, employers must take steps to monitor the exposure levels. These levels are measured in decibels. The American Occupational Safety and Health Administration (OSHA) has set the EAV at 85 dB(A). When the eight-hour time-weighted average (TWA) reaches 85 dB(A), employers are required to administer a continuing, effective hearing conservation program. The program consists of monitoring, employee notification, observation, an audiometric testing program, hearing protectors, training programs, and record-keeping requirements. Purpose The purpose of the EAV is to ensure that employees are not suffering from high levels of noise exposure. OSHA requires employers to take steps to reduce exposure levels when the TWA reaches 90 dB(A). The EAV is intended to ensure that exposure levels do not reach 90 dB(A) or more, and that employees do not experience noise-induced hearing loss. Use A noise dosimeter is used to measure employees' noise exposure, and dosimeters can be used to determine the TWA. If it is determined that noise exposure levels have reached the EAV, employers are required to implement a hearing conservation program. The hearing conservation program consists of several aspects. The first aspect is monitoring. The employer is required to monitor noise exposure for all of its employees who may be exposed at a TWA at or above 85 dB(A). This is to identify employees for inclusion in the hearing conservation program and to enable the proper selection of hearing protectors. The second aspect is the audiometric testing program. Employees exposed to levels at or above the EAV undergo audiometric testing. The first test is called a baseline; it provides a standard against which future audiometric tests are compared. If a significant change in hearing ability occurs (called a standard threshold shift), greater steps must be taken to ensure the employee is protected from high levels of noise exposure. The third aspect is the implementation of hearing protection. Employers must make hearing protection available, at no cost, to all employees who are exposed to noise levels of 85 dB(A) or greater. Employees can pick whichever type of hearing protection they prefer. This also requires an ongoing evaluation of the hearing protection. The fourth aspect is a training program. The training program must cover the effects of noise on hearing, the purpose of hearing protection, and the purpose of audiometric testing. The last aspect is record keeping. Records of employee audiometric tests must be retained for two years, and this information must also be available to the employees. Vibration Just as with the Exposure Action Value for noise, safety administrations across the world are publishing values for vibration. So far, the American Occupational Safety and Health Administration has not officially published a list of appropriate, time-weighted EAV guidelines for employers to follow. However, companies in the United States are encouraged by the National Institute for Occupational Safety and Health to follow the vibration limits set up by the ACGIH, which publishes a Threshold Limit Value (TLV) in its annual book of TLVs and BEIs. The goal is for employers to use these numbers to make a conscious effort to lower the amount of harmful exposure absorbed by their workers.
Since the link between physical vibration and damage such as Raynaud's phenomenon is less clearly defined than the link between noise exposure and noise-induced hearing loss, an upper limit, the Exposure Limit Value (ELV), is provided as well to give a margin of safety. See also Hearing impairment Audiometry Hearing Conservation Program Noise-induced hearing loss References Sysco Environmental External links OSHA Occupational Noise Exposure Regulation 1910.95 NIOSH Power Tools Sound Power and Vibrations Database Occupational safety and health Industrial hygiene Safety engineering Hearing Protective gear Noise control Environmental standards Occupational Safety and Health Administration
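The noise dose and TWA referred to in the Use section follow from the exchange-rate formulas in OSHA's 29 CFR 1910.95: each sound level L has a permitted duration T = 8 / 2^((L − 90)/5) hours, the percent dose is the sum of actual over permitted durations, and TWA = 16.61·log10(D/100) + 90. The sketch below applies those published formulas to a made-up shift; only the exposure segments themselves are illustrative assumptions.

```python
import math

def permitted_hours(level_dba: float) -> float:
    """OSHA permissible duration at a given sound level (29 CFR 1910.95):
    8 hours at 90 dB(A), halving for every 5 dB increase (5 dB exchange rate)."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def noise_dose(segments) -> float:
    """Percent noise dose from (level_dBA, hours) exposure segments."""
    return 100.0 * sum(hours / permitted_hours(level) for level, hours in segments)

def twa(dose_percent: float) -> float:
    """Eight-hour time-weighted average sound level corresponding to a dose."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Illustrative shift: 6 h at 85 dB(A) and 2 h at 95 dB(A).
shift = [(85.0, 6.0), (95.0, 2.0)]
d = noise_dose(shift)
print(f"dose = {d:.0f}%  TWA = {twa(d):.1f} dB(A)")
# A TWA at or above 85 dB(A) (a 50% dose) triggers the hearing conservation program.
```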
Exposure action value
[ "Engineering" ]
789
[ "Safety engineering", "Systems engineering" ]
15,865,532
https://en.wikipedia.org/wiki/Tim%20Ingold
Timothy Ingold (born 1 November 1948) is a British anthropologist, and Chair of Social Anthropology at the University of Aberdeen. Background Ingold was educated at Leighton Park School in Reading, and his father was the mycologist Cecil Terence Ingold. He attended Churchill College, Cambridge, initially studying natural sciences but shifting to anthropology (BA in Social Anthropology 1970, PhD 1976). His doctoral work was conducted with the Skolt Sámi of northeastern Finland, studying their ecological adaptations, social organisation and ethnic politics. His field work was primarily in the village of Sevettijärvi and in 2024 he donated his field diaries documenting Skolt life in the area in the early 1970s to the community. Ingold taught at the University of Helsinki (1973–74) and then the University of Manchester, becoming Professor in 1990 and Max Gluckman Professor in 1995. In 1999, he moved to the University of Aberdeen. In 2015, he received an honorary doctorate from Leuphana University of Lüneburg (Germany). He has four children. Contributions His interests are wide-ranging and his scholarly approach is individualistic. They include environmental perception, language, technology and skilled practice, art and architecture, creativity, theories of evolution in anthropology, human-animal relations, and ecological approaches in anthropology. Early concern was with northern circumpolar peoples, looking comparatively at hunting, pastoralism and ranching as alternative ways in which such peoples have based a livelihood on reindeer or caribou. In his recent work, he links the themes of environmental perception and skilled practice, replacing traditional models of genetic and cultural transmission, founded upon the alliance of neo-Darwinian biology and cognitive science, with a relational approach focusing on the growth of embodied skills of perception and action within social and environmental contexts of human development. This has taken him to examining the use of lines in culture, and the relationship between anthropology, architecture, art and design. He discusses his entire career in From science to art and back again: The pendulum of an anthropologist (2016). Writing within the anthropological realm of phenomenology, Ingold explores the human as an organism which 'feels' its way through the world that "is itself in motion"; constantly creating and being changed by spaces and places as they are encountered. Honours and awards Ingold was appointed Commander of the Order of the British Empire (CBE) in the 2022 Birthday Honours for services to anthropology. Rivers Memorial Medal, RAI (1989) Fellow of the British Academy (1997) Fellow of the Royal Society of Edinburgh (2000) Huxley Memorial Medal recipient —established in 1900 in memory of Thomas Henry Huxley— for services to anthropology by the Council of the Royal Anthropological Institute of Great Britain and Ireland, the highest honour of the RAI (2014) Honorary doctorate of the Leuphana University of Lüneburg (2015) Bibliography Ingold, T. (2021). Correspondences. Polity, London, UK. Ingold, T. (2018). Anthropology: Why it matters. Polity, London, UK. Ingold, T. (2017). Anthropology and/as education. Routledge, London, UK. Ingold, T. (2015). The Life of Lines. Routledge, London, UK. Ingold, T. (2013). Making: Anthropology, Archaeology, Art and Architecture. Routledge, London, UK. Ingold, T. & Palsson, G. (eds.) (2013). Biosocial Becomings: Integrating Social and Biological Anthropology. Cambridge University Press, Cambridge, MS. 
Janowski, M. & Ingold, T. (eds.) (2012). Imagining Landscapes: Past, Present and Future. Ashgate, Abingdon, UK. Ingold, T. (2011). Being Alive: Essays on Movement, Knowledge and Description. Routledge, London, UK. Ingold, T. (2011). Redrawing Anthropology: Materials, movements, lines. Ashgate, Aldershot. Ingold, T. & Vergunst, J. (eds.) (2008). Ways of Walking: Ethnography and Practice on Foot. Ashgate, Aldershot. Ingold, T. (2007). Lines: A Brief History. Routledge, Oxon, UK. Hallam, E. & Ingold, T. (2007). Creativity and Cultural Improvisation. A.S.A. Monographs, vol. 44, Berg Publishers, Oxford. Ingold, T. (2000). The perception of the environment: essays on livelihood, dwelling and skill. London: Routledge. Ingold, T. (1996). Key Debates In Anthropology Ingold, T. (1986). Evolution and social life. Cambridge: Cambridge University Press. Ingold, T. (1986). The appropriation of nature: essays on human ecology and social relations. Manchester: Manchester University Press. Ingold T. (1980). Hunters, pastoralists and ranchers: reindeer economies and their transformations . Cambridge: Cambridge University Press. Ingold T. (1976). The Skolt Lapps today. Cambridge: Cambridge University Press. See also Taskscape Further reading Tim Ingold. In the Gathering of Shadows of Material Things. Exploring Materiality and Connectivity in Anthropology and Beyond. Schorch,P., Saxer, M., Elders, M., (eds.), UCL Press. 2020 Tim Ingold. On the Distinction between Evolution and History. Social Evolution & History. Vol. 1, num.1, 2002, pp. 5–24 Tim Ingold. Towards an Ecology of Materials. Audio recording of lecture given in University College Dublin, February 2012. Tim Ingold. Interview with Tim Ingold on October 05, 2011. In Ponto Urbe, Revista do Núcleo de Antropologia Urbana da USP, Num.11, Dec. 2012. See also New materialisms References Social anthropologists British anthropologists Fellows of the British Academy Fellows of the Royal Society of Edinburgh Alumni of Churchill College, Cambridge Academics of the University of Aberdeen 1948 births Living people Environmental social scientists Commanders of the Order of the British Empire
Tim Ingold
[ "Environmental_science" ]
1,259
[ "Environmental social scientists", "Environmental social science" ]
15,865,988
https://en.wikipedia.org/wiki/Selenoxide%20elimination
Selenoxide elimination (also called α-selenation) is a method for the chemical synthesis of alkenes from selenoxides. It is most commonly used to synthesize α,β-unsaturated carbonyl compounds from the corresponding saturated analogues. It is mechanistically related to the Cope reaction. Mechanism and stereochemistry After the development of sulfoxide elimination as an effective method for generating carbon–carbon double bonds, it was discovered that selenoxides undergo a similar process, albeit much more rapidly. Most selenoxides decompose to the corresponding alkenes at temperatures between −50 and 40 °C. Evidence suggests that the elimination is syn; however, epimerization at both carbon and selenium (both of which are stereogenic) may occur during the reaction. As selenoxides can be readily prepared from nucleophilic carbonyl derivatives (enols and enolates), selenoxide elimination has grown into a general method for the preparation of α,β-unsaturated carbonyl compounds. (1) Mechanism Elimination of selenoxides takes place through an intramolecular syn elimination pathway. The carbon–hydrogen and carbon–selenium bonds are co-planar in the transition state. (2) The reaction is highly trans-selective when acyclic α-phenylseleno carbonyl compounds are employed. Formation of conjugated double bonds is favored. Endocyclic double bonds tend to predominate over exocyclic ones, unless no syn hydrogen is available in the ring. Selenium in these reactions is almost always stereogenic, and the effect of epimerization at selenium (which is acid-catalyzed and occurs readily) on the elimination reaction is nearly unknown. In one example, separation and warming of selenoxides 1 and 2 revealed that 2 decomposes at 0 °C, while 1, which presumably has more difficulty accessing the necessary syn conformation for elimination, is stable to 5 °C. (3)Kinetic isotope effect studies have found a ratio of pre-exponential factors of AH/AD of 0.092 for sulfoxide elimination reactions, indicating that quantum tunneling plays an important role in the hydrogen transfer process. Scope and limitations Selanylating and oxidizing reagents α-Selanylation of carbonyl compounds can be accomplished with electrophilic or nucleophilic selanylating reagents. Usually, simple phenylseleno compounds are used in elimination reactions; although 2-nitrophenylselenides react more quickly, they are more expensive to prepare, and phenylselenides typically react in minutes. Electrophilic selanylating reagents can be used in conjunction with enols, enolates, or enol ethers. Phenylselanating reagents include: Diphenyl diselenide Benzeneselanyl chloride Benzeneselanyl bromide Benzeneselinyl chloride Sodium benzeneselenolate Trimethylsilyl phenyl selenide The most common oxidizing agent employed is hydrogen peroxide (H2O2). It is sometimes used in excess, to overcome catalytic decomposition of H2O2 by selenium; however, undesired oxidation of starting material has been observed under these conditions. Oxidation of products (via the Baeyer-Villiger reaction, for instance) has also been observed. (4) For substrates whose product olefins are sensitive to oxidation, meta-Chloroperoxybenzoic acid (mCPBA) can be employed as an oxidant. It oxidizes selenides below the temperature at which they decompose to alkenes; thus, all oxidant is consumed before elimination begins. Buffering with an amine base is necessary before warming to avoid acid-mediated side reactions. 
(5) Ozone, which gives only dioxygen as a byproduct after oxidation, is used to oxidize selenides when special conditions are required for thermolysis or extreme care is necessary during workup. Quinones can be synthesized from the corresponding cyclic unsaturated carbonyl compounds using this method. (6) Substrates α-Phenylseleno aldehydes, which are usually prepared from the corresponding enol ethers, are usually oxidized with mCPBA or ozone, as hydrogen peroxide causes over-oxidation. α-Phenylseleno ketones can be prepared by kinetically controlled enolate formation and trapping with an electrophilic selanylating reagent such as benzeneselenyl chloride. A second deprotonation, forming a selenium-substituted enolate, allows alkylation or hydroxyalkylation of these substrates. (7) Base-sensitive substrates may be selanylated under acid-catalyzed conditions (as enols) using benzeneselenyl chloride. Hydrochloric acid generated during the selanylation of transient enol catalyzes tautomerization. (8) The seleno-Pummerer reaction is a significant side reaction that may occur under conditions when acid is present. Protonation of the selenoxide intermediate, followed by elimination of hydroxide and hydrolysis, leads to α-dicarbonyl compounds. The reaction is not a problem for more electron-rich carbonyls—generally, fewer side reactions are observed in eliminations of esters and amides. (9) A second significant side reaction in reactions of ketones and aldehydes is selanylation of the intermediate selenoxide. This process leads to elimination products retaining a carbon-selenium bond, and is more difficult to prevent than the seleno-Pummerer reaction. Tertiary selenoxides, which are unable to undergo enolization, do not react further with selenium electrophiles. (10) Comparison with other methods Analogous sulfoxide eliminations are generally harder to implement than selenoxide eliminations. Formation of the carbon–sulfur bond is usually accomplished with highly reactive sulfenyl chlorides, which must be prepared for immediate use. However, sulfoxides are more stable than the corresponding selenoxides, and elimination is usually carried out as a distinct operation. This allows thermolysis conditions to be optimized (although the high temperatures required may cause other thermal processes). In addition, sulfoxides may be carried through multiple synthetic steps before elimination is carried out. (11) The combination of silyl enol ethers with palladium(II) acetate (Pd(OAc)2), the Saegusa oxidation, gives enones. However, the reaction requires stoichiometric amounts of Pd(OAc)2 and thus is not amenable to large-scale synthesis. Catalytic variants have been developed. (12) For β-dicarbonyl compounds, DDQ can be used as an oxidizing agent in the synthesis of enediones. Additionally, some specialized systems give better yields upon DDQ oxidation. (13) See also Saegusa–Ito oxidation References Organic redox reactions Chemical synthesis
Selenoxide elimination
[ "Chemistry" ]
1,507
[ "Organic reactions", "Organic redox reactions", "nan", "Chemical synthesis" ]
15,866,439
https://en.wikipedia.org/wiki/Hadamard%20regularization
In mathematics, Hadamard regularization (also called Hadamard finite part or Hadamard's partie finie) is a method of regularizing divergent integrals by dropping some divergent terms and keeping the finite part, introduced by Jacques Hadamard. It was later shown that this can be interpreted as taking the meromorphic continuation of a convergent integral. If the Cauchy principal value integral p.v. ∫_a^b f(t)/(t − x) dt (for a < x < b) exists, then it may be differentiated with respect to x to obtain the Hadamard finite part integral as follows: f.p. ∫_a^b f(t)/(t − x)^2 dt = d/dx [ p.v. ∫_a^b f(t)/(t − x) dt ], a < x < b. Note that the symbols p.v. and f.p. are used here to denote Cauchy principal value and Hadamard finite-part integrals respectively. The Hadamard finite part integral above (for a < x < b) may also be given by the equivalent definition f.p. ∫_a^b f(t)/(t − x)^2 dt = lim_{ε→0+} [ ∫_a^{x−ε} f(t)/(t − x)^2 dt + ∫_{x+ε}^b f(t)/(t − x)^2 dt − 2 f(x)/ε ]. The definitions above may be derived by assuming that the function f is differentiable infinitely many times at t = x, that is, by assuming that f can be represented by its Taylor series about t = x. For details, see the references. Integral equations containing Hadamard finite part integrals (with f unknown) are termed hypersingular integral equations. Hypersingular integral equations arise in the formulation of many problems in mechanics, such as in fracture analysis. Example Consider the divergent integral ∫_{−1}^{1} dt/t^2. Its Cauchy principal value also diverges, since p.v. ∫_{−1}^{1} dt/t^2 = lim_{ε→0+} [ ∫_{−1}^{−ε} dt/t^2 + ∫_{ε}^{1} dt/t^2 ] = lim_{ε→0+} (2/ε − 2) = ∞. To assign a finite value to this divergent integral, we may consider f.p. ∫_{−1}^{1} dt/t^2 = [ d/dx ( p.v. ∫_{−1}^{1} dt/(t − x) ) ]_{x=0}. The inner Cauchy principal value is given by p.v. ∫_{−1}^{1} dt/(t − x) = ln((1 − x)/(1 + x)) for |x| < 1. Therefore, f.p. ∫_{−1}^{1} dt/t^2 = [ −1/(1 − x) − 1/(1 + x) ]_{x=0} = −2. Note that this value does not represent the area under the curve y = 1/t^2, which is clearly always positive. However, it can be seen where this comes from. Recall that the Cauchy principal value of this integral, when evaluated at the endpoints ±ε, took the form (1/ε − 1) + (1/ε − 1) = 2/ε − 2. If one removes the infinite components, the pair of 1/ε terms, what remains is −1 − 1 = −2, which equals the value derived above. References Integrals Summability methods
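The worked example can also be checked numerically. The sketch below (not part of the article) uses the closed-form principal value p.v. ∫_{−1}^{1} dt/(t − x) = ln((1 − x)/(1 + x)) derived above and approximates its derivative at x = 0 by a central difference; by the definition of the finite part, the printed values should tend to −2 as the step size h shrinks. The function name and the step sizes are illustrative choices.

#include <stdio.h>
#include <math.h>

/* Closed-form Cauchy principal value of the integral over [-1, 1] of
   dt / (t - x), valid for |x| < 1. */
static double pv_integral(double x) {
    return log((1.0 - x) / (1.0 + x));
}

int main(void) {
    /* Hadamard finite part of the divergent integral of 1/t^2 over [-1, 1]:
       it equals the derivative of the principal-value integral at x = 0.
       A central difference approximates that derivative; the values printed
       should approach -2 as h shrinks. */
    for (double h = 1e-1; h >= 1e-5; h /= 10.0) {
        double fp = (pv_integral(h) - pv_integral(-h)) / (2.0 * h);
        printf("h = %.0e   finite-part estimate = %.8f\n", h, fp);
    }
    return 0;
}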
Hadamard regularization
[ "Mathematics" ]
389
[ "Sequences and series", "Summability methods", "Mathematical structures" ]
15,867,639
https://en.wikipedia.org/wiki/TimeSys
Timesys Corporation is a company selling Linux open source software security, engineering services, and development tools for the embedded software market. The firm also helps software development teams build and maintain a custom Linux platform for embedded processors from integrated circuit manufacturers such as Atmel, Freescale, Intel, Texas Instruments, and Xilinx. On December 12, 2023, Lynx Software Technologies announced that it had completed the purchase of TimeSys Corporation. History Based in Pittsburgh, Pennsylvania, it was founded in 1995 by principals associated with Carnegie Mellon University and initially provided the first real-time enhanced embedded Linux distribution, known as Timesys Linux/RT. Timesys joined the OSDL in 2003, and in 2004, was the first to register a carrier-grade Linux distribution. In 2005, Timesys open-sourced their software. At that time, the firm announced LinuxLink, a software development framework that helps embedded software development teams configure, patch, build and maintain an open source Linux platform. It includes a Linux kernel, GNU toolchain, packages and libraries, and development tools. Subscribers are provided with regular updates, documentation and support. All Linux platform components and updates are open source and are provided through the LinuxLink Factory custom platform builder. Embedded Linux platforms, developed and maintained through LinuxLink, exist in hundreds of consumer electronics, medical device, industrial automation and networking products. LinuxLink evolved to become a portal for customers to gain access to cybersecurity products and services, development tools, and to ask for help. Timesys added end-to-end device security in 2019. Vigiles is a vulnerability and mitigation tracker for automatically identifying, monitoring and tracking vulnerabilities specific to a developer's actual product configurations, along with triage and mitigation collaboration tools to fix issues. Timesys provides secure-by-design services to implement security features including secure boot, over-the-air updates, device encryption, hardening, and security audits. For long-term support of embedded devices, Timesys offers a Linux OS board support package maintenance service. References External links Timesys Corporation website Software companies based in Pennsylvania Linux companies Embedded Linux Privately held companies based in Pennsylvania Software companies of the United States 2023 mergers and acquisitions
TimeSys
[ "Technology" ]
457
[ "Computing stubs", "Computer company stubs" ]
15,868,711
https://en.wikipedia.org/wiki/Lehmer%20random%20number%20generator
The Lehmer random number generator (named after D. H. Lehmer), sometimes also referred to as the Park–Miller random number generator (after Stephen K. Park and Keith W. Miller), is a type of linear congruential generator (LCG) that operates in the multiplicative group of integers modulo n. The general formula is X_{k+1} = a · X_k mod m, where the modulus m is a prime number or a power of a prime number, the multiplier a is an element of high multiplicative order modulo m (e.g., a primitive root modulo n), and the seed X_0 is coprime to m. Other names are multiplicative linear congruential generator (MLCG) and multiplicative congruential generator (MCG). Parameters in common use In 1988, Park and Miller suggested a Lehmer RNG with particular parameters m = 2^31 − 1 = 2,147,483,647 (a Mersenne prime M_31) and a = 7^5 = 16,807 (a primitive root modulo M_31), now known as MINSTD. Although MINSTD was later criticized by Marsaglia and Sullivan (1993), it is still in use today (in particular, in CarbonLib and C++11's minstd_rand0). Park, Miller and Stockmeyer responded to the criticism (1993), saying: Given the dynamic nature of the area, it is difficult for nonspecialists to make decisions about what generator to use. "Give me something I can understand, implement and port... it needn't be state-of-the-art, just make sure it's reasonably good and efficient." Our article and the associated minimal standard generator was an attempt to respond to this request. Five years later, we see no need to alter our response other than to suggest the use of the multiplier a = 48271 in place of 16807. This revised constant is used in C++11's minstd_rand random number generator. The Sinclair ZX81 and its successors use the Lehmer RNG with parameters m = 2^16 + 1 = 65,537 (a Fermat prime F_4) and a = 75 (a primitive root modulo F_4). The CRAY random number generator RANF is a Lehmer RNG with the power-of-two modulus m = 2^48 and a = 44,485,709,377,909. The GNU Scientific Library includes several random number generators of the Lehmer form, including MINSTD, RANF, and the infamous IBM random number generator RANDU. Choice of modulus Most commonly, the modulus is chosen as a prime number, making the choice of a coprime seed trivial (any 0 < X_0 < m will do). This produces the best-quality output, but introduces some implementation complexity, and the range of the output is unlikely to match the desired application; converting to the desired range requires an additional multiplication. Using a modulus m which is a power of two makes for a particularly convenient computer implementation, but comes at a cost: the period is at most m/4, and the lower bits have periods shorter than that. This is because the lowest k bits form a modulo-2^k generator all by themselves; the higher-order bits never affect lower-order bits. The values X_k are always odd (bit 0 never changes), bits 2 and 1 alternate (the lower 3 bits repeat with a period of 2), the lower 4 bits repeat with a period of 4, and so on. Therefore, the application using these random numbers must use the most significant bits; reducing to a smaller range using a modulo operation with an even modulus will produce disastrous results. To achieve this period, the multiplier must satisfy a ≡ ±3 (mod 8), and the seed X_0 must be odd. Using a composite modulus is possible, but the generator must be seeded with a value coprime to m, or the period will be greatly reduced. For example, a modulus of F_5 = 2^32 + 1 might seem attractive, as the outputs can be easily mapped to a 32-bit word 0 ≤ X_k − 1 < 2^32.
However, a seed of X_0 = 6700417 (which divides 2^32 + 1) or any multiple would lead to an output with a period of only 640. Another generator with a composite modulus is the one recommended by Nakazawa & Nakazawa: m is taken to be the product of two primes, each fitting in 32 bits, with a corresponding multiplier a (any of ±a (mod m) will do as well). As both factors of the modulus are less than 2^32, it is possible to maintain the state modulo each of the factors, and construct the output value using the Chinese remainder theorem, using no more than 64-bit intermediate arithmetic. A more popular implementation for large periods is a combined linear congruential generator; combining (e.g. by summing their outputs) several generators is equivalent to the output of a single generator whose modulus is the product of the component generators' moduli, and whose period is the least common multiple of the component periods. Although the periods will share a common divisor of 2, the moduli can be chosen so that 2 is the only common divisor and the resultant period is (m_1 − 1)(m_2 − 1)···(m_k − 1)/2^(k−1). One example of this is the Wichmann–Hill generator. Relation to LCG While the Lehmer RNG can be viewed as a particular case of the linear congruential generator with c = 0, it is a special case that implies certain restrictions and properties. In particular, for the Lehmer RNG, the initial seed X_0 must be coprime to the modulus m, which is not required for LCGs in general. The choice of the modulus m and the multiplier a is also more restrictive for the Lehmer RNG. In contrast to LCG, the maximum period of the Lehmer RNG equals m − 1, and it is such when m is prime and a is a primitive root modulo m. On the other hand, the discrete logarithms (to base a or any primitive root modulo m) of the successive values X_k represent a linear congruential sequence modulo the Euler totient φ(m). Implementation A prime modulus requires the computation of a double-width product and an explicit reduction step. If a modulus just less than a power of 2 is used (the Mersenne primes 2^31 − 1 and 2^61 − 1 are popular, as are 2^32 − 5 and 2^64 − 59), reduction modulo m = 2^e − d can be implemented more cheaply than a general double-width division using the identity 2^e ≡ d (mod m). The basic reduction step divides the product into two e-bit parts, multiplies the high part by d, and adds them: if ax = H·2^e + L with 0 ≤ L < 2^e, then ax ≡ dH + L (mod m). This can be followed by subtracting m until the result is in range. The number of subtractions is limited to ad/m, which can be easily limited to one if d is small and a < m/d is chosen. (This condition also ensures that dH is a single-width product; if it is violated, a double-width product must be computed.) When the modulus is a Mersenne prime (d = 1), the procedure is particularly simple. Not only is multiplication by d trivial, but the conditional subtraction can be replaced by an unconditional shift and addition. To see this, note that the algorithm guarantees that x ≢ 0 (mod m), meaning that x = 0 and x = m are both impossible. This avoids the need to consider equivalent e-bit representations of the state; only values where the high bits are non-zero need reduction. The low e bits of the product ax cannot represent a value larger than m, and the high bits will never hold a value greater than a − 1 ≤ m − 2. Thus the first reduction step produces a value at most m + a − 1 ≤ 2m − 2 = 2^(e+1) − 4. This is an (e + 1)-bit number, which can be greater than m (i.e. might have bit e set), but the high half is at most 1, and if it is, the low e bits will be strictly less than m.
Thus whether the high bit is 1 or 0, a second reduction step (addition of the halves) will never overflow e bits, and the sum will be the desired value. If d > 1, conditional subtraction can also be avoided, but the procedure is more intricate. The fundamental challenge of a modulus like 2^32 − 5 lies in ensuring that we produce only one representation for values such as 1 ≡ 2^32 − 4 (mod 2^32 − 5). The solution is to temporarily add d, so that the range of possible values is d through 2^e − 1, and reduce values larger than e bits in a way that never generates representations less than d. Finally subtracting the temporary offset produces the desired value. Begin by assuming that we have a partially reduced value y bounded so that 0 ≤ y < 2m = 2^(e+1) − 2d. In this case, a single offset subtraction step, y′ = ((y + d) mod 2^e) + d⌊(y + d)/2^e⌋ − d, will produce 0 ≤ y′ < m. To see this, consider two cases: 0 ≤ y < m = 2^e − d: In this case, y + d < 2^e and y′ = y < m, as desired. m ≤ y < 2m: In this case, 2^e ≤ y + d < 2^(e+1) is an (e + 1)-bit number, and ⌊(y + d)/2^e⌋ = 1. Thus, y′ = (y + d) − 2^e + d − d = y − 2^e + d = y − m < m, as desired. Because the multiplied high part is d, the sum is at least d, and subtracting the offset never causes underflow. (For the case of a Lehmer generator specifically, a zero state or its image y = m will never occur, so an offset of d − 1 will work just the same, if that is more convenient. This reduces the offset to 0 in the Mersenne prime case, when d = 1.) Reducing a larger product ax to less than 2m = 2^(e+1) − 2d can be done by one or more reduction steps without an offset. If ad ≤ m, then one additional reduction step suffices. Since x < m, the product satisfies ax < am < a·2^e, so its high e-bit part is at most a − 1, and one reduction step converts this to at most 2^e − 1 + (a − 1)d = m + ad − 1. This is within the limit of 2m if ad − 1 < m, which is the initial assumption. If ad > m, then it is possible for the first reduction step to produce a sum greater than 2m = 2^(e+1) − 2d, which is too large for the final reduction step. (It also requires the multiplication by d to produce a product larger than e bits, as mentioned above.) However, as long as d^2 < 2^e, the first reduction will produce a value in the range required for the preceding case of two reduction steps to apply. Schrage's method If a double-width product is not available, Schrage's method, also called the approximate factoring method, may be used to compute ax mod m, but this comes at a cost: The modulus must be representable in a signed integer; arithmetic operations must allow a range of ±m. The choice of multiplier a is restricted. We require that r = m mod a be no greater than q = ⌊m/a⌋, commonly achieved by choosing a ≤ √m. One division (with remainder) per iteration is required. While this technique is popular for portable implementations in high-level languages which lack double-width operations, on modern computers division by a constant is usually implemented using double-width multiplication, so this technique should be avoided if efficiency is a concern. Even in high-level languages, if the multiplier a is limited to a half-width value, then the double-width product ax can be computed using two single-width multiplications, and reduced using the techniques described above. To use Schrage's method, first factor m as m = qa + r, i.e. precompute the auxiliary constants r = m mod a and q = ⌊m/a⌋ = (m − r)/a. Then, each iteration, compute ax mod m as a(x mod q) − r⌊x/q⌋, adding m if the result is negative. This equality holds because aq = m − r, so if we factor x = (x mod q) + q⌊x/q⌋, we get: ax = a(x mod q) + aq⌊x/q⌋ = a(x mod q) + (m − r)⌊x/q⌋ ≡ a(x mod q) − r⌊x/q⌋ (mod m). The reason it does not overflow is that both terms are less than m. Since x mod q < q ≤ m/a, the first term a(x mod q) is strictly less than a(m/a) = m and may be computed with a single-width product.
If a is chosen so that r ≤ q (and thus r/q ≤ 1), then the second term is also less than m: r⌊x/q⌋ ≤ rx/q = x(r/q) ≤ x(1) = x < m. Thus, the difference lies in the range [1−m, m−1] and can be reduced to [0, m−1] with a single conditional add. This technique may be extended to allow a negative r (−q ≤ r < 0), changing the final reduction to a conditional subtract. The technique may also be extended to allow larger a by applying it recursively. Of the two terms subtracted to produce the final result, only the second (r⌊x/q⌋) risks overflow. But this is itself a modular multiplication by a compile-time constant r, and may be implemented by the same technique. Because each step, on average, halves the size of the multiplier (0 ≤ r < a, average value (a−1)/2), this would appear to require one step per bit and be spectacularly inefficient. However, each step also divides x by an ever-increasing quotient q, and quickly a point is reached where the argument is 0 and the recursion may be terminated. Sample C99 code Using C code, the Park-Miller RNG can be written as follows: uint32_t lcg_parkmiller(uint32_t *state) { return *state = (uint64_t)*state * 48271 % 0x7fffffff; } This function can be called repeatedly to generate pseudorandom numbers, as long as the caller is careful to initialize the state to any number greater than zero and less than the modulus. In this implementation, 64-bit arithmetic is required; otherwise, the product of two 32-bit integers may overflow. To avoid the 64-bit division, do the reduction by hand: uint32_t lcg_parkmiller(uint32_t *state) { uint64_t product = (uint64_t)*state * 48271; uint32_t x = (product & 0x7fffffff) + (product >> 31); x = (x & 0x7fffffff) + (x >> 31); return *state = x; } To use only 32-bit arithmetic, use Schrage's method: uint32_t lcg_parkmiller(uint32_t *state) { // Precomputed parameters for Schrage's method const uint32_t M = 0x7fffffff; const uint32_t A = 48271; const uint32_t Q = M / A; // 44488 const uint32_t R = M % A; // 3399 uint32_t div = *state / Q; // max: M / Q = A = 48,271 uint32_t rem = *state % Q; // max: Q - 1 = 44,487 int32_t s = rem * A; // max: 44,487 * 48,271 = 2,147,431,977 = 0x7fff3629 int32_t t = div * R; // max: 48,271 * 3,399 = 164,073,129 int32_t result = s - t; if (result < 0) result += M; return *state = result; } or use two 16×16-bit multiplies: uint32_t lcg_parkmiller(uint32_t *state) { const uint32_t A = 48271; uint32_t low = (*state & 0x7fff) * A; // max: 32,767 * 48,271 = 1,581,695,857 = 0x5e46c371 uint32_t high = (*state >> 15) * A; // max: 65,535 * 48,271 = 3,163,439,985 = 0xbc8e4371 uint32_t x = low + ((high & 0xffff) << 15) + (high >> 16); // max: 0x5e46c371 + 0x7fff8000 + 0xbc8e = 0xde46ffff x = (x & 0x7fffffff) + (x >> 31); return *state = x; } Another popular Lehmer generator uses the prime modulus 2^32 − 5: uint32_t lcg_rand(uint32_t *state) { return *state = (uint64_t)*state * 279470273u % 0xfffffffb; } This can also be written without a 64-bit division: uint32_t lcg_rand(uint32_t *state) { uint64_t product = (uint64_t)*state * 279470273u; uint32_t x; // Not required because 5 * 279470273 = 0x5349e3c5 fits in 32 bits. // product = (product & 0xffffffff) + 5 * (product >> 32); // A multiplier larger than 0x33333333 = 858,993,459 would need it. // The multiply result fits into 32 bits, but the sum might be 33 bits. product = (product & 0xffffffff) + 5 * (uint32_t)(product >> 32); product += 4; // This sum is guaranteed to be 32 bits. 
x = (uint32_t)product + 5 * (uint32_t)(product >> 32); return *state = x - 4; } Many other Lehmer generators have good properties. The following modulo-2^128 Lehmer generator requires 128-bit support from the compiler and uses a multiplier computed by L'Ecuyer. It has a period of 2^126: static unsigned __int128 state; /* The state must be seeded with an odd value. */ void seed(unsigned __int128 seed) { state = seed << 1 | 1; } uint64_t next(void) { // GCC cannot write 128-bit literals, so we use an expression const unsigned __int128 mult = (unsigned __int128)0x12e15e35b500f16e << 64 | 0x2e714eb2b37916a5; state *= mult; return state >> 64; } The generator computes an odd 128-bit value and returns its upper 64 bits. This generator passes BigCrush from TestU01, but fails the TMFn test from PractRand. That test has been designed to catch exactly the defect of this type of generator: since the modulus is a power of 2, the period of the lowest bit in the output is only 2^62, rather than 2^126. Linear congruential generators with a power-of-2 modulus have a similar behavior. The following core routine improves upon the speed of the above code for integer workloads (if the constant declaration is allowed to be optimized out of a calculation loop by the compiler): uint64_t next(void) { uint64_t result = state >> 64; // GCC cannot write 128-bit literals, so we use an expression const unsigned __int128 mult = (unsigned __int128)0x12e15e35b500f16e << 64 | 0x2e714eb2b37916a5; state *= mult; return result; } However, because the multiplication is deferred, it is not suitable for hashing, since the first call simply returns the upper 64 bits of the seed state. References (journal version: Annals of the Computation Laboratory of Harvard University, Vol. 26 (1951)). Steve Park, Random Number Generators External links Primes just less than a power of two may be useful for choosing moduli. Part of Prime Pages. Pseudorandom number generators Modular arithmetic
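As a small, self-contained check of the period claim above (not part of the original article), the following sketch runs a toy Lehmer generator with the prime modulus m = 31 and multiplier a = 3, which is a primitive root modulo 31, and counts how many steps it takes to return to the initial state; the count should equal m − 1 = 30. The parameters are illustrative only and far too small for real use.

#include <stdio.h>
#include <stdint.h>

/* Toy Lehmer generator X_{k+1} = a * X_k mod m used to check the claim
   that the period equals m - 1 when m is prime and a is a primitive root
   modulo m.  m = 31 and a = 3 are illustrative choices only. */
int main(void) {
    const uint32_t m = 31, a = 3;
    uint32_t x = 1;          /* any seed in 1 .. m-1 will do */
    uint32_t steps = 0;
    do {
        x = (a * x) % m;     /* one Lehmer step */
        steps++;
    } while (x != 1);
    printf("period = %u (expected m - 1 = %u)\n", steps, m - 1);
    return 0;
}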
Lehmer random number generator
[ "Mathematics" ]
4,439
[ "Arithmetic", "Modular arithmetic", "Number theory" ]
15,868,771
https://en.wikipedia.org/wiki/Open%20Computing%20Facility
The Open Computing Facility is a student organization at the University of California, Berkeley, and a chartered program of the ASUC. Founded in 1989, the OCF is an all-volunteer, student-run organization dedicated to providing free and accessible computing resources to all members of the University community. The mission of the OCF is "to provide an environment where no member of Berkeley's campus community is denied the computer resources he or she seeks, to appeal to all members of the Berkeley campus community with unmet computing needs, and to provide a place for those interested in computing to fully explore that interest." The OCF provides the following services, among others, to UC Berkeley students, staff, alumni, and affiliates: A 30-seat computer lab with Linux workstations, 4K monitors, and mechanical keyboards Webhosting for individuals and student groups Linux shell access Email forwarding Free printing quotas for lab users Access to a high-performance computing cluster featuring modern NVIDIA GPUs Software mirrors of popular Linux distributions and open source projects, available over rsync, http, and https A Linux systems administration DeCal To further the OCF's goal of promoting accessibility, the OCF publishes its board meeting minutes, tech talks, and Unix system administration DeCal materials online for all to see and use. References External links Open Computing Facility Open Computing Facility Documentation Open Computing Facility Status Blog Open Computing Facility Software Mirrors Open Computing Facility 1989 establishments in California
Open Computing Facility
[ "Technology" ]
300
[ "Computing stubs" ]
15,868,806
https://en.wikipedia.org/wiki/Specific%20force
Specific force (SF) is a mass-specific quantity defined as the quotient of force per unit mass. It is a physical quantity of kind acceleration, with dimension of length per time squared and units of metre per second squared (m·s^−2). It is normally applied to forces other than gravity, to emulate the relationship between gravitational acceleration and gravitational force. It can also be called mass-specific weight (weight per unit mass), as the weight of an object is equal to the magnitude of the gravity force acting on it. The g-force is an instance of specific force measured in units of the standard gravity (g) instead of m/s², i.e., in multiples of g (e.g., "3 g"). Type of acceleration The (mass-)specific force is not a coordinate acceleration, but rather a proper acceleration, which is the acceleration relative to free-fall. Forces, specific forces, and proper accelerations are the same in all reference frames, but coordinate accelerations are frame-dependent. For free bodies, the specific force is the cause of, and a measure of, the body's proper acceleration. The acceleration of an object free falling towards the earth depends on the reference frame (it disappears in the free-fall frame, also called the inertial frame), but any g-force "acceleration" will be present in all frames. This specific force is zero for freely-falling objects, since gravity acting alone does not produce g-forces or specific forces. Accelerometers on the surface of the Earth measure a constant 9.8 m/s² even when they are not accelerating (that is, when they do not undergo coordinate acceleration). This is because accelerometers measure the proper acceleration produced by the g-force exerted by the ground (gravity acting alone never produces g-force or specific force). Accelerometers measure specific force (proper acceleration), which is the acceleration relative to free-fall, not the "standard" acceleration that is relative to a coordinate system. Hydraulics In open channel hydraulics, specific force (F_s) has a different meaning: F_s = Q²/(gA) + zA, where Q is the discharge, g is the acceleration due to gravity, A is the cross-sectional area of flow, and z is the depth of the centroid of flow area A. See also Acceleration Proper acceleration References Physical quantities Hydraulic engineering Acceleration
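As a concrete illustration of the hydraulic formula (not from the article), the following sketch evaluates the specific force F_s = Q²/(gA) + zA for a rectangular channel, where A = by and the centroid of the flow area lies at depth z = y/2. The channel width, depths, and discharge are arbitrary assumed values, and the function name is hypothetical.

#include <stdio.h>

/* Open-channel "specific force" F_s = Q^2/(g*A) + z*A for a rectangular
   channel of width b and flow depth y: A = b*y and the centroid of the
   flow area lies at depth z = y/2 below the free surface.
   SI units throughout; F_s then comes out in m^3. */
static double specific_force(double Q, double b, double y) {
    const double g = 9.81;        /* gravitational acceleration, m/s^2 */
    double A = b * y;             /* flow area, m^2                    */
    double z = y / 2.0;           /* depth of the centroid, m          */
    return Q * Q / (g * A) + z * A;
}

int main(void) {
    /* Hypothetical example: 5 m^3/s flowing in a 2 m wide channel.
       Evaluating at two depths shows that the specific force depends on
       depth for a fixed discharge. */
    double Q = 5.0, b = 2.0;
    printf("F_s at y = 0.5 m: %.3f m^3\n", specific_force(Q, b, 0.5));
    printf("F_s at y = 1.5 m: %.3f m^3\n", specific_force(Q, b, 1.5));
    return 0;
}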
Specific force
[ "Physics", "Mathematics", "Engineering", "Environmental_science" ]
499
[ "Physical phenomena", "Hydrology", "Physical quantities", "Acceleration", "Quantity", "Classical mechanics stubs", "Physical properties", "Classical mechanics", "Physical systems", "Hydraulics", "Civil engineering", "Wikipedia categories named after physical quantities", "Hydraulic engineeri...
15,869,259
https://en.wikipedia.org/wiki/Standard%20array
In coding theory, a standard array (or Slepian array) is a q^(n−k)-by-q^k array that lists all elements of a particular vector space. Standard arrays are used to decode linear codes; i.e. to find the corresponding codeword for any received vector. Definition A standard array for an [n,k]-code is a q^(n−k)-by-q^k array where: The first row lists all codewords (with the 0 codeword on the extreme left) Each row is a coset with the coset leader in the first column The entry in the i-th row and j-th column is the sum of the i-th coset leader and the j-th codeword. For example, the binary [5,2]-code C = {00000, 01101, 10110, 11011} has a standard array with eight rows and four columns. Any such array is only one possibility for the standard array; had 00011 been chosen as the first coset leader of weight two, another standard array representing the code would have been constructed. The first row contains the 0 vector and the codewords of C (0 itself being a codeword). Also, the leftmost column contains the vectors of minimum weight, enumerating vectors of weight 1 first and then using vectors of weight 2. Each possible vector in the vector space appears exactly once. Constructing a standard array Because each possible vector can appear only once in a standard array, some care must be taken during construction. A standard array can be created as follows: List the codewords of C, starting with 0, as the first row Choose any vector of minimum weight not already in the array. Write this as the first entry of the next row. This vector is denoted the 'coset leader'. Fill out the row by adding the coset leader to the codeword at the top of each column. The sum of the i-th coset leader and the j-th codeword becomes the entry in row i, column j. Repeat steps 2 and 3 until all rows/cosets are listed and each vector appears exactly once. Adding vectors is done mod q. For example, binary codes are added mod 2 (which is equivalent to bit-wise XOR addition). For example, in the binary case, 11000 + 11011 = 00011. Note that selecting different coset leaders will create a slightly different but equivalent standard array, and will not affect results when decoding. Construction example Let C be the binary [4,2]-code, i.e. C = {0000, 1011, 0101, 1110}. To construct the standard array, we first list the codewords in a row. We then select a vector of minimum weight (in this case, weight 1) that has not been used. This vector becomes the coset leader for the second row. Following step 3, we complete the row by adding the coset leader to each codeword. We then repeat steps 2 and 3 until we have completed all rows. We stop when we have reached q^(n−k) = 4 rows. In this example we could not have chosen the vector 0001 as the coset leader of the final row, even though it meets the criteria of having minimal weight (1), because the vector was already present in the array. We could, however, have chosen it as the first coset leader and constructed a different standard array. Decoding via standard array To decode a vector using a standard array, subtract the error vector - or coset leader - from the vector received. The result will be one of the codewords in C. For example, say we are using the code C = {0000, 1011, 0101, 1110}, and have constructed the corresponding standard array, as in the example above. If we receive the vector 0110 as a message, we find that vector in the standard array. We then subtract the vector's coset leader, namely 1000, to get the result 1110. We have received the codeword 1110. Decoding via a standard array is a form of nearest neighbour decoding. 
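The construction and decoding procedure above can be sketched in a few lines of code. The following is a minimal illustration (not from the article) for the binary [4,2] code C = {0000, 1011, 0101, 1110}, with vectors stored as 4-bit integers and addition mod 2 done by XOR; all function and variable names are illustrative. Where several minimum-weight coset leaders are available, this sketch simply picks the numerically smallest, so its array may differ from the one described above in exactly the way discussed earlier, and ambiguous vectors may therefore decode to a different (but equally near) codeword.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Standard-array decoding sketch for the binary [4,2] code
   C = {0000, 1011, 0101, 1110}. Vectors are 4-bit integers; addition
   mod 2 is bit-wise XOR. */
#define N 4                 /* code length n        */
#define NUM_CODEWORDS 4     /* q^k = 2^2 columns    */
#define NUM_COSETS 4        /* q^(n-k) = 2^2 rows   */

static const uint8_t codewords[NUM_CODEWORDS] = { 0x0, 0xB, 0x5, 0xE };
static uint8_t sa[NUM_COSETS][NUM_CODEWORDS];   /* the standard array */

static int weight(uint8_t v) {          /* Hamming weight */
    int w = 0;
    while (v) { w += v & 1; v >>= 1; }
    return w;
}

static bool in_array(uint8_t v, int rows_filled) {
    for (int i = 0; i < rows_filled; i++)
        for (int j = 0; j < NUM_CODEWORDS; j++)
            if (sa[i][j] == v) return true;
    return false;
}

static void build_standard_array(void) {
    /* Row 0 is the code itself; each later row is the coset of a
       minimum-weight vector not yet present in the array. */
    for (int j = 0; j < NUM_CODEWORDS; j++) sa[0][j] = codewords[j];
    for (int i = 1; i < NUM_COSETS; i++) {
        uint8_t leader = 0;
        int best = N + 1;
        for (uint8_t v = 1; v < (1 << N); v++)
            if (!in_array(v, i) && weight(v) < best) { leader = v; best = weight(v); }
        for (int j = 0; j < NUM_CODEWORDS; j++)
            sa[i][j] = leader ^ codewords[j];   /* coset leader + codeword */
    }
}

/* Decode by locating the received vector and returning the codeword at
   the top of its column (equivalently, received XOR its coset leader). */
static uint8_t decode(uint8_t received) {
    for (int i = 0; i < NUM_COSETS; i++)
        for (int j = 0; j < NUM_CODEWORDS; j++)
            if (sa[i][j] == received) return codewords[j];
    return received;   /* unreachable: every vector appears in the array */
}

static void print_bits(uint8_t v) {
    for (int b = N - 1; b >= 0; b--) putchar((v >> b) & 1 ? '1' : '0');
}

int main(void) {
    build_standard_array();
    uint8_t received = 0x6;               /* the vector 0110 from the text */
    printf("received ");
    print_bits(received);
    printf(" -> decoded ");
    print_bits(decode(received));         /* prints 1110, as in the article */
    putchar('\n');
    return 0;
}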
In practice, decoding via a standard array requires large amounts of storage - the array must list every one of the q^n vectors in the space, so even a code with only 32 codewords needs an array with 2^n entries. Other forms of decoding, such as syndrome decoding, are more efficient. Decoding via standard array does not guarantee that all vectors are decoded correctly. If we receive the vector 1010, using the standard array above would decode the message as 1110, a codeword distance 1 away. However, 1010 is also distance 1 away from the codeword 1011. In such a case some implementations might ask for the message to be resent, or the ambiguous bit may be marked as an erasure and a following outer code may correct it. This ambiguity is another reason that different decoding methods are sometimes used. See also Linear code References Coding theory
Standard array
[ "Mathematics" ]
990
[ "Discrete mathematics", "Coding theory" ]
15,872,404
https://en.wikipedia.org/wiki/Mormon%20views%20on%20evolution
The Church of Jesus Christ of Latter-day Saints (LDS Church) takes no official position on whether or not biological evolution has occurred, nor on the validity of the modern evolutionary synthesis as a scientific theory. In the twentieth century, the First Presidency of the LDS Church published doctrinal statements on the origin of man and creation. In addition, individual leaders of the church have expressed a variety of personal opinions on evolution, many of which have affected the beliefs and perceptions of Latter-day Saints. There have been three public statements from the First Presidency (1909, 1910, 1925) and one private statement from the First Presidency (1931) about the LDS Church's view on evolution. The 1909 statement was a delayed response to the publication of On the Origin of Species by Charles Darwin. In the statement, the First Presidency affirmed their doctrine that Adam is the direct, divine offspring of God. The statement declares evolution as "the theories of men", but does not directly qualify it as untrue or evil. In response to the 1911 Brigham Young University modernism controversy, the First Presidency issued an official statement in its 1910 Christmas message that the church members should be kind to everyone regardless of differences in opinion about evolution and that proven science is accepted by the church with joy. In 1925, in response to the Scopes Trial, the First Presidency published a statement, similar in content to the 1909 statement, but with "anti-science" language removed. A private memo written in 1931 by the First Presidency to church general authorities confirmed a neutral stance on the existence of pre-Adamites and "death before the fall." It further asserted that geology, biology, and other sciences were best left to scientists (and implicitly, not theologians), and were not central to the Gospel. There are a variety of LDS Church publications that address evolution, often with neutral or opposing viewpoints. In order to address students' questions about the church's position on evolution in biology and related classes, Brigham Young University (BYU) released a library packet on evolution in 1992. This packet contains the first three official First Presidency statement as well as the "Evolution" section in the Encyclopedia of Mormonism to supplement normal course material. Statements from church presidents are mixed with some vehemently against evolution and the theories of Charles Darwin, and some willing to admit that the circumstances of earth's creation are unknown and that evolution could explain some aspects of creation. In the 1930s, church leaders Joseph Fielding Smith, B. H. Roberts, and James E. Talmage debated about the existence of pre-Adamites, eliciting a memo from the First Presidency in 1931 claiming a neutral stance on pre-Adamites. Since the publication of On the Origin of Species, some Latter-day Saint scientists have published essays or speeches to try and reconcile science and Mormon doctrine. Many of these scientists subscribe to the idea that evolution is the natural process God used to create the Earth and its inhabitants and that there are commonalities between Mormon doctrine and foundations of evolutionary biology. Debate and questioning among members of the LDS Church continues concerning evolution, religion, and the reconciliation between the two. 
Although articles from publications like BYU Studies often represent neutral or pro-evolutionary stances, LDS-sponsored publications such as the Ensign tend to publish articles with anti-evolutionary views. Studies published since 2014 have found that the majority of Latter-day Saints do not believe humans evolved over time. A 2018 study in the Journal of Contemporary Religion found that very liberal or moderate members of the LDS Church were more likely to accept evolution as their education level increased, whereas very conservative members were less likely to accept evolution as their education level increased. Another 2018 study found that over time, Latter-day Saint undergraduate attitudes towards evolution have changed from antagonistic to accepting. The researchers attributed this attitude change to more primary school exposure to evolution and a reduction in the number of anti-evolution statements from the First Presidency. Official doctrine The LDS Church has no official position on the theory of evolution or the details of "what happened on earth before Adam and Eve, including how their bodies were created." Even so, some church general authorities have made statements suggesting that, in their opinion, evolution is opposed to scriptural teaching. Apostles Joseph Fielding Smith and Bruce R. McConkie were among the most well-known advocates of this position. Other church authorities and members have made statements suggesting that, in their opinion, evolution is not in opposition to scriptural doctrine. Examples of this position have come from B. H. Roberts, James E. Talmage, and John A. Widtsoe. While maintaining its "no position" stance, the LDS Church has produced a number of official publications that have included discussion and personal statements from these various church leaders on evolution and the "origin of man." These statements generally adopt the position, as a church-approved encyclopedia entry states, "[t]he scriptures tell why man was created, but they do not tell how, though the Lord has promised that he will tell that when he comes again." First Presidency statements There have been three authoritative public statements (1909, 1910, and 1925) and one private statement (1931) given from the LDS Church's highest authority, the First Presidency, which represents the church's doctrinal position on the origin of mankind. The 1909 and 1925 statements of the First Presidency have been subsequently endorsed by church leaders such as apostle Boyd K. Packer in 1988. In February 2002, the entire 1909 First Presidency message was reprinted in the church's Ensign magazine. 1909 statement "The Origin of Man" Historically, Latter-day Saints were isolated in the western plains when The Origin of Species was published by Charles Darwin in 1859. Consequently, there was little discussion about evolution among Mormon communities. The Latter-day Saints were trying to survive and build settlements in Utah and evolution was not a prominent concern for them. George Q. Cannon of the Quorum of the Twelve responded to Darwin in 1861, stated that revelation is superior to science, but considered the possibility of evolution among animals and plants. This was and is not considered doctrine. The building of the transcontinental railroad in 1869 allowed for the Saints to gain access to outside ideas and influences. Because of this new knowledge, Mormon schools sought to combat scientific theories such as evolution with faith. 
Publications helped reaffirm church doctrine; however, views on evolution were mixed. Some believed a belief in evolution was equivalent to atheism, whereas some sought to find common ground between evolution and faith. Due to the many differing opinions that emerged, in the early 1900s the LDS Church began to officially respond to the theories that had already been discussed for nearly fifty years. The first official statement from the First Presidency on the issue of evolution was in 1909, the centennial of Darwin's birth and the 50th anniversary of the publication of On the Origin of Species. Church president Joseph F. Smith appointed a committee headed by Orson F. Whitney, a member of Quorum of the Twelve, to prepare an official statement, "basing its belief on divine revelation, ancient and modern, proclaim[ing] man to be the direct and lineal offspring of Deity." This teaching regarding the origin of man differs from traditional Christianity's doctrine of creation, referred to by some as "creationism", which consists of belief in a fiat creation. In addition, the statement declares human evolution as one of the "theories of men", but falls short of explicitly declaring it untrue or evil. It states that, "man began life as a human being, in the likeness of our heavenly father". Moreover, it states that although man begins life as a germ or embryo, it does not mean that, "[Adam] began life as anything less than a man, or less than the human germ or embryo that becomes a man" Supported by signatures from the First Presidency, the statement was published in November 1909. The statement did not define the origins of animals other than humans, nor did it venture into any more specifics regarding the origin of man. 1910 statement "Words in Season from the First Presidency" In response to continual questions from church members regarding evolution, as well as problems preceding the 1911 Brigham Young University modernism controversy, in its 1910 Christmas message, the First Presidency made reference to the church's position on science. It stated that the church is not hostile to science and that "diversity of opinion does not necessitate intolerance of spirit". The message continues by stating that proven science is accepted with joy, but theories, speculation, or anything contrary to revelation or common sense are not accepted. 1925 statement "Mormon View of Evolution" In 1925, in the midst of the Scopes Trial in Tennessee, a new First Presidency issued an official statement which reaffirmed the doctrine that Adam was the first man upon the earth and that he was created in the image of God. There is a short article in the Encyclopedia of Mormonism which is largely composed of quotes from the 1909 and 1925 statements. It states that men and women are created in the image of the "universal Father and Mother", and Adam, like Christ was a pre-existing spirit who took a body to become a "living soul". It continues by stating that because man is "endowed with divine attributes", he "is capable, by experience through ages and aeons, of evolving into a God." The official statement was initially published in Deseret News on July 18, 1925 and later published in the Improvement Era in September 1925. The 1925 statement is shorter than the 1909 statement, containing selected excerpts from the 1909 statement. "Anti-science" language was removed and the title was altered from "The Origin of Man" to "Mormon View of Evolution". 
The comment which concluded that theories of evolution are "theories of men" in the 1909 official statement was no longer included in the 1925 official statement. The First Presidency has not publicly issued an official statement on evolution since 1925. 1931 statement "First Presidency Minutes" In April 1931, the First Presidency sent out a lengthy memo to all church general authorities in response to the debate between B. H. Roberts of the Presidency of the Seventy and Joseph Fielding Smith of the Quorum of the Twelve on the existence of pre-Adamites. The memo stated the church's neutral stance on the existence of pre-Adamites. Official church publications The subject of evolution has been addressed in several official publications of the church. General conference speeches The LDS Church has published several general conference talks mentioning evolution. In the October 1984 conference, apostle Boyd K. Packer stated that "no one with reverence for God could believe that His children evolved from slime or from reptiles" as well as affirming that "those who accept the theory of evolution don't show much enthusiasm for genealogical research." In the April 2012 conference, apostle Russell M. Nelson discussed the human body stating "some people erroneously think that these marvelous physical attributes happened by chance or resulted from a big bang somewhere". He then compared this to an "explosion in a printing shop produc[ing] a dictionary". Instruction manuals Old Testament Student Seminary Manual The Old Testament Student Manual, published by the Church Educational System, contains several quotes by general authorities as well as academics from a variety of backgrounds (both members of the church and non-members) related to organic evolution and the origins of the earth. The 2003 edition states that there is no official stance on the age of the earth but that evidence for a longer process is substantial and very few people believe the earth was actually created in the space of one week. However, it also includes a quote from Joseph Fielding Smith indicating his interpretation of church doctrine as it pertains to the theory of organic evolution. He asserts that organic evolution is incompatible and inconsistent with revelations from God and that to accept it is to reject the plan of salvation. Doctrine and Covenants and Church History Seminary Teacher Manual Doctrine and Covenants mentions "the seven thousand years of [the earth's] continuance, or its temporal existence", which has been interpreted by Joseph Fielding Smith and Bruce R. McConkie as a statement suggesting that the earth is no more than about six thousand years old (the seventh thousand-year period being the future millennium). Speciation generally occurs over very large spans of time. However, in relation to this verse, the manual for seminary teachers explains: "It may be helpful to explain that the 7,000 years refers to the time since the Fall of Adam and Eve. It is not referring to the actual age of the earth including the periods of creation." BYU Library packet on evolution Since 1992 at the LDS-owned universities, a packet of authoritative statements approved by the BYU Board of Trustees (composed of the First Presidency, other general authorities, and general organizational leaders) has been provided to students in classes when discussing the topic of organic evolution. 
The packet was assembled due to the large number of questions students had about evolution and the origins of man and is intended to be distributed along with other course material. The packet includes the first three Official First Presidency statements on the origin of man as well as the "Evolution" section in the Encyclopedia of Mormonism which includes elements from the 1909 and 1925 statements as well as the 1931 "First Presidency Minutes". Official magazines Ensign In 1982, the Ensign, an official periodical of the church, published an article entitled "Christ and the Creation" by Bruce R. McConkie, which stated that "[m]ortality and procreation and death all had their beginnings with the Fall." In an earlier edition of the Ensign published in 1980, McConkie stated that "the greatest heresy in the sectarian world ... is that God is a spirit nothingness which fills the immensity of space, and that creation came through evolutionary processes." New Era A July 2016 article for young adults in the New Era acknowledged questions about how the age of the earth, dinosaurs, and evolution fit with church teachings, stating "it does all fit together, but there are still a lot of questions." The article offered no further explanation to how science and LDS teachings fit together, and stated "nothing that science reveals can disprove your faith" and told youth "not to get worried in the meantime." A few months later in the same magazine, the church published an anonymously authored article stating that "the Church has no official position of the theory of evolution." The article continues by stating that the theory of organic evolution should be left for scientific study and that no details about the what happened before Adam and Eve and how their bodies were created have not been revealed, but the origin of man is clear from the teaching of the church. A much earlier anonymously-authored article from 2004 did not attempt to reconcile church teachings and scientific views of evolution, but stated that not having the answers does not discredit the existence of God, and that God will not reveal more unto us until we prove our faith. An example was provided of how the author avoided a classroom debate on evolution by stating that they knew God existed and created us. The article also quoted past church president Gordon B. Hinckley giving his own example of how he chose to drop the question and not let it bother him. Subsequent letters from youth stated that the youth viewed themselves as against evolution and supportive of intelligent design. A previous article in the New Era also showed youth viewing evolution as an antagonistic idea to their faith and becoming upset when it was taught and another featured a church seventy using scientific arguments in an attempt to disprove evolutionary natural selection and adaptation. Improvement Era The Improvement Era was an official periodical of the church between 1897 and 1970. In the April 1910 edition in the "Priesthood Quorum's Table" section of that periodical, Genesis is cited as well as other scriptures from Genesis and the Pearl of Great Price. The article states that it is unclear whether the mortal bodies of man evolved through natural processes, whether Adam and Eve where transplanted to Earth from another place, or whether they were born on Earth in mortality. The article states that those questions are not fully answered in the church's current revelation and scripture. 
The article cites the answer is attributed to the church's First Presidency. Canonized scriptures Some verses in the standard works raise questions about the compatibility of scriptural teachings and scientists' current understanding of organic evolution. One such verse, in Doctrine and Covenants describes the "temporal existence" of the earth as 7,000 years old. Other scriptural verses suggest that no organisms died before the fall of Adam. In the Book of Mormon, the prophet Lehi teaches: "If Adam had not transgressed he would not have fallen, but he would have remained in the garden of Eden. And all things which were created must have remained in the same state in which they were after they were created; and they must have remained forever, and had no end". In Moses in the Pearl of Great Price, the prophet Enoch states: "Because that Adam fell, we are; and by his fall came death; and we are made partakers of misery and woe." Bible Dictionary In the Bible Dictionary of the LDS Church, the entry for "Fall of Adam" previously included the following statement: "Before the fall, Adam and Eve had physical bodies but no blood. There was no sin, no death, and no children among any of the earthly creations." Under the entry "Flesh", it is written: "Since flesh often means mortality, Adam is spoken of as the 'first flesh' upon the earth, meaning he was the first mortal on the earth, all things being created in a non-mortal condition, and becoming mortal through the fall of Adam. As noted above, the Bible Dictionary is published by the LDS Church, and its preface states: "It [the Bible Dictionary] is not intended as an official or revealed endorsement by the church of the doctrinal, historical, cultural, and other matters set forth." Statements from church presidents Every statement by an LDS Church president does not necessarily constitute official church doctrine, but a statement from him is generally regarded by church membership as authoritative and usually represents doctrine. Official church doctrine is however presented and taught unitedly by the entire First Presidency, usually released in an official letter or other authorized publication. Brigham Young Brigham Young, the church's second president, stated that the LDS Church differs from other Christian churches, because they do not seek to clash their ideas with scientific theory. He continued that whether God began with an empty Earth, whether he created out of nothing, whether he made it in six days or millions of years will remain a mystery unless God reveals something about it. Young made the following statement two years later, stating the injustice of the fact that the theories of scientists are taught in school, but not the principles of the gospel. He wrote that for this purpose, he created Brigham Young Academy, so that God's revelation could be taught in schools with books written by members of the LDS Church. Young also stated that he was, "resolutely and uncompromisingly opposed" to "the theories...of Darwin." John Taylor John Taylor was the second church president to comment directly on Darwinian theory. In his 1882 book Mediation and Atonement, Taylor stated that nature and creation is governed by the laws of man and organisms exist in the same form since creation, as contradicted by the ideas of evolutionists. Taylor continued that man did not originate from chaos of matter, but from "the faculties and powers of a God". Joseph F. Smith Soon after the First Presidency's 1909 statement, Joseph F. 
Smith professed in an editorial that "the Church itself has no philosophy about the modus operandi employed by the Lord in His creation of the world." However, in the very same month (and in the wake of the evolution controversy that had recently ensued at Brigham Young University), Smith published and signed a statement wherein he explained some of the conflicts between revealed religion and the theories of evolution. He cited the 1911 Brigham Young University modernism controversy, stating that evolution is in conflict with scriptures and modern revelation. He continues that the church holds that "divine revelation" must be the "standard" and is "truth". Smith mentions that "science has changed from age to age", and "philosophic theories of life" have their place, but do not belong in LDS Church school classes and anywhere else when they contradict the word of God. A 1910 editorial in a church magazine that enumerates various possibilities for creation is usually attributed to Smith or to the First Presidency. Included in the listed possibilities were the ideas that Adam and Eve: (1) "evolved in natural processes to present perfection"; (2) were "transplanted [to earth] from another sphere"; or (3) were "born here ... as other mortals have been." Smith authored an editorial the next year in the church magazine discouraging the discussion of evolution in church school stating that members of the church believe the theory of evolution was "more or less a fallacy." David O. McKay In a 1952 speech to students at BYU, McKay used the theory of evolution as an example while suggesting that science can "leave [a student] with his soul unanchored." He stated that a professor that denies "divine agency in creation" imposes on the student that life was created by chance. McKay insisted that students should be led to a "counterbalancing thought" that "God is the Creator of the earth", "the Father of our souls and spirits", and "the purpose of creation is theirs (God and Jesus Christ)." In the April 1968 general conference, McKay's son, David, read a message on his father's behalf that was an edited version of the 1952 speech, including the omission of the word "beautiful" when describing the theory of evolution. In 1954, McKay quoted the Old Testament while affirming to members of the BYU faculty that living things only reproduce "after their kind". He quoted Genesis which states, "Let the earth bring forth the living creatures after his kind, cattle and creeping things, and the beast of the earth after his kind." Spencer W. Kimball At a 1975 church women's conference, church president Spencer W. Kimball quoted, "And, I God created man in mine own image, and in the image of mine Only Begotten created I him; male and female created I them." (Kimball added that "the story of the rib, of course, is figurative.") Kimball continued, "we don't know exactly how [Adam and Eve's] coming into this world happened, and when we're able to understand it the Lord will tell us." Ezra Taft Benson Prior to becoming president of the LDS Church, Ezra Taft Benson gave an April 1981 general conference address in which he stated that "the theory of man’s development from lower forms of life" is a "false idea". In 1988, after becoming president of the church, Benson published a book counseling members of the church to use the Book of Mormon to counter the theories of evolution. He wrote that "we have not been using the Book of Mormon as we should. 
Our homes are not as strong unless we are using it to bring our children to Christ. Our families may be corrupted by worldly trends and teachings unless we know how to use the book to expose and combat the falsehoods in socialism, organic evolution, rationalism, humanism, etc." In 1988, Benson published another book that included his earlier warnings about the "deceptions" of Charles Darwin. He wrote that educational institutions serve to mislead youth, which explains—he noted—why the church advises that youth attend church institutions, allowing parents to closely observe the education of their children and clear up "the deceptions of men like . . . Charles Darwin. Gordon B. Hinckley In a 1997 speech at an Institute of Religion in Ogden, Utah, church president Gordon B. Hinckley said: "People ask me every now and again if I believe in evolution. I tell them I am not concerned with organic evolution. I do not worry about it. I passed through that argument long ago." wherein he contrasts "organic evolution" with the evolution and improvement of individuals: In the late 1990s, Hinckley recalled his university studies of anthropology and geology to reporter Larry A. Witham: "'Studied all about it. Didn't worry me then. Doesn't worry me now'", insisting that the church only requires the belief that Adam was the first man of '"what we would call the human race."' In 2004, an official church magazine printed a quote from Hinckley from a 1983 speech where he expressed a similar sentiment. Statements from apostles In the early 1900s, many general authorities, specifically those with science backgrounds, subscribed to the idea of an old earth, yet most of them rejected Darwinism. Joseph Fielding Smith and other general authorities were against the old earth theory as well as Darwin's theory of evolution. Individual leaders of the church have expressed a variety of personal opinions on biological evolution and as such these do not necessarily constitute official church doctrine. Statements from the 1930s Roberts–Smith–Talmage dispute In 1930, B. H. Roberts, the presiding member of the First Council of the Seventy, was assigned by the First Presidency to create a study manual for the Melchizedek priesthood holders of the church. Entitled The Truth, The Way, The Life, the draft of the manual that was submitted to the First Presidency and the Quorum of the Twelve Apostles for approval stated that death had been occurring on Earth for millions of years prior to the fall of Adam and that human-like pre-Adamites had lived on the Earth. On 5 April 1930, Joseph Fielding Smith, a junior member of the Quorum of the Twelve Apostles and the son of a late church president, "vigorously promulgated [the] opposite point of view" in a speech that was published in a church magazine. In his widely read speech, Smith taught as doctrine that there had been no death on earth until after the fall of Adam and that there were no "pre-Adamites". In 1931, both Roberts and Smith were permitted to present their views to the First Presidency and the Quorum of the Twelve. After hearing both sides, the First Presidency issued a memo to the general authorities of the church which stated while they agree with the idea that "Adam is the primal parent of our race", there is no advantage to continuing the discussion and that church members should focus on "[bearing] the message of the restored gospel to the people of the world" and that those sciences do not have anything to do with, "the salvation of the souls of mankind". 
They stated that continuation of the discussion would only lead to "confusion, division, and misunderstanding if continued further." Another of the apostles, geologist James E. Talmage, pointed out that Smith's views could be misinterpreted as the church's official position, since Smith's views were widely circulated in a church magazine but Roberts's views were limited to an internal church document. As a result, the First Presidency gave permission to Talmage to give a speech promoting views that were contrary to Smith's. In his speech on August 9, 1931, in the Salt Lake Tabernacle, Talmage taught the same principles that Roberts had originally outlined in his draft manual. Over Smith's objections, the First Presidency authorized a church publication of Talmage's speech in pamphlet form. In 1965, Talmage's speech was reprinted again by the church in an official church magazine. As Talmage points out in the article, "The outstanding point of difference ... is the point of time which man in some state has lived on this planet." With regards to evolution in general, Talmage challenged many of its aspects in the same speech. He said that he does not believe Adam descended from cavemen or lower forms of men, but is divinely created. He did, however, state that were it true that Adam evolved from lower form, it only seems likely that men will continue to evolve into something higher as a part of eternal progression. He continued by stating that, "evolution is true so far as it means development, and progress, and advancement in all the works of God", and that the scriptures, "should not be discredited by theories of men; they cannot be discredited by fact and truth." Talmage considered the possibility of pre-Adamites; however, he denied speciation and evolution. Roberts died in 1933 and The Truth, The Way, The Life remained unpublished until 1994, when it was published by an independent publisher. Although it is apparent that Roberts and Smith may have had differing views on whether there was death before the fall of Adam, it is evident that they may have had similar views against organic evolution as the explanation for the origin of man. For example, Roberts wrote that "the theory of evolution as advocated by many modern scientists lies stranded upon the shore of idle speculation. There is one other objection to be urged against the theory of evolution before leaving it; it is contrary to the revelations of God." Roberts further criticized the theories of evolution by stating that Darwin's claims of evolution are contrary to the experience and knowledge of man, because the law of nature requires that every organism reproduces of its own kind, and while variation may occur, changes usually revert due to extinction, chromosomal infertility, or by reversion to original species. Joseph Fielding Smith In 1954, when he was President of the Quorum of the Twelve Apostles, Smith wrote at length about his personal views on evolution in his book Man, His Origin and Destiny stating that it was a destructive and contaminating influence and that "If the Bible does not kill Evolution, Evolution will kill the Bible." He further stated that "There is not and cannot be, any compromise between the Gospel of Jesus Christ and the theories of evolution" and that "It is not possible for a logical mind to hold both Bible teaching and evolutionary teaching at the same time" since "If you accept [the scriptures] you cannot accept organic evolution." 
In response to an inquiry about the book from the head of the University of Utah Geology Department, church president David O. McKay affirmed that "the Church has officially taken no position" on evolution, Smith's book "is not approved by the Church", and that the book is entirely Smith's "views for which he alone is responsible". Smith also produced personal statements on evolution in his Doctrines of Salvation including that "If evolution is true, the church is false" since "If life began on Earth as advocated by Darwin ... then the doctrines of the church are false". Smith stated about his views on evolution, "No Adam, no fall; no fall, no atonement; no atonement, no savior." Smith also asserted that "There was no death of any living creature before the fall of Adam! Adam’s mission was to bring to pass the fall and it came upon the earth and living things throughout all nature. Anything contrary to this doctrine is diametrically opposed to the doctrines revealed to the Church! If there was any creature increasing by propagation before the fall, then throw away the Book of Mormon, deny your faith, the Book of Abraham and the revelations in the Doctrine and Covenants! Our scriptures most emphatically tell us that death came through the fall, and has passed upon all creatures including the earth itself. For this earth of ours was pronounced good when the Lord finished it. It became fallen and subject to death as did all things upon its face, through the transgression of Adam." Bruce R. McConkie Bruce R. McConkie was an influential church leader and author on the topic of evolution, having been published several times speaking strongly on the topic. He stated his view in 1982 at BYU that there was no death in the world for Adam or for any form of life before the fall, and that trying to reconcile religion and organic evolution was a false and devilish heresy among church members. In 1984, McConkie disparaged the "evolutionary fantasies of biologists" and stated that yet to be revealed "doctrines will completely destroy the whole theory of organic evolution" and stated that any religion that assumes humans are a product of evolution cannot offer salvation since true believers know humans were made in a state in which there was no procreation or death. In his popular and controversial reference book Mormon Doctrine, McConkie devoted ten pages to his entry on evolution. After canvassing statements of past church leaders, the standard works, and the 1909 First Presidency statement, McConkie concluded that "[t]here is no harmony between the truths of revealed religion and the theories of organic evolution." The evolution entry in Mormon Doctrine quotes extensively from Smith's Man, His Origin and Destiny. McConkie characterized the intellect of those Latter-day Saints who believe in evolution while simultaneously having knowledge of church doctrines on life and creation as "scrubby and grovelling". McConkie included a disclaimer in Mormon Doctrine stating that he alone was responsible for the doctrinal and scriptural interpretations. The 1958 edition falsely stated that the "official doctrine of the Church" asserted a "falsity of the theory of organic evolution." McConkie also wrote that "there were no pre-Adamites," that Adam was not the "end-product of evolution," and that there "was no death in the world, either for man or for any form of life until after the Fall of Adam." Russell M. Nelson Prior to becoming president of the LDS Church, Russell M. 
Nelson stated in a 2007 interview with the Pew Research Center that "to think that man evolved from one species to another is, to me, incomprehensible. Man has always been man. Dogs have always been dogs. Monkeys have always been monkeys. It's just the way genetics works." He also stated in 1987 in a church magazine article that he found the theory of evolution unbelievable. Academic The earliest instance in which science and evolution were used to support LDS doctrine occurred in a series of six published articles in 1895, "Theosophy and Mormonism" by Nels L. Nelson. These articles were published in 1904 in Scientific Aspects of Mormonism. Nelson used the ideas of evolution to consider the spiritual and physical development of God and humans. Nelson's view of evolution is a spiritual one, with God making deliberate use of scientific processes, rather than a random, accidental process. Mormon philosopher William Henry Chamberlin's Essay on Nature (1915) and Frederick J. Pack's Science and Belief in God (1924) defended the theory of evolution; both attempted to reconcile religion and evolution. In that work, Pack states, "no warfare exists between 'Mormonism' and true science." In 1978, A. Lester Allen, dean of the College of Biology and Agriculture at BYU, tried to present an approach to evolution from the perspective of an LDS biologist. Allen established seven doctrinal landmarks that are fundamental beliefs of the LDS Church, but considered that humans' limited perspective and limited perception of reality mean that humans may not be able to understand the circumstances surrounding the creation of Adam and Eve and the existence of the Garden of Eden using only their mortal senses. Allen also stated that besides the core doctrine of the LDS Church relating to the existence of Adam, Eve, and the Garden of Eden, all hypotheses are fair game for "responsible scientists" to consider and investigate. In 2018, at a Mormon studies conference at Utah Valley University, BYU professor and evolutionary biologist Steven L. Peck explained that Mormons believe in "eternal progression" and that the universe was organized from pre-existing matter, which are ideas also held by evolutionary biologists. Views in the early 2000s There is an ongoing discussion and questioning among members of the LDS Church concerning the religion, evolution, and the reconciliation between the two. There are a number of current Mormon-related publications with articles on evolution. According to scholar Michael R. Ash, a great number of church members read the Ensign, which generally publishes articles with unfavorable views on evolution. Other publications like BYU Studies, FARMS Review of Books, Dialogue, and Sunstone have published pro-evolution or neutral articles. The official stance of the church on evolution is neutral, though scholar Joseph Baker argues that the church's position is rather "skeptically neutral", because the church continues to endorse its 1910 statement. There are many church members, including scientists, who accept evolution as a legitimate scientific theory. In a 2014 U.S. Religious Landscape Study, researchers found that 52% of Mormons believe that humans always existed in their present form while 42% believe that humans evolved over time. More specifically, 29% of Mormons believe that evolution is guided by a supreme being, while 11% believe that evolution occurred due to natural processes. In a 2017 study, the Next Mormons Survey, professor Benjamin Knoll surveyed Mormons about their beliefs regarding evolution. 
Of those surveyed, 74% responded that they were confident or had faith that God created Adam and Eve in the last 10,000 years and that Adam and Eve did not evolve from other forms of life. When asked whether evolution is the best explanation for how God brought about life on Earth, 33% of Mormons were confident or had faith that this was not true. After analyzing the results Knoll suggested that 37% of Mormons completely reject God-guided evolution. Another 37% accept God-guided evolution for life on Earth, but feel that Adam and Eve were an exception and were physically created by God. The other 26% were split between the belief that Adam and Eve may have been created through the process of evolution and the disbelief in God-guided evolution and the existence of a physical Adam and Eve. Moreover, unlike other studies conducted which have found a correlation between education level and belief in evolution, Next Mormons Survey found no correlation between education level and belief in evolution among Mormons. In contrast, a 2018 study of American Mormons in the Journal of Contemporary Religion found that education was a defining factor of evolution acceptance. This is, however, only true when accounting for political ideology as well. The study determined that among those with moderate or liberal political ideology, the probability of accepting evolution increases with increasing education level. The correlation between evolution acceptance and education level was even higher among liberals. The probability of accepting evolution among very liberal Mormons with an 8th grade or less education was 9%, while the probability of accepting evolution among very liberal Mormons with a post-graduate degree increases to 82%. The findings were different from conservative Mormons who showed a decrease in probability of accepting evolution as their education level increased. A very conservative Mormon with an 8th grade education or less had a 35% probability of accepting evolution, whereas a very conservative Mormon with a post-graduate degree was 20% likely to accept evolution. Baker suggests that low rates of acceptance of evolution of Mormons may be related to the high rates of political conservatism among Mormons. A 2018 study in PLOS One researched the attitudes toward evolution of Latter-day Saint undergraduates. The study revealed that there has been a recent shift of attitude towards evolution among LDS undergraduates. These attitudes have shifted from antagonistic to accepting. The researchers cited examples of more acceptance of fossil and geological records, as well as an acceptance of the old age of the earth. The researchers attributed this attitude change to several factors including primary-school exposure to evolution and a reduction in the number of anti-evolution statements from the First Presidency. See also William Henry Chamberlin (philosopher) Ralph Vary Chamberlin Relationship between religion and science Ahmadiyya views on evolution Evolution and the Roman Catholic Church Jainism and non-creationism Jewish views on evolution Hindu views on evolution Issues in Science and Religion Notes References Further reading . Christian creationism Christianity and evolution Evolution Evolution Evolution Evolution Religion and science Creationism Evolution and religion
Mormon views on evolution
[ "Biology" ]
8,191
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
15,872,649
https://en.wikipedia.org/wiki/Zubenhakrabi
Zubenhakrabi (or Zuben Hakrabi) is the traditional name for several stars in the constellation Libra. It can refer to: γ Lib in Bode's small star atlas, Vorstellung der Gestirne. η Lib in Burritt's star map. ν Lib in Bečvář's star catalogue. σ Lib in Bayer's Uranometria. υ Lib in a Japanese guidebook of the constellations; this last usage is probably a typographical error. Libra (constellation)
Zubenhakrabi
[ "Astronomy" ]
117
[ "Libra (constellation)", "Constellations" ]
15,872,993
https://en.wikipedia.org/wiki/Thermococcus%20litoralis
Thermococcus litoralis (T. litoralis) is a species of Archaea that is found around deep-sea hydrothermal vents as well as shallow submarine thermal springs and oil wells. It is an anaerobic organotroph hyperthermophile that is between in diameter. Like the other species in the order thermococcales, T. litoralis is an irregular hyperthermophile coccus that grows between . Unlike many other thermococci, T. litoralis is non-motile. Its cell wall consists only of a single S-layer that does not form hexagonal lattices. Additionally, while many thermococcales obligately use sulfur as an electron acceptor in metabolism, T. litoralis only needs sulfur to help stimulate growth, and can live without it. T. litoralis has recently been popularized by the scientific community for its ability to produce an alternative DNA polymerase to the commonly used Taq polymerase. The T. litoralis polymerase, dubbed the vent polymerase, has been shown to have a lower error rate than Taq due to its proofreading 3’–5’ exonuclease abilities, but higher than Pfu polymerase. DNA polymerase The DNA polymerase of Thermococcus litoralis is stable at high temperatures, with a half-life of eight hours at and two hours at . It also has a proofreading activity that is able to reduce mutation frequencies to a level 2–4 times lower than most non-proofreading DNA polymerases. Habitat and ecology T. litoralis grows near shallow and deep sea hydrothermal vents in extremely hot water. The optimal growth temperature for T. litoralis is 85–88 °C. It also prefers slightly acidic waters, growing between pH 4.0 to 8.0 with the optimal pH between 6.0–6.4. Unlike many other hyperthermophiles, T. litoralis is only facultatively dependent on sulfur as a final electron acceptor in fermentation, producing hydrogen gas in its absence and hydrogen sulfide when present. Additionally, T. litoralis has been shown to produce an exopolysaccharide (EPS) that could possibly help it form a biofilm. It is made of mannose, sulfites, and phosphorus. Physiology T. litoralis can utilize pyruvate, maltose, and amino acids as energy sources. In a laboratory setting, T. litoralis must be supplied with amino acids in order to grow at non-reduced rates. The only amino acids it does not require are asparagine, glutamine, alanine, and glutamate. These amino acids may not be vital for T. litoralis because asparagine and glutamine tend to deaminate at high temperatures found around hydrothermic vents and alanine and glutamate can usually be produced by other hyperthermophilic archaea. The main carbon source for T. litoralis seems to be maltose, which can be brought into the cell via a maltose-trehalose ABC transporter. T. litoralis has a specialized glycolytic pathway called the modified Embden–Meyerhoff (EM) pathway. One way the modified EM pathway in T. litoralis deviates from the common EM pathway is that the modified version contains an ADP dependent hexose kinase and PFK instead of an ATP dependent versions of the enzymes. Novel strains New DNA analysis has shown several isolates of T. litoralis, MW and Z-1614, which are most likely new strains. MW and Z-1614 were confirmed to be strains of T. litoralis through DNA-DNA hybridization, C–G ratios (38–41 mol%), and immunoblotting analyses. They slightly differ in morphology from the previously isolated T. litoralis in that they all have flagella. Through the same processes it has been shown that the previously discovered Caldococcus litoralis was actually T. litoralis. The genome for T. 
litoralis has yet to be fully sequenced. References Further reading External links Type strain of Thermococcus litoralis at BacDive - the Bacterial Diversity Metadatabase Euryarchaeota Organisms living on hydrothermal vents Archaea described in 2001
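The half-lives quoted above for the vent polymerase (eight hours at the lower temperature, two hours at the higher one) can be turned into a rough estimate of how much enzyme activity survives a run of thermal cycling. The sketch below is a minimal illustration assuming simple first-order (exponential) decay; the cycle count and per-cycle denaturation time are hypothetical and not taken from the article.

```python
# Rough estimate of how much Vent (T. litoralis) polymerase activity survives
# thermal cycling, using the half-lives quoted above (eight hours at the lower
# denaturation temperature, two hours at the higher one). The cycling protocol
# below (cycle count, seconds of denaturation per cycle) is purely hypothetical.

def remaining_activity(half_life_hours: float, exposure_hours: float) -> float:
    """Fraction of enzyme activity left after first-order thermal decay."""
    return 0.5 ** (exposure_hours / half_life_hours)

cycles = 30                    # hypothetical number of PCR cycles
denaturation_s = 30            # hypothetical denaturation time per cycle (seconds)
total_hours = cycles * denaturation_s / 3600.0

for half_life in (8.0, 2.0):   # half-lives stated in the article
    frac = remaining_activity(half_life, total_hours)
    print(f"half-life {half_life} h: {frac:.1%} of activity left after {total_hours:.2f} h")
```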
Thermococcus litoralis
[ "Biology" ]
918
[ "Organisms by adaptation", "Organisms living on hydrothermal vents", "Organisms by habitat" ]
15,874,481
https://en.wikipedia.org/wiki/Sulfadoxine
Sulfadoxine (also spelled sulphadoxine) is an ultra-long-lasting sulfonamide used in combination with pyrimethamine to treat malaria. It is also used to prevent malaria but due to high levels of sulphadoxine-pyrimethamine resistance, this use has become less common. It is also used, usually in combination with other drugs, to treat or prevent various infections in livestock. Mechanism of action Sulfadoxine competitively inhibits dihydropteroate synthase, interfering with folate synthesis. See also Sulfadoxine/pyrimethamine References 4-Aminophenyl compounds Ethers Pyrimidines Sulfonamide antibiotics Dihydropteroate synthetase inhibitors Antimalarial agents
Sulfadoxine
[ "Chemistry" ]
169
[ "Organic compounds", "Functional groups", "Ethers" ]
15,875,142
https://en.wikipedia.org/wiki/Fit-PC
The fit-PC is a small, light, fan-less nettop computer manufactured by the Israeli company CompuLab. Many fit-PC models are available. fit-PC 1.0 was introduced in July 2007, fit-PC Slim was introduced in September 2008, fit-PC 2 was introduced in May 2009, fit-PC 3 was introduced in early 2012, and fit-PC 4 was introduced in spring 2014. The device is power-efficient (fit-PC 1 drew about 5 W) and therefore considered to be a green computing project, capable of using open source software and creating minimal electronic waste. Current models fit-PC2 On February 19, 2009, Compulab announced the fit-PC2, which is "a major upgrade to the fit-PC product line". Detailed specifications for the fit-PC2 include an Intel Atom Z5xx Silverthorne processor (1.1/1.6/2.0 GHz options), up to 2 GB of RAM, a 160 GB SATA hard drive, Gigabit LAN and more. The fit-PC2 is also capable of HD video playback. Its declared power consumption is only 6 W, and according to the manufacturer, it saves 96% of the power used by a standard desktop. fit-PC2 is the most power-efficient PC on the Energy Star list. The fit-PC2 is based on the GMA 500 (Graphics Media Accelerator). Unfortunately, the open-source driver included in Linux kernel 2.6.39 does not support VA-API video or OpenGL/3D acceleration. The fit-PC2 is being phased out and replaced by the fitlet, which was designed as the successor to the groundbreaking (and still popular) fit-PC2. fit-PC2i On December 2, 2009, Compulab announced the fit-PC2i, a fit-PC2 variation targeting networking and industrial applications. fit-PC2i adds a second Gbit Ethernet port, Wake-on-LAN, S/PDIF output and an RS-232 port, has two fewer USB ports, and no IR. fit-PC3 The fit-PC3 was released in early 2012. See the fit-PC3 article. fit-PC4 The fit-PC4 was released in spring 2014. fitlet The fitlet was announced on January 14, 2015. It has 3 CPU/SoC variations and 5 feature variations, though only 7 models have been announced so far. Obsolete models fit-PC Slim On September 16, 2008, Compulab announced the Fit-PC Slim, which at 11 x 10 x 3 cm is smaller than fit-PC 1.0. Hardware fit-PC Slim uses a 500 MHz AMD Geode LX800 processor and has 512 MB of soldered-on RAM. The computer includes a VGA output, a serial port with a custom connector, Ethernet, b/g WLAN, and 3 USB ports (2 on the front panel). The system has an upgradeable 2.5" 60 GB ATA hard drive. Software fit-PC Slim has a General Software BIOS supporting PXE and booting from a USB CD-ROM or USB thumb drive. It is pre-installed with either Windows Vista or with Ubuntu 8.10 and Gentoo Linux 2008.0. Windows Embedded can also be used, or be pre-installed on a FlowDrive. Availability The fit-PC Slim end-of-life was announced on 19 June 2009 with the general availability of fit-PC2. fit-PC 1.0 fit-PC 1.0 is an earlier model with the following differences: it is limited to 256 MB of RAM, has no Wi-Fi, has dual 100BaseT Ethernet, a larger form factor (12 x 11.6 x 4 cm), only 2 USB ports, an upgradeable hard disk, no power button or indicator LEDs, and a 5 V power supply. See also Trim-Slice, an ARM mini-computer also made by CompuLab Industrial PC Media center (disambiguation) Media PC Nettop References External links fit-PC website Compulab website fit-PC Australia website fit-PC2 Users forum fit-PC US Website Computers and the environment Israeli brands Linux-based devices Mini PC Products introduced in 2007
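The manufacturer's figures quoted above (a declared draw of 6 W and a claimed 96% saving over a standard desktop) can be sanity-checked with a little arithmetic. The sketch below is only an illustration: the implied desktop wattage follows from the two quoted numbers, while the usage hours and electricity price are assumptions, not figures from the article.

```python
# Back-of-the-envelope check of the manufacturer's claim quoted above: a 6 W
# fit-PC2 that saves 96% of the power of a "standard desktop" implies a desktop
# drawing about 6 / (1 - 0.96) = 150 W. The usage hours and electricity price
# below are assumptions for illustration only.

FITPC_WATTS = 6.0
SAVINGS = 0.96
implied_desktop_watts = FITPC_WATTS / (1.0 - SAVINGS)

hours_per_year = 8 * 365        # assumed: 8 hours of use per day
price_per_kwh = 0.30            # assumed electricity price (currency units per kWh)

def annual_kwh(watts: float) -> float:
    return watts * hours_per_year / 1000.0

saved_kwh = annual_kwh(implied_desktop_watts) - annual_kwh(FITPC_WATTS)
print(f"Implied desktop draw: {implied_desktop_watts:.0f} W")
print(f"Energy saved per year: {saved_kwh:.0f} kWh (~{saved_kwh * price_per_kwh:.0f} in cost)")
```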
Fit-PC
[ "Technology" ]
867
[ "Computers and the environment", "Computing and society", "Computers" ]
15,875,167
https://en.wikipedia.org/wiki/Pickering%20emulsion
A Ramsden emulsion, sometimes named Pickering emulsion, is an emulsion that is stabilized by solid particles (for example colloidal silica) which adsorb onto the interface between the water and oil phases. Typically, the emulsions are either water-in-oil or oil-in-water emulsions, but other more complex systems such as water-in-water, oil-in-oil, water-in-oil-in-water, and oil-in-water-in-oil also exist. Pickering emulsions were named after S.U. Pickering, who described the phenomenon in 1907, although the effect was first recognized by Walter Ramsden in 1903. Overview If oil and water are mixed and small oil droplets are formed and dispersed throughout the water (oil-in-water emulsion), eventually the droplets will coalesce to decrease the amount of energy in the system. However, if solid particles are added to the mixture, they will bind to the surface of the interface and prevent the droplets from coalescing, making the emulsion more stable. Particle properties such as hydrophobicity, shape, and size, as well as the electrolyte concentration of the continuous phase and the volume ratio of the two phases, can affect the stability of the emulsion. The particle's contact angle at the droplet surface is a measure of the hydrophobicity of the particle. If the contact angle of the particle to the interface is low, the particle will be mostly wetted by the droplet and therefore will not be likely to prevent coalescence of the droplets. Particles that are partially hydrophobic are better stabilizers because they are partially wettable by both liquids and therefore bind better to the surface of the droplets. The optimal contact angle for a stable emulsion is achieved when the particle is equally wetted by the two phases (i.e. 90° contact angle). The stabilization energy, that is, the energy required to detach a particle from the interface, is given by E = πr²γ(1 − |cos θ|)², where r is the particle radius, γ is the interfacial tension, and θ is the contact angle of the particle with the interface. When the contact angle is approximately 90°, this detachment energy is at its maximum, so the particles are held most strongly at the interface and the emulsion is most stable. Generally, the phase that preferentially wets the particle will be the continuous phase in the emulsion system. The most common type of Ramsden emulsion is the oil-in-water emulsion, due to the hydrophilicity of most organic particles. One example of a Ramsden-stabilized emulsion is homogenized milk. The milk protein (casein) units are adsorbed at the surface of the milk fat globules and act as surfactants. The casein replaces the milkfat globule membrane, which is damaged during homogenization. Other examples of emulsions in which Ramsden particles may be the stabilizing species include detergents, low-fat chocolates, mayonnaises and margarines. Ramsden emulsions have gained increased attention and research interest over the last 20 years, as the use of traditional surfactants has been questioned on environmental, health and cost grounds. Synthetic nanoparticles as Ramsden emulsion stabilizers with well-defined sizes and compositions have been the primary particles of interest until recently, when natural organic particles also began to gain attention. They are believed to have advantages such as cost-efficiency and degradability, and are derived from renewable resources. Pickering emulsions find applications for enhanced oil recovery or water remediation. 
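As a rough illustration of the detachment energy given above, the sketch below evaluates E = πr²γ(1 − |cos θ|)² for a small colloidal particle and compares it with the thermal energy kT; the particle radius and interfacial tension are illustrative values, not figures from the article. The point is that even for nanometre-scale particles the energy holding a particle at the interface near 90° amounts to many thousands of kT, which is why adsorption is effectively irreversible.

```python
# Illustrative evaluation of the particle-detachment energy discussed above,
# E = pi * r^2 * gamma * (1 - |cos(theta)|)^2, compared with thermal energy kT.
# The particle radius and interfacial tension below are illustrative values,
# not taken from the article.

import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
T = 298.15                # room temperature, K

def detachment_energy(radius_m: float, gamma_N_per_m: float, theta_deg: float) -> float:
    """Energy needed to pull a spherical particle off an oil-water interface."""
    cos_t = math.cos(math.radians(theta_deg))
    return math.pi * radius_m**2 * gamma_N_per_m * (1.0 - abs(cos_t))**2

radius = 50e-9            # 50 nm colloidal particle (illustrative)
gamma = 0.05              # ~50 mN/m oil-water interfacial tension (illustrative)

for theta in (20, 60, 90, 160):
    e = detachment_energy(radius, gamma, theta)
    print(f"theta = {theta:3d} deg: E = {e / (K_B * T):10.0f} kT")
```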
Certain Pickering emulsions remain stable even under gastric conditions and show an extraordinary resistance against gastric lipolysis, facilitating their use for controlled lipid digestion and satiation or oral delivery systems. Additionally, it has been demonstrated that the stability of the Ramsden emulsions can be improved by the use of amphiphilic "Janus particles", namely particles that have one hydrophobic and one hydrophilic side, due to the higher adsorption energy of the particles at the liquid-liquid interface. This is evident when observing emulsion stabilization using polyelectrolytes. It is also possible to use latex particles for Ramsden stabilization and then fuse these particles to form a permeable shell or capsule, called a colloidosome. Moreover, Ramsden emulsion droplets are also suitable templates for micro-encapsulation and the formation of closed, non-permeable capsules. This form of encapsulation can also be applied to water-in-water emulsions (dispersions of phase-separated aqueous polymer solutions), and can also be reversible. Pickering-stabilized microbubbles may have applications as ultrasound contrast agents. See also Liquid marbles References Chemical mixtures Condensed matter physics Soft matter Drug delivery devices Dosage forms
Pickering emulsion
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
991
[ "Pharmacology", "Soft matter", "Phases of matter", "Drug delivery devices", "Materials science", "Chemical mixtures", "Condensed matter physics", "nan", "Matter" ]
15,875,494
https://en.wikipedia.org/wiki/Brown-dwarf%20desert
The brown-dwarf desert is a theorized range of orbits around a star within which brown dwarfs are unlikely to be found as companion objects. This is usually up to 5 AU around solar mass stars. The paucity of brown dwarfs in close orbits was first noted between 1998 and 2000 when a sufficient number of extrasolar planets had been found to perform statistical studies. Astronomers discovered there is a distinct shortage of brown dwarfs within 5 AU of the stars with companions, while there was an abundance of free-floating brown dwarfs being discovered. Subsequent studies have shown that brown dwarfs orbiting within 3–5 AU are found around less than 1% of stars with a mass similar to the Sun (). Of the brown dwarfs that were found in the brown-dwarf desert, most were found in multiple systems, suggesting that binarity was a key factor in the creation of brown-dwarf desert inhabitants. One of the many possible reasons for the existence of the desert relates to planetary (and brown dwarf) migration. If a brown dwarf were to form within 5 AU of its companion star, it could plausibly begin migrating inwards towards the central star and eventually fall into the star itself. That being said, the exact details of migration within a protoplanetary disk are not completely understood, and it is equally plausible that brown dwarf companions to FGK dwarfs would not undergo appreciable migration after their formation. A second possible reason is, depending on which formation paradigm is invoked, that a formation by core accretion should make the formation of higher mass brown dwarfs unlikely, as the gas accretion rate during runaway accretion onto high mass forming objects is reduced due to gap formation in the disk. The limited disk life time then truncates the mass range, limiting the maximum masses to approximately 10 Jupiter masses (). This effect might be somewhat mitigated by the fact that objects of and above might excite eccentric perturbations in the disk, allowing for non-negligible mass accretion even in the presence of a gap. Objects that form further outside (a>80 AU), where the disk is prone to gravitational instabilities, might be able to reach the masses required to cross the planet–brown dwarf threshold. For these objects it might be unlikely to migrate into the inner regions of the disk, however, due to the long type-II migration timescale for massive objects in the brown dwarf mass regime. See also Neptunian desert References Stellar astronomy - -
Brown-dwarf desert
[ "Astronomy" ]
508
[ "Astronomical sub-disciplines", "Stellar astronomy" ]
15,875,500
https://en.wikipedia.org/wiki/Algorithmic%20mechanism%20design
Algorithmic mechanism design (AMD) lies at the intersection of economic game theory, optimization, and computer science. The prototypical problem in mechanism design is to design a system for multiple self-interested participants, such that the participants' self-interested actions at equilibrium lead to good system performance. Typical objectives studied include revenue maximization and social welfare maximization. Algorithmic mechanism design differs from classical economic mechanism design in several respects. It typically employs the analytic tools of theoretical computer science, such as worst case analysis and approximation ratios, in contrast to classical mechanism design in economics which often makes distributional assumptions about the agents. It also considers computational constraints to be of central importance: mechanisms that cannot be efficiently implemented in polynomial time are not considered to be viable solutions to a mechanism design problem. This often, for example, rules out the classic economic mechanism, the Vickrey–Clarke–Groves auction. History Noam Nisan and Amir Ronen first coined "Algorithmic mechanism design" in a research paper published in 1999. See also Algorithmic game theory Computational social choice Metagame Incentive compatible Vickrey–Clarke–Groves mechanism References and notes Further reading . Mechanism design Algorithms
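As a concrete illustration of the kind of mechanism studied in this field, the sketch below implements a single-item sealed-bid second-price (Vickrey) auction, the simplest special case of the Vickrey–Clarke–Groves family mentioned above. It is only a toy example: the bid values are made up, and real algorithmic mechanism design problems involve combinatorial allocation and computational constraints that this snippet does not address.

```python
# A minimal sketch of a single-item second-price (Vickrey) auction, the
# simplest member of the VCG family mentioned above. Bidding one's true value
# is a dominant strategy, which is the kind of incentive property algorithmic
# mechanism design asks for; the bid data here are made up for illustration.

from typing import Dict, Tuple

def vickrey_auction(bids: Dict[str, float]) -> Tuple[str, float]:
    """Return (winner, price): highest bidder wins, pays the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]          # payment is independent of the winner's own bid
    return winner, price

if __name__ == "__main__":
    example_bids = {"alice": 10.0, "bob": 7.5, "carol": 9.0}   # hypothetical values
    print(vickrey_auction(example_bids))   # ('alice', 9.0)
```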
Algorithmic mechanism design
[ "Mathematics" ]
237
[ "Applied mathematics", "Algorithms", "Mathematical logic", "Game theory", "Mechanism design" ]
9,200,590
https://en.wikipedia.org/wiki/Gaussian%20free%20field
In probability theory and statistical mechanics, the Gaussian free field (GFF) is a Gaussian random field, a central model of random surfaces (random height functions). The discrete version can be defined on any graph, usually a lattice in d-dimensional Euclidean space. The continuum version is defined on Rd or on a bounded subdomain of Rd. It can be thought of as a natural generalization of one-dimensional Brownian motion to d time (but still one space) dimensions: it is a random (generalized) function from Rd to R. In particular, the one-dimensional continuum GFF is just the standard one-dimensional Brownian motion or Brownian bridge on an interval. In the theory of random surfaces, it is also called the harmonic crystal. It is also the starting point for many constructions in quantum field theory, where it is called the Euclidean bosonic massless free field. A key property of the 2-dimensional GFF is conformal invariance, which relates it in several ways to the Schramm–Loewner evolution, see and . Similarly to Brownian motion, which is the scaling limit of a wide range of discrete random walk models (see Donsker's theorem), the continuum GFF is the scaling limit of not only the discrete GFF on lattices, but of many random height function models, such as the height function of uniform random planar domino tilings, see . The planar GFF is also the limit of the fluctuations of the characteristic polynomial of a random matrix model, the Ginibre ensemble, see . The structure of the discrete GFF on any graph is closely related to the behaviour of the simple random walk on the graph. For instance, the discrete GFF plays a key role in the proof by of several conjectures about the cover time of graphs (the expected number of steps it takes for the random walk to visit all the vertices). Definition of the discrete GFF Let P(x, y) be the transition kernel of the Markov chain given by a random walk on a finite graph G(V, E). Let U be a fixed non-empty subset of the vertices V, and take the set of all real-valued functions φ on V with some prescribed values on U. We then define a Hamiltonian by H(φ) = (1/2) Σ_{x, y} P(x, y) (φ(x) − φ(y))². Then, the random function with probability density proportional to exp(−H(φ)) with respect to the Lebesgue measure on R^(V∖U) is called the discrete GFF with boundary U. It is not hard to show that the expected value E[φ(x)] is the discrete harmonic extension of the boundary values from U (harmonic with respect to the transition kernel P), and the covariances Cov[φ(x), φ(y)] are equal to the discrete Green's function G(x, y). So, in one sentence, the discrete GFF is the Gaussian random field on V with covariance structure given by the Green's function associated to the transition kernel P. The continuum field The definition of the continuum field necessarily uses some abstract machinery, since it does not exist as a random height function. Instead, it is a random generalized function, or in other words, a probability distribution on distributions (with two different meanings of the word "distribution"). Given a domain Ω ⊆ Rn, consider the Dirichlet inner product ⟨f, g⟩ := ∫_Ω ∇f(x) · ∇g(x) dx for smooth functions ƒ and g on Ω, coinciding with some prescribed boundary function on ∂Ω, where ∇f(x) is the gradient vector at x ∈ Ω. Then take the Hilbert space closure with respect to this inner product; this is the Sobolev space H¹(Ω). The continuum GFF h on Ω is a Gaussian random field indexed by H¹(Ω), i.e., a collection of Gaussian random variables, one for each f ∈ H¹(Ω), denoted by ⟨h, f⟩, such that the covariance structure is Cov[⟨h, f⟩, ⟨h, g⟩] = ⟨f, g⟩ for all f, g ∈ H¹(Ω). Such a random field indeed exists, and its distribution is unique. 
Given any orthonormal basis ψ_1, ψ_2, … of H¹(Ω) (with the given boundary condition), we can form the formal infinite sum h := Σ_k ξ_k ψ_k, where the ξ_k are i.i.d. standard normal variables. This random sum almost surely will not exist as an element of H¹(Ω), since if it did then its squared norm would be Σ_k ξ_k², which is almost surely infinite. However, it exists as a random generalized function, since for any f ∈ H¹(Ω) we have ⟨h, f⟩ := Σ_k ξ_k ⟨ψ_k, f⟩, hence ⟨h, f⟩ is a centered Gaussian random variable with finite variance Σ_k ⟨ψ_k, f⟩² = ⟨f, f⟩. Special case: n = 1 Although the above argument shows that h does not exist as a random element of H¹(Ω), it still could be that it is a random function on Ω in some larger function space. In fact, in dimension n = 1, an orthonormal basis of H¹[0, 1] is given by ψ_k(t) := ∫_0^t φ_k(s) ds, where the φ_k form an orthonormal basis of L²[0, 1], and then h(t) = Σ_k ξ_k ψ_k(t) is easily seen to be a one-dimensional Brownian motion (or Brownian bridge, if the boundary values for the ψ_k are set up that way). So, in this case, it is a random continuous function (not belonging to H¹[0, 1], however). For instance, if the φ_k form the Haar basis, then this is Lévy's construction of Brownian motion, see, e.g., Section 3 of . On the other hand, for n ≥ 2 it can indeed be shown to exist only as a generalized function, see . Special case: n = 2 In dimension n = 2, the conformal invariance of the continuum GFF is clear from the invariance of the Dirichlet inner product. The corresponding two-dimensional conformal field theory describes a massless free scalar boson. See also Brownian sheet References Statistical mechanics Stochastic processes
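A small numerical sketch of the discrete definition above: it samples a discrete GFF on a path graph with zero boundary values, using the fact that the field is a centered Gaussian vector whose covariance is the discrete Green's function. The normalization below uses unit conductances, a common convention that differs from the random-walk kernel P only by constant factors; the graph size is illustrative.

```python
# Numerical sketch of the discrete GFF defined above, on a path graph with
# zero boundary values. With unit conductances (a common normalization that
# differs from the random-walk kernel P only by constant factors), the density
# is proportional to exp(-(1/2) * phi^T L phi), so the interior values form a
# centered Gaussian vector whose covariance is the inverse of the interior
# graph Laplacian, i.e. the discrete Green's function G(x, y).

import numpy as np

def sample_discrete_gff_on_path(n: int, rng=None) -> np.ndarray:
    """Sample phi(0..n) with boundary values phi(0) = phi(n) = 0 (U = {0, n})."""
    rng = np.random.default_rng() if rng is None else rng
    m = n - 1                                   # number of interior vertices
    # Interior graph Laplacian of the path with unit conductances (tridiagonal).
    L = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    # If L = C C^T (Cholesky), solving C^T x = z with z standard normal
    # gives x with covariance (C C^T)^{-1} = L^{-1}.
    C = np.linalg.cholesky(L)
    z = rng.standard_normal(m)
    phi_interior = np.linalg.solve(C.T, z)
    return np.concatenate(([0.0], phi_interior, [0.0]))

phi = sample_discrete_gff_on_path(100)
print(phi.shape, phi[:5])
# In one dimension the zero-boundary discrete GFF fluctuates like a Brownian
# bridge, matching the n = 1 special case discussed above.
```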
Gaussian free field
[ "Physics" ]
1,094
[ "Statistical mechanics" ]
9,201,770
https://en.wikipedia.org/wiki/Gankyil
The Gankyil (, Lhasa ) or "wheel of joy" () is a symbol and ritual tool used in Tibetan and East Asian Buddhism. It is composed of three (sometimes two or four) swirling and interconnected blades. The traditional spinning direction is clockwise (right turning), but the counter-clockwise ones are also common. The gankyil as inner wheel of the dharmachakra is depicted on the Flag of Sikkim, Joseon, and is also depicted on the Flag of Tibet and Emblem of Tibet. Exegesis In addition to linking the gankyil with the "wish-fulfilling jewel" (Skt. cintamani), Robert Beer makes the following connections: The "victory" referred to above is symbolised by the dhvaja or "victory banner". Wallace (2001: p. 77) identifies the ānandacakra with the heart of the "cosmic body" of which Mount Meru is the epicentre: Associated triunes Ground, path, and fruit "ground", "base" () "path", "method" () "fruit", "product" () Three humours of traditional Tibetan medicine Attributes connected with the three humors (Sanskrit: tridoshas, Tibetan: nyi pa gsum): Desire (Tibetan: འདོད་ཆགས། ’dod chags) is aligned with the humor Wind (rlung, , Sanskrit: vata - "air and aether constitution") Hatred (Tibetan: ཞེ་སྡང་། zhe sdang) is aligned with the humor Bile (Tripa, mkhris pa, Sanskrit: pitta - "fire and water constitution") Ignorance (Tibetan: གཏི་མུག gti mug) is aligned with the humor Phlegm (Béken bad kan, Sanskrit: kapha - "earth and water constitution"). Study, reflection, and meditation Study ( Tibetan: ཐོས་པ། thos + pa) Reflection ( Tibetan: བསམ་པ།sam+ pa) Meditation ( Tibetan: སྒོམ་པ། sgom pa) These three aspects are the mūlaprajñā of the sādhanā of the prajñāpāramitā, the "pāramitā of wisdom". Hence, these three are related to, but distinct from, the Prajñāpāramitā that denotes a particular cycle of discourse in the Buddhist literature that relates to the doctrinal field (kṣetra) of the second turning of the dharmacakra. Mula dharmas of the path The Dzogchen teachings focus on three terms: View (Tibetan: ལྟ་བ། lta-ba), Meditation (Tibetan: སྒོམ་པ། sgom pa), Action (Tibetan: སྤྱོད་པ། spyod-pa). Triratna doctrine The Triratna, Triple Jewel or Three Gems are triunic are therefore represented by the Gankyil: Buddha (Tibetan: སངས་རྒྱས།, Sangye, Wyl. sangs rgyas) Dharma (Tibetan: ཆོས།, Cho; Wyl. chos) Sangha (Tibetan: དགེ་དུན།, Gendun; Wyl. dge 'dun) Three Roots The Three Roots are: Guru (Tibetan: བླ་མ།, Wyl. bla ma) Yidam (Tibetan: ཡི་དམ།, Wyl. yi dam; Skt. istadevata) Dakini (Tibetan: མཁའ་འགྲོ་མ།, Khandroma; Wyl. mkha 'gro ma ) Three Higher Trainings The three higher trainings (Tibetan:ལྷག་བའི་བསླབ་པ་གསུམ་, lhagpe labpa sum, or Wyl. bslab pa gsum) discipline (Tibetan: ཚུལ་ཁྲིམས་ཀྱི་བསླབ་པ།, Wyl. tshul khrims kyi bslab pa) meditation (Tibetan: ཏིང་ངེ་འཛན་གྱི་བསླབ་པ།, Wyl. ting nge 'dzin gyi bslab pa) wisdom (Tibetan: ཤེས་རབ་ཀྱི་བསླབ་པ།, Wyl. shes rab kyi bslab pa ) Three Dharma Seals The indivisible essence of the Three Dharma Seals (ལྟ་བ་བཀའ་རྟགས་ཀྱི་ཕྱག་རྒྱ་གསུམ།) is embodied and encoded within the Gankyil: Impermanence (Tibetan: འདུ་བྱེ་ཐམས་ཅད་མི་རྟག་ཅིང་།) anatta (Tibetan: ཆོས་རྣམས་སྟོང་ཞིང་བདག་མེད་པ།) Nirvana (Tibetan: མྱང་ངན་འདས་པ་ཞི་བའོ།།) Three Turnings of the Wheel of Dharma As the inner wheel of the Vajrayana Dharmacakra, the gankyil also represents the syncretic union and embodiment of Gautama Buddha's Three Turnings of the Wheel of Dharma. The pedagogic upaya doctrine and classification of the "three turnings of the wheel" was first postulated by the Yogacara school. 
Trikaya doctrine The gankyil is the energetic signature of the Trikaya, realised through the transmutation of the obscurations forded by the Three poisons (refer klesha) and therefore in the Bhavachakra the Gankyil is an aniconic depiction of the snake, boar and fowl. Gankyil is to Dharmachakra, as still eye is to cyclone, as Bindu is to Mandala. The Gankyil is the inner wheel of the Vajrayana Dharmacakra (refer Himalayan Ashtamangala). The Gankyil is symbolic of the Trikaya doctrine of dharmakaya (Tibetan: ཆོས་སྐུ།, Wyl.Chos sku), sambhogakaya (Tibetan:ལོངས་སྐུ་ Wyl. longs sku) and nirmanakaya (Tibetan:སྤྲུལ་སྐུ། Wyl.sprul sku) and also of the Buddhist understanding of the interdependence of the Three Vajras: of mind, voice and body. The divisions of the teaching of Dzogchen are for the purposes of explanation only; just as the Gankyil divisions are understood to dissolve in the energetic whirl of the Wheel of Joy. Three cycles of Nyingmapa Dzogchen The Gankyil also embodies the three cycles of Nyingma Dzogchen codified by Mañjuśrīmitra: Semde [Tibetan:སེམས་སྡེ།] Longdé [Tibetan:ཀློང་སྡེ།] Mengagde [Tibetan:མན་ངག་སྡེ།] This classification determined the exposition of the Dzogchen teachings in the subsequent centuries. Three Spheres "Three spheres" (Sanskrit: trimandala; Tibetan: འཁོར་གསུམ།'khor gsum). The conceptualizations pertaining to: subject, object, and action Sound, light and rays The triunic continuua of the esoteric Dzogchen doctrine of 'sound, light and rays' (སྒྲ་འོད་ཟེར་གསུམ། Wylie: sgra 'od zer gsum) is held within the energetic signature of the Gankyil. The doctrine of 'Sound, light and rays' is intimately connected with the Dzogchen teaching of the 'three aspects of the manifestation of energy'. Though thoroughly interpenetrating and nonlocalised, 'sound' may be understood to reside at the heart, the 'mind'-wheel; 'light' at the throat, the 'voice'-wheel; and 'rays' at the head, the 'body'-wheel. Some Dzogchen lineages for various purposes, locate 'rays' at the Ah-wheel (for Five Pure Lights pranayama) and 'light' at the Aum-wheel (for rainbow body), and there are other enumerations. Three lineages of Nyingmapa Dzogchen The Gankyil also embodies the three tantric lineages as Penor Rinpoche, a Nyingmapa, states: According to the history of the origin of tantras there are three lineages: The Lineage of Buddha's Intention, which refers to the teachings of the Truth Body originating from the primordial Buddha Samantabhadra, who is said to have taught tantras to an assembly of completely enlightened beings emanated from the Truth Body itself. Therefore, this level of teaching is considered as being completely beyond the reach of ordinary human beings. The Lineage of the Knowledge Holders corresponds to the teachings of the Enjoyment Body originating from Vajrasattva and Vajrapani, whose human lineage begins with Garab Dorje of the Ögyan Dakini land. From him the lineage passed to Manjushrimitra, Shrisimha and then to Guru Rinpoche, Jnanasutra, Vimalamitra and Vairochana who disseminated it in Tibet. Lastly, the Human Whispered Lineage corresponds to the teachings of the Emanation Body, originating from the Five Buddha Families. They were passed on to Shrisimha, who transmitted them to Guru Rinpoche, who in giving them to Vimalamitra started the lineage which has continued in Tibet until the present day. 
Three aspects of energy in Dzogchen The Gankyil also embodies the energy manifested in the three aspects that yield the energetic emergence (Tibetan: རང་བྱུན། rang byung) of phenomena ( Tibetan: ཆོས་ Wylie: "chos" Sanskrit: dharmas) and sentient beings (Tibetan: ཡིད་ཅན། yid can): dang (གདངས། Wylie: gDangs), this is an infinite and formless level of compassionate energy and reflective capacity, it is "an awareness free from any restrictions and as an energy free from any limits or form." rolpa (རོལ་པ། Wylie: Rol-pa). These are the manifestations which appear to be internal to the individual (such as when a crystal ball seems to reflect something inside itself). tsal (རྩལ། Wylie: rTsal, is "the manifestation of the energy of the individual him or herself, as an apparently 'external' world," though this apparent externality is only just "a manifestation of our own energy, at the level of Tsal." This is explained through the use of a crystal prism which reflects and refracts white light into various other forms of light. Though not discrete correlates, dang equates to dharmakaya; rolpa to sambhogakaya; and tsal to nirmanakaya. In Bon Three Treasures of Yungdrung Bon In Bon, the gankyil denotes the three principal terma cycles of Yungdrung Bon: the Northern Treasure (), the Central Treasure () and the Southern Treasure (). The Northern Treasure is compiled from texts revealed in Zhangzhung and northern Tibet, the Southern Treasure from texts revealed in Bhutan and southern Tibet, and the Central Treasure from texts revealed in Ü-Tsang near Samye. The gankyil is the central part of the shang (Tibetan: gchang), a traditional ritual tool and instrument of the Bönpo shaman. See also Borromean rings Taegeuk Taijitu Tomoe Triskelion References Citations Works cited Beer, Robert (2003). The Handbook of Tibetan Buddhist Symbols. Serindia Publications. Source: (accessed: December 7, 2007) Besch, {Nils} Florian (2006). Tibetan Medicine Off the Roads: Modernizing the Work of the Amchi in Spiti. Source: (accessed: February 11, 2008) Günther, Herbert (undated). Three, Two, Five. (accessed: April 30, 2007) Ingersoll, Ernest (1928). Dragons and Dragon Lore. (accessed: June 12, 2008)\*Kazin, Alfred (1946). The Portable Blake. (Selected and arranged with an introduction by Alfred Kazin.) New York: The Viking Press. . Nalimov, V. V. (1982). Realms of the Unconscious: The Enchanted Frontier. University Park, PA: ISI Press. Penor Rinpoche (undated). The school of Nyingma thought (accessed: June 12, 2008) Southworth, Franklink C. (2005? forthcoming). Proto-Dravidian Agriculture. Source: (accessed: February 10, 2008) Van Schaik, Sam (2004). Approaching the Great Perfection: Simultaneous and Gradual Methods of Dzogchen Practice in the Longchen Nyingtig. Wisdom Publications. . Source: (accessed: February 2, 2008) Wayman, Alex (?) A Problem of 'Synonyms' in the Tibetan Language: Bsgom pa and Goms pa. Source: [to be supplied when have more bandwidth] (accessed: February 10, 2008) NB: published in the Journal of the Tibet Society. External links Entry for dga' 'khyil in Rang Jung Yeshe Wiki (with picture). Buddhist symbols Tantric practices Tibetan Buddhist practices Rotational symmetry
Gankyil
[ "Physics" ]
2,784
[ "Symmetry", "Rotational symmetry" ]
9,202,262
https://en.wikipedia.org/wiki/Finger-counting
Finger-counting, also known as dactylonomy, is the act of counting using one's fingers. There are multiple different systems used across time and between cultures, though many of these have seen a decline in use because of the spread of Arabic numerals. Finger-counting can serve as a form of manual communication, particularly in marketplace trading – including hand signaling during open outcry in floor trading – and also in hand games, such as morra. Finger-counting is known to go back to ancient Egypt at least, and probably even further back. Historical counting Complex systems of dactylonomy were used in the ancient world. The Greco-Roman author Plutarch, in his Lives, mentions finger counting as being used in Persia in the first centuries CE, so the practice may have originated in Iran. It was later used widely in medieval Islamic lands. The earliest reference to this method of using the hands to refer to the natural numbers may have been in some Prophetic traditions going back to the early days of Islam during the early 600s. In one tradition as reported by Yusayra, Muhammad enjoined upon his female companions to express praise to God and to count using their fingers (=واعقدن بالأنامل )( سنن الترمذي). In Arabic, dactylonomy is known as "Number reckoning by finger folding" (=حساب العقود ). The practice was well known in the Arabic-speaking world and was quite commonly used as evidenced by the numerous references to it in Classical Arabic literature. Poets could allude to a miser by saying that his hand made "ninety-three", i.e. a closed fist, the sign of avarice. When an old man was asked how old he was he could answer by showing a closed fist, meaning 93. The gesture for 50 was used by some poets (for example Ibn Al-Moutaz) to describe the beak of the goshawk. Some of the gestures used to refer to numbers were even known in Arabic by special technical terms such as Kas' (=القصع ) for the gesture signifying 29, Dabth (=الـضَـبْـث ) for 63 and Daff (= الـضَـفّ) for 99 (فقه اللغة). The polymath Al-Jahiz advised schoolmasters in his book Al-Bayan (البيان والتبيين) to teach finger counting which he placed among the five methods of human expression. Similarly, Al-Suli, in his Handbook for Secretaries, wrote that scribes preferred dactylonomy to any other system because it required neither materials nor an instrument, apart from a limb. Furthermore, it ensured secrecy and was thus in keeping with the dignity of the scribe's profession. Books dealing with dactylonomy, such as a treatise by the mathematician Abu'l-Wafa al-Buzajani, gave rules for performing complex operations, including the approximate determination of square roots. Several pedagogical poems dealt exclusively with finger counting, some of which were translated into European languages, including a short poem by Shamsuddeen Al-Mawsili (translated into French by Aristide Marre) and one by Abul-Hasan Al-Maghribi (translated into German by Julius Ruska). A very similar form is presented by the English monk and historian Bede in the first chapter of his De temporum ratione, (725), entitled "Tractatus de computo, vel loquela per gestum digitorum", which allowed counting up to 9,999 on two hands, though it was apparently little-used for numbers of 100 or more. This system remained in use through the European Middle Ages, being presented in slightly modified form by Luca Pacioli in his seminal Summa de arithmetica (1494). By country or region Finger-counting varies between cultures and over time, and is studied by ethnomathematics. 
Cultural differences in counting are sometimes used as a shibboleth, particularly to distinguish nationalities in wartime. Such differences form a plot point in the film Inglourious Basterds, by Quentin Tarantino, and in the book Pi in the Sky, by John D. Barrow. Asia Finger-counting systems in use in many regions of Asia allow for counting to 12 by using a single hand. The thumb acts as a pointer touching the three finger bones of each finger in turn, starting with the outermost bone of the little finger. One hand is used to count numbers up to 12. The other hand is used to display the number of completed base-12s. This continues until twelve dozen, i.e. 144, is reached. Chinese number gestures count up to 10 but can exhibit some regional differences. In Japan, counting for oneself begins with the palm of one hand open. As in East Slavic countries, the thumb represents number 1; the little finger is number 5. Digits are folded inwards while counting, starting with the thumb. A closed palm indicates number 5. By reversing the action, number 6 is indicated by extending the little finger. A return to an open palm signals the number 10. However, to indicate numerals to others, the hand is used in the same manner as an English speaker would use it. The index finger becomes number 1; the thumb now represents number 5. For numbers above five, the appropriate number of fingers from the other hand are placed against the palm. For example, number 7 is represented by the index and middle finger pressed against the palm of the open hand. Number 10 is displayed by presenting both hands open with outward palms. In Korea, Chisanbop allows for signing any number between 0 and 99. Western world In the Western world, a finger is raised for each unit. While there are extensive differences between and even within countries, there are, generally speaking, two systems. The main difference between the two systems is that the "German" or "French" system starts counting with the thumb, while the "American" system starts counting with the index finger. In the system used, for example, in Germany and France, the thumb represents 1, the thumb plus the index finger represents 2, and so on, until the thumb plus the index, middle, ring, and little fingers represent 5. This continues on to the other hand, where one entire hand plus the thumb of the other hand means 6, and so on. In the system used in the Americas, the index finger represents 1; the index and middle fingers represent 2; the index, middle and ring fingers represent 3; the index, middle, ring, and little fingers represent 4; and the four fingers plus the thumb represent 5. This continues on to the other hand, where one entire hand plus the index finger of the other hand means 6, and so on. Non-decimal finger counting In finger binary (base 2), each finger represents a different bit, for example thumb for 1, index for 2, middle for 4, ring for 8, and pinky for 16. This allows counting from zero to 31 using the fingers of one hand, or 1023 using both. In senary finger counting (base 6), one hand represents the units (0 to 5) and the other hand represents multiples of 6, so the two hands together count up to 55 in senary (35 in decimal). Two related representations can be expressed: wholes and sixths (counting up to 5.5 by sixths), and sixths and thirty-sixths (counting up to 0.55 by thirty-sixths). For example, "12" (left 1 right 2) can represent eight (12 senary), four-thirds (1.2 senary) or two-ninths (0.12 senary). 
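The non-decimal schemes just described map directly onto short programs. Below is a minimal Python sketch of finger binary and senary finger counting; the finger labels and function names are illustrative choices, not any standard convention.

```python
# Finger binary: each finger is one bit, so one hand (5 fingers) covers 0-31
# and two hands (10 fingers) cover 0-1023.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]  # bit 0 .. bit 4

def fingers_for(n: int) -> list[str]:
    """Return which fingers of one hand are raised to show n (0-31) in finger binary."""
    if not 0 <= n <= 31:
        raise ValueError("one hand covers 0-31 only")
    return [name for bit, name in enumerate(FINGERS) if n & (1 << bit)]

def senary_value(sixes: int, units: int) -> int:
    """Senary finger counting: one hand shows multiples of six (0-5), the other the units (0-5)."""
    if not (0 <= sixes <= 5 and 0 <= units <= 5):
        raise ValueError("each hand shows 0-5")
    return 6 * sixes + units

if __name__ == "__main__":
    for n in (0, 5, 21, 31):
        print(n, "->", fingers_for(n) or ["(closed fist)"])
    print("senary '55' ->", senary_value(5, 5))  # 35 in decimal, as noted above
```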
Other body-based counting systems Undoubtedly, the decimal (base-10) counting system came to prominence due to the widespread use of finger counting, but many other counting systems have been used throughout the world. Likewise, base-20 counting systems, such as that used by the Pre-Columbian Maya, are likely due to counting on fingers and toes. This is suggested in the languages of Central Brazilian tribes, where the word for twenty often incorporates the word for "feet". Other languages using a base-20 system often refer to twenty in terms of "men", that is, 1 "man" = 20 "fingers and toes". For instance, the Dene-Dinje tribe of North America refer to 5 as "my hand dies", 10 as "my hands have died", 15 as "my hands are dead and one foot is dead", and 20 as "a man dies". Even the French language today shows remnants of a Gaulish base-20 system in the names of the numbers from 60 through 99. For example, sixty-five is soixante-cinq (literally, "sixty [and] five"), while seventy-five is soixante-quinze (literally, "sixty [and] fifteen"). The Yuki language in California and the Pamean languages in Mexico have octal (base-8) systems because the speakers count using the spaces between their fingers rather than the fingers themselves. In languages of New Guinea and Australia, such as the Telefol language of Papua New Guinea, body counting is used to give higher-base counting systems, up to base-27. On Muralug Island, the counting system works as follows: starting with the little finger of the left hand, count each finger, then, for six through ten, successively touch and name the left wrist, left elbow, left shoulder, left breast and sternum. Then, for eleven through nineteen, count the body parts in reverse order on the right side of the body (with the right little finger signifying nineteen). A variant among the Papuans of New Guinea uses, on the left, the fingers, then the wrist, elbow, shoulder, left ear and left eye; then, on the right, the eye, nose, mouth, right ear, shoulder, wrist and finally the fingers of the right hand, adding up to 22, anusi, which means "little finger". See also Knuckle mnemonic Tally marks Prehistoric numerals External links Counting in American Sign Language Counting with your fingers in France Finger Counting Questionnaire Yutaka Nishiyama, Counting With The Fingers. Articles containing video clips
Finger-counting
[ "Mathematics" ]
2,195
[ "Numeral systems", "Finger-counting" ]
9,202,674
https://en.wikipedia.org/wiki/Cell%20survival%20curve
Introduction & History The cell survival curve is a curve often used in radiobiology that represents the relationship between the fraction of cells retaining reproductive capability and the absorbed dose of radiation. Tumor cells are able to proliferate indefinitely, while normal cells must undergo treatment in order to grow indefinitely (see Cellular senescence). The cell survival curve refers to specific quantities of radiation that affect a cell's ability to reproduce. Very high doses of radiation (10,000 rads or 100 Gy) can cause complete abatement of cellular function (cell death). These values are much larger than the mean lethal dose of around 2 Gy that is required for loss of reproductive function. In order to gain an accurate estimate of the reproductive viability of cells exposed to radiation, cells are generally subjected to a Clonogenic assay. There are two generally accepted models used to interpret cell survival curves: the multi-hit (target theory) model and the repair model. The first mammalian cell radiation survival curve was developed by Puck and Marcus in 1956, examining the action of x-rays on mammalian cells using HeLa cells. Clonogenic Survival Assay With Survival Curves Clonogenic survival assays are generally used to gather data for a cell survival curve. A clonogenic survival assay begins by harvesting cells from a growing cell stock through gentle scraping and application of Trypsin. Cells are then counted per unit volume, either manually (using a Hemocytometer) or electronically. Cells are then isolated and incubated for a 1-2 week period. The plating efficiency is the ratio of colonies observed to cells plated. To produce a cell survival curve, parallel sets of cells are plated and exposed to increasing doses of radiation. The surviving fraction is the number of colonies that survive a given dose of radiation divided by the number of cells seeded, corrected for plating efficiency. Cell Survival Curves Cell survival curves are generally plotted with the surviving fraction of cells on a logarithmic scale against the dose of radiation on a linear scale. There are two basic types of cell survival curves: linear (exponential) and curved. Linear survival curves reflect cells irradiated with high-LET radiation. The relationship between the surviving fraction (S) and the dose (D) is S = e^(-aD), where -a represents the slope. The relationship can also be expressed as S = e^(-D/D0), where D0 represents 1/a. When the dose is equal to D0 (that is, 1/a), S = 0.37 (or e^-1). For this reason, D0 is often called the mean lethal dose, the dose that creates one lethal event per target on average. Curved cell survival curves (cells exposed to low-LET radiation) show two distinct regions: a low-dose region and a high-dose region. The low-dose region is often referred to as the “shoulder”, and in this region there are fewer cell inactivations per unit dose. The high-dose region generally trends towards a straight line. Two general interpretations have been offered for the difference between the low-dose and high-dose regions. One of these is the Target Theory; the other holds that the efficiency of enzymatic repair diminishes with increased numbers of lesions, which is the basis of the repair models. 
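The exponential model described above can be restated compactly in the section's own notation; the surviving-fraction expression below is the standard clonogenic-assay definition rather than a formula quoted from a specific source.

```latex
% Exponential (high-LET) survival model, in the notation used above
S(D) = e^{-aD} = e^{-D/D_0}, \qquad D_0 = \frac{1}{a}
% At a dose equal to the mean lethal dose D_0:
S(D_0) = e^{-1} \approx 0.37
% Surviving fraction from a clonogenic assay, with plating efficiency PE:
\mathrm{SF} = \frac{\text{colonies counted after dose } D}{\text{cells seeded} \times \mathrm{PE}}
```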
Different Models & Relationship to Cell Survival Curve Target Theory (Multi-Hit) Target theory, often called the “multi-hit model” in the context of cell irradiation, examines how ionizing radiation affects biological cell function and survival. Target theory encompasses several models that help to explain the role radiation plays in cell death or injury. The single-target single-hit model states that there is a single target that must be hit by radiation in order to inactivate a cell. In this model, radiation hits are random, and a hit must strike a specific location in the cell, such as the DNA, to inactivate it. The model that better aligns with eukaryotic cells and the cell survival curve is the multi-target model. This model takes into account that there are multiple key components of the cell that must each be damaged by radiation in order to inactivate the cell. This accounts for the “shoulder” seen in cell survival curves, which represents the cell’s initial resistance to damage at lower doses of radiation before the curve becomes linear at higher doses. Repair Models Repair models were introduced into this discussion chiefly because of the presence of the “shoulder” in survival curves for cells exposed to low-LET radiation (curved cell-survival models). The shoulder of a low-LET cell survival curve represents the dose range in which individual low doses do little to diminish a cell’s survival, although an accumulation of such low doses can eventually cause a loss of reproductive capacity. Some damage, fittingly, is thus dubbed “sublethal damage”. The repair model, in contrast to target theory, emphasizes that a cell uses its repair mechanisms up to the limit of its repair machinery (the end of the shoulder). Once that limit is reached, the repair model holds that any damage left unrepaired initiates the dying process. This differs from the target model’s idea that cell death occurs once a certain number of targets have been hit. Synthesis Between Two Models Observations of the cell survival curve and the phenomenon of the shoulder support a multi-faceted account of what causes cell death from radiation. A synthesis of the repair models and target theory helps explain cell death under irradiation by invoking two different mechanisms, and there has been academic debate over the exact mechanisms governing cell survival after irradiation. These models reflect the fact that cell survival fractions are exponential functions with a dose-dependent term in the exponent, due to the Poisson statistics underlying the Stochastic process. Application of the Cell Survival Curve & Current Research The cell survival curve has a multitude of practical applications in Physiology, medicine, and related fields. One such application is the Dose–response relationship, i.e., determining the minimum dose of radiation required to achieve particular therapeutic outcomes, including cell inactivation. Clinical applications include Focused ultrasound and Gamma knife procedures used to treat brain tumors, abnormal blood vessels, and other conditions; these rely on the precise radiation dosing characterized by cell survival curves. Radiation treatment has grown in prevalence, and the cell survival curve has important implications for many facets of radiation treatments and procedures. 
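To make the shoulder concrete, the classical multi-target single-hit expression S = 1 − (1 − e^(−D/D0))^n — a standard radiobiology form, not stated explicitly in the text above — can be evaluated numerically. A minimal Python sketch with illustrative (not experimentally derived) parameter values:

```python
import math

def multi_target_survival(dose: float, d0: float, n: int) -> float:
    """Multi-target single-hit model: S = 1 - (1 - e^(-D/D0))^n.
    n = 1 reduces to the simple exponential (no shoulder); n > 1 produces a shoulder."""
    return 1.0 - (1.0 - math.exp(-dose / d0)) ** n

if __name__ == "__main__":
    d0, n = 1.5, 3  # illustrative values: D0 in Gy, extrapolated target number n
    print(" D (Gy)   multi-target S   exponential S")
    for dose in (0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0):
        s_shoulder = multi_target_survival(dose, d0, n)
        s_simple = math.exp(-dose / d0)  # single-target comparison, no shoulder
        print(f"{dose:6.1f}   {s_shoulder:14.4f}   {s_simple:13.4f}")
```

At low doses the multi-target value stays near 1 (the shoulder) while the simple exponential falls immediately; at high doses both decline exponentially.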
Recent research has proposed that HeLa cells damaged by radiation (cells that are unusual in having short-shouldered survival curves and two peaks of radioresistance across the cell cycle) show that radio-suppression is mediated by intra-S checkpoints and reduces the survival of cells in S phase. This information is relatively new and could possibly be extrapolated to other cell types. See also Dose fractionation Dose–response relationship Chronic radiation syndrome External links [Cell Survival Curves - MIT OpenCourseWare](https://dspace.mit.edu/bitstream/handle/1721.1/104092/22-01-fall-2006/contents/lecture-notes/cell_survival_cu.pdf) – Notes explaining simple concepts relating to the cell survival curve and subsequent context. [Targeting the Tumor Microenvironment in Radiation Oncology - PubMed Central (PMC3823792)](https://pmc.ncbi.nlm.nih.gov/articles/PMC3823792/) – Article discussing radiation on tumors and subsequent cell survival curves. [The Linear Quadratic Model and Fractionated Radiation Therapy - British Journal of Cancer](https://pmc.ncbi.nlm.nih.gov/articles/PMC2149137/pdf/brjcancersuppl00064-0010.pdf) – PDF from a journal discussing mathematical models related to the cell survival curve. [Cell Survival Curve Analysis - International Journal of Radiation Oncology (PMC6203314)](https://pmc.ncbi.nlm.nih.gov/articles/PMC6203314/) – Scholarly article on research of dose-dependent radiation. [Journal of Radiation Research](https://academic.oup.com/jrr/article/65/2/256/7499573) – 2024 study on the effects of radiation on cellular survival and damage response, reflecting current research. [The Radiobiological Significance of the Cell Survival Curve - PubMed](https://pubmed.ncbi.nlm.nih.gov/13319584/) – Foundational research article discussing the importance of cell survival curves in radiobiology. References Curves Radiobiology
Cell survival curve
[ "Chemistry", "Biology" ]
2,508
[ "Radiobiology", "Radioactivity" ]
9,202,889
https://en.wikipedia.org/wiki/THC-O-phosphate
THC-O-phosphate is a water-soluble organophosphate ester derivative of tetrahydrocannabinol (THC), which functions as a metabolic prodrug for THC itself. It was invented in 1978 in an attempt to get around the poor water solubility of THC and make it easier to inject for the purposes of animal research into its pharmacology and mechanism of action. The main disadvantage of the THC phosphate ester is the slow rate of hydrolysis of the ester link, resulting in delayed onset of action and lower potency than the parent drug. Pharmacologically, it is comparable to the action of psilocybin as a metabolic prodrug for psilocin. THC phosphate ester is made by reacting THC with phosphoryl chloride using pyridine as a solvent, followed by quenching with water to produce the THC phosphate ester. In the original research, the less active but more stable isomer Δ8-THC was used, but the same reaction scheme could be used to make the phosphate ester of the more active isomer Δ9-THC. See also THC-O-acetate THC hemisuccinate THC morpholinylbutyrate References Benzochromenes Cannabinoids Phosphate esters Prodrugs
THC-O-phosphate
[ "Chemistry" ]
284
[ "Chemicals in medicine", "Prodrugs" ]
9,202,993
https://en.wikipedia.org/wiki/List%20of%20PSPACE-complete%20problems
Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive. Games and puzzles Generalized versions of: Amazons Atomix Checkers if a draw is forced after a polynomial number of non-jump moves Dyson Telescope Game Cross Purposes Geography Two-player game version of Instant Insanity Ko-free Go Ladder capturing in Go Gomoku Hex Konane Lemmings Node Kayles Poset Game Reversi River Crossing Rush Hour Finding optimal play in Mahjong solitaire Scrabble Sokoban Super Mario Bros. Black Pebble game Black-White Pebble game Acyclic pebble game One-player pebble game Token on acyclic directed graph games: Logic Quantified boolean formulas First-order logic of equality Provability in intuitionistic propositional logic Satisfaction in modal logic S4 First-order theory of the natural numbers under the successor operation First-order theory of the natural numbers under the standard order First-order theory of the integers under the standard order First-order theory of well-ordered sets First-order theory of binary strings under lexicographic ordering First-order theory of a finite Boolean algebra Stochastic satisfiability Linear temporal logic satisfiability and model checking Lambda calculus Type inhabitation problem for simply typed lambda calculus Automata and language theory Circuit theory Integer circuit evaluation Automata theory Word problem for linear bounded automata Word problem for quasi-realtime automata Emptiness problem for a nondeterministic two-way finite state automaton Equivalence problem for nondeterministic finite automata Word problem and emptiness problem for non-erasing stack automata Emptiness of intersection of an unbounded number of deterministic finite automata A generalized version of Langton's Ant Minimizing nondeterministic finite automata Formal languages Word problem for context-sensitive language Intersection emptiness for an unbounded number of regular languages Regular Expression Star-Freeness Equivalence problem for regular expressions Emptiness problem for regular expressions with intersection. Equivalence problem for star-free regular expressions with squaring. Covering for linear grammars Structural equivalence for linear grammars Equivalence problem for Regular grammars Emptiness problem for ET0L grammars Word problem for ET0L grammars Tree transducer language membership problem for top down finite-state tree transducers Graph theory succinct versions of many graph problems, with graphs represented as Boolean circuits, ordered binary decision diagrams or other related representations: s-t reachability problem for succinct graphs. This is essentially the same as the simplest plan existence problem in automated planning and scheduling. planarity of succinct graphs acyclicity of succinct graphs connectedness of succinct graphs existence of Eulerian paths in a succinct graph Bounded two-player Constraint Logic Canadian traveller problem. Determining whether routes selected by the Border Gateway Protocol will eventually converge to a stable state for a given set of path preferences Deterministic constraint logic (unbounded) Dynamic graph reliability. Graph coloring game Node Kayles game and clique-forming game: two players alternately select vertices and the induced subgraph must be an independent set (resp. clique). The last to play wins. Nondeterministic Constraint Logic (unbounded) Others Finite horizon POMDPs (Partially Observable Markov Decision Processes). 
Hidden Model MDPs (hmMDPs). Dynamic Markov process. Detection of inclusion dependencies in a relational database Computation of any Nash equilibrium of a 2-player normal-form game that may be obtained via the Lemke–Howson algorithm. The Corridor Tiling Problem: given a set of Wang tiles, a chosen tile and a width given in unary notation, is there any height such that a rectangle of the given width and that height can be tiled so that all of the border tiles are the chosen tile? See also List of NP-complete problems Notes References Eppstein's page on computational complexity of games The Complexity of Approximating PSPACE-complete problems for hierarchical specifications Mathematics-related lists Lists of problems
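True quantified Boolean formulas (the "Quantified boolean formulas" entry under Logic above) is the canonical PSPACE-complete problem: the obvious recursive evaluator uses space linear in the number of quantifiers but exponential time. A minimal Python sketch, using an ad hoc tuple encoding of formulas (not any standard format):

```python
# Naive TQBF evaluator. Formulas are nested tuples:
#   ("var", name) | ("not", f) | ("and", f, g) | ("or", f, g)
#   ("exists", name, f) | ("forall", name, f)
# Recursion depth (and hence space) is linear in the formula size; time is exponential.

def evaluate(formula, env=None):
    env = env or {}
    op = formula[0]
    if op == "var":
        return env[formula[1]]
    if op == "not":
        return not evaluate(formula[1], env)
    if op == "and":
        return evaluate(formula[1], env) and evaluate(formula[2], env)
    if op == "or":
        return evaluate(formula[1], env) or evaluate(formula[2], env)
    if op in ("exists", "forall"):
        _, name, body = formula
        results = (evaluate(body, {**env, name: val}) for val in (False, True))
        return any(results) if op == "exists" else all(results)
    raise ValueError(f"unknown operator: {op}")

if __name__ == "__main__":
    # forall x exists y (x xor y), written with and/or/not: evaluates to True
    phi = ("forall", "x", ("exists", "y",
           ("or", ("and", ("var", "x"), ("not", ("var", "y"))),
                  ("and", ("not", ("var", "x")), ("var", "y")))))
    print(evaluate(phi))  # True
```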
List of PSPACE-complete problems
[ "Mathematics" ]
847
[ "PSPACE-complete problems", "Mathematical problems", "Computational problems" ]
9,203,623
https://en.wikipedia.org/wiki/Mobile%20news
Mobile news refers to both the delivery and creation of news using mobile devices. Mobile news delivery Today, mobile news delivery can be done via SMS, by specialized applications, or using mobile versions of media websites. According to a market study across six countries (France, Germany, Italy, Spain, UK, and US), 16.9% of consumers access news and information via mobile devices, either via browser, downloaded application, or SMS alerts. The demand for mobile news delivery is growing quickly, with 107 percent growth in daily access to mobile news in the last year alone. For example, the New York Times mobile site registered 19 million views in May 2008, compared to 500,000 in January 2007. On July 18, 2011, Time Warner announced that news coverage from CNN and Headline News would be streamed live over the Internet and available for people to view on their laptops, smartphones, or tablets if they subscribed to certain paid TV services. From 2014, many media companies launched native mobile applications, including Newsdash, to engage global users by delivering short news items of their choice. Mobile news creation Mobile news also has the potential to place the power of breaking news reporting in the hands of small communities and to facilitate a much better exchange of information among users, due to the ease of use of mobile phones compared with conventional media such as radio, TV or newspapers, though issues of quality, journalistic standards and professionalism are of concern to some critics. Mobile telephony and full-featured mobile devices also facilitate activism and citizen journalism. In addition to individual efforts, major media outlets like CNN, Reuters, and Yahoo are attempting to harness the power of citizen journalists. The creation of mobile news was fuelled first by the popularity of receiving text alerts, and then hugely accelerated when mobile companies embraced social media, making content creation easy and accessible. See also Citizen journalist Mobile journalism Ushahidi References External links Wireless and the Future of the Newspaper Business Citizen journalism Journalism Mobile content Mobile telecommunications
Mobile news
[ "Technology" ]
401
[ "Mobile content", "Mobile telecommunications" ]
9,204,787
https://en.wikipedia.org/wiki/Upstream%20server
In computer networking, upstream server refers to a server that provides service to another server. In other words, an upstream server is a server that is located higher in a hierarchy of servers. The highest server in the hierarchy is sometimes called the origin server—the application server on which a given resource resides or is to be created. The inverse term, downstream server, is rarely used. The terms are used only in contexts where requests and responses travel in opposite directions; they are not used when discussing hierarchical routing or hierarchical network topologies, as packets can be transferred both ways. For example, in the domain name system, a name server in a company's local area network often forwards requests to the internet service provider's (ISP's) name servers instead of resolving the domain name directly — it can be said that the ISP's name servers are upstream to the local server. In turn, the ISP's servers typically resolve domain names by querying the domain's authoritative origin servers — the authoritative servers are said to be upstream to the ISP's servers. Note that the hierarchy of resolvers is unrelated to the actual domain name hierarchy. References Network management
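As a concrete illustration of the DNS example above, a client can be pointed explicitly at an upstream resolver instead of the system default. A minimal sketch assuming the third-party dnspython package; the nameserver address is a placeholder from the TEST-NET range, not a real upstream server.

```python
# Query a specific upstream resolver rather than the system-configured one.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)   # ignore /etc/resolv.conf
resolver.nameservers = ["192.0.2.53"]                # placeholder upstream (e.g. ISP) name server

answer = resolver.resolve("example.com", "A")        # the upstream server resolves on our behalf
for record in answer:
    print(record.to_text())
```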
Upstream server
[ "Technology", "Engineering" ]
237
[ "Computing stubs", "Computer networks engineering", "Network management", "Computer network stubs" ]