In theoretical physics, the Weyl transformation, named after German mathematician Hermann Weyl, is a local rescaling of the metric tensor, g_ab → e^{−2ω(x)} g_ab, which produces another metric in the same conformal class. A theory or an expression invariant under this transformation is called conformally invariant, or is said to possess Weyl invariance or Weyl symmetry. Weyl symmetry is an important symmetry in conformal field theory. It is, for example, a symmetry of the Polyakov action. When quantum mechanical effects break the conformal invariance of a theory, it is said to exhibit a conformal anomaly or Weyl anomaly. The ordinary Levi-Civita connection and the associated spin connections are not invariant under Weyl transformations. Weyl connections are a class of affine connections that is invariant as a class, although no individual Weyl connection is invariant under Weyl transformations. A quantity φ has conformal weight k if, under the Weyl transformation, it transforms via φ → e^{kω} φ. Thus conformally weighted quantities belong to certain density bundles; see also conformal dimension. Let A_μ be the connection one-form associated to the Levi-Civita connection of g. Introduce a connection B_μ that depends also on the one-form ∂_μω. Then D_μφ ≡ ∂_μφ + k B_μφ is covariant and has conformal weight k − 1. From the transformation g_ab → e^{−2ω(x)} g_ab, transformation formulas for the metric determinant, the Christoffel symbols, and the curvature tensors can be derived. Note that the Weyl tensor is invariant under a Weyl rescaling.
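For concreteness, two standard transformation formulas under g_ab → e^{−2ω} g_ab in d spacetime dimensions (textbook results, written out here as a sketch; the article's own list is not reproduced above) are:

```latex
% Weyl transformations of the metric determinant and Christoffel symbols,
% for the convention g_{ab} -> e^{-2\omega(x)} g_{ab} in d dimensions.
\begin{align}
  \sqrt{-g} \;&\rightarrow\; e^{-d\,\omega}\,\sqrt{-g}, \\
  \Gamma^{\lambda}{}_{\mu\nu} \;&\rightarrow\; \Gamma^{\lambda}{}_{\mu\nu}
    - \delta^{\lambda}{}_{\mu}\,\partial_{\nu}\omega
    - \delta^{\lambda}{}_{\nu}\,\partial_{\mu}\omega
    + g_{\mu\nu}\,\partial^{\lambda}\omega .
\end{align}
```

The curvature transformations, and the invariance of the Weyl tensor noted above, follow from these by direct computation.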
https://en.wikipedia.org/wiki/Weyl_transformation
In general relativity, the Weyl–Lewis–Papapetrou coordinates are used in solutions to the vacuum region surrounding an axisymmetric distribution of mass–energy. They are named for Hermann Weyl, Thomas Lewis, and Achilles Papapetrou. [1] [2] [3] The square of the line element is written in terms of (t, ρ, φ, z), the cylindrical Weyl–Lewis–Papapetrou coordinates in 3+1-dimensional spacetime, and four metric functions λ, ν, ω, and B, which are unknown functions of the spatial non-angular coordinates ρ and z only. [4] Different authors define the functions of the coordinates differently.
https://en.wikipedia.org/wiki/Weyl–Lewis–Papapetrou_coordinates
Whale feces, the excrement of whales, plays a vital role in the ecology of oceans, [2] earning whales the title of "marine ecosystem engineers". This ecological role stems from the nutrients and compounds found in whale feces, which have far-reaching effects on marine life. Nitrogen and iron chelates released by cetacean species benefit the marine food chain and contribute to long-term carbon sequestration. Additionally, whale feces contains a wealth of information about the health, natural history, and ecology of individual animals or groups, including DNA, hormones, toxins, and various other chemicals. Studying whale feces therefore provides valuable insights into the lives of these marine mammals, aiding scientists in understanding their behavior, diet, and overall well-being. The nutrients released through whale feces support phytoplankton growth, enhance the food chain, and contribute to the overall health of the oceans. In addition to feces, the digestive system of sperm whales produces ambergris, a solid, waxy, flammable substance of a dull grey or blackish color which can be found floating on the sea or washed up on the coast. [3] Whales excrete plumes of liquid feces that are flocculent in nature, consisting of loose aggregations of particles. [2] [4] These feces, often found floating on the sea surface after being excreted underwater and before they dissociate, contain undigested hard objects such as squid beaks. [2] [5] Fecal samples are characterized by color, odor, texture, and buoyancy, providing valuable information about the health and ecology of whales. [6] Flatulence has also been recorded in whales. [5] One of the crucial roles of whale feces is in nutrient cycling, particularly the circulation of nitrogen in the ocean. In the Gulf of Maine, whales transport more nitrogen through their feces than all of the rivers in that system combined, enriching both primary and secondary productivity. Additionally, the iron-rich feces of krill-eating whales encourage phytoplankton growth, benefiting the marine food chain and sequestering carbon dioxide for extended periods. The Southern Ocean, rich in nutrients but iron-deficient, experiences increased phytoplankton blooms due to whale feces, which thereby act as a significant carbon sink. Because whales defecate near the water's surface while feeding at the deeper levels where krill is found, their iron-rich fecal matter reverses the typical downward flow of nutrients in the ocean's biological pump, contributing to the "whale pump". This action enhances phytoplankton productivity and supports fish populations. Whales, along with krill, form a positive feedback loop in which their populations contribute to the recycling of iron, further boosting phytoplankton growth. A study in the Gulf of Maine extrapolated from modern levels of nitrogen recycling by marine mammals, such as cetaceans and seals, to estimate levels prior to the advent of commercial culling, and found a former level three times the supply of nitrogen fixed from the atmosphere. Even today, despite the reduction of marine mammal populations and the increase in nitrogen uptake from the atmosphere and nitrogen pollution, the local clustering of marine mammals plays a significant role in maintaining productivity in the regions they frequent.
[1] The enrichment is not only in primary productivity but also in secondary productivity, in the form of more abundant fish populations. [2] The study assumes that whales tend to defecate more commonly in the upper part of the water column, which they frequent for breathing, and that the feces tend to float, while whales feed at deeper levels of the ocean where krill is found. [1] The fecal action of whales thus reverses the usual flow of nutrients in the ocean's "biological pump", which is otherwise driven by the downward flow of "marine snow" and other detritus from surface to bottom. The phenomenon has been termed the "whale pump". [2] The Gulf of Maine study also found that the view of whales and other marine mammals as competitors for fishing, advocated by some nations, is incorrect, as whales play a vital role in maintaining the productivity of phytoplankton and consequently of fish. Culling marine mammal populations threatens the nutrient supply and the productivity of fishing grounds. [2] In addition, the feces of krill-eating whales is rich in iron. [5] The release of iron from whale feces encourages the growth of phytoplankton in the sea, [5] which not only benefits the marine food chain but also sequesters carbon for long periods of time. [5] Phytoplankton that is not consumed in its lifetime perishes, descends through the euphotic zone and settles in the depths of the sea. Phytoplankton sequesters an estimated 2 billion tons of carbon dioxide into the ocean each year, making the ocean a sink of carbon dioxide that holds an estimated 90% of all sequestered carbon. [8] The Southern Ocean is among the largest ranges for phytoplankton and is nutrient-rich in terms of phosphate, nitrate and silicate, while being iron-deficient at the same time. [9] Increases in the nutrient iron result in blooms of phytoplankton. Whale feces is up to 10 million times richer in iron than the surrounding sea water and plays a vital role in providing the iron required for maintaining phytoplankton biomass on the earth. [9] The iron defecation of just the 12,000-strong sperm whale population in the Southern Ocean results in the sequestration of 200,000 tonnes of atmospheric carbon per year. [9] A study of the Southern Ocean found that whales not only recycled iron concentrations vital for phytoplankton, but also formed, along with krill, a major source of sequestered iron in the ocean, holding up to 24% of the iron in the surface waters of the Southern Ocean. Whales form part of a positive feedback loop, and if whale populations are allowed to recover in the Southern Ocean, greater productivity of phytoplankton will result as larger amounts of iron are recycled through the system. [10] Accordingly, whales are referred to as "marine ecosystem engineers". [11] A study conducted in the Fernando de Noronha Archipelago of the southwest Atlantic Ocean revealed that the feces and vomit of spinner dolphins (Stenella longirostris) formed part of the diet of twelve species of reef fish from seven different families. The most prolific consumer was the black triggerfish or black durgon (Melichthys niger), which could even discern the postures dolphins assumed prior to voiding and position itself for effective feeding. All of these offal-eating fish species are recorded plankton eaters, and it is considered that this type of feeding may represent a change from their usual diet of drifting plankton.
[12] Whales, along with other large animals, play a significant role in the transport of nutrients in global ecological cycles. Population reduction of whales and other large animals has severely affected the efficacy of the pump mechanisms which transport nutrients from the deep sea to the continental shelves. [13] Whale feces contain DNA, hormones, toxins and other chemicals which can give information on many aspects of the health, natural history and ecology of the animal concerned. Feces have also provided information on the bacteria present in the gastro-intestinal tract of whales and dolphins. A 2016 research study used fecal analysis of wild orcas that spent the summer season in the Salish Sea to quantitatively estimate their prey species. The analysis was consistent with earlier estimates based on surface prey remains. The study found that salmonids comprised over 98.6% of the identified genetic sequences, with Chinook and Coho salmon as the most important prey species. [14] A research study published in 2012 on the impacts of overfishing and maritime traffic on a wild population of Southern Resident Killer Whales off the western seaboard of North America was based on the chemical analysis of fecal specimens of orcas. The study aimed to find the reasons for orca decline, for which three causes were hypothesized: disturbance by boats and ships; lack of food; and long-term exposure to toxins which accumulate in whale fat, namely DDT, PBDE and PCB. [15] Fecal samples of orcas were detected with the help of a trained spotter dog, a black Labrador retriever named "Tucker" from the firm Conservation Canines. The dog could detect fresh scat from orcas while riding in a boat 200 to 400 meters (660 to 1,310 ft) behind a pod. The fecal samples collected were tested for the presence and quantity of DNA, as well as stress, nutrition and reproductive hormones, and toxins such as PBDE, PCB, and DDT congeners. [16] The fecal samples were analyzed over time and correlated with boat densities and with the quantity of Fraser River Chinook salmon, the main constituent of the orca diet in those regions. Boat densities and salmon abundance over time were estimated independently. [16] Glucocorticoids in orcas rise when the animal faces psychological tension or starvation. The study found that prey was most abundant in August, when boats were also most abundant; conversely, the availability of salmon was lowest in late fall, when the level of marine boat traffic was also the least. Glucocorticoid levels were highest in the fall, when there was a shortage of prey, and lowest during August, at the height of food availability. [16] Similarly, thyroid hormones correlate with nutritional stress, enabling animals to lower their metabolic rates to better conserve declining nutrition. The Southern Resident Killer Whales arrive in the study area in spring, after having fed on salmon from early spring runs on other rivers, when their thyroid hormone levels are highest. The hormone levels decline as the animals arrive in the study area, plateau during the period of fish availability, and decline further during the period of nutritional scarcity. [16] The toxin analysis was ongoing at the time of publication of the research. So far, the levels of congeners of the three toxins found in whale feces have been proportionate to the levels of these chemicals measured in samples of orca flesh taken during biopsy.
The results indicate that restoring the abundance and quality of available prey is an important first measure for restoring orca populations in the area under study. [16] An analysis of the feces of two dolphin species and one whale species led to the discovery of a new species of Helicobacter, Helicobacter cetorum, a bacterium associated with clinical symptoms and gastritis in cetaceans. [17]
https://en.wikipedia.org/wiki/Whale_feces
The Wharton olefin synthesis, or Wharton reaction, is a chemical reaction that involves the reduction of α,β-epoxy ketones using hydrazine to give allylic alcohols. [1] [2] [3] This reaction, introduced in 1961 by P. S. Wharton, is an extension of the Wolff–Kishner reduction. The general features of this synthesis are: 1) the epoxidation of α,β-unsaturated ketones, usually achieved under basic conditions using hydrogen peroxide solution in high yield; 2) treatment of the epoxy ketone with 2–3 equivalents of hydrazine hydrate in the presence of substoichiometric amounts of acetic acid. This reaction occurs rapidly at room temperature with the evolution of nitrogen and the formation of an allylic alcohol. [1] It can be used to synthesize carenol compounds. Wharton's initial procedure has since been improved. [4] The mechanism of the Wharton reaction begins with reaction of the ketone (1) with hydrazine to form a hydrazone (2). Rearrangement of the hydrazone gives intermediate 3, which can decompose, giving off nitrogen gas and forming the desired product 4. The final decomposition can proceed by an ionic or a radical pathway, depending on the reaction temperature, the solvent used, and the structure of intermediate 3. [5] The Wharton olefin synthesis allows the transformation of an α,β-unsaturated ketone into an allylic alcohol. The epoxide starting material can be generated by a number of methods, most commonly by reaction of the corresponding alkene with hydrogen peroxide or m-chloroperoxybenzoic acid. The Wharton reaction also commonly suffers from further reduction of the allylic alcohol product to the aliphatic alcohol, which is thought to be due to the oxidation of hydrazine to diimide under the conditions employed in the reaction. [6] The classical Wharton olefin synthesis has two main limitations. Nevertheless, the methodology has been implemented in the synthesis of complex molecules.
https://en.wikipedia.org/wiki/Wharton_reaction
What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry is a 2005 non-fiction book by John Markoff. The book details the history of the personal computer, closely tying the ideologies of the collaboration-driven, World War II-era defense research community to the embryonic cooperatives and psychedelics use of the American counterculture of the 1960s. The book follows the history chronologically, beginning with Vannevar Bush's description of his inspirational memex machine in his 1945 article "As We May Think". Markoff describes many of the people and organizations who helped develop the ideology and technology of the computer as we know it today, including Doug Engelbart, Xerox PARC, Apple Computer and Microsoft Windows. Markoff argues for a direct connection between the counterculture of the late 1950s and 1960s (using examples such as Kepler's Books in Menlo Park, California) and the development of the computer industry. The book also discusses the early split between the ideas of commercial and free-supply computing. The main part of the title, "What the Dormouse Said", is a reference to a line at the end of the 1967 Jefferson Airplane song "White Rabbit": "Remember what the dormouse said: feed your head", [1] which is itself a reference to Lewis Carroll's Alice's Adventures in Wonderland.
https://en.wikipedia.org/wiki/What_the_Dormouse_Said
Whatman plc is a Cytiva brand specialising in laboratory filtration products and separation technologies. Whatman products cover a range of laboratory applications that require filtration, sample collection (cards and kits), blotting, lateral flow components, flow-through assays and other general laboratory accessories. Formerly an independent company, Whatman plc was acquired in 2008 by GE Healthcare, which became Cytiva in April 2020. The papermaker James Whatman the Elder (1702–1759) founded the Whatman papermaking enterprise in 1740 in Maidstone, Kent, England. He made revolutionary advances to the craft in England and is credited [2] as the inventor of wove paper (or vélin), an innovation used for high-quality art and printing. His son, James Whatman the Younger (1741–1798), further developed the company's techniques. [3] At a time when the craft was based in smaller paper mills, Whatman innovations led to the large-scale and widespread industrialisation of paper manufacturing. John Baskerville (1707–1775), who needed paper that would take a light impression of the printing plate, approached Whatman; the resultant paper was used for an edition of Virgil's poetry, embellished with Baskerville's typography and designs. [3] The earliest examples of wove paper, bearing his watermark, appeared after 1740. [4] The Whatman business is credited with the invention of the wove wire mesh used to mould and align pulp fibres, [2] the principal method used in the mass production of most modern paper. The Whatmans held a part interest in the establishment at Turkey Mill, near Maidstone, after 1740; [1] this was wholly acquired through the elder Whatman's marriage to Ann Harris. [2] "Handmade" paper bearing the Whatman mark continued in production for special editions and art books [3] until 2002. [5] On 4 February 2008, GE Healthcare, a unit of General Electric, acquired Whatman plc at 270p in cash per Whatman share, valuing Whatman at approximately £363 million (approximately $713 million). The last production run at Maidstone (Springfield Mill) occurred on 17 June 2014. [5] The Whatman product range covers the filtration, sample collection, blotting and related laboratory applications described above.
https://en.wikipedia.org/wiki/Whatman_plc
A wheat lamp is a type of incandescent light designed for use in underground mining, named for inventor Grant Wheat and manufactured by Koehler Lighting Products in Wilkes-Barre, Pennsylvania, United States, a region known for extensive mining activity. [1] [2] A safety lamp designed for use in potentially hazardous atmospheres containing firedamp and coal dust, the lamp is mounted on the front of the miner's helmet and powered by a wet cell battery worn on the miner's belt. The average wheat lamp uses a three to five watt bulb, which will typically operate for five to 16 hours depending on the amp-hour capacity of the battery and the current draw of the bulb being used. [3] A grain-of-wheat lamp is an unrelated, very small incandescent lamp used in medical and optical instruments, as well as for illuminating miniature railroad and similar models.
https://en.wikipedia.org/wiki/Wheat_lamp
A Wheatstone bridge is an electrical circuit used to measure an unknown electrical resistance by balancing two legs of a bridge circuit, one leg of which includes the unknown component. The primary benefit of the circuit is its ability to provide extremely accurate measurements (in contrast with something like a simple voltage divider). [1] Its operation is similar to that of the original potentiometer. The Wheatstone bridge was invented by Samuel Hunter Christie (sometimes spelled "Christy") in 1833 and improved and popularized by Sir Charles Wheatstone in 1843. [2] One of the Wheatstone bridge's initial uses was for soil analysis and comparison. [3] In the figure, R_x is the fixed, yet unknown, resistance to be measured. R_1, R_2, and R_3 are resistors of known resistance, and the resistance of R_2 is adjustable. R_2 is adjusted until the bridge is "balanced" and no current flows through the galvanometer V_g. At this point, the potential difference between the two midpoints (B and D) is zero. Therefore the ratio of the two resistances in the known leg (R_2 / R_1) is equal to the ratio of the two resistances in the unknown leg (R_x / R_3). If the bridge is unbalanced, the direction of the current indicates whether R_2 is too high or too low. At the point of balance,

R_x = (R_2 / R_1) · R_3.

Detecting zero current with a galvanometer can be done to extremely high precision. Therefore, if R_1, R_2, and R_3 are known to high precision, then R_x can be measured to high precision. Very small changes in R_x disrupt the balance and are readily detected. Alternatively, if R_1, R_2, and R_3 are known but R_2 is not adjustable, the voltage difference across or the current flow through the meter can be used to calculate the value of R_x using Kirchhoff's circuit laws. This setup is frequently used in strain gauge and resistance thermometer measurements, as it is usually faster to read a voltage level off a meter than to adjust a resistance to zero the voltage. At the point of balance, both the voltage and the current between the two midpoints (B and D) are zero. Therefore I_1 = I_2, I_3 = I_x, and V_D = V_B. Because V_D = V_B, it follows that V_DC = V_BC and V_AD = V_AB. Dividing the last two equations member by member and using the above current equalities shows that ADC and ABC form two voltage dividers, with V_G equal to the difference in their output voltages. For a full derivation, Kirchhoff's first law is first used to find the currents at junctions B and D:

I_3 − I_x + I_G = 0,
I_1 − I_G − I_2 = 0.

Then Kirchhoff's second law is used to find the voltage around the loops ABDA and BCDB:

(I_3 · R_3) − (I_G · R_G) − (I_1 · R_1) = 0,
(I_x · R_x) − (I_2 · R_2) + (I_G · R_G) = 0.

When the bridge is balanced, I_G = 0, so the second set of equations can be rewritten as:

I_3 · R_3 = I_1 · R_1    (1)
I_x · R_x = I_2 · R_2    (2)

Then equation (1) is divided by equation (2) and the resulting equation is rearranged, giving R_x = R_3 · (I_3 · I_2) / (I_1 · I_x) · (R_2 / R_1). Because I_3 = I_x and I_1 = I_2, by Kirchhoff's first law the factor (I_3 · I_2) / (I_1 · I_x) cancels out of the above equation. The desired value of R_x is now known to be given as:

R_x = (R_2 / R_1) · R_3.

On the other hand, if the resistance of the galvanometer is high enough that I_G is negligible, it is possible to compute R_x from the three other resistor values and the supply voltage (V_S), or the supply voltage from all four resistor values. To do so, one works out the voltage of each potential divider and subtracts one from the other:

V_G = ( R_x / (R_3 + R_x) − R_2 / (R_1 + R_2) ) · V_S,

where V_G is the voltage of node D relative to node B. The Wheatstone bridge illustrates the concept of a difference measurement, which can be extremely accurate.
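The balance condition and the unbalanced-bridge voltage formula above lend themselves to a quick numerical check. The following Python sketch is illustrative only: the resistor values are made up, the node labeling (R_1 on leg A–B, R_2 on B–C, R_3 on A–D, R_x on D–C) is the assumption used in the formulas above, and the helper names are invented for this example.

```python
# Sketch: Wheatstone bridge with a high-impedance meter (I_G negligible).

def bridge_voltage(r1, r2, r3, rx, vs):
    """V_G, the voltage of node D relative to node B."""
    return (rx / (r3 + rx) - r2 / (r1 + r2)) * vs

def solve_rx(r1, r2, r3, vg, vs):
    """Invert the voltage-divider formula to recover Rx from a measured V_G."""
    k = vg / vs + r2 / (r1 + r2)   # k = Rx / (R3 + Rx)
    return k * r3 / (1.0 - k)

r1, r2, r3, vs = 1000.0, 2000.0, 1500.0, 5.0
rx_true = 3100.0               # at balance Rx would be R2*R3/R1 = 3000 ohms
vg = bridge_voltage(r1, r2, r3, rx_true, vs)
print(f"V_G = {vg:.4f} V")     # small nonzero voltage: bridge is unbalanced
print(f"recovered Rx = {solve_rx(r1, r2, r3, vg, vs):.1f} ohms")   # ~3100.0
```

Setting rx_true to exactly R_2·R_3/R_1 = 3000 ohms drives V_G to zero, reproducing the balance condition.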
Variations on the Wheatstone bridge can be used to measure capacitance, inductance, impedance and other quantities, such as the amount of combustible gases in a sample with an explosimeter. The Kelvin bridge was specially adapted from the Wheatstone bridge for measuring very low resistances. In many cases, the significance of measuring the unknown resistance is related to measuring the impact of some physical phenomenon (such as force, temperature, or pressure), which thereby allows the Wheatstone bridge to be used to measure those quantities indirectly. The concept was extended to alternating current measurements by James Clerk Maxwell in 1865 [4] and further improved as the Blumlein bridge by Alan Blumlein in British Patent no. 323,037, 1928. The Wheatstone bridge is the fundamental bridge, but various modifications of it can be made to measure kinds of resistances for which the fundamental Wheatstone bridge is not suitable.
https://en.wikipedia.org/wiki/Wheatstone_bridge
A wheel is a type of algebra (in the sense of universal algebra) where division is always defined. In particular, division by zero is meaningful. The real numbers can be extended to a wheel, as can any commutative ring. The term wheel is inspired by the topological picture ⊙ of the real projective line together with an extra point ⊥ (bottom element) such that ⊥ = 0/0. [1] [2] A wheel can be regarded as the equivalent of a commutative ring (and semiring) where addition and multiplication are not a group but respectively a commutative monoid and a commutative monoid with involution. [2] A wheel is an algebraic structure (W, 0, 1, +, ·, /), in which W is a set, 0 and 1 are elements of W, + and · are binary operations, and / is a unary operation, satisfying axioms under which + and · are commutative and associative with identities 0 and 1 respectively, and / is an involution compatible with multiplication. Wheels replace the usual division as a binary operation with multiplication together with the unary operation /x applied to one argument, similar (but not identical) to the multiplicative inverse x^−1, such that a/b becomes shorthand for a · /b = /b · a, but neither a · b^−1 nor b^−1 · a in general. The rules of algebra are modified so that, for example, 0 · x ≠ 0 and x/x ≠ 1 in general. Further identities can be derived from the axioms, in which the negation −x is defined by −x = a · x, and x − y = x + (−y), if there is an element a such that 1 + a = 0 (thus, in the general case, x − x ≠ 0). However, for values of x satisfying 0x = 0 and 0/x = 0, we recover the usual x − x = 0 and x/x = 1. If negation can be defined as above, then the subset {x ∣ 0x = 0} is a commutative ring, and every commutative ring is such a subset of a wheel. If x is an invertible element of the commutative ring, then x^−1 = /x. Thus, whenever x^−1 makes sense, it is equal to /x, but the latter is always defined, even when x = 0. [1] Let A be a commutative ring, and let S be a multiplicative submonoid of A. Define the congruence relation ∼_S on A × A via

(x_1, x_2) ∼_S (y_1, y_2) if and only if there exist s, s′ ∈ S such that (s·x_1, s·x_2) = (s′·y_1, s′·y_2).

Define the wheel of fractions of A with respect to S as the quotient A × A / ∼_S (denoting the equivalence class containing (x_1, x_2) as [x_1, x_2]), with the operations

0 = [0, 1],   1 = [1, 1],
[x_1, x_2] + [y_1, y_2] = [x_1·y_2 + x_2·y_1, x_2·y_2],
[x_1, x_2] · [y_1, y_2] = [x_1·y_1, x_2·y_2],
/[x_1, x_2] = [x_2, x_1].

In general, this structure is not a ring unless it is trivial, since 0 · x ≠ 0 in the usual sense: with x = [0, 0] we get 0 · x = [0, 0], and [0, 0] = [0, 1] would imply that ∼_S is an improper relation on our wheel W, because [0, 0] = [0, 1] implies 0 ∈ S, which is not true in general. [1] The special case of the above construction starting with a field produces a projective line extended to a wheel by adjoining a bottom element denoted ⊥, where 0/0 = ⊥.
The projective line is itself an extension of the original field by an element ∞, where z/0 = ∞ for any element z ≠ 0 in the field. However, 0/0 is still undefined on the projective line; it becomes defined in its extension to a wheel. Starting with the real numbers, the corresponding projective "line" is geometrically a circle, and the extra point 0/0 gives the shape that is the source of the term "wheel". Starting instead with the complex numbers, the corresponding projective "line" is a sphere (the Riemann sphere), and the extra point gives a 3-dimensional version of a wheel.
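A small computational sketch can make the wheel-of-fractions operations above concrete. The following Python class is an illustration written for this article, not library code: it implements the wheel of fractions of the integers with S taken as the nonzero integers, normalizing each pair by the gcd (and a sign convention) so that equivalent pairs get the same representative. Here [0, 0] plays the role of ⊥ and [1, 0] the role of ∞.

```python
from math import gcd

class Wheel:
    """Wheel of fractions of the integers; pairs [a, b] up to ~_S."""

    def __init__(self, a, b):
        g = gcd(a, b)                 # note: gcd(0, 0) == 0 in Python
        if g:
            a, b = a // g, b // g
            lead = a if a != 0 else b
            if lead < 0:              # fix the sign of the representative
                a, b = -a, -b
        self.pair = (a, b)

    def __add__(self, o):             # [a,b] + [c,d] = [a*d + b*c, b*d]
        (a, b), (c, d) = self.pair, o.pair
        return Wheel(a * d + b * c, b * d)

    def __mul__(self, o):             # [a,b] * [c,d] = [a*c, b*d]
        (a, b), (c, d) = self.pair, o.pair
        return Wheel(a * c, b * d)

    def inv(self):                    # the unary /: /[a,b] = [b,a]
        a, b = self.pair
        return Wheel(b, a)

    def __truediv__(self, o):         # x / y is shorthand for x * /y
        return self * o.inv()

    def __repr__(self):
        return {(0, 0): "BOTTOM", (1, 0): "INF"}.get(
            self.pair, f"{self.pair[0]}/{self.pair[1]}")

ZERO, ONE = Wheel(0, 1), Wheel(1, 1)
x = Wheel(2, 3)
print(x / x)                # 1/1   (here 0x = 0, so x/x = 1 as in a ring)
print(ONE / ZERO)           # INF
print(ZERO / ZERO)          # BOTTOM
print(ZERO * (ONE / ZERO))  # BOTTOM: 0 * INF = [0, 0], illustrating 0x != 0
```

The last line shows the failure of 0 · x = 0 discussed above: multiplying 0 = [0, 1] by ∞ = [1, 0] yields [0, 0] = ⊥ rather than 0.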
https://en.wikipedia.org/wiki/Wheel_theory
In civil engineering, a wheel tractor-scraper (also known as a land scraper, land leveler, tournapull or simply a scraper) is a type of heavy equipment used for earthmoving. It has a pan or hopper for loading and carrying material. The pan has a tapered horizontal front cutting edge that cuts into the soil like a carpenter's plane or cheese slicer and fills the hopper, which has a movable ejection system. The horsepower of the machine, the depth of the cut, the type of material, and the slope of the cut area affect how quickly the pan is filled. When full, the pan is raised, the apron is closed, and the scraper transports its load to the fill area. There the pan height is set and the lip (the bottom edge of the apron) is opened, so that the ejection system can be engaged for dumping the load. The forward speed of the machine affects how big an area is covered by the load: a high pan height and slow speed will dump the load over a short distance, while with the pan set close to the ground, a higher speed will spread the material more thinly over a larger area. In an "elevating scraper", a conveyor moves material from the cutting edge into the hopper. R. G. LeTourneau conceived the idea of the self-propelled motor scraper while recovering from a near-fatal auto accident. An earthmoving contractor and dealer in bulldozer accessories, he envisaged a pulled trailer that could excavate and pick up earth as it moved. He approached the bulldozer manufacturer Caterpillar in the mid-1930s with his idea, but it was turned down, so he founded his own company. The first Tournapull, called the Model A, was rolled out of his factory and into trials in 1937. [1] [2] [3] The concept was further developed by the LeTourneau-Westinghouse Company. [4] Most current scrapers have two axles, although historically tri-axle configurations were dominant. The scraper is a large piece of equipment used in mining, construction, agriculture and other earthmoving applications. The rear part has a vertically movable hopper (also known as the bowl) with a sharp horizontal front edge. The hopper can be hydraulically lowered and raised. When the hopper is lowered, the front edge cuts into the soil or clay like a plane and fills the hopper. When the hopper is full (8 to 34 m³ or 10 to 44 cu yd heaped, depending on type), it is raised and closed with a vertical blade (known as the apron). The scraper transports its load to the fill area, where the blade is raised and the back panel of the hopper, known as the ejector, is hydraulically pushed forward so that the load tumbles out. The empty scraper then returns to the cut site and repeats the cycle. On the elevating scraper, the bowl is filled by a conveyor arrangement fitted with horizontal flights that move the material engaged by the cutting edge into the bowl as the machine moves forward. Elevating scrapers do not require assistance from push-tractors. The pioneer developer of the elevating scraper was the Hancock Manufacturing Company of Lubbock, Texas, USA. Scrapers can be very efficient on short hauls where the cut and fill areas are close together and have sufficient length to fill the hopper. The heavier scraper types have two engines ("tandem powered"), one driving the front wheels and one driving the rear wheels, with engines of up to 400 kW (536 hp). Multiple scrapers can work together in a push-pull fashion, but this requires a long cut area. Smaller scrapers may be towed by a bulldozer.
https://en.wikipedia.org/wiki/Wheel_tractor-scraper
A wheel washing system is a device used to clean the tires of trucks when they are leaving a site, helping to control and eliminate pollution of public roads. [1] The installation can be made in or above the ground, for either temporary or permanent applications. There are two types of wheel washing systems: roller systems and drive-through systems. [2]
https://en.wikipedia.org/wiki/Wheel_washing_system
In both road and rail vehicles, the wheelbase is the horizontal distance between the centers of the front and rear wheels. For road vehicles with more than two axles (e.g. some trucks), the wheelbase is the distance between the steering (front) axle and the centerpoint of the driving axle group. In the case of a tri-axle truck, the wheelbase would be the distance between the steering axle and a point midway between the two rear axles. [1] At equilibrium, the total torque of the forces acting on a vehicle is zero. Therefore, the wheelbase is related to the force on each pair of tires by the following formulas:

F_f = (d_r / L) · m · g,
F_r = (d_f / L) · m · g,

where F_f is the force on the front tires, F_r is the force on the rear tires, L is the wheelbase, d_r is the distance from the center of mass (CM) to the rear wheels, d_f is the distance from the center of mass to the front wheels (d_f + d_r = L), m is the mass of the vehicle, and g is the acceleration due to gravity. So, for example, when a truck is loaded, its center of gravity shifts rearward and the force on the rear tires increases, causing the vehicle to ride lower. The amount the vehicle sinks will depend on counteracting factors, such as the size of the tires, the tire pressure, and the spring rate of the suspension. If the vehicle is accelerating or decelerating, extra torque is placed on the rear or front tires respectively. The equations relating the wheelbase, the height of the CM above the ground, and the force on each pair of tires become:

F_f = (d_r / L) · m · g − (h_cm / L) · m · a,
F_r = (d_f / L) · m · g + (h_cm / L) · m · a,

where F_f is the force on the front tires, F_r is the force on the rear tires, d_r is the distance from the CM to the rear wheels, d_f is the distance from the CM to the front wheels, L is the wheelbase, m is the mass of the vehicle, g is the acceleration of gravity (approx. 9.8 m/s²), h_cm is the height of the CM above the ground, and a is the acceleration (or deceleration, if the value is negative). So, as is common experience, when the vehicle accelerates, the rear usually sinks and the front rises, depending on the suspension; likewise, when braking, the front noses down and the rear rises. [2] Because of the effect the wheelbase has on the weight distribution of the vehicle, wheelbase dimensions are crucial to its balance and steering. For example, a car with a much greater weight load on the rear tends to understeer, due to the lack of load (force) on the front tires and therefore of grip (friction) from them. This is why it is crucial, when towing a single-axle caravan, to distribute the caravan's weight so that the down-thrust on the tow-hook is about 100 pounds-force (400 N). Likewise, a car may oversteer or even "spin out" if there is too much force on the front tires and not enough on the rear tires. Also, when turning, there is lateral torque placed upon the tires, which imparts a turning force that depends upon the length of the tire distances from the CM.
Thus, in a car with a short wheelbase (SWB), the short lever arm from the CM to the rear wheel will result in a greater lateral force on the rear tire, which means greater acceleration and less time for the driver to adjust and prevent a spin out, or worse. Wheelbases provide the basis for one of the most common vehicle size class systems. Some vehicles are offered in long-wheelbase variants to increase the spaciousness and therefore the luxury of the vehicle. This practice can often be found on full-size cars like the Mercedes-Benz S-Class, but ultra-luxury vehicles such as the Rolls-Royce Phantom and even large family cars like the Rover 75 have come in 'limousine' versions. Prime Minister of the United Kingdom Tony Blair was given a long-wheelbase version of the Rover 75 for official use, [3] and even some SUVs like the VW Tiguan and Jeep Wrangler are available with long wheelbases. In contrast, coupé varieties of some vehicles such as the Honda Accord are usually built on shorter wheelbases than the sedans they are derived from. The wheelbase on many commercially available bicycles and motorcycles is so short, relative to the height of their centers of mass, that they are able to perform stoppies and wheelies. In skateboarding, the word 'wheelbase' is used for the distance between the two inner pairs of mounting holes on the deck. This is different from the distance between the rotational centers of the two wheel pairs. A reason for this alternative usage is that decks are sold with prefabricated holes but usually without trucks and wheels; it is therefore easier to use the prefabricated holes for measuring and describing this characteristic of the deck. A common misconception is that the choice of wheelbase is influenced by the height of the skateboarder. However, the length of the deck would then be a better candidate, because the wheelbase affects characteristics useful at different speeds or terrains regardless of the height of the skateboarder. For example, a deck with a long wheelbase, say 22 inches (55.9 cm), will respond slowly to turns, which is often desirable at high speeds, whereas a deck with a short wheelbase, say 14 inches (35.6 cm), will respond quickly to turns, which is often desirable when skating backyard pools or other terrain requiring quick or intense turns. In rail vehicles, the wheelbase follows a similar concept. However, since the wheels may be of different sizes (for example, on a steam locomotive), the measurement is taken between the points where the wheels contact the rail, not between the centers of the wheels. On vehicles where the wheelsets (axles) are mounted inside the vehicle frame (mostly in steam locomotives), the wheelbase is the distance between the front-most and rear-most wheelsets. On vehicles where the wheelsets are mounted on bogies (American: trucks), several distinct wheelbase measurements can be distinguished. The wheelbase affects the rail vehicle's capability to negotiate curves: short-wheelbase vehicles can negotiate sharper curves, and on some longer-wheelbase locomotives the inner wheels may lack flanges in order to pass through curves. The wheelbase also affects the load the vehicle imposes on the track, track infrastructure and bridges. All other conditions being equal, a shorter-wheelbase vehicle represents a more concentrated load on the track than a longer-wheelbase vehicle.
As railway lines are designed to take a predetermined maximum load per unit of length (tonnes per meter, or pounds per foot), the rail vehicles' wheelbase is designed according to their intended gross weight. The higher the gross weight, the longer the wheelbase must be.
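Since the axle-load formulas given earlier are simple statics, a brief worked example may help. The following Python sketch uses made-up vehicle parameters (the 1500 kg mass, 2.7 m wheelbase, and braking deceleration are illustrative, not data from the text) to evaluate the static loads and the load transfer under braking:

```python
# Sketch: static axle loads and longitudinal load transfer,
# using the wheelbase formulas above. Vehicle numbers are illustrative.

G = 9.8  # m/s^2, acceleration of gravity

def axle_forces(m, L, d_f, d_r, h_cm, a=0.0):
    """Return (front, rear) tire-pair forces in newtons.

    m: vehicle mass [kg]; L: wheelbase [m] (must equal d_f + d_r);
    d_f, d_r: CM-to-front / CM-to-rear distances [m];
    h_cm: CM height above ground [m];
    a: longitudinal acceleration [m/s^2] (negative while braking).
    """
    assert abs((d_f + d_r) - L) < 1e-9
    f_front = (d_r / L) * m * G - (h_cm / L) * m * a
    f_rear = (d_f / L) * m * G + (h_cm / L) * m * a
    return f_front, f_rear

# A 1500 kg car, 2.7 m wheelbase, CM 1.2 m behind the front axle, 0.55 m high.
static = axle_forces(1500, 2.7, 1.2, 1.5, 0.55)
braking = axle_forces(1500, 2.7, 1.2, 1.5, 0.55, a=-6.0)
print(f"static  front/rear: {static[0]:.0f} N / {static[1]:.0f} N")
print(f"braking front/rear: {braking[0]:.0f} N / {braking[1]:.0f} N")
# Braking (a < 0) increases the front force and decreases the rear force,
# matching the "front noses down" behavior described above.
```

The two forces always sum to m·g; acceleration merely shifts load between the axles in proportion to h_cm / L.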
https://en.wikipedia.org/wiki/Wheelbase
The Wheeler Jump is a subroutine call methodology that was used on some early computers that lacked hardware support for saving the return address. The concept was developed by David Wheeler while working on the pioneering EDSAC machine in the 1950s. [1] EDSAC had not been built with subroutines in mind, and lacked a suitable processor register or a hardware stack that might allow the return address to be easily stored. Wheeler's solution was a particular way to write the subroutine code. To implement it, the last line of the subroutine was a "jump to this address" instruction, which would normally be followed by a memory location. In a Wheeler subroutine, this address was initially set to a dummy value, say 0. To call the routine, the address of the caller would be placed in the accumulator and the code would then jump to the starting point of the routine. The first instructions in the routine would calculate the return address from the value in the accumulator (typically the next memory location, so an increment suffices) and write the result into the dummy address previously set aside. When the routine runs its course, it naturally reaches its final instruction, which now says "jump to the return address". As writing to memory is slow compared to register access, this methodology is not particularly fast. It also cannot express recursion. [2] The addition of new registers for this sort of duty was a key design goal of EDSAC 2. The technique can be demonstrated on a simple byte-oriented, accumulator-based machine with a single register, A: the caller places its own address in A and jumps to the subroutine, whose final jump (at address 90 in the example below) is patched at run time so that, when the code completes, it naturally returns to location 13, the next instruction after the call.
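Since the original assembler listing is not reproduced above, here is a small Python simulation of the pattern. The toy instruction set and opcode names (LOADI, ADDI, PATCH, JUMP, HALT) are invented for illustration; only the mechanism itself (caller's address in the accumulator, subroutine patching its own final jump) follows the description above, and the addresses 90 and 13 are chosen to match it.

```python
# Toy simulation of a Wheeler Jump on an accumulator machine.
# Memory maps address -> (opcode, operand); the opcodes are invented.

def run(memory, pc):
    acc = 0
    while True:
        op, arg = memory[pc]
        if op == "LOADI":     # A := immediate
            acc, pc = arg, pc + 1
        elif op == "ADDI":    # A := A + immediate
            acc, pc = acc + arg, pc + 1
        elif op == "PATCH":   # self-modifying store: rewrite the operand
            memory[arg] = (memory[arg][0], acc)   # of the word at 'arg'
            pc += 1
        elif op == "JUMP":    # unconditional jump
            pc = arg
        elif op == "HALT":
            return pc

memory = {
    11: ("LOADI", 12),   # caller: place the call site's address in A
    12: ("JUMP", 80),    # ...and jump to the subroutine
    13: ("HALT", 0),     # execution resumes here after the call
    80: ("ADDI", 1),     # subroutine: return address = call site + 1 = 13
    81: ("PATCH", 90),   # plant it as the operand of the final jump
    82: ("JUMP", 90),    # (the subroutine body would sit here)
    90: ("JUMP", 0),     # dummy operand, rewritten at run time
}
print(run(memory, pc=11))   # prints 13: the patched jump at 90 returned there
```

Running the sketch, the PATCH at 81 turns the final instruction at 90 into ("JUMP", 13), so control falls back to the HALT at location 13, exactly as the prose describes.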
https://en.wikipedia.org/wiki/Wheeler_Jump
The incremental inductance rule, attributed to Harold Alden Wheeler [1] by Gupta [2]: 101 and others, [3]: 80 is a formula used to compute skin effect resistance and internal inductance in parallel transmission lines when the frequency is high enough that the skin effect is fully developed. Wheeler's concept is that the internal inductance of a conductor is the difference between the computed external inductance and the external inductance computed with all the conductive surfaces receded by one half of the skin depth. The skin effect resistance is taken to be equal to the reactance of the internal inductance: R_skin = ω L_int. Gupta [2]: 67 gives a general equation with partial derivatives replacing the difference of inductances. Wadell [4]: 27 and Gupta [2]: 67 state that the thickness and corner radius of the conductors should be large with respect to the skin depth; Garg [3]: 80 further states that the thickness of the conductors must be at least four times the skin depth. Garg [3]: 80 states that the calculation is unchanged if the dielectric is taken to be air, and that L = Z_c / V_p, where Z_c is the characteristic impedance and V_p the velocity of propagation, i.e. the speed of light. Paul (2007) [5] [a]: 149 disputes the accuracy of R_skin = ω L_int at very high frequency for rectangular conductors such as stripline and microstrip, due to a non-uniform distribution of current on the conductor: at very high frequency, the current crowds into the corners of the conductor.
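As a concrete illustration of the rule, consider a coaxial line, a worked example supplied here (the article itself treats parallel lines generally, and the geometry and frequency values below are made up). The external inductance per unit length is L = (μ0/2π)·ln(b/a); receding both conductive surfaces by half a skin depth (the inner radius a shrinking into the inner conductor, the outer radius b growing into the shield) and taking the difference gives the internal inductance, and R_skin = ω·L_int:

```python
import math

# Wheeler's incremental inductance rule applied to a coaxial line.
MU0 = 4e-7 * math.pi          # H/m, permeability of free space

def l_ext_coax(a, b):
    """External inductance per unit length of a coax, H/m."""
    return MU0 / (2 * math.pi) * math.log(b / a)

def skin_depth(f, sigma):
    """Skin depth in a good conductor, m."""
    return math.sqrt(2.0 / (2 * math.pi * f * MU0 * sigma))

def incremental_rule(a, b, f, sigma):
    """Return (L_int, R_skin) per unit length via Wheeler's rule."""
    d = skin_depth(f, sigma)
    # Recede every conducting surface into the metal by half a skin depth:
    l_int = l_ext_coax(a - d / 2, b + d / 2) - l_ext_coax(a, b)
    return l_int, 2 * math.pi * f * l_int    # R_skin = omega * L_int

a, b = 0.5e-3, 1.75e-3        # copper coax radii, m (illustrative values)
f, sigma = 1e9, 5.8e7         # 1 GHz; copper conductivity, S/m
l_int, r_skin = incremental_rule(a, b, f, sigma)

# Cross-check against the closed-form surface-resistance result
# R ~ (1 / (2*pi*sigma*delta)) * (1/a + 1/b):
d = skin_depth(f, sigma)
r_check = (1 / (2 * math.pi * sigma * d)) * (1 / a + 1 / b)
print(f"L_int = {l_int * 1e9:.4f} nH/m, R_skin = {r_skin:.3f} ohm/m")
print(f"closed-form check:            {r_check:.3f} ohm/m")
```

At 1 GHz the skin depth in copper is about 2.1 µm, far smaller than the conductor dimensions, so the validity conditions quoted above are comfortably met, and the rule reproduces the closed-form skin-resistance value to within a fraction of a percent.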
https://en.wikipedia.org/wiki/Wheeler_incremental_inductance_rule
When Engineering Fails is a 1998 film written and presented by Henry Petroski. [1] It examines the causes of major disasters, including the explosion of the Space Shuttle Challenger, and compares the risks of computer-assisted design with those of traditional engineering methods. The original title of the film was To Engineer Is Human, the title of Petroski's non-fiction book about design failures. [2]
https://en.wikipedia.org/wiki/When_Engineering_Fails
When Topology Meets Chemistry: A Topological Look At Molecular Chirality is a book in chemical graph theory on the graph-theoretic analysis of chirality in molecular structures. It was written by Erica Flapan, based on a series of lectures she gave in 1996 at the Institut Henri Poincaré, [1] and was published in 2000 by the Cambridge University Press and the Mathematical Association of America as the first volume in their shared Outlooks book series. [2] A chiral molecule is a molecular structure that is different from its mirror image. This property, while seemingly abstract, can have significant consequences in biochemistry, where the shape of molecules is essential to their chemical function [3] and where a chiral molecule can have very different biological activity from its mirror-image molecule. [4] When Topology Meets Chemistry concerns the mathematical analysis of molecular chirality. The book has seven chapters, beginning with an introductory overview and ending with a chapter on the chirality of DNA molecules. [2] Other topics covered through the book include the rigid geometric chirality of tree-like molecular structures such as tartaric acid, and the stronger topological chirality of molecules that cannot be deformed into their mirror image without breaking and re-forming some of their molecular bonds. It discusses results of Flapan and Jonathan Simon on molecules with the molecular structure of Möbius ladders, according to which every embedding of a Möbius ladder with an odd number of rungs is chiral, while Möbius ladders with an even number of rungs have achiral embeddings. It uses the symmetries of graphs in a result that the symmetries of certain graphs can always be extended to topological symmetries of three-dimensional space, from which it follows that non-planar graphs with no self-inverse symmetry are always chiral. It discusses graphs for which every embedding is topologically knotted or linked, and it includes material on the use of knot invariants to detect topological chirality. [1] [2] [4] [5] The book is self-contained, requiring only an undergraduate level of mathematics, [3] [5] and includes many exercises, [2] making it suitable for use as a textbook at both the advanced undergraduate and introductory graduate levels. [1] Reviewer Buks van Rensburg describes the book's presentation as "efficient and intuitive", and recommends the book to "every mathematician or chemist interested in the notions of chirality and symmetry". [6]
https://en.wikipedia.org/wiki/When_Topology_Meets_Chemistry
Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (hereinafter WMCF) is a book by George Lakoff, a cognitive linguist, and Rafael E. Núñez, a psychologist. Published in 2000, WMCF seeks to found a cognitive science of mathematics, a theory of embodied mathematics based on conceptual metaphor. For the authors, mathematics makes up a part of the human conceptual system that is special, and a common type of conceptual blending process would seem to apply to the entire mathematical procession. Nikolay Lobachevsky said, "There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world." Lakoff and Núñez's avowed purpose is to begin laying the foundations for a truly scientific understanding of mathematics, one grounded in processes common to all human cognition. They find that four distinct but related processes metaphorically structure basic arithmetic: object collection, object construction, using a measuring stick, and moving along a path. WMCF builds on earlier books by Lakoff (1987) and Lakoff and Johnson (1980, 1999), which analyze such concepts of metaphor and image schemata from second-generation cognitive science. Some of the concepts in these earlier books, such as the interesting technical ideas in Lakoff (1987), are absent from WMCF. Lakoff and Núñez hold that mathematics results from the human cognitive apparatus and must therefore be understood in cognitive terms. WMCF advocates (and includes some examples of) a cognitive idea analysis of mathematics which analyzes mathematical ideas in terms of the human experiences, metaphors, generalizations, and other cognitive mechanisms giving rise to them. A standard mathematical education does not develop such idea analysis techniques because it does not pursue considerations of (a) what structures of the mind allow it to do mathematics or (b) the philosophy of mathematics. Lakoff and Núñez start by reviewing the psychological literature, concluding that human beings appear to have an innate ability, called subitizing, to count, add, and subtract up to about 4 or 5. They document this conclusion by reviewing the literature, published in recent decades, describing experiments with infant subjects. For example, infants quickly become excited or curious when presented with "impossible" situations, such as having three toys appear when only two were initially present. The authors argue that mathematics goes far beyond this very elementary level thanks to a large number of metaphorical constructions. For example, the Pythagorean position that all is number, and the associated crisis of confidence that came about with the discovery of the irrationality of the square root of two, arise solely from a metaphorical relation between the length of the diagonal of a square and the possible numbers of objects. Much of WMCF deals with the important concepts of infinity and of limit processes, seeking to explain how finite humans living in a finite world could ultimately conceive of the actual infinite. Thus much of WMCF is, in effect, a study of the epistemological foundations of the calculus. Lakoff and Núñez conclude that while the potential infinite is not metaphorical, the actual infinite is. Moreover, they deem all manifestations of actual infinity to be instances of what they call the "Basic Metaphor of Infinity", as represented by the ever-increasing sequence 1, 2, 3, ... WMCF emphatically rejects the Platonistic philosophy of mathematics.
They emphasize that all we know and can ever know is human mathematics, the mathematics arising from the human intellect. The question of whether there is a "transcendent" mathematics independent of human thought is, on this view, meaningless, like asking if colors are transcendent of human thought: colors are only varying wavelengths of light, and it is our interpretation of physical stimuli that makes them colors. WMCF (p. 81) likewise criticizes the emphasis mathematicians place on the concept of closure. Lakoff and Núñez argue that the expectation of closure is an artifact of the human mind's ability to relate fundamentally different concepts via metaphor. WMCF concerns itself mainly with proposing and establishing an alternative view of mathematics, one grounding the field in the realities of human biology and experience. It is not a work of technical mathematics or philosophy. Lakoff and Núñez are not the first to argue that conventional approaches to the philosophy of mathematics are flawed. For example, they do not seem all that familiar with the content of Davis and Hersh (1981), even though the book warmly acknowledges Hersh's support. Lakoff and Núñez cite Saunders Mac Lane (the inventor, with Samuel Eilenberg, of category theory) in support of their position. Mathematics, Form and Function (1986), an overview of mathematics intended for philosophers, proposes that mathematical concepts are ultimately grounded in ordinary human activities, mostly interactions with the physical world. [1] Conceptual metaphors described in WMCF, in addition to the Basic Metaphor of Infinity, include a number of grounding and linking metaphors for arithmetic, algebra, and set theory. Mathematical reasoning requires variables ranging over some universe of discourse, so that we can reason about generalities rather than merely about particulars. WMCF argues that reasoning with such variables implicitly relies on what it terms the Fundamental Metonymy of Algebra. WMCF (p. 151) includes the following example of what the authors term "metaphorical ambiguity." Take the set A = {{∅}, {∅, {∅}}}, and recall two bits of standard terminology from elementary set theory: (1) the von Neumann construction of the natural numbers, under which 0 = ∅, 1 = {∅}, and 2 = {∅, {∅}}; and (2) the Kuratowski definition of the ordered pair, (a, b) = {{a}, {a, b}}. By (1), A is the set {1, 2}. But (1) and (2) together say that A is also the ordered pair (0, 1). Both statements cannot be correct; the ordered pair (0, 1) and the unordered pair {1, 2} are fully distinct concepts. Lakoff and Johnson (1999) term this situation "metaphorically ambiguous." This simple example calls into question any Platonistic foundations for mathematics. While (1) and (2) above are admittedly canonical, especially within the consensus set theory known as the Zermelo–Fraenkel axiomatization, WMCF does not let on that they are but one of several definitions that have been proposed since the dawning of set theory. For example, Frege, Principia Mathematica, and New Foundations (a body of axiomatic set theory begun by Quine in 1937) define cardinals and ordinals as equivalence classes under the relations of equinumerosity and similarity, so that this conundrum does not arise. In Quinian set theory, A is simply an instance of the number 2. For technical reasons, defining the ordered pair as in (2) above is awkward in Quinian set theory.
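The ambiguity in the example above can even be checked mechanically. In this short Python sketch (added as an illustration; the encodings are exactly the von Neumann and Kuratowski definitions labeled (1) and (2) above), the same frozenset compares equal both to the unordered pair {1, 2} and to the ordered pair (0, 1):

```python
# (1) von Neumann numerals and (2) Kuratowski pairs, as frozensets.
ZERO = frozenset()                      # 0 := {}
ONE = frozenset({ZERO})                 # 1 := {0}
TWO = frozenset({ZERO, ONE})            # 2 := {0, 1}

def kuratowski_pair(a, b):              # (a, b) := {{a}, {a, b}}
    return frozenset({frozenset({a}), frozenset({a, b})})

A = frozenset({ONE, TWO})               # the set A = {{emptyset}, {emptyset, {emptyset}}}
print(A == frozenset({ONE, TWO}))       # True: A "is" the unordered pair {1, 2}
print(A == kuratowski_pair(ZERO, ONE))  # True: A "is" also the ordered pair (0, 1)
```

Both lines print True: under the two standard encodings, one and the same set plays two "fully distinct" conceptual roles, which is precisely the point of the example.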
Two solutions to this awkwardness have been proposed. The "Romance of Mathematics" is WMCF's light-hearted term for a perennial philosophical viewpoint about mathematics which the authors describe and then dismiss as an intellectual myth. It is very much an open question whether WMCF will eventually prove to be the start of a new school in the philosophy of mathematics. Hence the main value of WMCF so far may be a critical one: its critique of Platonism and romanticism in mathematics. Many working mathematicians resist the approach and conclusions of Lakoff and Núñez. Reviews of WMCF by mathematicians in professional journals, while often respectful of its focus on conceptual strategies and metaphors as paths for understanding mathematics, have taken exception to some of WMCF's philosophical arguments on the grounds that mathematical statements have lasting 'objective' meanings. [2] For example, Fermat's Last Theorem means exactly what it meant when Fermat initially proposed it in 1637. Other reviewers have pointed out that multiple conceptual strategies can be employed in connection with the same mathematically defined term, often by the same person (a point that is compatible with the view that we routinely understand the 'same' concept with different metaphors). The metaphor and the conceptual strategy are not the same as the formal definition which mathematicians employ. However, WMCF points out that formal definitions are built using words and symbols that have meaning only in terms of human experience. Critiques of WMCF include the humorous: "It's difficult for me to conceive of a metaphor for a real number raised to a complex power, but if there is one, I'd sure like to see it." — Joseph Auslander [3] — and the physically informed: "But their analysis leaves at least a couple of questions insufficiently answered. For one thing, the authors ignore the fact that brains not only observe nature, but also are part of nature. Perhaps the math that brains invent takes the form it does because math had a hand in forming the brains in the first place (through the operation of natural laws in constraining the evolution of life). Furthermore, it's one thing to fit equations to aspects of reality that are already known. It's something else for that math to tell of phenomena never previously suspected. When Paul Dirac's equations describing electrons produced more than one solution, he surmised that nature must possess other particles, now known as antimatter. But scientists did not discover such particles until after Dirac's math told him they must exist. If math is a human invention, nature seems to know what was going to be invented." [3] Lakoff and Núñez tend to dismiss the negative opinions mathematicians have expressed about WMCF, because their critics do not appreciate the insights of cognitive science. Lakoff and Núñez maintain that their argument can only be understood using the discoveries of recent decades about the way human brains process language and meaning, and that any arguments or criticisms not grounded in this understanding cannot address the content of the book. [4] It has been pointed out that it is not at all clear that WMCF establishes that the claim "intelligent alien life would have mathematical ability" is a myth. To do this, it would be required to show that intelligence and mathematical ability are separable, and this has not been done.
On Earth, intelligence and mathematical ability seem to go hand in hand in all life-forms, as pointed out by Keith Devlin among others. [ 5 ] The authors of WMCF have not explained how this situation would (or even could) be different anywhere else. Lakoff and Núñez also appear not to appreciate the extent to which intuitionists and constructivists have presaged their attack on the Romance of (Platonic) Mathematics. Brouwer , the founder of the intuitionist / constructivist point of view, in his dissertation On the Foundations of Mathematics , argued that mathematics was a mental construction, a free creation of the mind and totally independent of logic and language. He went on to criticize the formalists for building verbal structures that are studied without intuitive interpretation. Symbolic language should not be confused with mathematics; it reflects, but does not contain, mathematical reality. [ 6 ] Educators have taken some interest in what WMCF suggests about how mathematics is learned, and why students find some elementary concepts more difficult than others. However, even from an educational perspective, WMCF is still problematic. From the point of view of conceptual metaphor theory, metaphors reside in a different realm, the abstract, from that of the "real world", the concrete. In other words, despite their claim that mathematics is human, established mathematical knowledge — which is what we learn in school — is assumed to be and treated as abstract, completely detached from its physical origin. It cannot account for the way learners could gain access to such knowledge. [ 7 ] WMCF is also criticized for its monist approach. First, it ignores the fact that the sensori-motor experience upon which our linguistic structure — and thus mathematics — is assumed to be based may vary across cultures and situations. [ 8 ] Second, the mathematics WMCF is concerned with is "almost entirely... standard utterances in textbooks and curricula", [ 8 ] the most well-established body of knowledge, and it neglects the dynamic and diverse nature of the history of mathematics. WMCF's logocentric approach is another target for critics. While the book is predominantly interested in the association between language and mathematics, it does not account for how non-linguistic factors contribute to the emergence of mathematical ideas (see, e.g., Radford, 2009; [ 9 ] Rotman, 2008 [ 10 ] ). WMCF (pp. 378–79) concludes with some key points, a number of which follow. Mathematics arises from our bodies and brains, our everyday experiences, and the concerns of human societies and cultures. The cognitive approach to formal systems , as described and implemented in WMCF , need not be confined to mathematics, but should also prove fruitful when applied to formal logic, and to formal philosophy such as Edward Zalta 's theory of abstract objects . Lakoff and Johnson (1999) fruitfully employ the cognitive approach to rethink a good deal of the philosophy of mind , epistemology , metaphysics , and the history of ideas .
https://en.wikipedia.org/wiki/Where_Mathematics_Comes_From
Whipple was a proposed space observatory in the NASA Discovery Program . [ 1 ] The observatory would search for objects in the Kuiper belt and the theorized Oort cloud by conducting blind occultation observations. [ 2 ] Although the Oort cloud was hypothesized in the 1950s, it has not yet been directly observed. [ 2 ] The mission would attempt to detect Oort cloud objects by scanning for brief moments in which the objects block the light of background stars. [ 2 ] In 2011, three finalists were selected for the 2016 Discovery Program, and Whipple was not among them, but it was awarded funding to continue its technology development efforts. [ 3 ] Whipple would circle in a halo orbit around the Earth–Sun L2 point and carry a photometer that would try to detect Oort cloud and Kuiper belt objects (KBOs) by recording their transits of distant stars. [ 1 ] It would be designed to detect objects out to 10,000 AU . [ 1 ] Some of the mission goals included directly detecting the Oort cloud for the first time and determining the outer limit of the Kuiper belt. [ 1 ] Whipple would be designed to detect objects as small as a kilometer (half a mile) across at distances of up to 3,200 billion kilometers (22,000 astronomical units ; 2 × 10^12 mi). [ 4 ] Its telescope would need a relatively wide field of view and a fast recording cadence to capture transits that may last only seconds. [ 5 ] In 2011, Whipple was one of three proposals to win a technology development award in a Discovery Program selection. [ 4 ] The design proposed was a catadioptric Cassegrain telescope with a 77-centimeter (30.3 in) aperture. [ 6 ] It would have a wide field of view with a fast read-out CMOS detector to achieve the desired timing and photometric sensitivity. [ 7 ] The smallest KBO yet detected was discovered in 2009 by poring over data from the Hubble Space Telescope 's fine guidance sensors . [ 8 ] Astronomers detected a transit of an object against a distant star which, based on the duration and amount of dimming, was calculated to be a KBO about 1,000 meters (3,200 ft) in diameter. [ 8 ] It has been suggested that the Kepler space telescope may be able to detect objects in the Oort cloud by their occultation of background stars. [ 9 ]
https://en.wikipedia.org/wiki/Whipple_(spacecraft)
The Fred Whipple Award , established in 1989 by the Planetary Sciences Section of the American Geophysical Union in honor of Fred Whipple , is presented to an individual who makes an outstanding contribution to the field of planetary science . [ 1 ] The award includes an opportunity to present an invited lecture during the American Geophysical Union Fall Meeting.
https://en.wikipedia.org/wiki/Whipple_Award
A whippletree , or whiffletree , [ 1 ] [ 2 ] is a mechanism to distribute force evenly through linkages . It is also referred to as an equalizer , leader bar , or double tree . It consists of a bar pivoted at or near the centre, with force applied from one direction to the pivot and from the other direction to the tips. Several whippletrees may be used in series to distribute the force further, for example to simulate distributed pressure over an area when applying test loads to airplane wings. Whippletrees may be used either in compression or tension . They were also used for addition and subtraction calculations in mechanical computers. Tension whippletrees are used in artfully hung mobiles, such as those by the artist Alexander Calder . Whippletrees are used in tension to distribute forces from a point load to the traces of draught animals (the traces are the chains or straps on each side of the harness, on which the animal pulls). For these, the whippletree consists of a loose horizontal bar between the draught animal and its load. The centre of the bar is connected to the load, and the traces attach to its ends. [ 1 ] [ 2 ] Whippletrees are used especially when pulling a dragged load such as a plough , harrow , log or canal boat, or for pulling a vehicle (by the leaders in a team with more than one row of animals). A swingletree , or singletree , is a special kind of whippletree used for a horse-drawn vehicle . The term swingletree is sometimes used for draught whippletrees. [ 1 ] [ 2 ] A whippletree balances the pull from each side of the animal, preventing the load from tugging alternately on each side. It also keeps a point load from pulling the traces in onto the sides of the animal. If several animals are used abreast, further whippletrees may be used behind the first. Thus, with two animals, each has its own whippletree, and a further one balances the loads from their two whippletrees—an arrangement sometimes known as a double-tree , or for the leaders in a larger team, leader-bars . With three or more animals abreast, even more whippletrees are needed; some may be made asymmetrical to balance odd numbers of animals. Multiple whippletrees balance the pulls from the different animals, ensuring that each takes an equal share of the work. Whippletrees are also used in modern agriculture—for example, to link several ganged agricultural implements such as harrows , mowers or rollers to a tractor . This combines several small loads into a single load at the tractor hitch (the reverse of the use for draught animals). A series of whippletrees is used in compression in a standard windshield wiper to distribute the point force of the sprung wiper arm evenly along the wiper blade. Some designs for large telescopes use whippletrees [ 3 ] to support the optical elements. The tree provides distributed mechanical support, reducing localised mechanical deflections, which in turn reduces optical distortion. [ 4 ] Unlike the applications described above, which are two-dimensional, the whippletrees in telescope mirror support cells are three-dimensional designs, [ 5 ] since the tree must support multiple points over an area. Linkage-type mechanical analog computers use whippletree linkages to add and subtract quantities represented by straight-line motions. [ 6 ] The whippletree arrangement for a three-animal team is very similar to a group of linkage adders and subtracters: the "load" is the equivalent of the output sum or difference of the individual inputs. 
Inside the computer, cylinders on the knob shafts have thin metal tapes wrapped around them to convert rotary to linear motion. One widely used application was in the IBM Selectric typewriter (and the IBM 2741 derived from it), where the linkages summed binary mechanical inputs to rotate and tilt the type ball. This type of computing method was also used for naval gunnery , such as the MK 56 Gun Fire Control System and sonar fire-control systems .
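The arithmetic performed by such linkages is easy to state: a bar pivoted at a fraction p of its length outputs a weighted average of its two end displacements, and cascading bars combines more inputs. A minimal sketch of this behaviour in C (the function name and the three-input arrangement are illustrative, echoing the three-animal team described above):

    #include <stdio.h>

    /* One whippletree bar used as a linkage adder.
     * x1, x2 : displacements applied at the two ends of the bar
     * p      : pivot position along the bar, 0..1 (0.5 = centred)
     * With p = 0.5 the pivot moves by the average (x1 + x2) / 2;
     * mechanical adders recover the full sum with a 2:1 output lever. */
    static double whippletree(double x1, double x2, double p)
    {
        return (1.0 - p) * x1 + p * x2;
    }

    int main(void)
    {
        /* Two bars feeding a third, as in a three-animal team. */
        double a = 1.0, b = 2.0, c = 3.0;
        double pair = whippletree(a, b, 0.5);          /* (a + b) / 2    */
        double all  = whippletree(pair, c, 1.0 / 3.0); /* asymmetric bar */
        /* all = (2/3)(a + b)/2 + (1/3)c = (a + b + c) / 3 */
        printf("equalized output = %f\n", all);
        return 0;
    }

The asymmetric pivot at one third of the bar is exactly the trick mentioned above for balancing an odd number of animals: each of the three inputs ends up carrying an equal share.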
https://en.wikipedia.org/wiki/Whippletree_(mechanism)
A whirlwind mill is a beater mill for pulverising and micro-pulverising in process engineering . Whirlwind mills essentially consist of a mill base, a mill cover and a rotor . The inner side of the cover is equipped with wear protection elements. The top of the rotor is equipped with pre-crushing tools, and its side is covered with numerous U-shaped grinding tools. The grinding stock is fed to the mill via an inlet box and is pre-crushed by the tools on top of the rotor. The pre-crushing tools also carry the product into the milling zone at the side of the rotor. There the grinding stock is fluidised in the air stream between rotor and stator, which is generated by the rotation and the U-shaped grinding tools. The special design of these tools creates massive air whirls in the grinding zone (hence the name of the mill). These air whirls produce the main grinding effect: the particles collide with each other inside the whirlwinds. The final particle size can be adjusted by changing the clearance between rotor and stator, the air flow and the rotor speed. [ 1 ] [ 2 ] Whirlwind mills are mainly used for pulverisation and micro-pulverisation of soft to medium-hard products. In addition they can be used for cryogenic grinding, combined grinding/drying, combined drying/blending and defibration of organic substances (such as paper, cellulose, etc.). Whirlwind mills can be found in several industries, such as the chemical, plastics, building materials, and food industries.
https://en.wikipedia.org/wiki/Whirlwind_mill
Metal whiskering is a phenomenon that occurs in electrical devices when metals form long whisker-like projections over time. Tin whiskers were noticed and documented in the vacuum tube era of electronics early in the 20th century, in equipment whose production used pure, or almost pure, tin solder. It was noticed that small metal hairs or tendrils grew between metal solder pads, causing short circuits . Metal whiskers form in the presence of compressive stress. Germanium , zinc , cadmium , and even lead whiskers have been documented. [ 1 ] Many techniques are used to mitigate the problem, including changes to the annealing process (heating and cooling), the addition of elements like copper and nickel, and the inclusion of conformal coatings . [ 2 ] Traditionally, lead has been added to slow down whisker growth in tin-based solders. Following the Restriction of Hazardous Substances Directive (RoHS), the European Union banned the use of lead in most consumer electronic products from 2006, owing to the health problems associated with lead and the "high-tech trash" problem, which refocused attention on the issue of whisker formation in lead-free solders . Metal whiskering is a crystalline metallurgical phenomenon involving the spontaneous growth of tiny, filiform hairs from a metallic surface. The effect is primarily seen on elemental metals but also occurs with alloys . The mechanism behind metal whisker growth is not well understood , but it seems to be encouraged by compressive mechanical stresses. Metal whiskers differ from metallic dendrites in several respects: dendrites are fern -shaped and grow across the surface of the metal, while metal whiskers are hair-like and project normal to the surface. Dendrite growth requires moisture capable of dissolving the metal into a solution of metal ions, which are then redistributed by electromigration in the presence of an electromagnetic field . While the precise mechanism for whisker formation remains unknown, it is known that whisker formation does not require either dissolution of the metal or the presence of an electromagnetic field. Whiskers can cause short circuits and arcing in electrical equipment. The phenomenon was discovered by telephone companies in the late 1940s, and it was later found that the addition of lead to tin solder provided mitigation. [ 6 ] The European Restriction of Hazardous Substances Directive (RoHS), which took effect on July 1, 2006, restricted the use of lead in various types of electronic and electrical equipment. This has driven the use of lead-free alloys with a focus on preventing whisker formation (see § Mitigation and elimination ). Others have focused on the development of oxygen-barrier coatings to prevent whisker formation. [ 7 ] Airborne zinc whiskers have been responsible for increased system failure rates in computer server rooms . Zinc whiskers grow from galvanized (electroplated) metal surfaces at a rate of up to a millimeter per year, with a diameter of a few micrometers. Whiskers can form on the underside of zinc-electroplated floor tiles on raised floors. These whiskers can then become airborne within the floor plenum when the tiles are disturbed, usually during maintenance. Whiskers can be small enough to pass through air filters and can settle inside equipment, resulting in short circuits and system failure. 
[ 8 ] Tin whiskers do not have to be airborne to damage equipment, as they typically grow directly in the environment where they can produce short circuits, i.e., the electronic equipment itself. At frequencies above 6 gigahertz or in fast digital circuits , tin whiskers can act like miniature antennas , affecting the circuit impedance and causing reflections. In computer disk drives they can break off and cause head crashes or bearing failures. [ 9 ] Tin whiskers often cause failures in relays and have been found upon examination of failed relays in nuclear power facilities. [ 10 ] Pacemakers have been recalled due to tin whiskers. [ 11 ] Research has also identified a particular failure mode for tin whiskers in vacuum (such as in space), where in high-power components a short-circuiting tin whisker is ionized into a plasma that is capable of conducting hundreds of amperes of current, massively increasing the damaging effect of the short circuit. [ 12 ] The possible increase in the use of pure tin in electronics due to the RoHS directive drove the Joint Electron Device Engineering Council (JEDEC) and the IPC electronic trade association to release a tin whisker acceptance testing standard and a mitigation practices guideline intended to help manufacturers reduce the risk of tin whiskers in lead-free products. [ 13 ] Silver whiskers often appear in conjunction with a layer of silver sulfide , which forms on the surface of silver electrical contacts operating in an atmosphere rich in hydrogen sulfide and high humidity . Such atmospheres can exist in sewage treatment plants and paper mills . Whiskers over 20 micrometres (μm) in length were observed on gold-plated surfaces and noted in a 2003 NASA internal memorandum. [ 14 ] The effects of metal whiskering were chronicled on History Channel 's program Engineering Disasters 19. [ 15 ] Several approaches are used to reduce or eliminate whisker growth, and research in the area is ongoing. Conformal compound coatings provide a barrier that stops whiskers from penetrating it, reaching a nearby termination, and forming a short. [ 12 ] Termination finishes of nickel, gold or palladium have been shown to eliminate whisker formation in controlled trials. [ 16 ] Galaxy IV was a telecommunications satellite that was disabled and lost in 1998 due to short circuits caused by tin whiskers. It was initially thought that space weather contributed to the failure, but it was later discovered that a conformal coating had been misapplied, allowing whiskers to form in the pure tin plating, find their way through a missing coating area, and cause a failure of the main control computer. The manufacturer, Hughes, has moved to nickel plating, rather than tin, to reduce the risk of whisker growth. The trade-off has been an increase in weight, adding 50 to 100 kilograms (110 to 220 lb) per payload. [ 17 ] On April 17, 2005, the Millstone Nuclear Power Plant in Connecticut was shut down due to a "false alarm" that indicated an unsafe pressure drop in the reactor's steam system when the steam pressure was actually nominal. The false alarm was caused by a tin whisker that short-circuited the logic board responsible for monitoring the steam pressure lines in the power plant. [ 18 ] In September 2011, three NASA investigators claimed to have identified tin whiskers on the accelerator position sensors [ 19 ] of sampled Toyota Camry models, which they said could contribute to the "stuck accelerator" crashes affecting certain Toyota models during 2005–2010. 
[ 20 ] This contradicted an earlier 10-month joint investigation by the National Highway Traffic Safety Administration (NHTSA) and a large group of other NASA researchers that found no electronic defects. [ 21 ] In 2012, NHTSA maintained: "We do not believe that tin whiskers are a plausible explanation for these incidents...[the likely cause was] pedal misapplication ." [ 22 ] Toyota also maintains that tin whiskers were not the cause of any stuck accelerator issues: "In the words of U.S. Transportation Secretary Ray LaHood, 'The verdict is in. There is no electronic-based cause for unintended high-speed acceleration in Toyotas. Period. ' " According to a Toyota press release, "no data indicates that tin whiskers are more prone to occur in Toyota vehicles than any other vehicle in the marketplace." Toyota also states that "their systems are designed to reduce the risk that tin whiskers will form in the first place." [ 23 ]
https://en.wikipedia.org/wiki/Whisker_(metallurgy)
Whistler Water is a manufacturer and supplier of bottled water. Its water originates from mountains just north of Whistler in British Columbia , Canada, and is bottled a short distance away in Burnaby , British Columbia. Whistler Water has been supplying its products for over 25 years. [ 1 ] Whistler Water's original source was located at Function Junction ( British Columbia Highway 99 at Cheakamus Valley Road) in Whistler, British Columbia , and was used from 1991 to 1992 before production moved to a secondary source because of industrial intrusion. Since 1992, Whistler Water has been sourced from Place Glacier outside Whistler, British Columbia. In 1997, Whistler Water introduced the Whistler Glacial Spring Water 'mountain bottle' packaging and began targeting export markets aggressively. [ 2 ] Whistler Pure Glacial Spring Water is sourced from the glaciers located in the Coast Mountains just north of Whistler, British Columbia. [ 3 ] Whistler Water Inc. obtains its water from an aquifer located in the foothills of remote mountains that are part of the Pacific Range of the Coast Mountains in British Columbia. [ 4 ] The Coast Mountains comprise many of the world's largest temperate-latitude icecaps and glaciers. Most of the land in the range is tightly controlled Crown Land on which all activities are rigorously regulated by the Provincial Government, keeping both the glacier and the aquifer highly protected from human and industrial contamination. [ 5 ] Whistler Water's bottling facility in Burnaby , British Columbia is also a co-packer for private label customers, as well as a number of international private label bottle brands and carbonated products. Bottle sizes are 350 mL, 500 mL, 500 mL sport cap, 1 L, 1 L sport cap, 1.5 L and a 4 L jug. [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Whistler_Water
White Heat Cold Logic (2008), edited by Paul Brown , Charlie Gere , Nicholas Lambert, and Catherine Mason , is a book about the history of British computer art from 1960 to 1980. [ 1 ] The book comprises 29 chapters contributed by a variety of authors. It was published in 2008 by MIT Press , [ 2 ] in hardcover format, and includes a series foreword by Sean Cubitt, the editor-in-chief of the Leonardo Book Series. The book has been reviewed in a number of publications and online.
https://en.wikipedia.org/wiki/White_Heat_Cold_Logic
The White House Big Dig was the name used in press reports to describe a multi-year construction project at the White House that began in September 2010 and temporarily concluded in 2012, with a second phase planned for the future. According to the General Services Administration (GSA), the $376-million project, which involved a multi-story excavation adjacent to the West Wing , was to replace electrical wiring and update air conditioning. A second phase of the project, with an unannounced start date, will involve a similar excavation adjacent to the East Wing . Funds for the White House Big Dig were allocated by a congressional appropriation made in late 2001. [ 1 ] [ 2 ] Despite the utilitarian description of its purpose, the project became the object of intense media speculation. The Washington Post characterized the GSA description of the project as a "nothing to see here story", while The New York Times , citing an anonymous source, claimed it was "security-related construction." [ 3 ] [ 4 ] The Associated Press reported that a privacy screen was placed around the construction site for its duration and that sub-contractors on the project were required to cover identifying marks or logos on their company vehicles, measures which it implied were unusual. [ 2 ] ABC News , meanwhile, described the construction project as a "mystery" on par with "what happened to the dinosaurs ". In a story set to the theme song from the science fiction television program The X-Files , reporter John Berman sarcastically commented "maybe it is a bunch of pipes and wires ... just like Area 51 ". [ 5 ] In 2013, RealClearPolitics reported that a "clone" of the Oval Office would be built in the Eisenhower Executive Office Building , as the Oval Office would be unusable during the second phase of the White House Big Dig. [ 6 ] White House press secretary Jay Carney subsequently rejected that report as false. [ 7 ]
https://en.wikipedia.org/wiki/White_House_Big_Dig
White Lead (Painting) Convention, 1921 is an International Labour Organization convention, established in 1921 to advance the prohibition of the use of white lead in paint. As of 2017, many leading nations, including the United States, the United Kingdom, Germany, Japan, China and India, remained outside the convention. As of 2013, the convention had been ratified by 63 states.
https://en.wikipedia.org/wiki/White_Lead_(Painting)_Convention,_1921
White Space Internet uses a part of the radio spectrum known as white spaces , the unused frequencies created by gaps between the coverage areas of television channels . These spaces can provide broadband internet access similar to that of 4G mobile. [ 1 ] In a 2012 test of the technology, the city of Wilmington, North Carolina deployed white space systems "to connect the city's infrastructure, allowing public officials to remotely turn lights on and off in parks, provide public wireless broadband to certain areas of the city, and monitor water levels." [ 1 ] The initial tests showed that white space signals travel further and with less interference than Wi-Fi and Bluetooth . White space can help to alleviate some of the problems caused by overcrowded networks. [ 2 ] In 2013 the system was still in use. [ 3 ] Customers of Carlson Wireless Technologies use white space to access broadband internet; the company's research has shown white space internet covering an area around 10 kilometers in diameter, roughly 100 times the range of Wi-Fi. [ 4 ] White space is also considered non-line-of-sight (NLOS), unlike microwave links, which need line-of-sight. White space uses lower-frequency UHF signals to connect devices, and its NLOS character allows it to cover areas containing obstacles with few issues. [ 4 ] White space technology has been suggested for several countries. Microsoft maintains white space databases , and is advancing white space technology in Jamaica, Namibia, the Philippines, Tanzania, Taiwan, Colombia, the United Kingdom, and the United States. [ 5 ] Google has likewise pushed white space technology in Cape Town, South Africa. [ 1 ] An argument against white space Internet is that it uses a radio frequency range commonly used for television, and is therefore not properly "Super Wi-Fi" . [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/White_Space_Internet
White blood cells (scientific name leukocytes ), also called immune cells or immunocytes , are cells of the immune system that are involved in protecting the body against both infectious disease and foreign entities. White blood cells are generally larger than red blood cells . They include three main subtypes: granulocytes , lymphocytes and monocytes . [ 2 ] All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells . [ 3 ] Leukocytes are found throughout the body, including the blood and lymphatic system . [ 4 ] All white blood cells have nuclei , which distinguishes them from the other blood cells , the anucleated red blood cells (RBCs) and platelets . The different white blood cells are usually classified by cell lineage ( myeloid cells or lymphoid cells ). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils) and agranulocytes ( monocytes and lymphocytes (T cells and B cells)). [ 5 ] Myeloid cells ( myelocytes ) include neutrophils , eosinophils , mast cells , basophils , and monocytes . [ 6 ] Monocytes are further subdivided into dendritic cells and macrophages . Monocytes, macrophages, and neutrophils are phagocytic . Lymphoid cells ( lymphocytes ) include T cells (subdivided into helper T cells , memory T cells , cytotoxic T cells ), B cells (subdivided into plasma cells and memory B cells ), and natural killer cells . Historically, white blood cells were classified by their physical characteristics ( granulocytes and agranulocytes ), but this classification system is less frequently used now. Produced in the bone marrow , white blood cells defend the body against infections and disease . An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders. The number of leukocytes in the blood is often an indicator of disease , and thus the white blood cell count is an important subset of the complete blood count . The normal white cell count is usually between 4 × 10^9/L and 1.1 × 10^10/L. In the US, this is usually expressed as 4,000 to 11,000 white blood cells per microliter of blood. [ 7 ] White blood cells make up approximately 1% of the total blood volume in a healthy adult, [ 8 ] making them substantially less numerous than the red blood cells, which account for 40% to 45%. However, this 1% of the blood makes a huge difference to health, because immunity depends on it. An increase in the number of leukocytes over the upper limits is called leukocytosis . It is normal when it is part of healthy immune responses, which happen frequently. It is occasionally abnormal when it is neoplastic or autoimmune in origin. A decrease below the lower limit is called leukopenia , which indicates a weakened immune system. The name "white blood cell" derives from the physical appearance of a blood sample after centrifugation . White cells are found in the buffy coat , a thin, typically white layer of nucleated cells between the sedimented red blood cells and the blood plasma . The scientific term leukocyte directly reflects this appearance: it is derived from the Greek roots leuk - meaning "white" and cyt - meaning "cell". The buffy coat may sometimes be green if there are large amounts of neutrophils in the sample, due to the heme -containing enzyme myeloperoxidase that they produce. [ citation needed ]
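As a concrete illustration of the reference range quoted above (4 × 10^9/L to 1.1 × 10^10/L, i.e. 4,000 to 11,000 cells per microliter), a simple classification of a measured count might look like the following sketch; the function name and messages are illustrative, and real laboratory interpretation of course depends on age and clinical context:

    #include <stdio.h>

    /* Classify an adult total white blood cell count against the
     * reference range given in the text: 4,000-11,000 cells/uL. */
    static const char *classify_wbc(double cells_per_uL)
    {
        if (cells_per_uL < 4000.0)
            return "below range (leukopenia)";
        if (cells_per_uL > 11000.0)
            return "above range (leukocytosis)";
        return "within reference range";
    }

    int main(void)
    {
        double samples[] = { 3200.0, 7500.0, 15000.0 };
        for (int i = 0; i < 3; i++)
            printf("%.0f cells/uL: %s\n", samples[i],
                   classify_wbc(samples[i]));
        return 0;
    }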
All white blood cells are nucleated, which distinguishes them from the anucleated red blood cells and platelets. Types of leukocytes can be classified in standard ways. Two broad pairs of categories classify them either by structure ( granulocytes or agranulocytes ) or by cell lineage (myeloid cells or lymphoid cells). These broad categories can be further divided into the five main types: neutrophils , eosinophils , basophils , lymphocytes , and monocytes . [ 6 ] A good way to remember the relative proportions of WBCs is "Never Let Monkeys Eat Bananas". [ 9 ] These types are distinguished by their physical and functional characteristics. Monocytes and neutrophils are phagocytic . Further subtypes can be classified. Granulocytes are distinguished from agranulocytes by the shape of their nucleus (lobed versus round, that is, polymorphonuclear versus mononuclear) and by their cytoplasm granules (present or absent, or more precisely, visible under light microscopy or not). The other dichotomy is by lineage: myeloid cells (neutrophils, monocytes, eosinophils and basophils) are distinguished from lymphoid cells (lymphocytes) by hematopoietic lineage ( cellular differentiation lineage). [ 10 ] Lymphocytes can be further classified as T cells, B cells, and natural killer cells. Neutrophils are the most abundant white blood cell, constituting 60–70% of the circulating leukocytes. [ 8 ] They defend against bacterial or fungal infection. They are usually the first responders to microbial infection; their activity and death in large numbers form pus . They are commonly referred to as polymorphonuclear (PMN) leukocytes, although, in the technical sense, PMN refers to all granulocytes. They have a multi-lobed nucleus, which consists of three to five lobes connected by slender strands. [ 13 ] This gives the neutrophils the appearance of having multiple nuclei, hence the name polymorphonuclear leukocyte. The cytoplasm may look transparent because of fine granules that are pale lilac when stained. Neutrophils are active in phagocytosing bacteria and are present in large amounts in the pus of wounds. These cells are not able to renew their lysosomes (used in digesting microbes) and die after having phagocytosed a few pathogens. [ 14 ] Neutrophils are the most common cell type seen in the early stages of acute inflammation. The average lifespan of inactivated human neutrophils in the circulation has been reported by different approaches to be between 5 and 135 hours. [ 15 ] [ 16 ] Eosinophils compose about 2–4% of white blood cells in circulating blood. This count fluctuates throughout the day, seasonally, and during menstruation . It rises in response to allergies, parasitic infections, collagen diseases, and diseases of the spleen and central nervous system. They are rare in the blood, but numerous in the mucous membranes of the respiratory, digestive, and lower urinary tracts. [ 13 ] They primarily deal with parasitic infections. Eosinophils are also the predominant inflammatory cells in allergic reactions. The most important causes of eosinophilia include allergies such as asthma, hay fever, and hives, and parasitic infections. They secrete chemicals that destroy large parasites, such as hookworms and tapeworms, that are too big for any one white blood cell to phagocytize. In general, their nuclei are bi-lobed. The lobes are connected by a thin strand. [ 13 ] The cytoplasm is full of granules that assume a characteristic pink-orange color with eosin staining. 
Basophils are chiefly responsible for allergic and antigen responses, releasing the chemical histamine and causing the dilation of blood vessels . Because they are the rarest of the white blood cells (less than 0.5% of the total count) and share physicochemical properties with other blood cells, they are difficult to study. [ 17 ] They can be recognized by several coarse, dark violet granules, giving them a blue hue. The nucleus is bi- or tri-lobed, but it is hard to see because of the number of coarse granules that hide it. They secrete two chemicals that aid in the body's defenses: histamine and heparin . Histamine is responsible for widening blood vessels and increasing the flow of blood to injured tissue. It also makes blood vessels more permeable so neutrophils and clotting proteins can get into connective tissue more easily. Heparin is an anticoagulant that inhibits blood clotting and promotes the movement of white blood cells into an area. Basophils can also release chemical signals that attract eosinophils and neutrophils to an infection site. [ 13 ] Lymphocytes are much more common in the lymphatic system than in blood. Lymphocytes are distinguished by having a deeply staining nucleus that may be eccentric in location, and a relatively small amount of cytoplasm. Lymphocytes include T cells, B cells, and natural killer cells . Monocytes, the largest type of white blood cell, share the "vacuum cleaner" ( phagocytosis ) function of neutrophils, but are much longer lived, as they have an extra role: they present pieces of pathogens to T cells so that the pathogens may be recognized again and killed. This causes an antibody response to be mounted. Monocytes eventually leave the bloodstream and become tissue macrophages , which remove dead cell debris as well as attack microorganisms. Neither dead cell debris nor attacking microorganisms can be dealt with effectively by the neutrophils. Unlike neutrophils, monocytes are able to replace their lysosomal contents and are thought to have a much longer active life. They have a kidney-shaped nucleus and are typically not granulated. They also possess abundant cytoplasm. Some leucocytes migrate into the tissues of the body to take up a permanent residence at that location rather than remaining in the blood. Often these cells have specific names depending upon which tissue they settle in, such as fixed macrophages in the liver, which become known as Kupffer cells . These cells still serve a role in the immune system. The two commonly used categories of white blood cell disorders divide them quantitatively into those causing excessive numbers ( proliferative disorders) and those causing insufficient numbers ( leukopenias ). [ 18 ] Leukocytosis is usually healthy (e.g., fighting an infection ), but it also may be dysfunctionally proliferative. Proliferative disorders of white blood cells can be classed as myeloproliferative and lymphoproliferative . Some are autoimmune , but many are neoplastic . Another way to categorize disorders of white blood cells is qualitatively . There are various disorders in which the number of white blood cells is normal but the cells do not function normally. [ 19 ] Neoplasia of white blood cells can be benign but is often malignant . Of the various tumors of the blood and lymph , cancers of white blood cells can be broadly classified as leukemias and lymphomas , although those categories overlap and are often grouped together. A range of disorders can cause decreases in white blood cells. The type of white blood cell most often decreased is the neutrophil. 
In this case the decrease may be called neutropenia or granulocytopenia. Less commonly, a decrease in lymphocytes (called lymphocytopenia or lymphopenia) may be seen. [ 18 ] Neutropenia can be acquired or intrinsic . [ 20 ] A decrease in levels of neutrophils on lab tests is due to either decreased production of neutrophils or increased removal from the blood. [ 18 ] Symptoms of neutropenia are associated with the underlying cause of the decrease in neutrophils. For example, the most common cause of acquired neutropenia is drug-induced, so an individual may have symptoms of medication overdose or toxicity. Treatment is also aimed at the underlying cause of the neutropenia. [ 21 ] One severe consequence of neutropenia is that it can increase the risk of infection. [ 19 ] Lymphocytopenia is defined as a total lymphocyte count below 1.0 × 10^9/L; the cells most commonly affected are CD4+ T cells. Like neutropenia, lymphocytopenia may be acquired or intrinsic, and there are many causes. [ 19 ] As with neutropenia, the symptoms and treatment of lymphocytopenia are directed at the underlying cause of the change in cell counts. An increase in the number of white blood cells in circulation is called leukocytosis . [ 18 ] This increase is most commonly caused by inflammation . [ 18 ] There are four major causes: increased production in the bone marrow, increased release from storage in the bone marrow, decreased attachment to veins and arteries, and decreased uptake by tissues. [ 18 ] Leukocytosis may affect one or more cell lines and can be neutrophilic, eosinophilic, basophilic, monocytic, or lymphocytic. Neutrophilia is an increase in the absolute neutrophil count in the peripheral circulation . Normal blood values vary by age. [ 19 ] Neutrophilia can be caused by a direct problem with blood cells (primary disease). It can also occur as a consequence of an underlying disease (secondary). Most cases of neutrophilia are secondary to inflammation. [ 21 ] A normal eosinophil count is considered to be less than 0.65 × 10^9/L. [ 19 ] Eosinophil counts are higher in newborns and vary with age, time (lower in the morning and higher at night), exercise, environment, and exposure to allergens. [ 19 ] Eosinophilia is never a normal lab finding. Efforts should always be made to discover the underlying cause, though the cause may not always be found. [ 19 ] The complete blood cell count is a blood panel that includes the overall white blood cell count and a differential count, a count of each type of white blood cell. Reference ranges for blood tests specify the typical counts in healthy people. The normal total leucocyte count in an adult is 4000 to 11,000 per mm^3 of blood. The differential leucocyte count gives the number (and percentage) of each type of leucocyte per cubic millimetre of blood, with reference ranges for each of the various types. [ 23 ]
https://en.wikipedia.org/wiki/White_blood_cell
A white box system is a mechanical system installed in the engine room of a ship for controlling and monitoring the engine room bilge water discharge from the vessel. The system consists of all vital components for monitoring and controlling the discharge from the vessel's oily water separator . The white box includes a stainless steel cage with a locked door. The bilge water from the oily water separator is pumped through the white box and analyzed by an oil content meter. A flow switch ensures that there is flow through the oil content meter, and a flow meter records the accumulated volume discharged overboard. If the door is opened, if the oil content exceeds the legal limit of 15 parts per million (ppm), or if the flow to the oil content meter is lost, the three-way valve immediately redirects the bilge water back to the bilge water holding tank. All components inside the system are connected to a digital recorder mounted in the engine control room that records the oil content, three-way valve position, flow through the oil content meter, accumulated discharged volume, and door position, together with the vessel's geographical position and the time. The chief engineer possesses the key; when locked, the system cannot be tampered with and, equally importantly, provides evidence that the vessel has been compliant. The recorder data is stored in an encrypted format and can be presented to any official body, such as Port State Control , United States Coast Guard , vetting or classification society officials, to prove that the vessel has been compliant with MARPOL 73/78 (the International Convention for the Prevention of Pollution from Ships) or any national regulation, and that no illegal discharge [ 1 ] has been made.
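The interlock described above reduces to a simple rule: discharge overboard only while the cage door is closed, flow through the oil content meter is confirmed, and the measured oil content stays at or below 15 ppm; otherwise divert to the holding tank. A minimal sketch of that decision logic in C (the sensor names are illustrative, not a vendor API):

    #include <stdbool.h>
    #include <stdio.h>

    /* Position commanded to the three-way valve. */
    enum valve_position { DIVERT_TO_HOLDING_TANK, DISCHARGE_OVERBOARD };

    /* Snapshot of the monitored inputs described in the text. */
    struct whitebox_inputs {
        bool   door_closed;     /* locked cage door switch           */
        bool   flow_confirmed;  /* flow switch at oil content meter  */
        double oil_content_ppm; /* reading from the oil content meter */
    };

    /* MARPOL 73/78 limit for oily water discharge. */
    #define OIL_LIMIT_PPM 15.0

    static enum valve_position decide_valve(const struct whitebox_inputs *in)
    {
        /* Any violated condition immediately diverts the bilge water
         * back to the holding tank. */
        if (!in->door_closed || !in->flow_confirmed ||
            in->oil_content_ppm > OIL_LIMIT_PPM)
            return DIVERT_TO_HOLDING_TANK;
        return DISCHARGE_OVERBOARD;
    }

    int main(void)
    {
        struct whitebox_inputs in = { true, true, 9.8 };
        printf("valve: %s\n",
               decide_valve(&in) == DISCHARGE_OVERBOARD
                   ? "discharge overboard" : "divert to holding tank");
        return 0;
    }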
https://en.wikipedia.org/wiki/White_box_system
The White catalyst is a transition metal coordination complex named after the chemist by whom it was first synthesized, M. Christina White , a professor at the University of Illinois . The catalyst has been used in a variety of allylic C-H functionalization reactions of α-olefins . In addition, it has been shown to catalyze oxidative Heck reactions . The compound is commercially available. It may be prepared by oxidation of 1,2-bis(phenylthio)ethane to the sulfoxide , followed by reaction with palladium acetate . [ 1 ] The reaction mechanism of allylic C-H acetoxylation has been studied. [ 2 ] The first step in the catalytic cycle is cleavage of the allylic C-H bond. The sulfoxide ligand is thought to promote this step by generating a highly electrophilic , possibly cationic palladium species in situ . This species coordinates to the alkene and acidifies the adjacent C-H bond, allowing acetate to abstract the proton and form a π-allyl palladium complex ( II ). Subsequently, a π-acid such as benzoquinone coordinates to the palladium, activating the π-allyl complex toward nucleophilic attack ( III ). A nucleophile , in this case acetate, attacks to reductively eliminate palladium, generating the product and palladium(0) ( IV ). The palladium(0) is reoxidized to palladium(II) by benzoquinone, and the sulfoxide ligand reassociates, closing the catalytic cycle. The White catalyst was originally developed for use in a branched allylic acetoxylation reaction. [ 2 ] An enantioselective version of this reaction was subsequently reported, using chromium(III) salen fluoride as a chiral cocatalyst. [ 3 ] A macrolactonization reaction based on the branched allylic esterification was developed for the preparation of 14- to 19-membered macrolides . [ 4 ] This method was applied to the total synthesis of 6-deoxyerythronolide B . [ 5 ] In addition to acetate, a wide variety of carboxylic acids may be employed as nucleophiles in the branched allylic esterification reaction; aliphatic and aromatic carboxylates, including amino acids , have been demonstrated as the first step of an esterification/Heck sequence. [ 6 ] The White catalyst can effect both branched and linear regioselective allylic C-H aminations. In order to promote nucleophilic attack at the internal terminus of the π-allyl complex and so generate the branched product, a tethered N -sulfonyl carbamate nucleophile is used. This strategy has been applied to the synthesis of 1,2- and 1,3-amino alcohols . [ 7 ] [ 8 ] The amination proceeds with high yields and good diastereoselectivity , and the products may be readily elaborated to amino acids and other synthetic intermediates and natural products . Key to the development of the reaction was the identification of a very acidic nitrogen nucleophile with a pKa close to that of acetic acid , as more basic nucleophiles divert reactivity to aminopalladation. The intermolecular version of the allylic C-H amination is also known. [ 9 ] Using a methyl N -tosyl carbamate nucleophile, the linear E-allylic amine products are obtained from α-olefin substrates. It has been shown that functionalization of the π-allyl intermediate may be promoted by chromium(III) salen chloride activation of the electrophile , or by Hünig's base activation of the nucleophile . [ 10 ] In 2008, simultaneous publications described the allylic C-H alkylation of allylarene substrates. [ 11 ] [ 12 ] These reactions were catalyzed by the White catalyst or by an earlier version of the complex bearing benzyl substituents on the sulfoxide in place of phenyl. 
It was demonstrated that an additional sulfoxide ligand, dimethylsulfoxide (DMSO), was essential for promoting functionalization of the π-allyl intermediate; the bis-sulfoxide ligand alone was unable to complete the catalytic cycle. The White catalyst has been found to be an effective catalyst for an oxidative version of the classic Heck reaction . Rather than performing allylic C-H cleavage—a relatively slow process—the catalyst quickly transmetallates with a boronic acid . This aryl palladium intermediate undergoes a 1,2-addition across the alkene double bond. β-Hydride elimination releases the product. The oxidative Heck was originally reported as a sequential process following allylic C-H esterification. [ 6 ] It was subsequently demonstrated as a stand-alone method for a broad range of α-olefin substrates. [ 13 ] The regioselectivity of the reaction is controlled by directing groups such as carbonyls , alcohols and amines .
https://en.wikipedia.org/wiki/White_catalyst
In computing and electronic systems, binary-coded decimal ( BCD ) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits , usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow). In byte -oriented systems (i.e. most modern computers), the term unpacked BCD [ 1 ] usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise four-bit encoding, however, may vary for technical reasons (e.g. Excess-3 ). The ten states representing a BCD digit are sometimes called tetrades [ 2 ] [ 3 ] (the nibble typically needed to hold them is also known as a tetrade), while the unused don't-care states are named pseudo-tetrad(e)s , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] pseudo-decimals , [ 3 ] or pseudo-decimal digits . [ 9 ] [ 10 ] [ nb 1 ] BCD's main virtue, in comparison to binary positional systems , is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into conventional human-readable representations. Its principal drawbacks are a slight increase in the complexity of the circuits needed to implement basic arithmetic as well as slightly less dense storage. BCD was used in many early decimal computers , and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants, Digital Equipment Corporation 's VAX , the Burroughs B1700 , and the Motorola 68000 -series processors. BCD per se is not as widely used as in the past, and is unavailable or limited in newer instruction sets (e.g., ARM ; x86 in long mode ). However, decimal fixed-point and decimal floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors that are inherent in binary floating point formats cannot be tolerated. [ 11 ] BCD takes advantage of the fact that any one decimal numeral can be represented by a four-bit pattern. An obvious way of encoding digits is Natural BCD (NBCD), where each decimal digit is represented by its corresponding four-bit binary value. This is also called "8421" encoding. This scheme can also be referred to as Simple Binary-Coded Decimal ( SBCD ) or BCD 8421 , and is the most common encoding. [ 12 ] Others include the so-called "4221" and "7421" encodings – named after the weighting used for the bits – and " Excess-3 ". [ 13 ] For example, the BCD digit 6, 0110'b in 8421 notation, is 1100'b in 4221 (two encodings are possible), 0110'b in 7421, while in Excess-3 it is 1001'b (6 + 3 = 9). Various BCD encoding systems thus represent the decimal digits 0 to 9 with different bit weights; in the "BCD 8 4 −2 −1" code, for example, two of the weights are negative. Both ASCII and EBCDIC character codes for the digits are examples of zoned BCD.
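Since natural (8421) BCD maps each decimal digit to its own four-bit value, converting a binary integer to BCD is just repeated division by 10, one nibble per digit. A small sketch in C (the function name is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Encode a value 0..9999 as natural (8421) BCD, one digit per
     * nibble, least significant digit in bits 0-3. */
    static uint16_t to_bcd8421(unsigned value)
    {
        uint16_t bcd = 0;
        for (int shift = 0; shift < 16; shift += 4) {
            bcd |= (uint16_t)((value % 10) << shift); /* digit -> nibble */
            value /= 10;
        }
        return bcd;
    }

    int main(void)
    {
        printf("6    -> 0x%04X\n", to_bcd8421(6));    /* prints 0x0006 */
        printf("1234 -> 0x%04X\n", to_bcd8421(1234)); /* prints 0x1234 */
        return 0;
    }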
As most computers deal with data in 8-bit bytes , a BCD number can be encoded in one of two ways: unpacked, with one decimal digit per byte, or packed, with two decimal digits per byte. As an example, encoding the decimal number 91 using unpacked BCD results in a binary pattern of two bytes: 0000 1001 (for the digit 9) and 0000 0001 (for the digit 1). In packed BCD, the same number fits into a single byte: 1001 0001. Hence the numerical range for one unpacked BCD byte is zero through nine inclusive, whereas the range for one packed BCD byte is zero through ninety-nine inclusive. To represent numbers larger than the range of a single byte, any number of contiguous bytes may be used. For example, to represent the decimal number 12345 in packed BCD, using big-endian format, a program would encode it as the three bytes 0000 0001 0010 0011 0100 0101 (hex 01 23 45). Here, the most significant nibble of the most significant byte has been encoded as zero, so the number is stored as 012345 (but formatting routines might replace or remove leading zeros). Packed BCD is more efficient in storage usage than unpacked BCD; encoding the same number (with the leading zero) in unpacked format would consume twice the storage. Shifting and masking operations are used to pack or unpack a packed BCD digit. Other bitwise operations are used to convert a numeral to its equivalent bit pattern or reverse the process. Some computers whose words are multiples of an octet (8-bit byte), for example contemporary IBM mainframe systems, support packed BCD (or packed decimal [ 38 ] ) numeric representations, in which each nibble represents either a decimal digit or a sign. [ nb 8 ] Packed BCD has been in use since at least the 1960s and is implemented in all IBM mainframe hardware since then. Most implementations are big endian , i.e. with the more significant digit in the upper half of each byte, and with the leftmost byte (residing at the lowest memory address) containing the most significant digits of the packed decimal value. The lower nibble of the rightmost byte is usually used as the sign flag, although some unsigned representations lack a sign flag. As an example, a 4-byte value consists of 8 nibbles, wherein the upper 7 nibbles store the digits of a 7-digit decimal value, and the lowest nibble indicates the sign of the decimal integer value. Standard sign values are 1100 ( hex C) for positive (+) and 1101 (D) for negative (−). This convention comes from the zone field for EBCDIC characters and the signed overpunch representation. Other allowed signs are 1010 (A) and 1110 (E) for positive and 1011 (B) for negative. IBM System/360 processors will use the 1010 (A) and 1011 (B) signs if the A bit is set in the PSW, for the ASCII-8 standard that never passed. Most implementations also provide unsigned BCD values with a sign nibble of 1111 (F). [ 39 ] [ 40 ] [ 41 ] ILE RPG uses 1111 (F) for positive and 1101 (D) for negative. [ 42 ] These match the EBCDIC zone for digits without a sign overpunch. In packed BCD, the number 127 is represented by 0001 0010 0111 1100 (127C) and −127 is represented by 0001 0010 0111 1101 (127D). Burroughs systems used 1101 (D) for negative, and any other value is considered a positive sign value (the processors will normalize a positive sign to 1100 (C)). No matter how many bytes wide a word is, there is always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to (2 n )−1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires ( d + 1)/2 bytes of storage space. 
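A compact sketch of the sign-nibble convention just described, packing a signed 7-digit value into four bytes with C (1100) for plus and D (1101) for minus; the helper name and layout are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack a value -9999999..+9999999 into 4 bytes of signed packed BCD:
     * seven digit nibbles followed by a sign nibble (C = +, D = -),
     * most significant digits in the lowest-indexed byte (big endian). */
    static void pack_bcd(long value, uint8_t out[4])
    {
        unsigned sign = 0xC;                 /* 1100 = positive        */
        unsigned long mag = (unsigned long)value;
        if (value < 0) { sign = 0xD; mag = (unsigned long)(-value); }

        unsigned long nibbles = sign;        /* sign in lowest nibble  */
        for (int i = 1; i < 8; i++) {        /* then seven BCD digits  */
            nibbles |= (mag % 10) << (4 * i);
            mag /= 10;
        }
        for (int i = 0; i < 4; i++)          /* big-endian byte order  */
            out[i] = (uint8_t)(nibbles >> (8 * (3 - i)));
    }

    int main(void)
    {
        uint8_t b[4];
        pack_bcd(-1234567, b);               /* expect 12 34 56 7D     */
        printf("%02X %02X %02X %02X\n", b[0], b[1], b[2], b[3]);
        return 0;
    }

Running this on −1,234,567 yields the bytes 12 34 56 7D, matching the worked example in the paragraph that follows.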
For example, a 4-byte (32-bit) word can hold seven decimal digits plus a sign and can represent values ranging from −9,999,999 to +9,999,999. Thus the number −1,234,567 is 7 digits wide and is encoded as 0001 0010 0011 0100 0101 0110 0111 1101 (hex 12 34 56 7D). Like character strings, the first byte of the packed decimal – that with the most significant two digits – is usually stored in the lowest address in memory, independent of the endianness of the machine. In contrast, a 4-byte binary two's complement integer can represent values from −2,147,483,648 to +2,147,483,647. While packed BCD does not make optimal use of storage (using about 20% more memory than binary notation to store the same numbers), conversion to ASCII , EBCDIC, or the various encodings of Unicode is made trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy and compatibility with calculator or hand calculation that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions. Packed BCD is supported in the COBOL programming language as the "COMPUTATIONAL-3" (an IBM extension adopted by many other compiler vendors) or "PACKED-DECIMAL" (part of the 1985 COBOL standard) data type. It is supported in PL/I as "FIXED DECIMAL". Beside the IBM System/360 and later compatible mainframes, packed BCD is implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation and some models of the SDS Sigma series mainframes, and is the native format for the Burroughs Medium Systems line of mainframes (descended from the 1950s Electrodata 200 series ). Ten's complement representations for negative numbers offer an alternative approach to encoding the sign of packed (and other) BCD numbers. In this case, positive numbers always have a most significant digit between 0 and 4 (inclusive), while negative numbers are represented by the 10's complement of the corresponding positive number. As a result, this system allows for 32-bit packed BCD numbers to range from −50,000,000 to +49,999,999, and −1 is represented as 99999999. (As with two's complement binary numbers, the range is not symmetric about zero.) Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I). These languages allow the programmer to specify an implicit decimal point in front of one of the digits. For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the fourth and fifth digits. The decimal point is not actually stored in memory, as the packed BCD storage format does not provide for it. Its location is simply known to the compiler, and the generated code acts accordingly for the various arithmetic operations. If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 2^10 (1,024) is greater than 10^3 (1,000), if three decimal digits are encoded together, only 10 bits are needed. Two such encodings are Chen–Ho encoding and densely packed decimal (DPD). The latter has the advantage that subsets of the encoding encode two digits in the optimal seven bits and one digit in four bits, as in regular BCD. Some implementations, for example IBM mainframe systems, support zoned decimal numeric representations. Each decimal digit is stored in one 8-bit [ nb 9 ] byte, with the lower four bits encoding the digit in BCD form. 
The upper four [ nb 10 ] bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit, or to values representing plus or minus. EBCDIC [ nb 11 ] systems use a zone value of 1111 (hex F), yielding F0 through F9 (hex), the codes for "0" through "9"; a zone value of 1100 (hex C) for positive, yielding C0 through C9, the codes for "{" through "I"; and a zone value of 1101 (hex D) for negative, yielding D0 through D9, the codes for "}" through "R". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex). For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123. (The characters corresponding to these byte values vary depending on the local character code page setting.) Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number. For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50. It is possible to perform addition by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 − 10) when the five-bit result of adding a pair of digits has a value greater than 9. The reason for adding 6 is that there are 16 possible 4-bit BCD values (since 2^4 = 16), but only 10 values are valid (0000 through 1001). For example, adding the digits 9 (1001) and 8 (1000) in binary yields 10001. This is the binary, not decimal, representation of the desired result (17), and the most significant 1 (the "carry") cannot fit in a 4-bit binary number. In BCD as in decimal, there cannot exist a value greater than 9 (1001) per digit. To correct this, 6 (0110) is added to the total: 10001 + 0110 = 10111, and the result is then treated as two nibbles, 0001 and 0111, which correspond to the digits "1" and "7". This yields "17" in BCD, which is the correct result. This technique can be extended to adding multiple digits by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9. Some CPUs provide a half-carry flag to facilitate BCD arithmetic adjustments following binary addition and subtraction operations. The Intel 8080 , the Zilog Z80 and the CPUs of the x86 family provide the opcode DAA (Decimal Adjust Accumulator). Subtraction is done by adding the ten's complement of the subtrahend to the minuend . To represent the sign of a number in BCD, the number 0000 is used to represent a positive number , and 1001 is used to represent a negative number . The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 − 432. In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 − 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD by the negative sign code, the number −432 can be represented. So, −432 in signed BCD is 1001 0101 0110 1000. 
Subtraction is done by adding the ten's complement of the subtrahend to the minuend . To represent the sign of a number in BCD, the number 0000 is used to represent a positive number , and 1001 is used to represent a negative number . The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 − 432. In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 − 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD by the negative sign code, the number −432 can be represented. So, −432 in signed BCD is 1001 0101 0110 1000. Now that both numbers are represented in signed BCD, they can be added together: 0000 0011 0101 0111 + 1001 0101 0110 1000 = 1001 1000 1011 1111. Since BCD is a form of decimal representation, several of the digit sums above are invalid (here, 1011 and 1111). In the event that an invalid entry (any BCD digit greater than 1001) exists, 6 is added to generate a carry bit and cause the sum to become a valid entry. Adding 6 to the invalid entries gives 1001 1001 0010 0101. Thus the result of the subtraction is 1001 1001 0010 0101 (−925). To confirm the result, note that the first digit is 9, which means negative. This seems correct, since 357 − 432 should result in a negative number. The remaining nibbles are BCD, so 1001 0010 0101 is 925. The ten's complement of 925 is 1000 − 925 = 75, so the calculated answer is −75. If there are a different number of nibbles being added together (such as 1053 − 2), the number with the fewer digits must first be prefixed with zeros before taking the ten's complement or subtracting. So, with 1053 − 2, 2 would first have to be represented as 0002 in BCD, and the ten's complement of 0002 would have to be calculated. IBM used the term Binary-Coded Decimal Interchange Code (BCDIC, sometimes just called BCD) for 6-bit alphanumeric codes that represented numbers, upper-case letters and special characters. Some variation of BCDIC alphamerics is used in most early IBM computers, including the IBM 1620 (introduced in 1959), the IBM 1400 series , and non- decimal architecture members of the IBM 700/7000 series . The IBM 1400 series are character-addressable machines, each location being six bits labeled B, A, 8, 4, 2 and 1, plus an odd parity check bit ( C ) and a word mark bit ( M ). For encoding digits 1 through 9 , B and A are zero and the digit value is represented by standard 4-bit BCD in bits 8 through 1 . For most other characters, bits B and A are derived simply from the "12", "11", and "0" "zone punches" in the punched card character code, and bits 8 through 1 from the 1 through 9 punches. A "12 zone" punch set both B and A , an "11 zone" set B , and a "0 zone" (a 0 punch combined with any others) set A . Thus the letter A , which is (12,1) in the punched card format, is encoded (B,A,1) . The currency symbol $ , (11,8,3) in the punched card, was encoded in memory as (B,8,2,1) . This allowed the circuitry that converts between the punched card format and the internal storage format to be very simple, with only a few special cases. One important special case is the digit 0 , represented by a lone 0 punch in the card, and by (8,2) in core memory. [ 43 ] The memory of the IBM 1620 is organized into 6-bit addressable digits, the usual 8, 4, 2, 1 plus F , used as a flag bit, and C , an odd parity check bit. BCD alphamerics are encoded using digit pairs, with the "zone" in the even-addressed digit and the "digit" in the odd-addressed digit, the "zone" being related to the 12 , 11 , and 0 "zone punches" as in the 1400 series. Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes. In the decimal architecture IBM 7070 , IBM 7072 , and IBM 7074 , alphamerics are encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit. Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
With the introduction of System/360 , IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable-length packed BCD numeric data type is also implemented, providing machine instructions that perform arithmetic directly on packed decimal data. On the IBM 1130 and 1800 , packed BCD is supported in software by IBM's Commercial Subroutine Package. Today, BCD data is still heavily used in IBM databases such as IBM Db2 and processors such as z/Architecture and POWER6 and later Power ISA processors. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), packed BCD (two decimal digits per byte), or "pure" BCD encoding (one decimal digit stored as BCD in the low four bits of each byte). All of these are used within hardware registers and processing units, and in software. The Digital Equipment Corporation VAX series includes instructions that can perform arithmetic directly on packed BCD data and convert between packed BCD data and other integer representations. [ 41 ] The VAX's packed BCD format is compatible with that on IBM System/360 and IBM's later compatible processors. The MicroVAX and later VAX implementations dropped this ability from the CPU but retained code compatibility with earlier machines by implementing the missing instructions in an operating system-supplied software library. This is invoked automatically via exception handling when the defunct instructions are encountered, so that programs using them can execute without modification on the newer machines. Many processors have hardware support for BCD-encoded integer arithmetic; examples include the 6502 , [ 44 ] [ 45 ] the Motorola 68000 series , [ 46 ] and the x86 series. [ 47 ] The Intel x86 architecture also supports a unique 18-digit (ten-byte) BCD format that can be loaded into and stored from the floating-point registers, in which computations can be performed. [ 48 ] In more recent computers such capabilities are almost always implemented in software rather than in the CPU's instruction set, but BCD numeric data are still extremely common in commercial and financial applications. There are tricks for implementing packed BCD and zoned decimal add-or-subtract operations using short but difficult-to-understand sequences of word-parallel logic and binary arithmetic operations; [ 49 ] code of this kind can, for example, compute a packed BCD addition using only 32-bit binary operations, as the sketch below illustrates. BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware: a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing with such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to an overall simpler system than converting to and from binary. Most pocket calculators do all their calculations in BCD. The same argument applies when hardware of this type uses an embedded microcontroller or other small processor.
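The original code block does not survive in this copy; the following is a hedged reconstruction of the well-known word-parallel technique. As written here it handles seven packed digits per 32-bit word, with the top nibble of each operand required to be zero so that it can receive a decimal carry out of the seventh digit; the function name and these conventions are assumptions of this sketch, not a quotation of the missing original:

```c
#include <stdint.h>

/* Word-parallel packed BCD addition: seven digits per operand, top
 * nibble of a and b must be zero (it receives any carry out of the
 * seventh digit). A reconstruction of the classic trick, as a sketch. */
uint32_t bcd_add_parallel(uint32_t a, uint32_t b)
{
    uint32_t t1 = a + 0x06666666;          /* bias every digit by 6        */
    uint32_t t2 = t1 + b;                  /* binary sum; digit sums >= 10
                                              now force a binary carry     */
    uint32_t t3 = t1 ^ b;                  /* sum bits without any carries */
    uint32_t t4 = t2 ^ t3;                 /* carry-in bit at each position */
    uint32_t t5 = ~t4 & 0x11111110;        /* digits that did NOT carry    */
    uint32_t t6 = (t5 >> 2) | (t5 >> 3);   /* turn each flag into a 6      */
    return t2 - t6;                        /* remove bias where unneeded   */
}
```

For example, bcd_add_parallel(0x0000009, 0x0000008) yields 0x0000017, the packed BCD encoding of 17, without any per-digit loop.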
Often, representing numbers internally in BCD format results in smaller code, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature dedicated arithmetic modes, which assist when writing routines that manipulate BCD quantities. [ 50 ] [ 51 ] Various BCD implementations exist that employ other representations for numbers. Programmable calculators manufactured by Texas Instruments , Hewlett-Packard , and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent. The extra bits of the sign digit may be used to indicate special numeric values, such as infinity , underflow / overflow , and error (a blinking display). Signed decimal values may be represented in several ways. The COBOL programming language, for example, supports five zoned decimal formats, each one encoding the numeric sign in a different way. 3GPP developed TBCD , [ 53 ] an expansion to BCD where the remaining (unused) bit combinations are used to add specific telephony symbols, [ 54 ] [ 55 ] similar to those in telephone keypad design. The cited 3GPP document defines TBCD-STRING with swapped nibbles in each byte. Bits, octets and digits are indexed from 1, bits from the right, digits and octets from the left: bits 8765 of octet n encode digit 2n, and bits 4321 of octet n encode digit 2(n − 1) + 1. This means that the number 1234 becomes 21 43 in TBCD. This format is used in modern mobile telephony to send dialed numbers, as well as the operator ID (the MCC/MNC tuple), IMEI , IMSI (SUPI), etc. [ 56 ] [ 57 ] If errors in representation and computation are more important than the speed of conversion to and from display, a scaled binary representation may be used, which stores a decimal number as a binary-encoded integer and a binary-encoded signed decimal exponent. For example, 0.2 can be represented as 2 × 10⁻¹. This representation allows rapid multiplication and division, but may require shifting by a power of 10 during addition and subtraction to align the decimal points. It is appropriate for applications with a fixed number of decimal places that do not then require this adjustment, particularly financial applications where 2 or 4 digits after the decimal point are usually enough. Indeed, this is almost a form of fixed-point arithmetic , since the position of the radix point is implied. The Hertz and Chen–Ho encodings provide Boolean transformations for converting groups of three BCD-encoded digits to and from 10-bit values [ nb 1 ] that can be efficiently encoded in hardware with only 2 or 3 gate delays. Densely packed decimal (DPD) is a similar scheme [ nb 1 ] that is used for most of the significand , except the lead digit, for one of the two alternative decimal encodings specified in the IEEE 754-2008 floating-point standard. The BIOS in many personal computers stores the date and time in BCD because the MC6818 real-time clock chip used in the original IBM PC AT motherboard provided the time encoded in BCD. This form is easily converted into ASCII for display, as the sketch below illustrates. [ 58 ] [ 59 ] The Atari 8-bit computers use a BCD format for floating point numbers. The MOS Technology 6502 processor has a BCD mode for the addition and subtraction instructions. The Psion Organiser 1 handheld computer's manufacturer-supplied software also used BCD to implement floating point; later Psion models use binary exclusively.
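As a small illustration of why BCD-coded clock values are easy to display, each nibble of such a byte is one decimal digit, so conversion to ASCII needs no division. This is a hedged sketch: the register value 0x59 and its meaning are assumed for the example, not read from real hardware:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t bcd = 0x59;                          /* e.g. 59 minutes, as an
                                                    MC6818-style register
                                                    might hold it         */
    char tens = '0' + (bcd >> 4);                /* high nibble -> '5'    */
    char ones = '0' + (bcd & 0x0F);              /* low nibble  -> '9'    */
    int  bin  = (bcd >> 4) * 10 + (bcd & 0x0F);  /* binary 59, if needed  */

    printf("%c%c (%d)\n", tens, ones, bin);      /* prints "59 (59)"      */
    return 0;
}
```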
Early models of the PlayStation 3 store the date and time in BCD. This led to a worldwide outage of the console on 1 March 2010. The last two digits of the year, stored as BCD, were misinterpreted as the binary value 16, causing an error in the unit's date and rendering most functions inoperable. This has been referred to as the Year 2010 problem . In the 1972 case Gottschalk v. Benson , the U.S. Supreme Court overturned a lower court 's decision that had allowed a patent for converting BCD-encoded numbers to binary on a computer. The decision noted that a patent "would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself". [ 60 ] This was a landmark judgement concerning the patentability of software and algorithms .
https://en.wikipedia.org/wiki/White_code
White etching cracks ( WEC ), or white structure flaking or brittle flaking , is a type of rolling contact fatigue (RCF) damage that can occur in bearing steels under certain conditions, such as hydrogen embrittlement , high stress , inadequate lubrication, and high temperature. WEC is characterised by the presence of white areas of microstructural alteration in the material, which can lead to the formation of small cracks that can grow and propagate over time, eventually leading to premature failure of the bearing. WEC has been observed in a variety of applications, including wind turbine gearboxes, automotive engines, and other heavy machinery. The exact mechanism of WEC formation is still a subject of research, but it is believed to be related to a combination of microstructural changes, such as phase transformations and grain boundary degradation, and cyclic loading . White etching cracks (WECs), first reported in 1996, [ 2 ] are cracks that can form in the microstructure of bearing steel , leading to the development of a network of branched white cracks. [ 3 ] They are usually observed in bearings that have failed due to rolling contact fatigue or accelerated rolling contact fatigue. [ 4 ] These cracks can significantly shorten the reliability and operating life of bearings, both in the wind power industry and in several other industrial applications. [ 5 ] [ 6 ] The exact cause of WECs and their significance in rolling bearing failures have been the subject of much research and discussion. [ 8 ] [ 6 ] Ultimately, the formation of WECs appears to be influenced by a complex interplay between material, mechanical, and chemical factors, [ 3 ] including hydrogen embrittlement, high stresses from sliding contact , inclusions , [ 9 ] electrical currents, [ 10 ] and temperature; all of these have been identified as potential drivers of WECs. [ 11 ] One of the most commonly cited potential causes of WECs is hydrogen embrittlement, arising from an unstable equilibrium between material, mechanical, and chemical aspects, [ 3 ] which occurs when hydrogen atoms diffuse into the bearing steel, causing micro-cracks to form. [ 8 ] Hydrogen can come from a variety of sources, including the hydrocarbon lubricant or water contamination, and it is often used in laboratory tests to reproduce WECs. [ 12 ] The generation of hydrogen from lubricants has been attributed to three primary factors: decomposition of lubricants through catalytic reactions with a fresh metal surface, breakage of molecular chains within the lubricant due to shear on the sliding surface, and thermal decomposition of lubricants caused by heat generation during sliding. [ 13 ] Hydrogen generation is influenced by lubricity, wear width, and the catalytic reaction of a fresh metal surface. [ 13 ] Stresses higher than anticipated can also accelerate rolling contact fatigue, which is a known precursor to WECs. [ 4 ] WECs begin below the surface during the initial phases of their formation, [ 14 ] particularly at non-metallic inclusions. As the period of sliding contact extends, these cracks spread from the subsurface region to the contact surface, ultimately leading to flaking. Furthermore, there is an observable rise in the extent of microstructural modification near the cracks, suggesting that the presence of the crack is a precursor to these alterations. [ 15 ] [ 12 ] The direction of sliding on the bearing surface played a significant role in WEC formation.
When the traction force opposed the direction of over-rolling (referred to as negative sliding), it consistently led to the development of WECs. Conversely, when the traction force aligned with the over-rolling direction (positive sliding), WECs did not manifest. The magnitude of sliding exerted a dominant influence on WEC formation. Tests conducted at a sliding-to-rolling ratio (SRR) of −30% consistently resulted in the generation of WECs, while no WECs were observed in tests at −5% SRR. Furthermore, the number of WECs appeared to correlate with variations in contact severity, including changes in surface roughness, rolling speed, and lubricant temperature. [ 16 ] One of the primary causes of WECs is the passage of electrical current through the bearings. Both alternating current (AC) and direct current (DC) can lead to the formation of WECs, albeit through slightly different mechanisms. In general, hydrogen generation from lubricants can be accelerated by electric current, potentially accelerating WEC formation. [ 17 ] Under certain conditions, when current densities are low (less than 1 mA/mm²), electrical discharges can significantly shorten the lifespan of bearings by causing WECs; such WECs can develop in under 50 hours due to electrical discharges . Electrostatic sensors prove useful in detecting these critical discharges early on, which are associated with failures induced by WECs. [ 18 ] Analysis has revealed that different reaction layers form in the examined areas, depending on the electrical polarity . [ 10 ] In the case of AC, the rapid change in polarity involves the creation of a plasma channel through the lubricant film in the bearing, leading to a momentary, intense discharge of energy. The localised heating and rapid cooling associated with these discharges can cause changes in the microstructure of the steel, leading to the formation of WEAs and WECs. [ 19 ] On the other hand, DC can cause a steady flow of electrons through the bearing. This can lead to the electrochemical dissolution of the metal, a process known as fretting corrosion . The constant flow of current can also cause local heating, leading to thermal gradients within the bearing material. These gradients can cause stresses that lead to the formation of WECs. [ 19 ] WECs are sub-surface networks of white cracks embedded in local microstructural changes known as white etching areas (WEAs). [ 3 ] The term "white etching" refers to the white appearance of the altered microstructure of a polished and etched steel sample in the affected areas. [ 20 ] The WEA is formed by amorphisation ( phase transformation ) of the martensitic microstructure due to friction at the crack faces during over-rolling, [ 21 ] and these areas appear white under an optical microscope due to their low-etching response to the etchant. [ 22 ] [ 23 ] [ 24 ] The microstructure of WECs consists of ultra-fine, nano-crystalline , carbide-free ferrite , or ferrite with a very fine distribution of carbide particles, exhibiting a high degree of crystallographic misorientation. [ 25 ] [ 26 ] WEC propagation is mostly transgranular [ 27 ] and does not follow a particular cleavage plane . [ 28 ] Researchers have observed three distinct types of microstructural alteration near the generated cracks: uniform white etching areas (WEAs), thin elongated regions of dark etching areas (DEAs), and mixed regions comprising both light and dark etching areas with some misshapen carbides.
[ 16 ] During repeated stress cycles, the position of the crack constantly shifts, leaving behind an area of intense plastic deformation composed of nano-grains of ferrite , martensite , austenite (due to austenitization ) and carbides , i.e., WEAs. [ 29 ] [ 26 ] The microscopic displacement of the crack plane in a single stress cycle accumulates over repeated stress cycles to form micron-sized WEAs. After the initial development of a fatigue crack around inclusions, the faces of the crack rub against each other during cycles of compressive stress. This results in the creation of WEAs through localised intense plastic deformation . It also causes partial bonding of the opposing crack faces and material transfer between them. Consequently, the WEC reopens at a slightly different location compared to its previous position during the release of stress. [ 30 ] Furthermore, it has been acknowledged that WEA is one of the phases that arise from different processes and is generally observed as a result of a phase transformation in rolling contact fatigue . [ 26 ] WEA is harder than the surrounding matrix. [ 29 ] Additionally, WECs are caused by stresses higher than anticipated and occur due to bearing rolling contact fatigue as well as accelerated rolling contact fatigue. [ 4 ] WECs in bearings are accompanied by white etching matter (WEM). WEM forms asymmetrically along WECs. There are no significant microstructural differences between the untransformed material adjacent to the cracking and the parent material, although WEM exhibits variable carbon content and increased hardness compared to the parent material. A study in 2019 suggested that WEM may initiate ahead of the crack, challenging the conventional crack-rubbing mechanism. [ 31 ] The triple-disc rolling contact fatigue (RCF) rig is a specialised testing apparatus used in the fields of tribology and materials science to evaluate the fatigue resistance and durability of materials subjected to rolling contact. [ 32 ] This rig is designed to simulate the conditions encountered in various mechanical systems, such as rolling bearings, gears, and other components exposed to repeated rolling and sliding motions. The rig typically consists of three discs or rollers arranged in a specific configuration. [ 33 ] These discs can represent the interacting components of interest, such as a rolling bearing. The rig also allows precise control over the loading conditions, including the magnitude of the load, contact pressure, and contact geometry. [ 15 ] [ 8 ] The PCS Instruments Micro-pitting Rig (MPR) is a specialised testing instrument used in the fields of tribology and mechanical engineering to study micro- pitting , a type of surface damage that occurs in lubricated rolling and sliding contact systems. The MPR is designed to simulate real-world operating conditions by subjecting test specimens, often gears or rolling bearings, to controlled rolling and sliding contact under lubricated conditions. [ 16 ] Offshore wind turbines are subject to challenging environmental conditions, including corrosive saltwater, high wind forces, and potential electrical currents. These conditions can contribute to bearing failures and impact the reliability and maintenance of wind turbines. [ 6 ] [ 11 ] Several factors can lead to bearing failures, such as corrosion , fatigue , wear , improper lubrication, and high electric currents, underscoring the need for improved materials and designs to ensure the longevity and performance of bearings in offshore wind turbines.
[ 34 ] [ 35 ] [ 36 ] WECs negatively affect the reliability of bearings, not only in the wind industry but also in various other industrial applications such as electric motors, paper machines, industrial gearboxes, pumps, ship propulsion systems, and the automotive sector. [ 37 ] [ 38 ] Some 60% of wind turbine failures have been linked to WECs. [ 39 ] [ 40 ] [ 41 ] In October 2018, a workshop on WECs was organised in Düsseldorf by a junior research group funded by the German Federal Ministry of Education and Research (BMBF). Representatives from academia and industry gathered to discuss the mechanisms behind WEC formation in wind turbines , focusing on the fundamental material processes causing this phenomenon. [ 42 ]
https://en.wikipedia.org/wiki/White_etching_cracks
The white feather is a widely recognised propaganda symbol. [ 1 ] [ 2 ] It was most prominently used in the ' white feather movement ' in Britain during the First World War , in which women gave white feathers to men who had not enlisted, symbolising cowardice and shaming them into signing up. Beyond the white feather movement, it has, among other things, represented cowardice or conscientious pacifism , as in A. E. W. Mason's 1902 book The Four Feathers . In the United States armed forces, however, it is used to signify extraordinary bravery and excellence in combat marksmanship. The use of the phrase "white feather" to symbolise cowardice is attested from the late 18th century, according to the Oxford English Dictionary . The OED cites A Classical Dictionary of the Vulgar Tongue (1785), in which lexicographer Francis Grose wrote "White feather, he has a white feather, he is a coward, an allusion to a game cock, where having a white feather, is a proof he is not of the true game breed". [ 3 ] This was in the context of cockfighting , a common entertainment in Georgian England . Shame was exerted upon men in England and France who had not taken the cross at the time of the Third Crusade : "A great many men sent each other wool and distaff, hinting that if anyone failed to join this military undertaking they were only fit for women's work". [ 4 ] Wool played an important role in the medieval economy , and a distaff is a tool for spinning the raw material into yarn; the activities of textile production were so firmly associated with girls and women that "distaff" became a metonym for women's work . In Britain, Admiral Charles Penrose-Fitzgerald founded in August 1914 what became known as the "Order of the White Feather", which organised groups of young women to hand out white feathers to men in civilian attire in public places. [ 5 ] This was intended to shame them into enlisting in military service. The practice of presenting a white feather, personally or by mail, also occurred in other areas of the British Empire such as Australia [ 6 ] and New Zealand. [ 7 ] The white feather campaign was renewed during World War II . [ 8 ] [ 9 ] In contrast, the white feather has been used by some pacifist organisations as an icon of abstinence from violence. In the 1870s, the Māori prophet of passive resistance Te Whiti o Rongomai promoted the wearing of white feathers by his followers at Parihaka . They are still worn by the iwi associated with that area, and by Te Āti Awa in Wellington . They are known as te raukura , which literally means the red feather but metaphorically the chiefly feather. They are usually three in number, interpreted as standing for "glory to God, peace on earth, goodwill toward people" (Luke 2:14). Albatross feathers are preferred, but any white feathers will do. They are usually worn in the hair or on the lapel (but not from the ear). Some time after the war, pacifists found an alternative interpretation of the white feather as a symbol of peace. The apocryphal story goes that in 1775, Quakers in a Friends meeting house in Easton, New York , were faced by a tribe of Indians on the war path. Rather than flee, the Quakers fell silent and waited. The Indian chief came into the meeting house and, finding no weapons, declared the Quakers his friends. On leaving, he took a white feather from his quiver and attached it to the door as a sign to leave the building unharmed. [ 10 ] In 1937 the Peace Pledge Union sold 500 white feather badges as symbols of peace.
Anecdotal accounts from the time indicate that the campaign was unpopular among soldiers, not least because soldiers who were home on leave could find themselves presented with feathers. [ 5 ] [ 6 ] Some men who had volunteered to enlist but been rejected on medical or other grounds were also presented with white feathers. [ 11 ] [ 12 ] In Australia this led to the formation of groups such as the Rejected Volunteers' Association [ 13 ] and the Australian Patriots' League, and to the wearing of a badge to recognise their patriotism. [ 14 ] Other recipients were not of enlistment age [ 15 ] or were performing other important war work. [ 16 ] In the United States, the white feather has also become a symbol of courage, persistence, and superior combat marksmanship. Its most notable wearer was US Marine Corps Gunnery Sergeant Carlos Hathcock , who was awarded the Silver Star medal for bravery during the Vietnam War . Hathcock picked up a white feather on a mission and wore it in his hat to taunt the enemy. He was so feared by enemy troops that they put a price on his head. Worn on combat headgear, the feather flaunts an insultingly easy target to enemy snipers. [ 19 ]
https://en.wikipedia.org/wiki/White_feather
In general relativity , a white hole is a hypothetical region of spacetime and singularity that cannot be entered from the outside, although energy , matter , light and information can escape from it. In this sense, it is the reverse of a black hole , from which energy, matter, light and information cannot escape. White holes appear in the theory of eternal black holes . In addition to a black hole region in the future, such a solution of the Einstein field equations has a white hole region in its past. [ 1 ] This region does not exist for black holes that have formed through gravitational collapse , however, nor are there any observed physical processes through which a white hole could be formed. Supermassive black holes (SMBHs) are theoretically predicted to be at the center of every galaxy and may be essential for their formation. Stephen Hawking [ 2 ] and others have proposed that these supermassive black holes could spawn supermassive white holes. [ 3 ] Like black holes, white holes have properties such as mass , charge , and angular momentum . They attract matter like any other mass, but objects falling towards a white hole would never actually reach the white hole's event horizon (though in the case of the maximally extended Schwarzschild solution , discussed below, the white hole event horizon in the past becomes a black hole event horizon in the future, so any object falling towards it will eventually reach the black hole horizon). A black hole can be pictured as a gravitational field without a surface: for an ordinary body, the acceleration due to gravity is greatest at its surface, but since a black hole has no surface, the acceleration keeps increasing as the singularity is approached and never reaches a final value. In quantum mechanics , a black hole emits Hawking radiation and so can come to thermal equilibrium with a gas of radiation (though it need not). Because a thermal-equilibrium state is time-reversal-invariant, Stephen Hawking argued that the time reversal of a black hole in thermal equilibrium results in a white hole in thermal equilibrium (each absorbing and emitting energy to equivalent degrees). [ 4 ] Consequently, this may imply that black holes and white holes are reciprocal in structure, wherein the Hawking radiation from an ordinary black hole is identified with a white hole's emission of energy and matter. Hawking's semi-classical argument is reproduced in a quantum mechanical AdS/CFT treatment, [ 5 ] where a black hole in anti-de Sitter space is described by a thermal gas in a gauge theory , whose time reversal is the same as itself. In the 1930s, physicists Robert Oppenheimer and Hartland Snyder introduced the idea of such objects as a solution to Einstein's equations of general relativity . These equations, a foundation of modern physics, describe the curvature of spacetime due to massive objects. Whereas black holes are born from the collapse of stars, white holes represent the theoretical birth of space, time, and potentially even universes. At the center, space and time do not end in a singularity, but continue across a short transition region where the Einstein equations are violated by quantum effects. From this region, space and time emerge with the structure of a white hole interior, a possibility already suggested by John Lighton Synge . [ 6 ] The possibility of the existence of white holes was put forward by cosmologist Igor Novikov in 1964 [ 7 ] and developed by Nikolai Kardashev .
[ 8 ] White holes are predicted as part of a solution to the Einstein field equations known as the maximally extended version of the Schwarzschild metric , describing an eternal black hole with no charge and no rotation. Here, "maximally extended" implies that spacetime should not have any "edges": for any possible trajectory of a free-falling particle (following a geodesic ) in spacetime, it should be possible to continue this path arbitrarily far into the particle's future, unless the trajectory hits a gravitational singularity like the one at the center of the black hole's interior. In order to satisfy this requirement, it turns out that in addition to the black hole interior region that particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region, which allows us to extrapolate the trajectories of particles that an outside observer sees rising up away from the event horizon. For an observer outside using Schwarzschild coordinates , infalling particles take an infinite time to reach the black hole horizon infinitely far in the future, while outgoing particles that pass the observer have been traveling outward for an infinite time since crossing the white hole horizon infinitely far in the past (however, the particles or other objects experience only a finite proper time between crossing the horizon and passing the outside observer). The black hole/white hole appears "eternal" from the perspective of an outside observer, in the sense that particles traveling outward from the white hole interior region can pass the observer at any time, and particles traveling inward, which will eventually reach the black hole interior region, can also pass the observer at any time. Just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. This means that the interior black-hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see light that fell in from the other one), and likewise particles from the interior white-hole region can escape into either universe. All four regions can be seen in a spacetime diagram that uses Kruskal–Szekeres coordinates (see figure). [ 9 ] In this spacetime, it is possible to come up with coordinate systems such that if one picks a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') and draws an "embedding diagram" depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein–Rosen bridge" or Schwarzschild wormhole . [ 9 ] Depending on where the space-like hypersurface is chosen, the Einstein–Rosen bridge can either connect two black hole event horizons in each universe (with points in the interior of the bridge being part of the black hole region of the spacetime), or two white hole event horizons in each universe (with points in the interior of the bridge being part of the white hole region).
It is impossible to use the bridge to cross from one universe to the other, however, because it is impossible to enter a white hole event horizon from the outside, and anyone entering a black hole horizon from either universe will inevitably hit the black hole singularity. Note that the maximally extended Schwarzschild metric describes an idealized black hole/white hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. When the infalling stellar matter is added to a diagram of a black hole's history, it removes the part of the diagram corresponding to the white hole interior region. [ 10 ] But because the equations of general relativity are time-reversible – they exhibit time reversal symmetry – general relativity must also allow the time-reverse of this type of "realistic" black hole that forms from collapsing matter. The time-reversed case would be a white hole that has existed since the beginning of the universe, and that emits matter until it finally "explodes" and disappears. [ 11 ] Despite the fact that such objects are permitted theoretically, they are not taken as seriously as black holes by physicists, since there would be no processes that would naturally lead to their formation; they could exist only if they were built into the initial conditions of the Big Bang . [ 11 ] Additionally, it is predicted that such a white hole would be highly "unstable" in the sense that if any small amount of matter fell towards the horizon from the outside, this would prevent the white hole's explosion as seen by distant observers, with the matter emitted from the singularity never able to escape the white hole's gravitational radius. [ 12 ] Depending on the type of black hole solution considered, there are several types of white holes. In the case of the Schwarzschild black hole mentioned above, a geodesic coming out of a white hole comes from the gravitational singularity it contains. In the case of a black hole possessing an electric charge ( Reissner–Nordström black hole ) or an angular momentum , the white hole happens to be the "exit door" of a black hole existing in another universe. Such a black hole – white hole configuration is called a wormhole . In both cases, however, it is not possible to reach the region "in" the white hole, so its behavior – and, in particular, what may come out of it – is completely impossible to predict. In this sense, a white hole is a configuration under which the evolution of the universe cannot be predicted, because it is not deterministic. A "bare singularity" is another example of a non-deterministic configuration, but it does not have the status of a white hole, because there is no region inaccessible from a given region. In its basic conception, the Big Bang can be seen as a naked singularity, but it does not correspond to a white hole. [ 13 ] In its mode of formation, a black hole comes from the residue of a massive star whose core contracts until it turns into a black hole. Such a configuration is not static: we start from a massive and extended body which contracts to give a black hole. The black hole therefore does not exist for all eternity, and there is no corresponding white hole. To be able to exist, a white hole must either arise from a physical process leading to its formation, or be present from the creation of the universe .
None of these solutions appears satisfactory: there is no known astrophysical process that can lead to the formation of such a configuration, and imposing it from the creation of the universe amounts to assuming a very specific set of initial conditions which has no concrete motivation. In view of the enormous quantities of energy radiated by quasars , whose luminosity makes it possible to observe them from several billion light-years away, it had been assumed that they were the seat of exotic physical phenomena such as a white hole, or a phenomenon of continuous creation of matter (see the article on the steady state theory ). These ideas are now abandoned, the observed properties of quasars being very well explained by those of an accretion disk at the center of which is a supermassive black hole . [ 13 ] A view of black holes first proposed in the late 1980s might be interpreted as shedding some light on the nature of classical white holes. Some researchers have proposed that when a black hole forms, a Big Bang may occur at the core/ singularity , which would create a new universe that expands outside of the parent universe . [ 14 ] [ 15 ] [ 16 ] The Einstein–Cartan–Sciama–Kibble theory of gravity extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor , as a dynamical variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum ( spin ) of matter. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, however, the minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity. Instead, the collapsing matter on the other side of the event horizon reaches an enormous but finite density and rebounds, forming a regular Einstein–Rosen bridge. [ 17 ] The other side of the bridge becomes a new, growing baby universe. For observers in the baby universe, the parent universe appears as the only white hole. Accordingly, the observable universe is the Einstein–Rosen interior of a black hole existing as one of possibly many inside a larger universe. The Big Bang was a nonsingular Big Bounce at which the observable universe had a finite, minimum scale factor. [ 18 ] Shockwave cosmology , proposed by Joel Smoller and Blake Temple in 2003, has the "big bang" as an explosion inside a black hole, producing the expanding volume of space and matter that includes the observable universe. [ 19 ] This black hole eventually becomes a white hole as the matter density reduces with the expansion. A related theory gives an alternative to dark energy . [ 20 ] A 2012 paper argues that the Big Bang itself is a white hole. [ 21 ] It further suggests that the emergence of a white hole, which the authors named a "Small Bang", is spontaneous: all the matter is ejected in a single pulse. Thus, unlike black holes, white holes cannot be continuously observed; rather, their effects can be detected only around the event itself. The paper even proposed identifying a new group of gamma-ray bursts with white holes.
Unlike black holes, which arise through the well-studied physical process of gravitational collapse (occurring when a star somewhat more massive than the Sun exhausts its nuclear "fuel"), white holes have no clear analogous process that leads reliably to their production, although some hypotheses have been put forward. At present, very few scientists believe in the existence of white holes, and the idea is considered only a mathematical exercise with no real-world counterpart. [ 27 ]
https://en.wikipedia.org/wiki/White_hole
A white light scanner ( WLS ) is a device for performing surface height measurements of an object using coherence scanning interferometry ( CSI ) with spectrally-broadband, "white light" illumination. Different configurations of scanning interferometer may be used, from those measuring macroscopic objects with surface profiles in the centimeter range to those measuring microscopic objects with surface profiles in the micrometer range. For large-scale non-interferometric measurement systems, see structured-light 3D scanner . Vertical scanning interferometry (VSI) is an example of low-coherence interferometry, which exploits the low coherence of white light. Interference will only be achieved when the path length delays of the interferometer are matched within the coherence time of the light source. VSI monitors the fringe contrast rather than the shape of the fringes. Fig. 2 illustrates a Twyman–Green interferometer set up for white light scanning of a macroscopic object. Light from the test specimen is mixed with light reflected from the reference mirror to form an interference pattern. Fringes appear in the CCD image only where the optical path lengths differ by less than half the coherence length of the light source, which is generally on the order of micrometers. The interference signal (correlogram) is recorded and analyzed as either the specimen or the reference mirror is scanned. The focus position of any particular point on the surface of the specimen corresponds to the point of maximum fringe contrast (i.e. where the modulation of the correlogram is greatest). Fig. 3 illustrates a white light interferometric microscope using a Mirau interferometer in the objective. Other forms of interferometer used with white light include the Michelson interferometer (for low magnification objectives, where the reference mirror in a Mirau objective would interrupt too much of the aperture) and the Linnik interferometer (for high magnification objectives with limited working distance). [ 1 ] The objective (or alternatively, the sample) is moved vertically over the full height range of the sample, and the position of maximum fringe contrast is found for each pixel. [ 2 ] [ 3 ] The chief benefit of low-coherence interferometry is that systems can be designed that do not suffer from the 2π ambiguity of coherent interferometry, [ 4 ] [ 5 ] [ 6 ] and as seen in Fig. 1, which shows a scan over a 180 μm × 140 μm × 10 μm volume, it is well suited to profiling steps and rough surfaces. The axial resolution of the system is determined by the coherence length of the light source and is typically in the micrometer range. [ 7 ] [ 8 ] [ 9 ] Industrial applications include in-process surface metrology , roughness measurement, 3D surface metrology in hard-to-reach spaces and in hostile environments, profilometry of surfaces with high aspect ratio features (grooves, channels, holes), and film thickness measurement (semiconductor and optical industries, etc.). [ 10 ] White-light interferometry scanning (WLS) systems capture intensity data at a series of positions along the vertical axis , determining where the surface is located by using the shape of the white-light interferogram, the localized phase of the interferogram, or a combination of both shape and phase.
The white light interferogram consists of the superposition of fringes generated by multiple wavelengths: the red portion of the object beam interferes with the red portion of the reference beam , the blue interferes with the blue, and so forth, with peak fringe contrast obtained as a function of scan position. In a WLS system, an imaging interferometer is vertically scanned to vary the optical path difference . During this process, a series of interference patterns is formed at each pixel in the instrument field of view . This results in an interference function, with interference varying as a function of optical path difference. The data are stored digitally and processed in a variety of ways depending on the system manufacturer, including being Fourier-transformed into frequency space, subjected to cross-correlation methods, or analyzed in the spatial domain. If a Fourier transform is used, the original intensity data are expressed in terms of interference phase as a function of wavenumber . The wavenumber k is a representation of wavelength in the spatial frequency domain, defined by k = 2π/λ. If phase is plotted versus wavenumber, the slope of the function corresponds to the relative change in the group-velocity optical path difference D_G, giving a height change D_h = D_G/(2n_G), where n_G is the group-velocity index of refraction . If this calculation is performed for each pixel, a three-dimensional surface height map emerges from the data. In the actual measuring process, the optical path difference is steadily increased by scanning the objective vertically using a precision mechanical stage or piezoelectric positioner. Interference data are captured at each step in the scan. In effect, an interferogram is captured as a function of vertical position for each pixel in the detector array. To sift through the large amount of data acquired over long scans, many different techniques can be employed. Most methods allow the instrument to reject raw data that do not exhibit a sufficient signal-to-noise ratio. The intensity data as a function of the optical path difference are processed and converted to height information of the sample.
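As a sketch of the phase-slope relation above, assuming a simple double-pass reflection geometry (so the optical path is twice the surface height) and a group index n_G that is constant over the source bandwidth; the symbols φ(k) and Δh are illustrative, not the instrument vendors' notation:

\[ k = \frac{2\pi}{\lambda}, \qquad \varphi(k) \approx 2\, n_G\, k\, \Delta h \quad\Longrightarrow\quad \Delta h = \frac{1}{2 n_G}\,\frac{\mathrm{d}\varphi}{\mathrm{d}k} \]

In practice, a linear fit of the unwrapped phase against wavenumber at each pixel then yields the local surface height directly from the slope.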
https://en.wikipedia.org/wiki/White_light_scanner
White matter dissection refers to a special anatomical technique able to reveal the subcortical organization of white matter fibers in the human or animal cadaver brain. The first studies of cerebral white matter (WM) were described by Galen and in the subsequent efforts of Vesalius on human cadaver specimens. [ 2 ] [ 3 ] The interest in the deep anatomy of the brain pushed anatomists over the centuries to create and develop different techniques for specimen preparation and dissection in order to better reveal the complex architectural organization of the white matter. [ 2 ] [ 3 ] [ 4 ] [ 5 ] However, the biggest impact on the dissection of white matter anatomy was made by Joseph Klingler, who developed a new method for specimen preparation and dissection. This technique became feasible and widely used due to its increased quality of dissection and surprising level of anatomical detail. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Klingler developed a new method of brain fixation, freezing already formalin-fixed brains before dissection. [ 6 ] [ 7 ] First, the water crystallization induced by freezing disrupts the structure of the grey matter (which has a high water content). This makes it possible to peel the cortex off the brain surface without damaging the subcortical white matter organization underneath. Second, the freezing along the WM fibers induces a clear separation between them, facilitating the dissection by progressive peeling of the fibers. [ 4 ] [ 8 ] [ 9 ] White matter fibre dissection is nowadays considered a valuable tool for enhancing our knowledge of brain connectivity, [ 5 ] [ 9 ] [ 10 ] [ 1 ] and has been used to validate tractographic results (and vice versa) with good consistency between the two techniques, [ 11 ] as well as for neurosurgical training and neuroanatomical teaching.
https://en.wikipedia.org/wiki/White_matter_dissection
The white metals are a series of often decorative bright metal alloys used as a base for plated silverware , ornaments or novelties, as well as any of several lead -based or tin -based alloys used for things like bearings , jewellery , miniature figures , fusible plugs , some medals and metal type . [ 1 ] The term is also used in the antiques trade for an item suspected of being silver but not hallmarked . A white metal alloy may include antimony , tin , lead , cadmium , bismuth , and zinc (some of which are quite toxic). Not all of these metals are found in all white metal alloys. Metals are mixed to achieve a desired goal or need. As an example, a base metal for jewellery needs to be castable and polishable , to have good flow characteristics, to be able to reproduce fine detail without an excessive amount of porosity, and to cast at between 230 and 300 °C (446 and 572 °F ). [ 1 ] In compliance with British law, the British fine art trade uses the term "white metal" in auction catalogues to describe foreign silver items which do not carry British Assay Office hallmarks , but which are nonetheless understood to be silver and are priced accordingly. Tin-lead and tin- copper alloys such as Babbitt metal [ 2 ] have a low melting point, which is ideal for use as solder , but these alloys also have ideal characteristics for plain bearings . Most importantly for bearings, the material should be hard, wear-resistant and have a low coefficient of friction. It must also be shock-resistant, tough and sufficiently ductile to allow for slight misalignment prior to running-in. Pure metals are soft, tough and ductile, with a high coefficient of friction. Intermetallic compounds are hard and wear-resistant but brittle. By themselves, neither makes an ideal bearing material. Alloys consist of small particles of a hard compound embedded in the tough, ductile background of a solid solution. In service, the latter can wear away slightly, leaving the hard compound to carry the load; that wear also provides channels that admit lubricant ( oil ). All bearing metals contain antimony (Sb), which forms hard cubic crystals. White metals are commonly used in bearings and bushings because of their high load-bearing capacity and self-lubricating properties, which reduce friction and extend the lifespan of these components. [ 3 ] [ 4 ] In the automotive industry , they are found in engine components like crankshaft and connecting rod bearings. [ 5 ] Additionally, white metal is popular in jewellery and decorative items due to its cost-effectiveness and ability to mimic more expensive metals like silver . [ 6 ] In the printing industry, white metal alloys were historically used to cast typefaces. [ 7 ]
https://en.wikipedia.org/wiki/White_metal
White phosphorus , yellow phosphorus , or simply tetraphosphorus (P 4 ) is an allotrope of phosphorus . It is a translucent waxy solid that quickly yellows in light (due to its photochemical conversion into red phosphorus ), [ 2 ] and impure white phosphorus is for this reason called yellow phosphorus. White phosphorus was the first allotrope of phosphorus to be discovered, and in fact the first elementary substance to be discovered that had not been known since ancient times. [ 3 ] It glows greenish in the dark (when exposed to oxygen) and is highly flammable and pyrophoric (self-igniting) upon contact with air. It is toxic , causing severe liver damage on ingestion and phossy jaw from chronic ingestion or inhalation. Its combustion has a characteristic garlic odor, and samples are commonly coated with white " diphosphorus pentoxide ", which consists of P 4 O 10 tetrahedra with oxygen inserted between the phosphorus atoms and at their vertices. White phosphorus is only slightly soluble in water and can be stored under water. P 4 is soluble in benzene , oils , carbon disulfide , and disulfur dichloride . White phosphorus exists as molecules of four phosphorus atoms in a tetrahedral structure, joined by six phosphorus–phosphorus single bonds . The tetrahedral arrangement results in ring strain and instability. [ 4 ] Although both are called "white phosphorus", in fact two different crystal allotropes are known, interchanging reversibly at 195.2 K. [ 5 ] The element's standard state is the body-centered cubic α form, which is actually metastable under standard conditions . [ 4 ] The β form is believed to have a hexagonal crystal structure. [ 5 ] Molten and gaseous white phosphorus also retain the tetrahedral molecules, until 800 °C (1,500 °F; 1,100 K), when decomposition to P 2 molecules begins. [ 6 ] The P 4 molecule in the gas phase has a P–P bond length of r g = 2.1994(3) Å, as determined by gas electron diffraction . [ 7 ] The β form of white phosphorus contains three slightly different P 4 molecules, i.e. 18 different P–P bond lengths, between 2.1768(5) and 2.1920(5) Å. The average P–P bond length is 2.183(5) Å. [ 6 ] Although white phosphorus is not the most stable allotrope of phosphorus, its molecular nature allows it to be easily purified; it is therefore taken as the reference state and defined to have a zero enthalpy of formation . In base , white phosphorus spontaneously disproportionates to phosphine and various phosphorus oxyacid salts. [ 8 ] Many reactions of white phosphorus involve insertion into the P–P bonds, such as the reactions with oxygen, sulfur, phosphorus tribromide and the NO + ion . It ignites spontaneously in air at about 50 °C (122 °F), and at much lower temperatures if finely divided (due to melting-point depression ). Phosphorus reacts with oxygen, usually forming two oxides depending on the amount of available oxygen: P 4 O 6 ( phosphorus trioxide ) when reacted with a limited supply of oxygen, and P 4 O 10 when reacted with excess oxygen. On rare occasions, P 4 O 7 , P 4 O 8 , and P 4 O 9 are also formed, but in small amounts. This combustion gives phosphorus(V) oxide: P 4 + 5 O 2 → P 4 O 10 . The white allotrope can be produced using several methods. In the industrial process, phosphate rock is heated in an electric or fuel-fired furnace in the presence of carbon and silica . [ 9 ] Elemental phosphorus is then liberated as a vapour and can be collected under phosphoric acid .
An idealized equation for this carbothermal reaction is shown for calcium phosphate (although phosphate rock contains substantial amounts of fluoroapatite , which would also form silicon tetrafluoride ): 2 Ca 3 (PO 4 ) 2 + 6 SiO 2 + 10 C → 6 CaSiO 3 + P 4 + 10 CO. In this way, an estimated 750,000 tons were produced in 1988. [ 10 ] Most (83% in 1988) white phosphorus is used as a precursor to phosphoric acid, half of which is used for food or medical products where purity is important; the other half is used for detergents. Much of the remaining 17% is mainly used for the production of the chlorinated compounds phosphorus trichloride , phosphorus oxychloride , and phosphorus pentachloride . [ 11 ] Other products derived from white phosphorus include phosphorus pentasulfide and various metal phosphides. [ 10 ] Although white phosphorus forms the tetrahedron , the simplest possible Platonic solid , no other polyhedral phosphorus clusters are known. [ 12 ] White phosphorus converts to the thermodynamically more stable red allotrope, but that allotrope does not consist of isolated polyhedra. A cubane-type cluster , in particular, is unlikely to form, [ 12 ] and the closest approach is the half-phosphorus compound P 4 (CH) 4 , produced from phosphaalkynes . [ 13 ] Other clusters are more thermodynamically favorable, and some have been partially formed as components of larger polyelemental compounds. [ 12 ] White phosphorus is acutely toxic, with a lethal dose of 50–100 mg (1 mg/kg body weight). Its mode of action is not known, but is thought to involve its reducing properties, possibly forming intermediate reducing compounds such as hypophosphite, phosphite, and phosphine. It damages the liver, kidneys, and other organs before eventually being metabolized to non-toxic phosphate. Chronic low-level exposure leads to tooth loss and phossy jaw, which appears to be caused by the formation of amino bisphosphonates . [ 10 ] [ 14 ] [ 15 ] White phosphorus is used as a weapon because it is pyrophoric; for the same reason, it is dangerous to handle. Measures are taken to protect samples from air, since white phosphorus reacts with oxygen at ambient temperatures, and even in small samples this can lead to self-heating and eventual combustion. There are anecdotal reports of problems for beachcombers who may collect washed-up samples while unaware of their true nature. [ 16 ] [ 17 ]
https://en.wikipedia.org/wiki/White_phosphorus
White pox disease (also " acroporid serratiosis " and " patchy necrosis "), first noted in 1996 on coral reefs near the Florida Keys , is a coral disease affecting Elkhorn coral ( Acropora palmata ) throughout the Caribbean . It causes irregular white patches or blotches on the coral that result from the loss of coral tissue . These patches distinguish white pox disease from white band disease , which produces a distinctive white band where the coral skeleton has been denuded. The blotches caused by this disease are also clearly differentiated from coral bleaching and scars caused by coral-eating snails. [ 1 ] It is very contagious, spreading to nearby coral. [ 2 ] At the locations where white pox disease has been observed, it is estimated to have reduced the living tissue in elkhorn corals by 50–80%. [ 3 ] In the Florida Keys National Marine Sanctuary (FKNMS), the losses of living coral are estimated to average around 88%. [ 1 ] Elkhorn coral was formerly the dominant shallow-water reef-building coral throughout the Caribbean but is now listed as a threatened species, due in part to the disease. [ 4 ] Elkhorn coral is the first species of coral to be listed as threatened in the United States . [ 5 ] The pathogen responsible is believed to be Serratia marcescens , a common intestinal bacterium found in humans and other animals. [ 1 ] [ 6 ] This is the first time the bacterium has been linked to the death of coral. [ 7 ] The specific source of the bacteria that is killing the coral is currently unknown. As well as being a part of human and animal gut flora , S. marcescens can live in soil and water as a "free living" microbe . [ 7 ] Research is needed to find and confirm the exact source(s) of the pathogen; possible sources include sewage treatment plant effluent, marine fish feces, and seabird guano. [ 1 ]
https://en.wikipedia.org/wiki/White_pox_disease
Whiteboard animation is the process by which an author physically draws and records an illustrated story using a whiteboard , or whiteboard-like surface, and marker pens . The animations are frequently aided by scripted narration. Authors commonly use time-lapse drawing and stop-motion animation to enliven hand-drawn illustrations, with YouTube used as a common platform. It is also used in television and internet advertisements to communicate with consumers in a personal way. The earliest whiteboard animation videos were published on YouTube in 2009, used mostly for experimental purposes before developing into a storytelling device, focusing mostly on narratives and educational explanations. "Whiteboard animation" refers to a specific type of presentation that uses the process of creating a series of drawn pictures on a whiteboard that are recorded in sequence and then played back to create an animated presentation. The actual effect of whiteboard animation is time-lapse or stop-motion; true sequential frame-by-frame animation is rarely used but has been incorporated. Other terms are "video scribing" and "animated doodling". These video animation styles are now seen in many variations and have branched into many other animation styles. With the introduction of software to create whiteboard animations, the process has many different manifestations of varying quality. Those who use whiteboard animation are typically businesses and educators. The whiteboard animation production procedure begins with choosing a topic. Once the topic is chosen, scriptwriting begins. After the content is created, rough drafts of the animations are made; these help establish the creative direction and timing for the movement. The rest of the process is as follows: The steps listed above are not set in stone; they should be used as a guideline to create a whiteboard animation production. Whiteboard animation has been used in a few TV spots and on internet video sites such as YouTube and Vimeo . Early examples were the UPS whiteboard commercials. Many companies and firms of all sectors and sizes are incorporating this style into their modus operandi to teach employees company policies or to demonstrate new software or products to consumers. For educational purposes, whiteboard animation videos have been used for learning online to teach languages, as chapter summaries for educational textbooks, and for the public communication of academic scholarship. [ 2 ] A 2016 study of whiteboard animation found that, despite claims and high popularity, there is little to no compelling experimental evidence that such videos are more effective in learning, motivation, or persuasion than other forms of learning. [ 3 ] Starting in 2010, the Royal Society of Arts converted selected speeches and books from its public events program into whiteboard animations. Made by whiteboard animation studio Cognitive , the first 14 RSA Animate videos gained 46 million views in 2011, making the RSA's YouTube channel the no. 1 nonprofit channel worldwide. [ 4 ]
https://en.wikipedia.org/wiki/Whiteboard_animation
Whiteboarding , when used in the context of computing, is the placement of shared files on an on-screen shared notebook or whiteboard. Videoconferencing and data conferencing software often lets participants mark up documents as on a physical whiteboard . In hybrid whiteboarding , special handwriting detection software allows for physical whiteboards to be shared with remote and distant users, often allowing for the simultaneous addition of digital content. [ 1 ] Whiteboarding sessions, both in-office and virtual, provide teams with a collaborative, creative environment for brainstorming new ideas and solving problems. Without a defined structure in place, however, these sessions can quickly unravel and get off track. [ 2 ] With this type of software, several people can work on the image at the same time, each seeing changes the others make in near-real time. Electronic whiteboarding was included at least as early as 1996 in the CoolTalk tool in Netscape Navigator 3.0. This computing article is a stub . You can help Wikipedia by expanding it .
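The near-real-time shared-image model described above can be sketched with a simple broadcast pattern. The following Python sketch is purely illustrative (the class and method names are invented for this example, not any product's API); in a real system the notifications would travel over the network, for example via WebSockets:

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Point = Tuple[int, int]

@dataclass
class Stroke:
    author: str
    points: List[Point]
    color: str = "black"

@dataclass
class SharedWhiteboard:
    strokes: List[Stroke] = field(default_factory=list)
    subscribers: List[Callable[[Stroke], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[Stroke], None]) -> None:
        # Register a client callback invoked on every new stroke.
        self.subscribers.append(callback)

    def draw(self, stroke: Stroke) -> None:
        # Apply a stroke to the shared state, then broadcast it to everyone.
        self.strokes.append(stroke)
        for notify in self.subscribers:
            notify(stroke)

# Two "clients" sharing one board; over a network the notifications would be
# pushed to each participant instead of called directly.
board = SharedWhiteboard()
board.subscribe(lambda s: print("client A sees a stroke by", s.author))
board.subscribe(lambda s: print("client B sees a stroke by", s.author))
board.draw(Stroke(author="A", points=[(0, 0), (10, 10)]))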
https://en.wikipedia.org/wiki/Whiteboarding
In mathematics , point-free geometry is a geometry whose primitive ontological notion is region rather than point . Two axiomatic systems are set out below, one grounded in mereology , the other in mereotopology and known as connection theory . Point-free geometry was first formulated by Alfred North Whitehead , [ 1 ] not as a theory of geometry or of spacetime , but of "events" and of an "extension relation " between events. Whitehead's purposes were as much philosophical as scientific and mathematical. [ 2 ] Whitehead did not set out his theories in a manner that would satisfy present-day canons of formality. The two formal first-order theories described in this entry were devised by others in order to clarify and refine Whitehead's theories. The domain of discourse for both theories consists of "regions." All unquantified variables in this entry should be taken as tacitly universally quantified ; hence all axioms should be taken as universal closures . No axiom requires more than three quantified variables; hence a translation of first-order theories into relation algebra is possible. Each set of axioms has but four existential quantifiers . The fundamental primitive binary relation is inclusion , denoted by the infix operator "≤", which corresponds to the binary Parthood relation that is a standard feature in mereological theories. The intuitive meaning of x ≤ y is " x is part of y ." Assuming that equality, denoted by the infix operator "=", is part of the background logic, the binary relation Proper Part , denoted by the infix operator "<", is defined as: x < y ↔ ( x ≤ y ∧ x ≠ y ). The axioms are: [ 3 ] A model of G1–G7 is an inclusion space . Definition . [ 4 ] Given some inclusion space S, an abstractive class is a class G of regions such that G is totally ordered by inclusion. Moreover, there does not exist a region included in all of the regions included in G . Intuitively, an abstractive class defines a geometrical entity whose dimensionality is less than that of the inclusion space. For example, if the inclusion space is the Euclidean plane , then the corresponding abstractive classes are points and lines . Inclusion-based point-free geometry (henceforth "point-free geometry") is essentially an axiomatization of Simons's system W. [ 5 ] In turn, W formalizes a theory of Whitehead [ 6 ] whose axioms are not made explicit. Point-free geometry is W with this defect repaired. Simons did not repair this defect, instead proposing in a footnote that the reader do so as an exercise. The primitive relation of W is Proper Part, a strict partial order . The theory [ 7 ] of Whitehead (1919) has a single primitive binary relation K defined as xKy ↔ y < x . Hence K is the converse of Proper Part. Simons's WP1 asserts that Proper Part is irreflexive and so corresponds to G1 . G3 establishes that inclusion, unlike Proper Part, is antisymmetric . Point-free geometry is closely related to a dense linear order D , whose axioms are G1-3 , G5 , and the totality axiom x ≤ y ∨ y ≤ x . {\displaystyle x\leq y\lor y\leq x.} [ 8 ] Hence inclusion-based point-free geometry would be a proper extension of D (namely D ∪ { G4 , G6 , G7 }), were it not that the D relation "≤" is a total order . A different approach was proposed in Whitehead (1929), one inspired by De Laguna (1922). Whitehead took as primitive the topological notion of "contact" between two regions, resulting in a primitive "connection relation" between events. 
Connection theory C is a first-order theory that distills the first 12 of Whitehead's 31 assumptions [ 9 ] into 6 axioms, C1-C6 . [ 10 ] C is a proper fragment of the theories proposed by Clarke, [ 11 ] who noted their mereological character. Theories that, like C , feature both inclusion and topological primitives, are called mereotopologies . C has one primitive relation , binary "connection," denoted by the prefixed predicate letter C . That x is included in y can now be defined as x ≤ y ↔ ∀z[ Czx → Czy ]. Unlike the case with inclusion spaces, connection theory enables defining "non-tangential" inclusion, [ 12 ] a total order that enables the construction of abstractive classes. Gerla and Miranda (2008) argue that only thus can mereotopology unambiguously define a point . A model of C is a connection space . Following the verbal description of each axiom is the identifier of the corresponding axiom in Casati and Varzi (1999). Their system SMT ( strong mereotopology ) consists of C1-C3 , and is essentially due to Clarke (1981). [ 13 ] Any mereotopology can be made atomless by invoking C4 , without risking paradox or triviality. Hence C extends the atomless variant of SMT by means of the axioms C5 and C6 , suggested by chapter 2 of part 4 of Process and Reality . [ 14 ] Biacino and Gerla (1991) showed that every model of Clarke's theory is a Boolean algebra , and models of such algebras cannot distinguish connection from overlap. It is doubtful whether either fact is faithful to Whitehead's intent.
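The definition of inclusion from connection can be checked on a small concrete model. In the following illustrative Python sketch (our construction, not one from the cited literature), regions are the nonempty subsets of a three-element set and two regions are connected when they overlap; the defined relation x ≤ y ↔ ∀z[Czx → Czy] then coincides with ordinary set inclusion:

from itertools import chain, combinations

SPACE = {0, 1, 2}

def regions(space):
    # All nonempty subsets of the space serve as regions.
    elems = list(space)
    subsets = chain.from_iterable(
        combinations(elems, r) for r in range(1, len(elems) + 1))
    return [frozenset(s) for s in subsets]

def C(x, y):
    # Connection: two regions are connected iff they overlap.
    return bool(x & y)

def included(x, y, regs):
    # Defined inclusion: x <= y iff every region connected to x is connected to y.
    return all(C(z, y) for z in regs if C(z, x))

regs = regions(SPACE)
# In this model the defined inclusion coincides with subsethood:
assert all(included(x, y, regs) == (x <= y) for x in regs for y in regs)
print("defined <= matches set inclusion on all", len(regs) ** 2, "pairs")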
https://en.wikipedia.org/wiki/Whitehead's_point-free_geometry
In group theory , a branch of abstract algebra , the Whitehead problem is the following question: Is every abelian group A with Ext 1 ( A , Z ) = 0 a free abelian group ? Saharon Shelah proved that Whitehead's problem is independent of ZFC , the standard axioms of set theory. [ 1 ] Assume that A is an abelian group such that every short exact sequence 0 → Z → B → A → 0 must split if B is also abelian. The Whitehead problem then asks: must A be free? This splitting requirement is equivalent to the condition Ext 1 ( A , Z ) = 0. Abelian groups A satisfying this condition are sometimes called Whitehead groups , so Whitehead's problem asks: is every Whitehead group free? It should be mentioned that if this condition is strengthened by requiring that the exact sequence 0 → C → B → A → 0 must split for any abelian group C , then it is well known that this is equivalent to A being free. Caution : The converse of Whitehead's problem, namely that every free abelian group is Whitehead, is a well known group-theoretical fact. Some authors call Whitehead group only a non-free group A satisfying Ext 1 ( A , Z ) = 0. Whitehead's problem then asks: do Whitehead groups exist? Saharon Shelah showed that the problem is undecidable within the canonical ZFC axiom system. [ 1 ] More precisely, he showed that: if every set is constructible (that is, if the axiom of constructibility V = L holds), then every Whitehead group is free; and if Martin's axiom and the negation of the continuum hypothesis both hold, then there is a non-free Whitehead group. Since the consistency of ZFC implies the consistency of both of the following: the axiom of constructibility, and Martin's axiom together with the negation of the continuum hypothesis, Whitehead's problem cannot be resolved in ZFC. J. H. C. Whitehead , motivated by the second Cousin problem , first posed the problem in the 1950s. Stein answered the question in the affirmative for countable groups. [ 2 ] Progress for larger groups was slow, and the problem was considered an important one in algebra for some years. Shelah's result was completely unexpected. While the existence of undecidable statements had been known since Gödel's incompleteness theorem of 1931, previous examples of undecidable statements (such as the continuum hypothesis ) had all been in pure set theory . The Whitehead problem was the first purely algebraic problem to be proved undecidable. Shelah later showed that the Whitehead problem remains undecidable even if one assumes the continuum hypothesis. [ 3 ] [ 4 ] In fact, it remains undecidable even under the generalised continuum hypothesis . [ 5 ] The Whitehead conjecture is true if all sets are constructible . That this and other statements about uncountable abelian groups are provably independent of ZFC shows that the theory of such groups is very sensitive to the assumed underlying set theory .
https://en.wikipedia.org/wiki/Whitehead_problem
Whiteout or white-out [ 1 ] is a weather condition in which visibility and contrast are severely reduced by snow , fog , or sand . The horizon disappears from view while the sky and landscape appear featureless, leaving no points of visual reference by which to navigate. A whiteout may be due simply to extremely heavy snowfall rates as seen in lake effect conditions, or to other factors such as diffuse lighting from overcast clouds , mist or fog , or a background of snow. A person traveling in a true whiteout is at significant risk of becoming completely disoriented and losing their way, even in familiar surroundings. Motorists typically have to stop their cars where they are, as the road is impossible to see. Normal snowfalls and blizzards , in which snow falls at 3 to 5 centimeters per hour (1 to 2 in/h), or in which relief is not clearly visible but the field of view remains clear for over 9 meters (30 ft), are often incorrectly called whiteouts. There are three different forms of a whiteout: A whiteout should not be confused with flat-light. Whilst there are similarities, both the causes and effects are different. A whiteout is a reduction and scattering of sunlight. [ 2 ] [ 3 ] [ better source needed ] Flat-light is a diffusion of sunlight. [ 2 ] [ 3 ] [ better source needed ] Whiteout conditions pose threats to mountain climbers, skiers, aviation, and mobile ground traffic. Motorists, especially those on large high-speed routes, are also at risk. There have been many major multiple-vehicle collisions associated with whiteout conditions. A motorist ahead may come to a complete stop upon being unable to see the road, while the motorist behind is still moving. Local, short-duration whiteout conditions can be created artificially in the vicinity of airports and helipads due to aircraft operations. Snow on the ground can be stirred up by helicopter rotor down-wash or airplane jet blast, presenting hazards to both aircraft and bystanders on the ground.
https://en.wikipedia.org/wiki/Whiteout_(weather)
Whitetopping is the covering of an existing asphalt pavement with a layer of Portland cement concrete . Whitetopping is divided into types depending on the thickness of the concrete layer and whether the layer is bonded to the asphalt substrate. Unbonded whitetopping, also called conventional whitetopping, uses a concrete layer 20 cm (8 in) or more thick that is not bonded to the asphalt. Bonded whitetopping uses thicknesses of 5 to 15 cm (2 to 6 in) bonded to the asphalt pavement and is divided into two types, thin and ultrathin. The bond is made by texturing the asphalt. Thin whitetopping uses a bonded layer of concrete that is 10 to 15 cm (4 to 6 in) thick, while an ultrathin layer is 5 to 10 cm (2 to 4 in) thick. Ultrathin whitetopping is suitable for light-duty uses, such as roads with low traffic volume, parking lots and small airports . Fiber reinforced concrete is used in some thin whitetopping overlays and almost all ultrathin whitetopping overlays. Whitetopping is suitable for asphalt pavement with little deterioration, although repairs can be made to the asphalt if necessary. If the pavement is badly damaged, it should be completely removed and a new concrete pavement should be installed. The pavement should be relatively hard, as well: deterioration of overlays is significantly increased on asphalt bases with high viscosity . If a grade or a distance between the pavement and a bridge needs to be preserved, the asphalt can be milled so that the height of the pavement does not change. However, whitetopping requires the asphalt layer to be at least 7.5 cm (3 in) thick. If necessary, a section of new concrete roadway can be placed under a bridge with gentle slopes on either side that meet up with the whitetopped portions of the road.
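The thickness ranges above amount to a simple selection rule, encoded in the following illustrative Python sketch (the function name and error handling are our own, not from any design standard):

def whitetopping_type(concrete_cm: float, bonded: bool) -> str:
    # Classify a whitetopping overlay using the thickness ranges above.
    if not bonded:
        if concrete_cm >= 20:
            return "unbonded (conventional) whitetopping"
        raise ValueError("unbonded overlays need a layer of 20 cm or more")
    if 10 <= concrete_cm <= 15:
        return "thin whitetopping"
    if 5 <= concrete_cm < 10:
        return "ultrathin whitetopping (light duty only)"
    raise ValueError("bonded overlays range from 5 to 15 cm")

print(whitetopping_type(22, bonded=False))  # unbonded (conventional) whitetopping
print(whitetopping_type(12, bonded=True))   # thin whitetopping
print(whitetopping_type(7, bonded=True))    # ultrathin whitetopping (light duty only)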
https://en.wikipedia.org/wiki/Whitetopping
Whitewash , calcimine , kalsomine , calsomine , asbestis or lime paint is a type of paint made from slaked lime ( calcium hydroxide , Ca(OH) 2 ) or chalk ( calcium carbonate , CaCO 3 ), sometimes known as "whiting". Various other additives are sometimes used. Whitewash cures through a reaction with carbon dioxide in the atmosphere to form calcium carbonate in the form of calcite (Ca(OH) 2 + CO 2 → CaCO 3 + H 2 O), a type of reaction generally known as carbonation or by the more specific term, carbonatation . It is usually applied to exteriors; it is also traditionally applied to the interiors of rural dairies because of its mildly antibacterial properties. Whitewash can be tinted for decorative use and is sometimes painted inside structures such as the hallways of apartment buildings. A small amount can rub off onto clothing . In Britain and Ireland, whitewash was used historically in interiors and exteriors of workers' cottages and still retains something of this association with rural poverty . In the United States, a similar attitude is expressed in the old saying "Too proud to whitewash and too poor to paint." [ 1 ] The historic California Missions were commonly whitewashed, giving them their distinctive bright white appearance. Whitewash is especially compatible with masonry because it is absorbed easily and the resultant chemical reaction hardens the medium. [ citation needed ] Limewash is pure slaked lime in water. It produces a unique surface glow due to the double refraction of calcite crystals. Limewash and whitewash both cure to become the same material. When whitewash or limewash is initially applied, it has very low opacity , which can lead novices to overthicken the paint. Drying increases opacity and subsequent curing increases opacity even further. Limewash relies on being drawn into a substrate, unlike a modern paint, which adheres to the surface. The process of being drawn in needs to be controlled by damping down. If a wall is not damped, the lime and pigments can be left powdery on the surface; if the wall is saturated, then there is no surface tension and this can result in failure of the limewash. Damping down is not difficult but it does need to be considered before application of the limewash. [ 2 ] Additives traditionally used include water glass , glue , egg white , Portland cement , salt , soap , milk , flour , molasses, alum, and soil . Whitewash is sometimes coloured with earths to achieve colours spanning the range of broken white , cream , yellow and a range of browns . The blue laundry dye (such as Reckitt's "Dolly Blue" in the UK, Ireland and Australia, Loulaki in Greece, or Mrs. Stewart's Bluing in North America), formerly widely used to give a bright tinge to boiled white textiles, was a common 19th-century addition. Historically, pig's blood was added to give the colour Suffolk pink , a colour still widely used on house exteriors in some areas of the UK. If animal blood is applied excessively, its iron oxide can compromise the lime binder's strength. Pozzolanic materials are occasionally added to give a much harder-wearing paint finish. This addition creates a short open time and therefore requires timely application of the altered paint. Linseed oil is sometimes added (typically 0.5-2%) to improve adhesion on difficult surfaces. Cement addition makes a harder-wearing paint in white or grey . Open time is short, so this is added at point of use. Cement restricts the breathable aspects of the limewash and is inadvisable for preserved historic buildings. Dilute glues improve paint toughness. 
Wheat flour has been used as a strength-enhancing binder. Salt is often added to prevent mold . Basic limewash can be inadequate in its ability to prevent rain-driven water ingress. Additives are being developed, but these have the potential to affect free vapor permeability. For this reason silicate paints , more common in Germany, are gaining popularity in the UK over limewash. Whitewash is applied to trees, especially fruit trees, to prevent sun scald . [ 3 ] Most often only the lower trunk is painted. In Poland, painting the whole trunk is also said [ citation needed ] to help keep the body of the tree cool in late winter and early spring and hence to help prevent fruit trees from blooming too soon: warm sunny days could otherwise promote rapid tree warming, rising sap and early bloom, while intermittent frosty nights could damage the outer tree rings and destroy the young buds and blossoms. In the middle of the 20th century, when family farms with dairy barns were common in the Upper Midwest of the United States , whitewash was a necessary part of routine barn maintenance . A traditional animal barn contains a variety of extremely rough surfaces that are difficult to wash and keep clean, such as stone and brick masonry, and also rough-cut lumber for the ceiling. If left alone, these surfaces collect dust, dirt, insect debris and wastes, and can become very dirty. Whitewash aids in sanitation by coating and smoothing over the rough surfaces. Successive applications of whitewash build up layers of scale that flake off and, in the process, remove surface debris. The coating also has antimicrobial properties that provide hygienic and sanitary benefits for animal barns . [ 4 ] Whitewash was painted on the internal walls of Royal Navy vessels during the Age of Sail to improve light levels inside a vessel's gundeck , reduce bacteria and prevent wear and tear on hull timbers. [ 5 ] It was also used during the Second World War by the German armed forces as an easy-to-apply winter camouflage for soft- and hard-skinned vehicles, aircraft and helmets. [ 6 ] The incident of Tom Sawyer whitewashing a fence as punishment is a famous image in American literature. It appears in The Adventures of Tom Sawyer , written in 1876 by Mark Twain . In the 1934 film Fugitive Lovers , Madge Evans drops a bottle of cosmetics that she calls her "Calcimine". Metaphorically, whitewashing refers to suppression or "glossing over" (possibly a closely parallel construction) of potentially damaging or unwelcome information. In many Commonwealth areas, a whitewash refers to a game in which one side fails to score at all; the usage is especially found in cricket . [ 7 ]
https://en.wikipedia.org/wiki/Whitewash
A whiting event is a phenomenon that occurs when a suspended cloud of fine-grained calcium carbonate precipitates in water bodies , typically during summer months, as a result of photosynthetic microbiological activity or sediment disturbance. [ 1 ] [ 2 ] [ 3 ] The phenomenon gets its name from the white, chalky color it imbues to the water. These events have been shown to occur in temperate waters as well as tropical ones, and they can span for hundreds of meters. [ 3 ] They can also occur in both marine and freshwater environments. [ 4 ] The origin of whiting events is debated among the scientific community, and it is unclear if there is a single, specific cause. Generally, they are thought to result either from bottom sediment re-suspension or from increased activity of certain microscopic life such as phytoplankton . [ 5 ] [ 6 ] [ 1 ] Because whiting events affect aquatic chemistry, physical properties, and carbon cycling , studying the mechanisms behind them holds scientific relevance in various ways. [ 7 ] [ 2 ] [ 8 ] [ 9 ] [ 10 ] Whiting event clouds consist of calcium carbonate polymorphs ; aragonite tends to be the dominant precipitate, but some studies in oligotrophic and mesotrophic lakes show calcite is favored. [ 3 ] [ 7 ] Whiting events have been observed in tropical and temperate waters, and they can potentially cover hundreds of meters. [ 3 ] They tend to occur more often in summer months, as warmer waters promote calcium carbonate precipitation, and in hard waters . [ 3 ] [ 10 ] Whitings are typically characterized by cloudy, white patches of water, but they can also be tanner in hue in very shallow waters (less than 5 m deep). [ 2 ] In some cases, the whiting might be cryptic (not visible at the surface), but still generate calcium carbonate. [ 11 ] These shallow-water whiting events also tend to last less than a day, in comparison to deeper-water events that can last from several days up to several months. [ 2 ] Regardless of the event's lifespan, the clouds it produces increase turbidity and hamper light penetration. [ 10 ] Some debate exists surrounding the exact cause of whiting events; although much research exists on the subject, there is still no definitive consensus on the chemical mechanisms behind them. The three most common suggested causes for the phenomenon are: microbiological processes, re-suspension of marine or bottom sediments, and spontaneous direct precipitation from water. [ 12 ] [ 3 ] [ 2 ] Of these three, the last has been ruled unlikely due to the unfavorable reaction kinetics of spontaneous calcium carbonate precipitation. [ 2 ] More than one of the aforementioned factors may also contribute to whiting events in the same region. [ 12 ] Substantial findings indicate that photosynthetic picoplankton , pico cyanobacteria , and phytoplankton activity creates favorable conditions for carbonate precipitation. [ 3 ] [ 2 ] [ 7 ] This link arises from observations of planktonic blooms coinciding with the events. [ 2 ] [ 7 ] Subsequently, via photosynthesis, these organisms uptake inorganic carbon , raise water pH , and alter water alkalinity , which promotes calcium carbonate precipitation. [ 2 ] [ 7 ] The thermodynamic influence of inorganic carbon on whiting calcium carbonate production is shown in the equation below: Ca 2+ + 2 HCO 3 − ⇌ CaCO 3 + CO 2 + H 2 O. Furthermore, cases exist in which the type of calcium carbonate found in the whiting cloud matches the type found on local cyanobacteria membranes. [ 4 ]
It is hypothesized that the extracellular polymeric substances (EPS) these microorganisms produce can act as seed crystals that provide a start for the precipitation process. [ 2 ] [ 7 ] Current research on the specifics of these EPS and the exact physiological mechanisms of the microorganisms' carbon uptake, however, is limited. [ 2 ] [ 7 ] In shallower waters, evidence supports that activity of local fishermen and marine life such as fish and certain shark species can disturb bottom sediments containing calcium carbonate particles and lead to their suspension. [ 2 ] In addition, as microorganisms impact water chemistry in observable ways and require certain nutrient levels to thrive, whiting events found occurring in nutrient-poor waters, where no significant alkalinity difference exists between whiting and non-whiting waters, support the idea of sediment re-suspension as a primary cause. [ 13 ] Whiting events have a unique effect on the waters around them. The fact that calcium carbonate clouds increase turbidity and light reflectance holds implications for organisms and processes that depend on light. [ 4 ] In addition, whiting events can function as a transport mechanism for organic carbon to the benthic zone , which is relevant to nutrient cycling . [ 14 ] The cyanobacteria-rich clouds also hold the potential to act as a means to study the microorganisms' role in carbon cycling (especially in relation to climate change) and their possible role in finding petroleum source rocks . [ 9 ] [ 8 ]
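The precipitation tendency discussed above is commonly summarized by the saturation state Ω = [Ca 2+ ][CO 3 2− ]/K sp , with precipitation thermodynamically favored when Ω > 1. The following Python sketch is illustrative only: the ion concentrations are hypothetical and the solubility products are rounded literature values near 25 °C, not measurements from any cited study:

# Illustrative carbonate saturation-state calculation.
# Omega = [Ca2+][CO3^2-] / Ksp; Omega > 1 favors precipitation.

KSP = {                 # rounded stoichiometric solubility products, ~25 C
    "calcite": 3.3e-9,
    "aragonite": 6.0e-9,
}

def saturation_state(ca_mol_per_l, co3_mol_per_l, mineral):
    return (ca_mol_per_l * co3_mol_per_l) / KSP[mineral]

# Hypothetical lake-water values; photosynthetic CO2 uptake raises pH and
# shifts the carbonate equilibrium, increasing [CO3^2-].
ca = 1.0e-3            # mol/L calcium
co3_before = 2.0e-6    # mol/L carbonate before a plankton bloom
co3_after = 1.0e-5     # mol/L carbonate after the bloom raises pH

for mineral in KSP:
    print(mineral,
          f"before: {saturation_state(ca, co3_before, mineral):.2f}",
          f"after: {saturation_state(ca, co3_after, mineral):.2f}")
# The bloom pushes Omega past 1, so precipitation becomes favorable.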
https://en.wikipedia.org/wiki/Whiting_event
The Whiting reaction is an organic reaction converting a propargyl diol into a diene using lithium aluminium hydride . [ 1 ] This organic reduction has been applied in the synthesis of fecapentaene , a suspected cause of colon cancer . [ 2 ] Protecting groups used are tetrahydropyranyl and TBDMS ; the final step is deprotection with tetra-n-butylammonium fluoride .
https://en.wikipedia.org/wiki/Whiting_reaction
The position of Whitley Professor of Biochemistry at the University of Oxford is one of the permanent chairs of the university, and the first in the field of biochemistry at the university. It is associated with a fellowship at Trinity College, Oxford , and was established with an endowment of £10,000 by Edward Whitley of Trinity College. [ 1 ] Whitley, a former student of Benjamin Moore , nominated Moore as the first professor. [ 2 ] [ 3 ] Since its creation, the position has been held by:
https://en.wikipedia.org/wiki/Whitley_Professor_of_Biochemistry
In mathematics , in particular in mathematical analysis , the Whitney extension theorem is a partial converse to Taylor's theorem . Roughly speaking, the theorem asserts that if A is a closed subset of a Euclidean space, then it is possible to extend a given function defined on A in such a way as to have prescribed derivatives at the points of A . It is a result of Hassler Whitney . A precise statement of the theorem requires careful consideration of what it means to prescribe the derivative of a function on a closed set. One difficulty, for instance, is that closed subsets of Euclidean space in general lack a differentiable structure . The starting point, then, is an examination of the statement of Taylor's theorem. Given a real-valued C m function f ( x ) on R n , Taylor's theorem asserts that for each a , x , y ∈ R n , there is a function R α ( x , y ) approaching 0 uniformly as x , y → a such that {\displaystyle f(x)=\sum _{|\alpha |\leq m}{\frac {D^{\alpha }f(y)}{\alpha !}}(x-y)^{\alpha }+\sum _{|\alpha |=m}R_{\alpha }(x,y){\frac {(x-y)^{\alpha }}{\alpha !}}} (1) where the sum is over multi-indices α . Let f α = D α f for each multi-index α . Differentiating (1) with respect to x , and possibly replacing R as needed, yields {\displaystyle f_{\alpha }(x)=\sum _{|\beta |\leq m-|\alpha |}{\frac {f_{\alpha +\beta }(y)}{\beta !}}(x-y)^{\beta }+R_{\alpha }(x,y)} (2) where R α is o (| x − y | m −| α | ) uniformly as x , y → a . Note that ( 2 ) may be regarded as purely a compatibility condition between the functions f α which must be satisfied in order for these functions to be the coefficients of the Taylor series of the function f . It is this insight which facilitates the following statement: Theorem. Suppose that f α are a collection of functions on a closed subset A of R n for all multi-indices α with | α | ≤ m {\displaystyle |\alpha |\leq m} satisfying the compatibility condition ( 2 ) at all points x , y , and a of A . Then there exists a function F ( x ) of class C m such that: F = f 0 on A ; D α F = f α on A for all | α | ≤ m ; and F is real-analytic at every point of R n − A . Proofs are given in the original paper of Whitney (1934) , and in Malgrange (1967) , Bierstone (1980) and Hörmander (1990) . Seeley (1964) proved a sharpening of the Whitney extension theorem in the special case of a half space. A smooth function on a half space R n ,+ of points where x n ≥ 0 is a smooth function f on the interior x n > 0 for which the derivatives ∂ α f extend to continuous functions on the half space. On the boundary x n = 0, f restricts to a smooth function. By Borel's lemma , f can be extended to a smooth function on the whole of R n . Since Borel's lemma is local in nature, the same argument shows that if Ω {\displaystyle \Omega } is a (bounded or unbounded) domain in R n with smooth boundary, then any smooth function on the closure of Ω {\displaystyle \Omega } can be extended to a smooth function on R n . Seeley's result for a half line gives a uniform extension map which is linear, continuous (for the topology of uniform convergence of functions and their derivatives on compacta) and takes functions supported in [0, R ] into functions supported in [− R , R ]. To define E , {\displaystyle E,} set [ 1 ] E ( f ) ( x ) = f ( x ) for x ≥ 0 and {\displaystyle E(f)(x)=\sum _{m=1}^{\infty }a_{m}f(-b_{m}x)\varphi (-b_{m}x)} for x < 0, where φ is a smooth function of compact support on R equal to 1 near 0 and the sequences ( a m ), ( b m ) satisfy: the b m > 0 tend to infinity; ∑ a m b m j = ( − 1 ) j {\displaystyle \textstyle \sum a_{m}b_{m}^{j}=(-1)^{j}} for all integers j ≥ 0; and ∑ | a m | b m j < ∞ {\displaystyle \textstyle \sum |a_{m}|b_{m}^{j}<\infty } for all integers j ≥ 0. A solution to this system of equations can be obtained by taking b n = 2 n {\displaystyle b_{n}=2^{n}} and seeking an entire function g ( z ) = ∑ a m z m {\displaystyle g(z)=\textstyle \sum a_{m}z^{m}} such that g ( 2 j ) = ( − 1 ) j . {\displaystyle g\left(2^{j}\right)=(-1)^{j}.} That such a function can be constructed follows from the Weierstrass theorem and Mittag-Leffler theorem . [ 2 ] It can be seen directly, [ 3 ] by first constructing an entire function W with simple zeros at the points 2 j . {\displaystyle 2^{j}.} The derivatives W '(2 j ) are bounded above and below. Similarly, one takes a function meromorphic with simple poles and prescribed residues at the points 2 j . {\displaystyle 2^{j}.} 
By construction, the product of these two functions is an entire function with the required properties. The definition for a half space in R n follows by applying the operator E to the last variable x n . Similarly, using a smooth partition of unity and a local change of variables, the result for a half space implies the existence of an analogous extending map for any domain Ω {\displaystyle \Omega } in R n with smooth boundary.
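A finite-order variant of this reflection construction (in the spirit of Hestenes' earlier extension method; the sketch below is our illustration, not Seeley's full operator) is easy to test numerically: truncating to N + 1 terms with b m = 2 m and solving the finite system ∑ a m (−b m ) k = 1 for k = 0, …, N gives an extension matching derivatives up to order N at 0:

import numpy as np

N = 4                                # match derivatives up to order N at 0
b = 2.0 ** np.arange(N + 1)          # reflection scales b_m = 2^m
# Moment conditions: sum_m a_m * (-b_m)^k = 1 for k = 0..N (Vandermonde system).
V = np.array([(-b) ** k for k in range(N + 1)])
a = np.linalg.solve(V, np.ones(N + 1))

def f(x):
    # Function known on the half line x >= 0 (entire, so we can test against it).
    return np.exp(x) * np.sin(3 * x)

def Ef(x):
    # Hestenes-style extension: reflected, rescaled copies of f for x < 0.
    x = np.asarray(x, dtype=float)
    neg = sum(am * f(-bm * x) for am, bm in zip(a, b))
    return np.where(x >= 0, f(np.maximum(x, 0.0)), neg)

# Because f extends analytically, Ef(-e) - f(-e) should shrink like e^(N+1):
for e in (1e-1, 5e-2, 2.5e-2):
    print(e, abs(Ef(-e) - f(-e)))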
https://en.wikipedia.org/wiki/Whitney_extension_theorem
In mathematics , the Whitney inequality gives an upper bound for the error of best approximation of a function by polynomials in terms of the moduli of smoothness . It was first proved by Hassler Whitney in 1957, [ 1 ] and is an important tool in the field of approximation theory for obtaining upper estimates on the errors of best approximation. Denote the value of the best uniform approximation of a function f ∈ C ( [ a , b ] ) {\displaystyle f\in C([a,b])} by algebraic polynomials P n {\displaystyle P_{n}} of degree ≤ n {\displaystyle \leq n} by E n ( f ) = inf P ∈ P n ‖ f − P ‖ C ( [ a , b ] ) {\displaystyle E_{n}(f)=\inf _{P\in P_{n}}\|f-P\|_{C([a,b])}} The moduli of smoothness of order k {\displaystyle k} of a function f ∈ C ( [ a , b ] ) {\displaystyle f\in C([a,b])} are defined as: ω k ( t ) = sup 0 < h ≤ t ‖ Δ h k ( f ; ⋅ ) ‖ C ( [ a , b − k h ] ) {\displaystyle \omega _{k}(t)=\sup _{0<h\leq t}\|\Delta _{h}^{k}(f;\cdot )\|_{C([a,b-kh])}} for 0 ≤ t ≤ ( b − a ) / k , {\displaystyle 0\leq t\leq (b-a)/k,} where Δ h k {\displaystyle \Delta _{h}^{k}} is the finite difference of order k {\displaystyle k} . Theorem: [ 2 ] [Whitney, 1957] If f ∈ C ( [ a , b ] ) {\displaystyle f\in C([a,b])} , then E k − 1 ( f ) ≤ W k ω k ( ( b − a ) / k ) {\displaystyle E_{k-1}(f)\leq W_{k}\,\omega _{k}\!\left({\frac {b-a}{k}}\right)} where W k {\displaystyle W_{k}} is a constant depending only on k {\displaystyle k} . The Whitney constant W ( k ) {\displaystyle W(k)} is the smallest value of W k {\displaystyle W_{k}} for which the above inequality holds. The theorem is particularly useful when applied on intervals of small length, leading to good estimates on the error of spline approximation. The original proof given by Whitney follows an analytic argument which utilizes the properties of moduli of smoothness . However, it can also be proved in a much shorter way using Peetre's K-functionals. [ 3 ] Let: where L ( x ; F ; x 0 , … , x k ) {\displaystyle L(x;F;x_{0},\ldots ,x_{k})} is the Lagrange polynomial for F {\displaystyle F} at the nodes { x 0 , … , x k } {\displaystyle \{x_{0},\ldots ,x_{k}\}} . Now fix some x ∈ [ a , b ] {\displaystyle x\in [a,b]} and choose δ {\displaystyle \delta } for which ( x + k δ ) ∈ [ a , b ] {\displaystyle (x+k\delta )\in [a,b]} . Then: Therefore: And since we have ‖ G ‖ C ( [ a , b ] ) ≤ h ω k ( h ) {\displaystyle \|G\|_{C([a,b])}\leq h\omega _{k}(h)} , (a property of moduli of smoothness ) Since δ {\displaystyle \delta } can always be chosen in such a way that h ≥ | δ | ≥ h / 2 {\displaystyle h\geq |\delta |\geq h/2} , this completes the proof. It is important to have sharp estimates of the Whitney constants. It is easily shown that W ( 1 ) = 1 / 2 {\displaystyle W(1)=1/2} . Burkill (1952) first proved that W ( 2 ) ≤ 1 {\displaystyle W(2)\leq 1} and conjectured that W ( k ) ≤ 1 {\displaystyle W(k)\leq 1} for all k {\displaystyle k} . Whitney was also able to prove further estimates of W ( k ) {\displaystyle W(k)} for small values of k {\displaystyle k} . [ 2 ] In 1964, Brudnyi was able to obtain the estimate W ( k ) = O ( k 2 k ) {\displaystyle W(k)=O(k^{2k})} , and in 1982, Sendov proved that W ( k ) ≤ ( k + 1 ) k k {\displaystyle W(k)\leq (k+1)k^{k}} . Then, in 1985, Ivanov and Takev proved that W ( k ) = O ( k ln ⁡ k ) {\displaystyle W(k)=O(k\ln k)} , and Binev proved that W ( k ) = O ( k ) {\displaystyle W(k)=O(k)} . Sendov conjectured that W ( k ) ≤ 1 {\displaystyle W(k)\leq 1} for all k {\displaystyle k} , and in 1985 was able to prove that the Whitney constants are bounded above by an absolute constant, that is, W ( k ) ≤ 6 {\displaystyle W(k)\leq 6} for all k {\displaystyle k} . Kryakin, Gilewicz, and Shevchuk (2002) [ 4 ] were able to show that W ( k ) ≤ 2 {\displaystyle W(k)\leq 2} for k ≤ 82000 {\displaystyle k\leq 82000} , and that W ( k ) ≤ 2 + 1 e 2 {\displaystyle W(k)\leq 2+{\frac {1}{e^{2}}}} for all k {\displaystyle k} .
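The inequality is easy to probe numerically. The following Python sketch (illustrative only) upper-bounds E k−1 (f) by the error of a least-squares polynomial fit and lower-bounds ω k by sampling finite differences on a grid; since the grid value can only underestimate the true modulus, a passing check certifies the instance E k−1 (f) ≤ 2·ω k ((b−a)/k) suggested by the Kryakin–Gilewicz–Shevchuk bound W(k) ≤ 2:

import numpy as np
from math import comb

def omega_k(f, a, b, k, t, n_h=50, n_x=400):
    # Grid estimate (a lower bound) of the k-th modulus of smoothness at t.
    coeffs = [(-1) ** (k - i) * comb(k, i) for i in range(k + 1)]
    best = 0.0
    for h in np.linspace(t / n_h, t, n_h):
        x = np.linspace(a, b - k * h, n_x)
        diff = sum(c * f(x + i * h) for i, c in enumerate(coeffs))
        best = max(best, float(np.max(np.abs(diff))))
    return best

f = lambda x: np.sin(2 * x)
a, b, k = 0.0, 1.0, 3            # approximate by polynomials of degree k - 1 = 2

xs = np.linspace(a, b, 2000)
p = np.polynomial.Polynomial.fit(xs, f(xs), deg=k - 1)
E_upper = float(np.max(np.abs(f(xs) - p(xs))))   # upper bound on E_{k-1}(f)

w = omega_k(f, a, b, k, (b - a) / k)
print(E_upper, 2 * w)
assert E_upper <= 2 * w    # an instance of the inequality with W(k) <= 2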
https://en.wikipedia.org/wiki/Whitney_inequality
The Whitworth Society was founded in 1923 by Henry Selby Hele-Shaw , then president of the Institution of Mechanical Engineers . Its purposes are to promote engineering in the United Kingdom, and more specifically to support all Whitworth Scholars, the recipients of a scholarship funded by Joseph Whitworth 's scholarship scheme, which started in 1868. [ 1 ] A Whitworth Scholar is a person who has successfully completed a Whitworth Scholarship. Membership of the Society is limited to Whitworth Scholars, Senior Scholars, Fellows, Exhibitioners and Prizemen. The Society provides a way of making contact with all successful "Whitworths" and of passing informal contacts and connections from more senior members to recently successful Scholars. The Society also serves as a way to commemorate Joseph Whitworth and acknowledge his contributions to engineering education. [ 2 ] The annual dinner and annual general meeting are held on the evening of 18 March (or the nearest Friday) to commemorate the date in 1868 when Joseph Whitworth wrote to Benjamin Disraeli , offering to found the Whitworth Scholarships. Traditionally the dinner was held in London; in more recent times the meal and meeting have alternated, one year in London and one year in Manchester. There is a summer meeting held over two days, normally at the beginning of July. The event is largely informal and ordinarily arranged by the President of the Society. A record of all scholars is kept by the Society; until recent years, this was in hardback form (see image), presented when an individual was elected a scholar. In recent times, the register is kept electronically and provided on a USB flash drive as part of the awards ceremony. A Whitworth Scholarship, named after Joseph Whitworth, is an "award for outstanding engineers, who have excellent academic and practical skills and the qualities needed to succeed in industry, to take an engineering degree-level programme in any engineering discipline". [ 3 ] On 18 March 1868, Joseph Whitworth wrote to then Prime Minister Benjamin Disraeli offering to fund 30 scholarships of £100 each for young men in the United Kingdom. This was met favourably by the Government at the time, as minuted on 27 March 1868 by the council. [ 4 ] After the adoption of this by Government, Whitworth presented a memorandum setting out the requirements of the awards, which included examinations in mathematics, mechanics, physics, and chemistry (including metallurgy), and in the following handicrafts: smith's work, turning, filing, fitting, pattern making and moulding. [ 5 ] Whitworth's intent was to support individuals with practical skills and training, typically those who today would have completed an apprenticeship, who had the desire to continue on to further and higher education through university degree courses. The Scholarships continue over 150 years after the inception of the idea. [ 6 ] In 2018 the prize money awarded is up to £9,000 per annum for an undergraduate programme and £15,000 per annum for a postgraduate research programme. The prize money is still funded by the original money provided in trust by Joseph Whitworth. The criteria for a scholarship remain consistent with the original mandate of 1868: practical skills with aptitude for science- and mathematics-based academic study. 
In 2018, the conditions for application for a scholarship are to: In 1984, as a result of consultation with the Whitworth Society, the administration of the Awards and Scholarship programmes was transferred from the then Department of Education & Science to the Institution of Mechanical Engineers . [ 8 ] Today, the scholarship programme lasts for the duration of an individual's academic studies, typically 3–4 years of full-time degree studies. During this time, the individuals are termed "award holders". [ 9 ] If the continued monitoring of progress and overall academic achievement is deemed satisfactory, the award holder becomes a Whitworth Scholar. This occasion is commemorated at the Institution of Mechanical Engineers Vision Awards ceremony, ordinarily held in September or October each year. [ 10 ] "Whitworth Scholar" is the accolade given to those who have successfully completed a Whitworth Scholarship. It is rare, as only a small number of scholarships, with quite specific application conditions and a tough review process, are issued each year. A Whitworth Scholar is permitted to use the post-nominal letters WhSch. Typically there is an awards ceremony for the successful scholars where a certificate and medal (shown opposite) are presented; in recent years this has formed part of the IMechE's Vision Awards in September or October each year. There are recognised post-nominals which are permitted to be used after an individual's name. They are as follows.
https://en.wikipedia.org/wiki/Whitworth_Society
The Whole Earth Blazar Telescope (WEBT) is an international consortium of astronomers created in 1997, with the aim to study a particular category of Active Galactic Nuclei (AGN) called blazars , which are characterized by strong and fast brightness variability, on time scales down to hours or less. This collaboration involves many telescopes observing at optical, near-infrared, and radio (millimetric and centimetric) wavelengths . Thanks to their different geographic locations all around the world, the emission variations of the pointed source can be monitored 24 hours a day, with the observing task moving from east to west as the Earth rotates. WEBT observations are often carried out in conjunction with observations at higher frequencies , from ultraviolet to gamma rays, performed by both space and ground-based telescopes. In this way, information on blazar emission over almost the whole electromagnetic spectrum can be obtained. The multi-wavelength studies performed by the WEBT have the purpose of understanding the physical mechanisms that rule the variable emission of these celestial objects. This emission mainly comes from a plasma jet pointing closely to the line of sight, and originating from a supermassive black hole located in the core of the host galaxy. The WEBT was founded in autumn 1997 by John Mattox, from the Institute of Astrophysical Research at Boston University , as a collaboration among optical observers. Three years later, in 2000, the leadership passed to Massimo Villata, from the Observatory of Turin . A constitution was issued, defining the purposes and management of the organization. Soon after, radio and near-infrared observers also joined the consortium. By February 2009, the WEBT had organised 24 observing campaigns, with the participation of more than one hundred telescopes. Each campaign is devoted to a specific source, and is led by a Campaign Manager appointed by the President. The Campaign Manager is responsible for the observing strategy, data collection, analysis and interpretation, and finally takes care of the publication of the results. This is the list of the blazars that have been targets of WEBT campaigns: After eighteen years of operations, more than 160 scientific publications have been released. [ 20 ] On September 4, 2007, the WEBT started a new project: the GLAST-AGILE Support Program (GASP). Its aim is to provide observing support at longer wavelengths for the observations by the gamma-ray satellites GLAST (Gamma-ray Large Area Space Telescope, later renamed Fermi Gamma-ray Space Telescope in honor of the famous Italian physicist Enrico Fermi ), and AGILE (Astro-rivelatore Gamma a Immagini LEggero). The GASP strategy is a long-term monitoring of selected targets, with periodic data gathering and analysis. The list of the GASP monitored blazars includes 28 bright objects: 3C 66A , AO 0235+16, PKS 0420−01, PKS 0528+134, S5 0716+71, PKS 0735+17, OJ 248, OJ 49, 4C 71.07 , OJ 287 , S4 0954+65, Markarian 421 , 4C 29.45, ON 231, 3C 273 , 3C 279 , PKS 1510−08, DA 406, 4C 38.41, 3C 345 , Markarian 501 , 4C 51.37, 3C 371 , PKS 2155−304, BL Lacertae , CTA 102 , 3C 454.3 and 1ES 2344+514.
https://en.wikipedia.org/wiki/Whole_Earth_Blazar_Telescope
The Whole Earth Telescope is an international network of astronomers that collaborate to study variable stars . The distribution of the observatories in longitude allows the selected targets to be continuously monitored despite the rotation of the Earth . [ 2 ] This concept was devised by American astronomers R. Edward Nather and Don E. Winget of the University of Texas at Austin . [ 4 ] The consortium consists of individual astronomers interested in collaborating to study targets designated by a principal investigator. Where colleagues are not available, astronomers are dispatched to sites that allow telescope time to visitors. [ 5 ] Initial funding for WET came from a grant by the US National Science Foundation , which lasted through 1998. [ 6 ] For each site, an observing run begins when the sky is dark, and continues until stopped by weather or dawn. A photometer is used to observe the target object, a nearby comparison star, and the background sky. The data is then sent to the control center. Each site in turn takes up an overlapping observation run, so the result is, ideally, a continuous sequence of data that can then be processed. [ 7 ] After constructing a light curve , the data is subjected to a Fourier transform to obtain the frequencies of pulsation . [ 8 ] Referred to as an XCov, [ 9 ] the typical observing run with the WET lasts from 10 to 14 days, and is scheduled for once or twice a year. [ 7 ] The first observation run took place in March, 1988, and it included the Multiple Mirror Telescope in the US, a 1.8 m aperture telescope at the South African Astronomical Observatory , and the IUE observatory in orbit around the Earth. The first target for the run was the star PG 1346+082, or CR Boötis , [ 10 ] an AM CVn star . The second target was V803 Centauri , a cataclysmic binary . [ 11 ] The campaign was able to monitor the star systems for a continual period of 15 days from six participating sites. [ 9 ] The early focus of the program was the study of pulsating white dwarfs . [ 7 ] Most such stars exhibiting non-radial pulsations have multiple pulsation modes, with some having frequencies on the order of a cycle per day. The only way to observe these extended frequencies is continually over durations longer than 24 hours. [ 12 ] The observations of PG 1159-035 with the WET, reported in 1991, initiated the study of white dwarf seismology, [ 13 ] later termed asteroseismology . By 1998, WET runs had been performed on pulsating white dwarfs of the DOV, DBV, and DAV types, Delta Scuti variables , a rapidly oscillating Ap star , and cataclysmic variables . [ 8 ] A total of 16 XCov runs had been completed by May 1998, often covering more than one target per run. Only one failure was reported, for the roAp star HD 166473. [ 8 ] Operations for WET moved to Iowa State University in 1995 when the International Institute for Theoretical and Applied Physics offered to help fund the WET program. [ 6 ] In 2004, the governing council of WET agreed to study private funding for its operations. This resulted in the formation of the Delaware Astroseismic Research Center (DARC) the following year, and WET operations were moved from Iowa to Delaware. The first run supported by DARC was XCov 25 during May 2006. Operations are supported by the Mount Cuba Astronomical Observatory and the University of Delaware . [ 4 ]
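The light-curve analysis described above (a Fourier transform of the photometric time series) can be illustrated with synthetic data. The following Python sketch is a toy example, not WET data: a five-day light curve containing two pulsation modes is transformed, and the mode frequencies are read off the peaks of the amplitude spectrum:

import numpy as np

# Synthetic light curve: two pulsation modes (frequencies in cycles/day)
# sampled every 10 s for 5 days, plus photometric noise.
dt_days = 10.0 / 86400.0
t = np.arange(0.0, 5.0, dt_days)
flux = (0.010 * np.sin(2 * np.pi * 93.0 * t) +
        0.006 * np.sin(2 * np.pi * 131.0 * t) +
        np.random.default_rng(0).normal(0.0, 0.005, t.size))

# Amplitude spectrum via the FFT of the (evenly sampled) light curve.
amp = 2.0 * np.abs(np.fft.rfft(flux)) / t.size
freq = np.fft.rfftfreq(t.size, d=dt_days)          # cycles per day

# The two strongest peaks (ignoring the zero-frequency term) recover the modes.
peaks = freq[np.argsort(amp[1:])[-2:] + 1]
print(sorted(peaks))   # ~ [93.0, 131.0] cycles/day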
The ability to collect photometric data over a long period is vulnerable to weather conditions, the need to allocate time for each telescope, and the situation of each participating astronomer. It was recognized that satellites could accomplish the same task with fewer issues, but at a far higher cost. The MOST spacecraft , launched in 2003, was an early effort to pursue this application. It was able to monitor individual stars for periods of up to 30 days, but was limited to a visual magnitude of 6 or brighter. The Kepler space telescope was launched in 2009 and was able to observe some stars continuously for up to four years. As of 2021, the TESS satellite is performing asteroseismology down to magnitude 17. [ 14 ]
https://en.wikipedia.org/wiki/Whole_Earth_Telescope
Aquatic toxicology is the study of the effects of manufactured chemicals and other anthropogenic and natural materials and activities on aquatic organisms at various levels of organization, from subcellular through individual organisms to communities and ecosystems . [ 1 ] Aquatic toxicology is a multidisciplinary field which integrates toxicology , aquatic ecology and aquatic chemistry . [ 1 ] This field of study includes freshwater , marine water and sediment environments. Common tests include standardized acute and chronic toxicity tests lasting from 24–96 hours (acute tests) to 7 days or more (chronic tests). These tests measure endpoints such as survival, growth, and reproduction at each concentration in a gradient, along with a control test. [ 2 ] They typically use selected organisms with ecologically relevant sensitivity to toxicants and a well-established literature background; such organisms can be easily acquired or cultured in the laboratory and are easy to handle. [ 3 ] While basic research in toxicology began in multiple countries in the 1800s, it was not until around the 1930s [ 4 ] that the use of acute toxicity testing, especially on fish, was established. Due to the wide use of the organochlorine pesticide DDT [1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane] and its link to fish deaths, the field of aquatic toxicology grew. At first, studies focused mainly on oysters and mussels, as they could not move away from the toxic environment. The results of these studies eventually led to the implementation of programs that monitor concentrations of aquatic pollutants in oysters and mussels, such as the Mussel Watch program of the National Oceanic and Atmospheric Administration (NOAA). [ 5 ] Over the next two decades, the effects of chemicals and wastes on non-human species became more of a public issue and the era of the pickle-jar bioassays began as efforts increased to standardize toxicity testing techniques. [ 1 ] In the United States, the passage of the Federal Water Pollution Control Act of 1947 marked the first comprehensive legislation [ 6 ] for the control of water pollution and was followed by the Federal Water Pollution Control Act in 1956. [ 7 ] In 1962, public and governmental interest was renewed, in large part due to the publication of Rachel Carson 's Silent Spring , and three years later the Water Quality Act of 1965 was passed, which directed states to develop water quality standards. [ 1 ] Public awareness, as well as scientific and governmental concern, continued to grow throughout the 1970s and by the end of the decade research had expanded to include hazard evaluation and risk analysis . [ 1 ] In the subsequent decades, aquatic toxicology has continued to expand and internationalize, so that there is now a strong application of toxicity testing for environmental protection . Aquatic toxicology is continuing to evolve as risk assessment becomes more practiced in the field. The field is gaining popularity as it has begun to link the effects of pollutants on marine animals to humans who eat fish and other marine life. Aquatic toxicology tests ( assays ): toxicity tests are used to provide qualitative and quantitative data on adverse (deleterious) effects on aquatic organisms from a toxicant . Toxicity tests can be used to assess the potential for damage to an aquatic environment and provide a database that can be used to assess the risk associated with a situation for a specific toxicant. 
Aquatic toxicology tests can be performed in the field or in the laboratory. Field experiments generally involve multiple-species exposure, though single species can be caged for a set duration, while laboratory experiments generally involve single-species exposure. A dose–response relationship is most commonly used, with a sigmoidal curve to quantify the toxic effects at a selected endpoint or criterion for effect (i.e. death or another adverse effect to the organism). Concentration is on the x-axis and percent inhibition or response is on the y-axis. [ 1 ] The criteria for effects, or endpoints tested for, can include lethal and sublethal effects (see Toxicological effects ). [ 1 ] There are different types of toxicity tests that can be performed on various test species. Species differ in their susceptibility to chemicals, most likely due to differences in accessibility, metabolic rate , excretion rate , genetic factors , dietary factors, age, sex, health and stress level of the organism. Common standard test species are the fathead minnow (Pimephales promelas), daphnids ( Daphnia magna , D. pulex , D. pulicaria , Ceriodaphnia dubia ), midge (Chironomus tentans, C. riparius), rainbow trout (Oncorhynchus mykiss), sheepshead minnow (Cyprinodon variegatus), [ 8 ] zebra fish ( Danio rerio ), [ 9 ] mysids (Mysidopsis), oyster (Crassotreas), scud (Hyalella azteca), grass shrimp (Palaemonetes pugio) and mussels ( Mytilus galloprovincialis ). [ 10 ] As defined by ASTM International , these species are routinely selected on the basis of availability; commercial, recreational, and ecological importance; past successful use; and regulatory use. [ 1 ] A variety of acceptable standardized test methods have been published. Some of the more widely accepted agencies to publish methods are: the American Public Health Association , US Environmental Protection Agency (EPA), ASTM International, International Organization for Standardization , Environment and Climate Change Canada , and Organisation for Economic Co-operation and Development . Standardized tests offer the ability to compare results between laboratories. [ 1 ] There are many kinds of toxicity tests widely accepted in the scientific literature and by regulatory agencies. The type of test used depends on many factors: the specific regulatory agency conducting the test, resources available, physical and chemical characteristics of the environment, type of toxicant, test species available, laboratory vs. field testing, endpoint selection, and the time and resources available to conduct the assays are some of the most common influencing factors on test design. [ 1 ] Exposure systems are the four general techniques by which the control and test organisms are exposed to the treated and dilution water or test solutions: static, recirculation, renewal, and flow-through. Acute tests are short-term exposure tests (14 days or less) [ 12 ] and generally use lethality as an endpoint. In acute exposures, organisms come into contact with higher doses of the toxicant in a single event or in multiple events over a short period of time; such exposures usually produce immediate effects, depending on the absorption time of the toxicant. These tests are generally conducted on organisms during a specific time period of the organism's life cycle, and are considered partial life cycle tests. Acute tests are not valid if mortality in the control sample is greater than 10%. However, this control acceptability criterion is dependent upon the species and the duration of the test. Results are reported as an EC50, the concentration that affects fifty percent of the test population. [ 1 ] 
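The sigmoidal dose–response fit behind an EC50 can be sketched in a few lines. The following is a toy Python example with made-up data, using a log-logistic (Hill-type) model; actual determinations follow the standardized agency methods described in this article:

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    # Log-logistic dose-response: percent response rising from 0 to 100.
    return 100.0 / (1.0 + (ec50 / conc) ** slope)

# Hypothetical test: percent mortality in a geometric dilution series (mg/L).
conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])
response = np.array([5.0, 20.0, 55.0, 85.0, 97.0])

(ec50, slope), _ = curve_fit(hill, conc, response, p0=[25.0, 2.0])
print(f"EC50 ~ {ec50:.1f} mg/L, Hill slope ~ {slope:.2f}")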
Chronic tests are long-term tests (weeks, months, or years), relative to the test organism's life span [ 13 ] (>10% of the life span), and generally use sublethal endpoints. In chronic exposures, organisms come into contact with low, continuous doses of a toxicant. Chronic exposures may induce effects similar to those of acute exposures, but can also result in effects that develop slowly. Chronic tests are generally considered full life cycle tests and cover an entire generation time or reproductive life cycle ("egg to egg"). Chronic tests are not considered valid if mortality in the control sample is greater than 20%. These results have generally been reported in NOECs (no observed effect concentrations) and LOECs (lowest observed effect concentrations). However, NOECs and LOECs are becoming less common, as these endpoints depend on the concentration series chosen for the test, and such reports have become a topic of debate in the field because the chosen series can alter the results of the tests. For example, if the test concentration series is 100, 50, 25, 12.5, and 6.25 mg/L and the lowest concentration at which an effect is observed is 12.5 mg/L, the NOEC is reported as 6.25 mg/L, even though the true no-effect threshold may lie anywhere between those two concentrations. Early life stage tests are considered subchronic exposures that are less than a complete reproductive life cycle and include exposure during early, sensitive life stages of an organism. These exposures are also called critical life stage, embryo-larval, or egg-fry tests. Early life stage tests are not considered valid if mortality in the control sample is greater than 30%. [ 1 ] Short-term sublethal tests are used to evaluate the toxicity of effluents to aquatic organisms. These methods are developed by the EPA and focus only on the most sensitive life stages. Endpoints for these tests include changes in growth, reproduction and survival. NOECs, LOECs and EC50s are reported in these tests. Bioaccumulation tests are toxicity tests that can be used for hydrophobic chemicals that may accumulate in the fatty tissue of aquatic organisms. Toxicants with low solubilities in water generally can be stored in the fatty tissue due to the high lipid content of this tissue. The storage of these toxicants within the organism may lead to cumulative toxicity. Bioaccumulation tests use bioconcentration factors (BCF) to predict concentrations of hydrophobic contaminants in organisms. The BCF is the ratio of the average concentration of test chemical accumulated in the tissue of the test organism (under steady-state conditions) to the average measured concentration in the water. Freshwater tests and saltwater tests have different standard methods, especially as set by the regulatory agencies. However, these tests generally include a control (negative and/or positive), a geometric dilution series or other appropriate logarithmic dilution series, test chambers and equal numbers of replicates, and a test organism. Exact exposure time and test duration will depend on the type of test (acute vs. chronic) and organism type. Temperature, water quality parameters and light will depend on regulatory requirements and organism type. [ 1 ] 
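The NOEC/LOEC determination discussed above reduces to hypothesis tests of each treatment against the control. A minimal illustrative Python sketch with made-up replicate data follows (real analyses use the standardized statistical procedures of the regulatory methods, including multiple-comparison corrections):

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical replicate responses (e.g., growth) at each test concentration.
control = [10.1, 9.8, 10.3, 10.0]
treatments = {               # mg/L -> replicate responses
    6.25: [9.9, 10.2, 9.7, 10.1],
    12.5: [9.6, 9.4, 9.8, 9.5],
    25.0: [8.7, 8.9, 8.5, 8.8],
    50.0: [7.1, 7.4, 6.9, 7.2],
}

ALPHA = 0.05
noec, loec = None, None
for conc in sorted(treatments):
    # One-sided Welch t-test: is the response significantly below the control?
    t, p = ttest_ind(treatments[conc], control, equal_var=False,
                     alternative="less")
    if p < ALPHA and loec is None:
        loec = conc
    elif loec is None:
        noec = conc

print(f"NOEC = {noec} mg/L, LOEC = {loec} mg/L")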
For facilities discharging to freshwater, effluent is used to perform static-acute multi-concentration toxicity tests with Ceriodaphnia dubia (water flea) and Pimephales promelas (fathead minnow), among other species. The test organisms are exposed for 48 hours under static conditions to five concentrations of the effluent. The major difference between the short-term chronic effluent toxicity tests and the acute effluent toxicity tests is that the short-term chronic test lasts for seven days while the acute test lasts for 48 hours. For discharges to marine and estuarine waters, the test species used are sheepshead minnow ( Cyprinodon variegatus ), inland silverside ( Menidia beryllina ), Americamysis bahia , and purple sea urchin ( Strongylocentrotus purpuratus ). [ 14 ] [ 15 ] At some point most chemicals originating from both anthropogenic and natural sources accumulate in sediment. For this reason, sediment toxicity can play a major role in the adverse biological effects seen in aquatic organisms, especially those inhabiting benthic habitats. A recommended approach for sediment testing is to apply the sediment quality triad (SQT), which involves simultaneously examining sediment chemistry, toxicity, and field alterations, along with bioaccumulation and bioavailability assessments that can be used in a laboratory or in the field. Due to the expansion of the SQT, it is now more commonly referred to as the "Sediment Assessment Framework." Collection, handling, and storage of sediment can affect bioavailability, and for this reason standard methods have been developed for this purpose. [ 1 ] Toxicity can be broken down into two broad categories: direct and indirect toxicity. Direct toxicity results from a toxicant acting at the site of action in or on the organism. Indirect toxicity occurs with a change in the physical, chemical, or biological environment. Lethality is the most common effect used in toxicology and serves as the endpoint for acute toxicity tests. Chronic toxicity tests instead examine sublethal endpoints, including behavioral, physiological, biochemical, and histological changes. [ 1 ] A number of effects can occur when an organism is simultaneously exposed to two or more toxicants. These include additive, synergistic, potentiation, and antagonistic effects, as sketched below. An additive effect occurs when the combined effect is equal to the sum of the individual effects. A synergistic effect occurs when the combined effect is much greater than the two individual effects added together. Potentiation occurs when a chemical that has no effect on its own is added to a toxicant and the combination has a greater effect than the toxicant alone. Finally, an antagonistic effect occurs when a combination of chemicals has less of an effect than the sum of their individual effects. [ 1 ] All terms were derived from Rand. [ 1 ] In the United States, aquatic toxicology plays an important role in the NPDES wastewater permit program. While most wastewater dischargers typically conduct analytical chemistry testing for known pollutants , whole effluent toxicity tests have been standardized and are performed routinely as a tool for evaluating the potential harmful effects of other pollutants not specifically regulated in the discharge permits.
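Returning to the combined-effects definitions above, the following is a minimal sketch of the comparison they describe, using simple effect summation; real mixture assessments rely on concentration-addition or independent-action models, and the effect values are invented.

```python
# Minimal sketch: labeling a mixture interaction by comparing the observed
# combined effect with the sum of the individual effects (effect summation),
# mirroring the additive/synergistic/antagonistic definitions in the text.
def classify_interaction(effect_a, effect_b, observed, tol=0.05):
    expected = effect_a + effect_b
    if abs(observed - expected) <= tol * expected:
        return "additive"
    return "synergistic" if observed > expected else "antagonistic"

print(classify_interaction(0.20, 0.25, 0.45))  # equals the sum -> additive
print(classify_interaction(0.20, 0.25, 0.80))  # far above the sum -> synergistic
print(classify_interaction(0.20, 0.25, 0.30))  # below the sum -> antagonistic
```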
[ 14 ] EPA's water quality program has published water quality criteria (for individual pollutants) and water quality standards (for water bodies) that were derived from aquatic toxicity tests. [ 23 ] [ 24 ] While sediment quality guidelines are not meant for regulation, those developed by the National Oceanic and Atmospheric Administration (NOAA) provide a way to rank and compare sediment quality. [ 25 ] These sediment quality guidelines are summarized in NOAA's Screening Quick Reference Tables (SQuiRT) for many different chemicals. [ 26 ]
https://en.wikipedia.org/wiki/Whole_effluent_toxicity
Whole genome bisulfite sequencing is a next-generation sequencing technology used to determine the DNA methylation status of single cytosines by treating the DNA with sodium bisulfite before high-throughput DNA sequencing . The DNA methylation status at various genes can reveal information regarding gene regulation and transcriptional activities. [ 1 ] This technique was developed in 2009, along with reduced representation bisulfite sequencing, after bisulfite sequencing became the gold standard for DNA methylation analysis. [ 2 ] [ 3 ] Whole genome bisulfite sequencing measures single-cytosine methylation levels genome-wide and directly estimates the ratio of molecules methylated rather than enrichment levels. Currently, this technique has recognized and tested approximately 95% of all cytosines in known genomes. [ 4 ] With the improvement of library preparation methods and next-generation sequencing technology over the past decade, whole genome bisulfite sequencing has become an increasingly widespread and informative method for analyzing DNA methylation in epigenome-wide studies. [ 5 ] Prior to the development of whole genome bisulfite sequencing, genome methylation analysis relied heavily on early non-specific and differential methods such as paper chromatography , high-performance liquid chromatography , and thin-layer chromatography to analyze methylation profiles. [ 6 ] These methods were limited by the inability to amplify methylated DNA via polymerase chain reaction in vitro, due to the loss of methylation status during amplification. [ 6 ] As a result, many of these early methods relied on detecting and analyzing naturally occurring methylated cytosines in vivo rather than chemically methylated cytosines. In 1970, a breakthrough occurred when it was discovered that treating DNA with sodium bisulfite deaminated cytosine residues into uracil. [ 6 ] In the following decade, this discovery led to the revelation that unmethylated cytosine reacted much faster to sodium bisulfite treatment than did 5-methylcytosine . This difference in reaction rates created the possibility of identifying chemical changes in DNA as an easily detectable genetic marker. [ 6 ] Whole genome bisulfite sequencing was derived as a combination of this bisulfite treatment and next-generation sequencing technology, such as shotgun sequencing . The technique was first applied to DNA methylation mapping at single-nucleotide resolution in Arabidopsis thaliana in 2008, and shortly after, in 2009, the first single-base-resolution DNA methylation map of the entire human genome was created using whole genome bisulfite sequencing. [ 7 ] [ 5 ] Since its development, many variant protocols of whole genome bisulfite sequencing have been developed, aiming to improve the efficiency and efficacy of its single-base mapping. As the costs of next-generation sequencing have decreased, whole genome bisulfite sequencing has become more widely used in clinical and experimental research. [ 3 ] Multiple public datasets of genomic data have now been established. [ 4 ] The following steps are derived from one potential workflow of conventional whole genome bisulfite sequencing: target DNA extraction, bisulfite conversion, library amplification, and bioinformatics analysis.
[ 8 ] However, various sequencing systems and analysis tools often adapt the technical parameters and the order of these steps in order to optimize assay coverage and efficacy. [ 3 ] Library preparation protocols undergo DNA fragmentation, end repair, dA-tailing, and adapter ligation prior to bisulfite treatment and library amplification. Standard fragmentation under high-throughput technology such as the Illumina Genome Analyzer and Solexa requires nebulization to generate fragments that range from 0 to 1200 base pairs. [ 9 ] After fragmentation, end repair enzymes and complementary adapters are applied to the DNA in an end-prep polymerase chain reaction and an adapter ligation reaction, respectively. Size selection occurs before the DNA is treated with sodium bisulfite. Conventional methods of eukaryotic DNA preparation during sequencing use a wide range of DNA input amounts, varying from as little as 10 ng for novel NGS library alternatives, such as the tagmentation approach, to as much as 500–1000 ng of DNA as sample input. [ 10 ] The adapter-ligated DNA sample is treated with sodium bisulfite, a chemical compound that converts unmethylated cytosines into uracil , at low pH and high temperature. [ 11 ] [ 12 ] The chemical reaction is depicted in Figure 1, where sulfonation occurs at the carbon-6 position of cytosine to produce the intermediate cytosine sulfonate. [ 13 ] This intermediate then undergoes irreversible hydrolytic deamination to create uracil sulfonate. Under alkaline conditions, uracil sulfonate desulfonates to generate uracil. [ 13 ] This enables methylation detection by distinguishing the methylated cytosines (5-methylcytosine), which resist bisulfite treatment, from uracil. During amplification by polymerase chain reaction, the uracils are converted into thymines . [ 3 ] Methylated cytosines are still read as cytosines, and their locations are identified by comparing the bisulfite-treated sequence with the original DNA sequence. Following bisulfite treatment, purification of the sample is required to remove unwanted products, including bisulfite salts. [ 13 ] In order to amplify the epigenome library, bisulfite-treated DNA is primed to generate DNA with a specific tagging sequence. The 3' end of this sequence is then tagged again, creating DNA fragments with markers on either end. These fragments are amplified in a final polymerase chain reaction, after which the library is prepared for sequencing-by-synthesis. [ 8 ] This is demonstrated in Figure 2, in which the high-throughput sequencing systems developed by the biotechnology company Illumina perform comprehensive assays based on sequencing-by-synthesis of base pairs. [ 8 ] Following library amplification, a series of analyses can be performed on the expanded library to determine various methylation characteristics or to map a genome-wide methylation profile. [ 8 ] One such analysis aligns the new reads against the reference genome in order to directly compare locations of methylated cytosines and C–T mismatches. This requires software such as SOAP for side-by-side comparison of the genomes. [ 8 ] Another potential sequencing analysis is methylated cytosine calling, which computes methylated cytosine ratios by mapping probabilities based on read quality; this helps determine methylated cytosine locations across the genome (a simplified version of this calling step is sketched below). [ 8 ] Finally, global trends of the methylome can be analyzed by calculating the distribution ratios of CG, CHG, and CHH contexts among methylated cytosines across the genome.
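To make the conversion-and-calling logic concrete, the sketch below simulates bisulfite conversion of a toy reference (unmethylated cytosines read out as thymine after PCR) and then computes per-cytosine methylation ratios across simulated reads. The sequence, methylation states and conversion rate are all invented, and the quality-weighted mapping probabilities used by real callers are omitted.

```python
# Minimal sketch: in-silico bisulfite conversion and methylation calling on a
# toy reference. Methylated cytosines resist conversion (stay C); unmethylated
# cytosines deaminate to uracil and are sequenced as T after PCR.
import random

reference = "ACGTCCGATCGA"
methylated = {4, 9}  # 0-based positions of 5-methylcytosines (assumed)

def bisulfite_read(ref, meth, conversion_rate=0.99):
    """Simulate one bisulfite-converted read of the reference strand."""
    out = []
    for i, base in enumerate(ref):
        if base == "C" and i not in meth and random.random() < conversion_rate:
            out.append("T")  # unmethylated C -> U -> read as T
        else:
            out.append(base)
    return "".join(out)

reads = [bisulfite_read(reference, methylated) for _ in range(1000)]

# Methylation calling: fraction of reads retaining C at each reference C.
for i, base in enumerate(reference):
    if base == "C":
        ratio = sum(r[i] == "C" for r in reads) / len(reads)
        print(f"position {i}: methylation ratio {ratio:.2f}")
```

Positions 4 and 9 come out near 1.0 and the unmethylated cytosines near the residual non-conversion rate, which is the read-out described in the text.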
[ 8 ] These ratios can reflect features of whole genome methylation maps of certain species. Due to its ability to screen methylation status at single-nucleotide resolution across a given genome, whole genome bisulfite sequencing has become increasingly promising in aiding fundamental epigenomics research, novel hypotheses on DNA methylation, and investigations for future large-scale epidemiological studies. [ 3 ] [ 5 ] The technique is also capable of sensitive cytosine-methylation detection within specific sequence contexts across an entire genome, which increases its potential to identify specific DNA methylation sites and their relation to the expression of particular genes. [ 6 ] The use of whole genome bisulfite sequencing to create the first human DNA methylome in 2009 also helped identify a significant proportion of non-CG methylation. [ 6 ] As a result, multiple single-base resolution methylomes of the human genome continue to be produced in order to identify the role of intragenic DNA methylation in gene expression and regulation. Future studies aim to use whole genome bisulfite sequencing to investigate the role DNA methylation has in multifarious cellular processes such as cellular differentiation , embryogenesis , X-inactivation, genomic imprinting , and tumorigenesis . [ 4 ] Single-nucleotide maps have already been sequenced for two human cell lines, H1 human embryonic stem cells and IMR90 fetal lung fibroblasts, in order to study patterns of non-CG methylation in human cells. [ 4 ] Whole genome bisulfite sequencing has also been applied to developmental biology studies, in which non-CG methylation was discovered to be prevalent in pluripotent stem cells and oocytes. This technique helped researchers discover that non-CG methylation accumulated during oocyte growth and accounted for over half of all methylation in mouse germinal vesicle oocytes. [ 14 ] Similarly, in plants, whole genome bisulfite sequencing was used to examine CG, CHH, and CHG methylation (where H denotes A, C, or T). It was then discovered that the plant germline conserved CG and CHG methylation while CHH methylation was lost in microspores and sperm cells. [ 14 ] The comprehensive data provided by a whole-genome approach have spurred many novel hypotheses on how whole genome bisulfite sequencing could be used in other fields, including disease diagnosis and forensic science. Studies have shown that whole genome bisulfite sequencing can detect the abnormal methylation, more specifically hypermethylated suppressor genes, that is often seen in cancers including leukemia. [ 14 ] Additionally, whole genome bisulfite sequencing has been applied to blood spot samples in forensic investigations to generate high-quality DNA methylation analyses from dried stains. [ 14 ] The widespread use of whole genome bisulfite sequencing has been primarily limited by its high cost, complex data output, and high minimum coverage requirements. Due to the high amount and subsequent cost of DNA input, many studies using whole genome bisulfite sequencing assays occur with few or no biological replicates.
[ 15 ] For human samples, the US National Institutes of Health (NIH) Roadmap Epigenomics Project recommends a minimum of 30x sequencing coverage and approximately 80 million aligned, high-quality reads to achieve accurate results. [ 16 ] Consequently, large-scale studies for genome-wide methylation profiling remain less cost-effective, often requiring the entire genome to be re-sequenced multiple times for every experiment. [ 17 ] Current studies are being conducted to reduce the conventional minimum coverage requirements while maintaining mapping accuracy. Finally, the technique is also limited by the complexity of its data and a lack of sufficiently advanced analytical tools for the downstream computational requirements. [ 2 ] The current bioinformatics requirements for accurate data interpretation are ahead of existing technology, which limits the accessibility of sequencing results to the general public. Additionally, there are biological limitations concerning various steps in the standard protocol, particularly in the library preparation method. One of the biggest concerns is the potential for bias in the base composition of sequences and over-representation of methylated DNA data following bioinformatics analyses. [ 9 ] Bias can arise from multiple unintended effects of bisulfite conversion, including DNA degradation. This degradation can cause uneven sequence coverage by misrepresenting genomic sequences and overestimating 5-methylcytosine values. [ 3 ] Additionally, the bisulfite conversion process only distinguishes unmethylated cytosine from 5-methylcytosine; as a result, specificity between 5-methylcytosine and 5-hydroxymethylcytosine is limited. [ 3 ] Another potential source of bias arises from polymerase chain reaction amplification of the library, which affects sequences with highly skewed base compositions due to high rates of polymerase sequence errors in high-AT-content, bisulfite-converted DNA. [ 3 ]
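For a sense of scale of the coverage requirement mentioned above, the following sketch applies the standard Lander–Waterman estimate (mean coverage = reads × read length / genome size); the read length and genome size are illustrative assumptions, not values taken from the Roadmap Epigenomics recommendation.

```python
# Minimal sketch: relating read counts to mean sequencing depth using the
# Lander-Waterman estimate. Read length and genome size are assumptions.
GENOME_SIZE = 3.1e9   # approximate haploid human genome, bases
READ_LENGTH = 150     # bases per read (assumed)
PAIRED = 2            # paired-end reads sequence both fragment ends

def mean_coverage(read_pairs):
    return read_pairs * PAIRED * READ_LENGTH / GENOME_SIZE

def read_pairs_for(target_coverage):
    return target_coverage * GENOME_SIZE / (PAIRED * READ_LENGTH)

print(f"{mean_coverage(100e6):.1f}x from 100 million read pairs")
print(f"{read_pairs_for(30) / 1e6:.0f} million read pairs for 30x")
```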
https://en.wikipedia.org/wiki/Whole_genome_bisulfite_sequencing
Whole genome sequencing ( WGS ), also known as full genome sequencing or just genome sequencing , is the process of determining the entirety of the DNA sequence of an organism's genome at a single time. [ 2 ] [ 3 ] [ 4 ] This entails sequencing all of an organism's chromosomal DNA as well as DNA contained in the mitochondria and, for plants, in the chloroplast . Whole genome sequencing has largely been used as a research tool, but was being introduced to clinics in 2014. [ 5 ] [ 6 ] [ 7 ] In the future of personalized medicine , whole genome sequence data may be an important tool to guide therapeutic intervention. [ 8 ] The tool of gene sequencing at SNP level is also used to pinpoint functional variants from association studies and improve the knowledge available to researchers interested in evolutionary biology , and hence may lay the foundation for predicting disease susceptibility and drug response. Whole genome sequencing should not be confused with DNA profiling , which only determines the likelihood that genetic material came from a particular individual or group, and does not contain additional information on genetic relationships, origin or susceptibility to specific diseases. [ 9 ] In addition, whole genome sequencing should not be confused with methods that sequence specific subsets of the genome – such methods include whole exome sequencing (1–2% of the genome) or SNP genotyping (< 0.1% of the genome). The DNA sequencing methods used in the 1970s and 1980s were manual; for example, Maxam–Gilbert sequencing and Sanger sequencing . Several whole bacteriophage and animal viral genomes were sequenced by these techniques, but the shift to more rapid, automated sequencing methods in the 1990s facilitated the sequencing of the larger bacterial and eukaryotic genomes. [ 11 ] The first virus to have its complete genome sequenced was the bacteriophage MS2 , in 1976. [ 12 ] In 1992, yeast chromosome III was the first chromosome of any organism to be fully sequenced. [ 13 ] The first organism whose entire genome was fully sequenced was Haemophilus influenzae in 1995. [ 14 ] After it, the genomes of other bacteria and some archaea were sequenced, largely owing to their small genome size. H. influenzae has a genome of 1,830,140 base pairs of DNA. [ 14 ] In contrast, eukaryotes , both unicellular and multicellular, such as Amoeba dubia and humans ( Homo sapiens ) respectively, have much larger genomes (see C-value paradox ). [ 15 ] Amoeba dubia has a genome of 700 billion nucleotide pairs spread across thousands of chromosomes . [ 16 ] Humans contain fewer nucleotide pairs (about 3.2 billion in each germ cell – note the exact size of the human genome is still being revised) than A. dubia; however, their genome size far exceeds that of individual bacteria. [ 17 ] The first bacterial and archaeal genomes, including that of H. influenzae , were sequenced by Shotgun sequencing . [ 14 ] In 1996, the first eukaryotic genome ( Saccharomyces cerevisiae ) was sequenced. S. cerevisiae , a model organism in biology, has a genome of only around 12 million nucleotide pairs, [ 18 ] and was the first unicellular eukaryote to have its whole genome sequenced. The first multicellular eukaryote, and animal , to have its whole genome sequenced was the nematode worm Caenorhabditis elegans , in 1998.
[ 19 ] Eukaryotic genomes are sequenced by several methods including Shotgun sequencing of short DNA fragments and sequencing of larger DNA clones from DNA libraries such as bacterial artificial chromosomes (BACs) and yeast artificial chromosomes (YACs). [ 20 ] In 1999, the entire DNA sequence of human chromosome 22 , the second shortest human autosome , was published. [ 21 ] By the year 2000, the second animal and second invertebrate (yet first insect ) genome was sequenced – that of the fruit fly Drosophila melanogaster – a popular choice of model organism in experimental research. [ 22 ] The first plant genome – that of the model organism Arabidopsis thaliana – was also fully sequenced by 2000. [ 23 ] By 2001, a draft of the entire human genome sequence was published. [ 24 ] The genome of the laboratory mouse Mus musculus was completed in 2002. [ 25 ] In 2004, the Human Genome Project published an incomplete version of the human genome. [ 26 ] In 2008, a group from Leiden, the Netherlands, reported the sequencing of the first female human genome ( Marjolein Kriek ). Currently thousands of genomes have been wholly or partially sequenced . Almost any biological sample containing a full copy of the DNA—even a very small amount of DNA or ancient DNA —can provide the genetic material necessary for full genome sequencing. Such samples may include saliva , epithelial cells , bone marrow , hair (as long as the hair contains a hair follicle ), seeds , plant leaves, or anything else that has DNA-containing cells. The genome sequence of a single cell selected from a mixed population of cells can be determined using techniques of single cell genome sequencing . This has important advantages in environmental microbiology in cases where a single cell of a particular microorganism species can be isolated from a mixed population by microscopy on the basis of its morphological or other distinguishing characteristics. In such cases the normally necessary steps of isolation and growth of the organism in culture may be omitted, thus allowing the sequencing of a much greater spectrum of organism genomes. [ 27 ] Single cell genome sequencing is being tested as a method of preimplantation genetic diagnosis , wherein a cell from the embryo created by in vitro fertilization is taken and analyzed before embryo transfer into the uterus. [ 28 ] After implantation, cell-free fetal DNA can be taken by simple venipuncture from the mother and used for whole genome sequencing of the fetus. [ 29 ] Sequencing of nearly an entire human genome was first accomplished in 2000 partly through the use of shotgun sequencing technology. While full genome shotgun sequencing for small (4000–7000 base pair ) genomes was already in use in 1979, [ 30 ] broader application benefited from pairwise end sequencing, known colloquially as double-barrel shotgun sequencing . As sequencing projects began to take on longer and more complicated genomes, multiple groups began to realize that useful information could be obtained by sequencing both ends of a fragment of DNA. Although sequencing both ends of the same fragment and keeping track of the paired data was more cumbersome than sequencing a single end of two distinct fragments, the knowledge that the two sequences were oriented in opposite directions and were about the length of a fragment apart from each other was valuable in reconstructing the sequence of the original target fragment. 
The first published description of the use of paired ends was in 1990 as part of the sequencing of the human HPRT locus, [ 31 ] although the use of paired ends was limited to closing gaps after the application of a traditional shotgun sequencing approach. The first theoretical description of a pure pairwise end sequencing strategy, assuming fragments of constant length, was in 1991. [ 32 ] In 1995, the innovation of using fragments of varying sizes was introduced, [ 33 ] demonstrating that a pure pairwise end-sequencing strategy would be possible on large targets. The strategy was subsequently adopted by The Institute for Genomic Research (TIGR) to sequence the entire genome of the bacterium Haemophilus influenzae in 1995, [ 34 ] and then by Celera Genomics to sequence the entire fruit fly genome in 2000, [ 35 ] and subsequently the entire human genome. Applied Biosystems , now called Life Technologies , manufactured the automated capillary sequencers utilized by both Celera Genomics and The Human Genome Project. While capillary sequencing was the first approach to successfully sequence a nearly full human genome, it is still too expensive and takes too long for commercial purposes. Since 2005, capillary sequencing has been progressively displaced by high-throughput (formerly "next-generation") sequencing technologies such as Illumina dye sequencing , pyrosequencing , and SMRT sequencing . [ 36 ] All of these technologies continue to employ the basic shotgun strategy, namely, parallelization and template generation via genome fragmentation. Other technologies have emerged, including Nanopore technology . Though the sequencing accuracy of Nanopore technology is lower than that of the technologies above, its read length is on average much longer. [ 37 ] This generation of long reads is especially valuable in de novo whole-genome sequencing applications. [ 38 ] In principle, full genome sequencing can provide the raw nucleotide sequence of an individual organism's DNA at a single point in time. However, further analysis must be performed to provide the biological or medical meaning of this sequence, such as how this knowledge can be used to help prevent disease. Methods for analyzing sequencing data are being developed and refined. Because sequencing generates a lot of data (for example, there are approximately six billion base pairs in each human diploid genome), its output is stored electronically and requires a large amount of computing power and storage capacity. While analysis of WGS data can be slow, it is possible to speed up this step by using dedicated hardware. [ 39 ] A number of public and private companies are competing to develop a full genome sequencing platform that is commercially robust for both research and clinical use, [ 40 ] including Illumina, [ 41 ] Knome , [ 42 ] Sequenom , [ 43 ] 454 Life Sciences , [ 44 ] Pacific Biosciences, [ 45 ] Complete Genomics , [ 46 ] Helicos Biosciences , [ 47 ] GE Global Research ( General Electric ), Affymetrix , IBM , Intelligent Bio-Systems, [ 48 ] Life Technologies, Oxford Nanopore Technologies, [ 49 ] and the Beijing Genomics Institute . [ 50 ] [ 51 ] [ 52 ] These companies are heavily financed and backed by venture capitalists , hedge funds , and investment banks . [ 53 ] [ 54 ] A commonly referenced commercial target for sequencing cost until the late 2010s was US$1,000; however, private companies are working to reach a new target of only $100. [ 55 ] In October 2006, the X Prize Foundation , working in collaboration with the J.
Craig Venter Science Foundation, established the Archon X Prize for Genomics, [ 56 ] intending to award $10 million to "the first team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 1,000,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $1,000 per genome". [ 57 ] The Archon X Prize for Genomics was cancelled in 2013, before its official start date. [ 58 ] [ 59 ] In 2007, Applied Biosystems started selling a new type of sequencer called the SOLiD System. [ 60 ] The technology allowed users to sequence 60 gigabases per run. [ 61 ] In June 2009, Illumina announced that they were launching their own Personal Full Genome Sequencing Service at a depth of 30× for $48,000 per genome. [ 62 ] [ 63 ] In August, the founder of Helicos Biosciences, Stephen Quake , stated that using the company's Single Molecule Sequencer he had sequenced his own full genome for less than $50,000. [ 64 ] In November, Complete Genomics published a peer-reviewed paper in Science demonstrating its ability to sequence a complete human genome for $1,700. [ 65 ] [ 66 ] In May 2011, Illumina lowered its Full Genome Sequencing service to $5,000 per human genome, or $4,000 if ordering 50 or more. [ 67 ] Helicos Biosciences, Pacific Biosciences, Complete Genomics, Illumina, Sequenom, ION Torrent Systems, Halcyon Molecular, NABsys, IBM, and GE Global appeared to all be going head to head in the race to commercialize full genome sequencing. [ 36 ] [ 68 ] With sequencing costs declining, a number of companies began claiming that their equipment would soon achieve the $1,000 genome: these companies included Life Technologies in January 2012, [ 69 ] Oxford Nanopore Technologies in February 2012, [ 70 ] and Illumina in February 2014. [ 71 ] [ 72 ] In 2015, the NHGRI estimated the cost of obtaining a whole-genome sequence at around $1,500. [ 73 ] In 2016, Veritas Genetics began selling whole genome sequencing, including a report on some of the information in the sequencing, for $999. [ 74 ] In summer 2019, Veritas Genetics cut the cost for WGS to $599. [ 75 ] In 2017, BGI began offering WGS for $600. [ 76 ] However, in 2015, some noted that effective use of whole genome sequencing can cost considerably more than $1,000. [ 77 ] Also, as of 2017 there reportedly remained parts of the human genome that had not been fully sequenced. [ 78 ] [ 79 ] Full genome sequencing provides information on a genome that is orders of magnitude larger than that provided by DNA arrays , the previous leader in genotyping technology. For humans, DNA arrays currently provide genotypic information on up to one million genetic variants, [ 80 ] [ 81 ] [ 82 ] while full genome sequencing will provide information on all six billion bases in the human genome, or 3,000 times more data. Because of this, full genome sequencing is considered a disruptive innovation to the DNA array markets, as the accuracy of both ranges from 99.98% to 99.999% (in non-repetitive DNA regions) and the consumables cost of $5,000 per 6 billion base pairs is competitive (for some applications) with DNA arrays ($500 per 1 million basepairs). [ 44 ] Whole genome sequencing has established the mutation frequency for whole human genomes. The mutation frequency in the whole genome between generations for humans (parent to child) is about 70 new mutations per generation.
[ 83 ] [ 84 ] An even lower level of variation was found by comparing whole genome sequencing of blood cells from a pair of 100-year-old monozygotic (identical) twins. [ 85 ] Only 8 somatic differences were found, though somatic variation occurring in less than 20% of blood cells would be undetected. In the specifically protein-coding regions of the human genome, it is estimated that there are about 0.35 mutations that would change the protein sequence between parent/child generations (less than one mutated protein per generation). [ 86 ] In cancer, mutation frequencies are much higher, due to genome instability . This frequency can further depend on patient age, exposure to DNA damaging agents (such as UV-irradiation or components of tobacco smoke) and the activity/inactivity of DNA repair mechanisms. [ 87 ] Furthermore, mutation frequency can vary between cancer types: in germline cells, mutations occur at a rate of approximately 0.023 mutations per megabase, but this number is much higher in breast cancer (1.18-1.66 somatic mutations per Mb), in lung cancer (17.7) or in melanomas (≈33). [ 88 ] Since the haploid human genome consists of approximately 3,200 megabases, [ 89 ] this translates into about 74 mutations (mostly in noncoding regions) in germline DNA per generation, but 3,776-5,312 somatic mutations per haploid genome in breast cancer, 56,640 in lung cancer and 105,600 in melanomas (the sketch below reproduces this arithmetic). The distribution of somatic mutations across the human genome is very uneven, [ 90 ] such that the gene-rich, early-replicating regions receive fewer mutations than gene-poor, late-replicating heterochromatin, likely due to differential DNA repair activity. [ 91 ] In particular, the histone modification H3K9me3 is associated with high, [ 92 ] and H3K36me3 with low, mutation frequencies. [ 93 ] In research, whole-genome sequencing can be used in a genome-wide association study (GWAS) – a project aiming to determine the genetic variant or variants associated with a disease or some other phenotype. [ 94 ] In 2009, Illumina released its first whole genome sequencers that were approved for clinical, as opposed to research-only, use, and doctors at academic medical centers began quietly using them to try to diagnose what was wrong with people whom standard approaches had failed to help. [ 95 ] In 2009, a team from Stanford led by Euan Ashley performed clinical interpretation of a full human genome, that of bioengineer Stephen Quake. [ 96 ] In 2010, Ashley's team reported whole genome molecular autopsy [ 97 ] and in 2011, extended the interpretation framework to a fully sequenced family, the West family, who were the first family to be sequenced on the Illumina platform. [ 98 ] The price to sequence a genome at that time was US$19,500, which was billed to the patient but usually paid for out of a research grant; one person at that time had applied for reimbursement from their insurance company. [ 95 ] For example, one child had needed around 100 surgeries by the time he was three years old, and his doctor turned to whole genome sequencing to determine the problem; it took a team of around 30 people that included 12 bioinformatics experts, three sequencing technicians, five physicians, two genetic counsellors and two ethicists to identify a rare mutation in the XIAP gene that was causing widespread problems. [ 95 ] [ 99 ] [ 100 ] Due to recent cost reductions (see above) whole genome sequencing has become a realistic application in DNA diagnostics.
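The per-genome counts quoted above follow directly from the per-megabase rates; a minimal sketch of that arithmetic:

```python
# Minimal sketch: converting the per-megabase mutation rates quoted in the
# text into per-genome counts, using a ~3,200 Mb haploid human genome.
HAPLOID_MB = 3200

rates_per_mb = {
    "germline": 0.023,
    "breast cancer (low)": 1.18,
    "breast cancer (high)": 1.66,
    "lung cancer": 17.7,
    "melanoma": 33.0,
}

for label, rate in rates_per_mb.items():
    print(f"{label}: ~{rate * HAPLOID_MB:,.0f} mutations per haploid genome")
```

Running this reproduces the figures in the text: about 74 for the germline, 3,776-5,312 for breast cancer, 56,640 for lung cancer and 105,600 for melanoma.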
In 2013, the 3Gb-TEST consortium obtained funding from the European Union to prepare the health care system for these innovations in DNA diagnostics. [ 101 ] [ 102 ] Quality assessment schemes, health technology assessments and guidelines have to be in place. The 3Gb-TEST consortium has identified the analysis and interpretation of sequence data as the most complicated step in the diagnostic process. [ 103 ] At the consortium meeting in Athens in September 2014, the consortium coined the word genotranslation for this crucial step. This step leads to a so-called genoreport . Guidelines are needed to determine the required content of these reports. [ citation needed ] Genomes2People (G2P), an initiative of Brigham and Women's Hospital and Harvard Medical School , was created in 2011 to examine the integration of genomic sequencing into the clinical care of adults and children. [ 104 ] G2P's director, Robert C. Green , had previously led the REVEAL study (Risk EValuation and Education for Alzheimer's Disease), a series of clinical trials exploring patient reactions to the knowledge of their genetic risk for Alzheimer's. [ 105 ] [ 106 ] In 2018, researchers at Rady Children's Hospital Institute for Genomic Medicine in San Diego determined that rapid whole-genome sequencing (rWGS) could diagnose genetic disorders in time to change acute medical or surgical management (clinical utility) and improve outcomes in acutely ill infants. In a retrospective cohort study of acutely ill inpatient infants in a regional children's hospital from July 2016 to March 2017, forty-two families received rWGS for etiologic diagnosis of genetic disorders. The diagnostic sensitivity of rWGS was 43% (eighteen of 42 infants), compared with 10% (four of 42 infants) for standard genetic tests (P = .0005). The rate of clinical utility of rWGS (31%, thirteen of 42 infants) was significantly greater than that of standard genetic tests (2%, one of 42; P = .0015). Eleven (26%) infants with diagnostic rWGS avoided morbidity, one had a 43% reduction in likelihood of mortality, and one started palliative care. In six of the eleven infants, the changes in management reduced inpatient cost by $800,000-$2,000,000. The findings replicated a prior study of the clinical utility of rWGS in acutely ill inpatient infants, and demonstrated improved outcomes and net healthcare savings, supporting its consideration as a first-tier test in this setting. [ 107 ] A 2018 review of 36 publications found the cost for whole genome sequencing to range from $1,906 to $24,810 and to have a wide variance in diagnostic yield, from 17% to 73%, depending on patient groups. [ 108 ] Whole genome sequencing studies enable the assessment of associations between complex traits and both coding and noncoding rare variants ( minor allele frequency (MAF) < 1%) across the genome. Single-variant analyses typically have low power to identify associations with rare variants, and variant set tests have been proposed to jointly test the effects of given sets of multiple rare variants. [ 109 ] SNP annotations help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of genetic association analysis of rare variants in whole genome sequencing studies. [ 110 ] Some tools have been specifically developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization.
[ 111 ] [ 112 ] Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes. Some methods have been developed to enable functionally informed rare variant association analysis in biobank-scale cohorts using efficient approaches for summary statistic storage. [ 113 ] In oncology, whole genome sequencing represents both a major advance and a challenge for the scientific community, as it makes it possible to analyze, quantify and characterize circulating tumor DNA (ctDNA) in the bloodstream. This serves as a basis for early cancer diagnosis, treatment selection and relapse monitoring, as well as for determining the mechanisms of resistance, metastasis and phylogenetic patterns in the evolution of cancer. It can also help in selecting individualized treatments for patients with cancer and in observing how existing drugs are working as treatment progresses. Deep whole genome sequencing involves subclonal reconstruction based on ctDNA in plasma, which allows complete epigenomic and genomic profiling and shows the expression of circulating tumor DNA in each case. [ 114 ] In 2013, Green and a team of researchers launched the BabySeq Project to study the ethical and medical consequences of sequencing a newborn's DNA. [ 115 ] [ 116 ] As of 2015, whole genome and exome sequencing as a newborn screening tool were deliberated [ 117 ] and in 2021, further discussed. [ 118 ] In 2021, the NIH funded BabySeq2, an implementation study that expanded the BabySeq project, enrolling 500 infants from diverse families and tracking the effects of their genomic sequencing on their pediatric care. [ 119 ] In 2023, The Lancet opined that in the UK "focusing on improving screening by upgrading targeted gene panels might be more sensible in the short term. Whole genome sequencing in the long term deserves thorough examination and universal caution." [ 120 ] The introduction of whole genome sequencing may have ethical implications. [ 121 ] On one hand, genetic testing can potentially diagnose preventable diseases, both in the individual undergoing genetic testing and in their relatives. [ 121 ] On the other hand, genetic testing has potential downsides such as genetic discrimination , loss of anonymity, and psychological impacts such as discovery of non-paternity . [ 122 ] Some ethicists insist that the privacy of individuals undergoing genetic testing must be protected, [ 121 ] which is of particular concern when minors undergo genetic testing. [ 123 ] Illumina's CEO, Jay Flatley, wrongly claimed in February 2009 that "by 2019 it will have become routine to map infants' genes when they are born". [ 124 ] This potential use of genome sequencing is highly controversial, as it runs counter to ethical norms for predictive genetic testing of asymptomatic minors that have been well established in the fields of medical genetics and genetic counseling . [ 125 ] [ 126 ] [ 127 ] [ 128 ] The traditional guidelines for genetic testing have been developed over the course of several decades since it first became possible to test for genetic markers associated with disease, prior to the advent of cost-effective, comprehensive genetic screening.
[ citation needed ] When an individual undergoes whole genome sequencing, they reveal information not only about their own DNA sequences, but also about the probable DNA sequences of their close genetic relatives. [ 121 ] This information can further reveal useful predictive information about relatives' present and future health risks. [ 129 ] Hence, there are important questions about what obligations, if any, are owed to the family members of the individuals who are undergoing genetic testing. In Western/European society, tested individuals are usually encouraged to share important information on any genetic diagnoses with their close relatives, since the importance of the genetic diagnosis for offspring and other close relatives is usually one of the reasons for seeking genetic testing in the first place. [ 121 ] Nevertheless, a major ethical dilemma can develop when patients refuse to share information on a diagnosis of a serious genetic disorder that is highly preventable and where there is a high risk to relatives carrying the same disease mutation. Under such circumstances, the clinician may suspect that the relatives would rather know of the diagnosis, and hence the clinician can face a conflict of interest with respect to patient-doctor confidentiality. [ 121 ] Privacy concerns can also arise when whole genome sequencing is used in scientific research studies. Researchers often need to put information on patients' genotypes and phenotypes into public scientific databases, such as locus-specific databases. [ 121 ] Although only anonymous patient data are submitted to locus-specific databases, patients might still be identifiable by their relatives in the case of a rare disease or a rare missense mutation. [ 121 ] Public discussion around the introduction of advanced forensic techniques (such as advanced familial searching using public DNA ancestry websites and DNA phenotyping approaches) has been limited, disjointed, and unfocused. As forensic genetics and medical genetics converge toward genome sequencing, issues surrounding genetic data become increasingly connected, and additional legal protections may need to be established. [ 130 ] The first nearly complete human genomes sequenced were those of two Americans of predominantly Northwestern European ancestry in 2007 ( J. Craig Venter at 7.5-fold coverage , [ 131 ] [ 132 ] [ 133 ] and James Watson at 7.4-fold). [ 134 ] [ 135 ] [ 136 ] This was followed in 2008 by the sequencing of an anonymous Han Chinese man (at 36-fold), [ 137 ] a Yoruban man from Nigeria (at 30-fold), [ 138 ] a female clinical geneticist ( Marjolein Kriek ) from the Netherlands (at 7 to 8-fold), and a female leukemia patient in her mid-50s (at 33- and 14-fold coverage for tumor and normal tissues). [ 139 ] Steve Jobs was among the first 20 people to have their whole genome sequenced, reportedly for the cost of $100,000. [ 140 ] As of June 2012, there were 69 nearly complete human genomes publicly available. [ 141 ] In November 2013, a Spanish family made their personal genomics data publicly available under a Creative Commons public domain license . The work was led by Manuel Corpas and the data obtained by direct-to-consumer genetic testing with 23andMe and the Beijing Genomics Institute . This is believed to be the first such public genomics dataset for a whole family.
[ 142 ] According to Science , the major databases of whole genomes are: [ 143 ] In terms of genomic coverage and accuracy, whole genome sequencing can broadly be classified into either of the following: [ 146 ] Producing a truly high-quality finished sequence by this definition is very expensive. Thus, most human "whole genome sequencing" results are draft sequences (sometimes above and sometimes below the accuracy defined above). [ 146 ]
https://en.wikipedia.org/wiki/Whole_genome_sequencing
In chemistry , the whole number rule states that the masses of the isotopes are whole number multiples of the mass of the hydrogen atom. [ 1 ] The rule is a modified version of Prout's hypothesis , proposed in 1815, to the effect that atomic weights are multiples of the weight of the hydrogen atom. [ 2 ] It is also known as the Aston whole number rule [ 3 ] after Francis W. Aston , who was awarded the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph , of isotopes, in a large number of non-radioactive elements, and for his enunciation of the whole-number rule." [ 4 ] The law of definite proportions was formulated by Joseph Proust around 1800 [ 5 ] and states that all samples of a chemical compound will have the same elemental composition by mass. The atomic theory of John Dalton expanded this concept and explained matter as consisting of discrete atoms, with one kind of atom for each element, combined in fixed proportions to form compounds. [ 6 ] In 1815, William Prout reported his observation that the atomic weights of the elements were whole multiples of the atomic weight of hydrogen . [ 7 ] [ 8 ] He then hypothesized that the hydrogen atom was the fundamental object and that the other elements were combinations of different numbers of hydrogen atoms. [ 9 ] In 1920, Francis W. Aston demonstrated through the use of a mass spectrometer that apparent deviations from Prout's hypothesis are predominantly due to the existence of isotopes . [ 10 ] For example, Aston discovered that neon has two isotopes with masses very close to 20 and 22, as per the whole number rule, and proposed that the non-integer value 20.2 for the atomic weight of neon is due to the fact that natural neon is a mixture of about 90% neon-20 and 10% neon-22. A secondary cause of deviations is the binding energy or mass defect of the individual isotopes. During the 1920s, it was thought that the atomic nucleus was made of protons and electrons, which would account for the disparity between the atomic number of an atom and its atomic mass . [ 11 ] [ 12 ] In 1932, James Chadwick discovered an uncharged particle of approximately the same mass as the proton, which he called the neutron . [ 13 ] The fact that the atomic nucleus is composed of protons and neutrons was rapidly accepted, and Chadwick was awarded the Nobel Prize in Physics in 1935 for his discovery. [ 14 ] The modern form of the whole number rule is that the atomic mass of a given elemental isotope is approximately the mass number (number of protons plus neutrons) times an atomic mass unit (the approximate mass of a proton, neutron, or hydrogen-1 atom). This rule predicts the atomic mass of nuclides and isotopes with an error of at most 1%, with most of the error explained by the mass deficit caused by nuclear binding energy .
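The modern form of the rule is easy to check numerically. The sketch below compares standard tabulated atomic masses (in unified atomic mass units) with the corresponding mass numbers; all deviations come out well under the 1% bound discussed above.

```python
# Minimal sketch: checking the whole number rule against a few measured
# isotope masses (standard tabulated values, in unified atomic mass units).
isotopes = {        # isotope: (mass number A, measured atomic mass in u)
    "H-1":   (1,   1.007825),
    "C-12":  (12,  12.000000),
    "Ne-20": (20,  19.992440),
    "Ne-22": (22,  21.991385),
    "Fe-56": (56,  55.934936),
    "U-238": (238, 238.050788),
}

for name, (a, mass) in isotopes.items():
    deviation = (mass - a) / a * 100
    print(f"{name}: deviates from a whole number by {deviation:+.3f}%")

# Aston's neon example: a 90%/10% mix of Ne-20 and Ne-22 averages to ~20.2.
print(f"neon mixture: {0.9 * 19.992440 + 0.1 * 21.991385:.1f} u")
```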
https://en.wikipedia.org/wiki/Whole_number_rule
A whorl ( / w ɜːr l / or / w ɔːr l / ) is an individual circle , oval , volution or equivalent in a whorled pattern , which consists of a spiral or multiple concentric objects (including circles , ovals and arcs ). [ 1 ] [ 2 ] For mollusc whorls , the body whorl in a mollusc shell is the most recently formed whorl of a spiral shell, terminating in the aperture. This geometry-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Whorl
" Why is there anything at all? " or " Why is there something rather than nothing? " is a question about the reason for basic existence which has been raised or commented on by a range of philosophers and physicists , including Gottfried Wilhelm Leibniz , [ 3 ] Ludwig Wittgenstein , [ 4 ] and Martin Heidegger , [ 5 ] who called it "the fundamental question of metaphysics ". [ 6 ] [ 7 ] [ 8 ] No experiment could support the hypothesis "There is nothing" because any observation obviously implies the existence of an observer. [ 9 ] The question is usually taken as concerning practical causality (rather than a moral reason for), and posed totally and comprehensively, rather than concerning the existence of anything specific, such as the universe or multiverse , the Big Bang , God , mathematical and physical laws , time or consciousness . It can be seen as an open metaphysical question, rather than a search for an exact answer. [ 10 ] [ 11 ] [ 12 ] [ 13 ] The question does not include the timing of when anything came to exist. Some have suggested the possibility of an infinite regress , where, if an entity cannot come from nothing and this concept is mutually exclusive from something , there must have always been something that caused the previous effect, with this causal chain (either deterministic or probabilistic ) extending infinitely back in time . [ 14 ] [ 15 ] [ 16 ] Philosopher Stephen Law has said the question may not need answering, as it is attempting to answer a question that is outside a spacetime setting while being within a spacetime setting. He compares the question to asking "what is north of the North Pole ?" [ 17 ] The ancient Greek philosopher Aristotle argued that everything in the universe must have a cause, culminating in an ultimate uncaused cause . (See Four causes .) However, David Hume argued that a cause may not be necessary in the case of the formation of the universe. Whilst we expect that everything has a cause because of our experience of the necessity of causes, the formation of the universe is outside our experience and may be subject to different rules. [ 18 ] [ 19 ] Kant supports [ 20 ] and extends this argument. [ 21 ] Kant argues that the nature of our mind may lead us to ask some questions (rather than asking because of the validity of those questions). [ 22 ] [ clarification needed ] In philosophy, the brute fact approach proposes that some facts cannot be explained in terms of a deeper, more "fundamental" fact. [ 23 ] [ 24 ] It is in opposition to the principle of sufficient reason approach. [ 25 ] On this question, Bertrand Russell took a brute fact position when he said, "I should say that the universe is just there, and that's all." [ 26 ] [ 27 ] Sean Carroll similarly concluded that "any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts; the universe simply is, without ultimate cause or explanation." [ 28 ] [ 29 ] : 25 Roy Sorensen has discussed that the question may have an impossible explanatory demand, if there are no existential premises. [ clarification needed ] [ 30 ] Philosopher Brian Leftow has argued that the question cannot have a causal explanation (as any cause must itself have a cause) or a contingent explanation (as the factors giving the contingency must pre-exist), and that if there is an answer, it must be something that exists necessarily (i.e., something that just exists, rather than is caused). 
[ 31 ] Philosopher of physics Dean Rickles has argued that numbers and mathematics (or their underlying laws) may necessarily exist. [ 32 ] [ 33 ] If we accept that mathematics is an extension of logic , as philosophers such as Bertrand Russell and Alfred North Whitehead did, then mathematical structures like numbers and shapes must be necessarily true propositions in all possible worlds . [ 34 ] [ 35 ] [ 36 ] Physicists, including popular physicists such as Stephen Hawking and Lawrence Krauss , have offered explanations (of at least that aspect of cosmogony concerning how the first particles came into existence) that rely on quantum mechanics , saying that in a quantum vacuum state , virtual particles and spacetime bubbles will spontaneously come into existence. [ 37 ] The actual mathematical demonstration of quantum fluctuations of the hypothetical false vacuum state spontaneously causing an expanding bubble of true vacuum was done by quantum cosmologists at the Chinese Academy of Sciences in 2014. [ 38 ] Gottfried Wilhelm Leibniz attributed to God the role of the necessary sufficient reason for everything that exists (see: Cosmological argument ). He wrote: "Why is there something rather than nothing? The sufficient reason... is found in a substance which... is a necessary being bearing the reason for its existence within itself." [ 39 ] The pre-Socratic philosopher Parmenides was one of the first Western thinkers to question the possibility of nothing, and commentary on this has continued. [ 9 ] Nobel Laureate Frank Wilczek is credited with the aphorism that "nothing is unstable." Physicist Sean Carroll argues that this accounts merely for the existence of matter, but not the existence of quantum states , space-time , or the universe as a whole. [ 28 ] [ 29 ] Some cosmologists believe it to be possible that something (e.g., the universe ) may come to exist spontaneously from nothing. Some mathematical models support this idea, and it is becoming a more prevalent explanation among the scientific community for why the Big Bang occurred. [ 40 ] Robert Nozick proposed some possible explanations. [ 41 ] Mariusz Stanowski explained: "There must be both something and nothing, because separately neither can be distinguished". [ 42 ] Philosophical wit Sidney Morgenbesser answered the question with an apothegm : "If there were nothing, you'd still be complaining!", [ 43 ] [ 44 ] or "Even if there was nothing, you still wouldn't be satisfied!" [ 29 ] : 17 "To explain why something exists, we standardly appeal to the existence of something else... For instance, if we answer 'There is something because the Universal Designer wanted there to be something', then our explanation takes for granted the existence of the Universal Designer. Someone who poses the question in a comprehensive way will not grant the existence of the Universal Designer as a starting point. If the explanation cannot begin with some entity, then it is hard to see how any explanation is feasible. Some philosophers conclude 'Why is there something rather than nothing?' is unanswerable. They think the question stumps us by imposing an impossible explanatory demand, namely, 'Deduce the existence of something without using any existential premises'. Logicians should feel no more ashamed of their inability to perform this deduction than geometers should feel ashamed at being unable to square the circle."
https://en.wikipedia.org/wiki/Why_is_there_anything_at_all?
IEEE 802.11be , dubbed Extremely High Throughput (EHT) , is a wireless networking standard in the IEEE 802.11 set of protocols [ 4 ] [ 5 ] which is designated Wi-Fi 7 by the Wi-Fi Alliance . [ 6 ] [ 7 ] [ 8 ] It has built upon 802.11ax , focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4, 5, and 6 GHz frequency bands. [ 9 ] In a single band, throughput reaches a theoretical maximum of 23 Gbit/s, although actual results are much lower. Development of the 802.11be amendment began with an initial draft in March 2021 with a final version expected by the end of 2024. [ 7 ] [ 10 ] [ 11 ] Despite this, numerous products were announced in 2022 based on draft standards , with retail availability in early 2023. On 8 January 2024, the Wi-Fi Alliance introduced its Wi-Fi Certified 7 program to certify Wi-Fi 7 devices. While final ratification wasn't expected until the end of 2024, the technical requirements were essentially complete. [ 12 ] [ 13 ] [ 14 ] The following are core features that have been approved as of Draft 3.0: The main candidate features mentioned in the 802.11be Project Authorization Request (PAR) are: [ 16 ] Apart from the features mentioned in the PAR, there are newly introduced features: [ 20 ] The 802.11be Task Group is led by individuals affiliated with Qualcomm, Intel, and Broadcom. Those affiliated with Huawei , Maxlinear , NXP , and Apple also have senior positions. [ 11 ] The Wi-Fi Alliance maintains a list of Wi-Fi 7 certified devices. [ 41 ] Android 13 and higher provide support for Wi-Fi 7. [ 42 ] The Linux 6.2 kernel provides support for Wi-Fi 7 devices. The 6.4 kernel added Wi-Fi 7 mesh support. [ 43 ] Linux 6.5 included significant driver support by Intel engineers, particularly support for MLO. [ 44 ] Support for Wi-Fi 7 was added to Windows 11 , as of build 26063.1. [ 45 ] [ 46 ]
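The 23 Gbit/s theoretical maximum can be reproduced from the generic 802.11 PHY data-rate relation, assuming the commonly cited 802.11be single-band maximums: 8 spatial streams, a 320 MHz channel with 3,920 data subcarriers, 4096-QAM (12 bits per subcarrier), a 5/6 coding rate, and a 12.8 µs symbol plus 0.8 µs guard interval. These parameter values are assumptions drawn from general descriptions of the standard rather than from this article.

```python
# Minimal sketch: 802.11 PHY data rate = streams x data subcarriers x bits
# per subcarrier x coding rate / symbol duration. Parameter values below are
# the commonly cited 802.11be single-band maximums (assumptions).
spatial_streams = 8
data_subcarriers = 3920   # 320 MHz channel (4 x 996-tone units, minus pilots)
bits_per_subcarrier = 12  # 4096-QAM
coding_rate = 5 / 6
symbol_duration = 12.8e-6 + 0.8e-6  # OFDM symbol + shortest guard interval

rate = (spatial_streams * data_subcarriers * bits_per_subcarrier
        * coding_rate) / symbol_duration
print(f"Peak PHY rate: {rate / 1e9:.1f} Gbit/s")  # ~23 Gbit/s
```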
https://en.wikipedia.org/wiki/Wi-Fi_7
Cambium Networks Corporation is an American manufacturer of wireless telecommunications equipment, including enterprise Wi-Fi, network switches, Internet of Things devices, and fixed wireless broadband products for Internet access . [ 6 ] [ 7 ] [ 8 ] Publicly traded on the NASDAQ stock exchange, it spun out of Motorola in October 2011. [ 9 ] [ 10 ] [ 11 ] Cambium Networks manufactures point-to-point backhaul , point-to-multipoint communication wide area network (WAN) , Wi-Fi indoor and outdoor access, and cloud-based network management systems . [ 12 ] In 2020, the company collaborated with Facebook to add Terragraph, a mesh networking technology that allows high-speed internet connections where laying fiber optic cable is not viable. [ 13 ] As of 2021 the company has shipped 10 million radios. [ 14 ] Products are available in point-to-point and point-to-multipoint configurations. Its cnWave fixed wireless solution provides multi-gigabit throughput. [ 14 ] It includes both the original Motorola-designed products using the Canopy protocol and the PtP backhauls that were rebranded from Orthogon Systems, which Motorola acquired in 2006. Cambium Networks' solutions are used by broadband service providers and managed service providers to connect business and residential locations in dense urban, suburban, rural and remote locations, including education and healthcare. [ 15 ] Campgrounds, RV parks and holiday parks have deployed Cambium Networks' fixed wireless and Wi-Fi for high-speed connectivity. [ 16 ] Cambium Networks also manufactures Wireless LAN (WLAN) Wi-Fi access points, including Wi-Fi 6E, and intelligent switches along with cloud-management systems. [ 17 ] In 2022, Spectralink added interoperability between Cambium Networks access points and its Wi-Fi phones and handsets as part of its enterprise wireless certification program. [ 18 ] Cambium Networks was created when Motorola Solutions sold the Canopy and Orthogon businesses in 2011. Cambium evolved the platform and expanded it to three product lines: Point to Point (PTP) (formerly Orthogon), Point to Multipoint (PMP) (formerly Canopy) and ePMP. [ 19 ] In July 2019, Cambium acquired Xirrus from Riverbed Technology . [ 20 ] In June 2019, the company listed on the NASDAQ Stock Exchange in an initial public offering that raised $70 million. [ 21 ] WISPA network operator members voted Cambium Networks the "Manufacturer of the Year" from 2017 to 2020. [ 14 ] The technology competes with WiMAX , LTE and other long-range mobile products, but not effectively with wired Internet, which is capable of much faster speeds and does not have wireless relay round-trip delay . Competent Canopy implementations, such as the Broadband for Rural Nova Scotia initiative, have nevertheless demonstrated that VoIP , gaming and other low-latency applications work acceptably over this system, even in areas of challenging weather, including high-wind conditions (which cause antennas to move and affect connections). A typical Canopy setup consists of a cluster of up to six co-located standard access points (AP), each with a 60-degree horizontal beamwidth antenna, to achieve 360-degree coverage. The most commonly used APs are available in 120, 180, or 360 degree models for site-based coverage, thus decreasing the number of APs needed on a tower.
Also included would be one or more backhauls or otherwise out-of-band links (to carry data to/from other network locations) and a Cluster Management Module (CMM) to provide power and synchronization to each Canopy AP or Backhaul Module (BM). Customers of the system receive service through subscriber modules (SM) aimed towards the AP. The SMs should be mounted on the highest point of a building to get a reliable connection; otherwise, Fresnel zone obstruction will weaken the signal. Under ideal operating conditions, the system can communicate over distances of 3.5 to 15 miles (5.6 to 24.1 km) depending on the frequency, using equipment with integrated antennas . Network operators can opt to install reflector dishes or Stinger antennas, or to use Canopy models that accept external antennas at one or both ends of the link, to increase coverage distance. Most Canopy equipment receives its power using Power over Ethernet ; however, it does not comply with the IEEE 802.3af standard. A customer can query the status of their SM by viewing the URL 169.254.1.1/main.cgi with a web browser (unless the network operator uses a different IP address or has put the subscriber in a VLAN ); see the sketch at the end of this section. In general, the 900 MHz version is more effective for use in outlying areas because of its ability to penetrate trees. [ 22 ] However, it requires careful installation because of the easy propagation of interference on that band. Other frequencies currently available are 2.4 GHz, 5.2 GHz, 5.4 GHz, and 5.7 GHz. While Cambium offers products that support the Wi-Fi protocols (mostly the cnPilot range and the products from their Xirrus acquisition), most of their outdoor, long-range products function exclusively with the proprietary TDMA Canopy or Cambium protocols on custom FPGA code. These are heavily optimized for GPS synchronization, frequency re-use, low latency, and survival over long distances and in high-interference conditions. [ 23 ] The versions of this protocol include: These products are fixed wireless technology. Canopy protocol products have many advantages over Wi-Fi and other wireless local area network protocols: Their main disadvantages are:
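A minimal sketch of the status query described above, fetching an SM's built-in status page from a management host. The address is the default mentioned in the text and may differ on a given network; the page is plain HTML whose layout varies by firmware version, so no field parsing is attempted here.

```python
# Poll a Canopy subscriber module's built-in status page (illustrative sketch).
import urllib.request

SM_STATUS_URL = "http://169.254.1.1/main.cgi"  # default unless the operator changed it

with urllib.request.urlopen(SM_STATUS_URL, timeout=5) as resp:
    html = resp.read().decode("utf-8", errors="replace")

print(html[:300])  # show the start of the status page
```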
https://en.wikipedia.org/wiki/Wi-Fi_array
WiFi-Where was a tool that facilitated wardriving and detection of wireless LANs using the 802.11b, 802.11a and 802.11g WLAN standards . Versions existed for the operating systems iOS and Palm OS . Originally created in June 2004 for the Palm OS by Jonathan Hays of Hazelware Software, the IP for WiFi-Where was licensed to 3Jacks Software in 2009. An iPhone version of the application was released in January 2010, but was pulled from the App Store by Apple in March 2010. [ 1 ] The app was frequently listed as a common tool to facilitate wardriving. [ 2 ] As of 2010, it was still available in the jailbroken Cydia store. [ citation needed ] WiFi-Where was one of many applications that were suddenly purged by Apple in March 2010. Apple never commented publicly on the reasons, other than that the apps accessed 'private frameworks.' [ 3 ] [ 4 ] [ 5 ] This removal of an entire category of software from the App Store pushed wardriving software to other platforms such as Android and Windows . The program was commonly used for: Some of the unique features that the program implemented were:
https://en.wikipedia.org/wiki/WiFi-Where
WiFi Explorer is a wireless network scanner tool for macOS that can help users identify channel conflicts, overlapping networks, and network configuration issues [ 1 ] [ 2 ] [ 3 ] that may be affecting the connectivity and performance of Wi-Fi networks. WiFi Explorer began as a desktop alternative to WiFi Analyzer, an iPhone app for wireless network scanning that was pulled from Apple 's App Store in March 2010 due to its use of private frameworks. [ 4 ] [ 5 ] Since its first release, WiFi Explorer incorporated features that were not included in the last available version of WiFi Analyzer, such as support for 5 GHz networks and 40 MHz channel widths. Starting in version 1.5, WiFi Explorer included support for 802.11ac networks, as well as 80 and 160 MHz channel widths. On June 22, 2017, a professional version of WiFi Explorer, WiFi Explorer Pro, was released. [ 6 ] WiFi Explorer Pro offers additional features especially designed for WLAN and IT professionals. [ 7 ] The standard version of WiFi Explorer is also available on Setapp . Due to limitations of Apple 's CoreWLAN framework, [ 12 ] the standard version of WiFi Explorer is unable to detect hidden networks (except for the network it is connected to) and does not support external USB Wi-Fi adapters. The Pro edition supports passive scanning, [ 10 ] which can detect hidden networks, and can make use of external adapters via the External Adapter Support Environment (EASE).
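For context, the CoreWLAN framework mentioned above exposes active scanning in just a few calls. The sketch below uses the PyObjC bridge; the pyobjc-framework-CoreWLAN package, and running on a Mac with Wi-Fi enabled, are assumptions. Consistent with the limitation described in the text, hidden networks come back with no SSID.

```python
# Active Wi-Fi scan through Apple's CoreWLAN framework via PyObjC (sketch).
import CoreWLAN  # pip install pyobjc-framework-CoreWLAN

iface = CoreWLAN.CWWiFiClient.sharedWiFiClient().interface()
networks, err = iface.scanForNetworksWithName_error_(None, None)
if err is not None:
    raise RuntimeError(err)

for net in networks:
    # ssid() is None for hidden networks, matching the CoreWLAN limitation
    print(net.ssid(), net.rssiValue())
```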
https://en.wikipedia.org/wiki/WiFi_Explorer
WiGLE ( Wireless Geographic Logging Engine ) is a website for collecting information about the different wireless hotspots around the world. Users can register on the website and upload hotspot data like GPS coordinates , SSID , MAC address and the encryption type used on the hotspots discovered. In addition, cell tower data is uploaded and displayed. [ 1 ] By obtaining information about the encryption of the different hotspots, WiGLE tries to create an awareness of the need for security when running a wireless network. [ 2 ] The first recorded hotspot on WiGLE was uploaded in September 2001. By June 2017, WiGLE counted over 349 million recorded WiFi networks in its database, of which 345 million were recorded with GPS coordinates, and over 4.8 billion unique recorded observations. In addition, the database at that point contained 7.80 million unique cell towers, including 7.75 million with GPS coordinates. [ 3 ] By May 2019, WiGLE had a total of 551 million networks recorded. [ 4 ] From Hacking for Dummies [ 5 ] to Introduction to Neogeography, [ 6 ] WiGLE is a well-known resource and tool. As early as 2004, its database of 228,000 wireless networks was being used to advocate better security of WiFi. [ 7 ] Several books mentioned the WiGLE database in 2005, [ 8 ] [ 9 ] including internationally, [ 10 ] and the association with vehicles was also becoming widely known. [ 11 ] Some associations of WiGLE have been positive, and some have been darker. [ 12 ] [ 13 ] [ 14 ] By 2004, the site was sufficiently well known that the announcement of a new book quoted the co-founder, saying “This is the ‘Kama Sutra’ of wardriving literature. If you can't wardrive after reading this, nature has selected you not to. This is the first complete guide on the subject we’ve ever seen (it mentions us). Don't quote me on that.” –Bob “bobzilla” Hagemann, WiGLE.net co-founder; a shortened quote appeared on the book's cover. [ 15 ] [ 16 ] In its early days, circa 2003, the site's lack of mapping was criticized and was said to force WiFi seekers to use more primitive methods. "The most primitive method disseminated is warchalking , where mappers inscribe a symbolic markup on the physical premises to indicate the presence of a wireless network in the area." Regarding WiGLE in particular, it was said, "The Netstumbler map site and the Wireless Geographic Logging Engine store more detailed wardrive trace data, yet do not offer any visualization format that is particularly useful or informative." [ 17 ] By 2004 others felt differently, however, and a WiFi news site said about "the fine folks at wigle.net who have 900,000 access points in their wardriving database," "While the maps aren't as pretty, they're quite good, and the URLs correspond to specific locations where WiFiMaps hides the URL-to-location mapping." [ 18 ] In late 2004, other authors stated "that war driving is now ubiquitous: a good illustration of this is provided by the WiGLE.net online database of WAPS." They also said, "The motherload of WAP maps is available on the Wireless Geographic Logging Engine Web site (wigle.net). Circa late September 2004, WiGLE’s database and mapping technology included over 1.6 million WAPS. If you can’t find the WAP of interest there, you can probably live without it." [ 19 ] In 2005, using WiFi databases for geolocation was being discussed, and WiGLE, with approximately 2.4 million located access points in the database, was often mentioned.
[ 20 ] [ 21 ] [ 22 ] In 2017, data from WiGLE was used as a source for WiFi router locations and encryption frequencies. [ 23 ] In 2018, data from the WiGLE database was compared against the data collected by the authors. [ 24 ] The WiGLE Android app was compared against other wardriving tools at a conference in 2021. [ 25 ] In 2024, data from the WiGLE database was compared against Apple's location services; Erik Rye and Dave Levin found that the vast majority of networks in their sample from the WiGLE database were within 1 km of the corresponding entries in Apple's database. [ 26 ] Although the apps used to collect information are open source, [ 27 ] the database itself is accessed and distributed under a freeware proprietary license. [ 28 ] Commercial licenses for parts of the data may be purchased. [ 29 ] The Android app to collect Wi-Fi hotspots and their corresponding geographic information is available under a 3-clause BSD license. [ 30 ]
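The database discussed above is also queryable programmatically. A hedged sketch of a search against WiGLE's REST API follows; the endpoint and field names reflect WiGLE's published v2 interface but should be treated as assumptions and checked against the current documentation at api.wigle.net, and the credentials are placeholders (an API token requires free registration).

```python
# Query the WiGLE v2 API for networks matching an SSID (illustrative sketch).
import requests

resp = requests.get(
    "https://api.wigle.net/api/v2/network/search",
    params={"ssid": "example-ssid", "resultsPerPage": 10},
    auth=("API_NAME", "API_TOKEN"),  # placeholder credentials
    timeout=10,
)
resp.raise_for_status()

for net in resp.json().get("results", []):
    # trilat/trilong are WiGLE's trilaterated coordinates for the network
    print(net.get("ssid"), net.get("trilat"), net.get("trilong"), net.get("encryption"))
```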
https://en.wikipedia.org/wiki/WiGLE
In physics , Wick rotation , named after Italian physicist Gian Carlo Wick , is a method of finding a solution to a mathematical problem in Minkowski space from a solution to a related problem in Euclidean space by means of a transformation that substitutes an imaginary-number variable for a real-number variable. Wick rotations are useful because of an analogy between two important but seemingly distinct fields of physics: statistical mechanics and quantum mechanics . In this analogy, inverse temperature plays a role in statistical mechanics formally akin to that of imaginary time in quantum mechanics: that is, 1/ k B T takes the place of it / ħ , where t is time and i is the imaginary unit ( i² = −1 ). More precisely, in statistical mechanics, the Gibbs measure exp(− H / k B T ) describes the relative probability of the system to be in any given state at temperature T , where H is a function describing the energy of each state and k B is the Boltzmann constant . In quantum mechanics, the transformation exp(− itH / ħ ) describes time evolution, where H is an operator describing the energy (the Hamiltonian ) and ħ is the reduced Planck constant . The former expression resembles the latter when we replace it / ħ with 1/ k B T , and this replacement is called Wick rotation. [ 1 ] Wick rotation is called a rotation because when we represent complex numbers as a plane , the multiplication of a complex number by the imaginary unit is equivalent to rotating the vector representing that number counter-clockwise by an angle of magnitude π /2 about the origin. [ 2 ] Wick rotation is motivated by the observation that the Minkowski metric in natural units (with metric signature (− + + +) convention) and the four-dimensional Euclidean metric are equivalent if one permits the coordinate t to take on imaginary values. The Minkowski metric becomes Euclidean when t is restricted to the imaginary axis , and vice versa. Taking a problem expressed in Minkowski space with coordinates x , y , z , t , and substituting t = − iτ sometimes yields a problem in real Euclidean coordinates x , y , z , τ which is easier to solve. This solution may then, under reverse substitution, yield a solution to the original problem. Wick rotation connects statistical mechanics to quantum mechanics by replacing inverse temperature with imaginary time , or more precisely replacing 1/ k B T with it / ħ , where T is temperature, k B is the Boltzmann constant , t is time, and ħ is the reduced Planck constant . For example, consider a quantum system whose Hamiltonian H has eigenvalues E j . When this system is in thermal equilibrium at temperature T , the probability of finding it in its j th energy eigenstate is proportional to exp(− E j / k B T ) . Thus, the expected value of any observable Q that commutes with the Hamiltonian is, up to a normalizing constant, {\displaystyle \sum _{j}Q_{j}e^{-E_{j}/(k_{B}T)},} where j runs over all energy eigenstates and Q j is the value of Q in the j th eigenstate. Alternatively, consider this system in a superposition of energy eigenstates , evolving for a time t under the Hamiltonian H . After time t , the relative phase change of the j th eigenstate is exp(− E j it / ħ ) . Thus, the probability amplitude that a uniform (equally weighted) superposition of states evolves to an arbitrary superposition with coefficients Q j is, up to a normalizing constant, {\displaystyle \sum _{j}Q_{j}e^{-E_{j}it/\hbar }.} Note that this formula can be obtained from the formula for thermal equilibrium by replacing 1/ k B T with it / ħ .
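The metric equivalence just described is a one-line computation; writing it out in the stated conventions (signature (− + + +), natural units):

```latex
% Minkowski line element:
ds^{2} = -\,dt^{2} + dx^{2} + dy^{2} + dz^{2}
% Substituting t = -i\tau gives dt^{2} = -\,d\tau^{2}, hence the
% four-dimensional Euclidean line element:
ds^{2} = d\tau^{2} + dx^{2} + dy^{2} + dz^{2}
```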
Wick rotation relates statics problems in n dimensions to dynamics problems in n − 1 dimensions, trading one dimension of space for one dimension of time. A simple example where n = 2 is a hanging spring with fixed endpoints in a gravitational field. The shape of the spring is a curve y ( x ) . The spring is in equilibrium when the energy associated with this curve is at a critical point (an extremum); this critical point is typically a minimum, so this idea is usually called "the principle of least energy". To compute the energy, we integrate the spatial energy density over space: {\displaystyle E[y]=\int \left[{\tfrac {k}{2}}\left({\tfrac {dy}{dx}}\right)^{2}+V(y(x))\right]dx,} where k is the spring constant, and V ( y ( x )) is the gravitational potential. The corresponding dynamics problem is that of a rock thrown upwards. The path the rock follows is that which extremalizes the action ; as before, this extremum is typically a minimum, so this is called the " principle of least action ". Action is the time integral of the Lagrangian : {\displaystyle S[y]=\int \left[{\tfrac {m}{2}}\left({\tfrac {dy}{dt}}\right)^{2}-V(y(t))\right]dt.} We get the solution to the dynamics problem (up to a factor of i ) from the statics problem by Wick rotation, replacing y ( x ) by y ( it ) and the spring constant k by the mass of the rock m ; the substitution is worked out at the end of this section. Taken together, the previous two examples show how the path integral formulation of quantum mechanics is related to statistical mechanics. From statistical mechanics, the shape of each spring in a collection at temperature T will deviate from the least-energy shape due to thermal fluctuations; the probability of finding a spring with a given shape decreases exponentially with the energy difference from the least-energy shape. Similarly, a quantum particle moving in a potential can be described by a superposition of paths, each with a phase exp( iS ) : the thermal variations in the shape across the collection have turned into quantum uncertainty in the path of the quantum particle. The Schrödinger equation and the heat equation are also related by Wick rotation. Wick rotation also relates a quantum field theory at a finite inverse temperature β to a statistical-mechanical model over the "tube" R 3 × S 1 with the imaginary time coordinate τ being periodic with period β . However, there is a slight difference. Statistical-mechanical n -point functions satisfy positivity, whereas Wick-rotated quantum field theories satisfy reflection positivity . [ further explanation needed ] Note, however, that the Wick rotation cannot be viewed as a rotation on a complex vector space that is equipped with the conventional norm and metric induced by the inner product , as in this case the rotation would cancel out and have no effect. Dirk Schlingemann proved that a more rigorous link between Euclidean and quantum field theory can be constructed using the Osterwalder–Schrader axioms . [ 3 ]
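As a worked version of the substitution just described (with x = it, so that dx = i dt and (dy/dx)² = −(dy/dt)², together with k → m):

```latex
E[y] = \int \Big[ \tfrac{k}{2} \Big(\tfrac{dy}{dx}\Big)^{2} + V\big(y(x)\big) \Big]\, dx
\;\longrightarrow\;
i \int \Big[ -\tfrac{m}{2} \Big(\tfrac{dy}{dt}\Big)^{2} + V\big(y(t)\big) \Big]\, dt
\;=\; -\,i \int \Big[ \tfrac{m}{2} \Big(\tfrac{dy}{dt}\Big)^{2} - V\big(y(t)\big) \Big]\, dt
\;=\; -\,i\,S[y]
```

so the statics problem indeed maps onto the dynamics problem up to the factor of i noted in the text.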
https://en.wikipedia.org/wiki/Wick_rotation
WidSets is a mobile runtime technology, and a mobile service powered by that technology, based on the Java MIDP 2.0 platform , from the Finnish mobile company Nokia . It is both a widget engine and a widget deployment service where mini-applications called widgets can be uploaded to WidSets servers to be compiled and then automatically deployed to MIDP 2.0 compliant mobile phones running the WidSets client software. The widgets are created using Extensible Markup Language (XML), Cascading Style Sheets (CSS), and the Helium scripting language . WidSets is a combined application and service that brings functionality similar to that of desktop PC widgets to a wide variety of mobile phones. WidSets widgets are micro-applications intended to perform a single function and, like desktop widgets , generally rely on some kind of web service to provide information to the user. WidSets was officially launched in October 2006. It worked on all Java MIDP 2.0 phones, including non-Nokia ones, and was regarded as a mobile counterpart to Netvibes . [ 1 ] The current version as of May 2008 [update] is version 2.0.0 for both the client and the SDK. In June 2009, Nokia announced that WidSets was no longer being developed, [ 2 ] having been replaced by the umbrella Ovi Store . Example widgets included currency converters, news headline retrievers and weather forecast information. This article about wireless technology is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/WidSets
The Widal test , developed in 1896 and named after its inventor, Georges-Fernand Widal , is an indirect agglutination test for enteric fever or undulant fever whereby bacteria causing typhoid fever are mixed with a serum containing specific antibodies obtained from an infected individual. In cases of Salmonella infection, the test assesses for host antibodies to the O somatic antigen and the H flagellar antigen of the bacteria. [ 1 ] False positive and false negative results may occur. Test results need to be interpreted carefully to account for any history of enteric fever, typhoid vaccination, and the general level of antibodies in the populations in endemic areas of the world. As with all serological tests, the rise in antibody levels needed to perform the diagnosis takes 7–14 days, which limits its applicability in early diagnosis. Other means of diagnosing Salmonella typhi (and paratyphi ) include cultures of blood, urine and faeces . These organisms produce H 2 S from thiosulfate and can be identified easily on differential media such as bismuth sulfite agar . [ 2 ] [ 3 ] [ 4 ] Typhidot is another test used to ascertain the diagnosis of typhoid fever . A newer serological test, the Tubex test, performs no better than the Widal test; it is therefore not recommended for the diagnosis of typhoid fever. [ 5 ] 2-mercaptoethanol is often added to the Widal test. This agent preferentially denatures the IgM class of antibodies , so if a decrease in the titer is seen after using this agent, it means that the contribution of IgM has been removed, leaving the IgG component. This differentiation of antibody classes is important as it allows for the distinction of a recent infection (IgM) from an old infection (IgG). The Widal test is positive if the TO antigen titer is more than 1:160 in an active infection, or if the TH antigen titer is more than 1:160 in past infection or in immunized persons. A single Widal test is of little clinical relevance, especially in endemic areas such as the Indian subcontinent, Africa and South-east Asia. This is due to recurrent exposure to the typhoid-causing bacteria, immunization, and high chances of cross-reaction from infections such as malaria and non-typhoidal Salmonella . [ 6 ] If no other tests (either bacteriologic culture or more specific serology) are available, a fourfold or greater increase in the titer (e.g., from 1:40 to 1:640) in the course of the infection, or a conversion from an IgM reaction to an IgG reaction of at least the same titer, would be consistent with a typhoid infection. Widal titers of 1:20 to 1:80 are generally considered within the normal range; higher titers are a cause for concern and warrant medical evaluation.
https://en.wikipedia.org/wiki/Widal_test
Widdershins (sometimes withershins , widershins or widderschynnes ) is a term meaning to go counter-clockwise, anti-clockwise, or lefthandwise, or to walk around an object by always keeping it on the left. Literally, it means to take a course opposite the apparent motion of the sun viewed from the Northern Hemisphere (the face of this imaginary clock is the ground the viewer stands upon). [ 1 ] The earliest recorded use of the word, as cited by the Oxford English Dictionary , is in a 1513 translation of the Aeneid , where it is found in the phrase "Abaisit I wolx, and widdersyns start my hair." In this sense, "widdershins start my hair" means "my hair stood on end". [ 2 ] The use of the word also means "in a direction opposite to the usual" and "in a direction contrary to the apparent course of the sun". It is cognate with the German language widersinnig , i.e., "against" + "sense". The term "widdershins" was especially common in Lowland Scots . [ 2 ] The opposite of widdershins is deosil , or sunwise , meaning "clockwise". Widdershins comes from Middle Low German weddersinnes , literally "against the way" (i.e. "in the opposite direction"), from widersinnen "to go against", from Old High German elements widar "against" and sinnen "to travel, go", related to sind "journey". [ 3 ] [ 4 ] Because the sun played a highly important role in older religions, to go against it was considered bad luck for sun-worshiping traditions. It was considered unlucky in Britain to travel in an anticlockwise (not sunwise ) direction around a church , and a number of folk myths make reference to this superstition ; for example, in the fairy tale Childe Rowland , the protagonist and his sister are transported to Elfland after the sister runs widdershins round a church. There is also a reference to this in Dorothy Sayers 's novels The Nine Tailors (chapter entitled The Second Course; "He turned to his right, knowing that it is unlucky to walk about a church widdershins ...") and Clouds of Witness ("True, O King, and as this isn't a church, there's no harm in going round it widdershins"). In Robert Louis Stevenson 's tale "The Song of the Morrow," an old crone on the beach dances "widdershins". [ 5 ] In the Eastern Orthodox Church [ 6 ] [ 7 ] and the Oriental Orthodox [ 8 ] Churches it is normal for processions around a church to travel in an anticlockwise direction. This remains the case regardless of which hemisphere they are performed in. In Judaism circles are also sometimes walked anticlockwise. For example, when a bride circles her groom seven times before marriage, when dancing around the bimah during Simchat Torah (or when dancing in a circle at any time), or when the Sefer Torah is brought out of the ark (ark is approached from the right, and departed from the left). This has its origins in the Temple in Jerusalem , where in order not to get in each other's way, the priests would walk around the altar anticlockwise while performing their duties. When entering the Beis Hamikdash the people would enter by one gate, and leave by another. [ citation needed ] The resulting direction of motion was anticlockwise. In Judaism, starting things from the right side is considered to be important, since the right side is the side of Chesed (kindness) while the left side is the side of Gevurah (judgment). 
[ citation needed ] For example, there is a Jewish custom recorded in the Shulchan Aruch to put on the right shoe first and take off the left shoe first, [ 9 ] following the example of Mar son of Ravina whom the Talmud records as putting his shoes on in this way. [ 10 ] The Bönpo in the Northern Hemisphere traditionally circumambulate (generally) in a counter-clockwise and 'widdershins' direction, that is to say, a direction that runs counter to the apparent movement of the Sun within the sky from the vantage of the ground. This runs counter to the prevalent directionality of Buddhism (in general) and orthodox Hinduism . This is in keeping with the aspect and directionality of the ' Sauvastika ' (Tibetan: yung-drung ), sacred to the Bönpo. In the Southern Hemisphere , the Bönpo practitioner is required to elect whether the directionality of 'counter-clockwise' ( deosil in the Southern Hemisphere) or running-counter to the direction of the Sun (widdershins in the Southern Hemisphere) is the key intention of the tradition. The resolution to this conundrum is left open to the practitioner, their 'intuitive insight' (Sanskrit: prajna ) and their tradition. [ citation needed ]
https://en.wikipedia.org/wiki/Widdershins
A wide-column store (or extensible record store ) is a type of NoSQL database . [ 1 ] It uses tables, rows, and columns, but unlike a relational database , the names and format of the columns can vary from row to row in the same table. A wide-column store can be interpreted as a two-dimensional key–value store . [ 1 ] Google 's Bigtable is one of the prototypical examples of a wide-column store. [ 2 ] Wide-column stores such as Bigtable and Apache Cassandra are not column stores in the original sense of the term, since their two-level structures do not use a columnar data layout. In genuine column stores, a columnar data layout is adopted such that each column is stored separately on disk. Wide-column stores do often support the notion of column families that are stored separately. However, each such column family typically contains multiple columns that are used together, similar to traditional relational database tables. Within a given column family, all data is stored in a row-by-row fashion, such that the columns for a given row are stored together, rather than each column being stored separately. Wide-column stores that support column families are also known as column family databases . [ citation needed ] Notable wide-column stores [ 3 ] include: This computing article is a stub . You can help Wikipedia by expanding it .
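The two-level structure described above can be illustrated in a few lines: a table maps a row key to column families, and each family maps column names to values, so different rows may carry entirely different columns. The helper names below are hypothetical and stand in for a real store's API; this is a sketch of the data model, not of any particular database's storage engine.

```python
# Wide-column data model as nested maps:
#   row key -> column family -> column name -> value
table = {}

def put(row_key, family, column, value):
    table.setdefault(row_key, {}).setdefault(family, {})[column] = value

def get(row_key, family, column):
    return table.get(row_key, {}).get(family, {}).get(column)

# Rows in the same table need not share the same columns:
put("user:1", "profile", "name", "Ada")
put("user:1", "profile", "email", "ada@example.com")
put("user:2", "profile", "name", "Grace")
put("user:2", "activity", "last_login", "2024-01-01")  # family absent from user:1

print(get("user:2", "activity", "last_login"))  # 2024-01-01
print(get("user:1", "activity", "last_login"))  # None
```

In a genuine column store, by contrast, each column would be laid out separately on disk; here, as in Bigtable-style systems, everything within a column family is effectively stored row by row.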
https://en.wikipedia.org/wiki/Wide-column_store
Wide-field Infrared Survey Explorer ( WISE , observatory code C51, Explorer 92 and MIDEX-6 ) was a NASA infrared astronomy space telescope in the Explorers Program launched in December 2009. [ 2 ] [ 3 ] [ 4 ] WISE discovered thousands of minor planets and numerous star clusters . Its observations also supported the discovery of the first Y-type brown dwarf and the first Earth trojan asteroid . [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] WISE performed an all-sky astronomical survey with images in the 3.4, 4.6, 12 and 22 μm wavelength bands, over ten months, using a 40 cm (16 in) diameter infrared telescope in Earth orbit . [ 11 ] After its solid hydrogen coolant was depleted, it was placed in hibernation mode in February 2011. [ 5 ] In 2013, NASA reactivated the WISE telescope to search for near-Earth objects (NEOs), such as comets and asteroids , that could collide with Earth. [ 12 ] [ 13 ] The reactivation mission was called Near-Earth Object Wide-field Infrared Survey Explorer ( NEOWISE ). [ 13 ] As of August 2023, NEOWISE was 40% through the 20th coverage of the full sky. [ citation needed ] Science operations and data processing for WISE and NEOWISE take place at the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena, California . The WISE All-Sky (WISEA) data, including processed images, source catalogs and raw data, was released to the public on 14 March 2012, and is available at the Infrared Science Archive . [ 14 ] [ 15 ] [ 16 ] The NEOWISE mission was originally expected to end in early 2025, with the satellite reentering the atmosphere some time after. [ 17 ] However, the NEOWISE mission concluded its science survey on 31 July 2024, with the satellite expected to reenter Earth's atmosphere later the same year (2 November 2024). This decision was made due to increased solar activity hastening the decay of its orbit and the lack of an onboard propulsion system for orbital maintenance. The onboard transmitter was turned off on 8 August, marking the formal decommissioning of the spacecraft. [ 18 ] The mission was planned to create infrared images of 99% of the sky, with at least eight images made of each position on the sky in order to increase accuracy. The spacecraft was placed in a 525 km (326 mi), circular, polar, Sun-synchronous orbit for its ten-month mission, during which it took 1.5 million images, one every 11 seconds. [ 19 ] The satellite orbited above the terminator , its telescope always pointing in the direction opposite the Earth (while avoiding pointing at the Moon ) and its solar cells towards the Sun . Each image covers a 47 arcminute field of view (FoV), with an angular resolution of 6 arcseconds . Each area of the sky was scanned at least 10 times at the equator ; the poles were theoretically scanned on every revolution due to the overlapping of the images. [ 20 ] [ 21 ] The produced image library contains data on the local Solar System , the Milky Way , and the more distant Universe . Among the objects WISE studied are asteroids, cool and dim stars such as brown dwarfs , and the most luminous infrared galaxies . WISE was generally not able to detect Kuiper belt objects , because their temperatures are too low; [ 22 ] Pluto is the only Kuiper belt object that was detected. [ 23 ] It was able to detect objects warmer than 70–100 K .
A Neptune -sized object would be detectable out to 700 astronomical units (AU), and a Jupiter-mass object out to 1 light-year (63,000 AU), where it would still be within the Sun's zone of gravitational control . A larger object of 2–3 Jupiter masses would be visible at a distance of up to 7–10 light-years. [ 22 ] At the time of planning, it was estimated that WISE would detect about 300,000 main-belt asteroids , of which approximately 100,000 would be new, and some 700 near-Earth objects (NEOs), including about 300 previously undiscovered. That translates to about 1000 new main-belt asteroids per day, and 1–3 NEOs per day. The peak of the magnitude distribution for NEOs was expected to be about 21–22 V . WISE would detect each typical Solar System object 10–12 times over about 36 hours, in intervals of 3 hours. [ 20 ] [ 21 ] [ needs update ] Star formation , a process where visible light is normally obscured by interstellar dust , is detectable in infrared , since at these wavelengths electromagnetic radiation can penetrate the dust. Infrared measurements from the WISE astronomical survey have been particularly effective at unveiling previously undiscovered star clusters . [ 10 ] Examples of such embedded star clusters are Camargo 18, Camargo 440, Majaess 101, and Majaess 116. [ 24 ] [ 25 ] In addition, galaxies of the young Universe and interacting galaxies, where star formation is intensive, are bright in infrared. At infrared wavelengths, interstellar gas clouds are also detectable, as well as proto-planetary discs. The WISE satellite was expected to find at least 1,000 proto-planetary discs. The WISE satellite bus was built by Ball Aerospace & Technologies in Boulder, Colorado . The spacecraft was derived from the Ball Aerospace & Technologies RS-300 spacecraft architecture, particularly the NEXTSat spacecraft built for the successful Orbital Express mission launched on 9 March 2007. The flight system had an estimated mass of 560 kg (1,230 lb). The spacecraft was three-axis stabilized , with body-fixed solar arrays . It used a high-gain antenna in the Ku-band to transmit to the ground through the geostationary Tracking and Data Relay Satellite System (TDRSS). Ball also performed the testing and flight system integration. [ citation needed ] Construction of the WISE telescope was divided between Ball Aerospace & Technologies (spacecraft, operations support), SSG Precision Optronics, Inc. (telescope, optics, scan mirror), DRS Technologies and Rockwell International (focal planes), Lockheed Martin ( cryostat , cooling for the telescope), and Space Dynamics Laboratory (instruments, electronics, and testing). The program was managed through the Jet Propulsion Laboratory . [ 12 ] The WISE instrument was built by the Space Dynamics Laboratory in Logan, Utah . WISE surveyed the sky in four wavelengths of the infrared band, at a very high sensitivity. Its design specified as goals that the full sky atlas of stacked images it produced have 5-sigma sensitivity limits of 120, 160, 650, and 2600 microjanskies (μJy) at 3.3, 4.7, 12, and 23 μm (aka microns ). [ 26 ] WISE achieved at least 68, 98, 860, and 5400 μJy 5-sigma sensitivity at 3.4, 4.6, 12, and 22 μm for the WISE All-Sky data release. [ 27 ] This is a factor of 1,000 times better sensitivity than the survey completed in 1983 by the IRAS satellite in the 12 and 23 μm bands, and a factor of 500,000 times better than the 1990s survey by the Cosmic Background Explorer (COBE) satellite at 3.3 and 4.7 μm.
[ 26 ] On the other hand, IRAS could also observe 60 and 100 μm wavelengths. [ 28 ] The primary mission lasted 10 months: one month for checkout, six months for a full-sky survey, then an additional three months of survey until the cryogenic coolant (which kept the instruments at 17 K) ran out. The partial second survey pass facilitated the study of changes (e.g. orbital movement) in observed objects. [ 29 ] On 8 November 2007, the House Committee on Science and Technology 's Subcommittee on Space and Aeronautics held a hearing to examine the status of NASA's Near-Earth Object (NEO) survey program. The prospect of using WISE was proposed by NASA officials. [ 30 ] NASA officials told Committee staff that NASA planned to use WISE to detect near-Earth objects in addition to performing its science goals. It was projected that WISE could detect 400 NEOs (or roughly 2% of the estimated NEO population of interest) within its one-year mission. By October 2010, over 33,500 new asteroids and comets had been discovered, and nearly 154,000 Solar System objects had been observed by WISE. [ 31 ] Discovery of an ultra-cool brown dwarf, WISEPC J045853.90+643451.9 , about 10–30 light-years away from Earth, was announced in late 2010 based on early data. [ 32 ] In July 2011, it was announced that WISE had discovered the first Earth trojan asteroid , 2010 TK 7 . [ 33 ] WISE also discovered the third-closest star system, Luhman 16 . As of May 2018, WISE / NEOWISE had also discovered 290 near-Earth objects and comets (see section below) . [ 34 ] The WISE mission is led by Edward L. Wright of the University of California, Los Angeles . The mission has a long history under Wright's efforts and was first funded by NASA in 1999 as a candidate for a NASA Medium-class Explorer (MIDEX) mission under the name Next Generation Sky Survey (NGSS). The history of the program from 1999 to date is briefly summarized as follows: [ citation needed ] The launch of the Delta II launch vehicle carrying the WISE spacecraft was originally scheduled for 11 December 2009. This attempt was scrubbed to correct a problem with a booster rocket steering engine. The launch was then rescheduled for 14 December 2009. [ 46 ] The second attempt launched on time at 14:09:33 UTC from Vandenberg Air Force Base in California . The launch vehicle successfully placed the WISE spacecraft into the planned polar orbit at an altitude of 525 km (326 mi) above the Earth. [ 4 ] WISE avoided the problem that affected Wide Field Infrared Explorer (WIRE), which failed within hours of reaching orbit in March 1999. [ 47 ] In addition, WISE was 1,000 times more sensitive than prior surveys such as IRAS , AKARI , and COBE 's DIRBE . [ 26 ] A month-long checkout after launch found all spacecraft systems functioning normally and both the low- and high-rate data links to the operations center working properly. The instrument cover was successfully jettisoned on 29 December 2009. [ 48 ] A first light image was released on 6 January 2010: an eight-second exposure in the Carina constellation showing infrared light in false color from three of WISE's four wavelength bands: blue, green and red corresponding to 3.4, 4.6, and 12 μm, respectively. [ 49 ] On 14 January 2010, the WISE mission started its official sky survey. [ 50 ] The WISE group's bid for continued funding for an extended "warm mission" was scored low by a NASA review board, in part because of a lack of outside groups publishing on WISE data.
Such a mission would have allowed use of the 3.4 and 4.6 μm detectors after the last of the cryo-coolant had been exhausted, with the goal of completing a second sky survey to detect additional objects and obtain parallax data on putative brown dwarf stars. NASA extended the mission in October 2010 to search for near-Earth objects (NEOs). [ 12 ] By October 2010, over 33,500 new asteroids and comets had been discovered, and over 154,000 Solar System objects had been observed by WISE. [ 31 ] While active it found dozens of previously unknown asteroids every day. [ 51 ] In total, it captured more than 2.7 million images during its primary mission. [ 52 ] In October 2010, NASA extended the mission by one month with a program called Near-Earth Object WISE ( NEOWISE ). [ 12 ] Due to its success, the program was extended a further three months. [ 5 ] The focus was to look for asteroids and comets close to Earth orbit, using the remaining post-cryogenic detection capability (two of WISE's four detectors work without cryogenic cooling). [ 12 ] In February 2011, NASA announced that NEOWISE had discovered many new objects in the Solar System, including twenty comets. [ 53 ] During its primary and extended missions, the spacecraft delivered characterizations of 158,000 minor planets, including more than 35,000 newly discovered objects. [ 54 ] [ 55 ] After completing a full scan of the asteroid belt for the NEOWISE mission, the spacecraft was put into hibernation on 1 February 2011. [ 56 ] The spacecraft was briefly contacted to check its status on 20 September 2012. [ 5 ] On 21 August 2013, NASA announced it would recommission NEOWISE to continue its search for near-Earth objects (NEOs) and potentially dangerous asteroids. It would additionally search for asteroids that a robotic spacecraft could intercept and redirect to orbit the Moon. The extended mission would be for three years at a cost of US$5 million per year, and was brought about in part due to calls for NASA to step up asteroid detection after the Chelyabinsk meteor exploded over Russia in February 2013. [ 13 ] NEOWISE was successfully taken out of hibernation in September 2013. [ 57 ] With its coolant depleted, the spacecraft's temperature was reduced from 200 K (−73 °C; −100 °F), a relatively high temperature resulting from its hibernation, to an operating temperature of 75 K (−198.2 °C; −324.7 °F) by having the telescope stare into deep space. [ 5 ] [ 52 ] Its instruments were then re-calibrated, [ 52 ] and the first post-hibernation photograph was taken on 19 December 2013. [ 57 ] The post-hibernation NEOWISE mission was anticipated to discover 150 previously unknown near-Earth objects and to learn more about the characteristics of 2,000 known asteroids. [ 52 ] [ 58 ] Few objects smaller than 100 m (330 ft) in diameter were detected by NEOWISE's automated detection software, known as the WISE Moving Object Processing Software (WMOPS), because it requires five or more detections to be reported. [ 59 ] The average albedo of asteroids larger than 100 m (330 ft) discovered by NEOWISE is 0.14. [ 59 ] The telescope was turned on again in 2013, and by December 2013 it had cooled down sufficiently to be able to resume observations. [ 60 ] Between then and May 2017, the telescope made almost 640,000 detections of over 26,000 previously known objects, including asteroids and comets. [ 60 ] In addition, it discovered 416 new objects, and about a quarter of those were classified as near-Earth objects.
[ 60 ] As of July 2024, WISE / NEOWISE statistics list a total of 399 near-Earth objects (NEOs), including 2016 WF 9 and C/2016 U 1 , discovered by the spacecraft: [ 34 ] Of the 365 near-Earth asteroids (NEAs), 66 are considered potentially hazardous asteroids (PHAs), a subset of the much larger family of NEOs that is particularly likely to hit Earth and cause significant destruction. [ 34 ] NEOs can be divided into NECs (comets only) and NEAs (asteroids only), and further into subcategories such as Atira asteroids , Aten asteroids , Apollo asteroids , Amor asteroids and the potentially hazardous asteroids (PHAs). [ 61 ] NEOWISE has provided an estimate of the size of over 1,850 near-Earth objects. The NEOWISE mission was extended for two more years (1 July 2021 – 30 June 2023). [ 62 ] As of June 2021 [update] , NEOWISE's replacement, the next-generation NEO Surveyor , was scheduled to launch in 2028 and is expected to greatly expand on what has been learned, and continues to be learned, from NEOWISE. [ 62 ] "As of August 2023 NEOWISE is 40% through the 20th coverage of the full sky since the start of the Reactivation mission." [ 63 ] On 13 December 2023, the Jet Propulsion Laboratory (JPL) announced that the satellite would descend into too low an orbit to remain usable by early 2025. Increased solar activity, as the Sun approached solar maximum during Solar Cycle 25 , was expected to increase atmospheric drag, causing orbital decay . The satellite was expected to subsequently reenter the Earth's atmosphere. [ 17 ] On 8 August 2024, the Jet Propulsion Laboratory updated its estimate of orbital decay to sometime in late 2024 and announced that NEOWISE's science survey had ended on 31 July. [ 18 ] NEOWISE entered and burnt up in the Earth's atmosphere at 8:49 p.m. EDT on 1 November 2024. [ 64 ] On 14 April 2011, a preliminary release of WISE data was made public, covering 57% of the sky observed by the spacecraft. [ 65 ] On 14 March 2012, a new atlas and catalog of the entire infrared sky as imaged by WISE was released to the astronomical community. [ 40 ] On 31 July 2012, the NEOWISE Post-Cryo Preliminary Data was released. [ 5 ] A release called AllWISE, combining all data, was issued on 13 November 2013. [ 66 ] NEOWISE data is released annually. [ 66 ] The WISE data include diameter estimates of intermediate precision: better than estimates based on an assumed albedo, though not nearly as precise as good direct measurements. They are obtained from the combination of reflected light and thermal infrared emission, using a thermal model of the asteroid to estimate both its diameter and its albedo. In May 2016, technologist Nathan Myhrvold questioned the precision of the diameters and claimed systematic errors arising from the spacecraft's design. [ 67 ] [ 68 ] [ 69 ] The original version of his criticism itself faced criticism for its methodology [ 70 ] and did not pass peer review , [ 68 ] [ 71 ] but a revised version was subsequently published. [ 72 ] [ 73 ] The same year, an analysis of 100 asteroids by an independent group of astronomers gave results consistent with the original WISE analysis. [ 73 ] The AllWISE co-added images were intentionally blurred, which is optimal for detecting isolated point sources. This has the disadvantage that many sources are not detected in crowded regions. The unofficial, unblurred coadds of the WISE imaging (unWISE) provide sharp images and mask defects and transients. [ 74 ] unWISE coadded images can be searched by coordinates on the unWISE website.
[ 75 ] unWISE images are used for the citizen science projects Disk Detective and Backyard Worlds . [ 76 ] In 2019, a preliminary catalog called CatWISE was released. This catalog combines the WISE and NEOWISE data and provides photometry at 3.4 and 4.6 μm. It uses the unWISE images and the AllWISE pipeline to detect sources. CatWISE includes fainter sources and far more accurate measurements of the motion of objects. The catalog is used to extend the number of discovered brown dwarfs, especially the cold and faint Y dwarfs. CatWISE is led by the Jet Propulsion Laboratory (JPL), California Institute of Technology , with funding from NASA's Astrophysics Data Analysis Program. [ 77 ] [ 78 ] The CatWISE preliminary catalog can be accessed through the Infrared Science Archive (IRSA). [ 79 ] In addition to numerous comets and minor planets, WISE and NEOWISE discovered many brown dwarfs , some just a few light-years from the Solar System; the first Earth trojan ; and the most luminous galaxies in the universe. Nearby stars discovered using WISE within 30 light-years: The nearest brown dwarfs discovered by WISE within 20 light-years include: Before the discovery of Luhman 16 in 2013, WISE 1506+7027 , at a distance of 11.1 +2.3 −1.3 light-years, was suspected to be the closest brown dwarf on the list of nearest stars (also see § Map with nearby WISE stars ) . [ 81 ] Directly imaged exoplanets were also first detected with WISE. Under the IAU working definition of an exoplanet (as of 2018), which requires M planet ≤ 13 M J and M planet / M central < 0.04006 (where M min and M max denote the lower and upper mass limits of the planet in Jupiter masses), the reported WISE-detected companions vary: one has M max = 7.8 M J (< 13) with M max / M central = 0.02 (< 0.04); another has M max = 20 M J (> 13) with M max / M central = 0.019 (< 0.04); one (0558 B) has M planet / M central = 0.009 (< 0.04); and for one the mass ratio is undetermined. The sensitivity of WISE in the infrared enabled the discovery of disks around young stars and old white dwarf systems. These discoveries usually require a combination of optical, near-infrared and WISE or Spitzer mid-infrared observations. Examples are the red dwarf WISE J080822.18-644357.3 , the brown dwarf WISEA J120037.79-784508.3 and the white dwarf LSPM J0207+3331 . The NASA citizen science project Disk Detective uses WISE data. Additionally, researchers used NEOWISE to discover erupting young stellar objects . [ 86 ] Researchers also discovered a few nebulae using WISE, such as the type Iax supernova remnant Pa 30 . Nebulae around the massive B-type stars BD+60° 2668 and ALS 19653 , [ 87 ] an obscured shell around the Wolf-Rayet star WR 35 , [ 88 ] and a halo around the Helix Nebula , a planetary nebula, [ 89 ] were also discovered with WISE. Active galactic nuclei (AGN) can be identified from their mid-infrared color. For example, one work used a combination of Gaia and unWISE data to identify AGNs. [ 90 ] Luminous infrared galaxies can be detected in the infrared; one study used SDSS and WISE to identify such galaxies. [ 91 ] NEOWISE observed the entire sky for more than 10 years and can be used to find transient events . Some of the discovered transients are tidal disruption events (TDEs) in galaxies [ 92 ] and infrared detections of supernovae similar to SN 2010jl . WISE is credited with discovering 3,088 numbered minor planets. [ 93 ] Examples of the mission's numbered minor planet discoveries include: On 27 March 2020, the comet C/2020 F3 (NEOWISE) was discovered by the WISE spacecraft. It eventually became a naked-eye comet and was widely photographed by professional and amateur astronomers.
It was the brightest comet visible in the northern hemisphere since comet Hale-Bopp in 1997.
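The diameter estimates discussed above couple an asteroid's absolute magnitude H, its geometric albedo p_V, and its diameter D. A common form of that relation is D(km) = 1329 / sqrt(p_V) × 10^(−H/5); NEOWISE's actual pipeline fits a full thermal model, so the sketch below (with example values chosen for illustration) only shows the underlying scaling.

```python
# Why albedo matters for WISE/NEOWISE diameters: at fixed absolute magnitude
# H, a darker asteroid must be larger. Standard relation; values illustrative.
import math

def diameter_km(H: float, p_V: float) -> float:
    """Asteroid diameter in km from absolute magnitude H and geometric albedo p_V."""
    return 1329.0 / math.sqrt(p_V) * 10.0 ** (-H / 5.0)

H = 18.0
for p_V in (0.05, 0.14, 0.25):  # dark, NEOWISE-average (see above), bright
    print(f"p_V = {p_V:.2f}: D = {diameter_km(H, p_V):.2f} km")
# p_V = 0.05: D = 1.49 km; p_V = 0.14: D = 0.89 km; p_V = 0.25: D = 0.67 km
```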
https://en.wikipedia.org/wiki/Wide-field_Infrared_Survey_Explorer
Wide-field multiphoton microscopy [ 2 ] [ 3 ] [ 4 ] [ 5 ] refers to an optical non-linear imaging technique tailored for ultrafast imaging in which a large area of the object is illuminated and imaged without the need for scanning. High intensities are required to induce non-linear optical processes such as two-photon fluorescence or second harmonic generation . In scanning multiphoton microscopes the high intensities are achieved by tightly focusing the light, and the image is obtained by beam scanning. In wide-field multiphoton microscopy the high intensities are best achieved using an optically amplified pulsed laser source to attain a large field of view (~100 μm). [ 2 ] [ 3 ] [ 4 ] The image in this case is obtained as a single frame with a CCD without the need for scanning, making the technique particularly useful for visualizing dynamic processes simultaneously across the object of interest. With wide-field multiphoton microscopy the frame rate can be increased up to 1000-fold compared to multiphoton scanning microscopy . [ 3 ] Wide-field multiphoton microscopes are not yet commercially available, but working prototypes exist in several optics laboratories. The main characteristic of the technique is the illumination of a wide area on the sample with a pulsed laser beam. In nonlinear optics the number of nonlinear photons (N) generated by a pulsed beam per (illuminating) area per second is proportional to [ 6 ] [ 7 ] {\displaystyle N\varpropto {\frac {E^{2}}{\tau A}}f} , where E is the energy of the beam in joules, τ is the duration of the pulse in seconds, A is the illuminating area in square meters, and f is the repetition rate of the pulsed beam in hertz. Increasing the illumination area thus reduces the number of generated nonlinear photons unless the energy is increased. Optical damage depends on the energy density, i.e. the peak intensity I p = E /( τA ). Therefore, both the area and the energy can easily be increased without the risk of optical damage if the peak intensity is kept low, and yet a gain in the number of generated nonlinear photons can be obtained because of the quadratic dependence. For example, increasing both the area and the energy 1000-fold leaves the peak intensity unchanged but increases the number of generated nonlinear photons 1000-fold. These 1000 extra photons are, however, generated over a larger area. In imaging this means that the extra photons are spread over the image, which at first might not seem an advantage over multiphoton scanning microscopy. The advantage, however, becomes evident when the size of the image and the scanning time are considered. [ 3 ] The number of nonlinear photons per image frame per second generated by a wide-field multiphoton microscope compared to a scanning multiphoton microscope is given by [ 3 ] {\displaystyle {\frac {N_{\mathrm {wide-field} }}{N_{\mathrm {scanning} }}}=n{\frac {f_{\mathrm {wide-field} }}{f_{\mathrm {scanning} }}}} , when assuming that the same peak intensity is used in both systems. Here n is the number of scanning points such that {\textstyle A_{\mathrm {wide-field} }=nA_{\mathrm {scanning} }} . There is the technical difficulty of achieving a large illumination area without destroying the imaging optics.
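The scaling argument above can be written out explicitly (a restatement in the notation already introduced, not additional physics):

```latex
% Peak intensity is unchanged when pulse energy and illuminated area are both
% scaled 1000-fold:
I_p' = \frac{1000\,E}{\tau\,(1000\,A)} = \frac{E}{\tau A} = I_p ,
% while the nonlinear photon yield N \propto E^{2} f /(\tau A) grows 1000-fold:
N' \propto \frac{(1000\,E)^{2}}{\tau\,(1000\,A)}\,f = 1000\,\frac{E^{2}}{\tau A}\,f = 1000\,N .
```

Achieving this in practice runs into the optics-damage constraint just mentioned, which the approaches described next address.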
One approach is so-called spatiotemporal focusing, [ 4 ] [ 5 ] in which the pulsed beam is spatially dispersed by a diffraction grating , forming a 'rainbow' beam that is subsequently focused by an objective lens. [ 5 ] The effect of focusing the 'rainbow' beam while imaging the diffraction grating forces the different wavelengths to overlap at the focal plane of the objective lens. The different wavelengths then interfere only in the overlap volume, provided no further spatial or temporal dispersion is introduced, so that the intense pulsed illumination is recovered and capable of yielding cross-sectioned images. The axial resolution is typically 2–3 μm, [ 4 ] [ 5 ] even with structured illumination techniques. [ 10 ] [ 11 ] The spatial dispersion generated by the diffraction grating ensures that the energy in the laser is spread over a wider area in the objective lens, hence reducing the possibility of damaging the lens itself. In contrast to what was initially thought, temporal focusing is remarkably robust to scattering. [ 12 ] Its ability to penetrate through turbid media with minimal speckle was used in optogenetics , enabling photo-excitation of arbitrary light patterns through tissue. [ 12 ] Temporal focusing was later combined with single-pixel detection to overcome the effect of scattering on fluorescent photons. [ 13 ] This technique, called TRAFIX , enabled wide-field imaging through biological tissue at great depths with a higher signal-to-background ratio and lower photobleaching than standard point-scanning two-photon microscopy . [ 13 ] Another, simpler method consists of two beams that are loosely focused and overlapped onto an area (~100 μm) on the sample. [ 2 ] [ 3 ] With this method it is possible to access all the elements of the {\displaystyle \chi ^{(2)}} tensor, thanks to the ability to change the polarisation of each beam independently.
https://en.wikipedia.org/wiki/Wide-field_multiphoton_microscopy
The Wide Area Augmentation System ( WAAS ) is an air navigation aid developed by the Federal Aviation Administration to augment the Global Positioning System (GPS), with the goal of improving its accuracy, integrity, and availability. Essentially, WAAS is intended to enable aircraft to rely on GPS for all phases of flight, including approaches with vertical guidance to any airport within its coverage area. In critical areas it may be further enhanced with the local-area augmentation system (LAAS), also known by the preferred ICAO term ground-based augmentation system (GBAS). WAAS uses a network of ground-based reference stations, in North America and Hawaii , to measure small variations in the GPS satellites' signals in the western hemisphere . Measurements from the reference stations are routed to master stations, which queue the received deviation correction (DC) and send the correction messages to geostationary WAAS satellites in a timely manner (every 5 seconds or better). Those satellites broadcast the correction messages back to Earth, where WAAS-enabled GPS receivers use the corrections while computing their positions to improve accuracy. The International Civil Aviation Organization (ICAO) calls this type of system a satellite-based augmentation system (SBAS). Europe and Asia are developing their own SBASs: the Indian GPS-aided GEO augmented navigation (GAGAN), the European Geostationary Navigation Overlay Service (EGNOS), the Japanese Multi-functional Satellite Augmentation System (MSAS) and the Russian System for Differential Corrections and Monitoring (SDCM). Commercial systems include StarFire , OmniSTAR , and Atlas . A primary goal of WAAS was to allow aircraft to make a Category I approach without any equipment being installed at the airport. This would allow new GPS-based instrument landing approaches to be developed for any airport, even ones without any ground equipment. A Category I approach requires an accuracy of 16 metres (52 ft) laterally and 4.0 metres (13.1 ft) vertically. [ 2 ] To meet this goal, the WAAS specification requires it to provide a position accuracy of 7.6 metres (25 ft) or less (for both lateral and vertical measurements), at least 95% of the time. [ 3 ] Actual performance measurements of the system at specific locations have shown it typically provides better than 1.0 metre (3 ft 3 in) laterally and 1.5 metres (4 ft 11 in) vertically throughout most of the contiguous United States and large parts of Canada and Alaska . [ 1 ] Integrity of a navigation system includes the ability to provide timely warnings when its signal is providing misleading data that could potentially create hazards. The WAAS specification requires the system to detect errors in the GPS or WAAS network and notify users within 6.2 seconds. [ 3 ] Certifying that WAAS is safe for instrument flight rules (IFR) (i.e. flying in the clouds) requires proving there is only an extremely small probability that an error exceeding the requirements for accuracy will go undetected. Specifically, the probability is stated as 1×10 −7 , which is equivalent to no more than 3 seconds of bad data per year. This provides integrity information equivalent to or better than receiver autonomous integrity monitoring (RAIM). [ 4 ] Availability is the probability that a navigation system meets the accuracy and integrity requirements. Before the advent of WAAS, GPS specifications allowed for system unavailability for as much as a total time of four days per year (99% availability).
[ citation needed ] The WAAS specification mandates availability of 99.999% ( five nines ) throughout the service area, equivalent to a downtime of just over 5 minutes per year. [ 3 ] [ 4 ] WAAS is composed of three main segments: the ground segment , space segment , and user segment. The ground segment is composed of multiple wide-area reference stations (WRS). These precisely surveyed ground stations monitor and collect information on the GPS signals, then send their data to three wide-area master stations (WMS) using a terrestrial communications network. The reference stations also monitor signals from WAAS geostationary satellites, providing integrity information regarding them as well. As of October 2007 there were 38 WRSs: twenty in the contiguous United States (CONUS), seven in Alaska, one in Hawaii, one in Puerto Rico, five in Mexico, and four in Canada. [ 5 ] [ 6 ] Using the data from the WRS sites, the WMSs generate two different sets of corrections: fast and slow. The fast corrections are for errors which are changing rapidly and primarily concern the GPS satellites' instantaneous positions and clock errors. These corrections are considered user position-independent, which means they can be applied instantly by any receiver inside the WAAS broadcast footprint . The slow corrections include long-term ephemeris and clock error estimates, as well as ionospheric delay information. WAAS supplies delay corrections for a number of points (organized in a grid pattern) across the WAAS service area [ 7 ] (see user segment below to understand how these corrections are used). Once these correction messages are generated, the WMSs send them to two pairs of ground uplink stations (GUS), which then transmit to satellites in the space segment for rebroadcast to the user segment. [ 8 ] Each FAA Air Route Traffic Control Center in the 50 states has a WAAS reference station, except for Indianapolis . There are also stations positioned in Canada, Mexico and Puerto Rico. [ 7 ] See List of WAAS reference stations for the coordinates of the individual receiving antennas. [ 9 ] The space segment consists of multiple communication satellites which broadcast the correction messages generated by the WAAS master stations for reception by the user segment. The satellites also broadcast the same type of range information as normal GPS satellites, effectively increasing the number of satellites available for a position fix. The space segment currently consists of three commercial satellites: Eutelsat 117 West B , SES-15 , and Galaxy 30 . [ 10 ] [ 11 ] [ 12 ] The original two WAAS satellites, named Pacific Ocean Region (POR) and Atlantic Ocean Region-West (AOR-W), used leased space on Inmarsat III satellites. These satellites ceased WAAS transmissions on July 31, 2007. With the end of the Inmarsat lease approaching, two new satellites ( Galaxy 15 and Anik F1R ) were launched in late 2005. Galaxy 15 is operated by PanAmSat and Anik F1R by Telesat . As with the previous satellites, these are leased services under the FAA's Geostationary Satellite Communications Control Segment contract with Lockheed Martin , which was contracted to provide up to three satellites through the year 2016. [ 13 ] A third satellite was later added to the system. From March to November 2010, the FAA broadcast a WAAS test signal on a leased transponder on the Inmarsat-4 F3 satellite.
[14] The test signal was not usable for navigation, but could be received and was reported with the identification numbers PRN 133 (NMEA #46). In November 2010, the signal was certified as operational and made available for navigation. [15] Following in-orbit testing, Eutelsat 117 West B, broadcasting a signal on PRN 131 (NMEA #44), was certified as operational and made available for navigation on March 27, 2018. The SES 15 satellite was launched on May 18, 2017, and following an in-orbit test of several months, was set operational on July 15, 2019. In 2018, a contract was awarded to place a WAAS L-band payload on the Galaxy 30 satellite. The satellite was successfully launched on August 15, 2020, and the WAAS transmissions were set operational on April 26, 2022, re-using PRN 135 (NMEA #48). [16][17] After approximately three weeks with four active WAAS satellites, operational WAAS transmissions on Anik F1-R were ended on May 17, 2022. [17] For these satellites, PRN is the satellite's actual pseudo-random noise code number, and NMEA is the satellite number sent by some receivers when outputting satellite information (NMEA = PRN − 87). The user segment is the GPS and WAAS receiver, which uses the information broadcast from each GPS satellite to determine its location and the current time, and receives the WAAS corrections from the space segment. The two types of correction messages received (fast and slow) are used in different ways. The GPS receiver can immediately apply the fast type of correction data, which includes the corrected satellite position and clock data, and determines its current location using normal GPS calculations. Once an approximate position fix is obtained, the receiver begins to use the slow corrections to improve its accuracy. Among the slow correction data is the ionospheric delay. As the GPS signal travels from the satellite to the receiver, it passes through the ionosphere. The receiver calculates the location where the signal pierced the ionosphere and, if it has received an ionospheric delay value for that location, corrects for the error the ionosphere created. While the slow data can be updated every minute if necessary, ephemeris errors and ionosphere errors do not change this frequently, so they are only updated every two minutes and are considered valid for up to six minutes. [20] The WAAS was jointly developed by the United States Department of Transportation (DOT) and the Federal Aviation Administration (FAA) as part of the Federal Radionavigation Program (DOT-VNTSC-RSPA-95-1/DOD-4650.5), beginning in 1994, to provide performance comparable to a Category I instrument landing system (ILS) for all aircraft possessing the appropriately certified equipment. [7] Without WAAS, ionospheric disturbances, clock drift, and satellite orbit errors create too much error and uncertainty in the GPS signal to meet the requirements for a precision approach (see GPS sources of error). A precision approach includes altitude information and provides course guidance, distance from the runway, and elevation information at all points along the approach, usually down to lower altitudes and weather minimums than non-precision approaches. Prior to WAAS, the U.S. National Airspace System (NAS) did not have the ability to provide lateral and vertical navigation for precision approaches for all users at all locations.
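As a rough illustration of the user-segment processing just described, the sketch below bilinearly interpolates a grid of ionospheric delay values at a receiver's pierce point, and converts a WAAS satellite PRN to the NMEA satellite number using the NMEA = PRN − 87 rule quoted above. The 5-degree grid spacing is representative rather than exact, and the delay values are made-up placeholders, not real WAAS data.

```python
# Sketch: bilinear interpolation of ionospheric grid delays, loosely following
# the WAAS approach of broadcasting vertical delays on a grid of points.
# Grid values below are illustrative placeholders, not real WAAS data.

def nmea_sat_number(prn: int) -> int:
    """Convert a WAAS satellite PRN to the NMEA satellite number (NMEA = PRN - 87)."""
    return prn - 87

def interpolate_delay(lat, lon, grid, spacing=5.0):
    """Bilinearly interpolate a vertical ionospheric delay (metres) at the
    pierce point (lat, lon) from the four surrounding grid corners.
    `grid` maps (lat, lon) of grid corners to delay values in metres."""
    lat0 = spacing * (lat // spacing)
    lon0 = spacing * (lon // spacing)
    x = (lon - lon0) / spacing          # fractional position inside the cell
    y = (lat - lat0) / spacing
    d00 = grid[(lat0, lon0)]
    d01 = grid[(lat0, lon0 + spacing)]
    d10 = grid[(lat0 + spacing, lon0)]
    d11 = grid[(lat0 + spacing, lon0 + spacing)]
    return (d00 * (1 - x) * (1 - y) + d01 * x * (1 - y)
            + d10 * (1 - x) * y + d11 * x * y)

grid = {(35.0, -120.0): 2.1, (35.0, -115.0): 2.4,   # placeholder delays, metres
        (40.0, -120.0): 1.8, (40.0, -115.0): 2.0}
print(nmea_sat_number(135))                         # -> 48, matching "PRN 135 (NMEA #48)"
print(round(interpolate_delay(37.0, -118.5, grid), 2))  # -> 2.06
```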
The traditional system for precision approaches is the instrument landing system (ILS), which uses a series of radio transmitters, each broadcasting a single signal to the aircraft. This complex series of radios needs to be installed at every runway end, some offsite, along a line extended from the runway centerline, making the implementation of a precision approach both difficult and very expensive. The ILS system is composed of 180 different transmitting antennas at each site where it is built. For some time the FAA and NASA developed a much improved system, the microwave landing system (MLS). The entire MLS system for a particular approach was isolated in one or two boxes located beside the runway, dramatically reducing the cost of implementation. MLS also offered a number of practical advantages that eased traffic considerations, both for aircraft and radio channels. Unfortunately, MLS would also require every airport and aircraft to upgrade their equipment. During the development of MLS, consumer GPS receivers of varying quality started appearing. GPS offered a huge number of advantages to the pilot, combining all of an aircraft's long-distance navigation systems into a single easy-to-use system, often small enough to be hand-held. Deploying an aircraft navigation system based on GPS was largely a problem of developing new techniques and standards, as opposed to new equipment. The FAA started planning to shut down its existing long-distance systems (VOR and NDBs) in favor of GPS. This left the problem of approaches, however. GPS is simply not accurate enough to replace ILS systems. Typical accuracy is about 15 metres (49 ft), whereas even a "CAT I" approach, the least demanding, requires a vertical accuracy of 4 metres (13 ft). This inaccuracy in GPS is mostly due to large "billows" in the ionosphere, which slow the radio signal from the satellites by a random amount. Since GPS relies on timing the signals to measure distances, this slowing of the signal makes the satellite appear farther away than it is. The billows move slowly, and can be characterized using a variety of methods from the ground, or by examining the GPS signals themselves. By broadcasting this information to GPS receivers every minute or so, this source of error can be significantly reduced. This led to the concept of Differential GPS (DGPS), which used separate radio systems to broadcast the correction signal to receivers. Aircraft could then install a receiver which would be plugged into the GPS unit, the signal being broadcast on a variety of frequencies for different users (FM radio for cars, longwave for ships, etc.). Broadcasters of the required power generally cluster around larger cities, making such DGPS systems less useful for wide-area navigation. Additionally, most radio signals are either line-of-sight or can be distorted by the ground, which made DGPS difficult to use as a precision approach system or when flying low for other reasons. The FAA considered systems that could allow the same correction signals to be broadcast over a much wider area, such as from a satellite, leading directly to WAAS. Since a GPS unit is already a satellite receiver, it made much more sense to send out the correction signals on the same frequencies used by GPS units than to use an entirely separate system and thereby double the probability of failure.
In addition to lowering implementation costs by "piggybacking" on a planned satellite launch, this also allowed the signal to be broadcast from geostationary orbit, which meant a small number of satellites could cover all of North America. On July 10, 2003, the WAAS signal was activated for general aviation, covering 95% of the United States and portions of Alaska, offering 350-foot (110 m) minimums. On January 17, 2008, Alabama-based Hickok & Associates became the first designer of helicopter WAAS approaches with Localizer Performance (LP) and Localizer Performance with Vertical guidance (LPV), and the only entity with FAA-approved criteria (which even the FAA has yet to develop). [21][22][23] These helicopter WAAS criteria offer minimums as low as 250 feet and decreased visibility requirements, enabling missions previously not possible. On April 1, 2009, FAA AFS-400 approved the first three helicopter WAAS GPS approach procedures for Hickok & Associates' customer California Shock/Trauma Air Rescue (CALSTAR). Since then they have designed many approved WAAS helicopter approaches for various EMS hospitals and air providers, within the United States as well as in other countries and continents. On December 30, 2009, Seattle-based Horizon Air flew the first scheduled-passenger service flight [24] using WAAS with LPV on flight 2014, a Portland to Seattle flight operated by a Bombardier Q400 with a WAAS FMS from Universal Avionics. The airline, in partnership with the FAA, will outfit seven Q400 aircraft with WAAS and share flight data to better determine the suitability of WAAS in scheduled air service applications. [needs update] A timeline of the Wide-Area Augmentation System's development is given in the cited source. [25] WAAS addresses all of the "navigation problem", providing highly accurate positioning that is extremely easy to use, for the cost of a single receiver installed on the aircraft. Ground- and space-based infrastructure is relatively limited, and no on-airport system is needed. WAAS allows a precision approach to be published for any airport, for the cost of developing the procedures and publishing the new approach plates. This means that almost any airport can have a precision approach, and the cost of implementation is drastically reduced. Additionally, WAAS works just as well between airports. This allows aircraft to fly directly from one airport to another, as opposed to following routes based on ground-based signals. This can cut route distances considerably in some cases, saving both time and fuel. In addition, because of its ability to provide information on the accuracy of each GPS satellite's signal, aircraft equipped with WAAS are permitted to fly at lower en-route altitudes than was possible with ground-based systems, which were often blocked by terrain of varying elevation. This enables pilots to safely fly at lower altitudes, not having to rely on ground-based systems. For unpressurized aircraft, this conserves oxygen and enhances safety. The above benefits not only create convenience, but also have the potential to generate significant cost savings. The cost to provide the WAAS signal, serving all 5,400 public-use airports, is just under US$50 million per year. In comparison, the current ground-based systems such as the instrument landing system (ILS), installed at only 600 airports, cost US$82 million in annual maintenance.
[citation needed] Without ground navigation hardware to purchase, the total cost of publishing a runway's WAAS approach is approximately US$50,000, compared to the $1,000,000 to $1,500,000 cost to install an ILS radio system. [27] For all its benefits, WAAS is not without drawbacks and critical limitations. In 2007, WAAS vertical guidance was projected to be available nearly all the time (greater than 99%), and its coverage encompasses the full continental U.S., most of Alaska, northern Mexico, and southern Canada. [30] At that time, the accuracy of WAAS would meet or exceed the requirements for Category I ILS approaches, namely, three-dimensional position information down to 200 feet (60 m) above touchdown zone elevation. [2] Software improvements, to be implemented by September 2008, significantly improve signal availability of vertical guidance throughout the CONUS and Alaska. The area covered by the 95%-available LPV solution in Alaska improves from 62% to 86%, and in the CONUS, the 100%-availability LPV-200 coverage rises from 48% to 84%, with 100% coverage of the LPV solution. [6] Both Galaxy XV (PRN #135) and Anik F1R (PRN #138) contain an L1 & L5 GPS payload. This means they will potentially be usable with the L5 modernized GPS signals when the new signals and receivers become available. With L5, avionics will be able to use a combination of signals to provide the most accurate service possible, thereby increasing availability of the service. These avionics systems will use ionospheric corrections broadcast by WAAS, or self-generated onboard dual-frequency corrections, depending on which one is more accurate. [31]
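The availability figures quoted earlier in this article are easy to sanity-check. Below is a minimal sketch converting an availability percentage into expected downtime per year; it reproduces both the roughly four days per year allowed at 99% availability and the five-and-a-quarter minutes implied by the five-nines WAAS requirement.

```python
# Convert an availability fraction into expected downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_per_year(availability: float) -> float:
    """Return expected downtime in minutes per year for a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(downtime_per_year(0.99) / (24 * 60))   # ~3.65 days/year at 99% availability
print(downtime_per_year(0.99999))            # ~5.26 minutes/year at 99.999%
```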
https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System
Wide Area GPS Enhancement (WAGE) is a method to increase the horizontal accuracy of the GPS encrypted P(Y) code by adding additional range-correction data to the satellite broadcast navigation message. Per a 1997 article, [citation needed] the navigation message for each satellite is updated once daily or as needed. This daily update of each satellite navigation message contains the range corrections for all the satellites in the constellation. Thus, more timely range-correction information would be available for each satellite, resulting in increased horizontal accuracy. Potential improvements to the system include simplifying the upload procedure, uploading the data more often, and adding more monitor stations for better range correction. WAGE is available only to Precise Positioning Service (PPS) or P(Y)-code receivers. It requires at least 12.5 minutes [citation needed] to obtain the most recent WAGE data. After that, the process of using the correction data is automatic and transparent to the operator. Any time the receiver is on, it continually collects WAGE data (whether the WAGE mode is on or off). The receiver always uses the most recent WAGE data available to calculate position, and it will not use data that is more than 6 hours old. [citation needed] A 1996 evaluation using a PLGR (a 5-channel L2 GPS receiver) found no clear advantage to using WAGE in its then-current configuration: its overall average error of 9.1 meters was worse than when WAGE was not used. [citation needed] However, the specifications for the Defense Advanced GPS Receiver, which has replaced the PLGR, list its WAGE accuracy as better than 4.82 m, 95% horizontal. [citation needed] PPS accuracy has improved beyond the WAGE specification, and the accuracy improvement from WAGE is now negligible. [citation needed] Modern receivers and chip-scale atomic clocks will also outperform WAGE. Some theorize that the restrictions imposed by WAGE may cost C/A, P(Y), and WAGE users more precision than WAGE provides to its own users. [citation needed] The capability of WAGE has been superseded by Talon NAMATH. [citation needed] There is a push for WAGE users to upgrade to Talon NAMATH or to move to using P(Y) alone. This could lift WAGE restrictions and allow accuracy improvements for all users. [citation needed] This article about geography terminology is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Wide_Area_GPS_Enhancement
Cisco Wide Area Application Services (WAAS) is technology developed by Cisco Systems that optimizes the performance of any TCP-based application operating in a wide area network (WAN) environment while preserving and strengthening branch security. WAAS combines WAN optimization, acceleration of TCP-based applications, and Cisco's Wide Area File Services (WAFS) in a single appliance or blade. It is Cisco's attempt to keep WAN optimization residing firmly in the router, eliminating the need to deploy acceleration appliances throughout the infrastructure. The technology preserves TCP information within the network while offering the performance benefits that come with using WAN optimization technology. WAN optimization appliances have traditionally limited IT when it comes to maintaining functions such as security, quality of service, visibility, and monitoring of end-to-end transactions, because they tend to cause problems for most network monitoring devices and tools. By design, WAN optimization "confuses" performance monitoring systems by changing packet header data. [1][2] Cisco's latest WAAS software release, announced at the 2007 Cisco Networkers conference, was promoted as the industry's first solution for both end-to-end monitoring and acceleration of application traffic.
https://en.wikipedia.org/wiki/Wide_area_application_services
The Wideband Networking Waveform (WNW) is a military radio protocol for mobile ad hoc networks (MANETs) on software-defined radios. [1] It was developed as part of the Joint Tactical Radio System (JTRS) program of the U.S. Department of Defense, and was intended for US and NATO military use. The WNW waveform uses an OFDM physical layer [2] with variable frequency usage to best utilize the available bandwidth. [1] The waveform uses the Software Communications Architecture (SCA) and has NSA-approved security. [3] There is also a related COALWNW waveform for use by coalition partners. [1] This article related to telecommunications is a stub. You can help Wikipedia by expanding it. This mobile technology related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Wideband_Networking_Waveform
Wideband material refers to material that can convey microwave signals over a wide range of wavelengths. These materials possess favourable attenuation and dielectric constants, and are excellent dielectrics for semiconductor gates. Examples of such materials include gallium nitride (GaN) and silicon carbide (SiC). SiC has been used extensively in the creation of lasers for several years. However, it performs poorly (providing limited brightness) because it has an indirect band gap. GaN has a wide band gap (~3.4 eV), which generally results in high photon energies from structures in which electrons populate the conduction band. This condensed matter physics-related article is a stub. You can help Wikipedia by expanding it. This article about materials science is a stub. You can help Wikipedia by expanding it.
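As a quick numerical illustration of what a ~3.4 eV band gap implies, the following sketch converts a band-gap energy into the wavelength of band-to-band photon emission via λ = hc/E. The conversion itself is standard physics; only the choice of example value is ours. The GaN result lands in the near-ultraviolet, consistent with its use in blue and UV emitters.

```python
# Convert a semiconductor band-gap energy (eV) to the photon wavelength (nm)
# of band-to-band emission, using lambda = h*c / E.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def gap_to_wavelength_nm(gap_ev: float) -> float:
    return H * C / (gap_ev * EV) * 1e9

print(round(gap_to_wavelength_nm(3.4), 1))   # GaN: ~364.7 nm (near-UV)
```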
https://en.wikipedia.org/wiki/Wideband_materials
Widespread fatigue damage (WFD) in a structure is characterised by the simultaneous presence of fatigue cracks at multiple points that are of sufficient size and density that, while each crack may individually be acceptable, the cracks could suddenly link up and the structure could fail. [1] For example, small fatigue cracks developing along a row of fastener holes can coalesce, increasing the stress on adjacent cracked sites and thus the rate of growth of those cracks. The objective of a designer is to determine when large numbers of small cracks could degrade the joint strength to an unacceptable level. [2] The in-flight loss of part of the fuselage from Aloha Airlines Flight 243 was attributed to multi-site fatigue damage. Several factors can influence the occurrence of WFD, including design issues and probabilistic parameters such as manufacturing quality and operating environment. WFD is divided into two categories: multiple-site damage (MSD), the simultaneous presence of fatigue cracks in the same structural element, and multiple-element damage (MED), the simultaneous presence of fatigue cracks in similar adjacent structural elements. To manage WFD, a parameter called the Limit Of Validity (LOV) is defined, [1] as "the period of time (in flight cycles, hours or both) up to which WFD will not occur in aeroplane structure."
https://en.wikipedia.org/wiki/Widespread_fatigue_damage
A widget is a device placed in a container of beer to manage the characteristics of the beer's head. The original widget was patented in Ireland by Guinness. The "floating widget" is found in cans of beer as a hollow plastic sphere, approximately 3 centimetres (1.2 in) in diameter (similar in appearance to a table tennis ball, but smaller), with two small holes and a seam. The "rocket widget" is found in bottles, 7 centimetres (2.8 in) in length, with a small hole at the bottom. [1] Draught Guinness, as it is known today, was first produced in 1959. With Guinness keen to produce draught beer packaged for consumers to drink at home, Bottled Draught Guinness was formulated in 1978 and launched into the Irish market in 1979. It was never actively marketed internationally as it required an "initiator" device, which looked rather like a syringe, to make it work. Some canned beers are pressurized by adding liquid nitrogen, which vaporises and expands in volume after the can is sealed, forcing gas and beer into the widget's hollow interior through a tiny hole; the less beer inside the widget, the better the subsequent head quality. In addition, some nitrogen dissolves in the beer, which also contains dissolved carbon dioxide. Oxygen is generally excluded, as its presence can cause flavour deterioration. The presence of dissolved nitrogen allows smaller bubbles to be formed, which increases the creaminess of the head. This is because smaller bubbles need a higher internal pressure to balance the effect of surface tension; the excess pressure inside a bubble is inversely proportional to its radius. Achieving this higher pressure would not be possible with just dissolved carbon dioxide, as the greater solubility of this gas compared to nitrogen would create an unacceptably large head. When the can is opened, the pressure in the can quickly drops, causing the pressurised gas and beer inside the widget to jet out from the hole. This agitates the surrounding beer, creating a chain reaction of bubble formation throughout the beer. The result, when the can's contents are then poured, is a surging mixture in the glass of very small gas bubbles and liquid. The same occurs with certain types of draught beer, such as draught stouts. In the case of these draught beers, which before dispensing also contain a mixture of dissolved nitrogen and carbon dioxide, the agitation is caused by forcing the beer under pressure through small holes in a restrictor in the tap. The surging mixture gradually settles to produce a very creamy head. In 1969, two Guinness brewers at Guinness's St James's Gate Brewery in Dublin, Tony Carey and Sammy Hildebrand, developed a system for producing draught-type Guinness from cans or bottles through the discharge of gas from an internal compartment. It was patented in British Patent No 1266351, filed 27 January 1969, with a complete specification published 8 March 1972. Development work on a can system under Project ACORN (Advanced Cans Of Rich Nectar) focused on an arrangement whereby a false lid underneath the main lid formed the gas chamber. Technical difficulties led to this approach being put on hold, and Guinness instead concentrated on bottles using external initiators. Subsequently, Guinness allowed this patent to lapse, and it was not until Ernest Saunders centralised the company's research and development in 1984 that work restarted on this invention, under the direction of Alan Forage.
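The inverse relationship between bubble size and internal pressure mentioned above follows from the Young–Laplace equation, ΔP = 2γ/r. Below is a minimal sketch, assuming a surface tension of about 0.04 N/m as a rough literature-range value for beer, not a measured Guinness figure.

```python
# Young-Laplace excess pressure inside a spherical bubble: dP = 2*gamma/r.
GAMMA_BEER = 0.04   # surface tension in N/m; rough assumed value for beer

def laplace_pressure(radius_m: float) -> float:
    """Excess pressure (Pa) inside a bubble of the given radius."""
    return 2 * GAMMA_BEER / radius_m

for r in (1e-3, 1e-4, 1e-5):               # 1 mm, 0.1 mm, 10 um bubbles
    print(f"r = {r:.0e} m -> dP = {laplace_pressure(r):8.0f} Pa")
# The 10 um nitrogen bubble needs ~8 kPa of excess pressure, a hundred times
# more than the 1 mm bubble -- hence the creamier, finer-bubbled head.
```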
The design of an internal compartment that could be readily inserted during the canning process was devised by Alan Forage and William Byrne, and work started on the widget during the period 1984–85. The plan was to introduce a plastic capsule into the can, pressurise it during the filling process, and then allow it to release this pressure in a controlled manner when the can was opened. This would be sufficient to initiate the product and give it the characteristic creamy head. However, Tony Carey observed that this resulted in beer being forced into the widget during pressurisation, which reduced the quality of the head. He suggested overcoming this by rapidly inverting the can after the lid was seamed on. This extra innovation proved successful. The first samples sent to Dublin were labelled "Project Dynamite", which caused some delay before customs and excise would release the samples. [citation needed] Because of this, the name was changed to Oaktree in recognition of the earlier ACORN project. Another name that changed was "inserts"; the operators called them "widgets" almost immediately after they arrived on site, a name that has now stuck with the industry. [citation needed] The development of ideas continued, and more than one hundred alternatives were considered. The blow-moulded widget was to be pierced with a laser, and a blower was then necessary to blow away the plume created by the laser burning through the polypropylene. This was abandoned, and instead it was decided to gas-exchange air for nitrogen on the filler [2] and produce the inserts with a hole already in place, using straightforward and cheaper injection-moulding techniques. Commissioning began in January 1988, with a national launch date of March 1989. This first-generation widget was a plastic disc held in place by friction in the bottom of the can. This method worked well if the beer was served cold; when served warm, the can would overflow when opened. The floating widget, which Guinness calls the "Smoothifier", was launched in 1997 and does not have this problem. The idea for the widget soon became popular. John Smith's started to include widgets in their cans in 1994, and many beer brands in the UK now use widgets, often alongside regular carbonated products. Technology from Ball Corp. uses a widget affixed to the bottom of a can that is also charged with nitrogen during canning. [3] The term widget glass can be used to refer to a laser-engraved pattern at the bottom of a beer glass which aids the release of carbon dioxide bubbles. [4] The pattern of the etching can be anything from a simple circular or chequered design to a logo or text. The widget in the base of a beer glass works by creating a nucleation point, allowing the CO₂ to be released from the liquid which comes into contact with it, thus assisting in maintaining the head on the beer. This has become increasingly popular, with Fosters, Estrella and others using them in public houses in the UK.
https://en.wikipedia.org/wiki/Widget_(beer)
Cinnoline is an aromatic heterocyclic compound with the formula C8H6N2. It is isomeric with other diazanaphthalenes, including quinoxaline, phthalazine and quinazoline. The free base can be obtained as an oil by treatment of the hydrochloride with base. It co-crystallizes with one molecule of ether as white silky needles (m.p. 24–25 °C) upon cooling of ethereal solutions. The free base melts at 39 °C. It has a taste resembling that of chloral hydrate and leaves a sharp irritation for some time. The compound was first obtained in impure form by cyclization of the alkyne o-C6H4(N2Cl)C≡CCO2H in water to give 4-hydroxycinnoline-3-carboxylic acid. This material could be decarboxylated and the hydroxyl group reductively removed to give the parent heterocycle. This reaction is called the Richter cinnoline synthesis. [3] Improved methods exist for its synthesis: it can be prepared by dehydrogenation of dihydrocinnoline with freshly precipitated mercuric oxide, and it can be isolated as the hydrochloride. [4] Cinnolines are cinnoline derivatives. A classic organic reaction for synthesizing cinnolines is the Widman–Stoermer synthesis, [5] a ring-closing reaction of an α-vinyl-aniline with hydrochloric acid and sodium nitrite. A conceptually related reaction is the Bamberger triazine synthesis towards triazines. Another cinnoline method is the Borsche cinnoline synthesis. Cinnoline is toxic. [citation needed]
https://en.wikipedia.org/wiki/Widman–Stoermer_synthesis
In the context of the pressure–temperature phase diagram of a substance, and of the supercritical fluid state in particular, the Widom line is a line emanating from the critical point which, in a sense, extends the liquid–vapor coexistence curve above the critical point. It corresponds to the maxima or minima of certain physical properties of the supercritical fluid, such as the speed of sound, isothermal compressibility, and isochoric and isobaric heat capacities. A common criterion for locating the Widom line is the maximum in the isobaric heat capacity. More generally, the Widom line is defined as the line in the pressure–temperature phase diagram of a fluid substance along which the correlation length has its maximum. [1] It always emanates from a critical point. It has been investigated for various systems, including, for example, in the context of the hypothesized liquid–liquid critical point (or second critical point) of water. [2] Similar boundary lines include the Fisher–Widom line and the Frenkel line, which also describe transitions between distinct fluid behaviors. Named after theoretical physicist Benjamin Widom, the Widom line is an important concept in fluid thermodynamics and critical phenomena. It is relevant to the physical properties of any single-component fluid at sufficiently high pressures and temperatures, and its study is an active research area. The Widom line has been suggested [3] to separate liquid-like behaviour from gas-like behaviour in supercritical fluids, where the traditional distinction between liquid and gas no longer exists. Specifically, on the low-pressure side of the line the fluid exhibits gas-like behavior, while on the high-pressure side it behaves more like a liquid. This separation is not a sharp phase change but a continuous crossover in some of the properties of the fluid. It has been observed in laboratory experiments, for example on fluid methane. [4] The concept of the Widom line provides a useful framework for characterizing and predicting the properties of fluids, which are important for scientific research as well as various industrial processes.
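To make the isobaric-heat-capacity criterion concrete, here is a sketch that scans cp along one supercritical isobar of CO₂ and reports the temperature of its maximum, which gives one point on the Widom line. It assumes the third-party CoolProp package and its PropsSI function; the 9 MPa isobar and the scan range are arbitrary illustration choices.

```python
# Locate one point of the CO2 Widom line as the cp maximum along an isobar.
# Requires the CoolProp package (pip install CoolProp).
from CoolProp.CoolProp import PropsSI

P = 9.0e6                                    # 9 MPa, above CO2's ~7.38 MPa critical pressure
temps = [300 + 0.1 * i for i in range(500)]  # scan 300-350 K in 0.1 K steps
cps = [PropsSI("C", "T", T, "P", P, "CO2") for T in temps]  # isobaric cp, J/(kg*K)

T_widom = temps[cps.index(max(cps))]
print(f"cp maximum (Widom-line point) at ~{T_widom:.1f} K for P = {P/1e6} MPa")
```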
https://en.wikipedia.org/wiki/Widom_line
Widom scaling (after Benjamin Widom) is a hypothesis in statistical mechanics regarding the free energy of a magnetic system near its critical point which leads to the critical exponents becoming no longer independent, so that they can be parameterized in terms of two values. The hypothesis can be seen to arise as a natural consequence of the block-spin renormalization procedure, when the block size is chosen to be of the same size as the correlation length. [1] Widom scaling is an example of universality.

The critical exponents $\alpha$, $\alpha'$, $\beta$, $\gamma$, $\gamma'$ and $\delta$ are defined in terms of the behaviour of the order parameters and response functions near the critical point as follows:

$$M(t,0) \simeq (-t)^{\beta} \quad \text{for } t \uparrow 0,$$
$$M(0,H) \simeq |H|^{1/\delta}\,\operatorname{sign}(H) \quad \text{for } H \to 0,$$
$$\chi_T(t,0) \simeq \begin{cases} t^{-\gamma}, & t \downarrow 0, \\ (-t)^{-\gamma'}, & t \uparrow 0, \end{cases} \qquad c_H(t,0) \simeq \begin{cases} t^{-\alpha}, & t \downarrow 0, \\ (-t)^{-\alpha'}, & t \uparrow 0, \end{cases}$$

where $M(t,H)$ is the magnetization as a function of the reduced temperature $t = (T-T_c)/T_c$ and the magnetic field $H$, $\chi_T$ is the isothermal susceptibility, and $c_H$ is the specific heat at constant magnetic field.

Near the critical point, Widom's scaling relation reads
$$H(t,M) \simeq M\,|M|^{\delta-1} f\!\left(t/|M|^{1/\beta}\right),$$
where the scaling function $f$ has an expansion whose correction terms are governed by Wegner's exponent $\omega$, the exponent governing the approach to scaling.

The scaling hypothesis is that near the critical point, the free energy $f(t,H)$, in $d$ dimensions, can be written as the sum of a slowly varying regular part $f_r$ and a singular part $f_s$, with the singular part being a scaling function, i.e., a homogeneous function:
$$f_s(t,H) = \lambda^{-d} f_s(\lambda^{p} t, \lambda^{q} H).$$
Then taking the partial derivative with respect to $H$ gives the form of $M(t,H)$:
$$M(t,H) = \lambda^{q-d} M(\lambda^{p} t, \lambda^{q} H).$$
Setting $H = 0$ and $\lambda = (-t)^{-1/p}$ in the preceding equation yields
$$M(t,0) = (-t)^{(d-q)/p}\, M(-1,0).$$
Comparing this with the definition of $\beta$ yields its value,
$$\beta = \frac{d-q}{p}.$$
Similarly, putting $t = 0$ and $\lambda = H^{-1/q}$ into the scaling relation for $M$ yields
$$M(0,H) = H^{(d-q)/q}\, M(0,1).$$
Hence
$$\delta = \frac{q}{d-q}.$$
Applying the expression for the isothermal susceptibility $\chi_T$ in terms of $M$ to the scaling relation yields
$$\chi_T(t,H) = \lambda^{2q-d}\, \chi_T(\lambda^{p} t, \lambda^{q} H).$$
Setting $H = 0$ and $\lambda = t^{-1/p}$ for $t \downarrow 0$ (resp. $\lambda = (-t)^{-1/p}$ for $t \uparrow 0$) yields
$$\gamma = \gamma' = \frac{2q-d}{p}.$$
Similarly, applying the expression for the specific heat $c_H$, obtained from the second derivative of the free energy with respect to $t$, to the scaling relation yields
$$c_H(t,H) = \lambda^{2p-d}\, c_H(\lambda^{p} t, \lambda^{q} H).$$
Taking $H = 0$ and $\lambda = t^{-1/p}$ for $t \downarrow 0$ (or $\lambda = (-t)^{-1/p}$ for $t \uparrow 0$) yields
$$\alpha = \alpha' = 2 - \frac{d}{p}.$$
As a consequence of Widom scaling, not all critical exponents are independent, but they can be parameterized by two numbers $p, q \in \mathbb{R}$, with the relations expressed as
$$\alpha = \alpha' = 2 - \frac{d}{p}, \qquad \beta = \frac{d-q}{p}, \qquad \gamma = \gamma' = \frac{2q-d}{p}, \qquad \delta = \frac{q}{d-q},$$
which imply the scaling laws $2 - \alpha = 2\beta + \gamma$ (the Rushbrooke identity) and $\gamma = \beta(\delta - 1)$ (the Widom identity). The relations are experimentally well verified for magnetic systems and fluids.
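As a quick numerical check of these identities, the sketch below plugs in the exactly known critical exponents of the two-dimensional Ising model (α = 0, β = 1/8, γ = 7/4, δ = 15, with d = 2), confirms the Rushbrooke and Widom relations, and recovers the two parameters p and q. The exponent values are standard; the code itself is plain arithmetic.

```python
from fractions import Fraction as F

# Exactly known 2D Ising critical exponents.
alpha, beta, gamma, delta, d = F(0), F(1, 8), F(7, 4), F(15), F(2)

# Scaling laws implied by Widom scaling:
print(alpha + 2 * beta + gamma == 2)     # Rushbrooke: alpha + 2*beta + gamma = 2
print(gamma == beta * (delta - 1))       # Widom: gamma = beta*(delta - 1)

# Recover the two parameters p and q (alpha = 2 - d/p, beta = (d - q)/p):
p = d / (2 - alpha)                      # -> 1
q = d - beta * p                         # -> 15/8
print(p, q, q / (d - q) == delta)        # delta = q/(d - q) consistency check
```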
https://en.wikipedia.org/wiki/Widom_scaling
The Wiedemann effect is the twisting of a ferromagnetic rod through which an electric current is flowing when the rod is placed in a longitudinal magnetic field; the twist is caused by the resulting helical magnetic field. It is the inverse of the Matteucci effect. It was discovered by the German physicist Gustav Wiedemann in 1858. [1] The Wiedemann effect is one of the manifestations of magnetostriction in a field formed by the combination of a longitudinal magnetic field and the circular magnetic field created by an electric current. If the electric current (or the magnetic field) is alternating, the rod begins torsional oscillation. In the linear approximation, the torsion angle α of the rod does not depend on the shape of its cross-section and is determined only by the current density and the magnetoelastic properties of the rod. [2] Magnetostrictive position sensors use the Wiedemann effect to excite an ultrasonic pulse. Typically a small magnet is used to mark a position along a magnetostrictive wire. The magnetic field from a short current pulse in the wire, combined with that from the position magnet, excites the ultrasonic pulse. The time required for this pulse to travel from the point of excitation to a pickup at the end of the wire gives the position. Reflections from the other end of the wire could lead to disturbances, so the wire is connected to a mechanical damper at that end. [3]
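Below is a minimal sketch of the time-of-flight arithmetic behind such sensors: the magnet's position is simply the torsional wave speed multiplied by the measured travel time. The wave speed used (about 2800 m/s) is an assumed, order-of-magnitude figure for torsional waves in magnetostrictive waveguides, not a value from the cited sources.

```python
# Magnetostrictive position sensor: distance = wave speed x time of flight.
WAVE_SPEED = 2800.0   # m/s; assumed typical torsional wave speed in the waveguide

def magnet_position(time_of_flight_us: float) -> float:
    """Return the magnet's distance (m) from the pickup, given the measured
    travel time of the ultrasonic pulse in microseconds."""
    return WAVE_SPEED * time_of_flight_us * 1e-6

print(magnet_position(250.0))   # a 250 us flight time -> 0.7 m from the pickup
```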
https://en.wikipedia.org/wiki/Wiedemann_effect
In mathematics, specifically in Riemannian geometry, a Wiedersehen pair is a pair of distinct points x and y on a (usually, but not necessarily, two-dimensional) compact Riemannian manifold (M, g) such that every geodesic through x also passes through y, and the same holds with x and y interchanged. For example, on an ordinary sphere, where the geodesics are great circles, the Wiedersehen pairs are exactly the pairs of antipodal points. If every point of an oriented manifold (M, g) belongs to a Wiedersehen pair, then (M, g) is said to be a Wiedersehen manifold. The concept was introduced by the Austro-Hungarian mathematician Wilhelm Blaschke and comes from the German term meaning "seeing again". As it turns out, in each dimension n the only Wiedersehen manifold (up to isometry) is the standard Euclidean n-sphere. Initially known as the Blaschke conjecture, this result was established by the combined works of Berger, Kazdan, Weinstein (for even n), and Yang (for odd n). This Riemannian geometry-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Wiedersehen_pair
In physics, Wien's displacement law states that the black-body radiation curve for different temperatures will peak at different wavelengths that are inversely proportional to the temperature. The shift of that peak is a direct consequence of the Planck radiation law, which describes the spectral brightness or intensity of black-body radiation as a function of wavelength at any given temperature. However, it had been discovered by German physicist Wilhelm Wien several years before Max Planck developed that more general equation, and describes the entire shift of the spectrum of black-body radiation toward shorter wavelengths as temperature increases. Formally, the wavelength version of Wien's displacement law states that the spectral radiance of black-body radiation per unit wavelength peaks at the wavelength $\lambda_{\text{peak}}$ given by
$$\lambda_{\text{peak}} = \frac{b}{T},$$
where $T$ is the absolute temperature and $b$ is a constant of proportionality called Wien's displacement constant, equal to 2.897771955...×10⁻³ m⋅K, [1][2] or b ≈ 2898 μm⋅K. This is an inverse relationship between wavelength and temperature: the higher the temperature, the shorter the wavelength of the thermal radiation, and the lower the temperature, the longer the wavelength. For visible radiation, hot objects emit bluer light than cool objects. If one is considering the peak of black-body emission per unit frequency or per proportional bandwidth, one must use a different proportionality constant. However, the form of the law remains the same: the peak wavelength is inversely proportional to temperature, and the peak frequency is directly proportional to temperature. There are other formulations of Wien's displacement law, which are parameterized relative to other quantities. For these alternate formulations, the form of the relationship is similar, but the proportionality constant, b, differs. Wien's displacement law may be referred to as "Wien's law", a term which is also used for the Wien approximation. In "Wien's displacement law", the word displacement refers to how the intensity–wavelength graphs appear shifted (displaced) for different temperatures. Wien's displacement law is relevant to some everyday experiences, such as the colour changes of heated metal described below. The law is named for Wilhelm Wien, who derived it in 1893 based on a thermodynamic argument. [7] Wien considered adiabatic expansion of a cavity containing waves of light in thermal equilibrium. Using Doppler's principle, he showed that, under slow expansion or contraction, the energy of light reflecting off the walls changes in exactly the same way as the frequency. A general principle of thermodynamics is that a thermal equilibrium state, when expanded very slowly, stays in thermal equilibrium. Wien himself deduced this law theoretically in 1893, following Boltzmann's thermodynamic reasoning. It had previously been observed, at least semi-quantitatively, by an American astronomer, Langley. This upward shift in $\nu_{\text{peak}}$ with $T$ is a familiar experience: when an iron is heated in a fire, the first visible radiation (at around 900 K) is deep red, the lowest-frequency visible light. Further increase in $T$ causes the color to change to orange, then yellow, and finally blue at very high temperatures (10,000 K or more), for which the peak in radiation intensity has moved beyond the visible into the ultraviolet.
[8] The adiabatic principle allowed Wien to conclude that for each mode, the adiabatic invariant energy/frequency is only a function of the other adiabatic invariant, the frequency/temperature. From this, he derived the "strong version" of Wien's displacement law: the statement that the blackbody spectral radiance is proportional to $\nu^{3} F(\nu/T)$ for some function $F$ of a single variable. A modern variant of Wien's derivation can be found in the textbook by Wannier [9] and in a paper by E. Buckingham. [10] The consequence is that the shape of the black-body radiation function (which was not yet understood) would shift proportionally in frequency (or inversely proportionally in wavelength) with temperature. When Max Planck later formulated the correct black-body radiation function it did not explicitly include Wien's constant $b$. Rather, the Planck constant $h$ was created and introduced into his new formula. From the Planck constant $h$ and the Boltzmann constant $k$, Wien's constant $b$ can be obtained. Percentiles in this context are percentiles of the Planck blackbody spectrum. [11] Only 25 percent of the energy in the black-body spectrum is associated with wavelengths shorter than the value given by the peak-wavelength version of Wien's law. Notice that for a given temperature, different parameterizations imply different maximal wavelengths. In particular, the curve of intensity per unit frequency peaks at a different wavelength than the curve of intensity per unit wavelength. [12] For example, using $T$ = 6,000 K (5,730 °C; 10,340 °F) and parameterization by wavelength, the wavelength for maximal spectral radiance is $\lambda$ = 482.962 nm, with corresponding frequency $\nu$ = 620.737 THz. For the same temperature, but parameterizing by frequency, the frequency for maximal spectral radiance is $\nu$ = 352.735 THz, with corresponding wavelength $\lambda$ = 849.907 nm. These functions are radiance density functions, which are probability density functions scaled to give units of radiance. The density function has different shapes for different parameterizations, depending on relative stretching or compression of the abscissa, which measures the change in probability density relative to a linear change in a given parameter. Since wavelength and frequency have a reciprocal relation, they represent significantly non-linear shifts in probability density relative to one another. The total radiance is the integral of the distribution over all positive values, and that is invariant for a given temperature under any parameterization. Additionally, for a given temperature the radiance consisting of all photons between two wavelengths must be the same regardless of which distribution you use. That is to say, integrating the wavelength distribution from $\lambda_1$ to $\lambda_2$ will result in the same value as integrating the frequency distribution between the two frequencies that correspond to $\lambda_1$ and $\lambda_2$, namely from $c/\lambda_2$ to $c/\lambda_1$.
[13] However, the distribution shape depends on the parameterization, and for a different parameterization the distribution will typically have a different peak density, as these calculations demonstrate. [12] The important point of Wien's law, however, is that any such wavelength marker, including the median wavelength (or, alternatively, the wavelength below which any specified percentage of the emission occurs) is proportional to the reciprocal of temperature. That is, the shape of the distribution for a given parameterization scales with and translates according to temperature, and can be calculated once for a canonical temperature, then appropriately shifted and scaled to obtain the distribution for another temperature. This is a consequence of the strong statement of Wien's law.

For spectral flux considered per unit frequency $d\nu$ (in hertz), Wien's displacement law describes a peak emission at the optical frequency $\nu_{\text{peak}}$ given by: [14]
$$\nu_{\text{peak}} = \frac{x}{h} k T \approx (5.879\times 10^{10}\ \mathrm{Hz/K}) \cdot T,$$
or equivalently
$$h \nu_{\text{peak}} = x k T \approx (2.431\times 10^{-4}\ \mathrm{eV/K}) \cdot T,$$
where $x$ = 2.821439372122078893... [15] is a constant resulting from the maximization equation, $k$ is the Boltzmann constant, $h$ is the Planck constant, and $T$ is the absolute temperature. With the emission now considered per unit frequency, this peak now corresponds to a wavelength about 76% longer than the peak considered per unit wavelength. The relevant math is detailed in the next section.

Planck's law for the spectrum of black-body radiation predicts the Wien displacement law and may be used to numerically evaluate the constant relating temperature and the peak parameter value for any particular parameterization. Commonly a wavelength parameterization is used, and in that case the black-body spectral radiance (power per emitting area per solid angle) is:
$$u_{\lambda}(\lambda,T) = \frac{2hc^{2}}{\lambda^{5}} \frac{1}{e^{hc/\lambda k T} - 1}.$$
Differentiating $u(\lambda,T)$ with respect to $\lambda$ and setting the derivative equal to zero gives:
$$\frac{\partial u}{\partial \lambda} = 2hc^{2} \left( \frac{hc}{kT\lambda^{7}} \frac{e^{hc/\lambda kT}}{\left(e^{hc/\lambda kT} - 1\right)^{2}} - \frac{1}{\lambda^{6}} \frac{5}{e^{hc/\lambda kT} - 1} \right) = 0,$$
which can be simplified to give:
$$\frac{hc}{\lambda kT} \frac{e^{hc/\lambda kT}}{e^{hc/\lambda kT} - 1} - 5 = 0.$$
By defining
$$x \equiv \frac{hc}{\lambda kT},$$
the equation becomes one in the single variable $x$:
$$\frac{x e^{x}}{e^{x} - 1} - 5 = 0,$$
which is equivalent to
$$x = 5(1 - e^{-x}).$$
This equation is solved by
$$x = 5 + W_{0}(-5e^{-5}),$$
where $W_{0}$ is the principal branch of the Lambert W function, and gives $x$ = 4.965114231744276303....
[16] Solving for the wavelength $\lambda$ in millimetres, and using kelvins for the temperature, yields: [17][2]
$$\lambda_{\text{peak}} = \frac{hc}{x k T} \approx \frac{2.898\ \mathrm{mm{\cdot}K}}{T}.$$
Another common parameterization is by frequency. The derivation yielding the peak parameter value is similar, but starts with the form of Planck's law as a function of frequency $\nu$:
$$u_{\nu}(\nu,T) = \frac{2h\nu^{3}}{c^{2}} \frac{1}{e^{h\nu/kT} - 1}.$$
The preceding process using this equation yields:
$$-\frac{h\nu}{kT} \frac{e^{h\nu/kT}}{e^{h\nu/kT} - 1} + 3 = 0.$$
The net result is:
$$x = 3(1 - e^{-x}).$$
This is similarly solved with the Lambert W function: [18]
$$x = 3 + W_{0}(-3e^{-3}),$$
giving $x$ = 2.821439372122078893.... [15] Solving for $\nu$ produces: [14]
$$\nu_{\text{peak}} = \frac{x k T}{h} \approx (5.879\times 10^{10}\ \mathrm{Hz/K}) \cdot T.$$
Using the implicit equation $x = 4(1 - e^{-x})$ yields the peak in the spectral radiance density function expressed in the parameter radiance per proportional bandwidth. (That is, the density of irradiance per frequency bandwidth proportional to the frequency itself, which can be calculated by considering infinitesimal intervals of $\ln \nu$ (or, equivalently, $\ln \lambda$) rather than of frequency itself.) This is perhaps a more intuitive way of presenting "wavelength of peak emission". That yields $x$ = 3.920690394872886343.... [19]

Another way of characterizing the radiance distribution is via the mean photon energy [12]
$$\langle E_{\text{phot}} \rangle = \frac{\pi^{4}}{30\,\zeta(3)} k T \approx (3.7294\times 10^{-23}\ \mathrm{J/K}) \cdot T,$$
where $\zeta$ is the Riemann zeta function. The wavelength corresponding to the mean photon energy is given by
$$\lambda_{\langle E \rangle} \approx (0.53265\ \mathrm{cm{\cdot}K})/T.$$

Marr and Wilkin (2012) contend that the widespread teaching of Wien's displacement law in introductory courses is undesirable, and that it would be better replaced by alternate material, arguing that teaching the law is problematic for several reasons. They suggest that the average photon energy be presented in place of Wien's displacement law, as a more physically meaningful indicator of changes that occur with changing temperature. In connection with this, they recommend that the average number of photons per second be discussed in connection with the Stefan–Boltzmann law. They recommend that the Planck spectrum be plotted as a "spectral energy density per fractional bandwidth distribution", using a logarithmic scale for the wavelength or frequency. [12]
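The closed-form Lambert W solutions above are easy to verify numerically. The sketch below, which assumes SciPy is available, reproduces x ≈ 4.9651 and x ≈ 2.8214, then recovers Wien's displacement constant b = hc/(xk) and the 6,000 K peak wavelength quoted earlier.

```python
# Verify the Lambert-W solutions of Wien's displacement law (requires SciPy).
from math import exp
from scipy.special import lambertw

h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
k = 1.380649e-23      # Boltzmann constant, J/K

x_wl = 5 + lambertw(-5 * exp(-5)).real   # wavelength form: x = 5(1 - e^-x)
x_nu = 3 + lambertw(-3 * exp(-3)).real   # frequency form:  x = 3(1 - e^-x)
print(x_wl, x_nu)                        # 4.965114... and 2.821439...

b = h * c / (x_wl * k)                   # Wien's displacement constant
print(b)                                 # ~2.8978e-3 m*K
print(b / 6000 * 1e9)                    # peak wavelength at 6,000 K: ~482.96 nm
```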
https://en.wikipedia.org/wiki/Wien's_displacement_law
Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896. [1][2][3] The equation does accurately describe the short-wavelength (high-frequency) spectrum of thermal emission from objects, but it fails to accurately fit the experimental data for long-wavelength (low-frequency) emission. [3] Wien derived his law from thermodynamic arguments, several years before Planck introduced the quantization of radiation. [1] Wien's original paper did not contain the Planck constant. [1] In this paper, Wien took the wavelength of black-body radiation and combined it with the Maxwell–Boltzmann energy distribution for atoms. The exponential curve was created by the use of Euler's number e raised to the power of the temperature multiplied by a constant. Fundamental constants were later introduced by Max Planck. [4] The law may be written as [5]
$$I(\nu,T) = \frac{2h\nu^{3}}{c^{2}} e^{-\frac{h\nu}{k_{\text{B}}T}}$$
(note the simple exponential frequency dependence of this approximation) or, by introducing natural Planck units,
$$I(\nu,x) = 2\nu^{3} e^{-x},$$
where $I(\nu,T)$ is the amount of energy per unit surface area per unit time per unit solid angle per unit frequency emitted at a frequency $\nu$, $T$ is the temperature, $h$ is the Planck constant, $c$ is the speed of light, $k_{\text{B}}$ is the Boltzmann constant, and $x = h\nu/(k_{\text{B}}T)$. This equation may also be written as [3][6]
$$I(\lambda,T) = \frac{2hc^{2}}{\lambda^{5}} e^{-\frac{hc}{\lambda k_{\text{B}}T}},$$
where $I(\lambda,T)$ is the amount of energy per unit surface area per unit time per unit solid angle per unit wavelength emitted at a wavelength $\lambda$. Wien acknowledges Friedrich Paschen in his original paper as having supplied him with the same formula based on Paschen's experimental observations. [1] The peak value of this curve, as determined by setting the derivative of the equation equal to zero and solving, [7] occurs at a wavelength
$$\lambda_{\text{max}} = \frac{hc}{5 k_{\text{B}} T} \approx \frac{0.2878\ \mathrm{cm{\cdot}K}}{T},$$
and frequency
$$\nu_{\text{max}} = \frac{3 k_{\text{B}} T}{h} \approx (6.25\times 10^{10}\ \mathrm{Hz/K}) \cdot T.$$
The Wien approximation was originally proposed as a description of the complete spectrum of thermal radiation, although it failed to accurately describe long-wavelength (low-frequency) emission. However, it was soon superseded by Planck's law, which accurately describes the full spectrum, derived by treating the radiation as a photon gas and accordingly applying Bose–Einstein in place of Maxwell–Boltzmann statistics. Planck's law may be given as [5]
$$I(\nu,T) = \frac{2h\nu^{3}}{c^{2}} \frac{1}{e^{\frac{h\nu}{kT}} - 1}.$$
The Wien approximation may be derived from Planck's law by assuming $h\nu \gg kT$. When this is true, then [5]
$$\frac{1}{e^{\frac{h\nu}{kT}} - 1} \approx e^{-\frac{h\nu}{kT}},$$
and so the Wien approximation gets ever closer to Planck's law as the frequency increases. The Rayleigh–Jeans law developed by Lord Rayleigh may be used to accurately describe the long-wavelength spectrum of thermal radiation, but fails to describe the short-wavelength spectrum of thermal emission. [3][5]
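The statement that the Wien approximation approaches Planck's law at high frequency can be made quantitative: since both expressions share the prefactor 2hν³/c², their fractional difference depends only on x = hν/k_BT, and works out to exactly e^(−x). A short sketch:

```python
# Relative error of the Wien approximation vs Planck's law as a function of
# x = h*nu / (k*T); the common prefactor 2*h*nu^3/c^2 cancels out.
from math import exp

def relative_error(x: float) -> float:
    wien = exp(-x)                 # Wien:   e^(-x)
    planck = 1.0 / (exp(x) - 1.0)  # Planck: 1/(e^x - 1)
    return abs(wien - planck) / planck

for x in (0.5, 1, 3, 5, 10):
    print(f"x = {x:>4}: relative error = {relative_error(x):.2e}")
# The error equals e^(-x): large at low frequencies, negligible for h*nu >> k*T.
```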
https://en.wikipedia.org/wiki/Wien_approximation
The Wien effect is the experimentally observed increase in ionic mobility or conductivity of electrolytes at very high gradients of electrical potential. [1] A theoretical explanation has been proposed by Lars Onsager. [2] A related phenomenon is known as the Second Wien Effect or the dissociation field effect, and it involves increased dissociation constants of weak acids at high electrical gradients. [3] The dissociation of weak chemical bases is unaffected. The effects are important at very high electric fields (10⁸–10⁹ V/m), like those observed in electrical double layers at interfaces or at the surfaces of electrodes in electrochemistry. More generally, the effect of the electric field (acting directly through space rather than through chemical bonds) on the chemical behaviour of systems (e.g., on reaction rates) is known as the field effect or the direct effect. [4] The terms are named after Max Wien. [5][6] This electrochemistry-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Wien_effect
In network theory, the Wiener connector is a means of maximizing efficiency in connecting specified "query vertices" in a network. Given a connected, undirected graph and a set of query vertices in the graph, the minimum Wiener connector is an induced subgraph that connects the query vertices and minimizes the sum of shortest-path distances among all pairs of vertices in the subgraph. In combinatorial optimization, the minimum Wiener connector problem is the problem of finding the minimum Wiener connector. It can be thought of as a version of the classic Steiner tree problem (one of Karp's 21 NP-complete problems), where instead of minimizing the size of the tree, the objective is to minimize the distances in the subgraph. [1][2] The minimum Wiener connector was first presented by Ruchansky et al. in 2015. [3] The minimum Wiener connector has applications in many domains where there is a graph structure and an interest in learning about connections between sets of individuals. For example, given a set of patients infected with a viral disease, which other patients should be checked to find the culprit? Or, given a set of proteins of interest, which other proteins participate in pathways with them? The Wiener connector was named in honor of the chemist Harry Wiener, who first introduced the Wiener index. The Wiener index is the sum of shortest-path distances in a (sub)graph. Using $d(u,v)$ to denote the shortest-path distance between $u$ and $v$, the Wiener index of a (sub)graph $S$, denoted $W(S)$, is defined as
$$W(S) = \sum_{\{u,v\} \subseteq S} d(u,v).$$
The minimum Wiener connector problem is defined as follows. Given an undirected and unweighted graph with vertex set $V$ and edge set $E$ and a set of query vertices $Q \subseteq V$, find a connector $H \subseteq V$ of minimum Wiener index. More formally, the problem is to compute
$$\operatorname*{arg\,min}_{\substack{Q \subseteq H \subseteq V \\ H \text{ connected}}} W(H),$$
that is, find a connector $H$ that minimizes the sum of shortest paths in $H$. The minimum Wiener connector problem is related to the Steiner tree problem. In the former, the objective function in the minimization is the Wiener index of the connector, whereas in the latter, the objective function is the sum of the weights of the edges in the connector. The optimum solutions to these problems may differ, given the same graph and set of query vertices. In fact, a solution for the Steiner tree problem may be arbitrarily bad for the minimum Wiener connector problem. The problem is NP-hard, and does not admit a polynomial-time approximation scheme unless P = NP. [3] This can be proven using the inapproximability of vertex cover in bounded-degree graphs. [4] Although there is no polynomial-time approximation scheme, there is a polynomial-time constant-factor approximation: an algorithm that finds a connector whose Wiener index is within a constant multiplicative factor of the Wiener index of the optimum connector. In terms of complexity classes, the minimum Wiener connector problem is in APX but is not in PTAS unless P = NP. An exhaustive search over all possible subsets of vertices to find the one that induces the connector of minimum Wiener index yields an algorithm that finds the optimum solution in $2^{O(n)}$ time (that is, exponential time) on graphs with $n$ vertices.
In the special case that there are exactly two query vertices, the optimum solution is the shortest path joining the two vertices, so the problem can be solved in polynomial time by computing the shortest path. In fact, for any fixed constant number of query vertices, an optimum solution can be found in polynomial time. There is a constant-factor approximation algorithm for the minimum Wiener connector problem that runs in time $O(q(m \log n + n \log^{2} n))$ on a graph with $n$ vertices, $m$ edges, and $q$ query vertices, roughly the same time it takes to compute shortest-path distances from the query vertices to every other vertex in the graph. [3] The central approach of this algorithm is to reduce the problem to the vertex-weighted Steiner tree problem, which admits a constant-factor approximation in particular instances related to the minimum Wiener connector problem. The minimum Wiener connector behaves like betweenness centrality. When the query vertices belong to the same community, the non-query vertices that form the minimum Wiener connector tend to belong to the same community and have high centrality within the community. Such vertices are likely to be influential vertices playing leadership roles in the community. In a social network, these influential vertices might be good users for spreading information or to target in a viral marketing campaign. [5] When the query vertices belong to different communities, the non-query vertices that form the minimum Wiener connector contain vertices adjacent to edges that bridge the different communities. These vertices span a structural hole in the graph and are important. [6] The minimum Wiener connector is useful in applications in which one wishes to learn about the relationship between a set of vertices in a graph, as in the epidemiological and protein-interaction examples given above; a brute-force illustration of the objective appears below.
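The following is a brute-force sketch of the problem definition, feasible only for very small graphs given the exponential search space. It uses the networkx library on an arbitrary small test graph with hypothetical query vertices, and illustrates the objective function only, not the constant-factor algorithm from the paper.

```python
# Brute-force minimum Wiener connector for small graphs (requires networkx).
from itertools import combinations
import networkx as nx

def wiener_index(G):
    """Sum of shortest-path distances over all unordered vertex pairs of G."""
    return sum(d for _, dists in nx.shortest_path_length(G)
                 for d in dists.values()) // 2   # each pair counted twice

def min_wiener_connector(G, Q):
    """Try every vertex subset containing Q; keep the connected induced
    subgraph of minimum Wiener index. Exponential time: small graphs only."""
    best, best_w = None, float("inf")
    others = [v for v in G if v not in Q]
    for k in range(len(others) + 1):
        for extra in combinations(others, k):
            H = G.subgraph(set(Q) | set(extra))
            if nx.is_connected(H):
                w = wiener_index(H)
                if w < best_w:
                    best, best_w = set(H.nodes), w
    return best, best_w

G = nx.petersen_graph()                    # a small test graph
print(min_wiener_connector(G, {0, 2, 6}))  # hypothetical query vertices
```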
https://en.wikipedia.org/wiki/Wiener_connector
In chemical graph theory, the Wiener index (also Wiener number), introduced by Harry Wiener, is a topological index of a molecule, defined as the sum of the lengths of the shortest paths between all pairs of vertices in the chemical graph representing the non-hydrogen atoms in the molecule. [ 1 ] The Wiener index has also been used for representing computer networks and for enhancing lattice hardware security. The Wiener index is named after Harry Wiener, who introduced it in 1947; at the time, Wiener called it the "path number". [ 2 ] It is the oldest topological index related to molecular branching. [ 3 ] Based on its success, many other topological indexes of chemical graphs, based on information in the distance matrix of the graph, have been developed subsequently to Wiener's work. [ 4 ] The same quantity has also been studied in pure mathematics, under various names including the gross status, [ 5 ] the distance of a graph, [ 6 ] and the transmission. [ 7 ] The Wiener index is also closely related to the closeness centrality of a vertex in a graph, a quantity inversely proportional to the sum of all distances between the given vertex and all other vertices, which has been frequently used in sociometry and the theory of social networks. [ 6 ] Butane (C 4 H 10 ) has two different structural isomers: n -butane, with a linear structure of four carbon atoms, and isobutane, with a branched structure. The chemical graph for n -butane is a four-vertex path graph, and the chemical graph for isobutane is a tree with one central vertex connected to three leaves. The n -butane molecule has three pairs of vertices at distance one from each other, two pairs at distance two, and one pair at distance three, so its Wiener index is The isobutane molecule has three pairs of vertices at distance one from each other (the three leaf-center pairs), and three pairs at distance two (the leaf-leaf pairs). Therefore, its Wiener index is These numbers are instances of formulas for special cases of the Wiener index: it is ( n 3 − n ) / 6 {\displaystyle (n^{3}-n)/6} for any n {\displaystyle n} -vertex path graph such as the graph of n -butane, [ 8 ] and ( n − 1 ) 2 {\displaystyle (n-1)^{2}} for any n {\displaystyle n} -vertex star such as the graph of isobutane. [ 9 ] Thus, even though these two molecules have the same chemical formula, and the same numbers of carbon-carbon and carbon-hydrogen bonds, their different structures give rise to different Wiener indices. Wiener showed that the Wiener index is closely correlated with the boiling points of alkane molecules. [ 2 ] Later work on quantitative structure–activity relationships showed that it is also correlated with other quantities including the parameters of its critical point, [ 10 ] the density, surface tension, and viscosity of its liquid phase, [ 11 ] and the van der Waals surface area of the molecule. [ 12 ] The Wiener index may be calculated directly using an algorithm for computing all pairwise distances in the graph. When the graph is unweighted (so the length of a path is just its number of edges), these distances may be calculated by repeating a breadth-first search algorithm, once for each starting vertex. [ 13 ] The total time for this approach is O( nm ), where n is the number of vertices in the graph and m is its number of edges. For weighted graphs, one may instead use the Floyd–Warshall algorithm [ 13 ] [ 14 ] [ 15 ] or Johnson's algorithm, [ 16 ] with running time O( n 3 ) or O( nm + n 2 log n ) respectively. 
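The repeated breadth-first-search approach described above is straightforward to implement; the following sketch (illustrative Python, not from the cited sources) computes the Wiener index of the two butane carbon skeletons and checks the closed-form path and star formulas:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index of a connected unweighted graph: one BFS per vertex, O(nm)."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total // 2   # each unordered pair was counted twice

# n-butane carbon skeleton: a 4-vertex path; isobutane: a 3-leaf star
n_butane  = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
isobutane = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}

n = 4
print(wiener_index(n_butane), (n**3 - n) // 6)   # 10 10
print(wiener_index(isobutane), (n - 1)**2)       # 9 9
```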
Alternative but less efficient algorithms based on repeated matrix multiplication have also been developed within the chemical informatics literature. [ 17 ] [ 18 ] When the underlying graph is a tree (as is true for instance for the alkanes originally studied by Wiener), the Wiener index may be calculated more efficiently. If the graph is partitioned into two subtrees by removing a single edge e , then its Wiener index is the sum of the Wiener indices of the two subtrees, together with a third term representing the paths that pass through e . This third term may be calculated in linear time by computing the sum of distances of all vertices from e within each subtree and multiplying the two sums. [ 19 ] This divide and conquer algorithm can be generalized from trees to graphs of bounded treewidth , and leads to near-linear-time algorithms for such graphs. [ 20 ] An alternative method for calculating the Wiener index of a tree, by Bojan Mohar and Tomaž Pisanski , works by generalizing the problem to graphs with weighted vertices, where the weight of a path is the product of its length with the weights of its two endpoints. If v is a leaf vertex of the tree then the Wiener index of the tree may be calculated by merging v with its parent (adding their weights together), computing the index of the resulting smaller tree, and adding a simple correction term for the paths that pass through the edge from v to its parent. By repeatedly removing leaves in this way, the Wiener index may be calculated in linear time. [ 13 ] For graphs that are constructed as products of simpler graphs, the Wiener index of the product graph can often be computed by a simple formula that combines the indices of its factors. [ 21 ] Benzenoids (graphs formed by gluing regular hexagons edge-to-edge) can be embedded isometrically into the Cartesian product of three trees, allowing their Wiener indices to be computed in linear time by using the product formula together with the linear time tree algorithm. [ 22 ] Gutman & Yeh (1995) considered the problem of determining which numbers can be represented as the Wiener index of a graph. [ 23 ] They showed that all but two positive integers have such a representation; the two exceptions are the numbers 2 and 5, which are not the Wiener index of any graph. For graphs that must be bipartite, they found that again almost all integers can be represented, with a larger set of exceptions: none of the numbers in the set can be represented as the Wiener index of a bipartite graph. Gutman and Yeh conjectured, but were unable to prove, a similar description of the numbers that can be represented as Wiener indices of trees, with a set of 49 exceptional values: The conjecture was later proven by Wagner, Wang, and Yu. [ 24 ] [ 25 ]
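For trees, the edge-splitting argument described above is equivalent to a standard identity: each edge e contributes n1(e)·n2(e) to the Wiener index, where n1 and n2 are the sizes of the two components of the tree with e removed, since e lies on the shortest path of exactly that many vertex pairs. A minimal Python sketch of the resulting linear-time algorithm (illustrative; function names are invented):

```python
def tree_wiener(adj, root=0):
    """Wiener index of an unweighted tree in linear time, via the identity
    W(T) = sum over edges e of n1(e) * n2(e)."""
    n = len(adj)
    parent = {root: None}
    order = [root]
    for u in order:                 # BFS-style ordering of the vertices
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                order.append(w)
    size = {u: 1 for u in adj}
    total = 0
    for u in reversed(order):       # accumulate subtree sizes bottom-up
        if parent[u] is not None:
            total += size[u] * (n - size[u])   # contribution of edge (u, parent)
            size[parent[u]] += size[u]
    return total

# Same molecules as above: the path gives 10, the star gives 9
print(tree_wiener({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))   # 10
print(tree_wiener({0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}))   # 9
```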
https://en.wikipedia.org/wiki/Wiener_index
In the mathematical field of probability , the Wiener sausage is a neighborhood of the trace of a Brownian motion up to a time t , given by taking all points within a fixed distance of Brownian motion. It can be visualized as a sausage of fixed radius whose centerline is Brownian motion. The Wiener sausage was named after Norbert Wiener by M. D. Donsker and S. R. Srinivasa Varadhan ( 1975 ) because of its relation to the Wiener process ; the name is also a pun on Vienna sausage , as "Wiener" is German for "Viennese". The Wiener sausage is one of the simplest non-Markovian functionals of Brownian motion. Its applications include stochastic phenomena including heat conduction . It was first described by Frank Spitzer ( 1964 ), and it was used by Mark Kac and Joaquin Mazdak Luttinger ( 1973 , 1974 ) to explain results of a Bose–Einstein condensate , with proofs published by M. D. Donsker and S. R. Srinivasa Varadhan ( 1975 ). The Wiener sausage W δ ( t ) of radius δ and length t is the set-valued random variable on Brownian paths b (in some Euclidean space) defined by There has been a lot of work on the behavior of the volume ( Lebesgue measure ) | W δ ( t )| of the Wiener sausage as it becomes thin (δ→0); by rescaling, this is essentially equivalent to studying the volume as the sausage becomes long ( t →∞). Spitzer (1964) showed that in 3 dimensions the expected value of the volume of the sausage is In dimension d at least 3 the volume of the Wiener sausage is asymptotic to as t tends to infinity. In dimensions 1 and 2 this formula gets replaced by 8 t / π {\displaystyle {\sqrt {8t/\pi }}} and 2 π t / log ⁡ ( t ) {\displaystyle 2{\pi }t/\log(t)} respectively. Whitman (1964) , a student of Spitzer, proved similar results for generalizations of Wiener sausages with cross sections given by more general compact sets than balls .
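A crude Monte Carlo check of Spitzer's leading-order formula is possible; the following Python sketch (illustrative only; the discretization of the path and all parameter choices are assumptions) estimates the sausage volume by sampling probe points around a discretized Brownian path:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def sausage_volume(delta, t, n_steps=4000, n_probe=40000):
    """Crude Monte Carlo estimate of |W_delta(t)| for 3D Brownian motion.

    The path is discretized into n_steps increments, so excursions between
    successive points are ignored; probe points are sampled uniformly in a
    bounding box and tested for distance <= delta from the discretized path.
    """
    dt = t / n_steps
    path = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_steps, 3)), axis=0)
    lo, hi = path.min(axis=0) - delta, path.max(axis=0) + delta
    probes = rng.uniform(lo, hi, size=(n_probe, 3))
    dist, _ = cKDTree(path).query(probes)
    return np.prod(hi - lo) * np.mean(dist <= delta)

delta, t = 0.05, 1.0
est = np.mean([sausage_volume(delta, t) for _ in range(10)])
print(est, 2 * np.pi * delta * t)   # same order as Spitzer's leading term
```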
https://en.wikipedia.org/wiki/Wiener_sausage
In mathematics, the Wiener series, or Wiener G-functional expansion, originates from the 1958 book of Norbert Wiener. It is an orthogonal expansion for nonlinear functionals, closely related to the Volterra series and having the same relation to it as an orthogonal Hermite polynomial expansion has to a power series. For this reason it is also known as the Wiener–Hermite expansion. The analogues of the coefficients are referred to as Wiener kernels. The terms of the series are orthogonal (uncorrelated) with respect to a statistical input of white noise. This property allows the terms to be identified in applications by the Lee–Schetzen method. The Wiener series is important in nonlinear system identification. In this context, the series approximates the functional relation of the output to the entire history of system input at any time. The Wiener series has been applied mostly to the identification of biological systems, especially in neuroscience. The name Wiener series is almost exclusively used in system theory. In the mathematical literature it occurs as the Itô expansion (1951), which has a different form but is entirely equivalent to it. The Wiener series should not be confused with the Wiener filter, which is another algorithm developed by Norbert Wiener used in signal processing. Given a system with an input/output pair ( x ( t ) , y ( t ) ) {\displaystyle (x(t),y(t))} where the input is white noise with zero mean value and power A, the output of the system can be written as the sum of a series of Wiener G-functionals y ( n ) = ∑ p ( G p x ) ( n ) {\displaystyle y(n)=\sum _{p}(G_{p}x)(n)} In the following the expressions of the G-functionals up to the fifth order will be given: [ clarification needed ]
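As a hedged illustration of the Lee–Schetzen identification mentioned above: for zero-mean Gaussian white noise of power A, the first-order Wiener kernel can be estimated by cross-correlation, k1(τ) = E[y(t)x(t−τ)]/A. The toy system, kernel, and all parameters in this Python sketch are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete white-noise input with power A (variance A per sample)
A, N = 1.0, 200_000
x = rng.normal(0.0, np.sqrt(A), N)

# A toy nonlinear system: linear filtering plus a quadratic distortion
h = np.exp(-np.arange(20) / 5.0)        # "true" first-order kernel (assumed)
lin = np.convolve(x, h)[:N]
y = lin + 0.3 * lin**2                  # output of the "unknown" system

# Lee-Schetzen estimate of the first-order Wiener kernel:
#   k1(tau) = E[y(t) x(t - tau)] / A
# (the even-order term drops out because odd Gaussian moments vanish)
k1 = np.array([np.mean(y[t:] * x[:N - t]) for t in range(20)]) / A

print(np.allclose(k1, h, atol=0.05))    # close to the true kernel
```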
https://en.wikipedia.org/wiki/Wiener_series
The Wiener–Ikehara theorem is a Tauberian theorem, originally published by Shikao Ikehara, a student of Norbert Wiener's, in 1931. It is a special case of Wiener's Tauberian theorems, which were published by Wiener one year later. It can be used to prove the prime number theorem (Chandrasekharan, 1969), under the assumption that the Riemann zeta function has no zeros on the line of real part one. Let A ( x ) be a non-negative, monotonic nondecreasing function of x, defined for 0 ≤ x < ∞. Suppose that the integral ∫ 0 ∞ A ( x ) e − x s d x {\displaystyle \textstyle \int _{0}^{\infty }A(x)e^{-xs}\,dx} converges for ℜ( s ) > 1 to the function ƒ ( s ) and that, for some non-negative number c, the difference ƒ ( s ) − c / ( s − 1 ) has an extension as a continuous function for ℜ( s ) ≥ 1. Then the limit as x goes to infinity of e − x A ( x ) is equal to c. An important number-theoretic application of the theorem is to Dirichlet series of the form ∑ n = 1 ∞ a ( n ) n − s {\displaystyle \textstyle \sum _{n=1}^{\infty }a(n)n^{-s}} where a ( n ) is non-negative. If the series converges to an analytic function in ℜ( s ) ≥ b, with a simple pole of residue c at s = b, then ∑ n ≤ X a ( n ) ∼ ( c / b ) X b {\displaystyle \textstyle \sum _{n\leq X}a(n)\sim (c/b)X^{b}} . Applying this to the logarithmic derivative of the Riemann zeta function, where the coefficients in the Dirichlet series are values of the von Mangoldt function, it is possible to deduce the prime number theorem from the fact that the zeta function has no zeroes on the line ℜ( s ) = 1.
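A sketch of how the theorem yields the prime number theorem via the Chebyshev function ψ (this is the standard argument, supplied here for clarity rather than quoted from the article):

```latex
% Take A(x) = \psi(e^x), where \psi(t) = \sum_{n \le t} \Lambda(n) is the
% Chebyshev function, which is non-negative and nondecreasing.  By Abel
% summation and the substitution t = e^x,
\[
  -\frac{\zeta'(s)}{\zeta(s)}
  = s \int_1^\infty \psi(t)\, t^{-s-1}\, dt
  = s \int_0^\infty \psi(e^x)\, e^{-xs}\, dx .
\]
% Since \zeta has a simple pole at s = 1 and, by assumption, no zeros with
% \Re(s) = 1, the hypotheses of the theorem hold with c = 1, giving
\[
  \lim_{x \to \infty} e^{-x}\,\psi(e^x) = 1,
  \qquad\text{i.e.}\qquad \psi(t) \sim t ,
\]
% which is equivalent to the prime number theorem.
```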
https://en.wikipedia.org/wiki/Wiener–Ikehara_theorem
In applied mathematics , the Wiener–Khinchin theorem or Wiener–Khintchine theorem , also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem , states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Norbert Wiener proved this theorem for the case of a deterministic function in 1930; [ 8 ] Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. [ 9 ] [ 10 ] Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914. [ 11 ] [ 12 ] For continuous time, the Wiener–Khinchin theorem says that if x {\displaystyle x} is a wide-sense-stationary random process whose autocorrelation function (sometimes called autocovariance ) defined in terms of statistical expected value r x x ( τ ) = E [ x ( t ) ∗ x ( t − τ ) ] < ∞ , ∀ τ , t ∈ R , {\displaystyle r_{xx}(\tau )=\mathbb {E} {\big [}x(t)^{*}x(t-\tau ){\big ]}<\infty ,\quad \forall \tau ,t\in \mathbb {R} ,} where the asterisk denotes complex conjugate , then there exists a monotone function F ( f ) {\displaystyle F(f)} in the frequency domain − ∞ < f < ∞ {\displaystyle -\infty <f<\infty } , or equivalently a non negative Radon measure μ {\displaystyle \mu } on the frequency domain, such that r x x ( τ ) = ∫ − ∞ ∞ e 2 π i τ f μ ( d f ) = ∫ − ∞ ∞ e 2 π i τ f d F ( f ) , {\displaystyle r_{xx}(\tau )=\int _{-\infty }^{\infty }e^{2\pi i\tau f}\mu (df)=\int _{-\infty }^{\infty }e^{2\pi i\tau f}dF(f),} where the integral is a Riemann–Stieltjes integral . [ 1 ] [ 13 ] This is a kind of spectral decomposition of the auto-correlation function. F {\displaystyle F} is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum. The ordinary Fourier transform of x ( t ) {\displaystyle x(t)} does not exist in general, because stochastic random functions are usually not absolutely integrable . Nor is r x x {\displaystyle r_{xx}} assumed to be absolutely integrable, so it need not have a Fourier transform either. However, if the measure μ ( d f ) = d F ( f ) {\displaystyle \mu (df)=dF(f)} is absolutely continuous (e.g. if the process is purely indeterministic), then F {\displaystyle F} is differentiable almost everywhere and has a Radon-Nikodym derivative given by S ( f ) = μ ( d f ) d f . {\displaystyle S(f)={\frac {\mu (df)}{df}}.} In this case, one can determine S ( f ) {\displaystyle S(f)} , the power spectral density of x ( t ) {\displaystyle x(t)} , by taking the averaged derivative of F {\displaystyle F} . Because the left and right derivatives of F {\displaystyle F} exist everywhere, i.e. we can put S ( f ) = 1 2 ( lim ε ↓ 0 1 ε ( F ( f + ε ) − F ( f ) ) + lim ε ↑ 0 1 ε ( F ( f + ε ) − F ( f ) ) ) {\displaystyle S(f)={\frac {1}{2}}\left(\lim _{\varepsilon \downarrow 0}{\frac {1}{\varepsilon }}{\big (}F(f+\varepsilon )-F(f){\big )}+\lim _{\varepsilon \uparrow 0}{\frac {1}{\varepsilon }}{\big (}F(f+\varepsilon )-F(f){\big )}\right)} everywhere, [ 14 ] (obtaining that F is the integral of its averaged derivative [ 15 ] ), and the theorem simplifies to r x x ( τ ) = ∫ − ∞ ∞ e 2 π i τ f S ( f ) d f . 
{\displaystyle r_{xx}(\tau )=\int _{-\infty }^{\infty }e^{2\pi i\tau f}\,S(f)df.} Assuming that r {\displaystyle r} and S {\displaystyle S} are "sufficiently nice" such that the Fourier inversion theorem is valid, the Wiener–Khinchin theorem takes the simple form of saying that r {\displaystyle r} and S {\displaystyle S} are a Fourier transform pair , and S ( f ) = ∫ − ∞ ∞ r x x ( τ ) e − 2 π i f τ d τ . {\displaystyle S(f)=\int _{-\infty }^{\infty }r_{xx}(\tau )e^{-2\pi if\tau }\,d\tau .} For the discrete-time case, the power spectral density of the function with discrete values x n {\displaystyle x_{n}} is where ω = 2 π f {\displaystyle \omega =2\pi f} is the angular frequency, i {\displaystyle i} is used to denote the imaginary unit (in engineering, sometimes the letter j {\displaystyle j} is used instead) and r x x ( k ) {\displaystyle r_{xx}(k)} is the discrete autocorrelation function of x n {\displaystyle x_{n}} , defined in its deterministic or stochastic formulation. Provided r x x {\displaystyle r_{xx}} is absolutely summable, i.e. the result of the theorem then can be written as Being a discrete-time sequence, the spectral density is periodic in the frequency domain. For this reason, the domain of the function S {\displaystyle S} is usually restricted to [ − π , π ] {\displaystyle [-\pi ,\pi ]} (note the interval is open from one side). The theorem is useful for analyzing linear time-invariant systems (LTI systems) when the inputs and outputs are not square-integrable, so their Fourier transforms do not exist. A corollary is that the Fourier transform of the autocorrelation function of the output of an LTI system is equal to the product of the Fourier transform of the autocorrelation function of the input of the system times the squared magnitude of the Fourier transform of the system impulse response. [ 16 ] This works even when the Fourier transforms of the input and output signals do not exist because these signals are not square-integrable, so the system inputs and outputs cannot be directly related by the Fourier transform of the impulse response. Since the Fourier transform of the autocorrelation function of a signal is the power spectrum of the signal, this corollary is equivalent to saying that the power spectrum of the output is equal to the power spectrum of the input times the energy transfer function . This corollary is used in the parametric method for power spectrum estimation. In many textbooks and in much of the technical literature, it is tacitly assumed that Fourier inversion of the autocorrelation function and the power spectral density is valid, and the Wiener–Khinchin theorem is stated, very simply, as if it said that the Fourier transform of the autocorrelation function was equal to the power spectral density , ignoring all questions of convergence [ 17 ] (similar to Einstein's paper [ 11 ] ). But the theorem (as stated here) was applied by Norbert Wiener and Aleksandr Khinchin to the sample functions (signals) of wide-sense-stationary random processes , signals whose Fourier transforms do not exist. Wiener's contribution was to make sense of the spectral decomposition of the autocorrelation function of a sample function of a wide-sense-stationary random process even when the integrals for the Fourier transform and Fourier inversion do not make sense. 
Further complicating the issue is that the discrete Fourier transform always exists for digital, finite-length sequences, meaning that the theorem can be blindly applied to calculate autocorrelations of numerical sequences. As mentioned earlier, the relation of this discrete sampled data to a mathematical model is often misleading, and related errors can show up as a divergence when the sequence length is modified. Some authors refer to R {\displaystyle R} as the autocovariance function. They then proceed to normalize it by dividing by R ( 0 ) {\displaystyle R(0)} , to obtain what they refer to as the autocorrelation function.
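A numerical illustration of the discrete-time statement (illustrative Python; the AR(1) test process and all parameters are assumptions, not from the sources):

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) process x[n] = a x[n-1] + w[n]: wide-sense stationary, with known
# power spectral density S(f) = sigma^2 / |1 - a e^{-2 pi i f}|^2
a, sigma, N, K = 0.7, 1.0, 2**18, 64
w = rng.normal(0.0, sigma, N)
x = np.empty(N)
x[0] = w[0] / np.sqrt(1 - a**2)          # start near stationarity
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]

# Sample autocorrelation r(k) for lags 0..K, extended by symmetry r(-k) = r(k)
r = np.array([np.mean(x[k:] * x[:N - k]) for k in range(K + 1)])
lags = np.arange(-K, K + 1)
r_sym = np.concatenate([r[:0:-1], r])

# Wiener-Khinchin: S(f) is the Fourier transform of the autocorrelation
freqs = np.linspace(-0.5, 0.5, 201)
S_wk = np.array([np.sum(r_sym * np.exp(-2j * np.pi * f * lags)).real
                 for f in freqs])
S_exact = sigma**2 / np.abs(1 - a * np.exp(-2j * np.pi * freqs))**2

# Close to the closed form, up to statistical and lag-truncation error
print(np.max(np.abs(S_wk - S_exact) / S_exact))
```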
https://en.wikipedia.org/wiki/Wiener–Khinchin_theorem
The Wieringerrandmeer or bordering lake of Wieringen was a project in the Dutch province of North Holland and the (former) municipalities of Wieringen and Wieringermeer. It planned to create islands and a bordering lake. The project was canceled in 2010 by the provincial and national governments. Wieringen has been connected to the mainland since 1932 and is used, among other things, for housing and agriculture. In the mid-1990s, Wieringen initiated a project to restore its separate character by creating a bordering lake between North Holland and the polder of Wieringermeer. The authorities of North Holland agreed. The Wieringerrandmeer was to be part of a series of lakes, including the Amstelmeer, the IJsselmeer and the Wadden Sea. It was intended to "boost economic and social development" by creating a recreational lake and stimulating business. The project's slogan was "the Experience of the lake." [ 1 ] In 2003, the province of North Holland, the various municipalities, and the water board launched a contest for the development project. Five consortia entered. The winning consortium, Wirense, consisted of Volker Wessels and Boskalis, which would participate in the project and take joint responsibility. The governments and the selected consortium drafted a Memorandum of Understanding for cooperation. On March 5, 2007, the public-private partnership (PPP) between the government (provincial and municipal) and the operators (Boskalis, Volker Wessels) for the construction of the Wieringerrandmeer was concluded. Subsequently, the planning and preparations were worked out in more detail. On December 19, 2007, the executive of North Holland approved the project. The project was also approved by the Steering Committee Wieringerrandmeer: the province, the municipalities, and the Wirense group. The National Forest areas and the water board would be involved. According to the original schedule, construction was to begin in summer 2010. The lake was expected to be created within five years, but all the work could last until around 2030. With the construction of the Wieringerrandmeer, continuity could be established between the IJsselmeer and the Amstelmeer for forests, water bodies and reedbeds. This continuity of the provincial ecological structure was called the Northern Arc. The completed plan would include: a total area of 600 hectares (6 km 2 ), including 250 hectares (2.5 km 2 ) for housing; about 860 hectares (8.6 km 2 ) of water surface; over 400 hectares (4 km 2 ) of new nature and 128 hectares (1.28 km 2 ) of forest plantations. Opponents of the project were concerned that the project would alter the region's water balance and adversely affect agricultural land, with thirty farms in Wieringen and Wieringermeer closing and many farmers forced to move away. On June 27, 2008, an alternative plan, "The Other Wieringerrandmeer", developed by residents and farmers who were part of environmental organizations, was presented. It proposed reducing the area to be flooded, building fewer homes that would be located closer to the lake, and requiring fewer farms to relocate. Some luxury items, such as the creation of the Zuiderhaven lock between the Amstelmeer and the IJsselmeer, were removed. It concluded that it was unnecessary to build a large lake on the edge of North Holland. On November 3, 2010, the Gedeputeerde Staten (provincial executive) of North Holland canceled the project for financial reasons; on November 15 the States-Provincial confirmed the decision.
https://en.wikipedia.org/wiki/Wieringerrandmeer
A Wigner crystal is the solid (crystalline) phase of electrons first predicted by Eugene Wigner in 1934. [ 1 ] [ 2 ] A gas of electrons moving in a uniform, inert, neutralizing background (i.e. Jellium Model ) will crystallize and form a lattice if the electron density is less than a critical value. This is because the potential energy dominates the kinetic energy at low densities, so the detailed spatial arrangement of the electrons becomes important. To minimize the potential energy, the electrons form a bcc ( body-centered cubic ) lattice in 3 D , a triangular lattice in 2D and an evenly spaced lattice in 1D . Most experimentally observed Wigner clusters exist due to the presence of the external confinement, i.e. external potential trap. As a consequence, deviations from the b.c.c or triangular lattice are observed. [ 3 ] A crystalline state of the 2D electron gas can also be realized by applying a sufficiently strong magnetic field. [ citation needed ] However, it is still not clear whether it is the Wigner crystallization that has led to observation of insulating behaviour in magnetotransport measurements on 2D electron systems, since other candidates are present, such as Anderson localization . [ clarification needed ] More generally, a Wigner crystal phase can also refer to a crystal phase occurring in non-electronic systems at low density. In contrast, most crystals melt as the density is lowered. Examples seen in the laboratory are charged colloids or charged plastic spheres. [ citation needed ] A uniform electron gas at zero temperature is characterised by a single dimensionless parameter, the so-called Wigner–Seitz radius r s = a / a b , where a is the average inter-particle spacing and a b is the Bohr radius . The kinetic energy of an electron gas scales as 1/ r s 2 , this can be seen for instance by considering a simple Fermi gas . The potential energy, on the other hand, is proportional to 1/ r s . When r s becomes larger at low density, the latter becomes dominant and forces the electrons as far apart as possible. As a consequence, they condense into a close-packed lattice. The resulting electron crystal is called the Wigner crystal. [ 4 ] Based on the Lindemann criterion one can find an estimate for the critical r s . The criterion states that the crystal melts when the root-mean-square displacement of the electrons ⟨ r 2 ⟩ {\displaystyle {\sqrt {\langle r^{2}\rangle }}} is about a quarter of the lattice spacing a . On the assumption that vibrations of the electrons are approximately harmonic, one can use that for a quantum harmonic oscillator the root mean square displacement in the ground state (in 3D) is given by with ℏ {\displaystyle \hbar } the Planck constant , m e the electron mass and ω the characteristic frequency of the oscillations. The latter can be estimated by considering the electrostatic potential energy for an electron displaced by r from its lattice point. Say that the Wigner–Seitz cell associated to the lattice point is approximately a sphere of radius a /2. The uniform, neutralizing background then gives rise to a smeared positive charge of density 6 e / a 3 π {\displaystyle 6e/a^{3}\pi } with e {\displaystyle e} the electron charge . The electric potential felt by the displaced electron as a result of this is given by with ε 0 the vacuum permittivity . 
Comparing − e φ ( r ) {\displaystyle -e\varphi (r)} to the energy of a harmonic oscillator, one can read off or, combining this with the result from the quantum harmonic oscillator for the root-mean-square displacement The Lindemann criterion then gives the estimate that r s > 40 is required for a stable Wigner crystal. Quantum Monte Carlo simulations indicate that the uniform electron gas actually crystallizes at r s = 106 in 3D [ 5 ] [ 6 ] and r s = 31 in 2D. [ 7 ] [ 8 ] [ 9 ] For classical systems at elevated temperatures one uses the average interparticle interaction in units of the temperature: G = e 2 / k B T a {\displaystyle G=e^{2}/k_{B}Ta} . The Wigner transition occurs at G = 170 in 3D [ 10 ] and G = 125 in 2D. [ 11 ] It is believed that ions, such as those of iron, form a Wigner crystal in the interiors of white dwarf stars. In practice, it is difficult to experimentally realize a Wigner crystal because quantum mechanical fluctuations overpower the Coulomb repulsion and quickly cause disorder. Low electron density is needed. One notable example occurs in quantum dots with low electron densities or high magnetic fields, where electrons will spontaneously localize in some situations, forming a so-called rotating "Wigner molecule", [ 12 ] a crystalline-like state adapted to the finite size of the quantum dot. Wigner crystallization in a two-dimensional electron gas under high magnetic fields was predicted (and was observed experimentally) [ 13 ] to occur for small filling factors [ 14 ] (less than ν = 1 / 5 {\displaystyle \nu =1/5} ) of the lowest Landau level. For larger fractional fillings, the Wigner crystal was thought to be unstable relative to the fractional quantum Hall effect (FQHE) liquid states. A Wigner crystal was observed in the immediate neighborhood of the large fractional filling ν = 1 / 3 {\displaystyle \nu =1/3} , [ 15 ] and led to a new understanding [ 16 ] (based on the pinning of a rotating Wigner molecule) of the interplay between quantum-liquid and pinned-solid phases in the lowest Landau level. Another experimental realisation of the Wigner crystal occurred in single-electron transistors with very low currents, where a 1D Wigner crystal formed. The current due to each electron can be directly detected experimentally. [ 17 ] Additionally, experiments using quantum wires (short quantum wires are sometimes referred to as "quantum point contacts" (QPCs)) have led to suggestions of Wigner crystallization in 1D systems. [ 18 ] In the experiment performed by Hew et al., a 1D channel was formed by confining electrons in both directions transverse to the electron transport, by the band structure of the GaAs / AlGaAs heterojunction and the potential from the QPC. The device design allowed the electron density in the 1D channel to vary relatively independently of the strength of the transverse confining potential, thus allowing experiments to be performed in the regime in which Coulomb interactions between electrons dominate the kinetic energy. Conductance through a QPC shows a series of plateaux quantized in units of the conductance quantum, 2 e 2 / h . However, this experiment reported a disappearance of the first plateau (resulting in a jump in conductance of 4 e 2 / h ), which was attributed to the formation of two parallel rows of electrons. In a strictly 1D system, electrons occupy equidistant points along a line, i.e. a 1D Wigner crystal. 
As the electron density increases, the Coulomb repulsion becomes large enough to overcome the electrostatic potential confining the 1D Wigner crystal in the transverse direction, leading to a lateral rearrangement of the electrons into a double-row structure. [ 19 ] [ 20 ] The evidence of a double row observed by Hew et al. may point towards the beginnings of a Wigner crystal in a 1D system. In 2018, a transverse magnetic focusing technique that combines charge and spin detection was used to directly probe a Wigner crystal and its spin properties in 1D quantum wires with tunable width. It provides direct evidence and a better understanding of the nature of zigzag Wigner crystallization by unveiling both the structural and the spin phase diagrams. [ 21 ] Direct evidence for the formation of small Wigner crystals was reported in 2019. [ 22 ] In 2024, physicists managed to directly image a Wigner crystal with a scanning tunneling microscope. [ 23 ] [ 24 ] Some layered van der Waals materials, such as transition metal dichalcogenides, have intrinsically large r s values which exceed the 2D theoretical Wigner crystal limit r s = 31~38. The origin of the large r s is partly the suppressed kinetic energy arising from a strong electron-phonon interaction, which leads to polaronic band narrowing, and partly the low carrier density n at low temperatures. The charge density wave (CDW) state in such materials, such as 1T-TaS 2 , with a sparsely filled √13x√13 superlattice and r s = 70~100, may be better described in terms of a Wigner crystal than the more traditional charge density wave. This viewpoint is supported both by modelling and by systematic scanning tunnelling microscopy measurements. [ 25 ] Thus, Wigner crystal superlattices in so-called CDW systems may be considered to be the first direct observation of ordered electron states localised by mutual Coulomb interaction. An important criterion is the depth of charge modulation, which depends on the material; only systems where r s exceeds the theoretical limit can be regarded as Wigner crystals. In 2020, a direct image of a Wigner crystal was obtained by microscopy in molybdenum diselenide / molybdenum disulfide (MoSe2/MoS2) moiré heterostructures. [ 26 ] [ 27 ] A 2021 experiment created a Wigner crystal near 0 K by confining electrons using a monolayer sheet of molybdenum diselenide. The sheet was sandwiched between two graphene electrodes and a voltage was applied. The resulting electron spacing was around 20 nanometers, as measured by the stationary appearance of light-agitated excitons. [ 28 ] [ 29 ] Another 2021 experiment reported quantum Wigner crystals, where quantum fluctuations dominate over thermal fluctuations, in two coupled layers of molybdenum diselenide without any magnetic field. The researchers documented both thermal and quantum melting of the Wigner crystal in this experiment. [ 30 ] [ 31 ]
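The quoted critical values of r s translate into critical carrier densities via the definition of the Wigner–Seitz radius. A small Python sketch (using the bare Bohr radius, so the numbers apply to the idealized electron gas; in semiconductors the much larger effective Bohr radius should be used, giving correspondingly lower densities):

```python
import math

a_B = 5.29177210903e-11   # Bohr radius in metres

def n3d(rs):
    # 3D convention: (4 pi / 3) (r_s a_B)^3 = 1 / n
    return 3.0 / (4.0 * math.pi * (rs * a_B) ** 3)

def n2d(rs):
    # 2D convention: pi (r_s a_B)^2 = 1 / n
    return 1.0 / (math.pi * (rs * a_B) ** 2)

print(f"3D, r_s = 106: n < {n3d(106):.2e} m^-3")   # ~ 1.4e24 m^-3
print(f"2D, r_s =  31: n < {n2d(31):.2e} m^-2")    # ~ 1.2e17 m^-2
```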
https://en.wikipedia.org/wiki/Wigner_crystal
The Wigner effect (named for its discoverer, Eugene Wigner ), [ 1 ] also known as the discomposition effect or Wigner's disease, [ 2 ] is the displacement of atoms in a solid caused by neutron radiation. Any solid can display the Wigner effect. The effect is of most concern in neutron moderators, such as graphite, intended to reduce the speed of fast neutrons, thereby turning them into thermal neutrons capable of sustaining a nuclear chain reaction involving uranium-235. To cause the Wigner effect, neutrons that collide with the atoms in a crystal structure must have enough energy to displace them from the lattice. This amount (the threshold displacement energy ) is approximately 25 eV. A neutron's energy can vary widely, but it is not uncommon to have energies up to and exceeding 10 MeV (10,000,000 eV) in the centre of a nuclear reactor. A neutron with a significant amount of energy will create a displacement cascade in a matrix via elastic collisions. For example, a 1 MeV neutron striking graphite will create 900 displacements. Not all displacements will create defects, because some of the struck atoms will find and fill the vacancies that were either small pre-existing voids or vacancies newly formed by the other struck atoms. The atoms that do not find a vacancy come to rest in non-ideal locations; that is, not along the symmetrical lines of the lattice. These interstitial atoms (or simply "interstitials") and their associated vacancies are a Frenkel defect. Because these atoms are not in the ideal location, they have a Wigner energy associated with them, much as a ball at the top of a hill has gravitational potential energy. When a large number of interstitials have accumulated, they risk releasing all of their energy suddenly, creating a rapid, great increase in temperature. Sudden, unplanned increases in temperature can present a large risk for certain types of nuclear reactors with low operating temperatures. One such release was the indirect cause of the Windscale fire. Accumulation of energy in irradiated graphite has been recorded as high as 2.7 kJ /g (enough to raise the temperature by thousands of degrees), but is typically much lower than this. [ 3 ] Despite some reports, [ 4 ] Wigner energy buildup had nothing to do with the cause of the Chernobyl disaster: this reactor, like all contemporary power reactors, operated at a high enough temperature to allow the displaced graphite structure to realign itself before any potential energy could be stored. [ 5 ] Wigner energy may have played some part following the prompt critical neutron spike, when the accident entered the graphite fire phase of events. However, Wigner energy was the cause of the Windscale fire on 10 October 1957 at the Sellafield nuclear site in the UK. A buildup of Wigner energy can be relieved by heating the material. This process is known as annealing. In graphite this occurs at 250 °C (482 °F). [ 6 ] In 2003, it was postulated that Wigner energy can be stored by the formation of metastable defect structures in graphite. Notably, the large energy release observed at 200–250 °C has been described in terms of a metastable interstitial-vacancy pair. [ 7 ] The interstitial atom becomes trapped on the lip of the vacancy, and there is a barrier for it to recombine to give perfect graphite.
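A back-of-the-envelope check of the "thousands of degrees" figure (a minimal sketch; the constant heat capacity is an assumed room-temperature value for graphite, and since the heat capacity grows strongly with temperature the actual rise would be smaller):

```python
# Rough adiabatic temperature rise from a sudden release of stored Wigner
# energy: Delta T ~ E / c_p, with constant c_p assumed.
E_stored = 2700.0      # J/g, the maximum recorded value quoted above
c_p_graphite = 0.71    # J/(g*K), assumed room-temperature heat capacity

print(f"Delta T ~ {E_stored / c_p_graphite:.0f} K")   # ~ 3800 K
```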
https://en.wikipedia.org/wiki/Wigner_effect
The Wigner quasiprobability distribution (also called the Wigner function or the Wigner–Ville distribution , after Eugene Wigner and Jean-André Ville ) is a quasiprobability distribution . It was introduced by Eugene Wigner in 1932 [ 1 ] to study quantum corrections to classical statistical mechanics . The goal was to link the wavefunction that appears in Schrödinger's equation to a probability distribution in phase space . It is a generating function for all spatial autocorrelation functions of a given quantum-mechanical wavefunction ψ ( x ) . Thus, it maps [ 2 ] on the quantum density matrix in the map between real phase-space functions and Hermitian operators introduced by Hermann Weyl in 1927, [ 3 ] in a context related to representation theory in mathematics (see Weyl quantization ). In effect, it is the Wigner–Weyl transform of the density matrix, so the realization of that operator in phase space. It was later rederived by Jean Ville in 1948 as a quadratic (in signal) representation of the local time-frequency energy of a signal , [ 4 ] effectively a spectrogram . In 1949, José Enrique Moyal , who had derived it independently, recognized it as the quantum moment-generating functional, [ 5 ] and thus as the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in phase space (see Phase-space formulation ). It has applications in statistical mechanics , quantum chemistry , quantum optics , classical optics and signal analysis in diverse fields, such as electrical engineering , seismology , time–frequency analysis for music signals , spectrograms in biology and speech processing, and engine design . A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection ( ensemble ) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails for a quantum particle, due to the uncertainty principle . Instead, the above quasiprobability Wigner distribution plays an analogous role, but does not satisfy all the properties of a conventional probability distribution; and, conversely, satisfies boundedness properties unavailable to classical distributions. For instance, the Wigner distribution can and normally does take on negative values for states which have no classical model—and is a convenient indicator of quantum-mechanical interference. (See below for a characterization of pure states whose Wigner functions are non-negative.) Smoothing the Wigner distribution through a filter of size larger than ħ (e.g., convolving with a phase-space Gaussian, a Weierstrass transform , to yield the Husimi representation , below), results in a positive-semidefinite function, i.e., it may be thought to have been coarsened to a semi-classical one. [ a ] Regions of such negative value are provable (by convolving them with a small Gaussian) to be "small": they cannot extend to compact regions larger than a few ħ , and hence disappear in the classical limit . They are shielded by the uncertainty principle , which does not allow precise location within phase-space regions smaller than ħ , and thus renders such " negative probabilities " less paradoxical. 
The Wigner distribution W ( x , p ) of a pure state is defined as W ( x , p ) = def 1 π ℏ ∫ − ∞ ∞ ψ ∗ ( x + y ) ψ ( x − y ) e 2 i p y / ℏ d y , {\displaystyle W(x,p)~{\stackrel {\text{def}}{=}}~{\frac {1}{\pi \hbar }}\int _{-\infty }^{\infty }\psi ^{*}(x+y)\psi (x-y)e^{2ipy/\hbar }\,dy,} where ψ is the wavefunction, and x and p are position and momentum, but could be any conjugate variable pair (e.g. real and imaginary parts of the electric field or frequency and time of a signal). Note that it may have support in x even in regions where ψ has no support in x ("beats"). It is symmetric in x and p : where φ is the normalized momentum-space wave function, proportional to the Fourier transform of ψ . In 3D, In the general case, which includes mixed states, it is the Wigner transform of the density matrix : W ( x , p ) = 1 π ℏ ∫ − ∞ ∞ ⟨ x − y | ρ ^ | x + y ⟩ e 2 i p y / ℏ d y , {\displaystyle W(x,p)={\frac {1}{\pi \hbar }}\int _{-\infty }^{\infty }\langle x-y|{\hat {\rho }}|x+y\rangle e^{2ipy/\hbar }\,dy,} where ⟨ x | ψ ⟩ = ψ ( x ) . This Wigner transformation (or map) is the inverse of the Weyl transform , which maps phase-space functions to Hilbert-space operators, in Weyl quantization . Thus, the Wigner function is the cornerstone of quantum mechanics in phase space . In 1949, José Enrique Moyal elucidated how the Wigner function provides the integration measure (analogous to a probability density function ) in phase space, to yield expectation values from phase-space c-number functions g ( x , p ) uniquely associated to suitably ordered operators Ĝ through Weyl's transform (see Wigner–Weyl transform and property 7 below), in a manner evocative of classical probability theory . Specifically, an operator's Ĝ expectation value is a "phase-space average" of the Wigner transform of that operator: ⟨ G ^ ⟩ = ∫ d x d p W ( x , p ) g ( x , p ) . {\displaystyle \langle {\hat {G}}\rangle =\int dx\,dp\,W(x,p)g(x,p).} 1. W ( x , p ) is a real-valued function. 2. The x and p probability distributions are given by the marginals : 3. W ( x , p ) has the following reflection symmetries: 4. W ( x , p ) is Galilei-covariant : 5. The equation of motion for each point in the phase space is classical in the absence of forces: 6. State overlap is calculated as 7. Operator expectation values (averages) are calculated as phase-space averages of the respective Wigner transforms: 8. For W ( x , p ) to represent physical (positive) density matrices, it must satisfy 9. By virtue of the Cauchy–Schwarz inequality , for a pure state, it is constrained to be bounded: 10. The Wigner transformation is simply the Fourier transform of the antidiagonals of the density matrix, when that matrix is expressed in a position basis. [ 7 ] Let | m ⟩ ≡ a † m m ! | 0 ⟩ {\displaystyle |m\rangle \equiv {\frac {a^{\dagger m}}{\sqrt {m!}}}|0\rangle } be the m {\displaystyle m} -th Fock state of a quantum harmonic oscillator . Groenewold (1946) discovered its associated Wigner function, in dimensionless variables: where L m ( x ) {\displaystyle L_{m}(x)} denotes the m {\displaystyle m} -th Laguerre polynomial . This may follow from the expression for the static eigenstate wavefunctions, where H m {\displaystyle H_{m}} is the m {\displaystyle m} -th Hermite polynomial . From the above definition of the Wigner function, upon a change of integration variables, The expression then follows from the integral relation between Hermite and Laguerre polynomials. 
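The Fock-state formula above is easy to verify numerically. The following Python sketch (dimensionless convention ħ = m = ω = 1 assumed) evaluates Groenewold's expression, checks the position marginal against |ψ m (x)|² (property 2 above), and confirms that the Wigner function takes negative values for m > 0:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.special import eval_laguerre, factorial

# Groenewold's formula in dimensionless variables:
#   W_m(x, p) = ((-1)^m / pi) exp(-(x^2 + p^2)) L_m(2 (x^2 + p^2))
def wigner_fock(m, x, p):
    u = x**2 + p**2
    return (-1)**m / np.pi * np.exp(-u) * eval_laguerre(m, 2 * u)

# Position-space probability density of the m-th oscillator eigenstate
def psi_sq(m, x):
    c = np.zeros(m + 1)
    c[m] = 1.0
    return hermval(x, c)**2 * np.exp(-x**2) / (2**m * factorial(m) * np.sqrt(np.pi))

x = np.linspace(-6, 6, 401)
p = np.linspace(-6, 6, 401)
X, P = np.meshgrid(x, p, indexing="ij")

for m in (0, 1, 2):
    W = wigner_fock(m, X, P)
    marginal = W.sum(axis=1) * (p[1] - p[0])      # integrate out momentum
    print(m,
          np.allclose(marginal, psi_sq(m, x), atol=1e-6),  # marginal property
          W.min() < 0 if m > 0 else W.min() >= 0)          # negativity for m > 0
```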
[ 8 ] The Wigner transformation is a general invertible transformation of an operator Ĝ on a Hilbert space to a function g ( x , p ) on phase space and is given by Hermitian operators map to real functions. The inverse of this transformation, from phase space to Hilbert space, is called the Weyl transformation : (not to be confused with the distinct Weyl transformation in differential geometry ). The Wigner function W ( x , p ) discussed here is thus seen to be the Wigner transform of the density matrix operator ρ̂ . Thus the trace of an operator with the density matrix Wigner-transforms to the equivalent phase-space integral overlap of g ( x , p ) with the Wigner function. The Wigner transform of the von Neumann evolution equation of the density matrix in the Schrödinger picture is Moyal's evolution equation for the Wigner function: ∂ W ( x , p , t ) ∂ t = − { { W ( x , p , t ) , H ( x , p ) } } , {\displaystyle {\frac {\partial W(x,p,t)}{\partial t}}=-\{\{W(x,p,t),H(x,p)\}\},} where H ( x , p ) is the Hamiltonian, and {{⋅, ⋅}} is the Moyal bracket . In the classical limit, ħ → 0 , the Moyal bracket reduces to the Poisson bracket , while this evolution equation reduces to the Liouville equation of classical statistical mechanics. Formally, the classical Liouville equation can be solved in terms of the phase-space particle trajectories which are solutions of the classical Hamilton equations. This technique of solving partial differential equations is known as the method of characteristics . This method transfers to quantum systems, where the characteristics' "trajectories" now determine the evolution of Wigner functions. The solution of the Moyal evolution equation for the Wigner function is represented formally as where x t ( x , p ) {\displaystyle x_{t}(x,p)} and p t ( x , p ) {\displaystyle p_{t}(x,p)} are the characteristic trajectories subject to the quantum Hamilton equations with initial conditions x t = 0 ( x , p ) = x {\displaystyle x_{t=0}(x,p)=x} and p t = 0 ( x , p ) = p {\displaystyle p_{t=0}(x,p)=p} , and where ⋆ {\displaystyle \star } -product composition is understood for all argument functions. Since ⋆ {\displaystyle \star } -composition of functions is thoroughly nonlocal (the "quantum probability fluid" diffuses, as observed by Moyal), vestiges of local trajectories in quantum systems are barely discernible in the evolution of the Wigner distribution function. [ b ] In the integral representation of ⋆ {\displaystyle \star } -products, successive operations by them have been adapted to a phase-space path integral, to solve the evolution equation for the Wigner function [ 9 ] (see also [ 10 ] [ 11 ] [ 12 ] ). This non-local feature of Moyal time evolution [ 13 ] is illustrated in the gallery below, for Hamiltonians more complex than the harmonic oscillator. In the classical limit, the trajectory nature of the time evolution of Wigner functions becomes more and more distinct. At ħ = 0, the characteristics' trajectories reduce to the classical trajectories of particles in phase space. In the special case of the quantum harmonic oscillator , however, the evolution is simple and appears identical to the classical motion: a rigid rotation in phase space with a frequency given by the oscillator frequency. This is illustrated in the gallery below. This same time evolution occurs with quantum states of light modes , which are harmonic oscillators. 
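The rigid rotation of phase space for the harmonic oscillator can be seen directly. In the following minimal sketch (dimensionless units assumed, H = (p² + x²)/2), the Wigner function of a coherent state, a displaced ground-state Gaussian, is evolved by rotating its arguments along the backward classical flow:

```python
import numpy as np

def W0(x, p, x0=2.0, p0=0.0):
    # Coherent state: displaced ground-state Gaussian
    return np.exp(-((x - x0)**2 + (p - p0)**2)) / np.pi

def W(x, p, t):
    # W(x, p, t) = W0 evaluated along the backward classical rotation
    return W0(np.cos(t) * x - np.sin(t) * p,
              np.sin(t) * x + np.cos(t) * p)

xs = np.linspace(-4, 4, 201)
X, P = np.meshgrid(xs, xs, indexing="ij")
for t in (0.0, np.pi / 2, np.pi):
    i, j = np.unravel_index(np.argmax(W(X, P, t)), X.shape)
    print(f"t = {t:.2f}: peak at x = {xs[i]:+.2f}, p = {xs[j]:+.2f}")
# The peak moves (2, 0) -> (0, -2) -> (-2, 0): the classical orbit.
```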
The Wigner function allows one to study the classical limit, offering a comparison of the classical and quantum dynamics in phase space. [ 14 ] [ 15 ] It has been suggested that the Wigner function approach can be viewed as a quantum analogue of the operatorial formulation of classical mechanics introduced in 1932 by Bernard Koopman and John von Neumann: the time evolution of the Wigner function approaches, in the limit ħ → 0, the time evolution of the Koopman–von Neumann wavefunction of a classical particle. [ 16 ] Moments of the Wigner function generate symmetrized operator averages, in contrast to the normal order and antinormal order generated by the Glauber–Sudarshan P representation and Husimi Q representation respectively. The Wigner representation is thus very well suited for making semi-classical approximations in quantum optics [ 17 ] and in the field theory of Bose–Einstein condensates, where high mode occupation approaches a semiclassical limit. [ 18 ] As already noted, the Wigner function of a quantum state typically takes some negative values. Indeed, for a pure state in one variable, if W ( x , p ) ≥ 0 {\displaystyle W(x,p)\geq 0} for all x {\displaystyle x} and p {\displaystyle p} , then the wave function must have the form for some complex numbers a , b , c {\displaystyle a,b,c} with Re ⁡ ( a ) > 0 {\displaystyle \operatorname {Re} (a)>0} (Hudson's theorem [ 19 ] ). Note that a {\displaystyle a} is allowed to be complex. In other words, it is a one-dimensional Gaussian wave packet. Thus, pure states with non-negative Wigner functions are not necessarily minimum-uncertainty states in the sense of the Heisenberg uncertainty formula; rather, they give equality in the Schrödinger uncertainty formula, which includes an anticommutator term in addition to the commutator term. (With careful definition of the respective variances, all pure-state Wigner functions lead to Heisenberg's inequality all the same.) In higher dimensions, the characterization of pure states with non-negative Wigner functions is similar; the wave function must have the form where A {\displaystyle A} is a symmetric complex matrix whose real part is positive-definite, b {\displaystyle b} is a complex vector, and c is a complex number. [ 20 ] The Wigner function of any such state is a Gaussian distribution on phase space. Soto and Claverie [ 20 ] give an elegant proof of this characterization, using the Segal–Bargmann transform. The reasoning is as follows. The Husimi Q function of ψ {\displaystyle \psi } may be computed as the squared magnitude of the Segal–Bargmann transform of ψ {\displaystyle \psi } , multiplied by a Gaussian. Meanwhile, the Husimi Q function is the convolution of the Wigner function with a Gaussian. If the Wigner function of ψ {\displaystyle \psi } is non-negative everywhere on phase space, then the Husimi Q function will be strictly positive everywhere on phase space. Thus, the Segal–Bargmann transform F ( x + i p ) {\displaystyle F(x+ip)} of ψ {\displaystyle \psi } will be nowhere zero. Thus, by a standard result from complex analysis, we have for some holomorphic function g {\displaystyle g} . But in order for F {\displaystyle F} to belong to the Segal–Bargmann space (that is, for F {\displaystyle F} to be square-integrable with respect to a Gaussian measure), g {\displaystyle g} must have at most quadratic growth at infinity. From this, elementary complex analysis can be used to show that g {\displaystyle g} must actually be a quadratic polynomial. 
Thus, we obtain an explicit form of the Segal–Bargmann transform of any pure state whose Wigner function is non-negative. We can then invert the Segal–Bargmann transform to obtain the claimed form of the position wave function. There does not appear to be any simple characterization of mixed states with non-negative Wigner functions. It has been shown that the Wigner quasiprobability distribution function can be regarded as an ħ - deformation of another phase-space distribution function that describes an ensemble of de Broglie–Bohm causal trajectories. [ 21 ] Basil Hiley has shown that the quasi-probability distribution may be understood as the density matrix re-expressed in terms of a mean position and momentum of a "cell" in phase space, and the de Broglie–Bohm interpretation allows one to describe the dynamics of the centers of such "cells". [ 22 ] [ 23 ] There is a close connection between the description of quantum states in terms of the Wigner function and the method of quantum state reconstruction in terms of mutually unbiased bases. [ 24 ] The Wigner distribution was the first quasiprobability distribution to be formulated, but many more followed, formally equivalent and transformable to and from it (see Transformation between distributions in time–frequency analysis). As in the case of coordinate systems, on account of their varying properties, several such distributions have been formulated, each with advantages for specific applications: Nevertheless, in some sense, the Wigner distribution holds a privileged position among all these distributions, since it is the only one whose requisite star-product drops out (integrates out by parts to effective unity) in the evaluation of expectation values, as illustrated above, and so can be visualized as a quasiprobability measure analogous to the classical ones. As indicated, the formula for the Wigner function was independently derived several times in different contexts. In fact, apparently, Wigner was unaware that even within the context of quantum theory, it had been introduced previously by Heisenberg and Dirac, [ 26 ] [ 27 ] albeit purely formally: these two missed its significance, and that of its negative values, as they merely considered it as an approximation to the full quantum description of a system such as the atom. (Incidentally, Dirac would later become Wigner's brother-in-law, marrying his sister Manci.) Symmetrically, in most of his legendary 18-month correspondence with Moyal in the mid-1940s, Dirac was unaware that Moyal's quantum-moment generating function was effectively the Wigner function, and it was Moyal who finally brought it to his attention. [ 28 ]
https://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution
In theoretical physics, the composition of two non-collinear Lorentz boosts results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation. This rotation is called Thomas rotation, Thomas–Wigner rotation or Wigner rotation. If a sequence of non-collinear boosts returns an object to its initial velocity, then the sequence of Wigner rotations can combine to produce a net rotation called the Thomas precession. [ 1 ] The rotation was discovered by Émile Borel in 1913, [ 2 ] [ 3 ] [ 4 ] rediscovered and proved by Ludwik Silberstein in his 1914 book The Theory of Relativity, [ 5 ] rediscovered by Llewellyn Thomas in 1926, [ 6 ] and rederived by Eugene Wigner in 1939. [ 7 ] Wigner acknowledged Silberstein. There are still ongoing discussions about the correct form of equations for the Thomas rotation in different reference systems, with contradicting results. [ 8 ] Goldstein: [ 9 ] Einstein's principle of velocity reciprocity (EPVR) reads [ 10 ] With less careful interpretation, the EPVR is seemingly violated in some situations, [ 11 ] but on closer analysis there is no such violation. Let u be the velocity with which the lab reference frame moves relative to an object A, and let v be the velocity of another object B, measured in the lab reference frame. If u and v are not aligned, the coordinates of the relative velocities of these two bodies will not be opposite, even though the actual velocity vectors themselves are indeed opposites (the coordinates fail to be opposites because the two travellers are not using the same coordinate basis vectors). If A and B both started in the lab system with coordinates matching those of the lab, and subsequently use coordinate systems that result from their respective boosts from that system, then the velocity that A will measure on B will be given in terms of A's new coordinate system by: And the velocity that B will measure on A will be given in terms of B's coordinate system by: The Lorentz factors for the velocities that either A sees on B or B sees on A are the same: but the components are not opposites, i.e. v A B ≠ − v B A {\textstyle \mathbf {v} _{AB}\neq -\mathbf {v} _{BA}} . However, this does not mean that the velocities are not opposites: the components in each case are multiplied by different basis vectors (and all observers agree that the difference is by a rotation of coordinates such that the actual velocity vectors are indeed exact opposites). The angle of rotation can be calculated in two ways: Or: And the axis of rotation is: When studying the Thomas rotation at the fundamental level, one typically uses a setup with three coordinate frames, Σ, Σ′, Σ′′. Frame Σ′ has velocity u relative to frame Σ, and frame Σ′′ has velocity v relative to frame Σ′. The axes are, by construction, oriented as follows. Viewed from Σ′, the axes of Σ′ and Σ are parallel (the same holds true for the pair of frames when viewed from Σ). Also viewed from Σ′, the spatial axes of Σ′ and Σ′′ are parallel (and the same holds true for the pair of frames when viewed from Σ′′). [ 12 ] This is an application of EPVR: if u is the velocity of Σ′ relative to Σ, then u ′ = − u is the velocity of Σ relative to Σ′. The velocity 3 -vector u makes the same angles with respect to coordinate axes in both the primed and unprimed systems. 
This does not represent a snapshot taken in either of the two frames of the combined system at any particular time, as should be clear from the detailed description below. This is possible, since a boost in, say, the positive z -direction, preserves orthogonality of the coordinate axes. A general boost B ( w ) can be expressed as L = R −1 ( e z , w ) B z ( w ) R ( e z , w ) , where R ( e z , w ) is a rotation taking the z -axis into the direction of w and B z is a boost in the new z -direction. [ 13 ] [ 14 ] [ 15 ] Each rotation retains the property that the spatial coordinate axes are orthogonal. The boost will stretch the (intermediate) z -axis by a factor γ , while leaving the intermediate x -axis and y -axis in place. [ 16 ] The fact that the coordinate axes are non-parallel in this construction after two consecutive non-collinear boosts is a precise expression of the phenomenon of Thomas rotation. [ nb 1 ] The velocity of Σ′′ as seen in Σ is denoted w d = u ⊕ v , where ⊕ refers to the relativistic addition of velocity (and not ordinary vector addition ), given by [ 17 ] and γ u is the Lorentz factor of the velocity u (the vertical bars | u | indicate the magnitude of the vector ). The velocity u can be thought of as the velocity of a frame Σ′ relative to a frame Σ , and v is the velocity of an object, say a particle or another frame Σ′′ , relative to Σ′ . In the present context, all velocities are best thought of as relative velocities of frames unless otherwise specified. The result w = u ⊕ v is then the velocity of frame Σ′′ relative to frame Σ . Although velocity addition is nonlinear, non- associative , and non- commutative , the result of the operation correctly obtains a velocity with a magnitude less than c . If ordinary vector addition were used, it would be possible to obtain a velocity with a magnitude larger than c . The Lorentz factors of both composite velocities are equal, and the norms are equal under interchange of the velocity vectors. Since the two possible composite velocities have equal magnitude but different directions, one must be a rotated copy of the other. More detail and other properties of no direct concern here can be found in the main article. Consider the reversed configuration, namely, frame Σ moves with velocity − u relative to frame Σ′ , and frame Σ′ , in turn, moves with velocity − v relative to frame Σ′′ . In short, u → − u and v → − v by EPVR. Then the velocity of Σ relative to Σ′′ is (− v ) ⊕ (− u ) ≡ − v ⊕ u . By EPVR again, the velocity of Σ′′ relative to Σ is then w i = v ⊕ u . One finds w d ≠ w i . While they are equal in magnitude, there is an angle between them. For a single boost between two inertial frames, there is only one unambiguous relative velocity (or its negative). For two boosts, the peculiar result of two inequivalent relative velocities instead of one seems to contradict the symmetry of relative motion between any two frames. Which is the correct velocity of Σ′′ relative to Σ ? Since this inequality may be somewhat unexpected, and potentially breaks EPVR, this question is warranted. [ nb 2 ] The answer to the question lies in the Thomas rotation, and one must be careful in specifying which coordinate system is involved at each step. When viewed from Σ , the coordinate axes of Σ and Σ′′ are not parallel. While this can be hard to imagine since both pairs (Σ, Σ′) and (Σ′, Σ′′) have parallel coordinate axes, it is easy to explain mathematically. 
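A numerical check of w d ≠ w i is immediate. The following Python sketch implements the composition law quoted above (units with c = 1 assumed) and shows that the two composite velocities have equal magnitude but differ in direction, by an angle which turns out to be the Thomas–Wigner angle (compare the matrix decomposition sketched further below):

```python
import numpy as np

c = 1.0  # work in units where c = 1

def gamma(v):
    return 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)

def vadd(u, v):
    """Relativistic composition u (+) v of frame velocity u and velocity v."""
    gu = gamma(u)
    udotv = np.dot(u, v)
    return (u + v / gu + (gu / (c**2 * (1.0 + gu))) * udotv * u) / (1.0 + udotv / c**2)

u = np.array([0.6, 0.0, 0.0])
v = np.array([0.0, 0.6, 0.0])

wd = vadd(u, v)   # u (+) v
wi = vadd(v, u)   # v (+) u

print(np.linalg.norm(wd), np.linalg.norm(wi))   # equal magnitudes
print(wd, wi)                                   # different directions
cos_angle = np.dot(wd, wi) / np.dot(wd, wd)
print(np.degrees(np.arccos(np.clip(cos_angle, -1, 1))))   # ~ 12.7 degrees
```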
Velocity addition does not provide a complete description of the relation between the frames. One must formulate the complete description in terms of Lorentz transformations corresponding to the velocities. A Lorentz boost with any velocity v (magnitude less than c) is given symbolically by

{\displaystyle X'=B(\mathbf {v} )X,}

where the coordinates and transformation matrix are compactly expressed in block matrix form,

{\displaystyle X={\begin{pmatrix}ct\\\mathbf {r} \end{pmatrix}},\qquad B(\mathbf {v} )={\begin{pmatrix}\gamma _{v}&-\gamma _{v}\mathbf {v} ^{\mathrm {T} }/c\\-\gamma _{v}\mathbf {v} /c&\mathbf {I} +(\gamma _{v}-1)\mathbf {v} \mathbf {v} ^{\mathrm {T} }/|\mathbf {v} |^{2}\end{pmatrix}},}

and, in turn, r, r′, v are column vectors (the matrix transposes of these are row vectors), and γ_v is the Lorentz factor of velocity v. The boost matrix is a symmetric matrix. The inverse transformation is given by

{\displaystyle X=B(-\mathbf {v} )X'.}

It is clear that to each admissible velocity v there corresponds a pure Lorentz boost B(v). Velocity addition u ⊕ v corresponds to the composition of boosts B(v)B(u) in that order. The B(u) acts on X first, then B(v) acts on B(u)X. Notice that succeeding operators act on the left in any composition of operators, so B(v)B(u) should be interpreted as a boost with velocities u then v, not v then u. Performing the Lorentz transformations by block matrix multiplication, the composite transformation matrix is [ 18 ]

{\displaystyle \Lambda =B(\mathbf {v} )B(\mathbf {u} )={\begin{pmatrix}\gamma &-\mathbf {a} ^{\mathrm {T} }\\-\mathbf {b} &\mathbf {M} \end{pmatrix}}.}

Here γ is the composite Lorentz factor, and a and b are 3×1 column vectors proportional to the composite velocities. The 3×3 matrix M will turn out to have geometric significance. The inverse transformation is

{\displaystyle \Lambda ^{-1}=B(-\mathbf {u} )B(-\mathbf {v} ),}

so the composition amounts to a negation and exchange of velocities. If the relative velocities are exchanged, looking at the blocks of Λ, one observes the composite transformation B(u)B(v) to be the matrix transpose of Λ. This is not the same as the original matrix, so the composite Lorentz transformation matrix is not symmetric, and thus not a single boost. This, in turn, translates to the incompleteness of velocity composition from the result of two boosts; symbolically,

{\displaystyle B(\mathbf {u} \oplus \mathbf {v} )\neq B(\mathbf {v} )B(\mathbf {u} ).}

To make the description complete, it is necessary to introduce a rotation, before or after the boost. This rotation is the Thomas rotation. A rotation is given by

{\displaystyle X'=R({\boldsymbol {\theta }})X,}

where the 4×4 rotation matrix is

{\displaystyle R({\boldsymbol {\theta }})={\begin{pmatrix}1&0\\0&\mathbf {R} ({\boldsymbol {\theta }})\end{pmatrix}}}

and R is a 3×3 rotation matrix. [ nb 3 ] In this article the axis-angle representation is used, and θ = θe is the "axis-angle vector": the angle θ multiplied by a unit vector e parallel to the axis. Also, the right-handed convention for the spatial coordinates is used (see orientation (vector space)), so that rotations are positive in the anticlockwise sense according to the right-hand rule, and negative in the clockwise sense. With these conventions, the rotation matrix rotates any 3d vector about the axis e through angle θ anticlockwise (an active transformation), which has the equivalent effect of rotating the coordinate frame clockwise about the same axis through the same angle (a passive transformation).

The rotation matrix is an orthogonal matrix, its transpose equals its inverse, and negating either the angle or the axis in the rotation matrix corresponds to a rotation in the opposite sense, so the inverse transformation is readily obtained by

{\displaystyle R({\boldsymbol {\theta }})^{-1}=R(-{\boldsymbol {\theta }}).}

A boost followed or preceded by a rotation is also a Lorentz transformation, since these operations leave the spacetime interval invariant. The same Lorentz transformation has two decompositions, for appropriately chosen rapidity and axis-angle vectors,

{\displaystyle \Lambda =R({\boldsymbol {\epsilon }})B(\mathbf {w} _{d})=B(\mathbf {w} _{i})R({\boldsymbol {\epsilon }}),}

and if these two decompositions are equal, the two boosts are related by

{\displaystyle B(\mathbf {w} _{i})=R({\boldsymbol {\epsilon }})B(\mathbf {w} _{d})R({\boldsymbol {\epsilon }})^{-1},}

so the boosts are related by a matrix similarity transformation.
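The non-symmetry of the composite matrix can be checked numerically. The following sketch (an illustration under the same assumptions as above, units with c = 1) builds the block boost matrix B(v) and verifies that B(v)B(u) is not symmetric, hence not a pure boost:

```python
import numpy as np

C = 1.0  # units with c = 1

def boost(v):
    """4x4 boost matrix B(v) in the block form above, acting on X = (ct, r)."""
    v = np.asarray(v, dtype=float)
    g = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)
    B = np.eye(4)
    B[0, 0] = g
    B[0, 1:] = B[1:, 0] = -g * v / C
    B[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(v, v) / np.dot(v, v)
    return B

u = np.array([0.6, 0.0, 0.0])
v = np.array([0.0, 0.6, 0.0])

L = boost(v) @ boost(u)                     # two non-collinear boosts
print(np.allclose(boost(u), boost(u).T))    # True: a single boost is symmetric
print(np.allclose(L, L.T))                  # False: the composite is not
print(np.allclose(boost(v) @ boost(-v), np.eye(4)))  # True: B(-v) inverts B(v)
```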
It turns out that this equality between two boosts and a rotation followed or preceded by a single boost is correct: the rotation of frames matches the angular separation of the composite velocities, and explains how one composite velocity applies to one frame while the other applies to the rotated frame. The rotation also breaks the symmetry of the overall Lorentz transformation, making it non-symmetric. For this specific rotation, let the angle be ε and the axis be defined by the unit vector e, so the axis-angle vector is ε = εe.

Altogether, the two different orderings of two boosts mean there are two inequivalent transformations. Each of these can be split into a boost then a rotation, or a rotation then a boost, doubling the number of inequivalent transformations to four. The inverse transformations are equally important; they provide information about what the other observer perceives. In all, there are eight transformations to consider, just for the problem of two Lorentz boosts. In summary, with subsequent operations acting on the left, they are the two orderings of the boosts, each split as a boost followed by a rotation or as a rotation followed by a boost, together with the inverses of these four transformations.

Matching up the boosts followed by rotations, in the original setup, an observer in Σ notices Σ′′ to move with velocity u ⊕ v then rotate clockwise (first diagram), and because of the rotation an observer in Σ′′ notices Σ to move with velocity −v ⊕ u then rotate anticlockwise (second diagram). If the velocities are exchanged, an observer in Σ notices Σ′′ to move with velocity v ⊕ u then rotate anticlockwise (third diagram), and because of the rotation an observer in Σ′′ notices Σ to move with velocity −u ⊕ v then rotate clockwise (fourth diagram). The cases of rotations then boosts are similar (no diagrams are shown). Matching up the rotations followed by boosts, in the original setup, an observer in Σ notices Σ′′ to rotate clockwise then move with velocity v ⊕ u, and because of the rotation an observer in Σ′′ notices Σ to rotate anticlockwise then move with velocity −u ⊕ v. If the velocities are exchanged, an observer in Σ notices Σ′′ to rotate anticlockwise then move with velocity u ⊕ v, and because of the rotation an observer in Σ′′ notices Σ to rotate clockwise then move with velocity −u ⊕ v.

The above formulae constitute the relativistic velocity addition and the Thomas rotation explicitly in the general Lorentz transformations. Throughout, in every composition of boosts and decomposition into a boost and a rotation, the important relation holds that the rotation matrix R is determined completely in terms of the relative velocities u and v. The angle of a rotation matrix in the axis-angle representation can be found from the trace of the rotation matrix; the general result for any axis is tr(R) = 1 + 2 cos ε. Taking the trace of the equation gives [ 19 ] [ 20 ] [ 21 ]

{\displaystyle \cos \epsilon ={\frac {(1+\gamma _{u}+\gamma _{v}+\gamma )^{2}}{(1+\gamma _{u})(1+\gamma _{v})(1+\gamma )}}-1,}

where γ = γ_u γ_v (1 + u·v/c²) is the composite Lorentz factor. The angle ε between a and b is not the same as the angle α between u and v.

In both frames Σ and Σ′′, for every composition and decomposition, another important formula holds,

{\displaystyle \mathbf {b} =\mathbf {R} \mathbf {a} .}

The vectors a and b are indeed related by a rotation, in fact by the same rotation matrix R which rotates the coordinate frames. Starting from a, the matrix R rotates this into b anticlockwise; it follows that their cross product a × b (in the right-hand convention) defines the axis correctly, and therefore the axis is also parallel to u × v. The magnitude of this pseudovector is neither interesting nor important, only the direction is, so it can be normalized into the unit vector

{\displaystyle \mathbf {e} ={\frac {\mathbf {u} \times \mathbf {v} }{|\mathbf {u} \times \mathbf {v} |}},}

which still completely defines the direction of the axis without loss of information.
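A short numerical check of the trace formula (again an illustration, reusing the gamma and vel_add helpers from the first sketch, with c = 1):

```python
import numpy as np
# reuses gamma() and vel_add() from the earlier sketch, units with c = 1

def wigner_angle(u, v):
    """Closed-form angle epsilon from the trace formula above."""
    gu, gv = gamma(u), gamma(v)
    g = gu * gv * (1.0 + np.dot(u, v))          # composite Lorentz factor
    cos_eps = (1 + gu + gv + g)**2 / ((1 + gu) * (1 + gv) * (1 + g)) - 1
    return np.arccos(cos_eps)

u = np.array([0.6, 0.0, 0.0])
v = np.array([0.0, 0.6, 0.0])

wd, wi = vel_add(u, v), vel_add(v, u)
cosang = np.dot(wd, wi) / (np.linalg.norm(wd) * np.linalg.norm(wi))
print(np.degrees(np.arccos(cosang)))     # angle between the composite velocities
print(np.degrees(wigner_angle(u, v)))    # the same value from the trace formula

axis = np.cross(u, v)
print(axis / np.linalg.norm(axis))       # unit rotation axis, parallel to u x v
```

Both computations give about 12.7° for this pair of velocities, confirming that the angle between the composite velocities is the Wigner rotation angle ε.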
The rotation is simply a "static" rotation: there is no relative rotational motion between the frames, only the relative translational motion of the boost. However, if the frames accelerate, then the rotated frame rotates with an angular velocity. This effect is known as the Thomas precession, and it arises purely from the kinematics of successive Lorentz boosts.

The decomposition process described below can be carried through on the product of two pure Lorentz transformations to obtain explicitly the rotation of the coordinate axes resulting from the two successive "boosts". In general, the algebra involved is quite forbidding, more than enough, usually, to discourage any actual demonstration of the rotation matrix. In principle, the procedure is straightforward. Since every Lorentz transformation is a product of a boost and a rotation, the consecutive application of two pure boosts is a pure boost, either followed by or preceded by a pure rotation. Thus, suppose

{\displaystyle \Lambda =B(\mathbf {v} )B(\mathbf {u} )=R({\boldsymbol {\epsilon }})B(\mathbf {w} ).}

The task is to glean from this equation the boost velocity w and the rotation R from the matrix entries of Λ. [ 22 ] The coordinates of events are related by

{\displaystyle x'=\Lambda x.}

Inverting this relation yields

{\displaystyle x=\Lambda ^{-1}x'.}

Set x′ = (ct′, 0, 0, 0). Then x^ν will record the spacetime position of the origin of the primed system: x equals ct′ times the zeroth column of Λ⁻¹. But

{\displaystyle \Lambda ^{-1}=B(-\mathbf {w} )R({\boldsymbol {\epsilon }})^{-1}.}

Multiplying this matrix with a pure rotation will not affect the zeroth column, and

{\displaystyle x=ct'{\begin{pmatrix}\gamma _{w}\\\gamma _{w}{\boldsymbol {\beta }}_{w}\end{pmatrix}},}

which could have been anticipated from the formula for a simple boost in the x-direction, and for the relative velocity vector

{\displaystyle \mathbf {w} =c\,{\boldsymbol {\beta }}_{w}.}

Thus, given Λ, one obtains β and w by little more than inspection of Λ⁻¹. (Of course, w can also be found using velocity addition as above.) From w, construct B(−w). The solution for R is then

{\displaystyle R({\boldsymbol {\epsilon }})=\Lambda B(-\mathbf {w} ).}

With the ansatz

{\displaystyle \Lambda =B(\mathbf {w} ')R({\boldsymbol {\epsilon }}),}

one finds by the same means

{\displaystyle R({\boldsymbol {\epsilon }})=B(-\mathbf {w} ')\Lambda .}

Finding a formal solution in terms of the velocity parameters u and v involves first formally multiplying B(v)B(u), formally inverting, then reading off β_w from the result, formally building B(−w) from the result, and, finally, formally multiplying B(−w)B(v)B(u). It should be clear that this is a daunting task, and it is difficult to interpret/identify the result as a rotation, though it is clear a priori that it is one. It is these difficulties that the Goldstein quote at the top refers to. The problem has been thoroughly studied under simplifying assumptions over the years. (A numerical sketch of the extraction is given below.)

Another way to explain the origin of the rotation is by looking at the generators of the Lorentz group. The passage from a velocity to a boost is obtained as follows. An arbitrary boost is given by [ 23 ]

{\displaystyle B({\boldsymbol {\zeta }})=e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} },}

where ζ is a triple of real numbers serving as coordinates on the boost subspace of the Lie algebra so(3, 1) spanned by the matrices

{\displaystyle K_{1}={\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}},\quad K_{2}={\begin{pmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{pmatrix}},\quad K_{3}={\begin{pmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{pmatrix}}.}

The vector ζ is called the boost parameter or boost vector, while its norm ζ is the rapidity. Here β is the velocity parameter, the magnitude of the vector β = u/c. While for ζ one has 0 ≤ ζ < ∞, the parameter β is confined within 0 ≤ β < 1, and hence 0 ≤ u < c. Thus

{\displaystyle \zeta =\tanh ^{-1}\beta .}

The set of velocities satisfying 0 ≤ u < c is an open ball in ℝ³ and is called the space of admissible velocities in the literature. It is endowed with a hyperbolic geometry described in the linked article. [ 24 ] The generators of boosts, K₁, K₂, K₃, in different directions do not commute. This has the effect that two consecutive boosts are not a pure boost in general, but a rotation preceding a boost.
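Returning to the matrix decomposition described above, the extraction of w and R is easy to carry out numerically, even though it is forbidding symbolically. A sketch (an illustration, reusing the boost helper from the earlier snippet, with c = 1; the ansatz Λ = R(ε)B(w) is assumed):

```python
import numpy as np
# reuses boost() from the earlier sketch, units with c = 1

u = np.array([0.6, 0.0, 0.0])
v = np.array([0.0, 0.6, 0.0])
L = boost(v) @ boost(u)              # Lambda = B(v) B(u)

# The zeroth column of Lambda^-1 records the motion of the primed origin:
Linv = np.linalg.inv(L)
g_w = Linv[0, 0]                     # composite Lorentz factor gamma_w
w = Linv[1:, 0] / g_w                # composite velocity, equals u (+) v

# Peel off the boost; the remainder must be the Thomas-Wigner rotation.
R4 = L @ boost(-w)                   # R(eps) = Lambda B(-w)
R = R4[1:, 1:]                       # 3x3 spatial block
print(np.allclose(R @ R.T, np.eye(3)))               # True: a pure rotation
print(np.degrees(np.arccos((np.trace(R) - 1) / 2)))  # ~12.7 deg Wigner angle
```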
Consider a succession of boosts in the x direction, then the y direction, expanding each boost to first order in the rapidities, [ 25 ]

{\displaystyle B_{x}=1-\zeta _{x}K_{x}+\cdots ,\qquad B_{y}=1-\zeta _{y}K_{y}+\cdots ,}

then the group commutator is

{\displaystyle B_{y}B_{x}B_{y}^{-1}B_{x}^{-1}=1+\zeta _{x}\zeta _{y}[K_{y},K_{x}]+\cdots .}

Three of the commutation relations of the Lorentz generators are

{\displaystyle [J_{x},J_{y}]=J_{z},\qquad [J_{x},K_{y}]=K_{z},\qquad [K_{x},K_{y}]=-J_{z},}

where the bracket [A, B] = AB − BA is a binary operation known as the commutator, and the other relations can be found by taking cyclic permutations of x, y, z components (i.e. change x to y, y to z, and z to x, and repeat). Returning to the group commutator, the commutation relations of the boost generators imply that for a boost along the x then y directions, there will be a rotation about the z axis. In terms of the rapidities, the rotation angle θ is given by

{\displaystyle \tan \theta ={\frac {\sinh \zeta _{x}\,\sinh \zeta _{y}}{\cosh \zeta _{x}+\cosh \zeta _{y}}},}

equivalently expressible as

{\displaystyle \tan {\frac {\theta }{2}}=\tanh {\frac {\zeta _{x}}{2}}\,\tanh {\frac {\zeta _{y}}{2}}.}

In fact, the full Lorentz group is not indispensable for studying the Wigner rotation. Given that this phenomenon involves only two spatial dimensions, the subgroup SO(2, 1)⁺ is sufficient for analyzing the associated problems. Analogous to the Euler parametrization of SO(3), SO(2, 1)⁺ can be decomposed into three simple parts, providing a straightforward and intuitive framework for exploring the Wigner rotation problem. [ 26 ]

The familiar notion of vector addition for velocities in the Euclidean plane can be done in a triangular formation or, since vector addition is commutative, the vectors in both orderings geometrically form a parallelogram (see "parallelogram law"). This does not hold for relativistic velocity addition; instead, a hyperbolic triangle arises whose edges are related to the rapidities of the boosts. Changing the order of the boost velocities, one does not find the resultant boost velocities to coincide. [ 27 ]
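The commutator calculation can be verified directly with matrix representations of the generators. A sketch, assuming the sign conventions and generator layout used above (an illustration only):

```python
import numpy as np

# Boost generators K_i and the rotation generator J_z in the basis above.
K = np.zeros((3, 4, 4))
for i in range(3):
    K[i][0, i + 1] = K[i][i + 1, 0] = 1.0

Jz = np.zeros((4, 4))
Jz[2, 1], Jz[1, 2] = 1.0, -1.0       # generates anticlockwise x-y rotations

comm = K[0] @ K[1] - K[1] @ K[0]     # [K_x, K_y]
print(np.allclose(comm, -Jz))        # True: two boosts commute into a rotation

# Closed-form Wigner angle for perpendicular boosts of speed 0.6:
zx = zy = np.arctanh(0.6)
theta = np.arctan(np.sinh(zx) * np.sinh(zy) / (np.cosh(zx) + np.cosh(zy)))
print(np.degrees(theta))             # ~12.7 degrees, as in the earlier sketches
```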
https://en.wikipedia.org/wiki/Wigner_rotation
In mathematical physics, the Wigner surmise is a statement about the probability distribution of the spacings between successive points in the spectra of nuclei of heavy atoms, which have many degrees of freedom, or of quantum systems with few degrees of freedom but chaotic classical dynamics. It was proposed by Eugene Wigner in probability theory. [ 1 ] The surmise was a result of Wigner's introduction of random matrices into the field of nuclear physics. The surmise consists of two postulates: in a sequence of energy levels with the same quantum numbers, the probability density for a spacing s between neighbouring levels (measured in units of the mean spacing) is

{\displaystyle p(s)={\frac {\pi s}{2}}e^{-\pi s^{2}/4},}

and levels with different quantum numbers are uncorrelated.

The above result is exact for 2 × 2 {\displaystyle 2\times 2} real symmetric matrices M {\displaystyle M} , with elements that are independent gaussian random variables with joint distribution proportional to e − 1 2 T r ( M 2 ) {\displaystyle e^{-{\frac {1}{2}}{\rm {Tr}}(M^{2})}} . In practice, it is a good approximation for the actual distribution for real symmetric matrices of any dimension. The corresponding result for complex hermitian matrices (which is also exact in the 2 × 2 {\displaystyle 2\times 2} case and a good approximation in general), with distribution proportional to e − 1 2 T r ( M M † ) {\displaystyle e^{-{\frac {1}{2}}{\rm {Tr}}(MM^{\dagger })}} , is given by

{\displaystyle p(s)={\frac {32\,s^{2}}{\pi ^{2}}}e^{-4s^{2}/\pi }.}

During the conference on Neutron Physics by Time-of-Flight, held at Gatlinburg, Tennessee, on November 1 and 2, 1956, Wigner delivered a presentation on the theoretical arrangement of neighboring neutron resonances (with matching spin and parity) in heavy nuclei. In the presentation he gave the following guess: [ 3 ] [ 4 ] Perhaps I am now too courageous when I try to guess the distribution of the distances between successive levels (of energies of heavy nuclei). Theoretically, the situation is quite simple if one attacks the problem in a simpleminded fashion. The question is simply what are the distances of the characteristic values of a symmetric matrix with random coefficients. [ 5 ]
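The surmise is easy to test by direct sampling. The following sketch (an illustration; the sampling variances are chosen to match the Gaussian weight assumed above) draws 2 × 2 real symmetric matrices and compares the normalized spacing histogram with p(s):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sample 2x2 real symmetric matrices [[a, b], [b, d]] with weight
# exp(-Tr(M^2)/2): diagonal entries N(0, 1), off-diagonal N(0, 1/2).
a = rng.normal(size=n)
d = rng.normal(size=n)
b = rng.normal(scale=np.sqrt(0.5), size=n)

spacing = np.sqrt((a - d)**2 + 4 * b**2)   # eigenvalue gap of each matrix
s = spacing / spacing.mean()               # normalize to unit mean spacing

# Compare the empirical histogram with p(s) = (pi s / 2) exp(-pi s^2 / 4)
hist, edges = np.histogram(s, bins=50, range=(0, 4), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
surmise = 0.5 * np.pi * mid * np.exp(-np.pi * mid**2 / 4)
print(np.max(np.abs(hist - surmise)))      # small, ~1e-2 at this sample size
```

The gap formula follows from the eigenvalues (a + d)/2 ± √(((a − d)/2)² + b²) of a symmetric 2 × 2 matrix.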
https://en.wikipedia.org/wiki/Wigner_surmise
The Wigner–Seitz cell, named after Eugene Wigner and Frederick Seitz, is a primitive cell which has been constructed by applying Voronoi decomposition to a crystal lattice. It is used in the study of crystalline materials in crystallography.

The defining property of a crystal is that its atoms are arranged in a regular three-dimensional array called a lattice. All the properties attributed to crystalline materials stem from this highly ordered structure. Such a structure exhibits discrete translational symmetry. In order to model and study such a periodic system, one needs a mathematical "handle" to describe the symmetry and hence draw conclusions about the material properties consequent to this symmetry. The Wigner–Seitz cell is a means to achieve this.

A Wigner–Seitz cell is an example of a primitive cell, which is a unit cell containing exactly one lattice point. For any given lattice, there is an infinite number of possible primitive cells. However, there is only one Wigner–Seitz cell for any given lattice: it is the locus of points in space that are closer to its lattice point than to any of the other lattice points. A Wigner–Seitz cell, like any primitive cell, is a fundamental domain for the discrete translation symmetry of the lattice. The primitive cell of the reciprocal lattice in momentum space is called the Brillouin zone.

The concept of Voronoi decomposition was investigated by Peter Gustav Lejeune Dirichlet, leading to the name Dirichlet domain. Further contributions were made by Evgraf Fedorov (Fedorov parallelohedron), Georgy Voronoy (Voronoi polyhedron), [ 1 ] [ 2 ] and Paul Niggli (Wirkungsbereich). [ 3 ] The application to condensed matter physics was first proposed by Eugene Wigner and Frederick Seitz in a 1933 paper, where it was used to solve the Schrödinger equation for free electrons in elemental sodium. [ 4 ] They approximated the shape of the Wigner–Seitz cell in sodium, which is a truncated octahedron, as a sphere of equal volume, and solved the Schrödinger equation exactly using periodic boundary conditions, which require d ψ / d r = 0 {\displaystyle d\psi /dr=0} at the surface of the sphere. A similar calculation which also accounted for the non-spherical nature of the Wigner–Seitz cell was performed later by John C. Slater. [ 5 ]

There are only five topologically distinct polyhedra which tile three-dimensional space, ℝ³. These are referred to as the parallelohedra. They are a subject of mathematical interest, for example in higher dimensions. [ 6 ] These five parallelohedra can be used to classify the three-dimensional lattices using the concept of a projective plane, as suggested by John Horton Conway and Neil Sloane. [ 7 ] However, while a topological classification considers any affine transformation to lead to an identical class, a more specific classification leads to 24 distinct classes of Voronoi polyhedra with parallel edges which tile space. [ 3 ] For example, the rectangular cuboid, right square prism, and cube belong to the same topological class, but are distinguished by different ratios of their sides. This classification of the 24 types of Voronoi polyhedra for Bravais lattices was first laid out by Boris Delaunay. [ 8 ]

The Wigner–Seitz cell around a lattice point is defined as the locus of points in space that are closer to that lattice point than to any of the other lattice points. [ 9 ] It can be shown mathematically that a Wigner–Seitz cell is a primitive cell.
This implies that the cell spans the entire direct space without leaving any gaps or holes, a property known as tessellation. The general mathematical concept embodied in a Wigner–Seitz cell is more commonly called a Voronoi cell, and the partition of the plane into these cells for a given set of point sites is known as a Voronoi diagram.

The cell may be chosen by first picking a lattice point. After a point is chosen, lines are drawn to all nearby lattice points. At the midpoint of each of these lines, another line is drawn normal to it. The smallest area enclosed in this way is called the Wigner–Seitz primitive cell. For a 3-dimensional lattice, the steps are analogous, but in the second step, instead of drawing perpendicular lines, perpendicular planes are drawn at the midpoints of the lines between the lattice points. As in the case of all primitive cells, all area or space within the lattice can be filled by Wigner–Seitz cells and there will be no gaps. Nearby lattice points are continually examined until the area or volume enclosed is the correct area or volume for a primitive cell. Alternatively, if the basis vectors of the lattice are reduced using lattice reduction, only a set number of lattice points need to be used. [ 10 ] In two dimensions, only the lattice points that make up the 4 unit cells that share a vertex with the origin need to be used. In three dimensions, only the lattice points that make up the 8 unit cells that share a vertex with the origin need to be used.

For composite lattices (crystals which have more than one vector in their basis), each single lattice point represents multiple atoms. We can break apart each Wigner–Seitz cell into subcells by further Voronoi decomposition according to the closest atom, instead of the closest lattice point. [ 12 ] For example, the diamond crystal structure contains a two-atom basis. In diamond, carbon atoms have tetrahedral sp³ bonding, but since tetrahedra do not tile space, the Voronoi decomposition of the diamond crystal structure is actually the triakis truncated tetrahedral honeycomb. [ 13 ] Another example is applying Voronoi decomposition to the atoms in the A15 phases, which forms the polyhedral approximation of the Weaire–Phelan structure.

The Wigner–Seitz cell always has the same point symmetry as the underlying Bravais lattice. [ 9 ] For example, the cube, truncated octahedron, and rhombic dodecahedron have point symmetry O_h, since the respective Bravais lattices used to generate them all belong to the cubic lattice system, which has O_h point symmetry.

In practice, the Wigner–Seitz cell itself is rarely used as a description of direct space; the conventional unit cells are usually used instead. However, the same decomposition is extremely important when applied to reciprocal space. The Wigner–Seitz cell in reciprocal space is called the Brillouin zone, which is used in constructing band diagrams to determine whether a material will be a conductor, a semiconductor, or an insulator.
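Since the construction is exactly a Voronoi decomposition, standard computational-geometry tools can produce Wigner–Seitz cells directly. A sketch using scipy.spatial.Voronoi for a two-dimensional hexagonal lattice (the lattice choice and patch size are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import Voronoi

# Build a patch of a 2D hexagonal (triangular) Bravais lattice around the origin.
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])
pts = np.array([i * a1 + j * a2 for i in range(-2, 3) for j in range(-2, 3)])

vor = Voronoi(pts)

# The Wigner-Seitz cell is the Voronoi region of the origin lattice point.
origin = np.argmin(np.linalg.norm(pts, axis=1))
region = vor.regions[vor.point_region[origin]]
vertices = vor.vertices[region]
print(len(vertices))   # 6: the Wigner-Seitz cell of this lattice is a hexagon
```

Only a patch of lattice points large enough to bound the central cell is needed, consistent with the remark above that a limited set of neighbouring points suffices.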
https://en.wikipedia.org/wiki/Wigner–Seitz_cell