633,000
https://en.wikipedia.org/wiki/Martensitic%20stainless%20steel
Martensitic stainless steel is a family of stainless steel alloys that have a martensite (body-centered tetragonal) crystal structure. It can be hardened and tempered through aging and heat treatment. The other main types of stainless steel are austenitic, ferritic, duplex, and precipitation hardened. History In 1912, Harry Brearley of the Brown-Firth research laboratory in Sheffield, England, while seeking a corrosion-resistant alloy for gun barrels, discovered and subsequently industrialized a martensitic stainless steel alloy. The discovery was announced two years later in a January 1915 newspaper article in The New York Times. Brearley applied for a U.S. patent in 1915. This was later marketed under the "Staybrite" brand by Firth Vickers in England and was used for the new entrance canopy of the Savoy Hotel in London in 1929. The characteristic body-centered tetragonal martensite microstructure was first observed by German microscopist Adolf Martens around 1890. In 1912, Elwood Haynes applied for a U.S. patent on a martensitic stainless steel alloy. This patent was not granted until 1919. Overview Martensitic stainless steels can be high- or low-carbon steels built around a composition of iron, 12% up to 17% chromium, and carbon from 0.10% (Type 410) up to 1.2% (Type 440C). Up to about 0.4% C they are used mostly for their mechanical properties in applications such as pumps, valves, and shafts. Above 0.4% C they are used mostly for their wear resistance, such as in cutlery, surgical blades, plastic injection molds, and nozzles. They may contain some Ni (Type 431), which allows a higher Cr and/or Mo content, thereby improving corrosion resistance; as the carbon content is also lower, toughness is improved. Grade EN 1.4313 (CA6NM), with a low C, 13% Cr and 4% Ni, offers good mechanical properties, good castability, and good weldability. It is used for nearly all the hydroelectric turbines in the world, including those of the huge "Three Gorges" dam in China. Additions of B, Co, Nb, and Ti improve the high temperature properties, particularly creep resistance; such grades are used for heat exchangers in steam turbines. A specific grade is Type 630 (also called 17-4 PH), which is martensitic and hardens by precipitation. Chemical compositions There are many proprietary grades not listed in the standards, particularly for cutlery. Mechanical Properties Martensitic stainless alloys are hardenable by heat treatment, specifically by quenching and stress relieving, or by quenching and tempering (referred to as QT). The alloy composition and the high cooling rate of quenching enable the formation of martensite. Untempered martensite is low in toughness and therefore brittle. Tempered martensite gives steel good hardness and high toughness, and is largely used for medical surgical instruments, such as scalpels, razors, and internal clamps. In the heat treatment column, QT refers to Quenched and Tempered, P refers to Precipitation hardened. Physical properties Processing When formability, softness, etc. are required in fabrication, steel having 0.12% maximum carbon is often used in the soft condition. With increasing carbon, it is possible by hardening and tempering to obtain higher tensile strengths, combined with reasonable toughness and ductility. In this condition, these steels find many useful general applications where mild corrosion resistance is required. 
Also, with the higher carbon range in the hardened and lightly tempered condition, still higher tensile strength may be developed, with lowered ductility. A common example of a martensitic stainless steel is X46Cr13. Martensitic stainless steel can be nondestructively tested using the magnetic particle inspection method, unlike austenitic stainless steel. Applications Martensitic stainless steels, depending upon their carbon content, are often used for their corrosion resistance and high strength in pumps, valves, and boat shafts. They are also used for their wear resistance in cutlery, medical tools (scalpels, razors and internal clamps), ball bearings, razor blades, injection molds for polymers, and brake disks for bicycles and motorbikes. References Building materials Stainless steel
Martensitic stainless steel
[ "Physics", "Engineering" ]
911
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
633,006
https://en.wikipedia.org/wiki/James%20Young%20%28chemist%29
James Young (13 July 1811 – 13 May 1883) was a Scottish chemist best known for his method of distilling paraffin from coal and oil shales. He is often referred to as Paraffin Young. Life James Young was born in Shuttle Street in the Drygate area of Glasgow, the son of John Young, a cabinetmaker and joiner, and his wife Jean Wilson. He became his father's apprentice at an early age, but educated himself at night school, attending evening classes in Chemistry at the nearby Anderson's College (now Strathclyde University) from the age of 19. At Anderson's College he met Thomas Graham, who had just been appointed as a lecturer on chemistry. In 1831 Young was appointed as Graham's assistant and occasionally took some of his lectures. While at Anderson's College he also met and befriended the explorer David Livingstone; this friendship continued until Livingstone's death in Africa many years later. On 21 August 1838 he married Mary Young of Paisley middle parish; in 1839 they moved to Lancashire. He died at Kelly House at Wemyss Bay in western Scotland on 13 May 1883. He is buried in Inverkip churchyard. Career In Young's first scientific paper, dated 4 January 1837, he described a modification of a voltaic battery invented by Michael Faraday. Later that same year he moved with Graham to University College, London, where he helped him with experimental work. Chemicals In 1839 Young was appointed manager at James Muspratt's chemical works at Newton-le-Willows, near St Helens, Merseyside, and in 1844 moved to Tennants, Clow & Co. at Manchester, for whom he devised a method of making sodium stannate directly from cassiterite. Potato blight In 1845 he served on a committee of the Manchester Literary and Philosophical Society for the investigation of potato blight, and suggested immersing the potatoes in dilute sulphuric acid as a means of combatting the disease; he was not elected a member of the Society until 19 October 1847. Finding the Manchester Guardian newspaper insufficiently liberal, he also began a movement for the establishment of the Manchester Examiner newspaper, which was first published in 1846. Oils In 1847 Young had his attention called to a natural petroleum seepage in the Riddings colliery at Alfreton, Derbyshire, from which he distilled a light thin oil suitable for use as lamp oil, at the same time obtaining a thicker oil suitable for lubricating machinery. In 1848 Young left Tennants', and in partnership with his friend and assistant Edward Meldrum, set up a small business refining the crude oil. The new oils were successful, but the supply of oil from the coal mine soon began to fail (eventually being exhausted in 1851). Young, noticing that the oil was dripping from the sandstone roof of the coal mine, theorised that it somehow originated from the action of heat on the coal seam and from this thought that it might be produced artificially. Following up this idea, he tried many experiments and eventually succeeded in producing, by distilling cannel coal at a low heat, a fluid resembling petroleum, which when treated in the same way as the seep oil gave similar products. Young found that by slow distillation he could obtain a number of useful liquids from it, one of which he named "paraffine oil" because at low temperatures it congealed into a substance resembling paraffin wax. Patents The production of these oils and solid paraffin wax from coal formed the subject of his patent dated 17 October 1850. 
In 1850 Young & Meldrum and Edward William Binney entered into partnership under the title of E.W. Binney & Co. at Bathgate in West Lothian and E. Meldrum & Co. at Glasgow; their works at Bathgate were completed in 1851 and became the first truly commercial oil-works in the world, using oil extracted from locally mined torbanite, lamosite, and bituminous coal to manufacture naphtha and lubricating oils; paraffin for fuel use and solid paraffin were not sold till 1856. In 1852 Young left Manchester to live in Scotland and that same year took out a US patent for the production of paraffin oil by distillation of coal. Both the US and UK patents were subsequently upheld in both countries in a series of lawsuits, and other producers were obliged to pay him royalties. Torbanite was exceptionally rich, yielding 537 litres of petroleum spirit per tonne, but it was a finite resource, which was completely exhausted by 1862. Geological surveys at the time showed the potential for similar sedimentary deposits in West Lothian, leading to the discovery of oil shales at Broxburn in 1858. The oil shales were less rich, typically yielding 150–180 litres per tonne, but the discovery meant that Young could extend his operations to West Lothian. Young's Paraffin Light and Mineral Oil Company In 1865 Young bought out his business partners and built a second and larger works at Addiewell, near West Calder. It was a substantial industrial complex, in its time one of the largest chemical works in Scotland. In 1866 Young sold the concern to Young's Paraffin Light and Mineral Oil Company. Although Young remained in the company, he took no active part in it, instead withdrawing from business to occupy himself with yachting, travelling, scientific pursuits, and looking after the estates which he had purchased. The company continued to grow and expanded its operations, selling paraffin oil and paraffin lamps all over the world and earning for its founder the nickname "Paraffin" Young. Addiewell remained the centre of operations for Young's Paraffin Light and Mineral Oil Co. Ltd. By the 1900s nearly 2 million tons of shale were being extracted annually, employing 4,000 men. The West Lothian oil-shale industry peaked around 1892, operating 120 oil shale works, but the end was approaching. Retorting the oil shale was energy intensive and expensive. Cheaper free-flowing petroleum from Russia, and later the USA, effectively priced oil shales out of the market, causing the industry to almost collapse by 1919, long after Young himself had died. Young's refinery closed around 1921. Other work During the height of enthusiasm for the Volunteer movement, Young formed the 4th Linlithgowshire Rifle Volunteer Corps at Bathgate on 9 August 1862, mainly from employees of his chemical works, with Young himself as Captain in command until 1865. It later became D Company of the 8th Volunteer Battalion, Royal Scots. Young made significant discoveries in rustproofing ships in 1872, which were later adopted by the Royal Navy. Noticing that bilge water was acidic, he suggested that quicklime could be used to prevent it corroding iron ships. Young worked with Professor George Forbes on the speed of light around 1880, using an improved version of Hippolyte Fizeau's method. Honours In 1847 Young was elected to the Manchester Literary and Philosophical Society. In 1861 he was elected a Fellow of the Royal Society of Edinburgh, 
proposed by Lyon Playfair. From 1868 to 1877 he was President of Anderson's College and founded the Young Chair of Technical Chemistry at the College. In 1873 Young was elected a Fellow of the Royal Society. In 1879 he was awarded an honorary doctorate (LLD) from St. Andrews University. From 1879 to 1881 he was Vice-President of the Chemical Society. Retirement and death Young's wife died on 6 April 1868, and by 1871 he had moved with his children to Kelly House, near Wemyss Bay in the district of Inverkip. The 1881 census record shows him living with his son and daughter at this estate. Young died at the age of 71 in his home on 13 May 1883, in the presence of his son James. He was buried at Inverkip churchyard. Legacy Statues of his old professor, Thomas Graham, and of his fellow student and lifelong friend, David Livingstone, which stand respectively in George Square, Glasgow, and at Glasgow Cathedral, were erected by him. From 1855 James 'Paraffin' Young lived at Limefield House, Polbeth. A sycamore tree which Livingstone planted in 1864 is still flourishing in the grounds of Limefield House. There too one can see a miniature version of the "Victoria Falls", which the missionary discovered in the mid-19th century. It was built, as a tribute to Livingstone, by Young on the little stream which runs through the estate. Young had a lifelong friendship with David Livingstone, whom he had met at Anderson's College. He gave generously towards the expenses of Livingstone's African expeditions, and contributed to a search expedition, which proved too late to find Livingstone alive. He also had Livingstone's servants brought to Scotland, and presented to Glasgow a statue to his memory, which was erected in George Square, Glasgow. The James Young High School in Livingston, the streets James Young Road in Bathgate and James Young Avenue in Uphall Station, and the James Young Halls at the University of Strathclyde are all named after him. In 2011 he was one of seven inaugural inductees to the Scottish Engineering Hall of Fame. See also Pumpherston Luther Atwood Abraham Pineo Gesner Alexander Selligue History of the oil shale industry Monkland Railways Wilsontown, Morningside and Coltness Railway References and notes Notes External links The James Paraffin Young Memorial, Inverkip 1811 births 1883 deaths Alumni of the University of Strathclyde Fellows of the Royal Society Fellows of the Royal Society of Edinburgh Oil shale in Scotland People associated with Inverclyde Oil shale researchers Oil shale technology inventors Scientists from Glasgow Scottish chemists Scottish inventors Scottish engineers Scottish industrialists Scottish Engineering Hall of Fame inductees 19th-century Scottish businesspeople Scottish company founders
James Young (chemist)
[ "Chemistry" ]
1,995
[ "Oil shale technology inventors", "Oil shale technology" ]
633,037
https://en.wikipedia.org/wiki/Conceptualism
In metaphysics, conceptualism is a theory that explains the universality of particulars as conceptualized frameworks situated within the thinking mind. Intermediate between nominalism and realism, the conceptualist view approaches the metaphysical concept of universals from a perspective that denies their presence in particulars outside the mind's perception of them. Conceptualism is anti-realist about abstract objects, just as immanent realism is (their difference being that immanent realism accepts that there are mind-independent facts about whether universals are instantiated). History Medieval philosophy The evolution of late scholastic terminology led to the emergence of conceptualism, which stemmed from doctrines that were previously considered to be nominalistic. The terminological distinction was made in order to stress the difference between the claim that universal mental acts correspond with universal intentional objects and the perspective that dismissed the existence of universals outside the mind. The former perspective, with its rejection of objective universality, was distinctly defined as conceptualism. Peter Abélard was a medieval thinker whose work is now regarded as the most plausible representative of the roots of conceptualism. Abélard's view denied the existence of determinate universals within things. William of Ockham was another famous late medieval thinker who had a strictly conceptualist solution to the metaphysical problem of universals. He argued that abstract concepts have no fundamentum outside the mind. In the 17th century conceptualism gained favour for some decades, especially among the Jesuits: Pedro Hurtado de Mendoza, Rodrigo de Arriaga and Francisco Oviedo are the main figures. Although the order soon returned to the more realist philosophy of Francisco Suárez, the ideas of these Jesuits had a great impact on early modern philosophy. Modern philosophy Conceptualism was either explicitly or implicitly embraced by most of the early modern thinkers, including René Descartes, John Locke, Baruch Spinoza, Gottfried Wilhelm Leibniz, George Berkeley, and David Hume – often in a quite simplified form compared with the elaborate scholastic theories. Sometimes the term is applied even to the radically different philosophy of Immanuel Kant, who holds that universals have no connection with things as they are in themselves because they (universals) are exclusively produced by our a priori mental structures and functions, even though the categories have an objective validity for objects of experience (that is, phenomena). In late modern philosophy, conceptualist views were held by G. W. F. Hegel. Contemporary philosophy In contemporary times, Edmund Husserl's philosophy of mathematics has been construed as a form of conceptualism. In the context of the arts, see the discussion of conceptualisms in Christoph Metzger, Conceptualisms in Musik, Kunst und Film (commissioned by the Akademie der Künste, Berlin, 2003). Conceptualist realism (a view put forward by David Wiggins in 1980) states that our conceptual framework maps reality. Though separate from the historical debate regarding the status of universals, there has been significant debate regarding the conceptual character of experience since the release of Mind and World by John McDowell in 1994. McDowell's touchstone is the famous refutation that Wilfrid Sellars provided for what he called the "Myth of the Given"—the notion that all empirical knowledge is based on certain assumed or 'given' items, such as sense data. 
Thus, in rejecting the Myth of the Given, McDowell argues for perceptual conceptualism, according to which perceptual content is conceptual "from the ground up"; that is, all perceptual experience is a form of conceptual experience. McDowell's philosophy of justification is considered a form of foundationalism: it is a form of foundationalism because it allows that certain judgements are warranted by experience, and it is a coherent form of this view because it maintains that experience can warrant those judgements precisely because experience is irreducibly conceptual. A clear motivation of contemporary conceptualism is that the kind of perception that rational creatures like humans enjoy is unique in that it has conceptual character. McDowell explains his position: I have urged that our perceptual relation to the world is conceptual all the way out to the world’s impacts on our receptive capacities. The idea of the conceptual that I mean to be invoking is to be understood in close connection with the idea of rationality, in the sense that is in play in the traditional separation of mature human beings, as rational animals, from the rest of the animal kingdom. Conceptual capacities are capacities that belong to their subject’s rationality. So another way of putting my claim is to say that our perceptual experience is permeated with rationality. I have also suggested, in passing, that something parallel should be said about our agency. McDowell's conceptualism, though rather distinct (philosophically and historically) from conceptualism's genesis, shares the view that universals are not "given" in perception from outside the sphere of reason. Particular objects are perceived, as it were, already infused with conceptuality stemming from the spontaneity of the rational subject herself. The retroactive application of the term "perceptual conceptualism" to Kant's philosophy of perception is debatable. Robert Hanna has argued for a rival interpretation of Kant's work termed perceptual non-conceptualism. See also Conceptual architecture Conceptual art Lyco art (lyrical conceptualism), a term coined by artist Paul Hartal Notes References Theories of deduction Abstract object theory Metaphysical theories Occamism
Conceptualism
[ "Mathematics" ]
1,123
[ "Theories of deduction" ]
633,233
https://en.wikipedia.org/wiki/Gravitino
In supergravity theories combining general relativity and supersymmetry, the gravitino is the gauge fermion supersymmetric partner of the hypothesized graviton. It has been suggested as a candidate for dark matter. If it exists, it is a fermion of spin 3/2 and therefore obeys the Rarita–Schwinger equation. The gravitino field is conventionally written as ψμα, with μ a four-vector index and α a spinor index. For μ = 0 one would get negative norm modes, as with every massless particle of spin 1 or higher. These modes are unphysical, and for consistency there must be a gauge symmetry which cancels them: δψμα = ∂μεα, where εα(x) is a spinor function of spacetime. This gauge symmetry is a local supersymmetry transformation, and the resulting theory is supergravity. Thus the gravitino is the fermion mediating supergravity interactions, just as the photon mediates electromagnetism and the graviton presumably mediates gravitation. Whenever supersymmetry is broken in supergravity theories, the gravitino acquires a mass which is determined by the scale at which supersymmetry is broken. This varies greatly between different models of supersymmetry breaking, but if supersymmetry is to solve the hierarchy problem of the Standard Model, the gravitino cannot be more massive than about 1 TeV/c2. History Murray Gell-Mann and Peter van Nieuwenhuizen intended the spin-3/2 particle associated with supergravity to be called the 'hemitrion', meaning 'half-3'; however, the editors of Physical Review were not keen on the name and instead suggested 'massless Rarita–Schwinger particle' for their 1977 publication. The current name of gravitino was instead suggested by Sidney Coleman and Heinz Pagels, although this term was originally coined in 1954 by Felix Pirani to describe a class of negative energy excitations with zero rest mass. Gravitino cosmological problem If the gravitino indeed has a mass of the order of TeV, then it creates a problem in the standard model of cosmology, at least naïvely. One option is that the gravitino is stable. This would be the case if the gravitino is the lightest supersymmetric particle and R-parity is conserved (or nearly so). In this case the gravitino is a candidate for dark matter; as such, gravitinos will have been created in the very early universe. However, one may calculate the density of gravitinos and it turns out to be much higher than the observed dark matter density. The other option is that the gravitino is unstable. Thus the gravitinos mentioned above would decay and will not contribute to the observed dark matter density. However, since they decay only through gravitational interactions, their lifetime would be very long, of the order of Mpl²/m³ in natural units, where Mpl is the Planck mass and m is the mass of a gravitino. For a gravitino mass of the order of TeV this would be of the order of 10⁵ seconds, much later than the era of nucleosynthesis. At least one possible channel of decay must include either a photon, a charged lepton or a meson, each of which would be energetic enough to destroy a nucleus if it strikes one. One can show that enough such energetic particles will be created in the decay as to destroy almost all the nuclei created in the era of nucleosynthesis, in contrast with observations. In fact, in such a case the universe would have been made of hydrogen alone, and star formation would probably be impossible. 
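The scale of this lifetime estimate is easy to check numerically. The following is a minimal sketch, not from the article itself: it assumes the conventional (non-reduced) Planck mass of about 1.22×10¹⁹ GeV and the standard ħ ≈ 6.58×10⁻²⁵ GeV·s conversion from natural units, and the function name is invented for illustration.

```python
# Back-of-the-envelope gravitino lifetime: tau ~ Mpl^2 / m^3 in natural units.
M_PL_GEV = 1.22e19     # Planck mass in GeV (non-reduced convention; an assumption)
HBAR_GEV_S = 6.58e-25  # hbar in GeV*s, converts GeV^-1 to seconds

def gravitino_lifetime_seconds(mass_gev: float) -> float:
    """Lifetime of a gravitino decaying only gravitationally, tau ~ Mpl^2 / m^3."""
    tau_natural = M_PL_GEV**2 / mass_gev**3  # lifetime in GeV^-1
    return tau_natural * HBAR_GEV_S

if __name__ == "__main__":
    # For m ~ 1 TeV = 1000 GeV the lifetime comes out near 1e5 s,
    # far later than Big Bang nucleosynthesis (roughly 1-1000 s).
    print(f"{gravitino_lifetime_seconds(1000.0):.2e} s")
```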
One possible solution to the cosmological gravitino problem is the split supersymmetry model, where the gravitino mass is much higher than the TeV scale, but other fermionic supersymmetric partners of standard model particles already appear at this scale. Another solution is that R-parity is slightly violated and the gravitino is the lightest supersymmetric particle. This causes almost all supersymmetric particles in the early Universe to decay into Standard Model particles via R-parity violating interactions well before the synthesis of primordial nuclei; a small fraction, however, decay into gravitinos, whose half-life is orders of magnitude greater than the age of the Universe, due to the suppression of the decay rate by the Planck scale and the small R-parity violating couplings. See also Dual graviton Graviton Gravity Supersymmetry References Fermions Quantum gravity Supersymmetry Hypothetical elementary particles
Gravitino
[ "Physics", "Materials_science" ]
966
[ "Symmetry", "Fermions", "Unsolved problems in physics", "Quantum gravity", "Subatomic particles", "Condensed matter physics", "Hypothetical elementary particles", "Supersymmetry", "Physics beyond the Standard Model", "Matter" ]
633,263
https://en.wikipedia.org/wiki/Radiotelephony%20procedure
Radiotelephony procedure (also on-air protocol and voice procedure) includes various techniques used to clarify, simplify and standardize spoken communications over two-way radios, in use by the armed forces, in civil aviation, police and fire dispatching systems, citizens' band radio (CB), and amateur radio. Voice procedure communications are intended to maximize clarity of spoken communication and reduce errors in the verbal message by use of an accepted nomenclature. It consists of a signalling protocol, such as the use of abbreviated codes like the CB radio ten-code, Q codes in amateur radio and aviation, and police codes, together with jargon. Some elements of voice procedure are understood across many applications, but significant variations exist. The armed forces of the NATO countries have similar procedures in order to make cooperation easier. Radio operators who are not well trained in standard procedures can cause significant operational problems and delays, as exemplified by one account of amateur radio operators during Hurricane Katrina: "...many of the operators who were deployed had excellent go-kits and technical ability, but were seriously wanting in traffic handling skill. In one case it took almost 15 minutes to pass one 25 word message." Introduction Radiotelephony procedures encompass international regulations, official procedures, technical standards, and commonly understood conventions intended to ensure efficient, reliable, and inter-operable communications via all modes of radio communications. The most well-developed and public procedures are contained in the Combined Communications Electronics Board's Allied Communications Procedure ACP 125(G): Communications Instructions Radiotelephone Procedures. These procedures consist of many different components. The three most important ones are:
Voice procedures—what to say
Speech technique—how to say it
Microphone technique—how to say it into a microphone
These procedures have been developed, tested under the most difficult of conditions, then revised to implement the lessons learned, many times since the early 1900s. According to ACP 125(G) and the Virginia Defense Force Signal Operating Instructions: "Voice procedure is designed to provide the fastest and most accurate method of speech transmission. All messages should be pre-planned, brief and straightforward. Ideally, messages should be written down: even brief notes reduce the risk of error. Messages should be constructed clearly and logically in order not to confuse the recipient." Voice procedure is necessary because:
Speech on a congested voice net must be clear, concise and unambiguous.
To avoid interference between speech and data, it will often be expedient to assign the passage of data traffic to logistic or admin nets rather than to those directly associated with command and control.
It must be assumed that all transmissions will be intercepted by a portion of the civilian population. The use of a standard procedure will help reduce the threat of spreading rumors or creating panic among those not involved in an emergency response.
Some form of discipline is needed to ensure that transmissions do not overlap. If two people send traffic at the same time, the result is chaos.
Radio operators must talk differently because two-way radios reduce the quality of human speech in such a way that it becomes harder to understand.
A large part of the radio-specific procedures is the specialized language that has been refined over more than 100 years. 
There are several main methods of communication over radio, and they should be used in this order of preference:
1. Procedure words
2. Standard (predefined) phraseology (for most things in aviation and maritime use)
3. Plain language dialogue (for things that can't be handled by phraseology)
4. Formal messages
5. Narrative messages
6. Dialogue (normal conversation)
Brevity codes (including ten-codes and the Phillips Code) and operating signals (including the 92 code, Q code, and Z code) should be used as a last choice, as these lists of codes are so extensive that it is unlikely that all participants have the full and correct definitions memorized. All of those listed here except the ten-code are designed exclusively for Morse code or teletypewriter use, and are thus unsuitable for use on voice circuits. International Radio Regulations All radio communications on the planet operate under regulations created by the ITU-R, which prescribes most of the basic voice radio procedures, and these are further codified by each individual country. United States radio regulations In the U.S., radio communications are regulated by the NTIA and the FCC. Regulations created by the FCC are codified in Title 47 of the Code of Federal Regulations:
Part 4—Disruptions to Communications
Part 20—Commercial Mobile Services
Part 80—Stations in the Maritime Services (Maritime Mobile Service)
Part 87—Aviation Services
Part 90—Private Land Mobile Radio Services (concerning licensed wireless communications for businesses and non-federal governments), Subpart C—Business Band
Part 95—Personal Radio Services (MURS, FRS, GMRS, and CB radio)
Part 97—Amateur Radio Service (ham radio)
Part 300—NTIA Rules and Regulations
Radio call signs A radio call sign is a globally unique identifier assigned to each station that is required to obtain a license in order to emit RF energy. The identifiers consist of 3 to 9 letters and digits, and while the basic format of call signs is specified by the ITU-R Radio Regulations, Article 19, Identification of stations, the details are left up to each country's radio licensing organizations. Official call signs Each country is assigned a range of prefixes, and the radiotelecommunications agencies within each country are then responsible for allocating call signs, within the format defined by the RR, as they see fit. The Radio Regulations require most radio stations to regularly identify themselves by means of their official station call sign or other unique identifier. Functional designators Because official radio call signs have no inherent meaning outside of the above-described patterns, and, other than individually licensed amateur radio stations, do not serve to identify the person using the radio, they are not usually desirable as the primary means of identifying which person, department, or function is transmitting or is being contacted. For this reason, functional designators (a.k.a. tactical call signs) are frequently used to provide such identification. Such designators are not sufficient to meet the FCC requirements that stations regularly identify the license they are operating under, typically every x number of minutes and at the end of each transmission, where x ranges from 10 to 30 minutes (longer for broadcast stations). For some radio services, the FCC authorizes alternate station IDs, typically in situations where the alternate station ID serves the purpose of identifying the transmitting station better than the standard ITU format. 
These include:
Aircraft—the registration number (tail number) of the aircraft, preceded by the type (typical of general aviation aircraft); or the aircraft operator nickname assigned by the FAA, followed by the flight number (typical of scheduled airline services).
Land mobile—name of the station licensee (typically abbreviated), location of station, name of city, or facility served, followed by additional digits following the more general ID.
Land mobile railroad—name of railroad, followed by the train number, engine number, etc.
Call signs in the United States The United States has been assigned all call signs with the prefixes K, N, and W, as well as AAA–ALZ. Allocating call signs within these groups is the responsibility of the National Telecommunications and Information Administration (almost all government stations) or the Federal Communications Commission (all other stations), and they subdivide the radio call signs into the following groups: Military call sign systems AAA–AEZ and ALA–ALZ are reserved for Department of the Army stations; AFA–AKZ are assigned to the Department of the Air Force; NAA–NZZ is jointly assigned to the Department of the Navy and the U.S. Coast Guard. Amateur call sign systems Ham station call signs begin with A, K, N or W, and have a single digit from 0 to 9 that separates the 1 or 2 letter prefix from the 1 to 3 letter suffix (special event stations have only three characters: the prefix, the digit, and a one-letter suffix). Maritime call signs Maritime call signs have a much more complex structure, and are sometimes replaced with the name of the vessel or a Maritime Mobile Service Identity (MMSI) number. Microphone technique Microphones are imperfect reproducers of the human voice, and will distort the human voice in ways that make it unintelligible unless a set of techniques is used to avoid the problems. The recommended techniques vary, but generally align with the following guidelines, which are extracted from the IARU Emergency Telecommunications Guide:
Hold the microphone close to your cheek, just off to the side of your mouth, positioned so that you talk across, and not into, the microphone. This reduces plosives (popping sounds from letters such as "P").
Speak in a normal, clear, calm voice. Talking loudly or shouting does not increase the volume of your voice at the receiving radios, but will distort the audio, because loud sounds result in over-modulation, which directly causes distortion.
Speak at a normal pace, or preferably, slower. Not leaving gaps between words causes problems with radio transmissions that are not as noticeable when one is talking face-to-face.
Pronounce words carefully, making each syllable and sound clearly distinguishable.
Adjust the microphone gain so that a normal voice 50 mm away from the microphone will produce full modulation. Setting the gain higher than that will transmit greater amounts of background noise, making your voice harder to hear, or even distorted. Noise-cancelling microphones can assist in this, but do not substitute for proper mic placement and gain settings.
If you use a headset boom microphone, be aware that lower-cost models have omni-directional elements that will pick up background noise. Models with uni-directional or noise-cancelling elements are best.
Do not use voice operated transmission (VOX) microphone circuits for emergency communication. The first syllable or so of each transmission will not actually be transmitted, while extraneous noises may also trigger transmission unintentionally. 
If not operating in a vehicle, use a foot push-to-talk switch so that both of your hands are free.
Always leave a little extra time (1 second will suffice) between depressing the PTT switch and speaking. Numerous electronic circuits, including tone squelch, RF squelch and power-saving modes, need a substantial fraction of that time in order to allow your signal to be transmitted or received. This is especially true of repeaters, which might also have a "kerchunk" timer that prevents brief transmissions from keying the transmitter, and doubly true of linked repeaters, which have multiple sets of such circuits that must be activated before all stations can hear you.
One must also leave gaps between the last station that transmitted and the next station, because such gaps are necessary to let other stations break in with emergency traffic. A pause of two seconds, approximated by a count of "one, one thousand", is sufficient in many conditions.
Similarly, the U.S. military radio procedures recommend headsets with noise-cancelling microphones: "Use of Audio Equipment. In many situations, particularly in noisy or difficult conditions, the use of headsets fitted with a noise cancelling microphone is preferable to loudspeakers as a headset will aid concentration and the audibility of the incoming signal. The double-sided, noise cancelling microphone is designed to cancel out surrounding noise, for example engine noise or gunfire, allowing speech entering on one side to pass freely. The microphone should be as close to the mouth as possible." The U.S. Navy radio operator training manuals contain similar guidelines, including NAVPERS 10228-B, Radioman 3 & 2 training course (1957 edition): Dos:
Do listen before transmitting. Unauthorized break-in is lubberly and causes confusion. Often neither transmission gets through.
Do speak clearly and distinctly. Slurred syllables and clipped speech are both hard to understand. A widespread error among untrained operators is failure to emphasize vowels sufficiently.
Do speak slowly. Unless the action officer is listening he will have to rely on the copy being typed or written at the other end. Give the recorder a chance to get it all the first time. You will save time and repetitions that way.
Do avoid extremes of pitch. A high voice cuts best through interference, but is shrill and unpleasant if too high. A lower pitch is easier on the ear, but is hard to understand through background noises if too low.
Do be natural. Maintain a normal speaking rhythm. Group words in a natural manner. Send your message phrase by phrase rather than word by word.
Do use standard pronunciation. Speech with sectional peculiarities is difficult for persons from other parts of the country. Talkers using the almost standard pronunciation of a broadcast network announcer are easiest to understand.
Do speak in a moderately strong voice. This will override unavoidable background noises and prevent drop-outs.
Do keep correct distance between lips and microphone. If the distance is too great, speech is inaudible and background noises creep in; if too small, blaring and blasting result.
Do shield your microphone. Turn your head away from noise generating sources while transmitting.
Do keep the volume of a hand set earphone low.
Do keep speaker volumes to a moderate level.
Do give an accurate evaluation in response to a request for a radio check. A transmission with feedback and/or a high level of background noise is not loud and clear even though the message can be understood. 
Do pause momentarily, when possible, and interrupt your carrier. This allows any other station with higher precedence traffic to break in.
Do adhere strictly to prescribed procedures. Up-to-date radiotelephone procedure is found in the effective edition of ACP 125.
Do transact your business and get off the air. Preliminary calls only waste time when communication is good and the message short. It is NOT necessary to blow into a microphone to test it, nor to repeat portions of messages when no repetition has been requested.
Do Nots:
Don't transmit while surrounded by other persons loudly discussing the next maneuver or event. It confuses receiving stations, and a serious security violation can result.
Don't hold the microphone button in the push-to-talk position until absolutely ready to transmit. Your carrier will block communications on the net.
Don't hold a hand set in such a position while speaking that there is a possibility of having feedback from the earphone added to other extraneous noises.
Don't hold a hand set loosely. A firm pressure on the microphone button prevents unintentional release and consequent signal drop-out.
Don't send test signals for longer than 10 seconds.
Many radio systems also require the operator to wait a few seconds after depressing the PTT button before speaking, and so this is a recommended practice on all systems. The California Statewide EMS Operations and Communications Resource Manual explains why: "Key your transmitter before engaging in speech. The complexities in communications system design often introduce delay in the time it takes to turn on the various components comprising the system. Transmitters take time to come up to full power output, tone squelch decoding equipment requires time to open receivers and receiver voting systems take time to select the best receiver. While these events generally are accomplished in less than one second's time, there are many voice transmissions that could be missed in their entirety if the operator did not delay slightly before beginning his/her voice message. Pausing one second after depressing the push-to-talk button on the microphone or handset is sufficient in most cases to prevent missed words or responses." Further, transmissions should be kept as short as possible; a maximum limit of 20 or 30 seconds is typically suggested: "Transmissions should generally be kept to less than 20 seconds, or within the time specifically allocated by the system. Most radio systems limit transmissions to less than 30 seconds to prevent malfunctioning transmitters or accidentally keyed microphones from dominating a system, and will automatically stop transmitting at the expiration of the allowed time, cutting off additional audio." Speech technique Communicating by voice over two-way radios is more difficult than talking with other people face-to-face or over the telephone. The human voice is changed dramatically by two-way radio circuits. In addition to cutting off important audio bandwidth at both the low and high ends of the human speech spectrum (reducing the bandwidth by at least half), other distortions of the voice occur in the microphone, transmitter, receiver, and speaker—and the radio signal itself is subject to fading, interruptions, and other interference. All of these make human speech more difficult to recognize; in particular, momentary disruptions or distortions of the signal are likely to block the transmission of entire syllables. 
The best way to overcome these problems is by greatly reducing the number of single-syllable words used. This is very much counter to the human nature of taking shortcuts, and so takes training, discipline, and having all operators using the same language, techniques, and procedures. Method of speech Several radio operation procedures manuals, including ACP 125(G), teach the same mnemonic of Rhythm, Speed, Volume, and Pitch (RSVP):
Rhythm: Use short sentences divided into sensible phrases which maintain a natural rhythm; they should not be spoken word by word. Where pauses occur, the press-to-talk should be released to minimize transmission time and permit stations to break in when necessary.
Speed: Speak slightly slower than for normal conversation. Where a message is to be written down by the recipients, or in difficult conditions, extra time should be allowed to compensate for the receiving station experiencing the worst conditions. Speed of transmission is easily adjusted by increasing or decreasing the length of pauses between phrases, as opposed to altering the gaps between words; the latter will create an unnatural, halted style of speech, which is difficult to understand.
Volume: Speak quietly when using whisper facilities; otherwise the volume should be as for normal conversation. Shouting causes distortion.
Pitch: The voice should be pitched slightly higher than for normal conversation to improve clarity.
According to the UK's Radiotelephony Manual, CAP 413, radio operators should talk at a speed of fewer than 100 words per minute; the short airtime sketch below makes this guidance concrete. Radio discipline Communicating over a half-duplex, shared circuit with multiple parties requires a large amount of discipline in following the established procedures and conventions, because whenever one particular radio operator is transmitting, that operator cannot hear any other station on the channel being used. ABC—Accuracy, Brevity, Clarity The initialism ABC is commonly used as a memory aid to reinforce the three most important rules about what to transmit. The Five Ws Whenever a report or a request is transmitted over a two-way radio, the operator should consider including the standard Five Ws in the transmission, so as to eliminate additional requests for information that may occur and thereby delay the request (and other communications):
Who—needs something
What—do they need
Why—do they need it
When—do they need it
Where—do they need it
Other rules:
Think before you speak
Listen before you speak
Answer all calls promptly
Keep the airways free of unnecessary talk
Be brief and to the point
Only transmit facts
Do not act as a relay station unless the net control asks for one
Voice procedures The procedures described in this section can be viewed as the base of all voice radio communications procedures. Service-specific procedures However, the international aviation and maritime industries, whose global expansion in the 20th century coincided with, and was heavily integral to, the development of two-way radio technology, gradually developed their own variations on these procedures. 
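As noted above, a short sketch makes the CAP 413 pacing figure concrete. Only the 100 words-per-minute limit comes from the manual; the function name and the decision to ignore call sign and proword overhead are assumptions made for illustration.

```python
# Rough airtime estimate for a voice message paced per CAP 413
# (fewer than 100 words per minute); ignores call signs and prowords.

def airtime_seconds(word_count: int, wpm: int = 100) -> float:
    """Seconds of airtime for `word_count` words spoken at `wpm` words per minute."""
    return word_count / wpm * 60.0

if __name__ == "__main__":
    # A 25-word message needs only about 15 seconds of talk time at 100 wpm.
    # Compare the 15 minutes reported in the Hurricane Katrina account above:
    # poor traffic-handling procedure, not speaking speed, consumed the time.
    print(f"{airtime_seconds(25):.0f} s")  # -> 15 s
```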
Aeronautical Mobile Service Voice communications procedures for international air traffic control and communications among airplanes are defined by the following International Civil Aviation Organization documents:
Annex 10—Aeronautical Telecommunications, Volume II—Communications Procedures including those with PANS status
Procedures for Air Navigation Services—Air Traffic Management (PANS-ATM, ICAO Doc 4444)
ICAO Doc 9432 (AN/925) Manual of Radiotelephony
Refinements and localization of these procedures can be done by each member country of ICAO; examples include the United States FAA's Pilot/Controller Glossary and the United Kingdom Civil Aviation Authority's Radiotelephony manual. Maritime Mobile Service Voice procedures for use on ships and boats are defined by the International Telecommunication Union and the International Maritime Organization bodies of the United Nations, by international treaties such as the Safety of Life at Sea Convention (a.k.a. SOLAS 74), and by other documents, such as the International Code of Signals:
ITU Radio Regulations Appendix 18
ITU maritime recommendation ITU-R M.1171: Radiotelephony procedures in the maritime mobile service
IMO Resolution A.918(22) (covers Standard Marine Communication Phrases)
Police procedures In the U.S., the organization chartered with devising police communications procedures is APCO International, the Association of Police Communications Officers, which was founded in 1935. For the most part, APCO's procedures have been developed independently of the worldwide standard operating procedures, leading to most police departments using a different spelling alphabet, and the reverse order of calling procedure (e.g. 1-Adam-12 calling Dispatch). However, APCO occasionally follows the international procedure standards, having adopted the U.S. Navy's Morse code procedure signs in the 1930s, and adopting the ICAO radiotelephony spelling alphabet in 1974, replacing its own Adam-Boy-Charles alphabet adopted in 1940, although very few U.S. police departments made the change. APCO has also specified Standard Description Forms, a standard order of reporting information describing people and vehicles. Standard description of persons The Standard Description of Persons format first appeared in the April 1950 edition of the APCO Bulletin. It starts with a description of the person themself and finishes with a description of what they are wearing at the time. Standard description of automobiles APCO promotes the mnemonic CYMBALS for reporting vehicle descriptions. Calling procedure The voice calling procedure (sometimes referred to as "method of calling" or "communications order model") is the standardized method of establishing communications. The order of transmitting the called station's call sign, followed by the calling station's call sign, was first specified for voice communications in the International Radiotelegraph Convention of Washington, 1927; however, it matches the order used for the radiotelegraph calling procedure that had already existed since at least 1912. In the United States, the radiotelegraph calling procedure is legally defined in FCC regulations Part 80.97 (47 CFR 80.97(c)), which specifies that the method of calling begins with "the call sign of the station called, not more than twice, [THIS IS] and the call sign of the calling station, not more than twice". This order is also specified by the ICAO for international aviation radio procedures (Annex 10 to the Convention on 
International Civil Aviation: Aeronautical Telecommunications), the FAA (Aeronautical Information Manual), the ITU-R for the Maritime Mobile Service (ITU-R M.1171), and the U.S. Coast Guard (Radiotelephone Handbook). The March 1940 issue of The APCO Bulletin explains that this order was found to have better results than other methods. When initiating a call, you:
MUST give the callsign of the station you are calling, twice (never three times);
MUST follow that callsign with the proword THIS IS;
MUST give your own callsign once, and once only; then communicate;
SHOULD end your transmission with the proword OVER, or OUT, although this can be omitted when using a repeater that inserts a courtesy tone at the end of each transmission.
Break-in procedure Stations needing to interrupt other communications in progress shall use the most appropriate of the below procedure words, followed by their call sign. The use of these emergency signals is governed by the International Radio Regulations, which have the force of law in most countries; the signals were originally defined in the International Code of Signals and the International Convention for the Safety of Life at Sea, so the rules for their use emanate from those documents. All of these break-in procedure words must be followed by your call sign, because that information will help the NCS determine the relative importance when dealing with multiple break-ins of the same precedence, and to determine the relevance when multiple calls offering a CORRECTION or INFO are received. Order of priority of communications The priority levels described below are derived from Article 44 of the ITU Radio Regulations, Chapter VIII, and were codified as early as the International Telecommunication Convention, Atlantic City, 1947 (but probably existed much earlier). Procedure words Procedure words are a direct voice replacement for procedure signs (prosigns) and operating signals (such as Q codes), and must always be used on radiotelephone channels in their place. Prosigns/operating signals may only be used with Morse code (as well as semaphore flags, light signals, etc.) and TTY (including all forms of landline and radio teletype, and Amateur radio digital interactive modes). The most complete set of procedure words is defined in the U.S. military's Allied Communications Publication ACP 125(G). Radio checks Whenever an operator is transmitting and uncertain of how good their radio and/or voice signal is, they can use the following procedure words to ask for a signal strength and readability report. This is the modern method of signal reporting that replaced the old 1 to 5 scale reports for the two aspects of a radio signal; as with the procedure words, it is defined in ACP 125(G). The prowords listed below are for use when initiating and answering queries concerning signal strength and readability. Signal strength prowords In the tables below, the mapping of the QSA and QRK Morse code operating signals is interpretive because there is not a 1:1 correlation. See QSA and QRK code for the full procedure specification. Readability prowords The reporting format is one of the signal strength prowords followed by an appropriate conjunction, with that followed by one of the readability prowords:
LOUD AND CLEAR means Excellent copy with no noise
GOOD AND READABLE means Good copy with slight noise
FAIR BUT READABLE means Fair copy, occasional fills are needed
WEAK WITH INTERFERENCE means Weak copy, frequent fills are needed because of interference from other radio signals 
WEAK AND UNREADABLE means Unable to copy, a relay is required
According to military usage, if the response would be LOUD AND CLEAR, you may also respond simply with the proword ROGER. However, because this reporting format is not currently used widely outside of military organizations, it is better to always use the full format, so that there is no doubt about the response by parties unfamiliar with minimization and other shorthand radio operating procedures. International Radiotelephony Spelling Alphabet The International Civil Aviation Organization (ICAO), International Telecommunication Union, and the International Maritime Organization (all agencies of the United Nations), plus NATO, all specify the use of the ICAO Radiotelephony Spelling Alphabet for use when it is necessary to spell out words, callsigns, and other letter/number sequences. It was developed with international cooperation and ratified in 1956, and has been in use unmodified ever since. Rules for spelling Spelling is necessary when difficult radio conditions prevent the reception of an obscure word, or of a word or group, which is unpronounceable. Such words or groups within the text of plain language messages may be spelt using the phonetic alphabet; they are preceded by the proword "I SPELL". If the word is pronounceable and it is advantageous to do so, then it should be spoken before and after the spelling to help identify the word. Rules for numbers and figures When radio conditions are satisfactory and confusion will not arise, numbers in the text of a message may be spoken as in normal speech. During difficult conditions, or when extra care is necessary to avoid misunderstanding, numbers are sent figure by figure preceded by the proword FIGURES. This proword warns that figures follow immediately, to help distinguish them from other similarly pronounced words. Closing down Ending a two-way radio call has its own set of procedures:
Generally, the station that originated the call is the station that should initiate termination of the call.
All stations indicate their last transmission of a particular communication exchange by using the proword OUT (I intend no further communication with you at this time) or OUT TO YOU (I am ending my communication with you and calling another station).
Stations going off the air (specifically turning their radio equipment off or leaving the station unattended) can additionally state that they are "closing" or "closing down", based on the proword command "CLOSE DOWN".
Radio nets Nets operate either on schedule or continuously (continuous watch). Nets operating on schedule handle traffic only at definite, prearranged times and in accordance with a prearranged schedule of intercommunication. Nets operating continuously are prepared to handle traffic at any time; they maintain operators on duty at all stations in the net at all times. When practicable, messages relating to schedules will be transmitted by a means of signal communication other than radio. Net manager A net manager is the person who supervises the creation and operation of a net over multiple sessions. This person will specify the format, date, time, participants, and the net control script. The net manager will also choose the Net Control Station for each net, and may occasionally take on that function, especially in smaller organizations. 
Net Control Station Radio nets are like conference calls in that both have a moderator who initiates the group communication, ensures all participants follow the standard procedures, and determines and directs when each other station may talk. The moderator in a radio net is called the Net Control Station, formally abbreviated NCS, and has the following duties:
Establishes the net and closes the net;
Directs net activities, such as passing traffic, to maintain optimum efficiency;
Chooses net frequency, maintains circuit discipline and frequency accuracy;
Maintains a net log and records participation in the net and movement of messages (always knows who is on and off net);
Appoints one or more Alternate Net Control Stations (ANCS);
Determines whether and when to conduct network continuity checks;
Determines when full procedure and full call signs may enhance communications;
Subject to Net Manager guidance, directs a net to be directed or free.
The Net Control Station will, for each net, appoint at least one Alternate Net Control Station, formally abbreviated ANCS (abbreviated NC2 in WWII procedures), who has the following duties:
Assists the NCS to maintain optimum efficiency;
Assumes NCS duties in the event that the NCS develops station problems;
Assumes NCS duties for a portion of the net, as directed or as needed;
Serves as a resource for the NCS; echoes transmissions of the NCS if, and only if, directed to do so by the NCS;
Maintains a duplicate net log.
Structure of the net Nets can be described as always having a net opening and a net closing, with a roll call normally following the net opening, itself followed by regular net business, which may include announcements, official business, and message passing. Military nets will follow a very abbreviated and opaque version of the structure outlined below, but will still have the critical elements of opening, roll call, late check-ins, and closing. A net should always operate on the same principle as the inverted pyramid used in journalism—the most important communications always come first, followed by content in ever lower levels of priority.
Net opening: identification of the NCS; announcement of the regular date, time, and frequency of the net; purpose of the net
Roll call: a call for stations to check in, oftentimes from a roster of regular stations; a call for late check-ins (stations on the roster who did not respond to the first check-in period); a call for guest stations to check in
Net business, with an optional conversion to a free net
Net closing
Each net will typically have a main purpose, which varies according to the organization conducting the net, and which occurs during the net business phase. For amateur radio nets, it's typically for the purpose of allowing stations to discuss their recent operating activities (stations worked, antennas built, etc.) or to swap equipment. For Military Auxiliary Radio System and National Traffic System nets, net business will involve mainly the passing of formal messages, known as radiograms. Time synchronization procedures Stations without the ability to acquire a time signal accurate to at least one second should request a time check at the start of every shift, or once a day at minimum. Stations may ask the NCS for a time check by waiting for an appropriate pause, keying up and stating their call sign, and then using the prowords "REQUEST TIME CHECK, OVER" when the NCS calls on them. Otherwise, they may ask any station that has access to any of the above time signals for a time check. 
Once requested, the sending station will state the current UTC time plus one minute, followed by a countdown, as follows: This is Net Control, TIME CHECK WUN AIT ZERO TOO ZULU (pause) WUN FIFE SECONDS…WUN ZERO SECONDS…FIFE FOWER TREE TOO WUN…TIME WUN AIT ZERO TOO ZULU…OVER. The receiving station will then use the proword "TIME" as the sync mark, indicating zero seconds. If the local time is desired instead of UTC, substitute the time zone code "JULIETT" for "ZULU". Instead of providing time checks on an individual basis, the NCS should give advance notice of a time check by stating, for example, "TIME CHECK AT 0900 JULIETT", giving all stations sufficient time to prepare their clocks and watches for adjustment. A period of at least five minutes is suggested. Modes of radio net operation Directed Net A net in which no station other than the net control station can communicate with any other station, except for the transmission of urgent messages, without first obtaining the permission of the net control station. Free net A net in which any station may communicate with any other station in the same net without first obtaining permission from the net control station to do so. Types of net calls When calling stations who are part of a net, a variety of types of calls can be used. Types of radio nets The Civil Air Patrol and International Amateur Radio Union define a number of different nets which represent the typical types and ranges used in civilian radio communications. Radio net procedure words The Allied Communication Publication ACP 125(G) has the most complete set of procedure words used in radio nets. Example usage Aeronautical mobile procedure The Federal Aviation Administration uses the term phraseology to describe voice procedure or communications protocols used over telecommunications circuits. An example is air traffic control radio communications. Standardised wording is used and the person receiving the message may repeat critical parts of the message back to the sender. This is especially true of safety-critical messages. Consider this example of an exchange between a controller and an aircraft: Aircraft: Boston Tower, Warrior three five foxtrot (35F), holding short of two two right. Tower: Warrior three five foxtrot, Boston Tower, runway two two right, cleared for immediate takeoff. Aircraft: Roger, cleared for immediate takeoff, two two right, Warrior three five foxtrot. On telecommunications circuits, disambiguation is a critical function of voice procedure. Due to any number of variables, including radio static, a busy or loud environment, or similarity in the phonetics of different words, a critical piece of information can be misheard or misunderstood; for instance, a pilot being ordered to eleven thousand as opposed to seven thousand (by hearing "even"). To reduce ambiguity, critical information may be broken down and read as separate letters and numbers. To avoid error or misunderstanding, pilots will often read back altitudes in the tens of thousands using both separate numbers and the single word (example: given a climb to 10,000 ft, the pilot replies "[Callsign] climbing to One zero, Ten Thousand"). However, this is usually only used to differentiate between 10,000 and 11,000 ft, since these are the most common altitude deviations. The runway number read visually as eighteen, when read over a voice circuit as part of an instruction, becomes one eight. In some cases a spelling alphabet is used (also called a radio alphabet or a phonetic alphabet).
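To make the spelling and figures rules concrete, here is an informal Python sketch (an illustration only, not part of any official procedure; the letter words follow the usage in this article, e.g. "Alpha", where ICAO officially writes "Alfa", and the digit words follow the transcripts shown above):

```python
# Spell a letter/number group as it would be spoken over the radio.
# Word lists follow this article's usage; ICAO officially spells "Alfa".
LETTERS = {
    'A': 'Alpha',   'B': 'Bravo',   'C': 'Charlie', 'D': 'Delta',
    'E': 'Echo',    'F': 'Foxtrot', 'G': 'Golf',    'H': 'Hotel',
    'I': 'India',   'J': 'Juliett', 'K': 'Kilo',    'L': 'Lima',
    'M': 'Mike',    'N': 'November','O': 'Oscar',   'P': 'Papa',
    'Q': 'Quebec',  'R': 'Romeo',   'S': 'Sierra',  'T': 'Tango',
    'U': 'Uniform', 'V': 'Victor',  'W': 'Whiskey', 'X': 'X-ray',
    'Y': 'Yankee',  'Z': 'Zulu',
}
DIGITS = {
    '0': 'ZERO', '1': 'WUN', '2': 'TOO', '3': 'TREE', '4': 'FOWER',
    '5': 'FIFE', '6': 'SIX', '7': 'SEVEN', '8': 'AIT', '9': 'NINER',
}

def spell(group: str) -> str:
    """Return the spoken spelling of a letter/number group."""
    return ' '.join(LETTERS.get(c) or DIGITS.get(c, c) for c in group.upper())

print(spell('Main'))  # Mike Alpha India November
print(spell('1802'))  # WUN AIT ZERO TOO
```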
Instead of the letters AB, the words Alpha Bravo are used. Main Street becomes Mike Alpha India November street, clearly separating it from Drain Street and Wayne Street. The numbers 5 and 9 are pronounced "fife" and "niner" respectively, since "five" and "nine" can sound the same over the radio. The use of 'niner' in place of 'nine' is due to German-speaking NATO allies, for whom the spoken word 'nine' could be confused with the German word 'nein' ('no'). Over fire service radios, phraseology may include words that indicate the priority of a message, for example: Forty Four Truck to the Bronx, Urgent! or San Diego, Engine Forty, Emergency traffic! Words may be repeated to modify them from traditional use in order to describe a critical message: Evacuate! Evacuate! Evacuate! A similar technique may be used in aviation for critical messages. For example, this transmission might be sent to an aircraft that has just landed and has not yet cleared the runway. Echo-Foxtrot-Charlie, Tower. I have engine out traffic on short final. Exit runway at next taxiway. Expedite! Expedite! Police radios also use this technique to escalate a call that is quickly becoming an emergency. Code 3! Code 3! Code 3! Railroads have similar processes. When instructions are read to a locomotive engineer, they are preceded by the train or locomotive number, direction of travel and the engineer's name. This reduces the possibility that a set of instructions will be acted on by the wrong locomotive engineer: Five Sixty Six West, Engineer Jones, okay to proceed two blocks west to Ravendale. Phraseology on telecommunications circuits may employ special phrases like ten codes, Sigalert, Quick Alert! or road service towing abbreviations such as T6. This jargon may abbreviate critical data and alert listeners by identifying the priority of a message. It may also reduce errors caused by ambiguities involving rhyming, or similar-sounding, words. Maritime mobile procedure (done on VHF Ch 16) Boat "Albacore" talking to Boat "Bronwyn" Albacore: Bronwyn, Bronwyn, Bronwyn* this is Albacore, OVER. (*3×1, repeating the receiver's callsign up to 3 times, and the sender's once, is proper procedure and should be used when first establishing contact, especially over a long distance. A 1×1, i.e. 'Bronwyn this is Albacore,' or 2×1, i.e. 'Bronwyn, Bronwyn, this is Albacore,' is less proper, but acceptable especially for a subsequent contact.) Bronwyn: Albacore, this is Bronwyn, OVER. (** At this point switch to a working channel, as 16 is for distress and hailing only **) Albacore: This is Albacore. Want a tow and are you OK for tea at Osbourne Bay? OVER. Bronwyn: This is Bronwyn. Negative, got engine running, 1600 at clubhouse fine with us. OVER. Albacore: This is Albacore, ROGER, OUT. "Copy that" is incorrect. COPY is used when a message has been intercepted by another station, i.e. a third station would respond: Nonesuch: Bronwyn, this is Nonesuch. Copied your previous, will also see you there, OUT. One should always use one's own callsign when transmitting. British Army Station C21A (charlie-two-one Alpha) talking to C33B (charlie-three-three Bravo): C21A: C33B, this is C21A, message, OVER. C33B: C33B, send, OVER. C21A: Have you got C1ØD Sunray at your location, OVER. C33B: Negative, I think he is with C3ØC, OVER. C21A: Roger, OUT. The advantage of this sequence is that the recipient always knows who sent the message. The downside is that the listener only knows the intended recipient from the context of the conversation.
This format requires moderate signal quality for the radio operator to keep track of the conversations. However, a broadcast message and response is fairly efficient. Sunray (lead): Charlie Charlie (collective call, meaning everyone), this is Sunray. Radio check, OVER. C-E-5-9: Sunray, this is Charlie Echo Five Niner, LOUD AND CLEAR, OVER. Y-S-7-2: Sunray, this is Yankee Sierra Seven Two, reading three by four, OVER. B-G-5-2: Sunray, this is Bravo Golf Five Two, Say again, OVER. E-F-2-0: Sunray, this is Echo Foxtrot Two Zero, reading five by four, OVER. Sunray: Charlie Charlie, this is Sunray, OUT. The "Say again" response from B-G-5-2 tells Sunray that the radio signal is not good and possibly unreadable. Sunray can then initiate another call to B-G-5-2 and start another radio check, or instruct that station to relocate, change settings, etc. So it could carry on with: Sunray: Bravo Golf Five Two, this is Sunray, RADIO CHECK, OVER. B-G-5-2: Sunray, this is Bravo Golf Five Two, unclear, read you two by three, OVER. Sunray: Sunray copies. Relocate to grid One Niner Zero Three Three Two for a better signal, OVER. B-G-5-2: Bravo Golf Five Two copies and is Oscar Mike, Bravo Golf Five Two OUT. See also Radiotelephone Mobile radio R-S-T system (for Amateur radio only) Circuit Merit (for wired and wireless telephone circuits only, not radiotelephony) List of international common standards Mayday Military slang Station identification Allied Communication Procedures Notes External links Origins of Hamspeak Rec. ITU-R M.1171 Radiotelephony Procedures in the Maritime Mobile Service Military communications Public safety communications Radio communications Oral communication
Radiotelephony procedure
[ "Engineering" ]
8,749
[ "Military communications", "Telecommunications engineering", "Radio communications" ]
633,325
https://en.wikipedia.org/wiki/Neutrino%20astronomy
Neutrino astronomy is the branch of astronomy that gathers information about astronomical objects by observing and studying neutrinos emitted by them with the help of neutrino detectors in special Earth observatories. It is an emerging field in astroparticle physics providing insights into the high-energy and non-thermal processes in the universe. Neutrinos are nearly massless, electrically neutral elementary particles. They are created as a result of certain types of radioactive decay, in nuclear reactions such as those that take place in the Sun or in high-energy astrophysical phenomena, in nuclear reactors, or when cosmic rays hit atoms in the atmosphere. Neutrinos rarely interact with matter (only via the weak nuclear force), travel at nearly the speed of light in straight lines, and pass through large amounts of matter without notable absorption and without being deflected by magnetic fields. Unlike photons, neutrinos rarely scatter along their trajectory; like photons, they are among the most common particles in the universe. Because of this, neutrinos offer a unique opportunity to observe processes that are inaccessible to optical telescopes, such as reactions in the Sun's core. Neutrinos created in the Sun's core are barely absorbed, so a large quantity of them escape from the Sun and reach the Earth. Neutrinos also retain a strong pointing direction back to their source, unlike charged-particle cosmic rays, which are deflected by magnetic fields. Neutrinos are very hard to detect precisely because they interact so rarely. In order to detect neutrinos, scientists have to shield the detectors from cosmic rays, which can penetrate hundreds of meters of rock. Neutrinos, on the other hand, can pass through the entire planet without being absorbed, like "ghost particles". That is why neutrino detectors are placed many hundreds of meters underground, usually at the bottom of mines. There, a neutrino-detecting liquid such as a chlorine-rich solution is placed; the neutrinos react with a chlorine isotope and can create radioactive argon. Gallium-to-germanium conversion has also been used. The IceCube Neutrino Observatory, completed in 2010 at the South Pole, is the biggest neutrino detector; it consists of thousands of optical sensors embedded in a cubic kilometer of deep, ultra-transparent ice, and it detects light emitted by charged particles that are produced when a single neutrino collides with a proton or neutron inside an atom. The resulting nuclear reaction produces secondary particles traveling at high speeds that give off a blue light called Cherenkov radiation. Super-Kamiokande in Japan and ANTARES and KM3NeT in the Mediterranean are some other important neutrino detectors. Since neutrinos interact weakly, neutrino detectors must have large target masses (often thousands of tons). The detectors also must use shielding and effective software to remove background signal. Since neutrinos are very difficult to detect, the only individual bodies that have been studied in this way are the Sun and the supernova SN 1987A, which exploded in 1987. Scientists had predicted that supernova explosions would produce bursts of neutrinos, and such a burst was indeed detected from SN 1987A. In the future, neutrino astronomy promises to probe other aspects of the universe, including sources coincident with gravitational waves, gamma-ray bursts, the cosmic neutrino background, the origins of ultra-high-energy neutrinos, neutrino properties (such as the neutrino mass hierarchy), dark matter properties, and more.
It will become an integral part of multi-messenger astronomy, complementing gravitational astronomy and traditional telescopic astronomy. History Neutrinos were first recorded in 1956 by Clyde Cowan and Frederick Reines in an experiment employing a nearby nuclear reactor as a neutrino source. Their discovery was acknowledged with a Nobel Prize in Physics in 1995. This was followed by the first atmospheric neutrino detection in 1965 by two groups almost simultaneously. One was led by Frederick Reines, who operated a liquid scintillator (the Case-Witwatersrand-Irvine, or CWI, detector) in the East Rand gold mine in South Africa at an 8.8 km water depth equivalent. The other was a Bombay-Osaka-Durham collaboration that operated in the Indian Kolar Gold Field mine at an equivalent water depth of 7.5 km. Although the KGF group detected neutrino candidates two months later than Reines' CWI group, it was given formal priority for publishing its findings two weeks earlier. In 1968, Raymond Davis, Jr. and John N. Bahcall successfully detected the first solar neutrinos in the Homestake experiment. Davis and Japanese physicist Masatoshi Koshiba were jointly awarded half of the 2002 Nobel Prize in Physics "for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos" (the other half went to Riccardo Giacconi for corresponding pioneering contributions which have led to the discovery of cosmic X-ray sources). The first generation of undersea neutrino telescope projects began with the proposal by Moisey Markov in 1960 "...to install detectors deep in a lake or a sea and to determine the location of charged particles with the help of Cherenkov radiation." The first underwater neutrino telescope began as the DUMAND project. DUMAND stands for Deep Underwater Muon and Neutrino Detector. The project began in 1976 and, although it was eventually cancelled in 1995, acted as a precursor to many of the telescopes that followed in subsequent decades. The Baikal Neutrino Telescope is installed in the southern part of Lake Baikal in Russia. The detector is located at a depth of 1.1 km and began surveys in 1980. In 1993, it was the first to deploy three strings to reconstruct the muon trajectories, as well as the first to record atmospheric neutrinos underwater. AMANDA (Antarctic Muon And Neutrino Detector Array) used the 3 km thick ice layer at the South Pole and was located several hundred meters from the Amundsen-Scott station. Holes 60 cm in diameter were drilled with pressurized hot water, and strings with optical modules were deployed in them before the water refroze. The depth proved to be insufficient for reconstructing trajectories, owing to the scattering of light on air bubbles. A second group of four strings was added in 1995/96 at depths of about 2000 m, which was sufficient for track reconstruction. The AMANDA array was subsequently upgraded until January 2000, when it consisted of 19 strings with a total of 667 optical modules at a depth range between 1500 m and 2000 m. AMANDA would eventually be the predecessor to IceCube in 2005. An example of an early neutrino detector is the Artyomovsk Scintillation Detector (ASD), located in the Soledar salt mine in Ukraine at a depth of more than 100 m.
It was created in the Department of High Energy Leptons and Neutrino Astrophysics of the Institute of Nuclear Research of the USSR Academy of Sciences in 1969 to study antineutrino fluxes from collapsing stars in the Galaxy, as well as the spectrum and interactions of cosmic-ray muons with energies up to 10^13 eV. A feature of the detector is a 100-ton scintillation tank with dimensions on the order of the length of an electromagnetic shower with an initial energy of 100 GeV. 21st century After the decline of DUMAND, the participating groups split into three branches to explore deep-sea options in the Mediterranean Sea. ANTARES was anchored to the sea floor in the region off Toulon at the French Mediterranean coast. It consists of 12 strings, each carrying 25 "storeys" equipped with three optical modules, an electronic container, and calibration devices, down to a maximum depth of 2475 m. NEMO (NEutrino Mediterranean Observatory) was pursued by Italian groups to investigate the feasibility of a cubic-kilometer-scale deep-sea detector. A suitable site at a depth of 3.5 km, about 100 km off Capo Passero at the south-eastern coast of Sicily, has been identified. From 2007 to 2011, the first prototyping phase tested a "mini-tower" with 4 bars deployed for several weeks near Catania at a depth of 2 km. The second phase, as well as plans to deploy the full-size prototype tower, will be pursued in the KM3NeT framework. The NESTOR Project was installed in 2004 at a depth of 4 km and operated for one month, until a failure of the cable to shore forced it to be terminated. The data taken still successfully demonstrated the detector's functionality and provided a measurement of the atmospheric muon flux. The proof of concept will be implemented in the KM3NeT framework. The second generation of deep-sea neutrino telescope projects reaches or even exceeds the size originally conceived by the DUMAND pioneers. IceCube, located at the South Pole and incorporating its predecessor AMANDA, was completed in December 2010. It currently consists of 5160 digital optical modules installed on 86 strings at depths of 1450 to 2550 m in the Antarctic ice. The KM3NeT in the Mediterranean Sea and the GVD (the Baikal Gigaton Volume Detector) are in their preparatory/prototyping phase. IceCube instruments 1 km³ of ice. GVD is also planned to cover 1 km³, but at a much higher energy threshold. KM3NeT is planned to cover several km³ and to have two components: ARCA (Astroparticle Research with Cosmics in the Abyss) and ORCA (Oscillations Research with Cosmics in the Abyss). Both KM3NeT and GVD have completed at least part of their construction, and it is expected that these two, along with IceCube, will form a global neutrino observatory. In July 2018, the IceCube Neutrino Observatory announced that it had traced an extremely-high-energy neutrino that hit its Antarctica-based research station in September 2017 back to its point of origin in the blazar TXS 0506+056, located 3.7 billion light-years away in the direction of the constellation Orion. This was the first time that a neutrino detector had been used to locate an object in space, and the first identification of a source of cosmic rays. In November 2022, the IceCube collaboration reported further significant progress towards identifying the origin of cosmic rays: the observation of 79 neutrinos with energies over 1 TeV originating from the nearby galaxy M77.
These findings in a well-known object are expected to help in studying the active nucleus of this galaxy, as well as serving as a baseline for future observations. In June 2023, astronomers reported using a new technique to detect, for the first time, the release of neutrinos from the galactic plane of the Milky Way galaxy. Detection methods Neutrinos interact incredibly rarely with matter, so the vast majority of neutrinos will pass through a detector without interacting. If a neutrino does interact, it will only do so once. Therefore, to perform neutrino astronomy, large detectors must be used to obtain enough statistics. The method of neutrino detection depends on the energy and type of the neutrino. A famous example is that electron antineutrinos can interact with a nucleus in the detector by inverse beta decay and produce a positron and a neutron. The positron will immediately annihilate with an electron, producing two 511 keV photons. The neutron will be captured by another nucleus and give off a gamma ray with an energy of a few MeV. In general, neutrinos can interact through neutral-current and charged-current interactions. In neutral-current interactions, the neutrino interacts with a nucleus or electron and retains its original flavor. In charged-current interactions, the neutrino is absorbed by the nucleus and produces a lepton corresponding to the neutrino's flavor (\nu_e \rightarrow e^-, \nu_\mu \rightarrow \mu^-, etc.). If the resulting charged particles are moving fast enough, they can create Cherenkov light. To observe neutrino interactions, detectors use photomultiplier tubes (PMTs) to detect individual photons. From the timing of the photons, it is possible to determine the time and place of the neutrino interaction. If the neutrino creates a muon during its interaction, then the muon will travel in a line, creating a "track" of Cherenkov photons. The data from this track can be used to reconstruct the directionality of the muon. For high-energy interactions, the neutrino and muon directions are the same, so it is possible to tell where the neutrino came from. This pointing direction is important for neutrino astronomy beyond the Solar System. Along with time, position, and possibly direction, it is possible to infer the energy of the neutrino from the interactions. The number of photons emitted is related to the neutrino energy, and neutrino energy is important for measuring the fluxes of solar and geo-neutrinos. Due to the rarity of neutrino interactions, it is important to maintain a low background signal. For this reason, most neutrino detectors are constructed under a rock or water overburden. This overburden shields against most cosmic rays in the atmosphere; only some of the highest-energy muons are able to penetrate to the depths of these detectors. Detectors must include ways of dealing with data from muons so as not to confuse them with neutrinos. Along with more complicated measures, if a muon track is first detected outside of the desired "fiducial" volume, the event is treated as a muon and not considered. Ignoring events outside the fiducial volume also decreases the signal from radiation outside the detector. Despite shielding efforts, it is inevitable that some background will make it into the detector, often in the form of radioactive impurities within the detector itself. At this point, if it is impossible to differentiate between the background and true signal, a Monte Carlo simulation must be used to model the background.
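While it may be unknown whether an individual event is background or signal, it is possible to detect an excess above the background, signifying the existence of the desired signal. As a rough illustration of how photon arrival times constrain an interaction, the following sketch (a simplified toy, not any experiment's actual reconstruction code; the sensor positions, hit times, and effective light speed are invented for the example, and a plain least-squares fit stands in for the real likelihood methods) fits a point-like emission position and time to a handful of PMT hits:

```python
import numpy as np
from scipy.optimize import least_squares

C_MEDIUM = 0.22  # assumed effective light speed in the medium, m/ns

# Hypothetical PMT positions (m) and hit times (ns) for one event.
pmt_xyz = np.array([[0, 0, 10], [10, 0, 0], [0, 10, 0],
                    [-10, 0, 0], [0, -10, 0], [0, 0, -10]], dtype=float)
hit_t = np.array([52.1, 49.8, 50.3, 55.6, 54.9, 58.2])

def residuals(params):
    """Observed hit times minus predicted straight-line light-travel times."""
    x, y, z, t0 = params
    dist = np.linalg.norm(pmt_xyz - np.array([x, y, z]), axis=1)
    return hit_t - (t0 + dist / C_MEDIUM)

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0, 40.0])
print("vertex (m):", fit.x[:3], " emission time (ns):", fit.x[3])
```

The same idea, extended with the Cherenkov cone geometry and photon scattering, underlies real vertex and track reconstruction.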
Applications When astronomical bodies, such as the Sun, are studied using light, only the surface of the object can be directly observed. Any light produced in the core of a star will interact with gas particles in the outer layers of the star, taking hundreds of thousands of years to make it to the surface, making it impossible to observe the core directly. Since neutrinos are also created in the cores of stars (as a result of stellar fusion), the core can be observed using neutrino astronomy. Other sources of neutrinos, such as those released by supernovae, have been detected. Several neutrino experiments have formed the Supernova Early Warning System (SNEWS), where they search for an increase of neutrino flux that could signal a supernova event. There are currently goals to detect neutrinos from other sources, such as active galactic nuclei (AGN), as well as gamma-ray bursts and starburst galaxies. Neutrino astronomy may also indirectly detect dark matter. Supernova warning Seven neutrino experiments (Super-K, LVD, IceCube, KamLAND, Borexino, Daya Bay, and HALO) work together as the Supernova Early Warning System (SNEWS). In a core-collapse supernova, ninety-nine percent of the energy released will be in neutrinos. While photons can be trapped in the dense supernova for hours, neutrinos are able to escape on the order of seconds. Since neutrinos travel at roughly the speed of light, they can reach Earth before photons do. If two or more of the SNEWS detectors observe a coincidence of an increased flux of neutrinos, an alert is sent to professional and amateur astronomers to be on the lookout for supernova light. By using the distance between detectors and the time difference between detections, the alert can also include directionality as to the supernova's location in the sky. Stellar processes The Sun, like other stars, is powered by nuclear fusion in its core. The core is extremely large and dense, meaning that photons produced in the core take a long time to diffuse outward. Therefore, neutrinos are the only way that we can obtain real-time data about the nuclear processes in the Sun. There are two main processes for stellar nuclear fusion. The first is the proton-proton (PP) chain, in which protons are fused together into helium, sometimes temporarily creating the heavier elements lithium, beryllium, and boron along the way. The second is the CNO cycle, in which carbon, nitrogen, and oxygen are fused with protons and then undergo alpha decay (helium nucleus emission) to begin the cycle again. The PP chain is the primary process in the Sun, while the CNO cycle is more dominant in stars more massive than the Sun. Each step in the process has an allowed energy spectrum for the neutrino (or a discrete energy for electron-capture processes). The relative rates of the Sun's nuclear processes can be determined by observing its neutrino flux at different energies. This would shed light on the Sun's properties, such as metallicity, which describes its content of heavier elements. Borexino is one of the detectors studying solar neutrinos. In 2018, they found 5σ significance for the existence of neutrinos from the fusing of two protons with an electron (pep neutrinos). In 2020, they found for the first time evidence of CNO neutrinos in the Sun.
Improvements on the CNO measurement will be especially helpful in determining the Sun's metallicity. Composition and structure of Earth The interior of Earth contains radioactive elements such as ^{40}K and the decay chains of ^{238}U and ^{232}Th. These elements decay via beta decay, which emits an antineutrino. The energies of these antineutrinos depend on the parent nucleus. Therefore, by detecting the antineutrino flux as a function of energy, the relative compositions of these elements can be obtained and a limit set on the total power output of Earth's geo-reactor. Most of our current data about the core and mantle of Earth comes from seismic data, which does not provide any information as to the nuclear composition of these layers. Borexino has detected these geoneutrinos through the process \bar{\nu}_e + p \rightarrow e^+ + n. The resulting positron will immediately annihilate with an electron and produce two gamma rays, each with an energy of 511 keV (the rest energy of an electron). The neutron will later be captured by another nucleus, which will lead to a 2.22 MeV gamma ray as the nucleus de-excites. This capture on average takes on the order of 256 microseconds. By searching for time and spatial coincidence of these gamma rays, the experimenters can reliably identify candidate events. With over 3,200 days of data, Borexino used geoneutrinos to place constraints on the composition and power output of the mantle. They found that the ratio of ^{238}U to ^{232}Th is the same as in chondritic meteorites. The power output from uranium and thorium in Earth's mantle was found to be 14.2–35.7 TW at a 68% confidence interval. Neutrino tomography also provides insight into the interior of Earth. For neutrinos with energies of a few TeV, the interaction probability becomes non-negligible when passing through Earth. The interaction probability depends on the number of nucleons the neutrino passes along its path, which is directly related to density. If the initial flux is known (as it is in the case of atmospheric neutrinos), then detecting the final flux provides information about the interactions that occurred. The density can then be extrapolated from knowledge of these interactions. This can provide an independent check on the information obtained from seismic data. In 2018, one year's worth of IceCube data was evaluated to perform neutrino tomography. The analysis studied upward-going muons, which provide both the energy and directionality of the neutrinos after passing through the Earth. A model of Earth with five layers of constant density was fit to the data, and the resulting densities agreed with seismic data. The values determined for the total mass of Earth, the mass of the core, and the moment of inertia all agree with the data obtained from seismic and gravitational measurements. With the current data, the uncertainties on these values are still large, but future data from IceCube and KM3NeT will place tighter constraints on them. High-energy astrophysical events Neutrinos can either be primary cosmic rays (astrophysical neutrinos) or be produced by cosmic-ray interactions. In the latter case, the primary cosmic ray will produce pions and kaons in the atmosphere. As these hadrons decay, they produce neutrinos (called atmospheric neutrinos). At low energies, the flux of atmospheric neutrinos is many times greater than that of astrophysical neutrinos. At high energies, the pions and kaons have a longer lifetime (due to relativistic time dilation).
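The hadrons are then more likely to interact before they decay, so the astrophysical neutrino flux comes to dominate at high energies (~100 TeV). To see why a harder spectrum must eventually overtake a softer one, the sketch below compares two power laws; the spectral indices are merely representative, and the normalizations are invented purely for illustration (they are not measured values):

```python
# Toy comparison of a soft atmospheric spectrum with a hard astrophysical one.
# Normalizations (k) and indices (gamma) are illustrative, not measured values.
k_atm, gamma_atm = 1.0e4, 3.7    # soft spectrum: falls quickly with energy
k_astro, gamma_astro = 1.0, 2.0  # hard spectrum: falls slowly with energy

def flux(k, gamma, e_tev):
    return k * e_tev ** (-gamma)

# The power laws cross where k_atm * E**-3.7 == k_astro * E**-2.0, i.e.
# E* = (k_atm / k_astro) ** (1 / (gamma_atm - gamma_astro)).
e_cross = (k_atm / k_astro) ** (1.0 / (gamma_atm - gamma_astro))
print(f"crossover energy: {e_cross:.0f} TeV")  # ~225 TeV with these toy numbers

for e in (1.0, 10.0, 100.0, 1000.0):  # energies in TeV
    print(e, flux(k_atm, gamma_atm, e), flux(k_astro, gamma_astro, e))
```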
To perform neutrino astronomy of high-energy objects, experiments rely on the highest-energy neutrinos. To perform astronomy of distant objects, good angular resolution is required. Neutrinos are electrically neutral and interact weakly, so they travel mostly unperturbed in straight lines. If a neutrino interacts within a detector and produces a muon, the muon will produce an observable track. At high energies, the neutrino direction and muon direction are closely correlated, so it is possible to trace back the direction of the incoming neutrino. These high-energy neutrinos are either the primary or secondary cosmic rays produced by energetic astrophysical processes. Observing neutrinos could provide insights into these processes beyond what is observable with electromagnetic radiation. In the case of the neutrino detected from a distant blazar, multi-wavelength astronomy was used to show spatial coincidence, confirming the blazar as the source. In the future, neutrinos could be used to supplement electromagnetic and gravitational observations, leading to multi-messenger astronomy. See also List of neutrino experiments References External links Astronomical sub-disciplines
Neutrino astronomy
[ "Astronomy" ]
4,784
[ "Neutrino astronomy", "Astronomical sub-disciplines" ]
633,423
https://en.wikipedia.org/wiki/Visual%20Molecular%20Dynamics
Visual Molecular Dynamics (VMD) is a molecular modelling and visualization computer program. VMD is developed mainly as a tool to view and analyze the results of molecular dynamics simulations. It also includes tools for working with volumetric data, sequence data, and arbitrary graphics objects. Molecular scenes can be exported to external rendering tools such as POV-Ray, RenderMan, Tachyon, Virtual Reality Modeling Language (VRML), and many others. Users can run their own Tcl and Python scripts within VMD as it includes embedded Tcl and Python interpreters. VMD runs on Unix, Apple macOS, and Microsoft Windows. VMD is available to non-commercial users under a distribution-specific license which permits both use of the program and modification of its source code, at no charge. History VMD has been developed under the aegis of principal investigator Klaus Schulten in the Theoretical and Computational Biophysics group at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign. A precursor program, called VRChem, was developed in 1992 by Mike Krogh, William Humphrey, and Rick Kufrin. The initial version of VMD was written by William Humphrey, Andrew Dalke, Ken Hamer, Jon Leech, and James Phillips. It was released in 1995. The earliest versions of VMD were developed for Silicon Graphics workstations and could also run in a cave automatic virtual environment (CAVE) and communicate with a Nanoscale Molecular Dynamics (NAMD) simulation. VMD was further developed by A. Dalke, W. Humphrey, and J. Ulrich in 1995–1996, followed by Sergei Izrailev and J. Stone during 1997–1998. In 1998, John Stone became the main VMD developer, porting VMD to many other Unix operating systems and completing the first full-featured OpenGL version. The first version of VMD for the Microsoft Windows platform was released in 1999. In 2001, Justin Gullingsrud, Paul Grayson, and John Stone added support for haptic feedback devices and further developed the interface between VMD and NAMD for performing interactive molecular dynamics simulations. In subsequent developments, Jordi Cohen, Gullingsrud, and Stone entirely rewrote the graphical user interfaces, added built-in support for display and processing of volumetric data, and added use of the OpenGL Shading Language. Interprocess communication VMD can communicate with other programs via Tcl/Tk. This communication allows the development of external plugins that work together with VMD. These plugins increase VMD's set of features and tools, making it one of the most widely used programs in computational chemistry, biology, and biochemistry.
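As a minimal illustration of this embedded scripting (a sketch, not official VMD documentation: it assumes a VMD build with the Python interpreter enabled, uses the molecule and atomsel modules that VMD exposes to Python, and protein.pdb is a hypothetical input file), a script run inside VMD might look like this:

```python
# Run inside VMD's embedded Python interpreter (e.g. entered via "gopython").
# Assumes VMD's built-in 'molecule' and 'atomsel' modules are available.
import molecule
from atomsel import atomsel

molid = molecule.load("pdb", "protein.pdb")      # hypothetical input file
calphas = atomsel("protein and name CA", molid)  # alpha-carbon selection
print("number of alpha carbons:", len(calphas))

# Geometric center of the selection, computed from the raw coordinates.
xs, ys, zs = calphas.get("x"), calphas.get("y"), calphas.get("z")
n = float(len(calphas))
print("center:", sum(xs) / n, sum(ys) / n, sum(zs) / n)
```

Equivalent operations are available from the Tcl side (for example via the atomselect command), which is how most of the plugins listed below are written.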
Here is a list of some VMD plugins developed using Tcl/Tk: Delphi Force – electrostatic force calculation and visualization Pathways Plugin – identify dominant electron transfer pathways and estimate donor-to-acceptor electronic tunneling Check Sidechains Plugin – checks and helps select the best orientation and protonation state for Asn, Gln, and His side chains MultiMSMS Plugin – caches MSMS calculations to speed up the animation of a sequence of frames Interactive Essential Dynamics – interactive visualization of essential dynamics Mead Ionize – improved version of autoionize for highly charged systems Andriy Anishkin's VMD Scripts – many useful VMD scripts for visualization and analysis RMSD Trajectory Tool – development version of the RMSD plugin for trajectories Clustering Tool – visualize clusters of conformations of a structure iTrajComp – interactive trajectory comparison tool Swap – atomic coordinate swapping for improved RMSD alignment Intervor – protein-protein interface extraction and display SurfVol – measure surface area and volume of proteins vmdICE – plugin for computing RMSD, RMSF, SASA, and other time-varying quantities molUP – a VMD plugin to handle QM and ONIOM calculations using the Gaussian software VMD Store – a VMD extension that helps users discover, install, and update other VMD plugins. See also References External links VMD on GPUs Protein workbench STRAP Molecular modelling software
Visual Molecular Dynamics
[ "Chemistry" ]
859
[ "Molecular modelling", "Molecular modelling software", "Computational chemistry software" ]
633,524
https://en.wikipedia.org/wiki/Christmas%20tree%20packet
In information technology, a Christmas tree packet (also known as a kamikaze packet, nastygram, or lamp test segment) is a network message segment or packet with every option enabled for the particular network protocol in use. Background Network packets contain a number of flags or options depending on the type of network protocol in use. Enabling options can elicit specific behaviors in the device receiving the packet, and the responses differ from device to device. By analyzing those differences, Christmas tree packets can be used as a method of TCP/IP stack fingerprinting, exposing the underlying nature of a TCP/IP stack by sending the packets and then awaiting and analyzing the responses. When used as part of scanning a system, the TCP header of a Christmas tree packet has the flags FIN, URG and PSH set. Many operating systems implement their compliance with the Internet Protocol standards in varying or incomplete ways. By observing how a host responds to an odd packet, such as a Christmas tree packet, inferences can be made regarding the host's operating system. Versions of Microsoft Windows, BSD/OS, HP-UX, Cisco IOS, MVS, and IRIX display behaviors that differ from the RFC standard when queried with such packets. A large number of Christmas tree packets can also be used to conduct a DoS attack by exploiting the fact that Christmas tree packets require much more processing by routers and end-hosts than "usual" packets do. Christmas tree packets can be easily detected by intrusion-detection systems or more advanced firewalls. From a network security point of view, Christmas tree packets are always suspicious and indicate a high probability of network reconnaissance activities.
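As an illustration of what such a packet looks like in practice, the sketch below uses the third-party Scapy library to craft a TCP segment with the FIN, URG, and PSH flags set and to apply the usual Xmas-scan interpretation of the response; the destination address is a documentation-range placeholder, and this is an example probe, not a complete scanner:

```python
from scapy.all import IP, TCP, sr1

# Craft a "Christmas tree" probe: FIN, PSH, and URG flags all lit ("FPU").
pkt = IP(dst="192.0.2.10") / TCP(dport=80, flags="FPU")
resp = sr1(pkt, timeout=2, verbose=False)

if resp is None:
    # An RFC-compliant stack silently drops such a segment to an open port.
    print("no reply: port open or filtered")
elif resp.haslayer(TCP) and resp[TCP].flags & 0x04:  # 0x04 is the RST bit
    # A closed port on a compliant stack answers with RST.
    print("RST received: port closed")
```

See also Martian packet References External links Nmap documentation Computer jargon Packets (information technology) Denial-of-service attacks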
Christmas tree packet
[ "Technology" ]
364
[ "Computing terminology", "Denial-of-service attacks", "Computer jargon", "Computer security exploits", "Natural language and computing" ]
633,559
https://en.wikipedia.org/wiki/Jesse%20L.%20Greenstein
Jesse Leonard Greenstein (October 15, 1909 – October 21, 2002) was an American astronomer. His parents were Maurice G. and Leah Feingold. He earned a Ph.D., with thesis advisor Donald H. Menzel, from Harvard University in 1937, having started there at age 16. Before leaving Harvard, Greenstein was involved in a project with Fred Lawrence Whipple to explain Karl Jansky's discovery of radio waves from the Milky Way and to propose a source. He began his professional career at Yerkes Observatory under Otto Struve and later went to Caltech. With Louis G. Henyey he invented a new spectrograph and a wide-field camera. He directed the Caltech astronomy program until 1972 and later did classified work on military reconnaissance satellites. With Leverett Davis, Jr., he demonstrated in 1949 that the magnetic field in our galaxy is aligned with the spiral arms. His theoretical work with Davis was based on the conclusion just reached by William A. Hiltner that the recently detected polarization of starlight was due to dichroic extinction by interstellar dust grains aligned with the ambient magnetic field. For the 1965 book Galactic Structure, edited by Blaauw and Schmidt, Greenstein wrote an important chapter on subluminous blue stars. Greenstein did important work in determining the abundances of the elements in stars, and was, with Maarten Schmidt, among the first to recognize quasars as compact, very distant sources as bright as a galaxy. The spectra of the first quasars discovered, radio sources 3C 48 and 3C 273, were displaced so far to the red by their redshifts as to be almost unrecognizable, but Greenstein deciphered 3C 48 shortly before Schmidt, his colleague at the Hale Observatories, worked out the spectrum of 3C 273. Honors Awards Henry Norris Russell Lectureship of the American Astronomical Society (1970) Bruce Medal (1971) Gold Medal of the Royal Astronomical Society (1975) Golden Plate Award of the American Academy of Achievement (1980) Honors Elected to the American Academy of Arts and Sciences (1954) Elected to the United States National Academy of Sciences (1957) Elected to the American Philosophical Society (1968) Named after him Asteroid 4612 Greenstein References External links Obituary from Caltech Caltech oral history interview Story of the discovery of quasars Bruce Medal page Awarding of Bruce medal: PASP 83 (1971) 243 Awarding of RAS gold medal: QJRAS 16 (1975) 356 Biography by Robert P. Kraft, former director of Lick Observatory 1909 births 2002 deaths American astronomers Jewish astronomers Harvard University alumni California Institute of Technology faculty Recipients of the Gold Medal of the Royal Astronomical Society Members of the American Philosophical Society
Jesse L. Greenstein
[ "Astronomy" ]
560
[ "Astronomers", "Jewish astronomers" ]
633,593
https://en.wikipedia.org/wiki/Carbon%20steel
Carbon steel is a steel with carbon content from about 0.05 up to 2.1 percent by weight. The definition of carbon steel from the American Iron and Steel Institute (AISI) states: no minimum content is specified or required for chromium, cobalt, molybdenum, nickel, niobium, titanium, tungsten, vanadium, zirconium, or any other element to be added to obtain a desired alloying effect; the specified minimum for copper does not exceed 0.40%; or the specified maximum for any of the following elements does not exceed: manganese 1.65%; silicon 0.60%; and copper 0.60%. As the carbon content rises, steel can become harder and stronger through heat treating; however, it becomes less ductile. Regardless of the heat treatment, a higher carbon content reduces weldability. In carbon steels, a higher carbon content also lowers the melting point. The term may be used in reference to steel that is not stainless steel; in this use carbon steel may include alloy steels. High-carbon steel has many uses, such as in milling machines, cutting tools (such as chisels), and high-strength wires. These applications require a much finer microstructure, which improves toughness. Properties Carbon steel is often divided into two main categories: low-carbon steel and high-carbon steel. It may also contain other elements, such as manganese, phosphorus, sulfur, and silicon, which can affect its properties. Carbon steel can be easily machined and welded, making it versatile for various applications. It can also be heat treated to improve its strength, hardness, and durability. Carbon steel is susceptible to rust and corrosion, especially in environments with high moisture levels and/or salt. It can be shielded from corrosion by coating it with paint, varnish, or other protective material; alternatively, a stainless steel alloy containing chromium, which provides excellent corrosion resistance, can be used instead. Carbon steel can be alloyed with other elements to improve its properties, such as by adding chromium and/or nickel to improve its resistance to corrosion and oxidation, or adding molybdenum to improve its strength and toughness at high temperatures. It is an environmentally friendly material, as it is easily recyclable and can be reused in various applications. It is energy-efficient to produce, as it requires less energy than other metals such as aluminium and copper. Type Mild or low-carbon steel Mild steel (iron containing a small percentage of carbon, strong and tough but not readily tempered), also known as plain-carbon steel and low-carbon steel, is now the most common form of steel because its price is relatively low while it provides material properties that are acceptable for many applications. Mild steel contains approximately 0.05–0.30% carbon, making it malleable and ductile. Mild steel has a relatively low tensile strength, but it is cheap and easy to form. Surface hardness can be increased with carburization. The density of mild steel is approximately 7.85 g/cm³ and its Young's modulus is about 200 GPa. Low-carbon steels display yield-point runout, where the material has two yield points. The first yield point (or upper yield point) is higher than the second, and the yield drops dramatically after the upper yield point. If a low-carbon steel is only stressed to some point between the upper and lower yield point, then the surface develops Lüders bands. Low-carbon steels contain less carbon than other steels and are easier to cold-form, making them easier to handle.
Typical applications of low-carbon steel are car parts, pipes, construction, and food cans. High-tensile steel High-tensile steels are low-carbon steels, or steels at the lower end of the medium-carbon range, which have additional alloying ingredients in order to increase their strength, wear properties or specifically tensile strength. These alloying ingredients include chromium, molybdenum, silicon, manganese, nickel, and vanadium. Impurities such as phosphorus and sulfur have their maximum allowable content restricted. 41xx steel 4140 steel 4145 steel 4340 steel 300M steel EN25 steel – 2.5% nickel-chromium-molybdenum steel EN26 steel Higher-carbon steels Carbon steels which can successfully undergo heat treatment have a carbon content in the range of 0.30–1.70% by weight. Trace impurities of various other elements can significantly affect the quality of the resulting steel. Trace amounts of sulfur in particular make the steel red-short, that is, brittle and crumbly at high working temperatures. Low-alloy carbon steel, such as A36 grade, contains about 0.05% sulfur and melts around 1,425–1,540 °C. Manganese is often added to improve the hardenability of low-carbon steels. These additions turn the material into a low-alloy steel by some definitions, but AISI's definition of carbon steel allows up to 1.65% manganese by weight. There are two types of higher-carbon steels: high-carbon steel and ultra-high-carbon steel. The reasons for the limited use of high-carbon steel are its extremely poor ductility and weldability and its higher cost of production. The applications best suited to high-carbon steels are in the spring industry, the farm industry, and the production of a wide range of high-strength wires. AISI classification The following classification method is based on the American AISI/SAE standard. Other international standards include DIN (Germany), GB (China), BS/EN (UK), AFNOR (France), UNI (Italy), SS (Sweden), UNE (Spain), JIS (Japan), ASTM standards, and others. Carbon steel is broken down into four classes based on carbon content: Low-carbon steel Low-carbon steel has a carbon content of 0.05 to 0.15% (plain carbon steel). Medium-carbon steel Medium-carbon steel has approximately 0.3–0.5% carbon content. It balances ductility and strength and has good wear resistance. It is used for large parts, forging, and automotive components. High-carbon steel High-carbon steel has approximately 0.6 to 1.0% carbon content. It is very strong, and is used for springs, edged tools, and high-strength wires. Ultra-high-carbon steel Ultra-high-carbon steel has approximately 1.25–2.0% carbon content; such steels can be tempered to great hardness and are used for special purposes such as (non-industrial-purpose) knives, axles, and punches. Most steels with more than 2.5% carbon content are made using powder metallurgy. Heat treatment The purpose of heat treating carbon steel is to change the mechanical properties of steel, usually ductility, hardness, yield strength, or impact resistance. Note that the electrical and thermal conductivity are only slightly altered. As with most strengthening techniques for steel, Young's modulus (elasticity) is unaffected. All treatments of steel trade ductility for increased strength and vice versa. Iron has a higher solubility for carbon in the austenite phase; therefore all heat treatments, except spheroidizing and process annealing, start by heating the steel to a temperature at which the austenitic phase can exist.
The steel is then quenched (heat drawn out) at a moderate to low rate, allowing carbon to diffuse out of the austenite and form iron carbide (cementite), leaving ferrite; or at a high rate, trapping the carbon within the iron and thus forming martensite. The rate at which the steel is cooled through the eutectoid temperature (about 727 °C) affects the rate at which carbon diffuses out of austenite and forms cementite. Generally speaking, cooling swiftly will leave iron carbide finely dispersed and produce a fine-grained pearlite, while cooling slowly will give a coarser pearlite. Cooling a hypoeutectoid steel (less than 0.77 wt% C) results in a lamellar-pearlitic structure of iron carbide layers with α-ferrite (nearly pure iron) between them. If it is a hypereutectoid steel (more than 0.77 wt% C), then the structure is full pearlite with small grains (larger than the pearlite lamellae) of cementite formed on the grain boundaries. A eutectoid steel (0.77% carbon) will have a pearlite structure throughout the grains with no cementite at the boundaries. The relative amounts of constituents are found using the lever rule; a worked sketch follows the Normalizing subsection below. The following is a list of the types of heat treatments possible: Spheroidizing Spheroidite forms when carbon steel is heated to approximately 700 °C for over 30 hours. Spheroidite can form at lower temperatures, but the time needed drastically increases, as this is a diffusion-controlled process. The result is a structure of rods or spheres of cementite within the primary structure (ferrite or pearlite, depending on which side of the eutectoid you are on). The purpose is to soften higher-carbon steels and allow more formability. This is the softest and most ductile form of steel. Full annealing A hypoeutectoid carbon steel (carbon composition smaller than the eutectoid one) is heated to slightly above the austenitic temperature (A3), whereas a hypereutectoid steel is heated to a temperature above the eutectoid one (A1), for a certain number of hours; this ensures all the ferrite transforms into austenite (although cementite might still exist in hypereutectoid steels). The steel must then be cooled slowly, in the realm of 20 °C (36 °F) per hour. Usually it is just furnace cooled, where the furnace is turned off with the steel still inside. This results in a coarse pearlitic structure, which means the "bands" of pearlite are thick. Fully annealed steel is soft and ductile, with no internal stresses, which is often necessary for cost-effective forming. Only spheroidized steel is softer and more ductile. Process annealing A process used to relieve stress in a cold-worked carbon steel with less than 0.3% C. The steel is usually heated to below the lower critical temperature and held for about 1 hour, though sometimes higher temperatures are used. Isothermal annealing A process in which hypoeutectoid steel is heated above the upper critical temperature. This temperature is maintained for a time and then reduced to below the lower critical temperature and is again maintained. It is then cooled to room temperature. This method eliminates any temperature gradient. Normalizing Carbon steel is heated to above its upper critical temperature for about 1 hour; this ensures the steel completely transforms to austenite. The steel is then air-cooled, a moderately fast cooling rate. This results in a fine pearlitic structure and a more uniform structure. Normalized steel has a higher strength than annealed steel; it has a relatively high strength and hardness.
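Before moving on to quenching, here is the worked lever-rule sketch promised above (it uses the standard eutectoid composition of 0.77 wt% C and the maximum carbon solubility in ferrite of about 0.022 wt% C; the 0.40 wt% example composition is arbitrary):

```python
# Lever-rule estimate of phase fractions for a slowly cooled hypoeutectoid steel.
C_EUTECTOID = 0.77   # wt% C: pearlite (eutectoid) composition
C_FERRITE = 0.022    # wt% C: maximum solubility of carbon in alpha-ferrite

def phase_fractions(c0):
    """Return (proeutectoid ferrite, pearlite) mass fractions just below the
    eutectoid temperature for a hypoeutectoid steel containing c0 wt% C."""
    if not C_FERRITE < c0 < C_EUTECTOID:
        raise ValueError("this form of the lever rule covers hypoeutectoid steels only")
    pearlite = (c0 - C_FERRITE) / (C_EUTECTOID - C_FERRITE)
    return 1.0 - pearlite, pearlite

ferrite, pearlite = phase_fractions(0.40)  # arbitrary example composition
print(f"0.40 wt% C: {ferrite:.0%} proeutectoid ferrite, {pearlite:.0%} pearlite")
# -> roughly 49% ferrite and 51% pearlite
```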
Quenching Carbon steel with at least 0.4 wt% C is heated to normalizing temperatures and then rapidly cooled (quenched) in water, brine, or oil to the critical temperature. The critical temperature is dependent on the carbon content, but as a general rule is lower as the carbon content increases. This results in a martensitic structure, a form of steel that possesses a super-saturated carbon content in a deformed body-centered cubic (BCC) crystalline structure, properly termed body-centered tetragonal (BCT), with much internal stress. Thus quenched steel is extremely hard but brittle, usually too brittle for practical purposes. These internal stresses may cause stress cracks on the surface. Quenched steel is approximately three times harder (four with more carbon) than normalized steel. Martempering (marquenching) Martempering is not actually a tempering procedure, hence the term marquenching. It is a form of isothermal heat treatment applied after an initial quench, typically in a molten salt bath, at a temperature just above the "martensite start temperature". At this temperature, residual stresses within the material are relieved and some bainite may be formed from the retained austenite which did not have time to transform into anything else. In industry, this is a process used to control the ductility and hardness of a material. With longer marquenching, the ductility increases with a minimal loss in strength; the steel is held in this solution until the inner and outer temperatures of the part equalize. Then the steel is cooled at a moderate speed to keep the temperature gradient minimal. Not only does this process reduce internal stresses and stress cracks, but it also increases impact resistance. Tempering This is the most common heat treatment encountered, because the final properties can be precisely determined by the temperature and time of the tempering. Tempering involves reheating quenched steel to a temperature below the eutectoid temperature and then cooling. The elevated temperature allows very small amounts of spheroidite to form, which restores ductility but reduces hardness. Actual temperatures and times are carefully chosen for each composition. Austempering The austempering process is the same as martempering, except the quench is interrupted and the steel is held in the molten salt bath at temperatures in the bainite-forming range, and then cooled at a moderate rate. The resulting steel, called bainite, produces an acicular microstructure in the steel that has great strength (but less than martensite), greater ductility, higher impact resistance, and less distortion than martensite steel. The disadvantage of austempering is that it can be used only on a few steels, and it requires a special salt bath. Case hardening Case hardening processes harden only the exterior of the steel part, creating a hard, wear-resistant skin (the "case") but preserving a tough and ductile interior. Carbon steels are not very hardenable, meaning they cannot be hardened throughout thick sections. Alloy steels have better hardenability, so they can be through-hardened and do not require case hardening. This property of carbon steel can be beneficial, because it gives the surface good wear characteristics but leaves the core flexible and shock-absorbing.
Forging temperature of steel See also Aermet Cold working Eglin steel (a low-cost precipitation-hardened high-strength steel) Forging Hot working Maraging steel (precipitation-hardened high-strength steels) Welding (high-strength steels) References Bibliography Steels Metallurgical processes
Carbon steel
[ "Chemistry", "Materials_science" ]
3,045
[ "Steels", "Metallurgical processes", "Alloys", "Metallurgy" ]
633,692
https://en.wikipedia.org/wiki/Silver%20Streak%20%28film%29
Silver Streak is a 1976 American thriller comedy film about a murder on a Los Angeles-to-Chicago train journey. It was directed by Arthur Hiller, written by Colin Higgins, and stars Gene Wilder, Jill Clayburgh, and Richard Pryor, with Patrick McGoohan, Ned Beatty, Clifton James, Ray Walston, Scatman Crothers, and Richard Kiel in supporting roles. The film score is by Henry Mancini. This film marked the first pairing of Wilder and Pryor, who went on to appear together in three other films. The film is primarily set on a train called Silver Streak. A passenger accidentally finds out about the murder of an art historian and about efforts to discredit the victim's book. A shady art dealer is profiting from forged works of Rembrandt and is willing to kill in order to maintain secrecy about his crimes. The film was released on December 8, 1976 by 20th Century Fox, receiving positive reviews from critics and earning $51.1 million against a budget of between $5.5 million and $6.5 million. Plot Aboard the Silver Streak train to Chicago, book editor George Caldwell meets salesman Bob Sweet and Hilly Burns, secretary to Rembrandt historian Professor Schreiner. Hilly and George share an instant attraction and she invites him to her cabin. There, he sees Schreiner's body fall from the train's roof outside her window. Hilly believes George is mistaken, so he goes to investigate Schreiner's cabin, where he encounters Whiney and Reace, who are searching Schreiner's belongings. After Whiney implies that Hilly is in trouble, the burly Reace throws George off the train. Concerned about Hilly, George follows the train tracks until he meets a farmer, who flies George in her biplane to a station ahead of the Silver Streak where he can reboard. Once aboard the train again, George sees Hilly with art dealer Roger Devereau and assumes they are romantically involved. He confronts Devereau, who explains that Whiney and Reace are in his employ and their confrontation was a misunderstanding. Devereau also introduces George to a seemingly alive Schreiner (in actuality his other employee, Johnson, in disguise). Convinced he was wrong, and upset at Hilly's presumed relationship with Devereau, George gets drunk and explains the situation to Sweet, who reveals himself to be an undercover FBI agent named Stevens. He explains that the FBI has been investigating Devereau, a ruthless criminal known publicly as a professional art appraiser. Stevens believes Devereau wants Schreiner's Rembrandt letters, which could expose Devereau for authenticating forged paintings as genuine Rembrandts. George realizes the letters are hidden inside Schreiner's book, and he shows them to Stevens. Reace interrupts and attempts to assassinate George but inadvertently kills Stevens. Reace pursues George to the train's roof, where George kills him with a harpoon gun. George falls from the roof of the moving train and again finds himself on foot. George seeks help from a local sheriff in Dodge City, Kansas, who finds George's story unbelievable. The sheriff then gets a phone call about Stevens' murder and believes that George is the suspect. George escapes the inept sheriff and steals a patrol car, unaware that arrested car thief Grover T. Muldoon is in the back. George and Grover work together to catch up to the train at Kansas City so George can save Hilly. With police searching for George, Grover disguises George as a black man using shoe polish so they can reboard the train.
Devereau captures George, recovers the Rembrandt letters and later burns them. Devereau tells George and Hilly he plans to frame them for Schreiner's murder and then make their deaths look like murder–suicide. Grover poses as a steward and rescues George and Hilly but, after a shootout with Devereau's men, Grover and George are forced to jump from the train to escape. They are promptly arrested and taken to a train station where they meet Chief Donaldson, who turns out to be Stevens' former partner. George tries to explain that he didn't kill Stevens; Donaldson tells George that he and the police knew all along that Devereau and his men, rather than George, were the ones who killed Stevens. The news story about Stevens' murder was actually planted by Donaldson and the police. Donaldson also sent the police to the sheriff's office in Dodge City to arrest George so that they could protect him from Devereau. As George and Grover amicably part ways, Donaldson has the train stopped and surrounded by police before evacuating the passengers. A firefight erupts, Whiney is wounded, and George, alongside a returning Grover, boards the train to kill Johnson and rescue Hilly. Devereau seizes the train controls, setting the train to run at full speed without a driver, and throws Whiney from the train. Donaldson provides supporting fire from a helicopter; George distracts Devereau, allowing Donaldson to mortally wound him before he is beheaded by an oncoming boxcar. Unable to stop the driverless Silver Streak, George and a porter uncouple the train cars from the engine to trigger their brakes, saving the remaining passengers. But the runaway engine crashes into Chicago's Central Station, destroying everything in its path. George, Hilly and Grover survey the damaged engine before Grover drives away in a stolen car. George and Hilly bid him goodbye and leave to begin their new relationship. Cast Gene Wilder as George Caldwell Jill Clayburgh as Hildegarde "Hilly" Burns Richard Pryor as Grover T. Muldoon Patrick McGoohan as Roger Devereau Ned Beatty as FBI Agent Bob Stevens / Bob Sweet Clifton James as Sheriff Oliver Chauncey Gordon Hurst as Deputy "Moose" Ray Walston as Edgar Whiney Scatman Crothers as Porter Ralston Len Birman as FBI Agent Donaldson Lucille Benson as Rita Babtree Stefan Gierasch as Professor Arthur Schreiner / Johnson Valerie Curtin as Plain Jane Richard Kiel as Reace Fred Willard as Jerry Jarvis Ed McNamara as Benny Henry Beckman as Conventioneer Harvey Atkin as Conventioneer Robert Culp as FBI Agent (uncredited) J.A. Preston as The Waiter (uncredited) Production The film was based on an original screenplay by Colin Higgins, who at the time was best known for writing Harold and Maude. He wrote Silver Streak "because I had always wanted to get on a train and meet some blonde. It never happened, so I wrote a script." Higgins wrote Silver Streak for the producers of The Devil's Daughter, a TV film he had written. Both they and Higgins wanted to get into feature films. The script was sent out to auction. It was set on an Amtrak train and Paramount was interested, but wanted Amtrak to give its approval. Alan Ladd Jr. and Frank Yablans at 20th Century Fox didn't want to wait and bought the script for a then-record $400,000. Ladd said "It was like the old Laurel and Hardy comedies. The hero is Laurel, he falls off the train, stumbles about, makes a fool of himself, but still gets the pretty girl. Audiences have identified with that since Buster Keaton." 
Colin Higgins wanted George Segal for the hero – the character's name is George – but Fox preferred Gene Wilder. Ladd reasoned that Wilder was "younger, more identifiable for the younger audience. And he's so average, so ordinary, and he gets caught up in all these crazy adventures." (Wilder was actually older than Segal.) Colin Higgins claimed the producers did not want Richard Pryor cast because Pryor had recently walked off The Bingo Long Traveling All-Stars & Motor Kings; he says the producer at one stage considered casting another black actor as a backup. However, Pryor was very professional during the shoot. Release The film had over 400 previews around the United States starting November 28, 1976 in New York City. It had its premiere at Tower East Theater in New York on Tuesday, December 7, 1976 and opened in New York City the following day. It opened in Los Angeles on Friday, December 10 before opening nationwide in an additional 350 theaters on December 22. Reception The film grossed over $51 million at the box office and was praised by critics, including Roger Ebert. It maintains a 76% approval rating at Rotten Tomatoes from 25 reviews. Ruth Batchelor of the Los Angeles Free Press described it as a "fabulous, funny, suspenseful, wonderful, marvelous, sexy, fantastic trip on a train, with the most lovable group of characters ever assembled." Gene Siskel of the Chicago Tribune, however, called the film "a needlessly convoluted mystery yarn, which calls everyone's identity into question except Wilder's." Siskel, who gave the film just two stars, added that "the story isn't easy to follow" and that "I'm still not sure whether Clayburgh's character, secretary to Devereaux, was in on the hustle from the beginning." (Hilly Burns was actually Professor Schreiner's secretary, not Devereaux's.) Awards and honors Academy Award nomination: Best Sound (Donald O. Mitchell, Douglas O. Williams, Richard Tyler, and Harold M. Etherington) Nomination: Golden Globe Award for Best Actor – Motion Picture Musical or Comedy – Gene Wilder Writers Guild of America nomination: Best Comedy Written Directly for the Screen – Colin Higgins The film was chosen for the Royal Film Performance in 1977. In 2000, the American Film Institute included the film in AFI's 100 Years...100 Laughs – #95. Score and soundtrack Though the film dates to 1976, Henry Mancini's score was not officially released on a soundtrack album until 2002, when Intrada Records issued a compilation that became one of the year's best-selling special releases. References External links Silver Streak on Soundtrack.net Making of Silver Streak (1976) – Pre-release promotional "Making Of" documentary about the film. Complete copy of script 1976 films 1970s English-language films 1976 action comedy films 1970s American films 1970s buddy comedy films 1970s comedy mystery films 1970s comedy thriller films 20th Century Fox films American action comedy films American buddy comedy films American comedy mystery films American comedy thriller films English-language action comedy films English-language buddy comedy films English-language comedy mystery films Fictional trains Films shot in Calgary Films shot in Toronto Films directed by Arthur Hiller Films set on trains Films scored by Henry Mancini Films with screenplays by Colin Higgins Films about the Federal Bureau of Investigation English-language comedy thriller films
Silver Streak (film)
[ "Technology" ]
2,209
[ "Fiction about railway accidents and incidents", "Railway accidents and incidents" ]
633,822
https://en.wikipedia.org/wiki/Sarich%20orbital%20engine
The Sarich orbital engine is a type of internal combustion engine, invented in 1972 by Ralph Sarich, an engineer from Perth, Australia, which features orbital rather than reciprocating motion of its central piston. It differs from the conceptually similar Wankel engine by using a generally prismatic shaped piston that orbits the axis of the engine, without rotation, rather than the rotating trilobular rotor of the Wankel. Overview The engine promised to be about one third the size and weight of conventional piston engines due to the compact arrangement of the combustion chambers. Another advantage is that there is no high-speed contact area with the engine walls, unlike in the Wankel engine in which edge wear is a problem. However, the combustion chambers are divided by vanes which do have contact with both the walls and the orbiting piston and are more difficult to seal due to the eight corners of the combustion chamber. In the patent, the engine is described as a two-stroke internal combustion engine, but the patent claims that with a different valve mechanism it could be used as a four-stroke engine. However, most of the development work was done on four-stroke versions with both poppet and disk valve arrangements. A supercharger is required if the engine is operated in two-stroke mode, since crankcase pumping cannot be used to charge the combustion chamber. In his seminal book researching and documenting all the possible ways to create a rotary piston displacer, Felix Wankel shows the orbiting piston and reciprocating vane mechanism used in the orbital engine. Research and development The Orbital Engine Company, with funding from partner BHP and Federal Government R&D grants, worked on the concept from 1972 until 1983 and had a 3.5 L four-stroke engine performing as well as similar petrol car engines of the day at typical road load conditions. A technical paper was presented to the Society of Automotive Engineers in 1982, and is now part of their historic transaction collection. A major reason for the good performance of this engine was the development of a unique and patented injection system, directed into the combustion chamber, which created a stratified charge combustion process. Several auto makers from around the world showed great interest in the engine; however, it was realised that there was still at least $100 million of development work required to commercialise the engine, and the funding sources decided this was not a sound investment. Instead it was realised that the same injection and combustion system could be adapted to existing two- and four-stroke petrol engines, and this work became the future of the company, being called the Orbital Combustion Process. During the prototyping process, the engine was installed in three vehicles: the Toyota Kijang (3-cylinder unit) and the Suzuki Karimun (2-cylinder unit), installed by Sangeet Hari Kapoor while he was working at PT Wahana Perkasa Auto Jaya, a company under the Texmaco group. The 3-cylinder unit was also installed in 100 Ford Festivas in Australia, dubbed the Festiva EcoSport; the verdict was that while the car was somewhat more powerful than the Ford Festiva 1.3, it failed to deliver emissions compliance, efficiency, and NVH (noise, vibration, harshness) reduction at the same time. 
Technical problems The orbital engine has two fundamental design issues, which also plague the Wankel engine: A large surface-to-volume-ratio combustion chamber, which leads to larger combustion chamber heat losses and so loss of power; these can be greatly reduced using stratified combustion. Long sealing paths and multiple corner seals, which make it harder to contain the chamber gases, so there is some loss of pressure and thus power. Drawings Some conceptual sketches from the engine's patent: See also Orbital Corporation Powerplus supercharger William Selwood Cecil Hughes References External links Pistonless rotary engine Australian inventions Engine technology
Sarich orbital engine
[ "Technology" ]
770
[ "Engine technology", "Engines", "Pistonless rotary engine" ]
633,843
https://en.wikipedia.org/wiki/Bayou
In usage in the Southern United States, a bayou is a body of water typically found in a flat, low-lying area. It may refer to an extremely slow-moving stream, river (often with a poorly defined shoreline), marshy lake, wetland, or creek. They typically contain brackish water highly conducive to fish life and plankton. Bayous are commonly found in the Gulf Coast region of the southern United States, especially in the Mississippi River Delta, though they also exist elsewhere. A bayou is often an anabranch or minor braid of a braided channel that is slower than the mainstem, often becoming boggy and stagnant. Though fauna varies by region, many bayous are home to crawfish, certain species of shrimp, other shellfish, catfish, frogs, toads, salamanders, newts, American alligators, American crocodiles, herons, lizards, turtles, tortoises, spoonbills, snakes, and leeches, as well as many other species. Etymology The word entered American English via Louisiana French in Louisiana and is thought to originate from the Choctaw word bayuk, which means "small stream". After first appearing in the 17th century, the term is found in 18th-century accounts and maps, often as bayouc or bayouque, before being shortened to its current form. The first settlements of the Bayou Têche and other bayous were founded by the Louisiana Creoles, and the bayous are commonly associated with Creole and Cajun culture. An alternative spelling, "buyou", is also known to have been in use, as in "Pine Buyou", used in a description by Congress in 1833 of Arkansas Territory. "Bye-you" is the most common pronunciation; a few use "bye-oh", although that pronunciation is declining. Geography The term Bayou Country is most closely associated with Cajun and Creole cultural groups derived from French settlers and stretching along the Gulf Coast from Houston, Texas, to Mobile, Alabama, and picking back up in South Florida around the Everglades, with its center in New Orleans, Louisiana. The term may also be associated with the homelands of certain Choctaw tribal groups. Houston has the nickname "Bayou City". Environmental risks Anthropogenic influences have damaged bayou ecosystems over the years. Bayous are susceptible to pollution, such as runoff from nearby urban communities (which can result in eutrophication) and oil spills, given their low-lying position in the watershed. Many bayous have been cleared away by human activity as well, with those in Louisiana having shrunk by 1,900 square miles (4,900 square kilometers) since the 1930s. Agriculture Farming activities introduce nutrients into bayou ecosystems. Row-crop agricultural land use is common in bayou watersheds (75–86% of watershed area), given unique physical characteristics such as flat topography and alluvial soils. Agricultural activity results in byproducts of nitrogen and phosphorus from fertilizers, which can drastically alter delicate balances in freshwater and marine ecosystems. A study conducted on three agricultural bayous in the lower Mississippi River Basin found that the addition of nitrogen and phosphorus to sample mesocosms affected the decomposition of maize crop and willow oak detritus. While both species showed an increase in decomposition rate after N and P nutrient enhancement, the maize crop broke down faster than the native willow oak. The maize crop also had a significantly faster microbial respiration rate. Changes in the microbial respiration of a wetland system impact its carbon exchange with the environment. 
Inhibiting a wetland's ability to sequester carbon further damages the status of the wetland as a carbon sink. This poses larger-scale issues, as it alters the exchange of carbon dioxide with the atmosphere and environment. The use of pesticides in agriculture poses further threats to bayou ecosystems. A study conducted on three bayous (Cow Oak, Howden, Roundaway) in the western Mississippi River watershed found that pesticides released into bayou sediments cause significant impairment of the amphipod Hyalella azteca both spatially and temporally. Although dichlorodiphenyltrichloroethane (DDT), once used in agriculture as an insecticide, was banned in the United States by the Environmental Protection Agency 40 years ago, traces of it were found in sediment and amphipod tissue. DDT is a probable carcinogen, and it has been linked to adverse health effects in both humans and wildlife. Oil spills Several oil spills have impacted bayou regions, including the Deepwater Horizon oil spill of 2010. This oil spill occurred off the Louisiana coast and resulted in the deaths of 11 people and the release of over 4.9 million barrels of oil into the ocean. The bayou wetlands of Barataria Bay, Louisiana experienced increased shoreline erosion as a direct result of the Deepwater Horizon oil spill. This was determined by comparing rates of wetland loss in the region in the year prior to the oil spill with the rates of wetland loss after it. The study noted significant land loss in regions not impacted by wave activity, further demonstrating that the land degradation was caused by oil rather than other sources of weathering from waves and cyclones. Other notable oil spills affecting bayous include 4,000 U.S. gallons (about 15,100 L) of oil spilling into a lake near Bayou Sorrel in Louisiana and 20,000 U.S. gallons (about 75,700 L) of oil spilling into Saint Bernard Parish waters and the adjacent Bayou Bienvenue in Louisiana. Both incidents occurred in 2022. Oil spills harm bayous because oil is toxic to most animals. In vapor form, oil leads to lung, liver, and nervous system dysfunction if inhaled. Ingested oil poses threats to the digestive tract. Oil mats feathers and fur, disrupting an animal's ability to insulate itself in colder temperatures. Matted bird feathers lose properties that aid in flying and swimming. Such disruptions in individual adaptive ability may lead to trophic cascades in a bayou community. Impervious surfaces Human development activities, such as the increase of impervious surfaces, result in quicker, higher-intensity flood pulses, delivering larger quantities of these nutrients to the ecosystem at a much more rapid rate. Impervious surfaces include roads, housing developments, and parking lots that replace natural vegetation, typically associated with human development and urbanization. When impervious surfaces are installed, the layer of soil that stores water is damaged or removed, resulting in a lack of permeable surfaces to absorb rainfall and floodwater. Heavy metal contamination Bayous have experienced land cover loss and conversion to impervious surfaces, which has been associated with influxes of metals such as aluminum, copper, iron, lead, and zinc. Heavy metals in the sediments, and ultimately the waters, of bayous bioaccumulate in organisms, spreading their toxins throughout various trophic levels. 
This harms both the health of organisms in that ecosystem and the health of humans who ingest fish and other aquatic organisms carrying potential metal contamination. Notable examples Bayou Bartholomew Bayou Corne Bayou La Batre Bayou Lafourche Bayou St. John Bayou Teche Big Bayou Canot Buffalo Bayou Armand Bayou Cypress Bayou Bayou Brevelle See also References External links Fluvial landforms Lagoons Wetlands
Bayou
[ "Environmental_science" ]
1,530
[ "Hydrology", "Wetlands" ]
633,953
https://en.wikipedia.org/wiki/Weather%20stick
A weather stick is a traditional means of weather prediction used by some Native Americans. It consists of a balsam fir or birch rod mounted outdoors which twists upwards in low humidity and downwards in high humidity. These sticks were first used by the Native Americans of the American northeast and the Canadian east and southeast, who noted the behavior of dry branches before the arrival of weather changes. The weather stick is a rare example of a weather prediction tool that predates the mercury barometer. See also Barometry References External links The Weather Stick.com New Potato.com First Nations history in Canada Meteorological instrumentation and equipment Native American tools
Weather stick
[ "Technology", "Engineering" ]
126
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
633,963
https://en.wikipedia.org/wiki/Liposome
A liposome is a small artificial vesicle, spherical in shape, having at least one lipid bilayer. Due to their hydrophobicity and/or hydrophilicity, biocompatibility, particle size and many other properties, liposomes can be used as drug delivery vehicles for administration of pharmaceutical drugs and nutrients, such as lipid nanoparticles in mRNA vaccines, and DNA vaccines. Liposomes can be prepared by disrupting biological membranes (such as by sonication). Liposomes are most often composed of phospholipids, especially phosphatidylcholine, and cholesterol, but may also include other lipids, such as egg phosphatidylethanolamine, as long as they are compatible with lipid bilayer structure. A liposome design may employ surface ligands for attaching to desired cells or tissues. Based on vesicle structure, there are seven main categories for liposomes: multilamellar large (MLV), oligolamellar (OLV), small unilamellar (SUV), medium-sized unilamellar (MUV), large unilamellar (LUV), giant unilamellar (GUV) and multivesicular vesicles (MVV). The major types of liposomes are the multilamellar vesicle (MLV, with several lamellar phase lipid bilayers), the small unilamellar liposome vesicle (SUV, with one lipid bilayer), the large unilamellar vesicle (LUV), and the cochleate vesicle. A less desirable form is the multivesicular liposome, in which one vesicle contains one or more smaller vesicles. Liposomes should not be confused with lysosomes, or with micelles and reverse micelles. In contrast to liposomes, micelles typically contain a monolayer of fatty acids or surfactants. Discovery The word liposome derives from two Greek words: lipo ("fat") and soma ("body"); it is so named because its composition is primarily of phospholipid. Liposomes were first described by British hematologist Alec Douglas Bangham in 1961 at the Babraham Institute, in Cambridge—findings that were published in 1964. The discovery came about when Bangham and R. W. Horne were testing the institute's new electron microscope by adding negative stain to dry phospholipids. The resemblance to the plasmalemma was obvious, and the microscopic pictures provided the first evidence that the cell membrane is a bilayer lipid structure. The following year, Bangham, his colleague Malcolm Standish, and Gerald Weissmann, an American physician, established the integrity of this closed, bilayer structure and its ability to release its contents following detergent treatment (structure-linked latency). During a Cambridge pub discussion with Bangham, Weissmann first named the structures "liposomes" after something his laboratory had been studying, the lysosome: a simple organelle whose structure-linked latency could be disrupted by detergents and streptolysins. Liposomes are readily distinguishable from micelles and hexagonal lipid phases through negative staining transmission electron microscopy. Bangham, with colleagues Jeff Watkins and Standish, wrote the 1965 paper that effectively launched what would become the liposome "industry." Around that same time, Weissmann joined Bangham at the Babraham. Later, Weissmann, then an emeritus professor at New York University School of Medicine, recalled the two of them sitting in a Cambridge pub, reflecting on the role of lipid sheets in separating the cell interior from its exterior milieu. This insight, they felt, would be to cell function what the discovery of the double helix had been to genetics. 
As Bangham had been calling his lipid structures "multilamellar smectic mesophases," or sometimes "Banghasomes," Weissmann proposed the more user-friendly term liposome. Mechanism Encapsulation in liposomes A liposome has an aqueous solution core surrounded by a hydrophobic membrane, in the form of a lipid bilayer; hydrophilic solutes dissolved in the core cannot readily pass through the bilayer. Hydrophobic chemicals associate with the bilayer. This property can be utilized to load liposomes with hydrophobic and/or hydrophilic molecules, a process known as encapsulation. Typically, liposomes are prepared in a solution containing the compound to be trapped, which can either be an aqueous solution for encapsulating hydrophilic compounds like proteins, or solutions in organic solvents mixed with lipids for encapsulating hydrophobic molecules. Encapsulation techniques can be categorized into two types: passive, which relies on the stochastic trapping of molecules during liposome formation, and active, which relies on the presence of charged lipids or transmembrane ion gradients. A crucial parameter to consider is the "encapsulation efficiency," which is defined as the amount of compound present in the liposome solution divided by the total initial amount of compound used during the preparation. In more recent developments, the application of liposomes in single-molecule experiments has introduced the concept of "single entity encapsulation efficiency." This term refers to the probability of a specific liposome containing the required number of copies of the compound. Delivery To deliver the molecules to a site of action, the lipid bilayer can fuse with other bilayers such as the cell membrane, thus delivering the liposome contents; this is a complex and non-spontaneous event, however, and does not apply to nutrient and drug delivery. By preparing liposomes in a solution of DNA or drugs (which would normally be unable to diffuse through the membrane) they can be (indiscriminately) delivered past the lipid bilayer. Liposomes can also be designed to deliver drugs in other ways. Liposomes that contain low (or high) pH can be constructed such that dissolved aqueous drugs will be charged in solution (i.e., the pH is outside the drug's pI range). As the pH naturally neutralizes within the liposome (protons can pass through some membranes), the drug will also be neutralized, allowing it to freely pass through a membrane. These liposomes work to deliver the drug by diffusion rather than by direct cell fusion. However, the efficacy of this pH-regulated passage depends on the physicochemical nature of the drug in question (e.g., pKa and having a basic or acidic nature), and is very low for many drugs. A similar approach can be exploited in the biodetoxification of drugs by injecting empty liposomes with a transmembrane pH gradient. In this case the vesicles act as sinks to scavenge the drug in the blood circulation and prevent its toxic effect. Another strategy for liposome drug delivery is to target endocytosis events. Liposomes can be made in a particular size range that makes them viable targets for natural macrophage phagocytosis. These liposomes may be digested while in the macrophage's phagosome, thus releasing the drug. Liposomes can also be decorated with opsonins and ligands to activate endocytosis in other cell types. Regarding pH-sensitive liposomes, there are three mechanisms of intracellular drug delivery, which occurs via endocytosis. 
This is possible because of the acidic environment within endosomes. The first mechanism is through the destabilization of the liposome within the endosome, triggering pore formation on the endosomal membrane and allowing diffusion of the liposome and its contents into the cytoplasm. Another is the release of the encapsulated content within the endosome, which eventually diffuses out into the cytoplasm through the endosomal membrane. Lastly, the membrane of the liposome and the endosome fuse together, releasing the encapsulated contents into the cytoplasm and avoiding degradation at the lysosomal level due to minimal contact time. Certain anticancer drugs such as doxorubicin (Doxil) and daunorubicin may be administered encapsulated in liposomes. Liposomal cisplatin has received orphan drug designation for pancreatic cancer from the EMEA. A study provides a promising preclinical demonstration of the effectiveness and ease of preparation of Valrubicin-loaded immunoliposomes (Val-ILs) as a novel nanoparticle technology. In the context of hematological cancers, Val-ILs have the potential to be used as a precise and effective therapy based on targeted vesicle-mediated cell death. The use of liposomes for transformation or transfection of DNA into a host cell is known as lipofection. In addition to gene and drug delivery applications, liposomes can be used as carriers for the delivery of dyes to textiles, pesticides to plants, enzymes and nutritional supplements to foods, and cosmetics to the skin. Liposomes are also used as outer shells of some microbubble contrast agents used in contrast-enhanced ultrasound. Dietary and nutritional supplements Until recently, the clinical uses of liposomes were for targeted drug delivery, but new applications for the oral delivery of certain dietary and nutritional supplements are in development. This new application of liposomes is in part due to the low absorption and bioavailability rates of traditional oral dietary and nutritional tablets and capsules. The low oral bioavailability and absorption of many nutrients is clinically well documented. Therefore, the natural encapsulation of lipophilic and hydrophilic nutrients within liposomes would be an effective method of bypassing the destructive elements of the gastric system and small intestines, allowing the encapsulated nutrient to be efficiently delivered to the cells and tissues. The term nutraceutical, originally coined by Stephen DeFelice, combines the words nutrient and pharmaceutical; DeFelice defined nutraceuticals as "food or part of a food that provides medical or health benefits, including the prevention and/or treatment of a disease". However, there is currently no conclusive definition of nutraceuticals to distinguish them from other food-derived categories, such as food (dietary) supplements, herbal products, pre- and probiotics, functional foods, and fortified foods. Generally, this term is used to describe any product derived from food sources which is expected to provide health benefits in addition to the nutritional value of daily food. A wide range of nutrients or other substances with nutritional or physiological effects (EU Directive 2002/46/EC) might be present in these products, including vitamins, minerals, amino acids, essential fatty acids, fibres and various plant and herbal extracts. Liposomal nutraceuticals contain bioactive compounds with health-promoting effects. 
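As a simple numerical illustration of the "encapsulation efficiency" defined in the Mechanism section above, the following minimal sketch uses entirely hypothetical quantities:

```python
# Minimal sketch (hypothetical numbers): encapsulation efficiency is the
# amount of compound in the liposome fraction divided by the total initial
# amount of compound used during preparation.
total_compound_mg = 10.0  # compound added during preparation (assumed)
encapsulated_mg = 3.4     # compound recovered inside liposomes (assumed)

efficiency = encapsulated_mg / total_compound_mg
print(f"Encapsulation efficiency: {efficiency:.0%}")  # -> 34%
```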
The encapsulation of bioactive compounds in liposomes is attractive as liposomes have been shown to be able to overcome serious hurdles bioactives would otherwise encounter in the gastrointestinal (GI) tract upon oral intake. Certain factors have far-reaching effects on the percentage of liposomes that are yielded in manufacturing, as well as the actual amount of realized liposome entrapment and the actual quality and long-term stability of the liposomes themselves. They are the following: (1) The actual manufacturing method and preparation of the liposomes themselves; (2) The constitution, quality, and type of raw phospholipid used in the formulation and manufacturing of the liposomes; (3) The ability to create homogeneous liposome particle sizes that are stable and hold their encapsulated payload. These are the primary elements in developing effective liposome carriers for use in dietary and nutritional supplements. Manufacturing The choice of liposome preparation method depends, among other things, on the following parameters: the physicochemical characteristics of the material to be entrapped and those of the liposomal ingredients; the nature of the medium in which the lipid vesicles are dispersed; the effective concentration of the entrapped substance and its potential toxicity; additional processes involved during application/delivery of the vesicles; optimum size, polydispersity and shelf-life of the vesicles for the intended application; and batch-to-batch reproducibility and the possibility of large-scale production of safe and efficient liposomal products. Useful liposomes rarely form spontaneously. They typically form after supplying enough energy to a dispersion of (phospho)lipids in a polar solvent, such as water, to break down multilamellar aggregates into oligo- or unilamellar bilayer vesicles. Liposomes can hence be created by sonicating a dispersion of amphipathic lipids, such as phospholipids, in water. Low shear rates create multilamellar liposomes. The original aggregates, which have many layers like an onion, thereby form progressively smaller and finally unilamellar liposomes (which are often unstable, owing to their small size and the sonication-created defects). Sonication is generally considered a "gross" method of preparation as it can damage the structure of the drug to be encapsulated. Newer methods such as extrusion, micromixing and the Mozafari method are employed to produce materials for human use. Using lipids other than phosphatidylcholine can greatly facilitate liposome preparation. Prospect Further advances in liposome research have been able to allow liposomes to avoid detection by the body's immune system, specifically, the cells of the reticuloendothelial system (RES). These liposomes are known as "stealth liposomes". They were first proposed by G. Cevc and G. Blume and, independently and soon thereafter, by the groups of L. Huang and Vladimir Torchilin, and are constructed with PEG (polyethylene glycol) studding the outside of the membrane. The PEG coating, which is inert in the body, allows for longer circulatory life for the drug delivery mechanism. Studies have also shown that PEGylated liposomes elicit anti-PEG IgM antibodies, thus leading to an enhanced blood clearance of the liposomes upon re-injection, depending on lipid dose and time interval between injections. In addition to a PEG coating, some stealth liposomes also have some sort of biological species attached as a ligand to the liposome, to enable binding via a specific expression on the targeted drug delivery site. 
These targeting ligands could be monoclonal antibodies (making an immunoliposome), vitamins, or specific antigens, but must be accessible. Targeted liposomes can target certain cell types in the body and deliver drugs that would otherwise be systemically delivered. Naturally toxic drugs can be much less systemically toxic if delivered only to diseased tissues. Polymersomes, morphologically related to liposomes, can also be used this way. Also morphologically related to liposomes are highly deformable vesicles, designed for non-invasive transdermal material delivery, known as transfersomes. Liposomes are used as models for artificial cells. Liposomes can be used on their own or in combination with traditional antibiotics as neutralizing agents of bacterial toxins. Many bacterial toxins evolved to target specific lipids of the host cell's membrane and can be baited and neutralized by liposomes containing those specific lipid targets. A study published in May 2018 also explored the potential use of liposomes as "nano-carriers" of fertilizing nutrients to treat malnourished or sickly plants. Results showed that these synthetic particles "soak into plant leaves more easily than naked nutrients", further validating the utilization of nanotechnology to increase crop yields. Machine learning has started to contribute to liposome research. For example, deep learning was used to monitor a multistep bioassay containing sucrose-loaded and nucleotide-loaded liposomes interacting with a lipid membrane-perforating peptide. Artificial neural networks were also used to optimize formulation parameters of leuprolide acetate-loaded liposomes and to predict the particle size and the polydispersity index of liposomes. See also Azotosome Lamella (cell biology) Langmuir–Blodgett film Lipid bilayer Targeted drug delivery References External links Journal of Liposome Research Membrane biology Drug delivery devices Dosage forms
Liposome
[ "Chemistry" ]
3,518
[ "Pharmacology", "Membrane biology", "Drug delivery devices", "Molecular biology" ]
633,969
https://en.wikipedia.org/wiki/American%20Society%20of%20Mechanical%20Engineers
The American Society of Mechanical Engineers (ASME) is an American professional association that, in its own words, "promotes the art, science, and practice of multidisciplinary engineering and allied sciences around the globe" via "continuing education, training and professional development, codes and standards, research, conferences and publications, government relations, and other forms of outreach." ASME is thus an engineering society, a standards organization, a research and development organization, an advocacy organization, a provider of training and education, and a nonprofit organization. Founded as an engineering society focused on mechanical engineering in North America, ASME is today multidisciplinary and global. ASME has over 85,000 members in more than 135 countries worldwide. ASME was founded in 1880 by Alexander Lyman Holley, Henry Rossiter Worthington, John Edson Sweet and Matthias N. Forney in response to numerous steam boiler pressure vessel failures. Known for setting codes and standards for mechanical devices, ASME conducts one of the world's largest technical publishing operations. It holds numerous technical conferences and hundreds of professional development courses each year and sponsors numerous outreach and educational programs. Georgia Tech president and advocate for women engineers Blake R. Van Leer was an executive member. Kate Gleason and Lydia Weld were the first two women members. Codes and standards ASME is one of the oldest standards-developing organizations in America. It produces approximately 600 codes and standards covering many technical areas, such as fasteners, plumbing fixtures, elevators, pipelines, and power plant systems and components. ASME's standards are developed by committees of subject matter experts using an open, consensus-based process. Many ASME standards are cited by government agencies as tools to meet their regulatory objectives. ASME standards are therefore voluntary, unless the standards have been incorporated into a legally binding business contract or incorporated into regulations enforced by an authority having jurisdiction, such as a federal, state, or local government agency. ASME's standards are used in more than 100 countries and have been translated into numerous languages. Boiler and pressure vessel code The largest ASME standard, both in size and in the number of volunteers involved in its preparation, is the ASME Boiler and Pressure Vessel Code (BPVC). The BPVC provides rules for the design, fabrication, installation, inspection, care, and use of boilers, pressure vessels, and nuclear components. The code also includes standards on materials, welding and brazing procedures and qualifications, nondestructive examination, and nuclear in-service inspection. Other notable standardization areas Other notable standardization areas include, but are not limited to: elevators and escalators (A17 series); overhead and mobile cranes and related lifting and rigging equipment (B30 series); piping and pipelines (B31 series); bioprocessing equipment (BPE); valves, flanges, fittings, and gaskets (B16); nuclear components and processes; and performance test codes. 
Publications Journals The journals published by ASME include: Applied Mechanics Reviews Journal of Applied Mechanics Journal of Biomechanical Engineering Journal of Computational and Nonlinear Dynamics Journal of Dynamic Systems, Measurement, and Control Journal of Fluids Engineering Journal of Heat Transfer Journal of Risk and Uncertainty in Engineering Systems Magazine In addition to academic journals, since 1880 the ASME has also published the magazine Mechanical Engineering. Society awards ASME offers four categories of awards: achievement awards to recognize "eminently distinguished engineering achievement"; literature awards for original papers; service awards for voluntary service to ASME; and unit awards, jointly awarded by six societies in recognition of advancement in the field of transportation. ASME Medal Worcester Reed Warner Medal Charles T. Main Student Leadership Award Holley Medal Honorary Member Kate Gleason Award George Westinghouse Medal Henry Laurence Gantt Medal Leonardo Da Vinci Award Lewis F. Moody Award Melville Medal Nadai Medal Old Guard Early Career Award Sia Nemat-Nasser Early Career Award R. Tom Sawyer Award Ralph Coats Roe Medal Soichiro Honda Medal Nadai Medal recipients Satya N. Atluri (2012) Huseyin Sehitoglu (2007) George Z. Voyiadjis (2022) ASME Fellows ASME Fellow is a membership grade of distinction conferred by the ASME Committee of Past Presidents on an ASME member with significant publications or innovations and a distinguished scientific and engineering background. Over 3,000 members have attained the grade of Fellow. The ASME Fellow membership grade is the highest elected grade in ASME. E-Fests ASME runs several annual E-Fests, or Engineering Festivals, taking the place of the Student Professional Development Conference (SPDC) series. In addition to the Human Powered Vehicle Challenge (HPVC), the Innovative Additive Manufacturing 3D Challenge (IAM3D), the Student Design Competition, and the Old Guard Competition, there are also talks, interactive workshops, and entertainment. These events allow students to network with working engineers, host contests, and promote ASME's benefits to students as well as professionals. E-Fests are held in four regions (the western U.S., the eastern U.S., Asia Pacific, and South America), with the E-Fest location for each region changing every year. Student competitions ASME holds a variety of competitions every year for engineering students from around the world. Human Powered Vehicle Challenge (HPVC) Student Design Competition (SDC) Innovative Design Simulation Challenge (IDSC) Innovative Additive Manufacturing 3D Challenge (IAM3D) Old Guard Competitions Innovation Showcase (IShow) Student Design Expositions Organization ASME has four key offices, including its headquarters operation in New York, N.Y., and three international offices in Beijing, China; Brussels, Belgium; and New Delhi, India. ASME has two institutes and 32 technical divisions within its organizational structure. Volunteer activity is organized into four sectors: Technical Events and Content Public Affairs and Outreach Standards and Certification Student and Early Career Development Controversy In 1982, ASME became the first non-profit organization found to be in violation of the Sherman Antitrust Act, when the United States Supreme Court held the organization liable for more than $6 million in American Society of Mechanical Engineers v. Hydrolevel Corp. 
See also ASME Y14.41-2003 Digital Product Definition Data Practices List of American Society of Mechanical Engineers academic journals List of Historic Mechanical Engineering Landmarks ASME Medal ASME Boiler and Pressure Vessel Code Uniform Mechanical Code American Welding Society References Further reading Calvert, Monte A. The Mechanical Engineer in America, 1830–1910: Professional Cultures in Conflict. Baltimore: The Johns Hopkins University Press, 1967. Hutton, Frederick Remsen (1915). A History of the American Society of Mechanical Engineers. ASME. Sinclair, Bruce. A Centennial History of the American Society of Mechanical Engineers, 1880–1980. Toronto: Toronto University Press, 1980. External links ASME Peerlink (archived) Society Awards American engineering organizations Mechanical engineering organizations ASME Historic Mechanical Engineering Landmarks Engineering societies based in the United States Scientific organizations established in 1880 1880 establishments in New York (state)
American Society of Mechanical Engineers
[ "Engineering" ]
1,430
[ "American Society of Mechanical Engineers", "Mechanical engineering", "Mechanical engineering organizations" ]
634,016
https://en.wikipedia.org/wiki/Fluorescence%20recovery%20after%20photobleaching
Fluorescence recovery after photobleaching (FRAP) is a method for determining the kinetics of diffusion through tissue or cells. It is capable of quantifying the two-dimensional lateral diffusion of a molecularly thin film containing fluorescently labeled probes, or of examining single cells. This technique is very useful in biological studies of cell membrane diffusion and protein binding. In addition, surface deposition of a fluorescing phospholipid bilayer (or monolayer) allows the characterization of hydrophilic (or hydrophobic) surfaces in terms of surface structure and free energy. Similar, though less well known, techniques have been developed to investigate the 3-dimensional diffusion and binding of molecules inside the cell; they are also referred to as FRAP. Experimental setup The basic apparatus comprises an optical microscope, a light source and some fluorescent probe. Fluorescent emission is contingent upon absorption of a specific optical wavelength or color, which restricts the choice of lamps. Most commonly, a broad spectrum mercury or xenon source is used in conjunction with a color filter. The technique begins by saving a background image of the sample before photobleaching. Next, the light source is focused onto a small patch of the viewable area either by switching to a higher magnification microscope objective or with laser light of the appropriate wavelength. The fluorophores in this region receive high intensity illumination which causes their fluorescence lifetime to quickly elapse (limited to roughly 10⁵ photons before extinction). Now the image in the microscope is that of a uniformly fluorescent field with a noticeable dark spot. As Brownian motion proceeds, the still-fluorescing probes will diffuse throughout the sample and replace the non-fluorescent probes in the bleached region. This diffusion proceeds in an ordered fashion, analytically determinable from the diffusion equation. Assuming a Gaussian profile for the bleaching beam, the diffusion constant D can be simply calculated from D = w²/(4t_D), where w is the radius of the beam and t_D is the "characteristic" diffusion time. Applications Supported lipid bilayers Originally, the FRAP technique was intended for use as a means to characterize the mobility of individual lipid molecules within a cell membrane. While providing great utility in this role, current research leans more toward investigation of artificial lipid membranes. Supported by hydrophilic or hydrophobic substrates (to produce lipid bilayers or monolayers respectively) and incorporating membrane proteins, these biomimetic structures are potentially useful as analytical devices for determining the identity of unknown substances, understanding cellular transduction, and identifying ligand binding sites. Protein binding This technique is commonly used in conjunction with green fluorescent protein (GFP) fusion proteins, where the studied protein is fused to a GFP. When excited by a specific wavelength of light, the protein will fluoresce. When the protein being studied is produced with the GFP, the fluorescence can be tracked. Photodestroying the GFP, and then watching the repopulation into the bleached area, can reveal information about protein interaction partners, organelle continuity and protein trafficking. If after some time the fluorescence does not reach the initial level, then some part of the fluorescence is caused by an immobile fraction (that cannot be replenished by diffusion). 
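The size of this immobile fraction can be estimated from three values read off the recovery curve. The minimal sketch below uses hypothetical numbers, and the normalization shown is one common convention rather than part of the technique itself:

```python
# Minimal sketch (hypothetical values): estimating mobile and immobile
# fractions from a normalized FRAP recovery curve.
F_pre = 1.00   # mean fluorescence before bleaching (assumed)
F_post = 0.20  # fluorescence immediately after the bleach (assumed)
F_end = 0.80   # plateau fluorescence after recovery (assumed)

# Fraction of fluorophores free to diffuse back into the bleached region.
mobile_fraction = (F_end - F_post) / (F_pre - F_post)
immobile_fraction = 1.0 - mobile_fraction
print(f"mobile: {mobile_fraction:.0%}, immobile: {immobile_fraction:.0%}")
# -> mobile: 75%, immobile: 25%
```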
Similarly, if the fluorescent proteins bind to static cell receptors, the rate of recovery will be retarded by a factor related to the association and disassociation coefficients of binding. This observation has most recently been exploited to investigate protein binding. Similarly, if the GFP labeled protein is constitutively incorporated into a larger complex, the dynamics of fluorescence recovery will be characterized by the diffusion of the larger complex. Applications outside the membrane FRAP can also be used to monitor proteins outside the membrane. After the protein of interest is made fluorescent, generally by expression as a GFP fusion protein, a confocal microscope is used to photobleach and monitor a region of the cytoplasm, mitotic spindle, nucleus, or another cellular structure. The mean fluorescence in the region can then be plotted versus time since the photobleaching, and the resulting curve can yield kinetic coefficients, such as those for the protein's binding reactions and/or the protein's diffusion coefficient in the medium where it is being monitored. Often the only dynamics considered are diffusion and binding/unbinding interactions; however, in principle proteins can also move via flow, i.e., undergo directed motion, and this was recognized very early by Axelrod et al. This could be due to flow of the cytoplasm or nucleoplasm, or transport along filaments in the cell such as microtubules by molecular motors. The analysis is most simple when the fluorescence recovery is limited by either the rate of diffusion into the bleached area or by the rate at which bleached proteins unbind from their binding sites within the bleached area, and are replaced by fluorescent protein. Let us look at these two limits, for the common case of bleaching a GFP fusion protein in a living cell. Diffusion-limited fluorescence recovery For a circular bleach spot of radius r and diffusion-dominated recovery, the fluorescence is described by an equation derived by Soumpasis (which involves the modified Bessel functions I₀ and I₁): f(t) = exp(−2τ_D/t) [I₀(2τ_D/t) + I₁(2τ_D/t)], with τ_D the characteristic timescale for diffusion and t the time. f is the normalized fluorescence (it goes to 1 as t goes to infinity). The diffusion timescale for a bleached spot of radius r is τ_D = r²/(4D), with D the diffusion coefficient. Note that this is for an instantaneous bleach with a step function profile, i.e., the fraction of protein bleached at time t = 0 is assumed to be 1 inside the bleached spot and 0 outside it, as a function of the distance from the centre of the bleached area. It is also assumed that the recovery can be modelled by diffusion in two dimensions that is both uniform and isotropic. In other words, that diffusion is occurring in a uniform medium, so the effective diffusion constant D is the same everywhere, and that the diffusion is isotropic, i.e., occurs at the same rate along all axes in the plane. In practice, in a cell none of these assumptions will be strictly true. Bleaching will not be instantaneous. Particularly if strong bleaching of a large area is required, bleaching may take a significant fraction of the diffusion timescale τ_D. Then a significant fraction of the bleached protein will diffuse out of the bleached region actually during bleaching. Failing to take account of this will introduce a significant error into D. The bleached profile will not be a radial step function. If the bleached spot is effectively a single pixel, then the bleaching as a function of position will typically be diffraction limited and determined by the optics of the confocal laser scanning microscope used. 
This is not a radial step function and also varies along the axis perpendicular to the plane. Cells are of course three-dimensional, not two-dimensional, as is the bleached volume. Neglecting diffusion out of the plane (we take this to be the xy plane) will be a reasonable approximation only if the fluorescence recovers predominantly via diffusion in this plane. This will be true, for example, if a cylindrical volume is bleached with the axis of the cylinder along the z axis and with this cylindrical volume going through the entire height of the cell. Then diffusion along the z axis does not cause fluorescence recovery, as all protein is bleached uniformly along the z axis, and so neglecting it, as Soumpasis' equation does, is harmless. However, if diffusion along the z axis does contribute to fluorescence recovery then it must be accounted for. There is no reason to expect the cell cytoplasm or nucleoplasm to be completely spatially uniform or isotropic. Thus, the equation of Soumpasis is just a useful approximation, which can be used when the assumptions listed above are good approximations to the true situation, and when the recovery of fluorescence is indeed limited by the timescale of diffusion τ_D. Note that just because the Soumpasis equation can be fitted adequately to data does not necessarily imply that the assumptions are true and that diffusion dominates recovery. Reaction-limited recovery The equation describing the fluorescence as a function of time is particularly simple in another limit. If a large number of proteins bind to sites in a small volume such that the fluorescence signal is dominated by the signal from bound proteins, and if this binding is all in a single state with an off rate k_off, then the fluorescence as a function of time is given by f(t) = 1 − exp(−k_off t). Note that the recovery depends on the rate constant for unbinding, k_off, only. It does not depend on the on rate for binding. Although it does depend on a number of assumptions: The on rate must be sufficiently large in order for the local concentration of bound protein to greatly exceed the local concentration of free protein, and so allow us to neglect the contribution to f of the free protein. The reaction is a simple bimolecular reaction, where the protein binds to localised sites that do not move significantly during recovery. Exchange is much slower than diffusion (or whatever transport mechanism is responsible for mobility), as only then does the diffusing fraction recover rapidly and then act as the source of fluorescent protein that binds and replaces the bound bleached protein and so increases the fluorescence. With r the radius of the bleached spot, this means that the equation is only valid if the bound lifetime 1/k_off ≫ τ_D = r²/(4D). If all these assumptions are satisfied, then fitting an exponential to the recovery curve will give the off rate constant, k_off. However, other dynamics can give recovery curves similar to exponentials, so fitting an exponential does not necessarily imply that recovery is dominated by a simple bimolecular reaction. One way to distinguish between recovery with a rate determined by unbinding and recovery that is limited by diffusion is to note that the recovery rate for unbinding-limited recovery is independent of the size of the bleached area r, while it scales as 1/r² for diffusion-limited recovery. 
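Both limiting models lend themselves to straightforward curve fitting. The following sketch is not from the source: the data are simulated, the spot radius and initial guesses are hypothetical, and a real analysis must first check the assumptions listed above.

```python
# Minimal sketch: fitting the diffusion-limited (Soumpasis) and
# reaction-limited (exponential) recovery models to a FRAP curve.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import ive  # exponentially scaled modified Bessel I_nu

def soumpasis(t, tau_d):
    # f(t) = exp(-2*tau_d/t) * [I0(2*tau_d/t) + I1(2*tau_d/t)], computed
    # with scaled Bessel functions ive(n, x) = iv(n, x)*exp(-x) for stability.
    x = 2.0 * tau_d / t
    return ive(0, x) + ive(1, x)

def reaction_limited(t, k_off):
    # f(t) = 1 - exp(-k_off * t)
    return 1.0 - np.exp(-k_off * t)

# Simulated post-bleach data: time in seconds vs. normalized fluorescence.
rng = np.random.default_rng(0)
times = np.linspace(0.1, 30.0, 100)
data = soumpasis(times, 2.0) + rng.normal(0.0, 0.01, times.size)

(tau_d,), _ = curve_fit(soumpasis, times, data, p0=[1.0])
(k_off,), _ = curve_fit(reaction_limited, times, data, p0=[0.5])

r = 1.0  # bleach-spot radius in micrometres (hypothetical)
D = r**2 / (4.0 * tau_d)  # from tau_D = r^2 / (4 D)
print(f"tau_D = {tau_d:.2f} s -> D = {D:.3f} um^2/s; k_off = {k_off:.3f} 1/s")
```

Comparing the residuals of the two fits, and repeating the experiment with different bleach-spot radii (the diffusion-limited rate scales with 1/r², the reaction-limited rate does not), helps decide which regime applies.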
Thus, if a small and a large area are bleached: if recovery is limited by unbinding, the recovery rates will be the same for the two sizes of bleached area, whereas if recovery is limited by diffusion, recovery will be much slower for the larger bleached area. Diffusion and reaction In general, the recovery of fluorescence will not be dominated by either simple isotropic diffusion or by a single simple unbinding rate. There will be both diffusion and binding, and indeed the diffusion constant may not be uniform in space, and there may be more than one type of binding site, and these sites may also have a non-uniform distribution in space. Flow processes may also be important. This more complex behavior implies that a model with several parameters is required to describe the data; models with only either a single diffusion constant D or a single off rate constant, k_off, are inadequate. There are models with both diffusion and reaction. Unfortunately, a single FRAP curve may provide insufficient evidence to reliably and uniquely fit (possibly noisy) experimental data. Sadegh Zadeh et al. have shown that FRAP curves can be fitted by different pairs of values of the diffusion constant and the on-rate constant, or, in other words, that fits to FRAP data are not unique. This applies to three-parameter (on-rate constant, off-rate constant and diffusion constant) fits. Fits that are not unique are not generally useful. Thus, for models with a number of parameters, a single FRAP experiment may be insufficient to estimate all the model parameters. Then more data is required, e.g., by bleaching areas of different sizes, determining some model parameters independently, etc. See also Fluorescence microscope Photobleaching Fluorescence loss in photobleaching (FLIP) References Cell imaging Biochemistry methods Fluorescence Microscopy Fluorescence techniques Biophysics
Fluorescence recovery after photobleaching
[ "Physics", "Chemistry", "Biology" ]
2,482
[ "Biochemistry methods", "Luminescence", "Fluorescence", "Applied and interdisciplinary physics", "Biophysics", "Microscopy", "Biochemistry", "Cell imaging", "Fluorescence techniques" ]
634,052
https://en.wikipedia.org/wiki/Patch%20clamp
The patch clamp technique is a laboratory technique in electrophysiology used to study ionic currents in individual isolated living cells, tissue sections, or patches of cell membrane. The technique is especially useful in the study of excitable cells such as neurons, cardiomyocytes, muscle fibers, and pancreatic beta cells, and can also be applied to the study of bacterial ion channels in specially prepared giant spheroplasts. Patch clamping can be performed using the voltage clamp technique. In this case, the voltage across the cell membrane is controlled by the experimenter and the resulting currents are recorded. Alternatively, the current clamp technique can be used. In this case, the current passing across the membrane is controlled by the experimenter and the resulting changes in voltage are recorded, generally in the form of action potentials. Erwin Neher and Bert Sakmann developed the patch clamp in the late 1970s and early 1980s. This discovery made it possible to record the currents of single ion channel molecules for the first time, which improved understanding of the involvement of channels in fundamental cell processes such as action potentials and nerve activity. Neher and Sakmann received the Nobel Prize in Physiology or Medicine in 1991 for this work. Basic technique Set-up During a patch clamp recording, a hollow glass tube known as a micropipette or patch pipette filled with an electrolyte solution and a recording electrode connected to an amplifier is brought into contact with the membrane of an isolated cell. Another electrode is placed in a bath surrounding the cell or tissue as a reference ground electrode. An electrical circuit can be formed between the recording and reference electrode with the cell of interest in between. The solution filling the patch pipette might match the ionic composition of the bath solution, as in the case of cell-attached recording, or match the cytoplasm, for whole-cell recording. The solution in the bath solution may match the physiological extracellular solution, the cytoplasm, or be entirely non-physiological, depending on the experiment to be performed. The researcher can also change the content of the bath solution (or less commonly the pipette solution) by adding ions or drugs to study the ion channels under different conditions. Depending on what the researcher is trying to measure, the diameter of the pipette tip used may vary, but it is usually in the micrometer range. This small size is used to enclose a cell membrane surface area or "patch" that often contains just one or a few ion channel molecules. This type of electrode is distinct from the "sharp microelectrode" used to puncture cells in traditional intracellular recordings, in that it is sealed onto the surface of the cell membrane, rather than inserted through it. In some experiments, the micropipette tip is heated in a microforge to produce a smooth surface that assists in forming a high resistance seal with the cell membrane. To obtain this high resistance seal, the micropipette is pressed against a cell membrane and suction is applied. A portion of the cell membrane is suctioned into the pipette, creating an omega-shaped area of membrane which, if formed properly, creates a resistance in the 10–100 gigaohms range, called a "gigaohm seal" or "gigaseal". The high resistance of this seal makes it possible to isolate electronically the currents measured across the membrane patch with little competing noise, as well as providing some mechanical stability to the recording. 
Recording

Many patch clamp amplifiers do not use true voltage clamp circuitry, but instead are differential amplifiers that use the bath electrode to set the zero current (ground) level. This allows a researcher to keep the voltage constant while observing changes in current. To make these recordings, the potential of the patch pipette is measured against that of the ground electrode. Current is then injected into the system to maintain a constant, set voltage. The current that is needed to clamp the voltage is opposite in sign and equal in magnitude to the current through the membrane. Alternatively, the cell can be current clamped in whole-cell mode, keeping current constant while observing changes in membrane voltage.

Tissue sectioning

In addition to the patch clamp apparatus itself, accurate tissue sectioning with a vibratome (such as a compresstome) or a microtome is essential for recordings in tissue. By supplying thin, uniform tissue slices, these devices allow optimal electrode placement. Researchers generally select vibratomes for softer tissues and microtomes for tougher structures when preparing tissue for patch clamp studies, in order to support accurate and dependable recordings. Leica Biosystems and Carl Zeiss AG are notable producers of these devices.

Variations

Several variations of the basic technique can be applied, depending on what the researcher wants to study. The inside-out and outside-out techniques are called "excised patch" techniques, because the patch is excised (removed) from the main body of the cell. Cell-attached and both excised patch techniques are used to study the behavior of individual ion channels in the section of membrane attached to the electrode. Whole-cell patch and perforated patch allow the researcher to study the electrical behavior of the entire cell, instead of single channel currents. The whole-cell patch, which enables low-resistance electrical access to the inside of a cell, has now largely replaced high-resistance microelectrode recording techniques to record currents across the entire cell membrane.

Cell-attached patch

For this method, the pipette is sealed onto the cell membrane to obtain a gigaseal (a seal with electrical resistance on the order of a gigaohm), while ensuring that the cell membrane remains intact. This allows the recording of currents through single, or a few, ion channels contained in the patch of membrane captured by the pipette. By attaching only to the exterior of the cell membrane, there is very little disturbance of the cell structure. Also, by not disrupting the interior of the cell, any intracellular mechanisms normally influencing the channel will still be able to function as they would physiologically. Using this method it is also relatively easy to obtain the right configuration, and once obtained it is fairly stable. For ligand-gated ion channels or channels that are modulated by metabotropic receptors, the neurotransmitter or drug being studied is usually included in the pipette solution, where it can interact with what used to be the external surface of the membrane. The resulting channel activity can be attributed to the drug being used, although it is usually not possible to then change the drug concentration inside the pipette. The technique is thus limited to one point in a dose-response curve per patch. Therefore, the dose response is accomplished using several cells and patches. However, voltage-gated ion channels can be clamped successively at different membrane potentials in a single patch. This yields channel activation as a function of voltage, so a complete I-V (current-voltage) curve can be established from only one patch; a sketch of this procedure follows below. Another potential drawback of this technique is that, just as the intracellular pathways of the cell are not disturbed, they cannot be directly modified either.
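As an illustration of how an I-V curve is assembled from a voltage clamp protocol, the sketch below steps a patch through a series of holding potentials and records the steady-state current at each step. It is a schematic model rather than the article's protocol: the ohmic single-channel description, conductance, reversal potential, and voltage range are all assumed for demonstration.

import math

def steady_state_current(v_mv, g_ns=0.02, e_rev_mv=0.0, v_half_mv=-20.0, k_mv=6.0):
    """Toy voltage-gated channel: ohmic conduction g*(V - Erev) scaled by a
    Boltzmann activation curve. All parameter values are assumed."""
    p_open = 1.0 / (1.0 + math.exp(-(v_mv - v_half_mv) / k_mv))  # activation
    return g_ns * p_open * (v_mv - e_rev_mv)  # nS * mV gives pA

# Step the "patch" through holding potentials from -80 to +40 mV
# and collect one (V, I) point per step, as in a cell-attached protocol.
iv_curve = [(v, steady_state_current(v)) for v in range(-80, 41, 10)]
for v, i in iv_curve:
    print(f"V = {v:+4d} mV -> I = {i:+7.3f} pA")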
Inside-out patch

In the inside-out method, a patch of the membrane is attached to the patch pipette, detached from the rest of the cell, and the cytosolic surface of the membrane is exposed to the external media, or bath. One advantage of this method is that the experimenter has access to the intracellular surface of the membrane via the bath and can change the chemical composition of what the inside surface of the membrane is exposed to. This is useful when an experimenter wishes to manipulate the environment at the intracellular surface of single ion channels. For example, channels that are activated by intracellular ligands can then be studied through a range of ligand concentrations. To achieve the inside-out configuration, the pipette is attached to the cell membrane as in the cell-attached mode, forming a gigaseal, and is then retracted to break off a patch of membrane from the rest of the cell. Pulling off a membrane patch often results initially in the formation of a vesicle of membrane in the pipette tip, because the ends of the patch membrane fuse together quickly after excision. The outer face of the vesicle must then be broken open to enter inside-out mode; this may be done by briefly taking the membrane through the bath solution/air interface, by exposure to a low Ca2+ solution, or by momentarily making contact with a droplet of paraffin or a piece of cured silicone polymer.

Whole-cell recording or whole-cell patch

Whole-cell recordings involve recording currents through multiple channels simultaneously, over a large region of the cell membrane. The electrode is left in place on the cell, as in cell-attached recordings, but more suction is applied to rupture the membrane patch, thus providing access from the interior of the pipette to the intracellular space of the cell. This provides a means to administer treatments (e.g. drugs) and study their effects on cells in real time. Once the pipette is attached to the cell membrane, there are two methods of breaking the patch. The first is by applying more suction. The amount and duration of this suction depend on the type of cell and the size of the pipette. The other method requires a large current pulse to be sent through the pipette. How much current is applied and the duration of the pulse also depend on the type of cell. For some types of cells, it is convenient to apply both methods simultaneously to break the patch. The advantage of whole-cell patch clamp recording over sharp-electrode recording is that the larger opening at the tip of the patch clamp electrode provides lower resistance and thus better electrical access to the inside of the cell. A disadvantage of this technique is that, because the volume of the electrode is larger than the volume of the cell, the soluble contents of the cell's interior will slowly be replaced by the contents of the electrode. This is referred to as the electrode "dialyzing" the cell's contents. After a while, any properties of the cell that depend on soluble intracellular contents will be altered. The pipette solution used usually approximates the high-potassium environment of the interior of the cell to minimize any changes this may cause. There is often a period at the beginning of a whole-cell recording when one can take measurements before the cell has been dialyzed; a simple model of this exchange is sketched after this paragraph.
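Dialysis of the cell by the pipette is often approximated as a single-compartment exponential exchange, with a time constant set by the access resistance and the cell's volume. The sketch below uses this common first-order approximation; the time constant and concentrations are assumed values, not measurements from the article.

import math

def dialyzed_concentration(t_s, c_cell0, c_pipette, tau_s=30.0):
    """First-order washout: the cytosolic concentration relaxes
    exponentially from its initial value toward the pipette value.
    tau_s (assumed 30 s here) grows with access resistance and cell volume."""
    return c_pipette + (c_cell0 - c_pipette) * math.exp(-t_s / tau_s)

# Example: an intracellular messenger at 10 uM being washed out
# by a pipette solution that contains none of it.
for t in (0, 15, 30, 60, 120):
    c = dialyzed_concentration(t, c_cell0=10.0, c_pipette=0.0)
    print(f"t = {t:3d} s -> {c:5.2f} uM remaining")

This is why measurements that depend on soluble second messengers are best taken early in a whole-cell recording, before washout is complete.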
Outside-out patch

The name "outside-out" emphasizes both this technique's complementarity to the inside-out technique and the fact that it places the external, rather than the intracellular, surface of the cell membrane on the outside of the patch of membrane relative to the patch electrode. The formation of an outside-out patch begins with a whole-cell recording configuration. After the whole-cell configuration is formed, the electrode is slowly withdrawn from the cell, allowing a bulb of membrane to bleb out from the cell. When the electrode is pulled far enough away, this bleb will detach from the cell and reform as a convex membrane on the end of the electrode (like a ball open at the electrode tip), with the original outside of the membrane facing outward from the electrode. In this configuration, the fluid inside the pipette simulates the intracellular fluid, while the researcher is free to move the pipette, and the bleb with its channels, to another bath of solution. While multiple channels can exist in a bleb of membrane, single channel recordings are also possible in this conformation if the bleb of detached membrane is small and contains only one channel. Outside-out patching gives the experimenter the opportunity to examine the properties of an ion channel when it is isolated from the cell and exposed successively to different solutions on the extracellular surface of the membrane. The experimenter can perfuse the same patch with a variety of solutions in a relatively short amount of time, and if the channel is activated by a neurotransmitter or drug from the extracellular face, a dose-response curve can then be obtained. This ability to measure current through exactly the same piece of membrane in different solutions is the distinct advantage of the outside-out patch relative to the cell-attached method. On the other hand, it is more difficult to accomplish. The longer formation process involves more steps that could fail and results in a lower frequency of usable patches.

Perforated patch

This variation of the patch clamp method is very similar to the whole-cell configuration. The main difference is that when the experimenter forms the gigaohm seal, suction is not used to rupture the patch membrane. Instead, the electrode solution contains small amounts of an antifungal or antibiotic agent, such as amphotericin B, nystatin, or gramicidin, which diffuses into the membrane patch and forms small pores in the membrane, providing electrical access to the cell interior. When comparing the whole-cell and perforated patch methods, one can think of the whole-cell patch as an open door, in which there is complete exchange between molecules in the pipette solution and the cytoplasm. The perforated patch can be likened to a screen door that only allows the exchange of certain molecules from the pipette solution to the cytoplasm of the cell. Advantages of the perforated patch method, relative to whole-cell recordings, include the properties of the antibiotic pores, which allow equilibration only of small monovalent ions between the patch pipette and the cytosol, but not of larger molecules that cannot permeate the pores. This property maintains endogenous levels of divalent ions such as Ca2+ and signaling molecules such as cAMP.
Consequently, one can have recordings of the entire cell, as in whole-cell patch clamping, while retaining most intracellular signaling mechanisms, as in cell-attached recordings. As a result, there is reduced current rundown, and stable perforated patch recordings can last longer than one hour. Disadvantages include a higher access resistance relative to whole-cell recording, due to the partial membrane occupying the tip of the electrode. This may decrease current resolution and increase recording noise. It can also take a significant amount of time for the antibiotic to perforate the membrane (about 15 minutes for amphotericin B, and even longer for gramicidin and nystatin). The membrane under the electrode tip is weakened by the perforations formed by the antibiotic and can rupture. If the patch ruptures, the recording is then in whole-cell mode, with antibiotic contaminating the inside of the cell.

Loose patch

A loose patch clamp is different from the other techniques discussed here in that it employs a loose seal (low electrical resistance) rather than the tight gigaseal used in the conventional technique. This technique was used as early as 1961, as described in a paper by Strickholm on the impedance of a muscle cell's surface, but received little attention until it was taken up again and given a name by Almers, Stanfield, and Stühmer in 1982, after patch clamp had been established as a major tool of electrophysiology. To achieve a loose patch clamp on a cell membrane, the pipette is moved slowly towards the cell until the electrical resistance of the contact between the cell and the pipette increases to a few times greater than the resistance of the electrode alone. The closer the pipette gets to the membrane, the greater the resistance of the pipette tip becomes; if it gets too close, however, a tight seal may form, and it can become difficult to remove the pipette without damaging the cell. For the loose patch technique, the pipette does not get close enough to the membrane to form a gigaseal or a permanent connection, nor to pierce the cell membrane. The cell membrane stays intact, and the lack of a tight seal creates a small gap through which ions can pass outside the cell without entering the pipette. A significant advantage of the loose seal is that the pipette can be repeatedly removed from the membrane after recording, and the membrane will remain intact. This allows repeated measurements in a variety of locations on the same cell without destroying the integrity of the membrane. This flexibility has been especially useful to researchers for studying muscle cells as they contract under real physiological conditions, obtaining recordings quickly, and doing so without resorting to drastic measures to stop the muscle fibers from contracting. A major disadvantage is that the resistance between the pipette and the membrane is greatly reduced, allowing current to leak through the seal and significantly reducing the resolution of small currents. This leakage can be partially corrected for, however (a sketch of one such correction follows below), which offers the opportunity to compare and contrast recordings made from different areas on the cell of interest. Given this, it has been estimated that the loose patch technique can resolve current densities smaller than 1 mA/cm2.
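One common way to correct for leak is linear leak subtraction: the seal leak is treated as an ohmic resistance, estimated from small voltage steps over a range where the channels of interest stay closed, and the predicted leak current is then subtracted from each record. The sketch below illustrates this general idea with assumed numbers; it is not a specific protocol from the article (P/N-style subtraction is a widely used variant of the same principle).

def fit_leak_conductance(test_steps_mv, leak_currents_pa):
    """Estimate the ohmic seal leak g = I/V by least squares through the
    origin, using small steps where voltage-gated channels stay closed."""
    num = sum(v * i for v, i in zip(test_steps_mv, leak_currents_pa))
    den = sum(v * v for v in test_steps_mv)
    return num / den  # conductance in nS (pA per mV)

def subtract_leak(v_mv, measured_pa, g_leak_ns):
    """Remove the predicted linear leak from a measured current."""
    return measured_pa - g_leak_ns * v_mv

# Assumed example: small hyperpolarizing steps used to estimate the leak...
g = fit_leak_conductance([-10.0, -20.0, -30.0], [-5.1, -9.8, -15.2])
# ...then applied to a test step at +20 mV that also opened channels.
channel_only = subtract_leak(20.0, measured_pa=40.0, g_leak_ns=g)
print(f"leak conductance ~ {g:.3f} nS; channel current ~ {channel_only:.1f} pA")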
Patch-seq

Patch-seq is a combination of cellular imaging, RNA sequencing, and patch clamping used to characterize neurons across multiple modalities. As neural tissues are among the most transcriptomically diverse populations of cells, classifying neurons into cell types in order to understand the circuits they form is a major challenge for neuroscientists. Combining classical classification methods with single-cell RNA sequencing post hoc has proved to be difficult and slow. By combining multiple data modalities such as electrophysiology, sequencing, and microscopy, Patch-seq allows neurons to be characterized in multiple ways simultaneously. It currently suffers from low throughput relative to other sequencing methods, mainly due to the manual labor involved in achieving a successful patch clamp recording on a neuron. Investigations are currently underway to automate patch clamp technology, which would improve the throughput of Patch-seq as well.

Automatic patch clamping

Automated patch clamp systems have been developed in order to collect large amounts of data inexpensively in a shorter period of time. Such systems typically include a single-use microfluidic device, either an injection-molded or a polydimethylsiloxane (PDMS) cast chip, to capture a cell or cells, and an integrated electrode. In one form of such an automated system, a pressure differential is used to draw the cells being studied towards the pipette opening until they form a gigaseal. Then, by briefly exposing the pipette tip to the atmosphere, the portion of the membrane protruding from the pipette bursts, leaving the membrane in the inside-out conformation at the tip of the pipette. In a completely automated system, the pipette and the membrane patch can then be rapidly moved through a series of different test solutions, allowing different test compounds to be applied to the intracellular side of the membrane during recording.
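To make the automated sequence concrete, the sketch below models it as a simple control loop: apply suction until the measured seal resistance crosses a gigaseal threshold, then vent the tip to atmosphere to obtain the inside-out configuration. Every name, pressure value, and threshold here is hypothetical; real systems expose vendor-specific hardware interfaces.

# Hypothetical controller sketch; read_seal_resistance() and the pressure
# callbacks stand in for whatever the actual rig's hardware API provides.
GIGASEAL_OHMS = 1e9

def form_inside_out_patch(read_seal_resistance, set_pressure_kpa,
                          vent_to_atmosphere, max_steps=100):
    """Apply gradually increasing suction until a gigaseal forms, then
    briefly vent the tip to burst the protruding membrane (inside-out)."""
    suction = -0.5  # kPa, gentle starting suction (assumed value)
    for _ in range(max_steps):
        set_pressure_kpa(suction)
        if read_seal_resistance() >= GIGASEAL_OHMS:
            vent_to_atmosphere()  # burst the protruding membrane
            return True
        suction -= 0.5  # ramp suction gradually
    return False  # no gigaseal formed; discard this chip site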
Patch clamp
[ "Chemistry" ]
3,917
[ "nan" ]
634,056
https://en.wikipedia.org/wiki/Kingdome
The Kingdome (officially the King County Stadium) was a multi-purpose stadium located in the Industrial District (later SoDo) neighborhood of Seattle, Washington, United States. Owned and operated by King County, it was best known as the home stadium of the Seattle Seahawks of the National Football League (NFL) and the Seattle Mariners of Major League Baseball (MLB); it was also home to the Seattle SuperSonics of the National Basketball Association (NBA) (from 1978 to 1985) and additionally served as both the home outdoor and indoor venue for the Seattle Sounders of the North American Soccer League (NASL). The Kingdome measured wide from its inside walls. The idea of constructing a covered stadium for a major league football or baseball team was first proposed to Seattle officials in 1959. Voters rejected separate measures to approve public funding for such a stadium in 1960 and 1966, but the outcome was different in 1968; King County voters approved the issue of $40 million in municipal bonds to construct the stadium. Construction began in 1972 and the stadium opened in 1976 as the home of the Sounders and Seahawks. The Mariners moved in the following year, and the SuperSonics moved in the year after that, only to move back to the Seattle Center Coliseum in 1985. The stadium hosted several major sports events, including the Soccer Bowl in August 1976, the Pro Bowl in January 1977, the Major League Baseball All-Star Game in July 1979, the NBA All-Star Game in 1987, and the NCAA Final Four in 1984, 1989, and 1995. During the 1990s, the Seahawks' and Mariners' respective ownership groups began to question the suitability of the Kingdome as a venue for each team, threatening to relocate unless new, publicly funded stadiums were built. One issue was that neither team saw their shared tenancy as profitable; both teams also questioned the integrity of the stadium's roof, as highlighted by the collapse of ceiling tiles onto the seating area before a scheduled Mariners game in 1994. As a result, public funding packages for new, purpose-built stadiums for the Mariners and Seahawks were approved in 1995 and 1997, respectively. The Mariners moved to Safeco Field, now known as T-Mobile Park, midway through the 1999 season, and the Seahawks temporarily moved to Husky Stadium after the 1999 season. On March 26, 2000, the Kingdome was demolished by implosion. The Seahawks' new stadium, now known as Lumen Field, was built on the site and opened in 2002. King County finally paid off the bonds used to build and repair the Kingdome in 2015, fifteen years after the stadium's demolition.

Concept and construction

In 1959, Seattle restaurateur David L. Cohn wrote a letter to the Seattle City Council suggesting the city needed a covered stadium for a major professional sports franchise. A domed stadium was thought to be a must because of Seattle's frequent rain. At the time, the city had Husky Stadium and Sick's Stadium for college football and minor league baseball, respectively, but both were deemed inadequate for a major league team. In 1960, King County commissioners placed a $15 million bond issue measure on the ballot to fund construction of a stadium, but voters defeated it on November 8 with only 48 percent approval, because of doubts that the stadium could be built within that budget and the lack of a guarantee that the city would have a team to play in it.
By 1966, the National Football League and the American League were both considering granting the city an expansion franchise, and as a result, the King County Council placed another bond issue measure on the ballot for a September vote. While it received 51.5 percent approval, it did not reach the 60 percent required to proceed; the requirement was due to a 1932 initiative that mandated a supermajority for tax levies over 40 mills. In 1967, the American League granted Seattle an expansion franchise that would be known as the Seattle Pilots. The league clearly stated Sick's Stadium was not adequate as a major-league stadium, and stipulated that as a condition of being awarded the franchise, bonds had to be issued to fund construction of a domed stadium that had to be completed by 1970; additionally, the capacity at Sick's Stadium had to be expanded from 11,000 to 30,000 by Opening Day 1969, when the team was scheduled to begin playing. The Pilots were supposed to begin play in 1971 along with the Kansas City Royals. However, when Senator Stuart Symington of Missouri got wind of those plans, he demanded both teams begin play in 1969. The American League had created the Royals and Pilots as a result of the Kansas City Athletics moving to Oakland, and Symington would not accept the prospect of Kansas City waiting three years for baseball's return. On February 13, 1968, King County voters approved the issue of $40 million in bonds to fund construction of the "King County Multipurpose Domed Stadium" with 62 percent in favor; it was part of the Forward Thrust group of bond propositions, which, among other items, included a regional rapid transit system that was rejected. That year, a committee considered over 100 sites throughout Seattle and King County for the stadium; its members unanimously decided the best site would be on the grounds of Seattle Center, site of the 1962 World's Fair. Community members decried the idea, claiming the committee was influenced by special interest groups. The Pilots began play as planned in 1969, but Sick's Stadium proved to be a problematic venue for fans, media, and visiting players alike. The Pilots drew only 677,000 fans that season, not nearly enough to break even. It soon became apparent that the Pilots would not survive long enough to move to their new stadium without new ownership. It was also obvious that the timetable for a new stadium would have to be significantly advanced, as Sick's Stadium was completely unsuitable even for temporary use. However, a petition by stadium opponents brought the dome project to a halt. The Pilots' ownership group ran out of money by the end of the season, and with the stadium plans in limbo, the team was forced to declare bankruptcy. Despite efforts by Seattle-area businessmen to buy the team, as well as an attempt to keep the team in Seattle through the court system, the Pilots were sold to Milwaukee businessman Bud Selig, who relocated the team to Milwaukee and renamed it the Milwaukee Brewers a week before the start of the 1970 season. The push to build the domed stadium continued despite the lack of a major league sports team to occupy it. In May 1970, voters rejected the proposal to build the stadium at Seattle Center. From 1970 to 1972, the commission studied the feasibility and economic impact of building the stadium on King Street adjacent to Pioneer Square and the International District—a site that ranked at the bottom when the commission originally narrowed the field of possible sites in 1968.
This drew sharp opposition primarily from the International District community, which feared the impact of the stadium on neighborhood businesses located east of the site. The King Street site was approved 8–1 by the county council in late 1971, and the groundbreaking ceremony was held on November 2, 1972. Several protesters attended the ceremony, disrupted the speakers, and at one point threw mud balls at them. In bidding for construction of the stadium, which was offered as separate contracts for the dome and the rest of the stadium, Donald M. Drake Construction Company of Portland, Oregon, was the winning contractor for both, with bids of $28.9 million for the stadium and $5.9 million for the dome. Peter Kiewit Sons Construction Company was the only other bidder, offering $30.57 million for the stadium and $5.8 million for the roof; the latter came with the caveat of the company using its own design consultant. To help alleviate tension between the International District community and county officials, Drake emphasized the hiring of minorities, with minorities eventually representing 13 percent of the workers at the site; a community center and a shelter were also built in the neighborhood. However, the stadium's construction encountered numerous issues; in January 1973, six support beams for the roof were toppled as one or two of them buckled, bringing down the others in a domino effect. By January 1974, the stadium had reached 50 percent completion; with only 60 percent completion by July, it was clear that Drake would not meet the December deadline. It was also apparent that Drake was ill-prepared to work on a project of such scale, with numerous errors, delays, and short-staffing slowing down construction. Efforts to renegotiate the contract failed, and on November 22, Drake stopped work on the Kingdome. The county fired Drake on December 10, bringing in Kiewit to finish construction of the stadium. On December 5, 1974, the NFL awarded Seattle an expansion franchise to occupy the new stadium; the team was later named the Seattle Seahawks. Construction lasted another two years, and the stadium held an opening ceremony on March 27, 1976. It hosted its first professional sporting event two weeks later on April 9, an exhibition soccer game between the Seattle Sounders and New York Cosmos of the NASL. It set a record for the largest soccer audience in North America at 58,128. The stadium was finished $20 million over budget, with part of the cost overrun covered by a $12.8 million out-of-court settlement in 1980 between the county and Drake's liability insurers.

Surface

Like virtually all other multi-purpose stadiums, the Kingdome featured AstroTurf artificial turf for its playing surface, with its baseball configuration featuring dirt sliding pits around each base. When it was constructed, artificial turf was considered a must because the roof was likely to inhibit the growth of natural grass, as with the Astrodome's roof. The AstroTurf surface was first replaced in July 1983 during the MLB All-Star break; Monsanto, the then-owner of AstroTurf, won the turf replacement contract over SuperTurf (then used by the Metrodome) with a bid of $1.2 million. By request of the Mariners and Seahawks, it was replaced again in October and December 1990 at a cost of $2.56 million; the previous surface was sold off thereafter, with 25 rolls of it sold to the Tacoma Dome for $108,200.
A strip of turf 40 feet by 4 inches was ripped up in left field near second base during a field invasion by celebrating fans after the Mariners won the AL West tiebreaker game in 1995; it was replaced before the first Mariners home game in the ALDS. Before the 1990 replacement, the AstroTurf surface was converted from baseball to football configuration by covering the infield with turf strips; a one-piece surface was placed over the infield after the conclusion of the Mariners season. The surface was attached together via both Velcro and Ziploc fasteners. After the 1990 replacement, separate surfaces were installed for each team; the Seahawks specifically wanted a stiffer variation of AstroTurf. The replacement surfaces were attached together via zippers. The underlying base of the surface was asphalt, with the AstroTurf essentially consisting of a carpet on top of a pad, with respective thicknesses of one-half inch and five-eighths inch. Lumps, holes, and ridges were also present in the surface, along with gaps within its seams. These factors combined to create a playing surface that was despised by football and baseball players alike; after the 1998 season, a survey by the NFL Players Association found that 56.7 percent of Seahawks players rated the surface as "poor" or "fair", the worst rating in the AFC West. Injuries from playing at the Kingdome and its contemporaries occurred more often compared to stadiums with natural grass. Of note, Seahawks running backs Sherman Smith and Curt Warner suffered season-ending knee injuries in 1980 and 1984, respectively, during games at the Kingdome; additionally, the Kingdome's surface is partly blamed for Ken Griffey Jr.'s subsequent injuries and decline in performance after the Mariners traded him to the Cincinnati Reds at the end of the 1999 season.

Football

Seahawks

The expansion Seattle Seahawks of the National Football League (NFL) played their first game on August 1, 1976, a preseason game against the San Francisco 49ers at the Kingdome, which they lost 27–20 before a crowd of 60,825. The Seahawks' first regular season game was against the St. Louis Cardinals at the Kingdome on September 12. The Cardinals defeated the Seahawks, 30–24, with 58,441 fans in attendance. At the end of that season, the venue hosted the Pro Bowl, the NFL's all-star game, on January 17, 1977. The Seahawks hosted Monday Night Football games at the Kingdome twelve times in their history and were 9–3 in those games. The Seahawks and the Oakland/Los Angeles Raiders played five Monday Night games in the Kingdome in the 1980s, with Seattle holding a 3–2 edge, including a 37–0 blowout victory in 1986. The next year, in 1987, Bo Jackson of the Los Angeles Raiders rushed for 221 yards, the most ever on MNF, and scored two touchdowns. One of his scores was a 91-yard touchdown, and the other was a historic plowing into Seahawks high-profile rookie linebacker Brian "The Boz" Bosworth. The Seahawks regularly sold out games at the Kingdome from its inception and throughout the 1980s; 117 consecutive regular-season home games were sold out between 1979 and 1993. However, after Ken Behring took over ownership of the team from the Nordstrom family in 1988, the team began to decline in performance; after winning the AFC West that year, it suffered a franchise-worst 2–14 record in 1992.
Season ticket sales, which had reached 62,000 that year with a waiting list of 30,000, gradually decreased to 46,000 in 1995, with the team averaging 46,218 in attendance over five games at the Kingdome in 1994; as a result, the Seahawks began failing to sell out games, resulting in local television blackouts in the Seattle market. After the blackout of the October 24, 1993 game versus the New England Patriots, one more game was blacked out that year, with five games blacked out the following year; KING-TV, which as Seattle's NBC affiliate was the Seahawks' local broadcast home at the time, prevented further blackouts by purchasing all remaining unsold tickets for three games in 1993 and two games in 1994. In the Seahawks' heyday, the Kingdome was known as one of the loudest stadiums in the league. Opposing teams were known to practice with jet engine sounds blaring at full blast to prepare for the painfully high decibel levels typical of Seahawks games. It was where Seahawks fans, long called "the 12th Man", made their reputation as one of the most ravenous fan bases in the NFL, a reputation that has carried over to what is now Lumen Field; their support led the Seahawks to retire the number 12 in their honor in 1984. The Kingdome's reputation contributed to the NFL's 1989 vote in favor of enacting a rule penalizing home teams for excessive crowd noise; the rule was especially loathed by Seahawks fans during preseason games, with fan displeasure throughout the league leading commissioner Pete Rozelle to soften its enforcement before the start of the regular season. Raucous Seahawks fans at the Kingdome were also some of the earliest performers of The Wave. The city of Seattle made numerous bids to host the Super Bowl during the Seahawks' tenure at the Kingdome. However, despite five bids over 12 years, the Kingdome was never awarded the opportunity to host a Super Bowl; its closest chance was in 1989 for Super Bowl XXVI, which was awarded to the Hubert H. Humphrey Metrodome in Minneapolis, Minnesota. In its 1982 bid for Super Bowl XIX, the Seattle City Council voted to give tax exemptions to the NFL if the league selected the Kingdome to host the game. The Seahawks played their final game at the Kingdome on January 9, 2000, suffering a first-round playoff loss to the Miami Dolphins in their first playoff appearance since the 1988 season. The Dolphins scored a fourth-quarter touchdown to win 20–17; it marked the first home playoff loss for the Seahawks as well as the first road playoff win in 28 years for the Dolphins. It was the last NFL victory for Hall of Fame quarterback Dan Marino and head coach Jimmy Johnson, and it was also the last event the Kingdome ever hosted before its implosion. The Seahawks had an overall record of in the Kingdome, and were 2–1 in the postseason.

Amateur

College

The first football game played in the Kingdome (and, by extension, its first college football game) occurred just after it opened in 1976, when the Washington Huskies varsity team won 10–7 against a team of Husky alumni on May 1 before 20,470 fans. The Huskies looked into temporarily renting the Kingdome for the 1987 season when the north grandstand of Husky Stadium collapsed during construction on February 25; however, the Kingdome was ultimately not needed, as the grandstand was completed in time for the team's first home game against the Stanford Cardinal on September 5.
(Seven years later, the Seattle Seahawks would use Husky Stadium as their home field during the first half of the 1994 season while the Kingdome's ceiling was under repair.) The Kingdome also hosted a game between the Washington State Cougars and USC Trojans on October 9, 1976. With 37,268 in attendance, USC running back Ricky Bell rushed for 346 yards and set the Pac-8 single-game rushing record; the Trojans won by nine points. In 1994, under then-new athletic director Rick Dickson, the Cougars flirted with the idea of hosting an additional home game at the Kingdome starting in 1997; however, the plan never came to fruition. In the late 1970s, the Kingdome hosted both instances of a Pacific-10 Conference all-star game called the Challenge Bowl; the bowl, sponsored by the Olympia Brewing Company, pitted an all-star team of Pac-10 players against a similar team from another conference. The Pac-10 went undefeated, with a 27–20 victory (as the Pac-8) over the Big Ten on January 15, 1978, and a 36–23 victory over the Big Eight on January 13, 1979. During the same period, the University of Puget Sound Loggers and Pacific Lutheran University Lutes also faced off at the Kingdome twice; the Loggers won both contests, defeating the Lutes 23–21 on September 17, 1977, with 13,167 in attendance, and then defeating them again 27–14 on September 23, 1978, before a crowd of 8,329. The 1977 game set a series attendance record at the time.

Other levels

The stadium also hosted the annual WIAA high school football state championships in an event called the Kingbowl from 1977 through 1994; the title games were moved to the Tacoma Dome in nearby Tacoma in 1995. The Seattle and Tacoma Police Departments played a yearly game named the Bacon Bowl to raise money for charity from 1980 to 2005; the Kingdome hosted it from the beginning until 1982, then had a one-off in 1985 during a nine-year span in which the Tacoma Dome hosted the rest of the games. The Kingdome hosted the game again from 1992 to 1994 before it returned to the Tacoma Dome; the game came back one final time in 1999 before the stadium was demolished.

Baseball

Shortly after the Pilots' departure for Milwaukee, the city of Seattle, King County, and the state of Washington sued the American League, claiming a breach of contract. The league agreed to grant Seattle another franchise in exchange for dropping the lawsuit, and the team that would later be known as the Seattle Mariners was born. The Mariners held their first game in franchise history at the Kingdome on April 6, 1977, against the California Angels. The Angels shut out the Mariners 7–0 in front of a sellout crowd of 57,762. The first pitch was a strike thrown by the Mariners' Diego Seguí to Jerry Remy. In the top of the first inning, Don Baylor registered the first hit at the stadium with a double that scored Remy, who had stolen second and third base after drawing a walk from Seguí. The Mariners' first batter, Dave Collins, struck out; however, the next batter, José Báez, singled for the franchise's first ever hit. The first home run at the venue was hit in the top of the third inning by Joe Rudi; designated hitter Juan Bernhardt hit the Mariners' first home run in their fifth game at the Kingdome on April 10. The Mariners earned their first win, both at the Kingdome and in franchise history, two games after the opener (they were also shut out in their second game, 2–0), defeating the Angels 7–6 on April 8 via a walk-off double from Larry Milbourne. The venue hosted the All-Star Game on July 17, 1979.
The Kingdome was somewhat problematic as a baseball venue. Foul territory was quite large, and seats in the upper deck were as far as from home plate. Part of the problem was that the Kingdome was not a multipurpose stadium in the truest sense. Instead, it was built as a football stadium that could convert into a baseball stadium. For instance, most fans in the outfield seats on the 300 level were unable to see parts of right and center field; these areas were not part of the football playing field. For most of the Mariners' first 18 years, their poor play (they did not have a winning season until 1991), combined with the Kingdome's design, led to poor attendance. Some writers and fans called it "the Tomb" (because of its gray concrete and lack of noise) and "Puget Puke." After their inaugural home opener, the Mariners did not have another sellout for the next 1,018 home games, until their 1990 home opener on April 13. At one point the Mariners covered seats in the upper decks in right and right-center with a tarp in order to make the stadium feel "less empty". Additionally, the Kingdome's acoustics created problems for stadium announcers, who had to deal with significant echo issues. However, when the team's fortunes began to change in the mid-1990s and they began drawing larger crowds, especially in the post-season, the noise created an electric atmosphere and gave the home team a distinct advantage, similar to its effect on football games. The average attendance of 22,064 in 1995 was the lowest in three years, though nine home games had been removed from the schedule that season; even so, it was still higher than in any of the Mariners' first 14 seasons. Despite its cavernous interior, the Kingdome's field dimensions were relatively small. It had a reputation as a hitter's park, especially in the 1990s when Ken Griffey Jr., Edgar Martínez, Jay Buhner, Alex Rodriguez, and other sluggers played there. The large number of in-play objects—speakers, roof support wires and streamers—contributed to an "arena baseball" feel. The Kingdome was somewhat improved in 1982 with the addition of a wall in right field nicknamed the "Walla Walla" (after the city in southeastern Washington); a nearly $100,000 Daktronics out-of-town scoreboard was later installed on it in 1990. In 1990 and 1991, the moving of home plate closer to the backstop, the addition of box seats down the third base line, and the removal of a few rows of seats in left field reduced foul territory and made the outfield dimensions longer and asymmetrical. In its early years, the outfield was symmetrical with a uniform wall height: deep in center, and short elsewhere. For the All-Star Game in 1979, center field was , power alleys were , and the foul lines were ; the unpadded wall was green with a top yellow stripe, was approximately in height, and did not have the power alley distances listed on it. Down the lines, the distance was also listed in fathoms (52.7 fm), presumably to maintain a nautical theme in line with the team name; however, this practice was dropped after the 1980 season. Like the Kingdome's contemporaries, the bullpens were located in foul territory adjacent to the baselines and the stands. The longest game in the Kingdome took place on July 30, 1998, when the Cleveland Indians defeated the Mariners 9–8 in 17 innings via a three-run homer from Manny Ramirez off Bob Wells; Paul Shuey staved off a comeback by the Mariners in the bottom of the inning to end the game the next morning, after five hours and 23 minutes.
The most noteworthy baseball game in the Kingdome's history took place on October 8, 1995; in the decisive fifth game of the ALDS, the Mariners defeated the New York Yankees 6–5 in 11 innings in front of 57,411 raucous fans. In the bottom of the 11th, Martínez doubled to left, sending Joey Cora and Griffey home with the tying and winning runs and vaulting the Mariners into the ALCS for the first time in franchise history. On May 2, 1996, a game at the Kingdome between the Mariners and the Cleveland Indians was suspended in the bottom of the seventh inning because of a minor earthquake. The earthquake, estimated at a magnitude of 5.3 to 5.4, occurred during a pitching change as Indians pitcher Orel Hershiser was walking off the mound following a home run by Edgar Martínez. After an inspection by engineers, the game was continued the next evening, resulting in a 6–4 win for the Indians. Seguí, who retired from professional baseball after the 1977 season, was invited by the Mariners to throw the ceremonial last pitch after the final Mariners game at the Kingdome in 1999. However, while they were able to make the tickets and reservations for Seguí, a payment mix-up prevented him from boarding the flight out of Kansas City International Airport on the day of the game; the incident so angered him that he refused to visit Seattle again until 2012, when he was invited as part of the Mariners' 35th anniversary celebration. Despite the disappointment of Seguí's son, then-Mariners first baseman David Segui, the ceremony went on as planned; David's son, then-seven-year-old Cory Segui, threw the last pitch to Bob Stinson, who had been the Mariners' catcher in their first game. In 1989, Griffey Jr. hit a home run in his first-ever plate appearance at the Kingdome on April 10. On June 27, 1999, Griffey Jr. hit the last home run ever at the Kingdome against the Texas Rangers. The Mariners played 1,755 games at the Kingdome, compiling an overall home record of during their 22½-season tenure there.

Basketball

SuperSonics

Besides the Mariners and Seahawks, the stadium also hosted the Seattle SuperSonics of the National Basketball Association (NBA) for seven seasons. The SuperSonics, having previously played at the Seattle Center Coliseum, announced on July 29, 1977, that they intended to move into the Kingdome for the 1978–79 season after the expiration of their contract with the city of Seattle, the owner of the Coliseum; the team pushed for a move to the Kingdome after the city balked at a $30 million plan to expand the Coliseum to 20,000 seats the previous year. On August 22, the King County Council voted 7–2 to approve a 17-year lease with the SuperSonics, with the agreement signed the following day. The following week, on August 29, the council unanimously voted to spend $1.5 million on improvements to the Kingdome in preparation for the team; the team would pay the same amount over the first seven years as part of the agreement. Additional terms of the agreement had the SuperSonics pay the county 10 percent of ticket sale proceeds (not including admissions taxes) and $2,539 in personnel costs per game; the county additionally kept all game concession and parking revenue. On the same day the agreement was signed, longtime Kingdome critic Frank Ruano filed a referendum petition in an attempt to halt the move, but he announced on September 17 that he would withdraw the petition for lack of support.
While the SuperSonics had played a few games at the Kingdome over the previous two seasons, their full-time tenancy required the addition of 5,000 portable stadium seats on the floor of the arena, as well as additional scoreboards and a new basketball court. The center circle of the court was positioned over first base, with the court itself laid parallel and adjacent to the right-field seats; the portable seats were positioned across the court, with one end hovering over home plate. The first SuperSonics game in the Kingdome under the agreement was an exhibition game versus the Portland Trail Blazers on September 22, 1978. A few weeks later, a crowd of 15,219 watched as the SuperSonics defeated the Chicago Bulls, 104–86, on October 14 in their first regular-season game as a tenant. Captain Fred Brown and leading scorer Gus Williams helped lead the team to their first and only championship that season, defeating the Washington Bullets in the Finals and avenging their Finals loss to them the previous season. At the time, the Kingdome was known in the NBA for being the noisiest arena for basketball and for having the largest crowds, with stadium vendor Bill Scott (Bill the Beerman) taking on duties as cheerleader. In the 1979–80 season, the SuperSonics set an NBA record average attendance of 21,725 fans per game (since broken). The SuperSonics set the NBA single-game playoff attendance record at 39,457 during Game 4 of the 1978 NBA Finals; they set it again on April 15, 1980, during a conference semifinal game against the Milwaukee Bucks, with an attendance of 40,172 (also since broken). The Kingdome regular-season, single-game attendance record of 38,067 was set on November 22, 1991, when the SuperSonics faced the Chicago Bulls. While leaving a SuperSonics game on February 16, 1983, a 21-year-old man from Olympia fell off a ramp and plunged 47 feet to his death, despite signs installed the previous year warning about the chest-level barriers. Logistics were a problem throughout the team's tenure at the Kingdome because the Seahawks and Mariners had scheduling priority over them, especially in the spring, when the Mariners could be playing there at the same time as the basketball playoffs. As part of the 1977 agreement, King County agreed to pay the SuperSonics $15,000 for each game (up to five) that was moved elsewhere because of booking issues. Even then, the scheduling priority meant that the SuperSonics would only play home playoff games at the Kingdome while the Mariners were on the road, with most of the games played at the Coliseum; the team even had to use Hec Edmundson Pavilion at the University of Washington for a few games when both the Kingdome and the Coliseum were unavailable. Along with the scheduling issues, as with other multipurpose stadiums used by the NBA, the Kingdome proved to be a less-than-ideal venue for basketball. Although the Kingdome's capacity allowed the SuperSonics to set attendance records, the vast space it afforded meant that it did not have the intimate environment of a dedicated arena; furthermore, fans were displeased with the poor sight lines and cold temperatures in the Kingdome. All these factors, plus dwindling attendance due to poor team performance towards the end of their tenancy at the Kingdome, led SuperSonics general manager Zollie Volchok to sign a 10-year contract with the city of Seattle in 1983, agreeing to have the team move back to the Coliseum after the 1984–85 season in exchange for upgrades there.
The SuperSonics faced the Phoenix Suns at the Kingdome on April 7, 1985, in their final game as a regular tenant, losing 125–110 with 5,672 in attendance. However, exemplifying the scheduling issues, it was not their final home game of the season; the SuperSonics were forced to play at the Tacoma Dome on April 11 because the Mariners hosted the Oakland Athletics at the Kingdome that day. By that point, the SuperSonics had an average attendance of 7,399, failing to surpass 10,000 seats sold in 29 of the 37 games held at the Kingdome in their final season there. Despite calling the Coliseum home again, the SuperSonics still played occasionally at the Kingdome over the next few years when large crowds were anticipated; as such, the SuperSonics hosted the 1987 NBA All-Star Game there, having previously hosted the 1974 game at the Coliseum before the Kingdome opened. However, SuperSonics owner Barry Ackerley, who had bought the team from Sam Schulman in October 1983 after the Coliseum deal was signed, started seeking a new arena for the team in 1989; team president Bob Whitsitt claimed that the Coliseum was outdated and leaking. Ackerley proposed to build a new arena south of the Kingdome (where T-Mobile Park stands today), but the plan was initially rejected by King County because of objections from the Seahawks and Mariners over inadequate parking. The plan was eventually approved by the Seattle City Council 7–1 on May 30, 1990, but it was ultimately scrapped the following year, on June 26, because of issues in financing it; as a compromise measure, the Coliseum was rebuilt as KeyArena during the 1994–95 season, with the SuperSonics playing home games at the Tacoma Dome instead of the closer Kingdome in the meantime. The SuperSonics played at KeyArena until they were controversially relocated to Oklahoma City by owner Clay Bennett after the 2007–08 season. The SuperSonics played 303 games at the Kingdome in total, including 14 playoff games; they held an overall record of and a playoff record of at the stadium. Of those games, 20 had attendances of 30,000 or more.

College

The first men's college basketball game at the Kingdome was held on January 9, 1984, when the Washington Huskies defeated the Notre Dame Fighting Irish, 63–61, in double overtime in front of 7,466 fans. The Huskies held their only other basketball game at the Kingdome more than a decade later, defeating the Old Dominion Monarchs 71–61 on December 22, 1994, with 4,187 in attendance. The only women's basketball game at the Kingdome was held on December 6, 1979, when the Soviet national team beat Seattle University 135–45 before 7,239 spectators.

Final Four

The NCAA Final Four of men's college basketball was held three times at the Kingdome, with the stadium hosting the 1984, 1989, and 1995 editions. The 1984 championship game saw the Georgetown Hoyas defeat the Houston Cougars, 84–75. The 1989 championship game had the Michigan Wolverines beat the Seton Hall Pirates, 80–79, in overtime, decided by a controversial last-second foul call against the Pirates. Finally, in the 1995 championship game, the UCLA Bruins defeated the Arkansas Razorbacks, 89–78, to win their first championship since the retirement of coach John Wooden twenty years earlier, in 1975. The Kingdome was not the first venue in Seattle to host the Final Four; Hec Edmundson Pavilion had previously hosted it in 1949 and 1952.
However, the Kingdome is credited with helping shape the Final Four into an event with a stature comparable to that of the Super Bowl because of its large capacity. It was the only venue of such capacity on the West Coast of the United States; the last time a non-Seattle West Coast site hosted the game was when the Los Angeles Memorial Sports Arena hosted it in 1972. The 1995 edition was the last time that Seattle hosted a Final Four, and it will likely remain that way for the foreseeable future, since the Kingdome's successors were not designed with a controlled environment in mind; it also remains the last time that the Final Four was held on the West Coast. The Final Four was not held again in the Western United States until 2017, when University of Phoenix Stadium in Glendale, Arizona, hosted it for the Phoenix area.

Other

On February 18, 1979, the Harlem Globetrotters held an exhibition game at the Kingdome with close to 23,000 in attendance, of whom around 3,500 were under 12 years old. As a result of the United States' boycott of the 1980 Summer Olympics, the U.S. Olympic team faced off against a squad of NBA players in a six-game exhibition tournament called the "Gold Medal Series" that June. On June 20, the NBA All-Stars defeated the U.S. Olympic team, 78–76, before a crowd of 10,902; it was the only victory by the NBA squad in the tournament. The Washington Interscholastic Activities Association (WIAA) held its 3A and 4A high school basketball state tournament five times at the Kingdome between 1993 and 1999. The boys' and girls' games were held simultaneously until the championship, at which point they took turns playing on a single court.

Soccer

Sounders

The Seattle Sounders of the North American Soccer League (NASL) were the first tenant to move into the Kingdome upon its opening, having played at Memorial Stadium for their first two seasons. As a result, they held the honor of hosting the first sporting event at the Kingdome, an exhibition game versus the New York Cosmos on April 9, 1976; the Cosmos defeated them 3–1 with 58,128 fans in attendance. Highlighting the secondary treatment of the Sounders, about 5,000 seats had not yet been installed when the game occurred. Just weeks later, they hosted their first regular-season game in the Kingdome on April 26, defeating the Portland Timbers 1–0 via a Geoff Hurst penalty kick in the second overtime before 24,983 spectators. The largest crowd to attend a Sounders match, regular season or postseason, occurred on August 25, 1977, when 56,256 spectators watched as they defeated the Los Angeles Aztecs 1–0 in the second game of the Pacific Conference Final to advance to their first Soccer Bowl. The Sounders' regular-season attendance record was set on August 9, 1980, when the Cosmos defeated them 1–0 in front of 49,606 fans. Overall, the team drew an average attendance of 20,183 from 1975 to 1982, peaking in the 1980 season with an average attendance of 24,247. Along with traditional soccer, the Sounders participated in NASL indoor soccer during the 1980–81 and 1981–82 seasons. However, the 1983 outdoor season proved to be a dire one for the Sounders; with the team's front office heavily cutting costly foreign players from the roster, the team suffered its worst season on the field, resulting in a record-low average attendance of 8,181. That season additionally saw the smallest crowd to attend a Sounders game, with only 4,270 spectators on hand to witness their 3–1 victory over the Tulsa Roughnecks on July 27.
When the cuts proved insufficient to keep the team afloat, the owners elected to fold it on September 6 of that year; the Sounders' final home game was a 3–2 victory over the San Diego Sockers on August 25, with 7,331 fans in attendance.

College

The Kingdome hosted the NCAA Division I Men's Soccer Championship final twice, in consecutive years. The final on December 17, 1984, featured the Clemson Tigers, coached by Dr. I. M. Ibrahim, and the defending national champion Indiana Hoosiers, headed by coach Jerry Yeagley; 7,926 spectators watched as the Tigers won 2–1 in regulation to bring home their first national championship in soccer and deny the Hoosiers a third straight title. A year later, on December 14, 1985, a crowd of 5,986 watched as the UCLA Bruins defeated the American Eagles 1–0 after eight overtime periods to win their first national soccer championship; Bruins coach Sigi Schmid went on to coach Seattle Sounders FC of Major League Soccer (MLS), a phoenix club of the NASL Sounders, from its inaugural season in 2009 to 2016.

Other professional games

A game of the 1976 U.S.A. Bicentennial Cup tournament was held at the Kingdome on May 28, with Brazil defeating Team America 2–0 before 20,245 spectators. The Kingdome also hosted the NASL's championship game, the Soccer Bowl, between the Minnesota Kicks and Toronto Metros-Croatia on August 28, 1976; the Metros-Croatia defeated the Kicks 3–0 before a crowd of 25,765, setting an NASL championship attendance record at the time. A CONCACAF Championship qualifier for the 1978 FIFA World Cup was hosted at the Kingdome on October 20, 1976; the game, in which the United States defeated Canada 2–0 before a crowd of 17,675, was the first World Cup qualifier ever held indoors. A doubleheader featuring both the U.S. Olympic and national squads was held at the Kingdome on February 3, 1979. The U.S. Olympic team defeated the Canadian Olympic team 2–0 in the first game, while the Soviet national team defeated the U.S. national team 3–1 in the second game; 13,317 spectators were present for both games. The Kingdome was additionally considered in Seattle's bid to be a host city for the 1994 FIFA World Cup, but it was rejected in favor of Husky Stadium because of concerns over its indoor environment and its turf; the bid ultimately failed in part because of apprehension from the University of Washington.

Other events

Upon its opening, the Kingdome served as one of the main convention centers in Seattle alongside the Seattle Center Coliseum. During preliminary studies for the then-proposed Washington State Convention Center (now the Seattle Convention Center) in the early 1980s, a proposal to build it on the stadium's northern parking lot was floated, but it was never seriously considered and was ultimately rejected by the convention center board in favor of building in the Downtown area. The largest crowd to attend a single event in the Kingdome came early, during an eight-day Billy Graham crusade in 1976. The Friday night edition on May 14 drew 74,000 and featured singer Johnny Cash; 5,000 were turned away. The stadium was also part of Seattle's bid to host the 1988 Republican National Convention, but the bid ultimately failed because of a scheduling conflict with the Mariners. Country singer C. W. McCall performed eight shows during the four-day Custom Van, Truck, 4-Wheel Drive and Motorcycle Show, held March 17–20, 1977. The Kingdome hosted a round of the AMA Supercross Championship from 1978 to 1999.
Concerts Numerous rock concerts were held in the venue, despite significant echo and sound delay problems attributable to the structure's cavernous size. Final years The loss of the Sounders and Sonics in the mid-1980s caused financial constraints as the Kingdome was left with 59 unfilled days in its annual schedule. By the 1990s, multi-purpose stadiums had fallen out of favor with the public, and the Kingdome's suitability as an NFL and MLB venue came into doubt as a result. Neither the Seahawks' nor the Mariners' respective ownership groups saw the shared stadium arrangement as economically feasible because the Kingdome was unable to meet the needs of both tenants; they also noted the lack of revenue-generating luxury suites prominent in newer stadiums. After several years of threats to relocate the Mariners because of poor attendance and revenue, then-owner Jeff Smulyan put the team up for sale on December 6, 1991; he subsequently received approval from MLB to sell the team to an ownership group led by Nintendo president Hiroshi Yamauchi on June 10, 1992. Almost immediately, the new ownership group began campaigning with local and state governments to secure public funding for a new baseball-only stadium. In March 1994, King County Executive Gary Locke appointed a task force to study the need for a baseball-only stadium. 1994 ceiling collapse The Kingdome's roof had been problematic from the beginning because of a design flaw. With the stadium's limited budget compared to its contemporaries, its architects had the roof's acoustic ceiling tiles serve a dual purpose as forms to pour concrete over for the roof sections. They were firmly placed via six metal clips on their edges, but the clips' hold weakened as moisture accumulated in the tiles from the polyurethane insulation, which lacked proper water vapor management. As a result, leaks were discovered in the roof three months before the stadium opened, and several attempts at repairs made the situation worse or were quickly undone. In 1993, the county decided to strip off the outer roof coating and replace it with a special coating. Sandblasting failed to strip the old roof material off, and the contractor changed its method to pressure washing. This pressure-washing resulted in water seepage through the roof, and on July 19, 1994, four waterlogged acoustic ceiling tiles fell into the seating area. The tiles fell while the Mariners were on the field preparing for a scheduled game against the Baltimore Orioles, a half-hour before the gates were to open for fans to enter the stadium. As a result, the Kingdome was closed for repairs. The Mariners were forced to play the last 20 games of the 1994 season on the road after the players' union vetoed playing the "home" games at Cheney Stadium in Tacoma, BC Place Stadium in Vancouver, British Columbia, or a neutral site because the union believed that its members should play only in major-league venues. The extended road trip could have lasted over two months, but it was shortened because of the 1994–95 Major League Baseball strike, which began on August 12 and ended up canceling the remainder of the 1994 MLB season; the strike also resulted in a delay to the start of the 1995 season. The Seahawks had to play their two preseason home games and their first three regular-season home games of the 1994 season at nearby Husky Stadium. 
The Kingdome held a reopening ceremony the weekend of November 4–6, 1994, which culminated with the Seahawks returning to the stadium for a regular-season game against the Cincinnati Bengals. Repairing the roof ultimately cost US$51 million, and two construction workers lost their lives in a crane accident on August 17 during the repair. The incident also motivated plans to replace the stadium. Replacement On September 19, 1995, King County voters defeated a ballot measure that would have funded the construction of a new baseball-only stadium for the Mariners. However, the following month, the Mariners made it to the MLB postseason for the first time and, on October 8, defeated the New York Yankees in the decisive fifth game of the 1995 ALDS on the heels of a walk-off game-winning double hit by Edgar Martínez. The Mariners' postseason run demonstrated that there was a fan base in Seattle that wanted the team to stay in town, and as a result, the Washington State Legislature approved a separate funding package for a new stadium on October 14. In January 1996, Seahawks owner Ken Behring announced he was moving the team to Los Angeles and the team would play at Anaheim Stadium, which had recently been vacated as a football venue when the Los Angeles Rams moved to St. Louis (at the same time, the Los Angeles Raiders returned to Oakland after 13 years away). His rationale for the decision included unfounded safety concerns surrounding the seismic stability of the Kingdome. Behring went so far as to relocate team headquarters to Anaheim, California, but his plans were defeated when lawyers determined that the Seahawks could not break their lease on the Kingdome until 2005. As a result, Behring tried to sell the team. He found a potential buyer in Microsoft co-founder Paul Allen, who stipulated that a new publicly funded stadium had to be built as a condition of his purchase of the team. Allen funded a special election held on June 17, 1997, that featured a measure that would allocate public funding for a new stadium for the Seahawks on the Kingdome site. The measure passed, Allen officially purchased the team, and the Kingdome's fate was sealed. Construction on the Mariners' new home had begun on March 8, 1997, with the team intending to start playing there at the beginning of the 1999 season; however, construction delays meant that installation of its retractable roof would not be completed on time, leading to another sale threat by the team's owners. The team eventually agreed to play at the Kingdome from the start of the season until after the All-Star Game. Two years later, a sold-out crowd of 56,530 watched as the Mariners defeated the Texas Rangers 5–2 in their final game at the Kingdome on June 27, 1999; they played their first game at their new home, Safeco Field, nearly three weeks later on July 15. Meanwhile, the Seahawks temporarily relocated to Husky Stadium for two seasons following the 1999 season. To make way for construction of their new stadium, the Kingdome was stripped down and prepared for demolition. During the process, a security incident occurred on February 21, 2000, when a skateboarder disguised himself as a construction worker, climbed up onto the roof, and skated on it with two friends filming him from the nearby Alaskan Way Viaduct; demolition crews were unimpressed by the incident and implemented tighter security measures in response. On the morning of March 26, 2000, at 8:30 AM, the Kingdome was demolished by Controlled Demolition, Inc. 
via implosion, just one day short of 24 years after the stadium's opening; it set a record recognized by Guinness World Records for the largest building, by volume, ever demolished by implosion. The Kingdome was the first large, domed stadium to be demolished in the United States; its demolition was also the first live event covered by ESPN Classic. The new stadium, Seahawks Stadium, eventually opened on July 20, 2002, in time for the beginning of the NFL season that year. The Kingdome was demolished before the debt issued to finance its construction was fully paid, and as of September 2010, residents of King County were still responsible for more than $80 million in debt on the demolished stadium. The debt was retired in March 2015, nine months ahead of the original bond maturity and 15 years after the stadium's demolition. With the debt retired, the 2% portion of the 15.6% hotel/motel tax that had been earmarked for the Kingdome debt went instead to the county's 4Culture program for arts, heritage, and preservation. Seating capacity In popular culture Because of its versatility and its prominent position in the Seattle skyline for close to a quarter-century, the Kingdome was featured in numerous forms of media during and after its existence. On television, it served as the backdrop for a rescue in the 1978 TV movie "Most Deadly Passage" of NBC's Emergency! series, which featured the work of Seattle Medic One paramedics. It was also mentioned in 1992 with the airing of "Crushed", the sixteenth episode of the fifth season of ABC sitcom Full House; in the episode, guest star Tommy Page boasted to Jesse Katsopolis about playing there. The Kingdome was mentioned again in 1998 during the sixth season of NBC sitcom Frasier, which was set in Seattle. In the sixth episode, "Secret Admirer", Martin describes Daphne's frustrating driving that repeatedly takes them right into various traffic delays, ending with them encountering traffic from the Kingdome. Furthermore, the Kingdome's demolition was featured on The History Channel's Modern Marvels series with its "Concrete" episode that first aired on May 31, 2000. The Kingdome was not limited to just television mentions; numerous songs mentioned it in their lyrics. Rock band Foo Fighters mentioned it in the refrain of "New Way Home", which was featured on their 1997 album, The Colour and the Shape. Rapper Macklemore also mentioned the Kingdome in "My Oh My", a 2011 song that paid tribute to Dave Niehaus, the longtime play-by-play announcer of the Mariners who had recently died; in it, he talks about growing up in Seattle and going to the Kingdome. The song mentions the Double in the Mariners–Yankees 1995 ALDS, and its accompanying music video also contains footage of the Kingdome's demolition. With the rise of 3D computer graphics, video games started to depict the Kingdome as well. The Gran Turismo series of racing games on the PlayStation line of consoles featured the Kingdome in the Seattle Circuit race track, a street circuit based on the roads of Seattle. Seattle Circuit is featured in Gran Turismo 2, Gran Turismo 3: A-Spec, Gran Turismo 4, Tourist Trophy, and Gran Turismo PSP. Although the Kingdome was demolished before Gran Turismo 3: A-Spec was released, the game still featured it on the track. The Kingdome also made an appearance in the 2007 RTS game World in Conflict, in which it was destroyed by Soviet artillery during a Soviet invasion of Seattle in an alternate timeline. 
See also Delta Dome Thin-shell structure List of thin shell structures Notes References Bibliography External links The Story behind the implosion of The Seattle Kingdome Kingdome: The Controversial Birth of a Seattle Icon (1959–1976) Video of Kingdome implosion via KING-TV Kingdome King County Domed Stadium (demolished 2000) 2000 disestablishments in Washington (state) Buildings and structures demolished by controlled implosion Defunct college baseball venues in the United States Defunct college football venues Defunct college soccer venues in the United States Defunct Major League Baseball venues Defunct National Football League venues Defunct soccer venues in the United States Defunct indoor soccer venues in the United States Defunct American football venues in the United States Defunct baseball venues in the United States Defunct multi-purpose stadiums in the United States Demolished sports venues in Washington (state) Seattle Mariners stadiums Seattle Seahawks stadiums Seattle SuperSonics Seattle Sounders (1974–1983) Basketball venues in Washington (state) Soccer venues in Washington (state) American football venues in Washington (state) Baseball venues in Washington (state) Baseball venues in Seattle Multi-purpose stadiums in the United States Former NBA venues Architecture in Washington (state) Concrete shell structures Modernist architecture in Washington (state) Sports venues in Seattle Sports venues completed in 1976 Sports venues demolished in 2000 North American Soccer League (1968–1984) indoor venues North American Soccer League (1968–1984) stadiums Demolished buildings and structures in Washington (state) 1976 establishments in Washington (state) Washington Huskies baseball Defunct covered stadiums
Kingdome
[ "Engineering" ]
11,206
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
634,116
https://en.wikipedia.org/wiki/Super%20Sentai
The Super Sentai Series is a Japanese superhero team metaseries and media franchise consisting of television series and films produced by Toei Company and Bandai, and aired by TV Asahi. The shows are of the tokusatsu genre, featuring live action characters and colorful special effects, and are aimed at children and young adults. Super Sentai airs alongside the Kamen Rider series in the Super Hero Time programming block on Sunday mornings. In North America, the Super Sentai series is best known as the source material for the Power Rangers series. Series overview In every Super Sentai series, the protagonists are a team of people who – using either wrist-worn or hand-held devices – transform into superheroes and gain superpowers – color-coded uniforms, signature weapons, sidearms, and fighting skills – to battle a group of otherworldly supervillains that threaten to take over the Earth. In a typical episode, the heroes thwart the enemies' plans and defeat an army of enemy soldiers and the monster of the week before an enlarged version of the monster confronts them, only to be defeated once again when the heroes fight it with their super robot mecha. Each Sentai series is set in its own unique fictional universe; various TV, video, and film specials feature a team-up among two or more teams. The first two Super Sentai series were created by Shotaro Ishinomori, then known for the 1971–1973 Kamen Rider TV series and the long-running manga Cyborg 009. He developed Himitsu Sentai Gorenger, which ran from 1975 to 1977, and J.A.K.Q. Dengekitai, released in 1977. Toei Company put the franchise on hiatus in 1978, collaborating with Marvel Comics to produce a live-action Spider-Man series, which added giant robots to the concept of tokusatsu shows. The giant robot concept was carried over to Toei and Marvel's next show, Battle Fever J, released in 1979, and was then used throughout the Super Sentai series. The next two series, Denshi Sentai Denjiman and Taiyo Sentai Sun Vulcan, also carried Marvel copyrights as co-productions, although Marvel had no creative involvement in them. Subsequently, the remainder of the series has been solely produced by Toei Company. Productions Main series The following is a list of the Super Sentai series and their years of broadcast: Theatrical releases 1975: Himitsu Sentai Gorenger (Movie version of episode 6) 1975: Himitsu Sentai Gorenger: The Blue Fortress (Movie version of episode 15) 1976: Himitsu Sentai Gorenger: The Red Death Match (Movie version of episode 36) 1976: Himitsu Sentai Gorenger: The Bomb Hurricane 1976: Himitsu Sentai Gorenger: Fire Mountain's Final Explosion (Movie version of episode 54) 1977: J.A.K.Q. Dengekitai (Movie version of episode 7) 1978: J.A.K.Q. Dengekitai vs. Gorenger 1979: Battle Fever J (Movie version of episode 5) 1980: Denshi Sentai Denjiman 1981: Taiyo Sentai Sun Vulcan 1982: Dai Sentai Goggle-V 1983: Kagaku Sentai Dynaman 1984: Choudenshi Bioman 1985: Dengeki Sentai Changeman 1985: Dengeki Sentai Changeman: Shuttle Base! Crisis! 1986: Choushinsei Flashman 1987: Choushinsei Flashman: Big Rally! Titan Boy!! (Movie version of episodes 15–18) 1987: Hikari Sentai Maskman 1989: Kousoku Sentai Turboranger 1993: Gosei Sentai Dairanger 1994: Ninja Sentai Kakuranger 1994: Super Sentai World 1994: Toei Hero Daishugō 1995: Chouriki Sentai Ohranger 2001: Hyakujuu Sentai Gaoranger: The Fire Mountain Roars 2002: Ninpu Sentai Hurricanger: Shushutto The Movie 2003: Bakuryū Sentai Abaranger DELUXE: Abare Summer is Freezing Cold! 
2004: Tokusou Sentai Dekaranger The Movie: Full Blast Action 2005: Mahō Sentai Magiranger The Movie: Bride of Infershia ~Maagi Magi Giruma Jinga~ 2006: GoGo Sentai Boukenger The Movie: The Greatest Precious 2007: Juken Sentai Gekiranger: Nei-Nei! Hou-Hou! Hong Kong Decisive Battle 2008: Engine Sentai Go-onger: Boom Boom! Bang Bang! GekijōBang!! 2009: Engine Sentai Go-onger vs. Gekiranger 2009: Samurai Sentai Shinkenger the Movie: The Fateful War 2010: Samurai Sentai Shinkenger vs. Go-onger: GinmakuBang!! 2010: Tensou Sentai Goseiger: Epic on the Movie 2011: Tensou Sentai Goseiger vs. Shinkenger: Epic on Ginmaku 2011: Gokaiger Goseiger Super Sentai 199 Hero Great Battle 2011: Kaizoku Sentai Gokaiger the Movie: The Flying Ghost Ship 2012: Kaizoku Sentai Gokaiger vs. Space Sheriff Gavan: The Movie 2012: Kamen Rider × Super Sentai: Super Hero Taisen 2012: Tokumei Sentai Go-Busters the Movie: Protect the Tokyo Enetower! 2013: Tokumei Sentai Go-Busters vs. Kaizoku Sentai Gokaiger: The Movie 2013: Kamen Rider × Super Sentai × Space Sheriff: Super Hero Taisen Z 2013: Zyuden Sentai Kyoryuger: Gaburincho of Music 2014: Zyuden Sentai Kyoryuger vs. Go-Busters: The Great Dinosaur Battle! Farewell Our Eternal Friends 2014: Heisei Riders vs. Shōwa Riders: Kamen Rider Taisen feat. Super Sentai 2014: Ressha Sentai ToQger the Movie: Galaxy Line S.O.S. 2015: Ressha Sentai ToQger vs. Kyoryuger: The Movie 2015: Super Hero Taisen GP: Kamen Rider 3 2015: Shuriken Sentai Ninninger the Movie: The Dinosaur Lord's Splendid Ninja Scroll! 2016: Shuriken Sentai Ninninger vs. ToQger the Movie: Ninja in Wonderland 2016: Doubutsu Sentai Zyuohger the Movie: The Exciting Circus Panic! 2017: Doubutsu Sentai Zyuohger vs. Ninninger the Movie: Super Sentai's Message from the Future 2017: Kamen Rider × Super Sentai: Ultra Super Hero Taisen 2017: Uchu Sentai Kyuranger the Movie: Gase Indaver Strikes Back 2018: Kaitou Sentai Lupinranger VS Keisatsu Sentai Patranger en Film 2019: Kishiryu Sentai Ryusoulger the Movie: Time Slip! Dinosaur Panic 2020: Kishiryu Sentai Ryusoulger VS Lupinranger VS Patranger 2020: Mashin Sentai Kiramager: Episode Zero 2021: Kishiryu Sentai Ryusoulger Special Chapter: Memory of Soulmates 2021: Mashin Sentai Kiramager The Movie: Bee-Bop Dream 2021: Kikai Sentai Zenkaiger The Movie: Red Battle! All Sentai Rally!! 2021: Saber + Zenkaiger: Superhero Senki 2022: Avataro Sentai Donbrothers The Movie: New First Love Hero 2023: Ohsama Sentai King-Ohger the Movie: Adventure Heaven 2024: Bakuage Sentai Boonboomger GekijōBoon! Promise the Circuit V-Cinema releases 1996: Chōriki Sentai Ohranger: Ohré vs. Kakuranger 1997: Gekisou Sentai Carranger vs. Ohranger 1998: Denji Sentai Megaranger vs. Carranger 1999: Seijuu Sentai Gingaman vs. Megaranger 1999: Kyuukyuu Sentai GoGoFive: Sudden Shock! A New Warrior! 2000: Kyuukyuu Sentai GoGoFive vs. Gingaman 2001: Mirai Sentai Timeranger vs. GoGoFive 2001: Hyakujuu Sentai Gaoranger vs. Super Sentai 2003: Ninpu Sentai Hurricanger vs. Gaoranger 2004: Bakuryū Sentai Abaranger vs. Hurricanger 2005: Tokusou Sentai Dekaranger vs. Abaranger 2006: Mahō Sentai Magiranger vs. Dekaranger 2007: GoGo Sentai Boukenger vs. Super Sentai 2008: Juken Sentai Gekiranger vs. Boukenger 2010: Samurai Sentai Shinkenger Returns 2011: Tensou Sentai Goseiger Returns 2013: Tokumei Sentai Go-Busters Returns vs. 
Dōbutsu Sentai Go-Busters 2013: Ninpu Sentai Hurricanger: 10 Years After 2014: Zyuden Sentai Kyoryuger: 100 Years After 2015: Ressha Sentai ToQger Returns 2015: Tokusou Sentai Dekaranger: 10 Years After 2016: Shuriken Sentai Ninninger Returns 2017: Doubutsu Sentai Zyuohger Returns: Give Me Your Life! Earth Champion Tournament 2017: Space Squad: Uchuu Keiji Gavan vs. Tokusou Sentai Dekaranger 2017: Uchu Sentai Kyuranger: Episode of Stinger 2018: Uchu Sentai Kyuranger vs. Space Squad 2018: Engine Sentai Go-Onger: 10 Years Grand Prix 2019: Lupinranger VS Patranger VS Kyuranger 2021: Kiramager VS Ryusoulger 2021: Kaizoku Sentai: Ten Gokaiger 2022: Kikai Sentai Zenkaiger vs. Kiramager vs. Senpaiger 2023: Avataro Sentai Donbrothers VS Zenkaiger 2023: Ninpu Sentai Hurricanger Degozaru! Shushuuto 20th Anniversary 2023: Bakuryu Sentai Abaranger 20th: The Unforgivable Abare 2024: Tokusou Sentai Dekaranger 20th: Fireball Booster 2024: Ohsama Sentai King-Ohger vs. Donbrothers 2024: Ohsama Sentai King-Ohger vs. Kyoryuger Spin offs / mini-series / extras 2012-2013: Unofficial Sentai Akibaranger 2017: Zyuden Sentai Kyoryuger Brave 2021: The High School Heroes Distribution and overseas adaptations Although the Super Sentai series originated in Japan, various Sentai series have been imported and dubbed in other languages for broadcast in several other countries. United States After Honolulu's KIKU-TV had success with Android Kikaider (marketed as Kikaida) and Kamen Rider V3 in the 1970s, multiple Super Sentai series, including Himitsu Sentai Gorenger and Battle Fever J, were brought to the Hawaiian market, broadcast in Japanese with English subtitles by JN Productions. In 1985, Marvel Comics produced a pilot for an American adaptation of Super Sentai, but the show was rejected by the major US TV networks. In 1986, Saban Productions produced a pilot for an American adaptation of Choudenshi Bioman titled Bio Man. In 1987, some episodes of Kagaku Sentai Dynaman were dubbed and aired as a parody on the USA Network television show Night Flight. In 1993, American production company Saban Entertainment adapted 1992's Kyōryū Sentai Zyuranger into Mighty Morphin Power Rangers for the Fox Kids programming block, combining the original Japanese action footage with new footage featuring American actors for the story sequences. Since then, nearly every Super Sentai series that followed became a new season of Power Rangers. In 2002, Saban sold the Power Rangers franchise to Disney's Buena Vista division, who owned it until 2010, broadcasting Power Rangers on ABC Kids, ABC Family, Jetix, and Toon Disney. On 12 May 2010, Saban bought the franchise back from Disney, moving the show to the Nickelodeon network for 2011 with Power Rangers Samurai. On 25 July 2014, Shout! Factory announced that they would release Zyuranger on DVD in the United States. They have since been the official distributor of Super Sentai in North America, and as of 2024 have released all subsequent series up to Dekaranger, plus Jetman and Fiveman. Shout! also provides episodes on demand via Shout! TV since 2016. Super Sentai episodes are also available to watch on the free streaming service, Tubi. On 1 May 2018, toy company Hasbro announced they had acquired the Power Rangers franchise from Saban Capital Group for $522 million. South Korea Super Sentai has been broadcast in South Korea, dubbed in Korean. 
The first such series was Choushinsei Flashman, which aired as Jigu Bangwidae Flash Man (Earth Defence Squadron Flashman), released in video format in 1989 by the Daeyung Panda video company; this was followed by Hikari Sentai Maskman and Chodenshi Bioman. Throughout the 1990s, Dai Sentai Goggle Five, Dengeki Sentai Changeman, Choujyu Sentai Liveman, and Kousoku Sentai Turboranger were also released in video format. In the 2000s and early 2010s, Tooniverse (formerly Orion Cartoon Network), JEI-TV (Jaeneung Television), Champ TV/Anione TV (Daewon Broadcasting), Cartoon Network South Korea, and Nickelodeon South Korea broadcast Super Sentai series a year after their original Japanese airing, but changed the titles to "Power Rangers". Merchandise Bandai Namco has sold Super Sentai shape-changing model robots since 1979. References External links Official Super Sentai website Toei Video's Super Sentai DVD Soft Guide Bandai's Super Sentai website Toei International Special Content: Super Sentai Series Shout! Factory's Official Super Sentai website Bandai brands Fiction about size change Japanese children's television series Japanese superheroes Mass media franchises introduced in 1975 Superhero television shows Toei tokusatsu
Super Sentai
[ "Physics", "Mathematics" ]
2,815
[ "Fiction about size change", "Quantity", "Physical quantities", "Size" ]
634,140
https://en.wikipedia.org/wiki/Breeder
A breeder is a person who selectively breeds carefully selected mates, normally of the same breed, to sexually reproduce offspring with specific, consistently replicable qualities and characteristics. This might be done as a farmer, agriculturalist, or hobbyist, and can be practiced on a large or small scale, for food, fun, or profit. About A breeder can breed purebred pets such as cats or dogs, or livestock such as cattle or horses, and may show their animals professionally in assorted forms of competition. In these instances, the breeder strives to meet standards for each animal set out by breed organizations. A breeder may also assist with breeding animals in zoos. In other cases, the term breeder can refer to an animal scientist who develops more efficient ways to produce the meat and other animal products humans eat. Earnings as a breeder vary widely because of the various types of work involved in the job title. Even in breeding small domestic animals, earnings differ; they mostly depend on the type of animal being bred and whether or not the breeder has a reputation for breeding champions. The US Bureau of Labor Statistics reports that large animal breeders who work as veterinarians earned a median annual income of $61,029 in 2006. Other individuals employed in the field of animal science earned $47,800. Required education To breed small and domestic animals, no formal training or credentials are required, though it is recommended that breeders familiarize themselves with the desired and standard characteristics of the breed they work with. For those seeking to breed more exotic animals, such as those in a zoo, a bachelor's degree in veterinary science is needed, and it is recommended that the individual go on to graduate school and specialize in zoology. To breed agricultural animals, a 4-year degree in agricultural science is needed for most entry-level positions. See also Animal breeding Animal husbandry Animal fancy Breeding in the wild Plant breeding Dog Breeding References Animal husbandry occupations Pets Breeding
Breeder
[ "Biology" ]
405
[ "Behavior", "Breeding", "Reproduction" ]
634,143
https://en.wikipedia.org/wiki/Disk%20cloning
Disk cloning is the process of duplicating all data on a digital storage drive, such as a hard disk or solid state drive, using hardware or software techniques. Unlike file copying, disk cloning also duplicates the filesystems, partitions, drive metadata and slack space on the drive. Common reasons for cloning a drive include: data backup and recovery, duplicating a computer's configuration for mass deployment, and preserving data for digital forensics purposes. Drive cloning can be used in conjunction with drive imaging, where the cloned data is saved to one or more files on another drive rather than copied directly to another drive. Background Disk cloning occurs by copying the contents of a drive called the source drive. While called "disk cloning", any type of storage medium that connects to the computer via USB, NVMe or SATA can be cloned. A small amount of data is read at a time and held in the computer's memory, then written either directly to another (destination) drive or to a disk image. Typically, the destination drive is connected to a computer (Fig. 1). Once connected, a disk cloner is used to perform the clone itself. A hardware-based drive cloner can be used, which does not require a computer. However, software cloners tend to allow greater flexibility because they can exclude unwanted data from being duplicated, reducing cloning time. For example, the filesystem and partitions can be resized by the software, allowing data to be cloned to any drive whose capacity is equal to or greater than the total used space. Most hardware-based cloners require the destination drive to be the same size as the source drive even if only a fraction of the space is used. Some hardware cloners can clone only the used space but tend to be much more expensive. Applications Deployment A common use of disk cloning is for deployment. For example, a group of computers with similar hardware can be set up much more quickly by cloning the configuration. In educational institutions, students are typically expected to experiment with computers to learn. Disk cloning can be used to help keep computers clean and configured correctly. Further, while installing the operating system is quick, installing programs and ensuring a consistent configuration are time consuming. Thus, disk cloning seeks to mitigate this administrative challenge. Digital forensics One of the most common applications of disk cloning is for digital forensics purposes. This aims to ensure that data is preserved as it existed at the time it was acquired, for later analysis. Techniques for cloning a disk for forensic purposes differ from cloning a drive for other purposes: the cloning process itself must not interfere with the data. Because software cannot be installed on the system, a hardware-based cloner is generally used to duplicate the data to another drive or image. Further, the hardware-based cloner has write-blocking capabilities, which intercept write commands to prevent data from being written to the source drive. Backup Disk cloning can be used as a backup solution by creating a duplicate of data as it existed when the clone was started. The clone can be used to restore corrupted files such as corrupted databases. In modern software solutions, it is not uncommon for disk cloning techniques to be combined with disk imaging techniques to create a backup solution. Drive upgrade Upgrading to a larger or faster drive can be facilitated by cloning the old drive to the new drive once it is installed into the system. This avoids having to manually reinstall applications, drivers and the operating system. The procedure can be used when migrating from mechanical hard disk drives to solid state drives. Modern cloning software tends to communicate with storage devices through a common interface, which means that any storage device can be cloned and migrated. Sometimes, booting from the destination drive can fail and require adjustments in the computer's UEFI or BIOS to make the new clone bootable.
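The core read-and-write loop described under Background can be sketched in a few lines. The following is a minimal illustration rather than a production tool: the device paths are hypothetical Unix-style examples, and a real cloner would add error handling, progress reporting, verification, and used-block detection.

```python
import os

CHUNK_SIZE = 4 * 1024 * 1024  # hold a small amount of data (4 MiB) in memory at a time

def clone_drive(source_path: str, dest_path: str) -> int:
    """Byte-for-byte copy of one drive (or image file) onto another.

    Returns the number of bytes copied. As noted above, the destination
    must be at least as large as the source.
    """
    copied = 0
    with open(source_path, "rb") as src, open(dest_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)   # read a chunk of the source drive
            if not chunk:                  # end of the source reached
                break
            dst.write(chunk)               # write it to the destination
            copied += len(chunk)
        dst.flush()
        os.fsync(dst.fileno())             # ensure the data reaches the device
    return copied

# Hypothetical usage (the source must not be in use while it is read):
# clone_drive("/dev/sda", "/dev/sdb")
```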
Technical challenges There are several technical challenges that need to be considered when planning to clone a drive. Drive in use Often, cloning software runs within the operating system, which is itself running off one of the drives being cloned. As a result, any attempt to clone the contents of the drive, even to a file, risks producing a corrupt copy, because the data changes while it is being read. Consequently, the drive cloner must ensure that the data on the source drive remains in a consistent state at the time of reading. Further, cloning to the computer's system drive generally cannot be done while the operating system is running. A common solution to cloning a drive that is in use, which is utilized by software such as CloneZilla, is to boot from a Linux-based operating system so the drive can be copied and/or overwritten. This approach is not suitable for servers that need to be running all the time and cannot be shut down routinely to perform the backup (or cloning) operation. Further, the Linux-based operating system must provide appropriate drivers for the system's hardware. Drivers are also required for the source and destination drives and for any attached storage involved in facilitating the cloning operation, such as USB, tape device and networking drivers. Some server operating systems incorporate mechanisms to allow the drive to be safely backed up while the system is running to overcome these challenges. For example, Windows Server 2003 (and later) includes the Volume Shadow Copy Service (VSS). VSS takes a snapshot of the drive so that subsequent changes are not written to the snapshot. The snapshot creates a virtual drive called a shadow volume that is backed up (or cloned) by the software. Slow Disk cloning can be time consuming, especially for large disks, because a true clone needs to copy all the data on the disk even if most of it resides in unallocated drive space. Software solutions can determine the space in use and copy only the used data, reducing the time needed to clone the drive. Some drive cloners make use of multithreading to further speed up the cloning operation. See also Comparison of disk cloning software Disk mirroring Disk image List of backup software List of data recovery software List of disk partitioning software Live USB Recovery disc Security Identifier References Storage software Backup Utility software types
Disk cloning
[ "Engineering" ]
1,275
[ "Reliability engineering", "Backup" ]
634,183
https://en.wikipedia.org/wiki/Radio%20spectrum
The radio spectrum is the part of the electromagnetic spectrum with frequencies from 3 Hz to 3,000 GHz (3 THz). Electromagnetic waves in this frequency range, called radio waves, are widely used in modern technology, particularly in telecommunication. To prevent interference between different users, the generation and transmission of radio waves is strictly regulated by national laws, coordinated by an international body, the International Telecommunication Union (ITU). Different parts of the radio spectrum are allocated by the ITU for different radio transmission technologies and applications; some 40 radiocommunication services are defined in the ITU's Radio Regulations (RR). In some cases, parts of the radio spectrum are sold or licensed to operators of private radio transmission services (for example, cellular telephone operators or broadcast television stations). Ranges of allocated frequencies are often referred to by their provisioned use (for example, cellular spectrum or television spectrum). Because it is a fixed resource which is in demand by an increasing number of users, the radio spectrum has become increasingly congested in recent decades, and the need to utilize it more effectively is driving modern telecommunications innovations such as trunked radio systems, spread spectrum, ultra-wideband, frequency reuse, dynamic spectrum management, frequency pooling, and cognitive radio. Limits The frequency boundaries of the radio spectrum are a matter of convention in physics and are somewhat arbitrary. Since radio waves are the lowest frequency category of electromagnetic waves, there is no lower limit to the frequency of radio waves. Radio waves are defined by the ITU as: "electromagnetic waves of frequencies arbitrarily lower than 3000 GHz, propagated in space without artificial guide". At the high frequency end the radio spectrum is bounded by the infrared band. The boundary between radio waves and infrared waves is defined at different frequencies in different scientific fields. The terahertz band, from 300 gigahertz to 3 terahertz, can be considered either as microwaves or infrared. It is the highest band categorized as radio waves by the International Telecommunication Union, but spectroscopic scientists consider these frequencies part of the far infrared and mid infrared bands. The practical limits of the radio spectrum, the range of frequencies useful for radio communication, are set by basic physical considerations that technology cannot overcome. So although the radio spectrum is becoming increasingly congested, there is no way to add frequency bandwidth beyond that currently in use. The lowest frequencies used for radio communication are limited by the increasing size of transmitting antennas required. The size of antenna required to radiate radio power efficiently increases in proportion to wavelength, or inversely with frequency. Below about 10 kHz (a wavelength of 30 km), elevated wire antennas kilometers in diameter are required, so very few radio systems use frequencies below this. A second limit is the decreasing bandwidth available at low frequencies, which limits the data rate that can be transmitted. Below about 30 kHz, audio modulation is impractical and only slow baud rate data communication is used. 
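As a quick numerical illustration of this scaling (a sketch only; the quarter-wavelength element is used here as a common rule of thumb for the size of an efficient antenna, not as a universal requirement):

```python
C = 299_792_458  # speed of light in metres per second

def wavelength_m(frequency_hz: float) -> float:
    """Free-space wavelength corresponding to a given frequency."""
    return C / frequency_hz

# Antenna size grows in proportion to wavelength, i.e. inversely with frequency:
for f_hz in (10e3, 100e3, 1e6, 100e6):
    lam = wavelength_m(f_hz)
    print(f"{f_hz:>12,.0f} Hz  wavelength {lam:>10,.1f} m  quarter-wave {lam / 4:>9,.1f} m")
```

At 10 kHz the quarter-wave length already approaches 7.5 km, which is why only a handful of radio systems operate below that frequency.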
The lowest frequencies that have been used for radio communication are around 80 Hz, in ELF submarine communications systems built by a few nations' navies to communicate with their submerged submarines hundreds of meters underwater. These employ huge ground dipole antennas 20–60 km long excited by megawatts of transmitter power, and transmit data at an extremely slow rate of about 1 bit per minute (17 millibits per second, or about 5 minutes per character). The highest frequencies useful for radio communication are limited by the absorption of microwave energy by the atmosphere. As frequency increases above 30 GHz (the beginning of the millimeter wave band), atmospheric gases absorb increasing amounts of power, so the power in a beam of radio waves decreases exponentially with distance from the transmitting antenna. At 30 GHz, useful communication is limited to about 1 km, but as frequency increases the range at which the waves can be received decreases. In the terahertz band above 300 GHz, the radio waves are attenuated to zero within a few meters due to the absorption of electromagnetic radiation by the atmosphere (mainly due to ozone, water vapor and carbon dioxide), which is so great that it is essentially opaque to electromagnetic emissions, until it becomes transparent again near the near-infrared and optical window frequency ranges. Bands A radio band is a small frequency band (a contiguous section of the range of the radio spectrum) in which channels are usually used or set aside for the same purpose. To prevent interference and allow for efficient use of the radio spectrum, similar services are allocated in bands. For example, broadcasting, mobile radio, or navigation devices will be allocated in non-overlapping ranges of frequencies. Band plan For each radio band, the ITU has a band plan (or frequency plan) which dictates how it is to be used and shared, to avoid interference and to set protocol for the compatibility of transmitters and receivers. Each frequency plan defines the frequency range to be included, how channels are to be defined, and what will be carried on those channels. Typical definitions set forth in a frequency plan are: numbering scheme – which channel numbers or letters (if any) will be assigned center frequencies – how far apart the carrier wave for each channel will be bandwidth and/or deviation – how wide each channel will be spectral mask – how extraneous signals will be attenuated by frequency modulation – what type will be used or are permissible content – what types of information are allowed, such as audio or video, analog or digital licensing – what the procedure will be to obtain a broadcast license ITU The actual authorized frequency bands are defined by the ITU and the local regulating agencies like the US Federal Communications Commission (FCC), and voluntary best practices help avoid interference. As a matter of convention, the ITU divides the radio spectrum into 12 bands, each beginning at a wavelength which is a power of ten (10^n) metres, with corresponding frequency of 3×10^(8−n) hertz, and each covering a decade of frequency or wavelength. Each of these bands has a traditional name. For example, the term high frequency (HF) designates the wavelength range from 100 to 10 metres, corresponding to a frequency range of 3 to 30 MHz. This is just a symbol and is not related to allocation; the ITU further divides each band into subbands allocated to different services. 
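This numbering convention is easy to compute with. The following is a minimal sketch, assuming the usual ITU rule that band number N runs from 0.3×10^N Hz (exclusive) to 3×10^N Hz (inclusive); the name table covers only the commonly cited bands 4 through 12.

```python
import math

# Traditional names for ITU band numbers 4 through 12
ITU_BAND_NAMES = {4: "VLF", 5: "LF", 6: "MF", 7: "HF", 8: "VHF",
                  9: "UHF", 10: "SHF", 11: "EHF", 12: "THF"}

def itu_band_number(frequency_hz: float) -> int:
    """Band N covers 0.3 * 10**N Hz (exclusive) to 3 * 10**N Hz (inclusive)."""
    return math.ceil(math.log10(frequency_hz / 3))

def itu_band_edges_hz(n: int) -> tuple:
    """Lower and upper frequency limits of ITU band number n, in hertz."""
    return (0.3 * 10**n, 3.0 * 10**n)

n = itu_band_number(10e6)   # classify a 10 MHz signal
print(n, ITU_BAND_NAMES[n], itu_band_edges_hz(n))
# -> 7 HF (3000000.0, 30000000.0): HF spans 3-30 MHz, wavelengths 100-10 m
```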
Above 300 GHz, the absorption of electromagnetic radiation by Earth's atmosphere is so great that the atmosphere is effectively opaque, until it becomes transparent again in the near-infrared and optical window frequency ranges. These ITU radio bands are defined in the ITU Radio Regulations. Article 2, provision No. 2.1 states that "the radio spectrum shall be subdivided into nine frequency bands, which shall be designated by progressive whole numbers in accordance with the following table". The table originated with a recommendation of the fourth CCIR meeting, held in Bucharest in 1937, and was approved by the International Radio Conference held at Atlantic City, NJ in 1947. The idea to give each band a number, in which the number is the logarithm of the approximate geometric mean of the upper and lower band limits in Hz, originated with B. C. Fleming-Williams, who suggested it in a letter to the editor of Wireless Engineer in 1942. For example, band 7 spans 3 to 30 MHz; its geometric mean is √(3×10^6 × 3×10^7) ≈ 9.5×10^6 Hz, approximately 10^7 Hz, whose common logarithm is 7. The band name "tremendously low frequency" (TLF) has been used for frequencies from 1–3 Hz (wavelengths from 300,000–100,000 km), but the term has not been defined by the ITU. IEEE radar bands Frequency bands in the microwave range are designated by letters. This convention began around World War II with military designations for frequencies used in radar, which was the first application of microwaves. There are several incompatible naming systems for microwave bands, and even within a given system the exact frequency range designated by a letter may vary somewhat between different application areas. One widely used standard is the IEEE radar bands established by the US Institute of Electrical and Electronics Engineers. EU, NATO, US ECM frequency designations Waveguide frequency bands Comparison of radio band designation standards Applications Broadcasting Broadcast frequencies: Longwave AM Radio = 148.5 kHz – 283.5 kHz (LF) Mediumwave AM Radio = 520 kHz – 1700 kHz (MF) Shortwave AM Radio = 3 MHz – 30 MHz (HF) Designations for television and FM radio broadcast frequencies vary between countries; see Television channel frequencies and FM broadcast band. Since VHF and UHF frequencies are desirable for many uses in urban areas, in North America some parts of the former television broadcasting band have been reassigned to cellular phone and various land mobile communications systems. Even within the allocation still dedicated to television, TV-band devices use channels without local broadcasters. The Apex band in the United States was a pre-WWII allocation for VHF audio broadcasting; it was made obsolete after the introduction of FM broadcasting. Air band Airband refers to VHF frequencies 108 to 137 MHz, used for navigation and voice communication with aircraft. Trans-oceanic aircraft also carry HF radio and satellite transceivers. Marine band The greatest incentive for development of radio was the need to communicate with ships out of visual range of shore. From the very early days of radio, large oceangoing vessels carried powerful long-wave and medium-wave transmitters. High-frequency allocations are still designated for ships, although satellite systems have taken over some of the safety applications previously served by 500 kHz and other frequencies. 
2182 kHz is a medium-wave frequency still used for marine emergency communication. Marine VHF radio is used in coastal waters and relatively short-range communication between vessels and to shore stations. Radios are channelized, with different channels used for different purposes; marine Channel 16 is used for calling and emergencies. Amateur radio frequencies Amateur radio frequency allocations vary around the world. Several bands are common for amateurs worldwide, usually in the HF part of the spectrum. Other bands are national or regional allocations only due to differing allocations for other services, especially in the VHF and UHF parts of the radio spectrum. Citizens' band and personal radio services Citizens' band radio is allocated in many countries, using channelized radios in the upper HF part of the spectrum (around 27 MHz). It is used for personal, small business and hobby purposes. Other frequency allocations are used for similar services in different jurisdictions, for example UHF CB is allocated in Australia. A wide range of personal radio services exist around the world, usually emphasizing short-range communication between individuals or for small businesses, simplified license requirements or in some countries covered by a class license, and usually FM transceivers using around 1 watt or less. Industrial, scientific, medical The ISM bands were initially reserved for non-communications uses of RF energy, such as microwave ovens, radio-frequency heating, and similar purposes. However, in recent years the largest use of these bands has been by short-range low-power communications systems, since users do not have to hold a radio operator's license. Cordless telephones, wireless computer networks, Bluetooth devices, and garage door openers all use the ISM bands. ISM devices do not have regulatory protection against interference from other users of the band. Land mobile bands Bands of frequencies, especially in the VHF and UHF parts of the spectrum, are allocated for communication between fixed base stations and land mobile vehicle-mounted or portable transceivers. In the United States these services are informally known as business band radio. See also Professional mobile radio. Police radio and other public safety services such as fire departments and ambulances are generally found in the VHF and UHF parts of the spectrum. Trunking systems are often used to make most efficient use of the limited number of frequencies available. The demand for mobile telephone service has led to large blocks of radio spectrum allocated to cellular frequencies. Radio control Reliable radio control uses bands dedicated to the purpose. Radio-controlled toys may use portions of unlicensed spectrum in the 27 MHz or 49 MHz bands, but more costly aircraft, boat, or land vehicle models use dedicated radio control frequencies near 72 MHz to avoid interference by unlicensed uses. The 21st century has seen a move to 2.4 GHz spread spectrum RC control systems. Licensed amateur radio operators use portions of the 6-meter band in North America. Industrial remote control of cranes or railway locomotives use assigned frequencies that vary by area. Radar Radar applications use relatively high power pulse transmitters and sensitive receivers, so radar is operated on bands not used for other purposes. Most radar bands are in the microwave part of the spectrum, although certain important applications for meteorology make use of powerful transmitters in the UHF band. 
See also Notes References ITU-R Recommendation V.431: Nomenclature of the frequency and wavelength bands used in telecommunications. International Telecommunication Union, Geneva. IEEE Standard 521-2002: Standard Letter Designations for Radar-Frequency Bands AFR 55-44/AR 105-86/OPNAVINST 3430.9A/MCO 3430.1, 27 October 1964 superseded by AFR 55-44/AR 105-86/OPNAVINST 3430.1A/MCO 3430.1A, 6 December 1978: Performing Electronic Countermeasures in the United States and Canada, Attachment 1,ECM Frequency Authorizations. External links UnwantedEmissions.com A reference to radio spectrum allocations. "Radio spectrum: a vital resource in a wireless world" European Commission policy.
Radio spectrum
[ "Physics" ]
2,787
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
634,233
https://en.wikipedia.org/wiki/Analytical%20hierarchy
In mathematical logic and descriptive set theory, the analytical hierarchy is an extension of the arithmetical hierarchy. The analytical hierarchy of formulas includes formulas in the language of second-order arithmetic, which can have quantifiers over both the set of natural numbers, ℕ, and over functions from ℕ to ℕ. The analytical hierarchy of sets classifies sets by the formulas that can be used to define them; it is the lightface version of the projective hierarchy. The analytical hierarchy of formulas The notation Σ^1_0 = Π^1_0 = Δ^1_0 indicates the class of formulas in the language of second-order arithmetic with number quantifiers but no set quantifiers. This language does not contain set parameters. The Greek letters here are lightface symbols, indicating the language choice. Each corresponding boldface symbol denotes the corresponding class of formulas in the extended language with a parameter for each real; see projective hierarchy for details. A formula in the language of second-order arithmetic is defined to be Σ^1_{n+1} if it is logically equivalent to a formula of the form ∃X₁ … ∃Xₖ ψ where ψ is Π^1_n. A formula is defined to be Π^1_{n+1} if it is logically equivalent to a formula of the form ∀X₁ … ∀Xₖ ψ where ψ is Σ^1_n. This inductive definition defines the classes Σ^1_n and Π^1_n for every natural number n. Kuratowski and Tarski showed in 1931 that every formula in the language of second-order arithmetic has a prenex normal form, and therefore is Σ^1_n or Π^1_n for some n. Because meaningless quantifiers can be added to any formula, once a formula is given the classification Σ^1_n or Π^1_n for some n, it will be given the classifications Σ^1_m and Π^1_m for all m greater than n. The analytical hierarchy of sets of natural numbers A set of natural numbers is assigned the classification Σ^1_n if it is definable by a Σ^1_n formula (with one free number variable and no free set variables). The set is assigned the classification Π^1_n if it is definable by a Π^1_n formula. If the set is both Σ^1_n and Π^1_n then it is given the additional classification Δ^1_n. The Δ^1_1 sets are called hyperarithmetical. An alternate classification of these sets by way of iterated computable functionals is provided by the hyperarithmetical theory. The analytical hierarchy on subsets of Cantor and Baire space The analytical hierarchy can be defined on any effective Polish space; the definition is particularly simple for Cantor and Baire space because they fit with the language of ordinary second-order arithmetic. Cantor space is the set of all infinite sequences of 0s and 1s; Baire space is the set of all infinite sequences of natural numbers. These are both Polish spaces. The ordinary axiomatization of second-order arithmetic uses a set-based language in which the set quantifiers can naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classification Σ^1_n if it is definable by a Σ^1_n formula (with one free set variable and no free number variables). The set is assigned the classification Π^1_n if it is definable by a Π^1_n formula. If the set is both Σ^1_n and Π^1_n then it is given the additional classification Δ^1_n. A subset of Baire space has a corresponding subset of Cantor space under the map that takes each function from ℕ to ℕ to the characteristic function of its graph. A subset of Baire space is given the classification Σ^1_n, Π^1_n, or Δ^1_n if and only if the corresponding subset of Cantor space has the same classification. 
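In display form, the embedding of Baire space into Cantor space just described sends each function to the characteristic function of its graph. Writing ⟨·,·⟩ for a standard pairing bijection from ℕ×ℕ to ℕ (the particular pairing chosen is an inessential assumption of this sketch):

```latex
f \in \mathbb{N}^{\mathbb{N}} \;\longmapsto\; x_f \in 2^{\mathbb{N}},
\qquad
x_f(\langle n, m \rangle) =
\begin{cases}
1 & \text{if } f(n) = m,\\
0 & \text{otherwise.}
\end{cases}
```

A subset B of Baire space then receives the classification Σ^1_n, Π^1_n, or Δ^1_n exactly when its image {x_f : f ∈ B} in Cantor space does.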
An equivalent definition of the analytical hierarchy on Baire space is given by defining the analytical hierarchy of formulas using a functional version of second-order arithmetic; then the analytical hierarchy on subsets of Cantor space can be defined from the hierarchy on Baire space. This alternate definition gives exactly the same classifications as the first definition. Because Cantor space is homeomorphic to any finite Cartesian power of itself, and Baire space is homeomorphic to any finite Cartesian power of itself, the analytical hierarchy applies equally well to finite Cartesian powers of one of these spaces. A similar extension is possible for countable powers and to products of powers of Cantor space and powers of Baire space. Extensions As is the case with the arithmetical hierarchy, a relativized version of the analytical hierarchy can be defined. The language is extended to add a constant set symbol A. A formula in the extended language is inductively defined to be Σ^{1,A}_n or Π^{1,A}_n using the same inductive definition as above. Given a set Y, a set X is defined to be Σ^{1,Y}_n if it is definable by a Σ^{1,A}_n formula in which the symbol A is interpreted as Y; similar definitions for Π^{1,Y}_n and Δ^{1,Y}_n apply. The sets that are Σ^{1,Y}_n or Π^{1,Y}_n, for any parameter Y, are classified in the projective hierarchy, and often denoted by boldface Greek letters to indicate the use of parameters. Examples For a relation R on ℕ, the statement "R is a well-order on ℕ" is Π^1_1. (Not to be confused with the general case for well-founded relations on sets, see Lévy hierarchy) The set of all natural numbers that are indices of computable ordinals is a Π^1_1 set that is not Σ^1_1. These sets are exactly the ω₁^CK-recursively-enumerable subsets of ℕ. [Bar75, p. 168] A function f from ℕ to ℕ is definable by Herbrand's 1931 formalism of systems of equations if and only if f is hyperarithmetical. The set of continuous functions that have the mean value property is no lower than on the hierarchy. The set of elements of Cantor space that are the characteristic functions of well orderings of ℕ is a Π^1_1 set that is not Σ^1_1. In fact, this set is not Σ^{1,Y}_1 for any element Y of Baire space. If the axiom of constructibility holds then there is a subset of the product of the Baire space with itself that is Δ^1_2 and is the graph of a well ordering of Baire space. If the axiom holds then there is also a Δ^1_2 well ordering of Cantor space. Properties For each n we have the following strict containments: Π^1_n ⊂ Σ^1_{n+1}, Π^1_n ⊂ Π^1_{n+1}, Σ^1_n ⊂ Π^1_{n+1}, Σ^1_n ⊂ Σ^1_{n+1}. A set that is in Σ^1_n for some n is said to be analytical. Care is required to distinguish this usage from the term analytic set, which has a different meaning, namely boldface Σ^1_1. Table See also Arithmetical hierarchy Lévy hierarchy References Computability theory Effective descriptive set theory Hierarchy Mathematical logic hierarchies
Analytical hierarchy
[ "Mathematics" ]
1,219
[ "Computability theory", "Mathematical logic", "Mathematical logic hierarchies" ]
634,240
https://en.wikipedia.org/wiki/Wheel%20theory
A wheel is a type of algebra (in the sense of universal algebra) where division is always defined. In particular, division by zero is meaningful. The real numbers can be extended to a wheel, as can any commutative ring. The term wheel is inspired by the topological picture of the real projective line together with an extra point ⊥ (bottom element) such that ⊥ = 0/0. A wheel can be regarded as the equivalent of a commutative ring (and semiring) where addition and multiplication are not a group but respectively a commutative monoid and a commutative monoid with involution. Definition A wheel is an algebraic structure (W, 0, 1, +, ·, /), in which W is a set, 0 and 1 are elements of that set, + and · are binary operations, / is a unary operation, and satisfying the following properties: + and · are each commutative and associative, and have 0 and 1 as their respective identities. / is an involution, for example //x = x. / is multiplicative, for example /(xy) = (/x)(/y). Algebra of wheels Wheels replace the usual division as a binary operation with multiplication, with a unary operation / applied to one argument, similar (but not identical) to the multiplicative inverse x⁻¹, such that a/b becomes shorthand for a · /b = /b · a, and modify the rules of algebra such that 0x ≠ 0 in the general case, and x/x ≠ 1 in the general case, as /x is not the same as the multiplicative inverse of x. Other identities that may be derived are 0x + 0y = 0xy, x − x = 0x², and x/x = 1 + 0x/x, where the negation −x is defined by −x = (−1)x and x − y = x + (−y) if there is an element −1 such that 1 + (−1) = 0 (thus in the general case x − x ≠ 0). However, for values of x satisfying 0x = 0 and 0/x = 0, we get the usual x − x = 0 and x/x = 1. If negation can be defined as above then the subset {x : 0x = 0} is a commutative ring, and every commutative ring is such a subset of a wheel. If x is an invertible element of the commutative ring then x⁻¹ = /x. Thus, whenever x⁻¹ makes sense, it is equal to /x, but the latter is always defined, even when x = 0. Examples Wheel of fractions Let A be a commutative ring, and let S be a multiplicative submonoid of A. Define the congruence relation ~S on A × S via (x₁, s₁) ~S (x₂, s₂) means that there exist s, t ∈ S such that (sx₁, ss₁) = (tx₂, ts₂). Define the wheel of fractions of A with respect to S as the quotient (A × S)/~S (and denoting the equivalence class containing (x, s) as [x, s]) with the operations 0 = [0, 1] (additive identity) 1 = [1, 1] (multiplicative identity) /[x, s] = [s, x] (reciprocal operation) [x₁, s₁] + [x₂, s₂] = [x₁s₂ + x₂s₁, s₁s₂] (addition operation) [x₁, s₁] · [x₂, s₂] = [x₁x₂, s₁s₂] (multiplication operation) In general, this structure is not a ring unless it is trivial: for instance, 0 · [1, 0] = [0, 0] = 0/0, which differs from 0 = [0, 1] in any nontrivial wheel of fractions, so 0x = 0 is not true in general. Projective line and Riemann sphere The special case of the above starting with a field produces a projective line extended to a wheel by adjoining a bottom element noted ⊥, where ⊥ = 0/0. The projective line is itself an extension of the original field by an element ∞, where ∞ = x/0 for any nonzero element x in the field. However, 0/0 is still undefined on the projective line, but is defined in its extension to a wheel. Starting with the real numbers, the corresponding projective "line" is geometrically a circle, and then the extra point gives the shape that is the source of the term "wheel". Or starting with the complex numbers instead, the corresponding projective "line" is a sphere (the Riemann sphere), and then the extra point gives a 3-dimensional version of a wheel. See also NaN Citations References (a draft) (also available online here). Fields of abstract algebra
Wheel theory
[ "Mathematics" ]
706
[ "Fields of abstract algebra" ]
634,264
https://en.wikipedia.org/wiki/Catalogue%20of%20Galaxies%20and%20of%20Clusters%20of%20Galaxies
The Catalogue of Galaxies and of Clusters of Galaxies (or CGCG) was compiled by Fritz Zwicky in 1961–68. It contains 29,418 galaxies and 9,134 galaxy clusters. External links Caltech library's free online PDFs of all six volumes of the Catalogue References Astronomical catalogues Astronomical catalogues of galaxies Astronomical catalogues of galaxy clusters
Catalogue of Galaxies and of Clusters of Galaxies
[ "Astronomy" ]
77
[ "Works about astronomy", "Astronomy stubs", "Astronomical catalogues", "Astronomical catalogue stubs", "Astronomical objects" ]
634,266
https://en.wikipedia.org/wiki/FR-4
FR-4 (or FR4) is a NEMA grade designation for glass-reinforced epoxy laminate material. FR-4 is a composite material composed of woven fiberglass cloth with an epoxy resin binder that is flame resistant (self-extinguishing). "FR" stands for "flame retardant", and does not denote that the material complies with the standard UL94V-0 unless testing is performed to UL 94, Vertical Flame testing in Section 8 at a compliant lab. The designation FR-4 was created by NEMA in 1968. FR-4 glass epoxy is a popular and versatile high-pressure thermoset plastic laminate grade with good strength-to-weight ratios. With near-zero water absorption, FR-4 is most commonly used as an electrical insulator possessing considerable mechanical strength. The material is known to retain its high mechanical values and electrical insulating qualities in both dry and humid conditions. These attributes, along with good fabrication characteristics, lend utility to this grade for a wide variety of electrical and mechanical applications. Grade designations for glass epoxy laminates are: G-10, G-11, FR-4, FR-5 and FR-6. Of these, FR-4 is the grade most widely in use today. G-10, the predecessor to FR-4, lacks FR-4's self-extinguishing flammability characteristics. Hence, FR-4 has since replaced G-10 in most applications. FR-4 epoxy resin systems typically employ bromine, a halogen, to achieve flame-resistant properties in the laminate. Some applications, where thermal destruction of the material is a desirable trait, still use non-flame-resistant G-10. Properties Which materials fall into the "FR-4" category is defined in the NEMA LI 1-1998 standard. Typical physical and electrical properties of FR-4 are as follows. The abbreviations LW (lengthwise, warp yarn direction) and CW (crosswise, fill yarn direction) refer to the conventional perpendicular fiber orientations in the XY plane of the board (in-plane). In terms of Cartesian coordinates, lengthwise is along the x-axis, crosswise is along the y-axis, and the z-axis is referred to as the through-plane direction. The values shown below are an example of a certain manufacturer's material. Another manufacturer's material will usually have slightly different values. Checking the actual values for any particular material against the manufacturer's datasheet can be very important, for example in high-frequency applications. where: LW = lengthwise, CW = crosswise, PF = perpendicular to laminate face. Applications FR-4 is a common material for printed circuit boards (PCBs). A thin layer of copper foil is typically laminated to one or both sides of an FR-4 glass epoxy panel. These are commonly referred to as copper-clad laminates. The copper thickness or copper weight can vary and so is specified separately. FR-4 is also used in the construction of relays, switches, standoffs, busbars, washers, arc shields, transformers and screw terminal strips. See also FR-2 Polyimide G-10 (material) References Further reading Printed circuit board manufacturing Fibre-reinforced polymers
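Because the exact permittivity differs between laminates, PCB designers recompute trace geometry from the datasheet values. As a hedged illustration of why this matters, the sketch below estimates microstrip impedance using the widely cited IPC-2141 approximation; the relative permittivity of 4.4 and the stack-up dimensions are assumed example values, not data from this article:

```python
import math

def microstrip_z0(h_mm: float, w_mm: float, t_mm: float, er: float = 4.4) -> float:
    """Approximate characteristic impedance (ohms) of a surface microstrip,
    per the common IPC-2141 formula, roughly valid for 0.1 < w/h < 2.0.
    er = 4.4 is a typical low-frequency FR-4 value; it varies by laminate."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Example: 1.6 mm FR-4 core, 35 um (1 oz) copper; scan widths near 50 ohms.
for w in (2.4, 2.8, 3.2):
    print(f"w = {w} mm -> Z0 ~ {microstrip_z0(1.6, w, 0.035):.1f} ohms")
```

With these assumed numbers the 50-ohm target lands near a 2.8–3.0 mm trace width, which is why impedance-controlled designs on FR-4 start from the manufacturer's measured permittivity rather than a generic figure.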
FR-4
[ "Engineering" ]
692
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
634,296
https://en.wikipedia.org/wiki/John%20Macadam
The Honorable Dr John Macadam (29 May 1827 – 2 September 1865) was a Scottish-Australian chemist, medical teacher, Australian politician and cabinet minister, and honorary secretary of the Burke and Wills expedition. The genus Macadamia (macadamia nut) was named after him in 1857. He died at sea, on a voyage from Australia to New Zealand, aged 38. Early life John Macadam was born at Northbank, Glasgow, Scotland, on 29 May 1827, the son of William Macadam (1783-1853) and Helen, née Stevenson (1803-1857). His father was a Glasgow businessman who owned a spinning and textile printing works in Kilmarnock, and was a burgess and a bailie (magistrate) of Glasgow. He and his fellow industrialists in the craft had developed, using chemistry, the processes for large-scale industrial printing of fabrics for which the plants in the area became known. John Macadam was privately educated in Glasgow; he studied chemistry at the Andersonian University (now the University of Strathclyde) and went on to advanced study at the University of Edinburgh under Professor William Gregory. In 1846–47, he served as assistant to Professor George Wilson at the University of Edinburgh in his laboratory in Brown Square. He was elected a fellow of the Royal Scottish Society of Arts in 1847 and, in 1848, a member of the Glasgow Philosophical Society. He then studied medicine at the University of Glasgow (LFPS; MD, 1854; FFPSG, 1855). He was a member of what became a small dynasty of Scottish scientists and lecturers in analytical chemistry, which included his eldest half-brother William Macadam, his immediate younger brother Stevenson Macadam, and nephews William Ivison Macadam and Stevenson J. C. G. Macadam, as well as the former nephew's daughter, his great-niece Elison A. Macadam. (A younger brother, Charles Thomas Macadam, although not involved as a scientist, was also indirectly involved in chemistry, becoming a senior partner in a chemical fertiliser company.) On 8 June 1855, aged 28, Macadam sailed for Melbourne in the Colony of Victoria, Australia, on the sailing ship Admiral. He arrived on 8 September 1855. Australian academic career In 1855 he became a lecturer on chemistry and natural science at Scotch College, having been engaged for the position before leaving Scotland. In 1857 he was awarded an MD ad eundem from the University of Melbourne in acknowledgment of his MD from the University of Glasgow. In 1857–58 he also taught at Geelong Church of England Grammar School (now Geelong Grammar School). In 1858, he was appointed the Victorian government analytical chemist. In 1860 he became health officer to the City of Melbourne. He wrote several reports on public health. On 3 March 1862 he was appointed as the first lecturer in medicine (chemistry and practical chemistry) at the University of Melbourne School of Medicine. For the next few years he held classes for a small number of medical students in the Analytical Laboratory behind the Public Library. He was also a member of the Board of Agriculture. Political life Macadam became a member of the Victorian Legislative Assembly of the self-governing Colony of Victoria as a radical and supporter of the Land Convention, representing Castlemaine. Appointed postmaster-general of Victoria in 1861, Macadam resigned from the legislature in 1864. He had sponsored bills on medical practitioners and on the adulteration of food, which became law in 1862 and 1863.
Royal Society of Victoria Between 1857 and 1862, Macadam served as honorary secretary of the Philosophical Institute of Victoria, which became the Royal Society of Victoria in 1860, and he was appointed its vice-president in 1863. He was editor of the first five volumes of the society's Transactions. He was active in erecting the Society's Meeting Hall (their present building) and was involved in the institute's initiative to obtain a royal charter. He saw both happen while he held office, when in January 1860 the Philosophical Institute became the Royal Society of Victoria and met in their new building. Burke and Wills expedition Between 1857 and 1865, Macadam served as honorary secretary to the Exploration Committee of the Royal Society of Victoria, which organised the Burke and Wills expedition. The expedition was organised by the society with the aim of crossing the continent of Australia from the south to the north coast, mapping it, and collecting scientific data and specimens. At that time, most of the interior of Australia had not been explored by the European settlers and was unknown to them. In 1860–61, Robert O'Hara Burke and William John Wills led the expedition of 19 men with that intention, crossing Australia from Melbourne in the south to the Gulf of Carpentaria in the north, a distance of around 2,000 miles. Three men ultimately travelled over 3,000 miles from Melbourne to the shores of the Gulf of Carpentaria and back to the Depot Camp at Cooper Creek. Seven men died in the attempt, including the leaders Burke and Wills. Of the four men who reached the north coast, only one, John King, survived, with the help of the indigenous people, to return to Melbourne. This expedition became the first to cross the Australian continent. It was of great importance to the subsequent development of Australia and could be compared in importance to what the Lewis and Clark Expedition overland to the North American Pacific Coast meant to the development of the United States. After the heavy death toll of the expedition, initial criticism fell on the Royal Society, but it became clear that no foresight on its part could have prevented the deaths; this was widely recognised when it became known that, as Secretary of the Exploration Committee of the Burke and Wills expedition, Dr. Macadam had insisted on adequate provisions for the men's safety. Macadamia The macadamia (genus Macadamia) nut was discovered by the European settlers, and the tree was subsequently named after Macadam by his friend and colleague Ferdinand von Mueller (1825-1896), Director of the Royal Botanic Gardens, Melbourne. The tree in turn gave its name to macadamia nuts. The genus Macadamia was first described scientifically in 1857 by Dr. Mueller, who named the new genus in honour of his friend Dr John Macadam. Mueller had done a great deal of taxonomy of the flora, naming innumerable genera, and chose to dedicate this one: "...a beautiful genus dedicated to John Macadam, M.D. the talented and deserving secretary of our institute." Australian rules football On 7 August 1858, Macadam, along with Tom Wills, officiated at a game of football played between Scotch College and Melbourne Grammar. This game was a predecessor to the modern game of Australian rules football and is commemorated by a statue outside the Melbourne Cricket Ground. The two schools have competed annually ever since, lately for the Cordner–Eggleston Cup.
Learned societies 1847 fellow of the Royal Scottish Society of Arts 1848 member of the Glasgow Philosophical Society (now Royal Philosophical Society of Glasgow) 1855 elected Fellow of the Faculty of Physicians and Surgeons of Glasgow 1855 elected member (1857–59 Hon. Sec.) of the Philosophical Institute of Victoria, later to become the Royal Society of Victoria 1860 vice-president of the Royal Society of Victoria Family On 18 September 1856, a year after he arrived from Scotland, he married Elizabeth Clark in Melbourne, Australia. She had arrived three days before the wedding with her maid on the Admiral, the same ship on which he had travelled out a year earlier; it reached Hobson's Bay (Melbourne's port) on 15 September 1856, having set sail from London on 7 June 1856. Elizabeth Clark was probably born on 7 October 1832 in Barony parish, near Glasgow, Scotland (her mother being Mary McGregor). She was the second daughter of John Clark, of Levenfield House in Alexandria, the Vale of Leven, a short distance north of Glasgow in West Dunbartonshire. His Levenfield Works were involved in work similar to that of Dr John Macadam's father, William Macadam, in Kilmarnock, in the then lucrative business of textile printing for domestic and European markets. The Clarks and Macadams must have become known to each other in Scotland because of their respective fathers' business connections. Elizabeth died in 1915, in Brighton, Victoria. John and Elizabeth had two sons: John Melnotte Macadam was born 29 August 1858 at Fitzroy, Melbourne, Australia, and died on 30 January 1859, aged 5 months (he was reburied with his father, whose monument bears the additional inscription "In memory of his only children / John Melnotte Macadam / Born August 29, 1858 / Died January 30, 1859", followed by an inscription to his second son below it). William Castlemaine Macadam was born on 2 July 1860 and died 17 December 1865 at Williamstown, Victoria, Australia. He died aged five, having survived his father by a few months. The inscription on his father's burial monument lists William under "His only children", below his elder brother, who died in infancy, but for some reason does not give William's date of death. Death In March 1865 Macadam sailed to New Zealand to give evidence at the trial of Captain W. A. Jarvey, accused of fatally poisoning his wife, but the jury did not reach a verdict. During the return voyage, Macadam fractured his ribs during a storm. He was advised, on medical grounds, not to return for the adjourned trial but did so, and died on the ship on 2 September 1865. His medical-student assistant John Drummond Kirkland gave evidence at the trial in Macadam's place, and Jarvey was convicted. The Australian News commented, "At the time of his death, Dr Macadam was but 38 years of age; there can be little doubt that to the various and onerous duties he discharged for the public must be attributed in great measure the shortening of his days." The Australian Medical Journal stated, "For some time it had been evident to his friends that his general health was giving way: that a frame naturally robust and vigorous was gradually becoming undermined by the incessant and harassing duties of the multifarious offices he filled." The inquest verdict (he died at sea) stated, "His death was caused by excessive debility and general exhaustion." Funeral The funeral was large.
The newspapers carried tributes, and lengthier obituaries from learned societies were subsequently published, such as the one in the Australian Medical Journal. The Melbourne Leader described the funeral: "The coffin was drawn by four horses. Four mourning coaches contained the chief mourners and the more intimate friends of the deceased gentleman. A large procession followed, in which were several members of Parliament, the members of the Royal Society, the Chief Justice; the Mayor and corporation of the city of Melbourne. A number of private carriages and the public wound up the procession....At the University, the chancellor, the vice-chancellor, and a number of the students, all in their academic robes, met the funeral cortege, and proceeded the remainder of the distance". The chief mourner was his youngest brother, George Robert Macadam (1837-1918). John Macadam's grave, surmounted by a marble obelisk, is in Melbourne General Cemetery. Widow remarried After the deaths of John Macadam and her children, his widow, Elizabeth, remarried. On 26 February 1868 she married the Reverend John Dalziel Dickie, who was pastor at Colac for 32 years. They had four daughters. Elizabeth Dickie died aged 82 in 1915, in Brighton, Victoria, as the widow of the Rev. Dickie, who had died on 25 December 1909. References External links Macadam, John (1827-1865) – entry in the Trove database of the National Library of Australia Macadam, John (1827–1865) – entry in the Australian Dictionary of Biography Burke & Wills Web – comprehensive website containing many of the historical documents relating to the Burke & Wills expedition The Burke & Wills Historical Society 1827 births 1865 deaths 19th-century Scottish chemists Academic staff of the University of Melbourne Scottish emigrants to Australia Alumni of the University of Edinburgh Alumni of the University of Strathclyde Alumni of the University of Glasgow Scientists from Glasgow Analytical chemists Members of the Victorian Legislative Assembly Burials at Melbourne General Cemetery Postmasters-general of Victoria
John Macadam
[ "Chemistry" ]
2,551
[ "Analytical chemists" ]
634,316
https://en.wikipedia.org/wiki/Prednisolone
Prednisolone is a corticosteroid, a steroid hormone used to treat certain types of allergies, inflammatory conditions, autoimmune disorders, cancers, electrolyte imbalances, and skin conditions. Some of these conditions include adrenocortical insufficiency, high blood calcium, rheumatoid arthritis, dermatitis, eye inflammation, asthma, multiple sclerosis, and phimosis. It can be taken by mouth, injected into a vein, used topically as a skin cream, or as eye drops. It differs from the similarly named prednisone in having a hydroxyl at the 11th carbon instead of a ketone. Common side effects with short-term use include nausea, difficulty concentrating, insomnia, increased appetite, and fatigue. More severe side effects include psychiatric problems, which may occur in about 5% of people. Common side effects with long-term use include bone loss, weakness, yeast infections, and easy bruising. While short-term use in the later part of pregnancy is safe, long-term use or use in early pregnancy is occasionally associated with harm to the baby. It is a glucocorticoid made from hydrocortisone (cortisol). Prednisolone was discovered and approved for medical use in 1955. It is on the World Health Organization's List of Essential Medicines. It is available as a generic drug. In 2022, it was the 136th most commonly prescribed medication in the United States, with more than 4 million prescriptions. Medical uses When used in low doses, corticosteroids serve as anti-inflammatory agents. At higher doses, they act as immunosuppressants. Corticosteroids inhibit the inflammatory response to a variety of inciting agents and, it is presumed, delay or slow healing. They inhibit edema, fibrin deposition, capillary dilation, leukocyte migration, capillary proliferation, fibroblast proliferation, deposition of collagen, and scar formation associated with inflammation. Systemic use Prednisolone is a corticosteroid drug with predominant glucocorticoid and low mineralocorticoid activity, making it useful for the treatment of a wide range of inflammatory and autoimmune conditions such as asthma, uveitis, pyoderma gangrenosum, rheumatoid arthritis, urticaria, angioedema, ulcerative colitis, pericarditis, temporal arteritis, Crohn's disease, Bell's palsy, multiple sclerosis, cluster headaches, vasculitis, acute lymphoblastic leukemia, autoimmune hepatitis, lupus, Kawasaki disease, dermatomyositis, post-myocardial infarction syndrome, and sarcoidosis. Prednisolone can also be used for allergic reactions ranging from seasonal allergies to drug allergic reactions. Prednisolone can also be used as an immunosuppressant for organ transplants. Prednisolone in lower doses can be used in cases of adrenal insufficiency due to Addison's disease. Topical use Ophthalmology Topical prednisolone is mainly used by the ophthalmic route as eye drops in numerous eye conditions, including corneal injuries caused by chemicals, burns, and alien objects, inflammation of the eyes, mild to moderate non-infectious allergies, disorders of the eyelid, conjunctiva or sclera, ocular inflammation caused by operation, and optic neuritis. Some side effects include glaucoma, blurred vision, eye discomfort, impaired recovery of the injured site, scarring of the optic nerve, cataracts, and urticaria. However, their prevalence is not known.
Prednisolone eye drops are contraindicated in individuals who develop hypersensitivity reactions against prednisolone, or individuals with certain pre-existing conditions, such as tuberculosis of the eye, shingles affecting the eye, raised intraocular pressure, and eye infection caused by fungus. Prednisolone acetate ophthalmic suspension (eye drops) is prepared as a sterile ophthalmic suspension and used to reduce swelling, redness, itching, and allergic reactions affecting the eye. It has been explored as a treatment option for bacterial keratitis. Prednisolone eye drops are used in conjunctivitis caused by allergies and bacteria, marginal keratitis, uveitis, endophthalmitis, which is an infection of the eye involving the aqueous humor, Graves' ophthalmopathy, herpes zoster ocular infection, inflammation of the eye after surgery, and corneal injuries caused by chemicals, radiation, thermal burns, or penetration of foreign objects. It is also used in the prevention of myringosclerosis and in herpes simplex stromal keratitis. Topical prednisolone can also be used after procedures such as laser peripheral iridotomy in primary angle-closure suspects (PACS) to control inflammation. Ear drops In addition, topical prednisolone can also be administered as ear drops. Adverse effects Adverse reactions from the use of prednisolone include: Increased appetite, weight gain, nausea, and malaise Increased risk of infection Cardiovascular events Dermatological effects including reddening of the face, bruising/skin discoloration, impaired wound healing, skin atrophy, skin rash, edema, and abnormal hair growth Hyperglycemia; patients with diabetes may need increased insulin or diabetic therapies Menstrual abnormalities Lower response to hormones, especially during stressful instances such as surgery or illness Change in electrolytes: rise in blood pressure, increased sodium and low potassium, leading to alkalosis Gastrointestinal system effects: swelling of the stomach lining, reversible increase in liver enzymes, and risk of stomach ulcers Muscular and skeletal abnormalities, such as muscle weakness/muscle loss, osteoporosis (see steroid-induced osteoporosis), long bone fractures, tendon rupture, and back fractures Neurological effects, including involuntary movements (convulsions), headaches, and vertigo Psychosocial, behavioral, and emotional disturbances, with aggression being one of the most common cognitive symptoms, especially with oral use. Nasal septum perforation and bowel perforation (in some pathologic conditions). Discontinuing prednisolone after long-term or high-dose use can lead to adrenal insufficiency. Pregnancy and breastfeeding Although there are no major human studies of prednisolone use in pregnant women, studies in several animals show that it may cause birth defects including increased likelihood of cleft palate. Prednisolone is found in the breast milk of mothers taking prednisolone. Local adverse effects in the eye When used topically on the eye, the following are potential side effects: Cataracts: Extended usage of corticosteroids may cause clouding at the back of the lens, also known as posterior subcapsular cataract. This type of cataract obstructs light from reaching the back of the eye, which interferes with a person's reading vision. Consumption of prednisolone eye drops post-surgery may also retard the healing process. Corneal thinning: When corticosteroids are used in the long term, corneal and scleral thinning is another consequence.
If use is not ceased, thinning may ultimately lead to perforation of the cornea. Glaucoma: Prolonged use of corticosteroids can cause raised intraocular pressure (IOP), injury to the optic nerve, and impaired visual acuity. Corticosteroids should be used cautiously in patients with concomitant glaucoma. Doctors track patients' IOP if they are using corticosteroid eye drops for more than 10 days. Pharmacology Pharmacodynamics As a glucocorticoid, the lipophilic structure of prednisolone allows for easy passage through the cell membrane, where it then binds to its respective glucocorticoid receptor (GCR) located in the cytoplasm. Upon binding, the formation of the GC/GCR complex causes dissociation of chaperone proteins from the glucocorticoid receptor, enabling the GC/GCR complex to translocate into the nucleus. This process occurs within 20 minutes of binding. Once inside the nucleus, the homodimer GC/GCR complex binds to specific DNA binding sites known as glucocorticoid response elements (GREs), resulting in gene expression or inhibition. Complex binding to positive GREs leads to the synthesis of anti-inflammatory proteins, while binding to negative GREs blocks the transcription of inflammatory genes. Glucocorticoids inhibit the release of signals that promote inflammation, such as nuclear factor-kappa B (NF-κB), activator protein 1 (AP-1), and nuclear factor of activated T-cells (NFAT), and stimulate anti-inflammatory signals such as the interleukin-10 gene. Collectively these cause a sequence of events, including the inhibition of prostaglandin synthesis and of additional inflammatory mediators. Glucocorticoids also inhibit neutrophil cell death and demargination, as well as phospholipase A2, which in turn lessens the generation of arachidonic acid derivatives. Pharmacokinetics Prednisolone has a relatively short half-life, ranging from 2 to 4 hours. It also has a large therapeutic window, considering that the dosage required to produce a therapeutic effect is a few times higher than what the body naturally produces. Prednisolone is 70–90% plasma protein bound, binding to proteins such as albumin. Both prednisolone phosphate and prednisolone acetate go through ester hydrolysis in the body to form prednisolone, which subsequently undergoes the usual metabolism of prednisolone. Concomitant use of prednisolone and strong CYP3A4 inhibitors such as ketoconazole is shown to cause a rise in plasma prednisolone concentrations by about 50%, owing to a diminished clearance. Prednisolone predominantly undergoes kidney elimination and is excreted in the urine as sulphate and glucuronide conjugate metabolites. Prednisone Prednisone is a prodrug that is activated in the liver. When it enters the body, prednisone is converted by the liver into its active form, prednisolone. Chemistry Prednisolone is a synthetic pregnane corticosteroid closely related to its cognate prednisone, having an identical structure save for the hydroxyl group at C11 where prednisone carries a ketone. It is also known as δ1-cortisol, δ1-hydrocortisone, 1,2-dehydrocortisol, or 1,2-dehydrohydrocortisone, as well as 11β,17α,21-trihydroxypregna-1,4-diene-3,20-dione. Interactions Co-administration of prednisolone eye drops with ophthalmic nonsteroidal anti-inflammatory drugs (NSAIDs) may exacerbate its effects, causing unwanted side effects such as toxicity. The wound healing process may also be hindered.
Drug interactions of prednisolone include other immunosuppressants like azathioprine or ciclosporin, antiplatelet drugs like clopidogrel, anticoagulants like dabigatran or warfarin, or NSAIDs such as aspirin, celecoxib, or ibuprofen. Contraindications Special populations Children Prolonged use of prednisolone eye drops in children may lead to raised intraocular pressure. While this phenomenon is dose-dependent, it is shown to have a greater effect, especially in children under 6 years of age. Pregnancy and breastfeeding Research on animal reproduction has indicated that there is a trace of teratogenicity at doses 10 times the recommended human dose. There is insufficient information on use during human pregnancy at present. Use is only recommended when the potential benefits outweigh the potential risks for the pregnant mother and the fetus. Prednisolone, when delivered systemically, can be found in the mother's breast milk; however, no data are available on the extent of prednisolone found in the system after administering eye drops. The presence of corticosteroids is recorded when they are administered systemically, and this could affect the fetus' growth. Therefore, the use of prednisolone during breastfeeding is not advocated. Society and culture Dosage forms Prednisolone is supplied as oral liquid, oral suspension, oral syrup, oral tablet, and oral disintegrating tablet. It may be a generic medication or supplied as brands Flo-Pred (prednisolone acetate oral suspension), Millipred (oral tablets), Orapred (prednisolone sodium phosphate oral dissolving tablets), Pediapred (prednisolone sodium phosphate oral solution), Veripred 20, Prelone, Hydeltra-T.B.A., Hydeltrasol, Key-Pred, Cotolone, Predicort, Medicort, Predcor, Bubbli-Pred, Omnipred (prednisolone acetate ophthalmic suspension), Pred Mild, Pred Forte, and others. Athletics As a glucocorticosteroid, unauthorized or ad hoc use of prednisolone during competition via oral, intravenous, intramuscular, or rectal routes is banned under World Anti-Doping Agency (WADA) anti-doping rules. Veterinary uses Prednisolone is used in the treatment of inflammatory and allergic conditions in cats, dogs, horses, small mammals such as ferrets, birds, and reptiles. Its usage in treating inflammation, immune-mediated disease, Addison's disease, and neoplasia is often considered off-label use. Many drugs are commonly prescribed for off-label use in veterinary medicine. Studies in ruminating species, such as alpacas, have shown that oral administration of the drug is associated with a reduced bioavailability compared to intravenous administration; however, levels that are therapeutic in other species can be achieved with oral administration in alpacas. It is used in a broad spectrum of diseases, for example, inflammation of scleral tissues, cornea, and conjunctiva in dogs. In horses, prednisolone acetate suspensions are commonly used to treat inflammation in the middle layer of the eye, known as anterior uveitis and equine recurrent uveitis (ERU), which is the leading cause of visual impairment in horses. Prednisolone acetate eye drops are not to be used in other animals such as birds. Prednisolone acetate eye drops are also prescribed to dogs and cats to lessen swelling, redness, burning, and pain sensations after surgeries of the eye. Ophthalmic corticosteroid preparations and their derivatives are usually avoided in cats with conjunctivitis, as the most typical infections are caused by herpesvirus.
References External links Drugs developed by AbbVie CYP3A4 inducers Glucocorticoids Human drug metabolites Mineralocorticoids Otologicals World Health Organization essential medicines
Prednisolone
[ "Chemistry" ]
3,366
[ "Chemicals in medicine", "Human drug metabolites" ]
634,463
https://en.wikipedia.org/wiki/Claude%20%C3%89mile%20Jean-Baptiste%20Litre
Claude Émile Jean-Baptiste Litre (1716-1778) is a fictional character created in 1978 by Kenneth Woolner of the University of Waterloo to justify the use of a capital L to denote litres. The International System of Units usually only permits the use of a capital letter when a unit is named after a person. The lower-case character l might be difficult to distinguish from the upper-case character I or the digit 1 in certain fonts and styles, and therefore both the lower-case (l) and the upper-case (L) are allowed as the symbol for litre. The United States National Institute of Standards and Technology now recommends the use of the uppercase letter L, a practice that is also widely followed in Canada and Australia. Woolner perpetrated the April Fools' Day hoax in the April 1978 issue of "CHEM 13 News", a newsletter concerned with chemistry for school teachers. According to the hoax, Claude Litre was born on 12 February 1716, the son of a manufacturer of wine bottles. During Litre's extremely distinguished fictional scientific career, he purportedly proposed a unit of volume measurement that was incorporated into the International System of Units after his death in 1778. The hoax was mistakenly printed as fact in the IUPAC journal Chemistry International and subsequently retracted. In reality, the litre derives its name from the litron, an old French unit of dry volume. See also Etiological myth False etymology References External links Reprints of articles about the Litre hoax. Hoaxes in science Fictional scientists 1978 in science Hoaxes in Canada 1978 in Canada 1978 hoaxes Non-SI metric units Fictional characters introduced in 1978 Fictitious entries April Fools' Day jokes Nonexistent people used in hoaxes
Claude Émile Jean-Baptiste Litre
[ "Mathematics" ]
346
[ "Non-SI metric units", "Quantity", "Units of measurement" ]
634,543
https://en.wikipedia.org/wiki/Bendixson%E2%80%93Dulac%20theorem
In mathematics, the Bendixson–Dulac theorem on dynamical systems states that if there exists a $C^1$ function $\varphi(x, y)$ (called the Dulac function) such that the expression $$\frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y}$$ has the same sign ($\neq 0$) almost everywhere in a simply connected region of the plane, then the plane autonomous system $$\frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = g(x, y)$$ has no nonconstant periodic solutions lying entirely within the region. "Almost everywhere" means everywhere except possibly in a set of measure 0, such as a point or line. The theorem was first established by Swedish mathematician Ivar Bendixson in 1901 and further refined by French mathematician Henri Dulac in 1923 using Green's theorem. Proof Without loss of generality, let there exist a function $\varphi$ such that $$\frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y} > 0$$ in the simply connected region $R$. Let $C$ be a closed trajectory of the plane autonomous system in $R$, and let $D$ be the interior of $C$. Then by Green's theorem, $$\iint_D \left( \frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y} \right) dx \, dy = \oint_C \left( \varphi f \, dy - \varphi g \, dx \right).$$ Because of the constant sign, the left-hand integral in the previous line must evaluate to a positive number. But on $C$, $dx = f \, dt$ and $dy = g \, dt$, so the right-hand integrand is in fact $\varphi f g - \varphi g f = 0$ everywhere, and for this reason the right-hand integral evaluates to 0. This is a contradiction, so there can be no such closed trajectory $C$. See also Liouville's theorem (Hamiltonian), a similar result for divergence-free flows References Differential equations Theorems in dynamical systems
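The sign condition is easy to check symbolically. The sketch below does so with SymPy for one illustrative system and the simplest choice of Dulac function; both the system and the choice φ = 1 are assumptions of this example, not taken from the sources above:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Illustrative planar system: x' = y, y' = -x - y + x**2
f, g = y, -x - y + x**2
phi = sp.Integer(1)                  # try the simplest Dulac function

expr = sp.simplify(sp.diff(phi * f, x) + sp.diff(phi * g, y))
print(expr)  # -1: a single sign on all of R^2 (simply connected), so the
             # theorem rules out nonconstant periodic orbits entirely
```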
Bendixson–Dulac theorem
[ "Mathematics" ]
263
[ "Theorems in dynamical systems", "Mathematical objects", "Differential equations", "Equations", "Mathematical problems", "Mathematical theorems", "Dynamical systems" ]
2,285,258
https://en.wikipedia.org/wiki/Conway%27s%20law
Conway's law describes the link between the communication structure of organizations and the systems they design. It is named after the computer programmer Melvin Conway, who introduced the idea in 1967. His original wording was: "[O]rganizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations." The law is based on the reasoning that in order for a product to function, the authors and designers of its component parts must communicate with each other in order to ensure compatibility between the components. Therefore, the technical structure of a system will reflect the social boundaries of the organizations that produced it, across which communication is more difficult. In colloquial terms, it means complex products end up "shaped like" the organizational structure they are designed in or designed for. The law is applied primarily in the field of software architecture, though Conway directed it more broadly and its assumptions and conclusions apply to most technical fields. Variations Eric S. Raymond, an open-source advocate, restated Conway's law in The New Hacker's Dictionary, a reference work based on the Jargon File. The organization of the software and the organization of the software team will be congruent, he said. Summarizing an example in Conway's paper, Raymond wrote: "If you have four groups working on a compiler, you'll get a 4-pass compiler." Raymond further presents Tom Cheatham's amendment of Conway's Law, stated as: "If a group of N persons implements a COBOL compiler, there will be N−1 passes. Someone in the group has to be the manager." Yourdon and Constantine, in their 1979 book on Structured Design, gave a more strongly stated variation of Conway's Law: "The structure of any system designed by an organization is isomorphic to the structure of the organization." James O. Coplien and Neil B. Harrison stated in a 2004 book concerned with organizational patterns of Agile software development that if the parts of an organization do not closely reflect the essential parts of the product, or if the relationships between organizations do not reflect the relationships between product parts, then the project will be in trouble; the organization should therefore be compatible with the product architecture. More recent commentators have noted a corollary: for software projects with a long lifetime of code reuse, such as Microsoft Windows, the structure of the code mirrors not only the communication structure of the organization which created the most recent release, but also the communication structures of every previous team which worked on that code. An old joke in the car industry makes the same point. Interpretations The law is, in a strict sense, only about correspondence; it does not state that communication structure is the cause of system structure, merely describes the connection. Different commentators have taken various positions on the direction of causality; that technical design causes the organization to restructure to fit, that the organizational structure dictates the technical design, or both. Conway's law was intended originally as a sociological observation, but many other interpretations are possible. The New Hacker's Dictionary entry uses it in a primarily humorous context, while participants at the 1968 National Symposium on Modular Programming considered it sufficiently serious and universal to dub it 'Conway's Law'. Opinions also vary on the desirability of the phenomenon; some say that the mirroring pattern is a helpful feature of such systems, while other interpretations say it's an undesirable result of organizational bias. Middle positions describe it as a necessary feature of compromise, undesirable in the abstract but necessary to handle human limitations. Supporting evidence An example of the impact of Conway's Law can be found in the design of some organization websites. Nigel Bevan stated in a 1997 paper, regarding usability issues in websites: "Organizations often produce web sites with a content and structure which mirrors the internal concerns of the organization rather than the needs of the users of the site."
Evidence in support of Conway's law has been published by a team of Massachusetts Institute of Technology (MIT) and Harvard Business School researchers who, using "the mirroring hypothesis" as an equivalent term for Conway's law, found "strong evidence to support the mirroring hypothesis", and that the "product developed by the loosely-coupled organization is significantly more modular than the product from the tightly-coupled organization". The authors highlight the impact of "organizational design decisions on the technical structure of the artifacts that these organizations subsequently develop". Additional and likewise supportive case studies of Conway's law have been conducted by Nagappan, Murphy and Basili at the University of Maryland in collaboration with Microsoft, and by Syeed and Hammouda at Tampere University of Technology in Finland. See also Inner-platform effect Cognitive dimensions of notations Deutsch limit Organizational theory Isomorphism (sociology) Good regulator References Further reading Alan MacCormack, John Rusnak & Carliss Baldwin, 2012, "Exploring the Duality between Product and Organizational Architectures: A Test of the 'Mirroring' Hypothesis," Research Policy 41:1309–1324 [earlier Harvard Business School Working Paper 08-039], see , accessed 9 March 2015. Lise Hvatum & Allan Kelly, Eds., "What do I think about Conway's Law now? Conclusions of a EuroPLoP 2005 Focus Group," European Conference on Pattern Languages of Programs, Kloster Irsee, Germany, January 16, 2006, see , addressed 9 March 2015. Lyra Colfer & Carliss Baldwin. "The Mirroring Hypothesis: Theory, Evidence and Exceptions." Harvard Business School Working Paper, No. 16-124, April 2016. (Revised May 2016.) See , accessed 2 August 2016. Adages Computer architecture statements Software project management Software design Computer-related introductions in 1968
Conway's law
[ "Engineering" ]
1,033
[ "Design", "Software design" ]
2,285,261
https://en.wikipedia.org/wiki/Nasopharyngeal%20airway
In medicine, a nasopharyngeal airway (NPA), nasal trumpet (because of its flared end), or nose hose, is a type of airway adjunct, a tube that is designed to be inserted through the nasal passage down into the posterior pharynx to secure an open airway. It was introduced in 1958. When a patient becomes unconscious, the muscles in the jaw commonly relax and can allow the tongue to slide back and obstruct the airway. This makes airway management necessary, and an NPA is one of the available tools. The purpose of the flared end is to prevent the device from becoming lost inside the patient's nose. Sizes As with other catheters, NPAs are measured using the French catheter scale, but sizes are usually also quoted in millimeters. Typical sizes include: 6.5 mm/28FR, 7.0 mm/30FR, 7.5 mm/32FR, 8.0 mm/34FR, and 8.5 mm/36FR. Indications and contraindications These devices are used by emergency care professionals such as EMTs and paramedics in situations where an artificial form of airway maintenance is necessary, but tracheal intubation is impossible, inadvisable, or outside the practitioner's scope of practice. An NPA is often used in patients who are conscious or have an altered level of consciousness, where an oropharyngeal airway would trigger the gag reflex. The use of an NPA is contraindicated when there is trauma to the face, especially to the nose, or if there is a suspected skull fracture. Insertion The correct size airway is chosen by measuring the device on the patient: the device should reach from the patient's nostril to the earlobe or the angle of the jaw. The outside of the tube is lubricated with a water-based lubricant so that it enters the nose more easily. The device is inserted until the flared end rests against the nostril. References Airway management Medical equipment Respiration
Nasopharyngeal airway
[ "Biology" ]
430
[ "Medical equipment", "Medical technology" ]
2,285,280
https://en.wikipedia.org/wiki/National%20High%20Magnetic%20Field%20Laboratory
The National High Magnetic Field Laboratory (MagLab) is a facility at Florida State University, the University of Florida, and Los Alamos National Laboratory in New Mexico, that performs magnetic field research in physics, biology, bioengineering, chemistry, geochemistry, and biochemistry. It is the only such facility in the US, and is among twelve high magnetic field facilities worldwide. The lab is supported by the National Science Foundation and the state of Florida, and works in collaboration with private industry. The lab holds several world records for the world's strongest magnets, including the highest magnetic field, at 45.5 Tesla. For nuclear magnetic resonance spectroscopy experiments, its series connected hybrid (SCH) magnet broke the record during a series of tests conducted by MagLab engineers and scientists on 15 November 2016, reaching its full field of 36 Tesla. History Proposal and award In 1989 Florida State University (FSU), Los Alamos National Laboratory, and the University of Florida submitted a proposal to the National Science Foundation (NSF) for a new national laboratory supporting interdisciplinary research in high magnetic fields. The plan proposed a federal-state partnership serving magnet-related research, science and technology education, and partnering industry. The goal was to maintain the competitive position of the US in magnet-related research and development. Following a peer-review competition, the NSF approved the FSU-led consortium's proposal. Competing proposal by MIT In a competing proposal to the NSF, the Massachusetts Institute of Technology (MIT), with the University of Iowa, the University of Wisconsin–Madison, Brookhaven National Laboratory, and Argonne National Laboratory, had suggested improving the existing world-class Francis Bitter Magnet Laboratory at MIT. On September 5, 1990, MIT researchers asked the 21 members of the National Science Board (NSB) to "review and reconsider" its decision. With $60 million at stake in the NSF grant, MIT stated it would phase out the Francis Bitter Lab if it lost its appeal — the first appeal of its kind in NSF history. The request was turned down September 18, 1990. Early years The laboratory's early years were spent establishing infrastructure, building the facility, and recruiting faculty. The Tallahassee complex was dedicated on October 1, 1994, to a large crowd, with keynote speaker Vice President Al Gore. Folk legend A scientifically unsupported folk legend and popular joke among Tallahassee residents is that the magnetic lab shields Tallahassee from hurricanes and from inclement weather in general. Mission The lab's mission, as set forth by the NSF, is: "To provide the highest magnetic fields and necessary services for scientific research conducted by users from a wide range of disciplines, including physics, chemistry, materials science, engineering, biology and geology." The lab focuses on four objectives: Develop user facilities and services for magnet-related research, open to all qualified scientists and engineers Advance magnet technology in cooperation with industry Promote a multidisciplinary research environment and administer in-house research program that uses and advances the facilities Develop an educational outreach program Education and public outreach The National MagLab promotes science education and supports science, engineering, and science teachers through its Center for Integrating Research and Learning. Programs include mentorships in an interdisciplinary learning environment.
Through the Magnet Academy, the lab's website provides educational content on electricity and magnetism. The National MagLab also conducts monthly tours open to the public, and hosts an annual open house with about 10,000 attendees. Special tour and outreach opportunities are also available to local schools. In an interview on Skepticality, Dr. Scott Hannahs said, "If you come by on the third Saturday in February I believe we have an open house and we have Tesla coils shooting sparks and we melt rocks in the geochemistry group and we measure the speed of sound and we have lasers and potato launchers and we just have all sorts of things showing little scientific principles and stuff. We get together and we have about 5,000 people show up to come and tour a physics lab which is a pretty amazing group of people." Programs Florida State University programs The Tallahassee laboratory at Florida State University is a complex housing approximately 300 faculty, staff, graduate students, and postdoctoral researchers. Its director is physicist Kathleen Amm. Its chief scientist is Laura Greene. DC field program The facility contains 14 resistive magnet cells connected to a 48 megawatt DC power supply and extensive cooling equipment to remove the heat generated by the magnets. The facility houses several magnets, including a 45 Tesla hybrid magnet, which combines resistive and superconducting magnets. The lab's 41.4 Tesla resistive magnet is the strongest DC (continuous-field) resistive magnet in the world, and the 25 Tesla Keck magnet has the highest homogeneity of any resistive magnet. NMR spectroscopy and imaging This program serves a broad user base in solution and solid-state NMR spectroscopy, MRI, and diffusion measurements at high magnetic field strengths. The lab develops technology, methodology, and applications at high magnetic fields through both in-house and external user activities. An in-house-made 900 MHz (21.1 Tesla) NMR magnet has an ultra-wide bore measuring 105 mm (about 4 inches) in diameter; this superconducting magnet has the highest field for MRI study of living animals. Ion cyclotron resonance The Fourier transform ion cyclotron resonance mass spectrometry program, under the leadership of director Alan G. Marshall, continuously develops techniques, instruments, and applications of FT-ICR mass spectrometry. The program has several instruments, including a 14.5 Tesla, 104 mm bore system. Electron magnetic resonance The most common form of EMR is electron paramagnetic/spin resonance (EPR/ESR). In EPR experiments, transitions are observed between the mS sublevels of an electronic spin state S that are split by the applied magnetic field as well as by the fine structure interactions and the electron-nuclear hyperfine interactions. This technique has applications in chemistry, biochemistry, biology, physics and materials research. Magnet science and technology The Magnet Science and Technology division is charged with developing the technology and expertise for magnet systems. These magnet projects include building advanced magnet systems for the Tallahassee and Los Alamos sites, working with industry to develop the technology to improve high-field magnet manufacturing capabilities, and improving high field magnet systems through research and development.
Also at the lab's FSU headquarters, the Applied Superconductivity Center advances the science and technology of superconductivity for both the low temperature niobium-based and the high temperature cuprate or MgB2-based materials. The ASC pursues the superconductors for magnets for fusion, high energy physics, MRI, and electric power transmission lines and transformers. In-house research The in-house research program utilizes MagLab facilities to pursue high field research in science and engineering, while advancing the lab's user programs through development of new techniques and equipment. Condensed matter group The condensed matter group scientists concentrate on various aspects of condensed matter physics, including studies and experiments involving magnetism, the quantum hall effect, quantum oscillations, high temperature superconductivity, and heavy fermion systems. Geochemistry program The geochemistry research program is centered around the use of trace elements and isotopes to understand the Earth processes and environment. The research interests range from the chemical evolution of Earth and Solar System through time to local scale problems on the sources and transport of environmentally significant substances. The studies conducted by the geochemistry division concern terrestrial and extraterrestrial questions and involve land-based and seagoing expeditions and spacecraft missions. Together with FSU's Chemistry and Oceanography departments, Geochemistry has started a program in Biogeochemical Dynamics. Other programs Other programs include cryogenics, optical microscopy, quantum materials and resonant ultrasound spectroscopy. The lab also has a materials research team that researches new ways to make high strength magnetic materials using more common and cheaper elements. Los Alamos National Laboratory Pulsed Field Facility Los Alamos National Laboratory in New Mexico hosts the Pulsed Field Facility, which provides researchers with experimental capabilities for a wide range of measurements in non-destructive pulsed fields to 101 Tesla (75 T currently and 101 T under repair). Pulsed field magnets create high magnetic fields, but only for fractions of a second. The laboratory is located at the center of Los Alamos. In 1999–2000, the facility was relocated into a new specially designed Experimental Hall to better accommodate user operations and support. The program is the first and only high pulsed field user facility in the United States. The facility provides a wide variety of experimental capabilities to 100 Tesla, using short and long pulse magnets. Power comes from a pulsed power infrastructure which includes a 1.43 gigawatt motor generator and five 64-megawatt power supplies. The 1200-ton motor generator sits on a 4800-short ton (4350 t) inertia block which rests on 60 springs to minimize earth tremors and is the centerpiece of the Pulsed Field Laboratory. The facility's magnets include a 60 Tesla long-pulse magnet (under repair) that is the most powerful controlled-pulse magnet in the world. University of Florida The University of Florida is home to user facilities in magnetic resonance imaging or (MRI) with an ultra-low temperature, ultra-quiet environment for experimental studies in the High B/T (high magnetic field/low temperature) Facility. Facilities are also available for the fabrication and characterization of nanostructures at a new nanoscale research facility operated in conjunction with the university's Major Analytical and Instrumentation Center. 
High B/T Facility The High B/T Facility is part of the Microkelvin Laboratory of the Physics Department and conducts experiments in high magnetic fields up to 15.2 Tesla and at temperatures as low as 0.4 mK simultaneously for studies of magnetization, thermodynamic quantities, transport measurements, magnetic resonance, viscosity, diffusion, and pressure. The facility holds world records for high B/T in Bay 1 for short-term, low-field capabilities and world records for high-field, long-duration (>1 week) experiments. The research group leads the world in collective studies of quantum fluids and solids in terms of breadth and low temperature techniques (thermometry, NMR, ultrasound, heat capacity, sample cooling). Advanced Magnetic Resonance Imaging and Spectroscopy The Advanced Magnetic Resonance Imaging and Spectroscopy program contains facilities for the MagLab's NMR and MRI Program that complement the facilities at the lab's headquarters in Tallahassee. The program is located at the University of Florida's McKnight Brain Institute. Their instruments include a 600 MHz NMR magnet with a 1.5 mm triple-resonance, high-temperature superconducting probe, which delivers the highest 13C-optimized mass sensitivity of any probe in the world. References External links National High Magnetic Field Laboratory, Florida State University Pulsed Field Facility, Los Alamos National Laboratory High B/T Facility, University of Florida Nuclear Magnetic Resonance and Magnetic Resonance Imaging / Spectroscopy — the NMR-MRI/S Facility at MagLab headquarters near Florida State University in Tallahassee and the Advanced Magnetic Resonance Imaging and Spectroscopy Facility (AMRIS) at the University of Florida in Gainesville. Florida State University Nuclear research institutes Particle physics facilities Research institutes in Florida National Science Foundation United States Department of Energy national laboratories 1994 establishments in Florida Research institutes in New Mexico
National High Magnetic Field Laboratory
[ "Engineering" ]
2,383
[ "Nuclear research institutes", "Nuclear organizations" ]
2,285,301
https://en.wikipedia.org/wiki/Mixed%20oxide
In chemistry, a mixed oxide is a somewhat informal name for an oxide that contains cations of more than one chemical element or cations of a single element in several states of oxidation. The term is usually applied to solid ionic compounds that contain the oxide anion O2− and two or more element cations. Typical examples are ilmenite (FeTiO3), a mixed oxide of iron (Fe2+) and titanium (Ti4+) cations, perovskite, and garnet. The cations may be the same element in different ionization states: a notable example is magnetite Fe3O4, which is also known as ferrosoferric oxide FeO·Fe2O3, and contains the cations Fe2+ ("ferrous" iron) and Fe3+ ("ferric" iron) in 1:2 ratio. Other notable examples include red lead Pb3O4, the ferrites, and the yttrium aluminum garnet Y3Al5O12, used in lasers. The term is sometimes also applied to compounds of oxygen and two or more other elements, where some or all of the oxygen atoms are covalently bound into oxyanions. In sodium zincate Na2ZnO2, for example, the oxygens are bound to the zinc atoms forming zincate anions. (On the other hand, strontium titanate SrTiO3, despite its name, contains Sr2+ and Ti4+ cations and not the titanate anion.) Sometimes the term is applied loosely to solid solutions of metal oxides rather than chemical compounds, or to fine mixtures of two or more oxides. Mixed oxide minerals are plentiful in nature. Synthetic mixed oxides are components of many ceramics with remarkable properties and important advanced technological applications, such as strong magnets, fine optics, lasers, semiconductors, piezoelectrics, superconductors, catalysts, refractories, gas mantles, nuclear fuels, and more. Piezoelectric mixed oxides, in particular, are extensively used in pressure and strain gauges, microphones, ultrasound transducers, micromanipulators, delay lines, etc. See also Complex oxide Double salt MOX fuel References
Mixed oxide
[ "Chemistry" ]
414
[ "Oxides", "Salts" ]
2,285,560
https://en.wikipedia.org/wiki/.NET%20Reflector
.NET Reflector is a class browser, decompiler and static analyzer for software created with .NET Framework, originally written by Lutz Roeder. MSDN Magazine named it as one of the Ten Must-Have utilities for developers, and Scott Hanselman listed it as part of his "Big Ten Life and Work-Changing Utilities". Overview It can be used to inspect, navigate, search, analyze, and browse the contents of a CLI component such as an assembly, and it translates the binary information to a human-readable form. By default Reflector allows decompilation of CLI assemblies into C#, Visual Basic .NET, C++/CLI, Common Intermediate Language, and F# (alpha version). Reflector also includes a "Call Tree" that can be used to drill down into intermediate language methods to see what other methods they call. It will show the metadata, resources, and XML documentation. .NET Reflector can be used by .NET developers to understand the inner workings of code libraries, to show the differences between two versions of the same assembly, and how the various parts of a CLI application interact with each other. There are a large number of add-ins for Reflector. .NET Reflector can be used to track down performance problems and bugs, browse classes, and maintain code bases or help become familiar with them. Using the Analyzer option, it can also be used to find assembly dependencies, and even Windows DLL dependencies. There are a call tree and an inheritance browser. It will pick up the same documentation or comments that are stored in XML files alongside their associated assemblies — the same files that are used to drive IntelliSense inside Visual Studio. It is even possible to cross-navigate related documentation (xmldoc), searching for specific types, members and references. It can be used to effectively convert source code between C# and Visual Basic. .NET Reflector has been designed to host add-ins to extend its functionality, many of which are open source. Some of these add-ins provide other languages that can be disassembled too, such as PowerShell, Delphi and MC++. Others analyze assemblies in different ways, providing quality metrics, sequence diagrams, class diagrams, dependency structure matrices or dependency graphs. It is possible to use add-ins to search text, save disassembled code to disk, export an assembly to XMI/UML, compare different versions, or search code. Other add-ins allow debugging processes. Some add-ins are designed to facilitate testing by creating stubs and wrappers. History .NET Reflector was originally developed by Lutz Roeder as freeware. Its first versions can be tracked back to January 2001. Archive.org hosts a collection of the early versions of Reflector. On 20 August 2008, Red Gate Software announced they were taking responsibility for future development of the software. In February 2010 Red Gate released .NET Reflector 6 along with a commercial Pro edition that enabled users to step into decompiled code in the Visual Studio debugger as if it were their own source code. On 10 January 2011 Red Gate announced that .NET Reflector 7 would incorporate Jason Haley's PowerCommands add-in. On 1 February 2011 Red Gate announced that .NET Reflector would become a commercial product as of version 7, which was released on 14 March 2011. This led to the creation of several free alternatives, including dotPeek, CodeReflect and the open source program ILSpy.
Subsequently, on 26 April 2011, due to community feedback Red Gate announced that they would continue to make .NET Reflector 6 available for free to existing users (while new users will have to pay for Reflector). References Reflector Decompilers Static program analysis tools 2001 software
.NET Reflector
[ "Engineering" ]
781
[ "Reverse engineering", "Decompilers" ]
2,285,574
https://en.wikipedia.org/wiki/SLAM%20project
The SLAM project, which was started in 1999 by Thomas Ball and Sriram Rajamani of Microsoft Research, aimed at verifying software safety properties using model checking techniques. It was implemented in OCaml, and has been used to find many bugs in Windows Device Drivers. It is distributed as part of the Microsoft Windows Driver Foundation development kit as the Static Driver Verifier (SDV). "SLAM originally was an acronym but we found it too cumbersome to explain. We now prefer to think of 'slamming' the bugs in a program." It initially stood for "software (specifications), programming languages, abstraction, and model checking". Note that Microsoft has since re-used SLAM to stand for "Social Location Annotation Mobile". See also Abstraction model checking the BLAST model checker, a model checker similar to SLAM that uses "lazy abstraction" References External links Formal methods OCaml software Microsoft Research
SLAM project
[ "Technology", "Engineering" ]
191
[ "Computer engineering", "Computer engineering stubs", "Software engineering", "Computing stubs", "Formal methods" ]
2,285,662
https://en.wikipedia.org/wiki/Soft%20key
A soft key is a button flexibly programmable to invoke any of a number of functions rather than being associated with a single fixed function or a fixed set of functions. A softkey often takes the form of a screen-labeled function key located alongside a display device, where the button invokes a function described by the text at that moment shown adjacent to the button on the display. Soft keys are also found away from the display device, for example on the sides of cellular phones, where they are typically programmed to invoke functions such as PTT, memo, or volume control. Function keys on keyboards are a form of soft key. In contrast, a hard key is a key with dedicated function such as the keys on a number keypad. Screen-labeled function keys are today most commonly found in kiosk applications, such as automated teller machines and gas pumps. Screen-label function keys date to aviation applications in the late 1960s. Kiosk applications were particularly common in the 1990s and 2000s. Screen-labeled function keys are found in automotive and aviation applications such as in the primary flight and multi-function displays. An alternative to screen-labeled function keys is buttons (virtual keys) on a touchscreen, where the label is directly pushable. The increased prevalence of touchscreens in the 2000s has led to a decrease in screen-labeled function keys. However, screen-labeled function keys are inexpensive and robust, and provide tactile feedback. History Early examples are found in aviation glass cockpits, such as the Mark II avionics of the F-111D in the late 1960s/early 1970s (first ordered 1967, delivered 1970–73). Hewlett-Packard developed them for use in computers/calculators in the 1970s. The HP 9830 desktop computer was the first calculator with two rows of 4 keys, over which a paper overlay would be placed. These were later adapted to terminals. Programmers found that the HP 2640 terminals could lock the top two lines of the screen, so they displayed the key functions there. Starting with HP 2647 terminal, the keys were re-arranged to correspond with 2 pairs of 4 labels at the bottom of the screen. These could be programmed by escape sequence or configuration screen. This would be further developed on the failed HP 300 Amigo, which used keys at the right side of the screen and HP 250 business computers which placed them at the bottom. By arranging functions in hierarchical trees, many functions can be implemented with only 8 keys. Graphical calculators , HP calculators use this arrangement to implement hierarchical trees of functions. They are rarely found on PC applications, even though the first IBM PC BASIC labeled function key use at the bottom of the screen, and there were 12 function keys, patterned after use on IBM terminals. Modern Texas Instruments calculators such as the TI-89 series use function keys to open drop-down menus on their menu bar, the menu title acting like the key label. Casio calculators use the function keys for a menu at the bottom of the screen. Mobile phone A typical mobile phone with soft keys has them located beneath the bottom left and bottom right of the display; some, especially those made by Nokia, have an additional center soft key, activated by pressing on the center of the directional pad. Depending on the modality of the application, various functions can be mapped onto it. It can also bring up multiple functions listed on a pop-up expanded menu. 
Usually the prompt text on the display for the softkey is not allowed to be truncated or omitted with ellipsis. The softkey itself is usually not printed with a functional icon or text, but is often marked with a dot or short bar. Soft keys have become increasingly rare as touchscreens take the place of function keys on many modern smartphones. Point of sale Screen-labeled function keys have found use in point of sale systems; NCR Corporation claims that their DynaKey system "has been proven to reduce training time and cashier errors". References Kiljander, Harri (2004) “Evolution and usability of mobile phone interaction styles” Helsinki University of Technology, dissertation Lindholm, Christian. Keinonen, Turkka, Kiljander, Harri (2003) “Mobile usability : how Nokia changed the face of the mobile phone” New York : McGraw-Hill External links Input/output Graphical user interface elements
Soft key
[ "Technology" ]
907
[ "Components", "Graphical user interface elements" ]
2,285,973
https://en.wikipedia.org/wiki/Real-Time%20Multiprogramming%20Operating%20System
Real-Time Multiprogramming Operating System (RTMOS) was a 24-bit process control operating system developed in the 1960s by General Electric that supported both real-time computing and multiprogramming. Programming was done in assembly language or Process FORTRAN. The two languages could be used in the same program, allowing programmers to alternate between the two as desired. Multiprogramming operating systems are now considered obsolete, having been replaced by multitasking. References General Electric Real-time operating systems
Real-Time Multiprogramming Operating System
[ "Technology" ]
103
[ "Operating system stubs", "Computing stubs", "Real-time computing", "Real-time operating systems" ]
2,286,045
https://en.wikipedia.org/wiki/Nash%E2%80%93Moser%20theorem
In the mathematical field of analysis, the Nash–Moser theorem, discovered by mathematician John Forbes Nash and named for him and Jürgen Moser, is a generalization of the inverse function theorem on Banach spaces to settings when the required solution mapping for the linearized problem is not bounded. Introduction In contrast to the Banach space case, in which the invertibility of the derivative at a point is sufficient for a map to be locally invertible, the Nash–Moser theorem requires the derivative to be invertible in a neighborhood. The theorem is widely used to prove local existence for non-linear partial differential equations in spaces of smooth functions. It is particularly useful when the inverse to the derivative "loses" derivatives, and therefore the Banach space implicit function theorem cannot be used. History The Nash–Moser theorem traces back to , who proved the theorem in the special case of the isometric embedding problem. It is clear from his paper that his method can be generalized. , for instance, showed that Nash's methods could be successfully applied to solve problems on periodic orbits in celestial mechanics in the KAM theory. However, it has proven quite difficult to find a suitable general formulation; there is, to date, no all-encompassing version; various versions due to Gromov, Hamilton, Hörmander, Saint-Raymond, Schwartz, and Sergeraert are given in the references below. That of Hamilton's, quoted below, is particularly widely cited. The problem of loss of derivatives This will be introduced in the original setting of the Nash–Moser theorem, that of the isometric embedding problem. Let be an open subset of Consider the map given by In Nash's solution of the isometric embedding problem (as would be expected in the solutions of nonlinear partial differential equations) a major step is a statement of the schematic form "If is such that is positive-definite, then for any matrix-valued function which is close to , there exists with ." Following standard practice, one would expect to apply the Banach space inverse function theorem. So, for instance, one might expect to restrict to and, for an immersion in this domain, to study the linearization given by If one could show that this were invertible, with bounded inverse, then the Banach space inverse function theorem directly applies. However, there is a deep reason that such a formulation cannot work. The issue is that there is a second-order differential operator of which coincides with a second-order differential operator applied to . To be precise: if is an immersion then where is the scalar curvature of the Riemannian metric , denotes the mean curvature of the immersion , and denotes its second fundamental form; the above equation is the Gauss equation from surface theory. So, if is , then is generally only . Then, according to the above equation, can generally be only ; if it were then |||| would have to be at least . The source of the problem can be quite succinctly phrased in the following way: the Gauss equation shows that there is a differential operator such that the order of the composition of with is less than the sum of the orders of and . In context, the upshot is that the inverse to the linearization of , even if it exists as a map , cannot be bounded between appropriate Banach spaces, and hence the Banach space implicit function theorem cannot be applied. 
By exactly the same reasoning, one cannot directly apply the Banach space implicit function theorem even if one uses the Hölder spaces, the Sobolev spaces, or any of the spaces. In any of these settings, an inverse to the linearization of will fail to be bounded. This is the problem of loss of derivatives. A very naive expectation is that, generally, if is an order differential operator, then if is in then must be in . However, this is somewhat rare. In the case of uniformly elliptic differential operators, the famous Schauder estimates show that this naive expectation is borne out, with the caveat that one must replace the spaces with the Hölder spaces ; this causes no extra difficulty whatsoever for the application of the Banach space implicit function theorem. However, the above analysis shows that this naive expectation is not borne out for the map which sends an immersion to its induced Riemannian metric; given that this map is of order 1, one does not gain the "expected" one derivative upon inverting the operator. The same failure is common in geometric problems, where the action of the diffeomorphism group is the root cause, and in problems of hyperbolic differential equations, where even in the very simplest problems one does not have the naively expected smoothness of a solution. All of these difficulties provide common contexts for applications of the Nash–Moser theorem. The schematic form of Nash's solution This section only aims to describe an idea, and as such it is intentionally imprecise. For concreteness, suppose that is an order-one differential operator on some function spaces, so that it defines a map for each . Suppose that, at some function , the linearization has a right inverse ; in the above language this reflects a "loss of one derivative". One can concretely see the failure of trying to use Newton's method to prove the Banach space implicit function theorem in this context: if is close to in and one defines the iteration then implies that is in , and then is in . By the same reasoning, is in , and is in , and so on. In finitely many steps the iteration must end, since it will lose all regularity and the next step will not even be defined. Nash's solution is quite striking in its simplicity. Suppose that for each one has a smoothing operator which takes a function, returns a smooth function, and approximates the identity when is large. Then the "smoothed" Newton iteration transparently does not encounter the same difficulty as the previous "unsmoothed" version, since it is an iteration in the space of smooth functions which never loses regularity. So one has a well-defined sequence of functions; the major surprise of Nash's approach is that this sequence actually converges to a function with . For many mathematicians, this is rather surprising, since the "fix" of throwing in a smoothing operator seems too superficial to overcome the deep problem in the standard Newton method. For instance, on this point Mikhael Gromov says Remark. The true "smoothed Newton iteration" is a little more complicated than the above form, although there are a few inequivalent forms, depending on where one chooses to insert the smoothing operators. The primary difference is that one requires invertibility of for an entire open neighborhood of choices of , and then one uses the "true" Newton iteration, corresponding to (using single-variable notation) as opposed to the latter of which reflects the forms given above. 
This is rather important, since the improved quadratic convergence of the "true" Newton iteration is significantly used to combat the error of "smoothing", in order to obtain convergence. Certain approaches, in particular Nash's and Hamilton's, follow the solution of an ordinary differential equation in function space rather than an iteration in function space; the relation of the latter to the former is essentially that of the solution of Euler's method to that of a differential equation. Hamilton's formulation of the theorem The following statement appears in : Similarly, if each linearization is only injective, and a family of left inverses is smooth tame, then P is locally injective. And if each linearization is only surjective, and a family of right inverses is smooth tame, then P is locally surjective with a smooth tame right inverse. Tame Fréchet spaces A consists of the following data: a vector space a countable collection of seminorms such that for all One requires these to satisfy the following conditions: if is such that for all then if is a sequence such that, for each and every there exists such that implies then there exists such that, for each one has Such a graded Fréchet space is called a if it satisfies the following condition: there exists a Banach space and linear maps and such that is the identity map and such that: there exists and such that for each there is a number such that for every and for every Here denotes the vector space of exponentially decreasing sequences in that is, The laboriousness of the definition is justified by the primary examples of tamely graded Fréchet spaces: If is a compact smooth manifold (with or without boundary) then is a tamely graded Fréchet space, when given any of the following graded structures: take to be the -norm of take to be the -norm of for fixed take to be the -norm of for fixed If is a compact smooth manifold-with-boundary then the space of smooth functions whose derivatives all vanish on the boundary, is a tamely graded Fréchet space, with any of the above graded structures. If is a compact smooth manifold and is a smooth vector bundle, then the space of smooth sections is tame, with any of the above graded structures. To recognize the tame structure of these examples, one topologically embeds in a Euclidean space, is taken to be the space of functions on this Euclidean space, and the map is defined by dyadic restriction of the Fourier transform. The details are in pages 133-140 of . Presented directly as above, the meaning and naturality of the "tame" condition is rather obscure. The situation is clarified if one re-considers the basic examples given above, in which the relevant "exponentially decreasing" sequences in Banach spaces arise from restriction of a Fourier transform. Recall that smoothness of a function on Euclidean space is directly related to the rate of decay of its Fourier transform. "Tameness" is thus seen as a condition which allows an abstraction of the idea of a "smoothing operator" on a function space. Given a Banach space and the corresponding space of exponentially decreasing sequences in the precise analogue of a smoothing operator can be defined in the following way. Let be a smooth function which vanishes on is identically equal to one on and takes values only in the interval Then for each real number define by If one accepts the schematic idea of the proof devised by Nash, and in particular his use of smoothing operators, the "tame" condition then becomes rather reasonable. 
Smooth tame maps Let and be graded Fréchet spaces. Let be an open subset of , meaning that for each there are and such that implies that is also contained in . A smooth map is called a if for all the derivative satisfies the following: The fundamental example says that, on a compact smooth manifold, a nonlinear partial differential operator (possibly between sections of vector bundles over the manifold) is a smooth tame map; in this case, can be taken to be the order of the operator. Proof of the theorem Let denote the family of inverse mappings Consider the special case that and are spaces of exponentially decreasing sequences in Banach spaces, i.e. and . (It is not too difficult to see that this is sufficient to prove the general case.) For a positive number , consider the ordinary differential equation in given by Hamilton shows that if and is sufficiently small in , then the solution of this differential equation with initial condition exists as a mapping , and that converges as to a solution of . References . Differential equations Topological vector spaces Inverse functions Theorems in functional analysis
Nash–Moser theorem
[ "Mathematics" ]
2,360
[ "Theorems in mathematical analysis", "Vector spaces", "Mathematical objects", "Differential equations", "Topological vector spaces", "Space (mathematics)", "Equations", "Theorems in functional analysis" ]
2,286,239
https://en.wikipedia.org/wiki/Lithium%20nitride
Lithium nitride is an inorganic compound with the chemical formula . It is the only stable alkali metal nitride. It is a reddish-pink solid with a high melting point. Preparation and handling Lithium nitride is prepared by direct reaction of elemental lithium with nitrogen gas: Instead of burning lithium metal in an atmosphere of nitrogen, a solution of lithium in liquid sodium metal can be treated with . Lithium nitride must be protected from moisture as it reacts violently with water to produce ammonia: Structure and properties alpha- (stable at room temperature and pressure) has an unusual crystal structure that consists of two types of layers: one layer has the composition contains 6-coordinate N centers and the other layer consists only of lithium cations. Two other forms are known: beta-, formed from the alpha phase at 0.42 GPa has the sodium arsenide () structure; gamma- (same structure as lithium bismuthide ) forms from the beta form at 35 to 45 GPa. Lithium nitride shows ionic conductivity for , with a value of c. 2×10−4 Ω−1cm−1, and an (intracrystal) activation energy of c. 0.26 eV (c. 24 kJ/mol). Hydrogen doping increases conductivity, whilst doping with metal ions (Al, Cu, Mg) reduces it. The activation energy for lithium transfer across lithium nitride crystals (intercrystalline) has been determined to be higher, at c. 68.5 kJ/mol. The alpha form is a semiconductor with band gap of c. 2.1 eV. Reactions Reacting lithium nitride with carbon dioxide results in amorphous carbon nitride (), a semiconductor, and lithium cyanamide (), a precursor to fertilizers, in an exothermic reaction. Under hydrogen at around 200°C, Li3N will react to form lithium amide. At higher temperatures it will react further to form ammonia and lithium hydride. Lithium imide can also be formed under certain conditions. Some research has explored this as a possible industrial process to produce ammonia since lithium hydride can be thermally decomposed back to lithium metal. Lithium nitride has been investigated as a storage medium for hydrogen gas, as the reaction is reversible at 270 °C. Up to 11.5% by weight absorption of hydrogen has been achieved. References See also WebElements Nitrides Lithium compounds
Lithium nitride
[ "Chemistry" ]
509
[ "Inorganic compounds", "Inorganic compound stubs" ]
2,286,382
https://en.wikipedia.org/wiki/Interrupt%20storm
In operating systems, an interrupt storm is an event during which a processor receives an inordinate number of interrupts that consume the majority of the processor's time. Interrupt storms are typically caused by hardware devices that do not support interrupt rate limiting. Background Because interrupt processing is typically a non-preemptible task in time-sharing operating systems, an interrupt storm will cause sluggish response to user input, or even appear to freeze the system completely. This state is commonly known as live lock. In such a state, the system is spending most of its resources processing interrupts instead of completing other work. To the end-user, it does not appear to be processing anything at all as there is often no output. An interrupt storm is sometimes mistaken for thrashing, since they both have similar symptoms (unresponsive or sluggish response to user input, little or no output). Common causes include: misconfigured or faulty hardware, faulty device drivers, flaws in the operating system, or metastability in one or more components. The latter condition rarely occurs outside of prototype or amateur-built hardware. Most modern hardware and operating systems have methods for mitigating the effect of an interrupt storm. For example, most Ethernet controllers implement interrupt "rate limiting", which causes the controller to wait a programmable amount of time between each interrupt it generates. When not present within the device, similar functionality is usually written into the device driver, and/or the operating system itself. The most common cause is when a device "behind" another signals an interrupt to an APIC (Advanced Programmable Interrupt Controller). Most computer peripherals generate interrupts through an APIC as the number of interrupts is most always less (typically 15 for the modern PC) than the number of devices. The OS must then query each driver registered to that interrupt to ask if the interrupt originated from its hardware. Faulty drivers may always claim "yes", causing the OS to not query other drivers registered to that interrupt (only one interrupt can be processed at a time). The device which originally requested the interrupt therefore does not get its interrupt serviced, so a new interrupt is generated (or is not cleared) and the processor becomes swamped with continuous interrupt signals. Any operating system can live lock under an interrupt storm caused by such a fault. A kernel debugger can usually break the storm by unloading the faulty driver, allowing the driver "underneath" the faulty one to clear the interrupt, if user input is still possible. This occurred in an older version of FreeBSD, where PCI cards that were configured to operate in ISA compatibility mode could not properly interact with the ISA interrupt routing. This would either cause interrupts to never be detected by the operating system, or the operating system would never be able to clear them, resulting in an interrupt storm. As drivers are most often implemented by a 3rd party, most operating systems also have a polling mode that queries for pending interrupts at fixed intervals or in a round-robin fashion. This mode can be set globally, on a per-driver, per-interrupt basis, or dynamically if the OS detects a fault condition or excessive interrupt generation. A polling mode may be enabled dynamically when the number of interrupts or the resource use caused by an interrupt, passes certain thresholds. 
When these thresholds are no longer exceeded, an OS may then change the interrupting driver, interrupt, or interrupt handling globally, from an interrupt mode to a polling mode. Interrupt rate limiting in hardware usually negates the use of a polling mode, but can still happen during normal operation during intense I/O if the processor is unable switch contexts quickly enough to keep pace. History Perhaps the first interrupt storm occurred during the Apollo 11's lunar descent in 1969. Considerations Interrupt rate limiting must be carefully configured for optimum results. For example, an Ethernet controller with interrupt rate limiting will buffer the packets it receives from the network in between each interrupt. If the rate is set too low, the controller's buffer will overflow, and packets will be dropped. The rate must take into account how fast the buffer may fill between interrupts, and the interrupt latency between the interrupt and the transfer of the buffer to the system. Interrupt mitigating There are hardware-based and software-based approaches to the problem. For example, FreeBSD detects interrupt storms and masks problematic interrupts for some time in response. The system used by NAPI is an example of the hardware-based approach: the system (driver) starts in interrupt enabled state, and the Interrupt handler then disables the interrupt and lets a thread/task handle the event(s) and then task polls the device, processing some number of events and enabling the interrupt. Another interesting approach using hardware support is one where the device generates interrupt when the event queue state changes from "empty" to "not empty". Then, if there are no free DMA descriptors at the RX FIFO tail, the device drops the event. The event is then added to the tail and the FIFO entry is marked as occupied. If at that point entry (tail−1) is free (cleared), an interrupt will be generated (level interrupt) and the tail pointer will be incremented. If the hardware requires the interrupt be acknowledged, the CPU (interrupt handler) will do that, handle the valid DMA descriptors at the head, and return from the interrupt. See also Broadcast storm Inter-processor interrupt (IPI) Non-maskable interrupt (NMI) Programmable Interrupt Controller (PIC) References Interrupts Software anomalies
Interrupt storm
[ "Technology" ]
1,151
[ "Interrupts", "Events (computing)", "Technological failures", "Software anomalies", "Computer errors" ]
2,286,508
https://en.wikipedia.org/wiki/Salt%20bridge
In electrochemistry, a salt bridge or ion bridge is an essential laboratory device discovered over 100 years ago. It contains an electrolyte solution, typically an inert solution, used to connect the oxidation and reduction half-cells of a galvanic cell (voltaic cell), a type of electrochemical cell. In short, it functions as a link connecting the anode and cathode half-cells within an electrochemical cell. It also maintains electrical neutrality within the internal circuit and stabilizes the junction potential between the solutions in the half-cells. Additionally, it serves to minimize cross-contamination between the two half cells. A salt bridge typically consists of tubes filled with an electrolyte solution. These tubes often have diaphragms such as glass frits at their ends to help contain the solution within the tubes and prevent excessive mixing with the surrounding environment. When setting up a salt bridge between different solvents of half-cells, it is crucial to ensure that the electrolyte used in the bridge is soluble in both solutions and does not interact with any species present in either solutions. There are several types of salt bridges: glass tube bridges (traditional KCl-type salt bridge and ionic liquid salt bridge), filter paper bridges, porous frit salt bridges, fumed-silica, and agar gel salt bridges. The following sections will explore in greater detail the characteristics and applications of glass tube bridges, filter paper bridges, fumed silica salt bridges, and charcoal salt bridges. Glass tube bridges (KCl-type and ionic liquid salt bridge) Glass tube salt bridges commonly consist of U-shaped Vycor tubes filled with a relatively inert electrolyte. The electrolyte solution usually comprises a combination of cations, such as ammonium and potassium, and anions, including chloride and nitrate, which have similar mobility. The combination is chosen which does not react with any of the chemicals used in the cell. KCl-type salt bridges Traditionally, concentrated aqueous potassium chloride (KCl) solution has been used for decades to neutralize the liquid-junction potential. When comparing other salt solutions such as potassium bromide and potassium iodide to potassium chloride, potassium chloride is the most efficient in nullifying the junction potential. Yet, the effectiveness of this salt bridge decreases as the ionic strength of the sample solution increases. Ionic liquid salt bridges Due to the numerous drawbacks of KCl-type salt bridges, ionic liquid salt bridges (ILSB) have been utilized to address the potentiometry issues arising from KCl-type salt bridges in electrochemical cells. ILSBs demonstrate efficient performance in aqueous solutions of hydrophilic electrolytes. This is because ionic liquids do not mix with water (they are immiscible), rendering them suitable as salt bridges for aqueous solutions. Additionally, they are chemically inert and highly stable in water. To set up a glass tube salt bridge, a U-shaped Vycor tube is fashioned to contain a suitable electrolyte solution. Normally, glass frits, a porous material, cover the ends of the tube or the electrolyte is often gelified with agar-agar to help prevent the intermixing of fluids that might otherwise occur. The conductivity of a glass tube bridge primarily depends on the concentration of the electrolyte solution. At concentrations below saturation, an increase in concentration enhances conductivity. 
However, beyond-saturation electrolyte content and a narrow tube diameter may both reduce conductivity. Filter paper bridges Porous paper such as filter paper may be used as a salt bridge if soaked in an appropriate electrolyte such as the electrolytes used in glass tube bridges. No gelification agent is required as the filter paper provides a solid medium for conduction. The conductivity of this kind of salt bridge depends on a number of factors: the concentration of the electrolyte solution, the texture of the paper, and the absorbing ability of the paper. Generally, smoother texture and higher absorbency equate to higher conductivity. To set up this type of salt bridge, laboratory filter paper can be used and rolled to form a shape that connects the two half-cells, typically rolled into a cylindrical shape. The rolled filter paper is then soaked in an appropriate inert salt solution. A straw can be used to shape the rolled filter paper into a U-shaped tube, providing mechanical strength to the soaked filter paper. This filter paper can now be used to act as a salt bridge and connect the two half-cells. While filter paper salt bridges are inexpensive and easily accessible, one disadvantage of not using a straw to provide mechanical strength is that a new rolled and soaked filter paper must be used for each experiment. Additionally, filter paper has limited lon Charcoal salt bridges A recent development is the charcoal salt bridge. It is considered an excellent option for a porous junction for the reference electrode in an alkaline solution. A porous junction serves as a salt bridge between the two half-cells of reference and electrolyte solutions. Other materials used for porous junctions, such as glass, Teflon, and agar gel, have their own benefits but also some significant drawbacks such as high cost and high risk of contamination. Therefore, the advantages of using charcoal as frits include its low cost and easy accessibility, as charcoal can be sourced from porous carbon materials. Despite being fragile, charcoal facilitates efficient ion transfer due to its highly porous structure. See also Liquid junction potential Ion transport number References Electrochemical concepts Laboratory equipment
Salt bridge
[ "Chemistry" ]
1,140
[ "Electrochemistry", "Electrochemical concepts" ]
2,286,665
https://en.wikipedia.org/wiki/Enterprise%20asset%20management
Enterprise asset management (EAM) involves the management of the maintenance of physical assets of an organization throughout each asset's lifecycle. EAM is used to plan, optimize, execute, and track the needed maintenance activities with the associated priorities, skills, materials, tools, and information. This covers the design, construction, commissioning, operations, maintenance and decommissioning or replacement of plant, equipment and facilities. The goal of EAM is to maximize the value and efficiency of these assets while minimizing associated costs and risks. "Enterprise" refers to the scope of the assets in an Enterprise across departments, locations, facilities and, potentially, supporting business functions. Various assets are managed by the modern enterprises at present. The assets may be fixed assets like buildings, plants, machineries or moving assets like vehicles, ships, moving equipments etc. The lifecycle management of the high value physical assets require regressive planning and execution of the work. History EAM arose as an extension of the computerized maintenance management system (CMMS) which is usually defined as a system for the computerisation of the maintenance of physical assets. Enterprise asset management software Enterprise asset management software is a computer software that handles every aspect of running a public works or asset-intensive organization. Enterprise asset management (EAM) software applications include features such as asset life-cycle management, preventive maintenance scheduling, warranty management, integrated mobile wireless handheld options and portal-based software interface. Rapid development and availability of mobile devices also affected EAM software which now often supports mobile enterprise asset management. EAM Solution Applications in Power Generation EAM solution applications, are used in power generation, including nuclear power plants. EAM solutions are used in the industries for managing asset portfolios and operational efficiency. They are recognized for their role in enhancing asset utilization and reducing costs, with a focus on compliance with regulatory guidelines, and meeting consumer/clients’ needs. Features and applications for solutions include, but are not limited to: Standardization of Work Processes: EAM solution applications are designed to streamline work processes in power generation operations. This includes improving worker productivity and asset return on investment by aiming to increase asset availability, reduce planned outage time, and enhance reliability. Asset Performance Management (APM): This component of EAM solution applications offer software and services for optimizing asset performance and operational & maintenance efficiency. Features such as these include proprietary analytics and work process automation (e.g., Work Orders, Procurement Processes, Material Requestes, etc.). Application in Power Generation: EAM solutions are often tailored for use in power generation and other industries with complex, mission-critical environments. The focus is on addressing challenges where operational failure can lead to significant consequences. Asset Management in Nuclear Power: Asset management is a key component in nuclear power plants, particularly in competitive electricity markets. Asset Suite EAM aims to support decision-making processes by balancing financial performance, operational performance, and risk. 
These applications are essential for identifying and tracking changes to plant-specific controlled equipment and documentation. Industry Usage: The software is noted for its application in the utility, transmission, and fossil or nuclear power industry. It is reportedly used by a significant portion of global nuclear fleets. Modules for Nuclear Plants: EAM solution applications offer modules such as Procurement Engineering, Inventory Management, Total Exposure, Material Request and Receipt, Engineering Changes, and Work Orders which are geared towards the needs of nuclear plants. Standardizing: Currently, there is a large movement to isolate and distribute a singular EAM solution for managing industrial assets across the nuclear power generation industry for commercial electricity production (i.e., Asset Suite/Passport) . This deployment is aimed at standardizing practices across multiple nuclear power plants. Although, not every plant utilizes the same software. As plants and corporations continue to expand and modernize, industries are moving from Asset Suite/Passport to Maximo EAM, which is another EAM solution application currently tailored for utility, transmission, and the nuclear industry. See also Building lifecycle management References Sources Physical Asset Management(Springer publication) Nicholas Anthony John,2010. Pascual, R. "El Arte de Mantener", Pontificia Universidad Católica de Chile, Santiago, Chile, 2015. Asset management Business software Wireless locating
Enterprise asset management
[ "Technology" ]
871
[ "Wireless locating" ]
2,286,680
https://en.wikipedia.org/wiki/International%20Ice%20Patrol
The International Ice Patrol is an organization with the purpose of monitoring the presence of icebergs in the Atlantic and Arctic oceans and reporting their movements for safety purposes. It is operated by United States Coast Guard but is funded by the 13 nations interested in trans-Atlantic navigation. As of 2011 the governments contributing to the International Ice Patrol include Belgium, Canada (see Canadian Ice Service), Denmark, Finland, France, Germany, Greece, Italy, Japan, the Netherlands, Norway, Panama, Poland, Spain, Sweden, the United Kingdom, and the United States. The organization was established in 1914 in response to the sinking of RMS Titanic. The primary mission of the Ice Patrol is to monitor the iceberg danger in the North Atlantic Ocean and provide relevant iceberg warning products to the maritime community. History Founding From the earliest journeys into the North Atlantic, icebergs have threatened vessels. A review of the history of navigation prior to the turn of the 20th century shows an impressive number of casualties occurred in the vicinity of the Grand Banks of Newfoundland. For example, sank in 1833 with a loss of 215 people. Between 1882 and 1890, 14 vessels were lost and 40 seriously damaged due to ice. This does not include the large number of whaling and fishing vessels lost or damaged by ice. It took one of the greatest marine disasters of all time to arouse public demand for international cooperative action to deal with this marine hazard. This disaster, the sinking of on 15 April 1912, was the prime impetus for the establishment of the International Ice Patrol. On her maiden voyage from Southampton, England bound for New York, Titanic collided with an iceberg just south of the tail of the Grand Banks and sank in less than three hours. The loss of life was enormous with more than 1,500 of the 2,224 passengers and crew perishing. Titanic, the brand new flagship of the White Star Line, was the largest passenger liner of her time displacing 45,000 tons and capable of sustained speed in excess of . The loss of Titanic gripped the world with a sobering awareness of an iceberg's potential for tragedy. The sheer dimensions of the Titanic disaster created sufficient public reaction on both sides of the Atlantic to prod reluctant governments into action, producing the first Safety of Life at Sea (SOLAS) convention in 1914. After the Titanic disaster, the U.S. Navy assigned the cruisers and to patrol the Grand Banks of Newfoundland for the remainder of 1912. In 1913, the United States Navy could not spare ships for this purpose, so the Revenue Cutter Service (forerunner of the United States Coast Guard) assumed responsibility, assigning USRC Seneca and USRC Miami to conduct the patrol. At the first International Conference on the Safety of Life at Sea, which was convened in London on 12 November 1913, the subject of patrolling the ice regions was thoroughly discussed. The convention signed on 30 January 1914, by the representatives of the world's various maritime powers, provided for the inauguration of an international derelict-destruction, ice observation, and ice patrol service, consisting of vessels, which should patrol the ice regions during the season of iceberg danger and attempt to keep the trans-Atlantic lanes clear of derelicts during the remainder of the year. 
Due primarily to the experience gained in 1912 and 1913, the United States Government was invited to undertake the management of the triple service, the expense to be defrayed by the 13 nations interested in trans-Atlantic navigation. The second International Conference on Safety of Life at Sea was convened in London on 16 April 1929. Eighteen nations participated, all of which signed the final act on 31 May 1929. Because of the fear in the United States Senate as a result of ambiguities in Article 54 dealing with control, the 1929 convention was not ratified by the United States until 7 August 1936, and even then the ratification was accompanied by three reservations. At the same time, Congress enacted legislation on 25 June 1936, formally requiring the Commandant of the Coast Guard to administer the International Ice Observation and Ice Patrol Service (Chap. 807, para. 2 49 USC 1922) and describing in general fashion the manner in which this service was to be performed. With only minor changes, this remains today as the basic Coast Guard authority to operate the International Ice Patrol. Since 1929, there have been three SOLAS conventions (1948, 1960 & 1974). None of these have recommended any basic change affecting the Ice Patrol. Every year since 1914, the United States Coast Guard and the International Ice Patrol lay a wreath from a ship or an aircraft at the site of the Titanic disaster on 15 April. The solemn ceremony is attended by the craft's crew and a dedication statement to the Titanic and her fatalities is read. Administration From its inception until the beginning of World War II, the Ice Patrol was conducted from two surface patrol cutters alternating surveillance patrols of the southern ice limits. In 1931 and thereafter a third ship was assigned to Ice Patrol to perform oceanographic observations in the vicinity of the Grand Banks. After World War II, aerial surveillance became the primary ice reconnaissance method with surface patrols phased out except during unusually heavy ice years or extended periods of reduced visibility. Use of the oceanographic vessel continued until 1982, when the Coast Guard's sole remaining oceanographic ship, , was converted to a medium endurance cutter. The aircraft has distinct advantages for ice reconnaissance providing much greater coverage in a relatively short period of time. From 1946 until 1966, the Ice Patrol offices, operations center and reconnaissance aircraft were based at the Coast Guard Air Detachment Argentia, Newfoundland during the ice season. Due to changing operational commitments and financial constraints the Coast Guard Argentia Air Detachment closed in 1966. Ice Patrol headquarters and operations center moved to Governors Island, New York where they remained until October 1983. Today the International Ice Patrol is located at the NOAA Satellite Operations Facility in Suitland, Maryland. Previously, it was located at the Coast Guard Research and Development Center in New London, Connecticut. The ice reconnaissance detachment, usually composed of eleven aircrew and four ice observers flying in an HC-130 aircraft, continues to work out of Newfoundland. The Ice Patrol disseminates information on icebergs and the limit of all known sea ice via radio broadcast from the U.S. Coast Guard Communications Command (COMMCOM) located in Chesapeake, Virginia via Inmarsat Safetynet, and radio facsimile chart. Ice Patrol information is also available via internet access. 
2002 changes to SOLAS requires ships transiting the region guarded by the Ice Patrol to use the services provided during the ice season. Aviation history of the International Ice Patrol 6 February 1946 – A PBY-5A makes the first International Ice Patrol reconnaissance flight. 24 February 1946 – Two PB4Y-1s arrive at Naval Station Argentia in Argentia, Newfoundland to become the first dedicated Ice Patrol aircraft. 1 July 1946 – First helicopter deployments in International Ice Patrol. An HNS-1 helicopter, Sikorsky R-4, CGNR 39047, flew from off the Greenland coast. 1947 – A PB-1G becomes the Ice Patrol aircraft. 1948 – Camera-equipped PB-1G begins an iceberg census off Baffin Island completed in 1949. 1949 – Aircraft become the sole reconnaissance tools for the first time. 1956 – Unsuccessful tests to identify icebergs by marking with dye markers, commercial dye, and used motor oil. 1958 – Last ice patrol by a PB-1G. 1959 – R5Ds replace PB-1Gs. June 1959 – Unsuccessful iceberg demolition experiments with magnesium and thermite incendiary bombs. May 1960 – Unsuccessful iceberg demolition experiments dropping high-explosive bombs from UF-2G. 24 May 1962 – First Ice Patrol by HC-130B. 1963 – R5Ds replaced by Doppler Navigation System equipped HC-130Bs. 1964 – First successful use of Airborne radiation thermometer to detect changes in surface water temperature. 1967 – First use of microwave radiometer to differentiate radar contacts as ship or iceberg. 30 April 1970 – The ice reconnaissance detachment moved from Argentia to CFB Summerside in Prince Edward Island. 1971 – Side-looking airborne radar (SLAR) evaluation began. 1973 – Inertial Navigation System installed on Ice Patrol aircraft. 1973 – The ice reconnaissance detachment moved to St. John's International Airport, St. John's, Newfoundland. 1982 – The ice reconnaissance detachment relocated to CFB Gander, Gander, Newfoundland. 1989 – The ice reconnaissance detachment moved back to St. John's, Newfoundland. References External links United States Coast Guard International Ice Patrol Site Official website of the United States Coast Guard International Ice Patrol Paper on the Patrol's Economic Value Article by Clark W. Pritchett on the Economic Value of the International Ice Patrol (September 17th, 2008) United States Coast Guard Aviation Association, The Ancient Order of the Pterodactyl Website on the Coast Guard Aviation Association, The Ancient Order of The Pterodactyl Coast Guard International Ice Patrol, Public Information Division Book by the United States Coast Guard Headquarters that gives information about the International Ice Patrol in the North Atlantic Ocean (Washington, D.C.: United States Coast Guard, Public Information Division, 1942) United States Navy and Coast Guard patrols Navigation Maritime organizations United States Coast Guard United States Coast Guard Aviation International water transport Ice in transportation Organizations established in 1914
International Ice Patrol
[ "Physics" ]
1,904
[ "Physical systems", "Transport", "Ice in transportation" ]
2,286,702
https://en.wikipedia.org/wiki/Kaonic%20hydrogen
Kaonic hydrogen is an exotic atom consisting of a negatively charged kaon orbiting a proton. Such particles were first identified, through their X-ray spectrum, at the KEK proton synchrotron in Tsukuba, Japan in 1997. More detailed studies have been performed at DAFNE in Frascati, Italy. Kaonic hydrogen has been created in very low energy collisions of kaons with the protons in a gaseous hydrogen target. At DAFNE, kaons are produced by the decay of φ mesons which are in turn created in collisions between electrons and positrons. The experiments analyzed X-rays from several electronic transitions in kaonic hydrogen. Unlike in the hydrogen atom, where the binding between electron and proton is dominated by the electromagnetic interaction, kaons and protons interact also to a large extent by the strong interaction. In kaonic hydrogen this strong contribution was found to be repulsive, shifting the ground state energy by 283 ± 36 (statistical) ± 6 (systematic) eV, thus making the system unstable with a resonance width of 541 ± 89 (stat) ± 22 (syst) eV (decay into Λπ and Σπ). Kaonic hydrogen is studied mainly because of its importance for the understanding of kaon-nucleon interactions and for testing quantum chromodynamics. See also Kaonium Pionic helium References External links Article in CERN Courier Exotic atoms Atomic physics Hydrogen physics Mesons Nuclear physics Quantum chromodynamics Substances discovered in the 1990s Strange quark
Kaonic hydrogen
[ "Physics", "Chemistry" ]
312
[ "Exotic atoms", "Quantum mechanics", "Subatomic particles", " molecular", "Atomic physics", "Nuclear physics", "Atomic", "Particle physics", "Particle physics stubs", "Atoms", "Matter", " and optical physics" ]
2,286,714
https://en.wikipedia.org/wiki/Goal%20seeking
In computing, goal seeking is the ability to calculate backward to obtain an input that would result in a given output. This can also be called what-if analysis or backsolving. It can either be attempted through trial and improvement or more logical means. Basic goal seeking functionality is built into most modern spreadsheet packages such as Microsoft Excel. According to O'Brien and Marakas, optimization analysis is a more complex extension of goal-seeking analysis. Instead of setting a specific target value for a variable, the goal is to find the optimum value for one or more target variables, given certain constraints. Then one or more other variables are changed repeatedly, subject to the specified constraints, until you discover the best values for the target variables. Examples Suppose a family wanted to take out the biggest loan that they could afford to pay for. If they set aside $500 a month, the goal-seeking program would try to work out how big a loan the family could afford to take out. Even using simple trial and improvement, a computer could quickly determine that they could not afford a $50,000 loan, but could afford a $48,000 loan. It would then repeat the process until it had reached a figure such as $48,476.34, which would give them a monthly repayment as close to $500 as possible, without exceeding it. A more efficient method, especially on more complicated calculations, would be for the program to logically work through the argument. By drawing up a simple equation, the program could come to the conclusion that the output equalled one ninety-sixth of the input, and could then multiply the output (or goal) by ninety-six to find the necessary input. See also Global optimization Goal programming References Computing terminology Goal
Goal seeking
[ "Technology" ]
359
[ "Computing terminology" ]
2,286,716
https://en.wikipedia.org/wiki/NGC%207331
NGC 7331, also known as Caldwell 30, is an unbarred spiral galaxy about away in the constellation Pegasus. It was discovered by William Herschel on 6 September 1784. The galaxy appears similar in size and structure to the Milky Way, and is sometimes referred to as "the Milky Way's twin". However, discoveries in the 2000s regarding the structure of the Milky Way may call this similarity into doubt, particularly because the latter is now believed to be a barred spiral, compared to the unbarred status of NGC 7331. In spiral galaxies the central bulge typically co-rotates with the disk but the bulge in the galaxy NGC 7331 is rotating in the opposite direction to the rest of the disk. In both visible light and infrared photos of the NGC 7331, the core of the galaxy appears to be slightly off-center, with one side of the disk appearing to extend further away from the core than the opposite side. Galaxy Groups NGC 7331 is the brightest galaxy in the field of a visual grouping known as the NGC 7331 Group of galaxies. In fact, the other members of the group, NGC 7335, NGC 7336, NGC 7337 and NGC 7340, lie far in the background at distances of approximately 300–350 million light years. All of the members of the NGC 7331 Group, along with NGC 7325, NGC 7326, NGC 7327, NGC 7333, NGC 7338, are listed together as Holm 795 in Erik Holmberg's A Study of Double and Multiple Galaxies Together with Inquiries into some General Metagalactic Problems, published in 1937. Supernovae Three supernovae have been observed in NGC 7331: SN 1959D (type II-L, mag. 13.4) was discovered by Milton Humason and H. S. Gates in a survey at Palomar Observatory on 28 June 1959. SN 2013bu (type II, mag. 16.6) was discovered by Kōichi Itagaki on 21 April 2013. SN 2014C was discovered by the Lick Observatory Supernova Search (LOSS) on 5 January 2014. The star underwent an unusual "metamorphosis" from a hydrogen-poor Type Ib to a hydrogen-rich Type IIn over the course of a year. In addition to the confirmed supernovae, a 1903 photographic plate from Yerkes Observatory shows a magnitude 16.6 candidate transient that may have also been a supernova. See also M94 – another galaxy with a prominent starburst ring NGC 1512 – another galaxy with a prominent starburst ring Flocculent spiral galaxy List of NGC objects (7001–7840) References External links Calar Alto Observatory – NGC 7331 APOD (2004-07-01) – "A Galaxy So Inclined" SST – "Morphology of Our Galaxy's 'Twin'" NGC 7331 at the astro-photography site of Mr. T. Yoshida NGC7331 at W. Kloehr Astrophotography SEDS – NGC 7331 NGC 7331 Group Unbarred spiral galaxies Pegasus (constellation) 7331 12113 69327 030b 17840906 +06-49-045 22347+3409
NGC 7331
[ "Astronomy" ]
660
[ "Pegasus (constellation)", "Constellations" ]
2,286,731
https://en.wikipedia.org/wiki/Kaonium
Kaonium is an exotic atom consisting of a bound state of a positively charged and a negatively charged kaon. Kaonium has not been observed experimentally and is expected to have a short lifetime on the order of 10−18 seconds. See also Kaonic hydrogen References Onia Mesons Nuclear physics
Kaonium
[ "Physics" ]
63
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
2,286,884
https://en.wikipedia.org/wiki/Term%20of%20patent
The term of a patent is the maximum time during which it can be maintained in force. It is usually expressed in a number of years either starting from the filing date of the patent application or from the date of grant of the patent. In most patent laws, annuities or maintenance fees have to be regularly paid in order to keep the patent in force. Thus, a patent may lapse before its term if a renewal fee is not paid in due time. International harmonization Significant international harmonization of patent term across national laws was provided in the 1990s by the implementation of the WTO's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs Agreement). Article 33 of the TRIPs Agreement provides that "The term of protection available [for patents] shall not end before the expiration of a period of twenty years counted from the filing date." Consequently, in most patent laws nowadays, the term of patent is 20 years from the filing date of the application. This however does not forbid the states party to the WTO from providing, in their national law, other type of patent-like rights with shorter terms. Utility models are an example of such rights. Their term is usually 6 or 10 years. Jurisdiction Europe The European Patent Convention requires all jurisdictions to give a European patent a term of 20 years from the actual date of filing an application for a European patent or the actual date of filing an international application under the PCT designating the EPO. The actual date of filing can be up to a year after the earliest priority date. The term of a granted European patent may be extended under national law if national law provides term extension to compensate for pre-marketing regulatory approval. For EEA member states this is by means of a supplementary protection certificate. United States In the United States, for utility patents filed on or after June 8, 1995, the term of the patent is 20 years from the earliest filing date of the application on which the patent was granted and any prior U.S. or Patent Cooperation Treaty (PCT) applications from which the patent claims priority (excluding provisional applications). For patents filed prior to June 8, 1995, the term of patent is either 20 years from the earliest filing date as above or 17 years from the issue date, whichever is longer. Extensions may be had for certain administrative delays. The patent term will additionally be adjusted to compensate for delays in the issuance of a patent. The reasons for extensions include: Delayed response to an application request for patent. Exceeding 3 years to consider a patent application. Delays due to a secrecy order or appeal. For design patents (patents based on decorative, non-functional features), for design applications filed on or after May 13, 2015, the term is 15 years from the issue date. For design applications filed before May 13, 2015, the term is 14 years from the issue date. 
See also Paris Convention for the Protection of Industrial Property, provides what is called the "priority year" Patent cliff, when the patent expiration leads to an abrupt drop in sales Provisional patent application Submarine patent Supplementary protection certificate (SPC), provides a limited time extension to the protection conferred by certain patents in the European Union References Further reading United States - Contents and term of patent; provisional rights 2701 Patent Term - 2700 Patent Terms and Extensions in Manual of Patent Examining Procedure (MPEP), USPTO Patent law Time in government
Term of patent
[ "Physics" ]
689
[ "Spacetime", "Physical quantities", "Time in government", "Time" ]
2,286,941
https://en.wikipedia.org/wiki/List%20of%20types%20of%20XML%20schemas
This is a list of notable XML schemas in use on the Internet sorted by purpose. XML schemas can be used to create XML documents for a wide range of purposes such as syndication, general exchange, and storage of data in a standard format. Bookmarks XBEL - XML Bookmark Exchange Language Brewing BeerXML - a free XML-based data description standard for the exchange of brewing data Business ACORD data standards - insurance industry XML schema specifications by the Association for Cooperative Operations Research and Development Europass XML - XML vocabulary describing the information contained in a Curriculum Vitae (CV), Language Passport (LP) and European Skills Passport (ESP) OSCRE - Open Standards Consortium for Real Estate format for data exchange within the real estate industry UBL - a common XML library of business documents (purchase orders, invoices, etc.), defined by OASIS XBRL - Extensible Business Reporting Language, for International Financial Reporting Standards (IFRS) and United States generally accepted accounting principles (GAAP) business accounting. Elections EML - Election Markup Language, an OASIS standard to support end-to-end management of election processes. It defines over thirty schemas, for example EML 510 for vote count reporting and EML 310 for voter registration. Engineering gbXML - an open schema developed to facilitate transfer of building data stored in Building Information Models (BIMs) to engineering analysis tools. IFC-XML - Building Information Models for architecture, engineering, construction, and operations. XMI - an Object Management Group (OMG) standard for exchanging metadata information, commonly used for exchange of UML information XTCE - XML Telemetric and Command Exchange, an XML-based data exchange format for spacecraft telemetry and command meta-data Financial FIXatdl - FIX algorithmic trading definition language. The schema provides an HCI between a human trader, the order entry screen(s), and an unlimited number of algorithmic trading types (called strategies) from a variety of sources, and formats a new order message on the FIX wire. FIXML - Financial Information eXchange (FIX) protocol is an electronic communications protocol initiated in 1992 for international real-time exchange of information related to securities transactions and markets. FpML - Financial products Markup Language is the industry-standard protocol for complex financial products. It is based on XML (eXtensible Markup Language), the standard meta-language for describing data shared between applications. Geographic information systems and geotagging KML - Keyhole Markup Language is used for annotation on geographical browsers including Google Earth and NASA's World Wind. These annotations are used to place events such as earthquake warnings, historical events, etc. SensorML - used for describing sensors and measurement processes Graphical user interfaces FIXatdl - algorithmic trading GUIs (language independent) FXML - XML-based user interface markup language for JavaFX GLADE - GNOME's User Interface Language (GTK+) KParts - KDE's User Interface Language (Qt) UXP - Unified XUL Platform, a 2017 fork of XUL.
XAML - Microsoft's Extensible Application Markup Language XForms - XForms XUL - XML User Interface Language (Native) Humanities texts EpiDoc - Epigraphic Documents TEI - Text Encoding Initiative Intellectual property DS-XML - Industrial Design Information Exchange Standard IPMM - Invention Disclosure Standard TM-XML - Trade Mark Information Exchange Standard Libraries EAD - for encoding archival finding aids, maintained by the Technical Subcommittee for Encoded Archival Description of the Society of American Archivists, in partnership with the Library of Congress MARCXML - a direct mapping of the MARC standard to XML syntax METS - a schema for aggregating in a single XML file descriptive, administrative, and structural metadata about a digital object MODS - a schema for a bibliographic element set, maintained by the Network Development and MARC Standards Office of the Library of Congress Math and science MathML - Mathematical Markup Language ANSI N42.42 or "N42" - NIST data format standard for radiation detectors used for Homeland Security Metadata DDML - Document Definition Markup Language, a reformulation of XML DTDs in XML syntax ONIX for Books - ONline Information eXchange, developed and maintained by EDItEUR jointly with Book Industry Communication (UK) and the Book Industry Study Group (US), and with user groups in Australia, Canada, France, Germany, Italy, the Netherlands, Norway, Spain and the Republic of Korea. PRISM - Publishing Requirements for Industry Standard Metadata RDF - Resource Description Framework Music playlists XSPF - XML Shareable Playlist Format Musical notation MusicXML - XML western musical notation format News syndication Atom - Atom RSS - Really Simple Syndication Paper and forest products EPPML - an XML conceptual model for the interactions between parties of a postal communication system. papiNet - XML format for exchange of business documents and product information in the paper and forest products industries. Publishing DITA - Darwin Information Typing Architecture, document authoring system DocBook - for technical documentation JATS (formerly known as the NLM DTD) - Journal Article Tag Suite, a journal publishing structure originally developed by the United States National Library of Medicine PRISM - Publishing Requirements for Industry Standard Metadata Statistics DDI - "Data Documentation Initiative" is a format for information describing statistical and social science data (and their lifecycle). SDMX - SDMX-ML is a format for exchange and sharing of Statistical Data and Metadata. Vector images SVG - Scalable Vector Graphics See also List of XML markup languages XML Schema Language Comparison XML transformation language XML pipeline Data Format Description Language XML log Notes External links Schema Documentation Library Data modeling languages XML Schemas Financial industry XML-based standards
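As a practical illustration of how such schemas are consumed, the following minimal sketch validates a document against an XML Schema (XSD) using the lxml library for Python; the file names are placeholders for this example, and any validating XML parser could be used instead:

```python
from lxml import etree

# Parse the schema and the instance document (placeholder file names).
schema = etree.XMLSchema(etree.parse("invoice.xsd"))
doc = etree.parse("invoice.xml")

if schema.validate(doc):
    print("document conforms to the schema")
else:
    # Report each validation problem found by the parser.
    for error in schema.error_log:
        print(error.message)
```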
List of types of XML schemas
[ "Technology" ]
1,165
[ "Computing-related lists", "Lists of computer languages" ]
2,287,054
https://en.wikipedia.org/wiki/Pinwheel%20nebula
A pinwheel nebula is a nebulous region in the shape of a pinwheel. Spiral galaxies The term 'Pinwheel nebula' is an antiquated misnomer used by observers before Edwin Hubble realized that many of these spiral-shaped nebulae were actually 'island universes', or what we now call galaxies. Wolf–Rayet nebulae Some Wolf–Rayet stars are surrounded by pinwheel nebulae. These nebulae are formed from the dust that is spewed out of a binary star system: the stellar winds of the two stars collide and form two dust lanes that spiral outward with the rotation of the system. An example of this is WR 104. External links "The Twisted Tale of Wolf-Rayet 104, First of the Pinwheel Nebulae" Some Wolf–Rayet stars in binaries are close enough that a rotating "pinwheel nebula", showing the dust generated by colliding winds in the binary system, can be imaged from aperture-masking interferometry observations. Pinwheel Galaxy from ESA/Hubble
Pinwheel nebula
[ "Astronomy" ]
215
[ "Nebula stubs", "Astronomy stubs" ]
2,287,240
https://en.wikipedia.org/wiki/Rebound%20effect
The rebound effect, or rebound phenomenon, is the emergence or re-emergence of symptoms that were either absent or controlled while taking a medication, but appear when that same medication is discontinued, or reduced in dosage. In the case of re-emergence, the severity of the symptoms is often worse than pretreatment levels. The effect is also called the pharmaceutical rebound phenomenon. Examples Sedative hypnotics Rebound insomnia is insomnia that occurs following discontinuation of sedative substances taken to relieve primary insomnia. Regular use of these substances can cause a person to become dependent on their effects in order to fall asleep. Therefore, when a person has stopped taking the medication and is 'rebounding' from its effects, they may experience insomnia as a symptom of withdrawal. Occasionally, this insomnia may be worse than the insomnia the drug was intended to treat. Common medicines known to cause this problem are eszopiclone, zolpidem, and anxiolytics such as benzodiazepines, which are prescribed to people having difficulty falling or staying asleep. Rebound depression may arise in patients previously free of such an illness. Daytime rebound effects of anxiety, metallic taste, and perceptual disturbances, which are typical benzodiazepine withdrawal symptoms, can occur the day after a short-acting benzodiazepine hypnotic wears off. Rebound phenomena do not necessarily only occur on discontinuation of a prescribed dosage. Another example is early-morning rebound insomnia, which may occur when a rapidly eliminated hypnotic wears off, leading to rebound wakefulness that forces the person wide awake before he or she has had a full night's sleep. One drug which seems to be commonly associated with these problems is triazolam, due to its high potency and ultra-short half-life, but these effects can occur with other short-acting hypnotic drugs. Quazepam, due to its selectivity for type 1 benzodiazepine receptors and long half-life, does not cause daytime anxiety rebound effects during treatment, showing that half-life is very important for determining whether a nighttime hypnotic will cause next-day rebound withdrawal effects. Daytime rebound effects are not necessarily mild but can sometimes produce quite marked psychiatric and psychological disturbances. Stimulants Rebound effects from stimulants such as methylphenidate or dextroamphetamine include stimulant psychosis, depression, and a return of ADHD symptoms in a temporarily exaggerated form. Up to a third of children with ADHD experience a rebound effect when methylphenidate is withdrawn. Antidepressants Many antidepressants, including SSRIs, can cause rebound depression, panic attacks, anxiety, and insomnia when discontinued. Antipsychotics Sudden and severe emergence or re-emergence of psychosis may occur when antipsychotics are switched or discontinued too rapidly. Alpha-2 adrenergic agents Rebound hypertension, above pre-treatment levels, has been observed after discontinuation of clonidine and guanfacine. Continuous usage of topical decongestants (nasal sprays) can lead to constant nasal congestion, known as rhinitis medicamentosa.
Humanized antibodies Denosumab inhibits osteoclast recycling, which results in the accumulation of pre-osteoclasts and osteomorphs. When denosumab therapy is discontinued, the induced cells quickly and abundantly differentiate into osteoclasts, causing bone resorption (the rebound effect) and increasing the risk of fractures. To improve bone mineral density and prevent fractures after denosumab discontinuation, bisphosphonate administration is recommended. Other medications Another example of pharmaceutical rebound is a rebound headache from painkillers when the dose is lowered, the medication wears off, or the drug is abruptly discontinued. In 2022, reports of viral RNA and symptom rebound in people with COVID-19 treated with Paxlovid were published. In May, the CDC issued a health alert informing physicians about "Paxlovid rebound", which received wider attention when US president Joe Biden experienced a rebound. The cause of the rebound is unclear, however, since around a third of people with COVID-19 experience a symptom rebound regardless of treatment. Abrupt withdrawal of highly potent corticosteroids, such as clobetasol for psoriasis, can cause a much more severe case of psoriasis to develop. Therefore, withdrawal should be gradual, until very little actual medication is being applied. See also Disuse supersensitivity Supersensitivity psychosis Physical dependence Rebound headache Drug withdrawal Unintended consequences References Clinical pharmacology Psychiatric diagnosis
Rebound effect
[ "Chemistry" ]
1,037
[ "Pharmacology", "Clinical pharmacology" ]
2,287,294
https://en.wikipedia.org/wiki/Discontinuous%20reception
Discontinuous reception (DRX) is a method used in mobile communication to conserve the battery of the mobile device. The mobile device and the network negotiate phases in which data transfer occurs. During other times the device turns its receiver off and enters a low power state. This capability is usually designed into the protocol, most notably in how the transmission is structured: for example, in slots with headers containing address details, so that a device can listen to the header in each slot to decide whether the transmission is relevant to it. In this case, the receiver only has to be active at the beginning of each slot to receive the header, conserving battery life. Other techniques include polling, whereby the device is placed into standby for a given amount of time, and a beacon is sent periodically by the access point or base station indicating whether there is any data waiting for the device. This is used in 802.11 wireless networks when compatible access cards and access points negotiate a power-saving arrangement. In practice, a hybrid of these techniques may be used. See also Discontinuous transmission References Mobile telecommunications
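A toy illustration of the slotted scheme described above, in Python. The slot structure, field names, and timing fractions are invented for this example and are not taken from any particular standard; the point is only that the receiver is powered for every header but stays on for a payload only when it is the addressee:

```python
import random

MY_ADDRESS = 7  # this device's address (arbitrary for the example)

def make_slot(dest):
    # A slot carries a small header naming the destination, then a payload.
    return {"header": {"dest": dest}, "payload": "data"}

def run(num_slots=10):
    awake = 0.0
    for i in range(num_slots):
        slot = make_slot(dest=random.choice([3, 5, 7]))
        awake += 0.1                      # receiver on briefly to read the header
        if slot["header"]["dest"] == MY_ADDRESS:
            awake += 0.9                  # stay on for the payload
            print(f"slot {i}: payload received")
        # otherwise the receiver sleeps for the remainder of the slot
    print(f"receiver active for {awake:.1f} of {num_slots} slot-times")

run()
```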
Discontinuous reception
[ "Technology" ]
231
[ "Mobile telecommunications" ]
2,287,401
https://en.wikipedia.org/wiki/Crooked%20spire
A crooked spire (also known as a twisted spire) is a tower showing a twist and/or a deviation from the vertical. A church tower usually consists of a square stone tower topped with a pyramidal wooden structure; the spire is usually clad with slates or lead to protect the wood. Through accident or design the spire may contain a twist, or it may lean from the vertical. Some, however, have been built or rebuilt with a deliberate twist, generally as a design choice. There are about a hundred bell towers of this type in Europe. Reasons for spires to twist and bend Twisting can be caused by internal or external forces. Internal conditions, such as green or unseasoned wood, can cause some twisting until the wood is fully seasoned, after about 50 years. The weight of any lead used in construction can also cause the wood to twist. Dry wood will shrink, causing further movement. External forces, such as water ingress that causes rot, can cause partial collapse, resulting in tilting. Heat from the sun on one side can also cause movement. Earthquakes have also occasionally caused twisting. Subsidence can cause leaning. Strong winds have been blamed at times, but there is little evidence to back this up. Finally, weak design can be at fault, for instance a lack of cross-bracing, which leaves the tower free to move. One legend relating to Chesterfield says that a virgin once married in the church, and the church was so surprised that the spire turned around to look at the bride. Another version of the myth common in Chesterfield is that the devil twisted the spire when a virgin married in the church, saying that he would untwist it when the next virgin got married there. A third myth says that the devil perched on the spire and twisted his tail around it to hold on, the twist of his tail transmitting to the structure. List of twisted spires References Towers
Crooked spire
[ "Engineering" ]
387
[ "Structural engineering", "Towers" ]
2,288,130
https://en.wikipedia.org/wiki/A%20Biographical%20Dictionary%20of%20Railway%20Engineers
The book A Biographical Dictionary of Railway Engineers, by John Marshall (b. 1922), summarises the lives of more than 600 engineers from Europe and North America. Each biographical entry is in summary form and concludes with a list of references. It includes an index, but no illustrations. A typical entry begins with the subject's birth and death dates, with places, and deals chronologically with the subject's railway career. Any writings by the subject are noted, and the concluding section gives page references to where the information came from, usually technical periodicals. The concluding index is of railway companies worldwide and notes the engineers who worked for them. The second edition now covers 752 names. A review of it appears in Journal of Transport History, March 2004. 1978 edition pub. David & Charles, Newton Abbot. 252pp. 2003 edition pub. Railway and Canal Historical Society, Oxford. 206pp. References
A Biographical Dictionary of Railway Engineers
[ "Physics" ]
216
[ "Physical systems", "Transport", "Transport stubs" ]
2,288,158
https://en.wikipedia.org/wiki/William%20Duncan%20MacMillan
William Duncan MacMillan (July 24, 1871 – November 14, 1948) was an American mathematician and astronomer on the faculty of the University of Chicago. He published research on the applications of classical mechanics to astronomy, and is noted for pioneering speculations on physical cosmology. For the latter, Helge Kragh noted, "the cosmological model proposed by MacMillan was designed to lend support to a cosmic optimism, which he felt was threatened by the world view of modern physics." Biography He was born in La Crosse, Wisconsin, to D. D. MacMillan, who was in the lumber business, and Mary Jane McCrea. His brother, John H. MacMillan, headed the Cargill Corporation from 1909 to 1936. MacMillan graduated from La Crosse High School in 1888. In 1889, he attended Lake Forest College, then entered the University of Virginia. Later, in 1898, he earned an A.B. degree from Fort Worth University, which was then a Methodist university in Texas. He performed his graduate work at the University of Chicago, earning a master's degree in 1906 and a PhD in astronomy in 1908. In 1907, prior to completing his PhD, he joined the staff of the University of Chicago as a research assistant in geology. In 1908, he became an associate in mathematics, then in 1909, he began instruction in astronomy at the same institution. His career as a professor began in 1912 when he became an assistant professor. When the U.S. declared war on Germany in 1917, MacMillan served as a major in the U.S. Army's ordnance department during World War I. Following the war, he became associate professor in 1919, then full professor in 1924. MacMillan retired in 1936. In a 1958 paper about MacMillan's work on cosmology, Richard Schlegel introduced MacMillan as "best known to physicists for his three-volume Classical Mechanics" that remained in print for decades after MacMillan's 1936 retirement. MacMillan published extensively on the mathematics of the orbits of planets and stars. In the 1920s, MacMillan developed a cosmology that presumed an unchanging, steady-state model of the universe. This was uncontroversial at the time, and indeed in 1918, Albert Einstein had also sought to adapt his relativity theories to the model using a cosmological constant. MacMillan accepted that the radiance of stars came from then unknown processes that converted their mass into radiant energy. This perspective suggested that individual stars and the universe itself would ultimately go dark, which was called the "heat death" of the universe. MacMillan avoided this conclusion about the universe through a mechanism later known as the "tired-light hypothesis". He speculated that the light emitted by stars might recreate matter in its travels through space. MacMillan's work on cosmology lost influence in the 1930s after Hubble's law became accepted. Edwin Hubble's 1929 publication, and earlier work by Georges Lemaître, reported on observations of entire galaxies far from the earth and its galaxy. The further away a galaxy is, the faster it is apparently moving away from the earth. Hubble's law strongly suggested that the universe is expanding. In 1948, a new version of a steady-state cosmology was proposed by Bondi, Gold, and Hoyle that was consistent with the measurements on distant galaxies. While the authors were apparently not aware of MacMillan's earlier work, substantial similarities exist. With the observation of the cosmic microwave background (CMB) in 1965, steady-state models of the universe have been rejected by most astronomers and physicists.
The CMB is a prediction of the Big Bang model of an expanding universe. MacMillan also had a distaste for Einstein's relativity theories. In a published debate in 1927, MacMillan invoked "postulates of normal intuition" to argue against them. He objected to the theories' inconsistency with an absolute scale of time. Einstein's theories predict that an observer will see that rapidly moving clocks tick more slowly than the observer's own clock. Later experiments amply confirmed this "time dilation" prediction of relativity theory. In an Associated Press report, MacMillan speculated on the nature of interstellar civilizations, believing that they would be vastly more advanced than our own. "Out in the heavens, perhaps, are civilizations as far above ours as we are above the single cell, since they are so much older than ours." The crater MacMillan on the Moon is named in his honor. Selected publications MacMillan's major textbooks, first published from 1916 and including the three-volume Classical Mechanics, were later reprinted by Dover in 1958 and 1960. See also Sitnikov problem Static universe References 1871 births 1948 deaths American astronomers Lake Forest College alumni Relativity critics University of Chicago alumni United States Army officers University of Chicago faculty
William Duncan MacMillan
[ "Physics" ]
974
[ "Relativity critics", "Theory of relativity" ]
2,288,181
https://en.wikipedia.org/wiki/Primary%20flight%20display
A primary flight display or PFD is a modern aircraft instrument dedicated to flight information. Much like multi-function displays, primary flight displays are built around a liquid-crystal display (LCD) or CRT display device. Representations of the older "six pack" or "steam gauge" instruments are combined on one compact display, simplifying pilot workflow and streamlining cockpit layouts. Most airliners built since the 1980s—as well as many business jets and an increasing number of newer general aviation aircraft—have glass cockpits equipped with primary flight and multi-function displays (MFDs). Cirrus Aircraft was the first general aviation manufacturer to add a PFD to their already existing MFD, which they made standard on their SR-series aircraft in 2003. Mechanical gauges have not been eliminated from the cockpit with the advent of the PFD; they are retained for backup purposes in the event of total electrical failure. Components While the PFD does not directly use the pitot-static system to physically display flight data, it still uses the system to make altitude, airspeed, vertical speed, and other measurements precisely using air pressure and barometric readings. An air data computer analyzes the information and displays it to the pilot in a readable format. A number of manufacturers produce PFDs, varying slightly in appearance and functionality, but the information is displayed to the pilot in a similar fashion. FAA regulations require that a PFD include, at a minimum, an airspeed indicator, turn coordinator, attitude indicator, heading indicator, altimeter, and vertical speed indicator [14 CFR Part 61.129(j)(1)]. Layout The details of the display layout on a primary flight display can vary enormously, depending on the aircraft, the aircraft's manufacturer, the specific model of PFD, certain settings chosen by the pilot, and various internal options that are selected by the aircraft's owner (i.e., an airline, in the case of a large airliner). However, the great majority of PFDs follow a similar layout convention. The center of the PFD usually contains an attitude indicator (AI), which gives the pilot information about the aircraft's pitch and roll characteristics, and the orientation of the aircraft with respect to the horizon. Unlike a traditional attitude indicator, however, the mechanical gyroscope is not contained within the panel itself, but is rather a separate device whose information is simply displayed on the PFD. The attitude indicator is designed to look very much like traditional mechanical AIs. Other information that may or may not appear on or about the attitude indicator can include the stall angle, a runway diagram, ILS localizer and glide-path “needles”, and so on. Unlike mechanical instruments, this information can be dynamically updated as required; the stall angle, for example, can be adjusted in real time to reflect the calculated critical angle of attack of the aircraft in its current configuration (airspeed, etc.). The PFD may also show an indicator of the aircraft's future path (over the next few seconds), as calculated by onboard computers, making it easier for pilots to anticipate aircraft movements and reactions. To the left and right of the attitude indicator are usually the airspeed and altitude indicators, respectively. The airspeed indicator displays the speed of the aircraft in knots, while the altitude indicator displays the aircraft's altitude above mean sea level (AMSL).
These measurements are conducted through the aircraft's pitot-static system, which tracks air pressure measurements. As with the PFD's attitude indicator, these readouts merely present data from the underlying sensor systems, and do not contain any mechanical parts (unlike a mechanical airspeed indicator and altimeter). Both of these indicators are usually presented as vertical “tapes”, which scroll up and down as altitude and airspeed change. Both indicators may often have “bugs”, that is, indicators that show various important speeds and altitudes, such as V speeds calculated by a flight management system, do-not-exceed speeds for the current configuration, stall speeds, selected altitudes and airspeeds for the autopilot, and so on. The vertical speed indicator, usually next to the altitude indicator, indicates to the pilot how fast the aircraft is ascending or descending, or the rate at which the altitude changes. This is usually represented with numbers in "thousands of feet per minute." For example, a measurement of "+2" indicates an ascent of 2000 feet per minute, while a measurement of "-1.5" indicates a descent of 1500 feet per minute. There may also be a simulated needle showing the general direction and magnitude of vertical movement. At the bottom of the PFD is the heading display, which shows the pilot the magnetic heading of the aircraft. This functions much like a standard magnetic heading indicator, turning as required. Often this part of the display shows not only the current heading, but also the current track (actual path over the ground), rate of turn, current heading setting on the autopilot, and other indicators. Other information displayed on the PFD includes navigational marker information, bugs (to control the autopilot), ILS glideslope indicators, course deviation indicators, altimeter QFE settings, and much more. Although the layout of a PFD can be very complex, once a pilot is accustomed to it the PFD can provide an enormous amount of information with a single glance. Airbus Starting with the A350-1000, Airbus proposes a common symbology on the PFD and HUD centered on a flightpath vector and an energy cue instead of a flight director, supplementing the usual pitch and heading indications to improve situational awareness and helping incorporate synthetic vision into the PFD. Drawbacks The great variability in the precise details of PFD layout makes it necessary for pilots to study, in advance, the specific PFD of the aircraft they will be flying, so that they know exactly how certain data is presented. While the basics of flight parameters tend to be much the same in all PFDs (speed, attitude, altitude), much of the other useful information presented on the display is shown in different formats on different PFDs. For example, one PFD may show the current angle of attack as a tiny dial near the attitude indicator, while another may actually superimpose this information on the attitude indicator itself. Since the various graphic features of the PFD are not labeled, the pilot must learn what they all mean in advance. A failure of a PFD deprives the pilot of an extremely important source of information. While backup instruments will still provide the most essential information, they may be spread over several locations in the cockpit, which must be scanned by the pilot, whereas the PFD presents all this information on one display.
Additionally, some of the less important information, such as speed and altitude bugs, stall angles, and the like, will simply disappear if the PFD malfunctions; this may not endanger the flight, but it does increase pilot workload and diminish situational awareness. See also Multi-function display (MFD) References External links Avionics Glass cockpit Navigational flight instruments
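To make the air data computer's role concrete, here is a simplified sketch of the two conversions mentioned above: static pressure to altitude and pitot pressure to airspeed. It assumes ISA standard-atmosphere constants and an incompressible-flow airspeed formula, so it is illustrative only, valid roughly for low speeds and tropospheric altitudes:

```python
import math

P0 = 101325.0   # ISA sea-level static pressure, Pa
RHO0 = 1.225    # ISA sea-level air density, kg/m^3

def pressure_altitude_m(static_pa):
    # Barometric formula inverted for the troposphere.
    return 44330.0 * (1.0 - (static_pa / P0) ** (1.0 / 5.255))

def airspeed_ms(total_pa, static_pa):
    # Dynamic pressure q is pitot (total) pressure minus static pressure.
    q = total_pa - static_pa
    return math.sqrt(max(2.0 * q / RHO0, 0.0))

print(round(pressure_altitude_m(79495.0)))      # about 2000 m
print(round(airspeed_ms(103330.0, 101325.0)))   # about 57 m/s
```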
Primary flight display
[ "Technology" ]
1,492
[ "Glass cockpit", "Avionics", "Aircraft instruments", "Navigational flight instruments" ]
2,288,302
https://en.wikipedia.org/wiki/Active%20shape%20model
Active shape models (ASMs) are statistical models of the shape of objects which iteratively deform to fit to an example of the object in a new image, developed by Tim Cootes and Chris Taylor in 1995. The shapes are constrained by a point distribution model (PDM), a statistical shape model, to vary only in ways seen in a training set of labelled examples. The shape of an object is represented by a set of points (controlled by the shape model). The ASM algorithm aims to match the model to a new image. The ASM works by alternating the following steps: Generate a suggested shape by looking in the image around each point for a better position for the point. This is commonly done using what is called a "profile model", which looks for strong edges or uses the Mahalanobis distance to match a model template for the point. Conform the suggested shape to the point distribution model, commonly called a "shape model" in this context. The technique has been widely used to analyse images of faces, mechanical assemblies and medical images (in 2D and 3D). It is closely related to the active appearance model. It is also known as a "Smart Snakes" method, since it is an analog to an active contour model which would respect explicit shape constraints. See also Procrustes analysis Point distribution model References External links Matlab code open-source ASM implementation. Description of AAMs from Manchester University. Tim Cootes' home page (one of the original co-inventors of ASMs). Source code for ASMs (the "stasm" library). ASMlib-OpenCV, An open source C++/OpenCV implementation of ASM. Computer vision
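The alternating two-step loop described above can be sketched in a few lines of Python. The sketch assumes a trained point distribution model is already available (mean shape, PCA eigenvector matrix, eigenvalues) and that the caller supplies a hypothetical suggest_points profile-search routine; pose (similarity-transform) alignment is omitted for brevity, so this illustrates the idea rather than a production fit:

```python
import numpy as np

def constrain_to_model(shape, mean, P, eigvals, n_sigma=3.0):
    """Project a suggested shape onto the PDM and clamp its parameters."""
    b = P.T @ (shape - mean)              # shape parameters in model space
    limit = n_sigma * np.sqrt(eigvals)    # allow roughly +/- 3 std devs per mode
    b = np.clip(b, -limit, limit)         # keep the shape statistically plausible
    return mean + P @ b

def fit_asm(image, mean, P, eigvals, suggest_points, n_iters=20):
    shape = mean.copy()
    for _ in range(n_iters):
        # Step 1: local search around each point (profile model, supplied by caller).
        suggested = suggest_points(image, shape)
        # Step 2: conform the suggestion to the statistical shape model.
        shape = constrain_to_model(suggested, mean, P, eigvals)
    return shape
```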
Active shape model
[ "Engineering" ]
365
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
2,288,308
https://en.wikipedia.org/wiki/White%20box%20%28software%20engineering%29
A white box (or glass box, clear box, or open box) is a subsystem whose internals can be viewed but usually not altered. The term is used in systems engineering, software engineering, and in intelligent user interface design, where it is closely related to recent interest in explainable artificial intelligence. Having access to the subsystem's internals generally makes the subsystem easier to understand, but also easier to hack; for example, if a programmer can examine source code, weaknesses in an algorithm are much easier to discover. That makes white-box testing much more effective than black-box testing, but considerably more difficult, owing to the sophistication needed on the part of the tester to understand the subsystem. The notion of a "Black Box in a Glass Box" was originally used as a metaphor for teaching complex topics to computing novices. See also Black box Gray-box testing References Software testing
White box (software engineering)
[ "Engineering" ]
193
[ "Software engineering", "Software testing", "Software engineering stubs" ]
2,288,348
https://en.wikipedia.org/wiki/White%20box%20%28computer%20hardware%29
In computer hardware, a white box is a personal computer or server without a well-known brand name. The term is usually applied to systems assembled by small system integrators and to homebuilt computer systems assembled by end users from parts purchased separately at retail. In this sense, building a white box system is part of the DIY movement. The term is also applied to high volume production of unbranded PCs that began in the mid-1980s with 8 MHz Turbo XT systems selling for just under $1000. In 2002, around 30% of personal computers sold annually were white box systems. Operating systems While PCs built by system manufacturers generally come with a pre-installed operating system, white boxes from both large and small system vendors and other VAR channels can be ordered with or without a pre-installed OS. Usually when ordered with an operating system, the system builder uses an OEM copy of the OS. Whitebook or Intel "Common Building Blocks" Intel defined form factor and interconnection standards for notebook computer components, including "Barebones" (chassis and motherboard), hard disk drive, optical disk drive, LCD, battery pack, keyboard, and AC/DC adapter. These building blocks are primarily marketed to computer building companies, rather than DIY users. Costs While saving money is a common motivation for building one's own PC, today in the US it is generally more expensive to build a low-end PC than to buy a pre-built one from a well-known manufacturer. See also Beige box Enthusiast computing Homebuilt computer White-label product References Computer enclosure Electronics manufacturing Personal computers Computer jargon
White box (computer hardware)
[ "Technology", "Engineering" ]
336
[ "Computing terminology", "Computer jargon", "Natural language and computing", "Electronic engineering", "Electronics manufacturing" ]
2,288,549
https://en.wikipedia.org/wiki/Momentum%20operator
In quantum mechanics, the momentum operator is the operator associated with the linear momentum. The momentum operator is, in the position representation, an example of a differential operator. For the case of one particle in one spatial dimension, the definition is:
$$\hat{p} = -i\hbar \frac{\partial}{\partial x}$$
where $\hbar$ is the reduced Planck constant, $i$ the imaginary unit, $x$ is the spatial coordinate, and a partial derivative (denoted by $\partial/\partial x$) is used instead of a total derivative ($d/dx$) since the wave function is also a function of time. The "hat" indicates an operator. The "application" of the operator on a differentiable wave function is as follows:
$$\hat{p}\psi(x,t) = -i\hbar \frac{\partial \psi(x,t)}{\partial x}$$
In a basis of Hilbert space consisting of momentum eigenstates expressed in the momentum representation, the action of the operator is simply multiplication by $p$, i.e. it is a multiplication operator, just as the position operator is a multiplication operator in the position representation. Note that the definition above is the canonical momentum, which is not gauge invariant and not a measurable physical quantity for charged particles in an electromagnetic field. In that case, the canonical momentum is not equal to the kinetic momentum. At the time quantum mechanics was developed in the 1920s, the momentum operator was found by many theoretical physicists, including Niels Bohr, Arnold Sommerfeld, Erwin Schrödinger, and Eugene Wigner. Its existence and form is sometimes taken as one of the foundational postulates of quantum mechanics. Origin from de Broglie plane waves The momentum and energy operators can be constructed in the following way. One dimension Starting in one dimension, using the plane wave solution to Schrödinger's equation of a single free particle,
$$\psi(x,t) = e^{i(px - Et)/\hbar}$$
where $p$ is interpreted as momentum in the $x$-direction and $E$ is the particle energy. The first order partial derivative with respect to space is
$$\frac{\partial \psi(x,t)}{\partial x} = \frac{ip}{\hbar}\,\psi(x,t)$$
This suggests the operator equivalence
$$\hat{p} = -i\hbar \frac{\partial}{\partial x}$$
so the momentum of the particle and the value that is measured when a particle is in a plane wave state is the (generalized) eigenvalue of the above operator. Since the partial derivative is a linear operator, the momentum operator is also linear, and because any wave function can be expressed as a superposition of other states, when this momentum operator acts on the entire superimposed wave, it yields the momentum eigenvalues for each plane wave component. These new components then superimpose to form the new state, in general not a multiple of the old wave function. Three dimensions The derivation in three dimensions is the same, except the gradient operator del is used instead of one partial derivative. In three dimensions, the plane wave solution to Schrödinger's equation is:
$$\psi(\mathbf{r},t) = e^{i(\mathbf{p}\cdot\mathbf{r} - Et)/\hbar}$$
and the gradient is
$$\nabla\psi = \frac{i}{\hbar}\left(p_x \mathbf{e}_x + p_y \mathbf{e}_y + p_z \mathbf{e}_z\right)\psi = \frac{i}{\hbar}\,\mathbf{p}\,\psi$$
where $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$ are the unit vectors for the three spatial dimensions, hence
$$\hat{\mathbf{p}} = -i\hbar\nabla$$
This momentum operator is in position space because the partial derivatives were taken with respect to the spatial variables. Definition (position space) For a single particle with no electric charge and no spin, the momentum operator can be written in the position basis as:
$$\hat{\mathbf{p}} = -i\hbar\nabla$$
where $\nabla$ is the gradient operator, $\hbar$ is the reduced Planck constant, and $i$ is the imaginary unit. In one spatial dimension, this becomes
$$\hat{p} = \hat{p}_x = -i\hbar\,\frac{\partial}{\partial x}$$
This is the expression for the canonical momentum. For a charged particle in an electromagnetic field, during a gauge transformation, the position space wave function undergoes a local U(1) group transformation, and will change its value. Therefore, the canonical momentum is not gauge invariant, and hence not a measurable physical quantity.
The kinetic momentum, a gauge invariant physical quantity, can be expressed in terms of the canonical momentum $\hat{\mathbf{p}}$, the particle's charge $q$, and the vector potential $\mathbf{A}$:
$$\hat{\mathbf{P}} = \hat{\mathbf{p}} - q\mathbf{A} = -i\hbar\nabla - q\mathbf{A}$$
The expression above is called minimal coupling. For electrically neutral particles, the canonical momentum is equal to the kinetic momentum. Properties Hermiticity The momentum operator can be described as a symmetric (i.e. Hermitian), unbounded operator acting on a dense subspace of the quantum state space. If the operator acts on a (normalizable) quantum state then the operator is self-adjoint. In physics the term Hermitian often refers to both symmetric and self-adjoint operators. (In certain artificial situations, such as the quantum states on the semi-infinite interval $[0, \infty)$, there is no way to make the momentum operator Hermitian. This is closely related to the fact that a semi-infinite interval cannot have translational symmetry—more specifically, it does not have unitary translation operators. See below.) Canonical commutation relation By applying the commutator to an arbitrary state in either the position or momentum basis, one can easily show that:
$$[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar\,\hat{I}$$
where $\hat{I}$ is the unit operator. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables. Fourier transform The following discussion uses the bra–ket notation. One may write
$$\psi(x) = \langle x|\psi\rangle, \qquad \tilde{\psi}(p) = \langle p|\psi\rangle$$
so the tilde represents the Fourier transform, in converting from coordinate space to momentum space. It then holds that
$$\langle x|\hat{p}|\psi\rangle = -i\hbar\,\frac{\partial}{\partial x}\psi(x)$$
that is, the momentum acting in coordinate space corresponds to spatial frequency. An analogous result applies for the position operator in the momentum basis,
$$\langle p|\hat{x}|\psi\rangle = i\hbar\,\frac{\partial}{\partial p}\tilde{\psi}(p)$$
leading to further useful relations,
$$\langle p|\hat{x}|p'\rangle = i\hbar\,\frac{\partial}{\partial p}\delta(p - p'), \qquad \langle x|\hat{p}|x'\rangle = -i\hbar\,\frac{\partial}{\partial x}\delta(x - x')$$
where $\delta$ stands for Dirac's delta function. Derivation from infinitesimal translations The translation operator is denoted $T(\varepsilon)$, where $\varepsilon$ represents the length of the translation. It satisfies the following identity:
$$T(\varepsilon)|x\rangle = |x + \varepsilon\rangle$$
that becomes
$$\left(T(\varepsilon)\psi\right)(x) = \psi(x - \varepsilon)$$
Assuming the function $\psi$ to be analytic (i.e. differentiable in some domain of the complex plane), one may expand in a Taylor series about $x$:
$$\psi(x - \varepsilon) = \psi(x) - \varepsilon\,\frac{d\psi}{dx} + \mathcal{O}(\varepsilon^2)$$
so for infinitesimal values of $\varepsilon$:
$$T(\varepsilon) = 1 - \varepsilon\,\frac{d}{dx} = 1 - \frac{i}{\hbar}\,\varepsilon\left(-i\hbar\,\frac{d}{dx}\right)$$
As it is known from classical mechanics, the momentum is the generator of translation, so the relation between translation and momentum operators is:
$$T(\varepsilon) = 1 - \frac{i}{\hbar}\,\varepsilon\,\hat{p}$$
thus
$$\hat{p} = -i\hbar\,\frac{d}{dx}$$
4-momentum operator Inserting the 3d momentum operator above and the energy operator
$$\hat{E} = i\hbar\,\frac{\partial}{\partial t}$$
into the 4-momentum (as a 1-form with metric signature $(+,-,-,-)$):
$$P_\mu = \left(\frac{E}{c},\, -\mathbf{p}\right)$$
obtains the 4-momentum operator:
$$\hat{P}_\mu = \left(\frac{\hat{E}}{c},\, -\hat{\mathbf{p}}\right) = i\hbar\,\partial_\mu$$
where $\partial_\mu$ is the 4-gradient, and the $-i\hbar$ becomes $+i\hbar$ preceding the 3-momentum operator. This operator occurs in relativistic quantum field theory, such as the Dirac equation and other relativistic wave equations, since energy and momentum combine into the 4-momentum vector above, momentum and energy operators correspond to space and time derivatives, and they need to be first order partial derivatives for Lorentz covariance. The Dirac operator and Dirac slash of the 4-momentum is given by contracting with the gamma matrices:
$$\gamma^\mu \hat{P}_\mu = i\hbar\,\gamma^\mu \partial_\mu$$
If the signature was $(-,+,+,+)$, the operator would be
$$\hat{P}_\mu = -i\hbar\,\partial_\mu$$
instead. See also Mathematical descriptions of the electromagnetic field Translation operator (quantum mechanics) Relativistic wave equations Pauli–Lubanski pseudovector References Quantum mechanics
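The canonical commutation relation restored above can be checked symbolically. A minimal sketch using SymPy, applying both operator orderings to an arbitrary wave function (the name psi is arbitrary):

```python
import sympy as sp

x, hbar = sp.symbols("x hbar", real=True)
psi = sp.Function("psi")(x)

def p(f):
    # Position-space momentum operator: p f = -i*hbar * df/dx
    return -sp.I * hbar * sp.diff(f, x)

# [x, p] psi = x p psi - p (x psi)
commutator = x * p(psi) - p(x * psi)
print(sp.simplify(commutator))   # I*hbar*psi(x), i.e. [x, p] = i*hbar
```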
Momentum operator
[ "Physics" ]
1,309
[ "Quantum operators", "Quantum mechanics" ]
2,288,590
https://en.wikipedia.org/wiki/Crystal%20earpiece
A crystal earpiece is a type of piezoelectric earphone, producing sound by using a piezoelectric crystal, a material that changes its shape when electricity is applied to it. It is usually designed to plug into the ear canal of the user. Operation A crystal earpiece typically consists of a piezoelectric crystal with metal electrodes attached to either side, glued to a conical plastic or metal foil diaphragm, enclosed in a plastic case. The piezoelectric material used in early crystal earphones was Rochelle salt, but modern earphones use barium titanate, or less often quartz. When the audio signal is applied to the electrodes, the crystal bends back and forth a little with the signal, vibrating the diaphragm. The diaphragm pushes on the air, creating sound waves. The plastic earpiece casing confines the sound waves and conducts them efficiently into the ear canal, to the eardrum. The diaphragm is generally fixed at its outer edge, relying on bending to operate. The air path in the earpiece is generally a horn shape, with a narrowing column of air which increases the air displacement at the eardrum, increasing the volume. Application Crystal earpieces are usually monaural devices with very low sound fidelity, but high sensitivity and impedance. Their peak use was probably with 1960s era transistor radios and hearing aids. They are not used with modern portable media players due to unacceptable sound quality. The main causes of poor performance with these earpieces are low diaphragm excursion, nonlinearity, in-band resonance and the very short horn shape of the earpiece casing. The resulting sound is very tinny and lacking in bass. Modern headphones use electromagnetic drivers that work similarly to speakers, with moving coils or moving iron cores in a magnetic field. One remaining use for crystal earpieces is in crystal radios. Their very high sensitivity enables them to use the very weak signals produced by crystal radios, and their high impedance (on the order of 20 kilohms) is a good match for the typical crystal radio. They have also been used as microphones, with their high output requiring less amplification. Crystal earpieces can also be used as rudimentary, low voltage, audio circuit troubleshooting tools; it is sufficient to touch the tip of the earpiece's audio connector on a point of interest while simultaneously touching the other (sleeve) connection with one's finger. The high impedance of the earpiece means that any audio-range signal applied to the tip of the connector will be heard in the earpiece. This quick-and-dirty technique can remove the need for setting up and probing with an oscilloscope or connecting an amplifier to the test point in the first instance. References Headphones Audio electronics
Crystal earpiece
[ "Engineering" ]
580
[ "Audio electronics", "Audio engineering" ]
2,288,777
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger%20effect
The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities. It was first described by David Dunning and Justin Kruger in 1999. Some researchers also include the opposite effect for high performers: their tendency to underestimate their skills. In popular culture, the Dunning–Kruger effect is often misunderstood as a claim about general overconfidence of people with low intelligence instead of specific overconfidence of people unskilled at a particular task. Numerous similar studies have been done. The Dunning–Kruger effect is usually measured by comparing self-assessment with objective performance. For example, participants may take a quiz and estimate their performance afterward, which is then compared to their actual results. The original study focused on logical reasoning, grammar, and social skills. Other studies have been conducted across a wide range of tasks. They include skills from fields such as business, politics, medicine, driving, aviation, spatial memory, examinations in school, and literacy. There is disagreement about the causes of the Dunning–Kruger effect. According to the metacognitive explanation, poor performers misjudge their abilities because they fail to recognize the qualitative difference between their performances and the performances of others. The statistical model explains the empirical findings as a statistical effect in combination with the general tendency to think that one is better than average. Some proponents of this view hold that the Dunning–Kruger effect is mostly a statistical artifact. The rational model holds that overly positive prior beliefs about one's skills are the source of false self-assessment. Another explanation claims that self-assessment is more difficult and error-prone for low performers because many of them have very similar skill levels. There is also disagreement about where the effect applies and about how strong it is, as well as about its practical consequences. Inaccurate self-assessment could potentially lead people to making bad decisions, such as choosing a career for which they are unfit, or engaging in dangerous behavior. It may also inhibit people from addressing their shortcomings to improve themselves. Critics argue that such an effect would have much more dire consequences than what is observed. Definition The Dunning–Kruger effect is defined as the tendency of people with low ability in a specific area to give overly positive assessments of this ability. This is often seen as a cognitive bias, i.e. as a systematic tendency to engage in erroneous forms of thinking and judging. In the case of the Dunning–Kruger effect, this applies mainly to people with low skill in a specific area trying to evaluate their competence within this area. The systematic error concerns their tendency to greatly overestimate their competence, i.e. to see themselves as more skilled than they are. The Dunning–Kruger effect is usually defined specifically for the self-assessments of people with a low level of competence. But some theorists do not restrict it to the bias of people with low skill, also discussing the reverse effect, i.e., the tendency of highly skilled people to underestimate their abilities relative to the abilities of others. In this case, the source of the error may not be the self-assessment of one's skills, but an overly positive assessment of the skills of others. 
This phenomenon can be understood as a form of the false-consensus effect, i.e., the tendency to "overestimate the extent to which other people share one's beliefs, attitudes, and behaviours". Some researchers include a metacognitive component in their definition. In this view, the Dunning–Kruger effect is the thesis that those who are incompetent in a given area tend to be ignorant of their incompetence, i.e., they lack the metacognitive ability to become aware of their incompetence. This definition lends itself to a simple explanation of the effect: incompetence often includes being unable to tell the difference between competence and incompetence. For this reason, it is difficult for the incompetent to recognize their incompetence. This is sometimes termed the "dual-burden" account, since low performers are affected by two burdens: they lack a skill and they are unaware of this deficiency. Other definitions focus on the tendency to overestimate one's ability and see the relation to metacognition as a possible explanation that is not part of the definition. This contrast is relevant since the metacognitive explanation is controversial. Many criticisms of the Dunning–Kruger effect target this explanation but accept the empirical findings that low performers tend to overestimate their skills. Among laypeople, the Dunning–Kruger effect is often misunderstood as the claim that people with low intelligence are more confident in their knowledge and skills than people with high intelligence. According to psychologist Robert D. McIntosh and his colleagues, it is sometimes understood in popular culture as the claim that "stupid people are too stupid to know they are stupid". But the Dunning–Kruger effect applies not to intelligence in general but to skills in specific tasks. Nor does it claim that people lacking a given skill are as confident as high performers. Rather, low performers overestimate themselves but their confidence level is still below that of high performers. Measurement, analysis, and investigated tasks The most common approach to measuring the Dunning–Kruger effect is to compare self-assessment with objective performance. The self-assessment is sometimes called subjective ability in contrast to the objective ability corresponding to the actual performance. The self-assessment may be done before or after the performance. If done afterward, the participants receive no independent clues during the performance as to how well they did. Thus, if the activity involves answering quiz questions, no feedback is given as to whether a given answer was correct. The measurement of the subjective and the objective abilities can be in absolute or relative terms. When done in absolute terms, self-assessment and performance are measured according to objective standards, e.g. concerning how many quiz questions were answered correctly. When done in relative terms, the results are compared with a peer group. In this case, participants are asked to assess their performances in relation to the other participants, for example in the form of estimating the percentage of peers they outperformed. The Dunning–Kruger effect is present in both cases, but tends to be significantly more pronounced when done in relative terms. This means that people are usually more accurate when predicting their raw score than when assessing how well they did relative to their peer group. The main point of interest for researchers is usually the correlation between subjective and objective ability. 
To provide a simplified form of analysis of the measurements, objective performances are often divided into four groups. They start from the bottom quartile of low performers and proceed to the top quartile of high performers. The strongest effect is seen for the participants in the bottom quartile, who tend to see themselves as being part of the top two quartiles when measured in relative terms. The initial study by David Dunning and Justin Kruger examined the performance and self-assessment of undergraduate students in inductive, deductive, and abductive logical reasoning; English grammar; and appreciation of humor. Across four studies, the research indicates that the participants who scored in the bottom quartile overestimated their test performance and their abilities. Their test scores placed them in the 12th percentile, but they ranked themselves in the 62nd percentile. Other studies focus on how a person's self-view causes inaccurate self-assessments. Some studies indicate that the extent of the inaccuracy depends on the type of task and can be improved by becoming a better performer. Overall, the Dunning–Kruger effect has been studied across a wide range of tasks, in aviation, business, debating, chess, driving, literacy, medicine, politics, spatial memory, and other fields. Many studies focus on students—for example, how they assess their performance after an exam. In some cases, these studies gather and compare data from different countries. Studies are often done in laboratories; the effect has also been examined in other settings. Examples include assessing hunters' knowledge of firearms and large Internet surveys. Explanations Various theorists have tried to provide models to explain the Dunning–Kruger effect's underlying causes. The original explanation by Dunning and Kruger holds that a lack of metacognitive abilities is responsible. This interpretation is not universally accepted, and many alternative explanations are discussed in the academic literature. Some of them focus only on one specific factor, while others see a combination of various factors as the cause. Metacognitive The metacognitive explanation rests on the idea that part of acquiring a skill consists in learning to distinguish between good and bad performances of the skill. It assumes that people of low skill level are unable to properly assess their performance because they have not yet acquired the discriminatory ability to do so. This leads them to believe that they are better than they actually are because they do not see the qualitative difference between their performance and that of others. In this regard, they lack the metacognitive ability to recognize their incompetence. This model has also been called the "dual-burden account" or the "double-burden of incompetence", since the burden of regular incompetence is paired with the burden of metacognitive incompetence. The metacognitive lack may hinder some people from becoming better by hiding their flaws from them. This can then be used to explain how self-confidence is sometimes higher for unskilled people than for people with an average skill: only the latter are aware of their flaws. Some attempts have been made to measure metacognitive abilities directly to examine this hypothesis. Some findings suggest that poor performers have reduced metacognitive sensitivity, but it is not clear that its extent is sufficient to explain the Dunning–Kruger effect. 
Another study concluded that unskilled people lack information but that their metacognitive processes have the same quality as those of skilled people. An indirect argument for the metacognitive model is based on the observation that training people in logical reasoning helps them make more accurate self-assessments. Many criticisms of the metacognitive model hold that it has insufficient empirical evidence and that alternative models offer a better explanation. Statistical and better-than-average effect A different interpretation is further removed from the psychological level and sees the Dunning–Kruger effect as mainly a statistical artifact. It is based on the idea that the statistical effect known as regression toward the mean explains the empirical findings. This effect happens when two variables are not perfectly correlated: if one picks a sample that has an extreme value for one variable, it tends to show a less extreme value for the other variable. For the Dunning–Kruger effect, the two variables are actual performance and self-assessed performance. If a person with low actual performance is selected, their self-assessed performance tends to be higher. Most researchers acknowledge that regression toward the mean is a relevant statistical effect that must be taken into account when interpreting the empirical findings. This can be achieved by various methods. Some theorists, like Gilles Gignac and Marcin Zajenkowski, go further and argue that regression toward the mean in combination with other cognitive biases, like the better-than-average effect, can explain most of the empirical findings. This type of explanation is sometimes called "noise plus bias". According to the better-than-average effect, people generally tend to rate their abilities, attributes, and personality traits as better than average. For example, the average IQ is 100, but people on average think their IQ is 115. The better-than-average effect differs from the Dunning–Kruger effect since it does not track how the overly positive outlook relates to skill. The Dunning–Kruger effect, on the other hand, focuses on how this type of misjudgment happens for poor performers. When the better-than-average effect is paired with regression toward the mean, it shows a similar tendency. This way, it can explain both that unskilled people greatly overestimate their competence and that the reverse effect for highly skilled people is much less pronounced. This can be shown using simulated experiments that have almost the same correlation between objective and self-assessed ability as actual experiments. Some critics of this model have argued that it can explain the Dunning–Kruger effect only when assessing one's ability relative to one's peer group. But it may not be able to explain self-assessment relative to an objective standard. A further objection claims that seeing the Dunning–Kruger effect as a regression toward the mean is only a form of relabeling the problem and does not explain what mechanism causes the regression. Based on statistical considerations, Nuhfer et al. arrive at the conclusion that there is no strong tendency to overly positive self-assessment and that the label "unskilled and unaware of it" applies only to few people. Science communicator Jonathan Jarry makes the case that this effect is the only one shown in the original and subsequent papers. 
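To make the statistical account concrete, the following toy simulation, in the spirit of the simulated experiments mentioned above, combines a noisy self-assessment (producing regression toward the mean) with a uniform better-than-average bias; the parameter values are arbitrary assumptions, not estimates from any published study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
skill = rng.normal(50, 15, n)  # actual performance score
# Self-assessment: only weakly tied to skill (noise) plus a flat +15 bias.
self_est = 0.3 * skill + 0.7 * 50 + 15 + rng.normal(0, 10, n)

# Bin participants into quartiles of actual performance.
quartile = np.digitize(skill, np.percentile(skill, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    print(f"quartile {q + 1}: actual {skill[m].mean():5.1f}, "
          f"self-assessed {self_est[m].mean():5.1f}")
```

With no metacognitive deficit built in, the bottom quartile still overestimates itself substantially while the top quartile is roughly accurate, which is the pattern the "noise plus bias" critics point to.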
Dunning has defended his findings, writing that purely statistical explanations often fail to consider key scholarly findings while adding that self-misjudgements are real regardless of their underlying cause. Rational The rational model of the Dunning–Kruger effect explains the observed regression toward the mean not as a statistical artifact but as the result of prior beliefs. If low performers expect to perform well, this can cause them to give an overly positive self-assessment. This model uses a psychological interpretation that differs from the metacognitive explanation. It holds that the error is caused by overly positive prior beliefs and not by the inability to correctly assess oneself. For example, after answering a ten-question quiz, a low performer with only four correct answers may believe they got two questions right and five questions wrong, while they are unsure about the remaining three. Because of their positive prior beliefs, they will automatically assume that they got these three remaining questions right and thereby overestimate their performance. Distribution of high and low performers Another model sees the way high and low performers are distributed as the source of erroneous self-assessment. It is based on the assumption that many low performers' skill levels are very similar, i.e., that "many people [are] piled up at the bottom rungs of skill level". This would make it much more difficult for them to accurately assess their skills in relation to their peers. According to this model, the reason for the increased tendency to give false self-assessments is not a lack of metacognitive ability but a more challenging situation in which this ability is applied. One criticism of this interpretation is directed against the assumption that this type of distribution of skill levels can always be used as an explanation. While it can be found in various fields where the Dunning–Kruger effect has been researched, it is not present in all of them. Another criticism holds that this model can explain the Dunning–Kruger effect only when the self-assessment is measured relative to one's peer group. But it may fail when it is measured relative to absolute standards. Lack of incentive A further explanation, sometimes given by theorists with an economic background, focuses on the fact that participants in the corresponding studies lack incentive to give accurate self-assessments. In such cases, intellectual laziness or a desire to look good to the experimenter may motivate participants to give overly positive self-assessments. For this reason, some studies were conducted with additional incentives to be accurate. One study gave participants a monetary reward based on how accurate their self-assessments were. These studies failed to show any significant increase in accuracy for the incentive group in contrast to the control group. Practical significance There are disagreements about the Dunning–Kruger effect's magnitude and practical consequences as compared to other psychological effects. Claims about its significance often focus on how it causes affected people to make decisions that have bad outcomes for them or others. For example, according to Gilles E. Gignac and Marcin Zajenkowski, it can have long-term consequences by leading poor performers into careers for which they are unfit. High performers underestimating their skills, though, may forgo viable career opportunities matching their skills in favor of less promising ones that are below their skill level. 
In other cases, the wrong decisions can also have short-term effects. For example, Pavel et al. hold that overconfidence can lead pilots to operate a new aircraft for which they lack adequate training or to engage in flight maneuvers that exceed their proficiency. Emergency medicine is another area where the correct assessment of one's skills and the risks of treatment matters. According to Lisa TenEyck, the tendencies of physicians in training to be overconfident must be considered to ensure the appropriate degree of supervision and feedback. Schlösser et al. hold that the Dunning–Kruger effect can also negatively affect economic activities. This is the case, for example, when the price of a good, such as a used car, is lowered by the buyers' uncertainty about its quality. An overconfident buyer unaware of their lack of knowledge may be willing to pay a much higher price because they do not take into account all the potential flaws and risks relevant to the price. Another implication concerns fields in which researchers rely on people's self-assessments to evaluate their skills. This is common, for example, in vocational counseling or to estimate students' and professionals' information literacy skills. According to Khalid Mahmood, the Dunning–Kruger effect indicates that such self-assessments often do not correspond to the underlying skills. It implies that they are unreliable as a method for gathering this type of data. Regardless of the field in question, the metacognitive ignorance often linked to the Dunning–Kruger effect may inhibit low performers from improving themselves. Since they are unaware of many of their flaws, they may have little motivation to address and overcome them. Not all accounts of the Dunning–Kruger effect focus on its negative sides. Some also concentrate on its positive sides, e.g. that ignorance is sometimes bliss. In this sense, optimism can lead people to experience their situation more positively, and overconfidence may help them achieve even unrealistic goals. To distinguish the negative from the positive sides, two important phases have been suggested to be relevant for realizing a goal: preparatory planning and the execution of the plan. According to Dunning, overconfidence may be beneficial in the execution phase by increasing motivation and energy. However, it can be detrimental in the planning phase, since the agent may ignore bad odds, take unnecessary risks, or fail to prepare for contingencies. For example, being overconfident may be advantageous for a general on the day of battle because of the additional inspiration passed on to his troops. But it can be disadvantageous in the weeks before by ignoring the need for reserve troops or additional protective gear. Historical precursors of the Dunning–Kruger effect were expressed by theorists such as Charles Darwin ("Ignorance more frequently begets confidence than does knowledge") and Bertrand Russell ("...in the modern world the stupid are cocksure while the intelligent are full of doubt"). In 2000, Kruger and Dunning were awarded the satirical Ig Nobel Prize in recognition of the scientific work recorded in "their modest report". See also References Citations Sources Further reading External links Cognitive biases Cognitive inertia Incompetence
Dunning–Kruger effect
[ "Biology" ]
3,970
[ "Incompetence", "Behavior", "Human behavior" ]
2,288,809
https://en.wikipedia.org/wiki/Embedding%20effect
The embedding effect is an issue in environmental economics and other branches of economics where researchers wish to identify the value of a specific public good using a contingent valuation or willingness-to-pay (WTP) approach. The problem arises because public goods belong to society as a whole, and are generally not traded in the market. Because market prices cannot be used to value them, researchers ask a sample of people how much they are willing to pay for the public good, wildlife preservation for example. The results can be misleading because of the difficulty, for individual society members, of identifying the particular value that they attach to one particular thing which is embedded in a collection of similar things (e.g. the Tower of London within the set of all globally important historic monuments or Caernarvon Castle within the set of all Welsh Scheduled Monuments). A similar problem occurs with a wider selection of public goods (for example, whether spending on preserving a specific wetland is more important than preserving a specific person's life for the next two years using taxpayers' money). The embedding effect suggests the contingent valuation method is not an unbiased approach to measuring policy impacts for cost-benefit analysis of environmental and other government policies. Policy implications Few government policies are independent of any other governmental policy. Most policies involve either substitute or complementary relationships with others at either the same or different intergovernmental level. For example, in the USA, the protection of coastal water quality is a goal of both state and multiple federal agencies. The Clean Water Act, wetlands protection programs, and fisheries management plans all address coastal water quality. These policies may be substitutes or complements for each other. These relationships complicate the application of the contingent valuation method. The resulting problems that may be encountered have been called the part-whole bias and sequencing and nesting (see below). One method of overcoming some aspects of this problem is to ask two questions: (1) How much would you be willing to contribute to a specific tax fund for the whole set of items to be preserved? (e.g. all Coral Sea areas off the coast of Australia), followed by (2) How much of this would you like to give to the preservation of the specific named item? (e.g. the Great Barrier Reef). These questions may also be supplemented by questions which ask about the respective importance of alternatives, e.g. whether the preservation of the Great Barrier Reef is more/the same/less important than other public goods such as poor relief, health care, education etc. Part-whole bias If the contingent valuation method is used to elicit willingness to pay for two government policies independently (the parts), the sum of the independently estimated willingness to pay amounts may be different from the willingness to pay elicited for both projects (the whole). This result is troubling if the projects are geographically related, for example, different wilderness areas (McFadden, 1994). This result does not violate the nonsatiation axiom of consumer theory if the projects are perfect substitutes (Carson and Mitchell, 1995). Several applications of the contingent valuation method have found an absence of part-whole bias (e.g., Whitehead, Haab, and Huang, 1998). Sequencing and Nesting A related issue occurs with the sequential valuation of projects.
Consider a two-part policy valued in two different sequences. The willingness to pay for a project when valued first will be larger than when the question is placed second. Independent valuation, in effect valuing each project at the beginning of a sequence, will always lead to the largest of the possible willingness to pay estimates. This result is expected for the value of public goods estimated with the contingent valuation method due to substitution and income effects (Hoehn and Randall, 1989; Carson, Flores, and Hanemann, 1998). References Carson, Richard T., and Robert Cameron Mitchell, “Sequencing and Nesting in Contingent Valuation Surveys,” Journal of Environmental Economics and Management, 28, 155-173, 1995. Hoehn, John P., and Alan Randall, “Too Many Proposals Pass the Benefit Cost Test,” American Economic Review, 79, 541-551, 1989. Carson, Richard T., Nicholas E. Flores, and W. Michael Hanemann, “Sequencing and Valuing Public Goods,” Journal of Environmental Economics and Management, 36, 314-324, 1998. McFadden, Daniel, “Contingent Valuation and Social Choice,” American Journal of Agricultural Economics, 76, 689-708, 1994. Whitehead, John C., Timothy C. Haab, and Ju-Chin Huang, “Part-Whole Bias in Contingent Valuation: Will Scope Effects Be Detected with Inexpensive Survey Methods?” Southern Economic Journal, 65, 160-168, 1998. Environmental economics
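A toy calculation makes the part-whole logic and the two-question format concrete. All amounts below are invented for illustration; they are not survey data:

```python
# Part-whole bias check: WTP elicited for each part independently
# versus WTP elicited for the whole package (hypothetical numbers).
wtp_independent = {"wetland A": 40.0, "wetland B": 35.0}
wtp_whole = 50.0  # both wetlands valued together as one package

parts_sum = sum(wtp_independent.values())
print(parts_sum)              # 75.0
print(parts_sum - wtp_whole)  # 25.0 -> part-whole discrepancy

# Two-question format from the text: first elicit WTP for the whole
# set, then ask how the respondent would allocate it to a named item.
whole_fund = 50.0
share_for_a = 0.6  # hypothetical allocation fraction
print(whole_fund * share_for_a)  # 30.0 -> embedded value of wetland A
```

The discrepancy between the sum of the parts and the whole is the quantity a scope test looks for; the allocation step is one way of bounding the embedded value of a single item by the value of the set containing it.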
Embedding effect
[ "Environmental_science" ]
970
[ "Environmental economics", "Environmental social science" ]
2,288,927
https://en.wikipedia.org/wiki/Negative%20thermal%20expansion
Negative thermal expansion (NTE) is an unusual physicochemical process in which some materials contract upon heating, rather than expand as most other materials do. The most well-known material with NTE is water at 0 to 3.98 °C. Also, the density of solid water (ice) is lower than the density of liquid water at standard pressure, which is why ice floats rather than sinks in liquid water. Materials which undergo NTE have a range of potential engineering, photonic, electronic, and structural applications. For example, if one were to mix a negative thermal expansion material with a "normal" material which expands on heating, it could be possible to use it as a thermal expansion compensator that might allow for forming composites with tailored or even close to zero thermal expansion. Origin of negative thermal expansion There are a number of physical processes which may cause contraction with increasing temperature, including transverse vibrational modes, rigid unit modes and phase transitions. In 2011, Liu et al. showed that the NTE phenomenon originates from the existence of high-pressure, small-volume configurations with higher entropy, which are present in the stable phase matrix through thermal fluctuations. They were able to predict both the colossal positive thermal expansion (in cerium) and zero and infinite negative thermal expansion (in Fe3Pt). Alternatively, large negative and positive thermal expansion may result from the design of internal microstructure. Negative thermal expansion in close-packed systems Negative thermal expansion is usually observed in non-close-packed systems with directional interactions (e.g. ice, graphene, etc.) and complex compounds (e.g. zirconium tungstate, scandium trifluoride, beta-quartz, some zeolites, etc.). However, in a paper, it was shown that negative thermal expansion (NTE) is also realized in single-component close-packed lattices with pair central force interactions. The following sufficient condition for a potential giving rise to NTE behavior is proposed for the interatomic potential Π(r) at the equilibrium distance a: Π′′′(a) > 0, where Π′′′(a) is shorthand for the third derivative of the interatomic potential at the equilibrium point: Π′′′(a) = d³Π/dr³ evaluated at r = a. This condition is (i) necessary and sufficient in 1D and (ii) sufficient, but not necessary, in 2D and 3D. An approximate necessary and sufficient condition, involving both Π″(a) and Π′′′(a) together with the space dimensionality d, is derived in a later paper. Thus in 2D and 3D negative thermal expansion in close-packed systems with pair interactions is realized even when the third derivative of the potential is zero or even negative. Note that the one-dimensional and multidimensional cases are qualitatively different. In 1D thermal expansion is caused by anharmonicity of the interatomic potential only. Therefore, the sign of the thermal expansion coefficient is determined by the sign of the third derivative of the potential. In the multidimensional case geometrical nonlinearity is also present, i.e. lattice vibrations are nonlinear even in the case of a harmonic interatomic potential. This nonlinearity contributes to thermal expansion. Therefore, in the multidimensional case both Π″(a) and Π′′′(a) are present in the condition for negative thermal expansion. Materials Perhaps one of the most studied materials to exhibit negative thermal expansion is zirconium tungstate (ZrW2O8). This compound contracts continuously over a temperature range of 0.3 to 1050 K (at higher temperatures the material decomposes).
Other materials that exhibit NTE behaviour include other members of the AM2O8 family of materials (where A = Zr or Hf and M = Mo or W) and ZrV2O7 and HfV2O7, though the latter two only in their high-temperature phase, starting at 350 to 400 K; controllable negative thermal expansion has also been demonstrated in some of these systems. Cubic materials such as ZrW2O8 and ScF3 are especially valuable for applications in engineering because they exhibit isotropic NTE, i.e. the NTE is the same in all three dimensions, making it easier to apply them as thermal expansion compensators. Ordinary ice shows NTE in its hexagonal and cubic phases at very low temperatures (below –200 °C). In its liquid form, pure water also displays negative thermal expansivity below 3.984 °C. ALLVAR Alloy 30, a titanium-based alloy, shows NTE over a wide temperature range, with a −30 ppm/°C instantaneous coefficient of thermal expansion at 20 °C. ALLVAR Alloy 30's negative thermal expansion is anisotropic. This commercially available material is used in the optics, aerospace, and cryogenics industries in the form of optical spacers that prevent thermal defocus, ultra-stable struts, and washers for thermally-stable bolted joints. Carbon fiber shows NTE between 20 °C and 500 °C. This property is utilized in tight-tolerance aerospace applications to tailor the CTE of carbon fiber reinforced plastic components for specific applications/conditions, by adjusting the ratio of carbon fiber to plastic and by adjusting the orientation of the carbon fibers within the part. Quartz (SiO2) and a number of zeolites also show NTE over certain temperature ranges. Fairly pure silicon (Si) has a negative coefficient of thermal expansion for temperatures between about 18 K and 120 K. Cubic scandium trifluoride (ScF3) has this property, which is explained by the quartic oscillation of the fluoride ions. The energy stored in the bending strain of the fluoride ion is proportional to the fourth power of the displacement angle, unlike most other materials where it is proportional to the square of the displacement. A fluorine atom is bound to two scandium atoms, and as temperature increases the fluorine oscillates more perpendicularly to its bonds. This draws the scandium atoms together throughout the material and it contracts. ScF3 exhibits this property from 10 to 1100 K, above which it shows the normal positive thermal expansion. Shape memory alloys such as NiTi are a nascent class of materials that exhibit zero and negative thermal expansion. Applications Forming a composite of a material with (ordinary) positive thermal expansion and a material with (anomalous) negative thermal expansion could allow for tailoring the thermal expansion of the composite or even producing composites with a thermal expansion close to zero. Negative and positive thermal expansion then compensate each other to a certain extent as the temperature changes. Tailoring the overall thermal expansion coefficient (CTE) to a certain value can be achieved by varying the volume fractions of the different materials contributing to the thermal expansion of the composite. Especially in engineering there is a need for materials with a CTE close to zero, i.e. with constant performance over a large temperature range, e.g. for application in precision instruments. But materials with a CTE close to zero are also required in everyday life, as the following examples show.
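Before turning to the everyday examples, the volume-fraction tailoring described above can be sketched numerically. This is a minimal rule-of-mixtures estimate that assumes the phases strain together and ignores stiffness mismatch (a Turner-type model weighted by bulk moduli would be more realistic); the CTE values are round illustrative numbers, with the negative one roughly in the range reported for ZrW2O8:

```python
# Rule-of-mixtures (Voigt-type) sketch for a two-phase composite CTE.
alpha_pos = 9e-6    # 1/K, CTE of a "normal" matrix (illustrative value)
alpha_neg = -9e-6   # 1/K, CTE of an NTE filler such as ZrW2O8 (approximate)

def composite_cte(f_filler: float) -> float:
    """CTE of the mixture for a given NTE-filler volume fraction."""
    return (1 - f_filler) * alpha_pos + f_filler * alpha_neg

# Volume fraction of NTE filler needed for zero overall expansion:
f_zero = alpha_pos / (alpha_pos - alpha_neg)
print(f_zero)                 # 0.5 for these symmetric values
print(composite_cte(f_zero))  # ~0.0
```

With asymmetric CTE values the zero-expansion fraction simply shifts toward the phase with the smaller-magnitude coefficient, which is the lever designers use in practice.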
Glass-ceramic cooktops such as Ceran cooktops need to withstand large temperature gradients and rapid changes in temperature while cooking, because only certain parts of the cooktop are heated while other parts stay close to ambient temperature. In general, because glass is brittle, temperature gradients can cause cracks. However, the glass-ceramics used in cooktops consist of multiple different phases, some exhibiting positive and others negative thermal expansion. The expansions of the different phases compensate each other, so that the volume of the glass-ceramic changes little with temperature and crack formation is avoided. Dental fillings are an everyday example of the need for materials with tailored thermal expansion. If a filling expands by an amount different from the tooth, for example when drinking a hot or cold drink, the mismatch can cause a toothache. If, however, dental fillings are made of a composite material containing a mixture of materials with positive and negative thermal expansion, the overall expansion can be precisely tailored to that of tooth enamel. References Further reading Physical chemistry Thermodynamics Materials science
Negative thermal expansion
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,575
[ "Applied and interdisciplinary physics", "Materials science", "Thermodynamics", "nan", "Physical chemistry", "Dynamical systems" ]
2,289,050
https://en.wikipedia.org/wiki/Magic%20angle
The magic angle is a precisely defined angle, the value of which is approximately 54.7356°. The magic angle is a root of a second-order Legendre polynomial, P2(cos θ) = (3cos²θ − 1)/2, and so any interaction which depends on this second-order Legendre polynomial vanishes at the magic angle. This property makes the magic angle of particular importance in magic angle spinning solid-state NMR spectroscopy. In magnetic resonance imaging, structures with ordered collagen, such as tendons and ligaments, oriented at the magic angle may appear hyperintense in some sequences; this is called the magic angle artifact or effect. Mathematical definition The magic angle θm is θm = arccos(1/√3) = arctan(√2) ≈ 54.7356°, where arccos and arctan are the inverse cosine and tangent functions respectively. θm is also the angle between the space diagonal of a cube and any of its three connecting edges. The magic angle is likewise half of the opening angle formed when a cube is rotated about its space-diagonal axis; this double angle may be represented as 2θm = arccos(−1/3) ≈ 109.4712°, or approximately 1.9106 radians. This double magic angle is directly related to tetrahedral molecular geometry and is the angle between two vertices and the exact center of a tetrahedron (i.e., the edge central angle, also known as the tetrahedral angle). Magic angle and nuclear magnetic resonance In nuclear magnetic resonance (NMR) spectroscopy, three prominent nuclear magnetic interactions, dipolar coupling, chemical shift anisotropy (CSA), and first-order quadrupolar coupling, depend on the orientation of the interaction tensor with the external magnetic field. By spinning the sample around a given axis, their average angular dependence becomes: ⟨3cos²θ − 1⟩ = ½(3cos²θr − 1)(3cos²β − 1), where θ is the angle between the principal axis of the interaction and the magnetic field, θr is the angle of the axis of rotation relative to the magnetic field and β is the (arbitrary) angle between the axis of rotation and the principal axis of the interaction. For dipolar couplings, the principal axis corresponds to the internuclear vector between the coupled spins; for the CSA, it corresponds to the direction with the largest deshielding; for the quadrupolar coupling, it corresponds to the z-axis of the electric-field gradient tensor. The angle β cannot be manipulated as it depends on the orientation of the interaction relative to the molecular frame and on the orientation of the molecule relative to the external field. The angle θr, however, can be chosen by the experimenter. If one sets θr = θm ≈ 54.74°, then the average angular dependence goes to zero. Magic angle spinning is a technique in solid-state NMR spectroscopy which employs this principle to remove or reduce the influence of anisotropic interactions, thereby increasing spectral resolution. For a time-independent interaction, i.e. heteronuclear dipolar couplings, CSA and first-order quadrupolar couplings, the anisotropic component is greatly reduced and almost suppressed in the limit of fast spinning, i.e. when the spinning frequency is greater than the width of the interaction. The averaging is only close to zero in a first-order perturbation theory treatment; higher order terms cause allowed frequencies at multiples of the spinning frequency to appear, creating spinning side-bands in the spectra. Time-dependent interactions, such as homonuclear dipolar couplings, are more difficult to average to their isotropic values by magic angle spinning; a network of strongly coupled spins will produce a mixing of spin states during the course of the sample rotation, interfering with the averaging process.
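The definitions and the averaging formula above can be verified with a short numerical sketch (nothing here is specific to any particular spectrometer; the angles in the loop are arbitrary test values):

```python
import numpy as np

# Magic angle from its two equivalent closed forms.
theta_m = np.arccos(1 / np.sqrt(3))
print(np.degrees(theta_m))                 # 54.7356...
print(np.degrees(np.arctan(np.sqrt(2))))   # same value

# P2(cos(theta)) = (3 cos^2(theta) - 1) / 2 vanishes at the magic angle.
def p2(theta: float) -> float:
    return (3 * np.cos(theta) ** 2 - 1) / 2

print(p2(theta_m))                         # ~0, to machine precision

# Spinning about an axis at angle theta_r to the field scales the
# P2-dependent (anisotropic) part of an interaction by P2(cos(theta_r)):
for deg in (0.0, 30.0, 54.7356, 90.0):
    print(deg, p2(np.radians(deg)))        # zero only at the magic angle
```

The last loop shows why spinning at any other angle only scales, rather than removes, the anisotropic broadening.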
Application to medical imaging: The magic angle artifact The magic angle artifact refers to the increased signal observed when MRI sequences with short echo time (TE) (e.g., T1-weighted or proton-density spin-echo sequences) are used to image tissues with well-ordered collagen fibers in one direction (e.g., tendon or articular hyaline cartilage). This artifact occurs when the angle such fibers make with the magnetic field is equal to the magic angle, approximately 55°. Example: This artifact comes into play when evaluating the rotator cuff tendons of the shoulder. The magic angle effect can create the appearance of supraspinatus tendinitis. Reinforced rubber To achieve optimal loading in a straight rubber hose, the fibres must be positioned at an angle of approximately 54.7°, also referred to as the magic angle. This angle of 54.7° exactly balances the internal-pressure-induced longitudinal stress and the hoop (circumferential) stress. References Nuclear magnetic resonance spectroscopy Magnetic resonance imaging Mathematical constants
Magic angle
[ "Physics", "Chemistry", "Mathematics" ]
907
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Magnetic resonance imaging", "Nuclear magnetic resonance spectroscopy", "Mathematical objects", "nan", "Mathematical constants", "Spectroscopy", "Numbers" ]
2,289,103
https://en.wikipedia.org/wiki/Electron-capture%20dissociation
Electron-capture dissociation (ECD) is a method of fragmenting gas-phase ions for structure elucidation of peptides and proteins in tandem mass spectrometry. It is one of the most widely used techniques for activation and dissociation of mass-selected precursor ions in MS/MS. It involves the direct introduction of low-energy electrons to trapped gas-phase ions. History Electron-capture dissociation was developed by Roman Zubarev and Neil Kelleher while in Fred McLafferty's lab at Cornell University. Irradiation of melittin 4+ ions and ubiquitin 10+ ions (trapped in an FT-MS cell) by laser pulses resulted not only in peculiar c′, z• fragmentation but also in charge reduction. It was suggested that if the FT cell were modified to trap cations and electrons simultaneously, secondary electrons emitted by UV photons would increase the charge-reduction effect and the c′, z• fragmentation. Replacing the UV laser with an EI source led to the development of this new technique. Principles Electron-capture dissociation typically involves a multiply protonated molecule M interacting with a free electron to form an odd-electron ion: [M + nH]n+ + e− → [M + nH](n−1)+• → fragments. Liberation of the electric potential energy results in fragmentation of the product ion. The rate of electron-capture dissociation depends not only on the frequency of ion–electron fragmentation reactions but also on the number of ions in the ion–electron interaction volume. The fragmentation frequency is directly proportional to the electron current density and the ECD cross-section. Using an indirectly heated dispenser cathode as the electron source provides a larger electron current and a larger emitting surface area. ECD devices can take two forms: they can trap analyte ions during the ECD stage, or they can operate in flow-through mode, where dissociation takes place as analyte ions flow continuously through the ECD region. Flow-through mode has the advantage that nearly all of the analyte ion beam is used; however, the ECD efficiency is lower in this mode. ECD produces significantly different types of fragment ions than other MS/MS fragmentation methods: primarily c- and z-type ions (although b-ions have also been identified in ECD), compared with electron-detachment dissociation (EDD) (primarily a- and x-types), collision-induced dissociation (CID) (primarily b- and y-types) and infrared multiphoton dissociation (IRMPD). CID and IRMPD introduce internal vibrational energy in some way or another, causing loss of post-translational modifications during fragmentation. In ECD, unique fragments (complementary to those of CID) are observed, and the ability to fragment whole macromolecules effectively has been promising. Although ECD is primarily used in Fourier transform ion cyclotron resonance mass spectrometry, investigators have indicated that it has been successfully used in an ion-trap mass spectrometer. ECD also allows rapid integration of multiple scans in FTICR-MS when combined with external accumulation. ECD is a relatively recently introduced MS/MS fragmentation technique and is still being investigated. The mechanism of ECD is still under debate, but it appears not to necessarily break the weakest bond and is therefore thought to be a fast (nonergodic) process in which energy is not free to relax intramolecularly. Suggestions have been made that radical reactions initiated by the electron may be responsible for the action of ECD. In a similar MS/MS fragmentation technique called electron-transfer dissociation, the electrons are transferred by collision between the analyte cations and reagent anions.
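Because ECD fragments are predominantly c- and z•-type, their m/z values can be predicted from standard monoisotopic bookkeeping. The sketch below is a minimal illustration for unmodified peptides, using an abridged residue-mass table and a hypothetical sequence; real software must additionally handle modifications, higher charge states, and the full residue set:

```python
# Standard monoisotopic relationships for singly charged ECD fragments:
#   c_i   = sum(residues 1..i)       + NH3 + proton
#   z•_j  = sum(residues n-j+1..n)   + H2O - NH2 + proton
RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
       "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
PROTON, NH3, H2O, NH2 = 1.00728, 17.02655, 18.01056, 16.01872

def c_and_z_ions(peptide: str):
    """Return singly charged c- and z•-ion m/z lists for a peptide."""
    masses = [RES[aa] for aa in peptide]
    c = [sum(masses[:i]) + NH3 + PROTON for i in range(1, len(masses))]
    z = [sum(masses[-j:]) + H2O - NH2 + PROTON for j in range(1, len(masses))]
    return c, z

c_ions, z_ions = c_and_z_ions("GASPVK")  # hypothetical test sequence
print([round(m, 3) for m in c_ions])     # c1..c5
print([round(m, 3) for m in z_ions])     # z•1..z•5
```

Matching such predicted ladders against an observed spectrum is how N–Cα cleavage products are assigned in practice.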
Applications Disulfide bond cleavage ECD, by itself or combined with other MS techniques, is very useful for proteins and peptides containing multiple disulfide bonds. FTICR combined with ECD helps to recognize peptides containing disulfide bonds. ECD can also access important sequence information by activation of more highly charged proteins. Moreover, disulfide bond cleavage takes place upon ECD of multiply charged proteins or peptides produced by ESI. Electron capture by these proteins releases a hydrogen atom, which is captured by the disulfide bond and causes its dissociation: RS–SR′ + •H → R–S(H)•S–R′ → RSH + •SR′. ECD with UV-based activation increases the top-down MS sequence coverage of disulfide-bond-containing proteins and cleaves a disulfide bond homolytically to produce two separated thiol radicals. This technique was demonstrated with insulin and ribonuclease, where it cleaved up to three disulfide bonds and increased the sequence coverage. Post-translational modifications ECD-MS fragments can retain post-translational modifications such as carboxylation, phosphorylation and O-glycosylation. ECD has the potential to do the top-down characterization of the major types of post-translational modifications in proteins. It successfully cleaved 87 of 208 backbone bonds and provided the first direct characterization of a phosphoprotein, bovine β-casein, simultaneously restricting the location of five phosphorylation sites. It has advantages over CAD for measuring the degree of phosphorylation with minimal loss of phosphate groups and for phosphopeptide/phosphoprotein mapping, which makes ECD a superior technique. Coupling of ECD with separation techniques ECD has been coupled with capillary electrophoresis (CE) to gain insight into the structural analysis of mixtures of peptides and protein digests. Micro-HPLC combined with ECD-FTICR was used to analyze a pepsin digest of cytochrome c. Sequence tags were provided by analysis of a mixture of peptides and a tryptic digest of bovine serum albumin when LC-ECD-FTICR-MS was used. Additionally, LC-ECD-MS/MS provides longer sequence tags than LC-CID-MS/MS for the identification of proteins. ECD devices using radio-frequency quadrupole ion traps are relevant for high-throughput proteomics. Recently, atmospheric-pressure electron capture dissociation (AP-ECD) has been emerging as an attractive technique because it can be implemented as a stand-alone ion-source device and does not require any modification of the main instrument. Proteomics Analysis of proteins can be done using either a top-down or a bottom-up approach. However, better sequence coverage is provided by top-down analysis. The combination of ECD with FTICR-MS has contributed to the popularity of this approach. It has also helped in determining the multiple modification sites in intact proteins. Native electron capture dissociation (NECD) was used to study the cytochrome c dimer and has recently been used to elucidate iron-binding channels in horse spleen ferritin. Synthetic polymers ECD studies of polyalkene glycols, polyamides, polyacrylates and polyesters are useful for understanding the composition of polymer samples. ECD has become a powerful technique for obtaining structural information about precursor ions during MS/MS of synthetic polymers. ECD's tendency toward single-bond cleavage makes the interpretation of product-ion scans simple for polymer chemistry. See also Electron capture ionization Electron–capture mass spectrometry RRKM theory References Tandem mass spectrometry
Electron-capture dissociation
[ "Physics" ]
1,513
[ "Mass spectrometry", "Spectrum (physical sciences)", "Tandem mass spectrometry" ]
2,289,119
https://en.wikipedia.org/wiki/Hexol
In chemistry, hexol is a cation with formula {[Co(NH3)4(OH)2]3Co}6+ — a coordination complex consisting of four cobalt cations in oxidation state +3, twelve ammonia molecules, and six hydroxy anions, with a net charge of +6. The hydroxy groups act as bridges between the central cobalt atom and the other three, which carry the ammonia ligands. Salts of hexol, such as the sulfate {[Co(NH3)4(OH)2]3Co}(SO4)3(H2O)x, are of historical significance as the first synthetic non-carbon-containing chiral compounds. Preparation Salts of hexol were first described by Jørgensen, although it was Werner who recognized its structure. The cation is prepared by heating a solution containing the cis-diaquotetramminecobalt(III) cation [Co(NH3)4(H2O)2]3+ with a dilute base: 4 [Co(NH3)4(H2O)2]3+ + 2 HO− → {[Co(NH3)4(OH)2]3Co}6+ + 4 NH4+ + 4 H2O Hexol sulfate Starting with the sulfate and using ammonium hydroxide as the base, depending on the conditions, one obtains the 9-hydrate, the 6-hydrate, or the 4-hydrate of hexol sulfate. These salts form dark brownish-violet or black tabular crystals, with low solubility in water. When treated with concentrated hydrochloric acid, hexol sulfate converts to cis-diaquotetramminecobalt(III) sulfate. In boiling dilute sulfuric acid, hexol sulfate further degrades with evolution of oxygen and nitrogen. Optical properties The hexol cation exists as two optical isomers that are mirror images of each other, depending on the arrangement of the bonds between the central cobalt atom and the three bidentate peripheral units [Co(NH3)4(HO)2]. It belongs to the D3 point group. The nature of its chirality can be compared to that of the ferrioxalate anion [Fe(C2O4)3]3−. In a historic set of experiments, a salt of hexol with an optically active anion — specifically, its D-(+)-bromocamphorsulfonate — was resolved into separate salts of the two cation isomers by fractional crystallisation. A more efficient resolution involves the bis(tartrato)diantimonate(III) anion. The hexol hexacation has a high specific rotation of 2640°. "Second hexol" Werner also described a second achiral hexol (a minor byproduct from the production of Fremy's salt) that he incorrectly identified as a linear tetramer. The second hexol is in fact hexanuclear (it contains six cobalt centres in each ion), not tetranuclear, and its point group and formula accordingly differ from those of hexol. References External links Hexol Molecule of the Month September 1997 Website National Pollutant Inventory – Cobalt fact sheet Cobalt(III) compounds Stereochemistry Ammine complexes Cobalt complexes
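The net charge quoted in the opening sentence follows from simple oxidation-state bookkeeping, which the following minimal sketch makes explicit:

```python
# Charge bookkeeping for the hexol cation {[Co(NH3)4(OH)2]3Co}6+.
n_co, co_charge = 4, +3    # four cobalt(III) centres
n_nh3, nh3_charge = 12, 0  # twelve neutral ammine ligands
n_oh, oh_charge = 6, -1    # six bridging hydroxide ligands

net = n_co * co_charge + n_nh3 * nh3_charge + n_oh * oh_charge
print(net)  # 6 -> the 6+ net charge of the hexol cation
```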
Hexol
[ "Physics", "Chemistry" ]
702
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
2,289,171
https://en.wikipedia.org/wiki/Shell-and-tube%20heat%20exchanger
A shell-and-tube heat exchanger is a class of heat exchanger designs. It is the most common type of heat exchanger in oil refineries and other large chemical processes, and is suited for higher-pressure applications. As its name implies, this type of heat exchanger consists of a shell (a large pressure vessel) with a bundle of tubes inside it. One fluid runs through the tubes, and another fluid flows over the tubes (through the shell) to transfer heat between the two fluids. The set of tubes is called a tube bundle, and may be composed of several types of tubes: plain, longitudinally finned, etc. Theory and application Two fluids, of different starting temperatures, flow through the heat exchanger. One flows through all the tubes in parallel and the other flows outside the tubes, but inside the shell, typically in counterflow. Heat is transferred from one fluid to the other through the tube walls, either from tube side to shell side or vice versa. Cross-baffles can be used to force the shell fluid to flow perpendicularly across the tubes to develop a more turbulent flow, increasing the heat-transfer coefficient. The fluids can be either liquids or gases on either the shell or the tube side. In order to transfer heat efficiently, a large heat transfer area should be used, leading to the use of many tubes. In this way, waste heat can be put to use. This is an efficient way to conserve energy. Heat exchangers with only one phase (liquid or gas) on each side can be called one-phase or single-phase heat exchangers. Two-phase heat exchangers can be used to heat a liquid to boil it into a gas (vapor), sometimes called boilers, or to cool a vapor and condense it into a liquid (called condensers), with the phase change usually occurring on the shell side. Boilers in steam engine locomotives are typically large, usually cylindrically-shaped shell-and-tube heat exchangers. In large power plants with steam-driven turbines, shell-and-tube surface condensers are used to condense the exhaust steam exiting the turbine into condensate water which is recycled back to be turned into steam in the steam generator. They are also used in liquid-cooled chillers for transferring heat between the refrigerant and the water in both the evaporator and condenser, and in air-cooled chillers for only the evaporator. Shell and tube heat exchanger design There can be many variations on the shell-and-tube design. Typically, the ends of each tube are connected to plenums (sometimes called water boxes) through holes in tubesheets. The tubes may be straight or bent in the shape of a U, called U-tubes. In nuclear power plants called pressurized water reactors, large heat exchangers called steam generators are two-phase, shell-and-tube heat exchangers which typically have U-tubes. They are used to boil water recycled from a surface condenser into steam to drive a turbine to produce power. Most shell-and-tube heat exchangers are either 1, 2, or 4 pass designs on the tube side. This refers to the number of times the fluid in the tubes passes through the fluid in the shell. In a single pass heat exchanger, the fluid goes in one end of each tube and out the other. Surface condensers in power plants are often 1-pass straight-tube heat exchangers (see surface condenser for diagram). Two and four pass designs are common because the fluid can enter and exit on the same side. This makes construction much simpler.
There are often baffles directing flow through the shell side so that the fluid does not take a shortcut through the shell side, leaving ineffective low-flow volumes. These are generally attached to the tube bundle rather than the shell, so that the bundle is still removable for maintenance. Countercurrent heat exchangers are most efficient because they allow the highest log mean temperature difference between the hot and cold streams. Many companies, however, do not use two-pass heat exchangers with a U-tube because the tubes can break easily, in addition to being more expensive to build. Often multiple heat exchangers can be used to simulate the countercurrent flow of a single large exchanger. Selection of tube material To be able to transfer heat well, the tube material should have good thermal conductivity. Because heat is transferred from a hot to a cold side through the tubes, there is a temperature difference through the width of the tubes. Because of the tendency of the tube material to thermally expand differently at various temperatures, thermal stresses occur during operation. This is in addition to any stress from high pressures from the fluids themselves. The tube material also should be compatible with both the shell- and tube-side fluids for long periods under the operating conditions (temperatures, pressures, pH, etc.) to minimize deterioration such as corrosion. All of these requirements call for careful selection of strong, thermally-conductive, corrosion-resistant, high-quality tube materials, typically metals, including aluminium, copper alloy, stainless steel, carbon steel, Inconel, nickel, Hastelloy and titanium. Fluoropolymers such as perfluoroalkoxy alkane (PFA) and fluorinated ethylene propylene (FEP) are also used as tubing materials due to their high resistance to extreme temperatures. Poor choice of tube material could result in a leak through a tube between the shell and tube sides, causing fluid cross-contamination and possibly loss of pressure. Applications and uses The simple design of a shell-and-tube heat exchanger makes it an ideal cooling solution for a wide variety of applications. One of the most common applications is the cooling of hydraulic fluid and oil in engines, transmissions and hydraulic power packs. With the right choice of materials they can also be used to cool or heat other mediums, such as swimming pool water or charge air. There are many advantages to shell-and-tube technology over plate heat exchangers. One of the big advantages of using a shell-and-tube heat exchanger is that they are often easy to service, particularly with models where a floating tube bundle is available (where the tube plates are not welded to the outer shell). The cylindrical design of the housing is extremely resistant to pressure and allows a wide range of pressure applications. Overpressure protection In shell-and-tube heat exchangers there is a potential for a tube to rupture and for high pressure (HP) fluid to enter and over-pressurise the low pressure (LP) side of the heat exchanger. The usual configuration of exchangers is for the HP fluid to be in the tubes and for LP water, cooling or heating media to be on the shell side. There is a risk that a tube rupture could compromise the integrity of the shell and lead to the release of flammable gas or liquid, with a risk to people and financial loss. The shell of an exchanger must be protected against over-pressure by rupture discs or relief valves.
The opening time of protection devices has been found to be critical for exchanger protection. Such devices are fitted directly on the shell of the exchanger and discharge into a relief system. Tubes Overview Shell-and-tube heat exchangers are integral components in thermal engineering, primarily used for efficient heat transfer, and the design and arrangement of the tubes within these exchangers are fundamental to their operation and effectiveness. Each design aspect, from material selection to tube arrangement and fluid flow, plays a vital role in the exchanger's performance. Specification and Standards Tubes in these exchangers, often termed condenser tubes, are distinct from typical water tubing. They adhere to the Birmingham Wire Gage (BWG) standard, which dictates specific dimensions such as the outside diameter. For example, a 1-inch tube according to BWG will have an exact outside diameter of 1 inch. Detailed specifications are available in specialized references. Materials The tubes are made from a variety of materials, each chosen based on specific system requirements including thermal conductivity, strength, and corrosion resistance. Tube Arrangement The arrangement of tubes is a crucial design aspect. They are positioned in holes drilled in tube sheets, with the spacing between holes, known as the tube pitch, being a key factor for both structural integrity and efficiency. Tubes are typically organized in square or triangular patterns, and specific layouts are detailed in engineering references. Tube Counts Tube count refers to the maximum number of tubes that can fit within a shell of a specific diameter without weakening the tube sheet. This aspect is crucial for ensuring the structural integrity and efficiency of the heat exchanger. Information on tube counts for various shell sizes can be found in specialized literature. Fluid Flow In shell-and-tube heat exchangers, there are two distinct fluid streams for heat transfer. The tube fluid circulates inside the tubes, while the shell fluid flows around them, guided by baffles. The movement of the shell fluid, whether side-to-side or up-and-down, and the number of passes it makes over the tubes are controlled by segmental baffles, which are essential for maximizing heat transfer efficiency. These aspects are elaborated in dedicated references. Design and construction standards Standards of the Tubular Exchanger Manufacturers Association (TEMA), 10th edition, 2019 EN 13445-3 "Unfired Pressure Vessels - Part 3: Design", Section 13 (2012) ASME Boiler and Pressure Vessel Code, Section VIII, Division 1, Part UHX See also Boiler or Reboiler EJMA Fired heater Fouling or scaling Heat exchanger NTU method as an alternative to finding the LMTD Plate and frame heat exchanger Plate fin heat exchanger Pressure vessel Surface condenser References External links Shell-and-Tube Heat Exchangers Construction Details Basics of Shell and Tube Exchanger Design Basics of Industrial Heat Transfer Specifying a Liquid_Liquid Heat Exchanger A Free Book - Thermal Design of Shell & Tube Heat Exchangers Shell and tube heat exchanger calculator for shellside Self-Cleaning Heat Exchangers Heat exchangers
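The tube-count idea above can be sketched with a rough packing estimate. The sketch follows a common textbook-style correlation (constants of the kind tabulated by Kakaç and Liu); the constants and inputs here are illustrative assumptions and are no substitute for TEMA layout tables:

```python
import math

CTP = 0.93   # tube count constant for a one-tube-pass layout (assumed)
CL = 0.87    # tube layout constant for 30/60-degree triangular pitch

def tube_count(shell_id_m: float, tube_od_m: float, pitch_ratio: float) -> int:
    """Approximate number of tubes that fit in a shell of given bore."""
    return int((CTP / CL) * (math.pi / 4) * shell_id_m ** 2
               / (pitch_ratio * tube_od_m) ** 2)

# 0.6 m shell, 19.05 mm (3/4 in) tubes on a 1.25 pitch ratio:
print(tube_count(0.6, 0.01905, 1.25))  # ~533 tubes with these inputs
```

A tighter pitch ratio packs in more tubes but weakens the tube sheet and raises shell-side pressure drop, which is exactly the trade-off the tube-pitch discussion above describes.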
Shell-and-tube heat exchanger
[ "Chemistry", "Engineering" ]
2,119
[ "Chemical equipment", "Heat exchangers" ]
2,289,219
https://en.wikipedia.org/wiki/Analytic%20element%20method
The analytic element method (AEM) is a numerical method used for the solution of partial differential equations. It was initially developed by O.D.L. Strack at the University of Minnesota. It is similar in nature to the boundary element method (BEM), as it does not rely upon the discretization of volumes or areas in the modeled system; only internal and external boundaries are discretized. One of the primary distinctions between AEM and BEMs is that the boundary integrals are calculated analytically. Although originally developed to model groundwater flow, AEM has subsequently been applied to other fields of study, including heat flow and conduction, periodic waves, and deformation by force. Mathematical basis The basic premise of the analytic element method is that, for linear differential equations, elementary solutions may be superimposed to obtain more complex solutions. A suite of 2D and 3D analytic solutions ("elements") is available for different governing equations. These elements typically correspond to a discontinuity in the dependent variable or its gradient along a geometric boundary (e.g., point, line, ellipse, circle, sphere, etc.). This discontinuity has a specific functional form (usually a polynomial in 2D) and may be manipulated to satisfy Dirichlet, Neumann, or Robin (mixed) boundary conditions. Each analytic solution is infinite in space and/or time. Commonly, each analytic solution contains degrees of freedom (coefficients) that may be calculated to meet prescribed boundary conditions along the element's border. To obtain a global solution (i.e., the correct element coefficients), a system of equations is solved such that the boundary conditions are satisfied along all of the elements (using collocation, least-squares minimization, or a similar approach). Notably, the global solution provides a spatially continuous description of the dependent variable everywhere in the infinite domain, and the governing equation is satisfied everywhere exactly, except along the border of the element, where the governing equation is not strictly applicable due to the discontinuity. The ability to superpose numerous elements in a single solution means that analytical solutions can be realized for arbitrarily complex boundary conditions. That is, models that have complex geometries, straight or curved boundaries, multiple boundaries, transient boundary conditions, multiple aquifer layers, piecewise varying properties, and continuously varying properties can be solved. Elements can be implemented using far-field expansions such that models containing many thousands of elements can be solved efficiently to high precision. The analytic element method has been applied to problems of groundwater flow governed by a variety of linear partial differential equations, including the Laplace equation, the Poisson equation, the modified Helmholtz equation, the heat equation, and the biharmonic equation. Often these equations are solved using complex variables, which makes the mathematical techniques of complex variable theory available. A useful technique for solving complex problems is conformal mapping, which maps the boundary of a geometry, e.g. an ellipse, onto the boundary of the unit circle, where the solution is known. In the analytic element method the discharge potential and stream function, or, combined, the complex potential, are used.
This potential links the physical properties of the groundwater system, the hydraulic head or flow boundaries, to a mathematical representation of a potential. This mathematical representation can be used to calculate the potential in terms of position and thus also to solve groundwater flow problems. Elements are developed by solving the boundary conditions for either of these two properties, hydraulic head or flow boundary, which results in analytical solutions capable of dealing with numerous boundary conditions. Comparison to other methods As mentioned, the analytic element method does not rely on the discretization of volumes or areas in the model, as the finite element or finite difference methods do. Thus, it can model complex problems with an error on the order of machine precision. This is illustrated in a study that modeled a highly heterogeneous, isotropic aquifer by including 100,000 spherical heterogeneities with random conductivities and tracing 40,000 particles. The analytic element method can efficiently be used as verification or as a screening tool in larger projects, as it can quickly and accurately calculate the groundwater flow for many complex problems. In contrast to other commonly used groundwater modeling methods, e.g. the finite element or finite difference methods, the AEM does not discretize the model domain into cells. This gives the advantage that the model is valid for any given point in the model domain. However, it also means that the domain is not as easily divided into regions of, e.g., different hydraulic conductivity as when modeling with a cell grid; one solution to this problem is to include subdomains in the AEM model. There also exist solutions for implementing vertically varying properties or structures in an aquifer in an AEM model. See also Boundary element method Conformal mapping Superposition principle References Further reading External links Analytic elements community wiki Fitts Geolsolutions, AnAqSim (analytic aquifer simulator) and AnAqSimEDU (free) web site Numerical differential equations Hydrology models
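The superposition of elementary solutions via a complex potential can be illustrated with a minimal sketch in the spirit of the method: uniform flow plus a single extraction well, two classic elements of 2D groundwater flow. All parameter values below are illustrative assumptions, not data from any cited study:

```python
import numpy as np

# Complex potential Omega(z) = -Q0*z + (Qw / (2*pi)) * ln(z - zw):
# a uniform flow of strength Q0 superposed with a well of discharge Qw.
Q0 = 1.0          # uniform flow strength (illustrative)
Qw = 2.0          # well extraction rate (illustrative)
zw = 0.0 + 0.0j   # well location in the complex plane

def omega(z: np.ndarray) -> np.ndarray:
    return -Q0 * z + Qw / (2 * np.pi) * np.log(z - zw)

z = np.array([1 + 1j, 2 + 0.5j, -1 - 2j])  # arbitrary evaluation points
phi = omega(z).real   # discharge potential
psi = omega(z).imag   # stream function
print(phi)
print(psi)
```

Because the governing Laplace equation is linear, further elements (extra wells, line sinks, inhomogeneities) are added as additional terms in omega, and their free coefficients are then solved for so that the boundary conditions are met, which is the essence of the method described above.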
Analytic element method
[ "Biology", "Environmental_science" ]
1,044
[ "Hydrology", "Environmental modelling", "Hydrology models", "Biological models" ]
2,289,312
https://en.wikipedia.org/wiki/Electrodialysis%20reversal
Electrodialysis reversal (EDR) is a water desalination membrane process that has been commercially used since the early 1960s. An electric current migrates dissolved salt ions, including fluorides, nitrates and sulfates, through an electrodialysis stack consisting of alternating layers of cationic and anionic ion exchange membranes. Periodically (3–4 times per hour), the direction of ion flow is reversed by reversing the polarity of the applied electric current. Current reversal reduces clogging of the membranes, as salt deposits in the membrane get dissolved when the current flow is reversed. Electrodialysis reversal causes a small decrease in the diluted feed quality and requires more complex infrastructure, as reversible valves are required to change the flow direction of the diluted and concentrated streams. However, it greatly increases ion exchange membrane durability, and the periodic membrane cleaning prevents the electrical resistance of the membrane from increasing as deposits accumulate in the membrane pores. The polarity reversal of EDR alternately exposes the membrane surfaces and the water flow paths to the concentrate, which has a tendency to precipitate scale, and to the desalted water, which tends to dissolve scale. This allows the process to operate with supersaturated concentrate streams up to specific limits without chemical additions to prevent scale formation. See also Reversed electrodialysis (RED) Osmotic power References External links Article: Water issues prompt new look at desalination Water desalination Membrane technology
Electrodialysis reversal
[ "Chemistry" ]
294
[ "Water desalination", "Separation processes", "Water treatment", "Membrane technology", "Water technology" ]
2,289,369
https://en.wikipedia.org/wiki/Logarithmic%20mean%20temperature%20difference
In thermal engineering, the logarithmic mean temperature difference (LMTD) is used to determine the temperature driving force for heat transfer in flow systems, most notably in heat exchangers. The LMTD is a logarithmic average of the temperature difference between the hot and cold feeds at each end of the double pipe exchanger. For a given heat exchanger with constant area and heat transfer coefficient, the larger the LMTD, the more heat is transferred. The use of the LMTD arises straightforwardly from the analysis of a heat exchanger with constant flow rate and fluid thermal properties. Definition We assume that a generic heat exchanger has two ends (which we call "A" and "B") at which the hot and cold streams enter or exit on either side; then, the LMTD is defined by the logarithmic mean as follows: LMTD = (ΔT_A − ΔT_B) / ln(ΔT_A / ΔT_B), where ΔT_A is the temperature difference between the two streams at end A, and ΔT_B is the temperature difference between the two streams at end B. When the two temperature differences are equal, this formula does not directly resolve (it reduces to 0/0), so the LMTD is conventionally taken to equal its limit value, which is in this case trivially equal to the two differences. With this definition, the LMTD can be used to find the exchanged heat in a heat exchanger: Q = U · Ar · LMTD, where (in SI units): Q is the exchanged heat duty (watts), U is the heat transfer coefficient (watts per kelvin per square meter), Ar is the exchange area. Note that estimating the heat transfer coefficient may be quite complicated. This holds both for cocurrent flow, where the streams enter from the same end, and for countercurrent flow, where they enter from different ends. In a cross-flow, in which one system, usually the heat sink, has the same nominal temperature at all points on the heat transfer surface, a similar relation between exchanged heat and LMTD holds, but with a correction factor. A correction factor is also required for other more complex geometries, such as a shell and tube exchanger with baffles. Derivation Assume heat transfer is occurring in a heat exchanger along an axis z, from generic coordinate A to B, between two fluids, identified as 1 and 2, whose temperatures along z are T1(z) and T2(z). The local exchanged heat flux at z is proportional to the temperature difference: q(z) = U (T2(z) − T1(z)) = U ΔT(z). The heat that leaves the fluids causes a temperature gradient along z, by an energy balance on each stream: dT1/dz = k1 q(z), dT2/dz = −k2 q(z), where k1 and k2 are (positive) constants characterizing the two streams. Summed together, this becomes d(ΔT)/dz = −K q(z), where K = k1 + k2. The total exchanged energy is found by integrating the local heat transfer q from A to B: Q = D ∫ q(z) dz, integrating over z from A to B. Notice that B − A is clearly the pipe length, which is distance along z, and D is the circumference. Multiplying those gives the heat exchanger area of the pipe, and use this fact: Ar = D (B − A). In both integrals, make a change of variables from z to ΔT, using q = −(1/K) d(ΔT)/dz: Q = −(D/K) ∫ d(ΔT) = (D/K)(ΔT_A − ΔT_B). With the relation for ΔT (the equation above, whose solution is exponential, so that ln(ΔT_B/ΔT_A) = −K U (B − A)), this becomes K = ln(ΔT_A/ΔT_B) / (U (B − A)). Integration at this point is trivial, and finally gives: Q = U D (B − A) (ΔT_A − ΔT_B) / ln(ΔT_A/ΔT_B) = U · Ar · (ΔT_A − ΔT_B) / ln(ΔT_A/ΔT_B), from which the definition of LMTD follows. Assumptions and limitations It has been assumed that the rate of change for the temperature of both fluids is proportional to the temperature difference; this assumption is valid for fluids with a constant specific heat, which is a good description of fluids changing temperature over a relatively small range. However, if the specific heat changes, the LMTD approach will no longer be accurate. A particular case for the LMTD are condensers and reboilers, where the latent heat associated with phase change is a special case of the hypothesis.
For a condenser, the hot fluid inlet temperature is then equivalent to the hot fluid exit temperature. It has also been assumed that the heat transfer coefficient (U) is constant, and not a function of temperature. If this is not the case, the LMTD approach will again be less valid. The LMTD is a steady-state concept, and cannot be used in dynamic analyses. In particular, if the LMTD were to be applied on a transient in which, for a brief time, the temperature difference had different signs on the two sides of the exchanger, the argument to the logarithm function would be negative, which is not allowable. It is further assumed that there is no phase change during heat transfer and that changes in kinetic energy and potential energy are negligible. Logarithmic Mean Pressure Difference A related quantity, the logarithmic mean pressure difference or LMPD, is often used in mass transfer for stagnant solvents with dilute solutes to simplify the bulk flow problem. References Kay J M & Nedderman R M (1985) Fluid Mechanics and Transfer Processes, Cambridge University Press Heat transfer
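A short numerical sketch of the definition above follows. The stream temperatures, U, and area are assumed round numbers for illustration, not values from any reference:

```python
import math

def lmtd(dT_A: float, dT_B: float) -> float:
    """Logarithmic mean of the end temperature differences.

    Falls back to the limit value when the two differences are
    (nearly) equal, where the log-mean formula reduces to 0/0.
    """
    if abs(dT_A - dT_B) < 1e-9:
        return dT_A
    return (dT_A - dT_B) / math.log(dT_A / dT_B)

# Counterflow example (temperatures in deg C; hypothetical values):
# hot stream 100 -> 60, cold stream 20 -> 50.
dT_A = 100 - 50   # end A: hot inlet vs cold outlet
dT_B = 60 - 20    # end B: hot outlet vs cold inlet
print(round(lmtd(dT_A, dT_B), 2))   # ~44.81 K

U = 500.0   # overall heat transfer coefficient, W/(m^2 K) (assumed)
Ar = 10.0   # exchange area, m^2 (assumed)
Q = U * Ar * lmtd(dT_A, dT_B)       # exchanged duty, W
print(round(Q))                     # ~224 kW
```

Note that the LMTD (about 44.8 K here) lies between the two end differences of 50 K and 40 K, but below their arithmetic mean, as the logarithmic mean always does.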
Logarithmic mean temperature difference
[ "Physics", "Chemistry" ]
948
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
2,289,394
https://en.wikipedia.org/wiki/Research%20Institute%20of%20Computer%20Science%20and%20Random%20Systems
The Institut de recherche en informatique et systèmes aléatoires is a joint computer science research center of CNRS, University of Rennes 1, ENS Rennes, INSA Rennes and Inria, in Rennes in Brittany. It is one of the eight Inria research centers. It was created in 1975 as a spin-off of the University of Rennes 1, merging the young computer science department with a group of mathematicians, more specifically probabilists, among them Michel Métivier, who was to become the first president of IRISA. Research topics span from theoretical computer science, such as formal languages and formal methods, and more mathematically oriented topics such as information theory, optimization, and complex systems, to application-driven topics like bioinformatics, image and video compression, handwriting recognition, computer graphics, medical imaging, and content-based image retrieval. See also French space program Space program of France Aerospace engineering organizations Computer science institutes in France France Research institutes in France French National Centre for Scientific Research 1975 establishments in France French UMR
Research Institute of Computer Science and Random Systems
[ "Engineering" ]
218
[ "Aeronautics organizations", "Aerospace engineering organizations", "Aerospace engineering" ]
2,289,476
https://en.wikipedia.org/wiki/AM%200644-741
AM 0644-741, also known as the Lindsay-Shapley Ring, is an unbarred lenticular galaxy, and a ring galaxy, which is 300 million light-years away in the southern constellation Volans. Properties Formation The yellowish nucleus was once the center of a normal spiral galaxy, and the ring which currently surrounds the center is 150,000 light-years in diameter. The ring is theorized to have formed through a collision with another galaxy, which triggered a gravitational disruption that caused dust in the galaxy to condense and form stars; this wave of star formation then expanded away from the galaxy's center, creating the ring. Physical characteristics The ring's intense blue hue is the result of widespread formation of massive, young, blue stars. The pink regions along the ring are another sign of rampant star formation. They are rarefied clouds of glowing hydrogen gas, fluorescing as a result of the young blue stars' intense ultraviolet light. Future of the ring Galactic simulation models suggest that the ring of AM 0644-741 will continue to expand for about another 300 million years, after which it will begin to disintegrate. References Ring galaxies Unbarred lenticular galaxies Volans 019481 34-11 06443-7411
AM 0644-741
[ "Astronomy" ]
256
[ "Volans", "Constellations" ]
2,289,648
https://en.wikipedia.org/wiki/Windows%20Vista
Windows Vista is a major release of the Windows NT operating system developed by Microsoft. It was the direct successor to Windows XP, released five years earlier, which was at the time the longest span between successive releases of Microsoft Windows. It was released to manufacturing on November 8, 2006, and over the following two months, it was released in stages to business customers, original equipment manufacturers (OEMs), and retail channels. On January 30, 2007, it was released internationally and was made available for purchase and download from the Windows Marketplace; it is the first release of Windows to be made available through a digital distribution platform. Development of Windows Vista began in 2001 under the codename "Longhorn"; originally envisioned as a minor successor to Windows XP, it gradually included numerous new features from the then-next major release of Windows codenamed "Blackcomb", after which it was repositioned as a major release of Windows, and it subsequently underwent a period of protracted development that was unprecedented for Microsoft. Most new features were prominently based on a new presentation layer codenamed Avalon, a new communications architecture codenamed Indigo, and a relational storage platform codenamed WinFS, all built on the .NET Framework; however, this proved to be untenable due to the incompleteness of these technologies and the ways in which new features were added, and Microsoft reset the project in 2004. Many features were eventually reimplemented after the reset, but Microsoft ceased using managed code to develop the operating system. New features of Windows Vista include a graphical user interface and visual style referred to as Windows Aero; a content index and desktop search platform called Windows Search; new peer-to-peer technologies to simplify sharing files and media between computers and devices on a home network; and new multimedia tools such as Windows DVD Maker. Windows Vista included version 3.0 of the .NET Framework, allowing software developers to write applications without traditional Windows APIs. There are major architectural overhauls to audio, display, network, and print sub-systems; deployment, installation, servicing, and startup procedures are also revised. It is the first release of Windows built on Microsoft's Trustworthy Computing initiative and emphasized security with the introduction of many new security and safety features such as BitLocker and User Account Control. The ambitiousness and scope of these changes, and the abundance of new features, earned positive reviews, but Windows Vista was the subject of frequent negative press and significant criticism. Criticism of Windows Vista focused on driver, peripheral, and program incompatibility; digital rights management; excessive authorization from the new User Account Control; inordinately high system requirements when contrasted with Windows XP; its protracted development; longer boot time; and more restrictive product licensing. Windows Vista deployment and satisfaction rates were consequently lower than those of Windows XP, and it is considered a market failure; however, its use surpassed Microsoft's pre-launch two-year-out expectations of achieving 200 million users (with an estimated 330 million users by 2009). Two service packs were released, in 2008 and 2009 respectively. Windows Vista was succeeded by Windows 7 in 2009, and on October 22, 2010, Microsoft ceased retail distribution of Windows Vista; OEM supply ceased a year later.
Mainstream support for Windows Vista ended on April 10, 2012, and extended support ended on April 11, 2017. Development Microsoft began work on Windows Vista, known at the time by its codename "Longhorn", in May 2001, five months before the release of Windows XP. It was originally expected to ship in October 2003 as a minor step between Windows XP and "Blackcomb", which was planned to be the company's next major operating system release. Gradually, "Longhorn" assimilated many of the important new features and technologies slated for Blackcomb, resulting in the release date being pushed back several times over three years. In some builds of Longhorn, the license agreement said "For the Microsoft product codenamed 'Whistler'". Many of Microsoft's developers were also re-tasked to build updates to Windows XP and Windows Server 2003 to strengthen security. Faced with ongoing delays and concerns about feature creep, Microsoft announced on August 27, 2004, that it had revised its plans: Longhorn was reset, with work starting on componentizing the Windows Server 2003 Service Pack 1 codebase and re-incorporating over time the features that were intended for an actual operating system release. However, some previously announced features such as WinFS were dropped or postponed, and a new software development methodology called the Security Development Lifecycle was incorporated to address concerns with the security of the Windows codebase, which is programmed in C, C++, and assembly. Longhorn became known as Vista in 2005 ("vista" is Spanish for "view"). Longhorn The early development stages of Longhorn were generally characterized by incremental improvements and updates to Windows XP. During this period, Microsoft was fairly quiet about what was being worked on, as their marketing and public relations efforts were more strongly focused on Windows XP and Windows Server 2003, which was released in April 2003. Occasional builds of Longhorn were leaked onto popular file sharing networks such as IRC, BitTorrent, eDonkey and various newsgroups, and so most of what is known about builds before the first sanctioned development release of Longhorn in May 2003 is derived from these builds. After several months of relatively little news or activity from Microsoft with Longhorn, Microsoft released Build 4008, which had made an appearance on the Internet around February 28, 2003. It was also privately handed out to a select group of software developers. As an evolutionary release over build 3683, it contained several small improvements, including a modified blue "Plex" theme and a new, simplified Windows Image-based installer that operated in graphical mode from the outset and completed an installation of the operating system in approximately one third of the time of Windows XP on the same hardware. An optional "new taskbar" was introduced that was thinner than in the previous build and displayed the time differently. The most notable visual and functional difference, however, came with Windows Explorer. The incorporation of the Plex theme made blue the dominant color of the entire application. The Windows XP-style task pane was almost completely replaced with a large horizontal pane that appeared under the toolbars. A new search interface allowed for filtering of results, searching for Windows help, and natural-language queries that would be used to integrate with WinFS. The animated search characters were also removed.
The "view modes" were also replaced with a single slider that would resize the icons in real-time, in the list, thumbnail, or details mode, depending on where the slider was. File metadata was also made more visible and more easily editable, with more active encouragement to fill out missing pieces of information. Also of note was the conversion of Windows Explorer to being a .NET application. Most builds of Longhorn and Vista were identified by a label that was always displayed in the bottom-right corner of the desktop. A typical build label would look like "Longhorn Build 3683.Lab06_N.020923-1821". Higher build numbers did not automatically mean that the latest features from every development team at Microsoft was included. Typically, a team working on a certain feature or subsystem would generate their working builds which developers would test with, and when the code was deemed stable, all the changes would be incorporated back into the main development tree at once. At Microsoft, several "Build labs" exist where the compilation of the entirety of Windows can be performed by a team. The name of the lab in which any given build originated is shown as part of the build label, and the date and time of the build follow that. Some builds (such as Beta 1 and Beta 2) only display the build label in the version information dialog (Winver). The icons used in these builds are from Windows XP. At the Windows Hardware Engineering Conference (WinHEC) in May 2003, Microsoft gave their first public demonstrations of the new Desktop Window Manager and Aero. The demonstrations were done on a revised build 4015 which was never released. Several sessions for developers and hardware engineers at the conference focused on these new features, as well as the Next-Generation Secure Computing Base (previously known as "Palladium"), which at the time was Microsoft's proposed solution for creating a secure computing environment whereby any given component of the system could be deemed "trusted". Also at this conference, Microsoft reiterated their roadmap for delivering Longhorn, pointing to an "early 2005" release date. Development reset By 2004, it had become obvious to the Windows team at Microsoft that they were losing sight of what needed to be done to complete the next version of Windows and ship it to customers. Internally, some Microsoft employees were describing the Longhorn project as "another Cairo" or "Cairo.NET", referring to the Cairo development project that the company embarked on through the first half of the 1990s, which never resulted in a shipping operating system (though nearly all the technologies developed in that time did end up in Windows 95 and Windows NT). Microsoft was shocked in 2005 by Apple's release of Mac OS X Tiger. It offered only a limited subset of features planned for Longhorn, in particular fast file searching and integrated graphics and sound processing, but appeared to have impressive reliability and performance compared to contemporary Longhorn builds. Most Longhorn builds had major Windows Explorer system leaks which prevented the OS from performing well, and added more confusion to the development teams in later builds with more and more code being developed which failed to reach stability. 
In a September 23, 2005 front-page article in The Wall Street Journal, Microsoft co-president Jim Allchin, who had overall responsibility for the development and delivery of Windows, explained how development of Longhorn had been "crashing into the ground" due in large part to the haphazard methods by which features were introduced and integrated into the core of the operating system, without a clear focus on an end-product. Allchin went on to explain how in December 2003, he enlisted the help of two other senior executives, Brian Valentine and Amitabh Srivastava, the former being experienced with shipping software at Microsoft, most notably Windows Server 2003, and the latter having spent his career at Microsoft researching and developing methods of producing high-quality testing systems. Srivastava employed a team of core architects to visually map out the entirety of the Windows operating system, and to proactively work towards a development process that would enforce high levels of code quality, reduce interdependencies between components, and in general, "not make things worse with Vista". Since Microsoft decided that Longhorn needed to be further componentized, work started on builds (known as the Omega-13 builds, named after a time travel device in the film Galaxy Quest) that would componentize existing Windows Server 2003 source code, and over time add back functionality as development progressed. Future Longhorn builds would start from Windows Server 2003 Service Pack 1 and continue from there. This change, announced internally to Microsoft employees on August 26, 2004, began in earnest in September, though it would take several more months before the new development process and build methodology would be used by all of the development teams. A number of complaints came from individual developers, and Bill Gates himself, that the new development process was going to be prohibitively difficult to work within. As Windows Vista By approximately November 2004, the company had considered several names for the final release, ranging from simple to fanciful and inventive. In the end, Microsoft chose Windows Vista, as confirmed on July 22, 2005, believing it to be a "wonderful intersection of what the product really does, what Windows stands for, and what resonates with customers, and their needs". Group Project Manager Greg Sullivan told Paul Thurrott "You want the PC to adapt to you and help you cut through the clutter to focus on what's important to you. That's what Windows Vista is all about: "bringing clarity to your world" (a reference to the three marketing points of Vista—Clear, Connected, Confident), so you can focus on what matters to you". Microsoft co-president Jim Allchin also loved the name, saying that "Vista creates the right imagery for the new product capabilities and inspires the imagination with all the possibilities of what can be done with Windows—making people's passions come alive." After Longhorn was named Windows Vista in July 2005, an unprecedented beta-test program was started, involving hundreds of thousands of volunteers and companies. In September of that year, Microsoft started releasing regular Community Technology Previews (CTPs) to beta testers, continuing until February 2006. The first of these was distributed at the 2005 Microsoft Professional Developers Conference, and was subsequently released to beta testers and Microsoft Developer Network subscribers.
The builds that followed incorporated most of the planned features for the final product, as well as a number of changes to the user interface, based largely on feedback from beta testers. Windows Vista was deemed feature-complete with the release of the "February CTP", released on February 22, 2006, and much of the remainder of the work between that build and the final release of the product focused on stability, performance, application and driver compatibility, and documentation. Beta 2, released in late May, was the first build to be made available to the general public through Microsoft's Customer Preview Program. It was downloaded over 5 million times. Two release candidates followed in September and October, both of which were made available to a large number of users. At the Intel Developer Forum on March 9, 2006, Microsoft announced a change in their plans to support EFI in Windows Vista. The UEFI 2.0 specification (which replaced EFI 1.10) was not completed until early 2006, and at the time of Microsoft's announcement, no firmware manufacturers had completed a production implementation which could be used for testing. As a result, the decision was made to postpone the introduction of UEFI support to Windows; support for UEFI on 64-bit platforms was postponed until Vista Service Pack 1 and Windows Server 2008 and 32-bit UEFI would not be supported, as Microsoft did not expect many such systems to be built because the market was quickly moving to 64-bit processors. While Microsoft had originally hoped to have the consumer versions of the operating system available worldwide in time for the 2006 holiday shopping season, it announced in March 2006 that the release date would be pushed back to January 2007 in order to give the company—and the hardware and software companies that Microsoft depends on for providing device drivers—additional time to prepare. Because a release to manufacturing (RTM) build is the final version of code shipped to retailers and other distributors, the purpose of a pre-RTM build is to eliminate any last "show-stopper" bugs that may prevent the code from responsibly being shipped to customers, as well as anything else that consumers may find troublesome. Thus, it is unlikely that any major new features would be introduced; instead, work would focus on Vista's fit and finish. In just a few days, developers had managed to drop Vista's bug count from over 2470 on September 22 to just over 1400 by the time RC2 shipped in early October. However, they still had a way to go before Vista was ready to RTM. Microsoft's internal processes required Vista's bug count to drop to 500 or fewer before the product could go into escrow for RTM. For most of the pre-RTM builds, only 32-bit editions were released. On June 14, 2006, Windows developer Philip Su posted a blog entry which decried the development process of Windows Vista, stating that "The code is way too complicated, and that the pace of coding has been tremendously slowed down by overbearing process." The same post also described Windows Vista as having approximately 50 million lines of code, with about 2,000 developers working on the product. During a demonstration of the speech recognition feature new to Windows Vista at Microsoft's Financial Analyst Meeting on July 27, 2006, the software recognized the phrase "Dear mom" as "Dear aunt". After several failed attempts to correct the error, the sentence eventually became "Dear aunt, let's set so double the killer delete select all". 
A developer with Vista's speech recognition team later explained that there was a bug with the build of Vista that was causing the microphone gain level to be set very high, resulting in the audio being received by the speech recognition software being "incredibly distorted". Windows Vista build 5824 (October 17, 2006) was supposed to be the RTM release, but a bug, in which the OOBE hung at the start of the WinSAT assessment when upgrading from Windows XP and required the user to press Shift+F10 to open a Command Prompt and terminate msoobe.exe using either command-line tools or Task Manager, prevented this, damaging development and lowering the chance that it would hit its January 2007 deadline. Development of Windows Vista ended on November 8, 2006, when co-president of Windows development Jim Allchin announced that it had been finalized. The RTM's build number had also jumped to 6000 to reflect Vista's internal version number, NT 6.0. Jumping RTM build numbers is common practice among consumer-oriented Windows versions, like Windows 98 (build 1998), Windows 98 SE (build 2222), Windows Me (build 3000) or Windows XP (build 2600), as compared to the business-oriented versions like Windows 2000 (build 2195) or Server 2003 (build 3790). On November 16, 2006, Microsoft made the final build available to MSDN and Technet Plus subscribers. A business-oriented Enterprise edition was made available to volume license customers on November 30, 2006. Windows Vista was launched for general customer availability on January 30, 2007. New or changed features New features introduced by Windows Vista are very numerous, encompassing significant functionality not available in its predecessors. End-user Windows Aero is the new graphical user interface, which Jim Allchin stated is an acronym for Authentic, Energetic, Reflective, and Open. Microsoft intended the new interface to be cleaner and more aesthetically pleasing than those of previous Windows versions, and it features advanced visual effects such as blurred glass translucencies, dynamic glass reflections, and smooth window animations. Laptop users report, however, that enabling Aero reduces battery life and performance. Windows Aero requires a compositing window manager called the Desktop Window Manager. Windows Shell offers a new range of organization, navigation, and search capabilities: Task Panes in Windows Explorer are removed, with the relevant tasks moved to a new command bar. The navigation pane can now be displayed when tasks are available, and it has been updated to include a new "Favorite Links" section that houses shortcuts to common locations. An incremental search box now appears at all times in Windows Explorer. The address bar has been replaced with a breadcrumb navigation bar, which means that multiple locations in a hierarchy can be navigated without needing to go back and forth between locations. Icons now display thumbnails depicting contents of items and can be dynamically scaled in size (up to 256 × 256 pixels). A new preview pane allows users to see thumbnails of items and play tracks, read contents of documents, and view photos when they are selected. Groups of items are now selectable and display the number of items in each group. A new details pane allows users to manage metadata. There are several new sharing features, including the ability to directly share files.
The Start menu also now includes an incremental search box — allowing the user to press the Windows key and start typing to instantly find an item or launch a program — and the All Programs list uses a vertical scroll bar instead of the cascading flyout menu of Windows XP. Windows Search is a new content index desktop search platform that replaces the Indexing Service of previous Windows versions to enable incremental searches for files and non-file items — documents, emails, folders, programs, photos, tracks, and videos — and contents or details such as attributes, extensions, and filenames across compatible applications. Windows Sidebar is a translucent panel that hosts gadgets that display details such as feeds and sports scores on the Windows desktop; the Sidebar can be hidden and gadgets can also be placed on the desktop itself. Internet Explorer 7 is a significant revision over Internet Explorer 6 with a new user interface comprising additional address bar features, a new search box, enhanced page zoom, RSS feed functionality, and support for tabbed browsing (with an optional "quick tabs" feature that shows thumbnails of each open tab). Anti-phishing software is introduced that combines client-side scanning with an optional online service; it checks the address being visited with Microsoft to determine its legitimacy, compares the address with a locally stored list of legitimate addresses, and uses heuristics to determine whether an address's characteristics are indicative of phishing attempts. In Windows Vista, Internet Explorer runs in isolation from other applications (protected mode); exploits and malicious software are restricted from writing to any location beyond Temporary Internet Files without explicit user consent. Windows Media Player 11 is a significant update to Microsoft's Windows Media Player for playing and organizing photos, tracks, and videos. New features include an updated GUI for the media library, disc spanning, enhanced audio fingerprinting, instant search capabilities, item organization features, synchronization features, the ability to share the media library over a network with other Windows Vista machines, Xbox 360 integration, and Windows Media Center Extender support. Windows Defender is an antispyware program with several configurable options for real-time protection, with settings to block and notify of changes to browser, security, and Windows settings; prohibit startup applications; and view network-connected applications and their addresses; users can optionally report detected threats through the Microsoft Active Protection Service to help stop new threats. Backup and Restore Center allows for the creation of periodic backups and backup schedules, as well as recovery from previous backups; backups are incremental, storing only subsequent changes, which minimizes disk space usage. Windows Vista Business, Windows Vista Enterprise, and Windows Vista Ultimate additionally include Windows Complete PC Backup, which allows system images to be created; this feature can be started from Windows Vista installation media so that images can be restored to a new hard disk or new hardware, or to a PC that has experienced hardware failures and cannot boot. Windows Calendar is a basic calendar application that integrates with Windows Contacts and Windows Mail; users can create appointments and tasks, publish calendars to the Internet or to a network share, receive reminders, send and receive calendar invitations, and share calendars with family members.
Windows Mail is the successor to Outlook Express that includes significant feature additions (many of which were previously exclusive to Microsoft Outlook) and introduces fundamental revisions to the identification process, storage architecture, and security structure. Windows Photo Gallery replaces Windows Picture and Fax Viewer; it can acquire photographs from digital cameras; adjust photograph effects; burn photographs to optical media; create Direct3D-accelerated slideshows; and reduce red eye. Windows Media Center, previously exclusive to Windows XP Media Center Edition, is available in Windows Vista Home Premium and Windows Vista Ultimate; it has been updated with many new features such as support for CableCARD, DVD/MPEG-2, HD content, and two dual-tuner cards. Parental controls allow administrators to control and manage the activity of each standard user, such as limiting the games that can be played or prohibiting specific contents of websites. Games including FreeCell, Hearts, Minesweeper, Solitaire, and Spider Solitaire have been rewritten in DirectX to take advantage of Windows Vista's new graphical capabilities. New games include Chess Titans (3D Chess), Mahjong Titans (3D Mahjong), and Purble Place (a collection consisting of a cake-creation game, a dress-up puzzle game, and a matching game oriented towards younger children). All in-box games in Windows Vista can be played with an Xbox 360 Controller. Games Explorer is the central location for installed games that displays details such as covers, developers, genres, installation dates, play times, publishers, ratings, and versions. Customizable tasks for games are available; metadata for installed games can be updated from the Internet. Game-related settings such as audio options, community support options, game controller options, firewall settings, and parental controls are displayed. Windows Mobility Center centralizes settings and statuses relevant to mobile computing such as battery life, connectivity status, display brightness, screen orientation, synchronization status, and volume level, and new options can be added by OEMs. Windows Fax and Scan allows machines to create, receive, scan, and send faxes, with the goal of making fax management identical to working with email; it is available in Windows Vista Business, Windows Vista Enterprise, and Windows Vista Ultimate. Windows Meeting Space replaces NetMeeting and relies on People Near Me and WS-Discovery to identify participants on the local subnet or across the Internet; users can give control of their computers to other participants, project their desktops, send messages to participants, and share files. Windows HotStart enables compatible computers to start applications directly from startup or resume by the press of a button, which allows them to function as a consumer electronics device such as a DVD player. Shadow Copy (originally only available in Windows Server 2003) creates copies of files and folders on a scheduled basis, allowing users to recover multiple versions of deleted or overwritten files or folders. Incremental changes are saved by shadow copies, which helps to limit the disk space in use. Windows Update is now a native client application; in previous versions of Windows, it was a web application that had to be accessed from a web browser. Automatic Updates can now automatically download and install Recommended updates (in addition to High Priority updates that could be automatically downloaded and installed in previous versions of Windows).
The prompt that appears when an update is installed that requires a machine to be restarted has been revised, with new options to postpone an operating system restart indefinitely, by 10 minutes, by 1 hour, or by 4 hours (in Windows XP, users could only repeatedly dismiss the prompt to restart, or allow the machine to be restarted within 15 minutes of its appearance). Windows Defender definitions and the Windows Mail spam filter are delivered through Windows Update. Windows SideShow delivers data such as messages and feeds from a personal computer to additional devices and displays, which makes data available in mobile scenarios; compatible devices could additionally transmit commands to applications, devices, or systems connected to a computer (e.g., a smart phone can control a presentation). Magnifier in Windows Vista can magnify the vector-based content of Windows Presentation Foundation applications without blurring the magnified content—it performs resolution-independent zooming—when the Desktop Window Manager is enabled; the release of .NET Framework 3.5 SP1 in 2008 removes this capability when installed in Windows Vista. Magnifier can now be docked to the bottom, left, right, or top of the screen. Microsoft also introduced the Magnification API so that developers can build solutions that magnify portions of the screen or that apply color effects. Windows Speech Recognition is new speech recognition functionality that enables voice commands for controlling the desktop; dictating documents; navigating websites; operating the mouse cursor; and performing keyboard shortcuts. Problem Reports and Solutions allows users to check for solutions to problems and to receive solutions and additional information when they become available. Disk Management: the Logical Disk Manager in Windows Vista supports shrinking and expanding volumes. Reliability and Performance Monitor includes various tools for tuning and monitoring system performance and the resource activity of the CPU, disks, network, memory, and other resources. It shows operations on files, open connections, etc. Windows System Assessment Tool performs a series of assessments of a system's CPU, GPU, RAM, and HDD performance and assigns to the system a rating from 1.0 to 5.9; a system is rated during the out-of-box experience to determine if Windows Aero should be enabled. Windows Anytime Upgrade enabled users running a lower-tier edition of Windows Vista to easily upgrade to a subsequent edition (e.g., to upgrade from Windows Vista Home Basic to Windows Vista Ultimate) by purchasing a license from an online merchant. Digital Locker Assistant simplified access to Windows Marketplace purchases for users to download applications and retrieve licenses; purchases were managed with Microsoft account credentials. Windows Ultimate Extras in Windows Vista Ultimate provided additional features such as BitLocker and EFS improvements that allowed users to back up their encryption keys; Multilingual User Interface packages; and Windows DreamScene, which allowed using MPEG and WMV videos as the desktop background. Core Vista includes technologies such as ReadyBoost and ReadyDrive, which employ fast flash memory (located on USB flash drives and hybrid hard disk drives) to improve system performance by caching commonly used programs and data. This manifests itself in improved battery life on notebook computers as well, since a hybrid drive can be spun down when not in use.
Another new technology called SuperFetch utilizes machine learning techniques to analyze usage patterns to allow Windows Vista to make intelligent decisions about what content should be present in system memory at any given time. It uses almost all the extra RAM as disk cache. In conjunction with SuperFetch, an automatic built-in Windows Disk Defragmenter makes sure that those applications are strategically positioned on the hard disk where they can be loaded into memory very quickly with the least physical movement of the hard disk's read-write heads. As part of the redesign of the networking architecture, IPv6 has been fully incorporated into the operating system and a number of performance improvements have been introduced, such as TCP window scaling. Earlier versions of Windows typically needed third-party wireless networking software to work properly, but this is not the case with Vista, which includes more comprehensive wireless networking support. For graphics, Vista introduces a new Windows Display Driver Model and a major revision to Direct3D. The new driver model facilitates the new Desktop Window Manager, which provides the tearing-free desktop and special effects that are the cornerstones of Windows Aero. Direct3D 10, developed in conjunction with major graphics card manufacturers, is a new architecture with more advanced shader support, and allows the graphics processing unit to render more complex scenes without assistance from the CPU. It features improved load balancing between CPU and GPU and also optimizes data transfer between them. WDDM also provides video content playback that rivals typical consumer electronics devices. It does this by making it easy to connect to external monitors, providing for protected HD video playback, and increasing overall video playback quality. For the first time in Windows, graphics processing unit (GPU) multitasking is possible, enabling users to run more than one GPU-intensive application simultaneously. At the core of the operating system, many improvements have been made to the memory manager, process scheduler and I/O scheduler. The Heap Manager implements additional features such as integrity checking in order to improve robustness and defend against buffer overflow security exploits, although this comes at the price of breaking backward compatibility with some legacy applications. A Kernel Transaction Manager has been implemented that enables applications to work with the file system and Registry using atomic transaction operations. Security-related Improved security was a primary design goal for Vista. Microsoft's Trustworthy Computing initiative, which aims to improve public trust in its products, has had a direct effect on its development. This effort has resulted in a number of new security and safety features and an Evaluation Assurance Level rating of 4+. User Account Control, or UAC is perhaps the most significant and visible of these changes. UAC is a security technology that makes it possible for users to use their computer with fewer privileges by default, to stop malware from making unauthorized changes to the system. This was often difficult in previous versions of Windows, as the previous "limited" user accounts proved too restrictive and incompatible with a large proportion of application software, and even prevented some basic operations such as looking at the calendar from the notification tray. 
In Windows Vista, when an action is performed that requires administrative rights (such as installing/uninstalling software or making system-wide configuration changes), the user is first prompted for an administrator name and password; in cases where the user is already an administrator, the user is still prompted to confirm the pending privileged action. Regular use of the computer such as running programs, printing, or surfing the Internet does not trigger UAC prompts. User Account Control asks for credentials in a Secure Desktop mode, in which the entire screen is dimmed, and only the authorization window is active and highlighted. The intent is to stop a malicious program from misleading the user by interfering with the authorization window, and to hint to the user about the importance of the prompt. Testing by Symantec Corporation has proven the effectiveness of UAC. Symantec used over 2,000 active malware samples, consisting of backdoors, keyloggers, rootkits, mass mailers, trojan horses, spyware, adware, and various other samples. Each was executed on a default Windows Vista installation within a standard user account. UAC effectively blocked over 50 percent of each threat, excluding rootkits; 5 percent or less of the malware that evaded UAC survived a reboot. Internet Explorer 7's new security and safety features include a phishing filter, IDN with anti-spoofing capabilities, and integration with system-wide parental controls. For added security, ActiveX controls are disabled by default. Also, Internet Explorer operates in a protected mode, which operates with lower permissions than the user and runs in isolation from other applications in the operating system, preventing it from accessing or modifying anything besides the Temporary Internet Files directory. Microsoft's anti-spyware product, Windows Defender, has been incorporated into Windows, protecting against malware and other threats. Changes to various system configuration settings (such as new auto-starting applications) are blocked unless the user gives consent. Whereas prior releases of Windows supported per-file encryption using Encrypting File System, the Enterprise and Ultimate editions of Vista include BitLocker Drive Encryption, which can protect entire volumes, notably the operating system volume. However, BitLocker requires an approximately 1.5-gigabyte partition that is permanently left unencrypted and contains system files needed for Windows to boot. In normal circumstances, the only time this partition is accessed is when the computer is booting, or when there is a Windows update that changes files in this area, which is a legitimate reason to access this section of the drive. The area can be a potential security issue, because a hexadecimal editor (such as dskprobe.exe) or malicious software running with administrator and/or kernel-level privileges would be able to write to this "Ghost Partition" and allow a piece of malicious software to compromise the system, or disable the encryption. BitLocker can work in conjunction with a Trusted Platform Module (TPM) cryptoprocessor (version 1.2) embedded in a computer's motherboard, or with a USB key. However, as with other full disk encryption technologies, BitLocker is vulnerable to a cold boot attack, especially where TPM is used as a key protector without a boot PIN being required too. A variety of other privilege-restriction techniques are also built into Vista.
An example is the concept of "integrity levels" in user processes, whereby a process with a lower integrity level cannot interact with processes of a higher integrity level and cannot perform DLL injection into processes of a higher integrity level. The security restrictions of Windows services are more fine-grained, so that services (especially those listening on the network) cannot interact with parts of the operating system they do not need to. Obfuscation techniques such as address space layout randomization are used to increase the amount of effort required of malware before successful infiltration of a system. Code integrity verifies that system binaries have not been tampered with by malicious code. As part of the redesign of the network stack, Windows Firewall has been upgraded, with new support for filtering both incoming and outgoing traffic. Advanced packet filter rules can be created that can grant or deny communications to specific services. The 64-bit versions of Vista require that all new kernel-mode device drivers be digitally signed, so that the creator of the driver can be identified. This is also on par with one of the primary goals of Vista to move code out of kernel-mode into user-mode drivers, with another example being the new Windows Display Driver Model. System management While much of the focus of Vista's new capabilities highlighted the new user interface, security technologies, and improvements to the core operating system, Microsoft also added new deployment and maintenance features: The Windows Imaging Format (WIM) provides the cornerstone of Microsoft's new deployment and packaging system. WIM files, which contain a HAL-independent image of Windows Vista, can be maintained and patched without having to rebuild new images. Windows Images can be delivered via Systems Management Server or Business Desktop Deployment technologies. Images can be customized and configured with applications, then deployed to corporate client personal computers with little to no intervention by a system administrator. ImageX is the Microsoft tool used to create and customize images. Windows Deployment Services replaces Remote Installation Services for deploying Vista and prior versions of Windows. Approximately 700 new Group Policy settings have been added, covering most aspects of the new features in the operating system, as well as significantly expanding the configurability of wireless networks, removable storage devices, and user desktop experience. Vista also introduced an XML-based format (ADMX) to display registry-based policy settings, making it easier to manage networks that span geographic locations and different languages. Services for UNIX, renamed as "Subsystem for UNIX-based Applications", comes with the Enterprise and Ultimate editions of Vista. Network File System (NFS) client support is also included. Multilingual User Interface – Unlike previous versions of Windows (which required the loading of language packs to provide local-language support), Windows Vista Ultimate and Enterprise editions support the ability to dynamically change languages based on the logged-on user's preference. Wireless Projector support Developer Windows Vista includes a large number of new application programming interfaces. Chief among them is the inclusion of version 3.0 of the .NET Framework, which consists of a class library and the Common Language Runtime.
Version 3.0 includes four new major components: Windows Presentation Foundation is a user interface subsystem and framework based on vector graphics, which makes use of 3D computer graphics hardware and Direct3D technologies. It provides the foundation for building applications and blending application UI, documents, and media content. It is the successor to Windows Forms. Windows Communication Foundation is a service-oriented messaging subsystem that enables applications and systems to interoperate locally or remotely using Web services. Windows Workflow Foundation provides task automation and integrated transactions using workflows. It is the programming model, engine, and tools for building workflow-enabled applications on Windows. Windows CardSpace is a component that securely stores digital identities of a person, and provides a unified interface for choosing the identity for a particular transaction, such as logging into a website. These technologies are also available for Windows XP and Windows Server 2003 to facilitate their introduction to and usage by developers and end-users. There are also significant new development APIs in the core of the operating system, notably the completely re-designed audio, networking, print, and video interfaces, major changes to the security infrastructure, improvements to the deployment and installation of applications ("ClickOnce" and Windows Installer 4.0), new device driver development model ("Windows Driver Foundation"), Transactional NTFS, mobile computing API advancements (power management, Tablet PC Ink support, SideShow) and major updates to (or complete replacements of) many core subsystems such as Winlogon and CAPI. There are some issues for software developers using some of the graphics APIs in Vista. Games or programs built solely on the Windows Vista-exclusive version of DirectX, version 10, cannot work on prior versions of Windows, as DirectX 10 is not available for them. Games that require the features of D3D9Ex, the updated implementation of DirectX 9 in Windows Vista, are likewise incompatible with previous Windows versions. According to a Microsoft blog, there are three choices for OpenGL implementation on Vista. An application can use the default implementation, which translates OpenGL calls into the Direct3D API and is frozen at OpenGL version 1.4, or an application can use an Installable Client Driver (ICD), which comes in two flavors: legacy and Vista-compatible. A legacy ICD disables the Desktop Window Manager; a Vista-compatible ICD takes advantage of a new API and is fully compatible with the Desktop Window Manager. At least two primary vendors, ATI and NVIDIA, provided full Vista-compatible ICDs. However, hardware overlay is not supported, because it is considered an obsolete feature in Vista. ATI and NVIDIA strongly recommend using the compositing desktop/framebuffer objects for the same functionality.
Installation Windows Vista is the first Microsoft operating system: To use DVD-ROM media for installation To provide during setup a selection of multiple editions of Windows available for installation (a license determines which version of Windows Vista is eligible for installation) That can be installed only on a partition formatted with the NTFS file system That supports installation from either OEM or retail media and during setup the input of a single license regardless of the installation source (previous releases of Windows maintained OEM and retail versions separately — users installing Windows from a manufacturer-supplied source could not input a retail license during setup, and users installing Windows from a retail source could not input a manufacturer-supplied license) That supports loading drivers for SCSI, SATA and RAID controllers from any source (such as optical disc drives and USB flash drives) in addition to floppy disks prior to its installation That can be installed on and booted from systems with GPT disks and UEFI firmware Removed features Some notable Windows XP applications and features have been replaced or removed in Windows Vista, including Active Desktop, MSN Explorer, HyperTerminal, the Messenger service, NetMeeting, NTBackup, and Windows Messenger. Several multimedia features, networking features, and Shell and Windows Explorer features such as the Luna visual style are no longer available. Support lifecycle Support for the original release of Windows Vista (without a service pack) ended on April 13, 2010. Windows Vista Service Pack 1 was retired on July 12, 2011, and Windows Vista Service Pack 2 reached its end of support on April 11, 2017. Upgradability Several Windows Vista components can be upgraded to later versions, including versions introduced in later releases of Windows, and newer major Microsoft applications are also available. These latest versions for Windows Vista include: DirectX 11 Internet Explorer 9 Windows Installer 4.5 Microsoft Virtual PC 2007 SP1 .NET Framework 4.6 Visual Studio 2015 Office 2010 SP2 Editions Windows Vista shipped in six different product editions. These were divided across separate consumer and business target markets, with editions varying in features to cater to specific sub-markets. For consumers, there are three editions, with two available for economically more developed countries. Windows Vista Starter edition is aimed at low-powered computers with availability only in emerging markets. Windows Vista Home Basic is intended for budget users. Windows Vista Home Premium covers the majority of the consumer market and contains applications for creating and using multimedia; the home editions consequently cannot join a Windows Server domain. For businesses, there are three editions as well. Windows Vista Business is specifically designed for small and medium-sized enterprises, while Windows Vista Enterprise is only available to Software Assurance customers. Windows Vista Ultimate contains all features from the Home and Business editions, as well as Windows Ultimate Extras. In the European Union, Home Basic N and Business N variants without Windows Media Player are also available due to sanctions brought against Microsoft for violating anti-monopoly laws; similar sanctions exist in South Korea. Visual styles Windows Vista includes four distinct visual styles: Windows Aero Windows Aero requires the Desktop Window Manager and is available in Home Premium and subsequent editions.
Windows Aero introduces support for advanced visual effects such as blurred glass translucencies and dynamic glass reflections, Flip and Flip 3D, smooth window animations, and thumbnails on the taskbar. Windows Aero is intended for mid-range to high-end video cards; to enable its features, the contents of every open window are stored in video memory to facilitate preemptive graphic operations such as tearing-free movement of windows. As a result, Windows Aero has significantly higher hardware requirements than its predecessors; video cards must have 128 MB of graphics memory and support 32 bits per pixel, DirectX 9, Pixel Shader 2.0, and the new Windows Display Driver Model (WDDM). Windows Vista Standard A variant of Windows Aero, but it lacks advanced graphical effects including blurred glass translucencies, dynamic glass reflections, and smooth window animations; it is only included in Windows Vista Home Basic. Windows Vista Basic A visual style that does not rely on the Desktop Window Manager; as such, it does not feature blurred glass translucencies, dynamic glass reflections, smooth window animations, or taskbar thumbnails. Windows Vista Basic has video card requirements similar to Windows XP, and it is the default visual style of Windows Vista Starter and on systems without support for Windows Aero. Before Windows Vista SP1, machines that failed Windows Genuine Advantage product license validation would also revert to this visual style. Windows Standard/Windows Classic This visual style reprises the user interface of Windows 9x, Windows 2000, and Windows Server. As with previous versions of Windows, this visual style supports custom color schemes, which are collections of color settings. Windows Vista includes four high-contrast color schemes and the default color schemes from Windows 98 (titled "Windows Classic") and Windows 2000/Windows Me (titled "Windows Standard"). Hardware requirements Computers capable of running Windows Vista are classified as Vista Capable and Vista Premium Ready. A Vista Capable or equivalent PC is capable of running all editions of Windows Vista although some of the special features and high-end graphics options may require additional or more advanced hardware. A Vista Premium Ready PC can take advantage of Vista's high-end features. Windows Vista's Basic and Classic interfaces work with virtually any graphics hardware that supports Windows XP or 2000; accordingly, most discussion around Vista's graphics requirements centers on those for the Windows Aero interface. As of Windows Vista Beta 2, the NVIDIA GeForce 6 series and later, the ATI Radeon 9500 and later, Intel's GMA 950 and later integrated graphics, and a handful of VIA chipsets and S3 Graphics discrete chips are supported. Although originally supported, the GeForce FX 5 series has been dropped from newer drivers from NVIDIA. The last driver from NVIDIA to support the GeForce FX series on Vista was 96.85. Microsoft offered a tool called the Windows Vista Upgrade Advisor to assist Windows XP and Vista users in determining what versions of Windows their machine is capable of running. The required server connections for this utility are no longer available. Although the installation media included in retail packages is a 32-bit DVD, customers needing a CD-ROM or customers who want 64-bit install media can acquire it through the Windows Vista Alternate Media program. The Ultimate edition includes both 32-bit and 64-bit media.
The digitally downloaded version of Ultimate includes only one version, either 32-bit or 64-bit, from Windows Marketplace. Physical memory limits The maximum amount of RAM that Windows Vista supports varies by edition and processor architecture. Processor limits All editions except Windows Vista Starter support both the 32-bit (x86) architecture and the additional 64-bit (x86-64) instruction set extensions; Vista was the first consumer release of Windows to support the latter. Support for Intel's IA-64 Itanium architecture, however, is exclusively limited to the Vista-based Windows Server 2008. The maximum number of logical processors in a PC that Windows Vista supports is: 32 for 32-bit; 64 for 64-bit. The maximum number of physical processors in a PC that Windows Vista supports is: one processor for Windows Vista Starter, Windows Vista Home Basic, and Windows Vista Home Premium, and two processors for Windows Vista Business, Windows Vista Enterprise, and Windows Vista Ultimate. Updates Microsoft releases updates such as service packs for its Windows operating systems to add features, address issues, and improve performance and stability. Service Pack 1 Windows Vista Service Pack 1 (SP1) was released on February 4, 2008, alongside Windows Server 2008 to OEM partners, after a five-month beta test period. The initial deployment of the service pack caused a number of machines to continually reboot, rendering the machines unusable. This temporarily caused Microsoft to suspend automatic deployment of the service pack until the problem was resolved. The synchronized release date of the two operating systems reflected the merging of the workstation and server kernels back into a single code base for the first time since Windows 2000. MSDN subscribers were able to download SP1 on February 15, 2008. SP1 became available to current Windows Vista users on Windows Update and the Download Center on March 18, 2008. Initially, the service pack only supported five languages – English, French, Spanish, German and Japanese. Support for the remaining 31 languages was released on April 14, 2008. A white paper, published by Microsoft on August 29, 2007, outlined the scope and intent of the service pack, identifying three major areas of improvement: reliability and performance, administration experience, and support for newer hardware and standards. One area of particular note is performance. Areas of improvement include file copy operations, hibernation, logging off on domain-joined machines, JavaScript parsing in Internet Explorer, network file share browsing, Windows Explorer ZIP file handling, and Windows Disk Defragmenter. The ability to choose individual drives to defragment was reintroduced as well. Service Pack 1 introduced support for some new hardware and software standards, notably the exFAT file system, 802.11n wireless networking, IPv6 over VPN connections, and the Secure Socket Tunneling Protocol. Booting a system using Extensible Firmware Interface on x64 systems was also introduced; this feature had originally been slated for the initial release of Vista but was delayed due to a lack of compatible hardware at the time. Booting from a GUID Partition Table–based hard drive greater than 2.19 TB is supported (x64 only). Two areas have seen changes in SP1 that have come as the result of concerns from software vendors.
One of these is desktop search; users can change the default desktop search program to one provided by a third party instead of the Microsoft desktop search program that comes with Windows Vista, and desktop search programs can seamlessly tie their services into the operating system. These changes came in part due to complaints from Google, whose Google Desktop Search application was hindered by the presence of Vista's built-in desktop search. In June 2007, Google claimed that the changes being introduced for SP1 "are a step in the right direction, but they should be improved further to give consumers greater access to alternate desktop search providers". The other area of note is a set of new security APIs introduced for the benefit of antivirus software that had relied on the unsupported practice of patching the kernel (see Kernel Patch Protection). An update to DirectX 10, named DirectX 10.1, made mandatory several features that were previously optional in Direct3D 10 hardware; graphics cards would be required to support DirectX 10.1. SP1 includes a kernel (6001.18000) that matches the version shipped with Windows Server 2008. The Group Policy Management Console (GPMC) was replaced by the Group Policy Object Editor; an updated downloadable version of the Group Policy Management Console was released soon after the service pack. SP1 enables support for hotpatching, a reboot-reduction servicing technology designed to maximize uptime. It works by allowing Windows components to be updated (or "patched") while they are still in use by a running process. Hotpatch-enabled update packages are installed via the same methods as traditional update packages and do not trigger a system reboot. Service Pack 2 Service Pack 2 for Windows Vista and Windows Server 2008 was released through different channels between April 28 and June 9, 2009, one year after the release of Windows Vista SP1 and four months before the release of Windows 7. In addition to a number of security and other fixes, a number of new features were added. However, SP2 did not include Internet Explorer 8, which instead shipped with Windows 7. Windows Search 4 (available for SP1 systems as a standalone update) Feature Pack for Wireless adds support for Bluetooth 2.1 Windows Feature Pack for Storage enables data recording onto Blu-ray media Windows Connect Now (WCN) to simplify Wi-Fi configuration Improved support for resuming with active Wi-Fi connections Improved support for eSATA drives The limit of 10 half-open, outgoing TCP connections introduced in Windows XP SP2 was removed Enables the exFAT file system to support UTC timestamps, which allows correct file synchronization across time zones Support for ICCD/CCID smart cards Support for VIA 64-bit CPUs Improved performance and responsiveness with the RSS feeds sidebar Improves audio and video performance for streaming high-definition content Improves content protection for TV in Windows Media Center (WMC) Provides an improved power management policy that is approximately 10% more efficient than the original with the default policies Windows Vista and Windows Server 2008 share a single service pack binary, reflecting the fact that their code bases were joined with the release of Server 2008. Service Pack 2 is not a cumulative update, meaning that Service Pack 1 must be installed first. 
Platform Update The Platform Update for Windows Vista and Windows Server 2008 (KB971644) was announced on September 10, 2009, and released on October 27, 2009. It allows developers to target both Windows Vista and Windows 7 by backporting several significant components: Windows Automation API 3.0 (MSAA and UI Automation updates) Windows Graphics Runtime (Direct2D, Direct3D 10 Level 9, Direct3D 11, DirectX 11, DXGI 1.1, DirectWrite, and WARP) XPS Document API, XPS Rasterization Service, and XPS Print API Windows Ribbon and Animation Manager Library (Windows Animation Manager API and Windows Ribbon API) Windows Portable Devices Platform (Media Transfer Protocol over Bluetooth and WPD over MTP Device Services) With the release of the Platform Update on October 27, 2009, the Windows Management Framework (Background Intelligent Transfer Service 4.0, Windows PowerShell 2.0, and Windows Remote Management 2.0) of Windows 7 was also made available to users of Windows XP and Windows Vista. Remote Desktop Connection 7.0 was made available as well. In July 2011, Microsoft released the Platform Update Supplement (KB2117917) to address issues and improve performance on Windows Vista and Windows Server 2008 machines with the Platform Update installed. Out-of-band patches BlueKeep patch Microsoft released an update for Windows Vista SP2 to resolve the BlueKeep security vulnerability, which affects the Remote Desktop Protocol of several versions of Windows. Subsequent related flaws (collectively known as DejaBlue) do not affect Windows Vista or earlier versions of Windows. The installation of this patch in Windows Vista changes the build number of Windows Vista from 6002 to 6003. CredSSP encryption oracle remediation A remote code execution vulnerability was discovered in the Credential Security Support Provider protocol (CredSSP) that could allow attackers to relay user credentials during a connection to execute code on a targeted system. Microsoft released a patch to address the issue. Microsoft Malware Protection Engine patch A vulnerability related to Windows Defender that affected the way the Malware Protection Engine operates was reported in May 2017. If Windows Defender scanned a specially crafted file, it would lead to memory corruption, potentially allowing an attacker to control the affected machine or perform arbitrary code execution in the context of LocalSystem; the vulnerability was exacerbated by the default real-time protection settings of Windows Defender, which were configured to automatically initiate malware scans at regular intervals. The first version of the Protection Engine affected by the vulnerability is Version 1.1.13701.0; subsequent versions of the engine are unaffected. Microsoft released a patch to address the issue. Text Services Framework patch The Text Services Framework was compromised by a privilege escalation vulnerability that could allow attackers to use the framework to perform privileged operations, run software, or send messages to privileged processes from unprivileged processes, bypassing security features such as sandboxes or User Account Control. Microsoft remediated issues related to this vulnerability with the release of a patch in August 2019 for Windows Vista SP2, Windows Server 2008 SP2, and later versions of Windows. 
Marketing campaigns The Mojave Experiment Microsoft introduced an advertising campaign in July 2008 called the Mojave Experiment, which depicted a group of people being asked to evaluate what was purported to be a new operating system codenamed "Mojave". Participants were asked for their impressions of Windows Vista, whether they used it, and to assess it on a scale from one to ten. Participants were then shown a demonstration of Windows Vista features and asked to assess "Mojave"; none of the participants gave "Mojave" a rating lower than their initial rating of Windows Vista. The campaign implied that the negative reception of Windows Vista was based partially on preconceived ideas. The campaign was criticized for focusing on positive statements from participants and not addressing all criticism of Windows Vista. Reception Windows Vista received mixed reviews at the time of its release and throughout its lifespan, mainly for its much higher hardware requirements and perceived slowness compared to Windows XP. It received generally positive reviews from PC gamers, who praised the advantages brought by DirectX 10, which allowed for better gaming performance and more realistic graphics, as well as support for many new capabilities featured in new GPUs. However, many DirectX 9 games initially ran with lower frame rates than they had on Windows XP. In mid-2008, benchmarks suggested that the SP1 update improved performance to be on par with (or better than) Windows XP in terms of game performance. Peter Bright of Ars Technica wrote that, despite its delays and feature cuts, Windows Vista was "a huge evolution in the history of the NT platform [...] The fundamental changes to the platform are of a scale not seen since the release of NT [3.1; the first version]." In a continuation of his previous assessment, Bright stated that "Vista is not simply XP with a new skin; core parts of the OS have been radically overhauled, and virtually every area has seen significant refinement. In terms of the magnitude and extent of these changes, Vista represents probably the biggest leap that the NT platform has ever seen. Never before have significant subsystems been gutted and replaced in the way they are in Vista." Many others in the tech industry echoed these sentiments at the time, directing praise towards the massive amount of technical features new to Windows Vista. Windows Vista received the "Best of CES" award at the Consumer Electronics Show in 2007. In its first year of availability, PC World rated it as the biggest tech disappointment of 2007, and InfoWorld rated it No. 2 among tech's all-time 25 flops. Microsoft's then much smaller competitor Apple noted that, despite Vista's far greater sales, its own operating system did not seem to have suffered after Vista's release, and it would later invest in advertising mocking Vista's unpopularity with users. Computer manufacturers such as Dell, Lenovo, and Hewlett-Packard released their newest computers with Windows Vista pre-installed; however, after the negative reception of the operating system, they also began selling their computers with Windows XP CDs included because of a drop in sales. Post-release The Service Pack 1 update, released in 2008, received mixed reviews. Gizmodo wrote that it did not solve the "most annoying flaws" of the original release of the operating system. Robert Vamosi of CNET thought that while it fixed many small problems, it did not significantly improve performance. 
Service Pack 2 was well received, with TechRadar writing in its review that it is a "must-have upgrade that finally makes Vista a joy to use." The New York Times's Randall Kennedy, while previewing the beta version, praised the performance improvements. Sales A Gartner research report predicted that Vista business adoption in 2008 would overtake that of XP during the same time frame (21.3% vs. 16.9%), while IDC indicated that the launch of Windows Server 2008 served as a catalyst for the stronger adoption rates. As of January 2009, Forrester Research indicated that almost one third of North American and European corporations had started deploying Vista. At a May 2009 conference, a Microsoft vice president said "Adoption and deployment of Windows Vista has been slightly ahead of where we had been with XP" for big businesses. Within its first month, 20 million copies of Vista were sold, double the number of copies Windows XP sold in its first month in October 2001, five years earlier. Shortly afterward, however, due to Vista's relatively low adoption rates and continued demand for Windows XP, Microsoft decided to sell Windows XP until June 30, 2008, instead of the previously planned date of January 31, 2008. There were reports of Vista users "downgrading" their operating systems back to XP, as well as reports of businesses planning to skip Vista. A study conducted by ChangeWave in March 2008 showed that the percentage of corporate users who were "very satisfied" with Vista was dramatically lower than for other operating systems, with Vista at 8% compared to the 40% who said they were "very satisfied" with Windows XP. The internet-usage market share for Windows Vista after two years of availability, in January 2009, was 20.61%. This figure, combined with World Internet Users and Population Stats, yielded a user base of roughly 330 million, which exceeded Microsoft's two-year post-launch expectations by 130 million. According to the same statistical sources, the internet user base reached roughly 400 million before the release of its successor, Windows 7. Criticism Windows Vista received mixed reviews. Criticism targets include protracted development time (5–6 years), more restrictive licensing terms, the inclusion of several technologies aimed at restricting the copying of protected digital media, and the usability of the new User Account Control security technology. Concerns were also raised about whether many PCs could meet the "Vista Premium Ready" hardware requirements, and about Vista's pricing. Hardware requirements While in 2005 Microsoft claimed "nearly all PCs on the market today will run Windows Vista", the higher requirements of some of the "premium" features, such as the Aero interface, affected many upgraders. According to the UK newspaper The Times in May 2006, the full set of features "would be available to less than 5 percent of Britain's PC market"; however, this prediction was made several months before Vista was released. This continuing lack of clarity eventually led to a class-action lawsuit against Microsoft, as people found themselves with new computers that were unable to use the new software to its full potential despite the assurance of "Vista Capable" designations. The court case made public internal Microsoft communications indicating that senior executives also had difficulty with this issue. 
For example, Mike Nash (Corporate Vice President, Windows Product Management) commented, "I now have a $2,100 e-mail machine" because his laptop lacked the graphics chip needed for Vista's advanced features. Licensing Criticism of upgrade licenses pertaining to Windows Vista Starter through Home Premium was expressed by Ars Technica's Ken Fisher, who noted that the new requirement of having a prior operating system already installed was going to irritate users who reinstall Windows regularly. It was later revealed that an Upgrade copy of Windows Vista can be installed clean without first installing a previous version of Windows: on the first install, Windows will refuse to activate; the user must then reinstall that same copy of Vista, which will activate on the reinstall, thus allowing a user to install an Upgrade copy of Windows Vista without owning a previous operating system. As with Windows XP, separate rules still apply to OEM versions of Vista installed on new PCs: Microsoft asserts that these versions are not legally transferable (although whether this conflicts with the right of first sale has yet to be clearly decided legally). Cost Initially, the cost of Windows Vista was also a source of concern and commentary. A majority of users in a poll said that the prices of the various Windows Vista editions posted on the Microsoft Canada website in August 2006 made the product too expensive. A BBC News report on the day of Vista's release suggested that "there may be a backlash from consumers over its pricing plans—with the cost of Vista versions in the US roughly half the price of equivalent versions in the UK." Since the release of Vista in 2006, Microsoft has reduced the retail and upgrade price points of Vista. Originally, Vista Ultimate was priced at $399 and Vista Home Premium at $239; these prices have since been reduced to $319 and $199, respectively. Digital rights management Windows Vista supports additional forms of DRM restrictions. One aspect of this is the Protected Video Path, which is designed so that "premium content" from HD DVD or Blu-ray Discs may mandate that the connections between PC components be encrypted. Depending on what the content demands, the devices may not pass premium content over non-encrypted outputs, or they must artificially degrade the quality of the signal on such outputs or not display it at all. Drivers for such hardware must be approved by Microsoft; a revocation mechanism is also included, which allows Microsoft to disable drivers of devices in end-user PCs over the Internet. Peter Gutmann, security researcher and author of the open source cryptlib library, claims that these mechanisms violate fundamental rights of the user (such as fair use), unnecessarily increase the cost of hardware, and make systems less reliable (the "tilt bit" being a particular worry: if triggered, the entire graphics subsystem performs a reset) and vulnerable to denial-of-service attacks. However, despite several requests for supporting evidence, Gutmann never backed his claims with researched evidence. Proponents claimed that Microsoft had no choice but to follow the demands of the movie studios, and that the technology would not actually be enabled until after 2010; Microsoft also noted that content protection mechanisms have existed in Windows as far back as Windows Me, and that the new protections would not apply to any existing content, only future content. 
User Account Control Although User Account Control (UAC) is an important part of Vista's security infrastructure, as it blocks software from silently gaining administrator privileges without the user's knowledge, it has been widely criticized for generating too many prompts. This led many Vista users to consider UAC troublesome, with some consequently either turning the feature off or (for Windows Vista Enterprise or Windows Vista Ultimate users) putting it in auto-approval mode. Responding to this criticism, Microsoft altered the implementation to reduce the number of prompts with SP1. Though the changes resulted in some improvement, they did not alleviate the concerns completely. Downgrade rights End-users of licenses of Windows 7 acquired through OEM or volume licensing may downgrade to the equivalent edition of Windows Vista. Downgrade rights are not offered for the Starter, Home Basic, or Home Premium editions of Windows 7. For Windows 8 licenses acquired through an OEM, a user may also downgrade to the equivalent edition of Windows Vista. Customers licensed for use of Windows 8 Enterprise are generally licensed for Windows 8 Pro, which may be downgraded to Windows Vista Business. See also BlueKeep (security vulnerability) Comparison of Windows Vista and Windows XP Microsoft Security Essentials Notes References External links Windows Vista End of Support Windows Vista Service Pack 2 (SP2) Update 2006 software IA-32 operating systems Products and services discontinued in 2017 Vista X86-64 operating systems Microsoft Windows
Windows Vista
[ "Technology" ]
14,149
[ "Computing platforms", "Microsoft Windows" ]
2,289,914
https://en.wikipedia.org/wiki/Fr%C3%A9my%27s%20salt
Frémy's salt is a chemical compound with the formula K4[ON(SO3)2]2, sometimes written as K2[NO(SO3)2]. It is a bright yellowish-brown solid, but its aqueous solutions are bright violet. The related sodium salt, disodium nitrosodisulfonate (NDS, Na2ON(SO3)2, CAS 29554-37-8), is also referred to as Frémy's salt. Regardless of the cation, the salts are distinctive because aqueous solutions contain the radical dianion [ON(SO3)2]2−. Applications Frémy's salt, being a long-lived free radical, is used as a standard in electron paramagnetic resonance (EPR) spectroscopy, e.g. for the quantitation of radicals. Its intense EPR spectrum is dominated by three lines of equal intensity with a spacing of about 13 G (1.3 mT). The inorganic aminoxyl group is a persistent radical, akin to TEMPO. It has been used in some oxidation reactions, such as the oxidation of some anilines and phenols, allowing polymerization and cross-linking of peptides and peptide-based hydrogels. It can also be used as a model for peroxyl radicals in studies that examine the antioxidant mechanism of action in a wide range of natural products. Preparation Frémy's salt is prepared from hydroxylaminedisulfonic acid. Oxidation of the conjugate base gives the purple dianion: HON(SO3H)2 → [HON(SO3)2]2− + 2 H+ 2 [HON(SO3)2]2− + PbO2 → 2 [ON(SO3)2]2− + PbO + H2O The synthesis can be performed by combining nitrite and bisulfite to give the hydroxylaminedisulfonate. Oxidation is typically conducted at low temperature, either chemically or by electrolysis. Other reactions: HNO2 + 2 HSO3− → [HON(SO3)2]2− + H2O 3 [HON(SO3)2]2− + MnO4− + H+ → 3 [ON(SO3)2]2− + MnO2 + 2 H2O 2 [ON(SO3)2]2− + 4 K+ → K4[ON(SO3)2]2 History Frémy's salt was discovered in 1845 by Edmond Frémy (1814–1894). Its use in organic synthesis was popularized by Hans Teuber, such that an oxidation using this salt is called the Teuber reaction. References Further reading Free radicals Oxidizing agents Sodium compounds Potassium compounds Nitrogen–oxygen compounds Reagents for organic chemistry
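The three-line EPR pattern follows directly from hyperfine coupling to the single 14N nucleus (I = 1, so m_I = −1, 0, +1). Below is a minimal Python sketch of the expected line positions; the X-band microwave frequency of 9.5 GHz and the g-value of roughly 2.0055 are illustrative assumptions for the sketch, not values taken from this article.

```python
# Sketch: positions of the three 14N hyperfine lines of a nitroxide-like
# radical at X-band. Assumed inputs: 9.5 GHz microwave frequency and
# g ~ 2.0055; only the 13 G spacing comes from the text above.
H = 6.62607015e-34       # Planck constant, J s
MU_B = 9.2740100783e-24  # Bohr magneton, J/T

def epr_lines(freq_hz=9.5e9, g=2.0055, a_n_gauss=13.0):
    """Return the three resonance fields (gauss) for m_I = -1, 0, +1."""
    center_gauss = H * freq_hz / (g * MU_B) * 1e4  # tesla -> gauss
    return [center_gauss + m * a_n_gauss for m in (-1, 0, 1)]

print([round(b, 1) for b in epr_lines()])  # three lines ~13 G apart near 3384 G
```

To first order the line spacing equals the nitrogen hyperfine constant, which is why the quoted 13 G splitting can be read straight off the spectrum.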
Frémy's salt
[ "Chemistry", "Biology" ]
543
[ "Redox", "Free radicals", "Oxidizing agents", "Senescence", "Biomolecules", "Reagents for organic chemistry" ]
2,289,986
https://en.wikipedia.org/wiki/Taylor%20cone
A Taylor cone refers to the cone observed in electrospinning, electrospraying and hydrodynamic spray processes, from which a jet of charged particles emanates above a threshold voltage. Aside from electrospray ionization in mass spectrometry, the Taylor cone is important in field-emission electric propulsion (FEEP) and colloid thrusters used in fine control and high efficiency (low power) thrust of spacecraft. History This cone was described by Sir Geoffrey Ingram Taylor in 1964, before electrospray was "discovered". This work followed on the work of Zeleny, who photographed a cone-jet of glycerine in a strong electric field, and the work of several others: Wilson and Taylor (1925), Nolan (1926) and Macky (1931). Taylor was primarily interested in the behavior of water droplets in strong electric fields, such as in thunderstorms. Formation When a small volume of electrically conductive liquid is exposed to an electric field, the shape of the liquid starts to deform from the shape caused by surface tension alone. The liquid becomes polarized, and as the voltage is increased the effect of the electric field becomes more prominent; this causes an intense electric field surrounding the liquid droplet. As the electric field begins to exert a force on the droplet of similar magnitude to that of surface tension, a cone shape begins to form with convex sides and a rounded tip. This approaches the shape of a cone with a whole angle (width) of 98.6°. When a certain threshold voltage has been reached, the slightly rounded tip inverts and emits a jet of liquid. This is called a cone-jet and is the beginning of the electrospraying process, in which ions may be transferred to the gas phase. It is generally found that in order to achieve a stable cone-jet a voltage slightly higher than the threshold must be used. As the voltage is increased even more, other modes of droplet disintegration are found. The term Taylor cone can specifically refer to the theoretical limit of a perfect cone of exactly the predicted angle, or generally refer to the approximately conical portion of a cone-jet after the electrospraying process has begun. Taylor cones can be stationary, as in the cone-jets described previously, or transient, forming when droplets undergo Coulombic explosion. Theory In 1964, Sir Geoffrey Ingram Taylor described this phenomenon theoretically, deriving from general assumptions that a perfect cone under such conditions requires a semi-vertical angle of 49.3° (a whole angle of 98.6°) and demonstrating that the shape of such a cone approaches the theoretical shape just before jet formation. This angle is known as the Taylor angle. More precisely, the angle is π − θ0, where θ0 is the first zero of P1/2(cos θ) (the Legendre function of order 1/2). Taylor's derivation is based on two assumptions: (1) that the surface of the cone is an equipotential surface and (2) that the cone exists in a steady state equilibrium. To meet both of these criteria, the electric potential must have azimuthal symmetry and a √R dependence to counter the surface tension and produce the cone. The solution to this problem is: V = V0 + A R^(1/2) P1/2(cos θ), where V = V0 (an equipotential surface) exists at a value of θ0 (regardless of R), producing an equipotential cone. The angle necessary for V = V0 for all R is a zero of P1/2(cos θ) between 0 and π, of which there is only one, at 130.7099°. The complement of this angle is the Taylor angle. References Mass spectrometry
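The √R requirement can be motivated with a one-line stress balance. The LaTeX fragment below is a schematic sketch under simplifying assumptions (geometric prefactors involving the cone angle are omitted); it is not a reproduction of Taylor's full derivation.

```latex
% Schematic stress balance at distance R from the cone apex:
% capillary pressure ~ gamma/R must match the electrostatic pressure
% ~ (1/2) eps0 E^2, forcing E ~ R^{-1/2} and hence V ~ R^{1/2}.
\[
  \frac{\gamma}{R} \sim \tfrac{1}{2}\,\varepsilon_0 E^2
  \quad\Longrightarrow\quad
  E \propto R^{-1/2}, \qquad V \propto R^{1/2},
\]
\[
  V(R,\theta) = V_0 + A\,R^{1/2}\,P_{1/2}(\cos\theta),
  \qquad P_{1/2}(\cos\theta_0) = 0
  \;\Rightarrow\; \theta_0 \approx 130.7099^\circ,
  \quad 180^\circ - \theta_0 \approx 49.3^\circ .
\]
```

The last line recovers the Taylor angle as the complement of the unique zero of the Legendre function, matching the 49.3° quoted above.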
Taylor cone
[ "Physics", "Chemistry" ]
721
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
2,290,139
https://en.wikipedia.org/wiki/Chen%20Quan
Chen Quan (born December 1963) is a Chinese pilot selected as part of the Shenzhou program. Chen was born in Suining, Sichuan, China. He joined the People's Liberation Army Air Force, where he became a fighter interceptor pilot and later served as a regiment commander. Career as an astronaut Chen was selected to be an astronaut in 1998 and served as commander of the backup crew for Shenzhou 7, which flew in September 2008. Chen Quan retired from the Astronaut Corps in 2014. See also List of Chinese astronauts References Chen Quan at the Encyclopedia Astronautica. Accessed 23 July 2005. Spacefacts biography of Chen Quan Living people People's Liberation Army Astronaut Corps People from Suining Shenzhou program astronauts People's Liberation Army Air Force personnel 1963 births
Chen Quan
[ "Astronomy" ]
157
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
2,290,155
https://en.wikipedia.org/wiki/Daniel%20Kan
Daniel Marinus Kan (or simply Dan Kan) (August 4, 1927 – August 4, 2013) was a Dutch mathematician working in category theory and homotopy theory. He was a prolific contributor to both fields for six decades, having authored or coauthored several dozen research papers and monographs. Career Daniel Kan was born into a Jewish family. He received his Ph.D. from the Hebrew University in 1955, under the direction of Samuel Eilenberg. His students include Aldridge K. Bousfield, William Dwyer, Stewart Priddy, Emmanuel Dror Farjoun, and Jeffrey H. Smith. He was an emeritus professor at the Massachusetts Institute of Technology, where he taught from 1959, formally retiring in 1993. Work He played a role in the beginnings of modern homotopy theory similar to that of Saunders Mac Lane in homological algebra, namely the adroit and persistent application of categorical methods. His most famous work is the abstract formulation of the discovery of adjoint functors, which dates from 1958. The Kan extension is one of the broadest descriptions of a useful general class of adjunctions. From the mid-1950s he made distinguished contributions to the theory of simplicial sets and simplicial methods in topology in general. In recognition of this, fibrations in the usual closed model category structure on the category of simplicial sets are known as Kan fibrations, and the fibrant objects are known as Kan complexes. Some of Kan's later work concerned model categories and other homotopical categories. Especially noteworthy are his work with Aldridge Bousfield on completions and homotopy limits, and his work with William Dwyer on simplicial localizations of relative categories. See also Dold–Kan correspondence Kan extension References External links Kan memorial note at the MIT Mathematics Department 1927 births 2013 deaths Dutch emigrants to Israel Israeli expatriates in the United States 20th-century Dutch mathematicians 20th-century Israeli mathematicians Hebrew University of Jerusalem alumni Massachusetts Institute of Technology faculty Topologists
Daniel Kan
[ "Mathematics" ]
416
[ "Topologists", "Topology" ]
2,290,281
https://en.wikipedia.org/wiki/Ghost%20net
Ghost nets are fishing nets that have been abandoned, lost, or otherwise discarded in the ocean, lakes, and rivers. These nets, often nearly invisible in the dim light, can be left tangled on a rocky reef or drifting in the open sea. They can entangle fish, dolphins, sea turtles, sharks, dugongs, crocodiles, seabirds, crabs, and other creatures, including the occasional human diver. Acting as designed, the nets restrict movement, causing starvation, laceration and infection, and suffocation in animals that need to return to the surface to breathe. It is estimated that around 48 million tons (48,000 kt) of lost fishing gear are generated each year, not including gear that was abandoned or discarded, and these nets may linger in the oceans for a considerable time before breaking up. Description Some commercial fishermen use gillnets. These are suspended in the sea by flotation buoys, such as glass floats, along one edge. In this way they can form a vertical wall hundreds of metres long, where any fish within a certain size range can be caught. Normally these nets are collected by fishermen and the catch removed. If this is not done, the net can continue to catch fish until the weight of the catch exceeds the buoyancy of the floats. The net then sinks, and the fish are devoured by bottom-dwelling crustaceans and other fish. Then the floats pull the net up again and the cycle continues. Given the high-quality synthetics that are used today, the destruction can continue for a long time. The problem is not just nets but ghost gear in general; old-fashioned crab traps, without the required "rot-out panel", also sit on the bottom, where they become self-baiting traps that can continue to trap marine life for years. Even balled-up fishing line can be deadly for a variety of creatures, including birds and marine mammals. Over time the nets become more and more tangled. In general, fish are less likely to be trapped in gear that has been down a long time. Fishermen sometimes abandon worn-out nets because doing so is often the easiest way to get rid of them. The French government offered a reward for ghost nets handed in to local coastguards along sections of the Normandy coast between 1980 and 1981. The project was abandoned when people vandalized nets to claim rewards, without retrieving anything at all from the shoreline or ocean. In September 2015, the Global Ghost Gear Initiative (GGGI) was created by World Animal Protection to give a unique and stronger voice to the cause. The term ALDFG means "abandoned, lost and discarded fishing gear". Environmental impact From 2000 to 2012, the National Marine Fisheries Service reported an average of 11 large whales entangled in ghost nets every year along the US west coast. From 2002 to 2010, 870 nets were recovered in Washington state, with over 32,000 marine animals trapped inside. Ghost gear is estimated to account for 10% (640,000 tonnes) of all marine litter. An estimated 46% of the Great Pacific Garbage Patch consists of fishing-related plastics. Fishing nets account for about 1% of the total mass of all marine macroplastics larger than , and plastic fishing gear overall constitutes over two-thirds of the total mass. According to the SeaDoc Society, each ghost net kills $20,000 worth of Dungeness crab over 10 years. The Virginia Institute of Marine Science calculated that ghost crab pots capture 1.25 million blue crabs each year in the Chesapeake Bay alone. 
In May 2016, the Australian Fisheries Management Authority (AFMA) recovered 10 tonnes of abandoned nets within the Australian Exclusive Economic Zone and Torres Strait protected zone perimeters. One protected turtle was rescued. The northern Australian olive ridley sea turtle (Lepidochelys olivacea) is a genetically distinct population of the olive ridley sea turtle, and ghost nets pose a threat to its continued existence. Without further action to preserve the northern Australian olive ridley sea turtle, the population could face extinction. Researchers in Brazil used social media to estimate how ghost nets have negatively affected the Brazilian marine biota. Footage of ghost nets found on Google and YouTube was obtained and analyzed to arrive at the results of the study. They found that ghost nets have an adverse effect on several marine species, including large marine animals such as the Bryde's whale and Guiana dolphin. Solutions Alternative materials and practice Unlike synthetic fishing nets, biodegradable fishing nets decompose naturally under water after a certain period of time. Coconut fibre (coir) fishing nets are commercially made and are hence a practical option for fishermen. Technology systems for marking and tracking fishing gear, including GPS tracking, are being trialled to promote greater accountability and transparency. Collection and recycling Legalizing gear retrievals and establishing waste management systems are required to manage and mitigate abandoned, lost, and discarded fishing gear at sea. The company Net-works worked out a solution to turn discarded fishing nets into carpet tiles. Between 2008 and 2015, the US Fishing for Energy initiative collected 2.8 million pounds of fishing gear and, in partnership with Reworld, turned this into enough electricity to power 182 homes for one year by incineration. One retrieval initiative in Southwest Nova Scotia in Canada conducted 60 retrieval trips, searched ~1523 square kilometers of the seafloor, and removed 7064 kg of abandoned, lost, and discarded fishing gear (ALDFG), comprising 66% lobster traps and 22% dragger cable. Lost traps continued to capture target and non-target species. A total of 15 different species were released from retrieved ALDFG, including 239 lobsters (67% were market-sized) and seven groundfish (including five species-at-risk). The commercial losses from ALDFG in Southwest Nova Scotia were estimated at $175,000 CAD annually. In 2009, the Dutch technical diver Pascal van Erp started to recover abandoned ghost fishing gear entangled on North Sea wrecks. He soon inspired others, and organised teams of volunteer technical divers recovered tons of ghost fishing gear off the Netherlands coastline. The loop was then closed: after a season's diving, 22 tons of fishing gear were sent to the Aquafil Group for recycling back into new Nylon 6 material. In 2012, Pascal van Erp formally founded the not-for-profit Ghost Fishing organisation. In 2020, the Ghost Fishing Foundation rebranded as the Ghost Diving Foundation. A plan to protect UK seas from ghost fishing was backed by the European Parliament Fisheries Committee in 2018. Mr. Flack, who led the committee, said: "Abandoned fishing nets are polluting our seas, wasting fishing stocks and indiscriminately killing whales, sea lions or even dolphins. The tragedy of ghost fishing must end". 
Net amnesty schemes such as Fishing for Litter create incentives for the collection and responsible disposal of end-of-life fishing gear. These schemes address the root cause of many net abandonments, which is the financial cost of their disposal. Fishing nets are often made from extremely high-quality plastics to ensure suitable strength, which makes them desirable for recycling. Initiatives like Healthy Seas connect environmental cleanup projects to manufacturers to re-use these materials. Recycled waste nets can be made into yarn and consumer products, such as swimwear. In Australia, the Carpentaria Ghost Nets Program has collaborated with indigenous communities to increase awareness of ghost nets and to foster long-term solutions. The program has trained indigenous northern Australians in scouting for ghost nets and in removing ghost nets and other plastic pollution. See also Drift netting Monofilament fishing line#Environmental impact The Derelict Crab Trap Program Plastic pollution General: Marine debris List of environmental issues Notes References Macfadyen G, Huntington T and Cappell R (2009) Abandoned, lost or otherwise discarded fishing gear FAO: Fisheries and Aquaculture, Technical paper 523. Rome. External links Film on Ghost nets in the Indian Ocean Ghost nets in the Indian Ocean Ghost Diving - International cleanup projects Ghost Net Project Carpentaria Ghost Net Programme Team Hunts Deadly 'Ghost Nets' in the Pacific Tracking Down Ghost Nets Ghost nets kill sea turtles Ghost nets hurting marine environment: UN report Environmental impact of fishing Nets (devices) Water pollution
Ghost net
[ "Chemistry", "Environmental_science" ]
1,676
[ "Water pollution" ]
2,290,298
https://en.wikipedia.org/wiki/179%20%28number%29
179 (one hundred [and] seventy-nine) is the natural number following 178 and preceding 180. In mathematics 179 is part of the Cunningham chain of prime numbers 89, 179, 359, 719, 1439, 2879, in which each successive number is two times the previous number, plus one. Among Cunningham chains of this length, this one has the smallest numbers. Because 179 is neither the start nor the end of this chain, it is both a safe prime and a Sophie Germain prime. It is also a super-prime number, because it is the 41st smallest prime and 41 is also prime. Since 971 (the digits of 179 reversed) is prime, 179 is an emirp. See also References External links Integers
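The chain above is easy to verify computationally. Here is a short, self-contained Python sketch; the helper names are illustrative, not from any standard library.

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for the small numbers checked here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def cunningham_chain(start: int):
    """Extend a chain of the first kind: each term is 2 * previous + 1."""
    chain = [start]
    while is_prime(2 * chain[-1] + 1):
        chain.append(2 * chain[-1] + 1)
    return chain

print(cunningham_chain(89))  # [89, 179, 359, 719, 1439, 2879]
```

The chain terminates at 2879 because 2 × 2879 + 1 = 5759 = 13 × 443 is composite, which is why 179 sits strictly inside the chain and is both safe and Sophie Germain.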
179 (number)
[ "Mathematics" ]
152
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
2,290,337
https://en.wikipedia.org/wiki/181%20%28number%29
181 (one hundred [and] eighty-one) is the natural number following 180 and preceding 182. In mathematics 181 is prime, and a palindromic, strobogrammatic, and dihedral number in decimal. 181 is a Chen prime. 181 is a twin prime with 179, equal to the sum of five consecutive prime numbers: 29 + 31 + 37 + 41 + 43. 181 is the difference of two consecutive square numbers, 91² − 90², as well as the sum of two consecutive squares: 9² + 10². As a centered polygonal number, 181 is a centered square number, a centered pentagonal number, and a centered dodecagonal number. 181 is also a centered (hexagram) star number, as in the game of Chinese checkers. Specifically, 181 is the 42nd prime number and 16th full reptend prime in decimal, where multiples of its reciprocal inside a prime reciprocal magic square repeat 180 digits with a magic sum of 810; this value is one less than 811, the 141st prime number and 49th full reptend prime (or equivalently long prime) in decimal, whose reciprocal repeats 810 digits. While the first full, non-normal prime reciprocal magic square is based on 1/19, with a magic constant of 81 from an 18 × 18 square, a normal magic square of order n has a magic constant of n(n² + 1)/2; the next such full, prime reciprocal magic square is based on multiples of the reciprocal of 383 (also palindromic). 181 is an undulating number in the ternary and nonary numeral systems, while in decimal it is the 28th undulating prime. References External links Prime curiosities: 181 Number Facts and Trivia: 181 Number Gossip: 181 Integers
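Both the full-reptend claim and the magic sum of 810 can be checked in a few lines of Python; this is an illustrative verification by long division, not a proof.

```python
def repetend_length(p: int, base: int = 10) -> int:
    """Multiplicative order of `base` mod p, i.e. the period of 1/p."""
    k, r = 1, base % p
    while r != 1:
        r = (r * base) % p
        k += 1
    return k

print(repetend_length(181))  # 180, so 181 is a full reptend prime

# Digits of the repetend of 1/181, generated by long division:
digits, r = [], 1
for _ in range(180):
    r *= 10
    digits.append(r // 181)
    r %= 181
print(sum(digits))  # 810, the magic sum quoted above
```

The digit sum of 810 also follows from Midy's theorem: digits half a period apart sum to 9, giving 90 × 9 = 810.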
181 (number)
[ "Mathematics" ]
331
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
2,290,384
https://en.wikipedia.org/wiki/191%20%28number%29
191 (one hundred [and] ninety-one) is the natural number following 190 and preceding 192. In mathematics 191 is a prime number, part of a prime quadruplet of four primes: 191, 193, 197, and 199. Because doubling and adding one produces another prime number (383), 191 is a Sophie Germain prime. It is the smallest prime that is not a full reptend prime in any base from 2 to 10; in fact, the smallest base for which 191 is a full period prime is base 19. See also 191 (disambiguation) References Integers
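The base-19 claim can be tested by computing the multiplicative order of each candidate base modulo 191; a small Python sketch follows.

```python
def order(base: int, p: int) -> int:
    """Multiplicative order of `base` modulo the prime p."""
    k, r = 1, base % p
    while r != 1:
        r = (r * base) % p
        k += 1
    return k

p = 191
# A base b gives a full-period ("full reptend") expansion of 1/p
# exactly when the order of b mod p equals p - 1.
print([b for b in range(2, 20) if order(b, p) == p - 1])  # [19], per the article
```

No base from 2 through 18 has order 190 modulo 191, so base 19 is indeed the smallest full-period base.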
191 (number)
[ "Mathematics" ]
125
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
2,290,403
https://en.wikipedia.org/wiki/193%20%28number%29
193 (one hundred [and] ninety-three) is the natural number following 192 and preceding 194. In mathematics 193 is the number of compositions of 14 into distinct parts. In decimal, it is the seventeenth full repetend prime, or long prime. It is the only odd prime known for which 2 is not a primitive root of . It is the thirteenth Pierpont prime, which implies that a regular 193-gon can be constructed using a compass, straightedge, and angle trisector. It is part of the fourteenth pair of twin primes (191, 193), the seventh trio of prime triplets (193, 197, 199), and the fourth set of prime quadruplets (191, 193, 197, 199). Aside from the identity, the friendly giant (the largest sporadic group) has a total of 193 conjugacy classes. It also has at least 44 maximal subgroups aside from the double cover of the baby monster group (the forty-fourth prime number is 193). 193 is also the eighth numerator of the convergents to Euler's number, correct to three decimal places: 193/71 = 2.71830..., while e = 2.71828.... The denominator, 71, is the largest supersingular prime that uniquely divides the order of the friendly giant. See also 193 (disambiguation) References Integers
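The convergent 193/71 can be reproduced from the well-known continued fraction of e, [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]; the Python sketch below is illustrative, with helper names of my own choosing.

```python
from fractions import Fraction

def e_cf_terms(n: int):
    """First n terms of the continued fraction of e: [2; 1, 2, 1, 1, 4, ...]."""
    terms, k = [2], 2
    while len(terms) < n:
        terms += [1, k, 1]
        k += 2
    return terms[:n]

def convergents(terms):
    """Standard recurrence for continued-fraction convergents."""
    p0, q0, p1, q1 = 1, 0, terms[0], 1
    yield Fraction(p1, q1)
    for a in terms[1:]:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield Fraction(p1, q1)

convs = list(convergents(e_cf_terms(8)))
print(convs[7], float(convs[7]))  # 193/71 ~ 2.71830..., vs e ~ 2.71828...
```

Running the sketch gives the sequence 2, 3, 8/3, 11/4, 19/7, 87/32, 106/39, 193/71, confirming 193 as the eighth numerator.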
193 (number)
[ "Mathematics" ]
246
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
2,290,431
https://en.wikipedia.org/wiki/Security%20parameter
In cryptography, a security parameter is a way of measuring how "hard" it is for an adversary to break a cryptographic scheme. There are two main types of security parameter: computational and statistical, often denoted by κ and σ, respectively. Roughly speaking, the computational security parameter is a measure of the input size of the computational problem on which the cryptographic scheme is based, which determines its computational complexity, whereas the statistical security parameter is a measure of the probability with which an adversary can break the scheme (whatever that means for the protocol). Security parameters are usually expressed in unary representation, i.e. κ is expressed as a string of κ 1s, conventionally written as 1^κ, so that the time complexity of the cryptographic algorithm is polynomial in the size of the input. Computational security The security of cryptographic primitives relies on the hardness of some hard problems. One sets the computational security parameter κ such that 2^κ computation is considered intractable. Examples If the security of a scheme depends on the secrecy of a key for a pseudorandom function (PRF), then we may specify that the PRF key should be sampled from the space {0, 1}^κ so that a brute-force search requires 2^κ computational power. In the RSA cryptosystem, the security parameter κ denotes the length in bits of the modulus n; the positive integer n must therefore be a number in the set {0, ..., 2^κ − 1}. Statistical security Security in cryptography often relies on the fact that the statistical distance between a distribution predicated on a secret and a simulated distribution produced by an entity that does not know the secret is small. We formalise this using the statistical security parameter by saying that the distributions are statistically close if the statistical distance between them can be expressed as a negligible function in the security parameter. One sets the statistical security parameter σ such that 2^−σ is considered a "small enough" chance of the adversary winning. Consider the following two broad categories of attack of adversaries on a given cryptographic scheme: attacks in which the adversary tries to learn secret information, and attacks in which the adversary tries to convince an honest party to accept a false statement as true (or vice versa). In the first case, for example a public-key encryption scheme, an adversary may be able to obtain a large amount of information from which he can attempt to learn secret information, e.g. by examining the distribution of ciphertexts for a fixed plaintext encrypted under different randomness. In the second case, it may be that the adversary must guess a challenge or a secret and can do so with some fixed probability; in this case we can talk about distributions by considering the algorithm for sampling the challenge in the protocol. In both cases, we can talk about the chance of the adversary "winning" in a loose sense, and can parameterise the statistical security by requiring the distributions to be statistically close in the first case or by defining a challenge space dependent on the statistical security parameter in the second case. Examples In encryption schemes, one aspect of security is (at a high level) that anything that can be learnt about a plaintext given a ciphertext can also be learnt from a randomly-sampled string (of the same length as ciphertexts) that is independent of the plaintext. 
Formally, one would need to show that the distribution of ciphertexts is statistically close to a uniform distribution over the set of strings of the same fixed length. In zero knowledge protocols, we can further subdivide the statistical security parameters into zero knowledge and soundness statistical security parameters. The former parameterises what the transcript leaks about the secret knowledge, and the latter parameterises the chance with which a dishonest prover can convince an honest verifier that he knows a secret even if he does not. In universal composability, the security of a protocol relies on the statistical indistinguishability of the distributions of a real-world and an ideal-world execution. Interestingly, for a computationally unbounded environment it is not sufficient for the distributions to be statistically indistinguishable, since the environment can run the experiment enough times to observe which distribution is being produced (real or ideal); however, any standalone adversary against the protocol will only win with negligible probability in the statistical security parameter, since it only engages in the protocol once. See also Key size Negligible function Cryptography
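As a concrete illustration of how the two parameters surface in practice, consider the Python sketch below; the constant names and the choices κ = 128 and σ = 40 are illustrative conventions for the sketch, not requirements of any particular standard or library.

```python
import secrets

COMPUTATIONAL_KAPPA = 128  # brute-force cost target: 2**128 operations
STATISTICAL_SIGMA = 40     # adversary's winning-chance target: 2**-40

def sample_prf_key(kappa: int = COMPUTATIONAL_KAPPA) -> bytes:
    """Sample a key uniformly from {0,1}^kappa (kappa a multiple of 8 here)."""
    return secrets.token_bytes(kappa // 8)

def sample_challenge(sigma: int = STATISTICAL_SIGMA) -> int:
    """A challenge space of size 2**sigma: blind guessing wins w.p. 2**-sigma."""
    return secrets.randbelow(2 ** sigma)

key = sample_prf_key()
print(len(key) * 8, "key bits; per-attempt guessing chance:", 2.0 ** -STATISTICAL_SIGMA)
```

Note the asymmetry the article describes: κ bounds the adversary's *work* (brute-forcing the key space), while σ bounds its *luck* (guessing the challenge in a single protocol run).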
Security parameter
[ "Mathematics", "Engineering" ]
902
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
2,290,446
https://en.wikipedia.org/wiki/197%20%28number%29
197 (one hundred [and] ninety-seven) is the natural number following 196 and preceding 198. In mathematics 197 is a prime number, the third of a prime quadruplet: 191, 193, 197, 199 197 is the smallest prime number that is the sum of seven consecutive primes: 17 + 19 + 23 + 29 + 31 + 37 + 41, and is the sum of the first twelve prime numbers: 2 + 3 + 5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 197 is a centered heptagonal number, a centered figurate number that represents a heptagon with a dot in the center and all other dots surrounding the center dot in successive heptagonal layers 197 is a Schröder–Hipparchus number, counting for instance the number of ways of subdividing a heptagon by a non-crossing set of its diagonals. See also References Integers
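Both prime-sum identities are quick to verify with a sieve; the following Python sketch is illustrative.

```python
def primes_up_to(n: int):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

ps = primes_up_to(200)
print(sum(ps[:12]))     # 197: the sum of the first twelve primes
i = ps.index(17)
print(sum(ps[i:i + 7])) # 197: seven consecutive primes, 17 through 41
```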
197 (number)
[ "Mathematics" ]
195
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
2,290,459
https://en.wikipedia.org/wiki/199%20%28number%29
199 (one hundred [and] ninety-nine) is the natural number following 198 and preceding 200. In mathematics 199 is a centered triangular number. It is a prime number and the fourth part of a prime quadruplet: 191, 193, 197, 199. 199 is the smallest natural number that takes more than two iterations to compute its digital root as a repeated digit sum: 199 → 1 + 9 + 9 = 19 → 1 + 9 = 10 → 1 + 0 = 1. Thus, its additive persistence is three, and it is the smallest number of persistence three. See also References Integers
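A short check of the additive-persistence claim, as a Python sketch:

```python
def additive_persistence(n: int) -> int:
    """Number of digit-sum iterations needed to reach a single digit."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        steps += 1
    return steps

print(additive_persistence(199))  # 3: 199 -> 19 -> 10 -> 1
print(min(n for n in range(1000) if additive_persistence(n) == 3))  # 199
```

No smaller number qualifies because any number below 199 has digit sum at most 18, which collapses to a single digit in at most two further steps.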
199 (number)
[ "Mathematics" ]
102
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
2,290,664
https://en.wikipedia.org/wiki/Bachelor%20of%20Computing
A Bachelor of Computing (B.Comp.) is a bachelor's degree in computing. This degree is offered in a small number of universities, and varies slightly from a Bachelor of Science (B.Sc.) in Computer Science or Information Technology, a Bachelor of Science in Information Technology (B.Sc IT.) or a Bachelor of Computer Science (B.CS.). Academics Most universities confer a Bachelor of Computing degree to a student after four years of full-time study (generally 120 credit hours) have been completed. Potential specialisations within a B.Comp. vary greatly, and may include: Cognitive Science, Computer Science, Information Technology, Management Information Systems, Medical Informatics, Medical Imaging, Multimedia, or Software Engineering. Job prospects A Bachelor of Computing integrated with science can lead to various professional careers, ranging from data analysis and cyber security analysis to game designing and developing. Other fields in which this degree could be useful include business analysis, IT training, nanotechnology and network engineering. See also Bachelor of Computer Information Systems Bachelor of Computer Science Bachelor of Information Technology Bachelor of Science in Information Technology References Computing Computer science education Information technology qualifications
Bachelor of Computing
[ "Technology" ]
238
[ "Computer science education", "Computer science", "Computer occupations", "Information technology qualifications" ]
2,290,742
https://en.wikipedia.org/wiki/Luminophore
In chemistry, a luminophore (sometimes shortened to lumophore) is an atom or functional group in a chemical compound that is responsible for its luminescent properties. Luminophores can be either organic or inorganic. Luminophores can be further classified as fluorophores or phosphors, depending on the nature of the excited state responsible for the emission of photons. However, some luminophores cannot be classified as being exclusively fluorophores or phosphors. Examples include transition-metal complexes such as tris(bipyridine)ruthenium(II) chloride, whose luminescence comes from an excited (nominally triplet) metal-to-ligand charge-transfer (MLCT) state, which is not a true triplet state in the strict sense of the definition; and colloidal quantum dots, whose emissive state does not have either a purely singlet or triplet spin. Most luminophores consist of conjugated π systems or transition-metal complexes. There are also purely inorganic luminophores, such as zinc sulfide doped with rare-earth metal ions, rare-earth metal oxysulfides doped with other rare-earth metal ions, yttrium oxide doped with rare-earth metal ions, zinc orthosilicate doped with manganese ions, etc. Luminophores can be observed in action in fluorescent lights, television screens, computer monitor screens, organic light-emitting diodes and bioluminescence. The correct, textbook terminology is luminophore, not lumophore, although the latter term has been frequently used in the chemical literature. See also Chromophore Fluorophore Phosphor References Luminescence Chemical compounds
Luminophore
[ "Physics", "Chemistry" ]
379
[ "Luminescence", "Molecular physics", "Chemical compounds", "Molecules", "Matter" ]
5,712,026
https://en.wikipedia.org/wiki/Fructosephosphates
Fructosephosphates are sugar phosphates based upon fructose, and are common in the biochemistry of cells. Fructosephosphates play integral roles in many metabolic pathways, particularly glycolysis, gluconeogenesis and the pentose phosphate pathway. The major biologically active fructosephosphates are: Fructose 1-phosphate Fructose 2-phosphate Fructose 3-phosphate Fructose 6-phosphate Fructose 1,6-bisphosphate Fructose 2,6-bisphosphate See also Fructose bisphosphatase References External links Pubchem - fructose-6-phosphate Organophosphates
Fructosephosphates
[ "Chemistry", "Biology" ]
150
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
5,712,070
https://en.wikipedia.org/wiki/John%20J.%20Garstka
John Joseph Garstka (born February 20, 1961) is the acting CISO for acquisition and sustainment at the Department of Defense. Biography Garstka is a recognized international speaker and has delivered the Network Centric Warfare message to military and commercial audiences worldwide. In addition, he has lectured at Harvard University, Georgetown University, the University of California at Irvine, University of Maryland, the Army War College, the Air War College, the Naval War College, and the Naval Postgraduate School. Prior to joining the Office of Force Transformation, Garstka was the Chief Technology Officer in the Joint Staff Directorate for Command, Control, Computer and Communications (C4) Systems. In this capacity, he played a key role in the development and conceptualization of network-centric warfare and was the Joint Staff lead for the Department of Defense's Report to Congress on Network Centric Warfare. Prior to joining the Joint Staff, Garstka was a Senior Systems Engineer with Cambridge Research Associates, where he had responsibility for leading consulting engagements with commercial and government customers. Before joining Cambridge, Garstka served as an officer in the United States Air Force (USAF) for ten years, with assignments on the Air Staff and at the USAF Space and Missile Center. Early life and education Garstka was born in Tokyo and raised in Los Angeles. He graduated from Westchester High School in 1979. Garstka is a Distinguished Graduate of the United States Air Force Academy, where he earned a Bachelor of Science degree in Mathematics in 1983. He also holds a Master of Science Degree in Engineering-Economic Systems from Stanford University, where he studied as a Hertz Fellow. Publications Publications and reports he has authored or co-authored include: Network Centric Warfare: Developing and Leveraging Information Superiority, by Alberts, Garstka, and Stein, CCRP Press, 1999. This book has been reprinted by leading IT companies and translated into three languages. Online at the DoD Command and Control Research Program Understanding Information Age Warfare, by Alberts, Garstka, Hayes, and Signori, CCRP Press, 2001. Online at the DoD Command and Control Research Program Network Centric Warfare: Its Origin and Future, which appeared in Proceedings of the Naval Institute in January 1998. Network Centric Warfare: An Overview of Emerging Theory, which appeared in PHALANX in December 2000. DoD Report to Congress on Network Centric Warfare, July 2001. Online at the DoD Command and Control Research Program References External links Biography from the Office of Force Transformation The Command and Control Research Program (CCRP) 1961 births Living people People from Tokyo People from Los Angeles Westchester High School (Los Angeles) alumni United States Air Force Academy alumni Stanford University alumni Systems engineers Naval Postgraduate School faculty Harvard University people Military personnel from California
John J. Garstka
[ "Engineering" ]
556
[ "Systems engineers", "Systems engineering" ]
5,712,105
https://en.wikipedia.org/wiki/Sylvania%20Wilderness
Sylvania Wilderness is a protected area located a few miles west of Watersmeet Township, Michigan. Sylvania is located entirely within the bounds of the Ottawa National Forest, and is currently being managed as a wilderness area as part of the National Wilderness Preservation System by the U.S. Forest Service. Within its borders lie 34 lakes set against a backdrop of old-growth forests. It represents one of only a handful of such areas left in the Midwest. History Little is known of the area prior to the late 1800s, other than that the area was frequently used by clans of Ojibwe Native Americans, as evidenced by the few scattered artifacts that have been found there. In 1895, a Wisconsin lumberman by the name of A.D. Johnston purchased of land at the south end of Clark Lake with the intent to cut the large pines located there. Upon seeing the land for himself, he was so taken by the rugged beauty of it that he changed his mind and decided to preserve it. He soon invited friends, many of whom were equally impressed and so moved to purchase adjacent lands, and after some time the Sylvania Club was formed, with fishing, hunting, and hiking being the main focus. The owners built lodges and cabins on the larger lakes, and the area became an exclusive resort for a small number of affluent and influential guests. Ownership changed hands over the years, and finally the entire area was purchased by the United States Forest Service in 1967, which promptly removed all buildings and began managing it as a special recreation area. In 1987, it was designated as a federal wilderness when the Michigan Wilderness Act was passed by Congress and signed into law by Ronald Reagan. Geography Sylvania straddles the divide between the Lake Superior and the Mississippi River drainage systems, occupying some of the highest ground in the Midwest. As an example, many of the lakes in the park are more than above sea-level. Due to this apex position, these deep, clear lakes are primarily landlocked, fed by springs and local run-off. There are no surface streams entering the park, which is one of the reasons the lakes remain pristine and pure. For this same reason, the lakes are a bit "fragile" (low flush rates, low nutrient loads, etc.). Special fishing regulations on these lakes, including catch and release for all bass, have helped to preserve the lakes' fisheries. The Sylvania Wilderness also features of hiking trails and portages within its . Soils are mostly classic podzol sandy loam or loamy sand developed on glacial till or outwash. Among the most common series are Gogebic, Karlin and Keweenaw. There are 50 designated campsites in 29 locations throughout the wilderness, each with rudimentary amenities such as outdoor toilets, tent pads, pack racks (for keeping foodstuffs out of reach of wildlife), and fire-grills. Flora and fauna The old-growth northern hardwood forests in this wilderness are some of the most extensive in North America, nearly spanning the entire park at some . Sugar maple, eastern hemlock, and yellow birch are the most common trees, and are found along with white, red, and jack pine, white spruce, balsam fir, and paper birch. Wildlife abounds in the park, with white-tailed deer, black bear, grey wolves, porcupines, bobcat, beaver, otter, coyote, fox, bald eagle, loon, osprey, and many others. 
List of major lakes in Sylvania

Big Bateau Lake
Clark Lake
Clear Lake
Crooked Lake
Deer Island Lake
Devils Head Lake
Dream Lake
East Bear Lake
Fisher Lake
Florence Lake
Glimmerglass Lake
Helen Lake
High Lake
Indian Lake
Katherine Lake
Little Duck Lake
Long Lake
Loon Lake
Marsh Lake
Moss Lake
Mountain Lake
Snap Jack Lake
West Bear Lake
Whitefish Lake

References

External links

Sylvania Wilderness and Recreation Area, Ottawa National Forest U.S. Forest Service
Map of Sylvania Wilderness, Ottawa National Forest U.S. Forest Service

Protected areas of Gogebic County, Michigan
Wilderness areas of Michigan
Ottawa National Forest
Old-growth forests
Sylvania Wilderness
[ "Biology" ]
835
[ "Old-growth forests", "Ecosystems" ]
5,712,189
https://en.wikipedia.org/wiki/Fructose%201%2C6-bisphosphate
Fructose 1,6-bisphosphate, known in older publications as Harden-Young ester, is fructose sugar phosphorylated on carbons 1 and 6 (i.e., it is a fructosephosphate). The β-D-form of this compound is common in cells. Upon entering the cell, most glucose and fructose are converted to fructose 1,6-bisphosphate.

In glycolysis

Fructose 1,6-bisphosphate lies within the glycolysis metabolic pathway and is produced by phosphorylation of fructose 6-phosphate. It is, in turn, broken down into two compounds: glyceraldehyde 3-phosphate and dihydroxyacetone phosphate. It is an allosteric activator of pyruvate kinase, binding at a regulatory site distinct from the enzyme's catalytic site. The numbering of the carbon atoms indicates the fate of the carbons according to their position in fructose 6-phosphate.

Isomerism

Fructose 1,6-bisphosphate has only one biologically active isomer, the β-D-form. There are many other isomers, analogous to those of fructose.

Iron chelation

Fructose 1,6-bis(phosphate) has also been implicated in the ability to bind and sequester Fe(II), a soluble form of iron whose oxidation to the insoluble Fe(III) can generate reactive oxygen species via Fenton chemistry. The ability of fructose 1,6-bis(phosphate) to bind Fe(II) may prevent such electron transfers, and it may thus act as an antioxidant within the body. Certain neurodegenerative diseases, such as Alzheimer's and Parkinson's, have been linked to metal deposits with high iron content, although it is uncertain whether Fenton chemistry plays a substantial role in these diseases, or whether fructose 1,6-bis(phosphate) is capable of mitigating those effects.

See also

Fructose 2,6-bisphosphate

References

External links

Monosaccharide derivatives
Organophosphates
Glycolysis
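The glycolytic steps described in the article above can be written out explicitly. This is a standard textbook summary; the enzyme names (phosphofructokinase-1 and aldolase) are the conventional ones and are supplied here rather than taken from the article:

\[
\text{fructose 6-phosphate} + \text{ATP} \xrightarrow{\text{phosphofructokinase-1}} \text{fructose 1,6-bisphosphate} + \text{ADP}
\]
\[
\text{fructose 1,6-bisphosphate} \rightleftharpoons \text{dihydroxyacetone phosphate} + \text{glyceraldehyde 3-phosphate} \qquad (\text{aldolase})
\]

The Fenton chemistry mentioned in the iron chelation section is likewise conventionally written as

\[
\text{Fe}^{2+} + \text{H}_2\text{O}_2 \longrightarrow \text{Fe}^{3+} + \cdot\text{OH} + \text{OH}^-
\]

which is the electron transfer that sequestration of Fe(II) would be expected to suppress.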
Fructose 1,6-bisphosphate
[ "Chemistry" ]
465
[ "Carbohydrate metabolism", "Glycolysis" ]
5,712,191
https://en.wikipedia.org/wiki/Fructose%206-phosphate
Fructose 6-phosphate (sometimes called the Neuberg ester) is a derivative of fructose, which has been phosphorylated at the 6-hydroxy group. It is one of several possible fructosephosphates. The β-D-form of this compound is very common in cells: the great majority of glucose is converted to fructose 6-phosphate upon entering a cell. Fructose, by contrast, is predominantly converted to fructose 1-phosphate by fructokinase following cellular import.

History

The name Neuberg ester comes from the German biochemist Carl Neuberg. In 1918, he found that the compound (later identified as fructose 6-phosphate) was produced by mild acid hydrolysis of fructose 1,6-bisphosphate.

In glycolysis

Fructose 6-phosphate lies within the glycolysis metabolic pathway and is produced by isomerisation of glucose 6-phosphate. It is in turn further phosphorylated to fructose 1,6-bisphosphate.

See also

Mannose phosphate isomerase

References

Monosaccharide derivatives
Organophosphates
Pentose phosphate pathway
Phosphate esters
Glycolysis
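The two reactions that flank fructose 6-phosphate in glycolysis, as described in the article above, can be summarized as follows (a standard textbook sketch; the enzyme names are the conventional ones and are not taken from the article):

\[
\text{glucose 6-phosphate} \rightleftharpoons \text{fructose 6-phosphate} \qquad (\text{glucose-6-phosphate isomerase})
\]
\[
\text{fructose 6-phosphate} + \text{ATP} \xrightarrow{\text{phosphofructokinase-1}} \text{fructose 1,6-bisphosphate} + \text{ADP}
\]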
Fructose 6-phosphate
[ "Chemistry" ]
259
[ "Carbohydrate metabolism", "Glycolysis", "Pentose phosphate pathway" ]
5,712,360
https://en.wikipedia.org/wiki/2-Phosphoglyceric%20acid
2-Phosphoglyceric acid (2PG), or 2-phosphoglycerate, is a glyceric acid that serves as the substrate in the ninth step of glycolysis: its conversion to phosphoenolpyruvate (PEP), the penultimate step in the conversion of glucose to pyruvate, is catalyzed by enolase.

See also

3-Phosphoglyceric acid

References

Organophosphates
Glycolysis
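The enolase step described above can be written out explicitly. That the conversion is a dehydration (loss of water) is the standard textbook formulation of this reaction rather than a detail stated in the article:

\[
\text{2-phosphoglycerate} \rightleftharpoons \text{phosphoenolpyruvate} + \text{H}_2\text{O} \qquad (\text{enolase})
\]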
2-Phosphoglyceric acid
[ "Chemistry", "Biology" ]
114
[ "Carbohydrate metabolism", "Biotechnology stubs", "Glycolysis", "Biochemistry stubs", "Organic compounds", "Biochemistry", "Organic compound stubs", "Organic chemistry stubs" ]
5,712,395
https://en.wikipedia.org/wiki/Pipe%20Nebula
The Pipe Nebula (also known as Barnard 59, 65–67, and 78) is a dark nebula in the constellation Ophiuchus and a part of the Dark Horse Nebula. It is a large but readily apparent smoking-pipe-shaped dust lane that obscures the Milky Way star clouds behind it. It is clearly visible to the naked eye from the Southern United States under clear dark skies, but it is best viewed with 7× binoculars.

The nebula has two main parts:
The Pipe Stem, with an opacity of 6, composed of Barnard 59, 65, 66, and 67 (also known as LDN 1773); 300′ × 60′; RA 17h 21m, Dec −27° 23′.
The Bowl of the Pipe, with an opacity of 5, composed of Barnard 78 (also known as LDN 42); 200′ × 140′; RA 17h 33m, Dec −26° 30′.

References

Dark nebulae
Barnard objects
Ophiuchus
Pipe Nebula
[ "Astronomy" ]
203
[ "Ophiuchus", "Constellations" ]
5,712,506
https://en.wikipedia.org/wiki/Phosphoglycerate%20kinase
Phosphoglycerate kinase (PGK 1) is an enzyme that catalyzes the reversible transfer of a phosphate group from 1,3-bisphosphoglycerate (1,3-BPG) to ADP, producing 3-phosphoglycerate (3-PG) and ATP:

1,3-bisphosphoglycerate + ADP ⇌ glycerate 3-phosphate + ATP

Like all kinases, it is a transferase. PGK is a major enzyme used in glycolysis, acting in the first ATP-generating step of the glycolytic pathway. In gluconeogenesis, the reaction catalyzed by PGK proceeds in the opposite direction, generating ADP and 1,3-BPG.

In humans, two isozymes of PGK have so far been identified, PGK1 and PGK2. The isozymes share 87–88% amino acid sequence identity, and though they are structurally and functionally similar, they have different localizations: PGK2, encoded by an autosomal gene, is unique to meiotic and postmeiotic spermatogenic cells, while PGK1, encoded on the X chromosome, is ubiquitously expressed in all cells.

Biological function

PGK is present in all living organisms as one of the two ATP-generating enzymes in glycolysis. In the gluconeogenic pathway, PGK catalyzes the reverse reaction. Under biochemical standard conditions, the glycolytic direction is favored.

In the Calvin cycle in photosynthetic organisms, PGK catalyzes the phosphorylation of 3-PG, producing 1,3-BPG and ADP, as part of the reactions that regenerate ribulose-1,5-bisphosphate.

PGK has been reported to exhibit thiol reductase activity on plasmin, leading to angiostatin formation, which inhibits angiogenesis and tumor growth. The enzyme has also been shown to participate in DNA replication and repair in mammalian cell nuclei. The human isozyme PGK2, which is only expressed during spermatogenesis, has been shown to be essential for sperm function in mice.

Structure

Overview

PGK is found in all living organisms, and its sequence has been highly conserved throughout evolution. The enzyme exists as a 415-residue monomer containing two nearly equal-sized domains that correspond to the N- and C-termini of the protein. 3-Phosphoglycerate (3-PG) binds to the N-terminal domain, while the nucleotide substrates, MgATP or MgADP, bind to the C-terminal domain of the enzyme. This extended two-domain structure is associated with large-scale 'hinge-bending' conformational changes, similar to those found in hexokinase. The two domains of the protein are separated by a cleft and linked by two alpha-helices. At the core of each domain is a six-stranded parallel beta-sheet surrounded by alpha-helices. The two lobes are capable of folding independently, consistent with the presence of intermediates on the folding pathway with a single domain folded. Though the binding of either substrate triggers a conformational change, only the binding of both substrates causes domain closure, leading to transfer of the phosphate group.

The enzyme has a tendency to exist in the open conformation with short periods of closure and catalysis, which allows rapid diffusion of substrate and products through the binding sites; the open conformation of PGK is more conformationally stable because domain closure exposes a hydrophobic region of the protein.

Role of magnesium

Magnesium ions are normally complexed to the phosphate groups of the nucleotide substrates of PGK. It is known that, in the absence of magnesium, no enzyme activity occurs.
The bivalent metal assists the enzyme ligands in shielding the negative charges of the bound phosphate group, allowing the nucleophilic attack to occur; this charge stabilization is a typical characteristic of phosphotransfer reactions. It is theorized that the ion may also encourage domain closure when PGK has bound both substrates.

Mechanism

Without either substrate bound, PGK exists in an "open" conformation. After both the triose and nucleotide substrates are bound to the N- and C-terminal domains, respectively, an extensive hinge-bending motion occurs, bringing the domains and their bound substrates into close proximity and leading to a "closed" conformation. Then, in the case of the forward glycolytic reaction, the beta-phosphate of ADP initiates a nucleophilic attack on the 1-phosphate of 1,3-BPG. Lys219 on the enzyme guides the phosphate group to the substrate. PGK proceeds through a charge-stabilized transition state that is favored over the arrangement of the bound substrates in the closed enzyme because, in the transition state, all three phosphate oxygens are stabilized by ligands, as opposed to only two stabilized oxygens in the initial bound state.

In the glycolytic pathway, 1,3-BPG is the phosphate donor and has a high phosphoryl-transfer potential. The PGK-catalyzed transfer of the phosphate group from 1,3-BPG to ADP to yield ATP can be coupled to the carbon-oxidation reaction of the previous glycolytic step (converting glyceraldehyde 3-phosphate to 3-phosphoglycerate).

Regulation

The enzyme is activated by low concentrations of various multivalent anions, such as pyrophosphate, sulfate, phosphate, and citrate. High concentrations of MgATP and 3-PG activate PGK, while Mg2+ at high concentrations non-competitively inhibits the enzyme. PGK exhibits a wide specificity toward nucleotide substrates. Its activity is inhibited by salicylates, which appear to mimic the enzyme's nucleotide substrate.

Macromolecular crowding has been shown to increase PGK activity both in computer simulations and in in vitro environments simulating a cell interior; as a result of crowding, the enzyme becomes more enzymatically active and more compact.

Disease relevance

Phosphoglycerate kinase (PGK) deficiency is an X-linked recessive trait associated with hemolytic anemia, mental disorders and myopathy in humans, depending on the form: a hemolytic form and a myopathic form exist. Since the trait is X-linked, it is usually fully expressed in males, who have one X chromosome; affected females are typically asymptomatic. The condition results from mutations in Pgk1, the gene encoding PGK1, and twenty mutations have been identified. On a molecular level, mutation in Pgk1 impairs the thermal stability and inhibits the catalytic activity of the enzyme. PGK is the only enzyme in the immediate glycolytic pathway encoded by an X-linked gene. In the case of hemolytic anemia, PGK deficiency occurs in the erythrocytes. Currently, no definitive treatment exists for PGK deficiency.

PGK1 overexpression has been associated with gastric cancer and has been found to increase the invasiveness of gastric cancer cells in vitro. The enzyme is secreted by tumor cells and participates in the angiogenic process, leading to the release of angiostatin and the inhibition of tumor blood vessel growth.

Due to its wide specificity toward nucleotide substrates, PGK is known to participate in the phosphorylation and activation of HIV antiretroviral drugs, which are nucleotide-based.
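The energetic coupling described in the mechanism section above is often summarized with standard transformed Gibbs energies. The numerical values below are commonly quoted textbook figures, given here as approximations rather than values taken from this article:

\[
\text{glyceraldehyde 3-phosphate} + \text{P}_i + \text{NAD}^+ \rightleftharpoons \text{1,3-BPG} + \text{NADH} + \text{H}^+ \qquad \Delta G^{\circ\prime} \approx +6.3\ \text{kJ/mol}
\]
\[
\text{1,3-BPG} + \text{ADP} \rightleftharpoons \text{3-phosphoglycerate} + \text{ATP} \qquad \Delta G^{\circ\prime} \approx -18.8\ \text{kJ/mol}
\]

On these figures, the strongly favorable PGK step pulls the slightly unfavorable oxidation step forward, for a combined ΔG°′ of roughly −12.5 kJ/mol.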
Human isozymes

References

External links

Illustration at arizona.edu

EC 2.7.2
Glycolysis enzymes
Glycolysis
Phosphoglycerate kinase
[ "Chemistry" ]
1,636
[ "Carbohydrate metabolism", "Glycolysis" ]
5,712,665
https://en.wikipedia.org/wiki/Bisphosphoglycerate%20mutase
Bisphosphoglycerate mutase (BPGM) is an enzyme expressed in erythrocytes and placental cells. It is responsible for the catalytic synthesis of 2,3-bisphosphoglycerate (2,3-BPG) from 1,3-bisphosphoglycerate. BPGM also has a mutase and a phosphatase function, but these are much less active, in contrast to its glycolytic cousin, phosphoglycerate mutase (PGM), which favors these two functions but can also catalyze the synthesis of 2,3-BPG to a lesser extent.

Tissue distribution

Because the main function of bisphosphoglycerate mutase is the synthesis of 2,3-BPG, this enzyme is found only in erythrocytes and placental cells. In glycolysis, converting 1,3-BPG to 2,3-BPG would be very inefficient, as it would just add another unnecessary step. Since the main role of 2,3-BPG is to shift the equilibrium of hemoglobin toward the deoxy state, its production is really only useful in the cells that contain hemoglobin: erythrocytes and placental cells.

Function

1,3-BPG is formed as an intermediate in glycolysis. BPGM then converts it to 2,3-BPG, which serves an important function in oxygen transport. 2,3-BPG binds with high affinity to hemoglobin, causing a conformational change that results in the release of oxygen; local tissues can then pick up the free oxygen. This is also important in the placenta, where fetal and maternal blood come into close proximity. With the placenta producing 2,3-BPG, a large amount of oxygen is released from nearby maternal hemoglobin, which can then dissociate and bind to fetal hemoglobin, which has a much lower affinity for 2,3-BPG.

Structure

Overall

BPGM is a dimer composed of two identical protein subunits, each with its own active site. Each subunit consists of six β-strands, β A–F, and ten α-helices, α 1–10. Dimerization occurs along the faces of β C and α 3 of both monomers. BPGM is roughly 50% identical to its PGM counterpart, with the main active-site residues conserved in nearly all PGMs and BPGMs.

Important residues

His11: the nucleophile of the 1,3-BPG to 2,3-BPG reaction. It rotates back and forth, with the help of His188, to reach an in-line position from which to attack the 1' phosphate group.
His188: involved in the overall stability of the protein, as well as in hydrogen bonding to the substrate and to His11, which it pulls into its catalytic position.
Arg90: although not involved directly in binding, this positively charged residue is essential to the overall stability of the protein. It can be substituted with lysine with little effect on catalysis.
Cys23: has little effect on overall structure, but a large effect on the reactivity of the enzyme.

Mechanism of catalysis

1,3-BPG binds to the active site, which causes a conformational change: the cleft around the active site closes in on the substrate, securely locking it in place. 1,3-BPG forms a large number of hydrogen bonds to the surrounding residues, many of which are positively charged, severely restricting its mobility; this rigidity suggests a strongly enthalpically driven association. The conformational changes cause His11 to rotate, partially aided by hydrogen bonding to His188. His11 is brought in line with the phosphate group and then acts as the nucleophile in an SN2 mechanism, attacking the 1' phosphate group. The 2' hydroxy group then attacks the phosphate and removes it from His11, thereby creating 2,3-BPG.

References

Further reading

External links

Carbohydrate metabolism
EC 5.4.2
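The enzyme's principal and secondary activities described above can be summarized as follows. The identity of the phosphatase product (3-phosphoglycerate) is the standard textbook formulation and is supplied here rather than stated in the article:

\[
\text{1,3-bisphosphoglycerate} \longrightarrow \text{2,3-bisphosphoglycerate} \qquad (\text{synthase activity, dominant})
\]
\[
\text{2,3-bisphosphoglycerate} + \text{H}_2\text{O} \longrightarrow \text{3-phosphoglycerate} + \text{P}_i \qquad (\text{phosphatase activity, minor})
\]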
Bisphosphoglycerate mutase
[ "Chemistry" ]
874
[ "Carbohydrate metabolism", "Metabolism", "Carbohydrate chemistry" ]
5,712,711
https://en.wikipedia.org/wiki/Comet%20assay
The single cell gel electrophoresis assay (SCGE, also known as the comet assay) is an uncomplicated and sensitive technique for the detection of DNA damage at the level of the individual eukaryotic cell. It was first developed by Östling & Johansson in 1984 and later modified by Singh et al. in 1988. It has since grown in popularity as a standard technique for the evaluation of DNA damage/repair, biomonitoring and genotoxicity testing. It involves the encapsulation of cells in a low-melting-point agarose suspension, lysis of the cells in neutral or alkaline (pH > 13) conditions, and electrophoresis of the suspended lysed cells. The term "comet" refers to the pattern of DNA migration through the electrophoresis gel, which often resembles a comet.

The comet assay (single-cell gel electrophoresis) is a simple method for measuring deoxyribonucleic acid (DNA) strand breaks in eukaryotic cells. Cells embedded in agarose on a microscope slide are lysed with detergent and high salt to form nucleoids containing supercoiled loops of DNA linked to the nuclear matrix. Electrophoresis at high pH results in structures resembling comets, observed by fluorescence microscopy; the intensity of the comet tail relative to the head reflects the number of DNA breaks. The likely basis for this is that loops containing a break lose their supercoiling and become free to extend toward the anode. This is followed by visual analysis with staining of DNA and calculation of fluorescence to determine the extent of DNA damage, which can be scored manually or automatically by imaging software.

Procedure

Encapsulation

A sample of cells, derived either from an in vitro cell culture or from an in vivo test subject, is dispersed into individual cells and suspended in molten low-melting-point agarose at 37 °C. This mono-suspension is cast on a microscope slide: a glass cover slip is held at an angle, the mono-suspension is applied to the point of contact between the coverslip and the slide, and as the coverslip is lowered onto the slide the molten agarose spreads to form a thin layer. The agarose is gelled at 4 °C and the coverslip removed. The agarose forms a matrix of carbohydrate fibres that encapsulate the cells, anchoring them in place. The agarose is considered osmotically neutral, so solutions can penetrate the gel and affect the cells without the cells shifting position.

In an in vitro study, the cells would be exposed to a test agent – typically UV light, ionising radiation, or a genotoxic chemical – to induce DNA damage in the encapsulated cells. For calibration, hydrogen peroxide is usually used to provide a standardized level of DNA damage.

Lysis

The slides are then immersed in a solution that causes the cells to lyse. The lysis solution often used in the comet assay consists of a highly concentrated aqueous salt (often, common table salt can be used) and a detergent (such as Triton X-100 or sarcosinate). The pH of the lysis solution can be adjusted (usually between neutral and alkaline pH) depending upon the type of damage the researcher is investigating. The aqueous salt disrupts proteins and their bonding patterns within the cell, as well as the RNA content of the cell, while the detergent dissolves the cellular membranes. Through the action of the lysis solution the cells are destroyed: all proteins, RNA, membranes, and cytoplasmic and nucleoplasmic constituents are disrupted and diffuse into the agarose matrix.
Only the DNA of the cell remains, and it unravels to fill the cavity in the agarose that the whole cell formerly filled. This structure is called a nucleoid (a general term for a structure in which DNA is concentrated).

Electrophoresis

After lysis of the cells (typically 1 to 2 hours at 4 °C), the slides are washed in distilled water to remove all salts and immersed in a second solution, an electrophoresis solution. Again, this solution can have its pH adjusted depending upon the type of damage being investigated. The slides are left for ~20 minutes in the electrophoresis solution before an electric field is applied. In alkaline conditions the DNA double helix is denatured and the nucleoid becomes single-stranded. An electric field is applied (typically 1 V/cm) for ~20 minutes. The slides are then neutralised to pH 7, stained with a DNA-specific fluorescent stain and analysed using a microscope with an attached CCD (charge-coupled device – essentially a digital camera) connected to a computer with image analysis software.

Background

The concept underlying the SCGE assay is that undamaged DNA retains a highly organized association with matrix proteins in the nucleus. When damaged, this organization is disrupted: the individual strands of DNA lose their compact structure and relax, expanding out of the cavity into the agarose. When the electric field is applied, the DNA, which has an overall negative charge, is drawn toward the positively charged anode. Undamaged DNA strands are too large to leave the cavity, whereas the smaller the fragments, the farther they are free to move in a given period of time. Therefore, the amount of DNA that leaves the cavity is a measure of the amount of DNA damage in the cell.

The image analysis measures the overall fluorescence intensity of the whole nucleoid and the fluorescence of the migrated DNA and compares the two signals: the stronger the signal from the migrated DNA, the more damage is present. The overall structure resembles a comet (hence "comet assay"), with a circular head corresponding to the undamaged DNA that remains in the cavity and a tail of damaged DNA; the brighter and longer the tail, the higher the level of damage.

The comet assay is a versatile technique for detecting damage, and with adjustments to the protocol it can be used to quantify the presence of a wide variety of DNA-altering lesions. The damage usually detected consists of single-strand breaks and double-strand breaks. It is sometimes stated that alkaline conditions and complete denaturation of the DNA are necessary to detect single-strand breaks; however, this is not true – both single- and double-strand breaks are also detected in neutral conditions. In alkaline conditions, however, additional DNA structures are detected as DNA damage: AP sites (abasic sites missing either a pyrimidine or purine nucleotide) and sites where excision repair is taking place.

The comet assay is an extremely sensitive DNA damage assay. This sensitivity needs to be handled carefully, as the assay is also vulnerable to physical changes that can affect the reproducibility of results. Essentially, anything that can cause DNA damage or denaturation, except the factor(s) being researched, is to be avoided. The most common form of the assay is the alkaline version, although there is as yet no definitive alkaline assay protocol. Due to its simple and inexpensive setup, the assay can be used in conditions where more complex assays are not available.
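The head-versus-tail intensity comparison described above is commonly reported as "% tail DNA". Below is a minimal sketch of that calculation in Python, assuming the head and tail regions have already been segmented from the comet image and background-subtracted; the function and variable names are illustrative and do not come from any particular comet-scoring package:

def percent_tail_dna(head_intensities, tail_intensities):
    """Tail fluorescence as a percentage of total comet fluorescence.

    Both arguments are iterables of background-subtracted pixel
    intensities, one for the comet head region and one for the tail.
    A higher percentage indicates more migrated (i.e., damaged) DNA.
    """
    head = sum(head_intensities)
    tail = sum(tail_intensities)
    total = head + tail
    if total <= 0:
        raise ValueError("no fluorescence signal in the comet region")
    return 100.0 * tail / total

# Example: a comet whose tail carries about a quarter of the total signal.
head_px = [120, 200, 180]   # hypothetical head pixel intensities
tail_px = [60, 70, 37]      # hypothetical tail pixel intensities
print(percent_tail_dna(head_px, tail_px))  # ~25.0 (% tail DNA)

In practice, scoring software typically reports this metric per comet across tens or hundreds of cells per slide, and the distribution (rather than a single value) is what gets compared between treated and control samples.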
Applications

Applications of the comet assay include genotoxicity testing, human biomonitoring and molecular epidemiology, ecogenotoxicology, and fundamental research in DNA damage and repair. For example, Swain and Rao, using the comet assay, reported marked increases in several types of DNA damage in rat brain neurons and astrocytes during aging, including single-strand breaks, double-strand breaks and modified bases (8-OHdG and uracil).

Sperm DNA fragmentation

A comet assay can determine the degree of DNA fragmentation in sperm cells, and the degree of DNA fragmentation has been associated with outcomes of in vitro fertilization. The comet assay has been modified for use with sperm cells as a tool for male infertility diagnosis. Because sperm DNA is tightly bound by protamine proteins, additional steps in the de-condensation protocol are required to break down these proteins before the comet assay can be applied to sperm.

References

Further reading

Dhawan & Anderson (2009): The Comet Assay in Toxicology.

Biochemistry detection reactions
Chemical tests
Electrophoresis
Molecular biology
Comet assay
[ "Chemistry", "Biology" ]
1,711
[ "Instrumental analysis", "Biochemistry detection reactions", "Chemical tests", "Biochemical separation processes", "Biochemical reactions", "Microbiology techniques", "Molecular biology techniques", "Molecular biology", "Biochemistry", "Electrophoresis" ]